Grokking the coding interview equivalent leetcode problems
GROKKING NOTES
I liked the way Grokking the Coding Interview organizes problems into learnable patterns. However, the course is expensive, and most of its problems are copied directly from LeetCode. Since the explanations on LeetCode are usually just as good, the course boils down to a glorified curated list of LeetCode problems.
So below I've made a list of LeetCode problems that match the Grokking problems as closely as possible.
Dummy's Guide to Modern LLM Sampling -- @AlpinDale
Intro Knowledge
Large Language Models (LLMs) work by taking a piece of text (e.g. a user prompt) and predicting the next word, or, in more technical terms, the next token. LLMs have a vocabulary, a dictionary of valid tokens, which they reference during both training and inference (the process of generating text). More on that below; to follow it, you'll need to understand why we use tokens (sub-words) rather than whole words or individual letters. Before that, though, here is a short glossary of technical terms that aren't explained in depth in the sections below:
Short Glossary
Logits: The raw, unnormalized scores output by the model for each token in its vocabulary. Higher logits indicate tokens the model considers more likely to come next.
Softmax: A mathematical function that converts logits into a proper probability distribution: values between 0 and 1 that sum to 1.
Entropy: A measure of uncertainty or randomness in a probability distribution. Higher entropy means the model is less certain about which token comes next; lower entropy means the probability mass is concentrated on a few likely tokens.
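To make these three terms concrete, here is a minimal Python sketch. The four-token vocabulary and the logit values are invented purely for illustration; a real model's vocabulary has tens of thousands of tokens.

```python
import math

# Hypothetical raw scores (logits) the model assigns to each token
# in a toy 4-token vocabulary. Higher logit = more likely next token.
logits = {"the": 2.0, "a": 1.0, "cat": 0.5, "zebra": -1.0}

# Softmax: exponentiate each logit and normalize, producing a proper
# probability distribution (values between 0 and 1 that sum to 1).
max_logit = max(logits.values())  # subtract the max for numerical stability
exps = {tok: math.exp(x - max_logit) for tok, x in logits.items()}
total = sum(exps.values())
probs = {tok: e / total for tok, e in exps.items()}

# Entropy (in bits): how uncertain the distribution is. A near-uniform
# distribution gives high entropy; one dominant token gives low entropy.
entropy = -sum(p * math.log2(p) for p in probs.values() if p > 0)

print(probs)    # e.g. {'the': 0.53, 'a': 0.20, 'cat': 0.12, 'zebra': 0.03}
print(entropy)  # a fairly low value: the model strongly favors 'the'
```

Every sampling technique discussed later operates somewhere in this pipeline, either on the raw logits before softmax or on the probabilities after it.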