We’re already into the second month of 2025, and each passing day brings us closer to Artificial General Intelligence (AGI): AI that can tackle complex problems across multiple sectors at a human level.
Take DeepSeek, for instance. Until recently, could you have imagined a company building a cutting-edge generative AI model for just a few million dollars and still going toe-to-toe with OpenAI’s flagship models? Probably not. But it’s happening.
Now, OpenAI has countered with the release of o3-mini, further accelerating AI’s evolution. Its reasoning capabilities are pushing the boundaries of AI development, making the technology more accessible and powerful. This AI battle will go on! Also, as Sam Altman recently noted in his “Three Observations” blog post, the cost of using a given level of AI is dropping tenfold every 12 months, and with lower prices comes exponentially greater adoption.
At this rate, within a decade, every person on Earth could accomplish more than today’s most impactful individuals, entirely thanks to advancements in AI. This isn’t just progress; it’s a revolution. In this battle of Large Language Models (LLMs), the key to dominance lies in one of the most fundamental aspects: pretraining.
In this article, we’ll discuss LLM pretraining as presented in Andrej Karpathy’s “Deep Dive into LLMs like ChatGPT”: what it is, how it works, and why it’s the foundation of modern AI capabilities.
What Is LLM Pretraining?
Before discussing the pretraining stage of an LLM, the bigger picture here is how ChatGPT, Claude, or any other LLM generates its output. For instance, suppose we ask ChatGPT: “Who is your parent company?”

The question here is: how is this output generated by ChatGPT? Or, put differently, what is happening behind the scenes?
Let’s begin with: what is the LLM pretraining stage?
The LLM pretraining stage is the first phase of teaching a large language model (LLM) how to understand and generate text. Think of it as reading an enormous number of books, articles, and websites to learn grammar, facts, and common patterns in language. During this stage, the model processes billions of words (data) and repeatedly predicts the next word (token) in a sentence, refining its ability to generate coherent and relevant responses. At this point, however, it doesn’t truly “understand” meaning the way a human does; it simply recognizes patterns and probabilities.
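To make next-token prediction concrete, here is a minimal, illustrative sketch of a toy bigram “model” that counts which token follows which in a tiny corpus. This is an assumption for intuition only, not how an LLM is actually implemented:

```python
from collections import Counter, defaultdict

# Toy corpus: "pretraining" at an absurdly small scale.
corpus = "the cat sat on the mat . the cat ran on the road .".split()

# For every token, count which token follows it (a bigram model).
successors = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    successors[current][nxt] += 1

def predict_next(token: str) -> str:
    """Return the most frequent successor seen during 'training'."""
    counts = successors[token]
    return counts.most_common(1)[0][0] if counts else "<unknown>"

print(predict_next("the"))  # 'cat', the pattern seen most often after 'the'
```

A real LLM replaces these raw counts with a neural network trained over trillions of tokens, but the objective is the same: predict the next token.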
What Can a Pre-trained LLM Do?
Pre-trained large language models (LLMs) can perform a wide range of tasks, including text generation, summarization, translation, and sentiment analysis. They assist with code generation, question answering, and content recommendation. LLMs can extract insights from unstructured data, power chatbots, and automate customer support. They enhance creative writing, provide tutoring, and even generate realistic conversations. Additionally, they aid data augmentation, legal analysis, and medical research by analyzing vast amounts of information efficiently. Their ability to understand and generate human-like text makes them valuable across industries, from education and finance to healthcare and entertainment. However, they require fine-tuning for domain-specific accuracy.
Here, we’ll use ChatGPT to understand the concepts.
LLM Pretraining Step 1: Process the Internet Data
There are several stages in training an LLM, but here we’ll first talk about the pretraining stage.
The performance of a large language model (LLM) is deeply influenced by the quality and scale of its pretraining dataset. If your dataset is clean, structured, and easy to process, the model will perform accordingly.
However, for many state-of-the-art open LLMs like Llama 3 and Mixtral, the details of the pretraining data remain a mystery: these datasets aren’t publicly available, and little is known about how they were curated.
To address this gap, Hugging Face collected data from the web and curated FineWeb, a large-scale dataset (a portion of the data available on the internet) specifically designed for LLM pretraining. This high-quality, diverse dataset contains 15 trillion tokens and occupies 44TB of disk space. FineWeb is built from 96 CommonCrawl snapshots and has been shown to produce better-performing models than other publicly available pretraining datasets.
What sets FineWeb apart is its transparency:
The team meticulously documented every design choice, running detailed ablations on deduplication and filtering strategies to refine the dataset’s quality.
Where Does the Raw Data Come From?
There are two main sources:
- Crawling the web yourself – used by companies like OpenAI and Anthropic.
- Using public repositories – CommonCrawl, a non-profit that has been archiving web data since 2007.
For FineWeb, the team followed the approach of many LLM groups and used CommonCrawl (CC) as the starting point. CC releases a new dataset every 1-2 months, typically containing 200-400 TiB of text.
For example, the April 2024 crawl includes 2.7 billion web pages with 386 TiB of uncompressed HTML. Since 2013, CC has released 96 crawls, plus 3 older-format crawls from 2008-2012.
1. URL Filtering
- The pipeline begins with URL filtering, where web pages from certain domains or with certain characteristics are blocked based on a pre-defined list.
- This helps remove adult content, spam, or other undesirable data at the initial stage.
2. Text Extraction
- Once URLs are filtered, the text is extracted from the web pages.
- This step removes HTML, JavaScript, and other non-text elements while preserving the meaningful content.

3. Language Filtering
- The extracted text is then filtered by language.
- A fastText classifier is used to detect whether the content is in English.
- Only texts with a confidence score of ≥ 0.65 are kept (see the sketch below).
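Here is a minimal sketch of this filter using fastText’s public language-identification model (the threshold follows the FineWeb description; the surrounding code is an assumption, not FineWeb’s actual implementation):

```python
import fasttext

# Pre-trained language-identification model covering 176 languages,
# downloadable from fasttext.cc.
model = fasttext.load_model("lid.176.bin")

def keep_if_english(text: str, threshold: float = 0.65) -> bool:
    # predict() returns labels like '__label__en' plus confidence scores;
    # fastText expects a single line, so strip newlines first.
    labels, scores = model.predict(text.replace("\n", " "))
    return labels[0] == "__label__en" and scores[0] >= threshold

print(keep_if_english("The quick brown fox jumps over the lazy dog."))  # True
print(keep_if_english("Le renard brun saute par-dessus le chien."))     # False
```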

4. Gopher Filtering
- This is an additional quality filter designed to remove low-quality text.
- It might include checks for repetitive content, nonsensical text, or harmful content; a simplified sketch follows this list.
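Gopher-style quality rules use simple heuristics over word counts and word shapes. A simplified sketch (the thresholds below are illustrative assumptions, not FineWeb’s exact values):

```python
def passes_gopher_rules(text: str) -> bool:
    """Simplified Gopher-style quality heuristics (illustrative thresholds)."""
    words = text.split()
    if not (50 <= len(words) <= 100_000):      # too short or absurdly long
        return False
    mean_len = sum(len(w) for w in words) / len(words)
    if not (3 <= mean_len <= 10):              # gibberish or unnatural tokens
        return False
    # Require that most "words" contain at least one alphabetic character.
    alpha_frac = sum(any(c.isalpha() for c in w) for w in words) / len(words)
    return alpha_frac >= 0.8
```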
5. MinHash Deduplication
- This step detects and removes duplicate content using the MinHash technique.
- MinHash efficiently compares large amounts of text to find near-duplicate documents and eliminate redundancy, as in the sketch below.
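For intuition, here is a minimal near-duplicate detection sketch using the datasketch library; this is a generic example, not FineWeb’s actual implementation:

```python
from datasketch import MinHash, MinHashLSH

def minhash_of(text: str, num_perm: int = 128) -> MinHash:
    m = MinHash(num_perm=num_perm)
    for word in set(text.lower().split()):   # hash each unique word (shingle)
        m.update(word.encode("utf-8"))
    return m

docs = {
    "a": "the cat sat on the mat",
    "b": "the cat sat on the mat today",   # near-duplicate of "a"
    "c": "completely unrelated sentence about neural networks",
}

# The LSH index groups documents whose estimated Jaccard similarity >= 0.5.
lsh = MinHashLSH(threshold=0.5, num_perm=128)
for doc_id, text in docs.items():
    lsh.insert(doc_id, minhash_of(text))

print(lsh.query(minhash_of(docs["a"])))  # ['a', 'b']: near-duplicates found
```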
6. C4 Filters

- The filtered data then passes through C4 filters, which further refine the dataset.
- C4 (Colossal Clean Crawled Corpus) filters typically remove boilerplate content, excessive repetition, and low-quality text.
7. Custom Filters
- At this stage, additional custom filtering rules are applied.
- These may involve removing specific patterns, handling formatting issues, or eliminating known sources of noise.
8. PII Removal
- Finally, the pipeline includes a PII (Personally Identifiable Information) removal step.
- This ensures that private or sensitive information (such as names, addresses, emails, and phone numbers) is scrubbed from the dataset; a simplified sketch follows.
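A deliberately simplified sketch of PII scrubbing with regular expressions (real pipelines use far more robust detectors; these patterns are illustrative assumptions):

```python
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}")

def scrub_pii(text: str) -> str:
    """Replace emails and phone numbers with placeholder tokens."""
    text = EMAIL.sub("<EMAIL>", text)
    text = PHONE.sub("<PHONE>", text)
    return text

print(scrub_pii("Contact john.doe@example.com or call (555) 123-4567."))
# Contact <EMAIL> or call <PHONE>.
```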
The Outcome of the Process
- The FineWeb pipeline ensures that the resulting dataset is clean, high-quality, and optimized for training AI models.
- Data reduction: after all filtering steps, the final 15 trillion tokens remain from the original web dumps.
This structured approach helps improve the performance of AI models by ensuring that they are trained on high-quality, diverse, and safe textual data.
LLM Pretraining Step 2: Tokenization
Once you are done with step 1, processing the raw data, the next question is: how do we train the neural network on this data? As mentioned above, FineWeb’s 15 trillion tokens (44TB of disk space) need to be fed to the neural network for further processing.
The next essential step is tokenization, a process that prepares the raw text data for training large language models (LLMs). Let’s break down how tokenization works and why it matters, following the talk.
Tokenization is the process of converting long sequences of text into smaller, manageable units called tokens. These tokens are the discrete elements that neural networks process during training. But how exactly do we turn an enormous text corpus into tokens that a machine can understand and learn from?
1. From Raw Text to a One-Dimensional Sequence
Before feeding the data to the neural network, we have to decide how to represent the text. Neural networks don’t process raw text directly; they expect input in the form of a finite one-dimensional sequence of symbols.
2. Binary Representation – Bits and Bytes
- A long sequence of 0s and 1s would be inefficient for storage and processing in neural networks.
- Instead of encoding text as a raw sequence of bits, a more efficient approach is to group bits into meaningful symbols.
3. Bytes as a 256-Symbol Vocabulary
Computers represent text using binary encoding (zeros and ones). Each character can be encoded as a sequence of 8 bits (1 byte). This forms the basis of how text data is represented internally. Since a byte can take 256 possible values (0–255), we have a vocabulary of 256 unique symbols, which can be thought of as unique IDs representing each character or combination.
Note: 1 byte = 8 bits
Since each bit can be 0 or 1, an 8-bit sequence can represent:
2⁸ = 256
This means a single byte can encode 256 unique values, ranging from 0 to 255.
- Each character (or symbol) is stored in 1 byte (8 bits).
- Each byte can take one of 256 possible values.
- Thus, the vocabulary size is 256 unique symbols.
When you encode text in UTF-8, you convert human-readable characters into their binary representations (raw bytes).
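You can see this directly in Python: encoding a short string yields the underlying byte values, each in the range 0–255:

```python
text = "hi 👋"
raw = text.encode("utf-8")   # convert characters to UTF-8 bytes
print(list(raw))             # [104, 105, 32, 240, 159, 145, 139]
# ASCII characters take 1 byte each; the emoji takes 4 bytes.
```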
4. Reducing Sequence Length – Beyond Bytes
Although byte-based encoding is compact, feeding long sequences of raw bytes would make the inputs unnecessarily long. To address this, tokenization methods such as Byte Pair Encoding (BPE) are employed to reduce sequence length while increasing the size of the vocabulary.
- Byte Pair Encoding (BPE): this method merges frequently occurring pairs of symbols (bytes) into new symbols. For instance, if a pair such as “135 32” appears repeatedly, it is replaced by a new token with its own ID (such as 256). The process iteratively shortens the sequence while expanding the token vocabulary, as in the sketch below.
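A minimal sketch of a single BPE merge step over a sequence of byte IDs (simplified for illustration; production tokenizers add many refinements):

```python
from collections import Counter

def most_common_pair(ids):
    """Find the most frequent adjacent pair in the sequence."""
    return Counter(zip(ids, ids[1:])).most_common(1)[0][0]

def merge(ids, pair, new_id):
    """Replace every occurrence of `pair` with the single token `new_id`."""
    out, i = [], 0
    while i < len(ids):
        if i < len(ids) - 1 and (ids[i], ids[i + 1]) == pair:
            out.append(new_id)
            i += 2
        else:
            out.append(ids[i])
            i += 1
    return out

ids = list("aabab".encode("utf-8"))   # [97, 97, 98, 97, 98]
pair = most_common_pair(ids)          # (97, 98) occurs twice
ids = merge(ids, pair, 256)           # [97, 256, 256]: a shorter sequence
print(pair, ids)
```

Repeating this merge step until a target vocabulary size is reached is exactly the trade-off described next: a bigger vocabulary buys shorter sequences.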
5. Vocabulary Size – Trade-off Between Sequence Length and Token Granularity
In practice, state-of-the-art LLMs like GPT-4 use a vocabulary of 100,277 tokens, and the iterative merging stops once this predefined vocabulary size is reached. The balance allows shorter sequences to be used for training while maintaining token granularity that captures essential language features. Each token can represent characters, words, spaces, or even common word combinations.
6. Tokenizing Text – Example and Practical Insights
Using a tokenizer such as GPT-4’s base encoding (cl100k_base), the input text is split into tokens according to the model’s predefined vocabulary. For example:
- The phrase “hello world” is tokenized into two tokens: one for “hello” and one for “space + world”.
- Adding or removing spaces results in different tokens, due to subtle variations in text patterns; you can verify this with the sketch below.
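You can reproduce this with OpenAI’s tiktoken library, which provides the cl100k_base encoding:

```python
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

print(enc.encode("hello world"))    # two token IDs: "hello" and " world"
print(enc.encode("hello  world"))   # an extra space changes the token IDs
print(enc.decode(enc.encode("hello world")))  # round-trips to the original
```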
Why Is This Useful?
- Optimizing neural network input: large language models (LLMs) like GPT-4 don’t read raw text; they process tokenized input instead.
- Understanding compression: some words are split into multiple tokens, while others stay intact.
- Efficiency in training: tokenization enables efficient storage and manipulation of text data.
The process of converting raw text into symbols, or tokens, is called tokenization. Tokenization is crucial because it translates raw text into a format that can later be mapped to vectors (embeddings) and that neural networks can efficiently process. It also strikes a trade-off between vocabulary richness and sequence length, which is key to optimizing the training process for large-scale LLMs. This step lays the foundation for the subsequent phases of LLM pretraining, where these tokens become the building blocks of the model’s understanding of language patterns, syntax, and semantics.
LLM Pretraining Step 3: Neural Network
A neural network is a computational model designed to simulate the way the human brain processes information. It consists of layers of interconnected nodes (neurons) that work together to recognize patterns, make decisions, and solve complex tasks.
Key Characteristics:
- Inspired by the human brain – mimics how biological neurons process and transmit information.
- Layered structure – composed of an input layer, hidden layers, and an output layer.
- Learning through training – adjusts internal parameters (weights) over multiple iterations to improve accuracy.
- Task-specific adaptability – can handle diverse problems such as classification, pattern recognition, and clustering.
How It Works:
- Nodes (neurons): fundamental units that process data.
- Connections (weights): store learned information and adjust based on input.
- Training process: weights are updated over multiple iterations using training data.
- Final model: a trained neural network can efficiently perform its intended task.
A neural network is a powerful AI tool that learns from data and improves over time, enabling machines to make human-like decisions.
Also read: Introduction to Neural Networks in Machine Learning
Neural Network I/O
Input: Tokenized Sequences
The input to the neural network consists of sequences of tokens derived from a dataset through tokenization. Tokenization breaks the text down into discrete units, each assigned a unique numerical ID. In this example, we consider the token sequence for the phrase “If you are done with step 1”:

| Token ID | Token |
|----------|--------|
| 2746 | “If” |
| 499 | “you” |
| 527 | “are” |
| 2884 | “done” |
| 449 | “with” |
| 3094 | “step” |
| 16 | “1” |


These tokens are fed into the neural network as context, with the aim of predicting the next token in the sequence.
Processing: Probability Distribution Prediction
Once the token sequence is passed through the neural network, it generates a probability distribution over a vocabulary of possible next tokens. In this case, GPT-4’s vocabulary contains 100,277 unique tokens, so the output is a probability score assigned to each possible token, representing the likelihood that it occurs next.
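The raw network outputs (logits) are turned into this probability distribution with the softmax function. A minimal sketch, assuming a toy five-token vocabulary instead of the full 100,277:

```python
import numpy as np

logits = np.array([2.0, 1.0, 0.5, -1.0, 0.0])   # one score per vocabulary token

def softmax(x):
    e = np.exp(x - x.max())   # subtract the max for numerical stability
    return e / e.sum()

probs = softmax(logits)
print(probs, probs.sum())     # probabilities that sum to 1.0
```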
Backpropagation and Adjustment
To correct its predictions, the neural network goes through a mathematical update process:
- Calculate loss – a loss function (such as cross-entropy) measures how far the predicted probabilities are from the correct ones. The lower the probability assigned to the correct token, the higher the loss.
- Compute gradients – the network uses backpropagation and gradient descent to determine how to adjust the weights of its neurons.
- Update weights – the model’s internal parameters (weights) are tweaked slightly so that the next time it sees the same sequence, it increases the probability of the correct token (“Post” in this example) and reduces the probability of incorrect options. The sketch below shows one such update step.
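Here is a minimal PyTorch sketch of this three-step update on a toy model; the architecture and all sizes are illustrative assumptions, not GPT-4’s:

```python
import torch
import torch.nn as nn

vocab_size, context_len = 100, 4
model = nn.Sequential(                      # toy stand-in for a Transformer
    nn.Embedding(vocab_size, 32),
    nn.Flatten(),
    nn.Linear(32 * context_len, vocab_size),
)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)

context = torch.tensor([[2, 46, 9, 88]])    # token IDs of the input sequence
target = torch.tensor([13])                 # ID of the correct next token

logits = model(context)                             # scores over the vocabulary
loss = nn.functional.cross_entropy(logits, target)  # 1. calculate loss
loss.backward()                                     # 2. compute gradients
optimizer.step()                                    # 3. update weights
optimizer.zero_grad()
```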
Training and Refinement
The neural network updates its parameters using a mathematical optimization process. Given the correct token, the training algorithm adjusts the network weights so that:
- The probability of the correct token increases.
- The probabilities of incorrect tokens decrease.
For instance, after an update, the probability of the correct token might rise from 4% to 6%, while the probabilities of other tokens adjust accordingly. This iterative process runs across large batches of training data, refining the network’s ability to model the statistical relationships between tokens.
Through continuous exposure to data and iterative updates, the neural network improves its predictive capability. By analyzing context windows of tokens and refining probability distributions, it learns to generate text sequences that align with real-world linguistic patterns.
Inner Workings of a Neural Network

A neural network, particularly a modern architecture like the Transformer, follows a structured computational process to generate meaningful predictions from input data. Below is a detailed explanation of its internals, broken down into key stages.
1. Input Representation: Token Sequences
Neural networks process input data in the form of token sequences. Each token is a numerical representation of a word or a subword.
- The input length can vary from 0 to 8,000 tokens (depending on the model); computational constraints limit the maximum context length.
- Token sequences are the primary data structures that flow through the network.
2. Mathematical Processing with Parameters (Weights)
Once token sequences are fed into the network, they are processed mathematically using a large number of parameters (also called weights).
- Parameters start out random, which leads to random predictions.
- Through training, these parameters are adjusted to reflect patterns in the training dataset.
3. The Mathematical Expressions Behind Neural Networks
The network itself is a giant mathematical function with a fixed structure. It mixes inputs X1, X2, … with weights W1, W2, … through:
- Multiplication
- Addition
- Exponentiation
- Normalization (LayerNorm)
- Matrix Operations
- Activation Functions (softmax, etc.)
Even though modern networks contain billions of parameters, at their core they repeatedly perform simple mathematical operations.
Example: a basic operation in a neural network, a single artificial neuron, might look like the following sketch (illustrative, not the article’s exact expression):
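```python
import math

def neuron(inputs, weights, bias):
    """One neuron: a weighted sum of inputs, then a nonlinear activation."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias   # multiply + add
    return 1 / (1 + math.exp(-z))            # sigmoid squashes output to (0, 1)

print(neuron([0.5, -1.2, 3.0], [0.8, 0.1, -0.4], bias=0.2))
```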

You can learn more about it here: {link of article}
4. The Transformer Architecture: The Backbone of Modern Neural Networks
Here we’re talking about the nano-GPT model, with a mere 85,000 parameters.
We take the sequence C B A B B C and sort it to A B B B C C.
Each letter is called a token, and each token has a token index: