Understanding the Evolution of ChatGPT: Part 1: An In-Depth Look at GPT-1 and What Inspired It | by Shirley Li | Jan, 2025

Pre-training and Finetuning

The pretraining + finetuning paradigm, which first became popular in Computer Vision, refers to the process of training a model in two stages: pretraining and then finetuning.

In the pretraining stage, the model is trained on a large-scale dataset related to the downstream task at hand. In Computer Vision, this is usually done by training an image classification model on ImageNet, with its most commonly used subset ILSVRC containing 1K classes, each with roughly 1K images.

Although 1M images doesn't sound "large-scale" by today's standards, ILSVRC was truly remarkable a decade ago and was indeed much, much larger than what we could collect for specific CV tasks.

Also, the CV community has explored several ways to get rid of supervised pre-training as well, for example MoCo (by Kaiming He et al.) and SimCLR (by Ting Chen et al.), etc.

After pre-training, the model is assumed to have learned some general knowledge about the task, which can accelerate learning on the downstream task.

Then comes finetuning: in this stage, the model is trained on a specific downstream task with high-quality labeled data, usually at a much smaller scale than ImageNet. During this stage, the model picks up domain-specific knowledge related to the task at hand, which helps improve its performance.

For many CV tasks, this pretraining + finetuning paradigm delivers better performance than directly training the same model from scratch on the limited task-specific data, especially when the model is complex and hence more likely to overfit on limited training data. Combined with modern CNNs such as ResNet, this led to a performance leap on many CV benchmarks, some of which even reached near-human performance.
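To make the recipe concrete, below is a minimal sketch of pretraining + finetuning in Computer Vision, assuming PyTorch/torchvision and a hypothetical 10-class downstream task (this is illustrative, not code from the works discussed here):

```python
import torch
import torch.nn as nn
from torchvision import models

# "Pretraining" here simply means reusing weights already trained on ImageNet.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)

# Finetuning: replace the 1000-class ImageNet head with a task-specific one.
num_classes = 10  # hypothetical downstream task
model.fc = nn.Linear(model.fc.in_features, num_classes)

# Optionally freeze the backbone so only the new head is updated at first.
for name, param in model.named_parameters():
    if not name.startswith("fc"):
        param.requires_grad = False

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3
)
criterion = nn.CrossEntropyLoss()
# ...then train on the small labeled downstream dataset as usual.
```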

Therefore, a natural question arises: how can we replicate this success in NLP?

Earlier Explorations of Pretraining Prior to GPT-1

In fact, the NLP community never stopped pushing in this direction, and some of the efforts date back to as early as 2013, such as Word2Vec and GloVe (Global Vectors for Word Representation).

Word2Vec

The Word2Vec paper "Distributed Representations of Words and Phrases and their Compositionality" was honored with the Test of Time award at NeurIPS 2023. It is truly a must-read for anyone not familiar with this work.

Today it feels so natural to represent words or tokens as embedding vectors, but this wasn't the case before Word2Vec. Back then, words were commonly represented by one-hot encoding or by count-based statistics such as TF-IDF (term frequency-inverse document frequency) or co-occurrence matrices.

For example, in one-hot encoding, given a vocabulary of size N, each word in the vocabulary is assigned an index i and is then represented as a sparse vector of length N where only the i-th element is set to 1.

Take the following case as an example: in this toy vocabulary we have only four words: the (index 0), cat (index 1), sat (index 2), and on (index 3), so each word is represented as a sparse vector of length 4 (the -> 1000, cat -> 0100, sat -> 0010, on -> 0001).

Figure 1. An example of one-hot encoding with a toy vocabulary of 4 words. (Image by the author)
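As a small illustrative snippet (not from the original papers), here is how such one-hot vectors could be built in plain NumPy, and why they carry no notion of word similarity:

```python
import numpy as np

# Toy vocabulary from the example above.
vocab = {"the": 0, "cat": 1, "sat": 2, "on": 3}

def one_hot(word: str, vocab: dict) -> np.ndarray:
    """Return a one-hot vector of length len(vocab) for the given word."""
    vec = np.zeros(len(vocab), dtype=int)
    vec[vocab[word]] = 1
    return vec

print(one_hot("cat", vocab))                          # [0 1 0 0]

# Any two distinct one-hot vectors are orthogonal, so this representation
# encodes no similarity between words at all.
print(one_hot("cat", vocab) @ one_hot("sat", vocab))  # 0
```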

The problem with this simple method is that, as the vocabulary grows larger and larger in real-world cases, the one-hot vectors become extremely long. Also, neural networks are not designed to handle such sparse vectors efficiently.

Moreover, the semantic relationships between related words are lost in this representation, since the index for each word is assigned arbitrarily, meaning similar words have no connection to each other.

With that, you can better appreciate the significance of Word2Vec's contribution: by representing words as continuous vectors in a high-dimensional space, where words appearing in similar contexts have similar vectors, it completely revolutionized the field of NLP.

With Word2Vec, related words are mapped closer together in the embedding space. For example, in the figure below the authors show the PCA projection of word embeddings for some countries and their corresponding capitals, with their relationships automatically captured by Word2Vec without any supervised information provided.

Figure 2. PCA projection of country and capital vectors learned by Word2Vec. (Image from the Word2Vec paper)
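The same regularity is easy to probe with off-the-shelf tooling. The sketch below assumes the pretrained GoogleNews Word2Vec vectors distributed through gensim's downloader (a large one-time download) and is only meant to illustrate the idea:

```python
import gensim.downloader as api

# Load pretrained Word2Vec vectors trained on the GoogleNews corpus.
kv = api.load("word2vec-google-news-300")

# vector("Paris") - vector("France") + vector("Germany") should land near "Berlin".
print(kv.most_similar(positive=["Paris", "Germany"], negative=["France"], topn=3))

# Related words also sit close together in the embedding space.
print(kv.similarity("king", "queen"))
```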

Word2Vec is learned in an unsupervised manner, and once the embeddings are learned, they can easily be reused in downstream tasks. This makes it one of the earliest efforts exploring semi-supervised learning in NLP.

More specifically, Word2Vec can use either the CBOW (Continuous Bag of Words) or the Skip-Gram architecture to learn word embeddings.

In CBOW, the model tries to predict the target word based on its surrounding words. For example, given the sentence "The cat sat on the mat," CBOW would try to predict the target word "sat" given the context words "The," "cat," "on," "the." This architecture is effective when the goal is to predict a single word from its context.

Skip-Gram works the opposite way: it uses a target word to predict its surrounding context words. Taking the same sentence as an example, this time the target word "sat" becomes the input, and the model tries to predict context words like "The," "cat," "on," and "the." Skip-Gram is particularly useful for capturing rare words by leveraging the contexts in which they appear.

Figure 3. CBOW vs. Skip-Gram architectures in Word2Vec, where "sat" is the target word. (Image by the author)
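Both architectures are exposed through a single flag in gensim. The following is a minimal sketch on a toy corpus (real training needs far more data), where sg=0 selects CBOW and sg=1 selects Skip-Gram:

```python
from gensim.models import Word2Vec

# A toy corpus of tokenized sentences.
corpus = [
    ["the", "cat", "sat", "on", "the", "mat"],
    ["the", "dog", "sat", "on", "the", "rug"],
]

# sg=0 -> CBOW: predict the target word from its surrounding context words.
cbow = Word2Vec(sentences=corpus, vector_size=50, window=2, min_count=1, sg=0)

# sg=1 -> Skip-Gram: predict the context words from the target word.
skipgram = Word2Vec(sentences=corpus, vector_size=50, window=2, min_count=1, sg=1)

print(cbow.wv["sat"].shape)                     # (50,)
print(skipgram.wv.most_similar("cat", topn=2))
```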

GloVe

Another work along this line of research is GloVe, which is also an unsupervised method for generating word embeddings. Unlike Word2Vec, which focuses on local context, GloVe is designed to capture global statistical information by constructing a word co-occurrence matrix and factorizing it to obtain dense word vectors.

Figure 4. Illustration of word embedding generation in GloVe. (Image by the author)
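To illustrate only the core idea, the sketch below builds a global co-occurrence matrix on a toy corpus and factorizes it with a plain truncated SVD. Note this is a deliberate simplification: the actual GloVe model instead fits a weighted least-squares objective on log co-occurrence counts.

```python
import numpy as np

# A toy corpus of tokenized sentences.
corpus = [
    ["the", "cat", "sat", "on", "the", "mat"],
    ["the", "dog", "sat", "on", "the", "rug"],
]
vocab = sorted({w for sent in corpus for w in sent})
idx = {w: i for i, w in enumerate(vocab)}

# Build the global word-word co-occurrence matrix with a symmetric window.
window = 2
cooc = np.zeros((len(vocab), len(vocab)))
for sent in corpus:
    for i, w in enumerate(sent):
        for j in range(max(0, i - window), min(len(sent), i + window + 1)):
            if j != i:
                cooc[idx[w], idx[sent[j]]] += 1

# Factorize the (log-scaled) counts to obtain dense word vectors.
u, s, _ = np.linalg.svd(np.log1p(cooc))
dim = 4
word_vectors = u[:, :dim] * s[:dim]
print(word_vectors[idx["cat"]])
```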

Note that both Word2Vec and GloVe primarily transfer word-level information, which is often not sufficient for complex NLP tasks, where we need to capture higher-level semantics in the embeddings. This led to newer explorations of unsupervised pre-training of NLP models.

Unsupervised Pre-Training

Before GPT, many works had explored unsupervised pre-training with different objectives, such as language modeling, machine translation, and discourse coherence, etc. However, each method only outperformed the others on certain downstream tasks, and it remained unclear which optimization objectives were most effective or most useful for transfer.

You may have noticed that language modeling had already been explored as a training objective in some of these earlier works, so why didn't those methods succeed the way GPT did?

The answer is Transformer models.

When those earlier works were proposed, Transformer models did not exist yet, so researchers could only rely on RNN models such as LSTMs for pre-training.

This brings us to the next topic: the Transformer architecture used in GPT.

Decoder-only Transformer

In GPT, the Transformer architecture is a modified version of the original Transformer called the decoder-only Transformer. This is a simplified Transformer architecture proposed by Google in 2018, and it contains only the decoder.

Below is a comparison of the encoder-decoder architecture introduced in the original Transformer vs. the decoder-only Transformer architecture used in GPT. Essentially, the decoder-only architecture removes the encoder entirely, along with the cross-attention that connects to it, resulting in a simpler architecture.

Figure 5. Comparison of the encoder-decoder architecture in the original Transformer vs. the decoder-only Transformer in GPT. (Images from the Transformer and GPT papers)
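In practice, "decoder-only" boils down to stacked blocks of causally masked self-attention plus feed-forward layers, with no cross-attention. Below is a minimal PyTorch sketch of this idea (not the original GPT code):

```python
import torch
import torch.nn as nn

seq_len, d_model, n_heads = 8, 64, 4

# PyTorch's TransformerEncoderLayer is just self-attention + feed-forward;
# with a causal mask it behaves like a decoder-only (GPT-style) block.
# nn.TransformerDecoderLayer is not used here because it expects encoder
# memory for cross-attention, which decoder-only models do not have.
block = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads, batch_first=True)
decoder_only = nn.TransformerEncoder(block, num_layers=2)

# Causal mask: position i may only attend to positions <= i.
causal_mask = torch.triu(
    torch.full((seq_len, seq_len), float("-inf")), diagonal=1
)

x = torch.randn(1, seq_len, d_model)     # a batch of already-embedded tokens
out = decoder_only(x, mask=causal_mask)
print(out.shape)                         # torch.Size([1, 8, 64])
```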

So what is the benefit of making the Transformer decoder-only?

Compared with encoder-only models such as BERT, decoder-only models typically perform better at generating coherent and contextually relevant text, making them ideal for text-generation tasks.

Encoder-only models like BERT, on the other hand, usually perform better in tasks that require understanding the input, such as text classification, sentiment analysis, and named entity recognition.

There is also a family of models that employ both the Transformer encoder and decoder, such as T5 and BART, where the encoder processes the input and the decoder generates the output based on the encoded representation. While this design makes them more versatile across a wide range of tasks, they are usually more computationally intensive than encoder-only or decoder-only models.

In a nutshell, while both are built on Transformers and both leverage the pre-training + finetuning scheme, GPT and BERT chose very different ways to achieve a similar goal. More specifically, GPT conducts pre-training in an auto-regressive manner, while BERT follows an auto-encoding approach.

Auto-regressive vs. Auto-encoding Language Models

An easy way to understand their difference is to compare their training objectives.

In auto-regressive language models, the training objective is typically to predict the next token in the sequence based on the previous tokens. Due to this dependency on previous tokens, training proceeds in a unidirectional (usually left-to-right) manner, as shown on the left of Figure 6.

In contrast, auto-encoding language models are trained with objectives like masked language modeling, i.e., reconstructing the original input from a corrupted version. This is usually done in a bidirectional manner, where the model can leverage all the tokens around the masked one, in other words, tokens on both the left and the right. This is illustrated on the right of Figure 6.

Figure 6. Auto-regressive language model vs. auto-encoding language model. (Image by the author)
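The contrast is easiest to see in how the loss is computed. Below is a hedged sketch on dummy tensors, assuming PyTorch; `logits` is a stand-in for the output of the respective language model, and a real masked LM would also replace the masked input positions with a [MASK] token:

```python
import torch
import torch.nn.functional as F

vocab_size, seq_len = 100, 6
tokens = torch.randint(0, vocab_size, (1, seq_len))   # a dummy token sequence
logits = torch.randn(1, seq_len, vocab_size)          # stand-in for model output

# Auto-regressive (GPT-style): predict token t+1 from tokens <= t,
# i.e. shift the targets one position to the left.
ar_loss = F.cross_entropy(
    logits[:, :-1].reshape(-1, vocab_size),
    tokens[:, 1:].reshape(-1),
)

# Auto-encoding (BERT-style): mask a few positions and predict only those,
# using context from both the left and the right.
mask = torch.tensor([[False, True, False, False, True, False]])
ae_loss = F.cross_entropy(logits[mask], tokens[mask])

print(ar_loss.item(), ae_loss.item())
```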

Simply put, auto-regressive LMs are better suited for text generation, but their unidirectional modeling may limit their ability to understand the full context. Auto-encoding LMs, on the other hand, do a better job at context understanding, but are not designed for generative tasks.