Running a SOTA 7B Parameter Embedding Model on a Single GPU | by Szymon Palucha | Aug, 2024

Set Up

The model that we will experiment with is Alibaba-NLP/gte-Qwen2-7B-instruct, available on Hugging Face. The model card is here.

To carry out this experiment, I used Python 3.10.8 and installed the following packages:

torch==2.3.0
transformers==4.41.2
xformers==0.0.26.post1
flash-attn @ https://github.com/Dao-AILab/flash-attention/releases/download/v2.5.8/flash_attn-2.5.8+cu122torch2.3cxx11abiFALSE-cp310-cp310-linux_x86_64.whl
accelerate==0.31.0

I ran into some difficulty installing flash-attn, which is required to run this model, and so had to install the exact version listed above. If anybody has a better workaround please let me know!

The Amazon SageMaker instance I used for this experiment is the ml.g5.2xlarge. It has a 24GB NVIDIA A10G GPU and 32GB of CPU memory, and it costs $1.69/hour. The screenshot below from AWS shows all the details of the instance.

SageMaker g5 instance types from AWS docs.

Actually, to be precise, if you run nvidia-smi you will see that the instance only has 23GB of GPU memory, which is slightly less than advertised. The CUDA version on this GPU is 12.2.

How to Run — In Detail

If you take a look at the model card, one of the suggested ways to use this model is via the sentence-transformers library, as shown below:

from sentence_transformers import SentenceTransformer

# This will not run on our 24GB GPU!
model = SentenceTransformer("Alibaba-NLP/gte-Qwen2-7B-instruct", trust_remote_code=True)
embeddings = model.encode(list_of_examples)

Sentence-transformers is an extension of the Transformers package for computing embeddings and is very useful as you can get things working with two lines of code. The downside is that you have less control over how the model is loaded, since it hides away the tokenisation and pooling details. The above code will not run on our GPU instance because it attempts to load the model in full float32 precision, which would take 28GB of memory (7 billion parameters at 4 bytes each). When the sentence-transformers model is initialised, it checks for available devices (cuda for a GPU) and automatically moves the PyTorch model onto the device. As a result it gets stuck after loading 5/7ths of the model and crashes.

Instead, we need to be able to load the model in float16 precision before we move it onto the GPU. As such, we need to use the lower-level Transformers library. (I am not sure of a way to do it with sentence-transformers, but let me know if one exists!) We do this as follows:

import transformers
import torch

model_path = "Alibaba-NLP/gte-Qwen2-7B-instruct"
model = transformers.AutoModel.from_pretrained(model_path, trust_remote_code=True, torch_dtype=torch.float16).to("cuda")

With the torch_dtype parameter we specify that the model should be loaded in float16 precision directly, thus only requiring 14GB of memory (2 bytes per parameter). We then need to move the model onto the GPU device, which is achieved with the to method. Using the above code, the model takes almost 2 minutes to load!
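
If you want to sanity check the memory footprint after loading, you can query PyTorch directly. Below is a minimal sketch (the exact numbers will differ slightly because of CUDA context overheads):

import torch

# total memory PyTorch sees on the GPU (roughly 23GB on the A10G)
total_gib = torch.cuda.get_device_properties(0).total_memory / 1024**3
# memory currently held by tensors, dominated by the float16 weights (~14GB)
allocated_gib = torch.cuda.memory_allocated(0) / 1024**3
print(f"total: {total_gib:.1f} GiB, allocated: {allocated_gib:.1f} GiB")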

Since we’re utilizing transformers we have to individually load the tokeniser to tokenise the enter texts as follows:

tokenizer = transformers.AutoTokenizer.from_pretrained(model_path)

The next step is to tokenise the input texts, which is done as follows:

texts = ["example text 1", "example text 2 of different length"]
max_length = 32768
batch_dict = tokenizer(texts, max_length=max_length, padding=True, truncation=True, return_tensors="pt").to("cuda")

The maximum length of the Qwen2 model is 32768; however, as we will see later, we are unable to run it with such a long sequence on our 24GB GPU due to the additional memory requirements. I would recommend reducing this to no more than 24,000 to avoid out-of-memory errors. Padding ensures that all the inputs in the batch have the same length, whilst truncation ensures that any inputs longer than the maximum length are truncated. For more information please see the docs. Finally, we make sure that we return PyTorch tensors (the default would be lists instead) and move these tensors onto the GPU so they are available to pass to the model.
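
To see the truncation behaviour in action before running the model, here is a minimal sketch (the 24,000 value is just the recommendation above, not a property of the tokeniser itself):

# deliberately create a text longer than the reduced maximum length
long_text = "example " * 50_000
batch = tokenizer([long_text], max_length=24_000, padding=True, truncation=True, return_tensors="pt")
print(batch["input_ids"].shape)  # the sequence dimension is capped at 24,000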

The next step is to pass the inputs through our model and perform pooling. This is achieved as follows:

with torch.no_grad():
    outputs = model(**batch_dict)
    embeddings = last_token_pool(outputs.last_hidden_state, batch_dict["attention_mask"])

with the last_token_pool function, which looks as follows:

def last_token_pool(last_hidden_states: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    # checks whether there is any padding (where attention mask = 0 for a given text)
    no_padding = attention_mask[:, -1].sum() == attention_mask.shape[0]
    # if no padding - would only happen with a batch size of 1 or if all sequences have the same length - take the last tokens as the embeddings
    if no_padding:
        return last_hidden_states[:, -1]
    # otherwise use the last non padding token for each text in the batch
    sequence_lengths = attention_mask.sum(dim=1) - 1
    batch_size = last_hidden_states.shape[0]
    return last_hidden_states[torch.arange(batch_size, device=last_hidden_states.device), sequence_lengths]

Let's break down what happened in the above code snippets!
  • The torch.no_grad() context manager is used to disable gradient calculation, since we are not training the model; this speeds up inference.
  • We then pass the tokenised inputs into the transformer model.
  • We retrieve the outputs from the last layer of the model with the last_hidden_state attribute. This is a tensor of shape (batch_size, max_sequence_length, embedding dimension). Essentially for each example in the batch the transformer outputs embeddings for all the tokens in the sequence.
  • We now need some way of combining all the token embeddings into a single embedding to represent the input text. This is called pooling and it is done in the same way as during training of the model.
  • In older BERT-based models the first token was typically used (which represented the special classification [CLS] token). However, the Qwen2 model is LLM-based, i.e. transformer decoder based. In the decoder, the tokens are generated autoregressively (one after another), and so the last token contains all the information encoded about the sentence (see the sketch after this list for a comparison of the two pooling styles).
  • The goal of the last_token_pool function is therefore to select the embedding of the last generated token (which was not a padding token) for each example in the batch.
  • It uses the attention_mask, which tells the model which of the tokens are padding tokens for each example in the batch (see the docs).
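
To make the contrast concrete, here is a minimal sketch of the two pooling styles. The first_token_pool function is purely illustrative of the BERT-style approach and is not part of the model card's code; the decoder-style pooling is exactly what last_token_pool above implements:

import torch

def first_token_pool(last_hidden_states: torch.Tensor) -> torch.Tensor:
    # BERT-style pooling: take the embedding of the first ([CLS]) token for every example
    return last_hidden_states[:, 0]

def last_non_padding_token_pool(last_hidden_states: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    # decoder-style pooling: take the embedding of the last non-padding token for every example
    sequence_lengths = attention_mask.sum(dim=1) - 1
    batch_size = last_hidden_states.shape[0]
    return last_hidden_states[torch.arange(batch_size, device=last_hidden_states.device), sequence_lengths]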

Annotated Example

Let's take a look at an example to understand it in a bit more detail. Let's say we want to embed two examples in a single batch:

texts = ["example text 1", "example text 2 of different length"]

The outputs of the tokeniser (the batch_dict) will look as follows:

>>> batch_dict
{'input_ids': tensor([[ 8687, 1467, 220, 16, 151643, 151643, 151643],
[ 8687, 1467, 220, 17, 315, 2155, 3084]],
device='cuda:0'), 'attention_mask': tensor([[1, 1, 1, 1, 0, 0, 0],
[1, 1, 1, 1, 1, 1, 1]], device='cuda:0')}

From this you can see that the first sentence gets split into four tokens (8687, 1467, 220, 16), whereas the second sentence gets split into seven tokens. As a result, the first sentence is padded (with three padding tokens with id 151643) up to length seven, the maximum in the batch. The attention mask reflects this: it has three zeros for the first example, corresponding to the positions of the padding tokens. Both tensors have the same size:

>>> batch_dict.input_ids.shape
torch.Size([2, 7])
>>> batch_dict.attention_mask.shape
torch.Size([2, 7])

Now, passing the batch_dict through the model, we can retrieve the model's last hidden state, which has shape:

>>> outputs.last_hidden_state.shape
torch.Size([2, 7, 3584])

We can see that this is of shape (batch_size, max_sequence_length, embedding_dimension). Qwen2 has an embedding dimension of 3584!

Now we’re within the last_token_pool perform. The primary line checks if padding exists, it does it by summing the final “column” of the attention_mask and evaluating it to the batch_size (given by attention_mask.form[0]. This can solely lead to true if there exists a 1 in the entire consideration masks, i.e. if all of the examples are the identical size or if we solely have one instance.

>>> attention_mask.shape[0]
2
>>> attention_mask[:, -1]
tensor([0, 1], device='cuda:0')

If there were indeed no padding, we could simply select the last token embedding for each of the examples with last_hidden_states[:, -1]. However, since we have padding, we need to select the last non-padding token embedding for each example in the batch. In order to select this embedding, we need to get its index for each example. This is achieved via:

>>> sequence_lengths = attention_mask.sum(dim=1) - 1
>>> sequence_lengths
tensor([3, 6], device='cuda:0')

So now we simply need to index into the tensor with the correct indices in the first two dimensions. To get the indices for all the examples in the batch, we can use torch.arange as follows:

>>> torch.arange(batch_size, device=last_hidden_states.device)
tensor([0, 1], device='cuda:0')

Then we can pluck out the correct token embeddings for each example using this and the indices of the last non-padding token:

>>> embeddings = last_hidden_states[torch.arange(batch_size, device=last_hidden_states.device), sequence_lengths]
>>> embeddings.shape
torch.Size([2, 3584])

And we get two embeddings for the two examples passed in!
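
As a quick sanity check on the outputs, the two embeddings can be compared with cosine similarity, which is how such embeddings are typically used downstream. A minimal sketch (not part of the model card's example code):

import torch.nn.functional as F

# compare the two example embeddings; unsqueeze adds a batch dimension of 1
similarity = F.cosine_similarity(embeddings[0].unsqueeze(0), embeddings[1].unsqueeze(0))
print(similarity.item())  # a single score between -1 and 1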

How to Run — TLDR

The full code, separated out into functions, looks like this:

import numpy as np
import numpy.typing as npt
import torch
import transformers

DEVICE = torch.device("cuda")

def last_token_pool(last_hidden_states: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    # checks whether there is any padding (where attention mask = 0 for a given text)
    no_padding = attention_mask[:, -1].sum() == attention_mask.shape[0]
    # if no padding - would only happen with a batch size of 1 or if all sequences have the same length - take the last tokens as the embeddings
    if no_padding:
        return last_hidden_states[:, -1]
    # otherwise use the last non padding token for each text in the batch
    sequence_lengths = attention_mask.sum(dim=1) - 1
    batch_size = last_hidden_states.shape[0]
    return last_hidden_states[torch.arange(batch_size, device=last_hidden_states.device), sequence_lengths]

def encode_with_qwen_model(
    model: transformers.PreTrainedModel,
    tokenizer: transformers.tokenization_utils.PreTrainedTokenizer | transformers.tokenization_utils_fast.PreTrainedTokenizerFast,
    texts: list[str],
    max_length: int = 32768,
) -> npt.NDArray[np.float16]:
    batch_dict = tokenizer(texts, max_length=max_length, padding=True, truncation=True, return_tensors="pt").to(DEVICE)

    with torch.no_grad():
        outputs = model(**batch_dict)
        embeddings = last_token_pool(outputs.last_hidden_state, batch_dict["attention_mask"])
    return embeddings.cpu().numpy()

def main() -> None:
    model_path = "Alibaba-NLP/gte-Qwen2-7B-instruct"
    tokenizer = transformers.AutoTokenizer.from_pretrained(model_path)
    model = transformers.AutoModel.from_pretrained(model_path, trust_remote_code=True, torch_dtype=torch.float16).to(DEVICE)
    print("Loaded tokeniser and model")

    texts_to_encode = ["example text 1", "example text 2 of different length"]
    embeddings = encode_with_qwen_model(model, tokenizer, texts_to_encode)
    print(embeddings.shape)

if __name__ == "__main__":
    main()

The encode_with_qwen_model function returns a numpy array. In order to convert a PyTorch tensor to a numpy array, we first have to move it off the GPU back onto the CPU, which is achieved with the cpu() method. Please note that if you are planning to run long texts, you should reduce the batch size to 1 and only embed one example at a time (thus reducing the list texts_to_encode to length 1).
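
For completeness, that one-example-at-a-time loop could look something like the sketch below, built on the functions above; encode_one_by_one is my own helper name, not part of the original code:

def encode_one_by_one(
    model: transformers.PreTrainedModel,
    tokenizer: transformers.PreTrainedTokenizerBase,
    texts: list[str],
    max_length: int = 24000,
) -> npt.NDArray[np.float16]:
    # embed each text separately (effective batch size of 1) to keep peak GPU memory low
    single_embeddings = [encode_with_qwen_model(model, tokenizer, [text], max_length) for text in texts]
    return np.concatenate(single_embeddings, axis=0)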