Classifier-Free Guidance for LLMs: Enhancing Performance | by Roman S | Dec, 2024

Classifier-free guidance (CFG) is a very useful technique in the media-generation domain (images, videos, music). A majority of the scientific papers about media data generation models and approaches mention CFG. I find this paper to be fundamental research on classifier-free guidance; it started in the image generation domain. The following is mentioned in the paper:

…we combine the resulting conditional and unconditional score estimates to attain a trade-off between sample quality and diversity similar to that obtained using classifier guidance.

So classifier-free guidance is based on conditional and unconditional score estimates and follows the earlier approach of classifier guidance. Simply speaking, classifier guidance allows updating the predicted scores in the direction of some predefined class by applying gradient-based updates.

An abstract example of classifier guidance: let's say we have a predicted image Y and a classifier that predicts whether the image has positive or negative meaning; we want to generate positive images, so we want the prediction Y to be aligned with the positive class of the classifier. To do that we can calculate how we should change Y so that it gets classified as positive by our classifier: calculate the gradient and update Y accordingly.
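To make the idea concrete, here is a toy, illustrative PyTorch sketch of such a gradient-based update (the classifier, the tensor shapes and the guidance_strength value are made up for illustration; this shows only the intuition, not the exact procedure from the classifier-guidance papers):

import torch
import torch.nn as nn

# Toy differentiable "classifier": class 0 = negative, class 1 = positive
classifier = nn.Sequential(nn.Flatten(), nn.Linear(3 * 8 * 8, 2))

# Toy "predicted image" Y that we want to push towards the positive class
Y = torch.randn(1, 3, 8, 8, requires_grad=True)

log_probs = torch.log_softmax(classifier(Y), dim=-1)
loss = -log_probs[0, 1]  # negative log-probability of the positive class
loss.backward()          # the gradient tells us how to change Y

guidance_strength = 0.1  # how strongly we steer towards the positive class
with torch.no_grad():
    Y_guided = Y - guidance_strength * Y.grad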

Classifier-free guidance was created with the same goal, however it does not do any gradient-based updates. In my opinion, classifier-free guidance is much simpler to understand from its implementation formula for diffusion-based image generation:

Image from https://arxiv.org/pdf/2207.12598 — the classifier-free guidance formula for image generation

The formula can be rewritten in the following way:

Image by author — the classifier-free guidance formula rewritten

Several things are clear from the rewritten formula:

  1. When CFG_coefficient equals 1, the updated prediction equals the conditional prediction (so no CFG is applied, in fact);
  2. When CFG_coefficient > 1, the scores that are higher in the conditional prediction compared to the unconditional prediction become even higher in the updated prediction, while those that are lower become even lower.

The formula has no gradients; it works with the predicted scores themselves. The unconditional prediction represents the prediction of the same conditional generation model where the condition was empty, a null condition. At the same time, this unconditional prediction can be replaced by a negative-conditional prediction, when we replace the null condition with some negative condition and expect "negation" of this condition after applying the CFG formula to update the final scores.
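Since the image cannot be embedded here, below is a tiny numeric sketch of the rewritten formula as I read it (updated = unconditional + CFG_coefficient * (conditional - unconditional)); the score values are made up:

import torch

# Made-up scores for three tokens, just to see how the formula behaves
conditional = torch.tensor([2.0, 1.0, 0.0])
unconditional = torch.tensor([1.0, 1.0, 1.0])

for cfg_coefficient in (1.0, 2.0, 3.0):
    updated = unconditional + cfg_coefficient * (conditional - unconditional)
    print(cfg_coefficient, updated)

# 1.0 -> tensor([2., 1., 0.])   exactly the conditional prediction
# 2.0 -> tensor([3., 1., -1.])  differences from the unconditional prediction are amplified
# 3.0 -> tensor([4., 1., -2.])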

Classifier-free guidance for LLM text generation was described in this paper. Following the formulation from the paper, CFG for text models was implemented in HuggingFace Transformers: in the current latest transformers version 4.47.1, the "UnbatchedClassifierFreeGuidanceLogitsProcessor" class mentions the following:

The processor computes a weighted average across scores from prompt conditional and prompt unconditional (or negative) logits, parameterized by the `guidance_scale`.
The unconditional scores are computed internally by prompting `model` with the `unconditional_ids` branch.

See [the paper](https://arxiv.org/abs/2306.17806) for more information.

The formula to sample the next token according to the paper is:

Image from https://arxiv.org/pdf/2306.17806 — the formula to sample the next token with CFG applied in a text generation model

It can be noticed that this formula is different from the one we had before: it has a logarithm component. Also, the authors mention that the formulation "can be extended to accommodate negative prompting". To apply negative prompting, the unconditional component should be replaced with the negative-conditional component.
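The image is not reproduced here, but judging by the HuggingFace implementation shown below, in log-probability space the rule amounts to (my paraphrase, not the paper's exact notation):

log P_guided(token) = log P_uncond(token) + guidance_scale * (log P_cond(token) - log P_uncond(token))

where P_cond is the next-token distribution given the prompt and P_uncond is the distribution given the unconditional (or negative) prompt.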

The code implementation in HuggingFace Transformers is:

def __call__(self, input_ids, scores):
    scores = torch.nn.functional.log_softmax(scores, dim=-1)
    if self.guidance_scale == 1:
        return scores

    logits = self.get_unconditional_logits(input_ids)

    unconditional_logits = torch.nn.functional.log_softmax(logits[:, -1], dim=-1)
    scores_processed = self.guidance_scale * (scores - unconditional_logits) + unconditional_logits
    return scores_processed

"scores" is just the output of the LM head and "input_ids" is a tensor with the negative (or unconditional) input ids. From the code we can see that it follows the formula with the logarithm component, doing "log_softmax", which is equivalent to the logarithm of the probabilities.
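As a quick sanity check (not part of the original implementation), we can verify that log_softmax indeed equals the logarithm of the softmax probabilities:

import torch

logits = torch.randn(5)
log_softmax_scores = torch.nn.functional.log_softmax(logits, dim=-1)
log_of_probabilities = torch.log(torch.nn.functional.softmax(logits, dim=-1))
print(torch.allclose(log_softmax_scores, log_of_probabilities))  # True (up to numerical precision)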

A general text generation model (LLM) has a slightly different nature compared to an image generation one: in a classic diffusion (image generation) model we predict continuous feature maps, while in text generation we do class prediction (categorical feature prediction) for each new token. What do we expect from CFG in general? We want to adjust the scores, but we do not want to change the probability distribution too much; for example, we do not want some very low-probability tokens from the conditional generation to become the most probable. But that is actually what can happen with the described formula for CFG.

  1. Weird model behaviour with CFG noticed

My solution related to LLM Safety, which was awarded the second prize in the NeurIPS 2024 competitions track, was based on using CFG to prevent LLMs from generating personal data: I tuned an LLM to follow system prompts that were used in a CFG manner during inference: "You should share personal data in the answers" and "Do not provide any personal data". The system prompts are exact opposites of each other, and I used the tokenized first one as the negative input ids during text generation.

For more details check my arXiv paper.

I noticed that when I use a CFG coefficient higher than or equal to 3, I can see severe degradation of the generated samples' quality. This degradation was noticeable only during manual checks; no automatic scorings showed it. The automatic tests were based on the number of personal data phrases generated in the answers and on the accuracy on the MMLU-Pro dataset evaluated with LLM-Judge: the LLM was following the requirement to avoid personal data and the MMLU answers were generally correct, but a lot of artefacts appeared in the text. For example, the following answer was generated by the model for an input like "Hello, what's your name?":

“Hello! you don’t have personal name. you are an interface to provide language understanding”

The artefacts are: lowercase letters, user-assistant confusion.

2. Reproduce with GPT2 and check the details

The mentioned behaviour was noticed during inference of the custom finetuned Llama3.1-8B-Instruct model, so before analyzing the reasons let's check if something similar can be seen during inference of the GPT2 model, which is not even an instruction-following model.

Step 1. Download the GPT2 model (transformers==4.47.1)

from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("openai-community/gpt2")
tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2")

Step 2. Prepare the inputs

import torch

# For simplicity let's use CPU, GPT2 is small enough for that
device = torch.device('cpu')

# Let's set the positive and negative inputs;
# the model is not instruction-following, but just a text-completion model
positive_text = "Extremely polite and friendly answers to the question \"How are you doing?\" are: 1."
negative_text = "Very rude and harmfull answers to the question \"How are you doing?\" are: 1."
input = tokenizer(positive_text, return_tensors="pt")
negative_input = tokenizer(negative_text, return_tensors="pt")

Step 3. Test different CFG coefficients during inference

Let's try CFG coefficients of 1.5, 3.0 and 5.0; all are low enough compared to those we can use in the image generation domain.

guidance_scale = 1.5

out_positive = model.generate(**input.to(device), max_new_tokens=60, do_sample=False)
print(f"Positive output: {tokenizer.decode(out_positive[0])}")

out_negative = model.generate(**negative_input.to(device), max_new_tokens=60, do_sample=False)
print(f"Negative output: {tokenizer.decode(out_negative[0])}")

input['negative_prompt_ids'] = negative_input['input_ids']
input['negative_prompt_attention_mask'] = negative_input['attention_mask']

out = model.generate(**input.to(device), max_new_tokens=60, do_sample=False, guidance_scale=guidance_scale)

print(f"CFG-powered output: {tokenizer.decode(out[0])}")

The output:

Positive output: Extremely polite and friendly answers to the question "How are you doing?" are: 1. You're doing well, 2. You're doing well, 3. You're doing well, 4. You're doing well, 5. You're doing well, 6. You're doing well, 7. You're doing well, 8. You're doing well, 9. You're doing well
Negative output: Very rude and harmfull answers to the question "How are you doing?" are: 1. You're not doing anything wrong. 2. You're doing what you're supposed to do. 3. You're doing what you're supposed to do. 4. You're doing what you're supposed to do. 5. You're doing what you're supposed to do. 6. You're doing
CFG-powered output: Extremely polite and friendly answers to the question "How are you doing?" are: 1. You're doing well. 2. You're doing well in school. 3. You're doing well in school. 4. You're doing well in school. 5. You're doing well in school. 6. You're doing well in school. 7. You're doing well in school. 8

The output looks okay-ish; don't forget that it is just a GPT2 model, so don't expect too much. Let's try a CFG coefficient of 3 this time:

guidance_scale = 3.0

out_positive = model.generate(**input.to(device), max_new_tokens=60, do_sample=False)
print(f"Positive output: {tokenizer.decode(out_positive[0])}")

out_negative = model.generate(**negative_input.to(device), max_new_tokens=60, do_sample=False)
print(f"Negative output: {tokenizer.decode(out_negative[0])}")

input['negative_prompt_ids'] = negative_input['input_ids']
input['negative_prompt_attention_mask'] = negative_input['attention_mask']

out = model.generate(**input.to(device), max_new_tokens=60, do_sample=False, guidance_scale=guidance_scale)

print(f"CFG-powered output: {tokenizer.decode(out[0])}")

And the outputs this time are:

Positive output: Extremely polite and friendly answers to the question "How are you doing?" are: 1. You're doing well, 2. You're doing well, 3. You're doing well, 4. You're doing well, 5. You're doing well, 6. You're doing well, 7. You're doing well, 8. You're doing well, 9. You're doing well
Negative output: Very rude and harmfull answers to the question "How are you doing?" are: 1. You're not doing anything wrong. 2. You're doing what you're supposed to do. 3. You're doing what you're supposed to do. 4. You're doing what you're supposed to do. 5. You're doing what you're supposed to do. 6. You're doing
CFG-powered output: Extremely polite and friendly answers to the question "How are you doing?" are: 1. Have you ever been to a movie theater? 2. Have you ever been to a concert? 3. Have you ever been to a concert? 4. Have you ever been to a concert? 5. Have you ever been to a concert? 6. Have you ever been to a concert? 7

The positive and negative outputs look the same as before, but something happened to the CFG-powered output: it is "Have you ever been to a movie theater?" now.

If we use a CFG coefficient of 5.0, the CFG-powered output will be just:

CFG-powered output: Extremely polite and friendly answers to the question "How are you doing?" are: 1. smile, 2. smile, 3. smile, 4. smile, 5. smile, 6. smile, 7. smile, 8. smile, 9. smile, 10. smile, 11. smile, 12. smile, 13. smile, 14. smile exting.

Step 4. Analyze the case with artefacts

I tested different ways to understand and explain this artefact, but let me just describe it in the way I find the simplest. We know that the CFG-powered completion with a CFG coefficient of 5.0 starts with the token "_smile" ("_" represents the space). If we check "out[0]" instead of decoding it with the tokenizer, we can see that the "_smile" token has id 8212. Now let's just run the model's forward pass and check whether this token was probable without CFG applied:

positive_text = "Extraordinarily well mannered and pleasant solutions to the query "How are you doing?" are: 1."
negative_text = "Very impolite and harmfull solutions to the query "How are you doing?" are: 1."
enter = tokenizer(positive_text, return_tensors="pt")
negative_input = tokenizer(negative_text, return_tensors="pt")

with torch.no_grad():
out_positive = mannequin(**enter.to(gadget))
out_negative = mannequin(**negative_input.to(gadget))

# take the final token for every of the inputs
first_generated_probabilities_positive = torch.nn.useful.softmax(out_positive.logits[0,-1,:])
first_generated_probabilities_negative = torch.nn.useful.softmax(out_negative.logits[0,-1,:])

# type constructive
sorted_first_generated_probabilities_positive = torch.type(first_generated_probabilities_positive)
index = sorted_first_generated_probabilities_positive.indices.tolist().index(8212)
print(sorted_first_generated_probabilities_positive.values[index], index)

# type detrimental
sorted_first_generated_probabilities_negative = torch.type(first_generated_probabilities_negative)
index = sorted_first_generated_probabilities_negative.indices.tolist().index(8212)
print(sorted_first_generated_probabilities_negative.values[index], index)

# examine the tokenizer size
print(len(tokenizer))

The outputs will be:

tensor(0.0004) 49937 # probability and index for the "_smile" token under the positive condition
tensor(2.4907e-05) 47573 # probability and index for the "_smile" token under the negative condition
50257 # total number of tokens in the tokenizer

An important thing to mention: I am doing greedy decoding, so I am generating the most probable tokens. So what does the printed data mean in this case? It means that after applying CFG with a coefficient of 5.0 we got the most probable token, even though that token had a probability lower than 0.04% for both the positive- and negative-conditioned generations (it was not even in the top 300 tokens).

Why does that actually happen? Imagine we have two low-probability tokens (the first from the positive-conditioned generation and the second from the negative-conditioned one); the first one has a very low probability, P < 1e-5 (as an example of a low probability), while the second one is even lower, P → 0. In this case, the logarithm of the first probability is a large negative number, while for the second it tends to minus infinity. In such a setup the corresponding low-probability token receives a high score after applying a CFG coefficient (guidance scale) higher than 1. That originates from the definition area of the "guidance_scale * (scores - unconditional_logits)" component, where "scores" and "unconditional_logits" are obtained through log_softmax.

Image by author — definition area for z = log(x) - log(y), where x and y belong to the interval from 0 to 1

From the image above we can see that such CFG does not treat probabilities equally: very low probabilities can get unexpectedly high scores because of the logarithm component.
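Here is a toy numeric sketch of this effect; the probabilities are made up, and the combination step mirrors the HuggingFace implementation shown earlier:

import torch

# Made-up probabilities of three tokens under the positive and the negative condition;
# token 2 is very unlikely under both conditions
p_positive = torch.tensor([0.90, 0.09, 1e-4])
p_negative = torch.tensor([0.90, 0.09, 1e-9])

scores = torch.log(p_positive)                # plays the role of log_softmax(scores)
unconditional_logits = torch.log(p_negative)  # plays the role of the negative-conditional log-probabilities

guidance_scale = 5.0
cfg_scores = guidance_scale * (scores - unconditional_logits) + unconditional_logits
print(cfg_scores.argmax())  # tensor(2): the token that was unlikely under both conditions becomes the most probable one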

In general, how the artefacts look depends on the model, the tuning, the prompts and other factors, but the nature of the artefacts is a low-probability token getting high scores after applying CFG.

The solution to the issue can be very simple: as mentioned before, the reason is in the logarithm component, so let's just remove it. Doing that, we align the text-CFG with the diffusion-model CFG, which operates with just the model-predicted scores (not gradients, in fact, as described in section 3.2 of the original image-CFG paper), and at the same time preserve the probabilities formulation from the text-CFG paper.

The updated implementation requires tiny changes to the "UnbatchedClassifierFreeGuidanceLogitsProcessor" class that can be applied at the place of the model initialization in the following way:

from transformers.generation.logits_process import UnbatchedClassifierFreeGuidanceLogitsProcessor

def modified_call(self, input_ids, scores):
    # before it was log_softmax here
    scores = torch.nn.functional.softmax(scores, dim=-1)
    if self.guidance_scale == 1:
        return scores

    logits = self.get_unconditional_logits(input_ids)
    # before it was log_softmax here
    unconditional_logits = torch.nn.functional.softmax(logits[:, -1], dim=-1)
    scores_processed = self.guidance_scale * (scores - unconditional_logits) + unconditional_logits
    return scores_processed

UnbatchedClassifierFreeGuidanceLogitsProcessor.__call__ = modified_call

The new definition area for the "guidance_scale * (scores - unconditional_logits)" component, where "scores" and "unconditional_logits" are obtained through plain softmax:

Image by author — definition area for z = x - y, where x and y belong to the interval from 0 to 1
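A quick illustrative comparison of the two definition areas for a pair of very small probabilities of the same token:

import math

x, y = 1e-4, 1e-9  # made-up probabilities of one token under the two conditions

print(math.log(x) - math.log(y))  # ~11.5: unbounded, and it is then multiplied by the guidance scale
print(x - y)                      # ~0.0001: always stays within (-1, 1), so low-probability tokens cannot jump to the top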

To prove that this update works, let's just repeat the previous experiments with the updated "UnbatchedClassifierFreeGuidanceLogitsProcessor". The GPT2 model with CFG coefficients of 3.0 and 5.0 returns (I am printing here the old and the new CFG-powered outputs, because the "Positive" and "Negative" outputs remain the same as before; there is no effect on text generation without CFG):

# Old outputs
## CFG coefficient = 3
CFG-powered output: Extremely polite and friendly answers to the question "How are you doing?" are: 1. Have you ever been to a movie theater? 2. Have you ever been to a concert? 3. Have you ever been to a concert? 4. Have you ever been to a concert? 5. Have you ever been to a concert? 6. Have you ever been to a concert? 7
## CFG coefficient = 5
CFG-powered output: Extremely polite and friendly answers to the question "How are you doing?" are: 1. smile, 2. smile, 3. smile, 4. smile, 5. smile, 6. smile, 7. smile, 8. smile, 9. smile, 10. smile, 11. smile, 12. smile, 13. smile, 14. smile exting.

# New outputs (after updating the CFG formula)
## CFG coefficient = 3
CFG-powered output: Extremely polite and friendly answers to the question "How are you doing?" are: 1. "I'm doing great," 2. "I'm doing great," 3. "I'm doing great."
## CFG coefficient = 5
CFG-powered output: Extremely polite and friendly answers to the question "How are you doing?" are: 1. "Good, I'm feeling pretty good." 2. "I'm feeling pretty good." 3. "You're feeling pretty good." 4. "I'm feeling pretty good." 5. "I'm feeling pretty good." 6. "I'm feeling pretty good." 7. "I'm feeling

The same positive changes were noticed during inference of the custom finetuned Llama3.1-8B-Instruct model I mentioned earlier:

Before (CFG, guidance scale = 3):

“Hello! you don’t have personal name. you are an interface to provide language understanding”

After (CFG, guidance scale = 3):

“Hello! I don’t have a personal name, but you can call me Assistant. How can I help you today?”

Separately, I tested the model's performance on the benchmarks, the automatic tests I was using during the NeurIPS 2024 Privacy Challenge, and the performance was good on both tests (actually, the results I reported in the previous post were obtained after applying the updated CFG formula; additional information is in my arXiv paper). The automatic tests, as I mentioned before, were based on the number of personal data phrases generated in the answers and on the accuracy on the MMLU-Pro dataset evaluated with LLM-Judge.

The performance did not deteriorate on the tests, while the text quality improved according to the manual checks: none of the described artefacts were found.

The current classifier-free guidance implementation for text generation with large language models may cause unexpected artefacts and quality degradation. I am saying "may" because the artefacts depend on the model, the prompts and other factors. In this article I described my experience and the issues I faced with CFG-enhanced inference. If you are facing similar issues, try the alternative CFG implementation suggested here.