How to Improve LLM Responses With Better Sampling Parameters | by Dr. Leon Eversberg | Sep, 2024

A deep dive into stochastic decoding with temperature, top_p, top_k, and min_p

Example Python code taken from the OpenAI Python SDK where the chat completion API is called with the parameters temperature and top_p.
When calling the OpenAI API with the Python SDK, have you ever wondered what exactly the temperature and top_p parameters do?
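As a quick reminder of where these parameters appear, here is a minimal sketch of such a call using the openai Python SDK (v1.x); the model name, prompt, and parameter values are placeholders for illustration.

from openai import OpenAI

client = OpenAI()

# Both temperature and top_p shape how the next token is sampled.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": "Explain sampling in one sentence."}],
    temperature=0.7,  # scales the sharpness of the token probability distribution
    top_p=0.9,        # nucleus sampling: keep only the top 90% of cumulative probability mass
)

print(response.choices[0].message.content)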

When you ask a Large Language Model (LLM) a question, the model outputs a probability for every possible token in its vocabulary.

After sampling a token from this probability distribution, we can append the chosen token to our input prompt so that the LLM can output the probabilities for the next token.
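To make this loop concrete, here is a toy sketch of one sampling step; the vocabulary and probabilities are made up purely for illustration.

import numpy as np

# Toy next-token distribution over a tiny vocabulary (illustrative values only)
vocab = ["sat", "ran", "slept", "barked", "meowed"]
probs = np.array([0.45, 0.25, 0.15, 0.10, 0.05])

prompt_tokens = ["The", "cat"]

# Sample one token from the distribution and append it to the prompt,
# so the model can then predict probabilities for the following token.
next_token = np.random.choice(vocab, p=probs)
prompt_tokens.append(next_token)

print(prompt_tokens)  # e.g. ['The', 'cat', 'sat']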

This sampling process can be controlled by parameters such as the well-known temperature and top_p.

In this article, I’ll explain and visualize the sampling strategies that define the output behavior of LLMs. By understanding what these parameters do and setting them according to our use case, we can improve the output generated by LLMs.

For this article, I’ll use vLLM as the inference engine and Microsoft’s new Phi-3.5-mini-instruct model with AWQ quantization. To run this model locally, I’m using my laptop’s NVIDIA GeForce RTX 2060 GPU.
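Because vLLM exposes an OpenAI-compatible server, the same SDK can be pointed at the local model. The sketch below rests on assumptions about the local setup: the base_url, port, API key value, and the exact AWQ model identifier are placeholders to adjust to your own vLLM server.

from openai import OpenAI

# Point the OpenAI client at a locally running vLLM OpenAI-compatible server.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="<your-phi-3.5-mini-instruct-awq-checkpoint>",  # placeholder model id
    messages=[{"role": "user", "content": "Hello!"}],
    temperature=1.0,
)

print(response.choices[0].message.content)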

Table of Contents

· Understanding Sampling With Logprobs
LLM Decoding Theory
Retrieving Logprobs With the OpenAI Python SDK
· Greedy Decoding
· Temperature
· Top-k Sampling
· Top-p Sampling
· Combining Top-p