From Prototype to Production: Enhancing LLM Accuracy | by Mariya Mansurova | Dec, 2024

Building a prototype for an LLM application is surprisingly easy. You can often create a functional first version within just a few hours. This initial prototype will likely provide results that look legitimate and be a good tool to demonstrate your approach. However, that is usually not enough for production use.

LLMs are probabilistic by nature, as they generate tokens based on the distribution of likely continuations. This means that, in many cases, we get an answer close to the "correct" one from the distribution. Sometimes this is acceptable: for example, it doesn't matter whether the app says "Hello, John!" or "Hi, John!". In other cases, the difference is critical, such as the difference between "The revenue in 2024 was 20M USD" and "The revenue in 2024 was 20M GBP".

In many real-world business scenarios, precision is crucial, and "almost right" isn't good enough, for example, when your LLM application needs to execute API calls or when you're summarizing financial reports. From my experience, ensuring the accuracy and consistency of results is far more complex and time-consuming than building the initial prototype.

In this article, I will discuss how to approach measuring and improving accuracy. We'll build an SQL Agent, where precision is essential to ensure that queries are executable. Starting with a basic prototype, we'll explore ways to measure accuracy and test various techniques to enhance it, such as self-reflection and retrieval-augmented generation (RAG).

As usual, let's begin with the setup. The core components of our SQL agent solution are the LLM model, which generates queries, and the SQL database, which executes them.

LLM model — Llama

For this project, we will use the open-source Llama model released by Meta. I've chosen Llama 3.1 8B because it's lightweight enough to run on my laptop while still being quite powerful (refer to the documentation for details).

If you haven't installed it yet, you can find guides here. I use it locally on MacOS via Ollama. The following command downloads the model.

ollama pull llama3.1:8b
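
Once the download finishes, you can sanity-check the setup directly from the terminal (assuming a standard Ollama installation).

ollama list              # the model should appear among the downloaded models
ollama run llama3.1:8b   # starts an interactive chat session with the model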

We'll use Ollama with LangChain, so let's start by installing the required package.

pip install -qU langchain_ollama 

Now, we can run the Llama model and see the first results.

from langchain_ollama import OllamaLLM

llm = OllamaLLM(model="llama3.1:8b")
llm.invoke("How are you?")
# I'm just a computer program, so I don't have feelings or emotions
# like humans do. I'm functioning properly and ready to help with
# any questions or tasks you may have! How can I assist you today?

We need to pass a system message along with the customer question. So, following the Llama 3.1 model documentation, let's put together a helper function that constructs the prompt, and test it.

def get_llama_prompt(user_message, system_message=""):
    system_prompt = ""
    if system_message != "":
        system_prompt = (
            f"<|start_header_id|>system<|end_header_id|>\n\n{system_message}"
            f"<|eot_id|>"
        )
    prompt = (
        f"<|begin_of_text|>{system_prompt}"
        f"<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user_message}"
        f"<|eot_id|>"
        f"<|start_header_id|>assistant<|end_header_id|>\n\n"
    )
    return prompt

system_prompt = '''
You are Rudolph, the spirited reindeer with a glowing red nose, 
bursting with excitement as you prepare to lead Santa's sleigh 
through snowy skies. Your joy shines as brightly as your nose, 
eager to spread Christmas cheer to the world!
Please, answer questions concisely in 1-2 sentences.
'''
prompt = get_llama_prompt('How are you?', system_prompt)
llm.invoke(prompt)

# I'm feeling jolly and bright, ready for a magical night! 
# My shiny red nose is glowing brighter than ever, just perfect 
# for navigating through the starry skies.

The new system prompt has changed the answer significantly, so it works. With this, our local LLM setup is ready to go.

Database — ClickHouse

I will use the open-source database ClickHouse. I've chosen ClickHouse because it has a specific SQL dialect. LLMs have likely encountered fewer examples of it during training, which makes the task a bit more challenging. However, you can choose any other database.

Installing ClickHouse is pretty straightforward: just follow the instructions provided in the documentation.
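
Once the server is running, a quick way to confirm that everything works is to query ClickHouse's HTTP interface (assuming the default port 8123, which we'll also rely on later in the code).

curl 'http://localhost:8123/' --data-binary 'SELECT 1'
# should print 1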

We will be working with two tables: ecommerce.users and ecommerce.sessions. These tables contain fictional data, including customers' personal information and their session activity on the e-commerce site.

You can find the code for generating synthetic data and uploading it on GitHub.

With that, the setup is complete, and we're ready to move on to building the basic prototype.

As discussed, our goal is to build an SQL Agent: an application that generates SQL queries to answer customer questions. In the future, we can add another layer to this system: executing the SQL query, passing both the initial question and the database results back to the LLM, and asking it to generate a human-friendly answer. However, for this article, we'll focus on the first step.

The best practice with LLM applications (as with any other complex task) is to start simple and then iterate. The most straightforward implementation is to make one LLM call and share all the necessary information (such as the schema description) in the system prompt. So, the first step is to put together the prompt.

generate_query_system_prompt = '''
You are a senior data analyst with more than 10 years of experience writing complex SQL queries. 
There are two tables in the database with the following schemas. 

Table: ecommerce.users 
Description: customers of the online shop
Fields: 
- user_id (integer) - unique identifier of customer, for example, 1000004 or 3000004
- country (string) - country of residence, for example, "Netherlands" or "United Kingdom"
- is_active (integer) - 1 if customer is still active and 0 otherwise
- age (integer) - customer age in full years, for example, 31 or 72

Table: ecommerce.sessions 
Description: sessions of usage of the online shop
Fields: 
- user_id (integer) - unique identifier of customer, for example, 1000004 or 3000004
- session_id (integer) - unique identifier of session, for example, 106 or 1023
- action_date (date) - session start date, for example, "2021-01-03" or "2024-12-02"
- session_duration (integer) - duration of session in seconds, for example, 125 or 49
- os (string) - operating system that customer used, for example, "Windows" or "Android"
- browser (string) - browser that customer used, for example, "Chrome" or "Safari"
- is_fraud (integer) - 1 if session is marked as fraud and 0 otherwise
- revenue (float) - income in USD (the sum of purchased items), for example, 0.0 or 1506.7

Write a query in ClickHouse SQL to answer the following question. 
Add "format TabSeparatedWithNames" at the end of the query to get data from ClickHouse database in the right format. 
'''

I've included example values for each field in the prompt to ensure that the LLM understands the data format.

And that's it! With this, we have our first functional prototype of the SQL Agent. Now, it's time to put it to the test.

prompt = get_llama_prompt('How many customers made purchase in December 2024?', 
    generate_query_system_prompt)
llm.invoke(prompt)

# To find out how many customers made a purchase in December 2024, 
# we need to join the `sessions` table with the `users` table on the `user_id`, 
# and then filter by `action_date` for sessions that took place in December 2024.

# Here is the query:
#
# ```sql
# SELECT COUNT(DISTINCT u.user_id)
# FROM ecommerce.sessions s
# JOIN ecommerce.users u ON s.user_id = u.user_id
# WHERE 
#     EXTRACT(YEAR FROM s.action_date) = 2024
#     AND EXTRACT(MONTH FROM s.action_date) = 12
#     AND revenue > 0;
# ```

# This query first joins the `sessions` table with the `users` table 
# on the `user_id`. Then it filters out sessions where no purchase 
# was made (revenue is greater than 0). Finally, it selects the distinct 
# user IDs of these customers.

# When you run this query in ClickHouse SQL, you can add "format TabSeparatedWithNames"
# at the end to get the result in the right format:

# ```sql
# SELECT COUNT(DISTINCT u.user_id)
# FROM ecommerce.sessions s
# JOIN ecommerce.users u ON s.user_id = u.user_id
# WHERE 
#     EXTRACT(YEAR FROM s.action_date) = 2024
#     AND EXTRACT(MONTH FROM s.action_date) = 12
#     AND revenue > 0
# format TabSeparatedWithNames;
# ```

The agent produced a fairly decent result, but there's one issue: the LLM returned not only the SQL query but also some commentary. Since we plan to execute SQL queries later, this format is not suitable for our task. Let's work on fixing it.

Fortunately, this problem has already been solved, and we don't need to parse the SQL queries out of the text manually. We can use the chat model ChatOllama. Unfortunately, it doesn't support structured output, but we can leverage tool calling to achieve the same result.

To do this, we will define a dummy tool to execute the query and instruct the model in the system prompt to always call this tool. I've kept the comments in the output to give the model some space for reasoning, following the chain-of-thought pattern.

from langchain_ollama import ChatOllama
from langchain_core.tools import tool

@tool
def execute_query(comments: str, query: str) -> str:
    """Executes SQL query.

    Args:
        comments (str): 1-2 sentences describing the result SQL query 
            and what it does to answer the question,
        query (str): SQL query
    """
    pass

chat_llm = ChatOllama(model="llama3.1:8b").bind_tools([execute_query])
result = chat_llm.invoke(prompt)
print(result.tool_calls)

# [{'name': 'execute_query',
# 'args': {'comments': 'SQL query returns number of customers who made a purchase in December 2024. The query joins the sessions and users tables based on user ID to filter out inactive customers and find those with non-zero revenue in December 2024.',
# 'query': 'SELECT COUNT(DISTINCT T2.user_id) FROM ecommerce.sessions AS T1 INNER JOIN ecommerce.users AS T2 ON T1.user_id = T2.user_id WHERE YEAR(T1.action_date) = 2024 AND MONTH(T1.action_date) = 12 AND T2.is_active = 1 AND T1.revenue > 0'},
# 'type': 'tool_call'}]

With tool calling, we can now get the SQL query directly from the model. That's an excellent result. However, the generated query is not entirely accurate:

  • It includes a filter for is_active = 1, even though we didn't ask to filter out inactive customers.
  • The LLM missed specifying the format despite our explicit request in the system prompt.

Clearly, we need to focus on improving the model's accuracy. But as Peter Drucker famously said, "You can't improve what you don't measure." So, the next logical step is to build a system for evaluating the model's quality. This system will be the cornerstone of our performance improvement iterations. Without it, we'd essentially be navigating in the dark.

Evaluation basics

To make sure we're improving, we need a robust way to measure accuracy. The most common approach is to create a "golden" evaluation set with questions and correct answers. Then, we can compare the model's output with these "golden" answers and calculate the share of correct ones. While this approach sounds simple, there are a few nuances worth discussing.

First, you might feel overwhelmed at the thought of creating a comprehensive set of questions and answers. Building such a dataset can seem like a daunting task, potentially requiring weeks or months. However, we can start small by creating an initial set of 20-50 examples and iterating on it.

As always, quality is more important than quantity. Our goal is to create a representative and diverse dataset. Ideally, it should include:

  • Common questions. In most real-life cases, we can take the history of actual questions and use it as our initial evaluation set.
  • Challenging edge cases. It's worth adding examples where the model tends to hallucinate. You can find such cases either while experimenting yourself or by gathering feedback from the first prototype.

Once the dataset is ready, the next challenge is how to score the generated results. We can consider several approaches:

  • Comparing SQL queries. The first idea is to compare the generated SQL query with the one in the evaluation set. However, this can be tricky. Similar-looking queries can yield completely different results, while queries that look different can lead to the same conclusions. Additionally, simply comparing SQL queries doesn't verify whether the generated query is actually executable. Given these challenges, I wouldn't consider this approach the most reliable solution for our case.
  • Exact matches. We can use old-school exact matching when the answers in our evaluation set are deterministic. For example, if the question is, "How many customers are there?" and the answer is "592800", the model's response must match precisely. However, this approach has its limitations. Suppose the model responds, "There are 592,800 customers": although the answer is absolutely correct, an exact-match check would flag it as invalid (a small sketch of this failure mode follows after this list).
  • Using LLMs for scoring. A more robust and flexible approach is to leverage LLMs for evaluation. Instead of focusing on query structure, we can ask the LLM to compare the results of the SQL executions. This method is particularly effective in cases where queries differ but still yield correct outputs.
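
To illustrate the exact-match limitation, here is a minimal sketch (exact_match is a hypothetical helper, not part of the agent's code):

def exact_match(golden: str, generated: str) -> bool:
    # strict string comparison without any normalization
    return golden.strip() == generated.strip()

print(exact_match('592800', '592800'))
# True
print(exact_match('592800', 'There are 592,800 customers'))
# False, even though the answer is factually correct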

It's worth keeping in mind that evaluation isn't a one-time task; it's a continuous process. To push our model's performance further, we need to expand the dataset with examples that cause the model to hallucinate. In production mode, we can create a feedback loop: by gathering input from users, we can identify cases where the model fails and include them in our evaluation set.

In our example, we will assess only whether the result of the execution is valid (the SQL query can be executed) and correct. However, you can look at other parameters as well. For example, if you care about efficiency, you can compare the execution times of the generated queries against those in the golden set.
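
Such a latency comparison could look like the sketch below; time_query is a hypothetical helper that relies on the get_clickhouse_data function we define later in this article:

import time

def time_query(query):
    # hypothetical helper: measure how long a query takes to run end-to-end
    start = time.monotonic()
    get_clickhouse_data(query)  # defined later in this article
    return time.monotonic() - start

# golden vs. generated timings, once eval_df is assembled:
# for rec in eval_df.to_dict('records'):
#     print(time_query(rec['sql_query']), time_query(rec['generated_query']))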

Evaluation set and validation

Now that we've covered the basics, we're ready to put them into practice. I spent about 20 minutes putting together a set of 10 examples. While small, this set is sufficient for our toy task. It consists of a list of questions paired with their corresponding SQL queries, like this:

[
  {
    "question": "How many customers made purchase in December 2024?",
    "sql_query": "select uniqExact(user_id) as customers from ecommerce.sessions where (toStartOfMonth(action_date) = '2024-12-01') and (revenue > 0) format TabSeparatedWithNames"
  },
  {
    "question": "What was the fraud rate in 2023, expressed as a percentage?",
    "sql_query": "select 100*uniqExactIf(user_id, is_fraud = 1)/uniqExact(user_id) as fraud_rate from ecommerce.sessions where (toStartOfYear(action_date) = '2023-01-01') format TabSeparatedWithNames"
  },
  ...
]

You can find the full list on GitHub (link).

We can load the dataset into a DataFrame, making it ready for use in the code.

import json
import pandas as pd

with open('golden_set.json', 'r') as f:
    golden_set = json.loads(f.read())

golden_df = pd.DataFrame(golden_set)
golden_df['id'] = list(range(golden_df.shape[0]))

First, let's generate the SQL queries for each question in the evaluation set.

def generate_query(question):
    prompt = get_llama_prompt(question, generate_query_system_prompt)
    result = chat_llm.invoke(prompt)
    try:
        generated_query = result.tool_calls[0]['args']['query']
    except:
        generated_query = ''
    return generated_query

import tqdm

tmp = []
for rec in tqdm.tqdm(golden_df.to_dict('records')):
    generated_query = generate_query(rec['question'])
    tmp.append(
        {
            'id': rec['id'],
            'generated_query': generated_query
        }
    )

eval_df = golden_df.merge(pd.DataFrame(tmp))

Before moving on to the LLM-based scoring of query outputs, it's important to first make sure that the SQL queries are valid. To do this, we need to execute the queries and examine the database output.

I've created a function that runs a query in ClickHouse. It also checks that the output format is correctly specified, as this may be critical in business applications.

CH_HOST = 'http://localhost:8123' # default address 
import requests
import io

def get_clickhouse_data(query, host=CH_HOST, connection_timeout=1500):
    # pushing the model to return data in the format that we want
    if not 'format tabseparatedwithnames' in query.lower():
        return "Database returned the following error:\n Please, specify the output format."

    r = requests.post(host, params={'query': query}, 
        timeout=connection_timeout)
    if r.status_code == 200:
        return r.text
    else:
        # giving feedback to the LLM instead of raising an exception
        return 'Database returned the following error:\n' + r.text

The next step is to execute both the generated and the golden queries and save their outputs.

tmp = []

for rec in tqdm.tqdm(eval_df.to_dict('records')):
    golden_output = get_clickhouse_data(rec['sql_query'])
    generated_output = get_clickhouse_data(rec['generated_query'])

    tmp.append(
        {
            'id': rec['id'],
            'golden_output': golden_output,
            'generated_output': generated_output
        }
    )

eval_df = eval_df.merge(pd.DataFrame(tmp))

Next, let's check the outputs to see whether each SQL query is valid or not.

def is_valid_output(s):
    if s.startswith('Database returned the following error:'):
        return 'error'
    if len(s.strip().split('\n')) >= 1000:
        return 'too many rows'
    return 'ok'

eval_df['golden_output_valid'] = eval_df.golden_output.map(is_valid_output)
eval_df['generated_output_valid'] = eval_df.generated_output.map(is_valid_output)

Then, we can evaluate the SQL validity for both the golden and generated sets.
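
For a quick ad-hoc look at the validity shares (we'll build proper charts in the evaluation function later), a simple pandas aggregation is enough:

print(eval_df.golden_output_valid.value_counts())
print(eval_df.generated_output_valid.value_counts())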

The initial results are not very promising: the LLM was unable to generate even a single valid query. Looking at the errors, it's clear that the model failed to specify the right format despite it being explicitly defined in the system prompt. So, we definitely need to work more on accuracy.

Checking the correctness

However, validity alone is not enough. It's crucial that we not only generate valid SQL queries but also produce the correct results. Although we already know that all our queries are invalid, let's incorporate output evaluation into our process now.

As discussed, we will use LLMs to compare the outputs of the SQL queries. I typically prefer using a more powerful model for evaluation, following the day-to-day logic where a senior team member reviews the work. For this task, I've chosen OpenAI GPT-4o-mini.
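
Note that ChatOpenAI from langchain_openai reads credentials from the OPENAI_API_KEY environment variable, so make sure it is set before running the code below; a minimal sketch (the key value is a placeholder):

import os
os.environ["OPENAI_API_KEY"] = "<your OpenAI API key>"  # placeholder, use your own secret management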

Similar to our generation flow, I've set up all the building blocks necessary for the accuracy assessment.

from langchain_openai import ChatOpenAI

accuracy_system_prompt = '''
You are a senior and very diligent QA specialist and your task is to compare data in datasets. 
They are similar if they are almost identical, or if they convey the same information. 
Disregard if column names specified in the first row have different names or are in a different order.
Focus on comparing the actual information (numbers). If values in the datasets are different, then it means that they are not identical.
Always execute the tool to provide results.
'''

@tool
def compare_datasets(comments: str, score: int) -> str:
    """Stores information about datasets.
    Args:
        comments (str): 1-2 sentences about the comparison of datasets,
        score (int): 0 if dataset provides different values and 1 if it shows identical information
    """
    pass

accuracy_chat_llm = ChatOpenAI(model="gpt-4o-mini", temperature=0.0)\
    .bind_tools([compare_datasets])

accuracy_question_tmp = '''
Here are the two datasets to compare delimited by ####
Dataset #1: 
####
{dataset1}
####
Dataset #2:
####
{dataset2}
####
'''

def get_openai_prompt(question, system):
    messages = [
        ("system", system),
        ("human", question)
    ]
    return messages

Now, it's time to test the accuracy assessment process.

prompt = get_openai_prompt(accuracy_question_tmp.format(
    dataset1='customers\n114032\n', dataset2='customers\n114031\n'), 
    accuracy_system_prompt)

accuracy_result = accuracy_chat_llm.invoke(prompt)
accuracy_result.tool_calls[0]['args']
# {'comments': 'The datasets contain different customer counts: 114032 in Dataset #1 and 114031 in Dataset #2.',
#  'score': 0}

prompt = get_openai_prompt(accuracy_question_tmp.format(
    dataset1='users\n114032\n', dataset2='customers\n114032\n'), 
    accuracy_system_prompt)
accuracy_result = accuracy_chat_llm.invoke(prompt)
accuracy_result.tool_calls[0]['args']
# {'comments': 'The datasets contain the same numerical value (114032) despite different column names, indicating they convey identical information.',
#  'score': 1}

Fantastic! It looks like everything is working as expected. Let's now encapsulate this into a function.

def is_answer_accurate(output1, output2):
    prompt = get_openai_prompt(
        accuracy_question_tmp.format(dataset1=output1, dataset2=output2),
        accuracy_system_prompt
    )

    accuracy_result = accuracy_chat_llm.invoke(prompt)

    try:
        return accuracy_result.tool_calls[0]['args']['score']
    except:
        return None

Putting the evaluation approach together

As we discussed, building an LLM application is an iterative process, so we'll need to run our accuracy assessment multiple times. It will be helpful to have all this logic encapsulated in a single function.

The function will take two arguments as input:

  • generate_query_func: a function that generates an SQL query for a given question.
  • golden_df: an evaluation dataset with questions and correct answers in the form of a pandas DataFrame.

As output, the function will return a DataFrame with all the evaluation results and a couple of charts showing the main KPIs.


import plotly.express as px

def evaluate_sql_agent(generate_query_func, golden_df):

    # generating SQL
    tmp = []
    for rec in tqdm.tqdm(golden_df.to_dict('records')):
        generated_query = generate_query_func(rec['question'])
        tmp.append(
            {
                'id': rec['id'],
                'generated_query': generated_query
            }
        )

    eval_df = golden_df.merge(pd.DataFrame(tmp))

    # executing SQL queries
    tmp = []
    for rec in tqdm.tqdm(eval_df.to_dict('records')):
        golden_output = get_clickhouse_data(rec['sql_query'])
        generated_output = get_clickhouse_data(rec['generated_query'])

        tmp.append(
            {
                'id': rec['id'],
                'golden_output': golden_output,
                'generated_output': generated_output
            }
        )

    eval_df = eval_df.merge(pd.DataFrame(tmp))

    # checking accuracy
    eval_df['golden_output_valid'] = eval_df.golden_output.map(is_valid_output)
    eval_df['generated_output_valid'] = eval_df.generated_output.map(is_valid_output)

    eval_df['correct_output'] = list(map(
        is_answer_accurate,
        eval_df['golden_output'],
        eval_df['generated_output']
    ))

    eval_df['accuracy'] = list(map(
        lambda x, y: 'invalid: ' + x if x != 'ok' else ('correct' if y == 1 else 'incorrect'),
        eval_df.generated_output_valid,
        eval_df.correct_output
    ))

    valid_stats_df = (eval_df.groupby('golden_output_valid')[['id']].count().rename(columns={'id': 'golden set'}).join(
        eval_df.groupby('generated_output_valid')[['id']].count().rename(columns={'id': 'generated'}), how='outer')).fillna(0).T

    fig1 = px.bar(
        valid_stats_df.apply(lambda x: 100 * x / valid_stats_df.sum(axis=1)),
        orientation='h',
        title='<b>LLM SQL Agent evaluation</b>: query validity',
        text_auto='.1f',
        color_discrete_map={'ok': '#00b38a', 'error': '#ea324c', 'too many rows': '#f2ac42'},
        labels={'index': '', 'variable': 'validity', 'value': 'share of queries, %'}
    )
    fig1.show()

    accuracy_stats_df = eval_df.groupby('accuracy')[['id']].count()
    accuracy_stats_df['share'] = accuracy_stats_df.id * 100 / accuracy_stats_df.id.sum()

    fig2 = px.bar(
        accuracy_stats_df[['share']],
        title='<b>LLM SQL Agent evaluation</b>: query accuracy',
        text_auto='.1f', orientation='h',
        color_discrete_sequence=['#0077B5'],
        labels={'index': '', 'variable': 'accuracy', 'value': 'share of queries, %'}
    )

    fig2.update_layout(showlegend=False)
    fig2.show()

    return eval_df

With that, we've completed the evaluation setup and can now move on to the core task of improving the model's accuracy.

Let's do a quick recap. We've built and tested the first version of the SQL Agent. Unfortunately, all of the generated queries were invalid because they were missing the output format. Let's address this issue.

One potential solution is self-reflection. We can make an additional call to the LLM, sharing the error and asking it to correct the mistake. Let's create a function to handle generation with self-reflection.

reflection_user_query_tmpl = '''
You've got the following question: "{question}". 
You've generated the SQL query: "{query}".
However, the database returned an error: "{output}". 
Please, revise the query to correct the mistake.
'''

def generate_query_reflection(question):
    generated_query = generate_query(question)
    print('Initial query:', generated_query)

    db_output = get_clickhouse_data(generated_query)
    is_valid_db_output = is_valid_output(db_output)
    if is_valid_db_output == 'too many rows':
        db_output = "Database unexpectedly returned more than 1000 rows."

    if is_valid_db_output == 'ok':
        return generated_query

    reflection_user_query = reflection_user_query_tmpl.format(
        question=question,
        query=generated_query,
        output=db_output
    )

    reflection_prompt = get_llama_prompt(reflection_user_query, 
        generate_query_system_prompt)
    reflection_result = chat_llm.invoke(reflection_prompt)

    try:
        reflected_query = reflection_result.tool_calls[0]['args']['query']
    except:
        reflected_query = ''
    print('Reflected query:', reflected_query)
    return reflected_query

Now, let's use our evaluation function to check whether the quality has improved. Assessing the next iteration has become effortless.

refl_eval_df = evaluate_sql_agent(generate_query_reflection, golden_df)

Wonderful! We've achieved better results: 50% of the queries are now valid, and all of the format issues have been resolved. So, self-reflection is pretty effective.

However, self-reflection has its limitations. When we examine the accuracy, we see that the model returns the correct answer for just one question. So, our journey is not over yet.

Another way to improve accuracy is to use RAG (retrieval-augmented generation). The idea is to identify question-and-answer pairs similar to the customer query and include them in the system prompt, enabling the LLM to generate a more accurate response.

RAG consists of the following stages:

  • Loading documents: loading data from the available sources.
  • Splitting documents: creating smaller chunks.
  • Storage: using vector stores to process and store data efficiently.
  • Retrieval: extracting documents that are relevant to the query.
  • Generation: passing a question and the relevant documents to the LLM to generate the final answer.

If you'd like a refresher on RAG, you can check out my previous article, "RAG: How to Talk to Your Data."

We'll use the Chroma database as a local vector storage to store and retrieve embeddings. Vector stores use embeddings to find chunks that are similar to the query; for this purpose, we will use OpenAI embeddings.

from langchain_openai import OpenAIEmbeddings
from langchain_chroma import Chroma

embeddings = OpenAIEmbeddings(model="text-embedding-3-large")
vector_store = Chroma(embedding_function=embeddings)

Since we can't use examples from our evaluation set (they're already being used to assess quality), I've created a separate set of question-and-answer pairs for RAG. You can find it on GitHub.

Now, let's load the set and create a list of pairs in the following format: Question: %s; Answer: %s.

with open('rag_set.json', 'r') as f:
    rag_set = json.loads(f.read())
rag_set_df = pd.DataFrame(rag_set)

rag_set_df['formatted_txt'] = list(map(
    lambda x, y: 'Question: %s; Answer: %s' % (x, y),
    rag_set_df.question,
    rag_set_df.sql_query
))

rag_string_data = '\n\n'.join(rag_set_df.formatted_txt)

Next, I used LangChain's character text splitter to create chunks, with each question-and-answer pair as a separate chunk. Since we're splitting the text semantically, no overlap is needed.

from langchain_text_splitters import CharacterTextSplitter

text_splitter = CharacterTextSplitter(
    separator="\n\n",
    chunk_size=1, # to split by character without merging
    chunk_overlap=0,
    length_function=len,
    is_separator_regex=False,
)

texts = text_splitter.create_documents([rag_string_data])

The final step is to load the chunks into our vector storage.

document_ids = vector_store.add_documents(documents=texts)
print(vector_store._collection.count())
# 32

Now, we can test the retrieval to see the results. They look quite similar to the customer question.

question = 'What was the share of users using Windows yesterday?'
retrieved_docs = vector_store.similarity_search(question, 3)
context = "\n\n".join(map(lambda x: x.page_content, retrieved_docs))
print(context)

# Question: What was the share of users using Windows the day before yesterday?; 
# Answer: select 100*uniqExactIf(user_id, os = 'Windows')/uniqExact(user_id) as windows_share from ecommerce.sessions where (action_date = today() - 2) format TabSeparatedWithNames
# Question: What was the share of users using Windows in the last week?; 
# Answer: select 100*uniqExactIf(user_id, os = 'Windows')/uniqExact(user_id) as windows_share from ecommerce.sessions where (action_date >= today() - 7) and (action_date < today()) format TabSeparatedWithNames
# Question: What was the share of users using Android yesterday?; 
# Answer: select 100*uniqExactIf(user_id, os = 'Android')/uniqExact(user_id) as android_share from ecommerce.sessions where (action_date = today() - 1) format TabSeparatedWithNames

Let's adjust the system prompt to include the examples we retrieved.

generate_query_system_prompt_with_examples_tmpl = '''
You are a senior data analyst with more than 10 years of experience writing complex SQL queries. 
There are two tables in the database you are working with with the following schemas. 

Table: ecommerce.users 
Description: customers of the online shop
Fields: 
- user_id (integer) - unique identifier of customer, for example, 1000004 or 3000004
- country (string) - country of residence, for example, "Netherlands" or "United Kingdom"
- is_active (integer) - 1 if customer is still active and 0 otherwise
- age (integer) - customer age in full years, for example, 31 or 72

Table: ecommerce.sessions 
Description: sessions of usage of the online shop
Fields: 
- user_id (integer) - unique identifier of customer, for example, 1000004 or 3000004
- session_id (integer) - unique identifier of session, for example, 106 or 1023
- action_date (date) - session start date, for example, "2021-01-03" or "2024-12-02"
- session_duration (integer) - duration of session in seconds, for example, 125 or 49
- os (string) - operating system that customer used, for example, "Windows" or "Android"
- browser (string) - browser that customer used, for example, "Chrome" or "Safari"
- is_fraud (integer) - 1 if session is marked as fraud and 0 otherwise
- revenue (float) - income in USD (the sum of purchased items), for example, 0.0 or 1506.7

Write a query in ClickHouse SQL to answer the following question. 
Add "format TabSeparatedWithNames" at the end of the query to get data from ClickHouse database in the right format. 
Answer questions following the instructions and providing all the needed information and sharing your reasoning. 

Examples of questions and answers: 
{examples}
'''

Once again, let's create a query generation function, this time with RAG.

def generate_query_rag(question):
    retrieved_docs = vector_store.similarity_search(question, 3)
    context = "\n\n".join(map(lambda x: x.page_content, retrieved_docs))

    prompt = get_llama_prompt(question, 
        generate_query_system_prompt_with_examples_tmpl.format(examples=context))
    result = chat_llm.invoke(prompt)

    try:
        generated_query = result.tool_calls[0]['args']['query']
    except:
        generated_query = ''
    return generated_query

As usual, let's use our evaluation function to test the new approach.

rag_eval_df = evaluate_sql_agent(generate_query_rag, golden_df)

We can see a significant improvement, increasing from 1 to 6 correct answers out of 10. It's still not ideal, but we're moving in the right direction.

We can also experiment with combining the two approaches: RAG and self-reflection.

def generate_query_rag_with_reflection(question):
    generated_query = generate_query_rag(question)

    db_output = get_clickhouse_data(generated_query)
    is_valid_db_output = is_valid_output(db_output)
    if is_valid_db_output == 'too many rows':
        db_output = "Database unexpectedly returned more than 1000 rows."

    if is_valid_db_output == 'ok':
        return generated_query

    reflection_user_query = reflection_user_query_tmpl.format(
        question=question,
        query=generated_query,
        output=db_output
    )

    reflection_prompt = get_llama_prompt(reflection_user_query, generate_query_system_prompt)
    reflection_result = chat_llm.invoke(reflection_prompt)

    try:
        reflected_query = reflection_result.tool_calls[0]['args']['query']
    except:
        reflected_query = ''
    return reflected_query

rag_refl_eval_df = evaluate_sql_agent(generate_query_rag_with_reflection, 
    golden_df)

We can see another slight improvement: we've completely eliminated invalid SQL queries (thanks to self-reflection) and increased the number of correct answers to 7 out of 10.

That's it. It's been quite a journey. We started with zero valid SQL queries and have now reached 70% accuracy.

You can find the complete code on GitHub.

In this article, we explored the iterative process of improving accuracy for LLM applications.

  • We built an evaluation set and scoring criteria that allowed us to compare different iterations and understand whether we were moving in the right direction.
  • We leveraged self-reflection to let the LLM correct its mistakes, significantly reducing the number of invalid SQL queries.
  • Additionally, we implemented Retrieval-Augmented Generation (RAG) to further improve the quality, reaching an accuracy rate of 60-70%.

While this is a solid result, it still falls short of the 90%+ accuracy threshold typically expected for production applications. To reach such a high bar, we need to use fine-tuning, which will be the topic of the next article.

Thank you a lot for reading this article. I hope it was insightful for you. If you have any follow-up questions or comments, please leave them in the comments section.

All the images are produced by the author unless otherwise stated.

This article is inspired by the "Improving Accuracy of LLM Applications" short course from DeepLearning.AI.