LLM Agents Demystified. Hands-on implementation with LightRAG… | by Li Yin | Jul, 2024

We’ll set up two sets of models, llama3-70b-8192 by Groq and gpt-3.5-turbo by OpenAI, to test two queries. For comparison, we will compare these with a vanilla LLM response without using the agent. Here are the code snippets:

from lightrag.components.agent import ReActAgent
from lightrag.core import Generator, ModelClientType, ModelClient
from lightrag.utils import setup_env

setup_env()

# Define tools
def multiply(a: int, b: int) -> int:
    """
    Multiply two numbers.
    """
    return a * b


def add(a: int, b: int) -> int:
    """
    Add two numbers.
    """
    return a + b


def divide(a: float, b: float) -> float:
    """
    Divide two numbers.
    """
    return float(a) / b


llama3_model_kwargs = {
    "model": "llama3-70b-8192",  # llama3 70b works better than 8b here.
    "temperature": 0.0,
}
gpt_model_kwargs = {
    "model": "gpt-3.5-turbo",
    "temperature": 0.0,
}


def test_react_agent(model_client: ModelClient, model_kwargs: dict):
    tools = [multiply, add, divide]
    queries = [
        "What is the capital of France? and what is 465 times 321 then add 95297 and then divide by 13.2?",
        "Give me 5 words rhyming with cool, and make a 4-sentence poem using them",
    ]
    # define a generator without tools for comparison
    generator = Generator(
        model_client=model_client,
        model_kwargs=model_kwargs,
    )
    react = ReActAgent(
        max_steps=6,
        add_llm_as_fallback=True,
        tools=tools,
        model_client=model_client,
        model_kwargs=model_kwargs,
    )
    # print(react)

    for query in queries:
        print(f"Query: {query}")
        agent_response = react.call(query)
        llm_response = generator.call(prompt_kwargs={"input_str": query})
        print(f"Agent response: {agent_response}")
        print(f"LLM response: {llm_response}")
        print("")

The structure of the ReActAgent, including its initialization arguments and its two major components, tool_manager and planner, is shown below.

ReActAgent(
  max_steps=6, add_llm_as_fallback=True,
  (tool_manager): ToolManager(Tools: [FunctionTool(fn: , async: False, definition: FunctionDefinition(func_name='multiply', func_desc='multiply(a: int, b: int) -> int\n\n    Multiply two numbers.\n    ', func_parameters={'type': 'object', 'properties': {'a': {'type': 'int'}, 'b': {'type': 'int'}}, 'required': ['a', 'b']})), FunctionTool(fn: , async: False, definition: FunctionDefinition(func_name='add', func_desc='add(a: int, b: int) -> int\n\n    Add two numbers.\n    ', func_parameters={'type': 'object', 'properties': {'a': {'type': 'int'}, 'b': {'type': 'int'}}, 'required': ['a', 'b']})), FunctionTool(fn: , async: False, definition: FunctionDefinition(func_name='divide', func_desc='divide(a: float, b: float) -> float\n\n    Divide two numbers.\n    ', func_parameters={'type': 'object', 'properties': {'a': {'type': 'float'}, 'b': {'type': 'float'}}, 'required': ['a', 'b']})), FunctionTool(fn: .llm_tool at 0x11384b740>, async: False, definition: FunctionDefinition(func_name='llm_tool', func_desc="llm_tool(input: str) -> str\nI answer any input query with llm's world knowledge. Use me as a fallback tool or when the query is simple.", func_parameters={'type': 'object', 'properties': {'input': {'type': 'str'}}, 'required': ['input']})), FunctionTool(fn: .finish at 0x11382fa60>, async: False, definition: FunctionDefinition(func_name='finish', func_desc='finish(answer: str) -> str\nFinish the task with answer.', func_parameters={'type': 'object', 'properties': {'answer': {'type': 'str'}}, 'required': ['answer']}))], Additional Context: {})
(planner): Generator(
model_kwargs={'model': 'llama3-70b-8192', 'temperature': 0.0},
(prompt): Prompt(
template:
{# role/task description #}
You are a helpful assistant.
Answer the user's query using the tools provided below with minimal steps and maximum accuracy.
{# REACT instructions #}
Each step you will read the previous Thought, Action, and Observation(execution result of the action) and then provide the next Thought and Action.
{# Tools #}
{% if tools %}

You available tools are:
{# tools #}
{% for tool in tools %}
{{ loop.index }}.
{{tool}}
------------------------
{% endfor %}

{% endif %}
{# output format and examples #}

{{output_format_str}}

{# Task specification to teach the agent how to think using 'divide and conquer' strategy #}
- For simple queries: Directly call the ``finish`` action and provide the answer.
- For complex queries:
- Step 1: Read the user query and potentially divide it into subqueries. And get started with the first subquery.
- Call one available tool at a time to solve each subquery/subquestion.
- At step 'finish', join all subqueries' answers and finish the task.
Remember:
- Action must call one of the above tools with name. It can not be empty.
- You will always end with the 'finish' action to finish the task. The answer can be the final answer or a failure message.

-----------------
User query:
{{ input_str }}
{# Step History #}
{% if step_history %}

{% for history in step_history %}
Step {{ loop.index }}.
"Thought": "{{history.action.thought}}",
"Action": "{{history.action.action}}",
"Observation": "{{history.observation}}"
------------------------
{% endfor %}

{% endif %}
You:, prompt_kwargs: {'tools': ['func_name: multiply\nfunc_desc: "multiply(a: int, b: int) -> int\n\n    Multiply two numbers.\n    "\nfunc_parameters:\n  type: object\n  properties:\n    a:\n      type: int\n    b:\n      type: int\n  required:\n  - a\n  - b\n', 'func_name: add\nfunc_desc: "add(a: int, b: int) -> int\n\n    Add two numbers.\n    "\nfunc_parameters:\n  type: object\n  properties:\n    a:\n      type: int\n    b:\n      type: int\n  required:\n  - a\n  - b\n', 'func_name: divide\nfunc_desc: "divide(a: float, b: float) -> float\n\n    Divide two numbers.\n    "\nfunc_parameters:\n  type: object\n  properties:\n    a:\n      type: float\n    b:\n      type: float\n  required:\n  - a\n  - b\n', "func_name: llm_tool\nfunc_desc: 'llm_tool(input: str) -> str\n\n    I answer any input query with llm''s world knowledge. Use me as a fallback tool\n    or when the query is simple.'\nfunc_parameters:\n  type: object\n  properties:\n    input:\n      type: str\n  required:\n  - input\n", "func_name: finish\nfunc_desc: 'finish(answer: str) -> str\n\n    Finish the task with answer.'\nfunc_parameters:\n  type: object\n  properties:\n    answer:\n      type: str\n  required:\n  - answer\n"], 'output_format_str': 'Your output should be formatted as a standard JSON instance with the following schema:\n```\n{\n    "thought": "Why the function is called (Optional[str]) (optional)",\n    "action": "FuncName(<kwargs>) Valid function call expression. Example: "FuncName(a=1, b=2)" Follow the data type specified in the function parameters.e.g. for Type object with x,y properties, use "ObjectType(x=1, y=2) (str) (required)"\n}\n```\nExamples:\n```\n{\n    "thought": "I have finished the task.",\n    "action": "finish(answer="final answer: \'answer\'")"\n}\n________\n```\n-Make sure to always enclose the JSON output in triple backticks (```). Please do not add anything other than valid JSON output!\n-Use double quotes for the keys and string values.\n-DO NOT mistake the "properties" and "type" in the schema as the actual fields in the JSON output.\n-Follow the JSON formatting conventions.'}, prompt_variables: ['input_str', 'tools', 'step_history', 'output_format_str']
)
(model_client): GroqAPIClient()
(output_processors): JsonOutputParser(
data_class=FunctionExpression, examples=[FunctionExpression(thought='I have finished the task.', action='finish(answer="final answer: 'answer'")')], exclude_fields=None, return_data_class=True
(output_format_prompt): Prompt(
template: Your output should be formatted as a standard JSON instance with the following schema:
```
{{schema}}
```
{% if example %}
Examples:
```
{{example}}
```
{% endif %}
-Make sure to always enclose the JSON output in triple backticks (```). Please do not add anything other than valid JSON output!
-Use double quotes for the keys and string values.
-DO NOT mistake the "properties" and "type" in the schema as the actual fields in the JSON output.
-Follow the JSON formatting conventions., prompt_variables: ['example', 'schema']
)
(output_processors): JsonParser()
)
)
)

Now, let’s run the test function to see the agent in action.

test_react_agent(ModelClientType.GROQ(), llama3_model_kwargs)
test_react_agent(ModelClientType.OPENAI(), gpt_model_kwargs)

Our agent will show the core steps to developers via a colored printout, including input_query, steps, and the final answer. The printout of the first query with llama3 is shown below (without the color here):

2024-07-10 16:48:47 - [react.py:287:call] - input_query: What is the capital of France? and what is 465 times 321 then add 95297 and then divide by 13.2

2024-07-10 16:48:48 - [react.py:266:_run_one_step] - Step 1:
StepOutput(step=1, action=FunctionExpression(thought="Let's break down the query into subqueries and start with the first one.", action='llm_tool(input="What is the capital of France?")'), function=Function(thought=None, name='llm_tool', args=[], kwargs={'input': 'What is the capital of France?'}), observation='The capital of France is Paris!')
_______
2024-07-10 16:48:49 - [react.py:266:_run_one_step] - Step 2:
StepOutput(step=2, action=FunctionExpression(thought="Now, let's move on to the second subquery.", action='multiply(a=465, b=321)'), function=Function(thought=None, name='multiply', args=[], kwargs={'a': 465, 'b': 321}), observation=149265)
_______
2024-07-10 16:48:49 - [react.py:266:_run_one_step] - Step 3:
StepOutput(step=3, action=FunctionExpression(thought="Now, let's add 95297 to the result.", action='add(a=149265, b=95297)'), function=Function(thought=None, name='add', args=[], kwargs={'a': 149265, 'b': 95297}), observation=244562)
_______
2024-07-10 16:48:50 - [react.py:266:_run_one_step] - Step 4:
StepOutput(step=4, action=FunctionExpression(thought="Now, let's divide the result by 13.2.", action='divide(a=244562, b=13.2)'), function=Function(thought=None, name='divide', args=[], kwargs={'a': 244562, 'b': 13.2}), observation=18527.424242424244)
_______
2024-07-10 16:48:50 - [react.py:266:_run_one_step] - Step 5:
StepOutput(step=5, action=FunctionExpression(thought="Now, let's combine the answers of both subqueries.", action='finish(answer="The capital of France is Paris! and the result of the mathematical operation is 18527.424242424244.")'), function=Function(thought=None, name='finish', args=[], kwargs={'answer': 'The capital of France is Paris! and the result of the mathematical operation is 18527.424242424244.'}), observation='The capital of France is Paris! and the result of the mathematical operation is 18527.424242424244.')
_______
2024-07-10 16:48:50 - [react.py:301:call] - answer:
The capital of France is Paris! and the result of the mathematical operation is 18527.424242424244.

For the second query, the printout:

2024-07-10 16:48:51 - [react.py:287:call] - input_query: Give me 5 words rhyming with cool, and make a 4-sentence poem using them
2024-07-10 16:48:52 - [react.py:266:_run_one_step] - Step 1:
StepOutput(step=1, action=FunctionExpression(thought="I need to find 5 words that rhyme with 'cool'.", action='llm_tool(input="What are 5 words that rhyme with 'cool'?")'), function=Function(thought=None, name='llm_tool', args=[], kwargs={'input': "What are 5 words that rhyme with 'cool'?"}), observation='Here are 5 words that rhyme with "cool":\n\n1. Rule\n2. Tool\n3. Fool\n4. Pool\n5. School')
_______
2024-07-10 16:49:00 - [react.py:266:_run_one_step] - Step 2:
StepOutput(step=2, action=FunctionExpression(thought='Now that I have the rhyming words, I need to create a 4-sentence poem using them.', action='llm_tool(input="Create a 4-sentence poem using the words 'rule', 'tool', 'fool', 'pool', and 'school'.")'), function=Function(thought=None, name='llm_tool', args=[], kwargs={'input': "Create a 4-sentence poem using the words 'rule', 'tool', 'fool', 'pool', and 'school'."}), observation="Here is a 4-sentence poem using the words 'rule', 'tool', 'fool', 'pool', and 'school':\n\nIn the classroom, we learn to rule,\nWith a pencil as our trusty tool.\nBut if we're not careful, we can be a fool,\nAnd end up swimming in the school pool.")
_______
2024-07-10 16:49:12 - [react.py:266:_run_one_step] - Step 3:
StepOutput(step=3, action=FunctionExpression(thought='I have the poem, now I need to finish the task.', action='finish(answer="Here are 5 words that rhyme with 'cool': rule, tool, fool, pool, school. Here is a 4-sentence poem using the words: In the classroom, we learn to rule, With a pencil as our trusty tool. But if we're not careful, we can be a fool, And end up swimming in the school pool.")'), function=Function(thought=None, name='finish', args=[], kwargs={'answer': "Here are 5 words that rhyme with 'cool': rule, tool, fool, pool, school. Here is a 4-sentence poem using the words: In the classroom, we learn to rule, With a pencil as our trusty tool. But if we're not careful, we can be a fool, And end up swimming in the school pool."}), observation="Here are 5 words that rhyme with 'cool': rule, tool, fool, pool, school. Here is a 4-sentence poem using the words: In the classroom, we learn to rule, With a pencil as our trusty tool. But if we're not careful, we can be a fool, And end up swimming in the school pool.")
_______
2024-07-10 16:49:12 - [react.py:301:call] - answer:
Here are 5 words that rhyme with 'cool': rule, tool, fool, pool, school. Here is a 4-sentence poem using the words: In the classroom, we learn to rule, With a pencil as our trusty tool. But if we're not careful, we can be a fool, And end up swimming in the school pool.

The comparison between the agent and the vanilla LLM response is shown below:

Answer with agent: The capital of France is Paris! and the result of the mathematical operation is 18527.424242424244.
Answer without agent: GeneratorOutput(data="I'd be happy to help you with that!\n\nThe capital of France is Paris.\n\nNow, let's tackle the math problem:\n\n1. 465 × 321 = 149,485\n2. Add 95,297 to that result: 149,485 + 95,297 = 244,782\n3. Divide the result by 13.2: 244,782 ÷ 13.2 = 18,544.09\n\nSo, the answer is 18,544.09!", error=None, usage=None, raw_response="I'd be happy to help you with that!\n\nThe capital of France is Paris.\n\nNow, let's tackle the math problem:\n\n1. 465 × 321 = 149,485\n2. Add 95,297 to that result: 149,485 + 95,297 = 244,782\n3. Divide the result by 13.2: 244,782 ÷ 13.2 = 18,544.09\n\nSo, the answer is 18,544.09!", metadata=None)

For the second query, the comparison is shown below:

Answer with agent: Here are 5 words that rhyme with 'cool': rule, tool, fool, pool, school. Here is a 4-sentence poem using the words: In the classroom, we learn to rule, With a pencil as our trusty tool. But if we're not careful, we can be a fool, And end up swimming in the school pool.
Answer without agent: GeneratorOutput(data='Here are 5 words that rhyme with "cool":\n\n1. rule\n2. tool\n3. fool\n4. pool\n5. school\n\nAnd here is a 4-sentence poem using these words:\n\nIn the summer heat, I like to be cool,\nFollowing the rule, I take a dip in the pool.\nI'm not a fool, I know just what to do,\nI grab my tool and head back to school.', error=None, usage=None, raw_response='Here are 5 words that rhyme with "cool":\n\n1. rule\n2. tool\n3. fool\n4. pool\n5. school\n\nAnd here is a 4-sentence poem using these words:\n\nIn the summer heat, I like to be cool,\nFollowing the rule, I take a dip in the pool.\nI'm not a fool, I know just what to do,\nI grab my tool and head back to school.', metadata=None)

Note that in the first comparison the vanilla LLM gets the arithmetic wrong (465 × 321 is 149,265, not 149,485), while the agent computes it correctly by calling the multiply tool. The ReAct agent is particularly helpful for answering queries that require capabilities like computation or more complicated reasoning and planning. However, using it on general queries might be overkill, as it can take more steps than necessary to answer the query.

Template

The first thing you might want to customize is the template itself. You can do this by passing your own template to the agent's constructor. We advise you to modify our default template: DEFAULT_REACT_AGENT_SYSTEM_PROMPT.
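As a minimal sketch (not taken verbatim from the library docs), a modified template could be wired in as below, reusing the tools and model setup from the earlier snippet. Both the `template` keyword and the import path for DEFAULT_REACT_AGENT_SYSTEM_PROMPT are assumptions; verify them against your installed version.

```python
# Assumed import path for the default template; adjust to where
# DEFAULT_REACT_AGENT_SYSTEM_PROMPT lives in your installed version.
from lightrag.components.agent.react import DEFAULT_REACT_AGENT_SYSTEM_PROMPT

# Start from the default template (shown in the planner printout above) and
# tweak only the role/task description, keeping all jinja variables intact.
my_template = DEFAULT_REACT_AGENT_SYSTEM_PROMPT.replace(
    "You are a helpful assistant.",
    "You are a careful assistant that double-checks every arithmetic step.",
)

custom_react = ReActAgent(
    max_steps=6,
    add_llm_as_fallback=True,
    tools=[multiply, add, divide],
    template=my_template,  # assumed keyword for overriding the default template
    model_client=ModelClientType.GROQ(),
    model_kwargs=llama3_model_kwargs,
)
```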

Examples for Better Output Format

Secondly, the examples argument in the constructor allows you to provide more examples to enforce the correct output format. For instance, if we want it to learn how to correctly call multiply, we can pass in a list of FunctionExpression instances with the correct format. The classmethod from_function can be used to create a FunctionExpression instance from a function and its arguments.

from lightrag.core.types import FunctionExpression

# generate an example of calling multiply with keyword arguments
example_using_multiply = FunctionExpression.from_function(
    func=multiply,
    thought="Now, let's multiply two numbers.",
    a=3,
    b=4,
)
examples = [example_using_multiply]
# pass it to the agent
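To complete the snippet, here is a hedged sketch of actually handing these examples to the agent, reusing the tools and model kwargs defined earlier. The `examples` keyword follows the constructor argument described above but should be checked against your version.

```python
# A minimal sketch: pass the FunctionExpression examples to the constructor
# (the `examples` keyword is assumed from the description above).
react = ReActAgent(
    max_steps=6,
    add_llm_as_fallback=True,
    tools=[multiply, add, divide],
    examples=examples,
    model_client=ModelClientType.GROQ(),
    model_kwargs=llama3_model_kwargs,
)
```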

We can visualize how this is passed to the planner prompt via:

react.planner.print_prompt()

The above example will be formatted as:

<OUTPUT_FORMAT>
Your output should be formatted as a standard JSON instance with the following schema:
```
{
"thought": "Why the function is called (Optional[str]) (optional)",
"action": "FuncName(<kwargs>) Valid function call expression. Example: "FuncName(a=1, b=2)" Follow the data type specified in the function parameters.e.g. for Type object with x,y properties, use "ObjectType(x=1, y=2) (str) (required)"
}
```
Examples:
```
{
"thought": "Now, let's multiply two numbers.",
"action": "multiply(a=3, b=4)"
}
________
{
"thought": "I have finished the task.",
"action": "finish(answer="final answer: 'answer'")"
}
________
```
-Make sure to always enclose the JSON output in triple backticks (```). Please do not add anything other than valid JSON output!
-Use double quotes for the keys and string values.
-DO NOT mistake the "properties" and "type" in the schema as the actual fields in the JSON output.
-Follow the JSON formatting conventions.
</OUTPUT_FORMAT>

Subclass ReActAgent

If you want to customize the agent further, you can subclass the ReActAgent and override the methods you want to change.
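As a rough sketch (not the library's prescribed pattern), a subclass could wrap the `call` method used throughout this post to add logging or pre/post-processing:

```python
import logging

log = logging.getLogger(__name__)


# A minimal subclassing sketch: `call` is the entry point used earlier
# (react.call(query)); the logging added here is purely illustrative.
class LoggingReActAgent(ReActAgent):
    def call(self, *args, **kwargs):
        log.info("ReAct agent received: %s %s", args, kwargs)
        answer = super().call(*args, **kwargs)
        log.info("ReAct agent answered: %s", answer)
        return answer


# Construct it exactly like the base agent.
my_react = LoggingReActAgent(
    max_steps=6,
    add_llm_as_fallback=True,
    tools=[multiply, add, divide],
    model_client=ModelClientType.GROQ(),
    model_kwargs=llama3_model_kwargs,
)
```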

[1] A survey on large language model based autonomous agents: Paitesanshi/LLM-Agent-Survey

[2] ReAct: https://arxiv.org/abs/2210.03629
