LangGraph vs CrewAI vs AutoGen to Build a Data Analysis Agent

In today's data-driven world, organizations rely on data analysts to interpret complex datasets, uncover actionable insights, and drive decision-making. But what if we could improve the efficiency and scalability of this process using AI? Enter the Data Analysis Agent, which automates analytical tasks, executes code, and adaptively responds to data queries. LangGraph, CrewAI, and AutoGen are three popular frameworks used to build AI agents. In this article, we will use and compare all three to build a simple data analysis agent.

How the Data Analysis Agent Works

The data analysis agent first takes the query from the user and generates code to read the file and analyze the data in it. The generated code is then executed using the Python REPL tool, and the result of the execution is sent back to the agent. The agent analyzes the result received from the code execution tool and replies to the user query. LLMs can generate arbitrary code, so we must take care when executing LLM-generated code in a local environment.
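As a rough, framework-agnostic sketch of this loop (the helper functions here are hypothetical placeholders, not part of any of the three frameworks):

def data_analysis_agent(query: str) -> str:
    # 1. LLM generates analysis code for the user query (hypothetical helper)
    code = llm_generate_code(query)
    # 2. Execute the generated code in a (sandboxed!) Python REPL
    result = python_repl_execute(code)
    # 3. LLM interprets the execution result and answers the user
    return llm_answer(query, result)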

Building a Data Analysis Agent with LangGraph

If you are new to this topic or wish to brush up on your knowledge of LangGraph, here's an article I'd recommend: What is LangGraph?

Prerequisites

Before building agents, ensure you have the necessary API keys for the required LLMs.

Load the .env file with the required API keys.

from dotenv import load_dotenv

load_dotenv()
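For this to work, the .env file in your working directory should hold the API keys; assuming OpenAI models as used below, it would look something like:

OPENAI_API_KEY=<your-openai-api-key>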

Key Libraries Required

langchain – 0.3.7

langchain-experimental – 0.3.3

langgraph – 0.2.52

crewai – 0.80.0

crewai-tools – 0.14.0

autogen-agentchat – 0.2.38
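You can install these pinned versions with pip, for example:

pip install langchain==0.3.7 langchain-experimental==0.3.3 langgraph==0.2.52 crewai==0.80.0 crewai-tools==0.14.0 autogen-agentchat==0.2.38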

Now that we're all set, let's begin building our agent.

Steps to Build a Data Analysis Agent with LangGraph

1. Import the necessary libraries.

import pandas as pd
from IPython.display import Image, display
from typing import List, Literal, Optional, TypedDict, Annotated
from langchain_core.tools import tool
from langchain_core.messages import ToolMessage
from langchain_experimental.utilities import PythonREPL
from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langgraph.prebuilt import ToolNode, tools_condition
from langgraph.checkpoint.memory import MemorySaver

2. Let's define the state.

class State(TypedDict):
    messages: Annotated[list, add_messages]

graph_builder = StateGraph(State)

3. Define the LLM and the code execution function, and bind the function to the LLM.

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0.1)

@tool
def python_repl(code: Annotated[str, "The python code to execute"]):
    """Use this to execute python code. If you want to see the output of a value,
    you should print it out with `print(...)`. This is visible to the user."""

    try:
        result = PythonREPL().run(code)
        print("RESULT CODE EXECUTION:", result)
    except BaseException as e:
        return f"Failed to execute. Error: {repr(e)}"
    return f"Executed:\n```python\n{code}\n```\nStdout: {result}"

llm_with_tools = llm.bind_tools([python_repl])

4. Define the function for the agent to reply, and add it as a node to the graph.

def chatbot(state: State):
    return {"messages": [llm_with_tools.invoke(state["messages"])]}
    
graph_builder.add_node("agent", chatbot)

5. Define the ToolNode and add it to the graph.

code_execution = ToolNode(tools=[python_repl])

graph_builder.add_node("tools", code_execution)

If the LLM returns a tool call, we need to route it to the tool node; otherwise, we can end. Let's define a function for routing. Then we can add the other edges.

def route_tools(state: State):
    """
    Use in the conditional_edge to route to the ToolNode if the last message
    has tool calls. Otherwise, route to the end.
    """
    if isinstance(state, list):
        ai_message = state[-1]
    elif messages := state.get("messages", []):
        ai_message = messages[-1]
    else:
        raise ValueError(f"No messages found in input state to tool_edge: {state}")
    if hasattr(ai_message, "tool_calls") and len(ai_message.tool_calls) > 0:
        return "tools"
    return END
    
graph_builder.add_conditional_edges(
    "agent",
    route_tools,
    {"tools": "tools", END: END},
)

graph_builder.add_edge("tools", "agent")
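Note that LangGraph also ships a prebuilt tools_condition router (imported in step 1) that performs the same check; as a sketch, the custom route_tools function above could be replaced with:

# Routes to the node named "tools" when the last message contains tool calls, else to END.
graph_builder.add_conditional_edges("agent", tools_condition)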

6. Let us also add memory so that we can chat with the agent.

memory = MemorySaver()

7. Compile and display the graph.

graph = graph_builder.compile(checkpointer=memory)

display(Image(graph.get_graph().draw_mermaid_png()))

8. Now we can start the chat. Since we have added memory, we will give each conversation a unique thread_id and start the conversation on that thread.

config = {"configurable": {"thread_id": "1"}}

def stream_graph_updates(user_input: str):
    events = graph.stream(
        {"messages": [("user", user_input)]}, config, stream_mode="values"
    )
    for event in events:
        event["messages"][-1].pretty_print()

while True:
    user_input = input("User: ")
    if user_input.lower() in ["quit", "exit", "q"]:
        print("Goodbye!")
        break
    stream_graph_updates(user_input)

While the loop is running, we start by giving the path of the file and then asking questions based on the data.
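Instead of the interactive loop, you could also call stream_graph_updates directly; for example, the first two turns might look like this (using the dataset path from later in this article):

# Example turns: first point the agent at the dataset, then query it.
stream_graph_updates("Read the file at '/home/santhosh/Projects/Code/LangGraph/gym_members_exercise_tracking.csv'")
stream_graph_updates("What is the mean age of the members?")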

The output will be as follows:

[Image: output of the LangGraph data analysis agent]

As we have included memory, we can ask follow-up questions about the dataset in the chat. The agent will generate the required code and the code will be executed. The result of the code execution is sent back to the LLM. An example is shown below:

[Image: LangGraph data analysis agent example interaction]

Also Read: How to Create Your Personalized News Digest Agent with LangGraph

Building a Data Analysis Agent with CrewAI

Now, we will use CrewAI for the data analysis task.

1. Import the necessary libraries.

from crewai import Agent, Task, Crew
from crewai.tools import tool
from crewai_tools import DirectoryReadTool, FileReadTool
from langchain_experimental.utilities import PythonREPL

2. We will build one agent for generating the code and another for executing that code.

coding_agent = Agent(
    role="Python Developer",
    goal="Craft well-designed and thought-out code to answer the given problem",
    backstory="""You are a senior Python developer with extensive experience in software and its best practices.
                 You have expertise in writing clean, efficient, and scalable code.""",
    llm='gpt-4o',
    human_input=True,
)
coding_task = Task(
    description="""Write code to answer the given problem.
                   Assign the code output to the 'result' variable.
                   Problem: {problem}""",
    expected_output="Code to get the result for the problem. The output of the code should be assigned to the 'result' variable.",
    agent=coding_agent
)

3. To execute the code, we will use PythonREPL(). Define it as a CrewAI tool.

@tool("repl")
def repl(code: str) -> str:
    """Useful for executing Python code"""
    return PythonREPL().run(command=code)

4. Define the executing agent and task, with access to repl and FileReadTool().

executing_agent = Agent(
    role="Python Executor",
    goal="Run the received code to answer the given problem",
    backstory="""You are a Python developer with extensive experience in software and its best practices.
                 You can execute code, debug, and optimize Python solutions effectively.""",
    llm='gpt-4o-mini',
    human_input=True,
    tools=[repl, FileReadTool()]
)
executing_task = Task(
    description="""Execute the code to answer the given problem.
                   Assign the code output to the 'result' variable.
                   Problem: {problem}""",
    expected_output="the result for the problem",
    agent=executing_agent
)

5. Build the crew with both agents and their corresponding tasks.

analysis_crew = Crew(
    agents=[coding_agent, executing_agent],
    tasks=[coding_task, executing_task],
    verbose=True
)

6. Run the crew with the following inputs.

inputs = {'problem': """read this file and return the column names and find the mean age
    '/home/santhosh/Projects/Code/LangGraph/gym_members_exercise_tracking.csv'"""}

result = analysis_crew.kickoff(inputs=inputs)

print(result.raw)

Here's what the output will look like:

[Image: output of the CrewAI data analysis agent]

Also Read: Build LLM Agents on the Fly Without Code With CrewAI

Building a Data Analysis Agent with AutoGen

1. Import the necessary libraries.

from autogen import ConversableAgent
from autogen.coding import LocalCommandLineCodeExecutor, DockerCommandLineCodeExecutor

2. Define the code executor and an agent to use the code executor.

executor = LocalCommandLineCodeExecutor(
    timeout=10,  # Timeout for each code execution in seconds.
    work_dir="./Data",  # Directory used to store the code files.
)
code_executor_agent = ConversableAgent(
    "code_executor_agent",
    llm_config=False,
    code_execution_config={"executor": executor},
    human_input_mode="ALWAYS",
)
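Since the generated code is untrusted, the DockerCommandLineCodeExecutor imported above is a safer alternative when Docker is available; a minimal sketch (the image choice here is an assumption):

# Safer alternative: execute generated code inside a Docker container.
# Requires a running Docker daemon.
executor = DockerCommandLineCodeExecutor(
    image="python:3-slim",  # any image with Python installed
    timeout=10,
    work_dir="./Data",
)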

3. Define an agent to write the code, with a custom system message.

Take the code_writer system message from https://microsoft.github.io/autogen/0.2/docs/tutorial/code-executors/
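For reference, an abbreviated sketch of that system message is below; use the full version from the linked tutorial in practice:

# Abbreviated sketch of the tutorial's code-writer system message.
code_writer_system_message = """You are a helpful AI assistant. Solve tasks using your coding and language skills.
Suggest Python code (in a python coding block) for the user to execute.
If the result indicates there is an error, fix the error and output the code again.
Reply 'TERMINATE' in the end when everything is done."""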

code_writer_agent = ConversableAgent(
    "code_writer_agent",
    system_message=code_writer_system_message,
    llm_config={"config_list": [{"model": "gpt-4o-mini"}]},
    code_execution_config=False,
)

4. Define the problem to solve and initiate the chat.

problem = """Read the file at the path '/home/santhosh/Projects/Code/LangGraph/gym_members_exercise_tracking.csv'
             and print the mean age of the people."""

chat_result = code_executor_agent.initiate_chat(
    code_writer_agent,
    message=problem,
)

Once the chat starts, we can ask follow-up questions about the dataset mentioned above. If the code encounters an error, we can ask the agent to modify it. If the code looks fine, we can simply press 'Enter' to continue executing it.

5. If required, we can also print the questions we asked and their answers using this code.

for message in chat_result.chat_history:
    if message['role'] == 'assistant':
        if 'exitcode' not in message['content']:
            print(message['content'])
            print('\n')
    else:
        if 'TERMINATE' in message['content']:
            print(message['content'])
            print("----------------------------------------")

Here's the result:

[Image: output of the AutoGen data analysis agent]

Also Read: Hands-on Guide to Building Multi-Agent Chatbots with AutoGen

LangGraph vs CrewAI vs AutoGen

Now that you've learned to build a data analysis agent with all three frameworks, let's explore the differences between them when it comes to code execution:

LangGraph
– Key Features: Graph-based structure (nodes represent agents/tools, edges define interactions); seamless integration with PythonREPL
– Strengths: Highly flexible for creating structured, multi-step workflows; safe and efficient code execution with memory preserved across tasks
– Best Suited For: Complex, process-driven analytical tasks that demand clear, customizable workflows

CrewAI
– Key Features: Collaboration-focused; multiple agents working in parallel with predefined roles; integrates with LangChain tools
– Strengths: Task-oriented design; excellent for teamwork and role specialization; supports safe and reliable code execution with PythonREPL
– Best Suited For: Collaborative data analysis, code review setups, task decomposition, and role-based execution

AutoGen
– Key Features: Dynamic and iterative code execution; conversable agents for interactive execution and debugging; built-in chat feature
– Strengths: Adaptive and conversational workflows; focus on dynamic interaction and debugging; ideal for rapid prototyping and troubleshooting
– Best Suited For: Rapid prototyping, troubleshooting, and environments where tasks and requirements evolve frequently

Conclusion

In this article, we demonstrated how to build data analysis agents using LangGraph, CrewAI, and AutoGen. These frameworks enable agents to generate, execute, and analyze code to address data queries efficiently. By automating repetitive tasks, they make data analysis faster and more scalable. Their modular design allows customization for specific needs, making them valuable for data professionals. These agents showcase the potential of AI to simplify workflows and extract insights from data with ease.

To learn more about AI agents, check out our exclusive Agentic AI Pioneer Program!

Frequently Asked Questions

Q1. What are the key benefits of using AI frameworks like LangGraph, CrewAI, and AutoGen for data analysis?

A. These frameworks automate code generation and execution, enabling faster data processing and insights. They streamline workflows, reduce manual effort, and improve productivity for data-driven tasks.

Q2. Can these data analysis agents handle multiple datasets or complex queries?

A. Yes, the agents can be customized to handle diverse datasets and complex analytical queries by integrating appropriate tools and adjusting their workflows.

Q3. What precautions should be taken when executing LLM-generated code?

A. LLM-generated code may include errors or unsafe operations. Always validate the code in a controlled environment to ensure accuracy and security before execution.

Q4. How does memory integration improve these data analysis agents?

A. Memory integration allows agents to retain the context of past interactions, enabling adaptive responses and continuity in complex or multi-step queries.

Q5. What types of tasks can these data analysis agents automate?

A. These agents can automate tasks such as reading files, performing data cleaning, generating summaries, executing statistical analyses, and answering user queries about the data.

I'm working as an Associate Data Scientist at Analytics Vidhya, a platform dedicated to building the Data Science ecosystem. My interests lie in the fields of Natural Language Processing (NLP), Deep Learning, and AI Agents.