Accomplish Complex Tasks Using Code Executors in AutoGen

AI agents are designed to act autonomously, solving problems and executing tasks in dynamic environments. A key feature enabling this adaptability is AutoGen's code executors. Combined with LLMs, they allow AI agents to generate, evaluate, and execute code in real time. This capability bridges the gap between static AI models and actionable intelligence. By automating workflows, performing data analysis, and debugging complex systems, it transforms agents from mere thinkers into effective doers. In this article, we will learn more about code executors in AutoGen and how to implement them.

Types of Code Executors in AutoGen

AutoGen has three types of code executors that can be used for different purposes.

  1. Command Line Executor: It lets AI agents run code on the command line. It saves each code block to a separate file and executes that file. This executor is ideal for automating tasks like file management, script execution, or working with external tools. It provides flexibility and low-level control in a workflow.
  2. Jupyter Code Executor: It lets agents execute Python code within a Jupyter-like environment. Here, you can define variables in one code block and reuse them in subsequent blocks. One advantage of this setup is that when an error occurs, only the specific block containing the error needs to be re-executed, rather than the entire script.
  3. Custom Code Executor: It gives developers the ability to create specialized code execution logic. For example, a custom code executor can access variables defined in the environment without explicitly providing them to the LLM.

These code executors can run on the host machine (locally) as well as inside Docker containers.
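For instance, here is a minimal sketch (the timeout, work_dir, and image values are illustrative) of creating a local and a Docker-based command-line executor and attaching one to an agent through code_execution_config:

from autogen import ConversableAgent
from autogen.coding import LocalCommandLineCodeExecutor, DockerCommandLineCodeExecutor

# Run generated code directly on the host machine
local_executor = LocalCommandLineCodeExecutor(timeout=60, work_dir="./coding")

# Or run it inside a Docker container (requires Docker to be installed and running)
docker_executor = DockerCommandLineCodeExecutor(image="python:3-slim", timeout=60, work_dir="./coding")

executor_agent = ConversableAgent(
    name="executor_agent",
    llm_config=False,  # this agent only runs code; it does not call an LLM
    code_execution_config={"executor": local_executor},  # swap in docker_executor to use Docker
)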

Also Read: 4 Steps to Build Multi-Agent Nested Chats with AutoGen

How to Build AI Agents with Code Executors in AutoGen?

Now let's see how you can use these different code executors in AutoGen.

Prerequisites

Before building AI agents, ensure you have the necessary API keys for the required LLMs.

Load the .env file with the needed API keys.

from dotenv import load_dotenv

load_dotenv('.env')
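A minimal .env file might look like this (the key name below assumes you are using OpenAI models; use the variable your provider expects):

OPENAI_API_KEY=your-api-key-here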

Key Libraries Required

autogen-agentchat – 0.2.38

jupyter_kernel_gateway – 3.0.1
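Assuming these packages are available on PyPI under the same names, they can be installed with pip:

pip install autogen-agentchat==0.2.38 jupyter-kernel-gateway==3.0.1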

Building an AI Agent Using the Command Line Executor

Let's build an AI agent to find the offers and discounts available on an e-commerce website using the command line executor. Here are the steps to follow.

1. Import the required libraries.

from autogen import ConversableAgent, AssistantAgent, UserProxyAgent
from autogen.coding import LocalCommandLineCodeExecutor, DockerCommandLineCodeExecutor

2. Define the agents.

user_proxy = UserProxyAgent(
    name="User",
    llm_config=False,
    is_termination_msg=lambda msg: msg.get("content") is not None and "TERMINATE" in msg["content"],
    human_input_mode="TERMINATE",
    code_execution_config=False
)
code_writer_agent = ConversableAgent(
    name="CodeWriter",
    system_message="""You are a Python developer.
    You use your coding skill to solve problems.
    Once the task is done, return 'TERMINATE'.""",
    llm_config={"config_list": [{"model": "gpt-4o-mini"}]},
)

local_executor = LocalCommandLineCodeExecutor(
    timeout=15,
    work_dir="./code files")

local_executor_agent = ConversableAgent(
	"local_executor_agent",
	llm_config=False,
	code_execution_config={"executor": local_executor},
	human_input_mode="ALWAYS",
)

We are using 'local_executor' in the code_execution_config of the local_executor_agent.

3. Define the messages used to initialize the chat.

messages = ["""To check whether there are any offers or discounts available on a given e-commerce website -
            	https://www.flipkart.com/
            	Follow these steps,
            	1. download the html page of the given URL
            	2. we only need html content, so remove any CSS, JavaScript, and Image tags content
            	3. save the remaining html content.
           	""" ,
      	"read the text and list all the offers and discounts available"]

# Initialize the chat
chat_result = local_executor_agent.initiate_chat(
	code_writer_agent,
	message=messages[0],
)

It will ask for human input after each message from the CodeWriter agent. You just need to press the 'Enter' key to execute the code written by the agent. We can also give further instructions if there is any problem with the code.
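After the first task finishes, the second question from the messages list can be sent as a follow-up in the same conversation. A minimal sketch (clear_history=False keeps the earlier exchange so the agent knows which file to read):

# Follow-up question in the same conversation (sketch)
chat_result = local_executor_agent.initiate_chat(
    code_writer_agent,
    message=messages[1],
    clear_history=False,
)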

Here are the questions we asked and the output at the end.


As we can see, with these questions, we get a list of the offers and discounts available on the e-commerce website.
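If you want the final answer programmatically rather than from the console transcript, you can read it from the chat history (a sketch):

# Print the last message of the conversation (the list of offers and discounts)
print(chat_result.chat_history[-1]["content"])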

Also Read: Hands-on Guide to Building Multi-Agent Chatbots with AutoGen

Building an ML Model Using the Jupyter Code Executor

With the Jupyter code executor, we can access variables defined in one code block from another code block, unlike with the command line executor.

Now, let's try to build an ML model using it.

1. Import the additional methods.

from autogen.coding.jupyter import LocalJupyterServer, DockerJupyterServer, JupyterCodeExecutor
from pathlib import Path

2. Initialize the Jupyter server and the output directory.

server = LocalJupyterServer()
output_dir = Path("coding")
output_dir.mkdir(exist_ok=True)

Note that LocalJupyterServer may not function on Windows due to a bug. In that case, you can use the DockerJupyterServer instead, or use the EmbeddedIPythonCodeExecutor.
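A minimal sketch of the Docker-based alternative (assuming Docker is installed and running; used as a context manager, the server shuts down when the block exits):

from autogen.coding.jupyter import DockerJupyterServer, JupyterCodeExecutor

with DockerJupyterServer() as docker_server:
    docker_executor = JupyterCodeExecutor(docker_server, output_dir=output_dir)
    # attach docker_executor to the executor agent's code_execution_config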

3. Define the executor agent and the writer agent with a custom system message.

jupyter_executor_agent = ConversableAgent(
    name="jupyter_executor_agent",
    llm_config=False,
    code_execution_config={
        "executor": JupyterCodeExecutor(server, output_dir=output_dir),
    },
    human_input_mode="ALWAYS",
)
code_writer_system_message = """
You have been given coding capability to solve tasks using Python code in a stateful IPython kernel.
You are responsible for writing the code, and the user is responsible for executing the code.

When you write Python code, put the code in a markdown code block with the language set to Python.
For example:
```python
x = 3
```
You can then use the variable `x` in subsequent code blocks.
```python
print(x)
```
Always use print statements for the output of the code.
Write code incrementally and leverage the statefulness of the kernel to avoid repeating code.
Import libraries in a separate code block.
Define a function or a class in a separate code block.
Run code that produces output in a separate code block.
Run code that involves expensive operations like downloading, uploading, and calling external APIs in a separate code block.

When your code produces an output, the output will be returned to you.
Because you have limited conversation memory, if your code creates an image,
the output will be a path to the image instead of the image itself."""

code_writer_agent = ConversableAgent(
	"code_writer",
	system_message=code_writer_system_message,
	llm_config={"config_list": [{"model": "gpt-4o"}]},
	human_input_mode="TERMINATE",
)

4. Define the initial message and initialize the chat.

message = "read the datasets/user_behavior_dataset.csv and print what the data is about"

chat_result = jupyter_executor_agent.initiate_chat(
	code_writer_agent,
	message=message,
)

# Once the chat is completed, we can stop the server.
server.stop()

5. Once the chat is completed, we can stop the server.
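Because the kernel is stateful, you can also continue the same conversation before stopping the server, for example to actually train a model on the loaded data. A minimal sketch (clear_history=False keeps the earlier context):

# Follow-up in the same session, run before server.stop()
chat_result = jupyter_executor_agent.initiate_chat(
    code_writer_agent,
    message="Using the dataframe loaded above, train a simple classification model and print its accuracy.",
    clear_history=False,
)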

We can print the messages as follows:

for chat in chat_result.chat_history[:]:
    if chat['name'] == 'code_writer' and 'TERMINATE' not in chat['content']:
        print("--------agent-----------")
        print(chat['content'])
    if chat['name'] == 'jupyter_executor_agent' and 'exitcode' not in chat['content']:
        print("--------user------------")
        print(chat['content'])

Here's a sample of the output:

"

As we can see, we get both the code generated by the agent and the results of executing it.

Also Read: Building Agentic Chatbots Using AutoGen

Building an AI Agent Using a Custom Executor

Now, let's create a custom executor that can run code in the same Jupyter notebook in which we are creating the executor. This way, we can read a CSV file and then ask an agent to build an ML model on the already imported file.

Here's how we'll do it.

1. Import the required libraries.

import pandas as pd
from typing import List
from IPython import get_ipython
from autogen.coding import CodeBlock, CodeExecutor, CodeExtractor, CodeResult, MarkdownCodeExtractor

2. Define the executor that will extract and run code from the Jupyter cells.

class NotebookExecutor(CodeExecutor):
    @property
    def code_extractor(self) -> CodeExtractor:
        # Extract code from markdown blocks.
        return MarkdownCodeExtractor()

    def __init__(self) -> None:
        # Get the current IPython instance running in this notebook.
        self._ipython = get_ipython()

    def execute_code_blocks(self, code_blocks: List[CodeBlock]) -> CodeResult:
        log = ""
        for code_block in code_blocks:
            result = self._ipython.run_cell("%%capture --no-display cap\n" + code_block.code)
            log += self._ipython.ev("cap.stdout")
            log += self._ipython.ev("cap.stderr")
            if result.result is not None:
                log += str(result.result)
            exitcode = 0 if result.success else 1
            if result.error_before_exec is not None:
                log += f"\n{result.error_before_exec}"
                exitcode = 1
            if result.error_in_exec is not None:
                log += f"\n{result.error_in_exec}"
                exitcode = 1
            if exitcode != 0:
                break
        return CodeResult(exit_code=exitcode, output=log)
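Before wiring it into an agent, you can sanity-check the executor directly. This is a hypothetical test, not part of the original walkthrough; CodeBlock takes the code string and its language:

# Quick sanity check of the custom executor
executor = NotebookExecutor()
test_result = executor.execute_code_blocks(
    [CodeBlock(code="print(2 + 2)", language="python")]
)
print(test_result.exit_code, test_result.output)  # exit code 0 and the captured output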

3. Define the agents.

code_writer_agent = ConversableAgent(
    name="CodeWriter",
    system_message="You are a helpful AI assistant.\n"
    "You use your coding skill to solve problems.\n"
    "You have access to an IPython kernel to execute Python code.\n"
    "You can suggest Python code in Markdown blocks, each block is a cell.\n"
    "The code blocks will be executed in the IPython kernel in the order you suggest them.\n"
    "All necessary libraries have already been installed.\n"
    "Add return or print statements to the code to get the output.\n"
    "Once the task is done, return 'TERMINATE'.",
    llm_config={"config_list": [{"model": "gpt-4o-mini"}]},
)
code_executor_agent = ConversableAgent(
    name="CodeExecutor",
    llm_config=False,
    code_execution_config={"executor": NotebookExecutor()},
    is_termination_msg=lambda msg: "TERMINATE" in msg.get("content", "").strip().upper(),
    human_input_mode="ALWAYS"
)

4. Read the file and initiate the chat with it.

df = pd.read_csv('datasets/mountains_vs_beaches_preferences.csv')

chat_result = code_executor_agent.initiate_chat(
    code_writer_agent,
    message="What are the column names in the dataframe defined above as df?",
)
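Since the stated goal was to build an ML model on the already imported file, a follow-up prompt in the same conversation could look like this (a sketch; clear_history=False preserves the previous exchange):

# Follow-up: ask the agent to model the data already loaded as df
chat_result = code_executor_agent.initiate_chat(
    code_writer_agent,
    message="Build a simple classification model on df and print its accuracy.",
    clear_history=False,
)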

5. We can print the chat history as follows:

for chat in chat_result.chat_history[:]:
    if chat['name'] == 'CodeWriter' and 'TERMINATE' not in chat['content']:
        print("--------agent-----------")
        print(chat['content'])
    if chat['name'] == 'CodeExecutor' and 'exitcode' not in chat['content']:
        print("--------user------------")
        print(chat['content'])

As we can see again, we get both the code generated by the agent and the results of executing it.

Conclusion

AutoGen's code executors provide the flexibility and functionality AI agents need to perform real-world tasks. The command line executor enables script execution, while the Jupyter code executor supports iterative development. Custom executors, on the other hand, let developers create tailored workflows.

These tools empower AI agents to move from problem solvers to solution implementers. Developers can use these features to build intelligent systems that deliver actionable insights and automate complex processes.

Frequently Asked Questions

Q1. What is the main purpose of Code Executors in AutoGen?

A. Code Executors in AutoGen allow AI agents to generate, execute, and evaluate code in real time. This enables agents to automate tasks, perform data analysis, debug systems, and implement dynamic workflows.

Q2. What are the differences between the Command Line and Jupyter Code Executors?

A. The Command Line Executor saves and executes code as separate files, which is ideal for tasks like file management and script execution. The Jupyter Code Executor operates in a stateful environment, allowing variables to be reused and code blocks to be re-executed selectively, making it more suitable for iterative coding tasks like building ML models.

Q3. Can Code Executors be used with Docker containers?

A. Yes, both the Command Line Executor and the Jupyter Code Executor can be configured to run in Docker containers, providing a flexible, isolated environment for execution.

Q4. What is the advantage of using a Custom Code Executor?

A. Custom Code Executors allow developers to define specialized execution logic, such as running code within the same Jupyter notebook. This is useful for tasks requiring a high degree of integration or customization.

Q5. What are the prerequisites for using Code Executors in AutoGen?

A. Before using Code Executors, ensure you have the necessary API keys for your preferred LLMs. You should also have the required libraries, such as `autogen-agentchat` and `jupyter_kernel_gateway`, installed in your environment.

I'm working as an Associate Data Scientist at Analytics Vidhya, a platform dedicated to building the Data Science ecosystem. My interests lie in the fields of Natural Language Processing (NLP), Deep Learning, and AI Agents.