One of the most interesting areas in agentic AI systems is the difference between building a single-agent versus a multi-agent workflow, or, put another way, between working with more flexible versus more controlled systems.
This article will help you understand what agentic AI is, how to build simple workflows with LangGraph, and the differences in results you can achieve with the different architectures. I'll demonstrate this by building a tech news agent with various data sources.
As for the use case, I'm a bit obsessed with getting automated news updates, based on my preferences, without drowning in information overload every day.

Summarizing and gathering research is one of those areas where agentic AI can really shine.
So follow along while I keep trying to make AI do the grunt work for me, and we'll see how a single-agent setup compares to a multi-agent one.
I always keep my work jargon-free, so if you're new to agentic AI, this piece should help you understand what it is and how to work with it. If you're not new to it, you can scroll past some of the sections.
Agentic AI (& LLMs)
Agentic AI is about programming with natural language. Instead of using rigid, explicit code, you're instructing large language models (LLMs) to route data and perform actions through plain language to automate tasks.
Using natural language in workflows isn't new; we've used NLP for years to extract and process data. What's new is the amount of freedom we can now give language models, allowing them to handle ambiguity and make decisions dynamically.

But just because LLMs can understand nuanced language doesn't mean they inherently validate facts or maintain data integrity. I see them primarily as a communication layer that sits on top of structured systems and existing data sources.

I usually explain it like this to non-technical people: they work a bit like we do. If we don't have access to clean, structured data, we start making things up. Same with LLMs. They generate responses based on patterns, not fact-checking.
So just like us, they do their best with what they've got. If we want better output, we need to build systems that give them reliable data to work with. That's why agentic systems integrate ways for them to interact with different data sources, tools, and systems.
Now, just because we can use these larger models in more places doesn't mean we should. LLMs shine when interpreting nuanced natural language: think customer service, research, or human-in-the-loop collaboration.
But for structured tasks, like extracting numbers and sending them somewhere, you should use traditional approaches. LLMs aren't inherently better at math than a calculator. So instead of having an LLM do calculations, you give the LLM access to a calculator.
So whenever you can build parts of a workflow programmatically, that is still the better option.
Still, LLMs are great at adapting to messy real-world input and interpreting vague instructions, so combining the two can be a great way to build systems.
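To make the calculator point concrete, here's a minimal sketch of tool calling with LangChain, assuming a Gemini chat model; the calculator tool and the example question are my own illustration, not part of the workflow in this article:

```python
from langchain_core.tools import tool
from langchain_google_genai import ChatGoogleGenerativeAI

@tool
def calculator(expression: str) -> float:
    """Evaluate a basic arithmetic expression such as '1847 * 392'."""
    # Plain Python does the math; the LLM only decides when to call this.
    return eval(expression, {"__builtins__": {}}, {})

llm = ChatGoogleGenerativeAI(model="gemini-2.0-flash")
llm_with_tools = llm.bind_tools([calculator])

# Instead of guessing at the arithmetic, the model emits a tool call.
response = llm_with_tools.invoke("What is 1847 * 392?")
print(response.tool_calls)
```

The point isn't the calculator itself: the deterministic part stays deterministic, and the model only handles the language around it.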
Agentic Frameworks
I know a lot of people jump straight to CrewAI or AutoGen here, but I'd recommend checking out LangGraph, Agno, Mastra, and Smolagents. Based on my research, these frameworks have received some of the strongest feedback so far.

LangGraph is more technical and can be complex, but it's the preferred choice for many developers. Agno is easier to get started with but less technical. Mastra is a solid option for JavaScript developers, and Smolagents shows a lot of promise as a lightweight alternative.
In this case, I've gone with LangGraph, which is built on top of LangChain, not because it's my favorite, but because it's becoming a go-to framework that more developers are adopting.
So it's worth being familiar with.
It has a lot of abstractions though, and you might need to rebuild some of them just to be able to control and understand the workflow better.
I can't go into detail on LangGraph here, so I decided to build a quick guide for those who need an overview.
As for this use case, you'll be able to run the workflow without coding anything, but if you're here to learn, you may also want to understand how it works.
Selecting an LLM
Now, you might jump into this and wonder why I'm choosing certain LLMs as the base for the agents.
You can't just pick any model, especially when working within a framework. They have to be compatible. Key things to look for are tool-calling support and the ability to generate structured outputs.
I'd recommend checking Hugging Face's Agent Leaderboard to see which models actually perform well in real-world agentic systems.
For this workflow, you should be fine using models from Anthropic, OpenAI, or Google. If you're considering another one, just make sure it's compatible with LangChain.
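A quick way to sanity-check compatibility is to try structured output through LangChain before wiring the model into a graph. This is just a sketch, and the NewsSummary schema is a made-up example:

```python
from pydantic import BaseModel
from langchain_openai import ChatOpenAI

class NewsSummary(BaseModel):
    headline: str
    key_points: list[str]

llm = ChatOpenAI(model="gpt-4o-mini")

# If the model can't produce structured output, this is where you'll find out.
structured_llm = llm.with_structured_output(NewsSummary)
result = structured_llm.invoke("Summarize: LangGraph adoption keeps growing among developers.")
print(result.headline, result.key_points)
```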
Single vs. Multi-Agent Systems
If you build a system around one LLM and give it a bunch of tools you want it to use, you're working with a single-agent workflow. It's fast, and if you're new to agentic AI, it might seem like the model should just figure things out on its own.

But the thing is, these workflows are just another form of system design. Like any software project, you need to plan the process, define the steps, structure the logic, and decide how each part should behave.

This is where multi-agent workflows come in.
Not all of them are hierarchical or linear though; some are collaborative. Collaborative workflows also fall under the more flexible approach, which I find harder to work with, at least with the capabilities that exist today.
Still, collaborative workflows do break apart different capabilities into their own modules.
Single-agent and collaborative workflows are great to start with when you're just playing around, but they don't always give you the precision needed for real tasks.
For the workflow I'll build here, I already know how the APIs should be used, so it's my job to guide the system to use them the right way.
We'll compare a single-agent setup with a hierarchical multi-agent system, where a lead agent delegates tasks across a small team, so you can see how they behave in practice.
Building a Single-Agent Workflow
With a single thread, i.e. one agent, we give an LLM access to several tools. It's up to the agent to decide which tool to use and when, based on the user's question.

The challenge with a single agent is control.
No matter how detailed the system prompt is, the model may not follow our requests (this can happen in more controlled setups too). If we give it too many tools or options, there's a good chance it won't use all of them, or even the right ones.
To illustrate this, we'll build a tech news agent with access to several API endpoints serving custom data, each with several options as tool parameters. It's up to the agent to decide how many of them to use and how to put together the final summary.
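To give you a feel for the shape of this before you open the repo, here's a stripped-down sketch of a single agent with tools in LangGraph. The tool names and return values are placeholders standing in for the custom news API, not the actual implementation:

```python
from langchain_core.tools import tool
from langchain_google_genai import ChatGoogleGenerativeAI
from langgraph.prebuilt import create_react_agent

@tool
def get_trending_keywords(category: str, period: str = "weekly") -> list[dict]:
    """Fetch trending tech keywords for a category (placeholder for the real API call)."""
    return [{"keyword": "AI", "mentions": 1240}]

@tool
def get_sources(keyword: str) -> list[str]:
    """Fetch source articles for a keyword (placeholder for the real API call)."""
    return ["https://example.com/article-about-ai"]

llm = ChatGoogleGenerativeAI(model="gemini-2.0-flash")

# One agent, several tools: the model alone decides which tools to call and when.
agent = create_react_agent(
    llm,
    tools=[get_trending_keywords, get_sources],
    prompt="You are a tech news researcher. Use the tools to gather and summarize news.",
)

result = agent.invoke({"messages": [("user", "What's trending in tech this week?")]})
print(result["messages"][-1].content)
```

The prompt argument has been renamed across langgraph releases, so check the version you have installed if you adapt this.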
Remember, I build these workflows using LangGraph. I won't go into LangGraph in depth here, so if you want to learn the basics so you can tweak the code, go here.
You can find the single-agent workflow here. To run it, you'll need LangGraph Studio and the latest version of Docker installed.
When you’re arrange, open the undertaking folder in your pc, add your GOOGLE_API_KEY
in a .env
file, and save. You will get a key from Google right here.
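The .env file itself only needs a single line, using the variable name mentioned above:

```
GOOGLE_API_KEY=your-key-here
```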
Gemini 2.0 Flash has a generous free tier, so running this shouldn't cost anything (though you may run into errors if you use it too heavily).
If you want to switch to another LLM or different tools, you can tweak the code directly. But again, remember that the LLM needs to be compatible.
After setup, launch LangGraph Studio and select the correct folder.
This will boot up the workflow so we can test it.

If you run into issues booting it up, double-check that you're using the latest version of Docker.
Once it's loaded, you can test the workflow by entering a human message and hitting submit.

You can see me run the workflow below.

You can see the final response below.

For this prompt, it decided to check weekly trending keywords filtered by the category 'companies' only, then fetched the sources for those keywords and summarized them for us.
It had some issues producing a unified summary: it simply used the information it received last and failed to use all of the research.
In reality, we want it to fetch both trending and top keywords across several categories (not just companies), check sources, track specific keywords, and reason about and summarize it all properly before returning a response.
We can of course probe it and keep asking questions, but as you can imagine, once we need something more complex it starts taking shortcuts in the workflow.
The key thing is, an agent system isn't just going to think the way we expect; we have to actually orchestrate it to do what we want.
So a single agent is great for something simple, but as you can imagine, it may not think or behave the way we expect.
This is why moving to a more complex system, where each agent is responsible for one thing, can be really useful.
Testing a Multi-Agent Workflow
Building multi-agent workflows is quite a bit harder than building a single agent with access to a few tools. To do it, you need to think carefully about the architecture beforehand and how data should flow between the agents.
The multi-agent workflow I'll set up here uses two different teams, a research team and an editing team, with several agents under each.
Every agent has access to a specific set of tools.

We're introducing some new tools, like a research pad that acts as a shared space: one team writes its findings there, the other reads from it. The last LLM reads everything that has been researched and edited and produces a summary.
An alternative to a research pad is to store data in a scratchpad in state, isolating short-term memory for each team or agent. But that also means thinking carefully about what each agent's memory should include.
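In its simplest form, a research pad is just a pair of tools wrapping a shared buffer. This is a rough sketch of the idea, not the repo's actual implementation:

```python
from langchain_core.tools import tool

# Shared buffer: the research team writes to it, the editing team reads from it.
_research_pad: list[str] = []

@tool
def write_to_research_pad(note: str) -> str:
    """Append a research finding to the shared research pad."""
    _research_pad.append(note)
    return f"Saved note #{len(_research_pad)} to the research pad."

@tool
def read_research_pad() -> str:
    """Return everything written to the research pad so far."""
    return "\n\n".join(_research_pad) if _research_pad else "The research pad is empty."
```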
I also decided to build out the tools a bit more to provide richer data upfront, so the agents don't have to fetch sources for each keyword individually. Here I'm using normal programmatic logic, because I can.
A key thing to remember: if you can use normal programming logic, do it.
Since we're using several agents, you can lower costs by using cheaper models for most of them and reserving the more expensive ones for the important steps.
Here, I'm using Gemini 2.0 Flash for all agents except the summarizer, which runs on OpenAI's GPT-4o. If you want higher-quality summaries, you can use an even more advanced LLM with a larger context window.
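In code, that split just means instantiating two chat models and handing the pricier one only to the summarizer node. A small sketch with the models I'm using here:

```python
from langchain_google_genai import ChatGoogleGenerativeAI
from langchain_openai import ChatOpenAI

# Cheap, fast model for the research and editing agents.
worker_llm = ChatGoogleGenerativeAI(model="gemini-2.0-flash", temperature=0)

# More capable (and more expensive) model reserved for the final summary.
summarizer_llm = ChatOpenAI(model="gpt-4o", temperature=0)
```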
The workflow is set up for you here. Before loading it, make sure to add both your OpenAI and Google API keys in a .env file.
In this workflow, the routes (edges) are set up dynamically instead of manually, as we did with the single agent. It will look more complex if you peek into the code.
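Dynamic routing in LangGraph typically means a supervisor node that decides who acts next, plus a conditional edge that reads that decision from state. Below is a simplified sketch of the pattern; the team nodes are stubs and the names are illustrative, not the repo's exact structure:

```python
from typing import Annotated, Literal
from typing_extensions import TypedDict
from pydantic import BaseModel
from langchain_google_genai import ChatGoogleGenerativeAI
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages

class TeamState(TypedDict):
    messages: Annotated[list, add_messages]
    next: str

class RouteDecision(BaseModel):
    next: Literal["research_team", "editing_team", "FINISH"]

supervisor_llm = ChatGoogleGenerativeAI(model="gemini-2.0-flash")

def supervisor(state: TeamState) -> dict:
    # An LLM with structured output picks which team should act next.
    decision = supervisor_llm.with_structured_output(RouteDecision).invoke(state["messages"])
    return {"next": decision.next}

def research_team(state: TeamState) -> dict:
    return {"messages": [("ai", "Research notes would be produced here.")]}  # stub node

def editing_team(state: TeamState) -> dict:
    return {"messages": [("ai", "Edited text would be produced here.")]}  # stub node

builder = StateGraph(TeamState)
builder.add_node("supervisor", supervisor)
builder.add_node("research_team", research_team)
builder.add_node("editing_team", editing_team)
builder.add_edge(START, "supervisor")
builder.add_edge("research_team", "supervisor")  # teams report back to the supervisor
builder.add_edge("editing_team", "supervisor")

# The edge out of the supervisor is resolved at runtime rather than hard-coded.
builder.add_conditional_edges(
    "supervisor",
    lambda state: state["next"],
    {"research_team": "research_team", "editing_team": "editing_team", "FINISH": END},
)
graph = builder.compile()
```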
Once you boot up the workflow in LangGraph Studio (same process as before), you'll see the graph with all the nodes ready.

LangGraph Studio lets us visualize how the system delegates work between agents when we run it, just like we saw in the simpler workflow above.
Since I understand the tools each agent is using, I can prompt the system the right way. But regular users won't know how to do that properly. So if you're building something similar, I'd suggest introducing an agent that transforms the user's query into something the other agents can actually work with.
We can test it out by setting a message.
“I’m an investor and I’m interested in getting an update on what has happened during the week in tech, and what people are talking about (this means categories like companies, people, websites, and subjects are interesting). Please also track these specific keywords: AI, Google, Microsoft, and Large Language Models.”
Then we choose “supervisor” as the Next parameter (we'd normally do this programmatically).

This workflow will take several minutes to run, unlike the single-agent workflow we ran earlier, which finished in under a minute.
So be patient while the tools are working.
In general, these systems take time to gather and process information, and that's just something we need to get used to.
The final summary will look something like this:

You can read the whole thing here instead if you want to check it out.
The news will obviously vary depending on when you run the workflow. I ran it on the 28th of March, so the example report is for that date.
It should save the summary to a text document, but if you're running this inside a container, you likely won't be able to access that file easily. It's better to send the output somewhere else, like Google Docs or via email.
As for the results, I'll let you decide for yourself how a more complex system compares to a simple one, and how much more control it gives us over the process.
Ending Notes
I'm working with a good data source here. Without that, you'd need to add a lot more error handling, which would slow everything down even further.
Clean and structured data is key. Without it, the LLM won't perform at its best.
Even with solid data, it's not perfect. You still have to work on the agents to make sure they do what they're supposed to.
You've probably already noticed that the system works, but it's not quite there yet.
There are still several things that need improvement: parsing the user's query into a more structured format, adding guardrails so agents always use their tools, summarizing more effectively to keep the research document concise, improving error handling, and introducing long-term memory to better understand what the user actually needs.
State (short-term memory) is especially important if you want to optimize for performance and cost.
Right now, we're just pushing every message into state and giving all agents access to it, which isn't ideal. We really want to separate state between the teams. I haven't done that here, but you can try it by introducing a scratchpad in the state schema to isolate what each team knows.
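A scratchpad here would just be extra keys in the state schema, so each team only reads and writes the part meant for it. A sketch of what that separation could look like; the key names are illustrative:

```python
import operator
from typing import Annotated
from typing_extensions import TypedDict
from langgraph.graph.message import add_messages

class WorkflowState(TypedDict):
    # Full conversation history, mainly for the supervisor.
    messages: Annotated[list, add_messages]
    # Isolated scratchpads: each team appends to and reads only its own notes.
    research_scratchpad: Annotated[list[str], operator.add]
    editing_scratchpad: Annotated[list[str], operator.add]

def research_node(state: WorkflowState) -> dict:
    # Findings go to the research scratchpad instead of the shared message history.
    return {"research_scratchpad": ["Trending keyword: AI, 1,240 mentions this week."]}
```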
Regardless, I hope it was a fun exercise in understanding the results we can get by building different agentic workflows.
If you want to see more of what I'm working on, you can follow me here, but also on Medium, GitHub, or LinkedIn (though I'm hoping to move over to X soon). I also have a Substack, where I hope to publish shorter pieces.
❤️