Deep Dive into LlamaIndex Workflow: Event-Driven LLM Architecture | by Peng Qian | Dec, 2024

Progress and shortcomings after hands-on practice

Deep Dive into LlamaIndex Workflows: Event-driven LLM architecture. Image by DALL-E-3

Recently, LlamaIndex introduced a new feature called Workflow in one of its recent versions, providing event-driven and logic-decoupling capabilities for LLM applications.
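To give a quick sense of what that looks like, below is a minimal sketch of a Workflow based on the llama_index.core.workflow API (StartEvent, StopEvent, and the @step decorator); exact names and defaults may vary slightly between versions.

```python
import asyncio
from llama_index.core.workflow import Workflow, StartEvent, StopEvent, step


class HelloWorkflow(Workflow):
    # Each step is an async method that receives an event and emits the next one.
    @step
    async def greet(self, ev: StartEvent) -> StopEvent:
        # StartEvent carries the keyword arguments passed to run().
        name = getattr(ev, "name", "world")
        # Returning StopEvent ends the workflow and sets its result.
        return StopEvent(result=f"Hello, {name}!")


async def main():
    w = HelloWorkflow(timeout=10, verbose=False)
    result = await w.run(name="LlamaIndex")
    print(result)  # "Hello, LlamaIndex!"


if __name__ == "__main__":
    asyncio.run(main())
```

Steps communicate only through the events they accept and emit, which is where the event-driven decoupling comes from.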

In today's article, we'll take a deep dive into this feature through a hands-on mini-project, exploring what's new and what's still missing. Let's get started.

More and more LLM applications are shifting toward intelligent agent architectures, expecting LLMs to fulfill user requests by calling different APIs or through multiple iterative calls.

This shift, however, brings a problem: as agent applications make more API calls, program responses slow down and code logic becomes more complex.

A typical example is ReActAgent, which involves steps like Thought, Action, Observation, and Final Answer, requiring at least three LLM calls and one tool call. If loops are needed, there will be even more I/O calls.

A typical ReAct agent will make at least three calls to the LLM. Image by Author
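To make that call count concrete, here is a minimal ReActAgent sketch, assuming the llama-index and llama-index-llms-openai packages are installed and OPENAI_API_KEY is set; with verbose=True, the Thought / Action / Observation / Final Answer trace, and thus the repeated LLM round trips, becomes visible.

```python
from llama_index.core.agent import ReActAgent
from llama_index.core.tools import FunctionTool
from llama_index.llms.openai import OpenAI


def multiply(a: float, b: float) -> float:
    """Multiply two numbers and return the result."""
    return a * b


# Wrap the plain Python function as a tool the agent can call.
multiply_tool = FunctionTool.from_defaults(fn=multiply)

llm = OpenAI(model="gpt-4o-mini")

# verbose=True prints each Thought / Action / Observation step,
# showing the multiple LLM round trips behind a single question.
agent = ReActAgent.from_tools([multiply_tool], llm=llm, verbose=True)

response = agent.chat("What is 12.3 multiplied by 4.56?")
print(response)
```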