How to Build a Streaming Agent with Burr, FastAPI, and React | by Stefan Krawczyk | Jul, 2024

An overview of how to leverage streaming with open source tools, applied to building a simple agentic chatbot

The model of our agentic application. We'll show how to build this with streaming so you can create a great user experience. Image by author.

In this post we will go over how to build an agentic chatbot that streams responses to the user, leveraging Burr's (I'm an author) streaming capabilities, FastAPI's StreamingResponse, and server-sent events (SSEs) queried by React. All of these are open source tools. This is aimed at those who want to learn more about streaming in Python and how to add interactivity to their agent/application. While the tools we use will be fairly specific, the lessons should be applicable to a wide range of streaming response implementations.

First, we'll talk about why streaming is important. Then we'll go over the open-source tooling we use. We'll walk through an example, point you to code that you can use to get started, then share more resources and alternate implementations.

You can follow along with the Burr + FastAPI code here and the frontend code here. You can also run this example (you'll need an OPENAI_API_KEY env variable) by running pip install "burr[start]" && burr, then navigating to localhost:7241/demos/streaming-chatbot (the browser will open automatically — just click demos/streaming-chatbot on the left). Note this example requires burr>=0.23.0.

While streaming media over the web is a technology from the 90s, and is now ubiquitous (video games, streaming TV, music, etc.), the recent surge in generative AI applications has seen an interest in serving and rendering streaming text, word by word.

LLMs are a fun technology (maybe even useful), but relatively slow to run, and users don't like waiting. Luckily, it's possible to stream the results so that a user sees an LLM's response as it's being generated. Furthermore, given the generally robotic and stuffy nature of LLMs, streaming can make them appear more interactive, almost as if they're thinking.

A proper implementation will allow streaming communication across multiple service boundaries, enabling intermediate proxies to augment/store the streaming data as it is presented to the user.

A simple representation of a chatbot architecture. Image by author.

While none of this is rocket science, the same tools that make web development easy and largely standardized (OpenAPI / FastAPI / React + friends, etc.) all have varying degrees of support for streaming, meaning that you often face choices that are different from what you're used to. Streaming is often an afterthought in framework design, leading to various limitations that you might not discover until you're halfway through building.

Let's go over some of the tools we'll use to implement the stack above, then walk through an example.

The tools we'll leverage to build this are nicely decoupled from each other — you can swap like for like if you want and still apply the same lessons/code.

Burr is a lightweight Python library you use to build applications as state machines. You construct your application out of a series of actions (these can be either decorated functions or objects), which declare inputs from state, as well as inputs from the user. These specify custom logic (delegating to any framework), as well as instructions on how to update state. State is immutable, which allows you to inspect it at any given point. Burr handles orchestration, monitoring, persistence, etc.

from burr.core import action, State

@action(reads=["count"], writes=["count"])
def counter(state: State) -> State:
    return state.update(count=state.get("count", 0) + 1)

You run your Burr actions as part of an application — this allows you to string them together with a series of (optionally conditional) transitions from action to action.

from burr.core import ApplicationBuilder, default, expr

app = (
    ApplicationBuilder()
    .with_actions(
        counter=counter,
        done=done  # implementation left out above
    ).with_transitions(
        ("counter", "counter", expr("count < 10")),  # Keep counting if the count is < 10
        ("counter", "done", default)  # Otherwise, we're done
    ).with_state(count=0)
    .with_entrypoint("counter")  # we have to start somewhere
    .build()
)
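As a quick sanity check, you can run this application directly and inspect the final state (a minimal sketch — assuming the run call returns the final action, result, and state, as in recent Burr versions):

# Run the state machine until it halts after the "done" action,
# then inspect the resulting (immutable) state.
final_action, result, state = app.run(halt_after=["done"])
print(final_action.name)  # "done"
print(state["count"])     # 10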

Burr comes with a user interface that enables monitoring/telemetry, as well as hooks to persist state/execute arbitrary code during execution.

You can visualize this as a flow chart, i.e. a graph / state machine:

Burr gives you this image for free. Image by author.
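If you want to generate this diagram yourself, Burr applications expose a visualize method (a sketch — the exact parameters may vary by version, and rendering requires graphviz):

# Write the state machine diagram out to a file.
app.visualize(output_file_path="./statemachine", include_conditions=True, format="png")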

And monitor it using the local telemetry debugger:

The open-source telemetry UI tells you the state of your application at any given point in time. Image by author.

While the above example is a simple illustration, Burr is commonly used for agents (like in this example), RAG applications, and human-in-the-loop AI interfaces. See the repository examples for a (more exhaustive) set of use cases. We'll go over streaming and a few more powerful features a little later.

FastAPI is a framework that lets you expose Python functions as a REST API. It has a simple interface — you write your functions, decorate them, and run your script — turning it into a server with self-documenting endpoints through OpenAPI.

from typing import Union
from fastapi import FastAPI

app = FastAPI()

@app.get("/")
def read_root():
    return {"Hello": "World"}

@app.get("/items/{item_id}")
def read_item(item_id: int, q: Union[str, None] = None):
    return {"item_id": item_id, "q": q}

FastAPI provides a myriad of benefits. It's async native, offers documentation through OpenAPI, and is easy to deploy on any cloud provider. It's infrastructure agnostic and can generally scale horizontally (so long as state management is taken into consideration). See this page for more information.

React needs no introduction — it's an extremely popular tool that powers much of the internet. Even recent popular tools (such as next.js/remix) build on top of it. For more reading, see react.dev. We will be using React along with TypeScript and Tailwind, but you can generally swap in your favorite frontend tools and still reuse much of this post.

Let's build a simple agentic chatbot — it will be agentic as it actually makes two LLM calls:

  1. A call to determine the model to query. Our model will have a few "modes" — generate a poem, answer a question, etc.
  2. A call to the actual model (in this case, a prompt + model combination)

With the OpenAI API this is more of a toy example — their models are impressive jacks of all trades. That said, this pattern of tool delegation shows up in a wide variety of AI systems, and this example can be extrapolated cleanly.

Modeling as a State Machine

To leverage Burr, we model our agentic application as a state machine. The basic flow of logic looks like this:

We start at a user prompt input (top). Then we check for safety, and if it's not safe, we go to the specific response for "unsafe". Otherwise we decide on the mode, and switch based on the value of the state field mode. Each of these returns a streaming response. Once they're done streaming, it circles back to prompt and waits for another user input… Image by author.

To model this with Burr, we will first create the corresponding actions, using the streaming API. Then we'll tie them together as an application.

Streaming Actions

In Burr, actions can leverage both a synchronous and an asynchronous API. In this case we'll be using async. Streaming functions in Burr can also be mixed and matched with non-streaming actions, but to simplify, we will implement everything as streaming. So, whether it's streaming from OpenAI (which has its own async streaming interface) or returning a fixed Sorry, I cannot answer this question response, it will still be implemented as a generator.

For those who are unfamiliar, generators are a Python construct that enables efficient, lazy evaluation over a sequence of values. They are created by the yield keyword, which cedes control from the function back to the caller until the next item is needed. Async generators function similarly, except they also cede control of the event loop on yield. Read more about synchronous generators and asynchronous generators.
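As a quick illustration (not part of the example's code), here is a pair of toy generators — one synchronous, one asynchronous — that stream tokens one at a time:

import asyncio
from typing import AsyncGenerator, Generator

def tokens() -> Generator[str, None, None]:
    # Control returns to the caller at each yield
    for token in ["Hello", " ", "world"]:
        yield token

async def tokens_async() -> AsyncGenerator[str, None]:
    # Also cedes control of the event loop at each yield
    for token in ["Hello", " ", "world"]:
        await asyncio.sleep(0.1)  # simulate waiting on I/O
        yield token

async def main():
    print("".join(tokens()))
    async for token in tokens_async():
        print(token, end="")

asyncio.run(main())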

Streaming actions in Burr are implemented as a generator that yields tuples, consisting of:

  1. The intermediate result (in this case, the delta token in the message)
  2. The final state update if it is complete, or None if it is still generating

Thus the final yield will indicate that the stream is complete, and output a final result for storage/debugging later. A basic response that proxies to OpenAI with some custom prompt manipulation looks like this:

@streaming_action(reads=["prompt", "chat_history", "mode"], writes=["response"])
async def chat_response(
    state: State, prepend_prompt: str, model: str = "gpt-3.5-turbo"
) -> AsyncGenerator[Tuple[dict, Optional[State]], None]:
    """A simple proxy.

    This massages the chat history to pass the context to OpenAI,
    streams the result back, and finally yields the completed result
    with the state update.
    """
    client = _get_openai_client()
    # code skipped that prepends a custom prompt and formats chat history
    chat_history_for_openai = _format_chat_history(
        state["chat_history"],
        prepend_final_prompt=prepend_prompt)
    result = await client.chat.completions.create(
        model=model, messages=chat_history_for_openai, stream=True
    )
    buffer = []

    async for chunk in result:
        chunk_str = chunk.choices[0].delta.content
        if chunk_str is None:
            continue
        buffer.append(chunk_str)
        yield {"delta": chunk_str}, None

    result = {
        "response": {"content": "".join(buffer), "type": "text", "role": "assistant"},
    }
    yield result, state.update(**result).append(chat_history=result["response"])

In the example, we also have a few other streaming actions — these represent the "terminal" actions, i.e. actions that will trigger the workflow to pause when the state machine completes them.
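For instance, the canned response for the "unsafe" case can still be written as a streaming action, even though nothing is actually streamed from an LLM (a simplified sketch — the real action in the example differs slightly):

@streaming_action(reads=[], writes=["response"])
async def unsafe_response(state: State) -> AsyncGenerator[Tuple[dict, Optional[State]], None]:
    """Streams a fixed refusal message, then yields the final state update."""
    content = "Sorry, I cannot answer this question."
    for token in content.split():
        yield {"delta": token + " "}, None
    result = {"response": {"content": content, "type": "text", "role": "assistant"}}
    yield result, state.update(**result).append(chat_history=result["response"])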

Building an Application

To build the application, we're first going to build a graph. We'll be using the Graph API for Burr, allowing us to decouple the shape of the graph from other application concerns. In a web service the Graph API is a very clean way to express state machine logic. You can build it once, globally, then reuse it per individual application instance. The graph builder looks like this — note it refers to the function chat_response from above:

# Constructing a graph from actions (labeled by kwargs) and
# transitions (conditional or default).
graph = (
    GraphBuilder()
    .with_actions(
        prompt=process_prompt,
        check_safety=check_safety,
        decide_mode=choose_mode,
        generate_code=chat_response.bind(
            prepend_prompt="Please respond with *only* code and no other text "
            "(at all) to the following",
        ),
        # more left out for brevity
    )
    .with_transitions(
        ("prompt", "check_safety", default),
        ("check_safety", "decide_mode", when(safe=True)),
        ("check_safety", "unsafe_response", default),
        ("decide_mode", "generate_code", when(mode="generate_code")),
        # more left out for brevity
    )
    .build()
)

Finally, we can put this together in an Application — which exposes the right execution methods for the server to interact with:

# Here we couple more application concerns (telemetry, tracking, etc…).
app = (
    ApplicationBuilder()
    .with_entrypoint("prompt")
    .with_state(chat_history=[])
    .with_graph(graph)
    .with_tracker(project="demo_chatbot_streaming")
    .with_identifiers(app_id=app_id)
    .build()
)

When we want to run it, we can call astream_result. This takes in a set of halting conditions, and returns an AsyncStreamingResultContainer (a generator that caches the result and ensures Burr tracking is called), as well as the action that triggered the halt.

# Running the application as you would to test
# (in a jupyter notebook, for instance).
action, streaming_container = await app.astream_result(
    halt_after=["generate_code", "unsafe_response", ...],  # terminal actions
    inputs={
        "prompt": "Please generate a limerick about Alexander Hamilton and Aaron Burr"
    }
)

async for item in streaming_container:
    print(item['delta'], end="")

Now that we have the Burr application, we'll want to integrate it with FastAPI's streaming response API using server-sent events (SSEs). While we won't dig too much into SSEs, the TL;DR is that they function as a one-way (server → client) version of websockets. You can read more in the links at the end.

To use these in FastAPI, we declare an endpoint as a function that returns a StreamingResponse — a class that wraps a generator. The standard is to provide streaming responses in a special shape, "data: <contents>\n\n". Read more about why here. While this is largely meant for the EventSource API (which we will be bypassing in favor of fetch and getReader()), we will keep this format to follow the standard (and so that anyone using the EventSource API can reuse this code).

We have separately implemented _get_application, a utility function to get/load an application by ID.
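The real version lives in the example's server code; a simplified sketch (ignoring reloading of prior state, and reusing the graph built above) might cache one application per (project_id, app_id) pair:

# A simplified sketch of _get_application: cache one Burr application per
# (project_id, app_id) pair, reusing the globally built graph from above.
# The real implementation in the example also rehydrates prior state.
from burr.core import Application

_app_cache: dict = {}

def _get_application(project_id: str, app_id: str) -> Application:
    key = (project_id, app_id)
    if key not in _app_cache:
        _app_cache[key] = (
            ApplicationBuilder()
            .with_graph(graph)
            .with_entrypoint("prompt")
            .with_state(chat_history=[])
            .with_tracker(project=project_id)
            .with_identifiers(app_id=app_id)
            .build()
        )
    return _app_cache[key]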

The function will be a POST endpoint, as we are adding data to the server, although it could easily be a PUT as well.

@app.post("/response/{project_id}/{app_id}", response_class=StreamingResponse)
async def chat_response(project_id: str, app_id: str, prompt: PromptInput) -> StreamingResponse:
    """A simple API that wraps our Burr application."""
    burr_app = _get_application(project_id, app_id)
    chat_history = burr_app.state.get("chat_history", [])
    action, streaming_container = await burr_app.astream_result(
        halt_after=chat_application.TERMINAL_ACTIONS, inputs=dict(prompt=prompt.prompt)
    )

    async def sse_generator():
        yield f"data: {json.dumps({'type': 'chat_history', 'value': chat_history})}\n\n"

        async for item in streaming_container:
            yield f"data: {json.dumps({'type': 'delta', 'value': item['delta']})}\n\n"

    return StreamingResponse(sse_generator())

Note that we define a generator inside the function that wraps the Burr result and turns it into SSE-friendly outputs. This allows us to impose some structure on the result, which we will use on the frontend. Unfortunately, we will have to parse it on our own, as FastAPI does not enable strict typing of a StreamingResponse.

Furthermore, we actually yield the entire state at the beginning, prior to execution. While this isn't strictly necessary (we could also have a separate API for chat history), it will make rendering easier.
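Put together, the raw stream a client receives looks roughly like this (contents illustrative):

data: {"type": "chat_history", "value": [...]}

data: {"type": "delta", "value": "There"}

data: {"type": "delta", "value": " once"}

data: {"type": "delta", "value": " was"}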

To test this, you can use the requests library's Response.iter_lines API.
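For example, a quick script along these lines (the URL/IDs are placeholders — adjust them to wherever you're running the server):

import requests

response = requests.post(
    "http://localhost:7241/api/v0/streaming_chatbot/response/<project_id>/<app_id>",
    json={"prompt": "Please generate a limerick about Alexander Hamilton and Aaron Burr"},
    stream=True,  # don't buffer the whole response before returning
)
for line in response.iter_lines():
    if line:  # skip the blank lines between SSE messages
        print(line.decode("utf-8"))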

Now that we have a server, our state machine, and our LLM lined up, let's make it look nice! This is where it all ties together. While you can download and play with the entirety of the code in the example, we will be focusing on the function that queries the API when you click "send".

This is what the UI looks like. You can run this via the packaged telemetry UI that Burr comes with. Image by author.

First, let's query our API using fetch (obviously adjust this to your endpoint — in this case we're proxying all /api calls to another server…):

// A simple fetch call with getReader()
const response = await fetch(
  `/api/v0/streaming_chatbot/response/${props.projectId}/${props.appId}`,
  {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ prompt: currentPrompt })
  }
);
const reader = response.body?.getReader();

This looks like a plain old API call, leveraging the TypeScript async API. It extracts a reader object, which will help us stream results as they come in.

Let's define some data types to leverage the structure we created above. In addition to the ChatItem data type (which was generated using openapi-typescript-codegen), we'll also define two types, which correspond to the data types returned by the server.

// Datatypes on the frontend.
// The contract is loose, as nothing in the framework encodes it
type Event = {
  type: 'delta' | 'chat_history';
};

type ChatMessageEvent = Event & {
  value: string;
};

type ChatHistoryEvent = Event & {
  value: ChatItem[];
};

Next, we'll iterate through the reader and parse its output. This assumes the following state variables in React (sketched below):

  • setCurrentResponse/currentResponse
  • setDisplayedChatHistory
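A minimal sketch of those hooks (the ChatItem type comes from the generated client model mentioned above):

import { useState } from 'react';

// Inside the chatbot component: the in-progress response and the chat history.
const [currentResponse, setCurrentResponse] = useState<string>('');
const [displayedChatHistory, setDisplayedChatHistory] = useState<ChatItem[]>([]);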

We read through, splitting on "data:", then loop through the splits, parsing and reacting depending on the event type.

// Loop through, continuously getting the stream.
// For each item, parse it as our desired datatype and react accordingly.
const decoder = new TextDecoder(); // decodes the raw bytes from the reader
while (true) {
  const result = await reader.read();
  if (result.done) {
    break;
  }
  const message = decoder.decode(result.value, { stream: true });
  message
    .split('data: ')
    .slice(1)
    .forEach((item) => {
      const event: Event = JSON.parse(item);
      if (event.type === 'chat_history') {
        const chatHistoryEvent = event as ChatHistoryEvent;
        setDisplayedChatHistory(chatHistoryEvent.value);
      }
      if (event.type === 'delta') {
        const chatMessageEvent = event as ChatMessageEvent;
        chatResponse += chatMessageEvent.value;
        setCurrentResponse(chatResponse);
      }
    });
}

We've left out some cleanup/error handling code (to clear/initialize the state variables before/after requests, handle failure, etc.) — you can see more in the example.

Finally, we can render it (note this refers to more state variables that are set/unset outside of the code above, as well as a ChatMessage React component that simply displays a chat message with the appropriate icon).

{/* More to illustrate the example */}
<div className="flex-1 overflow-y-auto p-4 hide-scrollbar" id={VIEW_END_ID}>
  {displayedChatHistory.map((message, i) => (
    <ChatMessage
      message={message}
      key={i}
    />
  ))}
  {isChatWaiting && (
    <ChatMessage
      message={{
        role: ChatItem.role.USER,
        content: currentPrompt,
        type: ChatItem.type.TEXT
      }}
    />
  )}
  {isChatWaiting && (
    <ChatMessage
      message={{
        content: currentResponse,
        type: ChatItem.type.TEXT,
        role: ChatItem.role.ASSISTANT
      }}
    />
  )}
</div>
{/* Note: We've left out the isChatWaiting and currentPrompt state fields above,
    see StreamingChatbot.tsx for the full implementation. */}

We finally have our whole app! For all of the code, click here.

Note that what we presented above is just one approach to streaming with FastAPI/React/Burr. There are a host of other tools you can use, including:

As well as a number of other blog posts (which are awesome! I read these to get started). These will give you a better sense of architecture as well.

In this post we covered a lot — we went over Burr, FastAPI, and React, talked about how to build a streaming agentic chatbot using the OpenAI API, built out the entire stack, and streamed data all the way through! While you may not use every one of these technologies, the individual pieces should be able to work on their own.

To download and play with this example, you can run:

pip install "burr[start]"
burr # will open up in a new window

Note you'll need an API key from OpenAI for this specific demo. You can find the Burr + FastAPI code here and the frontend code here.
