Implementing “Modular RAG” with Haystack and Hypster | by Gilad Rubin | Oct, 2024

Transforming RAG systems into LEGO-like reconfigurable frameworks

Image generated using Midjourney AI, prompted by the author

Keeping up with the latest in AI can be a challenge, especially when it comes to a rapidly evolving field like Retrieval Augmented Generation (RAG). You've probably seen countless articles and code examples on different platforms, and you may have felt overwhelmed by them. With so many different solutions and implementations, one can easily feel lost.

I struggled with this myself for a long time, trying to wrap my head around every new article or "trick" that promised to make RAG systems better in one way or another. Every new paper, tutorial, or blog post felt like something completely new, and it became increasingly difficult to keep up with all the acronyms for the latest fancy methods - HyDE, RAPTOR, CRAG, FLARE - they started to sound like Pokémon character names to me.

Then I came across the paper by Gao et al. (2024), "Modular RAG: Transforming RAG Systems into LEGO-like Reconfigurable Frameworks".

The main figure from the paper, showing the components from which the authors assemble RAG solutions. Source: Modular RAG

This paper provides a structured approach for breaking down RAG systems into a unified framework that can encompass various solutions and approaches. The authors propose six main components:

  • Indexing: Organize your data for efficient search.
  • Pre-Retrieval: Process the user's query before searching.
  • Retrieval: Find the most relevant information.
  • Post-Retrieval: Refine the retrieved information.
  • Generation: Use an LLM to generate a response.
  • Orchestration: Control the overall flow of the system.

The key insight from this paper is that a wide range of existing RAG solutions can be described using these components in a LEGO-like manner. This modularity provides a framework for understanding, designing, and navigating the process of building a RAG system with greater flexibility and clarity.
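To make the decomposition concrete, here is a rough, purely illustrative sketch (my own, not from the paper) of a plain retrieve-then-generate flow expressed with these six building blocks:

from typing import List

# Illustrative skeleton only - the paper defines conceptual modules, not this exact API
def index(raw_docs: List[str]) -> "DocumentStore": ...               # Indexing
def pre_retrieval(query: str) -> str: ...                            # Pre-Retrieval (e.g. query rewriting)
def retrieve(query: str, store: "DocumentStore") -> List[str]: ...   # Retrieval
def post_retrieval(query: str, docs: List[str]) -> List[str]: ...    # Post-Retrieval (e.g. reranking)
def generate(query: str, docs: List[str]) -> str: ...                # Generation

def orchestrate(query: str, store: "DocumentStore") -> str:          # Orchestration: controls the flow
    rewritten = pre_retrieval(query)
    docs = post_retrieval(rewritten, retrieve(rewritten, store))
    return generate(rewritten, docs)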

In the paper, the authors showcase how this is possible by taking examples of existing RAG solutions and expressing them using the same building blocks. For example:

Adaptive RAG flow - where the "judge" decides whether or not to use retrieval. Source: Modular RAG
FLARE - Forward-Looking Active REtrieval, where each sentence can trigger a retrieval step. Source: Modular RAG

I highly recommend reading the paper and the accompanying set of blog posts by the paper's author, Yunfan Gao: Modular RAG and RAG Flow: Part I, Part II.

Personally, this framework helped me understand how different RAG approaches relate to one another, and now I can easily make sense of new papers and implementations.

So, how do we actually implement this "Modular RAG" framework?

Since it's more of a meta-framework, what does that mean in practical terms? Does it mean that we need to implement all the possible combinations of components? Or do we just build the individual components and let developers figure out how to put them together?

I believe that in most real-life situations it's not necessary to try to cover every possible RAG configuration, but rather to narrow down the space of relevant configurations based on the requirements and constraints of each project.

In this tutorial, I'll show you a concrete example of how to build a configurable system using a small set of options. Hopefully, this will give you the right perspective and tools to create your own version of a Modular RAG that contains the set of relevant configurations for your specific use case.

Let's go on to explore the two main tools we'll be using:

Haystack is an open-source framework for building production-ready LLM applications, retrieval-augmented generative pipelines, and state-of-the-art search systems that work intelligently over large document collections.

Pros:

  • Great component design
  • The pipeline is very flexible and allows for dynamic configurations
  • Extremely (!) well documented
  • The framework includes many existing implementations and integrations with Generative AI providers.

Cons:

  • The pipeline interface can be a bit verbose
  • Using components outside of a pipeline isn't very ergonomic.

I've played around with a few different Generative AI frameworks, and Haystack was by far the easiest for me to understand, use, and customize.

Hypster is a lightweight Pythonic configuration system for AI & Machine Learning projects. It offers minimal, intuitive Pythonic syntax, supporting hierarchical and swappable configurations.

Hypster is a new open-source project that I've developed to enable a new kind of programming paradigm for AI & ML workflows - one that moves beyond single solutions towards a "superposition of workflows" or a "hyper-workflow."

Hypster lets you define a range of possible configurations and easily switch between them for experimentation and optimization. This makes it simple to add and customize your own configuration spaces, instantiate them with different settings, and ultimately select the optimal configuration for your production environment.
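As a tiny illustration of the idea (a minimal sketch of my own, not code from this article's repository), a configuration space and one of its instantiations might look like this:

from hypster import config, HP

@config
def retriever_config(hp: HP):
    # Hypothetical parameters, purely to show the define-then-instantiate pattern
    retriever_type = hp.select(["bm25", "embedding"], default="embedding")
    top_k = hp.int_input(5)

# Instantiate one concrete configuration out of the space
experiment = retriever_config(selections={"retriever_type": "bm25"},
                              overrides={"top_k": 10})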

Note: Hypster is currently under active development. It is not yet recommended for production environments.

This is an advanced tutorial. It assumes you're already familiar with the main components of RAG.

I'll break down the main parts of the codebase and provide my insights as we go.

The full and updated code is in the following repository. Don't forget to add your ⭐️

Let's start with our LLM configuration-space definition:

from hypster import config, HP

@config
def llm_config(hp: HP):
    anthropic_models = {"haiku": "claude-3-haiku-20240307",
                        "sonnet": "claude-3-5-sonnet-20240620"}
    openai_models = {"gpt-4o-mini": "gpt-4o-mini",
                     "gpt-4o": "gpt-4o",
                     "gpt-4o-latest": "gpt-4o-2024-08-06"}

    model_options = {**anthropic_models, **openai_models}
    model = hp.select(model_options, default="gpt-4o-mini")
    temperature = hp.number_input(0.0)

    if model in openai_models.values():
        from haystack.components.generators import OpenAIGenerator
        llm = OpenAIGenerator(model=model,
                              generation_kwargs={"temperature": temperature})
    else:  # anthropic
        from haystack_integrations.components.generators.anthropic import AnthropicGenerator
        llm = AnthropicGenerator(model=model,
                                 generation_kwargs={"temperature": temperature})

This code snippet demonstrates a basic example of Hypster and Haystack. Using the @config decorator, we define a function called llm_config that encapsulates the configuration space for our LLM. This space includes options for selecting different LLM providers (Anthropic or OpenAI) and their corresponding models, as well as a parameter for controlling the temperature.

Within the llm_config function, we use conditional logic to instantiate the appropriate Haystack component based on the selected model. This allows us to seamlessly switch between different LLMs through a selection, without modifying the structure of our code.

For example, to create an Anthropic generator with the "haiku" model and a temperature of 0.5, we can instantiate the configuration as follows:

result = llm_config(final_vars=["llm"],
                    selections={"model": "haiku"},
                    overrides={"temperature": 0.5})
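The returned object is a dictionary containing the requested final variables, so (assuming the relevant API key, e.g. ANTHROPIC_API_KEY, is set in the environment) the instantiated generator can be used directly. A hedged usage sketch:

# Usage sketch - variable names follow the snippet above
llm = result["llm"]
response = llm.run(prompt="What is Retrieval Augmented Generation?")
print(response["replies"][0])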

Let's move on to create our indexing pipeline, where we'll define how to process our input files. In our case - PDF files.

@config
def indexing_config(hp: HP):
    from haystack import Pipeline
    from haystack.components.converters import PyPDFToDocument

    pipeline = Pipeline()
    pipeline.add_component("loader", PyPDFToDocument())

Next, we'll add an optional piece of functionality - enriching the document with an LLM summary based on its first 1,000 characters.

This is a nice trick where we use the first n characters of a document, and then, upon splitting the document into chunks, each chunk "inherits" this enriched information for its embeddings and response generation.

    enrich_doc_w_llm = hp.select([True, False], default=True)
    if enrich_doc_w_llm:
        from textwrap import dedent
        from haystack.components.builders import PromptBuilder
        from src.haystack_utils import AddLLMMetadata

        template = dedent("""
            Summarize the document's main topic in one sentence (15 words max).
            Then list 3-5 keywords or acronyms that best
            represent its content for search purposes.
            Context:
            {{ documents[0].content[:1000] }}

            ============================

            Output format:
            Summary:
            Keywords:
        """)

        llm = hp.propagate("configs/llm.py")
        pipeline.add_component("prompt_builder", PromptBuilder(template=template))
        pipeline.add_component("llm", llm["llm"])
        pipeline.add_component("document_enricher", AddLLMMetadata())

        pipeline.connect("loader", "prompt_builder")
        pipeline.connect("prompt_builder", "llm")
        pipeline.connect("llm", "document_enricher")
        pipeline.connect("loader", "document_enricher")
        splitter_source = "document_enricher"
    else:
        splitter_source = "loader"

    from haystack.components.preprocessors import DocumentSplitter

    split_by = hp.select(["sentence", "word", "passage", "page"],
                         default="sentence")
    splitter = DocumentSplitter(split_by=split_by,
                                split_length=hp.int_input(10),
                                split_overlap=hp.int_input(2))
    pipeline.add_component("splitter", splitter)
    pipeline.connect(splitter_source, "splitter")

Here we can see Haystack's pipeline in action. If the user selects enrich_doc_w_llm==True, we go on to add the components and connections that enable this enrichment. In our case: PromptBuilder → LLM → AddLLMMetadata.
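AddLLMMetadata is a small custom component from the repository's src/haystack_utils.py. I won't reproduce it here, but to give a rough idea, a component of this kind could look something like the following sketch (my own approximation, not the actual implementation):

from typing import List
from haystack import Document, component

@component
class AddLLMMetadataSketch:
    """Illustrative only: attach the LLM's summary/keywords reply to each document's metadata."""
    @component.output_types(documents=List[Document])
    def run(self, replies: List[str], documents: List[Document]):
        for doc in documents:
            doc.meta["llm_enrichment"] = replies[0]
        return {"documents": documents}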

As you can see, it's very flexible and we can construct it on the fly using conditional logic. This is extremely powerful.

Now we can instantiate the configuration object in a couple of ways. For example:

results = indexing_config(selections={"enrich_doc_w_llm": False,
                                      "split_by": "page"},
                          overrides={"split_length": 1})

Here we get a simple pipeline with a loader and a splitter, using the selected splitter configurations.
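Assuming the pipeline object is among the returned variables, running it on a file would look something like this (the "pipeline" key, the PDF path, and the output structure are illustrative assumptions):

# Usage sketch - the file path and key names are assumptions
pipeline = results["pipeline"]
output = pipeline.run({"loader": {"sources": ["data/my_document.pdf"]}})
chunks = output["splitter"]["documents"]
print(len(chunks), chunks[0].meta)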

Alternatively, we can choose to enrich the document with an LLM summary:

results = indexing_config(selections={"enrich_doc_w_llm": True})

Notice that Hypster falls back on the default values defined for each parameter, so there's no need to specify all of the parameter selections every time. Here's an illustration of the resulting pipeline:

Notice how we casually inserted llm_config inside our indexing pipeline using hp.propagate("configs/llm.py"). This propagation ability lets us create nested configurations in a hierarchical way. We can select and override parameters within the nested llm_config using dot notation. For example:

results = indexing_config(selections={"llm.model": "gpt-4o-latest"})

This will result in instantiating an indexing pipeline with the LLM enrichment task using the OpenAI gpt-4o-2024-08-06 model.
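Nested overrides work the same way with dot notation. For example (an illustrative combination, not taken from the repo), we could switch the enrichment LLM to Sonnet while lowering its temperature:

# Hedged example: dot-notation selections and overrides for the nested llm_config
results = indexing_config(selections={"llm.model": "sonnet"},
                          overrides={"llm.temperature": 0.2})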