How Agentic Chunking Improves RAG Applications

The expensive but highly effective splitting method for superior RAG retrieval

Agentic chunking offers near human-level performance in chunking
Photo by Nataliya Vaitkevich

Good chunks make good RAGs.

Chunking, embedding, and indexing are essential parts of RAG. A RAG app that uses the right chunking technique performs well in terms of both output quality and speed.

When engineering an LLM pipeline, we use different techniques to split the text. Recursive character splitting is the most popular one. It uses a sliding window approach with a fixed token length. However, this approach doesn't guarantee that a theme stays entirely within its window size. There's also a risk that part of the context lands in a different chunk. A minimal sketch of this technique is shown below.
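To make this concrete, here is a minimal sketch of recursive character splitting. It assumes LangChain's `RecursiveCharacterTextSplitter` is installed; the chunk size, overlap, and file name are illustrative choices, not values prescribed by any particular pipeline.

```python
# A minimal sketch of recursive character splitting.
# Assumes the langchain-text-splitters package; sizes below are illustrative.
from langchain_text_splitters import RecursiveCharacterTextSplitter

splitter = RecursiveCharacterTextSplitter(
    chunk_size=512,    # fixed window size (measured in characters by default)
    chunk_overlap=64,  # overlap so context isn't cut off abruptly at boundaries
    separators=["\n\n", "\n", ". ", " ", ""],  # try paragraph, line, sentence, word breaks in order
)

text = open("document.txt").read()  # hypothetical input file
chunks = splitter.split_text(text)

for i, chunk in enumerate(chunks[:3]):
    print(f"--- chunk {i} ({len(chunk)} chars) ---")
    print(chunk)
```

Note how the splitter only reasons about lengths and separators; it has no idea whether a chunk boundary falls in the middle of a theme.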

The opposite method I really like is semantic splitting. Semantic splitting breaks the textual content at any time when there’s a major change between two consecutive sentences. It has no size constraints. So, it could have many sentences or only a few. But it surely’s extra prone to seize the completely different themes extra precisely.

Even the semantic splitting approach has a problem.

What if sentences far apart from each other are closer in meaning?