I recently started to favor Graph RAGs over vector store-backed ones.
No offense to vector databases; they often work fantastically. The caveat is that you need explicit mentions in the text to retrieve the correct context.
We have workarounds for that, and I've covered a few in my previous posts.
For instance, ColBERT and multi-representation are useful retrieval models to consider when building RAG apps.
GraphRAGs suffer less from retrieval issues (I didn't say they don't suffer). Whenever the retrieval requires some reasoning, GraphRAG performs remarkably well.
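To make that concrete, here is a minimal, hypothetical sketch (not from the original post) of why reasoning-heavy questions suit graph retrieval. Answering "Where did the CEO of Acme Corp study?" only takes a two-hop traversal over a toy knowledge graph, whereas a vector search would need a chunk that explicitly ties the two ends of the question together. The entities, relations, and the `retrieve_context` helper are all made up for illustration.

```python
# Toy example: multi-hop retrieval by graph traversal (illustrative, not a real GraphRAG pipeline).
import networkx as nx

# Hand-built knowledge graph: entities as nodes, facts as labeled edges (all data is invented).
kg = nx.DiGraph()
kg.add_edge("Acme Corp", "Jane Doe", relation="has_CEO")
kg.add_edge("Jane Doe", "MIT", relation="studied_at")
kg.add_edge("MIT", "Cambridge, MA", relation="located_in")

def retrieve_context(graph: nx.DiGraph, start_entity: str, hops: int = 2) -> list[str]:
    """Collect facts reachable within `hops` edges of the entity mentioned in the query."""
    facts = []
    frontier = {start_entity}
    for _ in range(hops):
        next_frontier = set()
        for node in frontier:
            for source, target, data in graph.out_edges(node, data=True):
                facts.append(f"{source} --{data['relation']}--> {target}")
                next_frontier.add(target)
        frontier = next_frontier
    return facts

# Query: "Where did the CEO of Acme Corp study?"
# No single text chunk has to mention "Acme Corp" and "MIT" together for this to work;
# the answer emerges from chaining two separate facts.
print(retrieve_context(kg, "Acme Corp"))
# ['Acme Corp --has_CEO--> Jane Doe', 'Jane Doe --studied_at--> MIT']
```

A plain similarity search over text chunks, by contrast, can only surface passages whose wording overlaps with the question, which is exactly the "explicit mentions" limitation mentioned above.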
Providing relevant context solves a key problem in LLM-based applications: hallucination. However, it doesn't eliminate hallucinations altogether.
When you can't fix something, you measure it. And that's the focus of this post. In other words, how do we evaluate RAG apps?