Retrieval-Augmented Generation (RAG) is a prominent method for integrating LLMs into enterprise use cases, allowing proprietary knowledge to be infused into the LLM. This post assumes you already have a working knowledge of RAG and that you're here to improve your RAG accuracy.
Let’s review the process briefly. A RAG pipeline consists of two main steps: retrieval and generation. The retrieval step involves several sub-steps, including converting context text to vectors, indexing the context vectors, retrieving the relevant context vectors for the user query, and reranking the retrieved contexts. Once the contexts for the query are retrieved, we move on to the generation stage. During the generation stage, the contexts are combined with the prompt and sent to the LLM to generate a response. Before being sent to the LLM, the context-infused prompt may also pass through caching and routing steps to optimize efficiency.
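To make the pipeline concrete, here is a minimal sketch of the steps above in Python. The `embed`, `rerank`, and `llm_generate` functions are hypothetical placeholders for your own embedding model, reranker, and LLM endpoint; the vector index is a plain in-memory matrix with cosine-similarity search, which stands in for whatever vector store you use.

```python
import numpy as np

def embed(texts: list[str]) -> np.ndarray:
    """Placeholder: call your embedding model here; returns one vector per text."""
    raise NotImplementedError

def rerank(query: str, contexts: list[str]) -> list[str]:
    """Placeholder: call your reranker here; returns contexts in improved order."""
    return contexts

def llm_generate(prompt: str) -> str:
    """Placeholder: call your LLM endpoint here."""
    raise NotImplementedError

class SimpleRAG:
    def __init__(self, documents: list[str]):
        # Retrieval sub-steps 1-2: convert context text to vectors and index them.
        self.documents = documents
        self.index = embed(documents)  # shape: (n_docs, dim)

    def retrieve(self, query: str, top_k: int = 5) -> list[str]:
        # Retrieval sub-steps 3-4: embed the query, score it against the index
        # with cosine similarity, then rerank the top hits.
        q = embed([query])[0]
        scores = self.index @ q / (
            np.linalg.norm(self.index, axis=1) * np.linalg.norm(q) + 1e-9
        )
        top = np.argsort(scores)[::-1][:top_k]
        return rerank(query, [self.documents[i] for i in top])

    def generate(self, query: str) -> str:
        # Generation step: combine retrieved contexts with the prompt and call
        # the LLM. Caching and routing of this prompt would slot in here.
        contexts = self.retrieve(query)
        prompt = (
            "Answer the question using only the context below.\n\n"
            "Context:\n" + "\n".join(contexts) + "\n\nQuestion: " + query
        )
        return llm_generate(prompt)
```

Each of the experiments discussed in this post targets one of these stages: the embedding and indexing choices, the retrieval and reranking logic, or the prompt construction and generation call.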
For each of these pipeline steps, we’ll run a number of experiments that together improve RAG accuracy. You can refer to the image below, which lists (but is not limited to) the experiments performed at each step.