An LLM can handle basic routing. Semantic search can handle personal data better. Which one would you pick?
A single prompt can't handle everything, and a single data source may not be suitable for all the data.
Here's something you often see in production but not in demos:
You need more than one data source to retrieve information: multiple vector stores, a graph DB, or even an SQL database. And you need different prompts to handle different tasks, too.
If that's the case, we have a problem. Given unstructured, often ambiguous, and poorly formatted user input, how do we decide which database to retrieve data from?
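One common answer is to let an LLM do the routing itself: describe each data source and ask the model to name the one that fits the question. Below is a minimal sketch assuming the OpenAI Python SDK; the source names, descriptions, and the `route_query` helper are placeholders rather than a prescribed design.

```python
# Minimal LLM-based router: show the model a short catalog of data sources
# and ask it to name exactly one for the incoming question.
# Assumes the OpenAI Python SDK (v1+) with OPENAI_API_KEY set; the source
# names, descriptions, and model choice are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

SOURCES = {
    "vector_store": "general travel guides, reviews, and FAQs (semantic search)",
    "graph_db": "places, routes, and travel times between locations",
    "sql_db": "structured records such as bookings, prices, and schedules",
}

def route_query(question: str) -> str:
    """Ask the LLM to pick exactly one data source for the question."""
    catalog = "\n".join(f"- {name}: {desc}" for name, desc in SOURCES.items())
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        temperature=0,
        messages=[
            {
                "role": "system",
                "content": (
                    "Route the user's question to exactly one of these data "
                    f"sources and reply with only its name:\n{catalog}"
                ),
            },
            {"role": "user", "content": question},
        ],
    )
    choice = response.choices[0].message.content.strip()
    return choice if choice in SOURCES else "vector_store"  # safe fallback

# route_query("What's the fastest way to visit these five places in one day?")
# would ideally return "graph_db".
```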
If, for some reason, you still think it's too easy, here's an example.
Suppose you have a tour-guiding chatbot, and a traveler asks for an optimal travel schedule between five places. Letting the LLM answer on its own could lead to hallucinations, since LLMs aren't good at location-based calculations.
Instead, if you store this information in a graph database, the LLM could generate a query to fetch the shortest travel path between the points…
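The query the LLM hands back might look something like the Cypher below, executed with the Neo4j Python driver. This is only a sketch: the `Place` nodes, `ROAD` relationships, `travel_minutes` weights, and the APOC Dijkstra call are assumptions, and any weighted shortest-path setup in your graph store would serve the same purpose.

```python
# Instead of letting the LLM estimate travel times, have it emit a Cypher
# query and run that against the graph. A sketch assuming a Neo4j instance
# with the APOC plugin, (:Place {name}) nodes, and [:ROAD] relationships
# weighted by a travel_minutes property -- all of these are assumptions.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

# The kind of query the LLM would be prompted to generate for
# "shortest route from A to B": a weighted shortest path (Dijkstra).
CYPHER = """
MATCH (a:Place {name: $from_place}), (b:Place {name: $to_place})
CALL apoc.algo.dijkstra(a, b, 'ROAD', 'travel_minutes')
YIELD path, weight
RETURN [n IN nodes(path) | n.name] AS stops, weight AS total_minutes
"""

def shortest_route(from_place: str, to_place: str) -> dict:
    """Run the generated Cypher and return the ordered stops plus total cost."""
    with driver.session() as session:
        record = session.run(CYPHER, from_place=from_place, to_place=to_place).single()
        return {"stops": record["stops"], "total_minutes": record["total_minutes"]}

# shortest_route("Old Town Square", "Castle") would return the ordered list of
# stops and the summed travel_minutes along the cheapest path.
```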