If you have ever used a generative artificial intelligence (AI) tool, it’s lied to you. Probably multiple times.
These recurring fabrications are often called AI hallucinations, and developers are feverishly working to make generative AI tools more reliable by reining in these unfortunate fibs. One of the approaches to reducing AI hallucinations that is quickly gaining traction in Silicon Valley is called retrieval-augmented generation, or RAG.
The RAG process can get quite complicated, but on a basic level it augments your prompts by retrieving relevant information from a custom database, and the large language model then generates an answer grounded in that data. For example, a company could upload all of its HR policies and benefits to a RAG database and have the AI chatbot answer only with information found in those documents.
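To make the retrieve-then-generate loop concrete, here is a minimal sketch in Python. Everything in it is illustrative: a tiny in-memory list and naive keyword-overlap scoring stand in for a real vector database, and the generate_answer function is a placeholder for a call to an actual large language model.

```python
# Minimal retrieval-augmented generation sketch (illustrative only).

def retrieve(query: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by how many words they share with the query; keep the best few."""
    query_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(query_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def generate_answer(prompt: str) -> str:
    """Placeholder: a real system would send this prompt to an LLM here."""
    return f"[model response grounded in:\n{prompt}]"

# Hypothetical HR documents, mirroring the article's example scenario.
hr_documents = [
    "Employees accrue 15 vacation days per year.",
    "Health insurance enrollment opens every November.",
    "Remote work requires manager approval.",
]

question = "How many vacation days do employees get?"
context = "\n".join(retrieve(question, hr_documents))

# Augment the prompt with the retrieved context so the model answers from
# those documents rather than from whatever it memorized during training.
augmented_prompt = (
    "Answer using only the context below.\n"
    f"Context:\n{context}\n"
    f"Question: {question}"
)
print(generate_answer(augmented_prompt))
```

The key design point is the last step: the retrieved passages are pasted into the prompt itself, which constrains the model to material the company actually provided instead of leaving it free to improvise.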
The complete article is available from WIRED.