Mon. Dec 23rd, 2024
Why RAG Doesn’t Solve the Generative AI Hallucination Problem

Hallucinations (basically, the lies generative AI models tell) are a huge problem for companies looking to integrate the technology into their operations.

Because models have no real intelligence and are simply predicting words, images, speech, music, and other data according to private schemas, they sometimes get it wrong. Very wrong. A recent article in The Wall Street Journal recounts an example in which Microsoft’s generative AI fabricated meeting attendees and implied that a call was about topics that were not actually discussed on the call.

As I wrote a while ago, hallucinations may be a problem that can’t be solved with today’s transformer-based model architectures. But many generative AI vendors suggest they can be more or less eliminated through a technical approach called retrieval-augmented generation (RAG).

Here’s how one vendor, Squirro, pitches it:

At the core of the offering is the concept of Retrieval Augmented LLMs or Retrieval Augmented Generation (RAG) embedded in the solution. [Our generative AI] is unique in its promise of zero hallucinations. Every piece of information it generates is traceable to a source, ensuring credibility.

Here’s a similar pitch from SiftHub:

Using RAG technology and fine-tuned large language models with industry-specific knowledge training, SiftHub allows companies to generate personalized responses with zero hallucinations. This guarantees increased transparency and reduced risk and inspires absolute trust to use AI for all their needs.

RAG was pioneered by data scientist Patrick Lewis, a researcher at Meta and University College London and lead author of the 2020 paper that coined the term. Applied to a model, RAG retrieves documents possibly relevant to a question (for example, a Wikipedia page about the Super Bowl) using what is essentially a keyword search, and then asks the model to generate an answer given this additional context.
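As a rough sketch of that loop (not Lewis et al.’s actual implementation), retrieval can be as simple as scoring documents by keyword overlap with the question, pasting the best match into the prompt, and handing the augmented prompt to the model. The corpus, scoring function, and generate() stub below are placeholders for illustration:

```python
# Minimal, illustrative RAG loop: keyword-style retrieval plus an augmented prompt.
# The corpus, the scoring function, and generate() are placeholders, not any vendor's system.

CORPUS = {
    "super_bowl": "Super Bowl LVIII was played in February 2024 and won by the Kansas City Chiefs.",
    "meeting_notes": "Weekly sync covering roadmap updates and hiring plans.",
}

def keyword_score(question: str, document: str) -> int:
    """Count how many words the question and document share."""
    return len(set(question.lower().split()) & set(document.lower().split()))

def retrieve(question: str, k: int = 1) -> list[str]:
    """Return the k documents with the highest keyword overlap."""
    ranked = sorted(CORPUS.values(), key=lambda doc: keyword_score(question, doc), reverse=True)
    return ranked[:k]

def generate(prompt: str) -> str:
    """Stand-in for a call to an actual language model."""
    return f"<model output for a prompt of {len(prompt)} characters>"

question = "Who won last year's Super Bowl?"
context = "\n".join(retrieve(question))
prompt = f"Answer using the context below.\n\nContext:\n{context}\n\nQuestion: {question}"
print(generate(prompt))
```

The point is that the model never sees the whole corpus, only whatever the retrieval step happens to surface, which is why the quality of that step matters so much.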

“When you’re interacting with a generative AI model like ChatGPT or Llama and you ask a question, the default is for the model to answer from its ‘parametric memory’, i.e., from the knowledge stored in its parameters as a result of training on massive data from the web,” explained David Wadden, a research scientist at AI2, the AI-focused research division of the nonprofit Allen Institute. “But, just like you’re likely to give more accurate answers if you have a reference [like a book or a file] in front of you, the same is true in some cases for models.”

RAG is undeniably useful. It lets you attribute what a model generates to retrieved documents, so you can verify their factuality (and, as an added benefit, avoid potentially copyright-infringing regurgitation). RAG also lets companies that don’t want their documents used to train a model (for example, companies in highly regulated industries such as healthcare or law) allow models to draw on those documents in a more secure and temporary way.

But RAG certainly can’t stop a model from hallucinating. And it has limitations that many vendors gloss over.

Wadden says RAG is most effective in “knowledge-intensive” scenarios where a user wants to use the model to address an “information need,” such as finding out who won last year’s Super Bowl. In those scenarios, the document that answers the question is likely to contain many of the same keywords as the question (e.g., “Super Bowl,” “last year”), making it relatively easy to find with a keyword search.

Things get trickier with “reasoning-intensive” tasks such as coding and math, where it’s harder to specify in a keyword-based search query the concepts needed to answer a request, much less identify which documents might be relevant.
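To see why, compare keyword overlap in a toy example: the factual question shares several words with the document that answers it, while a reasoning-heavy request shares almost nothing with the document that would actually help. The documents and overlap function here are purely illustrative:

```python
# Toy comparison: keyword overlap works for a factual lookup, fails for a reasoning task.
import re

def words(text: str) -> set[str]:
    return set(re.findall(r"[a-z']+", text.lower()))

def overlap(query: str, doc: str) -> int:
    return len(words(query) & words(doc))

factual_q = "Who won last year's Super Bowl?"
factual_doc = "The Kansas City Chiefs won last year's Super Bowl."

reasoning_q = "Prove that the sum of two even numbers is even."
helpful_doc = "Induction and direct proof are standard techniques for arguments about integers."

print(overlap(factual_q, factual_doc))    # 5 shared words: easy for keyword search to find
print(overlap(reasoning_q, helpful_doc))  # 0 shared words: the useful document never surfaces
```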

Even with basic questions, the model can get “distracted” by irrelevant content in the documents, particularly long documents where the answer isn’t obvious. Or, for reasons that are still unclear, it may simply ignore the contents of the retrieved documents and rely on its parametric memory instead.

RAG is also expensive in terms of the hardware needed to apply it at scale.

That’s because retrieved documents, whether from the web, an internal database, or somewhere else, have to be stored in memory, at least temporarily, so the model can refer back to them. Another expense is the compute for the increased context the model has to process before generating a response, no small consideration for a technology already notorious for the amount of compute and electricity it requires for even basic operations.
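A back-of-envelope sketch makes that context cost concrete. The token counts and the quadratic self-attention assumption below are illustrative, not measurements of any particular model:

```python
# Rough sketch of how retrieved context inflates per-query work.
# All numbers are illustrative assumptions, not benchmarks of any real system.

PROMPT_TOKENS = 50        # a short question on its own
TOKENS_PER_DOC = 1_000    # assumed length of each retrieved passage
RETRIEVED_DOCS = 5

context_tokens = PROMPT_TOKENS + RETRIEVED_DOCS * TOKENS_PER_DOC
print(f"Tokens to process: {PROMPT_TOKENS} -> {context_tokens}")

# Standard self-attention scales roughly with the square of the context length,
# so the attention portion of the work grows far faster than the token count itself.
relative_attention_cost = (context_tokens / PROMPT_TOKENS) ** 2
print(f"Rough relative attention cost: ~{relative_attention_cost:,.0f}x")
```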

That’s not to suggest RAG can’t be improved. Wadden noted a number of ongoing efforts to train models to make better use of the documents RAG retrieves.

Some of these efforts involve models that can “decide” when to make use of a document, or that can choose not to perform retrieval in the first place if they deem it unnecessary. Others focus on ways to index massive document datasets more efficiently and on improving search through better representations of documents, representations that go beyond keywords.
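One common way to go “beyond keywords” is dense retrieval: represent the query and each document as vectors and rank documents by cosine similarity. In the sketch below, embed() is a toy hash-based stand-in for a real embedding model, which would place semantically related terms (say, “prove” and “proof”) near each other; only the ranking machinery is the point:

```python
# Sketch of dense (embedding-based) retrieval: rank documents by cosine similarity.
# embed() is a toy stand-in; a real embedding model would capture meaning, not just words.
import hashlib
import math

def embed(text: str, dim: int = 64) -> list[float]:
    """Toy embedding: hash each word into a fixed-size vector (placeholder only)."""
    vec = [0.0] * dim
    for word in text.lower().split():
        vec[int(hashlib.md5(word.encode()).hexdigest(), 16) % dim] += 1.0
    return vec

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

docs = [
    "Proof by induction is a standard technique for statements about integers.",
    "The Kansas City Chiefs won the most recent Super Bowl.",
]

query = "How do I prove a property holds for all natural numbers?"
q_vec = embed(query)
ranked = sorted(docs, key=lambda doc: cosine(q_vec, embed(doc)), reverse=True)
print(ranked[0])  # with a real embedding model, the proof-technique document should rank first
```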

“We’re pretty good at retrieving documents based on keyword queries, but not so good at retrieving documents based on more abstract concepts, like a proof technique needed to solve a math problem,” says Wadden. “Research is needed to build document representations and search techniques that can identify relevant documents for more abstract generation tasks. I think this is mostly an open question at this point.”

So while RAG can help reduce a model’s hallucinations, it’s not the answer to all of AI’s hallucination problems. Beware of any vendor that tries to claim otherwise.