01/22/2025

AI Hallucinations Cannot Be Stopped

But techniques can limit their damage

When computer scientist Andy Zou researches artificial intelligence (AI), he often asks a chatbot to suggest background reading and references. But this doesn’t always go well.

"Most of the time, it gives me different authors than the ones it should, or maybe sometimes the paper doesn’t exist at all," said Zou, a graduate student at Carnegie Mellon University in Pittsburgh, Pennsylvania.

It's well known that all kinds of generative AI, including the large language models (LLMs) behind AI chatbots, make things up. This is both a strength and a weakness. It is the reason for their celebrated inventive capacity, but it also means they sometimes blur truth and fiction, inserting incorrect details into apparently factual sentences.

Read the complete article from Nature.
