In Defense of AI Hallucinations

No one knows whether artificial intelligence will be a boon or curse in the far future. But right now, there’s almost universal discomfort and contempt for one habit of these chatbots and agents: hallucinations, those made-up facts that appear in the outputs of large language models like ChatGPT. In the middle of what seems like a carefully constructed answer, the LLM will slip in something that sounds reasonable but is a total fabrication. Your typical chatbot can make disgraced ex-congressman George Santos look like Abe Lincoln. Since it looks inevitable that chatbots will one day generate the vast majority of all prose ever written, all the AI companies are obsessed with minimizing and eliminating hallucinations, or at least convincing the world the problem is in hand.

Obviously, the value of LLMs will reach a new level when and if hallucinations approach zero. But before that happens, I ask you to raise a toast to AI’s confabulations.

Hallucinations fascinate me, even though AI scientists have a pretty good idea why they happen. An AI startup called Vectara has studied them and their prevalence, even compiling the hallucination rates of various models when asked to summarize a document. (OpenAI’s GPT-4 does best, hallucinating only around 3 percent of the time; Google’s now outdated Palm Chat—not its chatbot Bard!—had a shocking 27 percent rate, although to be fair, summarizing documents wasn’t in Palm Chat’s wheelhouse.) Vectara’s CTO, Amin Ahmad, says that an LLM creates a compressed representation of all the training data fed through its artificial neurons. “The nature of compression is that the fine details can get lost,” he says. A model ends up primed with the most likely answers to queries from users but doesn’t have the exact facts at its disposal. “When it gets to the details it starts making things up,” he says.

Santosh Vempala, a computer science professor at Georgia Tech, has also studied hallucinations. “A language model is just a probabilistic model of the world,” he says, not a truthful mirror of reality. Vempala explains that an LLM’s answer strives for a general calibration with the real world—as represented in its training data—which is “a weak version of accuracy.” His research, published with OpenAI’s Adam Kalai, found that hallucinations are unavoidable for facts that can’t be verified using the information in a model’s training data.
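Here’s a toy way to picture what Ahmad and Vempala are describing (my own back-of-the-envelope sketch, not anyone’s actual model): a system like this only ever ranks possible continuations by probability, so when the truthful detail is poorly represented in its compressed memory, the most statistically typical answer wins anyway.

```python
# A purely illustrative "model": it fills in a detail by probability alone.
# The phrases and numbers are invented for this sketch; no real LLM works
# from a lookup table, but the sampling step captures the gap between
# calibration and truth that Vempala describes.
import random

# What a hypothetical compressed training set "remembers" about a hypothetical company:
founding_year_probs = {
    "1998": 0.55,  # the statistically typical answer for a company like this
    "2001": 0.30,  # also plausible
    "1994": 0.15,  # the actual year, weakly represented in the data
}

def complete(prompt: str) -> str:
    """Pick a continuation by probability, with no notion of what is true."""
    years, weights = zip(*founding_year_probs.items())
    return f"{prompt} {random.choices(years, weights=weights)[0]}."

print(complete("The company was founded in"))
# Most runs print a plausible wrong year: well calibrated, but not accurate.
```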

That’s the science/math of AI hallucinations, but they’re also notable for the experience they can elicit in humans. At times, these generative fabrications can seem more plausible than actual facts, which are often astonishingly bizarre and unsatisfying. How often do you hear something described as so strange that no screenwriter would dare script it in a movie? These days, all the time! Hallucinations can seduce us by appearing to ground us in a world less jarring than the actual one we live in. What’s more, I find it telling to note just which details the bots tend to concoct. In their desperate attempt to fill in the blanks of a satisfying narrative, they gravitate toward the most statistically likely version of reality as represented in their internet-scale training data, which can be a truth in itself. I liken it to a fiction writer penning a novel inspired by real events. A good author will veer from what actually happened to an imagined scenario that reveals a deeper truth, striving to create something more real than reality.

When I asked ChatGPT to write an obituary for me—admit it, you’ve tried this too—it got many things right but a few things wrong. It gave me grandchildren I didn’t have, bestowed an earlier birth date, and added a National Magazine Award to my résumé for articles I didn’t write about the dotcom bust in the late 1990s. In the LLM’s assessment of my life, this is something that should have happened based on the facts of my career. I agree! It’s only because of real life’s imperfection that the American Society of Magazine Editors failed to award me the metal elephant sculpture that comes with that honor. After almost 50 years of magazine writing, that’s on them, not me! It’s almost as if ChatGPT took a poll of possible multiverses and found that in most of them I had an Ellie award. Sure, I would have preferred that, here in my own corner of the multiverse, human judges had called me to the podium. But recognition from a vamping artificial neural net is better than nothing.

Besides providing an instructive view of plausible alternate realities, the untethering of AI outputs from the realm of fact can also be productive. Because LLMs don’t necessarily think like humans, their flights of statistical fancy can be valuable tools to spur creativity. “That’s why generative systems are being explored more by artists, to get ideas they wouldn’t necessarily have thought of,” says Vectara’s Ahmad. One of the most important missions of those building AI is to help solve humanity’s intractable problems, ostensibly by coming up with ideas that leap past the bounds of human imagination. Ahmad is one of several people I spoke with who believe that even if we figure out how to largely eliminate those algorithmic fibs, we should still keep them around. “LLMs should be capable of producing things without hallucinations, but then we can flip them into a mode where they can produce hallucinations and help us brainstorm,” he says. Vempala of Georgia Tech agrees: “There should be a knob that you can turn,” he says. “When you want to drive your car you don’t want AI to hallucinate what’s on the road, but you do when you’re trying to write a poem for a friend.”
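In today’s systems, the closest thing to that knob is the sampling temperature most text-generation tools expose; turning it up doesn’t switch hallucination on so much as make the model’s word choices less conservative. Here’s a rough, hypothetical sketch of the idea (the words and scores are mine, not any particular library’s):

```python
# A rough sketch of Vempala's "knob," modeled on the sampling temperature
# most text-generation libraries expose. Low temperature sticks to the
# likeliest words; high temperature wanders into stranger territory.
import math
import random

def sample(logits: dict, temperature: float) -> str:
    """Softmax sampling: temperature rescales how peaked the distribution is."""
    scaled = {word: score / max(temperature, 1e-6) for word, score in logits.items()}
    total = sum(math.exp(v) for v in scaled.values())
    probs = {word: math.exp(v) / total for word, v in scaled.items()}
    return random.choices(list(probs), weights=list(probs.values()))[0]

# Invented scores for the next word after "The car drove down the ..."
logits = {"road": 3.0, "highway": 2.5, "river of moonlight": 0.5}

print(sample(logits, temperature=0.2))  # driving mode: almost always "road"
print(sample(logits, temperature=1.5))  # poetry mode: moonlight shows up sometimes
```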

There’s one other big reason why I value hallucinations. Right now, their inaccuracies are providing humanity with some breathing room in the transition to coexistence with superintelligent AI entities. Because we can’t trust LLMs to be correct, we still must do the work of fact-checking them. This keeps us in touch with reality, at least until we turn the whole shebang over to GPTs.

Take the legal profession as an example. Even though LLMs can churn out what looks like a credible legal brief, the result can be disastrously fictional. We saw this in the celebrated case where two attorneys submitted a brief citing six judicial opinions that proved to have been wholly imagined by ChatGPT, which they had asked to draw up the document. The judge sanctioned the lawyers, and they became a laughingstock. After that, only a lawyer who was a total idiot would use chatbots to dig up case law supporting an argument and not double-check the result. It turns out such people exist. Michael Cohen, Donald Trump’s former fixer, served time for his crimes and was asking a court to cut short his supervised release. He provided his attorneys with fictional court decisions fabricated by Google’s LLM-powered chatbot Bard, and got caught.

But Cohen is an exception. For now, lawyers are actually required to surface precedents themselves. That’s a good thing. Imagine when AI can reliably generate a compelling legal brief, with total accuracy in locating precedents. With total recall of case law, an LLM could include dozens of cases. Lawyers will submit these without bothering to look at them, confident that they are relevant. It will be up to the poor judges to consider how these cases apply. Naturally, they will turn to AI to summarize the cases and maybe even use AI to draft their final opinions. Eventually, our entire case law will be determined by arguments launched and adjudicated by AI. Humans will be just spectators. And there will be very little work left for actual lawyers. Heck, GPT-4 can already pass the bar exam!

The same applies to other fields that demand accuracy, which is pretty much every human profession. Because we can’t trust LLMs, there’s still work for humans to do. Hallucinations are kind of a firewall between us and massive unemployment.

Will that day ever come when we get rid of hallucinations? Scientists disagree. “The answer in a broad sense is no,” says Vempala, whose paper was called “Calibrated Language Models Must Hallucinate.” Ahmad, on the other hand, thinks that we can do it. “There’s a red-hot focus in the research community right now on the problem of hallucination, and it’s being tackled from all kinds of angles,” he says.

Meanwhile, since we’re stuck with hallucinations, we should at least respect them. Yes, I understand the value, and even the necessity, of making LLMs stick to the facts for some tasks. But I shudder to think of how much we humans will miss if given a free pass to skip over the sources of information that make us truly knowledgeable. And I will mourn the countless jobs lost when we turn over the bulk of white-collar work to robots. For now, though, AI can’t be trusted. Here’s to you, hallucinations!

Time Travel

I’ve been writing about artificial intelligence for at least 40 years—I dealt with it in my first book, Hackers—but can make a special claim to a degree of prescience for an essay leading off a WIRED special section about AI in 2010. It was called “The AI Revolution Is On.” (I’ve included portions of this in some other episodes of Time Travel but suspect my human readers have probably forgotten, though ChatGPT probably has not.) What sounded science fiction-y then is now reality.

We are engaged in a permanent dance with machines, locked in an increasingly dependent embrace. And yet, because the bots’ behavior isn’t based on human thought processes, we are often powerless to explain their actions. Wolfram Alpha, the website created by scientist Stephen Wolfram, can solve many mathematical problems. It also seems to display how those answers are derived. But the logical steps that humans see are completely different from the website’s actual calculations. “It doesn’t do any of that reasoning,” Wolfram says. “Those steps are pure fake. We thought, how can we explain this to one of those humans out there?”

The lesson is that our computers sometimes have to humor us, or they will freak us out. Eric Horvitz—now a top Microsoft researcher—helped build an AI system in the 1980s to aid pathologists in their studies, analyzing each result and suggesting the next test to perform. There was just one problem—it provided the answers too quickly. “We found that people trusted it more if we added a delay loop with a flashing light, as though it were huffing and puffing to come up with an answer,” Horvitz says.

But we must learn to adapt. AI is so crucial to some systems—like the financial infrastructure—that getting rid of it would be a lot harder than simply disconnecting HAL 9000’s modules. “In some sense, you can argue that the science fiction scenario is already starting to happen,” says Thinking Machines cofounder Danny Hillis. “The computers are in control, and we just live in their world.” Wolfram says this conundrum will intensify as AI takes on new tasks, spinning further out of human comprehension. “Do you regulate an underlying algorithm?” he asks. “That’s crazy, because you can’t foresee in most cases what consequences that algorithm will have.”

Ask Me One Thing

Derek asks, “Other than tie-dye T-shirts and Birkenstocks, what did the Silicon Valley tech world of the ’70s and ’80s have that’s missing today?”

Thanks for the question, Derek. As someone who has chronicled the Valley of the ’70s and spent time onsite in the ’80s, I can answer with some authority. Many of the major companies back then, like Hewlett-Packard and Intel, were fairly button-down in both attire and culture, even as their products were trailblazing. But the driving spirit, as your question implies, was a quasi-countercultural zeal to revolutionize the world with personal computers and digital technology. Though it was uncelebrated at the time, the Homebrew Computer Club changed the world by kick-starting the PC industry, including Apple. Among its geeks were freaks, some of whom indeed rocked Birkenstocks.

While you can find similar geeky enthusiasm among tech pioneers today, there’s not so much of the counterculture and its purity of intent. After all, the outsiders are now the insiders. While today’s technologists still hew to the mantra of “build, build, build,” you are as likely to hear that from life coaches as from your Y Combinator adviser. When Apple and Microsoft emerged, the path to super wealth wasn’t well-trodden—or even imagined. Yes, the nascent VC industry had its eyes on big prizes—but those in the trenches had an innocence then that is now lost.

On the other hand, Birkenstocks are popular again! In fact, just last October, Birkenstock went public, valued at over $8 billion. It seems that those hippie shoes, like those onetime hippie Silicon Valley companies, are now hopelessly part of the establishment.

You can submit questions to mail@wired.com. Write ASK LEVY in the subject line.

End Times Chronicle

Netflix’s recently revealed viewer statistics show that in 2023 people gobbled up apocalypse-themed programming, including the hit Leave the World Behind. Do you think they know something?

Last but Not Least

Meta’s chief AI scientist Yann LeCun always looks on the bright side of AI life.

Twitter isn’t the only avian-related company to have problems. Here’s an account of the fall of Bird, which scooted from billions to bankruptcy.

Placebos work even when people know they’re not taking real medicine. So don’t blame your doctors when they prescribe fake drugs.

Mark Zuckerberg’s $100 million compound in Hawaii has a huge underground bunker to keep out people he may or may not know during the apocalypse. Mahalo!


Don’t miss future subscriber-only editions of this column. Subscribe to WIRED (50% off for Plaintext readers) today.


Steven Levy
