ChatGPT spat out gibberish for many users overnight before OpenAI fixed it


OpenAI acknowledged and ‘remediated’ user reports of nonsensical responses from the chatbot.

Illustration: The Verge

ChatGPT users began reporting odd responses from the chatbot last night: replies that switched between languages, got stuck in loops, or repeatedly corrected themselves. Some of the responses were pure gibberish.

While discussing the Jackson family of musicians, the chatbot explained to a Reddit user that “Schwittendly, the sparkle of tourmar on the crest has as much to do with the golver of the ‘moon paths’ as it shifts from follow.”

At 10:40PM ET, OpenAI acknowledged on its status page that it was “investigating reports of unexpected responses from ChatGPT,” and a few minutes later said it had found the issue and was working on a fix.

As of this writing, the status page still features a yellow-bordered box at the top that says the company is “continuing to monitor the situation.” Many of the errors look similar to a common social media meme asking others to type a word on their phone, then repeatedly tap the same next word suggestion to see what happens. Which makes sense — large language models are essentially just very fancy predictive text.
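To see how that meme maps onto a language model, here’s a minimal sketch in Python. The word table below is entirely made up for illustration, and a real LLM scores every token with a neural network rather than a lookup table. But greedy decoding, always taking the single top suggestion, can get trapped in a cycle the same way your phone’s keyboard does:

```python
# A toy version of the "keep tapping the same suggestion" meme.
# The bigram table is invented for illustration; the point is only
# that always taking the top suggestion can loop forever.
bigram_next = {
    "it": "is",
    "is": "and",
    "and": "it",  # cycles back to "it", so the output repeats
    "happy": "listening",
    "listening": "happy",
}

def greedy_continue(word: str, steps: int = 8) -> str:
    """Repeatedly append the single most likely next word."""
    out = [word]
    for _ in range(steps):
        word = bigram_next.get(word, "")
        if not word:
            break
        out.append(word)
    return " ".join(out)

print(greedy_continue("it"))
# -> "it is and it is and it is and"
```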

Take these examples with a grain of salt; it’s very easy for trolls to fabricate screenshots like these, after all. But there are some common threads in the social media posts reporting the issue, such as ChatGPT getting stuck in a loop. In the screenshots users shared, for example, ChatGPT appeared to repeat “.Happy Listening!🎶” and “It is – and it is.” for a while before it stopped generating a response.

Reddit users have reported similar issues in the past. We’ve reached out to OpenAI for more information about the malfunction.

It’s a good reminder that generative AI chatbots are a rapidly advancing technology, and that the large language models powering them are essentially black boxes even their creators don’t fully understand. They operate by finding patterns in their training data and making their best guess at how to respond to input, which usually produces impressive results. But it can also yield hallucinations, emotional manipulation, and even laziness.
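To be clear, nobody outside OpenAI knows yet what broke here. But the sampling step is one place where a chatbot’s “best guess” can visibly fall apart. Here’s a hedged sketch, with an invented vocabulary and invented scores: the model’s scores become probabilities, one word is drawn at random, and a badly misconfigured setting (here, an absurdly high temperature) lets nonsense words win almost as often as sensible ones:

```python
import math
import random

# A minimal sketch of next-word sampling. The vocabulary and scores
# are invented for illustration; "golver" nods to the nonsense
# string from the Reddit screenshot.
vocab_scores = {"music": 4.0, "family": 3.5, "moon": 0.5, "golver": 0.1}

def sample_next(scores, temperature=1.0):
    """Softmax over the scores, then draw one word at random."""
    weights = [math.exp(s / temperature) for s in scores.values()]
    return random.choices(list(scores), weights=weights)[0]

# A sane temperature favors likely words; an absurd one flattens the
# distribution until unlikely words appear nearly as often.
print([sample_next(vocab_scores, temperature=0.7) for _ in range(5)])
print([sample_next(vocab_scores, temperature=50.0) for _ in range(5)])
```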


Wes Davis
