ChatGPT had a public meltdown but OpenAI says it’s fine now

The latest unexplained kerfuffle with ChatGPT’s outputs highlights the dangers of automation.

OpenAI’s popular ChatGPT artificial intelligence (AI) system suffered a public meltdown between Feb. 20 and 21, confounding users by spouting gibberish and other strangeness, including unprompted pseudo-Shakespeare.

As of 8:14 Pacific Standard Time on Feb. 21, the problem has apparently been solved. The latest update on OpenAI’s status page indicates that “ChatGPT is operating normally” — roughly 18 hours after OpenAI first reported the issue.

chatgpt is apparently going off the rails right now and no one can explain why pic.twitter.com/0XSSsTfLzP

— sean mcguire (@seanw_m) February 21, 2024

Something in my data file sent ChatGPT into a spiral that ended with it… speaking like Shakespeare? To its credit, it worked…. #surcease pic.twitter.com/4wHxVzF0Pw

— NCAAB Market Picks (@cbbmarketpicks) February 21, 2024

It’s unclear at this time precisely what caused the problem, and OpenAI has not yet responded to our request for comment.

Based on a cursory examination of the reported outputs, it would appear as though ChatGPT experienced some form of tokenization confusion. Due to the black box nature of large language models built on GPT technology, it may not be possible for scientists at OpenAI to diagnose exactly what went wrong. If this is the case, it’s likely the team will focus on preventative measures such as implementing further guardrails against long strings of apparent gibberish.
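To illustrate one way such gibberish can emerge (a hypothetical sketch, not OpenAI’s confirmed failure mode), consider how a language model picks its next token: raw model scores are converted into probabilities, and a sampling “temperature” controls how sharply the model prefers its top candidates. If a bug effectively inflates that temperature, the distribution flattens and near-nonsense tokens become nearly as likely as sensible ones. The token names and scores below are invented for illustration:

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw model scores (logits) into a probability
    distribution, scaled by a sampling temperature."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical next-token candidates and model scores.
tokens = ["the", "cat", "sat", "glorp", "##ishly"]
logits = [4.0, 3.0, 2.5, 0.1, 0.0]

# At a normal temperature the model strongly prefers sensible tokens.
normal = softmax(logits, temperature=1.0)

# With a pathologically high effective temperature, the distribution
# flattens: nonsense tokens become almost as likely as sensible ones,
# which is one way coherent text degrades into gibberish.
broken = softmax(logits, temperature=50.0)

print(f"P('the')   normal: {normal[0]:.2f}, broken: {broken[0]:.2f}")
print(f"P('glorp') normal: {normal[3]:.2f}, broken: {broken[3]:.2f}")
```

Under normal sampling, “the” dominates; under the broken regime, every candidate — sensible or not — lands near the same probability, so strings of unlikely tokens can be emitted one after another.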

Social media sentiment appears to indicate that any damage caused by the chatbot was, by and large, limited to wasting the time of users who expected coherent responses to their queries.

Related: ChatGPT can write smart contracts; just don’t use it as a security auditor

However, this instance illustrates the potential for generative AI systems to send unexpected, hallucinated, or inconsistent messages at any time. These types of unwanted responses can have negative consequences.

Air Canada, for example, recently found out that it couldn’t blame the algorithm when a court ordered it to pay a partial refund to a customer who’d received bad information about booking policies from a customer service chatbot.

In the cryptocurrency world, investors are increasingly using automated systems built on LLMs and GPT technology to create portfolios and execute trades. As ChatGPT’s recent failure indicates, even the most robust models can fail unexpectedly, at any scale.
