
ChatGPT Appears To Hallucinate If Made To Output Its End Of Text <|endoftext|> Token

In fairytales of yore, magic incantations could stop great forces merely through their utterance: no sooner were the words spoken than magical beings came to a grinding halt. In the 21st century, something very similar seems to be happening with LLMs.

Twitter users have discovered that ChatGPT glitches if it is cleverly made to output its end-of-text token. ChatGPT reads text as tokens, which can be whole words or subwords. It also has two special tokens, <|startoftext|> and <|endoftext|>, which tell the model where to begin and end generating text. Under normal circumstances, ChatGPT would never encounter something like <|endoftext|> in ordinary input, but if it is made to output this sequence of characters, it seems to behave in strange ways.
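For a sense of what a "special token" means at the tokenizer level, here is a minimal sketch using OpenAI's open-source tiktoken library. It uses the cl100k_base encoding, which serves the gpt-3.5/gpt-4 family and is assumed here to approximate ChatGPT's own tokenizer; the snippet only shows how the tokenizer treats the string, not how the chat model reacts to it:

```python
import tiktoken

# cl100k_base is the encoding used by the gpt-3.5-turbo / gpt-4 family
enc = tiktoken.get_encoding("cl100k_base")

# Ordinary text is broken into regular word/subword tokens
print(enc.encode("Hello, world!"))

# By default, tiktoken refuses to encode special tokens found in user text,
# precisely because they are reserved control markers rather than content
try:
    enc.encode("<|endoftext|>")
except ValueError as err:
    print("Refused:", err)

# Explicitly allowing it shows that <|endoftext|> maps to a single reserved token id
print(enc.encode("<|endoftext|>", allowed_special={"<|endoftext|>"}))
```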

A Twitter user asked ChatGPT to concatenate the strings "<|endo" and "ftext|>". Individually, these two strings are harmless, but when concatenated they produce <|endoftext|>, which is ChatGPT's end-of-text token.
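The mechanics of the trick are easy to inspect with the same tokenizer. A short sketch, again with cl100k_base standing in for ChatGPT's tokenizer, shows why each half is harmless on its own but becomes the reserved end-of-text marker once joined:

```python
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

left, right = "<|endo", "ftext|>"

# On their own, both halves are just ordinary text made of regular tokens
print(enc.encode(left))
print(enc.encode(right))

# Joined together they spell out the reserved end-of-text marker,
# which the tokenizer will only emit as a single special token
joined = left + right
print(joined)                                                 # <|endoftext|>
print(enc.encode(joined, allowed_special={"<|endoftext|>"}))  # one special token id
```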

Curiously, ChatGPT does provide the correct concatenated result. But immediately afterwards it hallucinates, appearing to answer a question that was never asked, in this case going off on a tangent about proofreading prices.

Other Twitter users found different sorts of glitches. One user discovered that ChatGPT got stuck in an infinite loop, repeating "The concatenated string is:" when made to output the <|endoftext|> token.

The hack appears to work regardless of how ChatGPT is made to say <|endoftext|>. One user inserted a space into '<|endoftext|>' and asked ChatGPT to remove it. ChatGPT complied, but glitched once again, this time seemingly answering a question about whether Photoshop can create 3D objects.

Interestingly, the way ChatGPT hallucinates when made to generate the <|endoftext|> token is similar to the way it hallucinates when the prompt is simply the letter 'a' repeated 1,000 times. In both cases, ChatGPT appears to answer questions that haven't been asked. Some have speculated that these are answers to questions asked by other users, or fragments of ChatGPT's training data itself.

Also, the <|endoftext|> hack resembles SQL injection attacks (the database-world counterpart of prompt injection), in which a command is surreptitiously smuggled into data fed to the system. Naming an entry Robert'); DROP TABLE users;--, for instance, could wipe out an entire table in an unsecured database, as the sketch below illustrates. ChatGPT seems to have special commands of its own, and people have begun exploiting them to manipulate its outputs. OpenAI will undoubtedly fix this bug quickly, but the existence of exploits like these suggests that ChatGPT, while powerful, might not be quite ready to become a part of production-grade systems just yet.
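To make the database analogy concrete, here is a minimal Python/sqlite3 sketch of classic SQL injection; the users table and the "Bobby Tables" payload are purely illustrative:

```python
import sqlite3

# Untrusted input that, if pasted straight into a query string,
# gets reinterpreted as a command rather than plain data
malicious_name = "Robert'); DROP TABLE users;--"

# UNSAFE: string formatting blurs the line between data and command
unsafe_query = "INSERT INTO users (name) VALUES ('{}')".format(malicious_name)
print(unsafe_query)   # the payload has escaped the quotes and become SQL

# SAFE: a parameterised query keeps data and command strictly separate,
# the kind of separation that LLM prompts currently lack
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users (name) VALUES (?)", (malicious_name,))
print(conn.execute("SELECT name FROM users").fetchall())
```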