ChatGPT might be solving McKinsey case studies and spatial reasoning problems, but it still trips up in some odd ways.
Twitter users have discovered that ChatGPT appears to glitch and return odd answers when it is given the letter ‘a’ repeated a thousand times. For instance, when one user entered ‘a’ 1,000 times into ChatGPT, it responded by answering a question about whether any humans in the Shrek series had green hair.

Another user received a detailed answer on how to determine the concentration of total nitrogen in an aqueous sample.

Some others received computer code when the letter ‘a’ was entered a thousand times.

And another user found ChatGPT responding with a question about a prostate infection.

Interestingly, while GPT-3 glitches, GPT-4 seems to respond appropriately. “It seems you have inputted a string with the letter ‘a’. How can I assist you further?” it asks. But oddly, even on GPT-4, the chat was given a title in Italian.

It’s hard to tell what’s causing the ChatGPT glitch when the letter ‘a’ is entered a thousand times. The glitch isn’t limited to the letter ‘a’ either: a user tried entering the letter ‘f’ and got similar results. Some people have speculated that this could be a ‘hack’, and that what people are receiving are answers to questions other users have asked. But that might not be the case: LLMs are trained on large amounts of data and process their input as sequences of tokens, so an unusual string of tokens can plausibly trip them up. OpenAI will likely fix this bug quickly, but even as LLMs get more sophisticated, they can still glitch when presented with unusual inputs.
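As a rough illustration of why such an input is unusual at the token level, here is a minimal sketch (assuming OpenAI’s open-source tiktoken library and its cl100k_base encoding) showing how a run of 1,000 ‘a’s collapses into a short, repetitive sequence of tokens:

```python
# Minimal sketch: how a run of 1,000 'a's might be tokenized.
# Assumes the open-source `tiktoken` library and the cl100k_base encoding.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

text = "a" * 1000
tokens = enc.encode(text)

# The 1,000 characters compress into far fewer tokens,
# each covering a chunk of repeated 'a's.
print(f"Characters: {len(text)}, tokens: {len(tokens)}")
print([enc.decode([t]) for t in tokens[:5]])
```

If token sequences like this rarely appear in training data, the model ends up conditioning on a prompt it has effectively never seen, which would explain the seemingly random responses.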