GPT-4 Can Understand Instructions With Jumbled Up Words, Correctly Respond To Them

GPT-4 is trained on a massive set of data, and as recent developments have shown, it has an advanced grasp of math and reasoning. But the software sometimes seems to exhibit behaviour that it’s very unlikely to have been trained on.

Users report that GPT-4 can respond to queries even when the individual letters in the words are jumbled up beyond recognition. One such query ran: “I ened ot sak na itnrpamtot qiosuten: od urnsnteadd em fi jmlbue pu teh lteters of wdors ekil tihs?” This wouldn’t make sense to most people, but GPT-4 responded.
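For readers who want to produce prompts like the one quoted above, here is a minimal sketch (the function name and example sentence are our own illustration, not from the original tweet) that shuffles the letters of every word in a string:

```python
import random

def jumble(text: str) -> str:
    """Shuffle the letters of each whitespace-separated word.

    Punctuation is treated as part of the word it touches, which is
    roughly how the quoted prompt was scrambled.
    """
    return " ".join(
        "".join(random.sample(word, len(word))) for word in text.split()
    )

print(jumble("do you understand me if I jumble up the letters of words"))
```

Each run produces a different scramble, but the letters of every word are preserved, so the output is exactly the kind of garbled-yet-recoverable text the user fed to GPT-4.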

“I understand that you are asking an important question and you have jumbled up the letters of the words. With some effort, I can still decipher your question: “I need to ask an important question: do you understand me if I jumble up the letters of words like this?”
Yes, I can understand your message even if the letters in the words are jumbled, although it might take a bit more effort to decipher the intended meaning. Please feel free to ask your important question,” GPT-4 replied.

Now this would be pretty incredible by itself, but GPT-4 doesn’t stop there: it can even recognize jumbled-up words when they’re fed to it in reverse order.

And it’s not just jumbled words that GPT-4 can recognize. Another Twitter user fed it text that had been garbled while being copied out of a PDF. “Something messed up when I copied part of a journal article from the pdf viewer in chrome. And it started answering the question I asked! Turns out not only did it work out what happened but it also just ignored it and answered me…,” the user marvelled.

These are pretty surprising results. GPT-4 is trained on a massive text set, but it’s very unlikely to have ever encountered jumbled words in the exact arrangement the first user fed into the program. Also, GPT-3’s tokenizer split text into subwords rather than individual letters, and GPT-4 presumably works similarly, so the attention mechanism should have found it difficult to link individual letters within words and unscramble them. Maybe the researchers at OpenAI have an idea of how GPT-4 does these things, but to the lay user, it’s becoming increasingly apparent that GPT-4 has abilities that were considered impossible for conventional LLMs.
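To see why subword tokenization makes this surprising, consider a toy greedy longest-match tokenizer over an invented vocabulary (real GPT tokenizers use learned byte-pair-encoding merges, so the exact splits differ, but the effect is similar): a common word maps to a couple of tokens, while its scrambled form shatters into many near-letter tokens that the model has rarely seen together.

```python
# Toy longest-match tokenizer. The vocabulary here is invented for
# illustration -- real GPT tokenizers use learned BPE merge tables.
VOCAB = {"import", "ant", "it"} | set("abcdefghijklmnopqrstuvwxyz")

def tokenize(word: str) -> list[str]:
    """Greedily match the longest vocabulary entry at each position."""
    tokens, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):
            if word[i:j] in VOCAB:
                tokens.append(word[i:j])
                i = j
                break
    return tokens

print(tokenize("important"))   # -> ['import', 'ant']  (2 tokens)
print(tokenize("itnrpamtot"))  # the scramble shatters into 9 tokens
```

Unscrambling “itnrpamtot” back to “important” therefore requires relating letters scattered across many tokens, which is exactly the kind of letter-level reasoning subword models were thought to be bad at.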
