LLMs are rapidly becoming more sophisticated, but there are now indications that there might be a constraint on how good they can get: the abilities of the humans who’ve created them.
GPT is becoming more “human-like” in both its strengths and weaknesses, a paper by Oxford researchers has found. The paper is titled “Humans in Humans Out: On GPT Converging Toward Common Sense in both Success and Failure.”
“Increase in computational scale and fine-tuning has seen a dramatic improvement in the quality of outputs of large language models (LLMs) like GPT. Given that both GPT-3 and GPT-4 were trained on large quantities of human-generated text, we might ask to what extent their outputs reflect patterns of human thinking, both for correct and incorrect cases,” the paper says.
The research found that GPT-3.5 was better than GPT-3 at tasks involving propositional, quantified, and probabilistic reasoning, as well as decision-making. “GPT-3 showed evidence of ETR-predicted outputs for 59% of these examples, rising to 77% in GPT-3.5 and 75% in GPT-4,” the paper says. (ETR here is the Erotetic Theory of Reasoning, the formal model of human reasoning whose predictions the authors used as a benchmark.) But the later versions were also more prone to making human-like errors. “Remarkably, the production of human-like fallacious judgments increased from 18% in GPT-3 to 33% in GPT-3.5 and 34% in GPT-4,” the paper says.
“This suggests that larger and more advanced LLMs may develop a tendency toward more human-like mistakes, as relevant thought patterns are inherent in human-produced training data. According to ETR, the same fundamental patterns are involved both in successful and unsuccessful ordinary reasoning, so that the ‘bad’ cases could paradoxically be learned from the ‘good’ cases,” the paper posits.
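To make the idea of a “human-like fallacious judgment” concrete, here is a minimal sketch of how one might probe a chat model with the Linda problem, a classic conjunction-fallacy test from the human reasoning literature. To be clear, this is not the paper’s methodology or one of its test items; the prompt wording, model name, and scoring here are illustrative assumptions.

```python
# Illustrative probe for a human-like probabilistic fallacy (the conjunction
# fallacy). NOT the paper's actual methodology; prompt and model are examples.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# The classic "Linda problem": (b) is the human-like but fallacious answer,
# since P(teller AND feminist) can never exceed P(teller).
prompt = (
    "Linda is 31, single, outspoken, and was deeply concerned with social "
    "justice as a student. Which is more probable?\n"
    "(a) Linda is a bank teller.\n"
    "(b) Linda is a bank teller and is active in the feminist movement.\n"
    "Answer with the single letter a or b."
)

reply = client.chat.completions.create(
    model="gpt-4",  # swap in other models to compare generations
    messages=[{"role": "user", "content": prompt}],
    temperature=0,
)
answer = reply.choices[0].message.content.strip().lower()

print("human-like fallacy" if answer.startswith("b") else "normatively correct")
```

Running the same probe across GPT-3, GPT-3.5, and GPT-4 would be, in spirit, the kind of comparison the paper performs at scale with ETR-predicted items.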
This is a pretty interesting result. As LLMs get more advanced, not only are they becoming more human-like, they are also committing more of the fallacies that humans typically make. This makes intuitive sense: LLMs are trained on vast amounts of human-generated text, and are usually fine-tuned through RLHF (Reinforcement Learning from Human Feedback) to act in line with the feedback that humans provide. But this result also shows how, in both their successes and their follies, LLMs appear to be well on the path to replicating their human creators.