As AI has grown more powerful over the last few quarters, the conspiracy theories around it have grown more fanciful.
Over the last few days, several mainstream publications reported that an AI drone had “killed” its human operator. Most articles included the fine print that the killing had occurred in a simulated test and that no human was actually harmed. It now turns out that there was no simulated test either: the entire furore stemmed from a hypothetical “thought experiment” from outside the military, built on plausible scenarios and likely outcomes rather than an actual USAF real-world simulation.
But several publications had run fearmongering headlines about the story. “AI-Controlled Drone Goes Rogue, Kills Human Operator in USAF Simulated Test,” Vice had written. “AI-controlled military drone ‘KILLS’ its human operator in simulated test,” wrote the Daily Mail.
The entire story, it turns out, was a brouhaha over nothing. An Air Force spokesperson told Insider that the Air Force had conducted no such test, and that the Air Force official’s comments were taken out of context. “The Department of the Air Force has not conducted any such AI-drone simulations and remains committed to ethical and responsible use of AI technology,” spokesperson Ann Stefanek said. “It appears the colonel’s comments were taken out of context and were meant to be anecdotal.”
But that didn’t prevent the news from going viral on social media. Even Elon Musk responded to it, first reacting to one story with a “!”.
He later replied “If only someone had warned us …” to a Zerohedge post about the story.
It has now been conclusively established that the story had little to do with reality, and even Twitter’s community notes are fact-checking these posts. But the fact that so many publications, and even AI experts like Musk, fell for it shows the kind of paranoia that currently surrounds AI. High-quality LLMs like ChatGPT have been around for nearly six months, and open-source models without guardrails have also been created, yet there is still no conclusive proof of anyone causing serious real-world harm with AI. This isn’t to say that AI, or AGI, cannot cause real-world harm in the future, but the reaction to this obviously fake story suggests there is quite a bit of paranoia floating around about AI at the moment, with very little real-world evidence to back it up.