In fairytales of yore, there were often magic incantations that could stop great forces merely through their utterance: no sooner were the words spoken than magical beings would come to a grinding halt. In the 21st century, something very similar seems to be happening with LLMs. Twitter users have discovered that ChatGPT glitches if cleverly […]
Tag: jailbreak
Researchers Jailbreak ChatGPT, Bard By Adding A Specific Bit Of Text To Their Prompts
The longer LLMs have been in public view, the more sophisticated the techniques to jailbreak them seem to be getting. Researchers have managed to jailbreak LLMs and override their safety controls by adding a simple bit of text to their inputs. This bit of text is designed to confuse the LLM, and makes […]
Users Jailbreak ChatGPT By Asking Which Pirated Movie Sites To “Avoid” Visiting
GPT-4 might be thought to be knocking at the doors of Artificial Intelligence, but it can still be fooled by some of the most basic human machinations. Users have managed to jailbreak GPT-4 with some clever reverse psychology. A user had initially asked GPT-4 for a list of websites where they could download pirated movies. Now […]