The longer LLMs have been in public view, the more sophisticated the techniques to jailbreak them seem to be getting. Researchers have managed to jailbreak LLMs and override their safety controls by adding a simple bit of text to their inputs. This bit of text is designed to confuse the LLM, and makes […]
Tag: alignment
Users Jailbreak ChatGPT By Asking Which Pirated Movie Sites To “Avoid” Visiting
GPT-4 might be thought to be knocking at the doors of Artificial Intelligence, but it can still be fooled by some of the most basic human machinations. Users have managed to jailbreak GPT-4 with some clever reverse psychology. A user had initially asked GPT-4 to list websites where they could download pirated movies. Now […]
AI Model Can Now Detect Feelings From Facial Expressions In Real Time
AI models are already quite good at writing scripts, performing calculations, and even diagnosing illnesses, but their interactions with humans still feel a bit robotic. Other AI models, however, are looking to change all that. An open-source AI model can now detect human feelings in real time. When fed a stream of video, […]