AI models are already quite good at writing scripts, performing calculations, and even diagnosing illnesses, but their interactions with humans still feel a bit robotic. A new crop of AI models is looking to change all that.
An open-source AI model now detects human feelings in real time. When fed a video stream, the model predicts the feeling the person featured in the video is currently experiencing, plotting it on a circular grid. The feelings are categorized into four quadrants: ‘happy’, ‘excited’ and ‘surprise’ form one, while ‘angry’, ‘fear’ and ‘disgust’ form another. A third quadrant holds ‘bad’, ‘depressed’, ‘bored’ and ‘tired’, while the fourth lists ‘relaxed’ and ‘sleepy’.
As the subject speaks, a yellow dot jumps from emotion to emotion, indicating what the subject is currently feeling. The demo appears to be quite impressive, and the code is open-sourced as well.
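The article doesn't name the model or its code, so as a purely illustrative sketch, here is how such a circular emotion grid is often represented: each label gets an angle on a valence-arousal circle, and the four quadrants described above fall out of that geometry. All angles and function names below are assumptions, not the actual project's API.

```python
import math

# Hypothetical angle assignments (in degrees) placing each label from the
# article's four quadrants on a circle. These values are illustrative
# assumptions, not taken from the open-source model itself.
EMOTION_ANGLES = {
    "happy": 30, "excited": 60, "surprise": 80,                 # quadrant 1
    "angry": 120, "fear": 140, "disgust": 160,                  # quadrant 2
    "bad": 200, "depressed": 220, "bored": 240, "tired": 260,   # quadrant 3
    "relaxed": 300, "sleepy": 330,                              # quadrant 4
}

def emotion_position(label: str, intensity: float = 1.0) -> tuple[float, float]:
    """Place an emotion label on the circle as (x, y) coordinates,
    scaled by intensity -- roughly where the demo's yellow dot would sit."""
    theta = math.radians(EMOTION_ANGLES[label])
    return (intensity * math.cos(theta), intensity * math.sin(theta))

def quadrant(label: str) -> int:
    """Return which of the four 90-degree quadrants a label falls in (1-4)."""
    return EMOTION_ANGLES[label] // 90 + 1
```

In a layout like this, nearby emotions sit at nearby angles, so the dot moving smoothly between ‘bored’ and ‘tired’ is just a small angular step, while a jump from ‘happy’ to ‘angry’ crosses quadrant boundaries.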
The implications of such a model could be profound. Communication with AI today happens largely through text. But if intelligent systems could understand how human beings are feeling while interacting with them, they would be able to give more lifelike answers: the same sentence, spoken with different emotions, can carry different meanings, and an AI that can better read how humans are feeling will be able to better align with their needs. It's still early days, but advances in LLMs, robotics, and now the ability of AI models to understand human feelings all appear to be converging towards an intelligence that's not too dissimilar from our own.