News · Technologies · 07-03-2025 at 15:39

AI language models have learned to recognize emotions and hidden meaning better than some people


Oleksandr Fedotkin

Author of news and articles


Large AI language models are already capable of recognizing hidden subtext, emotions, and other subtleties in text on a par with people.

A recent study has shown that LLMs such as GPT-4, Gemini, Llama-3.1-70B, and Mixtral 8×7B can, to a certain extent, detect political views, sarcasm, and positive or negative connotations in words. The authors of the study found that these LLMs analyze sentiment, political views, emotions, and sarcasm almost as well as people do.

The study also involved 33 volunteers, and about 100 selected text fragments were analyzed in total. In particular, GPT-4 proved to be even more consistent than humans in identifying political views.

GPT-4 also proved able to detect emotional coloring and negative or positive connotations in text. The AI could recognize whether a text was written by a mildly irritated person or a deeply outraged one. Sarcasm was the hardest category for the LLMs to recognize, although the human participants did not demonstrate significant success in recognizing it either.

Understanding how much emotion a person is feeling or how sarcastic they are can be crucial for supporting mental health, improving customer service, and even ensuring security. Using LLMs like GPT-4 can significantly reduce the time and cost of analyzing large amounts of textual information.
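To make this concrete, here is a minimal sketch of how such an analysis might be automated, assuming access to the OpenAI Chat Completions API in Python. The prompt wording, the label set, and the classify_tone helper are illustrative assumptions, not the protocol used in the study:

```python
# A minimal sketch, not the study's actual setup: classifying the tone of a
# text fragment with GPT-4 through the OpenAI Chat Completions API.
# The prompt, label set, and helper name below are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

LABELS = ["positive", "negative", "neutral", "sarcastic"]

def classify_tone(text: str) -> str:
    """Ask the model to assign exactly one of the predefined tone labels."""
    response = client.chat.completions.create(
        model="gpt-4",
        temperature=0,  # deterministic output helps consistency across runs
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a text analyst. Classify the tone of the user's "
                    f"text with exactly one word from: {', '.join(LABELS)}."
                ),
            },
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content.strip().lower()

print(classify_tone("Oh great, another meeting that could have been an email."))
```

In practice, a researcher would run a loop like this over thousands of posts and aggregate the labels, which is where the savings in time and cost over manual coding would come from.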

Sociologists often spend months analyzing users’ texts to identify certain trends. GPT-4, by contrast, opens the door to faster and more responsive investigations, which is especially important during crises, elections, or emergencies.

Using similar language models can simplify the work of investigative journalists and fact-checkers. GPT-4-based tools can help flag emotionally charged or politically biased posts. 

However, transparency, fairness, and political impartiality remain challenges for AI. Meanwhile, in understanding and analyzing texts, large language models are quickly catching up with humans. Further research should include a systematic and thorough analysis of how stable the output of AI models is.

The results of the study were published in the journal Scientific Reports.

Source: The Conversation


