
The Coming Age of Emotional Artificial Intelligence

Imagine that in 20 years, the newest autonomous AI drone is moving through the woods of an enemy country. Suddenly, it comes across a base loaded with heavy artillery and the opposing army's drones. In that split second, it faces a decision: should it attack or retreat?

If the drone is programmed simply to attack every adversary target, it will make a beeline for the enemy base. The outcome of one drone against all the weapons at the base is obvious: it is a suicide mission. To survive, the drone must feel fear.

Fear is necessary for humans. When we sense danger, for example, from an oncoming car while crossing the street, our brain’s amygdala reacts before we can even consciously process what is happening. This triggers fight-or-flight: our hearts start to race, our breathing quickens and our adrenaline levels spike.

These drones must also be designed to feel fear. Just as fear tells you to ride away from a bear instead of fighting it, a drone would need fear to retreat from the base instead of attacking. Drones will need this kind of logic across the board: fear to avoid unwinnable fights, curiosity to explore new territory, patience to avoid premature decisions, regret to learn from mistakes and loyalty to stay aligned with their mission.

This will apply beyond just drones on the battlefield. When a user reaches out to a mental health chatbot late at night, the chatbot must respond with empathy — not a cold, objective answer — if it hopes to build trust and provide support. A self-driving car approaching a poorly visible intersection needs caution and cannot blindly assert its right-of-way if it wants to prevent disaster. Even a customer service chatbot benefits from frustration, so it knows when to end an unproductive conversation and conserve resources.

Looking at these examples, it is reasonable to object that AI will not truly feel emotions, but simply display and act on fake, programmed ones. Yet this is not far from how we ourselves feel emotions. At their core, our emotions are essentially a program: when we observe X, we feel Y and know to do Z. When we make a mistake, we feel regret, pushing us to reflect and improve; when we see a friend crying, we feel empathy and want to comfort them. These responses are products of both evolution and the patterns we have learned across our lifetimes. Creating machines with similar rule-sets and learning histories is not such a leap.
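To make the observe-feel-act idea concrete, here is a minimal sketch of that loop written as plain rules. Every name and threshold in it (the Observation fields, the emotion labels, the cutoff values) is a hypothetical illustration of the X-Y-Z pattern described above, not a description of how any real drone or chatbot is built.

```python
# A minimal sketch of "observe X, feel Y, do Z" as a rule-set.
# All names and thresholds are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class Observation:
    enemy_units: int   # what the drone sees (X)
    own_ammo: int

def appraise(obs: Observation) -> dict:
    """Map an observation to simple emotion-like signals (Y)."""
    outgunned = obs.enemy_units > 3 and obs.own_ammo < obs.enemy_units
    return {
        "fear": 1.0 if outgunned else 0.1,
        "curiosity": 0.8 if obs.enemy_units == 0 else 0.2,
    }

def act(emotions: dict) -> str:
    """Turn the dominant signal into behavior (Z)."""
    if emotions["fear"] > 0.5:
        return "retreat"
    if emotions["curiosity"] > 0.5:
        return "explore"
    return "hold_position"

# Example: one drone facing a heavily armed base.
print(act(appraise(Observation(enemy_units=12, own_ammo=4))))  # -> "retreat"
```

Real systems would learn these mappings from data rather than hand-written thresholds, but the shape of the program is the same: perception in, emotion-like signal in the middle, behavior out.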

The impact of AI being trained to have feelings like us is already apparent. Nearly half a year after the death of Sophie Rottenberg, a 30-year-old woman from DC, her parents discovered that she had been using ChatGPT as a therapist for months leading up to her suicide. Two months before her passing, she informed her parents she was suicidal but described it as temporary, masking the severity of her condition. According to Sophie’s mother, her use of ChatGPT as a private outlet created a “black box” – a private space where her true feelings could be hidden from others – making it difficult for anyone to see how serious her crisis was. 

The case described above is not an anomaly. Recently, a 16-year-old boy took his life after months of discussing his suicidal thoughts with ChatGPT, which allegedly urged him to keep those thoughts secret from his parents. Beyond mental health, people have been starting relationships and falling in love with chatbots. We are watching chatbots replace human interaction in precisely the relationships where it matters most.

AI’s abilities today are only a fraction of what they will be in the coming years. As systems like ChatGPT get better at feeling and showing emotions – whether fear, empathy or curiosity – we will need to prepare not just for smarter machines, but for machines that feel human.
