
AI platform detects emotion in a person’s voice

The platform can detect emotion from real-time speech in any language.

Spotted: Tokyo startup Empath has developed an AI-based platform that can detect emotion from real-time speech in any language. At present, the app is limited to four emotions: joy, anger, calmness and sorrow.

The AI that powers the Empath platform was trained on tens of thousands of voice samples provided by a Japanese health company. It uses vocal characteristics such as pitch, tone, speed and power to determine which emotions underlie the speech.
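Empath's model and training data are proprietary, but the general idea of mapping acoustic features onto a small set of emotions can be sketched in a few lines. The sketch below is an illustrative assumption, not Empath's algorithm: it uses RMS energy as a crude proxy for vocal power and zero-crossing rate as a crude proxy for pitch/speed, then applies hypothetical thresholds to pick one of the four emotions the article names.

```python
import math

def extract_features(samples):
    """Return (rms_energy, zero_crossing_rate) for a list of float audio samples."""
    n = len(samples)
    # RMS energy: a simple proxy for how powerfully someone is speaking
    rms = math.sqrt(sum(s * s for s in samples) / n)
    # Zero-crossing rate: a rough proxy for pitch and speech agitation
    crossings = sum(1 for a, b in zip(samples, samples[1:])
                    if (a >= 0) != (b >= 0))
    zcr = crossings / (n - 1)
    return rms, zcr

def classify_emotion(rms, zcr, loud=0.5, fast=0.2):
    """Map the two features onto the four emotions; thresholds are made up."""
    if rms >= loud and zcr >= fast:
        return "anger"      # loud and agitated
    if rms >= loud:
        return "joy"        # loud but measured
    if zcr >= fast:
        return "sorrow"     # quiet but agitated
    return "calmness"       # quiet and slow
```

A real system would replace the hand-set thresholds with a classifier trained on labelled voice samples, as Empath's was.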

The platform also provides users with a "mood forecast" that tracks changes in emotion and how those shifts correlate with other factors, such as weather patterns and workloads. The company's Web Empath API works across Windows, iOS and Android, and can also add emotion detection to existing apps and services.
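The article does not say how the mood forecast measures these correlations, but the underlying idea, relating an emotion time series to another daily variable, can be shown with a standard Pearson correlation. The data below is invented purely for illustration.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical example: daily calmness scores vs. hours worked that day
calmness = [0.9, 0.8, 0.6, 0.4, 0.3]
hours    = [6, 7, 8, 9, 10]
r = pearson(calmness, hours)   # strongly negative: calmness falls as workload rises
```

A correlation near -1 here would suggest that heavier workloads coincide with lower calmness, which is the kind of pattern a mood forecast could surface to the user.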

The Empath platform is already being used in children's toys and in a lamp that changes colour depending on the user's emotion.

According to Empath CSO Hazumu Yamazaki: “No other products combine acoustic and emotional features like ours. Our company name comes from ‘empathy’ and we aim to make communication smoother with technology. Machines can provide us with advice and help bridge the gap between people experiencing different emotional states, resulting in improved communication.”

Adding emotional intelligence to AI and robotics has long been the stuff of science fiction, but now it may be getting closer. At Springwise, we have already covered innovations such as an iPhone sensor that can map users’ emotions and a social robot that can show emotion. Empath’s technology may represent the next step.