Call-monitoring service Cogito uses AI to analyze the voices of customers on the phone with service agents, and it prompts those agents to speak with more empathy when it detects frustration.
Affectiva, a project spun out of MIT’s Media Lab, makes AI software that can detect vocal and facial expressions from humans, using data from millions of videos and recordings of people across cultures.
The chatbot Woebot, which bills itself as “your charming robot friend who is ready to listen, 24/7,” uses artificial intelligence to offer emotional support and talk therapy, like a friend or a therapist.
She used the basic infrastructure from her bot project to create something new, feeding her text messages with Mazurenko into a neural network and creating a bot in his likeness. If Kuyda could make something that she could talk to, and that could talk back, almost like her friend, then maybe, she realized, she could empower others to build something similar for themselves.
The chatbot uses a neural network to hold an ongoing, one-on-one conversation with its user and, over time, to learn how to speak like them.
It can’t answer trivia questions, order pizza, or control smart home appliances like other AI apps.
What if our voice assistants and chatbots could adjust their tone based on emotional cues?
If we can teach machines to think, can we also teach them to feel?