Oct 09, 2021 | Shaoni Ghosh
Robotics has become one of the most significant fields of research and development, drawing on mechanical engineering, computer science and other disciplines. It concentrates on building robots and designing them for the purpose they are meant to serve.
In the past few years, a variety of robots have been developed that either assist the elderly or serve as companions, with the aim of improving users' health and encouraging a positive outlook on life.
AI researchers have long been building robots that employ machine learning to promote interaction and communication and to exhibit human-like attributes at a social level, so that they come across as empathetic and supportive.
Recently, researchers at the Hitachi R&D Group and the University of Tsukuba in Japan developed a new method to endow companion robots with emotional speech. This would enable the robots to interact with elderly or vulnerable patients the way caregivers and other health professionals do.
Takeshi Homma et al elaborated on this method in a paper published on arXiv, stating, "For a robot...to imitate the range of human emotions,...we propose a speech synthesis method" that maps out the emotional states associated with humans.
(Recommended Blog: Emotional Artificial Intelligence - An Overview)
First, a machine learning model was trained on a dataset of human voice recordings collected at different times of day. The method integrates speech synthesis with emotional speech recognition techniques.
(Also Check: The Looming Future of Machine Learning in Robotization)
Unlike past emotional speech synthesis techniques, the new approach requires less manual work to align emotions with synthesized speech. The researchers stated that the synthesizer receives an emotional vector, which is in turn extracted from human speech by a "speech emotion recognizer".
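The pipeline described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the function names, the stub logic, and the three-dimensional (valence, arousal, dominance) encoding of the emotion vector are all assumptions made for the example.

```python
# Sketch of an emotion-conditioned synthesis pipeline (illustrative only):
# a recognizer maps reference speech to an emotion vector, and the
# synthesizer conditions its prosody on that vector.
from dataclasses import dataclass
from typing import List

@dataclass
class EmotionVector:
    valence: float    # negative..positive affect, in [-1, 1] (assumed encoding)
    arousal: float    # calm..excited, in [0, 1]
    dominance: float  # submissive..assertive, in [0, 1]

def recognize_emotion(reference_audio: List[float]) -> EmotionVector:
    """Stand-in for a trained speech emotion recognizer. Here we fake it
    with a simple signal statistic: louder reference speech is treated
    as higher arousal."""
    if not reference_audio:
        return EmotionVector(0.0, 0.5, 0.5)
    energy = sum(x * x for x in reference_audio) / len(reference_audio)
    arousal = min(1.0, energy * 10.0)
    return EmotionVector(valence=0.0, arousal=arousal, dominance=0.5)

def synthesize(text: str, emotion: EmotionVector) -> dict:
    """Stand-in for an emotion-conditioned synthesizer: the emotion
    vector modulates prosody parameters (speaking rate, pitch)."""
    rate = 0.8 + 0.4 * emotion.arousal   # more aroused speech is faster
    pitch_shift = 2.0 * emotion.valence  # semitones; happier = higher pitch
    return {"text": text, "rate": rate, "pitch_shift": pitch_shift}

# Usage: condition the robot's greeting on a caregiver's reference recording.
reference = [0.3, -0.2, 0.4, -0.3]  # toy waveform samples
utterance = synthesize("Good morning!", recognize_emotion(reference))
```

The point of the design is that no human has to hand-label the target emotion for each utterance: the emotion vector is recovered automatically from reference speech, which matches the reduced manual effort the researchers describe.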
The researchers then evaluated how effectively the new model generates imitative emotional speech. In a series of experiments, a robot using the model was found to influence the mood and arousal levels of elderly users.
The users were asked for feedback on how they perceived the companion robots and whether they felt more awake or sleepy after listening to the speech samples.
As reported by TechXplore, "The results are highly promising": participants reported feeling more active in the morning and calmer at night. This suggests that the emotional speech synthesizer succeeded in imitating caregiver-like speech adjusted to the circadian rhythms observed in most elderly people.
In conclusion, the researchers envision companion robots that vary the emotion in their speech according to the time at which they interact with users, matching the users' "wakefulness or arousal."
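The time-of-day adjustment could be sketched as a simple schedule. The specific hours and arousal values below are illustrative assumptions, not figures from the paper; the idea is only that the robot targets energetic speech in the morning and calmer speech at night.

```python
# Illustrative mapping from hour of day to a target arousal level for the
# robot's speech, loosely following a typical circadian wakefulness curve.
def target_arousal(hour: int) -> float:
    """Map an hour (0-23) to a target arousal level in [0, 1]."""
    if not 0 <= hour <= 23:
        raise ValueError("hour must be in 0..23")
    if 6 <= hour < 12:
        return 0.9   # morning: energizing, wakeful speech
    if 12 <= hour < 18:
        return 0.6   # afternoon: moderate tone
    if 18 <= hour < 22:
        return 0.4   # evening: winding down
    return 0.2       # night: calm, soothing speech

# Usage: pick a speaking style for 8 am versus 11 pm.
morning, night = target_arousal(8), target_arousal(23)
```

A schedule like this would feed the arousal component of the synthesizer's emotion vector, so the same text is rendered energetically at breakfast and soothingly at bedtime.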