Autonomous Learning for a Social Behavior Profile in Emotive Human-Robot Interaction
Context and Background:
In emotive human-robot interaction, facial expressions and vocal expressions are channels of expressive communication between the human and the robot. Computer vision methods combined with Bayesian classification algorithms make it possible to analyse a user's facial expressions, and a Bayesian network applied to features extracted from the audio signal can classify vocal expressions. A method called the emotional vector can be used to emulate an artificial social behavior profile on the robot: the fused output of both classifiers enters the emotional vector together with a pre-defined social behavior profile. During analysis, features are detected in order to classify an expression; during synthesis, the input is an expression and the output is the set of features that compose it. Since our purpose is to go beyond human imitation, adding an intelligent component that gives the robot a personality/social behavior profile was also a concern. Autonomous learning can be used to re-fill the likelihood tables produced by the initial learning, so that the robot learns by itself which emotion is best to express during a conversation.
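The pipeline above can be sketched in a few lines. This is only an illustrative sketch, not the project's actual implementation: the emotion label set, the fusion weights, and the learning-rate update rule are all assumptions introduced here. It shows (a) fusing the facial and vocal classifier posteriors with a pre-defined social behavior profile into a single emotional vector, and (b) a simple online re-fill of a likelihood table, standing in for the autonomous learning step.

```python
import numpy as np

# Hypothetical emotion label set; the real module may use a different one.
EMOTIONS = ["happy", "sad", "angry", "surprised", "neutral"]

def emotional_vector(face_post, voice_post, profile, w=(0.4, 0.4, 0.2)):
    """Fuse facial/vocal classifier posteriors with a pre-defined
    social behavior profile into a normalized emotional vector.
    The weights w are an assumed design parameter."""
    v = (w[0] * np.asarray(face_post)
         + w[1] * np.asarray(voice_post)
         + w[2] * np.asarray(profile))
    return v / v.sum()

def refill_likelihood(table, observed_idx, lr=0.1):
    """Autonomous-learning sketch: nudge the likelihood table toward
    the emotion actually observed during the conversation."""
    target = np.zeros(len(table))
    target[observed_idx] = 1.0
    return (1.0 - lr) * np.asarray(table) + lr * target
```

Because each input distribution sums to one and the weights sum to one, the fused vector is again a valid distribution, and the convex update in `refill_likelihood` keeps the table normalized across repeated conversations.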
Emotive capabilities become especially important for robots that support people in everyday life (e.g., the elderly or children).
The major outcomes of the project are:
• Emotional Vector Module with autonomous learning.
The deliverables of the project include:
• A detailed survey of emotive robots.
• A facial expression analysis ROS module.
Required skills:
• Programming skills
Supervisor:
• Professor Jorge Dias, Institute of Systems and Robotics, University of Coimbra, Jorge@isr.uc.pt