Title:
Autonomous Learning for Social Behavior Profiles in Emotive Human-Robot Interaction
Description:

Context and Background:

In the context of emotive human-robot interaction, facial expressions and vocal expressions are channels of expressive communication between human and robot. By applying computer vision methods and Bayesian classification algorithms, it is possible to analyse a user's facial expressions. By applying a Bayesian network to features extracted from an audio signal, it is possible to classify vocal expressions. A method called the emotional vector can be used to emulate an artificial social behavior profile on the robot. The fusion of both classifiers enters the emotional vector as input, together with a pre-defined social behavior profile. During analysis, features are detected in order to classify one expression; in synthesis, the input is an expression and the output is the set of features which compose it. Since our purpose is to go beyond human imitation, creating an intelligent component in our system that allows us to add a personality/social behavior profile to the robot was also a concern. Autonomous learning can be used to refill the likelihood tables produced by the initial learning, so that the robot learns by itself which emotion is best to express during the conversation.

Problem Statement:

Emotive characteristics become especially important for robots that support people in everyday life (e.g., the elderly or children).
Applicability 1) A person is injured and helpless, with no one around. The robot can detect the abnormal emotional situation and trigger an alert so that someone comes to help.
Applicability 2) An elderly person living in a care home has daily morning exercises on his/her agenda. The robot can guide the exercises while providing emotional support.
How shall the robot select the emotion to synthesize? The choice is context dependent, which is why we created the Emotional Vector: it makes the robot configurable for different scenarios.


Outcomes and Deliverables:

The major outcomes of the project are:

• Emotional Vector Module with autonomous learning.
• Robust facial expression analysis module integrated into ROS.
• Programming skills using ROS and C++.

The deliverables of the project include:

• A detailed survey of emotive robots.
• Development of modules in ROS for analysis, synthesis and emotional vector.
• Simulation and experimentation on hardware available at the ISR.
• Dissemination and Technical report on findings.

Tasks:

1. Facial expression analysis ROS module
2. Facial expression synthesis ROS module
3. Vocal expression analysis ROS module
4. Vocal expression synthesis ROS module
5. Emotional Vector with static social behavior profiles (SBPs)
6. Emotional Vector with dynamic SBPs
7. Integration of all modules
8. Technical report and Dissemination


Pre-requisites Required for Student:

• Programming Skills
• Interest in Computer Vision and Image Processing

Supervisors:

• Professor Jorge Dias; Institute of Systems and Robotics, University of Coimbra, Jorge@isr.uc.pt
• José Prado; Institute of Systems and Robotics, University of Coimbra, jaugusto@isr.uc.pt