Faria, Diego R., Faria, Fernanda C. C. and Premebida, Cristiano (2017). Towards multimodal affective expression: merging facial expressions and body motion into emotion. In: IEEE RO-MAN'17: Workshop Proceedings on Artificial Perception, Machine Learning and Datasets for Human-Robot Interaction (ARMADA'17), pp. 16-20. IEEE.
Abstract
Affect recognition plays an important role in everyday human life, as expressions are a substantial channel of communication. Humans rely on different channels of information to understand the affective messages communicated by others. Similarly, an automatic affect recognition system should be able to analyse different types of emotion expression. In this respect, an important issue to be addressed is the fusion of different channels of expression, taking into account the relationship and correlation across modalities. In this work, affective facial and bodily motion expressions are addressed as channels for the communication of affect and combined in an emotion recognition system. A probabilistic approach is used to fuse the two modalities, incorporating geometric facial expression features and skeleton-based body motion features. Preliminary results show that the presented approach has potential for automatic emotion recognition and can be used for human-robot interaction.
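The abstract describes fusing per-modality cues probabilistically. The record does not give the exact fusion rule, so the sketch below uses a generic weighted product of class posteriors from two hypothetical classifiers (one for facial features, one for body motion); the emotion labels, posterior values, and equal weights are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Hypothetical class posteriors from two independently trained classifiers.
# Labels and probabilities are illustrative only, not from the paper.
emotions = ["happy", "sad", "angry", "surprised"]
p_face = np.array([0.60, 0.10, 0.20, 0.10])   # facial-expression classifier
p_body = np.array([0.50, 0.05, 0.35, 0.10])   # body-motion (skeleton) classifier

def fuse_posteriors(p1, p2, w1=0.5, w2=0.5):
    """Weighted product rule: combine per-modality posteriors, renormalise."""
    fused = (p1 ** w1) * (p2 ** w2)
    return fused / fused.sum()

p_fused = fuse_posteriors(p_face, p_body)
print(emotions[int(np.argmax(p_fused))])
```

When both modalities agree, the product rule sharpens the decision; when they disagree, the weights let one channel dominate, which is one common way to account for unequal modality reliability.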
| Field | Value |
|---|---|
| Divisions | College of Engineering & Physical Sciences |
| Additional Information | Copyright: IEEE & ARMADA 2017 |
| Uncontrolled Keywords | Emotion recognition, probabilistic approach, human-robot interaction |
| Last Modified | 30 Oct 2024 08:48 |
| Date Deposited | 26 Oct 2017 10:10 |
| Full Text Link | https://sites.g ... an17armada/home |
| PURE Output Type | Conference contribution |
| Published Date | 2017-09-18 |
| Accepted Date | 2017-09-01 |
| Authors | Faria, Diego R. (ORCID: 0000-0002-2771-1713); Faria, Fernanda C. C.; Premebida, Cristiano |