Towards multimodal affective expression: merging facial expressions and body motion into emotion


Affect recognition plays an important role in everyday human life, and expressions are a fundamental channel of communication. Humans can rely on different channels of information to understand the affective messages communicated by others. Similarly, an automatic affect recognition system is expected to analyse different types of emotion expression. In this respect, an important issue to be addressed is the fusion of different channels of expression, taking into account the relationships and correlations across modalities. In this work, affective facial and bodily motion expressions are addressed as channels for the communication of affect and combined into an emotion recognition system. A probabilistic approach is used to combine features from the two modalities, incorporating geometric facial expression features and skeleton-based body motion features. Preliminary results show that the presented approach has potential for automatic emotion recognition and can be used for human-robot interaction.
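The abstract does not specify how the two modalities' features are combined probabilistically, so the sketch below illustrates one common scheme such a system might use: late fusion of per-modality class posteriors with a weighted product rule. The emotion labels, weights, and function names are all hypothetical, not taken from the paper.

```python
# Hypothetical sketch of probabilistic late fusion for two modalities.
# The paper's exact model is not given in the abstract; this shows a
# generic weighted product rule over per-class posterior probabilities.

EMOTIONS = ["happy", "sad", "angry", "surprised"]  # assumed label set

def fuse_posteriors(face_probs, body_probs, w_face=0.5, w_body=0.5):
    """Combine two per-class posterior distributions with a weighted
    product rule and renormalise so the result sums to 1."""
    fused = [(f ** w_face) * (b ** w_body)
             for f, b in zip(face_probs, body_probs)]
    total = sum(fused)
    return [x / total for x in fused]

# Example: the face classifier strongly favours "happy" and the body
# classifier weakly agrees, so the fused distribution favours "happy".
face = [0.7, 0.1, 0.1, 0.1]
body = [0.4, 0.2, 0.2, 0.2]
fused = fuse_posteriors(face, body)
prediction = EMOTIONS[max(range(len(fused)), key=fused.__getitem__)]
```

The product rule rewards classes on which both modalities agree; the weights let a more reliable modality (here, assumed equal) dominate the decision.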

Divisions: College of Engineering & Physical Sciences
Additional Information: Copyright: IEEE & ARMADA 2017
Uncontrolled Keywords: Emotion recognition, probabilistic approach, human-robot interaction
Last Modified: 13 Jun 2024 07:44
Date Deposited: 26 Oct 2017 10:10
Full Text Link: https://sites.g ... an17armada/home
Related URLs:
PURE Output Type: Conference contribution
Published Date: 2017-09-18
Accepted Date: 2017-09-01
Authors: Faria, Diego R. (ORCID Profile 0000-0002-2771-1713)
Faria, Fernanda C. C.
Premebida, Cristiano



Version: Accepted Version



