TU Delft EMMA: Database for emotion and mood recognition

Author: TU Delft

Partner: Yes

Contact: Judith Redi (J.A.Redi@tudelft.nl)

Description

EMMA is a collection of videos recorded in the lab of the Interactive Intelligence Group of Delft University of Technology. We employed young Dutch actors to portray everyday moods that occur without interaction, such as sadness, anxiety and amusement. To make sure the actors were “into the mood”, we induced the desired moods in them with music, inspired them with scenarios (receiving a phone call, looking for something they lost, etc.), and left them free to improvise according to their felt mood and the situation.

We recorded with two cameras, one focused on the face and one capturing face and body, and with a Microsoft Kinect located approximately 2 metres away from the actors. All sensors remained at fixed positions, to resemble a typical smart ambience equipped with sensors. The actors could move around the setting, including sitting on the chair/couch, walking and standing. Due to the actors’ movements, the face is not properly captured in many frames.

We used the Kinect to extract postural features, such as the positions of the joints and their rotational speed. We extracted the joints for each video offline in two modes, seated (10 joints) and standing (20 joints), each optimized for a different situation. Due to the suboptimal distance of the Kinect from the actors, the skeleton joints are not detected in all frames of the videos. The joints are transcribed in separate text files, which can be downloaded under the title “Kinect joints”.
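As a rough illustration of working with such joint data, the Python sketch below parses a joints text file and derives one postural feature, the rotational speed of a joint angle. The file layout, the all-zeros convention for undetected skeletons, the joint indices and the 30 fps frame rate are all assumptions for illustration, not documented properties of the EMMA files; verify them against the actual downloads.

    import numpy as np

    # Hypothetical parser for an EMMA "Kinect joints" text file. The layout is
    # ASSUMED here: one line per frame, a frame index followed by x, y, z
    # coordinates per joint (10 joints seated, 20 joints standing).
    def load_joints(path, n_joints=20):
        frames = []
        with open(path) as f:
            for line in f:
                values = line.split()
                if len(values) != 1 + 3 * n_joints:
                    continue  # skip malformed or differently formatted lines
                coords = np.array(values[1:], dtype=float).reshape(n_joints, 3)
                if not coords.any():
                    continue  # ASSUMED convention: all zeros = skeleton not detected
                frames.append(coords)
        return np.stack(frames)  # shape: (n_detected_frames, n_joints, 3)

    def joint_angle(a, b, c):
        # Angle (radians) at joint b, between segments b->a and b->c.
        v1, v2 = a - b, c - b
        cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
        return np.arccos(np.clip(cos, -1.0, 1.0))

    def rotational_speed(frames, idx_a, idx_b, idx_c, fps=30.0):
        # Frame-to-frame change of the angle at joint idx_b, in rad/s.
        # 30 fps is the Kinect default and an assumption here.
        angles = np.array([joint_angle(f[idx_a], f[idx_b], f[idx_c]) for f in frames])
        return np.abs(np.diff(angles)) * fps

For example, the speed of elbow flexion would be rotational_speed(frames, shoulder_idx, elbow_idx, wrist_idx); which indices correspond to which joints depends on the extraction mode and must be taken from the files themselves.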

Access

All video sequences and other materials are available at http://ii.tudelft.nl/iqlab/EMMA.html. The video files are password protected; to obtain access, contact Judith Redi (J.A.Redi@tudelft.nl).

References and Citation

The data included in the EMMA database (videos, Kinect features, annotations) are publicly available to the research community. If you use EMMA in your research, please cite the following paper [KR15].

References

  • KR15: Katsimerou, C. and Redi, J., “Neural prediction of the user’s mood from visual input”, 5th International Workshop on Empathic Computing (IWEC 2014), 2014.