HuTwin Dataset - Multimodal Behavioral Dynamics in Collaborative VR Interactions
Description
This comprehensive multimodal dataset captures behavioral dynamics during collaborative virtual reality interactions. The dataset includes synchronized recordings from 40 participants (20 pairs) performing a VR cooking-game task across three sessions, totaling 120 recordings.
The dataset encompasses objective behavioral measurements, including body motion capture, facial expression tracking, and speech acoustic features, alongside subjective assessments of quality of experience (QoE), presence, and emotional responses. The experiment follows a within-subjects design manipulating four factors: avatar type (Chef vs. Humanoid), connection type (Host vs. Client), role (Student vs. Teacher), and network quality (delay/jitter: 0 ms vs. 500 ms).
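For orientation, the sketch below enumerates the condition combinations implied by this 2×2×2×2 design. The factor names and level labels are illustrative, reconstructed from the description above rather than taken from the dataset's own file schema.

```python
from itertools import product

# Illustrative factor names and levels reconstructed from the description
# above; these are not necessarily the dataset's own column names.
factors = {
    "avatar_type": ["Chef", "Humanoid"],
    "connection_type": ["Host", "Client"],
    "role": ["Student", "Teacher"],
    "delay_jitter_ms": [0, 500],
}

# Cartesian product of the factor levels: 2^4 = 16 condition combinations.
conditions = [dict(zip(factors, levels)) for levels in product(*factors.values())]

print(len(conditions))  # 16
print(conditions[0])    # {'avatar_type': 'Chef', 'connection_type': 'Host', ...}
```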
This research was conducted at the Net4U Laboratory, University of Cagliari, and funded by the European Commission through Next Generation EU and the Italian National Recovery Plan RESTART program. The dataset supports research on how avatar design, network conditions, and social roles affect user experience, behavior, and emotions in collaborative VR environments. Related work by the research team on QoE in multi-user collaborative VR games was presented at QoMEX 2025.
Access
Openly available for download from Zenodo
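A minimal Python sketch for listing the record's files through the public Zenodo REST API is given below. The record ID is a placeholder, since the identifier is not stated in this description; substitute the actual HuTwin record ID.

```python
import requests

RECORD_ID = "XXXXXXX"  # placeholder: replace with the actual HuTwin Zenodo record ID

resp = requests.get(f"https://zenodo.org/api/records/{RECORD_ID}", timeout=30)
resp.raise_for_status()
record = resp.json()

# Each file entry carries a filename ("key") and a direct download link.
for f in record.get("files", []):
    print(f["key"], f["links"]["self"])
```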
License
Creative Commons Attribution 4.0 International