Visual stimulus, Intent, Person (VIP) Dataset
Description
Eye-tracking data from 75 participants viewing 150 images, with one group performing anomaly detection and the other free viewing. Demographic and personality data are included to support trait-specific models.

The 150 images were randomly selected from the 758 images of the NUSEF dataset, which contains both neutral and affective images. The 75 subjects were recruited from undergraduate, postgraduate and working adult populations; male and female subjects were recruited separately to ensure an even gender distribution. Each subject viewed the 150 images either in a free-viewing setting (i.e. with no assigned task) or in an anomaly-detection setting. Each image was displayed for 5 seconds, followed by a gray screen for 2 seconds, and the images were presented in random order.

Eye-gaze data were recorded at 120 Hz with an SMI RED 250, a binocular infrared-based remote eye tracker. Subjects were seated 50 cm from a 22-inch LCD monitor with 1680x1050 resolution, a setup similar to those used in other eye-gaze research. Before the viewing experiment, subjects provided demographic data: gender, age group, ethnicity, religion, field of study/work, highest education qualification, income group, expenditure group and nationality. Three personality-type questions based on Jung's psychological types were also posed. The recorded eye-gaze data were preprocessed with the SMI SDK to extract fixations from each subject's preferred eye.
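The dataset's fixations were produced by the SMI SDK, whose internal method is not described here. As a rough illustration of how fixation extraction from raw gaze samples generally works, below is a minimal sketch of a dispersion-threshold (I-DT) approach; the thresholds, sample layout, and function names are illustrative assumptions, not the SDK's actual algorithm or the dataset's file format.

```python
# Illustrative I-DT (dispersion-threshold) fixation detection sketch.
# NOTE: this is NOT the SMI SDK's method; thresholds and the (t, x, y)
# sample layout are assumptions chosen for the example.

def dispersion(window):
    """Dispersion of a gaze window: (max x - min x) + (max y - min y)."""
    xs = [x for _, x, _ in window]
    ys = [y for _, _, y in window]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def detect_fixations(samples, max_dispersion=30.0, min_duration=0.1):
    """samples: list of (t_seconds, x_px, y_px), sorted by time.
    Returns fixations as (t_start, t_end, centroid_x, centroid_y)."""
    fixations = []
    i, n = 0, len(samples)
    while i < n:
        # Grow an initial window spanning at least min_duration seconds.
        j = i
        while j < n and samples[j][0] - samples[i][0] < min_duration:
            j += 1
        if j >= n:
            break
        if dispersion(samples[i:j + 1]) <= max_dispersion:
            # Extend the window while dispersion stays under threshold.
            while j + 1 < n and dispersion(samples[i:j + 2]) <= max_dispersion:
                j += 1
            window = samples[i:j + 1]
            xs = [x for _, x, _ in window]
            ys = [y for _, _, y in window]
            fixations.append((window[0][0], window[-1][0],
                              sum(xs) / len(xs), sum(ys) / len(ys)))
            i = j + 1  # continue after the fixation
        else:
            i += 1  # slide past the first sample and retry
    return fixations
```

At the dataset's 120 Hz sampling rate, each 5-second image exposure yields roughly 600 gaze samples per trial, so a 0.1 s minimum fixation duration corresponds to about 12 samples.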
Access
Data and code are available in a single archive (183.2 MB): http://mmas.comp.nus.edu.sg/VIP_files/VIP_Dataset.zip
Note: the download link is broken.
The original link was: http://mmas.comp.nus.edu.sg/VIP.html
License
The VIP dataset is available for research purposes only. By downloading or using the dataset, you are deemed to agree to its terms and conditions: http://mmas.comp.nus.edu.sg/VIP_files/terms.html
References and Citation
If you are using this dataset, please cite [MSK13].
References
MSK13
: Keng-Teck Ma, Terence Sim, and Mohan Kankanhalli. A Unifying Framework for Computational Eye-Gaze Research. 4th International Workshop on Human Behavior Understanding, Barcelona, Spain, 2013.