USC iLab Video Dataset

Author: University of Southern California

Partner: No

Contact: Laurent Itti (itti@pollux.usc.edu)

Tags:

Categories:

Total: 50

Ratings: 14

Description

The dataset contains 50 uncompressed YUV-format video clips. Each clip was presented to 14 subjects while an eye tracker recorded their eye fixation points frame by frame. The recorded eye traces capture the subjects' shifting overt attention, so the eye-tracking data are well suited for validating attention prediction models and assessing subjective visual quality.
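As a rough illustration of how such fixation data can be used to evaluate an attention (saliency) prediction model, the sketch below computes a normalized scanpath saliency (NSS) score for one frame. The fixation coordinate convention, frame size, and file layout used here are assumptions for illustration only, not the dataset's actual format.

```python
import numpy as np

def nss_score(saliency_map, fixations):
    """Normalized scanpath saliency: mean of the standardized saliency
    map sampled at the recorded fixation locations (higher is better)."""
    s = (saliency_map - saliency_map.mean()) / (saliency_map.std() + 1e-8)
    rows = [int(y) for x, y in fixations]   # fixations assumed as (x, y) pixels
    cols = [int(x) for x, y in fixations]
    return float(s[rows, cols].mean())

# Hypothetical usage: a random "prediction" scored against two fixation points.
sal = np.random.rand(720, 1280)            # assumed frame size
fix = [(640.0, 360.0), (100.0, 200.0)]
print(nss_score(sal, fix))
```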

Access

The 50 HD video clips, the raw eye-tracking experiment data, and versions of the clips with the recorded fixations overlaid are available at the following link: http://ilab.usc.edu/vagba/dataset/
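Since the clips are distributed as raw, uncompressed YUV, they carry no header describing their layout. The sketch below shows one way to read frames, assuming planar YUV 4:2:0 and a hypothetical resolution; the actual resolution and chroma subsampling of each clip should be checked against the dataset documentation.

```python
import numpy as np

# Assumed parameters (hypothetical): verify against the actual clips.
WIDTH, HEIGHT = 1280, 720
Y_SIZE = WIDTH * HEIGHT
UV_SIZE = Y_SIZE // 4                  # 4:2:0: each chroma plane is quarter size
FRAME_SIZE = Y_SIZE + 2 * UV_SIZE

def read_yuv420_frames(path):
    """Yield (Y, U, V) planes for each frame of a raw planar YUV 4:2:0 file."""
    with open(path, "rb") as f:
        while True:
            raw = f.read(FRAME_SIZE)
            if len(raw) < FRAME_SIZE:
                break
            buf = np.frombuffer(raw, dtype=np.uint8)
            y = buf[:Y_SIZE].reshape(HEIGHT, WIDTH)
            u = buf[Y_SIZE:Y_SIZE + UV_SIZE].reshape(HEIGHT // 2, WIDTH // 2)
            v = buf[Y_SIZE + UV_SIZE:].reshape(HEIGHT // 2, WIDTH // 2)
            yield y, u, v

# Example: count frames in a (hypothetical) clip file.
# n_frames = sum(1 for _ in read_yuv420_frames("clip01.yuv"))
```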

References and Citation

We have put a lot of effort into making these databases available to you. By downloading this database, you agree to properly cite the associated master reference, which is typically the paper in which we first described the database and used it with our model, and to provide a link to the present web page.

References

  • LQI11: Li, Z., Qin, S., Itti, L., Visual attention guided bit allocation in video compression, Image and Vision Computing, 29(1):1-14, 2011.