YawDD: A Yawning Detection Dataset
Description
YawDD provides two video datasets of drivers with various facial characteristics, intended for designing and testing algorithms and models for yawning detection. To collect these videos, male and female participants were asked to sit in the driver's seat of a car, and the videos were recorded under real and varying illumination conditions. In the first dataset, the camera is installed under the front (rear-view) mirror of the car; each participant has three or four videos, and each video contains different mouth conditions such as normal, talking/singing, and yawning. In the second dataset, the camera is installed on the dash in front of the driver, and each participant has one video with all of the above-mentioned mouth conditions in the same video. The car was parked for both datasets to keep the environment safe for the participants. As a benchmark, we also present the results of our own yawning detection method and show that it achieves much higher accuracy in the scenario with the camera installed on the dash in front of the driver.
Access
The files are available for download via HTTP:
Trace index: http://traces.cs.umass.edu/index.php/Mmsys/Mmsys
Single archive (4.9 GB): http://skuld.cs.umass.edu/traces/mmsys/2014/user06.tar
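As a convenience, the archive can be fetched and unpacked with standard tools. The following is a minimal sketch using only Python's standard library; the local file name and the output directory ("YawDD") are arbitrary choices, not part of the dataset itself.

    import tarfile
    import urllib.request

    # Direct link to the single-archive download listed above (~4.9 GB).
    URL = "http://skuld.cs.umass.edu/traces/mmsys/2014/user06.tar"
    ARCHIVE = "user06.tar"

    # Download the archive to the current directory.
    urllib.request.urlretrieve(URL, ARCHIVE)

    # Extract the videos into a local folder (name chosen for illustration).
    with tarfile.open(ARCHIVE) as tar:
        tar.extractall(path="YawDD")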
References and Citation
Use of the datasets in published work should be acknowledged by a full citation to the authors' paper [AOS14] at the MMSys conference (Proceedings of ACM MMSys 2014, March 19-21, 2014, Singapore).
References
[AOS14] S. Abtahi, M. Omidyeganeh, S. Shirmohammadi, and B. Hariri, "YawDD: A Yawning Detection Dataset," Proceedings of ACM MMSys 2014, March 19-21, 2014, Singapore.