Multi-Lens Stereoscopic Synthetic Video Dataset

Author: Portland State University

Partner: No

Contact: Fan Zhang




Total: 20 models


This dataset paper describes a synthetically generated, multi-lens stereoscopic video dataset and its associated 3D models. Creating a multi-lens video stream requires that the lenses be spaced less than one inch apart. While such camera rigs can be built from off-the-shelf parts, they lack professional features such as zoom-lens control and synchronization between cameras. Dedicated multi-lens devices exist but do not offer sufficient per-image resolution. This dataset provides 20 synthetic models, each with an associated multi-lens walkthrough and the uncompressed video from its generation. The dataset can be used for research in multi-view compression, multi-view streaming, view interpolation, and other computer graphics topics.


The files are available for download via HTTP. The original download link is broken.

References and Citation

Use of the dataset in published work should be acknowledged with a full citation to the authors' paper [ZFF15], presented at ACM MMSys '15 (Portland, Oregon, March 18-20, 2015).


  • ZFF15: Fan Zhang, Wu-chi Feng, and Feng Liu. A Multi-Lens Stereoscopic Synthetic Video Dataset. In Proceedings of ACM MMSys '15, Portland, Oregon, March 18-20, 2015.