We describe four fundamental challenges that complex, real-life Virtual Reality (VR) productions face today (multi-camera management, quality control, automatic annotation with cinematography, and 360˚ depth estimation) and present an integrated solution, called Hyper 360, that addresses them. We demonstrate and evaluate our solution in the context of practical productions and report the corresponding results.
References
[1] Sheikh, A., Brown, A., Evans, M. and Watson, Z. (2016) Directing Attention in 360-Degree Video. Proceedings of IBC 2016 Conference, Amsterdam, 8-12 September 2016, 1-9. https://doi.org/10.1049/ibc.2016.0029
[2] Tang, A. and Fakourfar, O. (2017) Watching 360˚ Videos Together. In: Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, ACM, New York, 4501-4506. https://doi.org/10.1145/3025453.3025519
[3] Fonseca, D. and Kraus, M. (2016) A Comparison of Head-Mounted and Hand-Held Displays for 360° Videos with Focus on Attitude and Behaviour Change. In: Proceedings of the 20th International Academic Mindtrek Conference, ACM, New York, 287-296. https://doi.org/10.1145/2994310.2994334
[4] Warren, M. (2017) Making Your First 360 Video? Here Are 10 Important Things to Keep in Mind. https://www.filmindependent.org/blog/making-first-360-video-10-important-things-keep-mind/
[5] Elmezeny, A., Edenhofer, N. and Wimmer, J. (2018) Immersive Storytelling in 360-Degree Videos: An Analysis of Interplay between Narrative and Technical Immersion. Journal for Virtual Worlds Research, 11, No. 1. https://doi.org/10.4101/jvwr.v11i1.7298
[6] Malik, R. and Bajcsy, P. (2008) Automated Placement of Multiple Stereo Cameras. The 8th Workshop on Omnidirectional Vision, Camera Networks and Non-Classical Cameras (OMNIVIS), Marseille, October 2008.
[7] Chikkerur, S., Sundaram, V., Reisslein, M. and Karam, L.J. (2011) Objective Video Quality Assessment Methods: A Classification, Review, and Performance Comparison. IEEE Transactions on Broadcasting, 57, 165-182. https://doi.org/10.1109/TBC.2011.2104671
[8] Lai, W.S., Huang, Y., Joshi, N., Buehler, C., Yang, M.H. and Kang, S.B. (2018) Semantic-Driven Generation of Hyperlapse from 360 Degree Video. IEEE Transactions on Visualization and Computer Graphics, 24, 2610-2621. https://doi.org/10.1109/TVCG.2017.2750671
[9] Song, S., Zeng, A., Chang, A.X., Savva, M., Savarese, S. and Funkhouser, T. (2018) Im2Pano3D: Extrapolating 360˚ Structure and Semantics beyond the Field of View. 2018 IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, 18-23 June 2018, 3847-3856. https://doi.org/10.1109/CVPR.2018.00405
[10] Hyper 360 Project (2019) http://www.Hyper360.eu/
[11] Adobe (2019) Creative Cloud Premiere Pro & After Effects. https://www.adobe.com/creativecloud/video/virtual-reality.html
Andersson Technologies (2019) SynthEyes 3D Camera Tracking and Stabilization Software. https://www.ssontech.com/synovu.html
[15] Kopf, J. (2016) 360˚ Video Stabilization. ACM Transactions on Graphics, 35, Article No. 195. https://doi.org/10.1145/2980179.2982405
[16] Matos, T., Nóbrega, R., Rodrigues, R. and Pinheiro, M. (2018) Dynamic Annotations on an Interactive Web-Based 360° Video Player. In: Proceedings of the 23rd International ACM Conference on 3D Web Technology, ACM, New York, Article 22. https://doi.org/10.1145/3208806.3208818
Su, Y.-C., Jayaraman, D. and Grauman, K. (2017) Pano2Vid: Automatic Cinematography for Watching 360˚ Videos. In: Bares, W., Gandhi, V., Galvane, Q. and Ronfard, R., Eds., Eurographics Workshop on Intelligent Cinematography and Editing, The Eurographics Association, Lyon, France.
[27] Su, Y.-C. and Grauman, K. (2017) Making 360˚ Video Watchable in 2D: Learning Videography for Click Free Viewing. 2017 IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, 21-26 July 2017, 1368-1376.
[28] Hu, H.-N., Lin, Y.-C., Liu, M.-Y., Cheng, H.-T., Chang, Y.-J. and Sun, M. (2017) Deep 360 Pilot: Learning a Deep Agent for Piloting through 360˚ Sports Videos. 2017 IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, 21-26 July 2017, 1396-1405.
[29] Truong, A., Chen, S., Yumer, E., Li, W. and Salesin, D. (2018) Extracting Regular FOV Shots from 360 Event Footage. In: Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, ACM, New York, 316. https://doi.org/10.1145/3173574.3173890
[30] Huang, J., Chen, Z., Ceylan, D. and Jin, H. (2017) 6-DOF VR Videos with a Single 360-Camera. 2017 IEEE Virtual Reality, Los Angeles, CA, 18-22 March 2017, 37-44. https://doi.org/10.1109/VR.2017.7892229
[31] Wang, F.-E., Hu, H.-N., Cheng, H.-T., Lin, J.-T., Yang, S.-T., Shih, M.-L., Chu, H.-K. and Sun, M. (2018) Self-Supervised Learning of Depth and Camera Motion from 360˚ Videos. CoRR, abs/1811.05304.
[32] Thatte, J., Boin, J.B., Lakshman, H. and Girod, B. (2016) Depth Augmented Stereo Panorama for Cinematic Virtual Reality with Head-Motion Parallax. 2016 IEEE International Conference on Multimedia and Expo, Seattle, WA, 11-15 July 2016, 1-6. https://doi.org/10.1109/ICME.2016.7552858
[33] Zioulis, N., Karakottas, A., Zarpalas, D. and Daras, P. (2018) OmniDepth: Dense Depth Estimation for Indoors Spherical Panoramas. In: Ferrari, V., Hebert, M., Sminchisescu, C. and Weiss, Y., Eds., Computer Vision—ECCV 2018. Lecture Notes in Computer Science, Springer, Cham, 448-465. https://doi.org/10.1007/978-3-030-01231-1_28
[34] YouTube (2017) Hot and Cold: Heatmaps in VR. https://youtube-creators.googleblog.com/2017/06/hot-and-cold-heatmaps-in-vr.html
[35] Tsatsou, D., Dasiopoulou, S., Kompatsiaris, I. and Mezaris, V. (2014) LiFR: A Lightweight Fuzzy DL Reasoner. In: Presutti, V., Blomqvist, E., Troncy, R., Sack, H., Papadakis, I. and Tordai, A., Eds., The Semantic Web: ESWC 2014 Satellite Events. Lecture Notes in Computer Science, Springer, Cham, 263-267. https://doi.org/10.1007/978-3-319-11955-7_32
[36] Redmon, J. and Farhadi, A. (2018) YOLOv3: An Incremental Improvement. arXiv: 1804.02767. http://arxiv.org/abs/1804.02767
[37] Werlberger, M., Trobin, W., Pock, T., Wedel, A., Cremers, D. and Bischof, H. (2009) Anisotropic Huber-L1 Optical Flow. In: Cavallaro, A., Prince, S. and Alexander, D., Eds., Proceedings of the British Machine Vision Conference, BMVA Press, London, 108.1-108.11. https://doi.org/10.5244/C.23.108
[38] Karakottas, A., Zioulis, N., Zarpalas, D. and Daras, P. (2018) 360D: A Dataset and Baseline for Dense Depth Estimation from 360 Images. 1st Workshop on 360° Perception and Interaction, European Conference on Computer Vision (ECCV), Munich, 8-14 September 2018, 1-4.
[39] Handa, A., Pătrăucean, V., Stent, S. and Cipolla, R. (2016) SceneNet: An Annotated Model Generator for Indoor Scene Understanding. 2016 IEEE International Conference on Robotics and Automation, Stockholm, 16-21 May 2016, 5737-5743. https://doi.org/10.1109/ICRA.2016.7487797
[40] Armeni, I., Sener, O., Zamir, A.R., Jiang, H., Brilakis, I., Fischer, M. and Savarese, S. (2016) 3D Semantic Parsing of Large-Scale Indoor Spaces. 2016 IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, 27-30 June 2016, 1534-1543. https://doi.org/10.1109/CVPR.2016.170
[41] Chang, A., Dai, A., Funkhouser, T., Halber, M., Niessner, M., Savva, M., Song, S., Zeng, A. and Zhang, Y. (2017) Matterport3D: Learning from RGB-D Data in Indoor Environments. 2017 International Conference on 3D Vision, Qingdao, 10-12 October 2017, 667-676. https://doi.org/10.1109/3DV.2017.00081
[42] Eigen, D., Puhrsch, C. and Fergus, R. (2014) Depth Map Prediction from a Single Image Using a Multi-Scale Deep Network. In: Ghahramani, Z., Welling, M., Cortes, C., Lawrence, N.D. and Weinberger, K.Q., Eds., Proceedings of the 27th International Conference on Neural Information Processing Systems, MIT Press, Cambridge, MA, 2366-2374.
[43] Song, S., Yu, F., Zeng, A., Chang, A.X., Savva, M. and Funkhouser, T. (2017) Semantic Scene Completion from a Single Depth Image. 2017 IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, 21-26 July 2017, 1746-1754. https://doi.org/10.1109/CVPR.2017.28