Holographic display is considered a promising three-dimensional (3D) display technology and has been widely studied. However, real-time holographic display of real scenes remains far from everyday use: the speed and quality of information extraction and hologram computation still need improvement. In this paper, we propose an end-to-end real-time holographic display based on real-time capture of real scenes, in which parallax images are collected from the scene and a convolutional neural network (CNN) builds the mapping from the parallax images to the hologram. The parallax images are acquired in real time by a binocular camera and contain the depth and amplitude information needed for 3D hologram calculation. The CNN, which transforms parallax images into 3D holograms, is trained on datasets consisting of parallax images and high-quality 3D holograms. Static colorful reconstruction and speckle-free real-time holographic display based on real-time capture of real scenes are verified by optical experiments. With its simple system composition and affordable hardware requirements, the proposed technique breaks the dilemma of existing real-scene holographic display and opens a new direction for real-scene holographic 3D display applications, such as holographic live video and resolving the vergence-accommodation conflict (VAC) in head-mounted display devices.
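To make the parallax-image-to-hologram mapping concrete, the following is a minimal sketch, not the authors' network, of a CNN that takes a binocular (left/right) image pair stacked on the channel axis and predicts a phase-only hologram. The 6-channel RGB-stereo input, the single-channel phase output, and all layer sizes are illustrative assumptions; the paper's actual architecture, loss, and training data are not specified here.

```python
# Minimal sketch (assumptions noted above): stereo parallax images -> phase hologram.
import torch
import torch.nn as nn

class Parallax2Hologram(nn.Module):
    def __init__(self, in_ch: int = 6, out_ch: int = 1):
        super().__init__()
        # A small fully convolutional stack; a real network would be deeper
        # (e.g., U-Net style) and trained against high-quality 3D holograms.
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, out_ch, kernel_size=3, padding=1),
        )

    def forward(self, stereo_pair: torch.Tensor) -> torch.Tensor:
        # stereo_pair: (N, 6, H, W) = left and right RGB views stacked on channels.
        # Output: a phase map in (-pi, pi] that could be quantized for an SLM.
        return torch.pi * torch.tanh(self.net(stereo_pair))

# Example: one 512x512 stereo frame -> one hologram phase map of shape (1, 1, 512, 512).
model = Parallax2Hologram()
phase = model(torch.rand(1, 6, 512, 512))
```

In this sketch the stereo pair stands in for explicit depth estimation: because the left and right views encode disparity, a trained network can in principle recover the depth and amplitude cues needed for hologram synthesis directly from the pair, which is the end-to-end idea the abstract describes.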