Due to hardware restrictions, it is costly to capture densely-sampled Light Fields (LFs) with high angular and spatial resolution, which has become the main bottleneck of LF development. In this paper, we propose a learning-based framework to synthesize novel views and reconstruct densely-sampled LFs from sparsely-sampled ones. In the proposed framework, micro-lens image stacks and view image stacks are grouped separately, so that details in novel views are explored from both the spatial and angular domains. Both kinds of stacks contain epipolar information, and 3D convolution layers are employed to effectively extract features that encode structure information. Moreover, an innovative way is proposed to synthesize views by upsampling micro-lens image stacks using deconvolution layers. The parameters of the deconvolution layers provide view position information, so that different interpolation and extrapolation tasks can be modeled explicitly. We validate that this view synthesis module can be embedded in different frameworks and improve their performance. Since it requires neither precise depth estimation nor view warping, the proposed method is mainly designed for reconstructing LFs with small baselines. Experimental results show that the proposed model outperforms other state-of-the-art methods in terms of both visual and numerical evaluations. Furthermore, the consistency between synthesized views and the intrinsic structure information is well preserved by the proposed method.
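The abstract's key mechanism is synthesizing views by upsampling image stacks with deconvolution (transposed-convolution) layers, whose parameters implicitly encode view position. As a minimal sketch of this idea (not the paper's actual network: the 1-D angular axis, kernel values, and stride here are hypothetical and chosen only to show how a transposed convolution interpolates new angular samples between existing views):

```python
import numpy as np

def transposed_conv1d(x, kernel, stride):
    """Transposed convolution along one (angular) axis.

    Each input sample scatters a scaled copy of the kernel into the
    output; overlapping contributions sum, producing upsampled output
    of length (len(x) - 1) * stride + len(kernel).
    """
    out = np.zeros((len(x) - 1) * stride + len(kernel))
    for i, v in enumerate(x):
        out[i * stride : i * stride + len(kernel)] += v * kernel
    return out

# Two known views along the angular axis (toy pixel intensities).
views = np.array([1.0, 3.0])

# A hypothetical learned kernel; with these weights and stride=2 the
# layer behaves like linear angular interpolation.
kernel = np.array([0.5, 1.0, 0.5])

up = transposed_conv1d(views, kernel, stride=2)
# up = [0.5, 1.0, 2.0, 3.0, 1.5]; the central entry 2.0 is a novel
# view interpolated midway between the two input views.
```

In a trained network the kernel weights are learned rather than fixed, so the same layer type can realize view-dependent interpolation or extrapolation, which is what lets the deconvolution parameters carry view position information.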