Deep learning-based image signal processor (ISP) models for mobile cameras can generate high-quality images that rival those of professional DSLR cameras. However, their computational demands often make them unsuitable for mobile settings. Additionally, modern mobile cameras employ non-Bayer color filter arrays (CFAs) such as Quad Bayer, Nona Bayer, and $\text{Q}\times\text{Q}$ Bayer to enhance image quality, yet most existing deep learning-based ISP (or demosaicing) models focus primarily on standard Bayer CFAs. In this study, we present PyNET-$\text{Q}\times\text{Q}$, a lightweight demosaicing model specifically designed for $\text{Q}\times\text{Q}$ Bayer CFA patterns, derived from the original PyNET. We also propose a knowledge distillation method called progressive distillation to train the reduced network more effectively. Consequently, PyNET-$\text{Q}\times\text{Q}$ contains less than 2.5% of the parameters of the original PyNET while preserving its performance. Experiments on $\text{Q}\times\text{Q}$ images captured by a prototype $\text{Q}\times\text{Q}$ camera sensor show that PyNET-$\text{Q}\times\text{Q}$ outperforms conventional algorithms in texture and edge reconstruction, despite its significantly reduced parameter count. Code and partial datasets can be found at https://github.com/Minhyeok01/PyNET-QxQ.
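The abstract does not detail the progressive distillation procedure, but knowledge distillation for an image-to-image model like this typically blends a supervised reconstruction loss with a loss that pulls the student's output toward the teacher's. The sketch below illustrates that general idea only; the function name, the MSE choice, and the `alpha` weighting are assumptions, not the paper's actual formulation:

```python
import numpy as np

def distillation_loss(student_out, teacher_out, target, alpha=0.5):
    """Hypothetical output-level distillation loss for an image-to-image model.

    Blends a supervised term (student vs. ground truth) with a
    teacher-matching term (student vs. teacher output), weighted by alpha.
    """
    supervised = np.mean((student_out - target) ** 2)      # student vs. ground truth
    distill = np.mean((student_out - teacher_out) ** 2)    # student vs. teacher
    return (1.0 - alpha) * supervised + alpha * distill

# Toy example with flat "images": student matches the teacher exactly,
# so only the supervised term contributes.
student = np.ones((2, 2))
teacher = np.ones((2, 2))
target = np.zeros((2, 2))
loss = distillation_loss(student, teacher, target, alpha=0.5)  # → 0.5
```

In a progressive scheme, one might anneal `alpha` or distill intermediate feature maps stage by stage as training proceeds, but those specifics are not given in this abstract.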