
Image Super-Resolution via Self-Similarity Learning and Conformal Sparse Representation



Super-resolution reconstruction based on sparse representation is a well-established and valuable line of research. However, the sparse coefficients that classic methods compute for low-resolution (LR) patches are not faithful to the corresponding high-resolution (HR) patches, because they ignore image structure information. We therefore propose a self-similarity learning method, built on sparse representation, that solves for sparse coefficients more faithful to the HR patches. First, a Gaussian mixture model guides the grouping of patches with similar internal structure. Within each group, neighboring patches and their corresponding sparse coefficients preserve local geometric angles in the embedding space (a conformal constraint). Furthermore, the sparse coefficient matrix of similar patches is constrained to be low rank, capturing the global structure of the data. The resulting coefficients are better suited to reconstructing HR patches, so the proposed method yields more accurate sparse coefficients and improves both visual quality and algorithm stability.
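The pipeline the abstract outlines (group structurally self-similar patches, then exploit the low-rank structure of each group) can be sketched in a few lines of NumPy. This is an illustrative simplification, not the paper's algorithm: plain k-means stands in for the Gaussian mixture grouping, singular-value truncation stands in for the low-rank regularization of the coefficient matrix, and all function names and parameters here are hypothetical.

```python
import numpy as np

def extract_patches(img, size=6, stride=3):
    """Slide a size x size window over the image and flatten each patch."""
    H, W = img.shape
    patches = []
    for i in range(0, H - size + 1, stride):
        for j in range(0, W - size + 1, stride):
            patches.append(img[i:i + size, j:j + size].ravel())
    return np.array(patches)

def group_patches(patches, k=4, iters=10, seed=0):
    """Group structurally similar patches. The paper uses a Gaussian
    mixture model for this step; plain k-means is a lightweight stand-in."""
    rng = np.random.default_rng(seed)
    centers = patches[rng.choice(len(patches), k, replace=False)]
    for _ in range(iters):
        dists = ((patches[:, None, :] - centers[None]) ** 2).sum(-1)
        labels = dists.argmin(1)
        for c in range(k):
            if (labels == c).any():
                centers[c] = patches[labels == c].mean(0)
    return labels

def low_rank_project(group, rank=2):
    """Keep only the leading singular components of a group of similar
    patches -- the low-rank property the method exploits to capture the
    group's shared (global) structure."""
    U, s, Vt = np.linalg.svd(group, full_matrices=False)
    s[rank:] = 0
    return U @ np.diag(s) @ Vt

# Toy image with repeating structure, so self-similar patches exist.
img = np.tile(np.linspace(0, 1, 8), (24, 3))   # 24 x 24 striped image
P = extract_patches(img)                       # (49, 36) patch matrix
labels = group_patches(P)
g0 = P[labels == labels[0]]                    # one group of similar patches
approx = low_rank_project(g0, rank=2)          # low-rank approximation
```

In the actual method the low-rank constraint is imposed on the sparse coefficient matrix of each group rather than on the raw patches, and the conformal (angle-preserving) constraint links each group's neighbor patches to their coefficients in the embedding space; the sketch only shows why grouped self-similar patches admit a low-rank description at all.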

Keywords: resolution; self-similarity; sparse representation; super-resolution; sparse coefficients

Journal Title: IEEE Access
Year Published: 2018


