Fusion of Handcrafted and Deep Features for Forgery Detection in Digital Images

Content authentication of digital images has captured the attention of forensic experts and security researchers due to a multi-fold increase in the dissemination of multimedia data through the open and vulnerable Internet. Shrewd attackers continue to devise novel ways to challenge the state-of-the-art forensic tools used for forgery detection in digital images. Feature engineering approaches have yielded up to 97% accuracy on benchmark datasets. Deep learning approaches have shown promising results in various image classification problems, but finding hidden patterns in digital images that reliably reveal forgeries remains a challenge; their state-of-the-art accuracy for forgery detection is up to 98% on benchmark datasets. The objective of the proposed approach is to push the detection accuracy further, toward 100%. In this paper, a fusion of handcrafted features based on color characteristics and deep features derived from the image's luminance channel is employed to mine patterns responsible for accurate forgery detection. In Stream I, 648-D Markov-based features are computed from the quaternion discrete cosine transform of the image. In Stream II, the luminance channel of the YCbCr colorspace is used to extract a Local Binary Pattern (LBP) map of the image. The LBP feature maps are then fed to a pre-trained ResNet-18 model to obtain a 512-D feature vector, called 'ResFeats', from the last layer of the model's convolutional base. The handcrafted features from Stream I and the ResFeats from Stream II are combined to form an 1160-D feature vector. Classification is then performed using a shallow neural network, and the method is evaluated on the CASIA v1 and CASIA v2 datasets. The proposed fusion-based approach achieves an accuracy of 99.3% on these benchmark datasets.
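
For illustration only (the abstract describes the pipeline but no code is published here), the sketch below outlines Stream II and the fusion/classification step in Python using scikit-image, PyTorch, and torchvision. Several details are assumptions not specified in the abstract: the LBP parameters (P=8, R=1, 'uniform'), the normalization and 224x224 resizing of the LBP map, and the hidden-layer width of the shallow classifier. The 648-D quaternion-DCT Markov descriptor of Stream I is left as a hypothetical placeholder function.

# Illustrative sketch only -- not the authors' implementation.
import numpy as np
import torch
import torch.nn as nn
import torchvision.models as models
from skimage.color import rgb2ycbcr
from skimage.feature import local_binary_pattern


def resfeats_from_luminance(rgb_image: np.ndarray, conv_base: nn.Module) -> np.ndarray:
    """Stream II: LBP map of the Y (luminance) channel -> truncated ResNet-18 -> 512-D ResFeats.

    rgb_image: H x W x 3 uint8 array.
    """
    y = rgb2ycbcr(rgb_image)[..., 0]                              # luminance channel of YCbCr
    lbp = local_binary_pattern(y, 8, 1, method="uniform")         # P=8, R=1 are assumed values
    lbp = (lbp - lbp.min()) / (lbp.max() - lbp.min() + 1e-8)      # scale map to [0, 1]
    x = torch.from_numpy(lbp).float()[None, None].repeat(1, 3, 1, 1)  # 1 x 3 x H x W
    x = nn.functional.interpolate(x, size=(224, 224), mode="bilinear", align_corners=False)
    with torch.no_grad():
        feats = conv_base(x)                                      # 1 x 512 x 1 x 1 after global pooling
    return feats.flatten().numpy()                                # 512-D ResFeats


class ShallowClassifier(nn.Module):
    """Shallow network on the fused 1160-D vector; the hidden width is an assumption."""

    def __init__(self, in_dim: int = 1160, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(), nn.Linear(hidden, 2))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)                                        # logits: authentic vs. forged


# Pre-trained ResNet-18 with the final fully connected layer removed (convolutional base + pooling).
resnet = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
conv_base = nn.Sequential(*list(resnet.children())[:-1])

# markov_qdct_features() is a hypothetical placeholder for the 648-D Stream I descriptor
# (Markov features of the quaternion DCT); it is not implemented here.
# fused = np.concatenate([markov_qdct_features(img), resfeats_from_luminance(img, conv_base)])
# logits = ShallowClassifier()(torch.from_numpy(fused).float()[None])

The concatenation of the 648-D Stream I descriptor with the 512-D ResFeats yields the 1160-D fused vector described in the abstract; the shallow classifier architecture shown above is only one plausible choice.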

Keywords: digital images; forgery detection; deep features; handcrafted features

Journal Title: IEEE Access
Year Published: 2021
