The key objective of producing artificial digital data is to mimic real data as closely as possible. However, misuse by malicious actors threatens the legitimacy of such digital content in society. Deepfake techniques, which use computer vision and graphics to replace one person's face with another's, have caused a great deal of anxiety. They make it simple to conceal someone's genuine identity, which highlights the need for a way to verify the authenticity of face photos and videos. To address this challenge, we have created a deep-learning model enriched with a visual attention strategy to distinguish real images and videos from those altered by deepfake techniques. We first extract the facial region from each video frame and pass it through a pretrained ResNeXt-50 CNN to create feature maps. We then apply the visual attention mechanism to focus on artifacts unique to deepfake video manipulation. Our model outperformed the others when evaluated under cross-dataset settings, using FaceForensics++ (C23) for training and Celeb-DF v2 / DFDC for independent testing.