The use of face masks has become a widespread non-pharmaceutical practice to mitigate the transmission of COVID-19. However, achieving accurate facial detection when people wear masks or other face occlusions remains a major challenge. This paper introduces a model for detecting occluded or masked faces based on fused convolutional graphs. The model comprises a deep neural architecture with two spatial graphs built over a set of key facial features. First, a distance graph captures the spatial proximity between facial nodes that represent key face parts. Second, a correlation graph computes the correlation between each pair of nodes representing different augmented facial modalities. Transfer learning is then performed with a pretrained deep architecture as the backbone to map abstract semantic information into multiple feature filters. Discriminant graph convolutions are subsequently constructed from the fusion of the distance and correlation graphs. The model is evaluated on two facial detection tasks: binary detection of masked versus unmasked faces, and multi-category detection of masked, unmasked, and occluded-but-unmasked faces. Experimental results on two real-world benchmark datasets show that the proposed deep learning model is highly effective, achieving 98% accuracy in binary detection. Even with high variance in image occlusions, the proposed model shows great promise in detecting and distinguishing between types of facial occlusion, reaching 86% accuracy in multi-category detection.
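To make the fused-graph idea more concrete, the following is a minimal PyTorch sketch of how a distance graph and a correlation graph over facial key-point nodes could be built, fused, and passed through a graph convolution. The Gaussian distance kernel, the Pearson-correlation adjacency, the weighted-sum fusion with weight `alpha`, and all tensor shapes (68 landmarks, 128-d backbone features, 3 output classes) are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch of a fused distance/correlation graph convolution.
# All names, shapes, and the fusion rule are assumptions for illustration.
import torch
import torch.nn as nn


def distance_adjacency(coords: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    """Distance graph: spatial affinity between facial key-point nodes.

    coords: (N, 2) landmark coordinates. Closer key points receive larger
    edge weights via a Gaussian kernel (assumed choice of kernel).
    """
    d = torch.cdist(coords, coords)              # pairwise Euclidean distances
    return torch.exp(-d.pow(2) / (2 * sigma ** 2))


def correlation_adjacency(feats: torch.Tensor) -> torch.Tensor:
    """Correlation graph: pairwise correlation between node feature vectors.

    feats: (N, F) features of the augmented facial modalities. Returns an
    (N, N) adjacency of absolute Pearson correlations (assumed).
    """
    return torch.corrcoef(feats).abs()


def normalize(adj: torch.Tensor) -> torch.Tensor:
    """Symmetric normalization D^-1/2 (A + I) D^-1/2, as in standard GCNs."""
    a = adj + torch.eye(adj.size(0))
    d_inv_sqrt = a.sum(dim=1).clamp(min=1e-8).pow(-0.5)
    return d_inv_sqrt.unsqueeze(1) * a * d_inv_sqrt.unsqueeze(0)


class FusedGraphConv(nn.Module):
    """One graph convolution over the fusion of the two graphs."""

    def __init__(self, in_dim: int, out_dim: int, alpha: float = 0.5):
        super().__init__()
        self.alpha = alpha                        # fusion weight (assumed hyperparameter)
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj_dist, adj_corr):
        adj = normalize(self.alpha * adj_dist + (1 - self.alpha) * adj_corr)
        return torch.relu(adj @ self.linear(x))


# Usage sketch: 68 facial landmarks with 128-d per-node features (e.g. taken
# from a pretrained CNN backbone), classified into 3 categories
# (masked / unmasked / occluded without a mask).
coords = torch.rand(68, 2)
feats = torch.rand(68, 128)
layer = FusedGraphConv(128, 64)
head = nn.Linear(64, 3)
h = layer(feats, distance_adjacency(coords), correlation_adjacency(feats))
logits = head(h.mean(dim=0))                      # graph-level pooling, then classification
```

In this sketch the two adjacency matrices are combined by a simple convex combination before normalization; the paper's discriminant fusion of the distance and correlation graphs may use a different, learned combination.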
               