ABSTRACT Although observers typically exhibit considerable expertise at familiar face recognition, unfamiliar faces are recognized and discriminated from one another very poorly. In particular, recent results demonstrate that a key difficulty observers face when attempting to sort unfamiliar faces by identity is how to “tell faces together”: different images of the same individual are frequently assigned different labels, especially when natural viewing conditions introduce substantial appearance variation. By comparison, observers are generally well able to “tell faces apart”: images of different individuals are rarely assigned the same label. Accurate face recognition depends on avoiding both kinds of error (failing to tell people apart and failing to “tell them together”), and unfamiliar face recognition suffers in particular from a reduced ability to avoid the latter. Here, we used an unconstrained identity-sorting task to examine how intra- and extra-personal variability are recognized in unfamiliar faces, bodies, and images depicting both the face and the body. Specifically, we investigated whether observers make quantitatively similar errors when attempting to sort faces and bodies by identity, and whether these errors are substantially attenuated when face and body appearance can be integrated to achieve person recognition in natural scenes. We address these issues by introducing a novel method for quantifying sorting errors of both types, which facilitates direct comparison within and across tasks and makes it possible to quantify sorting performance within a signal detection framework.
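To illustrate how sorting errors of both types can be cast in a signal detection framework, here is a minimal sketch (not the authors' actual method; the function name and the log-linear rate correction are assumptions). It scores a sorting against ground-truth identities by treating every same-identity pair grouped together as a hit and every different-identity pair grouped together as a false alarm, then computes d′:

```python
# Hypothetical sketch of signal-detection scoring for an identity-sorting task.
# A miss = a same-identity pair split across piles (failing to "tell together");
# a false alarm = a different-identity pair in one pile (failing to "tell apart").
from itertools import combinations
from statistics import NormalDist

def sort_dprime(piles, truth):
    """piles: list of sets of image ids (the observer's sorting).
    truth: dict mapping image id -> true identity label."""
    pile_of = {img: i for i, pile in enumerate(piles) for img in pile}
    same_hits = same_total = diff_fas = diff_total = 0
    for a, b in combinations(truth, 2):
        together = pile_of[a] == pile_of[b]
        if truth[a] == truth[b]:
            same_total += 1
            same_hits += together   # correctly told together
        else:
            diff_total += 1
            diff_fas += together    # failed to tell apart
    # Log-linear correction avoids infinite z-scores at rates of 0 or 1.
    hit_rate = (same_hits + 0.5) / (same_total + 1)
    fa_rate = (diff_fas + 0.5) / (diff_total + 1)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)
```

Because both error types are expressed as pairwise rates, the same score can be compared directly across face, body, and whole-person sorting conditions.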