As spatial languages, sign languages rely on spatial cognitive processes that are not involved in spoken languages. Interlocutors have different visual perspectives of the signer's hands, requiring a mental transformation for successful communication about spatial scenes. It is unknown whether visual-spatial perspective-taking (VSPT) or mental rotation (MR) abilities support signers' comprehension of perspective-dependent American Sign Language (ASL) structures. A total of 33 deaf adult ASL signers completed tasks examining nonlinguistic VSPT ability, MR ability, general ASL proficiency (the ASL Sentence Reproduction Task [ASL-SRT]), and an ASL comprehension test involving perspective-dependent classifier constructions (the ASL Spatial Perspective Comprehension Test [ASPCT]). Scores on the linguistic (ASPCT) and VSPT tasks positively correlated with each other, and both correlated with MR ability; however, VSPT ability predicted linguistic perspective-taking better than MR ability did. ASL-SRT scores correlated with ASPCT accuracy (as both require ASL proficiency) but not with VSPT scores. Therefore, the ability to comprehend perspective-dependent ASL classifier constructions relates to ASL proficiency and to nonlinguistic VSPT and MR abilities.