Knowledge of phonotactics is commonly assumed to derive from the lexicon. However, computational studies have suggested that phonotactic constraints might arise before the lexicon is in place, in particular from co-occurrences in continuous speech. The current study presents two artificial language learning experiments aimed at testing whether phonotactic learning can take place in the absence of words. Dutch participants were presented with novel consonant constraints embedded in continuous artificial languages. Vowels occurred at random, so no recurring word forms were present in the speech stream. In Experiment 1, participants with different training languages showed significantly different preferences on a set of novel test items; however, only one of the two languages resulted in above-chance preferences. In Experiment 2, participants were exposed to a control language without novel statistical cues and did not develop a preference for either phonotactic structure in the test items. An analysis of Dutch phonotactics indicated that the failure to induce novel phonotactics in one condition might have been due to interference from the native language. Our findings suggest that novel phonotactics can be learned from continuous speech, but that learners have difficulty acquiring novel patterns that go against their native language.
               