Even from brief exposure to an artificial spoken language, infants and adults can identify candidate word forms as well as adjacent and non-adjacent dependencies present in the acoustic input.
Among the influential theories resulting from this research is the proposal that humans compute conditional probabilities of adjacent and non-adjacent elements to discover the underlying structure. However, several findings in the literature, including the results presented here, are inconsistent with such accounts. We propose that the perception of rhythm played an important role in prior studies and, more generally, that rhythm perception is critical in determining which dependencies listeners can identify. We developed a novel experimental paradigm that introduces a rhythm into the speech stream and allows us to manipulate that rhythm systematically. We show that even the learning of adjacent dependencies with conditional probabilities of 1.0 depends on rhythm perception. We also developed a computational model that explains word segmentation in terms of rhythm perception, and we present simulations showing that the model captures patterns of human data from multiple experiments. We argue that rhythm perception has explanatory power not only for our experiments and other dependency-learning studies, but is also likely to be a fundamental mechanism, ubiquitous in language and other auditory domains.
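
To make the statistic at issue concrete, below is a minimal sketch in Python (an illustration of the standard computation in this literature, not the authors' model) of adjacent conditional probabilities, often called transitional probabilities: the frequency of a syllable pair divided by the frequency of its first syllable. The syllable stream and word forms are hypothetical examples of the kind used in artificial-language studies.

```python
from collections import Counter

def transitional_probabilities(syllables):
    """Adjacent conditional probabilities over a syllable stream:
    TP(x -> y) = count(x, y) / count(x as the first member of a pair)."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {(x, y): n / first_counts[x] for (x, y), n in pair_counts.items()}

# Hypothetical stream: three trisyllabic "words" (tupiro, golabu, bidaku)
# concatenated without pauses, as in typical familiarization streams.
stream = "tu pi ro go la bu bi da ku go la bu tu pi ro bi da ku".split()
tps = transitional_probabilities(stream)
print(tps[("tu", "pi")])  # within-word pair -> 1.0
print(tps[("ro", "go")])  # word-boundary pair -> 0.5 in this short stream
```

Within-word pairs come out at 1.0 while word-boundary pairs come out lower, which is the contrast that conditional-probability accounts rely on for segmentation; the experiments summarized above ask whether this statistic alone is sufficient.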