
“Substantive Bias in Artificial Phonology Learning” published in Lang. Linguist. Compass

We are pleased to announce the publication of a review article by Ivy and Youngah in Language and Linguistics Compass. The article, titled “Substantive Bias in Artificial Phonology Learning,” provides a comprehensive review of research on substantive bias in phonological learning since Moreton and Pater's influential 2012 paper.

The review categorizes studies into vowel, consonant, and suprasegmental patterns, highlighting advances in experimental paradigms, the definition of phonetic naturalness, and the range of phonological phenomena explored. It emphasizes how subtle methodological choices in experimental design can affect whether substantive bias effects are observed.

Key findings from the review include:

  • Vowel Patterns: Studies on vowel harmony have developed increasingly sophisticated paradigms, highlighting the role of naturalness in learning. The review shows how different training parameters (e.g., variable input, iterative learning) influence the strength of substantive bias effects.
  • Consonant Patterns: Research in this domain has explored phenomena such as nasalization, voicing, and saltatory alternations; these studies underscore the importance of phonetic precursor strength and of articulatory and perceptual factors when assessing substantive bias effects.
  • Suprasegmental Patterns: Studies on tone and stress patterns have consistently shown a positive effect of substantive bias, unlike segmental patterns. The review suggests that this difference may be related to the learnability of the phonological patterns involved.

Based on their review, Ivy and Youngah suggest that future research should include:

  1. An examination of the articulatory and perceptual foundations of each phonological pattern
  2. An analysis of the similarities in features, articulation, and perception

The paper not only summarizes current findings but also provides important guidance for future research in phonological learning, particularly in the area of substantive bias.

Zheng, S., & Do, Y. (2025). Substantive Bias in Artificial Phonology Learning. Language and Linguistics Compass, 19(1), e70005.


Presenting at LabPhon19 @ Seoul

Our lab members presented at the LabPhon19 conference, held from June 27th to 29th, 2024, in Seoul, South Korea. The conference theme was “Where speech sounds meet the architecture of the grammar and beyond.” We presented findings from three distinct areas of investigation: phonetic substance in language learning, tonal representation in Hakka dialects, and the influence of naturalness bias on phonological variation.

Poster Presentations

  • Phonetic substance in alternation learning: This study by Ivy and Youngah investigated how learners acquire grammatical and sound patterns in different domains. The results suggest that while both domains involve structural complexity and naturalness as learning biases, these biases play a stronger role in phonological learning, particularly when the target pattern is complex and unnatural.
  • Syllable-based or Word-based? Representation of tones undergoing merger in Hakka: Ming and Jon explored how native speakers of Wangmudu Hakka represent tones in their minds. Their findings suggest that for tones undergoing merger, speakers rely on word-level representations rather than syllable-level or generalized sandhi rules.
  • The acquisition, contact, and transmission of phonological variation: Xiaoyu, Samuel, Thomas, Bingzi, Frank, Stephen, Wayne and Youngah examined how biases influence phonological variation learning in different language learning contexts. Their results suggest that a bias towards phonetically natural patterns guides learning in acquisition and contact situations, but not necessarily during language transmission.

Corpus Workshop Presentation

  • Attention-LSTM Autoencoder for Phonotactics Learning from Raw Audio Input: Frank and Youngah presented a study on how a neural network model can learn phonotactic knowledge from raw audio data. Their model, designed to mimic early stages of infant language learning, successfully captured the influence of surrounding sounds on the pronunciation of stops following a sibilant fricative in English.
Frank, Xiaoyu, Ivy, Youngah, Jon, and Ming enjoying a meal in Seoul.