Categories
Lab News Publication

“Iconicity and semantic transparency in Hong Kong Sign Language: Evidence from ratings and three guessing paradigms” published in Language and Cognition

We are pleased to announce the publication of a new article in Language and Cognition by Arthur, Aaron, Mavies, Rachel, Judy, and Youngah, titled “Iconicity and semantic transparency in Hong Kong Sign Language: Evidence from ratings and three guessing paradigms.”

This study investigates how strongly signs in Hong Kong Sign Language (HKSL) are perceived to resemble their meanings, a property known as iconicity, and how this relates to how easily meanings can be inferred by people with no knowledge of HKSL. The authors collected iconicity ratings for 972 HKSL signs from both Deaf native HKSL signers and hearing Cantonese-speaking non-signers, and examined how these ratings relate to performance in several meaning‑guessing tasks.

Results show that HKSL signs are rated as comparably iconic to signs in other well‑studied sign languages, including American Sign Language and Israeli Sign Language, with Deaf signers assigning higher iconicity ratings overall. Across tasks, signs rated as more iconic were also more likely to be guessed correctly by hearing non-signers. Importantly, the study shows that semantic transparency is not all‑or‑nothing: when contextual information is provided through multiple‑choice options, many signs become “translucent,” allowing accurate inference, whereas open‑ended guessing without context is much more difficult.

By combining large‑scale iconicity ratings with multiple guessing paradigms and cross‑linguistic comparisons, this work provides a new empirical baseline for studying iconicity and semantic transparency in HKSL and contributes to broader discussions about how form–meaning relationships are perceived across sign languages.

Thompson, A., Chik, A., Ngai, M., Chen, R., Ng, J., & Do, Y. (2026). Iconicity and semantic transparency in Hong Kong Sign Language: Evidence from ratings and three guessing paradigms. Language and Cognition, 18, Article e21. DOI


“Modeling Prosodic Development with Prenatal Audio Attenuation” published in the Proceedings of the Annual Meetings on Phonology

We are pleased to share a new publication from Frank, Shuang, Ming, and Youngah in the Proceedings of the Annual Meetings on Phonology. The paper, titled “Modeling Prosodic Development with Prenatal Audio Attenuation,” investigates how the sound environment before birth may shape early prosodic learning—the ability to perceive patterns such as stress and tone in speech.

Preterm infants often experience delayed language development, and one contributing factor may be the reduced duration of prenatal auditory exposure. To better understand this, the authors used convolutional neural networks to simulate infants’ early learning environment. The models were first trained on low‑frequency audio, reflecting the kinds of sounds fetuses can hear in utero, before being exposed to full‑frequency speech that resembles postnatal auditory input.
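The prenatal training signal in simulations of this kind can be produced by low-pass filtering speech, since the womb attenuates high frequencies. Below is a minimal sketch of that preprocessing step, not the authors' pipeline; the 500 Hz cutoff and the brick-wall FFT filter are illustrative assumptions.

```python
import numpy as np

def attenuate_prenatal(waveform, sample_rate, cutoff_hz=500.0):
    """Crude brick-wall low-pass: zero out spectral energy above cutoff_hz."""
    spec = np.fft.rfft(waveform)
    freqs = np.fft.rfftfreq(len(waveform), 1.0 / sample_rate)
    spec[freqs > cutoff_hz] = 0.0
    return np.fft.irfft(spec, n=len(waveform))

# Toy input: a 200 Hz tone (survives the "womb filter") plus a 3 kHz tone
# (removed, like the high-frequency detail fetuses barely hear).
sr = 16000
t = np.arange(sr) / sr
x = np.sin(2 * np.pi * 200 * t) + np.sin(2 * np.pi * 3000 * t)
y = attenuate_prenatal(x, sr)  # y is (very nearly) just the 200 Hz tone
```

A model first trained on such filtered audio and then switched to the unfiltered signal mimics the birth transition from in-utero to postnatal hearing.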

The study shows that longer exposure to low‑frequency audio provides an initial advantage for learning stress and tone patterns, though this early benefit fades over time. Interestingly, the simulations also reveal that learning improves even more when models are trained on full‑frequency audio for the same duration, suggesting that infants may rely on a wider range of acoustic cues than previously assumed. These findings underscore the importance of both the quantity and quality of auditory input in early prosodic development.



“Investigating the Tone–Segment Asymmetry in Phonological Counting: A Learnability Experiment” published in the Proceedings of the Annual Meetings on Phonology

We are pleased to announce a new publication by Jian, Hanna, Youngah, and Jesse in the Proceedings of the Annual Meetings on Phonology. The paper, titled “Investigating the Tone-Segment Asymmetry in Phonological Counting: A Learnability Experiment,” examines how learners acquire rules that rely on counting either tones or segments, two fundamental components of spoken language.

Tone-segment asymmetry has long attracted attention in phonological theory, with many proposals suggesting that tones and segments behave differently in how they pattern across languages. This study provides the first experimental test of whether these typological differences are connected to how easily such patterns can be learned. Using an artificial-language learning paradigm, the authors compared learners’ ability to acquire a tonal counting rule with their ability to learn a structurally parallel segmental rule.

The results reveal that an unattested segmental counting pattern is significantly more difficult for learners than its tonal equivalent. This asymmetry in learnability suggests that cognitive biases may contribute to the distribution of tone‑ and segment‑based counting patterns observed cross‑linguistically.

Cui, J., Shine, H., Do, Y., & Snedeker, J. (2026). Investigating the tone-segment asymmetry in phonological counting: A learnability experiment. Proceedings of the Annual Meetings on Phonology, 2(1). DOI


“Bottom-up modeling of phoneme learning: Universal sensitivity and language-specific transformation” published in Speech Communication

We are pleased to announce the publication of a new paper titled “Bottom-up modeling of phoneme learning: Universal sensitivity and language-specific transformation” in the journal Speech Communication. This study was conducted by Frank and Youngah.

The research investigates the emergence and development of universal phonetic sensitivity during early phonological learning using an unsupervised modeling approach. The authors trained autoencoder models on raw acoustic input from English and Mandarin to simulate bottom-up perceptual development, focusing on phoneme contrast learning.

The results demonstrate that phoneme-like categories and feature-aligned representational spaces can emerge from context-free acoustic exposure alone. The study reveals that universal phonetic sensitivity is a transient developmental stage that varies across contrasts and gradually gives way to language-specific perception, mirroring infant perceptual development. Different featural contrasts remain universally discriminable for varying durations over the course of learning. These findings support the view that universal sensitivity is not innately fixed but emerges through learning, and that early phonological development proceeds along a mosaic, feature-dependent trajectory.
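The core idea of bottom-up, unsupervised learning from acoustics can be illustrated with a toy autoencoder: a network trained only to reconstruct its input, whose compressed bottleneck can come to reflect category structure in the data. The sketch below uses synthetic two-cluster "phoneme" features invented for illustration; the paper's actual models and acoustic inputs are far richer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two synthetic "phoneme" categories in a 10-dim acoustic feature space.
centers = rng.normal(size=(2, 10))
X = np.vstack([c + 0.1 * rng.normal(size=(200, 10)) for c in centers])

# Linear autoencoder, 10 -> 2 -> 10, trained by plain gradient descent
# on reconstruction error alone (no labels, mirroring unsupervised exposure).
W_enc = 0.1 * rng.normal(size=(10, 2))
W_dec = 0.1 * rng.normal(size=(2, 10))
lr = 0.01

def loss(X, W_enc, W_dec):
    return np.mean((X @ W_enc @ W_dec - X) ** 2)

initial = loss(X, W_enc, W_dec)
for _ in range(500):
    H = X @ W_enc                        # bottleneck codes
    E = H @ W_dec - X                    # reconstruction error
    grad_dec = H.T @ E / len(X)
    grad_enc = X.T @ (E @ W_dec.T) / len(X)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc
final = loss(X, W_enc, W_dec)            # falls well below `initial`
```

After training, the two clusters separate in the two-dimensional bottleneck even though the model was never told they exist, which is the sense in which phoneme-like categories can "emerge" from exposure alone.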

Tan, F., & Do, Y. (2025). Bottom-up modeling of phoneme learning: Universal sensitivity and language-specific transformation. Speech Communication. DOI


“Attention-LSTM autoencoder simulation for phonotactic learning from raw audio input” published in Linguistics Vanguard

We are pleased to announce the publication of a new paper by Frank Lihui Tan and Youngah Do in the journal Linguistics Vanguard. The paper, titled “Attention-LSTM autoencoder simulation for phonotactic learning from raw audio input,” explores a novel approach to phonotactic learning using an attention-based long short-term memory (LSTM) autoencoder trained on raw audio input.

Unlike previous models that rely on abstract phonological representations, this study simulates early phonotactic acquisition stages by processing continuous acoustic signals. The research focuses on an English phonotactic pattern, specifically the distribution of aspirated and unaspirated voiceless stops. The model implicitly acquires phonotactic knowledge through reconstruction tasks, demonstrating its ability to capture essential phonotactic relations via attention mechanisms. The findings suggest that the model initially relies heavily on contextual cues to identify phonotactic patterns but gradually internalizes these constraints, reducing its dependence on specific phonotactic cues over time.
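The attention component can be illustrated in isolation: softmax dot-product attention weights each timestep's hidden state by its similarity to a query vector, so the model can concentrate on the acoustically informative stretches of the signal. A toy numpy sketch of the mechanism (an illustration, not the authors' model):

```python
import numpy as np

def attention_pool(hidden_states, query):
    """Softmax dot-product attention over a (timesteps, dim) state matrix."""
    scores = hidden_states @ query          # similarity score per timestep
    weights = np.exp(scores - scores.max()) # numerically stable softmax
    weights = weights / weights.sum()
    context = weights @ hidden_states       # weighted summary vector
    return weights, context

# Three hidden states; the query resembles the second one, so the
# attention weights concentrate on that timestep.
H = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
q = np.array([0.1, 2.0, 0.1])
w, c = attention_pool(H, q)
```

Inspecting which timesteps receive high weight is what lets studies like this one ask whether the model is relying on contextual cues or has internalized the pattern.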

This study provides valuable insights into both computational modeling and infants’ phonotactic acquisition, highlighting the feasibility of early phonotactic learning models based on raw auditory input.

Tan, F., & Do, Y. (2025). Attention-LSTM autoencoder simulation for phonotactic learning from raw audio input. Linguistics Vanguard. DOI


“Tonal Assignment of Chinese Lettered Words” published in Journal of Chinese Linguistics

We are pleased to announce the publication of a new paper by Zhihao Wang and Youngah Do in the Journal of Chinese Linguistics. The paper, titled “Tonal Assignment of Chinese Lettered Words,” explores the complex patterns of tonal assignment in Chinese lettered words, particularly in Beijing Mandarin.

The study reveals that Chinese lettered words display a clear stress-to-tone match pattern, with additional rules of phonetic contrast maximization and a default rule also playing a role in tonal assignment. The findings suggest that the complex patterns previously reported in studies of ordinary Chinese loanwords are influenced by external factors related to the Chinese writing system.

This research provides valuable insights into the inherent strategies of tonal assignment in the Chinese language and contributes to our understanding of the phonological adaptation of loanwords.

Wang, Z., & Do, Y. (2025). Tonal assignment of Chinese lettered words [Preprint]. Journal of Chinese Linguistics. DOI


“Iconic hand gestures from ideophones exhibit stability and emergent phonological properties” published in Cognitive Linguistics

We are pleased to announce the publication of a new paper by Arthur, Thomas (joint first authors), Aaron, and Youngah in the journal Cognitive Linguistics.

The paper, titled “Iconic hand gestures from ideophones exhibit stability and emergent phonological properties: an iterated learning study,” explores the stability and phonological properties of iconic hand gestures associated with ideophones. Ideophones are marked words that depict sensory imagery and are usually considered iconic by native speakers. The study investigates how these gestures are transmitted across generations using a linear iterated learning paradigm.

The findings reveal that despite noise in the visual signal, participants’ hand gestures converged, indicating the emergence of phonological targets. Handshape configurations over time exhibited finger coordination reminiscent of unmarked handshapes observed in phonological inventories of signed languages. Well-replicated gestures were correlated with well-guessed ideophones from a spoken language study, highlighting the complementary nature of the visual and spoken modalities in formulating mental representations.

Thompson, A. L., Van Hoey, T., Chik, A. W. C., & Do, Y. (2025). Iconic hand gestures from ideophones exhibit stability and emergent phonological properties: An iterated learning study. Cognitive Linguistics. DOI


“Bilinguals’ advantages in executive function” published in Second Language Research

We are pleased to announce the publication of a new paper by Samuel, Xiaoyu, Thomas, Bingzi, and Youngah. The paper, titled “Bilinguals’ Advantages in Executive Function: Learning Phonotactics and Alternation,” has been published in Second Language Research.

This study investigates the relationship between phonotactics and alternation in phonological acquisition and explores whether bilingual speakers have an advantage in learning alternation patterns that are not fully supported by phonotactics. Phonotactics refers to the legal sequences and structures within a language’s phonology, while alternation involves context-sensitive changes in morphemes. The research predicts that bilinguals, due to their enhanced executive function and multitasking abilities, will outperform monolinguals in handling multiple independent phonological pattern learning tasks simultaneously.

The findings reveal that bilingual participants successfully learned alternation patterns regardless of their consistency with stem-internal phonotactic patterns. In contrast, monolinguals only acquired alternation patterns with full phonotactic support. This suggests that bilingualism may confer advantages in managing phonotactics and alternation learning tasks simultaneously.

Sze, S. L., Yu, X., Van Hoey, T., Yu, B., & Do, Y. (2025). Bilinguals’ advantages in executive function: Learning phonotactics and alternation. Second Language Research. DOI


“Learners’ generalization of alternation patterns from ambiguous data” published in AMP2024 Proceedings

We are delighted to announce that the paper “Learners’ generalization of alternation patterns from ambiguous data,” presented at the Annual Meeting on Phonology 2024 (AMP2024), has been published in the conference proceedings. This paper is authored by Bingzi (former member of LDL and current PhD candidate at MIT), Ivy, and Youngah.

The published paper investigates how learners generalize phonological alternation patterns when faced with ambiguous data. It explores whether learners prefer simple or complex rules in their generalizations, shedding light on the biases and mechanisms underlying phonological learning.

The findings indicate that learners tend to favor simpler generalizations, contributing to our understanding of phonological acquisition and cognitive processes involved in language learning. This research represents a significant advancement in the study of phonological learning.

Yu, B., Zheng, S., & Do, Y. (2025). Learners’ generalization of alternation patterns from ambiguous data. Proceedings of the Annual Meetings on Phonology, 1(1), Article 1. DOI


“Preference for Distinct Variants in Learning Sound Correspondences During Dialect Acquisition” published in Language and Speech

We are pleased to announce that Xiaoyu and Youngah’s paper, “Preference for Distinct Variants in Learning Sound Correspondences During Dialect Acquisition,” has been published in the journal Language and Speech.

This research delves into how learners acquire sound correspondences (SCs) in second dialect acquisition. SCs occur when sounds occupy corresponding positions in cognate words of related languages or dialects. While SCs can consist of both similar and distinct variants, the impact of this similarity on learning has been understudied.

In their study, Xiaoyu and Youngah investigated whether the degree of similarity between dialect variants affects SC learning. They employed an artificial language learning experiment where participants learned SCs between Standard Mandarin and “artificial dialects,” using a set of carefully controlled sound contrasts. The degree of similarity between the variants was evaluated using multiple measures, including phonetic and phonological metrics validated by typological evidence.

The findings revealed that while similarity did not impact the learning of simple one-to-one SCs, learners showed a preference for more distinct variants when the SC mapping structure was more complex (i.e., two-to-one or one-to-two mappings). This preference, however, only emerged when the dissimilarity between the variants was sufficiently large to cross a certain threshold.

This study demonstrates that although learners initially display a general lack of sensitivity to similarity differences, a preference for distinct variants emerges when SC mapping structures become more complex and the dissimilarity between variants reaches a critical level. This suggests that when acquiring complex SC patterns, learners seek out more salient cues, leading to an improved ability to differentiate between distinct variants.

Yu, X., & Do, Y. (2025). Preference for distinct variants in learning sound correspondences during dialect acquisition. Language and Speech. DOI