
“Bottom-up modeling of phoneme learning: Universal sensitivity and language-specific transformation” published in Speech Communication

We are pleased to announce the publication of a new paper titled “Bottom-up modeling of phoneme learning: Universal sensitivity and language-specific transformation” in the journal Speech Communication. This study was conducted by Frank Lihui Tan and Youngah Do.

The research investigates the emergence and development of universal phonetic sensitivity during early phonological learning using an unsupervised modeling approach. The authors trained autoencoder models on raw acoustic input from English and Mandarin to simulate bottom-up perceptual development, focusing on phoneme contrast learning.

The results demonstrate that phoneme-like categories and feature-aligned representational spaces can emerge from context-free acoustic exposure alone. The study reveals that universal phonetic sensitivity is a transient developmental stage that varies across contrasts and gradually gives way to language-specific perception, mirroring infant perceptual development. Different featural contrasts remain universally discriminable for varying durations over the course of learning. These findings support the view that universal sensitivity is not innately fixed but emerges through learning, and that early phonological development proceeds along a mosaic, feature-dependent trajectory.

Tan, F., & Do, Y. (2025). Bottom-up modeling of phoneme learning: Universal sensitivity and language-specific transformation. Speech Communication. DOI


Congratulating Our New PhD, Dr. Yu

🎉 What a fantastic day in the lab.

We celebrated Xiaoyu’s brilliant PhD defence and new doctorate status with joy, laughter, and warm toasts.

Congrats, Xiaoyu!

We’re immensely proud and wish you every success and many exciting opportunities ahead! 🥂


Lunch Gathering: A Flavorful Hotpot Experience

The LDL team came together for an enjoyable lunch gathering at a hotpot restaurant. This event provided a wonderful opportunity for team members to connect over a shared meal, fostering stronger bonds while sparking discussions on ongoing research.


LDL Shines at AMP 2025

Our lab was well represented at the Annual Meeting on Phonology (AMP 2025) at UC Berkeley. Ivy, Frank, and Youngah not only enjoyed a fun Waymo experience, but also presented the following research:

Youngah, together with scholars from Harvard University (Jian Cui, Hanna Shine, and Jesse Snedeker), presented a paper titled “Investigating the Tone-Segment Asymmetry in Phonological Counting: A Learnability Experiment.”

Frank, Ivy, and Youngah presented a talk titled “Modeling Prosodic Development with Prenatal Audio Attenuation.”

Additionally, Youngah participated in a keynote panel discussion on “Future Directions in Deep Phonology” with other scholars, including Volya Kapatsinski, Joe Pater, Mike Hammond, Jason Shaw, and Huteng Dai.

Overall, AMP 2025 was a rewarding opportunity for our team to engage in deep intellectual conversations with leading experts in phonology, fostering new ideas and collaborations that will propel our research forward.


“Attention-LSTM autoencoder simulation for phonotactic learning from raw audio input” published in Linguistics Vanguard

We are pleased to announce the publication of a new paper by Frank Lihui Tan and Youngah Do in the journal Linguistics Vanguard. The paper, titled “Attention-LSTM autoencoder simulation for phonotactic learning from raw audio input,” explores a novel approach to phonotactic learning using an attention-based long short-term memory (LSTM) autoencoder trained on raw audio input.

Unlike previous models that rely on abstract phonological representations, this study simulates early phonotactic acquisition stages by processing continuous acoustic signals. The research focuses on an English phonotactic pattern, specifically the distribution of aspirated and unaspirated voiceless stops. The model implicitly acquires phonotactic knowledge through reconstruction tasks, demonstrating its ability to capture essential phonotactic relations via attention mechanisms. The findings suggest that the model initially relies heavily on contextual cues to identify phonotactic patterns but gradually internalizes these constraints, reducing its dependence on specific phonotactic cues over time.

This study provides valuable insights into both computational modeling and infants’ phonotactic acquisition, highlighting the feasibility of early phonotactic learning models based on raw auditory input.

Tan, F., & Do, Y. (2025). Attention-LSTM autoencoder simulation for phonotactic learning from raw audio input. Linguistics Vanguard. DOI


Welcome to Our New Members!

This semester, LDL welcomes our new postgraduate students, Clarissa Ki and Qisheng Liao, and our new Research Assistant Professor, Dr. Yuyan Xue. They will be involved in several lab projects. We’re excited to have you join our academic community. Let’s embark on this journey of knowledge and discovery together. Welcome!



“Tonal Assignment of Chinese Lettered Words” published in Journal of Chinese Linguistics

We are pleased to announce the publication of a new paper by Zhihao Wang and Youngah Do in the Journal of Chinese Linguistics. The paper, titled “Tonal Assignment of Chinese Lettered Words,” explores the complex patterns of tonal assignment in Chinese lettered words, particularly in Beijing Mandarin.

The study reveals that Chinese lettered words display a clear stress-to-tone match pattern, with additional rules of phonetic contrast maximization and a default rule also playing a role in tonal assignment. The findings suggest that the complex patterns previously reported in studies of ordinary Chinese loanwords are influenced by external factors related to the Chinese writing system.

This research provides valuable insights into the inherent strategies of tonal assignment in the Chinese language and contributes to our understanding of the phonological adaptation of loanwords.

Wang, Z., & Do, Y. (2025). Tonal assignment of Chinese lettered words [Preprint]. Journal of Chinese Linguistics. DOI


Frank Lihui Tan, Our Ph.D. Student, Awarded an NSF Scholarship to Attend the Abstract and Item-Specific Knowledge Across Domains and Frameworks Workshop

Congratulations Frank!

Frank Lihui Tan, our Ph.D. student, has been awarded an NSF scholarship to attend the workshop.

This workshop explores how speakers balance abstract linguistic knowledge, which enables flexible generalization, with item-specific knowledge, which supports efficient handling of familiar contexts, with the aim of unifying insights across domains and methods.

Key Points of the Workshop

  • Dual Knowledge: Speakers use abstract rules and experience-based specific knowledge.
  • Ongoing Debate: How these knowledge types interact and apply in language use is under debate.
  • Recent Advances: New experimental methods and computational models drive progress.
  • Interdisciplinary Focus: Integrates phonology, lexical semantics, syntax, and psycholinguistics using methods like:
    • Experimental linguistics (e.g., wug-tests)
    • Language acquisition
    • Morphological processing
    • Corpus data
    • Computational modeling
  • Workshop Format: Features a student poster session, invited talks, and panels on:
    • Evidence: Data on abstract vs. specific knowledge.
    • Modeling: Computational models of dual knowledge.
    • Learning: Simultaneous acquisition of both knowledge types.
    • Brain: Neural basis of storage and abstraction.
    • Evolution: Influence on language evolution and processing.

Goal of the Workshop

To develop a coherent, evidence-based understanding of abstract and item-specific knowledge in language.