Categories
Lab News

How Do We Learn a New Dialect? Xiaoyu Shares Findings at CityU Phorum

Xiaoyu presented at the CityU Phonetics and Phonology Forum (“Phorum”) on March 4, 2026, organized by the Phonetics, Acquisition, and Multilingualism Lab (PAMLab). 

In his talk “Learning Sound Correspondences during Second Dialect Acquisition,” Xiaoyu presented an artificial dialect learning study and an ERP experiment examining the learnability and processing of sound correspondences.


A Beautiful Sharing Session: Voices Beyond Silence

We had the pleasure of attending a sharing session organized by the Equal Opportunity Unit at HKU: Diversity & Inclusion Week: Voices Beyond Silence – Adia’s Journey as a Child of Deaf Adults (CODA).

Hearing Adia’s perspective as a CODA and learning about her journey was truly eye-opening. She taught us some sign language and shared both the challenges and beautiful moments she experienced growing up. We also had a chance to read her illustrated storybook: such a simple yet touching story, with the most adorable bunny characters! 🐰

After the session, we connected with Adia in person, and we are so glad we had that moment to chat.

We’re thankful for the chance to connect, learn, and be moved by Adia’s story. Here’s to continuing these important conversations and building a more inclusive community together. ✨


Planting Seeds of Inclusion: A Sharing at ESF Kennedy School

Youngah visited ESF Kennedy School and spoke to the primary school children about our sign language project! She shared simple, practical ways everyone can help make our society more inclusive.

It was truly heartwarming to see how engaged and thoughtful the children were. We hope the session inspired them to think about both small everyday actions and bigger steps they can take to create a kinder, more welcoming, and truly inclusive world for all.

We’re grateful for moments like these — planting those important seeds in young minds is how we build a more inclusive future together.


Exploring the Neural Network Implementation of Phonological Systems: Insights from Vowel Harmony and Disharmony

We are pleased to announce that Ivy presented a talk titled “Phonetic substance is encoded in the neural network implementation of the phonological system: the case of vowel harmony and vowel disharmony” at the satellite workshop of the 23rd Old World Conference on Phonology (OCP23).

The study investigates the role of phonetic substance in phonological acquisition. We aim to disentangle pure phonological learning from the speech production and perception channel by employing text-based neural network simulations. Sequence-to-sequence models were trained to learn the underlying-surface mappings of vowel harmony and vowel disharmony. The results revealed significantly more productions with backness agreement errors in the disharmony condition compared to the harmony condition, confirming a learning bias favoring vowel harmony. We attributed the difference to how the phonetic basis underlying vowel harmony can be reflected as adjacency in featural representations, thereby inducing a simpler computational structure. Our findings thus call for a reconsideration of the distinction between structural and substantive biases.
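To give a flavor of the kind of underlying-surface training data such a text-based simulation might use, here is a minimal sketch of a toy backness-harmony pattern and its disharmonic mirror image. The vowel inventory, suffix shape, and transcription below are invented for illustration; they are not the actual stimuli or feature coding used in the study.

```python
# Toy sketch: underlying-surface pairs for backness harmony vs. disharmony.
# Inventory and word shapes are hypothetical, for illustration only.
import itertools

FRONT = {"i": "u", "e": "o"}            # front vowel -> back counterpart
BACK = {v: k for k, v in FRONT.items()}  # back vowel -> front counterpart

def harmonize(stem_vowel: str, suffix_vowel: str) -> str:
    """Surface suffix vowel AGREES in backness with the stem vowel."""
    if stem_vowel in FRONT and suffix_vowel in BACK:
        return BACK[suffix_vowel]        # front the suffix after a front stem
    if stem_vowel in BACK and suffix_vowel in FRONT:
        return FRONT[suffix_vowel]       # back the suffix after a back stem
    return suffix_vowel

def disharmonize(stem_vowel: str, suffix_vowel: str) -> str:
    """Surface suffix vowel must DISagree in backness with the stem vowel."""
    if stem_vowel in FRONT and suffix_vowel in FRONT:
        return FRONT[suffix_vowel]
    if stem_vowel in BACK and suffix_vowel in BACK:
        return BACK[suffix_vowel]
    return suffix_vowel

def make_pairs(rule):
    """Enumerate (underlying, surface) mappings, e.g. 'pik+tu' -> 'pikti'."""
    vowels = list(FRONT) + list(BACK)
    return [
        (f"p{sv}k+t{xv}", f"p{sv}kt{rule(sv, xv)}")
        for sv, xv in itertools.product(vowels, vowels)
    ]
```

A sequence-to-sequence learner trained on the harmonic pairs only ever flips the suffix vowel toward the trigger's backness value, while the disharmonic pattern requires flipping away from it; the study's result is that the latter mapping yields more agreement errors.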

More details are available here: https://www.phonetics.mmll.cam.ac.uk/ocp23/representation


Empowering the Future of Arts & AI at HKU

Youngah was featured in HKU’s spotlight! Our research exploring the intersections of human learning and machine learning is now directly powering change: it underpins a new wave of innovative AI courses in teaching and learning in the Faculty of Arts and has strengthened internship collaborations with industry leaders, creating unique opportunities that fuse arts, technology, and authentic real-world impact.

More details are available here:

HKU’s Official Instagram

HKU’s Official Facebook

HKU’s Official WeChat Channel


“Bottom-up modeling of phoneme learning: Universal sensitivity and language-specific transformation” published in Speech Communication

We are pleased to announce the publication of a new paper titled “Bottom-up modeling of phoneme learning: Universal sensitivity and language-specific transformation” in the journal Speech Communication. This study was conducted by Frank and Youngah.

The research investigates the emergence and development of universal phonetic sensitivity during early phonological learning using an unsupervised modeling approach. The authors trained autoencoder models on raw acoustic input from English and Mandarin to simulate bottom-up perceptual development, focusing on phoneme contrast learning.

The results demonstrate that phoneme-like categories and feature-aligned representational spaces can emerge from context-free acoustic exposure alone. The study reveals that universal phonetic sensitivity is a transient developmental stage that varies across contrasts and gradually gives way to language-specific perception, mirroring infant perceptual development. Different featural contrasts remain universally discriminable for varying durations over the course of learning. These findings support the view that universal sensitivity is not innately fixed but emerges through learning, and that early phonological development proceeds along a mosaic, feature-dependent trajectory.
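One generic way to quantify how discriminable a phonetic contrast remains in a learned representation space is to compare the distance between category centroids to the average within-category spread. The sketch below is illustrative only, with mock representation vectors; it is not necessarily the measure used in the paper's analyses.

```python
# Generic sketch of a contrast-discriminability score over learned
# representation vectors: between-category centroid distance divided by
# average within-category spread. Mock data; illustrative only.
import math

def centroid(vectors):
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def spread(vectors, center):
    return sum(euclidean(v, center) for v in vectors) / len(vectors)

def discriminability(cat_a, cat_b):
    """Higher = the two phoneme categories are easier to tell apart."""
    ca, cb = centroid(cat_a), centroid(cat_b)
    within = (spread(cat_a, ca) + spread(cat_b, cb)) / 2
    return euclidean(ca, cb) / within if within > 0 else float("inf")

# Mock representations: a well-separated contrast vs. an overlapping one.
sep_a = [[0.0, 0.1], [0.1, 0.0], [0.0, 0.0]]
sep_b = [[5.0, 5.1], [5.1, 5.0], [5.0, 5.0]]
ovl_a = [[0.0, 0.1], [0.4, 0.0], [0.2, 0.3]]
ovl_b = [[0.1, 0.2], [0.3, 0.1], [0.2, 0.0]]
```

Tracking a score like this per contrast over training checkpoints is one way to see the "mosaic" trajectory the paper describes: some contrasts stay discriminable longer than others before language-specific warping sets in.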

Tan, F., & Do, Y. (2025). Bottom-up modeling of phoneme learning: Universal sensitivity and language-specific transformation. Speech Communication. DOI


Congratulating Our New PhD, Dr. Yu

🎉 What a fantastic day in the lab.

We celebrated Xiaoyu’s brilliant PhD defence and new doctorate status with joy, laughter, and warm toasts.

Congrats, Xiaoyu!

We’re immensely proud and wish you every success and many exciting opportunities ahead! 🥂


Lunch Gathering: A Flavorful Hotpot Experience

The LDL team came together for an enjoyable lunch gathering at a hotpot restaurant. This event provided a wonderful opportunity for team members to connect over a shared meal, fostering stronger bonds while sparking discussions on ongoing research.


LDL Shines at AMP 2025

Our lab was well represented at the Annual Meeting on Phonology (AMP 2025) at UC Berkeley. Ivy, Frank, and Youngah not only enjoyed a fun Waymo ride but also presented their research:

Youngah, together with scholars from Harvard University (Jian Cui, Hanna Shine, and Jesse Snedeker), presented a paper titled “Investigating the Tone-Segment Asymmetry in Phonological Counting: A Learnability Experiment.”

Frank, Ivy, and Youngah presented a talk titled “Modeling Prosodic Development with Prenatal Audio Attenuation.”

Additionally, Youngah participated in a keynote panel discussion on “Future Directions in Deep Phonology” with other scholars, including Volya Kapatsinski, Joe Pater, Mike Hammond, Jason Shaw, and Huteng Dai.

Overall, AMP 2025 was a rewarding opportunity for our team to engage in deep intellectual conversations with leading experts in phonology, fostering new ideas and collaborations that will propel our research forward.


“Attention-LSTM autoencoder simulation for phonotactic learning from raw audio input” published in Linguistics Vanguard

We are pleased to announce the publication of a new paper by Frank Lihui Tan and Youngah Do in the journal Linguistics Vanguard. The paper, titled “Attention-LSTM autoencoder simulation for phonotactic learning from raw audio input,” explores a novel approach to phonotactic learning using an attention-based long short-term memory (LSTM) autoencoder trained on raw audio input.

Unlike previous models that rely on abstract phonological representations, this study simulates early phonotactic acquisition stages by processing continuous acoustic signals. The research focuses on an English phonotactic pattern, specifically the distribution of aspirated and unaspirated voiceless stops. The model implicitly acquires phonotactic knowledge through reconstruction tasks, demonstrating its ability to capture essential phonotactic relations via attention mechanisms. The findings suggest that the model initially relies heavily on contextual cues to identify phonotactic patterns but gradually internalizes these constraints, reducing its dependence on specific phonotactic cues over time.
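The target generalization can be stated as a simple context rule: English voiceless stops /p t k/ are aspirated before a vowel except after /s/ (compare "pin" vs. "spin"). The sketch below states that rule over a simplified ASCII transcription, as a reference point for what the model must induce; the model in the paper learns it from raw audio, not from symbols, and the stress conditioning is simplified away here.

```python
# Sketch of the English aspiration pattern targeted in the study:
# voiceless stops /p t k/ are aspirated before a vowel, but not after /s/.
# Simplified ASCII transcription; stress conditioning is omitted.
VOICELESS_STOPS = {"p", "t", "k"}
VOWELS = set("aeiou")

def aspirate(word: str) -> str:
    """Insert 'h' (aspiration) after a prevocalic voiceless stop
    unless the stop immediately follows 's'."""
    out = []
    for i, seg in enumerate(word):
        out.append(seg)
        prevocalic = i + 1 < len(word) and word[i + 1] in VOWELS
        after_s = i > 0 and word[i - 1] == "s"
        if seg in VOICELESS_STOPS and prevocalic and not after_s:
            out.append("h")
    return "".join(out)
```

In these terms, the paper's finding is that the autoencoder's attention initially leans on the conditioning context (the preceding /s/ and the following vowel) and only gradually internalizes the constraint.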

This study provides valuable insights into both computational modeling and infants’ phonotactic acquisition, highlighting the feasibility of early phonotactic learning models based on raw auditory input.

Tan, F., & Do, Y. (2025). Attention-LSTM autoencoder simulation for phonotactic learning from raw audio input. Linguistics Vanguard. DOI