Categories: Lab News, Publication

“Iconicity and semantic transparency in Hong Kong Sign Language: Evidence from ratings and three guessing paradigms” published in Language and Cognition

We are pleased to announce the publication of a new article in Language and Cognition by Arthur, Aaron, Mavies, Rachel, Judy, and Youngah, titled “Iconicity and semantic transparency in Hong Kong Sign Language: Evidence from ratings and three guessing paradigms.”

This study investigates how strongly signs in Hong Kong Sign Language (HKSL) are perceived to resemble their meanings, a property known as iconicity, and how this relates to how easily meanings can be inferred by people with no knowledge of HKSL. The authors collected iconicity ratings for 972 HKSL signs from both Deaf native HKSL signers and hearing Cantonese-speaking non-signers, and examined how these ratings relate to performance in several meaning‑guessing tasks.

Results show that HKSL signs are rated as comparably iconic to signs in other well‑studied sign languages, including American Sign Language and Israeli Sign Language, with Deaf signers assigning higher iconicity ratings overall. Across tasks, signs rated as more iconic were also more likely to be guessed correctly by hearing non-signers. Importantly, the study shows that semantic transparency is not all‑or‑nothing: when contextual information is provided through multiple‑choice options, many signs become “translucent,” allowing accurate inference, whereas open‑ended guessing without context is much more difficult.

By combining large‑scale iconicity ratings with multiple guessing paradigms and cross‑linguistic comparisons, this work provides a new empirical baseline for studying iconicity and semantic transparency in HKSL and contributes to broader discussions about how form–meaning relationships are perceived across sign languages.

Thompson, A., Chik, A., Ngai, M., Chen, R., Ng, J., & Do, Y. (2026). Iconicity and semantic transparency in Hong Kong Sign Language: Evidence from ratings and three guessing paradigms. Language and Cognition, 18, Article e21. DOI

Categories: Lab News, Publication

“Modeling Prosodic Development with Prenatal Audio Attenuation” published in the Proceedings of the Annual Meetings on Phonology

We are pleased to share a new publication from Frank, Shuang, Ming, and Youngah in the Proceedings of the Annual Meetings on Phonology. The paper, titled “Modeling Prosodic Development with Prenatal Audio Attenuation,” investigates how the sound environment before birth may shape early prosodic learning—the ability to perceive patterns such as stress and tone in speech.

Preterm infants often experience delayed language development, and one contributing factor may be the reduced duration of prenatal auditory exposure. To better understand this, the authors used convolutional neural networks to simulate infants’ early learning environment. The models were first trained on low‑frequency audio, reflecting the kinds of sounds fetuses can hear in utero, before being exposed to full‑frequency speech that resembles postnatal auditory input.
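The prenatal listening condition can be pictured with a crude low-pass filter over an audio signal. This is a minimal sketch of the attenuation idea only; the FFT method and the 500 Hz cutoff are illustrative assumptions, not the paper's actual preprocessing or parameters.

```python
import numpy as np

def lowpass(signal, sr, cutoff_hz=500):
    """Crude FFT low-pass filter: zero out all frequency components
    above cutoff_hz. The 500 Hz cutoff is an illustrative choice."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    spectrum[freqs > cutoff_hz] = 0.0
    return np.fft.irfft(spectrum, n=len(signal))

# A 100 Hz tone survives the filter; a 2000 Hz tone is removed,
# roughly analogous to how the womb passes low frequencies only.
sr = 16000
t = np.arange(sr) / sr
low = np.sin(2 * np.pi * 100 * t)
high = np.sin(2 * np.pi * 2000 * t)
filtered = lowpass(low + high, sr)
```

In a simulation along these lines, a model would first see only the `filtered` signal ("prenatal" input) and later the unfiltered mix ("postnatal" input).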

The study shows that longer exposure to low‑frequency audio provides an initial advantage for learning stress and tone patterns, though this early benefit fades over time. Interestingly, the simulations also reveal that learning improves even more when models are trained on full‑frequency audio for the same duration, suggesting that infants may rely on a wider range of acoustic cues than previously assumed. These findings underscore the importance of both the quantity and quality of auditory input in early prosodic development.


Categories: Lab News, Publication

“Investigating the Tone–Segment Asymmetry in Phonological Counting: A Learnability Experiment” published in the Proceedings of the Annual Meetings on Phonology

We are pleased to announce a new publication by Jian, Hanna, Youngah, and Jesse in the Proceedings of the Annual Meetings on Phonology. The paper, titled “Investigating the Tone-Segment Asymmetry in Phonological Counting: A Learnability Experiment,” examines how learners acquire rules that rely on counting either tones or segments, two fundamental components of spoken language.

Tone-segment asymmetry has long attracted attention in phonological theory, with many proposals suggesting that tones and segments behave differently in how they pattern across languages. This study provides the first experimental test of whether these typological differences are connected to how easily such patterns can be learned. Using an artificial-language learning paradigm, the authors compared learners’ ability to acquire a tonal counting rule with their ability to learn a structurally parallel segmental rule.

The results reveal that an unattested segmental counting pattern is significantly more difficult for learners than its tonal equivalent. This asymmetry in learnability suggests that cognitive biases may contribute to the distribution of tone‑ and segment‑based counting patterns observed cross‑linguistically.

Cui, J., Shine, H., Do, Y., & Snedeker, J. (2026). Investigating the tone-segment asymmetry in phonological counting: A learnability experiment. Proceedings of the Annual Meetings on Phonology, 2(1). DOI

Categories: Lab News

How Do We Learn a New Dialect? Xiaoyu Shares Findings at CityU Phorum

Xiaoyu presented at the CityU Phonetics and Phonology Forum (“Phorum”) on March 4, 2026, organized by the Phonetics, Acquisition, and Multilingualism Lab (PAMLab). 

In his talk, “Learning Sound Correspondences during Second Dialect Acquisition”, Xiaoyu presented an artificial dialect learning study and an ERP experiment that examined the learnability and processing of sound correspondences.

Categories: Lab News

A Beautiful Sharing Session: Voices Beyond Silence

We had the pleasure of attending a sharing session organized by the Equal Opportunity Unit at HKU: Diversity & Inclusion Week: Voices Beyond Silence – Adia’s Journey as a Child of Deaf Adults (CODA).

Hearing Adia’s perspective as a CODA and learning about her journey was truly eye-opening. She taught us some sign language and shared both the challenges and beautiful moments she experienced growing up. We also had a chance to read her illustrated storybook — such a simple yet touching story, with the most adorable bunny characters! 🐰

After the session, we connected with Adia in person, and we are so glad we had that moment to chat.

We’re thankful for the chance to connect, learn, and be moved by Adia’s story. Here’s to continuing these important conversations and building a more inclusive community together. ✨

Categories: Lab News

Planting Seeds of Inclusion: A Sharing at ESF Kennedy School

Youngah visited ESF Kennedy School and spoke to the primary school children about our sign language project! She shared simple, practical ways everyone can help make our society more inclusive.

It was truly heartwarming to see how engaged and thoughtful the children were. We hope the session inspired them to think about both small everyday actions and bigger steps they can take to create a kinder, more welcoming, and truly inclusive world for all.

We’re grateful for moments like these — planting those important seeds in young minds is how we build a more inclusive future together.

Categories: Lab News

Exploring the Neural Network Implementation of Phonological Systems: Insights from Vowel Harmony and Disharmony

We are pleased to announce that Ivy presented a talk titled “Phonetic substance is encoded in the neural network implementation of the phonological system: the case of vowel harmony and vowel disharmony” at the satellite workshop of the 23rd Old World Conference on Phonology (OCP23).

The study investigates the role of phonetic substance in phonological acquisition. We aim to disentangle pure phonological learning from the speech production and perception channel by employing text-based neural network simulations. Sequence-to-sequence models were trained to learn the underlying-surface mappings of vowel harmony and vowel disharmony. The results revealed significantly more productions with backness agreement errors in the disharmony condition compared to the harmony condition, confirming a learning bias favoring vowel harmony. We attributed the difference to how the phonetic basis underlying vowel harmony can be reflected as adjacency in featural representations, thereby inducing a simpler computational structure. Our findings thus call for a reconsideration of the distinction between structural and substantive biases.
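The kind of underlying–surface training pairs involved can be pictured with a toy generator. This is a simplifying sketch only: the four-vowel inventory, the backness pairing, and the "agree/disagree with the first vowel" rule are assumptions for illustration, not the study's actual stimuli.

```python
# Front/back vowel counterparts in a toy four-vowel inventory.
PAIRS = {"i": "u", "u": "i", "e": "o", "o": "e"}
FRONT = {"i", "e"}

def surface(ur, harmony=True):
    """Map an underlying vowel string to its surface form.

    Under harmony, every non-initial vowel agrees in backness with
    the first vowel; under disharmony, every non-initial vowel
    disagrees with it.
    """
    first_front = ur[0] in FRONT
    want_front = first_front if harmony else not first_front
    out = [ur[0]]
    for v in ur[1:]:
        out.append(v if (v in FRONT) == want_front else PAIRS[v])
    return "".join(out)

print(surface("io"))                 # harmony: "ie"
print(surface("io", harmony=False))  # disharmony: "io"
```

A sequence-to-sequence model trained on such pairs must learn the mapping pattern itself; the study's finding is that the harmonic mapping is learned more readily than the disharmonic one.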

More details are available here: https://www.phonetics.mmll.cam.ac.uk/ocp23/representation

Categories: Lab News

Empowering the Future of Arts & AI at HKU

Youngah was featured in HKU’s spotlight! Our research exploring the intersections of human learning and machine learning is now directly powering change: it underpins the new wave of innovative AI courses in teaching and learning in the Faculty of Arts, as well as strengthened internship collaborations with industry leaders — creating unique opportunities that fuse arts, technology, and authentic real-world impact.

More details are available here:

HKU’s Official Instagram

HKU’s Official Facebook

HKU’s Official WeChat Channel

Categories: Lab News, Publication

“Bottom-up modeling of phoneme learning: Universal sensitivity and language-specific transformation” published in Speech Communication

We are pleased to announce the publication of a new paper titled “Bottom-up modeling of phoneme learning: Universal sensitivity and language-specific transformation” in the journal Speech Communication. This study was conducted by Frank and Youngah.

The research investigates the emergence and development of universal phonetic sensitivity during early phonological learning using an unsupervised modeling approach. The authors trained autoencoder models on raw acoustic input from English and Mandarin to simulate bottom-up perceptual development, focusing on phoneme contrast learning.

The results demonstrate that phoneme-like categories and feature-aligned representational spaces can emerge from context-free acoustic exposure alone. The study reveals that universal phonetic sensitivity is a transient developmental stage that varies across contrasts and gradually gives way to language-specific perception, mirroring infant perceptual development. Different featural contrasts remain universally discriminable for varying durations over the course of learning. These findings support the view that universal sensitivity is not innately fixed but emerges through learning, and that early phonological development proceeds along a mosaic, feature-dependent trajectory.
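The unsupervised setup can be illustrated with a minimal linear autoencoder on synthetic data. This is a toy stand-in, not the paper's model: the data dimensions, the linear architecture, and the training settings below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "acoustic" input: 200 frames of 16-dim features lying on a
# 3-dim subspace, standing in for spectral frames of raw speech.
basis = rng.normal(size=(3, 16))
X = rng.normal(size=(200, 3)) @ basis

# Linear autoencoder: encode 16 dims down to a 3-dim code, decode back.
W_enc = rng.normal(scale=0.1, size=(16, 3))
W_dec = rng.normal(scale=0.1, size=(3, 16))

def mse(We, Wd):
    return np.mean((X - X @ We @ Wd) ** 2)

initial = mse(W_enc, W_dec)
lr = 0.01
for _ in range(1000):
    H = X @ W_enc                     # hidden code
    E = H @ W_dec - X                 # reconstruction error
    W_dec -= lr * (H.T @ E) / len(X)  # gradient steps on the MSE
    W_enc -= lr * (X.T @ (E @ W_dec.T)) / len(X)
final = mse(W_enc, W_dec)
print(f"reconstruction MSE: {initial:.3f} -> {final:.3f}")
```

The point of the sketch is only that reconstruction pressure alone, with no labels, forces the hidden code to capture the structure of the input — the mechanism by which, at a much larger scale, phoneme-like categories can emerge from context-free acoustic exposure.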

Tan, F., & Do, Y. (2025). Bottom-up modeling of phoneme learning: Universal sensitivity and language-specific transformation. Speech Communication. DOI

Categories: Lab News

Congratulating Our New PhD, Dr. Yu

🎉 What a fantastic day in the lab.

We celebrated Xiaoyu’s brilliant PhD defence and new doctorate status with joy, laughter, and warm toasts.

Congrats, Xiaoyu!

We’re immensely proud and wish you every success and many exciting opportunities ahead! 🥂