
Lunch Gathering: A Flavorful Hotpot Experience

The LDL team came together for an enjoyable lunch gathering at a hotpot restaurant. This event provided a wonderful opportunity for team members to connect over a shared meal, fostering stronger bonds while sparking discussions on ongoing research.


LDL Shines at AMP 2025

Our lab was well represented at the Annual Meeting on Phonology (AMP 2025) at UC Berkeley. Ivy, Frank, and Youngah not only enjoyed a fun Waymo ride but also presented their research:

Youngah, along with scholars from Harvard University, presented a paper titled “Investigating the Tone-Segment Asymmetry in Phonological Counting: A Learnability Experiment.” The scholars involved in this research were Jian Cui, Hanna Shine, and Jesse Snedeker.

Frank, Ivy, and Youngah presented a talk titled “Modeling Prosodic Development with Prenatal Audio Attenuation.”

Additionally, Youngah participated in a keynote panel discussion on “Future Directions in Deep Phonology” with other scholars, including Volya Kapatsinski, Joe Pater, Mike Hammond, Jason Shaw, and Huteng Dai.

Overall, AMP 2025 was a rewarding opportunity for our team to engage in deep intellectual conversations with leading experts in phonology, fostering new ideas and collaborations that will propel our research forward.


“Attention-LSTM autoencoder simulation for phonotactic learning from raw audio input” published in Linguistics Vanguard

We are pleased to announce the publication of a new paper by Frank Lihui Tan and Youngah Do in the journal Linguistics Vanguard. The paper, titled “Attention-LSTM autoencoder simulation for phonotactic learning from raw audio input,” explores a novel approach to phonotactic learning using an attention-based long short-term memory (LSTM) autoencoder trained on raw audio input.

Unlike previous models that rely on abstract phonological representations, this study simulates early phonotactic acquisition stages by processing continuous acoustic signals. The research focuses on an English phonotactic pattern, specifically the distribution of aspirated and unaspirated voiceless stops. The model implicitly acquires phonotactic knowledge through reconstruction tasks, demonstrating its ability to capture essential phonotactic relations via attention mechanisms. The findings suggest that the model initially relies heavily on contextual cues to identify phonotactic patterns but gradually internalizes these constraints, reducing its dependence on specific phonotactic cues over time.

This study provides valuable insights into both computational modeling and infants’ phonotactic acquisition, highlighting the feasibility of early phonotactic learning models based on raw auditory input.
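For readers curious about the general mechanism the paper builds on, attention-weighted encoding followed by a reconstruction objective, here is a toy NumPy sketch. This is not the paper's model (which is a trained attention-based LSTM autoencoder over raw audio); all shapes, weights, and variable names below are invented for illustration, and the weights are random rather than learned.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for an audio signal: 20 frames of 13-dimensional
# acoustic features (MFCC-like). Purely illustrative.
T, F, H = 20, 13, 8
x = rng.standard_normal((T, F))

# Hypothetical encoder/decoder weights; a real autoencoder learns these.
W_enc = rng.standard_normal((F, H)) * 0.1
W_dec = rng.standard_normal((H, F)) * 0.1
query = rng.standard_normal(H)

h = np.tanh(x @ W_enc)                  # frame-level hidden states, shape (T, H)

# Dot-product attention over frames: a distribution showing which parts
# of the signal the model leans on when reconstructing.
scores = h @ query
weights = np.exp(scores - scores.max())
weights /= weights.sum()                # attention weights, shape (T,), sum to 1
context = weights @ h                   # attention-pooled summary, shape (H,)

# Crude reconstruction of every frame from the pooled summary, plus the
# mean-squared reconstruction error that training would minimize.
x_hat = np.broadcast_to(context @ W_dec, (T, F))
loss = float(np.mean((x - x_hat) ** 2))
print(loss)
```

In the published simulation, minimizing a reconstruction loss like this over many audio tokens is what drives the model to encode phonotactic regularities implicitly, with the attention weights offering a window into which cues it relies on.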

Tan, F., & Do, Y. (2025). Attention-LSTM autoencoder simulation for phonotactic learning from raw audio input. Linguistics Vanguard. DOI


Welcome to Our New Members!

LDL welcomes our new postgraduate students, Clarissa Ki and Qisheng Liao, and our new Research Assistant Professor, Dr. Yuyan Xue, this semester. They will be involved in several lab projects. We’re excited to have you join our academic community. Let’s embark on this journey of knowledge and discovery together. Welcome!



“Tonal Assignment of Chinese Lettered Words” published in Journal of Chinese Linguistics

We are pleased to announce the publication of a new paper by Zhihao Wang and Youngah Do in the Journal of Chinese Linguistics. The paper, titled “Tonal Assignment of Chinese Lettered Words,” explores the complex patterns of tonal assignment in Chinese lettered words, particularly in Beijing Mandarin.

The study reveals that Chinese lettered words display a clear stress-to-tone match pattern, with additional rules of phonetic contrast maximization and a default rule also playing a role in tonal assignment. The findings suggest that the complex patterns previously reported in studies of ordinary Chinese loanwords are influenced by external factors related to the Chinese writing system.

This research provides valuable insights into the inherent strategies of tonal assignment in the Chinese language and contributes to our understanding of the phonological adaptation of loanwords.

Wang, Z., & Do, Y. (2025). Tonal assignment of Chinese lettered words [Preprint]. Journal of Chinese Linguistics. DOI


Frank Lihui Tan, our Ph.D. student, has been awarded an NSF scholarship to attend the Abstract and Item-Specific Knowledge Across Domains and Frameworks workshop

Congratulations, Frank!

Frank Lihui Tan, our Ph.D. student, has been awarded an NSF scholarship to attend the workshop.

This workshop explores how speakers balance abstract linguistic knowledge, which enables flexible generalization, with item-specific knowledge, which facilitates efficient handling of familiar contexts, and aims to unify insights across domains and methods.

Key Points of the Workshop

  • Dual Knowledge: Speakers use abstract rules and experience-based specific knowledge.
  • Ongoing Debate: How these knowledge types interact and apply in language use is under debate.
  • Recent Advances: New experimental methods and computational models drive progress.
  • Interdisciplinary Focus: Integrates phonology, lexical semantics, syntax, and psycholinguistics using methods like:
    • Experimental linguistics (e.g., wug-tests)
    • Language acquisition
    • Morphological processing
    • Corpus data
    • Computational modeling
  • Workshop Format: Features a student poster session, invited talks, and panels on:
    • Evidence: Data on abstract vs. specific knowledge.
    • Modeling: Computational models of dual knowledge.
    • Learning: Simultaneous acquisition of both knowledge types.
    • Brain: Neural basis of storage and abstraction.
    • Evolution: Influence on language evolution and processing.

Goal of the Workshop

To develop a coherent, evidence-based understanding of abstract and item-specific knowledge in language.


Advancing Inclusivity: Youngah’s Keynote on Hong Kong Sign Language at Media For All Conference 2025

Youngah presents at the Media For All 2025 conference at the University of Hong Kong.

On May 30, 2025, Youngah, our Lab Principal Investigator, delivered a compelling keynote at the Media For All 2025 conference at The University of Hong Kong. Titled “Empowering Cultural Preservation and Inclusivity Through Technology: Innovations in Hong Kong Sign Language”, the address showcased our lab’s pioneering efforts to preserve Hong Kong Sign Language (HKSL) and promote inclusivity for the Deaf community.

Preserving HKSL’s Cultural Heritage

Our research focuses on safeguarding the linguistic and cultural richness of HKSL. Through meticulous documentation and archiving of HKSL signs, narratives, and dialogues, we are building a lasting repository to ensure this vital aspect of Hong Kong’s heritage endures. These efforts provide a foundation for cultural preservation, enabling future generations to engage with and learn from the Deaf community’s unique linguistic identity.

Breakthroughs in Sign Language Technology

Central to our work is an innovative HKSL handshape detection model, which leverages advanced machine learning to enhance the accuracy and speed of sign language recognition. This technology marks a significant leap forward in interpreting HKSL, enabling seamless communication. Key applications include:

  • A comprehensive HKSL curriculum designed for hearing learners, making the language accessible to a broader audience and fostering cross-community understanding.
  • Practical tools, such as real-time sign language interpretation for paramedic services, ensuring effective communication during emergencies, and art exhibition accessibility, enriching cultural participation for Deaf individuals.

Building Bridges Between Communities

Our work goes beyond technology—it’s about building unity. By developing tools that facilitate communication, we aim to create a deeper connection between the Deaf and hearing communities. These efforts promote a society that celebrates diversity, embraces cultural heritage, and ensures inclusivity for all.

Youngah’s keynote resonated with attendees, sparking conversations about the role of technology in social good. The Media For All 2025 conference provided an ideal platform to share our vision, and we’re excited to continue this journey toward a more inclusive future.

Looking Ahead

The advancements shared in the keynote are just the beginning. Our team remains dedicated to pushing the boundaries of HKSL research and its applications. We invite collaborators, community partners, and stakeholders to join us in this mission to preserve HKSL and empower the Deaf community.

For more information about our work or to explore potential partnerships, please contact our lab through the Knowledge Exchange Office at The University of Hong Kong. Together, we can create a more inclusive and culturally rich society.


“Iconic hand gestures from ideophones exhibit stability and emergent phonological properties” published in CogLing

We are pleased to announce the publication of a new paper by Arthur Thompson and Thomas Van Hoey (joint first authors), Aaron Chik, and Youngah Do in the journal Cognitive Linguistics.

The paper, titled “Iconic hand gestures from ideophones exhibit stability and emergent phonological properties: an iterated learning study,” explores the stability and phonological properties of iconic hand gestures associated with ideophones. Ideophones are marked words that depict sensory imagery and are usually considered iconic by native speakers. The study investigates how these gestures are transmitted across generations using a linear iterated learning paradigm.

The findings reveal that despite noise in the visual signal, participants’ hand gestures converged, indicating the emergence of phonological targets. Handshape configurations over time exhibited finger coordination reminiscent of unmarked handshapes observed in phonological inventories of signed languages. Well-replicated gestures were correlated with well-guessed ideophones from a spoken language study, highlighting the complementary nature of the visual and spoken modalities in formulating mental representations.

Thompson, A. L., Van Hoey, T., Chik, A. W. C., & Do, Y. (2025). Iconic hand gestures from ideophones exhibit stability and emergent phonological properties: An iterated learning study. Cognitive Linguistics. DOI