Categories: Lab News, Publication

“Vividness of Mandarin ABB Words”: LangCog

The latest LDL research article, titled “What ratings and corpus data reveal about the vividness of Mandarin ABB words”, has been published in the journal Language and Cognition. The research was conducted by members of our laboratory, Thomas (currently at KU Leuven), Xiaoyu, and Youngah, in collaboration with PAN Tung-le from National Taiwan University.

The journal article.

The goal of this study was to understand the vividness of Mandarin ABB words. ABB words are a type of Mandarin phrasal compound consisting of a prosaic syllable A and a reduplicated BB part, which together yield a vivid expression (e.g., 亮晶晶 liàng-jīngjīng ‘sparkling’).

The researchers collected subjective ratings regarding familiarity, iconicity, imagery/imageability, concreteness, sensory experience rating (SER), valence, and arousal for Mandarin ABB words. They contrasted these ratings with two other sets of prosaic word ratings to understand the distinctive role of variables that characterize ABB words.

The findings revealed that the variable that most consistently characterizes ABB items across these case studies is their high imageability, showing that they are indeed rightfully described as vivid. The study also demonstrated the importance of contrasting rating data with comparable datasets, whether of a different phenomenon or of the same phenomenon compiled in an ontologically different manner.

Van Hoey, T., Yu, X., Pan, T.-L., & Do, Y. (2024). What ratings and corpus data reveal about the vividness of Mandarin ABB words. Language and Cognition, 1–23. DOI

Categories: Knowledge Exchange 2023 (Hong Kong Sign Language), Lab News

LDL Student RAs Foster Cross-Cultural Communication at “Point Line Mean” Exhibition

Student Research Assistants from the University of Hong Kong’s Language Development Lab immerse themselves in the world of Deaf and Hearing artists at the Point Line Mean Exhibition.

As part of the Knowledge Exchange project on Hong Kong Sign Language, our Student Research Assistants (SRAs), Hannah, Kevin, Rachel, and Joanna, serve as docents in the “Point Line Mean” exhibition currently underway at HART Haus in Kennedy Town.

This unique exhibit explores communication and understanding across divides, particularly between the Deaf and hearing worlds. It features the works of six Hong Kong-based artists, including two familiar faces within LDL: KK, who has served as our Hong Kong Sign Language Consultant, and Arthur (何明偉), a postdoctoral researcher at our lab.

Two of the exhibits were even produced within the Department of Linguistics’ Fieldwork Room, highlighting the close collaboration between the artists and the university.

SRAs Become Docents and Bridge the Gap

Hannah, Kevin, Rachel, and Joanna took on the role of docents for the exhibition. To prepare, they familiarized themselves with the artwork and the messages conveyed by the artists. This involved not only understanding the pieces themselves but also learning about Deaf culture and the intricacies of Hong Kong Sign Language.

Equipped with this knowledge, the SRAs were able to effectively communicate with the artists throughout the exhibition, using both spoken and sign languages. This fostered a truly immersive experience for the SRAs, allowing them to become deeply involved with the Deaf community and its artistic expression.

Guiding Visitors Towards a Deeper Understanding

The SRAs’ role extended to guiding visitors through the exhibition. By providing insightful explanations and fostering open discussions, they helped visitors gain a richer understanding of the artwork and the Deaf experience. This valuable contribution no doubt played a significant role in the success of the “Point Line Mean” exhibition.

The LDL is thrilled to have our SRAs play such a vital role in this important project. Their dedication and willingness to learn have not only enhanced their own knowledge and perspectives but have also enriched the experience for visitors to the exhibition.

Point Line Mean

HART Haus G/F (G/F, Cheung Hing Industrial Building, 12P Smithfield Road, Kennedy Town)

Bridge of Signs Documentary: Special Screening

27 April 2024 (Sat)
5PM to 6PM
HART Haus 4/F (4/F, Cheung Hing Industrial Building, 12P Smithfield Road, Kennedy Town)

Categories: Lab News, Publication

“Perceptual and featural measures of Mandarin consonant similarity” published in Data in Brief

Xiaoyu and Youngah’s paper, “Perceptual and featural measures of Mandarin consonant similarity: Confusion matrices and phonological features dataset,” has recently been published in Data in Brief.

The paper presents a comprehensive dataset containing two types of similarity measures for 23 Mandarin consonant phonemes: perceptual and featural measures. The perceptual measures are derived from confusion matrices obtained through native speakers’ identification tasks in quiet and noise-masked conditions. Based on these matrices, specific perceptual measures, such as confusion rate and perceptual distance, are calculated. Additionally, the authors propose a phonological feature system to evaluate the featural differences between each pair of consonants, providing insights into phonological similarity.

The dataset reveals a significant positive correlation between the perceptual and featural measures of similarity. Distance matrices are generated using the perceptual distance data, and a hierarchical cluster dendrogram is plotted using the unweighted pair group method with arithmetic mean (UPGMA). This dendrogram displays five major clusters of consonants.
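For readers who want to work with the dataset, the path from a confusion matrix to a UPGMA dendrogram can be reproduced with standard tools. The sketch below uses invented counts and a generic confusion-to-distance conversion, not necessarily the exact measures defined in the paper, but it illustrates the overall procedure; in scipy, the “average” linkage method corresponds to UPGMA.

```python
# Rough sketch with toy data: from a confusion matrix to a UPGMA dendrogram.
# The distance formula here is a generic choice, not necessarily the paper's.
import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import squareform

consonants = ["p", "t", "k", "m", "n"]            # toy subset of phonemes
# rows = presented consonant, columns = response; counts are invented
confusion = np.array([
    [80,  8,  6,  3,  3],
    [ 7, 78,  9,  2,  4],
    [ 6, 10, 79,  2,  3],
    [ 2,  2,  2, 85,  9],
    [ 3,  3,  2, 10, 82],
], dtype=float)

P = confusion / confusion.sum(axis=1, keepdims=True)   # row-normalised confusion rates
similarity = (P + P.T) / 2                              # symmetrise over presentation direction
distance = 1 - similarity                               # more confusion -> smaller distance
np.fill_diagonal(distance, 0)

Z = linkage(squareform(distance), method="average")     # "average" linkage = UPGMA
dendrogram(Z, labels=consonants)                        # inspect the tree for consonant clusters
plt.show()
```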

This dataset can serve as a valuable reference for future studies seeking quantified perceptual measures of Mandarin consonant similarity. Additionally, it can be beneficial for research exploring consonant similarity in perceptual and phonological domains, as well as investigating the influence of linguistic and extralinguistic factors on consonant perception.

Yu, X., & Do, Y. (2024). Perceptual and featural measures of Mandarin consonant similarity: Confusion matrices and phonological features dataset. Data in Brief, 52, 109868. DOI

Categories: Lab News

Unsupervised learning of phonemes in early language acquisition: Insights from an autoencoder model @ SNU

Youngah presented at the 2023 Linguistics Colloquium organized by Seoul National University.

Infants require two crucial skills to successfully begin language acquisition: (a) the ability to learn fundamental speech sound units, or phonemes, and (b) the capacity to decompose sound sequences into meaningful units. This talk will discuss the effectiveness of an autoencoder model in learning phonemes and phoneme boundaries from unsegmented, non-transcribed wave data, similar to the early stages of infant language acquisition. The experiment was conducted in Mandarin and English, and the results demonstrate that phonemes and their associated features can be learned through repeated projection and reconstruction without prior knowledge of segmentation. The model clusters segments of the same phoneme and projects different phonemes to separate regions in the hidden space. Furthermore, the model successfully decomposes words into phonemes in sequential order, which is a crucial foundation for phonotactic knowledge. However, the model struggles to cluster allophones closely, indicating the boundary between bottom-up and top-down information in phonological learning. This study suggests that fundamental sound knowledge in the early stages of language acquisition can be learned to some extent through unsupervised learning without labeled data or prior knowledge of segmentation, providing valuable insights into early human language acquisition.
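As a rough illustration of the modelling approach described in the abstract, here is a minimal autoencoder sketch. The framework (PyTorch), the frame length, and the bottleneck size are assumptions made for illustration only, not details of the actual model used in the talk.

```python
# Minimal sketch (assumed framework and dimensions): an autoencoder that compresses
# short frames of raw audio into a low-dimensional hidden space and reconstructs them.
# Clusters in the hidden space can then be inspected for phoneme-like structure.
import torch
import torch.nn as nn

FRAME = 400        # hypothetical frame length in samples (e.g. 25 ms at 16 kHz)
HIDDEN = 16        # hypothetical bottleneck size

class FrameAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(FRAME, 128), nn.ReLU(), nn.Linear(128, HIDDEN))
        self.decoder = nn.Sequential(nn.Linear(HIDDEN, 128), nn.ReLU(), nn.Linear(128, FRAME))

    def forward(self, x):
        z = self.encoder(x)          # projection into the hidden space
        return self.decoder(z), z    # reconstruction plus hidden representation

model = FrameAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

frames = torch.randn(32, FRAME)      # stand-in for frames cut from unsegmented wave files
optimizer.zero_grad()
recon, hidden = model(frames)
loss = loss_fn(recon, frames)        # reconstruction error is the only training signal
loss.backward()
optimizer.step()
```

The key point is that repeated projection and reconstruction, driven only by reconstruction error and without any labels or segmentation, can yield hidden representations in which frames of the same phoneme cluster together.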

Do, Y. (2023). Unsupervised learning of phonemes in early language acquisition: Insights from an autoencoder model. 2023 Linguistics Colloquium, Seoul National University.

Categories: Lab News

Summer Meetings Conclude and Bingzi Departs for MIT

The last of the summer research meetings has just concluded. Our team has made significant progress, and we are excited about the potential outcomes of our research.

We want to take this opportunity to express our appreciation to the research interns who have contributed immensely to the success of the projects throughout the summer. Their efforts have been invaluable, and we are proud to have them as part of our team.

We also want to acknowledge the contributions of Bingzi, who will soon be commencing her research journey at MIT. We wish her all the best in her future endeavours.

Thank you, everyone, for your support of our research.

Categories: Lab News, Publication

“Substantive bias and variation in the acquisition of vowel harmony” published in Glossa

Youngah and Tingyu’s paper has recently been published in the journal Glossa. The paper is titled “Substantive bias and variation in the acquisition of vowel harmony.”

The study delves into substantive bias, a phenomenon where learners exhibit a preference for phonetically motivated patterns during language acquisition. The paper provides evidence that variable input, as opposed to categorical input, can activate substantive bias. In the experiment, native Hong Kong Cantonese speakers were randomly assigned to either categorical or variable training conditions for vowel backness harmony or disharmony, or to a no-training control condition. The results reveal that participants in the categorical and control conditions did not show a bias towards either pattern. However, those in the variable conditions demonstrated a preference for vowel harmony. This suggests that input variability can strengthen the effect of substantive bias. This research contributes to our understanding of the role of input variability in phonological learning and the mechanisms involved in acquiring phonetically motivated and unmotivated phonological patterns.

Congratulations to Youngah and Tingyu on their successful publication! The paper is accessible through Glossa under open access.

Huang, T., & Do, Y. (2023). Substantive bias and variation in the acquisition of vowel harmony. Glossa: A Journal of General Linguistics, 8(1), Article 1. DOI

Categories: Lab News

Bridging Corpus and Norm: Mandarin Sensory Adjectival Phrases: ICLC16 @ HHU

Thomas recently presented joint work with Xiaoyu, Youngah, and Tung-le (National Taiwan University) at the 16th International Cognitive Linguistics Conference at Heinrich Heine University Düsseldorf. The study converged rating and corpus measures for ABB words in Chinese through principal component analysis (PCA).

The first page of the presentation slides.
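For readers curious about the analysis step mentioned above, the sketch below shows in general terms how rating variables and corpus measures for a set of words can be standardized and reduced with PCA. All variable names and values here are hypothetical placeholders, not the study’s data.

```python
# Rough sketch (hypothetical variables): combining rating variables and corpus
# measures in one table, then reducing them to principal components.
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
words = [f"word_{i}" for i in range(50)]           # placeholder ABB items
data = pd.DataFrame({
    "imageability": rng.normal(5, 1, 50),          # rating variables (invented)
    "iconicity":    rng.normal(3, 1, 50),
    "valence":      rng.normal(0, 1, 50),
    "log_freq":     rng.normal(2, 0.5, 50),        # corpus measures (invented)
    "dispersion":   rng.normal(0.5, 0.1, 50),
}, index=words)

X = StandardScaler().fit_transform(data)           # put ratings and corpus measures on one scale
pca = PCA(n_components=2)
scores = pca.fit_transform(X)                      # each word's position in the reduced space
print(pca.explained_variance_ratio_)               # variance captured by each component
```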
Categories: Lab News, Publication

“Variation learning in phonology and morphosyntax” published in Cognition

Youngah, Jon, and Samuel’s article, “Variation learning in phonology and morphosyntax”, has been published in the journal Cognition.

Phonological variation is often phonetically grounded, influenced by articulatory or perceptual factors, whereas morphosyntactic variation is not. The researchers aimed to identify whether learning differences exist when children are exposed to phonological or morphosyntactic patterns of equal complexity. Cantonese-speaking children were taught an artificial language involving rounding harmony and gender agreement, with patterns applying variably or categorically.

The results showed that in the categorical learning conditions, participants had comparable rates of harmony and agreement. However, in the variable phonological learning conditions, children’s application of harmony exceeded the rate of exposure in training, suggesting a bias towards phonetically grounded rounding harmony. In the variable morphosyntactic condition, participants applied agreement below the rate of exposure.

These findings reveal a qualitative difference between learning in the two domains, with phonological learning being influenced by substantive grounding, while morphosyntactic learning is not. This research contributes to our understanding of language acquisition in children and may have implications for educational practices and interventions.

The article can be accessed here until 13 September 2023.

Do, Y., Havenhill, J., & Sze, S. S. L. (2023). Variation learning in phonology and morphosyntax. Cognition, 239, 105573. https://doi.org/10.1016/j.cognition.2023.105573

Categories: Lab News

UGC Grants for 2023 approved

The University Grants Committee has approved General Research Fund support this year for the following LDL projects.

Breaking down and rebuilding iconicity

Youngah will lead postdoc researchers and students in this project, which aims to understand the phonological and cognitive mechanisms behind spoken iconicity and its accessibility to speakers of a variety of languages. The team will train a neural network on ideophone corpus data to generate sound-meaning associations and test them in lab-based experiments with human participants.

Sociophonetic variation in Hong Kong and Heritage Cantonese

Jon will lead postdoc researchers and students in this project, which seeks to understand how phonological representation and sound change occur in bilingual communities. In particular, they will look at the sociophonetic variation of Cantonese sibilants among speakers of different linguistic backgrounds.

Categories: Lab News

Learning phonology without phonology: Talk at Todai, 5 Jul 2023

Youngah has been invited to give a talk at the University of Tokyo on 5 July 2023. The talk will discuss her collaborative work with Frank, “Learning phonology without phonology: insights from autoencoder modelling”, exploring the topic of how infants learn phonology without any prior knowledge of it.

The talk will present the results of their autoencoder modelling research. The study suggests that phonemes and distinctive features might be learned from unsegmented, non-transcribed wave data resembling the input available in the early stages of infant language acquisition.

The experiments conducted on Mandarin and English indicate that features could potentially be learned through repeated projection and reconstruction, even without any prior knowledge of segmentation. The model appears to cluster segments of the same phoneme and to separate different phonemes into distinct regions of the hidden space.

This research suggests that sound knowledge might be acquired to a certain extent through unsupervised learning, without the need for labeled data or previous phonological understanding. The findings offer insights into the early stages of human language acquisition and the ability of infants to recognize the sounds of their native language.