{"id":9,"date":"2018-04-20T11:24:42","date_gmt":"2018-04-20T03:24:42","guid":{"rendered":"http:\/\/www.linguistics.hku.hk\/ldlhku\/wp\/?page_id=9"},"modified":"2025-04-08T14:33:31","modified_gmt":"2025-04-08T06:33:31","slug":"projects","status":"publish","type":"page","link":"https:\/\/linguistics.hku.hk\/ldlhku\/projects","title":{"rendered":"Projects"},"content":{"rendered":"\t\t<div data-elementor-type=\"wp-post\" data-elementor-id=\"9\" class=\"elementor elementor-9\">\n\t\t\t\t\t\t<section class=\"elementor-section elementor-top-section elementor-element elementor-element-8d62680 elementor-section-boxed elementor-section-height-default elementor-section-height-default\" data-id=\"8d62680\" data-element_type=\"section\">\n\t\t\t\t\t\t<div class=\"elementor-container elementor-column-gap-default\">\n\t\t\t\t\t<div class=\"elementor-column elementor-col-100 elementor-top-column elementor-element elementor-element-b2bad29\" data-id=\"b2bad29\" data-element_type=\"column\">\n\t\t\t<div class=\"elementor-widget-wrap elementor-element-populated\">\n\t\t\t\t\t\t<div class=\"elementor-element elementor-element-f181702 elementor-widget elementor-widget-heading\" data-id=\"f181702\" data-element_type=\"widget\" data-widget_type=\"heading.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t<h1 class=\"elementor-heading-title elementor-size-default\">Projects<\/h1>\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/section>\n\t\t\t\t<section class=\"elementor-section elementor-top-section elementor-element elementor-element-16d9353 elementor-section-boxed elementor-section-height-default elementor-section-height-default\" data-id=\"16d9353\" data-element_type=\"section\">\n\t\t\t\t\t\t<div class=\"elementor-container elementor-column-gap-default\">\n\t\t\t\t\t<div class=\"elementor-column elementor-col-100 elementor-top-column elementor-element elementor-element-f20bddf\" data-id=\"f20bddf\" 
data-element_type=\"column\">\n\t\t\t<div class=\"elementor-widget-wrap elementor-element-populated\">\n\t\t\t\t\t\t<div class=\"elementor-element elementor-element-cc2d0e5 elementor-tabs-view-vertical elementor-widget elementor-widget-tabs\" data-id=\"cc2d0e5\" data-element_type=\"widget\" data-widget_type=\"tabs.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t<div class=\"elementor-tabs\">\n\t\t\t<div class=\"elementor-tabs-wrapper\" role=\"tablist\" >\n\t\t\t\t\t\t\t\t\t<div id=\"elementor-tab-title-2141\" class=\"elementor-tab-title elementor-tab-desktop-title\" aria-selected=\"true\" data-tab=\"1\" role=\"tab\" tabindex=\"0\" aria-controls=\"elementor-tab-content-2141\" aria-expanded=\"false\">Breaking down and rebuilding iconicity: machine learning verified by human learning<\/div>\n\t\t\t\t\t\t\t\t\t<div id=\"elementor-tab-title-2142\" class=\"elementor-tab-title elementor-tab-desktop-title\" aria-selected=\"false\" data-tab=\"2\" role=\"tab\" tabindex=\"-1\" aria-controls=\"elementor-tab-content-2142\" aria-expanded=\"false\">Exploring the learnability of ideophones through articulatory and manual gestures<\/div>\n\t\t\t\t\t\t\t\t\t<div id=\"elementor-tab-title-2143\" class=\"elementor-tab-title elementor-tab-desktop-title\" aria-selected=\"false\" data-tab=\"3\" role=\"tab\" tabindex=\"-1\" aria-controls=\"elementor-tab-content-2143\" aria-expanded=\"false\"> Learning Biases in L2 Acquisition of Hong Kong Sign Language by Hearing Learners<\/div>\n\t\t\t\t\t\t\t\t\t<div id=\"elementor-tab-title-2144\" class=\"elementor-tab-title elementor-tab-desktop-title\" aria-selected=\"false\" data-tab=\"4\" role=\"tab\" tabindex=\"-1\" aria-controls=\"elementor-tab-content-2144\" aria-expanded=\"false\">A neurolinguistic approach to second language processing of prosody\u2013syntax interfaces<\/div>\n\t\t\t\t\t\t\t\t\t<div id=\"elementor-tab-title-2145\" class=\"elementor-tab-title elementor-tab-desktop-title\" aria-selected=\"false\" 
data-tab=\"5\" role=\"tab\" tabindex=\"-1\" aria-controls=\"elementor-tab-content-2145\" aria-expanded=\"false\">The Sound of Silence: A Journey through Deaf Culture in Hong Kong<\/div>\n\t\t\t\t\t\t\t<\/div>\n\t\t\t<div class=\"elementor-tabs-content-wrapper\" role=\"tablist\" aria-orientation=\"vertical\">\n\t\t\t\t\t\t\t\t\t<div class=\"elementor-tab-title elementor-tab-mobile-title\" aria-selected=\"true\" data-tab=\"1\" role=\"tab\" tabindex=\"0\" aria-controls=\"elementor-tab-content-2141\" aria-expanded=\"false\">Breaking down and rebuilding iconicity: machine learning verified by human learning<\/div>\n\t\t\t\t\t<div id=\"elementor-tab-content-2141\" class=\"elementor-tab-content elementor-clearfix\" data-tab=\"1\" role=\"tabpanel\" aria-labelledby=\"elementor-tab-title-2141\" tabindex=\"0\" hidden=\"false\"><div class=\"row\">\n<h6><strong>Abstract<\/strong><\/h6>\n<p>An English speaker who hears the Cantonese word <em>dang<\/em> would be hard-pressed to guess the correct translation (\u201cchair\u201d) above chance level. Some words, however, are easy to guess, for example ideophones. Ideophones are words that depict sensory imagery and exist in every spoken language. An English speaker who hears the Japanese ideophone <em>kira-kira<\/em> is very likely to guess the correct translation (\u201cflashing\u201d).<\/p>\n\n<p>What are the special properties of ideophones that allow speakers to easily guess their meaning? This is still not well-understood. What we do know is that ideophones rely on iconicity to be meaningful. Iconicity is a connection between form and meaning. Since ideophones are spoken, their \u201cform\u201d is sound. Ideophones essentially \u201csound like\u201d what they mean.<\/p>\n\n<p>What we don\u2019t know is the answer to this question: what is it about <em>kira-kira<\/em> that sounds like \u201cflashing\u201d to native and non-native speakers? 
This question is simple yet speaks to a unifying and fundamental aspect of human cognition: how do we relate sounds to the world? By striving to answer this question, our objectives feed into language acquisition, psychology, and machine learning.<\/p>\n\n<p>The main goal of our project is to identify which sound properties cause ideophones to sound like what they mean. We do this by teaching ideophones from a multilingual database to a neural network. To do this, we train our network on pronunciation (e.g., <em>kira-kira<\/em>) and meaning (e.g., \u201cflashing\u201d) alone, replicating circumstances participants face during guessing tasks. Next, we pinpoint which sounds the neural network relied on to guess meanings more accurately.<\/p>\n\n<p>We then test the psychological reality of what the neural network has learned by first asking it to generate new ideophones, then using these as stimuli in two experiments: (1) a learning study, and (2) a transmission study (a game of telephone), to see how the new ideophones \u201csurvive in the wild\u201d as they are passed from one participant to the next.<\/p>\n\n<p>Our project has two impact pathways: (1) developing an open-access database of ideophones from many languages, labeled with sound-meaning mappings identified by our neural network and verified through experimental evidence, and (2) designing an open-source brain-teaser game that helps improve one\u2019s memory, while allowing us to continue to improve our network. For (1), we convert our neural network\u2019s training set into a searchable website. 
For (2), we harness the sound-meaning mappings pinpointed by our network to design memory tasks shown to improve memory performance.<\/p>\n\n<hr \/>\n<br>\n<h6 class=\"col-xs-3\"><strong>Principal Investigator<\/strong><\/h6>\n<div class=\"col-xs-9\">\n<div><a class=\"authority\" href=\"https:\/\/hub.hku.hk\/cris\/rp\/rp02160\">Dr Do, Youngah\u00a0<i class=\"fa fa-user\"><\/i><\/a>\u00a0\u00a0\u00a0(Principal Investigator (PI))<\/div>\n<\/div>\n<\/div>\n<div class=\"row\">\n<h6 class=\"col-xs-3\"><strong>Co-Investigator(s)<\/strong><\/h6>\n<div class=\"col-xs-9\">\n<div>Dr Van Hoey Thomas Greta R. \u00a0\u00a0(Co-Investigator)<\/div>\n<div><a class=\"authority\" href=\"https:\/\/hub.hku.hk\/cris\/rp\/rp02448\">Dr Coupe Christophe Dominique Michel\u00a0<i class=\"fa fa-user\"><\/i><\/a>\u00a0\u00a0\u00a0(Co-Investigator)<\/div>\n<div>Baayen Harald \u00a0\u00a0(Co-Investigator)<\/div>\n<\/div>\n<\/div>\n<div class=\"row\">\n<h6 class=\"col-xs-3\"><strong>Keywords<\/strong><\/h6>\n<div class=\"col-xs-9\">\n<div>Ideophone , Language learning , Machine learning<\/div>\n<\/div>\n<\/div>\n<div class=\"row\">\n<h6 class=\"col-xs-3\"><strong>Discipline<\/strong><\/h6>\n<div class=\"col-xs-9\">\n<div>Others &#8211; Psychology and Linguistics, Linguistics and Languages<\/div>\n<\/div>\n<\/div><\/div>\n\t\t\t\t\t\t\t\t\t<div class=\"elementor-tab-title elementor-tab-mobile-title\" aria-selected=\"false\" data-tab=\"2\" role=\"tab\" tabindex=\"-1\" aria-controls=\"elementor-tab-content-2142\" aria-expanded=\"false\">Exploring the learnability of ideophones through articulatory and manual gestures<\/div>\n\t\t\t\t\t<div id=\"elementor-tab-content-2142\" class=\"elementor-tab-content elementor-clearfix\" data-tab=\"2\" role=\"tabpanel\" aria-labelledby=\"elementor-tab-title-2142\" tabindex=\"0\" hidden=\"hidden\"><div class=\"row\"><h6><strong>Abstract<\/strong><\/h6><p>Imitation is a core part of learning and expressing language.
In order to understand certain words, we must know what makes them imitative. These &#8216;certain words&#8217; are ideophones. Ideophones exist in all known spoken languages. They are known to be easily understood by non-native speakers due to their imitative nature. Studies show that if, for example, a Dutch speaker hears a Japanese ideophone, even with zero Japanese experience, they can intuit that ideophone&#8217;s meaning. This implies that ideophones tap into a universal cognitive ability that gives sound a meaning under the right communicative circumstances.<\/p><p>The goal of our research is to investigate how ideophones express meaning in terms of a universally accessible ability for all spoken language users: articulatory movement of speech organs. To date, linguistic research has largely ignored ideophones because most meanings cannot be expressed by imitation, e.g., foot, pink, mountain. Ideophones are limited to descriptive meanings like sounds, motions, visuals, touch\/feel, and inner feelings, e.g., plonk, zig-zag, bling-bling.<\/p><p>Despite this, parent-child interactions are full of ideophones, so much so that ideophones have been proposed as a crucial component of language learning. Still, we do not know how ideophones are learnt, nor do we know what makes them easily learnable. What we do know is that ideophones frequently co-occur with what is also largely ignored by traditional linguists: descriptive hand gestures. Some researchers claim that ideophones are incomplete without their co-occurring hand gestures, arguing that ideophones are analogous to descriptive gestures made with the mouth instead of the hands.<\/p><p>Given that movement is imperative for understanding hand gestures, this project hypothesizes that movement of speech organs is key to learning and understanding ideophones. No study has investigated ideophones in terms of articulatory (speech organ) movement or co-speech hand gesture.
The current project seeks to close this gap by being the first to empirically incorporate movement and hand gesture as factors into two ideophone learning studies.<\/p><p>Our first study investigates whether articulatory complexity affects how well non-native speakers learn ideophones without gestures, following a well-established ideophone learning paradigm. Our second study investigates how participants use hand gestures to teach and learn ideophones of varying articulatory complexity in an iterated learning task, a pioneering study for ideophones.<\/p><p>Cumulatively, our project will lead to a deeper understanding of how audio-visual movement can improve language learning and instruction, allowing for impact beyond the realm of research and into the classroom.<\/p><hr \/><h6 class=\"col-xs-3\"><strong>Principal Investigator<\/strong><\/h6><div class=\"col-xs-9\"><div><a class=\"authority\" href=\"https:\/\/hub.hku.hk\/cris\/rp\/rp02160\">Dr Do, Youngah\u00a0<i class=\"fa fa-user\"><\/i><\/a>\u00a0\u00a0\u00a0(Principal Investigator (PI))<\/div><\/div><\/div><div class=\"row\"><h6 class=\"col-xs-3\"><strong>Co-Investigator(s)<\/strong><\/h6><div class=\"col-xs-9\"><div>Dr Dingemanse Mark \u00a0\u00a0(Co-Investigator)<\/div><div>Dr Thompson Arthur Lewis \u00a0\u00a0(Co-Investigator)<\/div><h6><strong>Keywords<\/strong><\/h6><\/div><\/div><div class=\"row\"><div class=\"col-xs-9\"><div>Articulation, Gesture, Iconicity, Ideophone<\/div><\/div><\/div><div class=\"row\"><h6 class=\"col-xs-3\"><strong>Discipline<\/strong><\/h6><div class=\"col-xs-9\"><div>Others &#8211; Psychology and Linguistics, Linguistics and Languages<\/div><\/div><\/div><div class=\"row\"><h6 class=\"col-xs-3\"><strong>Objectives<\/strong><\/h6><div class=\"col-xs-9\"><ol><li>Create the first open-access, cross-linguistic database of ideophones<\/li><li>Implement both a theoretically-rooted and a physiologically-rooted approach to quantifying the degree of articulatory movement in ideophones
across multiple languages<\/li><li>Demonstrate that the degree of articulatory movement in an ideophone has a direct effect on its learnability<\/li><li>Establish the first baseline as to how co-speech depictive hand gestures play a role in ideophone learning vs. arbitrary word learning<\/li><li>Create the first open-access gesture and speech database for future multimodal and pragmatics-based research<\/li><\/ol><\/div><\/div><\/div>\n\t\t\t\t\t\t\t\t\t<div class=\"elementor-tab-title elementor-tab-mobile-title\" aria-selected=\"false\" data-tab=\"3\" role=\"tab\" tabindex=\"-1\" aria-controls=\"elementor-tab-content-2143\" aria-expanded=\"false\"> Learning Biases in L2 Acquisition of Hong Kong Sign Language by Hearing Learners<\/div>\n\t\t\t\t\t<div id=\"elementor-tab-content-2143\" class=\"elementor-tab-content elementor-clearfix\" data-tab=\"3\" role=\"tabpanel\" aria-labelledby=\"elementor-tab-title-2143\" tabindex=\"0\" hidden=\"hidden\"><div class=\"row\">\n<h6><strong>Abstract<\/strong><\/h6>\n\n<p>When hearing individuals learn a second language, they rely on implicit knowledge from their first language in terms of sounds, words, grammar, and so on. But what happens when this knowledge does not apply? This is precisely the situation hearing individuals encounter when learning a sign language as a second language. Currently, there is no consensus on how people whose primary linguistic experience is with spoken language acquire a second language that is not spoken but signed. The main goal of our project is to understand how hearing individuals acquire Hong Kong Sign Language (HKSL) as a second language.<\/p>\n\n<p>Specifically, we focus on how learning biases affect the learning of HKSL. One example of a learning bias is the structural bias, whereby learners prefer phonological structures involving simpler featural specifications over complex ones.
Learning biases have been studied extensively in terms of how they affect the learning of spoken languages, but how they affect the learning of signed languages remains unexplored.<\/p>\n\n<p>We propose a longitudinal study to uncover how learning biases affect hearing individuals\u2019 acquisition of HKSL as a second language. This longitudinal study essentially involves video-recording, from multiple angles, one-on-one immersion lessons between Deaf HKSL instructors and hearing native Cantonese participants. The instructors and students will meet twice a week for 12 weeks and follow a curriculum set by the Professional Sign Language Training Centre (\u9999\u6e2f\u624b\u8a9e\u5c08\u696d\u57f9\u8a13\u4e2d\u5fc3). A longitudinal study in this closely documented format, for hearing learners with zero knowledge of signed languages, has never been done before.<\/p>\n\n<p>Footage will be coded for phonological contrasts along with other factors, such as handshape complexity, and sign errors made by learners. Our database will allow for detailed error analysis to assess how L2 learning of HKSL is affected by learning biases. Also, our database will be open access so that interested researchers can explore how sign language pedagogy works in real time and\/or longitudinally.<\/p>\n\n<p>Our project has two impact pathways: (1) contributing to the development of an automated translator of HKSL and (2) sign language pedagogy for hearers. For (1), we employ machine learning techniques to detect and analyze errors, which will serve as training data for a sign language translator model.
For (2), we home in on common errors, chart their progression over time, and propose corrective strategies for instructors to implement and raise learners\u2019 awareness of errors.<\/p>\n\n<hr>\n<br>\n\n<h6 class=\"col-xs-3\"><strong>Principal Investigator<\/strong><\/h6>\n<div class=\"col-xs-9\">\n<div><a class=\"authority\" href=\"https:\/\/hub.hku.hk\/cris\/rp\/rp02160\">Dr Do, Youngah\u00a0<i class=\"fa fa-user\"><\/i><\/a>\u00a0\u00a0\u00a0(Principal Investigator (PI))<\/div>\n<\/div>\n<\/div>\n<div class=\"row\">\n<h6 class=\"col-xs-3\"><strong>Co-Investigator(s)<\/strong><\/h6>\n<div class=\"col-xs-9\">\n<div>Emmorey Karen \u00a0\u00a0(Co-Investigator)<\/div>\n<div>Sehyr Zed Sevcikova \u00a0\u00a0(Co-Investigator)<\/div>\n<div>Mr Getzie Gabriel Paul \u00a0\u00a0(Co-Investigator)<\/div>\n<\/div>\n<\/div>\n<div class=\"row\"><\/div>\n<div class=\"row\">\n<div class=\"col-xs-9\">\n<h6><strong>Keywords<\/strong><\/h6>\n<\/div>\n<\/div>\n<div class=\"row\">\n<div class=\"col-xs-9\">\n<div>Sign language , Language acquisition , Learning biases , Phonology<\/div>\n<\/div>\n<\/div>\n<div class=\"row\">\n<h6 class=\"col-xs-3\"><strong>Discipline<\/strong><\/h6>\n<div class=\"col-xs-9\">\n<div>Others &#8211; Psychology and Linguistics, Linguistics and Languages<\/div>\n<\/div>\n<\/div><\/div>\n\t\t\t\t\t\t\t\t\t<div class=\"elementor-tab-title elementor-tab-mobile-title\" aria-selected=\"false\" data-tab=\"4\" role=\"tab\" tabindex=\"-1\" aria-controls=\"elementor-tab-content-2144\" aria-expanded=\"false\">A neurolinguistic approach to second language processing of prosody\u2013syntax interfaces<\/div>\n\t\t\t\t\t<div id=\"elementor-tab-content-2144\" class=\"elementor-tab-content elementor-clearfix\" data-tab=\"4\" role=\"tabpanel\" aria-labelledby=\"elementor-tab-title-2144\" tabindex=\"0\" hidden=\"hidden\"><h6><strong>Abstract<\/strong><\/h6>\n<p>This research explores one of the most understudied research topics in the field of second language (L2) acquisition
and processing, namely, the interface between prosody and syntax. Specifically, we first examine whether L2 learners can successfully attain native-like L2 prosody when first language (L1) and L2 prosody fundamentally differ. We further investigate whether successful learners of L2 prosody can utilize prosodic information to facilitate syntactic processing (and thus semantic processing as well) during real-time sentence comprehension.<\/p>\n\n<p>The current research concerns Cantonese-speaking learners of English, whose L1 and L2 show marked differences not only in prosody per se but also in its interaction with syntax. First, these two languages differ in how prosodic boundaries are formed: Stress plays a major role in prosodic boundary formation in English, but not in Cantonese. Second, they differ in how they prosodically signal major syntactic boundaries (i.e., boundaries of syntactic phrases such as noun and verb phrases): English places phrasal stress before syntactic boundaries, while Cantonese does not, relying on other cues such as pause particles.<\/p>\n\n<p>These marked prosodic differences between the two languages allow us to explore how successfully L2 learners can overcome dramatic L1\u2013L2 differences in prosody by examining how sensitive Cantonese-speaking learners of English are to English prosodic boundaries and how effectively they utilize phrasal stress for syntactic processing.<\/p>\n\n<p>The present study employs electroencephalography (EEG), a non-invasive neuroimaging technique that has been found highly effective in exploring the fine-grained time courses of cognitive subprocesses underlying language processing. We utilize the two most widely used EEG data analysis techniques, namely, event-related potential (ERP) and time-frequency (TF) analysis. In our ERP analysis, we examine the Closure Positive Shift (CPS) ERP component to test our L2 learners\u2019 sensitivity to prosodic boundaries in English (i.e., phrasal stress).
In our TF analysis, we focus on increases in power in the low beta and gamma bands of our L2 learners\u2019 EEG waveforms to examine how their L2 sentence comprehension (i.e., syntactic and semantic processing) is facilitated by prosodic information.<\/p>\n\n<p>The findings of this research will improve our understanding of the nature of L2 prosody acquisition and processing, as well as the interfaces between different subdomains of language in L2. Since L2 prosody has largely been overlooked by the L2 research community, not to mention its interface with other subdomains of language, this research will make novel and rare contributions to theories of L2 processing and acquisition.<\/p>\n\n\n<hr \/>\n\n<div class=\"row\">\n<div><\/div>\n<h6 class=\"col-xs-3\"><strong>Project Title<\/strong><\/h6>\n<div class=\"col-xs-9\">\n<div>A neurolinguistic approach to second language processing of prosody\u2013syntax interfaces<\/div>\n<\/div>\n<\/div>\n<div class=\"row\">\n<h6 class=\"col-xs-3\"><strong>Principal Investigator<\/strong><\/h6>\n<div class=\"col-xs-9\">\n<div><a class=\"authority\" href=\"https:\/\/hub.hku.hk\/cris\/rp\/rp02641\">Dr Song, Yoonsang\u00a0<i class=\"fa fa-user\"><\/i><\/a>\u00a0\u00a0\u00a0(Principal Investigator (PI))<\/div>\n<\/div>\n<\/div>\n<div class=\"row\">\n<h6 class=\"col-xs-3\"><strong>Co-Investigator(s)<\/strong><\/h6>\n<div class=\"col-xs-9\">\n<div><a class=\"authority\" href=\"https:\/\/hub.hku.hk\/cris\/rp\/rp02160\">Dr Do Youngah\u00a0<i class=\"fa fa-user\"><\/i><\/a>\u00a0\u00a0\u00a0(Co-Investigator)<\/div>\n<div><a class=\"authority\" href=\"https:\/\/hub.hku.hk\/cris\/rp\/rp02315\">Dr Ouyang Guang\u00a0<i class=\"fa fa-user\"><\/i><\/a>\u00a0\u00a0\u00a0(Co-Investigator)<\/div>\n<div>Jiang Nan \u00a0\u00a0(Co-Investigator)<\/div>\n<\/div>\n<\/div>\n<div class=\"row\">\n<h6 class=\"col-xs-3\"><strong>Keywords<\/strong><\/h6>\n<div class=\"col-xs-9\">\n<div>second language processing, L2 prosody acquisition, prosody-syntax 
interface, sentence processing, EEG<\/div>\n<\/div>\n<\/div>\n<div class=\"row\">\n<h6 class=\"col-xs-3\"><strong>Discipline<\/strong><\/h6>\n<div class=\"col-xs-9\">\n<div>Psycholinguistics<\/div>\n<\/div>\n<\/div><\/div>\n\t\t\t\t\t\t\t\t\t<div class=\"elementor-tab-title elementor-tab-mobile-title\" aria-selected=\"false\" data-tab=\"5\" role=\"tab\" tabindex=\"-1\" aria-controls=\"elementor-tab-content-2145\" aria-expanded=\"false\">The Sound of Silence: A Journey through Deaf Culture in Hong Kong<\/div>\n\t\t\t\t\t<div id=\"elementor-tab-content-2145\" class=\"elementor-tab-content elementor-clearfix\" data-tab=\"5\" role=\"tabpanel\" aria-labelledby=\"elementor-tab-title-2145\" tabindex=\"0\" hidden=\"hidden\"><p>Hong Kong Sign Language (HKSL) faces endangerment. Our project tackles this by working with the Deaf community to document HKSL, explore its signs, and empower Deaf culture. Join us in preserving HKSL and building bridges between Deaf and hearing communities.<\/p><p><a href=\"https:\/\/linguistics.hku.hk\/ldlhku\/hong-kong-sign-language\">Read more&#8230;<\/a><\/p><\/div>\n\t\t\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/section>\n\t\t\t\t<\/div>\n\t\t","protected":false},"excerpt":{"rendered":"<p>Projects Breaking down and rebuilding iconicity: machine learning verified by human learning Exploring the learnability of ideophones through articulatory and manual gestures Learning Biases in L2 Acquisition of Hong Kong Sign Language by Hearing Learners A neurolinguistic approach to second language processing of prosody\u2013syntax interfaces The Sound of Silence: A Journey through Deaf Culture in 
[&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"elementor_header_footer","meta":[],"_links":{"self":[{"href":"https:\/\/linguistics.hku.hk\/ldlhku\/wp-json\/wp\/v2\/pages\/9"}],"collection":[{"href":"https:\/\/linguistics.hku.hk\/ldlhku\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/linguistics.hku.hk\/ldlhku\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/linguistics.hku.hk\/ldlhku\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/linguistics.hku.hk\/ldlhku\/wp-json\/wp\/v2\/comments?post=9"}],"version-history":[{"count":78,"href":"https:\/\/linguistics.hku.hk\/ldlhku\/wp-json\/wp\/v2\/pages\/9\/revisions"}],"predecessor-version":[{"id":3430,"href":"https:\/\/linguistics.hku.hk\/ldlhku\/wp-json\/wp\/v2\/pages\/9\/revisions\/3430"}],"wp:attachment":[{"href":"https:\/\/linguistics.hku.hk\/ldlhku\/wp-json\/wp\/v2\/media?parent=9"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}