An English speaker who hears the Cantonese word dang would be hard-pressed to guess its correct translation (“chair”) above chance level. Some words, however, are easy to guess; ideophones are a prime example. Ideophones are words that depict sensory imagery and exist in every spoken language. An English speaker who hears the Japanese ideophone kira-kira is very likely to guess the correct translation (“flashing”).
What are the special properties of ideophones that allow speakers to easily guess their meaning? This is still not well understood. What we do know is that ideophones rely on iconicity, a connection between form and meaning. Since ideophones are spoken, their “form” is sound: ideophones essentially “sound like” what they mean.
What we don’t know is the answer to this question: what is it about kira-kira that sounds like “flashing” to native and non-native speakers alike? This question is simple yet speaks to a unifying and fundamental aspect of human cognition: how do we relate sounds to the world? By striving to answer it, we aim to contribute to research on language acquisition, psychology, and machine learning.
The main goal of our project is to identify which sound properties cause ideophones to sound like what they mean. To do this, we teach a neural network ideophones drawn from a multilingual database, training it on pronunciation (e.g., kira-kira) and meaning (e.g., “flashing”) alone, replicating the circumstances participants face during guessing tasks. Next, we pinpoint which sounds the neural network relied on to guess meanings more accurately.
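As a rough illustration of this setup, the sketch below trains a small feed-forward network on romanized forms and coarse meaning labels, then uses permutation importance to rank the sound features the network relies on. The toy data, character n-gram features, and model choice are illustrative stand-ins, not the project’s actual database or architecture.

```python
# Minimal sketch: form-to-meaning classification plus feature attribution.
# All data below are toy examples; the real training set comes from a
# multilingual ideophone database.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.inspection import permutation_importance
from sklearn.neural_network import MLPClassifier

# Toy (romanized form, meaning category) pairs.
pairs = [
    ("kirakira", "light"), ("pikapika", "light"), ("giragira", "light"),
    ("gorogoro", "motion"), ("kurukuru", "motion"), ("yurayura", "motion"),
    ("zarazara", "texture"), ("subesube", "texture"), ("fuwafuwa", "texture"),
]
forms, meanings = zip(*pairs)
y = np.array(meanings)

# Represent each form by its character unigrams and bigrams,
# a rough stand-in for its constituent sounds.
vectorizer = CountVectorizer(analyzer="char", ngram_range=(1, 2))
X = vectorizer.fit_transform(forms).toarray()

# A small feed-forward network trained on form alone, mirroring a guessing
# task where only pronunciation and meaning are available.
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X, y)

# Permutation importance: which sound features does the network rely on
# to guess meanings accurately?
result = permutation_importance(clf, X, y, n_repeats=20, random_state=0)
ranked = sorted(
    zip(vectorizer.get_feature_names_out(), result.importances_mean),
    key=lambda pair: -pair[1],
)
for ngram, score in ranked[:5]:
    print(f"{ngram!r}: {score:.3f}")
```

The project itself would presumably operate over richer phonological representations than raw character n-grams; the sketch only illustrates the attribution step.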
We then test the psychological reality of what the neural network has learned by first asking it to generate new ideophones, then using these as stimuli in two experiments: (1) a learning study, and (2) a transmission study (a game of telephone), to see how the new ideophones “survive in the wild” as they are passed from one participant to the next.
Our project has two impact pathways: (1) developing an open-access database of ideophones from many languages, labeled with sound-meaning mappings identified by our neural network and verified through experimental evidence, and (2) designing an open-source brain-teaser game that helps improve players’ memory while allowing us to continue refining our network. For (1), we convert our neural network’s training set into a searchable website. For (2), we harness the sound-meaning mappings pinpointed by our network to design tasks shown to improve memory performance.
Imitation is a core part of learning and expressing language. To understand one class of words in particular, ideophones, we must know what makes them imitative. Ideophones exist in all known spoken languages and are easily understood by non-native speakers because of their imitative nature. Studies show that if, for example, a Dutch speaker hears a Japanese ideophone, even with zero Japanese experience, they can intuit that ideophone’s meaning. This implies that ideophones tap into a universal cognitive ability that gives sound a meaning under the right communicative circumstances.
The goal of our research is to investigate how ideophones express meaning in terms of an ability universally accessible to all spoken language users: articulatory movement of the speech organs. To date, linguistic research has largely ignored ideophones because most meanings cannot be expressed by imitation (e.g., foot, pink, mountain). Ideophones are limited to descriptive meanings such as sounds, motions, visuals, touch/feel, and inner feelings (e.g., plonk, zig-zag, bling-bling).
Despite this, parent-child interactions are full of ideophones, so much so that ideophones have been proposed as a crucial component of language learning. Still, we do not know how ideophones are learnt, nor do we know what makes them easily learnable. What we do know is that ideophones frequently co-occur with something else largely ignored by traditional linguists: descriptive hand gestures. Some researchers claim that ideophones are incomplete without their co-occurring hand gestures, arguing that ideophones are analogous to descriptive gestures made with the mouth instead of the hands.
Given that movement is imperative for understanding hand gestures, this project hypothesizes that movement of speech organs is key to learning and understanding ideophones. No study has investigated ideophones in terms of articulatory (speech organ) movement or co-speech hand gesture. The current project seeks to close this gap by being the first to empirically incorporate movement and hand gesture as factors into two ideophone learning studies.
Our first study investigates whether articulatory complexity affects how well non-native speakers learn ideophones without gestures, following a well-established ideophone learning paradigm. Our second study investigates how participants use hand gestures to teach and learn ideophones of varying articulatory complexity in an iterated learning task, the first study of its kind for ideophones.
Cumulatively, our project will lead to a deeper understanding of how audio-visual movement can improve language learning and instruction, allowing for impact beyond the realm of research and into the classroom.
When hearing individuals learn a second language, they rely on implicit knowledge from their first language in terms of sounds, words, grammar, and so on. But what happens when this knowledge does not apply? This is precisely the situation hearing individuals encounter when learning a sign language as a second language. Currently, there is no consensus on how people whose primary linguistic experience is with spoken language acquire a second language that is not spoken but signed. The main goal of our project is to understand how hearing individuals acquire Hong Kong Sign Language (HKSL) as a second language.
Specifically, we focus on how learning biases affect the learning of HKSL. One example of a learning bias is the structural bias, whereby learners prefer phonological structures involving simpler featural specifications over complex ones. Learning biases have been studied extensively in terms of how they affect the learning of spoken languages, but how they affect the learning of signed languages remains unexplored.
We propose a longitudinal study to uncover how learning biases affect hearing individuals’ acquisition of HKSL as a second language. The study consists of video-recording, from multiple angles, one-on-one immersion lessons between Deaf HKSL instructors and hearing native Cantonese participants. The instructors and students will meet twice a week for 12 weeks and follow a curriculum set by the Professional Sign Language Training Centre (香港手語專業培訓中心). A longitudinal study in this closely documented format, with hearing learners who have zero prior knowledge of signed languages, has never been done before.
Footage will be coded for phonological contrasts along with other factors, such as handshape complexity, as well as for sign errors made by learners. Our database will allow for detailed error analysis to assess how L2 learning of HKSL is affected by learning biases. The database will also be open access so that interested researchers can explore how sign language pedagogy works in real time and/or longitudinally.
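To illustrate the kind of structured coding and querying such a database enables, the sketch below stores annotated sign tokens and tabulates error rates by handshape complexity over time. The field names and toy rows are assumptions for illustration, not the project’s actual annotation scheme.

```python
# Minimal sketch of a coded sign token and a first-pass error analysis.
# Field names and rows are hypothetical placeholders.
from dataclasses import dataclass

import pandas as pd


@dataclass
class SignToken:
    learner: str                # anonymized participant ID
    week: int                   # week within the 12-week curriculum
    target_sign: str            # gloss of the sign being taught
    handshape_complexity: str   # e.g. "simple" vs. "complex"
    error_type: str             # e.g. "handshape", "location", "movement", "none"


tokens = [
    SignToken("P01", 1, "MOTHER", "simple", "none"),
    SignToken("P01", 1, "TEACHER", "complex", "handshape"),
    SignToken("P01", 6, "TEACHER", "complex", "none"),
    SignToken("P02", 1, "TEACHER", "complex", "movement"),
]

df = pd.DataFrame(t.__dict__ for t in tokens)

# Error rate by handshape complexity and week: a first pass at asking whether
# complex handshapes are harder early on and whether accuracy improves over time.
df["is_error"] = df["error_type"] != "none"
print(df.groupby(["handshape_complexity", "week"])["is_error"].mean())
```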
Our project has two impact pathways: (1) contributing to the development of an automated translator of HKSL and (2) sign language pedagogy for hearers. For (1), we employ machine learning techniques to detect and analyze errors, which will serve as training data for a sign language translator model. For (2), we zero in on common errors, chart their progression over time, and propose corrective strategies for instructors to implement and raise learners’ awareness of errors.
This research explores one of the most understudied research topics in the field of second language (L2) acquisition and processing, namely, the interface between prosody and syntax. Specifically, we first examine whether L2 learners can successfully attain native-like L2 prosody when first language (L1) and L2 prosody fundamentally differ. We further investigate whether successful learners of L2 prosody can utilize prosodic information to facilitate syntactic processing (and thus semantic processing as well) during real-time sentence comprehension.
The current research concerns Cantonese-speaking learners of English, whose L1 and L2 show marked differences not only in prosody per se but also in its interaction with syntax. First, these two languages differ in how prosodic boundaries are formed: stress plays a major role in prosodic boundary formation in English, but not in Cantonese. Second, they differ in how they prosodically signal major syntactic boundaries (i.e., boundaries of syntactic phrases such as noun and verb phrases): English places phrasal stress before syntactic boundaries, while Cantonese does not, relying on other cues such as pause particles.
These marked prosodic differences between the two languages allow us to explore how successfully L2 learners can overcome dramatic L1–L2 differences in prosody by examining how sensitive Cantonese-speaking learners of English are to English prosodic boundaries and how effectively they utilize phrasal stress for syntactic processing.
The present study employs electroencephalography (EEG), a non-invasive neuroimaging technique that has been found highly effective in exploring the fine-grained time courses of cognitive subprocesses underlying language processing. We utilize the two most widely used EEG data analysis techniques, namely, event-related potential (ERP) and time-frequency (TF) analysis. In our ERP analysis, we examine the Closure Positive Shift (CPS) ERP component to test our L2 learners’ sensitivity to prosodic boundaries in English (i.e., phrasal stress). In our TF analysis, we focus on increases in power in the low beta and gamma bands of our L2 learners’ EEG waveforms to examine how their L2 sentence comprehension (i.e., syntactic and semantic processing) is facilitated by prosodic information.
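The sketch below shows what these two analyses might look like in MNE-Python, assuming epochs already time-locked to the boundary position. The file name, condition labels, channel picks, time windows, and frequency bands are illustrative placeholders, not the study’s final parameters.

```python
# Minimal sketch of the planned ERP (CPS) and time-frequency analyses.
import numpy as np
import mne

# Hypothetical epochs time-locked to the phrasal-stress / boundary position,
# with conditions labelled "boundary" and "no_boundary".
epochs = mne.read_epochs("learner_epochs-epo.fif")

# --- ERP analysis: Closure Positive Shift (CPS) ---
# Average per condition, then compare mean amplitude at centro-parietal sites
# in a post-boundary window where the CPS is typically observed.
ev_boundary = epochs["boundary"].average()
ev_control = epochs["no_boundary"].average()
picks = ["Cz", "CPz", "Pz"]        # illustrative channel selection
tmin, tmax = 0.4, 0.8              # illustrative CPS window (seconds)
cps_diff = (ev_boundary.copy().pick(picks).crop(tmin, tmax).data.mean()
            - ev_control.copy().pick(picks).crop(tmin, tmax).data.mean())
print(f"CPS amplitude difference (boundary - control): {cps_diff:.2e} V")

# --- Time-frequency analysis: low beta and gamma power ---
freqs = np.arange(4.0, 46.0, 2.0)
power = mne.time_frequency.tfr_morlet(
    epochs["boundary"], freqs=freqs, n_cycles=freqs / 2.0,
    return_itc=False, average=True)
power.apply_baseline(baseline=(-0.3, 0.0), mode="percent")
for band, (lo, hi) in {"low beta": (13, 20), "gamma": (30, 45)}.items():
    idx = (freqs >= lo) & (freqs <= hi)
    print(f"Mean {band} power change: {power.data[:, idx, :].mean():.3f}")
```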
The findings of this research will improve our understanding of the nature of L2 prosody acquisition and processing, as well as the interfaces between different subdomains of language in L2. Since L2 prosody has largely been overlooked by the L2 research community, not to mention its interface with other subdomains of language, this research will make novel and rare contributions to theories of L2 processing and acquisition.
This project will investigate how native speakers of English and Mandarin produce and perceive the vowels of Cantonese. Previous work shows that perceiving and producing nonnative vowel contrasts can be challenging for second language learners, with the degree of difficulty depending on the similarity to vowels from the speaker’s native language. It remains unclear, however, whether nonnative speakers rely on acoustic similarity or articulatory similarity, with competing theories arguing for each. Notably, few studies have considered nonnative production of vowels in terms of their underlying articulatory gestures, such as lip rounding and tongue position, instead relying on acoustic measurements, which can be ambiguous.
The study focuses on the Cantonese vowel sounds /i y u/ (as in si1 絲 ‘silk’, syu1 書 ‘book’, fu3 褲 ‘pants’), and /ɛ œ ɔ/ (as in se1 些 ‘some’, hoe1 靴 ‘boot’, so1 梳 ‘comb’). The vowel pairs /i u/ and /ɛ ɔ/ are distinct in both tongue position and lip rounding, with similar vowels found in all three languages. On the other hand, /y/ and /œ/ bear similarity to both /i ɛ/ and /u ɔ/; they are produced with fronted tongue positions like /i ɛ/ and rounded lips like /u ɔ/. Both Cantonese and Mandarin contain /y/, while /œ/ exists in Cantonese, but not Mandarin. English contains neither /y/ nor /œ/, but these vowels are acoustically similar to English vowels in some contexts.
Participants for this study include native English and Mandarin speakers who are advanced learners of Cantonese, as well as those with no prior Cantonese experience. A perception experiment will test participants’ ability to discriminate Cantonese vowel pairs (such as /i/-/y/ and /y/-/u/), and examine how they categorize Cantonese vowels relative to their native and nonnative vowel inventories.
The production experiment will collect acoustic speech recordings, ultrasound images of tongue position, and video of lip rounding. Data will be analyzed to determine whether nonnative speakers more closely match native Cantonese vowels in terms of acoustics or articulation, and whether their production reflects articulatory configurations and timings that are present in their native language.
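As an example of the acoustic side of this comparison, the sketch below uses the Praat-based parselmouth library to extract formant values at a vowel midpoint, one common way of quantifying vowel quality and rounding acoustically. The file name and segmentation times are hypothetical.

```python
# Minimal sketch: formant measurement at the vowel midpoint with parselmouth.
import parselmouth

snd = parselmouth.Sound("speaker01_syu1.wav")   # hypothetical recording of syu1
vowel_start, vowel_end = 0.12, 0.35             # e.g. from a hand-corrected TextGrid
midpoint = (vowel_start + vowel_end) / 2

# F1/F2 index tongue height and backness, while lowered F2/F3 accompany lip
# rounding; learner values can then be compared against native Cantonese /y/.
formants = snd.to_formant_burg(maximum_formant=5500)
f1, f2, f3 = (formants.get_value_at_time(i, midpoint) for i in (1, 2, 3))
print(f"F1={f1:.0f} Hz, F2={f2:.0f} Hz, F3={f3:.0f} Hz")
```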
Results of this project will be used to test theories of phonology and second language acquisition, and provide essential articulatory data that is especially lacking for Cantonese and Mandarin. Findings will pave the way for future studies investigating the use of ultrasound tongue imaging as a means of teaching new sounds to second language learners.
All languages exhibit variation, with multiple ways of saying the same thing (e.g., walking vs. walkin’). No two people speak exactly the same way, nor does one person use the same speech patterns in all contexts. This type of variation is far from random, and does not reflect poor linguistic command or laziness. Rather, speakers make productive use of variation to convey social information, construct personal identities, and navigate and shape their societies. Research on sociolinguistic variation has predominantly been carried out in monolingual, English-speaking communities. Yet the majority of the world’s population is multilingual, changing not only their speech style within one language, but also switching between languages as appropriate to various contexts. Determining how multilingual speakers perceive and make use of variable linguistic forms is essential for understanding how speech sounds take on social meaning and how they are organized in the mind.
This project will investigate sociolinguistic variation in the multilingual environment of Hong Kong. The first study will collect spontaneous conversational speech from Cantonese speakers of different ages and backgrounds. The sibilant sounds “s” (as in Sai Kung) and “ts/ch” (as in Tsuen Wan or Wan Chai) have recently experienced change in their pronunciation, leading to a partial restructuring of the Cantonese sound system. This study will compare how sibilant pronunciation differs according to age, gender, socioeconomic status, and English proficiency, as well as how these sounds are produced under various topics and speech styles.
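One standard acoustic measure for sibilants is spectral centre of gravity, which tends to shift with changes in place of articulation. The sketch below shows how such a measurement could be taken with the Praat-based parselmouth library; the file name and interval times are hypothetical placeholders.

```python
# Minimal sketch: spectral centre of gravity for an extracted sibilant interval.
import parselmouth

snd = parselmouth.Sound("speaker12_saikung.wav")           # hypothetical token
sibilant = snd.extract_part(from_time=0.05, to_time=0.18)  # "s" interval from annotation
spectrum = sibilant.to_spectrum()
cog = spectrum.get_center_of_gravity(power=2.0)
print(f"Sibilant centre of gravity: {cog:.0f} Hz")
```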
The second study will use ultrasound tongue imaging to observe the tongue shapes used in sibilant production. Participants will recite words in Cantonese and English, to test how speakers produce sounds that are similar in their first and second language. In addition to Cantonese-English bilingual Hongkongers, this study will include heritage speakers of Cantonese in Canada. These individuals have spoken Cantonese with their families from an early age, but have high English exposure and proficiency through living in an English-dominant community.
The third study will experimentally test which social meanings are associated with Cantonese sibilant variants. Listeners will provide qualitative descriptions and quantitative ratings on a range of social scales, to examine how they evaluate speakers who use specific pronunciations.
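One plausible way to analyse the quantitative ratings is a mixed-effects model with random intercepts for listeners, so that evaluations can be compared across sibilant variants while accounting for individual rating tendencies. The sketch below is a minimal example; the CSV file and column names are assumed for illustration.

```python
# Minimal sketch: mixed-effects analysis of listener ratings by sibilant variant.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per rating, with columns
# listener, speaker, variant (e.g. conservative vs. shifted sibilant),
# scale (e.g. "professional", "friendly"), and rating (1-7).
ratings = pd.read_csv("sibilant_ratings.csv")

model = smf.mixedlm("rating ~ variant * scale", ratings,
                    groups=ratings["listener"])
result = model.fit()
print(result.summary())
```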
Together, these experiments will provide deeper understanding of how speakers use and understand linguistic variants across multiple languages, how first and second languages exert both social and linguistic influence on one another, and how language contact contributes to variation and change.
Hong Kong Sign Language (HKSL) faces endangerment. Our project tackles this by working with the Deaf community to document HKSL, explore its signs, and empower Deaf culture. Join us in preserving HKSL and building bridges between Deaf and hearing communities.