An English speaker who hears the Cantonese word dang would be hard-pressed to guess the correct translation (“chair”) above chance level. Some words, however, are easy to guess; ideophones are a prime example. Ideophones are words that depict sensory imagery and exist in every spoken language. An English speaker who hears the Japanese ideophone kira-kira is very likely to guess the correct translation (“flashing”).
What are the special properties of ideophones that allow speakers to guess their meaning so easily? This is still not well understood. What we do know is that ideophones rely on iconicity to be meaningful. Iconicity is a connection between form and meaning. Since ideophones are spoken, their “form” is sound: ideophones essentially “sound like” what they mean.
What we don’t know is the answer to this question: what is it about kira-kira that sounds like “flashing” to native and non-native speakers? This question is simple, yet it speaks to a unifying and fundamental aspect of human cognition: how do we relate sounds to the world? In striving to answer it, our project feeds into language acquisition, psychology, and machine learning.
The main goal of our project is to identify which sound properties cause ideophones to sound like what they mean. To do this, we teach a neural network ideophones from a multilingual database, training it on pronunciation (e.g., kira-kira) and meaning (e.g., “flashing”) alone, replicating the circumstances participants face during guessing tasks. Next, we pinpoint which sounds the network relied on to guess meanings accurately.
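As a rough illustration of this setup (purely a sketch, using a hypothetical toy dataset and model rather than our actual database or architecture), one could train a small character-level classifier that maps romanized ideophones to meaning labels, then use gradient-based saliency to see which segments drive each prediction:

```python
# Minimal sketch (hypothetical data and labels): map phoneme strings to
# meaning classes, then inspect which segments the model relies on.
import torch
import torch.nn as nn

# Toy training pairs: (romanized ideophone, meaning label)
pairs = [("kirakira", "flashing"), ("dondon", "banging"), ("sarasara", "smooth")]
labels = sorted({m for _, m in pairs})
chars = sorted({c for w, _ in pairs for c in w})
c2i = {c: i + 1 for i, c in enumerate(chars)}          # index 0 is padding
l2i = {m: i for i, m in enumerate(labels)}
max_len = max(len(w) for w, _ in pairs)

def encode(word):
    ids = [c2i[c] for c in word] + [0] * (max_len - len(word))
    return torch.tensor(ids)

X = torch.stack([encode(w) for w, _ in pairs])
y = torch.tensor([l2i[m] for _, m in pairs])

class IdeophoneNet(nn.Module):
    def __init__(self, n_chars, n_meanings, dim=16):
        super().__init__()
        self.emb = nn.Embedding(n_chars + 1, dim, padding_idx=0)
        self.rnn = nn.GRU(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, n_meanings)

    def forward(self, ids):
        h, _ = self.rnn(self.emb(ids))
        return self.out(h.mean(dim=1))                 # pool over segment positions

model = IdeophoneNet(len(chars), len(labels))
opt = torch.optim.Adam(model.parameters(), lr=0.01)
for _ in range(200):                                   # tiny training loop
    opt.zero_grad()
    loss = nn.functional.cross_entropy(model(X), y)
    loss.backward()
    opt.step()

# Gradient-based saliency: which character positions most affect the predicted meaning?
emb = model.emb(X).detach().requires_grad_(True)
h, _ = model.rnn(emb)
logits = model.out(h.mean(dim=1))
logits.gather(1, y.unsqueeze(1)).sum().backward()
print(emb.grad.norm(dim=-1))                           # one saliency score per position
```

In the full project, the toy pairs would be replaced by the multilingual database and the saliency scores would be aggregated across languages to identify recurring sound-meaning mappings.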
We then test the psychological reality of what the neural network has learned by first asking it to generate new ideophones, then using these as stimuli in two experiments: (1) a learning study, and (2) a transmission study (a game of telephone), to see how the new ideophones “survive in the wild” as they are passed from one participant to the next.
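The generation step could be prototyped along similar lines; the sketch below swaps in a simple conditional character bigram model as a stand-in for the trained network, sampling novel ideophone-like forms per meaning category (the seed words and categories are illustrative only):

```python
# Hypothetical sketch: sample novel ideophone-like forms per meaning category
# from a conditional character bigram model (a stand-in for the trained network).
import random
from collections import defaultdict

seeds = {"flashing": ["kirakira", "pikapika"], "banging": ["dondon", "gangan"]}

def build_bigrams(words):
    table = defaultdict(list)
    for w in words:
        padded = "^" + w + "$"                 # ^ marks the start, $ the end
        for a, b in zip(padded, padded[1:]):
            table[a].append(b)
    return table

def sample(table, max_len=10):
    out, ch = [], "^"
    while len(out) < max_len:
        ch = random.choice(table[ch])
        if ch == "$":
            break
        out.append(ch)
    return "".join(out)

random.seed(0)
for meaning, words in seeds.items():
    table = build_bigrams(words)
    print(meaning, [sample(table) for _ in range(3)])
```

Forms generated this way (or, in the project itself, by the trained network) would then serve as the stimuli for the learning and transmission experiments.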
Our project has two impact pathways: (1) developing an open-access database of ideophones from many languages, labeled with sound-meaning mappings identified by our neural network and verified through experimental evidence, and (2) designing an open-source brain-teaser game that helps players improve their memory while allowing us to continue improving our network. For (1), we convert our neural network’s training set into a searchable website. For (2), we harness the sound-meaning mappings pinpointed by our network to design tasks shown to improve memory performance.
Imitation is a core part of learning and expressing language. To understand certain words, we must know what makes them imitative. These words are ideophones. Ideophones exist in all known spoken languages, and they are known to be easily understood by non-native speakers due to their imitative nature. Studies show that a Dutch speaker who hears a Japanese ideophone, even with zero experience of Japanese, can intuit that ideophone’s meaning. This implies that ideophones tap into a universal cognitive ability that gives sound a meaning under the right communicative circumstances.
The goal of our research is to investigate how ideophones express meaning in terms of an ability accessible to all spoken language users: the articulatory movement of the speech organs. To date, linguistic research has largely ignored ideophones because most meanings cannot be expressed by imitation (e.g., foot, pink, mountain). Ideophones are limited to descriptive meanings such as sounds, motions, visuals, touch/feel, and inner feelings (e.g., plonk, zig-zag, bling-bling).
Despite this, parent-child interactions are full of ideophones, so much so that ideophones have been proposed as a crucial component of language learning. Still, we do not know how ideophones are learnt, nor what makes them so easily learnable. What we do know is that ideophones frequently co-occur with something else largely ignored by traditional linguists: descriptive hand gestures. Some researchers claim that ideophones are incomplete without their co-occurring hand gestures, arguing that ideophones are analogous to descriptive gestures made with the mouth instead of the hands.
Given that movement is imperative for understanding hand gestures, this project hypothesizes that movement of speech organs is key to learning and understanding ideophones. No study has investigated ideophones in terms of articulatory (speech organ) movement or co-speech hand gesture. The current project seeks to close this gap by being the first to empirically incorporate movement and hand gesture as factors into two ideophone learning studies.
Our first study investigates whether articulatory complexity affects how well non-native speakers learn ideophones without gestures, following a well-established ideophone learning paradigm. Our second study investigates how participants use hand gestures to teach and learn ideophones of varying articulatory complexity in an iterated learning task, a pioneering study for ideophones.
Cumulatively, our project will lead to a deeper understanding of how audio-visual movement can improve language learning and instruction, allowing for impact beyond the realm of research and into the classroom.
When hearing individuals learn a second language, they rely on implicit knowledge from their first language: its sounds, words, grammar, and so on. But what happens when this knowledge does not apply? This is precisely the situation hearing individuals encounter when learning a sign language as a second language. Currently, there is no consensus on how people whose primary linguistic experience is with spoken language acquire a second language that is not spoken but signed. The main goal of our project is to understand how hearing individuals acquire Hong Kong Sign Language (HKSL) as a second language.
Specifically, we focus on how learning biases affect the learning of HKSL. One example of a learning bias is the structural bias, whereby learners prefer phonological structures involving simpler featural specifications over complex ones. Learning biases have been studied extensively in terms of how they affect the learning of spoken languages, but how they affect the learning of signed languages remains unexplored.
We propose a longitudinal study to uncover how learning biases affect hearing individuals’ acquisition of HKSL as a second language. The study consists of video-recording, from multiple angles, one-on-one immersion lessons between Deaf HKSL instructors and hearing native Cantonese-speaking participants. The instructors and students will meet twice a week for 12 weeks and follow a curriculum set by the Professional Sign Language Training Centre (香港手語專業培訓中心). A longitudinal study in this closely documented format, with hearing learners who have zero knowledge of signed languages, has never been done before.
Footage will be coded for phonological contrasts and other factors, such as handshape complexity, as well as for the sign errors made by learners. Our database will allow for detailed error analysis to assess how L2 learning of HKSL is affected by learning biases. The database will also be open access, so that interested researchers can explore how sign language pedagogy works in real time and/or longitudinally.
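To make the coding scheme concrete, the sketch below shows one hypothetical way such annotations could be structured and tallied for error analysis; the field names, signs, and values are illustrative placeholders, not the project’s actual scheme:

```python
# Hypothetical sketch of coded annotations and a simple error tally.
from dataclasses import dataclass
from typing import Optional
from collections import Counter

@dataclass
class SignAnnotation:
    learner_id: str
    week: int                      # week 1-12 of the curriculum
    target_sign: str
    handshape_complexity: int      # e.g., number of specified joints/selected fingers
    error_type: Optional[str]      # e.g., "handshape", "location", "movement"; None if correct

annotations = [
    SignAnnotation("L01", 1, "MOTHER", 2, "handshape"),
    SignAnnotation("L01", 1, "EAT", 1, None),
    SignAnnotation("L02", 3, "TEACHER", 3, "movement"),
]

# Tally errors by phonological parameter, e.g., to chart their progression over weeks.
errors_by_type = Counter(a.error_type for a in annotations if a.error_type)
print(errors_by_type)
```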
Our project has two impact pathways: (1) contributing to the development of an automated translator of HKSL and (2) informing sign language pedagogy for hearing learners. For (1), we employ machine learning techniques to detect and analyze errors, which will serve as training data for a sign language translator model. For (2), we home in on common errors, chart their progression over time, and propose corrective strategies for instructors to implement and to raise learners’ awareness of errors.
This research explores one of the most understudied research topics in the field of second language (L2) acquisition and processing, namely, the interface between prosody and syntax. Specifically, we first examine whether L2 learners can successfully attain native-like L2 prosody when first language (L1) and L2 prosody fundamentally differ. We further investigate whether successful learners of L2 prosody can utilize prosodic information to facilitate syntactic processing (and thus semantic processing as well) during real-time sentence comprehension.
The current research concerns Cantonese-speaking learners of English, whose L1 and L2 show marked differences not only in prosody per se but also in its interaction with syntax. First, these two languages differ in how prosodic boundaries are formed: Stress plays a major role in prosodic boundary formation in English, but not in Cantonese. Second, they differ in how to prosodically signal major syntactic boundaries (i.e., boundaries of syntactic phrases such as noun and verb phrases): English places phrasal stress before syntactic boundaries, while Cantonese does not, relying on other cues such as pause particles.
These marked prosodic differences between the two languages allow us to explore how successfully L2 learners can overcome dramatic L1–L2 differences in prosody by examining how sensitive Cantonese-speaking learners of English are to English prosodic boundaries and how effectively they utilize phrasal stress for syntactic processing.
The present study employs electroencephalography (EEG), a non-invasive neuroimaging technique that has proven highly effective in exploring the fine-grained time courses of the cognitive subprocesses underlying language processing. We use the two most widely used EEG analysis techniques: event-related potential (ERP) analysis and time-frequency (TF) analysis. In our ERP analysis, we examine the Closure Positive Shift (CPS) component to test our L2 learners’ sensitivity to prosodic boundaries in English (i.e., phrasal stress). In our TF analysis, we focus on power increases in the low beta and gamma bands of our L2 learners’ EEG waveforms to examine how their L2 sentence comprehension (i.e., syntactic and semantic processing) is facilitated by prosodic information.
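As an illustration of these two analysis steps, the sketch below runs them in MNE-Python on simulated epochs; the channel set, frequency range, and time window are placeholder assumptions, and real preprocessing, experimental conditions, and statistics are omitted:

```python
# Hypothetical sketch of the ERP and time-frequency steps on simulated data.
import numpy as np
import mne

sfreq = 250.0
info = mne.create_info(ch_names=["Cz", "Pz"], sfreq=sfreq, ch_types="eeg")
rng = np.random.default_rng(0)
data = rng.normal(scale=1e-6, size=(40, 2, int(sfreq)))   # 40 epochs, 2 channels, 1 s
epochs = mne.EpochsArray(data, info, tmin=0.0)

# (1) ERP analysis: average the epochs and inspect mean amplitude in a window
# where a Closure Positive Shift (CPS) would be expected at a prosodic boundary.
evoked = epochs.average()
cps_window = evoked.copy().crop(tmin=0.4, tmax=0.6)        # placeholder window
print(cps_window.data.mean(axis=1))                        # mean amplitude per channel

# (2) Time-frequency analysis: Morlet-wavelet power covering low beta and gamma.
freqs = np.arange(13.0, 45.0, 2.0)
power = mne.time_frequency.tfr_morlet(
    epochs, freqs=freqs, n_cycles=freqs / 2.0, return_itc=False
)
print(power.data.shape)                                     # (n_channels, n_freqs, n_times)
```

In the actual study, the CPS would be assessed by contrasting conditions with and without a prosodic boundary, and band power would be compared across prosodically facilitated and unfacilitated sentences.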
The findings of this research will improve our understanding of the nature of L2 prosody acquisition and processing, as well as the interfaces between different subdomains of language in L2. Since L2 prosody has largely been overlooked by the L2 research community, not to mention its interface with other subdomains of language, this research will make novel and rare contributions to theories of L2 processing and acquisition.
Hong Kong Sign Language (HKSL) faces endangerment. Our project tackles this by working with the Deaf community to document HKSL, explore its signs, and empower Deaf culture. Join us in preserving HKSL and building bridges between Deaf and hearing communities.