ASL-LEX: mapping the ASL lexicon
ASL-LEX is a lexical database that catalogues information about signs in American Sign Language (Caselli, Sevcikova Sehyr, Cohen-Goldberg, & Emmorey, 2016). It currently includes information about frequency (how often signs are used in everyday conversation), iconicity (how much signs look like what they mean), and phonology (which handshapes, locations, movements, etc. are used). ASL teachers can use ASL-LEX to support vocabulary acquisition (e.g., to develop vocabulary lessons that prioritize commonly used signs). Students can also look up signs based on their sign form, without knowing a sign’s English translation, and begin to learn about linguistic patterns in the forms of signs. ASL researchers can also use it to design experiments or to build sign language technologies. This project is supported by the National Science Foundation (1625793, 1918252, and 1749384).
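As a rough illustration of how a teacher or researcher might use this kind of lexical database, here is a minimal Python sketch. The field names (`gloss`, `frequency`, `iconicity`) and values are invented for the example and are not ASL-LEX's actual column names or data.

```python
# Hypothetical example: ranking signs in an ASL-LEX-style dataset by
# frequency to prioritize commonly used signs in a vocabulary lesson.
# Field names and values below are illustrative, not real ASL-LEX data.

signs = [
    {"gloss": "MOTHER", "frequency": 6.2, "iconicity": 2.1},
    {"gloss": "BOOK", "frequency": 5.8, "iconicity": 6.5},
    {"gloss": "LINGUISTICS", "frequency": 2.4, "iconicity": 1.3},
]

# Sort most-frequent first, so a lesson plan starts with everyday signs.
lesson = sorted(signs, key=lambda s: s["frequency"], reverse=True)
print([s["gloss"] for s in lesson])
```

The same pattern extends to filtering by phonological properties (e.g., selecting all signs sharing a handshape) once the real database fields are known.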
ASL Vocabulary Acquisition
Many deaf children have limited access to language early in life: they often do not have signing role models, and cannot hear the sounds of spoken language. These children are at risk of incomplete acquisition of their first language; this is called language deprivation. My work explores the trajectory of vocabulary development in deaf children with or without language deprivation, because early vocabulary is a critical building block in language acquisition. The goal is to identify the signs that children learn, the factors that promote vocabulary acquisition, and to develop assessment tools for evaluating early ASL vocabulary. With these tools in hand, researchers and educators will be better able to develop interventions to mitigate the effects of language deprivation. Find out more about this project. This project is supported by the National Institute on Deafness and Other Communication Disorders (R21DC016104 and R01DC018279).
Sign Language Computation
The vast majority of communication technologies are designed for written and spoken languages (e.g., voice recognition, automatic translation), and exclude people who use sign languages. With advances in machine learning, computer vision, and human pose estimation, there is rapidly growing interest among computer scientists in sign language computation. In my work, I explore the ethical landscape of this emerging field: Do deaf people want sign language technologies? Who has a seat at the table in decision-making about new technologies? What concerns may emerge about privacy, fairness, and audism? I am also interested in leveraging resources like ASL-LEX to address some of the practical barriers to developing linguistically informed sign language technologies.