The first thing you might notice as you look around this coffee shop is that the donuts look delicious. There are a lot of other tempting choices listed on the blackboard menu, which is hanging above a sleek white countertop. Standing there is a barista patiently waiting to take your order — and teach you how to sign “coffee.”

Welcome to the virtual reality realm of “ASL Champ,” an educational program being developed through the Signing Avatars & Immersive Learning (SAIL) project, with Dr. Lorna Quandt serving as Principal Investigator and Melissa Malzkuhn as co-PI. Work began in 2017, when Quandt and Malzkuhn had the idea to harness emerging technology to create a new way for people to interact with American Sign Language (ASL). After a successful smaller-scale pilot, SAIL was ready to take off in 2021 with support from the National Science Foundation.

“ASL is a three-dimensional, spatial language, and our team is creating a new pathway for people to learn it through VR headsets that immerse them right in signing environments,” Malzkuhn says.

The program “ASL Champ” is now at a pivotal point in its development. The first test subjects were recently invited into that virtual coffee shop, inspired by the signing Starbucks on H Street near the Gallaudet campus. Through Oculus headsets, they saw the barista demonstrate the signs for “coffee,” “milk,” and “tea,” and then she judged their signing. If they did it wrong, she shook her head and asked them to try again. The encouraging results are helping guide the next steps for the project, which Quandt and Malzkuhn hope to release to the public within the next two years.

The team effort has included the work of Human-Computer Interaction engineer Jason Lamberton, who uses motion capture technology with multiple sensors and cameras to record the movements of native deaf signers, as well as 3D artist and animator Jianye Wang, who places the resulting avatars in realistic environments, such as that coffee shop. “We have been drawing on deaf talent who know the field inside and out,” says Quandt, noting that renowned deaf technologist Myles de Bastion and signing avatar expert Heather L. Smith are both consultants on the project.

[Photo: Post-doctoral researcher Dr. Md Shahinur Alam holds a virtual reality headset in front of two screens showing work that has gone into “ASL Champ.” The left screen shows a man beside an avatar of a woman under the words “What should your signs look like? (Money)”; the right shows the virtual coffee shop with its signing avatar. Above, a participant signs “tea” while testing the program.]

SAIL also brought on full-time post-doctoral researcher Dr. Md Shahinur Alam, who has an extensive background in gesture recognition and machine learning. “To learn new signs, you need feedback. Without that, it’s a one-way communication system,” Quandt explains. So when users try to copy the barista’s signs, it has been up to Alam to figure out how to ensure that ASL Champ lets them know how they did. “I use a deep learning model to see if you’re creating a sign correctly or not,” Alam says.

Thanks to recent developments in technology, this is possible with just a standard Virtual Reality headset. “Extra accessories would make the process more cumbersome for users,” he adds. And it would make SAIL less convenient to use, which goes against Quandt and Malzkuhn’s vision.
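The article describes this feedback loop only at a high level, and SAIL’s actual recognizer is a deep learning model. Purely as an illustrative sketch (not the project’s code), the accept-or-retry decision can be thought of as scoring how closely a learner’s hand-keypoint trajectory, as captured by the headset, matches a reference signer’s; the trajectory shapes, threshold, and function names below are all hypothetical:

```python
import numpy as np

def normalize(traj):
    """Center and scale a (frames, keypoints, 3) trajectory so that
    comparisons ignore where the signer stands and how large they sign."""
    traj = traj - traj.mean(axis=(0, 1), keepdims=True)
    scale = np.linalg.norm(traj)
    return traj / scale if scale > 0 else traj

def sign_score(attempt, reference):
    """Mean per-keypoint distance between two normalized trajectories
    (lower means the attempt tracks the reference more closely)."""
    a, r = normalize(attempt), normalize(reference)
    n = min(len(a), len(r))  # compare only the overlapping frames
    return float(np.mean(np.linalg.norm(a[:n] - r[:n], axis=-1)))

def feedback(attempt, reference, threshold=0.05):
    """The kind of accept/retry decision the avatar could act on."""
    return "well done" if sign_score(attempt, reference) < threshold else "try again"
```

A real system would replace the distance score with a trained classifier and handle timing differences between signers, but the input (keypoints from the headset’s hand tracking, no extra accessories) and the output (a binary accept/retry signal driving the avatar) follow the shape the article describes.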

“One picture we like to paint is of a young hearing couple who have just given birth to a deaf baby. We know the best thing those parents can do is pick up some ASL. But that is easy to say and hard to do,” Quandt says. “When are they supposed to take ASL classes? They are limited by distance, money, and time.” 

Ideally, they would be able to find an in-person or online course taught by an ASL teacher that fits their schedule and budget. If they can’t, there are not many viable alternatives. “Maybe you find YouTube videos, but there is no feedback system. You don’t know if you’re creating the sign properly,” Alam says. As VR continues to develop and more people have access to this technology, ASL Champ could be something they could use from their own home whenever they want.

The goal is to create something comparable to Duolingo that will allow users to learn and practice making many different signs. In addition to that coffee shop, there will be at least four other environments, each with a tailored vocabulary list. Up next is a maker space geared toward technology and art projects, where signs will include “paint” and “wood.” Users will also get to learn how to “cook” in the kitchen, find a “book” in the library, and take their skills outdoors to see the “sun.”

“It has been a huge challenge to decide which signs will be in the system,” Quandt says. They not only need to be signs that are appropriate for new signers, but they also have to be signs that current technology can successfully recognize. Adding another level of complexity, ASL Champ has no English captions that identify what objects are. “From a learning perspective, it’s more helpful to learn signs that connect to a concept rather than a written word,” she says. “So a cup of coffee has to look exactly like a cup of coffee.”

This aspect of the SAIL project will be the focus of an upcoming study that will use neuroimaging to assess how people’s brains change when they encounter ASL instruction in VR. Quandt’s hypothesis is that this format should lead to a higher level of comprehension. “Embodied learning helps people retain language better,” she says.

The more impressive the results, the more people ASL Champ is likely to reach. Quandt and Malzkuhn plan to pursue partnerships with game design companies that can continue to build on Gallaudet’s efforts and make sure that these lessons are widely available on whatever new platforms emerge.
