
SpeeChin: A Smart Necklace for Silent Speech Recognition 

Published in Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies (IMWUT) / UbiComp '21

Ruidong Zhang, Mingyang Chen, Benjamin Steeper, Yaxuan Li, Zihan Yan, Yizhuo Chen, Songyun Tao, Tuochao Chen, Hyunchul Lim, Cheng Zhang

Selected Media Coverage: Cornell Chronicle, Gizmodo, Hackster News, New Atlas, Tech Xplore.

Overview of SpeeChin

This paper presents SpeeChin, a smart necklace that can recognize 54 English and 44 Chinese silent speech commands. A customized infrared (IR) imaging system is mounted on a necklace to capture images of the neck and face from under the chin. These images are first pre-processed and then fed into an end-to-end deep convolutional-recurrent neural network (CRNN) model to infer different silent speech commands. A user study with 20 participants (10 per language) showed that SpeeChin could recognize the 54 English and 44 Chinese silent speech commands with average cross-session accuracies of 90.5% and 91.6%, respectively. To further investigate the potential of SpeeChin in recognizing other silent speech commands, we conducted another study with 10 participants distinguishing between 72 one-syllable nonwords. Based on the results from the user studies, we further discuss the challenges and opportunities of deploying SpeeChin in real-world applications.
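To make the pipeline concrete, the sketch below shows a minimal convolutional-recurrent classifier of the kind the abstract describes: per-frame CNN features from a sequence of single-channel IR images, aggregated by a recurrent layer, then classified into command logits. All layer sizes, frame counts, and image resolutions here are illustrative assumptions, not the paper's actual architecture or hyperparameters.

```python
# Hypothetical CRNN sketch for silent-speech command classification
# from sequences of IR images; layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

class CRNNClassifier(nn.Module):
    def __init__(self, num_classes=54, hidden_size=128):
        super().__init__()
        # Per-frame feature extractor for single-channel (IR) images
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),  # fixed spatial size regardless of input
        )
        # Recurrent layer aggregates features across the frame sequence
        self.rnn = nn.GRU(32 * 4 * 4, hidden_size, batch_first=True)
        self.fc = nn.Linear(hidden_size, num_classes)

    def forward(self, x):
        # x: (batch, time, 1, H, W) sequence of IR frames
        b, t = x.shape[:2]
        feats = self.cnn(x.flatten(0, 1))        # (b*t, 32, 4, 4)
        feats = feats.flatten(1).view(b, t, -1)  # (b, t, 512)
        out, _ = self.rnn(feats)
        return self.fc(out[:, -1])               # logits from the last time step

model = CRNNClassifier(num_classes=54)
logits = model(torch.randn(2, 10, 1, 64, 64))  # 2 sequences of 10 frames
print(logits.shape)  # torch.Size([2, 54])
```

In practice such a model would be trained with cross-entropy loss over the command labels; the `num_classes` argument would be 54 for the English command set or 44 for the Chinese one.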

Comparison of captured IR images across different speech commands


Cornell University, 107 Hoy Rd, Ithaca, NY 14853
