EMG Vocalizer


Matt Banks
Jennifer Padgett
Sophie Tsalkhelishvili
Kristin Weidmann

Professor Rose

Reliable communication is a necessity in emergency situations. Firefighters in a burning building need to communicate with each other reliably despite loud surrounding noise. Military operatives under fire need to coordinate to complete a mission or to extract themselves from a dangerous situation. To make coordination in emergencies reliable, vocal communication must be backed up or replaced by other modes of communication.


Sudden, loud noises make vocal communication difficult over radios and cell phones. One solution to this problem is a system that lets a person communicate vocally while noise is low but relies on non-vocal signals to interpret speech and relay information should a high-noise situation suddenly arise.
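The switch between vocal and non-vocal modes could be driven by the ambient noise level. As a minimal sketch (the threshold value and function names are illustrative assumptions, not part of the original system), the system might compare the RMS level of each audio frame against a threshold:

```python
import math

# Assumed threshold on normalized audio amplitude; a real system would
# tune this value empirically.
NOISE_RMS_THRESHOLD = 0.3

def rms(samples):
    """Root-mean-square level of one audio frame."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def select_mode(audio_frame):
    """Pass speech through in quiet conditions; fall back to EMG
    interpretation when ambient noise is high."""
    return "emg" if rms(audio_frame) > NOISE_RMS_THRESHOLD else "voice"
```

Running this per frame would let the system fall back to EMG interpretation automatically the moment noise overwhelms the microphone.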


The system relies on information about the muscle movement that occurs during speech. An EMG is recorded at the chin and cheek, with the ground electrode at the base of the throat. The acquired signal contains information about the movement of the cheeks, jaw, and lips as words are formed. The EMG signal is passed through a recognition system that can interpret several words; at present it recognizes six (alpha, omega, left, right, forward, reverse). Once the recognition algorithm completes, the system runs a vocal synthesizer to make the word audible to anyone using the system, allowing commands to be given clearly in the presence of noise. A future system might begin interpreting muscle movement as soon as a high-noise situation is detected, so that communication continues uninterrupted.
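The recognition stage can be sketched as windowed EMG features fed to a template classifier over the six-word vocabulary. The feature (mean absolute value per window) and the nearest-centroid classifier below are common choices in EMG work, used here as illustrative assumptions; the report does not specify the actual algorithm.

```python
import math

# Six-word vocabulary from the report.
VOCAB = ["alpha", "omega", "left", "right", "forward", "reverse"]

def features(emg, win=64):
    """Mean absolute value per non-overlapping window --
    a common amplitude feature for EMG signals."""
    return [sum(abs(x) for x in emg[i:i + win]) / win
            for i in range(0, len(emg) - win + 1, win)]

def classify(emg, templates):
    """Return the word whose stored feature template is nearest
    (Euclidean distance) to this utterance's feature vector.
    `templates` maps each word to a feature list recorded during
    calibration."""
    f = features(emg)

    def dist(t):
        n = min(len(f), len(t))  # compare over the common length
        return math.sqrt(sum((f[i] - t[i]) ** 2 for i in range(n)))

    return min(templates, key=lambda w: dist(templates[w]))
```

A per-user template set recorded at calibration time would also absorb some of the electrode-placement variability discussed below.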


The main difficulty in the overall system is electrode placement. Because placement on an individual varies from use to use, the system must be calibrated before each use. Even though the EMG circuit has a very high SNR, poorly placed electrodes may yield little or no signal to interpret. With good electrode placement, however, the recognition algorithm achieves an accuracy of 85%. The speech synthesis needs to be made somewhat smoother for the final implementation, but overall it works well. Other potential applications of this system include a new human-machine interface and an easy communication system for people who are unable to speak.
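One way the per-use calibration could work is to normalize each session's signal amplitude against a reference utterance, compensating for gain differences caused by electrode placement. This is a minimal sketch under that assumption, not the report's exact procedure; note how a near-zero reference peak doubles as a placement check:

```python
def calibration_gain(reference_emg):
    """Gain that maps this session's reference peak amplitude to 1.0.
    A zero peak suggests the electrodes are not making contact."""
    peak = max(abs(x) for x in reference_emg)
    if peak == 0:
        raise ValueError("no signal: check electrode placement")
    return 1.0 / peak

def calibrate(emg, gain):
    """Apply the session gain to a subsequent EMG recording."""
    return [x * gain for x in emg]
```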