MyoKey

Seamless text input in Augmented Reality (AR) is challenging yet essential for enabling user-friendly AR applications. Existing approaches such as speech input and vision-based gesture recognition suffer from environmental interference, while the large default on-screen keyboard sacrifices most of the screen real estate in AR. In this paper, we propose MyoKey, a system that enables users to input text effectively and unobtrusively in the constrained environment of AR by jointly leveraging surface Electromyography (sEMG) and Inertial Motion Unit (IMU) signals transmitted by wearable sensors on a user’s forearm. MyoKey adopts a deep learning-based classifier to infer hand gestures from sEMG. To show the feasibility of our approach, we implement a mobile AR application using the Unity application-building framework. We present novel interaction and system designs that combine hand-gesture information from sEMG with arm-motion information from the IMU to provide a seamless text-entry solution. We demonstrate the applicability of MyoKey through a series of experiments, achieving an accuracy of 0.91 when identifying five gestures in real time (inference time: 97.43 ms).
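The paper's actual classifier is a deep network operating on sEMG signals; as a much-simplified illustrative sketch (not the authors' implementation), the snippet below classifies a single sEMG window using two classic time-domain features, mean absolute value (MAV) and root mean square (RMS), with a nearest-centroid rule. The gesture labels, centroid values, and sample window are all hypothetical.

```python
import math

def emg_features(window):
    """Extract simple time-domain features from one sEMG channel window:
    mean absolute value (MAV) and root mean square (RMS)."""
    n = len(window)
    mav = sum(abs(x) for x in window) / n
    rms = math.sqrt(sum(x * x for x in window) / n)
    return (mav, rms)

def nearest_centroid(features, centroids):
    """Assign the feature vector to the gesture whose centroid is closest
    in squared Euclidean distance."""
    best_label, best_dist = None, float("inf")
    for label, centroid in centroids.items():
        dist = sum((f - c) ** 2 for f, c in zip(features, centroid))
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label

# Hypothetical per-gesture centroids in (MAV, RMS) feature space;
# a real system would learn these (or a deep model) from training data.
centroids = {"fist": (0.8, 0.9), "rest": (0.05, 0.06)}

window = [0.7, -0.9, 0.85, -0.75, 0.8, -0.88]  # strong muscle activation
print(nearest_centroid(emg_features(window), centroids))  # prints "fist"
```

A full pipeline like MyoKey's would additionally segment the continuous sEMG stream into windows and fuse the gesture decision with IMU-derived arm motion to drive keyboard selection.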

Authors:
Kwon Young D., Shatilov Kirill A., Lee Lik-Hang, Kumyol Serkan, Lam Kit-Yung, Yau Yui-Pan, Hui Pan

Publication type:
A4 Article in conference proceedings

Place of publication:
2020 IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops)

Keywords:
augmented reality, deep learning, EMG, IMU, textual input

Published:

Full citation:
Y. D. Kwon et al., “MyoKey: Surface Electromyography and Inertial Motion Sensing-based Text Entry in AR,” 2020 IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops), Austin, TX, USA, 2020, pp. 1-4, doi: 10.1109/PerComWorkshops48775.2020.9156084

DOI:
https://doi.org/10.1109/PerComWorkshops48775.2020.9156084

Read the publication here:
http://urn.fi/urn:nbn:fi-fe2020081148292