A Machine Learning Approach for Predicting Upper Limb Motion Intentions with Multimodal Data

Pavan Uttej Ravva, Pinar Kullu, Mohammad Fahim Abrar, Roghayeh Leila Barmaki


Abstract: Over the last decade, there has been significant progress in the field of interactive virtual rehabilitation. Physical therapy (PT) is a highly effective approach for treating physical impairments, but patient motivation and objective progress tracking remain challenges in rehabilitation. This work addresses that gap by proposing a computational approach that uses machine learning to objectively measure reaching-task outcomes in an upper limb virtual therapy user study. In this study, participants perform several tracing tasks in virtual reality while motion and movement data are collected using a KinArm robot and a custom-made wearable sleeve sensor. We introduce a two-step machine learning architecture to predict the motion intention of participants: the first step uses gaze data to predict the reaching-task segment to which participant-marked points belong, while the second step employs a Long Short-Term Memory (LSTM) model to predict directional movements from resistance-change values recorded by the wearable sensor and by the KinArm robot, which provides support to the participant. We specifically propose transforming the raw resistance data into the time domain, which significantly improves the accuracy of our models. To evaluate the effectiveness of our approach, we compared different classification techniques across various data configurations. The results show that the proposed computational method predicts participants' intended movements with high accuracy, demonstrating the promise of multimodal data, including eye tracking and resistance change, for objectively measuring performance and intention in virtual rehabilitation settings.
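To make the second stage of the architecture concrete, below is a minimal sketch of an LSTM classifier that maps windows of resistance-change values to directional-movement classes. The paper does not publish code, so every detail here is an illustrative assumption: the window length, the number of sensor channels, the number of direction classes, the hidden size, and the class name `DirectionLSTM` are all hypothetical, not values reported in the study.

```python
# Hypothetical sketch of the abstract's second-stage model: an LSTM that
# classifies directional movements from time-domain windows of
# resistance-change values. All hyperparameters are illustrative
# assumptions, not figures from the paper.
import torch
import torch.nn as nn

class DirectionLSTM(nn.Module):
    def __init__(self, n_channels=4, hidden_size=64, n_directions=8):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_channels,
                            hidden_size=hidden_size,
                            batch_first=True)
        self.head = nn.Linear(hidden_size, n_directions)

    def forward(self, x):
        # x: (batch, time, channels) -- a window of resistance samples
        # arranged as a time series, in the spirit of the abstract's
        # time-domain transformation of the raw resistance data.
        _, (h_n, _) = self.lstm(x)
        # Use the last hidden state to produce logits over directions.
        return self.head(h_n[-1])

# Example forward pass on a dummy batch: 32 windows of 100 time steps
# from 4 hypothetical sensor channels.
model = DirectionLSTM()
logits = model(torch.randn(32, 100, 4))
print(logits.shape)  # torch.Size([32, 8])
```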