Multimodal EMG-EEG Biosignal Fusion in Upper-Limb Gesture Classification

Abstract

Upper-limb gesture identification is an important problem in the advancement of robotic prostheses. Prevailing research into classifying electromyographic (EMG) muscular data or electroencephalographic (EEG) brain data for this purpose is often limited in the granularity of gestures classified, the extent to which generalisation is demonstrated, and methodological rigour. This work proposes three architectures for multimodal fusion of EMG and EEG data in gesture classification, including techniques grounded in literature precedent and a novel “Hierarchical” strategy. Classification systems of these architectures are designed via Combined Algorithm Selection & Hyperparameter Optimisation (CASH) to ensure comparisons between the approaches are unbiased; this is likely the first application of the methodology to the biosignal classification domain. All architectures are demonstrated suitable for use in a same-hand multi-gesture classification problem less separable than those seen in much Brain-Computer Interface (BCI) research. Fusion of EMG and EEG is shown to provide significantly higher (p<0.05) subject-independent classification accuracy (73.4%) than an equivalent single-mode EMG model when tested on unseen individuals’ data. Subject-independent single-mode EEG classification achieved accuracies (51.9%) competitive with those reached by many subject-specific systems in the literature on similar, or more separable, problems. The efficacy of CASH optimisation as a means of determining modelling choices — over inferring such decisions from literature — is also evidenced. A desire to minimise the burden placed on potential prosthesis users motivates investigation of cross-subject and cross-session classification. Strategies for minimising per-session calibration, including through transfer learning, are explored. Results demonstrate that less session-specific data is needed to adapt a model pre-trained on an individual’s previous-session data than to train a session-specific classifier to a similar accuracy (85%). Domain transfer using data collected from other individuals as the basis for adaptation is shown to achieve accuracies (83%) approaching those of the subject-specific approach, laying the groundwork for future developments in low-calibration gesture classification systems.
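The CASH formulation mentioned in the abstract treats the choice of learning algorithm and the tuning of its hyperparameters as one joint optimisation problem, rather than fixing an algorithm first and tuning it afterwards. A minimal sketch of that idea follows; the algorithm names, hyperparameter grids, and scoring function here are purely illustrative stand-ins, not the search space or evaluation protocol used in the thesis (which scores candidates by cross-validated classification accuracy on biosignal data).

```python
import itertools
import random

# Illustrative joint search space: each candidate algorithm carries its own
# hyperparameter grid. These names and values are hypothetical examples.
SEARCH_SPACE = {
    "svm": {"C": [0.1, 1.0, 10.0], "kernel": ["rbf", "linear"]},
    "random_forest": {"n_trees": [50, 100], "max_depth": [5, 10]},
}

def evaluate(algorithm, params):
    """Stand-in for a cross-validated accuracy estimate. A real CASH system
    would train the candidate model and score it on held-out data."""
    rng = random.Random(repr((algorithm, sorted(params.items()))))
    return rng.uniform(0.5, 0.9)

def cash_search(space):
    """Score every (algorithm, hyperparameter) configuration in the joint
    space and return the best — algorithm selection and hyperparameter
    optimisation treated as a single problem."""
    best = None
    for algorithm, grid in space.items():
        keys = list(grid)
        for values in itertools.product(*(grid[k] for k in keys)):
            params = dict(zip(keys, values))
            score = evaluate(algorithm, params)
            if best is None or score > best[0]:
                best = (score, algorithm, params)
    return best

score, algorithm, params = cash_search(SEARCH_SPACE)
print(algorithm, params, round(score, 3))
```

Practical CASH tools replace the exhaustive loop above with Bayesian optimisation or bandit-based search, since real spaces are far too large to enumerate, but the objective — one score-driven search over algorithms and their hyperparameters together — is the same.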

Publication DOI: https://doi.org/10.48780/publications.aston.ac.uk.00047375
Divisions: College of Engineering & Physical Sciences
Additional Information: Copyright © Michael George Pritchard, 2024. Michael George Pritchard asserts their moral right to be identified as the author of this thesis. This copy of the thesis has been supplied on condition that anyone who consults it is understood to recognise that its copyright rests with its author and that no quotation from the thesis and no information derived from it may be published without appropriate permission or acknowledgement. If you have discovered material in Aston Publications Explorer which is unlawful e.g. breaches copyright, (either yours or that of a third party) or any other law, including but not limited to those relating to patent, trademark, confidentiality, data protection, obscenity, defamation, libel, then please read our Takedown Policy and contact the service immediately.
Institution: Aston University
Uncontrolled Keywords: Biosignal Fusion, Hybrid Brain Computer Interface (hBCI), Gesture Classification, Robotic Prostheses, Machine Learning, Multimodal Classification, Cross-Subject Learning, Inter-Session Calibration, Data Fusion, Hand Gesture Recognition
Last Modified: 21 Mar 2025 16:19
Date Deposited: 21 Mar 2025 15:53
Completed Date: 2024-03
Authors: Pritchard, Michael George (ORCID Profile 0000-0002-3783-0230)
