An investigation of multimodal EMG-EEG fusion strategies for upper-limb gesture classification

Abstract

Objective: Upper-limb gesture identification is an important problem in the advancement of robotic prostheses. Prevailing research into classifying electromyographic (EMG) muscular data or electroencephalographic (EEG) brain data for this purpose is often limited in methodological rigour, in the extent to which generalisation is demonstrated, and in the granularity of the gestures classified. This work evaluates three architectures for multimodal fusion of EMG and EEG data in gesture classification, including a novel Hierarchical strategy, in both subject-specific and subject-independent settings.

Approach: We propose an unbiased methodology for designing classifiers centred on Automated Machine Learning through Combined Algorithm Selection and Hyperparameter Optimisation (CASH); the first application of this technique to the biosignal domain. Using CASH, we introduce an end-to-end pipeline for data handling, algorithm development, modelling, and fair comparison, addressing established weaknesses in the biosignal literature.

Main results: EMG-EEG fusion is shown to provide significantly higher subject-independent accuracy in same-hand multi-gesture classification than an equivalent EMG classifier. Our CASH-based design methodology produces a more accurate subject-specific classifier design than that recommended by the literature. Our novel Hierarchical ensemble of classical models outperforms a domain-standard CNN architecture. We achieve a subject-independent EEG multiclass accuracy competitive with many subject-specific approaches used for similar, or more easily separable, problems.

Significance: To our knowledge, this is the first work to establish a systematic framework for the automatic, unbiased design and testing of fusion architectures in the context of multimodal biosignal classification. We demonstrate a robust end-to-end modelling pipeline for biosignal classification problems which, if adopted in future research, can help address the risk of bias common in multimodal BCI studies, enabling more reliable and rigorous comparison of proposed classifiers than is usual in the domain. We apply the approach to a more complex task than is typical of EMG-EEG fusion research, surpassing literature-recommended designs and verifying the efficacy of a novel Hierarchical fusion architecture.
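For illustration only, a minimal Python/scikit-learn sketch of the CASH idea follows: a single cross-validated search jointly selects the learning algorithm and its hyperparameters. The candidate estimators, grids, and function name (cash_search) are illustrative assumptions and do not reproduce the configuration space searched in the paper.

# Toy CASH: jointly search over candidate algorithms and their
# hyperparameters, keeping the configuration with the best CV accuracy.
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

def cash_search(X, y):
    pipe = Pipeline([("clf", SVC())])  # placeholder step; swapped below
    # Each dict is one algorithm plus its hyperparameter grid; the "clf"
    # entry swaps the pipeline step, so the search covers both choices.
    space = [
        {"clf": [SVC()], "clf__C": [0.1, 1, 10], "clf__kernel": ["rbf", "linear"]},
        {"clf": [RandomForestClassifier()], "clf__n_estimators": [100, 300]},
        {"clf": [LogisticRegression(max_iter=1000)], "clf__C": [0.1, 1, 10]},
    ]
    search = GridSearchCV(pipe, space, cv=5, scoring="accuracy")
    search.fit(X, y)
    return search.best_estimator_, search.best_score_

Similarly, a Hierarchical fusion strategy can be sketched as a stacked ensemble: modality-specific classifiers produce class probabilities that a meta-classifier combines into the final gesture decision. This sketch assumes pre-extracted, trial-aligned EMG and EEG feature matrices (X_emg, X_eeg) and is not the authors' exact architecture; a careful implementation would also train the meta-learner on out-of-fold probabilities to avoid leakage.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

def fit_hierarchical(X_emg, X_eeg, y):
    # Level 1: one classifier per modality.
    emg_clf = RandomForestClassifier().fit(X_emg, y)
    eeg_clf = SVC(probability=True).fit(X_eeg, y)
    # Level 2: meta-classifier over the concatenated class probabilities.
    meta_X = np.hstack([emg_clf.predict_proba(X_emg), eeg_clf.predict_proba(X_eeg)])
    meta = LogisticRegression(max_iter=1000).fit(meta_X, y)
    return emg_clf, eeg_clf, meta

def predict_hierarchical(emg_clf, eeg_clf, meta, X_emg, X_eeg):
    meta_X = np.hstack([emg_clf.predict_proba(X_emg), eeg_clf.predict_proba(X_eeg)])
    return meta.predict(meta_X)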

Publication DOI: https://doi.org/10.1088/1741-2552/ade1f9
Divisions: College of Engineering & Physical Sciences
College of Engineering & Physical Sciences > School of Computer Science and Digital Technologies > Applied AI & Robotics
Additional Information: As the Version of Record of this article has been published on a gold open access basis under a CC BY 4.0 licence, this Accepted Manuscript is available for reuse under a CC BY 4.0 licence immediately. Everyone is permitted to use all or part of the original content in this article, provided that they adhere to all the terms of the licence: https://creativecommons.org/licenses/by/4.0
Uncontrolled Keywords: Biosignal Fusion, Multimodal Gesture Classification, Brain-Computer Interface, Automated Machine Learning
Publication ISSN: 1741-2552
Last Modified: 18 Jun 2025 08:21
Date Deposited: 11 Jun 2025 12:32
Related URLs: https://iopscie ... 552/ade1f9/meta (Publisher URL)
PURE Output Type: Article
Published Date: 2025-06-06
Published Online Date: 2025-06-06
Accepted Date: 2025-06-06
Authors: Pritchard, Michael (ORCID Profile 0000-0002-3783-0230)
Campelo, Felipe (ORCID Profile 0000-0001-8432-4325)
Goldingay, Harry

Version: Accepted Version

License: Creative Commons Attribution

