The project proposes an innovative rehabilitation system that leverages Brain-Computer Interface (BCI) technology combined with Virtual Reality (VR) to aid patients with upper limb impairments caused by stroke or other conditions. The goal is a more holistic rehabilitation approach that integrates motor, cognitive, and emotional components to improve patient outcomes. Whereas current rehabilitation methods focus primarily on isolated motor functions, the proposed system offers a comprehensive solution that promotes recovery by actively engaging patients and tracking their progress remotely.
AI Implementation:
The AI implementation focuses on extracting and processing brain signals, translating them into movement commands, and rendering those commands in VR. The key components are as follows:
Signal Collection and Processing:
The system uses EEG signals captured through the Emotiv EPOC X headset. The raw signals are processed using MNE-Python, which filters and denoises the signals, performs artifact removal, and extracts relevant features using Wavelet Packet Decomposition (WPD).
Classification Model:
The extracted wavelet coefficients are classified using a 2-layer Long Short-Term Memory (LSTM) model with 64 hidden units. The LSTM model translates EEG signals into seven distinct movement classes: elbow flexion, elbow extension, hand open, hand close, supination, pronation, and rest.
The model achieved a classification accuracy of 78.1%, outperforming other methods like KNN and CNN, making it suitable for real-time applications.
VR Integration:
The system incorporates real-time neurofeedback by rendering the predicted movements in a 3D immersive VR environment using Unity. This feedback helps patients visualize their movement, contributing to neuroplasticity and motor recovery.
Remote Monitoring:
The system provides a web portal that enables therapists to monitor patients' progress remotely. The portal presents data such as session statistics, EMG readings, and detailed reports on patient performance. ThingSpeak, an IoT platform for real-time data monitoring, facilitates this.
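ThingSpeak's REST API accepts a write API key plus up to eight numeric fields per channel update, which is enough to push per-session statistics to the therapist portal. A minimal sketch of building such an update request, using stdlib only (the key and field assignments are illustrative, not taken from the report):

```python
from urllib.parse import urlencode

THINGSPEAK_UPDATE = "https://api.thingspeak.com/update"

def build_update_url(api_key, predicted_class, emg_rms):
    """Build a ThingSpeak channel-update URL.

    Field assignments are assumptions for illustration: field1 carries
    the predicted movement class (0-6), field2 an EMG RMS amplitude.
    """
    params = urlencode({
        "api_key": api_key,
        "field1": predicted_class,
        "field2": emg_rms,
    })
    return f"{THINGSPEAK_UPDATE}?{params}"

# In the live system this URL would be fetched periodically, e.g.:
#   import urllib.request
#   urllib.request.urlopen(build_update_url("XXXX", 3, 0.42))
url = build_update_url("DEMO_KEY", 3, 0.42)
```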
Dataset:
The classification model is trained on a publicly available dataset provided by Ofner (2017), which includes EEG data from 15 healthy subjects. The dataset contains six movement types (elbow flexion/extension, forearm supination/pronation, hand open/close) and a rest class, collected using a motor imagery paradigm. This dataset was selected because it closely matches the motor tasks required for upper limb rehabilitation.
Methodology:
The methodology is divided into several key steps:
Signal Processing:
Filtering: Bandpass and notch filtering are applied to remove noise and unwanted frequencies.
Artifact Removal: Independent Component Analysis (ICA) is used to remove artifacts from muscle movements and eye blinks.
Epoching and Segmenting: EEG signals are divided into time windows for targeted analysis during motor imagery tasks.
Wavelet Packet Decomposition (WPD): This method extracts the time-frequency features from EEG signals, which are used for classification.
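The filtering and WPD steps above can be sketched as follows. The real pipeline uses MNE-Python (e.g. `raw.filter` and `mne.preprocessing.ICA`); to stay self-contained this sketch substitutes SciPy filters and PyWavelets, and the band edges, sampling rate, and wavelet choice are assumptions:

```python
import numpy as np
from scipy.signal import butter, filtfilt, iirnotch
import pywt

FS = 128  # Hz; a common Emotiv EPOC X sampling rate (assumption)

def preprocess(eeg, fs=FS):
    """Bandpass the motor-related band (8-30 Hz) and notch out 50 Hz mains.

    The full system also removes eye-blink/muscle artifacts with ICA,
    which is omitted here.
    """
    b, a = butter(4, [8, 30], btype="bandpass", fs=fs)
    x = filtfilt(b, a, eeg)
    bn, an = iirnotch(50.0, 30.0, fs=fs)
    return filtfilt(bn, an, x)

def wpd_features(epoch, wavelet="db4", level=3):
    """WPD features: mean energy of each terminal node at the given level."""
    wp = pywt.WaveletPacket(data=epoch, wavelet=wavelet, maxlevel=level)
    return np.array([np.mean(node.data ** 2) for node in wp.get_level(level)])

epoch = preprocess(np.random.randn(FS * 2))  # one 2 s single-channel epoch
feats = wpd_features(epoch)                  # 2**3 = 8 energy features
```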
Classification:
A 2-layer LSTM model processes the wavelet coefficients and classifies the input into one of the seven movement classes. The LSTM model was chosen due to its ability to handle sequential data effectively, outperforming other models such as CNNs and SVMs.
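The classifier described above can be sketched in PyTorch. The report fixes only the architecture (2 LSTM layers, 64 hidden units) and the seven output classes; the input feature size and sequence length below are assumptions:

```python
import torch
import torch.nn as nn

class MovementLSTM(nn.Module):
    """2-layer LSTM (64 hidden units) over sequences of WPD feature vectors."""

    def __init__(self, n_features=8, n_classes=7):
        super().__init__()
        self.lstm = nn.LSTM(n_features, 64, num_layers=2, batch_first=True)
        self.fc = nn.Linear(64, n_classes)

    def forward(self, x):            # x: (batch, time, n_features)
        out, _ = self.lstm(x)
        return self.fc(out[:, -1])   # classify from the last time step

model = MovementLSTM()
logits = model(torch.randn(4, 16, 8))  # 4 epochs, 16 windows, 8 features each
```

Taking only the final time step's hidden state is one common way to summarize a sequence for classification; pooling over all steps is an equally valid alternative.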
VR Rendering:
Once the movement class is predicted, the system sends the output to Unity via web sockets, where the corresponding movement is rendered in the VR environment. The VR setup includes 10 scenes that simulate real-life tasks, such as weight lifting, radio tuning, and trash disposal, to engage the patient in meaningful exercises.
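The hand-off to Unity amounts to serializing the predicted class and pushing it over a web socket. A minimal sketch of the message side, with the payload field names as assumptions (the report states only that predictions reach Unity via web sockets):

```python
import json

# Class order as listed in the report.
MOVEMENTS = [
    "elbow_flexion", "elbow_extension", "hand_open",
    "hand_close", "supination", "pronation", "rest",
]

def movement_message(class_id: int) -> str:
    """Serialize a predicted class as a JSON payload for the VR renderer."""
    return json.dumps({"class": class_id, "movement": MOVEMENTS[class_id]})

# In the live system the payload would be sent over a web socket, e.g.
# with the `websockets` package:
#   async with websockets.connect("ws://localhost:8765") as ws:
#       await ws.send(movement_message(pred))
msg = movement_message(4)
```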
Remote Monitoring:
Patient data, including EEG and EMG signals, is stored in MongoDB, and therapists can access progress reports through the web portal. The use of EMG sensors helps track muscle activity, offering a more objective measure of patient progress compared to traditional Manual Muscle Testing (MMT).
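A session record of the kind described above might be shaped as follows before insertion into MongoDB. The field names are illustrative; the report specifies only that EEG/EMG session data is stored for the therapist portal:

```python
from datetime import datetime, timezone

def session_document(patient_id, movement_counts, emg_rms):
    """Shape one rehabilitation session as a MongoDB document (hypothetical schema)."""
    return {
        "patient_id": patient_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "movement_counts": movement_counts,  # per-class prediction tallies
        "emg_rms": emg_rms,                  # objective muscle-activity measure
    }

# Stored with PyMongo in the live system, e.g.:
#   from pymongo import MongoClient
#   MongoClient()["rehab"]["sessions"].insert_one(doc)
doc = session_document("p001", {"hand_open": 12, "rest": 30}, 0.37)
```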
Results:
The system demonstrated significant improvements over traditional rehabilitation methods:
Signal Processing:
The Signal-to-Noise Ratio (SNR) of the processed EEG signals ranged between 20 and 60 dB, well above the 10 dB minimum generally required for EEG signals in the low-frequency range.
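SNR figures like these follow from the standard power-ratio definition, SNR(dB) = 10 log10(P_signal / P_noise). A minimal check:

```python
import numpy as np

def snr_db(signal, noise):
    """Signal-to-noise ratio in dB from mean signal and noise power."""
    return 10 * np.log10(np.mean(np.square(signal)) / np.mean(np.square(noise)))

# A component with 10x the noise amplitude has 100x the power, i.e. 20 dB:
print(snr_db(np.full(100, 10.0), np.ones(100)))  # → 20.0
```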
Classification Accuracy:
The LSTM model achieved a classification accuracy of 78.1%, with high precision and recall for movements like elbow flexion and supination. This represents a 30% improvement over previous methods used with the same dataset.
VR Environment:
The VR scenes were realistic and engaging, passing all VR evaluation guidelines for stereo vision, tracking, and latency perception. The immersive nature of the VR environment helps improve patient motivation and engagement during rehabilitation.
Conclusion:
The proposed BCI-VR system provides a comprehensive and innovative solution for patients with upper limb impairments. By combining EEG-based motor intention decoding with real-time VR feedback, the system promotes motor recovery and enhances the patient experience. The addition of remote monitoring and real-time data tracking further contributes to the system’s practical use in at-home rehabilitation settings.