Abstract
Wearable robots have the potential to improve the lives of countless individuals; however, challenges associated with controlling these systems must be addressed before they can reach their full potential. Modern control strategies for wearable robots are predicated on activity-specific implementations, and testing is usually limited to a single, fixed activity within the laboratory (e.g., level-ground walking). To accommodate various activities in real-world scenarios, control strategies must be able to safely and seamlessly transition between activity-specific controllers. One potential solution to this challenge is to infer the wearer's intent using pattern recognition of locomotion sensor data. To this end, we developed an intent recognition framework that applies convolutional neural networks to image-encoded sensor data (i.e., spectrograms), enabling prediction of the locomotor activity of the wearer's next step. In this letter, we describe our intent recognition system, comprising a mel-spectrogram front end and a subsequent neural network architecture. In addition, we analyzed the effect of sensor location and modality on recognition performance, and compared our proposed system to state-of-the-art locomotor intent recognition strategies. The system attained high classification performance (error rate: 1.1%), comparable to or better than previous systems.
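The pipeline the abstract describes (windowed sensor data → mel-spectrogram image → CNN classifier) can be illustrated with a minimal sketch. This is not the authors' implementation: the sampling rate, window length, channel count, mel parameters, network depth, and the five locomotion classes below are all illustrative assumptions, using librosa for the spectrogram and PyTorch for the network.

```python
# Minimal sketch of spectrogram-based intent recognition (illustrative only).
import numpy as np
import librosa
import torch
import torch.nn as nn

SR = 500          # assumed sensor sampling rate (Hz)
WIN = 2 * SR      # assumed 2 s analysis window per step
N_MELS = 32
CLASSES = ["level_walk", "ramp_up", "ramp_down", "stair_up", "stair_down"]

def to_mel_image(window: np.ndarray) -> np.ndarray:
    """Convert one (channels, samples) sensor window into a stack of
    log-mel spectrograms, one image channel per sensor channel."""
    mels = []
    for ch in window:
        S = librosa.feature.melspectrogram(
            y=ch.astype(np.float32), sr=SR,
            n_fft=128, hop_length=32, n_mels=N_MELS)
        mels.append(librosa.power_to_db(S, ref=np.max))
    return np.stack(mels)  # shape: (channels, n_mels, time_frames)

class IntentCNN(nn.Module):
    """Small CNN over the spectrogram image; depth and widths are guesses,
    not the architecture reported in the letter."""
    def __init__(self, in_ch: int, n_classes: int = len(CLASSES)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1))
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

# Example: one 6-channel IMU window -> predicted activity for the next step.
window = np.random.randn(6, WIN)            # stand-in for real sensor data
img = torch.from_numpy(to_mel_image(window)).float().unsqueeze(0)
pred = IntentCNN(in_ch=6)(img).argmax(dim=1).item()
print(CLASSES[pred])
```

Encoding each sensor channel as a separate spectrogram channel lets a standard 2-D CNN exploit time-frequency structure across all sensors at once, which is one plausible reading of the "image encoding" described above.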
| Original language | English (US) |
| --- | --- |
| Article number | 9134897 |
| Pages (from-to) | 5440-5447 |
| Number of pages | 8 |
| Journal | IEEE Robotics and Automation Letters |
| Volume | 5 |
| Issue number | 4 |
| DOIs | |
| State | Published - Oct 2020 |
Keywords
- Wearable robots
- deep learning
- intent recognition
- prosthetics and exoskeletons
- sensor fusion
ASJC Scopus subject areas
- Control and Systems Engineering
- Biomedical Engineering
- Human-Computer Interaction
- Mechanical Engineering
- Computer Vision and Pattern Recognition
- Computer Science Applications
- Control and Optimization
- Artificial Intelligence