
Researchers from the RIKEN Guardian Robot Project in Japan have developed an exoskeleton system that uses artificial intelligence to provide better, more adaptable assistance during everyday tasks, reports Tech Xplore. The goal: move beyond the limitations of exoskeletons that follow preset motion paths or rely heavily on muscle sensors such as electromyography (EMG).
The new system combines several input sources. First, a head-mounted camera near the user’s eyes captures visual data from the user’s point of view. Second, kinematic sensors on the user’s body, around the knees and torso, capture motion data. A transformer-based AI model processes both streams to infer what the user is trying to do (e.g., pick up an object, climb a step) and then actuates the exoskeleton to assist accordingly. In tests, this approach reduced muscle activation in participants performing these tasks, indicating that the exoskeleton was taking on more of the physical effort and lessening strain on the user. Moreover, a model trained on one user’s data transferred to other users without retraining. That cross-user adaptability matters because many exoskeletons require per-user calibration, which is time-consuming and limits broader usability.
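To make the fusion idea concrete, here is a minimal, illustrative sketch of how a transformer-style model might combine vision and kinematic inputs to score candidate intents. This is not the RIKEN team's implementation: the token counts, embedding width, intent labels, and random weights are all hypothetical stand-ins, and a real system would use trained parameters and many layers.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(tokens, Wq, Wk, Wv):
    # Single-head scaled dot-product attention over the fused token sequence.
    Q, K, V = tokens @ Wq, tokens @ Wk, tokens @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    return softmax(scores) @ V

D = 16  # shared embedding width (hypothetical)
INTENTS = ["pick_up_object", "climb_step", "walk"]  # illustrative labels only

# Hypothetical per-frame inputs: vision tokens from the head-mounted camera
# and kinematic tokens from knee/torso sensors, projected to the same width.
vision_tokens = rng.normal(size=(4, D))     # e.g. 4 image-patch embeddings
kinematic_tokens = rng.normal(size=(3, D))  # e.g. knee angles, torso IMU

# Fuse modalities by concatenating their token sequences, so attention can
# relate what the user sees to how their body is moving.
tokens = np.concatenate([vision_tokens, kinematic_tokens], axis=0)

Wq, Wk, Wv = (rng.normal(scale=D**-0.5, size=(D, D)) for _ in range(3))
attended = self_attention(tokens, Wq, Wk, Wv)

# Mean-pool the attended tokens; a linear head scores each candidate intent,
# which would then drive the exoskeleton's assistance strategy.
pooled = attended.mean(axis=0)
W_out = rng.normal(scale=D**-0.5, size=(D, len(INTENTS)))
intent_probs = softmax(pooled @ W_out)
predicted = INTENTS[int(intent_probs.argmax())]
print(predicted, intent_probs.round(3))
```

Concatenating token sequences from different modalities before attention is one common fusion design; it lets every kinematic token attend to every visual token without a hand-built alignment between the two streams.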
The researchers see this as promising for health care, rehabilitation, and elder care, where people often need wearable robotic assistance in varied, unpredictable environments; adaptivity and generalization are key in those settings. Still, challenges remain: robustness across diverse conditions, latency, power, comfort, and ensuring safety when assisting complex motions in the real world. The system has not yet been deployed in everyday use beyond the lab.