Mind-controlled robots take a step forward

Two EPFL research groups have teamed up to develop a machine-learning program that can be connected to a human brain and used to control a robot. The software adjusts the robot's movements based on electrical signals from the brain. The hope is that with this invention, patients with quadriplegia will be able to carry out more of their daily activities on their own.

Patients with quadriplegia are prisoners of their own bodies, unable to speak or make the slightest movement. Researchers have been working for years to develop systems that can help these patients carry out certain tasks on their own. “People with a spinal cord injury often experience permanent neurological deficits and severe motor impairments that prevent them from performing even the simplest of tasks, such as grasping objects,” says Professor Aude Billard, head of EPFL's Learning Algorithms and Systems Laboratory. “Assistive robots can help these people recover some of their lost dexterity, since the robot can execute tasks in their place.”

Professor Billard carried out a study with Professor José del R. Millán, who at the time was head of the Brain-Machine Interface Laboratory at EPFL but has since moved to the University of Texas. The two research groups have developed a computer program that can control a robot using electrical signals emitted by a patient's brain. No voice command or touch function is needed; patients can move the robot simply with their thoughts. The study was published in Communications Biology, an open-access journal from Nature Portfolio.

Avoiding obstacles

To develop their system, the researchers started with a robotic arm that had been developed several years ago. This arm can move back and forth from right to left, reposition objects in front of it, and get around objects in its path. “In our study we programmed a robot to avoid obstacles, but we could have selected any other kind of task, like filling a glass of water or pushing or pulling an object,” says Professor Billard.

The engineers began by improving the robot's obstacle-avoidance mechanism so that it would be more precise. “At first, the robot would choose a path that was too wide for some obstacles, taking it too far away, and not wide enough for others, keeping it too close,” explains Carolina Gaspar Pinto Ramos Correia, a PhD student in Professor Billard's lab. “Since the goal of our robot is to help paralyzed patients, we had to find a way for users to communicate with it that doesn't require speaking or moving.”
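As a toy illustration of the problem the engineers were addressing, the sketch below shows how a single fixed clearance margin can be too wide for some obstacles and too narrow for others. This is not the study's code; the obstacle names and distances are invented for the example.

```python
# Each obstacle has its own safe passing distance (metres, invented values).
obstacles = {"mug": 0.05, "vase": 0.12, "lamp": 0.25}

# A robot using one fixed clearance margin for every obstacle.
fixed_margin = 0.15

verdicts = {}
for name, safe_distance in obstacles.items():
    if fixed_margin > safe_distance + 0.05:
        # Detour is much larger than needed for this obstacle.
        verdicts[name] = "too wide a detour"
    elif fixed_margin < safe_distance:
        # Path passes closer than this obstacle's safe distance.
        verdicts[name] = "too close"
    else:
        verdicts[name] = "ok"

print(verdicts)
```

With these invented numbers, the same margin produces all three outcomes at once, which is why a per-obstacle, patient-tunable margin is needed.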

An algorithm that can learn from thoughts

This meant developing an algorithm that could adjust the robot's movements based only on the patient's thoughts. The algorithm was connected to a headcap fitted with electrodes for running electroencephalogram (EEG) scans of the patient's brain activity. To use the system, the patient only has to watch the robot. If the robot makes an incorrect movement, the patient's brain emits an “error message” through a clearly identifiable signal, as if the patient were saying “No, not like that.” The robot then understands that what it is doing is wrong, but at first it doesn't know exactly why. For example, did it get too close to the object, or too far away? To help the robot find the right answer, the error message is fed into the algorithm, which uses an inverse reinforcement learning approach to work out what the patient wants and what actions the robot should take. This happens through a process of trial and error, in which the robot tries out different movements to see which one is correct. The process is quick: the robot usually needs only three to five attempts to find the right answer and carry out the patient's wishes.

“The robot's AI program can learn rapidly, but you have to tell it when it makes a mistake so that it can correct its behaviour,” says Professor Millán. “Developing the error-detection technology was one of the biggest technical challenges we faced.”

“What was particularly difficult in our study was linking a patient's brain activity to the robot's control system, or in other words, ‘translating’ a patient's brain signals into actions performed by the robot,” adds Iason Batzianoulis, the study's lead author. “We did that by using machine learning to associate a given brain signal with a specific task. Then we associated the tasks with individual robot controls so that the robot does what the patient has in mind.”
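The trial-and-error loop described above can be sketched roughly as follows. Everything here is illustrative rather than the study's implementation: the EEG error signal is simulated by a simple range check, and the candidate margins and the patient's preferred clearance range are invented values.

```python
# Simulated "preferred" clearance range (metres): the patient's brain
# flags an error whenever the robot's margin falls outside it.
PREFERRED = (0.10, 0.20)

def error_signal(margin: float) -> bool:
    """Stand-in for the EEG error-message classifier: True means 'wrong'."""
    low, high = PREFERRED
    return not (low <= margin <= high)

def adapt_margin(candidates: list[float]) -> tuple[float, int]:
    """Try candidate margins until the simulated brain stops objecting."""
    for attempt, margin in enumerate(candidates, start=1):
        if not error_signal(margin):
            return margin, attempt  # accepted: no error message emitted
    raise RuntimeError("no acceptable margin found")

margin, attempts = adapt_margin([0.40, 0.30, 0.05, 0.15, 0.25])
print(f"accepted margin {margin:.2f} m after {attempts} attempts")
```

In this toy run the robot converges within a handful of attempts, mirroring the three-to-five attempts reported in the article; the real system infers the correction from EEG signals via inverse reinforcement learning rather than from a hard-coded range.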

The next step: a mind-controlled wheelchair

The researchers eventually hope to use their algorithm to control wheelchairs. “For now, there are still a lot of technical hurdles to overcome,” says Professor Billard. “And wheelchairs pose a whole new set of challenges, since both the patient and the robot are in motion.” The team also plans to use their algorithm with a robot that can read several different kinds of signals and coordinate data received from the brain with that from visual motor functions.

Story source:

Materials provided by École Polytechnique Fédérale de Lausanne (EPFL). Original by Valerie Jennos. Note: Content may be edited for style and length.
