A partially paralysed man has been able to feed himself through a brain-computer interface connected to a robotic arm.
Researchers at the Johns Hopkins Applied Physics Laboratory in the US built a two-arm system that allowed the man to manipulate a knife and fork to cut food and bring it to his mouth.
The man, who has not been able to use his fingers for about 30 years, was able to eat dessert using his mind in less than 90 seconds.
“Although our results are preliminary, we are excited about giving users with limited capability a true sense of control over increasingly intelligent assistive machines,” said Dr Francesco Tenore, a senior project manager in APL’s Research and Exploratory Development department.
Advances in brain-computer interfaces, also referred to as brain-machine interfaces, have occurred rapidly in recent years. The technology holds near-term promise for transforming the lives of paralysed people, as well as those impacted by neurological disorders.
They come in a variety of forms – from brain implants to external sensors – but essentially work by decoding neural signals and translating them into external functions, from moving a computer cursor to controlling a robot.
The research team at Johns Hopkins used two arrays of 96 channels and two arrays of 32 channels to control the robotic arms – a relatively small number compared with brain-computer interfaces being developed elsewhere.
Devices built by Elon Musk’s Neuralink startup utilise thousands of channels, with the tech billionaire hoping to one day allow humans to compete with advanced forms of artificial intelligence (AI).
The team at Johns Hopkins is already working on the next iteration of the system, which could allow amputees to transform feelings of a phantom limb into real-world movements of a robotic prosthetic.
“This research is a great example of this philosophy where we knew we had all the tools to demonstrate this complex bimanual activity of daily living that non-disabled people take for granted,” Dr Tenore said.
“Many challenges still lie ahead, including improved task execution, in terms of both accuracy and timing, and closed-loop control without the constant need for visual feedback.”
A paper detailing the research at Johns Hopkins was published in the journal Frontiers in Neurorobotics.