Human beings rely on a complex system to perceive and interact with their surroundings. Our sensory system not only processes external stimuli but also anticipates the outcomes of our actions, which helps guide our movements.
This concept of anticipation-based perception is well established in neuroscience but is challenging to apply to soft robots. Unlike rigid robots, soft robots have flexible, deformable bodies, which makes it harder to tell their own motion apart from outside forces.
In soft robotics, the perception-action loop (how the robot senses its environment and reacts to it) is still under development. A soft robot must account for deformations of its own body and differentiate between changes caused by internal movement and those caused by external contact.
Effective sensing methods, such as tactile and force sensors, have been developed, but they struggle with rapid, unanticipated interactions and with complex 3D environments.
To address these challenges, multi-modal perception systems are required: systems that integrate various sensors to detect and interpret the robot's movements, both internal and those arising from external interaction. In other words, perception and proprioception working together, as in the human nervous system.
The study
Technology inspired by the human perception-action loop has been proposed for soft robots. By integrating expected sensory data with actual sensor readings, the robot can quickly identify discrepancies, such as unexpected contact.
Researchers have now developed a soft robot with exactly this kind of perception, able to detect and respond to external contact. The robot is built from flexible rods with a sensing system integrated into its soft body, enabling it to compare its expected and actual shapes during movement.
By measuring these deformations, the robot can distinguish between internal movements and external contacts, detecting both the presence and the direction of forces acting on it. It responds to changes in under a second, even without visual feedback.
Real-Time Sensing and Shape Estimation
The robot’s perception system works by calculating the expected shape based on movement commands and comparing it with the actual shape sensed by embedded sensors.
This process lets the robot determine whether an external force is acting on it and, if so, the direction and magnitude of that force. The system has been shown to remain accurate even when external loads or obstacles are encountered, maintaining high precision in estimating the robot's shape and position regardless of the applied forces.
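To make the comparison concrete, here is a minimal Python sketch of expected-versus-actual shape matching. It is not the authors' implementation: the function name, the threshold value, and the representation of the shape as a list of 3D backbone points are all assumptions for illustration.

```python
import numpy as np

CONTACT_THRESHOLD = 0.005  # metres; hypothetical tuning value


def detect_external_contact(expected_shape, sensed_shape):
    """Compare the model-predicted shape with the sensed shape.

    expected_shape, sensed_shape: (N, 3) arrays of points along the
    robot's backbone. A large residual suggests external contact.
    """
    residual = np.asarray(sensed_shape) - np.asarray(expected_shape)
    deviation = np.linalg.norm(residual, axis=1)   # deviation at each point
    idx = int(np.argmax(deviation))                # most-deflected point
    if deviation[idx] < CONTACT_THRESHOLD:
        return None  # deviation explained by the robot's own motion
    direction = residual[idx] / deviation[idx]     # unit contact direction
    return {"location_index": idx,
            "direction": direction,
            "magnitude": deviation[idx]}
```

In the real system, the expected shape would come from a mechanical model driven by the motion commands, and the residual deformation would be mapped to a contact force through that model rather than reported as a raw distance.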
Robot sensory nerves?
The robot can detect contact with obstacles of varying stiffness and adapt its actions based on real-time feedback. For example, when it touches a rigid or a soft object, it can accurately sense the force and direction of the contact; one simple way such measurements could be turned into a stiffness estimate is sketched after this paragraph.
This level of sensitivity enables the robot to make informed decisions in dynamic environments, even under conditions where visual feedback is unavailable, such as in dark or cluttered spaces.
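The article does not say how stiffness is classified, so the following is only a plausible sketch: a Hooke's-law-style estimate that divides the measured contact force by the measured indentation. The function name and threshold are hypothetical.

```python
def estimate_stiffness(force_n, indentation_m, min_indentation_m=1e-4):
    """Crude contact-stiffness estimate (hypothetical helper).

    force_n: measured contact force in newtons.
    indentation_m: how far the obstacle surface yielded, in metres.
    Returns force per unit indentation; large values indicate a
    rigid obstacle, small values a soft one.
    """
    if indentation_m < min_indentation_m:
        return float("inf")  # no measurable yield: treat as rigid
    return force_n / indentation_m
```

With an estimate like this, a controller could, for example, push gently through soft obstacles while steering around rigid ones.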
Autonomous Exploration and Maze Navigation
The enhanced sensory capabilities of this soft robot enable it to autonomously navigate environments, such as mazes, by detecting and responding to obstacles. The robot uses sensors embedded in its flexible body to detect interactions with walls and obstacles.
Through continuous shape and contact feedback, the robot adjusts its movements, much as a human would use touch to feel the way forward, and finds its way to the goal. This demonstrates a significant step for autonomous soft robotics in tasks like exploration and manipulation.
This feedback allows the robot to follow paths, detect and avoid obstacles, and adapt its behavior to the walls it encounters. Such real-time interaction with the environment, reminiscent of the human nervous system, makes the robot well suited to tasks requiring a delicate touch and intelligent decision-making.
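As an illustration only, here is one way a contact-reactive exploration step could be written, reusing the hypothetical detect_external_contact() helper from the earlier sketch. The robot interface (expected_shape, sensed_shape, heading, move) is invented for this example and does not come from the paper.

```python
import numpy as np
# Reuses detect_external_contact() from the earlier sketch.


def explore_step(robot, step_size=0.01):
    """One iteration of a hypothetical contact-reactive exploration loop.

    `robot` is assumed to expose expected_shape(), sensed_shape(),
    a unit `heading` vector, and move(direction, distance); these
    names are invented for illustration.
    """
    contact = detect_external_contact(robot.expected_shape(),
                                      robot.sensed_shape())
    if contact is None:
        robot.move(robot.heading, step_size)  # path is clear, keep going
        return
    # A wall was touched: remove the component of the heading that
    # points into the contact, so the robot slides along the wall.
    normal = contact["direction"]
    tangent = robot.heading - np.dot(robot.heading, normal) * normal
    norm = np.linalg.norm(tangent)
    if norm < 1e-9:
        # Head-on contact: turn perpendicular to the contact normal
        # (assumes a planar maze in the xy-plane).
        tangent = np.array([normal[1], -normal[0], 0.0])
        norm = np.linalg.norm(tangent)
    robot.heading = tangent / norm
    robot.move(robot.heading, step_size)
```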
Learning from Human Interaction
In another experiment, a human operator trains the robot to perform a specific task, such as a massage. The operator manually guides the robot to the desired positions and adjusts the applied force, which is measured by the robot's force sensors.
The robot records this sequence of movements and force levels and then replays it without further human input, reproducing the taught task with impressive precision.
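What the paragraph above describes is essentially learning from demonstration by record-and-replay. Here is a minimal Python sketch of that idea; the robot interface (read_pose, read_force, command) and the sampling rate are assumptions, not the paper's API.

```python
import time


class DemonstrationRecorder:
    """Minimal record-and-replay sketch of learning from demonstration.

    Assumes a hypothetical `robot` object with read_pose(),
    read_force(), and command(pose, force); the paper's actual
    interface is not specified here.
    """

    def __init__(self, robot, rate_hz=50):
        self.robot = robot
        self.dt = 1.0 / rate_hz
        self.trajectory = []

    def record(self, duration_s):
        """Log pose and contact force while a human guides the robot."""
        t_end = time.time() + duration_s
        while time.time() < t_end:
            self.trajectory.append((self.robot.read_pose(),
                                    self.robot.read_force()))
            time.sleep(self.dt)

    def replay(self):
        """Reproduce the demonstrated poses and force levels."""
        for pose, force in self.trajectory:
            self.robot.command(pose, force)
            time.sleep(self.dt)
```

A real controller would likely interpolate between samples and close a force-feedback loop rather than replaying raw commands, but the structure is the same.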
Enhanced Perception for Better Performance
The robot’s ability to differentiate between internal movements and external interactions is based on advanced perception techniques.
It continuously monitors its body’s deformation and compares it with expected movements, distinguishing between actions caused by its own motion and those triggered by external contact.
This sophisticated feedback system, which resembles the human proprioceptive system, enables the robot to perform complex tasks like navigation and manipulation with minimal error, even in dynamic environments.
This approach holds promise for applications like autonomous navigation and human-robot collaboration.
About the scientific paper:
First author: Peiyi Wang, China and Singapore
Published: Nature Communications, November 2024
Link to paper: https://www.nature.com/articles/s41467-024-54327-6