A robot learns to imagine itself

Science Daily, July 13, 2022
Internal computational models allow robots to consider the outcomes of multiple possible future actions without trying them out in physical reality. Recent progress in fully data-driven self-modeling has enabled machines to learn their own forward kinematics directly from task-agnostic interaction data. However, forward kinematic models can predict only limited aspects of the morphology, such as the positions of end effectors or the velocities of joints and masses. A key challenge is to model the entire morphology and kinematics without prior knowledge of which aspects of the morphology will be relevant to future tasks.

Researchers at Columbia University proposed that, instead of directly modeling forward kinematics, a more useful form of self-modeling is one that can answer space occupancy queries conditioned on the robot's state. Such query-driven self-models are continuous in the spatial domain, memory efficient, fully differentiable, and kinematics aware, and can be used across a broader range of tasks. The researchers demonstrated a visual self-model accurate to about 1% of the workspace, enabling the robot to perform various motion planning and control tasks. Visual self-modeling can also allow the robot to detect, localize, and recover from real-world damage, leading to improved machine resiliency.
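The query-driven self-model described above can be thought of as a function that maps the robot's joint state and a 3D query point to the probability that the point is occupied by the robot's body. The sketch below is a minimal illustrative assumption using a small randomly initialized MLP, not the architecture or training procedure from the paper; all layer sizes, names, and the 4-DoF state are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_mlp(sizes):
    """Random weights for a small fully connected network (illustrative only)."""
    return [(rng.standard_normal((m, n)) * np.sqrt(2.0 / m), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def occupancy(params, joint_state, query_points):
    """Occupancy probability for each query point, conditioned on robot state.

    joint_state:  (J,) joint angles describing the robot's configuration.
    query_points: (N, 3) spatial points in the workspace.
    Returns (N,) probabilities in (0, 1): higher means "likely occupied".
    """
    # Condition on state by concatenating it to every query point.
    state = np.broadcast_to(joint_state, (len(query_points), len(joint_state)))
    x = np.concatenate([query_points, state], axis=1)
    for w, b in params[:-1]:
        x = np.maximum(x @ w + b, 0.0)        # ReLU hidden layers
    w, b = params[-1]
    logits = (x @ w + b).squeeze(-1)
    return 1.0 / (1.0 + np.exp(-logits))      # sigmoid -> occupancy probability

# Example: a hypothetical 4-DoF robot queried at 5 random workspace points.
params = init_mlp([3 + 4, 32, 32, 1])
probs = occupancy(params, np.zeros(4), rng.uniform(-1.0, 1.0, size=(5, 3)))
```

Because the model is a smooth function of both the query point and the joint state, it is continuous in space and differentiable end to end, which is what makes it usable inside gradient-based motion planning, as the article notes.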

Implicit visual self-model representation. Credit: SCIENCE ROBOTICS, 13 Jul 2022, Vol 7, Issue 68 

