Drones navigate unseen environments with liquid neural networks

MIT News, April 19, 2023
Autonomous robots can learn visual navigation tasks from offline human demonstrations and generalize online to unseen scenarios within the environment in which they were trained. It is challenging for these agents to go a step further and generalize robustly to new environments, with drastic scenery changes, that they have never encountered. Researchers at MIT have developed a method for creating robust flight navigation agents that successfully perform vision-based fly-to-target tasks beyond their training environment, under drastic distribution shifts. They designed an imitation learning framework built on liquid neural networks, a brain-inspired class of continuous-time neural models that are causal and adapt to changing conditions. The liquid agents learned to distill the given task from their visual inputs and discard irrelevant features, so their learned navigation skills transferred to new environments. In experiments comparing the liquid agents with several other state-of-the-art deep agents, this level of robustness in decision-making proved exclusive to liquid networks, in both their differential-equation and closed-form representations… read more. Open-access technical article.

End-to-end learning setup. Credit: Science Robotics, 19 Apr 2023, Vol. 8, Issue 77.
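The "liquid" models referenced above are liquid time-constant (LTC) networks, whose hidden state evolves under an input-dependent ODE, so the effective time constant of each neuron changes with the input. The following is a minimal NumPy sketch of one LTC state update using the fused semi-implicit Euler solver described in the LTC literature (Hasani et al., 2021); all parameter names and shapes here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def ltc_step(x, I, dt, tau, A, W, U, b):
    """One fused (semi-implicit Euler) step of a liquid time-constant cell.

    LTC dynamics:  dx/dt = -(1/tau + f(x, I)) * x + f(x, I) * A
    The bounded, input-dependent gate f makes the effective time
    constant vary with the input -- the "liquid" adaptation property.
    All parameters below are illustrative, not from the paper's code.
    """
    f = sigmoid(W @ x + U @ I + b)  # bounded synaptic gate, always > 0
    # Fused solver: treats the decay term implicitly, which keeps the
    # update stable even for stiff (fast-decaying) dynamics.
    return (x + dt * f * A) / (1.0 + dt * (1.0 / tau + f))

# Toy usage: drive a small hidden state with random "visual" features.
rng = np.random.default_rng(0)
n, m = 8, 4                      # hidden units, input features
W = 0.1 * rng.normal(size=(n, n))
U = 0.1 * rng.normal(size=(n, m))
b = np.zeros(n)
tau = np.ones(n)                 # base time constants
A = rng.normal(size=n)           # per-neuron equilibrium targets
x = np.zeros(n)
for _ in range(100):
    I = rng.normal(size=m)       # stand-in for a frame's visual features
    x = ltc_step(x, I, dt=0.1, tau=tau, A=A, W=W, U=U, b=b)
```

The closed-form representation mentioned in the article (CfC networks) replaces this numerical ODE solve with an analytic approximation of the same dynamics, trading solver steps for a direct expression.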

Posted in Autonomous systems and robotics.
