Learning the Goal Seeking Behaviour for Mobile Robots
Abstract
Machine learning techniques have been widely used to navigate mobile robots towards a region of interest while avoiding obstacles. Surprisingly, there is very little literature on using machine learning to drive a robot to a precisely defined goal configuration amidst static and dynamic obstacles. Reaching a precise configuration is required in applications such as docking the robot for charging and mobile manipulation. This paper poses the problem of planning the motion of an autonomous robot to a specific goal configuration in the presence of static and dynamic obstacles as a machine learning problem, and compares two approaches, supervised learning and reinforcement learning, both using neural networks. The comparisons are carried out in simulation as well as on the physical robot Amigobot operating in the robotics laboratory of the host institute. Experimental results show that reinforcement learning is better suited to the problem, since it does not require a human expert to generate training data, which is expensive to obtain.
Link to the paper: https://ieeexplore.ieee.org/abstract/document/8467230
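The sketch below is an illustrative way to frame the goal-seeking task as a learning problem, not the paper's implementation: the state encodes the robot's pose relative to the goal configuration plus range-sensor readings, and a shaped reward drives the robot towards the exact goal pose while penalising proximity to obstacles. All function names, tolerances, and weights here are assumptions.

```python
# Minimal framing sketch (assumed, not the paper's code): state and reward
# for learning to reach a precise goal configuration amidst obstacles.
import numpy as np

def wrap(angle):
    """Wrap an angle to [-pi, pi]."""
    return (angle + np.pi) % (2 * np.pi) - np.pi

def state_vector(robot_pose, goal_pose, range_readings):
    """Relative goal pose (distance, bearing, heading error) + sensor ranges."""
    dx, dy = goal_pose[0] - robot_pose[0], goal_pose[1] - robot_pose[1]
    distance = np.hypot(dx, dy)
    bearing = wrap(np.arctan2(dy, dx) - robot_pose[2])    # angle to goal in robot frame
    heading_error = wrap(goal_pose[2] - robot_pose[2])    # final-orientation error
    return np.concatenate(([distance, bearing, heading_error],
                           np.asarray(range_readings, dtype=float)))

def reward(state, goal_tolerance=0.05, obstacle_clearance=0.3):
    """Large reward only when the precise goal configuration is reached."""
    distance, _, heading_error = state[0], state[1], state[2]
    ranges = state[3:]
    if distance < goal_tolerance and abs(heading_error) < 0.1:
        return 100.0                                      # reached the goal configuration
    r = -0.1 * distance - 0.05 * abs(heading_error)       # shaping towards the goal pose
    if ranges.size and ranges.min() < obstacle_clearance:
        r -= 10.0                                         # too close to an obstacle
    return r

# Example usage with made-up poses and sonar-like readings
s = state_vector(robot_pose=(0.0, 0.0, 0.0),
                 goal_pose=(1.0, 1.0, np.pi / 2),
                 range_readings=[1.2, 0.8, 2.0])
print(s, reward(s))
```

In a supervised setting such a state vector would be mapped to expert-demonstrated velocity commands, whereas in reinforcement learning a neural-network policy would be trained against a reward of this kind, which is the trade-off the paper compares.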
Video of the bot
Youtube Screencast