Consider a situation where a robot has to move a teapot from the kitchen to a room. It can accomplish this task by performing a sequence of actions. The problem is that in practical situations there is a huge number of actions to consider. The solution, therefore, is to select only the relevant actions from the action set. In our example, when transferring the teapot from the kitchen to a room, picking up the microwave is less relevant than picking up the teapot. For more information, read this.
In this project we attempt to solve the situation described above by learning the relevancy of actions, using a Naive Bayes classifier.
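To illustrate the idea (this is a hand-rolled sketch, not the project's actual implementation), a Naive Bayes classifier can estimate whether an action is relevant from a few binary attributes. The attribute names and training data below are hypothetical:

```python
import math
from collections import defaultdict

class NaiveBayesRelevancy:
    """Tiny Naive Bayes classifier: argmax_label P(label | attributes)."""

    def __init__(self):
        self.class_counts = defaultdict(int)  # label -> count
        # label -> (attribute index, value) -> count
        self.feature_counts = defaultdict(lambda: defaultdict(int))

    def fit(self, samples, labels):
        for attrs, label in zip(samples, labels):
            self.class_counts[label] += 1
            for i, v in enumerate(attrs):
                self.feature_counts[label][(i, v)] += 1

    def predict(self, attrs):
        best, best_score = None, float("-inf")
        total = sum(self.class_counts.values())
        for label, count in self.class_counts.items():
            # log prior + log likelihoods with Laplace smoothing
            score = math.log(count / total)
            for i, v in enumerate(attrs):
                num = self.feature_counts[label][(i, v)] + 1
                den = count + 2  # 2 possible values per binary attribute
                score += math.log(num / den)
            if score > best_score:
                best, best_score = label, score
        return best

# Hypothetical binary attributes: (object mentioned in goal?, object near robot?)
X = [(1, 1), (1, 0), (0, 1), (0, 0)]
y = ["relevant", "relevant", "irrelevant", "irrelevant"]
clf = NaiveBayesRelevancy()
clf.fit(X, y)
print(clf.predict((1, 1)))  # → relevant
```

During planning, actions classified as irrelevant can then be deprioritized or pruned.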
Implementation-wise, this project had three major components:
- Learning Relevancy of Actions
This work was done in the following phases:
- Developing an encoding that converts the files involved in planning into learning variables. [Blog]
- Implementing code to convert the files involved in planning into learning variables. [Commits]
- Implementing the learning algorithm, which outputs the relevancy of an action given some attributes. [Commits]
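One way such an encoding could work (the project's actual encoding is described in the linked blog post; this is only a hypothetical sketch) is to map each action instance to a fixed-length binary feature vector recording which of its parameters also appear in the goal description:

```python
def encode_action(action_params, goal_objects, max_params=3):
    """Map an action's parameters to binary learning variables.

    Feature i is 1 if the action's i-th parameter is mentioned in the
    goal, 0 otherwise; the vector is padded/truncated to max_params.
    (Hypothetical encoding, for illustration only.)
    """
    features = []
    for i in range(max_params):
        if i < len(action_params) and action_params[i] in goal_objects:
            features.append(1)
        else:
            features.append(0)
    return tuple(features)

# Hypothetical problem: the goal mentions the teapot and the target room.
goal_objects = {"teapot", "room1"}
print(encode_action(["teapot", "kitchen", "room1"], goal_objects))  # → (1, 0, 1)
print(encode_action(["microwave", "kitchen"], goal_objects))        # → (0, 0, 0)
```

Vectors like these can then be fed to the learning algorithm as its attributes.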
- Merging Planning Code and Learning Code
This work was done in the following phases:
- Extending the A* search algorithm used in the planner to an algorithm that can use action relevancy to its benefit. Two such algorithms were developed: TED-A* and SED-A*. [Blog]
- Merging the learning code with the planning code using the TED-A* algorithm. [Commits] [Blog]
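To give a flavor of how relevancy can benefit search (the actual TED-A* and SED-A* algorithms are described in the linked blog post; this generic sketch simply prunes low-relevancy actions during node expansion):

```python
import heapq

def relevancy_astar(start, is_goal, successors, h, relevancy, threshold=0.5):
    """A* search that consults an action-relevancy estimate.

    `successors(state)` yields (action, next_state, cost) triples;
    `relevancy(action)` returns a score in [0, 1]. Actions scoring
    below `threshold` are skipped, shrinking the branching factor.
    Illustration only, not the project's TED-A*/SED-A* implementation.
    """
    open_heap = [(h(start), 0, start, [])]
    best_g = {start: 0}
    while open_heap:
        f, g, state, plan = heapq.heappop(open_heap)
        if is_goal(state):
            return plan
        for action, nxt, cost in successors(state):
            if relevancy(action) < threshold:
                continue  # prune actions judged irrelevant
            ng = g + cost
            if ng < best_g.get(nxt, float("inf")):
                best_g[nxt] = ng
                heapq.heappush(open_heap, (ng + h(nxt), ng, nxt, plan + [action]))
    return None

# Toy domain: move the teapot to room1; picking up the microwave is noise.
succ = {
    "kitchen": [("pickup-teapot", "holding", 1), ("pickup-microwave", "dead-end", 1)],
    "holding": [("move-to-room1", "room1", 1)],
    "room1": [],
    "dead-end": [],
}
plan = relevancy_astar(
    "kitchen",
    is_goal=lambda s: s == "room1",
    successors=lambda s: succ[s],
    h=lambda s: 0,
    relevancy=lambda a: 0.1 if "microwave" in a else 0.9,
)
print(plan)  # → ['pickup-teapot', 'move-to-room1']
```

With the learned relevancy scores plugged in, the planner never expands the irrelevant microwave action.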
- More Generalization and Support for Future Work
This work was done in the following phases:
- An API was created for testing new encodings in the future and for generalizing the learning interface for the planner. [Commits] [Pull Request]
- Adding a tutorial that explains the different learning-code components. [Commits] [Pull Request]
For the first two parts there is no explicit pull request, since the work directories were added directly to the current branch of the main repository [Commits]. All commits with the commit message Merge Lashit work on GSoC'17 or Integration of Lashit's code belong to work done under this project.