GSoC 2020 Work Product | Pulkit Aggarwal
Project Idea: Introduce additional interaction types
Organization: Oppia Foundation
Student: Pulkit Aggarwal (aggawalpullkit596)
Project Description: Oppia Android currently includes a subset of the interactions (i.e., question types) from the web app. Some additional interactions can be implemented in the Android app, since they are likely to be useful for the mathematics lessons that will be released on it. These interactions include:
- Image Region Selection
- Ratio Input
- Drag and Drop Sort
Oppia's mission is to help anyone learn anything they want in an effective and enjoyable way.
By creating a set of free, high-quality, demonstrably effective lessons with the help of educators from around the world, Oppia aims to provide students with quality education — regardless of where they are or what traditional resources they have access to.
By teaching with Oppia, you can improve your skills in communication and empathy while helping to improve education for students around the world. Or, if you're not ready to teach yet, you can still share feedback on lessons to help make them better for other students!
Whether you're a K-12 educator, a graduate student, or an individual who's passionate about a specific subject and wants to share your knowledge, Oppia welcomes you. Join the community and start exploring with us.
The following goals were framed according to the requirements and the scope of the project:
- Adding support for the Drag and Drop Sort interaction.
- Making the Drag and Drop Sort interaction accessible.
- Adding support for the Image Region Selection interaction.
- Making the Image Region Selection interaction accessible.
- Adding a new interaction, Ratio Input, to the creator tools and backend of the project.
- Implementing the Ratio Input interaction in the Android app.
Work Done during the Coding Period
My first milestone was introducing a new interaction, Drag and Drop Sort, into the Android app. This seemed straightforward at first, since Android supports drag and drop out of the box, but it was not simple at all. The interaction has a special mode that, when enabled, allows learners to club multiple items together as one, and that was not easy to implement. Beyond this, I had to make the interaction accessible to every user, including making drag and drop work when a screen reader is enabled, which is a cumbersome task; I had to explore several different designs and approaches to make it work correctly. The working product is below:
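One way to reason about the "club items together" mode is to model the learner's answer as an ordered list of groups, where a normal drag reorders groups and the special mode merges the dragged item into a target group. The sketch below illustrates this idea in plain Kotlin; the class and method names are hypothetical, not the actual Oppia Android implementation.

```kotlin
// Illustrative model for a Drag and Drop Sort answer. Each inner list is a
// group of items that occupies a single position in the ordering.
data class SortableItem(val contentId: String)

class DragDropSortAnswer(items: List<SortableItem>) {
    private val groups: MutableList<MutableList<SortableItem>> =
        items.map { mutableListOf(it) }.toMutableList()

    // Standard reorder: move the group at fromIndex to toIndex.
    fun move(fromIndex: Int, toIndex: Int) {
        val group = groups.removeAt(fromIndex)
        groups.add(toIndex, group)
    }

    // "Club together" mode: merge the dragged group into the target group.
    fun mergeInto(fromIndex: Int, targetIndex: Int) {
        require(fromIndex != targetIndex) { "Cannot merge a group into itself" }
        val dragged = groups.removeAt(fromIndex)
        // Removing the dragged group shifts later indices down by one.
        val adjustedTarget = if (targetIndex > fromIndex) targetIndex - 1 else targetIndex
        groups[adjustedTarget].addAll(dragged)
    }

    fun currentOrder(): List<List<String>> =
        groups.map { group -> group.map { it.contentId } }
}
```

For example, starting from items a, b, c, merging b into a's group yields [[a, b], [c]]. Keeping this state separate from the RecyclerView/touch handling is also what makes a screen-reader path feasible: accessibility actions ("move up", "move down", "merge with previous item") can call the same `move`/`mergeInto` operations without any physical drag gesture.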
I expected my second milestone, Image Region Selection, to be tricky to implement because I had never built anything like it before, but it turned out to be much simpler than expected. I initially planned to use a third-party library for the entire feature, and that went well until the last week, when the UI tests I introduced started failing due to lag from the library itself. At that point I had to come up with my own solution, because I could no longer rely on the third-party library and did not have much time to implement something from scratch.
Another task for the interaction was making it accessible. I initially did not think this interaction could be made accessible (image regions don't speak for themselves :P), but thanks to my mentor we came up with a nice idea. It still has limitations, since this interaction is fundamentally about looking at an image and then choosing a region. The working product is below:
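The core of an image-region interaction is a hit test: mapping a tap on the image to a named region. A minimal sketch, assuming regions are axis-aligned rectangles in normalized [0, 1] image coordinates (names here are illustrative, not Oppia's actual classes):

```kotlin
// Illustrative region model: rectangles in normalized image coordinates,
// so the hit test is independent of the rendered image size.
data class NormalizedRect(
    val left: Float, val top: Float, val right: Float, val bottom: Float
)

data class LabeledRegion(val label: String, val rect: NormalizedRect)

// Returns the label of the first region containing the tap, or null
// if the tap missed every region.
fun hitTest(regions: List<LabeledRegion>, x: Float, y: Float): String? =
    regions.firstOrNull { r ->
        x in r.rect.left..r.rect.right && y in r.rect.top..r.rect.bottom
    }?.label

// For screen-reader users, each region can instead be exposed as a
// focusable element announced by its label, so selecting an answer does
// not depend on seeing the image.
fun contentDescriptions(regions: List<LabeledRegion>): List<String> =
    regions.map { "Select region: ${it.label}" }
```

The accessibility idea mirrors this: rather than requiring a visual tap, each labeled region can be surfaced as a focusable, announceable element, which is why region labels (content descriptions) matter beyond the visual flow.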
My third milestone was quite challenging, as I had to work on Oppia's backend project, which uses AngularJS along with Google App Engine, and I had no prior experience with either. Understanding the complete project while implementing the interaction in two weeks was difficult, but I was able to complete it, although I implemented only the creator's flow. I also implemented the same interaction in the Android app. The working product is below:
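A Ratio Input answer is typically entered as text such as "1:2:3", so the interaction needs to parse and validate it before matching it against the creator's rules. The sketch below shows one plausible shape for that logic in Kotlin; the function names, validation rules, and equivalence-by-reduction idea are assumptions for illustration, not the exact Oppia implementation.

```kotlin
// Illustrative parser: "1:2:3" -> [1, 2, 3], or null if the input is not
// a valid ratio (fewer than two terms, non-numeric, or non-positive terms).
fun parseRatio(raw: String): List<Int>? {
    val parts = raw.replace(" ", "").split(":")
    if (parts.size < 2) return null          // A ratio needs at least two terms.
    val components = parts.map { it.toIntOrNull() ?: return null }
    if (components.any { it <= 0 }) return null  // Terms must be positive.
    return components
}

// Two ratios can be treated as equivalent when they reduce to the same
// proportions, e.g. 2:4 reduces to 1:2.
fun reduceRatio(components: List<Int>): List<Int> {
    fun gcd(a: Int, b: Int): Int = if (b == 0) a else gcd(b, a % b)
    val divisor = components.reduce(::gcd)
    return components.map { it / divisor }
}
```

For example, `parseRatio("1 : 2 : 3")` yields `[1, 2, 3]`, while `parseRatio("1:")` yields `null`; comparing `reduceRatio` outputs is one way a rule like "is equivalent to 1:2" could be checked.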
In general, some issues and polish work remain to be completed, along with new features to add as the project progresses. The most essential of these tasks are as follows:
- Completing the learner's view for the Ratio Input interaction.
- Fixing Robolectric tests for the Image Region Selection interaction.
- Adding content description support for image regions to the backend.
- Documenting how to add a new interaction to the Android app.