In the Fair Platform you can create assignments, which are essentially datasets of instructions and students' responses. Each assignment can be run against different workflows: for example, an agentic workflow or a simple workflow with RAG. Each workflow takes that dataset and processes it in its own way. Any researcher can define their workflow using whatever technology, models, and methods they believe grade in the fairest way possible. In this sense, Fair is "platform-agnostic": you only use it to (optionally) collect preferences from the assignor (perhaps rubrics, documents for RAG, or instructions on what to allow and what not) and to report the results of their workflow in a clean UI.
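To make the idea concrete, here is a minimal sketch of what such a workflow contract could look like. All names here (`Submission`, `Grade`, `Workflow`, `KeywordWorkflow`) are hypothetical illustrations, not an existing Fair API; the keyword-matching grader is just a stand-in for a real agentic or RAG-based strategy.

```python
from dataclasses import dataclass
from typing import Protocol

@dataclass
class Submission:
    """One row of an assignment dataset: the instructions and a student's response."""
    student_id: str
    instructions: str
    response: str

@dataclass
class Grade:
    student_id: str
    score: float      # normalized to 0.0-1.0 in this sketch
    feedback: str

class Workflow(Protocol):
    """Any grading strategy (agentic, RAG-based, ...) just maps submissions to grades."""
    def grade(self, submissions: list[Submission]) -> list[Grade]: ...

class KeywordWorkflow:
    """Toy workflow: score by presence of expected keywords (placeholder for a real model)."""
    def __init__(self, keywords: list[str]):
        self.keywords = [k.lower() for k in keywords]

    def grade(self, submissions: list[Submission]) -> list[Grade]:
        grades = []
        for s in submissions:
            hits = sum(k in s.response.lower() for k in self.keywords)
            score = hits / len(self.keywords) if self.keywords else 0.0
            grades.append(Grade(s.student_id, score,
                                f"matched {hits}/{len(self.keywords)} keywords"))
        return grades
```

Because the platform only sees `grade(submissions) -> grades`, a researcher could swap in any implementation behind that interface without the platform caring how it works internally.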
So how will this be implemented? For now, I would like to support two ways of defining workflows. The first is a plain Python module (a single script.py or a folder with several Python files); the second is a package that can be installed via pip install. Both are actually pretty similar.
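A sketch of how the first option (a standalone script.py) could be loaded at runtime, using only the standard library. The module name `"user_workflow"` and the convention that the script exposes a `Workflow` class are assumptions for illustration; a pip-installed package could be discovered similarly, e.g. via `importlib.metadata.entry_points` under some agreed-upon group name.

```python
import importlib.util
import sys
import tempfile
from pathlib import Path

def load_workflow_from_script(path: str, attr: str = "Workflow"):
    """Load a workflow class from a standalone script.py by file path."""
    spec = importlib.util.spec_from_file_location("user_workflow", path)
    module = importlib.util.module_from_spec(spec)
    sys.modules[spec.name] = module   # register so dataclasses/pickling inside it work
    spec.loader.exec_module(module)
    return getattr(module, attr)

# Demo: write a minimal workflow script to a temp dir and load it dynamically.
with tempfile.TemporaryDirectory() as tmp:
    script = Path(tmp) / "script.py"
    script.write_text(
        "class Workflow:\n"
        "    def grade(self, submissions):\n"
        "        return [{'student_id': s['student_id'], 'score': 1.0}\n"
        "                for s in submissions]\n"
    )
    WorkflowCls = load_workflow_from_script(str(script))
    grades = WorkflowCls().grade(
        [{"student_id": "s1", "instructions": "", "response": ""}]
    )
```

This is why the two options end up so similar: in both cases the platform just resolves a module, grabs a workflow object from it, and calls the same `grade` method.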