ML in law symposium
Andreas’ idea: treat explainability, the ability to explain decisions, as a hard constraint, and then maximize the performance we can get subject to it.
My three takeaways:
- Tech folks have a tendency to “fly in and fix everything.” That feels like a dangerous approach here. It’s far better to stand on the shoulders of existing legal scholarship and precedent, which have studied fairness, discrimination, and bias for decades, even if that slows down progress.
- Machine learning systems mirror and amplify bias by default. We cannot simply ignore sensitive attributes: a model trained to minimize average loss optimizes for the majority group and can perform far worse on minorities (disparate mistreatment). Pithy corollary: this problem will only go away if we devote resources to making it go away.
- Providing explanations for decisions is the only humane way to build automatic classification systems. Why? If I can’t test a result, I can’t contest it. And requiring decisions to be testable and explainable makes them much more reliable as a result.
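The disparate-mistreatment point can be made concrete with a toy sketch. Everything below is invented for illustration (the group names, the 90/10 split, and the two candidate decision rules are assumptions, not anything from the symposium): a classifier fitted by minimizing *average* error is perfect on the majority group and always wrong on the minority, even though its overall accuracy looks fine.

```python
import random

random.seed(0)

# Hypothetical synthetic data: two groups with opposite feature-label
# relationships. Group "maj" is 90% of the data, "min" is 10%.
data = []
for _ in range(900):
    x = random.randint(0, 1)
    data.append(("maj", x, x))        # majority: label equals the feature
for _ in range(100):
    x = random.randint(0, 1)
    data.append(("min", x, 1 - x))    # minority: label is the opposite

# "Training": pick whichever decision rule minimizes average error over
# the whole dataset -- the default ML objective, blind to group membership.
def avg_error(rule):
    return sum(rule(x) != y for _, x, y in data) / len(data)

rules = {"predict_x": lambda x: x, "predict_not_x": lambda x: 1 - x}
best_name = min(rules, key=lambda name: avg_error(rules[name]))
best = rules[best_name]

# Per-group error rates reveal the disparate mistreatment the averages hide.
def group_error(group):
    pts = [(x, y) for g, x, y in data if g == group]
    return sum(best(x) != y for x, y in pts) / len(pts)

print(best_name)                      # -> predict_x (the majority's rule)
print(group_error("maj"))             # -> 0.0  (perfect on the majority)
print(group_error("min"))             # -> 1.0  (always wrong on the minority)
```

The overall error is only 10%, so the usual summary metric signals success while one group gets every decision wrong. That is why ignoring sensitive attributes does not make the problem disappear.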