NIPS ML in the Law symposium 2016 notes

Including notes for the second session and the first panel.

My three takeaways

  • Tech folks have a tendency to "fly in and fix everything." That feels like a dangerous approach here. It's far better to stand on the shoulders of existing legal precedent, which has studied fairness, discrimination, and bias for decades, even if that slows down progress.
  • Machine learning systems mirror and amplify bias by default. We cannot simply ignore sensitive attributes, because a system that minimizes average loss fits the majority at the expense of minority groups ("disparate mistreatment"). Pithy corollary: this problem will only go away if we devote resources to making it go away.
  • Providing explanations for decisions is the only humane way to build automatic classification systems. Why? If I can't test a result, I can't contest it. And if decisions must be testable and explainable, they will be much more reliable as a result.
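The "averages loss over the majority" point above can be made concrete with a toy sketch (entirely hypothetical data, plain NumPy, not from any talk): two groups where the feature–label relationship is reversed, and a single threshold classifier chosen to minimize pooled average error.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: 900 majority samples, 100 minority samples.
# For the majority, label = 1 when the feature is positive;
# for the minority, the relationship is reversed.
x_maj = rng.normal(0, 1, 900)
y_maj = (x_maj > 0).astype(int)
x_min = rng.normal(0, 1, 100)
y_min = (x_min < 0).astype(int)

x = np.concatenate([x_maj, x_min])
y = np.concatenate([y_maj, y_min])

# Pick the threshold classifier "predict 1 if x > t" that minimizes
# *average* error over the pooled data.
candidates = np.sort(x)
errors = [np.mean((x > t).astype(int) != y) for t in candidates]
t_best = candidates[int(np.argmin(errors))]

pred_maj = (x_maj > t_best).astype(int)
pred_min = (x_min > t_best).astype(int)
print("majority error:", np.mean(pred_maj != y_maj))
print("minority error:", np.mean(pred_min != y_min))
# The average-loss optimum fits the majority almost perfectly while
# misclassifying nearly all of the minority: disparate mistreatment.
```

The pooled-optimal threshold sits near zero, which is exactly right for the majority and exactly wrong for the minority; average error looks excellent while the minority group's error rate is close to 100%.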

Aaron Roth: Quantitative tradeoffs between fairness and accuracy in machine learning