@petermchale
Last active December 2, 2023 15:46
A derivation of the bias-variance decomposition of test error in machine learning.
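For reference, the result being derived is the standard decomposition of expected squared test error at a fixed test point. The notation below is assumed (it is not taken from the gist file itself): `y = f(x) + ε` with zero-mean noise of variance `σ²`, and `ĥ` the model fit on a random training set `D`.

```latex
% Standard bias-variance decomposition at a fixed test point x.
% Assumed setup: y = f(x) + \varepsilon, E[\varepsilon] = 0,
% Var(\varepsilon) = \sigma^2, and \hat{h} trained on a random set D.
\begin{align*}
\mathbb{E}_{D,\varepsilon}\!\left[\big(y - \hat{h}(x)\big)^2\right]
  &= \underbrace{\big(f(x) - \mathbb{E}_D[\hat{h}(x)]\big)^2}_{\text{bias}^2}
   + \underbrace{\mathbb{E}_D\!\left[\big(\hat{h}(x) - \mathbb{E}_D[\hat{h}(x)]\big)^2\right]}_{\text{variance}}
   + \underbrace{\sigma^2}_{\text{irreducible noise}}
\end{align*}
```

The cross terms vanish in the expansion because the noise has zero mean and is independent of the training set, which is the key step of the derivation.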
petermchale commented Dec 2, 2023

In active machine learning, the learner is assumed to be unbiased, and the focus is on algorithms that minimize the learner's variance, as shown in Cohn et al. (1996): https://arxiv.org/abs/cs/9603104 (their Eq. 4 is difficult to interpret precisely, though, without further reading).
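For concreteness, here is a short sketch (using the assumed notation above, not the paper's) of why unbiasedness reduces active learning to variance minimization: with zero bias, only the noise and variance terms of the decomposition survive, and only the variance depends on which training points are queried.

```latex
% Sketch, assuming a zero-bias learner: if E_D[\hat{h}(x)] = f(x)
% for all x, the bias term vanishes and the expected error becomes
\begin{align*}
\mathbb{E}_{D,\varepsilon}\!\left[\big(y - \hat{h}(x)\big)^2\right]
  &= \sigma^2
   + \mathbb{E}_D\!\left[\big(\hat{h}(x) - \mathbb{E}_D[\hat{h}(x)]\big)^2\right].
\end{align*}
% Since \sigma^2 is fixed by the data-generating process, the learner
% can reduce expected error only by choosing queries that shrink the
% variance term -- the quantity Cohn et al. (1996) estimate and minimize.
```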
