@inexxt
Last active February 20, 2017 12:13

http://stackoverflow.com/questions/26355942/why-is-the-f-measure-a-harmonic-mean-and-not-an-arithmetic-mean-of-the-precision

Q: When we calculate the F-Measure considering both Precision and Recall, we take the harmonic mean of the two measures instead of a simple arithmetic mean.

What is the intuitive reason behind taking the harmonic mean and not a simple average?


Answer 1

The harmonic mean is the arithmetic mean applied in reciprocal space. More precisely, with the harmonic mean you transform each number into its "averageable" form (by taking the reciprocal), take the arithmetic mean of those, and then transform the result back to the original representation (by taking the reciprocal again).
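This reciprocal transform can be sketched in a few lines of Python (the standard library's `statistics.harmonic_mean` does the same thing; the precision/recall values here are made up for illustration):

```python
from statistics import mean, harmonic_mean

def harmonic(values):
    """Harmonic mean: transform to reciprocals, take the
    arithmetic mean, then transform back (reciprocal again)."""
    return 1 / mean(1 / v for v in values)

precision, recall = 0.5, 0.8
print(harmonic([precision, recall]))       # ≈ 0.6154
print(harmonic_mean([precision, recall]))  # same result via the stdlib
```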

Precision and recall are "naturally" suited to this reciprocal treatment because they share the same numerator (the true positives) but have different denominators. Fractions are more sensibly averaged by the arithmetic mean when they share a denominator.

For more intuition, suppose we keep the number of true positives constant. Then by taking the harmonic mean of precision and recall, you implicitly take the arithmetic mean of the false positives and the false negatives. It basically means that false positives and false negatives are equally important to you when the true positives stay the same: if an algorithm has N more false positives but N fewer false negatives (while having the same true positives), the F-measure stays the same.
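The invariance described above falls out of writing F1 directly in terms of the confusion-matrix counts, where FP and FN enter only as a sum (the counts below are illustrative):

```python
def f1(tp, fp, fn):
    """F1 in terms of confusion-matrix counts:
    F1 = 2*TP / (2*TP + FP + FN).
    FP and FN appear only as the sum FP + FN, so trading
    N false positives for N false negatives (with TP fixed)
    leaves F1 unchanged."""
    return 2 * tp / (2 * tp + fp + fn)

print(f1(tp=50, fp=10, fn=30))  # ≈ 0.714, same as...
print(f1(tp=50, fp=30, fn=10))  # ...with fp and fn swapped
```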

In other words, the F-measure is suitable when:

  1. mistakes are equally bad, whether they are false positives or false negatives
  2. the number of mistakes is measured relative to the number of true positives
  3. true negatives are uninteresting

Point 1 may or may not hold; there are weighted variants of the F-measure (the F-beta family) that can be used when this assumption doesn't hold. Point 2 is quite natural, since we can expect the results to scale if we just classify more and more points: the relative numbers should stay the same.
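For point 1, the standard weighted variant is the F-beta score, which is a weighted harmonic mean of precision and recall. A minimal sketch (the precision/recall values are made up):

```python
def f_beta(precision, recall, beta=1.0):
    """Weighted F-measure: F_beta = (1 + b^2) * P * R / (b^2 * P + R).
    beta > 1 weights recall more heavily, beta < 1 weights precision
    more heavily; beta = 1 recovers the plain harmonic mean (F1)."""
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)

p, r = 0.5, 0.8
print(f_beta(p, r))           # F1  ≈ 0.615
print(f_beta(p, r, beta=2))   # F2  ≈ 0.714 (recall-weighted)
```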

Point 3 is quite interesting. In many applications negatives are the natural default, and it may even be hard or arbitrary to specify what really counts as a true negative. For example, a fire alarm registers a true negative event every second, every nanosecond, every time a Planck time passes, and so on. Even a piece of rock produces these true-negative fire-detection events all the time.

Or in a face detection case: most of the time you "correctly don't return" billions of possible areas in the image, but this is not interesting. The interesting cases are when you do return a proposed detection or when you should have.

By contrast, classification accuracy cares equally about true positives and true negatives, and is more suitable when the total number of samples (classification events) is well-defined and rather small.

Answer 2

Because it punishes extreme values more.

Consider a trivial method (e.g. always returning class A) on a dataset with infinitely many elements of class B and a single element of class A:

  • Precision: 0.0
  • Recall: 1.0

Taking the arithmetic mean, it would score 50% correct, despite being the worst possible outcome! With the harmonic mean, the F1-measure is 0.

  • Arithmetic mean: 0.5
  • Harmonic mean: 0.0
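The two means from the bullet list above can be checked directly; the `2pr/(p+r)` form of the harmonic mean handles precision exactly 0 without dividing by zero (the guard below covers only the degenerate p = r = 0 case):

```python
def arithmetic(p, r):
    return (p + r) / 2

def harmonic(p, r):
    """F1 as 2pr/(p+r); conventionally 0 when both p and r are 0."""
    return 2 * p * r / (p + r) if p + r else 0.0

p, r = 0.0, 1.0   # the trivial "always return class A" classifier
print(arithmetic(p, r))  # 0.5 -- looks mediocre rather than broken
print(harmonic(p, r))    # 0.0 -- exposes the failure
```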

In other words, to have a high F1, you need both high precision and high recall.

Answer 3

To explain, consider an example: what is the average of 30mph and 40mph? If you drive for 1 hour at each speed, the average speed over the 2 hours is indeed the arithmetic mean, 35mph.

However if you drive for the same distance at each speed -- say 10 miles -- then the average speed over 20 miles is the harmonic mean of 30 and 40, about 34.3mph.

The reason is that for the average to be valid, you really need the values to be in the same scaled units. Miles per hour need to be compared over the same number of hours; to compare over the same number of miles you need to average hours per mile instead, which is exactly what the harmonic mean does.
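The driving example above is easy to verify numerically, total distance over total time against the harmonic mean:

```python
from statistics import harmonic_mean

# 10 miles at 30 mph, then 10 miles at 40 mph
distance = 10 + 10                # miles
time = 10 / 30 + 10 / 40          # hours: 1/3 + 1/4 = 7/12
avg_speed = distance / time       # 240/7 ≈ 34.2857 mph

print(avg_speed)
print(harmonic_mean([30, 40]))    # identical
```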

Precision and recall both have true positives in the numerator and different denominators. To average them, it really only makes sense to average their reciprocals; hence the harmonic mean.
