@alexklibisz · Created April 11, 2017 20:30
Keras weighted log loss
from keras import backend as K


def weighted_log_loss(yt, yp, emphasis='fn', m=2.0):
    '''Log loss that weights false positives or false negatives more.
    Punish the false negatives if you care about making sure all the neurons
    are found and don't mind some false positives. Vice versa for punishing
    the false positives. Concept taken from the U-Net paper, where boundary
    errors were weighted more heavily to get cleaner boundaries.'''
    assert emphasis in ('fn', 'fp')
    # Build a per-element weight map from y_true.
    if emphasis == 'fn':
        # [0,1] -> [1,m]: positive-class terms get weight m, negatives weight 1.
        # (Mapping to [0,m] instead would zero out the true-negative terms,
        # so an all-ones prediction would reach zero loss.)
        w = yt * (m - 1) + 1
    else:
        # [0,1] -> [-1,0] -> [1,0] -> [m-1,0] -> [m,1]: negatives get weight m.
        w = ((yt - 1) * -1) * (m - 1) + 1
    a = yt * K.log(yp + K.epsilon())
    b = (1 - yt) * K.log(1 + K.epsilon() - yp)
    # With w == 1 everywhere this matches the keras binary_crossentropy value.
    return -1. * K.mean(w * (a + b))
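
For reference, a minimal usage sketch (not part of the original gist): the toy tensors, their values, and the make_weighted_log_loss wrapper below are illustrative assumptions, written against the 2017-era keras.backend API.

import numpy as np
from keras import backend as K

# Toy batch with one false negative (yt=1, yp=0.2) and one milder
# false positive (yt=0, yp=0.7).
yt = K.variable(np.array([[1., 1., 0., 0.]]))
yp = K.variable(np.array([[0.9, 0.2, 0.1, 0.7]]))

# emphasis='fn' doubles the weight on the missed positive, so it yields
# a larger loss on this batch than emphasis='fp' does.
print(K.eval(weighted_log_loss(yt, yp, emphasis='fn')))
print(K.eval(weighted_log_loss(yt, yp, emphasis='fp')))

# To train with non-default hyperparameters, close over them so the
# callable keeps Keras' (y_true, y_pred) loss signature:
def make_weighted_log_loss(emphasis='fn', m=2.0):
    def loss(yt, yp):
        return weighted_log_loss(yt, yp, emphasis=emphasis, m=m)
    return loss

# model.compile(optimizer='adam', loss=make_weighted_log_loss('fn', m=3.0))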