This function computes the Binary Cross-Entropy (stable_bce) cost function the way Keras implements it.
import numpy as np


def compute_keras_like_bce_cost(Y, P_hat, from_logits=False):
    """
    Computes the Binary Cross-Entropy (stable_bce) cost the way Keras
    implements it, accepting either probabilities (P_hat) from the sigmoid
    neuron or raw values straight from the linear node (Z).

    Args:
        Y: labels of the data
        P_hat: probabilities from the sigmoid function (or logits, if
               from_logits is True)
        from_logits: flag indicating whether logits are being provided
                     (default: False)
    Returns:
        cost: the "stable" Binary Cross-Entropy cost
        dZ_last: gradient of the cost w.r.t. Z_last
    """
    if from_logits:
        # assume that P_hat contains logits and not probabilities
        return compute_stable_bce_cost(Y, Z=P_hat)
    else:
        # Assume P_hat contains probabilities, so make logits out of them.
        # First, clip the probabilities to a numerically stable range
        EPSILON = 1e-07
        P_MAX = 1 - EPSILON  # 0.9999999
        P_hat = np.clip(P_hat, a_min=EPSILON, a_max=P_MAX)

        # Second, convert the stable probabilities to logits (Z)
        Z = np.log(P_hat / (1 - P_hat))

        # now call compute_stable_bce_cost
        return compute_stable_bce_cost(Y, Z)
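
The function above delegates to compute_stable_bce_cost, which is defined elsewhere in the same series and not shown in this gist. For context, here is a minimal sketch of what that helper computes, assuming the standard numerically stable formulation Keras/TensorFlow use for BCE from logits, max(Z, 0) - Z*Y + log(1 + exp(-|Z|)), averaged over the m examples; its gradient w.r.t. Z is (sigmoid(Z) - Y) / m. The shape convention (examples stacked along axis 1) is an assumption carried over from the docstring's Z_last naming:

def compute_stable_bce_cost(Y, Z):
    """
    Sketch of the helper this gist relies on (defined elsewhere in the
    series). Assumes the standard stable BCE-from-logits formulation:
        cost    = (1/m) * sum( max(Z, 0) - Z*Y + log(1 + exp(-|Z|)) )
        dZ_last = (1/m) * (sigmoid(Z) - Y)
    """
    m = Y.shape[1]  # assumption: examples are stacked along axis 1
    cost = (1 / m) * np.sum(np.maximum(Z, 0) - Z * Y + np.log(1 + np.exp(-np.abs(Z))))
    dZ_last = (1 / m) * (1 / (1 + np.exp(-Z)) - Y)  # (sigmoid(Z) - Y) / m
    return cost, dZ_last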
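
A quick usage check with hypothetical values: feeding logits directly and feeding the corresponding sigmoid probabilities should produce the same cost, up to the epsilon clipping applied on the probability path.

if __name__ == "__main__":
    Y = np.array([[1, 0, 1, 1]])            # labels, shape (1, m)
    Z = np.array([[2.0, -1.0, 0.5, 3.0]])   # logits from the linear node
    P_hat = 1 / (1 + np.exp(-Z))            # sigmoid probabilities

    cost_from_logits, _ = compute_keras_like_bce_cost(Y, Z, from_logits=True)
    cost_from_probs, _ = compute_keras_like_bce_cost(Y, P_hat)

    # Both paths should agree to several decimal places
    print(cost_from_logits, cost_from_probs)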