This is just the middle section of Bob Carpenter's note on evaluating log loss via the binary logistic function: https://lingpipe-blog.com/2012/02/16/howprevent-overflow-underflow-logistic-regression/
The logp function calculates the negative cross-entropy:

    dotproduct( [y, 1-y], [log P(y=1), log P(y=0)] )

where the input s is the log-odds scalar value beta'x. The trick is to make this numerically stable for any choice of s and y.
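As a minimal sketch of the stabilization the note describes: since log P(y=1) = -log(1 + exp(-s)) and log P(y=0) = -s - log(1 + exp(-s)), branching on the sign of s keeps the exponential's argument non-positive, so it can never overflow. The function name logp and the use of log1p here are assumptions for illustration.

```python
import math

def logp(y, s):
    """Stable log likelihood y*log P(y=1) + (1-y)*log P(y=0)
    for y in {0, 1} and log-odds s = beta'x."""
    if s >= 0:
        # exp(-s) <= 1, so log1p(exp(-s)) cannot overflow
        return -(1 - y) * s - math.log1p(math.exp(-s))
    # s < 0: exp(s) <= 1, same safety on the other branch
    return y * s - math.log1p(math.exp(s))
```

For extreme inputs like s = -1000, the naive math.log(1 / (1 + math.exp(-s))) underflows to log(0) = -inf, while the branch above returns a finite value close to -1000.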