Description
I have some issues training this in PyTorch. I rewrote the loss function so it is the same as yours.
I trained as you recommended and got bad results. After investigating, I noticed that the network predicts awful probabilities in the channel 0 and channel 1 outputs: they are all above 1, so you cannot select good detections with a threshold of 0.5 or anything like that.
Then I reran training while printing the L1, obj and noobj losses, and it turned out the obj and noobj losses always drop to 0 after a few iterations.
This is because your loss function logloss:

```python
import tensorflow as tf

def logloss(Ptrue, Pred, szs, eps=10e-10):
    b, h, w, ch = szs
    Pred = tf.clip_by_value(Pred, eps, 1.)   # anything above 1 is clipped to exactly 1
    Pred = -tf.log(Pred)                     # -log(1) == 0
    Pred = Pred * Ptrue
    Pred = tf.reshape(Pred, (b, h * w * ch))
    Pred = tf.reduce_sum(Pred, 1)
    return Pred
```
is equal to 0 when the predicted values are above 1 (they get clipped to 1, and ln 1 = 0). So the network just learns to predict large values, and that is not how it is supposed to work, because later we need these probabilities to select good predictions, run NMS, and so on.
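To make this concrete, here is a minimal numeric check (a sketch using NumPy in place of the TF ops above; the values are made up) showing that once a prediction exceeds 1, its contribution to the loss vanishes:

```python
import numpy as np

eps = 10e-10
Ptrue = np.array([1.0, 1.0, 0.0])    # ground-truth object mask (example values)
Pred  = np.array([3.7, 15.2, 2.1])   # network outputs that have drifted above 1

clipped = np.clip(Pred, eps, 1.0)    # every value becomes exactly 1.0
terms = -np.log(clipped) * Ptrue     # -log(1) == 0 for every element
print(terms.sum())                   # prints 0.0, so the obj/noobj loss is zero
```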
So what is the point of using this loss function, or am I wrong somewhere?