A quick once-over and nothing specific pops out at me as wrong... looks to be a standard U-Net. I can only offer some ideas - maybe try adding batch norm into it (since you're comparing against Tiramisu). Also, this paper might be of interest to you - https://arxiv.org/pdf/1511.00561.pdf - it's an ever so slightly modified U-Net where the encoder path is VGG, which gives you the ability to use transfer learning. They also used the same dataset, so you'll be able to compare your results more closely. Last thing: you're using sparse cross-entropy loss, but aren't doing any class weighting. That will have some negative effect too... but from what I recall looking at your 100-tiramisu notebook, you didn't have any in there either, so... maybe that's just a better model ?_?
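For the class-weighting point, here's a rough sketch of median-frequency balancing (the scheme the linked SegNet paper uses) - the function name and exact formulation here are just my illustration, not anything from your notebook. The resulting per-class weights can be applied as per-pixel sample weights alongside sparse cross-entropy:

```python
import numpy as np

def median_freq_class_weights(label_maps, num_classes):
    """Median-frequency balancing: weight_c = median_freq / freq_c,
    where freq_c = (pixels of class c) / (total pixels of images
    in which class c appears). Rare classes get weights > 1."""
    pixel_counts = np.zeros(num_classes)   # pixels per class
    image_totals = np.zeros(num_classes)   # pixels in images containing the class
    for lab in label_maps:
        counts = np.bincount(lab.ravel(), minlength=num_classes)
        present = counts > 0
        pixel_counts[present] += counts[present]
        image_totals[present] += lab.size
    freqs = np.where(image_totals > 0,
                     pixel_counts / np.maximum(image_totals, 1), 0.0)
    median = np.median(freqs[freqs > 0])
    # Classes never seen in the data get weight 0 (they contribute no loss)
    return np.where(freqs > 0, median / np.maximum(freqs, 1e-12), 0.0)

# Tiny example: class 1 is rare, so it should be up-weighted
labels = [np.array([[0, 0, 0, 1]])]
w = median_freq_class_weights(labels, num_classes=2)
```

In Keras you could then index these weights with the label map (e.g. `w[y_true]`) and pass the result as `sample_weight` when fitting, so each pixel's cross-entropy term is scaled by its class weight.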