Created
March 2, 2018 19:14
I ran the dogs-vs-cats code on images of roses and tulips.
I expected the fit to be identical when fitting the same data, but it wasn't:
[In:]
arch = resnet34
data = ImageClassifierData.from_paths(PATH, tfms=tfms_from_model(arch, sz))
learn = ConvLearner.pretrained(arch, data, precompute=True)
learn.fit(0.01, 3)
[Out:]
epoch trn_loss val_loss accuracy
0     0.958563 0.742574 0.6
1     0.961629 0.686729 0.666667
2     0.883588 0.673014 0.666667
[0.67301399, 0.66666668653488159]
# Do the same thing again without modification:
[In:]
arch = resnet34
data = ImageClassifierData.from_paths(PATH, tfms=tfms_from_model(arch, sz))
learn = ConvLearner.pretrained(arch, data, precompute=True)
learn.fit(0.01, 3)
[Out:]
epoch trn_loss val_loss accuracy
0     0.686942 0.698771 0.566667
1     0.682425 0.547717 0.7
2     0.627959 0.444131 0.8
[0.4441309, 0.80000001192092896]
# And again:
[In:]
arch = resnet34
data = ImageClassifierData.from_paths(PATH, tfms=tfms_from_model(arch, sz))
learn = ConvLearner.pretrained(arch, data, precompute=True)
learn.fit(0.01, 3)
[Out:]
epoch trn_loss val_loss accuracy
0     0.911289 0.781548 0.466667
1     0.856045 0.710843 0.566667
2     0.776736 0.665686 0.633333
[0.66568589, 0.63333332538604736]
When I went back to look at the dogs-vs-cats results from lesson 1, I saw they were not the same either:
[In:]
arch = resnet34
data = ImageClassifierData.from_paths(PATH, tfms=tfms_from_model(arch, sz))
learn = ConvLearner.pretrained(arch, data, precompute=True)
learn.fit(0.01, 3)
[Out:]
epoch trn_loss val_loss accuracy
0     0.040586 0.025997 0.990234
1     0.041112 0.022978 0.990234
2     0.047694 0.027809 0.989258
[0.02780919, 0.9892578125]
# Do it again:
[In:]
arch = resnet34
data = ImageClassifierData.from_paths(PATH, tfms=tfms_from_model(arch, sz))
learn = ConvLearner.pretrained(arch, data, precompute=True)
learn.fit(0.01, 3)
[Out:]
epoch trn_loss val_loss accuracy
0     0.048328 0.022449 0.992188
1     0.035571 0.022375 0.992188
2     0.040946 0.023837 0.991211
[0.023837499, 0.9912109375]
Much closer to the same, but still not identical. Perhaps the training is sampling randomly from the provided images. My flowers set is much smaller than the dogs-and-cats set, so I'd expect larger run-to-run changes there.
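If random sampling (or random weight initialization of the new classifier head, dropout, and data augmentation) is the cause, fixing the RNG seeds before each run should make the results repeatable. A minimal sketch of the idea, using NumPy to stand in for the training pipeline's random draws — the `seed_everything` helper is hypothetical, and for a real fastai/PyTorch run you would additionally call `torch.manual_seed(seed)` and `torch.cuda.manual_seed_all(seed)`:

```python
import random
import numpy as np

def seed_everything(seed):
    # Seed the Python and NumPy RNGs. In a PyTorch/fastai run you would
    # also seed torch: torch.manual_seed(seed); torch.cuda.manual_seed_all(seed)
    random.seed(seed)
    np.random.seed(seed)

# Two unseeded "runs" draw different random samples:
draw_a = np.random.permutation(100)[:5]
draw_b = np.random.permutation(100)[:5]

# Reseeding before each run makes the draws identical:
seed_everything(42)
run1 = np.random.permutation(100)[:5]
seed_everything(42)
run2 = np.random.permutation(100)[:5]
print((run1 == run2).all())  # seeded runs match exactly
```

This only controls sources of randomness on the CPU; GPU convolution kernels can still be nondeterministic unless the backend is configured for deterministic algorithms.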