
@alexkarargyris
Created October 19, 2020 19:53
Small Shallow Network
/Users/alexandroskarargyris/Desktop/XPHealthEnv/bin/python3 /Users/alexandroskarargyris/PycharmProjects/idash2020task1/train_cnn_or_nn.py
(array([], dtype=int64),) (1899, 11) (814, 11)
2020-10-19 12:52:19.917486: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2020-10-19 12:52:19.934841: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x7fa8d0bcda90 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-10-19 12:52:19.934862: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
dense (Dense)                (None, 64)                357568
_________________________________________________________________
dropout (Dropout)            (None, 64)                0
_________________________________________________________________
dense_1 (Dense)              (None, 11)                715
=================================================================
Total params: 358,283
Trainable params: 358,283
Non-trainable params: 0
_________________________________________________________________
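The parameter counts in the summary pin down the input width: a Dense layer has inputs × units + units (bias) parameters, so 357,568 parameters at 64 units implies 5,586 input features. A quick stdlib check (the arithmetic below is inferred from the summary, not taken from the training script):

```python
# Dense layer parameter count = input_dim * units + units (bias terms)
units = 64
dense_params = 357568                        # from the summary above
input_dim = (dense_params - units) // units  # solve for the input width
assert units * input_dim + units == dense_params

n_classes = 11
dense_1_params = units * n_classes + n_classes  # second Dense: 64 -> 11
assert dense_1_params == 715                    # matches dense_1 above

total = dense_params + dense_1_params           # Dropout adds no parameters
assert total == 358283                          # "Total params" line
print(input_dim)  # -> 5586
```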
Epoch 1/200
Epoch 00001: val_acc improved from -inf to 0.52632, saving model to baseline_cnn.h5
54/54 - 1s - loss: 58.4105 - acc: 0.3171 - val_loss: 53.8845 - val_acc: 0.5263
Epoch 2/200
Epoch 00002: val_acc improved from 0.52632 to 0.58421, saving model to baseline_cnn.h5
54/54 - 0s - loss: 50.3117 - acc: 0.5249 - val_loss: 46.3016 - val_acc: 0.5842
Epoch 3/200
Epoch 00003: val_acc improved from 0.58421 to 0.61579, saving model to baseline_cnn.h5
54/54 - 0s - loss: 42.7695 - acc: 0.5957 - val_loss: 39.0059 - val_acc: 0.6158
Epoch 4/200
Epoch 00004: val_acc improved from 0.61579 to 0.65263, saving model to baseline_cnn.h5
54/54 - 0s - loss: 35.6278 - acc: 0.6378 - val_loss: 32.1380 - val_acc: 0.6526
Epoch 5/200
Epoch 00005: val_acc did not improve from 0.65263
54/54 - 0s - loss: 28.9808 - acc: 0.6624 - val_loss: 25.8113 - val_acc: 0.6368
Epoch 6/200
Epoch 00006: val_acc did not improve from 0.65263
54/54 - 0s - loss: 22.9607 - acc: 0.6905 - val_loss: 20.2204 - val_acc: 0.6421
Epoch 7/200
Epoch 00007: val_acc improved from 0.65263 to 0.65789, saving model to baseline_cnn.h5
54/54 - 0s - loss: 17.6488 - acc: 0.7115 - val_loss: 15.2766 - val_acc: 0.6579
Epoch 8/200
Epoch 00008: val_acc did not improve from 0.65789
54/54 - 0s - loss: 13.1494 - acc: 0.7086 - val_loss: 11.1838 - val_acc: 0.6579
Epoch 9/200
Epoch 00009: val_acc did not improve from 0.65789
54/54 - 0s - loss: 9.4917 - acc: 0.7209 - val_loss: 8.0030 - val_acc: 0.6526
Epoch 10/200
Epoch 00010: val_acc did not improve from 0.65789
Epoch 00010: ReduceLROnPlateau reducing learning rate to 9.999999747378752e-06.
54/54 - 0s - loss: 6.7549 - acc: 0.7045 - val_loss: 5.7011 - val_acc: 0.6526
Epoch 11/200
Epoch 00011: val_acc did not improve from 0.65789
54/54 - 0s - loss: 5.4541 - acc: 0.7098 - val_loss: 5.3796 - val_acc: 0.6526
Epoch 12/200
Epoch 00012: val_acc did not improve from 0.65789
54/54 - 0s - loss: 5.2246 - acc: 0.7121 - val_loss: 5.1874 - val_acc: 0.6526
Epoch 13/200
Epoch 00013: val_acc did not improve from 0.65789
Epoch 00013: ReduceLROnPlateau reducing learning rate to 9.999999747378752e-07.
54/54 - 0s - loss: 5.0286 - acc: 0.7162 - val_loss: 5.0133 - val_acc: 0.6526
Epoch 14/200
Epoch 00014: val_acc did not improve from 0.65789
54/54 - 0s - loss: 4.9139 - acc: 0.7115 - val_loss: 4.9823 - val_acc: 0.6526
Epoch 15/200
Epoch 00015: val_acc did not improve from 0.65789
54/54 - 0s - loss: 4.9092 - acc: 0.7057 - val_loss: 4.9639 - val_acc: 0.6526
Epoch 16/200
Epoch 00016: val_acc did not improve from 0.65789
Epoch 00016: ReduceLROnPlateau reducing learning rate to 9.999999974752428e-08.
54/54 - 0s - loss: 4.8847 - acc: 0.7074 - val_loss: 4.9467 - val_acc: 0.6526
Epoch 17/200
Epoch 00017: val_acc did not improve from 0.65789
54/54 - 0s - loss: 4.8770 - acc: 0.7115 - val_loss: 4.9436 - val_acc: 0.6526
Epoch 18/200
Epoch 00018: val_acc did not improve from 0.65789
54/54 - 0s - loss: 4.8645 - acc: 0.7273 - val_loss: 4.9418 - val_acc: 0.6526
Epoch 19/200
Epoch 00019: val_acc did not improve from 0.65789
Epoch 00019: ReduceLROnPlateau reducing learning rate to 1.0000000116860975e-08.
54/54 - 0s - loss: 4.8674 - acc: 0.7074 - val_loss: 4.9400 - val_acc: 0.6526
Epoch 20/200
Epoch 00020: val_acc did not improve from 0.65789
54/54 - 0s - loss: 4.8800 - acc: 0.7098 - val_loss: 4.9397 - val_acc: 0.6526
Epoch 21/200
Epoch 00021: val_acc did not improve from 0.65789
54/54 - 0s - loss: 4.8576 - acc: 0.7092 - val_loss: 4.9395 - val_acc: 0.6526
Epoch 22/200
Epoch 00022: val_acc did not improve from 0.65789
Epoch 00022: ReduceLROnPlateau reducing learning rate to 9.999999939225292e-10.
54/54 - 0s - loss: 4.8576 - acc: 0.7197 - val_loss: 4.9393 - val_acc: 0.6526
Epoch 23/200
Epoch 00023: val_acc did not improve from 0.65789
54/54 - 0s - loss: 4.8563 - acc: 0.7133 - val_loss: 4.9393 - val_acc: 0.6526
Epoch 24/200
Epoch 00024: val_acc did not improve from 0.65789
54/54 - 0s - loss: 4.8599 - acc: 0.7080 - val_loss: 4.9393 - val_acc: 0.6526
Epoch 25/200
Epoch 00025: val_acc did not improve from 0.65789
Epoch 00025: ReduceLROnPlateau reducing learning rate to 9.999999717180686e-11.
54/54 - 0s - loss: 4.8675 - acc: 0.7028 - val_loss: 4.9393 - val_acc: 0.6526
Epoch 26/200
Epoch 00026: val_acc did not improve from 0.65789
54/54 - 0s - loss: 4.8718 - acc: 0.7028 - val_loss: 4.9393 - val_acc: 0.6526
Epoch 27/200
Epoch 00027: val_acc did not improve from 0.65789
54/54 - 0s - loss: 4.8640 - acc: 0.6963 - val_loss: 4.9393 - val_acc: 0.6526
Epoch 28/200
Epoch 00028: val_acc did not improve from 0.65789
Epoch 00028: ReduceLROnPlateau reducing learning rate to 9.99999943962493e-12.
54/54 - 0s - loss: 4.8729 - acc: 0.7039 - val_loss: 4.9393 - val_acc: 0.6526
Epoch 29/200
Epoch 00029: val_acc did not improve from 0.65789
54/54 - 0s - loss: 4.8467 - acc: 0.7221 - val_loss: 4.9393 - val_acc: 0.6526
Epoch 30/200
Epoch 00030: val_acc did not improve from 0.65789
54/54 - 0s - loss: 4.8756 - acc: 0.7057 - val_loss: 4.9393 - val_acc: 0.6526
Epoch 31/200
Epoch 00031: val_acc did not improve from 0.65789
Epoch 00031: ReduceLROnPlateau reducing learning rate to 9.999999092680235e-13.
54/54 - 0s - loss: 4.8684 - acc: 0.7028 - val_loss: 4.9393 - val_acc: 0.6526
Epoch 32/200
Epoch 00032: val_acc did not improve from 0.65789
54/54 - 0s - loss: 4.8738 - acc: 0.7080 - val_loss: 4.9393 - val_acc: 0.6526
Epoch 33/200
Epoch 00033: val_acc did not improve from 0.65789
54/54 - 0s - loss: 4.8753 - acc: 0.6846 - val_loss: 4.9393 - val_acc: 0.6526
Epoch 00033: early stopping
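The learning-rate drops land every three epochs after the last val_acc improvement at epoch 7 (reductions at epochs 10, 13, 16, ..., 31, each by a factor of 10), consistent with a ReduceLROnPlateau callback with factor=0.1 and patience=2 — values inferred from the log, not read from the script. A small simulation of that schedule, assuming Keras-style wait-counter semantics (reset on improvement or after a reduction, reduce once the counter exceeds patience):

```python
# Reproduce the reduction epochs visible in the log above.
improvement_epochs = {1, 2, 3, 4, 7}  # epochs where val_acc improved
patience = 2                          # inferred from the 3-epoch cadence
lr, factor = 1e-4, 0.1                # initial lr inferred from the first drop to ~1e-5

wait, reductions = 0, []
for epoch in range(1, 34):            # training stopped early at epoch 33
    if epoch in improvement_epochs:
        wait = 0
    else:
        wait += 1
        if wait > patience:
            lr *= factor
            reductions.append(epoch)
            wait = 0

print(reductions)  # -> [10, 13, 16, 19, 22, 25, 28, 31]
```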
Average precision score, micro-averaged over all classes: 0.79
Process finished with exit code 0
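The final metric is micro-averaged average precision: the one-hot label matrix and the score matrix are flattened, and a single precision-recall curve is computed over all (sample, class) pairs. A minimal scikit-learn sketch with toy data (the arrays are illustrative stand-ins, not the gist's actual labels or predictions):

```python
import numpy as np
from sklearn.metrics import average_precision_score

# Toy stand-ins for one-hot labels and predicted scores
# (shape: n_samples x n_classes; the gist used 11 classes).
y_true = np.array([[1, 0], [0, 1], [1, 0]])
y_score = np.array([[0.9, 0.1], [0.2, 0.8], [0.3, 0.7]])

# average='micro' ravels both matrices and computes one PR curve
ap = average_precision_score(y_true, y_score, average="micro")
print(f"Average precision score, micro-averaged over all classes: {ap:.2f}")
```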