@alexkarargyris · Created October 16, 2020 03:05
iDash NN
/Users/alexandroskarargyris/Desktop/XPHealthEnv/bin/python3 /Users/alexandroskarargyris/PycharmProjects/idash2020task1/train_cnn.py
(array([], dtype=int64),) (1899, 11) (814, 11)
2020-10-15 19:52:58.460474: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2020-10-15 19:52:58.584910: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x7ff70a142980 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-10-15 19:52:58.584939: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
dense (Dense)                (None, 256)               1430272
_________________________________________________________________
dropout (Dropout)            (None, 256)               0
_________________________________________________________________
dense_1 (Dense)              (None, 128)               32896
_________________________________________________________________
dropout_1 (Dropout)          (None, 128)               0
_________________________________________________________________
dense_2 (Dense)              (None, 64)                8256
_________________________________________________________________
dropout_2 (Dropout)          (None, 64)                0
_________________________________________________________________
dense_3 (Dense)              (None, 11)                715
=================================================================
Total params: 1,472,139
Trainable params: 1,472,139
Non-trainable params: 0
_________________________________________________________________
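
Despite the train_cnn.py / baseline_cnn.h5 naming, the summary above describes a plain fully-connected network. The sketch below is a minimal Keras reconstruction that reproduces the layer widths and parameter counts; the 5,586-feature input size is implied by the 1,430,272 parameters of the first Dense layer, while the ReLU/softmax activations, the dropout rate, and the Adam + categorical cross-entropy compile settings are assumptions not recorded in the log.

from tensorflow.keras import layers, models

def build_model(input_dim=5586, num_classes=11, dropout_rate=0.5):
    # Layer widths match the summary above; activations and dropout rate are guesses.
    model = models.Sequential([
        layers.Dense(256, activation="relu", input_shape=(input_dim,)),  # 1,430,272 params
        layers.Dropout(dropout_rate),
        layers.Dense(128, activation="relu"),                            # 32,896 params
        layers.Dropout(dropout_rate),
        layers.Dense(64, activation="relu"),                             # 8,256 params
        layers.Dropout(dropout_rate),
        layers.Dense(num_classes, activation="softmax"),                 # 715 params
    ])
    model.compile(optimizer="adam",                      # assumed optimizer
                  loss="categorical_crossentropy",       # assumed loss for 11 one-hot classes
                  metrics=["acc"])                       # "acc" matches the metric name in the log
    return model

With these settings, build_model().summary() reproduces the 1,472,139 total trainable parameters shown above.
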
Epoch 1/100
Epoch 00001: val_acc improved from -inf to 0.51579, saving model to baseline_cnn.h5
54/54 - 3s - loss: 3.2149 - acc: 0.2276 - val_loss: 1.6323 - val_acc: 0.5158
Epoch 2/100
Epoch 00002: val_acc did not improve from 0.51579
54/54 - 1s - loss: 2.4149 - acc: 0.3049 - val_loss: 1.6743 - val_acc: 0.4895
Epoch 3/100
Epoch 00003: val_acc improved from 0.51579 to 0.52632, saving model to baseline_cnn.h5
54/54 - 1s - loss: 2.0351 - acc: 0.3745 - val_loss: 1.6032 - val_acc: 0.5263
Epoch 4/100
Epoch 00004: val_acc improved from 0.52632 to 0.53684, saving model to baseline_cnn.h5
54/54 - 1s - loss: 1.8731 - acc: 0.4260 - val_loss: 1.5575 - val_acc: 0.5368
Epoch 5/100
Epoch 00005: val_acc improved from 0.53684 to 0.57368, saving model to baseline_cnn.h5
54/54 - 1s - loss: 1.7115 - acc: 0.4716 - val_loss: 1.3988 - val_acc: 0.5737
Epoch 6/100
Epoch 00006: val_acc improved from 0.57368 to 0.61579, saving model to baseline_cnn.h5
54/54 - 1s - loss: 1.5316 - acc: 0.5085 - val_loss: 1.2319 - val_acc: 0.6158
Epoch 7/100
Epoch 00007: val_acc improved from 0.61579 to 0.64211, saving model to baseline_cnn.h5
54/54 - 1s - loss: 1.4878 - acc: 0.5319 - val_loss: 1.2150 - val_acc: 0.6421
Epoch 8/100
Epoch 00008: val_acc improved from 0.64211 to 0.65263, saving model to baseline_cnn.h5
54/54 - 1s - loss: 1.4707 - acc: 0.5453 - val_loss: 1.1896 - val_acc: 0.6526
Epoch 9/100
Epoch 00009: val_acc did not improve from 0.65263
54/54 - 1s - loss: 1.3678 - acc: 0.5775 - val_loss: 1.1571 - val_acc: 0.6526
Epoch 10/100
Epoch 00010: val_acc improved from 0.65263 to 0.66842, saving model to baseline_cnn.h5
54/54 - 1s - loss: 1.2482 - acc: 0.6243 - val_loss: 1.0855 - val_acc: 0.6684
Epoch 11/100
Epoch 00011: val_acc improved from 0.66842 to 0.72105, saving model to baseline_cnn.h5
54/54 - 1s - loss: 1.1574 - acc: 0.6325 - val_loss: 1.0355 - val_acc: 0.7211
Epoch 12/100
Epoch 00012: val_acc did not improve from 0.72105
54/54 - 1s - loss: 1.0799 - acc: 0.6653 - val_loss: 0.9851 - val_acc: 0.7158
Epoch 13/100
Epoch 00013: val_acc did not improve from 0.72105
54/54 - 1s - loss: 1.0638 - acc: 0.6712 - val_loss: 0.9795 - val_acc: 0.7105
Epoch 14/100
Epoch 00014: val_acc did not improve from 0.72105
Epoch 00014: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.
54/54 - 1s - loss: 0.9816 - acc: 0.7109 - val_loss: 1.0063 - val_acc: 0.6789
Epoch 15/100
Epoch 00015: val_acc did not improve from 0.72105
54/54 - 1s - loss: 0.9305 - acc: 0.7162 - val_loss: 0.9781 - val_acc: 0.7000
Epoch 16/100
Epoch 00016: val_acc did not improve from 0.72105
54/54 - 1s - loss: 0.8478 - acc: 0.7320 - val_loss: 0.9633 - val_acc: 0.7158
Epoch 17/100
Epoch 00017: val_acc improved from 0.72105 to 0.72632, saving model to baseline_cnn.h5
54/54 - 1s - loss: 0.8435 - acc: 0.7349 - val_loss: 0.9397 - val_acc: 0.7263
Epoch 18/100
Epoch 00018: val_acc did not improve from 0.72632
54/54 - 1s - loss: 0.8435 - acc: 0.7303 - val_loss: 0.9216 - val_acc: 0.7263
Epoch 19/100
Epoch 00019: val_acc did not improve from 0.72632
54/54 - 1s - loss: 0.8164 - acc: 0.7501 - val_loss: 0.9121 - val_acc: 0.7211
Epoch 20/100
Epoch 00020: val_acc did not improve from 0.72632
Epoch 00020: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.
54/54 - 1s - loss: 0.8377 - acc: 0.7379 - val_loss: 0.9063 - val_acc: 0.7263
Epoch 21/100
Epoch 00021: val_acc did not improve from 0.72632
54/54 - 1s - loss: 0.8024 - acc: 0.7525 - val_loss: 0.9060 - val_acc: 0.7263
Epoch 22/100
Epoch 00022: val_acc did not improve from 0.72632
54/54 - 1s - loss: 0.7908 - acc: 0.7624 - val_loss: 0.9053 - val_acc: 0.7263
Epoch 23/100
Epoch 00023: val_acc did not improve from 0.72632
Epoch 00023: ReduceLROnPlateau reducing learning rate to 1.0000000656873453e-06.
54/54 - 1s - loss: 0.7835 - acc: 0.7548 - val_loss: 0.9051 - val_acc: 0.7263
Epoch 24/100
Epoch 00024: val_acc did not improve from 0.72632
54/54 - 1s - loss: 0.7569 - acc: 0.7671 - val_loss: 0.9050 - val_acc: 0.7263
Epoch 25/100
Epoch 00025: val_acc did not improve from 0.72632
54/54 - 1s - loss: 0.7574 - acc: 0.7636 - val_loss: 0.9049 - val_acc: 0.7263
Epoch 26/100
Epoch 00026: val_acc did not improve from 0.72632
Epoch 00026: ReduceLROnPlateau reducing learning rate to 1.0000001111620805e-07.
54/54 - 1s - loss: 0.8006 - acc: 0.7496 - val_loss: 0.9048 - val_acc: 0.7263
Epoch 27/100
Epoch 00027: val_acc did not improve from 0.72632
54/54 - 1s - loss: 0.7533 - acc: 0.7642 - val_loss: 0.9048 - val_acc: 0.7263
Epoch 28/100
Epoch 00028: val_acc did not improve from 0.72632
54/54 - 1s - loss: 0.7428 - acc: 0.7706 - val_loss: 0.9048 - val_acc: 0.7263
Epoch 29/100
Epoch 00029: val_acc did not improve from 0.72632
Epoch 00029: ReduceLROnPlateau reducing learning rate to 1.000000082740371e-08.
54/54 - 1s - loss: 0.7409 - acc: 0.7689 - val_loss: 0.9048 - val_acc: 0.7263
Epoch 30/100
Epoch 00030: val_acc did not improve from 0.72632
54/54 - 1s - loss: 0.7808 - acc: 0.7496 - val_loss: 0.9048 - val_acc: 0.7263
Epoch 31/100
Epoch 00031: val_acc did not improve from 0.72632
54/54 - 1s - loss: 0.7790 - acc: 0.7537 - val_loss: 0.9047 - val_acc: 0.7263
Epoch 32/100
Epoch 00032: val_acc did not improve from 0.72632
Epoch 00032: ReduceLROnPlateau reducing learning rate to 1.000000082740371e-09.
54/54 - 1s - loss: 0.8023 - acc: 0.7437 - val_loss: 0.9047 - val_acc: 0.7263
Epoch 33/100
Epoch 00033: val_acc did not improve from 0.72632
54/54 - 1s - loss: 0.7765 - acc: 0.7583 - val_loss: 0.9047 - val_acc: 0.7263
Epoch 34/100
Epoch 00034: val_acc did not improve from 0.72632
54/54 - 1s - loss: 0.8248 - acc: 0.7548 - val_loss: 0.9047 - val_acc: 0.7263
Epoch 35/100
Epoch 00035: val_acc did not improve from 0.72632
Epoch 00035: ReduceLROnPlateau reducing learning rate to 1.000000082740371e-10.
54/54 - 1s - loss: 0.7988 - acc: 0.7531 - val_loss: 0.9047 - val_acc: 0.7263
Epoch 36/100
Epoch 00036: val_acc did not improve from 0.72632
54/54 - 1s - loss: 0.7533 - acc: 0.7665 - val_loss: 0.9047 - val_acc: 0.7263
Epoch 37/100
Epoch 00037: val_acc did not improve from 0.72632
54/54 - 1s - loss: 0.7987 - acc: 0.7414 - val_loss: 0.9047 - val_acc: 0.7263
Epoch 38/100
Epoch 00038: val_acc did not improve from 0.72632
Epoch 00038: ReduceLROnPlateau reducing learning rate to 1.000000082740371e-11.
54/54 - 1s - loss: 0.7592 - acc: 0.7800 - val_loss: 0.9047 - val_acc: 0.7263
Epoch 39/100
Epoch 00039: val_acc did not improve from 0.72632
54/54 - 1s - loss: 0.7974 - acc: 0.7472 - val_loss: 0.9047 - val_acc: 0.7263
Epoch 40/100
Epoch 00040: val_acc did not improve from 0.72632
54/54 - 1s - loss: 0.8048 - acc: 0.7507 - val_loss: 0.9047 - val_acc: 0.7263
Epoch 00040: early stopping
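
The checkpoint, learning-rate, and early-stopping messages above are consistent with the three standard Keras callbacks sketched below. The ReduceLROnPlateau settings (monitor val_acc, factor 0.1, patience 3) reproduce the reductions logged at epochs 14, 20, 23, 26, and so on; the early-stopping monitor and patience, the batch size of 32, and the 10% validation split (which would yield the 54 steps per epoch and a 190-sample validation set consistent with the val_acc values) are assumptions, as are the x_train/y_train placeholder names.

from tensorflow.keras.callbacks import ModelCheckpoint, ReduceLROnPlateau, EarlyStopping

callbacks = [
    # "val_acc improved ... saving model to baseline_cnn.h5"
    ModelCheckpoint("baseline_cnn.h5", monitor="val_acc",
                    save_best_only=True, verbose=1),
    # 10x reductions at epochs 14, 20, 23, 26, ... (patience 3 on val_acc fits the log)
    ReduceLROnPlateau(monitor="val_acc", factor=0.1, patience=3, verbose=1),
    # Produces "Epoch 00040: early stopping"; monitor and patience here are guesses
    EarlyStopping(monitor="val_loss", patience=10, verbose=1),
]

model = build_model()
history = model.fit(x_train, y_train,       # placeholders, e.g. shapes (1899, 5586) and (1899, 11)
                    validation_split=0.1,   # assumed; leaves ~1709 training samples
                    batch_size=32,          # assumed; 1709 / 32 rounds up to the 54 steps per epoch seen above
                    epochs=100,             # matches "Epoch 1/100"
                    verbose=2,
                    callbacks=callbacks)
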
Average precision score, micro-averaged over all classes: 0.84
Process finished with exit code 0
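
The final metric line follows the wording of scikit-learn's micro-averaged average precision report, so the evaluation step presumably looked something like the sketch below; the x_test/y_test names and the one-hot test labels of shape (814, 11), matching the shapes printed at the top of the log, are assumptions.

from sklearn.metrics import average_precision_score

# x_test / y_test are placeholders; y_test is assumed one-hot with shape (814, 11)
y_score = model.predict(x_test)
average_precision = average_precision_score(y_test, y_score, average="micro")
print("Average precision score, micro-averaged over all classes: {0:0.2f}".format(
    average_precision))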