root@C-c6fe3f8f-7781-431b-9423-4071ddb07676-146:~/git/Mask_RCNN# python NucleiExperiment.py
Using TensorFlow backend.
Downloading pretrained model to /root/git/Mask_RCNN/mask_rcnn_coco.h5 ...
... done downloading pretrained model!
ROOT_DIR : /root/git/Mask_RCNN
MODEL_DIR : /root/git/Mask_RCNN/logs
COCO_MODEL_PATH : /root/git/Mask_RCNN/mask_rcnn_coco.h5
Configurations:
BACKBONE resnet101
BACKBONE_STRIDES [4, 8, 16, 32, 64]
BATCH_SIZE 2
BBOX_STD_DEV [0.1 0.1 0.2 0.2]
DETECTION_MAX_INSTANCES 100
DETECTION_MIN_CONFIDENCE 0.7
DETECTION_NMS_THRESHOLD 0.3
GPU_COUNT 1
GRADIENT_CLIP_NORM 5.0
IMAGES_PER_GPU 2
IMAGE_MAX_DIM 512
IMAGE_META_SIZE 14
IMAGE_MIN_DIM 512
IMAGE_MIN_SCALE 0
IMAGE_RESIZE_MODE square
IMAGE_SHAPE [512 512 3]
LEARNING_MOMENTUM 0.9
LEARNING_RATE 0.002
MASK_POOL_SIZE 14
MASK_SHAPE [28, 28]
MAX_GT_INSTANCES 350
MEAN_PIXEL [43.53 39.56 48.22]
MINI_MASK_SHAPE (56, 56)
NAME nuclei
NUM_CLASSES 2
POOL_SIZE 7
POST_NMS_ROIS_INFERENCE 1000
POST_NMS_ROIS_TRAINING 2000
ROI_POSITIVE_RATIO 0.33
RPN_ANCHOR_RATIOS [0.5, 1, 2]
RPN_ANCHOR_SCALES (8, 16, 32, 64, 128)
RPN_ANCHOR_STRIDE 1
RPN_BBOX_STD_DEV [0.1 0.1 0.2 0.2]
RPN_NMS_THRESHOLD 0.7
RPN_TRAIN_ANCHORS_PER_IMAGE 256
STEPS_PER_EPOCH 880
TRAIN_BN False
TRAIN_ROIS_PER_IMAGE 128
USE_MINI_MASK True
USE_RPN_ROIS True
VALIDATION_STEPS 32
WEIGHT_DECAY 0.0001
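Note: a configuration dump like the one above is what matterport Mask_RCNN prints from config.display(). It is typically produced by subclassing mrcnn.config.Config and overriding a handful of attributes; the sketch below is a hypothetical reconstruction (class name and the exact set of overrides are assumptions, the values are copied from the dump above):

import numpy as np
from mrcnn.config import Config

class NucleiConfig(Config):
    """Hypothetical config that would print values like those in the log above."""
    NAME = "nuclei"
    GPU_COUNT = 1
    IMAGES_PER_GPU = 2                      # BATCH_SIZE = GPU_COUNT * IMAGES_PER_GPU = 2
    NUM_CLASSES = 2                         # background + nucleus
    IMAGE_MIN_DIM = 512
    IMAGE_MAX_DIM = 512
    RPN_ANCHOR_SCALES = (8, 16, 32, 64, 128)
    TRAIN_ROIS_PER_IMAGE = 128
    MAX_GT_INSTANCES = 350
    MEAN_PIXEL = np.array([43.53, 39.56, 48.22])
    LEARNING_RATE = 0.002
    STEPS_PER_EPOCH = 880
    VALIDATION_STEPS = 32

config = NucleiConfig()
config.display()                            # prints the "Configurations:" table seen above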
Number of training images (multiplied by boost factor) : 1759
Number of validation images : 63
2018-05-03 04:37:08.926778: W tensorflow/stream_executor/rocm/rocm_driver.cc:405] creating context when one is currently active; existing: 0x7f25a07f4610
2018-05-03 04:37:08.926920: I tensorflow/core/common_runtime/gpu/gpu_device.cc:906] Found device 0 with properties:
name: Device 6863
AMDGPU ISA: gfx900
memoryClockRate (GHz) 1.6
pciBusID 0000:03:00.0
Total memory: 15.98GiB
Free memory: 15.73GiB
2018-05-03 04:37:08.926939: I tensorflow/core/common_runtime/gpu/gpu_device.cc:928] DMA: 0
2018-05-03 04:37:08.926948: I tensorflow/core/common_runtime/gpu/gpu_device.cc:938] 0: Y
2018-05-03 04:37:08.926958: I tensorflow/core/common_runtime/gpu/gpu_device.cc:996] Creating TensorFlow device (/gpu:0) -> (device: 0, name: Device 6863, pci bus id: 0000:03:00.0)
Training head
Starting at epoch 0. LR=0.004
Checkpoint Path: /root/git/Mask_RCNN/logs/nuclei20180503T0437/mask_rcnn_nuclei_{epoch:04d}.h5
/usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/gradients_impl.py:95: UserWarning: Converting sparse IndexedSlices to a dense Tensor of unknown shape. This may consume a large amount of memory.
"Converting sparse IndexedSlices to a dense Tensor of unknown shape. "
/usr/local/lib/python3.5/dist-packages/keras/engine/training.py:2087: UserWarning: Using a generator with `use_multiprocessing=True` and multiple workers may duplicate your data. Please consider using the`keras.utils.Sequence class.
UserWarning('Using a generator with `use_multiprocessing=True`'
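The warning above is emitted by Keras fit_generator whenever a plain Python generator is combined with use_multiprocessing=True and several workers; the fix it suggests is to wrap the data pipeline in keras.utils.Sequence, which hands each worker a distinct batch index instead of a shared generator. A minimal sketch of that pattern (the loader names are hypothetical and not taken from NucleiExperiment.py):

import numpy as np
from keras.utils import Sequence

class NucleiSequence(Sequence):
    """Index-based batch loader; safe with multiple workers since batches are addressed by index."""
    def __init__(self, image_ids, batch_size, load_fn):
        self.image_ids = image_ids
        self.batch_size = batch_size
        self.load_fn = load_fn              # callable: image_id -> (input_array, target_array)

    def __len__(self):
        # Number of batches per epoch
        return int(np.ceil(len(self.image_ids) / self.batch_size))

    def __getitem__(self, idx):
        # Each worker gets its own idx, so no batch is produced twice
        batch_ids = self.image_ids[idx * self.batch_size:(idx + 1) * self.batch_size]
        samples = [self.load_fn(i) for i in batch_ids]
        inputs = np.stack([s[0] for s in samples])
        targets = np.stack([s[1] for s in samples])
        return inputs, targets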
Epoch 1/1
2018-05-03 04:37:26.492269: I tensorflow/core/kernels/conv_ops.cc:670] running auto-tune for Convolve
2018-05-03 04:37:29.459387: I tensorflow/core/kernels/conv_ops.cc:670] running auto-tune for Convolve
2018-05-03 04:37:29.465388: I tensorflow/core/kernels/conv_ops.cc:670] running auto-tune for Convolve
2018-05-03 04:37:33.735238: I tensorflow/core/kernels/conv_ops.cc:670] running auto-tune for Convolve
2018-05-03 04:37:37.732585: I tensorflow/core/kernels/conv_ops.cc:670] running auto-tune for Convolve
2018-05-03 04:37:37.737571: I tensorflow/core/kernels/conv_ops.cc:670] running auto-tune for Convolve
2018-05-03 04:37:42.494306: I tensorflow/core/kernels/conv_ops.cc:670] running auto-tune for Convolve
2018-05-03 04:37:46.565524: I tensorflow/core/kernels/conv_ops.cc:670] running auto-tune for Convolve
2018-05-03 04:37:46.582044: I tensorflow/core/kernels/conv_ops.cc:670] running auto-tune for Convolve
2018-05-03 04:37:51.645939: I tensorflow/core/kernels/conv_ops.cc:670] running auto-tune for Convolve
2018-05-03 04:37:55.853344: I tensorflow/core/kernels/conv_ops.cc:670] running auto-tune for Convolve
2018-05-03 04:37:55.856066: I tensorflow/core/kernels/conv_ops.cc:670] running auto-tune for Convolve
2018-05-03 04:37:55.862179: I tensorflow/core/kernels/conv_ops.cc:670] running auto-tune for Convolve
2018-05-03 04:37:55.864213: I tensorflow/core/kernels/conv_ops.cc:670] running auto-tune for Convolve
2018-05-03 04:37:55.869966: I tensorflow/core/kernels/conv_ops.cc:670] running auto-tune for Convolve
2018-05-03 04:37:55.872580: I tensorflow/core/kernels/conv_ops.cc:670] running auto-tune for Convolve
2018-05-03 04:37:55.879173: I tensorflow/core/kernels/conv_ops.cc:670] running auto-tune for Convolve
2018-05-03 04:37:55.883378: I tensorflow/core/kernels/conv_ops.cc:670] running auto-tune for Convolve
2018-05-03 04:37:55.893080: I tensorflow/core/kernels/conv_ops.cc:670] running auto-tune for Convolve
2018-05-03 04:37:56.107964: I tensorflow/core/kernels/conv_ops.cc:670] running auto-tune for Convolve
2018-05-03 04:37:56.118004: I tensorflow/core/kernels/conv_grad_input_ops.cc:858] running auto-tune for Backward-Data
2018-05-03 04:37:56.349910: I tensorflow/core/kernels/conv_ops.cc:670] running auto-tune for Convolve
2018-05-03 04:37:56.442202: I tensorflow/core/kernels/conv_grad_input_ops.cc:858] running auto-tune for Backward-Data
2018-05-03 04:37:58.937853: I tensorflow/core/kernels/conv_grad_filter_ops.cc:778] running auto-tune for Backward-Filter
2018-05-03 04:37:58.937990: I tensorflow/core/kernels/conv_grad_filter_ops.cc:778] running auto-tune for Backward-Filter
2018-05-03 04:38:04.798258: I tensorflow/core/kernels/conv_grad_input_ops.cc:858] running auto-tune for Backward-Data
2018-05-03 04:38:04.832497: I tensorflow/core/kernels/conv_grad_input_ops.cc:858] running auto-tune for Backward-Data
2018-05-03 04:38:04.832655: I tensorflow/core/kernels/conv_grad_filter_ops.cc:778] running auto-tune for Backward-Filter
2018-05-03 04:38:07.127302: I tensorflow/core/kernels/conv_grad_filter_ops.cc:778] running auto-tune for Backward-Filter
2018-05-03 04:38:07.127545: I tensorflow/core/kernels/conv_grad_input_ops.cc:858] running auto-tune for Backward-Data
2018-05-03 04:38:09.817687: I tensorflow/core/kernels/conv_grad_filter_ops.cc:778] running auto-tune for Backward-Filter
2018-05-03 04:38:12.266350: I tensorflow/core/kernels/conv_grad_input_ops.cc:858] running auto-tune for Backward-Data
2018-05-03 04:38:12.267916: I tensorflow/core/kernels/conv_grad_filter_ops.cc:778] running auto-tune for Backward-Filter
2018-05-03 04:38:13.705999: I tensorflow/core/kernels/conv_grad_input_ops.cc:858] running auto-tune for Backward-Data
2018-05-03 04:38:13.753557: I tensorflow/core/kernels/conv_grad_filter_ops.cc:778] running auto-tune for Backward-Filter
2018-05-03 04:38:13.856992: I tensorflow/core/kernels/conv_grad_input_ops.cc:858] running auto-tune for Backward-Data
2018-05-03 04:38:13.857191: I tensorflow/core/kernels/conv_grad_input_ops.cc:858] running auto-tune for Backward-Data
2018-05-03 04:38:13.875050: I tensorflow/core/kernels/conv_grad_input_ops.cc:858] running auto-tune for Backward-Data
2018-05-03 04:38:14.845277: I tensorflow/core/kernels/conv_grad_filter_ops.cc:778] running auto-tune for Backward-Filter
2018-05-03 04:38:14.853498: I tensorflow/core/kernels/conv_grad_filter_ops.cc:778] running auto-tune for Backward-Filter
2018-05-03 04:38:16.180344: I tensorflow/core/kernels/conv_grad_filter_ops.cc:778] running auto-tune for Backward-Filter
2018-05-03 04:38:16.610671: I tensorflow/core/kernels/conv_grad_input_ops.cc:858] running auto-tune for Backward-Data
2018-05-03 04:38:17.062568: I tensorflow/core/kernels/conv_grad_filter_ops.cc:778] running auto-tune for Backward-Filter
/usr/local/lib/python3.5/dist-packages/keras/engine/training.py:2330: UserWarning: Using a generator with `use_multiprocessing=True` and multiple workers may duplicate your data. Please consider using the`keras.utils.Sequence class.
UserWarning('Using a generator with `use_multiprocessing=True`'
- 756s - loss: 1.8185 - rpn_class_loss: 0.1532 - rpn_bbox_loss: 0.7845 - mrcnn_class_loss: 0.2370 - mrcnn_bbox_loss: 0.3409 - mrcnn_mask_loss: 0.3029 - val_loss: 1.4512 - val_rpn_class_loss: 0.0906 - val_rpn_bbox_loss: 0.6808 - val_mrcnn_class_loss: 0.2069 - val_mrcnn_bbox_loss: 0.2208 - val_mrcnn_mask_loss: 0.2520
Epoch 0 ended at 2018-05-03 04:50:11.456944.
dataset_train.max_masks : 0
Training 5+
Starting at epoch 1. LR=0.006
Checkpoint Path: /root/git/Mask_RCNN/logs/nuclei20180503T0437/mask_rcnn_nuclei_{epoch:04d}.h5
Epoch 2/2
2018-05-03 04:50:21.783535: I tensorflow/core/kernels/conv_grad_input_ops.cc:858] running auto-tune for Backward-Data
2018-05-03 04:50:21.785319: I tensorflow/core/kernels/conv_grad_filter_ops.cc:778] running auto-tune for Backward-Filter
2018-05-03 04:50:23.623337: I tensorflow/core/kernels/conv_grad_filter_ops.cc:778] running auto-tune for Backward-Filter
2018-05-03 04:50:27.756448: I tensorflow/core/kernels/conv_grad_filter_ops.cc:778] running auto-tune for Backward-Filter
- 731s - loss: 1.6804 - rpn_class_loss: 0.1197 - rpn_bbox_loss: 0.7701 - mrcnn_class_loss: 0.2113 - mrcnn_bbox_loss: 0.3078 - mrcnn_mask_loss: 0.2714 - val_loss: 1.5042 - val_rpn_class_loss: 0.0920 - val_rpn_bbox_loss: 0.7303 - val_mrcnn_class_loss: 0.1383 - val_mrcnn_bbox_loss: 0.2661 - val_mrcnn_mask_loss: 0.2774
Epoch 1 ended at 2018-05-03 05:02:31.872085.
Training 4+
Starting at epoch 2. LR=0.004
Checkpoint Path: /root/git/Mask_RCNN/logs/nuclei20180503T0437/mask_rcnn_nuclei_{epoch:04d}.h5
Epoch 3/3
2018-05-03 05:02:55.334515: I tensorflow/core/kernels/conv_grad_input_ops.cc:858] running auto-tune for Backward-Data
2018-05-03 05:03:00.698775: I tensorflow/core/kernels/conv_grad_input_ops.cc:858] running auto-tune for Backward-Data
2018-05-03 05:03:05.804288: I tensorflow/core/kernels/conv_grad_filter_ops.cc:778] running auto-tune for Backward-Filter
2018-05-03 05:03:08.910191: I tensorflow/core/kernels/conv_grad_filter_ops.cc:778] running auto-tune for Backward-Filter
- 799s - loss: 1.3462 - rpn_class_loss: 0.0858 - rpn_bbox_loss: 0.6029 - mrcnn_class_loss: 0.1977 - mrcnn_bbox_loss: 0.2158 - mrcnn_mask_loss: 0.2440 - val_loss: 0.9118 - val_rpn_class_loss: 0.0352 - val_rpn_bbox_loss: 0.3446 - val_mrcnn_class_loss: 0.1560 - val_mrcnn_bbox_loss: 0.1546 - val_mrcnn_mask_loss: 0.2213
Epoch 2 ended at 2018-05-03 05:16:11.124218.
Training 3+
Starting at epoch 3. LR=0.002
Checkpoint Path: /root/git/Mask_RCNN/logs/nuclei20180503T0437/mask_rcnn_nuclei_{epoch:04d}.h5
Epoch 4/4
2018-05-03 05:16:40.823217: I tensorflow/core/kernels/conv_grad_input_ops.cc:858] running auto-tune for Backward-Data
2018-05-03 05:16:46.983038: I tensorflow/core/kernels/conv_grad_input_ops.cc:858] running auto-tune for Backward-Data
2018-05-03 05:16:51.724515: I tensorflow/core/kernels/conv_grad_input_ops.cc:858] running auto-tune for Backward-Data
2018-05-03 05:16:51.725984: I tensorflow/core/kernels/conv_grad_filter_ops.cc:778] running auto-tune for Backward-Filter
2018-05-03 05:16:53.726867: I tensorflow/core/kernels/conv_grad_filter_ops.cc:778] running auto-tune for Backward-Filter
2018-05-03 05:16:56.486437: I tensorflow/core/kernels/conv_grad_filter_ops.cc:778] running auto-tune for Backward-Filter
- 861s - loss: 0.9352 - rpn_class_loss: 0.0396 - rpn_bbox_loss: 0.3388 - mrcnn_class_loss: 0.1587 - mrcnn_bbox_loss: 0.1696 - mrcnn_mask_loss: 0.2283 - val_loss: 0.9022 - val_rpn_class_loss: 0.0386 - val_rpn_bbox_loss: 0.3069 - val_mrcnn_class_loss: 0.1631 - val_mrcnn_bbox_loss: 0.1596 - val_mrcnn_mask_loss: 0.2338
Epoch 3 ended at 2018-05-03 05:30:58.442910.
Training 2+
Starting at epoch 4. LR=0.002
Checkpoint Path: /root/git/Mask_RCNN/logs/nuclei20180503T0437/mask_rcnn_nuclei_{epoch:04d}.h5
Epoch 5/6
2018-05-03 05:31:35.624769: I tensorflow/core/kernels/conv_grad_input_ops.cc:858] running auto-tune for Backward-Data
2018-05-03 05:31:41.707752: I tensorflow/core/kernels/conv_grad_input_ops.cc:858] running auto-tune for Backward-Data
2018-05-03 05:31:46.223435: I tensorflow/core/kernels/conv_grad_input_ops.cc:858] running auto-tune for Backward-Data
2018-05-03 05:31:46.241443: I tensorflow/core/kernels/conv_grad_filter_ops.cc:778] running auto-tune for Backward-Filter
- 916s - loss: 0.8824 - rpn_class_loss: 0.0361 - rpn_bbox_loss: 0.3037 - mrcnn_class_loss: 0.1521 - mrcnn_bbox_loss: 0.1636 - mrcnn_mask_loss: 0.2268 - val_loss: 0.8045 - val_rpn_class_loss: 0.0240 - val_rpn_bbox_loss: 0.2611 - val_mrcnn_class_loss: 0.1352 - val_mrcnn_bbox_loss: 0.1492 - val_mrcnn_mask_loss: 0.2349
Epoch 4 ended at 2018-05-03 05:46:46.758007.
Epoch 6/6
Traceback (most recent call last):
Traceback (most recent call last):
Traceback (most recent call last):
Traceback (most recent call last):
File "/usr/local/lib/python3.5/dist-packages/keras/utils/data_utils.py", line 676, in _data_generator_task
self.queue.qsize() < self.max_queue_size):
File "/usr/local/lib/python3.5/dist-packages/keras/utils/data_utils.py", line 676, in _data_generator_task
self.queue.qsize() < self.max_queue_size):
File "<string>", line 2, in qsize
File "/usr/lib/python3.5/multiprocessing/managers.py", line 717, in _callmethod
kind, result = conn.recv()
File "/usr/lib/python3.5/multiprocessing/connection.py", line 250, in recv
buf = self._recv_bytes()
File "/usr/lib/python3.5/multiprocessing/connection.py", line 407, in _recv_bytes
buf = self._recv(4)
File "/usr/lib/python3.5/multiprocessing/connection.py", line 383, in _recv
raise EOFError
File "/usr/local/lib/python3.5/dist-packages/keras/utils/data_utils.py", line 676, in _data_generator_task
self.queue.qsize() < self.max_queue_size):
File "<string>", line 2, in qsize
File "/usr/lib/python3.5/multiprocessing/managers.py", line 717, in _callmethod
kind, result = conn.recv()
File "/usr/lib/python3.5/multiprocessing/connection.py", line 250, in recv
buf = self._recv_bytes()
File "/usr/lib/python3.5/multiprocessing/connection.py", line 407, in _recv_bytes
buf = self._recv(4)
File "/usr/local/lib/python3.5/dist-packages/keras/utils/data_utils.py", line 678, in _data_generator_task
self.queue.put((True, generator_output))
File "<string>", line 2, in put
EOFError
File "/usr/lib/python3.5/multiprocessing/connection.py", line 383, in _recv
raise EOFError
EOFError
File "/usr/lib/python3.5/multiprocessing/managers.py", line 717, in _callmethod
kind, result = conn.recv()
File "/usr/lib/python3.5/multiprocessing/connection.py", line 250, in recv
buf = self._recv_bytes()
File "/usr/lib/python3.5/multiprocessing/connection.py", line 407, in _recv_bytes
buf = self._recv(4)
File "/usr/lib/python3.5/multiprocessing/connection.py", line 383, in _recv
raise EOFError
File "<string>", line 2, in qsize
File "/usr/lib/python3.5/multiprocessing/managers.py", line 717, in _callmethod
kind, result = conn.recv()
File "/usr/lib/python3.5/multiprocessing/connection.py", line 250, in recv
buf = self._recv_bytes()
File "/usr/lib/python3.5/multiprocessing/connection.py", line 407, in _recv_bytes
buf = self._recv(4)
File "/usr/lib/python3.5/multiprocessing/connection.py", line 383, in _recv
raise EOFError
EOFError
EOFError
Process Process-45:
Process Process-44:
Process Process-43:
Process Process-42:
EOFError
EOFError
During handling of the above exception, another exception occurred:
EOFError
EOFError
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib/python3.5/multiprocessing/process.py", line 249, in _bootstrap
self.run()
During handling of the above exception, another exception occurred:
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
Traceback (most recent call last):
File "/usr/lib/python3.5/multiprocessing/process.py", line 249, in _bootstrap
self.run()
Traceback (most recent call last):
File "/usr/lib/python3.5/multiprocessing/process.py", line 249, in _bootstrap
self.run()
File "/usr/lib/python3.5/multiprocessing/process.py", line 249, in _bootstrap
self.run()
File "/usr/lib/python3.5/multiprocessing/process.py", line 93, in run
self._target(*self._args, **self._kwargs)
File "/usr/local/lib/python3.5/dist-packages/keras/utils/data_utils.py", line 688, in _data_generator_task
self.queue.put((False, e))
File "<string>", line 2, in put
File "/usr/lib/python3.5/multiprocessing/managers.py", line 716, in _callmethod
conn.send((self._id, methodname, args, kwds))
File "/usr/lib/python3.5/multiprocessing/process.py", line 93, in run
self._target(*self._args, **self._kwargs)
File "/usr/local/lib/python3.5/dist-packages/keras/utils/data_utils.py", line 688, in _data_generator_task
self.queue.put((False, e))
File "/usr/lib/python3.5/multiprocessing/connection.py", line 206, in send
self._send_bytes(ForkingPickler.dumps(obj))
File "/usr/lib/python3.5/multiprocessing/connection.py", line 404, in _send_bytes
self._send(header + buf)
File "/usr/lib/python3.5/multiprocessing/process.py", line 93, in run
self._target(*self._args, **self._kwargs)
File "<string>", line 2, in put
File "/usr/lib/python3.5/multiprocessing/managers.py", line 716, in _callmethod
conn.send((self._id, methodname, args, kwds))
File "/usr/lib/python3.5/multiprocessing/process.py", line 93, in run
self._target(*self._args, **self._kwargs)
File "/usr/lib/python3.5/multiprocessing/connection.py", line 368, in _send
n = write(self._handle, buf)
File "/usr/lib/python3.5/multiprocessing/connection.py", line 206, in send
self._send_bytes(ForkingPickler.dumps(obj))
File "/usr/local/lib/python3.5/dist-packages/keras/utils/data_utils.py", line 688, in _data_generator_task
self.queue.put((False, e))
File "<string>", line 2, in put
File "/usr/local/lib/python3.5/dist-packages/keras/utils/data_utils.py", line 688, in _data_generator_task
self.queue.put((False, e))
File "<string>", line 2, in put
File "/usr/lib/python3.5/multiprocessing/managers.py", line 716, in _callmethod
conn.send((self._id, methodname, args, kwds))
File "/usr/lib/python3.5/multiprocessing/connection.py", line 206, in send
self._send_bytes(ForkingPickler.dumps(obj))
BrokenPipeError: [Errno 32] Broken pipe
File "/usr/lib/python3.5/multiprocessing/connection.py", line 404, in _send_bytes
self._send(header + buf)
File "/usr/lib/python3.5/multiprocessing/connection.py", line 368, in _send
n = write(self._handle, buf)
File "/usr/lib/python3.5/multiprocessing/managers.py", line 716, in _callmethod
conn.send((self._id, methodname, args, kwds))
File "/usr/lib/python3.5/multiprocessing/connection.py", line 206, in send
self._send_bytes(ForkingPickler.dumps(obj))
File "/usr/lib/python3.5/multiprocessing/connection.py", line 404, in _send_bytes
self._send(header + buf)
File "/usr/lib/python3.5/multiprocessing/connection.py", line 368, in _send
n = write(self._handle, buf)
BrokenPipeError: [Errno 32] Broken pipe
BrokenPipeError: [Errno 32] Broken pipe
File "/usr/lib/python3.5/multiprocessing/connection.py", line 404, in _send_bytes
self._send(header + buf)
File "/usr/lib/python3.5/multiprocessing/connection.py", line 368, in _send
n = write(self._handle, buf)
BrokenPipeError: [Errno 32] Broken pipe
- 892s - loss: 0.8489 - rpn_class_loss: 0.0320 - rpn_bbox_loss: 0.2825 - mrcnn_class_loss: 0.1462 - mrcnn_bbox_loss: 0.1605 - mrcnn_mask_loss: 0.2277 - val_loss: 0.8036 - val_rpn_class_loss: 0.0265 - val_rpn_bbox_loss: 0.2616 - val_mrcnn_class_loss: 0.1615 - val_mrcnn_bbox_loss: 0.1326 - val_mrcnn_mask_loss: 0.2213
Epoch 5 ended at 2018-05-03 06:01:40.396654.
Starting at epoch 6. LR=0.001
Checkpoint Path: /root/git/Mask_RCNN/logs/nuclei20180503T0437/mask_rcnn_nuclei_{epoch:04d}.h5
^C^C^C^C^CTraceback (most recent call last):
File "NucleiExperiment.py", line 501, in <module>
layers="all") # "5+", "4+", "3+", "2+", "all"
File "/root/git/Mask_RCNN/mrcnn/model.py", line 2458, in train
use_multiprocessing=True,
File "/usr/local/lib/python3.5/dist-packages/keras/legacy/interfaces.py", line 91, in wrapper
return func(*args, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/keras/engine/training.py", line 2133, in fit_generator
callbacks.set_model(callback_model)
File "/usr/local/lib/python3.5/dist-packages/keras/callbacks.py", line 52, in set_model
callback.set_model(model)
File "/usr/local/lib/python3.5/dist-packages/keras/callbacks.py", line 720, in set_model
self.sess = K.get_session()
File "/usr/local/lib/python3.5/dist-packages/keras/backend/tensorflow_backend.py", line 199, in get_session
session.run(tf.variables_initializer(uninitialized_vars))
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 895, in run
run_metadata_ptr)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1124, in _run
feed_dict_tensor, options, run_metadata)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1321, in _do_run
options, run_metadata)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1327, in _do_call
return fn(*args)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1297, in _run_fn
self._extend_graph()
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1358, in _extend_graph
self._session, graph_def.SerializeToString(), status)
KeyboardInterrupt
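The traceback shows the script was interrupted inside its last model.train() call (NucleiExperiment.py line 501, layers="all"). The earlier "Training head" / "Training 5+" / "Training 4+" ... lines correspond to matterport Mask_RCNN's layers argument, which trains only the named layer group and freezes the rest, with the epochs argument interpreted as a cumulative epoch count. A sketch of the staged schedule as it can be read off the log (the learning-rate multipliers and the final epoch count are assumptions about NucleiExperiment.py, not copied from it):

# model is an mrcnn.model.MaskRCNN(mode="training", ...) instance; config as above.
stages = [
    ("heads", config.LEARNING_RATE * 2, 1),   # log: "Training head", LR=0.004, epoch 1
    ("5+",    config.LEARNING_RATE * 3, 2),   # log: LR=0.006
    ("4+",    config.LEARNING_RATE * 2, 3),   # log: LR=0.004
    ("3+",    config.LEARNING_RATE,     4),   # log: LR=0.002
    ("2+",    config.LEARNING_RATE,     6),   # log: LR=0.002, epochs 5-6
    ("all",   config.LEARNING_RATE / 2, 8),   # log: LR=0.001; run interrupted at this stage
]
for layers, lr, until_epoch in stages:
    model.train(dataset_train, dataset_val,
                learning_rate=lr,
                epochs=until_epoch,           # cumulative epoch number to train up to
                layers=layers)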
^C^C
root@C-c6fe3f8f-7781-431b-9423-4071ddb07676-146:~/git/Mask_RCNN# ^C
root@C-c6fe3f8f-7781-431b-9423-4071ddb07676-146:~/git/Mask_RCNN# ^C
root@C-c6fe3f8f-7781-431b-9423-4071ddb07676-146:~/git/Mask_RCNN# shutdown -h now
root@C-c6fe3f8f-7781-431b-9423-4071ddb07676-146:~/git/Mask_RCNN# Connection to 172.104.98.186 closed by remote host.
Connection to 172.104.98.186 closed.
[1]+ Done gedit .ssh/guest
briansp@Ryzen1800X:~$ ssh root@172.104.98.186 -p 22 -i ~/.ssh/guest.pem -o ServerAliveInterval=10
ssh_exchange_identification: Connection closed by remote host
briansp@Ryzen1800X:~$