@wasertech
Last active April 22, 2022 21:50
This test never reaches the end.
 _____        __
/__  /  _____/ /_
  / /  / ___/ __ \
 / /__(__  ) / / /
/____/____/_/ /_/
Your wish is my command.
❯ docker run \
-it \
--gpus=all \
--privileged \
--shm-size=1g \
--ulimit memlock=-1 \
--ulimit stack=67108864 \
--mount type=bind,src=/mnt/Archives/Données/STT/data,dst=/mnt \
--env TF_CUDNN_RESET_RND_GEN_STATE=1 --entrypoint "bash" stt-train && \
docker container prune -f || \
docker container prune -f
root@9f31365fd26b:/code# ./bin/run-ci-mailabs_time.sh /mnt/extracted/data/M-AILABS/ fr_FR /mnt/models/alphabet.txt /mnt/lm/kenlm.scorer
+ mailabs_dir=/mnt/extracted/data/M-AILABS/
+ mailabs_lang=fr_FR
+ alphabet_path=/mnt/models/alphabet.txt
+ scorer_path=/mnt/lm/kenlm.scorer
+ mailabs_train_csv=/mnt/extracted/data/M-AILABS/fr_FR/fr_FR_train.csv
+ mailabs_dev_csv=/mnt/extracted/data/M-AILABS/fr_FR/fr_FR_test.csv
+ mailabs_test_csv=/mnt/extracted/data/M-AILABS/fr_FR/fr_FR_test.csv
+ epoch_count=1
+ audio_sample_rate=16000
+ [ ! -f /mnt/extracted/data/M-AILABS/fr_FR/fr_FR_train.csv ]
+ date +%s
+ st=1650641552
+ echo Index 0 starts at 1650641552.
Index 0 starts at 1650641552.
+ python -u train.py --alphabet_config_path /mnt/models/alphabet.txt --show_progressbar false --early_stop false --train_files /mnt/extracted/data/M-AILABS/fr_FR/fr_FR_train.csv --train_batch_size 32 --feature_cache /tmp/mailabs_cache --dev_files --dev_batch_size 32 --test_files --test_batch_size 32 --n_hidden 100 --epochs 1 --max_to_keep 1 --checkpoint_dir /tmp/mailabs_ckpt --learning_rate 0.001 --dropout_rate 0.05 --export_dir /tmp/mailabs_train --scorer_path /mnt/lm/kenlm.scorer --audio_sample_rate 16000 --export_tflite false --log_level 0
Using the top level train.py script is deprecated and will be removed in a future release. Instead use: python -m coqui_stt_training.train
2022-04-22 15:32:33.223686: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
WARNING:tensorflow:Deprecation warnings have been disabled. Set TF_ENABLE_DEPRECATION_WARNINGS=1 to re-enable them.
2022-04-22 15:32:34.546024: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 3494015000 Hz
2022-04-22 15:32:34.547257: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x4aa4ad0 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2022-04-22 15:32:34.547282: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
2022-04-22 15:32:34.549152: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcuda.so.1
2022-04-22 15:32:34.801435: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x4abe950 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
2022-04-22 15:32:34.801481: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): NVIDIA TITAN RTX, Compute Capability 7.5
2022-04-22 15:32:34.801497: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (1): NVIDIA TITAN RTX, Compute Capability 7.5
2022-04-22 15:32:34.802180: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1666] Found device 0 with properties:
name: NVIDIA TITAN RTX major: 7 minor: 5 memoryClockRate(GHz): 1.77
pciBusID: 0000:08:00.0
2022-04-22 15:32:34.802532: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1666] Found device 1 with properties:
name: NVIDIA TITAN RTX major: 7 minor: 5 memoryClockRate(GHz): 1.77
pciBusID: 0000:43:00.0
2022-04-22 15:32:34.802578: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
2022-04-22 15:32:34.805820: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublas.so.11
2022-04-22 15:32:34.829818: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcufft.so.10
2022-04-22 15:32:34.830166: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcurand.so.10
2022-04-22 15:32:34.830792: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusolver.so.11
2022-04-22 15:32:34.832648: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusparse.so.11
2022-04-22 15:32:34.832814: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudnn.so.8
2022-04-22 15:32:34.833683: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1794] Adding visible gpu devices: 0, 1
2022-04-22 15:32:34.833710: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
2022-04-22 15:32:35.271926: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1206] Device interconnect StreamExecutor with strength 1 edge matrix:
2022-04-22 15:32:35.271974: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1212] 0 1
2022-04-22 15:32:35.271994: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1225] 0: N N
2022-04-22 15:32:35.272001: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1225] 1: N N
2022-04-22 15:32:35.272807: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1351] Created TensorFlow device (/device:GPU:0 with 22856 MB memory) -> physical GPU (device: 0, name: NVIDIA TITAN RTX, pci bus id: 0000:08:00.0, compute capability: 7.5)
2022-04-22 15:32:35.273405: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1351] Created TensorFlow device (/device:GPU:1 with 22101 MB memory) -> physical GPU (device: 1, name: NVIDIA TITAN RTX, pci bus id: 0000:43:00.0, compute capability: 7.5)
I Performing dummy training to check for memory problems.
I If the following process crashes, you likely have batch sizes that are too big for your available system memory (or GPU memory).
2022-04-22 15:32:35.822279: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1666] Found device 0 with properties:
name: NVIDIA TITAN RTX major: 7 minor: 5 memoryClockRate(GHz): 1.77
pciBusID: 0000:08:00.0
2022-04-22 15:32:35.822517: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1666] Found device 1 with properties:
name: NVIDIA TITAN RTX major: 7 minor: 5 memoryClockRate(GHz): 1.77
pciBusID: 0000:43:00.0
2022-04-22 15:32:35.822547: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
2022-04-22 15:32:35.822584: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublas.so.11
2022-04-22 15:32:35.822600: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcufft.so.10
2022-04-22 15:32:35.822626: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcurand.so.10
2022-04-22 15:32:35.822644: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusolver.so.11
2022-04-22 15:32:35.822662: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusparse.so.11
2022-04-22 15:32:35.822681: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudnn.so.8
2022-04-22 15:32:35.823455: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1794] Adding visible gpu devices: 0, 1
2022-04-22 15:32:36.385500: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1666] Found device 0 with properties:
name: NVIDIA TITAN RTX major: 7 minor: 5 memoryClockRate(GHz): 1.77
pciBusID: 0000:08:00.0
2022-04-22 15:32:36.385737: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1666] Found device 1 with properties:
name: NVIDIA TITAN RTX major: 7 minor: 5 memoryClockRate(GHz): 1.77
pciBusID: 0000:43:00.0
2022-04-22 15:32:36.385768: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
2022-04-22 15:32:36.385803: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublas.so.11
2022-04-22 15:32:36.385823: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcufft.so.10
2022-04-22 15:32:36.385840: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcurand.so.10
2022-04-22 15:32:36.385858: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusolver.so.11
2022-04-22 15:32:36.385874: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusparse.so.11
2022-04-22 15:32:36.385894: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudnn.so.8
2022-04-22 15:32:36.386670: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1794] Adding visible gpu devices: 0, 1
2022-04-22 15:32:36.386706: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1206] Device interconnect StreamExecutor with strength 1 edge matrix:
2022-04-22 15:32:36.386714: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1212] 0 1
2022-04-22 15:32:36.386720: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1225] 0: N N
2022-04-22 15:32:36.386725: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1225] 1: N N
2022-04-22 15:32:36.387323: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1351] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 22856 MB memory) -> physical GPU (device: 0, name: NVIDIA TITAN RTX, pci bus id: 0000:08:00.0, compute capability: 7.5)
2022-04-22 15:32:36.387528: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1351] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:1 with 22101 MB memory) -> physical GPU (device: 1, name: NVIDIA TITAN RTX, pci bus id: 0000:43:00.0, compute capability: 7.5)
D Session opened.
I Could not find best validating checkpoint.
I Could not find most recent checkpoint.
I Initializing all variables.
I STARTING Optimization
I Training epoch 0...
2022-04-22 15:32:37.676174: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublas.so.11
2022-04-22 15:32:40.930579: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudnn.so.8
I Finished training epoch 0 - loss: 2875.326660
--------------------------------------------------------------------------------
I FINISHED optimization in 0:00:05.528734
D Session closed.
I Dummy run finished without problems, now starting real training process.
2022-04-22 15:32:43.191934: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1666] Found device 0 with properties:
name: NVIDIA TITAN RTX major: 7 minor: 5 memoryClockRate(GHz): 1.77
pciBusID: 0000:08:00.0
2022-04-22 15:32:43.192156: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1666] Found device 1 with properties:
name: NVIDIA TITAN RTX major: 7 minor: 5 memoryClockRate(GHz): 1.77
pciBusID: 0000:43:00.0
2022-04-22 15:32:43.192191: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
2022-04-22 15:32:43.192230: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublas.so.11
2022-04-22 15:32:43.192250: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcufft.so.10
2022-04-22 15:32:43.192268: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcurand.so.10
2022-04-22 15:32:43.192287: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusolver.so.11
2022-04-22 15:32:43.192303: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusparse.so.11
2022-04-22 15:32:43.192319: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudnn.so.8
2022-04-22 15:32:43.193097: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1794] Adding visible gpu devices: 0, 1
2022-04-22 15:32:43.193136: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1206] Device interconnect StreamExecutor with strength 1 edge matrix:
2022-04-22 15:32:43.193144: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1212] 0 1
2022-04-22 15:32:43.193149: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1225] 0: N N
2022-04-22 15:32:43.193154: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1225] 1: N N
2022-04-22 15:32:43.193755: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1351] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 22856 MB memory) -> physical GPU (device: 0, name: NVIDIA TITAN RTX, pci bus id: 0000:08:00.0, compute capability: 7.5)
2022-04-22 15:32:43.193960: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1351] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:1 with 22101 MB memory) -> physical GPU (device: 1, name: NVIDIA TITAN RTX, pci bus id: 0000:43:00.0, compute capability: 7.5)
D Session opened.
I STARTING Optimization
I Training epoch 0...
I Finished training epoch 0 - loss: 2875.326660
--------------------------------------------------------------------------------
I FINISHED optimization in 0:00:00.860954
D Session closed.
W Specifying --export_dir when calling train module. Use python -m coqui_stt_training.export Using the training module as a generic driver for all training related functionality is deprecated and will be removed soon. Use the specific modules:
W python -m coqui_stt_training.train
W python -m coqui_stt_training.evaluate
W python -m coqui_stt_training.export
W python -m coqui_stt_training.training_graph_inference
I Exporting the model...
2022-04-22 15:32:44.745563: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1666] Found device 0 with properties:
name: NVIDIA TITAN RTX major: 7 minor: 5 memoryClockRate(GHz): 1.77
pciBusID: 0000:08:00.0
2022-04-22 15:32:44.745801: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1666] Found device 1 with properties:
name: NVIDIA TITAN RTX major: 7 minor: 5 memoryClockRate(GHz): 1.77
pciBusID: 0000:43:00.0
2022-04-22 15:32:44.745831: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
2022-04-22 15:32:44.745869: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublas.so.11
2022-04-22 15:32:44.745885: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcufft.so.10
2022-04-22 15:32:44.745900: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcurand.so.10
2022-04-22 15:32:44.745914: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusolver.so.11
2022-04-22 15:32:44.745929: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusparse.so.11
2022-04-22 15:32:44.745943: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudnn.so.8
2022-04-22 15:32:44.746714: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1794] Adding visible gpu devices: 0, 1
2022-04-22 15:32:44.746749: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1206] Device interconnect StreamExecutor with strength 1 edge matrix:
2022-04-22 15:32:44.746756: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1212] 0 1
2022-04-22 15:32:44.746763: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1225] 0: N N
2022-04-22 15:32:44.746768: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1225] 1: N N
2022-04-22 15:32:44.747384: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1351] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 22856 MB memory) -> physical GPU (device: 0, name: NVIDIA TITAN RTX, pci bus id: 0000:08:00.0, compute capability: 7.5)
2022-04-22 15:32:44.747602: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1351] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:1 with 22101 MB memory) -> physical GPU (device: 1, name: NVIDIA TITAN RTX, pci bus id: 0000:43:00.0, compute capability: 7.5)
I Could not find best validating checkpoint.
I Loading most recent checkpoint from /tmp/mailabs_ckpt/train-1
I Loading variable from checkpoint: cudnn_lstm/rnn/multi_rnn_cell/cell_0/cudnn_compatible_lstm_cell/bias
I Loading variable from checkpoint: cudnn_lstm/rnn/multi_rnn_cell/cell_0/cudnn_compatible_lstm_cell/kernel
I Loading variable from checkpoint: layer_1/bias
I Loading variable from checkpoint: layer_1/weights
I Loading variable from checkpoint: layer_2/bias
I Loading variable from checkpoint: layer_2/weights
I Loading variable from checkpoint: layer_3/bias
I Loading variable from checkpoint: layer_3/weights
I Loading variable from checkpoint: layer_5/bias
I Loading variable from checkpoint: layer_5/weights
I Loading variable from checkpoint: layer_6/bias
I Loading variable from checkpoint: layer_6/weights
I Models exported at /tmp/mailabs_train
I Model metadata file saved to /tmp/mailabs_train/author_model_0.0.1.md. Before submitting the exported model for publishing make sure all information in the metadata file is correct, and complete the URL fields.
^CProcess ForkPoolWorker-24:
Process ForkPoolWorker-21:
Process ForkPoolWorker-13:
Process ForkPoolWorker-18:
Process ForkPoolWorker-9:
Process ForkPoolWorker-16:
Process ForkPoolWorker-20:
Process ForkPoolWorker-19:
Process ForkPoolWorker-17:
Process ForkPoolWorker-22:
Process ForkPoolWorker-14:
Process ForkPoolWorker-5:
Process ForkPoolWorker-23:
Process ForkPoolWorker-15:
Process ForkPoolWorker-10:
Error in atexit._run_exitfuncs:
Process ForkPoolWorker-1:
Process ForkPoolWorker-7:
Process ForkPoolWorker-11:
Traceback (most recent call last):
Process ForkPoolWorker-8:
File "/usr/lib/python3.8/multiprocessing/util.py", line 300, in _run_finalizers
finalizer()
File "/usr/lib/python3.8/multiprocessing/util.py", line 224, in __call__
res = self._callback(*self._args, **self._kwargs)
File "/usr/lib/python3.8/multiprocessing/pool.py", line 692, in _terminate_pool
Process ForkPoolWorker-2:
Process ForkPoolWorker-12:
cls._help_stuff_finish(inqueue, task_handler, len(pool))
File "/usr/lib/python3.8/multiprocessing/pool.py", line 672, in _help_stuff_finish
inqueue._rlock.acquire()
KeyboardInterrupt
Traceback (most recent call last):
File "/usr/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
self.run()
File "/usr/lib/python3.8/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
Traceback (most recent call last):
Traceback (most recent call last):
Traceback (most recent call last):
Traceback (most recent call last):
Traceback (most recent call last):
Traceback (most recent call last):
File "/usr/lib/python3.8/multiprocessing/pool.py", line 114, in worker
task = get()
File "/usr/lib/python3.8/multiprocessing/queues.py", line 355, in get
with self._rlock:
File "/usr/lib/python3.8/multiprocessing/synchronize.py", line 95, in __enter__
return self._semlock.__enter__()
KeyboardInterrupt
File "/usr/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
self.run()
File "/usr/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
self.run()
File "/usr/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
self.run()
File "/usr/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
self.run()
File "/usr/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
self.run()
File "/usr/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
self.run()
File "/usr/lib/python3.8/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/usr/lib/python3.8/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/usr/lib/python3.8/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/usr/lib/python3.8/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/usr/lib/python3.8/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/usr/lib/python3.8/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/usr/lib/python3.8/multiprocessing/pool.py", line 114, in worker
task = get()
File "/usr/lib/python3.8/multiprocessing/pool.py", line 114, in worker
task = get()
File "/usr/lib/python3.8/multiprocessing/pool.py", line 114, in worker
task = get()
File "/usr/lib/python3.8/multiprocessing/pool.py", line 114, in worker
task = get()
File "/usr/lib/python3.8/multiprocessing/pool.py", line 114, in worker
task = get()
File "/usr/lib/python3.8/multiprocessing/pool.py", line 114, in worker
task = get()
File "/usr/lib/python3.8/multiprocessing/queues.py", line 355, in get
with self._rlock:
Traceback (most recent call last):
File "/usr/lib/python3.8/multiprocessing/queues.py", line 355, in get
with self._rlock:
File "/usr/lib/python3.8/multiprocessing/queues.py", line 355, in get
with self._rlock:
File "/usr/lib/python3.8/multiprocessing/queues.py", line 355, in get
with self._rlock:
File "/usr/lib/python3.8/multiprocessing/queues.py", line 355, in get
with self._rlock:
File "/usr/lib/python3.8/multiprocessing/synchronize.py", line 95, in __enter__
return self._semlock.__enter__()
File "/usr/lib/python3.8/multiprocessing/queues.py", line 355, in get
with self._rlock:
File "/usr/lib/python3.8/multiprocessing/synchronize.py", line 95, in __enter__
return self._semlock.__enter__()
File "/usr/lib/python3.8/multiprocessing/synchronize.py", line 95, in __enter__
return self._semlock.__enter__()
File "/usr/lib/python3.8/multiprocessing/synchronize.py", line 95, in __enter__
return self._semlock.__enter__()
File "/usr/lib/python3.8/multiprocessing/synchronize.py", line 95, in __enter__
return self._semlock.__enter__()
File "/usr/lib/python3.8/multiprocessing/synchronize.py", line 95, in __enter__
return self._semlock.__enter__()
KeyboardInterrupt
KeyboardInterrupt
KeyboardInterrupt
KeyboardInterrupt
KeyboardInterrupt
KeyboardInterrupt
File "/usr/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
self.run()
File "/usr/lib/python3.8/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/usr/lib/python3.8/multiprocessing/pool.py", line 114, in worker
task = get()
File "/usr/lib/python3.8/multiprocessing/queues.py", line 355, in get
with self._rlock:
File "/usr/lib/python3.8/multiprocessing/synchronize.py", line 95, in __enter__
return self._semlock.__enter__()
KeyboardInterrupt
Traceback (most recent call last):
Traceback (most recent call last):
Traceback (most recent call last):
Traceback (most recent call last):
Traceback (most recent call last):
Traceback (most recent call last):
Traceback (most recent call last):
Traceback (most recent call last):
Traceback (most recent call last):
Traceback (most recent call last):
File "/usr/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
self.run()
File "/usr/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
self.run()
File "/usr/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
self.run()
File "/usr/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
self.run()
File "/usr/lib/python3.8/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/usr/lib/python3.8/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/usr/lib/python3.8/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/usr/lib/python3.8/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/usr/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
self.run()
File "/usr/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
self.run()
File "/usr/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
self.run()
File "/usr/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
self.run()
File "/usr/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
self.run()
File "/usr/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
self.run()
File "/usr/lib/python3.8/multiprocessing/pool.py", line 114, in worker
task = get()
File "/usr/lib/python3.8/multiprocessing/pool.py", line 114, in worker
task = get()
File "/usr/lib/python3.8/multiprocessing/pool.py", line 114, in worker
task = get()
File "/usr/lib/python3.8/multiprocessing/pool.py", line 114, in worker
task = get()
File "/usr/lib/python3.8/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/usr/lib/python3.8/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/usr/lib/python3.8/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/usr/lib/python3.8/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/usr/lib/python3.8/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/usr/lib/python3.8/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/usr/lib/python3.8/multiprocessing/queues.py", line 355, in get
with self._rlock:
File "/usr/lib/python3.8/multiprocessing/queues.py", line 355, in get
with self._rlock:
File "/usr/lib/python3.8/multiprocessing/queues.py", line 355, in get
with self._rlock:
File "/usr/lib/python3.8/multiprocessing/queues.py", line 355, in get
with self._rlock:
Traceback (most recent call last):
File "/usr/lib/python3.8/multiprocessing/synchronize.py", line 95, in __enter__
return self._semlock.__enter__()
File "/usr/lib/python3.8/multiprocessing/pool.py", line 114, in worker
task = get()
File "/usr/lib/python3.8/multiprocessing/pool.py", line 114, in worker
task = get()
File "/usr/lib/python3.8/multiprocessing/pool.py", line 114, in worker
task = get()
File "/usr/lib/python3.8/multiprocessing/pool.py", line 114, in worker
task = get()
File "/usr/lib/python3.8/multiprocessing/pool.py", line 114, in worker
task = get()
File "/usr/lib/python3.8/multiprocessing/pool.py", line 114, in worker
task = get()
File "/usr/lib/python3.8/multiprocessing/synchronize.py", line 95, in __enter__
return self._semlock.__enter__()
File "/usr/lib/python3.8/multiprocessing/queues.py", line 355, in get
with self._rlock:
File "/usr/lib/python3.8/multiprocessing/synchronize.py", line 95, in __enter__
return self._semlock.__enter__()
File "/usr/lib/python3.8/multiprocessing/synchronize.py", line 95, in __enter__
return self._semlock.__enter__()
File "/usr/lib/python3.8/multiprocessing/queues.py", line 355, in get
with self._rlock:
File "/usr/lib/python3.8/multiprocessing/queues.py", line 355, in get
with self._rlock:
File "/usr/lib/python3.8/multiprocessing/queues.py", line 355, in get
with self._rlock:
File "/usr/lib/python3.8/multiprocessing/queues.py", line 355, in get
with self._rlock:
File "/usr/lib/python3.8/multiprocessing/queues.py", line 355, in get
with self._rlock:
KeyboardInterrupt
File "/usr/lib/python3.8/multiprocessing/synchronize.py", line 95, in __enter__
return self._semlock.__enter__()
File "/usr/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
self.run()
KeyboardInterrupt
File "/usr/lib/python3.8/multiprocessing/synchronize.py", line 95, in __enter__
return self._semlock.__enter__()
File "/usr/lib/python3.8/multiprocessing/synchronize.py", line 95, in __enter__
return self._semlock.__enter__()
File "/usr/lib/python3.8/multiprocessing/synchronize.py", line 95, in __enter__
return self._semlock.__enter__()
File "/usr/lib/python3.8/multiprocessing/synchronize.py", line 95, in __enter__
return self._semlock.__enter__()
KeyboardInterrupt
KeyboardInterrupt
File "/usr/lib/python3.8/multiprocessing/synchronize.py", line 95, in __enter__
return self._semlock.__enter__()
File "/usr/lib/python3.8/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/usr/lib/python3.8/multiprocessing/pool.py", line 114, in worker
task = get()
KeyboardInterrupt
KeyboardInterrupt
File "/usr/lib/python3.8/multiprocessing/queues.py", line 356, in get
res = self._reader.recv_bytes()
KeyboardInterrupt
KeyboardInterrupt
KeyboardInterrupt
KeyboardInterrupt
File "/usr/lib/python3.8/multiprocessing/connection.py", line 216, in recv_bytes
buf = self._recv_bytes(maxlength)
File "/usr/lib/python3.8/multiprocessing/connection.py", line 414, in _recv_bytes
buf = self._recv(4)
File "/usr/lib/python3.8/multiprocessing/connection.py", line 379, in _recv
chunk = read(handle, remaining)
KeyboardInterrupt
Traceback (most recent call last):
File "/usr/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
self.run()
File "/usr/lib/python3.8/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/usr/lib/python3.8/multiprocessing/pool.py", line 114, in worker
task = get()
File "/usr/lib/python3.8/multiprocessing/queues.py", line 355, in get
with self._rlock:
File "/usr/lib/python3.8/multiprocessing/synchronize.py", line 95, in __enter__
return self._semlock.__enter__()
KeyboardInterrupt
Traceback (most recent call last):
File "/usr/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
self.run()
File "/usr/lib/python3.8/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/usr/lib/python3.8/multiprocessing/pool.py", line 114, in worker
task = get()
File "/usr/lib/python3.8/multiprocessing/queues.py", line 355, in get
with self._rlock:
File "/usr/lib/python3.8/multiprocessing/synchronize.py", line 95, in __enter__
return self._semlock.__enter__()
KeyboardInterrupt
Process ForkPoolWorker-4:
Traceback (most recent call last):
File "/usr/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
self.run()
File "/usr/lib/python3.8/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/usr/lib/python3.8/multiprocessing/pool.py", line 114, in worker
task = get()
File "/usr/lib/python3.8/multiprocessing/queues.py", line 355, in get
with self._rlock:
File "/usr/lib/python3.8/multiprocessing/synchronize.py", line 95, in __enter__
return self._semlock.__enter__()
KeyboardInterrupt
Process ForkPoolWorker-6:
Traceback (most recent call last):
File "/usr/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
self.run()
File "/usr/lib/python3.8/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/usr/lib/python3.8/multiprocessing/pool.py", line 114, in worker
task = get()
File "/usr/lib/python3.8/multiprocessing/queues.py", line 355, in get
with self._rlock:
File "/usr/lib/python3.8/multiprocessing/synchronize.py", line 95, in __enter__
return self._semlock.__enter__()
KeyboardInterrupt
Process ForkPoolWorker-3:
Traceback (most recent call last):
File "/usr/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
self.run()
File "/usr/lib/python3.8/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/usr/lib/python3.8/multiprocessing/pool.py", line 114, in worker
task = get()
File "/usr/lib/python3.8/multiprocessing/queues.py", line 355, in get
with self._rlock:
File "/usr/lib/python3.8/multiprocessing/synchronize.py", line 95, in __enter__
return self._semlock.__enter__()
KeyboardInterrupt
root@9f31365fd26b:/code#
@wasertech (Author)

This test should end like this:

Index -1 ends at ${ent}
Execution took ${ext} seconds to return exit code ${exit_code}.
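
For context, here is a minimal sketch of how such a footer could be produced, assuming the script mirrors the st=$(date +%s) / "Index 0 starts at ..." pattern visible in the trace above. The names ent, ext and exit_code come from the expected output; everything else is illustrative, not the actual run-ci-mailabs_time.sh:

# Minimal sketch (illustrative only): time the training run and print the
# footer the test is expected to end with.
st=$(date +%s)                       # start timestamp, as in the trace above
echo "Index 0 starts at ${st}."

python -u train.py "$@"              # stand-in for the real training invocation
exit_code=$?

ent=$(date +%s)                      # end timestamp
ext=$((ent - st))                    # elapsed seconds
echo "Index -1 ends at ${ent}"
echo "Execution took ${ext} seconds to return exit code ${exit_code}."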

@wasertech (Author) commented Apr 22, 2022

I started it at Unix time 1650641552. I'll give it an hour to see whether it returns by itself; otherwise I'll press Ctrl+C and update the logs here.
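
For reference, an unattended variant of this check (hypothetical, not part of the gist) could use coreutils timeout instead of waiting at the keyboard:

# Hypothetical unattended check: send SIGINT (like Ctrl+C) after one hour and
# report how the run exited. timeout returns 124 when it had to kill the command,
# i.e. the test hung instead of finishing on its own.
timeout -s INT 3600 ./bin/run-ci-mailabs_time.sh \
    /mnt/extracted/data/M-AILABS/ fr_FR /mnt/models/alphabet.txt /mnt/lm/kenlm.scorer
echo "Exit code: $? (124 means the hour elapsed and the run had to be interrupted)"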

@wasertech (Author)

An hour later... I had to force-close the process. (Logs have been updated.)

@wasertech (Author)

For a working example of this script, try my fix-exit-train branch, where it completes without the memory test:
https://gist.github.com/wasertech/5ea825277026613db8a28c415a7a49e0
