@abueg
Created June 28, 2022 13:34
deepconsensus run_all_tests.sh output
Running tests under Python 3.8.8: /lustre/fs5/vgl/scratch/labueg//venvs/deepconsensus_venv_1/bin/python3
[ RUN ] QuickInferenceTest.test_end_to_end0 (subreads='human_1m/subreads_to_ccs.bam', fasta='human_1m/ccs.fasta', expected_lengths=[17141, 16320])
2022-06-27 17:34:44.970818: I tensorflow/core/platform/cpu_feature_guard.cc:151] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 AVX512F FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2022-06-27 17:34:45.010789: I tensorflow/core/common_runtime/process_util.cc:146] Creating new thread pool with default inter op setting: 2. Tune using inter_op_parallelism_threads for best performance.
[E::idx_find_and_load] Could not retrieve index file for 'deepconsensus/testdata/human_1m/subreads_to_ccs.bam'
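The htslib message above only means there is no .bai index next to subreads_to_ccs.bam; the tests still pass, so sequential iteration is unaffected. If random access were actually needed and the BAM were coordinate-sorted, a hypothetical sketch with pysam would be:
    import pysam
    # Hypothetical: only valid for a coordinate-sorted BAM; creates the .bai file
    # that [E::idx_find_and_load] reports as missing.
    pysam.index("deepconsensus/testdata/human_1m/subreads_to_ccs.bam")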
/lustre/fs5/vgl/scratch/labueg/deepconsensus/deepconsensus/preprocess/utils.py:134: DeprecationWarning: `np.int` is a deprecated alias for the builtin `int`. To silence this warning, use `int` by itself. Doing this will not modify any behavior and is safe. When replacing `np.int`, you may wish to use e.g. `np.int64` or `np.int32` to specify the precision. If you wish to review your current use, check the release note link for additional information.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
self.seq_indices = np.zeros(len(self.bases), dtype=np.int)
/lustre/fs5/vgl/scratch/labueg/deepconsensus/deepconsensus/preprocess/utils.py:134: DeprecationWarning: `np.int` is a deprecated alias for the builtin `int`. To silence this warning, use `int` by itself. Doing this will not modify any behavior and is safe. When replacing `np.int`, you may wish to use e.g. `np.int64` or `np.int32` to specify the precision. If you wish to review your current use, check the release note link for additional information.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
self.seq_indices = np.zeros(len(self.bases), dtype=np.int)
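The NumPy deprecation above points at preprocess/utils.py:134; a minimal sketch of the fix the warning itself suggests, using a hypothetical stand-in for self.bases:
    import numpy as np

    bases = "ACGTACGT"  # hypothetical stand-in for self.bases

    # np.int was only an alias for the builtin int (removed in NumPy 1.24);
    # using int or an explicit width silences the DeprecationWarning.
    seq_indices = np.zeros(len(bases), dtype=np.int64)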
[ OK ] QuickInferenceTest.test_end_to_end0 (subreads='human_1m/subreads_to_ccs.bam', fasta='human_1m/ccs.fasta', expected_lengths=[17141, 16320])
[ RUN ] QuickInferenceTest.test_end_to_end_multiprocessing0 (cpus=0, batch_zmws=1)
[E::idx_find_and_load] Could not retrieve index file for 'deepconsensus/testdata/human_1m/subreads_to_ccs.bam'
/lustre/fs5/vgl/scratch/labueg/deepconsensus/deepconsensus/preprocess/utils.py:134: DeprecationWarning: `np.int` is a deprecated alias for the builtin `int`. To silence this warning, use `int` by itself. Doing this will not modify any behavior and is safe. When replacing `np.int`, you may wish to use e.g. `np.int64` or `np.int32` to specify the precision. If you wish to review your current use, check the release note link for additional information.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
self.seq_indices = np.zeros(len(self.bases), dtype=np.int)
[ OK ] QuickInferenceTest.test_end_to_end_multiprocessing0 (cpus=0, batch_zmws=1)
[ RUN ] QuickInferenceTest.test_end_to_end_multiprocessing1 (cpus=0, batch_zmws=0)
[E::idx_find_and_load] Could not retrieve index file for 'deepconsensus/testdata/human_1m/subreads_to_ccs.bam'
/lustre/fs5/vgl/scratch/labueg/deepconsensus/deepconsensus/preprocess/utils.py:134: DeprecationWarning: `np.int` is a deprecated alias for the builtin `int`. To silence this warning, use `int` by itself. Doing this will not modify any behavior and is safe. When replacing `np.int`, you may wish to use e.g. `np.int64` or `np.int32` to specify the precision. If you wish to review your current use, check the release note link for additional information.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
self.seq_indices = np.zeros(len(self.bases), dtype=np.int)
[ OK ] QuickInferenceTest.test_end_to_end_multiprocessing1 (cpus=0, batch_zmws=0)
[ RUN ] QuickInferenceTest.test_end_to_end_multiprocessing2 (cpus=1, batch_zmws=1)
[E::idx_find_and_load] Could not retrieve index file for 'deepconsensus/testdata/human_1m/subreads_to_ccs.bam'
expected lengths: [17141, 16320] output lengths: [70, 196]
/lustre/fs5/vgl/scratch/labueg/deepconsensus/deepconsensus/preprocess/utils.py:134: DeprecationWarning: `np.int` is a deprecated alias for the builtin `int`. To silence this warning, use `int` by itself. Doing this will not modify any behavior and is safe. When replacing `np.int`, you may wish to use e.g. `np.int64` or `np.int32` to specify the precision. If you wish to review your current use, check the release note link for additional information.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
self.seq_indices = np.zeros(len(self.bases), dtype=np.int)
/lustre/fs5/vgl/scratch/labueg/deepconsensus/deepconsensus/preprocess/utils.py:134: DeprecationWarning: `np.int` is a deprecated alias for the builtin `int`. To silence this warning, use `int` by itself. Doing this will not modify any behavior and is safe. When replacing `np.int`, you may wish to use e.g. `np.int64` or `np.int32` to specify the precision. If you wish to review your current use, check the release note link for additional information.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
self.seq_indices = np.zeros(len(self.bases), dtype=np.int)
[ OK ] QuickInferenceTest.test_end_to_end_multiprocessing2 (cpus=1, batch_zmws=1)
[ RUN ] QuickInferenceTest.test_end_to_end_multiprocessing3 (cpus=1, batch_zmws=100)
[E::idx_find_and_load] Could not retrieve index file for 'deepconsensus/testdata/human_1m/subreads_to_ccs.bam'
/lustre/fs5/vgl/scratch/labueg/deepconsensus/deepconsensus/preprocess/utils.py:134: DeprecationWarning: `np.int` is a deprecated alias for the builtin `int`. To silence this warning, use `int` by itself. Doing this will not modify any behavior and is safe. When replacing `np.int`, you may wish to use e.g. `np.int64` or `np.int32` to specify the precision. If you wish to review your current use, check the release note link for additional information.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
self.seq_indices = np.zeros(len(self.bases), dtype=np.int)
[ OK ] QuickInferenceTest.test_end_to_end_multiprocessing3 (cpus=1, batch_zmws=100)
----------------------------------------------------------------------
Ran 5 tests in 18.011s
OK
Running tests under Python 3.8.8: /lustre/fs5/vgl/scratch/labueg//venvs/deepconsensus_venv_1/bin/python3
[ RUN ] DataProvidersTest.test_dataset_with_limit_option_limit number of examples inference
2022-06-27 17:35:08.487025: I tensorflow/core/platform/cpu_feature_guard.cc:151] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 AVX512F FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2022-06-27 17:35:08.490436: I tensorflow/core/common_runtime/process_util.cc:146] Creating new thread pool with default inter op setting: 2. Tune using inter_op_parallelism_threads for best performance.
[ OK ] DataProvidersTest.test_dataset_with_limit_option_limit number of examples inference
[ RUN ] DataProvidersTest.test_dataset_with_limit_option_limit number of examples train
[ OK ] DataProvidersTest.test_dataset_with_limit_option_limit number of examples train
[ RUN ] DataProvidersTest.test_dataset_with_limit_option_limit set to size greater than dataset inference
[ OK ] DataProvidersTest.test_dataset_with_limit_option_limit set to size greater than dataset inference
[ RUN ] DataProvidersTest.test_dataset_with_limit_option_limit set to size greater than dataset train
[ OK ] DataProvidersTest.test_dataset_with_limit_option_limit set to size greater than dataset train
[ RUN ] DataProvidersTest.test_get_dataset_batch size does not evenly divide # examples inference
[ OK ] DataProvidersTest.test_get_dataset_batch size does not evenly divide # examples inference
[ RUN ] DataProvidersTest.test_get_dataset_batch size does not evenly divide # examples train
[ OK ] DataProvidersTest.test_get_dataset_batch size does not evenly divide # examples train
[ RUN ] DataProvidersTest.test_get_dataset_batch size evenly divides # examples inference
[ OK ] DataProvidersTest.test_get_dataset_batch size evenly divides # examples inference
[ RUN ] DataProvidersTest.test_get_dataset_batch size evenly divides # examples train
[ OK ] DataProvidersTest.test_get_dataset_batch size evenly divides # examples train
[ RUN ] DataProvidersTest.test_get_dataset_multiple epochs inference
[ OK ] DataProvidersTest.test_get_dataset_multiple epochs inference
[ RUN ] DataProvidersTest.test_get_dataset_multiple epochs train
[ OK ] DataProvidersTest.test_get_dataset_multiple epochs train
[ RUN ] DataProvidersTest.test_get_dataset_with_metadata_batch size does not evenly divide # examples inference
[ OK ] DataProvidersTest.test_get_dataset_with_metadata_batch size does not evenly divide # examples inference
[ RUN ] DataProvidersTest.test_get_dataset_with_metadata_batch size does not evenly divide # examples train
[ OK ] DataProvidersTest.test_get_dataset_with_metadata_batch size does not evenly divide # examples train
[ RUN ] DataProvidersTest.test_get_dataset_with_metadata_batch size evenly divides # examples inference
[ OK ] DataProvidersTest.test_get_dataset_with_metadata_batch size evenly divides # examples inference
[ RUN ] DataProvidersTest.test_get_dataset_with_metadata_batch size evenly divides # examples train
[ OK ] DataProvidersTest.test_get_dataset_with_metadata_batch size evenly divides # examples train
[ RUN ] DataProvidersTest.test_get_dataset_with_metadata_multiple epochs inference
[ OK ] DataProvidersTest.test_get_dataset_with_metadata_multiple epochs inference
[ RUN ] DataProvidersTest.test_get_dataset_with_metadata_multiple epochs train
[ OK ] DataProvidersTest.test_get_dataset_with_metadata_multiple epochs train
[ RUN ] DataProvidersTest.test_get_dataset_with_pw_ip_batch size evenly divides # examples inference
[ OK ] DataProvidersTest.test_get_dataset_with_pw_ip_batch size evenly divides # examples inference
[ RUN ] DataProvidersTest.test_get_dataset_with_pw_ip_batch size evenly divides # examples train
[ OK ] DataProvidersTest.test_get_dataset_with_pw_ip_batch size evenly divides # examples train
[ RUN ] DataProvidersTest.test_remove_internal_gaps_and_shift
[ OK ] DataProvidersTest.test_remove_internal_gaps_and_shift
----------------------------------------------------------------------
Ran 19 tests in 19.167s
OK
Running tests under Python 3.8.8: /lustre/fs5/vgl/scratch/labueg//venvs/deepconsensus_venv_1/bin/python3
[ RUN ] AlignmentLossTest.test_alignment_loss_Hard, correct insertions only, no pad
2022-06-27 17:35:32.320917: I tensorflow/core/platform/cpu_feature_guard.cc:151] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 AVX512F FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2022-06-27 17:35:32.324595: I tensorflow/core/common_runtime/process_util.cc:146] Creating new thread pool with default inter op setting: 2. Tune using inter_op_parallelism_threads for best performance.
[ OK ] AlignmentLossTest.test_alignment_loss_Hard, correct insertions only, no pad
[ RUN ] AlignmentLossTest.test_alignment_loss_Hard, correct insertions only, with pad
[ OK ] AlignmentLossTest.test_alignment_loss_Hard, correct insertions only, with pad
[ RUN ] AlignmentLossTest.test_alignment_loss_Hard, identical sequences, no pad
[ OK ] AlignmentLossTest.test_alignment_loss_Hard, identical sequences, no pad
[ RUN ] AlignmentLossTest.test_alignment_loss_Hard, identical sequences, with different pad
[ OK ] AlignmentLossTest.test_alignment_loss_Hard, identical sequences, with different pad
[ RUN ] AlignmentLossTest.test_alignment_loss_Hard, identical sequences, with same pad
[ OK ] AlignmentLossTest.test_alignment_loss_Hard, identical sequences, with same pad
[ RUN ] AlignmentLossTest.test_alignment_loss_Hard, one deletion at cost one, with pad
[ OK ] AlignmentLossTest.test_alignment_loss_Hard, one deletion at cost one, with pad
[ RUN ] AlignmentLossTest.test_alignment_loss_Hard, one deletion at cost two, with pad
[ OK ] AlignmentLossTest.test_alignment_loss_Hard, one deletion at cost two, with pad
[ RUN ] AlignmentLossTest.test_alignment_loss_Hard, one deletion, large deletion cost, with pad
[ OK ] AlignmentLossTest.test_alignment_loss_Hard, one deletion, large deletion cost, with pad
[ RUN ] AlignmentLossTest.test_alignment_loss_Hard, one deletion, small deletion cost, with pad
[ OK ] AlignmentLossTest.test_alignment_loss_Hard, one deletion, small deletion cost, with pad
[ RUN ] AlignmentLossTest.test_alignment_loss_Hard, one erroneous insertion, no pad
WARNING:tensorflow:5 out of the last 10 calls to <function left_shift_sequence at 0x7f3a0051ae50> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details.
W0627 17:35:33.004411 139890282825536 def_function.py:150] 5 out of the last 10 calls to <function left_shift_sequence at 0x7f3a0051ae50> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details.
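The retracing warning above fires because left_shift_sequence is traced again for each new input shape. A minimal sketch of pinning an input_signature so one trace is reused (the shapes and the identity body here are assumptions, not the real function):
    import tensorflow as tf

    @tf.function(input_signature=[tf.TensorSpec(shape=[None, None], dtype=tf.int32)])
    def shift_like(batch):
        # hypothetical stand-in for left_shift_sequence; identity transform only
        return batch

    shift_like(tf.zeros([2, 4], dtype=tf.int32))
    shift_like(tf.zeros([3, 7], dtype=tf.int32))  # same trace is reused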
[ OK ] AlignmentLossTest.test_alignment_loss_Hard, one erroneous insertion, no pad
[ RUN ] AlignmentLossTest.test_alignment_loss_Hard, one error, no pad
[ OK ] AlignmentLossTest.test_alignment_loss_Hard, one error, no pad
[ RUN ] AlignmentLossTest.test_alignment_loss_Hard, two deletions at cost one, with pad
[ OK ] AlignmentLossTest.test_alignment_loss_Hard, two deletions at cost one, with pad
[ RUN ] AlignmentLossTest.test_alignment_loss_Hard, two errors, no pad
[ OK ] AlignmentLossTest.test_alignment_loss_Hard, two errors, no pad
[ RUN ] AlignmentLossTest.test_alignment_loss_with band of 1,one del, one align, two pads, one del
[ OK ] AlignmentLossTest.test_alignment_loss_with band of 1,one del, one align, two pads, one del
[ RUN ] AlignmentLossTest.test_alignment_loss_with band of 2, two dels, one align, two pads
[ OK ] AlignmentLossTest.test_alignment_loss_with band of 2, two dels, one align, two pads
[ RUN ] AlignmentLossTest.test_alignment_loss_with band, correct insertions only, no pad
WARNING:tensorflow:5 out of the last 13 calls to <function left_shift_sequence at 0x7f3a0051ae50> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details.
W0627 17:35:33.309349 139890282825536 def_function.py:150] 5 out of the last 13 calls to <function left_shift_sequence at 0x7f3a0051ae50> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details.
[ OK ] AlignmentLossTest.test_alignment_loss_with band, correct insertions only, no pad
[ RUN ] AlignmentLossTest.test_alignment_loss_with band, correct insertions only, with pad
[ OK ] AlignmentLossTest.test_alignment_loss_with band, correct insertions only, with pad
[ RUN ] AlignmentLossTest.test_alignment_loss_with band, identical sequences
[ OK ] AlignmentLossTest.test_alignment_loss_with band, identical sequences
[ RUN ] AlignmentLossTest.test_alignment_loss_with band, identical sequences, with same pad
[ OK ] AlignmentLossTest.test_alignment_loss_with band, identical sequences, with same pad
[ RUN ] AlignmentLossTest.test_alignment_loss_with band, one deletion at cost one, with pad
[ OK ] AlignmentLossTest.test_alignment_loss_with band, one deletion at cost one, with pad
[ RUN ] AlignmentLossTest.test_alignment_loss_with band, two errors, no pad
[ OK ] AlignmentLossTest.test_alignment_loss_with band, two errors, no pad
[ RUN ] LeftShiftTrueLabels.test_left_shift_sequence_Convert internal gaps
[ OK ] LeftShiftTrueLabels.test_left_shift_sequence_Convert internal gaps
[ RUN ] LeftShiftTrueLabels.test_left_shift_sequence_Do not convert internal gaps
[ OK ] LeftShiftTrueLabels.test_left_shift_sequence_Do not convert internal gaps
[ RUN ] PerClassAccuracyTest.test_accuracy_all correct
[ OK ] PerClassAccuracyTest.test_accuracy_all correct
[ RUN ] PerClassAccuracyTest.test_accuracy_all positions correct for given class value
[ OK ] PerClassAccuracyTest.test_accuracy_all positions correct for given class value
[ RUN ] PerClassAccuracyTest.test_accuracy_given class value not present
[ OK ] PerClassAccuracyTest.test_accuracy_given class value not present
[ RUN ] PerClassAccuracyTest.test_accuracy_some positions incorrect for given class value
[ OK ] PerClassAccuracyTest.test_accuracy_some positions incorrect for given class value
[ RUN ] PerExampleAccuracyTest.test_accuracy_Left shift testing
[ OK ] PerExampleAccuracyTest.test_accuracy_Left shift testing
[ RUN ] PerExampleAccuracyTest.test_accuracy_all padding
[ OK ] PerExampleAccuracyTest.test_accuracy_all padding
[ RUN ] PerExampleAccuracyTest.test_accuracy_multiple_updates
[ OK ] PerExampleAccuracyTest.test_accuracy_multiple_updates
[ RUN ] XentropyInsCostFn.test_xentropy_subs_cost_fn_Base case
[ OK ] XentropyInsCostFn.test_xentropy_subs_cost_fn_Base case
[ RUN ] XentropySubsCostFn.test_xentropy_subs_cost_fn_Equal lengths
[ OK ] XentropySubsCostFn.test_xentropy_subs_cost_fn_Equal lengths
[ RUN ] XentropySubsCostFn.test_xentropy_subs_cost_fn_Unequal lengths
[ OK ] XentropySubsCostFn.test_xentropy_subs_cost_fn_Unequal lengths
----------------------------------------------------------------------
Ran 33 tests in 1.661s
OK
Running tests under Python 3.8.8: /lustre/fs5/vgl/scratch/labueg//venvs/deepconsensus_venv_1/bin/python3
[ RUN ] EditDistanceTest.test_edit_distance0 ('ATCG', 'ATCG', 0)
[ OK ] EditDistanceTest.test_edit_distance0 ('ATCG', 'ATCG', 0)
[ RUN ] EditDistanceTest.test_edit_distance1 ('ATCG', 'TT', 3)
[ OK ] EditDistanceTest.test_edit_distance1 ('ATCG', 'TT', 3)
[ RUN ] EditDistanceTest.test_edit_distance2 ('ATCG', 'ZZZZ', 4)
[ OK ] EditDistanceTest.test_edit_distance2 ('ATCG', 'ZZZZ', 4)
[ RUN ] EditDistanceTest.test_edit_distance3 (' A T C G ', 'ATCG', 0)
[ OK ] EditDistanceTest.test_edit_distance3 (' A T C G ', 'ATCG', 0)
[ RUN ] RepeatContentTest.test_repeat_content0 (' ', 0.0)
[ OK ] RepeatContentTest.test_repeat_content0 (' ', 0.0)
[ RUN ] RepeatContentTest.test_repeat_content1 ('ABCD', 0.0)
[ OK ] RepeatContentTest.test_repeat_content1 ('ABCD', 0.0)
[ RUN ] RepeatContentTest.test_repeat_content2 ('AAABBBCD', 0.75)
[ OK ] RepeatContentTest.test_repeat_content2 ('AAABBBCD', 0.75)
[ RUN ] RepeatContentTest.test_repeat_content3 ('AAABBBCCCDDD', 1.0)
[ OK ] RepeatContentTest.test_repeat_content3 ('AAABBBCCCDDD', 1.0)
[ RUN ] RepeatContentTest.test_repeat_content4 ('AAA BBB CCC DDD ', 1.0)
[ OK ] RepeatContentTest.test_repeat_content4 ('AAA BBB CCC DDD ', 1.0)
----------------------------------------------------------------------
Ran 9 tests in 0.001s
OK
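The EditDistanceTest cases above are consistent with a whitespace-stripping Levenshtein distance; a sketch under that assumption (not necessarily the test helper's exact implementation):
    # Reproduces ('ATCG', 'TT', 3), ('ATCG', 'ZZZZ', 4) and, with whitespace
    # stripped first, (' A T C G ', 'ATCG', 0).
    def edit_distance(a: str, b: str) -> int:
        a, b = a.replace(" ", ""), b.replace(" ", "")
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            cur = [i]
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
            prev = cur
        return prev[-1]

    assert edit_distance("ATCG", "TT") == 3
    assert edit_distance("ATCG", "ZZZZ") == 4
    assert edit_distance(" A T C G ", "ATCG") == 0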
Running tests under Python 3.8.8: /lustre/fs5/vgl/scratch/labueg//venvs/deepconsensus_venv_1/bin/python3
[ RUN ] ModelsTest.test_outputs0 (True, 'fc+test', True)
2022-06-27 17:35:43.521341: I tensorflow/core/platform/cpu_feature_guard.cc:151] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 AVX512F FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2022-06-27 17:35:43.524699: I tensorflow/core/common_runtime/process_util.cc:146] Creating new thread pool with default inter op setting: 2. Tune using inter_op_parallelism_threads for best performance.
[ OK ] ModelsTest.test_outputs0 (True, 'fc+test', True)
[ RUN ] ModelsTest.test_outputs1 (True, 'fc+test', False)
[ OK ] ModelsTest.test_outputs1 (True, 'fc+test', False)
[ RUN ] ModelsTest.test_outputs10 (False, 'conv_net-resnet50+test', True)
[ OK ] ModelsTest.test_outputs10 (False, 'conv_net-resnet50+test', True)
[ RUN ] ModelsTest.test_outputs11 (False, 'conv_net-resnet50+test', False)
[ OK ] ModelsTest.test_outputs11 (False, 'conv_net-resnet50+test', False)
[ RUN ] ModelsTest.test_outputs12 (False, 'transformer+test', True)
WARNING:tensorflow:From /lustre/fs5/vgl/scratch/labueg/venvs/deepconsensus_venv_1/lib/python3.8/site-packages/official/nlp/transformer/attention_layer.py:54: DenseEinsum.__init__ (from official.nlp.modeling.layers.dense_einsum) is deprecated and will be removed in a future version.
Instructions for updating:
DenseEinsum is deprecated. Please use tf.keras.experimental.EinsumDense layer instead.
W0627 17:35:48.525807 140681805023040 deprecation.py:341] From /lustre/fs5/vgl/scratch/labueg/venvs/deepconsensus_venv_1/lib/python3.8/site-packages/official/nlp/transformer/attention_layer.py:54: DenseEinsum.__init__ (from official.nlp.modeling.layers.dense_einsum) is deprecated and will be removed in a future version.
Instructions for updating:
DenseEinsum is deprecated. Please use tf.keras.experimental.EinsumDense layer instead.
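The DenseEinsum deprecation above comes from the bundled tf-models code; in this TensorFlow generation the suggested replacement lives at tf.keras.layers.experimental.EinsumDense. A minimal sketch of an equivalent projection (the 64-to-128 sizes are illustrative only):
    import tensorflow as tf

    # "abc,cd->abd": batched projection of the last axis from 64 to 128 features.
    layer = tf.keras.layers.experimental.EinsumDense(
        "abc,cd->abd", output_shape=(None, 128), bias_axes="d")
    print(layer(tf.zeros([2, 10, 64])).shape)  # (2, 10, 128)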
[ OK ] ModelsTest.test_outputs12 (False, 'transformer+test', True)
[ RUN ] ModelsTest.test_outputs13 (False, 'transformer+test', False)
[ OK ] ModelsTest.test_outputs13 (False, 'transformer+test', False)
[ RUN ] ModelsTest.test_outputs14 (False, 'transformer_learn_values+test', True)
[ OK ] ModelsTest.test_outputs14 (False, 'transformer_learn_values+test', True)
[ RUN ] ModelsTest.test_outputs15 (False, 'transformer_learn_values+test', False)
[ OK ] ModelsTest.test_outputs15 (False, 'transformer_learn_values+test', False)
[ RUN ] ModelsTest.test_outputs2 (True, 'conv_net-resnet50+test', True)
[ OK ] ModelsTest.test_outputs2 (True, 'conv_net-resnet50+test', True)
[ RUN ] ModelsTest.test_outputs3 (True, 'conv_net-resnet50+test', False)
[ OK ] ModelsTest.test_outputs3 (True, 'conv_net-resnet50+test', False)
[ RUN ] ModelsTest.test_outputs4 (True, 'transformer+test', True)
[ OK ] ModelsTest.test_outputs4 (True, 'transformer+test', True)
[ RUN ] ModelsTest.test_outputs5 (True, 'transformer+test', False)
[ OK ] ModelsTest.test_outputs5 (True, 'transformer+test', False)
[ RUN ] ModelsTest.test_outputs6 (True, 'transformer_learn_values+test', True)
[ OK ] ModelsTest.test_outputs6 (True, 'transformer_learn_values+test', True)
[ RUN ] ModelsTest.test_outputs7 (True, 'transformer_learn_values+test', False)
[ OK ] ModelsTest.test_outputs7 (True, 'transformer_learn_values+test', False)
[ RUN ] ModelsTest.test_outputs8 (False, 'fc+test', True)
[ OK ] ModelsTest.test_outputs8 (False, 'fc+test', True)
[ RUN ] ModelsTest.test_outputs9 (False, 'fc+test', False)
[ OK ] ModelsTest.test_outputs9 (False, 'fc+test', False)
[ RUN ] ModelsTest.test_predict_and_model_fn_equal0 ('fc+test', True)
WARNING:tensorflow:5 out of the last 5 calls to <function Model.make_predict_function.<locals>.predict_function at 0x7ff2403f9b80> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details.
W0627 17:35:54.120294 140681805023040 def_function.py:150] 5 out of the last 5 calls to <function Model.make_predict_function.<locals>.predict_function at 0x7ff2403f9b80> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details.
[ OK ] ModelsTest.test_predict_and_model_fn_equal0 ('fc+test', True)
[ RUN ] ModelsTest.test_predict_and_model_fn_equal1 ('fc+test', False)
WARNING:tensorflow:6 out of the last 6 calls to <function Model.make_predict_function.<locals>.predict_function at 0x7ff240258430> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details.
W0627 17:35:54.334069 140681805023040 def_function.py:150] 6 out of the last 6 calls to <function Model.make_predict_function.<locals>.predict_function at 0x7ff240258430> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details.
[ OK ] ModelsTest.test_predict_and_model_fn_equal1 ('fc+test', False)
[ RUN ] ModelsTest.test_predict_and_model_fn_equal2 ('conv_net-resnet50+test', True)
[ OK ] ModelsTest.test_predict_and_model_fn_equal2 ('conv_net-resnet50+test', True)
[ RUN ] ModelsTest.test_predict_and_model_fn_equal3 ('conv_net-resnet50+test', False)
[ OK ] ModelsTest.test_predict_and_model_fn_equal3 ('conv_net-resnet50+test', False)
[ RUN ] ModelsTest.test_predict_and_model_fn_equal4 ('transformer+test', True)
[ OK ] ModelsTest.test_predict_and_model_fn_equal4 ('transformer+test', True)
[ RUN ] ModelsTest.test_predict_and_model_fn_equal5 ('transformer+test', False)
[ OK ] ModelsTest.test_predict_and_model_fn_equal5 ('transformer+test', False)
[ RUN ] ModelsTest.test_predict_and_model_fn_equal6 ('transformer_learn_values+test', True)
[ OK ] ModelsTest.test_predict_and_model_fn_equal6 ('transformer_learn_values+test', True)
[ RUN ] ModelsTest.test_predict_and_model_fn_equal7 ('transformer_learn_values+test', False)
[ OK ] ModelsTest.test_predict_and_model_fn_equal7 ('transformer_learn_values+test', False)
----------------------------------------------------------------------
Ran 24 tests in 16.551s
OK
Running tests under Python 3.8.8: /lustre/fs5/vgl/scratch/labueg//venvs/deepconsensus_venv_1/bin/python3
[ RUN ] ConvertToFastqStrDoFnTest.test_convert_to_fastq_str
[ OK ] ConvertToFastqStrDoFnTest.test_convert_to_fastq_str
[ RUN ] GetFullSequenceTest.test_get_full_sequences
[ OK ] GetFullSequenceTest.test_get_full_sequences
[ RUN ] GetFullSequenceTest.test_get_partial_sequences
[ OK ] GetFullSequenceTest.test_get_partial_sequences
[ RUN ] IsQualityAboveThresholdTest.test_is_quality_above_threshold0 (min_quality=20, read_qualities=(19, 19, 19, 19), should_pass=False)
[ OK ] IsQualityAboveThresholdTest.test_is_quality_above_threshold0 (min_quality=20, read_qualities=(19, 19, 19, 19), should_pass=False)
[ RUN ] IsQualityAboveThresholdTest.test_is_quality_above_threshold1 (min_quality=20, read_qualities=(20, 20, 20, 20), should_pass=True)
[ OK ] IsQualityAboveThresholdTest.test_is_quality_above_threshold1 (min_quality=20, read_qualities=(20, 20, 20, 20), should_pass=True)
[ RUN ] IsQualityAboveThresholdTest.test_is_quality_above_threshold2 (min_quality=40, read_qualities=(40, 40, 40, 40), should_pass=True)
[ OK ] IsQualityAboveThresholdTest.test_is_quality_above_threshold2 (min_quality=40, read_qualities=(40, 40, 40, 40), should_pass=True)
[ RUN ] IsQualityAboveThresholdTest.test_is_quality_above_threshold3 (min_quality=40, read_qualities=(39, 39, 41, 41), should_pass=False)
[ OK ] IsQualityAboveThresholdTest.test_is_quality_above_threshold3 (min_quality=40, read_qualities=(39, 39, 41, 41), should_pass=False)
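The four parametrized cases above are consistent with comparing the lowest read quality against min_quality; a sketch of that simple rule (an assumption, not necessarily the postprocess module's actual check):
    def quality_above_threshold(read_qualities, min_quality):
        # passes only if every read quality clears the threshold
        return min(read_qualities) >= min_quality

    assert quality_above_threshold((19, 19, 19, 19), 20) is False
    assert quality_above_threshold((20, 20, 20, 20), 20) is True
    assert quality_above_threshold((40, 40, 40, 40), 40) is True
    assert quality_above_threshold((39, 39, 41, 41), 40) is False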
[ RUN ] RemoveGapsAndPaddingTest.test_remove_gaps_and_padding_all gaps/padding
[ OK ] RemoveGapsAndPaddingTest.test_remove_gaps_and_padding_all gaps/padding
[ RUN ] RemoveGapsAndPaddingTest.test_remove_gaps_and_padding_no gaps/padding
[ OK ] RemoveGapsAndPaddingTest.test_remove_gaps_and_padding_no gaps/padding
[ RUN ] RemoveGapsAndPaddingTest.test_remove_gaps_and_padding_some gaps/padding
[ OK ] RemoveGapsAndPaddingTest.test_remove_gaps_and_padding_some gaps/padding
----------------------------------------------------------------------
Ran 10 tests in 0.002s
OK
Running tests under Python 3.8.8: /lustre/fs5/vgl/scratch/labueg//venvs/deepconsensus_venv_1/bin/python3
[ RUN ] PreprocessE2E.test_e2e_inference0 (0)
I0627 17:36:09.222267 140445132179264 preprocess.py:214] Generating tf.Examples in inference mode.
[E::idx_find_and_load] Could not retrieve index file for 'deepconsensus/testdata/human_1m/subreads_to_ccs.bam'
I0627 17:36:09.247650 140445132179264 preprocess.py:233] Using a single cpu.
/lustre/fs5/vgl/scratch/labueg/deepconsensus/deepconsensus/preprocess/utils.py:134: DeprecationWarning: `np.int` is a deprecated alias for the builtin `int`. To silence this warning, use `int` by itself. Doing this will not modify any behavior and is safe. When replacing `np.int`, you may wish to use e.g. `np.int64` or `np.int32` to specify the precision. If you wish to review your current use, check the release note link for additional information.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
self.seq_indices = np.zeros(len(self.bases), dtype=np.int)
I0627 17:36:10.016362 140445132179264 preprocess.py:267] Completed processing 3 ZMWs.
I0627 17:36:10.016550 140445132179264 preprocess.py:273] Writing /tmp/absl_testing/PreprocessE2E/test_e2e_inference0/tmpdy_8nx6v/tf-summary.inference.json.
2022-06-27 17:36:10.027583: I tensorflow/core/platform/cpu_feature_guard.cc:151] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 AVX512F FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2022-06-27 17:36:10.029547: I tensorflow/core/common_runtime/process_util.cc:146] Creating new thread pool with default inter op setting: 2. Tune using inter_op_parallelism_threads for best performance.
/lustre/fs5/vgl/scratch/labueg/deepconsensus/deepconsensus/preprocess/preprocess_test.py:51: ResourceWarning: unclosed file <_io.TextIOWrapper name='/tmp/absl_testing/PreprocessE2E/test_e2e_inference0/tmpdy_8nx6v/tf-summary.inference.json' mode='r' encoding='UTF-8'>
return json.load(open(summary_path, 'r'))
ResourceWarning: Enable tracemalloc to get the object allocation traceback
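The ResourceWarning above comes from preprocess_test.py line 51, which opens the summary JSON without closing it; the standard fix is a context manager:
    import json

    def load_summary(summary_path):
        # closing the handle explicitly silences the unclosed-file ResourceWarning
        with open(summary_path, "r") as f:
            return json.load(f)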
[ OK ] PreprocessE2E.test_e2e_inference0 (0)
[ RUN ] PreprocessE2E.test_e2e_inference1 (2)
I0627 17:36:10.326184 140445132179264 preprocess.py:214] Generating tf.Examples in inference mode.
[E::idx_find_and_load] Could not retrieve index file for 'deepconsensus/testdata/human_1m/subreads_to_ccs.bam'
I0627 17:36:10.342225 140445132179264 preprocess.py:244] Processing in parallel using 2 cores
I0627 17:36:10.926568 140445132179264 preprocess.py:189] Processed 3 ZMWs.
I0627 17:36:11.427729 140445132179264 preprocess.py:189] Processed 3 ZMWs.
I0627 17:36:11.929452 140445132179264 preprocess.py:189] Processed 3 ZMWs.
I0627 17:36:11.947690 140445132179264 preprocess.py:267] Completed processing 3 ZMWs.
I0627 17:36:11.947807 140445132179264 preprocess.py:273] Writing /tmp/absl_testing/PreprocessE2E/test_e2e_inference1/tmpza1pnjwk/tf-summary.inference.json.
/lustre/fs5/vgl/scratch/labueg/deepconsensus/deepconsensus/preprocess/preprocess_test.py:51: ResourceWarning: unclosed file <_io.TextIOWrapper name='/tmp/absl_testing/PreprocessE2E/test_e2e_inference1/tmpza1pnjwk/tf-summary.inference.json' mode='r' encoding='UTF-8'>
return json.load(open(summary_path, 'r'))
ResourceWarning: Enable tracemalloc to get the object allocation traceback
[ OK ] PreprocessE2E.test_e2e_inference1 (2)
[ RUN ] PreprocessE2E.test_e2e_train0 (0)
I0627 17:36:12.209041 140445132179264 preprocess.py:203] Generating tf.Examples in training mode.
[E::idx_find_and_load] Could not retrieve index file for 'deepconsensus/testdata/human_1m/subreads_to_ccs.bam'
I0627 17:36:12.230944 140445132179264 preprocess.py:233] Using a single cpu.
[W::sam_hrecs_update_hashes] Duplicate entry "231b5401" in sam header
I0627 17:36:18.828143 140445132179264 utils.py:941] No truth_range defined for m54238_180901_011437/4194387/ccs.
I0627 17:36:19.775406 140445132179264 preprocess.py:267] Completed processing 9 ZMWs.
I0627 17:36:19.775591 140445132179264 preprocess.py:273] Writing /tmp/absl_testing/PreprocessE2E/test_e2e_train0/tmpq7cajaz_/tf-summary.training.json.
/lustre/fs5/vgl/scratch/labueg/deepconsensus/deepconsensus/preprocess/preprocess_test.py:51: ResourceWarning: unclosed file <_io.TextIOWrapper name='/tmp/absl_testing/PreprocessE2E/test_e2e_train0/tmpq7cajaz_/tf-summary.training.json' mode='r' encoding='UTF-8'>
return json.load(open(summary_path, 'r'))
ResourceWarning: Enable tracemalloc to get the object allocation traceback
[ OK ] PreprocessE2E.test_e2e_train0 (0)
[ RUN ] PreprocessE2E.test_e2e_train1 (2)
I0627 17:36:21.742841 140445132179264 preprocess.py:203] Generating tf.Examples in training mode.
[E::idx_find_and_load] Could not retrieve index file for 'deepconsensus/testdata/human_1m/subreads_to_ccs.bam'
I0627 17:36:21.763510 140445132179264 preprocess.py:244] Processing in parallel using 2 cores
[W::sam_hrecs_update_hashes] Duplicate entry "231b5401" in sam header
I0627 17:36:22.330120 140445132179264 utils.py:941] No truth_range defined for m54238_180901_011437/4194387/ccs.
I0627 17:36:22.905350 140445132179264 preprocess.py:189] Processed 9 ZMWs.
I0627 17:36:23.407818 140445132179264 preprocess.py:189] Processed 9 ZMWs.
I0627 17:36:23.908718 140445132179264 preprocess.py:189] Processed 9 ZMWs.
I0627 17:36:24.410717 140445132179264 preprocess.py:189] Processed 9 ZMWs.
I0627 17:36:24.911575 140445132179264 preprocess.py:189] Processed 9 ZMWs.
I0627 17:36:25.413545 140445132179264 preprocess.py:189] Processed 9 ZMWs.
I0627 17:36:25.914580 140445132179264 preprocess.py:189] Processed 9 ZMWs.
I0627 17:36:26.416632 140445132179264 preprocess.py:189] Processed 9 ZMWs.
I0627 17:36:26.917729 140445132179264 preprocess.py:189] Processed 9 ZMWs.
I0627 17:36:27.419998 140445132179264 preprocess.py:189] Processed 9 ZMWs.
I0627 17:36:27.647327 140445132179264 preprocess.py:267] Completed processing 9 ZMWs.
I0627 17:36:27.647543 140445132179264 preprocess.py:273] Writing /tmp/absl_testing/PreprocessE2E/test_e2e_train1/tmpkec14cqt/tf-summary.training.json.
/lustre/fs5/vgl/scratch/labueg/deepconsensus/deepconsensus/preprocess/preprocess_test.py:51: ResourceWarning: unclosed file <_io.TextIOWrapper name='/tmp/absl_testing/PreprocessE2E/test_e2e_train1/tmpkec14cqt/tf-summary.training.json' mode='r' encoding='UTF-8'>
return json.load(open(summary_path, 'r'))
ResourceWarning: Enable tracemalloc to get the object allocation traceback
[ OK ] PreprocessE2E.test_e2e_train1 (2)
----------------------------------------------------------------------
Ran 4 tests in 20.387s
OK
Running tests under Python 3.8.8: /lustre/fs5/vgl/scratch/labueg//venvs/deepconsensus_venv_1/bin/python3
[ RUN ] TestBounds.test_ccs_bounds_bounds extend beyond ccs
/lustre/fs5/vgl/scratch/labueg/deepconsensus/deepconsensus/preprocess/utils.py:134: DeprecationWarning: `np.int` is a deprecated alias for the builtin `int`. To silence this warning, use `int` by itself. Doing this will not modify any behavior and is safe. When replacing `np.int`, you may wish to use e.g. `np.int64` or `np.int32` to specify the precision. If you wish to review your current use, check the release note link for additional information.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
self.seq_indices = np.zeros(len(self.bases), dtype=np.int)
[ OK ] TestBounds.test_ccs_bounds_bounds extend beyond ccs
[ RUN ] TestBounds.test_ccs_bounds_label alignment with deletions and softmatch ends
[ OK ] TestBounds.test_ccs_bounds_label alignment with deletions and softmatch ends
[ RUN ] TestBounds.test_ccs_bounds_label alignment with insertions and softmatch ends
[ OK ] TestBounds.test_ccs_bounds_label alignment with insertions and softmatch ends
[ RUN ] TestBounds.test_ccs_bounds_label alignment with softmatch ends
[ OK ] TestBounds.test_ccs_bounds_label alignment with softmatch ends
[ RUN ] TestBounds.test_ccs_bounds_left side of slice beyond bound
[ OK ] TestBounds.test_ccs_bounds_left side of slice beyond bound
[ RUN ] TestBounds.test_ccs_bounds_no overlap slice
[ OK ] TestBounds.test_ccs_bounds_no overlap slice
[ RUN ] TestBounds.test_ccs_bounds_right side of slice beyond bound
[ OK ] TestBounds.test_ccs_bounds_right side of slice beyond bound
[ RUN ] TestBounds.test_ccs_bounds_shifted start pos match
[ OK ] TestBounds.test_ccs_bounds_shifted start pos match
[ RUN ] TestBounds.test_ccs_bounds_simple match
[ OK ] TestBounds.test_ccs_bounds_simple match
[ RUN ] TestDcConfig.test_dc_config_max_passes=20
[ OK ] TestDcConfig.test_dc_config_max_passes=20
[ RUN ] TestDcConfig.test_dc_config_max_passes=5
[ OK ] TestDcConfig.test_dc_config_max_passes=5
[ RUN ] TestDcConfigFromShape.test_dc_config_from_shape_expanded shape
[ OK ] TestDcConfigFromShape.test_dc_config_from_shape_expanded shape
[ RUN ] TestDcConfigFromShape.test_dc_config_from_shape_standard shape
[ OK ] TestDcConfigFromShape.test_dc_config_from_shape_standard shape
[ RUN ] TestDcExampleFunctionality.test_dc_example_functions
[ OK ] TestDcExampleFunctionality.test_dc_example_functions
[ RUN ] TestDcExampleFunctionality.test_inference_setup
[ OK ] TestDcExampleFunctionality.test_inference_setup
[ RUN ] TestDcExampleFunctionality.test_large_label_insertion
[ OK ] TestDcExampleFunctionality.test_large_label_insertion
[ RUN ] TestDcExampleFunctionality.test_remove_gaps_and_pad
[ OK ] TestDcExampleFunctionality.test_remove_gaps_and_pad
[ RUN ] TestDcExampleFunctionality.test_tf_example_train
2022-06-27 17:36:34.322043: I tensorflow/core/platform/cpu_feature_guard.cc:151] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 AVX512F FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2022-06-27 17:36:34.325575: I tensorflow/core/common_runtime/process_util.cc:146] Creating new thread pool with default inter op setting: 2. Tune using inter_op_parallelism_threads for best performance.
[ OK ] TestDcExampleFunctionality.test_tf_example_train
[ RUN ] TestEncodeDecodeBases.test_encode_decode_bases
[ OK ] TestEncodeDecodeBases.test_encode_decode_bases
[ RUN ] TestExpandClipIndent.test_expand_clip_indent_alignment match
[ OK ] TestExpandClipIndent.test_expand_clip_indent_alignment match
[ RUN ] TestExpandClipIndent.test_expand_clip_indent_bases match and mismatch
[ OK ] TestExpandClipIndent.test_expand_clip_indent_bases match and mismatch
[ RUN ] TestExpandClipIndent.test_expand_clip_indent_deletion
[ OK ] TestExpandClipIndent.test_expand_clip_indent_deletion
[ RUN ] TestExpandClipIndent.test_expand_clip_indent_hard clip
[ OK ] TestExpandClipIndent.test_expand_clip_indent_hard clip
[ RUN ] TestExpandClipIndent.test_expand_clip_indent_indent
[ OK ] TestExpandClipIndent.test_expand_clip_indent_indent
[ RUN ] TestExpandClipIndent.test_expand_clip_indent_indent and soft
[ OK ] TestExpandClipIndent.test_expand_clip_indent_indent and soft
[ RUN ] TestExpandClipIndent.test_expand_clip_indent_insertion
[ OK ] TestExpandClipIndent.test_expand_clip_indent_insertion
[ RUN ] TestExpandClipIndent.test_expand_clip_indent_skip region
[ OK ] TestExpandClipIndent.test_expand_clip_indent_skip region
[ RUN ] TestExpandClipIndent.test_expand_clip_indent_soft clip
[ OK ] TestExpandClipIndent.test_expand_clip_indent_soft clip
[ RUN ] TestExpandClipIndent.test_expand_clip_indent_strand forward
[ OK ] TestExpandClipIndent.test_expand_clip_indent_strand forward
[ RUN ] TestExpandClipIndent.test_expand_clip_indent_strand forward ip/pw values
[ OK ] TestExpandClipIndent.test_expand_clip_indent_strand forward ip/pw values
[ RUN ] TestExpandClipIndent.test_expand_clip_indent_strand forward with indent
[ OK ] TestExpandClipIndent.test_expand_clip_indent_strand forward with indent
[ RUN ] TestExpandClipIndent.test_expand_clip_indent_strand reverse
[ OK ] TestExpandClipIndent.test_expand_clip_indent_strand reverse
[ RUN ] TestExpandClipIndent.test_expand_clip_indent_strand reverse ip/pw values
[ OK ] TestExpandClipIndent.test_expand_clip_indent_strand reverse ip/pw values
[ RUN ] TestExpandClipIndent.test_expand_clip_indent_strand reverse with indent
[ OK ] TestExpandClipIndent.test_expand_clip_indent_strand reverse with indent
[ RUN ] TestExpandClipIndent.test_expand_clip_indent_subread with complex cigar
[ OK ] TestExpandClipIndent.test_expand_clip_indent_subread with complex cigar
[ RUN ] TestExpandClipIndent.test_expand_clip_indent_subread with match insert match
[ OK ] TestExpandClipIndent.test_expand_clip_indent_subread with match insert match
[ RUN ] TestFetchCcsBases.test_fetch_bases
[ OK ] TestFetchCcsBases.test_fetch_bases
[ RUN ] TestFetchLabelBases.test_fetch_bases_known label bases
[ OK ] TestFetchLabelBases.test_fetch_bases_known label bases
[ RUN ] TestFetchLabelBases.test_fetch_bases_unknown label
[ OK ] TestFetchLabelBases.test_fetch_bases_unknown label
[ RUN ] TestProcFeeder.test_proc_feeder_inference
[E::idx_find_and_load] Could not retrieve index file for 'deepconsensus/testdata/human_1m/subreads_to_ccs.bam'
[ OK ] TestProcFeeder.test_proc_feeder_inference
[ RUN ] TestProcFeeder.test_proc_feeder_training
[E::idx_find_and_load] Could not retrieve index file for 'deepconsensus/testdata/human_1m/subreads_to_ccs.bam'
[W::sam_hrecs_update_hashes] Duplicate entry "231b5401" in sam header
I0627 17:36:35.499197 140253922744128 utils.py:941] No truth_range defined for m54238_180901_011437/4194387/ccs.
[ OK ] TestProcFeeder.test_proc_feeder_training
[ RUN ] TestRightPad.test_right_pad
[ OK ] TestRightPad.test_right_pad
[ RUN ] TestSpaceOutSubreads.test_space_out_subreads_adjacent insertions
[ OK ] TestSpaceOutSubreads.test_space_out_subreads_adjacent insertions
[ RUN ] TestSpaceOutSubreads.test_space_out_subreads_complex alignment case
[ OK ] TestSpaceOutSubreads.test_space_out_subreads_complex alignment case
[ RUN ] TestSpaceOutSubreads.test_space_out_subreads_ignore label insertion
[ OK ] TestSpaceOutSubreads.test_space_out_subreads_ignore label insertion
[ RUN ] TestSpaceOutSubreads.test_space_out_subreads_two subreads with different lengths
[ OK ] TestSpaceOutSubreads.test_space_out_subreads_two subreads with different lengths
[ RUN ] TestSpaceOutSubreads.test_space_out_subreads_two subreads with one D
[ OK ] TestSpaceOutSubreads.test_space_out_subreads_two subreads with one D
[ RUN ] TestSpaceOutSubreads.test_space_out_subreads_two subreads with one I
[ OK ] TestSpaceOutSubreads.test_space_out_subreads_two subreads with one I
[ RUN ] TestSpaceOutSubreads.test_space_out_subreads_two subreads with same sequence
[ OK ] TestSpaceOutSubreads.test_space_out_subreads_two subreads with same sequence
[ RUN ] TestSubreadGrouper.test_read_bam
[E::idx_find_and_load] Could not retrieve index file for 'deepconsensus/testdata/human_1m/subreads_to_ccs.bam'
[ OK ] TestSubreadGrouper.test_read_bam
----------------------------------------------------------------------
Ran 50 tests in 1.302s
OK
Running tests under Python 3.8.8: /lustre/fs5/vgl/scratch/labueg//venvs/deepconsensus_venv_1/bin/python3
[ RUN ] QualityScoreToStringTest.test_score_list_to_string0 ([], '')
[ OK ] QualityScoreToStringTest.test_score_list_to_string0 ([], '')
[ RUN ] QualityScoreToStringTest.test_score_list_to_string1 ([0, 10, 20, 30, 40], '!+5?I')
[ OK ] QualityScoreToStringTest.test_score_list_to_string1 ([0, 10, 20, 30, 40], '!+5?I')
[ RUN ] QualityScoreToStringTest.test_score_to_string0 (0, '!')
[ OK ] QualityScoreToStringTest.test_score_to_string0 (0, '!')
[ RUN ] QualityScoreToStringTest.test_score_to_string1 (40, 'I')
[ OK ] QualityScoreToStringTest.test_score_to_string1 (40, 'I')
[ RUN ] QualityScoreToStringTest.test_score_to_string2 (20, '5')
[ OK ] QualityScoreToStringTest.test_score_to_string2 (20, '5')
[ RUN ] QualityStringToArrayTest.test_string_to_int0 ('', [])
[ OK ] QualityStringToArrayTest.test_string_to_int0 ('', [])
[ RUN ] QualityStringToArrayTest.test_string_to_int1 ('!', [0])
[ OK ] QualityStringToArrayTest.test_string_to_int1 ('!', [0])
[ RUN ] QualityStringToArrayTest.test_string_to_int2 ('I', [40])
[ OK ] QualityStringToArrayTest.test_string_to_int2 ('I', [40])
[ RUN ] QualityStringToArrayTest.test_string_to_int3 ('5', [20])
[ OK ] QualityStringToArrayTest.test_string_to_int3 ('5', [20])
[ RUN ] QualityStringToArrayTest.test_string_to_int4 ('!+5?I', [0, 10, 20, 30, 40])
[ OK ] QualityStringToArrayTest.test_string_to_int4 ('!+5?I', [0, 10, 20, 30, 40])
----------------------------------------------------------------------
Ran 10 tests in 0.001s
OK
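The quality-string tests above follow the usual Phred+33 encoding, where a score q maps to chr(q + 33); a minimal round-trip sketch:
    def score_to_char(q: int) -> str:
        return chr(q + 33)

    def char_to_score(c: str) -> int:
        return ord(c) - 33

    # Matches the ('!', 0), ('+', 10), ('5', 20), ('?', 30), ('I', 40) pairs above.
    assert [score_to_char(q) for q in (0, 10, 20, 30, 40)] == list("!+5?I")
    assert [char_to_score(c) for c in "!+5?I"] == [0, 10, 20, 30, 40]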
Running tests under Python 3.8.8: /lustre/fs5/vgl/scratch/labueg//venvs/deepconsensus_venv_1/bin/python3
[ RUN ] ModelInferenceTest.test_inference_e2e
2022-06-27 17:36:40.959825: I tensorflow/core/platform/cpu_feature_guard.cc:151] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 AVX512F FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2022-06-27 17:36:40.963317: I tensorflow/core/common_runtime/process_util.cc:146] Creating new thread pool with default inter op setting: 2. Tune using inter_op_parallelism_threads for best performance.
WARNING:tensorflow:There are non-GPU devices in `tf.distribute.Strategy`, not using nccl allreduce.
W0627 17:36:41.017965 140626666125120 cross_device_ops.py:1387] There are non-GPU devices in `tf.distribute.Strategy`, not using nccl allreduce.
INFO:tensorflow:Using MirroredStrategy with devices ('/job:localhost/replica:0/task:0/device:CPU:0',)
I0627 17:36:41.021556 140626666125120 mirrored_strategy.py:376] Using MirroredStrategy with devices ('/job:localhost/replica:0/task:0/device:CPU:0',)
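The nccl warning above is expected on a CPU-only node; a sketch of choosing a non-NCCL reduction explicitly, which keeps MirroredStrategy quiet in that situation:
    import tensorflow as tf

    # ReductionToOneDevice avoids NCCL entirely, so the "non-GPU devices" warning
    # does not fire on CPU-only hosts.
    strategy = tf.distribute.MirroredStrategy(
        cross_device_ops=tf.distribute.ReductionToOneDevice())
    print(strategy.num_replicas_in_sync)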
WARNING:tensorflow:From /lustre/fs5/vgl/scratch/labueg/venvs/deepconsensus_venv_1/lib/python3.8/site-packages/official/nlp/transformer/attention_layer.py:54: DenseEinsum.__init__ (from official.nlp.modeling.layers.dense_einsum) is deprecated and will be removed in a future version.
Instructions for updating:
DenseEinsum is deprecated. Please use tf.keras.experimental.EinsumDense layer instead.
W0627 17:36:41.141611 140626666125120 deprecation.py:341] From /lustre/fs5/vgl/scratch/labueg/venvs/deepconsensus_venv_1/lib/python3.8/site-packages/official/nlp/transformer/attention_layer.py:54: DenseEinsum.__init__ (from official.nlp.modeling.layers.dense_einsum) is deprecated and will be removed in a future version.
Instructions for updating:
DenseEinsum is deprecated. Please use tf.keras.experimental.EinsumDense layer instead.
2022-06-27 17:36:42.049672: W tensorflow/core/framework/dataset.cc:744] Input of GeneratorDatasetOp::Dataset will not be optimized because the dataset does not implement the AsGraphDefInternal() method needed to apply optimizations.
Model: "encoder_only_learned_values_transformer"
_________________________________________________________________
Layer (type)                                              Output Shape   Param #
=================================================================
relative_position_embedding (RelativePositionEmbedding)   multiple       0
encoder_stack (EncoderStack)                               multiple       21319168
dense (Dense)                                              multiple       2805
softmax (Softmax)                                          multiple       0
embedding_shared_weights (EmbeddingSharedWeights)          multiple       40
embedding_shared_weights_1 (EmbeddingSharedWeights)        multiple       80
embedding_shared_weights_2 (EmbeddingSharedWeights)        multiple       80
embedding_shared_weights_3 (EmbeddingSharedWeights)        multiple       128
embedding_shared_weights_4 (EmbeddingSharedWeights)        multiple       6
=================================================================
Total params: 21,322,307
Trainable params: 21,322,307
Non-trainable params: 0
_________________________________________________________________
1301/1301 [==============================] - 426s 323ms/step - loss: 213.3057 - accuracy: 0.2499 - per_example_accuracy: 7.6864e-04 - A_per_class_accuracy: 0.0000e+00 - T_per_class_accuracy: 0.2296 - C_per_class_accuracy: 0.4078 - G_per_class_accuracy: 0.0000e+00 - gap_or_pad_per_class_accuracy: 0.3614
[ OK ] ModelInferenceTest.test_inference_e2e
----------------------------------------------------------------------
Ran 1 test in 426.696s
OK
Exception ignored in: <function Pool.__del__ at 0x7fe60613b820>
Traceback (most recent call last):
File "/vggpfs/fs3/vgl/store/labueg/anaconda3/lib/python3.8/multiprocessing/pool.py", line 268, in __del__
self._change_notifier.put(None)
File "/vggpfs/fs3/vgl/store/labueg/anaconda3/lib/python3.8/multiprocessing/queues.py", line 368, in put
self._writer.send_bytes(obj)
File "/vggpfs/fs3/vgl/store/labueg/anaconda3/lib/python3.8/multiprocessing/connection.py", line 200, in send_bytes
self._send_bytes(m[offset:offset + size])
File "/vggpfs/fs3/vgl/store/labueg/anaconda3/lib/python3.8/multiprocessing/connection.py", line 411, in _send_bytes
self._send(header + buf)
File "/vggpfs/fs3/vgl/store/labueg/anaconda3/lib/python3.8/multiprocessing/connection.py", line 368, in _send
n = write(self._handle, buf)
OSError: [Errno 9] Bad file descriptor
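The traceback above looks like a teardown artifact: a multiprocessing.Pool collected during interpreter shutdown writes to an already-closed pipe in __del__. Tearing the pool down explicitly (for example as a context manager) sidesteps it; a minimal sketch:
    from multiprocessing import Pool

    def square(x):
        return x * x

    if __name__ == "__main__":
        # the pool is torn down here, before interpreter shutdown, so there is
        # nothing left for Pool.__del__ to clean up later
        with Pool(processes=2) as pool:
            print(pool.map(square, range(4)))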
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer
W0627 17:43:47.836062 140626666125120 util.py:181] Unresolved object in checkpoint: (root).optimizer
WARNING:tensorflow:Unresolved object in checkpoint: (root).model.layer-0
W0627 17:43:47.836266 140626666125120 util.py:181] Unresolved object in checkpoint: (root).model.layer-0
WARNING:tensorflow:Unresolved object in checkpoint: (root).model.layer-1
W0627 17:43:47.836327 140626666125120 util.py:181] Unresolved object in checkpoint: (root).model.layer-1
WARNING:tensorflow:Unresolved object in checkpoint: (root).model.layer_with_weights-0
W0627 17:43:47.836389 140626666125120 util.py:181] Unresolved object in checkpoint: (root).model.layer_with_weights-0
WARNING:tensorflow:Unresolved object in checkpoint: (root).model.layer-3
W0627 17:43:47.836440 140626666125120 util.py:181] Unresolved object in checkpoint: (root).model.layer-3
WARNING:tensorflow:Unresolved object in checkpoint: (root).model.layer_with_weights-1
W0627 17:43:47.836488 140626666125120 util.py:181] Unresolved object in checkpoint: (root).model.layer_with_weights-1
WARNING:tensorflow:Unresolved object in checkpoint: (root).model.layer-5
W0627 17:43:47.836536 140626666125120 util.py:181] Unresolved object in checkpoint: (root).model.layer-5
WARNING:tensorflow:Unresolved object in checkpoint: (root).model.layer_with_weights-2
W0627 17:43:47.836585 140626666125120 util.py:181] Unresolved object in checkpoint: (root).model.layer_with_weights-2
WARNING:tensorflow:Unresolved object in checkpoint: (root).model.layer-7
W0627 17:43:47.836633 140626666125120 util.py:181] Unresolved object in checkpoint: (root).model.layer-7
WARNING:tensorflow:Unresolved object in checkpoint: (root).model.layer-8
W0627 17:43:47.836681 140626666125120 util.py:181] Unresolved object in checkpoint: (root).model.layer-8
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.iter
W0627 17:43:47.836728 140626666125120 util.py:181] Unresolved object in checkpoint: (root).optimizer.iter
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.beta_1
W0627 17:43:47.836775 140626666125120 util.py:181] Unresolved object in checkpoint: (root).optimizer.beta_1
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.beta_2
W0627 17:43:47.836823 140626666125120 util.py:181] Unresolved object in checkpoint: (root).optimizer.beta_2
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.decay
W0627 17:43:47.836870 140626666125120 util.py:181] Unresolved object in checkpoint: (root).optimizer.decay
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.learning_rate
W0627 17:43:47.836918 140626666125120 util.py:181] Unresolved object in checkpoint: (root).optimizer.learning_rate
WARNING:tensorflow:Unresolved object in checkpoint: (root).model.layer_with_weights-0.kernel
W0627 17:43:47.836965 140626666125120 util.py:181] Unresolved object in checkpoint: (root).model.layer_with_weights-0.kernel
WARNING:tensorflow:Unresolved object in checkpoint: (root).model.layer_with_weights-0.bias
W0627 17:43:47.837013 140626666125120 util.py:181] Unresolved object in checkpoint: (root).model.layer_with_weights-0.bias
WARNING:tensorflow:Unresolved object in checkpoint: (root).model.layer_with_weights-1.kernel
W0627 17:43:47.837060 140626666125120 util.py:181] Unresolved object in checkpoint: (root).model.layer_with_weights-1.kernel
WARNING:tensorflow:Unresolved object in checkpoint: (root).model.layer_with_weights-1.bias
W0627 17:43:47.837107 140626666125120 util.py:181] Unresolved object in checkpoint: (root).model.layer_with_weights-1.bias
WARNING:tensorflow:Unresolved object in checkpoint: (root).model.layer_with_weights-2.kernel
W0627 17:43:47.837154 140626666125120 util.py:181] Unresolved object in checkpoint: (root).model.layer_with_weights-2.kernel
WARNING:tensorflow:Unresolved object in checkpoint: (root).model.layer_with_weights-2.bias
W0627 17:43:47.837211 140626666125120 util.py:181] Unresolved object in checkpoint: (root).model.layer_with_weights-2.bias
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer's state 'm' for (root).model.layer_with_weights-0.kernel
W0627 17:43:47.837259 140626666125120 util.py:181] Unresolved object in checkpoint: (root).optimizer's state 'm' for (root).model.layer_with_weights-0.kernel
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer's state 'm' for (root).model.layer_with_weights-0.bias
W0627 17:43:47.837307 140626666125120 util.py:181] Unresolved object in checkpoint: (root).optimizer's state 'm' for (root).model.layer_with_weights-0.bias
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer's state 'm' for (root).model.layer_with_weights-1.kernel
W0627 17:43:47.837360 140626666125120 util.py:181] Unresolved object in checkpoint: (root).optimizer's state 'm' for (root).model.layer_with_weights-1.kernel
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer's state 'm' for (root).model.layer_with_weights-1.bias
W0627 17:43:47.837408 140626666125120 util.py:181] Unresolved object in checkpoint: (root).optimizer's state 'm' for (root).model.layer_with_weights-1.bias
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer's state 'm' for (root).model.layer_with_weights-2.kernel
W0627 17:43:47.837455 140626666125120 util.py:181] Unresolved object in checkpoint: (root).optimizer's state 'm' for (root).model.layer_with_weights-2.kernel
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer's state 'm' for (root).model.layer_with_weights-2.bias
W0627 17:43:47.837502 140626666125120 util.py:181] Unresolved object in checkpoint: (root).optimizer's state 'm' for (root).model.layer_with_weights-2.bias
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer's state 'v' for (root).model.layer_with_weights-0.kernel
W0627 17:43:47.837550 140626666125120 util.py:181] Unresolved object in checkpoint: (root).optimizer's state 'v' for (root).model.layer_with_weights-0.kernel
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer's state 'v' for (root).model.layer_with_weights-0.bias
W0627 17:43:47.837597 140626666125120 util.py:181] Unresolved object in checkpoint: (root).optimizer's state 'v' for (root).model.layer_with_weights-0.bias
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer's state 'v' for (root).model.layer_with_weights-1.kernel
W0627 17:43:47.837645 140626666125120 util.py:181] Unresolved object in checkpoint: (root).optimizer's state 'v' for (root).model.layer_with_weights-1.kernel
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer's state 'v' for (root).model.layer_with_weights-1.bias
W0627 17:43:47.837692 140626666125120 util.py:181] Unresolved object in checkpoint: (root).optimizer's state 'v' for (root).model.layer_with_weights-1.bias
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer's state 'v' for (root).model.layer_with_weights-2.kernel
W0627 17:43:47.837739 140626666125120 util.py:181] Unresolved object in checkpoint: (root).optimizer's state 'v' for (root).model.layer_with_weights-2.kernel
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer's state 'v' for (root).model.layer_with_weights-2.bias
W0627 17:43:47.837786 140626666125120 util.py:181] Unresolved object in checkpoint: (root).optimizer's state 'v' for (root).model.layer_with_weights-2.bias
WARNING:tensorflow:A checkpoint was restored (e.g. tf.train.Checkpoint.restore or tf.keras.Model.load_weights) but not all checkpointed values were used. See above for specific issues. Use expect_partial() on the load status object, e.g. tf.train.Checkpoint.restore(...).expect_partial(), to silence these warnings, or use assert_consumed() to make the check explicit. See https://www.tensorflow.org/guide/checkpoint#loading_mechanics for details.
W0627 17:43:47.837833 140626666125120 util.py:189] A checkpoint was restored (e.g. tf.train.Checkpoint.restore or tf.keras.Model.load_weights) but not all checkpointed values were used. See above for specific issues. Use expect_partial() on the load status object, e.g. tf.train.Checkpoint.restore(...).expect_partial(), to silence these warnings, or use assert_consumed() to make the check explicit. See https://www.tensorflow.org/guide/checkpoint#loading_mechanics for details.
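Note: the "Unresolved object in checkpoint" warnings above are the standard TensorFlow message that fires whenever a checkpoint stores more tracked objects (here, optimizer state and model layers) than the restoring tf.train.Checkpoint actually consumes. A minimal, hedged sketch of the expect_partial() remedy the warning itself suggests (plain Keras layers and a placeholder /tmp path, not the DeepConsensus model or its loading code):

import tensorflow as tf

# Save a checkpoint that tracks both a small model and its optimizer.
model = tf.keras.Sequential([tf.keras.layers.Dense(4, input_shape=(8,))])
optimizer = tf.keras.optimizers.Adam()
full_ckpt = tf.train.Checkpoint(model=model, optimizer=optimizer)
path = full_ckpt.save("/tmp/expect_partial_demo/ckpt")  # placeholder path

# Restore only the model: the optimizer values in the checkpoint go unused,
# which is exactly what triggers the warnings above. expect_partial() marks
# the partial restore as intentional and silences them.
restore_ckpt = tf.train.Checkpoint(model=model)
restore_ckpt.restore(path).expect_partial()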
Running tests under Python 3.8.8: /lustre/fs5/vgl/scratch/labueg//venvs/deepconsensus_venv_1/bin/python3
[ RUN ] GetStepCountsTest.test_get_step_counts_simple
[ OK ] GetStepCountsTest.test_get_step_counts_simple
[ RUN ] GetStepCountsTest.test_get_step_counts_with_limit
[ OK ] GetStepCountsTest.test_get_step_counts_with_limit
[ RUN ] ModelTrainTest.test_train_e2e0 ('fc+test')
2022-06-27 17:43:53.342665: I tensorflow/core/platform/cpu_feature_guard.cc:151] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 AVX512F FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2022-06-27 17:43:53.347043: I tensorflow/core/common_runtime/process_util.cc:146] Creating new thread pool with default inter op setting: 2. Tune using inter_op_parallelism_threads for best performance.
I0627 17:43:54.047276 140649649518400 model_train_custom_loop.py:199] Building model.
I0627 17:43:54.080218 140649649518400 model_train_custom_loop.py:201] Done building model.
2022-06-27 17:43:54.223793: W tensorflow/core/framework/dataset.cc:744] Input of GeneratorDatasetOp::Dataset will not be optimized because the dataset does not implement the AsGraphDefInternal() method needed to apply optimizations.
I0627 17:43:54.299579 140649649518400 model_train_custom_loop.py:265] Starting to run epoch: 0
I0627 17:43:57.467439 140649649518400 model_train_custom_loop.py:147] epoch: 0 step: 0 of 253 loss: 167.329956
I0627 17:44:11.819717 140649649518400 model_train_custom_loop.py:147] epoch: 0 step: 100 of 253 loss: 117.147575
I0627 17:44:25.736131 140649649518400 model_train_custom_loop.py:147] epoch: 0 step: 200 of 253 loss: 131.114822
I0627 17:44:42.248521 140649649518400 model_train_custom_loop.py:147] epoch: 0 step: 252 of 253 loss: 133.378159
I0627 17:44:42.318886 140649649518400 model_train_custom_loop.py:167] Saved checkpoint to /tmp/absl_testing/ModelTrainTest/test_train_e2e0/tmpa6bm7oey/checkpoint-1
I0627 17:44:42.319014 140649649518400 model_train_custom_loop.py:168] Logging checkpoint /tmp/absl_testing/ModelTrainTest/test_train_e2e0/tmpa6bm7oey/checkpoint-1 metrics.
[ OK ] ModelTrainTest.test_train_e2e0 ('fc+test')
[ RUN ] ModelTrainTest.test_train_e2e1 ('transformer+test')
I0627 17:44:42.566816 140649649518400 model_train_custom_loop.py:199] Building model.
I0627 17:44:42.572724 140649649518400 model_train_custom_loop.py:201] Done building model.
2022-06-27 17:44:42.638019: W tensorflow/core/framework/dataset.cc:744] Input of GeneratorDatasetOp::Dataset will not be optimized because the dataset does not implement the AsGraphDefInternal() method needed to apply optimizations.
I0627 17:44:42.709506 140649649518400 model_train_custom_loop.py:265] Starting to run epoch: 0
WARNING:tensorflow:From /lustre/fs5/vgl/scratch/labueg/venvs/deepconsensus_venv_1/lib/python3.8/site-packages/official/nlp/transformer/attention_layer.py:54: DenseEinsum.__init__ (from official.nlp.modeling.layers.dense_einsum) is deprecated and will be removed in a future version.
Instructions for updating:
DenseEinsum is deprecated. Please use tf.keras.experimental.EinsumDense layer instead.
W0627 17:44:43.336385 140649649518400 deprecation.py:341] From /lustre/fs5/vgl/scratch/labueg/venvs/deepconsensus_venv_1/lib/python3.8/site-packages/official/nlp/transformer/attention_layer.py:54: DenseEinsum.__init__ (from official.nlp.modeling.layers.dense_einsum) is deprecated and will be removed in a future version.
Instructions for updating:
DenseEinsum is deprecated. Please use tf.keras.experimental.EinsumDense layer instead.
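The DenseEinsum deprecation above is emitted from the installed official/nlp (tf-models) package referenced in the path, not from the test code itself, and is informational only. For reference, a minimal sketch of the replacement layer the message points to (the exact import path varies by TF version; tf.keras.layers.experimental.EinsumDense is assumed here):

import tensorflow as tf

# Dense projection over the last axis of a [batch, seq, 32] input,
# producing [batch, seq, 64], expressed as an einsum equation.
layer = tf.keras.layers.experimental.EinsumDense(
    "abc,cd->abd", output_shape=(None, 64), bias_axes="d")
out = layer(tf.random.normal([2, 10, 32]))
print(out.shape)  # (2, 10, 64)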
I0627 17:44:47.877969 140649649518400 model_train_custom_loop.py:147] epoch: 0 step: 0 of 253 loss: 187.053268
I0627 17:45:56.842165 140649649518400 model_train_custom_loop.py:147] epoch: 0 step: 100 of 253 loss: 132.829910
I0627 17:47:04.240570 140649649518400 model_train_custom_loop.py:147] epoch: 0 step: 200 of 253 loss: 148.135254
I0627 17:48:53.844231 140649649518400 model_train_custom_loop.py:147] epoch: 0 step: 252 of 253 loss: 121.157104
I0627 17:48:54.031528 140649649518400 model_train_custom_loop.py:167] Saved checkpoint to /tmp/absl_testing/ModelTrainTest/test_train_e2e1/tmparvoizpp/checkpoint-1
I0627 17:48:54.031733 140649649518400 model_train_custom_loop.py:168] Logging checkpoint /tmp/absl_testing/ModelTrainTest/test_train_e2e1/tmparvoizpp/checkpoint-1 metrics.
[ OK ] ModelTrainTest.test_train_e2e1 ('transformer+test')
----------------------------------------------------------------------
Ran 4 tests in 300.870s
OK
Running tests under Python 3.8.8: /lustre/fs5/vgl/scratch/labueg//venvs/deepconsensus_venv_1/bin/python3
[ RUN ] GetModelTest.test_invalid_model_name_throws_error
2022-06-27 17:48:59.782110: I tensorflow/core/platform/cpu_feature_guard.cc:151] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 AVX512F FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2022-06-27 17:48:59.785532: I tensorflow/core/common_runtime/process_util.cc:146] Creating new thread pool with default inter op setting: 2. Tune using inter_op_parallelism_threads for best performance.
[ OK ] GetModelTest.test_invalid_model_name_throws_error
[ RUN ] GetModelTest.test_valid_model_name
[ OK ] GetModelTest.test_valid_model_name
[ RUN ] ModifyParamsTest.test_params_modified0 ('transformer+test')
[ OK ] ModifyParamsTest.test_params_modified0 ('transformer+test')
[ RUN ] ModifyParamsTest.test_params_modified1 ('fc+test')
[ OK ] ModifyParamsTest.test_params_modified1 ('fc+test')
[ RUN ] RunInferenceAndWriteResultsTest.test_output_dir_created
WARNING:tensorflow:From /lustre/fs5/vgl/scratch/labueg/venvs/deepconsensus_venv_1/lib/python3.8/site-packages/official/nlp/transformer/attention_layer.py:54: DenseEinsum.__init__ (from official.nlp.modeling.layers.dense_einsum) is deprecated and will be removed in a future version.
Instructions for updating:
DenseEinsum is deprecated. Please use tf.keras.experimental.EinsumDense layer instead.
W0627 17:49:02.041287 140590770792256 deprecation.py:341] From /lustre/fs5/vgl/scratch/labueg/venvs/deepconsensus_venv_1/lib/python3.8/site-packages/official/nlp/transformer/attention_layer.py:54: DenseEinsum.__init__ (from official.nlp.modeling.layers.dense_einsum) is deprecated and will be removed in a future version.
Instructions for updating:
DenseEinsum is deprecated. Please use tf.keras.experimental.EinsumDense layer instead.
1301/1301 [==============================] - 411s 311ms/step - loss: 210.2064 - accuracy: 0.1807 - per_example_accuracy: 0.0000e+00 - A_per_class_accuracy: 0.1310 - T_per_class_accuracy: 0.8777 - C_per_class_accuracy: 0.0094 - G_per_class_accuracy: 0.0000e+00 - gap_or_pad_per_class_accuracy: 0.0216
[ OK ] RunInferenceAndWriteResultsTest.test_output_dir_created
----------------------------------------------------------------------
Ran 5 tests in 411.365s
OK
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer
W0627 17:55:51.310979 140590770792256 util.py:181] Unresolved object in checkpoint: (root).optimizer
WARNING:tensorflow:Unresolved object in checkpoint: (root).model.layer-0
W0627 17:55:51.311236 140590770792256 util.py:181] Unresolved object in checkpoint: (root).model.layer-0
WARNING:tensorflow:Unresolved object in checkpoint: (root).model.layer-1
W0627 17:55:51.311293 140590770792256 util.py:181] Unresolved object in checkpoint: (root).model.layer-1
WARNING:tensorflow:Unresolved object in checkpoint: (root).model.layer_with_weights-0
W0627 17:55:51.311341 140590770792256 util.py:181] Unresolved object in checkpoint: (root).model.layer_with_weights-0
WARNING:tensorflow:Unresolved object in checkpoint: (root).model.layer-3
W0627 17:55:51.311392 140590770792256 util.py:181] Unresolved object in checkpoint: (root).model.layer-3
WARNING:tensorflow:Unresolved object in checkpoint: (root).model.layer_with_weights-1
W0627 17:55:51.311437 140590770792256 util.py:181] Unresolved object in checkpoint: (root).model.layer_with_weights-1
WARNING:tensorflow:Unresolved object in checkpoint: (root).model.layer-5
W0627 17:55:51.311481 140590770792256 util.py:181] Unresolved object in checkpoint: (root).model.layer-5
WARNING:tensorflow:Unresolved object in checkpoint: (root).model.layer_with_weights-2
W0627 17:55:51.311525 140590770792256 util.py:181] Unresolved object in checkpoint: (root).model.layer_with_weights-2
WARNING:tensorflow:Unresolved object in checkpoint: (root).model.layer-7
W0627 17:55:51.311569 140590770792256 util.py:181] Unresolved object in checkpoint: (root).model.layer-7
WARNING:tensorflow:Unresolved object in checkpoint: (root).model.layer-8
W0627 17:55:51.311612 140590770792256 util.py:181] Unresolved object in checkpoint: (root).model.layer-8
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.iter
W0627 17:55:51.311655 140590770792256 util.py:181] Unresolved object in checkpoint: (root).optimizer.iter
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.beta_1
W0627 17:55:51.311699 140590770792256 util.py:181] Unresolved object in checkpoint: (root).optimizer.beta_1
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.beta_2
W0627 17:55:51.311742 140590770792256 util.py:181] Unresolved object in checkpoint: (root).optimizer.beta_2
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.decay
W0627 17:55:51.311785 140590770792256 util.py:181] Unresolved object in checkpoint: (root).optimizer.decay
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.learning_rate
W0627 17:55:51.311829 140590770792256 util.py:181] Unresolved object in checkpoint: (root).optimizer.learning_rate
WARNING:tensorflow:Unresolved object in checkpoint: (root).model.layer_with_weights-0.kernel
W0627 17:55:51.311895 140590770792256 util.py:181] Unresolved object in checkpoint: (root).model.layer_with_weights-0.kernel
WARNING:tensorflow:Unresolved object in checkpoint: (root).model.layer_with_weights-0.bias
W0627 17:55:51.311939 140590770792256 util.py:181] Unresolved object in checkpoint: (root).model.layer_with_weights-0.bias
WARNING:tensorflow:Unresolved object in checkpoint: (root).model.layer_with_weights-1.kernel
W0627 17:55:51.311982 140590770792256 util.py:181] Unresolved object in checkpoint: (root).model.layer_with_weights-1.kernel
WARNING:tensorflow:Unresolved object in checkpoint: (root).model.layer_with_weights-1.bias
W0627 17:55:51.312025 140590770792256 util.py:181] Unresolved object in checkpoint: (root).model.layer_with_weights-1.bias
WARNING:tensorflow:Unresolved object in checkpoint: (root).model.layer_with_weights-2.kernel
W0627 17:55:51.312068 140590770792256 util.py:181] Unresolved object in checkpoint: (root).model.layer_with_weights-2.kernel
WARNING:tensorflow:Unresolved object in checkpoint: (root).model.layer_with_weights-2.bias
W0627 17:55:51.312111 140590770792256 util.py:181] Unresolved object in checkpoint: (root).model.layer_with_weights-2.bias
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer's state 'm' for (root).model.layer_with_weights-0.kernel
W0627 17:55:51.312154 140590770792256 util.py:181] Unresolved object in checkpoint: (root).optimizer's state 'm' for (root).model.layer_with_weights-0.kernel
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer's state 'm' for (root).model.layer_with_weights-0.bias
W0627 17:55:51.312198 140590770792256 util.py:181] Unresolved object in checkpoint: (root).optimizer's state 'm' for (root).model.layer_with_weights-0.bias
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer's state 'm' for (root).model.layer_with_weights-1.kernel
W0627 17:55:51.312242 140590770792256 util.py:181] Unresolved object in checkpoint: (root).optimizer's state 'm' for (root).model.layer_with_weights-1.kernel
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer's state 'm' for (root).model.layer_with_weights-1.bias
W0627 17:55:51.312285 140590770792256 util.py:181] Unresolved object in checkpoint: (root).optimizer's state 'm' for (root).model.layer_with_weights-1.bias
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer's state 'm' for (root).model.layer_with_weights-2.kernel
W0627 17:55:51.312328 140590770792256 util.py:181] Unresolved object in checkpoint: (root).optimizer's state 'm' for (root).model.layer_with_weights-2.kernel
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer's state 'm' for (root).model.layer_with_weights-2.bias
W0627 17:55:51.312375 140590770792256 util.py:181] Unresolved object in checkpoint: (root).optimizer's state 'm' for (root).model.layer_with_weights-2.bias
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer's state 'v' for (root).model.layer_with_weights-0.kernel
W0627 17:55:51.312418 140590770792256 util.py:181] Unresolved object in checkpoint: (root).optimizer's state 'v' for (root).model.layer_with_weights-0.kernel
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer's state 'v' for (root).model.layer_with_weights-0.bias
W0627 17:55:51.312461 140590770792256 util.py:181] Unresolved object in checkpoint: (root).optimizer's state 'v' for (root).model.layer_with_weights-0.bias
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer's state 'v' for (root).model.layer_with_weights-1.kernel
W0627 17:55:51.312505 140590770792256 util.py:181] Unresolved object in checkpoint: (root).optimizer's state 'v' for (root).model.layer_with_weights-1.kernel
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer's state 'v' for (root).model.layer_with_weights-1.bias
W0627 17:55:51.312548 140590770792256 util.py:181] Unresolved object in checkpoint: (root).optimizer's state 'v' for (root).model.layer_with_weights-1.bias
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer's state 'v' for (root).model.layer_with_weights-2.kernel
W0627 17:55:51.312607 140590770792256 util.py:181] Unresolved object in checkpoint: (root).optimizer's state 'v' for (root).model.layer_with_weights-2.kernel
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer's state 'v' for (root).model.layer_with_weights-2.bias
W0627 17:55:51.312650 140590770792256 util.py:181] Unresolved object in checkpoint: (root).optimizer's state 'v' for (root).model.layer_with_weights-2.bias
WARNING:tensorflow:A checkpoint was restored (e.g. tf.train.Checkpoint.restore or tf.keras.Model.load_weights) but not all checkpointed values were used. See above for specific issues. Use expect_partial() on the load status object, e.g. tf.train.Checkpoint.restore(...).expect_partial(), to silence these warnings, or use assert_consumed() to make the check explicit. See https://www.tensorflow.org/guide/checkpoint#loading_mechanics for details.
W0627 17:55:51.312697 140590770792256 util.py:189] A checkpoint was restored (e.g. tf.train.Checkpoint.restore or tf.keras.Model.load_weights) but not all checkpointed values were used. See above for specific issues. Use expect_partial() on the load status object, e.g. tf.train.Checkpoint.restore(...).expect_partial(), to silence these warnings, or use assert_consumed() to make the check explicit. See https://www.tensorflow.org/guide/checkpoint#loading_mechanics for details.