@kurianbenoy
Created April 25, 2020 06:25
[NbConvertApp] Converting notebook __notebook__.ipynb to notebook
[NbConvertApp] Executing notebook with kernel: python3
2020-04-25 02:07:01.976986: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX512F
2020-04-25 02:07:01.998346: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2000175000 Hz
2020-04-25 02:07:02.000270: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x5610c4dfcb60 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-04-25 02:07:02.000319: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
2020-04-25 02:07:02.021687: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:300] Initialize GrpcChannelCache for job worker -> {0 -> 10.0.0.2:8470}
2020-04-25 02:07:02.021760: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:300] Initialize GrpcChannelCache for job localhost -> {0 -> localhost:30013}
2020-04-25 02:07:02.040809: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:300] Initialize GrpcChannelCache for job worker -> {0 -> 10.0.0.2:8470}
2020-04-25 02:07:02.040945: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:300] Initialize GrpcChannelCache for job localhost -> {0 -> localhost:30013}
2020-04-25 02:07:02.045507: I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:390] Started server with target: grpc://localhost:30013
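The GrpcChannelCache and gRPC server lines above are what TPU initialization typically prints on a Kaggle TPU v3-8 session. A minimal sketch of the kind of setup code that produces them, assuming TF 2.1 and the standard Kaggle TPU environment (this is a reconstruction, not code taken from the gist's notebook):

    import tensorflow as tf

    # Assumed reconstruction of the notebook's TPU setup; not from the gist itself.
    resolver = tf.distribute.cluster_resolver.TPUClusterResolver()  # reads TPU_NAME on Kaggle
    tf.config.experimental_connect_to_cluster(resolver)             # -> GrpcChannelCache lines above
    tf.tpu.experimental.initialize_tpu_system(resolver)
    strategy = tf.distribute.experimental.TPUStrategy(resolver)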
2020-04-25 02:07:05.305056: W tensorflow/core/platform/cloud/google_auth_provider.cc:178] All attempts to get a Google authentication bearer token failed, returning an empty token. Retrieving token from files failed with "Not found: Could not locate the credentials file.". Retrieving token from GCE failed with "Cancelled: GCE check skipped due to presence of $NO_GCE_CHECK environment variable.".
2020-04-25 02:07:05.354452: W tensorflow/core/platform/cloud/google_auth_provider.cc:178] All attempts to get a Google authentication bearer token failed, returning an empty token. Retrieving token from files failed with "Not found: Could not locate the credentials file.". Retrieving token from GCE failed with "Cancelled: GCE check skipped due to presence of $NO_GCE_CHECK environment variable.".
2020-04-25 02:07:05.387079: W tensorflow/core/platform/cloud/google_auth_provider.cc:178] All attempts to get a Google authentication bearer token failed, returning an empty token. Retrieving token from files failed with "Not found: Could not locate the credentials file.". Retrieving token from GCE failed with "Cancelled: GCE check skipped due to presence of $NO_GCE_CHECK environment variable.".
2020-04-25 02:07:05.620050: W tensorflow/core/platform/cloud/google_auth_provider.cc:178] All attempts to get a Google authentication bearer token failed, returning an empty token. Retrieving token from files failed with "Not found: Could not locate the credentials file.". Retrieving token from GCE failed with "Cancelled: GCE check skipped due to presence of $NO_GCE_CHECK environment variable.".
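The authentication warnings above are generally harmless when the notebook only reads public data: the GCS client simply falls back to anonymous access after the credentials file and the GCE metadata check both fail. If a private GCS bucket were needed, one way to satisfy the bearer-token lookup would be a service-account key, for example (the path here is hypothetical):

    import os
    # Hypothetical key path; points TensorFlow's GCS client at a service-account key.
    os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "/kaggle/input/gcs-key/service-account.json"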
2020-04-25 02:16:54.376517: W tensorflow/core/distributed_runtime/eager/remote_tensor_handle_data.cc:75] Unable to destroy remote tensor handles. If you are running a tf.function, it usually indicates some op in the graph gets an error: {{function_node __inference_distributed_function_529926}} End of sequence
[[{{node cond_14/else/_139/IteratorGetNext}}]]
2020-04-25 02:24:57.412101: W tensorflow/core/distributed_runtime/eager/remote_tensor_handle_data.cc:75] Unable to destroy remote tensor handles. If you are running a tf.function, it usually indicates some op in the graph gets an error: {{function_node __inference_distributed_function_969761}} End of sequence
[[{{node cond_9/else/_84/IteratorGetNext}}]]
2020-04-25 02:33:08.686316: W tensorflow/core/distributed_runtime/eager/remote_tensor_handle_data.cc:75] Unable to destroy remote tensor handles. If you are running a tf.function, it usually indicates some op in the graph gets an error: {{function_node __inference_distributed_function_1409599}} End of sequence
[[{{node cond_12/else/_117/IteratorGetNext}}]]
2020-04-25 02:41:24.765419: W tensorflow/core/distributed_runtime/eager/remote_tensor_handle_data.cc:75] Unable to destroy remote tensor handles. If you are running a tf.function, it usually indicates some op in the graph gets an error: 3 root error(s) found.
(0) Out of range: {{function_node __inference_distributed_function_1849425}} End of sequence
[[{{node cond_13/else/_128/IteratorGetNext}}]]
(1) Out of range: {{function_node __inference_distributed_function_1849425}} End of sequence
[[{{node cond_12/else/_117/IteratorGetNext}}]]
(2) Out of range: {{function_node __inference_distributed_function_1849425}} End of sequence
[[{{node cond_11/else/_106/IteratorGetNext}}]]
0 successful operations.
6 derived errors ignored.
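The repeated "End of sequence" / "Out of range" warnings mean IteratorGetNext ran past the end of the input pipeline, which usually happens when a finite tf.data dataset is consumed for more steps than it contains. A hedged sketch of the usual fix, assuming a Keras model.fit loop (the batch size, example count, `filenames`, `parse_example`, and `model` below are placeholders, not values from the notebook):

    import tensorflow as tf

    BATCH_SIZE = 128        # placeholder
    NUM_EXAMPLES = 12800    # placeholder

    dataset = (
        tf.data.TFRecordDataset(filenames)        # `filenames` assumed to be defined earlier
        .map(parse_example)                       # `parse_example` is an assumed parser
        .batch(BATCH_SIZE, drop_remainder=True)   # TPUs need fixed batch shapes
        .repeat()                                 # avoid signalling end-of-sequence mid-training
        .prefetch(tf.data.experimental.AUTOTUNE)
    )

    # `model` assumed to be built under strategy.scope()
    model.fit(dataset, epochs=5, steps_per_epoch=NUM_EXAMPLES // BATCH_SIZE)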
[NbConvertApp] ERROR | Kernel died while waiting for execute reply.
Traceback (most recent call last):
  File "/opt/conda/lib/python3.6/site-packages/nbconvert/preprocessors/execute.py", line 478, in _poll_for_reply
    msg = self.kc.shell_channel.get_msg(timeout=timeout)
  File "/opt/conda/lib/python3.6/site-packages/jupyter_client/blocking/channels.py", line 57, in get_msg
    raise Empty
queue.Empty

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/opt/conda/bin/jupyter-nbconvert", line 11, in <module>
    sys.exit(main())
  File "/opt/conda/lib/python3.6/site-packages/jupyter_core/application.py", line 268, in launch_instance
    return super(JupyterApp, cls).launch_instance(argv=argv, **kwargs)
  File "/opt/conda/lib/python3.6/site-packages/traitlets/config/application.py", line 664, in launch_instance
    app.start()
  File "/opt/conda/lib/python3.6/site-packages/nbconvert/nbconvertapp.py", line 340, in start
    self.convert_notebooks()
  File "/opt/conda/lib/python3.6/site-packages/nbconvert/nbconvertapp.py", line 510, in convert_notebooks
    self.convert_single_notebook(notebook_filename)
  File "/opt/conda/lib/python3.6/site-packages/nbconvert/nbconvertapp.py", line 481, in convert_single_notebook
    output, resources = self.export_single_notebook(notebook_filename, resources, input_buffer=input_buffer)
  File "/opt/conda/lib/python3.6/site-packages/nbconvert/nbconvertapp.py", line 410, in export_single_notebook
    output, resources = self.exporter.from_filename(notebook_filename, resources=resources)
  File "/opt/conda/lib/python3.6/site-packages/nbconvert/exporters/exporter.py", line 179, in from_filename
    return self.from_file(f, resources=resources, **kw)
  File "/opt/conda/lib/python3.6/site-packages/nbconvert/exporters/exporter.py", line 197, in from_file
    return self.from_notebook_node(nbformat.read(file_stream, as_version=4), resources=resources, **kw)
  File "/opt/conda/lib/python3.6/site-packages/nbconvert/exporters/notebook.py", line 32, in from_notebook_node
    nb_copy, resources = super(NotebookExporter, self).from_notebook_node(nb, resources, **kw)
  File "/opt/conda/lib/python3.6/site-packages/nbconvert/exporters/exporter.py", line 139, in from_notebook_node
    nb_copy, resources = self._preprocess(nb_copy, resources)
  File "/opt/conda/lib/python3.6/site-packages/nbconvert/exporters/exporter.py", line 316, in _preprocess
    nbc, resc = preprocessor(nbc, resc)
  File "/opt/conda/lib/python3.6/site-packages/nbconvert/preprocessors/base.py", line 47, in __call__
    return self.preprocess(nb, resources)
  File "/opt/conda/lib/python3.6/site-packages/nbconvert/preprocessors/execute.py", line 405, in preprocess
    nb, resources = super(ExecutePreprocessor, self).preprocess(nb, resources)
  File "/opt/conda/lib/python3.6/site-packages/nbconvert/preprocessors/base.py", line 69, in preprocess
    nb.cells[index], resources = self.preprocess_cell(cell, resources, index)
  File "/opt/conda/lib/python3.6/site-packages/nbconvert/preprocessors/execute.py", line 438, in preprocess_cell
    reply, outputs = self.run_cell(cell, cell_index, store_history)
  File "/opt/conda/lib/python3.6/site-packages/nbconvert/preprocessors/execute.py", line 578, in run_cell
    exec_reply = self._poll_for_reply(parent_msg_id, cell, timeout)
  File "/opt/conda/lib/python3.6/site-packages/nbconvert/preprocessors/execute.py", line 483, in _poll_for_reply
    self._check_alive()
  File "/opt/conda/lib/python3.6/site-packages/nbconvert/preprocessors/execute.py", line 510, in _check_alive
    raise DeadKernelError("Kernel died")
nbconvert.preprocessors.execute.DeadKernelError: Kernel died
[NbConvertApp] Converting notebook __notebook__.ipynb to html
[NbConvertApp] Writing 393544 bytes to __results__.html
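The DeadKernelError above means the kernel process itself exited while nbconvert was waiting for a reply; it is not a per-cell timeout (a timeout would surface as a TimeoutError rather than via _check_alive). On Kaggle TPU notebooks this is commonly host-side resource exhaustion; nbconvert then converts whatever it has to HTML, as the last two lines show. A minimal sketch, using the same nbconvert/nbformat APIs that appear in the traceback, of running the execution programmatically so that partially executed output can still be saved:

    import nbformat
    from nbconvert.preprocessors import ExecutePreprocessor
    from nbconvert.preprocessors.execute import DeadKernelError

    nb = nbformat.read("__notebook__.ipynb", as_version=4)
    ep = ExecutePreprocessor(timeout=-1, kernel_name="python3")  # -1 disables the per-cell timeout
    try:
        ep.preprocess(nb, {"metadata": {"path": "."}})
    except DeadKernelError:
        # The kernel process died mid-run; cells executed before the crash have
        # already been written back into `nb` (an assumption based on the
        # in-place preprocess_cell loop visible in the traceback above).
        pass
    nbformat.write(nb, "__results__.ipynb")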