
@jonasschneider
Created November 12, 2016 02:37
jonas@31:~$ cat tftest.py
import tensorflow.python.client.device_lib
tensorflow.python.client.device_lib.list_local_devices()
jonas@31:~$ bfboost client -l "172.16.2.26;172.16.1.2" gdb python
GNU gdb (Ubuntu 7.7.1-0ubuntu5~14.04.2) 7.7.1
Copyright (C) 2014 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law. Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-linux-gnu".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>.
Find the GDB manual and other documentation resources online at:
<http://www.gnu.org/software/gdb/documentation/>.
For help, type "help".
Type "apropos word" to search for commands related to "word"...
Reading symbols from python...done.
(gdb) run tftest.py
Starting program: /opt/anaconda/4.2.0/bin/python tftest.py
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
[... 39 "New Thread" lines (LWP 9487-9525) elided ...]
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcublas.so locally
I tensorflow/stream_executor/dso_loader.cc:102] Couldn't open CUDA library libcudnn.so. LD_LIBRARY_PATH: /opt/bitfusionio/lib/x86_64-linux-gnu/bitfusion/lib/nvml:/opt/intel/opencl/lib64:/opt/bitfusionio/lib/x86_64-linux-gnu/bitfusion/lib/cuda:/etc/bitfusionio/icd:/opt/bitfusionio/lib/x86_64-linux-gnu/bitfusion/lib/opencl:
I tensorflow/stream_executor/cuda/cuda_dnn.cc:2259] Unable to load cuDNN DSO
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcufft.so locally
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcuda.so.1 locally
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcurand.so locally
[... 39 "Thread ... exited" lines (LWP 9487-9525) elided ...]
[... 94 "New Thread" lines (LWP 9609-9746) elided ...]
E tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:911] could not open file to read NUMA node: /sys/bus/pci/devices/0000:04:00.0/numa_node
Your kernel may have been built without NUMA support.
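The failing NUMA probe just reads a one-line sysfs file per PCI device. A minimal sketch of the same check (a hypothetical helper for illustration, not TensorFlow's actual code) looks like:

```python
import os

def read_numa_node(pci_addr, sysfs_root="/sys/bus/pci/devices"):
    """Return the NUMA node for a PCI device, or None when the kernel
    exposes no NUMA information (missing file, or the -1 sentinel)."""
    path = os.path.join(sysfs_root, pci_addr, "numa_node")
    try:
        with open(path) as f:
            node = int(f.read().strip())
    except (OSError, ValueError):
        return None  # file absent or unreadable: no NUMA support
    return node if node >= 0 else None
```

On kernels built without NUMA support the file is absent or holds `-1`, which is what triggers the warning above.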
I tensorflow/core/common_runtime/gpu/gpu_init.cc:102] Found device 0 with properties:
name: GeForce GTX TITAN X
major: 5 minor: 2 memoryClockRate (GHz) 1.076
pciBusID 0000:04:00.0
Total memory: 12.00GiB
Free memory: 11.87GiB
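Every "Found device" report in this log has the same fixed shape, so the blocks are easy to scrape. A small parser (an illustrative helper, not part of TensorFlow) might look like:

```python
import re

def parse_device_block(text):
    """Parse one gpu_init.cc device report into a dict of fields."""
    patterns = {
        "name": r"name:\s*(.+)",
        "major": r"major:\s*(\d+)",
        "minor": r"minor:\s*(\d+)",
        "clock_ghz": r"memoryClockRate\s*\(GHz\)\s*([\d.]+)",
        "pci_bus_id": r"pciBusID\s*(\S+)",
        "total_mem": r"Total memory:\s*(\S+)",
        "free_mem": r"Free memory:\s*(\S+)",
    }
    out = {}
    for key, pat in patterns.items():
        m = re.search(pat, text)
        if m:
            out[key] = m.group(1)
    # Normalize the numeric fields.
    for key in ("major", "minor"):
        if key in out:
            out[key] = int(out[key])
    if "clock_ghz" in out:
        out["clock_ghz"] = float(out["clock_ghz"])
    return out
```

Applied to the device-0 block above, this yields compute capability 5.2 (Maxwell) at bus `0000:04:00.0`; the Pascal cards later in the log report capability 6.1.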
[New Thread 0x7ffe04ff9700 (LWP 9760)]
W tensorflow/stream_executor/cuda/cuda_driver.cc:572] creating context when one is currently active; existing: 0x7fc1a4051f20
E tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:911] could not open file to read NUMA node: /sys/bus/pci/devices/0000:05:00.0/numa_node
Your kernel may have been built without NUMA support.
I tensorflow/core/common_runtime/gpu/gpu_init.cc:102] Found device 1 with properties:
name: GeForce GTX TITAN X
major: 5 minor: 2 memoryClockRate (GHz) 1.076
pciBusID 0000:05:00.0
Total memory: 12.00GiB
Free memory: 11.87GiB
[New Thread 0x7ffde7fff700 (LWP 9761)]
W tensorflow/stream_executor/cuda/cuda_driver.cc:572] creating context when one is currently active; existing: 0x7fc1a4466dd0
E tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:911] could not open file to read NUMA node: /sys/bus/pci/devices/0000:06:00.0/numa_node
Your kernel may have been built without NUMA support.
I tensorflow/core/common_runtime/gpu/gpu_init.cc:102] Found device 2 with properties:
name: GeForce GTX TITAN X
major: 5 minor: 2 memoryClockRate (GHz) 1.076
pciBusID 0000:06:00.0
Total memory: 12.00GiB
Free memory: 11.87GiB
[New Thread 0x7ffde77fe700 (LWP 9762)]
W tensorflow/stream_executor/cuda/cuda_driver.cc:572] creating context when one is currently active; existing: 0x7fc1a48e1540
I tensorflow/core/common_runtime/gpu/gpu_init.cc:102] Found device 3 with properties:
name: GeForce GTX TITAN X
major: 5 minor: 2 memoryClockRate (GHz) 1.076
pciBusID 0000:07:00.0
Total memory: 12.00GiB
Free memory: 11.87GiB
[New Thread 0x7ffde6ffd700 (LWP 9763)]
W tensorflow/stream_executor/cuda/cuda_driver.cc:572] creating context when one is currently active; existing: 0x7fc1a4d5fd10
I tensorflow/core/common_runtime/gpu/gpu_init.cc:102] Found device 4 with properties:
name: GeForce GTX TITAN X
major: 5 minor: 2 memoryClockRate (GHz) 1.076
pciBusID 0000:0a:00.0
Total memory: 12.00GiB
Free memory: 11.87GiB
[New Thread 0x7ffde67fc700 (LWP 9764)]
W tensorflow/stream_executor/cuda/cuda_driver.cc:572] creating context when one is currently active; existing: 0x7fc1a51e1fb0
E tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:911] could not open file to read NUMA node: /sys/bus/pci/devices/0000:0b:00.0/numa_node
Your kernel may have been built without NUMA support.
I tensorflow/core/common_runtime/gpu/gpu_init.cc:102] Found device 5 with properties:
name: GeForce GTX TITAN X
major: 5 minor: 2 memoryClockRate (GHz) 1.076
pciBusID 0000:0b:00.0
Total memory: 12.00GiB
Free memory: 11.87GiB
[New Thread 0x7ffde5ffb700 (LWP 9765)]
W tensorflow/stream_executor/cuda/cuda_driver.cc:572] creating context when one is currently active; existing: 0x7fc1a56681c0
E tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:911] could not open file to read NUMA node: /sys/bus/pci/devices/0000:0c:00.0/numa_node
Your kernel may have been built without NUMA support.
I tensorflow/core/common_runtime/gpu/gpu_init.cc:102] Found device 6 with properties:
name: GeForce GTX TITAN X
major: 5 minor: 2 memoryClockRate (GHz) 1.076
pciBusID 0000:0c:00.0
Total memory: 12.00GiB
Free memory: 11.87GiB
[New Thread 0x7ffde57fa700 (LWP 9777)]
W tensorflow/stream_executor/cuda/cuda_driver.cc:572] creating context when one is currently active; existing: 0x7fc1a5af1fd0
E tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:911] could not open file to read NUMA node: /sys/bus/pci/devices/0000:0d:00.0/numa_node
Your kernel may have been built without NUMA support.
I tensorflow/core/common_runtime/gpu/gpu_init.cc:102] Found device 7 with properties:
name: GeForce GTX TITAN X
major: 5 minor: 2 memoryClockRate (GHz) 1.076
pciBusID 0000:0d:00.0
Total memory: 12.00GiB
Free memory: 11.87GiB
[New Thread 0x7ffde4ff9700 (LWP 9799)]
W tensorflow/stream_executor/cuda/cuda_driver.cc:572] creating context when one is currently active; existing: 0x7fc1a5f7f440
E tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:911] could not open file to read NUMA node: /sys/bus/pci/devices/0000:04:00.0/numa_node
Your kernel may have been built without NUMA support.
I tensorflow/core/common_runtime/gpu/gpu_init.cc:102] Found device 8 with properties:
name: TITAN X (Pascal)
major: 6 minor: 1 memoryClockRate (GHz) 1.531
pciBusID 0000:04:00.0
Total memory: 11.90GiB
Free memory: 11.76GiB
[New Thread 0x7ffdc3fff700 (LWP 9801)]
W tensorflow/stream_executor/cuda/cuda_driver.cc:572] creating context when one is currently active; existing: 0x7f2b500310d0
E tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:911] could not open file to read NUMA node: /sys/bus/pci/devices/0000:05:00.0/numa_node
Your kernel may have been built without NUMA support.
I tensorflow/core/common_runtime/gpu/gpu_init.cc:102] Found device 9 with properties:
name: TITAN X (Pascal)
major: 6 minor: 1 memoryClockRate (GHz) 1.531
pciBusID 0000:05:00.0
Total memory: 11.90GiB
Free memory: 11.76GiB
[New Thread 0x7ffdc37fe700 (LWP 9802)]
W tensorflow/stream_executor/cuda/cuda_driver.cc:572] creating context when one is currently active; existing: 0x7f2b5042e8f0
E tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:911] could not open file to read NUMA node: /sys/bus/pci/devices/0000:06:00.0/numa_node
Your kernel may have been built without NUMA support.
I tensorflow/core/common_runtime/gpu/gpu_init.cc:102] Found device 10 with properties:
name: TITAN X (Pascal)
major: 6 minor: 1 memoryClockRate (GHz) 1.531
pciBusID 0000:06:00.0
Total memory: 11.90GiB
Free memory: 11.76GiB
[New Thread 0x7ffdc2ffd700 (LWP 9804)]
W tensorflow/stream_executor/cuda/cuda_driver.cc:572] creating context when one is currently active; existing: 0x7f2b50a53750
I tensorflow/core/common_runtime/gpu/gpu_init.cc:102] Found device 11 with properties:
name: TITAN X (Pascal)
major: 6 minor: 1 memoryClockRate (GHz) 1.531
pciBusID 0000:07:00.0
Total memory: 11.90GiB
Free memory: 11.76GiB
[New Thread 0x7ffdc27fc700 (LWP 9827)]
W tensorflow/stream_executor/cuda/cuda_driver.cc:572] creating context when one is currently active; existing: 0x7f2b5107c950
I tensorflow/core/common_runtime/gpu/gpu_init.cc:102] Found device 12 with properties:
name: TITAN X (Pascal)
major: 6 minor: 1 memoryClockRate (GHz) 1.531
pciBusID 0000:0a:00.0
Total memory: 11.90GiB
Free memory: 11.76GiB
[New Thread 0x7ffdc1ffb700 (LWP 9840)]
W tensorflow/stream_executor/cuda/cuda_driver.cc:572] creating context when one is currently active; existing: 0x7f2b516a9690
E tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:911] could not open file to read NUMA node: /sys/bus/pci/devices/0000:0b:00.0/numa_node
Your kernel may have been built without NUMA support.
I tensorflow/core/common_runtime/gpu/gpu_init.cc:102] Found device 13 with properties:
name: TITAN X (Pascal)
major: 6 minor: 1 memoryClockRate (GHz) 1.531
pciBusID 0000:0b:00.0
Total memory: 11.90GiB
Free memory: 11.76GiB
[New Thread 0x7ffdc17fa700 (LWP 9842)]
W tensorflow/stream_executor/cuda/cuda_driver.cc:572] creating context when one is currently active; existing: 0x7f2b51cd9f00
E tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:911] could not open file to read NUMA node: /sys/bus/pci/devices/0000:0c:00.0/numa_node
Your kernel may have been built without NUMA support.
I tensorflow/core/common_runtime/gpu/gpu_init.cc:102] Found device 14 with properties:
name: TITAN X (Pascal)
major: 6 minor: 1 memoryClockRate (GHz) 1.531
pciBusID 0000:0c:00.0
Total memory: 11.90GiB
Free memory: 11.76GiB
[New Thread 0x7ffdc0ff9700 (LWP 9854)]
W tensorflow/stream_executor/cuda/cuda_driver.cc:572] creating context when one is currently active; existing: 0x7f2b5230e680
E tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:911] could not open file to read NUMA node: /sys/bus/pci/devices/0000:0d:00.0/numa_node
Your kernel may have been built without NUMA support.
I tensorflow/core/common_runtime/gpu/gpu_init.cc:102] Found device 15 with properties:
name: TITAN X (Pascal)
major: 6 minor: 1 memoryClockRate (GHz) 1.531
pciBusID 0000:0d:00.0
Total memory: 11.90GiB
Free memory: 11.76GiB
I tensorflow/core/common_runtime/gpu/gpu_init.cc:59] cannot enable peer access from device ordinal 0 to device ordinal 8
[... 127 more "cannot enable peer access" messages elided: one for every ordered pair between devices 0-7 and devices 8-15 ...]
I tensorflow/core/common_runtime/gpu/gpu_init.cc:126] DMA: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
I tensorflow/core/common_runtime/gpu/gpu_init.cc:136] 0: Y Y Y Y Y Y Y Y N N N N N N N N
I tensorflow/core/common_runtime/gpu/gpu_init.cc:136] 1: Y Y Y Y Y Y Y Y N N N N N N N N
I tensorflow/core/common_runtime/gpu/gpu_init.cc:136] 2: Y Y Y Y Y Y Y Y N N N N N N N N
I tensorflow/core/common_runtime/gpu/gpu_init.cc:136] 3: Y Y Y Y Y Y Y Y N N N N N N N N
I tensorflow/core/common_runtime/gpu/gpu_init.cc:136] 4: Y Y Y Y Y Y Y Y N N N N N N N N
I tensorflow/core/common_runtime/gpu/gpu_init.cc:136] 5: Y Y Y Y Y Y Y Y N N N N N N N N
I tensorflow/core/common_runtime/gpu/gpu_init.cc:136] 6: Y Y Y Y Y Y Y Y N N N N N N N N
I tensorflow/core/common_runtime/gpu/gpu_init.cc:136] 7: Y Y Y Y Y Y Y Y N N N N N N N N
I tensorflow/core/common_runtime/gpu/gpu_init.cc:136] 8: N N N N N N N N Y Y Y Y Y Y Y Y
I tensorflow/core/common_runtime/gpu/gpu_init.cc:136] 9: N N N N N N N N Y Y Y Y Y Y Y Y
I tensorflow/core/common_runtime/gpu/gpu_init.cc:136] 10: N N N N N N N N Y Y Y Y Y Y Y Y
I tensorflow/core/common_runtime/gpu/gpu_init.cc:136] 11: N N N N N N N N Y Y Y Y Y Y Y Y
I tensorflow/core/common_runtime/gpu/gpu_init.cc:136] 12: N N N N N N N N Y Y Y Y Y Y Y Y
I tensorflow/core/common_runtime/gpu/gpu_init.cc:136] 13: N N N N N N N N Y Y Y Y Y Y Y Y
I tensorflow/core/common_runtime/gpu/gpu_init.cc:136] 14: N N N N N N N N Y Y Y Y Y Y Y Y
I tensorflow/core/common_runtime/gpu/gpu_init.cc:136] 15: N N N N N N N N Y Y Y Y Y Y Y Y
I tensorflow/core/common_runtime/gpu/gpu_device.cc:838] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX TITAN X, pci bus id: 0000:04:00.0)
I tensorflow/core/common_runtime/gpu/gpu_device.cc:838] Creating TensorFlow device (/gpu:1) -> (device: 1, name: GeForce GTX TITAN X, pci bus id: 0000:05:00.0)
I tensorflow/core/common_runtime/gpu/gpu_device.cc:838] Creating TensorFlow device (/gpu:2) -> (device: 2, name: GeForce GTX TITAN X, pci bus id: 0000:06:00.0)
I tensorflow/core/common_runtime/gpu/gpu_device.cc:838] Creating TensorFlow device (/gpu:3) -> (device: 3, name: GeForce GTX TITAN X, pci bus id: 0000:07:00.0)
I tensorflow/core/common_runtime/gpu/gpu_device.cc:838] Creating TensorFlow device (/gpu:4) -> (device: 4, name: GeForce GTX TITAN X, pci bus id: 0000:0a:00.0)
I tensorflow/core/common_runtime/gpu/gpu_device.cc:838] Creating TensorFlow device (/gpu:5) -> (device: 5, name: GeForce GTX TITAN X, pci bus id: 0000:0b:00.0)
I tensorflow/core/common_runtime/gpu/gpu_device.cc:838] Creating TensorFlow device (/gpu:6) -> (device: 6, name: GeForce GTX TITAN X, pci bus id: 0000:0c:00.0)
I tensorflow/core/common_runtime/gpu/gpu_device.cc:838] Creating TensorFlow device (/gpu:7) -> (device: 7, name: GeForce GTX TITAN X, pci bus id: 0000:0d:00.0)
I tensorflow/core/common_runtime/gpu/gpu_device.cc:838] Creating TensorFlow device (/gpu:8) -> (device: 8, name: TITAN X (Pascal), pci bus id: 0000:04:00.0)
I tensorflow/core/common_runtime/gpu/gpu_device.cc:838] Creating TensorFlow device (/gpu:9) -> (device: 9, name: TITAN X (Pascal), pci bus id: 0000:05:00.0)
I tensorflow/core/common_runtime/gpu/gpu_device.cc:838] Creating TensorFlow device (/gpu:10) -> (device: 10, name: TITAN X (Pascal), pci bus id: 0000:06:00.0)
I tensorflow/core/common_runtime/gpu/gpu_device.cc:838] Creating TensorFlow device (/gpu:11) -> (device: 11, name: TITAN X (Pascal), pci bus id: 0000:07:00.0)
I tensorflow/core/common_runtime/gpu/gpu_device.cc:838] Creating TensorFlow device (/gpu:12) -> (device: 12, name: TITAN X (Pascal), pci bus id: 0000:0a:00.0)
I tensorflow/core/common_runtime/gpu/gpu_device.cc:838] Creating TensorFlow device (/gpu:13) -> (device: 13, name: TITAN X (Pascal), pci bus id: 0000:0b:00.0)
I tensorflow/core/common_runtime/gpu/gpu_device.cc:838] Creating TensorFlow device (/gpu:14) -> (device: 14, name: TITAN X (Pascal), pci bus id: 0000:0c:00.0)
I tensorflow/core/common_runtime/gpu/gpu_device.cc:838] Creating TensorFlow device (/gpu:15) -> (device: 15, name: TITAN X (Pascal), pci bus id: 0000:0d:00.0)
E tensorflow/core/common_runtime/gpu/gpu_device.cc:636] Could not identify NUMA node of /gpu:0, defaulting to 0. Your kernel may not have been built with NUMA support.
Program received signal SIGSEGV, Segmentation fault.
0x00007fff7fe8d063 in tensorflow::ProcessState::GetGPUAllocator(tensorflow::GPUOptions const&, int, unsigned long) ()
from /opt/anaconda/4.2.0/lib/python3.5/site-packages/tensorflow/python/_pywrap_tensorflow.so
(gdb) bt
#0 0x00007fff7fe8d063 in tensorflow::ProcessState::GetGPUAllocator(tensorflow::GPUOptions const&, int, unsigned long) ()
from /opt/anaconda/4.2.0/lib/python3.5/site-packages/tensorflow/python/_pywrap_tensorflow.so
#1 0x00007fff7fe81553 in tensorflow::BaseGPUDeviceFactory::CreateGPUDevice(tensorflow::SessionOptions const&, std::string const&, int) ()
from /opt/anaconda/4.2.0/lib/python3.5/site-packages/tensorflow/python/_pywrap_tensorflow.so
#2 0x00007fff7fe8232b in tensorflow::BaseGPUDeviceFactory::CreateDevices(tensorflow::SessionOptions const&, std::string const&, std::vector<tensorflow::Device*, std::allocator<tensorflow::Device*> >*) ()
from /opt/anaconda/4.2.0/lib/python3.5/site-packages/tensorflow/python/_pywrap_tensorflow.so
#3 0x00007fff7ff2d226 in tensorflow::DeviceFactory::AddDevices(tensorflow::SessionOptions const&, std::string const&, std::vector<tensorflow::Device*, std::allocator<tensorflow::Device*> >*) () from /opt/anaconda/4.2.0/lib/python3.5/site-packages/tensorflow/python/_pywrap_tensorflow.so
#4 0x00007fff7ec0ec3b in _wrap_DeviceFactory_AddDevices ()
from /opt/anaconda/4.2.0/lib/python3.5/site-packages/tensorflow/python/_pywrap_tensorflow.so
#5 0x00007ffff79a65e9 in PyCFunction_Call (func=0x7fff8eeb67e0, args=0x7ffff7f97048, kwds=<optimized out>) at Objects/methodobject.c:109
#6 0x00007ffff7a2dbd5 in call_function (oparg=<optimized out>, pp_stack=0x7fffffffdc88) at Python/ceval.c:4705
#7 PyEval_EvalFrameEx (f=<optimized out>, throwflag=<optimized out>) at Python/ceval.c:3236
#8 0x00007ffff7a2eb49 in _PyEval_EvalCodeWithName (_co=<optimized out>, globals=<optimized out>, locals=<optimized out>,
args=<optimized out>, argcount=0, kws=0x7ffff7f8f9a8, kwcount=0, defs=0x0, defcount=0, kwdefs=0x0, closure=0x0, name=0x7ffff64b25d0,
qualname=0x7ffff64b25d0) at Python/ceval.c:4018
#9 0x00007ffff7a2ddf5 in fast_function (nk=<optimized out>, na=0, n=<optimized out>, pp_stack=0x7fffffffdea8, func=0x7fff8f8caa60)
at Python/ceval.c:4813
#10 call_function (oparg=<optimized out>, pp_stack=0x7fffffffdea8) at Python/ceval.c:4730
#11 PyEval_EvalFrameEx (f=<optimized out>, throwflag=<optimized out>) at Python/ceval.c:3236
#12 0x00007ffff7a2eb49 in _PyEval_EvalCodeWithName (_co=<optimized out>, globals=<optimized out>, locals=<optimized out>,
args=<optimized out>, argcount=0, kws=0x0, kwcount=0, defs=0x0, defcount=0, kwdefs=0x0, closure=0x0, name=0x0, qualname=0x0)
at Python/ceval.c:4018
#13 0x00007ffff7a2ecd8 in PyEval_EvalCodeEx (_co=<optimized out>, globals=<optimized out>, locals=<optimized out>, args=<optimized out>,
argcount=<optimized out>, kws=<optimized out>, kwcount=0, defs=0x0, defcount=0, kwdefs=0x0, closure=0x0) at Python/ceval.c:4039
#14 0x00007ffff7a2ed1b in PyEval_EvalCode (co=<optimized out>, globals=<optimized out>, locals=<optimized out>) at Python/ceval.c:777
#15 0x00007ffff7a54020 in run_mod (arena=0x69d570, flags=0x7fffffffe1f0, locals=0x7ffff7f44248, globals=0x7ffff7f44248,
filename=0x7ffff64afb30, mod=0x70e068) at Python/pythonrun.c:976
#16 PyRun_FileExFlags (fp=0x6f8370, filename_str=<optimized out>, start=<optimized out>, globals=0x7ffff7f44248, locals=0x7ffff7f44248,
closeit=<optimized out>, flags=0x7fffffffe1f0) at Python/pythonrun.c:929
#17 0x00007ffff7a55623 in PyRun_SimpleFileExFlags (fp=0x6f8370, filename=<optimized out>, closeit=1, flags=0x7fffffffe1f0)
at Python/pythonrun.c:396
#18 0x00007ffff7a708c7 in run_file (p_cf=0x7fffffffe1f0, filename=0x6032c0 L"tftest.py", fp=0x6f8370) at Modules/main.c:318
#19 Py_Main (argc=<optimized out>, argv=<optimized out>) at Modules/main.c:769
#20 0x0000000000400add in main (argc=2, argv=0x7fffffffe368) at ./Programs/python.c:65
(gdb) q
A debugging session is active.
Inferior 1 [process 9483] will be killed.
Quit anyway? (y or n) y
jonas@31:~$