@salehjg
Created June 19, 2018 15:10
Simple test script for TensorFlow GPU with Python 3.x
import tensorflow as tf

mat1 = [0, 1, 2]
mat2 = [2, 1, 0]

# Build the graph with the ops pinned to the first GPU.
with tf.device('/gpu:0'):
    tn1 = tf.placeholder(dtype=tf.int32, shape=[3])
    tn2 = tf.placeholder(dtype=tf.int32, shape=[3])
    tn3 = tn1 + tn2

# Feed the two vectors and run the element-wise addition.
with tf.Session() as sess:
    result = sess.run(tn3, {tn1: mat1, tn2: mat2})
    print(result)  # expected: [2 2 2]
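
To confirm the op really lands on the GPU rather than silently falling back to the CPU, TF 1.x sessions can log device placement. A minimal sketch (not part of the original gist; same graph as above, plus the standard log_device_placement session option):

import tensorflow as tf

mat1 = [0, 1, 2]
mat2 = [2, 1, 0]

with tf.device('/gpu:0'):
    tn1 = tf.placeholder(dtype=tf.int32, shape=[3])
    tn2 = tf.placeholder(dtype=tf.int32, shape=[3])
    tn3 = tn1 + tn2

# log_device_placement prints the device each op is assigned to,
# so the console output shows whether the add ran on /gpu:0.
config = tf.ConfigProto(log_device_placement=True)
with tf.Session(config=config) as sess:
    print(sess.run(tn3, {tn1: mat1, tn2: mat2}))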
salehjg commented Jul 10, 2019

For the JIT (XLA) version, use:

import tensorflow as tf

mat1 = [0, 1, 2]
mat2 = [2, 1, 0]

# Place the ops on the XLA GPU device so they are JIT-compiled by XLA.
with tf.device('device:XLA_GPU:0'):
    tn1 = tf.placeholder(dtype=tf.int32, shape=[3])
    tn2 = tf.placeholder(dtype=tf.int32, shape=[3])
    tn3 = tn1 + tn2

with tf.Session() as sess:
    result = sess.run(tn3, {tn1: mat1, tn2: mat2})
    print(result)
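
An alternative (a sketch, not from the gist): instead of pinning ops to the XLA_GPU device, TF 1.x can enable XLA JIT compilation globally through the session config via the global_jit_level option:

import tensorflow as tf

mat1 = [0, 1, 2]
mat2 = [2, 1, 0]

tn1 = tf.placeholder(dtype=tf.int32, shape=[3])
tn2 = tf.placeholder(dtype=tf.int32, shape=[3])
tn3 = tn1 + tn2

# Turn on XLA JIT for the whole graph; no explicit device pinning needed.
config = tf.ConfigProto()
config.graph_options.optimizer_options.global_jit_level = tf.OptimizerOptions.ON_1

with tf.Session(config=config) as sess:
    print(sess.run(tn3, {tn1: mat1, tn2: mat2}))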
