@yaroslavvb
Created September 16, 2016 23:08
Example of restricting part of graph to run on single core
# try running cpu intensive test on two devices
import tensorflow as tf
import time

def matmul_op():
    """Multiply two matrices together"""
    n = 2000
    a = tf.ones((n, n), dtype=tf.float32)
    return tf.matmul(a, a)/n

slow_op = matmul_op

with tf.device("/cpu:0"):
    one = slow_op()
with tf.device("/cpu:1"):
    another_one = slow_op()

config = tf.ConfigProto(device_count={"CPU": 2},
                        inter_op_parallelism_threads=2,
                        intra_op_parallelism_threads=1)
config.graph_options.optimizer_options.opt_level = -1
sess = tf.Session(config=config)

two = one + another_one

# pre-warm the kernels
sess.run(one)

start = time.time()
sess.run(one)
elapsed_time = time.time() - start
print("Single op: %2.4f sec"%(elapsed_time))

start = time.time()
sess.run(two)
elapsed_time2 = time.time() - start
print("Two ops in parallel: %.2f sec (%.2f times slower)"%(elapsed_time2,
                                                           elapsed_time2/elapsed_time))
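
To double-check that each matmul really lands on its intended device, one option (not part of the original gist) is to enable device placement logging in the same ConfigProto; the snippet below is a minimal sketch that reuses the graph built above.

# Sketch: same config as above, plus placement logging (assumes the graph above is already built)
config = tf.ConfigProto(device_count={"CPU": 2},
                        inter_op_parallelism_threads=2,
                        intra_op_parallelism_threads=1,
                        log_device_placement=True)  # print each op's assigned device to stderr
sess = tf.Session(config=config)
sess.run(two)  # the log should show the two MatMul ops placed on cpu:0 and cpu:1 respectively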
@LujunWeng

Hi yaroslavvb,
Why do we need to pre-warm the kernels? I assume it is to get a more accurate measurement; if so, what is the reason behind it?
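
(For context, my understanding is that the first session.run of an op pays one-time costs such as graph setup, memory allocation and kernel initialization, so timing only later runs gives a steadier number. A rough sketch of the difference, not from the gist and using a throwaway graph:)

import time
import tensorflow as tf

n = 2000
a = tf.ones((n, n), dtype=tf.float32)
prod = tf.matmul(a, a) / n

sess = tf.Session()

start = time.time()
sess.run(prod)                                  # cold run: includes one-time setup costs
print("cold run: %.4f sec" % (time.time() - start))

start = time.time()
sess.run(prod)                                  # warm run: closer to the steady-state cost
print("warm run: %.4f sec" % (time.time() - start))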

@fatmas1982

Does using multiple sessions mean multi-threading?
Does each session run in a different thread?
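
(For context, this gist uses a single session; the parallelism comes from inter_op_parallelism_threads=2, which lets the runtime execute the two independent matmuls on separate threads inside one sess.run(two) call. A separate pattern is to share one session across several Python threads, since Session.run can be called concurrently; a rough illustrative sketch, not from the gist:)

import threading
import tensorflow as tf

n = 2000
a = tf.ones((n, n), dtype=tf.float32)
prod = tf.matmul(a, a) / n

sess = tf.Session()
sess.run(prod)  # warm up once

def worker():
    sess.run(prod)  # Session.run is safe to call from multiple Python threads

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()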
