Simulation of a method for increasing the precision of time.time() on Windows
import time
import numpy

# reference offset between the wall clock and the high-resolution clock
clock1 = time.clock()
time1 = time.time()
true_offset = time1 - clock1

offsets = []
waits = []

def winTime():  # simulate the ~1 ms precision of time.time() on Windows
    return int(time.time() * 1000) / 1000.0

# collect a bunch of candidate offsets
for i in range(1000):
    last = winTime()
    clockTime1 = time.clock()
    this = last
    while (this - last) == 0:  # spin until the coarse clock ticks over
        this = winTime()
    clockTime2 = time.clock()
    waits.append(clockTime2 - clockTime1)  # how long we spun before the tick
    offsets.append(this - clockTime2)

waits = numpy.array(waits)
offsets = numpy.array(offsets)
# best estimate: the offset whose spin lasted closest to one full 1 ms tick,
# meaning clockTime2 was sampled almost exactly at a tick boundary
offset_estimate = offsets[numpy.argmin(numpy.absolute(waits - 0.001))]
print "%0.30f" % (offset_estimate - true_offset)
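The snippet above is Python 2, and `time.clock()` was removed in Python 3.8. A sketch of the same idea in Python 3, using `time.perf_counter()` as the high-resolution clock (names like `win_time` and `estimate_offset` are illustrative, not from the gist):

```python
import time
import numpy as np

def win_time():
    # simulate a wall clock with only 1 ms precision
    return int(time.time() * 1000) / 1000.0

def estimate_offset(n=1000):
    offsets = []
    waits = []
    for _ in range(n):
        last = win_time()
        t1 = time.perf_counter()
        this = last
        while this == last:        # spin until the coarse clock ticks over
            this = win_time()
        t2 = time.perf_counter()
        waits.append(t2 - t1)      # how long we spun before the tick
        offsets.append(this - t2)
    waits = np.array(waits)
    offsets = np.array(offsets)
    # pick the offset whose spin spanned closest to a full 1 ms tick
    return offsets[np.argmin(np.abs(waits - 0.001))]

true_offset = time.time() - time.perf_counter()
print("%0.30f" % (estimate_offset() - true_offset))
```

The printed difference should be on the order of the simulated 1 ms tick or less: a sample taken immediately after a tick boundary pins the coarse clock's value to within one tick of the true wall time.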