
@boris-42
Created September 6, 2014 23:55
Models in DB:

    class TaskResultChunk:      # M iterations per chunk
        task_uuid
        data = Text()

    class TaskChunkIterations:  # M/N iterations per chunk
        task_result_chunk       # reference to the parent TaskResultChunk
        data = Text()
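The two-table layout above can be sketched with plain dataclasses. This is only an illustrative in-memory stand-in — the field names follow the sketch, and Rally's real DB models would differ:

```python
from dataclasses import dataclass


@dataclass
class TaskResultChunk:
    """One full chunk of iteration results, stored as serialized text."""
    task_uuid: str
    data: str  # serialized list of iteration results


@dataclass
class TaskChunkIterations:
    """A mini-chunk of partial results, linked to its parent result chunk."""
    task_result_chunk: str  # reference to the parent TaskResultChunk
    data: str  # serialized list of partial iteration results
```

The split lets the engine stream small intermediate updates into `TaskChunkIterations` while a chunk is still being filled, then replace them with a single consolidated `TaskResultChunk` row once the chunk is complete.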
benchmark.engine:

    results = []
    full_chunk_size = N
    update_chunk_start = 0

    while True:
        if result_queue:
            result = result_queue.popleft()
            results.append(result)
            if result.make_update:
                # This flag should be set by the runner, because only the
                # runner can dynamically decide when an update should be made.
                if len(results) != full_chunk_size:
                    # Probably not the best name; this adds a new record
                    # to TaskChunkIterations.
                    task.append_result_update_chunk(
                        key, {"raw": results[update_chunk_start:]})
                    update_chunk_start = len(results)
            if len(results) == full_chunk_size:
                # Delete all mini chunks that we have in TaskChunkIterations
                # (we don't need them anymore).
                task.flush_result_updates()
                # Put the full big chunk into TaskResultChunk.
                task.append_results_chunk(key, {"raw": results,
                                                "scenario_duration": self.duration})
                # Start accumulating the next chunk.
                results = []
                update_chunk_start = 0
        elif is_done.isSet():
            break
        else:
            time.sleep(0.1)
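The flow above can be traced with a self-contained simulation. `FakeTask` and `consume` below are hypothetical stand-ins, not Rally API: the fake task just records what would be written to the two tables, and a pre-filled deque replaces the live `result_queue` (the `is_done`/sleep polling is dropped, and `scenario_duration` is omitted for brevity):

```python
import collections
import json


class FakeTask:
    """Hypothetical stand-in for the task object used in the sketch."""

    def __init__(self):
        self.update_chunks = []  # rows that would go to TaskChunkIterations
        self.result_chunks = []  # rows that would go to TaskResultChunk

    def append_result_update_chunk(self, key, data):
        # Add one mini-chunk record (partial results).
        self.update_chunks.append((key, json.dumps(data)))

    def flush_result_updates(self):
        # Drop all mini-chunks once the full chunk is consolidated.
        self.update_chunks.clear()

    def append_results_chunk(self, key, data):
        # Store one full chunk of results.
        self.result_chunks.append((key, json.dumps(data)))


def consume(result_queue, task, key, full_chunk_size):
    """Drain a pre-filled queue using the chunking logic from the sketch.

    Each queued result is a dict; its "make_update" flag plays the role
    of result.make_update set by the runner.
    """
    results = []
    update_chunk_start = 0
    while result_queue:
        result = result_queue.popleft()
        results.append(result)
        if result["make_update"]:
            if len(results) != full_chunk_size:
                # Write a mini-chunk with the results added since the last update.
                task.append_result_update_chunk(
                    key, {"raw": results[update_chunk_start:]})
                update_chunk_start = len(results)
        if len(results) == full_chunk_size:
            # Consolidate: drop mini-chunks, write the full chunk, reset.
            task.flush_result_updates()
            task.append_results_chunk(key, {"raw": results})
            results = []
            update_chunk_start = 0
    return task
```

With eight results, a chunk size of four, and the runner flagging an update mid-chunk, each full chunk ends up as one consolidated row while the intermediate mini-chunks are flushed away.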