@josejuan
Created February 6, 2014 22:07
Anton, I'm not sure what *exact* problem you have solved.
"Deterministic logging in presence of parallelism is not hard"
Your approach, for N parallel processes, uses about (N - 1) * sizeEachLog of memory (or N * sizeEachLog if the first log is not consumed as soon as the workers start).
The sequential version uses no memory.
I don't think that is the same thing.
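To make that buffering cost concrete, here is a minimal sketch (not your code; the worker count and the hypothetical `workerLog` size are made up): the workers run in parallel, but to keep the log deterministic their output is emitted in worker order, so the logs of the workers that are not currently printing are held in memory, roughly (N - 1) * sizeEachLog.

```haskell
import Control.Parallel.Strategies (parMap, rdeepseq)

-- Hypothetical worker: its whole log, about "sizeEachLog" lines.
workerLog :: Int -> [String]
workerLog i = [ "worker " ++ show i ++ ": step " ++ show s | s <- [1 .. 1000 :: Int] ]

-- Deterministic logging: the 8 workers are evaluated in parallel
-- (compile with -threaded, run with +RTS -N), but the lines are printed
-- strictly in worker order, so while worker 0's log is being printed the
-- logs of workers 1..7 are being computed and kept in memory.
main :: IO ()
main = mapM_ putStrLn (concat (parMap rdeepseq workerLog [0 .. 7 :: Int]))
```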
Moreover, suppose an ImageConverter server: you send one image to the server, and it replies with a complex *per-pixel* image filter applied.
If the server processes inputs (images) sequentially, no memory is needed: each processed pixel is sent immediately, but, of course, only one pixel is computed at a time.
If the server processes inputs (images) asynchronously (nondeterministically), no memory is needed either, and many pixels are computed in parallel (one channel per input is all that is required).
Using your approach, the server can process many inputs at a time (FIFO), but if determinism is needed you can only send the client the *oldest* pixel; if that pixel is very expensive to compute, your stream grows and grows while it waits (you can put restrictions on the FIFO, e.g. a max length, but that is another story).
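To illustrate that head-of-line blocking, here is a minimal sketch (all names are hypothetical: `filterPixel`, the 100-pixel "image", and the assumption that pixel 0 is the expensive one): the pixels are computed in parallel and finish in a nondeterministic order, but since only the oldest pixel may be emitted, everything that finishes while pixel 0 is still running piles up in the reorder buffer.

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.Chan (newChan, readChan, writeChan)
import Control.Exception (evaluate)
import Control.Monad (forM_)
import qualified Data.Map.Strict as M

-- Hypothetical per-pixel filter; pixel 0 is assumed to be the expensive one.
filterPixel :: Int -> Int
filterPixel 0 = sum [1 .. 50000000]
filterPixel i = i * 2

main :: IO ()
main = do
  done <- newChan
  -- Compute every pixel in parallel; results arrive in nondeterministic order.
  forM_ [0 .. 99 :: Int] $ \i -> forkIO $ do
    v <- evaluate (filterPixel i)      -- force the work in the worker thread
    writeChan done (i, v)
  -- Deterministic emission: only the *oldest* pending pixel may be sent,
  -- so every pixel that finishes earlier sits in the buffer, which grows
  -- for as long as pixel 0 is still being computed.
  let loop next buf
        | next == 100 = return ()
        | otherwise = case M.lookup next buf of
            Just v  -> print (next, v) >> loop (next + 1) (M.delete next buf)
            Nothing -> do (i, v) <- readChan done
                          loop next (M.insert i v buf)
  loop 0 M.empty
```

Compiled with -threaded and run with +RTS -N, the Map (the FIFO in your terms) will typically hold most of the other 99 results before pixel 0 can finally be printed; with an unbounded input stream, that is exactly the memory growth I mean.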
Your code *could be useful*, yes, but it does not solve the deterministic vs. nondeterministic parallelization problem (in my opinion, of course).
In conclusion, I hope you are now convinced that:
* "Deterministic logging in presence of parallelism is not hard"
is false, in general.
Great job, anyway.
Regards!