@rwdaigle
Last active January 22, 2016 18:23
Parallel processing
defmodule Worker do
  # Simulate one small unit of CPU-bound work: split a fixed string into words,
  # upcase each word, and join them back together. The argument is ignored.
  def work(_num) do
    "Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat."
    |> String.split
    |> Enum.map(&String.upcase/1)
    |> Enum.join
  end
end
iterations = 100_000

# Spawn one Task per job and await each result. Note that Stream.map is lazy,
# so each Task is awaited before the next one is spawned: this measures the
# per-Task overhead on top of the work rather than true concurrency.
{us, :ok} = :timer.tc(fn ->
  1..iterations
  |> Stream.map(&Task.async(Worker, :work, [&1]))
  |> Enum.each(&Task.await/1)
end)

IO.puts "Time executing #{iterations} jobs in parallel: #{us / 1000}ms"

# Run the same jobs one after another in the current process.
{us, :ok} = :timer.tc(fn ->
  1..iterations
  |> Enum.each(&Worker.work/1)
end)

IO.puts "Time executing #{iterations} jobs sequentially: #{us / 1000}ms"
@rwdaigle (Author)

My results:

Task overhead with 10 Tasks: 0.465ms
Task overhead with 1000 Tasks: 9.069ms
Task overhead with 100000 Tasks: 624.897ms
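
These overhead figures look like they came from an earlier revision that timed Task spawn/await on its own. For reference, a minimal sketch (mine, not part of the gist) of how that per-Task overhead could be isolated with a no-op worker:

for n <- [10, 1_000, 100_000] do
  # Spawn n Tasks that do no work, await them all, and report the elapsed
  # time, so only the Task spawn/await cost is measured.
  {us, :ok} = :timer.tc(fn ->
    1..n
    |> Enum.map(fn _ -> Task.async(fn -> :ok end) end)
    |> Enum.each(&Task.await/1)
  end)

  IO.puts "Task overhead with #{n} Tasks: #{us / 1000}ms"
end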

@rwdaigle (Author)

In this version, sequential is indeed faster:

Time executing 100000 jobs in parallel: 7767.314ms
Time executing 100000 jobs sequentially: 5176.606ms
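
That gap is consistent with per-Task overhead dominating when each job is this small, and with the lazy Stream.map pipeline awaiting each Task before spawning the next. One way to let the parallel version pull ahead is to amortize that overhead by splitting the jobs across a handful of longer-lived Tasks; a sketch of that idea (not part of this gist, and assuming Elixir 1.5+ for Enum.chunk_every):

iterations = 100_000
chunks = System.schedulers_online()

# One Task per scheduler, each processing a large slice of the jobs, so the
# spawn/await cost is paid only a few times instead of once per job.
{us, :ok} = :timer.tc(fn ->
  1..iterations
  |> Enum.chunk_every(div(iterations, chunks) + 1)
  |> Enum.map(fn chunk -> Task.async(fn -> Enum.each(chunk, &Worker.work/1) end) end)
  |> Enum.each(&Task.await(&1, :infinity))
end)

IO.puts "Time executing #{iterations} jobs across #{chunks} Tasks: #{us / 1000}ms"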
