A few considerations in case you need to push performance even further (you may not need them, but they're worth knowing):
- Using the `GenServer` flow to read data may become a bottleneck (as reads happen inside the `GenServer` process).
- If you don't care about duplicates, you can set the table type to `duplicate_bag`. This speeds up writes, as ETS doesn't have to check for duplicate keys.
It may also be interesting to read directly from the table instead of going through the `GenServer`. That's possible by creating the table as `public` and `named_table`, giving it `__MODULE__` as its name. In addition, as the table would be mostly read, it can be created with `read_concurrency: true` (which optimizes reads over writes).
```elixir
:ets.new(__MODULE__, [:named_table, :duplicate_bag, :public, read_concurrency: true])
```
This setup allows reading directly from the table in `fetch/2` by referencing it by name, without needing `GenServer.call/2`. The read therefore happens in the calling process. A side effect is that this also reduces the possibility of a table crash (the owner is still the `GenServer`).
```elixir
# Reads straight from the named ETS table, in the calling process.
defp get(slug) do
  case :ets.lookup(__MODULE__, slug) do
    [] -> {:not_found}
    [{_slug, result}] -> {:found, result}
  end
end
```
Note that the table is now public, so any process can read (and write) it; don't keep private data in it.
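Putting the pieces together, here's a minimal sketch of how such a cache module could look. This is an assumption based on the thread, not the actual `LinkCache.Cache` code; in particular it uses a simplified `fetch/1` rather than the `fetch/2` mentioned above, and a hypothetical `put/2` for writes:

```elixir
defmodule LinkCache.Cache do
  use GenServer

  def start_link(opts \\ []) do
    # Register the process under __MODULE__ so writes can be routed to it.
    GenServer.start_link(__MODULE__, :ok, Keyword.put_new(opts, :name, __MODULE__))
  end

  # Reads happen in the calling process, straight from the named table,
  # so they don't serialize through the GenServer.
  def fetch(slug) do
    case :ets.lookup(__MODULE__, slug) do
      [] -> {:not_found}
      [{_slug, result}] -> {:found, result}
    end
  end

  # Writes still go through the GenServer, which owns the table.
  def put(slug, value) do
    GenServer.call(__MODULE__, {:put, slug, value})
  end

  @impl true
  def init(:ok) do
    table =
      :ets.new(__MODULE__, [:named_table, :duplicate_bag, :public, read_concurrency: true])

    {:ok, table}
  end

  @impl true
  def handle_call({:put, slug, value}, _from, table) do
    :ets.insert(table, {slug, value})
    {:reply, :ok, table}
  end
end
```

Keeping the GenServer as the table owner means the table's lifecycle is tied to the supervised process, while readers bypass it entirely.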
@cloud8421 I tried this out and pushed it up to Heroku. Is this what you were thinking, in terms of code change to `LinkCache.Cache`? Same stats (500 concurrent users, free hardware/db, over 60 seconds), and came away with:
Absurdly fast: 2x the requests served and 1/3 the average response time.