@PharkMillups
Created August 4, 2010 19:35
09:43 <vicmargar> what is the recommended way to connect to riak from erlang?
09:43 <seancribbs> vicmargar: use riak-erlang-client
09:44 <seancribbs> which uses the PBC API
09:44 <vicmargar> is that going to be better than pb?
09:44 <seancribbs> that is PB ;)
09:44 <vicmargar> so is it the same at the moment?
09:44 <vicmargar> I'm confused
09:44 <seancribbs> so, let me clarify.
09:45 <seancribbs> there's the "local" client which you can use inside the riak node
09:45 <seancribbs> and then there's the default erlang client which uses PB
09:46 <seancribbs> generally we suggest you deploy your erlang apps in a
separate Erlang VM, so you'd use riak-erlang-client
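The riak-erlang-client setup seancribbs describes can be sketched roughly like this (host, bucket, and key are made-up values; 8087 is the default PB port — check your app.config):

```erlang
%% Connect to Riak over Protocol Buffers from a separate Erlang VM.
%% Assumes riak-erlang-client (riakc) and its deps are on the code path.
{ok, Pid} = riakc_pb_socket:start_link("127.0.0.1", 8087),

%% Store an object, then fetch it back.
Obj = riakc_obj:new(<<"bucket">>, <<"key">>, <<"value">>),
ok = riakc_pb_socket:put(Pid, Obj),
{ok, Fetched} = riakc_pb_socket:get(Pid, <<"bucket">>, <<"key">>),

%% Close the connection when done.
riakc_pb_socket:stop(Pid).
```

Because this goes over plain TCP/PB rather than Erlang distribution, the client VM and the Riak node stay fully decoupled, which is the point made below.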
09:47 <vicmargar> ok, I see
09:47 <vicmargar> we were using an old version of the erlang client
09:48 <vicmargar> before pb appeared
09:48 <seancribbs> ah, yes
09:48 <vicmargar> so I guess pb is now the default, that's why I was confused
09:48 <seancribbs> sorry for that!
09:48 <seancribbs> it's actually better because it decouples
09:49 <seancribbs> you don't have to use Erlang's inter-node communication
09:50 <vicmargar> ok thank you
09:52 <vicmargar> so using PB, (I asked this the other day but I just want to clarify),
we should create a pool of PB connections from our erlang nodes to riak right?

09:53 <seancribbs> mmm maybe.
09:54 <vicmargar> I assume I shouldn't create new connection from each of my erlang processes
09:54 <seancribbs> in general you want to keep the connections around, since the
TCP connection is expensive to setup/teardown
09:55 <seancribbs> whether you need a pool or just a single long-lived process to
coordinate would depend on your app
09:57 <vicmargar> Ok, I see what you mean
09:58 <vicmargar> but if there is a single connection I guess there is a risk
of overloading the gen_server process that keeps the connection
09:58 <seancribbs> vicmargar: yeah, depending on your throughput
09:59 <seancribbs> so you might need a pg2 group, or some other pooling mechanism
09:59 <vicmargar> yes, let's assume it's high
10:00 <vicmargar> what is pg2?
10:00 <seancribbs> http://erldocs.com/R14A/kernel/pg2.html?i=0&search=pg
10:00 <seancribbs> woops, should probably use the R13 version
10:00 <seancribbs> http://erldocs.com/R13B04/kernel/pg2.html?search=pg2&i=0
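A pg2-based pool along the lines seancribbs suggests might look like this (a hypothetical sketch — module and group names are made up; `pg2:get_closest_pid/1` prefers local group members and picks among them at random):

```erlang
%% Hypothetical pg2-backed pool of riak-erlang-client connections.
-module(riak_pool).
-export([init/2, checkout/0]).

-define(GROUP, riak_pb_pool).

%% Open N PB connections to Host and register each in the pg2 group.
init(Host, N) ->
    ok = pg2:create(?GROUP),
    [begin
         {ok, Pid} = riakc_pb_socket:start_link(Host, 8087),
         ok = pg2:join(?GROUP, Pid)
     end || _ <- lists:seq(1, N)],
    ok.

%% Pick a connection for the current request.
checkout() ->
    case pg2:get_closest_pid(?GROUP) of
        Pid when is_pid(Pid) -> {ok, Pid};
        {error, Reason}      -> {error, Reason}
    end.
```

In a real app the workers would live under a supervisor so crashed connections are restarted and rejoin the group; pg2 automatically removes dead pids from the group.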
10:03 <vicmargar> Ok I see, thank you again :)
10:04 <vicmargar> That's probably what we need
10:04 <seancribbs> np. I'm not the Erlang expert around, so let me know if
you have troubles/more questions and I can direct them to the experts
10:06 <vicmargar> I think that's fine for now
10:06 <vicmargar> although I have another question not erlang related

10:07 <vicmargar> I was running some tests the other day
10:08 <vicmargar> using basho_bench, and I'm not sure what I should be looking
at while the tests are running
10:08 <vicmargar> in terms of %cpu, load average and that stuff
10:10 <seancribbs> vicmargar: well, basho_bench only really measures throughput
and latency
10:11 <vicmargar> yes but if I'm running top on the machines I should be able
to get an indication on how hard they are working right?
10:11 <seancribbs> vicmargar: yeah, that would probably be useful. rrdtool or
similar tools might help with that
10:12 <vicmargar> so I was seeing 700% CPU usage with 12 cores but the load
average was just around 1
10:12 <vicmargar> so I'm not sure how to read that
10:12 <seancribbs> vicmargar: that's not too surprising, we're mostly I/O bound
10:12 <seancribbs> the load average is a misleading stat if you don't know what it means
10:13 <vicmargar> that's probably it :)
10:13 <vicmargar> should I be looking at cpu usage only?
10:13 <seancribbs> i'd be looking at iostat
10:15 <seancribbs> also free (tracking RAM usage)
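The tools seancribbs mentions can be watched side by side while basho_bench runs; something like (intervals and flags are just suggestions, and iostat usually comes from the sysstat package):

```shell
# Disk utilization and wait times, sampled every 2 seconds.
iostat -x 2

# RAM and swap usage in MiB.
free -m

# Run queue, swap activity, and block I/O at a glance.
vmstat 2
```

For an I/O-bound system like Riak, high `%util` and `await` in iostat output are usually better overload signals than load average alone.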
10:16 <vicmargar> Ok thanks for all that, I'll keep running tests
10:17 <seancribbs> vicmargar: be sure to ping us with your results -
the initial reports sounded exciting
10:17 <vicmargar> yes it seems good also we were only testing 1/3 of our machines
10:18 <vicmargar> but they will have to run our code as well so it's difficult to
know
10:18 <seancribbs> right. might be good to talk to some of the mochi guys,
they run multiple apps/clusters on the same machines
10:19 <seancribbs> they might be able to give some advice/notes about
what worked and didn't
10:19 <vicmargar> that would probably be helpful, yes