Joseph Blomstedt (jtuple)

  • Facebook
  • Seattle, WA
:: Running Test Package 'delete_test'...
:: Updating schema for 'test'...
:: Done.
:: Indexing documents...
:: Running Solr document(s) '../_files/sample100/solr_add.xml'...
Starting...
{ok, Ring} = riak_core_ring_manager:get_my_ring().
rp(riak_core_ring:all_owners(Ring)).
jtuple / riak_ring.erl
Created August 25, 2011 19:08
Script to print Riak ring ownership information
%% Script to print out Riak ring ownership.
%% Copy/paste into an Erlang shell attached to a Riak node (riak attach).
%% Remember to use Ctrl-D to detach from the shell once done.
fun() ->
    {ok, Ring} = riak_core_ring_manager:get_my_ring(),
    Owners = riak_core_ring:all_members(Ring),
    Indices = riak_core_ring:all_owners(Ring),
    RingSize = length(Indices),
    Names = lists:zip(Owners, lists:seq(1, length(Owners))),
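The preview cuts off mid-function. As an illustration only (not the gist's actual continuation), a minimal sketch of how the remainder might finish the job, printing each partition index with the numeric id of its owner using the Names list built above:

    %% Sketch only: look up each partition owner's numeric id and print it,
    %% then report the ring size.
    [io:format("~36w :: owned by node ~w~n", [Idx, proplists:get_value(Node, Names)])
     || {Idx, Node} <- Indices],
    io:format("Ring size: ~w~n", [RingSize]),
    ok
end().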
jtuple / gist:1202216
Created September 8, 2011 00:00
1.0-cluster-changes
Given that 1.0 prerelease packages are now available, I wanted to briefly mention some changes to Riak's clustering capabilities in 1.0. In particular, there are some subtle semantic differences in the riak-admin commands. More complete docs will be updated in the near future, but I hope a quick email suffices for now.
[nodeB/riak-admin join nodeA] is now strictly one-way: it joins nodeB to the cluster that nodeA is a member of. This is semantically different from pre-1.0 Riak, in which join essentially joined clusters together rather than joining a node to a cluster. As part of this change, the joining node (nodeB in this case) must be a singleton (1-node) cluster.
In pre-1.0, leave and remove were essentially the same operation, with leave just being an alias for 'remove this-node'. This has changed. Leave and remove are now very different operations.
[nodeB/riak-admin leave] is the only safe way to have a node leave the cluster, and it must be executed on the node that you want to remove. In this case, no
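To make the new semantics concrete, a minimal illustration assuming two hypothetical nodes named riak@nodeA and riak@nodeB:

# On nodeB (which must still be a singleton cluster), join nodeA's cluster:
riak-admin join riak@nodeA

# Later, to have nodeB hand off its data and leave, run leave on nodeB itself:
riak-admin leave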
Assume there are two nodes named riak@host1 and riak@host2.
Attach to the Erlang shell on riak@host1: use 'riak console' if the node is down, or 'riak attach' if the node is already up and running.
Update the first line below with the right "other node" name, then copy/paste:
Other = 'riak@host2'.
net_adm:ping(Other).
riak_kv_console:join([atom_to_list(Other)]).
riak:join(Other).
Vagrant::Config.run do |config|
  config.vm.box = "lucid64"
  config.vm.provision :shell, :inline => "rm -f /etc/init.d/sudo /etc/init.d/rsync"
  config.vm.provision :chef_solo do |chef|
    chef.add_recipe("jdb-basho-expect")
  end
  config.vm.define :box1 do |box_config|
    box_config.vm.network("192.168.60.10")
    box_config.vm.host_name = "box1"
require 'fog'

def start_server
  Fog::Compute[:aws].servers.bootstrap(
    :provider => "AWS",
    # EBS-backed
    # :image_id => "ami-63be790a"
    # Ephemeral
    :image_id => "ami-fbbf7892",
    :availability_zone => "us-east-1b",
jtuple / claim_dryrun.erl
Created January 26, 2012 19:19
Perform ring claim dry run, reporting new ring and would-be pending transfers
%% Claim dry-run / simulation code
%%
%% If running outside attached Riak shell, must have Riak libs in codepath and
%% proper cookie / long-form node name. Eg:
%% erl -pa RIAK/deps/*/ebin -setcookie riak -name claim@127.0.0.1
%%
%% Run node/2, passing in the cluster claimant and the number of new nodes to test:
%% Add 1 node: claim_dryrun:node('dev1@127.0.0.1', 1)
%% Add 2 nodes: claim_dryrun:node('dev1@127.0.0.1', 2)
%% ...
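The gist body itself is not shown in this preview. Purely as an illustration of the kind of diff such a dry run computes (this is not the gist's code; it assumes riak_core_ring:add_member/3 and riak_core_gossip:claim_until_balanced/2 are exported by the loaded riak_core, and it runs from a shell attached to the claimant):

%% Illustration only: add a hypothetical new node to a copy of the current ring,
%% let claim rebalance it, then report which partitions would change owner.
{ok, Ring0} = riak_core_ring_manager:get_my_ring().
NewNode = 'dev5@127.0.0.1'.
Ring1 = riak_core_ring:add_member(node(), Ring0, NewNode).
Ring2 = riak_core_gossip:claim_until_balanced(Ring1, NewNode).
Transfers = [{Idx, Old, New} || {{Idx, Old}, {Idx, New}} <-
                 lists:zip(riak_core_ring:all_owners(Ring0),
                           riak_core_ring:all_owners(Ring2)),
             Old =/= New].
rp(Transfers).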
$ ./dev/dev1/bin/riak-admin member_status
================================= Membership ==================================
Status     Ring    Pending    Node
-------------------------------------------------------------------------------
valid     100.0%      --      'dev1@127.0.0.1'
-------------------------------------------------------------------------------
Valid:1 / Leaving:0 / Exiting:0 / Joining:0 / Down:0
$ ./dev/dev2/bin/riak-admin join dev1@127.0.0.1
The 'join' command has been deprecated in favor of the new
jtuple / gist:2713348
Created May 16, 2012 19:40
node replacement working, woo!
Staged changes:
(join) 'dev2@127.0.0.1'
(replace) 'dev3@127.0.0.1' -> 'dev2@127.0.0.1'
Member status after changes:
================================= Membership ==================================
Status     Ring    Pending    Node
-------------------------------------------------------------------------------
leaving    32.8%     0.0%     'dev3@127.0.0.1'
valid      34.4%    34.4%     'dev1@127.0.0.1'
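These staged changes come out of the staged clustering workflow that replaced the deprecated one-shot join. A hedged sketch of the commands that would stage and commit this join-plus-replace on the dev nodes above (riak-admin cluster join/replace/plan/commit; exact output omitted):

# On dev2, stage a join to dev1's cluster:
$ ./dev/dev2/bin/riak-admin cluster join dev1@127.0.0.1
# From any current member, stage dev2 as the replacement for dev3:
$ ./dev/dev1/bin/riak-admin cluster replace dev3@127.0.0.1 dev2@127.0.0.1
# Review the plan (prints the staged changes and future ring), then commit:
$ ./dev/dev1/bin/riak-admin cluster plan
$ ./dev/dev1/bin/riak-admin cluster commit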