
# Riak Multi DC Repl Cheat Sheet


## Types

There are two types of multi data center replication in Riak:

  1. Fullsync

    Operation is triggered by connection creation between clusters, by running riak-repl start-fullsync on the listener leader, or automatically every fullsync_interval minutes. Relevant app.config settings (in the riak_repl section):

    % Enable/disable fullsync when a site first connects
    {fullsync_on_connect, true},
    % Time (in minutes) between fullsync operations; may also be set to the atom disabled
    {fullsync_interval, 360}
    
  2. Real-time

    Enabled by default on all buckets once a connection is established. All new writes and updates are forwarded through the listener leader to the site leader, i.e. from the sending cluster to the receiving cluster.
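Replication behavior could also be tuned per bucket in some Riak Enterprise releases via a repl bucket property; the sketch below is an assumption to verify against your version's documentation (bucket name, host, and port are placeholders):

```
# Restrict bucket "mybucket" to fullsync-only replication (exclude it
# from real-time). Reported valid values: true, false, realtime, fullsync.
curl -X PUT http://127.0.0.1:8098/buckets/mybucket/props \
  -H "Content-Type: application/json" \
  -d '{"props": {"repl": "fullsync"}}'
```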

## Setup

Specifically for bi-directional replication between two clusters:

  1. Add a listener to all nodes in both clusters

    riak-repl add-listener <nodename> <listen_ip> <port>

  2. Add a site in cluster A connecting to any listener in cluster B

  3. Add a site in cluster B connecting to any listener in cluster A

    riak-repl add-site <ipaddr> <portnum> <sitename>

    NOTE: Unless {fullsync_on_connect, false} is set in the riak_repl section of app.config, a fullsync operation from listener to site will start as soon as the site is created.
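Putting steps 1-3 together, a minimal bi-directional setup might look like the following transcript (node names, IPs, and ports are illustrative placeholders):

```
# On each node in cluster A, register a listener with that node's own
# name and IP (repeat analogously on every node in cluster B):
riak-repl add-listener riakA1@10.0.1.1 10.0.1.1 9010

# On one node in cluster A: add a site pointing at any listener in cluster B
riak-repl add-site 10.0.2.1 9010 cluster_b

# On one node in cluster B: add a site pointing at any listener in cluster A
riak-repl add-site 10.0.1.1 9010 cluster_a
```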

## Operation

#### Important Commands

NOTE: All of the following commands should be invoked on the listener leader, which is identified by riak-repl status

riak-repl start-fullsync - Initiates a fullsync from this cluster to the site cluster

riak-repl pause-fullsync - Pauses a running fullsync operation

riak-repl resume-fullsync - Resumes a paused fullsync

riak-repl cancel-fullsync - Cancels the current fullsync

riak-repl status - One-stop shop for replication statistics
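For example, to take fullsync offline around a maintenance window on the remote cluster (run on the listener leader, as noted above):

```
riak-repl pause-fullsync
# ... perform maintenance on the site cluster ...
riak-repl resume-fullsync
```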

Example riak-repl status output, annotated with comments marked ##:

client_bytes_recv: 0  ##Stats from local site throughput
client_bytes_sent: 0
client_connect_errors: 0
client_connects: 0
client_redirect: 0
client_rx_kbps: [0,0,0,0,0,0]
client_tx_kbps: [0,0,0,0,0,0]
elections_elected: 0
elections_leader_changed: 0
objects_dropped_no_clients: 0
objects_dropped_no_leader: 0
objects_forwarded: 0
objects_sent: 0
server_bytes_recv: 0 ##Stats from listener leader throughput
server_bytes_sent: 0
server_connect_errors: 0
server_connects: 0
server_fullsyncs: 0
server_rx_kbps: [0,0,0,0,0,0] ##Real-time replication transfer rate
server_tx_kbps: [0,0,0,0,0,0]
leader: 'listener_leader_nodename' ##Listener leader
leader_message_queue_len: 0
leader_total_heap_size: 987
leader_heap_size: 987
leader_stack_size: 9
leader_reductions: 20984
leader_garbage_collection: [{min_bin_vheap_size,46368},
                            {min_heap_size,233},
                            {fullsweep_after,0},
                            {minor_gcs,0}]
local_leader_message_queue_len: 0
local_leader_heap_size: 987
client_stats: [{<8515.3751.0>,  ##Stats from site added to this cluster
                {message_queue_len,0},
                {status,[{node,'site_leader_nodename'},
                         {site,"sitename"},
                         {strategy,riak_repl_keylist_client},
                         {fullsync_worker,<8515.3757.0>},
                         {put_pool_size,5},
                         {connected,"other_cluster_listener_ip",listener_port},
                         {state,wait_for_fullsync}]}}]
server_stats: [{<8296.5311.0>,  ##Stats from listener leader
                {message_queue_len,0},
                {status,[{node,'listener_leader_nodename'},
                         {site,"other_cluster_sitename"},
                         {strategy,riak_repl_keylist_server},
                         {fullsync_worker,<8296.5312.0>},
                         {dropped_count,0}, ##Real-time replication error count
                         {queue_length,0},
                         {queue_byte_size,0},
                         {state,wait_for_partition}]}}]
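The flat key: value lines above are easy to scrape in scripts. A small helper (repl_stat is a hypothetical name, not part of riak-repl) that extracts a single stat, demonstrated here against a canned sample rather than live output:

```shell
# Hypothetical helper: print the value for a given "key: value" line
# from riak-repl status output read on stdin.
repl_stat() {
  awk -F': ' -v key="$1" '$1 == key { print $2 }'
}

# Demo with a canned two-line sample; against a live node you would
# pipe `riak-repl status` in instead.
printf 'server_fullsyncs: 3\nobjects_sent: 42\n' | repl_stat objects_sent
# prints: 42
```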