An Apache Whirr recipe that can be used to start a single virtual machine running HBase on EC2
Hi :)
Here are some quick instructions for running the hbase-ec2.properties recipe.
1. Download Whirr 0.7.0 RC0 from the link below. We are going to publish this as an official release soon.
http://people.apache.org/~asavu/whirr-0.7.0-candidate-0/whirr-0.7.0.tar.gz
2. Extract the archive
3. Save the recipe below to a file (e.g. hbase-ec2.properties)
4. Export your AWS EC2 credentials
export AWS_ACCESS_KEY_ID=...
export AWS_SECRET_ACCESS_KEY=...
5. Run the launch command (don't worry if you see SSH exceptions like read timeouts)
$ ./bin/whirr launch-cluster --config hbase-ec2.properties
6. Log in to the VM, become root, and run "hbase shell"
7. Be happy! :)
x) When you are done, destroy the virtual machine
$ ./bin/whirr destroy-cluster --config hbase-ec2.properties
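The steps above can be collected into one shell sketch. The commands that touch the network are shown commented out, since they need valid AWS credentials and a live EC2 account:

```shell
# Condensed sketch of the steps above; network-touching commands are
# commented out because they require real AWS credentials.
WHIRR_URL=http://people.apache.org/~asavu/whirr-0.7.0-candidate-0/whirr-0.7.0.tar.gz
CONFIG=hbase-ec2.properties

# 1-2. Download and extract:
#   curl -O "$WHIRR_URL"
#   tar xzf whirr-0.7.0.tar.gz && cd whirr-0.7.0
# 4. Export EC2 credentials (picked up by the recipe via ${env:...}):
#   export AWS_ACCESS_KEY_ID=...
#   export AWS_SECRET_ACCESS_KEY=...
# 5. Launch (harmless SSH read timeouts may appear):
#   ./bin/whirr launch-cluster --config "$CONFIG"
# 6. SSH in (Whirr prints the exact command at the end), become root,
#    then run: hbase shell
# x. Tear down when done:
#   ./bin/whirr destroy-cluster --config "$CONFIG"
echo "using recipe: $CONFIG"
```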
Make sure you check out Apache Whirr at http://whirr.apache.org/ and say hi on the mailing list!
Bootstrapping cluster
Configuring template
Starting 1 node(s) with roles [zookeeper, hadoop-namenode, hadoop-jobtracker, hbase-master, hadoop-datanode, hadoop-tasktracker, hbase-regionserver]
Nodes started: [[id=us-east-1/i-b6ab48d4, providerId=i-b6ab48d4, group=hbase-single-vm, name=hbase-single-vm-b6ab48d4, location=[id=us-east-1b, scope=ZONE, description=us-east-1b, parent=us-east-1, iso3166Codes=[US-VA], metadata={}], uri=null, imageId=us-east-1/ami-da0cf8b3, os=[name=null, family=ubuntu, version=10.04, arch=paravirtual, is64Bit=true, description=ubuntu-images-us/ubuntu-lucid-10.04-amd64-server-20101020.manifest.xml], state=RUNNING, loginPort=22, hostname=domU-12-31-39-0B-56-AF, privateAddresses=[10.214.89.89], publicAddresses=[67.202.25.31], hardware=[id=c1.xlarge, providerId=c1.xlarge, name=null, processors=[[cores=8.0, speed=2.5]], ram=7168, volumes=[[id=null, type=LOCAL, size=10.0, device=/dev/sda1, durable=false, isBootDevice=true], [id=null, type=LOCAL, size=420.0, device=/dev/sdb, durable=false, isBootDevice=false], [id=null, type=LOCAL, size=420.0, device=/dev/sdc, durable=false, isBootDevice=false], [id=null, type=LOCAL, size=420.0, device=/dev/sdd, durable=false, isBootDevice=false], [id=null, type=LOCAL, size=420.0, device=/dev/sde, durable=false, isBootDevice=false]], supportsImage=And(ALWAYS_TRUE,Or(isWindows(),requiresVirtualizationType(paravirtual)),ALWAYS_TRUE,is64Bit()), tags=[]], loginUser=ubuntu, userMetadata={Name=hbase-single-vm-b6ab48d4}, tags=[]]]
Authorizing firewall ingress to [us-east-1/i-b6ab48d4] on ports [2181] for [79.177.212.182/32]
Authorizing firewall ingress to [us-east-1/i-b6ab48d4] on ports [50070] for [79.177.212.182/32]
Authorizing firewall ingress to [us-east-1/i-b6ab48d4] on ports [8020, 8021] for [67.202.25.31/32]
Authorizing firewall ingress to [us-east-1/i-b6ab48d4] on ports [50030] for [79.177.212.182/32]
Authorizing firewall ingress to [us-east-1/i-b6ab48d4] on ports [8021] for [67.202.25.31/32]
The permission '67.202.25.31/32-1-8021-8021' has already been authorized on the specified group
Authorizing firewall
Authorizing firewall ingress to [us-east-1/i-b6ab48d4] on ports [60010, 60000] for [79.177.212.182/32]
Authorizing firewall ingress to [us-east-1/i-b6ab48d4] on ports [50030] for [79.177.212.182/32]
The permission '79.177.212.182/32-1-50030-50030' has already been authorized on the specified group
Authorizing firewall ingress to [us-east-1/i-b6ab48d4] on ports [8021] for [67.202.25.31/32]
The permission '67.202.25.31/32-1-8021-8021' has already been authorized on the specified group
Authorizing firewall ingress to [us-east-1/i-b6ab48d4] on ports [60030, 60020] for [79.177.212.182/32]
Starting to run scripts on cluster for phase configureinstances: us-east-1/i-b6ab48d4
Running configure phase script on: us-east-1/i-b6ab48d4
***** Some harmless SSH errors *****
configure phase script run completed on: us-east-1/i-b6ab48d4
Successfully executed configure script: [output=starting jobtracker, logging to /var/log/hadoop/logs/hadoop-hadoop-jobtracker-domU-12-31-39-0B-56-AF.out
No directory, logging in with HOME=/
starting datanode, logging to /var/log/hadoop/logs/hadoop-hadoop-datanode-domU-12-31-39-0B-56-AF.out
No directory, logging in with HOME=/
starting tasktracker, logging to /var/log/hadoop/logs/hadoop-hadoop-tasktracker-domU-12-31-39-0B-56-AF.out
No directory, logging in with HOME=/
starting master, logging to /var/log/hbase/logs/hbase-hadoop-master-domU-12-31-39-0B-56-AF.out
No directory, logging in with HOME=/
starting regionserver, logging to /var/log/hbase/logs/hbase-hadoop-regionserver-domU-12-31-39-0B-56-AF.out
No directory, logging in with HOME=/
, error=11/12/12 07:42:09 INFO namenode.FSNamesystem: fsOwner=hadoop,hadoop
11/12/12 07:42:09 INFO namenode.FSNamesystem: supergroup=supergroup
11/12/12 07:42:09 INFO namenode.FSNamesystem: isPermissionEnabled=true
11/12/12 07:42:09 INFO common.Storage: Image file of size 96 saved in 0 seconds.
11/12/12 07:42:29 INFO common.Storage: Storage directory /data/hadoop/hdfs/name has been successfully formatted.
11/12/12 07:42:29 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at domU-12-31-39-0B-56-AF.compute-1.internal/10.214.89.89
************************************************************/
mkdir: cannot create directory `/etc/hbase': File exists
, exitCode=0]
Finished running configure phase scripts on all cluster instances
Completed configuration of hbase-single-vm
Hosts: ec2-67-202-25-31.compute-1.amazonaws.com:2181
Completed configuration of hbase-single-vm role hadoop-namenode
Namenode web UI available at http://ec2-67-202-25-31.compute-1.amazonaws.com:50070
Wrote Hadoop site file /Users/andreisavu/.whirr/hbase-single-vm/hadoop-site.xml
Wrote Hadoop proxy script /Users/andreisavu/.whirr/hbase-single-vm/hadoop-proxy.sh
Completed configuration of hbase-single-vm role hadoop-jobtracker
Jobtracker web UI available at http://ec2-67-202-25-31.compute-1.amazonaws.com:50030
Completed configuration of hbase-single-vm
Web UI available at http://ec2-67-202-25-31.compute-1.amazonaws.com
Wrote HBase site file /Users/andreisavu/.whirr/hbase-single-vm/hbase-site.xml
Wrote HBase proxy script /Users/andreisavu/.whirr/hbase-single-vm/hbase-proxy.sh
Completed configuration of hbase-single-vm role hadoop-datanode
Completed configuration of hbase-single-vm role hadoop-tasktracker
Wrote instances file /Users/andreisavu/.whirr/hbase-single-vm/instances
Started cluster of 1 instances
Cluster{instances=[Instance{roles=[zookeeper, hadoop-namenode, hadoop-jobtracker, hbase-master, hadoop-datanode, hadoop-tasktracker, hbase-regionserver], publicIp=67.202.25.31, privateIp=10.214.89.89, id=us-east-1/i-b6ab48d4, nodeMetadata=[id=us-east-1/i-b6ab48d4, providerId=i-b6ab48d4, group=hbase-single-vm, name=hbase-single-vm-b6ab48d4, location=[id=us-east-1b, scope=ZONE, description=us-east-1b, parent=us-east-1, iso3166Codes=[US-VA], metadata={}], uri=null, imageId=us-east-1/ami-da0cf8b3, os=[name=null, family=ubuntu, version=10.04, arch=paravirtual, is64Bit=true, description=ubuntu-images-us/ubuntu-lucid-10.04-amd64-server-20101020.manifest.xml], state=RUNNING, loginPort=22, hostname=domU-12-31-39-0B-56-AF, privateAddresses=[10.214.89.89], publicAddresses=[67.202.25.31], hardware=[id=c1.xlarge, providerId=c1.xlarge, name=null, processors=[[cores=8.0, speed=2.5]], ram=7168, volumes=[[id=null, type=LOCAL, size=10.0, device=/dev/sda1, durable=false, isBootDevice=true], [id=null, type=LOCAL, size=420.0, device=/dev/sdb, durable=false, isBootDevice=false], [id=null, type=LOCAL, size=420.0, device=/dev/sdc, durable=false, isBootDevice=false], [id=null, type=LOCAL, size=420.0, device=/dev/sdd, durable=false, isBootDevice=false], [id=null, type=LOCAL, size=420.0, device=/dev/sde, durable=false, isBootDevice=false]], supportsImage=And(ALWAYS_TRUE,Or(isWindows(),requiresVirtualizationType(paravirtual)),ALWAYS_TRUE,is64Bit()), tags=[]], loginUser=ubuntu, userMetadata={Name=hbase-single-vm-b6ab48d4}, tags=[]]}], configuration={hbase.zookeeper.quorum=ec2-67-202-25-31.compute-1.amazonaws.com:2181, hadoop.rpc.socket.factory.class.default=org.apache.hadoop.net.SocksSocketFactory, hadoop.socks.server=localhost:6666, hbase.zookeeper.property.clientPort=2181}}
You can log into instances using the following ssh commands:
'ssh -i /Users/andreisavu/.ssh/id_rsa -o "UserKnownHostsFile /dev/null" -o StrictHostKeyChecking=no andreisavu@67.202.25.31'
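Besides the SSH command, the launch step also wrote proxy scripts under ~/.whirr/hbase-single-vm/ (see the log above). A sketch of how they might be used — the SOCKS port comes from hadoop.socks.server=localhost:6666 in the configuration Whirr printed:

```shell
# The proxy script opens a SOCKS proxy on localhost:6666 (the port named by
# hadoop.socks.server in the printed cluster configuration).
CLUSTER=hbase-single-vm
PROXY_SCRIPT="$HOME/.whirr/$CLUSTER/hadoop-proxy.sh"

# Run it in the background, then point a browser (or curl) through the proxy
# at the Namenode and Jobtracker web UIs on ports 50070 and 50030:
#   sh "$PROXY_SCRIPT" &
#   curl --socks5 localhost:6666 http://ec2-67-202-25-31.compute-1.amazonaws.com:50070/
echo "proxy script: $PROXY_SCRIPT"
```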
#
# HBase Cluster on AWS EC2 (single VM)
#
# Read the Configuration Guide for more info:
# http://whirr.apache.org/docs/latest/configuration-guide.html
# Change the cluster name here
whirr.cluster-name=hbase-single-vm
# Change the number of machines in the cluster here
whirr.instance-templates=1 zookeeper+hadoop-namenode+hadoop-jobtracker+hbase-master+hadoop-datanode+hadoop-tasktracker+hbase-regionserver
# Make sure we get a large enough instance
whirr.hardware-min-ram=4096
# For EC2 set AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables.
whirr.provider=aws-ec2
whirr.identity=${env:AWS_ACCESS_KEY_ID}
whirr.credential=${env:AWS_SECRET_ACCESS_KEY}
# The size of the instance to use. See http://aws.amazon.com/ec2/instance-types/
whirr.hardware-id=c1.xlarge
# Ubuntu 10.04 LTS Lucid. See http://alestic.com/
whirr.image-id=us-east-1/ami-da0cf8b3
# If you choose a different location, make sure whirr.image-id is updated too
whirr.location-id=us-east-1
# By default the user's system SSH keys are used. Override them here if needed.
# whirr.private-key-file=${sys:user.home}/.ssh/id_rsa
# whirr.public-key-file=${whirr.private-key-file}.pub
# Expert: specify the version of HBase to install.
#whirr.hbase.tarball.url=http://archive.apache.org/dist/hbase/hbase-0.89.20100924/hbase-0.89.20100924-bin.tar.gz
# Options for the hbase master & regionserver processes
#hbase-env.HBASE_MASTER_OPTS=-Xms1000m -Xmx1000m -Xmn256m -XX:+UseConcMarkSweepGC -XX:+AggressiveOpts -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:/data/hbase/logs/hbase-master-gc.log
#hbase-env.HBASE_REGIONSERVER_OPTS=-Xms2000m -Xmx2000m -Xmn256m -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=88 -XX:+AggressiveOpts -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:/data/hbase/logs/hbase-regionserver-gc.log
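As the comments above note, whirr.instance-templates controls the cluster shape. A hypothetical variant of the recipe's single-VM template that splits the same roles across three machines instead of one (the counts and grouping are illustrative, not part of the original recipe):

```
# Hypothetical multi-machine variant: one master node, two worker nodes
whirr.instance-templates=1 zookeeper+hadoop-namenode+hadoop-jobtracker+hbase-master,2 hadoop-datanode+hadoop-tasktracker+hbase-regionserver
```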