## Start Docker

```shell
docker-kill-last
dpsa
docker run -d -h server.ambari.com --dns 127.0.0.1 --name ambari sequenceiq/ambari
```

- `-d` run the container as a daemon (in the background)
- `-h` set the container's hostname
- `--dns` set the DNS server the container should use
- `--name` assign a name to the container
Recently I was working on a Hadoop 2.x/YARN-based Application Master and came across Apache Helix, a generic cluster management framework from LinkedIn.
YARN gives you the framework to allocate containers among nodes based on utilization (CPU, memory) and to monitor, start and restart containers - but it leaves state management, fault tolerance, cluster expansion, throttling, replication and partitioning to the discretion of the Application Master. This is where Helix complements YARN: it handles these concerns in a declarative way, using a finite-state machine (http://en.wikipedia.org/wiki/Finite-state_machine).
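As a rough illustration of that declarative, state-machine style, here is a minimal Java sketch of a MASTER/SLAVE-style transition table. This is not the Helix API (its real `StateModel` classes look quite different); the class and method names below are mine, and the point is only that legal transitions are declared up front rather than coded ad hoc.

```java
import java.util.Map;
import java.util.Set;

// Minimal sketch of a declarative state model, loosely inspired by
// Helix's MASTER/SLAVE model. Illustrative only - not the Helix API.
public class MasterSlaveModel {
    enum State { OFFLINE, SLAVE, MASTER }

    // Legal transitions, declared up front (the "declarative" part).
    private static final Map<State, Set<State>> LEGAL = Map.of(
        State.OFFLINE, Set.of(State.SLAVE),
        State.SLAVE,   Set.of(State.MASTER, State.OFFLINE),
        State.MASTER,  Set.of(State.SLAVE));

    public static boolean canTransition(State from, State to) {
        return LEGAL.getOrDefault(from, Set.of()).contains(to);
    }
}
```

A framework driving such a model can then reject any transition the table does not allow - for example going straight from OFFLINE to MASTER.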
I will let you know how this 'mini-project' goes, but back to the topic ...
I had a conversation with a good friend about how they built a distributed system using Apache Zookeeper and how they struggled to model multiple distributed locks at scale. While there might be a few frameworks achieving this, I thought I'd highlight the differences and the advantages of using Helix.
If you are an HBase user, a good way to leave the comfort zone of the JVM heap is to start playing with the new BlockCache implementations in the latest HBase versions (it cost me a sleepless night).
Older HBase versions supported a single BlockCache implementation **(LruBlockCache)**, but later versions have introduced different cache implementations *(a block is the single unit of I/O, the smallest amount of data HBase writes to or reads back from an HFile)*.
Since the introduction of **ByteBuffers** in Java we have had the chance to leverage the operating system's memory manager to move data in and out of memory in a way that's transparent to your program - and this is exactly what the new **SlabCache** and **BucketCache** implementations do: they allocate memory outside the JVM heap using direct byte buffers.
```java
// java.nio.ByteBuffer
public static ByteBuffer allocateDirect(int capacity)
```
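A quick way to convince yourself the allocation really is off-heap is to compare it with a regular heap buffer: a buffer from `allocateDirect()` reports `isDirect() == true` and exposes no backing `byte[]`, which is the property SlabCache and BucketCache rely on.

```java
import java.nio.ByteBuffer;

public class DirectBufferDemo {
    public static void main(String[] args) {
        ByteBuffer heap = ByteBuffer.allocate(1024);         // on-heap, GC-managed
        ByteBuffer direct = ByteBuffer.allocateDirect(1024); // off-heap

        System.out.println(heap.isDirect());   // false
        System.out.println(direct.isDirect()); // true
        System.out.println(direct.hasArray()); // false: no byte[] behind it
    }
}
```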
```shell
gradle init --type pom
```
```
ERROR SimpleFeatureSelector:67 - Exception {}
java.lang.IllegalStateException: Must not use direct buffers with InputStream API
    at com.google.common.base.Preconditions.checkState(Preconditions.java:176)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:102)
    at org.apache.hadoop.hdfs.RemoteBlockReader2.readNextPacket(RemoteBlockReader2.java:170)
    at org.apache.hadoop.hdfs.RemoteBlockReader2.read(RemoteBlockReader2.java:135)
    at org.apache.hadoop.hdfs.DFSInputStream$ByteArrayStrategy.doRead(DFSInputStream.java:642)
    at org.apache.hadoop.hdfs.DFSInputStream.readBuffer(DFSInputStream.java:698)
```
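The failing frame is a Guava `Preconditions.checkState` call in `PacketReceiver`: the InputStream-based read path needs a backing `byte[]` to hand to `InputStream.read()`, which direct buffers don't have. A sketch of the kind of guard involved - the class and method here are illustrative, not the actual HDFS code:

```java
import java.nio.ByteBuffer;

// Illustrative guard mirroring the precondition seen in the stack trace above.
public class DirectBufferGuard {
    public static void readFully(ByteBuffer buf) {
        if (buf.isDirect()) {
            throw new IllegalStateException(
                "Must not use direct buffers with InputStream API");
        }
        // With a heap buffer, buf.array() can be passed to InputStream.read().
    }
}
```

In other words: the cache side happily lives off-heap, but any code path that funnels the buffer through an `InputStream` will reject it.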
```bash
# Switch the active JDK on OS X via /usr/libexec/java_home.
# Usage: setjdk 1.7
function setjdk() {
  if [ $# -ne 0 ]; then
    removeFromPath '/System/Library/Frameworks/JavaVM.framework/Home/bin'
    if [ -n "${JAVA_HOME+x}" ]; then
      removeFromPath "$JAVA_HOME/bin"
    fi
    export JAVA_HOME=`/usr/libexec/java_home -v $@`
    export PATH=$JAVA_HOME/bin:$PATH
  fi
}

# Companion helper (assumed, not shown in the original): strips a directory from $PATH.
function removeFromPath() {
  export PATH=$(echo "$PATH" | sed -E -e "s;:$1;;" -e "s;$1:?;;")
}
```
```
14/03/12 11:25:19 INFO monitoring.MRJobStatus: Job PiEstimation in queue (default) progress M/R: 1.0/1.0
14/03/12 11:25:19 INFO monitoring.MRJobStatus: Tracking URL : sandbox.hortonworks.com:19888/jobhistory/job/job_1394640227686_0004
14/03/12 11:25:19 INFO monitoring.MRJobStatus: Reserved memory : 0, used memory : 0 and used slots : 0
14/03/12 11:25:20 INFO monitoring.MRJobStatus: MAP: Status COMPLETE with task ID task_1394640227686_0004_m_000001, and progress 1.0
14/03/12 11:25:20 INFO monitoring.MRJobStatus: MAP: Status COMPLETE with task ID task_1394640227686_0004_m_000007, and progress 1.0
14/03/12 11:25:20 INFO monitoring.MRJobStatus: MAP: Status COMPLETE with task ID task_1394640227686_0004_m_000000, and progress 1.0
14/03/12 11:25:20 INFO monitoring.MRJobStatus: MAP: Status COMPLETE with task ID task_1394640227686_0004_m_000005, and progress 1.0
14/03/12 11:25:20 INFO monitoring.MRJobStatus: MAP: Status COMPLETE with task ID task_1394640227686_0004_m_000009, and progress 1.0
14/03/12
```
```
2014-03-17 10:41:31,833 [AMRM Callback Handler Thread] INFO HoyaAppMaster.yarn - Shutdown Request received
2014-03-17 10:41:31,841 [AMRM Callback Handler Thread] INFO impl.AMRMClientAsyncImpl - Shutdown requested. Stopping callback.
2014-03-17 10:41:32,841 [main] INFO appmaster.HoyaAppMaster - Triggering shutdown of the AM: Shutdown requested from RM
2014-03-17 10:41:32,842 [main] INFO appmaster.HoyaAppMaster - Process has exited with exit code 0 mapped to 0 -ignoring
2014-03-17 10:41:32,843 [main] INFO state.AppState - Releasing 1 containers
2014-03-17 10:41:32,843 [main] INFO appmaster.HoyaAppMaster - Application completed. Signalling finish to RM
2014-03-17 10:41:32,843 [main] INFO appmaster.HoyaAppMaster - Unregistering AM status=FAILED message=Shutdown requested from RM
2014-03-17 10:41:32,855 [main] INFO appmaster.HoyaAppMaster - Failed to unregister application: org.apache.hadoop.yarn.exceptions.InvalidApplicationMasterRequestException: Application doesn't exist in cache appattempt_139504910217
```
```shell
sudo route add -net 172.21.0.0/16 192.168.59.103

# Ambari Master
docker run -d --dns 127.0.0.1 --name amb0 -h amb0.mycorp.kom sequenceiq/ambari:multi /usr/local/serf/bin/start-serf-agent.sh --tag ambari-role=server,agent

# Ambari Agent 1
docker run -d -e SERF_JOIN_IP=172.21.0.2 --dns 127.0.0.1 --name amb1 -h amb1.mycorp.kom sequenceiq/ambari:multi /usr/local/serf/bin/start-serf-agent.sh --tag ambari-role=agent

# Ambari Agent 2
docker run -d -e SERF_JOIN_IP=172.21.0.2 --dns 127.0.0.1 --name amb2 -h amb2.mycorp.kom sequenceiq/ambari:multi /usr/local/serf/bin/start-serf-agent.sh --tag ambari-role=agent
```
```json
{
  "configurations" : [
    {
      "global" : {
        "nagios_contact" : "admin@localhost"
      }
    }
  ],
  "host_groups" : [
    {
```