
@kawamon
kawamon / gist:5459146
Created April 25, 2013 11:42
Log from MapReduce running out of disk space mid-job (CDH4)
[training@localhost ~]$ hadoop jar /usr/lib/hadoop-0.20-mapreduce/hadoop-examples.jar wordcount 1G output2
13/04/25 07:35:54 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
13/04/25 07:35:55 INFO input.FileInputFormat: Total input paths to process : 1
13/04/25 07:35:55 WARN snappy.LoadSnappy: Snappy native library is available
13/04/25 07:35:55 INFO snappy.LoadSnappy: Snappy native library loaded
13/04/25 07:35:55 INFO lzo.GPLNativeCodeLoader: Loaded native gpl library
13/04/25 07:35:55 INFO lzo.LzoCodec: Successfully loaded & initialized native-lzo library [hadoop-lzo rev 477079631bb6c2b47f15140548e2c0f4efa6996f]
13/04/25 07:35:55 INFO mapred.JobClient: Running job: job_201304250735_0001
13/04/25 07:35:56 INFO mapred.JobClient: map 0% reduce 0%
13/04/25 07:36:07 INFO mapred.JobClient: map 5% reduce 0%
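Memo: a quick way to watch for the disk filling up while the job runs. A sketch only; `/mapred/local` is an assumption here, and the real paths come from `mapred.local.dir` in mapred-site.xml.

```shell
# Intermediate map output spills to the mapred.local.dir directories, so a
# full disk there fails tasks mid-job. Check free space on those paths:
df -h /mapred/local 2>/dev/null || df -h
```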
kawamon / Log when a node goes down during the MapReduce Reduce phase
Created November 14, 2013 03:39
JobTracker (JT) log from a node going down during the MapReduce Reduce phase
slave x 4 (DN x 4, TT x 4)
map = 8 (8 blocks)
reduce = 4 (mapred.reduce.tasks)
---
During the reduce (copy) phase, I crashed the kernel of one slave node (monkey) using sysrq.
2013-11-13 19:10:31,082 INFO org.apache.hadoop.mapred.JobInProgress: job_201311131900_0003: nMaps=8 nReduces=4 max=-1
2013-11-13 19:10:31,119 INFO org.apache.hadoop.mapred.JobTracker: Job job_201311131900_0003 added successfully for user 'training' to queue 'default'
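Memo: a sketch of the sysrq crash used for this test. This causes an immediate, deliberate kernel panic on the node, needs root, and is guarded here so it cannot run by accident.

```shell
# Deliberately panic this machine via magic sysrq (the "crashed slave" above).
# Guarded: refuses unless CONFIRM=yes is set in the environment.
if [ "${CONFIRM:-no}" = "yes" ]; then
    echo 1 > /proc/sys/kernel/sysrq   # make sure sysrq is enabled
    echo c > /proc/sysrq-trigger      # trigger an immediate kernel crash
else
    echo "refusing: set CONFIRM=yes to panic this machine"
fi
```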
kawamon / gist:7461079
Created November 14, 2013 03:54
TaskTracker (TT) log from the MapReduce Reduce phase
2013-11-13 19:10:34,998 WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
2013-11-13 19:10:34,998 WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
2013-11-13 19:10:34,998 INFO org.apache.hadoop.mapred.TaskTracker: LaunchTaskAction (registerTask): attempt_201311131900_0003_m_000002_0 task's state:UNASSIGNED
2013-11-13 19:10:34,999 INFO org.apache.hadoop.mapred.TaskTracker: LaunchTaskAction (registerTask): attempt_201311131900_0003_m_000003_0 task's state:UNASSIGNED
2013-11-13 19:10:34,999 INFO org.apache.hadoop.mapred.TaskTracker: Trying to launch : attempt_201311131900_0003_m_000002_0 which needs 1 slots
2013-11-13 19:10:34,999 INFO org.apache.hadoop.mapred.TaskTracker: In TaskLauncher, current free slots : 2 and trying to launch attempt_201311131900_0003_m_000002_0 which needs 1 slots
2013-11-13 19:10:35,058 INFO org.apache.hadoop.mapred.
kawamon / gist:7476694
Created November 15, 2013 00:01
JobTracker (JT) log during speculative execution (a lot of extraneous information, but kept as a memo)
The first task attempt is attempt_201311132359_0008_m_000007_0; the speculative attempt launched against it is attempt_201311132359_0008_m_000007_1.
_0 finishes and _1 is killed.
2013-11-14 03:58:39,347 WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
2013-11-14 03:58:39,347 INFO org.apache.hadoop.mapred.JobTracker: Adding task (TASK_CLEANUP) 'attempt_201311132359_0007_m_000006_0' to tip task_201311132359_0007_m_000006, for tracker 'tracker_tiger:localhost.localdomain/127.0.0.1:60895'
2013-11-14 03:58:43,782 INFO org.apache.hadoop.mapred.JobInProgress: job_201311132359_0008: nMaps=9 nReduces=1 max=-1
2013-11-14 03:58:43,849 INFO org.apache.hadoop.mapred.JobTracker: Job job_201311132359_0008 added successfully for user 'training' to queue 'default'
2013-11-14 03:58:43,849 INFO org.apache.hadoop.mapred.JobTracker: Initializing job_201311132359_0008
2013-11-14 03:58:43,849 INFO org.apache.hadoop.mapred.JobInProgress: Initializing job_201311132359_0008
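Memo: the speculative attempt above is governed by these MRv1 switches (defaults shown; a generic sketch, not this cluster's actual mapred-site.xml):

```xml
<!-- mapred-site.xml: MRv1 speculative execution switches (defaults shown) -->
<property>
  <name>mapred.map.tasks.speculative.execution</name>
  <value>true</value>
</property>
<property>
  <name>mapred.reduce.tasks.speculative.execution</name>
  <value>true</value>
</property>
```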
kawamon / allocations.xml
Last active December 28, 2015 18:19
Preemption log 1 with the MRv1 Fair Scheduler
<allocations>
  <pool name="preempt">
    <minMaps>2</minMaps>
    <minReduces>2</minReduces>
    <minSharePreemptionTimeout>10</minSharePreemptionTimeout>
  </pool>
</allocations>
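Memo: this allocations file only takes effect when the JobTracker runs the Fair Scheduler with preemption enabled. A minimal mapred-site.xml sketch (the allocation-file path is an assumption):

```xml
<property>
  <name>mapred.jobtracker.taskScheduler</name>
  <value>org.apache.hadoop.mapred.FairScheduler</value>
</property>
<property>
  <!-- path is an assumption; point at wherever allocations.xml lives -->
  <name>mapred.fairscheduler.allocation.file</name>
  <value>/etc/hadoop/conf/allocations.xml</value>
</property>
<property>
  <name>mapred.fairscheduler.preemption</name>
  <value>true</value>
</property>
```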
2014-03-11 01:09:27,915 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = huetest/127.0.0.1
STARTUP_MSG: args = []
STARTUP_MSG: version = 2.2.0-cdh5.0.0-beta-2
STARTUP_MSG: classpath = /etc/hadoop/conf:/usr/lib/hadoop/lib/commons-collections-3.2.1.jar:/usr/lib/hadoop/lib/commons-codec-1.4.jar:/usr/lib/hadoop/lib/servlet-api-2.5.jar:/usr/lib/hadoop/lib/jets3t-0.9.0.jar:/usr/lib/hadoop/lib/xz-1.0.jar:/usr/lib/hadoop/lib/commons-beanutils-1.7.0.jar:/usr/lib/hadoop/lib/commons-httpclient-3.1.jar:/usr/lib/hadoop/lib/commons-digester-1.8.jar:/usr/lib/hadoop/lib/jersey-json-1.9.jar:/usr/lib/hadoop/lib/activation-1.1.jar:/usr/lib/hadoop/lib/zookeeper-3.4.5-cdh5.0.0-beta-2.jar:/usr/lib/hadoop/lib/protobuf-java-2.5.0.jar:/usr/lib/hadoop/lib/jettison-1.1.jar:/usr/lib/hadoop/lib/xmlenc-0.52.jar:/usr/lib/hadoop/lib/commons-lang-2.6.jar:/usr/lib/hadoop/lib/snappy-java-1.0.4.1.jar:/usr/l
[root@huetest hadoop-hdfs]# sudo -u hdfs hdfs namenode -upgrade
14/03/11 02:22:55 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = huetest/127.0.0.1
STARTUP_MSG: args = [-upgrade]
STARTUP_MSG: version = 2.2.0-cdh5.0.0-beta-2
STARTUP_MSG: classpath = /etc/hadoop/conf:/usr/lib/hadoop/lib/commons-collections-3.2.1.jar:/usr/lib/hadoop/lib/commons-codec-1.4.jar:/usr/lib/hadoop/lib/servlet-api-2.5.jar:/usr/lib/hadoop/lib/jets3t-0.9.0.jar:/usr/lib/hadoop/lib/xz-1.0.jar:/usr/lib/hadoop/lib/commons-beanutils-1.7.0.jar:/usr/lib/hadoop/lib/commons-httpclient-3.1.jar:/usr/lib/hadoop/lib/commons-digester-1.8.jar:/usr/lib/hadoop/lib/jersey-json-1.9.jar:/usr/lib/hadoop/lib/activation-1.1.jar:/usr/lib/hadoop/lib/zookeeper-3.4.5-cdh5.0.0-beta-2.jar:/usr/lib/hadoop/lib/protobuf-java-2.5.0.jar:/usr/lib/hadoop/lib/jettison-1.1.jar:/usr/lib/hadoop/lib/xmlenc-0.52.jar:/usr/lib/hadoop/lib/commons-lang-2.6.jar:/usr/lib/hadoo
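Memo: after `hdfs namenode -upgrade`, HDFS stays in an upgrade-pending state until finalized. A sketch of the usual follow-up, using the standard hdfs commands; run only after verifying the upgraded cluster:

```shell
# While not finalized, a rollback to the old layout is still possible
# (with the daemons stopped):
#   hdfs namenode -rollback
# Once the upgraded cluster checks out, finalize to reclaim old storage:
sudo -u hdfs hdfs dfsadmin -finalizeUpgrade
```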
kawamon / gist:10354308
Created April 10, 2014 08:05
HDFS Caching
kawasaki@hadoop11:~$ hdfs cacheadmin -listDirectives
Found 0 entries
kawasaki@hadoop11:~$ hdfs cacheadmin -listDirectives stats
Can't understand argument: stats
kawasaki@hadoop11:~$ hdfs cacheadmin -listDirectives -stats
Found 0 entries
kawasaki@hadoop11:~$ hadoop fs -ls dir1
Found 4 items
drwxr-xr-x - kawasaki kawasaki 0 2014-04-09 06:38 dir1/a
-rw-r--r-- 3 kawasaki kawasaki 75288655 2014-04-09 06:44 dir1/bigfile
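Memo: `-listDirectives` shows 0 entries because nothing has been cached yet. A sketch of creating a pool and a directive first (the pool name is an assumption for illustration):

```shell
hdfs cacheadmin -addPool testpool
hdfs cacheadmin -addDirective -path /user/kawasaki/dir1/bigfile -pool testpool
hdfs cacheadmin -listDirectives -stats   # should now list the directive
```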
[training@elephant data]
[training@elephant data]$ hadoop fs -appendToFile access_log weblog/access_log
[training@elephant data]$ hadoop fs -appendToFile access_log weblog/access_log
[training@elephant data]$
[training@elephant data]$ sudo -u hdfs hdfs fsck /user/training/weblog/ -files -blocks -locations
Connecting to namenode via http://elephant:50070
FSCK started by hdfs (auth:SIMPLE) from /10.132.69.56 for path /user/training/weblog/ at Wed May 28 06:20:21 EDT 2014
kawamon / yarn-site.xml
Created May 28, 2014 16:32
yarn-site.xml (lxccluster)
<?xml version="1.0" encoding="UTF-8"?>
<!--Autogenerated by Cloudera Manager-->
<configuration>
  <property>
    <name>yarn.acl.enable</name>
    <value>true</value>
  </property>
  <property>
    <name>yarn.admin.acl