Shrijeet shrijeet

  • Redwood City, CA
/etc/init.d/hadoop-hdfs-namenode
/etc/default/hadoop
/etc/default/hadoop-0.20-mapreduce
/etc/default/hadoop-fuse
/etc/default/hadoop-hdfs
/etc/default/hadoop-hdfs-namenode
/etc/default/hadoop-hdfs-secondarynamenode
/usr/lib/hadoop/libexec/hadoop-config.sh
/usr/lib/hadoop/libexec/hadoop-layout.sh
/etc/hadoop/conf/hadoop-env.sh
export HADOOP_COMMON_HOME=/usr/lib/hadoop
export HADOOP_CONF_DIR=/etc/hadoop/conf
export HADOOP_DATANODE_USER=hdfs
export HADOOP_HDFS_HOME=/usr/lib/hadoop-hdfs
export HADOOP_HOME=/usr/lib/hadoop-0.20-mapreduce
export HADOOP_HOME_WARN_SUPPRESS=true
export HADOOP_IDENT_STRING=hadoop
export HADOOP_IDENT_STRING=hdfs
export HADOOP_JOBTRACKERHA_USER=mapred
export HADOOP_JOBTRACKER_USER=mapred
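The listing above exports `HADOOP_IDENT_STRING` twice. Assuming the `/etc/default/*` files are sourced by the init script in the order shown, the later assignment simply overrides the earlier one, so the daemon ends up running with the second value. A minimal shell sketch of that override behavior:

```shell
# Sketch (assumption): later exports win when the /etc/default files
# are sourced in sequence, so HADOOP_IDENT_STRING resolves to "hdfs".
export HADOOP_IDENT_STRING=hadoop
export HADOOP_IDENT_STRING=hdfs
echo "$HADOOP_IDENT_STRING"   # prints: hdfs
```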
require 'formula'

class TmuxForIterm2 < Formula
  url 'http://iterm2.googlecode.com/files/tmux-for-iTerm2-20130302.tar.gz'
  sha1 '83d1389eb55b55bc069e0b66a11aa0a8faf9cddd'
  homepage 'http://code.google.com/p/iterm2/wiki/TmuxIntegration'

  depends_on 'libevent'

  def install
@shrijeet
shrijeet / gist:4968275
Created February 16, 2013 19:13
copy table's main
public static void main(String[] args) throws Exception {
  Configuration conf = HBaseConfiguration.create();
  String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
  Job job = createSubmittableJob(conf, otherArgs);
  if (job != null) {
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
@shrijeet
shrijeet / delete_blocks.log
Created February 11, 2013 08:21
NN log deleting blocks
2013-02-11 02:40:15,560 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* ask 172.22.4.30:50010 to replicate blk_-8282418489515119773_208956459 to datanode(s) 172.22.4.36:50010
2013-02-11 02:40:19,711 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.addStoredBlock: blockMap updated: 172.22.4.36:50010 is added to blk_-8282418489515119773_208956459 size 1882
2013-02-11 02:52:14,531 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.addStoredBlock: blockMap updated: 172.22.24.37:50010 is added to blk_-8282418489515119773_208956459 size 1882
2013-02-11 02:52:14,531 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.chooseExcessReplicates: (172.22.4.30:50010, blk_-8282418489515119773_208956459) is added to recentInvalidateSets
2013-02-11 02:52:30,141 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* ask 172.22.4.30:50010 to delete blk_-7899761269070348109_200978153 blk_-8553516317821166119_181078974 blk_-6417975954560547521_204368816 blk_-7917102129184696044_160428177 blk_-884761233393181724
@shrijeet
shrijeet / jstack_hbase_read_deadlock.java
Created December 11, 2012 20:13
jstack_hbase_read_deadlock.stack
2012-12-11 11:56:08
Full thread dump Java HotSpot(TM) 64-Bit Server VM (14.2-b01 mixed mode):
"IPC Client (1542500044) connection to txa-2.rfiserve.net/172.22.12.2:60000 from hbase" daemon prio=10 tid=0x0000000042217800 nid=0x2ad2 in Object.wait() [0x00007fb7d3e73000]
java.lang.Thread.State: TIMED_WAITING (on object monitor)
at java.lang.Object.wait(Native Method)
at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.waitForWork(HBaseClient.java:459)
- locked <0x00007fb8ddb07fb0> (a org.apache.hadoop.hbase.ipc.HBaseClient$Connection)
at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.run(HBaseClient.java:504)
@shrijeet
shrijeet / splits_2012-11-09.log
Created November 10, 2012 18:24
Counting splits in ine-arp & tca-arp on 2012-11-09
[13:22]shrijeet@ine-60:/srv/logs/hadoop-->grep "mkdirs.*splits" hadoop-hadoop-namenode-ine-60.rfiserve.net.log.2012-11-09
2012-11-09 02:40:02,524 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=hbase ip=/172.22.2.32 cmd=mkdirs src=/hbase/ine-arp/userprofile/01646cb1eabaa3d9527884b598e34fef/.splits dst=null perm=hbase:supergroup:rwxr-xr-x
2012-11-09 03:25:24,614 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=hbase ip=/172.22.2.32 cmd=mkdirs src=/hbase/ine-arp/userprofile/30f4e090f947b5e70fa012019666f5c3/.splits dst=null perm=hbase:supergroup:rwxr-xr-x
2012-11-09 12:06:25,230 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=hbase ip=/172.22.2.47 cmd=mkdirs src=/hbase/ine-arp/userprofile/5952cc659613fccd0b09e76159df8a28/.splits dst=null perm=hbase:supergroup:rwxr-xr-x
2012-11-09 13:09:43,052 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=hbase ip=/172.22.2.52 cmd=mkdirs src=/hbase/ine-arp/userprofile/25e130dde803393d96c740f8e37a419
@shrijeet
shrijeet / TooManyPending.java
Created November 9, 2012 19:02
[asynchbase] Too many pending RPC
/**
 * The errback associated with the row. If any of the puts (one per column)
 * fails, this errback will be invoked with a DeferredGroupException as its
 * argument.
 */
final class WriteErrback implements Callback<Object, Exception> {
  public Exception call(final Exception arg) {
    metrics.getWriteFailed().inc();
    metrics.getWritePending().dec();
    return arg;
  }
}
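The WriteErrback above follows the asynchbase errback pattern: count the failure, release the pending-write slot, and pass the exception through unchanged so downstream callbacks still see it. A self-contained sketch of that pattern — the `Callback` interface and the counters here are local stand-ins (assumptions), not the real asynchbase API or the gist's `metrics` object:

```java
import java.util.concurrent.atomic.AtomicLong;

public class ErrbackSketch {
    // Local stand-in for the asynchbase-style callback interface (assumption).
    interface Callback<R, T> { R call(T arg) throws Exception; }

    // Hypothetical counters in place of the gist's metrics object.
    static final AtomicLong writeFailed = new AtomicLong(0);
    static final AtomicLong writePending = new AtomicLong(1);

    // Mirrors WriteErrback: record the failure, release the pending slot,
    // and return the exception unchanged so later errbacks still see it.
    static final class WriteErrback implements Callback<Object, Exception> {
        public Object call(final Exception arg) {
            writeFailed.incrementAndGet();
            writePending.decrementAndGet();
            return arg;
        }
    }

    public static void main(String[] args) {
        new WriteErrback().call(new RuntimeException("RPC failed"));
        System.out.println(writeFailed.get() + " " + writePending.get()); // prints: 1 0
    }
}
```

Returning `arg` instead of swallowing it is the key design choice: the errback observes the failure for accounting but leaves error propagation to whoever is next in the chain.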
@shrijeet
shrijeet / jstack_hung_client.txt
Created October 19, 2012 01:37
jstack trace for asynchbase client getting hung
2012-10-18 21:10:34
Full thread dump Java HotSpot(TM) 64-Bit Server VM (14.2-b01 mixed mode):
"Attach Listener" daemon prio=10 tid=0x000000004a0af800 nid=0x1292 waiting on condition [0x0000000000000000]
java.lang.Thread.State: RUNNABLE
"New I/O worker #139" prio=10 tid=0x0000000049978800 nid=0x477a runnable [0x0000000043554000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:215)
@shrijeet
shrijeet / error_trace_hung_client.java
Created October 19, 2012 01:15
Error stack trace for asynchbase client getting hung
12/09/23 15:02:18 ERROR async.RegionClient: Unexpected exception from downstream on [id: 0x5cac6a45, /172.22.8.8:44666 => /172.22.4.46:60020]
java.io.IOException: Connection reset by peer
at sun.nio.ch.FileDispatcher.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:21)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:233)
at sun.nio.ch.IOUtil.read(IOUtil.java:200)
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:236)
at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:63)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.processSelectedKeys(AbstractNioWorker.java:373)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:247)