
@apple-corps
apple-corps / gist:ca358e5cd6e11ca221ac
Created July 21, 2014 18:22
Cannot install puppet module maestrodev-avahi-1.1.0
puppet apply -e maestrodev-avahi-1.1.0
Error: Could not parse for environment production: Syntax error at '.' at line 1 on node rhel1.local
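The parse error above is expected: `puppet apply -e` evaluates its argument as Puppet code, so the dots in the module name hit the parser. Installing a Forge module goes through `puppet module install` instead; a minimal sketch, assuming the node can reach the Forge and that the module's top-level class is `avahi`:

```shell
# Install the Forge module (not via `puppet apply -e`);
# --version pins the release named above.
puppet module install maestrodev-avahi --version 1.1.0

# Then apply a manifest that declares the class
# (class name `avahi` is an assumption here):
puppet apply -e 'include avahi'
```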
@apple-corps
apple-corps / gist:989ecd52c31ddc54e875
Created July 22, 2014 01:14
Cannot start yarn-nodemanager
2014-07-21 17:48:45,183 INFO [main] nodemanager.NodeManager (StringUtils.java:startupShutdownMessage(597)) - STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NodeManager
STARTUP_MSG: host = rhel2.local/172.16.0.2
STARTUP_MSG: args = []
STARTUP_MSG: version = 2.0.0-cdh4.1.3
STARTUP_MSG: classpath = /etc/hadoop/conf:/etc/hadoop/conf:/etc/hadoop/conf:/usr/lib/hadoop/lib/jersey-core-1.8.jar:/usr/lib/hadoop/lib/commons-cli-1.2.jar:/usr/lib/hadoop/lib/jaxb-impl-2.2.3-1.jar:/usr/lib/hadoop/lib/jackson-mapper-asl-1.8.8.jar:/usr/lib/hadoop/lib/jsch-0.1.42.jar:/usr/lib/hadoop/lib/asm-3.2.jar:/usr/lib/hadoop/lib/kfs-0.3.jar:/usr/lib/hadoop/lib/jsr305-1.3.9.jar:/usr/lib/hadoop/lib/commons-logging-1.1.1.jar:/usr/lib/hadoop/lib/jets3t-0.6.1.jar:/usr/lib/hadoop/lib/xmlenc-0.52.jar:/usr/lib/hadoop/lib/jetty-6.1.26.cloudera.2.jar:/usr/lib/hadoop/lib/jsp-api-2.1.jar:/usr/lib/hadoop/lib/commons-collections-3.2.1.jar:/usr/lib/hadoop/lib/jackson-core-asl-1.8.8.jar:/usr/lib/
[root@rhel1 ~]# netstat -tunalp | grep LISTEN
tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN 1259/rpcbind
tcp 0 0 172.16.0.6:19888 0.0.0.0:* LISTEN 1655/java
tcp 0 0 0.0.0.0:60787 0.0.0.0:* LISTEN 1277/rpc.statd
tcp 0 0 172.16.0.6:8020 0.0.0.0:* LISTEN 1541/java
tcp 0 0 0.0.0.0:50070 0.0.0.0:* LISTEN 1541/java
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1364/sshd
tcp 0 0 127.0.0.1:631 0.0.0.0:* LISTEN 1331/cupsd
tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 1440/master
tcp 0 0 172.16.0.6:10020 0.0.0.0:* LISTEN 1655/java
Mon Jul 28 09:16:23 PDT 2014 Starting master on rhel1
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 22958
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 32768
pipe size (512 bytes, -p) 8
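The ulimit dump above shows `max locked memory` at 64 KB and `open files` at 32768; Hadoop and HBase daemons are sensitive to low file-descriptor limits, so checking and raising them is a common first step. A sketch, with the 65536 value and the service users as assumptions to adjust per distribution:

```shell
# Raise the open-file limit for the current (root) shell, then verify.
# This may fail if the hard limit is lower.
ulimit -n 65536
ulimit -n

# To make it persistent for the service users, append to
# /etc/security/limits.conf (users and values are illustrative):
#   hdfs   -  nofile  65536
#   hbase  -  nofile  65536
```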
apple-corps / gist:80667388cc7bfd9dea4d
Created July 29, 2014 22:52
Can't import data into HBase from HDFS using hbase.jar
[root@rhel1 hadoop-hdfs]# sudo -u hdfs hadoop jar /usr/lib/hbase/hbase.jar import AuthorDetailsMd5 hdfs://golden-apple/user/hdfs/AuthorDetailsQA/
2014-07-29 15:25:37,480 WARN [main] conf.Configuration (Configuration.java:warnOnceIfDeprecated(808)) - dfs.df.interval is deprecated. Instead, use fs.df.interval
2014-07-29 15:25:37,493 WARN [main] conf.Configuration (Configuration.java:warnOnceIfDeprecated(808)) - dfs.max.objects is deprecated. Instead, use dfs.namenode.max.objects
2014-07-29 15:25:37,494 WARN [main] conf.Configuration (Configuration.java:warnOnceIfDeprecated(808)) - hadoop.native.lib is deprecated. Instead, use io.native.lib.available
2014-07-29 15:25:37,494 WARN [main] conf.Configuration (Configuration.java:warnOnceIfDeprecated(808)) - dfs.data.dir is deprecated. Instead, use dfs.datanode.data.dir
2014-07-29 15:25:37,494 WARN [main] conf.Configuration (Configuration.java:warnOnceIfDeprecated(808)) - dfs.name.dir is deprecated. Instead, use dfs.namenode.name.dir
2014-07-29 15:25:37,494 WARN
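The log above cuts off before the actual failure; the lines shown are only deprecation warnings. If the driver embedded in hbase.jar keeps failing, the same Import job can be launched through the `hbase` wrapper script, which assembles the HBase classpath itself. A sketch, reusing the table and path from the command above:

```shell
# Run the Import MapReduce job via the hbase launcher instead of `hadoop jar`.
sudo -u hdfs hbase org.apache.hadoop.hbase.mapreduce.Import \
    AuthorDetailsMd5 hdfs://golden-apple/user/hdfs/AuthorDetailsQA/
```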
apple-corps / gist:80bba7b6b19d64fde6c2
Last active August 29, 2015 14:05
Large discrepancy in HBase rootdir size after a CopyTable operation in HBase 0.92.1-cdh4.1.3
The guide I used as a reference:
http://blog.pivotal.io/pivotal/products/migrating-an-apache-hbase-table-between-different-clusters
Supposedly the original command used to create the table on cluster A:
create 'ADMd5', {NAME => 'a', BLOOMFILTER => 'ROW', VERSIONS => '1', COMPRESSION => 'SNAPPY', MIN_VERSIONS => '0'}
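For reference, a cross-cluster CopyTable run is launched from the source cluster against the destination's ZooKeeper quorum; a sketch, with the peer address as an assumption and the table name taken from the DDL above:

```shell
# Copy ADMd5 to cluster B (ZooKeeper host/port/znode are placeholders).
hbase org.apache.hadoop.hbase.mapreduce.CopyTable \
    --peer.adr=zk-b-host:2181:/hbase ADMd5
```

A size gap after the copy is often down to table properties rather than missing data: the DDL above requests SNAPPY compression and VERSIONS => '1', and if the destination table was created without the same settings, the on-disk footprint in the rootdir can differ by a large factor.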
apple-corps / gist:01f1b082694448fbef7d
Last active August 29, 2015 14:07
Multi Machine Vagrant
VAGRANTFILE_API_VERSION = "2"
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  ["nn01","hdp01"].each do |hostname|
    config.vm.define hostname do |host|
      if hostname.include? "nn" or hostname.include? "esm"
        host.vm.box = "CentOS 6.5 x64"
        config.vm.box_url = "file:///opt/vagrant/vagrant-builder/centos/centos-6-400.box"
else
apple-corps / gist:b980f5ed3a8d0e3d9b32
Created November 5, 2014 00:50
Veewee: can't build CentOS 6.5
veewee vbox build 'CentOS-6.5-200' -a --debug --force --nogui
2014-11-04 09:13:03 -0800 - environment - [veewee] Loading configuration...
2014-11-04 09:13:03 -0800 - - [veewee] Initializing veewee config object
2014-11-04 09:13:03 -0800 - - [veewee] No configfile found
2014-11-04 09:13:03 -0800 - environment - [veewee] Environment initialized (#<Veewee::Environment:0x000000021d7778>)
2014-11-04 09:13:03 -0800 - environment - [veewee] - cwd : /opt/vagrant/veewee
2014-11-04 09:13:03 -0800 - environment - [veewee] - veewee_filename : Veeweefile
2014-11-04 09:13:03 -0800 - environment - [veewee] - template_path : ["/home/alterian/.rvm/gems/ruby-2.1.4/gems/veewee-0.4.5.1/templates", "templates"]
2014-11-04 09:13:03 -0800 - environment - [veewee] - validation_dir : /home/alterian/.rvm/gems/ruby-2.1.4/gems/veewee-0.4.5.1/validation
2014-11-04 09:13:03 -0800 - - [veewee] Reading ostype yamlfile /home/alterian/.rvm/gems/ruby-2.1.4/gems/veewee-0.4.5.1/lib/veewee/config/ostypes.yml
apple-corps / gist:e22465c419d01bece907
Created November 10, 2014 03:23
RabbitMQ error messages
** Reason for termination ==
** {timeout,{gen_server,call,[<0.5074.3249>,get_prefetch_limit]}}
=ERROR REPORT==== 6-Nov-2014::23:48:36 ===
** Generic server <0.5075.3249> terminating
** Last message in was pre_hibernate
** When Server state == {ch,running,rabbit_framing_amqp_0_9_1,2,
<0.4937.3249>,<0.5073.3249>,<0.4937.3249>,
<<"10.51.28.74:64389 -> 10.51.28.182:5672">>,
{lstate,<0.5074.3249>,true,false},
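The `get_prefetch_limit` timeout above points at a channel process that has stopped responding. Channel state, including prefetch settings and unacknowledged message counts, can be inspected from the broker host; a sketch:

```shell
# List channels with the columns relevant to the timeout above
# (column names per rabbitmqctl's channel info items).
rabbitmqctl list_channels pid connection prefetch_count messages_unacknowledged
```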
apple-corps / gist:6ddadefd8e1e058df2cc
Created January 20, 2015 00:13
Hadoop namenode failure
2015-01-19 06:53:50,446 WARN org.apache.hadoop.ipc.Server: IPC Server handler 0 on 8020, call org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol.blockReport from 10.51.28.157:37644 Call#8104114 Retry#0: error: java.lang.OutOfMemoryError: Java heap space
java.lang.OutOfMemoryError: Java heap space
at org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.blockReport(DatanodeProtocolServerSideTranslatorPB.java:144)
at org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtocolService$2.callBlockingMethod(DatanodeProtocolProtos.java:28061)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1026)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1986)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1982)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
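Block-report handling is typically where a NameNode that has outgrown its heap falls over first, as the stack trace above shows. The NameNode heap is raised in hadoop-env.sh; a sketch, where the 4 GB figure is an assumption to be sized against the cluster's block and file count:

```shell
# /etc/hadoop/conf/hadoop-env.sh — give the NameNode a larger heap.
# CDH4 reads HADOOP_NAMENODE_OPTS for NameNode-specific JVM flags.
export HADOOP_NAMENODE_OPTS="-Xmx4g ${HADOOP_NAMENODE_OPTS}"
```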