OpenTSDB
Installation
http://opentsdb.net/docs/build/html/installation.html
1. Install the dependencies:
Runtime Requirements
A Linux system
Java Runtime Environment 1.6 or later
HBase 0.92 or later
GnuPlot 4.2 or later (yum -y install gnuplot)
git (required by the 2.3.0 build)
2. Run build.sh. (2.3.0 has a build bug: copy the third_party directory into the build directory first!)
3. Start/restart commands:
pgrep -f opentsdb|xargs kill -9
nohup ./build/tsdb tsd --config=opentsdb.conf --port=4242 --staticroot=build/staticroot --cachedir=/tmp/opentsdbcachedir --auto-metric > logs/tsdb.log 2>&1 &
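Once the TSD is up, a quick sanity check can go through the HTTP API. A minimal sketch, assuming the TSD answers on localhost:4242 (substitute your bind address) and using a made-up metric name test.sanity; --auto-metric above means it will be created on the fly:
# Confirm the TSD responds:
curl -s http://localhost:4242/api/version
# Write one test data point over HTTP:
curl -s -X POST http://localhost:4242/api/put \
  -H 'Content-Type: application/json' \
  -d "{\"metric\":\"test.sanity\",\"timestamp\":$(date +%s),\"value\":1,\"tags\":{\"host\":\"$(hostname)\"}}"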
Problem:
java.lang.IllegalStateException: All Unique IDs for tag on 3 bytes are already assigned
This means the 3-byte UID space for that UID type (16,777,215 IDs, see the Storage excerpt below) is exhausted.
1. Stop OpenTSDB
pgrep -f opentsdb|xargs kill -9
2. Truncate the HBase tables that OpenTSDB uses
./hbase shell
Check cluster status:
status
List all tables:
list
Truncate the following four tables (truncate is really disable + drop + create under the hood):
truncate 'tsdb'
truncate 'tsdb-meta'
truncate 'tsdb-tree'
truncate 'tsdb-uid'
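To verify the truncation took effect without opening an interactive shell, the row count can be piped in from bash (a quick check; count scans the whole table, so it is only fast when the table is near empty):
echo "count 'tsdb-uid'" | ./hbase shell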
Alternatively, drop the tables and recreate them with OpenTSDB's src/create_table.sh (export HBASE_HOME='/home/hadoop5/hbase'):
env COMPRESSION=NONE HBASE_HOME=path/to/hbase-0.94.X ./src/create_table.sh
3. Restart the TSD:
nohup ./build/tsdb tsd --port=4242 --staticroot=build/staticroot --cachedir=/tmp/opentsdbcachedir --auto-metric > logs/tsdb.log 2>&1 &
http://opentsdb.net/docs/build/html/user_guide/uids.html
Storage
By default, UIDs are encoded on 3 bytes in storage, giving a maximum unique ID of 16,777,215 for each UID type. This is done to reduce the amount of space taken up in storage and to reduce the memory footprint of a TSD. For the vast majority of users, 16 million unique metrics, 16 million unique tag names and 16 million unique tag values should be enough. But if you do need more of a particular type, you can modify the OpenTSDB source code and recompile with 4 bytes or more.
Warning
If you do adjust the byte encoding number, you must start with a fresh tsdb and fresh tsdb-uid table, otherwise the results will be unexpected. If you have data in an existing setup, you must export it, drop all tables, create them from scratch and re-import the data.
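The 16,777,215 figure is simply 2^(8*width) - 1, so a one-liner makes the trade-off between widths concrete (plain bash arithmetic, nothing OpenTSDB-specific):
for w in 3 4 5; do echo "width=$w bytes -> max $(( (1 << (8 * w)) - 1 )) unique IDs"; done
# width=3 bytes -> max 16777215 unique IDs
# width=4 bytes -> max 4294967295 unique IDs
# width=5 bytes -> max 1099511627775 unique IDs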
You can edit src/core/TSDB.java and change the 3 to 5 (8 at most) in the three constants METRICS_WIDTH, TAG_NAME_WIDTH and TAG_VALUE_WIDTH. TAG_VALUE_WIDTH is usually the one that matters; the 2.2 release makes this constant configurable:
private static final String METRICS_QUAL = "metrics";
private static final short METRICS_WIDTH = 3;
private static final String TAG_NAME_QUAL = "tagk";
private static final short TAG_NAME_WIDTH = 3;
private static final String TAG_VALUE_QUAL = "tagv";
private static final short TAG_VALUE_WIDTH = 5;
OpenTSDB 2.2 can set all three values from the config file instead; see http://opentsdb.net/docs/build/html/user_guide/configuration.html
tsd.storage.uid.width.metric (2.2)
tsd.storage.uid.width.tagk (2.2)
tsd.storage.uid.width.tagv (2.2)
But once monitoring data exists in HBase, these values must NOT be changed!
Change the FLUSH_SPEED constant in src/core/CompactionQueue.java of the OpenTSDB source to 1 and recompile. What this actually changes: the default flush speed is 2x, i.e. the compaction of the previous hour's data must finish within half an hour before being written back to HBase; at 1x the TSD gets the full hour to compact the previous hour's data, which makes the write traffic to HBase far smoother.
Details: http://tech.meituan.com/opentsdb_hbase_compaction_problem.html
OpenTSDB 2.2 exposes this value in the config file as well:
tsd.storage.compaction.flush_speed (2.2)
4. Adjust the table-creation script:
vim src/create_table.sh
case $COMPRESSION in
(NONE|LZO|GZ|LZ4|SNAPPY) :;; # Known good.
(*)
echo >&2 "warning: compression codec '$COMPRESSION' might not be supported."
;;
esac
Change the five entries NONE|LZO|GZ|LZ4|SNAPPY to whichever codecs your HBase actually supports.
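Before creating the tables, a codec can be verified against the running HBase with the CompressionTest utility it ships with (a sketch; the file:///tmp path is an arbitrary scratch location):
$HBASE_HOME/bin/hbase org.apache.hadoop.hbase.util.CompressionTest file:///tmp/compression-test lz4
The opentsdb.conf used in this setup: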
tsd.core.auto_create_metrics=true
tsd.http.cachedir=/tmp/opentsdbcachedir
tsd.http.staticroot=build/staticroot
tsd.http.query.allow_delete=true
tsd.http.request.enable_chunked=true
tsd.http.request.max_chunk=409600000
tsd.network.bind=192.168.3.52
tsd.network.port=4242
tsd.query.allow_simultaneous_duplicates=true
tsd.query.skip_unresolved_tagvs=true
tsd.storage.uid.width.metric=3
tsd.storage.uid.width.tagk=3
tsd.storage.uid.width.tagv=5
tsd.storage.compaction.flush_speed=1
tsd.storage.fix_duplicates=true
tsd.storage.enable_appends=true
tsd.storage.enable_compaction=false
tsd.storage.hbase.zk_quorum=BJVPC3-51,BJVPC3-52,BJVPC3-53
Restart script:
#!/bin/sh
TSD_HOME=/data/cloudera/opentsdb
pgrep -f opentsdb.conf|xargs kill -9
nohup $TSD_HOME/build/tsdb tsd --config=$TSD_HOME/opentsdb.conf > $TSD_HOME/logs/tsdb.log 2>&1 &
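After a restart, watching the log until the TSD reports it is serving is a cheap way to catch config errors early:
tail -f $TSD_HOME/logs/tsdb.log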
===============================
=============HBase=============
===============================
With HBase shut down, run:
./bin/hbase-cleanup.sh (--cleanZk|--cleanHdfs|--cleanAll)
Or clean up manually in two steps:
1. Delete HBase's data in ZooKeeper
./zkCli.sh
rmr /hbase
2. Delete HBase's data in HDFS
hadoop fs -rm -r /user/hbase/data
If the disks backing HDFS fill up, HDFS enters safe mode; free some space, then manage safe mode with:
hdfs dfsadmin -safemode get
Usage: hdfs dfsadmin [-safemode enter | leave | get | wait]
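Once space has been freed, safe mode can be left manually if HDFS does not exit it by itself:
hdfs dfsadmin -safemode leave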
Start the HBase master:
sh /home/hadoop5/hbase/bin/hbase-daemon.sh start master
Start a regionserver:
export HBASE_CONF_DIR=/home/hadoop5/hbase/conf;
export HBASE_HOME=/home/hadoop5/hbase;
sh /home/hadoop5/hbase/bin/hbase-daemon.sh --config $HBASE_CONF_DIR start regionserver
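To confirm both daemons came up (jps ships with the JDK; HMaster and HRegionServer are the stock HBase process names):
jps | egrep 'HMaster|HRegionServer'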
Create the tables:
#export COMPRESSION=NONE
export COMPRESSION=LZ4
export HBASE_HOME='/home/hadoop5/hbase'
/home/hadoop5/opentsdb/src/create_table.sh
On CDH 5.6.0, run:
env JAVA_HOME='/usr/java/jdk1.7.0_67-cloudera' COMPRESSION=LZ4 HBASE_HOME='/opt/cloudera/parcels/CDH-5.6.0-1.cdh5.6.0.p0.45/lib/hbase' ./src/create_table.sh
===============================
=============HDFS==============
===============================
Stop HDFS:
./sbin/stop-all.sh, or:
./sbin/stop-dfs.sh && ./sbin/stop-yarn.sh
Start HDFS:
./sbin/start-all.sh, or:
./sbin/start-dfs.sh && ./sbin/start-yarn.sh
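To confirm everything came back (daemon names per stock Hadoop 2.x):
jps | egrep 'NameNode|DataNode|SecondaryNameNode|ResourceManager|NodeManager'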