<?xml version="1.0" encoding="UTF-8"?>
<testsuite name="org.apache.hadoop.hbase.regionserver.TestPerColumnFamilyFlush" time="74.029" tests="6" errors="0" skipped="0" failures="1">
<properties>
<property name="java.runtime.name" value="Java(TM) SE Runtime Environment"/>
<property name="sun.boot.library.path" value="/opt/toolchain/sun-jdk-64bit-1.7.0.67/jre/lib/amd64"/>
<property name="java.vm.version" value="24.65-b04"/>
<property name="java.vm.vendor" value="Oracle Corporation"/>
<property name="java.vendor.url" value="http://java.oracle.com/"/>
<property name="path.separator" value=":"/>
<property name="guice.disable.misplaced.annotation.check" value="true"/>
2015-01-07 05:04:05,431 INFO [main] hbase.ResourceChecker(147): before: master.cleaner.TestSnapshotFromMaster#testGetCompletedSnapshots Thread=325, OpenFileDescriptor=474, MaxFileDescriptor=32768, SystemLoadAverage=227, ProcessCount=131, AvailableMemoryMB=24222, ConnectionCount=0
2015-01-07 05:04:05,438 DEBUG [main] ipc.RpcClientImpl$Connection(339): Use SIMPLE authentication for service MasterService, sasl=false
2015-01-07 05:04:05,438 DEBUG [main] ipc.RpcClientImpl$Connection(703): Connecting to hbase-test-slave-ba6.vpc.cloudera.com/172.26.18.24:55325
2015-01-07 05:04:05,439 DEBUG [RpcServer.listener,port=55325] ipc.RpcServer$Listener(780): RpcServer.listener,port=55325: connection from 172.26.18.24:34248; # active connections: 3
2015-01-07 05:04:05,447 INFO [B.defaultRpcServer.handler=0,queue=0,port=55325] master.HMaster(1270): Client=jenkins//172.26.18.24 create 'test', {NAME => 'fam', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'NONE', REPLICATION_SCOPE => '0', VERSIONS => '1', COMPRESSION => 'NONE',
#!/usr/bin/env bash
# This helper script is designed to be sourced, at which point its functions become available
# to the user. Most of the functions it defines are controlled by environment variables that
# can be set at invocation time. For example,
#
# PULL=false clusterdock_run ./bin/...
clusterdock_run() {
# Variables:
# - CLUSTERDOCK_TARGET_DIR: a folder on the host to mount into /root/target in the clusterdock
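The helper above is cut off here. For reference, below is a minimal sketch of the same sourced, environment-variable-driven pattern; the image name, the PULL gate, and the docker arguments are illustrative assumptions, not the real clusterdock helper.

# Minimal sketch (not the real clusterdock_run): a sourceable function whose behavior
# is driven by environment variables set at invocation.
clusterdock_run() {
    local image="${CLUSTERDOCK_IMAGE:-example/clusterdock:latest}"  # hypothetical variable

    # Skip the image pull when invoked as e.g. `PULL=false clusterdock_run ./bin/...`.
    if [ "${PULL:-true}" = "true" ]; then
        docker pull "${image}"
    fi

    # Mount CLUSTERDOCK_TARGET_DIR (if set) into /root/target inside the container,
    # then execute whatever command line was passed to the function.
    local mount_args=()
    if [ -n "${CLUSTERDOCK_TARGET_DIR:-}" ]; then
        mount_args=(-v "${CLUSTERDOCK_TARGET_DIR}:/root/target")
    fi
    docker run --rm "${mount_args[@]}" "${image}" "$@"
}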
#!/usr/bin/env bash
#
# Script which installs Docker on an Ubuntu 14.04 host and updates daemon configs.
# We hardcode which version of Docker to install to avoid potential breaking changes in new
# releases.
DOCKER_VERSION=1.11.1
# Set bridge IP to avoid conflicts with Cloudera addresses.
BRIDGE_IP="192.168.123.1/24"
# DAEMON_OPTIONS will be an array of arguments to be added to /etc/default/docker.
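The install script is also truncated at this point. A rough sketch of what the remaining steps could look like follows; the package pinning syntax and the DOCKER_OPTS handling are assumptions rather than the original script's exact commands.

# Sketch only: install the pinned release and persist the daemon options where the
# Ubuntu 14.04 init script expects them. Package naming below is an assumption.
apt-get update
apt-get install -y "docker-engine=${DOCKER_VERSION}-0~trusty"

# Give the docker0 bridge the non-conflicting address; on Ubuntu 14.04 the init
# script reads DOCKER_OPTS from /etc/default/docker.
DAEMON_OPTIONS=("--bip=${BRIDGE_IP}")
printf 'DOCKER_OPTS="%s"\n' "${DAEMON_OPTIONS[*]}" >> /etc/default/docker
service docker restart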
[root@node-2 ~]# hbase org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList loop 1 4 250000000 mytmpdir 4
2016-06-08 09:38:52,479 INFO [main] Configuration.deprecation: hadoop.native.lib is deprecated. Instead, use io.native.lib.available
2016-06-08 09:38:52,893 WARN [main] util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/hbase/lib/slf4j-log4j12-1.6.4.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
2016-06-08 09:38:53,117 INFO [main] zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x232ae98 connecting to ZooKeeper ensemble=node-1.cluster:2181
2016-06-08 09:38:53,124 INFO [main] zookeeper.ZooKeeper: Client environment:z
#!/usr/bin/env bash
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
2016-10-04 15:31:21,747 [tserver.TabletServer] INFO : Started replication service on node-8.cluster:10002
2016-10-04 15:31:22,711 [Audit ] INFO : operation: permitted; user: root; client: 192.168.124.8:48556; action: authenticate;
2016-10-04 15:31:22,714 [Audit ] INFO : operation: permitted; user: root; client: 192.168.124.8:48556; action: performSystemAction; principal: root;
2016-10-04 15:31:23,268 [watcher.MonitorLog4jWatcher] INFO : Changing monitor log4j address to node-7.cluster:4560
2016-10-04 15:31:23,268 [watcher.MonitorLog4jWatcher] INFO : Enabled log-forwarding
2016-10-04 15:31:26,274 [tserver.TabletServer] INFO : Loading tablet +r<<
2016-10-04 15:31:26,277 [tserver.TabletServer] INFO : node-8.cluster:10011: got assignment from master: +r<<
2016-10-04 15:31:26,323 [conf.ConfigSanityCheck] WARN : Use of instance.dfs.uri and instance.dfs.dir are deprecated. Consider using instance.volumes instead.
2016-10-04 15:31:26,574 [conf.ConfigSanityCheck] WARN : Use of instance.dfs.uri and instance.dfs.dir
# Install clusterdock.
pip3 install clusterdock
# Clone the Apache Kafka topology for clusterdock.
git clone https://github.com/clusterdock/topology_apache_kafka.git
# Start Apache Kafka (defaults to node-1.cluster, node-2.cluster, node-3.cluster)
clusterdock -v start topology_apache_kafka --brokers node-1 node-2 node-3 --kafka-version 1.0.0
# Start StreamSets Data Collector on the same cluster network.
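The command for that last step is cut off above. One plausible way to run Data Collector on the same Docker network, sketched under the assumption that clusterdock's network is named "cluster" (the container name and hostname are likewise illustrative):

# Sketch, not the original command: attach an SDC container to the Docker network
# the Kafka brokers are on, and publish the default web UI port.
docker run -d --name sdc --network cluster --hostname sdc.cluster \
    -p 18630:18630 streamsets/datacollector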
test_kafka_destination.py::test_kafka_error_destination
2018-12-18 22:47:24,126 [user:*admin] [pipeline:To Error Kafka/ToErrorKafkaf5f4826a-dc20-400e-9fba-0fc7421e7256] [runner:] [thread:ProductionPipelineRunnable-ToErrorKafkaf5f4826a-dc20-400e-9fba-0fc7421e7256-To Error Kafka] INFO AppInfoParser - Kafka version : 0.10.0-kafka-2.1.1
2018-12-18 22:47:24,126 [user:*admin] [pipeline:To Error Kafka/ToErrorKafkaf5f4826a-dc20-400e-9fba-0fc7421e7256] [runner:] [thread:ProductionPipelineRunnable-ToErrorKafkaf5f4826a-dc20-400e-9fba-0fc7421e7256-To Error Kafka] INFO AppInfoParser - Kafka commitId : unknown
2018-12-18 22:47:31,266 [user:*admin] [pipeline:To Error Kafka/ToErrorKafkaf5f4826a-dc20-400e-9fba-0fc7421e7256] [runner:] [thread:ProductionPipelineRunnable-ToErrorKafkaf5f4826a-dc20-400e-9fba-0fc7421e7256-To Error Kafka] ERROR ClientUtils - Failed to close consumer metrics
java.lang.StackOverflowError
at java.io.ExpiringCache.get(ExpiringCache.java:78)
at java.io.UnixFileSystem.canonicalize(UnixFileSystem.java:152)
at java.io.File.getCanonicalPath(File.java:618)
This file has been truncated.
2018-12-21 01:34:12,512 [user:] [pipeline:] [runner:] [thread:main] INFO Main - -----------------------------------------------------------------
2018-12-21 01:34:12,513 [user:] [pipeline:] [runner:] [thread:main] INFO Main - Build info:
2018-12-21 01:34:12,513 [user:] [pipeline:] [runner:] [thread:main] INFO Main - Version : 3.7.0
2018-12-21 01:34:12,514 [user:] [pipeline:] [runner:] [thread:main] INFO Main - Date : 2018-12-20T22:08Z
2018-12-21 01:34:12,514 [user:] [pipeline:] [runner:] [thread:main] INFO Main - Built by : ubuntu
2018-12-21 01:34:12,514 [user:] [pipeline:] [runner:] [thread:main] INFO Main - Repo SHA : 6da8299871663ddf58b033b092b2544da49d2b2d
2018-12-21 01:34:12,514 [user:] [pipeline:] [runner:] [thread:main] INFO Main - Source MD5 : ca4c16c35ae9d9ab4fc8e38c20983077
2018-12-21 01:34:12,514 [user:] [pipeline:] [runner:] [thread:main] INFO Main - -----------------------------------------------------------------
2018-12-21 01:34:12,514 [user:]