Created May 11, 2017 13:23
2017-05-11 15:10:06,962 INFO org.apache.flink.yarn.YarnTaskManagerRunner - --------------------------------------------------------------------------------
2017-05-11 15:10:06,963 INFO org.apache.flink.yarn.YarnTaskManagerRunner - Starting YARN TaskManager (Version: 1.3-SNAPSHOT, Rev:44a120b, Date:11.05.2017 @ 13:38:36 CEST)
2017-05-11 15:10:06,963 INFO org.apache.flink.yarn.YarnTaskManagerRunner - Current user: splat
2017-05-11 15:10:06,963 INFO org.apache.flink.yarn.YarnTaskManagerRunner - JVM: Java HotSpot(TM) 64-Bit Server VM - Oracle Corporation - 1.8/25.131-b11
2017-05-11 15:10:06,963 INFO org.apache.flink.yarn.YarnTaskManagerRunner - Maximum heap size: 4217 MiBytes
2017-05-11 15:10:06,963 INFO org.apache.flink.yarn.YarnTaskManagerRunner - JAVA_HOME: /fjord/java/
2017-05-11 15:10:06,965 INFO org.apache.flink.yarn.YarnTaskManagerRunner - Hadoop version: 2.6.0
2017-05-11 15:10:06,965 INFO org.apache.flink.yarn.YarnTaskManagerRunner - JVM Options:
2017-05-11 15:10:06,965 INFO org.apache.flink.yarn.YarnTaskManagerRunner - -Xms4400m
2017-05-11 15:10:06,965 INFO org.apache.flink.yarn.YarnTaskManagerRunner - -Xmx4400m
2017-05-11 15:10:06,965 INFO org.apache.flink.yarn.YarnTaskManagerRunner - -Dlog.file=/fjord/hadoop/data/6/yarn/container-logs/application_1494426363399_0012/container_1494426363399_0012_01_000002/taskmanager.log
2017-05-11 15:10:06,965 INFO org.apache.flink.yarn.YarnTaskManagerRunner - -Dlogback.configurationFile=file:./logback.xml
2017-05-11 15:10:06,965 INFO org.apache.flink.yarn.YarnTaskManagerRunner - -Dlog4j.configuration=file:./log4j.properties
2017-05-11 15:10:06,965 INFO org.apache.flink.yarn.YarnTaskManagerRunner - Program Arguments:
2017-05-11 15:10:06,965 INFO org.apache.flink.yarn.YarnTaskManagerRunner - --configDir
2017-05-11 15:10:06,965 INFO org.apache.flink.yarn.YarnTaskManagerRunner - .
2017-05-11 15:10:06,966 INFO org.apache.flink.yarn.YarnTaskManagerRunner - Classpath: lib/flink-connector-kafka-0.8_2.10-1.3-SNAPSHOT.jar:lib/flink-connector-kafka-base_2.10-1.3-SNAPSHOT.jar:lib/flink-dist_2.10-1.3-SNAPSHOT.jar:lib/flink-python_2.10-1.3-SNAPSHOT.jar:lib/flink-shaded-hadoop2-1.3-SNAPSHOT.jar:lib/kafka-clients-0.8.2.2.jar:lib/kafka_2.10-0.8.2.2.jar:lib/log4j-1.2.17.jar:lib/slf4j-log4j12-1.7.7.jar:log4j.properties:logback.xml:rbea-on-flink-2.0-SNAPSHOT.jar:flink.jar:flink-conf.yaml::/etc/hadoop/conf.cloudera.yarn2:/run/cloudera-scm-agent/process/948-yarn-NODEMANAGER:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/hadoop-common-2.6.0-cdh5.11.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/parquet-protobuf.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/parquet-generator.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/hadoop-common.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/parquet-cascading.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/hadoop-auth.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/hadoop-auth-2.6.0-cdh5.11.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/hadoop-aws-2.6.0-cdh5.11.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/parquet-thrift.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/parquet-common.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/hadoop-azure-datalake.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/hadoop-nfs.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/parquet-avro.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/hadoop-annotations-2.6.0-cdh5.11.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/hadoop-azure-datalake-2.6.0-cdh5.11.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/parquet-test-hadoop2.jar:/fjord/hadoop/parcels/C
DH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/parquet-encoding.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/hadoop-nfs-2.6.0-cdh5.11.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/parquet-jackson.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/parquet-pig-bundle.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/parquet-column.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/parquet-format.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/parquet-scrooge_2.10.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/hadoop-common-tests.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/parquet-format-sources.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/hadoop-annotations.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/parquet-hadoop-bundle.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/parquet-tools.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/hadoop-common-2.6.0-cdh5.11.0-tests.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/parquet-hadoop.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/parquet-pig.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/hadoop-aws.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/parquet-scala_2.10.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/parquet-format-javadoc.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/commons-codec-1.4.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/jersey-core-1.9.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/slf4j-log4j12.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/servlet-api-2.5.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/commons-digester-1.8.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/h
adoop/lib/curator-client-2.7.1.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/xz-1.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/jetty-util-6.1.26.cloudera.4.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/activation-1.1.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/curator-recipes-2.7.1.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/asm-3.2.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/stax-api-1.0-2.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/aws-java-sdk-dynamodb-1.10.6.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/guava-11.0.2.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/jsp-api-2.1.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/logredactor-1.0.3.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/httpcore-4.2.5.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/aws-java-sdk-kms-1.10.6.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/hamcrest-core-1.3.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/jets3t-0.9.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/aws-java-sdk-s3-1.10.6.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/avro.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/slf4j-api-1.7.5.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/java-xmlbuilder-0.4.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/commons-el-1.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/jaxb-impl-2.2.3-1.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/netty-3.10.5.Final.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cd
h5.11.0.p0.34/lib/hadoop/lib/commons-math3-3.1.1.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/log4j-1.2.17.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/jasper-runtime-5.5.23.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/gson-2.2.4.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/azure-data-lake-store-sdk-2.1.4.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/commons-io-2.4.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/aws-java-sdk-core-1.10.6.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/commons-compress-1.4.1.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/commons-logging-1.1.3.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/curator-framework-2.7.1.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/jsch-0.1.42.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/commons-beanutils-core-1.8.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/snappy-java-1.0.4.1.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/commons-beanutils-1.9.2.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/jaxb-api-2.2.2.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/api-asn1-api-1.0.0-M20.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/commons-collections-3.2.2.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/protobuf-java-2.5.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/aws-java-sdk-sts-1.10.6.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/jetty-6.1.26.cloudera.4.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/apacheds-i18n-2.0.0-M15.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/httpclient-4.2.5.jar:/fjord/hadoop/parcels/CDH-5.11.0
-1.cdh5.11.0.p0.34/lib/hadoop/lib/commons-lang-2.6.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/jersey-server-1.9.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/commons-configuration-1.6.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/jsr305-3.0.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/jasper-compiler-5.5.23.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/jackson-xc-1.8.8.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/jackson-jaxrs-1.8.8.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/jettison-1.1.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/hue-plugins-3.9.0-cdh5.11.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/zookeeper.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/junit-4.11.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/commons-net-3.1.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/htrace-core4-4.0.1-incubating.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/commons-cli-1.2.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/api-util-1.0.0-M20.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/paranamer-2.3.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/jersey-json-1.9.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/xmlenc-0.52.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/mockito-all-1.8.5.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/commons-httpclient-3.1.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/hadoop-hdfs-2.6.0-cdh5.11.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/hadoop-hdfs-nfs-2.6.0-cdh5.11.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/had
oop-hdfs-tests.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/hadoop-hdfs.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/hadoop-hdfs-nfs.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/hadoop-hdfs-2.6.0-cdh5.11.0-tests.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/lib/commons-codec-1.4.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/lib/jersey-core-1.9.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/lib/servlet-api-2.5.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/lib/xml-apis-1.3.04.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/lib/jetty-util-6.1.26.cloudera.4.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/lib/asm-3.2.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/lib/guava-11.0.2.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/lib/jsp-api-2.1.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/lib/xercesImpl-2.9.1.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/lib/commons-el-1.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/lib/netty-3.10.5.Final.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/lib/log4j-1.2.17.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/lib/jasper-runtime-5.5.23.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/lib/jackson-core-asl-1.8.8.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/lib/commons-io-2.4.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/lib/leveldbjni-all-1.8.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/lib/jackson-mapper-asl-1.8.8.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/lib/commons-logging-1.1.3.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh
5.11.0.p0.34/lib/hadoop-hdfs/lib/commons-daemon-1.0.13.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/lib/protobuf-java-2.5.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/lib/jetty-6.1.26.cloudera.4.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/lib/commons-lang-2.6.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/lib/jersey-server-1.9.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/lib/jsr305-3.0.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/lib/htrace-core4-4.0.1-incubating.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/lib/commons-cli-1.2.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/lib/xmlenc-0.52.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/hadoop-yarn-client.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/hadoop-yarn-server-web-proxy.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/hadoop-yarn-server-nodemanager-2.6.0-cdh5.11.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/hadoop-yarn-server-applicationhistoryservice-2.6.0-cdh5.11.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/hadoop-yarn-server-web-proxy-2.6.0-cdh5.11.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/hadoop-yarn-api-2.6.0-cdh5.11.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/hadoop-yarn-applications-unmanaged-am-launcher.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/hadoop-yarn-registry-2.6.0-cdh5.11.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/hadoop-yarn-registry.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/hadoop-yarn-applications-distributedshell.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/hadoop-yarn-server-nodemanager.jar
:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/hadoop-yarn-client-2.6.0-cdh5.11.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/hadoop-yarn-server-common.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/hadoop-yarn-api.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/hadoop-yarn-server-common-2.6.0-cdh5.11.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/hadoop-yarn-server-resourcemanager-2.6.0-cdh5.11.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.6.0-cdh5.11.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/hadoop-yarn-server-applicationhistoryservice.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/hadoop-yarn-server-tests.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/hadoop-yarn-server-resourcemanager.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/hadoop-yarn-server-tests-2.6.0-cdh5.11.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/hadoop-yarn-common-2.6.0-cdh5.11.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/hadoop-yarn-common.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/hadoop-yarn-applications-distributedshell-2.6.0-cdh5.11.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/commons-codec-1.4.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/jersey-core-1.9.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/javax.inject-1.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/servlet-api-2.5.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/xz-1.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/jetty-util-6.1.26.cloudera.4.jar:/fjord/hadoop/parcels/CDH-5.11.0-1
.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/activation-1.1.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/jersey-client-1.9.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/asm-3.2.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/stax-api-1.0-2.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/guava-11.0.2.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/jaxb-impl-2.2.3-1.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/spark-yarn-shuffle.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/log4j-1.2.17.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/jackson-core-asl-1.8.8.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/jline-2.11.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/guice-servlet-3.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/commons-io-2.4.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/leveldbjni-all-1.8.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/commons-compress-1.4.1.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/guice-3.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/jackson-mapper-asl-1.8.8.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/commons-logging-1.1.3.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/aopalliance-1.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/jaxb-api-2.2.2.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/commons-collections-3.2.2.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/protobuf-java-2.5.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/spark-1.6.0-cdh5.11.0-yarn
-shuffle.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/jetty-6.1.26.cloudera.4.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/jersey-guice-1.9.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/commons-lang-2.6.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/jersey-server-1.9.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/jsr305-3.0.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/jackson-xc-1.8.8.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/jackson-jaxrs-1.8.8.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/jettison-1.1.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/zookeeper.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/commons-cli-1.2.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/jersey-json-1.9.jar
2017-05-11 15:10:06,966 INFO org.apache.flink.yarn.YarnTaskManagerRunner - --------------------------------------------------------------------------------
2017-05-11 15:10:06,966 INFO org.apache.flink.yarn.YarnTaskManagerRunner - Registered UNIX signal handlers for [TERM, HUP, INT]
2017-05-11 15:10:07,113 INFO org.apache.flink.runtime.taskmanager.TaskManager - Loading configuration from .
2017-05-11 15:10:07,117 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: akka.lookup.timeout, 120 s
2017-05-11 15:10:07,117 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: high-availability.cluster-id, application_1494426363399_0012
2017-05-11 15:10:07,117 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: jobmanager.rpc.address, splat34.sto.midasplayer.com
2017-05-11 15:10:07,117 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: recovery.mode, zookeeper
2017-05-11 15:10:07,117 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: yarn.heap-cutoff-ratio, 0.1
2017-05-11 15:10:07,117 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: state.savepoints.dir, hdfs:///flink/external-checkpoints/bifrost
2017-05-11 15:10:07,117 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: jobmanager.web.port, 0
2017-05-11 15:10:07,117 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: jobmanager.rpc.port, 57191
2017-05-11 15:10:07,117 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: yarn.containers.vcores, 4
2017-05-11 15:10:07,117 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: recovery.zookeeper.storageDir, hdfs:///flink/ha
2017-05-11 15:10:07,118 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: yarn.maximum-failed-containers, 100
2017-05-11 15:10:07,118 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: containerized.heap-cutoff-ratio, 0.1
2017-05-11 15:10:07,118 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: env.log.dir, /home/splat/flink/log/
2017-05-11 15:10:07,118 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: recovery.zookeeper.path.root, /flink-splat
2017-05-11 15:10:07,118 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: taskmanager.maxRegistrationDuration, 5 minutes
2017-05-11 15:10:07,118 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: parallelism.default, 1
2017-05-11 15:10:07,118 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: yarn.application-attempts, 10
2017-05-11 15:10:07,118 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: recovery.zookeeper.quorum, zk04.sto.midasplayer.com:2181,zk05.sto.midasplayer.com:2181,zk06.sto.midasplayer.com:2181
2017-05-11 15:10:07,118 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: taskmanager.numberOfTaskSlots, 1
2017-05-11 15:10:07,119 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: fs.hdfs.hadoopconf, /home/splat/rbea-on-flink/deployment/environments/test/yarn-conf
2017-05-11 15:10:07,119 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: akka.ask.timeout, 120 s
2017-05-11 15:10:07,119 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: akka.client.timeout, 1200 s
2017-05-11 15:10:07,119 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: state.checkpoints.dir, hdfs:///flink/external-checkpoints/bifrost
2017-05-11 15:10:07,122 INFO org.apache.flink.yarn.YarnTaskManagerRunner - Current working/local Directory: /fjord/hadoop/data/nvme/splat/yarn-local-dir/usercache/splat/appcache/application_1494426363399_0012
2017-05-11 15:10:07,122 INFO org.apache.flink.yarn.YarnTaskManagerRunner - Current working Directory: /fjord/hadoop/data/nvme/splat/yarn-local-dir/usercache/splat/appcache/application_1494426363399_0012/container_1494426363399_0012_01_000002
2017-05-11 15:10:07,122 INFO org.apache.flink.yarn.YarnTaskManagerRunner - TM: remoteKeytabPath obtained null
2017-05-11 15:10:07,122 INFO org.apache.flink.yarn.YarnTaskManagerRunner - TM: remoteKeytabPrincipal obtained null
2017-05-11 15:10:07,122 INFO org.apache.flink.yarn.YarnTaskManagerRunner - Setting directories for temporary file /fjord/hadoop/data/nvme/splat/yarn-local-dir/usercache/splat/appcache/application_1494426363399_0012
2017-05-11 15:10:07,122 INFO org.apache.flink.yarn.YarnTaskManagerRunner - YARN daemon is running as: splat Yarn client user obtainer: splat
2017-05-11 15:10:07,123 INFO org.apache.flink.yarn.YarnTaskManagerRunner - ResourceID assigned for this container: container_1494426363399_0012_01_000002
2017-05-11 15:10:07,152 INFO org.apache.flink.runtime.security.modules.HadoopModule - Hadoop user set to splat (auth:SIMPLE)
2017-05-11 15:10:07,179 WARN org.apache.flink.configuration.Configuration - Config uses deprecated configuration key 'recovery.mode' instead of proper key 'high-availability'
2017-05-11 15:10:07,184 WARN org.apache.flink.configuration.Configuration - Config uses deprecated configuration key 'recovery.zookeeper.quorum' instead of proper key 'high-availability.zookeeper.quorum'
2017-05-11 15:10:07,184 WARN org.apache.flink.configuration.Configuration - Config uses deprecated configuration key 'recovery.zookeeper.path.root' instead of proper key 'high-availability.zookeeper.path.root'
2017-05-11 15:10:07,185 INFO org.apache.flink.runtime.util.ZooKeeperUtils - Enforcing default ACL for ZK connections
2017-05-11 15:10:07,185 INFO org.apache.flink.runtime.util.ZooKeeperUtils - Using '/flink-splat/application_1494426363399_0012' as Zookeeper namespace.
2017-05-11 15:10:07,246 INFO org.apache.flink.shaded.org.apache.curator.framework.imps.CuratorFrameworkImpl - Starting
2017-05-11 15:10:07,254 INFO org.apache.zookeeper.ZooKeeper - Client environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT
2017-05-11 15:10:07,255 INFO org.apache.zookeeper.ZooKeeper - Client environment:host.name=splat34.sto.midasplayer.com
2017-05-11 15:10:07,255 INFO org.apache.zookeeper.ZooKeeper - Client environment:java.version=1.8.0_131
2017-05-11 15:10:07,255 INFO org.apache.zookeeper.ZooKeeper - Client environment:java.vendor=Oracle Corporation
2017-05-11 15:10:07,255 INFO org.apache.zookeeper.ZooKeeper - Client environment:java.home=/usr/lib/jvm/java-8-oracle/jre
2017-05-11 15:10:07,255 INFO org.apache.zookeeper.ZooKeeper - Client environment:java.class.path=lib/flink-connector-kafka-0.8_2.10-1.3-SNAPSHOT.jar:lib/flink-connector-kafka-base_2.10-1.3-SNAPSHOT.jar:lib/flink-dist_2.10-1.3-SNAPSHOT.jar:lib/flink-python_2.10-1.3-SNAPSHOT.jar:lib/flink-shaded-hadoop2-1.3-SNAPSHOT.jar:lib/kafka-clients-0.8.2.2.jar:lib/kafka_2.10-0.8.2.2.jar:lib/log4j-1.2.17.jar:lib/slf4j-log4j12-1.7.7.jar:log4j.properties:logback.xml:rbea-on-flink-2.0-SNAPSHOT.jar:flink.jar:flink-conf.yaml::/etc/hadoop/conf.cloudera.yarn2:/run/cloudera-scm-agent/process/948-yarn-NODEMANAGER:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/hadoop-common-2.6.0-cdh5.11.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/parquet-protobuf.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/parquet-generator.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/hadoop-common.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/parquet-cascading.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/hadoop-auth.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/hadoop-auth-2.6.0-cdh5.11.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/hadoop-aws-2.6.0-cdh5.11.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/parquet-thrift.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/parquet-common.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/hadoop-azure-datalake.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/hadoop-nfs.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/parquet-avro.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/hadoop-annotations-2.6.0-cdh5.11.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/hadoop-azure-datalake-2.6.0-cdh5.11.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/parquet-test-hadoop2.jar:/fjord/hadoo
p/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/parquet-encoding.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/hadoop-nfs-2.6.0-cdh5.11.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/parquet-jackson.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/parquet-pig-bundle.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/parquet-column.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/parquet-format.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/parquet-scrooge_2.10.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/hadoop-common-tests.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/parquet-format-sources.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/hadoop-annotations.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/parquet-hadoop-bundle.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/parquet-tools.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/hadoop-common-2.6.0-cdh5.11.0-tests.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/parquet-hadoop.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/parquet-pig.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/hadoop-aws.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/parquet-scala_2.10.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/parquet-format-javadoc.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/commons-codec-1.4.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/jersey-core-1.9.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/slf4j-log4j12.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/servlet-api-2.5.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/commons-digester-1.8.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.
p0.34/lib/hadoop/lib/curator-client-2.7.1.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/xz-1.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/jetty-util-6.1.26.cloudera.4.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/activation-1.1.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/curator-recipes-2.7.1.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/asm-3.2.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/stax-api-1.0-2.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/aws-java-sdk-dynamodb-1.10.6.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/guava-11.0.2.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/jsp-api-2.1.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/logredactor-1.0.3.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/httpcore-4.2.5.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/aws-java-sdk-kms-1.10.6.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/hamcrest-core-1.3.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/jets3t-0.9.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/aws-java-sdk-s3-1.10.6.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/avro.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/slf4j-api-1.7.5.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/java-xmlbuilder-0.4.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/commons-el-1.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/jaxb-impl-2.2.3-1.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/netty-3.10.5.Final.jar:/fjord/hadoop/parcels/CDH-
5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/commons-math3-3.1.1.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/log4j-1.2.17.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/jasper-runtime-5.5.23.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/gson-2.2.4.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/azure-data-lake-store-sdk-2.1.4.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/commons-io-2.4.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/aws-java-sdk-core-1.10.6.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/commons-compress-1.4.1.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/commons-logging-1.1.3.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/curator-framework-2.7.1.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/jsch-0.1.42.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/commons-beanutils-core-1.8.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/snappy-java-1.0.4.1.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/commons-beanutils-1.9.2.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/jaxb-api-2.2.2.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/api-asn1-api-1.0.0-M20.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/commons-collections-3.2.2.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/protobuf-java-2.5.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/aws-java-sdk-sts-1.10.6.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/jetty-6.1.26.cloudera.4.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/apacheds-i18n-2.0.0-M15.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/httpclient-4.2.5.jar:/fjord/hadoop/parcels
/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/commons-lang-2.6.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/jersey-server-1.9.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/commons-configuration-1.6.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/jsr305-3.0.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/jasper-compiler-5.5.23.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/jackson-xc-1.8.8.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/jackson-jaxrs-1.8.8.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/jettison-1.1.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/hue-plugins-3.9.0-cdh5.11.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/zookeeper.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/junit-4.11.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/commons-net-3.1.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/htrace-core4-4.0.1-incubating.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/commons-cli-1.2.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/api-util-1.0.0-M20.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/paranamer-2.3.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/jersey-json-1.9.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/xmlenc-0.52.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/mockito-all-1.8.5.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/commons-httpclient-3.1.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/hadoop-hdfs-2.6.0-cdh5.11.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/hadoop-hdfs-nfs-2.6.0-cdh5.11.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hado
op-hdfs/hadoop-hdfs-tests.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/hadoop-hdfs.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/hadoop-hdfs-nfs.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/hadoop-hdfs-2.6.0-cdh5.11.0-tests.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/lib/commons-codec-1.4.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/lib/jersey-core-1.9.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/lib/servlet-api-2.5.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/lib/xml-apis-1.3.04.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/lib/jetty-util-6.1.26.cloudera.4.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/lib/asm-3.2.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/lib/guava-11.0.2.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/lib/jsp-api-2.1.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/lib/xercesImpl-2.9.1.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/lib/commons-el-1.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/lib/netty-3.10.5.Final.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/lib/log4j-1.2.17.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/lib/jasper-runtime-5.5.23.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/lib/jackson-core-asl-1.8.8.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/lib/commons-io-2.4.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/lib/leveldbjni-all-1.8.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/lib/jackson-mapper-asl-1.8.8.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/lib/commons-logging-1.1.3.jar:/fjord/hadoop/parcels/CDH-5
.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/lib/commons-daemon-1.0.13.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/lib/protobuf-java-2.5.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/lib/jetty-6.1.26.cloudera.4.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/lib/commons-lang-2.6.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/lib/jersey-server-1.9.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/lib/jsr305-3.0.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/lib/htrace-core4-4.0.1-incubating.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/lib/commons-cli-1.2.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/lib/xmlenc-0.52.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/hadoop-yarn-client.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/hadoop-yarn-server-web-proxy.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/hadoop-yarn-server-nodemanager-2.6.0-cdh5.11.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/hadoop-yarn-server-applicationhistoryservice-2.6.0-cdh5.11.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/hadoop-yarn-server-web-proxy-2.6.0-cdh5.11.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/hadoop-yarn-api-2.6.0-cdh5.11.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/hadoop-yarn-applications-unmanaged-am-launcher.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/hadoop-yarn-registry-2.6.0-cdh5.11.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/hadoop-yarn-registry.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/hadoop-yarn-applications-distributedshell.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/hadoop-yarn-server-node
manager.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/hadoop-yarn-client-2.6.0-cdh5.11.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/hadoop-yarn-server-common.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/hadoop-yarn-api.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/hadoop-yarn-server-common-2.6.0-cdh5.11.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/hadoop-yarn-server-resourcemanager-2.6.0-cdh5.11.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.6.0-cdh5.11.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/hadoop-yarn-server-applicationhistoryservice.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/hadoop-yarn-server-tests.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/hadoop-yarn-server-resourcemanager.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/hadoop-yarn-server-tests-2.6.0-cdh5.11.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/hadoop-yarn-common-2.6.0-cdh5.11.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/hadoop-yarn-common.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/hadoop-yarn-applications-distributedshell-2.6.0-cdh5.11.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/commons-codec-1.4.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/jersey-core-1.9.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/javax.inject-1.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/servlet-api-2.5.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/xz-1.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/jetty-util-6.1.26.cloudera.4.jar:/fjord/hadoop/parcels/C
DH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/activation-1.1.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/jersey-client-1.9.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/asm-3.2.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/stax-api-1.0-2.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/guava-11.0.2.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/jaxb-impl-2.2.3-1.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/spark-yarn-shuffle.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/log4j-1.2.17.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/jackson-core-asl-1.8.8.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/jline-2.11.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/guice-servlet-3.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/commons-io-2.4.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/leveldbjni-all-1.8.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/commons-compress-1.4.1.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/guice-3.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/jackson-mapper-asl-1.8.8.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/commons-logging-1.1.3.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/aopalliance-1.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/jaxb-api-2.2.2.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/commons-collections-3.2.2.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/protobuf-java-2.5.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/spark-1.6.0-cdh
5.11.0-yarn-shuffle.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/jetty-6.1.26.cloudera.4.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/jersey-guice-1.9.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/commons-lang-2.6.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/jersey-server-1.9.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/jsr305-3.0.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/jackson-xc-1.8.8.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/jackson-jaxrs-1.8.8.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/jettison-1.1.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/zookeeper.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/commons-cli-1.2.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/jersey-json-1.9.jar
2017-05-11 15:10:07,255 INFO  org.apache.zookeeper.ZooKeeper                                - Client environment:java.library.path=:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/native:/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
2017-05-11 15:10:07,255 INFO  org.apache.zookeeper.ZooKeeper                                - Client environment:java.io.tmpdir=/tmp
2017-05-11 15:10:07,255 INFO  org.apache.zookeeper.ZooKeeper                                - Client environment:java.compiler=<NA>
2017-05-11 15:10:07,255 INFO  org.apache.zookeeper.ZooKeeper                                - Client environment:os.name=Linux
2017-05-11 15:10:07,255 INFO  org.apache.zookeeper.ZooKeeper                                - Client environment:os.arch=amd64
2017-05-11 15:10:07,255 INFO  org.apache.zookeeper.ZooKeeper                                - Client environment:os.version=3.16.0-4-amd64
2017-05-11 15:10:07,259 INFO  org.apache.zookeeper.ZooKeeper                                - Client environment:user.name=yarn
2017-05-11 15:10:07,260 INFO  org.apache.zookeeper.ZooKeeper                                - Client environment:user.home=/var/lib/hadoop-yarn
2017-05-11 15:10:07,260 INFO  org.apache.zookeeper.ZooKeeper                                - Client environment:user.dir=/fjord/hadoop/data/nvme/splat/yarn-local-dir/usercache/splat/appcache/application_1494426363399_0012/container_1494426363399_0012_01_000002
2017-05-11 15:10:07,260 INFO  org.apache.zookeeper.ZooKeeper                                - Initiating client connection, connectString=zk04.sto.midasplayer.com:2181,zk05.sto.midasplayer.com:2181,zk06.sto.midasplayer.com:2181 sessionTimeout=60000 watcher=org.apache.flink.shaded.org.apache.curator.ConnectionState@15043a2f
2017-05-11 15:10:07,274 WARN  org.apache.zookeeper.ClientCnxn                               - SASL configuration failed: javax.security.auth.login.LoginException: No JAAS configuration section named 'Client' was found in specified JAAS configuration file: '/tmp/jaas-1580594546717889748.conf'. Will continue connection to Zookeeper server without SASL authentication, if Zookeeper server allows it.
2017-05-11 15:10:07,275 INFO  org.apache.zookeeper.ClientCnxn                               - Opening socket connection to server zk05.sto.midasplayer.com/172.26.82.243:2181
2017-05-11 15:10:07,276 ERROR org.apache.flink.shaded.org.apache.curator.ConnectionState    - Authentication failed
2017-05-11 15:10:07,280 INFO  org.apache.zookeeper.ClientCnxn                               - Socket connection established to zk05.sto.midasplayer.com/172.26.82.243:2181, initiating session
2017-05-11 15:10:07,285 INFO  org.apache.zookeeper.ClientCnxn                               - Session establishment complete on server zk05.sto.midasplayer.com/172.26.82.243:2181, sessionid = 0x25aad587314e78d, negotiated timeout = 40000
2017-05-11 15:10:07,286 INFO  org.apache.flink.shaded.org.apache.curator.framework.state.ConnectionStateManager  - State change: CONNECTED
2017-05-11 15:10:07,287 INFO  org.apache.flink.runtime.leaderretrieval.ZooKeeperLeaderRetrievalService  - Starting ZooKeeperLeaderRetrievalService.
2017-05-11 15:10:07,307 INFO  org.apache.flink.runtime.util.LeaderRetrievalUtils            - Trying to select the network interface and address to use by connecting to the leading JobManager.
2017-05-11 15:10:07,307 INFO  org.apache.flink.runtime.util.LeaderRetrievalUtils            - TaskManager will try to connect for 120000 milliseconds before falling back to heuristics
2017-05-11 15:10:07,378 INFO  org.apache.flink.runtime.net.ConnectionUtils                  - Retrieved new target address splat34.sto.midasplayer.com/172.26.83.103:57191.
2017-05-11 15:10:07,381 INFO  org.apache.flink.runtime.leaderretrieval.ZooKeeperLeaderRetrievalService  - Stopping ZooKeeperLeaderRetrievalService.
2017-05-11 15:10:07,384 INFO  org.apache.flink.runtime.taskmanager.TaskManager              - TaskManager will use hostname/address 'splat34.sto.midasplayer.com' (172.26.83.103) for communication.
2017-05-11 15:10:07,384 INFO  org.apache.flink.runtime.taskmanager.TaskManager              - Starting TaskManager
2017-05-11 15:10:07,385 INFO  org.apache.flink.runtime.taskmanager.TaskManager              - Starting TaskManager actor system at splat34.sto.midasplayer.com:0.
2017-05-11 15:10:07,676 INFO  akka.event.slf4j.Slf4jLogger                                  - Slf4jLogger started
2017-05-11 15:10:07,762 INFO  Remoting                                                      - Starting remoting
2017-05-11 15:10:07,909 INFO  Remoting                                                      - Remoting started; listening on addresses :[akka.tcp://flink@splat34.sto.midasplayer.com:51927]
2017-05-11 15:10:07,916 INFO  org.apache.flink.runtime.taskmanager.TaskManager              - Starting TaskManager actor
2017-05-11 15:10:07,934 INFO  org.apache.flink.runtime.io.network.netty.NettyConfig         - NettyConfig [server address: splat34.sto.midasplayer.com/172.26.83.103, server port: 0, ssl enabled: false, memory segment size (bytes): 32768, transport type: NIO, number of server threads: 1 (manual), number of client threads: 1 (manual), server connect backlog: 0 (use Netty's default), client connect timeout (sec): 120, send/receive buffer size (bytes): 0 (use Netty's default)]
2017-05-11 15:10:07,943 INFO  org.apache.flink.runtime.taskexecutor.TaskManagerConfiguration  - Messages have a max timeout of 120000 ms
2017-05-11 15:10:07,948 INFO  org.apache.flink.runtime.taskexecutor.TaskManagerServices     - Temporary file directory '/fjord/hadoop/data/nvme/splat/yarn-local-dir/usercache/splat/appcache/application_1494426363399_0012': total 1833 GB, usable 1740 GB (94.93% usable)
2017-05-11 15:10:08,199 INFO  org.apache.flink.runtime.io.network.buffer.NetworkBufferPool  - Allocated 418 MB for network buffer pool (number of memory segments: 13401, bytes per segment: 32768).
2017-05-11 15:10:08,324 INFO  org.apache.flink.runtime.io.network.NetworkEnvironment        - Starting the network environment and its components.
2017-05-11 15:10:08,330 INFO  org.apache.flink.runtime.io.network.netty.NettyClient         - Successful initialization (took 2 ms).
2017-05-11 15:10:08,355 INFO  org.apache.flink.runtime.io.network.netty.NettyServer         - Successful initialization (took 25 ms). Listening on SocketAddress /172.26.83.103:49369.
2017-05-11 15:10:08,529 INFO  org.apache.flink.runtime.taskexecutor.TaskManagerServices     - Limiting managed memory to 0.7 of the currently free heap space (2637 MB), memory will be allocated lazily.
2017-05-11 15:10:08,533 INFO  org.apache.flink.runtime.io.disk.iomanager.IOManager          - I/O manager uses directory /fjord/hadoop/data/nvme/splat/yarn-local-dir/usercache/splat/appcache/application_1494426363399_0012/flink-io-8d33e4e7-dfda-47a9-8ec0-563e57216f2c for spill files.
2017-05-11 15:10:08,536 INFO  org.apache.flink.runtime.metrics.MetricRegistry               - No metrics reporter configured, no metrics will be exposed/reported.
2017-05-11 15:10:08,577 INFO  org.apache.flink.runtime.filecache.FileCache                  - User file cache uses directory /fjord/hadoop/data/nvme/splat/yarn-local-dir/usercache/splat/appcache/application_1494426363399_0012/flink-dist-cache-b327b269-9fa4-48f6-9fe3-0d30b153c96f
2017-05-11 15:10:08,585 INFO  org.apache.flink.runtime.filecache.FileCache                  - User file cache uses directory /fjord/hadoop/data/nvme/splat/yarn-local-dir/usercache/splat/appcache/application_1494426363399_0012/flink-dist-cache-2428b725-d434-4a71-a2c0-c9ce337aba83
2017-05-11 15:10:08,593 INFO  org.apache.flink.yarn.YarnTaskManager                         - Starting TaskManager actor at akka://flink/user/taskmanager#1336875605.
2017-05-11 15:10:08,593 INFO  org.apache.flink.yarn.YarnTaskManager                         - TaskManager data connection information: container_1494426363399_0012_01_000002 @ splat34.sto.midasplayer.com (dataPort=49369)
2017-05-11 15:10:08,594 INFO  org.apache.flink.yarn.YarnTaskManager                         - TaskManager has 1 task slot(s).
2017-05-11 15:10:08,595 INFO  org.apache.flink.yarn.YarnTaskManager                         - Memory usage stats: [HEAP: 627/4217/4217 MB, NON HEAP: 39/40/-1 MB (used/committed/max)]
2017-05-11 15:10:08,595 INFO  org.apache.flink.runtime.leaderretrieval.ZooKeeperLeaderRetrievalService  - Starting ZooKeeperLeaderRetrievalService.
2017-05-11 15:10:08,602 INFO  org.apache.flink.yarn.YarnTaskManager                         - Trying to register at JobManager akka.tcp://flink@splat34.sto.midasplayer.com:57191/user/jobmanager (attempt 1, timeout: 500 milliseconds)
2017-05-11 15:10:08,739 INFO  org.apache.flink.yarn.YarnTaskManager                         - Successful registration at JobManager (akka.tcp://flink@splat34.sto.midasplayer.com:57191/user/jobmanager), starting network stack and library cache.
2017-05-11 15:10:08,742 INFO  org.apache.flink.yarn.YarnTaskManager                         - Determined BLOB server address to be splat34.sto.midasplayer.com/172.26.83.103:41667. Starting BLOB cache.
2017-05-11 15:10:08,744 WARN  org.apache.flink.configuration.Configuration                  - Config uses deprecated configuration key 'recovery.mode' instead of proper key 'high-availability'
2017-05-11 15:10:08,744 WARN  org.apache.flink.configuration.Configuration                  - Config uses deprecated configuration key 'recovery.zookeeper.storageDir' instead of proper key 'high-availability.storageDir'
2017-05-11 15:10:09,090 INFO  org.apache.flink.runtime.blob.FileSystemBlobStore             - Creating highly available BLOB storage directory at hdfs:///flink/ha/application_1494426363399_0012/blob
2017-05-11 15:10:09,147 INFO  org.apache.flink.runtime.blob.BlobCache                       - Created BLOB cache storage directory /tmp/blobStore-189cd16b-207c-4826-aa7d-e10f88125d43
2017-05-11 15:10:09,261 INFO  org.apache.flink.yarn.YarnTaskManager                         - Received task IterationSource-15 (1/1)
2017-05-11 15:10:09,262 INFO  org.apache.flink.runtime.taskmanager.Task                     - IterationSource-15 (1/1) (54d869533bda639da0674dafc218bc0f) switched from CREATED to DEPLOYING.
2017-05-11 15:10:09,262 INFO  org.apache.flink.runtime.taskmanager.Task                     - Creating FileSystem stream leak safety net for task IterationSource-15 (1/1) (54d869533bda639da0674dafc218bc0f) [DEPLOYING]
2017-05-11 15:10:09,263 INFO  org.apache.flink.yarn.YarnTaskManager                         - Received task Source: Kafka[event.bifrost.log] -> Timestamp assigner -> Wrap events (1/1)
2017-05-11 15:10:09,263 INFO  org.apache.flink.runtime.taskmanager.Task                     - Source: Kafka[event.bifrost.log] -> Timestamp assigner -> Wrap events (1/1) (adbb1d5170737eaef3931cbb797be277) switched from CREATED to DEPLOYING.
2017-05-11 15:10:09,264 INFO  org.apache.flink.runtime.taskmanager.Task                     - Creating FileSystem stream leak safety net for task Source: Kafka[event.bifrost.log] -> Timestamp assigner -> Wrap events (1/1) (adbb1d5170737eaef3931cbb797be277) [DEPLOYING]
2017-05-11 15:10:09,265 INFO  org.apache.flink.runtime.taskmanager.Task                     - Loading JAR files for task Source: Kafka[event.bifrost.log] -> Timestamp assigner -> Wrap events (1/1) (adbb1d5170737eaef3931cbb797be277) [DEPLOYING].
2017-05-11 15:10:09,265 INFO  org.apache.flink.runtime.taskmanager.Task                     - Loading JAR files for task IterationSource-15 (1/1) (54d869533bda639da0674dafc218bc0f) [DEPLOYING].
2017-05-11 15:10:09,265 INFO  org.apache.flink.yarn.YarnTaskManager                         - Received task Source: Kafka[rbea.state.event.bifrost.log] -> Timestamp assigner -> Filter state updates -> Wrap events (1/1)
2017-05-11 15:10:09,265 INFO  org.apache.flink.runtime.taskmanager.Task                     - Source: Kafka[rbea.state.event.bifrost.log] -> Timestamp assigner -> Filter state updates -> Wrap events (1/1) (fedc06fd7a057558f7aea138790e76f1) switched from CREATED to DEPLOYING.
2017-05-11 15:10:09,266 INFO  org.apache.flink.runtime.taskmanager.Task                     - Creating FileSystem stream leak safety net for task Source: Kafka[rbea.state.event.bifrost.log] -> Timestamp assigner -> Filter state updates -> Wrap events (1/1) (fedc06fd7a057558f7aea138790e76f1) [DEPLOYING]
2017-05-11 15:10:09,266 INFO  org.apache.flink.runtime.taskmanager.Task                     - Loading JAR files for task Source: Kafka[rbea.state.event.bifrost.log] -> Timestamp assigner -> Filter state updates -> Wrap events (1/1) (fedc06fd7a057558f7aea138790e76f1) [DEPLOYING].
2017-05-11 15:10:09,266 INFO  org.apache.flink.yarn.YarnTaskManager                         - Received task Source: Query Job Info (1/1)
2017-05-11 15:10:09,267 INFO  org.apache.flink.runtime.taskmanager.Task                     - Source: Query Job Info (1/1) (3d344ccb779c222ae5e71a6cd702ab08) switched from CREATED to DEPLOYING.
2017-05-11 15:10:09,267 INFO  org.apache.flink.runtime.taskmanager.Task                     - Creating FileSystem stream leak safety net for task Source: Query Job Info (1/1) (3d344ccb779c222ae5e71a6cd702ab08) [DEPLOYING]
2017-05-11 15:10:09,267 INFO  org.apache.flink.runtime.taskmanager.Task                     - Loading JAR files for task Source: Query Job Info (1/1) (3d344ccb779c222ae5e71a6cd702ab08) [DEPLOYING].
2017-05-11 15:10:09,268 INFO  org.apache.flink.yarn.YarnTaskManager                         - Received task Source: Kafka[rbeaDeploymentsplattest1] -> Max watermark -> Drop errors -> Filter for topic (1/1)
2017-05-11 15:10:09,268 INFO  org.apache.flink.runtime.taskmanager.Task                     - Source: Kafka[rbeaDeploymentsplattest1] -> Max watermark -> Drop errors -> Filter for topic (1/1) (bebc6a77643e09ce2042d6a52f291e3e) switched from CREATED to DEPLOYING.
2017-05-11 15:10:09,268 INFO  org.apache.flink.runtime.taskmanager.Task                     - Creating FileSystem stream leak safety net for task Source: Kafka[rbeaDeploymentsplattest1] -> Max watermark -> Drop errors -> Filter for topic (1/1) (bebc6a77643e09ce2042d6a52f291e3e) [DEPLOYING]
2017-05-11 15:10:09,268 INFO  org.apache.flink.runtime.taskmanager.Task                     - Loading JAR files for task Source: Kafka[rbeaDeploymentsplattest1] -> Max watermark -> Drop errors -> Filter for topic (1/1) (bebc6a77643e09ce2042d6a52f291e3e) [DEPLOYING].
2017-05-11 15:10:09,273 INFO  org.apache.flink.yarn.YarnTaskManager                         - Received task Create Fields and Ids -> Filter Errors and Notifications (1/1)
2017-05-11 15:10:09,273 INFO  org.apache.flink.runtime.taskmanager.Task                     - Create Fields and Ids -> Filter Errors and Notifications (1/1) (96ba02017d86acb4d6e136631f4c896b) switched from CREATED to DEPLOYING.
2017-05-11 15:10:09,273 INFO  org.apache.flink.runtime.taskmanager.Task                     - Creating FileSystem stream leak safety net for task Create Fields and Ids -> Filter Errors and Notifications (1/1) (96ba02017d86acb4d6e136631f4c896b) [DEPLOYING]
2017-05-11 15:10:09,274 INFO  org.apache.flink.runtime.taskmanager.Task                     - Loading JAR files for task Create Fields and Ids -> Filter Errors and Notifications (1/1) (96ba02017d86acb4d6e136631f4c896b) [DEPLOYING].
2017-05-11 15:10:09,274 INFO  org.apache.flink.yarn.YarnTaskManager                         - Received task Execute processors -> (Filter processor info -> Filter Failures, Filter BEA) (1/1)
2017-05-11 15:10:09,275 INFO  org.apache.flink.runtime.taskmanager.Task                     - Execute processors -> (Filter processor info -> Filter Failures, Filter BEA) (1/1) (51ec48a3e91ba2d68ab80b614b3d7a56) switched from CREATED to DEPLOYING.
2017-05-11 15:10:09,275 INFO  org.apache.flink.runtime.taskmanager.Task                     - Creating FileSystem stream leak safety net for task Execute processors -> (Filter processor info -> Filter Failures, Filter BEA) (1/1) (51ec48a3e91ba2d68ab80b614b3d7a56) [DEPLOYING]
2017-05-11 15:10:09,275 INFO  org.apache.flink.runtime.taskmanager.Task                     - Loading JAR files for task Execute processors -> (Filter processor info -> Filter Failures, Filter BEA) (1/1) (51ec48a3e91ba2d68ab80b614b3d7a56) [DEPLOYING].
2017-05-11 15:10:09,275 INFO  org.apache.flink.yarn.YarnTaskManager                         - Received task Window aggregator (1/1)
2017-05-11 15:10:09,276 INFO  org.apache.flink.runtime.taskmanager.Task                     - Window aggregator (1/1) (a9b27d006832311c1c5e26cb212ff668) switched from CREATED to DEPLOYING.
2017-05-11 15:10:09,276 INFO  org.apache.flink.runtime.taskmanager.Task                     - Creating FileSystem stream leak safety net for task Window aggregator (1/1) (a9b27d006832311c1c5e26cb212ff668) [DEPLOYING]
2017-05-11 15:10:09,276 INFO  org.apache.flink.runtime.taskmanager.Task                     - Loading JAR files for task Window aggregator (1/1) (a9b27d006832311c1c5e26cb212ff668) [DEPLOYING].
2017-05-11 15:10:09,276 INFO  org.apache.flink.yarn.YarnTaskManager                         - Received task Keep last -> NoOp -> Create Aggrigato events (1/1)
2017-05-11 15:10:09,277 INFO  org.apache.flink.runtime.taskmanager.Task                     - Keep last -> NoOp -> Create Aggrigato events (1/1) (f3f8fe4ef4e9815cd6d834f4a38f2855) switched from CREATED to DEPLOYING.
2017-05-11 15:10:09,277 INFO  org.apache.flink.runtime.taskmanager.Task                     - Registering task at network: Source: Kafka[rbea.state.event.bifrost.log] -> Timestamp assigner -> Filter state updates -> Wrap events (1/1) (fedc06fd7a057558f7aea138790e76f1) [DEPLOYING].
2017-05-11 15:10:09,277 INFO  org.apache.flink.runtime.taskmanager.Task                     - Registering task at network: Source: Kafka[rbeaDeploymentsplattest1] -> Max watermark -> Drop errors -> Filter for topic (1/1) (bebc6a77643e09ce2042d6a52f291e3e) [DEPLOYING].
2017-05-11 15:10:09,277 INFO  org.apache.flink.runtime.taskmanager.Task                     - Registering task at network: Source: Query Job Info (1/1) (3d344ccb779c222ae5e71a6cd702ab08) [DEPLOYING].
2017-05-11 15:10:09,277 INFO  org.apache.flink.runtime.taskmanager.Task                     - Registering task at network: Create Fields and Ids -> Filter Errors and Notifications (1/1) (96ba02017d86acb4d6e136631f4c896b) [DEPLOYING].
2017-05-11 15:10:09,277 INFO  org.apache.flink.runtime.taskmanager.Task                     - Registering task at network: IterationSource-15 (1/1) (54d869533bda639da0674dafc218bc0f) [DEPLOYING].
2017-05-11 15:10:09,277 INFO  org.apache.flink.runtime.taskmanager.Task                     - Registering task at network: Source: Kafka[event.bifrost.log] -> Timestamp assigner -> Wrap events (1/1) (adbb1d5170737eaef3931cbb797be277) [DEPLOYING].
2017-05-11 15:10:09,278 INFO  org.apache.flink.runtime.taskmanager.Task                     - Registering task at network: Execute processors -> (Filter processor info -> Filter Failures, Filter BEA) (1/1) (51ec48a3e91ba2d68ab80b614b3d7a56) [DEPLOYING].
2017-05-11 15:10:09,277 INFO  org.apache.flink.runtime.taskmanager.Task                     - Registering task at network: Window aggregator (1/1) (a9b27d006832311c1c5e26cb212ff668) [DEPLOYING].
2017-05-11 15:10:09,277 INFO  org.apache.flink.runtime.taskmanager.Task                     - Creating FileSystem stream leak safety net for task Keep last -> NoOp -> Create Aggrigato events (1/1) (f3f8fe4ef4e9815cd6d834f4a38f2855) [DEPLOYING]
2017-05-11 15:10:09,278 INFO  org.apache.flink.yarn.YarnTaskManager                         - Received task MySql output info -> Filter (1/1)
2017-05-11 15:10:09,278 INFO  org.apache.flink.runtime.taskmanager.Task                     - Loading JAR files for task Keep last -> NoOp -> Create Aggrigato events (1/1) (f3f8fe4ef4e9815cd6d834f4a38f2855) [DEPLOYING].
2017-05-11 15:10:09,278 INFO  org.apache.flink.runtime.taskmanager.Task                     - MySql output info -> Filter (1/1) (6fd449d469f21e774663efdecc92955e) switched from CREATED to DEPLOYING.
2017-05-11 15:10:09,278 INFO  org.apache.flink.runtime.taskmanager.Task                     - Creating FileSystem stream leak safety net for task MySql output info -> Filter (1/1) (6fd449d469f21e774663efdecc92955e) [DEPLOYING]
2017-05-11 15:10:09,278 INFO  org.apache.flink.runtime.taskmanager.Task                     - Registering task at network: Keep last -> NoOp -> Create Aggrigato events (1/1) (f3f8fe4ef4e9815cd6d834f4a38f2855) [DEPLOYING].
2017-05-11 15:10:09,279 INFO  org.apache.flink.runtime.taskmanager.Task                     - Loading JAR files for task MySql output info -> Filter (1/1) (6fd449d469f21e774663efdecc92955e) [DEPLOYING].
2017-05-11 15:10:09,279 INFO  org.apache.flink.yarn.YarnTaskManager                         - Received task To DeploymentInfo (1/1)
2017-05-11 15:10:09,279 INFO org.apache.flink.runtime.taskmanager.Task - To DeploymentInfo (1/1) (9d0985ae5088198c20179e94947e1bbd) switched from CREATED to DEPLOYING. | |
2017-05-11 15:10:09,279 INFO org.apache.flink.runtime.taskmanager.Task - Creating FileSystem stream leak safety net for task To DeploymentInfo (1/1) (9d0985ae5088198c20179e94947e1bbd) [DEPLOYING] | |
2017-05-11 15:10:09,279 INFO org.apache.flink.runtime.taskmanager.Task - Registering task at network: MySql output info -> Filter (1/1) (6fd449d469f21e774663efdecc92955e) [DEPLOYING]. | |
2017-05-11 15:10:09,279 INFO org.apache.flink.runtime.taskmanager.Task - Loading JAR files for task To DeploymentInfo (1/1) (9d0985ae5088198c20179e94947e1bbd) [DEPLOYING]. | |
2017-05-11 15:10:09,280 INFO org.apache.flink.runtime.taskmanager.Task - Registering task at network: To DeploymentInfo (1/1) (9d0985ae5088198c20179e94947e1bbd) [DEPLOYING]. | |
2017-05-11 15:10:09,280 INFO org.apache.flink.yarn.YarnTaskManager - Received task IterationSink-15 (1/1) | |
2017-05-11 15:10:09,281 INFO org.apache.flink.runtime.taskmanager.Task - IterationSink-15 (1/1) (289679af7c71b17ca1509d7d7762adc5) switched from CREATED to DEPLOYING. | |
2017-05-11 15:10:09,281 INFO org.apache.flink.runtime.taskmanager.Task - Creating FileSystem stream leak safety net for task IterationSink-15 (1/1) (289679af7c71b17ca1509d7d7762adc5) [DEPLOYING] | |
2017-05-11 15:10:09,281 INFO org.apache.flink.runtime.taskmanager.Task - Loading JAR files for task IterationSink-15 (1/1) (289679af7c71b17ca1509d7d7762adc5) [DEPLOYING]. | |
2017-05-11 15:10:09,282 INFO org.apache.flink.yarn.YarnTaskManager - Received task Create Job information (1/1) | |
2017-05-11 15:10:09,282 INFO org.apache.flink.runtime.taskmanager.Task - Create Job information (1/1) (6a0c6e47d414c6e3159b89de29d2aa6f) switched from CREATED to DEPLOYING. | |
2017-05-11 15:10:09,282 INFO org.apache.flink.runtime.taskmanager.Task - Creating FileSystem stream leak safety net for task Create Job information (1/1) (6a0c6e47d414c6e3159b89de29d2aa6f) [DEPLOYING] | |
2017-05-11 15:10:09,282 INFO org.apache.flink.runtime.taskmanager.Task - Loading JAR files for task Create Job information (1/1) (6a0c6e47d414c6e3159b89de29d2aa6f) [DEPLOYING]. | |
2017-05-11 15:10:09,283 INFO org.apache.flink.runtime.taskmanager.Task - Registering task at network: Create Job information (1/1) (6a0c6e47d414c6e3159b89de29d2aa6f) [DEPLOYING]. | |
2017-05-11 15:10:09,283 INFO org.apache.flink.runtime.taskmanager.Task - Registering task at network: IterationSink-15 (1/1) (289679af7c71b17ca1509d7d7762adc5) [DEPLOYING]. | |
2017-05-11 15:10:09,283 INFO org.apache.flink.runtime.taskmanager.Task - Keep last -> NoOp -> Create Aggrigato events (1/1) (f3f8fe4ef4e9815cd6d834f4a38f2855) switched from DEPLOYING to RUNNING. | |
2017-05-11 15:10:09,283 INFO org.apache.flink.runtime.taskmanager.Task - IterationSource-15 (1/1) (54d869533bda639da0674dafc218bc0f) switched from DEPLOYING to RUNNING. | |
2017-05-11 15:10:09,283 INFO org.apache.flink.runtime.taskmanager.Task - Window aggregator (1/1) (a9b27d006832311c1c5e26cb212ff668) switched from DEPLOYING to RUNNING. | |
2017-05-11 15:10:09,283 INFO org.apache.flink.runtime.taskmanager.Task - To DeploymentInfo (1/1) (9d0985ae5088198c20179e94947e1bbd) switched from DEPLOYING to RUNNING. | |
2017-05-11 15:10:09,283 INFO org.apache.flink.runtime.taskmanager.Task - Source: Kafka[rbea.state.event.bifrost.log] -> Timestamp assigner -> Filter state updates -> Wrap events (1/1) (fedc06fd7a057558f7aea138790e76f1) switched from DEPLOYING to RUNNING. | |
2017-05-11 15:10:09,283 INFO org.apache.flink.runtime.taskmanager.Task - Create Job information (1/1) (6a0c6e47d414c6e3159b89de29d2aa6f) switched from DEPLOYING to RUNNING. | |
2017-05-11 15:10:09,283 INFO org.apache.flink.runtime.taskmanager.Task - Create Fields and Ids -> Filter Errors and Notifications (1/1) (96ba02017d86acb4d6e136631f4c896b) switched from DEPLOYING to RUNNING. | |
2017-05-11 15:10:09,283 INFO org.apache.flink.runtime.taskmanager.Task - Source: Kafka[event.bifrost.log] -> Timestamp assigner -> Wrap events (1/1) (adbb1d5170737eaef3931cbb797be277) switched from DEPLOYING to RUNNING. | |
2017-05-11 15:10:09,283 INFO org.apache.flink.runtime.taskmanager.Task - Source: Kafka[rbeaDeploymentsplattest1] -> Max watermark -> Drop errors -> Filter for topic (1/1) (bebc6a77643e09ce2042d6a52f291e3e) switched from DEPLOYING to RUNNING. | |
2017-05-11 15:10:09,283 INFO org.apache.flink.runtime.taskmanager.Task - MySql output info -> Filter (1/1) (6fd449d469f21e774663efdecc92955e) switched from DEPLOYING to RUNNING. | |
2017-05-11 15:10:09,283 INFO org.apache.flink.runtime.taskmanager.Task - Execute processors -> (Filter processor info -> Filter Failures, Filter BEA) (1/1) (51ec48a3e91ba2d68ab80b614b3d7a56) switched from DEPLOYING to RUNNING. | |
2017-05-11 15:10:09,283 INFO org.apache.flink.runtime.taskmanager.Task - Source: Query Job Info (1/1) (3d344ccb779c222ae5e71a6cd702ab08) switched from DEPLOYING to RUNNING. | |
2017-05-11 15:10:09,283 INFO org.apache.flink.runtime.taskmanager.Task - IterationSink-15 (1/1) (289679af7c71b17ca1509d7d7762adc5) switched from DEPLOYING to RUNNING. | |
2017-05-11 15:10:09,303 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Using user-defined state backend: RocksDB State Backend {isInitialized=false, configuredDbBasePaths=null, initializedDbBasePaths=null, checkpointStreamBackend=File State Backend @ hdfs://splat34.sto.midasplayer.com:8020/flink/checkpoints/event.bifrost.log}.
2017-05-11 15:10:09,303 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Using user-defined state backend: RocksDB State Backend {isInitialized=false, configuredDbBasePaths=null, initializedDbBasePaths=null, checkpointStreamBackend=File State Backend @ hdfs://splat34.sto.midasplayer.com:8020/flink/checkpoints/event.bifrost.log}.
2017-05-11 15:10:09,303 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Using user-defined state backend: RocksDB State Backend {isInitialized=false, configuredDbBasePaths=null, initializedDbBasePaths=null, checkpointStreamBackend=File State Backend @ hdfs://splat34.sto.midasplayer.com:8020/flink/checkpoints/event.bifrost.log}.
2017-05-11 15:10:09,303 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Using user-defined state backend: RocksDB State Backend {isInitialized=false, configuredDbBasePaths=null, initializedDbBasePaths=null, checkpointStreamBackend=File State Backend @ hdfs://splat34.sto.midasplayer.com:8020/flink/checkpoints/event.bifrost.log}.
2017-05-11 15:10:09,303 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Using user-defined state backend: RocksDB State Backend {isInitialized=false, configuredDbBasePaths=null, initializedDbBasePaths=null, checkpointStreamBackend=File State Backend @ hdfs://splat34.sto.midasplayer.com:8020/flink/checkpoints/event.bifrost.log}.
2017-05-11 15:10:09,303 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Using user-defined state backend: RocksDB State Backend {isInitialized=false, configuredDbBasePaths=null, initializedDbBasePaths=null, checkpointStreamBackend=File State Backend @ hdfs://splat34.sto.midasplayer.com:8020/flink/checkpoints/event.bifrost.log}.
2017-05-11 15:10:09,303 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Using user-defined state backend: RocksDB State Backend {isInitialized=false, configuredDbBasePaths=null, initializedDbBasePaths=null, checkpointStreamBackend=File State Backend @ hdfs://splat34.sto.midasplayer.com:8020/flink/checkpoints/event.bifrost.log}.
2017-05-11 15:10:09,303 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Using user-defined state backend: RocksDB State Backend {isInitialized=false, configuredDbBasePaths=null, initializedDbBasePaths=null, checkpointStreamBackend=File State Backend @ hdfs://splat34.sto.midasplayer.com:8020/flink/checkpoints/event.bifrost.log}.
2017-05-11 15:10:09,303 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Using user-defined state backend: RocksDB State Backend {isInitialized=false, configuredDbBasePaths=null, initializedDbBasePaths=null, checkpointStreamBackend=File State Backend @ hdfs://splat34.sto.midasplayer.com:8020/flink/checkpoints/event.bifrost.log}.
2017-05-11 15:10:09,303 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Using user-defined state backend: RocksDB State Backend {isInitialized=false, configuredDbBasePaths=null, initializedDbBasePaths=null, checkpointStreamBackend=File State Backend @ hdfs://splat34.sto.midasplayer.com:8020/flink/checkpoints/event.bifrost.log}.
2017-05-11 15:10:09,303 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Using user-defined state backend: RocksDB State Backend {isInitialized=false, configuredDbBasePaths=null, initializedDbBasePaths=null, checkpointStreamBackend=File State Backend @ hdfs://splat34.sto.midasplayer.com:8020/flink/checkpoints/event.bifrost.log}.
2017-05-11 15:10:09,303 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Using user-defined state backend: RocksDB State Backend {isInitialized=false, configuredDbBasePaths=null, initializedDbBasePaths=null, checkpointStreamBackend=File State Backend @ hdfs://splat34.sto.midasplayer.com:8020/flink/checkpoints/event.bifrost.log}.
2017-05-11 15:10:09,303 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Using user-defined state backend: RocksDB State Backend {isInitialized=false, configuredDbBasePaths=null, initializedDbBasePaths=null, checkpointStreamBackend=File State Backend @ hdfs://splat34.sto.midasplayer.com:8020/flink/checkpoints/event.bifrost.log}.
2017-05-11 15:10:09,311 INFO org.apache.flink.streaming.runtime.tasks.StreamIterationTail - Iteration tail IterationSink-15 (1/1) trying to acquire feedback queue under 065c0937d56f3e8da025e015d3ab332b-broker-15-0
2017-05-11 15:10:09,380 INFO org.apache.flink.streaming.runtime.tasks.StreamIterationHead - Iteration head IterationSource-15 (1/1) added feedback queue under 065c0937d56f3e8da025e015d3ab332b-broker-15-0
2017-05-11 15:10:09,380 INFO org.apache.flink.streaming.runtime.tasks.StreamIterationTail - Iteration tail IterationSink-15 (1/1) acquired feedback queue 065c0937d56f3e8da025e015d3ab332b-broker-15-0
2017-05-11 15:10:09,387 INFO org.apache.flink.contrib.streaming.state.RocksDBStateBackend - Attempting to load RocksDB native library and store it under '/fjord/hadoop/data/nvme/splat/yarn-local-dir/usercache/splat/appcache/application_1494426363399_0012'
2017-05-11 15:10:09,426 INFO org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase - No restore state for FlinkKafkaConsumer.
2017-05-11 15:10:09,426 INFO org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase - No restore state for FlinkKafkaConsumer.
2017-05-11 15:10:09,426 INFO org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase - No restore state for FlinkKafkaConsumer.
2017-05-11 15:10:09,428 INFO org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase - Consumer subtask 0 will commit offsets back to Kafka on completed checkpoints.
2017-05-11 15:10:09,428 INFO org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase - Consumer subtask 0 will commit offsets back to Kafka on completed checkpoints.
2017-05-11 15:10:09,428 INFO org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase - Consumer subtask 0 will commit offsets back to Kafka on completed checkpoints.
2017-05-11 15:10:09,428 INFO org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase - Trying to get topic metadata from broker kafka10.sto.midasplayer.com:9092 in try 0/3
2017-05-11 15:10:09,428 INFO org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase - Trying to get topic metadata from broker kafka10.sto.midasplayer.com:9092 in try 0/3
2017-05-11 15:10:09,428 INFO org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase - Trying to get topic metadata from broker kafka10.sto.midasplayer.com:9092 in try 0/3
2017-05-11 15:10:09,440 INFO org.apache.flink.contrib.streaming.state.RocksDBStateBackend - Successfully loaded RocksDB native library
2017-05-11 15:10:09,450 INFO org.apache.flink.contrib.streaming.state.RocksDBKeyedStateBackend - Initializing RocksDB keyed state backend from snapshot.
2017-05-11 15:10:09,450 INFO org.apache.flink.contrib.streaming.state.RocksDBKeyedStateBackend - Initializing RocksDB keyed state backend from snapshot.
2017-05-11 15:10:09,450 INFO org.apache.flink.contrib.streaming.state.RocksDBKeyedStateBackend - Initializing RocksDB keyed state backend from snapshot.
2017-05-11 15:10:09,596 INFO org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase - Consumer is going to read the following topics (with number of partitions): rbeaDeploymentsplattest1 (16),
2017-05-11 15:10:09,596 INFO org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase - Consumer is going to read the following topics (with number of partitions): rbea.state.event.bifrost.log (16),
2017-05-11 15:10:09,596 INFO org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase - Consumer is going to read the following topics (with number of partitions): event.bifrost.log (16),
2017-05-11 15:10:09,596 INFO org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase - Consumer subtask 0 will start reading the following 16 partitions from the committed group offsets in Kafka: [KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=1}, KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=0}, KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=5}, KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=4}, KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=3}, KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=2}, KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=9}, KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=8}, KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=7}, KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=6}, KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=13}, KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=12}, KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=11}, KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=10}, KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=15}, KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=14}]
2017-05-11 15:10:09,596 INFO org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase - Consumer subtask 0 will start reading the following 16 partitions from the committed group offsets in Kafka: [KafkaTopicPartition{topic='event.bifrost.log', partition=0}, KafkaTopicPartition{topic='event.bifrost.log', partition=1}, KafkaTopicPartition{topic='event.bifrost.log', partition=2}, KafkaTopicPartition{topic='event.bifrost.log', partition=3}, KafkaTopicPartition{topic='event.bifrost.log', partition=4}, KafkaTopicPartition{topic='event.bifrost.log', partition=5}, KafkaTopicPartition{topic='event.bifrost.log', partition=6}, KafkaTopicPartition{topic='event.bifrost.log', partition=7}, KafkaTopicPartition{topic='event.bifrost.log', partition=8}, KafkaTopicPartition{topic='event.bifrost.log', partition=9}, KafkaTopicPartition{topic='event.bifrost.log', partition=10}, KafkaTopicPartition{topic='event.bifrost.log', partition=11}, KafkaTopicPartition{topic='event.bifrost.log', partition=12}, KafkaTopicPartition{topic='event.bifrost.log', partition=13}, KafkaTopicPartition{topic='event.bifrost.log', partition=14}, KafkaTopicPartition{topic='event.bifrost.log', partition=15}]
2017-05-11 15:10:09,596 INFO org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase - Consumer subtask 0 will start reading the following 16 partitions from the committed group offsets in Kafka: [KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=9}, KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=8}, KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=11}, KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=10}, KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=13}, KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=12}, KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=15}, KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=14}, KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=1}, KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=0}, KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=3}, KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=2}, KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=5}, KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=4}, KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=7}, KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=6}]
2017-05-11 15:10:09,602 INFO org.apache.flink.shaded.org.apache.curator.framework.imps.CuratorFrameworkImpl - Starting
2017-05-11 15:10:09,602 INFO org.apache.flink.shaded.org.apache.curator.framework.imps.CuratorFrameworkImpl - Starting
2017-05-11 15:10:09,602 INFO org.apache.flink.shaded.org.apache.curator.framework.imps.CuratorFrameworkImpl - Starting
2017-05-11 15:10:09,602 INFO org.apache.zookeeper.ZooKeeper - Initiating client connection, connectString=zk04.sto.midasplayer.com:2181,zk05.sto.midasplayer.com:2181,zk06.sto.midasplayer.com:2181/kafka sessionTimeout=60000 watcher=org.apache.flink.shaded.org.apache.curator.ConnectionState@48788c00
2017-05-11 15:10:09,602 INFO org.apache.zookeeper.ZooKeeper - Initiating client connection, connectString=zk04.sto.midasplayer.com:2181,zk05.sto.midasplayer.com:2181,zk06.sto.midasplayer.com:2181/kafka sessionTimeout=60000 watcher=org.apache.flink.shaded.org.apache.curator.ConnectionState@21d7d90b
2017-05-11 15:10:09,602 INFO org.apache.zookeeper.ZooKeeper - Initiating client connection, connectString=zk04.sto.midasplayer.com:2181,zk05.sto.midasplayer.com:2181,zk06.sto.midasplayer.com:2181/kafka sessionTimeout=60000 watcher=org.apache.flink.shaded.org.apache.curator.ConnectionState@763866e6
2017-05-11 15:10:09,603 WARN org.apache.zookeeper.ClientCnxn - SASL configuration failed: javax.security.auth.login.LoginException: No JAAS configuration section named 'Client' was found in specified JAAS configuration file: '/tmp/jaas-1580594546717889748.conf'. Will continue connection to Zookeeper server without SASL authentication, if Zookeeper server allows it.
2017-05-11 15:10:09,603 WARN org.apache.zookeeper.ClientCnxn - SASL configuration failed: javax.security.auth.login.LoginException: No JAAS configuration section named 'Client' was found in specified JAAS configuration file: '/tmp/jaas-1580594546717889748.conf'. Will continue connection to Zookeeper server without SASL authentication, if Zookeeper server allows it.
2017-05-11 15:10:09,603 ERROR org.apache.flink.shaded.org.apache.curator.ConnectionState - Authentication failed
2017-05-11 15:10:09,603 INFO org.apache.zookeeper.ClientCnxn - Opening socket connection to server zk06.sto.midasplayer.com/172.26.82.250:2181
2017-05-11 15:10:09,604 WARN org.apache.zookeeper.ClientCnxn - SASL configuration failed: javax.security.auth.login.LoginException: No JAAS configuration section named 'Client' was found in specified JAAS configuration file: '/tmp/jaas-1580594546717889748.conf'. Will continue connection to Zookeeper server without SASL authentication, if Zookeeper server allows it.
2017-05-11 15:10:09,603 ERROR org.apache.flink.shaded.org.apache.curator.ConnectionState - Authentication failed
2017-05-11 15:10:09,603 INFO org.apache.zookeeper.ClientCnxn - Opening socket connection to server zk06.sto.midasplayer.com/172.26.82.250:2181
2017-05-11 15:10:09,604 INFO org.apache.zookeeper.ClientCnxn - Opening socket connection to server zk04.sto.midasplayer.com/172.26.82.242:2181
2017-05-11 15:10:09,604 ERROR org.apache.flink.shaded.org.apache.curator.ConnectionState - Authentication failed
2017-05-11 15:10:09,604 INFO org.apache.zookeeper.ClientCnxn - Socket connection established to zk04.sto.midasplayer.com/172.26.82.242:2181, initiating session
2017-05-11 15:10:09,605 INFO org.apache.zookeeper.ClientCnxn - Socket connection established to zk06.sto.midasplayer.com/172.26.82.250:2181, initiating session
2017-05-11 15:10:09,605 INFO org.apache.zookeeper.ClientCnxn - Socket connection established to zk06.sto.midasplayer.com/172.26.82.250:2181, initiating session
2017-05-11 15:10:09,607 INFO org.apache.zookeeper.ClientCnxn - Session establishment complete on server zk04.sto.midasplayer.com/172.26.82.242:2181, sessionid = 0x15aad587329de37, negotiated timeout = 40000
2017-05-11 15:10:09,607 INFO org.apache.zookeeper.ClientCnxn - Session establishment complete on server zk06.sto.midasplayer.com/172.26.82.250:2181, sessionid = 0x35aad58756ea388, negotiated timeout = 40000
2017-05-11 15:10:09,607 INFO org.apache.flink.shaded.org.apache.curator.framework.state.ConnectionStateManager - State change: CONNECTED
2017-05-11 15:10:09,607 INFO org.apache.zookeeper.ClientCnxn - Session establishment complete on server zk06.sto.midasplayer.com/172.26.82.250:2181, sessionid = 0x35aad58756ea389, negotiated timeout = 40000
2017-05-11 15:10:09,607 INFO org.apache.flink.shaded.org.apache.curator.framework.state.ConnectionStateManager - State change: CONNECTED
2017-05-11 15:10:09,607 INFO org.apache.flink.shaded.org.apache.curator.framework.state.ConnectionStateManager - State change: CONNECTED
2017-05-11 15:10:09,613 WARN org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - No group offset can be found for partition Partition: KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=1}, KafkaPartitionHandle=[rbeaDeploymentsplattest1,1], offset=-915623761773 in Zookeeper; resetting starting offset to 'auto.offset.reset'
2017-05-11 15:10:09,616 WARN org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - No group offset can be found for partition Partition: KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=9}, KafkaPartitionHandle=[rbea.state.event.bifrost.log,9], offset=-915623761773 in Zookeeper; resetting starting offset to 'auto.offset.reset'
2017-05-11 15:10:09,616 WARN org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - No group offset can be found for partition Partition: KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=0}, KafkaPartitionHandle=[rbeaDeploymentsplattest1,0], offset=-915623761773 in Zookeeper; resetting starting offset to 'auto.offset.reset'
2017-05-11 15:10:09,619 WARN org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - No group offset can be found for partition Partition: KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=5}, KafkaPartitionHandle=[rbeaDeploymentsplattest1,5], offset=-915623761773 in Zookeeper; resetting starting offset to 'auto.offset.reset'
2017-05-11 15:10:09,621 WARN org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - No group offset can be found for partition Partition: KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=4}, KafkaPartitionHandle=[rbeaDeploymentsplattest1,4], offset=-915623761773 in Zookeeper; resetting starting offset to 'auto.offset.reset'
2017-05-11 15:10:09,622 WARN org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - No group offset can be found for partition Partition: KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=8}, KafkaPartitionHandle=[rbea.state.event.bifrost.log,8], offset=-915623761773 in Zookeeper; resetting starting offset to 'auto.offset.reset'
2017-05-11 15:10:09,623 WARN org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - No group offset can be found for partition Partition: KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=3}, KafkaPartitionHandle=[rbeaDeploymentsplattest1,3], offset=-915623761773 in Zookeeper; resetting starting offset to 'auto.offset.reset'
2017-05-11 15:10:09,626 WARN org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - No group offset can be found for partition Partition: KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=2}, KafkaPartitionHandle=[rbeaDeploymentsplattest1,2], offset=-915623761773 in Zookeeper; resetting starting offset to 'auto.offset.reset'
2017-05-11 15:10:09,627 WARN org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - No group offset can be found for partition Partition: KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=9}, KafkaPartitionHandle=[rbeaDeploymentsplattest1,9], offset=-915623761773 in Zookeeper; resetting starting offset to 'auto.offset.reset'
2017-05-11 15:10:09,628 WARN org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - No group offset can be found for partition Partition: KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=11}, KafkaPartitionHandle=[rbea.state.event.bifrost.log,11], offset=-915623761773 in Zookeeper; resetting starting offset to 'auto.offset.reset'
2017-05-11 15:10:09,629 WARN org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - No group offset can be found for partition Partition: KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=8}, KafkaPartitionHandle=[rbeaDeploymentsplattest1,8], offset=-915623761773 in Zookeeper; resetting starting offset to 'auto.offset.reset'
2017-05-11 15:10:09,631 WARN org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - No group offset can be found for partition Partition: KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=7}, KafkaPartitionHandle=[rbeaDeploymentsplattest1,7], offset=-915623761773 in Zookeeper; resetting starting offset to 'auto.offset.reset'
2017-05-11 15:10:09,633 WARN org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - No group offset can be found for partition Partition: KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=6}, KafkaPartitionHandle=[rbeaDeploymentsplattest1,6], offset=-915623761773 in Zookeeper; resetting starting offset to 'auto.offset.reset'
2017-05-11 15:10:09,634 WARN org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - No group offset can be found for partition Partition: KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=10}, KafkaPartitionHandle=[rbea.state.event.bifrost.log,10], offset=-915623761773 in Zookeeper; resetting starting offset to 'auto.offset.reset'
2017-05-11 15:10:09,635 WARN org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - No group offset can be found for partition Partition: KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=13}, KafkaPartitionHandle=[rbeaDeploymentsplattest1,13], offset=-915623761773 in Zookeeper; resetting starting offset to 'auto.offset.reset'
2017-05-11 15:10:09,637 WARN org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - No group offset can be found for partition Partition: KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=12}, KafkaPartitionHandle=[rbeaDeploymentsplattest1,12], offset=-915623761773 in Zookeeper; resetting starting offset to 'auto.offset.reset'
2017-05-11 15:10:09,638 WARN org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - No group offset can be found for partition Partition: KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=11}, KafkaPartitionHandle=[rbeaDeploymentsplattest1,11], offset=-915623761773 in Zookeeper; resetting starting offset to 'auto.offset.reset'
2017-05-11 15:10:09,640 WARN org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - No group offset can be found for partition Partition: KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=10}, KafkaPartitionHandle=[rbeaDeploymentsplattest1,10], offset=-915623761773 in Zookeeper; resetting starting offset to 'auto.offset.reset'
2017-05-11 15:10:09,641 WARN org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - No group offset can be found for partition Partition: KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=13}, KafkaPartitionHandle=[rbea.state.event.bifrost.log,13], offset=-915623761773 in Zookeeper; resetting starting offset to 'auto.offset.reset'
2017-05-11 15:10:09,641 WARN org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - No group offset can be found for partition Partition: KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=15}, KafkaPartitionHandle=[rbeaDeploymentsplattest1,15], offset=-915623761773 in Zookeeper; resetting starting offset to 'auto.offset.reset'
2017-05-11 15:10:09,643 WARN org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - No group offset can be found for partition Partition: KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=14}, KafkaPartitionHandle=[rbeaDeploymentsplattest1,14], offset=-915623761773 in Zookeeper; resetting starting offset to 'auto.offset.reset'
2017-05-11 15:10:09,643 INFO org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - Assigning 16 partitions to broker threads
2017-05-11 15:10:09,644 INFO org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - Refreshing leader information for partitions [Partition: KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=1}, KafkaPartitionHandle=[rbeaDeploymentsplattest1,1], offset=-1, Partition: KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=0}, KafkaPartitionHandle=[rbeaDeploymentsplattest1,0], offset=-1, Partition: KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=5}, KafkaPartitionHandle=[rbeaDeploymentsplattest1,5], offset=-1, Partition: KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=4}, KafkaPartitionHandle=[rbeaDeploymentsplattest1,4], offset=-1, Partition: KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=3}, KafkaPartitionHandle=[rbeaDeploymentsplattest1,3], offset=-1, Partition: KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=2}, KafkaPartitionHandle=[rbeaDeploymentsplattest1,2], offset=-1, Partition: KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=9}, KafkaPartitionHandle=[rbeaDeploymentsplattest1,9], offset=-1, Partition: KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=8}, KafkaPartitionHandle=[rbeaDeploymentsplattest1,8], offset=-1, Partition: KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=7}, KafkaPartitionHandle=[rbeaDeploymentsplattest1,7], offset=-1, Partition: KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=6}, KafkaPartitionHandle=[rbeaDeploymentsplattest1,6], offset=-1, Partition: KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=13}, KafkaPartitionHandle=[rbeaDeploymentsplattest1,13], offset=-1, Partition: KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=12}, KafkaPartitionHandle=[rbeaDeploymentsplattest1,12], offset=-1, Partition: KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=11}, KafkaPartitionHandle=[rbeaDeploymentsplattest1,11], offset=-1, Partition: KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=10}, KafkaPartitionHandle=[rbeaDeploymentsplattest1,10], offset=-1, Partition: KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=15}, KafkaPartitionHandle=[rbeaDeploymentsplattest1,15], offset=-1, Partition: KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=14}, KafkaPartitionHandle=[rbeaDeploymentsplattest1,14], offset=-1]
2017-05-11 15:10:09,645 INFO org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase - Trying to get topic metadata from broker kafka10.sto.midasplayer.com:9092 in try 0/3
2017-05-11 15:10:09,650 INFO org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - Starting thread SimpleConsumer - Source: Kafka[rbeaDeploymentsplattest1] -> Max watermark -> Drop errors -> Filter for topic - broker-7 (kafka07.sto.midasplayer.com:9092)
2017-05-11 15:10:09,651 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to fetch from [Partition: KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=5}, KafkaPartitionHandle=[rbeaDeploymentsplattest1,5], offset=-1]
2017-05-11 15:10:09,651 INFO org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - Starting thread SimpleConsumer - Source: Kafka[rbeaDeploymentsplattest1] -> Max watermark -> Drop errors -> Filter for topic - broker-9 (kafka09.sto.midasplayer.com:9092)
2017-05-11 15:10:09,651 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to fetch from [Partition: KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=7}, KafkaPartitionHandle=[rbeaDeploymentsplattest1,7], offset=-1]
2017-05-11 15:10:09,651 INFO org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - Starting thread SimpleConsumer - Source: Kafka[rbeaDeploymentsplattest1] -> Max watermark -> Drop errors -> Filter for topic - broker-16 (kafka16.sto.midasplayer.com:9092)
2017-05-11 15:10:09,651 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to fetch from [Partition: KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=14}, KafkaPartitionHandle=[rbeaDeploymentsplattest1,14], offset=-1]
2017-05-11 15:10:09,651 INFO org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - Starting thread SimpleConsumer - Source: Kafka[rbeaDeploymentsplattest1] -> Max watermark -> Drop errors -> Filter for topic - broker-22 (kafka22.sto.midasplayer.com:9092)
2017-05-11 15:10:09,652 INFO org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - Starting thread SimpleConsumer - Source: Kafka[rbeaDeploymentsplattest1] -> Max watermark -> Drop errors -> Filter for topic - broker-14 (kafka14.sto.midasplayer.com:9092)
2017-05-11 15:10:09,652 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to fetch from [Partition: KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=12}, KafkaPartitionHandle=[rbeaDeploymentsplattest1,12], offset=-1]
2017-05-11 15:10:09,652 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to fetch from [Partition: KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=1}, KafkaPartitionHandle=[rbeaDeploymentsplattest1,1], offset=-1]
2017-05-11 15:10:09,652 INFO org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - Starting thread SimpleConsumer - Source: Kafka[rbeaDeploymentsplattest1] -> Max watermark -> Drop errors -> Filter for topic - broker-24 (kafka24.sto.midasplayer.com:9092)
2017-05-11 15:10:09,652 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to fetch from [Partition: KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=3}, KafkaPartitionHandle=[rbeaDeploymentsplattest1,3], offset=-1]
2017-05-11 15:10:09,652 INFO org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - Starting thread SimpleConsumer - Source: Kafka[rbeaDeploymentsplattest1] -> Max watermark -> Drop errors -> Filter for topic - broker-12 (kafka12.sto.midasplayer.com:9092)
2017-05-11 15:10:09,652 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to fetch from [Partition: KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=10}, KafkaPartitionHandle=[rbeaDeploymentsplattest1,10], offset=-1]
2017-05-11 15:10:09,652 INFO org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - Starting thread SimpleConsumer - Source: Kafka[rbeaDeploymentsplattest1] -> Max watermark -> Drop errors -> Filter for topic - broker-10 (kafka10.sto.midasplayer.com:9092)
2017-05-11 15:10:09,653 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to fetch from [Partition: KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=8}, KafkaPartitionHandle=[rbeaDeploymentsplattest1,8], offset=-1]
2017-05-11 15:10:09,653 INFO org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - Starting thread SimpleConsumer - Source: Kafka[rbeaDeploymentsplattest1] -> Max watermark -> Drop errors -> Filter for topic - broker-17 (kafka17.sto.midasplayer.com:9092)
2017-05-11 15:10:09,653 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to fetch from [Partition: KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=15}, KafkaPartitionHandle=[rbeaDeploymentsplattest1,15], offset=-1]
2017-05-11 15:10:09,653 INFO org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - Starting thread SimpleConsumer - Source: Kafka[rbeaDeploymentsplattest1] -> Max watermark -> Drop errors -> Filter for topic - broker-23 (kafka23.sto.midasplayer.com:9092)
2017-05-11 15:10:09,653 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to fetch from [Partition: KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=2}, KafkaPartitionHandle=[rbeaDeploymentsplattest1,2], offset=-1]
2017-05-11 15:10:09,653 INFO org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - Starting thread SimpleConsumer - Source: Kafka[rbeaDeploymentsplattest1] -> Max watermark -> Drop errors -> Filter for topic - broker-8 (kafka08.sto.midasplayer.com:9092)
2017-05-11 15:10:09,653 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to fetch from [Partition: KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=6}, KafkaPartitionHandle=[rbeaDeploymentsplattest1,6], offset=-1]
2017-05-11 15:10:09,654 INFO org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - Starting thread SimpleConsumer - Source: Kafka[rbeaDeploymentsplattest1] -> Max watermark -> Drop errors -> Filter for topic - broker-21 (kafka21.sto.midasplayer.com:9092)
2017-05-11 15:10:09,654 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to fetch from [Partition: KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=0}, KafkaPartitionHandle=[rbeaDeploymentsplattest1,0], offset=-1]
2017-05-11 15:10:09,654 INFO org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - Starting thread SimpleConsumer - Source: Kafka[rbeaDeploymentsplattest1] -> Max watermark -> Drop errors -> Filter for topic - broker-15 (kafka15.sto.midasplayer.com:9092)
2017-05-11 15:10:09,654 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to fetch from [Partition: KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=13}, KafkaPartitionHandle=[rbeaDeploymentsplattest1,13], offset=-1]
2017-05-11 15:10:09,654 INFO org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - Starting thread SimpleConsumer - Source: Kafka[rbeaDeploymentsplattest1] -> Max watermark -> Drop errors -> Filter for topic - broker-25 (kafka25.sto.midasplayer.com:9092)
2017-05-11 15:10:09,654 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to fetch from [Partition: KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=4}, KafkaPartitionHandle=[rbeaDeploymentsplattest1,4], offset=-1]
2017-05-11 15:10:09,654 INFO org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - Starting thread SimpleConsumer - Source: Kafka[rbeaDeploymentsplattest1] -> Max watermark -> Drop errors -> Filter for topic - broker-13 (kafka13.sto.midasplayer.com:9092)
2017-05-11 15:10:09,654 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to fetch from [Partition: KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=11}, KafkaPartitionHandle=[rbeaDeploymentsplattest1,11], offset=-1]
2017-05-11 15:10:09,655 INFO org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - Starting thread SimpleConsumer - Source: Kafka[rbeaDeploymentsplattest1] -> Max watermark -> Drop errors -> Filter for topic - broker-11 (kafka11.sto.midasplayer.com:9092)
2017-05-11 15:10:09,655 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to fetch from [Partition: KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=9}, KafkaPartitionHandle=[rbeaDeploymentsplattest1,9], offset=-1]
2017-05-11 15:10:09,660 WARN org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - No group offset can be found for partition Partition: KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=14}, KafkaPartitionHandle=[rbea.state.event.bifrost.log,14], offset=-915623761773 in Zookeeper; resetting starting offset to 'auto.offset.reset'
2017-05-11 15:10:09,662 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to consume 1 partitions with consumer thread SimpleConsumer - Source: Kafka[rbeaDeploymentsplattest1] -> Max watermark -> Drop errors -> Filter for topic - broker-23 (kafka23.sto.midasplayer.com:9092)
2017-05-11 15:10:09,662 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to consume 1 partitions with consumer thread SimpleConsumer - Source: Kafka[rbeaDeploymentsplattest1] -> Max watermark -> Drop errors -> Filter for topic - broker-9 (kafka09.sto.midasplayer.com:9092)
2017-05-11 15:10:09,662 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to consume 1 partitions with consumer thread SimpleConsumer - Source: Kafka[rbeaDeploymentsplattest1] -> Max watermark -> Drop errors -> Filter for topic - broker-24 (kafka24.sto.midasplayer.com:9092)
2017-05-11 15:10:09,662 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to consume 1 partitions with consumer thread SimpleConsumer - Source: Kafka[rbeaDeploymentsplattest1] -> Max watermark -> Drop errors -> Filter for topic - broker-25 (kafka25.sto.midasplayer.com:9092)
2017-05-11 15:10:09,662 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to consume 1 partitions with consumer thread SimpleConsumer - Source: Kafka[rbeaDeploymentsplattest1] -> Max watermark -> Drop errors -> Filter for topic - broker-11 (kafka11.sto.midasplayer.com:9092)
2017-05-11 15:10:09,662 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to consume 1 partitions with consumer thread SimpleConsumer - Source: Kafka[rbeaDeploymentsplattest1] -> Max watermark -> Drop errors -> Filter for topic - broker-16 (kafka16.sto.midasplayer.com:9092)
2017-05-11 15:10:09,662 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to consume 1 partitions with consumer thread SimpleConsumer - Source: Kafka[rbeaDeploymentsplattest1] -> Max watermark -> Drop errors -> Filter for topic - broker-15 (kafka15.sto.midasplayer.com:9092)
2017-05-11 15:10:09,662 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to consume 1 partitions with consumer thread SimpleConsumer - Source: Kafka[rbeaDeploymentsplattest1] -> Max watermark -> Drop errors -> Filter for topic - broker-13 (kafka13.sto.midasplayer.com:9092)
2017-05-11 15:10:09,662 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to consume 1 partitions with consumer thread SimpleConsumer - Source: Kafka[rbeaDeploymentsplattest1] -> Max watermark -> Drop errors -> Filter for topic - broker-14 (kafka14.sto.midasplayer.com:9092)
2017-05-11 15:10:09,662 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to consume 1 partitions with consumer thread SimpleConsumer - Source: Kafka[rbeaDeploymentsplattest1] -> Max watermark -> Drop errors -> Filter for topic - broker-22 (kafka22.sto.midasplayer.com:9092)
2017-05-11 15:10:09,663 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to consume 1 partitions with consumer thread SimpleConsumer - Source: Kafka[rbeaDeploymentsplattest1] -> Max watermark -> Drop errors -> Filter for topic - broker-17 (kafka17.sto.midasplayer.com:9092)
2017-05-11 15:10:09,663 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to consume 1 partitions with consumer thread SimpleConsumer - Source: Kafka[rbeaDeploymentsplattest1] -> Max watermark -> Drop errors -> Filter for topic - broker-21 (kafka21.sto.midasplayer.com:9092)
2017-05-11 15:10:09,663 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to consume 1 partitions with consumer thread SimpleConsumer - Source: Kafka[rbeaDeploymentsplattest1] -> Max watermark -> Drop errors -> Filter for topic - broker-10 (kafka10.sto.midasplayer.com:9092)
2017-05-11 15:10:09,664 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to consume 1 partitions with consumer thread SimpleConsumer - Source: Kafka[rbeaDeploymentsplattest1] -> Max watermark -> Drop errors -> Filter for topic - broker-12 (kafka12.sto.midasplayer.com:9092)
2017-05-11 15:10:09,669 WARN org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - No group offset can be found for partition Partition: KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=1}, KafkaPartitionHandle=[rbea.state.event.bifrost.log,1], offset=-915623761773 in Zookeeper; resetting starting offset to 'auto.offset.reset'
2017-05-11 15:10:09,674 WARN org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - No group offset can be found for partition Partition: KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=0}, KafkaPartitionHandle=[rbea.state.event.bifrost.log,0], offset=-915623761773 in Zookeeper; resetting starting offset to 'auto.offset.reset'
2017-05-11 15:10:09,678 WARN org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - No group offset can be found for partition Partition: KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=3}, KafkaPartitionHandle=[rbea.state.event.bifrost.log,3], offset=-915623761773 in Zookeeper; resetting starting offset to 'auto.offset.reset'
2017-05-11 15:10:09,679 INFO org.apache.flink.contrib.streaming.state.RocksDBKeyedStateBackend - Initializing RocksDB keyed state backend from snapshot.
2017-05-11 15:10:09,683 WARN org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - No group offset can be found for partition Partition: KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=2}, KafkaPartitionHandle=[rbea.state.event.bifrost.log,2], offset=-915623761773 in Zookeeper; resetting starting offset to 'auto.offset.reset'
2017-05-11 15:10:09,687 WARN org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - No group offset can be found for partition Partition: KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=5}, KafkaPartitionHandle=[rbea.state.event.bifrost.log,5], offset=-915623761773 in Zookeeper; resetting starting offset to 'auto.offset.reset'
2017-05-11 15:10:09,693 WARN org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - No group offset can be found for partition Partition: KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=4}, KafkaPartitionHandle=[rbea.state.event.bifrost.log,4], offset=-915623761773 in Zookeeper; resetting starting offset to 'auto.offset.reset'
2017-05-11 15:10:09,698 WARN org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - No group offset can be found for partition Partition: KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=7}, KafkaPartitionHandle=[rbea.state.event.bifrost.log,7], offset=-915623761773 in Zookeeper; resetting starting offset to 'auto.offset.reset'
2017-05-11 15:10:09,700 INFO org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - Assigning 16 partitions to broker threads
2017-05-11 15:10:09,701 INFO org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - Refreshing leader information for partitions [Partition: KafkaTopicPartition{topic='event.bifrost.log', partition=0}, KafkaPartitionHandle=[event.bifrost.log,0], offset=17125845, Partition: KafkaTopicPartition{topic='event.bifrost.log', partition=1}, KafkaPartitionHandle=[event.bifrost.log,1], offset=2558664, Partition: KafkaTopicPartition{topic='event.bifrost.log', partition=2}, KafkaPartitionHandle=[event.bifrost.log,2], offset=2556646, Partition: KafkaTopicPartition{topic='event.bifrost.log', partition=3}, KafkaPartitionHandle=[event.bifrost.log,3], offset=2642044, Partition: KafkaTopicPartition{topic='event.bifrost.log', partition=4}, KafkaPartitionHandle=[event.bifrost.log,4], offset=2586970, Partition: KafkaTopicPartition{topic='event.bifrost.log', partition=5}, KafkaPartitionHandle=[event.bifrost.log,5], offset=2477967, Partition: KafkaTopicPartition{topic='event.bifrost.log', partition=6}, KafkaPartitionHandle=[event.bifrost.log,6], offset=2601495, Partition: KafkaTopicPartition{topic='event.bifrost.log', partition=7}, KafkaPartitionHandle=[event.bifrost.log,7], offset=2375819, Partition: KafkaTopicPartition{topic='event.bifrost.log', partition=8}, KafkaPartitionHandle=[event.bifrost.log,8], offset=2622275, Partition: KafkaTopicPartition{topic='event.bifrost.log', partition=9}, KafkaPartitionHandle=[event.bifrost.log,9], offset=2608243, Partition: KafkaTopicPartition{topic='event.bifrost.log', partition=10}, KafkaPartitionHandle=[event.bifrost.log,10], offset=2524631, Partition: KafkaTopicPartition{topic='event.bifrost.log', partition=11}, KafkaPartitionHandle=[event.bifrost.log,11], offset=2488716, Partition: KafkaTopicPartition{topic='event.bifrost.log', partition=12}, KafkaPartitionHandle=[event.bifrost.log,12], offset=2313230, Partition: KafkaTopicPartition{topic='event.bifrost.log', partition=13}, KafkaPartitionHandle=[event.bifrost.log,13], offset=2714195, Partition: KafkaTopicPartition{topic='event.bifrost.log', partition=14}, KafkaPartitionHandle=[event.bifrost.log,14], offset=2672454, Partition: KafkaTopicPartition{topic='event.bifrost.log', partition=15}, KafkaPartitionHandle=[event.bifrost.log,15], offset=2704607]
2017-05-11 15:10:09,701 INFO org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase - Trying to get topic metadata from broker kafka10.sto.midasplayer.com:9092 in try 0/3
2017-05-11 15:10:09,703 WARN org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - No group offset can be found for partition Partition: KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=6}, KafkaPartitionHandle=[rbea.state.event.bifrost.log,6], offset=-915623761773 in Zookeeper; resetting starting offset to 'auto.offset.reset'
2017-05-11 15:10:09,704 INFO org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - Starting thread SimpleConsumer - Source: Kafka[event.bifrost.log] -> Timestamp assigner -> Wrap events - broker-20 (kafka20.sto.midasplayer.com:9092)
2017-05-11 15:10:09,704 INFO org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - Assigning 16 partitions to broker threads
2017-05-11 15:10:09,704 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to fetch from [Partition: KafkaTopicPartition{topic='event.bifrost.log', partition=8}, KafkaPartitionHandle=[event.bifrost.log,8], offset=2622275]
2017-05-11 15:10:09,704 INFO org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - Starting thread SimpleConsumer - Source: Kafka[event.bifrost.log] -> Timestamp assigner -> Wrap events - broker-18 (kafka18.sto.midasplayer.com:9092)
2017-05-11 15:10:09,704 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to fetch from [Partition: KafkaTopicPartition{topic='event.bifrost.log', partition=6}, KafkaPartitionHandle=[event.bifrost.log,6], offset=2601495]
2017-05-11 15:10:09,704 INFO org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - Starting thread SimpleConsumer - Source: Kafka[event.bifrost.log] -> Timestamp assigner -> Wrap events - broker-7 (kafka07.sto.midasplayer.com:9092)
2017-05-11 15:10:09,704 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to fetch from [Partition: KafkaTopicPartition{topic='event.bifrost.log', partition=14}, KafkaPartitionHandle=[event.bifrost.log,14], offset=2672454]
2017-05-11 15:10:09,704 INFO org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - Refreshing leader information for partitions [Partition: KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=9}, KafkaPartitionHandle=[rbea.state.event.bifrost.log,9], offset=-1, Partition: KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=8}, KafkaPartitionHandle=[rbea.state.event.bifrost.log,8], offset=-1, Partition: KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=11}, KafkaPartitionHandle=[rbea.state.event.bifrost.log,11], offset=-1, Partition: KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=10}, KafkaPartitionHandle=[rbea.state.event.bifrost.log,10], offset=-1, Partition: KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=13}, KafkaPartitionHandle=[rbea.state.event.bifrost.log,13], offset=-1, Partition: KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=12}, KafkaPartitionHandle=[rbea.state.event.bifrost.log,12], offset=0, Partition: KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=15}, KafkaPartitionHandle=[rbea.state.event.bifrost.log,15], offset=0, Partition: KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=14}, KafkaPartitionHandle=[rbea.state.event.bifrost.log,14], offset=-1, Partition: KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=1}, KafkaPartitionHandle=[rbea.state.event.bifrost.log,1], offset=-1, Partition: KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=0}, KafkaPartitionHandle=[rbea.state.event.bifrost.log,0], offset=-1, Partition: KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=3}, KafkaPartitionHandle=[rbea.state.event.bifrost.log,3], offset=-1, Partition: KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=2}, KafkaPartitionHandle=[rbea.state.event.bifrost.log,2], offset=-1, Partition: KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=5}, KafkaPartitionHandle=[rbea.state.event.bifrost.log,5], offset=-1, Partition: KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=4}, KafkaPartitionHandle=[rbea.state.event.bifrost.log,4], offset=-1, Partition: KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=7}, KafkaPartitionHandle=[rbea.state.event.bifrost.log,7], offset=-1, Partition: KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=6}, KafkaPartitionHandle=[rbea.state.event.bifrost.log,6], offset=-1]
2017-05-11 15:10:09,704 INFO org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - Starting thread SimpleConsumer - Source: Kafka[event.bifrost.log] -> Timestamp assigner -> Wrap events - broker-16 (kafka16.sto.midasplayer.com:9092)
2017-05-11 15:10:09,705 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to fetch from [Partition: KafkaTopicPartition{topic='event.bifrost.log', partition=4}, KafkaPartitionHandle=[event.bifrost.log,4], offset=2586970]
2017-05-11 15:10:09,705 INFO org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - Starting thread SimpleConsumer - Source: Kafka[event.bifrost.log] -> Timestamp assigner -> Wrap events - broker-22 (kafka22.sto.midasplayer.com:9092)
2017-05-11 15:10:09,705 INFO org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase - Trying to get topic metadata from broker kafka10.sto.midasplayer.com:9092 in try 0/3
2017-05-11 15:10:09,705 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to fetch from [Partition: KafkaTopicPartition{topic='event.bifrost.log', partition=10}, KafkaPartitionHandle=[event.bifrost.log,10], offset=2524631]
2017-05-11 15:10:09,705 INFO org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - Starting thread SimpleConsumer - Source: Kafka[event.bifrost.log] -> Timestamp assigner -> Wrap events - broker-14 (kafka14.sto.midasplayer.com:9092)
2017-05-11 15:10:09,705 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to fetch from [Partition: KafkaTopicPartition{topic='event.bifrost.log', partition=2}, KafkaPartitionHandle=[event.bifrost.log,2], offset=2556646]
2017-05-11 15:10:09,705 INFO org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - Starting thread SimpleConsumer - Source: Kafka[event.bifrost.log] -> Timestamp assigner -> Wrap events - broker-12 (kafka12.sto.midasplayer.com:9092)
2017-05-11 15:10:09,705 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to fetch from [Partition: KafkaTopicPartition{topic='event.bifrost.log', partition=0}, KafkaPartitionHandle=[event.bifrost.log,0], offset=17125845]
2017-05-11 15:10:09,706 INFO org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - Starting thread SimpleConsumer - Source: Kafka[event.bifrost.log] -> Timestamp assigner -> Wrap events - broker-24 (kafka24.sto.midasplayer.com:9092)
2017-05-11 15:10:09,706 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to consume 1 partitions with consumer thread SimpleConsumer - Source: Kafka[event.bifrost.log] -> Timestamp assigner -> Wrap events - broker-22 (kafka22.sto.midasplayer.com:9092)
2017-05-11 15:10:09,706 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to fetch from [Partition: KafkaTopicPartition{topic='event.bifrost.log', partition=12}, KafkaPartitionHandle=[event.bifrost.log,12], offset=2313230]
2017-05-11 15:10:09,706 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to consume 1 partitions with consumer thread SimpleConsumer - Source: Kafka[event.bifrost.log] -> Timestamp assigner -> Wrap events - broker-20 (kafka20.sto.midasplayer.com:9092)
2017-05-11 15:10:09,706 INFO org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - Starting thread SimpleConsumer - Source: Kafka[event.bifrost.log] -> Timestamp assigner -> Wrap events - broker-19 (kafka19.sto.midasplayer.com:9092)
2017-05-11 15:10:09,706 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to consume 1 partitions with consumer thread SimpleConsumer - Source: Kafka[event.bifrost.log] -> Timestamp assigner -> Wrap events - broker-14 (kafka14.sto.midasplayer.com:9092)
2017-05-11 15:10:09,706 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to consume 1 partitions with consumer thread SimpleConsumer - Source: Kafka[event.bifrost.log] -> Timestamp assigner -> Wrap events - broker-16 (kafka16.sto.midasplayer.com:9092)
2017-05-11 15:10:09,706 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to fetch from [Partition: KafkaTopicPartition{topic='event.bifrost.log', partition=7}, KafkaPartitionHandle=[event.bifrost.log,7], offset=2375819]
2017-05-11 15:10:09,706 INFO org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - Starting thread SimpleConsumer - Source: Kafka[event.bifrost.log] -> Timestamp assigner -> Wrap events - broker-17 (kafka17.sto.midasplayer.com:9092)
2017-05-11 15:10:09,706 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to consume 1 partitions with consumer thread SimpleConsumer - Source: Kafka[event.bifrost.log] -> Timestamp assigner -> Wrap events - broker-18 (kafka18.sto.midasplayer.com:9092)
2017-05-11 15:10:09,706 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to fetch from [Partition: KafkaTopicPartition{topic='event.bifrost.log', partition=5}, KafkaPartitionHandle=[event.bifrost.log,5], offset=2477967]
2017-05-11 15:10:09,706 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to consume 1 partitions with consumer thread SimpleConsumer - Source: Kafka[event.bifrost.log] -> Timestamp assigner -> Wrap events - broker-24 (kafka24.sto.midasplayer.com:9092)
2017-05-11 15:10:09,707 INFO org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - Starting thread SimpleConsumer - Source: Kafka[event.bifrost.log] -> Timestamp assigner -> Wrap events - broker-23 (kafka23.sto.midasplayer.com:9092)
2017-05-11 15:10:09,707 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to fetch from [Partition: KafkaTopicPartition{topic='event.bifrost.log', partition=11}, KafkaPartitionHandle=[event.bifrost.log,11], offset=2488716]
2017-05-11 15:10:09,707 INFO org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - Starting thread SimpleConsumer - Source: Kafka[event.bifrost.log] -> Timestamp assigner -> Wrap events - broker-8 (kafka08.sto.midasplayer.com:9092)
2017-05-11 15:10:09,707 INFO org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - Starting thread SimpleConsumer - Source: Kafka[rbea.state.event.bifrost.log] -> Timestamp assigner -> Filter state updates -> Wrap events - broker-7 (kafka07.sto.midasplayer.com:9092)
2017-05-11 15:10:09,707 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to consume 1 partitions with consumer thread SimpleConsumer - Source: Kafka[event.bifrost.log] -> Timestamp assigner -> Wrap events - broker-17 (kafka17.sto.midasplayer.com:9092)
2017-05-11 15:10:09,707 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to fetch from [Partition: KafkaTopicPartition{topic='event.bifrost.log', partition=15}, KafkaPartitionHandle=[event.bifrost.log,15], offset=2704607]
2017-05-11 15:10:09,707 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to fetch from [Partition: KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=5}, KafkaPartitionHandle=[rbea.state.event.bifrost.log,5], offset=-1]
2017-05-11 15:10:09,707 INFO org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - Starting thread SimpleConsumer - Source: Kafka[event.bifrost.log] -> Timestamp assigner -> Wrap events - broker-15 (kafka15.sto.midasplayer.com:9092)
2017-05-11 15:10:09,707 INFO org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - Starting thread SimpleConsumer - Source: Kafka[rbea.state.event.bifrost.log] -> Timestamp assigner -> Filter state updates -> Wrap events - broker-9 (kafka09.sto.midasplayer.com:9092)
2017-05-11 15:10:09,708 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to fetch from [Partition: KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=7}, KafkaPartitionHandle=[rbea.state.event.bifrost.log,7], offset=-1]
2017-05-11 15:10:09,708 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to fetch from [Partition: KafkaTopicPartition{topic='event.bifrost.log', partition=3}, KafkaPartitionHandle=[event.bifrost.log,3], offset=2642044]
2017-05-11 15:10:09,708 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to consume 1 partitions with consumer thread SimpleConsumer - Source: Kafka[event.bifrost.log] -> Timestamp assigner -> Wrap events - broker-23 (kafka23.sto.midasplayer.com:9092)
2017-05-11 15:10:09,708 INFO org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - Starting thread SimpleConsumer - Source: Kafka[rbea.state.event.bifrost.log] -> Timestamp assigner -> Filter state updates -> Wrap events - broker-16 (kafka16.sto.midasplayer.com:9092)
2017-05-11 15:10:09,708 INFO org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - Starting thread SimpleConsumer - Source: Kafka[event.bifrost.log] -> Timestamp assigner -> Wrap events - broker-21 (kafka21.sto.midasplayer.com:9092)
2017-05-11 15:10:09,708 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to fetch from [Partition: KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=14}, KafkaPartitionHandle=[rbea.state.event.bifrost.log,14], offset=-1]
2017-05-11 15:10:09,708 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to fetch from [Partition: KafkaTopicPartition{topic='event.bifrost.log', partition=9}, KafkaPartitionHandle=[event.bifrost.log,9], offset=2608243]
2017-05-11 15:10:09,708 INFO org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - Starting thread SimpleConsumer - Source: Kafka[rbea.state.event.bifrost.log] -> Timestamp assigner -> Filter state updates -> Wrap events - broker-22 (kafka22.sto.midasplayer.com:9092) | |
2017-05-11 15:10:09,708 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to consume 1 partitions with consumer thread SimpleConsumer - Source: Kafka[event.bifrost.log] -> Timestamp assigner -> Wrap events - broker-19 (kafka19.sto.midasplayer.com:9092) | |
2017-05-11 15:10:09,708 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to fetch from [Partition: KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=1}, KafkaPartitionHandle=[rbea.state.event.bifrost.log,1], offset=-1] | |
2017-05-11 15:10:09,708 INFO org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - Starting thread SimpleConsumer - Source: Kafka[event.bifrost.log] -> Timestamp assigner -> Wrap events - broker-25 (kafka25.sto.midasplayer.com:9092) | |
2017-05-11 15:10:09,708 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to consume 1 partitions with consumer thread SimpleConsumer - Source: Kafka[event.bifrost.log] -> Timestamp assigner -> Wrap events - broker-15 (kafka15.sto.midasplayer.com:9092) | |
2017-05-11 15:10:09,708 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to fetch from [Partition: KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=12}, KafkaPartitionHandle=[rbea.state.event.bifrost.log,12], offset=0] | |
2017-05-11 15:10:09,708 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to fetch from [Partition: KafkaTopicPartition{topic='event.bifrost.log', partition=13}, KafkaPartitionHandle=[event.bifrost.log,13], offset=2714195] | |
2017-05-11 15:10:09,708 INFO org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - Starting thread SimpleConsumer - Source: Kafka[rbea.state.event.bifrost.log] -> Timestamp assigner -> Filter state updates -> Wrap events - broker-14 (kafka14.sto.midasplayer.com:9092) | |
2017-05-11 15:10:09,709 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to consume 1 partitions with consumer thread SimpleConsumer - Source: Kafka[rbea.state.event.bifrost.log] -> Timestamp assigner -> Filter state updates -> Wrap events - broker-9 (kafka09.sto.midasplayer.com:9092) | |
2017-05-11 15:10:09,709 INFO org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - Starting thread SimpleConsumer - Source: Kafka[event.bifrost.log] -> Timestamp assigner -> Wrap events - broker-13 (kafka13.sto.midasplayer.com:9092) | |
2017-05-11 15:10:09,709 INFO org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - Starting thread SimpleConsumer - Source: Kafka[rbea.state.event.bifrost.log] -> Timestamp assigner -> Filter state updates -> Wrap events - broker-24 (kafka24.sto.midasplayer.com:9092) | |
2017-05-11 15:10:09,709 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to consume 1 partitions with consumer thread SimpleConsumer - Source: Kafka[event.bifrost.log] -> Timestamp assigner -> Wrap events - broker-21 (kafka21.sto.midasplayer.com:9092) | |
2017-05-11 15:10:09,709 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to fetch from [Partition: KafkaTopicPartition{topic='event.bifrost.log', partition=1}, KafkaPartitionHandle=[event.bifrost.log,1], offset=2558664] | |
2017-05-11 15:10:09,709 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to fetch from [Partition: KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=3}, KafkaPartitionHandle=[rbea.state.event.bifrost.log,3], offset=-1] | |
2017-05-11 15:10:09,709 INFO org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - Starting thread SimpleConsumer - Source: Kafka[rbea.state.event.bifrost.log] -> Timestamp assigner -> Filter state updates -> Wrap events - broker-12 (kafka12.sto.midasplayer.com:9092) | |
2017-05-11 15:10:09,709 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to consume 1 partitions with consumer thread SimpleConsumer - Source: Kafka[rbea.state.event.bifrost.log] -> Timestamp assigner -> Filter state updates -> Wrap events - broker-14 (kafka14.sto.midasplayer.com:9092) | |
2017-05-11 15:10:09,709 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to fetch from [Partition: KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=10}, KafkaPartitionHandle=[rbea.state.event.bifrost.log,10], offset=-1] | |
2017-05-11 15:10:09,709 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to consume 1 partitions with consumer thread SimpleConsumer - Source: Kafka[rbea.state.event.bifrost.log] -> Timestamp assigner -> Filter state updates -> Wrap events - broker-22 (kafka22.sto.midasplayer.com:9092) | |
2017-05-11 15:10:09,709 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to consume 1 partitions with consumer thread SimpleConsumer - Source: Kafka[rbea.state.event.bifrost.log] -> Timestamp assigner -> Filter state updates -> Wrap events - broker-16 (kafka16.sto.midasplayer.com:9092) | |
2017-05-11 15:10:09,709 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to consume 1 partitions with consumer thread SimpleConsumer - Source: Kafka[event.bifrost.log] -> Timestamp assigner -> Wrap events - broker-25 (kafka25.sto.midasplayer.com:9092) | |
2017-05-11 15:10:09,709 INFO org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - Starting thread SimpleConsumer - Source: Kafka[rbea.state.event.bifrost.log] -> Timestamp assigner -> Filter state updates -> Wrap events - broker-10 (kafka10.sto.midasplayer.com:9092) | |
2017-05-11 15:10:09,710 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to fetch from [Partition: KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=8}, KafkaPartitionHandle=[rbea.state.event.bifrost.log,8], offset=-1] | |
2017-05-11 15:10:09,710 INFO org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - Starting thread SimpleConsumer - Source: Kafka[rbea.state.event.bifrost.log] -> Timestamp assigner -> Filter state updates -> Wrap events - broker-17 (kafka17.sto.midasplayer.com:9092) | |
2017-05-11 15:10:09,710 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to fetch from [Partition: KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=15}, KafkaPartitionHandle=[rbea.state.event.bifrost.log,15], offset=0] | |
2017-05-11 15:10:09,710 INFO org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - Starting thread SimpleConsumer - Source: Kafka[rbea.state.event.bifrost.log] -> Timestamp assigner -> Filter state updates -> Wrap events - broker-23 (kafka23.sto.midasplayer.com:9092) | |
2017-05-11 15:10:09,710 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to fetch from [Partition: KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=2}, KafkaPartitionHandle=[rbea.state.event.bifrost.log,2], offset=-1] | |
2017-05-11 15:10:09,710 INFO org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - Starting thread SimpleConsumer - Source: Kafka[rbea.state.event.bifrost.log] -> Timestamp assigner -> Filter state updates -> Wrap events - broker-8 (kafka08.sto.midasplayer.com:9092) | |
2017-05-11 15:10:09,710 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to fetch from [Partition: KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=6}, KafkaPartitionHandle=[rbea.state.event.bifrost.log,6], offset=-1] | |
2017-05-11 15:10:09,710 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to consume 1 partitions with consumer thread SimpleConsumer - Source: Kafka[rbea.state.event.bifrost.log] -> Timestamp assigner -> Filter state updates -> Wrap events - broker-10 (kafka10.sto.midasplayer.com:9092) | |
2017-05-11 15:10:09,710 INFO org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - Starting thread SimpleConsumer - Source: Kafka[rbea.state.event.bifrost.log] -> Timestamp assigner -> Filter state updates -> Wrap events - broker-21 (kafka21.sto.midasplayer.com:9092) | |
2017-05-11 15:10:09,711 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to consume 1 partitions with consumer thread SimpleConsumer - Source: Kafka[rbea.state.event.bifrost.log] -> Timestamp assigner -> Filter state updates -> Wrap events - broker-24 (kafka24.sto.midasplayer.com:9092) | |
2017-05-11 15:10:09,711 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to consume 1 partitions with consumer thread SimpleConsumer - Source: Kafka[rbea.state.event.bifrost.log] -> Timestamp assigner -> Filter state updates -> Wrap events - broker-17 (kafka17.sto.midasplayer.com:9092) | |
2017-05-11 15:10:09,711 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to fetch from [Partition: KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=0}, KafkaPartitionHandle=[rbea.state.event.bifrost.log,0], offset=-1] | |
2017-05-11 15:10:09,711 INFO org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - Starting thread SimpleConsumer - Source: Kafka[rbea.state.event.bifrost.log] -> Timestamp assigner -> Filter state updates -> Wrap events - broker-15 (kafka15.sto.midasplayer.com:9092) | |
2017-05-11 15:10:09,711 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to consume 1 partitions with consumer thread SimpleConsumer - Source: Kafka[rbea.state.event.bifrost.log] -> Timestamp assigner -> Filter state updates -> Wrap events - broker-23 (kafka23.sto.midasplayer.com:9092) | |
2017-05-11 15:10:09,711 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to fetch from [Partition: KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=13}, KafkaPartitionHandle=[rbea.state.event.bifrost.log,13], offset=-1] | |
2017-05-11 15:10:09,711 INFO org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - Starting thread SimpleConsumer - Source: Kafka[rbea.state.event.bifrost.log] -> Timestamp assigner -> Filter state updates -> Wrap events - broker-25 (kafka25.sto.midasplayer.com:9092) | |
2017-05-11 15:10:09,711 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to fetch from [Partition: KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=4}, KafkaPartitionHandle=[rbea.state.event.bifrost.log,4], offset=-1] | |
2017-05-11 15:10:09,711 INFO org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - Starting thread SimpleConsumer - Source: Kafka[rbea.state.event.bifrost.log] -> Timestamp assigner -> Filter state updates -> Wrap events - broker-13 (kafka13.sto.midasplayer.com:9092) | |
2017-05-11 15:10:09,711 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to fetch from [Partition: KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=11}, KafkaPartitionHandle=[rbea.state.event.bifrost.log,11], offset=-1] | |
2017-05-11 15:10:09,711 INFO org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - Starting thread SimpleConsumer - Source: Kafka[rbea.state.event.bifrost.log] -> Timestamp assigner -> Filter state updates -> Wrap events - broker-11 (kafka11.sto.midasplayer.com:9092) | |
2017-05-11 15:10:09,712 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to consume 1 partitions with consumer thread SimpleConsumer - Source: Kafka[event.bifrost.log] -> Timestamp assigner -> Wrap events - broker-13 (kafka13.sto.midasplayer.com:9092) | |
2017-05-11 15:10:09,712 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to consume 1 partitions with consumer thread SimpleConsumer - Source: Kafka[rbea.state.event.bifrost.log] -> Timestamp assigner -> Filter state updates -> Wrap events - broker-21 (kafka21.sto.midasplayer.com:9092) | |
2017-05-11 15:10:09,712 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to fetch from [Partition: KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=9}, KafkaPartitionHandle=[rbea.state.event.bifrost.log,9], offset=-1] | |
2017-05-11 15:10:09,712 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to consume 1 partitions with consumer thread SimpleConsumer - Source: Kafka[rbea.state.event.bifrost.log] -> Timestamp assigner -> Filter state updates -> Wrap events - broker-25 (kafka25.sto.midasplayer.com:9092) | |
2017-05-11 15:10:09,713 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to consume 1 partitions with consumer thread SimpleConsumer - Source: Kafka[rbea.state.event.bifrost.log] -> Timestamp assigner -> Filter state updates -> Wrap events - broker-13 (kafka13.sto.midasplayer.com:9092) | |
2017-05-11 15:10:09,713 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to consume 1 partitions with consumer thread SimpleConsumer - Source: Kafka[rbea.state.event.bifrost.log] -> Timestamp assigner -> Filter state updates -> Wrap events - broker-15 (kafka15.sto.midasplayer.com:9092) | |
2017-05-11 15:10:09,714 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to consume 1 partitions with consumer thread SimpleConsumer - Source: Kafka[rbea.state.event.bifrost.log] -> Timestamp assigner -> Filter state updates -> Wrap events - broker-11 (kafka11.sto.midasplayer.com:9092) | |
2017-05-11 15:10:09,717 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to consume 1 partitions with consumer thread SimpleConsumer - Source: Kafka[event.bifrost.log] -> Timestamp assigner -> Wrap events - broker-12 (kafka12.sto.midasplayer.com:9092) | |
2017-05-11 15:10:09,718 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to consume 1 partitions with consumer thread SimpleConsumer - Source: Kafka[rbea.state.event.bifrost.log] -> Timestamp assigner -> Filter state updates -> Wrap events - broker-12 (kafka12.sto.midasplayer.com:9092) | |
2017-05-11 15:10:09,752 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to consume 1 partitions with consumer thread SimpleConsumer - Source: Kafka[rbeaDeploymentsplattest1] -> Max watermark -> Drop errors -> Filter for topic - broker-7 (kafka07.sto.midasplayer.com:9092) | |
2017-05-11 15:10:09,761 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to consume 1 partitions with consumer thread SimpleConsumer - Source: Kafka[rbea.state.event.bifrost.log] -> Timestamp assigner -> Filter state updates -> Wrap events - broker-7 (kafka07.sto.midasplayer.com:9092) | |
2017-05-11 15:10:09,761 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to consume 1 partitions with consumer thread SimpleConsumer - Source: Kafka[event.bifrost.log] -> Timestamp assigner -> Wrap events - broker-7 (kafka07.sto.midasplayer.com:9092) | |
2017-05-11 15:10:10,742 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to consume 1 partitions with consumer thread SimpleConsumer - Source: Kafka[event.bifrost.log] -> Timestamp assigner -> Wrap events - broker-8 (kafka08.sto.midasplayer.com:9092) | |
2017-05-11 15:10:10,744 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to consume 1 partitions with consumer thread SimpleConsumer - Source: Kafka[rbeaDeploymentsplattest1] -> Max watermark -> Drop errors -> Filter for topic - broker-8 (kafka08.sto.midasplayer.com:9092) | |
2017-05-11 15:10:10,744 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to consume 1 partitions with consumer thread SimpleConsumer - Source: Kafka[rbea.state.event.bifrost.log] -> Timestamp assigner -> Filter state updates -> Wrap events - broker-8 (kafka08.sto.midasplayer.com:9092) | |
2017-05-11 15:10:45,717 INFO org.apache.flink.runtime.state.DefaultOperatorStateBackend - DefaultOperatorStateBackend snapshot (File Stream Factory @ hdfs:/flink/external-checkpoints/bifrost/savepoint-065c09-43f08a60f389, synchronous part) in thread Thread[Create Job information (1/1),5,Flink Task Threads] took 2 ms.
2017-05-11 15:10:45,717 INFO org.apache.flink.runtime.state.DefaultOperatorStateBackend - DefaultOperatorStateBackend snapshot (File Stream Factory @ hdfs:/flink/external-checkpoints/bifrost/savepoint-065c09-43f08a60f389, synchronous part) in thread Thread[Create Fields and Ids -> Filter Errors and Notifications (1/1),5,Flink Task Threads] took 3 ms.
2017-05-11 15:10:45,721 INFO org.apache.flink.runtime.state.DefaultOperatorStateBackend - DefaultOperatorStateBackend snapshot (File Stream Factory @ hdfs:/flink/external-checkpoints/bifrost/savepoint-065c09-43f08a60f389, synchronous part) in thread Thread[Async calls on Source: Kafka[rbeaDeploymentsplattest1] -> Max watermark -> Drop errors -> Filter for topic (1/1),5,Flink Task Threads] took 7 ms.
2017-05-11 15:10:45,722 INFO org.apache.flink.runtime.state.DefaultOperatorStateBackend - DefaultOperatorStateBackend snapshot (File Stream Factory @ hdfs:/flink/external-checkpoints/bifrost/savepoint-065c09-43f08a60f389, synchronous part) in thread Thread[Async calls on Source: Kafka[rbea.state.event.bifrost.log] -> Timestamp assigner -> Filter state updates -> Wrap events (1/1),5,Flink Task Threads] took 7 ms.
2017-05-11 15:10:45,723 INFO org.apache.flink.runtime.state.DefaultOperatorStateBackend - DefaultOperatorStateBackend snapshot (File Stream Factory @ hdfs:/flink/external-checkpoints/bifrost/savepoint-065c09-43f08a60f389, synchronous part) in thread Thread[Async calls on Source: Kafka[event.bifrost.log] -> Timestamp assigner -> Wrap events (1/1),5,Flink Task Threads] took 9 ms.
2017-05-11 15:10:45,735 INFO org.apache.flink.runtime.state.DefaultOperatorStateBackend - DefaultOperatorStateBackend snapshot (File Stream Factory @ hdfs:/flink/external-checkpoints/bifrost/savepoint-065c09-43f08a60f389, asynchronous part) in thread Thread[pool-22-thread-1,5,Flink Task Threads] took 17 ms.
2017-05-11 15:10:45,735 INFO org.apache.flink.runtime.state.DefaultOperatorStateBackend - DefaultOperatorStateBackend snapshot (File Stream Factory @ hdfs:/flink/external-checkpoints/bifrost/savepoint-065c09-43f08a60f389, asynchronous part) in thread Thread[pool-12-thread-1,5,Flink Task Threads] took 17 ms.
2017-05-11 15:10:46,012 INFO org.apache.flink.contrib.streaming.state.RocksDBKeyedStateBackend - Asynchronous RocksDB snapshot (File Stream Factory @ hdfs:/flink/external-checkpoints/bifrost/savepoint-065c09-43f08a60f389, synchronous part) in thread Thread[Window aggregator (1/1),5,Flink Task Threads] took 1 ms.
2017-05-11 15:10:46,012 INFO org.apache.flink.contrib.streaming.state.RocksDBKeyedStateBackend - Asynchronous RocksDB snapshot (File Stream Factory @ hdfs:/flink/external-checkpoints/bifrost/savepoint-065c09-43f08a60f389, synchronous part) in thread Thread[Keep last -> NoOp -> Create Aggrigato events (1/1),5,Flink Task Threads] took 1 ms.
2017-05-11 15:10:46,068 INFO org.apache.flink.contrib.streaming.state.RocksDBKeyedStateBackend - Asynchronous RocksDB snapshot (File Stream Factory @ hdfs:/flink/external-checkpoints/bifrost/savepoint-065c09-43f08a60f389, asynchronous part) in thread Thread[pool-18-thread-1,5,Flink Task Threads] took 55 ms.
2017-05-11 15:10:46,068 INFO org.apache.flink.contrib.streaming.state.RocksDBKeyedStateBackend - Asynchronous RocksDB snapshot (File Stream Factory @ hdfs:/flink/external-checkpoints/bifrost/savepoint-065c09-43f08a60f389, asynchronous part) in thread Thread[pool-23-thread-1,5,Flink Task Threads] took 55 ms.
2017-05-11 15:10:46,153 INFO org.apache.flink.runtime.state.DefaultOperatorStateBackend - DefaultOperatorStateBackend snapshot (File Stream Factory @ hdfs:/flink/external-checkpoints/bifrost/savepoint-065c09-43f08a60f389, synchronous part) in thread Thread[Execute processors -> (Filter processor info -> Filter Failures, Filter BEA) (1/1),5,Flink Task Threads] took 142 ms.
2017-05-11 15:10:46,153 INFO org.apache.flink.contrib.streaming.state.RocksDBKeyedStateBackend - Asynchronous RocksDB snapshot (File Stream Factory @ hdfs:/flink/external-checkpoints/bifrost/savepoint-065c09-43f08a60f389, synchronous part) in thread Thread[Execute processors -> (Filter processor info -> Filter Failures, Filter BEA) (1/1),5,Flink Task Threads] took 0 ms.
2017-05-11 15:10:46,218 INFO org.apache.flink.contrib.streaming.state.RocksDBKeyedStateBackend - Asynchronous RocksDB snapshot (File Stream Factory @ hdfs:/flink/external-checkpoints/bifrost/savepoint-065c09-43f08a60f389, asynchronous part) in thread Thread[pool-16-thread-1,5,Flink Task Threads] took 65 ms.
2017-05-11 15:10:46,268 INFO org.apache.flink.runtime.state.DefaultOperatorStateBackend - DefaultOperatorStateBackend snapshot (File Stream Factory @ hdfs:/flink/external-checkpoints/bifrost/savepoint-065c09-43f08a60f389, asynchronous part) in thread Thread[pool-16-thread-1,5,Flink Task Threads] took 49 ms.
2017-05-11 15:10:46,796 INFO org.apache.flink.runtime.taskmanager.Task - Attempting to fail task externally Source: Kafka[rbeaDeploymentsplattest1] -> Max watermark -> Drop errors -> Filter for topic (1/1) (bebc6a77643e09ce2042d6a52f291e3e).
2017-05-11 15:10:46,797 INFO org.apache.flink.runtime.taskmanager.Task - Source: Kafka[rbeaDeploymentsplattest1] -> Max watermark -> Drop errors -> Filter for topic (1/1) (bebc6a77643e09ce2042d6a52f291e3e) switched from RUNNING to FAILED.
AsynchronousException{java.lang.Exception: Could not materialize checkpoint 1 for operator Source: Kafka[rbeaDeploymentsplattest1] -> Max watermark -> Drop errors -> Filter for topic (1/1).}
	at org.apache.flink.streaming.runtime.tasks.StreamTask$AsyncCheckpointRunnable.run(StreamTask.java:966)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.Exception: Could not materialize checkpoint 1 for operator Source: Kafka[rbeaDeploymentsplattest1] -> Max watermark -> Drop errors -> Filter for topic (1/1).
	... 6 more
Caused by: java.util.concurrent.ExecutionException: java.io.IOException: Could not open output stream for state backend
	at java.util.concurrent.FutureTask.report(FutureTask.java:122)
	at java.util.concurrent.FutureTask.get(FutureTask.java:192)
	at org.apache.flink.util.FutureUtil.runIfNotDoneAndGet(FutureUtil.java:43)
	at org.apache.flink.streaming.runtime.tasks.StreamTask$AsyncCheckpointRunnable.run(StreamTask.java:902)
	... 5 more
	Suppressed: java.lang.Exception: Could not properly cancel managed operator state future.
		at org.apache.flink.streaming.api.operators.OperatorSnapshotResult.cancel(OperatorSnapshotResult.java:98)
		at org.apache.flink.streaming.runtime.tasks.StreamTask$AsyncCheckpointRunnable.cleanup(StreamTask.java:1018)
		at org.apache.flink.streaming.runtime.tasks.StreamTask$AsyncCheckpointRunnable.run(StreamTask.java:957)
		... 5 more
	Caused by: java.util.concurrent.ExecutionException: java.io.IOException: Could not open output stream for state backend
		at java.util.concurrent.FutureTask.report(FutureTask.java:122)
		at java.util.concurrent.FutureTask.get(FutureTask.java:192)
		at org.apache.flink.util.FutureUtil.runIfNotDoneAndGet(FutureUtil.java:43)
		at org.apache.flink.runtime.state.StateUtil.discardStateFuture(StateUtil.java:85)
		at org.apache.flink.streaming.api.operators.OperatorSnapshotResult.cancel(OperatorSnapshotResult.java:96)
		... 7 more
	Caused by: java.io.IOException: Could not open output stream for state backend
		at org.apache.flink.runtime.state.filesystem.FsCheckpointStreamFactory$FsCheckpointStateOutputStream.createStream(FsCheckpointStreamFactory.java:371)
		at org.apache.flink.runtime.state.filesystem.FsCheckpointStreamFactory$FsCheckpointStateOutputStream.flush(FsCheckpointStreamFactory.java:228)
		at org.apache.flink.runtime.state.filesystem.FsCheckpointStreamFactory$FsCheckpointStateOutputStream.write(FsCheckpointStreamFactory.java:203)
		at java.io.DataOutputStream.write(DataOutputStream.java:107)
		at org.apache.flink.api.java.typeutils.runtime.DataOutputViewStream.write(DataOutputViewStream.java:41)
		at java.io.ObjectOutputStream$BlockDataOutputStream.drain(ObjectOutputStream.java:1877)
		at java.io.ObjectOutputStream$BlockDataOutputStream.setBlockDataMode(ObjectOutputStream.java:1786)
		at java.io.ObjectOutputStream.writeNonProxyDesc(ObjectOutputStream.java:1286)
		at java.io.ObjectOutputStream.writeClassDesc(ObjectOutputStream.java:1231)
		at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1427)
		at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1178)
		at java.io.ObjectOutputStream.writeFatalException(ObjectOutputStream.java:1577)
		at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:351)
		at org.apache.flink.util.InstantiationUtil.serializeObject(InstantiationUtil.java:323)
		at org.apache.flink.runtime.state.JavaSerializer.serialize(JavaSerializer.java:69)
		at org.apache.flink.runtime.state.JavaSerializer.serialize(JavaSerializer.java:33)
		at org.apache.flink.runtime.state.DefaultOperatorStateBackend$PartitionableListState.write(DefaultOperatorStateBackend.java:415)
		at org.apache.flink.runtime.state.DefaultOperatorStateBackend$1.performOperation(DefaultOperatorStateBackend.java:232)
		at org.apache.flink.runtime.state.DefaultOperatorStateBackend$1.performOperation(DefaultOperatorStateBackend.java:202)
		at org.apache.flink.runtime.io.async.AbstractAsyncIOCallable.call(AbstractAsyncIOCallable.java:72)
		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
		at org.apache.flink.util.FutureUtil.runIfNotDoneAndGet(FutureUtil.java:40)
		at org.apache.flink.streaming.runtime.tasks.StreamTask$AsyncCheckpointRunnable.run(StreamTask.java:902)
		... 5 more
	Caused by: java.io.IOException: Cannot register Closeable, registry is already closed. Closing argument.
		at org.apache.flink.util.AbstractCloseableRegistry.registerClosable(AbstractCloseableRegistry.java:66)
		at org.apache.flink.core.fs.ClosingFSDataOutputStream.wrapSafe(ClosingFSDataOutputStream.java:101)
		at org.apache.flink.core.fs.SafetyNetWrapperFileSystem.create(SafetyNetWrapperFileSystem.java:125)
		at org.apache.flink.core.fs.FileSystem.create(FileSystem.java:621)
		at org.apache.flink.runtime.state.filesystem.FsCheckpointStreamFactory$FsCheckpointStateOutputStream.createStream(FsCheckpointStreamFactory.java:362)
		... 27 more
	[CIRCULAR REFERENCE:java.io.IOException: Could not open output stream for state backend]
2017-05-11 15:10:46,802 INFO org.apache.flink.runtime.taskmanager.Task - Attempting to fail task externally Source: Kafka[event.bifrost.log] -> Timestamp assigner -> Wrap events (1/1) (adbb1d5170737eaef3931cbb797be277).
2017-05-11 15:10:46,802 INFO org.apache.flink.runtime.taskmanager.Task - Attempting to fail task externally Source: Kafka[rbea.state.event.bifrost.log] -> Timestamp assigner -> Filter state updates -> Wrap events (1/1) (fedc06fd7a057558f7aea138790e76f1).
2017-05-11 15:10:46,802 INFO org.apache.flink.runtime.taskmanager.Task - Source: Kafka[event.bifrost.log] -> Timestamp assigner -> Wrap events (1/1) (adbb1d5170737eaef3931cbb797be277) switched from RUNNING to FAILED.
AsynchronousException{java.lang.Exception: Could not materialize checkpoint 1 for operator Source: Kafka[event.bifrost.log] -> Timestamp assigner -> Wrap events (1/1).}
	at org.apache.flink.streaming.runtime.tasks.StreamTask$AsyncCheckpointRunnable.run(StreamTask.java:966)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.Exception: Could not materialize checkpoint 1 for operator Source: Kafka[event.bifrost.log] -> Timestamp assigner -> Wrap events (1/1).
	... 6 more
Caused by: java.util.concurrent.ExecutionException: java.io.IOException: Could not open output stream for state backend
	at java.util.concurrent.FutureTask.report(FutureTask.java:122)
	at java.util.concurrent.FutureTask.get(FutureTask.java:192)
	at org.apache.flink.util.FutureUtil.runIfNotDoneAndGet(FutureUtil.java:43)
	at org.apache.flink.streaming.runtime.tasks.StreamTask$AsyncCheckpointRunnable.run(StreamTask.java:902)
	... 5 more
	Suppressed: java.lang.Exception: Could not properly cancel managed operator state future.
		at org.apache.flink.streaming.api.operators.OperatorSnapshotResult.cancel(OperatorSnapshotResult.java:98)
		at org.apache.flink.streaming.runtime.tasks.StreamTask$AsyncCheckpointRunnable.cleanup(StreamTask.java:1018)
		at org.apache.flink.streaming.runtime.tasks.StreamTask$AsyncCheckpointRunnable.run(StreamTask.java:957)
		... 5 more
	Caused by: java.util.concurrent.ExecutionException: java.io.IOException: Could not open output stream for state backend
		at java.util.concurrent.FutureTask.report(FutureTask.java:122)
		at java.util.concurrent.FutureTask.get(FutureTask.java:192)
		at org.apache.flink.util.FutureUtil.runIfNotDoneAndGet(FutureUtil.java:43)
		at org.apache.flink.runtime.state.StateUtil.discardStateFuture(StateUtil.java:85)
		at org.apache.flink.streaming.api.operators.OperatorSnapshotResult.cancel(OperatorSnapshotResult.java:96)
		... 7 more
	Caused by: java.io.IOException: Could not open output stream for state backend
		at org.apache.flink.runtime.state.filesystem.FsCheckpointStreamFactory$FsCheckpointStateOutputStream.createStream(FsCheckpointStreamFactory.java:371)
		at org.apache.flink.runtime.state.filesystem.FsCheckpointStreamFactory$FsCheckpointStateOutputStream.flush(FsCheckpointStreamFactory.java:228)
		at org.apache.flink.runtime.state.filesystem.FsCheckpointStreamFactory$FsCheckpointStateOutputStream.write(FsCheckpointStreamFactory.java:203)
		at java.io.DataOutputStream.write(DataOutputStream.java:107)
		at org.apache.flink.api.java.typeutils.runtime.DataOutputViewStream.write(DataOutputViewStream.java:41)
		at java.io.ObjectOutputStream$BlockDataOutputStream.drain(ObjectOutputStream.java:1877)
		at java.io.ObjectOutputStream$BlockDataOutputStream.setBlockDataMode(ObjectOutputStream.java:1786)
at java.io.ObjectOutputStream.writeNonProxyDesc(ObjectOutputStream.java:1286) | |
at java.io.ObjectOutputStream.writeClassDesc(ObjectOutputStream.java:1231) | |
at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1427) | |
at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1178) | |
at java.io.ObjectOutputStream.writeFatalException(ObjectOutputStream.java:1577) | |
at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:351) | |
at org.apache.flink.util.InstantiationUtil.serializeObject(InstantiationUtil.java:323) | |
at org.apache.flink.runtime.state.JavaSerializer.serialize(JavaSerializer.java:69) | |
at org.apache.flink.runtime.state.JavaSerializer.serialize(JavaSerializer.java:33) | |
at org.apache.flink.runtime.state.DefaultOperatorStateBackend$PartitionableListState.write(DefaultOperatorStateBackend.java:415) | |
at org.apache.flink.runtime.state.DefaultOperatorStateBackend$1.performOperation(DefaultOperatorStateBackend.java:232) | |
at org.apache.flink.runtime.state.DefaultOperatorStateBackend$1.performOperation(DefaultOperatorStateBackend.java:202) | |
at org.apache.flink.runtime.io.async.AbstractAsyncIOCallable.call(AbstractAsyncIOCallable.java:72) | |
at java.util.concurrent.FutureTask.run(FutureTask.java:266) | |
at org.apache.flink.util.FutureUtil.runIfNotDoneAndGet(FutureUtil.java:40) | |
at org.apache.flink.streaming.runtime.tasks.StreamTask$AsyncCheckpointRunnable.run(StreamTask.java:902) | |
... 5 more | |
Caused by: java.io.IOException: Cannot register Closeable, registry is already closed. Closing argument. | |
at org.apache.flink.util.AbstractCloseableRegistry.registerClosable(AbstractCloseableRegistry.java:66) | |
at org.apache.flink.core.fs.ClosingFSDataOutputStream.wrapSafe(ClosingFSDataOutputStream.java:101) | |
at org.apache.flink.core.fs.SafetyNetWrapperFileSystem.create(SafetyNetWrapperFileSystem.java:125) | |
at org.apache.flink.core.fs.FileSystem.create(FileSystem.java:621) | |
at org.apache.flink.runtime.state.filesystem.FsCheckpointStreamFactory$FsCheckpointStateOutputStream.createStream(FsCheckpointStreamFactory.java:362) | |
... 27 more | |
[CIRCULAR REFERENCE:java.io.IOException: Could not open output stream for state backend] | |
2017-05-11 15:10:46,803 INFO org.apache.flink.runtime.taskmanager.Task - Source: Kafka[rbea.state.event.bifrost.log] -> Timestamp assigner -> Filter state updates -> Wrap events (1/1) (fedc06fd7a057558f7aea138790e76f1) switched from RUNNING to FAILED. | |
AsynchronousException{java.lang.Exception: Could not materialize checkpoint 1 for operator Source: Kafka[rbea.state.event.bifrost.log] -> Timestamp assigner -> Filter state updates -> Wrap events (1/1).} | |
at org.apache.flink.streaming.runtime.tasks.StreamTask$AsyncCheckpointRunnable.run(StreamTask.java:966) | |
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) | |
at java.util.concurrent.FutureTask.run(FutureTask.java:266) | |
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) | |
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) | |
at java.lang.Thread.run(Thread.java:748) | |
Caused by: java.lang.Exception: Could not materialize checkpoint 1 for operator Source: Kafka[rbea.state.event.bifrost.log] -> Timestamp assigner -> Filter state updates -> Wrap events (1/1). | |
... 6 more | |
Caused by: java.util.concurrent.ExecutionException: java.io.IOException: Could not open output stream for state backend | |
at java.util.concurrent.FutureTask.report(FutureTask.java:122) | |
at java.util.concurrent.FutureTask.get(FutureTask.java:192) | |
at org.apache.flink.util.FutureUtil.runIfNotDoneAndGet(FutureUtil.java:43) | |
at org.apache.flink.streaming.runtime.tasks.StreamTask$AsyncCheckpointRunnable.run(StreamTask.java:902) | |
... 5 more | |
Suppressed: java.lang.Exception: Could not properly cancel managed operator state future. | |
at org.apache.flink.streaming.api.operators.OperatorSnapshotResult.cancel(OperatorSnapshotResult.java:98) | |
at org.apache.flink.streaming.runtime.tasks.StreamTask$AsyncCheckpointRunnable.cleanup(StreamTask.java:1018) | |
at org.apache.flink.streaming.runtime.tasks.StreamTask$AsyncCheckpointRunnable.run(StreamTask.java:957) | |
... 5 more | |
Caused by: java.util.concurrent.ExecutionException: java.io.IOException: Could not open output stream for state backend | |
at java.util.concurrent.FutureTask.report(FutureTask.java:122) | |
at java.util.concurrent.FutureTask.get(FutureTask.java:192) | |
at org.apache.flink.util.FutureUtil.runIfNotDoneAndGet(FutureUtil.java:43) | |
at org.apache.flink.runtime.state.StateUtil.discardStateFuture(StateUtil.java:85) | |
at org.apache.flink.streaming.api.operators.OperatorSnapshotResult.cancel(OperatorSnapshotResult.java:96) | |
... 7 more | |
Caused by: java.io.IOException: Could not open output stream for state backend | |
at org.apache.flink.runtime.state.filesystem.FsCheckpointStreamFactory$FsCheckpointStateOutputStream.createStream(FsCheckpointStreamFactory.java:371) | |
at org.apache.flink.runtime.state.filesystem.FsCheckpointStreamFactory$FsCheckpointStateOutputStream.flush(FsCheckpointStreamFactory.java:228) | |
at org.apache.flink.runtime.state.filesystem.FsCheckpointStreamFactory$FsCheckpointStateOutputStream.write(FsCheckpointStreamFactory.java:203) | |
at java.io.DataOutputStream.write(DataOutputStream.java:107) | |
at org.apache.flink.api.java.typeutils.runtime.DataOutputViewStream.write(DataOutputViewStream.java:41) | |
at java.io.ObjectOutputStream$BlockDataOutputStream.drain(ObjectOutputStream.java:1877) | |
at java.io.ObjectOutputStream$BlockDataOutputStream.setBlockDataMode(ObjectOutputStream.java:1786) | |
at java.io.ObjectOutputStream.writeNonProxyDesc(ObjectOutputStream.java:1286) | |
at java.io.ObjectOutputStream.writeClassDesc(ObjectOutputStream.java:1231) | |
at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1427) | |
at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1178) | |
at java.io.ObjectOutputStream.writeFatalException(ObjectOutputStream.java:1577) | |
at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:351) | |
at org.apache.flink.util.InstantiationUtil.serializeObject(InstantiationUtil.java:323) | |
at org.apache.flink.runtime.state.JavaSerializer.serialize(JavaSerializer.java:69) | |
at org.apache.flink.runtime.state.JavaSerializer.serialize(JavaSerializer.java:33) | |
at org.apache.flink.runtime.state.DefaultOperatorStateBackend$PartitionableListState.write(DefaultOperatorStateBackend.java:415) | |
at org.apache.flink.runtime.state.DefaultOperatorStateBackend$1.performOperation(DefaultOperatorStateBackend.java:232) | |
at org.apache.flink.runtime.state.DefaultOperatorStateBackend$1.performOperation(DefaultOperatorStateBackend.java:202) | |
at org.apache.flink.runtime.io.async.AbstractAsyncIOCallable.call(AbstractAsyncIOCallable.java:72) | |
at java.util.concurrent.FutureTask.run(FutureTask.java:266) | |
at org.apache.flink.util.FutureUtil.runIfNotDoneAndGet(FutureUtil.java:40) | |
at org.apache.flink.streaming.runtime.tasks.StreamTask$AsyncCheckpointRunnable.run(StreamTask.java:902) | |
... 5 more | |
Caused by: java.io.IOException: Cannot register Closeable, registry is already closed. Closing argument. | |
at org.apache.flink.util.AbstractCloseableRegistry.registerClosable(AbstractCloseableRegistry.java:66) | |
at org.apache.flink.core.fs.ClosingFSDataOutputStream.wrapSafe(ClosingFSDataOutputStream.java:101) | |
at org.apache.flink.core.fs.SafetyNetWrapperFileSystem.create(SafetyNetWrapperFileSystem.java:125) | |
at org.apache.flink.core.fs.FileSystem.create(FileSystem.java:621) | |
at org.apache.flink.runtime.state.filesystem.FsCheckpointStreamFactory$FsCheckpointStateOutputStream.createStream(FsCheckpointStreamFactory.java:362) | |
... 27 more | |
[CIRCULAR REFERENCE:java.io.IOException: Could not open output stream for state backend] | |
2017-05-11 15:10:46,808 INFO org.apache.flink.runtime.taskmanager.Task - Triggering cancellation of task code Source: Kafka[event.bifrost.log] -> Timestamp assigner -> Wrap events (1/1) (adbb1d5170737eaef3931cbb797be277). | |
2017-05-11 15:10:46,808 INFO org.apache.flink.runtime.taskmanager.Task - Triggering cancellation of task code Source: Kafka[rbeaDeploymentsplattest1] -> Max watermark -> Drop errors -> Filter for topic (1/1) (bebc6a77643e09ce2042d6a52f291e3e). | |
2017-05-11 15:10:46,811 INFO org.apache.flink.runtime.taskmanager.Task - Triggering cancellation of task code Source: Kafka[rbea.state.event.bifrost.log] -> Timestamp assigner -> Filter state updates -> Wrap events (1/1) (fedc06fd7a057558f7aea138790e76f1). | |
2017-05-11 15:10:47,583 INFO org.apache.flink.shaded.org.apache.curator.framework.imps.CuratorFrameworkImpl - backgroundOperationsLoop exiting | |
2017-05-11 15:10:47,584 INFO org.apache.zookeeper.ZooKeeper - Session: 0x15aad587329de37 closed | |
2017-05-11 15:10:47,584 INFO org.apache.zookeeper.ClientCnxn - EventThread shut down | |
2017-05-11 15:10:47,585 INFO org.apache.flink.runtime.taskmanager.Task - Freeing task resources for Source: Kafka[rbeaDeploymentsplattest1] -> Max watermark -> Drop errors -> Filter for topic (1/1) (bebc6a77643e09ce2042d6a52f291e3e). | |
2017-05-11 15:10:47,587 INFO org.apache.flink.shaded.org.apache.curator.framework.imps.CuratorFrameworkImpl - backgroundOperationsLoop exiting | |
2017-05-11 15:10:47,587 INFO org.apache.zookeeper.ZooKeeper - Session: 0x35aad58756ea388 closed | |
2017-05-11 15:10:47,587 INFO org.apache.zookeeper.ClientCnxn - EventThread shut down | |
2017-05-11 15:10:47,587 INFO org.apache.flink.runtime.taskmanager.Task - Freeing task resources for Source: Kafka[event.bifrost.log] -> Timestamp assigner -> Wrap events (1/1) (adbb1d5170737eaef3931cbb797be277). | |
2017-05-11 15:10:47,603 INFO org.apache.flink.runtime.taskmanager.Task - Ensuring all FileSystem streams are closed for task Source: Kafka[rbeaDeploymentsplattest1] -> Max watermark -> Drop errors -> Filter for topic (1/1) (bebc6a77643e09ce2042d6a52f291e3e) [FAILED] | |
2017-05-11 15:10:47,603 INFO org.apache.flink.runtime.taskmanager.Task - Ensuring all FileSystem streams are closed for task Source: Kafka[event.bifrost.log] -> Timestamp assigner -> Wrap events (1/1) (adbb1d5170737eaef3931cbb797be277) [FAILED] | |
2017-05-11 15:10:47,605 INFO org.apache.flink.yarn.YarnTaskManager - Un-registering task and sending final execution state FAILED to JobManager for task Source: Kafka[rbeaDeploymentsplattest1] -> Max watermark -> Drop errors -> Filter for topic (bebc6a77643e09ce2042d6a52f291e3e) | |
2017-05-11 15:10:47,607 INFO org.apache.flink.yarn.YarnTaskManager - Un-registering task and sending final execution state FAILED to JobManager for task Source: Kafka[event.bifrost.log] -> Timestamp assigner -> Wrap events (adbb1d5170737eaef3931cbb797be277) | |
2017-05-11 15:10:47,648 INFO org.apache.flink.runtime.taskmanager.Task - Attempting to cancel task IterationSource-15 (1/1) (54d869533bda639da0674dafc218bc0f). | |
2017-05-11 15:10:47,649 INFO org.apache.flink.runtime.taskmanager.Task - IterationSource-15 (1/1) (54d869533bda639da0674dafc218bc0f) switched from RUNNING to CANCELING. | |
2017-05-11 15:10:47,649 INFO org.apache.flink.runtime.taskmanager.Task - Triggering cancellation of task code IterationSource-15 (1/1) (54d869533bda639da0674dafc218bc0f). | |
2017-05-11 15:10:47,650 INFO org.apache.flink.streaming.runtime.tasks.StreamIterationHead - Iteration head IterationSource-15 (1/1) removed feedback queue under 065c0937d56f3e8da025e015d3ab332b-broker-15-0 | |
2017-05-11 15:10:47,651 INFO org.apache.flink.yarn.YarnTaskManager - Discarding the results produced by task execution adbb1d5170737eaef3931cbb797be277 | |
2017-05-11 15:10:47,651 INFO org.apache.flink.runtime.taskmanager.Task - IterationSource-15 (1/1) (54d869533bda639da0674dafc218bc0f) switched from CANCELING to CANCELED. | |
2017-05-11 15:10:47,651 INFO org.apache.flink.runtime.taskmanager.Task - Freeing task resources for IterationSource-15 (1/1) (54d869533bda639da0674dafc218bc0f). | |
2017-05-11 15:10:47,651 INFO org.apache.flink.runtime.taskmanager.Task - Attempting to cancel task Source: Kafka[rbea.state.event.bifrost.log] -> Timestamp assigner -> Filter state updates -> Wrap events (1/1) (fedc06fd7a057558f7aea138790e76f1). | |
2017-05-11 15:10:47,651 INFO org.apache.flink.runtime.taskmanager.Task - Task Source: Kafka[rbea.state.event.bifrost.log] -> Timestamp assigner -> Filter state updates -> Wrap events (1/1) is already in state FAILED | |
2017-05-11 15:10:47,652 INFO org.apache.flink.runtime.taskmanager.Task - Attempting to cancel task Source: Query Job Info (1/1) (3d344ccb779c222ae5e71a6cd702ab08). | |
2017-05-11 15:10:47,652 INFO org.apache.flink.runtime.taskmanager.Task - Ensuring all FileSystem streams are closed for task IterationSource-15 (1/1) (54d869533bda639da0674dafc218bc0f) [CANCELED] | |
2017-05-11 15:10:47,652 INFO org.apache.flink.runtime.taskmanager.Task - Source: Query Job Info (1/1) (3d344ccb779c222ae5e71a6cd702ab08) switched from RUNNING to CANCELING. | |
2017-05-11 15:10:47,653 INFO org.apache.flink.runtime.taskmanager.Task - Triggering cancellation of task code Source: Query Job Info (1/1) (3d344ccb779c222ae5e71a6cd702ab08). | |
2017-05-11 15:10:47,653 INFO org.apache.flink.yarn.YarnTaskManager - Discarding the results produced by task execution bebc6a77643e09ce2042d6a52f291e3e | |
2017-05-11 15:10:47,654 INFO org.apache.flink.runtime.taskmanager.Task - Attempting to cancel task Create Fields and Ids -> Filter Errors and Notifications (1/1) (96ba02017d86acb4d6e136631f4c896b). | |
2017-05-11 15:10:47,655 INFO org.apache.flink.runtime.taskmanager.Task - Source: Query Job Info (1/1) (3d344ccb779c222ae5e71a6cd702ab08) switched from CANCELING to CANCELED. | |
2017-05-11 15:10:47,655 INFO org.apache.flink.runtime.taskmanager.Task - Create Fields and Ids -> Filter Errors and Notifications (1/1) (96ba02017d86acb4d6e136631f4c896b) switched from RUNNING to CANCELING. | |
2017-05-11 15:10:47,655 INFO org.apache.flink.runtime.taskmanager.Task - Freeing task resources for Source: Query Job Info (1/1) (3d344ccb779c222ae5e71a6cd702ab08). | |
2017-05-11 15:10:47,655 INFO org.apache.flink.runtime.taskmanager.Task - Ensuring all FileSystem streams are closed for task Source: Query Job Info (1/1) (3d344ccb779c222ae5e71a6cd702ab08) [CANCELED] | |
2017-05-11 15:10:47,655 INFO org.apache.flink.runtime.taskmanager.Task - Triggering cancellation of task code Create Fields and Ids -> Filter Errors and Notifications (1/1) (96ba02017d86acb4d6e136631f4c896b). | |
2017-05-11 15:10:47,656 INFO org.apache.flink.runtime.taskmanager.Task - Attempting to cancel task Execute processors -> (Filter processor info -> Filter Failures, Filter BEA) (1/1) (51ec48a3e91ba2d68ab80b614b3d7a56). | |
2017-05-11 15:10:47,656 INFO org.apache.flink.runtime.taskmanager.Task - Execute processors -> (Filter processor info -> Filter Failures, Filter BEA) (1/1) (51ec48a3e91ba2d68ab80b614b3d7a56) switched from RUNNING to CANCELING. | |
2017-05-11 15:10:47,657 INFO org.apache.flink.runtime.taskmanager.Task - Triggering cancellation of task code Execute processors -> (Filter processor info -> Filter Failures, Filter BEA) (1/1) (51ec48a3e91ba2d68ab80b614b3d7a56). | |
2017-05-11 15:10:47,657 INFO org.apache.flink.runtime.taskmanager.Task - Attempting to cancel task Window aggregator (1/1) (a9b27d006832311c1c5e26cb212ff668). | |
2017-05-11 15:10:47,658 INFO org.apache.flink.runtime.taskmanager.Task - Window aggregator (1/1) (a9b27d006832311c1c5e26cb212ff668) switched from RUNNING to CANCELING. | |
2017-05-11 15:10:47,658 INFO org.apache.flink.runtime.taskmanager.Task - Create Fields and Ids -> Filter Errors and Notifications (1/1) (96ba02017d86acb4d6e136631f4c896b) switched from CANCELING to CANCELED. | |
2017-05-11 15:10:47,658 INFO org.apache.flink.runtime.taskmanager.Task - Freeing task resources for Create Fields and Ids -> Filter Errors and Notifications (1/1) (96ba02017d86acb4d6e136631f4c896b). | |
2017-05-11 15:10:47,658 INFO org.apache.flink.runtime.taskmanager.Task - Triggering cancellation of task code Window aggregator (1/1) (a9b27d006832311c1c5e26cb212ff668). | |
2017-05-11 15:10:47,658 INFO org.apache.flink.runtime.taskmanager.Task - Ensuring all FileSystem streams are closed for task Create Fields and Ids -> Filter Errors and Notifications (1/1) (96ba02017d86acb4d6e136631f4c896b) [CANCELED] | |
2017-05-11 15:10:47,658 INFO org.apache.flink.yarn.YarnTaskManager - Un-registering task and sending final execution state CANCELED to JobManager for task IterationSource-15 (54d869533bda639da0674dafc218bc0f) | |
2017-05-11 15:10:47,659 INFO org.apache.flink.runtime.taskmanager.Task - Attempting to cancel task Keep last -> NoOp -> Create Aggrigato events (1/1) (f3f8fe4ef4e9815cd6d834f4a38f2855). | |
2017-05-11 15:10:47,659 INFO org.apache.flink.runtime.taskmanager.Task - Keep last -> NoOp -> Create Aggrigato events (1/1) (f3f8fe4ef4e9815cd6d834f4a38f2855) switched from RUNNING to CANCELING. | |
2017-05-11 15:10:47,659 INFO org.apache.flink.runtime.taskmanager.Task - Triggering cancellation of task code Keep last -> NoOp -> Create Aggrigato events (1/1) (f3f8fe4ef4e9815cd6d834f4a38f2855). | |
2017-05-11 15:10:47,660 INFO org.apache.flink.runtime.taskmanager.Task - Attempting to cancel task MySql output info -> Filter (1/1) (6fd449d469f21e774663efdecc92955e). | |
2017-05-11 15:10:47,660 INFO org.apache.flink.runtime.taskmanager.Task - MySql output info -> Filter (1/1) (6fd449d469f21e774663efdecc92955e) switched from RUNNING to CANCELING. | |
2017-05-11 15:10:47,661 INFO org.apache.flink.runtime.taskmanager.Task - Triggering cancellation of task code MySql output info -> Filter (1/1) (6fd449d469f21e774663efdecc92955e). | |
2017-05-11 15:10:47,661 INFO org.apache.flink.runtime.taskmanager.Task - Attempting to cancel task To DeploymentInfo (1/1) (9d0985ae5088198c20179e94947e1bbd). | |
2017-05-11 15:10:47,662 INFO org.apache.flink.runtime.taskmanager.Task - To DeploymentInfo (1/1) (9d0985ae5088198c20179e94947e1bbd) switched from RUNNING to CANCELING. | |
2017-05-11 15:10:47,662 INFO org.apache.flink.runtime.taskmanager.Task - Triggering cancellation of task code To DeploymentInfo (1/1) (9d0985ae5088198c20179e94947e1bbd). | |
2017-05-11 15:10:47,662 INFO org.apache.flink.runtime.taskmanager.Task - Keep last -> NoOp -> Create Aggrigato events (1/1) (f3f8fe4ef4e9815cd6d834f4a38f2855) switched from CANCELING to CANCELED. | |
2017-05-11 15:10:47,663 INFO org.apache.flink.runtime.taskmanager.Task - Freeing task resources for Keep last -> NoOp -> Create Aggrigato events (1/1) (f3f8fe4ef4e9815cd6d834f4a38f2855). | |
2017-05-11 15:10:47,663 INFO org.apache.flink.runtime.taskmanager.Task - Attempting to cancel task IterationSink-15 (1/1) (289679af7c71b17ca1509d7d7762adc5). | |
2017-05-11 15:10:47,663 INFO org.apache.flink.runtime.taskmanager.Task - IterationSink-15 (1/1) (289679af7c71b17ca1509d7d7762adc5) switched from RUNNING to CANCELING. | |
2017-05-11 15:10:47,663 INFO org.apache.flink.runtime.taskmanager.Task - Ensuring all FileSystem streams are closed for task Keep last -> NoOp -> Create Aggrigato events (1/1) (f3f8fe4ef4e9815cd6d834f4a38f2855) [CANCELED] | |
2017-05-11 15:10:47,663 INFO org.apache.flink.runtime.taskmanager.Task - Window aggregator (1/1) (a9b27d006832311c1c5e26cb212ff668) switched from CANCELING to CANCELED. | |
2017-05-11 15:10:47,663 INFO org.apache.flink.runtime.taskmanager.Task - Freeing task resources for Window aggregator (1/1) (a9b27d006832311c1c5e26cb212ff668). | |
2017-05-11 15:10:47,663 INFO org.apache.flink.runtime.taskmanager.Task - Ensuring all FileSystem streams are closed for task Window aggregator (1/1) (a9b27d006832311c1c5e26cb212ff668) [CANCELED] | |
2017-05-11 15:10:47,663 INFO org.apache.flink.runtime.taskmanager.Task - To DeploymentInfo (1/1) (9d0985ae5088198c20179e94947e1bbd) switched from CANCELING to CANCELED. | |
2017-05-11 15:10:47,663 INFO org.apache.flink.runtime.taskmanager.Task - MySql output info -> Filter (1/1) (6fd449d469f21e774663efdecc92955e) switched from CANCELING to CANCELED. | |
2017-05-11 15:10:47,664 INFO org.apache.flink.runtime.taskmanager.Task - Triggering cancellation of task code IterationSink-15 (1/1) (289679af7c71b17ca1509d7d7762adc5). | |
2017-05-11 15:10:47,664 INFO org.apache.flink.runtime.taskmanager.Task - Freeing task resources for To DeploymentInfo (1/1) (9d0985ae5088198c20179e94947e1bbd). | |
2017-05-11 15:10:47,664 INFO org.apache.flink.runtime.taskmanager.Task - Freeing task resources for MySql output info -> Filter (1/1) (6fd449d469f21e774663efdecc92955e). | |
2017-05-11 15:10:47,665 INFO org.apache.flink.runtime.taskmanager.Task - Ensuring all FileSystem streams are closed for task To DeploymentInfo (1/1) (9d0985ae5088198c20179e94947e1bbd) [CANCELED] | |
2017-05-11 15:10:47,665 INFO org.apache.flink.runtime.taskmanager.Task - Ensuring all FileSystem streams are closed for task MySql output info -> Filter (1/1) (6fd449d469f21e774663efdecc92955e) [CANCELED] | |
2017-05-11 15:10:47,665 INFO org.apache.flink.yarn.YarnTaskManager - Un-registering task and sending final execution state CANCELED to JobManager for task Source: Query Job Info (3d344ccb779c222ae5e71a6cd702ab08) | |
2017-05-11 15:10:47,665 INFO org.apache.flink.runtime.taskmanager.Task - IterationSink-15 (1/1) (289679af7c71b17ca1509d7d7762adc5) switched from CANCELING to CANCELED. | |
2017-05-11 15:10:47,665 INFO org.apache.flink.runtime.taskmanager.Task - Freeing task resources for IterationSink-15 (1/1) (289679af7c71b17ca1509d7d7762adc5). | |
2017-05-11 15:10:47,666 INFO org.apache.flink.runtime.taskmanager.Task - Attempting to cancel task Create Job information (1/1) (6a0c6e47d414c6e3159b89de29d2aa6f). | |
2017-05-11 15:10:47,666 INFO org.apache.flink.runtime.taskmanager.Task - Ensuring all FileSystem streams are closed for task IterationSink-15 (1/1) (289679af7c71b17ca1509d7d7762adc5) [CANCELED] | |
2017-05-11 15:10:47,666 INFO org.apache.flink.runtime.taskmanager.Task - Create Job information (1/1) (6a0c6e47d414c6e3159b89de29d2aa6f) switched from RUNNING to CANCELING. | |
2017-05-11 15:10:47,667 INFO org.apache.flink.runtime.taskmanager.Task - Triggering cancellation of task code Create Job information (1/1) (6a0c6e47d414c6e3159b89de29d2aa6f). | |
2017-05-11 15:10:47,668 INFO org.apache.flink.yarn.YarnTaskManager - Un-registering task and sending final execution state CANCELED to JobManager for task Create Fields and Ids -> Filter Errors and Notifications (96ba02017d86acb4d6e136631f4c896b) | |
2017-05-11 15:10:47,668 INFO org.apache.flink.runtime.taskmanager.Task - Create Job information (1/1) (6a0c6e47d414c6e3159b89de29d2aa6f) switched from CANCELING to CANCELED. | |
2017-05-11 15:10:47,668 INFO org.apache.flink.runtime.taskmanager.Task - Freeing task resources for Create Job information (1/1) (6a0c6e47d414c6e3159b89de29d2aa6f). | |
2017-05-11 15:10:47,668 INFO org.apache.flink.runtime.taskmanager.Task - Ensuring all FileSystem streams are closed for task Create Job information (1/1) (6a0c6e47d414c6e3159b89de29d2aa6f) [CANCELED] | |
2017-05-11 15:10:47,668 INFO org.apache.flink.yarn.YarnTaskManager - Un-registering task and sending final execution state CANCELED to JobManager for task Keep last -> NoOp -> Create Aggrigato events (f3f8fe4ef4e9815cd6d834f4a38f2855) | |
2017-05-11 15:10:47,669 INFO org.apache.flink.yarn.YarnTaskManager - Un-registering task and sending final execution state CANCELED to JobManager for task Window aggregator (a9b27d006832311c1c5e26cb212ff668) | |
2017-05-11 15:10:47,669 INFO org.apache.flink.yarn.YarnTaskManager - Un-registering task and sending final execution state CANCELED to JobManager for task To DeploymentInfo (9d0985ae5088198c20179e94947e1bbd) | |
2017-05-11 15:10:47,669 INFO org.apache.flink.yarn.YarnTaskManager - Un-registering task and sending final execution state CANCELED to JobManager for task MySql output info -> Filter (6fd449d469f21e774663efdecc92955e) | |
2017-05-11 15:10:47,669 INFO org.apache.flink.yarn.YarnTaskManager - Un-registering task and sending final execution state CANCELED to JobManager for task IterationSink-15 (289679af7c71b17ca1509d7d7762adc5) | |
2017-05-11 15:10:47,670 INFO org.apache.flink.yarn.YarnTaskManager - Un-registering task and sending final execution state CANCELED to JobManager for task Create Job information (6a0c6e47d414c6e3159b89de29d2aa6f) | |
2017-05-11 15:10:47,670 INFO org.apache.flink.runtime.taskmanager.Task - Execute processors -> (Filter processor info -> Filter Failures, Filter BEA) (1/1) (51ec48a3e91ba2d68ab80b614b3d7a56) switched from CANCELING to CANCELED. | |
2017-05-11 15:10:47,671 INFO org.apache.flink.runtime.taskmanager.Task - Freeing task resources for Execute processors -> (Filter processor info -> Filter Failures, Filter BEA) (1/1) (51ec48a3e91ba2d68ab80b614b3d7a56). | |
2017-05-11 15:10:47,671 INFO org.apache.flink.runtime.taskmanager.Task - Ensuring all FileSystem streams are closed for task Execute processors -> (Filter processor info -> Filter Failures, Filter BEA) (1/1) (51ec48a3e91ba2d68ab80b614b3d7a56) [CANCELED] | |
2017-05-11 15:10:47,671 INFO org.apache.flink.yarn.YarnTaskManager - Un-registering task and sending final execution state CANCELED to JobManager for task Execute processors -> (Filter processor info -> Filter Failures, Filter BEA) (51ec48a3e91ba2d68ab80b614b3d7a56) | |
2017-05-11 15:10:47,703 INFO org.apache.flink.shaded.org.apache.curator.framework.imps.CuratorFrameworkImpl - backgroundOperationsLoop exiting | |
2017-05-11 15:10:47,703 INFO org.apache.zookeeper.ZooKeeper - Session: 0x35aad58756ea389 closed | |
2017-05-11 15:10:47,703 INFO org.apache.zookeeper.ClientCnxn - EventThread shut down | |
2017-05-11 15:10:47,704 INFO org.apache.flink.runtime.taskmanager.Task - Freeing task resources for Source: Kafka[rbea.state.event.bifrost.log] -> Timestamp assigner -> Filter state updates -> Wrap events (1/1) (fedc06fd7a057558f7aea138790e76f1). | |
2017-05-11 15:10:47,704 INFO org.apache.flink.runtime.taskmanager.Task - Ensuring all FileSystem streams are closed for task Source: Kafka[rbea.state.event.bifrost.log] -> Timestamp assigner -> Filter state updates -> Wrap events (1/1) (fedc06fd7a057558f7aea138790e76f1) [FAILED] | |
2017-05-11 15:10:47,704 INFO org.apache.flink.yarn.YarnTaskManager - Un-registering task and sending final execution state FAILED to JobManager for task Source: Kafka[rbea.state.event.bifrost.log] -> Timestamp assigner -> Filter state updates -> Wrap events (fedc06fd7a057558f7aea138790e76f1) | |
2017-05-11 15:10:57,728 INFO org.apache.flink.yarn.YarnTaskManager - Received task IterationSource-15 (1/1) | |
2017-05-11 15:10:57,728 INFO org.apache.flink.runtime.taskmanager.Task - IterationSource-15 (1/1) (aaa8f50da5b4f95be0dc5e5533741cf7) switched from CREATED to DEPLOYING. | |
2017-05-11 15:10:57,729 INFO org.apache.flink.runtime.taskmanager.Task - Creating FileSystem stream leak safety net for task IterationSource-15 (1/1) (aaa8f50da5b4f95be0dc5e5533741cf7) [DEPLOYING] | |
2017-05-11 15:10:57,729 INFO org.apache.flink.runtime.taskmanager.Task - Loading JAR files for task IterationSource-15 (1/1) (aaa8f50da5b4f95be0dc5e5533741cf7) [DEPLOYING]. | |
2017-05-11 15:10:57,729 INFO org.apache.flink.yarn.YarnTaskManager - Received task Source: Kafka[event.bifrost.log] -> Timestamp assigner -> Wrap events (1/1) | |
2017-05-11 15:10:57,730 INFO org.apache.flink.runtime.taskmanager.Task - Source: Kafka[event.bifrost.log] -> Timestamp assigner -> Wrap events (1/1) (8a97789a9006d417d4b761f580c98fbe) switched from CREATED to DEPLOYING. | |
2017-05-11 15:10:57,731 INFO org.apache.flink.runtime.taskmanager.Task - Creating FileSystem stream leak safety net for task Source: Kafka[event.bifrost.log] -> Timestamp assigner -> Wrap events (1/1) (8a97789a9006d417d4b761f580c98fbe) [DEPLOYING] | |
2017-05-11 15:10:57,731 INFO org.apache.flink.runtime.taskmanager.Task - Loading JAR files for task Source: Kafka[event.bifrost.log] -> Timestamp assigner -> Wrap events (1/1) (8a97789a9006d417d4b761f580c98fbe) [DEPLOYING].
2017-05-11 15:10:57,731 INFO org.apache.flink.yarn.YarnTaskManager - Received task Source: Kafka[rbea.state.event.bifrost.log] -> Timestamp assigner -> Filter state updates -> Wrap events (1/1)
2017-05-11 15:10:57,731 INFO org.apache.flink.runtime.taskmanager.Task - Registering task at network: IterationSource-15 (1/1) (aaa8f50da5b4f95be0dc5e5533741cf7) [DEPLOYING].
2017-05-11 15:10:57,731 INFO org.apache.flink.runtime.taskmanager.Task - Source: Kafka[rbea.state.event.bifrost.log] -> Timestamp assigner -> Filter state updates -> Wrap events (1/1) (1f577dc5863e4902aa8deea71b71f605) switched from CREATED to DEPLOYING.
2017-05-11 15:10:57,731 INFO org.apache.flink.runtime.taskmanager.Task - IterationSource-15 (1/1) (aaa8f50da5b4f95be0dc5e5533741cf7) switched from DEPLOYING to RUNNING.
2017-05-11 15:10:57,731 INFO org.apache.flink.runtime.taskmanager.Task - Creating FileSystem stream leak safety net for task Source: Kafka[rbea.state.event.bifrost.log] -> Timestamp assigner -> Filter state updates -> Wrap events (1/1) (1f577dc5863e4902aa8deea71b71f605) [DEPLOYING]
2017-05-11 15:10:57,732 INFO org.apache.flink.runtime.taskmanager.Task - Loading JAR files for task Source: Kafka[rbea.state.event.bifrost.log] -> Timestamp assigner -> Filter state updates -> Wrap events (1/1) (1f577dc5863e4902aa8deea71b71f605) [DEPLOYING].
2017-05-11 15:10:57,732 INFO org.apache.flink.yarn.YarnTaskManager - Received task Source: Query Job Info (1/1)
2017-05-11 15:10:57,732 INFO org.apache.flink.runtime.taskmanager.Task - Registering task at network: Source: Kafka[event.bifrost.log] -> Timestamp assigner -> Wrap events (1/1) (8a97789a9006d417d4b761f580c98fbe) [DEPLOYING].
2017-05-11 15:10:57,732 INFO org.apache.flink.runtime.taskmanager.Task - Source: Query Job Info (1/1) (4ba944135ae73a280d50c8aa67d6edf7) switched from CREATED to DEPLOYING.
2017-05-11 15:10:57,732 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Using user-defined state backend: RocksDB State Backend {isInitialized=false, configuredDbBasePaths=null, initializedDbBasePaths=null, checkpointStreamBackend=File State Backend @ hdfs://splat34.sto.midasplayer.com:8020/flink/checkpoints/event.bifrost.log}.
2017-05-11 15:10:57,732 INFO org.apache.flink.runtime.taskmanager.Task - Registering task at network: Source: Kafka[rbea.state.event.bifrost.log] -> Timestamp assigner -> Filter state updates -> Wrap events (1/1) (1f577dc5863e4902aa8deea71b71f605) [DEPLOYING].
2017-05-11 15:10:57,733 INFO org.apache.flink.runtime.taskmanager.Task - Creating FileSystem stream leak safety net for task Source: Query Job Info (1/1) (4ba944135ae73a280d50c8aa67d6edf7) [DEPLOYING]
2017-05-11 15:10:57,733 INFO org.apache.flink.runtime.taskmanager.Task - Source: Kafka[event.bifrost.log] -> Timestamp assigner -> Wrap events (1/1) (8a97789a9006d417d4b761f580c98fbe) switched from DEPLOYING to RUNNING.
2017-05-11 15:10:57,734 INFO org.apache.flink.runtime.taskmanager.Task - Loading JAR files for task Source: Query Job Info (1/1) (4ba944135ae73a280d50c8aa67d6edf7) [DEPLOYING].
2017-05-11 15:10:57,734 INFO org.apache.flink.runtime.taskmanager.Task - Source: Kafka[rbea.state.event.bifrost.log] -> Timestamp assigner -> Filter state updates -> Wrap events (1/1) (1f577dc5863e4902aa8deea71b71f605) switched from DEPLOYING to RUNNING.
2017-05-11 15:10:57,734 INFO org.apache.flink.yarn.YarnTaskManager - Received task Source: Kafka[rbeaDeploymentsplattest1] -> Max watermark -> Drop errors -> Filter for topic (1/1)
2017-05-11 15:10:57,734 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Using user-defined state backend: RocksDB State Backend {isInitialized=false, configuredDbBasePaths=null, initializedDbBasePaths=null, checkpointStreamBackend=File State Backend @ hdfs://splat34.sto.midasplayer.com:8020/flink/checkpoints/event.bifrost.log}.
2017-05-11 15:10:57,734 INFO org.apache.flink.runtime.taskmanager.Task - Registering task at network: Source: Query Job Info (1/1) (4ba944135ae73a280d50c8aa67d6edf7) [DEPLOYING].
2017-05-11 15:10:57,734 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Using user-defined state backend: RocksDB State Backend {isInitialized=false, configuredDbBasePaths=null, initializedDbBasePaths=null, checkpointStreamBackend=File State Backend @ hdfs://splat34.sto.midasplayer.com:8020/flink/checkpoints/event.bifrost.log}.
2017-05-11 15:10:57,734 INFO org.apache.flink.runtime.taskmanager.Task - Source: Kafka[rbeaDeploymentsplattest1] -> Max watermark -> Drop errors -> Filter for topic (1/1) (e5bf796d6c72b5552867db7b1cee9eec) switched from CREATED to DEPLOYING.
2017-05-11 15:10:57,735 INFO org.apache.flink.runtime.taskmanager.Task - Creating FileSystem stream leak safety net for task Source: Kafka[rbeaDeploymentsplattest1] -> Max watermark -> Drop errors -> Filter for topic (1/1) (e5bf796d6c72b5552867db7b1cee9eec) [DEPLOYING]
2017-05-11 15:10:57,735 INFO org.apache.flink.runtime.taskmanager.Task - Loading JAR files for task Source: Kafka[rbeaDeploymentsplattest1] -> Max watermark -> Drop errors -> Filter for topic (1/1) (e5bf796d6c72b5552867db7b1cee9eec) [DEPLOYING].
2017-05-11 15:10:57,735 INFO org.apache.flink.runtime.taskmanager.Task - Registering task at network: Source: Kafka[rbeaDeploymentsplattest1] -> Max watermark -> Drop errors -> Filter for topic (1/1) (e5bf796d6c72b5552867db7b1cee9eec) [DEPLOYING].
2017-05-11 15:10:57,736 INFO org.apache.flink.yarn.YarnTaskManager - Received task Create Fields and Ids -> Filter Errors and Notifications (1/1)
2017-05-11 15:10:57,736 INFO org.apache.flink.runtime.taskmanager.Task - Create Fields and Ids -> Filter Errors and Notifications (1/1) (412bf511ffe9c5fb24986fdb7946b7eb) switched from CREATED to DEPLOYING.
2017-05-11 15:10:57,736 INFO org.apache.flink.runtime.taskmanager.Task - Creating FileSystem stream leak safety net for task Create Fields and Ids -> Filter Errors and Notifications (1/1) (412bf511ffe9c5fb24986fdb7946b7eb) [DEPLOYING]
2017-05-11 15:10:57,736 INFO org.apache.flink.runtime.taskmanager.Task - Loading JAR files for task Create Fields and Ids -> Filter Errors and Notifications (1/1) (412bf511ffe9c5fb24986fdb7946b7eb) [DEPLOYING].
2017-05-11 15:10:57,736 INFO org.apache.flink.runtime.taskmanager.Task - Registering task at network: Create Fields and Ids -> Filter Errors and Notifications (1/1) (412bf511ffe9c5fb24986fdb7946b7eb) [DEPLOYING].
2017-05-11 15:10:57,737 INFO org.apache.flink.yarn.YarnTaskManager - Received task Execute processors -> (Filter processor info -> Filter Failures, Filter BEA) (1/1)
2017-05-11 15:10:57,737 INFO org.apache.flink.runtime.taskmanager.Task - Execute processors -> (Filter processor info -> Filter Failures, Filter BEA) (1/1) (bee8adcdb1e1015878d02f2bf7e25271) switched from CREATED to DEPLOYING.
2017-05-11 15:10:57,737 INFO org.apache.flink.runtime.taskmanager.Task - Creating FileSystem stream leak safety net for task Execute processors -> (Filter processor info -> Filter Failures, Filter BEA) (1/1) (bee8adcdb1e1015878d02f2bf7e25271) [DEPLOYING]
2017-05-11 15:10:57,737 INFO org.apache.flink.runtime.taskmanager.Task - Loading JAR files for task Execute processors -> (Filter processor info -> Filter Failures, Filter BEA) (1/1) (bee8adcdb1e1015878d02f2bf7e25271) [DEPLOYING].
2017-05-11 15:10:57,737 INFO org.apache.flink.yarn.YarnTaskManager - Received task Window aggregator (1/1)
2017-05-11 15:10:57,738 INFO org.apache.flink.runtime.taskmanager.Task - Registering task at network: Execute processors -> (Filter processor info -> Filter Failures, Filter BEA) (1/1) (bee8adcdb1e1015878d02f2bf7e25271) [DEPLOYING].
2017-05-11 15:10:57,738 INFO org.apache.flink.runtime.taskmanager.Task - Window aggregator (1/1) (2e3d5f6b29d630d83c4c87febfccbd70) switched from CREATED to DEPLOYING.
2017-05-11 15:10:57,738 INFO org.apache.flink.runtime.taskmanager.Task - Creating FileSystem stream leak safety net for task Window aggregator (1/1) (2e3d5f6b29d630d83c4c87febfccbd70) [DEPLOYING]
2017-05-11 15:10:57,738 INFO org.apache.flink.runtime.taskmanager.Task - Loading JAR files for task Window aggregator (1/1) (2e3d5f6b29d630d83c4c87febfccbd70) [DEPLOYING].
2017-05-11 15:10:57,738 INFO org.apache.flink.runtime.taskmanager.Task - Registering task at network: Window aggregator (1/1) (2e3d5f6b29d630d83c4c87febfccbd70) [DEPLOYING].
2017-05-11 15:10:57,738 INFO org.apache.flink.yarn.YarnTaskManager - Received task Keep last -> NoOp -> Create Aggrigato events (1/1)
2017-05-11 15:10:57,738 INFO org.apache.flink.runtime.taskmanager.Task - Keep last -> NoOp -> Create Aggrigato events (1/1) (d05593668ad3beb74efb47eb46df749a) switched from CREATED to DEPLOYING.
2017-05-11 15:10:57,739 INFO org.apache.flink.runtime.taskmanager.Task - Creating FileSystem stream leak safety net for task Keep last -> NoOp -> Create Aggrigato events (1/1) (d05593668ad3beb74efb47eb46df749a) [DEPLOYING]
2017-05-11 15:10:57,739 INFO org.apache.flink.runtime.taskmanager.Task - Loading JAR files for task Keep last -> NoOp -> Create Aggrigato events (1/1) (d05593668ad3beb74efb47eb46df749a) [DEPLOYING].
2017-05-11 15:10:57,739 INFO org.apache.flink.yarn.YarnTaskManager - Received task MySql output info -> Filter (1/1)
2017-05-11 15:10:57,739 INFO org.apache.flink.runtime.taskmanager.Task - MySql output info -> Filter (1/1) (6d6200e77bcbc8eadd6a7e9fb359c7de) switched from CREATED to DEPLOYING.
2017-05-11 15:10:57,739 INFO org.apache.flink.runtime.taskmanager.Task - Creating FileSystem stream leak safety net for task MySql output info -> Filter (1/1) (6d6200e77bcbc8eadd6a7e9fb359c7de) [DEPLOYING]
2017-05-11 15:10:57,739 INFO org.apache.flink.runtime.taskmanager.Task - Loading JAR files for task MySql output info -> Filter (1/1) (6d6200e77bcbc8eadd6a7e9fb359c7de) [DEPLOYING].
2017-05-11 15:10:57,740 INFO org.apache.flink.runtime.taskmanager.Task - Registering task at network: MySql output info -> Filter (1/1) (6d6200e77bcbc8eadd6a7e9fb359c7de) [DEPLOYING].
2017-05-11 15:10:57,740 INFO org.apache.flink.yarn.YarnTaskManager - Received task To DeploymentInfo (1/1)
2017-05-11 15:10:57,740 INFO org.apache.flink.runtime.taskmanager.Task - To DeploymentInfo (1/1) (085b77eabcd6fe3f7c6afe3ac4ccf732) switched from CREATED to DEPLOYING.
2017-05-11 15:10:57,740 INFO org.apache.flink.runtime.taskmanager.Task - Creating FileSystem stream leak safety net for task To DeploymentInfo (1/1) (085b77eabcd6fe3f7c6afe3ac4ccf732) [DEPLOYING]
2017-05-11 15:10:57,740 INFO org.apache.flink.runtime.taskmanager.Task - Loading JAR files for task To DeploymentInfo (1/1) (085b77eabcd6fe3f7c6afe3ac4ccf732) [DEPLOYING].
2017-05-11 15:10:57,740 INFO org.apache.flink.yarn.YarnTaskManager - Received task IterationSink-15 (1/1)
2017-05-11 15:10:57,741 INFO org.apache.flink.runtime.taskmanager.Task - Registering task at network: To DeploymentInfo (1/1) (085b77eabcd6fe3f7c6afe3ac4ccf732) [DEPLOYING].
2017-05-11 15:10:57,741 INFO org.apache.flink.streaming.runtime.tasks.StreamIterationHead - Iteration head IterationSource-15 (1/1) added feedback queue under 065c0937d56f3e8da025e015d3ab332b-broker-15-0
2017-05-11 15:10:57,742 INFO org.apache.flink.runtime.taskmanager.Task - IterationSink-15 (1/1) (bd2ccbc322a1a9c266ea0cd9cbc7c693) switched from CREATED to DEPLOYING.
2017-05-11 15:10:57,742 INFO org.apache.flink.runtime.taskmanager.Task - Creating FileSystem stream leak safety net for task IterationSink-15 (1/1) (bd2ccbc322a1a9c266ea0cd9cbc7c693) [DEPLOYING]
2017-05-11 15:10:57,742 INFO org.apache.flink.runtime.taskmanager.Task - Loading JAR files for task IterationSink-15 (1/1) (bd2ccbc322a1a9c266ea0cd9cbc7c693) [DEPLOYING].
2017-05-11 15:10:57,744 INFO org.apache.flink.runtime.taskmanager.Task - Registering task at network: Keep last -> NoOp -> Create Aggrigato events (1/1) (d05593668ad3beb74efb47eb46df749a) [DEPLOYING].
2017-05-11 15:10:57,745 INFO org.apache.flink.runtime.taskmanager.Task - To DeploymentInfo (1/1) (085b77eabcd6fe3f7c6afe3ac4ccf732) switched from DEPLOYING to RUNNING.
2017-05-11 15:10:57,747 INFO org.apache.flink.runtime.taskmanager.Task - Window aggregator (1/1) (2e3d5f6b29d630d83c4c87febfccbd70) switched from DEPLOYING to RUNNING.
2017-05-11 15:10:57,747 INFO org.apache.flink.runtime.taskmanager.Task - Source: Query Job Info (1/1) (4ba944135ae73a280d50c8aa67d6edf7) switched from DEPLOYING to RUNNING.
2017-05-11 15:10:57,747 INFO org.apache.flink.runtime.taskmanager.Task - MySql output info -> Filter (1/1) (6d6200e77bcbc8eadd6a7e9fb359c7de) switched from DEPLOYING to RUNNING.
2017-05-11 15:10:57,750 INFO org.apache.flink.runtime.taskmanager.Task - Execute processors -> (Filter processor info -> Filter Failures, Filter BEA) (1/1) (bee8adcdb1e1015878d02f2bf7e25271) switched from DEPLOYING to RUNNING.
2017-05-11 15:10:57,750 INFO org.apache.flink.runtime.taskmanager.Task - Create Fields and Ids -> Filter Errors and Notifications (1/1) (412bf511ffe9c5fb24986fdb7946b7eb) switched from DEPLOYING to RUNNING.
2017-05-11 15:10:57,750 INFO org.apache.flink.runtime.taskmanager.Task - Registering task at network: IterationSink-15 (1/1) (bd2ccbc322a1a9c266ea0cd9cbc7c693) [DEPLOYING].
2017-05-11 15:10:57,753 INFO org.apache.flink.runtime.taskmanager.Task - Source: Kafka[rbeaDeploymentsplattest1] -> Max watermark -> Drop errors -> Filter for topic (1/1) (e5bf796d6c72b5552867db7b1cee9eec) switched from DEPLOYING to RUNNING.
2017-05-11 15:10:57,753 INFO org.apache.flink.runtime.taskmanager.Task - Keep last -> NoOp -> Create Aggrigato events (1/1) (d05593668ad3beb74efb47eb46df749a) switched from DEPLOYING to RUNNING.
2017-05-11 15:10:57,753 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Using user-defined state backend: RocksDB State Backend {isInitialized=false, configuredDbBasePaths=null, initializedDbBasePaths=null, checkpointStreamBackend=File State Backend @ hdfs://splat34.sto.midasplayer.com:8020/flink/checkpoints/event.bifrost.log}.
2017-05-11 15:10:57,753 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Using user-defined state backend: RocksDB State Backend {isInitialized=false, configuredDbBasePaths=null, initializedDbBasePaths=null, checkpointStreamBackend=File State Backend @ hdfs://splat34.sto.midasplayer.com:8020/flink/checkpoints/event.bifrost.log}.
2017-05-11 15:10:57,753 INFO org.apache.flink.runtime.taskmanager.Task - IterationSink-15 (1/1) (bd2ccbc322a1a9c266ea0cd9cbc7c693) switched from DEPLOYING to RUNNING.
2017-05-11 15:10:57,754 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Using user-defined state backend: RocksDB State Backend {isInitialized=false, configuredDbBasePaths=null, initializedDbBasePaths=null, checkpointStreamBackend=File State Backend @ hdfs://splat34.sto.midasplayer.com:8020/flink/checkpoints/event.bifrost.log}.
2017-05-11 15:10:57,754 INFO org.apache.flink.yarn.YarnTaskManager - Received task Create Job information (1/1)
2017-05-11 15:10:57,754 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Using user-defined state backend: RocksDB State Backend {isInitialized=false, configuredDbBasePaths=null, initializedDbBasePaths=null, checkpointStreamBackend=File State Backend @ hdfs://splat34.sto.midasplayer.com:8020/flink/checkpoints/event.bifrost.log}.
2017-05-11 15:10:57,753 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Using user-defined state backend: RocksDB State Backend {isInitialized=false, configuredDbBasePaths=null, initializedDbBasePaths=null, checkpointStreamBackend=File State Backend @ hdfs://splat34.sto.midasplayer.com:8020/flink/checkpoints/event.bifrost.log}.
2017-05-11 15:10:57,753 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Using user-defined state backend: RocksDB State Backend {isInitialized=false, configuredDbBasePaths=null, initializedDbBasePaths=null, checkpointStreamBackend=File State Backend @ hdfs://splat34.sto.midasplayer.com:8020/flink/checkpoints/event.bifrost.log}.
2017-05-11 15:10:57,754 INFO org.apache.flink.runtime.taskmanager.Task - Create Job information (1/1) (476c70c231139063b3bc1b7314d8ce49) switched from CREATED to DEPLOYING.
2017-05-11 15:10:57,754 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Using user-defined state backend: RocksDB State Backend {isInitialized=false, configuredDbBasePaths=null, initializedDbBasePaths=null, checkpointStreamBackend=File State Backend @ hdfs://splat34.sto.midasplayer.com:8020/flink/checkpoints/event.bifrost.log}.
2017-05-11 15:10:57,754 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Using user-defined state backend: RocksDB State Backend {isInitialized=false, configuredDbBasePaths=null, initializedDbBasePaths=null, checkpointStreamBackend=File State Backend @ hdfs://splat34.sto.midasplayer.com:8020/flink/checkpoints/event.bifrost.log}.
2017-05-11 15:10:57,755 INFO org.apache.flink.runtime.taskmanager.Task - Creating FileSystem stream leak safety net for task Create Job information (1/1) (476c70c231139063b3bc1b7314d8ce49) [DEPLOYING]
2017-05-11 15:10:57,755 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Using user-defined state backend: RocksDB State Backend {isInitialized=false, configuredDbBasePaths=null, initializedDbBasePaths=null, checkpointStreamBackend=File State Backend @ hdfs://splat34.sto.midasplayer.com:8020/flink/checkpoints/event.bifrost.log}.
2017-05-11 15:10:57,755 INFO org.apache.flink.runtime.taskmanager.Task - Loading JAR files for task Create Job information (1/1) (476c70c231139063b3bc1b7314d8ce49) [DEPLOYING].
2017-05-11 15:10:57,755 INFO org.apache.flink.streaming.runtime.tasks.StreamIterationTail - Iteration tail IterationSink-15 (1/1) trying to acquire feedback queue under 065c0937d56f3e8da025e015d3ab332b-broker-15-0
2017-05-11 15:10:57,755 INFO org.apache.flink.streaming.runtime.tasks.StreamIterationTail - Iteration tail IterationSink-15 (1/1) acquired feedback queue 065c0937d56f3e8da025e015d3ab332b-broker-15-0
2017-05-11 15:10:57,755 INFO org.apache.flink.runtime.taskmanager.Task - Registering task at network: Create Job information (1/1) (476c70c231139063b3bc1b7314d8ce49) [DEPLOYING].
2017-05-11 15:10:57,756 INFO org.apache.flink.runtime.taskmanager.Task - Create Job information (1/1) (476c70c231139063b3bc1b7314d8ce49) switched from DEPLOYING to RUNNING.
2017-05-11 15:10:57,756 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Using user-defined state backend: RocksDB State Backend {isInitialized=false, configuredDbBasePaths=null, initializedDbBasePaths=null, checkpointStreamBackend=File State Backend @ hdfs://splat34.sto.midasplayer.com:8020/flink/checkpoints/event.bifrost.log}.
2017-05-11 15:10:57,765 INFO org.apache.flink.contrib.streaming.state.RocksDBKeyedStateBackend - Initializing RocksDB keyed state backend from snapshot.
2017-05-11 15:10:57,768 INFO org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase - No restore state for FlinkKafkaConsumer.
2017-05-11 15:10:57,768 INFO org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase - Consumer subtask 0 will commit offsets back to Kafka on completed checkpoints.
2017-05-11 15:10:57,768 INFO org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase - Trying to get topic metadata from broker kafka10.sto.midasplayer.com:9092 in try 0/3
2017-05-11 15:10:57,768 INFO org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase - No restore state for FlinkKafkaConsumer.
2017-05-11 15:10:57,769 INFO org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase - Consumer subtask 0 will commit offsets back to Kafka on completed checkpoints.
2017-05-11 15:10:57,769 INFO org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase - Trying to get topic metadata from broker kafka10.sto.midasplayer.com:9092 in try 0/3
2017-05-11 15:10:57,770 INFO org.apache.flink.contrib.streaming.state.RocksDBKeyedStateBackend - Initializing RocksDB keyed state backend from snapshot.
2017-05-11 15:10:57,771 INFO org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase - Consumer is going to read the following topics (with number of partitions): rbea.state.event.bifrost.log (16),
2017-05-11 15:10:57,771 INFO org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase - Consumer is going to read the following topics (with number of partitions): event.bifrost.log (16),
2017-05-11 15:10:57,771 INFO org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase - Consumer subtask 0 will start reading the following 16 partitions from the committed group offsets in Kafka: [KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=9}, KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=8}, KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=11}, KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=10}, KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=13}, KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=12}, KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=15}, KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=14}, KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=1}, KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=0}, KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=3}, KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=2}, KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=5}, KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=4}, KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=7}, KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=6}]
2017-05-11 15:10:57,771 INFO org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase - Consumer subtask 0 will start reading the following 16 partitions from the committed group offsets in Kafka: [KafkaTopicPartition{topic='event.bifrost.log', partition=0}, KafkaTopicPartition{topic='event.bifrost.log', partition=1}, KafkaTopicPartition{topic='event.bifrost.log', partition=2}, KafkaTopicPartition{topic='event.bifrost.log', partition=3}, KafkaTopicPartition{topic='event.bifrost.log', partition=4}, KafkaTopicPartition{topic='event.bifrost.log', partition=5}, KafkaTopicPartition{topic='event.bifrost.log', partition=6}, KafkaTopicPartition{topic='event.bifrost.log', partition=7}, KafkaTopicPartition{topic='event.bifrost.log', partition=8}, KafkaTopicPartition{topic='event.bifrost.log', partition=9}, KafkaTopicPartition{topic='event.bifrost.log', partition=10}, KafkaTopicPartition{topic='event.bifrost.log', partition=11}, KafkaTopicPartition{topic='event.bifrost.log', partition=12}, KafkaTopicPartition{topic='event.bifrost.log', partition=13}, KafkaTopicPartition{topic='event.bifrost.log', partition=14}, KafkaTopicPartition{topic='event.bifrost.log', partition=15}]
2017-05-11 15:10:57,772 INFO org.apache.flink.shaded.org.apache.curator.framework.imps.CuratorFrameworkImpl - Starting
2017-05-11 15:10:57,772 INFO org.apache.flink.shaded.org.apache.curator.framework.imps.CuratorFrameworkImpl - Starting
2017-05-11 15:10:57,772 INFO org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase - No restore state for FlinkKafkaConsumer.
2017-05-11 15:10:57,773 INFO org.apache.flink.contrib.streaming.state.RocksDBKeyedStateBackend - Initializing RocksDB keyed state backend from snapshot.
2017-05-11 15:10:57,773 INFO org.apache.zookeeper.ZooKeeper - Initiating client connection, connectString=zk04.sto.midasplayer.com:2181,zk05.sto.midasplayer.com:2181,zk06.sto.midasplayer.com:2181/kafka sessionTimeout=60000 watcher=org.apache.flink.shaded.org.apache.curator.ConnectionState@574ffb13
2017-05-11 15:10:57,773 INFO org.apache.zookeeper.ZooKeeper - Initiating client connection, connectString=zk04.sto.midasplayer.com:2181,zk05.sto.midasplayer.com:2181,zk06.sto.midasplayer.com:2181/kafka sessionTimeout=60000 watcher=org.apache.flink.shaded.org.apache.curator.ConnectionState@6801ba39
2017-05-11 15:10:57,773 INFO org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase - Consumer subtask 0 will commit offsets back to Kafka on completed checkpoints.
2017-05-11 15:10:57,773 INFO org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase - Trying to get topic metadata from broker kafka10.sto.midasplayer.com:9092 in try 0/3
2017-05-11 15:10:57,775 WARN org.apache.zookeeper.ClientCnxn - SASL configuration failed: javax.security.auth.login.LoginException: No JAAS configuration section named 'Client' was found in specified JAAS configuration file: '/tmp/jaas-1580594546717889748.conf'. Will continue connection to Zookeeper server without SASL authentication, if Zookeeper server allows it.
2017-05-11 15:10:57,775 INFO org.apache.zookeeper.ClientCnxn - Opening socket connection to server zk04.sto.midasplayer.com/172.26.82.242:2181
2017-05-11 15:10:57,775 ERROR org.apache.flink.shaded.org.apache.curator.ConnectionState - Authentication failed
2017-05-11 15:10:57,776 WARN org.apache.zookeeper.ClientCnxn - SASL configuration failed: javax.security.auth.login.LoginException: No JAAS configuration section named 'Client' was found in specified JAAS configuration file: '/tmp/jaas-1580594546717889748.conf'. Will continue connection to Zookeeper server without SASL authentication, if Zookeeper server allows it.
2017-05-11 15:10:57,775 INFO org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase - Consumer is going to read the following topics (with number of partitions): rbeaDeploymentsplattest1 (16),
2017-05-11 15:10:57,776 INFO org.apache.zookeeper.ClientCnxn - Socket connection established to zk04.sto.midasplayer.com/172.26.82.242:2181, initiating session
2017-05-11 15:10:57,776 ERROR org.apache.flink.shaded.org.apache.curator.ConnectionState - Authentication failed
2017-05-11 15:10:57,776 INFO org.apache.zookeeper.ClientCnxn - Opening socket connection to server zk05.sto.midasplayer.com/172.26.82.243:2181
2017-05-11 15:10:57,776 INFO org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase - Consumer subtask 0 will start reading the following 16 partitions from the committed group offsets in Kafka: [KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=1}, KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=0}, KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=5}, KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=4}, KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=3}, KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=2}, KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=9}, KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=8}, KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=7}, KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=6}, KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=13}, KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=12}, KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=11}, KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=10}, KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=15}, KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=14}]
2017-05-11 15:10:57,777 INFO org.apache.zookeeper.ClientCnxn - Socket connection established to zk05.sto.midasplayer.com/172.26.82.243:2181, initiating session
2017-05-11 15:10:57,778 INFO org.apache.flink.shaded.org.apache.curator.framework.imps.CuratorFrameworkImpl - Starting
2017-05-11 15:10:57,778 INFO org.apache.zookeeper.ZooKeeper - Initiating client connection, connectString=zk04.sto.midasplayer.com:2181,zk05.sto.midasplayer.com:2181,zk06.sto.midasplayer.com:2181/kafka sessionTimeout=60000 watcher=org.apache.flink.shaded.org.apache.curator.ConnectionState@26c3e326
2017-05-11 15:10:57,779 INFO org.apache.zookeeper.ClientCnxn - Session establishment complete on server zk04.sto.midasplayer.com/172.26.82.242:2181, sessionid = 0x15aad587329de59, negotiated timeout = 40000
2017-05-11 15:10:57,779 WARN org.apache.zookeeper.ClientCnxn - SASL configuration failed: javax.security.auth.login.LoginException: No JAAS configuration section named 'Client' was found in specified JAAS configuration file: '/tmp/jaas-1580594546717889748.conf'. Will continue connection to Zookeeper server without SASL authentication, if Zookeeper server allows it.
2017-05-11 15:10:57,779 INFO org.apache.zookeeper.ClientCnxn - Session establishment complete on server zk05.sto.midasplayer.com/172.26.82.243:2181, sessionid = 0x25aad587314e79d, negotiated timeout = 40000
2017-05-11 15:10:57,779 INFO org.apache.zookeeper.ClientCnxn - Opening socket connection to server zk06.sto.midasplayer.com/172.26.82.250:2181
2017-05-11 15:10:57,779 ERROR org.apache.flink.shaded.org.apache.curator.ConnectionState - Authentication failed
2017-05-11 15:10:57,779 INFO org.apache.flink.shaded.org.apache.curator.framework.state.ConnectionStateManager - State change: CONNECTED
2017-05-11 15:10:57,779 INFO org.apache.flink.shaded.org.apache.curator.framework.state.ConnectionStateManager - State change: CONNECTED
2017-05-11 15:10:57,780 INFO org.apache.zookeeper.ClientCnxn - Socket connection established to zk06.sto.midasplayer.com/172.26.82.250:2181, initiating session
2017-05-11 15:10:57,782 INFO org.apache.zookeeper.ClientCnxn - Session establishment complete on server zk06.sto.midasplayer.com/172.26.82.250:2181, sessionid = 0x35aad58756ea3a5, negotiated timeout = 40000
2017-05-11 15:10:57,782 INFO org.apache.flink.shaded.org.apache.curator.framework.state.ConnectionStateManager - State change: CONNECTED
2017-05-11 15:10:57,783 WARN org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - No group offset can be found for partition Partition: KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=9}, KafkaPartitionHandle=[rbea.state.event.bifrost.log,9], offset=-915623761773 in Zookeeper; resetting starting offset to 'auto.offset.reset'
2017-05-11 15:10:57,786 WARN org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - No group offset can be found for partition Partition: KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=8}, KafkaPartitionHandle=[rbea.state.event.bifrost.log,8], offset=-915623761773 in Zookeeper; resetting starting offset to 'auto.offset.reset'
2017-05-11 15:10:57,788 WARN org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - No group offset can be found for partition Partition: KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=1}, KafkaPartitionHandle=[rbeaDeploymentsplattest1,1], offset=-915623761773 in Zookeeper; resetting starting offset to 'auto.offset.reset'
2017-05-11 15:10:57,788 WARN org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - No group offset can be found for partition Partition: KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=11}, KafkaPartitionHandle=[rbea.state.event.bifrost.log,11], offset=-915623761773 in Zookeeper; resetting starting offset to 'auto.offset.reset'
2017-05-11 15:10:57,789 WARN org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - No group offset can be found for partition Partition: KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=10}, KafkaPartitionHandle=[rbea.state.event.bifrost.log,10], offset=-915623761773 in Zookeeper; resetting starting offset to 'auto.offset.reset'
2017-05-11 15:10:57,791 WARN org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - No group offset can be found for partition Partition: KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=13}, KafkaPartitionHandle=[rbea.state.event.bifrost.log,13], offset=-915623761773 in Zookeeper; resetting starting offset to 'auto.offset.reset'
2017-05-11 15:10:57,793 WARN org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - No group offset can be found for partition Partition: KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=0}, KafkaPartitionHandle=[rbeaDeploymentsplattest1,0], offset=-915623761773 in Zookeeper; resetting starting offset to 'auto.offset.reset'
2017-05-11 15:10:57,794 WARN org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - No group offset can be found for partition Partition: KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=14}, KafkaPartitionHandle=[rbea.state.event.bifrost.log,14], offset=-915623761773 in Zookeeper; resetting starting offset to 'auto.offset.reset'
2017-05-11 15:10:57,796 WARN org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - No group offset can be found for partition Partition: KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=1}, KafkaPartitionHandle=[rbea.state.event.bifrost.log,1], offset=-915623761773 in Zookeeper; resetting starting offset to 'auto.offset.reset'
2017-05-11 15:10:57,797 WARN org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - No group offset can be found for partition Partition: KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=5}, KafkaPartitionHandle=[rbeaDeploymentsplattest1,5], offset=-915623761773 in Zookeeper; resetting starting offset to 'auto.offset.reset'
2017-05-11 15:10:57,801 WARN org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - No group offset can be found for partition Partition: KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=0}, KafkaPartitionHandle=[rbea.state.event.bifrost.log,0], offset=-915623761773 in Zookeeper; resetting starting offset to 'auto.offset.reset'
2017-05-11 15:10:57,802 WARN org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - No group offset can be found for partition Partition: KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=4}, KafkaPartitionHandle=[rbeaDeploymentsplattest1,4], offset=-915623761773 in Zookeeper; resetting starting offset to 'auto.offset.reset'
2017-05-11 15:10:57,803 WARN org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - No group offset can be found for partition Partition: KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=3}, KafkaPartitionHandle=[rbea.state.event.bifrost.log,3], offset=-915623761773 in Zookeeper; resetting starting offset to 'auto.offset.reset'
2017-05-11 15:10:57,804 WARN org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - No group offset can be found for partition Partition: KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=2}, KafkaPartitionHandle=[rbea.state.event.bifrost.log,2], offset=-915623761773 in Zookeeper; resetting starting offset to 'auto.offset.reset'
2017-05-11 15:10:57,805 INFO org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - Assigning 16 partitions to broker threads
2017-05-11 15:10:57,805 INFO  org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher  - Refreshing leader information for partitions [Partition: KafkaTopicPartition{topic='event.bifrost.log', partition=0}, KafkaPartitionHandle=[event.bifrost.log,0], offset=17125845, Partition: KafkaTopicPartition{topic='event.bifrost.log', partition=1}, KafkaPartitionHandle=[event.bifrost.log,1], offset=2558664, Partition: KafkaTopicPartition{topic='event.bifrost.log', partition=2}, KafkaPartitionHandle=[event.bifrost.log,2], offset=2556646, Partition: KafkaTopicPartition{topic='event.bifrost.log', partition=3}, KafkaPartitionHandle=[event.bifrost.log,3], offset=2642044, Partition: KafkaTopicPartition{topic='event.bifrost.log', partition=4}, KafkaPartitionHandle=[event.bifrost.log,4], offset=2586970, Partition: KafkaTopicPartition{topic='event.bifrost.log', partition=5}, KafkaPartitionHandle=[event.bifrost.log,5], offset=2477967, Partition: KafkaTopicPartition{topic='event.bifrost.log', partition=6}, KafkaPartitionHandle=[event.bifrost.log,6], offset=2601495, Partition: KafkaTopicPartition{topic='event.bifrost.log', partition=7}, KafkaPartitionHandle=[event.bifrost.log,7], offset=2375819, Partition: KafkaTopicPartition{topic='event.bifrost.log', partition=8}, KafkaPartitionHandle=[event.bifrost.log,8], offset=2622275, Partition: KafkaTopicPartition{topic='event.bifrost.log', partition=9}, KafkaPartitionHandle=[event.bifrost.log,9], offset=2608243, Partition: KafkaTopicPartition{topic='event.bifrost.log', partition=10}, KafkaPartitionHandle=[event.bifrost.log,10], offset=2524631, Partition: KafkaTopicPartition{topic='event.bifrost.log', partition=11}, KafkaPartitionHandle=[event.bifrost.log,11], offset=2488716, Partition: KafkaTopicPartition{topic='event.bifrost.log', partition=12}, KafkaPartitionHandle=[event.bifrost.log,12], offset=2313230, Partition: KafkaTopicPartition{topic='event.bifrost.log', partition=13}, KafkaPartitionHandle=[event.bifrost.log,13], offset=2714195, Partition: KafkaTopicPartition{topic='event.bifrost.log', partition=14}, KafkaPartitionHandle=[event.bifrost.log,14], offset=2672454, Partition: KafkaTopicPartition{topic='event.bifrost.log', partition=15}, KafkaPartitionHandle=[event.bifrost.log,15], offset=2704607]
2017-05-11 15:10:57,806 WARN  org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher  - No group offset can be found for partition Partition: KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=5}, KafkaPartitionHandle=[rbea.state.event.bifrost.log,5], offset=-915623761773 in Zookeeper; resetting starting offset to 'auto.offset.reset'
2017-05-11 15:10:57,806 INFO  org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase  - Trying to get topic metadata from broker kafka10.sto.midasplayer.com:9092 in try 0/3
2017-05-11 15:10:57,807 WARN  org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher  - No group offset can be found for partition Partition: KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=3}, KafkaPartitionHandle=[rbeaDeploymentsplattest1,3], offset=-915623761773 in Zookeeper; resetting starting offset to 'auto.offset.reset'
2017-05-11 15:10:57,808 WARN  org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher  - No group offset can be found for partition Partition: KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=4}, KafkaPartitionHandle=[rbea.state.event.bifrost.log,4], offset=-915623761773 in Zookeeper; resetting starting offset to 'auto.offset.reset'
2017-05-11 15:10:57,809 INFO  org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher  - Starting thread SimpleConsumer - Source: Kafka[event.bifrost.log] -> Timestamp assigner -> Wrap events - broker-20 (kafka20.sto.midasplayer.com:9092)
2017-05-11 15:10:57,809 INFO  org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread  - Starting to fetch from [Partition: KafkaTopicPartition{topic='event.bifrost.log', partition=8}, KafkaPartitionHandle=[event.bifrost.log,8], offset=2622275]
2017-05-11 15:10:57,809 INFO  org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher  - Starting thread SimpleConsumer - Source: Kafka[event.bifrost.log] -> Timestamp assigner -> Wrap events - broker-18 (kafka18.sto.midasplayer.com:9092)
2017-05-11 15:10:57,809 INFO  org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread  - Starting to fetch from [Partition: KafkaTopicPartition{topic='event.bifrost.log', partition=6}, KafkaPartitionHandle=[event.bifrost.log,6], offset=2601495]
2017-05-11 15:10:57,810 WARN  org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher  - No group offset can be found for partition Partition: KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=7}, KafkaPartitionHandle=[rbea.state.event.bifrost.log,7], offset=-915623761773 in Zookeeper; resetting starting offset to 'auto.offset.reset'
2017-05-11 15:10:57,810 INFO  org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher  - Starting thread SimpleConsumer - Source: Kafka[event.bifrost.log] -> Timestamp assigner -> Wrap events - broker-7 (kafka07.sto.midasplayer.com:9092)
2017-05-11 15:10:57,810 INFO  org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread  - Starting to fetch from [Partition: KafkaTopicPartition{topic='event.bifrost.log', partition=14}, KafkaPartitionHandle=[event.bifrost.log,14], offset=2672454]
2017-05-11 15:10:57,810 INFO  org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher  - Starting thread SimpleConsumer - Source: Kafka[event.bifrost.log] -> Timestamp assigner -> Wrap events - broker-16 (kafka16.sto.midasplayer.com:9092)
2017-05-11 15:10:57,811 INFO  org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread  - Starting to fetch from [Partition: KafkaTopicPartition{topic='event.bifrost.log', partition=4}, KafkaPartitionHandle=[event.bifrost.log,4], offset=2586970]
2017-05-11 15:10:57,811 INFO  org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread  - Starting to consume 1 partitions with consumer thread SimpleConsumer - Source: Kafka[event.bifrost.log] -> Timestamp assigner -> Wrap events - broker-20 (kafka20.sto.midasplayer.com:9092)
2017-05-11 15:10:57,811 INFO  org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher  - Starting thread SimpleConsumer - Source: Kafka[event.bifrost.log] -> Timestamp assigner -> Wrap events - broker-22 (kafka22.sto.midasplayer.com:9092)
2017-05-11 15:10:57,811 INFO  org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread  - Starting to fetch from [Partition: KafkaTopicPartition{topic='event.bifrost.log', partition=10}, KafkaPartitionHandle=[event.bifrost.log,10], offset=2524631]
2017-05-11 15:10:57,812 INFO  org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher  - Starting thread SimpleConsumer - Source: Kafka[event.bifrost.log] -> Timestamp assigner -> Wrap events - broker-14 (kafka14.sto.midasplayer.com:9092)
2017-05-11 15:10:57,812 INFO  org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread  - Starting to consume 1 partitions with consumer thread SimpleConsumer - Source: Kafka[event.bifrost.log] -> Timestamp assigner -> Wrap events - broker-7 (kafka07.sto.midasplayer.com:9092)
2017-05-11 15:10:57,812 INFO  org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread  - Starting to fetch from [Partition: KafkaTopicPartition{topic='event.bifrost.log', partition=2}, KafkaPartitionHandle=[event.bifrost.log,2], offset=2556646]
2017-05-11 15:10:57,812 INFO  org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher  - Starting thread SimpleConsumer - Source: Kafka[event.bifrost.log] -> Timestamp assigner -> Wrap events - broker-12 (kafka12.sto.midasplayer.com:9092)
2017-05-11 15:10:57,813 WARN  org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher  - No group offset can be found for partition Partition: KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=2}, KafkaPartitionHandle=[rbeaDeploymentsplattest1,2], offset=-915623761773 in Zookeeper; resetting starting offset to 'auto.offset.reset'
2017-05-11 15:10:57,813 INFO  org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread  - Starting to fetch from [Partition: KafkaTopicPartition{topic='event.bifrost.log', partition=0}, KafkaPartitionHandle=[event.bifrost.log,0], offset=17125845]
2017-05-11 15:10:57,813 INFO  org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread  - Starting to consume 1 partitions with consumer thread SimpleConsumer - Source: Kafka[event.bifrost.log] -> Timestamp assigner -> Wrap events - broker-16 (kafka16.sto.midasplayer.com:9092)
2017-05-11 15:10:57,813 INFO  org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher  - Starting thread SimpleConsumer - Source: Kafka[event.bifrost.log] -> Timestamp assigner -> Wrap events - broker-24 (kafka24.sto.midasplayer.com:9092)
2017-05-11 15:10:57,813 INFO  org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread  - Starting to consume 1 partitions with consumer thread SimpleConsumer - Source: Kafka[event.bifrost.log] -> Timestamp assigner -> Wrap events - broker-22 (kafka22.sto.midasplayer.com:9092)
2017-05-11 15:10:57,813 INFO  org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread  - Starting to fetch from [Partition: KafkaTopicPartition{topic='event.bifrost.log', partition=12}, KafkaPartitionHandle=[event.bifrost.log,12], offset=2313230]
2017-05-11 15:10:57,813 WARN  org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher  - No group offset can be found for partition Partition: KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=6}, KafkaPartitionHandle=[rbea.state.event.bifrost.log,6], offset=-915623761773 in Zookeeper; resetting starting offset to 'auto.offset.reset'
2017-05-11 15:10:57,814 INFO  org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher  - Assigning 16 partitions to broker threads
2017-05-11 15:10:57,814 INFO  org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher  - Starting thread SimpleConsumer - Source: Kafka[event.bifrost.log] -> Timestamp assigner -> Wrap events - broker-19 (kafka19.sto.midasplayer.com:9092)
2017-05-11 15:10:57,814 INFO  org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread  - Starting to fetch from [Partition: KafkaTopicPartition{topic='event.bifrost.log', partition=7}, KafkaPartitionHandle=[event.bifrost.log,7], offset=2375819]
2017-05-11 15:10:57,814 INFO  org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher  - Starting thread SimpleConsumer - Source: Kafka[event.bifrost.log] -> Timestamp assigner -> Wrap events - broker-17 (kafka17.sto.midasplayer.com:9092)
2017-05-11 15:10:57,814 INFO  org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread  - Starting to consume 1 partitions with consumer thread SimpleConsumer - Source: Kafka[event.bifrost.log] -> Timestamp assigner -> Wrap events - broker-12 (kafka12.sto.midasplayer.com:9092)
2017-05-11 15:10:57,814 INFO  org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread  - Starting to consume 1 partitions with consumer thread SimpleConsumer - Source: Kafka[event.bifrost.log] -> Timestamp assigner -> Wrap events - broker-14 (kafka14.sto.midasplayer.com:9092)
2017-05-11 15:10:57,814 INFO  org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread  - Starting to fetch from [Partition: KafkaTopicPartition{topic='event.bifrost.log', partition=5}, KafkaPartitionHandle=[event.bifrost.log,5], offset=2477967]
2017-05-11 15:10:57,815 INFO  org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher  - Starting thread SimpleConsumer - Source: Kafka[event.bifrost.log] -> Timestamp assigner -> Wrap events - broker-23 (kafka23.sto.midasplayer.com:9092)
2017-05-11 15:10:57,815 INFO  org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread  - Starting to fetch from [Partition: KafkaTopicPartition{topic='event.bifrost.log', partition=11}, KafkaPartitionHandle=[event.bifrost.log,11], offset=2488716]
2017-05-11 15:10:57,815 INFO  org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread  - Starting to consume 1 partitions with consumer thread SimpleConsumer - Source: Kafka[event.bifrost.log] -> Timestamp assigner -> Wrap events - broker-24 (kafka24.sto.midasplayer.com:9092)
2017-05-11 15:10:57,815 INFO  org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher  - Starting thread SimpleConsumer - Source: Kafka[event.bifrost.log] -> Timestamp assigner -> Wrap events - broker-8 (kafka08.sto.midasplayer.com:9092)
2017-05-11 15:10:57,815 INFO  org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher  - Refreshing leader information for partitions [Partition: KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=9}, KafkaPartitionHandle=[rbea.state.event.bifrost.log,9], offset=-1, Partition: KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=8}, KafkaPartitionHandle=[rbea.state.event.bifrost.log,8], offset=-1, Partition: KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=11}, KafkaPartitionHandle=[rbea.state.event.bifrost.log,11], offset=-1, Partition: KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=10}, KafkaPartitionHandle=[rbea.state.event.bifrost.log,10], offset=-1, Partition: KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=13}, KafkaPartitionHandle=[rbea.state.event.bifrost.log,13], offset=-1, Partition: KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=12}, KafkaPartitionHandle=[rbea.state.event.bifrost.log,12], offset=0, Partition: KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=15}, KafkaPartitionHandle=[rbea.state.event.bifrost.log,15], offset=0, Partition: KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=14}, KafkaPartitionHandle=[rbea.state.event.bifrost.log,14], offset=-1, Partition: KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=1}, KafkaPartitionHandle=[rbea.state.event.bifrost.log,1], offset=-1, Partition: KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=0}, KafkaPartitionHandle=[rbea.state.event.bifrost.log,0], offset=-1, Partition: KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=3}, KafkaPartitionHandle=[rbea.state.event.bifrost.log,3], offset=-1, Partition: KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=2}, KafkaPartitionHandle=[rbea.state.event.bifrost.log,2], offset=-1, Partition: KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=5}, KafkaPartitionHandle=[rbea.state.event.bifrost.log,5], offset=-1, Partition: KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=4}, KafkaPartitionHandle=[rbea.state.event.bifrost.log,4], offset=-1, Partition: KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=7}, KafkaPartitionHandle=[rbea.state.event.bifrost.log,7], offset=-1, Partition: KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=6}, KafkaPartitionHandle=[rbea.state.event.bifrost.log,6], offset=-1]
2017-05-11 15:10:57,816 INFO  org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread  - Starting to fetch from [Partition: KafkaTopicPartition{topic='event.bifrost.log', partition=15}, KafkaPartitionHandle=[event.bifrost.log,15], offset=2704607]
2017-05-11 15:10:57,816 INFO  org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread  - Starting to consume 1 partitions with consumer thread SimpleConsumer - Source: Kafka[event.bifrost.log] -> Timestamp assigner -> Wrap events - broker-19 (kafka19.sto.midasplayer.com:9092)
2017-05-11 15:10:57,817 INFO  org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher  - Starting thread SimpleConsumer - Source: Kafka[event.bifrost.log] -> Timestamp assigner -> Wrap events - broker-15 (kafka15.sto.midasplayer.com:9092)
2017-05-11 15:10:57,817 INFO  org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase  - Trying to get topic metadata from broker kafka10.sto.midasplayer.com:9092 in try 0/3
2017-05-11 15:10:57,817 INFO  org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread  - Starting to fetch from [Partition: KafkaTopicPartition{topic='event.bifrost.log', partition=3}, KafkaPartitionHandle=[event.bifrost.log,3], offset=2642044]
2017-05-11 15:10:57,818 INFO  org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread  - Starting to consume 1 partitions with consumer thread SimpleConsumer - Source: Kafka[event.bifrost.log] -> Timestamp assigner -> Wrap events - broker-17 (kafka17.sto.midasplayer.com:9092)
2017-05-11 15:10:57,818 INFO  org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher  - Starting thread SimpleConsumer - Source: Kafka[event.bifrost.log] -> Timestamp assigner -> Wrap events - broker-21 (kafka21.sto.midasplayer.com:9092)
2017-05-11 15:10:57,818 INFO  org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread  - Starting to consume 1 partitions with consumer thread SimpleConsumer - Source: Kafka[event.bifrost.log] -> Timestamp assigner -> Wrap events - broker-23 (kafka23.sto.midasplayer.com:9092)
2017-05-11 15:10:57,818 INFO  org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread  - Starting to fetch from [Partition: KafkaTopicPartition{topic='event.bifrost.log', partition=9}, KafkaPartitionHandle=[event.bifrost.log,9], offset=2608243]
2017-05-11 15:10:57,819 INFO  org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher  - Starting thread SimpleConsumer - Source: Kafka[event.bifrost.log] -> Timestamp assigner -> Wrap events - broker-25 (kafka25.sto.midasplayer.com:9092)
2017-05-11 15:10:57,819 INFO  org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread  - Starting to consume 1 partitions with consumer thread SimpleConsumer - Source: Kafka[event.bifrost.log] -> Timestamp assigner -> Wrap events - broker-15 (kafka15.sto.midasplayer.com:9092)
2017-05-11 15:10:57,819 INFO  org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread  - Starting to fetch from [Partition: KafkaTopicPartition{topic='event.bifrost.log', partition=13}, KafkaPartitionHandle=[event.bifrost.log,13], offset=2714195]
2017-05-11 15:10:57,819 INFO  org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher  - Starting thread SimpleConsumer - Source: Kafka[event.bifrost.log] -> Timestamp assigner -> Wrap events - broker-13 (kafka13.sto.midasplayer.com:9092)
2017-05-11 15:10:57,819 INFO  org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread  - Starting to consume 1 partitions with consumer thread SimpleConsumer - Source: Kafka[event.bifrost.log] -> Timestamp assigner -> Wrap events - broker-8 (kafka08.sto.midasplayer.com:9092)
2017-05-11 15:10:57,819 INFO  org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread  - Starting to fetch from [Partition: KafkaTopicPartition{topic='event.bifrost.log', partition=1}, KafkaPartitionHandle=[event.bifrost.log,1], offset=2558664]
2017-05-11 15:10:57,820 INFO  org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread  - Starting to consume 1 partitions with consumer thread SimpleConsumer - Source: Kafka[event.bifrost.log] -> Timestamp assigner -> Wrap events - broker-21 (kafka21.sto.midasplayer.com:9092)
2017-05-11 15:10:57,821 INFO  org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread  - Starting to consume 1 partitions with consumer thread SimpleConsumer - Source: Kafka[event.bifrost.log] -> Timestamp assigner -> Wrap events - broker-25 (kafka25.sto.midasplayer.com:9092)
2017-05-11 15:10:57,821 INFO  org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher  - Starting thread SimpleConsumer - Source: Kafka[rbea.state.event.bifrost.log] -> Timestamp assigner -> Filter state updates -> Wrap events - broker-7 (kafka07.sto.midasplayer.com:9092)
2017-05-11 15:10:57,821 INFO  org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread  - Starting to fetch from [Partition: KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=5}, KafkaPartitionHandle=[rbea.state.event.bifrost.log,5], offset=-1]
2017-05-11 15:10:57,822 WARN  org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher  - No group offset can be found for partition Partition: KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=9}, KafkaPartitionHandle=[rbeaDeploymentsplattest1,9], offset=-915623761773 in Zookeeper; resetting starting offset to 'auto.offset.reset'
2017-05-11 15:10:57,822 INFO  org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher  - Starting thread SimpleConsumer - Source: Kafka[rbea.state.event.bifrost.log] -> Timestamp assigner -> Filter state updates -> Wrap events - broker-9 (kafka09.sto.midasplayer.com:9092)
2017-05-11 15:10:57,822 INFO  org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread  - Starting to fetch from [Partition: KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=7}, KafkaPartitionHandle=[rbea.state.event.bifrost.log,7], offset=-1]
2017-05-11 15:10:57,823 INFO  org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread  - Starting to consume 1 partitions with consumer thread SimpleConsumer - Source: Kafka[rbea.state.event.bifrost.log] -> Timestamp assigner -> Filter state updates -> Wrap events - broker-7 (kafka07.sto.midasplayer.com:9092)
2017-05-11 15:10:57,823 INFO  org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher  - Starting thread SimpleConsumer - Source: Kafka[rbea.state.event.bifrost.log] -> Timestamp assigner -> Filter state updates -> Wrap events - broker-16 (kafka16.sto.midasplayer.com:9092)
2017-05-11 15:10:57,823 INFO  org.apache.flink.contrib.streaming.state.RocksDBKeyedStateBackend  - Initializing RocksDB keyed state backend from snapshot.
2017-05-11 15:10:57,823 INFO  org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread  - Starting to fetch from [Partition: KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=14}, KafkaPartitionHandle=[rbea.state.event.bifrost.log,14], offset=-1]
2017-05-11 15:10:57,823 INFO  org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread  - Starting to consume 1 partitions with consumer thread SimpleConsumer - Source: Kafka[event.bifrost.log] -> Timestamp assigner -> Wrap events - broker-13 (kafka13.sto.midasplayer.com:9092)
2017-05-11 15:10:57,823 INFO  org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher  - Starting thread SimpleConsumer - Source: Kafka[rbea.state.event.bifrost.log] -> Timestamp assigner -> Filter state updates -> Wrap events - broker-22 (kafka22.sto.midasplayer.com:9092)
2017-05-11 15:10:57,824 INFO  org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread  - Starting to fetch from [Partition: KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=1}, KafkaPartitionHandle=[rbea.state.event.bifrost.log,1], offset=-1]
2017-05-11 15:10:57,825 INFO  org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher  - Starting thread SimpleConsumer - Source: Kafka[rbea.state.event.bifrost.log] -> Timestamp assigner -> Filter state updates -> Wrap events - broker-14 (kafka14.sto.midasplayer.com:9092)
2017-05-11 15:10:57,825 INFO  org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread  - Starting to fetch from [Partition: KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=12}, KafkaPartitionHandle=[rbea.state.event.bifrost.log,12], offset=0]
2017-05-11 15:10:57,825 INFO  org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread  - Starting to consume 1 partitions with consumer thread SimpleConsumer - Source: Kafka[rbea.state.event.bifrost.log] -> Timestamp assigner -> Filter state updates -> Wrap events - broker-9 (kafka09.sto.midasplayer.com:9092)
2017-05-11 15:10:57,826 INFO  org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread  - Starting to consume 1 partitions with consumer thread SimpleConsumer - Source: Kafka[rbea.state.event.bifrost.log] -> Timestamp assigner -> Filter state updates -> Wrap events - broker-22 (kafka22.sto.midasplayer.com:9092)
2017-05-11 15:10:57,825 INFO  org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread  - Starting to consume 1 partitions with consumer thread SimpleConsumer - Source: Kafka[rbea.state.event.bifrost.log] -> Timestamp assigner -> Filter state updates -> Wrap events - broker-16 (kafka16.sto.midasplayer.com:9092)
2017-05-11 15:10:57,826 INFO  org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher  - Starting thread SimpleConsumer - Source: Kafka[rbea.state.event.bifrost.log] -> Timestamp assigner -> Filter state updates -> Wrap events - broker-24 (kafka24.sto.midasplayer.com:9092)
2017-05-11 15:10:57,826 INFO  org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread  - Starting to fetch from [Partition: KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=3}, KafkaPartitionHandle=[rbea.state.event.bifrost.log,3], offset=-1]
2017-05-11 15:10:57,827 INFO  org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher  - Starting thread SimpleConsumer - Source: Kafka[rbea.state.event.bifrost.log] -> Timestamp assigner -> Filter state updates -> Wrap events - broker-12 (kafka12.sto.midasplayer.com:9092)
2017-05-11 15:10:57,827 INFO  org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread  - Starting to fetch from [Partition: KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=10}, KafkaPartitionHandle=[rbea.state.event.bifrost.log,10], offset=-1]
2017-05-11 15:10:57,827 INFO  org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher  - Starting thread SimpleConsumer - Source: Kafka[rbea.state.event.bifrost.log] -> Timestamp assigner -> Filter state updates -> Wrap events - broker-10 (kafka10.sto.midasplayer.com:9092)
2017-05-11 15:10:57,828 INFO  org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher  - Starting thread SimpleConsumer - Source: Kafka[rbea.state.event.bifrost.log] -> Timestamp assigner -> Filter state updates -> Wrap events - broker-17 (kafka17.sto.midasplayer.com:9092)
2017-05-11 15:10:57,828 INFO  org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread  - Starting to consume 1 partitions with consumer thread SimpleConsumer - Source: Kafka[rbea.state.event.bifrost.log] -> Timestamp assigner -> Filter state updates -> Wrap events - broker-24 (kafka24.sto.midasplayer.com:9092)
2017-05-11 15:10:57,828 INFO  org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread  - Starting to fetch from [Partition: KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=8}, KafkaPartitionHandle=[rbea.state.event.bifrost.log,8], offset=-1]
2017-05-11 15:10:57,828 INFO  org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread  - Starting to fetch from [Partition: KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=15}, KafkaPartitionHandle=[rbea.state.event.bifrost.log,15], offset=0]
2017-05-11 15:10:57,829 INFO  org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher  - Starting thread SimpleConsumer - Source: Kafka[rbea.state.event.bifrost.log] -> Timestamp assigner -> Filter state updates -> Wrap events - broker-23 (kafka23.sto.midasplayer.com:9092)
2017-05-11 15:10:57,829 INFO  org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread  - Starting to consume 1 partitions with consumer thread SimpleConsumer - Source: Kafka[rbea.state.event.bifrost.log] -> Timestamp assigner -> Filter state updates -> Wrap events - broker-14 (kafka14.sto.midasplayer.com:9092)
2017-05-11 15:10:57,829 INFO  org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread  - Starting to fetch from [Partition: KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=2}, KafkaPartitionHandle=[rbea.state.event.bifrost.log,2], offset=-1]
2017-05-11 15:10:57,829 INFO  org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher  - Starting thread SimpleConsumer - Source: Kafka[rbea.state.event.bifrost.log] -> Timestamp assigner -> Filter state updates -> Wrap events - broker-8 (kafka08.sto.midasplayer.com:9092)
2017-05-11 15:10:57,829 INFO  org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread  - Starting to consume 1 partitions with consumer thread SimpleConsumer - Source: Kafka[event.bifrost.log] -> Timestamp assigner -> Wrap events - broker-18 (kafka18.sto.midasplayer.com:9092)
2017-05-11 15:10:57,829 WARN  org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher  - No group offset can be found for partition Partition: KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=8}, KafkaPartitionHandle=[rbeaDeploymentsplattest1,8], offset=-915623761773 in Zookeeper; resetting starting offset to 'auto.offset.reset'
2017-05-11 15:10:57,829 INFO  org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread  - Starting to consume 1 partitions with consumer thread SimpleConsumer - Source: Kafka[rbea.state.event.bifrost.log] -> Timestamp assigner -> Filter state updates -> Wrap events - broker-12 (kafka12.sto.midasplayer.com:9092)
2017-05-11 15:10:57,829 INFO  org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher  - Starting thread SimpleConsumer - Source: Kafka[rbea.state.event.bifrost.log] -> Timestamp assigner -> Filter state updates -> Wrap events - broker-21 (kafka21.sto.midasplayer.com:9092)
2017-05-11 15:10:57,829 INFO  org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread  - Starting to fetch from [Partition: KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=6}, KafkaPartitionHandle=[rbea.state.event.bifrost.log,6], offset=-1]
2017-05-11 15:10:57,830 INFO  org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher  - Starting thread SimpleConsumer - Source: Kafka[rbea.state.event.bifrost.log] -> Timestamp assigner -> Filter state updates -> Wrap events - broker-15 (kafka15.sto.midasplayer.com:9092)
2017-05-11 15:10:57,830 INFO  org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread  - Starting to consume 1 partitions with consumer thread SimpleConsumer - Source: Kafka[rbea.state.event.bifrost.log] -> Timestamp assigner -> Filter state updates -> Wrap events - broker-17 (kafka17.sto.midasplayer.com:9092)
2017-05-11 15:10:57,830 INFO  org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread  - Starting to consume 1 partitions with consumer thread SimpleConsumer - Source: Kafka[rbea.state.event.bifrost.log] -> Timestamp assigner -> Filter state updates -> Wrap events - broker-23 (kafka23.sto.midasplayer.com:9092)
2017-05-11 15:10:57,830 INFO  org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread  - Starting to fetch from [Partition: KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=13}, KafkaPartitionHandle=[rbea.state.event.bifrost.log,13], offset=-1]
2017-05-11 15:10:57,830 INFO  org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread  - Starting to consume 1 partitions with consumer thread SimpleConsumer - Source: Kafka[rbea.state.event.bifrost.log] -> Timestamp assigner -> Filter state updates -> Wrap events - broker-10 (kafka10.sto.midasplayer.com:9092)
2017-05-11 15:10:57,830 INFO  org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher  - Starting thread SimpleConsumer - Source: Kafka[rbea.state.event.bifrost.log] -> Timestamp assigner -> Filter state updates -> Wrap events - broker-25 (kafka25.sto.midasplayer.com:9092)
2017-05-11 15:10:57,831 INFO  org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread  - Starting to fetch from [Partition: KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=0}, KafkaPartitionHandle=[rbea.state.event.bifrost.log,0], offset=-1]
2017-05-11 15:10:57,831 INFO  org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread  - Starting to fetch from [Partition: KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=4}, KafkaPartitionHandle=[rbea.state.event.bifrost.log,4], offset=-1]
2017-05-11 15:10:57,832 INFO  org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher  - Starting thread SimpleConsumer - Source: Kafka[rbea.state.event.bifrost.log] -> Timestamp assigner -> Filter state updates -> Wrap events - broker-13 (kafka13.sto.midasplayer.com:9092)
2017-05-11 15:10:57,832 INFO org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - Starting thread SimpleConsumer - Source: Kafka[rbea.state.event.bifrost.log] -> Timestamp assigner -> Filter state updates -> Wrap events - broker-11 (kafka11.sto.midasplayer.com:9092) | |
2017-05-11 15:10:57,832 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to fetch from [Partition: KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=11}, KafkaPartitionHandle=[rbea.state.event.bifrost.log,11], offset=-1] | |
2017-05-11 15:10:57,833 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to consume 1 partitions with consumer thread SimpleConsumer - Source: Kafka[rbea.state.event.bifrost.log] -> Timestamp assigner -> Filter state updates -> Wrap events - broker-21 (kafka21.sto.midasplayer.com:9092) | |
2017-05-11 15:10:57,833 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to consume 1 partitions with consumer thread SimpleConsumer - Source: Kafka[rbea.state.event.bifrost.log] -> Timestamp assigner -> Filter state updates -> Wrap events - broker-8 (kafka08.sto.midasplayer.com:9092) | |
2017-05-11 15:10:57,833 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to fetch from [Partition: KafkaTopicPartition{topic='rbea.state.event.bifrost.log', partition=9}, KafkaPartitionHandle=[rbea.state.event.bifrost.log,9], offset=-1] | |
2017-05-11 15:10:57,833 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to consume 1 partitions with consumer thread SimpleConsumer - Source: Kafka[rbea.state.event.bifrost.log] -> Timestamp assigner -> Filter state updates -> Wrap events - broker-15 (kafka15.sto.midasplayer.com:9092) | |
2017-05-11 15:10:57,833 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to consume 1 partitions with consumer thread SimpleConsumer - Source: Kafka[rbea.state.event.bifrost.log] -> Timestamp assigner -> Filter state updates -> Wrap events - broker-25 (kafka25.sto.midasplayer.com:9092) | |
2017-05-11 15:10:57,834 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to consume 1 partitions with consumer thread SimpleConsumer - Source: Kafka[rbea.state.event.bifrost.log] -> Timestamp assigner -> Filter state updates -> Wrap events - broker-13 (kafka13.sto.midasplayer.com:9092) | |
2017-05-11 15:10:57,834 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to consume 1 partitions with consumer thread SimpleConsumer - Source: Kafka[rbea.state.event.bifrost.log] -> Timestamp assigner -> Filter state updates -> Wrap events - broker-11 (kafka11.sto.midasplayer.com:9092) | |
2017-05-11 15:10:57,834 WARN org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - No group offset can be found for partition Partition: KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=7}, KafkaPartitionHandle=[rbeaDeploymentsplattest1,7], offset=-915623761773 in Zookeeper; resetting starting offset to 'auto.offset.reset' | |
2017-05-11 15:10:57,839 WARN org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - No group offset can be found for partition Partition: KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=6}, KafkaPartitionHandle=[rbeaDeploymentsplattest1,6], offset=-915623761773 in Zookeeper; resetting starting offset to 'auto.offset.reset' | |
2017-05-11 15:10:57,843 WARN org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - No group offset can be found for partition Partition: KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=13}, KafkaPartitionHandle=[rbeaDeploymentsplattest1,13], offset=-915623761773 in Zookeeper; resetting starting offset to 'auto.offset.reset' | |
2017-05-11 15:10:57,848 WARN org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - No group offset can be found for partition Partition: KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=12}, KafkaPartitionHandle=[rbeaDeploymentsplattest1,12], offset=-915623761773 in Zookeeper; resetting starting offset to 'auto.offset.reset' | |
2017-05-11 15:10:57,852 WARN org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - No group offset can be found for partition Partition: KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=11}, KafkaPartitionHandle=[rbeaDeploymentsplattest1,11], offset=-915623761773 in Zookeeper; resetting starting offset to 'auto.offset.reset' | |
2017-05-11 15:10:57,856 WARN org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - No group offset can be found for partition Partition: KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=10}, KafkaPartitionHandle=[rbeaDeploymentsplattest1,10], offset=-915623761773 in Zookeeper; resetting starting offset to 'auto.offset.reset' | |
2017-05-11 15:10:57,860 WARN org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - No group offset can be found for partition Partition: KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=15}, KafkaPartitionHandle=[rbeaDeploymentsplattest1,15], offset=-915623761773 in Zookeeper; resetting starting offset to 'auto.offset.reset' | |
2017-05-11 15:10:57,864 WARN org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - No group offset can be found for partition Partition: KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=14}, KafkaPartitionHandle=[rbeaDeploymentsplattest1,14], offset=-915623761773 in Zookeeper; resetting starting offset to 'auto.offset.reset' | |
2017-05-11 15:10:57,864 INFO org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - Assigning 16 partitions to broker threads | |
2017-05-11 15:10:57,864 INFO org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - Refreshing leader information for partitions [Partition: KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=1}, KafkaPartitionHandle=[rbeaDeploymentsplattest1,1], offset=-1, Partition: KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=0}, KafkaPartitionHandle=[rbeaDeploymentsplattest1,0], offset=-1, Partition: KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=5}, KafkaPartitionHandle=[rbeaDeploymentsplattest1,5], offset=-1, Partition: KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=4}, KafkaPartitionHandle=[rbeaDeploymentsplattest1,4], offset=-1, Partition: KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=3}, KafkaPartitionHandle=[rbeaDeploymentsplattest1,3], offset=-1, Partition: KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=2}, KafkaPartitionHandle=[rbeaDeploymentsplattest1,2], offset=-1, Partition: KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=9}, KafkaPartitionHandle=[rbeaDeploymentsplattest1,9], offset=-1, Partition: KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=8}, KafkaPartitionHandle=[rbeaDeploymentsplattest1,8], offset=-1, Partition: KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=7}, KafkaPartitionHandle=[rbeaDeploymentsplattest1,7], offset=-1, Partition: KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=6}, KafkaPartitionHandle=[rbeaDeploymentsplattest1,6], offset=-1, Partition: KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=13}, KafkaPartitionHandle=[rbeaDeploymentsplattest1,13], offset=-1, Partition: KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=12}, KafkaPartitionHandle=[rbeaDeploymentsplattest1,12], offset=-1, Partition: KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=11}, KafkaPartitionHandle=[rbeaDeploymentsplattest1,11], offset=-1, Partition: 
KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=10}, KafkaPartitionHandle=[rbeaDeploymentsplattest1,10], offset=-1, Partition: KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=15}, KafkaPartitionHandle=[rbeaDeploymentsplattest1,15], offset=-1, Partition: KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=14}, KafkaPartitionHandle=[rbeaDeploymentsplattest1,14], offset=-1] | |
2017-05-11 15:10:57,865 INFO org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase - Trying to get topic metadata from broker kafka10.sto.midasplayer.com:9092 in try 0/3
2017-05-11 15:10:57,868 INFO org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - Starting thread SimpleConsumer - Source: Kafka[rbeaDeploymentsplattest1] -> Max watermark -> Drop errors -> Filter for topic - broker-7 (kafka07.sto.midasplayer.com:9092)
2017-05-11 15:10:57,868 INFO org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - Starting thread SimpleConsumer - Source: Kafka[rbeaDeploymentsplattest1] -> Max watermark -> Drop errors -> Filter for topic - broker-9 (kafka09.sto.midasplayer.com:9092)
2017-05-11 15:10:57,868 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to fetch from [Partition: KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=5}, KafkaPartitionHandle=[rbeaDeploymentsplattest1,5], offset=-1]
2017-05-11 15:10:57,868 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to fetch from [Partition: KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=7}, KafkaPartitionHandle=[rbeaDeploymentsplattest1,7], offset=-1]
2017-05-11 15:10:57,868 INFO org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - Starting thread SimpleConsumer - Source: Kafka[rbeaDeploymentsplattest1] -> Max watermark -> Drop errors -> Filter for topic - broker-16 (kafka16.sto.midasplayer.com:9092)
2017-05-11 15:10:57,869 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to fetch from [Partition: KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=14}, KafkaPartitionHandle=[rbeaDeploymentsplattest1,14], offset=-1]
2017-05-11 15:10:57,869 INFO org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - Starting thread SimpleConsumer - Source: Kafka[rbeaDeploymentsplattest1] -> Max watermark -> Drop errors -> Filter for topic - broker-22 (kafka22.sto.midasplayer.com:9092)
2017-05-11 15:10:57,869 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to fetch from [Partition: KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=1}, KafkaPartitionHandle=[rbeaDeploymentsplattest1,1], offset=-1]
2017-05-11 15:10:57,869 INFO org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - Starting thread SimpleConsumer - Source: Kafka[rbeaDeploymentsplattest1] -> Max watermark -> Drop errors -> Filter for topic - broker-14 (kafka14.sto.midasplayer.com:9092)
2017-05-11 15:10:57,870 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to consume 1 partitions with consumer thread SimpleConsumer - Source: Kafka[rbeaDeploymentsplattest1] -> Max watermark -> Drop errors -> Filter for topic - broker-7 (kafka07.sto.midasplayer.com:9092)
2017-05-11 15:10:57,870 INFO org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - Starting thread SimpleConsumer - Source: Kafka[rbeaDeploymentsplattest1] -> Max watermark -> Drop errors -> Filter for topic - broker-24 (kafka24.sto.midasplayer.com:9092)
2017-05-11 15:10:57,870 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to fetch from [Partition: KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=12}, KafkaPartitionHandle=[rbeaDeploymentsplattest1,12], offset=-1]
2017-05-11 15:10:57,870 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to fetch from [Partition: KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=3}, KafkaPartitionHandle=[rbeaDeploymentsplattest1,3], offset=-1]
2017-05-11 15:10:57,870 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to consume 1 partitions with consumer thread SimpleConsumer - Source: Kafka[rbeaDeploymentsplattest1] -> Max watermark -> Drop errors -> Filter for topic - broker-9 (kafka09.sto.midasplayer.com:9092)
2017-05-11 15:10:57,871 INFO org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - Starting thread SimpleConsumer - Source: Kafka[rbeaDeploymentsplattest1] -> Max watermark -> Drop errors -> Filter for topic - broker-12 (kafka12.sto.midasplayer.com:9092)
2017-05-11 15:10:57,871 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to consume 1 partitions with consumer thread SimpleConsumer - Source: Kafka[rbeaDeploymentsplattest1] -> Max watermark -> Drop errors -> Filter for topic - broker-16 (kafka16.sto.midasplayer.com:9092)
2017-05-11 15:10:57,871 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to fetch from [Partition: KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=10}, KafkaPartitionHandle=[rbeaDeploymentsplattest1,10], offset=-1]
2017-05-11 15:10:57,871 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to consume 1 partitions with consumer thread SimpleConsumer - Source: Kafka[rbeaDeploymentsplattest1] -> Max watermark -> Drop errors -> Filter for topic - broker-22 (kafka22.sto.midasplayer.com:9092)
2017-05-11 15:10:57,871 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to consume 1 partitions with consumer thread SimpleConsumer - Source: Kafka[rbeaDeploymentsplattest1] -> Max watermark -> Drop errors -> Filter for topic - broker-24 (kafka24.sto.midasplayer.com:9092)
2017-05-11 15:10:57,872 INFO org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - Starting thread SimpleConsumer - Source: Kafka[rbeaDeploymentsplattest1] -> Max watermark -> Drop errors -> Filter for topic - broker-10 (kafka10.sto.midasplayer.com:9092)
2017-05-11 15:10:57,872 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to fetch from [Partition: KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=8}, KafkaPartitionHandle=[rbeaDeploymentsplattest1,8], offset=-1]
2017-05-11 15:10:57,872 INFO org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - Starting thread SimpleConsumer - Source: Kafka[rbeaDeploymentsplattest1] -> Max watermark -> Drop errors -> Filter for topic - broker-17 (kafka17.sto.midasplayer.com:9092)
2017-05-11 15:10:57,873 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to consume 1 partitions with consumer thread SimpleConsumer - Source: Kafka[rbeaDeploymentsplattest1] -> Max watermark -> Drop errors -> Filter for topic - broker-14 (kafka14.sto.midasplayer.com:9092)
2017-05-11 15:10:57,873 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to fetch from [Partition: KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=15}, KafkaPartitionHandle=[rbeaDeploymentsplattest1,15], offset=-1]
2017-05-11 15:10:57,873 INFO org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - Starting thread SimpleConsumer - Source: Kafka[rbeaDeploymentsplattest1] -> Max watermark -> Drop errors -> Filter for topic - broker-23 (kafka23.sto.midasplayer.com:9092)
2017-05-11 15:10:57,873 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to fetch from [Partition: KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=2}, KafkaPartitionHandle=[rbeaDeploymentsplattest1,2], offset=-1]
2017-05-11 15:10:57,873 INFO org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - Starting thread SimpleConsumer - Source: Kafka[rbeaDeploymentsplattest1] -> Max watermark -> Drop errors -> Filter for topic - broker-8 (kafka08.sto.midasplayer.com:9092)
2017-05-11 15:10:57,873 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to fetch from [Partition: KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=6}, KafkaPartitionHandle=[rbeaDeploymentsplattest1,6], offset=-1]
2017-05-11 15:10:57,873 INFO org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - Starting thread SimpleConsumer - Source: Kafka[rbeaDeploymentsplattest1] -> Max watermark -> Drop errors -> Filter for topic - broker-21 (kafka21.sto.midasplayer.com:9092)
2017-05-11 15:10:57,873 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to consume 1 partitions with consumer thread SimpleConsumer - Source: Kafka[rbeaDeploymentsplattest1] -> Max watermark -> Drop errors -> Filter for topic - broker-10 (kafka10.sto.midasplayer.com:9092)
2017-05-11 15:10:57,874 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to consume 1 partitions with consumer thread SimpleConsumer - Source: Kafka[rbeaDeploymentsplattest1] -> Max watermark -> Drop errors -> Filter for topic - broker-17 (kafka17.sto.midasplayer.com:9092)
2017-05-11 15:10:57,874 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to fetch from [Partition: KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=0}, KafkaPartitionHandle=[rbeaDeploymentsplattest1,0], offset=-1]
2017-05-11 15:10:57,874 INFO org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - Starting thread SimpleConsumer - Source: Kafka[rbeaDeploymentsplattest1] -> Max watermark -> Drop errors -> Filter for topic - broker-15 (kafka15.sto.midasplayer.com:9092)
2017-05-11 15:10:57,874 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to fetch from [Partition: KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=13}, KafkaPartitionHandle=[rbeaDeploymentsplattest1,13], offset=-1]
2017-05-11 15:10:57,874 INFO org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - Starting thread SimpleConsumer - Source: Kafka[rbeaDeploymentsplattest1] -> Max watermark -> Drop errors -> Filter for topic - broker-25 (kafka25.sto.midasplayer.com:9092)
2017-05-11 15:10:57,874 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to consume 1 partitions with consumer thread SimpleConsumer - Source: Kafka[rbeaDeploymentsplattest1] -> Max watermark -> Drop errors -> Filter for topic - broker-23 (kafka23.sto.midasplayer.com:9092)
2017-05-11 15:10:57,874 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to fetch from [Partition: KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=4}, KafkaPartitionHandle=[rbeaDeploymentsplattest1,4], offset=-1]
2017-05-11 15:10:57,874 INFO org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - Starting thread SimpleConsumer - Source: Kafka[rbeaDeploymentsplattest1] -> Max watermark -> Drop errors -> Filter for topic - broker-13 (kafka13.sto.midasplayer.com:9092)
2017-05-11 15:10:57,874 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to consume 1 partitions with consumer thread SimpleConsumer - Source: Kafka[rbeaDeploymentsplattest1] -> Max watermark -> Drop errors -> Filter for topic - broker-8 (kafka08.sto.midasplayer.com:9092)
2017-05-11 15:10:57,874 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to fetch from [Partition: KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=11}, KafkaPartitionHandle=[rbeaDeploymentsplattest1,11], offset=-1]
2017-05-11 15:10:57,874 INFO org.apache.flink.streaming.connectors.kafka.internals.Kafka08Fetcher - Starting thread SimpleConsumer - Source: Kafka[rbeaDeploymentsplattest1] -> Max watermark -> Drop errors -> Filter for topic - broker-11 (kafka11.sto.midasplayer.com:9092)
2017-05-11 15:10:57,874 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to consume 1 partitions with consumer thread SimpleConsumer - Source: Kafka[rbeaDeploymentsplattest1] -> Max watermark -> Drop errors -> Filter for topic - broker-21 (kafka21.sto.midasplayer.com:9092)
2017-05-11 15:10:57,874 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to fetch from [Partition: KafkaTopicPartition{topic='rbeaDeploymentsplattest1', partition=9}, KafkaPartitionHandle=[rbeaDeploymentsplattest1,9], offset=-1]
2017-05-11 15:10:57,874 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to consume 1 partitions with consumer thread SimpleConsumer - Source: Kafka[rbeaDeploymentsplattest1] -> Max watermark -> Drop errors -> Filter for topic - broker-15 (kafka15.sto.midasplayer.com:9092)
2017-05-11 15:10:57,874 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to consume 1 partitions with consumer thread SimpleConsumer - Source: Kafka[rbeaDeploymentsplattest1] -> Max watermark -> Drop errors -> Filter for topic - broker-25 (kafka25.sto.midasplayer.com:9092)
2017-05-11 15:10:57,875 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to consume 1 partitions with consumer thread SimpleConsumer - Source: Kafka[rbeaDeploymentsplattest1] -> Max watermark -> Drop errors -> Filter for topic - broker-13 (kafka13.sto.midasplayer.com:9092)
2017-05-11 15:10:57,875 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to consume 1 partitions with consumer thread SimpleConsumer - Source: Kafka[rbeaDeploymentsplattest1] -> Max watermark -> Drop errors -> Filter for topic - broker-11 (kafka11.sto.midasplayer.com:9092)
2017-05-11 15:10:57,888 INFO org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread - Starting to consume 1 partitions with consumer thread SimpleConsumer - Source: Kafka[rbeaDeploymentsplattest1] -> Max watermark -> Drop errors -> Filter for topic - broker-12 (kafka12.sto.midasplayer.com:9092)
2017-05-11 15:15:57,733 INFO org.apache.flink.runtime.state.DefaultOperatorStateBackend - DefaultOperatorStateBackend snapshot (File Stream Factory @ hdfs://splat34.sto.midasplayer.com:8020/flink/checkpoints/event.bifrost.log/065c0937d56f3e8da025e015d3ab332b, synchronous part) in thread Thread[Create Fields and Ids -> Filter Errors and Notifications (1/1),5,Flink Task Threads] took 0 ms.
2017-05-11 15:15:57,737 INFO org.apache.flink.runtime.state.DefaultOperatorStateBackend - DefaultOperatorStateBackend snapshot (File Stream Factory @ hdfs://splat34.sto.midasplayer.com:8020/flink/checkpoints/event.bifrost.log/065c0937d56f3e8da025e015d3ab332b, synchronous part) in thread Thread[Create Job information (1/1),5,Flink Task Threads] took 2 ms.
2017-05-11 15:15:57,737 INFO org.apache.flink.runtime.state.DefaultOperatorStateBackend - DefaultOperatorStateBackend snapshot (File Stream Factory @ hdfs://splat34.sto.midasplayer.com:8020/flink/checkpoints/event.bifrost.log/065c0937d56f3e8da025e015d3ab332b, asynchronous part) in thread Thread[pool-38-thread-1,5,Flink Task Threads] took 1 ms.
2017-05-11 15:15:57,738 INFO org.apache.flink.runtime.state.DefaultOperatorStateBackend - DefaultOperatorStateBackend snapshot (File Stream Factory @ hdfs://splat34.sto.midasplayer.com:8020/flink/checkpoints/event.bifrost.log/065c0937d56f3e8da025e015d3ab332b, synchronous part) in thread Thread[Async calls on Source: Kafka[rbea.state.event.bifrost.log] -> Timestamp assigner -> Filter state updates -> Wrap events (1/1),5,Flink Task Threads] took 10 ms.
2017-05-11 15:15:57,738 INFO org.apache.flink.runtime.state.DefaultOperatorStateBackend - DefaultOperatorStateBackend snapshot (File Stream Factory @ hdfs://splat34.sto.midasplayer.com:8020/flink/checkpoints/event.bifrost.log/065c0937d56f3e8da025e015d3ab332b, synchronous part) in thread Thread[Async calls on Source: Kafka[event.bifrost.log] -> Timestamp assigner -> Wrap events (1/1),5,Flink Task Threads] took 10 ms.
2017-05-11 15:15:57,740 INFO org.apache.flink.runtime.state.DefaultOperatorStateBackend - DefaultOperatorStateBackend snapshot (File Stream Factory @ hdfs://splat34.sto.midasplayer.com:8020/flink/checkpoints/event.bifrost.log/065c0937d56f3e8da025e015d3ab332b, synchronous part) in thread Thread[Async calls on Source: Kafka[rbeaDeploymentsplattest1] -> Max watermark -> Drop errors -> Filter for topic (1/1),5,Flink Task Threads] took 5 ms.
2017-05-11 15:15:57,741 INFO org.apache.flink.runtime.state.DefaultOperatorStateBackend - DefaultOperatorStateBackend snapshot (File Stream Factory @ hdfs://splat34.sto.midasplayer.com:8020/flink/checkpoints/event.bifrost.log/065c0937d56f3e8da025e015d3ab332b, asynchronous part) in thread Thread[pool-42-thread-1,5,Flink Task Threads] took 3 ms.
2017-05-11 15:15:57,834 INFO org.apache.flink.runtime.state.DefaultOperatorStateBackend - DefaultOperatorStateBackend snapshot (File Stream Factory @ hdfs://splat34.sto.midasplayer.com:8020/flink/checkpoints/event.bifrost.log/065c0937d56f3e8da025e015d3ab332b, asynchronous part) in thread Thread[pool-39-thread-1,5,Flink Task Threads] took 91 ms.
2017-05-11 15:15:57,834 INFO org.apache.flink.runtime.state.DefaultOperatorStateBackend - DefaultOperatorStateBackend snapshot (File Stream Factory @ hdfs://splat34.sto.midasplayer.com:8020/flink/checkpoints/event.bifrost.log/065c0937d56f3e8da025e015d3ab332b, asynchronous part) in thread Thread[pool-31-thread-1,5,Flink Task Threads] took 92 ms.
2017-05-11 15:15:57,834 INFO org.apache.flink.runtime.state.DefaultOperatorStateBackend - DefaultOperatorStateBackend snapshot (File Stream Factory @ hdfs://splat34.sto.midasplayer.com:8020/flink/checkpoints/event.bifrost.log/065c0937d56f3e8da025e015d3ab332b, asynchronous part) in thread Thread[pool-32-thread-1,5,Flink Task Threads] took 94 ms.
2017-05-11 15:15:57,855 INFO org.apache.flink.runtime.state.DefaultOperatorStateBackend - DefaultOperatorStateBackend snapshot (File Stream Factory @ hdfs://splat34.sto.midasplayer.com:8020/flink/checkpoints/event.bifrost.log/065c0937d56f3e8da025e015d3ab332b, synchronous part) in thread Thread[Execute processors -> (Filter processor info -> Filter Failures, Filter BEA) (1/1),5,Flink Task Threads] took 21 ms.
2017-05-11 15:15:58,008 INFO org.apache.flink.runtime.state.DefaultOperatorStateBackend - DefaultOperatorStateBackend snapshot (File Stream Factory @ hdfs://splat34.sto.midasplayer.com:8020/flink/checkpoints/event.bifrost.log/065c0937d56f3e8da025e015d3ab332b, asynchronous part) in thread Thread[pool-37-thread-1,5,Flink Task Threads] took 24 ms.
2017-05-11 15:20:57,726 INFO org.apache.flink.runtime.state.DefaultOperatorStateBackend - DefaultOperatorStateBackend snapshot (File Stream Factory @ hdfs://splat34.sto.midasplayer.com:8020/flink/checkpoints/event.bifrost.log/065c0937d56f3e8da025e015d3ab332b, synchronous part) in thread Thread[Create Fields and Ids -> Filter Errors and Notifications (1/1),5,Flink Task Threads] took 0 ms.
2017-05-11 15:20:57,726 INFO org.apache.flink.runtime.state.DefaultOperatorStateBackend - DefaultOperatorStateBackend snapshot (File Stream Factory @ hdfs://splat34.sto.midasplayer.com:8020/flink/checkpoints/event.bifrost.log/065c0937d56f3e8da025e015d3ab332b, synchronous part) in thread Thread[Async calls on Source: Kafka[event.bifrost.log] -> Timestamp assigner -> Wrap events (1/1),5,Flink Task Threads] took 1 ms.
2017-05-11 15:20:57,726 INFO org.apache.flink.runtime.state.DefaultOperatorStateBackend - DefaultOperatorStateBackend snapshot (File Stream Factory @ hdfs://splat34.sto.midasplayer.com:8020/flink/checkpoints/event.bifrost.log/065c0937d56f3e8da025e015d3ab332b, synchronous part) in thread Thread[Create Job information (1/1),5,Flink Task Threads] took 0 ms.
2017-05-11 15:20:57,729 INFO org.apache.flink.runtime.state.DefaultOperatorStateBackend - DefaultOperatorStateBackend snapshot (File Stream Factory @ hdfs://splat34.sto.midasplayer.com:8020/flink/checkpoints/event.bifrost.log/065c0937d56f3e8da025e015d3ab332b, asynchronous part) in thread Thread[pool-38-thread-2,5,Flink Task Threads] took 2 ms.
2017-05-11 15:20:57,730 INFO org.apache.flink.runtime.state.DefaultOperatorStateBackend - DefaultOperatorStateBackend snapshot (File Stream Factory @ hdfs://splat34.sto.midasplayer.com:8020/flink/checkpoints/event.bifrost.log/065c0937d56f3e8da025e015d3ab332b, asynchronous part) in thread Thread[pool-42-thread-2,5,Flink Task Threads] took 1 ms.
2017-05-11 15:20:57,730 INFO org.apache.flink.runtime.state.DefaultOperatorStateBackend - DefaultOperatorStateBackend snapshot (File Stream Factory @ hdfs://splat34.sto.midasplayer.com:8020/flink/checkpoints/event.bifrost.log/065c0937d56f3e8da025e015d3ab332b, synchronous part) in thread Thread[Async calls on Source: Kafka[rbea.state.event.bifrost.log] -> Timestamp assigner -> Filter state updates -> Wrap events (1/1),5,Flink Task Threads] took 4 ms.
2017-05-11 15:20:57,730 INFO org.apache.flink.runtime.state.DefaultOperatorStateBackend - DefaultOperatorStateBackend snapshot (File Stream Factory @ hdfs://splat34.sto.midasplayer.com:8020/flink/checkpoints/event.bifrost.log/065c0937d56f3e8da025e015d3ab332b, synchronous part) in thread Thread[Async calls on Source: Kafka[rbeaDeploymentsplattest1] -> Max watermark -> Drop errors -> Filter for topic (1/1),5,Flink Task Threads] took 4 ms.
2017-05-11 15:20:57,849 INFO org.apache.flink.runtime.state.DefaultOperatorStateBackend - DefaultOperatorStateBackend snapshot (File Stream Factory @ hdfs://splat34.sto.midasplayer.com:8020/flink/checkpoints/event.bifrost.log/065c0937d56f3e8da025e015d3ab332b, asynchronous part) in thread Thread[pool-31-thread-2,5,Flink Task Threads] took 120 ms.
2017-05-11 15:20:57,857 INFO org.apache.flink.runtime.state.DefaultOperatorStateBackend - DefaultOperatorStateBackend snapshot (File Stream Factory @ hdfs://splat34.sto.midasplayer.com:8020/flink/checkpoints/event.bifrost.log/065c0937d56f3e8da025e015d3ab332b, asynchronous part) in thread Thread[pool-39-thread-2,5,Flink Task Threads] took 126 ms.
2017-05-11 15:20:57,857 INFO org.apache.flink.runtime.state.DefaultOperatorStateBackend - DefaultOperatorStateBackend snapshot (File Stream Factory @ hdfs://splat34.sto.midasplayer.com:8020/flink/checkpoints/event.bifrost.log/065c0937d56f3e8da025e015d3ab332b, asynchronous part) in thread Thread[pool-32-thread-2,5,Flink Task Threads] took 126 ms.
2017-05-11 15:20:57,858 INFO org.apache.flink.runtime.state.DefaultOperatorStateBackend - DefaultOperatorStateBackend snapshot (File Stream Factory @ hdfs://splat34.sto.midasplayer.com:8020/flink/checkpoints/event.bifrost.log/065c0937d56f3e8da025e015d3ab332b, synchronous part) in thread Thread[Execute processors -> (Filter processor info -> Filter Failures, Filter BEA) (1/1),5,Flink Task Threads] took 1 ms.
2017-05-11 15:20:58,057 INFO org.apache.flink.runtime.state.DefaultOperatorStateBackend - DefaultOperatorStateBackend snapshot (File Stream Factory @ hdfs://splat34.sto.midasplayer.com:8020/flink/checkpoints/event.bifrost.log/065c0937d56f3e8da025e015d3ab332b, asynchronous part) in thread Thread[pool-37-thread-2,5,Flink Task Threads] took 23 ms. |