@gyfora
Created May 11, 2017 13:21
2017-05-11 15:10:02,669 INFO org.apache.flink.yarn.YarnApplicationMasterRunner - --------------------------------------------------------------------------------
2017-05-11 15:10:02,670 INFO org.apache.flink.yarn.YarnApplicationMasterRunner - Starting YARN ApplicationMaster / ResourceManager / JobManager (Version: 1.3-SNAPSHOT, Rev:44a120b, Date:11.05.2017 @ 13:38:36 CEST)
2017-05-11 15:10:02,680 INFO org.apache.flink.yarn.YarnApplicationMasterRunner - Current user: splat
2017-05-11 15:10:02,680 INFO org.apache.flink.yarn.YarnApplicationMasterRunner - JVM: Java HotSpot(TM) 64-Bit Server VM - Oracle Corporation - 1.8/25.131-b11
2017-05-11 15:10:02,681 INFO org.apache.flink.yarn.YarnApplicationMasterRunner - Maximum heap size: 406 MiBytes
2017-05-11 15:10:02,681 INFO org.apache.flink.yarn.YarnApplicationMasterRunner - JAVA_HOME: /fjord/java/
2017-05-11 15:10:02,683 INFO org.apache.flink.yarn.YarnApplicationMasterRunner - Hadoop version: 2.6.0
2017-05-11 15:10:02,683 INFO org.apache.flink.yarn.YarnApplicationMasterRunner - JVM Options:
2017-05-11 15:10:02,683 INFO org.apache.flink.yarn.YarnApplicationMasterRunner - -Xmx424m
2017-05-11 15:10:02,683 INFO org.apache.flink.yarn.YarnApplicationMasterRunner - -Dlog.file=/fjord/hadoop/data/2/yarn/container-logs/application_1494426363399_0012/container_1494426363399_0012_01_000001/jobmanager.log
2017-05-11 15:10:02,683 INFO org.apache.flink.yarn.YarnApplicationMasterRunner - -Dlogback.configurationFile=file:logback.xml
2017-05-11 15:10:02,683 INFO org.apache.flink.yarn.YarnApplicationMasterRunner - -Dlog4j.configuration=file:log4j.properties
2017-05-11 15:10:02,683 INFO org.apache.flink.yarn.YarnApplicationMasterRunner - Program Arguments: (none)
2017-05-11 15:10:02,683 INFO org.apache.flink.yarn.YarnApplicationMasterRunner - Classpath: lib/flink-connector-kafka-0.8_2.10-1.3-SNAPSHOT.jar:lib/flink-connector-kafka-base_2.10-1.3-SNAPSHOT.jar:lib/flink-dist_2.10-1.3-SNAPSHOT.jar:lib/flink-python_2.10-1.3-SNAPSHOT.jar:lib/flink-shaded-hadoop2-1.3-SNAPSHOT.jar:lib/kafka-clients-0.8.2.2.jar:lib/kafka_2.10-0.8.2.2.jar:lib/log4j-1.2.17.jar:lib/slf4j-log4j12-1.7.7.jar:log4j.properties:logback.xml:rbea-on-flink-2.0-SNAPSHOT.jar:flink.jar:flink-conf.yaml::/etc/hadoop/conf.cloudera.yarn2:/run/cloudera-scm-agent/process/948-yarn-NODEMANAGER:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/hadoop-common-2.6.0-cdh5.11.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/parquet-protobuf.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/parquet-generator.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/hadoop-common.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/parquet-cascading.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/hadoop-auth.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/hadoop-auth-2.6.0-cdh5.11.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/hadoop-aws-2.6.0-cdh5.11.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/parquet-thrift.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/parquet-common.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/hadoop-azure-datalake.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/hadoop-nfs.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/parquet-avro.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/hadoop-annotations-2.6.0-cdh5.11.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/hadoop-azure-datalake-2.6.0-cdh5.11.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/parquet-test-hadoop2.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/parquet-encoding.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/hadoop-nfs-2.6.0-cdh5.11.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/parquet-jackson.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/parquet-pig-bundle.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/parquet-column.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/parquet-format.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/parquet-scrooge_2.10.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/hadoop-common-tests.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/parquet-format-sources.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/hadoop-annotations.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/parquet-hadoop-bundle.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/parquet-tools.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/hadoop-common-2.6.0-cdh5.11.0-tests.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/parquet-hadoop.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/parquet-pig.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/hadoop-aws.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/parquet-scala_2.10.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/parquet-format-javadoc.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.
11.0.p0.34/lib/hadoop/lib/commons-codec-1.4.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/jersey-core-1.9.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/slf4j-log4j12.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/servlet-api-2.5.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/commons-digester-1.8.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/curator-client-2.7.1.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/xz-1.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/jetty-util-6.1.26.cloudera.4.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/activation-1.1.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/curator-recipes-2.7.1.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/asm-3.2.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/stax-api-1.0-2.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/aws-java-sdk-dynamodb-1.10.6.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/guava-11.0.2.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/jsp-api-2.1.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/logredactor-1.0.3.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/httpcore-4.2.5.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/aws-java-sdk-kms-1.10.6.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/hamcrest-core-1.3.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/jets3t-0.9.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/aws-java-sdk-s3-1.10.6.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/avro.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/slf4j-api-1.7.5.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/java-xmlbuilder-0.4.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/commons-el-1.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/jaxb-impl-2.2.3-1.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/netty-3.10.5.Final.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/commons-math3-3.1.1.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/log4j-1.2.17.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/jasper-runtime-5.5.23.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/gson-2.2.4.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/azure-data-lake-store-sdk-2.1.4.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/commons-io-2.4.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/aws-java-sdk-core-1.10.6.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/commons-compress-1.4.1.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/commons-logging-1.1.3.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/curator-framework-2.7.1.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/jsch-0.1.42.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/commons-beanutils-core-1.8.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/li
b/hadoop/lib/snappy-java-1.0.4.1.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/commons-beanutils-1.9.2.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/jaxb-api-2.2.2.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/api-asn1-api-1.0.0-M20.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/commons-collections-3.2.2.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/protobuf-java-2.5.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/aws-java-sdk-sts-1.10.6.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/jetty-6.1.26.cloudera.4.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/apacheds-i18n-2.0.0-M15.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/httpclient-4.2.5.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/commons-lang-2.6.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/jersey-server-1.9.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/commons-configuration-1.6.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/jsr305-3.0.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/jasper-compiler-5.5.23.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/jackson-xc-1.8.8.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/jackson-jaxrs-1.8.8.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/jettison-1.1.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/hue-plugins-3.9.0-cdh5.11.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/zookeeper.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/junit-4.11.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/commons-net-3.1.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/htrace-core4-4.0.1-incubating.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/commons-cli-1.2.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/api-util-1.0.0-M20.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/paranamer-2.3.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/jersey-json-1.9.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/xmlenc-0.52.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/mockito-all-1.8.5.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/commons-httpclient-3.1.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/hadoop-hdfs-2.6.0-cdh5.11.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/hadoop-hdfs-nfs-2.6.0-cdh5.11.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/hadoop-hdfs-tests.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/hadoop-hdfs.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/hadoop-hdfs-nfs.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/hadoop-hdfs-2.6.0-cdh5.11.0-tests.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/lib/commons-codec-1.4.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/lib/jersey-core-1.9.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/lib/servlet-api-2.5.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/lib/xml-apis-1.3.04.jar
:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/lib/jetty-util-6.1.26.cloudera.4.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/lib/asm-3.2.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/lib/guava-11.0.2.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/lib/jsp-api-2.1.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/lib/xercesImpl-2.9.1.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/lib/commons-el-1.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/lib/netty-3.10.5.Final.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/lib/log4j-1.2.17.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/lib/jasper-runtime-5.5.23.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/lib/jackson-core-asl-1.8.8.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/lib/commons-io-2.4.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/lib/leveldbjni-all-1.8.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/lib/jackson-mapper-asl-1.8.8.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/lib/commons-logging-1.1.3.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/lib/commons-daemon-1.0.13.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/lib/protobuf-java-2.5.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/lib/jetty-6.1.26.cloudera.4.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/lib/commons-lang-2.6.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/lib/jersey-server-1.9.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/lib/jsr305-3.0.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/lib/htrace-core4-4.0.1-incubating.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/lib/commons-cli-1.2.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/lib/xmlenc-0.52.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/hadoop-yarn-client.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/hadoop-yarn-server-web-proxy.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/hadoop-yarn-server-nodemanager-2.6.0-cdh5.11.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/hadoop-yarn-server-applicationhistoryservice-2.6.0-cdh5.11.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/hadoop-yarn-server-web-proxy-2.6.0-cdh5.11.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/hadoop-yarn-api-2.6.0-cdh5.11.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/hadoop-yarn-applications-unmanaged-am-launcher.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/hadoop-yarn-registry-2.6.0-cdh5.11.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/hadoop-yarn-registry.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/hadoop-yarn-applications-distributedshell.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/hadoop-yarn-server-nodemanager.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/hadoop-yarn-client-2.6.0-cdh5.11.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/hadoop-yarn-server-common.jar:/fjord/had
oop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/hadoop-yarn-api.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/hadoop-yarn-server-common-2.6.0-cdh5.11.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/hadoop-yarn-server-resourcemanager-2.6.0-cdh5.11.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.6.0-cdh5.11.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/hadoop-yarn-server-applicationhistoryservice.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/hadoop-yarn-server-tests.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/hadoop-yarn-server-resourcemanager.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/hadoop-yarn-server-tests-2.6.0-cdh5.11.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/hadoop-yarn-common-2.6.0-cdh5.11.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/hadoop-yarn-common.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/hadoop-yarn-applications-distributedshell-2.6.0-cdh5.11.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/commons-codec-1.4.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/jersey-core-1.9.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/javax.inject-1.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/servlet-api-2.5.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/xz-1.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/jetty-util-6.1.26.cloudera.4.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/activation-1.1.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/jersey-client-1.9.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/asm-3.2.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/stax-api-1.0-2.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/guava-11.0.2.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/jaxb-impl-2.2.3-1.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/spark-yarn-shuffle.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/log4j-1.2.17.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/jackson-core-asl-1.8.8.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/jline-2.11.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/guice-servlet-3.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/commons-io-2.4.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/leveldbjni-all-1.8.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/commons-compress-1.4.1.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/guice-3.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/jackson-mapper-asl-1.8.8.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/commons-logging-1.1.3.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/aopalliance-1.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/jaxb-api-2.2.2.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-
yarn/lib/commons-collections-3.2.2.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/protobuf-java-2.5.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/spark-1.6.0-cdh5.11.0-yarn-shuffle.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/jetty-6.1.26.cloudera.4.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/jersey-guice-1.9.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/commons-lang-2.6.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/jersey-server-1.9.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/jsr305-3.0.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/jackson-xc-1.8.8.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/jackson-jaxrs-1.8.8.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/jettison-1.1.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/zookeeper.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/commons-cli-1.2.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/jersey-json-1.9.jar
2017-05-11 15:10:02,683 INFO org.apache.flink.yarn.YarnApplicationMasterRunner - --------------------------------------------------------------------------------
2017-05-11 15:10:02,684 INFO org.apache.flink.yarn.YarnApplicationMasterRunner - Registered UNIX signal handlers for [TERM, HUP, INT]
2017-05-11 15:10:02,688 INFO org.apache.flink.yarn.YarnApplicationMasterRunner - remoteKeytabPrincipal obtained null
2017-05-11 15:10:02,689 INFO org.apache.flink.yarn.YarnApplicationMasterRunner - YARN daemon is running as: splat Yarn client user obtainer: splat
2017-05-11 15:10:02,693 INFO org.apache.flink.yarn.YarnApplicationMasterRunner - Loading config from directory /fjord/hadoop/data/nvme/splat/yarn-local-dir/usercache/splat/appcache/application_1494426363399_0012/container_1494426363399_0012_01_000001
2017-05-11 15:10:02,695 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: akka.ask.timeout, 120 s
2017-05-11 15:10:02,695 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: akka.client.timeout, 1200 s
2017-05-11 15:10:02,695 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: akka.lookup.timeout, 120 s
2017-05-11 15:10:02,695 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: env.log.dir, /home/splat/flink/log/
2017-05-11 15:10:02,696 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: fs.hdfs.hadoopconf, /home/splat/rbea-on-flink/deployment/environments/test/yarn-conf
2017-05-11 15:10:02,696 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: parallelism.default, 1
2017-05-11 15:10:02,696 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: recovery.mode, zookeeper
2017-05-11 15:10:02,696 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: recovery.zookeeper.path.root, /flink-splat
2017-05-11 15:10:02,696 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: recovery.zookeeper.quorum, zk04.sto.midasplayer.com:2181,zk05.sto.midasplayer.com:2181,zk06.sto.midasplayer.com:2181
2017-05-11 15:10:02,696 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: recovery.zookeeper.storageDir, hdfs:///flink/ha
2017-05-11 15:10:02,696 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: state.checkpoints.dir, hdfs:///flink/external-checkpoints/bifrost
2017-05-11 15:10:02,696 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: state.savepoints.dir, hdfs:///flink/external-checkpoints/bifrost
2017-05-11 15:10:02,696 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: yarn.application-attempts, 10
2017-05-11 15:10:02,697 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: yarn.containers.vcores, 4
2017-05-11 15:10:02,697 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: yarn.heap-cutoff-ratio, 0.1
2017-05-11 15:10:02,697 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: yarn.maximum-failed-containers, 100
2017-05-11 15:10:02,732 INFO org.apache.flink.runtime.security.modules.HadoopModule - Hadoop user set to splat (auth:SIMPLE)
2017-05-11 15:10:02,762 INFO org.apache.flink.yarn.YarnApplicationMasterRunner - YARN assigned hostname for application master: splat34.sto.midasplayer.com
2017-05-11 15:10:02,770 INFO org.apache.flink.yarn.YarnApplicationMasterRunner - TaskManagers will be created with 1 task slots
2017-05-11 15:10:02,771 INFO org.apache.flink.yarn.YarnApplicationMasterRunner - TaskManagers will be started with container size 5000 MB, JVM heap size 4400 MB, JVM direct memory limit -1 MB
2017-05-11 15:10:02,779 INFO org.apache.flink.yarn.YarnApplicationMasterRunner - Trying to start actor system at splat34.sto.midasplayer.com:57191
2017-05-11 15:10:03,166 INFO akka.event.slf4j.Slf4jLogger - Slf4jLogger started
2017-05-11 15:10:03,202 INFO Remoting - Starting remoting
2017-05-11 15:10:03,394 INFO Remoting - Remoting started; listening on addresses :[akka.tcp://flink@splat34.sto.midasplayer.com:57191]
2017-05-11 15:10:03,400 INFO org.apache.flink.yarn.YarnApplicationMasterRunner - Actor system started at akka.tcp://flink@splat34.sto.midasplayer.com:57191
2017-05-11 15:10:03,400 INFO org.apache.flink.yarn.YarnApplicationMasterRunner - Actor system bound to hostname splat34.sto.midasplayer.com.
2017-05-11 15:10:03,403 INFO org.apache.flink.yarn.YarnApplicationMasterRunner - TM:remote keytab path obtained null
2017-05-11 15:10:03,404 INFO org.apache.flink.yarn.YarnApplicationMasterRunner - TM:remote keytab principal obtained null
2017-05-11 15:10:03,404 INFO org.apache.flink.yarn.YarnApplicationMasterRunner - TM:remote yarn conf path obtained null
2017-05-11 15:10:03,404 INFO org.apache.flink.yarn.YarnApplicationMasterRunner - TM:remote krb5 path obtained null
2017-05-11 15:10:03,871 INFO org.apache.flink.yarn.Utils - Copying from file:/fjord/hadoop/data/nvme/splat/yarn-local-dir/usercache/splat/appcache/application_1494426363399_0012/container_1494426363399_0012_01_000001/8ed19916-4cc6-4604-b469-ce58c896bca1-taskmanager-conf.yaml to hdfs://splat34.sto.midasplayer.com:8020/user/splat/.flink/application_1494426363399_0012/8ed19916-4cc6-4604-b469-ce58c896bca1-taskmanager-conf.yaml
2017-05-11 15:10:04,139 INFO org.apache.flink.yarn.YarnApplicationMasterRunner - Prepared local resource for modified yaml: resource { scheme: "hdfs" host: "splat34.sto.midasplayer.com" port: 8020 file: "/user/splat/.flink/application_1494426363399_0012/8ed19916-4cc6-4604-b469-ce58c896bca1-taskmanager-conf.yaml" } size: 987 timestamp: 1494508204070 type: FILE visibility: APPLICATION
2017-05-11 15:10:04,143 INFO org.apache.flink.yarn.YarnApplicationMasterRunner - Creating container launch context for TaskManagers
2017-05-11 15:10:04,145 INFO org.apache.flink.yarn.YarnApplicationMasterRunner - Starting TaskManagers with command: $JAVA_HOME/bin/java -Xms4400m -Xmx4400m -Dlog.file=<LOG_DIR>/taskmanager.log -Dlogback.configurationFile=file:./logback.xml -Dlog4j.configuration=file:./log4j.properties org.apache.flink.yarn.YarnTaskManager --configDir . 1> <LOG_DIR>/taskmanager.out 2> <LOG_DIR>/taskmanager.err
2017-05-11 15:10:04,162 WARN org.apache.flink.configuration.Configuration - Config uses deprecated configuration key 'recovery.mode' instead of proper key 'high-availability'
2017-05-11 15:10:04,166 WARN org.apache.flink.configuration.Configuration - Config uses deprecated configuration key 'recovery.zookeeper.quorum' instead of proper key 'high-availability.zookeeper.quorum'
2017-05-11 15:10:04,166 WARN org.apache.flink.configuration.Configuration - Config uses deprecated configuration key 'recovery.zookeeper.path.root' instead of proper key 'high-availability.zookeeper.path.root'
2017-05-11 15:10:04,167 INFO org.apache.flink.runtime.util.ZooKeeperUtils - Enforcing default ACL for ZK connections
2017-05-11 15:10:04,167 INFO org.apache.flink.runtime.util.ZooKeeperUtils - Using '/flink-splat/application_1494426363399_0012' as Zookeeper namespace.
2017-05-11 15:10:04,231 INFO org.apache.flink.shaded.org.apache.curator.framework.imps.CuratorFrameworkImpl - Starting
2017-05-11 15:10:04,239 INFO org.apache.zookeeper.ZooKeeper - Client environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT
2017-05-11 15:10:04,239 INFO org.apache.zookeeper.ZooKeeper - Client environment:host.name=splat34.sto.midasplayer.com
2017-05-11 15:10:04,239 INFO org.apache.zookeeper.ZooKeeper - Client environment:java.version=1.8.0_131
2017-05-11 15:10:04,239 INFO org.apache.zookeeper.ZooKeeper - Client environment:java.vendor=Oracle Corporation
2017-05-11 15:10:04,239 INFO org.apache.zookeeper.ZooKeeper - Client environment:java.home=/usr/lib/jvm/java-8-oracle/jre
2017-05-11 15:10:04,239 INFO org.apache.zookeeper.ZooKeeper - Client environment:java.class.path=lib/flink-connector-kafka-0.8_2.10-1.3-SNAPSHOT.jar:lib/flink-connector-kafka-base_2.10-1.3-SNAPSHOT.jar:lib/flink-dist_2.10-1.3-SNAPSHOT.jar:lib/flink-python_2.10-1.3-SNAPSHOT.jar:lib/flink-shaded-hadoop2-1.3-SNAPSHOT.jar:lib/kafka-clients-0.8.2.2.jar:lib/kafka_2.10-0.8.2.2.jar:lib/log4j-1.2.17.jar:lib/slf4j-log4j12-1.7.7.jar:log4j.properties:logback.xml:rbea-on-flink-2.0-SNAPSHOT.jar:flink.jar:flink-conf.yaml::/etc/hadoop/conf.cloudera.yarn2:/run/cloudera-scm-agent/process/948-yarn-NODEMANAGER:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/hadoop-common-2.6.0-cdh5.11.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/parquet-protobuf.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/parquet-generator.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/hadoop-common.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/parquet-cascading.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/hadoop-auth.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/hadoop-auth-2.6.0-cdh5.11.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/hadoop-aws-2.6.0-cdh5.11.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/parquet-thrift.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/parquet-common.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/hadoop-azure-datalake.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/hadoop-nfs.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/parquet-avro.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/hadoop-annotations-2.6.0-cdh5.11.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/hadoop-azure-datalake-2.6.0-cdh5.11.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/parquet-test-hadoop2.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/parquet-encoding.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/hadoop-nfs-2.6.0-cdh5.11.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/parquet-jackson.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/parquet-pig-bundle.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/parquet-column.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/parquet-format.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/parquet-scrooge_2.10.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/hadoop-common-tests.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/parquet-format-sources.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/hadoop-annotations.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/parquet-hadoop-bundle.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/parquet-tools.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/hadoop-common-2.6.0-cdh5.11.0-tests.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/parquet-hadoop.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/parquet-pig.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/hadoop-aws.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/parquet-scala_2.10.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/parquet-format-javadoc.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.
cdh5.11.0.p0.34/lib/hadoop/lib/commons-codec-1.4.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/jersey-core-1.9.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/slf4j-log4j12.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/servlet-api-2.5.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/commons-digester-1.8.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/curator-client-2.7.1.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/xz-1.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/jetty-util-6.1.26.cloudera.4.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/activation-1.1.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/curator-recipes-2.7.1.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/asm-3.2.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/stax-api-1.0-2.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/aws-java-sdk-dynamodb-1.10.6.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/guava-11.0.2.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/jsp-api-2.1.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/logredactor-1.0.3.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/httpcore-4.2.5.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/aws-java-sdk-kms-1.10.6.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/hamcrest-core-1.3.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/jets3t-0.9.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/aws-java-sdk-s3-1.10.6.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/avro.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/slf4j-api-1.7.5.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/java-xmlbuilder-0.4.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/commons-el-1.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/jaxb-impl-2.2.3-1.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/netty-3.10.5.Final.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/commons-math3-3.1.1.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/log4j-1.2.17.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/jasper-runtime-5.5.23.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/gson-2.2.4.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/azure-data-lake-store-sdk-2.1.4.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/commons-io-2.4.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/aws-java-sdk-core-1.10.6.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/commons-compress-1.4.1.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/commons-logging-1.1.3.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/curator-framework-2.7.1.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/jsch-0.1.42.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/commons-beanutils-core-1.8.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.
34/lib/hadoop/lib/snappy-java-1.0.4.1.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/commons-beanutils-1.9.2.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/jaxb-api-2.2.2.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/api-asn1-api-1.0.0-M20.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/commons-collections-3.2.2.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/protobuf-java-2.5.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/aws-java-sdk-sts-1.10.6.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/jetty-6.1.26.cloudera.4.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/apacheds-i18n-2.0.0-M15.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/httpclient-4.2.5.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/commons-lang-2.6.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/jersey-server-1.9.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/commons-configuration-1.6.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/jsr305-3.0.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/jasper-compiler-5.5.23.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/jackson-xc-1.8.8.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/jackson-jaxrs-1.8.8.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/jettison-1.1.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/hue-plugins-3.9.0-cdh5.11.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/zookeeper.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/junit-4.11.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/commons-net-3.1.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/htrace-core4-4.0.1-incubating.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/commons-cli-1.2.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/api-util-1.0.0-M20.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/paranamer-2.3.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/jersey-json-1.9.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/xmlenc-0.52.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/mockito-all-1.8.5.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/commons-httpclient-3.1.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/hadoop-hdfs-2.6.0-cdh5.11.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/hadoop-hdfs-nfs-2.6.0-cdh5.11.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/hadoop-hdfs-tests.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/hadoop-hdfs.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/hadoop-hdfs-nfs.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/hadoop-hdfs-2.6.0-cdh5.11.0-tests.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/lib/commons-codec-1.4.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/lib/jersey-core-1.9.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/lib/servlet-api-2.5.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/lib/xml-apis-1.3.0
4.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/lib/jetty-util-6.1.26.cloudera.4.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/lib/asm-3.2.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/lib/guava-11.0.2.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/lib/jsp-api-2.1.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/lib/xercesImpl-2.9.1.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/lib/commons-el-1.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/lib/netty-3.10.5.Final.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/lib/log4j-1.2.17.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/lib/jasper-runtime-5.5.23.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/lib/jackson-core-asl-1.8.8.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/lib/commons-io-2.4.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/lib/leveldbjni-all-1.8.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/lib/jackson-mapper-asl-1.8.8.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/lib/commons-logging-1.1.3.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/lib/commons-daemon-1.0.13.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/lib/protobuf-java-2.5.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/lib/jetty-6.1.26.cloudera.4.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/lib/commons-lang-2.6.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/lib/jersey-server-1.9.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/lib/jsr305-3.0.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/lib/htrace-core4-4.0.1-incubating.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/lib/commons-cli-1.2.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-hdfs/lib/xmlenc-0.52.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/hadoop-yarn-client.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/hadoop-yarn-server-web-proxy.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/hadoop-yarn-server-nodemanager-2.6.0-cdh5.11.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/hadoop-yarn-server-applicationhistoryservice-2.6.0-cdh5.11.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/hadoop-yarn-server-web-proxy-2.6.0-cdh5.11.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/hadoop-yarn-api-2.6.0-cdh5.11.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/hadoop-yarn-applications-unmanaged-am-launcher.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/hadoop-yarn-registry-2.6.0-cdh5.11.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/hadoop-yarn-registry.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/hadoop-yarn-applications-distributedshell.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/hadoop-yarn-server-nodemanager.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/hadoop-yarn-client-2.6.0-cdh5.11.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/hadoop-yarn-server-common.jar:/fjor
d/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/hadoop-yarn-api.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/hadoop-yarn-server-common-2.6.0-cdh5.11.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/hadoop-yarn-server-resourcemanager-2.6.0-cdh5.11.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.6.0-cdh5.11.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/hadoop-yarn-server-applicationhistoryservice.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/hadoop-yarn-server-tests.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/hadoop-yarn-server-resourcemanager.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/hadoop-yarn-server-tests-2.6.0-cdh5.11.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/hadoop-yarn-common-2.6.0-cdh5.11.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/hadoop-yarn-common.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/hadoop-yarn-applications-distributedshell-2.6.0-cdh5.11.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/commons-codec-1.4.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/jersey-core-1.9.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/javax.inject-1.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/servlet-api-2.5.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/xz-1.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/jetty-util-6.1.26.cloudera.4.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/activation-1.1.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/jersey-client-1.9.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/asm-3.2.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/stax-api-1.0-2.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/guava-11.0.2.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/jaxb-impl-2.2.3-1.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/spark-yarn-shuffle.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/log4j-1.2.17.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/jackson-core-asl-1.8.8.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/jline-2.11.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/guice-servlet-3.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/commons-io-2.4.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/leveldbjni-all-1.8.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/commons-compress-1.4.1.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/guice-3.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/jackson-mapper-asl-1.8.8.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/commons-logging-1.1.3.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/aopalliance-1.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/jaxb-api-2.2.2.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/ha
doop-yarn/lib/commons-collections-3.2.2.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/protobuf-java-2.5.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/spark-1.6.0-cdh5.11.0-yarn-shuffle.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/jetty-6.1.26.cloudera.4.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/jersey-guice-1.9.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/commons-lang-2.6.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/jersey-server-1.9.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/jsr305-3.0.0.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/jackson-xc-1.8.8.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/jackson-jaxrs-1.8.8.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/jettison-1.1.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/zookeeper.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/commons-cli-1.2.jar:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop-yarn/lib/jersey-json-1.9.jar
2017-05-11 15:10:04,240 INFO org.apache.zookeeper.ZooKeeper - Client environment:java.library.path=:/fjord/hadoop/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/lib/hadoop/lib/native:/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
2017-05-11 15:10:04,240 INFO org.apache.zookeeper.ZooKeeper - Client environment:java.io.tmpdir=/tmp
2017-05-11 15:10:04,240 INFO org.apache.zookeeper.ZooKeeper - Client environment:java.compiler=<NA>
2017-05-11 15:10:04,240 INFO org.apache.zookeeper.ZooKeeper - Client environment:os.name=Linux
2017-05-11 15:10:04,240 INFO org.apache.zookeeper.ZooKeeper - Client environment:os.arch=amd64
2017-05-11 15:10:04,240 INFO org.apache.zookeeper.ZooKeeper - Client environment:os.version=3.16.0-4-amd64
2017-05-11 15:10:04,240 INFO org.apache.zookeeper.ZooKeeper - Client environment:user.name=yarn
2017-05-11 15:10:04,240 INFO org.apache.zookeeper.ZooKeeper - Client environment:user.home=/var/lib/hadoop-yarn
2017-05-11 15:10:04,240 INFO org.apache.zookeeper.ZooKeeper - Client environment:user.dir=/fjord/hadoop/data/nvme/splat/yarn-local-dir/usercache/splat/appcache/application_1494426363399_0012/container_1494426363399_0012_01_000001
2017-05-11 15:10:04,241 INFO org.apache.zookeeper.ZooKeeper - Initiating client connection, connectString=zk04.sto.midasplayer.com:2181,zk05.sto.midasplayer.com:2181,zk06.sto.midasplayer.com:2181 sessionTimeout=60000 watcher=org.apache.flink.shaded.org.apache.curator.ConnectionState@5a4c638d
2017-05-11 15:10:04,258 WARN org.apache.zookeeper.ClientCnxn - SASL configuration failed: javax.security.auth.login.LoginException: No JAAS configuration section named 'Client' was found in specified JAAS configuration file: '/tmp/jaas-6511556418550314055.conf'. Will continue connection to Zookeeper server without SASL authentication, if Zookeeper server allows it.
2017-05-11 15:10:04,259 INFO org.apache.zookeeper.ClientCnxn - Opening socket connection to server zk04.sto.midasplayer.com/172.26.82.242:2181
2017-05-11 15:10:04,259 ERROR org.apache.flink.shaded.org.apache.curator.ConnectionState - Authentication failed
2017-05-11 15:10:04,261 INFO org.apache.zookeeper.ClientCnxn - Socket connection established to zk04.sto.midasplayer.com/172.26.82.242:2181, initiating session
2017-05-11 15:10:04,268 INFO org.apache.zookeeper.ClientCnxn - Session establishment complete on server zk04.sto.midasplayer.com/172.26.82.242:2181, sessionid = 0x15aad587329de34, negotiated timeout = 40000
2017-05-11 15:10:04,269 INFO org.apache.flink.shaded.org.apache.curator.framework.state.ConnectionStateManager - State change: CONNECTED
2017-05-11 15:10:04,274 WARN org.apache.flink.configuration.Configuration - Config uses deprecated configuration key 'recovery.mode' instead of proper key 'high-availability'
2017-05-11 15:10:04,274 WARN org.apache.flink.configuration.Configuration - Config uses deprecated configuration key 'recovery.zookeeper.storageDir' instead of proper key 'high-availability.storageDir'
2017-05-11 15:10:04,304 INFO org.apache.flink.runtime.blob.FileSystemBlobStore - Creating highly available BLOB storage directory at hdfs:///flink/ha/application_1494426363399_0012/blob
2017-05-11 15:10:04,513 INFO org.apache.flink.runtime.blob.BlobServer - Created BLOB server storage directory /tmp/blobStore-29ae1537-da67-4c7f-989a-f66d3769b177
2017-05-11 15:10:04,515 INFO org.apache.flink.runtime.blob.BlobServer - Started BLOB server at 0.0.0.0:41667 - max concurrent requests: 50 - max backlog: 1000
2017-05-11 15:10:04,533 INFO org.apache.flink.runtime.metrics.MetricRegistry - No metrics reporter configured, no metrics will be exposed/reported.
2017-05-11 15:10:04,541 INFO org.apache.flink.runtime.jobmanager.MemoryArchivist - Started memory archivist akka://flink/user/$a
2017-05-11 15:10:04,545 WARN org.apache.flink.configuration.Configuration - Config uses deprecated configuration key 'recovery.zookeeper.storageDir' instead of proper key 'high-availability.storageDir'
2017-05-11 15:10:04,569 WARN org.apache.flink.shaded.org.apache.curator.utils.ZKPaths - The version of ZooKeeper being used doesn't support Container nodes. CreateMode.PERSISTENT will be used instead.
2017-05-11 15:10:04,590 WARN org.apache.flink.configuration.Configuration - Config uses deprecated configuration key 'recovery.mode' instead of proper key 'high-availability'
2017-05-11 15:10:04,591 INFO org.apache.flink.yarn.YarnApplicationMasterRunner - Starting JobManager Web Frontend
2017-05-11 15:10:04,593 INFO org.apache.flink.yarn.YarnJobManager - Starting JobManager at akka.tcp://flink@splat34.sto.midasplayer.com:57191/user/jobmanager.
2017-05-11 15:10:04,593 INFO org.apache.flink.runtime.leaderelection.ZooKeeperLeaderElectionService - Starting ZooKeeperLeaderElectionService org.apache.flink.runtime.leaderelection.ZooKeeperLeaderElectionService@528f21e4.
2017-05-11 15:10:04,599 INFO org.apache.flink.runtime.webmonitor.WebMonitorUtils - Determined location of JobManager log file: /fjord/hadoop/data/2/yarn/container-logs/application_1494426363399_0012/container_1494426363399_0012_01_000001/jobmanager.log
2017-05-11 15:10:04,599 INFO org.apache.flink.runtime.webmonitor.WebMonitorUtils - Determined location of JobManager stdout file: /fjord/hadoop/data/2/yarn/container-logs/application_1494426363399_0012/container_1494426363399_0012_01_000001/jobmanager.out
2017-05-11 15:10:04,600 INFO org.apache.flink.runtime.webmonitor.WebRuntimeMonitor - Using directory /tmp/flink-web-f5bcc5f4-4ebd-45bc-8ff4-40a70c64e5a8 for the web interface files
2017-05-11 15:10:04,600 INFO org.apache.flink.runtime.webmonitor.WebRuntimeMonitor - Using directory /tmp/flink-web-364bc75e-1cca-4075-82ff-117b48c555aa for web frontend JAR file uploads
2017-05-11 15:10:04,700 INFO org.apache.flink.yarn.YarnJobManager - JobManager akka.tcp://flink@splat34.sto.midasplayer.com:57191/user/jobmanager was granted leadership with leader session ID Some(88aae651-465e-4ba1-8f47-12ab4e5b5ee2).
2017-05-11 15:10:04,712 INFO org.apache.flink.yarn.YarnJobManager - Delaying recovery of all jobs by 120000 milliseconds.
2017-05-11 15:10:04,889 INFO org.apache.flink.runtime.webmonitor.WebRuntimeMonitor - Web frontend listening at 0.0.0.0:35714
2017-05-11 15:10:04,890 INFO org.apache.flink.runtime.webmonitor.WebRuntimeMonitor - Starting with JobManager akka.tcp://flink@splat34.sto.midasplayer.com:57191/user/jobmanager on port 35714
2017-05-11 15:10:04,890 INFO org.apache.flink.runtime.leaderretrieval.ZooKeeperLeaderRetrievalService - Starting ZooKeeperLeaderRetrievalService.
2017-05-11 15:10:04,893 INFO org.apache.flink.runtime.webmonitor.JobManagerRetriever - New leader reachable under akka.tcp://flink@splat34.sto.midasplayer.com:57191/user/jobmanager:88aae651-465e-4ba1-8f47-12ab4e5b5ee2.
2017-05-11 15:10:04,895 INFO org.apache.flink.yarn.YarnApplicationMasterRunner - YARN application tolerates 100 failed TaskManager containers before giving up
2017-05-11 15:10:04,898 INFO org.apache.flink.yarn.YarnApplicationMasterRunner - YARN Application Master started
2017-05-11 15:10:04,912 INFO org.apache.flink.runtime.leaderretrieval.ZooKeeperLeaderRetrievalService - Starting ZooKeeperLeaderRetrievalService.
2017-05-11 15:10:04,912 INFO org.apache.flink.yarn.YarnFlinkResourceManager - Initializing YARN resource master
2017-05-11 15:10:04,924 INFO org.apache.hadoop.yarn.client.RMProxy - Connecting to ResourceManager at splat34.sto.midasplayer.com/172.26.83.103:8030
2017-05-11 15:10:04,949 INFO org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy - yarn.client.max-cached-nodemanagers-proxies : 0
2017-05-11 15:10:04,949 INFO org.apache.flink.yarn.YarnFlinkResourceManager - Registering Application Master with tracking url http://splat34.sto.midasplayer.com:35714
2017-05-11 15:10:05,041 INFO org.apache.flink.yarn.YarnFlinkResourceManager - Trying to associate with JobManager leader akka.tcp://flink@splat34.sto.midasplayer.com:57191/user/jobmanager
2017-05-11 15:10:05,048 INFO org.apache.flink.yarn.YarnFlinkResourceManager - Resource Manager associating with leading JobManager Actor[akka://flink/user/jobmanager#1827943682] - leader session 88aae651-465e-4ba1-8f47-12ab4e5b5ee2
2017-05-11 15:10:05,048 INFO org.apache.flink.yarn.YarnFlinkResourceManager - Requesting new TaskManager container with 5000 megabytes memory. Pending requests: 1
2017-05-11 15:10:06,092 INFO org.apache.hadoop.yarn.client.api.impl.AMRMClientImpl - Received new token for : splat34.sto.midasplayer.com:8041
2017-05-11 15:10:06,107 INFO org.apache.flink.yarn.YarnFlinkResourceManager - Received new container: container_1494426363399_0012_01_000002 - Remaining pending container requests: 0
2017-05-11 15:10:06,108 INFO org.apache.flink.yarn.YarnFlinkResourceManager - Launching TaskManager in container ContainerInLaunch @ 1494508206107: Container: [ContainerId: container_1494426363399_0012_01_000002, NodeId: splat34.sto.midasplayer.com:8041, NodeHttpAddress: splat34.sto.midasplayer.com:8042, Resource: <memory:5120, vCores:4>, Priority: 0, Token: Token { kind: ContainerToken, service: 172.26.83.103:8041 }, ] on host splat34.sto.midasplayer.com
2017-05-11 15:10:06,109 INFO org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy - Opening proxy : splat34.sto.midasplayer.com:8041
2017-05-11 15:10:07,458 INFO org.apache.flink.yarn.YarnJobManager - ApplicationMaster will shut down session when job 065c0937d56f3e8da025e015d3ab332b has finished.
2017-05-11 15:10:08,723 INFO org.apache.flink.yarn.YarnFlinkResourceManager - TaskManager container_1494426363399_0012_01_000002 has started.
2017-05-11 15:10:08,724 INFO org.apache.flink.runtime.instance.InstanceManager - Registered TaskManager at splat34 (akka.tcp://flink@splat34.sto.midasplayer.com:51927/user/taskmanager) as 8354f1c544e040bb7cfe8729d5d897f3. Current number of registered hosts is 1. Current number of alive task slots is 1.
2017-05-11 15:10:08,951 INFO org.apache.flink.yarn.YarnJobManager - Submitting job 065c0937d56f3e8da025e015d3ab332b (event.bifrost.log).
2017-05-11 15:10:08,966 INFO org.apache.flink.yarn.YarnJobManager - Using restart strategy FixedDelayRestartStrategy(maxNumberRestartAttempts=2147483647, delayBetweenRestartAttempts=10000) for 065c0937d56f3e8da025e015d3ab332b.
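(Editor's note, not part of the log: the FixedDelayRestartStrategy logged above — effectively unlimited attempts with a 10 s delay — is what a job gets when it configures a fixed-delay restart strategy itself. A minimal sketch, assuming it is set on the StreamExecutionEnvironment rather than in flink-conf.yaml; the class name is a placeholder.)

import org.apache.flink.api.common.restartstrategy.RestartStrategies;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class RestartStrategySketch {
    public static void main(String[] args) {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // Equivalent to FixedDelayRestartStrategy(maxNumberRestartAttempts=2147483647,
        // delayBetweenRestartAttempts=10000) from the log line above.
        env.setRestartStrategy(RestartStrategies.fixedDelayRestart(Integer.MAX_VALUE, 10000L));
    }
}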
2017-05-11 15:10:08,981 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Job recovers via failover strategy: full graph restart
2017-05-11 15:10:08,996 INFO org.apache.flink.yarn.YarnJobManager - Running initialization on master for job event.bifrost.log (065c0937d56f3e8da025e015d3ab332b).
2017-05-11 15:10:08,996 INFO org.apache.flink.yarn.YarnJobManager - Successfully ran initialization on master in 0 ms.
2017-05-11 15:10:09,013 WARN org.apache.flink.configuration.Configuration - Config uses deprecated configuration key 'recovery.zookeeper.storageDir' instead of proper key 'high-availability.storageDir'
2017-05-11 15:10:09,021 INFO org.apache.flink.runtime.checkpoint.ZooKeeperCompletedCheckpointStore - Initialized in '/checkpoints/065c0937d56f3e8da025e015d3ab332b'.
2017-05-11 15:10:09,029 INFO org.apache.flink.yarn.YarnJobManager - Using application-defined state backend for checkpoint/savepoint metadata: RocksDB State Backend {isInitialized=false, configuredDbBasePaths=null, initializedDbBasePaths=null, checkpointStreamBackend=File State Backend @ hdfs://splat34.sto.midasplayer.com:8020/flink/checkpoints/event.bifrost.log}.
2017-05-11 15:10:09,031 INFO org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Persisting periodic checkpoints externally at hdfs:///flink/external-checkpoints/bifrost.
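(Editor's note, not part of the log: the two lines above show an application-defined RocksDB state backend writing checkpoint streams to hdfs://splat34.sto.midasplayer.com:8020/flink/checkpoints/event.bifrost.log, with checkpoint metadata persisted externally under hdfs:///flink/external-checkpoints/bifrost. A minimal sketch of the corresponding job-side setup in the Flink 1.3 API; the checkpoint interval is an assumption inferred from the gap between checkpoints 1 and 2 later in the log, and the cleanup mode is likewise an assumption.)

import org.apache.flink.contrib.streaming.state.RocksDBStateBackend;
import org.apache.flink.streaming.api.environment.CheckpointConfig.ExternalizedCheckpointCleanup;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class CheckpointSetupSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Assumed periodic checkpoint interval (~5 minutes); the actual value is not visible in this log.
        env.enableCheckpointing(300_000L);

        // RocksDB keyed state, with checkpoint streams written to HDFS (path taken from the log line above).
        env.setStateBackend(new RocksDBStateBackend(
                "hdfs://splat34.sto.midasplayer.com:8020/flink/checkpoints/event.bifrost.log"));

        // Retain externalized checkpoint metadata; the target directory
        // (hdfs:///flink/external-checkpoints/bifrost) comes from state.checkpoints.dir in flink-conf.yaml.
        env.getCheckpointConfig().enableExternalizedCheckpoints(
                ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION);
    }
}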
2017-05-11 15:10:09,181 INFO org.apache.flink.runtime.jobmanager.ZooKeeperSubmittedJobGraphStore - Added SubmittedJobGraph(065c0937d56f3e8da025e015d3ab332b, JobInfo(clients: Set((Actor[akka.tcp://flink@deploy.sto.midasplayer.com:55362/temp/$p],DETACHED)), start: 1494508208940)) to ZooKeeper.
2017-05-11 15:10:09,183 INFO org.apache.flink.yarn.YarnJobManager - Scheduling job 065c0937d56f3e8da025e015d3ab332b (event.bifrost.log).
2017-05-11 15:10:09,183 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Job event.bifrost.log (065c0937d56f3e8da025e015d3ab332b) switched from state CREATED to RUNNING.
2017-05-11 15:10:09,186 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - IterationSource-15 (1/1) (54d869533bda639da0674dafc218bc0f) switched from CREATED to SCHEDULED.
2017-05-11 15:10:09,192 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Source: Kafka[event.bifrost.log] -> Timestamp assigner -> Wrap events (1/1) (adbb1d5170737eaef3931cbb797be277) switched from CREATED to SCHEDULED.
2017-05-11 15:10:09,192 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Source: Kafka[rbea.state.event.bifrost.log] -> Timestamp assigner -> Filter state updates -> Wrap events (1/1) (fedc06fd7a057558f7aea138790e76f1) switched from CREATED to SCHEDULED.
2017-05-11 15:10:09,192 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Source: Query Job Info (1/1) (3d344ccb779c222ae5e71a6cd702ab08) switched from CREATED to SCHEDULED.
2017-05-11 15:10:09,192 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Source: Kafka[rbeaDeploymentsplattest1] -> Max watermark -> Drop errors -> Filter for topic (1/1) (bebc6a77643e09ce2042d6a52f291e3e) switched from CREATED to SCHEDULED.
2017-05-11 15:10:09,192 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Create Fields and Ids -> Filter Errors and Notifications (1/1) (96ba02017d86acb4d6e136631f4c896b) switched from CREATED to SCHEDULED.
2017-05-11 15:10:09,193 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Execute processors -> (Filter processor info -> Filter Failures, Filter BEA) (1/1) (51ec48a3e91ba2d68ab80b614b3d7a56) switched from CREATED to SCHEDULED.
2017-05-11 15:10:09,193 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Window aggregator (1/1) (a9b27d006832311c1c5e26cb212ff668) switched from CREATED to SCHEDULED.
2017-05-11 15:10:09,193 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Keep last -> NoOp -> Create Aggrigato events (1/1) (f3f8fe4ef4e9815cd6d834f4a38f2855) switched from CREATED to SCHEDULED.
2017-05-11 15:10:09,193 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - MySql output info -> Filter (1/1) (6fd449d469f21e774663efdecc92955e) switched from CREATED to SCHEDULED.
2017-05-11 15:10:09,193 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - To DeploymentInfo (1/1) (9d0985ae5088198c20179e94947e1bbd) switched from CREATED to SCHEDULED.
2017-05-11 15:10:09,193 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - IterationSink-15 (1/1) (289679af7c71b17ca1509d7d7762adc5) switched from CREATED to SCHEDULED.
2017-05-11 15:10:09,193 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Create Job information (1/1) (6a0c6e47d414c6e3159b89de29d2aa6f) switched from CREATED to SCHEDULED.
2017-05-11 15:10:09,197 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - IterationSource-15 (1/1) (54d869533bda639da0674dafc218bc0f) switched from SCHEDULED to DEPLOYING.
2017-05-11 15:10:09,197 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Deploying IterationSource-15 (1/1) (attempt #0) to splat34
2017-05-11 15:10:09,202 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Source: Kafka[event.bifrost.log] -> Timestamp assigner -> Wrap events (1/1) (adbb1d5170737eaef3931cbb797be277) switched from SCHEDULED to DEPLOYING.
2017-05-11 15:10:09,203 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Deploying Source: Kafka[event.bifrost.log] -> Timestamp assigner -> Wrap events (1/1) (attempt #0) to splat34
2017-05-11 15:10:09,203 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Source: Kafka[rbea.state.event.bifrost.log] -> Timestamp assigner -> Filter state updates -> Wrap events (1/1) (fedc06fd7a057558f7aea138790e76f1) switched from SCHEDULED to DEPLOYING.
2017-05-11 15:10:09,204 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Deploying Source: Kafka[rbea.state.event.bifrost.log] -> Timestamp assigner -> Filter state updates -> Wrap events (1/1) (attempt #0) to splat34
2017-05-11 15:10:09,205 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Source: Query Job Info (1/1) (3d344ccb779c222ae5e71a6cd702ab08) switched from SCHEDULED to DEPLOYING.
2017-05-11 15:10:09,205 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Deploying Source: Query Job Info (1/1) (attempt #0) to splat34
2017-05-11 15:10:09,206 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Source: Kafka[rbeaDeploymentsplattest1] -> Max watermark -> Drop errors -> Filter for topic (1/1) (bebc6a77643e09ce2042d6a52f291e3e) switched from SCHEDULED to DEPLOYING.
2017-05-11 15:10:09,207 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Deploying Source: Kafka[rbeaDeploymentsplattest1] -> Max watermark -> Drop errors -> Filter for topic (1/1) (attempt #0) to splat34
2017-05-11 15:10:09,208 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Create Fields and Ids -> Filter Errors and Notifications (1/1) (96ba02017d86acb4d6e136631f4c896b) switched from SCHEDULED to DEPLOYING.
2017-05-11 15:10:09,208 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Deploying Create Fields and Ids -> Filter Errors and Notifications (1/1) (attempt #0) to splat34
2017-05-11 15:10:09,212 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Execute processors -> (Filter processor info -> Filter Failures, Filter BEA) (1/1) (51ec48a3e91ba2d68ab80b614b3d7a56) switched from SCHEDULED to DEPLOYING.
2017-05-11 15:10:09,212 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Deploying Execute processors -> (Filter processor info -> Filter Failures, Filter BEA) (1/1) (attempt #0) to splat34
2017-05-11 15:10:09,213 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Window aggregator (1/1) (a9b27d006832311c1c5e26cb212ff668) switched from SCHEDULED to DEPLOYING.
2017-05-11 15:10:09,213 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Deploying Window aggregator (1/1) (attempt #0) to splat34
2017-05-11 15:10:09,214 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Keep last -> NoOp -> Create Aggrigato events (1/1) (f3f8fe4ef4e9815cd6d834f4a38f2855) switched from SCHEDULED to DEPLOYING.
2017-05-11 15:10:09,215 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Deploying Keep last -> NoOp -> Create Aggrigato events (1/1) (attempt #0) to splat34
2017-05-11 15:10:09,215 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - MySql output info -> Filter (1/1) (6fd449d469f21e774663efdecc92955e) switched from SCHEDULED to DEPLOYING.
2017-05-11 15:10:09,216 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Deploying MySql output info -> Filter (1/1) (attempt #0) to splat34
2017-05-11 15:10:09,216 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - To DeploymentInfo (1/1) (9d0985ae5088198c20179e94947e1bbd) switched from SCHEDULED to DEPLOYING.
2017-05-11 15:10:09,217 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Deploying To DeploymentInfo (1/1) (attempt #0) to splat34
2017-05-11 15:10:09,218 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - IterationSink-15 (1/1) (289679af7c71b17ca1509d7d7762adc5) switched from SCHEDULED to DEPLOYING.
2017-05-11 15:10:09,218 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Deploying IterationSink-15 (1/1) (attempt #0) to splat34
2017-05-11 15:10:09,219 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Create Job information (1/1) (6a0c6e47d414c6e3159b89de29d2aa6f) switched from SCHEDULED to DEPLOYING.
2017-05-11 15:10:09,219 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Deploying Create Job information (1/1) (attempt #0) to splat34
2017-05-11 15:10:09,293 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - IterationSink-15 (1/1) (289679af7c71b17ca1509d7d7762adc5) switched from DEPLOYING to RUNNING.
2017-05-11 15:10:09,293 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Execute processors -> (Filter processor info -> Filter Failures, Filter BEA) (1/1) (51ec48a3e91ba2d68ab80b614b3d7a56) switched from DEPLOYING to RUNNING.
2017-05-11 15:10:09,294 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Keep last -> NoOp -> Create Aggrigato events (1/1) (f3f8fe4ef4e9815cd6d834f4a38f2855) switched from DEPLOYING to RUNNING.
2017-05-11 15:10:09,295 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Source: Kafka[rbeaDeploymentsplattest1] -> Max watermark -> Drop errors -> Filter for topic (1/1) (bebc6a77643e09ce2042d6a52f291e3e) switched from DEPLOYING to RUNNING.
2017-05-11 15:10:09,295 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - To DeploymentInfo (1/1) (9d0985ae5088198c20179e94947e1bbd) switched from DEPLOYING to RUNNING.
2017-05-11 15:10:09,296 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - MySql output info -> Filter (1/1) (6fd449d469f21e774663efdecc92955e) switched from DEPLOYING to RUNNING.
2017-05-11 15:10:09,297 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Window aggregator (1/1) (a9b27d006832311c1c5e26cb212ff668) switched from DEPLOYING to RUNNING.
2017-05-11 15:10:09,297 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Create Fields and Ids -> Filter Errors and Notifications (1/1) (96ba02017d86acb4d6e136631f4c896b) switched from DEPLOYING to RUNNING.
2017-05-11 15:10:09,298 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Source: Kafka[event.bifrost.log] -> Timestamp assigner -> Wrap events (1/1) (adbb1d5170737eaef3931cbb797be277) switched from DEPLOYING to RUNNING.
2017-05-11 15:10:09,299 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - IterationSource-15 (1/1) (54d869533bda639da0674dafc218bc0f) switched from DEPLOYING to RUNNING.
2017-05-11 15:10:09,300 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Source: Query Job Info (1/1) (3d344ccb779c222ae5e71a6cd702ab08) switched from DEPLOYING to RUNNING.
2017-05-11 15:10:09,301 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Source: Kafka[rbea.state.event.bifrost.log] -> Timestamp assigner -> Filter state updates -> Wrap events (1/1) (fedc06fd7a057558f7aea138790e76f1) switched from DEPLOYING to RUNNING.
2017-05-11 15:10:09,302 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Create Job information (1/1) (6a0c6e47d414c6e3159b89de29d2aa6f) switched from DEPLOYING to RUNNING.
2017-05-11 15:10:45,688 INFO org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Triggering checkpoint 1 @ 1494508245623
2017-05-11 15:10:47,626 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Source: Kafka[rbeaDeploymentsplattest1] -> Max watermark -> Drop errors -> Filter for topic (1/1) (bebc6a77643e09ce2042d6a52f291e3e) switched from RUNNING to FAILED.
AsynchronousException{java.lang.Exception: Could not materialize checkpoint 1 for operator Source: Kafka[rbeaDeploymentsplattest1] -> Max watermark -> Drop errors -> Filter for topic (1/1).}
at org.apache.flink.streaming.runtime.tasks.StreamTask$AsyncCheckpointRunnable.run(StreamTask.java:966)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.Exception: Could not materialize checkpoint 1 for operator Source: Kafka[rbeaDeploymentsplattest1] -> Max watermark -> Drop errors -> Filter for topic (1/1).
... 6 more
Caused by: java.util.concurrent.ExecutionException: java.io.IOException: Could not open output stream for state backend
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:192)
at org.apache.flink.util.FutureUtil.runIfNotDoneAndGet(FutureUtil.java:43)
at org.apache.flink.streaming.runtime.tasks.StreamTask$AsyncCheckpointRunnable.run(StreamTask.java:902)
... 5 more
Suppressed: java.lang.Exception: Could not properly cancel managed operator state future.
at org.apache.flink.streaming.api.operators.OperatorSnapshotResult.cancel(OperatorSnapshotResult.java:98)
at org.apache.flink.streaming.runtime.tasks.StreamTask$AsyncCheckpointRunnable.cleanup(StreamTask.java:1018)
at org.apache.flink.streaming.runtime.tasks.StreamTask$AsyncCheckpointRunnable.run(StreamTask.java:957)
... 5 more
Caused by: java.util.concurrent.ExecutionException: java.io.IOException: Could not open output stream for state backend
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:192)
at org.apache.flink.util.FutureUtil.runIfNotDoneAndGet(FutureUtil.java:43)
at org.apache.flink.runtime.state.StateUtil.discardStateFuture(StateUtil.java:85)
at org.apache.flink.streaming.api.operators.OperatorSnapshotResult.cancel(OperatorSnapshotResult.java:96)
... 7 more
Caused by: java.io.IOException: Could not open output stream for state backend
at org.apache.flink.runtime.state.filesystem.FsCheckpointStreamFactory$FsCheckpointStateOutputStream.createStream(FsCheckpointStreamFactory.java:371)
at org.apache.flink.runtime.state.filesystem.FsCheckpointStreamFactory$FsCheckpointStateOutputStream.flush(FsCheckpointStreamFactory.java:228)
at org.apache.flink.runtime.state.filesystem.FsCheckpointStreamFactory$FsCheckpointStateOutputStream.write(FsCheckpointStreamFactory.java:203)
at java.io.DataOutputStream.write(DataOutputStream.java:107)
at org.apache.flink.api.java.typeutils.runtime.DataOutputViewStream.write(DataOutputViewStream.java:41)
at java.io.ObjectOutputStream$BlockDataOutputStream.drain(ObjectOutputStream.java:1877)
at java.io.ObjectOutputStream$BlockDataOutputStream.setBlockDataMode(ObjectOutputStream.java:1786)
at java.io.ObjectOutputStream.writeNonProxyDesc(ObjectOutputStream.java:1286)
at java.io.ObjectOutputStream.writeClassDesc(ObjectOutputStream.java:1231)
at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1427)
at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1178)
at java.io.ObjectOutputStream.writeFatalException(ObjectOutputStream.java:1577)
at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:351)
at org.apache.flink.util.InstantiationUtil.serializeObject(InstantiationUtil.java:323)
at org.apache.flink.runtime.state.JavaSerializer.serialize(JavaSerializer.java:69)
at org.apache.flink.runtime.state.JavaSerializer.serialize(JavaSerializer.java:33)
at org.apache.flink.runtime.state.DefaultOperatorStateBackend$PartitionableListState.write(DefaultOperatorStateBackend.java:415)
at org.apache.flink.runtime.state.DefaultOperatorStateBackend$1.performOperation(DefaultOperatorStateBackend.java:232)
at org.apache.flink.runtime.state.DefaultOperatorStateBackend$1.performOperation(DefaultOperatorStateBackend.java:202)
at org.apache.flink.runtime.io.async.AbstractAsyncIOCallable.call(AbstractAsyncIOCallable.java:72)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at org.apache.flink.util.FutureUtil.runIfNotDoneAndGet(FutureUtil.java:40)
at org.apache.flink.streaming.runtime.tasks.StreamTask$AsyncCheckpointRunnable.run(StreamTask.java:902)
... 5 more
Caused by: java.io.IOException: Cannot register Closeable, registry is already closed. Closing argument.
at org.apache.flink.util.AbstractCloseableRegistry.registerClosable(AbstractCloseableRegistry.java:66)
at org.apache.flink.core.fs.ClosingFSDataOutputStream.wrapSafe(ClosingFSDataOutputStream.java:101)
at org.apache.flink.core.fs.SafetyNetWrapperFileSystem.create(SafetyNetWrapperFileSystem.java:125)
at org.apache.flink.core.fs.FileSystem.create(FileSystem.java:621)
at org.apache.flink.runtime.state.filesystem.FsCheckpointStreamFactory$FsCheckpointStateOutputStream.createStream(FsCheckpointStreamFactory.java:362)
... 27 more
[CIRCULAR REFERENCE:java.io.IOException: Could not open output stream for state backend]
2017-05-11 15:10:47,629 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Source: Kafka[event.bifrost.log] -> Timestamp assigner -> Wrap events (1/1) (adbb1d5170737eaef3931cbb797be277) switched from RUNNING to FAILED.
AsynchronousException{java.lang.Exception: Could not materialize checkpoint 1 for operator Source: Kafka[event.bifrost.log] -> Timestamp assigner -> Wrap events (1/1).}
at org.apache.flink.streaming.runtime.tasks.StreamTask$AsyncCheckpointRunnable.run(StreamTask.java:966)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.Exception: Could not materialize checkpoint 1 for operator Source: Kafka[event.bifrost.log] -> Timestamp assigner -> Wrap events (1/1).
... 6 more
Caused by: java.util.concurrent.ExecutionException: java.io.IOException: Could not open output stream for state backend
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:192)
at org.apache.flink.util.FutureUtil.runIfNotDoneAndGet(FutureUtil.java:43)
at org.apache.flink.streaming.runtime.tasks.StreamTask$AsyncCheckpointRunnable.run(StreamTask.java:902)
... 5 more
Suppressed: java.lang.Exception: Could not properly cancel managed operator state future.
at org.apache.flink.streaming.api.operators.OperatorSnapshotResult.cancel(OperatorSnapshotResult.java:98)
at org.apache.flink.streaming.runtime.tasks.StreamTask$AsyncCheckpointRunnable.cleanup(StreamTask.java:1018)
at org.apache.flink.streaming.runtime.tasks.StreamTask$AsyncCheckpointRunnable.run(StreamTask.java:957)
... 5 more
Caused by: java.util.concurrent.ExecutionException: java.io.IOException: Could not open output stream for state backend
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:192)
at org.apache.flink.util.FutureUtil.runIfNotDoneAndGet(FutureUtil.java:43)
at org.apache.flink.runtime.state.StateUtil.discardStateFuture(StateUtil.java:85)
at org.apache.flink.streaming.api.operators.OperatorSnapshotResult.cancel(OperatorSnapshotResult.java:96)
... 7 more
Caused by: java.io.IOException: Could not open output stream for state backend
at org.apache.flink.runtime.state.filesystem.FsCheckpointStreamFactory$FsCheckpointStateOutputStream.createStream(FsCheckpointStreamFactory.java:371)
at org.apache.flink.runtime.state.filesystem.FsCheckpointStreamFactory$FsCheckpointStateOutputStream.flush(FsCheckpointStreamFactory.java:228)
at org.apache.flink.runtime.state.filesystem.FsCheckpointStreamFactory$FsCheckpointStateOutputStream.write(FsCheckpointStreamFactory.java:203)
at java.io.DataOutputStream.write(DataOutputStream.java:107)
at org.apache.flink.api.java.typeutils.runtime.DataOutputViewStream.write(DataOutputViewStream.java:41)
at java.io.ObjectOutputStream$BlockDataOutputStream.drain(ObjectOutputStream.java:1877)
at java.io.ObjectOutputStream$BlockDataOutputStream.setBlockDataMode(ObjectOutputStream.java:1786)
at java.io.ObjectOutputStream.writeNonProxyDesc(ObjectOutputStream.java:1286)
at java.io.ObjectOutputStream.writeClassDesc(ObjectOutputStream.java:1231)
at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1427)
at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1178)
at java.io.ObjectOutputStream.writeFatalException(ObjectOutputStream.java:1577)
at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:351)
at org.apache.flink.util.InstantiationUtil.serializeObject(InstantiationUtil.java:323)
at org.apache.flink.runtime.state.JavaSerializer.serialize(JavaSerializer.java:69)
at org.apache.flink.runtime.state.JavaSerializer.serialize(JavaSerializer.java:33)
at org.apache.flink.runtime.state.DefaultOperatorStateBackend$PartitionableListState.write(DefaultOperatorStateBackend.java:415)
at org.apache.flink.runtime.state.DefaultOperatorStateBackend$1.performOperation(DefaultOperatorStateBackend.java:232)
at org.apache.flink.runtime.state.DefaultOperatorStateBackend$1.performOperation(DefaultOperatorStateBackend.java:202)
at org.apache.flink.runtime.io.async.AbstractAsyncIOCallable.call(AbstractAsyncIOCallable.java:72)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at org.apache.flink.util.FutureUtil.runIfNotDoneAndGet(FutureUtil.java:40)
at org.apache.flink.streaming.runtime.tasks.StreamTask$AsyncCheckpointRunnable.run(StreamTask.java:902)
... 5 more
Caused by: java.io.IOException: Cannot register Closeable, registry is already closed. Closing argument.
at org.apache.flink.util.AbstractCloseableRegistry.registerClosable(AbstractCloseableRegistry.java:66)
at org.apache.flink.core.fs.ClosingFSDataOutputStream.wrapSafe(ClosingFSDataOutputStream.java:101)
at org.apache.flink.core.fs.SafetyNetWrapperFileSystem.create(SafetyNetWrapperFileSystem.java:125)
at org.apache.flink.core.fs.FileSystem.create(FileSystem.java:621)
at org.apache.flink.runtime.state.filesystem.FsCheckpointStreamFactory$FsCheckpointStateOutputStream.createStream(FsCheckpointStreamFactory.java:362)
... 27 more
[CIRCULAR REFERENCE:java.io.IOException: Could not open output stream for state backend]
2017-05-11 15:10:47,629 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Job event.bifrost.log (065c0937d56f3e8da025e015d3ab332b) switched from state RUNNING to FAILING.
AsynchronousException{java.lang.Exception: Could not materialize checkpoint 1 for operator Source: Kafka[rbeaDeploymentsplattest1] -> Max watermark -> Drop errors -> Filter for topic (1/1).}
at org.apache.flink.streaming.runtime.tasks.StreamTask$AsyncCheckpointRunnable.run(StreamTask.java:966)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.Exception: Could not materialize checkpoint 1 for operator Source: Kafka[rbeaDeploymentsplattest1] -> Max watermark -> Drop errors -> Filter for topic (1/1).
... 6 more
Caused by: java.util.concurrent.ExecutionException: java.io.IOException: Could not open output stream for state backend
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:192)
at org.apache.flink.util.FutureUtil.runIfNotDoneAndGet(FutureUtil.java:43)
at org.apache.flink.streaming.runtime.tasks.StreamTask$AsyncCheckpointRunnable.run(StreamTask.java:902)
... 5 more
Suppressed: java.lang.Exception: Could not properly cancel managed operator state future.
at org.apache.flink.streaming.api.operators.OperatorSnapshotResult.cancel(OperatorSnapshotResult.java:98)
at org.apache.flink.streaming.runtime.tasks.StreamTask$AsyncCheckpointRunnable.cleanup(StreamTask.java:1018)
at org.apache.flink.streaming.runtime.tasks.StreamTask$AsyncCheckpointRunnable.run(StreamTask.java:957)
... 5 more
Caused by: java.util.concurrent.ExecutionException: java.io.IOException: Could not open output stream for state backend
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:192)
at org.apache.flink.util.FutureUtil.runIfNotDoneAndGet(FutureUtil.java:43)
at org.apache.flink.runtime.state.StateUtil.discardStateFuture(StateUtil.java:85)
at org.apache.flink.streaming.api.operators.OperatorSnapshotResult.cancel(OperatorSnapshotResult.java:96)
... 7 more
Caused by: java.io.IOException: Could not open output stream for state backend
at org.apache.flink.runtime.state.filesystem.FsCheckpointStreamFactory$FsCheckpointStateOutputStream.createStream(FsCheckpointStreamFactory.java:371)
at org.apache.flink.runtime.state.filesystem.FsCheckpointStreamFactory$FsCheckpointStateOutputStream.flush(FsCheckpointStreamFactory.java:228)
at org.apache.flink.runtime.state.filesystem.FsCheckpointStreamFactory$FsCheckpointStateOutputStream.write(FsCheckpointStreamFactory.java:203)
at java.io.DataOutputStream.write(DataOutputStream.java:107)
at org.apache.flink.api.java.typeutils.runtime.DataOutputViewStream.write(DataOutputViewStream.java:41)
at java.io.ObjectOutputStream$BlockDataOutputStream.drain(ObjectOutputStream.java:1877)
at java.io.ObjectOutputStream$BlockDataOutputStream.setBlockDataMode(ObjectOutputStream.java:1786)
at java.io.ObjectOutputStream.writeNonProxyDesc(ObjectOutputStream.java:1286)
at java.io.ObjectOutputStream.writeClassDesc(ObjectOutputStream.java:1231)
at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1427)
at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1178)
at java.io.ObjectOutputStream.writeFatalException(ObjectOutputStream.java:1577)
at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:351)
at org.apache.flink.util.InstantiationUtil.serializeObject(InstantiationUtil.java:323)
at org.apache.flink.runtime.state.JavaSerializer.serialize(JavaSerializer.java:69)
at org.apache.flink.runtime.state.JavaSerializer.serialize(JavaSerializer.java:33)
at org.apache.flink.runtime.state.DefaultOperatorStateBackend$PartitionableListState.write(DefaultOperatorStateBackend.java:415)
at org.apache.flink.runtime.state.DefaultOperatorStateBackend$1.performOperation(DefaultOperatorStateBackend.java:232)
at org.apache.flink.runtime.state.DefaultOperatorStateBackend$1.performOperation(DefaultOperatorStateBackend.java:202)
at org.apache.flink.runtime.io.async.AbstractAsyncIOCallable.call(AbstractAsyncIOCallable.java:72)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at org.apache.flink.util.FutureUtil.runIfNotDoneAndGet(FutureUtil.java:40)
at org.apache.flink.streaming.runtime.tasks.StreamTask$AsyncCheckpointRunnable.run(StreamTask.java:902)
... 5 more
Caused by: java.io.IOException: Cannot register Closeable, registry is already closed. Closing argument.
at org.apache.flink.util.AbstractCloseableRegistry.registerClosable(AbstractCloseableRegistry.java:66)
at org.apache.flink.core.fs.ClosingFSDataOutputStream.wrapSafe(ClosingFSDataOutputStream.java:101)
at org.apache.flink.core.fs.SafetyNetWrapperFileSystem.create(SafetyNetWrapperFileSystem.java:125)
at org.apache.flink.core.fs.FileSystem.create(FileSystem.java:621)
at org.apache.flink.runtime.state.filesystem.FsCheckpointStreamFactory$FsCheckpointStateOutputStream.createStream(FsCheckpointStreamFactory.java:362)
... 27 more
[CIRCULAR REFERENCE:java.io.IOException: Could not open output stream for state backend]
2017-05-11 15:10:47,642 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - IterationSource-15 (1/1) (54d869533bda639da0674dafc218bc0f) switched from RUNNING to CANCELING.
2017-05-11 15:10:47,646 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Source: Kafka[rbea.state.event.bifrost.log] -> Timestamp assigner -> Filter state updates -> Wrap events (1/1) (fedc06fd7a057558f7aea138790e76f1) switched from RUNNING to CANCELING.
2017-05-11 15:10:47,646 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Source: Query Job Info (1/1) (3d344ccb779c222ae5e71a6cd702ab08) switched from RUNNING to CANCELING.
2017-05-11 15:10:47,646 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Create Fields and Ids -> Filter Errors and Notifications (1/1) (96ba02017d86acb4d6e136631f4c896b) switched from RUNNING to CANCELING.
2017-05-11 15:10:47,646 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Execute processors -> (Filter processor info -> Filter Failures, Filter BEA) (1/1) (51ec48a3e91ba2d68ab80b614b3d7a56) switched from RUNNING to CANCELING.
2017-05-11 15:10:47,647 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Window aggregator (1/1) (a9b27d006832311c1c5e26cb212ff668) switched from RUNNING to CANCELING.
2017-05-11 15:10:47,647 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Keep last -> NoOp -> Create Aggrigato events (1/1) (f3f8fe4ef4e9815cd6d834f4a38f2855) switched from RUNNING to CANCELING.
2017-05-11 15:10:47,647 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - MySql output info -> Filter (1/1) (6fd449d469f21e774663efdecc92955e) switched from RUNNING to CANCELING.
2017-05-11 15:10:47,647 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - To DeploymentInfo (1/1) (9d0985ae5088198c20179e94947e1bbd) switched from RUNNING to CANCELING.
2017-05-11 15:10:47,647 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - IterationSink-15 (1/1) (289679af7c71b17ca1509d7d7762adc5) switched from RUNNING to CANCELING.
2017-05-11 15:10:47,648 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Create Job information (1/1) (6a0c6e47d414c6e3159b89de29d2aa6f) switched from RUNNING to CANCELING.
2017-05-11 15:10:47,671 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - IterationSource-15 (1/1) (54d869533bda639da0674dafc218bc0f) switched from CANCELING to CANCELED.
2017-05-11 15:10:47,672 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Source: Query Job Info (1/1) (3d344ccb779c222ae5e71a6cd702ab08) switched from CANCELING to CANCELED.
2017-05-11 15:10:47,672 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Create Fields and Ids -> Filter Errors and Notifications (1/1) (96ba02017d86acb4d6e136631f4c896b) switched from CANCELING to CANCELED.
2017-05-11 15:10:47,674 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Keep last -> NoOp -> Create Aggrigato events (1/1) (f3f8fe4ef4e9815cd6d834f4a38f2855) switched from CANCELING to CANCELED.
2017-05-11 15:10:47,674 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Window aggregator (1/1) (a9b27d006832311c1c5e26cb212ff668) switched from CANCELING to CANCELED.
2017-05-11 15:10:47,675 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - To DeploymentInfo (1/1) (9d0985ae5088198c20179e94947e1bbd) switched from CANCELING to CANCELED.
2017-05-11 15:10:47,677 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - MySql output info -> Filter (1/1) (6fd449d469f21e774663efdecc92955e) switched from CANCELING to CANCELED.
2017-05-11 15:10:47,678 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - IterationSink-15 (1/1) (289679af7c71b17ca1509d7d7762adc5) switched from CANCELING to CANCELED.
2017-05-11 15:10:47,679 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Create Job information (1/1) (6a0c6e47d414c6e3159b89de29d2aa6f) switched from CANCELING to CANCELED.
2017-05-11 15:10:47,680 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Execute processors -> (Filter processor info -> Filter Failures, Filter BEA) (1/1) (51ec48a3e91ba2d68ab80b614b3d7a56) switched from CANCELING to CANCELED.
2017-05-11 15:10:47,711 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Source: Kafka[rbea.state.event.bifrost.log] -> Timestamp assigner -> Filter state updates -> Wrap events (1/1) (fedc06fd7a057558f7aea138790e76f1) switched from CANCELING to CANCELED.
2017-05-11 15:10:47,712 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Try to restart or fail the job event.bifrost.log (065c0937d56f3e8da025e015d3ab332b) if no longer possible.
2017-05-11 15:10:47,712 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Job event.bifrost.log (065c0937d56f3e8da025e015d3ab332b) switched from state FAILING to RESTARTING.
2017-05-11 15:10:47,712 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Restarting the job event.bifrost.log (065c0937d56f3e8da025e015d3ab332b).
2017-05-11 15:10:47,714 INFO org.apache.flink.runtime.executiongraph.restart.ExecutionGraphRestarter - Delaying retry of job execution for 10000 ms ...
2017-05-11 15:10:52,828 WARN org.apache.flink.configuration.Configuration - Config uses deprecated configuration key 'recovery.mode' instead of proper key 'high-availability'
2017-05-11 15:10:52,829 WARN org.apache.flink.configuration.Configuration - Config uses deprecated configuration key 'recovery.zookeeper.storageDir' instead of proper key 'high-availability.storageDir'
2017-05-11 15:10:52,829 INFO org.apache.flink.runtime.blob.FileSystemBlobStore - Creating highly available BLOB storage directory at hdfs:///flink/ha/application_1494426363399_0012/blob
2017-05-11 15:10:52,832 INFO org.apache.flink.runtime.blob.BlobCache - Created BLOB cache storage directory /tmp/blobStore-2c60ff16-3ee0-4229-83f0-2e383950d84d
2017-05-11 15:10:57,715 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Job event.bifrost.log (065c0937d56f3e8da025e015d3ab332b) switched from state RESTARTING to CREATED.
2017-05-11 15:10:57,715 INFO org.apache.flink.runtime.checkpoint.ZooKeeperCompletedCheckpointStore - Recovering checkpoints from ZooKeeper.
2017-05-11 15:10:57,720 INFO org.apache.flink.runtime.checkpoint.ZooKeeperCompletedCheckpointStore - Found 0 checkpoints in ZooKeeper.
2017-05-11 15:10:57,720 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Job event.bifrost.log (065c0937d56f3e8da025e015d3ab332b) switched from state CREATED to RUNNING.
2017-05-11 15:10:57,721 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - IterationSource-15 (1/1) (aaa8f50da5b4f95be0dc5e5533741cf7) switched from CREATED to SCHEDULED.
2017-05-11 15:10:57,721 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Source: Kafka[event.bifrost.log] -> Timestamp assigner -> Wrap events (1/1) (8a97789a9006d417d4b761f580c98fbe) switched from CREATED to SCHEDULED.
2017-05-11 15:10:57,721 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Source: Kafka[rbea.state.event.bifrost.log] -> Timestamp assigner -> Filter state updates -> Wrap events (1/1) (1f577dc5863e4902aa8deea71b71f605) switched from CREATED to SCHEDULED.
2017-05-11 15:10:57,721 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Source: Query Job Info (1/1) (4ba944135ae73a280d50c8aa67d6edf7) switched from CREATED to SCHEDULED.
2017-05-11 15:10:57,721 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Source: Kafka[rbeaDeploymentsplattest1] -> Max watermark -> Drop errors -> Filter for topic (1/1) (e5bf796d6c72b5552867db7b1cee9eec) switched from CREATED to SCHEDULED.
2017-05-11 15:10:57,721 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Create Fields and Ids -> Filter Errors and Notifications (1/1) (412bf511ffe9c5fb24986fdb7946b7eb) switched from CREATED to SCHEDULED.
2017-05-11 15:10:57,721 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Execute processors -> (Filter processor info -> Filter Failures, Filter BEA) (1/1) (bee8adcdb1e1015878d02f2bf7e25271) switched from CREATED to SCHEDULED.
2017-05-11 15:10:57,721 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Window aggregator (1/1) (2e3d5f6b29d630d83c4c87febfccbd70) switched from CREATED to SCHEDULED.
2017-05-11 15:10:57,721 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Keep last -> NoOp -> Create Aggrigato events (1/1) (d05593668ad3beb74efb47eb46df749a) switched from CREATED to SCHEDULED.
2017-05-11 15:10:57,722 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - MySql output info -> Filter (1/1) (6d6200e77bcbc8eadd6a7e9fb359c7de) switched from CREATED to SCHEDULED.
2017-05-11 15:10:57,722 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - To DeploymentInfo (1/1) (085b77eabcd6fe3f7c6afe3ac4ccf732) switched from CREATED to SCHEDULED.
2017-05-11 15:10:57,722 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - IterationSink-15 (1/1) (bd2ccbc322a1a9c266ea0cd9cbc7c693) switched from CREATED to SCHEDULED.
2017-05-11 15:10:57,722 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Create Job information (1/1) (476c70c231139063b3bc1b7314d8ce49) switched from CREATED to SCHEDULED.
2017-05-11 15:10:57,722 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - IterationSource-15 (1/1) (aaa8f50da5b4f95be0dc5e5533741cf7) switched from SCHEDULED to DEPLOYING.
2017-05-11 15:10:57,722 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Deploying IterationSource-15 (1/1) (attempt #1) to splat34
2017-05-11 15:10:57,723 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Source: Kafka[event.bifrost.log] -> Timestamp assigner -> Wrap events (1/1) (8a97789a9006d417d4b761f580c98fbe) switched from SCHEDULED to DEPLOYING.
2017-05-11 15:10:57,723 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Deploying Source: Kafka[event.bifrost.log] -> Timestamp assigner -> Wrap events (1/1) (attempt #1) to splat34
2017-05-11 15:10:57,723 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Source: Kafka[rbea.state.event.bifrost.log] -> Timestamp assigner -> Filter state updates -> Wrap events (1/1) (1f577dc5863e4902aa8deea71b71f605) switched from SCHEDULED to DEPLOYING.
2017-05-11 15:10:57,723 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Deploying Source: Kafka[rbea.state.event.bifrost.log] -> Timestamp assigner -> Filter state updates -> Wrap events (1/1) (attempt #1) to splat34
2017-05-11 15:10:57,723 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Source: Query Job Info (1/1) (4ba944135ae73a280d50c8aa67d6edf7) switched from SCHEDULED to DEPLOYING.
2017-05-11 15:10:57,724 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Deploying Source: Query Job Info (1/1) (attempt #1) to splat34
2017-05-11 15:10:57,724 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Source: Kafka[rbeaDeploymentsplattest1] -> Max watermark -> Drop errors -> Filter for topic (1/1) (e5bf796d6c72b5552867db7b1cee9eec) switched from SCHEDULED to DEPLOYING.
2017-05-11 15:10:57,724 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Deploying Source: Kafka[rbeaDeploymentsplattest1] -> Max watermark -> Drop errors -> Filter for topic (1/1) (attempt #1) to splat34
2017-05-11 15:10:57,724 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Create Fields and Ids -> Filter Errors and Notifications (1/1) (412bf511ffe9c5fb24986fdb7946b7eb) switched from SCHEDULED to DEPLOYING.
2017-05-11 15:10:57,724 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Deploying Create Fields and Ids -> Filter Errors and Notifications (1/1) (attempt #1) to splat34
2017-05-11 15:10:57,724 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Execute processors -> (Filter processor info -> Filter Failures, Filter BEA) (1/1) (bee8adcdb1e1015878d02f2bf7e25271) switched from SCHEDULED to DEPLOYING.
2017-05-11 15:10:57,724 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Deploying Execute processors -> (Filter processor info -> Filter Failures, Filter BEA) (1/1) (attempt #1) to splat34
2017-05-11 15:10:57,725 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Window aggregator (1/1) (2e3d5f6b29d630d83c4c87febfccbd70) switched from SCHEDULED to DEPLOYING.
2017-05-11 15:10:57,725 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Deploying Window aggregator (1/1) (attempt #1) to splat34
2017-05-11 15:10:57,725 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Keep last -> NoOp -> Create Aggrigato events (1/1) (d05593668ad3beb74efb47eb46df749a) switched from SCHEDULED to DEPLOYING.
2017-05-11 15:10:57,725 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Deploying Keep last -> NoOp -> Create Aggrigato events (1/1) (attempt #1) to splat34
2017-05-11 15:10:57,725 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - MySql output info -> Filter (1/1) (6d6200e77bcbc8eadd6a7e9fb359c7de) switched from SCHEDULED to DEPLOYING.
2017-05-11 15:10:57,725 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Deploying MySql output info -> Filter (1/1) (attempt #1) to splat34
2017-05-11 15:10:57,725 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - To DeploymentInfo (1/1) (085b77eabcd6fe3f7c6afe3ac4ccf732) switched from SCHEDULED to DEPLOYING.
2017-05-11 15:10:57,726 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Deploying To DeploymentInfo (1/1) (attempt #1) to splat34
2017-05-11 15:10:57,726 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - IterationSink-15 (1/1) (bd2ccbc322a1a9c266ea0cd9cbc7c693) switched from SCHEDULED to DEPLOYING.
2017-05-11 15:10:57,726 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Deploying IterationSink-15 (1/1) (attempt #1) to splat34
2017-05-11 15:10:57,726 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Create Job information (1/1) (476c70c231139063b3bc1b7314d8ce49) switched from SCHEDULED to DEPLOYING.
2017-05-11 15:10:57,726 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Deploying Create Job information (1/1) (attempt #1) to splat34
2017-05-11 15:10:57,736 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - IterationSource-15 (1/1) (aaa8f50da5b4f95be0dc5e5533741cf7) switched from DEPLOYING to RUNNING.
2017-05-11 15:10:57,737 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Source: Kafka[event.bifrost.log] -> Timestamp assigner -> Wrap events (1/1) (8a97789a9006d417d4b761f580c98fbe) switched from DEPLOYING to RUNNING.
2017-05-11 15:10:57,738 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Source: Kafka[rbea.state.event.bifrost.log] -> Timestamp assigner -> Filter state updates -> Wrap events (1/1) (1f577dc5863e4902aa8deea71b71f605) switched from DEPLOYING to RUNNING.
2017-05-11 15:10:57,757 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - To DeploymentInfo (1/1) (085b77eabcd6fe3f7c6afe3ac4ccf732) switched from DEPLOYING to RUNNING.
2017-05-11 15:10:57,757 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Window aggregator (1/1) (2e3d5f6b29d630d83c4c87febfccbd70) switched from DEPLOYING to RUNNING.
2017-05-11 15:10:57,759 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Source: Query Job Info (1/1) (4ba944135ae73a280d50c8aa67d6edf7) switched from DEPLOYING to RUNNING.
2017-05-11 15:10:57,760 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - MySql output info -> Filter (1/1) (6d6200e77bcbc8eadd6a7e9fb359c7de) switched from DEPLOYING to RUNNING.
2017-05-11 15:10:57,761 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Execute processors -> (Filter processor info -> Filter Failures, Filter BEA) (1/1) (bee8adcdb1e1015878d02f2bf7e25271) switched from DEPLOYING to RUNNING.
2017-05-11 15:10:57,764 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Create Fields and Ids -> Filter Errors and Notifications (1/1) (412bf511ffe9c5fb24986fdb7946b7eb) switched from DEPLOYING to RUNNING.
2017-05-11 15:10:57,764 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Source: Kafka[rbeaDeploymentsplattest1] -> Max watermark -> Drop errors -> Filter for topic (1/1) (e5bf796d6c72b5552867db7b1cee9eec) switched from DEPLOYING to RUNNING.
2017-05-11 15:10:57,765 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Keep last -> NoOp -> Create Aggrigato events (1/1) (d05593668ad3beb74efb47eb46df749a) switched from DEPLOYING to RUNNING.
2017-05-11 15:10:57,766 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - IterationSink-15 (1/1) (bd2ccbc322a1a9c266ea0cd9cbc7c693) switched from DEPLOYING to RUNNING.
2017-05-11 15:10:57,767 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Create Job information (1/1) (476c70c231139063b3bc1b7314d8ce49) switched from DEPLOYING to RUNNING.
2017-05-11 15:12:04,734 INFO org.apache.flink.yarn.YarnJobManager - Attempting to recover all jobs.
2017-05-11 15:12:04,737 INFO org.apache.flink.yarn.YarnJobManager - There are 1 jobs to recover. Starting the job recovery.
2017-05-11 15:12:04,739 INFO org.apache.flink.yarn.YarnJobManager - Attempting to recover job 065c0937d56f3e8da025e015d3ab332b.
2017-05-11 15:12:04,777 INFO org.apache.flink.runtime.jobmanager.ZooKeeperSubmittedJobGraphStore - Recovered SubmittedJobGraph(065c0937d56f3e8da025e015d3ab332b, JobInfo(clients: Set((Actor[akka.tcp://flink@deploy.sto.midasplayer.com:55362/temp/$p],DETACHED)), start: 1494508208940)).
2017-05-11 15:12:04,778 INFO org.apache.flink.yarn.YarnJobManager - Ignoring job recovery for 065c0937d56f3e8da025e015d3ab332b, because it is already submitted.
2017-05-11 15:15:57,723 INFO org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Triggering checkpoint 2 @ 1494508557721
2017-05-11 15:15:58,085 ERROR org.apache.flink.yarn.YarnJobManager - Error in CheckpointCoordinator while processing Confirm Task Checkpoint 2 for (065c0937d56f3e8da025e015d3ab332b/bee8adcdb1e1015878d02f2bf7e25271) - state=SubtaskState{chainedStateHandle=org.apache.flink.runtime.state.ChainedStateHandle@e1781, operatorStateFromBackend=org.apache.flink.runtime.state.ChainedStateHandle@1284d31a, operatorStateFromStream=org.apache.flink.runtime.state.ChainedStateHandle@e1781, keyedStateFromBackend=org.apache.flink.contrib.streaming.state.RocksDBIncrementalKeyedStateHandle@1630b109, keyedStateFromStream=KeyGroupsStateHandle{groupRangeOffsets=KeyGroupRangeOffsets{keyGroupRange=KeyGroupRange{startKeyGroup=0, endKeyGroup=127}, offsets=[0, 951, 1902, 2853, 3804, 4755, 5706, 6657, 7608, 8559, 9510, 10461, 11412, 12363, 13314, 14265, 15216, 16167, 17118, 18069, 19020, 19971, 20922, 21873, 22824, 23775, 24726, 25677, 26628, 27579, 28530, 29481, 30432, 31383, 32334, 33285, 34236, 35187, 36138, 37089, 38040, 38991, 39942, 40893, 41844, 42795, 43746, 44697, 45648, 46599, 47550, 48501, 49452, 50403, 51354, 52305, 53256, 54207, 55158, 56109, 57060, 58011, 58962, 59913, 60864, 61815, 62766, 63717, 64668, 65619, 66570, 67521, 68472, 69423, 70374, 71325, 72276, 73227, 74178, 75129, 76080, 77031, 77982, 78933, 79884, 80835, 81786, 82737, 83688, 84639, 85590, 86541, 87492, 88443, 89394, 90345, 91296, 92247, 93198, 94149, 95100, 96051, 97002, 97953, 98904, 99855, 100806, 101757, 102708, 103659, 104610, 105561, 106512, 107463, 108414, 109365, 110316, 111267, 112218, 113169, 114120, 115071, 116022, 116973, 117924, 118875, 119826, 120777]}, data=File State: hdfs://splat34.sto.midasplayer.com:8020/flink/checkpoints/event.bifrost.log/065c0937d56f3e8da025e015d3ab332b/chk-2/e061c982-325a-4d0c-b10f-14b4edcc7711 [121728 bytes]}, stateSize=546522}
org.apache.flink.runtime.checkpoint.CheckpointException: Could not finalize the pending checkpoint 2.
at org.apache.flink.runtime.checkpoint.CheckpointCoordinator.completePendingCheckpoint(CheckpointCoordinator.java:853)
at org.apache.flink.runtime.checkpoint.CheckpointCoordinator.receiveAcknowledgeMessage(CheckpointCoordinator.java:772)
at org.apache.flink.runtime.jobmanager.JobManager$$anonfun$org$apache$flink$runtime$jobmanager$JobManager$$handleCheckpointMessage$1.apply$mcV$sp(JobManager.scala:1462)
at org.apache.flink.runtime.jobmanager.JobManager$$anonfun$org$apache$flink$runtime$jobmanager$JobManager$$handleCheckpointMessage$1.apply(JobManager.scala:1461)
at org.apache.flink.runtime.jobmanager.JobManager$$anonfun$org$apache$flink$runtime$jobmanager$JobManager$$handleCheckpointMessage$1.apply(JobManager.scala:1461)
at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:397)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Caused by: java.lang.IllegalStateException: Unknown KeyedStateHandle type: class org.apache.flink.contrib.streaming.state.RocksDBIncrementalKeyedStateHandle
at org.apache.flink.runtime.checkpoint.savepoint.SavepointV2Serializer.serializeKeyedStateHandle(SavepointV2Serializer.java:315)
at org.apache.flink.runtime.checkpoint.savepoint.SavepointV2Serializer.serializeSubtaskState(SavepointV2Serializer.java:267)
at org.apache.flink.runtime.checkpoint.savepoint.SavepointV2Serializer.serialize(SavepointV2Serializer.java:119)
at org.apache.flink.runtime.checkpoint.savepoint.SavepointV2Serializer.serialize(SavepointV2Serializer.java:64)
at org.apache.flink.runtime.checkpoint.savepoint.SavepointStore.storeSavepointToHandle(SavepointStore.java:199)
at org.apache.flink.runtime.checkpoint.savepoint.SavepointStore.storeExternalizedCheckpointToHandle(SavepointStore.java:164)
at org.apache.flink.runtime.checkpoint.PendingCheckpoint.finalizeCheckpointExternalized(PendingCheckpoint.java:287)
at org.apache.flink.runtime.checkpoint.CheckpointCoordinator.completePendingCheckpoint(CheckpointCoordinator.java:843)
... 12 more
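(Editor's note, not part of the log: the root cause above is that this 1.3-SNAPSHOT build's externalized-checkpoint serializer, SavepointV2Serializer, does not recognize RocksDBIncrementalKeyedStateHandle, so finalizing checkpoint 2 fails once incremental RocksDB checkpoints are combined with externalized checkpoints. A minimal sketch of one possible workaround, falling back to full (non-incremental) RocksDB snapshots; this assumes incremental mode was enabled via the two-argument RocksDBStateBackend constructor, which is not visible in the log.)

import org.apache.flink.contrib.streaming.state.RocksDBStateBackend;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class NonIncrementalRocksDbSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // Passing false (or using the single-argument constructor) keeps RocksDB snapshots full,
        // which the externalized-checkpoint serializer in this build can handle.
        env.setStateBackend(new RocksDBStateBackend(
                "hdfs://splat34.sto.midasplayer.com:8020/flink/checkpoints/event.bifrost.log",
                false));
    }
}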