2019-09-30 10:14:16,485 [INFO] [IPC Server handler 1 on 35489] |app.DAGAppMaster|: Running DAG: select count(*) from ...'fc','fd','fe','ff') (Stage-1), callerContext={ context=HIVE, callerType=HIVE_QUERY_ID, callerId=hive_20190930101341_db02dfce-0fa4-4c35-980d-3d993df0e2ab }
2019-09-30 10:14:16,952 [INFO] [IPC Server handler 1 on 35489] |HistoryEventHandler.criticalEvents|: [HISTORY][DAG:dag_1569602124761_2935_2][Event:DAG_SUBMITTED]: dagID=dag_1569602124761_2935_2, submitTime=1569827656477, queueName=default
2019-09-30 10:14:16,956 [INFO] [IPC Server handler 1 on 35489] |impl.VertexImpl|: Default container context for vertex_1569602124761_2935_2_00 [Map 1]=LocalResources: [[ name=hive-hcatalog-core.jar, value=resource { scheme: "hdfs" host: "host" port: 8020 file: "/tmp/hive/hive/_tez_session_dir/5a1a7c38-843e-4927-89ed-74b190b70fe4-resources/hive-hcatalog-core.jar" } size: 269269 timestamp: 1569827622438 type: FILE visibility: PRIVATE],[ name=tez-conf.pb, value=resource { scheme: "hdfs" host: "host" port: 8020 file: "/tmp/hive/hive/_tez_session_dir/5a1a7c38-843e-4927-89ed-74b190b70fe4/.tez/application_1569602124761_2935/tez-conf.pb" } size: 130140 timestamp: 1569827622663 type: FILE visibility: APPLICATION],[ name=tezlib, value=resource { scheme: "hdfs" host: "host" port: 8020 file: "/hdp/apps/3.1.4.0-315/tez/tez.tar.gz" } size: 282569993 timestamp: 1569505738997 type: ARCHIVE visibility: PUBLIC],[ name=hive-exec-3.1.0.3.1.4.0-315-effe339ef81326ce093bc0a1516e86d9d189e11126de97212254c005859b949e.jar, value=resource { scheme: "hdfs" host: "host" port: 8020 file: "/user/hive/.hiveJars/hive-exec-3.1.0.3.1.4.0-315-effe339ef81326ce093bc0a1516e86d9d189e11126de97212254c005859b949e.jar" } size: 42426062 timestamp: 1569515366707 type: FILE visibility: PRIVATE],[ name=json-serde-1.3.8-jar-with-dependencies.jar, value=resource { scheme: "hdfs" host: "host" port: 8020 file: "/tmp/hive/hive/_tez_session_dir/5a1a7c38-843e-4927-89ed-74b190b70fe4-resources/json-serde-1.3.8-jar-with-dependencies.jar" } size: 85492 timestamp: 1569827622496 type: FILE visibility: PRIVATE],], environment: [[ SHELL=/bin/bash ],[ 
LD_LIBRARY_PATH=$PWD:/usr/hdp/3.1.4.0-315/hadoop/lib/native:/usr/hdp/3.1.4.0-315/hadoop/lib/native/Linux-amd64-64:$PWD:/usr/hdp/3.1.4.0-315/hadoop/lib/native:/usr/hdp/3.1.4.0-315/hadoop/lib/native/Linux-amd64-64:$PWD:/usr/hdp/3.1.4.0-315/hadoop/lib/native:/usr/hdp/3.1.4.0-315/hadoop/lib/native/Linux-amd64-64:$PWD:/usr/hdp/3.1.4.0-315/hadoop/lib/native:/usr/hdp/3.1.4.0-315/hadoop/lib/native/Linux-amd64-64:$HADOOP_COMMON_HOME/lib/native/ ],[ HADOOP_ROOT_LOGGER=INFO,CLA ],[ CLASSPATH=/usr/hdp/3.1.4.0-315/hadoop/lib/hadoop-lzo-0.6.0.3.1.4.0-315.jar:/etc/hadoop/conf/secure:$PWD:$PWD/*:$PWD/tezlib/*:$PWD/tezlib/lib/*: ],], credentials(token kinds): [tez.job,], javaOpts: -server -Djava.net.preferIPv4Stack=true -Dhdp.version=3.1.4.0-315 -XX:+PrintGCDetails -verbose:gc -XX:+PrintGCTimeStamps -XX:+UseNUMA -XX:+UseG1GC -XX:+ResizeTLAB -server -Djava.net.preferIPv4Stack=true -XX:NewRatio=8 -XX:+UseNUMA -XX:+UseG1GC -XX:+ResizeTLAB -XX:+PrintGCDetails -verbose:gc -XX:+PrintGCTimeStamps -Dlog4j.configuratorClass=org.apache.tez.common.TezLog4jConfigurator -Dlog4j.configuration=tez-container-log4j.properties -Dyarn.app.container.log.dir=<LOG_DIR> -Dtez.root.logger=DEBUG,CLA , vertex: vertex_1569602124761_2935_2_00 [Map 1], Default Resources=<memory:4096, vCores:1>
2019-09-30 10:14:16,956 [INFO] [IPC Server handler 1 on 35489] |impl.VertexImpl|: Setting 1 additional inputs for vertexvertex_1569602124761_2935_2_00 [Map 1]
2019-09-30 10:14:16,957 [INFO] [IPC Server handler 1 on 35489] |impl.VertexImpl|: Using ExecutionContext from Vertex for Vertex Map 1
2019-09-30 10:14:16,957 [INFO] [IPC Server handler 1 on 35489] |impl.VertexImpl|: Running vertex: vertex_1569602124761_2935_2_00 [Map 1] : TaskScheduler=0:TezYarn, ContainerLauncher=0:TezYarn, TaskCommunicator=0:TezYarn
2019-09-30 10:14:16,961 [INFO] [IPC Server handler 1 on 35489] |impl.VertexImpl|: Default container context for vertex_1569602124761_2935_2_01 [Reducer 2]=LocalResources: [[ name=hive-hcatalog-core.jar, value=resource { scheme: "hdfs" host: "host" port: 8020 file: "/tmp/hive/hive/_tez_session_dir/5a1a7c38-843e-4927-89ed-74b190b70fe4-resources/hive-hcatalog-core.jar" } size: 269269 timestamp: 1569827622438 type: FILE visibility: PRIVATE],[ name=tez-conf.pb, value=resource { scheme: "hdfs" host: "host" port: 8020 file: "/tmp/hive/hive/_tez_session_dir/5a1a7c38-843e-4927-89ed-74b190b70fe4/.tez/application_1569602124761_2935/tez-conf.pb" } size: 130140 timestamp: 1569827622663 type: FILE visibility: APPLICATION],[ name=tezlib, value=resource { scheme: "hdfs" host: "host" port: 8020 file: "/hdp/apps/3.1.4.0-315/tez/tez.tar.gz" } size: 282569993 timestamp: 1569505738997 type: ARCHIVE visibility: PUBLIC],[ name=hive-exec-3.1.0.3.1.4.0-315-effe339ef81326ce093bc0a1516e86d9d189e11126de97212254c005859b949e.jar, value=resource { scheme: "hdfs" host: "host" port: 8020 file: "/user/hive/.hiveJars/hive-exec-3.1.0.3.1.4.0-315-effe339ef81326ce093bc0a1516e86d9d189e11126de97212254c005859b949e.jar" } size: 42426062 timestamp: 1569515366707 type: FILE visibility: PRIVATE],[ name=json-serde-1.3.8-jar-with-dependencies.jar, value=resource { scheme: "hdfs" host: "host" port: 8020 file: "/tmp/hive/hive/_tez_session_dir/5a1a7c38-843e-4927-89ed-74b190b70fe4-resources/json-serde-1.3.8-jar-with-dependencies.jar" } size: 85492 timestamp: 1569827622496 type: FILE visibility: PRIVATE],], environment: [[ SHELL=/bin/bash ],[ 
LD_LIBRARY_PATH=$PWD:/usr/hdp/3.1.4.0-315/hadoop/lib/native:/usr/hdp/3.1.4.0-315/hadoop/lib/native/Linux-amd64-64:$PWD:/usr/hdp/3.1.4.0-315/hadoop/lib/native:/usr/hdp/3.1.4.0-315/hadoop/lib/native/Linux-amd64-64:$PWD:/usr/hdp/3.1.4.0-315/hadoop/lib/native:/usr/hdp/3.1.4.0-315/hadoop/lib/native/Linux-amd64-64:$PWD:/usr/hdp/3.1.4.0-315/hadoop/lib/native:/usr/hdp/3.1.4.0-315/hadoop/lib/native/Linux-amd64-64:$HADOOP_COMMON_HOME/lib/native/ ],[ HADOOP_ROOT_LOGGER=INFO,CLA ],[ CLASSPATH=/usr/hdp/3.1.4.0-315/hadoop/lib/hadoop-lzo-0.6.0.3.1.4.0-315.jar:/etc/hadoop/conf/secure:$PWD:$PWD/*:$PWD/tezlib/*:$PWD/tezlib/lib/*: ],], credentials(token kinds): [tez.job,], javaOpts: -server -Djava.net.preferIPv4Stack=true -Dhdp.version=3.1.4.0-315 -XX:+PrintGCDetails -verbose:gc -XX:+PrintGCTimeStamps -XX:+UseNUMA -XX:+UseG1GC -XX:+ResizeTLAB -server -Djava.net.preferIPv4Stack=true -XX:NewRatio=8 -XX:+UseNUMA -XX:+UseG1GC -XX:+ResizeTLAB -XX:+PrintGCDetails -verbose:gc -XX:+PrintGCTimeStamps -Dlog4j.configuratorClass=org.apache.tez.common.TezLog4jConfigurator -Dlog4j.configuration=tez-container-log4j.properties -Dyarn.app.container.log.dir=<LOG_DIR> -Dtez.root.logger=DEBUG,CLA , vertex: vertex_1569602124761_2935_2_01 [Reducer 2], Default Resources=<memory:4096, vCores:1>
2019-09-30 10:14:16,961 [INFO] [IPC Server handler 1 on 35489] |impl.VertexImpl|: Setting 1 additional outputs for vertex vertex_1569602124761_2935_2_01 [Reducer 2]
2019-09-30 10:14:16,961 [INFO] [IPC Server handler 1 on 35489] |impl.VertexImpl|: Using ExecutionContext from Vertex for Vertex Reducer 2
2019-09-30 10:14:16,962 [INFO] [IPC Server handler 1 on 35489] |impl.VertexImpl|: Running vertex: vertex_1569602124761_2935_2_01 [Reducer 2] : TaskScheduler=0:TezYarn, ContainerLauncher=0:TezYarn, TaskCommunicator=0:TezYarn
2019-09-30 10:14:16,967 [INFO] [IPC Server handler 1 on 35489] |impl.DAGImpl|: Using DAG Scheduler: org.apache.tez.dag.app.dag.impl.DAGSchedulerNaturalOrder
2019-09-30 10:14:16,968 [INFO] [IPC Server handler 1 on 35489] |HistoryEventHandler.criticalEvents|: [HISTORY][DAG:dag_1569602124761_2935_2][Event:DAG_INITIALIZED]: dagID=dag_1569602124761_2935_2, initTime=1569827656967
2019-09-30 10:14:16,968 [INFO] [IPC Server handler 1 on 35489] |impl.DAGImpl|: dag_1569602124761_2935_2 transitioned from NEW to INITED due to event DAG_INIT
2019-09-30 10:14:16,968 [INFO] [Dispatcher thread {Central}] |impl.DAGImpl|: Added additional resources : [[]] to classpath
2019-09-30 10:14:16,970 [INFO] [Dispatcher thread {Central}] |HistoryEventHandler.criticalEvents|: [HISTORY][DAG:dag_1569602124761_2935_2][Event:DAG_STARTED]: dagID=dag_1569602124761_2935_2, startTime=1569827656968
2019-09-30 10:14:16,970 [INFO] [Dispatcher thread {Central}] |impl.DAGImpl|: dag_1569602124761_2935_2 transitioned from INITED to RUNNING due to event DAG_START
2019-09-30 10:14:16,970 [INFO] [Dispatcher thread {Central}] |impl.VertexImpl|: Root Inputs exist for Vertex: Map 1 : {product_events_optimized={InputName=product_events_optimized}, {Descriptor=ClassName=org.apache.tez.mapreduce.input.MRInputLegacy, hasPayload=true}, {ControllerDescriptor=ClassName=org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator, hasPayload=false}}
2019-09-30 10:14:16,970 [INFO] [Dispatcher thread {Central}] |impl.VertexImpl|: Starting root input initializer for input: product_events_optimized, with class: [org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator]
2019-09-30 10:14:16,970 [INFO] [Dispatcher thread {Central}] |impl.VertexImpl|: Setting vertexManager to RootInputVertexManager for vertex_1569602124761_2935_2_00 [Map 1]
2019-09-30 10:14:16,986 [INFO] [Dispatcher thread {Central}] |impl.VertexImpl|: Num tasks is -1. Expecting VertexManager/InputInitializers/1-1 split to set #tasks for the vertex vertex_1569602124761_2935_2_00 [Map 1]
2019-09-30 10:14:16,986 [INFO] [Dispatcher thread {Central}] |impl.VertexImpl|: Vertex will initialize from input initializer. vertex_1569602124761_2935_2_00 [Map 1]
2019-09-30 10:14:16,988 [INFO] [Dispatcher thread {Central}] |impl.VertexImpl|: Starting 1 inputInitializers for vertex vertex_1569602124761_2935_2_00 [Map 1]
2019-09-30 10:14:17,009 [INFO] [Dispatcher thread {Central}] |Configuration.deprecation|: mapred.committer.job.setup.cleanup.needed is deprecated. Instead, use mapreduce.job.committer.setup.cleanup.needed
2019-09-30 10:14:17,227 [INFO] [Dispatcher thread {Central}] |conf.HiveConf|: Found configuration file null
2019-09-30 10:14:17,233 [INFO] [Dispatcher thread {Central}] |exec.Utilities|: PLAN PATH = hdfs://host:8020/tmp/hive/hive/f4436064-77c8-437f-af5e-94173acd3890/hive_2019-09-30_10-13-41_623_1803751641666654705-1469/hive/_tez_scratch_dir/7f24d719-df4f-4f12-b086-eff704dec94b/map.xml
2019-09-30 10:14:17,246 [INFO] [Dispatcher thread {Central}] |exec.SerializationUtilities|: Deserializing MapWork using kryo
2019-09-30 10:14:17,614 [INFO] [Dispatcher thread {Central}] |exec.Utilities|: Deserialized plan (via RPC) - name: Map 1 size: 21.96KB
2019-09-30 10:14:17,615 [INFO] [Dispatcher thread {Central}] |tez.HiveSplitGenerator|: SplitGenerator using llap affinitized locations: false
2019-09-30 10:14:17,615 [INFO] [Dispatcher thread {Central}] |tez.HiveSplitGenerator|: SplitLocationProvider: org.apache.hadoop.hive.ql.exec.tez.Utils$1@5eda5706
2019-09-30 10:14:17,622 [INFO] [InputInitializer {Map 1} #0] |dag.RootInputInitializerManager|: Starting InputInitializer for Input: product_events_optimized on vertex vertex_1569602124761_2935_2_00 [Map 1]
2019-09-30 10:14:17,624 [INFO] [InputInitializer {Map 1} #0] |tez.HiveSplitGenerator|: GenerateConsistentSplitsInHive=true
2019-09-30 10:14:17,626 [INFO] [Dispatcher thread {Central}] |impl.VertexImpl|: vertex_1569602124761_2935_2_00 [Map 1] transitioned from NEW to INITIALIZING due to event V_INIT
2019-09-30 10:14:17,628 [INFO] [Dispatcher thread {Central}] |impl.VertexImpl|: Setting vertexManager to ShuffleVertexManager for vertex_1569602124761_2935_2_01 [Reducer 2]
2019-09-30 10:14:17,634 [INFO] [InputInitializer {Map 1} #0] |tez.HiveSplitGenerator|: The preferred split size is 16777216
2019-09-30 10:14:17,635 [INFO] [InputInitializer {Map 1} #0] |exec.Utilities|: PLAN PATH = hdfs://host:8020/tmp/hive/hive/f4436064-77c8-437f-af5e-94173acd3890/hive_2019-09-30_10-13-41_623_1803751641666654705-1469/hive/_tez_scratch_dir/7f24d719-df4f-4f12-b086-eff704dec94b/map.xml
2019-09-30 10:14:17,638 [INFO] [InputInitializer {Map 1} #0] |exec.Utilities|: Processing alias product_events_optimized
2019-09-30 10:14:17,638 [INFO] [InputInitializer {Map 1} #0] |exec.Utilities|: Adding 96 inputs; the first input is hdfs://host:8020/flume/product_events/product_events_optimized/partfunc=a0
2019-09-30 10:14:17,658 [INFO] [Dispatcher thread {Central}] |vertexmanager.ShuffleVertexManagerBase|: Settings minFrac: 0.2 maxFrac: 0.4 auto: false desiredTaskIput: 104857600
2019-09-30 10:14:17,658 [INFO] [Dispatcher thread {Central}] |vertexmanager.ShuffleVertexManager|: minTaskParallelism 1
2019-09-30 10:14:17,658 [INFO] [Dispatcher thread {Central}] |impl.VertexImpl|: Creating 1 tasks for vertex: vertex_1569602124761_2935_2_01 [Reducer 2]
2019-09-30 10:14:17,659 [INFO] [Dispatcher thread {Central}] |impl.VertexImpl|: Directly initializing vertex: vertex_1569602124761_2935_2_01 [Reducer 2]
2019-09-30 10:14:17,659 [INFO] [Dispatcher thread {Central}] |HistoryEventHandler.criticalEvents|: [HISTORY][DAG:dag_1569602124761_2935_2][Event:VERTEX_CONFIGURE_DONE]: vertexId=vertex_1569602124761_2935_2_01, reconfigureDoneTime=1569827657659, numTasks=1, vertexLocationHint=null, edgeManagersCount=1, rootInputSpecUpdateCount=0, setParallelismCalledFlag=false
2019-09-30 10:14:17,659 [INFO] [Dispatcher thread {Central}] |impl.VertexImpl|: Setting up committers for vertex vertex_1569602124761_2935_2_01 [Reducer 2], numAdditionalOutputs=1
2019-09-30 10:14:17,659 [INFO] [Dispatcher thread {Central}] |HistoryEventHandler.criticalEvents|: [HISTORY][DAG:dag_1569602124761_2935_2][Event:VERTEX_INITIALIZED]: vertexName=Reducer 2, vertexId=vertex_1569602124761_2935_2_01, initRequestedTime=1569827657628, initedTime=1569827657659, numTasks=1, processorName=org.apache.hadoop.hive.ql.exec.tez.ReduceTezProcessor, additionalInputsCount=0, initGeneratedEventsCount=0, servicePluginInfo=ServicePluginInfo {containerLauncherName=TezYarn, taskSchedulerName=TezYarn, taskCommunicatorName=TezYarn, containerLauncherClassName=org.apache.tez.dag.app.launcher.TezContainerLauncherImpl, taskSchedulerClassName=org.apache.tez.dag.app.rm.YarnTaskSchedulerService, taskCommunicatorClassName=org.apache.tez.dag.app.TezTaskCommunicatorImpl }
2019-09-30 10:14:17,659 [INFO] [Dispatcher thread {Central}] |impl.VertexImpl|: vertex_1569602124761_2935_2_01 [Reducer 2] transitioned from NEW to INITED due to event V_INIT
2019-09-30 10:14:17,663 [INFO] [InputInitializer {Map 1} #0] |io.HiveInputFormat|: hive.io.file.readcolumn.ids =
2019-09-30 10:14:17,664 [INFO] [InputInitializer {Map 1} #0] |io.HiveInputFormat|: hive.io.file.readcolumn.names =
2019-09-30 10:14:17,664 [INFO] [InputInitializer {Map 1} #0] |io.HiveInputFormat|: Generating splits for dirs: hdfs://host:8020/flume/product_events/product_events_optimized/partfunc=a0
2019-09-30 10:14:17,731 [INFO] [InputInitializer {Map 1} #0] |orc.OrcInputFormat|: ORC pushdown predicate: leaf-0 = (IN partfunc a0 a1 a2 a3 a4 a5 a6 a7 a8 a9 aa ab ac ad ae af b0 b1 b2 b3 b4 b5 b6 b7 b8 b9 ba bb bc bd be bf c0 c1 c2 c3 c4 c5 c6 c7 c8 c9 ca cb cc cd ce cf d0 d1 d2 d3 d4 d5 d6 d7 d8 d9 da db dc dd de df e0 e1 e2 e3 e4 e5 e6 e7 e8 e9 ea eb ec ed ee ef f0 f1 f2 f3 f4 f5 f6 f7 f8 f9 fa fb fc fd fe ff), expr = leaf-0
2019-09-30 10:14:17,732 [INFO] [InputInitializer {Map 1} #0] |Configuration.deprecation|: mapred.input.dir is deprecated. Instead, use mapreduce.input.fileinputformat.inputdir
2019-09-30 10:14:18,968 [INFO] [InputInitializer {Map 1} #0] |exec.Utilities|: PLAN PATH = hdfs://host:8020/tmp/hive/hive/f4436064-77c8-437f-af5e-94173acd3890/hive_2019-09-30_10-13-41_623_1803751641666654705-1469/hive/_tez_scratch_dir/7f24d719-df4f-4f12-b086-eff704dec94b/map.xml
2019-09-30 10:15:39,997 [INFO] [AMRM Callback Handler Thread] |rm.YarnTaskSchedulerService|: Allocated: <memory:12288, vCores:3> Free: <memory:0, vCores:0> pendingRequests: 0 delayedContainers: 3 heartbeats: 151 lastPreemptionHeartbeat: 150
2019-09-30 10:17:52,151 [INFO] [InputInitializer {Map 1} #0] |exec.Utilities|: PLAN PATH = hdfs://host:8020/tmp/hive/hive/f4436064-77c8-437f-af5e-94173acd3890/hive_2019-09-30_10-13-41_623_1803751641666654705-1469/hive/_tez_scratch_dir/7f24d719-df4f-4f12-b086-eff704dec94b/map.xml
2019-09-30 10:20:32,705 [INFO] [AMRM Callback Handler Thread] |rm.YarnTaskSchedulerService|: Allocated: <memory:12288, vCores:3> Free: <memory:0, vCores:0> pendingRequests: 0 delayedContainers: 3 heartbeats: 201 lastPreemptionHeartbeat: 200
2019-09-30 10:25:03,421 [INFO] [AMRM Callback Handler Thread] |rm.YarnTaskSchedulerService|: Allocated: <memory:12288, vCores:3> Free: <memory:0, vCores:0> pendingRequests: 0 delayedContainers: 3 heartbeats: 251 lastPreemptionHeartbeat: 250
2019-09-30 10:26:12,785 [INFO] [InputInitializer {Map 1} #0] |exec.Utilities|: PLAN PATH = hdfs://host:8020/tmp/hive/hive/f4436064-77c8-437f-af5e-94173acd3890/hive_2019-09-30_10-13-41_623_1803751641666654705-1469/hive/_tez_scratch_dir/7f24d719-df4f-4f12-b086-eff704dec94b/map.xml
2019-09-30 10:29:34,239 [INFO] [AMRM Callback Handler Thread] |rm.YarnTaskSchedulerService|: Allocated: <memory:12288, vCores:3> Free: <memory:114688, vCores:1> pendingRequests: 0 delayedContainers: 3 heartbeats: 301 lastPreemptionHeartbeat: 300
2019-09-30 10:33:10,198 [FATAL] [IPC Server idle connection scanner for port 35489] |yarn.YarnUncaughtExceptionHandler|: Thread Thread[IPC Server idle connection scanner for port 35489,5,main] threw an Error. Shutting down now...
java.lang.OutOfMemoryError: Java heap space
at java.util.concurrent.ConcurrentHashMap$KeySetView.iterator(ConcurrentHashMap.java:4578)
at java.util.Collections$SetFromMap.iterator(Collections.java:5462)
at org.apache.hadoop.ipc.Server$ConnectionManager.closeIdle(Server.java:3461)
at org.apache.hadoop.ipc.Server$ConnectionManager$1.run(Server.java:3506)
at java.util.TimerThread.mainLoop(Timer.java:555)
at java.util.TimerThread.run(Timer.java:505)
2019-09-30 10:33:10,198 [FATAL] [IPC Server idle connection scanner for port 35519] |yarn.YarnUncaughtExceptionHandler|: Thread Thread[IPC Server idle connection scanner for port 35519,5,main] threw an Error. Shutting down now...
java.lang.OutOfMemoryError: Java heap space
at java.util.concurrent.ConcurrentHashMap$KeySetView.iterator(ConcurrentHashMap.java:4578)
at java.util.Collections$SetFromMap.iterator(Collections.java:5462)
at org.apache.hadoop.ipc.Server$ConnectionManager.closeIdle(Server.java:3461)
at org.apache.hadoop.ipc.Server$ConnectionManager$1.run(Server.java:3506)
at java.util.TimerThread.mainLoop(Timer.java:555)
at java.util.TimerThread.run(Timer.java:505)
2019-09-30 10:33:10,198 [WARN] [ResponseProcessor for block BP-328156957-10.18.69.65-1534169825766:blk_1113405057_39723014] |hdfs.DataStreamer|: Exception for BP-328156957-10.18.69.65-1534169825766:blk_1113405057_39723014
java.io.EOFException: Unexpected EOF while trying to read response from server
at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:549)
at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213)
at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1086)
2019-09-30 10:33:10,198 [WARN] [ResponseProcessor for block BP-328156957-10.18.69.65-1534169825766:blk_1113405056_39723013] |hdfs.DataStreamer|: Exception for BP-328156957-10.18.69.65-1534169825766:blk_1113405056_39723013
java.io.EOFException: Unexpected EOF while trying to read response from server
at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:549)
at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213)
at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1086)
2019-09-30 10:33:10,198 [FATAL] [TaskHeartbeatHandler PingChecker] |yarn.YarnUncaughtExceptionHandler|: Thread Thread[TaskHeartbeatHandler PingChecker,5,main] threw an Error. Shutting down now...
java.lang.OutOfMemoryError: Java heap space
at java.util.concurrent.ConcurrentHashMap$EntrySetView.iterator(ConcurrentHashMap.java:4746)
at org.apache.tez.dag.app.HeartbeatHandlerBase$PingChecker.run(HeartbeatHandlerBase.java:145)
at java.lang.Thread.run(Thread.java:745)
2019-09-30 10:33:10,206 [INFO] [InputInitializer {Map 1} #0] |dag.RootInputInitializerManager|: Failed InputInitializer for Input: product_events_optimized on vertex vertex_1569602124761_2935_2_00 [Map 1]
2019-09-30 10:33:10,209 [ERROR] [Dispatcher thread {Central}] |impl.VertexImpl|: Vertex Input: product_events_optimized initializer failed, vertex=vertex_1569602124761_2935_2_00 [Map 1]
org.apache.tez.dag.app.dag.impl.AMUserCodeException: java.lang.OutOfMemoryError: Java heap space
at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallback.onFailure(RootInputInitializerManager.java:328)
at com.google.common.util.concurrent.Futures$CallbackListener.run(Futures.java:1015)
at com.google.common.util.concurrent.DirectExecutor.execute(DirectExecutor.java:30)
at com.google.common.util.concurrent.AbstractFuture.executeListener(AbstractFuture.java:1137)
at com.google.common.util.concurrent.AbstractFuture.complete(AbstractFuture.java:957)
at com.google.common.util.concurrent.AbstractFuture.setException(AbstractFuture.java:748)
at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.afterRanInterruptibly(TrustedListenableFutureTask.java:133)
at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:133)
at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.OutOfMemoryError: Java heap space
at java.util.Arrays.copyOfRange(Arrays.java:3664)
at java.lang.String.<init>(String.java:207)
at java.lang.StringBuilder.toString(StringBuilder.java:407)
at org.apache.hadoop.fs.Path.toString(Path.java:475)
at java.lang.String.valueOf(String.java:2994)
at java.lang.StringBuilder.append(StringBuilder.java:131)
at org.apache.orc.impl.OrcAcidUtils.getSideFile(OrcAcidUtils.java:43)
at org.apache.hadoop.hive.ql.io.AcidUtils.getLogicalLength(AcidUtils.java:1879)
at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$BISplitStrategy.getSplits(OrcInputFormat.java:1069)
at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.generateSplitsInfo(OrcInputFormat.java:1847)
at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.getSplits(OrcInputFormat.java:1959)
at org.apache.hadoop.hive.ql.io.HiveInputFormat.addSplitsForGroup(HiveInputFormat.java:524)
at org.apache.hadoop.hive.ql.io.HiveInputFormat.getSplits(HiveInputFormat.java:781)
at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:243)
at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:278)
at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:269)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:269)
at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:253)
at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:69)
... 4 more
2019-09-30 10:33:10,213 [WARN] [DataStreamer for file /tmp/hive/hive/_tez_session_dir/5a1a7c38-843e-4927-89ed-74b190b70fe4/.tez/application_1569602124761_2935/recovery/1/summary block BP-328156957-10.18.69.65-1534169825766:blk_1113405056_39723013] |hdfs.DataStreamer|: Error Recovery for BP-328156957-10.18.69.65-1534169825766:blk_1113405056_39723013 in pipeline [DatanodeInfoWithStorage[10.18.69.66:50010,DS-d8b1e728-defd-4e09-a148-1ea56b980138,DISK], DatanodeInfoWithStorage[10.18.69.30:50010,DS-ddb765a8-e27b-4e7c-bf36-ec77739898d0,DISK], DatanodeInfoWithStorage[10.18.69.39:50010,DS-4f61b894-8740-4f72-b1fe-19b7b4b39964,DISK]]: datanode 0(DatanodeInfoWithStorage[10.18.69.66:50010,DS-d8b1e728-defd-4e09-a148-1ea56b980138,DISK]) is bad.
2019-09-30 10:33:10,213 [WARN] [DataStreamer for file /tmp/hive/hive/_tez_session_dir/5a1a7c38-843e-4927-89ed-74b190b70fe4/.tez/application_1569602124761_2935/recovery/1/dag_1569602124761_2935_2.recovery block BP-328156957-10.18.69.65-1534169825766:blk_1113405057_39723014] |hdfs.DataStreamer|: Error Recovery for BP-328156957-10.18.69.65-1534169825766:blk_1113405057_39723014 in pipeline [DatanodeInfoWithStorage[10.18.69.66:50010,DS-d8b1e728-defd-4e09-a148-1ea56b980138,DISK], DatanodeInfoWithStorage[10.18.69.21:50010,DS-46663f8e-9e10-47dd-9530-f6deccbef715,DISK], DatanodeInfoWithStorage[10.18.69.22:50010,DS-6bc8b9ed-3b69-40f9-917d-91e6fe5b74ff,DISK]]: datanode 0(DatanodeInfoWithStorage[10.18.69.66:50010,DS-d8b1e728-defd-4e09-a148-1ea56b980138,DISK]) is bad.
2019-09-30 10:33:10,249 [INFO] [IPC Server idle connection scanner for port 35519] |util.ExitUtil|: Halt with status -1: HaltException