Created October 24, 2015 01:03
15/10/24 01:00:34 INFO client.RMProxy: Connecting to ResourceManager at ip-10-65-200-150.ec2.internal/10.65.200.150:8032
Container: container_1444274555723_0060_01_000003 on ip-10-169-170-124.ec2.internal_8041
==========================================================================================
LogType:stderr
Log Upload Time:24-Oct-2015 01:00:10
LogLength:4299
Log Contents:
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/mnt1/yarn/usercache/hadoop/filecache/110/spark-assembly-1.5.0-hadoop2.6.0-amzn-1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/lib/hadoop/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
15/10/24 00:59:18 INFO executor.CoarseGrainedExecutorBackend: Registered signal handlers for [TERM, HUP, INT]
15/10/24 00:59:18 INFO spark.SecurityManager: Changing view acls to: yarn,hadoop
15/10/24 00:59:18 INFO spark.SecurityManager: Changing modify acls to: yarn,hadoop
15/10/24 00:59:18 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(yarn, hadoop); users with modify permissions: Set(yarn, hadoop)
15/10/24 00:59:19 INFO slf4j.Slf4jLogger: Slf4jLogger started
15/10/24 00:59:19 INFO Remoting: Starting remoting
15/10/24 00:59:20 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://driverPropsFetcher@ip-10-169-170-124.ec2.internal:40736]
15/10/24 00:59:20 INFO util.Utils: Successfully started service 'driverPropsFetcher' on port 40736.
15/10/24 00:59:20 INFO spark.SecurityManager: Changing view acls to: yarn,hadoop
15/10/24 00:59:20 INFO spark.SecurityManager: Changing modify acls to: yarn,hadoop
15/10/24 00:59:20 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(yarn, hadoop); users with modify permissions: Set(yarn, hadoop)
15/10/24 00:59:20 INFO remote.RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
15/10/24 00:59:20 INFO remote.RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.
15/10/24 00:59:20 INFO slf4j.Slf4jLogger: Slf4jLogger started
15/10/24 00:59:20 INFO Remoting: Starting remoting
15/10/24 00:59:20 INFO remote.RemoteActorRefProvider$RemotingTerminator: Remoting shut down.
15/10/24 00:59:20 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkExecutor@ip-10-169-170-124.ec2.internal:60697]
15/10/24 00:59:20 INFO util.Utils: Successfully started service 'sparkExecutor' on port 60697.
15/10/24 00:59:20 INFO storage.DiskBlockManager: Created local directory at /mnt/yarn/usercache/hadoop/appcache/application_1444274555723_0060/blockmgr-3ff6ca35-5095-4cd6-b25a-269b090e2467
15/10/24 00:59:20 INFO storage.DiskBlockManager: Created local directory at /mnt1/yarn/usercache/hadoop/appcache/application_1444274555723_0060/blockmgr-b8802568-f375-4fef-8136-1f414f29d30a
15/10/24 00:59:20 INFO storage.MemoryStore: MemoryStore started with capacity 535.0 MB
15/10/24 00:59:21 INFO executor.CoarseGrainedExecutorBackend: Connecting to driver: akka.tcp://sparkDriver@10.169.170.124:56031/user/CoarseGrainedScheduler
15/10/24 00:59:21 INFO executor.CoarseGrainedExecutorBackend: Successfully registered with driver
15/10/24 00:59:21 INFO executor.Executor: Starting executor ID 2 on host ip-10-169-170-124.ec2.internal
15/10/24 00:59:21 INFO util.Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 35981.
15/10/24 00:59:21 INFO netty.NettyBlockTransferService: Server created on 35981
15/10/24 00:59:21 INFO storage.BlockManagerMaster: Trying to register BlockManager
15/10/24 00:59:21 INFO storage.BlockManagerMaster: Registered BlockManager
15/10/24 00:59:21 INFO storage.BlockManager: Registering executor with local external shuffle service.
15/10/24 00:59:39 INFO executor.CoarseGrainedExecutorBackend: Driver commanded a shutdown
15/10/24 00:59:39 INFO storage.MemoryStore: MemoryStore cleared
15/10/24 00:59:39 INFO storage.BlockManager: BlockManager stopped
15/10/24 00:59:39 INFO remote.RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
15/10/24 00:59:39 INFO remote.RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.
15/10/24 00:59:39 INFO remote.RemoteActorRefProvider$RemotingTerminator: Remoting shut down.
15/10/24 00:59:39 INFO util.ShutdownHookManager: Shutdown hook called
LogType:stdout
Log Upload Time:24-Oct-2015 01:00:10
LogLength:2080
Log Contents:
2015-10-24T00:59:19.869+0000: [GC2015-10-24T00:59:19.869+0000: [ParNew: 272640K->17703K(306688K), 0.0283480 secs] 272640K->17703K(1014528K), 0.0284660 secs] [Times: user=0.04 sys=0.03, real=0.03 secs]
2015-10-24T00:59:19.898+0000: [GC [1 CMS-initial-mark: 0K(707840K)] 20539K(1014528K), 0.0056730 secs] [Times: user=0.01 sys=0.00, real=0.00 secs]
2015-10-24T00:59:19.937+0000: [CMS-concurrent-mark: 0.032/0.034 secs] [Times: user=0.07 sys=0.01, real=0.04 secs]
2015-10-24T00:59:19.940+0000: [CMS-concurrent-preclean: 0.002/0.002 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
2015-10-24T00:59:21.253+0000: [CMS-concurrent-abortable-preclean: 0.927/1.313 secs] [Times: user=2.58 sys=0.41, real=1.31 secs]
2015-10-24T00:59:21.253+0000: [GC[YG occupancy: 166257 K (306688 K)]2015-10-24T00:59:21.253+0000: [Rescan (parallel) , 0.0116090 secs]2015-10-24T00:59:21.265+0000: [weak refs processing, 0.0000340 secs]2015-10-24T00:59:21.265+0000: [class unloading, 0.0021320 secs]2015-10-24T00:59:21.267+0000: [scrub symbol table, 0.0031190 secs]2015-10-24T00:59:21.270+0000: [scrub string table, 0.0002600 secs] [1 CMS-remark: 0K(707840K)] 166257K(1014528K), 0.0174650 secs] [Times: user=0.05 sys=0.00, real=0.02 secs]
2015-10-24T00:59:21.283+0000: [CMS-concurrent-sweep: 0.009/0.012 secs] [Times: user=0.03 sys=0.00, real=0.01 secs]
2015-10-24T00:59:21.307+0000: [CMS-concurrent-reset: 0.024/0.024 secs] [Times: user=0.01 sys=0.03, real=0.02 secs]
Heap
 par new generation total 306688K, used 261236K [0x00000000b5a00000, 0x00000000ca6c0000, 0x00000000ca6c0000)
  eden space 272640K, 89% used [0x00000000b5a00000, 0x00000000c47d3300, 0x00000000c6440000)
  from space 34048K, 51% used [0x00000000c8580000, 0x00000000c96c9ee0, 0x00000000ca6c0000)
  to space 34048K, 0% used [0x00000000c6440000, 0x00000000c6440000, 0x00000000c8580000)
 concurrent mark-sweep generation total 707840K, used 0K [0x00000000ca6c0000, 0x00000000f5a00000, 0x00000000f5a00000)
 concurrent-mark-sweep perm gen total 48568K, used 34208K [0x00000000f5a00000, 0x00000000f896e000, 0x0000000100000000)
Container: container_1444274555723_0060_02_000003 on ip-10-169-170-124.ec2.internal_8041
==========================================================================================
LogType:stderr
Log Upload Time:24-Oct-2015 01:00:10
LogLength:4299
Log Contents:
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/mnt1/yarn/usercache/hadoop/filecache/110/spark-assembly-1.5.0-hadoop2.6.0-amzn-1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/lib/hadoop/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
15/10/24 00:59:49 INFO executor.CoarseGrainedExecutorBackend: Registered signal handlers for [TERM, HUP, INT]
15/10/24 00:59:50 INFO spark.SecurityManager: Changing view acls to: yarn,hadoop
15/10/24 00:59:50 INFO spark.SecurityManager: Changing modify acls to: yarn,hadoop
15/10/24 00:59:50 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(yarn, hadoop); users with modify permissions: Set(yarn, hadoop)
15/10/24 00:59:51 INFO slf4j.Slf4jLogger: Slf4jLogger started
15/10/24 00:59:51 INFO Remoting: Starting remoting
15/10/24 00:59:52 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://driverPropsFetcher@ip-10-169-170-124.ec2.internal:46765]
15/10/24 00:59:52 INFO util.Utils: Successfully started service 'driverPropsFetcher' on port 46765.
15/10/24 00:59:52 INFO spark.SecurityManager: Changing view acls to: yarn,hadoop
15/10/24 00:59:52 INFO spark.SecurityManager: Changing modify acls to: yarn,hadoop
15/10/24 00:59:52 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(yarn, hadoop); users with modify permissions: Set(yarn, hadoop)
15/10/24 00:59:52 INFO remote.RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
15/10/24 00:59:52 INFO remote.RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.
15/10/24 00:59:52 INFO slf4j.Slf4jLogger: Slf4jLogger started
15/10/24 00:59:52 INFO Remoting: Starting remoting
15/10/24 00:59:52 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkExecutor@ip-10-169-170-124.ec2.internal:40276]
15/10/24 00:59:52 INFO util.Utils: Successfully started service 'sparkExecutor' on port 40276.
15/10/24 00:59:52 INFO remote.RemoteActorRefProvider$RemotingTerminator: Remoting shut down.
15/10/24 00:59:53 INFO storage.DiskBlockManager: Created local directory at /mnt/yarn/usercache/hadoop/appcache/application_1444274555723_0060/blockmgr-c281c0ec-9b3e-4411-8f5d-c5bfe0c374c4
15/10/24 00:59:53 INFO storage.DiskBlockManager: Created local directory at /mnt1/yarn/usercache/hadoop/appcache/application_1444274555723_0060/blockmgr-5148877b-d965-4808-83d2-c358d37a48e9
15/10/24 00:59:53 INFO storage.MemoryStore: MemoryStore started with capacity 535.0 MB
15/10/24 00:59:53 INFO executor.CoarseGrainedExecutorBackend: Connecting to driver: akka.tcp://sparkDriver@10.169.170.124:58737/user/CoarseGrainedScheduler
15/10/24 00:59:53 INFO executor.CoarseGrainedExecutorBackend: Successfully registered with driver
15/10/24 00:59:53 INFO executor.Executor: Starting executor ID 2 on host ip-10-169-170-124.ec2.internal
15/10/24 00:59:53 INFO util.Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 36997.
15/10/24 00:59:53 INFO netty.NettyBlockTransferService: Server created on 36997
15/10/24 00:59:53 INFO storage.BlockManagerMaster: Trying to register BlockManager
15/10/24 00:59:53 INFO storage.BlockManagerMaster: Registered BlockManager
15/10/24 00:59:53 INFO storage.BlockManager: Registering executor with local external shuffle service.
15/10/24 01:00:09 INFO executor.CoarseGrainedExecutorBackend: Driver commanded a shutdown
15/10/24 01:00:09 INFO storage.MemoryStore: MemoryStore cleared
15/10/24 01:00:09 INFO storage.BlockManager: BlockManager stopped
15/10/24 01:00:09 INFO remote.RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
15/10/24 01:00:09 INFO remote.RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.
15/10/24 01:00:09 INFO remote.RemoteActorRefProvider$RemotingTerminator: Remoting shut down.
15/10/24 01:00:09 INFO util.ShutdownHookManager: Shutdown hook called
LogType:stdout
Log Upload Time:24-Oct-2015 01:00:10
LogLength:2080
Log Contents:
2015-10-24T00:59:51.838+0000: [GC2015-10-24T00:59:51.838+0000: [ParNew: 272640K->17630K(306688K), 0.0341150 secs] 272640K->17630K(1014528K), 0.0342320 secs] [Times: user=0.06 sys=0.02, real=0.03 secs]
2015-10-24T00:59:51.873+0000: [GC [1 CMS-initial-mark: 0K(707840K)] 20466K(1014528K), 0.0055560 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
2015-10-24T00:59:51.908+0000: [CMS-concurrent-mark: 0.028/0.029 secs] [Times: user=0.06 sys=0.01, real=0.02 secs]
2015-10-24T00:59:51.910+0000: [CMS-concurrent-preclean: 0.002/0.002 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
2015-10-24T00:59:53.355+0000: [CMS-concurrent-abortable-preclean: 1.109/1.445 secs] [Times: user=2.63 sys=0.48, real=1.44 secs]
2015-10-24T00:59:53.355+0000: [GC[YG occupancy: 171074 K (306688 K)]2015-10-24T00:59:53.355+0000: [Rescan (parallel) , 0.0133540 secs]2015-10-24T00:59:53.369+0000: [weak refs processing, 0.0000330 secs]2015-10-24T00:59:53.369+0000: [class unloading, 0.0022200 secs]2015-10-24T00:59:53.371+0000: [scrub symbol table, 0.0032920 secs]2015-10-24T00:59:53.374+0000: [scrub string table, 0.0002610 secs] [1 CMS-remark: 0K(707840K)] 171074K(1014528K), 0.0194820 secs] [Times: user=0.06 sys=0.00, real=0.02 secs]
2015-10-24T00:59:53.381+0000: [CMS-concurrent-sweep: 0.005/0.006 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
2015-10-24T00:59:53.405+0000: [CMS-concurrent-reset: 0.024/0.024 secs] [Times: user=0.02 sys=0.03, real=0.02 secs]
Heap
 par new generation total 306688K, used 257224K [0x00000000b5a00000, 0x00000000ca6c0000, 0x00000000ca6c0000)
  eden space 272640K, 87% used [0x00000000b5a00000, 0x00000000c43fa870, 0x00000000c6440000)
  from space 34048K, 51% used [0x00000000c8580000, 0x00000000c96b7a40, 0x00000000ca6c0000)
  to space 34048K, 0% used [0x00000000c6440000, 0x00000000c6440000, 0x00000000c8580000)
 concurrent mark-sweep generation total 707840K, used 0K [0x00000000ca6c0000, 0x00000000f5a00000, 0x00000000f5a00000)
 concurrent-mark-sweep perm gen total 50516K, used 34211K [0x00000000f5a00000, 0x00000000f8b55000, 0x0000000100000000)
Container: container_1444274555723_0060_02_000001 on ip-10-169-170-124.ec2.internal_8041
==========================================================================================
LogType:stderr
Log Upload Time:24-Oct-2015 01:00:10
LogLength:31793
Log Contents:
log4j:ERROR Could not read configuration file from URL [file:/etc/spark/conf/log4j.properties].
java.io.FileNotFoundException: /etc/spark/conf/log4j.properties (No such file or directory)
	at java.io.FileInputStream.open(Native Method)
	at java.io.FileInputStream.<init>(FileInputStream.java:146)
	at java.io.FileInputStream.<init>(FileInputStream.java:101)
	at sun.net.www.protocol.file.FileURLConnection.connect(FileURLConnection.java:90)
	at sun.net.www.protocol.file.FileURLConnection.getInputStream(FileURLConnection.java:188)
	at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:557)
	at org.apache.log4j.helpers.OptionConverter.selectAndConfigure(OptionConverter.java:526)
	at org.apache.log4j.LogManager.<clinit>(LogManager.java:127)
	at org.apache.spark.Logging$class.initializeLogging(Logging.scala:122)
	at org.apache.spark.Logging$class.initializeIfNecessary(Logging.scala:107)
	at org.apache.spark.Logging$class.log(Logging.scala:51)
	at org.apache.spark.deploy.yarn.ApplicationMaster$.log(ApplicationMaster.scala:603)
	at org.apache.spark.deploy.yarn.ApplicationMaster$.main(ApplicationMaster.scala:617)
	at org.apache.spark.deploy.yarn.ApplicationMaster.main(ApplicationMaster.scala)
log4j:ERROR Ignoring configuration file [file:/etc/spark/conf/log4j.properties].
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/mnt1/yarn/usercache/hadoop/filecache/110/spark-assembly-1.5.0-hadoop2.6.0-amzn-1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/lib/hadoop/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
15/10/24 00:59:42 INFO ApplicationMaster: Registered signal handlers for [TERM, HUP, INT]
15/10/24 00:59:43 INFO ApplicationMaster: ApplicationAttemptId: appattempt_1444274555723_0060_000002
15/10/24 00:59:43 INFO SecurityManager: Changing view acls to: yarn,hadoop
15/10/24 00:59:43 INFO SecurityManager: Changing modify acls to: yarn,hadoop
15/10/24 00:59:43 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(yarn, hadoop); users with modify permissions: Set(yarn, hadoop)
15/10/24 00:59:44 INFO ApplicationMaster: Starting the user application in a separate Thread
15/10/24 00:59:44 INFO ApplicationMaster: Waiting for spark context initialization
15/10/24 00:59:44 INFO ApplicationMaster: Waiting for spark context initialization ...
15/10/24 00:59:44 INFO SparkContext: Running Spark version 1.5.0
15/10/24 00:59:44 INFO SecurityManager: Changing view acls to: yarn,hadoop
15/10/24 00:59:44 INFO SecurityManager: Changing modify acls to: yarn,hadoop
15/10/24 00:59:44 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(yarn, hadoop); users with modify permissions: Set(yarn, hadoop)
15/10/24 00:59:45 INFO Slf4jLogger: Slf4jLogger started
15/10/24 00:59:45 INFO Remoting: Starting remoting
15/10/24 00:59:45 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver@10.169.170.124:58737]
15/10/24 00:59:45 INFO Utils: Successfully started service 'sparkDriver' on port 58737.
15/10/24 00:59:45 INFO SparkEnv: Registering MapOutputTracker
15/10/24 00:59:45 INFO SparkEnv: Registering BlockManagerMaster
15/10/24 00:59:45 INFO DiskBlockManager: Created local directory at /mnt/yarn/usercache/hadoop/appcache/application_1444274555723_0060/blockmgr-cbe1a974-94a6-4f57-b88b-bc381a78840f
15/10/24 00:59:45 INFO DiskBlockManager: Created local directory at /mnt1/yarn/usercache/hadoop/appcache/application_1444274555723_0060/blockmgr-0a807d69-7d8b-4779-b20a-89f9a3799ced
15/10/24 00:59:45 INFO MemoryStore: MemoryStore started with capacity 535.0 MB
15/10/24 00:59:45 INFO HttpFileServer: HTTP File server directory is /mnt/yarn/usercache/hadoop/appcache/application_1444274555723_0060/spark-0f8b4459-cdf0-4352-b219-cc85a14ff7c0/httpd-b1de396c-df3c-481f-a242-18e669652811
15/10/24 00:59:45 INFO HttpServer: Starting HTTP Server
15/10/24 00:59:45 INFO Utils: Successfully started service 'HTTP file server' on port 35875.
15/10/24 00:59:45 INFO SparkEnv: Registering OutputCommitCoordinator
15/10/24 00:59:45 INFO JettyUtils: Adding filter: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
15/10/24 00:59:45 INFO Utils: Successfully started service 'SparkUI' on port 57184.
15/10/24 00:59:45 INFO SparkUI: Started SparkUI at http://10.169.170.124:57184
15/10/24 00:59:45 INFO YarnClusterScheduler: Created YarnClusterScheduler
15/10/24 00:59:45 WARN MetricsSystem: Using default name DAGScheduler for source because spark.app.id is not set.
15/10/24 00:59:46 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 45251.
15/10/24 00:59:46 INFO NettyBlockTransferService: Server created on 45251
15/10/24 00:59:46 INFO BlockManagerMaster: Trying to register BlockManager
15/10/24 00:59:46 INFO BlockManagerMasterEndpoint: Registering block manager 10.169.170.124:45251 with 535.0 MB RAM, BlockManagerId(driver, 10.169.170.124, 45251)
15/10/24 00:59:46 INFO BlockManagerMaster: Registered BlockManager
15/10/24 00:59:47 INFO MetricsSaver: MetricsConfigRecord disabledInCluster: false instanceEngineCycleSec: 60 clusterEngineCycleSec: 60 disableClusterEngine: false maxMemoryMb: 3072 maxInstanceCount: 500 lastModified: 1444274560440
15/10/24 00:59:47 INFO MetricsSaver: Created MetricsSaver j-2US4HNPLS1SJO:i-031cded7:ApplicationMaster:05995 period:60 /mnt/var/em/raw/i-031cded7_20151024_ApplicationMaster_05995_raw.bin
15/10/24 00:59:47 INFO EventLoggingListener: Logging events to hdfs:///var/log/spark/apps/application_1444274555723_0060_2
15/10/24 00:59:47 INFO YarnSchedulerBackend$YarnSchedulerEndpoint: ApplicationMaster registered as AkkaRpcEndpointRef(Actor[akka://sparkDriver/user/YarnAM#-1530242383])
15/10/24 00:59:47 INFO RMProxy: Connecting to ResourceManager at ip-10-65-200-150.ec2.internal/10.65.200.150:8030
15/10/24 00:59:47 INFO YarnRMClient: Registering the ApplicationMaster
15/10/24 00:59:47 INFO YarnAllocator: Will request 2 executor containers, each with 1 cores and 1408 MB memory including 384 MB overhead
15/10/24 00:59:47 INFO YarnAllocator: Container request (host: Any, capability: <memory:1408, vCores:1>)
15/10/24 00:59:47 INFO YarnAllocator: Container request (host: Any, capability: <memory:1408, vCores:1>)
15/10/24 00:59:47 INFO ApplicationMaster: Started progress reporter thread with (heartbeat : 3000, initial allocation : 200) intervals
15/10/24 00:59:48 INFO AMRMClientImpl: Received new token for : ip-10-67-169-247.ec2.internal:8041
15/10/24 00:59:48 INFO AMRMClientImpl: Received new token for : ip-10-169-170-124.ec2.internal:8041
15/10/24 00:59:48 INFO YarnAllocator: Launching container container_1444274555723_0060_02_000002 for on host ip-10-67-169-247.ec2.internal
15/10/24 00:59:48 INFO YarnAllocator: Launching ExecutorRunnable. driverUrl: akka.tcp://sparkDriver@10.169.170.124:58737/user/CoarseGrainedScheduler, executorHostname: ip-10-67-169-247.ec2.internal
15/10/24 00:59:48 INFO YarnAllocator: Launching container container_1444274555723_0060_02_000003 for on host ip-10-169-170-124.ec2.internal
15/10/24 00:59:48 INFO ExecutorRunnable: Starting Executor Container
15/10/24 00:59:48 INFO YarnAllocator: Launching ExecutorRunnable. driverUrl: akka.tcp://sparkDriver@10.169.170.124:58737/user/CoarseGrainedScheduler, executorHostname: ip-10-169-170-124.ec2.internal
15/10/24 00:59:48 INFO YarnAllocator: Received 2 containers from YARN, launching executors on 2 of them.
15/10/24 00:59:48 INFO ExecutorRunnable: Starting Executor Container
15/10/24 00:59:48 INFO ContainerManagementProtocolProxy: yarn.client.max-cached-nodemanagers-proxies : 0
15/10/24 00:59:48 INFO ExecutorRunnable: Setting up ContainerLaunchContext
15/10/24 00:59:48 INFO ContainerManagementProtocolProxy: yarn.client.max-cached-nodemanagers-proxies : 0
15/10/24 00:59:48 INFO ExecutorRunnable: Setting up ContainerLaunchContext
15/10/24 00:59:48 INFO ExecutorRunnable: Preparing Local resources
15/10/24 00:59:48 INFO ExecutorRunnable: Preparing Local resources
15/10/24 00:59:48 INFO ExecutorRunnable: Prepared Local resources Map(__app__.jar -> resource { scheme: "hdfs" host: "ip-10-65-200-150.ec2.internal" port: 8020 file: "/user/hadoop/.sparkStaging/application_1444274555723_0060/Prometheus-assembly-0.0.1.jar" } size: 162982755 timestamp: 1445648346401 type: FILE visibility: PRIVATE, __spark__.jar -> resource { scheme: "hdfs" host: "ip-10-65-200-150.ec2.internal" port: 8020 file: "/user/hadoop/.sparkStaging/application_1444274555723_0060/spark-assembly-1.5.0-hadoop2.6.0-amzn-1.jar" } size: 206949550 timestamp: 1445648345028 type: FILE visibility: PRIVATE)
15/10/24 00:59:48 INFO ExecutorRunnable: Prepared Local resources Map(__app__.jar -> resource { scheme: "hdfs" host: "ip-10-65-200-150.ec2.internal" port: 8020 file: "/user/hadoop/.sparkStaging/application_1444274555723_0060/Prometheus-assembly-0.0.1.jar" } size: 162982755 timestamp: 1445648346401 type: FILE visibility: PRIVATE, __spark__.jar -> resource { scheme: "hdfs" host: "ip-10-65-200-150.ec2.internal" port: 8020 file: "/user/hadoop/.sparkStaging/application_1444274555723_0060/spark-assembly-1.5.0-hadoop2.6.0-amzn-1.jar" } size: 206949550 timestamp: 1445648345028 type: FILE visibility: PRIVATE)
15/10/24 00:59:48 INFO ExecutorRunnable:
===============================================================================
YARN executor launch context:
  env:
    CLASSPATH -> /etc/hadoop/conf:/etc/hive/conf:/usr/lib/hadoop/*:/usr/lib/hadoop-hdfs/*:/usr/lib/hadoop-mapreduce/*:/usr/lib/hadoop-yarn/*:/usr/lib/hadoop-lzo/lib/*:/usr/share/aws/emr/emrfs/conf:/usr/share/aws/emr/emrfs/lib/*:/usr/share/aws/emr/emrfs/auxlib/*<CPS>{{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/*<CPS>$HADOOP_COMMON_HOME/lib/*<CPS>$HADOOP_HDFS_HOME/*<CPS>$HADOOP_HDFS_HOME/lib/*<CPS>$HADOOP_MAPRED_HOME/*<CPS>$HADOOP_MAPRED_HOME/lib/*<CPS>$HADOOP_YARN_HOME/*<CPS>$HADOOP_YARN_HOME/lib/*<CPS>/usr/lib/hadoop-lzo/lib/*<CPS>/usr/share/aws/emr/emrfs/conf<CPS>/usr/share/aws/emr/emrfs/lib/*<CPS>/usr/share/aws/emr/emrfs/auxlib/*<CPS>/usr/share/aws/emr/lib/*<CPS>/usr/share/aws/emr/ddb/lib/emr-ddb-hadoop.jar<CPS>/usr/share/aws/emr/goodies/lib/emr-hadoop-goodies.jar<CPS>/usr/share/aws/emr/kinesis/lib/emr-kinesis-hadoop.jar<CPS>/usr/share/aws/emr/cloudwatch-sink/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*<CPS>/usr/lib/hadoop-lzo/lib/*<CPS>/usr/share/aws/emr/emrfs/conf<CPS>/usr/share/aws/emr/emrfs/lib/*<CPS>/usr/share/aws/emr/emrfs/auxlib/*<CPS>/usr/share/aws/emr/lib/*<CPS>/usr/share/aws/emr/ddb/lib/emr-ddb-hadoop.jar<CPS>/usr/share/aws/emr/goodies/lib/emr-hadoop-goodies.jar<CPS>/usr/share/aws/emr/kinesis/lib/emr-kinesis-hadoop.jar<CPS>/usr/share/aws/emr/cloudwatch-sink/lib/*
    SPARK_LOG_URL_STDERR -> http://ip-10-169-170-124.ec2.internal:8042/node/containerlogs/container_1444274555723_0060_02_000003/hadoop/stderr?start=-4096
    SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1444274555723_0060
    SPARK_YARN_CACHE_FILES_FILE_SIZES -> 206949550,162982755
    SPARK_USER -> hadoop
    SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE
    SPARK_YARN_MODE -> true
    SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1445648345028,1445648346401
    SPARK_LOG_URL_STDOUT -> http://ip-10-169-170-124.ec2.internal:8042/node/containerlogs/container_1444274555723_0060_02_000003/hadoop/stdout?start=-4096
    SPARK_YARN_CACHE_FILES -> hdfs://ip-10-65-200-150.ec2.internal:8020/user/hadoop/.sparkStaging/application_1444274555723_0060/spark-assembly-1.5.0-hadoop2.6.0-amzn-1.jar#__spark__.jar,hdfs://ip-10-65-200-150.ec2.internal:8020/user/hadoop/.sparkStaging/application_1444274555723_0060/Prometheus-assembly-0.0.1.jar#__app__.jar
  command:
    LD_LIBRARY_PATH="/usr/lib/hadoop/lib/native:/usr/lib/hadoop-lzo/lib/native:$LD_LIBRARY_PATH" {{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms1024m -Xmx1024m '-verbose:gc' '-XX:+PrintGCDetails' '-XX:+PrintGCDateStamps' '-XX:+UseConcMarkSweepGC' '-XX:CMSInitiatingOccupancyFraction=70' '-XX:MaxHeapFreeRatio=70' '-XX:+CMSClassUnloadingEnabled' '-XX:OnOutOfMemoryError=kill -9 %p' -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.driver.port=58737' '-Dspark.history.ui.port=18080' '-Dspark.ui.port=0' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://sparkDriver@10.169.170.124:58737/user/CoarseGrainedScheduler --executor-id 2 --hostname ip-10-169-170-124.ec2.internal --cores 1 --app-id application_1444274555723_0060 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
===============================================================================
15/10/24 00:59:48 INFO ExecutorRunnable:
===============================================================================
YARN executor launch context:
  env:
    CLASSPATH -> /etc/hadoop/conf:/etc/hive/conf:/usr/lib/hadoop/*:/usr/lib/hadoop-hdfs/*:/usr/lib/hadoop-mapreduce/*:/usr/lib/hadoop-yarn/*:/usr/lib/hadoop-lzo/lib/*:/usr/share/aws/emr/emrfs/conf:/usr/share/aws/emr/emrfs/lib/*:/usr/share/aws/emr/emrfs/auxlib/*<CPS>{{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/*<CPS>$HADOOP_COMMON_HOME/lib/*<CPS>$HADOOP_HDFS_HOME/*<CPS>$HADOOP_HDFS_HOME/lib/*<CPS>$HADOOP_MAPRED_HOME/*<CPS>$HADOOP_MAPRED_HOME/lib/*<CPS>$HADOOP_YARN_HOME/*<CPS>$HADOOP_YARN_HOME/lib/*<CPS>/usr/lib/hadoop-lzo/lib/*<CPS>/usr/share/aws/emr/emrfs/conf<CPS>/usr/share/aws/emr/emrfs/lib/*<CPS>/usr/share/aws/emr/emrfs/auxlib/*<CPS>/usr/share/aws/emr/lib/*<CPS>/usr/share/aws/emr/ddb/lib/emr-ddb-hadoop.jar<CPS>/usr/share/aws/emr/goodies/lib/emr-hadoop-goodies.jar<CPS>/usr/share/aws/emr/kinesis/lib/emr-kinesis-hadoop.jar<CPS>/usr/share/aws/emr/cloudwatch-sink/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*<CPS>/usr/lib/hadoop-lzo/lib/*<CPS>/usr/share/aws/emr/emrfs/conf<CPS>/usr/share/aws/emr/emrfs/lib/*<CPS>/usr/share/aws/emr/emrfs/auxlib/*<CPS>/usr/share/aws/emr/lib/*<CPS>/usr/share/aws/emr/ddb/lib/emr-ddb-hadoop.jar<CPS>/usr/share/aws/emr/goodies/lib/emr-hadoop-goodies.jar<CPS>/usr/share/aws/emr/kinesis/lib/emr-kinesis-hadoop.jar<CPS>/usr/share/aws/emr/cloudwatch-sink/lib/*
    SPARK_LOG_URL_STDERR -> http://ip-10-67-169-247.ec2.internal:8042/node/containerlogs/container_1444274555723_0060_02_000002/hadoop/stderr?start=-4096
    SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1444274555723_0060
    SPARK_YARN_CACHE_FILES_FILE_SIZES -> 206949550,162982755
    SPARK_USER -> hadoop
    SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE
    SPARK_YARN_MODE -> true
    SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1445648345028,1445648346401
    SPARK_LOG_URL_STDOUT -> http://ip-10-67-169-247.ec2.internal:8042/node/containerlogs/container_1444274555723_0060_02_000002/hadoop/stdout?start=-4096
    SPARK_YARN_CACHE_FILES -> hdfs://ip-10-65-200-150.ec2.internal:8020/user/hadoop/.sparkStaging/application_1444274555723_0060/spark-assembly-1.5.0-hadoop2.6.0-amzn-1.jar#__spark__.jar,hdfs://ip-10-65-200-150.ec2.internal:8020/user/hadoop/.sparkStaging/application_1444274555723_0060/Prometheus-assembly-0.0.1.jar#__app__.jar
  command:
    LD_LIBRARY_PATH="/usr/lib/hadoop/lib/native:/usr/lib/hadoop-lzo/lib/native:$LD_LIBRARY_PATH" {{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms1024m -Xmx1024m '-verbose:gc' '-XX:+PrintGCDetails' '-XX:+PrintGCDateStamps' '-XX:+UseConcMarkSweepGC' '-XX:CMSInitiatingOccupancyFraction=70' '-XX:MaxHeapFreeRatio=70' '-XX:+CMSClassUnloadingEnabled' '-XX:OnOutOfMemoryError=kill -9 %p' -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.driver.port=58737' '-Dspark.history.ui.port=18080' '-Dspark.ui.port=0' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://sparkDriver@10.169.170.124:58737/user/CoarseGrainedScheduler --executor-id 1 --hostname ip-10-67-169-247.ec2.internal --cores 1 --app-id application_1444274555723_0060 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
===============================================================================
15/10/24 00:59:48 INFO ContainerManagementProtocolProxy: Opening proxy : ip-10-67-169-247.ec2.internal:8041
15/10/24 00:59:48 INFO ContainerManagementProtocolProxy: Opening proxy : ip-10-169-170-124.ec2.internal:8041
15/10/24 00:59:52 INFO ApplicationMaster$AMEndpoint: Driver terminated or disconnected! Shutting down. ip-10-169-170-124.ec2.internal:46765
15/10/24 00:59:52 INFO ApplicationMaster$AMEndpoint: Driver terminated or disconnected! Shutting down. ip-10-67-169-247.ec2.internal:35370
15/10/24 00:59:53 INFO YarnClusterSchedulerBackend: Registered executor: AkkaRpcEndpointRef(Actor[akka.tcp://sparkExecutor@ip-10-67-169-247.ec2.internal:41906/user/Executor#1152761469]) with ID 1
15/10/24 00:59:53 INFO YarnClusterSchedulerBackend: Registered executor: AkkaRpcEndpointRef(Actor[akka.tcp://sparkExecutor@ip-10-169-170-124.ec2.internal:40276/user/Executor#1539380902]) with ID 2
15/10/24 00:59:53 INFO YarnClusterSchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.8
15/10/24 00:59:53 INFO YarnClusterScheduler: YarnClusterScheduler.postStartHook done
15/10/24 00:59:53 INFO BlockManagerMasterEndpoint: Registering block manager ip-10-67-169-247.ec2.internal:37764 with 535.0 MB RAM, BlockManagerId(1, ip-10-67-169-247.ec2.internal, 37764)
15/10/24 00:59:53 INFO BlockManagerMasterEndpoint: Registering block manager ip-10-169-170-124.ec2.internal:36997 with 535.0 MB RAM, BlockManagerId(2, ip-10-169-170-124.ec2.internal, 36997)
15/10/24 00:59:54 INFO HiveContext: Initializing execution hive, version 1.2.1
15/10/24 00:59:54 INFO ClientWrapper: Inspected Hadoop version: 2.6.0-amzn-1
15/10/24 00:59:54 INFO ClientWrapper: Loaded org.apache.hadoop.hive.shims.Hadoop23Shims for Hadoop version 2.6.0-amzn-1
15/10/24 00:59:55 INFO HiveMetaStore: 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
15/10/24 00:59:55 INFO ObjectStore: ObjectStore, initialize called
15/10/24 00:59:55 INFO Persistence: Property datanucleus.cache.level2 unknown - will be ignored
15/10/24 00:59:55 INFO Persistence: Property hive.metastore.integral.jdo.pushdown unknown - will be ignored
15/10/24 00:59:57 INFO ObjectStore: Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order"
15/10/24 00:59:59 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
15/10/24 00:59:59 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table. | |
15/10/24 01:00:00 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table. | |
15/10/24 01:00:00 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table. | |
15/10/24 01:00:01 INFO MetaStoreDirectSql: Using direct SQL, underlying DB is DERBY | |
15/10/24 01:00:01 INFO ObjectStore: Initialized ObjectStore | |
15/10/24 01:00:01 WARN ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 1.2.0 | |
15/10/24 01:00:01 WARN ObjectStore: Failed to get database default, returning NoSuchObjectException | |
15/10/24 01:00:01 INFO HiveMetaStore: Added admin role in metastore | |
15/10/24 01:00:01 INFO HiveMetaStore: Added public role in metastore | |
15/10/24 01:00:01 INFO HiveMetaStore: No user is added in admin role, since config is empty | |
15/10/24 01:00:01 INFO HiveMetaStore: 0: get_all_databases | |
15/10/24 01:00:01 INFO audit: ugi=hadoop ip=unknown-ip-addr cmd=get_all_databases | |
15/10/24 01:00:01 INFO HiveMetaStore: 0: get_functions: db=default pat=* | |
15/10/24 01:00:01 INFO audit: ugi=hadoop ip=unknown-ip-addr cmd=get_functions: db=default pat=* | |
15/10/24 01:00:01 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MResourceUri" is tagged as "embedded-only" so does not have its own datastore table. | |
15/10/24 01:00:02 INFO SessionState: Created local directory: /mnt1/yarn/usercache/hadoop/appcache/application_1444274555723_0060/container_1444274555723_0060_02_000001/tmp/yarn | |
15/10/24 01:00:02 INFO SessionState: Created local directory: /mnt1/yarn/usercache/hadoop/appcache/application_1444274555723_0060/container_1444274555723_0060_02_000001/tmp/d13dcd03-ee7c-49ca-9ddc-908b43f20c25_resources | |
15/10/24 01:00:02 INFO SessionState: Created HDFS directory: /tmp/hive/hadoop/d13dcd03-ee7c-49ca-9ddc-908b43f20c25 | |
15/10/24 01:00:02 INFO SessionState: Created local directory: /mnt1/yarn/usercache/hadoop/appcache/application_1444274555723_0060/container_1444274555723_0060_02_000001/tmp/yarn/d13dcd03-ee7c-49ca-9ddc-908b43f20c25 | |
15/10/24 01:00:02 INFO SessionState: Created HDFS directory: /tmp/hive/hadoop/d13dcd03-ee7c-49ca-9ddc-908b43f20c25/_tmp_space.db | |
15/10/24 01:00:02 INFO HiveContext: default warehouse location is /user/hive/warehouse | |
15/10/24 01:00:02 INFO HiveContext: Initializing HiveMetastoreConnection version 1.2.1 using Spark classes. | |
15/10/24 01:00:02 INFO ClientWrapper: Inspected Hadoop version: 2.4.0 | |
15/10/24 01:00:02 INFO ClientWrapper: Loaded org.apache.hadoop.hive.shims.Hadoop23Shims for Hadoop version 2.4.0 | |
15/10/24 01:00:02 WARN Configuration: mapred-site.xml:an attempt to override final parameter: mapreduce.cluster.local.dir; Ignoring. | |
15/10/24 01:00:02 WARN Configuration: mapred-site.xml:an attempt to override final parameter: mapreduce.cluster.local.dir; Ignoring. | |
15/10/24 01:00:02 WARN Configuration: mapred-site.xml:an attempt to override final parameter: mapreduce.cluster.local.dir; Ignoring. | |
15/10/24 01:00:03 WARN Configuration: mapred-site.xml:an attempt to override final parameter: mapreduce.cluster.local.dir; Ignoring. | |
15/10/24 01:00:03 WARN Configuration: mapred-site.xml:an attempt to override final parameter: mapreduce.cluster.local.dir; Ignoring. | |
15/10/24 01:00:03 WARN Configuration: mapred-site.xml:an attempt to override final parameter: mapreduce.cluster.local.dir; Ignoring. | |
15/10/24 01:00:03 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable | |
15/10/24 01:00:03 INFO HiveMetaStore: 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore | |
15/10/24 01:00:03 INFO ObjectStore: ObjectStore, initialize called | |
15/10/24 01:00:03 INFO Persistence: Property datanucleus.cache.level2 unknown - will be ignored | |
15/10/24 01:00:03 INFO Persistence: Property hive.metastore.integral.jdo.pushdown unknown - will be ignored | |
15/10/24 01:00:05 WARN Configuration: mapred-site.xml:an attempt to override final parameter: mapreduce.cluster.local.dir; Ignoring. | |
15/10/24 01:00:05 WARN Configuration: mapred-site.xml:an attempt to override final parameter: mapreduce.cluster.local.dir; Ignoring. | |
15/10/24 01:00:05 WARN Configuration: mapred-site.xml:an attempt to override final parameter: mapreduce.cluster.local.dir; Ignoring. | |
15/10/24 01:00:05 INFO ObjectStore: Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order" | |
15/10/24 01:00:06 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table. | |
15/10/24 01:00:06 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table. | |
15/10/24 01:00:07 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table. | |
15/10/24 01:00:07 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table. | |
15/10/24 01:00:08 INFO MetaStoreDirectSql: Using direct SQL, underlying DB is DERBY | |
15/10/24 01:00:08 INFO ObjectStore: Initialized ObjectStore | |
15/10/24 01:00:08 WARN ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 1.2.0 | |
15/10/24 01:00:08 WARN ObjectStore: Failed to get database default, returning NoSuchObjectException | |
15/10/24 01:00:08 WARN Configuration: mapred-site.xml:an attempt to override final parameter: mapreduce.cluster.local.dir; Ignoring. | |
15/10/24 01:00:08 INFO HiveMetaStore: Added admin role in metastore | |
15/10/24 01:00:08 INFO HiveMetaStore: Added public role in metastore | |
15/10/24 01:00:08 INFO HiveMetaStore: No user is added in admin role, since config is empty | |
15/10/24 01:00:08 INFO HiveMetaStore: 0: get_all_databases | |
15/10/24 01:00:08 INFO audit: ugi=yarn ip=unknown-ip-addr cmd=get_all_databases | |
15/10/24 01:00:08 INFO HiveMetaStore: 0: get_functions: db=default pat=* | |
15/10/24 01:00:08 INFO audit: ugi=yarn ip=unknown-ip-addr cmd=get_functions: db=default pat=* | |
15/10/24 01:00:08 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MResourceUri" is tagged as "embedded-only" so does not have its own datastore table. | |
15/10/24 01:00:09 WARN Configuration: mapred-site.xml:an attempt to override final parameter: mapreduce.cluster.local.dir; Ignoring. | |
15/10/24 01:00:09 INFO SessionState: Created local directory: /mnt1/yarn/usercache/hadoop/appcache/application_1444274555723_0060/container_1444274555723_0060_02_000001/tmp/7ab3d08d-3a7a-4d2b-9dc0-fb4d645e8448_resources | |
15/10/24 01:00:09 INFO SessionState: Created HDFS directory: /tmp/hive/yarn/7ab3d08d-3a7a-4d2b-9dc0-fb4d645e8448 | |
15/10/24 01:00:09 INFO SessionState: Created local directory: /mnt1/yarn/usercache/hadoop/appcache/application_1444274555723_0060/container_1444274555723_0060_02_000001/tmp/yarn/7ab3d08d-3a7a-4d2b-9dc0-fb4d645e8448 | |
15/10/24 01:00:09 INFO SessionState: Created HDFS directory: /tmp/hive/yarn/7ab3d08d-3a7a-4d2b-9dc0-fb4d645e8448/_tmp_space.db | |
15/10/24 01:00:09 ERROR ApplicationMaster: User class threw exception: java.nio.charset.MalformedInputException: Input length = 1 | |
java.nio.charset.MalformedInputException: Input length = 1 | |
at java.nio.charset.CoderResult.throwException(CoderResult.java:277) | |
at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:338) | |
at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:177) | |
at java.io.InputStreamReader.read(InputStreamReader.java:184) | |
at java.io.BufferedReader.fill(BufferedReader.java:154) | |
at java.io.BufferedReader.read(BufferedReader.java:175) | |
at scala.io.BufferedSource$$anonfun$iter$1$$anonfun$apply$mcI$sp$1.apply$mcI$sp(BufferedSource.scala:38) | |
at scala.io.Codec.wrap(Codec.scala:68) | |
at scala.io.BufferedSource$$anonfun$iter$1.apply(BufferedSource.scala:38) | |
at scala.io.BufferedSource$$anonfun$iter$1.apply(BufferedSource.scala:38) | |
at scala.collection.Iterator$$anon$9.next(Iterator.scala:162) | |
at scala.collection.Iterator$$anon$17.hasNext(Iterator.scala:511) | |
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) | |
at scala.io.Source.hasNext(Source.scala:226) | |
at scala.collection.Iterator$class.foreach(Iterator.scala:727) | |
at scala.io.Source.foreach(Source.scala:178) | |
at scala.collection.TraversableOnce$class.addString(TraversableOnce.scala:320) | |
at scala.io.Source.addString(Source.scala:178) | |
at scala.collection.TraversableOnce$class.mkString(TraversableOnce.scala:286) | |
at scala.io.Source.mkString(Source.scala:178) | |
at scala.collection.TraversableOnce$class.mkString(TraversableOnce.scala:288) | |
at scala.io.Source.mkString(Source.scala:178) | |
at scala.collection.TraversableOnce$class.mkString(TraversableOnce.scala:290) | |
at scala.io.Source.mkString(Source.scala:178) | |
at com.truex.prometheus.SparkContextExtension$ExtendedSparkContext.loadHttp(ExtendedSparkContext.scala:47) | |
at com.truex.prometheus.SparkContextExtension$ExtendedSparkContext.read(ExtendedSparkContext.scala:53) | |
at com.truex.prometheus.Job.read(Job.scala:42) | |
at com.truex.prometheus.CLIJob$$anon$1.execute(CLIJob.scala:91) | |
at com.truex.prometheus.CLIJob$.main(CLIJob.scala:122) | |
at com.truex.prometheus.CLIJob.main(CLIJob.scala) | |
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) | |
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) | |
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) | |
at java.lang.reflect.Method.invoke(Method.java:606) | |
at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:525) | |
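The `MalformedInputException` above is thrown while `scala.io.Source` decodes the HTTP response inside `loadHttp`: the default codec is strict, so any byte sequence that is invalid for the assumed charset aborts `mkString`. A minimal sketch of a lenient read is below; the source of `loadHttp` is not shown in this log, and `CharsetSafeRead`, `lenientCodec`, and `readBytes` are hypothetical names for illustration only.

```scala
import java.nio.charset.{Charset, CodingErrorAction}
import scala.io.{Codec, Source}

object CharsetSafeRead {
  // Build a codec that substitutes U+FFFD for malformed byte sequences
  // instead of throwing java.nio.charset.MalformedInputException.
  def lenientCodec(name: String = "UTF-8"): Codec =
    Codec(Charset.forName(name))
      .onMalformedInput(CodingErrorAction.REPLACE)
      .onUnmappableCharacter(CodingErrorAction.REPLACE)

  // Decode a byte array the way Source(...).mkString would,
  // but without aborting on bad input.
  def readBytes(bytes: Array[Byte]): String =
    Source.fromBytes(bytes)(lenientCodec()).mkString
}
```

The equivalent change in the application would be passing such a `Codec` explicitly when the source is opened (e.g. to `Source.fromURL`), or decoding with the charset declared in the response's `Content-Type` header rather than the JVM default.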
15/10/24 01:00:09 INFO ApplicationMaster: Final app status: FAILED, exitCode: 15, (reason: User class threw exception: java.nio.charset.MalformedInputException: Input length = 1)
15/10/24 01:00:09 INFO SparkContext: Invoking stop() from shutdown hook
15/10/24 01:00:09 INFO SparkUI: Stopped Spark web UI at http://10.169.170.124:57184
15/10/24 01:00:09 INFO DAGScheduler: Stopping DAGScheduler
15/10/24 01:00:09 INFO YarnClusterSchedulerBackend: Shutting down all executors
15/10/24 01:00:09 INFO YarnClusterSchedulerBackend: Asking each executor to shut down
15/10/24 01:00:09 INFO ApplicationMaster$AMEndpoint: Driver terminated or disconnected! Shutting down. ip-10-67-169-247.ec2.internal:41906
15/10/24 01:00:09 INFO ApplicationMaster$AMEndpoint: Driver terminated or disconnected! Shutting down. ip-10-169-170-124.ec2.internal:40276
15/10/24 01:00:09 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
15/10/24 01:00:09 INFO MemoryStore: MemoryStore cleared
15/10/24 01:00:09 INFO BlockManager: BlockManager stopped
15/10/24 01:00:09 INFO BlockManagerMaster: BlockManagerMaster stopped
15/10/24 01:00:09 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
15/10/24 01:00:09 INFO SparkContext: Successfully stopped SparkContext
15/10/24 01:00:09 INFO ApplicationMaster: Unregistering ApplicationMaster with FAILED (diag message: User class threw exception: java.nio.charset.MalformedInputException: Input length = 1)
15/10/24 01:00:09 INFO RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
15/10/24 01:00:09 INFO RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.
15/10/24 01:00:09 INFO AMRMClientImpl: Waiting for application to be successfully unregistered.
15/10/24 01:00:09 INFO RemoteActorRefProvider$RemotingTerminator: Remoting shut down.
15/10/24 01:00:09 INFO ApplicationMaster: Deleting staging directory .sparkStaging/application_1444274555723_0060
15/10/24 01:00:09 INFO ShutdownHookManager: Shutdown hook called
15/10/24 01:00:09 INFO ShutdownHookManager: Deleting directory /mnt/yarn/usercache/hadoop/appcache/application_1444274555723_0060/spark-0f8b4459-cdf0-4352-b219-cc85a14ff7c0
15/10/24 01:00:09 INFO ShutdownHookManager: Deleting directory /mnt1/yarn/usercache/hadoop/appcache/application_1444274555723_0060/container_1444274555723_0060_02_000001/tmp/spark-a27deaa7-9efe-4e0a-ae16-29d78b7d6d8f
15/10/24 01:00:09 INFO ShutdownHookManager: Deleting directory /mnt1/yarn/usercache/hadoop/appcache/application_1444274555723_0060/spark-69571eef-899b-4d79-bb8a-43c3dc9c0b94
LogType:stdout
Log Upload Time:24-Oct-2015 01:00:10
LogLength:0
Log Contents:
Container: container_1444274555723_0060_01_000001 on ip-10-169-170-124.ec2.internal_8041
==========================================================================================
LogType:stderr
Log Upload Time:24-Oct-2015 01:00:10
LogLength:31306
Log Contents:
log4j:ERROR Could not read configuration file from URL [file:/etc/spark/conf/log4j.properties].
java.io.FileNotFoundException: /etc/spark/conf/log4j.properties (No such file or directory)
	at java.io.FileInputStream.open(Native Method)
	at java.io.FileInputStream.<init>(FileInputStream.java:146)
	at java.io.FileInputStream.<init>(FileInputStream.java:101)
	at sun.net.www.protocol.file.FileURLConnection.connect(FileURLConnection.java:90)
	at sun.net.www.protocol.file.FileURLConnection.getInputStream(FileURLConnection.java:188)
	at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:557)
	at org.apache.log4j.helpers.OptionConverter.selectAndConfigure(OptionConverter.java:526)
	at org.apache.log4j.LogManager.<clinit>(LogManager.java:127)
	at org.apache.spark.Logging$class.initializeLogging(Logging.scala:122)
	at org.apache.spark.Logging$class.initializeIfNecessary(Logging.scala:107)
	at org.apache.spark.Logging$class.log(Logging.scala:51)
	at org.apache.spark.deploy.yarn.ApplicationMaster$.log(ApplicationMaster.scala:603)
	at org.apache.spark.deploy.yarn.ApplicationMaster.main(ApplicationMaster.scala)
log4j:ERROR Ignoring configuration file [file:/etc/spark/conf/log4j.properties].
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
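The `FileNotFoundException` above shows the ApplicationMaster falling back to Spark's built-in log4j profile because `/etc/spark/conf/log4j.properties` is missing on that node. A minimal sketch of such a file follows; the levels and layout are illustrative assumptions, not recovered from the cluster:

```properties
# Minimal /etc/spark/conf/log4j.properties (illustrative values)
log4j.rootCategory=INFO, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n
```

The `ConversionPattern` shown matches the timestamp format of the lines in this log ("15/10/24 00:59:10 INFO ..."), which is also what Spark's default profile produces.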
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/mnt1/yarn/usercache/hadoop/filecache/110/spark-assembly-1.5.0-hadoop2.6.0-amzn-1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/lib/hadoop/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
15/10/24 00:59:10 INFO ApplicationMaster: Registered signal handlers for [TERM, HUP, INT]
15/10/24 00:59:11 INFO ApplicationMaster: ApplicationAttemptId: appattempt_1444274555723_0060_000001
15/10/24 00:59:12 INFO SecurityManager: Changing view acls to: yarn,hadoop
15/10/24 00:59:12 INFO SecurityManager: Changing modify acls to: yarn,hadoop
15/10/24 00:59:12 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(yarn, hadoop); users with modify permissions: Set(yarn, hadoop)
15/10/24 00:59:12 INFO ApplicationMaster: Starting the user application in a separate Thread
15/10/24 00:59:12 INFO ApplicationMaster: Waiting for spark context initialization
15/10/24 00:59:12 INFO ApplicationMaster: Waiting for spark context initialization ...
15/10/24 00:59:12 INFO SparkContext: Running Spark version 1.5.0
15/10/24 00:59:12 INFO SecurityManager: Changing view acls to: yarn,hadoop
15/10/24 00:59:12 INFO SecurityManager: Changing modify acls to: yarn,hadoop
15/10/24 00:59:12 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(yarn, hadoop); users with modify permissions: Set(yarn, hadoop)
15/10/24 00:59:13 INFO Slf4jLogger: Slf4jLogger started
15/10/24 00:59:13 INFO Remoting: Starting remoting
15/10/24 00:59:13 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver@10.169.170.124:56031]
15/10/24 00:59:13 INFO Utils: Successfully started service 'sparkDriver' on port 56031.
15/10/24 00:59:13 INFO SparkEnv: Registering MapOutputTracker
15/10/24 00:59:13 INFO SparkEnv: Registering BlockManagerMaster
15/10/24 00:59:13 INFO DiskBlockManager: Created local directory at /mnt/yarn/usercache/hadoop/appcache/application_1444274555723_0060/blockmgr-344a79a0-1a84-47f3-ba7c-8282851b6be3
15/10/24 00:59:13 INFO DiskBlockManager: Created local directory at /mnt1/yarn/usercache/hadoop/appcache/application_1444274555723_0060/blockmgr-592333e8-7ced-4770-89cd-63047861606f
15/10/24 00:59:13 INFO MemoryStore: MemoryStore started with capacity 535.0 MB
15/10/24 00:59:13 INFO HttpFileServer: HTTP File server directory is /mnt/yarn/usercache/hadoop/appcache/application_1444274555723_0060/spark-6785d390-86d3-4581-8141-96387ad182c8/httpd-48a8042a-1809-430d-b206-68071941415b
15/10/24 00:59:13 INFO HttpServer: Starting HTTP Server
15/10/24 00:59:13 INFO Utils: Successfully started service 'HTTP file server' on port 58624.
15/10/24 00:59:13 INFO SparkEnv: Registering OutputCommitCoordinator
15/10/24 00:59:13 INFO JettyUtils: Adding filter: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
15/10/24 00:59:13 INFO Utils: Successfully started service 'SparkUI' on port 49200.
15/10/24 00:59:13 INFO SparkUI: Started SparkUI at http://10.169.170.124:49200
15/10/24 00:59:13 INFO YarnClusterScheduler: Created YarnClusterScheduler
15/10/24 00:59:13 WARN MetricsSystem: Using default name DAGScheduler for source because spark.app.id is not set.
15/10/24 00:59:14 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 58827.
15/10/24 00:59:14 INFO NettyBlockTransferService: Server created on 58827
15/10/24 00:59:14 INFO BlockManagerMaster: Trying to register BlockManager
15/10/24 00:59:14 INFO BlockManagerMasterEndpoint: Registering block manager 10.169.170.124:58827 with 535.0 MB RAM, BlockManagerId(driver, 10.169.170.124, 58827)
15/10/24 00:59:14 INFO BlockManagerMaster: Registered BlockManager
15/10/24 00:59:15 INFO MetricsSaver: MetricsConfigRecord disabledInCluster: false instanceEngineCycleSec: 60 clusterEngineCycleSec: 60 disableClusterEngine: false maxMemoryMb: 3072 maxInstanceCount: 500 lastModified: 1444274560440
15/10/24 00:59:15 INFO MetricsSaver: Created MetricsSaver j-2US4HNPLS1SJO:i-031cded7:ApplicationMaster:05753 period:60 /mnt/var/em/raw/i-031cded7_20151024_ApplicationMaster_05753_raw.bin
15/10/24 00:59:15 INFO EventLoggingListener: Logging events to hdfs:///var/log/spark/apps/application_1444274555723_0060_1
15/10/24 00:59:15 INFO YarnSchedulerBackend$YarnSchedulerEndpoint: ApplicationMaster registered as AkkaRpcEndpointRef(Actor[akka://sparkDriver/user/YarnAM#1204633727])
15/10/24 00:59:15 INFO RMProxy: Connecting to ResourceManager at ip-10-65-200-150.ec2.internal/10.65.200.150:8030
15/10/24 00:59:15 INFO YarnRMClient: Registering the ApplicationMaster
15/10/24 00:59:15 INFO YarnAllocator: Will request 2 executor containers, each with 1 cores and 1408 MB memory including 384 MB overhead
15/10/24 00:59:15 INFO YarnAllocator: Container request (host: Any, capability: <memory:1408, vCores:1>)
15/10/24 00:59:15 INFO YarnAllocator: Container request (host: Any, capability: <memory:1408, vCores:1>)
15/10/24 00:59:15 INFO ApplicationMaster: Started progress reporter thread with (heartbeat : 3000, initial allocation : 200) intervals
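The 1408 MB container size requested above follows Spark on YARN's default overhead rule: executor memory plus the larger of 384 MB and 10% of executor memory. A sketch of that arithmetic, assuming Spark 1.5 defaults (`YarnMemory` is a hypothetical name for illustration):

```scala
object YarnMemory {
  // Spark 1.5 defaults for spark.yarn.executor.memoryOverhead:
  // a 384 MB floor, or 10% of executor memory if that is larger.
  val MinOverheadMb    = 384
  val OverheadFraction = 0.10

  // Container size requested from YARN: executor heap plus overhead.
  def containerMemoryMb(executorMemoryMb: Int): Int =
    executorMemoryMb + math.max(MinOverheadMb, (OverheadFraction * executorMemoryMb).toInt)
}
```

With the 1024 MB heap from the launch command (`-Xmx1024m`), the 384 MB floor applies, giving the "1408 MB memory including 384 MB overhead" seen in the allocation request.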
15/10/24 00:59:16 INFO AMRMClientImpl: Received new token for : ip-10-67-169-247.ec2.internal:8041
15/10/24 00:59:16 INFO AMRMClientImpl: Received new token for : ip-10-169-170-124.ec2.internal:8041
15/10/24 00:59:16 INFO YarnAllocator: Launching container container_1444274555723_0060_01_000002 for on host ip-10-67-169-247.ec2.internal
15/10/24 00:59:16 INFO YarnAllocator: Launching ExecutorRunnable. driverUrl: akka.tcp://sparkDriver@10.169.170.124:56031/user/CoarseGrainedScheduler, executorHostname: ip-10-67-169-247.ec2.internal
15/10/24 00:59:16 INFO YarnAllocator: Launching container container_1444274555723_0060_01_000003 for on host ip-10-169-170-124.ec2.internal
15/10/24 00:59:16 INFO ExecutorRunnable: Starting Executor Container
15/10/24 00:59:16 INFO YarnAllocator: Launching ExecutorRunnable. driverUrl: akka.tcp://sparkDriver@10.169.170.124:56031/user/CoarseGrainedScheduler, executorHostname: ip-10-169-170-124.ec2.internal
15/10/24 00:59:16 INFO YarnAllocator: Received 2 containers from YARN, launching executors on 2 of them.
15/10/24 00:59:16 INFO ExecutorRunnable: Starting Executor Container
15/10/24 00:59:16 INFO ContainerManagementProtocolProxy: yarn.client.max-cached-nodemanagers-proxies : 0
15/10/24 00:59:16 INFO ContainerManagementProtocolProxy: yarn.client.max-cached-nodemanagers-proxies : 0
15/10/24 00:59:16 INFO ExecutorRunnable: Setting up ContainerLaunchContext
15/10/24 00:59:16 INFO ExecutorRunnable: Setting up ContainerLaunchContext
15/10/24 00:59:16 INFO ExecutorRunnable: Preparing Local resources
15/10/24 00:59:16 INFO ExecutorRunnable: Preparing Local resources
15/10/24 00:59:16 INFO ExecutorRunnable: Prepared Local resources Map(__app__.jar -> resource { scheme: "hdfs" host: "ip-10-65-200-150.ec2.internal" port: 8020 file: "/user/hadoop/.sparkStaging/application_1444274555723_0060/Prometheus-assembly-0.0.1.jar" } size: 162982755 timestamp: 1445648346401 type: FILE visibility: PRIVATE, __spark__.jar -> resource { scheme: "hdfs" host: "ip-10-65-200-150.ec2.internal" port: 8020 file: "/user/hadoop/.sparkStaging/application_1444274555723_0060/spark-assembly-1.5.0-hadoop2.6.0-amzn-1.jar" } size: 206949550 timestamp: 1445648345028 type: FILE visibility: PRIVATE)
15/10/24 00:59:16 INFO ExecutorRunnable: Prepared Local resources Map(__app__.jar -> resource { scheme: "hdfs" host: "ip-10-65-200-150.ec2.internal" port: 8020 file: "/user/hadoop/.sparkStaging/application_1444274555723_0060/Prometheus-assembly-0.0.1.jar" } size: 162982755 timestamp: 1445648346401 type: FILE visibility: PRIVATE, __spark__.jar -> resource { scheme: "hdfs" host: "ip-10-65-200-150.ec2.internal" port: 8020 file: "/user/hadoop/.sparkStaging/application_1444274555723_0060/spark-assembly-1.5.0-hadoop2.6.0-amzn-1.jar" } size: 206949550 timestamp: 1445648345028 type: FILE visibility: PRIVATE)
15/10/24 00:59:16 INFO ExecutorRunnable:
===============================================================================
YARN executor launch context:
env:
CLASSPATH -> /etc/hadoop/conf:/etc/hive/conf:/usr/lib/hadoop/*:/usr/lib/hadoop-hdfs/*:/usr/lib/hadoop-mapreduce/*:/usr/lib/hadoop-yarn/*:/usr/lib/hadoop-lzo/lib/*:/usr/share/aws/emr/emrfs/conf:/usr/share/aws/emr/emrfs/lib/*:/usr/share/aws/emr/emrfs/auxlib/*<CPS>{{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/*<CPS>$HADOOP_COMMON_HOME/lib/*<CPS>$HADOOP_HDFS_HOME/*<CPS>$HADOOP_HDFS_HOME/lib/*<CPS>$HADOOP_MAPRED_HOME/*<CPS>$HADOOP_MAPRED_HOME/lib/*<CPS>$HADOOP_YARN_HOME/*<CPS>$HADOOP_YARN_HOME/lib/*<CPS>/usr/lib/hadoop-lzo/lib/*<CPS>/usr/share/aws/emr/emrfs/conf<CPS>/usr/share/aws/emr/emrfs/lib/*<CPS>/usr/share/aws/emr/emrfs/auxlib/*<CPS>/usr/share/aws/emr/lib/*<CPS>/usr/share/aws/emr/ddb/lib/emr-ddb-hadoop.jar<CPS>/usr/share/aws/emr/goodies/lib/emr-hadoop-goodies.jar<CPS>/usr/share/aws/emr/kinesis/lib/emr-kinesis-hadoop.jar<CPS>/usr/share/aws/emr/cloudwatch-sink/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*<CPS>/usr/lib/hadoop-lzo/lib/*<CPS>/usr/share/aws/emr/emrfs/conf<CPS>/usr/share/aws/emr/emrfs/lib/*<CPS>/usr/share/aws/emr/emrfs/auxlib/*<CPS>/usr/share/aws/emr/lib/*<CPS>/usr/share/aws/emr/ddb/lib/emr-ddb-hadoop.jar<CPS>/usr/share/aws/emr/goodies/lib/emr-hadoop-goodies.jar<CPS>/usr/share/aws/emr/kinesis/lib/emr-kinesis-hadoop.jar<CPS>/usr/share/aws/emr/cloudwatch-sink/lib/*
SPARK_LOG_URL_STDERR -> http://ip-10-169-170-124.ec2.internal:8042/node/containerlogs/container_1444274555723_0060_01_000003/hadoop/stderr?start=-4096
SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1444274555723_0060
SPARK_YARN_CACHE_FILES_FILE_SIZES -> 206949550,162982755
SPARK_USER -> hadoop
SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE
SPARK_YARN_MODE -> true
SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1445648345028,1445648346401
SPARK_LOG_URL_STDOUT -> http://ip-10-169-170-124.ec2.internal:8042/node/containerlogs/container_1444274555723_0060_01_000003/hadoop/stdout?start=-4096
SPARK_YARN_CACHE_FILES -> hdfs://ip-10-65-200-150.ec2.internal:8020/user/hadoop/.sparkStaging/application_1444274555723_0060/spark-assembly-1.5.0-hadoop2.6.0-amzn-1.jar#__spark__.jar,hdfs://ip-10-65-200-150.ec2.internal:8020/user/hadoop/.sparkStaging/application_1444274555723_0060/Prometheus-assembly-0.0.1.jar#__app__.jar
command:
LD_LIBRARY_PATH="/usr/lib/hadoop/lib/native:/usr/lib/hadoop-lzo/lib/native:$LD_LIBRARY_PATH" {{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms1024m -Xmx1024m '-verbose:gc' '-XX:+PrintGCDetails' '-XX:+PrintGCDateStamps' '-XX:+UseConcMarkSweepGC' '-XX:CMSInitiatingOccupancyFraction=70' '-XX:MaxHeapFreeRatio=70' '-XX:+CMSClassUnloadingEnabled' '-XX:OnOutOfMemoryError=kill -9 %p' -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.driver.port=56031' '-Dspark.history.ui.port=18080' '-Dspark.ui.port=0' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://sparkDriver@10.169.170.124:56031/user/CoarseGrainedScheduler --executor-id 2 --hostname ip-10-169-170-124.ec2.internal --cores 1 --app-id application_1444274555723_0060 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
===============================================================================
15/10/24 00:59:16 INFO ExecutorRunnable:
===============================================================================
YARN executor launch context:
env:
CLASSPATH -> /etc/hadoop/conf:/etc/hive/conf:/usr/lib/hadoop/*:/usr/lib/hadoop-hdfs/*:/usr/lib/hadoop-mapreduce/*:/usr/lib/hadoop-yarn/*:/usr/lib/hadoop-lzo/lib/*:/usr/share/aws/emr/emrfs/conf:/usr/share/aws/emr/emrfs/lib/*:/usr/share/aws/emr/emrfs/auxlib/*<CPS>{{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/*<CPS>$HADOOP_COMMON_HOME/lib/*<CPS>$HADOOP_HDFS_HOME/*<CPS>$HADOOP_HDFS_HOME/lib/*<CPS>$HADOOP_MAPRED_HOME/*<CPS>$HADOOP_MAPRED_HOME/lib/*<CPS>$HADOOP_YARN_HOME/*<CPS>$HADOOP_YARN_HOME/lib/*<CPS>/usr/lib/hadoop-lzo/lib/*<CPS>/usr/share/aws/emr/emrfs/conf<CPS>/usr/share/aws/emr/emrfs/lib/*<CPS>/usr/share/aws/emr/emrfs/auxlib/*<CPS>/usr/share/aws/emr/lib/*<CPS>/usr/share/aws/emr/ddb/lib/emr-ddb-hadoop.jar<CPS>/usr/share/aws/emr/goodies/lib/emr-hadoop-goodies.jar<CPS>/usr/share/aws/emr/kinesis/lib/emr-kinesis-hadoop.jar<CPS>/usr/share/aws/emr/cloudwatch-sink/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*<CPS>/usr/lib/hadoop-lzo/lib/*<CPS>/usr/share/aws/emr/emrfs/conf<CPS>/usr/share/aws/emr/emrfs/lib/*<CPS>/usr/share/aws/emr/emrfs/auxlib/*<CPS>/usr/share/aws/emr/lib/*<CPS>/usr/share/aws/emr/ddb/lib/emr-ddb-hadoop.jar<CPS>/usr/share/aws/emr/goodies/lib/emr-hadoop-goodies.jar<CPS>/usr/share/aws/emr/kinesis/lib/emr-kinesis-hadoop.jar<CPS>/usr/share/aws/emr/cloudwatch-sink/lib/*
SPARK_LOG_URL_STDERR -> http://ip-10-67-169-247.ec2.internal:8042/node/containerlogs/container_1444274555723_0060_01_000002/hadoop/stderr?start=-4096
SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1444274555723_0060
SPARK_YARN_CACHE_FILES_FILE_SIZES -> 206949550,162982755
SPARK_USER -> hadoop
SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE
SPARK_YARN_MODE -> true
SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1445648345028,1445648346401
SPARK_LOG_URL_STDOUT -> http://ip-10-67-169-247.ec2.internal:8042/node/containerlogs/container_1444274555723_0060_01_000002/hadoop/stdout?start=-4096
SPARK_YARN_CACHE_FILES -> hdfs://ip-10-65-200-150.ec2.internal:8020/user/hadoop/.sparkStaging/application_1444274555723_0060/spark-assembly-1.5.0-hadoop2.6.0-amzn-1.jar#__spark__.jar,hdfs://ip-10-65-200-150.ec2.internal:8020/user/hadoop/.sparkStaging/application_1444274555723_0060/Prometheus-assembly-0.0.1.jar#__app__.jar
command:
LD_LIBRARY_PATH="/usr/lib/hadoop/lib/native:/usr/lib/hadoop-lzo/lib/native:$LD_LIBRARY_PATH" {{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms1024m -Xmx1024m '-verbose:gc' '-XX:+PrintGCDetails' '-XX:+PrintGCDateStamps' '-XX:+UseConcMarkSweepGC' '-XX:CMSInitiatingOccupancyFraction=70' '-XX:MaxHeapFreeRatio=70' '-XX:+CMSClassUnloadingEnabled' '-XX:OnOutOfMemoryError=kill -9 %p' -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.driver.port=56031' '-Dspark.history.ui.port=18080' '-Dspark.ui.port=0' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://sparkDriver@10.169.170.124:56031/user/CoarseGrainedScheduler --executor-id 1 --hostname ip-10-67-169-247.ec2.internal --cores 1 --app-id application_1444274555723_0060 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
===============================================================================
15/10/24 00:59:16 INFO ContainerManagementProtocolProxy: Opening proxy : ip-10-169-170-124.ec2.internal:8041
15/10/24 00:59:16 INFO ContainerManagementProtocolProxy: Opening proxy : ip-10-67-169-247.ec2.internal:8041
15/10/24 00:59:20 INFO ApplicationMaster$AMEndpoint: Driver terminated or disconnected! Shutting down. ip-10-169-170-124.ec2.internal:40736
15/10/24 00:59:21 INFO YarnClusterSchedulerBackend: Registered executor: AkkaRpcEndpointRef(Actor[akka.tcp://sparkExecutor@ip-10-169-170-124.ec2.internal:60697/user/Executor#421111002]) with ID 2
15/10/24 00:59:21 INFO BlockManagerMasterEndpoint: Registering block manager ip-10-169-170-124.ec2.internal:35981 with 535.0 MB RAM, BlockManagerId(2, ip-10-169-170-124.ec2.internal, 35981)
15/10/24 00:59:21 INFO ApplicationMaster$AMEndpoint: Driver terminated or disconnected! Shutting down. ip-10-67-169-247.ec2.internal:41558
15/10/24 00:59:22 INFO YarnClusterSchedulerBackend: Registered executor: AkkaRpcEndpointRef(Actor[akka.tcp://sparkExecutor@ip-10-67-169-247.ec2.internal:54031/user/Executor#1464001382]) with ID 1
15/10/24 00:59:22 INFO YarnClusterSchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.8
15/10/24 00:59:22 INFO YarnClusterScheduler: YarnClusterScheduler.postStartHook done
15/10/24 00:59:22 INFO BlockManagerMasterEndpoint: Registering block manager ip-10-67-169-247.ec2.internal:45837 with 535.0 MB RAM, BlockManagerId(1, ip-10-67-169-247.ec2.internal, 45837)
15/10/24 00:59:23 INFO HiveContext: Initializing execution hive, version 1.2.1
15/10/24 00:59:23 INFO ClientWrapper: Inspected Hadoop version: 2.6.0-amzn-1
15/10/24 00:59:23 INFO ClientWrapper: Loaded org.apache.hadoop.hive.shims.Hadoop23Shims for Hadoop version 2.6.0-amzn-1
15/10/24 00:59:24 INFO HiveMetaStore: 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
15/10/24 00:59:24 INFO ObjectStore: ObjectStore, initialize called | |
15/10/24 00:59:24 INFO Persistence: Property datanucleus.cache.level2 unknown - will be ignored | |
15/10/24 00:59:24 INFO Persistence: Property hive.metastore.integral.jdo.pushdown unknown - will be ignored | |
15/10/24 00:59:26 INFO ObjectStore: Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order" | |
15/10/24 00:59:28 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table. | |
15/10/24 00:59:28 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table. | |
15/10/24 00:59:29 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table. | |
15/10/24 00:59:29 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table. | |
15/10/24 00:59:30 INFO MetaStoreDirectSql: Using direct SQL, underlying DB is DERBY | |
15/10/24 00:59:30 INFO ObjectStore: Initialized ObjectStore | |
15/10/24 00:59:30 WARN ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 1.2.0 | |
15/10/24 00:59:30 WARN ObjectStore: Failed to get database default, returning NoSuchObjectException | |
15/10/24 00:59:30 INFO HiveMetaStore: Added admin role in metastore | |
15/10/24 00:59:30 INFO HiveMetaStore: Added public role in metastore | |
15/10/24 00:59:30 INFO HiveMetaStore: No user is added in admin role, since config is empty | |
15/10/24 00:59:30 INFO HiveMetaStore: 0: get_all_databases | |
15/10/24 00:59:30 INFO audit: ugi=hadoop ip=unknown-ip-addr cmd=get_all_databases | |
15/10/24 00:59:30 INFO HiveMetaStore: 0: get_functions: db=default pat=* | |
15/10/24 00:59:30 INFO audit: ugi=hadoop ip=unknown-ip-addr cmd=get_functions: db=default pat=* | |
15/10/24 00:59:30 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MResourceUri" is tagged as "embedded-only" so does not have its own datastore table. | |
15/10/24 00:59:31 INFO SessionState: Created local directory: /mnt1/yarn/usercache/hadoop/appcache/application_1444274555723_0060/container_1444274555723_0060_01_000001/tmp/yarn | |
15/10/24 00:59:31 INFO SessionState: Created local directory: /mnt1/yarn/usercache/hadoop/appcache/application_1444274555723_0060/container_1444274555723_0060_01_000001/tmp/a8f16260-ffba-4589-a94e-5ecc914206f1_resources | |
15/10/24 00:59:31 INFO SessionState: Created HDFS directory: /tmp/hive/hadoop/a8f16260-ffba-4589-a94e-5ecc914206f1 | |
15/10/24 00:59:31 INFO SessionState: Created local directory: /mnt1/yarn/usercache/hadoop/appcache/application_1444274555723_0060/container_1444274555723_0060_01_000001/tmp/yarn/a8f16260-ffba-4589-a94e-5ecc914206f1 | |
15/10/24 00:59:31 INFO SessionState: Created HDFS directory: /tmp/hive/hadoop/a8f16260-ffba-4589-a94e-5ecc914206f1/_tmp_space.db | |
15/10/24 00:59:31 INFO HiveContext: default warehouse location is /user/hive/warehouse | |
15/10/24 00:59:31 INFO HiveContext: Initializing HiveMetastoreConnection version 1.2.1 using Spark classes. | |
15/10/24 00:59:31 INFO ClientWrapper: Inspected Hadoop version: 2.4.0 | |
15/10/24 00:59:31 INFO ClientWrapper: Loaded org.apache.hadoop.hive.shims.Hadoop23Shims for Hadoop version 2.4.0 | |
15/10/24 00:59:31 WARN Configuration: mapred-site.xml:an attempt to override final parameter: mapreduce.cluster.local.dir; Ignoring. | |
15/10/24 00:59:31 WARN Configuration: mapred-site.xml:an attempt to override final parameter: mapreduce.cluster.local.dir; Ignoring. | |
15/10/24 00:59:31 WARN Configuration: mapred-site.xml:an attempt to override final parameter: mapreduce.cluster.local.dir; Ignoring. | |
15/10/24 00:59:32 WARN Configuration: mapred-site.xml:an attempt to override final parameter: mapreduce.cluster.local.dir; Ignoring. | |
15/10/24 00:59:32 WARN Configuration: mapred-site.xml:an attempt to override final parameter: mapreduce.cluster.local.dir; Ignoring. | |
15/10/24 00:59:32 WARN Configuration: mapred-site.xml:an attempt to override final parameter: mapreduce.cluster.local.dir; Ignoring. | |
15/10/24 00:59:32 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable | |
15/10/24 00:59:32 INFO HiveMetaStore: 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore | |
15/10/24 00:59:32 INFO ObjectStore: ObjectStore, initialize called | |
15/10/24 00:59:32 INFO Persistence: Property datanucleus.cache.level2 unknown - will be ignored | |
15/10/24 00:59:32 INFO Persistence: Property hive.metastore.integral.jdo.pushdown unknown - will be ignored | |
15/10/24 00:59:34 WARN Configuration: mapred-site.xml:an attempt to override final parameter: mapreduce.cluster.local.dir; Ignoring. | |
15/10/24 00:59:34 WARN Configuration: mapred-site.xml:an attempt to override final parameter: mapreduce.cluster.local.dir; Ignoring. | |
15/10/24 00:59:34 WARN Configuration: mapred-site.xml:an attempt to override final parameter: mapreduce.cluster.local.dir; Ignoring. | |
15/10/24 00:59:34 INFO ObjectStore: Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order" | |
15/10/24 00:59:35 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table. | |
15/10/24 00:59:35 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table. | |
15/10/24 00:59:37 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table. | |
15/10/24 00:59:37 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table. | |
15/10/24 00:59:37 INFO MetaStoreDirectSql: Using direct SQL, underlying DB is DERBY | |
15/10/24 00:59:37 INFO ObjectStore: Initialized ObjectStore | |
15/10/24 00:59:37 WARN ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 1.2.0 | |
15/10/24 00:59:37 WARN ObjectStore: Failed to get database default, returning NoSuchObjectException | |
15/10/24 00:59:37 WARN Configuration: mapred-site.xml:an attempt to override final parameter: mapreduce.cluster.local.dir; Ignoring. | |
15/10/24 00:59:38 INFO HiveMetaStore: Added admin role in metastore | |
15/10/24 00:59:38 INFO HiveMetaStore: Added public role in metastore | |
15/10/24 00:59:38 INFO HiveMetaStore: No user is added in admin role, since config is empty | |
15/10/24 00:59:39 INFO HiveMetaStore: 0: get_all_databases | |
15/10/24 00:59:39 INFO audit: ugi=yarn ip=unknown-ip-addr cmd=get_all_databases | |
15/10/24 00:59:39 INFO HiveMetaStore: 0: get_functions: db=default pat=* | |
15/10/24 00:59:39 INFO audit: ugi=yarn ip=unknown-ip-addr cmd=get_functions: db=default pat=* | |
15/10/24 00:59:39 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MResourceUri" is tagged as "embedded-only" so does not have its own datastore table. | |
15/10/24 00:59:39 WARN Configuration: mapred-site.xml:an attempt to override final parameter: mapreduce.cluster.local.dir; Ignoring. | |
15/10/24 00:59:39 INFO SessionState: Created local directory: /mnt1/yarn/usercache/hadoop/appcache/application_1444274555723_0060/container_1444274555723_0060_01_000001/tmp/04ebfa6b-67cb-40f3-a117-30e130670060_resources | |
15/10/24 00:59:39 INFO SessionState: Created HDFS directory: /tmp/hive/yarn/04ebfa6b-67cb-40f3-a117-30e130670060 | |
15/10/24 00:59:39 INFO SessionState: Created local directory: /mnt1/yarn/usercache/hadoop/appcache/application_1444274555723_0060/container_1444274555723_0060_01_000001/tmp/yarn/04ebfa6b-67cb-40f3-a117-30e130670060 | |
15/10/24 00:59:39 INFO SessionState: Created HDFS directory: /tmp/hive/yarn/04ebfa6b-67cb-40f3-a117-30e130670060/_tmp_space.db | |
15/10/24 00:59:39 ERROR ApplicationMaster: User class threw exception: java.nio.charset.MalformedInputException: Input length = 1 | |
java.nio.charset.MalformedInputException: Input length = 1 | |
at java.nio.charset.CoderResult.throwException(CoderResult.java:277) | |
at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:338) | |
at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:177) | |
at java.io.InputStreamReader.read(InputStreamReader.java:184) | |
at java.io.BufferedReader.fill(BufferedReader.java:154) | |
at java.io.BufferedReader.read(BufferedReader.java:175) | |
at scala.io.BufferedSource$$anonfun$iter$1$$anonfun$apply$mcI$sp$1.apply$mcI$sp(BufferedSource.scala:38) | |
at scala.io.Codec.wrap(Codec.scala:68) | |
at scala.io.BufferedSource$$anonfun$iter$1.apply(BufferedSource.scala:38) | |
at scala.io.BufferedSource$$anonfun$iter$1.apply(BufferedSource.scala:38) | |
at scala.collection.Iterator$$anon$9.next(Iterator.scala:162) | |
at scala.collection.Iterator$$anon$17.hasNext(Iterator.scala:511) | |
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) | |
at scala.io.Source.hasNext(Source.scala:226) | |
at scala.collection.Iterator$class.foreach(Iterator.scala:727) | |
at scala.io.Source.foreach(Source.scala:178) | |
at scala.collection.TraversableOnce$class.addString(TraversableOnce.scala:320) | |
at scala.io.Source.addString(Source.scala:178) | |
at scala.collection.TraversableOnce$class.mkString(TraversableOnce.scala:286) | |
at scala.io.Source.mkString(Source.scala:178) | |
at scala.collection.TraversableOnce$class.mkString(TraversableOnce.scala:288) | |
at scala.io.Source.mkString(Source.scala:178) | |
at scala.collection.TraversableOnce$class.mkString(TraversableOnce.scala:290) | |
at scala.io.Source.mkString(Source.scala:178) | |
at com.truex.prometheus.SparkContextExtension$ExtendedSparkContext.loadHttp(ExtendedSparkContext.scala:47) | |
at com.truex.prometheus.SparkContextExtension$ExtendedSparkContext.read(ExtendedSparkContext.scala:53) | |
at com.truex.prometheus.Job.read(Job.scala:42) | |
at com.truex.prometheus.CLIJob$$anon$1.execute(CLIJob.scala:91) | |
at com.truex.prometheus.CLIJob$.main(CLIJob.scala:122) | |
at com.truex.prometheus.CLIJob.main(CLIJob.scala) | |
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) | |
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) | |
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) | |
at java.lang.reflect.Method.invoke(Method.java:606) | |
at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:525) | |
15/10/24 00:59:39 INFO ApplicationMaster: Final app status: FAILED, exitCode: 15, (reason: User class threw exception: java.nio.charset.MalformedInputException: Input length = 1) | |
15/10/24 00:59:39 INFO SparkContext: Invoking stop() from shutdown hook | |
15/10/24 00:59:39 INFO SparkUI: Stopped Spark web UI at http://10.169.170.124:49200 | |
15/10/24 00:59:39 INFO DAGScheduler: Stopping DAGScheduler | |
15/10/24 00:59:39 INFO YarnClusterSchedulerBackend: Shutting down all executors | |
15/10/24 00:59:39 INFO YarnClusterSchedulerBackend: Asking each executor to shut down | |
15/10/24 00:59:39 INFO ApplicationMaster$AMEndpoint: Driver terminated or disconnected! Shutting down. ip-10-67-169-247.ec2.internal:54031 | |
15/10/24 00:59:39 INFO ApplicationMaster$AMEndpoint: Driver terminated or disconnected! Shutting down. ip-10-169-170-124.ec2.internal:60697 | |
15/10/24 00:59:39 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped! | |
15/10/24 00:59:39 INFO MemoryStore: MemoryStore cleared | |
15/10/24 00:59:39 INFO BlockManager: BlockManager stopped | |
15/10/24 00:59:39 INFO BlockManagerMaster: BlockManagerMaster stopped | |
15/10/24 00:59:39 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped! | |
15/10/24 00:59:39 INFO SparkContext: Successfully stopped SparkContext | |
15/10/24 00:59:39 INFO ShutdownHookManager: Shutdown hook called | |
15/10/24 00:59:39 INFO RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon. | |
15/10/24 00:59:39 INFO ShutdownHookManager: Deleting directory /mnt1/yarn/usercache/hadoop/appcache/application_1444274555723_0060/container_1444274555723_0060_01_000001/tmp/spark-27aaeecc-b209-4a91-b98d-1d9353941d82 | |
15/10/24 00:59:39 INFO RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports. | |
15/10/24 00:59:39 INFO ShutdownHookManager: Deleting directory /mnt/yarn/usercache/hadoop/appcache/application_1444274555723_0060/spark-6785d390-86d3-4581-8141-96387ad182c8 | |
15/10/24 00:59:39 INFO ShutdownHookManager: Deleting directory /mnt1/yarn/usercache/hadoop/appcache/application_1444274555723_0060/spark-a3ee8e8a-6e24-4959-9a80-fbc8a5a47e39 | |
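Note on the failure above: the `MalformedInputException: Input length = 1` is thrown from `sun.nio.cs.StreamDecoder` while `loadHttp` reads a response through `scala.io.Source.mkString`, which suggests the fetched bytes are not valid in the charset the default codec expects (typically UTF-8). One common workaround, sketched below in plain Java since the failing API is `java.nio.charset`, is to decode with a lenient `CharsetDecoder` that replaces malformed sequences instead of reporting them. The class and method names here are illustrative, not part of the Prometheus job's actual code:

```java
import java.io.UncheckedIOException;
import java.nio.ByteBuffer;
import java.nio.charset.CharacterCodingException;
import java.nio.charset.CharsetDecoder;
import java.nio.charset.CodingErrorAction;
import java.nio.charset.StandardCharsets;

public class LenientDecode {
    // Decode possibly malformed bytes without throwing MalformedInputException:
    // invalid sequences become U+FFFD (the Unicode replacement character)
    // instead of aborting the read, which is what the strict default
    // (CodingErrorAction.REPORT) does in the stack trace above.
    static String decodeLenient(byte[] bytes) {
        CharsetDecoder decoder = StandardCharsets.UTF_8.newDecoder()
                .onMalformedInput(CodingErrorAction.REPLACE)
                .onUnmappableCharacter(CodingErrorAction.REPLACE);
        try {
            return decoder.decode(ByteBuffer.wrap(bytes)).toString();
        } catch (CharacterCodingException e) {
            // Unreachable with REPLACE configured, but decode() declares it.
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        // "Hi" followed by 0xFF, a byte that is never valid in UTF-8.
        byte[] bad = {0x48, 0x69, (byte) 0xFF};
        System.out.println(decodeLenient(bad)); // "Hi" plus the replacement char
    }
}
```

The Scala equivalent would be passing an explicit `scala.io.Codec` with the same `CodingErrorAction.REPLACE` settings to `Source.fromInputStream`; either way the fix is to decide how malformed input should be handled rather than letting the strict default abort the job.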
LogType:stdout | |
Log Upload Time:24-Oct-2015 01:00:10 | |
LogLength:0 | |
Log Contents: | |
15/10/24 01:00:36 INFO metrics.MetricsSaver: MetricsConfigRecord disabledInCluster: false instanceEngineCycleSec: 60 clusterEngineCycleSec: 60 disableClusterEngine: false maxMemoryMb: 3072 maxInstanceCount: 500 lastModified: 1444274560440 | |
15/10/24 01:00:36 INFO metrics.MetricsSaver: Created MetricsSaver j-2US4HNPLS1SJO:i-131cdec7:LogsCLI:28781 period:60 /mnt/var/em/raw/i-131cdec7_20151024_LogsCLI_28781_raw.bin | |
Container: container_1444274555723_0060_02_000002 on ip-10-67-169-247.ec2.internal_8041 | |
========================================================================================= | |
LogType:stderr | |
Log Upload Time:24-Oct-2015 01:00:11 | |
LogLength:4296 | |
Log Contents: | |
SLF4J: Class path contains multiple SLF4J bindings. | |
SLF4J: Found binding in [jar:file:/mnt1/yarn/usercache/hadoop/filecache/109/spark-assembly-1.5.0-hadoop2.6.0-amzn-1.jar!/org/slf4j/impl/StaticLoggerBinder.class] | |
SLF4J: Found binding in [jar:file:/usr/lib/hadoop/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class] | |
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation. | |
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory] | |
15/10/24 00:59:50 INFO executor.CoarseGrainedExecutorBackend: Registered signal handlers for [TERM, HUP, INT] | |
15/10/24 00:59:50 INFO spark.SecurityManager: Changing view acls to: yarn,hadoop | |
15/10/24 00:59:50 INFO spark.SecurityManager: Changing modify acls to: yarn,hadoop | |
15/10/24 00:59:50 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(yarn, hadoop); users with modify permissions: Set(yarn, hadoop) | |
15/10/24 00:59:51 INFO slf4j.Slf4jLogger: Slf4jLogger started | |
15/10/24 00:59:51 INFO Remoting: Starting remoting | |
15/10/24 00:59:52 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://driverPropsFetcher@ip-10-67-169-247.ec2.internal:35370] | |
15/10/24 00:59:52 INFO util.Utils: Successfully started service 'driverPropsFetcher' on port 35370. | |
15/10/24 00:59:52 INFO spark.SecurityManager: Changing view acls to: yarn,hadoop | |
15/10/24 00:59:52 INFO spark.SecurityManager: Changing modify acls to: yarn,hadoop | |
15/10/24 00:59:52 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(yarn, hadoop); users with modify permissions: Set(yarn, hadoop) | |
15/10/24 00:59:52 INFO remote.RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon. | |
15/10/24 00:59:52 INFO remote.RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports. | |
15/10/24 00:59:52 INFO slf4j.Slf4jLogger: Slf4jLogger started | |
15/10/24 00:59:52 INFO Remoting: Starting remoting | |
15/10/24 00:59:52 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkExecutor@ip-10-67-169-247.ec2.internal:41906] | |
15/10/24 00:59:52 INFO util.Utils: Successfully started service 'sparkExecutor' on port 41906. | |
15/10/24 00:59:52 INFO remote.RemoteActorRefProvider$RemotingTerminator: Remoting shut down. | |
15/10/24 00:59:52 INFO storage.DiskBlockManager: Created local directory at /mnt/yarn/usercache/hadoop/appcache/application_1444274555723_0060/blockmgr-2f30386d-07e2-4acd-b949-3ba344212abc | |
15/10/24 00:59:52 INFO storage.DiskBlockManager: Created local directory at /mnt1/yarn/usercache/hadoop/appcache/application_1444274555723_0060/blockmgr-f494eb94-5e26-4db6-8434-991611d83c61 | |
15/10/24 00:59:53 INFO storage.MemoryStore: MemoryStore started with capacity 535.0 MB | |
15/10/24 00:59:53 INFO executor.CoarseGrainedExecutorBackend: Connecting to driver: akka.tcp://sparkDriver@10.169.170.124:58737/user/CoarseGrainedScheduler | |
15/10/24 00:59:53 INFO executor.CoarseGrainedExecutorBackend: Successfully registered with driver | |
15/10/24 00:59:53 INFO executor.Executor: Starting executor ID 1 on host ip-10-67-169-247.ec2.internal | |
15/10/24 00:59:53 INFO util.Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 37764. | |
15/10/24 00:59:53 INFO netty.NettyBlockTransferService: Server created on 37764 | |
15/10/24 00:59:53 INFO storage.BlockManagerMaster: Trying to register BlockManager | |
15/10/24 00:59:53 INFO storage.BlockManagerMaster: Registered BlockManager | |
15/10/24 00:59:53 INFO storage.BlockManager: Registering executor with local external shuffle service. | |
15/10/24 01:00:09 INFO executor.CoarseGrainedExecutorBackend: Driver commanded a shutdown | |
15/10/24 01:00:09 INFO storage.MemoryStore: MemoryStore cleared | |
15/10/24 01:00:09 INFO storage.BlockManager: BlockManager stopped | |
15/10/24 01:00:09 INFO remote.RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon. | |
15/10/24 01:00:09 INFO remote.RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports. | |
15/10/24 01:00:09 INFO remote.RemoteActorRefProvider$RemotingTerminator: Remoting shut down. | |
15/10/24 01:00:09 INFO util.ShutdownHookManager: Shutdown hook called | |
LogType:stdout | |
Log Upload Time:24-Oct-2015 01:00:11 | |
LogLength:2080 | |
Log Contents: | |
2015-10-24T00:59:51.870+0000: [GC2015-10-24T00:59:51.870+0000: [ParNew: 272640K->17625K(306688K), 0.0274980 secs] 272640K->17625K(1014528K), 0.0276230 secs] [Times: user=0.05 sys=0.02, real=0.03 secs] | |
2015-10-24T00:59:51.898+0000: [GC [1 CMS-initial-mark: 0K(707840K)] 17625K(1014528K), 0.0054730 secs] [Times: user=0.01 sys=0.00, real=0.00 secs] | |
2015-10-24T00:59:51.935+0000: [CMS-concurrent-mark: 0.029/0.031 secs] [Times: user=0.06 sys=0.02, real=0.04 secs] | |
2015-10-24T00:59:51.936+0000: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times: user=0.00 sys=0.00, real=0.00 secs] | |
2015-10-24T00:59:53.255+0000: [CMS-concurrent-abortable-preclean: 1.046/1.319 secs] [Times: user=2.60 sys=0.56, real=1.32 secs] | |
2015-10-24T00:59:53.256+0000: [GC[YG occupancy: 163056 K (306688 K)]2015-10-24T00:59:53.256+0000: [Rescan (parallel) , 0.0156640 secs]2015-10-24T00:59:53.271+0000: [weak refs processing, 0.0000430 secs]2015-10-24T00:59:53.271+0000: [class unloading, 0.0027660 secs]2015-10-24T00:59:53.274+0000: [scrub symbol table, 0.0042750 secs]2015-10-24T00:59:53.278+0000: [scrub string table, 0.0003690 secs] [1 CMS-remark: 0K(707840K)] 163056K(1014528K), 0.0235240 secs] [Times: user=0.08 sys=0.00, real=0.02 secs] | |
2015-10-24T00:59:53.285+0000: [CMS-concurrent-sweep: 0.005/0.006 secs] [Times: user=0.01 sys=0.00, real=0.00 secs] | |
2015-10-24T00:59:53.310+0000: [CMS-concurrent-reset: 0.025/0.025 secs] [Times: user=0.05 sys=0.03, real=0.03 secs] | |
Heap | |
par new generation total 306688K, used 257263K [0x00000000b5a00000, 0x00000000ca6c0000, 0x00000000ca6c0000) | |
eden space 272640K, 87% used [0x00000000b5a00000, 0x00000000c44059c0, 0x00000000c6440000) | |
from space 34048K, 51% used [0x00000000c8580000, 0x00000000c96b6470, 0x00000000ca6c0000) | |
to space 34048K, 0% used [0x00000000c6440000, 0x00000000c6440000, 0x00000000c8580000) | |
concurrent mark-sweep generation total 707840K, used 0K [0x00000000ca6c0000, 0x00000000f5a00000, 0x00000000f5a00000) | |
concurrent-mark-sweep perm gen total 48488K, used 34217K [0x00000000f5a00000, 0x00000000f895a000, 0x0000000100000000) | |
Container: container_1444274555723_0060_01_000002 on ip-10-67-169-247.ec2.internal_8041 | |
========================================================================================= | |
LogType:stderr | |
Log Upload Time:24-Oct-2015 01:00:11 | |
LogLength:4296 | |
Log Contents: | |
SLF4J: Class path contains multiple SLF4J bindings. | |
SLF4J: Found binding in [jar:file:/mnt1/yarn/usercache/hadoop/filecache/109/spark-assembly-1.5.0-hadoop2.6.0-amzn-1.jar!/org/slf4j/impl/StaticLoggerBinder.class] | |
SLF4J: Found binding in [jar:file:/usr/lib/hadoop/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class] | |
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation. | |
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory] | |
15/10/24 00:59:18 INFO executor.CoarseGrainedExecutorBackend: Registered signal handlers for [TERM, HUP, INT] | |
15/10/24 00:59:19 INFO spark.SecurityManager: Changing view acls to: yarn,hadoop | |
15/10/24 00:59:19 INFO spark.SecurityManager: Changing modify acls to: yarn,hadoop | |
15/10/24 00:59:19 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(yarn, hadoop); users with modify permissions: Set(yarn, hadoop) | |
15/10/24 00:59:20 INFO slf4j.Slf4jLogger: Slf4jLogger started | |
15/10/24 00:59:20 INFO Remoting: Starting remoting | |
15/10/24 00:59:21 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://driverPropsFetcher@ip-10-67-169-247.ec2.internal:41558] | |
15/10/24 00:59:21 INFO util.Utils: Successfully started service 'driverPropsFetcher' on port 41558. | |
15/10/24 00:59:21 INFO spark.SecurityManager: Changing view acls to: yarn,hadoop | |
15/10/24 00:59:21 INFO spark.SecurityManager: Changing modify acls to: yarn,hadoop | |
15/10/24 00:59:21 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(yarn, hadoop); users with modify permissions: Set(yarn, hadoop) | |
15/10/24 00:59:21 INFO remote.RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon. | |
15/10/24 00:59:21 INFO remote.RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports. | |
15/10/24 00:59:21 INFO slf4j.Slf4jLogger: Slf4jLogger started | |
15/10/24 00:59:21 INFO Remoting: Starting remoting | |
15/10/24 00:59:21 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkExecutor@ip-10-67-169-247.ec2.internal:54031] | |
15/10/24 00:59:21 INFO util.Utils: Successfully started service 'sparkExecutor' on port 54031. | |
15/10/24 00:59:21 INFO remote.RemoteActorRefProvider$RemotingTerminator: Remoting shut down. | |
15/10/24 00:59:22 INFO storage.DiskBlockManager: Created local directory at /mnt/yarn/usercache/hadoop/appcache/application_1444274555723_0060/blockmgr-4a39a2f4-689c-42d8-a025-271feeb288a0 | |
15/10/24 00:59:22 INFO storage.DiskBlockManager: Created local directory at /mnt1/yarn/usercache/hadoop/appcache/application_1444274555723_0060/blockmgr-4f04c76e-2c00-4251-96c3-0fbc098c531f | |
15/10/24 00:59:22 INFO storage.MemoryStore: MemoryStore started with capacity 535.0 MB | |
15/10/24 00:59:22 INFO executor.CoarseGrainedExecutorBackend: Connecting to driver: akka.tcp://sparkDriver@10.169.170.124:56031/user/CoarseGrainedScheduler | |
15/10/24 00:59:22 INFO executor.CoarseGrainedExecutorBackend: Successfully registered with driver | |
15/10/24 00:59:22 INFO executor.Executor: Starting executor ID 1 on host ip-10-67-169-247.ec2.internal | |
15/10/24 00:59:22 INFO util.Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 45837. | |
15/10/24 00:59:22 INFO netty.NettyBlockTransferService: Server created on 45837 | |
15/10/24 00:59:22 INFO storage.BlockManagerMaster: Trying to register BlockManager | |
15/10/24 00:59:22 INFO storage.BlockManagerMaster: Registered BlockManager | |
15/10/24 00:59:22 INFO storage.BlockManager: Registering executor with local external shuffle service. | |
15/10/24 00:59:39 INFO executor.CoarseGrainedExecutorBackend: Driver commanded a shutdown | |
15/10/24 00:59:39 INFO storage.MemoryStore: MemoryStore cleared | |
15/10/24 00:59:39 INFO storage.BlockManager: BlockManager stopped | |
15/10/24 00:59:39 INFO remote.RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon. | |
15/10/24 00:59:39 INFO remote.RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports. | |
15/10/24 00:59:39 INFO remote.RemoteActorRefProvider$RemotingTerminator: Remoting shut down. | |
15/10/24 00:59:39 INFO util.ShutdownHookManager: Shutdown hook called | |
LogType:stdout | |
Log Upload Time:24-Oct-2015 01:00:11 | |
LogLength:2081 | |
Log Contents: | |
2015-10-24T00:59:20.929+0000: [GC [1 CMS-initial-mark: 0K(707840K)] 267418K(1014528K), 0.0494570 secs] [Times: user=0.05 sys=0.01, real=0.05 secs] | |
2015-10-24T00:59:20.998+0000: [GC2015-10-24T00:59:20.998+0000: [ParNew: 272640K->17633K(306688K), 0.0277430 secs] 272640K->17633K(1014528K), 0.0278190 secs] [Times: user=0.04 sys=0.02, real=0.03 secs] | |
2015-10-24T00:59:21.034+0000: [CMS-concurrent-mark: 0.026/0.055 secs] [Times: user=0.09 sys=0.03, real=0.06 secs] | |
2015-10-24T00:59:21.069+0000: [CMS-concurrent-preclean: 0.028/0.034 secs] [Times: user=0.07 sys=0.01, real=0.03 secs] | |
2015-10-24T00:59:22.438+0000: [CMS-concurrent-abortable-preclean: 1.009/1.369 secs] [Times: user=2.84 sys=0.45, real=1.37 secs] | |
2015-10-24T00:59:22.438+0000: [GC[YG occupancy: 179297 K (306688 K)]2015-10-24T00:59:22.438+0000: [Rescan (parallel) , 0.0130060 secs]2015-10-24T00:59:22.451+0000: [weak refs processing, 0.0000350 secs]2015-10-24T00:59:22.451+0000: [class unloading, 0.0022630 secs]2015-10-24T00:59:22.453+0000: [scrub symbol table, 0.0033230 secs]2015-10-24T00:59:22.456+0000: [scrub string table, 0.0002640 secs] [1 CMS-remark: 0K(707840K)] 179297K(1014528K), 0.0192010 secs] [Times: user=0.06 sys=0.00, real=0.02 secs] | |
2015-10-24T00:59:22.463+0000: [CMS-concurrent-sweep: 0.005/0.006 secs] [Times: user=0.01 sys=0.00, real=0.00 secs] | |
2015-10-24T00:59:22.487+0000: [CMS-concurrent-reset: 0.024/0.024 secs] [Times: user=0.02 sys=0.03, real=0.03 secs] | |
Heap | |
par new generation total 306688K, used 258294K [0x00000000b5a00000, 0x00000000ca6c0000, 0x00000000ca6c0000) | |
eden space 272640K, 88% used [0x00000000b5a00000, 0x00000000c4505698, 0x00000000c6440000) | |
from space 34048K, 51% used [0x00000000c8580000, 0x00000000c96b84c8, 0x00000000ca6c0000) | |
to space 34048K, 0% used [0x00000000c6440000, 0x00000000c6440000, 0x00000000c8580000) | |
concurrent mark-sweep generation total 707840K, used 0K [0x00000000ca6c0000, 0x00000000f5a00000, 0x00000000f5a00000) | |
concurrent-mark-sweep perm gen total 50936K, used 34210K [0x00000000f5a00000, 0x00000000f8bbe000, 0x0000000100000000) |