rizo/2662a1a4c81d1c572d09
Last active August 29, 2015
Spark Demo Logs
Executor-side log (worker on 188.166.34.149):
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
15/01/08 15:21:16 INFO CoarseGrainedExecutorBackend: Registered signal handlers for [TERM, HUP, INT]
15/01/08 15:21:17 INFO SecurityManager: Changing view acls to: root,rizo
15/01/08 15:21:17 INFO SecurityManager: Changing modify acls to: root,rizo
15/01/08 15:21:17 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root, rizo); users with modify permissions: Set(root, rizo)
15/01/08 15:21:17 INFO Slf4jLogger: Slf4jLogger started
15/01/08 15:21:17 INFO Remoting: Starting remoting
15/01/08 15:21:17 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://driverPropsFetcher@188.166.34.149:55071]
15/01/08 15:21:17 INFO Remoting: Remoting now listens on addresses: [akka.tcp://driverPropsFetcher@188.166.34.149:55071]
15/01/08 15:21:17 INFO Utils: Successfully started service 'driverPropsFetcher' on port 55071.
15/01/08 15:21:47 ERROR UserGroupInformation: PriviledgedActionException as:rizo cause:java.util.concurrent.TimeoutException: Futures timed out after [30 seconds]
Exception in thread "main" java.lang.reflect.UndeclaredThrowableException: Unknown exception in doAs
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1134)
	at org.apache.spark.deploy.SparkHadoopUtil.runAsSparkUser(SparkHadoopUtil.scala:52)
	at org.apache.spark.executor.CoarseGrainedExecutorBackend$.run(CoarseGrainedExecutorBackend.scala:113)
	at org.apache.spark.executor.CoarseGrainedExecutorBackend$.main(CoarseGrainedExecutorBackend.scala:156)
	at org.apache.spark.executor.CoarseGrainedExecutorBackend.main(CoarseGrainedExecutorBackend.scala)
Caused by: java.security.PrivilegedActionException: java.util.concurrent.TimeoutException: Futures timed out after [30 seconds]
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Unknown Source)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
	... 4 more
Caused by: java.util.concurrent.TimeoutException: Futures timed out after [30 seconds]
	at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:219)
	at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
	at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:107)
	at scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
	at scala.concurrent.Await$.result(package.scala:107)
	at org.apache.spark.executor.CoarseGrainedExecutorBackend$$anonfun$run$1.apply$mcV$sp(CoarseGrainedExecutorBackend.scala:125)
	at org.apache.spark.deploy.SparkHadoopUtil$$anon$1.run(SparkHadoopUtil.scala:53)
	at org.apache.spark.deploy.SparkHadoopUtil$$anon$1.run(SparkHadoopUtil.scala:52)
	... 7 more
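The executor-side failure above is the `driverPropsFetcher` timing out after 30 seconds: the freshly launched executor on 188.166.34.149 tries to call back to the driver to fetch the application's Spark properties and never gets a reply. The driver log further down shows why this is plausible: the driver advertised itself as `sparkDriver@10.0.1.93:50763`, a private LAN address that the remote worker cannot route to. A common remedy (a hedged sketch, not taken from this gist: `demo.py` and `<routable-ip>` are placeholders, and `<routable-ip>` must be an address of the driver machine that the worker can actually reach) is to resubmit with the driver bound to a reachable address:

```
# Hypothetical resubmission against the same standalone master.
spark-submit \
  --master spark://188.166.34.149:7077 \
  --conf spark.driver.host=<routable-ip> \
  demo.py
```

`spark.driver.host` controls the address the driver advertises to executors; if the driver sits behind NAT with no routable address, running the driver on the cluster side instead is the simpler fix.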
Driver-side log (spark-submit on 10.0.1.93):
15/01/08 15:21:12 INFO spark.SecurityManager: Changing view acls to: rizo,
15/01/08 15:21:12 INFO spark.SecurityManager: Changing modify acls to: rizo,
15/01/08 15:21:12 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(rizo, ); users with modify permissions: Set(rizo, )
15/01/08 15:21:12 INFO slf4j.Slf4jLogger: Slf4jLogger started
15/01/08 15:21:13 INFO Remoting: Starting remoting
15/01/08 15:21:13 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver@10.0.1.93:50763]
15/01/08 15:21:13 INFO Remoting: Remoting now listens on addresses: [akka.tcp://sparkDriver@10.0.1.93:50763]
15/01/08 15:21:13 INFO util.Utils: Successfully started service 'sparkDriver' on port 50763.
15/01/08 15:21:13 INFO spark.SparkEnv: Registering MapOutputTracker
15/01/08 15:21:13 INFO spark.SparkEnv: Registering BlockManagerMaster
15/01/08 15:21:13 INFO storage.DiskBlockManager: Created local directory at /var/folders/xc/s17mnrnn03q0f2bhmt0w5yg80000gn/T/spark-local-20150108152113-9695
15/01/08 15:21:13 INFO util.Utils: Successfully started service 'Connection manager for block manager' on port 50764.
15/01/08 15:21:13 INFO network.ConnectionManager: Bound socket to port 50764 with id = ConnectionManagerId(10.0.1.93,50764)
15/01/08 15:21:13 INFO storage.MemoryStore: MemoryStore started with capacity 737.4 MB
15/01/08 15:21:13 INFO storage.BlockManagerMaster: Trying to register BlockManager
15/01/08 15:21:13 INFO storage.BlockManagerMasterActor: Registering block manager 10.0.1.93:50764 with 737.4 MB RAM
15/01/08 15:21:13 INFO storage.BlockManagerMaster: Registered BlockManager
15/01/08 15:21:13 INFO spark.HttpFileServer: HTTP File server directory is /var/folders/xc/s17mnrnn03q0f2bhmt0w5yg80000gn/T/spark-6f0a77f1-d1d1-4a20-9318-4b9312af3ecb
15/01/08 15:21:13 INFO spark.HttpServer: Starting HTTP Server
15/01/08 15:21:13 INFO server.Server: jetty-8.1.14.v20131031
15/01/08 15:21:13 INFO server.AbstractConnector: Started SocketConnector@0.0.0.0:50765
15/01/08 15:21:13 INFO util.Utils: Successfully started service 'HTTP file server' on port 50765.
15/01/08 15:21:14 INFO server.Server: jetty-8.1.14.v20131031
15/01/08 15:21:14 INFO server.AbstractConnector: Started SelectChannelConnector@0.0.0.0:4040
15/01/08 15:21:14 INFO util.Utils: Successfully started service 'SparkUI' on port 4040.
15/01/08 15:21:14 INFO ui.SparkUI: Started SparkUI at http://10.0.1.93:4040
15/01/08 15:21:15 INFO client.AppClient$ClientActor: Connecting to master spark://188.166.34.149:7077...
15/01/08 15:21:15 INFO cluster.SparkDeploySchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.0
15/01/08 15:21:15 INFO spark.SparkContext: Starting job: count at NativeMethodAccessorImpl.java:-2
15/01/08 15:21:15 INFO scheduler.DAGScheduler: Got job 0 (count at NativeMethodAccessorImpl.java:-2) with 2 output partitions (allowLocal=false)
15/01/08 15:21:15 INFO scheduler.DAGScheduler: Final stage: Stage 0(count at NativeMethodAccessorImpl.java:-2)
15/01/08 15:21:15 INFO scheduler.DAGScheduler: Parents of final stage: List()
15/01/08 15:21:15 INFO scheduler.DAGScheduler: Missing parents: List()
15/01/08 15:21:15 INFO scheduler.DAGScheduler: Submitting Stage 0 (ParallelCollectionRDD[0] at parallelize at NativeMethodAccessorImpl.java:-2), which has no missing parents
15/01/08 15:21:15 INFO storage.MemoryStore: ensureFreeSpace(1312) called with curMem=0, maxMem=773188485
15/01/08 15:21:15 INFO storage.MemoryStore: Block broadcast_0 stored as values in memory (estimated size 1312.0 B, free 737.4 MB)
15/01/08 15:21:15 INFO cluster.SparkDeploySchedulerBackend: Connected to Spark cluster with app ID app-20150108152115-0006
15/01/08 15:21:15 INFO client.AppClient$ClientActor: Executor added: app-20150108152115-0006/0 on worker-20150108145310-188.166.34.149-7078 (188.166.34.149:7078) with 4 cores
15/01/08 15:21:15 INFO cluster.SparkDeploySchedulerBackend: Granted executor ID app-20150108152115-0006/0 on hostPort 188.166.34.149:7078 with 4 cores, 512.0 MB RAM
15/01/08 15:21:15 INFO client.AppClient$ClientActor: Executor updated: app-20150108152115-0006/0 is now RUNNING
15/01/08 15:21:16 INFO storage.MemoryStore: ensureFreeSpace(905) called with curMem=1312, maxMem=773188485
15/01/08 15:21:16 INFO storage.MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 905.0 B, free 737.4 MB)
15/01/08 15:21:16 INFO storage.BlockManagerInfo: Added broadcast_0_piece0 in memory on 10.0.1.93:50764 (size: 905.0 B, free: 737.4 MB)
15/01/08 15:21:16 INFO storage.BlockManagerMaster: Updated info of block broadcast_0_piece0
15/01/08 15:21:16 INFO scheduler.DAGScheduler: Submitting 2 missing tasks from Stage 0 (ParallelCollectionRDD[0] at parallelize at NativeMethodAccessorImpl.java:-2)
15/01/08 15:21:16 INFO scheduler.TaskSchedulerImpl: Adding task set 0.0 with 2 tasks
15/01/08 15:21:31 WARN scheduler.TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory
15/01/08 15:21:46 WARN scheduler.TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory
15/01/08 15:21:48 INFO client.AppClient$ClientActor: Executor updated: app-20150108152115-0006/0 is now EXITED (Command exited with code 1)
15/01/08 15:21:48 INFO cluster.SparkDeploySchedulerBackend: Executor app-20150108152115-0006/0 removed: Command exited with code 1
15/01/08 15:21:48 INFO client.AppClient$ClientActor: Executor added: app-20150108152115-0006/1 on worker-20150108145310-188.166.34.149-7078 (188.166.34.149:7078) with 4 cores
15/01/08 15:21:48 INFO cluster.SparkDeploySchedulerBackend: Granted executor ID app-20150108152115-0006/1 on hostPort 188.166.34.149:7078 with 4 cores, 512.0 MB RAM
15/01/08 15:21:48 INFO client.AppClient$ClientActor: Executor updated: app-20150108152115-0006/1 is now RUNNING
15/01/08 15:22:01 WARN scheduler.TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory
15/01/08 15:22:16 WARN scheduler.TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory
15/01/08 15:22:20 INFO client.AppClient$ClientActor: Executor updated: app-20150108152115-0006/1 is now EXITED (Command exited with code 1)
15/01/08 15:22:20 INFO cluster.SparkDeploySchedulerBackend: Executor app-20150108152115-0006/1 removed: Command exited with code 1
15/01/08 15:22:20 INFO client.AppClient$ClientActor: Executor added: app-20150108152115-0006/2 on worker-20150108145310-188.166.34.149-7078 (188.166.34.149:7078) with 4 cores
15/01/08 15:22:20 INFO cluster.SparkDeploySchedulerBackend: Granted executor ID app-20150108152115-0006/2 on hostPort 188.166.34.149:7078 with 4 cores, 512.0 MB RAM
15/01/08 15:22:20 INFO client.AppClient$ClientActor: Executor updated: app-20150108152115-0006/2 is now RUNNING
15/01/08 15:22:31 WARN scheduler.TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory
15/01/08 15:22:46 WARN scheduler.TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory
15/01/08 15:22:53 INFO client.AppClient$ClientActor: Executor updated: app-20150108152115-0006/2 is now EXITED (Command exited with code 1)
15/01/08 15:22:53 INFO cluster.SparkDeploySchedulerBackend: Executor app-20150108152115-0006/2 removed: Command exited with code 1
15/01/08 15:22:53 INFO client.AppClient$ClientActor: Executor added: app-20150108152115-0006/3 on worker-20150108145310-188.166.34.149-7078 (188.166.34.149:7078) with 4 cores
15/01/08 15:22:53 INFO cluster.SparkDeploySchedulerBackend: Granted executor ID app-20150108152115-0006/3 on hostPort 188.166.34.149:7078 with 4 cores, 512.0 MB RAM
15/01/08 15:22:53 INFO client.AppClient$ClientActor: Executor updated: app-20150108152115-0006/3 is now RUNNING
15/01/08 15:23:01 WARN scheduler.TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory
15/01/08 15:23:16 WARN scheduler.TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory
15/01/08 15:23:26 INFO client.AppClient$ClientActor: Executor updated: app-20150108152115-0006/3 is now EXITED (Command exited with code 1)
15/01/08 15:23:26 INFO cluster.SparkDeploySchedulerBackend: Executor app-20150108152115-0006/3 removed: Command exited with code 1
15/01/08 15:23:26 INFO client.AppClient$ClientActor: Executor added: app-20150108152115-0006/4 on worker-20150108145310-188.166.34.149-7078 (188.166.34.149:7078) with 4 cores
15/01/08 15:23:26 INFO cluster.SparkDeploySchedulerBackend: Granted executor ID app-20150108152115-0006/4 on hostPort 188.166.34.149:7078 with 4 cores, 512.0 MB RAM
15/01/08 15:23:26 INFO client.AppClient$ClientActor: Executor updated: app-20150108152115-0006/4 is now RUNNING
15/01/08 15:23:31 WARN scheduler.TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory
15/01/08 15:23:46 WARN scheduler.TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory
15/01/08 15:23:58 INFO client.AppClient$ClientActor: Executor updated: app-20150108152115-0006/4 is now EXITED (Command exited with code 1)
15/01/08 15:23:58 INFO cluster.SparkDeploySchedulerBackend: Executor app-20150108152115-0006/4 removed: Command exited with code 1
15/01/08 15:23:58 INFO client.AppClient$ClientActor: Executor added: app-20150108152115-0006/5 on worker-20150108145310-188.166.34.149-7078 (188.166.34.149:7078) with 4 cores
15/01/08 15:23:58 INFO cluster.SparkDeploySchedulerBackend: Granted executor ID app-20150108152115-0006/5 on hostPort 188.166.34.149:7078 with 4 cores, 512.0 MB RAM
15/01/08 15:23:58 INFO client.AppClient$ClientActor: Executor updated: app-20150108152115-0006/5 is now RUNNING
15/01/08 15:24:01 WARN scheduler.TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory
^C
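The driver-side log shows the matching symptom: the master keeps granting executors (app-20150108152115-0006/0 through /5), each one exits with code 1 roughly 30 seconds after launch (its failure is the timeout shown in the executor log above), and the scheduler loops on "Initial job has not accepted any resources" until the job is killed with ^C. A quick way to confirm the connectivity hypothesis is to check, from the worker host, whether the driver's advertised address and port are reachable. A minimal sketch in plain Python (not part of Spark; the host and port values in the comment are taken from the logs above):

```python
import socket


def can_connect(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


# Run this from the worker (188.166.34.149) against the driver's advertised
# endpoint from the log, e.g.:
#   can_connect("10.0.1.93", 50763)
# If it returns False, the executor cannot reach the driver and the
# driverPropsFetcher timeout above is expected.
```

If the check fails, the fix is a network-level one (routable `spark.driver.host`, open firewall ports, or running the driver on the cluster side); no amount of executor retries will help.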