Created July 17, 2024 22:30
spark_yarn_client_mode_logs.txt
24/07/17 21:34:01 INFO SparkContext: Running Spark version 3.5.1
24/07/17 21:34:01 INFO SparkContext: OS info Linux, 4.15.0-213-generic, amd64
24/07/17 21:34:01 INFO SparkContext: Java version 17.0.11
24/07/17 21:34:01 INFO ResourceUtils: ==============================================================
24/07/17 21:34:01 INFO ResourceUtils: No custom resources configured for spark.driver.
24/07/17 21:34:01 INFO ResourceUtils: ==============================================================
24/07/17 21:34:01 INFO SparkContext: Submitted application: Spark Pi
24/07/17 21:34:01 INFO ResourceProfile: Default ResourceProfile created, executor resources: Map(cores -> name: cores, amount: 1, script: , vendor: , memory -> name: memory, amount: 1024, script: , vendor: , offHeap -> name: offHeap, amount: 0, script: , vendor: ), task resources: Map(cpus -> name: cpus, amount: 1.0)
24/07/17 21:34:01 INFO ResourceProfile: Limiting resource is cpus at 1 tasks per executor
24/07/17 21:34:01 INFO ResourceProfileManager: Added ResourceProfile id: 0
24/07/17 21:34:01 INFO SecurityManager: Changing view acls to: spark_user,spark
24/07/17 21:34:01 INFO SecurityManager: Changing modify acls to: spark_user,spark
24/07/17 21:34:01 INFO SecurityManager: Changing view acls groups to:
24/07/17 21:34:01 INFO SecurityManager: Changing modify acls groups to:
24/07/17 21:34:01 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: spark_user, spark; groups with view permissions: EMPTY; users with modify permissions: spark_user, spark; groups with modify permissions: EMPTY
24/07/17 21:34:01 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
24/07/17 21:34:01 INFO Utils: Successfully started service 'sparkDriver' on port 42599.
24/07/17 21:34:01 INFO SparkEnv: Registering MapOutputTracker
24/07/17 21:34:01 INFO SparkEnv: Registering BlockManagerMaster
24/07/17 21:34:01 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
24/07/17 21:34:01 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
24/07/17 21:34:01 INFO SparkEnv: Registering BlockManagerMasterHeartbeat
24/07/17 21:34:01 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-18e15964-f471-43a5-9f58-b2e19f37a79c
24/07/17 21:34:01 INFO MemoryStore: MemoryStore started with capacity 434.4 MiB
24/07/17 21:34:01 INFO SparkEnv: Registering OutputCommitCoordinator
24/07/17 21:34:01 INFO JettyUtils: Start Jetty 0.0.0.0:4040 for SparkUI
24/07/17 21:34:02 INFO Utils: Successfully started service 'SparkUI' on port 4040.
24/07/17 21:34:02 INFO SparkContext: Added JAR file:/opt/bitnami/spark/examples/jars/spark-examples_2.12-3.5.1.jar at spark://spark-dev-notebook:42599/jars/spark-examples_2.12-3.5.1.jar with timestamp 1721252041358
24/07/17 21:34:02 WARN Utils: spark.executor.instances less than spark.dynamicAllocation.minExecutors is invalid, ignoring its setting, please update your configs.
24/07/17 21:34:02 INFO Utils: Using initial executors = 1, max of spark.dynamicAllocation.initialExecutors, spark.dynamicAllocation.minExecutors and spark.executor.instances
24/07/17 21:34:02 INFO DefaultNoHARMFailoverProxyProvider: Connecting to ResourceManager at yarn-resourcemanager/172.26.0.3:8032
24/07/17 21:34:02 INFO Configuration: resource-types.xml not found
24/07/17 21:34:02 INFO ResourceUtils: Unable to find 'resource-types.xml'.
24/07/17 21:34:02 INFO Client: Verifying our application has not requested more than the maximum memory capability of the cluster (8192 MB per container)
24/07/17 21:34:02 INFO Client: Will allocate AM container, with 896 MB memory including 384 MB overhead
24/07/17 21:34:02 INFO Client: Setting up container launch context for our AM
24/07/17 21:34:02 INFO Client: Setting up the launch environment for our AM container
24/07/17 21:34:02 INFO Client: Preparing resources for our AM container
24/07/17 21:34:02 WARN Client: Neither spark.yarn.jars nor spark.yarn.archive is set, falling back to uploading libraries under SPARK_HOME.
24/07/17 21:34:05 INFO Client: Uploading resource file:/tmp/spark-02f57858-7f47-4a84-a374-728cd1ffc962/__spark_libs__11685370608324626554.zip -> file:/home/spark_user/.sparkStaging/application_1721244785338_0003/__spark_libs__11685370608324626554.zip
24/07/17 21:34:05 INFO Client: Uploading resource file:/tmp/spark-02f57858-7f47-4a84-a374-728cd1ffc962/__spark_conf__14523889379781452793.zip -> file:/home/spark_user/.sparkStaging/application_1721244785338_0003/__spark_conf__.zip
24/07/17 21:34:05 INFO SecurityManager: Changing view acls to: spark_user,spark
24/07/17 21:34:05 INFO SecurityManager: Changing modify acls to: spark_user,spark
24/07/17 21:34:05 INFO SecurityManager: Changing view acls groups to:
24/07/17 21:34:05 INFO SecurityManager: Changing modify acls groups to:
24/07/17 21:34:05 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: spark_user, spark; groups with view permissions: EMPTY; users with modify permissions: spark_user, spark; groups with modify permissions: EMPTY
24/07/17 21:34:05 INFO Client: Submitting application application_1721244785338_0003 to ResourceManager
24/07/17 21:34:05 INFO YarnClientImpl: Submitted application application_1721244785338_0003
24/07/17 21:34:06 INFO Client: Application report for application_1721244785338_0003 (state: ACCEPTED)
24/07/17 21:34:06 INFO Client:
client token: N/A
diagnostics: [Wed Jul 17 21:34:06 +0000 2024] Application is Activated, waiting for resources to be assigned for AM. Details : AM Partition = <DEFAULT_PARTITION> ; Partition Resource = <memory:8192, vCores:8> ; Queue's Absolute capacity = 100.0 % ; Queue's Absolute used capacity = 0.0 % ; Queue's Absolute max capacity = 100.0 % ; Queue's capacity (absolute resource) = <memory:8192, vCores:8> ; Queue's used capacity (absolute resource) = <memory:0, vCores:0> ; Queue's max capacity (absolute resource) = <memory:8192, vCores:8> ;
ApplicationMaster host: N/A
ApplicationMaster RPC port: -1
queue: default
start time: 1721252045821
final status: UNDEFINED
tracking URL: http://b967e49687bc:8088/proxy/application_1721244785338_0003/
user: spark_user
24/07/17 21:34:07 INFO Client: Application report for application_1721244785338_0003 (state: FAILED)
24/07/17 21:34:07 INFO Client:
client token: N/A
diagnostics: Application application_1721244785338_0003 failed 2 times due to AM Container for appattempt_1721244785338_0003_000002 exited with exitCode: -1000
Failing this attempt.Diagnostics: [2024-07-17 21:34:07.792]File file:/home/spark_user/.sparkStaging/application_1721244785338_0003/__spark_libs__11685370608324626554.zip does not exist
java.io.FileNotFoundException: File file:/home/spark_user/.sparkStaging/application_1721244785338_0003/__spark_libs__11685370608324626554.zip does not exist
at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:915)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:1236)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:905)
at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:462)
at org.apache.hadoop.yarn.util.FSDownload.verifyAndCopy(FSDownload.java:275)
at org.apache.hadoop.yarn.util.FSDownload.access$000(FSDownload.java:72)
at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:425)
at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:422)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1899)
at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:422)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer$FSDownloadWrapper.doDownloadCall(ContainerLocalizer.java:247)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer$FSDownloadWrapper.call(ContainerLocalizer.java:240)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer$FSDownloadWrapper.call(ContainerLocalizer.java:228)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
For more detailed output, check the application tracking page: http://b967e49687bc:8088/cluster/app/application_1721244785338_0003 Then click on links to logs of each attempt.
. Failing the application.
ApplicationMaster host: N/A
ApplicationMaster RPC port: -1
queue: default
start time: 1721252045821
final status: FAILED
tracking URL: http://b967e49687bc:8088/cluster/app/application_1721244785338_0003
user: spark_user
24/07/17 21:34:07 INFO Client: Deleted staging directory file:/home/spark_user/.sparkStaging/application_1721244785338_0003
24/07/17 21:34:07 ERROR YarnClientSchedulerBackend: The YARN application has already ended! It might have been killed or the Application Master may have failed to start. Check the YARN application logs for more details.
24/07/17 21:34:07 ERROR SparkContext: Error initializing SparkContext.
org.apache.spark.SparkException: Application application_1721244785338_0003 failed 2 times due to AM Container for appattempt_1721244785338_0003_000002 exited with exitCode: -1000
Failing this attempt.Diagnostics: [2024-07-17 21:34:07.792]File file:/home/spark_user/.sparkStaging/application_1721244785338_0003/__spark_libs__11685370608324626554.zip does not exist
java.io.FileNotFoundException: File file:/home/spark_user/.sparkStaging/application_1721244785338_0003/__spark_libs__11685370608324626554.zip does not exist
at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:915)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:1236)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:905)
at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:462)
at org.apache.hadoop.yarn.util.FSDownload.verifyAndCopy(FSDownload.java:275)
at org.apache.hadoop.yarn.util.FSDownload.access$000(FSDownload.java:72)
at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:425)
at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:422)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1899)
at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:422)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer$FSDownloadWrapper.doDownloadCall(ContainerLocalizer.java:247)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer$FSDownloadWrapper.call(ContainerLocalizer.java:240)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer$FSDownloadWrapper.call(ContainerLocalizer.java:228)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
For more detailed output, check the application tracking page: http://b967e49687bc:8088/cluster/app/application_1721244785338_0003 Then click on links to logs of each attempt.
. Failing the application.
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.waitForApplication(YarnClientSchedulerBackend.scala:98)
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:65)
at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:235)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:604)
at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2888)
at org.apache.spark.sql.SparkSession$Builder.$anonfun$getOrCreate$2(SparkSession.scala:1099)
at scala.Option.getOrElse(Option.scala:189)
at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:1093)
at org.apache.spark.examples.SparkPi$.main(SparkPi.scala:30)
at org.apache.spark.examples.SparkPi.main(SparkPi.scala)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:568)
at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:1029)
at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:194)
at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:217)
at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:91)
at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1120)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1129)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
24/07/17 21:34:07 INFO SparkContext: SparkContext is stopping with exitCode 0.
24/07/17 21:34:07 INFO SparkUI: Stopped Spark web UI at http://spark-dev-notebook:4040
24/07/17 21:34:07 WARN YarnSchedulerBackend$YarnSchedulerEndpoint: Attempted to send shutdown message before the AM has registered!
24/07/17 21:34:07 WARN YarnSchedulerBackend$YarnSchedulerEndpoint: Attempted to request executors before the AM has registered!
24/07/17 21:34:07 INFO YarnClientSchedulerBackend: Shutting down all executors
24/07/17 21:34:07 INFO YarnSchedulerBackend$YarnDriverEndpoint: Asking each executor to shut down
24/07/17 21:34:07 INFO YarnClientSchedulerBackend: YARN client scheduler backend Stopped
24/07/17 21:34:07 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
24/07/17 21:34:07 INFO MemoryStore: MemoryStore cleared
24/07/17 21:34:07 INFO BlockManager: BlockManager stopped
24/07/17 21:34:07 INFO BlockManagerMaster: BlockManagerMaster stopped
24/07/17 21:34:07 WARN MetricsSystem: Stopping a MetricsSystem that is not running
24/07/17 21:34:07 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
24/07/17 21:34:07 INFO SparkContext: Successfully stopped SparkContext
Exception in thread "main" org.apache.spark.SparkException: Application application_1721244785338_0003 failed 2 times due to AM Container for appattempt_1721244785338_0003_000002 exited with exitCode: -1000 | |
Failing this attempt.Diagnostics: [2024-07-17 21:34:07.792]File file:/home/spark_user/.sparkStaging/application_1721244785338_0003/__spark_libs__11685370608324626554.zip does not exist | |
java.io.FileNotFoundException: File file:/home/spark_user/.sparkStaging/application_1721244785338_0003/__spark_libs__11685370608324626554.zip does not exist | |
at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:915) | |
at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:1236) | |
at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:905) | |
at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:462) | |
at org.apache.hadoop.yarn.util.FSDownload.verifyAndCopy(FSDownload.java:275) | |
at org.apache.hadoop.yarn.util.FSDownload.access$000(FSDownload.java:72) | |
at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:425) | |
at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:422) | |
at java.security.AccessController.doPrivileged(Native Method) | |
at javax.security.auth.Subject.doAs(Subject.java:422) | |
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1899) | |
at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:422) | |
at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer$FSDownloadWrapper.doDownloadCall(ContainerLocalizer.java:247) | |
at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer$FSDownloadWrapper.call(ContainerLocalizer.java:240) | |
at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer$FSDownloadWrapper.call(ContainerLocalizer.java:228) | |
at java.util.concurrent.FutureTask.run(FutureTask.java:266) | |
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) | |
at java.util.concurrent.FutureTask.run(FutureTask.java:266) | |
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) | |
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) | |
at java.lang.Thread.run(Thread.java:748) | |
For more detailed output, check the application tracking page: http://b967e49687bc:8088/cluster/app/application_1721244785338_0003 Then click on links to logs of each attempt. | |
. Failing the application. | |
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.waitForApplication(YarnClientSchedulerBackend.scala:98) | |
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:65) | |
at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:235) | |
at org.apache.spark.SparkContext.<init>(SparkContext.scala:604) | |
at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2888) | |
at org.apache.spark.sql.SparkSession$Builder.$anonfun$getOrCreate$2(SparkSession.scala:1099) | |
at scala.Option.getOrElse(Option.scala:189) | |
at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:1093) | |
at org.apache.spark.examples.SparkPi$.main(SparkPi.scala:30) | |
at org.apache.spark.examples.SparkPi.main(SparkPi.scala) | |
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) | |
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) | |
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) | |
at java.base/java.lang.reflect.Method.invoke(Method.java:568) | |
at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52) | |
at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:1029) | |
at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:194) | |
at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:217) | |
at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:91) | |
at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1120) | |
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1129) | |
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala) | |
24/07/17 21:34:07 INFO ShutdownHookManager: Shutdown hook called | |
24/07/17 21:34:07 INFO ShutdownHookManager: Deleting directory /tmp/spark-bd942214-ca55-49f3-b702-80a3047dd32b | |
24/07/17 21:34:07 INFO ShutdownHookManager: Deleting directory /tmp/spark-02f57858-7f47-4a84-a374-728cd1ffc962 |
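
Note: the failure above is the YARN NodeManager being unable to localize __spark_libs__*.zip (exitCode -1000) because the Spark client uploaded the staging files to file:/home/spark_user/.sparkStaging/... on the driver host's local filesystem, which the AM container on another node cannot see. The earlier WARN about spark.yarn.jars / spark.yarn.archive explains why the upload happened at all. A common remedy is to make the staging directory live on a filesystem shared by the driver and the NodeManagers, either by pointing fs.defaultFS at HDFS in core-site.xml or by setting spark.yarn.stagingDir explicitly (setting spark.yarn.archive to a pre-uploaded shared archive also avoids the per-submit upload). The sketch below is not from this gist: the hdfs://namenode:9000 URI and the StagingDirExample name are assumptions standing in for whatever shared storage the cluster actually has.

import org.apache.spark.sql.SparkSession
import scala.util.Random

// Hypothetical sketch: submit a SparkPi-style job in YARN client mode with the
// staging directory forced onto a shared filesystem. Replace the hdfs:// URI
// with a path the NodeManagers can reach; it is not a value from this cluster.
object StagingDirExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("Spark Pi")
      .master("yarn")
      .config("spark.submit.deployMode", "client")
      // Keep staging off the driver's local file:/ filesystem so the AM
      // container can localize __spark_libs__*.zip and __spark_conf__.zip.
      .config("spark.yarn.stagingDir", "hdfs://namenode:9000/user/spark_user/.sparkStaging")
      .getOrCreate()

    // Same Monte Carlo pi estimate as the SparkPi example submitted above.
    val n = 100000
    val hits = spark.sparkContext.parallelize(1 to n).filter { _ =>
      val x = Random.nextDouble() * 2 - 1
      val y = Random.nextDouble() * 2 - 1
      x * x + y * y <= 1
    }.count()
    println(s"Pi is roughly ${4.0 * hits / n}")
    spark.stop()
  }
}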