test_autostop log (gist: romilbhardwaj/4be928e440d914d66bccd1045d7a9e14)
Created January 26, 2023 07:10
+ sky launch -y -d -c t-autostop-b250-22 --num-nodes 2 --cloud gcp tests/test_yamls/minimal.yaml
Task from YAML spec: tests/test_yamls/minimal.yaml
D 01-26 06:42:58 optimizer.py:231] #### min ####
D 01-26 06:42:58 optimizer.py:261] Defaulting the task's estimated time to 1 hour.
D 01-26 06:42:58 optimizer.py:277] resources: GCP(n2-standard-8)
D 01-26 06:42:58 optimizer.py:288] estimated_runtime: 3600 s (1.0 hr)
D 01-26 06:42:58 optimizer.py:292] estimated_cost (not incl. egress): $0.8
D 01-26 06:42:58 optimizer.py:277] resources: GCP(n2-standard-8)
D 01-26 06:42:58 optimizer.py:288] estimated_runtime: 3600 s (1.0 hr)
D 01-26 06:42:58 optimizer.py:292] estimated_cost (not incl. egress): $0.8
D 01-26 06:42:58 optimizer.py:277] resources: GCP(n2-standard-8)
D 01-26 06:42:58 optimizer.py:288] estimated_runtime: 3600 s (1.0 hr)
D 01-26 06:42:58 optimizer.py:292] estimated_cost (not incl. egress): $0.8
D 01-26 06:42:58 optimizer.py:277] resources: GCP(n2-standard-8)
D 01-26 06:42:58 optimizer.py:288] estimated_runtime: 3600 s (1.0 hr)
D 01-26 06:42:58 optimizer.py:292] estimated_cost (not incl. egress): $0.8
D 01-26 06:42:58 optimizer.py:277] resources: GCP(n2-standard-8)
D 01-26 06:42:58 optimizer.py:288] estimated_runtime: 3600 s (1.0 hr)
D 01-26 06:42:58 optimizer.py:292] estimated_cost (not incl. egress): $0.9
D 01-26 06:42:58 optimizer.py:277] resources: GCP(n2-standard-8)
D 01-26 06:42:58 optimizer.py:288] estimated_runtime: 3600 s (1.0 hr)
D 01-26 06:42:58 optimizer.py:292] estimated_cost (not incl. egress): $0.9
D 01-26 06:42:58 optimizer.py:277] resources: GCP(n2-standard-8)
D 01-26 06:42:58 optimizer.py:288] estimated_runtime: 3600 s (1.0 hr)
D 01-26 06:42:58 optimizer.py:292] estimated_cost (not incl. egress): $0.9
D 01-26 06:42:58 optimizer.py:277] resources: GCP(n2-standard-8)
D 01-26 06:42:58 optimizer.py:288] estimated_runtime: 3600 s (1.0 hr)
D 01-26 06:42:58 optimizer.py:292] estimated_cost (not incl. egress): $0.9
D 01-26 06:42:58 optimizer.py:277] resources: GCP(n2-standard-8)
D 01-26 06:42:58 optimizer.py:288] estimated_runtime: 3600 s (1.0 hr)
D 01-26 06:42:58 optimizer.py:292] estimated_cost (not incl. egress): $0.9
I 01-26 06:42:58 optimizer.py:605] == Optimizer ==
I 01-26 06:42:58 optimizer.py:617] Target: minimizing cost
I 01-26 06:42:58 optimizer.py:628] Estimated cost: $0.8 / hour
I 01-26 06:42:58 optimizer.py:628]
I 01-26 06:42:58 optimizer.py:692] Considered resources (2 nodes):
I 01-26 06:42:58 optimizer.py:739] ----------------------------------------------------------------------------------
I 01-26 06:42:58 optimizer.py:739]  CLOUD   INSTANCE        vCPUs   ACCELERATORS   REGION/ZONE   COST ($)   CHOSEN
I 01-26 06:42:58 optimizer.py:739] ----------------------------------------------------------------------------------
I 01-26 06:42:58 optimizer.py:739]  GCP     n2-standard-8   8       -              us-central1   0.78          ✔
I 01-26 06:42:58 optimizer.py:739] ----------------------------------------------------------------------------------
I 01-26 06:42:58 optimizer.py:739]
Running task on cluster t-autostop-b250-22...
I 01-26 06:42:58 cloud_vm_ray_backend.py:3143] Creating a new cluster: "t-autostop-b250-22" [2x GCP(n2-standard-8)].
I 01-26 06:42:58 cloud_vm_ray_backend.py:3143] Tip: to reuse an existing cluster, specify --cluster (-c). Run `sky status` to see existing clusters.
I 01-26 06:43:00 cloud_vm_ray_backend.py:1081] To view detailed progress: tail -n100 -f /home/gcpuser/sky_logs/sky-2023-01-26-06-42-58-372926/provision.log
D 01-26 06:43:02 backend_utils.py:813] Using ssh_proxy_command: None
I 01-26 06:43:03 cloud_vm_ray_backend.py:1406] Launching on GCP us-central1 (us-central1-a)
D 01-26 06:43:03 cloud_vm_ray_backend.py:141] `ray up` script: /tmp/skypilot_ray_up_o5kzw0bw.py
I 01-26 06:44:04 log_utils.py:45] Head node is up.
D 01-26 06:44:58 cloud_vm_ray_backend.py:1484] `ray up` takes 114.6 seconds with 1 retries.
I 01-26 06:44:58 cloud_vm_ray_backend.py:1516] Successfully provisioned or found existing head VM. Waiting for workers.
D 01-26 06:45:05 backend_utils.py:1008] ======== Autoscaler status: 2023-01-26 06:44:54.372154 ========
D 01-26 06:45:05 backend_utils.py:1008] Node status
D 01-26 06:45:05 backend_utils.py:1008] ---------------------------------------------------------------
D 01-26 06:45:05 backend_utils.py:1008] Healthy:
D 01-26 06:45:05 backend_utils.py:1008]  1 ray_head_default
D 01-26 06:45:05 backend_utils.py:1008] Pending:
D 01-26 06:45:05 backend_utils.py:1008]  ray_worker_default, 1 launching
D 01-26 06:45:05 backend_utils.py:1008] Recent failures:
D 01-26 06:45:05 backend_utils.py:1008]  (no failures)
D 01-26 06:45:05 backend_utils.py:1008]
D 01-26 06:45:05 backend_utils.py:1008] Resources
D 01-26 06:45:05 backend_utils.py:1008] ---------------------------------------------------------------
D 01-26 06:45:05 backend_utils.py:1008] Usage:
D 01-26 06:45:05 backend_utils.py:1008]  0.0/8.0 CPU
D 01-26 06:45:05 backend_utils.py:1008]  0.00/18.274 GiB memory
D 01-26 06:45:05 backend_utils.py:1008]  0.00/9.137 GiB object_store_memory
D 01-26 06:45:05 backend_utils.py:1008]
D 01-26 06:45:05 backend_utils.py:1008] Demands:
D 01-26 06:45:05 backend_utils.py:1008]  (no resource demands)
D 01-26 06:45:05 backend_utils.py:1008]
D 01-26 06:45:05 backend_utils.py:1063] Reset start time, as new nodes are launched. (0 -> 1)
D 01-26 06:45:18 backend_utils.py:1008] ======== Autoscaler status: 2023-01-26 06:45:16.710947 ========
D 01-26 06:45:18 backend_utils.py:1008] Node status
D 01-26 06:45:18 backend_utils.py:1008] ---------------------------------------------------------------
D 01-26 06:45:18 backend_utils.py:1008] Healthy:
D 01-26 06:45:18 backend_utils.py:1008]  1 ray_head_default
D 01-26 06:45:18 backend_utils.py:1008] Pending:
D 01-26 06:45:18 backend_utils.py:1008]  10.128.0.43: ray_worker_default, waiting-for-ssh
D 01-26 06:45:18 backend_utils.py:1008] Recent failures:
D 01-26 06:45:18 backend_utils.py:1008]  (no failures)
D 01-26 06:45:18 backend_utils.py:1008]
D 01-26 06:45:18 backend_utils.py:1008] Resources
D 01-26 06:45:18 backend_utils.py:1008] ---------------------------------------------------------------
D 01-26 06:45:18 backend_utils.py:1008] Usage:
D 01-26 06:45:18 backend_utils.py:1008]  0.0/8.0 CPU
D 01-26 06:45:18 backend_utils.py:1008]  0.00/18.274 GiB memory
D 01-26 06:45:18 backend_utils.py:1008]  0.00/9.137 GiB object_store_memory
D 01-26 06:45:18 backend_utils.py:1008]
D 01-26 06:45:18 backend_utils.py:1008] Demands:
D 01-26 06:45:18 backend_utils.py:1008]  (no resource demands)
D 01-26 06:45:18 backend_utils.py:1008]
D 01-26 06:45:18 backend_utils.py:1063] Reset start time, as new nodes are launched. (1 -> 2)
D 01-26 06:45:30 backend_utils.py:1008] ======== Autoscaler status: 2023-01-26 06:45:27.127261 ========
D 01-26 06:45:30 backend_utils.py:1008] Node status
D 01-26 06:45:30 backend_utils.py:1008] ---------------------------------------------------------------
D 01-26 06:45:30 backend_utils.py:1008] Healthy:
D 01-26 06:45:30 backend_utils.py:1008]  1 ray_head_default
D 01-26 06:45:30 backend_utils.py:1008] Pending:
D 01-26 06:45:30 backend_utils.py:1008]  10.128.0.43: ray_worker_default, waiting-for-ssh
D 01-26 06:45:30 backend_utils.py:1008] Recent failures:
D 01-26 06:45:30 backend_utils.py:1008]  (no failures)
D 01-26 06:45:30 backend_utils.py:1008]
D 01-26 06:45:30 backend_utils.py:1008] Resources
D 01-26 06:45:30 backend_utils.py:1008] ---------------------------------------------------------------
D 01-26 06:45:30 backend_utils.py:1008] Usage:
D 01-26 06:45:30 backend_utils.py:1008]  0.0/8.0 CPU
D 01-26 06:45:30 backend_utils.py:1008]  0.00/18.274 GiB memory
D 01-26 06:45:30 backend_utils.py:1008]  0.00/9.137 GiB object_store_memory
D 01-26 06:45:30 backend_utils.py:1008]
D 01-26 06:45:30 backend_utils.py:1008] Demands:
D 01-26 06:45:30 backend_utils.py:1008]  (no resource demands)
D 01-26 06:45:30 backend_utils.py:1008]
D 01-26 06:45:43 backend_utils.py:1008] ======== Autoscaler status: 2023-01-26 06:45:37.373648 ========
D 01-26 06:45:43 backend_utils.py:1008] Node status
D 01-26 06:45:43 backend_utils.py:1008] ---------------------------------------------------------------
D 01-26 06:45:43 backend_utils.py:1008] Healthy:
D 01-26 06:45:43 backend_utils.py:1008]  1 ray_head_default
D 01-26 06:45:43 backend_utils.py:1008] Pending:
D 01-26 06:45:43 backend_utils.py:1008]  10.128.0.43: ray_worker_default, syncing-files
D 01-26 06:45:43 backend_utils.py:1008] Recent failures:
D 01-26 06:45:43 backend_utils.py:1008]  (no failures)
D 01-26 06:45:43 backend_utils.py:1008]
D 01-26 06:45:43 backend_utils.py:1008] Resources
D 01-26 06:45:43 backend_utils.py:1008] ---------------------------------------------------------------
D 01-26 06:45:43 backend_utils.py:1008] Usage:
D 01-26 06:45:43 backend_utils.py:1008]  0.0/8.0 CPU
D 01-26 06:45:43 backend_utils.py:1008]  0.00/18.274 GiB memory
D 01-26 06:45:43 backend_utils.py:1008]  0.00/9.137 GiB object_store_memory
D 01-26 06:45:43 backend_utils.py:1008]
D 01-26 06:45:43 backend_utils.py:1008] Demands:
D 01-26 06:45:43 backend_utils.py:1008]  (no resource demands)
D 01-26 06:45:43 backend_utils.py:1008]
D 01-26 06:45:55 backend_utils.py:1008] ======== Autoscaler status: 2023-01-26 06:45:51.524251 ========
D 01-26 06:45:55 backend_utils.py:1008] Node status
D 01-26 06:45:55 backend_utils.py:1008] ---------------------------------------------------------------
D 01-26 06:45:55 backend_utils.py:1008] Healthy:
D 01-26 06:45:55 backend_utils.py:1008]  1 ray_head_default
D 01-26 06:45:55 backend_utils.py:1008] Pending:
D 01-26 06:45:55 backend_utils.py:1008]  10.128.0.43: ray_worker_default, setting-up
D 01-26 06:45:55 backend_utils.py:1008] Recent failures:
D 01-26 06:45:55 backend_utils.py:1008]  (no failures)
D 01-26 06:45:55 backend_utils.py:1008]
D 01-26 06:45:55 backend_utils.py:1008] Resources
D 01-26 06:45:55 backend_utils.py:1008] ---------------------------------------------------------------
D 01-26 06:45:55 backend_utils.py:1008] Usage:
D 01-26 06:45:55 backend_utils.py:1008]  0.0/8.0 CPU
D 01-26 06:45:55 backend_utils.py:1008]  0.00/18.274 GiB memory
D 01-26 06:45:55 backend_utils.py:1008]  0.00/9.137 GiB object_store_memory
D 01-26 06:45:55 backend_utils.py:1008]
D 01-26 06:45:55 backend_utils.py:1008] Demands:
D 01-26 06:45:55 backend_utils.py:1008]  (no resource demands)
D 01-26 06:45:55 backend_utils.py:1008]
D 01-26 06:46:07 backend_utils.py:1008] ======== Autoscaler status: 2023-01-26 06:46:06.888987 ========
D 01-26 06:46:07 backend_utils.py:1008] Node status
D 01-26 06:46:07 backend_utils.py:1008] ---------------------------------------------------------------
D 01-26 06:46:07 backend_utils.py:1008] Healthy:
D 01-26 06:46:07 backend_utils.py:1008]  1 ray_head_default
D 01-26 06:46:07 backend_utils.py:1008] Pending:
D 01-26 06:46:07 backend_utils.py:1008]  10.128.0.43: ray_worker_default, setting-up
D 01-26 06:46:07 backend_utils.py:1008] Recent failures:
D 01-26 06:46:07 backend_utils.py:1008]  (no failures)
D 01-26 06:46:07 backend_utils.py:1008]
D 01-26 06:46:07 backend_utils.py:1008] Resources
D 01-26 06:46:07 backend_utils.py:1008] ---------------------------------------------------------------
D 01-26 06:46:07 backend_utils.py:1008] Usage:
D 01-26 06:46:07 backend_utils.py:1008]  0.0/8.0 CPU
D 01-26 06:46:07 backend_utils.py:1008]  0.00/18.274 GiB memory
D 01-26 06:46:07 backend_utils.py:1008]  0.00/9.137 GiB object_store_memory
D 01-26 06:46:07 backend_utils.py:1008]
D 01-26 06:46:07 backend_utils.py:1008] Demands:
D 01-26 06:46:07 backend_utils.py:1008]  (no resource demands)
D 01-26 06:46:07 backend_utils.py:1008]
D 01-26 06:46:20 backend_utils.py:1008] ======== Autoscaler status: 2023-01-26 06:46:12.005221 ========
D 01-26 06:46:20 backend_utils.py:1008] Node status
D 01-26 06:46:20 backend_utils.py:1008] ---------------------------------------------------------------
D 01-26 06:46:20 backend_utils.py:1008] Healthy:
D 01-26 06:46:20 backend_utils.py:1008]  1 ray_head_default
D 01-26 06:46:20 backend_utils.py:1008] Pending:
D 01-26 06:46:20 backend_utils.py:1008]  10.128.0.43: ray_worker_default, setting-up
D 01-26 06:46:20 backend_utils.py:1008] Recent failures:
D 01-26 06:46:20 backend_utils.py:1008]  (no failures)
D 01-26 06:46:20 backend_utils.py:1008]
D 01-26 06:46:20 backend_utils.py:1008] Resources
D 01-26 06:46:20 backend_utils.py:1008] ---------------------------------------------------------------
D 01-26 06:46:20 backend_utils.py:1008] Usage:
D 01-26 06:46:20 backend_utils.py:1008]  0.0/8.0 CPU
D 01-26 06:46:20 backend_utils.py:1008]  0.00/18.274 GiB memory
D 01-26 06:46:20 backend_utils.py:1008]  0.00/9.137 GiB object_store_memory
D 01-26 06:46:20 backend_utils.py:1008]
D 01-26 06:46:20 backend_utils.py:1008] Demands:
D 01-26 06:46:20 backend_utils.py:1008]  (no resource demands)
D 01-26 06:46:20 backend_utils.py:1008]
D 01-26 06:46:32 backend_utils.py:1008] ======== Autoscaler status: 2023-01-26 06:46:31.451731 ========
D 01-26 06:46:32 backend_utils.py:1008] Node status
D 01-26 06:46:32 backend_utils.py:1008] ---------------------------------------------------------------
D 01-26 06:46:32 backend_utils.py:1008] Healthy:
D 01-26 06:46:32 backend_utils.py:1008]  1 ray_head_default
D 01-26 06:46:32 backend_utils.py:1008]  1 ray_worker_default
D 01-26 06:46:32 backend_utils.py:1008] Pending:
D 01-26 06:46:32 backend_utils.py:1008]  (no pending nodes)
D 01-26 06:46:32 backend_utils.py:1008] Recent failures:
D 01-26 06:46:32 backend_utils.py:1008]  (no failures)
D 01-26 06:46:32 backend_utils.py:1008]
D 01-26 06:46:32 backend_utils.py:1008] Resources
D 01-26 06:46:32 backend_utils.py:1008] ---------------------------------------------------------------
D 01-26 06:46:32 backend_utils.py:1008] Usage:
D 01-26 06:46:32 backend_utils.py:1008]  0.0/16.0 CPU
D 01-26 06:46:32 backend_utils.py:1008]  0.00/39.855 GiB memory
D 01-26 06:46:32 backend_utils.py:1008]  0.00/18.386 GiB object_store_memory
D 01-26 06:46:32 backend_utils.py:1008]
D 01-26 06:46:32 backend_utils.py:1008] Demands:
D 01-26 06:46:32 backend_utils.py:1008]  (no resource demands)
D 01-26 06:46:32 backend_utils.py:1008]
I 01-26 06:46:32 cloud_vm_ray_backend.py:1213] Successfully provisioned or found existing VMs.
I 01-26 06:46:50 cloud_vm_ray_backend.py:2384] Running setup on 2 nodes.
Warning: Permanently added '35.223.120.150' (ECDSA) to the list of known hosts.
Warning: Permanently added '35.223.129.245' (ECDSA) to the list of known hosts.
running setup
running setup
I 01-26 06:46:53 cloud_vm_ray_backend.py:2393] Setup completed.
D 01-26 06:46:53 cloud_vm_ray_backend.py:2395] Setup took 2.9194061756134033 seconds.
D 01-26 06:46:55 cloud_vm_ray_backend.py:464] Added Task with options: , num_cpus=0.5, placement_group=pg, placement_group_bundle_index=0
D 01-26 06:46:55 cloud_vm_ray_backend.py:464] Added Task with options: , num_cpus=0.5, placement_group=pg, placement_group_bundle_index=1
I 01-26 06:46:58 cloud_vm_ray_backend.py:2458] Job submitted with Job ID: 1
I 01-26 06:46:58 cloud_vm_ray_backend.py:2487] Job ID: 1
I 01-26 06:46:58 cloud_vm_ray_backend.py:2487] To cancel the job: sky cancel t-autostop-b250-22 1
I 01-26 06:46:58 cloud_vm_ray_backend.py:2487] To stream job logs: sky logs t-autostop-b250-22 1
I 01-26 06:46:58 cloud_vm_ray_backend.py:2487] To view the job queue: sky queue t-autostop-b250-22
I 01-26 06:46:58 cloud_vm_ray_backend.py:2600]
I 01-26 06:46:58 cloud_vm_ray_backend.py:2600] Cluster name: t-autostop-b250-22
I 01-26 06:46:58 cloud_vm_ray_backend.py:2600] To log into the head VM: ssh t-autostop-b250-22
I 01-26 06:46:58 cloud_vm_ray_backend.py:2600] To submit a job: sky exec t-autostop-b250-22 yaml_file
I 01-26 06:46:58 cloud_vm_ray_backend.py:2600] To stop the cluster: sky stop t-autostop-b250-22
I 01-26 06:46:58 cloud_vm_ray_backend.py:2600] To teardown the cluster: sky down t-autostop-b250-22
Clusters
NAME                 LAUNCHED        RESOURCES              STATUS  AUTOSTOP  COMMAND
t-autostop-b250-22   a few secs ago  2x GCP(n2-standard-8)  UP      -         sky launch -y -d -c t-aut...
t-autostop-b250-3d   25 mins ago     2x GCP(n2-standard-8)  INIT    -         sky launch -y -d -c t-aut...
t-autostop-b250-88   3 hrs ago       2x GCP(n2-standard-8)  INIT    -         sky launch -y -d -c t-aut...
Managed spot controller (autostopped if idle for 10min)
Use spot jobs CLI: sky spot --help
NAME                          LAUNCHED     RESOURCES                          STATUS  AUTOSTOP  COMMAND
sky-spot-controller-b25086b8  15 mins ago  1x AWS(m6i.2xlarge, disk_size=50)  UP      10m       sky spot launch --cloud gcp...
1 cluster has auto{stop,down} scheduled. Refresh statuses with: sky status --refresh
+ sky autostop -y t-autostop-b250-22 -i 1
Scheduling autostop on cluster 't-autostop-b250-22'...done
The cluster will be autostopped after 1 minute of idleness.
To cancel the autostop, run: sky autostop t-autostop-b250-22 --cancel
Scheduling autostop on 1 cluster ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% 0:00:00
+ sky status | grep t-autostop-b250-22 | grep "1m"
t-autostop-b250-22  15 secs ago  2x GCP(n2-standard-8)  UP  1m  sky launch -y -d -c t-aut...
+ sleep 45
+ s=$(sky status t-autostop-b250-22 --refresh); echo "$s"; echo; echo; echo "$s" | grep t-autostop-b250-22 | grep UP
Clusters
NAME                LAUNCHED   RESOURCES              STATUS  AUTOSTOP  COMMAND
t-autostop-b250-22  1 min ago  2x GCP(n2-standard-8)  UP      1m        sky launch -y -d -c t-aut...
1 cluster has auto{stop,down} scheduled. Refresh statuses with: sky status --refresh
t-autostop-b250-22  1 min ago  2x GCP(n2-standard-8)  UP  1m  sky launch -y -d -c t-aut...
+ sleep 150
+ s=$(sky status t-autostop-b250-22 --refresh); echo "$s"; echo; echo; echo "$s" | grep t-autostop-b250-22 | grep STOPPED
W 01-26 06:50:45 backend_utils.py:1346] Expected 1 worker IP(s); found 0: []
W 01-26 06:50:45 backend_utils.py:1346] This could happen if there is extra output from `ray get-worker-ips`, which should be inspected below.
W 01-26 06:50:45 backend_utils.py:1346] == Output ==
W 01-26 06:50:45 backend_utils.py:1346]
W 01-26 06:50:45 backend_utils.py:1346]
W 01-26 06:50:45 backend_utils.py:1346] == Output ends ==
D 01-26 06:50:45 backend_utils.py:1904] Refreshing status: Failed to get IPs from cluster 't-autostop-b250-22', trying to fetch from provider.
D 01-26 06:50:47 backend_utils.py:1569] gcloud compute instances list --filter="(labels.ray-cluster-name=t-autostop-b250-22 AND labels.ray-launch-config=(7d331eb1d9bbad930ed3744b1da3b575797547d9 bc2386b65c8208224317b49e264853a4e13473f3))" --format="value(status)" returned 0.
D 01-26 06:50:47 backend_utils.py:1569] **** STDOUT ****
D 01-26 06:50:47 backend_utils.py:1569] RUNNING
D 01-26 06:50:47 backend_utils.py:1569] STOPPING
D 01-26 06:50:47 backend_utils.py:1569]
D 01-26 06:50:47 backend_utils.py:1569] **** STDERR ****
D 01-26 06:50:47 backend_utils.py:1569]
Clusters
NAME                LAUNCHED    RESOURCES              STATUS  AUTOSTOP  COMMAND
t-autostop-b250-22  3 mins ago  2x GCP(n2-standard-8)  INIT    -         sky launch -y -d -c t-aut...
Failed.
Reason: s=$(sky status t-autostop-b250-22 --refresh); echo "$s"; echo; echo; echo "$s" | grep t-autostop-b250-22 | grep STOPPED
Log: less /tmp/autostop-zd5hwdn8.log
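The failing step above is a single `sleep 150` followed by one `grep STOPPED`, which races with GCP's shutdown: at refresh time the provider still reports `RUNNING`/`STOPPING`, so `sky status --refresh` shows `INIT` rather than `STOPPED`. A more tolerant test step could poll for the expected status with a timeout. The sketch below is only an illustration of that pattern; `wait_for_status` and its arguments are hypothetical helpers, not part of the SkyPilot test suite:

```shell
#!/usr/bin/env bash
# Hedged sketch: poll a status command until a cluster line shows the
# expected status, instead of one fixed sleep + grep.
# Args: status command, cluster name, expected status, timeout (s),
# optional poll interval (s, default 5).
wait_for_status() {
  local status_cmd=$1 cluster=$2 expected=$3 timeout=$4 interval=${5:-5}
  local elapsed=0
  while (( elapsed < timeout )); do
    # Check only the line for this cluster, as the original test does.
    if $status_cmd | grep "$cluster" | grep -q "$expected"; then
      return 0
    fi
    sleep "$interval"
    elapsed=$(( elapsed + interval ))
  done
  return 1  # timed out without seeing the expected status
}
```

For this test, `$status_cmd` would be something like `sky status t-autostop-b250-22 --refresh`, with a timeout generous enough to cover the `STOPPING` → stopped transition observed in the gcloud output above; the interval and timeout values are arbitrary choices.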