@cisaacstern
Created April 28, 2022 17:11
2022-04-27T22:43:40.458994174Z stderr F Coordinates:
2022-04-27T22:43:40.459002487Z stderr F * time (time) float64 4.419e+04 4.42e+04 ... 4.456e+04 4.456e+04
2022-04-27T22:43:40.45901095Z stderr F * lat (lat) float32 -44.5 -44.45 -44.4 -44.35 ... -10.1 -10.05 -10.0
2022-04-27T22:43:40.459019409Z stderr F * lon (lon) float32 112.0 112.1 112.1 112.2 ... 156.1 156.2 156.2
2022-04-27T22:43:40.459027566Z stderr F Dimensions without coordinates: nv
2022-04-27T22:43:40.459038792Z stderr F Data variables:
2022-04-27T22:43:40.459046529Z stderr F lat_bnds (lat, nv) float32 ...
2022-04-27T22:43:40.459055527Z stderr F lon_bnds (lon, nv) float32 ...
2022-04-27T22:43:40.459062712Z stderr F time_bnds (time, nv) float64 ...
2022-04-27T22:43:40.459070159Z stderr F vapourpres (time, lat, lon) float32 ...
2022-04-27T22:43:40.459084347Z stderr F crs int32 ...
2022-04-27T22:43:40.459092822Z stderr F Attributes: (12/33)
2022-04-27T22:43:40.459101643Z stderr F geospatial_lat_min: -44.525
2022-04-27T22:43:40.459108699Z stderr F geospatial_lat_max: -9.975
2022-04-27T22:43:40.459135954Z stderr F geospatial_lon_min: 111.975
2022-04-27T22:43:40.459142425Z stderr F geospatial_lon_max: 156.275
2022-04-27T22:43:40.45914943Z stderr F time_coverage_start: 1971-01-01T15:00:00
2022-04-27T22:43:40.459155487Z stderr F date_created: 2017-03-07T08:30:03.928732
2022-04-27T22:43:40.459162623Z stderr F ... ...
2022-04-27T22:43:40.459169507Z stderr F licence: Data Licence: The grid data files in this AGCD...
2022-04-27T22:43:40.459176895Z stderr F description: This AGCD data is a snapshot of the operationa...
2022-04-27T22:43:40.459183884Z stderr F date_issued: 2020-12-02 15:35:07
2022-04-27T22:43:40.45919119Z stderr F attribution: Data should be cited as : Australian Bureau of...
2022-04-27T22:43:40.459198462Z stderr F copyright: (C) Copyright Commonwealth of Australia 2020, ...
2022-04-27T22:43:40.459205322Z stderr F history:
2022-04-27T22:43:40.459227636Z stderr F INFO:pangeo_forge_recipes.recipes.xarray_zarr:Subsetting input according to time-1
2022-04-27T22:43:40.459235564Z stderr F DEBUG:pangeo_forge_recipes.recipes.xarray_zarr:Subsetting dataset with indexer {'time': slice(182, 365, None)}
2022-04-27T22:43:40.461963845Z stderr F INFO:pangeo_forge_recipes.recipes.xarray_zarr:Combining inputs for chunk 'Index({DimIndex(name='variable', index=4, sequence_len=5, operation=<CombineOp.MERGE: 1>), DimIndex(name='time', index=1, sequence_len=2, operation=<CombineOp.SUBSET: 3>), DimIndex(name='time', index=0, sequence_len=2, operation=<CombineOp.CONCAT: 2>)})'
2022-04-27T22:43:40.46452602Z stderr F DEBUG:pangeo_forge_recipes.recipes.xarray_zarr:<xarray.Dataset>
2022-04-27T22:43:40.464579883Z stderr F Dimensions: (time: 183, lat: 691, lon: 886)
2022-04-27T22:43:40.46459075Z stderr F Coordinates:
2022-04-27T22:43:40.46459858Z stderr F * time (time) float64 4.438e+04 4.438e+04 ... 4.456e+04 4.456e+04
2022-04-27T22:43:40.464606521Z stderr F * lat (lat) float32 -44.5 -44.45 -44.4 -44.35 ... -10.1 -10.05 -10.0
2022-04-27T22:43:40.464613257Z stderr F * lon (lon) float32 112.0 112.1 112.1 112.2 ... 156.1 156.2 156.2
2022-04-27T22:43:40.464619892Z stderr F Data variables:
2022-04-27T22:43:40.464626481Z stderr F vapourpres (time, lat, lon) float32 dask.array<chunksize=(183, 691, 886), meta=np.ndarray>
2022-04-27T22:43:40.464635277Z stderr F Attributes: (12/33)
2022-04-27T22:43:40.46464273Z stderr F geospatial_lat_min: -44.525
2022-04-27T22:43:40.464649993Z stderr F geospatial_lat_max: -9.975
2022-04-27T22:43:40.464657237Z stderr F geospatial_lon_min: 111.975
2022-04-27T22:43:40.464665472Z stderr F geospatial_lon_max: 156.275
2022-04-27T22:43:40.46467363Z stderr F time_coverage_start: 1971-01-01T15:00:00
2022-04-27T22:43:40.464683196Z stderr F date_created: 2017-03-07T08:30:03.928732
2022-04-27T22:43:40.464691085Z stderr F ... ...
2022-04-27T22:43:40.464698659Z stderr F licence: Data Licence: The grid data files in this AGCD...
2022-04-27T22:43:40.464705261Z stderr F description: This AGCD data is a snapshot of the operationa...
2022-04-27T22:43:40.464712165Z stderr F date_issued: 2020-12-02 15:35:07
2022-04-27T22:43:40.464719112Z stderr F attribution: Data should be cited as : Australian Bureau of...
2022-04-27T22:43:40.464726425Z stderr F copyright: (C) Copyright Commonwealth of Australia 2020, ...
2022-04-27T22:43:40.464734009Z stderr F history:
2022-04-27T22:43:40.893938559Z stderr F DEBUG:pangeo_forge_recipes.recipes.xarray_zarr:Converting variable time of 1464 bytes to `numpy.ndarray`
2022-04-27T22:43:40.894021412Z stderr F DEBUG:pangeo_forge_recipes.recipes.xarray_zarr:Acquiring locks ['time-time-9', 'time-time-18']
2022-04-27T22:43:40.91339569Z stderr F DEBUG:pangeo_forge_recipes.utils:Acquiring lock pangeo-forge-time-time-9...
2022-04-27T22:43:40.915038714Z stderr F DEBUG:pangeo_forge_recipes.utils:Acquired lock pangeo-forge-time-time-9
2022-04-27T22:43:40.915082188Z stderr F DEBUG:pangeo_forge_recipes.utils:Acquiring lock pangeo-forge-time-time-18...
2022-04-27T22:43:40.916640587Z stderr F DEBUG:pangeo_forge_recipes.utils:Acquired lock pangeo-forge-time-time-18
2022-04-27T22:43:40.91674584Z stderr F INFO:pangeo_forge_recipes.recipes.xarray_zarr:Storing variable time chunk Index({DimIndex(name='variable', index=4, sequence_len=5, operation=<CombineOp.MERGE: 1>), DimIndex(name='time', index=1, sequence_len=2, operation=<CombineOp.SUBSET: 3>), DimIndex(name='time', index=0, sequence_len=2, operation=<CombineOp.CONCAT: 2>)}) to Zarr region (slice(182, 365, None),)
2022-04-27T22:43:40.910449277Z stderr F distributed.scheduler - INFO - Receive client connection: Client-worker-7c37bc09-c67b-11ec-813a-4d62ab76704f
2022-04-27T22:43:40.911182709Z stderr F distributed.core - INFO - Starting established connection
2022-04-27T22:43:41.347179021Z stderr F DEBUG:pangeo_forge_recipes.utils:Released lock pangeo-forge-time-time-9
2022-04-27T22:43:41.349150073Z stderr F DEBUG:pangeo_forge_recipes.utils:Released lock pangeo-forge-time-time-18
2022-04-27T22:43:41.427646263Z stderr F DEBUG:pangeo_forge_recipes.recipes.xarray_zarr:Converting variable vapourpres of 448149432 bytes to `numpy.ndarray`
2022-04-27T22:43:47.419748195Z stderr F distributed.core - INFO - Event loop was unresponsive in Worker for 11.38s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
2022-04-27T22:43:47.610511582Z stdout F [2022-04-27 22:43:47+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[8]': Starting task run...
2022-04-27T22:43:47.622091711Z stdout F [2022-04-27 22:43:47+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[8]': Finished task run for task with final state: 'Failed'
2022-04-27T22:43:47.79882572Z stdout F [2022-04-27 22:43:47+0000] INFO - prefect.CloudTaskRunner | Task 'cache_input[4]': Starting task run...
2022-04-27T22:43:47.810333429Z stdout F [2022-04-27 22:43:47+0000] INFO - prefect.CloudTaskRunner | Task 'cache_input[4]': Finished task run for task with final state: 'Success'
2022-04-27T22:43:48.046199212Z stdout F [2022-04-27 22:43:48+0000] INFO - prefect.CloudTaskRunner | Task 'cache_input[0]': Starting task run...
2022-04-27T22:43:48.055367166Z stdout F [2022-04-27 22:43:48+0000] INFO - prefect.CloudTaskRunner | Task 'cache_input[0]': Finished task run for task with final state: 'Success'
2022-04-27T22:43:48.209133467Z stdout F [2022-04-27 22:43:48+0000] INFO - prefect.CloudTaskRunner | Task 'cache_input[1]': Starting task run...
2022-04-27T22:43:48.218376377Z stdout F [2022-04-27 22:43:48+0000] INFO - prefect.CloudTaskRunner | Task 'cache_input[1]': Finished task run for task with final state: 'Success'
2022-04-27T22:43:48.387966375Z stdout F [2022-04-27 22:43:48+0000] INFO - prefect.CloudTaskRunner | Task 'cache_input[6]': Starting task run...
2022-04-27T22:43:48.401353128Z stdout F [2022-04-27 22:43:48+0000] INFO - prefect.CloudTaskRunner | Task 'cache_input[6]': Finished task run for task with final state: 'Success'
2022-04-27T22:43:48.607215587Z stdout F [2022-04-27 22:43:48+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[11]': Starting task run...
2022-04-27T22:43:48.619289802Z stdout F [2022-04-27 22:43:48+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[11]': Finished task run for task with final state: 'Running'
2022-04-27T22:43:48.8326999Z stdout F [2022-04-27 22:43:48+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[0]': Starting task run...
2022-04-27T22:43:48.842061326Z stdout F [2022-04-27 22:43:48+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[0]': Finished task run for task with final state: 'Running'
2022-04-27T22:43:49.008480689Z stdout F [2022-04-27 22:43:49+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[6]': Starting task run...
2022-04-27T22:43:49.017643644Z stdout F [2022-04-27 22:43:49+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[6]': Finished task run for task with final state: 'Running'
2022-04-27T22:43:49.1967675Z stdout F [2022-04-27 22:43:49+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[15]': Starting task run...
2022-04-27T22:43:49.625008025Z stderr F INFO:pangeo_forge_recipes.recipes.xarray_zarr:Opening inputs for chunk Index({DimIndex(name='time', index=1, sequence_len=2, operation=<CombineOp.CONCAT: 2>), DimIndex(name='variable', index=2, sequence_len=5, operation=<CombineOp.MERGE: 1>), DimIndex(name='time', index=1, sequence_len=2, operation=<CombineOp.SUBSET: 3>)})
2022-04-27T22:43:49.62515959Z stderr F INFO:pangeo_forge_recipes.recipes.xarray_zarr:Opening input with Xarray Index({DimIndex(name='variable', index=2, sequence_len=5, operation=<CombineOp.MERGE: 1>), DimIndex(name='time', index=1, sequence_len=2, operation=<CombineOp.CONCAT: 2>)}): 'https://dapds00.nci.org.au/thredds/fileServer/zv2/agcd/v1/tmin/mean/r005/01day/agcd_v1_tmin_mean_r005_daily_1972.nc'
2022-04-27T22:43:49.62551287Z stderr F INFO:pangeo_forge_recipes.storage:Opening 'https://dapds00.nci.org.au/thredds/fileServer/zv2/agcd/v1/tmin/mean/r005/01day/agcd_v1_tmin_mean_r005_daily_1972.nc' from cache
2022-04-27T22:43:49.625700634Z stderr F DEBUG:pangeo_forge_recipes.storage:file_opener entering first context for <contextlib._GeneratorContextManager object at 0x7f085a0b6be0>
2022-04-27T22:43:49.626157301Z stderr F DEBUG:pangeo_forge_recipes.storage:entering fs.open context manager for pfcsb-bucket/cache/262337688c6b1df68086b5f304077c7c-https_dapds00.nci.org.au_thredds_fileserver_zv2_agcd_v1_tmin_mean_r005_01day_agcd_v1_tmin_mean_r005_daily_1972.nc
2022-04-27T22:43:49.705501219Z stderr F DEBUG:pangeo_forge_recipes.storage:FSSpecTarget.open yielding <File-like object GCSFileSystem, pfcsb-bucket/cache/262337688c6b1df68086b5f304077c7c-https_dapds00.nci.org.au_thredds_fileserver_zv2_agcd_v1_tmin_mean_r005_01day_agcd_v1_tmin_mean_r005_daily_1972.nc>
2022-04-27T22:43:49.705576947Z stderr F DEBUG:pangeo_forge_recipes.storage:file_opener entering second context for <File-like object GCSFileSystem, pfcsb-bucket/cache/262337688c6b1df68086b5f304077c7c-https_dapds00.nci.org.au_thredds_fileserver_zv2_agcd_v1_tmin_mean_r005_01day_agcd_v1_tmin_mean_r005_daily_1972.nc>
2022-04-27T22:43:49.705663655Z stderr F DEBUG:pangeo_forge_recipes.recipes.xarray_zarr:about to enter xr.open_dataset context on <File-like object GCSFileSystem, pfcsb-bucket/cache/262337688c6b1df68086b5f304077c7c-https_dapds00.nci.org.au_thredds_fileserver_zv2_agcd_v1_tmin_mean_r005_01day_agcd_v1_tmin_mean_r005_daily_1972.nc>
2022-04-27T22:43:51.30598421Z stderr F DEBUG:pangeo_forge_recipes.recipes.xarray_zarr:Acquiring locks ['vapourpres-time-9', 'vapourpres-time-18']
2022-04-27T22:43:51.306860903Z stderr F DEBUG:pangeo_forge_recipes.utils:Acquiring lock pangeo-forge-vapourpres-time-9...
2022-04-27T22:43:51.316429417Z stderr F DEBUG:pangeo_forge_recipes.utils:Acquired lock pangeo-forge-vapourpres-time-9
2022-04-27T22:43:51.31649383Z stderr F DEBUG:pangeo_forge_recipes.utils:Acquiring lock pangeo-forge-vapourpres-time-18...
2022-04-27T22:43:51.316504445Z stderr F DEBUG:pangeo_forge_recipes.utils:Acquired lock pangeo-forge-vapourpres-time-18
2022-04-27T22:43:51.316520318Z stderr F INFO:pangeo_forge_recipes.recipes.xarray_zarr:Storing variable vapourpres chunk Index({DimIndex(name='variable', index=4, sequence_len=5, operation=<CombineOp.MERGE: 1>), DimIndex(name='time', index=1, sequence_len=2, operation=<CombineOp.SUBSET: 3>), DimIndex(name='time', index=0, sequence_len=2, operation=<CombineOp.CONCAT: 2>)}) to Zarr region (slice(182, 365, None), slice(None, None, None), slice(None, None, None))
2022-04-27T22:43:52.106226941Z stderr F DEBUG:pangeo_forge_recipes.recipes.xarray_zarr:successfully opened dataset
2022-04-27T22:43:52.112124752Z stderr F DEBUG:pangeo_forge_recipes.recipes.xarray_zarr:<xarray.Dataset>
2022-04-27T22:43:52.112172436Z stderr F Dimensions: (time: 366, lat: 691, lon: 886, nv: 2)
2022-04-27T22:43:52.112182586Z stderr F Coordinates:
2022-04-27T22:43:52.112190036Z stderr F * time (time) float64 4.456e+04 4.456e+04 ... 4.492e+04 4.492e+04
2022-04-27T22:43:52.112196485Z stderr F * lat (lat) float32 -44.5 -44.45 -44.4 -44.35 ... -10.1 -10.05 -10.0
2022-04-27T22:43:52.112202794Z stderr F * lon (lon) float32 112.0 112.1 112.1 112.2 ... 156.1 156.1 156.2 156.2
2022-04-27T22:43:52.112209115Z stderr F Dimensions without coordinates: nv
2022-04-27T22:43:52.112215703Z stderr F Data variables:
2022-04-27T22:43:52.112221713Z stderr F lat_bnds (lat, nv) float32 ...
2022-04-27T22:43:52.112227415Z stderr F lon_bnds (lon, nv) float32 ...
2022-04-27T22:43:52.112232797Z stderr F time_bnds (time, nv) float64 ...
2022-04-27T22:43:52.112238509Z stderr F tmin (time, lat, lon) float32 ...
2022-04-27T22:43:52.112245578Z stderr F crs int32 ...
2022-04-27T22:43:52.112268979Z stderr F Attributes: (12/33)
2022-04-27T22:43:52.11227476Z stderr F geospatial_lat_min: -44.525
2022-04-27T22:43:52.112280226Z stderr F geospatial_lat_max: -9.975
2022-04-27T22:43:52.112285924Z stderr F geospatial_lon_min: 111.975
2022-04-27T22:43:52.112292082Z stderr F geospatial_lon_max: 156.275
2022-04-27T22:43:52.112298309Z stderr F time_coverage_start: 1971-12-31T09:00:00
2022-04-27T22:43:52.112304772Z stderr F date_created: 2017-02-23T17:00:48.256494
2022-04-27T22:43:52.112311215Z stderr F ... ...
2022-04-27T22:43:52.112317555Z stderr F licence: Data Licence: The grid data files in this AGCD...
2022-04-27T22:43:52.112324258Z stderr F description: This AGCD data is a snapshot of the operationa...
2022-04-27T22:43:52.112347995Z stderr F date_issued: 2020-12-02 11:51:45
2022-04-27T22:43:52.112354191Z stderr F attribution: Data should be cited as : Australian Bureau of...
2022-04-27T22:43:52.112360404Z stderr F copyright: (C) Copyright Commonwealth of Australia 2020, ...
2022-04-27T22:43:52.112366778Z stderr F history:
2022-04-27T22:43:52.112389055Z stderr F INFO:pangeo_forge_recipes.recipes.xarray_zarr:Subsetting input according to time-1
2022-04-27T22:43:52.112397746Z stderr F DEBUG:pangeo_forge_recipes.recipes.xarray_zarr:Subsetting dataset with indexer {'time': slice(183, 366, None)}
2022-04-27T22:43:52.115767404Z stderr F INFO:pangeo_forge_recipes.recipes.xarray_zarr:Combining inputs for chunk 'Index({DimIndex(name='time', index=1, sequence_len=2, operation=<CombineOp.CONCAT: 2>), DimIndex(name='variable', index=2, sequence_len=5, operation=<CombineOp.MERGE: 1>), DimIndex(name='time', index=1, sequence_len=2, operation=<CombineOp.SUBSET: 3>)})'
2022-04-27T22:43:52.121149268Z stderr F DEBUG:pangeo_forge_recipes.recipes.xarray_zarr:<xarray.Dataset>
2022-04-27T22:43:52.121233217Z stderr F Dimensions: (time: 183, lat: 691, lon: 886)
2022-04-27T22:43:52.121244341Z stderr F Coordinates:
2022-04-27T22:43:52.121252599Z stderr F * time (time) float64 4.474e+04 4.474e+04 ... 4.492e+04 4.492e+04
2022-04-27T22:43:52.121260754Z stderr F * lat (lat) float32 -44.5 -44.45 -44.4 -44.35 ... -10.1 -10.05 -10.0
2022-04-27T22:43:52.121267258Z stderr F * lon (lon) float32 112.0 112.1 112.1 112.2 ... 156.1 156.1 156.2 156.2
2022-04-27T22:43:52.121273345Z stderr F Data variables:
2022-04-27T22:43:52.121279019Z stderr F tmin (time, lat, lon) float32 dask.array<chunksize=(183, 691, 886), meta=np.ndarray>
2022-04-27T22:43:52.121294523Z stderr F Attributes: (12/33)
2022-04-27T22:43:52.121301082Z stderr F geospatial_lat_min: -44.525
2022-04-27T22:43:52.121307101Z stderr F geospatial_lat_max: -9.975
2022-04-27T22:43:52.121312825Z stderr F geospatial_lon_min: 111.975
2022-04-27T22:43:52.121318993Z stderr F geospatial_lon_max: 156.275
2022-04-27T22:43:52.121325156Z stderr F time_coverage_start: 1971-12-31T09:00:00
2022-04-27T22:43:52.121331799Z stderr F date_created: 2017-02-23T17:00:48.256494
2022-04-27T22:43:52.121338059Z stderr F ... ...
2022-04-27T22:43:52.121343799Z stderr F licence: Data Licence: The grid data files in this AGCD...
2022-04-27T22:43:52.121349775Z stderr F description: This AGCD data is a snapshot of the operationa...
2022-04-27T22:43:52.121355274Z stderr F date_issued: 2020-12-02 11:51:45
2022-04-27T22:43:52.121360606Z stderr F attribution: Data should be cited as : Australian Bureau of...
2022-04-27T22:43:52.121366804Z stderr F copyright: (C) Copyright Commonwealth of Australia 2020, ...
2022-04-27T22:43:52.121372432Z stderr F history:
2022-04-27T22:43:52.520274945Z stderr F DEBUG:pangeo_forge_recipes.recipes.xarray_zarr:Converting variable time of 1464 bytes to `numpy.ndarray`
2022-04-27T22:43:52.520342979Z stderr F DEBUG:pangeo_forge_recipes.recipes.xarray_zarr:Acquiring locks ['time-time-27']
2022-04-27T22:43:52.531134603Z stderr F DEBUG:pangeo_forge_recipes.utils:Acquiring lock pangeo-forge-time-time-27...
2022-04-27T22:43:52.534145894Z stderr F DEBUG:pangeo_forge_recipes.utils:Acquired lock pangeo-forge-time-time-27
2022-04-27T22:43:52.534184642Z stderr F INFO:pangeo_forge_recipes.recipes.xarray_zarr:Storing variable time chunk Index({DimIndex(name='time', index=1, sequence_len=2, operation=<CombineOp.CONCAT: 2>), DimIndex(name='variable', index=2, sequence_len=5, operation=<CombineOp.MERGE: 1>), DimIndex(name='time', index=1, sequence_len=2, operation=<CombineOp.SUBSET: 3>)}) to Zarr region (slice(548, 731, None),)
2022-04-27T22:43:52.528449375Z stderr F distributed.scheduler - INFO - Receive client connection: Client-worker-832582ac-c67b-11ec-8139-5b26622d91e3
2022-04-27T22:43:52.528962108Z stderr F distributed.core - INFO - Starting established connection
2022-04-27T22:43:52.906859237Z stderr F DEBUG:pangeo_forge_recipes.utils:Released lock pangeo-forge-time-time-27
2022-04-27T22:43:52.996592464Z stderr F DEBUG:pangeo_forge_recipes.recipes.xarray_zarr:Converting variable tmin of 448149432 bytes to `numpy.ndarray`
2022-04-27T22:43:55.509916521Z stderr F distributed.worker - WARNING - Unmanaged memory use is high. This may indicate a memory leak or the memory may not be released to the OS; see https://distributed.dask.org/en/latest/worker.html#memtrim for more information. -- Unmanaged memory: 3.06 GiB -- Worker memory limit: 3.90 GiB
2022-04-27T22:43:56.599976356Z stderr F distributed.utils_perf - INFO - full garbage collection released 1.13 GiB from 496 reference cycles (threshold: 9.54 MiB)
2022-04-27T22:43:56.600346814Z stderr F distributed.worker - WARNING - Worker is at 80% memory usage. Pausing worker. Process memory: 3.13 GiB -- Worker memory limit: 3.90 GiB
2022-04-27T22:43:56.60069608Z stderr F distributed.worker - WARNING - Unmanaged memory use is high. This may indicate a memory leak or the memory may not be released to the OS; see https://distributed.dask.org/en/latest/worker.html#memtrim for more information. -- Unmanaged memory: 3.13 GiB -- Worker memory limit: 3.90 GiB
2022-04-27T22:43:56.603242955Z stderr F distributed.worker - WARNING - Worker is at 51% memory usage. Resuming worker. Process memory: 2.00 GiB -- Worker memory limit: 3.90 GiB
2022-04-27T22:43:58.740519286Z stderr F distributed.nanny - INFO - Start Nanny at: 'tcp://10.60.4.7:42005'
2022-04-27T22:43:59.374087817Z stderr F distributed.nanny - INFO - Start Nanny at: 'tcp://10.60.1.7:38733'
2022-04-27T22:44:02.220784994Z stderr F distributed.scheduler - INFO - Register worker <WorkerState 'tcp://10.60.4.7:42337', name: 14, status: undefined, memory: 0, processing: 0>
2022-04-27T22:44:02.221396313Z stderr F distributed.scheduler - INFO - Starting worker compute stream, tcp://10.60.4.7:42337
2022-04-27T22:44:02.221428805Z stderr F distributed.core - INFO - Starting established connection
2022-04-27T22:44:02.204487448Z stderr F distributed.worker - INFO - Start worker at: tcp://10.60.4.7:42337
2022-04-27T22:44:02.204672461Z stderr F distributed.worker - INFO - Listening to: tcp://10.60.4.7:42337
2022-04-27T22:44:02.204860987Z stderr F distributed.worker - INFO - dashboard at: 10.60.4.7:33323
2022-04-27T22:44:02.205016134Z stderr F distributed.worker - INFO - Waiting to connect to: tcp://dask-jovyan-0f2793d7-1.pangeo-forge-columbia-staging-bakery:8786
2022-04-27T22:44:02.205099123Z stderr F distributed.worker - INFO - -------------------------------------------------
2022-04-27T22:44:02.205130532Z stderr F distributed.worker - INFO - Threads: 1
2022-04-27T22:44:02.20515254Z stderr F distributed.worker - INFO - Memory: 3.90 GiB
2022-04-27T22:44:02.205217396Z stderr F distributed.worker - INFO - Local Directory: /home/jovyan/dask-worker-space/worker-0shevbdx
2022-04-27T22:44:02.205377261Z stderr F distributed.worker - INFO - -------------------------------------------------
2022-04-27T22:44:02.222521697Z stderr F distributed.worker - INFO - Registered to: tcp://dask-jovyan-0f2793d7-1.pangeo-forge-columbia-staging-bakery:8786
2022-04-27T22:44:02.222588866Z stderr F distributed.worker - INFO - -------------------------------------------------
2022-04-27T22:44:02.223822089Z stderr F distributed.core - INFO - Starting established connection
2022-04-27T22:44:02.703611754Z stderr F distributed.scheduler - INFO - Register worker <WorkerState 'tcp://10.60.1.7:44993', name: 15, status: undefined, memory: 0, processing: 0>
2022-04-27T22:44:02.704674551Z stderr F distributed.scheduler - INFO - Starting worker compute stream, tcp://10.60.1.7:44993
2022-04-27T22:44:02.704731094Z stderr F distributed.core - INFO - Starting established connection
2022-04-27T22:44:02.681971274Z stderr F distributed.worker - INFO - Start worker at: tcp://10.60.1.7:44993
2022-04-27T22:44:02.682026646Z stderr F distributed.worker - INFO - Listening to: tcp://10.60.1.7:44993
2022-04-27T22:44:02.682047014Z stderr F distributed.worker - INFO - dashboard at: 10.60.1.7:34987
2022-04-27T22:44:02.682054369Z stderr F distributed.worker - INFO - Waiting to connect to: tcp://dask-jovyan-0f2793d7-1.pangeo-forge-columbia-staging-bakery:8786
2022-04-27T22:44:02.682081554Z stderr F distributed.worker - INFO - -------------------------------------------------
2022-04-27T22:44:02.682088661Z stderr F distributed.worker - INFO - Threads: 1
2022-04-27T22:44:02.682095847Z stderr F distributed.worker - INFO - Memory: 3.90 GiB
2022-04-27T22:44:02.682104184Z stderr F distributed.worker - INFO - Local Directory: /home/jovyan/dask-worker-space/worker-i2itl0g3
2022-04-27T22:44:02.682123487Z stderr F distributed.worker - INFO - -------------------------------------------------
2022-04-27T22:44:02.705992682Z stderr F distributed.worker - INFO - Registered to: tcp://dask-jovyan-0f2793d7-1.pangeo-forge-columbia-staging-bakery:8786
2022-04-27T22:44:02.706106037Z stderr F distributed.worker - INFO - -------------------------------------------------
2022-04-27T22:44:02.706787084Z stderr F distributed.core - INFO - Starting established connection
2022-04-27T22:44:02.906464769Z stderr F distributed.worker - WARNING - Unmanaged memory use is high. This may indicate a memory leak or the memory may not be released to the OS; see https://distributed.dask.org/en/latest/worker.html#memtrim for more information. -- Unmanaged memory: 2.89 GiB -- Worker memory limit: 3.90 GiB
2022-04-27T22:44:03.10388047Z stderr F distributed.worker - WARNING - Worker is at 82% memory usage. Pausing worker. Process memory: 3.21 GiB -- Worker memory limit: 3.90 GiB
2022-04-27T22:44:03.10477138Z stderr F distributed.worker - WARNING - Unmanaged memory use is high. This may indicate a memory leak or the memory may not be released to the OS; see https://distributed.dask.org/en/latest/worker.html#memtrim for more information. -- Unmanaged memory: 3.21 GiB -- Worker memory limit: 3.90 GiB
2022-04-27T22:44:03.206337526Z stderr F DEBUG:pangeo_forge_recipes.recipes.xarray_zarr:Acquiring locks ['tmin-time-27']
2022-04-27T22:44:03.207244032Z stderr F DEBUG:pangeo_forge_recipes.utils:Acquiring lock pangeo-forge-tmin-time-27...
2022-04-27T22:44:03.209290318Z stderr F DEBUG:pangeo_forge_recipes.utils:Acquired lock pangeo-forge-tmin-time-27
2022-04-27T22:44:03.209607518Z stderr F INFO:pangeo_forge_recipes.recipes.xarray_zarr:Storing variable tmin chunk Index({DimIndex(name='time', index=1, sequence_len=2, operation=<CombineOp.CONCAT: 2>), DimIndex(name='variable', index=2, sequence_len=5, operation=<CombineOp.MERGE: 1>), DimIndex(name='time', index=1, sequence_len=2, operation=<CombineOp.SUBSET: 3>)}) to Zarr region (slice(548, 731, None), slice(None, None, None), slice(None, None, None))
2022-04-27T22:44:03.303999801Z stderr F distributed.worker - WARNING - Unmanaged memory use is high. This may indicate a memory leak or the memory may not be released to the OS; see https://distributed.dask.org/en/latest/worker.html#memtrim for more information. -- Unmanaged memory: 3.23 GiB -- Worker memory limit: 3.90 GiB
2022-04-27T22:44:03.503919331Z stderr F distributed.worker - WARNING - Unmanaged memory use is high. This may indicate a memory leak or the memory may not be released to the OS; see https://distributed.dask.org/en/latest/worker.html#memtrim for more information. -- Unmanaged memory: 3.23 GiB -- Worker memory limit: 3.90 GiB
2022-04-27T22:44:03.704411147Z stderr F distributed.worker - WARNING - Unmanaged memory use is high. This may indicate a memory leak or the memory may not be released to the OS; see https://distributed.dask.org/en/latest/worker.html#memtrim for more information. -- Unmanaged memory: 3.23 GiB -- Worker memory limit: 3.90 GiB
2022-04-27T22:44:03.907102455Z stderr F distributed.worker - WARNING - Unmanaged memory use is high. This may indicate a memory leak or the memory may not be released to the OS; see https://distributed.dask.org/en/latest/worker.html#memtrim for more information. -- Unmanaged memory: 3.24 GiB -- Worker memory limit: 3.90 GiB
2022-04-27T22:44:04.104062925Z stderr F distributed.worker - WARNING - Unmanaged memory use is high. This may indicate a memory leak or the memory may not be released to the OS; see https://distributed.dask.org/en/latest/worker.html#memtrim for more information. -- Unmanaged memory: 3.24 GiB -- Worker memory limit: 3.90 GiB
2022-04-27T22:44:04.304482305Z stderr F distributed.worker - WARNING - Worker is at 57% memory usage. Resuming worker. Process memory: 2.26 GiB -- Worker memory limit: 3.90 GiB
2022-04-27T22:44:04.703863954Z stderr F distributed.worker - WARNING - Unmanaged memory use is high. This may indicate a memory leak or the memory may not be released to the OS; see https://distributed.dask.org/en/latest/worker.html#memtrim for more information. -- Unmanaged memory: 2.90 GiB -- Worker memory limit: 3.90 GiB
2022-04-27T22:44:04.906019849Z stderr F distributed.worker - WARNING - Worker is at 83% memory usage. Pausing worker. Process memory: 3.24 GiB -- Worker memory limit: 3.90 GiB
2022-04-27T22:44:04.907163031Z stderr F distributed.worker - WARNING - Unmanaged memory use is high. This may indicate a memory leak or the memory may not be released to the OS; see https://distributed.dask.org/en/latest/worker.html#memtrim for more information. -- Unmanaged memory: 3.24 GiB -- Worker memory limit: 3.90 GiB
2022-04-27T22:44:05.104393047Z stderr F distributed.worker - WARNING - Unmanaged memory use is high. This may indicate a memory leak or the memory may not be released to the OS; see https://distributed.dask.org/en/latest/worker.html#memtrim for more information. -- Unmanaged memory: 3.25 GiB -- Worker memory limit: 3.90 GiB
2022-04-27T22:44:05.303311416Z stderr F distributed.worker - WARNING - Unmanaged memory use is high. This may indicate a memory leak or the memory may not be released to the OS; see https://distributed.dask.org/en/latest/worker.html#memtrim for more information. -- Unmanaged memory: 3.25 GiB -- Worker memory limit: 3.90 GiB
2022-04-27T22:44:05.503822563Z stderr F distributed.worker - WARNING - Unmanaged memory use is high. This may indicate a memory leak or the memory may not be released to the OS; see https://distributed.dask.org/en/latest/worker.html#memtrim for more information. -- Unmanaged memory: 3.26 GiB -- Worker memory limit: 3.90 GiB
2022-04-27T22:44:05.704145524Z stderr F distributed.worker - WARNING - Unmanaged memory use is high. This may indicate a memory leak or the memory may not be released to the OS; see https://distributed.dask.org/en/latest/worker.html#memtrim for more information. -- Unmanaged memory: 3.26 GiB -- Worker memory limit: 3.90 GiB
2022-04-27T22:44:05.905676602Z stderr F distributed.worker - WARNING - Unmanaged memory use is high. This may indicate a memory leak or the memory may not be released to the OS; see https://distributed.dask.org/en/latest/worker.html#memtrim for more information. -- Unmanaged memory: 3.27 GiB -- Worker memory limit: 3.90 GiB
2022-04-27T22:44:06.103481124Z stderr F distributed.worker - WARNING - Unmanaged memory use is high. This may indicate a memory leak or the memory may not be released to the OS; see https://distributed.dask.org/en/latest/worker.html#memtrim for more information. -- Unmanaged memory: 3.46 GiB -- Worker memory limit: 3.90 GiB
2022-04-27T22:44:06.247126368Z stderr F distributed.nanny - WARNING - Worker exceeded 95% memory budget. Restarting
2022-04-27T22:44:06.329637716Z stderr F distributed.nanny - INFO - Worker process 314 was killed by signal 15
2022-04-27T22:44:06.334020388Z stderr F distributed.nanny - WARNING - Restarting worker
2022-04-27T22:44:06.329370854Z stderr F distributed.scheduler - INFO - Remove client Client-worker-7c37bc09-c67b-11ec-813a-4d62ab76704f
2022-04-27T22:44:06.330055488Z stderr F distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://10.60.4.6:42661', name: 7, status: paused, memory: 7, processing: 1>
2022-04-27T22:44:06.33010816Z stderr F distributed.core - INFO - Removing comms to tcp://10.60.4.6:42661
2022-04-27T22:44:06.331884124Z stderr F distributed.scheduler - INFO - Close client connection: Client-worker-7c37bc09-c67b-11ec-813a-4d62ab76704f
2022-04-27T22:44:06.970090202Z stderr F distributed.worker - WARNING - Unmanaged memory use is high. This may indicate a memory leak or the memory may not be released to the OS; see https://distributed.dask.org/en/latest/worker.html#memtrim for more information. -- Unmanaged memory: 2.85 GiB -- Worker memory limit: 3.90 GiB
2022-04-27T22:44:07.167994999Z stderr F distributed.worker - WARNING - Unmanaged memory use is high. This may indicate a memory leak or the memory may not be released to the OS; see https://distributed.dask.org/en/latest/worker.html#memtrim for more information. -- Unmanaged memory: 3.02 GiB -- Worker memory limit: 3.90 GiB
2022-04-27T22:44:08.834187043Z stderr F distributed.worker - INFO - Start worker at: tcp://10.60.4.6:37503
2022-04-27T22:44:08.834381483Z stderr F distributed.worker - INFO - Listening to: tcp://10.60.4.6:37503
2022-04-27T22:44:08.834431684Z stderr F distributed.worker - INFO - dashboard at: 10.60.4.6:39581
2022-04-27T22:44:08.834448865Z stderr F distributed.worker - INFO - Waiting to connect to: tcp://dask-jovyan-0f2793d7-1.pangeo-forge-columbia-staging-bakery:8786
2022-04-27T22:44:08.83445945Z stderr F distributed.worker - INFO - -------------------------------------------------
2022-04-27T22:44:08.834488582Z stderr F distributed.worker - INFO - Threads: 1
2022-04-27T22:44:08.834517078Z stderr F distributed.worker - INFO - Memory: 3.90 GiB
2022-04-27T22:44:08.835166004Z stderr F distributed.worker - INFO - Local Directory: /home/jovyan/dask-worker-space/worker-rbr4xys6
2022-04-27T22:44:08.835214042Z stderr F distributed.worker - INFO - -------------------------------------------------
2022-04-27T22:44:08.903105099Z stderr F distributed.worker - INFO - Registered to: tcp://dask-jovyan-0f2793d7-1.pangeo-forge-columbia-staging-bakery:8786
2022-04-27T22:44:08.903171794Z stderr F distributed.worker - INFO - -------------------------------------------------
2022-04-27T22:44:08.904048328Z stderr F distributed.core - INFO - Starting established connection
2022-04-27T22:44:08.900803007Z stderr F distributed.scheduler - INFO - Register worker <WorkerState 'tcp://10.60.4.6:37503', name: 7, status: undefined, memory: 0, processing: 0>
2022-04-27T22:44:08.902304285Z stderr F distributed.scheduler - INFO - Starting worker compute stream, tcp://10.60.4.6:37503
2022-04-27T22:44:08.902387027Z stderr F distributed.core - INFO - Starting established connection
2022-04-27T22:44:09.086717294Z stderr F distributed.worker - INFO - Stopping worker at tcp://10.60.4.6:37503
2022-04-27T22:44:09.093437778Z stderr F distributed.nanny - INFO - Worker closed
2022-04-27T22:44:09.133882406Z stderr F distributed.dask_worker - INFO - Exiting on signal 15
2022-04-27T22:44:09.134021035Z stderr F distributed.nanny - INFO - Closing Nanny at 'tcp://10.60.4.6:45933'
2022-04-27T22:44:09.083841024Z stderr F distributed.scheduler - INFO - Retire worker names (7,)
2022-04-27T22:44:09.08429023Z stderr F distributed.scheduler - INFO - Retire workers {<WorkerState 'tcp://10.60.4.6:37503', name: 7, status: running, memory: 0, processing: 0>}
2022-04-27T22:44:09.084670555Z stderr F distributed.scheduler - INFO - Closing worker tcp://10.60.4.6:37503
2022-04-27T22:44:09.085079604Z stderr F distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://10.60.4.6:37503', name: 7, status: running, memory: 0, processing: 0>
2022-04-27T22:44:09.085194907Z stderr F distributed.core - INFO - Removing comms to tcp://10.60.4.6:37503
2022-04-27T22:44:09.548453673Z stderr F distributed.dask_worker - INFO - End worker
2022-04-27T22:44:12.316433962Z stderr F distributed.worker - WARNING - Unmanaged memory use is high. This may indicate a memory leak or the memory may not be released to the OS; see https://distributed.dask.org/en/latest/worker.html#memtrim for more information. -- Unmanaged memory: 2.89 GiB -- Worker memory limit: 3.90 GiB
2022-04-27T22:44:12.902293815Z stderr F distributed.utils_perf - INFO - full garbage collection released 1.14 GiB from 292 reference cycles (threshold: 9.54 MiB)
2022-04-27T22:44:12.902435324Z stderr F distributed.worker - WARNING - Worker is at 80% memory usage. Pausing worker. Process memory: 3.16 GiB -- Worker memory limit: 3.90 GiB
2022-04-27T22:44:12.902563765Z stderr F distributed.worker - WARNING - Unmanaged memory use is high. This may indicate a memory leak or the memory may not be released to the OS; see https://distributed.dask.org/en/latest/worker.html#memtrim for more information. -- Unmanaged memory: 3.16 GiB -- Worker memory limit: 3.90 GiB
2022-04-27T22:44:12.905692718Z stderr F distributed.worker - WARNING - Worker is at 51% memory usage. Resuming worker. Process memory: 2.02 GiB -- Worker memory limit: 3.90 GiB
2022-04-27T22:44:15.52235777Z stderr F distributed.core - INFO - Event loop was unresponsive in Worker for 13.22s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
2022-04-27T22:44:15.753130468Z stdout F [2022-04-27 22:44:15+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[16]': Starting task run...
2022-04-27T22:44:15.76592382Z stdout F [2022-04-27 22:44:15+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[16]': Finished task run for task with final state: 'Running'
2022-04-27T22:44:15.995827777Z stdout F [2022-04-27 22:44:15+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[7]': Starting task run...
2022-04-27T22:44:16.005539186Z stdout F [2022-04-27 22:44:16+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[7]': Finished task run for task with final state: 'Running'
2022-04-27T22:44:16.169332282Z stderr F distributed.worker - WARNING - Unmanaged memory use is high. This may indicate a memory leak or the memory may not be released to the OS; see https://distributed.dask.org/en/latest/worker.html#memtrim for more information. -- Unmanaged memory: 2.91 GiB -- Worker memory limit: 3.90 GiB
2022-04-27T22:44:16.191054806Z stdout F [2022-04-27 22:44:16+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[4]': Starting task run...
2022-04-27T22:44:16.201103575Z stdout F [2022-04-27 22:44:16+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[4]': Finished task run for task with final state: 'Running'
2022-04-27T22:44:16.404011102Z stdout F [2022-04-27 22:44:16+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[12]': Starting task run...
2022-04-27T22:44:16.367153406Z stderr F distributed.worker - WARNING - Worker is at 82% memory usage. Pausing worker. Process memory: 3.20 GiB -- Worker memory limit: 3.90 GiB
2022-04-27T22:44:16.368197941Z stderr F distributed.worker - WARNING - Unmanaged memory use is high. This may indicate a memory leak or the memory may not be released to the OS; see https://distributed.dask.org/en/latest/worker.html#memtrim for more information. -- Unmanaged memory: 3.20 GiB -- Worker memory limit: 3.90 GiB
2022-04-27T22:44:16.609403471Z stderr F INFO:pangeo_forge_recipes.recipes.xarray_zarr:Opening inputs for chunk Index({DimIndex(name='time', index=1, sequence_len=2, operation=<CombineOp.CONCAT: 2>), DimIndex(name='variable', index=1, sequence_len=5, operation=<CombineOp.MERGE: 1>), DimIndex(name='time', index=0, sequence_len=2, operation=<CombineOp.SUBSET: 3>)})
2022-04-27T22:44:16.609848641Z stderr F INFO:pangeo_forge_recipes.recipes.xarray_zarr:Opening input with Xarray Index({DimIndex(name='variable', index=1, sequence_len=5, operation=<CombineOp.MERGE: 1>), DimIndex(name='time', index=1, sequence_len=2, operation=<CombineOp.CONCAT: 2>)}): 'https://dapds00.nci.org.au/thredds/fileServer/zv2/agcd/v1/tmax/mean/r005/01day/agcd_v1_tmax_mean_r005_daily_1972.nc'
2022-04-27T22:44:16.610140525Z stderr F INFO:pangeo_forge_recipes.storage:Opening 'https://dapds00.nci.org.au/thredds/fileServer/zv2/agcd/v1/tmax/mean/r005/01day/agcd_v1_tmax_mean_r005_daily_1972.nc' from cache
2022-04-27T22:44:16.610405685Z stderr F DEBUG:pangeo_forge_recipes.storage:file_opener entering first context for <contextlib._GeneratorContextManager object at 0x7fba637a8f40>
2022-04-27T22:44:16.61102213Z stderr F DEBUG:pangeo_forge_recipes.storage:entering fs.open context manager for pfcsb-bucket/cache/f6ba0c001bcb9f2190a9ff44f62eefd2-https_dapds00.nci.org.au_thredds_fileserver_zv2_agcd_v1_tmax_mean_r005_01day_agcd_v1_tmax_mean_r005_daily_1972.nc
2022-04-27T22:44:16.681308338Z stderr F DEBUG:pangeo_forge_recipes.storage:FSSpecTarget.open yielding <File-like object GCSFileSystem, pfcsb-bucket/cache/f6ba0c001bcb9f2190a9ff44f62eefd2-https_dapds00.nci.org.au_thredds_fileserver_zv2_agcd_v1_tmax_mean_r005_01day_agcd_v1_tmax_mean_r005_daily_1972.nc>
2022-04-27T22:44:16.681454618Z stderr F DEBUG:pangeo_forge_recipes.storage:file_opener entering second context for <File-like object GCSFileSystem, pfcsb-bucket/cache/f6ba0c001bcb9f2190a9ff44f62eefd2-https_dapds00.nci.org.au_thredds_fileserver_zv2_agcd_v1_tmax_mean_r005_01day_agcd_v1_tmax_mean_r005_daily_1972.nc>
2022-04-27T22:44:16.681515709Z stderr F DEBUG:pangeo_forge_recipes.recipes.xarray_zarr:about to enter xr.open_dataset context on <File-like object GCSFileSystem, pfcsb-bucket/cache/f6ba0c001bcb9f2190a9ff44f62eefd2-https_dapds00.nci.org.au_thredds_fileserver_zv2_agcd_v1_tmax_mean_r005_01day_agcd_v1_tmax_mean_r005_daily_1972.nc>
2022-04-27T22:44:16.51662397Z stderr F distributed.worker - WARNING - Unmanaged memory use is high. This may indicate a memory leak or the memory may not be released to the OS; see https://distributed.dask.org/en/latest/worker.html#memtrim for more information. -- Unmanaged memory: 3.21 GiB -- Worker memory limit: 3.90 GiB
2022-04-27T22:44:16.714994971Z stderr F distributed.worker - WARNING - Unmanaged memory use is high. This may indicate a memory leak or the memory may not be released to the OS; see https://distributed.dask.org/en/latest/worker.html#memtrim for more information. -- Unmanaged memory: 3.21 GiB -- Worker memory limit: 3.90 GiB
2022-04-27T22:44:16.966659512Z stderr F distributed.worker - WARNING - Unmanaged memory use is high. This may indicate a memory leak or the memory may not be released to the OS; see https://distributed.dask.org/en/latest/worker.html#memtrim for more information. -- Unmanaged memory: 3.22 GiB -- Worker memory limit: 3.90 GiB
2022-04-27T22:44:17.168439257Z stderr F distributed.worker - WARNING - Unmanaged memory use is high. This may indicate a memory leak or the memory may not be released to the OS; see https://distributed.dask.org/en/latest/worker.html#memtrim for more information. -- Unmanaged memory: 3.22 GiB -- Worker memory limit: 3.90 GiB
2022-04-27T22:44:17.382488228Z stderr F distributed.worker - WARNING - Unmanaged memory use is high. This may indicate a memory leak or the memory may not be released to the OS; see https://distributed.dask.org/en/latest/worker.html#memtrim for more information. -- Unmanaged memory: 3.23 GiB -- Worker memory limit: 3.90 GiB
2022-04-27T22:44:17.515718496Z stderr F distributed.worker - WARNING - Unmanaged memory use is high. This may indicate a memory leak or the memory may not be released to the OS; see https://distributed.dask.org/en/latest/worker.html#memtrim for more information. -- Unmanaged memory: 3.44 GiB -- Worker memory limit: 3.90 GiB
2022-04-27T22:44:17.767601461Z stderr F distributed.worker - WARNING - Unmanaged memory use is high. This may indicate a memory leak or the memory may not be released to the OS; see https://distributed.dask.org/en/latest/worker.html#memtrim for more information. -- Unmanaged memory: 3.69 GiB -- Worker memory limit: 3.90 GiB
2022-04-27T22:44:17.81913126Z stderr F distributed.nanny - WARNING - Worker exceeded 95% memory budget. Restarting
2022-04-27T22:44:17.89841773Z stderr F distributed.nanny - INFO - Worker process 313 was killed by signal 15
2022-04-27T22:44:17.902191165Z stderr F distributed.nanny - WARNING - Restarting worker
2022-04-27T22:44:17.897680085Z stderr F distributed.scheduler - INFO - Remove client Client-worker-832582ac-c67b-11ec-8139-5b26622d91e3
2022-04-27T22:44:17.898591675Z stderr F distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://10.60.1.6:41861', name: 9, status: paused, memory: 8, processing: 1>
2022-04-27T22:44:17.898671148Z stderr F distributed.core - INFO - Removing comms to tcp://10.60.1.6:41861
2022-04-27T22:44:17.900422841Z stderr F distributed.scheduler - INFO - Close client connection: Client-worker-832582ac-c67b-11ec-8139-5b26622d91e3
2022-04-27T22:44:18.843823645Z stderr F distributed.nanny - INFO - Start Nanny at: 'tcp://10.60.4.8:42677'
2022-04-27T22:44:18.900557248Z stderr F distributed.core - INFO - Event loop was unresponsive in Worker for 12.57s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
2022-04-27T22:44:19.089029139Z stdout F [2022-04-27 22:44:19+0000] INFO - prefect.CloudTaskRunner | Task 'cache_input[5]': Starting task run...
2022-04-27T22:44:19.099668381Z stdout F [2022-04-27 22:44:19+0000] INFO - prefect.CloudTaskRunner | Task 'cache_input[5]': Finished task run for task with final state: 'Success'
2022-04-27T22:44:18.983150575Z stderr F DEBUG:pangeo_forge_recipes.recipes.xarray_zarr:successfully opened dataset
2022-04-27T22:44:18.986783725Z stderr F DEBUG:pangeo_forge_recipes.recipes.xarray_zarr:<xarray.Dataset>
2022-04-27T22:44:18.986844306Z stderr F Dimensions: (time: 366, lat: 691, lon: 886, nv: 2)
2022-04-27T22:44:18.98687019Z stderr F Coordinates:
2022-04-27T22:44:18.986880991Z stderr F * time (time) float64 4.456e+04 4.456e+04 ... 4.492e+04 4.492e+04
2022-04-27T22:44:18.986891066Z stderr F * lat (lat) float32 -44.5 -44.45 -44.4 -44.35 ... -10.1 -10.05 -10.0
2022-04-27T22:44:18.986900294Z stderr F * lon (lon) float32 112.0 112.1 112.1 112.2 ... 156.1 156.1 156.2 156.2
2022-04-27T22:44:18.986909573Z stderr F Dimensions without coordinates: nv
2022-04-27T22:44:18.986921013Z stderr F Data variables:
2022-04-27T22:44:18.986929363Z stderr F lat_bnds (lat, nv) float32 ...
2022-04-27T22:44:18.986937491Z stderr F lon_bnds (lon, nv) float32 ...
2022-04-27T22:44:18.986945738Z stderr F time_bnds (time, nv) float64 ...
2022-04-27T22:44:18.986953966Z stderr F tmax (time, lat, lon) float32 ...
2022-04-27T22:44:18.986966177Z stderr F crs int32 ...
2022-04-27T22:44:18.986975925Z stderr F Attributes: (12/33)
2022-04-27T22:44:18.986984925Z stderr F geospatial_lat_min: -44.525
2022-04-27T22:44:18.987000681Z stderr F geospatial_lat_max: -9.975
2022-04-27T22:44:18.987009316Z stderr F geospatial_lon_min: 111.975
2022-04-27T22:44:18.987018721Z stderr F geospatial_lon_max: 156.275
2022-04-27T22:44:18.987027553Z stderr F time_coverage_start: 1972-01-01T09:00:00
2022-04-27T22:44:18.987035874Z stderr F date_created: 2017-02-24T14:54:03.211654
2022-04-27T22:44:18.987043859Z stderr F ... ...
2022-04-27T22:44:18.987053291Z stderr F licence: Data Licence: The grid data files in this AGCD...
2022-04-27T22:44:18.98706188Z stderr F description: This AGCD data is a snapshot of the operationa...
2022-04-27T22:44:18.987071187Z stderr F date_issued: 2020-12-02 09:15:40
2022-04-27T22:44:18.987080331Z stderr F attribution: Data should be cited as : Australian Bureau of...
2022-04-27T22:44:18.987088708Z stderr F copyright: (C) Copyright Commonwealth of Australia 2020, ...
2022-04-27T22:44:18.98709719Z stderr F history:
2022-04-27T22:44:18.987209298Z stderr F INFO:pangeo_forge_recipes.recipes.xarray_zarr:Subsetting input according to time-0
2022-04-27T22:44:18.987225537Z stderr F DEBUG:pangeo_forge_recipes.recipes.xarray_zarr:Subsetting dataset with indexer {'time': slice(0, 183, None)}
2022-04-27T22:44:18.990856137Z stderr F INFO:pangeo_forge_recipes.recipes.xarray_zarr:Combining inputs for chunk 'Index({DimIndex(name='time', index=1, sequence_len=2, operation=<CombineOp.CONCAT: 2>), DimIndex(name='variable', index=1, sequence_len=5, operation=<CombineOp.MERGE: 1>), DimIndex(name='time', index=0, sequence_len=2, operation=<CombineOp.SUBSET: 3>)})'
2022-04-27T22:44:18.993592236Z stderr F DEBUG:pangeo_forge_recipes.recipes.xarray_zarr:<xarray.Dataset>
2022-04-27T22:44:18.993644397Z stderr F Dimensions: (time: 183, lat: 691, lon: 886)
2022-04-27T22:44:18.993654019Z stderr F Coordinates:
2022-04-27T22:44:18.993661812Z stderr F * time (time) float64 4.456e+04 4.456e+04 ... 4.474e+04 4.474e+04
2022-04-27T22:44:18.993669254Z stderr F * lat (lat) float32 -44.5 -44.45 -44.4 -44.35 ... -10.1 -10.05 -10.0
2022-04-27T22:44:18.993677228Z stderr F * lon (lon) float32 112.0 112.1 112.1 112.2 ... 156.1 156.1 156.2 156.2
2022-04-27T22:44:18.99368388Z stderr F Data variables:
2022-04-27T22:44:18.993690737Z stderr F tmax (time, lat, lon) float32 dask.array<chunksize=(183, 691, 886), meta=np.ndarray>
2022-04-27T22:44:18.993698862Z stderr F Attributes: (12/33)
2022-04-27T22:44:18.993714927Z stderr F geospatial_lat_min: -44.525
2022-04-27T22:44:18.99372249Z stderr F geospatial_lat_max: -9.975
2022-04-27T22:44:18.993728528Z stderr F geospatial_lon_min: 111.975
2022-04-27T22:44:18.993735158Z stderr F geospatial_lon_max: 156.275
2022-04-27T22:44:18.993741036Z stderr F time_coverage_start: 1972-01-01T09:00:00
2022-04-27T22:44:18.993748305Z stderr F date_created: 2017-02-24T14:54:03.211654
2022-04-27T22:44:18.993754941Z stderr F ... ...
2022-04-27T22:44:18.993803669Z stderr F licence: Data Licence: The grid data files in this AGCD...
2022-04-27T22:44:18.993811001Z stderr F description: This AGCD data is a snapshot of the operationa...
2022-04-27T22:44:18.993818092Z stderr F date_issued: 2020-12-02 09:15:40
2022-04-27T22:44:18.993824943Z stderr F attribution: Data should be cited as : Australian Bureau of...
2022-04-27T22:44:18.993832008Z stderr F copyright: (C) Copyright Commonwealth of Australia 2020, ...
2022-04-27T22:44:18.99383871Z stderr F history:
2022-04-27T22:44:19.315772723Z stdout F [2022-04-27 22:44:19+0000] INFO - prefect.CloudTaskRunner | Task 'cache_input[2]': Starting task run...
2022-04-27T22:44:19.325462551Z stdout F [2022-04-27 22:44:19+0000] INFO - prefect.CloudTaskRunner | Task 'cache_input[2]': Finished task run for task with final state: 'Success'
2022-04-27T22:44:19.366000078Z stderr F DEBUG:pangeo_forge_recipes.recipes.xarray_zarr:Converting variable time of 1464 bytes to `numpy.ndarray`
2022-04-27T22:44:19.366317112Z stderr F DEBUG:pangeo_forge_recipes.recipes.xarray_zarr:Acquiring locks ['time-time-18', 'time-time-27']
2022-04-27T22:44:19.383687442Z stderr F DEBUG:pangeo_forge_recipes.utils:Acquiring lock pangeo-forge-time-time-18...
2022-04-27T22:44:19.386327715Z stderr F DEBUG:pangeo_forge_recipes.utils:Acquired lock pangeo-forge-time-time-18
2022-04-27T22:44:19.386397206Z stderr F DEBUG:pangeo_forge_recipes.utils:Acquiring lock pangeo-forge-time-time-27...
2022-04-27T22:44:19.388250911Z stderr F DEBUG:pangeo_forge_recipes.utils:Acquired lock pangeo-forge-time-time-27
2022-04-27T22:44:19.388304076Z stderr F INFO:pangeo_forge_recipes.recipes.xarray_zarr:Storing variable time chunk Index({DimIndex(name='time', index=1, sequence_len=2, operation=<CombineOp.CONCAT: 2>), DimIndex(name='variable', index=1, sequence_len=5, operation=<CombineOp.MERGE: 1>), DimIndex(name='time', index=0, sequence_len=2, operation=<CombineOp.SUBSET: 3>)}) to Zarr region (slice(365, 548, None),)
2022-04-27T22:44:19.380453439Z stderr F distributed.scheduler - INFO - Receive client connection: Client-worker-932628f9-c67b-11ec-8139-7b346736645a
2022-04-27T22:44:19.381436238Z stderr F distributed.core - INFO - Starting established connection
2022-04-27T22:44:19.585448023Z stdout F [2022-04-27 22:44:19+0000] INFO - prefect.CloudTaskRunner | Task 'cache_input[4]': Starting task run...
2022-04-27T22:44:19.596994977Z stdout F [2022-04-27 22:44:19+0000] INFO - prefect.CloudTaskRunner | Task 'cache_input[4]': Finished task run for task with final state: 'Success'
2022-04-27T22:44:19.787168195Z stdout F [2022-04-27 22:44:19+0000] INFO - prefect.CloudTaskRunner | Task 'cache_input[1]': Starting task run...
2022-04-27T22:44:19.798723536Z stdout F [2022-04-27 22:44:19+0000] INFO - prefect.CloudTaskRunner | Task 'cache_input[1]': Finished task run for task with final state: 'Success'
2022-04-27T22:44:19.706709243Z stderr F DEBUG:pangeo_forge_recipes.utils:Released lock pangeo-forge-time-time-18
2022-04-27T22:44:19.708869405Z stderr F DEBUG:pangeo_forge_recipes.utils:Released lock pangeo-forge-time-time-27
2022-04-27T22:44:19.808848806Z stderr F DEBUG:pangeo_forge_recipes.recipes.xarray_zarr:Converting variable tmax of 448149432 bytes to `numpy.ndarray`
2022-04-27T22:44:20.001452979Z stdout F [2022-04-27 22:44:20+0000] INFO - prefect.CloudTaskRunner | Task 'cache_input[8]': Starting task run...
2022-04-27T22:44:20.013884654Z stdout F [2022-04-27 22:44:20+0000] INFO - prefect.CloudTaskRunner | Task 'cache_input[8]': Finished task run for task with final state: 'Success'
2022-04-27T22:44:20.168095048Z stderr F distributed.dask_worker - INFO - Exiting on signal 15
2022-04-27T22:44:20.168437162Z stderr F distributed.nanny - INFO - Closing Nanny at 'tcp://10.60.1.7:38733'
2022-04-27T22:44:20.171203129Z stderr F distributed.worker - INFO - Stopping worker at tcp://10.60.1.7:44993
2022-04-27T22:44:20.151789506Z stderr F distributed.dask_worker - INFO - Exiting on signal 15
2022-04-27T22:44:20.152438378Z stderr F distributed.nanny - INFO - Closing Nanny at 'tcp://10.60.4.7:42005'
2022-04-27T22:44:20.169067752Z stderr F distributed.dask_worker - INFO - Exiting on signal 15
2022-04-27T22:44:20.169533307Z stderr F distributed.nanny - INFO - Closing Nanny at 'tcp://10.60.1.6:42131'
2022-04-27T22:44:20.084968714Z stderr F distributed.scheduler - INFO - Retire worker names (9, 10, 11, 12, 13, 14, 15)
2022-04-27T22:44:20.085260949Z stderr F distributed.scheduler - INFO - Retire workers {<WorkerState 'tcp://10.60.4.7:42337', name: 14, status: running, memory: 3, processing: 1>, <WorkerState 'tcp://10.60.1.7:44993', name: 15, status: running, memory: 5, processing: 21>}
2022-04-27T22:44:20.172466854Z stderr F distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://10.60.1.7:44993', name: 15, status: running, memory: 5, processing: 21>
2022-04-27T22:44:20.172880975Z stderr F distributed.core - INFO - Removing comms to tcp://10.60.1.7:44993
2022-04-27T22:44:20.203243339Z stderr F distributed.scheduler - INFO - Remove client Client-worker-932628f9-c67b-11ec-8139-7b346736645a
2022-04-27T22:44:20.205111035Z stderr F distributed.scheduler - INFO - Remove client Client-worker-932628f9-c67b-11ec-8139-7b346736645a
2022-04-27T22:44:20.205162358Z stderr F distributed.scheduler - INFO - Close client connection: Client-worker-932628f9-c67b-11ec-8139-7b346736645a
2022-04-27T22:44:20.207370175Z stderr F distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://10.60.4.7:42337', name: 14, status: closing, memory: 3, processing: 27>
2022-04-27T22:44:20.207405769Z stderr F distributed.core - INFO - Removing comms to tcp://10.60.4.7:42337
2022-04-27T22:44:20.209446185Z stderr F distributed.scheduler - INFO - Lost all workers
2022-04-27T22:44:20.221428044Z stdout F [2022-04-27 22:44:20+0000] INFO - prefect.CloudTaskRunner | Task 'cache_input[7]': Starting task run...
2022-04-27T22:44:20.233054488Z stdout F [2022-04-27 22:44:20+0000] INFO - prefect.CloudTaskRunner | Task 'cache_input[7]': Finished task run for task with final state: 'Success'
2022-04-27T22:44:20.200649784Z stderr F distributed.worker - INFO - Stopping worker at tcp://10.60.4.7:42337
2022-04-27T22:44:20.487625734Z stderr F distributed.worker - INFO - Start worker at: tcp://10.60.1.6:40619
2022-04-27T22:44:20.487697272Z stderr F distributed.worker - INFO - Listening to: tcp://10.60.1.6:40619
2022-04-27T22:44:20.48770843Z stderr F distributed.worker - INFO - dashboard at: 10.60.1.6:38547
2022-04-27T22:44:20.487729075Z stderr F distributed.worker - INFO - Waiting to connect to: tcp://dask-jovyan-0f2793d7-1.pangeo-forge-columbia-staging-bakery:8786
2022-04-27T22:44:20.487751112Z stderr F distributed.worker - INFO - -------------------------------------------------
2022-04-27T22:44:20.487759572Z stderr F distributed.worker - INFO - Threads: 1
2022-04-27T22:44:20.487766512Z stderr F distributed.worker - INFO - Memory: 3.90 GiB
2022-04-27T22:44:20.487774037Z stderr F distributed.worker - INFO - Local Directory: /home/jovyan/dask-worker-space/worker-rcp7kmx4
2022-04-27T22:44:20.487842586Z stderr F distributed.worker - INFO - -------------------------------------------------
2022-04-27T22:44:20.488817443Z stderr F distributed.worker - INFO - Stopping worker at tcp://10.60.1.6:40619
2022-04-27T22:44:20.48892097Z stderr F distributed.worker - INFO - Closed worker has not yet started: Status.undefined
2022-04-27T22:44:20.505158729Z stderr F distributed.worker - INFO - Registered to: tcp://dask-jovyan-0f2793d7-1.pangeo-forge-columbia-staging-bakery:8786
2022-04-27T22:44:20.505226499Z stderr F distributed.worker - INFO - -------------------------------------------------
2022-04-27T22:44:20.50591065Z stderr F distributed.core - INFO - Starting established connection
2022-04-27T22:44:20.508687248Z stderr F distributed.worker - INFO - Stopping worker at tcp://10.60.1.6:40619
2022-04-27T22:44:20.511285264Z stderr F tornado.application - ERROR - Exception in callback functools.partial(<bound method IOLoop._discard_future_result of <tornado.platform.asyncio.AsyncIOLoop object at 0x7f4de24a3310>>, <Task finished name='Task-14' coro=<Worker.close() done, defined at /srv/conda/envs/notebook/lib/python3.9/site-packages/distributed/worker.py:1511> exception=CommClosedError('ConnectionPool not running. Status: Status.closed')>)
2022-04-27T22:44:20.511329991Z stderr F Traceback (most recent call last):
2022-04-27T22:44:20.51134671Z stderr F File "/srv/conda/envs/notebook/lib/python3.9/site-packages/distributed/comm/core.py", line 284, in connect
2022-04-27T22:44:20.511357801Z stderr F comm = await asyncio.wait_for(
2022-04-27T22:44:20.511368181Z stderr F File "/srv/conda/envs/notebook/lib/python3.9/asyncio/tasks.py", line 468, in wait_for
2022-04-27T22:44:20.511384299Z stderr F await waiter
2022-04-27T22:44:20.511393957Z stderr F asyncio.exceptions.CancelledError
2022-04-27T22:44:20.511402123Z stderr F
2022-04-27T22:44:20.511411624Z stderr F During handling of the above exception, another exception occurred:
2022-04-27T22:44:20.511420117Z stderr F
2022-04-27T22:44:20.51142952Z stderr F Traceback (most recent call last):
2022-04-27T22:44:20.511438176Z stderr F File "/srv/conda/envs/notebook/lib/python3.9/site-packages/distributed/core.py", line 1064, in connect
2022-04-27T22:44:20.511450251Z stderr F comm = await fut
2022-04-27T22:44:20.511458967Z stderr F asyncio.exceptions.CancelledError
2022-04-27T22:44:20.511466495Z stderr F
2022-04-27T22:44:20.511474865Z stderr F The above exception was the direct cause of the following exception:
2022-04-27T22:44:20.511482864Z stderr F
2022-04-27T22:44:20.511491789Z stderr F Traceback (most recent call last):
2022-04-27T22:44:20.511500376Z stderr F File "/srv/conda/envs/notebook/lib/python3.9/site-packages/tornado/ioloop.py", line 741, in _run_callback
2022-04-27T22:44:20.511508644Z stderr F ret = callback()
2022-04-27T22:44:20.511518045Z stderr F File "/srv/conda/envs/notebook/lib/python3.9/site-packages/tornado/ioloop.py", line 765, in _discard_future_result
2022-04-27T22:44:20.511527415Z stderr F future.result()
2022-04-27T22:44:20.511536652Z stderr F File "/srv/conda/envs/notebook/lib/python3.9/site-packages/distributed/worker.py", line 1535, in close
2022-04-27T22:44:20.511557621Z stderr F await r.close_gracefully()
2022-04-27T22:44:20.511568932Z stderr F File "/srv/conda/envs/notebook/lib/python3.9/site-packages/distributed/core.py", line 892, in send_recv_from_rpc
2022-04-27T22:44:20.511577531Z stderr F comm = await self.pool.connect(self.addr)
2022-04-27T22:44:20.511586416Z stderr F File "/srv/conda/envs/notebook/lib/python3.9/site-packages/distributed/core.py", line 1075, in connect
2022-04-27T22:44:20.511594387Z stderr F raise CommClosedError(
2022-04-27T22:44:20.511604983Z stderr F distributed.comm.core.CommClosedError: ConnectionPool not running. Status: Status.closed
2022-04-27T22:44:20.503861778Z stderr F distributed.scheduler - INFO - Register worker <WorkerState 'tcp://10.60.1.6:40619', name: 9, status: closing, memory: 0, processing: 0>
2022-04-27T22:44:20.504640419Z stderr F distributed.scheduler - INFO - Starting worker compute stream, tcp://10.60.1.6:40619
2022-04-27T22:44:20.504759217Z stderr F distributed.core - INFO - Starting established connection
2022-04-27T22:44:20.50611127Z stderr F distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://10.60.1.6:40619', name: 9, status: closing, memory: 0, processing: 0>
2022-04-27T22:44:20.506257001Z stderr F distributed.core - INFO - Removing comms to tcp://10.60.1.6:40619
2022-04-27T22:44:20.506339759Z stderr F distributed.scheduler - INFO - Lost all workers
2022-04-27T22:44:21.019242464Z stderr F distributed.dask_worker - INFO - End worker
2022-04-27T22:44:21.818574007Z stderr F distributed.nanny - WARNING - Worker process still alive after 1 seconds, killing
2022-04-27T22:44:21.868258118Z stderr F distributed.dask_worker - INFO - End worker
2022-04-27T22:44:21.86863234Z stderr F distributed.process - INFO - reaping stray process <SpawnProcess name='Dask Worker process (from Nanny)' pid=313 parent=1 started daemon>
2022-04-27T22:44:21.800698601Z stderr F distributed.nanny - WARNING - Worker process still alive after 1 seconds, killing
2022-04-27T22:44:21.80250778Z stderr F distributed.dask_worker - INFO - End worker
2022-04-27T22:44:21.803561549Z stderr F distributed.process - INFO - reaping stray process <SpawnProcess name='Dask Worker process (from Nanny)' pid=313 parent=1 started daemon>
2022-04-27T22:44:21.820467827Z stderr F Exception in thread AsyncProcess Dask Worker process (from Nanny) watch process join:
2022-04-27T22:44:22.237591839Z stderr F distributed.worker - INFO - Start worker at: tcp://10.60.4.8:39365
2022-04-27T22:44:22.237718125Z stderr F distributed.worker - INFO - Listening to: tcp://10.60.4.8:39365
2022-04-27T22:44:22.237733308Z stderr F distributed.worker - INFO - dashboard at: 10.60.4.8:34981
2022-04-27T22:44:22.237740858Z stderr F distributed.worker - INFO - Waiting to connect to: tcp://dask-jovyan-0f2793d7-1.pangeo-forge-columbia-staging-bakery:8786
2022-04-27T22:44:22.23775695Z stderr F distributed.worker - INFO - -------------------------------------------------
2022-04-27T22:44:22.237825465Z stderr F distributed.worker - INFO - Threads: 1
2022-04-27T22:44:22.237835435Z stderr F distributed.worker - INFO - Memory: 3.90 GiB
2022-04-27T22:44:22.237874791Z stderr F distributed.worker - INFO - Local Directory: /home/jovyan/dask-worker-space/worker-ekws2r_e
2022-04-27T22:44:22.23805287Z stderr F distributed.worker - INFO - -------------------------------------------------
2022-04-27T22:44:22.302904192Z stderr F distributed.worker - INFO - Registered to: tcp://dask-jovyan-0f2793d7-1.pangeo-forge-columbia-staging-bakery:8786
2022-04-27T22:44:22.302975201Z stderr F distributed.worker - INFO - -------------------------------------------------
2022-04-27T22:44:22.303908277Z stderr F distributed.core - INFO - Starting established connection
2022-04-27T22:44:22.302438654Z stderr F distributed.scheduler - INFO - Register worker <WorkerState 'tcp://10.60.4.8:39365', name: 18, status: undefined, memory: 0, processing: 0>
2022-04-27T22:44:22.302490542Z stderr F distributed.scheduler - INFO - Starting worker compute stream, tcp://10.60.4.8:39365
2022-04-27T22:44:22.30249848Z stderr F distributed.core - INFO - Starting established connection
2022-04-27T22:44:23.084251342Z stderr F distributed.scheduler - INFO - Retire worker names (16,)
2022-04-27T22:44:34.712284504Z stderr F distributed.core - INFO - Event loop was unresponsive in Worker for 12.41s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
2022-04-27T22:44:35.090222853Z stdout F [2022-04-27 22:44:35+0000] INFO - prefect.CloudTaskRunner | Task 'cache_input[6]': Starting task run...
2022-04-27T22:44:35.101964027Z stdout F [2022-04-27 22:44:35+0000] INFO - prefect.CloudTaskRunner | Task 'cache_input[6]': Finished task run for task with final state: 'Success'
2022-04-27T22:44:35.341427438Z stdout F [2022-04-27 22:44:35+0000] INFO - prefect.CloudTaskRunner | Task 'cache_input[5]': Starting task run...
2022-04-27T22:44:35.352520289Z stdout F [2022-04-27 22:44:35+0000] INFO - prefect.CloudTaskRunner | Task 'cache_input[5]': Finished task run for task with final state: 'Success'
2022-04-27T22:44:35.545293616Z stdout F [2022-04-27 22:44:35+0000] INFO - prefect.CloudTaskRunner | Task 'cache_input[1]': Starting task run...
2022-04-27T22:44:35.557943705Z stdout F [2022-04-27 22:44:35+0000] INFO - prefect.CloudTaskRunner | Task 'cache_input[1]': Finished task run for task with final state: 'Success'
2022-04-27T22:44:35.844370445Z stdout F [2022-04-27 22:44:35+0000] INFO - prefect.CloudTaskRunner | Task 'cache_input[0]': Starting task run...
2022-04-27T22:44:35.856664361Z stdout F [2022-04-27 22:44:35+0000] INFO - prefect.CloudTaskRunner | Task 'cache_input[0]': Finished task run for task with final state: 'Success'
2022-04-27T22:44:36.057544269Z stdout F [2022-04-27 22:44:36+0000] INFO - prefect.CloudTaskRunner | Task 'cache_input[4]': Starting task run...
2022-04-27T22:44:36.067305962Z stdout F [2022-04-27 22:44:36+0000] INFO - prefect.CloudTaskRunner | Task 'cache_input[4]': Finished task run for task with final state: 'Success'
2022-04-27T22:44:36.238957662Z stdout F [2022-04-27 22:44:36+0000] INFO - prefect.CloudTaskRunner | Task 'cache_input[8]': Starting task run...
2022-04-27T22:44:36.251059871Z stdout F [2022-04-27 22:44:36+0000] INFO - prefect.CloudTaskRunner | Task 'cache_input[8]': Finished task run for task with final state: 'Success'
2022-04-27T22:44:36.452350034Z stdout F [2022-04-27 22:44:36+0000] INFO - prefect.CloudTaskRunner | Task 'cache_input[9]': Starting task run...
2022-04-27T22:44:36.461929284Z stdout F [2022-04-27 22:44:36+0000] INFO - prefect.CloudTaskRunner | Task 'cache_input[9]': Finished task run for task with final state: 'Success'
2022-04-27T22:44:36.599350041Z stdout F [2022-04-27 22:44:36+0000] INFO - prefect.CloudTaskRunner | Task 'cache_input[2]': Starting task run...
2022-04-27T22:44:36.611304381Z stdout F [2022-04-27 22:44:36+0000] INFO - prefect.CloudTaskRunner | Task 'cache_input[2]': Finished task run for task with final state: 'Success'
2022-04-27T22:44:36.835238948Z stdout F [2022-04-27 22:44:36+0000] INFO - prefect.CloudTaskRunner | Task 'cache_input[3]': Starting task run...
2022-04-27T22:44:36.844751365Z stdout F [2022-04-27 22:44:36+0000] INFO - prefect.CloudTaskRunner | Task 'cache_input[3]': Finished task run for task with final state: 'Success'
2022-04-27T22:44:37.057593729Z stdout F [2022-04-27 22:44:37+0000] INFO - prefect.CloudTaskRunner | Task 'cache_input[7]': Starting task run...
2022-04-27T22:44:37.067020497Z stdout F [2022-04-27 22:44:37+0000] INFO - prefect.CloudTaskRunner | Task 'cache_input[7]': Finished task run for task with final state: 'Success'
2022-04-27T22:44:37.388826786Z stdout F [2022-04-27 22:44:37+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[8]': Starting task run...
2022-04-27T22:44:37.39865416Z stdout F [2022-04-27 22:44:37+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[8]': Finished task run for task with final state: 'Failed'
2022-04-27T22:44:37.597490606Z stdout F [2022-04-27 22:44:37+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[16]': Starting task run...
2022-04-27T22:44:37.608796976Z stdout F [2022-04-27 22:44:37+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[16]': Finished task run for task with final state: 'Running'
2022-04-27T22:44:37.792919799Z stdout F [2022-04-27 22:44:37+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[4]': Starting task run...
2022-04-27T22:44:37.803168038Z stdout F [2022-04-27 22:44:37+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[4]': Finished task run for task with final state: 'Running'
2022-04-27T22:44:38.029158128Z stdout F [2022-04-27 22:44:38+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[7]': Starting task run...
2022-04-27T22:44:38.039463641Z stdout F [2022-04-27 22:44:38+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[7]': Finished task run for task with final state: 'Running'
2022-04-27T22:44:38.228417984Z stdout F [2022-04-27 22:44:38+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[12]': Starting task run...
2022-04-27T22:44:38.238292199Z stdout F [2022-04-27 22:44:38+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[12]': Finished task run for task with final state: 'Running'
2022-04-27T22:44:38.401871748Z stdout F [2022-04-27 22:44:38+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[14]': Starting task run...
2022-04-27T22:44:38.411794902Z stdout F [2022-04-27 22:44:38+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[14]': Finished task run for task with final state: 'Success'
2022-04-27T22:44:38.640050299Z stdout F [2022-04-27 22:44:38+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[5]': Starting task run...
2022-04-27T22:44:38.649175995Z stdout F [2022-04-27 22:44:38+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[5]': Finished task run for task with final state: 'Running'
2022-04-27T22:44:38.816602678Z stdout F [2022-04-27 22:44:38+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[18]': Starting task run...
2022-04-27T22:44:38.829906058Z stdout F [2022-04-27 22:44:38+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[18]': Finished task run for task with final state: 'Running'
2022-04-27T22:44:39.044253399Z stdout F [2022-04-27 22:44:39+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[10]': Starting task run...
2022-04-27T22:44:39.082603934Z stderr F distributed.scheduler - INFO - Retire worker names (17,)
2022-04-27T22:44:39.198642737Z stderr F INFO:pangeo_forge_recipes.recipes.xarray_zarr:Opening inputs for chunk Index({DimIndex(name='variable', index=0, sequence_len=5, operation=<CombineOp.MERGE: 1>), DimIndex(name='time', index=1, sequence_len=2, operation=<CombineOp.CONCAT: 2>), DimIndex(name='time', index=0, sequence_len=2, operation=<CombineOp.SUBSET: 3>)})
2022-04-27T22:44:39.198836406Z stderr F INFO:pangeo_forge_recipes.recipes.xarray_zarr:Opening input with Xarray Index({DimIndex(name='variable', index=0, sequence_len=5, operation=<CombineOp.MERGE: 1>), DimIndex(name='time', index=1, sequence_len=2, operation=<CombineOp.CONCAT: 2>)}): 'https://dapds00.nci.org.au/thredds/fileServer/zv2/agcd/v1/precip/total/r005/01day/agcd_v1_precip_total_r005_daily_1972.nc'
2022-04-27T22:44:39.198955372Z stderr F INFO:pangeo_forge_recipes.storage:Opening 'https://dapds00.nci.org.au/thredds/fileServer/zv2/agcd/v1/precip/total/r005/01day/agcd_v1_precip_total_r005_daily_1972.nc' from cache
2022-04-27T22:44:39.198972706Z stderr F DEBUG:pangeo_forge_recipes.storage:file_opener entering first context for <contextlib._GeneratorContextManager object at 0x7f3b746927c0>
2022-04-27T22:44:39.199481199Z stderr F DEBUG:pangeo_forge_recipes.storage:entering fs.open context manager for pfcsb-bucket/cache/f2b8c7cabd1ed75b42284d1995d0d23e-https_dapds00.nci.org.au_thredds_fileserver_zv2_agcd_v1_precip_total_r005_01day_agcd_v1_precip_total_r005_daily_1972.nc
2022-04-27T22:44:39.275775012Z stderr F DEBUG:pangeo_forge_recipes.storage:FSSpecTarget.open yielding <File-like object GCSFileSystem, pfcsb-bucket/cache/f2b8c7cabd1ed75b42284d1995d0d23e-https_dapds00.nci.org.au_thredds_fileserver_zv2_agcd_v1_precip_total_r005_01day_agcd_v1_precip_total_r005_daily_1972.nc>
2022-04-27T22:44:39.275842139Z stderr F DEBUG:pangeo_forge_recipes.storage:file_opener entering second context for <File-like object GCSFileSystem, pfcsb-bucket/cache/f2b8c7cabd1ed75b42284d1995d0d23e-https_dapds00.nci.org.au_thredds_fileserver_zv2_agcd_v1_precip_total_r005_01day_agcd_v1_precip_total_r005_daily_1972.nc>
2022-04-27T22:44:39.27590287Z stderr F DEBUG:pangeo_forge_recipes.recipes.xarray_zarr:about to enter xr.open_dataset context on <File-like object GCSFileSystem, pfcsb-bucket/cache/f2b8c7cabd1ed75b42284d1995d0d23e-https_dapds00.nci.org.au_thredds_fileserver_zv2_agcd_v1_precip_total_r005_01day_agcd_v1_precip_total_r005_daily_1972.nc>
2022-04-27T22:44:41.656029414Z stderr F DEBUG:pangeo_forge_recipes.recipes.xarray_zarr:successfully opened dataset
2022-04-27T22:44:41.660013773Z stderr F DEBUG:pangeo_forge_recipes.recipes.xarray_zarr:<xarray.Dataset>
2022-04-27T22:44:41.660065912Z stderr F Dimensions: (time: 366, lat: 691, lon: 886, nv: 2)
2022-04-27T22:44:41.660076383Z stderr F Coordinates:
2022-04-27T22:44:41.660083664Z stderr F * time (time) float64 4.456e+04 4.456e+04 ... 4.492e+04 4.492e+04
2022-04-27T22:44:41.660093887Z stderr F * lat (lat) float32 -44.5 -44.45 -44.4 -44.35 ... -10.1 -10.05 -10.0
2022-04-27T22:44:41.660101778Z stderr F * lon (lon) float32 112.0 112.1 112.1 112.2 ... 156.1 156.1 156.2 156.2
2022-04-27T22:44:41.660109909Z stderr F Dimensions without coordinates: nv
2022-04-27T22:44:41.660126035Z stderr F Data variables:
2022-04-27T22:44:41.660132819Z stderr F lat_bnds (lat, nv) float32 ...
2022-04-27T22:44:41.660138958Z stderr F lon_bnds (lon, nv) float32 ...
2022-04-27T22:44:41.66014573Z stderr F time_bnds (time, nv) float64 ...
2022-04-27T22:44:41.66015229Z stderr F precip (time, lat, lon) float32 ...
2022-04-27T22:44:41.660162146Z stderr F crs int32 ...
2022-04-27T22:44:41.660168231Z stderr F Attributes: (12/33)
2022-04-27T22:44:41.660174472Z stderr F geospatial_lat_min: -44.525
2022-04-27T22:44:41.660181131Z stderr F geospatial_lat_max: -9.975
2022-04-27T22:44:41.660187696Z stderr F geospatial_lon_min: 111.975
2022-04-27T22:44:41.660194926Z stderr F geospatial_lon_max: 156.275
2022-04-27T22:44:41.660202412Z stderr F time_coverage_start: 1971-12-31T09:00:00
2022-04-27T22:44:41.660209578Z stderr F date_created: 2017-01-18T12:14:35.516070
2022-04-27T22:44:41.660216515Z stderr F ... ...
2022-04-27T22:44:41.660224477Z stderr F licence: Data Licence: The grid data files in this AGCD...
2022-04-27T22:44:41.66023204Z stderr F description: This AGCD data is a snapshot of the operationa...
2022-04-27T22:44:41.660262251Z stderr F date_issued: 2020-12-02 05:42:26
2022-04-27T22:44:41.660269751Z stderr F attribution: Data should be cited as : Australian Bureau of...
2022-04-27T22:44:41.660276469Z stderr F copyright: (C) Copyright Commonwealth of Australia 2020, ...
2022-04-27T22:44:41.660283586Z stderr F history:
2022-04-27T22:44:41.66031317Z stderr F INFO:pangeo_forge_recipes.recipes.xarray_zarr:Subsetting input according to time-0
2022-04-27T22:44:41.66034242Z stderr F DEBUG:pangeo_forge_recipes.recipes.xarray_zarr:Subsetting dataset with indexer {'time': slice(0, 183, None)}
2022-04-27T22:44:41.664050322Z stderr F INFO:pangeo_forge_recipes.recipes.xarray_zarr:Combining inputs for chunk 'Index({DimIndex(name='variable', index=0, sequence_len=5, operation=<CombineOp.MERGE: 1>), DimIndex(name='time', index=1, sequence_len=2, operation=<CombineOp.CONCAT: 2>), DimIndex(name='time', index=0, sequence_len=2, operation=<CombineOp.SUBSET: 3>)})'
2022-04-27T22:44:41.666747253Z stderr F DEBUG:pangeo_forge_recipes.recipes.xarray_zarr:<xarray.Dataset>
2022-04-27T22:44:41.6668868Z stderr F Dimensions: (time: 183, lat: 691, lon: 886)
2022-04-27T22:44:41.666927312Z stderr F Coordinates:
2022-04-27T22:44:41.666940007Z stderr F * time (time) float64 4.456e+04 4.456e+04 ... 4.474e+04 4.474e+04
2022-04-27T22:44:41.666952462Z stderr F * lat (lat) float32 -44.5 -44.45 -44.4 -44.35 ... -10.1 -10.05 -10.0
2022-04-27T22:44:41.666963504Z stderr F * lon (lon) float32 112.0 112.1 112.1 112.2 ... 156.1 156.1 156.2 156.2
2022-04-27T22:44:41.666973334Z stderr F Data variables:
2022-04-27T22:44:41.666982682Z stderr F precip (time, lat, lon) float32 dask.array<chunksize=(183, 691, 886), meta=np.ndarray>
2022-04-27T22:44:41.667007584Z stderr F Attributes: (12/33)
2022-04-27T22:44:41.66701787Z stderr F geospatial_lat_min: -44.525
2022-04-27T22:44:41.667038675Z stderr F geospatial_lat_max: -9.975
2022-04-27T22:44:41.66704809Z stderr F geospatial_lon_min: 111.975
2022-04-27T22:44:41.66705798Z stderr F geospatial_lon_max: 156.275
2022-04-27T22:44:41.667067204Z stderr F time_coverage_start: 1971-12-31T09:00:00
2022-04-27T22:44:41.667080577Z stderr F date_created: 2017-01-18T12:14:35.516070
2022-04-27T22:44:41.667090932Z stderr F ... ...
2022-04-27T22:44:41.667101049Z stderr F licence: Data Licence: The grid data files in this AGCD...
2022-04-27T22:44:41.667111411Z stderr F description: This AGCD data is a snapshot of the operationa...
2022-04-27T22:44:41.667121295Z stderr F date_issued: 2020-12-02 05:42:26
2022-04-27T22:44:41.667130365Z stderr F attribution: Data should be cited as : Australian Bureau of...
2022-04-27T22:44:41.667139502Z stderr F copyright: (C) Copyright Commonwealth of Australia 2020, ...
2022-04-27T22:44:41.66714941Z stderr F history:
2022-04-27T22:44:42.059837828Z stderr F DEBUG:pangeo_forge_recipes.recipes.xarray_zarr:Converting variable time of 1464 bytes to `numpy.ndarray`
2022-04-27T22:44:42.060039154Z stderr F DEBUG:pangeo_forge_recipes.recipes.xarray_zarr:Acquiring locks ['time-time-18', 'time-time-27']
2022-04-27T22:44:42.071749754Z stderr F DEBUG:pangeo_forge_recipes.utils:Acquiring lock pangeo-forge-time-time-18...
2022-04-27T22:44:42.073401433Z stderr F DEBUG:pangeo_forge_recipes.utils:Acquired lock pangeo-forge-time-time-18
2022-04-27T22:44:42.073450674Z stderr F DEBUG:pangeo_forge_recipes.utils:Acquiring lock pangeo-forge-time-time-27...
2022-04-27T22:44:42.075453045Z stderr F DEBUG:pangeo_forge_recipes.utils:Acquired lock pangeo-forge-time-time-27
2022-04-27T22:44:42.075649459Z stderr F INFO:pangeo_forge_recipes.recipes.xarray_zarr:Storing variable time chunk Index({DimIndex(name='variable', index=0, sequence_len=5, operation=<CombineOp.MERGE: 1>), DimIndex(name='time', index=1, sequence_len=2, operation=<CombineOp.CONCAT: 2>), DimIndex(name='time', index=0, sequence_len=2, operation=<CombineOp.SUBSET: 3>)}) to Zarr region (slice(365, 548, None),)
2022-04-27T22:44:42.069377959Z stderr F distributed.scheduler - INFO - Receive client connection: Client-worker-a0ace7ea-c67b-11ec-8139-07ed25e8274f
2022-04-27T22:44:42.069914662Z stderr F distributed.core - INFO - Starting established connection
2022-04-27T22:44:42.36988287Z stderr F DEBUG:pangeo_forge_recipes.utils:Released lock pangeo-forge-time-time-18
2022-04-27T22:44:42.371270876Z stderr F DEBUG:pangeo_forge_recipes.utils:Released lock pangeo-forge-time-time-27
2022-04-27T22:44:42.476564351Z stderr F DEBUG:pangeo_forge_recipes.recipes.xarray_zarr:Converting variable precip of 448149432 bytes to `numpy.ndarray`
2022-04-27T22:44:48.023377378Z stderr F distributed.scheduler - INFO - Remove client Client-worker-bf9f3e0b-c678-11ec-813a-21bc5adbea7d
2022-04-27T22:44:48.023463632Z stderr F distributed.scheduler - INFO - Remove client Client-worker-bf9f3e0b-c678-11ec-813a-21bc5adbea7d
2022-04-27T22:44:48.023523721Z stderr F distributed.scheduler - INFO - Close client connection: Client-worker-bf9f3e0b-c678-11ec-813a-21bc5adbea7d
2022-04-27T22:44:48.024646334Z stderr F distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://10.60.3.5:35339', name: 14, status: closing, memory: 0, processing: 0>
2022-04-27T22:44:48.024692794Z stderr F distributed.core - INFO - Removing comms to tcp://10.60.3.5:35339
2022-04-27T22:44:48.025981303Z stderr F distributed.scheduler - INFO - Remove client Client-worker-bd3d06ef-c678-11ec-813a-4da8199e2b97
2022-04-27T22:44:48.026572591Z stderr F distributed.scheduler - INFO - Remove client Client-worker-bd3d06ef-c678-11ec-813a-4da8199e2b97
2022-04-27T22:44:48.026823076Z stderr F distributed.scheduler - INFO - Close client connection: Client-worker-bd3d06ef-c678-11ec-813a-4da8199e2b97
2022-04-27T22:44:48.028326823Z stderr F distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://10.60.3.4:43873', name: 10, status: closing, memory: 0, processing: 0>
2022-04-27T22:44:48.02835736Z stderr F distributed.core - INFO - Removing comms to tcp://10.60.3.4:43873
2022-04-27T22:44:48.032507264Z stderr F distributed.scheduler - INFO - Remove client Client-worker-be204ef3-c678-11ec-8139-abdbfd752a70
2022-04-27T22:44:48.032798103Z stderr F distributed.scheduler - INFO - Remove client Client-worker-be204ef3-c678-11ec-8139-abdbfd752a70
2022-04-27T22:44:48.033115643Z stderr F distributed.scheduler - INFO - Close client connection: Client-worker-be204ef3-c678-11ec-8139-abdbfd752a70
2022-04-27T22:44:48.034437728Z stderr F distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://10.60.3.3:43683', name: 8, status: closing, memory: 0, processing: 0>
2022-04-27T22:44:48.034476672Z stderr F distributed.core - INFO - Removing comms to tcp://10.60.3.3:43683
2022-04-27T22:44:48.182797282Z stderr F distributed.nanny - INFO - Start Nanny at: 'tcp://10.60.1.8:39035'
2022-04-27T22:44:49.8068194Z stderr F DEBUG:pangeo_forge_recipes.recipes.xarray_zarr:Acquiring locks ['precip-time-18', 'precip-time-27']
2022-04-27T22:44:49.808386367Z stderr F DEBUG:pangeo_forge_recipes.utils:Acquiring lock pangeo-forge-precip-time-18...
2022-04-27T22:44:50.801598969Z stderr F distributed.nanny - INFO - Start Nanny at: 'tcp://10.60.4.9:41487'
2022-04-27T22:44:51.299702078Z stderr F distributed.worker - INFO - Start worker at: tcp://10.60.1.8:34461
2022-04-27T22:44:51.29977436Z stderr F distributed.worker - INFO - Listening to: tcp://10.60.1.8:34461
2022-04-27T22:44:51.319924467Z stderr F distributed.scheduler - INFO - Register worker <WorkerState 'tcp://10.60.1.8:34461', name: 19, status: undefined, memory: 0, processing: 0>
2022-04-27T22:44:51.320516901Z stderr F distributed.scheduler - INFO - Starting worker compute stream, tcp://10.60.1.8:34461
2022-04-27T22:44:53.271683712Z stderr F distributed.nanny - INFO - Start Nanny at: 'tcp://10.60.1.9:40359'
2022-04-27T22:44:54.306718319Z stderr F distributed.worker - INFO - Start worker at: tcp://10.60.4.9:46813
2022-04-27T22:44:56.497622384Z stderr F distributed.worker - INFO - Start worker at: tcp://10.60.1.9:38023
2022-04-27T22:45:35.823198775Z stderr F distributed.nanny - INFO - Start Nanny at: 'tcp://10.60.6.8:40929'
2022-04-27T22:45:38.943650054Z stderr F distributed.worker - INFO - Start worker at: tcp://10.60.6.8:43501
2022-04-27T22:45:45.823310274Z stderr F distributed.nanny - INFO - Start Nanny at: 'tcp://10.60.5.9:37235'
2022-04-27T22:45:48.875488155Z stderr F distributed.worker - INFO - Start worker at: tcp://10.60.5.9:34653
2022-04-27T23:01:05.415000428Z stderr F distributed.scheduler - INFO - -----------------------------------------------
2022-04-27T23:01:07.943307894Z stderr F distributed.scheduler - INFO - -----------------------------------------------
2022-04-27T23:01:07.945155644Z stderr F distributed.scheduler - INFO - Clear task state
2022-04-27T23:01:07.946225924Z stderr F distributed.scheduler - INFO - Scheduler at: tcp://10.60.6.11:8786
2022-04-27T23:01:07.946526384Z stderr F distributed.scheduler - INFO - dashboard at: :8787
2022-04-27T23:01:09.72626988Z stderr F distributed.scheduler - INFO - Receive client connection: Client-ed5babaa-c67d-11ec-8001-7bbe31362ddf
2022-04-27T23:01:09.733576176Z stderr F distributed.core - INFO - Starting established connection
2022-04-27T23:01:09.75323375Z stderr F /srv/conda/envs/notebook/lib/python3.9/site-packages/distributed/scheduler.py:5622: UserWarning: Scheduler already contains a plugin with name worker-status; overwriting.
2022-04-27T23:01:09.753288033Z stderr F warnings.warn(
2022-04-27T23:01:13.628090633Z stderr F distributed.scheduler - INFO - -----------------------------------------------
2022-04-27T23:01:16.02664027Z stderr F distributed.scheduler - INFO - -----------------------------------------------
2022-04-27T23:01:16.02770011Z stderr F distributed.scheduler - INFO - Clear task state
2022-04-27T23:01:16.028751782Z stderr F distributed.scheduler - INFO - Scheduler at: tcp://10.60.6.12:8786
2022-04-27T23:01:16.028839067Z stderr F distributed.scheduler - INFO - dashboard at: :8787
2022-04-27T23:01:22.489646021Z stderr F distributed.scheduler - INFO - Receive client connection: Client-f4f7495f-c67d-11ec-8001-75a5dcceec98
2022-04-27T23:01:22.496205587Z stderr F distributed.core - INFO - Starting established connection
2022-04-27T23:01:22.517181745Z stderr F /srv/conda/envs/notebook/lib/python3.9/site-packages/distributed/scheduler.py:5622: UserWarning: Scheduler already contains a plugin with name worker-status; overwriting.
2022-04-27T23:01:22.517227878Z stderr F warnings.warn(
2022-04-27T23:01:23.22398654Z stderr F distributed.scheduler - INFO - Remove client Client-ed5babaa-c67d-11ec-8001-7bbe31362ddf
2022-04-27T23:01:23.224378926Z stderr F distributed.scheduler - INFO - Close client connection: Client-ed5babaa-c67d-11ec-8001-7bbe31362ddf
2022-04-27T23:04:12.997900849Z stderr F distributed.nanny - INFO - Start Nanny at: 'tcp://10.60.3.3:34215'
2022-04-27T23:04:12.997962926Z stderr F distributed.nanny - INFO - Start Nanny at: 'tcp://10.60.3.4:40499'
2022-04-27T23:04:16.279426353Z stderr F distributed.worker - INFO - Start worker at: tcp://10.60.3.3:39839
2022-04-27T23:04:16.279658686Z stderr F distributed.worker - INFO - Listening to: tcp://10.60.3.3:39839
2022-04-27T23:04:16.268540735Z stderr F distributed.worker - INFO - Start worker at: tcp://10.60.3.4:41463
2022-04-27T23:04:16.268629602Z stderr F distributed.worker - INFO - Listening to: tcp://10.60.3.4:41463
2022-04-27T23:04:16.2803674Z stderr F distributed.worker - INFO - dashboard at: 10.60.3.3:35833
2022-04-27T23:04:16.280557022Z stderr F distributed.worker - INFO - Waiting to connect to: tcp://dask-jovyan-55f4b085-4.pangeo-forge-columbia-staging-bakery:8786
2022-04-27T23:04:16.268653528Z stderr F distributed.worker - INFO - dashboard at: 10.60.3.4:41005
2022-04-27T23:04:16.268667522Z stderr F distributed.worker - INFO - Waiting to connect to: tcp://dask-jovyan-1d96fc5b-8.pangeo-forge-columbia-staging-bakery:8786
2022-04-27T23:04:16.280735627Z stderr F distributed.worker - INFO - -------------------------------------------------
2022-04-27T23:04:16.280929662Z stderr F distributed.worker - INFO - Threads: 1
2022-04-27T23:04:16.268775787Z stderr F distributed.worker - INFO - -------------------------------------------------
2022-04-27T23:04:16.281078084Z stderr F distributed.worker - INFO - Memory: 3.90 GiB
2022-04-27T23:04:16.268806194Z stderr F distributed.worker - INFO - Threads: 1
2022-04-27T23:04:16.268862985Z stderr F distributed.worker - INFO - Memory: 3.90 GiB
2022-04-27T23:04:16.281203807Z stderr F distributed.worker - INFO - Local Directory: /home/jovyan/dask-worker-space/worker-ru02q60i
2022-04-27T23:04:16.281295429Z stderr F distributed.worker - INFO - -------------------------------------------------
2022-04-27T23:04:16.268889076Z stderr F distributed.worker - INFO - Local Directory: /home/jovyan/dask-worker-space/worker-hanog_k7
2022-04-27T23:04:16.269047117Z stderr F distributed.worker - INFO - -------------------------------------------------
2022-04-27T23:04:16.283425065Z stderr F distributed.worker - INFO - Registered to: tcp://dask-jovyan-1d96fc5b-8.pangeo-forge-columbia-staging-bakery:8786
2022-04-27T23:04:16.283541645Z stderr F distributed.worker - INFO - -------------------------------------------------
2022-04-27T23:04:16.284268069Z stderr F distributed.core - INFO - Starting established connection
2022-04-27T23:04:16.282116033Z stderr F distributed.scheduler - INFO - Register worker <WorkerState 'tcp://10.60.3.4:41463', name: 0, status: undefined, memory: 0, processing: 0>
2022-04-27T23:04:16.282672014Z stderr F distributed.scheduler - INFO - Starting worker compute stream, tcp://10.60.3.4:41463
2022-04-27T23:04:16.282719461Z stderr F distributed.core - INFO - Starting established connection
2022-04-27T23:04:16.347439029Z stderr F distributed.scheduler - INFO - Register worker <WorkerState 'tcp://10.60.3.3:39839', name: 0, status: undefined, memory: 0, processing: 0>
2022-04-27T23:04:16.347715144Z stderr F distributed.scheduler - INFO - Starting worker compute stream, tcp://10.60.3.3:39839
2022-04-27T23:04:16.347741353Z stderr F distributed.core - INFO - Starting established connection
2022-04-27T23:04:16.348529562Z stderr F distributed.worker - INFO - Registered to: tcp://dask-jovyan-55f4b085-4.pangeo-forge-columbia-staging-bakery:8786
2022-04-27T23:04:16.348714291Z stderr F distributed.worker - INFO - -------------------------------------------------
2022-04-27T23:04:16.34955268Z stderr F distributed.core - INFO - Starting established connection
2022-04-27T23:04:28.211242285Z stderr F distributed.core - INFO - Event loop was unresponsive in Worker for 11.93s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
2022-04-27T23:04:28.379362351Z stdout F [2022-04-27 23:04:28+0000] INFO - prefect.CloudTaskRunner | Task 'cache_input': Starting task run...
2022-04-27T23:04:28.954840649Z stdout F [2022-04-27 23:04:28+0000] INFO - prefect.CloudTaskRunner | Task 'cache_input': Finished task run for task with final state: 'Mapped'
2022-04-27T23:04:29.147153453Z stdout F [2022-04-27 23:04:29+0000] INFO - prefect.CloudTaskRunner | Task 'cache_input[0]': Starting task run...
2022-04-27T23:04:29.156492878Z stdout F [2022-04-27 23:04:29+0000] INFO - prefect.CloudTaskRunner | Task 'cache_input[0]': Finished task run for task with final state: 'Success'
2022-04-27T23:04:29.305667574Z stdout F [2022-04-27 23:04:29+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk': Starting task run...
2022-04-27T23:04:29.463884065Z stdout F [2022-04-27 23:04:29+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk': Finished task run for task with final state: 'Mapped'
2022-04-27T23:04:29.701069612Z stdout F [2022-04-27 23:04:29+0000] INFO - prefect.CloudTaskRunner | Task 'cache_input[9]': Starting task run...
2022-04-27T23:04:29.711186963Z stdout F [2022-04-27 23:04:29+0000] INFO - prefect.CloudTaskRunner | Task 'cache_input[9]': Finished task run for task with final state: 'Success'
2022-04-27T23:04:29.84958278Z stdout F [2022-04-27 23:04:29+0000] INFO - prefect.CloudTaskRunner | Task 'cache_input[8]': Starting task run...
2022-04-27T23:04:29.860496325Z stdout F [2022-04-27 23:04:29+0000] INFO - prefect.CloudTaskRunner | Task 'cache_input[8]': Finished task run for task with final state: 'Success'
2022-04-27T23:04:30.076659327Z stdout F [2022-04-27 23:04:30+0000] INFO - prefect.CloudTaskRunner | Task 'cache_input[7]': Starting task run...
2022-04-27T23:04:30.086478162Z stdout F [2022-04-27 23:04:30+0000] INFO - prefect.CloudTaskRunner | Task 'cache_input[7]': Finished task run for task with final state: 'Success'
2022-04-27T23:04:30.272058482Z stdout F [2022-04-27 23:04:30+0000] INFO - prefect.CloudTaskRunner | Task 'cache_input[6]': Starting task run...
2022-04-27T23:04:30.281247016Z stdout F [2022-04-27 23:04:30+0000] INFO - prefect.CloudTaskRunner | Task 'cache_input[6]': Finished task run for task with final state: 'Success'
2022-04-27T23:04:30.463885232Z stdout F [2022-04-27 23:04:30+0000] INFO - prefect.CloudTaskRunner | Task 'cache_input[5]': Starting task run...
2022-04-27T23:04:30.47339285Z stdout F [2022-04-27 23:04:30+0000] INFO - prefect.CloudTaskRunner | Task 'cache_input[5]': Finished task run for task with final state: 'Success'
2022-04-27T23:04:30.663362992Z stdout F [2022-04-27 23:04:30+0000] INFO - prefect.CloudTaskRunner | Task 'cache_input[4]': Starting task run...
2022-04-27T23:04:30.672649944Z stdout F [2022-04-27 23:04:30+0000] INFO - prefect.CloudTaskRunner | Task 'cache_input[4]': Finished task run for task with final state: 'Success'
2022-04-27T23:04:30.887956696Z stdout F [2022-04-27 23:04:30+0000] INFO - prefect.CloudTaskRunner | Task 'cache_input[3]': Starting task run...
2022-04-27T23:04:30.897484854Z stdout F [2022-04-27 23:04:30+0000] INFO - prefect.CloudTaskRunner | Task 'cache_input[3]': Finished task run for task with final state: 'Success'
2022-04-27T23:04:31.057645189Z stdout F [2022-04-27 23:04:31+0000] INFO - prefect.CloudTaskRunner | Task 'cache_input[2]': Starting task run...
2022-04-27T23:04:31.068109987Z stdout F [2022-04-27 23:04:31+0000] INFO - prefect.CloudTaskRunner | Task 'cache_input[2]': Finished task run for task with final state: 'Success'
2022-04-27T23:04:31.305934159Z stdout F [2022-04-27 23:04:31+0000] INFO - prefect.CloudTaskRunner | Task 'cache_input[1]': Starting task run...
2022-04-27T23:04:31.316459357Z stdout F [2022-04-27 23:04:31+0000] INFO - prefect.CloudTaskRunner | Task 'cache_input[1]': Finished task run for task with final state: 'Success'
2022-04-27T23:04:31.481666747Z stdout F [2022-04-27 23:04:31+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[19]': Starting task run...
2022-04-27T23:04:31.494836368Z stdout F [2022-04-27 23:04:31+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[19]': Finished task run for task with final state: 'Failed'
2022-04-27T23:04:31.70459833Z stdout F [2022-04-27 23:04:31+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[18]': Starting task run...
2022-04-27T23:04:31.715764132Z stdout F [2022-04-27 23:04:31+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[18]': Finished task run for task with final state: 'Failed'
2022-04-27T23:04:31.898206363Z stdout F [2022-04-27 23:04:31+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[17]': Starting task run...
2022-04-27T23:04:31.927234908Z stdout F [2022-04-27 23:04:31+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[17]': Finished task run for task with final state: 'Failed'
2022-04-27T23:04:32.115630708Z stdout F [2022-04-27 23:04:32+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[16]': Starting task run...
2022-04-27T23:04:32.124588659Z stdout F [2022-04-27 23:04:32+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[16]': Finished task run for task with final state: 'Failed'
2022-04-27T23:04:32.290336685Z stdout F [2022-04-27 23:04:32+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[15]': Starting task run...
2022-04-27T23:04:32.302469249Z stdout F [2022-04-27 23:04:32+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[15]': Finished task run for task with final state: 'Failed'
2022-04-27T23:04:32.477980398Z stdout F [2022-04-27 23:04:32+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[14]': Starting task run...
2022-04-27T23:04:32.490011558Z stdout F [2022-04-27 23:04:32+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[14]': Finished task run for task with final state: 'Success'
2022-04-27T23:04:32.854624207Z stdout F [2022-04-27 23:04:32+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[13]': Starting task run...
2022-04-27T23:04:32.865002424Z stdout F [2022-04-27 23:04:32+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[13]': Finished task run for task with final state: 'Success'
2022-04-27T23:04:33.094845922Z stdout F [2022-04-27 23:04:33+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[12]': Starting task run...
2022-04-27T23:04:33.106846492Z stdout F [2022-04-27 23:04:33+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[12]': Finished task run for task with final state: 'Failed'
2022-04-27T23:04:33.293714581Z stdout F [2022-04-27 23:04:33+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[11]': Starting task run...
2022-04-27T23:04:33.308491392Z stdout F [2022-04-27 23:04:33+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[11]': Finished task run for task with final state: 'Failed'
2022-04-27T23:04:33.485765232Z stdout F [2022-04-27 23:04:33+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[10]': Starting task run...
2022-04-27T23:04:33.497434356Z stdout F [2022-04-27 23:04:33+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[10]': Finished task run for task with final state: 'Failed'
2022-04-27T23:04:33.694730537Z stdout F [2022-04-27 23:04:33+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[9]': Starting task run...
2022-04-27T23:04:33.7045693Z stdout F [2022-04-27 23:04:33+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[9]': Finished task run for task with final state: 'Failed'
2022-04-27T23:04:33.891438256Z stdout F [2022-04-27 23:04:33+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[8]': Starting task run...
2022-04-27T23:04:33.904098108Z stdout F [2022-04-27 23:04:33+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[8]': Finished task run for task with final state: 'Failed'
2022-04-27T23:04:34.105124868Z stdout F [2022-04-27 23:04:34+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[7]': Starting task run...
2022-04-27T23:04:34.116811236Z stdout F [2022-04-27 23:04:34+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[7]': Finished task run for task with final state: 'Failed'
2022-04-27T23:04:34.293003583Z stdout F [2022-04-27 23:04:34+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[6]': Starting task run...
2022-04-27T23:04:34.305441138Z stdout F [2022-04-27 23:04:34+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[6]': Finished task run for task with final state: 'Failed'
2022-04-27T23:04:34.490383513Z stdout F [2022-04-27 23:04:34+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[5]': Starting task run...
2022-04-27T23:04:34.501946156Z stdout F [2022-04-27 23:04:34+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[5]': Finished task run for task with final state: 'Failed'
2022-04-27T23:04:34.518691353Z stderr F distributed.dask_worker - INFO - Exiting on signal 15
2022-04-27T23:04:34.518752306Z stderr F distributed.nanny - INFO - Closing Nanny at 'tcp://10.60.3.4:40499'
2022-04-27T23:04:34.567894317Z stderr F distributed.worker - INFO - Stopping worker at tcp://10.60.3.4:41463
2022-04-27T23:04:34.477371478Z stderr F distributed.scheduler - INFO - Retire worker names (0,)
2022-04-27T23:04:34.47769921Z stderr F distributed.scheduler - INFO - Retire workers {<WorkerState 'tcp://10.60.3.4:41463', name: 0, status: running, memory: 24, processing: 6>}
2022-04-27T23:04:34.570809343Z stderr F distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://10.60.3.4:41463', name: 0, status: running, memory: 25, processing: 5>
2022-04-27T23:04:34.57092628Z stderr F distributed.core - INFO - Removing comms to tcp://10.60.3.4:41463
2022-04-27T23:04:34.571443785Z stderr F distributed.scheduler - INFO - Lost all workers
2022-04-27T23:04:34.703889737Z stdout F [2022-04-27 23:04:34+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[4]': Starting task run...
2022-04-27T23:04:34.712915437Z stdout F [2022-04-27 23:04:34+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[4]': Finished task run for task with final state: 'Failed'
2022-04-27T23:04:36.152763134Z stderr F distributed.nanny - WARNING - Worker process still alive after 1 seconds, killing
2022-04-27T23:04:36.15436608Z stderr F distributed.dask_worker - INFO - End worker
2022-04-27T23:04:36.154908975Z stderr F distributed.process - INFO - reaping stray process <SpawnProcess name='Dask Worker process (from Nanny)' pid=314 parent=1 started daemon>
2022-04-27T23:04:38.189484044Z stderr F distributed.nanny - INFO - Start Nanny at: 'tcp://10.60.3.5:40259'
2022-04-27T23:04:41.265859939Z stderr F distributed.worker - INFO - Start worker at: tcp://10.60.3.5:38527
2022-04-27T23:04:41.265997807Z stderr F distributed.worker - INFO - Listening to: tcp://10.60.3.5:38527
2022-04-27T23:04:41.266038721Z stderr F distributed.worker - INFO - dashboard at: 10.60.3.5:35487
2022-04-27T23:04:41.26605279Z stderr F distributed.worker - INFO - Waiting to connect to: tcp://dask-jovyan-1d96fc5b-8.pangeo-forge-columbia-staging-bakery:8786
2022-04-27T23:04:41.266147574Z stderr F distributed.worker - INFO - -------------------------------------------------
2022-04-27T23:04:41.266202364Z stderr F distributed.worker - INFO - Threads: 1
2022-04-27T23:04:41.26621262Z stderr F distributed.worker - INFO - Memory: 3.90 GiB
2022-04-27T23:04:41.266229684Z stderr F distributed.worker - INFO - Local Directory: /home/jovyan/dask-worker-space/worker-e91vwou6
2022-04-27T23:04:41.266370479Z stderr F distributed.worker - INFO - -------------------------------------------------
2022-04-27T23:04:41.28085019Z stderr F distributed.worker - INFO - Registered to: tcp://dask-jovyan-1d96fc5b-8.pangeo-forge-columbia-staging-bakery:8786
2022-04-27T23:04:41.2809244Z stderr F distributed.worker - INFO - -------------------------------------------------
2022-04-27T23:04:41.281555411Z stderr F distributed.core - INFO - Starting established connection
2022-04-27T23:04:41.279510031Z stderr F distributed.scheduler - INFO - Register worker <WorkerState 'tcp://10.60.3.5:38527', name: 1, status: undefined, memory: 0, processing: 0>
2022-04-27T23:04:41.280257089Z stderr F distributed.scheduler - INFO - Starting worker compute stream, tcp://10.60.3.5:38527
2022-04-27T23:04:41.280433877Z stderr F distributed.core - INFO - Starting established connection
2022-04-27T23:04:48.992171477Z stderr F distributed.nanny - INFO - Start Nanny at: 'tcp://10.60.3.6:35749'
2022-04-27T23:04:52.366852353Z stderr F distributed.scheduler - INFO - Register worker <WorkerState 'tcp://10.60.3.6:35191', name: 2, status: undefined, memory: 0, processing: 0>
2022-04-27T23:04:52.367613508Z stderr F distributed.scheduler - INFO - Starting worker compute stream, tcp://10.60.3.6:35191
2022-04-27T23:04:52.367652225Z stderr F distributed.core - INFO - Starting established connection
2022-04-27T23:04:52.352177507Z stderr F distributed.worker - INFO - Start worker at: tcp://10.60.3.6:35191
2022-04-27T23:04:52.352480983Z stderr F distributed.worker - INFO - Listening to: tcp://10.60.3.6:35191
2022-04-27T23:04:52.352550679Z stderr F distributed.worker - INFO - dashboard at: 10.60.3.6:40241
2022-04-27T23:04:52.352592687Z stderr F distributed.worker - INFO - Waiting to connect to: tcp://dask-jovyan-1d96fc5b-8.pangeo-forge-columbia-staging-bakery:8786
2022-04-27T23:04:52.352628297Z stderr F distributed.worker - INFO - -------------------------------------------------
2022-04-27T23:04:52.352635629Z stderr F distributed.worker - INFO - Threads: 1
2022-04-27T23:04:52.352641336Z stderr F distributed.worker - INFO - Memory: 3.90 GiB
2022-04-27T23:04:52.352647562Z stderr F distributed.worker - INFO - Local Directory: /home/jovyan/dask-worker-space/worker-rxgx3viq
2022-04-27T23:04:52.352665536Z stderr F distributed.worker - INFO - -------------------------------------------------
2022-04-27T23:04:52.368068945Z stderr F distributed.worker - INFO - Registered to: tcp://dask-jovyan-1d96fc5b-8.pangeo-forge-columbia-staging-bakery:8786
2022-04-27T23:04:52.36859806Z stderr F distributed.worker - INFO - -------------------------------------------------
2022-04-27T23:04:52.369621062Z stderr F distributed.core - INFO - Starting established connection
2022-04-27T23:04:53.695199194Z stderr F distributed.core - INFO - Event loop was unresponsive in Worker for 12.41s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
2022-04-27T23:04:53.951559826Z stdout F [2022-04-27 23:04:53+0000] INFO - prefect.CloudTaskRunner | Task 'cache_input[6]': Starting task run...
2022-04-27T23:04:53.966270151Z stdout F [2022-04-27 23:04:53+0000] INFO - prefect.CloudTaskRunner | Task 'cache_input[6]': Finished task run for task with final state: 'Success'
2022-04-27T23:04:54.242967779Z stdout F [2022-04-27 23:04:54+0000] INFO - prefect.CloudTaskRunner | Task 'cache_input[5]': Starting task run...
2022-04-27T23:04:54.252966571Z stdout F [2022-04-27 23:04:54+0000] INFO - prefect.CloudTaskRunner | Task 'cache_input[5]': Finished task run for task with final state: 'Success'
2022-04-27T23:04:54.500498597Z stdout F [2022-04-27 23:04:54+0000] INFO - prefect.CloudTaskRunner | Task 'cache_input[4]': Starting task run...
2022-04-27T23:04:54.512550513Z stdout F [2022-04-27 23:04:54+0000] INFO - prefect.CloudTaskRunner | Task 'cache_input[4]': Finished task run for task with final state: 'Success'
2022-04-27T23:04:54.692974745Z stdout F [2022-04-27 23:04:54+0000] INFO - prefect.CloudTaskRunner | Task 'cache_input[2]': Starting task run...
2022-04-27T23:04:54.705032931Z stdout F [2022-04-27 23:04:54+0000] INFO - prefect.CloudTaskRunner | Task 'cache_input[2]': Finished task run for task with final state: 'Success'
2022-04-27T23:04:54.876862806Z stdout F [2022-04-27 23:04:54+0000] INFO - prefect.CloudTaskRunner | Task 'cache_input[9]': Starting task run...
2022-04-27T23:04:54.894086857Z stdout F [2022-04-27 23:04:54+0000] INFO - prefect.CloudTaskRunner | Task 'cache_input[9]': Finished task run for task with final state: 'Success'
2022-04-27T23:04:55.075468906Z stdout F [2022-04-27 23:04:55+0000] INFO - prefect.CloudTaskRunner | Task 'cache_input[8]': Starting task run...
2022-04-27T23:04:55.085123184Z stdout F [2022-04-27 23:04:55+0000] INFO - prefect.CloudTaskRunner | Task 'cache_input[8]': Finished task run for task with final state: 'Success'
2022-04-27T23:04:55.584898146Z stdout F [2022-04-27 23:04:55+0000] INFO - prefect.CloudTaskRunner | Task 'cache_input[0]': Starting task run...
2022-04-27T23:04:55.599244021Z stdout F [2022-04-27 23:04:55+0000] INFO - prefect.CloudTaskRunner | Task 'cache_input[0]': Finished task run for task with final state: 'Success'
2022-04-27T23:04:56.39194743Z stdout F [2022-04-27 23:04:56+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[3]': Starting task run...
2022-04-27T23:04:56.404248422Z stdout F [2022-04-27 23:04:56+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[3]': Finished task run for task with final state: 'Failed'
2022-04-27T23:04:56.58884563Z stdout F [2022-04-27 23:04:56+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[13]': Starting task run...
2022-04-27T23:04:56.600950217Z stdout F [2022-04-27 23:04:56+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[13]': Finished task run for task with final state: 'Success'
2022-04-27T23:04:56.81909706Z stdout F [2022-04-27 23:04:56+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[6]': Starting task run...
2022-04-27T23:04:56.831014797Z stdout F [2022-04-27 23:04:56+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[6]': Finished task run for task with final state: 'Failed'
2022-04-27T23:04:56.97566149Z stdout F [2022-04-27 23:04:56+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[4]': Starting task run...
2022-04-27T23:04:56.99237294Z stdout F [2022-04-27 23:04:56+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[4]': Finished task run for task with final state: 'Failed'
2022-04-27T23:04:57.195445814Z stdout F [2022-04-27 23:04:57+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[18]': Starting task run...
2022-04-27T23:04:57.205306026Z stdout F [2022-04-27 23:04:57+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[18]': Finished task run for task with final state: 'Failed'
2022-04-27T23:04:57.43074292Z stdout F [2022-04-27 23:04:57+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[12]': Starting task run...
2022-04-27T23:04:57.441548576Z stdout F [2022-04-27 23:04:57+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[12]': Finished task run for task with final state: 'Failed'
2022-04-27T23:04:57.64688236Z stdout F [2022-04-27 23:04:57+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[1]': Starting task run...
2022-04-27T23:04:57.657424296Z stdout F [2022-04-27 23:04:57+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[1]': Finished task run for task with final state: 'Failed'
2022-04-27T23:04:57.861761764Z stdout F [2022-04-27 23:04:57+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[14]': Starting task run...
2022-04-27T23:04:57.871636809Z stdout F [2022-04-27 23:04:57+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[14]': Finished task run for task with final state: 'Success'
2022-04-27T23:04:58.131850483Z stdout F [2022-04-27 23:04:58+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[11]': Starting task run...
2022-04-27T23:04:58.143687392Z stdout F [2022-04-27 23:04:58+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[11]': Finished task run for task with final state: 'Failed'
2022-04-27T23:04:59.47346916Z stderr F distributed.scheduler - INFO - Retire worker names (1,)
2022-04-27T23:04:59.473792474Z stderr F distributed.scheduler - INFO - Retire workers {<WorkerState 'tcp://10.60.3.5:38527', name: 1, status: running, memory: 16, processing: 0>}
2022-04-27T23:04:59.474148442Z stderr F distributed.scheduler - INFO - Moving 16 keys to other workers
2022-04-27T23:05:05.747519313Z stderr F distributed.core - INFO - Event loop was unresponsive in Worker for 12.00s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
2022-04-27T23:05:05.849674323Z stderr F distributed.worker - INFO - Stopping worker at tcp://10.60.3.5:38527
2022-04-27T23:05:05.846921832Z stderr F distributed.scheduler - INFO - Closing worker tcp://10.60.3.5:38527
2022-04-27T23:05:05.847305278Z stderr F distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://10.60.3.5:38527', name: 1, status: running, memory: 16, processing: 6>
2022-04-27T23:05:05.847450727Z stderr F distributed.core - INFO - Removing comms to tcp://10.60.3.5:38527
2022-04-27T23:05:05.896039623Z stderr F distributed.dask_worker - INFO - Exiting on signal 15
2022-04-27T23:05:05.896455184Z stderr F distributed.nanny - INFO - Closing Nanny at 'tcp://10.60.3.5:40259'
2022-04-27T23:05:06.187490342Z stdout F [2022-04-27 23:05:06+0000] INFO - prefect.CloudTaskRunner | Task 'cache_input[1]': Starting task run...
2022-04-27T23:05:06.200750592Z stdout F [2022-04-27 23:05:06+0000] INFO - prefect.CloudTaskRunner | Task 'cache_input[1]': Finished task run for task with final state: 'Success'
2022-04-27T23:05:06.206063389Z stdout F [2022-04-27 23:05:06+0000] INFO - prefect.CloudTaskRunner | Task 'cache_input[7]': Starting task run...
2022-04-27T23:05:06.217665057Z stdout F [2022-04-27 23:05:06+0000] INFO - prefect.CloudTaskRunner | Task 'cache_input[7]': Finished task run for task with final state: 'Success'
2022-04-27T23:05:06.219958013Z stderr F distributed.nanny - INFO - Worker closed
2022-04-27T23:05:06.37380382Z stdout F [2022-04-27 23:05:06+0000] INFO - prefect.CloudTaskRunner | Task 'cache_input[7]': Starting task run...
2022-04-27T23:05:06.384454048Z stdout F [2022-04-27 23:05:06+0000] INFO - prefect.CloudTaskRunner | Task 'cache_input[7]': Finished task run for task with final state: 'Success'
2022-04-27T23:05:06.503696516Z stderr F distributed.dask_worker - INFO - Exiting on signal 15
2022-04-27T23:05:06.504389518Z stderr F distributed.nanny - INFO - Closing Nanny at 'tcp://10.60.3.6:35749'
2022-04-27T23:05:06.5061791Z stderr F distributed.worker - INFO - Stopping worker at tcp://10.60.3.6:35191
2022-04-27T23:05:06.567843534Z stdout F [2022-04-27 23:05:06+0000] INFO - prefect.CloudTaskRunner | Task 'cache_input[3]': Starting task run...
2022-04-27T23:05:06.586920499Z stdout F [2022-04-27 23:05:06+0000] INFO - prefect.CloudTaskRunner | Task 'cache_input[3]': Finished task run for task with final state: 'Success'
2022-04-27T23:05:06.472790573Z stderr F distributed.scheduler - INFO - Retire worker names (2,)
2022-04-27T23:05:06.473381112Z stderr F distributed.scheduler - INFO - Retire workers {<WorkerState 'tcp://10.60.3.6:35191', name: 2, status: running, memory: 18, processing: 12>}
2022-04-27T23:05:06.507364446Z stderr F distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://10.60.3.6:35191', name: 2, status: running, memory: 18, processing: 12>
2022-04-27T23:05:06.507683119Z stderr F distributed.core - INFO - Removing comms to tcp://10.60.3.6:35191
2022-04-27T23:05:06.508401562Z stderr F distributed.scheduler - INFO - Lost all workers
2022-04-27T23:05:07.497892782Z stderr F distributed.nanny - WARNING - Worker process still alive after 1 seconds, killing
2022-04-27T23:05:07.544688458Z stderr F distributed.dask_worker - INFO - End worker
2022-04-27T23:05:07.545439052Z stderr F distributed.process - INFO - reaping stray process <SpawnProcess name='Dask Worker process (from Nanny)' pid=314 parent=1 started daemon>
2022-04-27T23:05:08.044600747Z stderr F distributed.dask_worker - INFO - End worker
2022-04-27T23:05:15.881757804Z stderr F distributed.nanny - INFO - Start Nanny at: 'tcp://10.60.3.7:42961'
2022-04-27T23:05:18.986473425Z stderr F distributed.worker - INFO - Start worker at: tcp://10.60.3.7:37215
2022-04-27T23:05:18.986530291Z stderr F distributed.worker - INFO - Listening to: tcp://10.60.3.7:37215
2022-04-27T23:05:18.986540543Z stderr F distributed.worker - INFO - dashboard at: 10.60.3.7:44031
2022-04-27T23:05:18.986592373Z stderr F distributed.worker - INFO - Waiting to connect to: tcp://dask-jovyan-1d96fc5b-8.pangeo-forge-columbia-staging-bakery:8786
2022-04-27T23:05:18.986603599Z stderr F distributed.worker - INFO - -------------------------------------------------
2022-04-27T23:05:18.986621819Z stderr F distributed.worker - INFO - Threads: 1
2022-04-27T23:05:18.986667334Z stderr F distributed.worker - INFO - Memory: 3.90 GiB
2022-04-27T23:05:18.986740388Z stderr F distributed.worker - INFO - Local Directory: /home/jovyan/dask-worker-space/worker-0yy_j_kz
2022-04-27T23:05:18.986760619Z stderr F distributed.worker - INFO - -------------------------------------------------
2022-04-27T23:05:19.054675512Z stderr F distributed.worker - INFO - Registered to: tcp://dask-jovyan-1d96fc5b-8.pangeo-forge-columbia-staging-bakery:8786
2022-04-27T23:05:19.054750611Z stderr F distributed.worker - INFO - -------------------------------------------------
2022-04-27T23:05:19.055366169Z stderr F distributed.core - INFO - Starting established connection
2022-04-27T23:05:19.053385872Z stderr F distributed.scheduler - INFO - Register worker <WorkerState 'tcp://10.60.3.7:37215', name: 3, status: undefined, memory: 0, processing: 0>
2022-04-27T23:05:19.053971296Z stderr F distributed.scheduler - INFO - Starting worker compute stream, tcp://10.60.3.7:37215
2022-04-27T23:05:19.054044147Z stderr F distributed.core - INFO - Starting established connection
2022-04-27T23:05:27.098111488Z stderr F distributed.nanny - INFO - Start Nanny at: 'tcp://10.60.3.8:39973'
2022-04-27T23:05:30.482751399Z stderr F distributed.worker - INFO - Start worker at: tcp://10.60.3.8:42335
2022-04-27T23:05:30.483792493Z stderr F distributed.worker - INFO - Listening to: tcp://10.60.3.8:42335
2022-04-27T23:05:30.484044805Z stderr F distributed.worker - INFO - dashboard at: 10.60.3.8:36593
2022-04-27T23:05:30.484150137Z stderr F distributed.worker - INFO - Waiting to connect to: tcp://dask-jovyan-1d96fc5b-8.pangeo-forge-columbia-staging-bakery:8786
2022-04-27T23:05:30.484263505Z stderr F distributed.worker - INFO - -------------------------------------------------
2022-04-27T23:05:30.484312705Z stderr F distributed.worker - INFO - Threads: 1
2022-04-27T23:05:30.484388783Z stderr F distributed.worker - INFO - Memory: 3.90 GiB
2022-04-27T23:05:30.484399253Z stderr F distributed.worker - INFO - Local Directory: /home/jovyan/dask-worker-space/worker-jx340akt
2022-04-27T23:05:30.484414552Z stderr F distributed.worker - INFO - -------------------------------------------------
2022-04-27T23:05:30.549101206Z stderr F distributed.worker - INFO - Registered to: tcp://dask-jovyan-1d96fc5b-8.pangeo-forge-columbia-staging-bakery:8786
2022-04-27T23:05:30.549160349Z stderr F distributed.worker - INFO - -------------------------------------------------
2022-04-27T23:05:30.550015478Z stderr F distributed.core - INFO - Starting established connection
2022-04-27T23:05:30.547514146Z stderr F distributed.scheduler - INFO - Register worker <WorkerState 'tcp://10.60.3.8:42335', name: 4, status: undefined, memory: 0, processing: 0>
2022-04-27T23:05:30.548193154Z stderr F distributed.scheduler - INFO - Starting worker compute stream, tcp://10.60.3.8:42335
2022-04-27T23:05:30.549124597Z stderr F distributed.core - INFO - Starting established connection
2022-04-27T23:05:31.800469147Z stderr F distributed.core - INFO - Event loop was unresponsive in Worker for 12.75s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
2022-04-27T23:05:31.988008471Z stdout F [2022-04-27 23:05:31+0000] INFO - prefect.CloudTaskRunner | Task 'cache_input[6]': Starting task run...
2022-04-27T23:05:32.001283466Z stdout F [2022-04-27 23:05:32+0000] INFO - prefect.CloudTaskRunner | Task 'cache_input[6]': Finished task run for task with final state: 'Success'
2022-04-27T23:05:32.196465357Z stdout F [2022-04-27 23:05:32+0000] INFO - prefect.CloudTaskRunner | Task 'cache_input[4]': Starting task run...
2022-04-27T23:05:32.206859912Z stdout F [2022-04-27 23:05:32+0000] INFO - prefect.CloudTaskRunner | Task 'cache_input[4]': Finished task run for task with final state: 'Success'
2022-04-27T23:05:32.367500995Z stdout F [2022-04-27 23:05:32+0000] INFO - prefect.CloudTaskRunner | Task 'cache_input[5]': Starting task run...
2022-04-27T23:05:32.37813972Z stdout F [2022-04-27 23:05:32+0000] INFO - prefect.CloudTaskRunner | Task 'cache_input[5]': Finished task run for task with final state: 'Success'
2022-04-27T23:05:32.59324114Z stdout F [2022-04-27 23:05:32+0000] INFO - prefect.CloudTaskRunner | Task 'cache_input[2]': Starting task run...
2022-04-27T23:05:32.602484083Z stdout F [2022-04-27 23:05:32+0000] INFO - prefect.CloudTaskRunner | Task 'cache_input[2]': Finished task run for task with final state: 'Success'
2022-04-27T23:05:32.917540483Z stdout F [2022-04-27 23:05:32+0000] INFO - prefect.CloudTaskRunner | Task 'cache_input[9]': Starting task run...
2022-04-27T23:05:32.927264927Z stdout F [2022-04-27 23:05:32+0000] INFO - prefect.CloudTaskRunner | Task 'cache_input[9]': Finished task run for task with final state: 'Success'
2022-04-27T23:05:33.130648831Z stdout F [2022-04-27 23:05:33+0000] INFO - prefect.CloudTaskRunner | Task 'cache_input[8]': Starting task run...
2022-04-27T23:05:33.141191965Z stdout F [2022-04-27 23:05:33+0000] INFO - prefect.CloudTaskRunner | Task 'cache_input[8]': Finished task run for task with final state: 'Success'
2022-04-27T23:05:33.307993418Z stdout F [2022-04-27 23:05:33+0000] INFO - prefect.CloudTaskRunner | Task 'cache_input[0]': Starting task run...
2022-04-27T23:05:33.317928259Z stdout F [2022-04-27 23:05:33+0000] INFO - prefect.CloudTaskRunner | Task 'cache_input[0]': Finished task run for task with final state: 'Success'
2022-04-27T23:05:33.576681155Z stdout F [2022-04-27 23:05:33+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[3]': Starting task run...
2022-04-27T23:05:33.587120994Z stdout F [2022-04-27 23:05:33+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[3]': Finished task run for task with final state: 'Failed'
2022-04-27T23:05:33.766714772Z stdout F [2022-04-27 23:05:33+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[13]': Starting task run...
2022-04-27T23:05:33.778192369Z stdout F [2022-04-27 23:05:33+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[13]': Finished task run for task with final state: 'Success'
2022-04-27T23:05:33.998509632Z stdout F [2022-04-27 23:05:33+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[6]': Starting task run...
2022-04-27T23:05:34.008958604Z stdout F [2022-04-27 23:05:34+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[6]': Finished task run for task with final state: 'Failed'
2022-04-27T23:05:34.187000289Z stdout F [2022-04-27 23:05:34+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[4]': Starting task run...
2022-04-27T23:05:34.199091864Z stdout F [2022-04-27 23:05:34+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[4]': Finished task run for task with final state: 'Failed'
2022-04-27T23:05:34.394952953Z stdout F [2022-04-27 23:05:34+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[18]': Starting task run...
2022-04-27T23:05:34.404231064Z stdout F [2022-04-27 23:05:34+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[18]': Finished task run for task with final state: 'Failed'
2022-04-27T23:05:34.576713582Z stdout F [2022-04-27 23:05:34+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[1]': Starting task run...
2022-04-27T23:05:34.589025703Z stdout F [2022-04-27 23:05:34+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[1]': Finished task run for task with final state: 'Failed'
2022-04-27T23:05:34.790076598Z stdout F [2022-04-27 23:05:34+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[14]': Starting task run...
2022-04-27T23:05:34.800523191Z stdout F [2022-04-27 23:05:34+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[14]': Finished task run for task with final state: 'Success'
2022-04-27T23:05:34.94874722Z stdout F [2022-04-27 23:05:34+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[12]': Starting task run...
2022-04-27T23:05:34.958843847Z stdout F [2022-04-27 23:05:34+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[12]': Finished task run for task with final state: 'Failed'
2022-04-27T23:05:35.166743777Z stdout F [2022-04-27 23:05:35+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[11]': Starting task run...
2022-04-27T23:05:35.17910924Z stdout F [2022-04-27 23:05:35+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[11]': Finished task run for task with final state: 'Failed'
2022-04-27T23:05:37.473944813Z stderr F distributed.scheduler - INFO - Retire worker names (3,)
2022-04-27T23:05:37.474022548Z stderr F distributed.scheduler - INFO - Retire workers {<WorkerState 'tcp://10.60.3.7:37215', name: 3, status: running, memory: 16, processing: 0>}
2022-04-27T23:05:37.47410966Z stderr F distributed.scheduler - INFO - Moving 16 keys to other workers
2022-04-27T23:05:43.980310292Z stderr F distributed.core - INFO - Event loop was unresponsive in Worker for 12.13s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
2022-04-27T23:05:44.091161822Z stderr F distributed.worker - INFO - Stopping worker at tcp://10.60.3.7:37215
2022-04-27T23:05:44.09000644Z stderr F distributed.scheduler - INFO - Closing worker tcp://10.60.3.7:37215
2022-04-27T23:05:44.090080339Z stderr F distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://10.60.3.7:37215', name: 3, status: running, memory: 16, processing: 6>
2022-04-27T23:05:44.090097837Z stderr F distributed.core - INFO - Removing comms to tcp://10.60.3.7:37215
2022-04-27T23:05:44.164374192Z stdout F [2022-04-27 23:05:44+0000] INFO - prefect.CloudTaskRunner | Task 'cache_input[1]': Starting task run...
2022-04-27T23:05:44.175399778Z stdout F [2022-04-27 23:05:44+0000] INFO - prefect.CloudTaskRunner | Task 'cache_input[1]': Finished task run for task with final state: 'Success'
2022-04-27T23:05:44.144575662Z stderr F distributed.dask_worker - INFO - Exiting on signal 15
2022-04-27T23:05:44.14577034Z stderr F distributed.nanny - INFO - Closing Nanny at 'tcp://10.60.3.7:42961'
2022-04-27T23:05:44.23767061Z stdout F [2022-04-27 23:05:44+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[19]': Starting task run...
2022-04-27T23:05:44.247141256Z stdout F [2022-04-27 23:05:44+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[19]': Finished task run for task with final state: 'Failed'
2022-04-27T23:05:44.249762925Z stderr F distributed.nanny - INFO - Worker closed
2022-04-27T23:05:44.376554326Z stdout F [2022-04-27 23:05:44+0000] INFO - prefect.CloudTaskRunner | Task 'cache_input[7]': Starting task run...
2022-04-27T23:05:44.388551077Z stdout F [2022-04-27 23:05:44+0000] INFO - prefect.CloudTaskRunner | Task 'cache_input[7]': Finished task run for task with final state: 'Success'
2022-04-27T23:05:44.577438635Z stdout F [2022-04-27 23:05:44+0000] INFO - prefect.CloudTaskRunner | Task 'cache_input[3]': Starting task run...
2022-04-27T23:05:44.587665389Z stdout F [2022-04-27 23:05:44+0000] INFO - prefect.CloudTaskRunner | Task 'cache_input[3]': Finished task run for task with final state: 'Success'
2022-04-27T23:05:44.783351815Z stdout F [2022-04-27 23:05:44+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[19]': Starting task run...
2022-04-27T23:05:44.796648823Z stdout F [2022-04-27 23:05:44+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[19]': Finished task run for task with final state: 'Failed'
2022-04-27T23:05:45.008352791Z stdout F [2022-04-27 23:05:45+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[8]': Starting task run...
2022-04-27T23:05:45.017704311Z stdout F [2022-04-27 23:05:45+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[8]': Finished task run for task with final state: 'Failed'
2022-04-27T23:05:45.209022109Z stdout F [2022-04-27 23:05:45+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[16]': Starting task run...
2022-04-27T23:05:45.219058336Z stdout F [2022-04-27 23:05:45+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[16]': Finished task run for task with final state: 'Failed'
2022-04-27T23:05:45.358873409Z stdout F [2022-04-27 23:05:45+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[9]': Starting task run...
2022-04-27T23:05:45.368549581Z stdout F [2022-04-27 23:05:45+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[9]': Finished task run for task with final state: 'Failed'
2022-04-27T23:05:45.604019289Z stdout F [2022-04-27 23:05:45+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[17]': Starting task run...
2022-04-27T23:05:45.613014424Z stdout F [2022-04-27 23:05:45+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[17]': Finished task run for task with final state: 'Failed'
2022-04-27T23:05:45.785059623Z stdout F [2022-04-27 23:05:45+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[7]': Starting task run...
2022-04-27T23:05:45.796082485Z stdout F [2022-04-27 23:05:45+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[7]': Finished task run for task with final state: 'Failed'
2022-04-27T23:05:45.694676266Z stderr F distributed.dask_worker - INFO - End worker
2022-04-27T23:05:46.001895333Z stdout F [2022-04-27 23:05:46+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[2]': Starting task run...
2022-04-27T23:05:46.011580705Z stdout F [2022-04-27 23:05:46+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[2]': Finished task run for task with final state: 'Failed'
2022-04-27T23:05:46.251024424Z stdout F [2022-04-27 23:05:46+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[5]': Starting task run...
2022-04-27T23:05:46.260954578Z stdout F [2022-04-27 23:05:46+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[5]': Finished task run for task with final state: 'Failed'
2022-04-27T23:05:46.458115026Z stdout F [2022-04-27 23:05:46+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[15]': Starting task run...
2022-04-27T23:05:46.468074197Z stdout F [2022-04-27 23:05:46+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[15]': Finished task run for task with final state: 'Failed'
2022-04-27T23:05:46.746895265Z stdout F [2022-04-27 23:05:46+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[10]': Starting task run...
2022-04-27T23:05:46.758187683Z stdout F [2022-04-27 23:05:46+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[10]': Finished task run for task with final state: 'Failed'
2022-04-27T23:05:46.937229201Z stdout F [2022-04-27 23:05:46+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[0]': Starting task run...
2022-04-27T23:05:46.946551868Z stdout F [2022-04-27 23:05:46+0000] INFO - prefect.CloudTaskRunner | Task 'store_chunk[0]': Finished task run for task with final state: 'Failed'
2022-04-27T23:05:47.115945016Z stdout F [2022-04-27 23:05:47+0000] INFO - prefect.CloudTaskRunner | Task 'finalize_target': Starting task run...
2022-04-27T23:05:47.263463213Z stdout F [2022-04-27 23:05:47+0000] INFO - prefect.CloudTaskRunner | Task 'finalize_target': Finished task run for task with final state: 'TriggerFailed'
2022-04-27T23:05:47.302152916Z stderr F distributed.dask_worker - INFO - Exiting on signal 15
2022-04-27T23:05:47.302619923Z stderr F distributed.nanny - INFO - Closing Nanny at 'tcp://10.60.3.8:39973'
2022-04-27T23:05:47.3052676Z stderr F distributed.worker - INFO - Stopping worker at tcp://10.60.3.8:42335
2022-04-27T23:05:47.270667479Z stderr F distributed.scheduler - INFO - Remove client Client-f4f7495f-c67d-11ec-8001-75a5dcceec98
2022-04-27T23:05:47.271603037Z stderr F distributed.scheduler - INFO - Remove client Client-f4f7495f-c67d-11ec-8001-75a5dcceec98
2022-04-27T23:05:47.271914114Z stderr F distributed.scheduler - INFO - Close client connection: Client-f4f7495f-c67d-11ec-8001-75a5dcceec98
2022-04-27T23:05:47.305653326Z stderr F distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://10.60.3.8:42335', name: 4, status: running, memory: 0, processing: 0>
2022-04-27T23:05:47.305818641Z stderr F distributed.core - INFO - Removing comms to tcp://10.60.3.8:42335
2022-04-27T23:05:47.305994225Z stderr F distributed.scheduler - INFO - Lost all workers
2022-04-27T23:05:47.408926327Z stderr F distributed.scheduler - INFO - End scheduler at 'tcp://10.60.6.12:8786'
2022-04-27T23:05:47.408984162Z stderr F Traceback (most recent call last):
2022-04-27T23:05:47.409002473Z stderr F File "/srv/conda/envs/notebook/bin/dask-scheduler", line 11, in <module>
2022-04-27T23:05:47.409012056Z stderr F sys.exit(go())
2022-04-27T23:05:47.40901963Z stderr F File "/srv/conda/envs/notebook/lib/python3.9/site-packages/distributed/cli/dask_scheduler.py", line 217, in go
2022-04-27T23:05:47.409029044Z stderr F main()
2022-04-27T23:05:47.409038119Z stderr F File "/srv/conda/envs/notebook/lib/python3.9/site-packages/click/core.py", line 829, in __call__
2022-04-27T23:05:47.409046175Z stderr F return self.main(*args, **kwargs)
2022-04-27T23:05:47.40905424Z stderr F File "/srv/conda/envs/notebook/lib/python3.9/site-packages/click/core.py", line 782, in main
2022-04-27T23:05:47.409062097Z stderr F rv = self.invoke(ctx)
2022-04-27T23:05:47.40907026Z stderr F File "/srv/conda/envs/notebook/lib/python3.9/site-packages/click/core.py", line 1066, in invoke
2022-04-27T23:05:47.409077556Z stderr F return ctx.invoke(self.callback, **ctx.params)
2022-04-27T23:05:47.409086302Z stderr F File "/srv/conda/envs/notebook/lib/python3.9/site-packages/click/core.py", line 610, in invoke
2022-04-27T23:05:47.409094016Z stderr F return callback(*args, **kwargs)
2022-04-27T23:05:47.409100889Z stderr F File "/srv/conda/envs/notebook/lib/python3.9/site-packages/distributed/cli/dask_scheduler.py", line 208, in main
2022-04-27T23:05:47.409130566Z stderr F loop.run_sync(run)
2022-04-27T23:05:47.409138041Z stderr F File "/srv/conda/envs/notebook/lib/python3.9/site-packages/tornado/ioloop.py", line 529, in run_sync
2022-04-27T23:05:47.409145276Z stderr F raise TimeoutError("Operation timed out after %s seconds" % timeout)
2022-04-27T23:05:47.409152953Z stderr F tornado.util.TimeoutError: Operation timed out after None seconds
2022-04-27T23:05:48.796589007Z stderr F distributed.dask_worker - INFO - End worker
2022-04-27T23:25:56.035234227Z stderr F distributed.scheduler - INFO - Remove client Client-worker-8cbd741a-c678-11ec-813a-934b7114575b
2022-04-27T23:25:56.035696386Z stderr F distributed.scheduler - INFO - Remove client Client-worker-8cbd741a-c678-11ec-813a-934b7114575b
2022-04-27T23:25:56.035951068Z stderr F distributed.scheduler - INFO - Close client connection: Client-worker-8cbd741a-c678-11ec-813a-934b7114575b
2022-04-27T23:25:56.037724878Z stderr F distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://10.60.2.10:43501', name: 12, status: closing, memory: 0, processing: 0>
2022-04-27T23:25:56.037781881Z stderr F distributed.core - INFO - Removing comms to tcp://10.60.2.10:43501
2022-04-27T23:25:56.053057938Z stderr F distributed.scheduler - INFO - Remove client Client-worker-a6b2947f-c678-11ec-8150-6743de59de28
2022-04-27T23:25:56.053560161Z stderr F distributed.scheduler - INFO - Remove client Client-worker-a6b2947f-c678-11ec-8150-6743de59de28
2022-04-27T23:25:56.053789232Z stderr F distributed.scheduler - INFO - Close client connection: Client-worker-a6b2947f-c678-11ec-8150-6743de59de28
2022-04-27T23:25:56.054810674Z stderr F distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://10.60.2.11:33035', name: 16, status: closing, memory: 0, processing: 0>
2022-04-27T23:25:56.054841914Z stderr F distributed.core - INFO - Removing comms to tcp://10.60.2.11:33035
2022-04-27T23:25:56.063126128Z stderr F distributed.scheduler - INFO - Remove client Client-worker-ad606646-c678-11ec-8164-81ea52694ff4
2022-04-27T23:25:56.063520244Z stderr F distributed.scheduler - INFO - Remove client Client-worker-ad606646-c678-11ec-8164-81ea52694ff4
2022-04-27T23:25:56.063743175Z stderr F distributed.scheduler - INFO - Close client connection: Client-worker-ad606646-c678-11ec-8164-81ea52694ff4
2022-04-27T23:25:56.065390729Z stderr F distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://10.60.2.9:33071', name: 7, status: closing, memory: 0, processing: 0>
2022-04-27T23:25:56.065427087Z stderr F distributed.core - INFO - Removing comms to tcp://10.60.2.9:33071
2022-04-28T15:12:16.753871368Z stderr F distributed.worker - WARNING - Heartbeat to scheduler failed
2022-04-28T15:12:16.754203605Z stderr F Traceback (most recent call last):
2022-04-28T15:12:16.754358991Z stderr F File "/srv/conda/envs/notebook/lib/python3.9/site-packages/tornado/iostream.py", line 867, in _read_to_buffer
2022-04-28T15:12:16.754385855Z stderr F bytes_read = self.read_from_fd(buf)
2022-04-28T15:12:16.754396256Z stderr F File "/srv/conda/envs/notebook/lib/python3.9/site-packages/tornado/iostream.py", line 1140, in read_from_fd
2022-04-28T15:12:16.754405052Z stderr F return self.socket.recv_into(buf, len(buf))
2022-04-28T15:12:16.754415976Z stderr F ConnectionResetError: [Errno 104] Connection reset by peer
2022-04-28T15:12:16.754423215Z stderr F
2022-04-28T15:12:16.754430247Z stderr F The above exception was the direct cause of the following exception:
2022-04-28T15:12:16.75443753Z stderr F
2022-04-28T15:12:16.754444891Z stderr F Traceback (most recent call last):
2022-04-28T15:12:16.754452659Z stderr F File "/srv/conda/envs/notebook/lib/python3.9/site-packages/distributed/worker.py", line 1260, in heartbeat
2022-04-28T15:12:16.754460532Z stderr F response = await retry_operation(
2022-04-28T15:12:16.754469646Z stderr F File "/srv/conda/envs/notebook/lib/python3.9/site-packages/distributed/utils_comm.py", line 385, in retry_operation
2022-04-28T15:12:16.754477714Z stderr F return await retry(
2022-04-28T15:12:16.754486014Z stderr F File "/srv/conda/envs/notebook/lib/python3.9/site-packages/distributed/utils_comm.py", line 370, in retry
2022-04-28T15:12:16.754493592Z stderr F return await coro()
2022-04-28T15:12:16.754550802Z stderr F File "/srv/conda/envs/notebook/lib/python3.9/site-packages/distributed/core.py", line 895, in send_recv_from_rpc
2022-04-28T15:12:16.75456967Z stderr F result = await send_recv(comm=comm, op=key, **kwargs)
2022-04-28T15:12:16.754580449Z stderr F File "/srv/conda/envs/notebook/lib/python3.9/site-packages/distributed/core.py", line 672, in send_recv
2022-04-28T15:12:16.754587745Z stderr F response = await comm.read(deserializers=deserializers)
2022-04-28T15:12:16.754595569Z stderr F File "/srv/conda/envs/notebook/lib/python3.9/site-packages/distributed/comm/tcp.py", line 221, in read
2022-04-28T15:12:16.754604446Z stderr F convert_stream_closed_error(self, e)
2022-04-28T15:12:16.754612262Z stderr F File "/srv/conda/envs/notebook/lib/python3.9/site-packages/distributed/comm/tcp.py", line 126, in convert_stream_closed_error
2022-04-28T15:12:16.754653357Z stderr F raise CommClosedError(f"in {obj}: {exc.__class__.__name__}: {exc}") from exc
2022-04-28T15:12:16.754700339Z stderr F distributed.comm.core.CommClosedError: in <TCP (closed) ConnectionPool.heartbeat_worker local=tcp://10.60.3.3:60516 remote=tcp://dask-jovyan-55f4b085-4.pangeo-forge-columbia-staging-bakery:8786>: ConnectionResetError: [Errno 104] Connection reset by peer
2022-04-28T15:12:16.754741183Z stderr F distributed.worker - INFO - Connection to scheduler broken. Reconnecting...
2022-04-28T15:12:46.756062431Z stderr F distributed.worker - INFO - Stopping worker at tcp://10.60.3.3:39839
2022-04-28T15:12:46.761116437Z stderr F distributed.nanny - INFO - Worker closed
2022-04-28T15:12:47.370222429Z stderr F distributed.nanny - INFO - Closing Nanny at 'tcp://10.60.3.3:34215'
2022-04-28T15:12:47.37144305Z stderr F distributed.dask_worker - INFO - End worker