import typing
import inspect


def _extract_attributes(bases, attrs):
    arg_fields = {}
    kwarg_fields = {}
    existing_slots = set()
    # Walk up the bases, validating and merging defaults
In [3]: from typing import NamedTuple

In [4]: from quickle import Struct

In [5]: from dataclasses import dataclass

In [6]: class PointTuple(NamedTuple):
   ...:     x: int
   ...:     y: int
   ...:
from prefect import Flow
from prefect.environments.execution import KubernetesJobEnvironment
from prefect.environments.storage import Docker

with Flow("kubernetes-example") as flow:
    # Add tasks to flow here...

# Run on Kubernetes using a custom job specification.
# This was needed to do even simple things like increase
# the job resource limits
flow.environment = KubernetesJobEnvironment(job_spec_file="job_spec.yaml")
from prefect import Flow
from prefect.run_configs import KubernetesRun
from prefect.storage import Docker

with Flow("kubernetes-example") as flow:
    # Add tasks to flow here...

# Run on Kubernetes with a custom resource configuration
flow.run_config = KubernetesRun(cpu_request=2, memory_request="4Gi")

# Store the flow in a docker image
flow.storage = Docker()
from prefect import Flow
from prefect.executors import DaskExecutor

with Flow("daskcloudprovider-example") as flow:
    # Add tasks to flow here...

# Execute this flow on a Dask cluster deployed on AWS Fargate
flow.executor = DaskExecutor(
    cluster_class="dask_cloudprovider.aws.FargateCluster",
    cluster_kwargs={"image": "prefecthq/prefect", "n_workers": 5},
)
from prefect import Flow
from prefect.storage import S3
from prefect.run_configs import ECSRun
from prefect.executors import DaskExecutor

with Flow("example") as flow:
    ...

flow.storage = S3("my-flows")
flow.run_config = ECSRun()  # Run job on ECS instead of locally
flow.executor = DaskExecutor()  # Run tasks on a temporary dask cluster
""" | |
For workloads where most of the grunt work is *driven* by prefect, but done | |
using some external system like dask, it makes more sense to use Prefect to | |
drive Dask rather than running Prefect inside Dask. | |
If you want your prefect Flow to startup a dask cluster, you'll want to ensure | |
all resources are still cleaned up properly, even in the case of Flow failure. | |
To do this, you can make use of a `prefect.resource_manager`. This mirrors the | |
`contextmanager` pattern you may be familiar with in Python, but makes it work | |
with Prefect tasks. See |
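The guarantee a resource manager provides is the same one `contextlib.contextmanager` gives in plain Python: the cleanup step runs even if the body fails. A minimal sketch of that underlying pattern (a plain context manager and a log list stand in for the cluster and Prefect's actual API, which this is not):

```python
from contextlib import contextmanager

log = []

@contextmanager
def cluster(n_workers):
    # setup: "start" the cluster
    log.append(f"started {n_workers} workers")
    try:
        yield "cluster-handle"
    finally:
        # cleanup: always runs, even if the body raised
        log.append("shutdown")

try:
    with cluster(4) as c:
        raise RuntimeError("a task failed")
except RuntimeError:
    pass

print(log)  # ['started 4 workers', 'shutdown']
```

A `resource_manager` splits the same lifecycle into `setup` and `cleanup` methods that run as Prefect tasks, so the cleanup is tracked and retried by the orchestrator rather than by the Python runtime.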
diff --git a/dask_gateway/client.py b/garnet/client.py
index db044b9..6f35ea1 100644
--- a/dask_gateway/client.py
+++ b/garnet/client.py
@@ -27,25 +27,25 @@ from .utils import format_template, cancel_task
 del comm

-__all__ = ("Gateway", "GatewayCluster", "GatewayClusterError", "GatewayServerError")
+__all__ = ("Garnet", "GarnetCluster", "GarnetClusterError", "GarnetServerError")
import argparse
import json
import lzma
import os
import timeit
import urllib.request

import msgspec
""" | |
This benchmark is a modified version of the benchmark available at | |
https://github.com/samuelcolvin/pydantic/tree/master/benchmarks to support | |
benchmarking msgspec. | |
The benchmark measures the time to JSON encode/decode `n` random objects | |
matching a specific schema. It compares the time required for both | |
serialization _and_ schema validation. | |
""" |