lightweight_functions_component_io_kfp.ipynb
{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"colab": {
"name": "lightweight_functions_component_io_kfp.ipynb",
"provenance": [],
"collapsed_sections": [],
"include_colab_link": true
},
"kernelspec": {
"display_name": "Python 3",
"name": "python3"
}
},
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "view-in-github",
"colab_type": "text"
},
"source": [
"<a href=\"https://colab.research.google.com/gist/AseiSugiyama/20deaa4a001a12f122a43e4092afdd16/lightweight_functions_component_io_kfp.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
"cell_type": "code",
"metadata": {
"id": "ur8xi4C7S06n"
},
"source": [
"# Copyright 2021 Google LLC\n",
"#\n",
"# Licensed under the Apache License, Version 2.0 (the \"License\");\n",
"# you may not use this file except in compliance with the License.\n",
"# You may obtain a copy of the License at\n",
"#\n",
"# https://www.apache.org/licenses/LICENSE-2.0\n",
"#\n",
"# Unless required by applicable law or agreed to in writing, software\n",
"# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
"# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
"# See the License for the specific language governing permissions and\n",
"# limitations under the License."
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "JAPoU8Sm5E6e"
},
"source": [
"<table align=\"left\">\n",
"\n",
" <td>\n",
" <a href=\"https://colab.research.google.com/github/GoogleCloudPlatform/ai-platform-samples/blob/master/ai-platform-unified/notebooks/unofficial/pipelines/lightweight_functions_component_io_kfp.ipynb\">\n",
" <img src=\"https://cloud.google.com/ml-engine/images/colab-logo-32px.png\" alt=\"Colab logo\"> Run in Colab\n",
" </a>\n",
" </td>\n",
" <td>\n",
" <a href=\"https://github.com/GoogleCloudPlatform/ai-platform-samples/blob/master/ai-platform-unified/notebooks/unofficial/pipelines/lightweight_functions_component_io_kfp.ipynb\">\n",
" <img src=\"https://cloud.google.com/ml-engine/images/github-logo-32px.png\" alt=\"GitHub logo\">\n",
" View on GitHub\n",
" </a>\n",
" </td>\n",
" <td>\n",
" <a href=\"https://console.cloud.google.com/ai/platform/notebooks/deploy-notebook?download_url=https://github.com/GoogleCloudPlatform/ai-platform-samples/raw/master/ai-platform-unified/notebooks/unofficial/pipelines/lightweight_functions_component_io_kfp.ipynb\">\n",
" Open in Google Cloud Notebooks\n",
" </a>\n",
" </td> \n",
"</table>"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "o2rM_9Ml7-W2"
},
"source": [
"# Vertex Pipelines: lightweight Python function-based components, component I/O, and acceralators."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "tvgnzT1CKxrO"
},
"source": [
"## Overview\n",
"\n",
"\n",
"This [Vertex Pipelines](https://cloud.google.com/vertex-ai/docs/pipelines) example shows how to build lightweight Python function-based components using [the Kubeflow Pipelines (KFP) SDK](https://www.kubeflow.org/docs/components/pipelines/), and in particular how to support component I/O using the KFP SDK.\n",
"\n",
"### Objective\n",
"\n",
"In this example, you'll learn: \n",
"\n",
"- How to build Python-function-based components.\n",
"- How to pass *Artifacts* and *parameters* between components, both by path reference and by value.\n",
"- How to use the `kfp.dsl.importer` method.\n",
"\n",
"\n",
"### Costs \n",
"\n",
"This tutorial uses billable components of Google Cloud:\n",
"\n",
"* Vertex AI Training\n",
"* Cloud Storage\n",
"\n",
"Learn about pricing for [Vertex AI](https://cloud.google.com/vertex-ai/pricing) and [Cloud Storage](https://cloud.google.com/storage/pricing), and use the [Pricing\n",
"Calculator](https://cloud.google.com/products/calculator/)\n",
"to generate a cost estimate based on your projected usage."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "Jq63DFPPxAXJ"
},
"source": [
"### KFP Python function-based components\n",
"\n",
"A Kubeflow pipeline component is a self-contained set of code that performs one step in your ML workflow. A pipeline component is composed of:\n",
"\n",
"* The component code, which implements the logic needed to perform a step in your ML workflow.\n",
"* A component specification, which defines the following:\n",
" * The component’s metadata, its name and description.\n",
" * The component’s interface, the component’s inputs and outputs.\n",
"* The component’s implementation, the Docker container image to run, how to pass inputs to your component code, and how to get the component’s outputs.\n",
"\n",
"Lightweight Python function-based components make it easier to iterate quickly by letting you build your component code as a Python function and generating the component specification for you. This notebook shows how to create Python function-based components for use in [Vertex Pipelines](https://cloud.google.com/vertex-ai/docs/pipelines).\n",
"\n",
"Python function-based components use the Kubeflow Pipelines SDK to handle the complexity of passing inputs into your component and passing your function’s outputs back to your pipeline.\n",
"\n",
"There are two categories of inputs/outputs supported in Python function-based components: *artifacts* and *parameters*.\n",
"\n",
"* Parameters are passed to your component by value and typically contain `int`, `float`, `bool`, or small `string` values.\n",
"* Artifacts are passed to your component as a *reference* to a path, to which you can write a file or a subdirectory structure. In addition to the artifact’s data, you can also read and write the artifact’s metadata. This lets you record arbitrary key-value pairs for an artifact such as the accuracy of a trained model, and use metadata in downstream components – for example, you could use metadata to decide if a model is accurate enough to deploy for predictions."
]
},
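{
"cell_type": "markdown",
"metadata": {},
"source": [
"To make the distinction concrete, the next cell is a minimal sketch of a component with one parameter input and one artifact output. The component name `summarize` and its arguments are illustrative and are not used elsewhere in this tutorial; the sketch relies on the same `kfp.v2.dsl` symbols that are installed and imported in the setup sections below."
]
},
{
"cell_type": "code",
"metadata": {},
"source": [
"# A minimal sketch (illustrative only) contrasting the two I/O categories.\n",
"from kfp.v2.dsl import Dataset, Output, component\n",
"\n",
"\n",
"@component\n",
"def summarize(\n",
"    # `text` is a *parameter*: passed to the component by value.\n",
"    text: str,\n",
"    # `stats` is an *artifact*: passed by reference to a path, with metadata.\n",
"    stats: Output[Dataset],\n",
"):\n",
"    # Write the artifact's data to its locally accessible path...\n",
"    with open(stats.path, \"w\") as f:\n",
"        f.write(text)\n",
"    # ...and record arbitrary key-value metadata alongside it.\n",
"    stats.metadata[\"num_chars\"] = len(text)"
],
"execution_count": null,
"outputs": []
},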
{
"cell_type": "markdown",
"metadata": {
"id": "ze4-nDLfK4pw"
},
"source": [
"### Set up your local development environment\n",
"\n",
"**If you are using Colab or Google Cloud Notebooks**, your environment already meets\n",
"all the requirements to run this notebook. You can skip this step."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "gCuSR8GkAgzl"
},
"source": [
"**Otherwise**, make sure your environment meets this notebook's requirements.\n",
"You need the following:\n",
"\n",
"* The Google Cloud SDK\n",
"* Git\n",
"* Python 3\n",
"* virtualenv\n",
"* Jupyter notebook running in a virtual environment with Python 3\n",
"\n",
"The Google Cloud guide to [Setting up a Python development\n",
"environment](https://cloud.google.com/python/setup) and the [Jupyter\n",
"installation guide](https://jupyter.org/install) provide detailed instructions\n",
"for meeting these requirements. The following steps provide a condensed set of\n",
"instructions:\n",
"\n",
"1. [Install and initialize the Cloud SDK.](https://cloud.google.com/sdk/docs/)\n",
"\n",
"1. [Install Python 3.](https://cloud.google.com/python/setup#installing_python)\n",
"\n",
"1. [Install\n",
" virtualenv](https://cloud.google.com/python/setup#installing_and_using_virtualenv)\n",
" and create a virtual environment that uses Python 3. Activate the virtual environment.\n",
"\n",
"1. To install Jupyter, run `pip install jupyter` on the\n",
"command-line in a terminal shell.\n",
"\n",
"1. To launch Jupyter, run `jupyter notebook` on the command-line in a terminal shell.\n",
"\n",
"1. Open this notebook in the Jupyter Notebook Dashboard."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "i7EUnXsZhAGF"
},
"source": [
"### Install additional packages\n",
"\n",
"Install the KFP SDK."
]
},
{
"cell_type": "code",
"metadata": {
"id": "IaYsrh0Tc17L"
},
"source": [
"import sys\n",
"\n",
"if \"google.colab\" in sys.modules:\n",
" USER_FLAG = \"\"\n",
"else:\n",
" USER_FLAG = \"--user\""
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"metadata": {
"id": "aR7LNYMUCVKc"
},
"source": [
"!pip3 install {USER_FLAG} kfp --upgrade"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "hhq5zEbGg0XX"
},
"source": [
"### Restart the kernel\n",
"\n",
"After you install the additional packages, you need to restart the notebook kernel so it can find the packages."
]
},
{
"cell_type": "code",
"metadata": {
"id": "EzrelQZ22IZj"
},
"source": [
"# Automatically restart kernel after installs\n",
"import os\n",
"\n",
"if not os.getenv(\"IS_TESTING\"):\n",
" # Automatically restart kernel after installs\n",
" import IPython\n",
"\n",
" app = IPython.Application.instance()\n",
" app.kernel.do_shutdown(True)"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "6GPgNN7eeX1l"
},
"source": [
"Check the version of the package you installed. The KFP SDK version should be >=1.6."
]
},
{
"cell_type": "code",
"metadata": {
"id": "NN0mULkEeb84"
},
"source": [
"!python3 -c \"import kfp; print('KFP SDK version: {}'.format(kfp.__version__))\""
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "lWEdiXsJg0XY"
},
"source": [
"## Before you begin\n",
"\n",
"This notebook does not require a GPU runtime."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "Pfmx825FloYH"
},
"source": [
"### Set up your Google Cloud project\n",
"\n",
"**The following steps are required, regardless of your notebook environment.**\n",
"\n",
"1. [Select or create a Google Cloud project](https://console.cloud.google.com/cloud-resource-manager). When you first create an account, you get a $300 free credit towards your compute/storage costs.\n",
"\n",
"1. [Make sure that billing is enabled for your project](https://cloud.google.com/billing/docs/how-to/modify-project).\n",
"\n",
"1. [Enable the Vertex AI, Cloud Storage, and Compute Engine APIs](https://console.cloud.google.com/flows/enableapi?apiid=aiplatform.googleapis.com,compute_component,storage-component.googleapis.com). \n",
"\n",
"1. Follow the \"**Configuring your project**\" instructions from the Vertex Pipelines documentation.\n",
"\n",
"1. If you are running this notebook locally, you will need to install the [Cloud SDK](https://cloud.google.com/sdk).\n",
"\n",
"1. Enter your project ID in the cell below. Then run the cell to make sure the\n",
"Cloud SDK uses the right project for all the commands in this notebook.\n",
"\n",
"**Note**: Jupyter runs lines prefixed with `!` as shell commands, and it interpolates Python variables prefixed with `$` into these commands."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "WReHDGG5g0XY"
},
"source": [
"#### Set your project ID\n",
"\n",
"**If you don't know your project ID**, you may be able to get your project ID using `gcloud`."
]
},
{
"cell_type": "code",
"metadata": {
"id": "oM1iC_MfAts1"
},
"source": [
"import os\n",
"\n",
"PROJECT_ID = \"\"\n",
"\n",
"# Get your Google Cloud project ID from gcloud\n",
"if not os.getenv(\"IS_TESTING\"):\n",
" shell_output=!gcloud config list --format 'value(core.project)' 2>/dev/null\n",
" PROJECT_ID = shell_output[0]\n",
" print(\"Project ID: \", PROJECT_ID)"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "qJYoRfYng0XZ"
},
"source": [
"Otherwise, set your project ID here."
]
},
{
"cell_type": "code",
"metadata": {
"id": "riG_qUokg0XZ"
},
"source": [
"if PROJECT_ID == \"\" or PROJECT_ID is None:\n",
" PROJECT_ID = \"python-docs-samples-tests\" # @param {type:\"string\"}"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "06571eb4063b"
},
"source": [
"#### Timestamp\n",
"\n",
"If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append it onto the name of resources you create in this tutorial."
]
},
{
"cell_type": "code",
"metadata": {
"id": "697568e92bd6"
},
"source": [
"from datetime import datetime\n",
"\n",
"TIMESTAMP = datetime.now().strftime(\"%Y%m%d%H%M%S\")"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "dr--iN2kAylZ"
},
"source": [
"### Authenticate your Google Cloud account\n",
"\n",
"**If you are using Google Cloud Notebooks**, your environment is already\n",
"authenticated. Skip this step."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "sBCra4QMA2wR"
},
"source": [
"**If you are using Colab**, run the cell below and follow the instructions\n",
"when prompted to authenticate your account via oAuth.\n",
"\n",
"**Otherwise**, follow these steps:\n",
"\n",
"1. In the Cloud Console, go to the [**Create service account key**\n",
" page](https://console.cloud.google.com/apis/credentials/serviceaccountkey).\n",
"\n",
"2. Click **Create service account**.\n",
"\n",
"3. In the **Service account name** field, enter a name, and\n",
" click **Create**.\n",
"\n",
"4. In the **Grant this service account access to project** section, click the **Role** drop-down list. Type \"Vertex AI\"\n",
"into the filter box, and select\n",
" **Vertex AI Administrator**. Type \"Storage Object Admin\" into the filter box, and select **Storage Object Admin**.\n",
"\n",
"5. Click *Create*. A JSON file that contains your key downloads to your\n",
"local environment.\n",
"\n",
"6. Enter the path to your service account key as the\n",
"`GOOGLE_APPLICATION_CREDENTIALS` variable in the cell below and run the cell."
]
},
{
"cell_type": "code",
"metadata": {
"id": "PyQmSRbKA8r-"
},
"source": [
"import os\n",
"import sys\n",
"\n",
"# If you are running this notebook in Colab, run this cell and follow the\n",
"# instructions to authenticate your GCP account. This provides access to your\n",
"# Cloud Storage bucket and lets you submit training jobs and prediction\n",
"# requests.\n",
"\n",
"# If on Google Cloud Notebooks, then don't execute this code\n",
"if not os.path.exists(\"/opt/deeplearning/metadata/env_version\"):\n",
" if \"google.colab\" in sys.modules:\n",
" from google.colab import auth as google_auth\n",
"\n",
" google_auth.authenticate_user()\n",
"\n",
" # If you are running this notebook locally, replace the string below with the\n",
" # path to your service account key and run this cell to authenticate your GCP\n",
" # account.\n",
" elif not os.getenv(\"IS_TESTING\"):\n",
" %env GOOGLE_APPLICATION_CREDENTIALS ''"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "NxhCPW6e46EF"
},
"source": [
"### Create a Cloud Storage bucket as necessary\n",
"\n",
"You will need a Cloud Storage bucket for this example. If you don't have one that you want to use, you can make one now.\n",
"\n",
"\n",
"Set the name of your Cloud Storage bucket below. It must be unique across all\n",
"Cloud Storage buckets.\n",
"\n",
"You may also change the `REGION` variable, which is used for operations\n",
"throughout the rest of this notebook. Make sure to [choose a region where Vertex AI services are\n",
"available](https://cloud.google.com/vertex-ai/docs/general/locations#available_regions). You may\n",
"not use a Multi-Regional Storage bucket for training with Vertex AI."
]
},
{
"cell_type": "code",
"metadata": {
"id": "MzGDU7TWdts_"
},
"source": [
"BUCKET_NAME = \"gs://[your-bucket-name]\" # @param {type:\"string\"}\n",
"REGION = \"us-central1\" # @param {type:\"string\"}"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"metadata": {
"id": "cf221059d072"
},
"source": [
"if BUCKET_NAME == \"\" or BUCKET_NAME is None or BUCKET_NAME == \"gs://[your-bucket-name]\":\n",
" BUCKET_NAME = \"gs://\" + PROJECT_ID + \"aip-\" + TIMESTAMP"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "-EcIXiGsCePi"
},
"source": [
"**Only if your bucket doesn't already exist**: Run the following cell to create your Cloud Storage bucket."
]
},
{
"cell_type": "code",
"metadata": {
"id": "NIq7R4HZCfIc"
},
"source": [
"! gsutil mb -l $REGION $BUCKET_NAME"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "ucvCsknMCims"
},
"source": [
"Finally, validate access to your Cloud Storage bucket by examining its contents:"
]
},
{
"cell_type": "code",
"metadata": {
"id": "vhOb7YnwClBb"
},
"source": [
"! gsutil ls -al $BUCKET_NAME"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "XoEqT2Y4DJmf"
},
"source": [
"### Import libraries and define constants"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "YYtGjGG45ELJ"
},
"source": [
"Define some constants."
]
},
{
"cell_type": "code",
"metadata": {
"id": "5zmD19ryCre7"
},
"source": [
"PATH=%env PATH\n",
"%env PATH={PATH}:/home/jupyter/.local/bin\n",
"\n",
"USER = \"your-user-name\" # <---CHANGE THIS\n",
"PIPELINE_ROOT = \"{}/pipeline_root/{}\".format(BUCKET_NAME, USER)\n",
"\n",
"PIPELINE_ROOT"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "IprQaSI25oSk"
},
"source": [
"Do some imports:"
]
},
{
"cell_type": "code",
"metadata": {
"id": "TmMeVQ-fUEUM"
},
"source": [
"from typing import NamedTuple\n",
"\n",
"import kfp\n",
"from kfp.v2 import dsl\n",
"from kfp.v2.dsl import (Artifact, Dataset, Input, InputPath, Model, Output,\n",
" OutputPath, component)\n",
"from kfp.v2.google.client import AIPlatformClient"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "IbN_49SUW7b7"
},
"source": [
"## Define a pipeline\n",
"\n",
"This example defines a pipeline that illustrates component I/O for both *parameters* and *artifacts*.\n",
"\n",
"In addition, this example demonstrates use of the `kfp.dsl.importer` method in the pipeline. You can use the importer if you would like to import an existing Artifact. The importer only works for pipelines that are compiled using `kfp.v2.compiler.Compiler`.\n",
"\n",
"The first step is to define some pipeline *components*, then define a pipeline that uses them.\n"
]
},
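{
"cell_type": "markdown",
"metadata": {},
"source": [
"For orientation before the full pipeline below, the next cell is a minimal sketch showing the call shape of `kfp.dsl.importer` in isolation. The pipeline name `importer-sketch` is illustrative; the artifact URI is the same public sample file used later, and `PIPELINE_ROOT` is the constant defined above. The full pipeline at the end of this section wires the imported `Dataset` into a training step."
]
},
{
"cell_type": "code",
"metadata": {},
"source": [
"# A standalone sketch of the importer call shape (the pipeline name is\n",
"# illustrative). Defining the pipeline does not run it; the importer node is\n",
"# only created when the pipeline is compiled and submitted.\n",
"import kfp\n",
"from kfp.v2 import dsl\n",
"from kfp.v2.dsl import Dataset\n",
"\n",
"\n",
"@dsl.pipeline(name=\"importer-sketch\", pipeline_root=PIPELINE_ROOT)\n",
"def importer_sketch():\n",
"    # Create a `Dataset` artifact from an existing Cloud Storage URI.\n",
"    kfp.dsl.importer(\n",
"        artifact_uri=\"gs://ml-pipeline-playground/shakespeare1.txt\",\n",
"        artifact_class=Dataset,\n",
"        reimport=False,\n",
"    )"
],
"execution_count": null,
"outputs": []
},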
{
"cell_type": "markdown",
"metadata": {
"id": "QW9EmIXmYLa6"
},
"source": [
"### Create Python function-based pipeline components\n",
"\n",
"This example define some function-based components that consume parameters and produce (typed) Artifacts and parameters. Functions can produce Artifacts in three ways:\n",
"\n",
"* Accept an output local path using `OutputPath` \n",
"* Accept an `OutputArtifact` which gives the function a handle to the output artifact's metadata\n",
"* Return an `Artifact` (or `Dataset`, `Model`, `Metrics`, etc) in a `NamedTuple` \n",
"\n",
"These options for producing Artifacts are demonstrated in the examples below."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "RF9bpqJ_iWyr"
},
"source": [
"The first component definition, `preprocess`, shows a component that outputs two `Dataset` Artifacts, as well as an output parameter. (For this example, the datasets don't reflect real data).\n",
"\n",
"For the parameter output, you would typically use the approach shown here, using the `OutputPath` type, for \"larger\" data. \n",
"For \"small data\", like a short string, it might be more convenient to use the `NamedTuple` function output as shown in the second component instead.\n"
]
},
{
"cell_type": "code",
"metadata": {
"id": "GZ_kXbhCUEUN"
},
"source": [
"@component\n",
"def preprocess(\n",
" # An input parameter of type string.\n",
" message: str,\n",
" # Use Output to get a metadata-rich handle to the output artifact\n",
" # of type `Dataset`.\n",
" output_dataset_one: Output[Dataset],\n",
" # A locally accessible filepath for another output artifact of type\n",
" # `Dataset`.\n",
" output_dataset_two_path: OutputPath(\"Dataset\"),\n",
" # A locally accessible filepath for an output parameter of type string.\n",
" output_parameter_path: OutputPath(str),\n",
"):\n",
" \"\"\"'Mock' preprocessing step.\n",
" Writes out the passed in message to the output \"Dataset\"s and the output message.\n",
" \"\"\"\n",
" output_dataset_one.metadata[\"hello\"] = \"there\"\n",
" # Use OutputArtifact.path to access a local file path for writing.\n",
" # One can also use OutputArtifact.uri to access the actual URI file path.\n",
" with open(output_dataset_one.path, \"w\") as f:\n",
" f.write(message)\n",
"\n",
" # OutputPath is used to just pass the local file path of the output artifact\n",
" # to the function.\n",
" with open(output_dataset_two_path, \"w\") as f:\n",
" f.write(message)\n",
"\n",
" with open(output_parameter_path, \"w\") as f:\n",
" f.write(message)"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "CskpfNV-RGNO"
},
"source": [
"The second component definition, `train`, defines as input both an `InputPath` of type `Dataset`, and an `InputArtifact` of type `Dataset` (as well as other parameter inputs). It also uses the `NamedTuple` format for function output. As shown, these outputs can be Artifacts as well as parameters.\n",
"\n",
"Note that this component also writes some metrics metadata to the `model` output Artifact. This information is displayed in the Cloud Console user interface when the pipeline runs.\n"
]
},
{
"cell_type": "code",
"metadata": {
"id": "7ZCLrE7IUEUN"
},
"source": [
"@component(\n",
" base_image=\"python:3.9\", # Use a different base image.\n",
")\n",
"def train(\n",
" # An input parameter of type string.\n",
" message: str,\n",
" # Use InputPath to get a locally accessible path for the input artifact\n",
" # of type `Dataset`.\n",
" dataset_one_path: InputPath(\"Dataset\"),\n",
" # Use InputArtifact to get a metadata-rich handle to the input artifact\n",
" # of type `Dataset`.\n",
" dataset_two: Input[Dataset],\n",
" # Output artifact of type Model.\n",
" imported_dataset: Input[Dataset],\n",
" model: Output[Model],\n",
" # An input parameter of type int with a default value.\n",
" num_steps: int = 3,\n",
" # Use NamedTuple to return either artifacts or parameters.\n",
" # When returning artifacts like this, return the contents of\n",
" # the artifact. The assumption here is that this return value\n",
" # fits in memory.\n",
") -> NamedTuple(\n",
" \"Outputs\",\n",
" [\n",
" (\"output_message\", str), # Return parameter.\n",
" (\"generic_artifact\", Artifact), # Return generic Artifact.\n",
" ],\n",
"):\n",
" \"\"\"'Mock' Training step.\n",
" Combines the contents of dataset_one and dataset_two into the\n",
" output Model.\n",
" Constructs a new output_message consisting of message repeated num_steps times.\n",
" \"\"\"\n",
"\n",
" # Directly access the passed in GCS URI as a local file (uses GCSFuse).\n",
" with open(dataset_one_path, \"r\") as input_file:\n",
" dataset_one_contents = input_file.read()\n",
"\n",
" # dataset_two is an Artifact handle. Use dataset_two.path to get a\n",
" # local file path (uses GCSFuse).\n",
" # Alternately, use dataset_two.uri to access the GCS URI directly.\n",
" with open(dataset_two.path, \"r\") as input_file:\n",
" dataset_two_contents = input_file.read()\n",
"\n",
" with open(model.path, \"w\") as f:\n",
" f.write(\"My Model\")\n",
"\n",
" with open(imported_dataset.path, \"r\") as f:\n",
" data = f.read()\n",
" print(\"Imported Dataset:\", data)\n",
"\n",
" # Use model.get() to get a Model artifact, which has a .metadata dictionary\n",
" # to store arbitrary metadata for the output artifact. This metadata will be\n",
" # recorded in Managed Metadata and can be queried later. It will also show up\n",
" # in the UI.\n",
" model.metadata[\"accuracy\"] = 0.9\n",
" model.metadata[\"framework\"] = \"Tensorflow\"\n",
" model.metadata[\"time_to_train_in_seconds\"] = 257\n",
"\n",
" artifact_contents = \"{}\\n{}\".format(dataset_one_contents, dataset_two_contents)\n",
" output_message = \" \".join([message for _ in range(num_steps)])\n",
" return (output_message, artifact_contents)"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "OOIzr3g5MK8q"
},
"source": [
"Finally, define a small component that takes as input the `generic_artifact` returned by the `train` component function, and reads and prints the Artifact's contents."
]
},
{
"cell_type": "code",
"metadata": {
"id": "Ta_n1Ww_C5tQ"
},
"source": [
"@component\n",
"def read_artifact_input(\n",
" generic: Input[Artifact],\n",
"):\n",
" with open(generic.path, \"r\") as input_file:\n",
" generic_contents = input_file.read()\n",
" print(f\"generic contents: {generic_contents}\")"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "fmKIkUCoZeAu"
},
"source": [
"### Define a pipeline that uses your components and the Importer\n",
"\n",
"Next, define a pipeline that uses the components that were built in the previous section, and also shows the use of the `kfp.dsl.importer`. \n",
"\n",
"This example uses the `importer` to create, in this case, a `Dataset` artifact from an existing URI.\n",
"\n",
"Note that the `train_task` step takes as inputs three of the outputs of the `preprocess_task` step, as well as the output of the `importer` step.\n",
"In the \"train\" inputs we refer to the `preprocess` `output_parameter`, which gives us the output string directly.\n",
"\n",
"The `read_task` step takes as input the `train_task` `generic_artifact` output.\n"
]
},
{
"cell_type": "code",
"metadata": {
"id": "289jqF_XUEUO"
},
"source": [
"@dsl.pipeline(\n",
" # Default pipeline root. You can override it when submitting the pipeline.\n",
" pipeline_root=PIPELINE_ROOT,\n",
" # A name for the pipeline. Use to determine the pipeline Context.\n",
" name=\"metadata-pipeline-v2\",\n",
")\n",
"def pipeline(message: str):\n",
" importer = kfp.dsl.importer(\n",
" artifact_uri=\"gs://ml-pipeline-playground/shakespeare1.txt\",\n",
" artifact_class=Dataset,\n",
" reimport=False,\n",
" )\n",
" preprocess_task = preprocess(message=message)\n",
" train_task = train(\n",
" dataset_one=preprocess_task.outputs[\"output_dataset_one\"],\n",
" dataset_two=preprocess_task.outputs[\"output_dataset_two\"],\n",
" imported_dataset=importer.output,\n",
" message=preprocess_task.outputs[\"output_parameter\"],\n",
" num_steps=5,\n",
" )\n",
" read_task = read_artifact_input( # noqa: F841\n",
" train_task.outputs[\"generic_artifact\"]\n",
" )\n",
" train_task_with_gpu = (train(\n",
" dataset_one=preprocess_task.outputs[\"output_dataset_one\"],\n",
" dataset_two=preprocess_task.outputs[\"output_dataset_two\"],\n",
" imported_dataset=importer.output,\n",
" message=preprocess_task.outputs[\"output_parameter\"],\n",
" num_steps=5,\n",
" ).add_node_selector_constraint(\"cloud.google.com/gke-accelerator\",\"nvidia-tesla-p100\").set_gpu_limit(1))\n",
" read_task_with_gpu = read_artifact_input( # noqa: F841\n",
" train_task_with_gpu.outputs[\"generic_artifact\"]\n",
" )"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "2Hl1iYEKSzjP"
},
"source": [
"## Compile and run the pipeline\n",
"\n",
"Now, you're ready to compile the pipeline:"
]
},
{
"cell_type": "code",
"metadata": {
"id": "7PwUFV-MleGs"
},
"source": [
"from kfp.v2 import compiler\n",
"\n",
"compiler.Compiler().compile(\n",
" pipeline_func=pipeline, package_path=\"component_io_job.json\"\n",
")"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "qfNuzFswBB4g"
},
"source": [
"The pipeline compilation generates the `component_io_job.json` job spec file.\n",
"\n",
"Next, instantiate an API client object:"
]
},
{
"cell_type": "code",
"metadata": {
"id": "Hl5Q74_gkW2c"
},
"source": [
"from kfp.v2.google.client import AIPlatformClient # noqa: F811\n",
"\n",
"api_client = AIPlatformClient(\n",
" project_id=PROJECT_ID,\n",
" region=REGION,\n",
")"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "_jrn6saiQsPh"
},
"source": [
"Then, you run the defined pipeline like this: "
]
},
{
"cell_type": "code",
"metadata": {
"id": "R4Ha4FoDQpkd"
},
"source": [
"response = api_client.create_run_from_job_spec(\n",
" job_spec_path=\"component_io_job.json\",\n",
" # pipeline_root=PIPELINE_ROOT, # Override if needed.\n",
" parameter_values={\"message\": \"Hello, World\"},\n",
")"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "GvBTCP318RKs"
},
"source": [
"Click on the generated link to see your run in the Cloud Console. It should look like this:\n",
"\n",
"<a href=\"https://storage.googleapis.com/amy-jo/images/mp/artifact_io2.png\" target=\"_blank\"><img src=\"https://storage.googleapis.com/amy-jo/images/mp/artifact_io2.png\" width=\"95%\"/></a>"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "0dsUfUGk2sOj"
},
"source": [
"## Cleaning up\n",
"\n",
"To clean up all Google Cloud resources used in this project, you can [delete the Google Cloud\n",
"project](https://cloud.google.com/resource-manager/docs/creating-managing-projects#shutting_down_projects) you used for the tutorial.\n",
"\n",
"Otherwise, you can delete the individual resources you created in this tutorial:\n",
"- delete Cloud Storage objects that were created. Uncomment and run the command in the cell below **only if you are not using the `PIPELINE_ROOT` path for any other purpose**.\n"
]
},
{
"cell_type": "code",
"metadata": {
"id": "yMsZxXBZ2sOo"
},
"source": [
"# Warning: this command will delete ALL Cloud Storage objects under the PIPELINE_ROOT path.\n",
"# ! gsutil -m rm -r $PIPELINE_ROOT"
],
"execution_count": null,
"outputs": []
}
]
}