{
"cells": [
{
"cell_type": "markdown",
"id": "e9ad97cc",
"metadata": {},
"source": [
"# Fraud Detection with XGBoost and Triton-FIL\n",
"\n",
"## Introduction\n",
"In this example notebook, we will go step-by-step through the process of training and deploying an XGBoost fraud detection model using Triton's new FIL backend. Along the way, we'll show how to analyze the performance of a model deployed in Triton and optimize its performance based on specific SLA targets or other considerations.\n",
"\n",
"## Pre-Requisites\n",
"This notebook assumes that you have Docker plus a few Python dependencies. To install all of these dependencies in a conda environment, you may make use of the following conda environment file:\n",
"```yaml\n",
"---\n",
"name: triton_example\n",
"channels:\n",
" - conda-forge\n",
" - nvidia\n",
" - rapidsai\n",
"dependencies:\n",
" - cudatoolkit=11.4\n",
" - cudf=21.12\n",
" - cuml=21.12\n",
" - cupy\n",
" - jupyter\n",
" - kaggle\n",
" - matplotlib\n",
" - numpy\n",
" - pandas\n",
" - pip\n",
" - python=3.8\n",
" - scikit-learn\n",
" - pip:\n",
" - tritonclient[all]\n",
" - xgboost>=1.5,<1.6\n",
"```\n",
"\n",
"### XGBoost 1.6\n",
"Note that due to a change in XGBoost's JSON serialization format, Triton will not be able to load JSON-serialized models from XGBoost 1.6 until Triton version 22.07."
]
},
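{
"cell_type": "markdown",
"id": "a1f4b2c9",
"metadata": {},
"source": [
"Assuming you save the file above as, say, `triton_example.yml` (the filename is arbitrary), the environment can be created and activated as follows before launching Jupyter:\n",
"```bash\n",
"conda env create -f triton_example.yml\n",
"conda activate triton_example\n",
"```"
]
},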
{
"cell_type": "markdown",
"id": "0ca1856e",
"metadata": {},
"source": [
"## A Note on Categorical Variables\n",
"Categorical variable support was added to the Triton FIL backend in release 21.12 and to XGBoost in release 1.5. If you would like to use an earlier version of either of these or if you simply wish to see how the same workflow would go without explicit categorical variable support, you may set the `USE_CATEGORICAL` variable in the following cell to `false`. Otherwise, by leaving it as `True`, you can take advantage of categorical variable support.\n",
"\n",
"Please note that categorical variable support is still considered experimental in XGBoost 1.5."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "0e3205ea",
"metadata": {},
"outputs": [],
"source": [
"USE_CATEGORICAL = False"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "29dd53a4",
"metadata": {},
"outputs": [],
"source": [
"TRITON_IMAGE = 'nvcr.io/nvidia/tritonserver:21.12-py3'"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "190453b9",
"metadata": {},
"outputs": [],
"source": [
"!docker pull {TRITON_IMAGE}"
]
},
{
"cell_type": "markdown",
"id": "c60ae4a2",
"metadata": {},
"source": [
"## Fetching Training Data\n",
"For this example, we will make use of data from the [IEEE-CIS Fraud Detection](https://www.kaggle.com/c/ieee-fraud-detection/overview) Kaggle competition. You may fetch the data from this competition using the Kaggle command line client using the following commands.\n",
"\n",
"\n",
"**NOTE**: You will need to make sure that your Kaggle credentials are [available](https://github.com/Kaggle/kaggle-api#api-credentials) either through a kaggle.json file or via environment variables."
]
},
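{
"cell_type": "markdown",
"id": "b2c5d6e1",
"metadata": {},
"source": [
"If you prefer environment variables to a kaggle.json file, one minimal sketch (with placeholder values that you would replace with your own credentials) is to set them before running the download cell:\n",
"```python\n",
"import os\n",
"\n",
"# Placeholders only; substitute your own Kaggle API credentials.\n",
"os.environ['KAGGLE_USERNAME'] = 'your_kaggle_username'\n",
"os.environ['KAGGLE_KEY'] = 'your_kaggle_api_key'\n",
"```"
]
},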
{
"cell_type": "code",
"execution_count": null,
"id": "cd826c31",
"metadata": {},
"outputs": [],
"source": [
"!kaggle competitions download -c ieee-fraud-detection\n",
"!unzip -u ieee-fraud-detection.zip\n",
"train_csv = 'train_transaction.csv'"
]
},
{
"cell_type": "markdown",
"id": "cf0fb66a",
"metadata": {},
"source": [
"## Training Example Models\n",
"While the IEEE-CIS Kaggle competition focused on a more sophisticated problem involving analysis of both fraudulent transactions and the users linked to those transactions, we will use a simpler version of that problem (identifying fraudulent transactions only) to build our example model. In the following steps, we make use of cuML's preprocessing tools to clean the data and then train two example models using XGBoost. Note that we will be making use of the new categorical feature support in XGBoost 1.5. If you wish to use an earlier version of XGBoost, you will need to perform a [label encoding](https://docs.rapids.ai/api/cuml/stable/api.html?highlight=labelencoder#cuml.preprocessing.LabelEncoder.LabelEncoder) on the categorical features."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c529fe1a",
"metadata": {},
"outputs": [],
"source": [
"import cudf\n",
"import cupy as cp\n",
"from cuml.preprocessing import SimpleImputer\n",
"if not USE_CATEGORICAL:\n",
" from cuml.preprocessing import LabelEncoder\n",
"# Due to an upstream bug, cuML's train_test_split function is\n",
"# currently non-deterministic. We will therefore use sklearn's\n",
"# train_test_split in this example to obtain more consistent\n",
"# results.\n",
"from sklearn.model_selection import train_test_split\n",
"\n",
"SEED=0"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1b440e35",
"metadata": {},
"outputs": [],
"source": [
"# Load data from CSV files into cuDF DataFrames\n",
"data = cudf.read_csv(train_csv)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "de552313",
"metadata": {},
"outputs": [],
"source": [
"# Replace NaNs in data\n",
"nan_columns = data.columns[data.isna().any().to_pandas()]\n",
"float_nan_subset = data[nan_columns].select_dtypes(include='float64')\n",
"\n",
"imputer = SimpleImputer(missing_values=cp.nan, strategy='median')\n",
"data[float_nan_subset.columns] = imputer.fit_transform(float_nan_subset)\n",
"\n",
"obj_nan_subset = data[nan_columns].select_dtypes(include='object')\n",
"data[obj_nan_subset.columns] = obj_nan_subset.fillna('UNKNOWN')"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f456211a",
"metadata": {},
"outputs": [],
"source": [
"# Convert string columns to categorical or perform label encoding\n",
"cat_columns = data.select_dtypes(include='object')\n",
"if USE_CATEGORICAL:\n",
" data[cat_columns.columns] = cat_columns.astype('category')\n",
"else:\n",
" for col in cat_columns.columns:\n",
" data[col] = LabelEncoder().fit_transform(data[col])"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3720aacb",
"metadata": {},
"outputs": [],
"source": [
"# Split data into training and testing sets\n",
"X = data.drop('isFraud', axis=1)\n",
"y = data.isFraud.astype(int)\n",
"X_train, X_test, y_train, y_test = train_test_split(\n",
" X.to_pandas(), y.to_pandas(), test_size=0.3, stratify=y.to_pandas(), random_state=SEED\n",
")\n",
"# Copy data to avoid slowdowns due to fragmentation\n",
"X_train = X_train.copy()\n",
"X_test = X_test.copy()"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a3bc5afd",
"metadata": {},
"outputs": [],
"source": [
"import xgboost as xgb"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9aec11f0",
"metadata": {},
"outputs": [],
"source": [
"# Define model training function\n",
"def train_model(num_trees, max_depth):\n",
" model = xgb.XGBClassifier(\n",
" tree_method='gpu_hist',\n",
" enable_categorical=USE_CATEGORICAL,\n",
" use_label_encoder=False,\n",
" predictor='gpu_predictor',\n",
" eval_metric='aucpr',\n",
" objective='binary:logistic',\n",
" max_depth=max_depth,\n",
" n_estimators=num_trees\n",
" )\n",
" model.fit(\n",
" X_train,\n",
" y_train,\n",
" eval_set=[(X_test, y_test)]\n",
" )\n",
" return model"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "fcc239af",
"metadata": {},
"outputs": [],
"source": [
"# Train a small model with just 500 trees and a maximum depth of 3\n",
"small_model = train_model(500, 3)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8f57524f",
"metadata": {},
"outputs": [],
"source": [
"# Train a large model with 5000 trees and a maximum depth of 12\n",
"large_model = train_model(5000, 12)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "65e8ab0d",
"metadata": {},
"outputs": [],
"source": [
"# Free up some room on the GPU by explicitly deleting dataframes\n",
"import gc\n",
"del data\n",
"del nan_columns\n",
"del float_nan_subset\n",
"del imputer\n",
"del obj_nan_subset\n",
"del cat_columns\n",
"del X\n",
"del y\n",
"gc.collect()"
]
},
{
"cell_type": "markdown",
"id": "b3008720",
"metadata": {},
"source": [
"## Deploying Models in Triton\n",
"Now that we have two example models to work with, let's actually deploy them for real-time serving using Triton. In order to do so, we will need to first serialize the models in the directory structure that Triton expects and then add configuration files to tell Triton exactly how we wish to use these models."
]
},
{
"cell_type": "markdown",
"id": "6c4e0a5c",
"metadata": {},
"source": [
"### Model Serialization\n",
"Triton models can be stored locally on disk or in S3, Google Cloud Storage, or Azure Storage. For this example, we will stick to local storage, but information about using cloud storage solutions can be found [here](https://github.com/triton-inference-server/server/blob/main/docs/model_repository.md). Each model has a dedicated directory within a main model repository directory. Multiple versions of a model can also be served by Triton, as indicated by numbered directories (see below)."
]
},
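{
"cell_type": "markdown",
"id": "c3d6e7f2",
"metadata": {},
"source": [
"For reference, once the serialization and configuration steps below have run, the repository for one of our models will be laid out as follows (one subdirectory per model, one numbered subdirectory per version):\n",
"```\n",
"model_repository/\n",
"└── small_model/\n",
"    ├── 1/\n",
"    │   └── xgboost.json\n",
"    └── config.pbtxt\n",
"```"
]
},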
{
"cell_type": "code",
"execution_count": null,
"id": "7a005e16",
"metadata": {},
"outputs": [],
"source": [
"import os"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b7126325",
"metadata": {},
"outputs": [],
"source": [
"# Create the model repository directory. The name of this directory is arbitrary.\n",
"REPO_PATH = os.path.abspath('model_repository')\n",
"os.makedirs(REPO_PATH, exist_ok=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2db311a7",
"metadata": {},
"outputs": [],
"source": [
"def serialize_model(model, model_name):\n",
" # The name of the model directory determines the name of the model as reported\n",
" # by Triton\n",
" model_dir = os.path.join(REPO_PATH, model_name)\n",
" # We can store multiple versions of the model in the same directory. In our\n",
" # case, we have just one version, so we will add a single directory, named '1'.\n",
" version_dir = os.path.join(model_dir, '1')\n",
" os.makedirs(version_dir, exist_ok=True)\n",
" \n",
" # The default filename for XGBoost models saved in json format is 'xgboost.json'.\n",
" # It is recommended that you use this filename to avoid having to specify a\n",
" # name in the configuration file.\n",
" model_file = os.path.join(version_dir, 'xgboost.json')\n",
" model.save_model(model_file)\n",
" \n",
" return model_dir"
]
},
{
"cell_type": "markdown",
"id": "83d42c83",
"metadata": {},
"source": [
"We will be deploying two copies of each of our example models: one on CPU and one on GPU. We will use these separate instances to demonstrate the performance differences between GPU and CPU execution later on."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "bf33ec4d",
"metadata": {},
"outputs": [],
"source": [
"small_model_dir = serialize_model(small_model, 'small_model')\n",
"small_model_cpu_dir = serialize_model(small_model, 'small_model-cpu')\n",
"large_model_dir = serialize_model(large_model, 'large_model')\n",
"large_model_cpu_dir = serialize_model(large_model, 'large_model-cpu')"
]
},
{
"cell_type": "markdown",
"id": "6550c7a9",
"metadata": {},
"source": [
"### The Configuration File\n",
"The configuration file associated with a model tells Triton a little bit about the model itself and how you would like to use it. You can read about all generic Triton configuration options [here](https://github.com/triton-inference-server/server/blob/master/docs/model_configuration.md) and about configuration options specific to the FIL backend [here](https://github.com/triton-inference-server/fil_backend#configuration), but we will focus on just a few of the most common and relevant options in this example. Below are general descriptions of these options:\n",
"- **max_batch_size**: The maximum batch size that can be passed to this model. In general, the only limit on the size of batches passed to a FIL backend is the memory available with which to process them. For GPU execution, the available memory is determined by the size of Triton's CUDA memory pool, which can be set via a command line argument when starting the server.\n",
"- **input**: Options in this section tell Triton the number of features to expect for each input sample.\n",
"- **output**: Options in this section tell Triton how many output values there will be for each sample. If the \"predict_proba\" option (described further on) is set to true, then a probability value will be returned for each class. Otherwise, a single value will be returned indicating the class predicted for the given sample.\n",
"- **instance_group**: This determines how many instances of this model will be created and whether they will use the GPU or CPU.\n",
"- **model_type**: A string indicating what format the model is in (\"xgboost_json\" in this example, but \"xgboost\", \"lightgbm\", and \"tl_checkpoint\" are valid formats as well).\n",
"- **predict_proba**: If set to true, probability values will be returned for each class rather than just a class prediction.\n",
"- **output_class**: True for classification models, false for regression models.\n",
"- **threshold**: A score threshold for determining classification. When output_class is set to true, this must be provided, although it will not be used if predict_proba is also set to true.\n",
"- **storage_type**: In general, using \"AUTO\" for this setting should meet most usecases. If \"AUTO\" storage is selected, FIL will load the model using either a sparse or dense representation based on the approximate size of the model. In some cases, you may want to explicitly set this to \"SPARSE\" in order to reduce the memory footprint of large models.\n",
"\n",
"Based on this information, let's set up configuration files for our models."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "858fd518",
"metadata": {},
"outputs": [],
"source": [
"# Maximum size in bytes for input and output arrays. If you are\n",
"# using Triton 21.11 or higher, all memory allocations will make\n",
"# use of Triton's memory pool, which has a default size of\n",
"# 67_108_864 bytes. This can be increased using the\n",
"# `--cuda-memory-pool-byte-size` option when the server is\n",
"# started, but this notebook should work fine with default\n",
"# settings.\n",
"MAX_MEMORY_BYTES = 60_000_000"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "539ab7b9",
"metadata": {},
"outputs": [],
"source": [
"features = X_test.shape[1]\n",
"num_classes = cp.unique(y_test).size\n",
"bytes_per_sample = (features + num_classes) * 4\n",
"max_batch_size = MAX_MEMORY_BYTES // bytes_per_sample"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "0e794668",
"metadata": {},
"outputs": [],
"source": [
"def generate_config(model_dir, deployment_type='gpu', storage_type='AUTO'):\n",
" if deployment_type.lower() == 'cpu':\n",
" instance_kind = 'KIND_CPU'\n",
" else:\n",
" instance_kind = 'KIND_GPU'\n",
"\n",
" config_text = f\"\"\"backend: \"fil\"\n",
"max_batch_size: {max_batch_size}\n",
"input [ \n",
" {{ \n",
" name: \"input__0\"\n",
" data_type: TYPE_FP32\n",
" dims: [ {features} ] \n",
" }} \n",
"]\n",
"output [\n",
" {{\n",
" name: \"output__0\"\n",
" data_type: TYPE_FP32\n",
" dims: [ {num_classes} ]\n",
" }}\n",
"]\n",
"instance_group [{{ kind: {instance_kind} }}]\n",
"parameters [\n",
" {{\n",
" key: \"model_type\"\n",
" value: {{ string_value: \"xgboost_json\" }}\n",
" }},\n",
" {{\n",
" key: \"predict_proba\"\n",
" value: {{ string_value: \"true\" }}\n",
" }},\n",
" {{\n",
" key: \"output_class\"\n",
" value: {{ string_value: \"true\" }}\n",
" }},\n",
" {{\n",
" key: \"threshold\"\n",
" value: {{ string_value: \"0.5\" }}\n",
" }},\n",
" {{\n",
" key: \"storage_type\"\n",
" value: {{ string_value: \"{storage_type}\" }}\n",
" }}\n",
"]\n",
"\n",
"dynamic_batching {{\n",
" max_queue_delay_microseconds: 100\n",
"}}\"\"\"\n",
" config_path = os.path.join(model_dir, 'config.pbtxt')\n",
" with open(config_path, 'w') as file_:\n",
" file_.write(config_text)\n",
"\n",
" return config_path"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "43016177",
"metadata": {},
"outputs": [],
"source": [
"generate_config(small_model_dir, deployment_type='gpu')\n",
"generate_config(small_model_cpu_dir, deployment_type='cpu')\n",
"generate_config(large_model_dir, deployment_type='gpu')\n",
"generate_config(large_model_cpu_dir, deployment_type='cpu')"
]
},
{
"cell_type": "markdown",
"id": "2016e3d5",
"metadata": {},
"source": [
"### Starting the server\n",
"With valid models and configuration files in place, we can now start the server. Below, we do so, use the Python client to wait for it to come fully online, and then check the logs to make sure we didn't get any unexpected warnings or errors while loading the models."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1f9e2f6c",
"metadata": {},
"outputs": [],
"source": [
"!docker run --gpus all -d -p 8000:8000 -p 8001:8001 -p 8002:8002 -v {REPO_PATH}:/models --name tritonserver {TRITON_IMAGE} tritonserver --model-repository=/models"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "bd2c07ae",
"metadata": {},
"outputs": [],
"source": [
"import time\n",
"import tritonclient.grpc as triton_grpc\n",
"from tritonclient import utils as triton_utils\n",
"HOST = 'localhost'\n",
"PORT = 8001\n",
"TIMEOUT = 60"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d0c7972b",
"metadata": {},
"outputs": [],
"source": [
"client = triton_grpc.InferenceServerClient(url=f'{HOST}:{PORT}')"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "66742762",
"metadata": {},
"outputs": [],
"source": [
"# Wait for server to come online\n",
"server_start = time.time()\n",
"while True:\n",
" try:\n",
" if client.is_server_ready() or time.time() - server_start > TIMEOUT:\n",
" break\n",
" except triton_utils.InferenceServerException:\n",
" pass\n",
" time.sleep(1)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ca7e9196",
"metadata": {},
"outputs": [],
"source": [
"!docker logs tritonserver"
]
},
{
"cell_type": "markdown",
"id": "164ae4ef",
"metadata": {},
"source": [
"## Submitting inference requests\n",
"With our models now deployed on a running Triton server, let's confirm that we get the same results from the deployed model as we get locally. Note that we will occasionally see slight divergences due to floating point errors during parallel execution, but otherwise, results should match.\n",
"\n",
"### Categorical variables\n",
"If you are using a model with categorical features, a certain amount of care must be taken with categorical features, just as if you were executing a model locally. Both XGBoost and LightGBM depend on the input data frames to convert categories into numeric variables. If data is later submitted from a data frame which contains a different subset of categories, this numeric conversion will not be handled properly. In this example, we will use the same dataframe we used during testing, so we need not consider this, but otherwise we would need to note the mapping used for the `.codes` attribute for each categorical feature in the training dataframe and make sure the same codes were used when submitting inference requests."
]
},
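{
"cell_type": "markdown",
"id": "d4e7f8a3",
"metadata": {},
"source": [
"As a minimal sketch of how those codes might be pinned (assuming `USE_CATEGORICAL = True` so that the string columns have a category dtype; the helper below is illustrative and not used elsewhere in this notebook), we could record the training-time categories and reapply them to any incoming dataframe before computing its codes:\n",
"```python\n",
"# Record the category ordering observed at training time.\n",
"train_categories = {\n",
"    col: X_train[col].cat.categories\n",
"    for col in X_train.select_dtypes('category').columns\n",
"}\n",
"\n",
"def apply_training_categories(df, categories=train_categories):\n",
"    # Re-encode each categorical column with the training-time categories so\n",
"    # that .cat.codes yields the same integer for the same category value.\n",
"    # Values never seen during training end up with code -1 (missing).\n",
"    df = df.copy()\n",
"    for col, cats in categories.items():\n",
"        df[col] = df[col].astype('category').cat.set_categories(cats)\n",
"    return df\n",
"```"
]
},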
{
"cell_type": "code",
"execution_count": null,
"id": "c64db65d",
"metadata": {},
"outputs": [],
"source": [
"import pandas as pd\n",
"def convert_to_numpy(df):\n",
" df = df.copy()\n",
" cat_cols = df.select_dtypes('category').columns\n",
" for col in cat_cols:\n",
" df[col] = df[col].cat.codes\n",
" for col in df.columns:\n",
" df[col] = pd.to_numeric(df[col], downcast='float')\n",
" return df.values"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "60633965",
"metadata": {},
"outputs": [],
"source": [
"np_data = convert_to_numpy(X_test)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a65ceb6a",
"metadata": {},
"outputs": [],
"source": [
"def triton_predict(model_name, arr):\n",
" triton_input = triton_grpc.InferInput('input__0', arr.shape, 'FP32')\n",
" triton_input.set_data_from_numpy(arr)\n",
" triton_output = triton_grpc.InferRequestedOutput('output__0')\n",
" response = client.infer(model_name, model_version='1', inputs=[triton_input], outputs=[triton_output])\n",
" return response.as_numpy('output__0')"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e7f7bc7f",
"metadata": {},
"outputs": [],
"source": [
"triton_result = triton_predict('small_model', np_data[0:5])\n",
"local_result = small_model.predict_proba(X_test[0:5])\n",
"print(\"Result computed on Triton: \")\n",
"print(triton_result)\n",
"print(\"\\nResult computed locally: \")\n",
"print(local_result)\n",
"cp.testing.assert_allclose(triton_result, local_result, rtol=1e-6, atol=1e-6)"
]
},
{
"cell_type": "markdown",
"id": "8147db66",
"metadata": {},
"source": [
"## Optimizing Performance\n",
"Triton offers several tools to help tune your model deployment parameters and optimize your target metrics, whether that be throughput, latency, device utilization, or some other measure of performance. Some of these optimizations depend on expected server load and whether inference requests will be submitted in batches or one at a time from clients. As we shall see, Triton's performance analysis tools allow you to test performance based on a wide range of anticipated scenarios and modify deployment parameters accordingly.\n",
"\n",
"For this example, we will make use of Triton's `perf_analyzer` [tool](https://github.com/triton-inference-server/server/blob/main/docs/perf_analyzer.md#performance-analyzer), which allows us to quickly measure throughput and latency based on different batch sizes and request concurrency. We'll start with a basic comparison of the performance of our large model deployed on CPU vs GPU with batch size 1 and no concurrency.\n",
"\n",
"All of the specific performance numbers here were obtained on a DGX-1 with 8 V100s and Triton 21.11, but your numbers may vary depending on available hardware and whether or not you chose to enable categorical features."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "236f6dc6",
"metadata": {},
"outputs": [],
"source": [
"# Analyze performance of our large model on CPU.\n",
"# By default, perf_analyzer uses batch size 1 and concurrency 1.\n",
"!perf_analyzer -m large_model-cpu"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8fc3d883",
"metadata": {},
"outputs": [],
"source": [
"# Let's now get the same performance numbers for GPU execution\n",
"!perf_analyzer -m large_model"
]
},
{
"cell_type": "markdown",
"id": "6b953ec1",
"metadata": {},
"source": [
"Already, we can see that GPU execution offers substantially improved throughput at lower latency for this complex model, but let's see what happens when we look at higher batch sizes or request load."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "685c163a",
"metadata": {},
"outputs": [],
"source": [
"# Measure performance with batch size 6 and a concurrrency of 6 for\n",
"# request submissions\n",
"!perf_analyzer -m large_model-cpu -b 6 --concurrency-range 6:6"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d32f9de7",
"metadata": {},
"outputs": [],
"source": [
"!perf_analyzer -m large_model -b 6 --concurrency-range 6:6"
]
},
{
"cell_type": "markdown",
"id": "f1a6705f",
"metadata": {},
"source": [
"As we can see, deployed on CPU, the model was able to offer a somewhat increased throughput at higher load, but latency increased dramatically. Meanwhile, the same model deployed on the GPU significantly increased its throughput with only a slight increase in latency.\n",
"\n",
"In order to maintain a tight latency budget on a CPU-only server under high request load, we would have to turn to a significantly less sophisticated model. Let's imagine that we were trying to keep our p99 latency under 2 ms on the DGX machine referred to above. On CPU, we can just barely stay under that budget with a batch size of 6 and concurrency of 6 on CPU. Deploying the same model on GPU with the same parameters, we can keep our p99 latency under 0.7 ms and offer 3.5X the throughput"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "14ba9337",
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"!perf_analyzer -m small_model-cpu -b 6 --concurrency-range 6:6"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "7c9a5592",
"metadata": {},
"outputs": [],
"source": [
"!perf_analyzer -m small_model -b 6 --concurrency-range 6:6"
]
},
{
"cell_type": "markdown",
"id": "e6e9beba",
"metadata": {},
"source": [
"Let's see how far we can push our large model on GPU while staying within our 2 ms latency budget."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "805ab7a8",
"metadata": {},
"outputs": [],
"source": [
"!perf_analyzer -m large_model -b 80 --concurrency-range 8:8"
]
},
{
"cell_type": "markdown",
"id": "2956b745",
"metadata": {},
"source": [
"On the GPU, this larger model can achieve 20X the throughput of the smaller model on CPU, allowing us to handle a substantially higher load. But of course throughput performance is only part of the picture. If our latency budget forces us to use a smaller model on CPU, how much worse will we do at actually detecting fraud? Let's compute results for the entire test dataset using the large and small models and then compare their precision-recall curves to see how much we may be losing by resorting to the smaller model for CPU deployments."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "0f026d07",
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"import cuml"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "424da1e7",
"metadata": {},
"outputs": [],
"source": [
"GPU_COUNT = 8\n",
"\n",
"def create_batches(arr):\n",
" # Determine how many chunks are needed to keep size <= max_batch_size\n",
" chunks = (\n",
" arr.shape[0] // max_batch_size +\n",
" int(bool(arr.shape[0] % max_batch_size) or arr.shape[0] < max_batch_size)\n",
" )\n",
" return np.array_split(arr, max(GPU_COUNT, chunks))"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "431cdd4f",
"metadata": {},
"outputs": [],
"source": [
"%time large_model_results = np.concatenate([triton_predict('large_model', chunk) for chunk in create_batches(np_data)])"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1c19d85c",
"metadata": {},
"outputs": [],
"source": [
"%time small_model_results = np.concatenate([triton_predict('small_model-cpu', chunk) for chunk in create_batches(np_data)])"
]
},
{
"cell_type": "markdown",
"id": "e3862ec0",
"metadata": {},
"source": [
"Note that we can more quickly process the full dataset on GPU even with a significantly more sophisticated model than we are using for our CPU deployment. As an interesting point of comparison, due to the optimized inference performance of the RAPIDS Forest Inference Library (FIL) used by the Triton backend and Triton's inherent ability to parallelize over available GPUs, it is even faster to submit these samples for processing to Triton than it is to process them locally using XGBoost for the larger model, despite the overhead of data transfer. For information about invoking FIL directly in Python without Triton, see the [FIL documentation](https://github.com/rapidsai/cuml/tree/branch-21.12/python/cuml/fil#fil---rapids-forest-inference-library)."
]
},
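{
"cell_type": "markdown",
"id": "e5f8a9b4",
"metadata": {},
"source": [
"As a rough sketch of what direct FIL invocation might look like (the keyword arguments below follow the cuML 21.12 FIL API; treat this as illustrative rather than definitive):\n",
"```python\n",
"from cuml import ForestInference\n",
"\n",
"# Load the serialized XGBoost JSON model directly into FIL on the GPU.\n",
"fil_model = ForestInference.load(\n",
"    os.path.join(large_model_dir, '1', 'xgboost.json'),\n",
"    output_class=True,\n",
"    model_type='xgboost_json'\n",
")\n",
"# np_data is the float32 array prepared for the Triton requests above.\n",
"fil_probs = fil_model.predict_proba(np_data)\n",
"```"
]
},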
{
"cell_type": "code",
"execution_count": null,
"id": "e0094559",
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"%time large_model.predict_proba(X_test)"
]
},
{
"cell_type": "markdown",
"id": "200a8bb1",
"metadata": {},
"source": [
"We now return to evaluating the benefit of the larger model for accurately detecting fraud by computing precision-recall curves for both the small and large models."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1f3c5a62",
"metadata": {},
"outputs": [],
"source": [
"large_precision, large_recall, _ = cuml.metrics.precision_recall_curve(y_test, large_model_results[:, 1])\n",
"small_precision, small_recall, _ = cuml.metrics.precision_recall_curve(y_test, small_model_results[:, 1])"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3a855acb",
"metadata": {},
"outputs": [],
"source": [
"%matplotlib inline\n",
"import matplotlib.pyplot as plt"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "167fe250",
"metadata": {},
"outputs": [],
"source": [
"plt.plot(small_precision, small_recall, color='#0071c5')\n",
"plt.plot(large_precision, large_recall, color='#76b900')\n",
"plt.title('Precision vs Recall for Small and Large Models')\n",
"plt.xlabel('Precision')\n",
"plt.ylabel('Recall')\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"id": "36712d5b",
"metadata": {},
"source": [
"As we can see, the larger, more sophisticated model dominates the smaller model all along this curve. By deploying our model on GPU, we can identify a far greater proportion of actual fraud incidents with fewer false positives, all without going over our latency budget."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b2c8cf37",
"metadata": {},
"outputs": [],
"source": [
"# Shut down the server\n",
"!docker rm -f tritonserver"
]
},
{
"cell_type": "markdown",
"id": "6747b1fb",
"metadata": {},
"source": [
"## Conclusion\n",
"In this example notebook, we showed how to deploy an XGBoost model in Triton using the new FIL backend. While it is possible to deploy these models on both CPU and GPU in Triton, GPU-deployed models offer far higher throughput at lower latency. As a result, we can deploy more sophisticated models on the GPU for any given latency budget and thereby obtain far more accurate results.\n",
"\n",
"While we have focused on XGBoost in this example, FIL also natively supports LightGBM's text serialization format as well as Treelite's checkpoint format. Thus, the same general steps can be used to serve LightGBM models and any Treelite-convertible model (including Scikit-Learn and cuML forest models). With the new FIL backend, Triton is now ready to serve forest models of all kinds in production, whether on their own or in concert with any of the deep-learning models supported by Triton."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.12"
}
},
"nbformat": 4,
"nbformat_minor": 5
}