{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "X4cRE8IbIrIV"
},
"source": [
"If you haven't already installed 🤗 Transformers,🤗 Datasets and `wandb`. Uncomment the following cell and run it."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Setting up the Environment"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "MOsHUjgdIrIW",
"outputId": "750db64c-6326-47ef-994a-707f0ef21785"
},
"outputs": [],
"source": [
"# ! pip install datasets transformers wandb -q"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "-QKH0Hyp-igj"
},
"source": [
"You will need to have a **Weights and Biases (W&B)** and **Huggingface** account to be able to share models and experiments publicly and run the notebook below. \n",
"\n",
"**NOTE:** For Huggingface, make sure to create a `write` token to be able to share models to your workspace.\n",
"\n",
"If you don't already have an account, run the following cells to signup: "
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 386
},
"id": "vRHynpw0-igk",
"outputId": "91a02e24-ac3c-44b9-f645-a14b5e670163"
},
"outputs": [
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "a7e924f8386c422ba395e271c2e69e9c",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"VBox(children=(HTML(value='<center>\\n<img src=https://huggingface.co/front/assets/huggingface_logo-noborder.sv…"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"# login to Huggingface\n",
"from huggingface_hub import notebook_login\n",
"notebook_login()"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"\u001b[34m\u001b[1mwandb\u001b[0m: Currently logged in as: \u001b[33mwandb_fc\u001b[0m (use `wandb login --relogin` to force relogin)\n"
]
},
{
"data": {
"text/plain": [
"True"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# login to Weights and Biases\n",
"import wandb \n",
"wandb.login()"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "Cs1Qsjb_-igm"
},
"source": [
"Then you also need to install **Git-LFS** (large file storage). Uncomment the following instructions if you don't have it already:"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "FRljglLf-ign",
"outputId": "1b25e833-40d9-4147-f3ce-b03e3ca69e84"
},
"outputs": [],
"source": [
"# !apt install git-lfs"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "iAxk7itD-igo"
},
"source": [
"Make sure your version of Transformers is at least 4.11.0 since the functionality used in this notebook was introduced in that version:"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "ql76_VQt-igq",
"outputId": "53cbd733-7a24-4f42-9657-8c8547d22ca0"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"4.18.0\n"
]
}
],
"source": [
"import transformers\n",
"print(transformers.__version__)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# TL;DR"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "rEJBSTyZIrIb"
},
"source": [
"# Fine-tuning a model on a text classification task"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "kTCFado4IrIc"
},
"source": [
"In this notebook, we will see how to fine-tune one of the [🤗 Transformers](https://github.com/huggingface/transformers) model to a text classification task of the [GLUE Benchmark](https://gluebenchmark.com/) and use Weights and Biases for logging your experiments.\n",
"\n",
"We will also be using [Weights and Biases Sweeps](https://docs.wandb.ai/guides/sweeps) for hyperparameter tuning as part of our experimentation! The script used for Sweeps is pretty generic and can be re-used for your other projects as well. \n",
"\n",
"You can also have a look at Sweeps Quickstart [here](https://colab.research.google.com/drive/1jPi1Bi_UfKyNgxoNq5REQCGnk4KWc31E#scrollTo=XavXvMmYs5tl) to quickly get introduced to W&B Sweeps. \n",
"\n",
"## Glue Benchmark\n",
"> The GLUE Benchmark is a group of nine classification tasks on sentences or pairs of sentences which are:\n",
"\n",
"- [CoLA](https://nyu-mll.github.io/CoLA/) (Corpus of Linguistic Acceptability) Determine if a sentence is grammatically correct or not.is a dataset containing sentences labeled grammatically correct or not.\n",
"- [MNLI](https://arxiv.org/abs/1704.05426) (Multi-Genre Natural Language Inference) Determine if a sentence entails, contradicts or is unrelated to a given hypothesis. (This dataset has two versions, one with the validation and test set coming from the same distribution, another called mismatched where the validation and test use out-of-domain data.)\n",
"- [MRPC](https://www.microsoft.com/en-us/download/details.aspx?id=52398) (Microsoft Research Paraphrase Corpus) Determine if two sentences are paraphrases from one another or not.\n",
"- [QNLI](https://rajpurkar.github.io/SQuAD-explorer/) (Question-answering Natural Language Inference) Determine if the answer to a question is in the second sentence or not. (This dataset is built from the SQuAD dataset.)\n",
"- [QQP](https://data.quora.com/First-Quora-Dataset-Release-Question-Pairs) (Quora Question Pairs2) Determine if two questions are semantically equivalent or not.\n",
"- [RTE](https://aclweb.org/aclwiki/Recognizing_Textual_Entailment) (Recognizing Textual Entailment) Determine if a sentence entails a given hypothesis or not.\n",
"- [SST-2](https://nlp.stanford.edu/sentiment/index.html) (Stanford Sentiment Treebank) Determine if the sentence has a positive or negative sentiment.\n",
"- [STS-B](http://ixa2.si.ehu.es/stswiki/index.php/STSbenchmark) (Semantic Textual Similarity Benchmark) Determine the similarity of two sentences with a score from 1 to 5.\n",
"- [WNLI](https://cs.nyu.edu/faculty/davise/papers/WinogradSchemas/WS.html) (Winograd Natural Language Inference) Determine if a sentence with an anonymous pronoun and a sentence with this pronoun replaced are entailed or not. (This dataset is built from the Winograd Schema Challenge dataset.)\n",
"\n",
"We will see how to easily load the dataset for each one of those tasks and use the Huggingface `Trainer` API to fine-tune a model on it. Each task is named by its acronym, with `mnli-mm` standing for the mismatched version of MNLI (so same training set as `mnli` but different validation and test sets):"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {
"id": "YZbiBDuGIrId"
},
"outputs": [],
"source": [
"GLUE_TASKS = [\"cola\", \"mnli\", \"mnli-mm\", \"mrpc\", \"qnli\", \"qqp\", \"rte\", \"sst2\", \"stsb\", \"wnli\"]"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "4RRkXuteIrIh"
},
"source": [
"This notebook is built to run on any of the tasks in the list above, with any model checkpoint from the [Model Hub](https://huggingface.co/models) as long as that model has a version with a classification head. Depending on your model and the GPU you are using, you might need to adjust the batch size to avoid out-of-memory errors. Set those three parameters below, then the rest of the notebook should run smoothly:"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {
"id": "zVvslsfMIrIh"
},
"outputs": [],
"source": [
"task = \"mrpc\"\n",
"model_checkpoint = \"distilbert-base-uncased\"\n",
"batch_size = 64"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "whPRbBNbIrIl"
},
"source": [
"## Loading the dataset"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "W7QYTpxXIrIl"
},
"source": [
"We will use the [🤗 Datasets](https://github.com/huggingface/datasets) library to download the data and get the metric we need to use for evaluation (to compare our model to the benchmark). We will also be logging the metric to **Weights and Biases** for easy experimentation.\n",
"\n",
"This can be easily done with the functions `load_dataset` and `load_metric`. "
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {
"id": "IreSlFmlIrIm"
},
"outputs": [],
"source": [
"from datasets import load_dataset, load_metric"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "CKx2zKs5IrIq"
},
"source": [
"Apart from `mnli-mm` being a special code, we can directly pass our task name to those functions. `load_dataset` will cache the dataset to avoid downloading it again the next time you run this cell."
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 231,
"referenced_widgets": [
"e23c0bd606164d5a826a398049836218",
"674f6248de45431eadc5e894b206985d",
"1421037cee284945965d799bb8eadf5d",
"0c2cce713a424f1c907cca57bcd6db4b",
"9ea5bb465b8148cfb0c8dce352fed136",
"407cc0584aaf42e289c3b73d8cefe2ba",
"3ec8e8794f2341d395c5ad071aed7a68",
"9e2afa34dd0d456dbdf827743422ee5a",
"8cdb50dd51634d5f8108d450acd40cc7",
"082b93bf7cbd4d5b8691d35092a3a513",
"af57631a5b9642a68f0bc27eafebd542",
"827cc59a3c254c4e9ec1468b763bc8a5",
"49cb88f1550f4b838c14aae9a8ac8662",
"6e44e78718cd4a52aecc47a511fb7ef6",
"b58cd953f41648d29de64d9125f42c72",
"40f2fb5d5b6b438cbbc574f6e68d7d73",
"40caff94a3e1474d8ef440f0fa78d981",
"364088b341064d9dac54b5e55b5cbb93",
"fe5d0cbeb5f446ec89383d23521524c8",
"0e2dd775b54d487cafcd4e80cdddd9ed",
"e2234760a0c94c9cb9c3f71e95036b6e",
"28fd8091a58b45298c1dabebb1bac36c",
"eba159425b444bb3beb7bb849eda39a5",
"ce57e0b2d78d43989d0990402cdf2c93",
"907f0c20cdbd4f64be785e867b18feaf",
"bda796c7564a4549878a05d1ebdbaf1b",
"61fb59b5ad2f4dcca5909919617d6495",
"f5ee51060ccc4e24a28804c85932eca8",
"66d92bcab0374c8c99660452b246d294",
"91421ca5f075407a8a1fcb0fb20872d5",
"386fa0d738e04943aa0c3a534c1f421f",
"48a35f3b76734a48afda4697c5f161cc",
"fa4655d796fe49d7a362ec2e34e9b6c4",
"3f798a9dcac34dc29f94423467db9317",
"606356ca4050435c83ddacfb4cb9186f",
"91bbf765a080431bad9be7efd7749d80",
"a7217fda218240f6913962a82254c1bd",
"fc4d710e51904fc99964bb9b30bc9ec1",
"51d639f7cfa64849bd70d97447065192",
"794203270c6341de9cf0a74f320eb1bc",
"61fb78589e214ac18b898404ac07dce5",
"26b7156cff74475088c12bf92effe221",
"5219ac7d60964cc88a69ed24e5f75725",
"5a41a54e66e047e3878138edddb8709d",
"3a62be8f6aa24f1dac56d0779637dca9",
"2de2b346b3c54a95b52306e4ce7c0e26",
"9f2c68d520424853b0f0dd972001a7dc",
"f2b050e6396f41019fb98e588924cc48",
"e2038f31686449aa948a1a3806f389e1",
"84977143ae0d4c4b8112f471d7a8ee56",
"4012537f49f94f858b95158dcd2e8233",
"9c3523c172eb482dab604bd1535c2df3",
"0c2c565023074c319c64c5ad50f14c84",
"291c1a9d28454561a3168d4fa27dc9aa",
"a4906c3458e44d168e9187b8ea42d3ad",
"f42562d4740345e9a6f2c3779fe34823",
"6ef7047e954f4bc681cb8820ca51cc2e",
"71a0f213aa80450dad9dfea35a594511",
"ef7f834d37324a0faee0bf0ba3bba516",
"e116f73758ea4705a814d2589517d157",
"d7ebd8c8a29a41d7a624d3f1682862ec",
"040aa9e3dc6e41cf82d5c5e54d33565b",
"ab119786bd364cbea4e5db946402277d",
"f23cec0e3d9c4961ab876545c1835a27",
"f0df4c26f0b2428fbb45089756804951",
"969071e591654559a65f1caad7dca6bf",
"8756b89a2d6049e8b9ba04eff93532f3",
"be45258bb03d4f27a46708a09a7c964e",
"37c72ca6518949acacf8efccb9238633",
"c8af2b5d13884bcb89490eb7bc2eb707",
"3b56d8d4d8514afc8db77ef74cfc1b35",
"9afe5ce5fadf4616a53532bd3fcda831",
"111ffec8673a4cde960f35faee65451d",
"e952cfc00ad2443cba686a756a1fdaac",
"f1bdbd5e53fe4f249dfabc8b0f0ef892",
"0c3d202481574c46944d3a87f6d33648",
"93839715a90d4557a19413fbb5d3eaa3",
"7cd1e30e5f8a46bba69c34864e74c694",
"e648745b51fb4a849f851cc7d1a5c36f",
"585ee0da0c3249a18922482b610ed37b",
"1854c0258e8e44c4a74e6d4aab7d2214",
"da4f72dad003481ba3532e02417902e4",
"49a8946b1e624154bede31f76a6d3a06",
"2d88ae6cff104f0d844576a006e8dbb5",
"c8fe85e0940840708f7b32ff63bc418a",
"9f22e13082cf431c987cf21f4bf7b555",
"6d9432d4224841d9b205252443018a86",
"1f966018eb8a4ba2bc283f1d4dbfa5f5"
]
},
"id": "s_AY1ATSIrIq",
"outputId": "e2590173-64f1-496a-81cb-0bd9747d8e7a"
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Reusing dataset glue (/home/arora/.cache/huggingface/datasets/glue/mrpc/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad)\n"
]
},
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "d0b509ab7a8f4cc8a58c28e5310b7dcb",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
" 0%| | 0/3 [00:00<?, ?it/s]"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"dataset = load_dataset(\"glue\", task)\n",
"metric = load_metric('glue', task)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "RzfPtOMoIrIu"
},
"source": [
"The `dataset` object itself is [`DatasetDict`](https://huggingface.co/docs/datasets/package_reference/main_classes.html#datasetdict), which contains one key for the training, validation and test set (with more keys for the mismatched validation and test set in the special case of `mnli`)."
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "GWiVUF0jIrIv",
"outputId": "4de184dc-3529-408d-bc4b-4be0864ddff1"
},
"outputs": [
{
"data": {
"text/plain": [
"DatasetDict({\n",
" train: Dataset({\n",
" features: ['sentence1', 'sentence2', 'label', 'idx'],\n",
" num_rows: 3668\n",
" })\n",
" validation: Dataset({\n",
" features: ['sentence1', 'sentence2', 'label', 'idx'],\n",
" num_rows: 408\n",
" })\n",
" test: Dataset({\n",
" features: ['sentence1', 'sentence2', 'label', 'idx'],\n",
" num_rows: 1725\n",
" })\n",
"})"
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"dataset"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "u3EtYfeHIrIz"
},
"source": [
"To access an actual element, you need to select a split first, then give an index:"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "X6HrpprwIrIz",
"outputId": "1cb3aa7a-931c-45de-c211-2d6f6a5da7eb"
},
"outputs": [
{
"data": {
"text/plain": [
"{'sentence1': 'Amrozi accused his brother , whom he called \" the witness \" , of deliberately distorting his evidence .',\n",
" 'sentence2': 'Referring to him as only \" the witness \" , Amrozi accused his brother of deliberately distorting his evidence .',\n",
" 'label': 1,\n",
" 'idx': 0}"
]
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"dataset[\"train\"][0]"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "WHUmphG3IrI3"
},
"source": [
"To get a sense of what the data looks like, the following function will show some examples picked randomly in the dataset.\n",
"\n",
"We are making use of [**Weights and Biases Tables** ](https://docs.wandb.ai/guides/data-vis/log-tables) to visualise the dataset. W&B Tables are really handy and make it really easy to share results with your colleagues or play around with the Dataset.\n",
"\n",
"You can log any Pandas DataFrame to W&B using `wandb.log(wandb.Table(dataframe=my_df)`. "
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {
"id": "i3j8APAoIrI3"
},
"outputs": [],
"source": [
"import datasets\n",
"import random\n",
"import pandas as pd\n",
"from IPython.display import display, HTML"
]
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {
"id": "i3j8APAoIrI3"
},
"outputs": [],
"source": [
"def get_training_data(dataset, num_examples=10):\n",
" assert num_examples <= len(dataset), \"Can't pick more elements than there are in the dataset.\"\n",
" picks = []\n",
" for _ in range(num_examples):\n",
" pick = random.randint(0, len(dataset)-1)\n",
" while pick in picks:\n",
" pick = random.randint(0, len(dataset)-1)\n",
" picks.append(pick)\n",
" \n",
" df = pd.DataFrame(dataset[picks])\n",
" for column, typ in dataset.features.items():\n",
" if isinstance(typ, datasets.ClassLabel):\n",
" df[column] = df[column].transform(lambda i: typ.names[i])\n",
" return df"
]
},
{
"cell_type": "code",
"execution_count": 14,
"metadata": {},
"outputs": [
{
"data": {
"text/html": [
"Tracking run with wandb version 0.12.14"
],
"text/plain": [
"<IPython.core.display.HTML object>"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/html": [
"Run data is saved locally in <code>/home/arora/reports/wandb/run-20220413_165449-2k3s55dz</code>"
],
"text/plain": [
"<IPython.core.display.HTML object>"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/html": [
"<iframe src=\"https://wandb.ai/wandb_fc/hf-sweeps/runs/2k3s55dz?jupyter=true\" style=\"border:none;width:100%;height:420px;\"></iframe>"
],
"text/plain": [
"<wandb.jupyter.IFrame at 0x7fb67e754550>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"%%wandb\n",
"# get training data and log to W&B\n",
"wandb.init(project=\"hf-sweeps\")\n",
"train_df = get_training_data(dataset[\"train\"], num_examples=len(dataset['train']))\n",
"wandb.log({'Training Data': wandb.Table(dataframe=train_df)})"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In the workspace above, we can see that the training data was logged as a **Weights and Biases** table. The main advantage of logging data as a **Weights and Biases** table are: \n",
"1. Tables can log any media including video, images, audio and text.\n",
"2. Tables make it very easy to share results with teammates. \n",
"3. Tables are interactive. "
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "lnjDIuQ3IrI-"
},
"source": [
"The metric is an instance of [`datasets.Metric`](https://huggingface.co/docs/datasets/package_reference/main_classes.html#datasets.Metric):"
]
},
{
"cell_type": "code",
"execution_count": 15,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "5o4rUteaIrI_",
"outputId": "a6c3ee6e-ace8-41b5-dd85-a53b65e7237e"
},
"outputs": [
{
"data": {
"text/plain": [
"Metric(name: \"glue\", features: {'predictions': Value(dtype='int64', id=None), 'references': Value(dtype='int64', id=None)}, usage: \"\"\"\n",
"Compute GLUE evaluation metric associated to each GLUE dataset.\n",
"Args:\n",
" predictions: list of predictions to score.\n",
" Each translation should be tokenized into a list of tokens.\n",
" references: list of lists of references for each translation.\n",
" Each reference should be tokenized into a list of tokens.\n",
"Returns: depending on the GLUE subset, one or several of:\n",
" \"accuracy\": Accuracy\n",
" \"f1\": F1 score\n",
" \"pearson\": Pearson Correlation\n",
" \"spearmanr\": Spearman Correlation\n",
" \"matthews_correlation\": Matthew Correlation\n",
"Examples:\n",
"\n",
" >>> glue_metric = datasets.load_metric('glue', 'sst2') # 'sst2' or any of [\"mnli\", \"mnli_mismatched\", \"mnli_matched\", \"qnli\", \"rte\", \"wnli\", \"hans\"]\n",
" >>> references = [0, 1]\n",
" >>> predictions = [0, 1]\n",
" >>> results = glue_metric.compute(predictions=predictions, references=references)\n",
" >>> print(results)\n",
" {'accuracy': 1.0}\n",
"\n",
" >>> glue_metric = datasets.load_metric('glue', 'mrpc') # 'mrpc' or 'qqp'\n",
" >>> references = [0, 1]\n",
" >>> predictions = [0, 1]\n",
" >>> results = glue_metric.compute(predictions=predictions, references=references)\n",
" >>> print(results)\n",
" {'accuracy': 1.0, 'f1': 1.0}\n",
"\n",
" >>> glue_metric = datasets.load_metric('glue', 'stsb')\n",
" >>> references = [0., 1., 2., 3., 4., 5.]\n",
" >>> predictions = [0., 1., 2., 3., 4., 5.]\n",
" >>> results = glue_metric.compute(predictions=predictions, references=references)\n",
" >>> print({\"pearson\": round(results[\"pearson\"], 2), \"spearmanr\": round(results[\"spearmanr\"], 2)})\n",
" {'pearson': 1.0, 'spearmanr': 1.0}\n",
"\n",
" >>> glue_metric = datasets.load_metric('glue', 'cola')\n",
" >>> references = [0, 1]\n",
" >>> predictions = [0, 1]\n",
" >>> results = glue_metric.compute(predictions=predictions, references=references)\n",
" >>> print(results)\n",
" {'matthews_correlation': 1.0}\n",
"\"\"\", stored examples: 0)"
]
},
"execution_count": 15,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"metric"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "jAWdqcUBIrJC"
},
"source": [
"You can call its `compute` method with your predictions and labels directly and it will return a dictionary with the metric(s) value:"
]
},
{
"cell_type": "code",
"execution_count": 16,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "6XN1Rq0aIrJC",
"outputId": "29b7ff54-ade7-4267-867f-ed2d37c9a72f"
},
"outputs": [
{
"data": {
"text/plain": [
"{'accuracy': 0.515625, 'f1': 0.5373134328358209}"
]
},
"execution_count": 16,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"import numpy as np\n",
"\n",
"fake_preds = np.random.randint(0, 2, size=(64,))\n",
"fake_labels = np.random.randint(0, 2, size=(64,))\n",
"metric.compute(predictions=fake_preds, references=fake_labels)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can also simply log the metric to **Weights and Biases** after every epoch just by one line of code: `wandb.log(metric.compute(predictions=fake_preds, references=fake_labels))`"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "YOCrQwPoIrJG"
},
"source": [
"Note that `load_metric` has loaded the proper metric associated to your task, which is:\n",
"\n",
"- for CoLA: [Matthews Correlation Coefficient](https://en.wikipedia.org/wiki/Matthews_correlation_coefficient)\n",
"- for MNLI (matched or mismatched): Accuracy\n",
"- for MRPC: Accuracy and [F1 score](https://en.wikipedia.org/wiki/F1_score)\n",
"- for QNLI: Accuracy\n",
"- for QQP: Accuracy and [F1 score](https://en.wikipedia.org/wiki/F1_score)\n",
"- for RTE: Accuracy\n",
"- for SST-2: Accuracy\n",
"- for STS-B: [Pearson Correlation Coefficient](https://en.wikipedia.org/wiki/Pearson_correlation_coefficient) and [Spearman's_Rank_Correlation_Coefficient](https://en.wikipedia.org/wiki/Spearman%27s_rank_correlation_coefficient)\n",
"- for WNLI: Accuracy\n",
"\n",
"so the metric object only computes the one(s) needed for your task."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "n9qywopnIrJH"
},
"source": [
"## Preprocessing the data"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "YVx71GdAIrJH"
},
"source": [
"Before we can feed those texts to our model, we need to preprocess them. This is done by a 🤗 Transformers `Tokenizer` which will (as the name indicates) tokenize the inputs (including converting the tokens to their corresponding IDs in the pretrained vocabulary) and put it in a format the model expects, as well as generate the other inputs that model requires.\n",
"\n",
"To do all of this, we instantiate our tokenizer with the `AutoTokenizer.from_pretrained` method, which will ensure:\n",
"\n",
"- we get a tokenizer that corresponds to the model architecture we want to use,\n",
"- we download the vocabulary used when pretraining this specific checkpoint.\n",
"\n",
"That vocabulary will be cached, so it's not downloaded again the next time we run the cell."
]
},
{
"cell_type": "code",
"execution_count": 17,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 145
},
"id": "eXNLu_-nIrJI",
"outputId": "c3dec32d-58b2-4e80-b9ba-d4f0dea4a8f7"
},
"outputs": [],
"source": [
"from transformers import AutoTokenizer\n",
" \n",
"tokenizer = AutoTokenizer.from_pretrained(model_checkpoint, use_fast=True)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "Vl6IidfdIrJK"
},
"source": [
"We pass along `use_fast=True` to the call above to use one of the fast tokenizers (backed by Rust) from the 🤗 Tokenizers library. Those fast tokenizers are available for almost all models, but if you got an error with the previous call, remove that argument."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "rowT4iCLIrJK"
},
"source": [
"You can directly call this tokenizer on one sentence or a pair of sentences:"
]
},
{
"cell_type": "code",
"execution_count": 18,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "a5hBlsrHIrJL",
"outputId": "d0b6d671-d2b3-4caa-e112-16d5ab345b2d"
},
"outputs": [
{
"data": {
"text/plain": [
"{'input_ids': [101, 7592, 1010, 2023, 2028, 6251, 999, 102, 1998, 2023, 6251, 3632, 2007, 2009, 1012, 102], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]}"
]
},
"execution_count": 18,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"tokenizer(\"Hello, this one sentence!\", \"And this sentence goes with it.\")"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "qo_0B1M2IrJM"
},
"source": [
"Depending on the model you selected, you will see different keys in the dictionary returned by the cell above. They don't matter much for what we're doing here (just know they are required by the model we will instantiate later), you can learn more about them in [this tutorial](https://huggingface.co/transformers/preprocessing.html) if you're interested.\n",
"\n",
"To preprocess our dataset, we will thus need the names of the columns containing the sentence(s). The following dictionary keeps track of the correspondence task to column names:"
]
},
{
"cell_type": "code",
"execution_count": 19,
"metadata": {
"id": "fyGdtK9oIrJM"
},
"outputs": [],
"source": [
"task_to_keys = {\n",
" \"cola\": (\"sentence\", None),\n",
" \"mnli\": (\"premise\", \"hypothesis\"),\n",
" \"mnli-mm\": (\"premise\", \"hypothesis\"),\n",
" \"mrpc\": (\"sentence1\", \"sentence2\"),\n",
" \"qnli\": (\"question\", \"sentence\"),\n",
" \"qqp\": (\"question1\", \"question2\"),\n",
" \"rte\": (\"sentence1\", \"sentence2\"),\n",
" \"sst2\": (\"sentence\", None),\n",
" \"stsb\": (\"sentence1\", \"sentence2\"),\n",
" \"wnli\": (\"sentence1\", \"sentence2\"),\n",
"}"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "xbqtC4MrIrJO"
},
"source": [
"We can double check it does work on our current dataset:"
]
},
{
"cell_type": "code",
"execution_count": 20,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "19GG646uIrJO",
"outputId": "69434d11-0c41-43b3-e33c-af3ecb21a2ec"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Sentence 1: Amrozi accused his brother , whom he called \" the witness \" , of deliberately distorting his evidence .\n",
"Sentence 2: Referring to him as only \" the witness \" , Amrozi accused his brother of deliberately distorting his evidence .\n"
]
}
],
"source": [
"sentence1_key, sentence2_key = task_to_keys[task]\n",
"if sentence2_key is None:\n",
" print(f\"Sentence: {dataset['train'][0][sentence1_key]}\")\n",
"else:\n",
" print(f\"Sentence 1: {dataset['train'][0][sentence1_key]}\")\n",
" print(f\"Sentence 2: {dataset['train'][0][sentence2_key]}\")"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "2C0hcmp9IrJQ"
},
"source": [
"We can then write the function that will preprocess our samples. We just feed them to the `tokenizer` with the argument `truncation=True`. This will ensure that an input longer that what the model selected can handle will be truncated to the maximum length accepted by the model."
]
},
{
"cell_type": "code",
"execution_count": 21,
"metadata": {
"id": "vc0BSBLIIrJQ"
},
"outputs": [],
"source": [
"def preprocess_function(examples):\n",
" if sentence2_key is None:\n",
" return tokenizer(examples[sentence1_key], truncation=True)\n",
" return tokenizer(examples[sentence1_key], examples[sentence2_key], truncation=True)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "0lm8ozrJIrJR"
},
"source": [
"This function works with one or several examples. In the case of several examples, the tokenizer will return a list of lists for each key:"
]
},
{
"cell_type": "code",
"execution_count": 22,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "-b70jh26IrJS",
"outputId": "fdde4d3c-e19d-4a1f-8dff-928dba7ec86c"
},
"outputs": [
{
"data": {
"text/plain": [
"{'input_ids': [[101, 2572, 3217, 5831, 5496, 2010, 2567, 1010, 3183, 2002, 2170, 1000, 1996, 7409, 1000, 1010, 1997, 9969, 4487, 23809, 3436, 2010, 3350, 1012, 102, 7727, 2000, 2032, 2004, 2069, 1000, 1996, 7409, 1000, 1010, 2572, 3217, 5831, 5496, 2010, 2567, 1997, 9969, 4487, 23809, 3436, 2010, 3350, 1012, 102], [101, 9805, 3540, 11514, 2050, 3079, 11282, 2243, 1005, 1055, 2077, 4855, 1996, 4677, 2000, 3647, 4576, 1999, 2687, 2005, 1002, 1016, 1012, 1019, 4551, 1012, 102, 9805, 3540, 11514, 2050, 4149, 11282, 2243, 1005, 1055, 1999, 2786, 2005, 1002, 6353, 2509, 2454, 1998, 2853, 2009, 2000, 3647, 4576, 2005, 1002, 1015, 1012, 1022, 4551, 1999, 2687, 1012, 102], [101, 2027, 2018, 2405, 2019, 15147, 2006, 1996, 4274, 2006, 2238, 2184, 1010, 5378, 1996, 6636, 2005, 5096, 1010, 2002, 2794, 1012, 102, 2006, 2238, 2184, 1010, 1996, 2911, 1005, 1055, 5608, 2018, 2405, 2019, 15147, 2006, 1996, 4274, 1010, 5378, 1996, 14792, 2005, 5096, 1012, 102], [101, 2105, 6021, 19481, 13938, 2102, 1010, 21628, 6661, 2020, 2039, 2539, 16653, 1010, 2030, 1018, 1012, 1018, 1003, 1010, 2012, 1037, 1002, 1018, 1012, 5179, 1010, 2383, 3041, 2275, 1037, 2501, 2152, 1997, 1037, 1002, 1018, 1012, 5401, 1012, 102, 21628, 6661, 5598, 2322, 16653, 1010, 2030, 1018, 1012, 1020, 1003, 1010, 2000, 2275, 1037, 2501, 5494, 2152, 2012, 1037, 1002, 1018, 1012, 5401, 1012, 102], [101, 1996, 4518, 3123, 1002, 1016, 1012, 2340, 1010, 2030, 2055, 2340, 3867, 1010, 2000, 2485, 5958, 2012, 1002, 2538, 1012, 4868, 2006, 1996, 2047, 2259, 4518, 3863, 1012, 102, 18720, 1004, 1041, 13058, 1012, 6661, 5598, 1002, 1015, 1012, 6191, 2030, 1022, 3867, 2000, 1002, 2538, 1012, 6021, 2006, 1996, 2047, 2259, 4518, 3863, 2006, 5958, 1012, 102]], 'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]]}"
]
},
"execution_count": 22,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"preprocess_function(dataset['train'][:5])"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "zS-6iXTkIrJT"
},
"source": [
"To apply this function on all the sentences (or pairs of sentences) in our dataset, we just use the `map` method of our `dataset` object we created earlier. This will apply the function on all the elements of all the splits in `dataset`, so our training, validation and testing data will be preprocessed in one single command."
]
},
{
"cell_type": "code",
"execution_count": 23,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 113
},
"id": "DDtsaJeVIrJT",
"outputId": "b3737fd4-3810-427c-ce7f-bea56d0edc1d"
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Loading cached processed dataset at /home/arora/.cache/huggingface/datasets/glue/mrpc/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad/cache-a62a0d47220211df.arrow\n",
"Loading cached processed dataset at /home/arora/.cache/huggingface/datasets/glue/mrpc/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad/cache-d39394b85a376705.arrow\n",
"Loading cached processed dataset at /home/arora/.cache/huggingface/datasets/glue/mrpc/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad/cache-d3877bed19cc4561.arrow\n"
]
}
],
"source": [
"encoded_dataset = dataset.map(preprocess_function, batched=True)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "voWiw8C7IrJV"
},
"source": [
"Even better, the results are automatically cached by the 🤗 Datasets library to avoid spending time on this step the next time you run your notebook. The 🤗 Datasets library is normally smart enough to detect when the function you pass to map has changed (and thus requires to not use the cache data). For instance, it will properly detect if you change the task in the first cell and rerun the notebook. 🤗 Datasets warns you when it uses cached files, you can pass `load_from_cache_file=False` in the call to `map` to not use the cached files and force the preprocessing to be applied again.\n",
"\n",
"Note that we passed `batched=True` to encode the texts by batches together. This is to leverage the full benefit of the fast tokenizer we loaded earlier, which will use multi-threading to treat the texts in a batch concurrently."
]
},
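{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a minimal sketch, forcing the preprocessing to be re-applied instead of reusing the cached files would look like this:\n",
"\n",
"```python\n",
"# ignore the cached Arrow files and run preprocess_function again on every split\n",
"encoded_dataset = dataset.map(preprocess_function, batched=True, load_from_cache_file=False)\n",
"```"
]
},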
{
"cell_type": "markdown",
"metadata": {
"id": "545PP3o8IrJV"
},
"source": [
"## Fine-tuning the model"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "FBiW8UpKIrJW"
},
"source": [
"Now that our data is ready, we can download the pretrained model and fine-tune it. Since all our tasks are about sentence classification, we use the `AutoModelForSequenceClassification` class. Like with the tokenizer, the `from_pretrained` method will download and cache the model for us. The only thing we have to specify is the number of labels for our problem (which is always 2, except for STS-B which is a regression problem and MNLI where we have 3 labels):"
]
},
{
"cell_type": "code",
"execution_count": 24,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 156
},
"id": "TlqNaB8jIrJW",
"outputId": "e8f32561-3113-4220-9028-d77ffc79a481"
},
"outputs": [],
"source": [
"from transformers import AutoModelForSequenceClassification, TrainingArguments, Trainer\n",
"\n",
"# num_labels = 3 if task.startswith(\"mnli\") else 1 if task==\"stsb\" else 2\n",
"# model = AutoModelForSequenceClassification.from_pretrained(model_checkpoint, num_labels=num_labels)\n",
"def model_init():\n",
" return AutoModelForSequenceClassification.from_pretrained(\n",
" 'distilbert-base-uncased', return_dict=True)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "CczA5lJlIrJX"
},
"source": [
"The warning is telling us we are throwing away some weights (the `vocab_transform` and `vocab_layer_norm` layers) and randomly initializing some other (the `pre_classifier` and `classifier` layers). This is absolutely normal in this case, because we are removing the head used to pretrain the model on a masked language modeling objective and replacing it with a new head for which we don't have pretrained weights, so the library warns us we should fine-tune this model before using it for inference, which is exactly what we are going to do."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "_N8urzhyIrJY"
},
"source": [
"To instantiate a `Trainer`, we will need to define two more things. The most important is the [`TrainingArguments`](https://huggingface.co/transformers/main_classes/trainer.html#transformers.TrainingArguments), which is a class that contains all the attributes to customize the training. It requires one folder name, which will be used to save the checkpoints of the model, and all other arguments are optional:"
]
},
{
"cell_type": "code",
"execution_count": 25,
"metadata": {},
"outputs": [],
"source": [
"def compute_metrics(eval_pred):\n",
" predictions, labels = eval_pred\n",
" predictions = predictions.argmax(axis=-1)\n",
" return metric.compute(predictions=predictions, references=labels)"
]
},
{
"cell_type": "code",
"execution_count": 26,
"metadata": {
"id": "Bliy8zgjIrJY"
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"loading configuration file https://huggingface.co/distilbert-base-uncased/resolve/main/config.json from cache at /home/arora/.cache/huggingface/transformers/23454919702d26495337f3da04d1655c7ee010d5ec9d77bdb9e399e00302c0a1.91b885ab15d631bf9cee9dc9d25ece0afd932f2f5130eba28f2055b2220c0333\n",
"Model config DistilBertConfig {\n",
" \"_name_or_path\": \"distilbert-base-uncased\",\n",
" \"activation\": \"gelu\",\n",
" \"architectures\": [\n",
" \"DistilBertForMaskedLM\"\n",
" ],\n",
" \"attention_dropout\": 0.1,\n",
" \"dim\": 768,\n",
" \"dropout\": 0.1,\n",
" \"hidden_dim\": 3072,\n",
" \"initializer_range\": 0.02,\n",
" \"max_position_embeddings\": 512,\n",
" \"model_type\": \"distilbert\",\n",
" \"n_heads\": 12,\n",
" \"n_layers\": 6,\n",
" \"pad_token_id\": 0,\n",
" \"qa_dropout\": 0.1,\n",
" \"seq_classif_dropout\": 0.2,\n",
" \"sinusoidal_pos_embds\": false,\n",
" \"tie_weights_\": true,\n",
" \"transformers_version\": \"4.18.0\",\n",
" \"vocab_size\": 30522\n",
"}\n",
"\n",
"loading weights file https://huggingface.co/distilbert-base-uncased/resolve/main/pytorch_model.bin from cache at /home/arora/.cache/huggingface/transformers/9c169103d7e5a73936dd2b627e42851bec0831212b677c637033ee4bce9ab5ee.126183e36667471617ae2f0835fab707baa54b731f991507ebbb55ea85adb12a\n",
"Some weights of the model checkpoint at distilbert-base-uncased were not used when initializing DistilBertForSequenceClassification: ['vocab_projector.bias', 'vocab_transform.bias', 'vocab_layer_norm.weight', 'vocab_projector.weight', 'vocab_layer_norm.bias', 'vocab_transform.weight']\n",
"- This IS expected if you are initializing DistilBertForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).\n",
"- This IS NOT expected if you are initializing DistilBertForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).\n",
"Some weights of DistilBertForSequenceClassification were not initialized from the model checkpoint at distilbert-base-uncased and are newly initialized: ['pre_classifier.weight', 'classifier.bias', 'classifier.weight', 'pre_classifier.bias']\n",
"You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.\n"
]
}
],
"source": [
"training_args = TrainingArguments(\n",
" \"test\", evaluation_strategy=\"steps\", eval_steps=500, disable_tqdm=True, report_to='none')\n",
"trainer = Trainer(\n",
" args=training_args,\n",
" tokenizer=tokenizer,\n",
" train_dataset=encoded_dataset[\"train\"],\n",
" eval_dataset=encoded_dataset[\"validation\"],\n",
" model_init=model_init,\n",
" #compute_metrics=compute_metrics,\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "km3pGVdTIrJc"
},
"source": [
"Here we set the evaluation to be done at the end of each epoch, tweak the learning rate, use the `batch_size` defined at the top of the notebook and customize the number of epochs for training, as well as the weight decay. Since the best model might not be the one at the end of training, we ask the `Trainer` to load the best model it saved (according to `metric_name`) at the end of training.\n",
"\n",
"The last argument to setup everything so we can push the model to the [Hub](https://huggingface.co/models) regularly during training. Remove it if you didn't follow the installation steps at the top of the notebook. If you want to save your model locally in a name that is different than the name of the repository it will be pushed, or if you want to push your model under an organization and not your name space, use the `hub_model_id` argument to set the repo name (it needs to be the full name, including your namespace: for instance `\"sgugger/bert-finetuned-mrpc\"` or `\"huggingface/bert-finetuned-mrpc\"`)."
]
},
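{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a rough sketch of the fuller configuration described above (the values and the `full_args` name are illustrative assumptions, not the minimal `TrainingArguments` actually used in this notebook), it could look something like:\n",
"\n",
"```python\n",
"# illustrative sketch only; the sweep below uses the minimal `training_args` defined earlier\n",
"metric_name = \"pearson\" if task == \"stsb\" else \"matthews_correlation\" if task == \"cola\" else \"accuracy\"\n",
"\n",
"full_args = TrainingArguments(\n",
"    f\"{model_checkpoint}-finetuned-{task}\",  # folder for checkpoints (and repo name on the Hub)\n",
"    evaluation_strategy=\"epoch\",\n",
"    save_strategy=\"epoch\",\n",
"    learning_rate=2e-5,\n",
"    per_device_train_batch_size=batch_size,\n",
"    per_device_eval_batch_size=batch_size,\n",
"    num_train_epochs=5,\n",
"    weight_decay=0.01,\n",
"    load_best_model_at_end=True,\n",
"    metric_for_best_model=metric_name,\n",
"    push_to_hub=True,  # requires the Huggingface login and git-lfs steps at the top of the notebook\n",
")\n",
"```"
]
},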
{
"cell_type": "markdown",
"metadata": {
"id": "7sZOdRlRIrJd"
},
"source": [
"The last thing to define for our `Trainer` is how to compute the metrics from the predictions. We need to define a function for this, which will just use the `metric` we loaded earlier, the only preprocessing we have to do is to take the argmax of our predicted logits (our just squeeze the last axis in the case of STS-B):"
]
},
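{
"cell_type": "markdown",
"metadata": {},
"source": [
"The `compute_metrics` defined above only covers the classification tasks. A variant that also handles the STS-B regression case could look like the sketch below (the name `compute_metrics_generic` is just illustrative):\n",
"\n",
"```python\n",
"def compute_metrics_generic(eval_pred):\n",
"    predictions, labels = eval_pred\n",
"    if task != \"stsb\":\n",
"        predictions = predictions.argmax(axis=-1)  # classification: pick the class with the highest logit\n",
"    else:\n",
"        predictions = predictions[:, 0]  # regression: squeeze the single output logit\n",
"    return metric.compute(predictions=predictions, references=labels)\n",
"```"
]
},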
{
"cell_type": "markdown",
"metadata": {
"id": "7k8ge1L1IrJk"
},
"source": [
"## Hyperparameter search"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "RNfajuw_IrJl"
},
"source": [
"The `Trainer` supports hyperparameter search using [optuna](https://optuna.org/), [Ray Tune](https://docs.ray.io/en/latest/tune/) or **[Weights and Biases](https://docs.wandb.ai/)**. \n",
"\n",
"We will be using **Weights and Biases** for running hyperparameter search."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"It's really simple to use Weights and Biases for hyperparameter search, just pass `backend='wandb'` and we will do the rest for you."
]
},
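{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a minimal sketch, the call looks roughly like this (the search space and `n_trials` below are illustrative assumptions, not necessarily the exact configuration used for the sweep that follows):\n",
"\n",
"```python\n",
"# optional: a custom W&B sweep configuration; if omitted, a default search space is used\n",
"def wandb_hp_space(trial):\n",
"    return {\n",
"        \"method\": \"random\",\n",
"        \"metric\": {\"name\": \"objective\", \"goal\": \"minimize\"},\n",
"        \"parameters\": {\n",
"            \"learning_rate\": {\"distribution\": \"uniform\", \"min\": 1e-6, \"max\": 1e-4},\n",
"            \"per_device_train_batch_size\": {\"values\": [8, 16, 32, 64]},\n",
"        },\n",
"    }\n",
"\n",
"best_run = trainer.hyperparameter_search(\n",
"    direction=\"minimize\",  # minimise eval/loss\n",
"    backend=\"wandb\",  # run the search as a W&B Sweep\n",
"    hp_space=wandb_hp_space,\n",
"    n_trials=4,\n",
")\n",
"```"
]
},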
{
"cell_type": "code",
"execution_count": 27,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"wandb sweep id - d4atgzxt\n",
"\u001b[34m\u001b[1mwandb\u001b[0m: \u001b[33mWARNING\u001b[0m Calling wandb.login() after wandb.init() has no effect.\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Create sweep with ID: d4atgzxt\n",
"Sweep URL: https://wandb.ai/wandb_fc/new_sweep_hf/sweeps/d4atgzxt\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"\u001b[34m\u001b[1mwandb\u001b[0m: Agent Starting Run: uehqntlr with config:\n",
"\u001b[34m\u001b[1mwandb\u001b[0m: \tlearning_rate: 1.7193769015068072e-05\n",
"\u001b[34m\u001b[1mwandb\u001b[0m: \tnum_train_epochs: 2\n",
"\u001b[34m\u001b[1mwandb\u001b[0m: \tper_device_train_batch_size: 32\n",
"\u001b[34m\u001b[1mwandb\u001b[0m: \tseed: 1\n",
"Trying to set _wandb in the hyperparameter search but there is no corresponding field in `TrainingArguments`.\n",
"Trying to set assignments in the hyperparameter search but there is no corresponding field in `TrainingArguments`.\n",
"Trying to set metric in the hyperparameter search but there is no corresponding field in `TrainingArguments`.\n",
"W&B Sweep parameters: {'_wandb': {}, 'assignments': {}, 'metric': 'eval/loss'}\n",
"loading configuration file https://huggingface.co/distilbert-base-uncased/resolve/main/config.json from cache at /home/arora/.cache/huggingface/transformers/23454919702d26495337f3da04d1655c7ee010d5ec9d77bdb9e399e00302c0a1.91b885ab15d631bf9cee9dc9d25ece0afd932f2f5130eba28f2055b2220c0333\n",
"Model config DistilBertConfig {\n",
" \"_name_or_path\": \"distilbert-base-uncased\",\n",
" \"activation\": \"gelu\",\n",
" \"architectures\": [\n",
" \"DistilBertForMaskedLM\"\n",
" ],\n",
" \"attention_dropout\": 0.1,\n",
" \"dim\": 768,\n",
" \"dropout\": 0.1,\n",
" \"hidden_dim\": 3072,\n",
" \"initializer_range\": 0.02,\n",
" \"max_position_embeddings\": 512,\n",
" \"model_type\": \"distilbert\",\n",
" \"n_heads\": 12,\n",
" \"n_layers\": 6,\n",
" \"pad_token_id\": 0,\n",
" \"qa_dropout\": 0.1,\n",
" \"seq_classif_dropout\": 0.2,\n",
" \"sinusoidal_pos_embds\": false,\n",
" \"tie_weights_\": true,\n",
" \"transformers_version\": \"4.18.0\",\n",
" \"vocab_size\": 30522\n",
"}\n",
"\n",
"loading weights file https://huggingface.co/distilbert-base-uncased/resolve/main/pytorch_model.bin from cache at /home/arora/.cache/huggingface/transformers/9c169103d7e5a73936dd2b627e42851bec0831212b677c637033ee4bce9ab5ee.126183e36667471617ae2f0835fab707baa54b731f991507ebbb55ea85adb12a\n",
"Some weights of the model checkpoint at distilbert-base-uncased were not used when initializing DistilBertForSequenceClassification: ['vocab_projector.bias', 'vocab_transform.bias', 'vocab_layer_norm.weight', 'vocab_projector.weight', 'vocab_layer_norm.bias', 'vocab_transform.weight']\n",
"- This IS expected if you are initializing DistilBertForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).\n",
"- This IS NOT expected if you are initializing DistilBertForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).\n",
"Some weights of DistilBertForSequenceClassification were not initialized from the model checkpoint at distilbert-base-uncased and are newly initialized: ['pre_classifier.weight', 'classifier.bias', 'classifier.weight', 'pre_classifier.bias']\n",
"You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.\n",
"The following columns in the training set don't have a corresponding argument in `DistilBertForSequenceClassification.forward` and have been ignored: sentence2, sentence1, idx. If sentence2, sentence1, idx are not expected by `DistilBertForSequenceClassification.forward`, you can safely ignore this message.\n",
"/opt/conda/envs/feedback-prize/lib/python3.9/site-packages/transformers/optimization.py:306: FutureWarning: This implementation of AdamW is deprecated and will be removed in a future version. Use the PyTorch implementation torch.optim.AdamW instead, or set `no_deprecation_warning=True` to disable this warning\n",
" warnings.warn(\n",
"***** Running training *****\n",
" Num examples = 3668\n",
" Num Epochs = 3\n",
" Instantaneous batch size per device = 8\n",
" Total train batch size (w. parallel, distributed & accumulation) = 8\n",
" Gradient Accumulation steps = 1\n",
" Total optimization steps = 1377\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n"
]
},
{
"data": {
"text/html": [
"Waiting for W&B process to finish... <strong style=\"color:green\">(success).</strong>"
],
"text/plain": [
"<IPython.core.display.HTML object>"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"VBox(children=(Label(value='1.884 MB of 1.884 MB uploaded (0.000 MB deduped)\\r'), FloatProgress(value=1.0, max…"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/html": [
"Synced <strong style=\"color:#cdcd00\">hardy-cloud-29</strong>: <a href=\"https://wandb.ai/wandb_fc/hf-sweeps/runs/2k3s55dz\" target=\"_blank\">https://wandb.ai/wandb_fc/hf-sweeps/runs/2k3s55dz</a><br/>Synced 6 W&B file(s), 1 media file(s), 1 artifact file(s) and 0 other file(s)"
],
"text/plain": [
"<IPython.core.display.HTML object>"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/html": [
"Find logs at: <code>./wandb/run-20220413_165449-2k3s55dz/logs</code>"
],
"text/plain": [
"<IPython.core.display.HTML object>"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"Exception in thread Thread-18:\n",
"Traceback (most recent call last):\n",
" File \"/opt/conda/envs/feedback-prize/lib/python3.9/site-packages/wandb/agents/pyagent.py\", line 302, in _run_job\n",
" self._function()\n",
" File \"/opt/conda/envs/feedback-prize/lib/python3.9/site-packages/transformers/integrations.py\", line 383, in _objective\n",
" trainer.train(resume_from_checkpoint=None, trial=vars(config)[\"_items\"])\n",
" File \"/opt/conda/envs/feedback-prize/lib/python3.9/site-packages/transformers/trainer.py\", line 1354, in train\n",
" self.control = self.callback_handler.on_train_begin(args, self.state, self.control)\n",
" File \"/opt/conda/envs/feedback-prize/lib/python3.9/site-packages/transformers/trainer_callback.py\", line 347, in on_train_begin\n",
" return self.call_event(\"on_train_begin\", args, state, control)\n",
" File \"/opt/conda/envs/feedback-prize/lib/python3.9/site-packages/transformers/trainer_callback.py\", line 388, in call_event\n",
" result = getattr(callback, event)(\n",
" File \"/opt/conda/envs/feedback-prize/lib/python3.9/site-packages/transformers/integrations.py\", line 617, in on_train_begin\n",
" self._wandb.finish()\n",
" File \"/opt/conda/envs/feedback-prize/lib/python3.9/site-packages/wandb/sdk/wandb_run.py\", line 3302, in finish\n",
" wandb.run.finish(exit_code=exit_code, quiet=quiet)\n",
" File \"/opt/conda/envs/feedback-prize/lib/python3.9/site-packages/wandb/sdk/wandb_run.py\", line 256, in wrapper\n",
" return func(self, *args, **kwargs)\n",
" File \"/opt/conda/envs/feedback-prize/lib/python3.9/site-packages/wandb/sdk/wandb_run.py\", line 222, in wrapper\n",
" return func(self, *args, **kwargs)\n",
" File \"/opt/conda/envs/feedback-prize/lib/python3.9/site-packages/wandb/sdk/wandb_run.py\", line 1677, in finish\n",
" return self._finish(exit_code, quiet)\n",
" File \"/opt/conda/envs/feedback-prize/lib/python3.9/site-packages/wandb/sdk/wandb_run.py\", line 1691, in _finish\n",
" if self._wl and len(self._wl._global_run_stack) > 0:\n",
" File \"/opt/conda/envs/feedback-prize/lib/python3.9/site-packages/wandb/sdk/wandb_setup.py\", line 283, in __getattr__\n",
" return getattr(self._instance, name)\n",
"AttributeError: 'NoneType' object has no attribute '_global_run_stack'\n",
"\n",
"During handling of the above exception, another exception occurred:\n",
"\n",
"Traceback (most recent call last):\n",
" File \"/opt/conda/envs/feedback-prize/lib/python3.9/threading.py\", line 973, in _bootstrap_inner\n",
" self.run()\n",
" File \"/opt/conda/envs/feedback-prize/lib/python3.9/threading.py\", line 910, in run\n",
" self._target(*self._args, **self._kwargs)\n",
" File \"/opt/conda/envs/feedback-prize/lib/python3.9/site-packages/wandb/agents/pyagent.py\", line 307, in _run_job\n",
" wandb.finish(exit_code=1)\n",
" File \"/opt/conda/envs/feedback-prize/lib/python3.9/site-packages/wandb/sdk/wandb_run.py\", line 3302, in finish\n",
" wandb.run.finish(exit_code=exit_code, quiet=quiet)\n",
" File \"/opt/conda/envs/feedback-prize/lib/python3.9/site-packages/wandb/sdk/wandb_run.py\", line 256, in wrapper\n",
" return func(self, *args, **kwargs)\n",
" File \"/opt/conda/envs/feedback-prize/lib/python3.9/site-packages/wandb/sdk/wandb_run.py\", line 222, in wrapper\n",
" return func(self, *args, **kwargs)\n",
" File \"/opt/conda/envs/feedback-prize/lib/python3.9/site-packages/wandb/sdk/wandb_run.py\", line 1677, in finish\n",
" return self._finish(exit_code, quiet)\n",
" File \"/opt/conda/envs/feedback-prize/lib/python3.9/site-packages/wandb/sdk/wandb_run.py\", line 1688, in _finish\n",
" hook.call()\n",
" File \"/opt/conda/envs/feedback-prize/lib/python3.9/site-packages/wandb/sdk/wandb_init.py\", line 378, in _jupyter_teardown\n",
" ipython.display_pub.publish = ipython.display_pub._orig_publish\n",
"AttributeError: 'ZMQDisplayPublisher' object has no attribute '_orig_publish'\n",
"\u001b[34m\u001b[1mwandb\u001b[0m: Sweep Agent: Waiting for job.\n",
"\u001b[34m\u001b[1mwandb\u001b[0m: Job received.\n",
"\u001b[34m\u001b[1mwandb\u001b[0m: Agent Starting Run: 76lcs239 with config:\n",
"\u001b[34m\u001b[1mwandb\u001b[0m: \tlearning_rate: 4.25211264066006e-05\n",
"\u001b[34m\u001b[1mwandb\u001b[0m: \tnum_train_epochs: 4\n",
"\u001b[34m\u001b[1mwandb\u001b[0m: \tper_device_train_batch_size: 32\n",
"\u001b[34m\u001b[1mwandb\u001b[0m: \tseed: 27\n",
"Exception in thread Thread-19:\n",
"Traceback (most recent call last):\n",
" File \"/opt/conda/envs/feedback-prize/lib/python3.9/site-packages/wandb/agents/pyagent.py\", line 302, in _run_job\n",
" self._function()\n",
" File \"/opt/conda/envs/feedback-prize/lib/python3.9/site-packages/transformers/integrations.py\", line 378, in _objective\n",
" run.config.update({\"assignments\": {}, \"metric\": metric})\n",
" File \"/opt/conda/envs/feedback-prize/lib/python3.9/site-packages/wandb/sdk/wandb_config.py\", line 183, in update\n",
" self._callback(data=sanitized)\n",
" File \"/opt/conda/envs/feedback-prize/lib/python3.9/site-packages/wandb/sdk/wandb_run.py\", line 1132, in _config_callback\n",
" self._backend.interface.publish_config(key=key, val=val, data=data)\n",
" File \"/opt/conda/envs/feedback-prize/lib/python3.9/site-packages/wandb/sdk/interface/interface.py\", line 185, in publish_config\n",
" self._publish_config(cfg)\n",
" File \"/opt/conda/envs/feedback-prize/lib/python3.9/site-packages/wandb/sdk/interface/interface_shared.py\", line 290, in _publish_config\n",
" self._publish(rec)\n",
" File \"/opt/conda/envs/feedback-prize/lib/python3.9/site-packages/wandb/sdk/interface/interface_queue.py\", line 49, in _publish\n",
" raise Exception(\"The wandb backend process has shutdown\")\n",
"Exception: The wandb backend process has shutdown\n",
"\n",
"During handling of the above exception, another exception occurred:\n",
"\n",
"Traceback (most recent call last):\n",
" File \"/opt/conda/envs/feedback-prize/lib/python3.9/threading.py\", line 973, in _bootstrap_inner\n",
" self.run()\n",
" File \"/opt/conda/envs/feedback-prize/lib/python3.9/threading.py\", line 910, in run\n",
" self._target(*self._args, **self._kwargs)\n",
" File \"/opt/conda/envs/feedback-prize/lib/python3.9/site-packages/wandb/agents/pyagent.py\", line 307, in _run_job\n",
" wandb.finish(exit_code=1)\n",
" File \"/opt/conda/envs/feedback-prize/lib/python3.9/site-packages/wandb/sdk/wandb_run.py\", line 3302, in finish\n",
" wandb.run.finish(exit_code=exit_code, quiet=quiet)\n",
" File \"/opt/conda/envs/feedback-prize/lib/python3.9/site-packages/wandb/sdk/wandb_run.py\", line 256, in wrapper\n",
" return func(self, *args, **kwargs)\n",
" File \"/opt/conda/envs/feedback-prize/lib/python3.9/site-packages/wandb/sdk/wandb_run.py\", line 222, in wrapper\n",
" return func(self, *args, **kwargs)\n",
" File \"/opt/conda/envs/feedback-prize/lib/python3.9/site-packages/wandb/sdk/wandb_run.py\", line 1677, in finish\n",
" return self._finish(exit_code, quiet)\n",
" File \"/opt/conda/envs/feedback-prize/lib/python3.9/site-packages/wandb/sdk/wandb_run.py\", line 1688, in _finish\n",
" hook.call()\n",
" File \"/opt/conda/envs/feedback-prize/lib/python3.9/site-packages/wandb/sdk/wandb_init.py\", line 378, in _jupyter_teardown\n",
" ipython.display_pub.publish = ipython.display_pub._orig_publish\n",
"AttributeError: 'ZMQDisplayPublisher' object has no attribute '_orig_publish'\n",
"\u001b[34m\u001b[1mwandb\u001b[0m: Agent Starting Run: n0swwlxn with config:\n",
"\u001b[34m\u001b[1mwandb\u001b[0m: \tlearning_rate: 1.1885585913037536e-05\n",
"\u001b[34m\u001b[1mwandb\u001b[0m: \tnum_train_epochs: 4\n",
"\u001b[34m\u001b[1mwandb\u001b[0m: \tper_device_train_batch_size: 32\n",
"\u001b[34m\u001b[1mwandb\u001b[0m: \tseed: 7\n",
"Exception in thread Thread-20:\n",
"Traceback (most recent call last):\n",
" File \"/opt/conda/envs/feedback-prize/lib/python3.9/site-packages/wandb/agents/pyagent.py\", line 302, in _run_job\n",
" self._function()\n",
" File \"/opt/conda/envs/feedback-prize/lib/python3.9/site-packages/transformers/integrations.py\", line 378, in _objective\n",
" run.config.update({\"assignments\": {}, \"metric\": metric})\n",
" File \"/opt/conda/envs/feedback-prize/lib/python3.9/site-packages/wandb/sdk/wandb_config.py\", line 183, in update\n",
" self._callback(data=sanitized)\n",
" File \"/opt/conda/envs/feedback-prize/lib/python3.9/site-packages/wandb/sdk/wandb_run.py\", line 1132, in _config_callback\n",
" self._backend.interface.publish_config(key=key, val=val, data=data)\n",
" File \"/opt/conda/envs/feedback-prize/lib/python3.9/site-packages/wandb/sdk/interface/interface.py\", line 185, in publish_config\n",
" self._publish_config(cfg)\n",
" File \"/opt/conda/envs/feedback-prize/lib/python3.9/site-packages/wandb/sdk/interface/interface_shared.py\", line 290, in _publish_config\n",
" self._publish(rec)\n",
" File \"/opt/conda/envs/feedback-prize/lib/python3.9/site-packages/wandb/sdk/interface/interface_queue.py\", line 49, in _publish\n",
" raise Exception(\"The wandb backend process has shutdown\")\n",
"Exception: The wandb backend process has shutdown\n",
"\n",
"During handling of the above exception, another exception occurred:\n",
"\n",
"Traceback (most recent call last):\n",
" File \"/opt/conda/envs/feedback-prize/lib/python3.9/threading.py\", line 973, in _bootstrap_inner\n",
" self.run()\n",
" File \"/opt/conda/envs/feedback-prize/lib/python3.9/threading.py\", line 910, in run\n",
" self._target(*self._args, **self._kwargs)\n",
" File \"/opt/conda/envs/feedback-prize/lib/python3.9/site-packages/wandb/agents/pyagent.py\", line 307, in _run_job\n",
" wandb.finish(exit_code=1)\n",
" File \"/opt/conda/envs/feedback-prize/lib/python3.9/site-packages/wandb/sdk/wandb_run.py\", line 3302, in finish\n",
" wandb.run.finish(exit_code=exit_code, quiet=quiet)\n",
" File \"/opt/conda/envs/feedback-prize/lib/python3.9/site-packages/wandb/sdk/wandb_run.py\", line 256, in wrapper\n",
" return func(self, *args, **kwargs)\n",
" File \"/opt/conda/envs/feedback-prize/lib/python3.9/site-packages/wandb/sdk/wandb_run.py\", line 222, in wrapper\n",
" return func(self, *args, **kwargs)\n",
" File \"/opt/conda/envs/feedback-prize/lib/python3.9/site-packages/wandb/sdk/wandb_run.py\", line 1677, in finish\n",
" return self._finish(exit_code, quiet)\n",
" File \"/opt/conda/envs/feedback-prize/lib/python3.9/site-packages/wandb/sdk/wandb_run.py\", line 1688, in _finish\n",
" hook.call()\n",
" File \"/opt/conda/envs/feedback-prize/lib/python3.9/site-packages/wandb/sdk/wandb_init.py\", line 378, in _jupyter_teardown\n",
" ipython.display_pub.publish = ipython.display_pub._orig_publish\n",
"AttributeError: 'ZMQDisplayPublisher' object has no attribute '_orig_publish'\n",
"\u001b[34m\u001b[1mwandb\u001b[0m: Sweep Agent: Waiting for job.\n",
"\u001b[34m\u001b[1mwandb\u001b[0m: Job received.\n",
"\u001b[34m\u001b[1mwandb\u001b[0m: Agent Starting Run: 5mn08o86 with config:\n",
"\u001b[34m\u001b[1mwandb\u001b[0m: \tlearning_rate: 9.079611954856333e-05\n",
"\u001b[34m\u001b[1mwandb\u001b[0m: \tnum_train_epochs: 2\n",
"\u001b[34m\u001b[1mwandb\u001b[0m: \tper_device_train_batch_size: 8\n",
"\u001b[34m\u001b[1mwandb\u001b[0m: \tseed: 20\n",
"Exception in thread Thread-21:\n",
"Traceback (most recent call last):\n",
" File \"/opt/conda/envs/feedback-prize/lib/python3.9/site-packages/wandb/agents/pyagent.py\", line 302, in _run_job\n",
" self._function()\n",
" File \"/opt/conda/envs/feedback-prize/lib/python3.9/site-packages/transformers/integrations.py\", line 378, in _objective\n",
" run.config.update({\"assignments\": {}, \"metric\": metric})\n",
" File \"/opt/conda/envs/feedback-prize/lib/python3.9/site-packages/wandb/sdk/wandb_config.py\", line 183, in update\n",
" self._callback(data=sanitized)\n",
" File \"/opt/conda/envs/feedback-prize/lib/python3.9/site-packages/wandb/sdk/wandb_run.py\", line 1132, in _config_callback\n",
" self._backend.interface.publish_config(key=key, val=val, data=data)\n",
" File \"/opt/conda/envs/feedback-prize/lib/python3.9/site-packages/wandb/sdk/interface/interface.py\", line 185, in publish_config\n",
" self._publish_config(cfg)\n",
" File \"/opt/conda/envs/feedback-prize/lib/python3.9/site-packages/wandb/sdk/interface/interface_shared.py\", line 290, in _publish_config\n",
" self._publish(rec)\n",
" File \"/opt/conda/envs/feedback-prize/lib/python3.9/site-packages/wandb/sdk/interface/interface_queue.py\", line 49, in _publish\n",
" raise Exception(\"The wandb backend process has shutdown\")\n",
"Exception: The wandb backend process has shutdown\n",
"\n",
"During handling of the above exception, another exception occurred:\n",
"\n",
"Traceback (most recent call last):\n",
" File \"/opt/conda/envs/feedback-prize/lib/python3.9/threading.py\", line 973, in _bootstrap_inner\n",
" self.run()\n",
" File \"/opt/conda/envs/feedback-prize/lib/python3.9/threading.py\", line 910, in run\n",
" self._target(*self._args, **self._kwargs)\n",
" File \"/opt/conda/envs/feedback-prize/lib/python3.9/site-packages/wandb/agents/pyagent.py\", line 307, in _run_job\n",
" wandb.finish(exit_code=1)\n",
" File \"/opt/conda/envs/feedback-prize/lib/python3.9/site-packages/wandb/sdk/wandb_run.py\", line 3302, in finish\n",
" wandb.run.finish(exit_code=exit_code, quiet=quiet)\n",
" File \"/opt/conda/envs/feedback-prize/lib/python3.9/site-packages/wandb/sdk/wandb_run.py\", line 256, in wrapper\n",
" return func(self, *args, **kwargs)\n",
" File \"/opt/conda/envs/feedback-prize/lib/python3.9/site-packages/wandb/sdk/wandb_run.py\", line 222, in wrapper\n",
" return func(self, *args, **kwargs)\n",
" File \"/opt/conda/envs/feedback-prize/lib/python3.9/site-packages/wandb/sdk/wandb_run.py\", line 1677, in finish\n",
" return self._finish(exit_code, quiet)\n",
" File \"/opt/conda/envs/feedback-prize/lib/python3.9/site-packages/wandb/sdk/wandb_run.py\", line 1688, in _finish\n",
" hook.call()\n",
" File \"/opt/conda/envs/feedback-prize/lib/python3.9/site-packages/wandb/sdk/wandb_init.py\", line 378, in _jupyter_teardown\n",
" ipython.display_pub.publish = ipython.display_pub._orig_publish\n",
"AttributeError: 'ZMQDisplayPublisher' object has no attribute '_orig_publish'\n",
"\u001b[34m\u001b[1mwandb\u001b[0m: Agent Starting Run: prexfrxs with config:\n",
"\u001b[34m\u001b[1mwandb\u001b[0m: \tlearning_rate: 3.869677154911815e-06\n",
"\u001b[34m\u001b[1mwandb\u001b[0m: \tnum_train_epochs: 3\n",
"\u001b[34m\u001b[1mwandb\u001b[0m: \tper_device_train_batch_size: 4\n",
"\u001b[34m\u001b[1mwandb\u001b[0m: \tseed: 11\n",
"Exception in thread Thread-22:\n",
"Traceback (most recent call last):\n",
" File \"/opt/conda/envs/feedback-prize/lib/python3.9/site-packages/wandb/agents/pyagent.py\", line 302, in _run_job\n",
" self._function()\n",
" File \"/opt/conda/envs/feedback-prize/lib/python3.9/site-packages/transformers/integrations.py\", line 378, in _objective\n",
" run.config.update({\"assignments\": {}, \"metric\": metric})\n",
" File \"/opt/conda/envs/feedback-prize/lib/python3.9/site-packages/wandb/sdk/wandb_config.py\", line 183, in update\n",
" self._callback(data=sanitized)\n",
" File \"/opt/conda/envs/feedback-prize/lib/python3.9/site-packages/wandb/sdk/wandb_run.py\", line 1132, in _config_callback\n",
" self._backend.interface.publish_config(key=key, val=val, data=data)\n",
" File \"/opt/conda/envs/feedback-prize/lib/python3.9/site-packages/wandb/sdk/interface/interface.py\", line 185, in publish_config\n",
" self._publish_config(cfg)\n",
" File \"/opt/conda/envs/feedback-prize/lib/python3.9/site-packages/wandb/sdk/interface/interface_shared.py\", line 290, in _publish_config\n",
" self._publish(rec)\n",
" File \"/opt/conda/envs/feedback-prize/lib/python3.9/site-packages/wandb/sdk/interface/interface_queue.py\", line 49, in _publish\n",
" raise Exception(\"The wandb backend process has shutdown\")\n",
"Exception: The wandb backend process has shutdown\n",
"\n",
"During handling of the above exception, another exception occurred:\n",
"\n",
"Traceback (most recent call last):\n",
" File \"/opt/conda/envs/feedback-prize/lib/python3.9/threading.py\", line 973, in _bootstrap_inner\n",
" self.run()\n",
" File \"/opt/conda/envs/feedback-prize/lib/python3.9/threading.py\", line 910, in run\n",
" self._target(*self._args, **self._kwargs)\n",
" File \"/opt/conda/envs/feedback-prize/lib/python3.9/site-packages/wandb/agents/pyagent.py\", line 307, in _run_job\n",
" wandb.finish(exit_code=1)\n",
" File \"/opt/conda/envs/feedback-prize/lib/python3.9/site-packages/wandb/sdk/wandb_run.py\", line 3302, in finish\n",
" wandb.run.finish(exit_code=exit_code, quiet=quiet)\n",
" File \"/opt/conda/envs/feedback-prize/lib/python3.9/site-packages/wandb/sdk/wandb_run.py\", line 256, in wrapper\n",
" return func(self, *args, **kwargs)\n",
" File \"/opt/conda/envs/feedback-prize/lib/python3.9/site-packages/wandb/sdk/wandb_run.py\", line 222, in wrapper\n",
" return func(self, *args, **kwargs)\n",
" File \"/opt/conda/envs/feedback-prize/lib/python3.9/site-packages/wandb/sdk/wandb_run.py\", line 1677, in finish\n",
" return self._finish(exit_code, quiet)\n",
" File \"/opt/conda/envs/feedback-prize/lib/python3.9/site-packages/wandb/sdk/wandb_run.py\", line 1688, in _finish\n",
" hook.call()\n",
" File \"/opt/conda/envs/feedback-prize/lib/python3.9/site-packages/wandb/sdk/wandb_init.py\", line 378, in _jupyter_teardown\n",
" ipython.display_pub.publish = ipython.display_pub._orig_publish\n",
"AttributeError: 'ZMQDisplayPublisher' object has no attribute '_orig_publish'\n"
]
}
],
"source": [
"best_run = trainer.hyperparameter_search(\n",
" backend=\"wandb\", \n",
" project=\"new_sweep_hf\",\n",
" n_trials=5,\n",
" metric=\"eval/loss\",\n",
")"
]
},
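  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**Note:** in the output above every sweep trial raised `AttributeError: 'ZMQDisplayPublisher' object has no attribute '_orig_publish'` while `wandb.finish()` was tearing down the Jupyter display hooks, right after `Exception: The wandb backend process has shutdown`. Because each agent run errors out, the sweep finishes without reporting a winning trial, which is why `best_run` in the next cell comes back empty. Running the same sweep from a plain Python script (outside Jupyter) or upgrading `wandb` may avoid this teardown issue."
   ]
  },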
{
"cell_type": "code",
"execution_count": 28,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"BestRun(run_id=None, objective=None, hyperparameters=None)"
]
},
"execution_count": 28,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"best_run"
]
}
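  ,
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "`best_run` has all fields set to `None` here because every trial above crashed. When a sweep completes successfully, `best_run.hyperparameters` holds the winning assignment (keys such as `learning_rate`, `num_train_epochs`, `per_device_train_batch_size` and `seed`, matching the sweep config logged above). Below is a minimal sketch of how you could then retrain with those values; it assumes the sweep finished and that the `trainer` defined earlier is still in scope.\n",
    "\n",
    "```python\n",
    "# Minimal sketch: assumes best_run.hyperparameters was populated by a successful sweep.\n",
    "for name, value in best_run.hyperparameters.items():\n",
    "    # copy each winning value (e.g. learning_rate) onto the trainer's TrainingArguments\n",
    "    setattr(trainer.args, name, value)\n",
    "\n",
    "# retrain the model with the best hyperparameters applied\n",
    "trainer.train()\n",
    "```"
   ]
  }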
],
"metadata": {
"accelerator": "GPU",
"colab": {
"name": "Copy of Text Classification on GLUE",
"provenance": []
},
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.7"
},
"toc": {
"base_numbering": 1,
"nav_menu": {},
"number_sections": true,
"sideBar": true,
"skip_h1_title": false,
"title_cell": "Table of Contents",
"title_sidebar": "Contents",
"toc_cell": false,
"toc_position": {
"height": "588.8px",
"left": "22px",
"top": "110.338px",
"width": "256.475px"
},
"toc_section_display": true,
"toc_window_display": true
},
"widgets": {
"application/vnd.jupyter.widget-state+json": {
"040aa9e3dc6e41cf82d5c5e54d33565b": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "1.5.0",
"model_name": "DescriptionStyleModel",
"state": {
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_model_name": "DescriptionStyleModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "1.2.0",
"_view_name": "StyleView",
"description_width": ""
}
},
"082b93bf7cbd4d5b8691d35092a3a513": {
"model_module": "@jupyter-widgets/base",
"model_module_version": "1.2.0",
"model_name": "LayoutModel",
"state": {
"_model_module": "@jupyter-widgets/base",
"_model_module_version": "1.2.0",
"_model_name": "LayoutModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "1.2.0",
"_view_name": "LayoutView",
"align_content": null,
"align_items": null,
"align_self": null,
"border": null,
"bottom": null,
"display": null,
"flex": null,
"flex_flow": null,
"grid_area": null,
"grid_auto_columns": null,
"grid_auto_flow": null,
"grid_auto_rows": null,
"grid_column": null,
"grid_gap": null,
"grid_row": null,
"grid_template_areas": null,
"grid_template_columns": null,
"grid_template_rows": null,
"height": null,
"justify_content": null,
"justify_items": null,
"left": null,
"margin": null,
"max_height": null,
"max_width": null,
"min_height": null,
"min_width": null,
"object_fit": null,
"object_position": null,
"order": null,
"overflow": null,
"overflow_x": null,
"overflow_y": null,
"padding": null,
"right": null,
"top": null,
"visibility": null,
"width": null
}
},
"0c2c565023074c319c64c5ad50f14c84": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "1.5.0",
"model_name": "ProgressStyleModel",
"state": {
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_model_name": "ProgressStyleModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "1.2.0",
"_view_name": "StyleView",
"bar_color": null,
"description_width": ""
}
},
"0c2cce713a424f1c907cca57bcd6db4b": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "1.5.0",
"model_name": "HTMLModel",
"state": {
"_dom_classes": [],
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_model_name": "HTMLModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/controls",
"_view_module_version": "1.5.0",
"_view_name": "HTMLView",
"description": "",
"description_tooltip": null,
"layout": "IPY_MODEL_082b93bf7cbd4d5b8691d35092a3a513",
"placeholder": "​",
"style": "IPY_MODEL_af57631a5b9642a68f0bc27eafebd542",
"value": " 28.8k/? [00:00&lt;00:00, 9.90kB/s]"
}
},
"0c3d202481574c46944d3a87f6d33648": {
"model_module": "@jupyter-widgets/base",
"model_module_version": "1.2.0",
"model_name": "LayoutModel",
"state": {
"_model_module": "@jupyter-widgets/base",
"_model_module_version": "1.2.0",
"_model_name": "LayoutModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "1.2.0",
"_view_name": "LayoutView",
"align_content": null,
"align_items": null,
"align_self": null,
"border": null,
"bottom": null,
"display": null,
"flex": null,
"flex_flow": null,
"grid_area": null,
"grid_auto_columns": null,
"grid_auto_flow": null,
"grid_auto_rows": null,
"grid_column": null,
"grid_gap": null,
"grid_row": null,
"grid_template_areas": null,
"grid_template_columns": null,
"grid_template_rows": null,
"height": null,
"justify_content": null,
"justify_items": null,
"left": null,
"margin": null,
"max_height": null,
"max_width": null,
"min_height": null,
"min_width": null,
"object_fit": null,
"object_position": null,
"order": null,
"overflow": null,
"overflow_x": null,
"overflow_y": null,
"padding": null,
"right": null,
"top": null,
"visibility": null,
"width": null
}
},
"0e2dd775b54d487cafcd4e80cdddd9ed": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "1.5.0",
"model_name": "ProgressStyleModel",
"state": {
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_model_name": "ProgressStyleModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "1.2.0",
"_view_name": "StyleView",
"bar_color": null,
"description_width": ""
}
},
"111ffec8673a4cde960f35faee65451d": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "1.5.0",
"model_name": "DescriptionStyleModel",
"state": {
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_model_name": "DescriptionStyleModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "1.2.0",
"_view_name": "StyleView",
"description_width": ""
}
},
"1421037cee284945965d799bb8eadf5d": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "1.5.0",
"model_name": "FloatProgressModel",
"state": {
"_dom_classes": [],
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_model_name": "FloatProgressModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/controls",
"_view_module_version": "1.5.0",
"_view_name": "ProgressView",
"bar_style": "success",
"description": "",
"description_tooltip": null,
"layout": "IPY_MODEL_9e2afa34dd0d456dbdf827743422ee5a",
"max": 7777,
"min": 0,
"orientation": "horizontal",
"style": "IPY_MODEL_8cdb50dd51634d5f8108d450acd40cc7",
"value": 7777
}
},
"1854c0258e8e44c4a74e6d4aab7d2214": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "1.5.0",
"model_name": "HTMLModel",
"state": {
"_dom_classes": [],
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_model_name": "HTMLModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/controls",
"_view_module_version": "1.5.0",
"_view_name": "HTMLView",
"description": "",
"description_tooltip": null,
"layout": "IPY_MODEL_6d9432d4224841d9b205252443018a86",
"placeholder": "​",
"style": "IPY_MODEL_1f966018eb8a4ba2bc283f1d4dbfa5f5",
"value": " 5.76k/? [00:00&lt;00:00, 132kB/s]"
}
},
"1f966018eb8a4ba2bc283f1d4dbfa5f5": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "1.5.0",
"model_name": "DescriptionStyleModel",
"state": {
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_model_name": "DescriptionStyleModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "1.2.0",
"_view_name": "StyleView",
"description_width": ""
}
},
"26b7156cff74475088c12bf92effe221": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "1.5.0",
"model_name": "ProgressStyleModel",
"state": {
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_model_name": "ProgressStyleModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "1.2.0",
"_view_name": "StyleView",
"bar_color": null,
"description_width": ""
}
},
"28fd8091a58b45298c1dabebb1bac36c": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "1.5.0",
"model_name": "DescriptionStyleModel",
"state": {
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_model_name": "DescriptionStyleModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "1.2.0",
"_view_name": "StyleView",
"description_width": ""
}
},
"291c1a9d28454561a3168d4fa27dc9aa": {
"model_module": "@jupyter-widgets/base",
"model_module_version": "1.2.0",
"model_name": "LayoutModel",
"state": {
"_model_module": "@jupyter-widgets/base",
"_model_module_version": "1.2.0",
"_model_name": "LayoutModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "1.2.0",
"_view_name": "LayoutView",
"align_content": null,
"align_items": null,
"align_self": null,
"border": null,
"bottom": null,
"display": null,
"flex": null,
"flex_flow": null,
"grid_area": null,
"grid_auto_columns": null,
"grid_auto_flow": null,
"grid_auto_rows": null,
"grid_column": null,
"grid_gap": null,
"grid_row": null,
"grid_template_areas": null,
"grid_template_columns": null,
"grid_template_rows": null,
"height": null,
"justify_content": null,
"justify_items": null,
"left": null,
"margin": null,
"max_height": null,
"max_width": null,
"min_height": null,
"min_width": null,
"object_fit": null,
"object_position": null,
"order": null,
"overflow": null,
"overflow_x": null,
"overflow_y": null,
"padding": null,
"right": null,
"top": null,
"visibility": null,
"width": null
}
},
"2d88ae6cff104f0d844576a006e8dbb5": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "1.5.0",
"model_name": "DescriptionStyleModel",
"state": {
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_model_name": "DescriptionStyleModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "1.2.0",
"_view_name": "StyleView",
"description_width": ""
}
},
"2de2b346b3c54a95b52306e4ce7c0e26": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "1.5.0",
"model_name": "HTMLModel",
"state": {
"_dom_classes": [],
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_model_name": "HTMLModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/controls",
"_view_module_version": "1.5.0",
"_view_name": "HTMLView",
"description": "",
"description_tooltip": null,
"layout": "IPY_MODEL_84977143ae0d4c4b8112f471d7a8ee56",
"placeholder": "​",
"style": "IPY_MODEL_4012537f49f94f858b95158dcd2e8233",
"value": "Generating validation split: 98%"
}
},
"364088b341064d9dac54b5e55b5cbb93": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "1.5.0",
"model_name": "DescriptionStyleModel",
"state": {
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_model_name": "DescriptionStyleModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "1.2.0",
"_view_name": "StyleView",
"description_width": ""
}
},
"37c72ca6518949acacf8efccb9238633": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "1.5.0",
"model_name": "FloatProgressModel",
"state": {
"_dom_classes": [],
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_model_name": "FloatProgressModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/controls",
"_view_module_version": "1.5.0",
"_view_name": "ProgressView",
"bar_style": "success",
"description": "",
"description_tooltip": null,
"layout": "IPY_MODEL_e952cfc00ad2443cba686a756a1fdaac",
"max": 3,
"min": 0,
"orientation": "horizontal",
"style": "IPY_MODEL_f1bdbd5e53fe4f249dfabc8b0f0ef892",
"value": 3
}
},
"386fa0d738e04943aa0c3a534c1f421f": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "1.5.0",
"model_name": "ProgressStyleModel",
"state": {
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_model_name": "ProgressStyleModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "1.2.0",
"_view_name": "StyleView",
"bar_color": null,
"description_width": ""
}
},
"3a62be8f6aa24f1dac56d0779637dca9": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "1.5.0",
"model_name": "HBoxModel",
"state": {
"_dom_classes": [],
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_model_name": "HBoxModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/controls",
"_view_module_version": "1.5.0",
"_view_name": "HBoxView",
"box_style": "",
"children": [
"IPY_MODEL_2de2b346b3c54a95b52306e4ce7c0e26",
"IPY_MODEL_9f2c68d520424853b0f0dd972001a7dc",
"IPY_MODEL_f2b050e6396f41019fb98e588924cc48"
],
"layout": "IPY_MODEL_e2038f31686449aa948a1a3806f389e1"
}
},
"3b56d8d4d8514afc8db77ef74cfc1b35": {
"model_module": "@jupyter-widgets/base",
"model_module_version": "1.2.0",
"model_name": "LayoutModel",
"state": {
"_model_module": "@jupyter-widgets/base",
"_model_module_version": "1.2.0",
"_model_name": "LayoutModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "1.2.0",
"_view_name": "LayoutView",
"align_content": null,
"align_items": null,
"align_self": null,
"border": null,
"bottom": null,
"display": null,
"flex": null,
"flex_flow": null,
"grid_area": null,
"grid_auto_columns": null,
"grid_auto_flow": null,
"grid_auto_rows": null,
"grid_column": null,
"grid_gap": null,
"grid_row": null,
"grid_template_areas": null,
"grid_template_columns": null,
"grid_template_rows": null,
"height": null,
"justify_content": null,
"justify_items": null,
"left": null,
"margin": null,
"max_height": null,
"max_width": null,
"min_height": null,
"min_width": null,
"object_fit": null,
"object_position": null,
"order": null,
"overflow": null,
"overflow_x": null,
"overflow_y": null,
"padding": null,
"right": null,
"top": null,
"visibility": null,
"width": null
}
},
"3ec8e8794f2341d395c5ad071aed7a68": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "1.5.0",
"model_name": "DescriptionStyleModel",
"state": {
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_model_name": "DescriptionStyleModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "1.2.0",
"_view_name": "StyleView",
"description_width": ""
}
},
"3f798a9dcac34dc29f94423467db9317": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "1.5.0",
"model_name": "HBoxModel",
"state": {
"_dom_classes": [],
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_model_name": "HBoxModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/controls",
"_view_module_version": "1.5.0",
"_view_name": "HBoxView",
"box_style": "",
"children": [
"IPY_MODEL_606356ca4050435c83ddacfb4cb9186f",
"IPY_MODEL_91bbf765a080431bad9be7efd7749d80",
"IPY_MODEL_a7217fda218240f6913962a82254c1bd"
],
"layout": "IPY_MODEL_fc4d710e51904fc99964bb9b30bc9ec1"
}
},
"4012537f49f94f858b95158dcd2e8233": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "1.5.0",
"model_name": "DescriptionStyleModel",
"state": {
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_model_name": "DescriptionStyleModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "1.2.0",
"_view_name": "StyleView",
"description_width": ""
}
},
"407cc0584aaf42e289c3b73d8cefe2ba": {
"model_module": "@jupyter-widgets/base",
"model_module_version": "1.2.0",
"model_name": "LayoutModel",
"state": {
"_model_module": "@jupyter-widgets/base",
"_model_module_version": "1.2.0",
"_model_name": "LayoutModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "1.2.0",
"_view_name": "LayoutView",
"align_content": null,
"align_items": null,
"align_self": null,
"border": null,
"bottom": null,
"display": null,
"flex": null,
"flex_flow": null,
"grid_area": null,
"grid_auto_columns": null,
"grid_auto_flow": null,
"grid_auto_rows": null,
"grid_column": null,
"grid_gap": null,
"grid_row": null,
"grid_template_areas": null,
"grid_template_columns": null,
"grid_template_rows": null,
"height": null,
"justify_content": null,
"justify_items": null,
"left": null,
"margin": null,
"max_height": null,
"max_width": null,
"min_height": null,
"min_width": null,
"object_fit": null,
"object_position": null,
"order": null,
"overflow": null,
"overflow_x": null,
"overflow_y": null,
"padding": null,
"right": null,
"top": null,
"visibility": null,
"width": null
}
},
"40caff94a3e1474d8ef440f0fa78d981": {
"model_module": "@jupyter-widgets/base",
"model_module_version": "1.2.0",
"model_name": "LayoutModel",
"state": {
"_model_module": "@jupyter-widgets/base",
"_model_module_version": "1.2.0",
"_model_name": "LayoutModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "1.2.0",
"_view_name": "LayoutView",
"align_content": null,
"align_items": null,
"align_self": null,
"border": null,
"bottom": null,
"display": null,
"flex": null,
"flex_flow": null,
"grid_area": null,
"grid_auto_columns": null,
"grid_auto_flow": null,
"grid_auto_rows": null,
"grid_column": null,
"grid_gap": null,
"grid_row": null,
"grid_template_areas": null,
"grid_template_columns": null,
"grid_template_rows": null,
"height": null,
"justify_content": null,
"justify_items": null,
"left": null,
"margin": null,
"max_height": null,
"max_width": null,
"min_height": null,
"min_width": null,
"object_fit": null,
"object_position": null,
"order": null,
"overflow": null,
"overflow_x": null,
"overflow_y": null,
"padding": null,
"right": null,
"top": null,
"visibility": null,
"width": null
}
},
"40f2fb5d5b6b438cbbc574f6e68d7d73": {
"model_module": "@jupyter-widgets/base",
"model_module_version": "1.2.0",
"model_name": "LayoutModel",
"state": {
"_model_module": "@jupyter-widgets/base",
"_model_module_version": "1.2.0",
"_model_name": "LayoutModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "1.2.0",
"_view_name": "LayoutView",
"align_content": null,
"align_items": null,
"align_self": null,
"border": null,
"bottom": null,
"display": null,
"flex": null,
"flex_flow": null,
"grid_area": null,
"grid_auto_columns": null,
"grid_auto_flow": null,
"grid_auto_rows": null,
"grid_column": null,
"grid_gap": null,
"grid_row": null,
"grid_template_areas": null,
"grid_template_columns": null,
"grid_template_rows": null,
"height": null,
"justify_content": null,
"justify_items": null,
"left": null,
"margin": null,
"max_height": null,
"max_width": null,
"min_height": null,
"min_width": null,
"object_fit": null,
"object_position": null,
"order": null,
"overflow": null,
"overflow_x": null,
"overflow_y": null,
"padding": null,
"right": null,
"top": null,
"visibility": null,
"width": null
}
},
"48a35f3b76734a48afda4697c5f161cc": {
"model_module": "@jupyter-widgets/base",
"model_module_version": "1.2.0",
"model_name": "LayoutModel",
"state": {
"_model_module": "@jupyter-widgets/base",
"_model_module_version": "1.2.0",
"_model_name": "LayoutModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "1.2.0",
"_view_name": "LayoutView",
"align_content": null,
"align_items": null,
"align_self": null,
"border": null,
"bottom": null,
"display": null,
"flex": null,
"flex_flow": null,
"grid_area": null,
"grid_auto_columns": null,
"grid_auto_flow": null,
"grid_auto_rows": null,
"grid_column": null,
"grid_gap": null,
"grid_row": null,
"grid_template_areas": null,
"grid_template_columns": null,
"grid_template_rows": null,
"height": null,
"justify_content": null,
"justify_items": null,
"left": null,
"margin": null,
"max_height": null,
"max_width": null,
"min_height": null,
"min_width": null,
"object_fit": null,
"object_position": null,
"order": null,
"overflow": null,
"overflow_x": null,
"overflow_y": null,
"padding": null,
"right": null,
"top": null,
"visibility": null,
"width": null
}
},
"49a8946b1e624154bede31f76a6d3a06": {
"model_module": "@jupyter-widgets/base",
"model_module_version": "1.2.0",
"model_name": "LayoutModel",
"state": {
"_model_module": "@jupyter-widgets/base",
"_model_module_version": "1.2.0",
"_model_name": "LayoutModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "1.2.0",
"_view_name": "LayoutView",
"align_content": null,
"align_items": null,
"align_self": null,
"border": null,
"bottom": null,
"display": null,
"flex": null,
"flex_flow": null,
"grid_area": null,
"grid_auto_columns": null,
"grid_auto_flow": null,
"grid_auto_rows": null,
"grid_column": null,
"grid_gap": null,
"grid_row": null,
"grid_template_areas": null,
"grid_template_columns": null,
"grid_template_rows": null,
"height": null,
"justify_content": null,
"justify_items": null,
"left": null,
"margin": null,
"max_height": null,
"max_width": null,
"min_height": null,
"min_width": null,
"object_fit": null,
"object_position": null,
"order": null,
"overflow": null,
"overflow_x": null,
"overflow_y": null,
"padding": null,
"right": null,
"top": null,
"visibility": null,
"width": null
}
},
"49cb88f1550f4b838c14aae9a8ac8662": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "1.5.0",
"model_name": "HTMLModel",
"state": {
"_dom_classes": [],
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_model_name": "HTMLModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/controls",
"_view_module_version": "1.5.0",
"_view_name": "HTMLView",
"description": "",
"description_tooltip": null,
"layout": "IPY_MODEL_40caff94a3e1474d8ef440f0fa78d981",
"placeholder": "​",
"style": "IPY_MODEL_364088b341064d9dac54b5e55b5cbb93",
"value": "Downloading metadata: "
}
},
"51d639f7cfa64849bd70d97447065192": {
"model_module": "@jupyter-widgets/base",
"model_module_version": "1.2.0",
"model_name": "LayoutModel",
"state": {
"_model_module": "@jupyter-widgets/base",
"_model_module_version": "1.2.0",
"_model_name": "LayoutModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "1.2.0",
"_view_name": "LayoutView",
"align_content": null,
"align_items": null,
"align_self": null,
"border": null,
"bottom": null,
"display": null,
"flex": null,
"flex_flow": null,
"grid_area": null,
"grid_auto_columns": null,
"grid_auto_flow": null,
"grid_auto_rows": null,
"grid_column": null,
"grid_gap": null,
"grid_row": null,
"grid_template_areas": null,
"grid_template_columns": null,
"grid_template_rows": null,
"height": null,
"justify_content": null,
"justify_items": null,
"left": null,
"margin": null,
"max_height": null,
"max_width": null,
"min_height": null,
"min_width": null,
"object_fit": null,
"object_position": null,
"order": null,
"overflow": null,
"overflow_x": null,
"overflow_y": null,
"padding": null,
"right": null,
"top": null,
"visibility": null,
"width": null
}
},
"5219ac7d60964cc88a69ed24e5f75725": {
"model_module": "@jupyter-widgets/base",
"model_module_version": "1.2.0",
"model_name": "LayoutModel",
"state": {
"_model_module": "@jupyter-widgets/base",
"_model_module_version": "1.2.0",
"_model_name": "LayoutModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "1.2.0",
"_view_name": "LayoutView",
"align_content": null,
"align_items": null,
"align_self": null,
"border": null,
"bottom": null,
"display": null,
"flex": null,
"flex_flow": null,
"grid_area": null,
"grid_auto_columns": null,
"grid_auto_flow": null,
"grid_auto_rows": null,
"grid_column": null,
"grid_gap": null,
"grid_row": null,
"grid_template_areas": null,
"grid_template_columns": null,
"grid_template_rows": null,
"height": null,
"justify_content": null,
"justify_items": null,
"left": null,
"margin": null,
"max_height": null,
"max_width": null,
"min_height": null,
"min_width": null,
"object_fit": null,
"object_position": null,
"order": null,
"overflow": null,
"overflow_x": null,
"overflow_y": null,
"padding": null,
"right": null,
"top": null,
"visibility": null,
"width": null
}
},
"585ee0da0c3249a18922482b610ed37b": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "1.5.0",
"model_name": "FloatProgressModel",
"state": {
"_dom_classes": [],
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_model_name": "FloatProgressModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/controls",
"_view_module_version": "1.5.0",
"_view_name": "ProgressView",
"bar_style": "success",
"description": "",
"description_tooltip": null,
"layout": "IPY_MODEL_c8fe85e0940840708f7b32ff63bc418a",
"max": 1844,
"min": 0,
"orientation": "horizontal",
"style": "IPY_MODEL_9f22e13082cf431c987cf21f4bf7b555",
"value": 1844
}
},
"5a41a54e66e047e3878138edddb8709d": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "1.5.0",
"model_name": "DescriptionStyleModel",
"state": {
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_model_name": "DescriptionStyleModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "1.2.0",
"_view_name": "StyleView",
"description_width": ""
}
},
"606356ca4050435c83ddacfb4cb9186f": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "1.5.0",
"model_name": "HTMLModel",
"state": {
"_dom_classes": [],
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_model_name": "HTMLModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/controls",
"_view_module_version": "1.5.0",
"_view_name": "HTMLView",
"description": "",
"description_tooltip": null,
"layout": "IPY_MODEL_51d639f7cfa64849bd70d97447065192",
"placeholder": "​",
"style": "IPY_MODEL_794203270c6341de9cf0a74f320eb1bc",
"value": "Generating train split: 88%"
}
},
"61fb59b5ad2f4dcca5909919617d6495": {
"model_module": "@jupyter-widgets/base",
"model_module_version": "1.2.0",
"model_name": "LayoutModel",
"state": {
"_model_module": "@jupyter-widgets/base",
"_model_module_version": "1.2.0",
"_model_name": "LayoutModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "1.2.0",
"_view_name": "LayoutView",
"align_content": null,
"align_items": null,
"align_self": null,
"border": null,
"bottom": null,
"display": null,
"flex": null,
"flex_flow": null,
"grid_area": null,
"grid_auto_columns": null,
"grid_auto_flow": null,
"grid_auto_rows": null,
"grid_column": null,
"grid_gap": null,
"grid_row": null,
"grid_template_areas": null,
"grid_template_columns": null,
"grid_template_rows": null,
"height": null,
"justify_content": null,
"justify_items": null,
"left": null,
"margin": null,
"max_height": null,
"max_width": null,
"min_height": null,
"min_width": null,
"object_fit": null,
"object_position": null,
"order": null,
"overflow": null,
"overflow_x": null,
"overflow_y": null,
"padding": null,
"right": null,
"top": null,
"visibility": null,
"width": null
}
},
"61fb78589e214ac18b898404ac07dce5": {
"model_module": "@jupyter-widgets/base",
"model_module_version": "1.2.0",
"model_name": "LayoutModel",
"state": {
"_model_module": "@jupyter-widgets/base",
"_model_module_version": "1.2.0",
"_model_name": "LayoutModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "1.2.0",
"_view_name": "LayoutView",
"align_content": null,
"align_items": null,
"align_self": null,
"border": null,
"bottom": null,
"display": null,
"flex": null,
"flex_flow": null,
"grid_area": null,
"grid_auto_columns": null,
"grid_auto_flow": null,
"grid_auto_rows": null,
"grid_column": null,
"grid_gap": null,
"grid_row": null,
"grid_template_areas": null,
"grid_template_columns": null,
"grid_template_rows": null,
"height": null,
"justify_content": null,
"justify_items": null,
"left": null,
"margin": null,
"max_height": null,
"max_width": null,
"min_height": null,
"min_width": null,
"object_fit": null,
"object_position": null,
"order": null,
"overflow": null,
"overflow_x": null,
"overflow_y": null,
"padding": null,
"right": null,
"top": null,
"visibility": null,
"width": null
}
},
"66d92bcab0374c8c99660452b246d294": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "1.5.0",
"model_name": "DescriptionStyleModel",
"state": {
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_model_name": "DescriptionStyleModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "1.2.0",
"_view_name": "StyleView",
"description_width": ""
}
},
"674f6248de45431eadc5e894b206985d": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "1.5.0",
"model_name": "HTMLModel",
"state": {
"_dom_classes": [],
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_model_name": "HTMLModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/controls",
"_view_module_version": "1.5.0",
"_view_name": "HTMLView",
"description": "",
"description_tooltip": null,
"layout": "IPY_MODEL_407cc0584aaf42e289c3b73d8cefe2ba",
"placeholder": "​",
"style": "IPY_MODEL_3ec8e8794f2341d395c5ad071aed7a68",
"value": "Downloading builder script: "
}
},
"6d9432d4224841d9b205252443018a86": {
"model_module": "@jupyter-widgets/base",
"model_module_version": "1.2.0",
"model_name": "LayoutModel",
"state": {
"_model_module": "@jupyter-widgets/base",
"_model_module_version": "1.2.0",
"_model_name": "LayoutModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "1.2.0",
"_view_name": "LayoutView",
"align_content": null,
"align_items": null,
"align_self": null,
"border": null,
"bottom": null,
"display": null,
"flex": null,
"flex_flow": null,
"grid_area": null,
"grid_auto_columns": null,
"grid_auto_flow": null,
"grid_auto_rows": null,
"grid_column": null,
"grid_gap": null,
"grid_row": null,
"grid_template_areas": null,
"grid_template_columns": null,
"grid_template_rows": null,
"height": null,
"justify_content": null,
"justify_items": null,
"left": null,
"margin": null,
"max_height": null,
"max_width": null,
"min_height": null,
"min_width": null,
"object_fit": null,
"object_position": null,
"order": null,
"overflow": null,
"overflow_x": null,
"overflow_y": null,
"padding": null,
"right": null,
"top": null,
"visibility": null,
"width": null
}
},
"6e44e78718cd4a52aecc47a511fb7ef6": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "1.5.0",
"model_name": "FloatProgressModel",
"state": {
"_dom_classes": [],
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_model_name": "FloatProgressModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/controls",
"_view_module_version": "1.5.0",
"_view_name": "ProgressView",
"bar_style": "success",
"description": "",
"description_tooltip": null,
"layout": "IPY_MODEL_fe5d0cbeb5f446ec89383d23521524c8",
"max": 4473,
"min": 0,
"orientation": "horizontal",
"style": "IPY_MODEL_0e2dd775b54d487cafcd4e80cdddd9ed",
"value": 4473
}
},
"6ef7047e954f4bc681cb8820ca51cc2e": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "1.5.0",
"model_name": "HTMLModel",
"state": {
"_dom_classes": [],
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_model_name": "HTMLModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/controls",
"_view_module_version": "1.5.0",
"_view_name": "HTMLView",
"description": "",
"description_tooltip": null,
"layout": "IPY_MODEL_d7ebd8c8a29a41d7a624d3f1682862ec",
"placeholder": "​",
"style": "IPY_MODEL_040aa9e3dc6e41cf82d5c5e54d33565b",
"value": "Generating test split: 42%"
}
},
"71a0f213aa80450dad9dfea35a594511": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "1.5.0",
"model_name": "FloatProgressModel",
"state": {
"_dom_classes": [],
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_model_name": "FloatProgressModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/controls",
"_view_module_version": "1.5.0",
"_view_name": "ProgressView",
"bar_style": "",
"description": "",
"description_tooltip": null,
"layout": "IPY_MODEL_ab119786bd364cbea4e5db946402277d",
"max": 1063,
"min": 0,
"orientation": "horizontal",
"style": "IPY_MODEL_f23cec0e3d9c4961ab876545c1835a27",
"value": 1063
}
},
"794203270c6341de9cf0a74f320eb1bc": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "1.5.0",
"model_name": "DescriptionStyleModel",
"state": {
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_model_name": "DescriptionStyleModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "1.2.0",
"_view_name": "StyleView",
"description_width": ""
}
},
"7cd1e30e5f8a46bba69c34864e74c694": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "1.5.0",
"model_name": "HBoxModel",
"state": {
"_dom_classes": [],
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_model_name": "HBoxModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/controls",
"_view_module_version": "1.5.0",
"_view_name": "HBoxView",
"box_style": "",
"children": [
"IPY_MODEL_e648745b51fb4a849f851cc7d1a5c36f",
"IPY_MODEL_585ee0da0c3249a18922482b610ed37b",
"IPY_MODEL_1854c0258e8e44c4a74e6d4aab7d2214"
],
"layout": "IPY_MODEL_da4f72dad003481ba3532e02417902e4"
}
},
"827cc59a3c254c4e9ec1468b763bc8a5": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "1.5.0",
"model_name": "HBoxModel",
"state": {
"_dom_classes": [],
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_model_name": "HBoxModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/controls",
"_view_module_version": "1.5.0",
"_view_name": "HBoxView",
"box_style": "",
"children": [
"IPY_MODEL_49cb88f1550f4b838c14aae9a8ac8662",
"IPY_MODEL_6e44e78718cd4a52aecc47a511fb7ef6",
"IPY_MODEL_b58cd953f41648d29de64d9125f42c72"
],
"layout": "IPY_MODEL_40f2fb5d5b6b438cbbc574f6e68d7d73"
}
},
"84977143ae0d4c4b8112f471d7a8ee56": {
"model_module": "@jupyter-widgets/base",
"model_module_version": "1.2.0",
"model_name": "LayoutModel",
"state": {
"_model_module": "@jupyter-widgets/base",
"_model_module_version": "1.2.0",
"_model_name": "LayoutModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "1.2.0",
"_view_name": "LayoutView",
"align_content": null,
"align_items": null,
"align_self": null,
"border": null,
"bottom": null,
"display": null,
"flex": null,
"flex_flow": null,
"grid_area": null,
"grid_auto_columns": null,
"grid_auto_flow": null,
"grid_auto_rows": null,
"grid_column": null,
"grid_gap": null,
"grid_row": null,
"grid_template_areas": null,
"grid_template_columns": null,
"grid_template_rows": null,
"height": null,
"justify_content": null,
"justify_items": null,
"left": null,
"margin": null,
"max_height": null,
"max_width": null,
"min_height": null,
"min_width": null,
"object_fit": null,
"object_position": null,
"order": null,
"overflow": null,
"overflow_x": null,
"overflow_y": null,
"padding": null,
"right": null,
"top": null,
"visibility": null,
"width": null
}
},
"8756b89a2d6049e8b9ba04eff93532f3": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "1.5.0",
"model_name": "HBoxModel",
"state": {
"_dom_classes": [],
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_model_name": "HBoxModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/controls",
"_view_module_version": "1.5.0",
"_view_name": "HBoxView",
"box_style": "",
"children": [
"IPY_MODEL_be45258bb03d4f27a46708a09a7c964e",
"IPY_MODEL_37c72ca6518949acacf8efccb9238633",
"IPY_MODEL_c8af2b5d13884bcb89490eb7bc2eb707"
],
"layout": "IPY_MODEL_3b56d8d4d8514afc8db77ef74cfc1b35"
}
},
"8cdb50dd51634d5f8108d450acd40cc7": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "1.5.0",
"model_name": "ProgressStyleModel",
"state": {
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_model_name": "ProgressStyleModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "1.2.0",
"_view_name": "StyleView",
"bar_color": null,
"description_width": ""
}
},
"907f0c20cdbd4f64be785e867b18feaf": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "1.5.0",
"model_name": "FloatProgressModel",
"state": {
"_dom_classes": [],
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_model_name": "FloatProgressModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/controls",
"_view_module_version": "1.5.0",
"_view_name": "ProgressView",
"bar_style": "success",
"description": "",
"description_tooltip": null,
"layout": "IPY_MODEL_91421ca5f075407a8a1fcb0fb20872d5",
"max": 376971,
"min": 0,
"orientation": "horizontal",
"style": "IPY_MODEL_386fa0d738e04943aa0c3a534c1f421f",
"value": 376971
}
},
"91421ca5f075407a8a1fcb0fb20872d5": {
"model_module": "@jupyter-widgets/base",
"model_module_version": "1.2.0",
"model_name": "LayoutModel",
"state": {
"_model_module": "@jupyter-widgets/base",
"_model_module_version": "1.2.0",
"_model_name": "LayoutModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "1.2.0",
"_view_name": "LayoutView",
"align_content": null,
"align_items": null,
"align_self": null,
"border": null,
"bottom": null,
"display": null,
"flex": null,
"flex_flow": null,
"grid_area": null,
"grid_auto_columns": null,
"grid_auto_flow": null,
"grid_auto_rows": null,
"grid_column": null,
"grid_gap": null,
"grid_row": null,
"grid_template_areas": null,
"grid_template_columns": null,
"grid_template_rows": null,
"height": null,
"justify_content": null,
"justify_items": null,
"left": null,
"margin": null,
"max_height": null,
"max_width": null,
"min_height": null,
"min_width": null,
"object_fit": null,
"object_position": null,
"order": null,
"overflow": null,
"overflow_x": null,
"overflow_y": null,
"padding": null,
"right": null,
"top": null,
"visibility": null,
"width": null
}
},
"91bbf765a080431bad9be7efd7749d80": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "1.5.0",
"model_name": "FloatProgressModel",
"state": {
"_dom_classes": [],
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_model_name": "FloatProgressModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/controls",
"_view_module_version": "1.5.0",
"_view_name": "ProgressView",
"bar_style": "",
"description": "",
"description_tooltip": null,
"layout": "IPY_MODEL_61fb78589e214ac18b898404ac07dce5",
"max": 8551,
"min": 0,
"orientation": "horizontal",
"style": "IPY_MODEL_26b7156cff74475088c12bf92effe221",
"value": 8551
}
},
"93839715a90d4557a19413fbb5d3eaa3": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "1.5.0",
"model_name": "DescriptionStyleModel",
"state": {
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_model_name": "DescriptionStyleModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "1.2.0",
"_view_name": "StyleView",
"description_width": ""
}
},
"969071e591654559a65f1caad7dca6bf": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "1.5.0",
"model_name": "DescriptionStyleModel",
"state": {
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_model_name": "DescriptionStyleModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "1.2.0",
"_view_name": "StyleView",
"description_width": ""
}
},
"9afe5ce5fadf4616a53532bd3fcda831": {
"model_module": "@jupyter-widgets/base",
"model_module_version": "1.2.0",
"model_name": "LayoutModel",
"state": {
"_model_module": "@jupyter-widgets/base",
"_model_module_version": "1.2.0",
"_model_name": "LayoutModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "1.2.0",
"_view_name": "LayoutView",
"align_content": null,
"align_items": null,
"align_self": null,
"border": null,
"bottom": null,
"display": null,
"flex": null,
"flex_flow": null,
"grid_area": null,
"grid_auto_columns": null,
"grid_auto_flow": null,
"grid_auto_rows": null,
"grid_column": null,
"grid_gap": null,
"grid_row": null,
"grid_template_areas": null,
"grid_template_columns": null,
"grid_template_rows": null,
"height": null,
"justify_content": null,
"justify_items": null,
"left": null,
"margin": null,
"max_height": null,
"max_width": null,
"min_height": null,
"min_width": null,
"object_fit": null,
"object_position": null,
"order": null,
"overflow": null,
"overflow_x": null,
"overflow_y": null,
"padding": null,
"right": null,
"top": null,
"visibility": null,
"width": null
}
},
"9c3523c172eb482dab604bd1535c2df3": {
"model_module": "@jupyter-widgets/base",
"model_module_version": "1.2.0",
"model_name": "LayoutModel",
"state": {
"_model_module": "@jupyter-widgets/base",
"_model_module_version": "1.2.0",
"_model_name": "LayoutModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "1.2.0",
"_view_name": "LayoutView",
"align_content": null,
"align_items": null,
"align_self": null,
"border": null,
"bottom": null,
"display": null,
"flex": null,
"flex_flow": null,
"grid_area": null,
"grid_auto_columns": null,
"grid_auto_flow": null,
"grid_auto_rows": null,
"grid_column": null,
"grid_gap": null,
"grid_row": null,
"grid_template_areas": null,
"grid_template_columns": null,
"grid_template_rows": null,
"height": null,
"justify_content": null,
"justify_items": null,
"left": null,
"margin": null,
"max_height": null,
"max_width": null,
"min_height": null,
"min_width": null,
"object_fit": null,
"object_position": null,
"order": null,
"overflow": null,
"overflow_x": null,
"overflow_y": null,
"padding": null,
"right": null,
"top": null,
"visibility": null,
"width": null
}
},
"9e2afa34dd0d456dbdf827743422ee5a": {
"model_module": "@jupyter-widgets/base",
"model_module_version": "1.2.0",
"model_name": "LayoutModel",
"state": {
"_model_module": "@jupyter-widgets/base",
"_model_module_version": "1.2.0",
"_model_name": "LayoutModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "1.2.0",
"_view_name": "LayoutView",
"align_content": null,
"align_items": null,
"align_self": null,
"border": null,
"bottom": null,
"display": null,
"flex": null,
"flex_flow": null,
"grid_area": null,
"grid_auto_columns": null,
"grid_auto_flow": null,
"grid_auto_rows": null,
"grid_column": null,
"grid_gap": null,
"grid_row": null,
"grid_template_areas": null,
"grid_template_columns": null,
"grid_template_rows": null,
"height": null,
"justify_content": null,
"justify_items": null,
"left": null,
"margin": null,
"max_height": null,
"max_width": null,
"min_height": null,
"min_width": null,
"object_fit": null,
"object_position": null,
"order": null,
"overflow": null,
"overflow_x": null,
"overflow_y": null,
"padding": null,
"right": null,
"top": null,
"visibility": null,
"width": null
}
},
"9ea5bb465b8148cfb0c8dce352fed136": {
"model_module": "@jupyter-widgets/base",
"model_module_version": "1.2.0",
"model_name": "LayoutModel",
"state": {
"_model_module": "@jupyter-widgets/base",
"_model_module_version": "1.2.0",
"_model_name": "LayoutModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "1.2.0",
"_view_name": "LayoutView",
"align_content": null,
"align_items": null,
"align_self": null,
"border": null,
"bottom": null,
"display": null,
"flex": null,
"flex_flow": null,
"grid_area": null,
"grid_auto_columns": null,
"grid_auto_flow": null,
"grid_auto_rows": null,
"grid_column": null,
"grid_gap": null,
"grid_row": null,
"grid_template_areas": null,
"grid_template_columns": null,
"grid_template_rows": null,
"height": null,
"justify_content": null,
"justify_items": null,
"left": null,
"margin": null,
"max_height": null,
"max_width": null,
"min_height": null,
"min_width": null,
"object_fit": null,
"object_position": null,
"order": null,
"overflow": null,
"overflow_x": null,
"overflow_y": null,
"padding": null,
"right": null,
"top": null,
"visibility": null,
"width": null
}
},
"9f22e13082cf431c987cf21f4bf7b555": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "1.5.0",
"model_name": "ProgressStyleModel",
"state": {
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_model_name": "ProgressStyleModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "1.2.0",
"_view_name": "StyleView",
"bar_color": null,
"description_width": ""
}
},
"9f2c68d520424853b0f0dd972001a7dc": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "1.5.0",
"model_name": "FloatProgressModel",
"state": {
"_dom_classes": [],
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_model_name": "FloatProgressModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/controls",
"_view_module_version": "1.5.0",
"_view_name": "ProgressView",
"bar_style": "",
"description": "",
"description_tooltip": null,
"layout": "IPY_MODEL_9c3523c172eb482dab604bd1535c2df3",
"max": 1043,
"min": 0,
"orientation": "horizontal",
"style": "IPY_MODEL_0c2c565023074c319c64c5ad50f14c84",
"value": 1043
}
},
"a4906c3458e44d168e9187b8ea42d3ad": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "1.5.0",
"model_name": "DescriptionStyleModel",
"state": {
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_model_name": "DescriptionStyleModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "1.2.0",
"_view_name": "StyleView",
"description_width": ""
}
},
"a7217fda218240f6913962a82254c1bd": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "1.5.0",
"model_name": "HTMLModel",
"state": {
"_dom_classes": [],
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_model_name": "HTMLModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/controls",
"_view_module_version": "1.5.0",
"_view_name": "HTMLView",
"description": "",
"description_tooltip": null,
"layout": "IPY_MODEL_5219ac7d60964cc88a69ed24e5f75725",
"placeholder": "​",
"style": "IPY_MODEL_5a41a54e66e047e3878138edddb8709d",
"value": " 7545/8551 [00:00&lt;00:00, 7309.31 examples/s]"
}
},
"ab119786bd364cbea4e5db946402277d": {
"model_module": "@jupyter-widgets/base",
"model_module_version": "1.2.0",
"model_name": "LayoutModel",
"state": {
"_model_module": "@jupyter-widgets/base",
"_model_module_version": "1.2.0",
"_model_name": "LayoutModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "1.2.0",
"_view_name": "LayoutView",
"align_content": null,
"align_items": null,
"align_self": null,
"border": null,
"bottom": null,
"display": null,
"flex": null,
"flex_flow": null,
"grid_area": null,
"grid_auto_columns": null,
"grid_auto_flow": null,
"grid_auto_rows": null,
"grid_column": null,
"grid_gap": null,
"grid_row": null,
"grid_template_areas": null,
"grid_template_columns": null,
"grid_template_rows": null,
"height": null,
"justify_content": null,
"justify_items": null,
"left": null,
"margin": null,
"max_height": null,
"max_width": null,
"min_height": null,
"min_width": null,
"object_fit": null,
"object_position": null,
"order": null,
"overflow": null,
"overflow_x": null,
"overflow_y": null,
"padding": null,
"right": null,
"top": null,
"visibility": null,
"width": null
}
},
"af57631a5b9642a68f0bc27eafebd542": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "1.5.0",
"model_name": "DescriptionStyleModel",
"state": {
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_model_name": "DescriptionStyleModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "1.2.0",
"_view_name": "StyleView",
"description_width": ""
}
},
"b58cd953f41648d29de64d9125f42c72": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "1.5.0",
"model_name": "HTMLModel",
"state": {
"_dom_classes": [],
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_model_name": "HTMLModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/controls",
"_view_module_version": "1.5.0",
"_view_name": "HTMLView",
"description": "",
"description_tooltip": null,
"layout": "IPY_MODEL_e2234760a0c94c9cb9c3f71e95036b6e",
"placeholder": "​",
"style": "IPY_MODEL_28fd8091a58b45298c1dabebb1bac36c",
"value": " 28.7k/? [00:00&lt;00:00, 19.2kB/s]"
}
},
"bda796c7564a4549878a05d1ebdbaf1b": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "1.5.0",
"model_name": "HTMLModel",
"state": {
"_dom_classes": [],
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_model_name": "HTMLModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/controls",
"_view_module_version": "1.5.0",
"_view_name": "HTMLView",
"description": "",
"description_tooltip": null,
"layout": "IPY_MODEL_48a35f3b76734a48afda4697c5f161cc",
"placeholder": "​",
"style": "IPY_MODEL_fa4655d796fe49d7a362ec2e34e9b6c4",
"value": " 377k/377k [00:00&lt;00:00, 531kB/s]"
}
},
"be45258bb03d4f27a46708a09a7c964e": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "1.5.0",
"model_name": "HTMLModel",
"state": {
"_dom_classes": [],
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_model_name": "HTMLModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/controls",
"_view_module_version": "1.5.0",
"_view_name": "HTMLView",
"description": "",
"description_tooltip": null,
"layout": "IPY_MODEL_9afe5ce5fadf4616a53532bd3fcda831",
"placeholder": "​",
"style": "IPY_MODEL_111ffec8673a4cde960f35faee65451d",
"value": "100%"
}
},
"c8af2b5d13884bcb89490eb7bc2eb707": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "1.5.0",
"model_name": "HTMLModel",
"state": {
"_dom_classes": [],
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_model_name": "HTMLModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/controls",
"_view_module_version": "1.5.0",
"_view_name": "HTMLView",
"description": "",
"description_tooltip": null,
"layout": "IPY_MODEL_0c3d202481574c46944d3a87f6d33648",
"placeholder": "​",
"style": "IPY_MODEL_93839715a90d4557a19413fbb5d3eaa3",
"value": " 3/3 [00:00&lt;00:00, 8.20it/s]"
}
},
"c8fe85e0940840708f7b32ff63bc418a": {
"model_module": "@jupyter-widgets/base",
"model_module_version": "1.2.0",
"model_name": "LayoutModel",
"state": {
"_model_module": "@jupyter-widgets/base",
"_model_module_version": "1.2.0",
"_model_name": "LayoutModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "1.2.0",
"_view_name": "LayoutView",
"align_content": null,
"align_items": null,
"align_self": null,
"border": null,
"bottom": null,
"display": null,
"flex": null,
"flex_flow": null,
"grid_area": null,
"grid_auto_columns": null,
"grid_auto_flow": null,
"grid_auto_rows": null,
"grid_column": null,
"grid_gap": null,
"grid_row": null,
"grid_template_areas": null,
"grid_template_columns": null,
"grid_template_rows": null,
"height": null,
"justify_content": null,
"justify_items": null,
"left": null,
"margin": null,
"max_height": null,
"max_width": null,
"min_height": null,
"min_width": null,
"object_fit": null,
"object_position": null,
"order": null,
"overflow": null,
"overflow_x": null,
"overflow_y": null,
"padding": null,
"right": null,
"top": null,
"visibility": null,
"width": null
}
},
"ce57e0b2d78d43989d0990402cdf2c93": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "1.5.0",
"model_name": "HTMLModel",
"state": {
"_dom_classes": [],
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_model_name": "HTMLModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/controls",
"_view_module_version": "1.5.0",
"_view_name": "HTMLView",
"description": "",
"description_tooltip": null,
"layout": "IPY_MODEL_f5ee51060ccc4e24a28804c85932eca8",
"placeholder": "​",
"style": "IPY_MODEL_66d92bcab0374c8c99660452b246d294",
"value": "Downloading data: 100%"
}
},
"d7ebd8c8a29a41d7a624d3f1682862ec": {
"model_module": "@jupyter-widgets/base",
"model_module_version": "1.2.0",
"model_name": "LayoutModel",
"state": {
"_model_module": "@jupyter-widgets/base",
"_model_module_version": "1.2.0",
"_model_name": "LayoutModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "1.2.0",
"_view_name": "LayoutView",
"align_content": null,
"align_items": null,
"align_self": null,
"border": null,
"bottom": null,
"display": null,
"flex": null,
"flex_flow": null,
"grid_area": null,
"grid_auto_columns": null,
"grid_auto_flow": null,
"grid_auto_rows": null,
"grid_column": null,
"grid_gap": null,
"grid_row": null,
"grid_template_areas": null,
"grid_template_columns": null,
"grid_template_rows": null,
"height": null,
"justify_content": null,
"justify_items": null,
"left": null,
"margin": null,
"max_height": null,
"max_width": null,
"min_height": null,
"min_width": null,
"object_fit": null,
"object_position": null,
"order": null,
"overflow": null,
"overflow_x": null,
"overflow_y": null,
"padding": null,
"right": null,
"top": null,
"visibility": null,
"width": null
}
},
"da4f72dad003481ba3532e02417902e4": {
"model_module": "@jupyter-widgets/base",
"model_module_version": "1.2.0",
"model_name": "LayoutModel",
"state": {
"_model_module": "@jupyter-widgets/base",
"_model_module_version": "1.2.0",
"_model_name": "LayoutModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "1.2.0",
"_view_name": "LayoutView",
"align_content": null,
"align_items": null,
"align_self": null,
"border": null,
"bottom": null,
"display": null,
"flex": null,
"flex_flow": null,
"grid_area": null,
"grid_auto_columns": null,
"grid_auto_flow": null,
"grid_auto_rows": null,
"grid_column": null,
"grid_gap": null,
"grid_row": null,
"grid_template_areas": null,
"grid_template_columns": null,
"grid_template_rows": null,
"height": null,
"justify_content": null,
"justify_items": null,
"left": null,
"margin": null,
"max_height": null,
"max_width": null,
"min_height": null,
"min_width": null,
"object_fit": null,
"object_position": null,
"order": null,
"overflow": null,
"overflow_x": null,
"overflow_y": null,
"padding": null,
"right": null,
"top": null,
"visibility": null,
"width": null
}
},
"e116f73758ea4705a814d2589517d157": {
"model_module": "@jupyter-widgets/base",
"model_module_version": "1.2.0",
"model_name": "LayoutModel",
"state": {
"_model_module": "@jupyter-widgets/base",
"_model_module_version": "1.2.0",
"_model_name": "LayoutModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "1.2.0",
"_view_name": "LayoutView",
"align_content": null,
"align_items": null,
"align_self": null,
"border": null,
"bottom": null,
"display": null,
"flex": null,
"flex_flow": null,
"grid_area": null,
"grid_auto_columns": null,
"grid_auto_flow": null,
"grid_auto_rows": null,
"grid_column": null,
"grid_gap": null,
"grid_row": null,
"grid_template_areas": null,
"grid_template_columns": null,
"grid_template_rows": null,
"height": null,
"justify_content": null,
"justify_items": null,
"left": null,
"margin": null,
"max_height": null,
"max_width": null,
"min_height": null,
"min_width": null,
"object_fit": null,
"object_position": null,
"order": null,
"overflow": null,
"overflow_x": null,
"overflow_y": null,
"padding": null,
"right": null,
"top": null,
"visibility": null,
"width": null
}
},
"e2038f31686449aa948a1a3806f389e1": {
"model_module": "@jupyter-widgets/base",
"model_module_version": "1.2.0",
"model_name": "LayoutModel",
"state": {
"_model_module": "@jupyter-widgets/base",
"_model_module_version": "1.2.0",
"_model_name": "LayoutModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "1.2.0",
"_view_name": "LayoutView",
"align_content": null,
"align_items": null,
"align_self": null,
"border": null,
"bottom": null,
"display": null,
"flex": null,
"flex_flow": null,
"grid_area": null,
"grid_auto_columns": null,
"grid_auto_flow": null,
"grid_auto_rows": null,
"grid_column": null,
"grid_gap": null,
"grid_row": null,
"grid_template_areas": null,
"grid_template_columns": null,
"grid_template_rows": null,
"height": null,
"justify_content": null,
"justify_items": null,
"left": null,
"margin": null,
"max_height": null,
"max_width": null,
"min_height": null,
"min_width": null,
"object_fit": null,
"object_position": null,
"order": null,
"overflow": null,
"overflow_x": null,
"overflow_y": null,
"padding": null,
"right": null,
"top": null,
"visibility": null,
"width": null
}
},
"e2234760a0c94c9cb9c3f71e95036b6e": {
"model_module": "@jupyter-widgets/base",
"model_module_version": "1.2.0",
"model_name": "LayoutModel",
"state": {
"_model_module": "@jupyter-widgets/base",
"_model_module_version": "1.2.0",
"_model_name": "LayoutModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "1.2.0",
"_view_name": "LayoutView",
"align_content": null,
"align_items": null,
"align_self": null,
"border": null,
"bottom": null,
"display": null,
"flex": null,
"flex_flow": null,
"grid_area": null,
"grid_auto_columns": null,
"grid_auto_flow": null,
"grid_auto_rows": null,
"grid_column": null,
"grid_gap": null,
"grid_row": null,
"grid_template_areas": null,
"grid_template_columns": null,
"grid_template_rows": null,
"height": null,
"justify_content": null,
"justify_items": null,
"left": null,
"margin": null,
"max_height": null,
"max_width": null,
"min_height": null,
"min_width": null,
"object_fit": null,
"object_position": null,
"order": null,
"overflow": null,
"overflow_x": null,
"overflow_y": null,
"padding": null,
"right": null,
"top": null,
"visibility": null,
"width": null
}
},
"e23c0bd606164d5a826a398049836218": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "1.5.0",
"model_name": "HBoxModel",
"state": {
"_dom_classes": [],
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_model_name": "HBoxModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/controls",
"_view_module_version": "1.5.0",
"_view_name": "HBoxView",
"box_style": "",
"children": [
"IPY_MODEL_674f6248de45431eadc5e894b206985d",
"IPY_MODEL_1421037cee284945965d799bb8eadf5d",
"IPY_MODEL_0c2cce713a424f1c907cca57bcd6db4b"
],
"layout": "IPY_MODEL_9ea5bb465b8148cfb0c8dce352fed136"
}
},
"e648745b51fb4a849f851cc7d1a5c36f": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "1.5.0",
"model_name": "HTMLModel",
"state": {
"_dom_classes": [],
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_model_name": "HTMLModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/controls",
"_view_module_version": "1.5.0",
"_view_name": "HTMLView",
"description": "",
"description_tooltip": null,
"layout": "IPY_MODEL_49a8946b1e624154bede31f76a6d3a06",
"placeholder": "​",
"style": "IPY_MODEL_2d88ae6cff104f0d844576a006e8dbb5",
"value": "Downloading builder script: "
}
},
"e952cfc00ad2443cba686a756a1fdaac": {
"model_module": "@jupyter-widgets/base",
"model_module_version": "1.2.0",
"model_name": "LayoutModel",
"state": {
"_model_module": "@jupyter-widgets/base",
"_model_module_version": "1.2.0",
"_model_name": "LayoutModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "1.2.0",
"_view_name": "LayoutView",
"align_content": null,
"align_items": null,
"align_self": null,
"border": null,
"bottom": null,
"display": null,
"flex": null,
"flex_flow": null,
"grid_area": null,
"grid_auto_columns": null,
"grid_auto_flow": null,
"grid_auto_rows": null,
"grid_column": null,
"grid_gap": null,
"grid_row": null,
"grid_template_areas": null,
"grid_template_columns": null,
"grid_template_rows": null,
"height": null,
"justify_content": null,
"justify_items": null,
"left": null,
"margin": null,
"max_height": null,
"max_width": null,
"min_height": null,
"min_width": null,
"object_fit": null,
"object_position": null,
"order": null,
"overflow": null,
"overflow_x": null,
"overflow_y": null,
"padding": null,
"right": null,
"top": null,
"visibility": null,
"width": null
}
},
"eba159425b444bb3beb7bb849eda39a5": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "1.5.0",
"model_name": "HBoxModel",
"state": {
"_dom_classes": [],
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_model_name": "HBoxModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/controls",
"_view_module_version": "1.5.0",
"_view_name": "HBoxView",
"box_style": "",
"children": [
"IPY_MODEL_ce57e0b2d78d43989d0990402cdf2c93",
"IPY_MODEL_907f0c20cdbd4f64be785e867b18feaf",
"IPY_MODEL_bda796c7564a4549878a05d1ebdbaf1b"
],
"layout": "IPY_MODEL_61fb59b5ad2f4dcca5909919617d6495"
}
},
"ef7f834d37324a0faee0bf0ba3bba516": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "1.5.0",
"model_name": "HTMLModel",
"state": {
"_dom_classes": [],
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_model_name": "HTMLModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/controls",
"_view_module_version": "1.5.0",
"_view_name": "HTMLView",
"description": "",
"description_tooltip": null,
"layout": "IPY_MODEL_f0df4c26f0b2428fbb45089756804951",
"placeholder": "​",
"style": "IPY_MODEL_969071e591654559a65f1caad7dca6bf",
"value": " 445/1063 [00:00&lt;00:00, 2603.23 examples/s]"
}
},
"f0df4c26f0b2428fbb45089756804951": {
"model_module": "@jupyter-widgets/base",
"model_module_version": "1.2.0",
"model_name": "LayoutModel",
"state": {
"_model_module": "@jupyter-widgets/base",
"_model_module_version": "1.2.0",
"_model_name": "LayoutModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "1.2.0",
"_view_name": "LayoutView",
"align_content": null,
"align_items": null,
"align_self": null,
"border": null,
"bottom": null,
"display": null,
"flex": null,
"flex_flow": null,
"grid_area": null,
"grid_auto_columns": null,
"grid_auto_flow": null,
"grid_auto_rows": null,
"grid_column": null,
"grid_gap": null,
"grid_row": null,
"grid_template_areas": null,
"grid_template_columns": null,
"grid_template_rows": null,
"height": null,
"justify_content": null,
"justify_items": null,
"left": null,
"margin": null,
"max_height": null,
"max_width": null,
"min_height": null,
"min_width": null,
"object_fit": null,
"object_position": null,
"order": null,
"overflow": null,
"overflow_x": null,
"overflow_y": null,
"padding": null,
"right": null,
"top": null,
"visibility": null,
"width": null
}
},
"f1bdbd5e53fe4f249dfabc8b0f0ef892": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "1.5.0",
"model_name": "ProgressStyleModel",
"state": {
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_model_name": "ProgressStyleModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "1.2.0",
"_view_name": "StyleView",
"bar_color": null,
"description_width": ""
}
},
"f23cec0e3d9c4961ab876545c1835a27": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "1.5.0",
"model_name": "ProgressStyleModel",
"state": {
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_model_name": "ProgressStyleModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "1.2.0",
"_view_name": "StyleView",
"bar_color": null,
"description_width": ""
}
},
"f2b050e6396f41019fb98e588924cc48": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "1.5.0",
"model_name": "HTMLModel",
"state": {
"_dom_classes": [],
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_model_name": "HTMLModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/controls",
"_view_module_version": "1.5.0",
"_view_name": "HTMLView",
"description": "",
"description_tooltip": null,
"layout": "IPY_MODEL_291c1a9d28454561a3168d4fa27dc9aa",
"placeholder": "​",
"style": "IPY_MODEL_a4906c3458e44d168e9187b8ea42d3ad",
"value": " 1026/1043 [00:00&lt;00:00, 5996.63 examples/s]"
}
},
"f42562d4740345e9a6f2c3779fe34823": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "1.5.0",
"model_name": "HBoxModel",
"state": {
"_dom_classes": [],
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_model_name": "HBoxModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/controls",
"_view_module_version": "1.5.0",
"_view_name": "HBoxView",
"box_style": "",
"children": [
"IPY_MODEL_6ef7047e954f4bc681cb8820ca51cc2e",
"IPY_MODEL_71a0f213aa80450dad9dfea35a594511",
"IPY_MODEL_ef7f834d37324a0faee0bf0ba3bba516"
],
"layout": "IPY_MODEL_e116f73758ea4705a814d2589517d157"
}
},
"f5ee51060ccc4e24a28804c85932eca8": {
"model_module": "@jupyter-widgets/base",
"model_module_version": "1.2.0",
"model_name": "LayoutModel",
"state": {
"_model_module": "@jupyter-widgets/base",
"_model_module_version": "1.2.0",
"_model_name": "LayoutModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "1.2.0",
"_view_name": "LayoutView",
"align_content": null,
"align_items": null,
"align_self": null,
"border": null,
"bottom": null,
"display": null,
"flex": null,
"flex_flow": null,
"grid_area": null,
"grid_auto_columns": null,
"grid_auto_flow": null,
"grid_auto_rows": null,
"grid_column": null,
"grid_gap": null,
"grid_row": null,
"grid_template_areas": null,
"grid_template_columns": null,
"grid_template_rows": null,
"height": null,
"justify_content": null,
"justify_items": null,
"left": null,
"margin": null,
"max_height": null,
"max_width": null,
"min_height": null,
"min_width": null,
"object_fit": null,
"object_position": null,
"order": null,
"overflow": null,
"overflow_x": null,
"overflow_y": null,
"padding": null,
"right": null,
"top": null,
"visibility": null,
"width": null
}
},
"fa4655d796fe49d7a362ec2e34e9b6c4": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "1.5.0",
"model_name": "DescriptionStyleModel",
"state": {
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_model_name": "DescriptionStyleModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "1.2.0",
"_view_name": "StyleView",
"description_width": ""
}
},
"fc4d710e51904fc99964bb9b30bc9ec1": {
"model_module": "@jupyter-widgets/base",
"model_module_version": "1.2.0",
"model_name": "LayoutModel",
"state": {
"_model_module": "@jupyter-widgets/base",
"_model_module_version": "1.2.0",
"_model_name": "LayoutModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "1.2.0",
"_view_name": "LayoutView",
"align_content": null,
"align_items": null,
"align_self": null,
"border": null,
"bottom": null,
"display": null,
"flex": null,
"flex_flow": null,
"grid_area": null,
"grid_auto_columns": null,
"grid_auto_flow": null,
"grid_auto_rows": null,
"grid_column": null,
"grid_gap": null,
"grid_row": null,
"grid_template_areas": null,
"grid_template_columns": null,
"grid_template_rows": null,
"height": null,
"justify_content": null,
"justify_items": null,
"left": null,
"margin": null,
"max_height": null,
"max_width": null,
"min_height": null,
"min_width": null,
"object_fit": null,
"object_position": null,
"order": null,
"overflow": null,
"overflow_x": null,
"overflow_y": null,
"padding": null,
"right": null,
"top": null,
"visibility": null,
"width": null
}
},
"fe5d0cbeb5f446ec89383d23521524c8": {
"model_module": "@jupyter-widgets/base",
"model_module_version": "1.2.0",
"model_name": "LayoutModel",
"state": {
"_model_module": "@jupyter-widgets/base",
"_model_module_version": "1.2.0",
"_model_name": "LayoutModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "1.2.0",
"_view_name": "LayoutView",
"align_content": null,
"align_items": null,
"align_self": null,
"border": null,
"bottom": null,
"display": null,
"flex": null,
"flex_flow": null,
"grid_area": null,
"grid_auto_columns": null,
"grid_auto_flow": null,
"grid_auto_rows": null,
"grid_column": null,
"grid_gap": null,
"grid_row": null,
"grid_template_areas": null,
"grid_template_columns": null,
"grid_template_rows": null,
"height": null,
"justify_content": null,
"justify_items": null,
"left": null,
"margin": null,
"max_height": null,
"max_width": null,
"min_height": null,
"min_width": null,
"object_fit": null,
"object_position": null,
"order": null,
"overflow": null,
"overflow_x": null,
"overflow_y": null,
"padding": null,
"right": null,
"top": null,
"visibility": null,
"width": null
}
}
}
}
},
"nbformat": 4,
"nbformat_minor": 1
}