aitextgen JIBA - text generation and training on GPU.ipynb
{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "view-in-github",
"colab_type": "text"
},
"source": [
"<a href=\"https://colab.research.google.com/gist/pszemraj/b590347a15c77a8e32db8b2f95a52a0d/aitextgen-text-generation-and-training-on-gpu.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "H7LoMj4GA4n_"
},
"source": [
"# aitextgen — Train a GPT-2 (or GPT Neo) Text-Generating Model w/ GPU\n",
"\n",
"This notebook is based on the original tutorial from `aitextgen`!\n",
"\n",
"- For more about `aitextgen`, you can visit [this GitHub repository](https://github.com/minimaxir/aitextgen) or [read the documentation](https://docs.aitextgen.io/).\n",
"- for `ai-msgbot` (which is using `aitextgen` for chatbot-esque purposes) you can find the project repo [here](https://github.com/pszemraj/ai-msgbot)\n",
"\n",
"\n",
"_updates made by [Peter](https://peterszemraj.ch/)_\n",
"\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "mjDXGtcWeyM2"
},
"source": [
"---"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "LrDWdEzv3LaX"
},
"outputs": [],
"source": [
"#@markdown add auto-Colab formatting with `IPython.display`\n",
"from IPython.display import HTML, display\n",
"# colab formatting\n",
"def set_css():\n",
" display(\n",
" HTML(\n",
" \"\"\"\n",
" <style>\n",
" pre {\n",
" white-space: pre-wrap;\n",
" }\n",
" </style>\n",
" \"\"\"\n",
" )\n",
" )\n",
"\n",
"get_ipython().events.register(\"pre_run_cell\", set_css)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "YhbtErR7wKxC"
},
"source": [
"### GPU\n",
"\n",
"Colaboratory uses a Nvidia K80, an Nvidia P100, an Nvidia V100, or Nvidia A100. For finetuning GPT-2 124M, any of these GPUs will be fine, but for text generation, a k80 or a P100 is ideal since they have more VRAM. \n",
"\n",
"- In theory: **If you receive a T4 or a V100 GPU, you can enable `fp16=True` during training for faster/more memory efficient training.**"
]
},
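{
"cell_type": "markdown",
"metadata": {},
"source": [
"A minimal sketch (not part of the original tutorial) of checking which GPU Colab assigned before deciding whether to enable half precision; `use_fp16` here is just a hypothetical helper flag, not an aitextgen setting:\n",
"\n",
"```python\n",
"import torch\n",
"\n",
"# name of the GPU assigned to this runtime, e.g. \"Tesla V100-SXM2-16GB\"\n",
"gpu_name = torch.cuda.get_device_name(0) if torch.cuda.is_available() else \"none\"\n",
"\n",
"# per the note above, fp16 is only worthwhile on T4/V100-class GPUs\n",
"use_fp16 = any(tag in gpu_name for tag in (\"T4\", \"V100\"))\n",
"print(f\"GPU: {gpu_name} -> consider fp16={use_fp16}\")\n",
"```"
]
},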
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "846niMokUVnR",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 374
},
"outputId": "6aad1fc3-54d1-47f4-89cd-3233b0b39b22"
},
"outputs": [
{
"output_type": "display_data",
"data": {
"text/html": [
"\n",
" <style>\n",
" pre {\n",
" white-space: pre-wrap;\n",
" }\n",
" </style>\n",
" "
],
"text/plain": [
"<IPython.core.display.HTML object>"
]
},
"metadata": {}
},
{
"output_type": "stream",
"name": "stdout",
"text": [
"Sat Jan 29 04:04:23 2022 \n",
"+-----------------------------------------------------------------------------+\n",
"| NVIDIA-SMI 495.46 Driver Version: 460.32.03 CUDA Version: 11.2 |\n",
"|-------------------------------+----------------------+----------------------+\n",
"| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |\n",
"| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |\n",
"| | | MIG M. |\n",
"|===============================+======================+======================|\n",
"| 0 Tesla V100-SXM2... Off | 00000000:00:04.0 Off | 0 |\n",
"| N/A 43C P0 27W / 300W | 0MiB / 16160MiB | 0% Default |\n",
"| | | N/A |\n",
"+-------------------------------+----------------------+----------------------+\n",
" \n",
"+-----------------------------------------------------------------------------+\n",
"| Processes: |\n",
"| GPU GI CI PID Type Process name GPU Memory |\n",
"| ID ID Usage |\n",
"|=============================================================================|\n",
"| No running processes found |\n",
"+-----------------------------------------------------------------------------+\n"
]
}
],
"source": [
"#@title print GPU status\n",
"!nvidia-smi"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "HTRjl_eh2dUY",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 35
},
"outputId": "ba5c997b-ae81-4558-a7b4-06199a8e6d11"
},
"outputs": [
{
"output_type": "display_data",
"data": {
"text/html": [
"\n",
" <style>\n",
" pre {\n",
" white-space: pre-wrap;\n",
" }\n",
" </style>\n",
" "
],
"text/plain": [
"<IPython.core.display.HTML object>"
]
},
"metadata": {}
},
{
"output_type": "stream",
"name": "stdout",
"text": [
"Runtime has 51.0 gigs of memory and 8 processors\n"
]
}
],
"source": [
"#@title print out the VM's CPU stats\n",
"from psutil import virtual_memory\n",
"import os\n",
"ram_gb = round(virtual_memory().total / (1024**3), 1)\n",
"print(f'Runtime has {ram_gb} gigs of memory and {os.cpu_count()} processors')\n",
"\n",
"if ram_gb < 20: print(\"WARNING - your CPU RAM allocated is less than 20.\",\n",
" \" You may experience errors loading\")"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "CemDfmdgjefZ"
},
"source": [
"## setup"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 17
},
"id": "_UJ8ek31Uz-r",
"outputId": "5b4cbe94-e379-4af1-eb9b-44a5de7aed25"
},
"outputs": [
{
"output_type": "display_data",
"data": {
"text/html": [
"\n",
" <style>\n",
" pre {\n",
" white-space: pre-wrap;\n",
" }\n",
" </style>\n",
" "
],
"text/plain": [
"<IPython.core.display.HTML object>"
]
},
"metadata": {}
}
],
"source": [
"#@title set torch version\n",
"!pip install torch==1.10.0+cu113 torchvision==0.11.1+cu113 -f https://download.pytorch.org/whl/cu113/torch_stable.html -q\n",
"!pip install https://storage.googleapis.com/jax-releases/cuda111/jaxlib-0.1.71+cuda111-cp37-none-manylinux2010_x86_64.whl -q\n",
"\n",
"#@markdown see this issue https://github.com/googlecolab/colabtools/issues/2452 for colab A100 GPU"
]
},
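{
"cell_type": "markdown",
"metadata": {},
"source": [
"Optional sanity check (a sketch, not from the original notebook): after the pinned install above, confirm the CUDA build of torch is active and can see the GPU. If this still reports the preinstalled version, restart the runtime and re-run from the top.\n",
"\n",
"```python\n",
"import torch\n",
"\n",
"print(\"torch version:\", torch.__version__)   # expect a +cu113 build after the install above\n",
"print(\"CUDA runtime:\", torch.version.cuda)\n",
"print(\"GPU visible:\", torch.cuda.is_available())\n",
"```"
]
},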
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 17
},
"id": "KBkpRgBCBS2_",
"outputId": "6770d112-926c-4d89-9fde-d704015f9102"
},
"outputs": [
{
"output_type": "display_data",
"data": {
"text/html": [
"\n",
" <style>\n",
" pre {\n",
" white-space: pre-wrap;\n",
" }\n",
" </style>\n",
" "
],
"text/plain": [
"<IPython.core.display.HTML object>"
]
},
"metadata": {}
}
],
"source": [
"#@title install aitextgen\n",
"!pip install -q aitextgen\n",
"\n",
"import logging\n",
"\n",
"logging.basicConfig(\n",
" format=\"%(asctime)s — %(levelname)s — %(name)s — %(message)s\",\n",
" datefmt=\"%m/%d/%Y %H:%M:%S\",\n",
" level=logging.INFO,\n",
")\n",
"\n",
"from aitextgen import aitextgen\n",
"from aitextgen.colab import mount_gdrive, copy_file_from_gdrive"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 35
},
"id": "DuC67no59vMe",
"outputId": "6f9fd483-dfc6-4935-d475-4999afb43b82"
},
"outputs": [
{
"output_type": "display_data",
"data": {
"text/html": [
"\n",
" <style>\n",
" pre {\n",
" white-space: pre-wrap;\n",
" }\n",
" </style>\n",
" "
],
"text/plain": [
"<IPython.core.display.HTML object>"
]
},
"metadata": {}
},
{
"output_type": "stream",
"name": "stdout",
"text": [
"Drive already mounted at /content/drive; to attempt to forcibly remount, call drive.mount(\"/content/drive\", force_remount=True).\n"
]
}
],
"source": [
"mount_gdrive()\n"
]
},
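{
"cell_type": "markdown",
"metadata": {},
"source": [
"A quick check (sketch, not in the original notebook) that the Drive mount succeeded before relying on Drive paths later; Colab mounts Drive at `/content/drive`, with your files under `MyDrive`:\n",
"\n",
"```python\n",
"import os\n",
"\n",
"drive_root = \"/content/drive/MyDrive\"\n",
"assert os.path.isdir(drive_root), \"Google Drive does not appear to be mounted\"\n",
"print(\"Drive mounted, top-level entries:\", len(os.listdir(drive_root)))\n",
"```"
]
},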
{
"cell_type": "markdown",
"metadata": {
"id": "trRhgNvsH4Wn"
},
"source": [
"### Loading GPT-2 or GPT Neo\n",
"\n",
"\n",
"- A common use case is *continuing* to fine-tune a model that was originally pretrained, and then fine-tuned a little bit, but needs to be fine-tuned more for accuracy/saliency reasons or because Google cut off the runtime earlier. \n",
" - in this case, `load_from_folder` should be set to `True` and `load_folder_dir` points to where the model checkpoint is on your google drive.\n",
"- **the below section describes loading an new/pretrained model from the original tutorial.**\n",
"\n",
"> If you're retraining a model on new text, you need to download and load the GPT-2 model into the GPU. \n",
"\n",
"> There are several sizes of GPT-2:\n",
"\n",
" * `124M` (default): the \"small\" model, 500MB on disk.\n",
" * `355M` (default): the \"medium\" model, 1.5GB on disk.\n",
" * `774M` (default): the \"large\" model, 3GB on disk.\n",
"\n",
"> You can also finetune a GPT Neo model instead ([_or any textgen GPT-architecture model on huggingface_](https://huggingface.co/models?pipeline_tag=text-generation)), which is more suitable for longer texts and the base model has more recent data:\n",
"\n",
"* `125M`: Analogous to the GPT-2 124M model. (355M parameter model was removed)\n",
"* `EleutherAI/gpt-neo-1.3B` : 1.3 billion parameter model. Have yet to see this train on Colab without crashing\n",
"\n",
"> The next cell downloads the model and saves it in the Colaboratory VM. If the model has already been downloaded, running this cell will reload it."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 17
},
"id": "v1sqwCk4eRb9",
"outputId": "a642bd20-079d-473d-d4b2-0e96c636fc05"
},
"outputs": [
{
"output_type": "display_data",
"data": {
"text/html": [
"\n",
" <style>\n",
" pre {\n",
" white-space: pre-wrap;\n",
" }\n",
" </style>\n",
" "
],
"text/plain": [
"<IPython.core.display.HTML object>"
]
},
"metadata": {}
}
],
"source": [
"model_size = \"355M\" #@param [\"355M\", \"774M\"]\n",
"load_from_folder = False #@param {type:\"boolean\"}\n",
"load_folder_dir = \"/content/drive/MyDrive/Programming/ai-msgbot/your-previous-model-name\" #@param {type:\"string\"}\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 88
},
"id": "flqSlHjMIeIw",
"outputId": "fda2c4a3-2cb2-4395-9dd4-98bae61bdc94"
},
"outputs": [
{
"output_type": "display_data",
"data": {
"text/html": [
"\n",
" <style>\n",
" pre {\n",
" white-space: pre-wrap;\n",
" }\n",
" </style>\n",
" "
],
"text/plain": [
"<IPython.core.display.HTML object>"
]
},
"metadata": {}
},
{
"output_type": "stream",
"name": "stderr",
"text": [
"01/29/2022 04:04:40 — INFO — aitextgen — Loading 355M GPT-2 model from /aitextgen.\n",
"01/29/2022 04:04:45 — INFO — aitextgen — GPT2 loaded with 354M parameters.\n",
"01/29/2022 04:04:45 — INFO — aitextgen — Gradient checkpointing enabled for model training.\n",
"01/29/2022 04:04:45 — INFO — aitextgen — Using the default GPT-2 Tokenizer.\n"
]
}
],
"source": [
"if load_from_folder:\n",
" ai = aitextgen(\n",
" model_folder=load_folder_dir, \n",
" to_gpu=True,\n",
" gradient_checkpointing=True,\n",
" )\n",
"else:\n",
" ai = aitextgen(\n",
" tf_gpt2=model_size, \n",
" to_gpu=True,\n",
" gradient_checkpointing=True,\n",
" )\n",
"# Comment out the above line and uncomment the below line to use GPT Neo instead.\n",
"\n",
"# model_size = \"gpt2-xl\"\n",
"# ai = aitextgen(model='gpt2-xl', \n",
"# to_gpu=True, \n",
"# gradient_checkpointing=True)"
]
},
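{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you want GPT Neo (or another Hugging Face model) instead of the TF GPT-2 weights, aitextgen can load a model by its Hub name via `model=`. A sketch, reusing the same GPU/checkpointing flags as the cell above:\n",
"\n",
"```python\n",
"# load the 125M GPT Neo checkpoint from the Hugging Face Hub instead of GPT-2\n",
"ai = aitextgen(\n",
"    model=\"EleutherAI/gpt-neo-125M\",\n",
"    to_gpu=True,\n",
"    gradient_checkpointing=True,\n",
")\n",
"```"
]
},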
{
"cell_type": "markdown",
"metadata": {
"id": "W7oQAqQHiFiX"
},
"source": [
"## load training data\n",
"\n",
"\n",
"- links to my parsed data:\n",
"\n",
"```\n",
"clean \"large\" whatsapp+iphone text dataset:\n",
"\n",
"https://www.dropbox.com/s/gbk9lkbcx6axk07/clean_apple_and_whatsapp_msgs.txt?dl=1\n",
"\n",
"clean \"small\" whatsapp+iphone text dataset:\n",
"\n",
"https://www.dropbox.com/s/75hvz74ve2yux02/clean_dataset-V3-whatsapp-apple.txt?dl=1\n",
"```\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 17
},
"id": "M-_ZkzxTeLm_",
"outputId": "2a6233b4-7729-4de4-82a0-67978407f813"
},
"outputs": [
{
"output_type": "display_data",
"data": {
"text/html": [
"\n",
" <style>\n",
" pre {\n",
" white-space: pre-wrap;\n",
" }\n",
" </style>\n",
" "
],
"text/plain": [
"<IPython.core.display.HTML object>"
]
},
"metadata": {}
}
],
"source": [
"dl_link = \"https://www.dropbox.com/s/dy8uqcq2bqww29x/rice_friends_slack_09012022_parsed.txt?dl=1\" #@param {type:\"string\"}\n",
"dataset_tag = \"jiba\" #@param {type:\"string\"}\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 35
},
"id": "hirJkLvbUrMb",
"outputId": "01ee058c-54f4-4a06-feac-1ff191641b83"
},
"outputs": [
{
"output_type": "display_data",
"data": {
"text/html": [
"\n",
" <style>\n",
" pre {\n",
" white-space: pre-wrap;\n",
" }\n",
" </style>\n",
" "
],
"text/plain": [
"<IPython.core.display.HTML object>"
]
},
"metadata": {}
},
{
"output_type": "execute_result",
"data": {
"text/plain": [
"('/content/training_script.txt', <http.client.HTTPMessage at 0x7f996bff4650>)"
]
},
"metadata": {},
"execution_count": 10
}
],
"source": [
"#@markdown retrieve the file behind `dl_link`\n",
"from urllib import request\n",
"from os.path import join\n",
"import os\n",
"vm_wd = os.getcwd()\n",
"local_name = join(vm_wd, \"training_script.txt\")\n",
"request.urlretrieve(dl_link, local_name)\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 17
},
"id": "vaSisBYJeGve",
"outputId": "d57b090a-2167-447a-f069-0a3650fcdfb1"
},
"outputs": [
{
"output_type": "display_data",
"data": {
"text/html": [
"\n",
" <style>\n",
" pre {\n",
" white-space: pre-wrap;\n",
" }\n",
" </style>\n",
" "
],
"text/plain": [
"<IPython.core.display.HTML object>"
]
},
"metadata": {}
}
],
"source": [
"#@title create the `update_script_names()` and `preview_script()` functions\n",
"#@markdown adjust names in script if needed \n",
"import pprint as pp\n",
"from os.path import basename\n",
"\n",
"def update_script_names(local_name, spkr_from=\"speaker a\", \n",
" spkr_to=\"person alpha\",\n",
" resp_from=\"speaker b\", resp_to=\"person beta\",\n",
" verbose=False):\n",
" \"\"\"\n",
" update_script_names - if the textfile script has different names for the \n",
" speaker/responder than desired (i.e. it is a group conversation, and the \n",
" chatbot is just supposed to simulate 1:1) this function can be used to \n",
" standardize\n",
" \"\"\"\n",
"\n",
" with open(local_name, 'r', encoding='utf-8', errors='ignore') as fi:\n",
" orig_lines = fi.readlines()\n",
"\n",
" from tqdm.auto import tqdm\n",
"\n",
" upd_lines = []\n",
"\n",
" for line in tqdm(orig_lines, total=len(orig_lines), \n",
" desc=\"replacing speaker names\"):\n",
" \n",
" fixline = line.replace(spkr_from, spkr_to)\n",
" fixline = fixline.replace(resp_from, resp_to)\n",
" upd_lines.append(fixline)\n",
"\n",
" local_namev2 = join(vm_wd, \"V2-rename-\" + basename(local_name))\n",
"\n",
" with open(local_namev2, 'w', encoding='utf-8', errors='ignore') as fo:\n",
" fo.writelines(upd_lines)\n",
"\n",
" if verbose: pp.pprint(upd_lines[:10])\n",
" # return filepath\n",
" return local_namev2\n",
"\n",
"def preview_script(file_path, num_lines:int=20):\n",
" with open(local_name, 'r', encoding='utf-8', errors='ignore') as fi:\n",
" script_lines = fi.readlines()\n",
"\n",
" print(f\"A preview of the first {num_lines} lines of {file_path} is: \\n\")\n",
" pp.pprint(script_lines[:num_lines])"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 568,
"referenced_widgets": [
"23c59d08bf5f4af793f2bcb7754655d8",
"9be719ddb5cd40f89eab05653c1aed32",
"6d1ff33e29124795895e50258e495f81",
"203aea35b71e47148c2247298a76a36b",
"9dcdb301b9954f6884ea0892f95b25ac",
"6261e8ec893c4fdf99844e7ea6353d02",
"e7bcf8f606db4513b7ad2155c63a3222",
"a626e90db4b340f2955ff226528ecc2c",
"a07828222c754f859be23114e0fd7961",
"84eb2279770846528a2af50d5b51681e",
"ebf45e65029441c3be8354bc8c536fda"
]
},
"id": "6OFnPCLADfll",
"outputId": "b43e1667-7e81-45b6-f1fd-d42a649f08ba"
},
"outputs": [
{
"output_type": "display_data",
"data": {
"text/html": [
"\n",
" <style>\n",
" pre {\n",
" white-space: pre-wrap;\n",
" }\n",
" </style>\n",
" "
],
"text/plain": [
"<IPython.core.display.HTML object>"
]
},
"metadata": {}
},
{
"output_type": "display_data",
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "23c59d08bf5f4af793f2bcb7754655d8",
"version_minor": 0,
"version_major": 2
},
"text/plain": [
"replacing speaker names: 0%| | 0/64021 [00:00<?, ?it/s]"
]
},
"metadata": {}
},
{
"output_type": "stream",
"name": "stdout",
"text": [
"A preview of the first 20 lines of /content/V2-rename-training_script.txt is: \n",
"\n",
"['hey guys, i created this channel to plan a trip to acadia sometime this '\n",
" 'summer. i couple more people might be joining depending on who likes that '\n",
" 'message\\n',\n",
" \"cool! there isn't any major thing on my calander this summer for work / \"\n",
" \"won't be in the office until the fall so i'm super flexible\\n\",\n",
" \"i'm thinking the first week of june, 2 nights camping, 2 nights in bar \"\n",
" 'harbor?\\n',\n",
" 'have you bought kindred?\\n',\n",
" 'i think it was still pretty chilly when my sister went late may, so early '\n",
" 'june might not be _warm_ yet? but probably less crowded?\\n',\n",
" \"i can do pretty much whenever in june though assuming i'm fully vaccinated \"\n",
" 'by then\\n',\n",
" \"i'm trying\\n\",\n",
" '^ same same\\n',\n",
" 'woo!\\n',\n",
" 'does june 25th-28th work for everyone?\\n',\n",
" '@uvd9a7zlg we could drive up to boston on the 24th and back down on the '\n",
" '29th\\n',\n",
" 'works for me\\n',\n",
" 'yeah!\\n',\n",
" \"i'll block off my calander at work\\n\",\n",
" \"i'm reading\\n\",\n",
" 'we still vibing???\\n',\n",
" 'sad\\n',\n",
" 'do i need camping gera\\n',\n",
" 'are we camping\\n',\n",
" \"i'm down for camping\\n\"]\n"
]
}
],
"source": [
"local_name = update_script_names(local_name)\n",
"\n",
"file_name = local_name # update if using fn above\n",
"\n",
"preview_script(file_name)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "LdpZQXknFNY3"
},
"source": [
"## Train / Finetune GPT-2\n",
"\n",
"The next cell will start the actual finetuning of GPT-2 in aitextgen. It runs for `num_steps`, and a progress bar will appear to show training progress, current loss (the lower the better the model), and average loss (to give a sense on loss trajectory).\n",
"\n",
"The model will be saved every `save_every` steps in `trained_model` by default, and when training completes. If you mounted your Google Drive, the model will _also_ be saved there in a unique folder.\n",
"\n",
"The training might time out after 4ish hours; if you did not mount to Google Drive, make sure you end training and save the results so you don't lose them! (if this happens frequently, you may want to consider using [Colab Pro](https://colab.research.google.com/signup))\n",
"\n",
"Important parameters for `train()`:\n",
"\n",
"- **`line_by_line`**: Set this to `True` if the input text file is a single-column CSV, with one record per row. aitextgen will automatically process it optimally.\n",
"- **`from_cache`**: If you compressed your dataset locally (as noted in the previous section) and are using that cache file, set this to `True`.\n",
"- **`num_steps`**: Number of steps to train the model for.\n",
"- **`generate_every`**: Interval of steps to generate example text from the model; good for qualitatively validating training.\n",
"- **`save_every`**: Interval of steps to save the model: the model will be saved in the VM to `/trained_model`.\n",
"- **`save_gdrive`**: Set this to `True` to copy the model to a unique folder in your Google Drive, if you have mounted it in the earlier cells\n",
"- **`fp16`**: Enables half-precision training for faster/more memory-efficient training. Only works on a T4 or V100 GPU.\n",
"\n",
"Here are other important parameters for `train()` that are useful but you likely do not need to change.\n",
"\n",
"- **`learning_rate`**: Learning rate of the model training.\n",
"- **`batch_size`**: Batch size of the model training; setting it too high will cause the GPU to go OOM. (if using `fp16`, you can increase the batch size more safely)"
]
},
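{
"cell_type": "markdown",
"metadata": {},
"source": [
"A hedged sketch of what a `train()` call using the parameters described above could look like. The step counts and intervals mirror the run captured in the output below; treat every value as illustrative rather than a recommendation:\n",
"\n",
"```python\n",
"ai.train(\n",
"    file_name,              # plain-text training file prepared earlier\n",
"    line_by_line=False,     # True only for single-column CSV input\n",
"    num_steps=30000,        # total training steps\n",
"    generate_every=1000,    # print sample generations every N steps\n",
"    save_every=1500,        # checkpoint the model every N steps\n",
"    save_gdrive=False,      # True to also copy checkpoints to Google Drive\n",
"    learning_rate=1e-3,\n",
"    batch_size=1,           # raise cautiously; too high and the GPU will go OOM\n",
"    fp16=False,             # only enable on a T4 or V100\n",
")\n",
"```"
]
},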
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 17
},
"id": "sDmyL2A7uhaL",
"outputId": "349c76b6-7cff-42ce-98ab-63953d2cfd91",
"cellView": "form"
},
"outputs": [
{
"output_type": "display_data",
"data": {
"text/html": [
"\n",
" <style>\n",
" pre {\n",
" white-space: pre-wrap;\n",
" }\n",
" </style>\n",
" "
],
"text/plain": [
"<IPython.core.display.HTML object>"
]
},
"metadata": {}
}
],
"source": [
"import gc, os\n",
"from os.path import join\n",
"from datetime import datetime\n",
"#@title admin params & setup\n",
"#@markdown creates folders etc\n",
"base_dir = \"/content/drive/MyDrive/Programming/ai-msgbot\" #@param {type:\"string\"}\n",
"# update to yours\n",
"def get_timestamp():\n",
" return datetime.now().strftime(\"%b-%d-%Y_t-%H\")\n",
"\n",
"temp_gpu_path = join(base_dir, \n",
" \"GPT2-conversational-{sz}-{dt}\".format(sz=model_size,\n",
" dt=get_timestamp(),\n",
" )\n",
" )\n",
"os.makedirs(temp_gpu_path, exist_ok=True)\n",
"gc.collect()\n",
"\n",
"fp16_kwargs = {\n",
" \"amp_backend\":'apex'\n",
"}\n",
"#@markdown **tips for training:**<br> do not use warmup steps. \n",
"#@markdown if run OOM, decrease: `batch_size`, `gradient_accumulation_steps`, dataset size (length of text file input). \n",
"#@markdown or increase number of layers frozen."
]
},
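{
"cell_type": "markdown",
"metadata": {},
"source": [
"How the admin params above would plausibly feed into the training call (a sketch; whether `fp16_kwargs` can be forwarded depends on aitextgen passing extra keyword arguments through to the PyTorch Lightning Trainer):\n",
"\n",
"```python\n",
"ai.train(\n",
"    file_name,\n",
"    output_dir=temp_gpu_path,      # checkpoints land in the timestamped Drive folder\n",
"    num_steps=30000,\n",
"    save_every=1500,\n",
"    generate_every=1000,\n",
"    batch_size=1,\n",
"    # fp16=True, **fp16_kwargs,    # only on a T4/V100, and only if kwargs are forwarded\n",
")\n",
"```"
]
},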
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 1000,
"referenced_widgets": [
"10460fbc850a45beb8605651f71e529f",
"4c79c10727f4403bbe560efc30454e18",
"3438b0dbea6a4c4cb7e1ff337a4ea0e1",
"a31bd62c677a450a9317ec78320ab7f3",
"a113fd6821ee4c5c96232329090f42a6",
"7b5c1e61127243d98e406587692e0a92",
"b391a8443b144c0b80ccf4f3ee02d2a7",
"8ecb280849a34b64b2bcb493a8413e44",
"b31f57ca7c264e70a71ad07fda5968a1",
"53e65a3a63f246e18f185ce8c542f9b7",
"477eea6493a04e02add3a50cb4b2e963",
"bb3fa1cbdc2e45e280662641de2cba54",
"8c495a3c7f00497d9c4ef368ab85b9cb",
"013e7cce53744d059a6c6c637c2fbaa2",
"80eeb471d6ef46f098b144d3d90921c2",
"0081a70f0cb141bc96aeca545d9d47fc",
"62288b0c761a4650a4ae44e3b0b9d0fd",
"d846702f2a374c38a88aa2fc049b8bd7",
"040b0ca2c86a4a169aee5bb8f3f72bb5",
"2c896eb7be3049bf892b65ebc2d8f1dd",
"cb49e1d86276429ab833c4a2f3b550e9",
"db386a505b954ef9916da3034b3d805c"
]
},
"id": "aeXshJM-Cuaf",
"outputId": "287c4285-d796-45ed-bb4f-7d4e4e17ec50"
},
"outputs": [
{
"output_type": "display_data",
"data": {
"text/html": [
"\n",
" <style>\n",
" pre {\n",
" white-space: pre-wrap;\n",
" }\n",
" </style>\n",
" "
],
"text/plain": [
"<IPython.core.display.HTML object>"
]
},
"metadata": {}
},
{
"output_type": "stream",
"name": "stderr",
"text": [
"01/29/2022 04:04:52 — INFO — aitextgen — Loading text from /content/V2-rename-training_script.txt with generation length of 1024.\n"
]
},
{
"output_type": "display_data",
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "10460fbc850a45beb8605651f71e529f",
"version_minor": 0,
"version_major": 2
},
"text/plain": [
" 0%| | 0/64021 [00:00<?, ?it/s]"
]
},
"metadata": {}
},
{
"output_type": "stream",
"name": "stderr",
"text": [
"01/29/2022 04:04:52 — INFO — aitextgen.TokenDataset — Encoding 64,021 sets of tokens from /content/V2-rename-training_script.txt.\n",
"01/29/2022 04:04:53 — INFO — aitextgen — Layer freezing enabled for model training.\n",
"/usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/connectors/callback_connector.py:148: LightningDeprecationWarning: Setting `Trainer(checkpoint_callback=False)` is deprecated in v1.5 and will be removed in v1.7. Please consider using `Trainer(enable_checkpointing=False)`.\n",
" f\"Setting `Trainer(checkpoint_callback={checkpoint_callback})` is deprecated in v1.5 and will \"\n",
"/usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/connectors/callback_connector.py:91: LightningDeprecationWarning: Setting `Trainer(progress_bar_refresh_rate=20)` is deprecated in v1.5 and will be removed in v1.7. Please pass `pytorch_lightning.callbacks.progress.TQDMProgressBar` with `refresh_rate` directly to the Trainer's `callbacks` argument instead. Or, to disable the progress bar pass `enable_progress_bar = False` to the Trainer.\n",
" f\"Setting `Trainer(progress_bar_refresh_rate={progress_bar_refresh_rate})` is deprecated in v1.5 and\"\n",
"/usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/connectors/callback_connector.py:168: LightningDeprecationWarning: Setting `Trainer(weights_summary=None)` is deprecated in v1.5 and will be removed in v1.7. Please set `Trainer(enable_model_summary=False)` instead.\n",
" \"Setting `Trainer(weights_summary=None)` is deprecated in v1.5 and will be removed\"\n",
"01/29/2022 04:04:53 — INFO — pytorch_lightning.utilities.distributed — GPU available: True, used: True\n",
"01/29/2022 04:04:53 — INFO — pytorch_lightning.utilities.distributed — TPU available: False, using: 0 TPU cores\n",
"01/29/2022 04:04:53 — INFO — pytorch_lightning.utilities.distributed — IPU available: False, using: 0 IPUs\n",
"01/29/2022 04:04:53 — INFO — pytorch_lightning.accelerators.gpu — LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]\n"
]
},
{
"output_type": "display_data",
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "bb3fa1cbdc2e45e280662641de2cba54",
"version_minor": 0,
"version_major": 2
},
"text/plain": [
" 0%| | 0/30000 [00:00<?, ?it/s]"
]
},
"metadata": {}
},
{
"output_type": "stream",
"name": "stderr",
"text": [
"/usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/trainer.py:1824: LightningDeprecationWarning: `trainer.progress_bar_dict` is deprecated in v1.5 and will be removed in v1.7. Use `ProgressBarBase.get_metrics` instead.\n",
" \"`trainer.progress_bar_dict` is deprecated in v1.5 and will be removed in v1.7.\"\n"
]
},
{
"output_type": "stream",
"name": "stdout",
"text": [
"\u001b[1m1,000 steps reached: generating sample texts.\u001b[0m\n",
"==========\n",
"2.2.3.4|https://open.spotify.com/track/5mfvxmw3gqpxyjtk/2.2.3.4>\n",
"i feel like this is kind of like the \"theory of love and the theory of money\"\n",
"this shows the end game\n",
"and it's all i can do\n",
"i'm happy to see this\n",
"it's just me\n",
"i'm happy to see this\n",
"oh i'm so happy to see this\n",
"can't believe this is actually happening\n",
"i'm so happy\n",
"i'm so happy to see the end of the line\n",
"i'm so happy to see this\n",
"how did we get here\n",
"i'm so happy to see\n",
"the end of the line\n",
"gonna go to the moon\n",
"<https://twitter.com/shonenjump/status/140909873300302645?s=21|https://twitter.com/shonenjump/status/1409873300302025?s=21>\n",
"i'm so happy\n",
"i have no idea what is the end of the line\n",
"this is so beautiful\n",
"oh damn i think i might do this now\n",
"i'm so happy\n",
"==========\n",
"\u001b[1m1,500 steps reached: saving model to //content/drive/MyDrive/Programming/ai-msgbot/GPT2-conversational-355M-Jan-29-2022_t-04\u001b[0m\n",
"\u001b[1m2,000 steps reached: generating sample texts.\u001b[0m\n",
"==========\n",
" or other\n",
"@uuyec9u66\n",
"ok i bought a ticket for a w/o yesterday\n",
"did we get a w/o at all\n",
"i'm still holding it\n",
"i think you are right. but i was looking at the calendar. the transaction is in the name of the person who bought the ticket. so not sure if it's your original or someone else. i think you are right. i bought a ticket for the cheapest ticket and it showed the ticket was for a person named josh\n",
"i'm gonna check that out\n",
"looks like you'll get a refund if you cancel the reservation\n",
"if you don't cancel it still seems like a good way to go, right?\n",
"i don't want to fight you\n",
"when you cancel the reservation you arent refunded. you are just released\n",
"but you can still request a refund\n",
"@uuyec9u66 @uv151t3ft\n",
"i just checked the calendar\n",
"yeah i had to cancel the reservation with the company\n",
"they never said that\n",
"okay i will refund you\n",
"i'll pay them back\n",
"and you can still use the website to buy the ticket\n",
"and you don't need to submit the payment\n",
"lolololol\n",
":simple\n",
"==========\n",
"\u001b[1m3,000 steps reached: saving model to //content/drive/MyDrive/Programming/ai-msgbot/GPT2-conversational-355M-Jan-29-2022_t-04\u001b[0m\n",
"\u001b[1m3,000 steps reached: generating sample texts.\u001b[0m\n",
"==========\n",
"\n",
"lmao\n",
"my team is a bit broken rn but i'm about to turn it on so i can trade items\n",
"harden must be our best asset\n",
"harden should be our best asset\n",
"yep\n",
":is_that_legal:\n",
":is_that_legal:\n",
"damn\n",
"i'm in\n",
":sadge:\n",
":sadge:\n",
"how long does that take\n",
"i don't want to trade him until i've had time to think about it\n",
":sadge:\n",
"harden is a legend\n",
"it's time to move on\n",
"but for now he's my only one\n",
"i didn't realize how much the rockets could have had in return for him\n",
"i'm hoping we can get assets in return soon\n",
"i think we can get assets for harden\n",
"i think we can get assets for harden\n",
"he's a legend\n",
":sadge:\n",
"harden is the goat\n",
"so we can get assets for harden\n",
"how did we get harden\n",
":sadge:\n",
"we should've gotten assets for harden before the deadline\n",
":sadge:\n",
":sadge:\n",
"harden is still my only one\n",
"i think we can get\n",
"==========\n",
"\u001b[1m4,000 steps reached: generating sample texts.\u001b[0m\n",
"==========\n",
"\n",
"if you don't think i can save it\n",
"you can ask for kristina's permission\n",
"and then you can just wait\n",
"ok you can't do both\n",
"ok how about this\n",
"okay but will you stay on the'recurring'?\n",
"no\n",
"also dont forget the'recurring'\n",
"okay\n",
"if you don't do the'recurring'\n",
"do you think that you have recurred?\n",
"i don't think so. i think that i've recurred twice.\n",
"i have only seen'recurring' twice, so can't say it's always happening.\n",
"i think it can happen in one-off circumstances. i think that if i told you that i was going to do it, you'd think it's not happening and i'd have to ask for permission.\n",
"i think that you can't get permission unless you recur, unless you've recurred multiple times\n",
"okay thanks fam\n",
"did you stay on'recurring' or do you think you recurred twice?\n",
"ya it depends on the mood of the situation.\n",
"i think that people have only recurred once when things were going well. the'recurring' mood generally only happens if the weather's looking good and people\n",
"==========\n",
"\u001b[1m4,500 steps reached: saving model to //content/drive/MyDrive/Programming/ai-msgbot/GPT2-conversational-355M-Jan-29-2022_t-04\u001b[0m\n",
"\u001b[1m5,000 steps reached: generating sample texts.\u001b[0m\n",
"==========\n",
"-bitch-willy or some shit\n",
"i finished the book on time and don't regret it\n",
"i'm sad that i didn't like it enough to keep reading\n",
"i'm glad i didn't like it enough to keep reading\n",
"i'm sad that i didn't like it enough to keep reading\n",
"i'm sad that i didn't like it enough to keep reading\n",
"i'm sad that i didn't like it enough to keep reading\n",
"i'm sad that i didn't like it enough to keep reading\n",
"i'm sad that i didn't like it enough to keep reading\n",
"i'm sad that i didn't like it enough to keep reading\n",
"i'm sad that i didn't like it enough to keep reading\n",
"i'm sad that i didn't like it enough to keep reading\n",
"i'm sad that i didn't like it enough to keep reading\n",
"i'm sad that i didn't like it enough to keep reading\n",
"i'm sad that i didn't like it enough to keep reading\n",
"i'm sad that i didn't like it enough to keep reading\n",
"i'm sad that i didn't like it enough to keep reading\n",
"i'm sad that i didn't like it enough to keep reading\n",
"i'm sad that i didn't like it enough\n",
"==========\n",
"\u001b[1m6,000 steps reached: saving model to //content/drive/MyDrive/Programming/ai-msgbot/GPT2-conversational-355M-Jan-29-2022_t-04\u001b[0m\n",
"\u001b[1m6,000 steps reached: generating sample texts.\u001b[0m\n",
"==========\n",
" is it?\n",
"i'll take this week off\n",
"just have more fun anyway\n",
"but i'll probably check out the books\n",
"what are you reading\n",
"i'll start with the good reads and ask for suggestions\n",
"i've read the east of eden and the dune books\n",
"they're good\n",
"just finished good pick up\n",
"just picked up east of eden, which is pretty good. its pretty good but not amazing\n",
"just started reading though\n",
"just need to keep up with normal people instead of the drags and deaths of normal people\n",
"it is drags and deaths but it isn't boring\n",
"just finished normal people. it is drags and deaths but it isn't boring\n",
"i think it is boring\n",
"i haven't read normal people in a while but i've heard it's good. it doesn't like epic poems or anything like that usually\n",
"just started normal people\n",
"just started normal people. it is drags and killings\n",
"just finished normal people\n",
"i just started the normal people today\n",
"just finished normal people. i liked it though. it wasn't boring though\n",
"just finished normal people. it is drags and killings\n",
"just started normal people. just didn't care for it much but it wasn't boring\n",
"just started normal people\n",
"==========\n",
"\u001b[1m7,000 steps reached: generating sample texts.\u001b[0m\n",
"==========\n",
"-and-build-the-next-nintendo-switch-promo-soundcloud-stream/>\n",
"this is the vice chair i'll use for the funeral\n",
"lmao\n",
"this guy's gonna get a ton of press for this (lmao). but i'm sure they'll say one of these chairs is good enough\n",
"ya i know lol\n",
"the problem is i don't think this guy has a sense of style\n",
"ya he definitely does lol\n",
"i was joking but i just meant i was memeing lol\n",
"and i don't think this is the first time i've seen a \"editor\" at akili. i always thought he was exaggerating the women's health at best\n",
"i just saw the whole \"project manager\" thing lol\n",
"<https://www.youtube.com/watch?v=5z-rkdfqy8&ab_channel=umap&ab_channel=umap>\n",
"this is the new dune trailer lol\n",
"i just finished the movie dune. very good\n",
"great mindfuck trailer\n",
"watch it to understand what you are missing out on\n",
"watch it to understand what you are missing out on\n",
"did you finish?\n",
"just watched it\n",
"did you finish??\n",
"\n",
"==========\n",
"\u001b[1m7,500 steps reached: saving model to //content/drive/MyDrive/Programming/ai-msgbot/GPT2-conversational-355M-Jan-29-2022_t-04\u001b[0m\n",
"\u001b[1m8,000 steps reached: generating sample texts.\u001b[0m\n",
"==========\n",
"i think i might just change my mind\n",
"no\n",
"i'll look into it\n",
"just tell me i want to\n",
"just tell me\n",
"i'm just going to make plans now\n",
"okay\n",
"okay\n",
"can you guys do wednesday\n",
"jk\n",
"okay\n",
"can you just work out and we can talk about it in like 90 min?\n",
"no\n",
"i don't want to do thursday\n",
"okay\n",
"thursday it is\n",
"thursday it is\n",
"i'm trying to stay productive and productive\n",
"so i'm just changing my phone and i need to get a no android reboot to do it\n",
"i'm assuming reboot doesn't work for me\n",
"lol\n",
"i'm just doing this bc i didn't want to be a no phone guy this week and i didn't want to do this\n",
"i just wanna do thursday no\n",
"thursday it's actually going to be a no android reboot i think\n",
"i'm sorry just this thursday is going to be a no android reboot\n",
"i'm sorry just wanted to do thursday\n",
"how can you forget the 6 you guys have\n",
"okay\n",
"thursday it's actually going to be a no android reboot\n",
"i'm sorry just wanted to do it bc i didn't want to do\n",
"==========\n",
"\u001b[1m9,000 steps reached: saving model to //content/drive/MyDrive/Programming/ai-msgbot/GPT2-conversational-355M-Jan-29-2022_t-04\u001b[0m\n",
"\u001b[1m9,000 steps reached: generating sample texts.\u001b[0m\n",
"==========\n",
" to a point where you have to go to someplace to do it. if i'm being unfair you'll just have to buy the airline tickets, but when you get to the destination you have to be in a specific location to get it\n",
"i don't think we should ever have to do it but there's no point trying to convince people not to buy it\n",
"it is a purely social construct\n",
"i'm fed up with the system. buy a ticket to a movie you didn't know was coming to you\n",
"i mean i'm fed up with the fact that i have to do it. i'm just so tired of it that i don't even think about it\n",
"what is it then that you want to do?\n",
"i'm just going to do it\n",
"i'll do it\n",
"do you have other friends that like shit in your presence?\n",
"nah i'm in complete silence\n",
"it's mostly just that people act very similarly to me. i've gotten used to not interacting with strangers in my social zone but i still have some weird social interactions when i do. so i still have some weird interactions with strangers in my social zone but not as awkward as when i do with them, even if i do it more obviously\n",
"i'm not _really_\n",
"==========\n",
"\u001b[1m10,000 steps reached: generating sample texts.\u001b[0m\n",
"==========\n",
" was a good one to begin the day\n",
"damn the choco taco i was served on was alright but it was way better than what was there\n",
"yeah it looked good but the texture wasnt great\n",
"i'm not as familiar with the taco side of the trade as i am of the falamic one but the one thing i always notice is falamic paprika\n",
"do you have a preference\n",
"i think you like to have a lot of people if you care about the result\n",
"i think saturday isnt a good one to watch, it's kind of just what i watch Saturday\n",
"but if you're okay with what you do on saturday, let's go\n",
"you can go as early as noon for that\n",
"looks like another group i'm going to check out on tuesday. i've been watching a lot of star wars stuff so far (episode 2, 3, etc.) which i'm liking so far and very much into\n",
"<https://www.theverge.com/2021/11/29/22460618/movie-news-mandalorian-impearl-bust-bust-wars-premium-streaming>\n",
"<https://twitter.com/cnn/status/\n",
"==========\n",
"\u001b[1m10,500 steps reached: saving model to //content/drive/MyDrive/Programming/ai-msgbot/GPT2-conversational-355M-Jan-29-2022_t-04\u001b[0m\n",
"\u001b[1m11,000 steps reached: generating sample texts.\u001b[0m\n",
"==========\n",
"5\n",
"imagine if we had rosetta and paul george on our team in the year 2000\n",
"in today's nba you can't have a good star with a mediocre superstar\n",
"like lebron had his brightest moment in the nba in the late 90s\n",
"go back to the 90s and you'll see that the best stars had that kind of success\n",
"so most of the best players went on to great things in the early 2000s\n",
"like gerald green had that crazy peak\n",
"curry could dunk a basketball in the 4th quarter\n",
"can't make shots in the NBA\n",
"or he'd get a phone hook up and call the knicks\n",
"lol that is a football score lol\n",
"why tf would kawhi ever dunk a basketball\n",
"like seriously though paolo fucking dunks\n",
"the best player of all time on the rockets is a legend\n",
"not like he had a superstar that he could buy\n",
"like harden\n",
"his career is probably more important\n",
"eh he had the green light but it was still early\n",
"like obviously he could dunk a lot but he was still in his prime\n",
"curry could never dunk a basketball\n",
"not like he had the green light\n",
"if he had one that was the sure thing he'd\n",
"==========\n",
"\u001b[1m12,000 steps reached: saving model to //content/drive/MyDrive/Programming/ai-msgbot/GPT2-conversational-355M-Jan-29-2022_t-04\u001b[0m\n",
"\u001b[1m12,000 steps reached: generating sample texts.\u001b[0m\n",
"==========\n",
"\n",
"i thought about it and finally decided to forgo it\n",
"i've been vaping longer, would recommend\n",
"yeah it's not bad IMo\n",
"yeah it's def a cultist run, there's a ton of literature on it and by that i mean there isn't anything bad to it, it's just that it takes a lot of time\n",
"i think it's a cultist run\n",
"like ran for over a year\n",
"basically it's a cult of committed will and it's just that\n",
"but from what i've read it's a pretty cool thing\n",
"well there are books too, i'll look at those\n",
"but like it's just not for me\n",
"read more\n",
"i'll probably get re-reading vox if it's good lol\n",
"for me, it's just mostly \"good vs evil\n",
"but i still have like holy shit texts in my queue and it's not even a meme\n",
"@uvbavmtk7\n",
"i don't think you can win unwinnable games this year\n",
"like i played a ton of league today to try and win it all\n",
"up to 16 hours of games today 😈\n",
"i've been watching the holy shit i've-never-seen-shit streamers stream it and it just makes\n",
"==========\n",
"\u001b[1m13,000 steps reached: generating sample texts.\u001b[0m\n",
"==========\n",
"\n",
"i am literally the same age lol\n",
"hmmmmm i think it's only the last 3 years\n",
"it's not totally obvious but she definitely shows a predisposed predisposition toward violence as a teen\n",
"i think it ties into that earlier trauma and grief for sure\n",
"and i think she's into that as a teen and also when she started investigating herself\n",
"like i think she did also start reading more andrew and i think she's into that like a very intro programming stuff and also believes that she's a messiah in a very dark way\n",
"when she's in parest condition and violence and all of that stuff, she is showing a lot of vulnerability and also esp with me, i think she's into that as a teen and also believes that messiah/heroism in a very dark way\n",
"like i think i saw her earlier writings/dune when she was like 9 or something, i think i was like 11 or something, but idk\n",
"i think i saw the writings when she was like 10 or 11 or something, but i think i think i read it at 12 or something\n",
"right?\n",
"she's suffering through it and is in a very vulnerable place and is also very introspective so i think it's place for that\n",
"==========\n",
"\u001b[1m13,500 steps reached: saving model to //content/drive/MyDrive/Programming/ai-msgbot/GPT2-conversational-355M-Jan-29-2022_t-04\u001b[0m\n",
"\u001b[1m14,000 steps reached: generating sample texts.\u001b[0m\n",
"==========\n",
"://www.youtube.com/watch?vw=2ppcqr8w1jg|https://www.youtube.com/watch?v=2ppcqr8w1jg>\n",
"just watched this, 2nd best video ole harris and ad harte ever did on\n",
"hyelim is def a goat\n",
"this is proof\n",
"anyone interested in running this marathon\n",
"not for the stuff, no\n",
"and i have been watching http://youtu.be/mkevgwbjuece>\n",
"hyelim is going to be the 20th century of nudes\n",
"if you somehow missed this, you're a cunt\n",
"<https://youtu.be/-xhjzjq2jne|https://youtu.be/mkevgwbjuece>\n",
"what is this channel @uvbd7h2nlx\n",
"what is this\n",
"no politics, justhip\n",
"i've been wanting to make a flick for so long and finally got my shot gun today\n",
"what are peoples thoughts on this groupme i just made for you\n",
"@uvapgs1b6 is it just you and me\n",
"not fit for marriage?\n",
"the mods?!?\n",
"yw\n",
"==========\n",
"\u001b[1m15,000 steps reached: saving model to //content/drive/MyDrive/Programming/ai-msgbot/GPT2-conversational-355M-Jan-29-2022_t-04\u001b[0m\n",
"\u001b[1m15,000 steps reached: generating sample texts.\u001b[0m\n",
"==========\n",
"\n",
"<https://twitter.com/crunchyroll/status/14506825374905560289?s=21|https://twitter.com/crunchyroll/status/14506825374905560289?s=21>\n",
"what does your family think\n",
"the only thing i know about cliches parents is that they think they should be fun and romantic but not in a relationship\n",
"cliches…\n",
":prayge:\n",
"<https://youtu.be/nushamedm4iw2i|https://youtu.be/nushamedm4iw2i>\n",
"if you're gonna be playing exalted, akeem would like to add you to the mind of our resident psychopath\n",
"what does he want us to do with him 🙃\n",
"i haven't even gotten to the \"what do you do with my money\" part yet\n",
"i'm just hoping it'll stay the same after this week\n",
"as he grips his purse, we will be long gone\n",
"_-\n",
"i don't quite understand this diamond\n",
"i mean even with unlimited money, you can't just keep it forever\n",
"i think we should move because it's ours and it's ours\n",
"pappy reading\n",
"==========\n",
"\u001b[1m16,000 steps reached: generating sample texts.\u001b[0m\n",
"==========\n",
"\n",
"also i think you should play the rest of the game\n",
"the rest is going to be up to others\n",
"<https://www.reddit.com/r/nba/comments/rq3nji/how_a_15_year_old_knot_is_that_player_that_wielded_a_29/|https://www.reddit.com/r/nba/comments/rq3nji/how_a_15_year_old_knot_that_player_that_wielded_a_29/>\n",
"i think he should go for the max instead of the min\n",
"and you have a much better shot if he gets vogled\n",
"yep\n",
"i think he'll get you more rings\n",
"i think he should go for the 2x\n",
"seems like you might get the nod\n",
"i'm gonna pass on that\n",
"ya he got me\n",
"i was thinking you'd want kyrie\n",
"but obvi don't pass him up like i've been saying\n",
"i've been trying to punt the ton of screens i've gotten away with\n",
"<https://www.reddit.com/r/nba/comments/rq8c3\n",
"==========\n",
"\u001b[1m16,500 steps reached: saving model to //content/drive/MyDrive/Programming/ai-msgbot/GPT2-conversational-355M-Jan-29-2022_t-04\u001b[0m\n",
"\u001b[1m17,000 steps reached: generating sample texts.\u001b[0m\n",
"==========\n",
"\n",
"we're actually the fucking time to get a handle on it, brother\n",
"is there any way we could buy it?\n",
"i think there are articles here yes\n",
"<https://www.investopedia.com/ask/answerschwander/options/19-over-1-months-purchase-calculators|https://www.investopedia.com/ask/answanders/options/19-over-1-months-purchase-calculators>\n",
"would it sell for $100 tomorrow?\n",
"would you do it\n",
"idk i'm a moron\n",
"no i wouldnt\n",
"i'm just trying to learn about options trading and how i can get into adv\n",
"🤡\n",
"that's called amc\n",
"but yeah the price has to be low\n",
"or something\n",
"if it hits 200, i'm buying\n",
"looks like bb is talking about options being used to buy houses rn\n",
"fuck the banks\n",
"i just don't understand how they are using it\n",
"not trying to buy something, just trying to get you out of bb\n",
"amc is a bank\n",
"that's what i've been saying\n",
"looks like you're right @u01fuun86kg\n",
"==========\n",
"\u001b[1m18,000 steps reached: saving model to //content/drive/MyDrive/Programming/ai-msgbot/GPT2-conversational-355M-Jan-29-2022_t-04\u001b[0m\n",
"\u001b[1m18,000 steps reached: generating sample texts.\u001b[0m\n",
"==========\n",
" on that. i think the best word for \"is\" is \"is\". it can also mean many things, but not like \"will\" or \"of\" which i think it means something like \"is\"\n",
"its literally what is\n",
"that's what i used for \"can't\"\n",
"okay good good\n",
"the best word for good z is \"tell me more\n",
"we have a whole article about how nba teams are using it\n",
"so far i have come across a couple of productive homecour in the past 24 hours\n",
"bear in the warm up, i'm shy about this, but it's something i think the 76ers should look into for sure\n",
"they are the last team lightly left standing\n",
"makes sense\n",
"i was gonna ask, but i got cold feet?\n",
"getting cold feet is most definitely goat\n",
"(as cold as they can get)\n",
"getting the benefit of the doubt\n",
"anyways hopefully the 76ers have something worthwhile to report about\n",
"my trade came back to bite them in the ass, unfortunately\n",
"good for them\n",
"at least more believed it. i think they'll do better for themselves if they trade omar jackson instead of simmons\n",
"good for him\n",
"saw that\n",
"banter https://archive.org/\n",
"==========\n",
"\u001b[1m19,000 steps reached: generating sample texts.\u001b[0m\n",
"==========\n",
" of the word\n",
"it's a strange way to do it 🤑\n",
"i like the idea of using relatively high-level cognitive functions to break down the programming into manageable chunks of code. you can have multiple run through code quickly, and still get to the final product. but making a bit of stuff super big and still getting stuff done is big\n",
"i've been asked \"can you apply keyword functions to map(lex) to right? which is invoked by a couple of attendees, who are paid in Google engineering maga hat, and/or the chapastika\n",
"i wouldn't necessarily say that they're used in mock interviews. they were @u0195rhclm6's\n",
"what i was asking for was code that makes phone unlocks remotely\n",
"so i asked them if they could apply a bunch of random number generators to our existing phone unlocking system, then what else\n",
"i didn't have time to think about it more, but they said they would look into it\n",
"happy birthday @uv151t3ft!\n",
"i'm gonna try and get on in here\n",
"ya btw i'm fairly confident they found some other random number generator support for this stuff (mostly because it's done, so it's easier to implement)\n",
"i'm\n",
"==========\n",
"\u001b[1m19,500 steps reached: saving model to //content/drive/MyDrive/Programming/ai-msgbot/GPT2-conversational-355M-Jan-29-2022_t-04\u001b[0m\n",
"\u001b[1m20,000 steps reached: generating sample texts.\u001b[0m\n",
"==========\n",
"\n",
"are we sure that the ffs is a bug\n",
"yeah i'm checking it out\n",
"my dad said hes selling his stuff and then it all falls apart\n",
"<https://www.reddit.com/r/pokemon/comments/qc3rvk/my_goddess_did_not_own_a_switch_this_time_husband/|https://www.reddit.com/r/pokemon/comments/qc3rvk/>\n",
"nope\n",
"🍆\n",
"do you have extra patience?\n",
"no patience for anyone, just let them die\n",
"don't talk back and forth, just give me the goods\n",
"this message was deleted.\n",
"is this your first time purchasing a pokemon game?\n",
"i am visiting my aunt this week so i can get back into the dnd game thing i've been playing\n",
"this message was deleted.\n",
"is this your first time purchasing a nintendo game?\n",
"yea i am lol\n",
"<https://www.youtube.com/watch?v=nag2gjbp-fg&ab_channel=eonisbuffering|https://www.youtube.youtube.com/watch?v=nag2gjbp-\n",
"==========\n",
"\u001b[1m21,000 steps reached: saving model to //content/drive/MyDrive/Programming/ai-msgbot/GPT2-conversational-355M-Jan-29-2022_t-04\u001b[0m\n",
"\u001b[1m21,000 steps reached: generating sample texts.\u001b[0m\n",
"==========\n",
":austin_pog:\n",
"and you have to get it to play on tv\n",
"is it on netflix?\n",
"i've seen it\n",
"not technically on netflix. only had a trial season\n",
"a friend of mine was mad about it\n",
"i think he bought it for $1k but not sure if he liked it\n",
"castlevania is fun\n",
"i mean there are people who like castlevania lol\n",
"not sure if i'd recommend\n",
"<https://youtu.be/cm6n3r0s0o|https://youtu.be/cm6n3r0s0o>\n",
"<https://www.youtube.com/watch?v=sa9yf6swf7k>\n",
"this cracked me up\n",
"<https://www.youtube.com/watch?v=9y0f7pkde6g&ab_channel=willdraw4viewz>\n",
"@uvbd7h2nl you have a lot of work to do\n",
"not drawing for whatever reason but im going to try my hand at it sometime\n",
"and im in a meeting for tomorrow\n",
"yea drawing some vids\n",
"thursday in photoshop\n",
"oh kt!\n",
"ill post more.\n",
"==========\n",
"\u001b[1m22,000 steps reached: generating sample texts.\u001b[0m\n",
"==========\n",
" be careful though. if you're hespanic, you can use this as a guide -- very common when characters are similar but different from each other\n",
"i'm getting a few emails a day from people who are like \"idi, y'all should know better than this, but if you don't show up, you're fucked\"\n",
"<https://www.reddit.com/r/smashbros/comments/93s8eu/dread_me_will_stop_slepting_on_this_new_ime_on/>\n",
"hades won the day:\n",
"<https://youtu.be/v1xvg7sfs|https://www.youtube.com/watch?v=1xnu7sfs>\n",
"@uv1av616d\n",
"pretty pog\n",
"will give you this\n",
"@uv1av616d\n",
"i think there's a crossover event in patch 1 of new smash\n",
"which devs\n",
"the devs are gonna be out celebrating\n",
"which devs is your favorite\n",
"i think imma watch it live\n",
"maybe it's gonna be vesperia\n",
"so the question is, is the crossover event exclusive to smash\n",
"yep\n",
"or no?\n",
"if so, then\n",
"==========\n",
"\u001b[1m22,500 steps reached: saving model to //content/drive/MyDrive/Programming/ai-msgbot/GPT2-conversational-355M-Jan-29-2022_t-04\u001b[0m\n",
"\u001b[1m23,000 steps reached: generating sample texts.\u001b[0m\n",
"==========\n",
" on the board of a science fund?\n",
"that's what they do\n",
"i think it's a front call but we defer to them\n",
"it's also a pretty good one\n",
"huh?\n",
"yeah i mean what do you mean?\n",
"<https://twitter.com/i/events/status/1328251303905828208?s=19|https://twitter.com/events/1328251303905828208?s=19>\n",
"aight y'all\n",
"just set the channel description:\n",
"the ceo of an investing company or a start ups?\n",
"is that a fictitious question\n",
"like you can have 3 valid answers\n",
"just 3, sine of it is enough\n",
"my favorite emoji\n",
"it's a rectangle\n",
"how do you put a question like \"does he have eyes or do he have Polaris or something please explain\" into a question\n",
"i'll take a look\n",
"the answer is terrifying\n",
"i guess i could answer by saying he has neither\n",
"he could be kuniboh\n",
"i think if you take out the middle, middle, and top 5 positions in your hand market, and then you take out the actual indexes you calculate, then it will be roughly equal in value\n",
"also\n",
"==========\n",
"\u001b[1m24,000 steps reached: saving model to //content/drive/MyDrive/Programming/ai-msgbot/GPT2-conversational-355M-Jan-29-2022_t-04\u001b[0m\n",
"\u001b[1m24,000 steps reached: generating sample texts.\u001b[0m\n",
"==========\n",
" it's not like i've never seen a pokemon game where the main character has some weight or muscle\n",
"yea i haven't\n",
"its just muscle\n",
"how is it moving\n",
"?\n",
"honestly i really don't know anything\n",
"thats a good question\n",
"the gym is literally just the weight room full of concrete\n",
"give us a skinny ass\n",
"man i was gonna post this but we are at the building now\n",
"ok\n",
"im gonna lose some fat\n",
"what's the one inch of muscle you like?\n",
"kap\n",
"the one inch of muscle is what makes you fat\n",
"🤡\n",
"thats all you @uvbd7h2nl\n",
"like 8 inches here\n",
"<https://www.youtube.com/watch?v=dwmipv6d3lc>\n",
"the first 10 seconds are pretty much iron\n",
"but then he has his leg cross so we can assume its muscle\n",
"supposedly he was gonna be a skinny or average person but i guess he's still not sure\n",
"but he also is gonna be 2 inches taller\n",
":austin_huh:\n",
"thats insane\n",
"my god\n",
"weird that he had a street clothes stylizer before he ever went to rice\n",
"street clothes are meant\n",
"==========\n",
"\u001b[1m25,000 steps reached: generating sample texts.\u001b[0m\n",
"==========\n",
" you're the best thing you've ever had?\n",
"lol o7\n",
"well we did have one earlier have one thrown as a gag and then later on they released one in like japanese\n",
"(\"just joking\") 😕\n",
"next, hannah and i have a date planned for this weekend and i have <https://www.902web.com/|https://www.902web.com/>\n",
"love hannah, hannah, stick with me\n",
"this message brought to my attention\n",
"i'm the ranking official\n",
"here's the 10 current anime and a separate discussion forum\n",
"1. airug, season 1\n",
"2. demon slayer\n",
"3. champloo\n",
"4. marx\n",
"5. motm\n",
"6. zombie apocalypse\n",
"7. devilman\n",
"8. timecop\n",
"9. hxh.\n",
"10. alabama\n",
"11. monster hunter\n",
"12. naruto\n",
"13. timecop\n",
"14. axt\n",
"15. timecop\n",
"i think we have 11 more to go\n",
"damn i'm bored\n",
"wait, i thought we had like a quota for idiots\n",
"also the demon slayer fight specifically looks like one of the most boring fights in anime i think i even watched\n",
"==========\n",
"\u001b[1m25,500 steps reached: saving model to //content/drive/MyDrive/Programming/ai-msgbot/GPT2-conversational-355M-Jan-29-2022_t-04\u001b[0m\n",
"\u001b[1m26,000 steps reached: generating sample texts.\u001b[0m\n",
"==========\n",
" it's going to be a really good game\n",
"can we just buy one ticket?\n",
"i have room for 3 more tickets\n",
"yeah i can see for sure, the improvements the lightning has made over the years\n",
"they look swaggy\n",
"<https://twitter.com/kevinoconnornba/status/14055866874576888?s=21|https://twitter.com/kevinoconnornba/status/14055866874576888?s=21>\n",
"🙃\n",
"what a treat\n",
"thanks for coming to my household @uvas0rgcc!!!!\n",
"holy shit this was so beautiful to read\n",
"one last tweet to get us hyped\n",
":lakers: will be so good\n",
"yeah the game is very good\n",
"they performed pretty well\n",
"damn i was super close to celebrating\n",
"everything was going according to plan except for collisions when they purposely chose the last 3\n",
"that was the ugliest 4 minutes of basketball i've ever seen\n",
"even worse to see\n",
"the ugliest 4 minutes of basketball i've ever seen\n",
"i'm on the bus so i can't watch\n",
"i was so excited to see them!\n",
"yeah 😂 😂\n",
"y'\n",
"==========\n",
"\u001b[1m27,000 steps reached: saving model to //content/drive/MyDrive/Programming/ai-msgbot/GPT2-conversational-355M-Jan-29-2022_t-04\u001b[0m\n",
"\u001b[1m27,000 steps reached: generating sample texts.\u001b[0m\n",
"==========\n",
"/\n",
"i am not a fan of this guy lol\n",
"same with bookshe was a fucking troll in hs\n",
"oh ok\n",
"ya i won't start on his followers\n",
"i had fun\n",
"the whole thing with him makes me want to be quiet\n",
"did you enjoy it?\n",
"=austin_eyes::austin_eyes:\n",
"yes\n",
"<https://www.youtube.com/watch?v=mqk9y4pv8lk>\n",
"watch it now\n",
"set the channel description: lampshade is a great game\n",
"lmao\n",
"i am really interested to see how they actually look if you are like and how does the gameplay look\n",
"yea i think it looks really good\n",
"yea true\n",
"ill try it\n",
"yea im really excited\n",
"it looks really cool\n",
"play the generator\n",
"yea i think it does\n",
"seems like a really cool game\n",
"hahahaha\n",
"<https://twitter.com/nibellion/status/1407669087293121?s=21|https://twitter.com/nibellion/status/1407669087293121?s=21>\n",
"ok i actually still have not even been able to\n",
"==========\n",
"\u001b[1m28,000 steps reached: generating sample texts.\u001b[0m\n",
"==========\n",
":austin_eyes:\n",
"i actually think jeez lakers fans have an amazing team this year\n",
"yeah i had the clippers in our other league a year ago and they won the league championship this year\n",
"i do believe\n",
"they lost to the warriors, this year should be a cake walk\n",
"not a bad start\n",
"idk if they are the 8 seed, just thought they are in the top 8\n",
"too bad for them lol\n",
"i forgot how shit they are last year got their big 3peat\n",
"i think they should be in the top 8 again next year and it should be fun to watch\n",
"they better be a top seed smh\n",
"harden for sure is a man of great taste\n",
"actually his playoff appearances are a night and day\n",
"even if teams lose, they come out like they have been there\n",
"i for sure dont think the warriors are ready for playoffs, can't ask rj barrett\n",
"can the celtics ever win again\n",
"yep\n",
"i'm in for it\n",
"this is make or die for this sport\n",
"rockets vs jazz is back\n",
"nu\n",
"throwback for 2010\n",
"i think there are rumors that charlotte wants out of the bubble too\n",
"good times\n",
"idk if they're\n",
"==========\n",
"\u001b[1m28,500 steps reached: saving model to //content/drive/MyDrive/Programming/ai-msgbot/GPT2-conversational-355M-Jan-29-2022_t-04\u001b[0m\n",
"\u001b[1m29,000 steps reached: generating sample texts.\u001b[0m\n",
"==========\n",
"i'm still a little bitch but at this point in my life i'm progressing so fast\n",
"the thing with these issues is that they're definitely just reason to hide to look like you're being irrational. the people im with the idea of splitting it up between like left/right may and superstitions is less real/actual than you would've thought.\n",
"also having to deal with forecast failures is facepalming and yes we all share this with you\n",
"what do you mean you can't trust it to go to extremes based on their modeling???\n",
"does anyone else feel like that is a reasonable thing to believe??? i mean i do believe it's the end of the world but we're all just joint benefiting from the fact that it is bullshit in practice\n",
"even if the end of the world is coming down to us, we should all be actively working to keep it to us so it's more manageable and manageable\n",
"while the end of the west is approaching, the media is hyping the us-anxiety of \"it could happen to us\" whenever the shit finally does, when and how bad/if it does is still a point of ridicule but we've been conditioned to constantly watch the world burn in that direction\n",
"at the same time, this\n",
"==========\n",
"\u001b[1m30,000 steps reached: saving model to //content/drive/MyDrive/Programming/ai-msgbot/GPT2-conversational-355M-Jan-29-2022_t-04\u001b[0m\n",
"\u001b[1m30,000 steps reached: generating sample texts.\u001b[0m\n",
"==========\n",
"?\n",
"we're all just here to have a good time\n",
"@uvd9a7zlg are you taking the reigns\n",
"hahah yes, my favourites are the people from my team\n",
"i still need to watch some of the teams that came to the tournament, i watched a ridiculous amount of basketball this year\n",
"the dream\n",
"i'm actually most excited for tomorrow, i've been thinking about this tournament and i'm still a little nervous lol\n",
"that's why i won't be able to watch much of the action 😂\n",
"yeah, i'm not 100% sure about which games i will be able to watch but most teams that i've seen are either 1-0 or 0-0 lol\n",
"my team, if i take time i'll watch some of those games and try to pick it up before sleeping in lol\n",
"ya i think that's probably the best time to watch that season of the nba but the worst for me\n",
"my team, if you take my team, is def the best time i can watch but it's still gonna be hard to predict\n",
"damn we get a lot of lopsided results when they lose\n",
"hopefully there's no bias\n",
"i think he was playing well\n",
"and the rockets have been playing\n",
"==========\n"
]
},
{
"output_type": "stream",
"name": "stderr",
"text": [
"/usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/trainer.py:688: UserWarning: Detected KeyboardInterrupt, attempting graceful shutdown...\n",
" rank_zero_warn(\"Detected KeyboardInterrupt, attempting graceful shutdown...\")\n",
"01/29/2022 06:13:11 — INFO — aitextgen — Saving trained model pytorch_model.bin to //content/drive/MyDrive/Programming/ai-msgbot/GPT2-conversational-355M-Jan-29-2022_t-04\n"
]
}
],
"source": [
"# DO NOT USE WARMUP STEPS\n",
"\n",
"ai.train(\n",
" file_name, # text file with training data\n",
" output_dir=temp_gpu_path, # where it saves during \"save_every\"\n",
" line_by_line=False, # if using CSV file input\n",
" from_cache=False,\n",
" num_steps=30000, # takes about 5 hours on 16 gb v100 GPU fo®r 75000\n",
" generate_every=1000,\n",
" max_grad_norm=0.5,\n",
" save_every=1500,\n",
" gradient_accumulation_steps=4,\n",
" save_gdrive=False, # this is an \"automated\" save which is worse than current method (IMO)\n",
" learning_rate=1e-3,\n",
" # fp16=True, # current bug in aitextgen is MisconfigurationException: You have asked for `amp_level='O1'` but it's only supported with `amp_backend='apex'`.\n",
" batch_size=1, # if pushing model_size you probably want to leave this at 1\n",
" freeze_layers= True, # whether to change weights on ALL layers or not\n",
" num_layers_freeze = 21, # standard GPT-2 M has 24 layers. size L has 36\n",
" # fp16_opt_level=\"O2\", # different types of FP16 are possible\n",
" )"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "IgUz5m0i1pbj",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 35
},
"outputId": "db579293-8e6b-41d8-f53e-190918a37536",
"cellView": "form"
},
"outputs": [
{
"output_type": "display_data",
"data": {
"text/html": [
"\n",
" <style>\n",
" pre {\n",
" white-space: pre-wrap;\n",
" }\n",
" </style>\n",
" "
],
"text/plain": [
"<IPython.core.display.HTML object>"
]
},
"metadata": {}
},
{
"output_type": "stream",
"name": "stdout",
"text": [
"saved! Jan-29-2022_t-06\n"
]
}
],
"source": [
"#@markdown save results to created folders\n",
"import os\n",
"from os.path import join\n",
"save_path = join(base_dir, \n",
" \"FIN-GPT-conv-{sz}-{tag}-{dt}\".format(sz=model_size,\n",
" tag=dataset_tag,\n",
" dt=get_timestamp(),\n",
" )\n",
" )\n",
"\n",
"os.makedirs(save_path, exist_ok=True)\n",
"ai.save(save_path)\n",
"\n",
"print(f'saved! {get_timestamp()}')\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "qQJgV_b4bmzd"
},
"source": [
"You're done! Feel free to go to the **Generate Text From The Trained Model** section to generate text based on your retrained model."
]
},
{
"cell_type": "markdown",
"source": [
"---"
],
"metadata": {
"id": "V8DTfkwzyImg"
}
},
{
"cell_type": "markdown",
"metadata": {
"id": "pel-uBULXO2L"
},
"source": [
"\n",
"# Use a Train Model for Generation\n",
"\n",
"If you already had a trained model from this notebook, running the next cell will copy the `pytorch_model.bin` and the `config.json`file from the specified folder in Google Drive into the Colaboratory VM. (If no `from_folder` is specified, it assumes the two files are located at the root level of your Google Drive)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "oeznI_VeaDQn",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 374
},
"outputId": "4d6a9cb7-ad93-4c25-eceb-2f098f72a50e"
},
"outputs": [
{
"output_type": "display_data",
"data": {
"text/html": [
"\n",
" <style>\n",
" pre {\n",
" white-space: pre-wrap;\n",
" }\n",
" </style>\n",
" "
],
"text/plain": [
"<IPython.core.display.HTML object>"
]
},
"metadata": {}
},
{
"output_type": "stream",
"name": "stdout",
"text": [
"Sat Jan 29 06:13:33 2022 \n",
"+-----------------------------------------------------------------------------+\n",
"| NVIDIA-SMI 495.46 Driver Version: 460.32.03 CUDA Version: 11.2 |\n",
"|-------------------------------+----------------------+----------------------+\n",
"| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |\n",
"| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |\n",
"| | | MIG M. |\n",
"|===============================+======================+======================|\n",
"| 0 Tesla V100-SXM2... Off | 00000000:00:04.0 Off | 0 |\n",
"| N/A 39C P0 39W / 300W | 11965MiB / 16160MiB | 0% Default |\n",
"| | | N/A |\n",
"+-------------------------------+----------------------+----------------------+\n",
" \n",
"+-----------------------------------------------------------------------------+\n",
"| Processes: |\n",
"| GPU GI CI PID Type Process name GPU Memory |\n",
"| ID ID Usage |\n",
"|=============================================================================|\n",
"| No running processes found |\n",
"+-----------------------------------------------------------------------------+\n"
]
}
],
"source": [
"!nvidia-smi"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "DCcx5u7sbPTD",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 124
},
"outputId": "fcd5ec32-a2c3-4a54-9332-d95d66288b17"
},
"outputs": [
{
"output_type": "display_data",
"data": {
"text/html": [
"\n",
" <style>\n",
" pre {\n",
" white-space: pre-wrap;\n",
" }\n",
" </style>\n",
" "
],
"text/plain": [
"<IPython.core.display.HTML object>"
]
},
"metadata": {}
},
{
"output_type": "stream",
"name": "stderr",
"text": [
"01/29/2022 06:13:35 — INFO — aitextgen — Loading model from provided weights and config in //content/drive/MyDrive/Programming/ai-msgbot/FIN-GPT-conv-355M-jiba-Jan-29-2022_t-06.\n",
"/usr/local/lib/python3.7/dist-packages/transformers/configuration_utils.py:354: UserWarning: Passing `gradient_checkpointing` to a config initialization is deprecated and will be removed in v5 Transformers. Using `model.gradient_checkpointing_enable()` instead, or if you are using the `Trainer` API, pass `gradient_checkpointing=True` in your `TrainingArguments`.\n",
" \"Passing `gradient_checkpointing` to a config initialization is deprecated and will be removed in v5 \"\n",
"01/29/2022 06:13:41 — INFO — aitextgen — GPT2 loaded with 354M parameters.\n",
"01/29/2022 06:13:41 — INFO — aitextgen — Using the default GPT-2 Tokenizer.\n"
]
}
],
"source": [
"# best model thus far @ 1.3B parameters and tuned for 50k steps\n",
"# from_folder = \"/content/drive/MyDrive/Programming/AI_peter/GPT-Neo-1B-V1\"\n",
"\n",
"from_folder = save_path\n",
"\n",
"if len(from_folder) > 2:\n",
"\n",
" for file in [\"pytorch_model.bin\", \"config.json\"]:\n",
" if from_folder:\n",
" copy_file_from_gdrive(file, from_folder)\n",
" else:\n",
" copy_file_from_gdrive(file)\n",
"\n",
" ai = aitextgen(model_folder=from_folder, to_gpu=True)\n",
"else:\n",
" ai = aitextgen(model_folder=\".\", to_gpu=True)\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "ClJwpF_ACONp"
},
"source": [
"## Generate Text From The Trained Model\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "3cd0RGDbJiDp"
},
"source": [
"`generate()` without any parameters generates a single text from the loaded model to the console."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "4RNY6RBI9LmL",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 947
},
"outputId": "9844bf8e-ab98-4ebc-fc9d-17ee38adf818"
},
"outputs": [
{
"output_type": "display_data",
"data": {
"text/html": [
"\n",
" <style>\n",
" pre {\n",
" white-space: pre-wrap;\n",
" }\n",
" </style>\n",
" "
],
"text/plain": [
"<IPython.core.display.HTML object>"
]
},
"metadata": {}
},
{
"output_type": "stream",
"name": "stdout",
"text": [
":austin_pog:\n",
"i think i'm gonna try to play it on tv\n",
"that was a fun game\n",
"i am so excited for pokemon but tbh i just can't recommend it\n",
"that game is actually amazing\n",
"yeah i need to play it now\n",
"so ladder\n",
"what's a ladder\n",
"<https://www.reddit.com/r/nintendoswitch/comments/nlznar/the_world_ends_with_you_final_remix_2499/|https://www.reddit.com/r/nintendoswitch/comments/nlznar/the_world_ends_with_you_final_remix_2499/>\n",
"also there is a lot of random garbage on reddit\n",
"lmao i bought into that, but didn't post about it here\n",
"<https://youtu.be/zmf9w_okqy|https://youtu.be/zmf9w_okqy>\n",
"this game looks amazing\n",
"i am so excited for it. i'm really liking it now, although it's gonna be a while before i'm done\n",
"just finished mother 3 and it was super fun\n",
"also game awards just started where you\n",
"==========\n",
"to this game\n",
"<https://twitter.com/alliemccandless/status/14415808306655248?s=21|https://twitter.com/alliemccandless/status/14415808306655248?s=21>\n",
"we playing ffxiv tonight?\n",
"i've been watching twitch and it looks so cool\n",
"i might try it\n",
"is it cross play? i can maybe play with a friend\n",
"its crossplatform\n",
"its crossplatform\n",
"<https://www.reddit.com/r/pathof exile/comments/l5w8bj2/introducing_a_netflix_app_that/|https://www.reddit.com/r/pathof exile/comments/l5w8bj2/introducing_a_netflix_app_that/>\n",
"also is it crossplatform?\n",
"its a standalone pc\n",
"and android, so we can run android on top of windows 10\n",
"so we can have a game on android, but ya we have to make space for 2giems\n",
"yeah android app support is an issue but samsung is very adamant about not adding more rn\n",
"i guess i don't know anything about android, but i\n",
"==========\n",
"that's the main problem i see\n",
"don't give into the 🌈 🐻\n",
"i'm against it\n",
"this subreddit was made for just making healthy posts about things that are really important\n",
"healthy topics\n",
"but it's not just about that\n",
"away from my main thread\n",
"away from that\n",
"away from just _reacting_ to what they were saying\n",
"away from having the story\n",
"away from _reacting_ to what theyre saying\n",
"away from _reacting_ to _reacting_ to them changing more\n",
"away from _reacting_ to _reacting_ to other people fkn around you\n",
"away from _reacting_ to _reacting_ to them slightly more\n",
"away from _reacting_ to _reacting_ to them actually not responding to what theyre saying\n",
"away from _reacting_ to _reacting_ to actual force being applied\n",
"away from _reacting_ to actual force actually being applied\n",
"off topic but i didn't really like what the last post had to say. like i get they don't like stuff like that but instead complain about not having substance first level thinking\n",
"yeah but i didn't have to explicitly know what that means b/c i was watching those debates and\n"
]
}
],
"source": [
"ai.generate(n=3, max_length=256, \n",
" temperature=1.0, top_p=0.9)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "2fSH7QgiiGi7",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 285
},
"outputId": "d4ab04a0-8449-487b-e26a-a7a92ed6f5d5"
},
"outputs": [
{
"output_type": "display_data",
"data": {
"text/html": [
"\n",
" <style>\n",
" pre {\n",
" white-space: pre-wrap;\n",
" }\n",
" </style>\n",
" "
],
"text/plain": [
"<IPython.core.display.HTML object>"
]
},
"metadata": {}
},
{
"output_type": "stream",
"name": "stdout",
"text": [
"\u001b[1mgive me a good pickup line!\n",
" person beta:\u001b[0m\n",
"<https://twitter.com/michaelharriot/status/127522384400952850?s=21|https://twitter.com/michaelharriot/status/127522384400952850?s=21>\n",
"a lot of these faces are also really cool\n",
"ya i agree though\n",
"the hardest shit i've had to do over the past couple of days\n",
"i'm still not sure what it is about\n",
"but if we had one guy who was like \"who tf are the five mexican kings\" and then we reveal what the fuck they do\n",
"the way they talk about it\n",
"makes me want to kill someone\n",
"<https://youtu.be/yfgwmzxy9txs|https://youtu.be/yfgwmzxy9txs>\n",
"@uvbavmtk7 i have a furry video game that is actually good now\n",
"thoughts?\n",
"don't play minecraft!!!!\n",
"<https://twitter.com/nintendoamerica/status/127576195182830897?s=21|https://twitter.com/nintendoamerica/status/12\n"
]
}
],
"source": [
"ai.generate(prompt=\"give me a good pickup line!\\n person beta:\", temperature=1,\n",
" min_length=10, batch_size =20, top_k=6)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "oF4-PqF0Fl7R"
},
"source": [
"If you're creating an API based on your model and need to pass the generated text elsewhere, you can do `text = ai.generate_one()`\n",
"\n",
"You can also pass in a `prompt` to the generate function to force the text to start with a given character sequence and generate text from there (good if you add an indicator when the text starts).\n",
"\n",
"You can also generate multiple texts at a time by specifing `n`. You can pass a `batch_size` to generate multiple samples in parallel, giving a massive speedup (in Colaboratory, set a maximum of 50 for `batch_size` to avoid going OOM).\n",
"\n",
"Other optional-but-helpful parameters for `ai.generate()` and friends:\n",
"\n",
"* **`min length`**: The minimum length of the generated text: if the text is shorter than this value after cleanup, aitextgen will generate another one.\n",
"* **`max_length`**: Number of tokens to generate (default 256, you can generate up to 1024 tokens with GPT-2 and 2048 with GPT Neo)\n",
"* **`temperature`**: The higher the temperature, the crazier the text (default 0.7, recommended to keep between 0.7 and 1.0)\n",
"* **`top_k`**: Limits the generated guesses to the top *k* guesses (default 0 which disables the behavior; if the generated output is super crazy, you may want to set `top_k=40`)\n",
"* **`top_p`**: Nucleus sampling: limits the generated guesses to a cumulative probability. (gets good results on a dataset with `top_p=0.9`)"
]
},
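  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For example, if you need the generated text back as a Python string (e.g. for an API), a minimal sketch combining `generate_one()` with the parameters above might look like this; the prompt and sampling values are only illustrative placeholders, and this block is not executed as part of the notebook:\n",
    "\n",
    "```python\n",
    "# minimal sketch: assumes `ai` is the aitextgen model loaded earlier in this notebook\n",
    "text = ai.generate_one(\n",
    "    prompt=\"person alpha:\\n\",  # illustrative placeholder prompt\n",
    "    max_length=256,              # number of tokens to generate\n",
    "    temperature=0.7,             # higher values produce more random text\n",
    "    top_k=40,                    # limit sampling to the top 40 guesses\n",
    "    top_p=0.9,                   # nucleus sampling\n",
    ")\n",
    "print(text)\n",
    "```"
   ]
  },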
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "8DKMc0fiej4N",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 786
},
"outputId": "1fac52a7-fe5b-413f-a490-86e008212a8f"
},
"outputs": [
{
"output_type": "display_data",
"data": {
"text/html": [
"\n",
" <style>\n",
" pre {\n",
" white-space: pre-wrap;\n",
" }\n",
" </style>\n",
" "
],
"text/plain": [
"<IPython.core.display.HTML object>"
]
},
"metadata": {}
},
{
"output_type": "stream",
"name": "stdout",
"text": [
"\u001b[1mperson beta:\n",
" i just\u001b[0m have been watching twitch in league and it looks so smooth for competitive play\n",
"it's evolved from \"game one to game two\" to now more of an event where you have to get the credits rolling which is really cool\n",
"yeah it's evolved from game one to game two\n",
"but is fairly close to video game\n",
"demons?\n",
"wilder\n",
"yeah only easy going now\n",
"<https://twitter.com/nintendoamerica/status/12850639502784272521?s=21|https://twitter.com/nintendoamerica/status/12850639502784272521?s=21>\n",
"sengun has a new xbox and so he can't play mhw tho\n",
"crazy\n",
"<https://www.theverge.com/2020/9/23/21273422/league-of-legends-esports-rebrand-riot-games-china-lpl-lec-europe>\n",
"knowing u fucks\n",
"yeah\n",
"bc na is the best.\n",
"@uvddntjvc\n",
"dogshit\n",
"yeah looking at the standings rn\n",
"interesting to see where you are now\n",
"not where you were\n",
"==========\n",
"\u001b[1mperson beta:\n",
" i just\u001b[0m have to say \"thank you for your advice!! i'm keeping my powder sharp today!!\"\n",
"@uvbavmtk7 @uvaqalwsg your final FANTASY game looks so cool\n",
"virgin andrew with a nice evening at the game store today\n",
"<https://youtu.be/a4e3lkhe9s|https://youtu.be/a4e3lkhe9s>\n",
"@uvddntjvc\n",
"lol\n",
"also game awards just started should be a lot more cool announcements\n",
"oh wow this is going to be a long day for me\n",
"<https://www.theverge.com/2020/7/21/21293638/league-of-legends-esports-rebrand-riot-games-china-lpl-lec-eurocup-ring-game-china-lpl-lec-eurocup-eurocup-ring-game-china-lpl-lec-eurocup-ring-game-china-lpl-lec-eurocup-ring-game-china-lpl-lec-eurocup-ring-game-china-l\n",
"==========\n",
"\u001b[1mperson beta:\n",
" i just\u001b[0m read the first chapter of \"this book is amazing\" by e.g. the god emperor of dune (without spoilers). sounded super cool\n",
"yeah i went in thinking that a game would be hype because it was a \"dnd game\" and that it was going to be about the same aliens but they turned out to be bangers\n",
"with the advent of our first game\n",
"> in the next month, we should be running our first game with no funding.\n",
"goal is to raise around 10m dollars.\n",
"goal is to turn our game on a more portable screen and to have a \"cinematic feel\" for the game.\n",
"in terms of our development philosophy, we believe that a more focused approach to game design can be effective for our marketing.\n",
"in the same vein, for the book, the game looks, feels, and plays just as if its own 6 months later. i'm confident that this next year will not only be a format for telling a story (and it will be, since a lot of the game elements are borrowed from the old fairy tales), but that we will also be able to tell a lot more about our lives through scavenging/spidering our way through post-apocalyptic\n"
]
}
],
"source": [
"ai.generate(\n",
" n=3, batch_size=25, prompt=\"person beta:\\n i just\", max_length=256, \n",
" temperature=1.0, top_p=0.9\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "zjjEN2Tafhl2"
},
"source": [
"For bulk generation, you can generate a large amount of texts to a file and sort out the samples locally on your computer. The next cell will generate `num_files` files, each with `n` texts and whatever other parameters you would pass to `generate()`. The files can then be downloaded from the Files sidebar!\n",
"\n",
"You can rerun the cells as many times as you want for even more generated texts!"
]
},
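  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Once the files are written, a rough, untested sketch for previewing them from this notebook could look like the following (it assumes the `save_loc` folder defined in the next cell):\n",
    "\n",
    "```python\n",
    "# minimal sketch: preview the first ~500 characters of each generated .txt file\n",
    "from pathlib import Path\n",
    "\n",
    "for txt_file in sorted(Path(save_loc).glob(\"*.txt\")):\n",
    "    preview = txt_file.read_text(encoding=\"utf-8\")[:500]\n",
    "    print(f\"--- {txt_file.name} ---\")\n",
    "    print(preview)\n",
    "```"
   ]
  },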
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "rKp18dTTj402",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 17
},
"outputId": "26d138d1-bdb9-423b-ecb7-f45940342f5c"
},
"outputs": [
{
"output_type": "display_data",
"data": {
"text/html": [
"\n",
" <style>\n",
" pre {\n",
" white-space: pre-wrap;\n",
" }\n",
" </style>\n",
" "
],
"text/plain": [
"<IPython.core.display.HTML object>"
]
},
"metadata": {}
}
],
"source": [
"save_loc = \"/content/drive/MyDrive/Programming/ai-msgbot/output_files\" #@param {type:\"string\"}\n",
"os.makedirs(save_loc, exist_ok=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "Fa6p6arifSL0",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 17
},
"outputId": "231c19de-899d-4420-f5f9-bccfcb389222"
},
"outputs": [
{
"output_type": "display_data",
"data": {
"text/html": [
"\n",
" <style>\n",
" pre {\n",
" white-space: pre-wrap;\n",
" }\n",
" </style>\n",
" "
],
"text/plain": [
"<IPython.core.display.HTML object>"
]
},
"metadata": {}
}
],
"source": [
"p_list = [\n",
" [\"how are you doing?\"+\"\\n\", \"\\n\", \"person beta:\" + \"\\n\"], \n",
" [\"person alpha:\"+\"\\n\", \"it is obvious that \"],\n",
" [\"person alpha:\"+\"\\n\", \"this is ridiculous, \"],\n",
" [\"person alpha: \\n\", \"can you help me with my homework?\"+\"\\n\", \"\\n\", \"person beta:\" + \"\\n\"],\n",
" [\"person beta:\" + \"\\n\"],\n",
" [\"sarah 'nacho cheese' stanley:\" + \"\\n\", \n",
" \"hi! I got a new phone\" + \"\\n\",\n",
" \"\\n\",\n",
" \"person beta:\\n\",],\n",
" [\"person beta: \\n\", \n",
" \"Hey I’m meeting the astrophysics professor via zoom after school any tips?\"+\"\\n\",\n",
" \"\\n\", \n",
" \"person beta:\" + \"\\n\"],\n",
" [\"person beta:\" + \"\\n\",\n",
" \"I know\"],\n",
"]\n",
"\n",
"\n",
"prompts = [\"\".join(line) for line in p_list]"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "_8et1WHilo_A",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 410
},
"outputId": "890719d2-8633-4f61-eb03-afaf734c20ca"
},
"outputs": [
{
"output_type": "display_data",
"data": {
"text/html": [
"\n",
" <style>\n",
" pre {\n",
" white-space: pre-wrap;\n",
" }\n",
" </style>\n",
" "
],
"text/plain": [
"<IPython.core.display.HTML object>"
]
},
"metadata": {}
},
{
"output_type": "stream",
"name": "stdout",
"text": [
"{'gpt-355M-textgen-01.29.2022_file-1.txt': 'how are you doing?\\n'\n",
" '\\n'\n",
" 'person beta:\\n',\n",
" 'gpt-355M-textgen-01.29.2022_file-2.txt': 'person alpha:\\nit is obvious that ',\n",
" 'gpt-355M-textgen-01.29.2022_file-3.txt': 'person alpha:\\n'\n",
" 'this is ridiculous, ',\n",
" 'gpt-355M-textgen-01.29.2022_file-4.txt': 'person alpha: \\n'\n",
" 'can you help me with my homework?\\n'\n",
" '\\n'\n",
" 'person beta:\\n',\n",
" 'gpt-355M-textgen-01.29.2022_file-5.txt': 'person beta:\\n',\n",
" 'gpt-355M-textgen-01.29.2022_file-6.txt': \"sarah 'nacho cheese' stanley:\\n\"\n",
" 'hi! I got a new phone\\n'\n",
" '\\n'\n",
" 'person beta:\\n',\n",
" 'gpt-355M-textgen-01.29.2022_file-7.txt': 'person beta: \\n'\n",
" 'Hey I’m meeting the astrophysics '\n",
" 'professor via zoom after school '\n",
" 'any tips?\\n'\n",
" '\\n'\n",
" 'person beta:\\n',\n",
" 'gpt-355M-textgen-01.29.2022_file-8.txt': 'person beta:\\nI know'}\n"
]
}
],
"source": [
"from datetime import datetime\n",
"import pprint as pp\n",
"\n",
"ds_date_time = datetime.now().strftime(\"%m.%d.%Y\")\n",
"\n",
"base_header = \"gpt-{}-textgen-{}\".format(model_size, ds_date_time)\n",
"prompt_IDs = [base_header + \"_file-{}.txt\".format(i) for i in range(1, len(prompts)+1)]\n",
"\n",
"prompt_mng = {}\n",
"for pid, text in zip(prompt_IDs, prompts):\n",
" prompt_mng[pid] = text\n",
"pp.pprint(prompt_mng)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "HPnqt-UVlEPR",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 216,
"referenced_widgets": [
"871885b9bb6d4c8c87411bf3b2870556",
"f100b9de6836422f829905103cd226d7",
"a904448aeab445b293328cfac89f59af",
"b3e8c12d08a24615bba3e4b5fb4d2600",
"4369a43ba37b4151816b12cd5ed23d9b",
"d62ad573f850461d9f243cd6ed36a669",
"cbe8b87076eb4c008657fe5d7618a1b5",
"99c09f42e6004c69a79cffaa5c99ebd5",
"078a81089b26428a87c4c0016834f693",
"c14415928dbb4b39a986af88f6666757",
"3ce0853dd0a441c7b03d355e263f0d56",
"cfc11ae49e2845299a4ff1dab847f3e8",
"0e9d78d161764956abf485be77be823a",
"d2f3a9a983f44ebf81fd91771ba1e33c",
"4de6f8eaffde46c1a49aec3aee5b048e",
"94b09ea671d84d379a69d17dddae03cc",
"0201cba7ca15495480f167f97ac91794",
"4412b586f772455a975870bebb34ee17",
"fe0d03b3f5be41ccb0586b20ee14dff9",
"e5dcef643afc491dbad24c2131048f6a",
"c55189f439d647b598e4fc0094efea82",
"0282c401f5bb41ab98bfb6eba22d8643",
"c937ea0040b3472fa6d43d009f9e0a43",
"b1181e9c400c4236b77014515d8e4ffb",
"135525b115264fa4ad1e2237a4fd758a",
"cef9a4cfa7d74864b8ffd03305003bb4",
"32357490e0b345e9bc7490408539360c",
"1557cb6719c649dead0af64e619414a7",
"2161c8fb7b8f405bbc7a6a146248f5a0",
"cf9d2bc94fa946f48fd941f2c8598548",
"0587853124044767967a22d7a71a9230",
"f7fe41bc0f0f4611a4361130e1430c7d",
"76dcd596199d405f9bb4b123c7d5553a",
"f6082a2cf0534039ac3a1bee2dce7ebc",
"ac55c1a5729d4ca5987b0ee254e2c8d0",
"a28207dcfa404e37a240f88eb6aaa933",
"f385f09995204e86b446d3fd87e565f1",
"fe6abd9ff99f4fb6850deb3ae5722460",
"39270b7c3aa34356a0c98370d7c8f092",
"258ef0c1113144f985ed83a903a69bc2",
"7de2c3922d514529baa6ce8fe7bfd346",
"fa95bc29424048be9e453839e616260f",
"6fae9a90b0484cfe9c144eaa5766b090",
"50e063ec8ac24ad2ab5be79bd70c614d"
]
},
"outputId": "ea08f134-6bcd-4b6b-8df5-1d87c7483c23"
},
"outputs": [
{
"output_type": "display_data",
"data": {
"text/html": [
"\n",
" <style>\n",
" pre {\n",
" white-space: pre-wrap;\n",
" }\n",
" </style>\n",
" "
],
"text/plain": [
"<IPython.core.display.HTML object>"
]
},
"metadata": {}
},
{
"output_type": "stream",
"name": "stderr",
"text": [
"01/29/2022 06:14:04 — INFO — aitextgen — Generating 60 texts to /content/drive/MyDrive/Programming/ai-msgbot/output_files/gpt-355M-textgen-01.29.2022_file-1.txt\n"
]
},
{
"output_type": "display_data",
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "871885b9bb6d4c8c87411bf3b2870556",
"version_minor": 0,
"version_major": 2
},
"text/plain": [
" 0%| | 0/60 [00:00<?, ?it/s]"
]
},
"metadata": {}
},
{
"output_type": "stream",
"name": "stderr",
"text": [
"01/29/2022 06:14:50 — INFO — aitextgen — Generating 60 texts to /content/drive/MyDrive/Programming/ai-msgbot/output_files/gpt-355M-textgen-01.29.2022_file-2.txt\n"
]
},
{
"output_type": "display_data",
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "cfc11ae49e2845299a4ff1dab847f3e8",
"version_minor": 0,
"version_major": 2
},
"text/plain": [
" 0%| | 0/60 [00:00<?, ?it/s]"
]
},
"metadata": {}
},
{
"output_type": "stream",
"name": "stderr",
"text": [
"01/29/2022 06:15:35 — INFO — aitextgen — Generating 60 texts to /content/drive/MyDrive/Programming/ai-msgbot/output_files/gpt-355M-textgen-01.29.2022_file-3.txt\n"
]
},
{
"output_type": "display_data",
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "c937ea0040b3472fa6d43d009f9e0a43",
"version_minor": 0,
"version_major": 2
},
"text/plain": [
" 0%| | 0/60 [00:00<?, ?it/s]"
]
},
"metadata": {}
},
{
"output_type": "stream",
"name": "stderr",
"text": [
"01/29/2022 06:16:20 — INFO — aitextgen — Generating 60 texts to /content/drive/MyDrive/Programming/ai-msgbot/output_files/gpt-355M-textgen-01.29.2022_file-4.txt\n"
]
},
{
"output_type": "display_data",
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "f6082a2cf0534039ac3a1bee2dce7ebc",
"version_minor": 0,
"version_major": 2
},
"text/plain": [
" 0%| | 0/60 [00:00<?, ?it/s]"
]
},
"metadata": {}
}
],
"source": [
"from os.path import join\n",
"\n",
"for pfile, my_prompt in prompt_mng.items():\n",
" ai.generate_to_file(\n",
" n=60,\n",
" batch_size=20,\n",
" prompt=my_prompt,\n",
" max_length=512,\n",
" temperature=0.85,\n",
" top_p=0.9,\n",
" destination_path=join(save_loc, pfile)\n",
" )\n"
]
}
],
"metadata": {
"accelerator": "GPU",
"colab": {
"background_execution": "on",
"collapsed_sections": [],
"machine_shape": "hm",
"name": "aitextgen JIBA - text generation and training on GPU.ipynb",
"provenance": [],
"include_colab_link": true
},
"kernelspec": {
"display_name": "Python 3",
"name": "python3"
},
"widgets": {
"application/vnd.jupyter.widget-state+json": {
"23c59d08bf5f4af793f2bcb7754655d8": {
"model_module": "@jupyter-widgets/controls",
"model_name": "HBoxModel",
"model_module_version": "1.5.0",
"state": {
"_view_name": "HBoxView",
"_dom_classes": [],
"_model_name": "HBoxModel",
"_view_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_view_count": null,
"_view_module_version": "1.5.0",
"box_style": "",
"layout": "IPY_MODEL_9be719ddb5cd40f89eab05653c1aed32",
"_model_module": "@jupyter-widgets/controls",
"children": [
"IPY_MODEL_6d1ff33e29124795895e50258e495f81",
"IPY_MODEL_203aea35b71e47148c2247298a76a36b",
"IPY_MODEL_9dcdb301b9954f6884ea0892f95b25ac"
]
}
},
"9be719ddb5cd40f89eab05653c1aed32": {
"model_module": "@jupyter-widgets/base",
"model_name": "LayoutModel",
"model_module_version": "1.2.0",
"state": {
"_view_name": "LayoutView",
"grid_template_rows": null,
"right": null,
"justify_content": null,
"_view_module": "@jupyter-widgets/base",
"overflow": null,
"_model_module_version": "1.2.0",
"_view_count": null,
"flex_flow": null,
"width": null,
"min_width": null,
"border": null,
"align_items": null,
"bottom": null,
"_model_module": "@jupyter-widgets/base",
"top": null,
"grid_column": null,
"overflow_y": null,
"overflow_x": null,
"grid_auto_flow": null,
"grid_area": null,
"grid_template_columns": null,
"flex": null,
"_model_name": "LayoutModel",
"justify_items": null,
"grid_row": null,
"max_height": null,
"align_content": null,
"visibility": null,
"align_self": null,
"height": null,
"min_height": null,
"padding": null,
"grid_auto_rows": null,
"grid_gap": null,
"max_width": null,
"order": null,
"_view_module_version": "1.2.0",
"grid_template_areas": null,
"object_position": null,
"object_fit": null,
"grid_auto_columns": null,
"margin": null,
"display": null,
"left": null
}
},
"6d1ff33e29124795895e50258e495f81": {
"model_module": "@jupyter-widgets/controls",
"model_name": "HTMLModel",
"model_module_version": "1.5.0",
"state": {
"_view_name": "HTMLView",
"style": "IPY_MODEL_6261e8ec893c4fdf99844e7ea6353d02",
"_dom_classes": [],
"description": "",
"_model_name": "HTMLModel",
"placeholder": "​",
"_view_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"value": "replacing speaker names: 100%",
"_view_count": null,
"_view_module_version": "1.5.0",
"description_tooltip": null,
"_model_module": "@jupyter-widgets/controls",
"layout": "IPY_MODEL_e7bcf8f606db4513b7ad2155c63a3222"
}
},
"203aea35b71e47148c2247298a76a36b": {
"model_module": "@jupyter-widgets/controls",
"model_name": "FloatProgressModel",
"model_module_version": "1.5.0",
"state": {
"_view_name": "ProgressView",
"style": "IPY_MODEL_a626e90db4b340f2955ff226528ecc2c",
"_dom_classes": [],
"description": "",
"_model_name": "FloatProgressModel",
"bar_style": "success",
"max": 64021,
"_view_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"value": 64021,
"_view_count": null,
"_view_module_version": "1.5.0",
"orientation": "horizontal",
"min": 0,
"description_tooltip": null,
"_model_module": "@jupyter-widgets/controls",
"layout": "IPY_MODEL_a07828222c754f859be23114e0fd7961"
}
},
"9dcdb301b9954f6884ea0892f95b25ac": {
"model_module": "@jupyter-widgets/controls",
"model_name": "HTMLModel",
"model_module_version": "1.5.0",
"state": {
"_view_name": "HTMLView",
"style": "IPY_MODEL_84eb2279770846528a2af50d5b51681e",
"_dom_classes": [],
"description": "",
"_model_name": "HTMLModel",
"placeholder": "​",
"_view_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"value": " 64021/64021 [00:00&lt;00:00, 851420.45it/s]",
"_view_count": null,
"_view_module_version": "1.5.0",
"description_tooltip": null,
"_model_module": "@jupyter-widgets/controls",
"layout": "IPY_MODEL_ebf45e65029441c3be8354bc8c536fda"
}
},
"6261e8ec893c4fdf99844e7ea6353d02": {
"model_module": "@jupyter-widgets/controls",
"model_name": "DescriptionStyleModel",
"model_module_version": "1.5.0",
"state": {
"_view_name": "StyleView",
"_model_name": "DescriptionStyleModel",
"description_width": "",
"_view_module": "@jupyter-widgets/base",
"_model_module_version": "1.5.0",
"_view_count": null,
"_view_module_version": "1.2.0",
"_model_module": "@jupyter-widgets/controls"
}
},
"e7bcf8f606db4513b7ad2155c63a3222": {
"model_module": "@jupyter-widgets/base",
"model_name": "LayoutModel",
"model_module_version": "1.2.0",
"state": {
"_view_name": "LayoutView",
"grid_template_rows": null,
"right": null,
"justify_content": null,
"_view_module": "@jupyter-widgets/base",
"overflow": null,
"_model_module_version": "1.2.0",
"_view_count": null,
"flex_flow": null,
"width": null,
"min_width": null,
"border": null,
"align_items": null,
"bottom": null,
"_model_module": "@jupyter-widgets/base",
"top": null,
"grid_column": null,
"overflow_y": null,
"overflow_x": null,
"grid_auto_flow": null,
"grid_area": null,
"grid_template_columns": null,
"flex": null,
"_model_name": "LayoutModel",
"justify_items": null,
"grid_row": null,
"max_height": null,
"align_content": null,
"visibility": null,
"align_self": null,
"height": null,
"min_height": null,
"padding": null,
"grid_auto_rows": null,
"grid_gap": null,
"max_width": null,
"order": null,
"_view_module_version": "1.2.0",
"grid_template_areas": null,
"object_position": null,
"object_fit": null,
"grid_auto_columns": null,
"margin": null,
"display": null,
"left": null
}
},
"a626e90db4b340f2955ff226528ecc2c": {
"model_module": "@jupyter-widgets/controls",
"model_name": "ProgressStyleModel",
"model_module_version": "1.5.0",
"state": {
"_view_name": "StyleView",
"_model_name": "ProgressStyleModel",
"description_width": "",
"_view_module": "@jupyter-widgets/base",
"_model_module_version": "1.5.0",
"_view_count": null,
"_view_module_version": "1.2.0",
"bar_color": null,
"_model_module": "@jupyter-widgets/controls"
}
},
"a07828222c754f859be23114e0fd7961": {
"model_module": "@jupyter-widgets/base",
"model_name": "LayoutModel",
"model_module_version": "1.2.0",
"state": {
"_view_name": "LayoutView",
"grid_template_rows": null,
"right": null,
"justify_content": null,
"_view_module": "@jupyter-widgets/base",
"overflow": null,
"_model_module_version": "1.2.0",
"_view_count": null,
"flex_flow": null,
"width": null,
"min_width": null,
"border": null,
"align_items": null,
"bottom": null,
"_model_module": "@jupyter-widgets/base",
"top": null,
"grid_column": null,
"overflow_y": null,
"overflow_x": null,
"grid_auto_flow": null,
"grid_area": null,
"grid_template_columns": null,
"flex": null,
"_model_name": "LayoutModel",
"justify_items": null,
"grid_row": null,
"max_height": null,
"align_content": null,
"visibility": null,
"align_self": null,
"height": null,
"min_height": null,
"padding": null,
"grid_auto_rows": null,
"grid_gap": null,
"max_width": null,
"order": null,
"_view_module_version": "1.2.0",
"grid_template_areas": null,
"object_position": null,
"object_fit": null,
"grid_auto_columns": null,
"margin": null,
"display": null,
"left": null
}
},
"84eb2279770846528a2af50d5b51681e": {
"model_module": "@jupyter-widgets/controls",
"model_name": "DescriptionStyleModel",
"model_module_version": "1.5.0",
"state": {
"_view_name": "StyleView",
"_model_name": "DescriptionStyleModel",
"description_width": "",
"_view_module": "@jupyter-widgets/base",
"_model_module_version": "1.5.0",
"_view_count": null,
"_view_module_version": "1.2.0",
"_model_module": "@jupyter-widgets/controls"
}
},
"ebf45e65029441c3be8354bc8c536fda": {
"model_module": "@jupyter-widgets/base",
"model_name": "LayoutModel",
"model_module_version": "1.2.0",
"state": {
"_view_name": "LayoutView",
"grid_template_rows": null,
"right": null,
"justify_content": null,
"_view_module": "@jupyter-widgets/base",
"overflow": null,
"_model_module_version": "1.2.0",
"_view_count": null,
"flex_flow": null,
"width": null,
"min_width": null,
"border": null,
"align_items": null,
"bottom": null,
"_model_module": "@jupyter-widgets/base",
"top": null,
"grid_column": null,
"overflow_y": null,
"overflow_x": null,
"grid_auto_flow": null,
"grid_area": null,
"grid_template_columns": null,
"flex": null,
"_model_name": "LayoutModel",
"justify_items": null,
"grid_row": null,
"max_height": null,
"align_content": null,
"visibility": null,
"align_self": null,
"height": null,
"min_height": null,
"padding": null,
"grid_auto_rows": null,
"grid_gap": null,
"max_width": null,
"order": null,
"_view_module_version": "1.2.0",
"grid_template_areas": null,
"object_position": null,
"object_fit": null,
"grid_auto_columns": null,
"margin": null,
"display": null,
"left": null
}
},
"10460fbc850a45beb8605651f71e529f": {
"model_module": "@jupyter-widgets/controls",
"model_name": "HBoxModel",
"model_module_version": "1.5.0",
"state": {
"_view_name": "HBoxView",
"_dom_classes": [],
"_model_name": "HBoxModel",
"_view_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_view_count": null,
"_view_module_version": "1.5.0",
"box_style": "",
"layout": "IPY_MODEL_4c79c10727f4403bbe560efc30454e18",
"_model_module": "@jupyter-widgets/controls",
"children": [
"IPY_MODEL_3438b0dbea6a4c4cb7e1ff337a4ea0e1",
"IPY_MODEL_a31bd62c677a450a9317ec78320ab7f3",
"IPY_MODEL_a113fd6821ee4c5c96232329090f42a6"
]
}
},
"4c79c10727f4403bbe560efc30454e18": {
"model_module": "@jupyter-widgets/base",
"model_name": "LayoutModel",
"model_module_version": "1.2.0",
"state": {
"_view_name": "LayoutView",
"grid_template_rows": null,
"right": null,
"justify_content": null,
"_view_module": "@jupyter-widgets/base",
"overflow": null,
"_model_module_version": "1.2.0",
"_view_count": null,
"flex_flow": "row wrap",
"width": "100%",
"min_width": null,
"border": null,
"align_items": null,
"bottom": null,
"_model_module": "@jupyter-widgets/base",
"top": null,
"grid_column": null,
"overflow_y": null,
"overflow_x": null,
"grid_auto_flow": null,
"grid_area": null,
"grid_template_columns": null,
"flex": null,
"_model_name": "LayoutModel",
"justify_items": null,
"grid_row": null,
"max_height": null,
"align_content": null,
"visibility": null,
"align_self": null,
"height": null,
"min_height": null,
"padding": null,
"grid_auto_rows": null,
"grid_gap": null,
"max_width": null,
"order": null,
"_view_module_version": "1.2.0",
"grid_template_areas": null,
"object_position": null,
"object_fit": null,
"grid_auto_columns": null,
"margin": null,
"display": "inline-flex",
"left": null
}
},
"3438b0dbea6a4c4cb7e1ff337a4ea0e1": {
"model_module": "@jupyter-widgets/controls",
"model_name": "HTMLModel",
"model_module_version": "1.5.0",
"state": {
"_view_name": "HTMLView",
"style": "IPY_MODEL_7b5c1e61127243d98e406587692e0a92",
"_dom_classes": [],
"description": "",
"_model_name": "HTMLModel",
"placeholder": "​",
"_view_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"value": "100%",
"_view_count": null,
"_view_module_version": "1.5.0",
"description_tooltip": null,
"_model_module": "@jupyter-widgets/controls",
"layout": "IPY_MODEL_b391a8443b144c0b80ccf4f3ee02d2a7"
}
},
"a31bd62c677a450a9317ec78320ab7f3": {
"model_module": "@jupyter-widgets/controls",
"model_name": "FloatProgressModel",
"model_module_version": "1.5.0",
"state": {
"_view_name": "ProgressView",
"style": "IPY_MODEL_8ecb280849a34b64b2bcb493a8413e44",
"_dom_classes": [],
"description": "",
"_model_name": "FloatProgressModel",
"bar_style": "success",
"max": 64021,
"_view_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"value": 64021,
"_view_count": null,
"_view_module_version": "1.5.0",
"orientation": "horizontal",
"min": 0,
"description_tooltip": null,
"_model_module": "@jupyter-widgets/controls",
"layout": "IPY_MODEL_b31f57ca7c264e70a71ad07fda5968a1"
}
},
"a113fd6821ee4c5c96232329090f42a6": {
"model_module": "@jupyter-widgets/controls",
"model_name": "HTMLModel",
"model_module_version": "1.5.0",
"state": {
"_view_name": "HTMLView",
"style": "IPY_MODEL_53e65a3a63f246e18f185ce8c542f9b7",
"_dom_classes": [],
"description": "",
"_model_name": "HTMLModel",
"placeholder": "​",
"_view_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"value": " 64021/64021 [00:01&lt;00:00, 45112.45it/s]",
"_view_count": null,
"_view_module_version": "1.5.0",
"description_tooltip": null,
"_model_module": "@jupyter-widgets/controls",
"layout": "IPY_MODEL_477eea6493a04e02add3a50cb4b2e963"
}
},
"7b5c1e61127243d98e406587692e0a92": {
"model_module": "@jupyter-widgets/controls",
"model_name": "DescriptionStyleModel",
"model_module_version": "1.5.0",
"state": {
"_view_name": "StyleView",
"_model_name": "DescriptionStyleModel",
"description_width": "",
"_view_module": "@jupyter-widgets/base",
"_model_module_version": "1.5.0",
"_view_count": null,
"_view_module_version": "1.2.0",
"_model_module": "@jupyter-widgets/controls"
}
},
"b391a8443b144c0b80ccf4f3ee02d2a7": {
"model_module": "@jupyter-widgets/base",
"model_name": "LayoutModel",
"model_module_version": "1.2.0",
"state": {
"_view_name": "LayoutView",
"grid_template_rows": null,
"right": null,
"justify_content": null,
"_view_module": "@jupyter-widgets/base",
"overflow": null,
"_model_module_version": "1.2.0",
"_view_count": null,
"flex_flow": null,
"width": null,
"min_width": null,
"border": null,
"align_items": null,
"bottom": null,
"_model_module": "@jupyter-widgets/base",
"top": null,
"grid_column": null,
"overflow_y": null,
"overflow_x": null,
"grid_auto_flow": null,
"grid_area": null,
"grid_template_columns": null,
"flex": null,
"_model_name": "LayoutModel",
"justify_items": null,
"grid_row": null,
"max_height": null,
"align_content": null,
"visibility": null,
"align_self": null,
"height": null,
"min_height": null,
"padding": null,
"grid_auto_rows": null,
"grid_gap": null,
"max_width": null,
"order": null,
"_view_module_version": "1.2.0",
"grid_template_areas": null,
"object_position": null,
"object_fit": null,
"grid_auto_columns": null,
"margin": null,
"display": null,
"left": null
}
},
"8ecb280849a34b64b2bcb493a8413e44": {
"model_module": "@jupyter-widgets/controls",
"model_name": "ProgressStyleModel",
"model_module_version": "1.5.0",
"state": {
"_view_name": "StyleView",
"_model_name": "ProgressStyleModel",
"description_width": "",
"_view_module": "@jupyter-widgets/base",
"_model_module_version": "1.5.0",
"_view_count": null,
"_view_module_version": "1.2.0",
"bar_color": null,
"_model_module": "@jupyter-widgets/controls"
}
},
"b31f57ca7c264e70a71ad07fda5968a1": {
"model_module": "@jupyter-widgets/base",
"model_name": "LayoutModel",
"model_module_version": "1.2.0",
"state": {
"_view_name": "LayoutView",
"grid_template_rows": null,
"right": null,
"justify_content": null,
"_view_module": "@jupyter-widgets/base",
"overflow": null,
"_model_module_version": "1.2.0",
"_view_count": null,
"flex_flow": null,
"width": null,
"min_width": null,
"border": null,
"align_items": null,
"bottom": null,
"_model_module": "@jupyter-widgets/base",
"top": null,
"grid_column": null,
"overflow_y": null,
"overflow_x": null,
"grid_auto_flow": null,
"grid_area": null,
"grid_template_columns": null,
"flex": "2",
"_model_name": "LayoutModel",
"justify_items": null,
"grid_row": null,
"max_height": null,
"align_content": null,
"visibility": null,
"align_self": null,
"height": null,
"min_height": null,
"padding": null,
"grid_auto_rows": null,
"grid_gap": null,
"max_width": null,
"order": null,
"_view_module_version": "1.2.0",
"grid_template_areas": null,
"object_position": null,
"object_fit": null,
"grid_auto_columns": null,
"margin": null,
"display": null,
"left": null
}
},
"53e65a3a63f246e18f185ce8c542f9b7": {
"model_module": "@jupyter-widgets/controls",
"model_name": "DescriptionStyleModel",
"model_module_version": "1.5.0",
"state": {
"_view_name": "StyleView",
"_model_name": "DescriptionStyleModel",
"description_width": "",
"_view_module": "@jupyter-widgets/base",
"_model_module_version": "1.5.0",
"_view_count": null,
"_view_module_version": "1.2.0",
"_model_module": "@jupyter-widgets/controls"
}
},
"477eea6493a04e02add3a50cb4b2e963": {
"model_module": "@jupyter-widgets/base",
"model_name": "LayoutModel",
"model_module_version": "1.2.0",
"state": {
"_view_name": "LayoutView",
"grid_template_rows": null,
"right": null,
"justify_content": null,
"_view_module": "@jupyter-widgets/base",
"overflow": null,
"_model_module_version": "1.2.0",
"_view_count": null,
"flex_flow": null,
"width": null,
"min_width": null,
"border": null,
"align_items": null,
"bottom": null,
"_model_module": "@jupyter-widgets/base",
"top": null,
"grid_column": null,
"overflow_y": null,
"overflow_x": null,
"grid_auto_flow": null,
"grid_area": null,
"grid_template_columns": null,
"flex": null,
"_model_name": "LayoutModel",
"justify_items": null,
"grid_row": null,
"max_height": null,
"align_content": null,
"visibility": null,
"align_self": null,
"height": null,
"min_height": null,
"padding": null,
"grid_auto_rows": null,
"grid_gap": null,
"max_width": null,
"order": null,
"_view_module_version": "1.2.0",
"grid_template_areas": null,
"object_position": null,
"object_fit": null,
"grid_auto_columns": null,
"margin": null,
"display": null,
"left": null
}
},
"bb3fa1cbdc2e45e280662641de2cba54": {
"model_module": "@jupyter-widgets/controls",
"model_name": "HBoxModel",
"model_module_version": "1.5.0",
"state": {
"_view_name": "HBoxView",
"_dom_classes": [],
"_model_name": "HBoxModel",
"_view_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_view_count": null,
"_view_module_version": "1.5.0",
"box_style": "",
"layout": "IPY_MODEL_8c495a3c7f00497d9c4ef368ab85b9cb",
"_model_module": "@jupyter-widgets/controls",
"children": [
"IPY_MODEL_013e7cce53744d059a6c6c637c2fbaa2",
"IPY_MODEL_80eeb471d6ef46f098b144d3d90921c2",
"IPY_MODEL_0081a70f0cb141bc96aeca545d9d47fc"
]
}
},
"8c495a3c7f00497d9c4ef368ab85b9cb": {
"model_module": "@jupyter-widgets/base",
"model_name": "LayoutModel",
"model_module_version": "1.2.0",
"state": {
"_view_name": "LayoutView",
"grid_template_rows": null,
"right": null,
"justify_content": null,
"_view_module": "@jupyter-widgets/base",
"overflow": null,
"_model_module_version": "1.2.0",
"_view_count": null,
"flex_flow": "row wrap",
"width": "100%",
"min_width": null,
"border": null,
"align_items": null,
"bottom": null,
"_model_module": "@jupyter-widgets/base",
"top": null,
"grid_column": null,
"overflow_y": null,
"overflow_x": null,
"grid_auto_flow": null,
"grid_area": null,
"grid_template_columns": null,
"flex": null,
"_model_name": "LayoutModel",
"justify_items": null,
"grid_row": null,
"max_height": null,
"align_content": null,
"visibility": null,
"align_self": null,
"height": null,
"min_height": null,
"padding": null,
"grid_auto_rows": null,
"grid_gap": null,
"max_width": null,
"order": null,
"_view_module_version": "1.2.0",
"grid_template_areas": null,
"object_position": null,
"object_fit": null,
"grid_auto_columns": null,
"margin": null,
"display": "inline-flex",
"left": null
}
},
"013e7cce53744d059a6c6c637c2fbaa2": {
"model_module": "@jupyter-widgets/controls",
"model_name": "HTMLModel",
"model_module_version": "1.5.0",
"state": {
"_view_name": "HTMLView",
"style": "IPY_MODEL_62288b0c761a4650a4ae44e3b0b9d0fd",
"_dom_classes": [],
"description": "",
"_model_name": "HTMLModel",
"placeholder": "​",
"_view_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"value": "Loss: 0.478 — Avg: 0.453 — GPU Mem: 11965 MB: ",
"_view_count": null,
"_view_module_version": "1.5.0",
"description_tooltip": null,
"_model_module": "@jupyter-widgets/controls",
"layout": "IPY_MODEL_d846702f2a374c38a88aa2fc049b8bd7"
}
},
"80eeb471d6ef46f098b144d3d90921c2": {
"model_module": "@jupyter-widgets/controls",
"model_name": "FloatProgressModel",
"model_module_version": "1.5.0",
"state": {
"_view_name": "ProgressView",
"style": "IPY_MODEL_040b0ca2c86a4a169aee5bb8f3f72bb5",
"_dom_classes": [],
"description": "",
"_model_name": "FloatProgressModel",
"bar_style": "",
"max": 30000,
"_view_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"value": 30000,
"_view_count": null,
"_view_module_version": "1.5.0",
"orientation": "horizontal",
"min": 0,
"description_tooltip": null,
"_model_module": "@jupyter-widgets/controls",
"layout": "IPY_MODEL_2c896eb7be3049bf892b65ebc2d8f1dd"
}
},
"0081a70f0cb141bc96aeca545d9d47fc": {
"model_module": "@jupyter-widgets/controls",
"model_name": "HTMLModel",
"model_module_version": "1.5.0",
"state": {
"_view_name": "HTMLView",
"style": "IPY_MODEL_cb49e1d86276429ab833c4a2f3b550e9",
"_dom_classes": [],
"description": "",
"_model_name": "HTMLModel",
"placeholder": "​",
"_view_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"value": " 30480/? [2:08:28&lt;00:00, 3.95it/s]",
"_view_count": null,
"_view_module_version": "1.5.0",
"description_tooltip": null,
"_model_module": "@jupyter-widgets/controls",
"layout": "IPY_MODEL_db386a505b954ef9916da3034b3d805c"
}
},
"62288b0c761a4650a4ae44e3b0b9d0fd": {
"model_module": "@jupyter-widgets/controls",
"model_name": "DescriptionStyleModel",
"model_module_version": "1.5.0",
"state": {
"_view_name": "StyleView",
"_model_name": "DescriptionStyleModel",
"description_width": "",
"_view_module": "@jupyter-widgets/base",
"_model_module_version": "1.5.0",
"_view_count": null,
"_view_module_version": "1.2.0",
"_model_module": "@jupyter-widgets/controls"
}
},
"d846702f2a374c38a88aa2fc049b8bd7": {
"model_module": "@jupyter-widgets/base",
"model_name": "LayoutModel",
"model_module_version": "1.2.0",
"state": {
"_view_name": "LayoutView",
"grid_template_rows": null,
"right": null,
"justify_content": null,
"_view_module": "@jupyter-widgets/base",
"overflow": null,
"_model_module_version": "1.2.0",
"_view_count": null,
"flex_flow": null,
"width": null,
"min_width": null,
"border": null,
"align_items": null,
"bottom": null,
"_model_module": "@jupyter-widgets/base",
"top": null,
"grid_column": null,
"overflow_y": null,
"overflow_x": null,
"grid_auto_flow": null,
"grid_area": null,
"grid_template_columns": null,
"flex": null,
"_model_name": "LayoutModel",
"justify_items": null,
"grid_row": null,
"max_height": null,
"align_content": null,
"visibility": null,
"align_self": null,
"height": null,
"min_height": null,
"padding": null,
"grid_auto_rows": null,
"grid_gap": null,
"max_width": null,
"order": null,
"_view_module_version": "1.2.0",
"grid_template_areas": null,
"object_position": null,
"object_fit": null,
"grid_auto_columns": null,
"margin": null,
"display": null,
"left": null
}
},
"040b0ca2c86a4a169aee5bb8f3f72bb5": {
"model_module": "@jupyter-widgets/controls",
"model_name": "ProgressStyleModel",
"model_module_version": "1.5.0",
"state": {
"_view_name": "StyleView",
"_model_name": "ProgressStyleModel",
"description_width": "",
"_view_module": "@jupyter-widgets/base",
"_model_module_version": "1.5.0",
"_view_count": null,
"_view_module_version": "1.2.0",
"bar_color": null,
"_model_module": "@jupyter-widgets/controls"
}
},
"2c896eb7be3049bf892b65ebc2d8f1dd": {
"model_module": "@jupyter-widgets/base",
"model_name": "LayoutModel",
"model_module_version": "1.2.0",
"state": {
"_view_name": "LayoutView",
"grid_template_rows": null,
"right": null,
"justify_content": null,
"_view_module": "@jupyter-widgets/base",
"overflow": null,
"_model_module_version": "1.2.0",
"_view_count": null,
"flex_flow": null,
"width": null,
"min_width": null,
"border": null,
"align_items": null,
"bottom": null,
"_model_module": "@jupyter-widgets/base",
"top": null,
"grid_column": null,
"overflow_y": null,
"overflow_x": null,
"grid_auto_flow": null,
"grid_area": null,
"grid_template_columns": null,
"flex": "2",
"_model_name": "LayoutModel",
"justify_items": null,
"grid_row": null,
"max_height": null,
"align_content": null,
"visibility": null,
"align_self": null,
"height": null,
"min_height": null,
"padding": null,
"grid_auto_rows": null,
"grid_gap": null,
"max_width": null,
"order": null,
"_view_module_version": "1.2.0",
"grid_template_areas": null,
"object_position": null,
"object_fit": null,
"grid_auto_columns": null,
"margin": null,
"display": null,
"left": null
}
},
"cb49e1d86276429ab833c4a2f3b550e9": {
"model_module": "@jupyter-widgets/controls",
"model_name": "DescriptionStyleModel",
"model_module_version": "1.5.0",
"state": {
"_view_name": "StyleView",
"_model_name": "DescriptionStyleModel",
"description_width": "",
"_view_module": "@jupyter-widgets/base",
"_model_module_version": "1.5.0",
"_view_count": null,
"_view_module_version": "1.2.0",
"_model_module": "@jupyter-widgets/controls"
}
},
"db386a505b954ef9916da3034b3d805c": {
"model_module": "@jupyter-widgets/base",
"model_name": "LayoutModel",
"model_module_version": "1.2.0",
"state": {
"_view_name": "LayoutView",
"grid_template_rows": null,
"right": null,
"justify_content": null,
"_view_module": "@jupyter-widgets/base",
"overflow": null,
"_model_module_version": "1.2.0",
"_view_count": null,
"flex_flow": null,
"width": null,
"min_width": null,
"border": null,
"align_items": null,
"bottom": null,
"_model_module": "@jupyter-widgets/base",
"top": null,
"grid_column": null,
"overflow_y": null,
"overflow_x": null,
"grid_auto_flow": null,
"grid_area": null,
"grid_template_columns": null,
"flex": null,
"_model_name": "LayoutModel",
"justify_items": null,
"grid_row": null,
"max_height": null,
"align_content": null,
"visibility": null,
"align_self": null,
"height": null,
"min_height": null,
"padding": null,
"grid_auto_rows": null,
"grid_gap": null,
"max_width": null,
"order": null,
"_view_module_version": "1.2.0",
"grid_template_areas": null,
"object_position": null,
"object_fit": null,
"grid_auto_columns": null,
"margin": null,
"display": null,
"left": null
}
},
"871885b9bb6d4c8c87411bf3b2870556": {
"model_module": "@jupyter-widgets/controls",
"model_name": "HBoxModel",
"model_module_version": "1.5.0",
"state": {
"_view_name": "HBoxView",
"_dom_classes": [],
"_model_name": "HBoxModel",
"_view_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_view_count": null,
"_view_module_version": "1.5.0",
"box_style": "",
"layout": "IPY_MODEL_f100b9de6836422f829905103cd226d7",
"_model_module": "@jupyter-widgets/controls",
"children": [
"IPY_MODEL_a904448aeab445b293328cfac89f59af",
"IPY_MODEL_b3e8c12d08a24615bba3e4b5fb4d2600",
"IPY_MODEL_4369a43ba37b4151816b12cd5ed23d9b"
]
}
},
"f100b9de6836422f829905103cd226d7": {
"model_module": "@jupyter-widgets/base",
"model_name": "LayoutModel",
"model_module_version": "1.2.0",
"state": {
"_view_name": "LayoutView",
"grid_template_rows": null,
"right": null,
"justify_content": null,
"_view_module": "@jupyter-widgets/base",
"overflow": null,
"_model_module_version": "1.2.0",
"_view_count": null,
"flex_flow": null,
"width": null,
"min_width": null,
"border": null,
"align_items": null,
"bottom": null,
"_model_module": "@jupyter-widgets/base",
"top": null,
"grid_column": null,
"overflow_y": null,
"overflow_x": null,
"grid_auto_flow": null,
"grid_area": null,
"grid_template_columns": null,
"flex": null,
"_model_name": "LayoutModel",
"justify_items": null,
"grid_row": null,
"max_height": null,
"align_content": null,
"visibility": null,
"align_self": null,
"height": null,
"min_height": null,
"padding": null,
"grid_auto_rows": null,
"grid_gap": null,
"max_width": null,
"order": null,
"_view_module_version": "1.2.0",
"grid_template_areas": null,
"object_position": null,
"object_fit": null,
"grid_auto_columns": null,
"margin": null,
"display": null,
"left": null
}
},
"a904448aeab445b293328cfac89f59af": {
"model_module": "@jupyter-widgets/controls",
"model_name": "HTMLModel",
"model_module_version": "1.5.0",
"state": {
"_view_name": "HTMLView",
"style": "IPY_MODEL_d62ad573f850461d9f243cd6ed36a669",
"_dom_classes": [],
"description": "",
"_model_name": "HTMLModel",
"placeholder": "​",
"_view_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"value": "100%",
"_view_count": null,
"_view_module_version": "1.5.0",
"description_tooltip": null,
"_model_module": "@jupyter-widgets/controls",
"layout": "IPY_MODEL_cbe8b87076eb4c008657fe5d7618a1b5"
}
},
"b3e8c12d08a24615bba3e4b5fb4d2600": {
"model_module": "@jupyter-widgets/controls",
"model_name": "FloatProgressModel",
"model_module_version": "1.5.0",
"state": {
"_view_name": "ProgressView",
"style": "IPY_MODEL_99c09f42e6004c69a79cffaa5c99ebd5",
"_dom_classes": [],
"description": "",
"_model_name": "FloatProgressModel",
"bar_style": "success",
"max": 60,
"_view_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"value": 60,
"_view_count": null,
"_view_module_version": "1.5.0",
"orientation": "horizontal",
"min": 0,
"description_tooltip": null,
"_model_module": "@jupyter-widgets/controls",
"layout": "IPY_MODEL_078a81089b26428a87c4c0016834f693"
}
},
"4369a43ba37b4151816b12cd5ed23d9b": {
"model_module": "@jupyter-widgets/controls",
"model_name": "HTMLModel",
"model_module_version": "1.5.0",
"state": {
"_view_name": "HTMLView",
"style": "IPY_MODEL_c14415928dbb4b39a986af88f6666757",
"_dom_classes": [],
"description": "",
"_model_name": "HTMLModel",
"placeholder": "​",
"_view_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"value": " 60/60 [00:46&lt;00:00, 1.31it/s]",
"_view_count": null,
"_view_module_version": "1.5.0",
"description_tooltip": null,
"_model_module": "@jupyter-widgets/controls",
"layout": "IPY_MODEL_3ce0853dd0a441c7b03d355e263f0d56"
}
},
"d62ad573f850461d9f243cd6ed36a669": {
"model_module": "@jupyter-widgets/controls",
"model_name": "DescriptionStyleModel",
"model_module_version": "1.5.0",
"state": {
"_view_name": "StyleView",
"_model_name": "DescriptionStyleModel",
"description_width": "",
"_view_module": "@jupyter-widgets/base",
"_model_module_version": "1.5.0",
"_view_count": null,
"_view_module_version": "1.2.0",
"_model_module": "@jupyter-widgets/controls"
}
},
"cbe8b87076eb4c008657fe5d7618a1b5": {
"model_module": "@jupyter-widgets/base",
"model_name": "LayoutModel",
"model_module_version": "1.2.0",
"state": {
"_view_name": "LayoutView",
"grid_template_rows": null,
"right": null,
"justify_content": null,
"_view_module": "@jupyter-widgets/base",
"overflow": null,
"_model_module_version": "1.2.0",
"_view_count": null,
"flex_flow": null,
"width": null,
"min_width": null,
"border": null,
"align_items": null,
"bottom": null,
"_model_module": "@jupyter-widgets/base",
"top": null,
"grid_column": null,
"overflow_y": null,
"overflow_x": null,
"grid_auto_flow": null,
"grid_area": null,
"grid_template_columns": null,
"flex": null,
"_model_name": "LayoutModel",
"justify_items": null,
"grid_row": null,
"max_height": null,
"align_content": null,
"visibility": null,
"align_self": null,
"height": null,
"min_height": null,
"padding": null,
"grid_auto_rows": null,
"grid_gap": null,
"max_width": null,
"order": null,
"_view_module_version": "1.2.0",
"grid_template_areas": null,
"object_position": null,
"object_fit": null,
"grid_auto_columns": null,
"margin": null,
"display": null,
"left": null
}
},
"99c09f42e6004c69a79cffaa5c99ebd5": {
"model_module": "@jupyter-widgets/controls",
"model_name": "ProgressStyleModel",
"model_module_version": "1.5.0",
"state": {
"_view_name": "StyleView",
"_model_name": "ProgressStyleModel",
"description_width": "",
"_view_module": "@jupyter-widgets/base",
"_model_module_version": "1.5.0",
"_view_count": null,
"_view_module_version": "1.2.0",
"bar_color": null,
"_model_module": "@jupyter-widgets/controls"
}
},
"078a81089b26428a87c4c0016834f693": {
"model_module": "@jupyter-widgets/base",
"model_name": "LayoutModel",
"model_module_version": "1.2.0",
"state": {
"_view_name": "LayoutView",
"grid_template_rows": null,
"right": null,
"justify_content": null,
"_view_module": "@jupyter-widgets/base",
"overflow": null,
"_model_module_version": "1.2.0",
"_view_count": null,
"flex_flow": null,
"width": null,
"min_width": null,
"border": null,
"align_items": null,
"bottom": null,
"_model_module": "@jupyter-widgets/base",
"top": null,
"grid_column": null,
"overflow_y": null,
"overflow_x": null,
"grid_auto_flow": null,
"grid_area": null,
"grid_template_columns": null,
"flex": null,
"_model_name": "LayoutModel",
"justify_items": null,
"grid_row": null,
"max_height": null,
"align_content": null,
"visibility": null,
"align_self": null,
"height": null,
"min_height": null,
"padding": null,
"grid_auto_rows": null,
"grid_gap": null,
"max_width": null,
"order": null,
"_view_module_version": "1.2.0",
"grid_template_areas": null,
"object_position": null,
"object_fit": null,
"grid_auto_columns": null,
"margin": null,
"display": null,
"left": null
}
},
"c14415928dbb4b39a986af88f6666757": {
"model_module": "@jupyter-widgets/controls",
"model_name": "DescriptionStyleModel",
"model_module_version": "1.5.0",
"state": {
"_view_name": "StyleView",
"_model_name": "DescriptionStyleModel",
"description_width": "",
"_view_module": "@jupyter-widgets/base",
"_model_module_version": "1.5.0",
"_view_count": null,
"_view_module_version": "1.2.0",
"_model_module": "@jupyter-widgets/controls"
}
},
"3ce0853dd0a441c7b03d355e263f0d56": {
"model_module": "@jupyter-widgets/base",
"model_name": "LayoutModel",
"model_module_version": "1.2.0",
"state": {
"_view_name": "LayoutView",
"grid_template_rows": null,
"right": null,
"justify_content": null,
"_view_module": "@jupyter-widgets/base",
"overflow": null,
"_model_module_version": "1.2.0",
"_view_count": null,
"flex_flow": null,
"width": null,
"min_width": null,
"border": null,
"align_items": null,
"bottom": null,
"_model_module": "@jupyter-widgets/base",
"top": null,
"grid_column": null,
"overflow_y": null,
"overflow_x": null,
"grid_auto_flow": null,
"grid_area": null,
"grid_template_columns": null,
"flex": null,
"_model_name": "LayoutModel",
"justify_items": null,
"grid_row": null,
"max_height": null,
"align_content": null,
"visibility": null,
"align_self": null,
"height": null,
"min_height": null,
"padding": null,
"grid_auto_rows": null,
"grid_gap": null,
"max_width": null,
"order": null,
"_view_module_version": "1.2.0",
"grid_template_areas": null,
"object_position": null,
"object_fit": null,
"grid_auto_columns": null,
"margin": null,
"display": null,
"left": null
}
},
"cfc11ae49e2845299a4ff1dab847f3e8": {
"model_module": "@jupyter-widgets/controls",
"model_name": "HBoxModel",
"model_module_version": "1.5.0",
"state": {
"_view_name": "HBoxView",
"_dom_classes": [],
"_model_name": "HBoxModel",
"_view_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_view_count": null,
"_view_module_version": "1.5.0",
"box_style": "",
"layout": "IPY_MODEL_0e9d78d161764956abf485be77be823a",
"_model_module": "@jupyter-widgets/controls",
"children": [
"IPY_MODEL_d2f3a9a983f44ebf81fd91771ba1e33c",
"IPY_MODEL_4de6f8eaffde46c1a49aec3aee5b048e",
"IPY_MODEL_94b09ea671d84d379a69d17dddae03cc"
]
}
},
"0e9d78d161764956abf485be77be823a": {
"model_module": "@jupyter-widgets/base",
"model_name": "LayoutModel",
"model_module_version": "1.2.0",
"state": {
"_view_name": "LayoutView",
"grid_template_rows": null,
"right": null,
"justify_content": null,
"_view_module": "@jupyter-widgets/base",
"overflow": null,
"_model_module_version": "1.2.0",
"_view_count": null,
"flex_flow": null,
"width": null,
"min_width": null,
"border": null,
"align_items": null,
"bottom": null,
"_model_module": "@jupyter-widgets/base",
"top": null,
"grid_column": null,
"overflow_y": null,
"overflow_x": null,
"grid_auto_flow": null,
"grid_area": null,
"grid_template_columns": null,
"flex": null,
"_model_name": "LayoutModel",
"justify_items": null,
"grid_row": null,
"max_height": null,
"align_content": null,
"visibility": null,
"align_self": null,
"height": null,
"min_height": null,
"padding": null,
"grid_auto_rows": null,
"grid_gap": null,
"max_width": null,
"order": null,
"_view_module_version": "1.2.0",
"grid_template_areas": null,
"object_position": null,
"object_fit": null,
"grid_auto_columns": null,
"margin": null,
"display": null,
"left": null
}
},
"d2f3a9a983f44ebf81fd91771ba1e33c": {
"model_module": "@jupyter-widgets/controls",
"model_name": "HTMLModel",
"model_module_version": "1.5.0",
"state": {
"_view_name": "HTMLView",
"style": "IPY_MODEL_0201cba7ca15495480f167f97ac91794",
"_dom_classes": [],
"description": "",
"_model_name": "HTMLModel",
"placeholder": "​",
"_view_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"value": "100%",
"_view_count": null,
"_view_module_version": "1.5.0",
"description_tooltip": null,
"_model_module": "@jupyter-widgets/controls",
"layout": "IPY_MODEL_4412b586f772455a975870bebb34ee17"
}
},
"4de6f8eaffde46c1a49aec3aee5b048e": {
"model_module": "@jupyter-widgets/controls",
"model_name": "FloatProgressModel",
"model_module_version": "1.5.0",
"state": {
"_view_name": "ProgressView",
"style": "IPY_MODEL_fe0d03b3f5be41ccb0586b20ee14dff9",
"_dom_classes": [],
"description": "",
"_model_name": "FloatProgressModel",
"bar_style": "success",
"max": 60,
"_view_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"value": 60,
"_view_count": null,
"_view_module_version": "1.5.0",
"orientation": "horizontal",
"min": 0,
"description_tooltip": null,
"_model_module": "@jupyter-widgets/controls",
"layout": "IPY_MODEL_e5dcef643afc491dbad24c2131048f6a"
}
},
"94b09ea671d84d379a69d17dddae03cc": {
"model_module": "@jupyter-widgets/controls",
"model_name": "HTMLModel",
"model_module_version": "1.5.0",
"state": {
"_view_name": "HTMLView",
"style": "IPY_MODEL_c55189f439d647b598e4fc0094efea82",
"_dom_classes": [],
"description": "",
"_model_name": "HTMLModel",
"placeholder": "​",
"_view_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"value": " 60/60 [00:44&lt;00:00, 1.33it/s]",
"_view_count": null,
"_view_module_version": "1.5.0",
"description_tooltip": null,
"_model_module": "@jupyter-widgets/controls",
"layout": "IPY_MODEL_0282c401f5bb41ab98bfb6eba22d8643"
}
},
"0201cba7ca15495480f167f97ac91794": {
"model_module": "@jupyter-widgets/controls",
"model_name": "DescriptionStyleModel",
"model_module_version": "1.5.0",
"state": {
"_view_name": "StyleView",
"_model_name": "DescriptionStyleModel",
"description_width": "",
"_view_module": "@jupyter-widgets/base",
"_model_module_version": "1.5.0",
"_view_count": null,
"_view_module_version": "1.2.0",
"_model_module": "@jupyter-widgets/controls"
}
},
"4412b586f772455a975870bebb34ee17": {
"model_module": "@jupyter-widgets/base",
"model_name": "LayoutModel",
"model_module_version": "1.2.0",
"state": {
"_view_name": "LayoutView",
"grid_template_rows": null,
"right": null,
"justify_content": null,
"_view_module": "@jupyter-widgets/base",
"overflow": null,
"_model_module_version": "1.2.0",
"_view_count": null,
"flex_flow": null,
"width": null,
"min_width": null,
"border": null,
"align_items": null,
"bottom": null,
"_model_module": "@jupyter-widgets/base",
"top": null,
"grid_column": null,
"overflow_y": null,
"overflow_x": null,
"grid_auto_flow": null,
"grid_area": null,
"grid_template_columns": null,
"flex": null,
"_model_name": "LayoutModel",
"justify_items": null,
"grid_row": null,
"max_height": null,
"align_content": null,
"visibility": null,
"align_self": null,
"height": null,
"min_height": null,
"padding": null,
"grid_auto_rows": null,
"grid_gap": null,
"max_width": null,
"order": null,
"_view_module_version": "1.2.0",
"grid_template_areas": null,
"object_position": null,
"object_fit": null,
"grid_auto_columns": null,
"margin": null,
"display": null,
"left": null
}
},
"fe0d03b3f5be41ccb0586b20ee14dff9": {
"model_module": "@jupyter-widgets/controls",
"model_name": "ProgressStyleModel",
"model_module_version": "1.5.0",
"state": {
"_view_name": "StyleView",
"_model_name": "ProgressStyleModel",
"description_width": "",
"_view_module": "@jupyter-widgets/base",
"_model_module_version": "1.5.0",
"_view_count": null,
"_view_module_version": "1.2.0",
"bar_color": null,
"_model_module": "@jupyter-widgets/controls"
}
},
"e5dcef643afc491dbad24c2131048f6a": {
"model_module": "@jupyter-widgets/base",
"model_name": "LayoutModel",
"model_module_version": "1.2.0",
"state": {
"_view_name": "LayoutView",
"grid_template_rows": null,
"right": null,
"justify_content": null,
"_view_module": "@jupyter-widgets/base",
"overflow": null,
"_model_module_version": "1.2.0",
"_view_count": null,
"flex_flow": null,
"width": null,
"min_width": null,
"border": null,
"align_items": null,
"bottom": null,
"_model_module": "@jupyter-widgets/base",
"top": null,
"grid_column": null,
"overflow_y": null,
"overflow_x": null,
"grid_auto_flow": null,
"grid_area": null,
"grid_template_columns": null,
"flex": null,
"_model_name": "LayoutModel",
"justify_items": null,
"grid_row": null,
"max_height": null,
"align_content": null,
"visibility": null,
"align_self": null,
"height": null,
"min_height": null,
"padding": null,
"grid_auto_rows": null,
"grid_gap": null,
"max_width": null,
"order": null,
"_view_module_version": "1.2.0",
"grid_template_areas": null,
"object_position": null,
"object_fit": null,
"grid_auto_columns": null,
"margin": null,
"display": null,
"left": null
}
},
"c55189f439d647b598e4fc0094efea82": {
"model_module": "@jupyter-widgets/controls",
"model_name": "DescriptionStyleModel",
"model_module_version": "1.5.0",
"state": {
"_view_name": "StyleView",
"_model_name": "DescriptionStyleModel",
"description_width": "",
"_view_module": "@jupyter-widgets/base",
"_model_module_version": "1.5.0",
"_view_count": null,
"_view_module_version": "1.2.0",
"_model_module": "@jupyter-widgets/controls"
}
},
"0282c401f5bb41ab98bfb6eba22d8643": {
"model_module": "@jupyter-widgets/base",
"model_name": "LayoutModel",
"model_module_version": "1.2.0",
"state": {
"_view_name": "LayoutView",
"grid_template_rows": null,
"right": null,
"justify_content": null,
"_view_module": "@jupyter-widgets/base",
"overflow": null,
"_model_module_version": "1.2.0",
"_view_count": null,
"flex_flow": null,
"width": null,
"min_width": null,
"border": null,
"align_items": null,
"bottom": null,
"_model_module": "@jupyter-widgets/base",
"top": null,
"grid_column": null,
"overflow_y": null,
"overflow_x": null,
"grid_auto_flow": null,
"grid_area": null,
"grid_template_columns": null,
"flex": null,
"_model_name": "LayoutModel",
"justify_items": null,
"grid_row": null,
"max_height": null,
"align_content": null,
"visibility": null,
"align_self": null,
"height": null,
"min_height": null,
"padding": null,
"grid_auto_rows": null,
"grid_gap": null,
"max_width": null,
"order": null,
"_view_module_version": "1.2.0",
"grid_template_areas": null,
"object_position": null,
"object_fit": null,
"grid_auto_columns": null,
"margin": null,
"display": null,
"left": null
}
},
"c937ea0040b3472fa6d43d009f9e0a43": {
"model_module": "@jupyter-widgets/controls",
"model_name": "HBoxModel",
"model_module_version": "1.5.0",
"state": {
"_view_name": "HBoxView",
"_dom_classes": [],
"_model_name": "HBoxModel",
"_view_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_view_count": null,
"_view_module_version": "1.5.0",
"box_style": "",
"layout": "IPY_MODEL_b1181e9c400c4236b77014515d8e4ffb",
"_model_module": "@jupyter-widgets/controls",
"children": [
"IPY_MODEL_135525b115264fa4ad1e2237a4fd758a",
"IPY_MODEL_cef9a4cfa7d74864b8ffd03305003bb4",
"IPY_MODEL_32357490e0b345e9bc7490408539360c"
]
}
},
"b1181e9c400c4236b77014515d8e4ffb": {
"model_module": "@jupyter-widgets/base",
"model_name": "LayoutModel",
"model_module_version": "1.2.0",
"state": {
"_view_name": "LayoutView",
"grid_template_rows": null,
"right": null,
"justify_content": null,
"_view_module": "@jupyter-widgets/base",
"overflow": null,
"_model_module_version": "1.2.0",
"_view_count": null,
"flex_flow": null,
"width": null,
"min_width": null,
"border": null,
"align_items": null,
"bottom": null,
"_model_module": "@jupyter-widgets/base",
"top": null,
"grid_column": null,
"overflow_y": null,
"overflow_x": null,
"grid_auto_flow": null,
"grid_area": null,
"grid_template_columns": null,
"flex": null,
"_model_name": "LayoutModel",
"justify_items": null,
"grid_row": null,
"max_height": null,
"align_content": null,
"visibility": null,
"align_self": null,
"height": null,
"min_height": null,
"padding": null,
"grid_auto_rows": null,
"grid_gap": null,
"max_width": null,
"order": null,
"_view_module_version": "1.2.0",
"grid_template_areas": null,
"object_position": null,
"object_fit": null,
"grid_auto_columns": null,
"margin": null,
"display": null,
"left": null
}
},
"135525b115264fa4ad1e2237a4fd758a": {
"model_module": "@jupyter-widgets/controls",
"model_name": "HTMLModel",
"model_module_version": "1.5.0",
"state": {
"_view_name": "HTMLView",
"style": "IPY_MODEL_1557cb6719c649dead0af64e619414a7",
"_dom_classes": [],
"description": "",
"_model_name": "HTMLModel",
"placeholder": "​",
"_view_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"value": "100%",
"_view_count": null,
"_view_module_version": "1.5.0",
"description_tooltip": null,
"_model_module": "@jupyter-widgets/controls",
"layout": "IPY_MODEL_2161c8fb7b8f405bbc7a6a146248f5a0"
}
},
"cef9a4cfa7d74864b8ffd03305003bb4": {
"model_module": "@jupyter-widgets/controls",
"model_name": "FloatProgressModel",
"model_module_version": "1.5.0",
"state": {
"_view_name": "ProgressView",
"style": "IPY_MODEL_cf9d2bc94fa946f48fd941f2c8598548",
"_dom_classes": [],
"description": "",
"_model_name": "FloatProgressModel",
"bar_style": "success",
"max": 60,
"_view_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"value": 60,
"_view_count": null,
"_view_module_version": "1.5.0",
"orientation": "horizontal",
"min": 0,
"description_tooltip": null,
"_model_module": "@jupyter-widgets/controls",
"layout": "IPY_MODEL_0587853124044767967a22d7a71a9230"
}
},
"32357490e0b345e9bc7490408539360c": {
"model_module": "@jupyter-widgets/controls",
"model_name": "HTMLModel",
"model_module_version": "1.5.0",
"state": {
"_view_name": "HTMLView",
"style": "IPY_MODEL_f7fe41bc0f0f4611a4361130e1430c7d",
"_dom_classes": [],
"description": "",
"_model_name": "HTMLModel",
"placeholder": "​",
"_view_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"value": " 60/60 [00:44&lt;00:00, 1.34it/s]",
"_view_count": null,
"_view_module_version": "1.5.0",
"description_tooltip": null,
"_model_module": "@jupyter-widgets/controls",
"layout": "IPY_MODEL_76dcd596199d405f9bb4b123c7d5553a"
}
},
"1557cb6719c649dead0af64e619414a7": {
"model_module": "@jupyter-widgets/controls",
"model_name": "DescriptionStyleModel",
"model_module_version": "1.5.0",
"state": {
"_view_name": "StyleView",
"_model_name": "DescriptionStyleModel",
"description_width": "",
"_view_module": "@jupyter-widgets/base",
"_model_module_version": "1.5.0",
"_view_count": null,
"_view_module_version": "1.2.0",
"_model_module": "@jupyter-widgets/controls"
}
},
"2161c8fb7b8f405bbc7a6a146248f5a0": {
"model_module": "@jupyter-widgets/base",
"model_name": "LayoutModel",
"model_module_version": "1.2.0",
"state": {
"_view_name": "LayoutView",
"grid_template_rows": null,
"right": null,
"justify_content": null,
"_view_module": "@jupyter-widgets/base",
"overflow": null,
"_model_module_version": "1.2.0",
"_view_count": null,
"flex_flow": null,
"width": null,
"min_width": null,
"border": null,
"align_items": null,
"bottom": null,
"_model_module": "@jupyter-widgets/base",
"top": null,
"grid_column": null,
"overflow_y": null,
"overflow_x": null,
"grid_auto_flow": null,
"grid_area": null,
"grid_template_columns": null,
"flex": null,
"_model_name": "LayoutModel",
"justify_items": null,
"grid_row": null,
"max_height": null,
"align_content": null,
"visibility": null,
"align_self": null,
"height": null,
"min_height": null,
"padding": null,
"grid_auto_rows": null,
"grid_gap": null,
"max_width": null,
"order": null,
"_view_module_version": "1.2.0",
"grid_template_areas": null,
"object_position": null,
"object_fit": null,
"grid_auto_columns": null,
"margin": null,
"display": null,
"left": null
}
},
"cf9d2bc94fa946f48fd941f2c8598548": {
"model_module": "@jupyter-widgets/controls",
"model_name": "ProgressStyleModel",
"model_module_version": "1.5.0",
"state": {
"_view_name": "StyleView",
"_model_name": "ProgressStyleModel",
"description_width": "",
"_view_module": "@jupyter-widgets/base",
"_model_module_version": "1.5.0",
"_view_count": null,
"_view_module_version": "1.2.0",
"bar_color": null,
"_model_module": "@jupyter-widgets/controls"
}
},
"0587853124044767967a22d7a71a9230": {
"model_module": "@jupyter-widgets/base",
"model_name": "LayoutModel",
"model_module_version": "1.2.0",
"state": {
"_view_name": "LayoutView",
"grid_template_rows": null,
"right": null,
"justify_content": null,
"_view_module": "@jupyter-widgets/base",
"overflow": null,
"_model_module_version": "1.2.0",
"_view_count": null,
"flex_flow": null,
"width": null,
"min_width": null,
"border": null,
"align_items": null,
"bottom": null,
"_model_module": "@jupyter-widgets/base",
"top": null,
"grid_column": null,
"overflow_y": null,
"overflow_x": null,
"grid_auto_flow": null,
"grid_area": null,
"grid_template_columns": null,
"flex": null,
"_model_name": "LayoutModel",
"justify_items": null,
"grid_row": null,
"max_height": null,
"align_content": null,
"visibility": null,
"align_self": null,
"height": null,
"min_height": null,
"padding": null,
"grid_auto_rows": null,
"grid_gap": null,
"max_width": null,
"order": null,
"_view_module_version": "1.2.0",
"grid_template_areas": null,
"object_position": null,
"object_fit": null,
"grid_auto_columns": null,
"margin": null,
"display": null,
"left": null
}
},
"f7fe41bc0f0f4611a4361130e1430c7d": {
"model_module": "@jupyter-widgets/controls",
"model_name": "DescriptionStyleModel",
"model_module_version": "1.5.0",
"state": {
"_view_name": "StyleView",
"_model_name": "DescriptionStyleModel",
"description_width": "",
"_view_module": "@jupyter-widgets/base",
"_model_module_version": "1.5.0",
"_view_count": null,
"_view_module_version": "1.2.0",
"_model_module": "@jupyter-widgets/controls"
}
},
"76dcd596199d405f9bb4b123c7d5553a": {
"model_module": "@jupyter-widgets/base",
"model_name": "LayoutModel",
"model_module_version": "1.2.0",
"state": {
"_view_name": "LayoutView",
"grid_template_rows": null,
"right": null,
"justify_content": null,
"_view_module": "@jupyter-widgets/base",
"overflow": null,
"_model_module_version": "1.2.0",
"_view_count": null,
"flex_flow": null,
"width": null,
"min_width": null,
"border": null,
"align_items": null,
"bottom": null,
"_model_module": "@jupyter-widgets/base",
"top": null,
"grid_column": null,
"overflow_y": null,
"overflow_x": null,
"grid_auto_flow": null,
"grid_area": null,
"grid_template_columns": null,
"flex": null,
"_model_name": "LayoutModel",
"justify_items": null,
"grid_row": null,
"max_height": null,
"align_content": null,
"visibility": null,
"align_self": null,
"height": null,
"min_height": null,
"padding": null,
"grid_auto_rows": null,
"grid_gap": null,
"max_width": null,
"order": null,
"_view_module_version": "1.2.0",
"grid_template_areas": null,
"object_position": null,
"object_fit": null,
"grid_auto_columns": null,
"margin": null,
"display": null,
"left": null
}
},
"f6082a2cf0534039ac3a1bee2dce7ebc": {
"model_module": "@jupyter-widgets/controls",
"model_name": "HBoxModel",
"model_module_version": "1.5.0",
"state": {
"_view_name": "HBoxView",
"_dom_classes": [],
"_model_name": "HBoxModel",
"_view_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_view_count": null,
"_view_module_version": "1.5.0",
"box_style": "",
"layout": "IPY_MODEL_ac55c1a5729d4ca5987b0ee254e2c8d0",
"_model_module": "@jupyter-widgets/controls",
"children": [
"IPY_MODEL_a28207dcfa404e37a240f88eb6aaa933",
"IPY_MODEL_f385f09995204e86b446d3fd87e565f1",
"IPY_MODEL_fe6abd9ff99f4fb6850deb3ae5722460"
]
}
},
"ac55c1a5729d4ca5987b0ee254e2c8d0": {
"model_module": "@jupyter-widgets/base",
"model_name": "LayoutModel",
"model_module_version": "1.2.0",
"state": {
"_view_name": "LayoutView",
"grid_template_rows": null,
"right": null,
"justify_content": null,
"_view_module": "@jupyter-widgets/base",
"overflow": null,
"_model_module_version": "1.2.0",
"_view_count": null,
"flex_flow": null,
"width": null,
"min_width": null,
"border": null,
"align_items": null,
"bottom": null,
"_model_module": "@jupyter-widgets/base",
"top": null,
"grid_column": null,
"overflow_y": null,
"overflow_x": null,
"grid_auto_flow": null,
"grid_area": null,
"grid_template_columns": null,
"flex": null,
"_model_name": "LayoutModel",
"justify_items": null,
"grid_row": null,
"max_height": null,
"align_content": null,
"visibility": null,
"align_self": null,
"height": null,
"min_height": null,
"padding": null,
"grid_auto_rows": null,
"grid_gap": null,
"max_width": null,
"order": null,
"_view_module_version": "1.2.0",
"grid_template_areas": null,
"object_position": null,
"object_fit": null,
"grid_auto_columns": null,
"margin": null,
"display": null,
"left": null
}
},
"a28207dcfa404e37a240f88eb6aaa933": {
"model_module": "@jupyter-widgets/controls",
"model_name": "HTMLModel",
"model_module_version": "1.5.0",
"state": {
"_view_name": "HTMLView",
"style": "IPY_MODEL_39270b7c3aa34356a0c98370d7c8f092",
"_dom_classes": [],
"description": "",
"_model_name": "HTMLModel",
"placeholder": "​",
"_view_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"value": " 0%",
"_view_count": null,
"_view_module_version": "1.5.0",
"description_tooltip": null,
"_model_module": "@jupyter-widgets/controls",
"layout": "IPY_MODEL_258ef0c1113144f985ed83a903a69bc2"
}
},
"f385f09995204e86b446d3fd87e565f1": {
"model_module": "@jupyter-widgets/controls",
"model_name": "FloatProgressModel",
"model_module_version": "1.5.0",
"state": {
"_view_name": "ProgressView",
"style": "IPY_MODEL_7de2c3922d514529baa6ce8fe7bfd346",
"_dom_classes": [],
"description": "",
"_model_name": "FloatProgressModel",
"bar_style": "",
"max": 60,
"_view_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"value": 0,
"_view_count": null,
"_view_module_version": "1.5.0",
"orientation": "horizontal",
"min": 0,
"description_tooltip": null,
"_model_module": "@jupyter-widgets/controls",
"layout": "IPY_MODEL_fa95bc29424048be9e453839e616260f"
}
},
"fe6abd9ff99f4fb6850deb3ae5722460": {
"model_module": "@jupyter-widgets/controls",
"model_name": "HTMLModel",
"model_module_version": "1.5.0",
"state": {
"_view_name": "HTMLView",
"style": "IPY_MODEL_6fae9a90b0484cfe9c144eaa5766b090",
"_dom_classes": [],
"description": "",
"_model_name": "HTMLModel",
"placeholder": "​",
"_view_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"value": " 0/60 [00:00&lt;?, ?it/s]",
"_view_count": null,
"_view_module_version": "1.5.0",
"description_tooltip": null,
"_model_module": "@jupyter-widgets/controls",
"layout": "IPY_MODEL_50e063ec8ac24ad2ab5be79bd70c614d"
}
},
"39270b7c3aa34356a0c98370d7c8f092": {
"model_module": "@jupyter-widgets/controls",
"model_name": "DescriptionStyleModel",
"model_module_version": "1.5.0",
"state": {
"_view_name": "StyleView",
"_model_name": "DescriptionStyleModel",
"description_width": "",
"_view_module": "@jupyter-widgets/base",
"_model_module_version": "1.5.0",
"_view_count": null,
"_view_module_version": "1.2.0",
"_model_module": "@jupyter-widgets/controls"
}
},
"258ef0c1113144f985ed83a903a69bc2": {
"model_module": "@jupyter-widgets/base",
"model_name": "LayoutModel",
"model_module_version": "1.2.0",
"state": {
"_view_name": "LayoutView",
"grid_template_rows": null,
"right": null,
"justify_content": null,
"_view_module": "@jupyter-widgets/base",
"overflow": null,
"_model_module_version": "1.2.0",
"_view_count": null,
"flex_flow": null,
"width": null,
"min_width": null,
"border": null,
"align_items": null,
"bottom": null,
"_model_module": "@jupyter-widgets/base",
"top": null,
"grid_column": null,
"overflow_y": null,
"overflow_x": null,
"grid_auto_flow": null,
"grid_area": null,
"grid_template_columns": null,
"flex": null,
"_model_name": "LayoutModel",
"justify_items": null,
"grid_row": null,
"max_height": null,
"align_content": null,
"visibility": null,
"align_self": null,
"height": null,
"min_height": null,
"padding": null,
"grid_auto_rows": null,
"grid_gap": null,
"max_width": null,
"order": null,
"_view_module_version": "1.2.0",
"grid_template_areas": null,
"object_position": null,
"object_fit": null,
"grid_auto_columns": null,
"margin": null,
"display": null,
"left": null
}
},
"7de2c3922d514529baa6ce8fe7bfd346": {
"model_module": "@jupyter-widgets/controls",
"model_name": "ProgressStyleModel",
"model_module_version": "1.5.0",
"state": {
"_view_name": "StyleView",
"_model_name": "ProgressStyleModel",
"description_width": "",
"_view_module": "@jupyter-widgets/base",
"_model_module_version": "1.5.0",
"_view_count": null,
"_view_module_version": "1.2.0",
"bar_color": null,
"_model_module": "@jupyter-widgets/controls"
}
},
"fa95bc29424048be9e453839e616260f": {
"model_module": "@jupyter-widgets/base",
"model_name": "LayoutModel",
"model_module_version": "1.2.0",
"state": {
"_view_name": "LayoutView",
"grid_template_rows": null,
"right": null,
"justify_content": null,
"_view_module": "@jupyter-widgets/base",
"overflow": null,
"_model_module_version": "1.2.0",
"_view_count": null,
"flex_flow": null,
"width": null,
"min_width": null,
"border": null,
"align_items": null,
"bottom": null,
"_model_module": "@jupyter-widgets/base",
"top": null,
"grid_column": null,
"overflow_y": null,
"overflow_x": null,
"grid_auto_flow": null,
"grid_area": null,
"grid_template_columns": null,
"flex": null,
"_model_name": "LayoutModel",
"justify_items": null,
"grid_row": null,
"max_height": null,
"align_content": null,
"visibility": null,
"align_self": null,
"height": null,
"min_height": null,
"padding": null,
"grid_auto_rows": null,
"grid_gap": null,
"max_width": null,
"order": null,
"_view_module_version": "1.2.0",
"grid_template_areas": null,
"object_position": null,
"object_fit": null,
"grid_auto_columns": null,
"margin": null,
"display": null,
"left": null
}
},
"6fae9a90b0484cfe9c144eaa5766b090": {
"model_module": "@jupyter-widgets/controls",
"model_name": "DescriptionStyleModel",
"model_module_version": "1.5.0",
"state": {
"_view_name": "StyleView",
"_model_name": "DescriptionStyleModel",
"description_width": "",
"_view_module": "@jupyter-widgets/base",
"_model_module_version": "1.5.0",
"_view_count": null,
"_view_module_version": "1.2.0",
"_model_module": "@jupyter-widgets/controls"
}
},
"50e063ec8ac24ad2ab5be79bd70c614d": {
"model_module": "@jupyter-widgets/base",
"model_name": "LayoutModel",
"model_module_version": "1.2.0",
"state": {
"_view_name": "LayoutView",
"grid_template_rows": null,
"right": null,
"justify_content": null,
"_view_module": "@jupyter-widgets/base",
"overflow": null,
"_model_module_version": "1.2.0",
"_view_count": null,
"flex_flow": null,
"width": null,
"min_width": null,
"border": null,
"align_items": null,
"bottom": null,
"_model_module": "@jupyter-widgets/base",
"top": null,
"grid_column": null,
"overflow_y": null,
"overflow_x": null,
"grid_auto_flow": null,
"grid_area": null,
"grid_template_columns": null,
"flex": null,
"_model_name": "LayoutModel",
"justify_items": null,
"grid_row": null,
"max_height": null,
"align_content": null,
"visibility": null,
"align_self": null,
"height": null,
"min_height": null,
"padding": null,
"grid_auto_rows": null,
"grid_gap": null,
"max_width": null,
"order": null,
"_view_module_version": "1.2.0",
"grid_template_areas": null,
"object_position": null,
"object_fit": null,
"grid_auto_columns": null,
"margin": null,
"display": null,
"left": null
}
}
}
}
},
"nbformat": 4,
"nbformat_minor": 0
}