{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "view-in-github",
"colab_type": "text"
},
"source": [
"<a href=\"https://colab.research.google.com/gist/wesslen/9d3fa58d511bd562170d56162216355a/fireworks-colabtune-a100.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "tmLUG6Yte4VC"
},
"source": [
"# Model tuning in colab"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "NBQALK1uzPX5"
},
"source": [
"This notebook used A100, which is available under the Colab Pro account. Make sure to change your runtime type first."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "T_Tbt2vYfIvB"
},
"source": [
"## Environment setup"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "w48IGhuPfOLd"
},
"source": [
"Make sure that you are connected to a GPU host."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "-1SRQmvFezo1",
"colab": {
"base_uri": "https://localhost:8080/"
},
"outputId": "5a7450cb-4df4-424e-8453-51b6c648ff3b"
},
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"Sun Mar 17 23:33:06 2024 \n",
"+---------------------------------------------------------------------------------------+\n",
"| NVIDIA-SMI 535.104.05 Driver Version: 535.104.05 CUDA Version: 12.2 |\n",
"|-----------------------------------------+----------------------+----------------------+\n",
"| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |\n",
"| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |\n",
"| | | MIG M. |\n",
"|=========================================+======================+======================|\n",
"| 0 NVIDIA A100-SXM4-40GB Off | 00000000:00:04.0 Off | 0 |\n",
"| N/A 29C P0 45W / 400W | 2MiB / 40960MiB | 0% Default |\n",
"| | | Disabled |\n",
"+-----------------------------------------+----------------------+----------------------+\n",
" \n",
"+---------------------------------------------------------------------------------------+\n",
"| Processes: |\n",
"| GPU GI CI PID Type Process name GPU Memory |\n",
"| ID ID Usage |\n",
"|=======================================================================================|\n",
"| No running processes found |\n",
"+---------------------------------------------------------------------------------------+\n"
]
}
],
"source": [
"!nvidia-smi"
]
},
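{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you prefer to check from Python, here is a minimal sketch using PyTorch (preinstalled on Colab) that confirms CUDA is visible and reports the device name:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import torch\n",
"\n",
"# Verify that a CUDA device is visible to PyTorch and print its name;\n",
"# on an A100 runtime this should report something like 'NVIDIA A100-SXM4-40GB'.\n",
"assert torch.cuda.is_available(), \"No GPU found -- change the runtime type\"\n",
"print(torch.cuda.get_device_name(0))"
]
},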
{
"cell_type": "markdown",
"metadata": {
"id": "yn9T_kW3fZOg"
},
"source": [
"Check out the cook book code from github."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "usSJUnJve5Kp",
"colab": {
"base_uri": "https://localhost:8080/"
},
"outputId": "6629a618-db98-4618-e8e2-ddda839e93e5"
},
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"Cloning into 'cookbook'...\n",
"remote: Enumerating objects: 867, done.\u001b[K\n",
"remote: Counting objects: 100% (229/229), done.\u001b[K\n",
"remote: Compressing objects: 100% (112/112), done.\u001b[K\n",
"remote: Total 867 (delta 129), reused 197 (delta 114), pack-reused 638\u001b[K\n",
"Receiving objects: 100% (867/867), 754.03 KiB | 20.38 MiB/s, done.\n",
"Resolving deltas: 100% (466/466), done.\n"
]
}
],
"source": [
"!git clone https://github.com/fw-ai/cookbook.git"
]
},
{
"cell_type": "markdown",
"source": [
"**Manual update**: Need to comment out lines 50-51 in `cookbook/recipes/common/peft.py`:\n",
"\n",
"```\n",
" # load_in_4bit=load_in_4bit,\n",
" # load_in_8bit=load_in_8bit,\n",
"```\n",
"\n",
"If not, will get:\n",
"```\n",
"finetune/0 [0]:Error executing job with overrides: ['model=llama2-7b-chat-colab']\n",
"finetune/0 [0]:Traceback (most recent call last):\n",
"finetune/0 [0]: File \"/content/cookbook/recipes/tune/instruct_lora/finetune.py\", line 56, in _app\n",
"finetune/0 [0]: model = load_train_model(config)\n",
"finetune/0 [0]: File \"/content/cookbook/recipes/common/peft.py\", line 50, in load_train_model\n",
"finetune/0 [0]: model = base_model_class.from_pretrained(\n",
"finetune/0 [0]: File \"/usr/local/lib/python3.10/dist-packages/transformers/models/auto/auto_factory.py\", line 561, in from_pretrained\n",
"finetune/0 [0]: return model_class.from_pretrained(\n",
"finetune/0 [0]: File \"/usr/local/lib/python3.10/dist-packages/transformers/modeling_utils.py\", line 2952, in from_pretrained\n",
"finetune/0 [0]: raise ValueError(\n",
"finetune/0 [0]:ValueError: You can't pass `load_in_4bit`or `load_in_8bit` as a kwarg when passing `quantization_config` argument at the same time.\n",
"```"
],
"metadata": {
"id": "I9apBUWwycKR"
}
},
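{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a convenience, the same edit can be scripted with `sed` instead of editing the file by hand. This sketch assumes the two kwargs still sit on lines 50-51 of the checked-out revision, so verify the line numbers before running:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Prefix lines 50-51 of peft.py with '#' to comment out the\n",
"# load_in_4bit/load_in_8bit kwargs. Full-line comments are valid at\n",
"# any indentation in Python, so a column-0 '#' is enough.\n",
"!sed -i '50,51 s/^/#/' cookbook/recipes/common/peft.py\n",
"!sed -n '48,53p' cookbook/recipes/common/peft.py  # eyeball the result"
]
},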
{
"cell_type": "markdown",
"metadata": {
"id": "q92kngbbgBV6"
},
"source": [
"Install required dependencies. Added `wandb` and `flash-attn`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "Dezn_0k3ffCr",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 1000
},
"outputId": "af45129d-de50-4a58-acfe-06040dcab161"
},
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"Collecting accelerate\n",
" Downloading accelerate-0.28.0-py3-none-any.whl (290 kB)\n",
"\u001b[?25l \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m0.0/290.1 kB\u001b[0m \u001b[31m?\u001b[0m eta \u001b[36m-:--:--\u001b[0m\r\u001b[2K \u001b[91m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[91m╸\u001b[0m \u001b[32m286.7/290.1 kB\u001b[0m \u001b[31m11.6 MB/s\u001b[0m eta \u001b[36m0:00:01\u001b[0m\r\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m290.1/290.1 kB\u001b[0m \u001b[31m7.8 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hCollecting bitsandbytes\n",
" Downloading bitsandbytes-0.43.0-py3-none-manylinux_2_24_x86_64.whl (102.2 MB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m102.2/102.2 MB\u001b[0m \u001b[31m15.9 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hCollecting datasets\n",
" Downloading datasets-2.18.0-py3-none-any.whl (510 kB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m510.5/510.5 kB\u001b[0m \u001b[31m42.4 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hCollecting evals\n",
" Downloading evals-2.0.0.post1-py3-none-any.whl (42.1 MB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m42.1/42.1 MB\u001b[0m \u001b[31m4.1 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hCollecting fire\n",
" Downloading fire-0.6.0.tar.gz (88 kB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m88.4/88.4 kB\u001b[0m \u001b[31m14.0 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25h Preparing metadata (setup.py) ... \u001b[?25l\u001b[?25hdone\n",
"Collecting guidance\n",
" Downloading guidance-0.1.10-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (229 kB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m229.3/229.3 kB\u001b[0m \u001b[31m24.0 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hRequirement already satisfied: huggingface_hub in /usr/local/lib/python3.10/dist-packages (0.20.3)\n",
"Collecting hydra-core\n",
" Downloading hydra_core-1.3.2-py3-none-any.whl (154 kB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m154.5/154.5 kB\u001b[0m \u001b[31m17.0 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hCollecting ninja\n",
" Downloading ninja-1.11.1.1-py2.py3-none-manylinux1_x86_64.manylinux_2_5_x86_64.whl (307 kB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m307.2/307.2 kB\u001b[0m \u001b[31m27.5 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hRequirement already satisfied: packaging in /usr/local/lib/python3.10/dist-packages (24.0)\n",
"Collecting peft\n",
" Downloading peft-0.9.0-py3-none-any.whl (190 kB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m190.9/190.9 kB\u001b[0m \u001b[31m23.8 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hCollecting py7zr\n",
" Downloading py7zr-0.21.0-py3-none-any.whl (67 kB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m67.6/67.6 kB\u001b[0m \u001b[31m7.8 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hCollecting s3fs\n",
" Downloading s3fs-2024.3.0-py3-none-any.whl (29 kB)\n",
"Requirement already satisfied: sentencepiece in /usr/local/lib/python3.10/dist-packages (0.1.99)\n",
"Collecting torchx\n",
" Downloading torchx-0.6.0-py3-none-any.whl (244 kB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m244.4/244.4 kB\u001b[0m \u001b[31m23.2 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hRequirement already satisfied: transformers in /usr/local/lib/python3.10/dist-packages (4.38.2)\n",
"Collecting zstandard\n",
" Downloading zstandard-0.22.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (5.4 MB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m5.4/5.4 MB\u001b[0m \u001b[31m30.4 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hCollecting wandb\n",
" Downloading wandb-0.16.4-py3-none-any.whl (2.2 MB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m2.2/2.2 MB\u001b[0m \u001b[31m88.2 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hCollecting flash-attn\n",
" Downloading flash_attn-2.5.6.tar.gz (2.5 MB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m2.5/2.5 MB\u001b[0m \u001b[31m66.1 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25h Preparing metadata (setup.py) ... \u001b[?25l\u001b[?25hdone\n",
"Requirement already satisfied: numpy>=1.17 in /usr/local/lib/python3.10/dist-packages (from accelerate) (1.25.2)\n",
"Requirement already satisfied: psutil in /usr/local/lib/python3.10/dist-packages (from accelerate) (5.9.5)\n",
"Requirement already satisfied: pyyaml in /usr/local/lib/python3.10/dist-packages (from accelerate) (6.0.1)\n",
"Requirement already satisfied: torch>=1.10.0 in /usr/local/lib/python3.10/dist-packages (from accelerate) (2.2.1+cu121)\n",
"Requirement already satisfied: safetensors>=0.3.1 in /usr/local/lib/python3.10/dist-packages (from accelerate) (0.4.2)\n",
"Requirement already satisfied: filelock in /usr/local/lib/python3.10/dist-packages (from datasets) (3.13.1)\n",
"Requirement already satisfied: pyarrow>=12.0.0 in /usr/local/lib/python3.10/dist-packages (from datasets) (14.0.2)\n",
"Requirement already satisfied: pyarrow-hotfix in /usr/local/lib/python3.10/dist-packages (from datasets) (0.6)\n",
"Collecting dill<0.3.9,>=0.3.0 (from datasets)\n",
" Downloading dill-0.3.8-py3-none-any.whl (116 kB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m116.3/116.3 kB\u001b[0m \u001b[31m15.6 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hRequirement already satisfied: pandas in /usr/local/lib/python3.10/dist-packages (from datasets) (1.5.3)\n",
"Requirement already satisfied: requests>=2.19.0 in /usr/local/lib/python3.10/dist-packages (from datasets) (2.31.0)\n",
"Requirement already satisfied: tqdm>=4.62.1 in /usr/local/lib/python3.10/dist-packages (from datasets) (4.66.2)\n",
"Collecting xxhash (from datasets)\n",
" Downloading xxhash-3.4.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (194 kB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m194.1/194.1 kB\u001b[0m \u001b[31m28.1 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hCollecting multiprocess (from datasets)\n",
" Downloading multiprocess-0.70.16-py310-none-any.whl (134 kB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m134.8/134.8 kB\u001b[0m \u001b[31m20.3 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hRequirement already satisfied: fsspec[http]<=2024.2.0,>=2023.1.0 in /usr/local/lib/python3.10/dist-packages (from datasets) (2023.6.0)\n",
"Requirement already satisfied: aiohttp in /usr/local/lib/python3.10/dist-packages (from datasets) (3.9.3)\n",
"Collecting mypy (from evals)\n",
" Downloading mypy-1.9.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (12.5 MB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m12.5/12.5 MB\u001b[0m \u001b[31m92.2 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hCollecting openai>=1.0.0 (from evals)\n",
" Downloading openai-1.14.1-py3-none-any.whl (257 kB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m257.5/257.5 kB\u001b[0m \u001b[31m33.8 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hCollecting tiktoken (from evals)\n",
" Downloading tiktoken-0.6.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (1.8 MB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m1.8/1.8 MB\u001b[0m \u001b[31m94.0 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hCollecting blobfile (from evals)\n",
" Downloading blobfile-2.1.1-py3-none-any.whl (73 kB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m73.7/73.7 kB\u001b[0m \u001b[31m11.3 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hCollecting backoff (from evals)\n",
" Downloading backoff-2.2.1-py3-none-any.whl (15 kB)\n",
"Collecting snowflake-connector-python[pandas] (from evals)\n",
" Downloading snowflake_connector_python-3.7.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (2.6 MB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m2.6/2.6 MB\u001b[0m \u001b[31m91.1 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hRequirement already satisfied: pydantic in /usr/local/lib/python3.10/dist-packages (from evals) (2.6.4)\n",
"Requirement already satisfied: nltk in /usr/local/lib/python3.10/dist-packages (from evals) (3.8.1)\n",
"Collecting mock (from evals)\n",
" Downloading mock-5.1.0-py3-none-any.whl (30 kB)\n",
"Collecting langdetect (from evals)\n",
" Downloading langdetect-1.0.9.tar.gz (981 kB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m981.5/981.5 kB\u001b[0m \u001b[31m73.2 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25h Preparing metadata (setup.py) ... \u001b[?25l\u001b[?25hdone\n",
"Requirement already satisfied: termcolor in /usr/local/lib/python3.10/dist-packages (from evals) (2.4.0)\n",
"Collecting lz4 (from evals)\n",
" Downloading lz4-4.3.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (1.3 MB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m1.3/1.3 MB\u001b[0m \u001b[31m77.6 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hCollecting pyzstd (from evals)\n",
" Downloading pyzstd-0.15.9-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (412 kB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m412.3/412.3 kB\u001b[0m \u001b[31m47.1 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hCollecting sacrebleu (from evals)\n",
" Downloading sacrebleu-2.4.1-py3-none-any.whl (106 kB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m106.6/106.6 kB\u001b[0m \u001b[31m17.4 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hRequirement already satisfied: matplotlib in /usr/local/lib/python3.10/dist-packages (from evals) (3.7.1)\n",
"Requirement already satisfied: pytest in /usr/local/lib/python3.10/dist-packages (from evals) (7.4.4)\n",
"Collecting langchain (from evals)\n",
" Downloading langchain-0.1.12-py3-none-any.whl (809 kB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m809.1/809.1 kB\u001b[0m \u001b[31m67.8 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hRequirement already satisfied: numexpr in /usr/local/lib/python3.10/dist-packages (from evals) (2.9.0)\n",
"Collecting types-PyYAML (from evals)\n",
" Downloading types_PyYAML-6.0.12.20240311-py3-none-any.whl (15 kB)\n",
"Collecting spacy-universal-sentence-encoder (from evals)\n",
" Downloading spacy_universal_sentence_encoder-0.4.6.tar.gz (15 kB)\n",
" Preparing metadata (setup.py) ... \u001b[?25l\u001b[?25hdone\n",
"Collecting jiwer (from evals)\n",
" Downloading jiwer-3.0.3-py3-none-any.whl (21 kB)\n",
"Requirement already satisfied: seaborn in /usr/local/lib/python3.10/dist-packages (from evals) (0.13.1)\n",
"Requirement already satisfied: statsmodels in /usr/local/lib/python3.10/dist-packages (from evals) (0.14.1)\n",
"Requirement already satisfied: six in /usr/local/lib/python3.10/dist-packages (from fire) (1.16.0)\n",
"Collecting diskcache (from guidance)\n",
" Downloading diskcache-5.6.3-py3-none-any.whl (45 kB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m45.5/45.5 kB\u001b[0m \u001b[31m6.7 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hCollecting gptcache (from guidance)\n",
" Downloading gptcache-0.1.43-py3-none-any.whl (131 kB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m131.5/131.5 kB\u001b[0m \u001b[31m20.7 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hRequirement already satisfied: platformdirs in /usr/local/lib/python3.10/dist-packages (from guidance) (4.2.0)\n",
"Collecting msal (from guidance)\n",
" Downloading msal-1.27.0-py2.py3-none-any.whl (101 kB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m101.5/101.5 kB\u001b[0m \u001b[31m16.9 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hCollecting ordered-set (from guidance)\n",
" Downloading ordered_set-4.1.0-py3-none-any.whl (7.6 kB)\n",
"Collecting pyformlang (from guidance)\n",
" Downloading pyformlang-1.0.7-py3-none-any.whl (125 kB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m125.8/125.8 kB\u001b[0m \u001b[31m20.2 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hRequirement already satisfied: typing-extensions>=3.7.4.3 in /usr/local/lib/python3.10/dist-packages (from huggingface_hub) (4.10.0)\n",
"Collecting omegaconf<2.4,>=2.2 (from hydra-core)\n",
" Downloading omegaconf-2.3.0-py3-none-any.whl (79 kB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m79.5/79.5 kB\u001b[0m \u001b[31m12.9 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hCollecting antlr4-python3-runtime==4.9.* (from hydra-core)\n",
" Downloading antlr4-python3-runtime-4.9.3.tar.gz (117 kB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m117.0/117.0 kB\u001b[0m \u001b[31m20.2 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25h Preparing metadata (setup.py) ... \u001b[?25l\u001b[?25hdone\n",
"Collecting texttable (from py7zr)\n",
" Downloading texttable-1.7.0-py2.py3-none-any.whl (10 kB)\n",
"Collecting pycryptodomex>=3.16.0 (from py7zr)\n",
" Downloading pycryptodomex-3.20.0-cp35-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (2.1 MB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m2.1/2.1 MB\u001b[0m \u001b[31m97.9 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hCollecting pyppmd<1.2.0,>=1.1.0 (from py7zr)\n",
" Downloading pyppmd-1.1.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (138 kB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m138.9/138.9 kB\u001b[0m \u001b[31m21.8 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hCollecting pybcj<1.1.0,>=1.0.0 (from py7zr)\n",
" Downloading pybcj-1.0.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (49 kB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m49.7/49.7 kB\u001b[0m \u001b[31m6.0 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hCollecting multivolumefile>=0.2.3 (from py7zr)\n",
" Downloading multivolumefile-0.2.3-py3-none-any.whl (17 kB)\n",
"Collecting inflate64<1.1.0,>=1.0.0 (from py7zr)\n",
" Downloading inflate64-1.0.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (93 kB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m93.1/93.1 kB\u001b[0m \u001b[31m14.4 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hCollecting brotli>=1.1.0 (from py7zr)\n",
" Downloading Brotli-1.1.0-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl (3.0 MB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m3.0/3.0 MB\u001b[0m \u001b[31m101.5 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hCollecting aiobotocore<3.0.0,>=2.5.4 (from s3fs)\n",
" Downloading aiobotocore-2.12.1-py3-none-any.whl (76 kB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m76.3/76.3 kB\u001b[0m \u001b[31m12.0 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hCollecting fsspec>=2023.5.0 (from huggingface_hub)\n",
" Downloading fsspec-2024.3.0-py3-none-any.whl (171 kB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m171.9/171.9 kB\u001b[0m \u001b[31m26.7 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hCollecting pyre-extensions (from torchx)\n",
" Downloading pyre_extensions-0.0.30-py3-none-any.whl (12 kB)\n",
"Collecting docstring-parser>=0.8.1 (from torchx)\n",
" Downloading docstring_parser-0.16-py3-none-any.whl (36 kB)\n",
"Requirement already satisfied: importlib-metadata in /usr/local/lib/python3.10/dist-packages (from torchx) (7.0.2)\n",
"Collecting docker (from torchx)\n",
" Downloading docker-7.0.0-py3-none-any.whl (147 kB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m147.6/147.6 kB\u001b[0m \u001b[31m22.2 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hINFO: pip is looking at multiple versions of torchx to determine which version is compatible with other requirements. This could take a while.\n",
"Collecting torchx\n",
" Downloading torchx-0.5.0-py3-none-any.whl (251 kB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m251.2/251.2 kB\u001b[0m \u001b[31m31.0 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hCollecting docstring-parser==0.8.1 (from torchx)\n",
" Downloading docstring_parser-0.8.1.tar.gz (14 kB)\n",
" Preparing metadata (pyproject.toml) ... \u001b[?25l\u001b[?25hdone\n",
"Collecting urllib3<1.27,>=1.21.1 (from torchx)\n",
" Downloading urllib3-1.26.18-py2.py3-none-any.whl (143 kB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m143.8/143.8 kB\u001b[0m \u001b[31m22.1 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hRequirement already satisfied: tabulate in /usr/local/lib/python3.10/dist-packages (from torchx) (0.9.0)\n",
"Requirement already satisfied: regex!=2019.12.17 in /usr/local/lib/python3.10/dist-packages (from transformers) (2023.12.25)\n",
"Requirement already satisfied: tokenizers<0.19,>=0.14 in /usr/local/lib/python3.10/dist-packages (from transformers) (0.15.2)\n",
"Requirement already satisfied: Click!=8.0.0,>=7.1 in /usr/local/lib/python3.10/dist-packages (from wandb) (8.1.7)\n",
"Collecting GitPython!=3.1.29,>=1.0.0 (from wandb)\n",
" Downloading GitPython-3.1.42-py3-none-any.whl (195 kB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m195.4/195.4 kB\u001b[0m \u001b[31m28.2 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hCollecting sentry-sdk>=1.0.0 (from wandb)\n",
" Downloading sentry_sdk-1.42.0-py2.py3-none-any.whl (263 kB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m263.5/263.5 kB\u001b[0m \u001b[31m33.9 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hCollecting docker-pycreds>=0.4.0 (from wandb)\n",
" Downloading docker_pycreds-0.4.0-py2.py3-none-any.whl (9.0 kB)\n",
"Collecting setproctitle (from wandb)\n",
" Downloading setproctitle-1.3.3-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl (30 kB)\n",
"Requirement already satisfied: setuptools in /usr/local/lib/python3.10/dist-packages (from wandb) (67.7.2)\n",
"Requirement already satisfied: appdirs>=1.4.3 in /usr/local/lib/python3.10/dist-packages (from wandb) (1.4.4)\n",
"Requirement already satisfied: protobuf!=4.21.0,<5,>=3.19.0 in /usr/local/lib/python3.10/dist-packages (from wandb) (3.20.3)\n",
"Collecting einops (from flash-attn)\n",
" Downloading einops-0.7.0-py3-none-any.whl (44 kB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m44.6/44.6 kB\u001b[0m \u001b[31m7.0 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hCollecting botocore<1.34.52,>=1.34.41 (from aiobotocore<3.0.0,>=2.5.4->s3fs)\n",
" Downloading botocore-1.34.51-py3-none-any.whl (12.0 MB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m12.0/12.0 MB\u001b[0m \u001b[31m105.3 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hRequirement already satisfied: wrapt<2.0.0,>=1.10.10 in /usr/local/lib/python3.10/dist-packages (from aiobotocore<3.0.0,>=2.5.4->s3fs) (1.14.1)\n",
"Collecting aioitertools<1.0.0,>=0.5.1 (from aiobotocore<3.0.0,>=2.5.4->s3fs)\n",
" Downloading aioitertools-0.11.0-py3-none-any.whl (23 kB)\n",
"Requirement already satisfied: aiosignal>=1.1.2 in /usr/local/lib/python3.10/dist-packages (from aiohttp->datasets) (1.3.1)\n",
"Requirement already satisfied: attrs>=17.3.0 in /usr/local/lib/python3.10/dist-packages (from aiohttp->datasets) (23.2.0)\n",
"Requirement already satisfied: frozenlist>=1.1.1 in /usr/local/lib/python3.10/dist-packages (from aiohttp->datasets) (1.4.1)\n",
"Requirement already satisfied: multidict<7.0,>=4.5 in /usr/local/lib/python3.10/dist-packages (from aiohttp->datasets) (6.0.5)\n",
"Requirement already satisfied: yarl<2.0,>=1.0 in /usr/local/lib/python3.10/dist-packages (from aiohttp->datasets) (1.9.4)\n",
"Requirement already satisfied: async-timeout<5.0,>=4.0 in /usr/local/lib/python3.10/dist-packages (from aiohttp->datasets) (4.0.3)\n",
"INFO: pip is looking at multiple versions of fsspec[http] to determine which version is compatible with other requirements. This could take a while.\n",
"Collecting fsspec[http]<=2024.2.0,>=2023.1.0 (from datasets)\n",
" Downloading fsspec-2024.2.0-py3-none-any.whl (170 kB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m170.9/170.9 kB\u001b[0m \u001b[31m27.1 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25h Downloading fsspec-2023.12.2-py3-none-any.whl (168 kB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m169.0/169.0 kB\u001b[0m \u001b[31m27.5 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25h Downloading fsspec-2023.12.1-py3-none-any.whl (168 kB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m168.9/168.9 kB\u001b[0m \u001b[31m27.1 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25h Downloading fsspec-2023.12.0-py3-none-any.whl (168 kB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m168.9/168.9 kB\u001b[0m \u001b[31m25.9 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25h Downloading fsspec-2023.10.0-py3-none-any.whl (166 kB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m166.4/166.4 kB\u001b[0m \u001b[31m26.3 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25h Downloading fsspec-2023.9.2-py3-none-any.whl (173 kB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m173.4/173.4 kB\u001b[0m \u001b[31m27.3 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25h Downloading fsspec-2023.9.1-py3-none-any.whl (173 kB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m173.4/173.4 kB\u001b[0m \u001b[31m26.8 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hINFO: pip is looking at multiple versions of fsspec[http] to determine which version is compatible with other requirements. This could take a while.\n",
" Downloading fsspec-2023.9.0-py3-none-any.whl (173 kB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m173.2/173.2 kB\u001b[0m \u001b[31m27.4 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25h Downloading fsspec-2023.5.0-py3-none-any.whl (160 kB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m160.1/160.1 kB\u001b[0m \u001b[31m21.7 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25h Downloading fsspec-2023.4.0-py3-none-any.whl (153 kB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m154.0/154.0 kB\u001b[0m \u001b[31m22.8 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25h Downloading fsspec-2023.3.0-py3-none-any.whl (145 kB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m145.4/145.4 kB\u001b[0m \u001b[31m23.2 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25h Downloading fsspec-2023.1.0-py3-none-any.whl (143 kB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m143.0/143.0 kB\u001b[0m \u001b[31m22.1 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hINFO: This is taking longer than usual. You might need to provide the dependency resolver with stricter constraints to reduce runtime. See https://pip.pypa.io/warnings/backtracking for guidance. If you want to abort this run, press Ctrl + C.\n",
"Collecting transformers\n",
" Downloading transformers-4.38.2-py3-none-any.whl (8.5 MB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m8.5/8.5 MB\u001b[0m \u001b[31m106.4 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hCollecting torchx\n",
" Downloading torchx-0.4.0-py3-none-any.whl (220 kB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m220.2/220.2 kB\u001b[0m \u001b[31m28.8 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25h Downloading torchx-0.3.0-py3-none-any.whl (202 kB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m202.9/202.9 kB\u001b[0m \u001b[31m29.2 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25h Downloading torchx-0.2.0-py3-none-any.whl (177 kB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m177.6/177.6 kB\u001b[0m \u001b[31m25.7 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25h Downloading torchx-0.1.2-py3-none-any.whl (176 kB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m176.4/176.4 kB\u001b[0m \u001b[31m25.5 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25h Downloading torchx-0.1.1-py3-none-any.whl (154 kB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m154.6/154.6 kB\u001b[0m \u001b[31m23.4 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25h Downloading torchx-0.1.0-py3-none-any.whl (179 kB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m179.2/179.2 kB\u001b[0m \u001b[31m26.8 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hCollecting s3fs\n",
" Downloading s3fs-2024.2.0-py3-none-any.whl (28 kB)\n",
"INFO: pip is looking at multiple versions of torchx to determine which version is compatible with other requirements. This could take a while.\n",
"Collecting gitdb<5,>=4.0.1 (from GitPython!=3.1.29,>=1.0.0->wandb)\n",
" Downloading gitdb-4.0.11-py3-none-any.whl (62 kB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m62.7/62.7 kB\u001b[0m \u001b[31m10.6 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hRequirement already satisfied: anyio<5,>=3.5.0 in /usr/local/lib/python3.10/dist-packages (from openai>=1.0.0->evals) (3.7.1)\n",
"Requirement already satisfied: distro<2,>=1.7.0 in /usr/lib/python3/dist-packages (from openai>=1.0.0->evals) (1.7.0)\n",
"Collecting httpx<1,>=0.23.0 (from openai>=1.0.0->evals)\n",
" Downloading httpx-0.27.0-py3-none-any.whl (75 kB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m75.6/75.6 kB\u001b[0m \u001b[31m12.9 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hRequirement already satisfied: sniffio in /usr/local/lib/python3.10/dist-packages (from openai>=1.0.0->evals) (1.3.1)\n",
"Requirement already satisfied: annotated-types>=0.4.0 in /usr/local/lib/python3.10/dist-packages (from pydantic->evals) (0.6.0)\n",
"Requirement already satisfied: pydantic-core==2.16.3 in /usr/local/lib/python3.10/dist-packages (from pydantic->evals) (2.16.3)\n",
"Requirement already satisfied: charset-normalizer<4,>=2 in /usr/local/lib/python3.10/dist-packages (from requests>=2.19.0->datasets) (3.3.2)\n",
"Requirement already satisfied: idna<4,>=2.5 in /usr/local/lib/python3.10/dist-packages (from requests>=2.19.0->datasets) (3.6)\n",
"Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.10/dist-packages (from requests>=2.19.0->datasets) (2024.2.2)\n",
"Requirement already satisfied: sympy in /usr/local/lib/python3.10/dist-packages (from torch>=1.10.0->accelerate) (1.12)\n",
"Requirement already satisfied: networkx in /usr/local/lib/python3.10/dist-packages (from torch>=1.10.0->accelerate) (3.2.1)\n",
"Requirement already satisfied: jinja2 in /usr/local/lib/python3.10/dist-packages (from torch>=1.10.0->accelerate) (3.1.3)\n",
"Collecting nvidia-cuda-nvrtc-cu12==12.1.105 (from torch>=1.10.0->accelerate)\n",
" Downloading nvidia_cuda_nvrtc_cu12-12.1.105-py3-none-manylinux1_x86_64.whl (23.7 MB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m23.7/23.7 MB\u001b[0m \u001b[31m69.1 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hCollecting nvidia-cuda-runtime-cu12==12.1.105 (from torch>=1.10.0->accelerate)\n",
" Downloading nvidia_cuda_runtime_cu12-12.1.105-py3-none-manylinux1_x86_64.whl (823 kB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m823.6/823.6 kB\u001b[0m \u001b[31m72.2 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hCollecting nvidia-cuda-cupti-cu12==12.1.105 (from torch>=1.10.0->accelerate)\n",
" Downloading nvidia_cuda_cupti_cu12-12.1.105-py3-none-manylinux1_x86_64.whl (14.1 MB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m14.1/14.1 MB\u001b[0m \u001b[31m97.3 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hCollecting nvidia-cudnn-cu12==8.9.2.26 (from torch>=1.10.0->accelerate)\n",
" Downloading nvidia_cudnn_cu12-8.9.2.26-py3-none-manylinux1_x86_64.whl (731.7 MB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m731.7/731.7 MB\u001b[0m \u001b[31m1.7 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hCollecting nvidia-cublas-cu12==12.1.3.1 (from torch>=1.10.0->accelerate)\n",
" Downloading nvidia_cublas_cu12-12.1.3.1-py3-none-manylinux1_x86_64.whl (410.6 MB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m410.6/410.6 MB\u001b[0m \u001b[31m2.3 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hCollecting nvidia-cufft-cu12==11.0.2.54 (from torch>=1.10.0->accelerate)\n",
" Downloading nvidia_cufft_cu12-11.0.2.54-py3-none-manylinux1_x86_64.whl (121.6 MB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m121.6/121.6 MB\u001b[0m \u001b[31m14.0 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hCollecting nvidia-curand-cu12==10.3.2.106 (from torch>=1.10.0->accelerate)\n",
" Downloading nvidia_curand_cu12-10.3.2.106-py3-none-manylinux1_x86_64.whl (56.5 MB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m56.5/56.5 MB\u001b[0m \u001b[31m29.5 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hCollecting nvidia-cusolver-cu12==11.4.5.107 (from torch>=1.10.0->accelerate)\n",
" Downloading nvidia_cusolver_cu12-11.4.5.107-py3-none-manylinux1_x86_64.whl (124.2 MB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m124.2/124.2 MB\u001b[0m \u001b[31m11.5 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hCollecting nvidia-cusparse-cu12==12.1.0.106 (from torch>=1.10.0->accelerate)\n",
" Downloading nvidia_cusparse_cu12-12.1.0.106-py3-none-manylinux1_x86_64.whl (196.0 MB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m196.0/196.0 MB\u001b[0m \u001b[31m5.9 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hCollecting nvidia-nccl-cu12==2.19.3 (from torch>=1.10.0->accelerate)\n",
" Downloading nvidia_nccl_cu12-2.19.3-py3-none-manylinux1_x86_64.whl (166.0 MB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m166.0/166.0 MB\u001b[0m \u001b[31m10.4 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hCollecting nvidia-nvtx-cu12==12.1.105 (from torch>=1.10.0->accelerate)\n",
" Downloading nvidia_nvtx_cu12-12.1.105-py3-none-manylinux1_x86_64.whl (99 kB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m99.1/99.1 kB\u001b[0m \u001b[31m15.5 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hRequirement already satisfied: triton==2.2.0 in /usr/local/lib/python3.10/dist-packages (from torch>=1.10.0->accelerate) (2.2.0)\n",
"Collecting nvidia-nvjitlink-cu12 (from nvidia-cusolver-cu12==11.4.5.107->torch>=1.10.0->accelerate)\n",
" Downloading nvidia_nvjitlink_cu12-12.4.99-py3-none-manylinux2014_x86_64.whl (21.1 MB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m21.1/21.1 MB\u001b[0m \u001b[31m80.2 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hRequirement already satisfied: lxml~=4.9 in /usr/local/lib/python3.10/dist-packages (from blobfile->evals) (4.9.4)\n",
"Requirement already satisfied: cachetools in /usr/local/lib/python3.10/dist-packages (from gptcache->guidance) (5.3.3)\n",
"Requirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.10/dist-packages (from importlib-metadata->torchx) (3.18.0)\n",
"Collecting rapidfuzz<4,>=3 (from jiwer->evals)\n",
" Downloading rapidfuzz-3.6.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (3.4 MB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m3.4/3.4 MB\u001b[0m \u001b[31m6.5 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hRequirement already satisfied: SQLAlchemy<3,>=1.4 in /usr/local/lib/python3.10/dist-packages (from langchain->evals) (2.0.28)\n",
"Collecting dataclasses-json<0.7,>=0.5.7 (from langchain->evals)\n",
" Downloading dataclasses_json-0.6.4-py3-none-any.whl (28 kB)\n",
"Collecting jsonpatch<2.0,>=1.33 (from langchain->evals)\n",
" Downloading jsonpatch-1.33-py2.py3-none-any.whl (12 kB)\n",
"Collecting langchain-community<0.1,>=0.0.28 (from langchain->evals)\n",
" Downloading langchain_community-0.0.28-py3-none-any.whl (1.8 MB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m1.8/1.8 MB\u001b[0m \u001b[31m94.1 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hCollecting langchain-core<0.2.0,>=0.1.31 (from langchain->evals)\n",
" Downloading langchain_core-0.1.32-py3-none-any.whl (260 kB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m260.9/260.9 kB\u001b[0m \u001b[31m36.6 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hCollecting langchain-text-splitters<0.1,>=0.0.1 (from langchain->evals)\n",
" Downloading langchain_text_splitters-0.0.1-py3-none-any.whl (21 kB)\n",
"Collecting langsmith<0.2.0,>=0.1.17 (from langchain->evals)\n",
" Downloading langsmith-0.1.27-py3-none-any.whl (68 kB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m68.0/68.0 kB\u001b[0m \u001b[31m10.7 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hRequirement already satisfied: tenacity<9.0.0,>=8.1.0 in /usr/local/lib/python3.10/dist-packages (from langchain->evals) (8.2.3)\n",
"Requirement already satisfied: contourpy>=1.0.1 in /usr/local/lib/python3.10/dist-packages (from matplotlib->evals) (1.2.0)\n",
"Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.10/dist-packages (from matplotlib->evals) (0.12.1)\n",
"Requirement already satisfied: fonttools>=4.22.0 in /usr/local/lib/python3.10/dist-packages (from matplotlib->evals) (4.49.0)\n",
"Requirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.10/dist-packages (from matplotlib->evals) (1.4.5)\n",
"Requirement already satisfied: pillow>=6.2.0 in /usr/local/lib/python3.10/dist-packages (from matplotlib->evals) (9.4.0)\n",
"Requirement already satisfied: pyparsing>=2.3.1 in /usr/local/lib/python3.10/dist-packages (from matplotlib->evals) (3.1.2)\n",
"Requirement already satisfied: python-dateutil>=2.7 in /usr/local/lib/python3.10/dist-packages (from matplotlib->evals) (2.8.2)\n",
"Requirement already satisfied: PyJWT[crypto]<3,>=1.0.0 in /usr/lib/python3/dist-packages (from msal->guidance) (2.3.0)\n",
"Requirement already satisfied: cryptography<45,>=0.6 in /usr/local/lib/python3.10/dist-packages (from msal->guidance) (42.0.5)\n",
"Collecting mypy-extensions>=1.0.0 (from mypy->evals)\n",
" Downloading mypy_extensions-1.0.0-py3-none-any.whl (4.7 kB)\n",
"Requirement already satisfied: tomli>=1.1.0 in /usr/local/lib/python3.10/dist-packages (from mypy->evals) (2.0.1)\n",
"Requirement already satisfied: joblib in /usr/local/lib/python3.10/dist-packages (from nltk->evals) (1.3.2)\n",
"Requirement already satisfied: pytz>=2020.1 in /usr/local/lib/python3.10/dist-packages (from pandas->datasets) (2023.4)\n",
"Requirement already satisfied: pydot in /usr/local/lib/python3.10/dist-packages (from pyformlang->guidance) (1.4.2)\n",
"Collecting typing-inspect (from pyre-extensions->torchx)\n",
" Downloading typing_inspect-0.9.0-py3-none-any.whl (8.8 kB)\n",
"Requirement already satisfied: iniconfig in /usr/local/lib/python3.10/dist-packages (from pytest->evals) (2.0.0)\n",
"Requirement already satisfied: pluggy<2.0,>=0.12 in /usr/local/lib/python3.10/dist-packages (from pytest->evals) (1.4.0)\n",
"Requirement already satisfied: exceptiongroup>=1.0.0rc8 in /usr/local/lib/python3.10/dist-packages (from pytest->evals) (1.2.0)\n",
"Collecting portalocker (from sacrebleu->evals)\n",
" Downloading portalocker-2.8.2-py3-none-any.whl (17 kB)\n",
"Collecting colorama (from sacrebleu->evals)\n",
" Downloading colorama-0.4.6-py2.py3-none-any.whl (25 kB)\n",
"Collecting asn1crypto<2.0.0,>0.24.0 (from snowflake-connector-python[pandas]->evals)\n",
" Downloading asn1crypto-1.5.1-py2.py3-none-any.whl (105 kB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m105.0/105.0 kB\u001b[0m \u001b[31m16.2 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hRequirement already satisfied: cffi<2.0.0,>=1.9 in /usr/local/lib/python3.10/dist-packages (from snowflake-connector-python[pandas]->evals) (1.16.0)\n",
"Requirement already satisfied: pyOpenSSL<25.0.0,>=16.2.0 in /usr/local/lib/python3.10/dist-packages (from snowflake-connector-python[pandas]->evals) (24.1.0)\n",
"Requirement already satisfied: sortedcontainers>=2.4.0 in /usr/local/lib/python3.10/dist-packages (from snowflake-connector-python[pandas]->evals) (2.4.0)\n",
"Collecting platformdirs (from guidance)\n",
" Downloading platformdirs-3.11.0-py3-none-any.whl (17 kB)\n",
"Collecting tomlkit (from snowflake-connector-python[pandas]->evals)\n",
" Downloading tomlkit-0.12.4-py3-none-any.whl (37 kB)\n",
"Requirement already satisfied: tensorflow<3.0.0,>=2.4.0 in /usr/local/lib/python3.10/dist-packages (from spacy-universal-sentence-encoder->evals) (2.15.0)\n",
"Requirement already satisfied: spacy<4.0.0,>=3.0.0 in /usr/local/lib/python3.10/dist-packages (from spacy-universal-sentence-encoder->evals) (3.7.4)\n",
"Requirement already satisfied: tensorflow-hub in /usr/local/lib/python3.10/dist-packages (from spacy-universal-sentence-encoder->evals) (0.16.1)\n",
"Requirement already satisfied: scipy!=1.9.2,>=1.4 in /usr/local/lib/python3.10/dist-packages (from statsmodels->evals) (1.11.4)\n",
"Requirement already satisfied: patsy>=0.5.4 in /usr/local/lib/python3.10/dist-packages (from statsmodels->evals) (0.5.6)\n",
"Collecting jmespath<2.0.0,>=0.7.1 (from botocore<1.34.52,>=1.34.41->aiobotocore<3.0.0,>=2.5.4->s3fs)\n",
" Downloading jmespath-1.0.1-py3-none-any.whl (20 kB)\n",
"Requirement already satisfied: pycparser in /usr/local/lib/python3.10/dist-packages (from cffi<2.0.0,>=1.9->snowflake-connector-python[pandas]->evals) (2.21)\n",
"Collecting marshmallow<4.0.0,>=3.18.0 (from dataclasses-json<0.7,>=0.5.7->langchain->evals)\n",
" Downloading marshmallow-3.21.1-py3-none-any.whl (49 kB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m49.4/49.4 kB\u001b[0m \u001b[31m7.0 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hCollecting smmap<6,>=3.0.1 (from gitdb<5,>=4.0.1->GitPython!=3.1.29,>=1.0.0->wandb)\n",
" Downloading smmap-5.0.1-py3-none-any.whl (24 kB)\n",
"Collecting httpcore==1.* (from httpx<1,>=0.23.0->openai>=1.0.0->evals)\n",
" Downloading httpcore-1.0.4-py3-none-any.whl (77 kB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m77.8/77.8 kB\u001b[0m \u001b[31m13.6 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hCollecting h11<0.15,>=0.13 (from httpcore==1.*->httpx<1,>=0.23.0->openai>=1.0.0->evals)\n",
" Downloading h11-0.14.0-py3-none-any.whl (58 kB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m58.3/58.3 kB\u001b[0m \u001b[31m10.3 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hCollecting jsonpointer>=1.9 (from jsonpatch<2.0,>=1.33->langchain->evals)\n",
" Downloading jsonpointer-2.4-py2.py3-none-any.whl (7.8 kB)\n",
"Collecting packaging\n",
" Downloading packaging-23.2-py3-none-any.whl (53 kB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m53.0/53.0 kB\u001b[0m \u001b[31m8.7 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hCollecting orjson<4.0.0,>=3.9.14 (from langsmith<0.2.0,>=0.1.17->langchain->evals)\n",
" Downloading orjson-3.9.15-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (138 kB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m138.5/138.5 kB\u001b[0m \u001b[31m21.4 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hRequirement already satisfied: spacy-legacy<3.1.0,>=3.0.11 in /usr/local/lib/python3.10/dist-packages (from spacy<4.0.0,>=3.0.0->spacy-universal-sentence-encoder->evals) (3.0.12)\n",
"Requirement already satisfied: spacy-loggers<2.0.0,>=1.0.0 in /usr/local/lib/python3.10/dist-packages (from spacy<4.0.0,>=3.0.0->spacy-universal-sentence-encoder->evals) (1.0.5)\n",
"Requirement already satisfied: murmurhash<1.1.0,>=0.28.0 in /usr/local/lib/python3.10/dist-packages (from spacy<4.0.0,>=3.0.0->spacy-universal-sentence-encoder->evals) (1.0.10)\n",
"Requirement already satisfied: cymem<2.1.0,>=2.0.2 in /usr/local/lib/python3.10/dist-packages (from spacy<4.0.0,>=3.0.0->spacy-universal-sentence-encoder->evals) (2.0.8)\n",
"Requirement already satisfied: preshed<3.1.0,>=3.0.2 in /usr/local/lib/python3.10/dist-packages (from spacy<4.0.0,>=3.0.0->spacy-universal-sentence-encoder->evals) (3.0.9)\n",
"Requirement already satisfied: thinc<8.3.0,>=8.2.2 in /usr/local/lib/python3.10/dist-packages (from spacy<4.0.0,>=3.0.0->spacy-universal-sentence-encoder->evals) (8.2.3)\n",
"Requirement already satisfied: wasabi<1.2.0,>=0.9.1 in /usr/local/lib/python3.10/dist-packages (from spacy<4.0.0,>=3.0.0->spacy-universal-sentence-encoder->evals) (1.1.2)\n",
"Requirement already satisfied: srsly<3.0.0,>=2.4.3 in /usr/local/lib/python3.10/dist-packages (from spacy<4.0.0,>=3.0.0->spacy-universal-sentence-encoder->evals) (2.4.8)\n",
"Requirement already satisfied: catalogue<2.1.0,>=2.0.6 in /usr/local/lib/python3.10/dist-packages (from spacy<4.0.0,>=3.0.0->spacy-universal-sentence-encoder->evals) (2.0.10)\n",
"Requirement already satisfied: weasel<0.4.0,>=0.1.0 in /usr/local/lib/python3.10/dist-packages (from spacy<4.0.0,>=3.0.0->spacy-universal-sentence-encoder->evals) (0.3.4)\n",
"Requirement already satisfied: typer<0.10.0,>=0.3.0 in /usr/local/lib/python3.10/dist-packages (from spacy<4.0.0,>=3.0.0->spacy-universal-sentence-encoder->evals) (0.9.0)\n",
"Requirement already satisfied: smart-open<7.0.0,>=5.2.1 in /usr/local/lib/python3.10/dist-packages (from spacy<4.0.0,>=3.0.0->spacy-universal-sentence-encoder->evals) (6.4.0)\n",
"Requirement already satisfied: langcodes<4.0.0,>=3.2.0 in /usr/local/lib/python3.10/dist-packages (from spacy<4.0.0,>=3.0.0->spacy-universal-sentence-encoder->evals) (3.3.0)\n",
"Requirement already satisfied: greenlet!=0.4.17 in /usr/local/lib/python3.10/dist-packages (from SQLAlchemy<3,>=1.4->langchain->evals) (3.0.3)\n",
"Requirement already satisfied: absl-py>=1.0.0 in /usr/local/lib/python3.10/dist-packages (from tensorflow<3.0.0,>=2.4.0->spacy-universal-sentence-encoder->evals) (1.4.0)\n",
"Requirement already satisfied: astunparse>=1.6.0 in /usr/local/lib/python3.10/dist-packages (from tensorflow<3.0.0,>=2.4.0->spacy-universal-sentence-encoder->evals) (1.6.3)\n",
"Requirement already satisfied: flatbuffers>=23.5.26 in /usr/local/lib/python3.10/dist-packages (from tensorflow<3.0.0,>=2.4.0->spacy-universal-sentence-encoder->evals) (24.3.7)\n",
"Requirement already satisfied: gast!=0.5.0,!=0.5.1,!=0.5.2,>=0.2.1 in /usr/local/lib/python3.10/dist-packages (from tensorflow<3.0.0,>=2.4.0->spacy-universal-sentence-encoder->evals) (0.5.4)\n",
"Requirement already satisfied: google-pasta>=0.1.1 in /usr/local/lib/python3.10/dist-packages (from tensorflow<3.0.0,>=2.4.0->spacy-universal-sentence-encoder->evals) (0.2.0)\n",
"Requirement already satisfied: h5py>=2.9.0 in /usr/local/lib/python3.10/dist-packages (from tensorflow<3.0.0,>=2.4.0->spacy-universal-sentence-encoder->evals) (3.9.0)\n",
"Requirement already satisfied: libclang>=13.0.0 in /usr/local/lib/python3.10/dist-packages (from tensorflow<3.0.0,>=2.4.0->spacy-universal-sentence-encoder->evals) (16.0.6)\n",
"Requirement already satisfied: ml-dtypes~=0.2.0 in /usr/local/lib/python3.10/dist-packages (from tensorflow<3.0.0,>=2.4.0->spacy-universal-sentence-encoder->evals) (0.2.0)\n",
"Requirement already satisfied: opt-einsum>=2.3.2 in /usr/local/lib/python3.10/dist-packages (from tensorflow<3.0.0,>=2.4.0->spacy-universal-sentence-encoder->evals) (3.3.0)\n",
"Requirement already satisfied: tensorflow-io-gcs-filesystem>=0.23.1 in /usr/local/lib/python3.10/dist-packages (from tensorflow<3.0.0,>=2.4.0->spacy-universal-sentence-encoder->evals) (0.36.0)\n",
"Requirement already satisfied: grpcio<2.0,>=1.24.3 in /usr/local/lib/python3.10/dist-packages (from tensorflow<3.0.0,>=2.4.0->spacy-universal-sentence-encoder->evals) (1.62.1)\n",
"Requirement already satisfied: tensorboard<2.16,>=2.15 in /usr/local/lib/python3.10/dist-packages (from tensorflow<3.0.0,>=2.4.0->spacy-universal-sentence-encoder->evals) (2.15.2)\n",
"Requirement already satisfied: tensorflow-estimator<2.16,>=2.15.0 in /usr/local/lib/python3.10/dist-packages (from tensorflow<3.0.0,>=2.4.0->spacy-universal-sentence-encoder->evals) (2.15.0)\n",
"Requirement already satisfied: keras<2.16,>=2.15.0 in /usr/local/lib/python3.10/dist-packages (from tensorflow<3.0.0,>=2.4.0->spacy-universal-sentence-encoder->evals) (2.15.0)\n",
"Requirement already satisfied: MarkupSafe>=2.0 in /usr/local/lib/python3.10/dist-packages (from jinja2->torch>=1.10.0->accelerate) (2.1.5)\n",
"Requirement already satisfied: mpmath>=0.19 in /usr/local/lib/python3.10/dist-packages (from sympy->torch>=1.10.0->accelerate) (1.3.0)\n",
"Requirement already satisfied: tf-keras>=2.14.1 in /usr/local/lib/python3.10/dist-packages (from tensorflow-hub->spacy-universal-sentence-encoder->evals) (2.15.1)\n",
"Requirement already satisfied: wheel<1.0,>=0.23.0 in /usr/local/lib/python3.10/dist-packages (from astunparse>=1.6.0->tensorflow<3.0.0,>=2.4.0->spacy-universal-sentence-encoder->evals) (0.43.0)\n",
"Requirement already satisfied: google-auth<3,>=1.6.3 in /usr/local/lib/python3.10/dist-packages (from tensorboard<2.16,>=2.15->tensorflow<3.0.0,>=2.4.0->spacy-universal-sentence-encoder->evals) (2.27.0)\n",
"Requirement already satisfied: google-auth-oauthlib<2,>=0.5 in /usr/local/lib/python3.10/dist-packages (from tensorboard<2.16,>=2.15->tensorflow<3.0.0,>=2.4.0->spacy-universal-sentence-encoder->evals) (1.2.0)\n",
"Requirement already satisfied: markdown>=2.6.8 in /usr/local/lib/python3.10/dist-packages (from tensorboard<2.16,>=2.15->tensorflow<3.0.0,>=2.4.0->spacy-universal-sentence-encoder->evals) (3.5.2)\n",
"Requirement already satisfied: tensorboard-data-server<0.8.0,>=0.7.0 in /usr/local/lib/python3.10/dist-packages (from tensorboard<2.16,>=2.15->tensorflow<3.0.0,>=2.4.0->spacy-universal-sentence-encoder->evals) (0.7.2)\n",
"Requirement already satisfied: werkzeug>=1.0.1 in /usr/local/lib/python3.10/dist-packages (from tensorboard<2.16,>=2.15->tensorflow<3.0.0,>=2.4.0->spacy-universal-sentence-encoder->evals) (3.0.1)\n",
"Requirement already satisfied: blis<0.8.0,>=0.7.8 in /usr/local/lib/python3.10/dist-packages (from thinc<8.3.0,>=8.2.2->spacy<4.0.0,>=3.0.0->spacy-universal-sentence-encoder->evals) (0.7.11)\n",
"Requirement already satisfied: confection<1.0.0,>=0.0.1 in /usr/local/lib/python3.10/dist-packages (from thinc<8.3.0,>=8.2.2->spacy<4.0.0,>=3.0.0->spacy-universal-sentence-encoder->evals) (0.1.4)\n",
"Requirement already satisfied: cloudpathlib<0.17.0,>=0.7.0 in /usr/local/lib/python3.10/dist-packages (from weasel<0.4.0,>=0.1.0->spacy<4.0.0,>=3.0.0->spacy-universal-sentence-encoder->evals) (0.16.0)\n",
"Requirement already satisfied: pyasn1-modules>=0.2.1 in /usr/local/lib/python3.10/dist-packages (from google-auth<3,>=1.6.3->tensorboard<2.16,>=2.15->tensorflow<3.0.0,>=2.4.0->spacy-universal-sentence-encoder->evals) (0.3.0)\n",
"Requirement already satisfied: rsa<5,>=3.1.4 in /usr/local/lib/python3.10/dist-packages (from google-auth<3,>=1.6.3->tensorboard<2.16,>=2.15->tensorflow<3.0.0,>=2.4.0->spacy-universal-sentence-encoder->evals) (4.9)\n",
"Requirement already satisfied: requests-oauthlib>=0.7.0 in /usr/local/lib/python3.10/dist-packages (from google-auth-oauthlib<2,>=0.5->tensorboard<2.16,>=2.15->tensorflow<3.0.0,>=2.4.0->spacy-universal-sentence-encoder->evals) (1.4.0)\n",
"Requirement already satisfied: pyasn1<0.6.0,>=0.4.6 in /usr/local/lib/python3.10/dist-packages (from pyasn1-modules>=0.2.1->google-auth<3,>=1.6.3->tensorboard<2.16,>=2.15->tensorflow<3.0.0,>=2.4.0->spacy-universal-sentence-encoder->evals) (0.5.1)\n",
"Requirement already satisfied: oauthlib>=3.0.0 in /usr/local/lib/python3.10/dist-packages (from requests-oauthlib>=0.7.0->google-auth-oauthlib<2,>=0.5->tensorboard<2.16,>=2.15->tensorflow<3.0.0,>=2.4.0->spacy-universal-sentence-encoder->evals) (3.2.2)\n",
"Building wheels for collected packages: fire, antlr4-python3-runtime, docstring-parser, flash-attn, langdetect, spacy-universal-sentence-encoder\n",
" Building wheel for fire (setup.py) ... \u001b[?25l\u001b[?25hdone\n",
" Created wheel for fire: filename=fire-0.6.0-py2.py3-none-any.whl size=117029 sha256=bf7c935fcbe5728309d5bc1f805ff47d895fa7c54e09de037ded96923ce63ce1\n",
" Stored in directory: /root/.cache/pip/wheels/d6/6d/5d/5b73fa0f46d01a793713f8859201361e9e581ced8c75e5c6a3\n",
" Building wheel for antlr4-python3-runtime (setup.py) ... \u001b[?25l\u001b[?25hdone\n",
" Created wheel for antlr4-python3-runtime: filename=antlr4_python3_runtime-4.9.3-py3-none-any.whl size=144554 sha256=e34e47ff6378d1e06f1584aa54786b69b63c56e54b4d778b12b5f9708099cfdc\n",
" Stored in directory: /root/.cache/pip/wheels/12/93/dd/1f6a127edc45659556564c5730f6d4e300888f4bca2d4c5a88\n",
" Building wheel for docstring-parser (pyproject.toml) ... \u001b[?25l\u001b[?25hdone\n",
" Created wheel for docstring-parser: filename=docstring_parser-0.8.1-py3-none-any.whl size=19661 sha256=0c5dce54a1bf9388cd09fdfa91a6a33e71060b6e00c086e74cd30a3e6a6d78de\n",
" Stored in directory: /root/.cache/pip/wheels/26/e4/54/64439f1d0c5d3721041ddc0f001e4b57756a394880a2af8981\n",
" Building wheel for flash-attn (setup.py) ... \u001b[?25l\u001b[?25hdone\n",
" Created wheel for flash-attn: filename=flash_attn-2.5.6-cp310-cp310-linux_x86_64.whl size=120592258 sha256=d8cf54adda65f59820221d329d274e124972d7fdc05ab3b1130253c64eee6c8a\n",
" Stored in directory: /root/.cache/pip/wheels/a8/1c/88/b959d6818b98a46d61ba231683abb7523b89ac1a7ed1e0c206\n",
" Building wheel for langdetect (setup.py) ... \u001b[?25l\u001b[?25hdone\n",
" Created wheel for langdetect: filename=langdetect-1.0.9-py3-none-any.whl size=993227 sha256=58bbd4c08a509a200b16ec777d1b53dc3640c944a8ab37e45fbd25a97dab211a\n",
" Stored in directory: /root/.cache/pip/wheels/95/03/7d/59ea870c70ce4e5a370638b5462a7711ab78fba2f655d05106\n",
" Building wheel for spacy-universal-sentence-encoder (setup.py) ... \u001b[?25l\u001b[?25hdone\n",
" Created wheel for spacy-universal-sentence-encoder: filename=spacy_universal_sentence_encoder-0.4.6-py3-none-any.whl size=16539 sha256=1a8c7c95d24b2e42160afa34b66de9609f1f2247bae618efb25b66163edbc118\n",
" Stored in directory: /root/.cache/pip/wheels/65/75/56/7fd780bb5e1f9c74625ccda3c03a1abeb27a2ade5a8946db33\n",
"Successfully built fire antlr4-python3-runtime docstring-parser flash-attn langdetect spacy-universal-sentence-encoder\n",
"Installing collected packages: texttable, ninja, brotli, asn1crypto, antlr4-python3-runtime, zstandard, xxhash, urllib3, types-PyYAML, tomlkit, smmap, setproctitle, rapidfuzz, pyzstd, pyppmd, pycryptodomex, pybcj, portalocker, platformdirs, packaging, orjson, ordered-set, omegaconf, nvidia-nvtx-cu12, nvidia-nvjitlink-cu12, nvidia-nccl-cu12, nvidia-curand-cu12, nvidia-cufft-cu12, nvidia-cuda-runtime-cu12, nvidia-cuda-nvrtc-cu12, nvidia-cuda-cupti-cu12, nvidia-cublas-cu12, mypy-extensions, multivolumefile, mock, lz4, langdetect, jsonpointer, jmespath, inflate64, h11, fsspec, fire, einops, docstring-parser, docker-pycreds, diskcache, dill, colorama, backoff, aioitertools, typing-inspect, sentry-sdk, sacrebleu, pyformlang, py7zr, nvidia-cusparse-cu12, nvidia-cudnn-cu12, mypy, multiprocess, marshmallow, jsonpatch, jiwer, hydra-core, httpcore, gitdb, botocore, blobfile, tiktoken, pyre-extensions, nvidia-cusolver-cu12, langsmith, httpx, gptcache, GitPython, docker, dataclasses-json, aiobotocore, wandb, torchx, snowflake-connector-python, s3fs, openai, msal, langchain-core, datasets, langchain-text-splitters, langchain-community, guidance, flash-attn, bitsandbytes, accelerate, peft, langchain, spacy-universal-sentence-encoder, evals\n",
" Attempting uninstall: urllib3\n",
" Found existing installation: urllib3 2.0.7\n",
" Uninstalling urllib3-2.0.7:\n",
" Successfully uninstalled urllib3-2.0.7\n",
" Attempting uninstall: platformdirs\n",
" Found existing installation: platformdirs 4.2.0\n",
" Uninstalling platformdirs-4.2.0:\n",
" Successfully uninstalled platformdirs-4.2.0\n",
" Attempting uninstall: packaging\n",
" Found existing installation: packaging 24.0\n",
" Uninstalling packaging-24.0:\n",
" Successfully uninstalled packaging-24.0\n",
" Attempting uninstall: fsspec\n",
" Found existing installation: fsspec 2023.6.0\n",
" Uninstalling fsspec-2023.6.0:\n",
" Successfully uninstalled fsspec-2023.6.0\n",
"\u001b[31mERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.\n",
"gcsfs 2023.6.0 requires fsspec==2023.6.0, but you have fsspec 2024.2.0 which is incompatible.\u001b[0m\u001b[31m\n",
"\u001b[0mSuccessfully installed GitPython-3.1.42 accelerate-0.28.0 aiobotocore-2.12.1 aioitertools-0.11.0 antlr4-python3-runtime-4.9.3 asn1crypto-1.5.1 backoff-2.2.1 bitsandbytes-0.43.0 blobfile-2.1.1 botocore-1.34.51 brotli-1.1.0 colorama-0.4.6 dataclasses-json-0.6.4 datasets-2.18.0 dill-0.3.8 diskcache-5.6.3 docker-7.0.0 docker-pycreds-0.4.0 docstring-parser-0.8.1 einops-0.7.0 evals-2.0.0.post1 fire-0.6.0 flash-attn-2.5.6 fsspec-2024.2.0 gitdb-4.0.11 gptcache-0.1.43 guidance-0.1.10 h11-0.14.0 httpcore-1.0.4 httpx-0.27.0 hydra-core-1.3.2 inflate64-1.0.0 jiwer-3.0.3 jmespath-1.0.1 jsonpatch-1.33 jsonpointer-2.4 langchain-0.1.12 langchain-community-0.0.28 langchain-core-0.1.32 langchain-text-splitters-0.0.1 langdetect-1.0.9 langsmith-0.1.27 lz4-4.3.3 marshmallow-3.21.1 mock-5.1.0 msal-1.27.0 multiprocess-0.70.16 multivolumefile-0.2.3 mypy-1.9.0 mypy-extensions-1.0.0 ninja-1.11.1.1 nvidia-cublas-cu12-12.1.3.1 nvidia-cuda-cupti-cu12-12.1.105 nvidia-cuda-nvrtc-cu12-12.1.105 nvidia-cuda-runtime-cu12-12.1.105 nvidia-cudnn-cu12-8.9.2.26 nvidia-cufft-cu12-11.0.2.54 nvidia-curand-cu12-10.3.2.106 nvidia-cusolver-cu12-11.4.5.107 nvidia-cusparse-cu12-12.1.0.106 nvidia-nccl-cu12-2.19.3 nvidia-nvjitlink-cu12-12.4.99 nvidia-nvtx-cu12-12.1.105 omegaconf-2.3.0 openai-1.14.1 ordered-set-4.1.0 orjson-3.9.15 packaging-23.2 peft-0.9.0 platformdirs-3.11.0 portalocker-2.8.2 py7zr-0.21.0 pybcj-1.0.2 pycryptodomex-3.20.0 pyformlang-1.0.7 pyppmd-1.1.0 pyre-extensions-0.0.30 pyzstd-0.15.9 rapidfuzz-3.6.2 s3fs-2024.2.0 sacrebleu-2.4.1 sentry-sdk-1.42.0 setproctitle-1.3.3 smmap-5.0.1 snowflake-connector-python-3.7.1 spacy-universal-sentence-encoder-0.4.6 texttable-1.7.0 tiktoken-0.6.0 tomlkit-0.12.4 torchx-0.5.0 types-PyYAML-6.0.12.20240311 typing-inspect-0.9.0 urllib3-1.26.18 wandb-0.16.4 xxhash-3.4.1 zstandard-0.22.0\n"
]
},
{
"output_type": "display_data",
"data": {
"application/vnd.colab-display-data+json": {
"pip_warning": {
"packages": [
"pydevd_plugins"
]
},
"id": "7e0d8aaa78024b9b8bb0d5fb36d00744"
}
},
"metadata": {}
}
],
"source": [
"!pip install \\\n",
" accelerate \\\n",
" bitsandbytes \\\n",
" datasets \\\n",
" evals \\\n",
" fire \\\n",
" guidance \\\n",
" huggingface_hub \\\n",
" hydra-core \\\n",
" ninja \\\n",
" packaging \\\n",
" peft \\\n",
" py7zr \\\n",
" s3fs \\\n",
" sentencepiece \\\n",
" torchx \\\n",
" transformers \\\n",
" zstandard \\\n",
" wandb \\\n",
" flash-attn --no-build-isolation"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "0niRKJv-hKLq"
},
"source": [
"Log into huggingface. For that, you will need a huggingface token. You can find it by going to https://huggingface.co/settings/tokens\n",
"If you are running a recipe that uses llama2 models, don't forget to sign the license agreement on the model card."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "yiA7dxI4gdhD",
"colab": {
"base_uri": "https://localhost:8080/"
},
"outputId": "9ed95fc4-ca5b-436e-e41d-a9c86057aed3"
},
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"\n",
" _| _| _| _| _|_|_| _|_|_| _|_|_| _| _| _|_|_| _|_|_|_| _|_| _|_|_| _|_|_|_|\n",
" _| _| _| _| _| _| _| _|_| _| _| _| _| _| _| _|\n",
" _|_|_|_| _| _| _| _|_| _| _|_| _| _| _| _| _| _|_| _|_|_| _|_|_|_| _| _|_|_|\n",
" _| _| _| _| _| _| _| _| _| _| _|_| _| _| _| _| _| _| _|\n",
" _| _| _|_| _|_|_| _|_|_| _|_|_| _| _| _|_|_| _| _| _| _|_|_| _|_|_|_|\n",
"\n",
" To login, `huggingface_hub` requires a token generated from https://huggingface.co/settings/tokens .\n",
"Token: \n",
"Add token as git credential? (Y/n) n\n",
"Token is valid (permission: write).\n",
"Your token has been saved to /root/.cache/huggingface/token\n",
"Login successful\n"
]
}
],
"source": [
"!huggingface-cli login"
]
},
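{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you prefer not to paste the token interactively, you can also authenticate programmatically with `huggingface_hub`. A minimal sketch, assuming the token is stored in an environment variable named `HF_TOKEN` (an arbitrary name for this example, not something the recipe requires):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Alternative to `huggingface-cli login`: authenticate from Python.\n",
"# Assumes the token is stored in an environment variable named HF_TOKEN\n",
"# (an arbitrary choice for this sketch, not a recipe setting).\n",
"import os\n",
"\n",
"from huggingface_hub import login\n",
"\n",
"login(token=os.environ[\"HF_TOKEN\"])"
]
},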
{
"cell_type": "markdown",
"source": [
"You will need permissions for LLaMa2: `llama2-7b-chat-colab`. You can request via HF/Meta website. It may take a few hours-days to get initial permissions."
],
"metadata": {
"id": "onvxgb2nzod2"
}
},
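{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can verify that access has been granted before kicking off a long run. A minimal sketch using `huggingface_hub` (the call raises an error for a gated repo your token cannot access yet):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Quick access check for the gated Llama 2 repo: model_info raises\n",
"# (e.g. a gated-repo error) if access has not been granted yet.\n",
"from huggingface_hub import model_info\n",
"\n",
"model_info(\"meta-llama/Llama-2-7b-chat-hf\")\n",
"print(\"Access to meta-llama/Llama-2-7b-chat-hf confirmed.\")"
]
},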
{
"cell_type": "markdown",
"metadata": {
"id": "dF84DfZ0L1b-"
},
"source": [
"## Fine tuning"
]
},
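{
"cell_type": "markdown",
"metadata": {},
"source": [
"The recipe below fine-tunes Llama 2 7B Chat with QLoRA: the base weights are loaded in 4-bit NF4 and small LoRA adapters are trained on the attention projections, as shown in the config dump in the cell output. Note the effective batch size: `batch_size: 32` with `micro_batch_size: 2` implies 16 gradient-accumulation steps per optimizer update. For reference, here is a minimal sketch of what that configuration corresponds to in plain `transformers`/`peft` terms (illustrative only, not the recipe's own code):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Illustrative sketch of the recipe's QLoRA settings expressed as\n",
"# transformers/peft config objects; values mirror the printed config.\n",
"import torch\n",
"from peft import LoraConfig\n",
"from transformers import BitsAndBytesConfig\n",
"\n",
"# 4-bit NF4 quantization with double quantization and fp16 compute.\n",
"bnb_config = BitsAndBytesConfig(\n",
"    load_in_4bit=True,\n",
"    bnb_4bit_use_double_quant=True,\n",
"    bnb_4bit_quant_type=\"nf4\",\n",
"    bnb_4bit_compute_dtype=torch.float16,\n",
")\n",
"\n",
"# LoRA adapters on the attention projections only.\n",
"lora_config = LoraConfig(\n",
"    r=64,\n",
"    lora_alpha=16,\n",
"    lora_dropout=0.05,\n",
"    target_modules=[\"q_proj\", \"k_proj\", \"v_proj\"],\n",
"    task_type=\"CAUSAL_LM\",\n",
")"
]
},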
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "IBW5n1wegwdQ",
"colab": {
"base_uri": "https://localhost:8080/"
},
"outputId": "6aa56b03-27c7-42ce-8a3f-489ad26c8983"
},
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"\u001b[34mtorchx\u001b[0m \u001b[2m2024-03-17 23:37:25 INFO \u001b[0m Tracker configurations: {}\n",
"\u001b[34mtorchx\u001b[0m \u001b[2m2024-03-17 23:37:25 INFO \u001b[0m Log directory not set in scheduler cfg. Creating a temporary log dir that will be deleted on exit. To preserve log directory set the `log_dir` cfg option\n",
"\u001b[34mtorchx\u001b[0m \u001b[2m2024-03-17 23:37:25 INFO \u001b[0m Log directory is: /tmp/torchx_bbmcnc96\n",
"local_cwd://torchx/finetune-xxh6pf7fmrg3kc\n",
"\u001b[34mtorchx\u001b[0m \u001b[2m2024-03-17 23:37:25 INFO \u001b[0m Waiting for the app to finish...\n",
"\u001b[32mfinetune/0\u001b[0m [0]:/usr/local/lib/python3.10/dist-packages/hydra/_internal/defaults_list.py:251: UserWarning: In 'summarize': Defaults list is missing `_self_`. See https://hydra.cc/docs/1.2/upgrades/1.0_to_1.1/default_composition_order for more information\n",
"\u001b[32mfinetune/0\u001b[0m [0]: warnings.warn(msg, UserWarning)\n",
"\u001b[32mfinetune/0\u001b[0m [0]:config: version: '0.1'\n",
"\u001b[32mfinetune/0\u001b[0m [0]:working_dir: /mnt/text/model/llama2-7b-chat-colab/dialogsum-samsum/0.1\n",
"\u001b[32mfinetune/0\u001b[0m [0]:output_model_dir: /mnt/text/model/llama2-7b-chat-colab/dialogsum-samsum/0.1/final\n",
"\u001b[32mfinetune/0\u001b[0m [0]:wandb_key: null\n",
"\u001b[32mfinetune/0\u001b[0m [0]:wandb_project: null\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model:\n",
"\u001b[32mfinetune/0\u001b[0m [0]: name: llama2-7b-chat-colab\n",
"\u001b[32mfinetune/0\u001b[0m [0]: huggingface_model_name: meta-llama/Llama-2-7b-chat-hf\n",
"\u001b[32mfinetune/0\u001b[0m [0]: huggingface_model_revision: 01622a9\n",
"\u001b[32mfinetune/0\u001b[0m [0]: pad_token: <unk>\n",
"\u001b[32mfinetune/0\u001b[0m [0]: fireworks:\n",
"\u001b[32mfinetune/0\u001b[0m [0]: base_model: accounts/fireworks/models/llama-v2-7b-chat\n",
"\u001b[32mfinetune/0\u001b[0m [0]: conversation_config:\n",
"\u001b[32mfinetune/0\u001b[0m [0]: system: '[INST] <<SYS>>\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]: You are an assistant and you are tasked with writing text summaries. For each\n",
"\u001b[32mfinetune/0\u001b[0m [0]: input text, provide a summary. The summary should be concise, accurate and\n",
"\u001b[32mfinetune/0\u001b[0m [0]: truthful. Do not make up facts or answers.\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]: <</SYS>>\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]: '\n",
"\u001b[32mfinetune/0\u001b[0m [0]: roles:\n",
"\u001b[32mfinetune/0\u001b[0m [0]: - '[INST]'\n",
"\u001b[32mfinetune/0\u001b[0m [0]: - '[/INST]'\n",
"\u001b[32mfinetune/0\u001b[0m [0]: offset: 0\n",
"\u001b[32mfinetune/0\u001b[0m [0]: sep_style: LLAMA2\n",
"\u001b[32mfinetune/0\u001b[0m [0]: sep: ' '\n",
"\u001b[32mfinetune/0\u001b[0m [0]: sep2: ' </s><s>'\n",
"\u001b[32mfinetune/0\u001b[0m [0]: stop_token_ids:\n",
"\u001b[32mfinetune/0\u001b[0m [0]: - 2\n",
"\u001b[32mfinetune/0\u001b[0m [0]: micro_batch_size: 2\n",
"\u001b[32mfinetune/0\u001b[0m [0]: batch_size: 32\n",
"\u001b[32mfinetune/0\u001b[0m [0]: epochs: 1\n",
"\u001b[32mfinetune/0\u001b[0m [0]: learning_rate: 2.0e-05\n",
"\u001b[32mfinetune/0\u001b[0m [0]: cutoff_len: 512\n",
"\u001b[32mfinetune/0\u001b[0m [0]: lora_r: 64\n",
"\u001b[32mfinetune/0\u001b[0m [0]: lora_alpha: 16\n",
"\u001b[32mfinetune/0\u001b[0m [0]: lora_dropout: 0.05\n",
"\u001b[32mfinetune/0\u001b[0m [0]: lora_target_modules:\n",
"\u001b[32mfinetune/0\u001b[0m [0]: - q_proj\n",
"\u001b[32mfinetune/0\u001b[0m [0]: - k_proj\n",
"\u001b[32mfinetune/0\u001b[0m [0]: - v_proj\n",
"\u001b[32mfinetune/0\u001b[0m [0]: gradient_checkpointing: true\n",
"\u001b[32mfinetune/0\u001b[0m [0]: flash_attention: false\n",
"\u001b[32mfinetune/0\u001b[0m [0]: load_in_4bit: true\n",
"\u001b[32mfinetune/0\u001b[0m [0]: torch_dtype: float16\n",
"\u001b[32mfinetune/0\u001b[0m [0]: fp16: true\n",
"\u001b[32mfinetune/0\u001b[0m [0]: quantization_config:\n",
"\u001b[32mfinetune/0\u001b[0m [0]: load_in_4bit: true\n",
"\u001b[32mfinetune/0\u001b[0m [0]: bnb_4bit_use_double_quant: true\n",
"\u001b[32mfinetune/0\u001b[0m [0]: bnb_4bit_quant_type: nf4\n",
"\u001b[32mfinetune/0\u001b[0m [0]: bnb_4bit_compute_dtype: float16\n",
"\u001b[32mfinetune/0\u001b[0m [0]:data:\n",
"\u001b[32mfinetune/0\u001b[0m [0]: dataset:\n",
"\u001b[32mfinetune/0\u001b[0m [0]: dialogsum:\n",
"\u001b[32mfinetune/0\u001b[0m [0]: huggingface_name: knkarthick/dialogsum\n",
"\u001b[32mfinetune/0\u001b[0m [0]: huggingface_revision: '4195720'\n",
"\u001b[32mfinetune/0\u001b[0m [0]: max_samples: 10000\n",
"\u001b[32mfinetune/0\u001b[0m [0]: transform:\n",
"\u001b[32mfinetune/0\u001b[0m [0]: class: recipes.common.batch_transform.StringTemplate\n",
"\u001b[32mfinetune/0\u001b[0m [0]: prompt_prefix: '[INST] <<SYS>>\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]: You are an assistant and you are tasked with writing text summaries. For\n",
"\u001b[32mfinetune/0\u001b[0m [0]: each input text, provide a summary. The summary should be concise, accurate\n",
"\u001b[32mfinetune/0\u001b[0m [0]: and truthful. Do not make up facts or answers.\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]: <</SYS>>\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]: '\n",
"\u001b[32mfinetune/0\u001b[0m [0]: task_prompt: '{dialogue}\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]: [/INST]\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]: '\n",
"\u001b[32mfinetune/0\u001b[0m [0]: prompt_template: '[INST] <<SYS>>\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]: You are an assistant and you are tasked with writing text summaries. For\n",
"\u001b[32mfinetune/0\u001b[0m [0]: each input text, provide a summary. The summary should be concise, accurate\n",
"\u001b[32mfinetune/0\u001b[0m [0]: and truthful. Do not make up facts or answers.\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]: <</SYS>>\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]: {dialogue}\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]: [/INST]\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]: '\n",
"\u001b[32mfinetune/0\u001b[0m [0]: completion_template: '{summary}'\n",
"\u001b[32mfinetune/0\u001b[0m [0]: samsum:\n",
"\u001b[32mfinetune/0\u001b[0m [0]: huggingface_name: samsum\n",
"\u001b[32mfinetune/0\u001b[0m [0]: huggingface_revision: 500cefe\n",
"\u001b[32mfinetune/0\u001b[0m [0]: max_samples: 10000\n",
"\u001b[32mfinetune/0\u001b[0m [0]: transform:\n",
"\u001b[32mfinetune/0\u001b[0m [0]: class: recipes.common.batch_transform.StringTemplate\n",
"\u001b[32mfinetune/0\u001b[0m [0]: prompt_prefix: '[INST] <<SYS>>\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]: You are an assistant and you are tasked with writing text summaries. For\n",
"\u001b[32mfinetune/0\u001b[0m [0]: each input text, provide a summary. The summary should be concise, accurate\n",
"\u001b[32mfinetune/0\u001b[0m [0]: and truthful. Do not make up facts or answers.\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]: <</SYS>>\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]: '\n",
"\u001b[32mfinetune/0\u001b[0m [0]: task_prompt: '{dialogue}\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]: [/INST]\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]: '\n",
"\u001b[32mfinetune/0\u001b[0m [0]: prompt_template: '[INST] <<SYS>>\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]: You are an assistant and you are tasked with writing text summaries. For\n",
"\u001b[32mfinetune/0\u001b[0m [0]: each input text, provide a summary. The summary should be concise, accurate\n",
"\u001b[32mfinetune/0\u001b[0m [0]: and truthful. Do not make up facts or answers.\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]: <</SYS>>\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]: {dialogue}\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]: [/INST]\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]: '\n",
"\u001b[32mfinetune/0\u001b[0m [0]: completion_template: '{summary}'\n",
"\u001b[32mfinetune/0\u001b[0m [0]: name: dialogsum-samsum\n",
"\u001b[32mfinetune/0\u001b[0m [0]: mask_prompt: false\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:tokenizer_config.json: 0%| | 0.00/770 [00:00<?, ?B/s]\n",
"\u001b[32mfinetune/0\u001b[0m [0]:tokenizer_config.json: 100%|██████████| 770/770 [00:00<00:00, 4.67MB/s]\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:tokenizer.model: 0%| | 0.00/500k [00:00<?, ?B/s]\n",
"\u001b[32mfinetune/0\u001b[0m [0]:tokenizer.model: 100%|██████████| 500k/500k [00:00<00:00, 30.4MB/s]\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:tokenizer.json: 0%| | 0.00/1.84M [00:00<?, ?B/s]\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:tokenizer.json: 100%|██████████| 1.84M/1.84M [00:00<00:00, 5.67MB/s]\n",
"\u001b[32mfinetune/0\u001b[0m [0]:tokenizer.json: 100%|██████████| 1.84M/1.84M [00:00<00:00, 5.64MB/s]\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:special_tokens_map.json: 0%| | 0.00/414 [00:00<?, ?B/s]\n",
"\u001b[32mfinetune/0\u001b[0m [0]:special_tokens_map.json: 100%|██████████| 414/414 [00:00<00:00, 2.71MB/s]\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Downloading readme: 0%| | 0.00/4.56k [00:00<?, ?B/s]\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Downloading readme: 100%|██████████| 4.56k/4.56k [00:00<00:00, 24.1MB/s]\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Downloading data: 0%| | 0.00/11.3M [00:00<?, ?B/s]\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Downloading data: 37%|███▋ | 4.19M/11.3M [00:00<00:00, 24.2MB/s]\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Downloading data: 100%|██████████| 11.3M/11.3M [00:00<00:00, 40.9MB/s]\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Downloading data: 100%|██████████| 11.3M/11.3M [00:00<00:00, 37.0MB/s]\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Downloading data: 0%| | 0.00/442k [00:00<?, ?B/s]\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Downloading data: 100%|██████████| 442k/442k [00:00<00:00, 2.42MB/s]\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Downloading data: 100%|██████████| 442k/442k [00:00<00:00, 2.41MB/s]\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Downloading data: 0%| | 0.00/1.35M [00:00<?, ?B/s]\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Downloading data: 100%|██████████| 1.35M/1.35M [00:00<00:00, 7.22MB/s]\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Downloading data: 100%|██████████| 1.35M/1.35M [00:00<00:00, 7.20MB/s]\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Generating train split: 0 examples [00:00, ? examples/s]\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Generating train split: 10000 examples [00:00, 67877.01 examples/s]\u001b[32mfinetune/0\u001b[0m [0]:loaded dataset dialogsum of size 12460\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Generating train split: 12460 examples [00:00, 68468.65 examples/s]\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Generating validation split: 0 examples [00:00, ? examples/s]\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Generating validation split: 500 examples [00:00, 48847.09 examples/s]\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Generating test split: 0 examples [00:00, ? examples/s]\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Generating test split: 1500 examples [00:00, 87348.58 examples/s]\n",
"\u001b[32mfinetune/0\u001b[0m [0]:using max length 512\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Filter (num_proc=16): 0%| | 0/12460 [00:00<?, ? examples/s]\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Filter (num_proc=16): 6%|▋ | 779/12460 [00:02<00:31, 376.06 examples/s]\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Filter (num_proc=16): 19%|█▉ | 2337/12460 [00:02<00:08, 1198.30 examples/s]\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Filter (num_proc=16): 25%|██▌ | 3116/12460 [00:02<00:05, 1578.24 examples/s]\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Filter (num_proc=16): 44%|████▍ | 5453/12460 [00:02<00:02, 3492.64 examples/s]\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Filter (num_proc=16): 63%|██████▎ | 7790/12460 [00:02<00:00, 5516.21 examples/s]\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Filter (num_proc=16): 75%|███████▌ | 9347/12460 [00:02<00:00, 6449.24 examples/s]\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Filter (num_proc=16): 88%|████████▊ | 10904/12460 [00:03<00:00, 7801.09 examples/s]\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Filter (num_proc=16): 100%|██████████| 12460/12460 [00:03<00:00, 6956.47 examples/s]\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Filter (num_proc=16): 100%|██████████| 12460/12460 [00:03<00:00, 3629.40 examples/s]\n",
"\u001b[32mfinetune/0\u001b[0m [0]:truncated dataset dialogsum to size 10000\n",
"\u001b[32mfinetune/0\u001b[0m [0]:sample prompt 0 for dataset dialogsum:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:[INST] <<SYS>>\n",
"\u001b[32mfinetune/0\u001b[0m [0]:You are an assistant and you are tasked with writing text summaries. For each input text, provide a summary. The summary should be concise, accurate and truthful. Do not make up facts or answers.\n",
"\u001b[32mfinetune/0\u001b[0m [0]:<</SYS>>\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:#Person1#: How may I help you, miss?\n",
"\u001b[32mfinetune/0\u001b[0m [0]:#Person2#: I want to change my hairstyle. What would you suggest?\n",
"\u001b[32mfinetune/0\u001b[0m [0]:#Person1#: Would you like to have a perm?\n",
"\u001b[32mfinetune/0\u001b[0m [0]:#Person2#: Do you think it will suit me?\n",
"\u001b[32mfinetune/0\u001b[0m [0]:#Person1#: Oh, absolutely.\n",
"\u001b[32mfinetune/0\u001b[0m [0]:#Person2#: Ok, I'll have it for a change.\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:[/INST]\n",
"\u001b[32mfinetune/0\u001b[0m [0]:#Person2# wants to change her hairstyle. #Person1# suggests a perm and #Person2# agrees.\n",
"\u001b[32mfinetune/0\u001b[0m [0]:sample prompt 1 for dataset dialogsum:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:[INST] <<SYS>>\n",
"\u001b[32mfinetune/0\u001b[0m [0]:You are an assistant and you are tasked with writing text summaries. For each input text, provide a summary. The summary should be concise, accurate and truthful. Do not make up facts or answers.\n",
"\u001b[32mfinetune/0\u001b[0m [0]:<</SYS>>\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:#Person1#: Hi, Bob. I heard about your accident but I didn't think it would be this bad.\n",
"\u001b[32mfinetune/0\u001b[0m [0]:#Person2#: Well, thanks for making me feel better.\n",
"\u001b[32mfinetune/0\u001b[0m [0]:#Person1#: I can hardly recognize you. Tell me what happened.\n",
"\u001b[32mfinetune/0\u001b[0m [0]:#Person2#: I just got back from Africa where I had a terrible accident in a motorcycle race. I broke both my legs when my motor failed and was hit by another motorcycle. I was laid up in a hospital over there for three weeks.\n",
"\u001b[32mfinetune/0\u001b[0m [0]:#Person1#: Three weeks, that's a long time. What did you do while you were in the hospital?\n",
"\u001b[32mfinetune/0\u001b[0m [0]:#Person2#: Well, if you can believe it, I read all about motorcycle racing. I love racing even if it hurts.\n",
"\u001b[32mfinetune/0\u001b[0m [0]:#Person1#: But I'm afraid you don't make it look very funny. You're lucky to be alive.\n",
"\u001b[32mfinetune/0\u001b[0m [0]:#Person2#: That's for sure. I am lucky to be alive.\n",
"\u001b[32mfinetune/0\u001b[0m [0]:#Person1#: How soon can you get well? Did the doctor tell you about it?\n",
"\u001b[32mfinetune/0\u001b[0m [0]:#Person2#: They say I still need to stay in bed for two weeks or so.\n",
"\u001b[32mfinetune/0\u001b[0m [0]:#Person1#: I think you'll get a bit fatter by then. By the way, is there anything I can do for you?\n",
"\u001b[32mfinetune/0\u001b[0m [0]:#Person2#: No, thank you. Thank you for the beautiful flowers. It's very kind of you to come to see me.\n",
"\u001b[32mfinetune/0\u001b[0m [0]:#Person1#: I'll be going then. Bye.\n",
"\u001b[32mfinetune/0\u001b[0m [0]:#Person2#: Bye.\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:[/INST]\n",
"\u001b[32mfinetune/0\u001b[0m [0]:#Person1# visits Bob in the hospital. Bob still needs to stay in bed. He had a terrible accident in a motorcycle race but still loves racing even if it hurts. #Person1# thinks he's lucky to be alive.\n",
"\u001b[32mfinetune/0\u001b[0m [0]:sample prompt 2 for dataset dialogsum:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:[INST] <<SYS>>\n",
"\u001b[32mfinetune/0\u001b[0m [0]:You are an assistant and you are tasked with writing text summaries. For each input text, provide a summary. The summary should be concise, accurate and truthful. Do not make up facts or answers.\n",
"\u001b[32mfinetune/0\u001b[0m [0]:<</SYS>>\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:#Person1#: What's the problem, Nada? You look down in the dumps.\n",
"\u001b[32mfinetune/0\u001b[0m [0]:#Person2#: I don't know. My life is a big mess. Everything is so complicated.\n",
"\u001b[32mfinetune/0\u001b[0m [0]:#Person1#: come on, nothing can be that bad.\n",
"\u001b[32mfinetune/0\u001b[0m [0]:#Person2#: but promise me, you'll keep it a secret.\n",
"\u001b[32mfinetune/0\u001b[0m [0]:#Person1#: ok, I promise. So what's troubling you so much?\n",
"\u001b[32mfinetune/0\u001b[0m [0]:#Person2#: I've fallen in love with my boss.\n",
"\u001b[32mfinetune/0\u001b[0m [0]:#Person1#: really? Is he married?\n",
"\u001b[32mfinetune/0\u001b[0m [0]:#Person2#: no, of course not. He is still single.\n",
"\u001b[32mfinetune/0\u001b[0m [0]:#Person1#: then what's your problem?\n",
"\u001b[32mfinetune/0\u001b[0m [0]:#Person2#: I try to keep it to myself. But there is a lot of gossip about us.\n",
"\u001b[32mfinetune/0\u001b[0m [0]:#Person1#: oh, I see. Office romance tends to be the subject of gossip.\n",
"\u001b[32mfinetune/0\u001b[0m [0]:#Person2#: worse still, he is trying to avoid me these days.\n",
"\u001b[32mfinetune/0\u001b[0m [0]:#Person1#: office romance is very tricky.\n",
"\u001b[32mfinetune/0\u001b[0m [0]:#Person2#: it gives me a lot of pressure and I feel depressed.\n",
"\u001b[32mfinetune/0\u001b[0m [0]:#Person1#: cheer up, Nada. You'll be fine.\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:[/INST]\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Nada tells #Person1# she's upset because she has fallen in love with her boss which causes gossips and she finds her boss is trying to avoid her. #Person1# comforts Nada.\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Map (num_proc=16): 0%| | 0/10000 [00:00<?, ? examples/s]\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Map (num_proc=16): 0%| | 29/10000 [00:00<01:59, 83.51 examples/s]\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Map (num_proc=16): 2%|▏ | 155/10000 [00:00<00:26, 371.39 examples/s]\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Map (num_proc=16): 4%|▍ | 443/10000 [00:00<00:10, 947.96 examples/s]\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Map (num_proc=16): 7%|▋ | 670/10000 [00:00<00:07, 1167.32 examples/s]\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Map (num_proc=16): 12%|█▏ | 1231/10000 [00:00<00:03, 2266.35 examples/s]\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Map (num_proc=16): 15%|█▌ | 1549/10000 [00:00<00:03, 2493.52 examples/s]\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Map (num_proc=16): 20%|█▉ | 1958/10000 [00:01<00:02, 2880.95 examples/s]\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Map (num_proc=16): 23%|██▎ | 2284/10000 [00:01<00:02, 2974.18 examples/s]\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Map (num_proc=16): 27%|██▋ | 2701/10000 [00:01<00:02, 3273.35 examples/s]\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Map (num_proc=16): 31%|███ | 3112/10000 [00:01<00:01, 3502.23 examples/s]\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Map (num_proc=16): 36%|███▌ | 3567/10000 [00:01<00:01, 3793.48 examples/s]\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Map (num_proc=16): 41%|████ | 4052/10000 [00:01<00:01, 4080.30 examples/s]\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Map (num_proc=16): 45%|████▌ | 4546/10000 [00:01<00:01, 4310.36 examples/s]\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Map (num_proc=16): 50%|█████ | 5040/10000 [00:01<00:01, 4458.21 examples/s]\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Map (num_proc=16): 55%|█████▌ | 5525/10000 [00:01<00:00, 4524.05 examples/s]\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Map (num_proc=16): 60%|█████▉ | 5995/10000 [00:02<00:00, 4535.21 examples/s]\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Map (num_proc=16): 65%|██████▍ | 6477/10000 [00:02<00:00, 4504.94 examples/s]\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Map (num_proc=16): 69%|██████▉ | 6931/10000 [00:02<00:00, 4465.85 examples/s]\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Map (num_proc=16): 74%|███████▍ | 7409/10000 [00:02<00:00, 4547.15 examples/s]\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Map (num_proc=16): 79%|███████▉ | 7877/10000 [00:02<00:00, 4467.41 examples/s]\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Map (num_proc=16): 83%|████████▎ | 8335/10000 [00:02<00:00, 4117.06 examples/s]\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Map (num_proc=16): 88%|████████▊ | 8759/10000 [00:02<00:00, 4080.44 examples/s]\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Map (num_proc=16): 92%|█████████▏| 9178/10000 [00:02<00:00, 3968.11 examples/s]\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Map (num_proc=16): 96%|█████████▌| 9588/10000 [00:02<00:00, 3570.10 examples/s]\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Map (num_proc=16): 100%|█████████▉| 9970/10000 [00:03<00:00, 2811.61 examples/s]\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Map (num_proc=16): 100%|██████████| 10000/10000 [00:03<00:00, 3018.11 examples/s]\n",
"\u001b[32mfinetune/0\u001b[0m [0]:/usr/local/lib/python3.10/dist-packages/datasets/load.py:1461: FutureWarning: The repository for samsum contains custom code which must be executed to correctly load the dataset. You can inspect the repository content at https://hf.co/datasets/samsum\n",
"\u001b[32mfinetune/0\u001b[0m [0]:You can avoid this message in future by passing the argument `trust_remote_code=True`.\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Passing `trust_remote_code=True` will be mandatory to load this dataset from the next major release of `datasets`.\n",
"\u001b[32mfinetune/0\u001b[0m [0]: warnings.warn(\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Downloading builder script: 0%| | 0.00/3.36k [00:00<?, ?B/s]\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Downloading builder script: 100%|██████████| 3.36k/3.36k [00:00<00:00, 22.9MB/s]\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Downloading metadata: 0%| | 0.00/1.58k [00:00<?, ?B/s]\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Downloading metadata: 100%|██████████| 1.58k/1.58k [00:00<00:00, 9.26MB/s]\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Downloading readme: 0%| | 0.00/7.04k [00:00<?, ?B/s]\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Downloading readme: 100%|██████████| 7.04k/7.04k [00:00<00:00, 35.4MB/s]\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Downloading data: 0%| | 0.00/2.94M [00:00<?, ?B/s]\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Downloading data: 100%|██████████| 2.94M/2.94M [00:00<00:00, 76.7MB/s]\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Generating train split: 0%| | 0/14732 [00:00<?, ? examples/s]\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Generating train split: 0%| | 1/14732 [00:00<1:23:56, 2.92 examples/s]\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Generating train split: 24%|██▍ | 3600/14732 [00:00<00:01, 10607.18 examples/s]\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Generating train split: 49%|████▉ | 7221/14732 [00:00<00:00, 18190.55 examples/s]\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Generating train split: 74%|███████▍ | 10912/14732 [00:00<00:00, 23752.85 examples/s]\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Generating train split: 99%|█████████▉| 14598/14732 [00:00<00:00, 27659.42 examples/s]\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Generating train split: 100%|██████████| 14732/14732 [00:00<00:00, 19597.62 examples/s]\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Generating test split: 0%| | 0/819 [00:00<?, ? examples/s]\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Generating test split: 0%| | 1/819 [00:00<03:09, 4.32 examples/s]\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Generating test split: 100%|██████████| 819/819 [00:00<00:00, 3218.38 examples/s]\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Generating validation split: 0%| | 0/818 [00:00<?, ? examples/s]\u001b[32mfinetune/0\u001b[0m [0]:loaded dataset samsum of size 14732\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Generating validation split: 0%| | 1/818 [00:00<03:09, 4.32 examples/s]\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Generating validation split: 100%|██████████| 818/818 [00:00<00:00, 3202.87 examples/s]\n",
"\u001b[32mfinetune/0\u001b[0m [0]:using max length 512\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Filter (num_proc=16): 0%| | 0/14732 [00:00<?, ? examples/s]\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Filter (num_proc=16): 6%|▋ | 921/14732 [00:02<00:33, 407.85 examples/s]\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Filter (num_proc=16): 19%|█▉ | 2763/14732 [00:02<00:08, 1439.84 examples/s]\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Filter (num_proc=16): 25%|██▌ | 3684/14732 [00:02<00:05, 1893.87 examples/s]\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Filter (num_proc=16): 38%|███▊ | 5526/14732 [00:02<00:03, 2762.12 examples/s]\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Filter (num_proc=16): 50%|█████ | 7368/14732 [00:03<00:01, 4070.62 examples/s]\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Filter (num_proc=16): 63%|██████▎ | 9210/14732 [00:03<00:00, 5711.07 examples/s]\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Filter (num_proc=16): 81%|████████▏ | 11971/14732 [00:03<00:00, 8632.63 examples/s]\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Filter (num_proc=16): 94%|█████████▍| 13812/14732 [00:03<00:00, 8101.29 examples/s]\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Filter (num_proc=16): 100%|██████████| 14732/14732 [00:03<00:00, 3937.21 examples/s]\n",
"\u001b[32mfinetune/0\u001b[0m [0]:truncated dataset samsum to size 10000\n",
"\u001b[32mfinetune/0\u001b[0m [0]:sample prompt 0 for dataset samsum:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:[INST] <<SYS>>\n",
"\u001b[32mfinetune/0\u001b[0m [0]:You are an assistant and you are tasked with writing text summaries. For each input text, provide a summary. The summary should be concise, accurate and truthful. Do not make up facts or answers.\n",
"\u001b[32mfinetune/0\u001b[0m [0]:<</SYS>>\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Caleb: hi, I am coming this evening!\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Nancy: I know! 🎉\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Caleb: How is the weather?\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Nancy: Really ugly, raining all the time. But we'll have each other!\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Caleb: True!\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:[/INST]\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Caleb is coming to see Nancy this evening. It's raining all the time.\n",
"\u001b[32mfinetune/0\u001b[0m [0]:sample prompt 1 for dataset samsum:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:[INST] <<SYS>>\n",
"\u001b[32mfinetune/0\u001b[0m [0]:You are an assistant and you are tasked with writing text summaries. For each input text, provide a summary. The summary should be concise, accurate and truthful. Do not make up facts or answers.\n",
"\u001b[32mfinetune/0\u001b[0m [0]:<</SYS>>\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Eleonor: ok gals where do we meet up next Sunday?\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Patricia: I'm out, my mother in law has bday :C\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Tasha: oh fuck so you'll spend Sunday with Cercei then\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Betty: I want to meet up next to Palladium cinema\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Eleonor: ok so we'll meet up (minus Pat) at Jeff's bar\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Tasha: are u serious? bar? do you want to seduce a bald 40 yr old divorcee?\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Patricia: lol\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Patricia: looks I'll be missing out a lot\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Betty: isn't Gloria a bald 40 yr old divorcee too? \n",
"\u001b[32mfinetune/0\u001b[0m [0]:Eleonor: hahahaha\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Patricia: LOL\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Tasha: <file_gif>\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Betty: let's meet up in that French place, it's not cheap but c'mon it's SO worth it\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Tasha: agreed\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Eleonor: ok\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Patricia: if she dies before Sunday I'll come too\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Eleonor: lol\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:[/INST]\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Eleonor, Tasha and Betty agree to meet up in the French place next Sunday. Patricia can't come as the meeting collides with her mother-in-law's birthday.\n",
"\u001b[32mfinetune/0\u001b[0m [0]:sample prompt 2 for dataset samsum:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:[INST] <<SYS>>\n",
"\u001b[32mfinetune/0\u001b[0m [0]:You are an assistant and you are tasked with writing text summaries. For each input text, provide a summary. The summary should be concise, accurate and truthful. Do not make up facts or answers.\n",
"\u001b[32mfinetune/0\u001b[0m [0]:<</SYS>>\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Karen: Is Auntie's birthday this week or next?\n",
"\u001b[32mfinetune/0\u001b[0m [0]:May: next\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Karen: Phew!\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Karen: My parents were getting nervous that they didn't buy her anything and may not make it this week\n",
"\u001b[32mfinetune/0\u001b[0m [0]:May: Hm... I haven't heard that she'll throw a party or anything\n",
"\u001b[32mfinetune/0\u001b[0m [0]:May: I'll ask her though\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Karen: If you may that'd be amazing. You know my parents, they need to know everything in advance\n",
"\u001b[32mfinetune/0\u001b[0m [0]:May: Ok, no problem. Are you going away this week?\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Karen: They're probably going to the seaside for the weekend. I'm staying\n",
"\u001b[32mfinetune/0\u001b[0m [0]:May: Oh, so maybe I could join you?\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Karen: Yeah, why not, bring wine, we could order some pizza\n",
"\u001b[32mfinetune/0\u001b[0m [0]:May: Cool. I was supposed to go out with my friends but they went down with a cold\n",
"\u001b[32mfinetune/0\u001b[0m [0]:May: Should I bring anything else?\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Karen: No, I think we're pretty much set with netflix\n",
"\u001b[32mfinetune/0\u001b[0m [0]:May: when are your parents coming back?\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Karen: Sunday, I think\n",
"\u001b[32mfinetune/0\u001b[0m [0]:May: Ok, can I come over on Saturday?\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Karen: Sure, you can come even on Friday. They're leaving straight after work\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:[/INST]\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Auntie's birthday is next week. May will ask if she's throwing a party. Karen's parents are going to the seaside for the weekend. May will join her on Friday. They'll have some wine, pizza and Netflix.\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Map (num_proc=16): 0%| | 0/10000 [00:00<?, ? examples/s]\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Map (num_proc=16): 0%| | 26/10000 [00:00<02:21, 70.71 examples/s]\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Map (num_proc=16): 1%| | 112/10000 [00:00<00:35, 277.18 examples/s]\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Map (num_proc=16): 4%|▍ | 398/10000 [00:00<00:10, 890.07 examples/s]\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Map (num_proc=16): 8%|▊ | 765/10000 [00:00<00:05, 1612.41 examples/s]\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Map (num_proc=16): 10%|█ | 1012/10000 [00:00<00:06, 1486.19 examples/s]\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Map (num_proc=16): 16%|█▌ | 1584/10000 [00:01<00:03, 2313.05 examples/s]\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Map (num_proc=16): 23%|██▎ | 2301/10000 [00:01<00:02, 3460.20 examples/s]\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Map (num_proc=16): 27%|██▋ | 2736/10000 [00:01<00:01, 3665.23 examples/s]\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Map (num_proc=16): 32%|███▏ | 3169/10000 [00:01<00:01, 3841.00 examples/s]\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Map (num_proc=16): 36%|███▋ | 3640/10000 [00:01<00:01, 4046.08 examples/s]\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Map (num_proc=16): 42%|████▏ | 4166/10000 [00:01<00:01, 4368.87 examples/s]\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Map (num_proc=16): 47%|████▋ | 4744/10000 [00:01<00:01, 4727.02 examples/s]\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Map (num_proc=16): 53%|█████▎ | 5272/10000 [00:01<00:00, 4838.61 examples/s]\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Map (num_proc=16): 58%|█████▊ | 5822/10000 [00:01<00:00, 4907.51 examples/s]\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Map (num_proc=16): 64%|██████▎ | 6365/10000 [00:01<00:00, 4996.11 examples/s]\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Map (num_proc=16): 69%|██████▉ | 6892/10000 [00:02<00:00, 4825.50 examples/s]\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Map (num_proc=16): 74%|███████▍ | 7389/10000 [00:02<00:00, 4593.68 examples/s]\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Map (num_proc=16): 79%|███████▉ | 7882/10000 [00:02<00:00, 4675.88 examples/s]\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Map (num_proc=16): 84%|████████▎ | 8371/10000 [00:02<00:00, 4585.35 examples/s]\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Map (num_proc=16): 88%|████████▊ | 8836/10000 [00:02<00:00, 4392.29 examples/s][0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Map (num_proc=16): 93%|█████████▎| 9293/10000 [00:02<00:00, 4418.73 examples/s]\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Map (num_proc=16): 98%|█████████▊| 9764/10000 [00:02<00:00, 3928.14 examples/s]\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Map (num_proc=16): 100%|██████████| 10000/10000 [00:03<00:00, 3256.06 examples/s]\n",
"\u001b[32mfinetune/0\u001b[0m [0]:using 20000 examples for training\n",
"\u001b[32mfinetune/0\u001b[0m [0]:using quantization config: {'quant_method': <QuantizationMethod.BITS_AND_BYTES: 'bitsandbytes'>, '_load_in_8bit': False, '_load_in_4bit': True, 'llm_int8_threshold': 6.0, 'llm_int8_skip_modules': None, 'llm_int8_enable_fp32_cpu_offload': False, 'llm_int8_has_fp16_weight': False, 'bnb_4bit_quant_type': 'nf4', 'bnb_4bit_use_double_quant': True, 'bnb_4bit_compute_dtype': 'float16', 'load_in_4bit': True, 'load_in_8bit': False}\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:config.json: 0%| | 0.00/635 [00:00<?, ?B/s]\n",
"\u001b[32mfinetune/0\u001b[0m [0]:config.json: 100%|██████████| 635/635 [00:00<00:00, 3.84MB/s]\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model.safetensors.index.json: 0%| | 0.00/26.8k [00:00<?, ?B/s]\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model.safetensors.index.json: 100%|██████████| 26.8k/26.8k [00:00<00:00, 86.6MB/s]\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Downloading shards: 0%| | 0/2 [00:00<?, ?it/s]\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 0%| | 0.00/9.98G [00:00<?, ?B/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 1%| | 52.4M/9.98G [00:00<00:20, 475MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 1%| | 105M/9.98G [00:00<00:20, 475MB/s] \u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 2%|▏ | 157M/9.98G [00:00<00:20, 471MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 2%|▏ | 210M/9.98G [00:00<00:20, 472MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 3%|▎ | 262M/9.98G [00:00<00:21, 445MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 3%|▎ | 315M/9.98G [00:00<00:21, 441MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 4%|▎ | 367M/9.98G [00:00<00:22, 436MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 4%|▍ | 419M/9.98G [00:00<00:22, 433MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 5%|▍ | 472M/9.98G [00:01<00:22, 432MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 5%|▌ | 524M/9.98G [00:01<00:21, 431MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 6%|▌ | 577M/9.98G [00:01<00:21, 431MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 6%|▋ | 629M/9.98G [00:01<00:21, 431MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 7%|▋ | 682M/9.98G [00:01<00:21, 431MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 7%|▋ | 734M/9.98G [00:01<00:21, 431MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 8%|▊ | 786M/9.98G [00:01<00:21, 432MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 8%|▊ | 839M/9.98G [00:01<00:21, 429MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 9%|▉ | 891M/9.98G [00:02<00:21, 426MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 9%|▉ | 944M/9.98G [00:02<00:21, 424MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 10%|▉ | 996M/9.98G [00:02<00:21, 424MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 11%|█ | 1.05G/9.98G [00:02<00:20, 426MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 11%|█ | 1.10G/9.98G [00:02<00:20, 424MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 12%|█▏ | 1.15G/9.98G [00:02<00:20, 426MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 12%|█▏ | 1.21G/9.98G [00:02<00:20, 424MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 13%|█▎ | 1.26G/9.98G [00:02<00:21, 410MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 13%|█▎ | 1.30G/9.98G [00:03<00:21, 405MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 13%|█▎ | 1.34G/9.98G [00:03<00:22, 388MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 14%|█▍ | 1.38G/9.98G [00:03<00:22, 384MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 14%|█▍ | 1.43G/9.98G [00:03<00:23, 363MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 15%|█▍ | 1.47G/9.98G [00:03<00:25, 339MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 15%|█▌ | 1.51G/9.98G [00:03<00:24, 345MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 16%|█▌ | 1.55G/9.98G [00:03<00:24, 347MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 16%|█▌ | 1.59G/9.98G [00:03<00:24, 349MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 16%|█▋ | 1.64G/9.98G [00:04<00:24, 343MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 17%|█▋ | 1.69G/9.98G [00:04<00:22, 373MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 17%|█▋ | 1.74G/9.98G [00:04<00:20, 396MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 18%|█▊ | 1.79G/9.98G [00:04<00:19, 414MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 18%|█▊ | 1.85G/9.98G [00:04<00:18, 430MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 19%|█▉ | 1.90G/9.98G [00:04<00:18, 441MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 20%|█▉ | 1.95G/9.98G [00:04<00:17, 447MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 20%|██ | 2.00G/9.98G [00:04<00:17, 444MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 21%|██ | 2.06G/9.98G [00:04<00:18, 439MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 21%|██ | 2.11G/9.98G [00:05<00:18, 436MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 22%|██▏ | 2.16G/9.98G [00:05<00:18, 434MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 22%|██▏ | 2.21G/9.98G [00:05<00:18, 426MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 23%|██▎ | 2.26G/9.98G [00:05<00:20, 367MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 23%|██▎ | 2.31G/9.98G [00:05<00:21, 351MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 24%|██▎ | 2.35G/9.98G [00:05<00:22, 344MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 24%|██▍ | 2.39G/9.98G [00:05<00:23, 329MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 24%|██▍ | 2.44G/9.98G [00:06<00:21, 357MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 25%|██▍ | 2.49G/9.98G [00:06<00:21, 348MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 25%|██▌ | 2.54G/9.98G [00:06<00:20, 365MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 26%|██▌ | 2.58G/9.98G [00:06<00:20, 365MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 26%|██▋ | 2.63G/9.98G [00:06<00:19, 376MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 27%|██▋ | 2.67G/9.98G [00:06<00:19, 370MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 27%|██▋ | 2.72G/9.98G [00:06<00:19, 381MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 28%|██▊ | 2.76G/9.98G [00:06<00:20, 356MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 28%|██▊ | 2.81G/9.98G [00:07<00:18, 386MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 29%|██▊ | 2.85G/9.98G [00:07<00:18, 385MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 29%|██▉ | 2.89G/9.98G [00:07<00:19, 366MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 29%|██▉ | 2.94G/9.98G [00:07<00:21, 333MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 30%|██▉ | 2.98G/9.98G [00:07<00:19, 350MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 30%|███ | 3.02G/9.98G [00:07<00:19, 352MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 31%|███ | 3.07G/9.98G [00:07<00:19, 361MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 31%|███ | 3.11G/9.98G [00:07<00:18, 369MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 32%|███▏ | 3.16G/9.98G [00:07<00:18, 376MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 32%|███▏ | 3.20G/9.98G [00:08<00:19, 349MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 32%|███▏ | 3.24G/9.98G [00:08<00:18, 356MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 33%|███▎ | 3.29G/9.98G [00:08<00:17, 378MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 34%|███▎ | 3.34G/9.98G [00:08<00:16, 397MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 34%|███▍ | 3.40G/9.98G [00:08<00:16, 406MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 34%|███▍ | 3.44G/9.98G [00:08<00:16, 407MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 35%|███▍ | 3.49G/9.98G [00:08<00:15, 413MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 35%|███▌ | 3.53G/9.98G [00:08<00:15, 409MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 36%|███▌ | 3.59G/9.98G [00:09<00:15, 417MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 36%|███▋ | 3.64G/9.98G [00:09<00:14, 432MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 37%|███▋ | 3.69G/9.98G [00:09<00:14, 443MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 38%|███▊ | 3.74G/9.98G [00:09<00:13, 452MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 38%|███▊ | 3.80G/9.98G [00:09<00:13, 458MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 39%|███▊ | 3.85G/9.98G [00:09<00:13, 461MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 39%|███▉ | 3.90G/9.98G [00:09<00:13, 464MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 40%|███▉ | 3.95G/9.98G [00:09<00:12, 467MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 40%|████ | 4.01G/9.98G [00:09<00:12, 467MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 41%|████ | 4.06G/9.98G [00:10<00:12, 469MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 41%|████ | 4.11G/9.98G [00:10<00:12, 470MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 42%|████▏ | 4.16G/9.98G [00:10<00:12, 470MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 42%|████▏ | 4.22G/9.98G [00:10<00:12, 469MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 43%|████▎ | 4.27G/9.98G [00:10<00:14, 396MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 43%|████▎ | 4.31G/9.98G [00:10<00:16, 336MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 44%|████▎ | 4.35G/9.98G [00:10<00:17, 318MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 44%|████▍ | 4.39G/9.98G [00:11<00:18, 296MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 44%|████▍ | 4.42G/9.98G [00:11<00:19, 290MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 45%|████▍ | 4.46G/9.98G [00:11<00:19, 282MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 45%|████▌ | 4.50G/9.98G [00:11<00:18, 289MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 46%|████▌ | 4.54G/9.98G [00:11<00:18, 288MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 46%|████▌ | 4.58G/9.98G [00:11<00:18, 295MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 46%|████▋ | 4.62G/9.98G [00:11<00:17, 305MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 47%|████▋ | 4.67G/9.98G [00:11<00:16, 327MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 47%|████▋ | 4.71G/9.98G [00:12<00:15, 332MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 48%|████▊ | 4.75G/9.98G [00:12<00:15, 346MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 48%|████▊ | 4.80G/9.98G [00:12<00:13, 373MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 49%|████▊ | 4.84G/9.98G [00:12<00:15, 342MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 49%|████▉ | 4.89G/9.98G [00:12<00:16, 309MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 49%|████▉ | 4.93G/9.98G [00:12<00:15, 321MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 50%|████▉ | 4.97G/9.98G [00:12<00:14, 336MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 50%|█████ | 5.01G/9.98G [00:12<00:13, 356MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 51%|█████ | 5.05G/9.98G [00:13<00:13, 369MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 51%|█████ | 5.11G/9.98G [00:13<00:12, 387MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 52%|█████▏ | 5.15G/9.98G [00:13<00:12, 391MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 52%|█████▏ | 5.19G/9.98G [00:13<00:12, 391MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 53%|█████▎ | 5.24G/9.98G [00:13<00:11, 400MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 53%|█████▎ | 5.28G/9.98G [00:13<00:11, 404MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 53%|█████▎ | 5.34G/9.98G [00:13<00:11, 412MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 54%|█████▍ | 5.39G/9.98G [00:13<00:11, 417MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 55%|█████▍ | 5.44G/9.98G [00:13<00:10, 421MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 55%|█████▌ | 5.49G/9.98G [00:14<00:10, 418MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 56%|█████▌ | 5.55G/9.98G [00:14<00:10, 430MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 56%|█████▌ | 5.60G/9.98G [00:14<00:09, 443MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 57%|█████▋ | 5.65G/9.98G [00:14<00:09, 451MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 57%|█████▋ | 5.70G/9.98G [00:14<00:09, 458MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 58%|█████▊ | 5.76G/9.98G [00:14<00:09, 461MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 58%|█████▊ | 5.81G/9.98G [00:14<00:09, 462MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 59%|█████▉ | 5.86G/9.98G [00:14<00:08, 463MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 59%|█████▉ | 5.91G/9.98G [00:15<00:08, 464MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 60%|█████▉ | 5.97G/9.98G [00:15<00:10, 390MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 60%|██████ | 6.01G/9.98G [00:15<00:11, 359MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 61%|██████ | 6.05G/9.98G [00:15<00:11, 328MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 61%|██████ | 6.09G/9.98G [00:15<00:11, 324MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 61%|██████▏ | 6.13G/9.98G [00:15<00:11, 324MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 62%|██████▏ | 6.18G/9.98G [00:15<00:11, 324MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 62%|██████▏ | 6.22G/9.98G [00:15<00:10, 344MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 63%|██████▎ | 6.27G/9.98G [00:16<00:09, 375MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 63%|██████▎ | 6.32G/9.98G [00:16<00:09, 400MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 64%|██████▍ | 6.38G/9.98G [00:16<00:08, 417MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 64%|██████▍ | 6.43G/9.98G [00:16<00:09, 385MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 65%|██████▍ | 6.47G/9.98G [00:16<00:09, 361MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 65%|██████▌ | 6.51G/9.98G [00:16<00:10, 339MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 66%|██████▌ | 6.55G/9.98G [00:16<00:10, 331MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 66%|██████▌ | 6.60G/9.98G [00:17<00:10, 317MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 67%|██████▋ | 6.64G/9.98G [00:17<00:10, 313MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 67%|██████▋ | 6.68G/9.98G [00:17<00:10, 305MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 67%|██████▋ | 6.71G/9.98G [00:17<00:10, 300MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 68%|██████▊ | 6.74G/9.98G [00:17<00:10, 302MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 68%|██████▊ | 6.77G/9.98G [00:17<00:10, 297MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 68%|██████▊ | 6.81G/9.98G [00:17<00:10, 293MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 69%|██████▊ | 6.84G/9.98G [00:17<00:10, 291MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 69%|██████▉ | 6.87G/9.98G [00:17<00:10, 288MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 69%|██████▉ | 6.90G/9.98G [00:18<00:10, 281MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 70%|██████▉ | 6.94G/9.98G [00:18<00:09, 309MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 70%|██████▉ | 6.98G/9.98G [00:18<00:09, 323MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 70%|███████ | 7.03G/9.98G [00:18<00:08, 333MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 71%|███████ | 7.07G/9.98G [00:18<00:08, 352MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 71%|███████▏ | 7.11G/9.98G [00:18<00:08, 354MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 72%|███████▏ | 7.15G/9.98G [00:18<00:08, 339MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 72%|███████▏ | 7.19G/9.98G [00:19<00:10, 268MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 73%|███████▎ | 7.25G/9.98G [00:19<00:08, 315MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 73%|███████▎ | 7.29G/9.98G [00:19<00:09, 297MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 74%|███████▎ | 7.34G/9.98G [00:19<00:08, 327MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 74%|███████▍ | 7.38G/9.98G [00:19<00:07, 333MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 74%|███████▍ | 7.42G/9.98G [00:19<00:07, 329MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 75%|███████▍ | 7.47G/9.98G [00:19<00:07, 333MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 75%|███████▌ | 7.51G/9.98G [00:19<00:07, 317MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 76%|███████▌ | 7.55G/9.98G [00:20<00:07, 315MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 76%|███████▌ | 7.59G/9.98G [00:20<00:07, 304MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 76%|███████▋ | 7.62G/9.98G [00:20<00:07, 300MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 77%|███████▋ | 7.65G/9.98G [00:20<00:07, 302MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 77%|███████▋ | 7.69G/9.98G [00:20<00:07, 300MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 77%|███████▋ | 7.73G/9.98G [00:20<00:07, 299MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 78%|███████▊ | 7.76G/9.98G [00:20<00:07, 302MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 78%|███████▊ | 7.80G/9.98G [00:20<00:07, 309MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 79%|███████▊ | 7.83G/9.98G [00:21<00:06, 310MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 79%|███████▉ | 7.86G/9.98G [00:21<00:06, 308MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 79%|███████▉ | 7.90G/9.98G [00:21<00:06, 303MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 79%|███████▉ | 7.93G/9.98G [00:21<00:06, 298MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 80%|███████▉ | 7.96G/9.98G [00:21<00:06, 300MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 80%|████████ | 7.99G/9.98G [00:21<00:06, 297MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 80%|████████ | 8.02G/9.98G [00:21<00:06, 294MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 81%|████████ | 8.05G/9.98G [00:21<00:06, 286MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 81%|████████ | 8.08G/9.98G [00:21<00:06, 278MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 81%|████████▏ | 8.12G/9.98G [00:22<00:06, 270MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 82%|████████▏ | 8.15G/9.98G [00:22<00:06, 266MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 82%|████████▏ | 8.18G/9.98G [00:22<00:07, 252MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 82%|████████▏ | 8.21G/9.98G [00:22<00:07, 241MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 83%|████████▎ | 8.24G/9.98G [00:22<00:07, 228MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 83%|████████▎ | 8.27G/9.98G [00:22<00:07, 221MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 83%|████████▎ | 8.30G/9.98G [00:22<00:07, 216MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 84%|████████▎ | 8.34G/9.98G [00:23<00:07, 213MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 84%|████████▍ | 8.37G/9.98G [00:23<00:07, 218MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 84%|████████▍ | 8.40G/9.98G [00:23<00:07, 215MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 85%|████████▍ | 8.44G/9.98G [00:23<00:05, 258MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 85%|████████▍ | 8.47G/9.98G [00:23<00:05, 257MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 85%|████████▌ | 8.50G/9.98G [00:23<00:05, 263MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 86%|████████▌ | 8.54G/9.98G [00:23<00:05, 270MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 86%|████████▌ | 8.57G/9.98G [00:23<00:05, 277MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 86%|████████▋ | 8.61G/9.98G [00:24<00:04, 292MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 87%|████████▋ | 8.65G/9.98G [00:24<00:04, 306MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 87%|████████▋ | 8.69G/9.98G [00:24<00:03, 323MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 88%|████████▊ | 8.73G/9.98G [00:24<00:03, 316MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 88%|████████▊ | 8.78G/9.98G [00:24<00:03, 323MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 88%|████████▊ | 8.82G/9.98G [00:24<00:03, 318MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 89%|████████▉ | 8.86G/9.98G [00:24<00:03, 324MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 89%|████████▉ | 8.90G/9.98G [00:24<00:03, 309MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 90%|████████▉ | 8.93G/9.98G [00:25<00:05, 207MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 90%|████████▉ | 8.98G/9.98G [00:25<00:04, 241MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 90%|█████████ | 9.02G/9.98G [00:25<00:03, 274MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 91%|█████████ | 9.06G/9.98G [00:25<00:03, 296MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 91%|█████████ | 9.10G/9.98G [00:25<00:02, 314MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 92%|█████████▏| 9.14G/9.98G [00:25<00:02, 321MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 92%|█████████▏| 9.19G/9.98G [00:25<00:02, 340MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 92%|█████████▏| 9.23G/9.98G [00:26<00:02, 352MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 93%|█████████▎| 9.27G/9.98G [00:26<00:02, 331MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 93%|█████████▎| 9.31G/9.98G [00:26<00:02, 314MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 94%|█████████▍| 9.35G/9.98G [00:26<00:01, 312MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 94%|█████████▍| 9.40G/9.98G [00:26<00:01, 310MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 95%|█████████▍| 9.44G/9.98G [00:26<00:01, 316MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 95%|█████████▌| 9.48G/9.98G [00:26<00:01, 327MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 95%|█████████▌| 9.52G/9.98G [00:26<00:01, 339MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 96%|█████████▌| 9.57G/9.98G [00:27<00:01, 366MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 96%|█████████▋| 9.63G/9.98G [00:27<00:00, 382MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 97%|█████████▋| 9.68G/9.98G [00:27<00:00, 395MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 97%|█████████▋| 9.72G/9.98G [00:29<00:03, 78.3MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 98%|█████████▊| 9.75G/9.98G [00:29<00:02, 94.1MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 98%|█████████▊| 9.80G/9.98G [00:29<00:01, 130MB/s] \u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 99%|█████████▉| 9.86G/9.98G [00:29<00:00, 167MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 99%|█████████▉| 9.90G/9.98G [00:29<00:00, 186MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 100%|█████████▉| 9.94G/9.98G [00:29<00:00, 214MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 100%|██████████| 9.98G/9.98G [00:29<00:00, 225MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00001-of-00002.safetensors: 100%|██████████| 9.98G/9.98G [00:29<00:00, 335MB/s]\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Downloading shards: 50%|█████ | 1/2 [00:30<00:30, 30.15s/it]\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00002-of-00002.safetensors: 0%| | 0.00/3.50G [00:00<?, ?B/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00002-of-00002.safetensors: 1%|▏ | 52.4M/3.50G [00:00<00:07, 474MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00002-of-00002.safetensors: 3%|▎ | 105M/3.50G [00:00<00:07, 464MB/s] \u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00002-of-00002.safetensors: 4%|▍ | 157M/3.50G [00:00<00:07, 451MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00002-of-00002.safetensors: 6%|▌ | 210M/3.50G [00:00<00:08, 385MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00002-of-00002.safetensors: 7%|▋ | 252M/3.50G [00:00<00:09, 346MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00002-of-00002.safetensors: 8%|▊ | 294M/3.50G [00:00<00:09, 324MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00002-of-00002.safetensors: 10%|▉ | 336M/3.50G [00:00<00:10, 316MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00002-of-00002.safetensors: 11%|█ | 377M/3.50G [00:01<00:10, 304MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00002-of-00002.safetensors: 12%|█▏ | 409M/3.50G [00:01<00:10, 300MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00002-of-00002.safetensors: 13%|█▎ | 440M/3.50G [00:01<00:10, 296MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00002-of-00002.safetensors: 13%|█▎ | 472M/3.50G [00:01<00:10, 291MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00002-of-00002.safetensors: 14%|█▍ | 503M/3.50G [00:01<00:10, 287MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00002-of-00002.safetensors: 15%|█▌ | 535M/3.50G [00:01<00:10, 277MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00002-of-00002.safetensors: 16%|█▌ | 566M/3.50G [00:01<00:11, 265MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00002-of-00002.safetensors: 17%|█▋ | 598M/3.50G [00:01<00:11, 250MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00002-of-00002.safetensors: 18%|█▊ | 629M/3.50G [00:02<00:12, 234MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00002-of-00002.safetensors: 19%|█▉ | 661M/3.50G [00:02<00:12, 219MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00002-of-00002.safetensors: 20%|█▉ | 692M/3.50G [00:02<00:13, 204MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00002-of-00002.safetensors: 20%|██ | 713M/3.50G [00:02<00:14, 192MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00002-of-00002.safetensors: 21%|██ | 734M/3.50G [00:02<00:14, 187MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00002-of-00002.safetensors: 22%|██▏ | 755M/3.50G [00:02<00:15, 181MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00002-of-00002.safetensors: 22%|██▏ | 776M/3.50G [00:02<00:15, 180MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00002-of-00002.safetensors: 23%|██▎ | 807M/3.50G [00:03<00:12, 212MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00002-of-00002.safetensors: 24%|██▍ | 849M/3.50G [00:03<00:10, 256MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00002-of-00002.safetensors: 26%|██▌ | 902M/3.50G [00:03<00:08, 310MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00002-of-00002.safetensors: 27%|██▋ | 944M/3.50G [00:03<00:08, 318MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00002-of-00002.safetensors: 28%|██▊ | 986M/3.50G [00:03<00:08, 312MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00002-of-00002.safetensors: 30%|██▉ | 1.04G/3.50G [00:03<00:07, 344MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00002-of-00002.safetensors: 31%|███ | 1.08G/3.50G [00:03<00:07, 346MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00002-of-00002.safetensors: 32%|███▏ | 1.13G/3.50G [00:03<00:06, 370MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00002-of-00002.safetensors: 34%|███▎ | 1.17G/3.50G [00:04<00:06, 377MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00002-of-00002.safetensors: 35%|███▌ | 1.23G/3.50G [00:04<00:05, 403MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00002-of-00002.safetensors: 37%|███▋ | 1.28G/3.50G [00:04<00:05, 411MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00002-of-00002.safetensors: 38%|███▊ | 1.33G/3.50G [00:04<00:05, 422MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00002-of-00002.safetensors: 40%|███▉ | 1.38G/3.50G [00:04<00:04, 429MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00002-of-00002.safetensors: 41%|████ | 1.44G/3.50G [00:04<00:04, 433MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00002-of-00002.safetensors: 43%|████▎ | 1.49G/3.50G [00:04<00:04, 408MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00002-of-00002.safetensors: 44%|████▎ | 1.53G/3.50G [00:04<00:05, 367MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00002-of-00002.safetensors: 45%|████▍ | 1.57G/3.50G [00:05<00:05, 324MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00002-of-00002.safetensors: 46%|████▌ | 1.61G/3.50G [00:05<00:05, 327MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00002-of-00002.safetensors: 47%|████▋ | 1.66G/3.50G [00:05<00:05, 346MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00002-of-00002.safetensors: 49%|████▊ | 1.70G/3.50G [00:05<00:07, 241MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00002-of-00002.safetensors: 50%|█████ | 1.75G/3.50G [00:05<00:06, 285MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00002-of-00002.safetensors: 52%|█████▏ | 1.80G/3.50G [00:05<00:05, 317MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00002-of-00002.safetensors: 53%|█████▎ | 1.86G/3.50G [00:05<00:04, 348MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00002-of-00002.safetensors: 55%|█████▍ | 1.91G/3.50G [00:06<00:04, 369MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00002-of-00002.safetensors: 56%|█████▌ | 1.96G/3.50G [00:06<00:03, 388MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00002-of-00002.safetensors: 58%|█████▊ | 2.01G/3.50G [00:06<00:03, 403MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00002-of-00002.safetensors: 59%|█████▉ | 2.07G/3.50G [00:06<00:03, 411MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00002-of-00002.safetensors: 61%|██████ | 2.12G/3.50G [00:06<00:03, 420MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00002-of-00002.safetensors: 62%|██████▏ | 2.17G/3.50G [00:06<00:03, 423MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00002-of-00002.safetensors: 64%|██████▎ | 2.22G/3.50G [00:06<00:02, 429MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00002-of-00002.safetensors: 65%|██████▌ | 2.28G/3.50G [00:06<00:02, 432MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00002-of-00002.safetensors: 67%|██████▋ | 2.33G/3.50G [00:07<00:02, 431MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00002-of-00002.safetensors: 68%|██████▊ | 2.38G/3.50G [00:07<00:02, 429MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00002-of-00002.safetensors: 69%|██████▉ | 2.43G/3.50G [00:07<00:02, 428MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00002-of-00002.safetensors: 71%|███████ | 2.49G/3.50G [00:07<00:02, 429MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00002-of-00002.safetensors: 72%|███████▏ | 2.54G/3.50G [00:07<00:02, 425MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00002-of-00002.safetensors: 74%|███████▍ | 2.59G/3.50G [00:07<00:02, 426MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00002-of-00002.safetensors: 75%|███████▌ | 2.64G/3.50G [00:07<00:02, 423MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00002-of-00002.safetensors: 77%|███████▋ | 2.69G/3.50G [00:07<00:01, 419MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00002-of-00002.safetensors: 78%|███████▊ | 2.75G/3.50G [00:08<00:01, 405MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00002-of-00002.safetensors: 80%|███████▉ | 2.79G/3.50G [00:08<00:01, 407MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00002-of-00002.safetensors: 81%|████████ | 2.83G/3.50G [00:08<00:01, 409MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00002-of-00002.safetensors: 82%|████████▏ | 2.87G/3.50G [00:08<00:01, 409MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00002-of-00002.safetensors: 83%|████████▎ | 2.92G/3.50G [00:08<00:01, 410MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00002-of-00002.safetensors: 85%|████████▍ | 2.97G/3.50G [00:08<00:01, 417MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00002-of-00002.safetensors: 86%|████████▌ | 3.01G/3.50G [00:08<00:01, 417MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00002-of-00002.safetensors: 87%|████████▋ | 3.06G/3.50G [00:08<00:01, 423MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00002-of-00002.safetensors: 89%|████████▉ | 3.11G/3.50G [00:08<00:00, 428MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00002-of-00002.safetensors: 90%|█████████ | 3.17G/3.50G [00:09<00:00, 434MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00002-of-00002.safetensors: 92%|█████████▏| 3.22G/3.50G [00:09<00:00, 445MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00002-of-00002.safetensors: 93%|█████████▎| 3.27G/3.50G [00:09<00:00, 453MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00002-of-00002.safetensors: 95%|█████████▍| 3.32G/3.50G [00:09<00:00, 459MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00002-of-00002.safetensors: 96%|█████████▋| 3.38G/3.50G [00:09<00:00, 447MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00002-of-00002.safetensors: 98%|█████████▊| 3.43G/3.50G [00:09<00:00, 453MB/s]\u001b[A\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00002-of-00002.safetensors: 99%|█████████▉| 3.48G/3.50G [00:09<00:00, 457MB/s]\u001b[A\n",
"\u001b[32mfinetune/0\u001b[0m [0]:model-00002-of-00002.safetensors: 100%|██████████| 3.50G/3.50G [00:09<00:00, 358MB/s]\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Downloading shards: 100%|██████████| 2/2 [00:40<00:00, 18.24s/it]\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Downloading shards: 100%|██████████| 2/2 [00:40<00:00, 20.03s/it]\n",
"\u001b[32mfinetune/0\u001b[0m [0]:The model was loaded with use_flash_attention_2=True, which is deprecated and may be removed in a future release. Please use `attn_implementation=\"flash_attention_2\"` instead.\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Loading checkpoint shards: 0%| | 0/2 [00:00<?, ?it/s]\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Loading checkpoint shards: 50%|█████ | 1/2 [00:03<00:03, 3.55s/it]\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Loading checkpoint shards: 100%|██████████| 2/2 [00:04<00:00, 2.20s/it]\n",
"\u001b[32mfinetune/0\u001b[0m [0]:Loading checkpoint shards: 100%|██████████| 2/2 [00:04<00:00, 2.41s/it]\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]:generation_config.json: 0%| | 0.00/167 [00:00<?, ?B/s]\n",
"\u001b[32mfinetune/0\u001b[0m [0]:generation_config.json: 100%|██████████| 167/167 [00:00<00:00, 1.10MB/s]\n",
"\u001b[32mfinetune/0\u001b[0m [0]:/usr/local/lib/python3.10/dist-packages/transformers/generation/configuration_utils.py:410: UserWarning: `do_sample` is set to `False`. However, `temperature` is set to `0.9` -- this flag is only used in sample-based generation modes. You should set `do_sample=True` or unset `temperature`. This was detected when initializing the generation config instance, which means the corresponding file may hold incorrect parameterization and should be fixed.\n",
"\u001b[32mfinetune/0\u001b[0m [0]: warnings.warn(\n",
"\u001b[32mfinetune/0\u001b[0m [0]:/usr/local/lib/python3.10/dist-packages/transformers/generation/configuration_utils.py:415: UserWarning: `do_sample` is set to `False`. However, `top_p` is set to `0.6` -- this flag is only used in sample-based generation modes. You should set `do_sample=True` or unset `top_p`. This was detected when initializing the generation config instance, which means the corresponding file may hold incorrect parameterization and should be fixed.\n",
"\u001b[32mfinetune/0\u001b[0m [0]: warnings.warn(\n",
"\u001b[32mfinetune/0\u001b[0m [0]:/usr/local/lib/python3.10/dist-packages/transformers/generation/configuration_utils.py:410: UserWarning: `do_sample` is set to `False`. However, `temperature` is set to `0.9` -- this flag is only used in sample-based generation modes. You should set `do_sample=True` or unset `temperature`.\n",
"\u001b[32mfinetune/0\u001b[0m [0]: warnings.warn(\n",
"\u001b[32mfinetune/0\u001b[0m [0]:/usr/local/lib/python3.10/dist-packages/transformers/generation/configuration_utils.py:415: UserWarning: `do_sample` is set to `False`. However, `top_p` is set to `0.6` -- this flag is only used in sample-based generation modes. You should set `do_sample=True` or unset `top_p`.\n",
"\u001b[32mfinetune/0\u001b[0m [0]: warnings.warn(\n",
"\u001b[32mfinetune/0\u001b[0m [0]:trainable params: 50,331,648 || all params: 6,788,747,264 || trainable%: 0.7413981702766185\n",
"\u001b[32mfinetune/0\u001b[0m [0]:per_device_train_batch_size: 2 gradient_accumulation_steps: 16\n",
"\u001b[32mfinetune/0\u001b[0m [0]:2024-03-17 23:38:52.345168: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered\n",
"\u001b[32mfinetune/0\u001b[0m [0]:2024-03-17 23:38:52.345263: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered\n",
"\u001b[32mfinetune/0\u001b[0m [0]:2024-03-17 23:38:52.347080: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered\n",
"\u001b[32mfinetune/0\u001b[0m [0]:2024-03-17 23:38:53.438461: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT\n",
"\u001b[32mfinetune/0\u001b[0m [0]:/usr/local/lib/python3.10/dist-packages/accelerate/accelerator.py:432: FutureWarning: Passing the following arguments to `Accelerator` is deprecated and will be removed in version 1.0 of Accelerate: dict_keys(['dispatch_batches', 'split_batches', 'even_batches', 'use_seedable_sampler']). Please pass an `accelerate.DataLoaderConfiguration` instead: \n",
"\u001b[32mfinetune/0\u001b[0m [0]:dataloader_config = DataLoaderConfiguration(dispatch_batches=None, split_batches=False, even_batches=True, use_seedable_sampler=True)\n",
"\u001b[32mfinetune/0\u001b[0m [0]: warnings.warn(\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]: 0%| | 0/625 [00:00<?, ?it/s]\u001b[32mfinetune/0\u001b[0m [0]:/usr/local/lib/python3.10/dist-packages/torch/utils/checkpoint.py:460: UserWarning: torch.utils.checkpoint: please pass in use_reentrant=True or use_reentrant=False explicitly. The default value of use_reentrant will be updated to be False in the future. To maintain current behavior, pass use_reentrant=True. It is recommended that you use use_reentrant=False. Refer to docs for more details on the differences between the two variants.\n",
"\u001b[32mfinetune/0\u001b[0m [0]: warnings.warn(\n",
"\u001b[32mfinetune/0\u001b[0m [0]:The input hidden states seems to be silently casted in float32, this might be related to the fact you have upcasted embedding or layer norm layers in float32. We will cast back the input in torch.float16.\n",
"\u001b[32mfinetune/0\u001b[0m [0]:{'loss': 3.2435, 'grad_norm': 0.2975690960884094, 'learning_rate': 2.0000000000000003e-06, 'epoch': 0.0}\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]: 0%| | 1/625 [00:30<5:17:36, 30.54s/it]\n",
"\u001b[32mfinetune/0\u001b[0m [0]: \n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]: 0%| | 1/625 [00:30<5:17:36, 30.54s/it]\u001b[32mfinetune/0\u001b[0m [0]:{'loss': 3.1279, 'grad_norm': 0.29220321774482727, 'learning_rate': 4.000000000000001e-06, 'epoch': 0.0}\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]: 0%| | 2/625 [01:01<5:19:10, 30.74s/it]\n",
"\u001b[32mfinetune/0\u001b[0m [0]: \n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]: 0%| | 2/625 [01:01<5:19:10, 30.74s/it]\u001b[32mfinetune/0\u001b[0m [0]:{'loss': 3.144, 'grad_norm': 0.2971901595592499, 'learning_rate': 6e-06, 'epoch': 0.0}\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]: 0%| | 3/625 [01:30<5:12:11, 30.12s/it]\n",
"\u001b[32mfinetune/0\u001b[0m [0]: \n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]: 0%| | 3/625 [01:30<5:12:11, 30.12s/it]\u001b[32mfinetune/0\u001b[0m [0]:{'loss': 3.0367, 'grad_norm': 0.2809338867664337, 'learning_rate': 8.000000000000001e-06, 'epoch': 0.01}\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]: 1%| | 4/625 [02:01<5:14:21, 30.37s/it]\n",
"\u001b[32mfinetune/0\u001b[0m [0]: \n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]: 1%| | 4/625 [02:01<5:14:21, 30.37s/it]\u001b[32mfinetune/0\u001b[0m [0]:{'loss': 3.1422, 'grad_norm': 0.3060510754585266, 'learning_rate': 1e-05, 'epoch': 0.01}\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]: 1%| | 5/625 [02:31<5:12:20, 30.23s/it]\n",
"\u001b[32mfinetune/0\u001b[0m [0]: \n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]: 1%| | 5/625 [02:31<5:12:20, 30.23s/it]\u001b[32mfinetune/0\u001b[0m [0]:{'loss': 3.3023, 'grad_norm': 0.3176755905151367, 'learning_rate': 1.2e-05, 'epoch': 0.01}\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]: 1%| | 6/625 [02:59<5:03:43, 29.44s/it]\n",
"\u001b[32mfinetune/0\u001b[0m [0]: \n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]: 1%| | 6/625 [02:59<5:03:43, 29.44s/it]\u001b[32mfinetune/0\u001b[0m [0]:{'loss': 2.955, 'grad_norm': 0.28816351294517517, 'learning_rate': 1.4e-05, 'epoch': 0.01}\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]: 1%| | 7/625 [03:30<5:08:48, 29.98s/it]\n",
"\u001b[32mfinetune/0\u001b[0m [0]: \n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]: 1%| | 7/625 [03:30<5:08:48, 29.98s/it]\u001b[32mfinetune/0\u001b[0m [0]:{'loss': 3.3336, 'grad_norm': 0.32736852765083313, 'learning_rate': 1.6000000000000003e-05, 'epoch': 0.01}\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]: 1%|▏ | 8/625 [03:58<5:01:59, 29.37s/it]\n",
"\u001b[32mfinetune/0\u001b[0m [0]: \n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]: 1%|▏ | 8/625 [03:58<5:01:59, 29.37s/it]\u001b[32mfinetune/0\u001b[0m [0]:{'loss': 3.1183, 'grad_norm': 0.3063420355319977, 'learning_rate': 1.8e-05, 'epoch': 0.01}\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]: 1%|▏ | 9/625 [04:28<5:04:05, 29.62s/it]\n",
"\u001b[32mfinetune/0\u001b[0m [0]: \n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]: 1%|▏ | 9/625 [04:28<5:04:05, 29.62s/it]\u001b[32mfinetune/0\u001b[0m [0]:{'loss': 3.1547, 'grad_norm': 0.30801552534103394, 'learning_rate': 2e-05, 'epoch': 0.02}\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]: 2%|▏ | 10/625 [04:59<5:05:57, 29.85s/it]\n",
"\u001b[32mfinetune/0\u001b[0m [0]: \n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]: 2%|▏ | 10/625 [04:59<5:05:57, 29.85s/it]\u001b[32mfinetune/0\u001b[0m [0]:{'loss': 3.1852, 'grad_norm': 0.3539949953556061, 'learning_rate': 1.996747967479675e-05, 'epoch': 0.02}\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]: 2%|▏ | 11/625 [05:28<5:04:41, 29.77s/it]\n",
"\u001b[32mfinetune/0\u001b[0m [0]: \n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]: 2%|▏ | 11/625 [05:28<5:04:41, 29.77s/it]\u001b[32mfinetune/0\u001b[0m [0]:{'loss': 3.2012, 'grad_norm': 0.35657501220703125, 'learning_rate': 1.9934959349593495e-05, 'epoch': 0.02}\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]: 2%|▏ | 12/625 [05:57<5:01:23, 29.50s/it]\n",
"\u001b[32mfinetune/0\u001b[0m [0]: \n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]: 2%|▏ | 12/625 [05:57<5:01:23, 29.50s/it]\u001b[32mfinetune/0\u001b[0m [0]:{'loss': 3.1134, 'grad_norm': 0.36276134848594666, 'learning_rate': 1.9902439024390247e-05, 'epoch': 0.02}\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]: 2%|▏ | 13/625 [06:25<4:56:07, 29.03s/it]\n",
"\u001b[32mfinetune/0\u001b[0m [0]: \n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]: 2%|▏ | 13/625 [06:25<4:56:07, 29.03s/it]\u001b[32mfinetune/0\u001b[0m [0]:{'loss': 3.1465, 'grad_norm': 0.3630853295326233, 'learning_rate': 1.9869918699186996e-05, 'epoch': 0.02}\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]: 2%|▏ | 14/625 [06:55<4:57:43, 29.24s/it]\n",
"\u001b[32mfinetune/0\u001b[0m [0]: \n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]: 2%|▏ | 14/625 [06:55<4:57:43, 29.24s/it]\u001b[32mfinetune/0\u001b[0m [0]:{'loss': 3.2319, 'grad_norm': 0.3843749761581421, 'learning_rate': 1.983739837398374e-05, 'epoch': 0.02}\n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]: 2%|▏ | 15/625 [07:24<4:58:28, 29.36s/it]\n",
"\u001b[32mfinetune/0\u001b[0m [0]: \n",
"\u001b[32mfinetune/0\u001b[0m [0]:\n",
"\u001b[32mfinetune/0\u001b[0m [0]: 2%|▏ | 15/625 [07:24<4:58:28, 29.36s/it]\u001b[32mfinetune/0\u001b[0m [2024-03-17 23:46:32,325] torch.distributed.elastic.agent.server.api: [WARNING] Received Signals.SIGTERM death signal, shutting down workers\n",
"\u001b[32mfinetune/0\u001b[0m [2024-03-17 23:46:32,325] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 3300 closing signal SIGTERM\n",
"Traceback (most recent call last):\n",
" File \"/usr/local/bin/torchx\", line 8, in <module>\n",
" sys.exit(main())\n",
" File \"/usr/local/lib/python3.10/dist-packages/torchx/cli/main.py\", line 116, in main\n",
" run_main(get_sub_cmds(), argv)\n",
" File \"/usr/local/lib/python3.10/dist-packages/torchx/cli/main.py\", line 112, in run_main\n",
" args.func(args)\n",
" File \"/usr/local/lib/python3.10/dist-packages/torchx/cli/cmd_run.py\", line 248, in run\n",
" self._run(runner, args)\n",
" File \"/usr/local/lib/python3.10/dist-packages/torchx/cli/cmd_run.py\", line 220, in _run\n",
" self._wait_and_exit(runner, app_handle, log=True)\n",
" File \"/usr/local/lib/python3.10/dist-packages/torchx/cli/cmd_run.py\", line 255, in _wait_and_exit\n",
" status = runner.wait(app_handle, wait_interval=1)\n",
" File \"/usr/local/lib/python3.10/dist-packages/torchx/runner/api.py\", line 472, in wait\n",
" time.sleep(wait_interval)\n",
" File \"/usr/local/lib/python3.10/dist-packages/torchx/schedulers/local_scheduler.py\", line 80, in _terminate_process_handler\n",
" raise SignalException(f\"Process {os.getpid()} got signal: {sigval}\", sigval=sigval)\n",
" File \"/usr/local/lib/python3.10/dist-packages/torchx/schedulers/local_scheduler.py\", line 65, in __init__\n",
" def __init__(self, msg: str, sigval: signal.Signals) -> None:\n",
" File \"/usr/local/lib/python3.10/dist-packages/torchx/schedulers/local_scheduler.py\", line 80, in _terminate_process_handler\n",
" raise SignalException(f\"Process {os.getpid()} got signal: {sigval}\", sigval=sigval)\n",
"torchx.schedulers.local_scheduler.SignalException: Process 3270 got signal: 2\n"
]
}
],
"source": [
"!PYTHONPATH=\"$PYTHONPATH:/content/cookbook\" torchx run -s local_cwd dist.ddp -j 1x1 --script /content/cookbook/recipes/tune/instruct_lora/finetune.py -- --config-name=summarize model=llama2-7b-chat-colab"
]
},
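{
"cell_type": "markdown",
"metadata": {},
"source": [
"The trainer log above reports `trainable params: 50,331,648 || all params: 6,788,747,264` for the LoRA adapter. As a quick check, the printed `trainable%` can be re-derived from those two numbers; the cell below is plain arithmetic on the logged values, not a call into the training code."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Re-derive the trainable-parameter share printed by the trainer above.\n",
"trainable = 50_331_648  # LoRA adapter parameters (from the log)\n",
"total = 6_788_747_264   # all parameters of the 7B model plus adapter (from the log)\n",
"print(f\"trainable%: {100 * trainable / total}\")  # ~0.7414%, matching the log"
]
},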
{
"cell_type": "markdown",
"source": [
"The run was interrupted manually to avoid further charges (the `SignalException: Process 3270 got signal: 2` in the traceback above is the SIGINT raised when the cell was stopped). At the observed pace of roughly 30 s per optimizer step, the full 625-step run would have taken about 5 hours on the Colab A100."
],
"metadata": {
"id": "u5axPEyiz3xz"
}
},
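{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a sanity check on that estimate, here is a back-of-the-envelope calculation using only the step count and per-step time reported by the progress bar above; it is a rough projection, not a measured benchmark."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Rough wall-clock estimate from the tqdm output above:\n",
"# 625 optimizer steps at roughly 30 s/step on the Colab A100.\n",
"steps = 625\n",
"seconds_per_step = 30  # observed average; individual steps ranged ~29-31 s\n",
"print(f\"Estimated full run: {steps * seconds_per_step / 3600:.1f} hours\")  # ~5.2 hours"
]
}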
],
"metadata": {
"accelerator": "GPU",
"colab": {
"provenance": [],
"machine_shape": "hm",
"gpuType": "A100",
"include_colab_link": true
},
"kernelspec": {
"display_name": "Python 3",
"name": "python3"
},
"language_info": {
"name": "python"
}
},
"nbformat": 4,
"nbformat_minor": 0
}