@mangtronix
Last active October 5, 2020 10:30
{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"colab": {
"name": "StyleGAN2-Colab-Train.ipynb",
"provenance": [],
"collapsed_sections": [],
"machine_shape": "hm",
"include_colab_link": true
},
"kernelspec": {
"name": "python3",
"display_name": "Python 3"
},
"accelerator": "GPU"
},
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "view-in-github",
"colab_type": "text"
},
"source": [
"<a href=\"https://colab.research.google.com/gist/mangtronix/6794c4138c613bdab27c00abb8f451f3/stylegan2-colab-train.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "rEx9IOnF1hKO"
},
"source": [
"#Training StyleGAN2 on Colab\n",
"Y’all won’t stop asking me about this so here ya go 😂\n",
"\n",
"If it were me I’d sign up for Colab Pro ($10/month) to get a couple extra hours of training time in per session. But you crazy kids can do whatever you want."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "WcDHP4h-11Ii"
},
"source": [
"##Mounting Google Drive\n",
"So I’m actually gonna install my entire repo directly into Google Drive. This will make the setup a little easier, but its a little strange I admit.\n",
"\n",
"First, connect your Drive to Colab."
]
},
{
"cell_type": "code",
"metadata": {
"id": "lJazuNYurryY",
"outputId": "6c8e16f2-cefb-4341-8dd8-9775e0dafd0c",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 34
}
},
"source": [
"from google.colab import drive\n",
"drive.mount('/content/drive')"
],
"execution_count": 1,
"outputs": [
{
"output_type": "stream",
"text": [
"Mounted at /content/drive\n"
],
"name": "stdout"
}
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "aadrbMyR2F00"
},
"source": [
"##Install the repo\n",
"**Only do this for the first time ever setting this up!**\n",
"\n",
"If this is your first time ever running this notebook, you’ll want to install my fork of StyleGAN2 to your Drive account. Make sure you have ample space on your Drive (I’d say at least 50GB). This will install the repo and add some dependecies (like transferring from FFHQ the first time).\n",
"\n",
"Every time after your first use of this notebook you’ll want to skip this cell and run the cell after this."
]
},
{
"cell_type": "code",
"metadata": {
"id": "QoTNQ3Gyr0ih",
"outputId": "b972e57d-f326-4bb5-bb78-4a30237b35d8",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 374
}
},
"source": [
"#SKIP this if you already have a stylegan2 folder in your google drive\n",
"%cd /content/drive/My\\ Drive/\n",
"!mkdir stylegan2-colab\n",
"%cd stylegan2-colab/\n",
"!git clone https://github.com/dvschultz/stylegan2\n",
"%cd stylegan2\n",
"!mkdir pkl\n",
"%cd pkl\n",
"!gdown --id 1JLqXE5bGZnlu2BkbLPD5_ZxoO3Nz-AvF #inception: https://drive.google.com/open?id=1JLqXE5bGZnlu2BkbLPD5_ZxoO3Nz-AvF\n",
"%cd ../\n",
"!mkdir results\n",
"!mkdir results/00001-pretrained\n",
"%cd results/00001-pretrained\n",
"!gdown --id 1UlDmJVLLnBD9SnLSMXeiZRO6g-OMQCA_\n",
"!mv stylegan2-ffhq-config-f.pkl network-snapshot-10000.pkl\n",
"%cd ../../\n",
"%mkdir datasets"
],
"execution_count": 2,
"outputs": [
{
"output_type": "stream",
"text": [
"/content/drive/My Drive\n",
"mkdir: cannot create directory ‘stylegan2-colab’: File exists\n",
"/content/drive/My Drive/stylegan2-colab\n",
"fatal: destination path 'stylegan2' already exists and is not an empty directory.\n",
"/content/drive/My Drive/stylegan2-colab/stylegan2\n",
"mkdir: cannot create directory ‘pkl’: File exists\n",
"/content/drive/My Drive/stylegan2-colab/stylegan2/pkl\n",
"Downloading...\n",
"From: https://drive.google.com/uc?id=1JLqXE5bGZnlu2BkbLPD5_ZxoO3Nz-AvF\n",
"To: /content/drive/My Drive/stylegan2-colab/stylegan2/pkl/inception_v3_features.pkl\n",
"87.3MB [00:00, 122MB/s]\n",
"/content/drive/My Drive/stylegan2-colab/stylegan2\n",
"mkdir: cannot create directory ‘results’: File exists\n",
"mkdir: cannot create directory ‘results/00001-pretrained’: File exists\n",
"/content/drive/My Drive/stylegan2-colab/stylegan2/results/00001-pretrained\n",
"Downloading...\n",
"From: https://drive.google.com/uc?id=1UlDmJVLLnBD9SnLSMXeiZRO6g-OMQCA_\n",
"To: /content/drive/My Drive/stylegan2-colab/stylegan2/results/00001-pretrained/stylegan2-ffhq-config-f.pkl\n",
"382MB [00:04, 76.7MB/s]\n",
"/content/drive/My Drive/stylegan2-colab/stylegan2\n",
"mkdir: cannot create directory ‘datasets’: File exists\n"
],
"name": "stdout"
}
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "NN2K68CE27py"
},
"source": [
"##Picking up from a previous session\n",
"If you already have the StyleGAN2 repo installed in Drive skip the above cell and run the following. This will make sure you have the latest version in case I make updates."
]
},
{
"cell_type": "code",
"metadata": {
"id": "J8WgjhRFsFJ0",
"outputId": "fe11f1ee-26e1-455d-c56f-3b3660c7e425",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 34
}
},
"source": [
"#USE this if you already have a stylegan2 folder in google drive\n",
"%cd /content/drive/My\\ Drive/stylegan2-colab/stylegan2\n",
"!git pull"
],
"execution_count": null,
"outputs": [
{
"output_type": "stream",
"text": [
"HEAD is now at cdc8c0e set latest\n"
],
"name": "stdout"
}
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "HSbEY2pT3TOb"
},
"source": [
"##Make sure Tensorflow 1.15 is set\n",
"Colab now defaults to Tensorflow 2. Make sure you run this cell to reset it to TF1."
]
},
{
"cell_type": "code",
"metadata": {
"id": "UMdpKY1XODz2",
"outputId": "697608e3-41a9-413f-9a9b-6ac89a4a1040",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 34
}
},
"source": [
"%tensorflow_version 1.x"
],
"execution_count": 3,
"outputs": [
{
"output_type": "stream",
"text": [
"TensorFlow 1.x selected.\n"
],
"name": "stdout"
}
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "i1pIBZGzZxSA"
},
"source": [
"##Converting your dataset\n",
"StyleGAN requires you to convert your standard jpg or png images into a new format (.tfrecords). \n",
"\n",
"I’ve seen some recommendations to run this command every time you restart your Colab machine. I think if you ahve a small-ish dataset (< 2000 images) that’s probably unnecessary.\n",
"\n",
"I recommend you upload your dataset to Google Drive and copy its path from the Drive folder in Colab and paste its path in the below cell.\n",
"\n",
"After the `create_from_images` argument you need to pass in two paths. The first path is where the .tfrecords files should be output (just edit the last part to have a unique name). The second path is to the directory of your images.\n",
"\n",
"\n"
]
},
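One gotcha worth checking before you convert: `dataset_tool.py` expects every image to have identical dimensions (and this family of repos works with square, power-of-two sizes like 1024×1024). As a quick pre-flight check you can read dimensions straight from each PNG header with the standard library alone. This is a hedged sketch for PNG files only, and the `png_size` helper is mine, not part of the repo:

```python
import struct

def png_size(data: bytes):
    """Return (width, height) parsed from a PNG byte stream.

    A PNG starts with an 8-byte signature, then the IHDR chunk:
    4-byte length, the tag b'IHDR', then big-endian width and height.
    """
    if data[:8] != b'\x89PNG\r\n\x1a\n':
        raise ValueError('not a PNG file')
    # bytes 16..24 hold width and height as two big-endian uint32s
    width, height = struct.unpack('>II', data[16:24])
    return width, height
```

Loop it over your dataset folder and flag any file whose size differs from the rest before spending time on a conversion that may fail.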
{
"cell_type": "code",
"metadata": {
"id": "Ixjcx2-cbTDm",
"outputId": "3a94c32a-47f9-428a-dd70-a3890918c2ee",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 51
}
},
"source": [
"#2nd argument is where to put your tfrecords files\n",
"#3rd should point at your image dataset\n",
"!python dataset_tool.py create_from_images ./datasets/dayinthelife /content/drive/My\\ Drive/datasets"
],
"execution_count": 7,
"outputs": [
{
"output_type": "stream",
"text": [
"Loading images from \"/content/drive/My Drive/datasets\"\n",
"100% 29/29 [00:18<00:00, 1.54it/s]\n"
],
"name": "stdout"
}
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "iAfUNG60aRD1"
},
"source": [
"##Training\n",
"Note: this will require you to restart your Colab machine every 8–16 hours. You’ve been warned!\n",
"\n",
"This library is set up to automatically find your latest .pkl file so it should keep re-training and making progress.\n",
"\n",
"##Training Options\n",
"`--dataset`\n",
"\n",
"This should be the name you used in the first path when converting your dataset.\n",
"\n",
"`--mirror-augment`\n",
"\n",
"Using this option loads some images at random mirrored horizontally (left-to-right). This might help if you have a very small dataset.\n",
"\n",
"`--metrics`\n",
"\n",
"METRICS DON’T MATTER. It’s art! Use your eyes. Set `--metrics=None` and live your life.\n",
"\n",
"If you must use metrics, you have a few options. `fid50k`, the default, uses Frechet Inception Distance score. It’s what was used in StyleGAN1 and what most people know. It’s fine for images of animals and things, but it’s not great. `ppl_wend` is what StyleGAN2 prefers and claims to be more accurate. There are a bunch of other options but I’d recommend you stick with those. Note that both of these take 30–45minutes to run every time it runs so that cuts into your training time in Colab.\n"
]
},
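For intuition, mirror augmentation is nothing exotic: it just shows the network a left-to-right flipped copy of some images, which roughly doubles the pose variety of a small dataset. A toy sketch of the idea on a nested-list image (the real code flips tensors inside the training loop; `maybe_mirror` is an illustrative name of mine, not a repo function):

```python
import random

def maybe_mirror(image, rng=random):
    """Flip an image horizontally with probability 0.5.

    `image` is a list of rows; reversing each row mirrors left-to-right.
    """
    if rng.random() < 0.5:
        return [row[::-1] for row in image]
    return image
```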
{
"cell_type": "code",
"metadata": {
"id": "8QrOVqEHaipA",
"outputId": "15385ba9-beff-4dce-9e61-6eacc8b7f091",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 1000
}
},
"source": [
"!python run_training.py --num-gpus=1 --data-dir=./datasets --config=config-f --dataset=dayinthelife --mirror-augment=true --metrics=None"
],
"execution_count": null,
"outputs": [
{
"output_type": "stream",
"text": [
"Local submit - run_dir: results/00003-stylegan2-dayinthelife-1gpu-config-f\n",
"dnnlib: Running training.training_loop.training_loop() on localhost...\n",
"Streaming data using training.dataset.TFRecordDataset...\n",
"tcmalloc: large alloc 4294967296 bytes == 0x75a8000 @ 0x7f8b6525d001 0x7f8b61ce9765 0x7f8b61d4dbb0 0x7f8b61d4fa4f 0x7f8b61de6048 0x50a7f5 0x50cfd6 0x507f24 0x509202 0x594b01 0x54a17f 0x5517c1 0x59fe1e 0x50d596 0x507f24 0x588fac 0x59fe1e 0x50d596 0x507f24 0x588fac 0x59fe1e 0x50d596 0x509918 0x50a64d 0x50c1f4 0x509918 0x50a64d 0x50c1f4 0x507f24 0x588fac 0x59fe1e\n",
"tcmalloc: large alloc 4294967296 bytes == 0x7f89932c8000 @ 0x7f8b6525b1e7 0x7f8b61ce95e1 0x7f8b61d4dc78 0x7f8b61d4df37 0x7f8b61de5f28 0x50a7f5 0x50cfd6 0x507f24 0x509c50 0x50a64d 0x50cfd6 0x507f24 0x509c50 0x50a64d 0x50cfd6 0x507f24 0x509c50 0x50a64d 0x50cfd6 0x509918 0x50a64d 0x50c1f4 0x507f24 0x509c50 0x50a64d 0x50c1f4 0x507f24 0x509c50 0x50a64d 0x50cfd6 0x507f24\n",
"tcmalloc: large alloc 4294967296 bytes == 0x7f88922c6000 @ 0x7f8b6525b1e7 0x7f8b61ce95e1 0x7f8b61d4dc78 0x7f8b61d4df37 0x7f8b1d9740c5 0x7f8b1d2f7902 0x7f8b1d2f7eb2 0x7f8b1d2b0c3e 0x50a47f 0x50c1f4 0x509918 0x50a64d 0x50c1f4 0x507f24 0x588ddb 0x59fe1e 0x50d596 0x507f24 0x509c50 0x50a64d 0x50c1f4 0x507f24 0x509c50 0x50a64d 0x50c1f4 0x509918 0x50a64d 0x50c1f4 0x507f24 0x509202 0x594b01\n",
"Dataset shape = [3, 1024, 1024]\n",
"Dynamic range = [0, 255]\n",
"Label size = 0\n",
"Loading networks from \"results/00001-pretrained/network-snapshot-10000.pkl\"...\n",
"Setting up TensorFlow plugin \"fused_bias_act.cu\": Preprocessing... Compiling... Loading... Done.\n",
"Setting up TensorFlow plugin \"upfirdn_2d.cu\": Preprocessing... Compiling... Loading... Done.\n",
"\n",
"G Params OutputShape WeightShape \n",
"--- --- --- --- \n",
"latents_in - (?, 512) - \n",
"labels_in - (?, 0) - \n",
"lod - () - \n",
"dlatent_avg - (512,) - \n",
"G_mapping/latents_in - (?, 512) - \n",
"G_mapping/labels_in - (?, 0) - \n",
"G_mapping/Normalize - (?, 512) - \n",
"G_mapping/Dense0 262656 (?, 512) (512, 512) \n",
"G_mapping/Dense1 262656 (?, 512) (512, 512) \n",
"G_mapping/Dense2 262656 (?, 512) (512, 512) \n",
"G_mapping/Dense3 262656 (?, 512) (512, 512) \n",
"G_mapping/Dense4 262656 (?, 512) (512, 512) \n",
"G_mapping/Dense5 262656 (?, 512) (512, 512) \n",
"G_mapping/Dense6 262656 (?, 512) (512, 512) \n",
"G_mapping/Dense7 262656 (?, 512) (512, 512) \n",
"G_mapping/Broadcast - (?, 18, 512) - \n",
"G_mapping/dlatents_out - (?, 18, 512) - \n",
"Truncation/Lerp - (?, 18, 512) - \n",
"G_synthesis/dlatents_in - (?, 18, 512) - \n",
"G_synthesis/4x4/Const 8192 (?, 512, 4, 4) (1, 512, 4, 4) \n",
"G_synthesis/4x4/Conv 2622465 (?, 512, 4, 4) (3, 3, 512, 512)\n",
"G_synthesis/4x4/ToRGB 264195 (?, 3, 4, 4) (1, 1, 512, 3) \n",
"G_synthesis/8x8/Conv0_up 2622465 (?, 512, 8, 8) (3, 3, 512, 512)\n",
"G_synthesis/8x8/Conv1 2622465 (?, 512, 8, 8) (3, 3, 512, 512)\n",
"G_synthesis/8x8/Upsample - (?, 3, 8, 8) - \n",
"G_synthesis/8x8/ToRGB 264195 (?, 3, 8, 8) (1, 1, 512, 3) \n",
"G_synthesis/16x16/Conv0_up 2622465 (?, 512, 16, 16) (3, 3, 512, 512)\n",
"G_synthesis/16x16/Conv1 2622465 (?, 512, 16, 16) (3, 3, 512, 512)\n",
"G_synthesis/16x16/Upsample - (?, 3, 16, 16) - \n",
"G_synthesis/16x16/ToRGB 264195 (?, 3, 16, 16) (1, 1, 512, 3) \n",
"G_synthesis/32x32/Conv0_up 2622465 (?, 512, 32, 32) (3, 3, 512, 512)\n",
"G_synthesis/32x32/Conv1 2622465 (?, 512, 32, 32) (3, 3, 512, 512)\n",
"G_synthesis/32x32/Upsample - (?, 3, 32, 32) - \n",
"G_synthesis/32x32/ToRGB 264195 (?, 3, 32, 32) (1, 1, 512, 3) \n",
"G_synthesis/64x64/Conv0_up 2622465 (?, 512, 64, 64) (3, 3, 512, 512)\n",
"G_synthesis/64x64/Conv1 2622465 (?, 512, 64, 64) (3, 3, 512, 512)\n",
"G_synthesis/64x64/Upsample - (?, 3, 64, 64) - \n",
"G_synthesis/64x64/ToRGB 264195 (?, 3, 64, 64) (1, 1, 512, 3) \n",
"G_synthesis/128x128/Conv0_up 1442561 (?, 256, 128, 128) (3, 3, 512, 256)\n",
"G_synthesis/128x128/Conv1 721409 (?, 256, 128, 128) (3, 3, 256, 256)\n",
"G_synthesis/128x128/Upsample - (?, 3, 128, 128) - \n",
"G_synthesis/128x128/ToRGB 132099 (?, 3, 128, 128) (1, 1, 256, 3) \n",
"G_synthesis/256x256/Conv0_up 426369 (?, 128, 256, 256) (3, 3, 256, 128)\n",
"G_synthesis/256x256/Conv1 213249 (?, 128, 256, 256) (3, 3, 128, 128)\n",
"G_synthesis/256x256/Upsample - (?, 3, 256, 256) - \n",
"G_synthesis/256x256/ToRGB 66051 (?, 3, 256, 256) (1, 1, 128, 3) \n",
"G_synthesis/512x512/Conv0_up 139457 (?, 64, 512, 512) (3, 3, 128, 64) \n",
"G_synthesis/512x512/Conv1 69761 (?, 64, 512, 512) (3, 3, 64, 64) \n",
"G_synthesis/512x512/Upsample - (?, 3, 512, 512) - \n",
"G_synthesis/512x512/ToRGB 33027 (?, 3, 512, 512) (1, 1, 64, 3) \n",
"G_synthesis/1024x1024/Conv0_up 51297 (?, 32, 1024, 1024) (3, 3, 64, 32) \n",
"G_synthesis/1024x1024/Conv1 25665 (?, 32, 1024, 1024) (3, 3, 32, 32) \n",
"G_synthesis/1024x1024/Upsample - (?, 3, 1024, 1024) - \n",
"G_synthesis/1024x1024/ToRGB 16515 (?, 3, 1024, 1024) (1, 1, 32, 3) \n",
"G_synthesis/images_out - (?, 3, 1024, 1024) - \n",
"G_synthesis/noise0 - (1, 1, 4, 4) - \n",
"G_synthesis/noise1 - (1, 1, 8, 8) - \n",
"G_synthesis/noise2 - (1, 1, 8, 8) - \n",
"G_synthesis/noise3 - (1, 1, 16, 16) - \n",
"G_synthesis/noise4 - (1, 1, 16, 16) - \n",
"G_synthesis/noise5 - (1, 1, 32, 32) - \n",
"G_synthesis/noise6 - (1, 1, 32, 32) - \n",
"G_synthesis/noise7 - (1, 1, 64, 64) - \n",
"G_synthesis/noise8 - (1, 1, 64, 64) - \n",
"G_synthesis/noise9 - (1, 1, 128, 128) - \n",
"G_synthesis/noise10 - (1, 1, 128, 128) - \n",
"G_synthesis/noise11 - (1, 1, 256, 256) - \n",
"G_synthesis/noise12 - (1, 1, 256, 256) - \n",
"G_synthesis/noise13 - (1, 1, 512, 512) - \n",
"G_synthesis/noise14 - (1, 1, 512, 512) - \n",
"G_synthesis/noise15 - (1, 1, 1024, 1024) - \n",
"G_synthesis/noise16 - (1, 1, 1024, 1024) - \n",
"images_out - (?, 3, 1024, 1024) - \n",
"--- --- --- --- \n",
"Total 30370060 \n",
"\n",
"\n",
"D Params OutputShape WeightShape \n",
"--- --- --- --- \n",
"images_in - (?, 3, 1024, 1024) - \n",
"labels_in - (?, 0) - \n",
"1024x1024/FromRGB 128 (?, 32, 1024, 1024) (1, 1, 3, 32) \n",
"1024x1024/Conv0 9248 (?, 32, 1024, 1024) (3, 3, 32, 32) \n",
"1024x1024/Conv1_down 18496 (?, 64, 512, 512) (3, 3, 32, 64) \n",
"1024x1024/Skip 2048 (?, 64, 512, 512) (1, 1, 32, 64) \n",
"512x512/Conv0 36928 (?, 64, 512, 512) (3, 3, 64, 64) \n",
"512x512/Conv1_down 73856 (?, 128, 256, 256) (3, 3, 64, 128) \n",
"512x512/Skip 8192 (?, 128, 256, 256) (1, 1, 64, 128) \n",
"256x256/Conv0 147584 (?, 128, 256, 256) (3, 3, 128, 128)\n",
"256x256/Conv1_down 295168 (?, 256, 128, 128) (3, 3, 128, 256)\n",
"256x256/Skip 32768 (?, 256, 128, 128) (1, 1, 128, 256)\n",
"128x128/Conv0 590080 (?, 256, 128, 128) (3, 3, 256, 256)\n",
"128x128/Conv1_down 1180160 (?, 512, 64, 64) (3, 3, 256, 512)\n",
"128x128/Skip 131072 (?, 512, 64, 64) (1, 1, 256, 512)\n",
"64x64/Conv0 2359808 (?, 512, 64, 64) (3, 3, 512, 512)\n",
"64x64/Conv1_down 2359808 (?, 512, 32, 32) (3, 3, 512, 512)\n",
"64x64/Skip 262144 (?, 512, 32, 32) (1, 1, 512, 512)\n",
"32x32/Conv0 2359808 (?, 512, 32, 32) (3, 3, 512, 512)\n",
"32x32/Conv1_down 2359808 (?, 512, 16, 16) (3, 3, 512, 512)\n",
"32x32/Skip 262144 (?, 512, 16, 16) (1, 1, 512, 512)\n",
"16x16/Conv0 2359808 (?, 512, 16, 16) (3, 3, 512, 512)\n",
"16x16/Conv1_down 2359808 (?, 512, 8, 8) (3, 3, 512, 512)\n",
"16x16/Skip 262144 (?, 512, 8, 8) (1, 1, 512, 512)\n",
"8x8/Conv0 2359808 (?, 512, 8, 8) (3, 3, 512, 512)\n",
"8x8/Conv1_down 2359808 (?, 512, 4, 4) (3, 3, 512, 512)\n",
"8x8/Skip 262144 (?, 512, 4, 4) (1, 1, 512, 512)\n",
"4x4/MinibatchStddev - (?, 513, 4, 4) - \n",
"4x4/Conv 2364416 (?, 512, 4, 4) (3, 3, 513, 512)\n",
"4x4/Dense0 4194816 (?, 512) (8192, 512) \n",
"Output 513 (?, 1) (512, 1) \n",
"scores_out - (?, 1) - \n",
"--- --- --- --- \n",
"Total 29012513 \n",
"\n",
"Building TensorFlow graph...\n",
"Initializing logs...\n",
"Training for 25000 kimg...\n",
"\n",
"tick 0 kimg 10000.1 lod 0.00 minibatch 32 time 53s sec/tick 53.4 sec/kimg 417.08 maintenance 0.0 gpumem 13.1\n",
"tick 1 kimg 10004.2 lod 0.00 minibatch 32 time 17m 11s sec/tick 958.3 sec/kimg 233.96 maintenance 19.6 gpumem 13.1\n",
"tick 2 kimg 10008.3 lod 0.00 minibatch 32 time 33m 13s sec/tick 958.9 sec/kimg 234.10 maintenance 2.9 gpumem 13.1\n",
"tick 3 kimg 10012.4 lod 0.00 minibatch 32 time 49m 15s sec/tick 959.1 sec/kimg 234.16 maintenance 2.9 gpumem 13.1\n",
"tick 4 kimg 10016.5 lod 0.00 minibatch 32 time 1h 05m 17s sec/tick 959.2 sec/kimg 234.17 maintenance 2.9 gpumem 13.1\n",
"tick 5 kimg 10020.6 lod 0.00 minibatch 32 time 1h 21m 20s sec/tick 959.0 sec/kimg 234.12 maintenance 4.0 gpumem 13.1\n",
"tick 6 kimg 10024.7 lod 0.00 minibatch 32 time 1h 37m 22s sec/tick 958.7 sec/kimg 234.07 maintenance 2.9 gpumem 13.1\n",
"tick 7 kimg 10028.8 lod 0.00 minibatch 32 time 1h 53m 23s sec/tick 958.6 sec/kimg 234.03 maintenance 2.8 gpumem 13.1\n",
"tick 8 kimg 10032.9 lod 0.00 minibatch 32 time 2h 09m 24s sec/tick 957.8 sec/kimg 233.83 maintenance 2.9 gpumem 13.1\n",
"tick 9 kimg 10037.0 lod 0.00 minibatch 32 time 2h 25m 26s sec/tick 958.3 sec/kimg 233.95 maintenance 4.1 gpumem 13.1\n",
"tick 10 kimg 10041.1 lod 0.00 minibatch 32 time 2h 41m 26s sec/tick 957.1 sec/kimg 233.67 maintenance 2.8 gpumem 13.1\n",
"tick 11 kimg 10045.2 lod 0.00 minibatch 32 time 2h 57m 26s sec/tick 957.5 sec/kimg 233.76 maintenance 2.8 gpumem 13.1\n",
"tick 12 kimg 10049.3 lod 0.00 minibatch 32 time 3h 13m 27s sec/tick 957.5 sec/kimg 233.76 maintenance 2.8 gpumem 13.1\n",
"tick 13 kimg 10053.4 lod 0.00 minibatch 32 time 3h 29m 28s sec/tick 957.3 sec/kimg 233.72 maintenance 4.0 gpumem 13.1\n",
"tick 14 kimg 10057.5 lod 0.00 minibatch 32 time 3h 45m 28s sec/tick 957.6 sec/kimg 233.78 maintenance 2.8 gpumem 13.1\n",
"tick 15 kimg 10061.6 lod 0.00 minibatch 32 time 4h 01m 28s sec/tick 957.4 sec/kimg 233.74 maintenance 2.7 gpumem 13.1\n",
"tick 16 kimg 10065.7 lod 0.00 minibatch 32 time 4h 17m 29s sec/tick 958.1 sec/kimg 233.91 maintenance 2.7 gpumem 13.1\n",
"tick 17 kimg 10069.8 lod 0.00 minibatch 32 time 4h 33m 31s sec/tick 958.0 sec/kimg 233.88 maintenance 4.0 gpumem 13.1\n",
"tick 18 kimg 10073.9 lod 0.00 minibatch 32 time 4h 49m 32s sec/tick 958.0 sec/kimg 233.88 maintenance 2.7 gpumem 13.1\n",
"tick 19 kimg 10078.0 lod 0.00 minibatch 32 time 5h 05m 32s sec/tick 957.8 sec/kimg 233.84 maintenance 2.8 gpumem 13.1\n",
"tick 20 kimg 10082.0 lod 0.00 minibatch 32 time 5h 21m 33s sec/tick 957.5 sec/kimg 233.78 maintenance 2.8 gpumem 13.1\n",
"tick 21 kimg 10086.1 lod 0.00 minibatch 32 time 5h 37m 34s sec/tick 957.3 sec/kimg 233.73 maintenance 4.1 gpumem 13.1\n",
"tick 22 kimg 10090.2 lod 0.00 minibatch 32 time 5h 53m 35s sec/tick 957.7 sec/kimg 233.82 maintenance 2.7 gpumem 13.1\n",
"tick 23 kimg 10094.3 lod 0.00 minibatch 32 time 6h 09m 36s sec/tick 958.3 sec/kimg 233.96 maintenance 2.7 gpumem 13.1\n",
"tick 24 kimg 10098.4 lod 0.00 minibatch 32 time 6h 25m 36s sec/tick 957.9 sec/kimg 233.86 maintenance 2.9 gpumem 13.1\n",
"tick 25 kimg 10102.5 lod 0.00 minibatch 32 time 6h 41m 40s sec/tick 958.5 sec/kimg 234.02 maintenance 4.5 gpumem 13.1\n",
"tick 26 kimg 10106.6 lod 0.00 minibatch 32 time 6h 57m 41s sec/tick 958.0 sec/kimg 233.89 maintenance 3.0 gpumem 13.1\n",
"tick 27 kimg 10110.7 lod 0.00 minibatch 32 time 7h 13m 41s sec/tick 957.8 sec/kimg 233.84 maintenance 3.0 gpumem 13.1\n",
"tick 28 kimg 10114.8 lod 0.00 minibatch 32 time 7h 29m 42s sec/tick 957.6 sec/kimg 233.79 maintenance 2.9 gpumem 13.1\n",
"tick 29 kimg 10118.9 lod 0.00 minibatch 32 time 7h 45m 44s sec/tick 958.0 sec/kimg 233.88 maintenance 4.2 gpumem 13.1\n",
"tick 30 kimg 10123.0 lod 0.00 minibatch 32 time 8h 01m 45s sec/tick 957.9 sec/kimg 233.87 maintenance 3.2 gpumem 13.1\n",
"tick 31 kimg 10127.1 lod 0.00 minibatch 32 time 8h 17m 46s sec/tick 958.4 sec/kimg 233.99 maintenance 2.9 gpumem 13.1\n",
"tick 32 kimg 10131.2 lod 0.00 minibatch 32 time 8h 33m 47s sec/tick 957.4 sec/kimg 233.73 maintenance 2.8 gpumem 13.1\n",
"tick 33 kimg 10135.3 lod 0.00 minibatch 32 time 8h 49m 49s sec/tick 958.4 sec/kimg 233.97 maintenance 4.1 gpumem 13.1\n",
"tick 34 kimg 10139.4 lod 0.00 minibatch 32 time 9h 05m 51s sec/tick 958.4 sec/kimg 233.98 maintenance 3.1 gpumem 13.1\n",
"tick 35 kimg 10143.5 lod 0.00 minibatch 32 time 9h 21m 52s sec/tick 958.4 sec/kimg 233.99 maintenance 2.8 gpumem 13.1\n",
"tick 36 kimg 10147.6 lod 0.00 minibatch 32 time 9h 37m 53s sec/tick 958.6 sec/kimg 234.03 maintenance 3.0 gpumem 13.1\n",
"tick 37 kimg 10151.7 lod 0.00 minibatch 32 time 9h 53m 57s sec/tick 958.9 sec/kimg 234.11 maintenance 4.3 gpumem 13.1\n",
"tick 38 kimg 10155.8 lod 0.00 minibatch 32 time 10h 09m 57s sec/tick 958.0 sec/kimg 233.88 maintenance 2.9 gpumem 13.1\n",
"tick 39 kimg 10159.9 lod 0.00 minibatch 32 time 10h 25m 57s sec/tick 956.7 sec/kimg 233.57 maintenance 2.9 gpumem 13.1\n",
"tick 40 kimg 10164.0 lod 0.00 minibatch 32 time 10h 41m 56s sec/tick 956.0 sec/kimg 233.39 maintenance 2.9 gpumem 13.1\n",
"tick 41 kimg 10168.1 lod 0.00 minibatch 32 time 10h 57m 58s sec/tick 957.8 sec/kimg 233.83 maintenance 4.4 gpumem 13.1\n",
"tick 42 kimg 10172.2 lod 0.00 minibatch 32 time 11h 13m 59s sec/tick 957.8 sec/kimg 233.83 maintenance 3.0 gpumem 13.1\n",
"tick 43 kimg 10176.3 lod 0.00 minibatch 32 time 11h 30m 02s sec/tick 960.6 sec/kimg 234.53 maintenance 3.0 gpumem 13.1\n",
"tick 44 kimg 10180.4 lod 0.00 minibatch 32 time 11h 46m 07s sec/tick 961.4 sec/kimg 234.73 maintenance 3.0 gpumem 13.1\n",
"tick 45 kimg 10184.4 lod 0.00 minibatch 32 time 12h 02m 12s sec/tick 960.7 sec/kimg 234.54 maintenance 4.4 gpumem 13.1\n",
"tick 46 kimg 10188.5 lod 0.00 minibatch 32 time 12h 18m 16s sec/tick 961.6 sec/kimg 234.77 maintenance 2.9 gpumem 13.1\n",
"tick 47 kimg 10192.6 lod 0.00 minibatch 32 time 12h 34m 22s sec/tick 962.4 sec/kimg 234.95 maintenance 2.9 gpumem 13.1\n",
"tick 48 kimg 10196.7 lod 0.00 minibatch 32 time 12h 50m 27s sec/tick 962.4 sec/kimg 234.97 maintenance 2.9 gpumem 13.1\n",
"tick 49 kimg 10200.8 lod 0.00 minibatch 32 time 13h 06m 33s sec/tick 961.8 sec/kimg 234.81 maintenance 4.3 gpumem 13.1\n",
"tick 50 kimg 10204.9 lod 0.00 minibatch 32 time 13h 22m 38s sec/tick 962.2 sec/kimg 234.92 maintenance 2.9 gpumem 13.1\n",
"tick 51 kimg 10209.0 lod 0.00 minibatch 32 time 13h 38m 43s sec/tick 961.4 sec/kimg 234.71 maintenance 2.8 gpumem 13.1\n",
"tick 52 kimg 10213.1 lod 0.00 minibatch 32 time 13h 54m 48s sec/tick 962.3 sec/kimg 234.94 maintenance 2.8 gpumem 13.1\n",
"tick 53 kimg 10217.2 lod 0.00 minibatch 32 time 14h 10m 56s sec/tick 962.7 sec/kimg 235.03 maintenance 5.4 gpumem 13.1\n",
"tick 54 kimg 10221.3 lod 0.00 minibatch 32 time 14h 26m 58s sec/tick 959.5 sec/kimg 234.26 maintenance 2.9 gpumem 13.1\n",
"tick 55 kimg 10225.4 lod 0.00 minibatch 32 time 14h 43m 02s sec/tick 960.6 sec/kimg 234.53 maintenance 2.9 gpumem 13.1\n",
"tick 56 kimg 10229.5 lod 0.00 minibatch 32 time 14h 59m 07s sec/tick 961.9 sec/kimg 234.85 maintenance 3.0 gpumem 13.1\n",
"tick 57 kimg 10233.6 lod 0.00 minibatch 32 time 15h 15m 13s sec/tick 961.8 sec/kimg 234.80 maintenance 4.7 gpumem 13.1\n",
"tick 58 kimg 10237.7 lod 0.00 minibatch 32 time 15h 31m 18s sec/tick 961.4 sec/kimg 234.72 maintenance 3.1 gpumem 13.1\n",
"tick 59 kimg 10241.8 lod 0.00 minibatch 32 time 15h 47m 21s sec/tick 959.9 sec/kimg 234.36 maintenance 2.9 gpumem 13.1\n",
"tick 60 kimg 10245.9 lod 0.00 minibatch 32 time 16h 03m 25s sec/tick 961.2 sec/kimg 234.68 maintenance 3.0 gpumem 13.1\n",
"tick 61 kimg 10250.0 lod 0.00 minibatch 32 time 16h 19m 31s sec/tick 961.8 sec/kimg 234.82 maintenance 4.2 gpumem 13.1\n",
"tick 62 kimg 10254.1 lod 0.00 minibatch 32 time 16h 35m 36s sec/tick 962.2 sec/kimg 234.92 maintenance 2.9 gpumem 13.1\n",
"tick 63 kimg 10258.2 lod 0.00 minibatch 32 time 16h 51m 40s sec/tick 961.6 sec/kimg 234.76 maintenance 2.8 gpumem 13.1\n",
"tick 64 kimg 10262.3 lod 0.00 minibatch 32 time 17h 07m 45s sec/tick 961.6 sec/kimg 234.76 maintenance 2.9 gpumem 13.1\n",
"tick 65 kimg 10266.4 lod 0.00 minibatch 32 time 17h 23m 52s sec/tick 962.9 sec/kimg 235.07 maintenance 4.4 gpumem 13.1\n",
"tick 66 kimg 10270.5 lod 0.00 minibatch 32 time 17h 39m 59s sec/tick 963.6 sec/kimg 235.26 maintenance 3.1 gpumem 13.1\n",
"tick 67 kimg 10274.6 lod 0.00 minibatch 32 time 17h 56m 05s sec/tick 963.3 sec/kimg 235.18 maintenance 2.9 gpumem 13.1\n",
"tick 68 kimg 10278.7 lod 0.00 minibatch 32 time 18h 12m 11s sec/tick 962.9 sec/kimg 235.09 maintenance 2.8 gpumem 13.1\n"
],
"name": "stdout"
}
]
},
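Each `tick` line in the log above reports progress in kimg (thousands of real images shown to the discriminator) plus a `sec/kimg` rate. A small helper for turning that rate into something intuitive, as a stdlib sketch that assumes the tick format printed above:

```python
import re

# matches e.g. "... sec/kimg 234.10 ..." in a tick line
SEC_PER_KIMG = re.compile(r'sec/kimg (\d+\.\d+)')

def kimg_per_hour(tick_line):
    """Convert a tick line's sec/kimg figure into kimg trained per hour."""
    m = SEC_PER_KIMG.search(tick_line)
    if m is None:
        raise ValueError('no sec/kimg field found')
    return 3600.0 / float(m.group(1))
```

At the ~234 sec/kimg shown above that works out to roughly 15 kimg per hour, which is why a full run spans many Colab sessions.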
{
"cell_type": "markdown",
"metadata": {
"id": "yQ7U1ftuj_Dc"
},
"source": [
"Once running, your training files will show up in the results folder."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "F9vCDt9LRtXl"
},
"source": [
"#Testing the model (generating images)\n",
"The following command will generate 55 sample images from the model.\n",
"\n",
"##Options\n",
"`--network`\n",
"\n",
"Make sure the `--network` argument points to your .pkl file. (My preferred method is to right click on the file in the Files pane to your left and choose `Copy Path`, then paste that into the argument after the `=` sign).\n",
"\n",
"`--seeds`\n",
"\n",
"This allows you to choose random seeds from the model. Remember that our input to StyleGAN is a 512-dimensional array. These seeds will generate those 512 values. Each seed will generate a different, random array. The same seed value will also always generate the same random array, so we can later use it for other purposes like interpolation.\n",
"\n",
"`--truncation-psi`\n",
"\n",
"Truncation is a special argument of StyleGAN. Essentially values that are closer to 0 will be more real than numbers further away from 0. I generally recommend a value between `0.5` and `1.0`. `0.5` will give you pretty \"realistic\" results, while `1.0` is likely to give you \"weirder\" results."
]
},
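To make `--seeds` and `--truncation-psi` concrete: a seed deterministically produces the 512-value latent, and truncation linearly pulls a latent toward the model average before synthesis. A numpy sketch of both ideas (illustrative only; inside the repo the network object applies truncation itself, in w-space):

```python
import numpy as np

def latent_from_seed(seed, dim=512):
    """Same seed in, same 512-dim latent out; that is why seeds are reusable."""
    return np.random.RandomState(seed).randn(1, dim)

def truncate(w, w_avg, psi=0.7):
    """Interpolate toward the average latent.

    psi=0 collapses everything onto the average; psi=1 leaves w untouched.
    """
    return w_avg + psi * (w - w_avg)
```

Calling `latent_from_seed(42)` twice returns identical arrays, so a given seed always maps to the same image for a given .pkl.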
{
"cell_type": "code",
"metadata": {
"id": "l3MhXEAMOMXH",
"outputId": "537f4e6e-4150-4a73-efe8-9a593bda6de9",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 544
}
},
"source": [
"!python run_generator.py generate-images --network=/content/ladiesfloralcrop-network-snapshot-010237.pkl --seeds=3875451-3876000 --truncation-psi=0.7"
],
"execution_count": null,
"outputs": [
{
"output_type": "stream",
"text": [
"Local submit - run_dir: results/00000-generate-images\n",
"dnnlib: Running run_generator.generate_images() on localhost...\n",
"Loading networks from \"/content/ladiesfloralcrop-network-snapshot-010237.pkl\"...\n",
"Setting up TensorFlow plugin \"fused_bias_act.cu\": Preprocessing... Compiling... Loading... Done.\n",
"Setting up TensorFlow plugin \"upfirdn_2d.cu\": Preprocessing... Compiling... Loading... Done.\n",
"Generating image for seed 1 (0/25) ...\n",
"Generating image for seed 2 (1/25) ...\n",
"Generating image for seed 3 (2/25) ...\n",
"Generating image for seed 4 (3/25) ...\n",
"Generating image for seed 5 (4/25) ...\n",
"Generating image for seed 6 (5/25) ...\n",
"Generating image for seed 7 (6/25) ...\n",
"Generating image for seed 8 (7/25) ...\n",
"Generating image for seed 9 (8/25) ...\n",
"Generating image for seed 10 (9/25) ...\n",
"Generating image for seed 11 (10/25) ...\n",
"Generating image for seed 12 (11/25) ...\n",
"Generating image for seed 13 (12/25) ...\n",
"Generating image for seed 14 (13/25) ...\n",
"Generating image for seed 15 (14/25) ...\n",
"Generating image for seed 16 (15/25) ...\n",
"Generating image for seed 17 (16/25) ...\n",
"Generating image for seed 18 (17/25) ...\n",
"Generating image for seed 19 (18/25) ...\n",
"Generating image for seed 20 (19/25) ...\n",
"Generating image for seed 21 (20/25) ...\n",
"Generating image for seed 22 (21/25) ...\n",
"Generating image for seed 23 (22/25) ...\n",
"Generating image for seed 24 (23/25) ...\n",
"Generating image for seed 25 (24/25) ...\n",
"dnnlib: Finished run_generator.generate_images() in 1m 19s.\n"
],
"name": "stdout"
}
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "FMiqkA3IReZB"
},
"source": [
"Let’s zip the generated files and download them."
]
},
{
"cell_type": "code",
"metadata": {
"id": "tp8O01O3PlFx",
"outputId": "e9e84d6d-1523-4283-af1e-11299a014c6b",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 544
}
},
"source": [
"!zip -r generated-0.7.zip /content/stylegan2/results/00000-generate-images"
],
"execution_count": null,
"outputs": [
{
"output_type": "stream",
"text": [
" adding: content/stylegan2/results/00000-generate-images/ (stored 0%)\n",
" adding: content/stylegan2/results/00000-generate-images/seed0025.png (deflated 0%)\n",
" adding: content/stylegan2/results/00000-generate-images/seed0014.png (deflated 0%)\n",
" adding: content/stylegan2/results/00000-generate-images/seed0007.png (deflated 0%)\n",
" adding: content/stylegan2/results/00000-generate-images/seed0018.png (deflated 0%)\n",
" adding: content/stylegan2/results/00000-generate-images/seed0017.png (deflated 0%)\n",
" adding: content/stylegan2/results/00000-generate-images/seed0010.png (deflated 0%)\n",
" adding: content/stylegan2/results/00000-generate-images/seed0015.png (deflated 0%)\n",
" adding: content/stylegan2/results/00000-generate-images/seed0024.png (deflated 0%)\n",
" adding: content/stylegan2/results/00000-generate-images/seed0021.png (deflated 0%)\n",
" adding: content/stylegan2/results/00000-generate-images/seed0004.png (deflated 0%)\n",
" adding: content/stylegan2/results/00000-generate-images/log.txt (deflated 74%)\n",
" adding: content/stylegan2/results/00000-generate-images/seed0012.png (deflated 0%)\n",
" adding: content/stylegan2/results/00000-generate-images/seed0009.png (deflated 0%)\n",
" adding: content/stylegan2/results/00000-generate-images/seed0022.png (deflated 0%)\n",
" adding: content/stylegan2/results/00000-generate-images/seed0013.png (deflated 0%)\n",
" adding: content/stylegan2/results/00000-generate-images/seed0011.png (deflated 0%)\n",
" adding: content/stylegan2/results/00000-generate-images/seed0008.png (deflated 0%)\n",
" adding: content/stylegan2/results/00000-generate-images/seed0002.png (deflated 0%)\n",
" adding: content/stylegan2/results/00000-generate-images/seed0001.png (deflated 0%)\n",
" adding: content/stylegan2/results/00000-generate-images/submit_config.txt (deflated 53%)\n",
" adding: content/stylegan2/results/00000-generate-images/submit_config.pkl (deflated 43%)\n",
" adding: content/stylegan2/results/00000-generate-images/seed0019.png (deflated 0%)\n",
" adding: content/stylegan2/results/00000-generate-images/seed0016.png (deflated 0%)\n",
" adding: content/stylegan2/results/00000-generate-images/run.txt (deflated 34%)\n",
" adding: content/stylegan2/results/00000-generate-images/seed0006.png (deflated 0%)\n",
" adding: content/stylegan2/results/00000-generate-images/seed0005.png (deflated 0%)\n",
" adding: content/stylegan2/results/00000-generate-images/_finished.txt (stored 0%)\n",
" adding: content/stylegan2/results/00000-generate-images/seed0023.png (deflated 0%)\n",
" adding: content/stylegan2/results/00000-generate-images/seed0020.png (deflated 0%)\n",
" adding: content/stylegan2/results/00000-generate-images/seed0003.png (deflated 0%)\n"
],
"name": "stdout"
}
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "BTwJjmCrlfAc"
},
"source": [
        "##Interpolation\n",
        "\n",
        "Generate a latent walk that interpolates between the given seeds. Listing the same seed first and last (here 3) makes the resulting video loop seamlessly."
]
},
{
"cell_type": "code",
"metadata": {
"id": "yQ2rYIC4TdaJ",
"outputId": "889a6ced-d1bc-421f-8d71-a6afedd5310c",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 1000
}
},
"source": [
"!python run_generator.py generate-latent-walk --network=/content/ladiesfloralcrop-network-snapshot-010237.pkl --seeds=3,11,17,25,3 --frames 200 --truncation-psi=0.7"
],
"execution_count": null,
"outputs": [
{
"output_type": "stream",
"text": [
"Local submit - run_dir: results/00002-generate-latent-walk\n",
"dnnlib: Running run_generator.generate_latent_walk() on localhost...\n",
"Loading networks from \"/content/ladiesfloralcrop-network-snapshot-010237.pkl\"...\n",
"Setting up TensorFlow plugin \"fused_bias_act.cu\": Preprocessing... Loading... Done.\n",
"Setting up TensorFlow plugin \"upfirdn_2d.cu\": Preprocessing... Loading... Done.\n",
        "Generating image for step 0/204 ...\n",
        "Generating image for step 1/204 ...\n",
        "Generating image for step 2/204 ...\n",
        "[... identical progress lines for steps 3-202 elided ...]\n",
        "Generating image for step 203/204 ...\n",
"dnnlib: Finished run_generator.generate_latent_walk() in 2m 20s.\n"
],
"name": "stdout"
}
]
},
{
"cell_type": "code",
"metadata": {
"id": "dceBSxTsmW1H",
"outputId": "3e7f602e-8ec4-461d-ab6c-5aee753fff83",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 870
}
},
"source": [
        "# Convert the generated frames to a video at 24 fps.\n",
        "# Update the run directory below to match your latest latent-walk run.\n",
        "!ffmpeg -r 24 -i ./results/00001-generate-latent-walk/step%05d.png -vcodec libx264 -pix_fmt yuv420p latent-walk-v2.mp4"
],
"execution_count": null,
"outputs": [
{
"output_type": "stream",
"text": [
"ffmpeg version 3.4.6-0ubuntu0.18.04.1 Copyright (c) 2000-2019 the FFmpeg developers\n",
" built with gcc 7 (Ubuntu 7.3.0-16ubuntu3)\n",
" configuration: --prefix=/usr --extra-version=0ubuntu0.18.04.1 --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --enable-gpl --disable-stripping --enable-avresample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librubberband --enable-librsvg --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-omx --enable-openal --enable-opengl --enable-sdl2 --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-chromaprint --enable-frei0r --enable-libopencv --enable-libx264 --enable-shared\n",
" libavutil 55. 78.100 / 55. 78.100\n",
" libavcodec 57.107.100 / 57.107.100\n",
" libavformat 57. 83.100 / 57. 83.100\n",
" libavdevice 57. 10.100 / 57. 10.100\n",
" libavfilter 6.107.100 / 6.107.100\n",
" libavresample 3. 7. 0 / 3. 7. 0\n",
" libswscale 4. 8.100 / 4. 8.100\n",
" libswresample 2. 9.100 / 2. 9.100\n",
" libpostproc 54. 7.100 / 54. 7.100\n",
"Input #0, image2, from './results/00001-generate-latent-walk/step%05d.png':\n",
" Duration: 00:00:08.16, start: 0.000000, bitrate: N/A\n",
" Stream #0:0: Video: png, rgb24(pc), 1024x1024, 25 fps, 25 tbr, 25 tbn, 25 tbc\n",
"Stream mapping:\n",
" Stream #0:0 -> #0:0 (png (native) -> h264 (libx264))\n",
"Press [q] to stop, [?] for help\n",
"\u001b[1;36m[libx264 @ 0x55a800235e00] \u001b[0musing cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 BMI2 AVX2\n",
"\u001b[1;36m[libx264 @ 0x55a800235e00] \u001b[0mprofile High, level 3.2\n",
"\u001b[1;36m[libx264 @ 0x55a800235e00] \u001b[0m264 - core 152 r2854 e9a5903 - H.264/MPEG-4 AVC codec - Copyleft 2003-2017 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=3 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=24 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00\n",
"Output #0, mp4, to 'latent-walk-v2.mp4':\n",
" Metadata:\n",
" encoder : Lavf57.83.100\n",
" Stream #0:0: Video: h264 (libx264) (avc1 / 0x31637661), yuv420p, 1024x1024, q=-1--1, 24 fps, 12288 tbn, 24 tbc\n",
" Metadata:\n",
" encoder : Lavc57.107.100 libx264\n",
" Side data:\n",
" cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: -1\n",
"frame= 204 fps=7.7 q=-1.0 Lsize= 12124kB time=00:00:08.37 bitrate=11859.2kbits/s speed=0.315x \n",
"video:12121kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.024653%\n",
"\u001b[1;36m[libx264 @ 0x55a800235e00] \u001b[0mframe I:1 Avg QP:27.07 size:147205\n",
"\u001b[1;36m[libx264 @ 0x55a800235e00] \u001b[0mframe P:93 Avg QP:27.89 size: 81048\n",
"\u001b[1;36m[libx264 @ 0x55a800235e00] \u001b[0mframe B:110 Avg QP:30.42 size: 42971\n",
"\u001b[1;36m[libx264 @ 0x55a800235e00] \u001b[0mconsecutive B-frames: 27.5% 2.0% 0.0% 70.6%\n",
"\u001b[1;36m[libx264 @ 0x55a800235e00] \u001b[0mmb I I16..4: 8.6% 57.3% 34.2%\n",
"\u001b[1;36m[libx264 @ 0x55a800235e00] \u001b[0mmb P I16..4: 0.6% 15.3% 8.5% P16..4: 32.1% 22.5% 15.4% 0.0% 0.0% skip: 5.6%\n",
"\u001b[1;36m[libx264 @ 0x55a800235e00] \u001b[0mmb B I16..4: 0.1% 2.7% 2.8% B16..8: 27.2% 16.2% 8.5% direct:17.4% skip:25.2% L0:29.0% L1:34.5% BI:36.5%\n",
"\u001b[1;36m[libx264 @ 0x55a800235e00] \u001b[0m8x8 transform intra:59.7% inter:56.9%\n",
"\u001b[1;36m[libx264 @ 0x55a800235e00] \u001b[0mcoded y,uvDC,uvAC intra: 88.8% 83.9% 51.5% inter: 53.3% 28.1% 1.7%\n",
"\u001b[1;36m[libx264 @ 0x55a800235e00] \u001b[0mi16 v,h,dc,p: 43% 22% 21% 14%\n",
"\u001b[1;36m[libx264 @ 0x55a800235e00] \u001b[0mi8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 16% 11% 12% 8% 11% 12% 10% 11% 10%\n",
"\u001b[1;36m[libx264 @ 0x55a800235e00] \u001b[0mi4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 17% 12% 13% 7% 13% 12% 11% 9% 7%\n",
"\u001b[1;36m[libx264 @ 0x55a800235e00] \u001b[0mi8c dc,h,v,p: 52% 18% 21% 9%\n",
"\u001b[1;36m[libx264 @ 0x55a800235e00] \u001b[0mWeighted P-Frames: Y:19.4% UV:14.0%\n",
"\u001b[1;36m[libx264 @ 0x55a800235e00] \u001b[0mref P L0: 64.4% 26.9% 6.1% 2.0% 0.5%\n",
"\u001b[1;36m[libx264 @ 0x55a800235e00] \u001b[0mref B L0: 95.6% 3.3% 1.1%\n",
"\u001b[1;36m[libx264 @ 0x55a800235e00] \u001b[0mref B L1: 99.1% 0.9%\n",
"\u001b[1;36m[libx264 @ 0x55a800235e00] \u001b[0mkb/s:11681.39\n"
],
"name": "stdout"
}
]
},
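    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "# Optional sanity check (a sketch, assuming the run directory used above):\n",
        "# count the rendered frames before encoding, so ffmpeg doesn't silently\n",
        "# encode a partial run. The path is an example; point it at your latest run.\n",
        "import glob\n",
        "frames = sorted(glob.glob('./results/00001-generate-latent-walk/step*.png'))\n",
        "print(len(frames), 'frames found')"
      ],
      "execution_count": null,
      "outputs": []
    },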
{
"cell_type": "code",
"metadata": {
"id": "u7A7jRRGmzhU",
"outputId": "7cdce8b6-3ee4-459a-cff8-c787bd016269",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 51
}
},
"source": [
        "!rm -r \"/content/drive/My Drive/stylegan2-colab-test/stylegan2/results/00002-stylegan2-birdaus-1gpu-config-f\""
],
"execution_count": null,
      "outputs": []
    }
]
}