Last active: April 19, 2024 06:19
Nerfstudio Colab
{
  "nbformat": 4,
  "nbformat_minor": 0,
  "metadata": {
    "colab": {
      "provenance": [],
      "include_colab_link": true
    },
    "kernelspec": {
      "name": "python3",
      "display_name": "Python 3"
    },
    "language_info": {
      "name": "python"
    },
    "accelerator": "GPU",
    "gpuClass": "standard"
  },
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "view-in-github",
        "colab_type": "text"
      },
      "source": [
        "<a href=\"https://colab.research.google.com/gist/l0g1c-80m8/9186a549f24083fdea0b92d9bff49376/nerfstudio-colab.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "<p align=\"center\">\n",
        " <picture>\n",
        " <source media=\"(prefers-color-scheme: dark)\" srcset=\"https://docs.nerf.studio/en/latest/_images/logo-dark.png\">\n",
        " <source media=\"(prefers-color-scheme: light)\" srcset=\"https://docs.nerf.studio/en/latest/_images/logo.png\">\n",
        " <img alt=\"nerfstudio\" src=\"https://docs.nerf.studio/en/latest/_images/logo.png\" width=\"400\">\n",
        " </picture>\n",
        "</p>\n",
        "\n",
        "# Nerfstudio: A collaboration friendly studio for NeRFs\n",
        "\n",
        "![GitHub stars](https://img.shields.io/github/stars/nerfstudio-project/nerfstudio?color=gold&style=social)\n",
        "\n",
        "This Colab notebook shows how to train and view NeRFs with Nerfstudio, using either pre-made datasets or your own videos/images.\n",
        "\n",
        "\\\\\n",
        "\n",
        "Credit to [NeX](https://nex-mpi.github.io/) for the Google Colab format."
      ],
      "metadata": {
        "id": "SiiXJ7K_fePG"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "## Frequently Asked Questions\n",
        "\n",
        "* **Downloading custom data is stalling (no output):**\n",
        "  * This is a bug in Colab. The data is still being processed and may take a while to complete. You will know processing has finished when `data/nerfstudio/custom_data/transforms.json` exists.\n",
        "* **Training is not showing progress:**\n",
        "  * The lack of output is a bug in Colab. You can follow the training progress from the viewer.\n",
        "* **Viewer quality is bad / low resolution:**\n",
        "  * This may be because more GPU time is being spent on training than on rendering the viewer. Try pausing training or decreasing the training utilization.\n",
        "* **Other problems?**\n",
        "  * Feel free to create an issue on our [GitHub repo](https://github.com/nerfstudio-project/nerfstudio).\n"
      ],
      "metadata": {
        "id": "Yyx5h6kz5ga7"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "#@title # Install Conda (requires runtime restart) { vertical-output: true, display-mode: \"form\" }\n",
        "\n",
        "!pip install -q condacolab\n",
        "import condacolab\n",
        "condacolab.install()"
      ],
      "metadata": {
        "id": "RGr33zHaHak0"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "#@title # Install Nerfstudio and Dependencies (~10 min) { vertical-output: true, display-mode: \"form\" }\n",
        "\n",
        "%cd /content/\n",
        "!pip install --upgrade pip\n",
        "!pip install torch==1.12.1+cu113 torchvision==0.13.1+cu113 -f https://download.pytorch.org/whl/torch_stable.html\n",
        "\n",
        "# Install tiny-cuda-nn from a prebuilt wheel\n",
        "%cd /content/\n",
        "!gdown \"https://drive.google.com/u/1/uc?id=1q8fuc-Mqiev5GTBTRA5UPgCaQDzuqKqj\"\n",
        "!pip install tinycudann-1.6-cp37-cp37m-linux_x86_64.whl\n",
        "\n",
        "# Install COLMAP (-y so conda does not stall waiting for confirmation)\n",
        "%cd /content/\n",
        "!conda install -y -c conda-forge colmap\n",
        "\n",
        "# Install nerfstudio\n",
        "%cd /content/\n",
        "!pip install nerfstudio"
      ],
      "metadata": {
        "id": "9oyLHl8QfYwP"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "#@markdown <h1>Downloading Data</h1>\n",
        "#@markdown <h3>Pick a preset scene or upload your own images/video</h3>\n",
        "import os\n",
        "from google.colab import files\n",
        "from IPython.core.display import display, HTML\n",
        "\n",
        "scene = '\uD83D\uDDBC poster' #@param ['🖼 poster', '🚜 dozer', '🌄 desolation', '📤 upload your images', '🎥 upload your own video']\n",
        "scene = ' '.join(scene.split(' ')[1:])\n",
        "\n",
        "if scene not in ['upload your images', 'upload your own video']:\n",
        "    %cd /content/\n",
        "    !ns-download-data --dataset=nerfstudio --capture=$scene\n",
        "else:\n",
        "    display(HTML('<h3>Select your custom data</h3>'))\n",
        "    display(HTML('<p>You can select multiple images by holding Ctrl, Cmd, or Shift while clicking.</p>'))\n",
        "    display(HTML('<p>Note: processing may take a while, especially for high-resolution inputs, so we recommend downloading the processed dataset after creation.</p>'))\n",
        "    !mkdir -p /content/data/nerfstudio/custom_data\n",
        "    if scene == 'upload your images':\n",
        "        !mkdir -p /content/data/nerfstudio/custom_data/raw_images\n",
        "        %cd /content/data/nerfstudio/custom_data/raw_images\n",
        "        uploaded = files.upload()\n",
        "        dir = os.getcwd()\n",
        "    else:\n",
        "        %cd /content/data/nerfstudio/custom_data/\n",
        "        uploaded = files.upload()\n",
        "        dir = os.getcwd()\n",
        "    preupload_datasets = [os.path.join(dir, f) for f in uploaded.keys()]\n",
        "    del uploaded\n",
        "    %cd /content/\n",
        "\n",
        "    if scene == 'upload your images':\n",
        "        !ns-process-data images --data /content/data/nerfstudio/custom_data/raw_images --output-dir /content/data/nerfstudio/custom_data/\n",
        "    else:\n",
        "        video_path = preupload_datasets[0]\n",
        "        !ns-process-data video --data $video_path --output-dir /content/data/nerfstudio/custom_data/\n",
        "\n",
        "    scene = \"custom_data\""
      ],
      "metadata": {
        "id": "msVLprI4gRA4",
        "cellView": "form"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "#@title # Set Up Viewer\n",
        "\n",
        "%cd /content\n",
        "\n",
        "# Install localtunnel (https://github.com/localtunnel/localtunnel); ngrok could also be used\n",
        "!npm install -g localtunnel\n",
        "\n",
        "# Tunnel port 7007, the default port for the nerfstudio viewer\n",
        "!rm url.txt 2> /dev/null\n",
        "get_ipython().system_raw('lt --port 7007 >> url.txt 2>&1 &')"
      ],
      "metadata": {
        "id": "Eju-3p2hjcB2",
        "cellView": "form"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "#@title # Start Viewer\n",
        "\n",
        "with open('url.txt') as f:\n",
        "    lines = f.readlines()\n",
        "websocket_url = lines[0].split(\": \")[1].strip().replace(\"https\", \"wss\")\n",
        "# from nerfstudio.utils.io import load_from_json\n",
        "# from pathlib import Path\n",
        "# json_filename = \"nerfstudio/nerfstudio/viewer/app/package.json\"\n",
        "# version = load_from_json(Path(json_filename))[\"version\"]\n",
        "url = f\"https://viewer.nerf.studio/?websocket_url={websocket_url}\"\n",
        "print(url)\n",
        "print(\"You may need to click Refresh Page after you start training!\")\n",
        "from IPython import display\n",
        "display.IFrame(src=url, height=800, width=\"100%\")"
      ],
      "metadata": {
        "id": "VoKDxqEcjmfC",
        "cellView": "form"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "#@title # Start Training { vertical-output: true }\n",
        "\n",
        "%cd /content\n",
        "!ns-train nerfacto --viewer.websocket-port 7007 nerfstudio-data --data data/nerfstudio/$scene --downscale-factor 4"
      ],
      "metadata": {
        "id": "m_N8_cLfjoXD",
        "cellView": "form"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "#@title # Render Video { vertical-output: true }\n",
        "#@markdown <h3>Export the camera path from within the viewer, then run this cell.</h3>\n",
        "#@markdown <h5>The rendered video will be saved to renders/output.mp4.</h5>\n",
        "\n",
        "base_dir = \"/content/outputs/data-nerfstudio-\" + scene + \"/nerfacto/\"\n",
        "training_run_dir = base_dir + os.listdir(base_dir)[0]\n",
        "\n",
        "from IPython.core.display import display, HTML\n",
        "display(HTML('<h3>Upload the camera path JSON.</h3>'))\n",
        "%cd $training_run_dir\n",
        "uploaded = files.upload()\n",
        "uploaded_camera_path_filename = list(uploaded.keys())[0]\n",
        "\n",
        "config_filename = training_run_dir + \"/config.yml\"\n",
        "camera_path_filename = training_run_dir + \"/\" + uploaded_camera_path_filename\n",
        "# Escape spaces and parentheses so the shell command below receives the path intact\n",
        "camera_path_filename = camera_path_filename.replace(\" \", \"\\\\ \").replace(\"(\", \"\\\\(\").replace(\")\", \"\\\\)\")\n",
        "\n",
        "%cd /content/\n",
        "!ns-render --load-config $config_filename --traj filename --camera-path-filename $camera_path_filename --output-path renders/output.mp4"
      ],
      "metadata": {
        "id": "WGt8ukG6Htg3",
        "cellView": "form"
      },
      "execution_count": null,
      "outputs": []
    }
  ]
}