Zekun ZHANG (ZekunZh)
ZekunZh / daemon.json
Created May 26, 2023 09:56
Docker daemon.json with support for custom storage location & Nvidia GPU
{
    "graph": "/data/docker",
    "storage-driver": "overlay2",
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}

(In newer Docker releases the deprecated "graph" key is replaced by "data-root".)
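Before restarting the daemon with a hand-edited config, it can be worth round-tripping the file through the stdlib json module to catch syntax errors early. A minimal sketch; the config text mirrors the snippet above:

```python
import json

# daemon.json content mirroring the snippet above
config_text = """
{
    "graph": "/data/docker",
    "storage-driver": "overlay2",
    "runtimes": {
        "nvidia": {"path": "nvidia-container-runtime", "runtimeArgs": []}
    }
}
"""

config = json.loads(config_text)  # raises json.JSONDecodeError if malformed
print(config["runtimes"]["nvidia"]["path"])  # -> nvidia-container-runtime
```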
import itertools


def iter_batches(iterable, batch_size):
    """Iterates over the given iterable in batches.

    Args:
        iterable: an iterable
        batch_size: the desired batch size, or None to return the contents in
            a single batch

    Returns:
        a generator that emits tuples of elements of the requested batch size
    """
    it = iter(iterable)
    while chunk := tuple(itertools.islice(it, batch_size)):
        yield chunk
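A self-contained sketch of the batching pattern described above, built on itertools.islice (the helper is repeated here so the snippet runs on its own):

```python
import itertools


def iter_batches(iterable, batch_size):
    # Emit tuples of up to batch_size elements; batch_size=None yields one batch
    it = iter(iterable)
    while chunk := tuple(itertools.islice(it, batch_size)):
        yield chunk


print(list(iter_batches(range(5), 2)))  # -> [(0, 1), (2, 3), (4,)]
print(list(iter_batches("abc", None)))  # -> [('a', 'b', 'c')]
```

Note that the final tuple may be shorter than batch_size when the iterable's length is not a multiple of it.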
import os
import platform
import subprocess
import warnings
from distutils import spawn


def get_gpu_model_name() -> str:
    """Returns the GPU model name reported by nvidia-smi, or "" on failure."""
    # On Windows, nvidia-smi may not be on PATH; its default install dir is a
    # common fallback (path is an assumption for this sketch)
    path = r"C:\Program Files\NVIDIA Corporation\NVSMI" if platform.system() == "Windows" else None
    nvidia_smi = spawn.find_executable("nvidia-smi", path)
    if nvidia_smi is None:
        warnings.warn("nvidia-smi executable not found")
        return ""
    try:
        return subprocess.check_output(
            [nvidia_smi, "--query-gpu=name", "--format=csv,noheader"]
        ).decode().strip()
    except subprocess.CalledProcessError:
        warnings.warn("nvidia-smi query failed")
        return ""

If you only want to change the width for the current notebook, add the following code to a cell:

from IPython.display import display, HTML
display(HTML("<style>.container { width:100% !important; }</style>"))


Export a notebook to HTML without the input code cells:

jupyter nbconvert results.ipynb --no-input --to html


Remove all Docker images whose repository name contains stuff_:

docker rmi $(docker images | grep stuff_ | tr -s ' ' | cut -d ' ' -f 3)
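The pipeline keeps the rows matching stuff_, squeezes runs of spaces, and takes the third column (the IMAGE ID). The same filtering in Python, over an illustrative docker images listing (the image names and IDs below are made up):

```python
# Illustrative `docker images` output; real names and IDs will differ
listing = """\
REPOSITORY     TAG      IMAGE ID       CREATED        SIZE
stuff_api      latest   1a2b3c4d5e6f   2 weeks ago    1.2GB
ubuntu         22.04    aa11bb22cc33   3 weeks ago    77.9MB
stuff_worker   latest   99887766aabb   4 weeks ago    850MB
"""

image_ids = [
    line.split()[2]              # third whitespace-delimited column = IMAGE ID
    for line in listing.splitlines()
    if "stuff_" in line          # analogous to `grep stuff_`
]
print(image_ids)  # -> ['1a2b3c4d5e6f', '99887766aabb']
```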


Switch the Jetson TX2 to its maximum-performance power mode (Max-N):

sudo nvpmodel -m 0

Mode   Mode Name   Denver 2   Frequency   ARM A57   Frequency   GPU Frequency
0      Max-N       2          2.0 GHz     4         2.0 GHz     1.30 GHz

Quick smoke test that PyTorch can allocate a tensor on the GPU:

python3 -c "import torch; print(torch.tensor([0.], device='cuda:0'))"

Docker Image Installation on Nvidia Jetson TX2


Requirements

Hardware

  1. Nvidia Jetson TX2 card with 8 GB memory
Unmount the storage device, then scan it (badblocks -n runs a non-destructive read-write test and requires the device to be unmounted):

sudo umount /dev/mmcblk0
sudo badblocks -n -v /dev/mmcblk0

A flash-based medium should normally never surface errors to the OS/application. If badblocks does report errors, it means that either:

  • the medium is worn out to the point that wear-leveling no longer has enough spare room, or
  • (part of) the flash memory itself is faulty.