In the name of God
This gist contains steps to set up Ubuntu 22.04 for deep learning.
""" | |
stable diffusion dreaming | |
creates hypnotic moving videos by smoothly walking randomly through the sample space | |
example way to run this script: | |
$ python stablediffusionwalk.py --prompt "blueberry spaghetti" --name blueberry | |
to stitch together the images, e.g.: | |
$ ffmpeg -r 10 -f image2 -s 512x512 -i blueberry/frame%06d.jpg -vcodec libx264 -crf 10 -pix_fmt yuv420p blueberry.mp4 |
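The "smooth walk" here means interpolating between random points in the model's latent space and rendering one frame per step. A minimal sketch of that idea, assuming a diffusers StableDiffusionPipeline and spherical interpolation (slerp) between two random latents; the pipeline call is shown only as a commented-out placeholder and is not the script's actual API:

import numpy as np
import torch

def slerp(t, v0, v1, eps=1e-7):
    # spherical interpolation between two latent tensors
    v0n, v1n = v0 / v0.norm(), v1 / v1.norm()
    dot = torch.clamp((v0n * v1n).sum(), -1 + eps, 1 - eps)
    theta = torch.acos(dot)
    return (torch.sin((1 - t) * theta) * v0 + torch.sin(t * theta) * v1) / torch.sin(theta)

# pick two random latents and render frames along the interpolation path
latent_a = torch.randn(1, 4, 64, 64)   # latent shape for 512x512 Stable Diffusion
latent_b = torch.randn(1, 4, 64, 64)
for i, t in enumerate(np.linspace(0.0, 1.0, 60)):
    latent = slerp(float(t), latent_a, latent_b)
    # image = pipe(prompt, latents=latent).images[0]   # hypothetical pipeline call
    # image.save(f"frame{i:06d}.jpg")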
As configured in my dotfiles.

start new:
tmux

start new with session name:
tmux new -s myname
import torch
import torch.distributed as dist
import torch.nn as nn
import torch.multiprocessing as mp
from torch.nn.parallel import DistributedDataParallel as DDP
from fairscale.nn.data_parallel import ShardedDataParallel as ShardedDDP
from fairscale.optim.oss import OSS
from fairscale.nn.data_parallel import FullyShardedDataParallel as FSDP
import os
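These imports suggest a comparison of plain DDP with FairScale's sharded and fully-sharded wrappers. A minimal sketch of how a single-node, multi-process run can be launched with plain DDP; the model, port, and hyperparameters are placeholders, not values from the original script:

def run(rank, world_size):
    # each process binds to one GPU and joins the process group
    os.environ["MASTER_ADDR"] = "localhost"
    os.environ["MASTER_PORT"] = "29500"
    dist.init_process_group("nccl", rank=rank, world_size=world_size)
    torch.cuda.set_device(rank)

    model = nn.Linear(1024, 1024).cuda(rank)          # placeholder model
    ddp_model = DDP(model, device_ids=[rank])
    optimizer = torch.optim.SGD(ddp_model.parameters(), lr=1e-3)

    for _ in range(10):                               # toy training loop
        inputs = torch.randn(32, 1024, device=rank)
        loss = ddp_model(inputs).sum()
        optimizer.zero_grad()
        loss.backward()                               # gradients are all-reduced here
        optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    world_size = torch.cuda.device_count()
    mp.spawn(run, args=(world_size,), nprocs=world_size, join=True)

Swapping DDP for ShardedDDP (with an OSS-wrapped optimizer) or FSDP follows the same launch pattern; only the model/optimizer wrapping changes.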
""" | |
author: Timothy C. Arlen | |
date: 28 Feb 2018 | |
Calculate Mean Average Precision (mAP) for a set of bounding boxes corresponding to specific | |
image Ids. Usage: | |
> python calculate_mean_ap.py | |
Will display a plot of precision vs recall curves at 10 distinct IoU thresholds as well as output |
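At the core of any mAP computation is the intersection-over-union (IoU) test that decides whether a predicted box matches a ground-truth box at a given threshold. A minimal sketch of that step, assuming boxes in [x1, y1, x2, y2] pixel coordinates; this is illustrative, not the original script's exact implementation:

def calc_iou(box_a, box_b):
    # boxes are [x1, y1, x2, y2]; returns intersection-over-union in [0, 1]
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# a prediction counts as a true positive at threshold t if IoU >= t, e.g. t from 0.50 to 0.95
print(calc_iou([10, 10, 50, 50], [30, 30, 70, 70]))   # ~0.14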
Yoav Goldberg, April 2023.
With the release of the ChatGPT model and follow-up large language models (LLMs), there was a lot of discussion of the importance of "RLHF training", that is, "reinforcement learning from human feedback". I was puzzled for a while as to why RL (Reinforcement Learning) is better than learning from demonstrations (a.k.a. supervised learning) for training language models. Shouldn't learning from demonstrations (or, in language-model terminology, "instruction fine-tuning", learning to imitate human-written answers) be sufficient? I came up with a theoretical argument that was somewhat convincing. But I came to realize there is an additional argument which not only supports the case for RL training, but also requires it, in particular for models like ChatGPT. This additional argument is spelled out in (the first half of) a talk by John Schulman from OpenAI. This post pretty much
import os
import sys
import traceback
from functools import wraps
from multiprocessing import Process, Queue

def processify(func):
    '''Decorator to run a function as a process.

    Be sure that every argument and the return value is picklable.
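The decorator body is cut off here. A sketch of how the pattern is commonly completed, running the wrapped call in a child process and shipping the result (or a formatted traceback) back through a Queue; it reuses the imports above and is an illustration of the technique, not necessarily the original author's exact code:

def processify_sketch(func):
    def process_func(q, *args, **kwargs):
        # runs inside the child process; report result or traceback via the queue
        try:
            q.put((None, func(*args, **kwargs)))
        except Exception:
            q.put((traceback.format_exc(), None))

    @wraps(func)
    def wrapper(*args, **kwargs):
        q = Queue()
        p = Process(target=process_func, args=(q,) + args, kwargs=kwargs)
        p.start()
        error, result = q.get()   # read before join to avoid blocking on large results
        p.join()
        if error:
            raise RuntimeError(error)
        return result

    return wrapper

Note that a nested target function like this relies on the fork start method; under spawn (e.g. Windows) the target must be defined at module level so it can be pickled.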
n02119789 1 kit_fox
n02100735 2 English_setter
n02110185 3 Siberian_husky
n02096294 4 Australian_terrier
n02102040 5 English_springer
n02066245 6 grey_whale
n02509815 7 lesser_panda
n02124075 8 Egyptian_cat
n02417914 9 ibex
n02123394 10 Persian_cat
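This is the usual three-column ImageNet mapping: WordNet synset ID, 1-based class index, human-readable label. A small sketch of loading it into lookup dicts, assuming the mapping is saved as a whitespace-separated text file (the filename map_clsloc.txt is an assumption):

synset_to_label = {}
index_to_label = {}
with open("map_clsloc.txt") as f:          # assumed filename
    for line in f:
        synset, idx, label = line.split()
        synset_to_label[synset] = label
        index_to_label[int(idx)] = label

print(index_to_label[7])   # 'lesser_panda'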
# if input image is in range 0..1, please first multiply img by 255
# assume image is ndarray of shape [height, width, channels] where channels can be 1, 3 or 4
def imshow(img):
    import cv2
    import IPython.display
    _, ret = cv2.imencode('.jpg', img)
    i = IPython.display.Image(data=ret)
    IPython.display.display(i)
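A hypothetical usage inside a Jupyter notebook cell, assuming a local file named example.jpg:

import cv2
img = cv2.imread("example.jpg")   # BGR uint8 array, shape [H, W, 3]
imshow(img)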