- Color banding
- Purple fringing
- Lens flare
- Ringing artifacts
- Posterization (see the sketch after this list)
- Aliasing/moiré pattern
- Chromatic aberration
- Rolling shutter
- Vignetting
- Noise
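Several of these artifacts are straightforward to reproduce synthetically for testing. Below is a minimal sketch (my addition; `posterize` is a hypothetical helper, not from any library named here) that simulates posterization, and with it the color banding visible on smooth gradients, by quantizing pixel values to a few levels per channel:

```python
# Sketch: simulate posterization / color banding by reducing intensity levels.
import numpy as np

def posterize(image: np.ndarray, levels: int = 4) -> np.ndarray:
    """Quantize a uint8 image to `levels` values per channel."""
    step = 256 // levels                # width of each quantization bin
    quantized = (image // step) * step  # snap each pixel to its bin floor
    return quantized.astype(np.uint8)

# Banding becomes visible on a smooth horizontal gradient.
gradient = np.tile(np.arange(256, dtype=np.uint8), (64, 1))
banded = posterize(gradient, levels=8)
```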
import argparse

import tensorflow as tf
# Note: this module exists in TensorFlow 1.x only; it was removed in 2.x.
from tensorflow.examples.tutorials.mnist import input_data


def train():
    # Load MNIST into /tmp/data with one-hot labels; fake_data=False uses
    # the real dataset rather than synthetic placeholder batches.
    mnist = input_data.read_data_sets("/tmp/data",
                                      one_hot=True,
                                      fake_data=False)
    sess = tf.InteractiveSession()
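Since that tutorials module is gone in TensorFlow 2.x, an equivalent data-loading sketch with the built-in Keras dataset API (my substitution, not part of the original script) would look like:

```python
# Sketch: load MNIST via tf.keras instead of the removed tutorials module.
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0  # flatten, scale
y_train = tf.one_hot(y_train, depth=10)                       # one-hot labels
```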
"""Benchmark tensorflow distributed by assigning a tensor between two workers. | |
Usage: | |
Start worker 1: | |
python rdma_bench.py --workers="hostname1:port,hostname2:port" --protocol=grpc+verbs --task 0 | |
Start worker 2: | |
python rdma_bench.py --workers="hostname1:port,hostname2:port" --protocol=grpc+verbs --task 1 | |
Run the tests: |
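The flags above map onto TensorFlow 1.x's distributed runtime. A minimal sketch of how such a script might bring up the two workers (the flag parsing and variable names here are my assumption, not the benchmark's actual code):

```python
# Sketch: start a two-worker TF 1.x cluster using the verbs (RDMA) transport.
import argparse
import tensorflow as tf

parser = argparse.ArgumentParser()
parser.add_argument("--workers", type=str)   # "hostname1:port,hostname2:port"
parser.add_argument("--protocol", type=str, default="grpc+verbs")
parser.add_argument("--task", type=int, default=0)
args = parser.parse_args()

cluster = tf.train.ClusterSpec({"worker": args.workers.split(",")})
server = tf.train.Server(cluster,
                         job_name="worker",
                         task_index=args.task,
                         protocol=args.protocol)  # grpc+verbs needs a verbs-enabled build
server.join()  # block and serve; the test driver connects to these workers
```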
/*
 * g++ -std=c++11 -pthread memtest.cc
 */
#include <cstring>
#include <chrono>
#include <condition_variable>
#include <functional>
#include <future>
set nu                                " show line numbers
set ruler                             " show cursor position in the status line
set tabstop=2 shiftwidth=2 expandtab  " 2-space indentation, spaces not tabs
set colorcolumn=80                    " highlight column 80

" pathogen
execute pathogen#infect()
syntax on
filetype plugin indent on
| model_name | device_name | soc | abi | runtime | init (ms) | warmup (ms) | run_avg (ms) | tuned |
|---|---|---|---|---|---|---|---|---|
| mobilenet_v2 | polaris | sdm845 | armeabi-v7a | GPU | 42.868 | 11.087 | 9.908 | True |
| mobilenet_v2 | MI MAX | msm8952 | armeabi-v7a | GPU | 122.791 | 43.038 | 39.875 | True |
| mobilenet_v2 | BKL-AL00 | kirin970 | armeabi-v7a | GPU | 767.932 | 1226.373 | 47.597 | True |
| mobilenet_v2 | polaris | sdm845 | arm64-v8a | GPU | 42.3 | 10.737 | 10.004 | True |
| mobilenet_v2 | MI MAX | msm8952 | arm64-v8a | GPU | 129.123 | 42.584 | 39.552 | True |
| mobilenet_v2 | BKL-AL00 | kirin970 | arm64-v8a | GPU | 753.43 | 1170.291 | 48.016 | True |
| mobilenet_v2 | polaris | sdm845 | armeabi-v7a | CPU | 16.035 | 69.761 | 41.627 | False |
| mobilenet_v2 | MI MAX | msm8952 | armeabi-v7a | CPU | | | | |
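Reading the run_avg column (assuming, as the magnitudes suggest, times in milliseconds): on the polaris/sdm845 device the GPU path averages 9.908 against 41.627 on the CPU, roughly a 4x speedup. A quick check:

```python
# Values taken from the run_avg column of the table above (ms).
gpu_run_avg = 9.908   # polaris / sdm845 / armeabi-v7a / GPU
cpu_run_avg = 41.627  # polaris / sdm845 / armeabi-v7a / CPU
print(f"GPU speedup over CPU on sdm845: {cpu_run_avg / gpu_run_avg:.1f}x")  # ~4.2x
```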
name: "MobileNet-SSD" | |
input: "data" | |
input_shape { | |
dim: 1 | |
dim: 3 | |
dim: 300 | |
dim: 300 | |
} | |
layer { | |
name: "conv0" |
[
  {
    "id": "camera0",
    "parentIds": [],
    "status": "running"
  },
  {
    "id": "lidar0",
    "parentIds": [],
    "status": "running"
  }
]
Yoav Goldberg, April 2023.
With the release of the ChatGPT model and follow-up large language models (LLMs), there was a lot of discussion of the importance of "RLHF training", that is, "reinforcement learning from human feedback". I was puzzled for a while as to why RL (Reinforcement Learning) is better than learning from demonstrations (a.k.a. supervised learning) for training language models. Shouldn't learning from demonstrations (or, in language model terminology, "instruction fine-tuning", learning to imitate human-written answers) be sufficient? I came up with a theoretical argument that was somewhat convincing. But I came to realize there is an additional argument which not only supports the case for RL training, but also requires it, in particular for models like ChatGPT. This additional argument is spelled out in (the first half of) a talk by John Schulman from OpenAI. This post pretty much
| Title of the paper | Project cited | Published media | Download URL |
|---|---|---|---|