Goals: add links that clearly explain how things work. No hype and, where possible, no vendor content. Practical first-hand accounts of running models in production are eagerly sought.
```shell
docker run -d \
    --name=crashplan-pro \
    -h $HOSTNAME \
    -e USER_ID=0 \
    -e GROUP_ID=0 \
    -e TZ="America/Los_Angeles" \
    -p 5800:5800 \
    -p 5900:5900 \
    -v /share/CACHEDEV1_DATA/Container/config/crashplanpro:/config:rw \
    -v /share/CACHEDEV1_DATA:/storage:rw
```
```python
# git clone from https://github.com/tkarras/progressive_growing_of_gans
# download the snapshot from their Google Drive
# use the following code in the same directory to generate random faces
import os
import sys
import time
import glob
import shutil
import operator
import theano
```
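The snippet stops at the imports; the actual generation step depends on the snapshot's pickled API, but the latent-sampling and image-conversion half can be sketched independently. Here `generate` is a hypothetical callable wrapping the loaded network, not a function from the repo, and `latent_size=512` is the size used by the released CelebA-HQ snapshot:

```python
import numpy as np

def random_faces(generate, num_faces=8, latent_size=512, seed=42):
    # sample Gaussian latents; 512 matches the CelebA-HQ snapshot
    rng = np.random.RandomState(seed)
    latents = rng.randn(num_faces, latent_size).astype(np.float32)
    # `generate` is assumed to return (N, C, H, W) images in [-1, 1]
    images = generate(latents)
    # convert to uint8 HWC arrays suitable for saving to PNG
    images = np.clip((images + 1.0) * 127.5, 0, 255).astype(np.uint8)
    return images.transpose(0, 2, 3, 1)
```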
https://gist.github.com/victor-shepardson/5b3d3087dc2b4817b9bffdb8e87a57c4
I'm using Ubuntu 16.04 with a GTX 1060
Notes from arXiv:1611.07004v1 [cs.CV] 21 Nov 2016 (pix2pix):

Conditional GANs learn a mapping from an observed image `x` and a random noise vector `z` to an output `y`: `y = f(x, z)`. The generator `G` is trained to produce outputs that cannot be distinguished from "real" images by an adversarially trained discriminator, `D`, which is trained to do as well as possible at detecting the generator's "fakes". `D` learns to classify between real and synthesized pairs; the generator learns to fool the discriminator.

Code for Keras plays catch blog post
python qlearn.py
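`qlearn.py` trains a small network to play catch with Q-learning; the core Bellman update it relies on can be sketched in tabular form (the function and variable names here are illustrative, not taken from the script):

```python
def q_update(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.9, done=False):
    """One tabular Q-learning step: Q(s,a) += alpha * (target - Q(s,a))."""
    # bootstrap from the best action in the next state, unless the episode ended
    best_next = 0.0 if done else max(Q.get((s_next, a2), 0.0) for a2 in actions)
    old = Q.get((s, a), 0.0)
    Q[(s, a)] = old + alpha * (r + gamma * best_next - old)
    return Q[(s, a)]
```

The deep variant in the blog post replaces the `Q` table with a network that predicts Q-values for all actions from the screen state, but the update target is the same.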
```python
#!/usr/bin/env python
try:
    # OrderedDict is in the stdlib from Python 2.7 onward
    from collections import OrderedDict
except ImportError:
    # on older Pythons, use the backport from PyPI
    from ordereddict import OrderedDict
import yaml
```
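The point of pairing `OrderedDict` with `yaml` is usually to load mappings with their key order preserved, which plain `yaml.load` does not guarantee on older Pythons. A sketch of one common way to do that with PyYAML, registering a mapping constructor on a throwaway loader subclass:

```python
import yaml
try:
    from collections import OrderedDict
except ImportError:
    from ordereddict import OrderedDict

def ordered_load(stream, Loader=yaml.SafeLoader):
    # subclass so we don't mutate the shared SafeLoader
    class OrderedLoader(Loader):
        pass

    def construct_mapping(loader, node):
        loader.flatten_mapping(node)  # resolve merge keys ("<<")
        return OrderedDict(loader.construct_pairs(node))

    OrderedLoader.add_constructor(
        yaml.resolver.BaseResolver.DEFAULT_MAPPING_TAG, construct_mapping)
    return yaml.load(stream, OrderedLoader)
```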
Put the `nb2md` script below in your path and make it executable. Then reference it from a `.gitattributes` file, which can live in your home directory (to use `nb2md` for all projects) or in the root of your project:

```
*.ipynb diff=nb2md
```
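For the `diff=nb2md` attribute to take effect, git also needs a diff driver named `nb2md` registered in its config. A sketch of that wiring, assuming the `nb2md` script takes a notebook path and prints markdown to stdout:

```shell
# register a textconv driver named "nb2md" that converts .ipynb to text for diffs
git config --global diff.nb2md.textconv nb2md

# equivalently, add this to ~/.gitconfig by hand:
# [diff "nb2md"]
#     textconv = nb2md
```

With both pieces in place, `git diff` on a notebook shows a readable markdown diff instead of raw JSON.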
## VGG16 model for Keras

This is the Keras model of the 16-layer network used by the VGG team in the ILSVRC-2014 competition. It has been obtained by directly converting the Caffe model provided by the authors.

Details about the network architecture can be found in the following arXiv paper:

Very Deep Convolutional Networks for Large-Scale Image Recognition
K. Simonyan, A. Zisserman
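As a sanity check on a converted model, the architecture from the paper (configuration "D") pins down the parameter count exactly: thirteen 3×3 conv layers plus three fully connected layers come to 138,357,544 parameters. A small stdlib sketch that reproduces the count:

```python
# VGG16 configuration "D": numbers are conv output channels, "M" is 2x2 max-pool
cfg = [64, 64, "M", 128, 128, "M", 256, 256, 256, "M",
       512, 512, 512, "M", 512, 512, 512, "M"]

def count_params(cfg, in_channels=3, num_classes=1000):
    total, c = 0, in_channels
    for v in cfg:
        if v == "M":
            continue  # pooling layers have no parameters
        total += (3 * 3 * c + 1) * v  # 3x3 conv weights + bias per filter
        c = v
    # three fully connected layers: 7*7*512 -> 4096 -> 4096 -> num_classes
    for n_in, n_out in [(7 * 7 * 512, 4096), (4096, 4096), (4096, num_classes)]:
        total += (n_in + 1) * n_out
    return total

count_params(cfg)  # -> 138357544
```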
""" | |
Minimal character-level Vanilla RNN model. Written by Andrej Karpathy (@karpathy) | |
BSD License | |
""" | |
import numpy as np | |
# data I/O | |
data = open('input.txt', 'r').read() # should be simple plain text file | |
chars = list(set(data)) | |
data_size, vocab_size = len(data), len(chars) |
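The script continues by building char-to-index lookup tables so text can be fed to the RNN as integer indices. Since `set` iteration order is not deterministic across runs, this sketch adds sorting (a departure from the original) to make the mapping reproducible:

```python
def build_vocab(data):
    # sorted(set(...)) gives a stable vocabulary ordering across runs
    chars = sorted(set(data))
    char_to_ix = {ch: i for i, ch in enumerate(chars)}
    ix_to_char = {i: ch for i, ch in enumerate(chars)}
    return char_to_ix, ix_to_char

def encode(text, char_to_ix):
    # represent text as a list of integer indices into the vocabulary
    return [char_to_ix[ch] for ch in text]
```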