Bootstrap knowledge of LLMs as quickly as possible, with a bias/focus toward GPT.
Avoid being a link dump; aim to provide only valuable, well-tuned information.
Cover neural-network fundamentals before starting with transformers.
import json
import subprocess

import pendulum  # pip install pendulum

# Note: this is a hacky, quick solution prototyped in less than an hour; the
# ideal solution would use boto3 and other auth best practices.
# Ensure your AWS CLI is configured with keys and a default region set up;
# this script works in the default region.
instance_cmd = """
aws ec2 describe-instances --output json
"""
# Shell out to the CLI and parse the JSON it returns.
instances = json.loads(subprocess.check_output(instance_cmd, shell=True))
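The boto3 route the note above alludes to is barely longer; a minimal sketch, assuming boto3 is installed and picks up the same default-region credentials:

import boto3

# Same query without shelling out; boto3 uses the default credential
# chain and region, like the CLI above.
ec2 = boto3.client("ec2")
for reservation in ec2.describe_instances()["Reservations"]:
    for instance in reservation["Instances"]:
        print(instance["InstanceId"], instance["State"]["Name"])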
Recently I found that some clowny gist was the top result for 'google takeout multiple tgz': it used two bash scripts to extract all the tgz files and then merge the results together. Don't do that. Use brace expansion, cat the TGZs, and extract:
$ cat takeout-20201023T123551Z-{001..011}.tgz | tar xzivf -
(The i flag tells tar to ignore the zeroed end-of-archive blocks between the concatenated archives, so it keeps reading through the whole stream.)
You don't even need brace expansion. Globbing will order the files numerically:
$ cat takeout-20201023T123551Z-*.tgz | tar xzivf -
Run everything on the host.
~/.ssh/config:
Host <HostName>
    HostName <HostIP>
    User <username>
$ ssh-keygen
$ ssh-copy-id <HostName>
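To drive the host from Python instead of an interactive shell, paramiko can reuse the same config; a minimal sketch, assuming paramiko >= 2.7 is installed and "myhost" stands in for your Host alias:

import os
import paramiko

# Resolve the alias through the same ~/.ssh/config written above.
config = paramiko.SSHConfig.from_path(os.path.expanduser("~/.ssh/config"))
entry = config.lookup("myhost")  # "myhost" is a placeholder alias

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(entry["hostname"], username=entry.get("user"))
stdin, stdout, stderr = client.exec_command("uname -a")
print(stdout.read().decode())
client.close()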
# ensure you have activated the environment
python -m ipykernel install --user --name my_env --display-name "Python (my_env)"
# this will create a Jupyter kernel called my_env that will use the python binary and the packages from the current project
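To confirm the kernel really points at the environment's interpreter, run this in a notebook opened with the new kernel (the path in the comment is illustrative):

import sys

# Should print the python binary inside my_env, e.g.
# ~/.virtualenvs/my_env/bin/python, not the system interpreter.
print(sys.executable)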
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

iris = load_iris()

# Model (can also use a single decision tree)
model = RandomForestClassifier(n_estimators=10)

# Train
model.fit(iris.data, iris.target)

# Extract single tree
estimator = model.estimators_[0]
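The usual reason to pull out one tree is to visualize it; a minimal sketch using scikit-learn's export_graphviz (the output filename is arbitrary):

from sklearn.tree import export_graphviz

# Write the extracted tree to a Graphviz .dot file; render it with e.g.
# `dot -Tpng tree.dot -o tree.png` if Graphviz is installed.
export_graphviz(estimator, out_file="tree.dot",
                feature_names=iris.feature_names,
                class_names=iris.target_names,
                rounded=True, filled=True)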
# docker build --pull -t tf/tensorflow-serving --label 1.6 -f Dockerfile .
# export TF_SERVING_PORT=9000
# export TF_SERVING_MODEL_PATH=/tf_models/mymodel
# export CONTAINER_NAME=tf_serving_1_6
# CUDA_VISIBLE_DEVICES=0 docker run --runtime=nvidia -it -p $TF_SERVING_PORT:$TF_SERVING_PORT -v $TF_SERVING_MODEL_PATH:/root/tf_model --name $CONTAINER_NAME tf/tensorflow-serving /usr/local/bin/tensorflow_model_server --port=$TF_SERVING_PORT --enable_batching=true --model_base_path=/root/tf_model/
# docker start -ai $CONTAINER_NAME
FROM nvidia/cuda:9.0-cudnn7-devel-ubuntu16.04
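Once the server is up, you can query it over gRPC from Python. A minimal sketch, assuming the grpcio and tensorflow-serving-api packages are installed (stub module names vary across tensorflow-serving-api versions; this uses the newer prediction_service_pb2_grpc); the model name, signature name, and input tensor name below are placeholders to adapt to your exported model:

import grpc
import tensorflow as tf
from tensorflow_serving.apis import predict_pb2, prediction_service_pb2_grpc

# Connect to the gRPC port exposed above ($TF_SERVING_PORT).
channel = grpc.insecure_channel("localhost:9000")
stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)

request = predict_pb2.PredictRequest()
request.model_spec.name = "mymodel"                    # placeholder model name
request.model_spec.signature_name = "serving_default"  # placeholder signature
# "inputs" is a placeholder tensor name; match your exported signature.
request.inputs["inputs"].CopyFrom(
    tf.make_tensor_proto([[1.0, 2.0, 3.0, 4.0]], dtype=tf.float32))

response = stub.Predict(request, timeout=10.0)
print(response)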