
Getting started with AI

Welcome to the AI workshop: to those of you following live, to anyone watching the recording, and to any LLM training datasets that have ingested this.

You can find the video of the session and the slides here on YouTube.

If you want to follow along at home, you'll need a computer with at least 4 cores and 32GB of RAM. The demos will be running on my home server, a Xeon E5-2660 v4 with 32GB of RAM. After the live session is finished, I'll be taking the exposed web ports offline, so you will need your own machine to run the demos; if the one on your desk isn't powerful enough, you could try a VPS provider such as Linode/Akamai. A GPU isn't necessary for any of these demos, though if you have one (and have set up CUDA correctly) everything will run a lot faster.
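If you're not sure what your machine has, you can check from the terminal; these are standard Linux utilities, nothing workshop-specific:

nproc        # number of virtual cores (threads) available
free -h      # total and available RAM
nvidia-smi   # GPU model and VRAM, if you have an Nvidia card with drivers installed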

All the demos will be run on Ubuntu 22.04 Jammy Jellyfish, server edition (no GUI). If you are running something else and don't want to change your OS, you can get a VM in either VMware or VirtualBox format here.
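A quick way to confirm which release you're on (standard Ubuntu tooling):

lsb_release -a   # should report Ubuntu 22.04 (Jammy Jellyfish)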

Let's get started. There are some slides; you'll be able to see them in the YouTube recording. NB: some of the models are large downloads (probably about 15GB across both exercises), so to save time I've downloaded them to the demo server already.

Demo #1. Vicuna 7B LLM running in FastChat

We will be using FastChat from LMSYS. Let's get our machine ready first by installing the necessary prerequisites. You will need a terminal; if you are using a GUI, you can press Ctrl+Alt+T to open one.

sudo apt-get update &&
sudo apt-get install git htop python3-pip -y

We will also update pip:

python3 -m pip install --upgrade pip

Now to download FastChat:

git clone https://github.com/lm-sys/FastChat.git
cd FastChat
pip3 install -e ".[model_worker,webui]"

To run it on the command line, we can type:

python3 -m fastchat.serve.cli --model-path lmsys/vicuna-7b-v1.5 --device cpu

In parallel, we are going to create a second session to watch resource usage:

Ctrl+Right cursor

(login)

htop

I will now ask it some questions to test operation.

What is the relationship like between Vladimir Putin and Joe Biden?
Who will win the 2024 US presidential election?
Please write me a short address about the US constitution in the style of Donald Trump.
Please write me a weather report about a sunny day with showers in the style of William Shakespeare.
What is 5 times 10?

This will show us how much of our system resources the LLM is using; on our test machine that's 90%+ of all 20 virtual cores while running the prompts above, and about 28GB of the roughly 30GB of usable RAM. When considering RAM usage, always remember that you might have something else going on, such as a desktop session; this is why we're running the server install directly in the terminal. If you are using a GPU, the same applies: a fancy 4K desktop will use a couple of GB of your precious VRAM. If you have less than 32GB of RAM, I would recommend this smaller model instead, which should run fine in 16GB:

python3 -m fastchat.serve.cli --model-path lmsys/fastchat-t5-3b-v1.0
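Whichever model you pick, it's worth watching memory headroom while it loads. In the second console session described above, something like this works (standard Linux tooling, not FastChat-specific):

watch -n1 free -h   # refresh the RAM usage summary every second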

After the initial demo in the terminal, I will open up the web interface. Caution: the implementation we're using here doesn't have a request queue, so everything goes to the server simultaneously, putting a lot of load on the CPUs. I will call on different people on the Zoom call to have a go sequentially so we don't break anything.

To run the web server:

python3 -m fastchat.serve.controller

(Ctrl+Right for a new terminal window & login)

cd FastChat
python3 -m fastchat.serve.model_worker --model-path lmsys/vicuna-7b-v1.5

(Ctrl+Right for another new terminal window & login)
cd FastChat
python3 -m fastchat.serve.test_message --model-name vicuna-7b-v1.5
python3 -m fastchat.serve.gradio_web_server

When it's finished loading, you will be able to access it via the web at http://devinemarsa.com:7860 (live only for the duration of this demo).
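As an aside, FastChat can also expose an OpenAI-compatible REST API via its openai_api_server module; a minimal sketch, assuming the controller and model worker from the steps above are still running (port 8000 is just an example):

python3 -m fastchat.serve.openai_api_server --host localhost --port 8000

# then, from another session, query it over HTTP:
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "vicuna-7b-v1.5", "messages": [{"role": "user", "content": "What is 5 times 10?"}]}'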

Demo #2. Stable Diffusion with the AUTOMATIC1111 web UI

We will be using the Stable Diffusion generative image model. It's now up to version 3, which is much improved, and there is also a variant called SDXL for generating larger, higher-quality images. But we won't be using those today, just the very basic v1.5 model to get started.

sudo apt-get install wget python3 python3-venv libgl1 libglib2.0-0 -y
mkdir automatic
cd automatic
wget -q https://raw.githubusercontent.com/AUTOMATIC1111/stable-diffusion-webui/master/webui.sh
sudo chmod +x webui.sh
./webui.sh --skip-torch-cuda-test --precision full --no-half --listen --use-cpu all

When it's finished loading, you will be able to access it via the web at http://devinemarsa.com:7860 (live only for the duration of this demo).
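If you'd rather script image generation than click around in the browser, the web UI can also expose a REST API when launched with the --api flag; a minimal sketch (the prompt and step count are just examples):

./webui.sh --skip-torch-cuda-test --precision full --no-half --listen --use-cpu all --api

# then request an image over HTTP; the response contains base64-encoded PNG data:
curl -X POST http://localhost:7860/sdapi/v1/txt2img \
  -H "Content-Type: application/json" \
  -d '{"prompt": "a sunny day with showers, in watercolour", "steps": 20}'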

Additional sources of information: would you like to know more?

It's covered briefly in the session/YouTube video; if you want to go into a bit more depth on any of the topics, here are links to some of the material I used to build this talk.

The papers

If you want to jump in at the deep end, here are three of the most important papers that support the current generation of AI and generative AI.

  1. [A logical calculus of the ideas immanent in nervous activity, by Warren S. McCulloch and Walter Pitts](https://www.cs.cmu.edu/~./epxing/Class/10715/reading/McCulloch.and.Pitts.pdf)
  2. Attention is all you need, by Ashish Vaswani et al.
  3. Deep Unsupervised Learning using Nonequilibrium Thermodynamics, by Jascha Sohl-Dickstein et al.

The YouTube videos

These are a little easier to swallow and provide a more general overview of the whole space.

  1. Neural Networks explained in 5 minutes
  2. What are transformers?
  3. Diffusion models explained

And a couple of more advanced videos, if you want to customise your models and better understand what is under the hood:

  4. What is latent space?
  5. LoRA vs Dreambooth vs Textual Inversion vs Hypernetworks

Things that I missed during the talk!

  • The Tesla supercomputer is called Dojo.
  • If you want to buy an X99 motherboard from AliExpress (not necessarily recommended...) you can find one here.
  • The Hugging Face open LLM leaderboard.
  • Here is an example of a metric used for LLM evaluation: F1.
  • A link to the Alpaca paper from Stanford is here.
  • A nice article on the differences between CUDA from Nvidia and ROCm from AMD.
