
@jboursiquot
Last active July 6, 2024 07:10
GopherCon Europe 2024 - Go For Experienced Programmers pre-workshop preparations

Hello fellow Gophers!

In preparation for the workshop, you'll need to download and install some tools ahead of time to save bandwidth and ensure you have everything you need to follow along during our time together.

Docker

You'll need Docker Desktop installed: https://www.docker.com/products/docker-desktop/

Ollama

Our LLM work will rely on running models locally using Ollama. You can either install it directly on your machine or pull a Docker image and run it as a container.

$ docker pull ollama/ollama:latest
$ docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

Ollama itself is a runner for models, which means we'll need to pull the actual LLM we'll rely on for our work. You can do so by having the ollama container pull the model itself (all-minilm is less than 50MB).

$ docker exec -it ollama ollama pull all-minilm

pgvector

We'll rely on a PostgreSQL version that's been extended to support the pgvector extension out of the box. If you have PostgreSQL running locally already, you can look into installing the extension yourself. Otherwise, we'll use the Docker image below.

$ docker pull pgvector/pgvector:pg16
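During the workshop we'll be storing embeddings in pgvector columns. pgvector accepts vectors in a simple text format like [0.1,0.2,0.3], which you can pass as a parameter cast to ::vector in an INSERT. The helper below (toVectorLiteral is a name made up for illustration) sketches how an embedding slice would be rendered into that format.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// toVectorLiteral renders a float slice in pgvector's text format, e.g. "[1,2,3]",
// suitable as a parameter in a query like:
//   INSERT INTO items (embedding) VALUES ($1::vector)
func toVectorLiteral(v []float64) string {
	parts := make([]string, len(v))
	for i, f := range v {
		parts[i] = strconv.FormatFloat(f, 'g', -1, 64)
	}
	return "[" + strings.Join(parts, ",") + "]"
}

func main() {
	fmt.Println(toVectorLiteral([]float64{0.1, -0.25, 3})) // prints [0.1,-0.25,3]
}
```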

Get ready!

That's it! If you have any questions, reach out on X: @jboursiquot.

@jboursiquot (Author)

Thanks for the feedback @prasanthu.

The appropriate instruction is docker exec -it ollama ollama pull all-minilm, using pull, not run. Updated.
