Mert Bozkir (mertbozkir)
@alanwill / glue-json2parquet.py
Last active July 17, 2022 07:04
AWS Glue JSON to Parquet transformation script
import sys
from awsglue.transforms import *
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job

## @params: [JOB_NAME, s3_path]
# Resolve both job arguments in a single call; getResolvedOptions
# returns a dict keyed by argument name.
args = getResolvedOptions(sys.argv, ['JOB_NAME', 's3_path'])
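
The excerpt stops at argument parsing. A minimal sketch of how such a Glue script typically completes the JSON-to-Parquet conversion (the context/job setup is the standard Glue boilerplate; the input and output paths below are hypothetical placeholders derived from the s3_path argument):

sc = SparkContext()
glueContext = GlueContext(sc)
job = Job(glueContext)
job.init(args['JOB_NAME'], args)

# Read the raw JSON from S3 into a DynamicFrame
source = glueContext.create_dynamic_frame.from_options(
    connection_type='s3',
    connection_options={'paths': [args['s3_path']]},
    format='json')

# Write it back out as Parquet (output prefix is a placeholder)
glueContext.write_dynamic_frame.from_options(
    frame=source,
    connection_type='s3',
    connection_options={'path': args['s3_path'] + '/parquet/'},
    format='parquet')

job.commit()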
@Qfl3x / week3.md
Last active May 30, 2023 07:31
MLOps-Zoomcamp: Workflow Orchestration - Prefect

NOTE: the commands and UI covered in these notes are now deprecated.

Content:

  • Negative Engineering
  • What is workflow orchestration?
  • Introduction to Prefect 2.0
  • First Prefect flow and Basics

Workflow Orchestration
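
The notes build up to writing a first Prefect flow. As a minimal sketch of what that looks like in Prefect 2.0 (the task and flow names here are illustrative, not from the notes):

from prefect import flow, task

@task
def add(x, y):
    # Tasks are tracked, retried, and logged by Prefect
    return x + y

@flow
def my_first_flow():
    # Calling a task inside a flow registers the run with the orchestrator
    total = add(1, 2)
    print(total)

if __name__ == '__main__':
    my_first_flow()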

@cedrickchee / llama-7b-m1.md
Last active July 13, 2024 04:59
4 Steps in Running LLaMA-7B on an M1 MacBook with `llama.cpp`

4 Steps in Running LLaMA-7B on an M1 MacBook

The usability of large language models

The problem with large language models is that you can't run them locally on your laptop. Thanks to Georgi Gerganov and his llama.cpp project, it is now possible to run Meta's LLaMA on a single computer without a dedicated GPU.

Running LLaMA

There are multiple steps involved in running LLaMA locally on an M1 Mac after downloading the model weights.
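
The steps themselves are not shown in this excerpt. As a related Python-side illustration: once llama.cpp has produced a quantized model file, the llama-cpp-python bindings can run it from Python. A minimal sketch under that assumption (the model path is a hypothetical placeholder):

from llama_cpp import Llama

# Load a quantized model file produced by llama.cpp's conversion steps
# (the path is a placeholder; point it at your own converted weights)
llm = Llama(model_path='./models/7B/ggml-model-q4_0.bin')

# Run a single completion
output = llm('Q: Name the planets in the solar system. A:',
             max_tokens=48, stop=['Q:'])
print(output['choices'][0]['text'])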

@cedrickchee / alpaca-native-langchain-chatbot-tutorial.md
Last active October 20, 2023 06:58
Creating a chatbot using Alpaca native and LangChain

Creating a chatbot using Alpaca native and LangChain

Let's talk to an Alpaca-7B model using LangChain with a conversational chain and a memory window.

Setup and installation

Install the Python packages using pip. Note that, at the time of writing, you need to install Hugging Face Transformers from source (GitHub).

$ pip install git+https://github.com/huggingface/transformers
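
With the packages installed, the conversational chain and memory window the tutorial refers to look roughly like this in LangChain. A minimal sketch, assuming an Alpaca-native checkpoint loadable through a Hugging Face text-generation pipeline (the checkpoint name is an assumption; substitute your own):

from transformers import pipeline
from langchain.llms import HuggingFacePipeline
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferWindowMemory

# Wrap an Alpaca checkpoint as a text-generation pipeline
# ('chavinlo/alpaca-native' is one published checkpoint; substitute yours)
generate = pipeline('text-generation', model='chavinlo/alpaca-native',
                    max_new_tokens=128)
llm = HuggingFacePipeline(pipeline=generate)

# Keep only the last k exchanges in the prompt: the "memory window"
memory = ConversationBufferWindowMemory(k=4)
chat = ConversationChain(llm=llm, memory=memory)

print(chat.predict(input='Hello! Who are you?'))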
@jrknox1977 / multi_ollama_containers.md
Last active February 21, 2024 21:07
Running multiple Ollama containers on a single host.

Multiple Ollama Containers on a single host (with multiple GPUs)

I don't want model RELOADs

  • I have a large machine with 2 GPUs and a considerable amount of RAM.
  • I was trying to use ollama to serve llava and mistral, BUT it would reload the models every time I switched model requests.
  • So this is the solution that appears to be working: multiple containers, each serving a different model, on different ports (see the sketch below).

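Once each container serves its own model on its own port, a client just needs to route each request to the right port. A minimal sketch of that dispatch, assuming the standard Ollama HTTP API and hypothetical port mappings:

import requests

# Map each model to the container that keeps it loaded
# (ports are hypothetical; use whatever you mapped with docker run -p)
ENDPOINTS = {
    'llava': 'http://localhost:11434',
    'mistral': 'http://localhost:11435',
}

def generate(model: str, prompt: str) -> str:
    # Ollama's /api/generate returns one JSON object when stream is false
    resp = requests.post(
        f'{ENDPOINTS[model]}/api/generate',
        json={'model': model, 'prompt': prompt, 'stream': False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()['response']

print(generate('mistral', 'Why run one model per container?'))
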
Ollama model working dir:

  • I have many models already downloaded on my machine, so I mount the host Ollama working dir into the containers.
  • Linux (at least on my Linux machine): /usr/share/ollama/.ollama