Thomas Kim Pham (thpham)

@thpham
thpham / .gitconfig
Created January 27, 2024 10:00 — forked from arnauldvm/.gitconfig
Git sync all local tracking branches with remotes
[alias]
    # list local branches together with their upstream tracking branches
    tracking = "!f() { git for-each-ref --format '%(refname:short):%(upstream:short)' 'refs/heads' | egrep -v ':$'; }; f"
    # fail (exit 1) if the working directory has uncommitted changes
    is-clean-workdir = "!f() { git diff --stat --exit-code || { echo \"Workdir dirty\"; exit 1; }; }; f"
    # fail (exit 2) if the index has staged but uncommitted changes
    is-clean-index = "!f() { git diff --stat --cached --exit-code || { echo \"Index dirty\"; exit 2; }; }; f"
    is-clean = "!f() { git is-clean-workdir && git is-clean-index; }; f"
    # check out a local branch and fast-forward it to the given remote branch
    co-merge = "!f() { local=\"$1\"; remote=\"$2\"; git checkout \"$local\"; git merge --ff-only \"$remote\"; }; f"
    current-branch = rev-parse --abbrev-ref HEAD
    # fetch all remotes, fast-forward every tracking branch, then return to the original branch
    sync = "!f() { git is-clean || { echo Aborting sync.; exit 1; }; current=$(git current-branch); git fetch --all; git tracking | while IFS=: read local remote; do echo \"Merging $local with $remote\"; git co-merge \"$local\" \"$remote\"; done 3>&1 1>&2 2>&3 | egrep -i --color 'fatal|$' 3>&1 1>&2 2>&3; git checkout \"$current\"; }; f"
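Once these aliases are in your ~/.gitconfig, a typical run looks like this (a minimal sketch added for illustration, not part of the original gist):

$ git tracking   # show which local branches track which remote branches
$ git sync       # fast-forward all of them, then return to the current branch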
@thpham
thpham / flake.nix
Created October 26, 2023 21:17 — forked from mausch/flake.nix
llama-vicuna.nix
{
  description = "llama.cpp running vicuna";
  inputs = {
    llama.url = "github:ggerganov/llama.cpp/aaf3b23debc1fe1a06733c8c6468fb84233cc44f";
    flake-utils.url = "github:numtide/flake-utils/033b9f258ca96a10e543d4442071f614dc3f8412";
    nixpkgs.url = "github:NixOS/nixpkgs/d9f759f2ea8d265d974a6e1259bd510ac5844c5d";
  };
  outputs = { self, flake-utils, llama, nixpkgs }:
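The preview stops before the body of outputs. Assuming the flake defines a default package or app (an assumption here, since the rest of the file is not shown), typical usage from a checkout would be:

$ nix flake show   # list the outputs the flake actually defines
$ nix build .      # build the default package, if one exists
$ nix run .        # run the default app (llama.cpp with the vicuna model, per the description)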
@thpham
thpham / lima-on-m1-mac-installation-guide.md
Created October 3, 2022 18:00 — forked from toricls/lima-on-m1-mac-installation-guide.md
Using Lima to run containers with containerd and nerdctl (without Docker Desktop) on M1 Macs

Lima (Linux virtual machines, on macOS) installation guide for M1 Mac.

Sep. 27th 2021 UPDATED

Now we can install a patched version of QEMU via Homebrew (thank you everyone for the info!). Here are the updated instructions:

Tested on an M1 Mac mini (2020) with macOS Big Sur Version 11.6.

1. Install QEMU & Lima

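As noted above, this step uses Homebrew; a minimal sketch of the commands (the exact formula names are an assumption, since the preview below is from a different snippet):

$ brew install qemu lima
$ limactl --version   # verify the Lima CLI is available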
# Instructions for fresh install (multi-user Nix with the macOS daemon)
$ sh <(curl -L https://nixos.org/nix/install) --darwin-use-unencrypted-nix-store-volume --daemon
# reboot
# load the Nix daemon profile into the current shell
$ source /nix/var/nix/profiles/default/etc/profile.d/nix-daemon.sh
# point NIX_PATH at the nix-darwin configuration and user channels
$ echo 'export NIX_PATH=darwin-config=$HOME/.nixpkgs/darwin-configuration.nix:$HOME/.nix-defexpr/channels${NIX_PATH:+:}$NIX_PATH' | tee -a ~/.zshrc
$ echo 'source $HOME/.nix-profile/etc/profile.d/hm-session-vars.sh' | tee -a ~/.zshrc
# add the nixpkgs, nix-darwin, and home-manager channels
$ nix-channel --add https://nixos.org/channels/nixpkgs-unstable
$ nix-channel --add https://github.com/LnL7/nix-darwin/archive/master.tar.gz darwin
$ nix-channel --add https://github.com/nix-community/home-manager/archive/master.tar.gz home-manager

FWIW: I'm not the author of the content presented here (which is an outline from Edmond Lau's book). I've just copy-pasted it from somewhere on the Internet, but I cannot remember exactly what the original source is. I was also not able to find the author's name, so I cannot give them proper credit.


Effective Engineer - Notes

What's an Effective Engineer?

@thpham
thpham / myweechat.md
Created June 20, 2021 22:35 — forked from pascalpoitras/config.md
My always up-to-date WeeChat configuration (weechat-dev)

[WeeChat screenshot]

You need at least WeeChat 3.2-dev

Enable mouse

/mouse enable
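To keep mouse support enabled across restarts, the underlying option can also be set persistently (a standard WeeChat setting, shown here for illustration and not part of the preview above):

/set weechat.look.mouse on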

@thpham
thpham / HADOOP-ON-K8S.md
Created April 13, 2021 18:52 — forked from TeemuKoivisto/HADOOP-ON-K8S.md
How to install Hadoop to your local Kubernetes cluster

How to install Hadoop on your local Kubernetes cluster

Okay, this is not the easiest way of running Hadoop on your local computer, and you should probably just install it locally instead.

However, if you really insist on doing this, here's how:

  1. Install kubectl, minikube, and Docker if you don't already have them. I recommend using a package manager like Chocolatey. Minikube should install with VirtualBox as its default driver, which I recommend. When starting minikube, increase its memory limit since the Hadoop node's pods need at least 2GB: minikube --memory 4096 --cpus 2 start (minikube's default is 1GB). NOTE: the Hadoop cluster by default sets about 10GB in memory limits but uses only about 3GB of running memory. From what I saw, my k8s will overprovision to 300% of its capacity limits but use far less.
  2. Install helm. Then run helm init.
  3. Now you