@SunMarc
SunMarc / finetune_llama_gptq.py
Last active May 2, 2024 16:41
Finetune a GPTQ model with peft and trl
# coding=utf-8
# Copyright 2023 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
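Below is a minimal sketch of the workflow the title describes (not the gist's own code): load a GPTQ-quantized checkpoint, attach LoRA adapters with peft, and fine-tune with trl's SFTTrainer. The model id and dataset here are placeholders.

from datasets import load_dataset
from peft import LoraConfig, prepare_model_for_kbit_training
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import SFTTrainer

model_id = "TheBloke/Llama-2-7B-GPTQ"  # placeholder GPTQ checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
model = prepare_model_for_kbit_training(model)  # grad checkpointing prep etc.

peft_config = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                         task_type="CAUSAL_LM")
dataset = load_dataset("imdb", split="train")   # stand-in text dataset
trainer = SFTTrainer(model=model, train_dataset=dataset,
                     peft_config=peft_config, dataset_text_field="text")
trainer.train()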
@nmwsharp
nmwsharp / printarr
Last active October 24, 2023 08:06
Pretty print tables summarizing properties of tensor arrays in numpy, pytorch, jax, etc. --- now on pip: `pip install arrgh`
Now on pip! `pip install arrgh` https://github.com/nmwsharp/arrgh
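A quick usage sketch; the single arrgh() entry point is assumed from the README rather than verified here:

import numpy as np
import torch
from arrgh import arrgh  # assumed import path, per the project README

a = np.linspace(0.0, 1.0, 100)
t = torch.randn(5, 3)
arrgh(a, t)  # prints one summary row per array: dtype, shape, value range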
@kinoc
kinoc / j6b_train_hf_ds.py
Last active August 9, 2023 03:05
So now you want to finetune that GPT-J-6B on a 3090/TITAN GPU ... okay, using HF and DeepSpeed too
# So now you want to finetune that GPT-J-6B on a 3090/TITAN GPU ... okay
# More exploratory coding. It uses the Hugging Face model port and DeepSpeed, and reads all text/md files from a target directory.
# It is a fragment of a larger system with remote editing, but that's another story.
# This is the raw training tester. Items to look out for:
# - uses DeepSpeed and has a DS config
# - to save space uses SGD instead of ADAM
# - uses gradient checkpointing
# - freezes 25% of the layers to fit
# Assumes you can already run https://gist.github.com/kinoc/2d636a68876cd3de7b6e9c9452b61089
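The items in that list look roughly like this, as a sketch with assumed sizes rather than the gist's actual code:

import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B")
model.gradient_checkpointing_enable()        # trade compute for activation memory

blocks = model.transformer.h                 # GPT-J's stack of decoder blocks
for block in blocks[: len(blocks) // 4]:     # freeze 25% of the layers
    for p in block.parameters():
        p.requires_grad = False

optimizer = torch.optim.SGD(                 # SGD avoids Adam's moment buffers
    (p for p in model.parameters() if p.requires_grad), lr=1e-4)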
@johnhw
johnhw / umap_sparse.py
Last active January 6, 2024 16:09
1 million prime UMAP layout
### JHW 2018
import numpy as np
import umap
# This code from the excellent module at:
# https://stackoverflow.com/questions/4643647/fast-prime-factorization-module
import random
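The layout idea, sketched at small scale; the prime-divisor encoding and cosine metric are assumptions based on the title and imports:

import scipy.sparse
from sympy import primefactors  # stand-in for the gist's factorization code

n_points = 10_000               # the gist scales this to 1,000,000
prime_col, rows, cols = {}, [], []
for n in range(2, n_points + 2):
    for p in primefactors(n):   # one sparse binary feature per prime divisor
        rows.append(n - 2)
        cols.append(prime_col.setdefault(p, len(prime_col)))
X = scipy.sparse.csr_matrix((np.ones(len(rows)), (rows, cols)),
                            shape=(n_points, len(prime_col)))
xy = umap.UMAP(metric="cosine").fit_transform(X)  # 2-D layout of the integers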
@gravitylow
gravitylow / codesign_gdb.md
Last active April 16, 2024 02:18 — forked from hlissner/codesign_gdb.md
Codesign gdb on macOS

If you are getting this in gdb on macOS while trying to run a program:

Unable to find Mach task port for process-id 57573: (os/kern) failure (0x5).
 (please check gdb is codesigned - see taskgated(8))
  1. Open Keychain Access
  2. In the menu, open Keychain Access > Certificate Assistant > Create a Certificate
  3. Give it a name (e.g. gdbc)
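The preview stops at the certificate steps. Once the certificate exists and is trusted for code signing, the usual final step in this recipe is codesign -fs gdbc "$(which gdb)", followed by restarting taskgated (or rebooting) so the new signature takes effect.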
@peterroelants
peterroelants / mnist_estimator.py
Last active February 14, 2024 11:26
Example using TensorFlow Estimator, Experiment & Dataset on MNIST data.
"""Script to illustrate usage of tf.estimator.Estimator in TF v1.3"""
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data as mnist_data
from tensorflow.contrib import slim
from tensorflow.contrib.learn import ModeKeys
from tensorflow.contrib.learn import learn_runner
# Show debugging output
tf.logging.set_verbosity(tf.logging.DEBUG)
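The Estimator pattern this gist illustrates centers on a model_fn that returns an EstimatorSpec; here is a stripped-down sketch with placeholder names and sizes, not the gist's own model:

def model_fn(features, labels, mode):
    logits = tf.layers.dense(features, 10)   # 10 MNIST classes
    loss = tf.losses.sparse_softmax_cross_entropy(labels, logits)
    train_op = tf.train.AdamOptimizer(1e-3).minimize(
        loss, global_step=tf.train.get_global_step())
    return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)

estimator = tf.estimator.Estimator(model_fn=model_fn, model_dir="/tmp/mnist")
# estimator.train(input_fn=train_input_fn)  # train_input_fn feeds the Dataset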
@0x4D31
0x4D31 / beautiful_idiomatic_python.md
Last active July 8, 2024 09:36 — forked from JeffPaine/beautiful_idiomatic_python.md
[Beautiful Idiomatic Python] Transforming Code into Beautiful, Idiomatic Python #python

Transforming Code into Beautiful, Idiomatic Python

Notes from Raymond Hettinger's talk at PyCon US 2013 (video, slides).

The code examples and direct quotes are all from Raymond's talk. I've reproduced them here for my own edification and in the hope that others will find them as handy as I have!

Looping over a range of numbers

for i in [0, 1, 2, 3, 4, 5]:
    print(i**2)
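The preview cuts off at the "before" example; the idiomatic replacement from the talk uses range:

for i in range(6):
    print(i**2)

(The talk's Python 2 examples recommend xrange for large ranges; in Python 3, range is already lazy.)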
@Tushar-N
Tushar-N / pad_packed_demo.py
Last active December 27, 2022 06:35
How to use pad_packed_sequence in pytorch<1.1.0
import torch
import torch.nn as nn
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence
seqs = ['gigantic_string','tiny_str','medium_str']
# make <pad> idx 0
vocab = ['<pad>'] + sorted(set(''.join(seqs)))
# make model
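A sketch of where the demo goes from here; layer sizes are placeholders, and since pre-1.1.0 pack_padded_sequence has no enforce_sorted flag, the batch is ordered longest-first:

embed = nn.Embedding(len(vocab), 10)        # padding idx 0 from the vocab above
lstm = nn.LSTM(10, 5, batch_first=True)

vectorized = sorted(([vocab.index(c) for c in s] for s in seqs),
                    key=len, reverse=True)  # longest sequence first
lengths = [len(v) for v in vectorized]
padded = torch.zeros(len(vectorized), max(lengths), dtype=torch.long)
for i, v in enumerate(vectorized):          # right-pad with <pad> idx 0
    padded[i, :len(v)] = torch.tensor(v)

packed = pack_padded_sequence(embed(padded), lengths, batch_first=True)
out, _ = lstm(packed)                       # the LSTM never sees the padding
unpacked, out_lens = pad_packed_sequence(out, batch_first=True)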
@ilblackdragon
ilblackdragon / seq2seq.py
Last active May 22, 2022 21:42
Example of Seq2Seq with Attention using all the latest APIs
import logging
import numpy as np
import tensorflow as tf
from tensorflow.contrib import layers
GO_TOKEN = 0
END_TOKEN = 1
UNK_TOKEN = 2
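These ids usually drive the decode loop; a schematic sketch where step_fn is a hypothetical one-step predictor, not the gist's API:

def greedy_decode(step_fn, state, max_len=50):
    token, output = GO_TOKEN, []
    for _ in range(max_len):
        token, state = step_fn(token, state)  # next token id and new state
        if token == END_TOKEN:
            break
        output.append(token)
    return output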
@rtqichen
rtqichen / pytorch_weight_norm.py
Last active May 11, 2023 06:58
Pytorch weight normalization - works for all nn.Module (probably)
## Weight norm is now added to pytorch as a pre-hook, so use that instead :)
import torch
import torch.nn as nn
from torch.nn import Parameter
from functools import wraps
class WeightNorm(nn.Module):
    append_g = '_g'
    append_v = '_v'
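As the first comment says, the built-in hook is the modern route; minimal usage of it for comparison:

from torch.nn.utils import weight_norm

layer = weight_norm(nn.Linear(20, 40), name='weight')
# the wrapped layer now has weight_g (magnitude) and weight_v (direction),
# the same decomposition this gist's _g/_v suffixes implement by hand
y = layer(torch.randn(8, 20))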