Ryan Reece (rreece)

@awni
awni / l3min.py
Last active August 23, 2024 22:35
A minimal, fast implementation of Llama 3.1 in MLX.
"""
A minimal, fast example generating text with Llama 3.1 in MLX.
To run, install the requirements:
pip install -U mlx transformers fire
Then generate text with:
python l3min.py "How tall is K2?"
"""
@kratsg
kratsg / ATLASSUSY_ReproducibleSummaryPlots.ipynb
Last active December 7, 2023 10:59
ATLAS SUSY Reproducible Summary Plots
import pyhf
pyhf.set_backend('pytorch', 'minuit')
nSig = 4.166929245
errSig = 4.166929245
nBkg = 0.11
errBkgUp = 0.20
errBkgDown = 0.11
model_json = {
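The preview truncates at the model_json definition. As a minimal sketch (an assumption about the structure, not the gist's actual JSON), a single-bin pyhf workspace with a signal normalization factor and an asymmetric background uncertainty could look like:

spec = {
    "channels": [
        {
            "name": "SR",
            "samples": [
                {
                    "name": "signal",
                    "data": [nSig],
                    "modifiers": [{"name": "mu", "type": "normfactor", "data": None}],
                },
                {
                    "name": "background",
                    "data": [nBkg],
                    "modifiers": [
                        {
                            "name": "bkg_shape",
                            "type": "histosys",
                            "data": {"hi_data": [nBkg + errBkgUp], "lo_data": [nBkg - errBkgDown]},
                        }
                    ],
                },
            ],
        }
    ]
}
model = pyhf.Model(spec)  # fits and upper limits can then be computed from this model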
@ebarsoum
ebarsoum / gpu_memory_overhead_pycuda.py
Created October 26, 2019 00:24
GPU memory overhead for PyCUDA
import numpy as np
from pynvml.smi import nvidia_smi
import pycuda.gpuarray as ga
import pycuda.driver as cuda
nvsmi = nvidia_smi.getInstance()
def getGPUMemoryUsage(gpu_index=0):
    return nvsmi.DeviceQuery("memory.used")["gpu"][gpu_index]['fb_memory_usage']['used']
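For illustration (not part of the gist's preview), the helper can be used to compare the reported GPU memory before and after an allocation; the pycuda.autoinit import and the array size below are assumptions:

import pycuda.autoinit  # assumed: creates the CUDA context needed before any allocation

before = getGPUMemoryUsage()
x = ga.to_gpu(np.zeros((1024, 1024), dtype=np.float32))  # 4 MiB of payload
after = getGPUMemoryUsage()
print("Used before: %s MiB, after: %s MiB, delta: %s MiB" % (before, after, after - before))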
@johnhw
johnhw / umap_sparse.py
Last active January 6, 2024 16:09
1 million prime UMAP layout
### JHW 2018
import numpy as np
import umap
# This code is from the excellent module at:
# https://stackoverflow.com/questions/4643647/fast-prime-factorization-module
import random
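The preview stops at the imports. A minimal sketch of the overall idea (the factorization routine, N, and UMAP parameters below are assumptions, not the gist's actual code): each integer is encoded as a sparse indicator vector over its prime factors, and UMAP lays out the resulting sparse matrix in 2D.

import scipy.sparse
from sympy import primefactors  # stand-in for the fast factorization module referenced above

N = 100_000  # the gist targets 1,000,000; smaller here to keep the sketch quick
rows, cols = [], []
prime_index = {}
for n in range(2, N + 2):
    for p in primefactors(n):
        j = prime_index.setdefault(p, len(prime_index))
        rows.append(n - 2)
        cols.append(j)
X = scipy.sparse.csr_matrix(
    (np.ones(len(rows)), (rows, cols)), shape=(N, len(prime_index))
)
embedding = umap.UMAP(metric="cosine").fit_transform(X)  # 2D layout, shape (N, 2)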
@HarshTrivedi
HarshTrivedi / pad_packed_demo.py
Last active September 4, 2024 13:28 — forked from Tushar-N/pad_packed_demo.py
Minimal tutorial on packing (pack_padded_sequence) and unpacking (pad_packed_sequence) sequences in pytorch.
import torch
from torch import LongTensor
from torch.nn import Embedding, LSTM
from torch.autograd import Variable
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence
## We want to run LSTM on a batch of 3 character sequences ['long_str', 'tiny', 'medium']
#
# Step 1: Construct Vocabulary
# Step 2: Load indexed data (list of instances, where each instance is list of character indices)
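The preview cuts off at the step list. A minimal sketch of the pack/unpack round trip the tutorial builds toward (the toy indices and layer sizes below are assumptions, not the gist's exact values):

# Padded batch of 3 index sequences, sorted by decreasing length (8, 6, 4).
seqs = LongTensor([[1, 2, 3, 4, 5, 6, 7, 8],
                   [9, 10, 11, 12, 13, 14, 0, 0],
                   [15, 16, 17, 18, 0, 0, 0, 0]])
lengths = [8, 6, 4]

embed = Embedding(num_embeddings=20, embedding_dim=5)
lstm = LSTM(input_size=5, hidden_size=7, batch_first=True)

packed = pack_padded_sequence(embed(seqs), lengths, batch_first=True)
packed_out, (h, c) = lstm(packed)                      # the LSTM never sees the padding
padded_out, out_lens = pad_packed_sequence(packed_out, batch_first=True)
# padded_out has shape (3, 8, 7); out_lens recovers the original lengths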
@blepfo
blepfo / TensorFlow-Best-Practices-Q1-2018.md
Last active December 13, 2018 10:22
TensorFlow Best Practices as of Q1 2018

TensorFlow Best Practices as of Q1 2018

By Adam Anderson

adam.b.anderson.96@gmail.com

Preface

This write-up assumes you have a general understanding of the TensorFlow programming model but may not have kept up to date with the latest library features and standard practices.

@LucaCappelletti94
LucaCappelletti94 / Firing up LaTex on macOS.md
Last active August 3, 2024 16:46
Firing up LaTeX on macOS

Firing up LaTeX on macOS 🔥

As I write this small tutorial, I assume you've read my previous one about setting up macOS, so if I use a tool without explanation, refer to that other article.

MacTex

The full version IS NOT MANDATORY: in the tutorial that follows I installed the smaller version of MacTeX and then installed every needed dependency. The complete package is about a 3.5 GB download and roughly 5 GB on disk; the smaller one is only about 80 MB.

Click here to download the complete version or here to download the smaller version.
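For reference (not part of the gist's text), with the smaller BasicTeX distribution, missing packages are typically pulled in afterwards with tlmgr, for example:

sudo tlmgr update --self
sudo tlmgr install latexmk collection-fontsrecommended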

Gnuplot

@LucaCappelletti94
LucaCappelletti94 / MacOs quick setup.md
Last active October 27, 2023 03:17
macOS commands to get you started.

macOS quick setup 🚀

Getting everything ready

1 - Xcode/Ruby/Command line tools

You need to have Xcode installed to proceed.

xcode-select --install
sudo xcodebuild -license accept

2 - Brew
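The Brew step is cut off in this preview; for reference (not necessarily the gist's wording), Homebrew's standard install command is:

/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"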

@damienpontifex
damienpontifex / tf-experiment-template.py
Last active March 9, 2021 09:43
A template for a custom TensorFlow estimator and experiment, with Python 3 type hints for the desired parameter types
import argparse
import psutil
import tensorflow as tf
from typing import Dict, Any, Callable, Tuple
## Data Input Function
def data_input_fn(data_param,
                  batch_size: int = None,
                  shuffle=False) -> Callable[[], Tuple]:
    """Return the input function to get the test data.