@JD-P
JD-P / agent_foundations_llms.md
Last active April 21, 2024 21:24
On Distributed AI Economy Excerpt

Alignment

I did a podcast with Zvi after seeing that Shane Legg couldn't answer a straightforward question about deceptive alignment on a podcast. Demis Hassabis was recently interviewed on the same podcast and likewise seemed unable to answer a straightforward question about alignment. OpenAI's "Superalignment" plan is literally to build AGI and have it solve alignment for us. The public consensus seems to be that "alignment" is a mysterious pre-paradigmatic field with a lot of [open problems](https://www.great

'''
DeepCache (https://arxiv.org/abs/2312.00858)
1. Put this file in ComfyUI/custom_nodes.
2. Load the node from <loaders>.
start_step, end_step: apply the method only while the sampling step is between start_step and end_step
cache_interval: how often the cache is refreshed (1 means no caching)
cache_depth: the UNet depth at which features are cached
'''
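
The docstring names the gating parameters but not the gate itself. A minimal sketch of the step-gating logic those parameters imply; the function and its name are illustrative, not the node's actual code:

def should_reuse_cache(step, start_step, end_step, cache_interval):
    # Illustrative sketch, not the node's actual code.
    # Caching is only active inside the configured step window.
    if not (start_step <= step <= end_step):
        return False
    # Recompute (refresh the cache) every cache_interval steps; with an
    # interval of 1 every step is recomputed, i.e. "no caching" as noted above.
    return step % cache_interval != 0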
@younesbelkada
younesbelkada / bechmark-fa-2-mistral-7b.py
Created October 2, 2023 16:21
Benchmark transformers + FA2 + Mistral 7B
import argparse

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

device = torch.device("cuda:0")

def get_parser():
    parser = argparse.ArgumentParser()
    parser.add_argument(
        "--max-new-tokens",
        type=int,
        default=256,  # assumed default; the gist preview truncates here
    )
    return parser
@younesbelkada
younesbelkada / benchmark-mistral-7b.py
Last active February 14, 2024 13:11
Benchmark Mistral 7b model
import argparse
from mistral.cache import RotatingBufferCache
import torch
import inspect
from typing import List
from pathlib import Path
from mistral.model import Transformer
from mistral.tokenizer import Tokenizer
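
The preview ends at the imports. Continuing them, a minimal loading sketch assuming the mistral reference repo's usual checkpoint layout; the local path and batch size are placeholders:

model_path = Path("mistral-7B-v0.1")  # placeholder: local checkpoint directory
tokenizer = Tokenizer(str(model_path / "tokenizer.model"))
model = Transformer.from_folder(model_path, max_batch_size=1)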
# https://medium.com/@dave1010/amazingly-alarming-autonomous-ai-agents-62f8a785e4d8
# https://github.com/dave1010/hubcap
# to run we need a few libraries:
# pip install rich typer
import os
import subprocess
import sys
import time
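
The preview stops at the imports. A rough sketch, not hubcap's actual code, of the loop such a minimal shell agent runs; model_call is a hypothetical stand-in for an LLM API call:

import subprocess
import time

def agent_loop(model_call):
    # model_call: hypothetical LLM client, prompt string in, reply string out.
    history = "Reply with exactly one shell command per turn."
    while True:
        command = model_call(history).strip()
        print(f"$ {command}")
        result = subprocess.run(
            command, shell=True, capture_output=True, text=True, timeout=60
        )
        # Feed the command and its output back so the model can plan its next step.
        history += f"\n$ {command}\n{result.stdout}{result.stderr}"
        time.sleep(1)  # crude rate limiting between model calls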
@chu-tianxiang
chu-tianxiang / rerope.py
Last active March 8, 2024 02:00
triton implementation of ReRope
# Adapted from the triton implementation of flash-attention v2
# https://github.com/openai/triton/blob/main/python/tutorials/06-fused-attention.py
import time
import torch
import torch.utils.benchmark as benchmark
import triton
import triton.language as tl
@triton.jit
def _fwd_kernel(Q, K, V, Out):
    # Hypothetical stub: the gist preview truncates at the decorator, so the
    # real kernel signature and body are not shown here.
    pass
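
For orientation, the trick the fused kernel implements is ReRoPE's clamped relative-position map. A plain PyTorch sketch of that map, paraphrased from the ReRoPE write-up rather than taken from this kernel:

import torch

def rerope_relative_positions(seq_len: int, window: int) -> torch.Tensor:
    # Exact relative distances inside the training window; distances beyond
    # it are clamped to the window, so RoPE never sees unseen offsets.
    pos = torch.arange(seq_len)
    rel = pos[:, None] - pos[None, :]
    return rel.clamp(max=window)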
@SunMarc
SunMarc / finetune_llama_gptq.py
Last active May 2, 2024 16:41
Finetune GPTQ model with peft and trl
# coding=utf-8
# Copyright 2023 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
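
The preview ends with the license header. A minimal sketch of the peft side of such a script, assuming a GPTQ checkpoint loadable through transformers (with optimum and auto-gptq installed); the checkpoint id and LoRA hyperparameters are illustrative, not the gist's values:

from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "TheBloke/Llama-2-7B-GPTQ",  # illustrative GPTQ checkpoint
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the LoRA adapters should be trainable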
@davidad
davidad / lead.py
Created August 4, 2023 20:14
Lead poisoning data analysis (thanks GPT-4)
import pandas as pd
import matplotlib.pyplot as plt

# Load the data
df = pd.read_excel('pnas.2118631119.sd01.xlsx')

# Filter the data for ages 22-35 and survey years 1955-2040
df_filtered = df[(df['AGE'] >= 22) & (df['AGE'] <= 35) & (df['YEAR'] >= 1955) & (df['YEAR'] <= 2040)]
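
The preview stops after the filter. A small continuation sketch using only the columns visible above; the variable the analysis actually plots is not shown in the preview:

counts = df_filtered.groupby('YEAR').size()
plt.plot(counts.index, counts.values)
plt.xlabel('YEAR')
plt.ylabel('respondents aged 22-35')
plt.show()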
@younesbelkada
younesbelkada / finetune_llama_v2.py
Last active May 6, 2024 23:58
Fine-tune Llama v2 models on the Guanaco dataset
# coding=utf-8
# Copyright 2023 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
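
The preview shows only the license header. A compact sketch of the QLoRA-on-Guanaco recipe the description names, using the trl SFTTrainer signature of that era (newer trl moves these settings into SFTConfig); the dataset id and hyperparameters are assumptions:

import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    BitsAndBytesConfig,
    TrainingArguments,
)
from trl import SFTTrainer

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
tokenizer.pad_token = tokenizer.eos_token

dataset = load_dataset("timdettmers/openassistant-guanaco", split="train")  # assumed dataset id
peft_config = LoraConfig(r=64, lora_alpha=16, lora_dropout=0.1, task_type="CAUSAL_LM")

trainer = SFTTrainer(
    model=model,
    args=TrainingArguments(output_dir="./results", per_device_train_batch_size=4),
    train_dataset=dataset,
    peft_config=peft_config,
    dataset_text_field="text",
    tokenizer=tokenizer,
    max_seq_length=512,
)
trainer.train()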
@0xdevalias
0xdevalias / bypassing-cloudflare-akamai-etc.md
Last active May 3, 2024 06:11
Some notes/resources for bypassing anti-bot/scraping features on Cloudflare, Akamai, etc.