Eric Hartford ehartford

# This supports merging as many adapters as you want.
# python merge_adapters.py --base_model_name_or_path <base_model> --peft_model_paths <adapter1> <adapter2> <adapter3> --output_dir <merged_model>
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch
import os
import argparse
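
The preview cuts off before the merge loop, but the arithmetic behind stacking multiple adapters is simple: each LoRA adapter contributes a low-rank update `B @ A` scaled by `alpha / r`, folded sequentially into the running weights. A stdlib-only toy illustration (2x2 matrices and scaling values chosen for demonstration, not taken from the gist):

```python
# Toy illustration of sequential LoRA merging: each adapter's low-rank
# update B @ A, scaled by alpha / r, is added into the running weights,
# so later adapters stack on top of already-merged ones.
def matmul(B, A):
    return [[sum(B[i][k] * A[k][j] for k in range(len(A)))
             for j in range(len(A[0]))] for i in range(len(B))]

def merge(W, adapters, alpha=1.0, r=1):
    scale = alpha / r
    for B, A in adapters:
        delta = matmul(B, A)                      # low-rank update
        W = [[W[i][j] + scale * delta[i][j]       # fold into weights
              for j in range(len(W[0]))] for i in range(len(W))]
    return W

W = [[1.0, 0.0], [0.0, 1.0]]                      # toy base weights
adapters = [([[1.0], [0.0]], [[0.0, 1.0]]),       # rank-1 update #1
            ([[0.0], [1.0]], [[1.0, 0.0]])]       # rank-1 update #2
merged = merge(W, adapters)                       # -> [[1.0, 1.0], [1.0, 1.0]]
```

In the real script this folding is what `PeftModel.from_pretrained(...)` followed by `merge_and_unload()` performs per adapter, on the full model's weight matrices.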

Gauge Emergent Gravity

Preon Field: $\phi$ scalar with U(1) gauge symmetry.
Gauge Field: $A_{\mu}$

Lagrangian Components:

  • Gauge: $$\mathcal{L}_{\text{gauge}} = -\frac{1}{4} F^{\mu\nu}F_{\mu\nu}$$, where $$F_{\mu\nu} = \partial_{\mu} A_{\nu} - \partial_{\nu} A_{\mu}$$
  • Interaction: $$\mathcal{L}_{\text{interaction}} = q \bar{\phi} \gamma^{\mu} \phi A_{\mu}$$
  • Spontaneous Symmetry Breaking: $$\langle \phi \rangle = v$$ (non-zero vacuum expectation value)
  • Emergent Gravity: $$S_{\text{gravity}} = \int d^4x \sqrt{-g} \left( \frac{R}{16\pi G} + \mathcal{L}_{\text{emergent}} \right)$$, where $$\mathcal{L}_{\text{emergent}}$$ encodes the post-symmetry-breaking preon dynamics.
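
One concrete property of the field strength defined above is its antisymmetry, $F_{\mu\nu} = -F_{\nu\mu}$, which holds for any smooth gauge field. A numeric sanity check of that definition (the sample field $A_\mu$ and the evaluation point are arbitrary choices, not part of the model):

```python
# Numeric check that F_{mu nu} = d_mu A_nu - d_nu A_mu is antisymmetric,
# using an arbitrary smooth gauge field and central finite differences.
# Coordinates are x = (t, x1, x2, x3).
def A(mu, x):
    t, x1, x2, x3 = x
    return [t * x1, x2**2, x1 * x3, t + x2][mu]   # arbitrary smooth A_mu

def d(mu, f, x, h=1e-5):
    """Central finite-difference partial derivative along coordinate mu."""
    xp, xm = list(x), list(x)
    xp[mu] += h
    xm[mu] -= h
    return (f(xp) - f(xm)) / (2 * h)

def F(mu, nu, x):
    return d(mu, lambda y: A(nu, y), x) - d(nu, lambda y: A(mu, y), x)

x0 = [0.3, 1.2, -0.7, 2.0]
for mu in range(4):
    for nu in range(4):
        assert abs(F(mu, nu, x0) + F(nu, mu, x0)) < 1e-6   # F is antisymmetric
```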
@ehartford
ehartford / claudeDeepspeed.txt
Created March 10, 2024 20:01
Conversation with Claude about DeepSpeed
ME:
[rank14]: Traceback (most recent call last):
[rank14]: File "<frozen runpy>", line 198, in _run_module_as_main
[rank14]: File "<frozen runpy>", line 88, in _run_code
[rank14]: File "/scratch/axolotl/src/axolotl/cli/train.py", line 59, in <module>
[rank14]: fire.Fire(do_cli)
[rank14]: File "/home/ehartford/miniconda3/envs/axolotl/lib/python3.12/site-packages/fire/core.py", line 141, in Fire
[rank14]: component_trace = _Fire(component, args, parsed_flag_args, context, name)
[rank14]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank14]: File "/home/ehartford/miniconda3/envs/axolotl/lib/python3.12/site-packages/fire/core.py", line 475, in _Fire
@ehartford
ehartford / gist:5d8452c1f2e8395398e86106388660df
Created January 1, 2024 07:09
Convert yayi2-30b to llama. All credit to Charles Goddard and Weyaxi.
import copy
import os
import safetensors.torch
import glob
import json
def transform_st(path: str, out_dir: str):
    data = safetensors.torch.load_file(path)
    old_keys = list(data.keys())
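
The preview stops right after loading the shard, but the core of such a conversion is renaming each tensor's key from the source naming scheme to llama's. A minimal sketch of that step (the mapping entries here are hypothetical placeholders; the real yayi2-to-llama mapping is in the gist's full source):

```python
# Sketch of the key-renaming step in a checkpoint conversion.
# NOTE: this mapping is hypothetical, for illustration only.
def rename_keys(data, mapping):
    out = {}
    for key, tensor in data.items():
        new_key = key
        for old, new in mapping.items():
            new_key = new_key.replace(old, new)
        out[new_key] = tensor
    return out

mapping = {
    "transformer.h.": "model.layers.",        # hypothetical prefix rename
    "attn.c_attn": "self_attn.qkv_proj",      # hypothetical module rename
}
data = {"transformer.h.0.attn.c_attn.weight": "tensor-placeholder"}
renamed = rename_keys(data, mapping)
```

In the actual script, `renamed` would then be written back out with `safetensors.torch.save_file` into `out_dir`.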
@ehartford
ehartford / err.log
Last active December 21, 2023 22:33
The following values were not passed to `accelerate launch` and had defaults used instead:
`--num_processes` was set to a value of `4`
More than one GPU was found, enabling multi-GPU training.
If this was unintended please pass in `--num_processes=1`.
`--num_machines` was set to a value of `1`
`--mixed_precision` was set to a value of `'no'`
`--dynamo_backend` was set to a value of `'no'`
To avoid this warning pass in values for each of the problematic parameters or run `accelerate config`.
/workspace/axolotl/transformers/src/transformers/deepspeed.py:23: FutureWarning: transformers.deepspeed module is deprecated and will be removed in a future version. Please import deepspeed modules directly from transformers.integrations
warnings.warn(
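
As the warning says, it can be silenced by passing each value explicitly on the command line (the flag values below are simply the defaults the log reports, and `train.py` stands in for the actual entry point):

```shell
# Pass the defaulted values explicitly to avoid the accelerate warning
accelerate launch \
  --num_processes=4 \
  --num_machines=1 \
  --mixed_precision=no \
  --dynamo_backend=no \
  train.py
```

Alternatively, running `accelerate config` once writes these choices to a config file so they no longer need to be passed per launch.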
// npm i express axios body-parser && node ./oailogger.js
const express = require('express');
const axios = require('axios');
const bodyParser = require('body-parser');
const stream = require('stream');
const { promisify } = require('util');
const fs = require('fs');
const logStream = fs.createWriteStream('logs.jsonl', { flags: 'a' });
const app = express();
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch
import os
import argparse
def get_args():
    parser = argparse.ArgumentParser()
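
The preview cuts off inside `get_args`, but the usage line at the top of the script (`--base_model_name_or_path`, `--peft_model_paths`, `--output_dir`) implies its shape. A sketch consistent with that usage (the `required` flags and the optional `argv` parameter are illustrative additions):

```python
# Argument parser matching the script's usage comment:
#   python merge_adapters.py --base_model_name_or_path <base_model> \
#       --peft_model_paths <adapter1> <adapter2> ... --output_dir <merged_model>
import argparse

def get_args(argv=None):
    parser = argparse.ArgumentParser()
    parser.add_argument("--base_model_name_or_path", type=str, required=True)
    # nargs="+" accepts one or more adapter paths, enabling multi-adapter merges
    parser.add_argument("--peft_model_paths", type=str, nargs="+", required=True)
    parser.add_argument("--output_dir", type=str, required=True)
    return parser.parse_args(argv)   # argv=None falls back to sys.argv

args = get_args(["--base_model_name_or_path", "base",
                 "--peft_model_paths", "a1", "a2",
                 "--output_dir", "merged"])
```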