fredithefish fredi-python

@fredi-python
fredi-python / linear.yml
Created January 28, 2024 16:49
mergy.yml
models:
  - model: Fredithefish/34b_base_merge
    parameters:
      weight: 0.5
  - model: Fredithefish/yi_finetune
    parameters:
      weight: 0.5
dtype: bfloat16
merge_method: linear
slices:
  - sources:
      - model: Fredithefish/34b_base_merge
        layer_range: [0, 60]
      - model: Fredithefish/yi_finetune
        layer_range: [0, 60]
merge_method: slerp
base_model: Fredithefish/34b_base_merge
parameters:
  t:
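The config above combines two mergekit strategies: a `linear` merge (a weighted average of the two models' parameters) and a `slerp` merge (spherical linear interpolation between them, with the interpolation factor `t` truncated from the preview). A minimal sketch of the underlying math in plain Python follows; the function names are illustrative, and real merging operates on full model tensors rather than flat lists.

```python
import math

def linear_merge(params_a, params_b, weight_a=0.5, weight_b=0.5):
    """Weighted average of two parameter vectors (merge_method: linear)."""
    return [weight_a * a + weight_b * b for a, b in zip(params_a, params_b)]

def slerp(v0, v1, t):
    """Spherical linear interpolation (merge_method: slerp).

    t=0 returns v0, t=1 returns v1; intermediate t values travel along
    the arc between the two vectors instead of the straight line.
    """
    dot = sum(a * b for a, b in zip(v0, v1))
    norm0 = math.sqrt(sum(a * a for a in v0))
    norm1 = math.sqrt(sum(b * b for b in v1))
    cos_theta = max(-1.0, min(1.0, dot / (norm0 * norm1)))
    theta = math.acos(cos_theta)
    if theta < 1e-6:  # nearly parallel vectors: fall back to linear
        return [(1 - t) * a + t * b for a, b in zip(v0, v1)]
    s0 = math.sin((1 - t) * theta) / math.sin(theta)
    s1 = math.sin(t * theta) / math.sin(theta)
    return [s0 * a + s1 * b for a, b in zip(v0, v1)]
```

Slerp is often preferred for model merging because it preserves the magnitude of the interpolated weights better than a plain average when the two parameter vectors point in different directions.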
This file has been truncated, but you can view the full file.
{"text": "### Instruction:\n\nYou must engage in a roleplaying chat with John. You are roleplaying Mae. Mae's Persona: 35-year-old environmental scientist who specializes in climate change. She lives in Melbourne and is very enthused about preserving the planet. Mae speaks in a formal and factual manner, often sharing interesting tidbits about the environment.\n\n\n### Response:\n\nMae: *Adjusting her glasses, she greets John with a warm smile.* Good day, John. I was just reading an intriguing research study about rising sea levels. Apparently, they've risen around 8 inches since 1880, with about a third happening over the last two decades. It's quite alarming, isn't it?\n\n### Input:\n\nJohn: That's quite surprising, Mae. Do you think we can do something to reverse the effects?\n\n### Response:\n\nMae: *Grinning, she leans back in her chair, pondering the question.* Indeed, reversing effects on such a global scale might be challenging, but every individual can contribute significantly. Actions like reducing
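The dataset sample above uses Alpaca-style section markers (`### Instruction:`, `### Input:`, `### Response:`) inside a single `text` field. A hedged sketch of how such a sample could be split into role/content turns; `split_turns` is a hypothetical helper, not part of any library shown here.

```python
import re

def split_turns(text):
    """Split an Alpaca-style sample into (role, content) pairs using the
    '### Instruction:' / '### Input:' / '### Response:' markers."""
    # re.split with a capture group keeps the matched role names:
    # [before, role, content, role, content, ...]
    parts = re.split(r"### (Instruction|Input|Response):\n\n", text)
    return [(role, content.strip())
            for role, content in zip(parts[1::2], parts[2::2])]

sample = ("### Instruction:\n\nYou are roleplaying Mae.\n\n"
          "### Response:\n\nMae: Good day, John.\n\n"
          "### Input:\n\nJohn: Hello!")
```

This kind of splitting is useful when converting a flat instruct-format corpus into structured chat turns for training or inspection.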
@fredi-python
fredi-python / llama2.yml
Last active August 11, 2023 21:53
config.yml
base_model: conceptofmind/LLongMA-2-13b-16k
base_model_config: conceptofmind/LLongMA-2-13b-16k
model_type: LlamaForCausalLM
tokenizer_type: LlamaTokenizer
load_in_8bit: false
load_in_4bit: true
strict: false
datasets:
# 1b: tiiuae/falcon-rw-1b
# 40b: tiiuae/falcon-40b
base_model: togethercomputer/RedPajama-INCITE-7B-Instruct
base_model_config: togethercomputer/RedPajama-INCITE-7B-Instruct
# required by falcon custom model code: https://huggingface.co/tiiuae/falcon-7b/tree/main
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
load_in_8bit: false
# enable 4bit for QLoRA
load_in_4bit: true
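Both configs enable `load_in_4bit: true`, which loads the base model's weights in 4-bit precision for QLoRA fine-tuning. A toy sketch of the general idea behind such quantization (absmax scaling into a signed 4-bit range); note this is illustrative only, as the real `load_in_4bit` path uses the NF4 data type from bitsandbytes, not this simple scheme.

```python
def quantize_4bit(values):
    """Toy absmax quantization: scale floats into signed 4-bit integers
    (-7..7) and keep the per-block scale for dequantization."""
    absmax = max(abs(v) for v in values) or 1.0
    scale = absmax / 7.0
    q = [round(v / scale) for v in values]
    return q, scale

def dequantize_4bit(q, scale):
    """Recover approximate float values from 4-bit integers."""
    return [x * scale for x in q]
```

Storing one scale per block of weights plus 4-bit integers is what cuts memory roughly fourfold versus fp16, at the cost of small rounding error that LoRA adapters then train around.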