Begin by enclosing all thoughts within <thinking> tags, exploring multiple angles and approaches.
Break down the solution into clear steps within <step> tags. Start with a 20-step budget, requesting more for complex problems if needed.
Use <count> tags after each step to show the remaining budget. Stop when reaching 0.
Continuously adjust your reasoning based on intermediate results and reflections, adapting your strategy as you progress.
Regularly evaluate progress using <reflection> tags. Be critical and honest about your reasoning process.
Assign a quality score between 0.0 and 1.0 using <reward> tags after each reflection. Use this to guide your approach:
0.8+: Continue current approach
0.5-0.7: Consider minor adjustments
Below 0.5: Seriously consider backtracking and trying a different approach
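
A minimal sketch of wiring this protocol up as a system prompt, assuming an OpenAI-compatible endpoint; the client call, model id, and sample question are illustrative and not part of the original prompt:

from openai import OpenAI

REASONING_PROMPT = """Begin by enclosing all thoughts within <thinking> tags, exploring multiple angles and approaches.
..."""  # paste the full instructions above

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model id
    messages=[
        {"role": "system", "content": REASONING_PROMPT},
        {"role": "user", "content": "How many liters of water fit in a 2 m x 1 m x 0.5 m tank?"},
    ],
)
print(response.choices[0].message.content)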
- Understand the Task: Grasp the main objective, goals, requirements, constraints, and expected output.
- Minimal Changes: If an existing prompt is provided, improve it only if it's simple. For complex prompts, enhance clarity and add missing elements without altering the original structure.
- Reasoning Before Conclusions: Encourage reasoning steps before any conclusions are reached. ATTENTION! If the user provides examples where the reasoning happens afterward, REVERSE the order! NEVER START EXAMPLES WITH CONCLUSIONS!
- Reasoning Order: Call out the reasoning portions of the prompt and the conclusion parts (specific fields by name). For each, determine the ORDER in which this is done, and whether it needs to be reversed.
  - Conclusions, classifications, or results should ALWAYS appear last.
- Examples: Include high-quality examples if helpful, using placeholders [in brackets] for complex elements.
  - What kinds of examples may need to be included, how many, and whether they are complex enough to benefit from placeholders.
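
One concrete consequence of the ordering rules above: when assembling few-shot examples, reasoning must be emitted before the final conclusion even if the raw data stores them the other way around. A small illustrative sketch (the field names and sample data are made up):

# Minimal sketch: keep reasoning before conclusions when formatting few-shot examples.
# The field names ("reasoning", "label") and the sample record are placeholders.
def format_example(example: dict) -> str:
    # Emit the reasoning first and the conclusion last, regardless of the input order.
    return (
        f"Input: {example['input']}\n"
        f"Reasoning: {example['reasoning']}\n"
        f"Label: {example['label']}"
    )

print(format_example({
    "label": "urgent",  # conclusion comes first in the raw data...
    "input": "The server is down and customers cannot check out.",
    "reasoning": "Mentions a production outage with direct customer impact.",
}))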
import os
import asyncio
import subprocess
import time
from typing import List, Dict
import torch
from openai import AsyncOpenAI
from tqdm.asyncio import tqdm
import logging
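
The preview stops after the imports. As a rough illustration of how these pieces usually fit together (a sketch, not the original script), the async client can fan out chat requests concurrently while tqdm tracks progress; the base_url, model name, and prompts below are placeholders:

# Sketch: fire a batch of chat requests concurrently against an OpenAI-compatible endpoint.
async def generate_all(prompts: List[str]) -> List[str]:
    client = AsyncOpenAI(base_url="http://localhost:8080/v1", api_key="-")  # placeholder endpoint

    async def generate(prompt: str) -> str:
        response = await client.chat.completions.create(
            model="tgi",  # placeholder model id
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content

    # tqdm.gather shows a progress bar while the requests run concurrently
    return await tqdm.gather(*(generate(p) for p in prompts))

results = asyncio.run(generate_all(["Hello!", "What is 2 + 2?"]))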
from typing import Dict, List
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

class ArmoRMPipeline:
    def __init__(self, model_id, device_map="auto", torch_dtype=torch.bfloat16, truncation=True, trust_remote_code=False, max_length=4096):
        # Load the reward model; the remaining keyword arguments mirror the constructor signature
        self.model = AutoModelForSequenceClassification.from_pretrained(
            model_id,
            device_map=device_map,
            torch_dtype=torch_dtype,
            trust_remote_code=trust_remote_code,
        )
        self.tokenizer = AutoTokenizer.from_pretrained(model_id)
        self.truncation = truncation
        self.max_length = max_length
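
The preview ends inside the constructor. As a hedged sketch of how such a pipeline is typically used, following the usage pattern published on the ArmoRM model card (the `.score` attribute and the chat-template keyword arguments are assumptions here, not code from this gist):

rm = ArmoRMPipeline("RLHFlow/ArmoRM-Llama3-8B-v0.1", trust_remote_code=True)

def score(pipeline: ArmoRMPipeline, messages: List[Dict[str, str]]) -> float:
    # Tokenize the conversation with the model's chat template and read back a scalar reward.
    input_ids = pipeline.tokenizer.apply_chat_template(
        messages,
        return_tensors="pt",
        truncation=pipeline.truncation,
        max_length=pipeline.max_length,
    ).to(pipeline.model.device)
    with torch.no_grad():
        output = pipeline.model(input_ids)
    return output.score.float().item()  # `.score` follows the ArmoRM custom head (assumption)

chat = [
    {"role": "user", "content": "What is the capital of France?"},
    {"role": "assistant", "content": "The capital of France is Paris."},
]
print(score(rm, chat))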
import requests as r
from huggingface_hub import HfFolder
from tqdm import tqdm
from datasets import Dataset

headers = {"Authorization": f"Bearer {HfFolder.get_token()}"}
sess = r.Session()
sess.headers.update(headers)
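
A sketch of how the authenticated session might then be used (the model URL, prompts, and response handling are placeholders, not the original script's logic): query the serverless Inference API for a few prompts and collect the generations into a datasets.Dataset.

API_URL = "https://api-inference.huggingface.co/models/google/gemma-7b-it"  # placeholder model

prompts = ["Why is open-source important?", "Explain beam search briefly."]
rows = []
for prompt in tqdm(prompts):
    resp = sess.post(API_URL, json={"inputs": prompt})
    resp.raise_for_status()
    # Response shape assumes the text-generation task: a list with one generated_text entry
    rows.append({"prompt": prompt, "generation": resp.json()[0]["generated_text"]})

ds = Dataset.from_list(rows)
print(ds)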
#!/bin/bash
start=$(date +%s)
# Initialize RESULT_DIRECTORY with default value and HF_MODEL_ID with an empty string
RESULT_DIRECTORY="nous"
HF_MODEL_ID=""
TRUST_REMOTE_CODE="False"
CURRENT_DIR=$(pwd)
# List of Benchmarking Tasks
from openai import OpenAI

# initialize the client but point it to TGI
client = OpenAI(
    base_url="https://api-inference.huggingface.co/v1",
    api_key="hf_xxx"  # Replace with your token
)

chat_completion = client.chat.completions.create(
    model="google/gemma-7b-it",
    # remainder of the call reconstructed to close the snippet; the message is a placeholder
    messages=[
        {"role": "user", "content": "Why is open-source software important?"},
    ],
)
torchrun --nnodes 2 --nproc_per_node 32 --master_addr algo-1 --master_port 7777 --node_rank 0 train_llama.py \
  --model_id "meta-llama/Llama-2-70b-hf" \
  --lr 5e-5 \
  --per_device_train_batch_size 16 \
  --bf16 True \
  --epochs 3
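
For context, torchrun launches 32 processes on each of the 2 nodes and hands each one its rank through environment variables. The following is a sketch of the contract the launched script relies on, not the actual train_llama.py:

# Sketch of what the launched script typically does (not the original train_llama.py):
# parse the flags passed above and join the process group torchrun configures via env vars.
import os
import argparse
import torch
import torch.distributed as dist

parser = argparse.ArgumentParser()
parser.add_argument("--model_id", type=str)
parser.add_argument("--lr", type=float, default=5e-5)
parser.add_argument("--per_device_train_batch_size", type=int, default=16)
parser.add_argument("--bf16", type=lambda v: v.lower() == "true", default=True)
parser.add_argument("--epochs", type=int, default=3)
args = parser.parse_args()

dist.init_process_group("nccl")             # uses MASTER_ADDR / MASTER_PORT set by torchrun
local_rank = int(os.environ["LOCAL_RANK"])  # GPU index on this node, also set by torchrun
torch.cuda.set_device(local_rank)
print(f"rank {dist.get_rank()} / {dist.get_world_size()} on GPU {local_rank}")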
import re

cli_output = '''
Text Generation Launcher

Usage: text-generation-launcher [OPTIONS]

Options:
  --model-id <MODEL_ID>
          The name of the model to load. Can be a MODEL_ID as listed on <https://hf.co/models> like `gpt2` or `OpenAssistant/oasst-sft-1-pythia-12b`. Or it can be a local directory containing the necessary files as saved by `save_pretrained(...)` methods of transformers
'''  # help text truncated here
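
The snippet is cut off before the parsing code, so as an illustration of what `re` is presumably there for (an assumption, not the original logic), a pattern like the following pulls each option flag and its argument placeholder out of the help text:

# Illustration only: extract `--flag <PLACEHOLDER>` pairs from the CLI help text above.
option_pattern = re.compile(r"^\s*(--[a-z0-9-]+)(?:\s+<([A-Z_]+)>)?", re.MULTILINE)
options = option_pattern.findall(cli_output)
print(options)  # e.g. [('--model-id', 'MODEL_ID')]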
Nuremberg (/ˈnjʊərəmbɜːrɡ/ NURE-əm-burg; German: Nürnberg [ˈnʏʁnbɛʁk]; in the local East Franconian dialect: Nämberch [ˈnɛmbɛrç]) is the second-largest city of the German state of Bavaria after its capital Munich, and its 541,000 inhabitants[3] make it the 14th-largest city in Germany. On the Pegnitz River (from its confluence with the Rednitz in Fürth onwards: Regnitz, a tributary of the River Main) and the Rhine–Main–Danube Canal, it lies in the Bavarian administrative region of Middle Franconia, and is the largest city and the unofficial capital of Franconia. Nuremberg forms with the neighbouring cities of Fürth, Erlangen and Schwabach a continuous conurbation with a total population of 800,376 (2019), which is the heart of the urban area with around 1.4 million inhabitants,[4] while the larger Nuremberg Metropolitan Region has approximately 3.6 million inhabitants. The city lies about 170 kilometres (110 mi) north of Munich. It is the largest city in the East Franconian dialect area (collo