

Hamel Husain hamelsmu

@hamelsmu
hamelsmu / modal.md
Last active May 6, 2024 21:58
Interesting learnings from modal

Mandatory Packages

The script below prints all the packages Modal installs in every image. If these packages cannot be installed, the image will not be built and you will have a bad time.

from modal import Image, App
app = App("01a_modal_requirements")

def see_reqs():
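
The preview above is truncated. A minimal sketch of a complete version might look like the following (the debian_slim image, the @app.function decorator, and the pip list body are assumptions, not necessarily the original gist's code):

import subprocess
from modal import Image, App

app = App("01a_modal_requirements")

@app.function(image=Image.debian_slim())  # a bare image, so only Modal's injected packages show up
def see_reqs():
    # Print every package pip can see inside the container.
    print(subprocess.run(["pip", "list"], capture_output=True, text=True).stdout)

@app.local_entrypoint()
def main():
    see_reqs.remote()

Running this file with `modal run` executes see_reqs remotely and prints the mandatory packages Modal bakes into the container.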
@hamelsmu
hamelsmu / webhook-circleback.py
Created April 25, 2024 04:59
Generate a project proposal automatically from a meeting transcript
from fastapi import Request, HTTPException
from pydantic import BaseModel, HttpUrl
from modal import Secret, App, web_endpoint, Image
from typing import Optional, List
from example import proposal
import os
app = App(name="circleback", image=Image.debian_slim().pip_install("openai", "pydantic", "fastapi"))
class Attendee(BaseModel):
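
The preview cuts off at the Attendee model. Purely as an illustrative sketch (the field names, secret name, and handler below are assumptions, not the gist's actual code), a Modal web endpoint for a webhook like this is typically wired up along these lines:

class Attendee(BaseModel):
    name: str
    email: Optional[str] = None  # illustrative fields; the real model is truncated above

class WebhookPayload(BaseModel):
    transcript: str
    attendees: List[Attendee]

@app.function(secrets=[Secret.from_name("my-openai-secret")])  # hypothetical secret name
@web_endpoint(method="POST")
def generate_proposal(payload: WebhookPayload):
    # Feed the transcript (plus the `proposal` example) to an LLM and return a drafted proposal.
    ...
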
@hamelsmu
hamelsmu / is_fine_tuning_valuable.md
Last active April 4, 2024 01:22
My thoughts re: Is fine tuning still valuable?

Here is my personal opinion about the questions I posed in this tweet:


I think that fine-tuning is still very valuable in many situations. I’ve done some more digging, and I find that people who say fine-tuning isn't useful are often working on products where fine-tuning isn't likely to help:

  • They are making developer tools, where foundation models have already been trained extensively on coding tasks.
  • They are building foundation models and testing them on the most general cases, which is exactly what the foundation models themselves are trained for.
  • They are building a personal assistant that isn’t scoped to any particular domain or use case, which puts them in essentially the same position as the folks building foundation models.
base_model: mistralai/Mistral-7B-v0.1
model_type: MistralForCausalLM
tokenizer_type: LlamaTokenizer
is_mistral_derived_model: true
load_in_8bit: false
load_in_4bit: true
strict: false
lora_fan_in_fan_out: false
{"conversations":[{"from":"system","value":"Honeycomb is an observability platform that allows you to write queries to inspect trace data. You are an assistant that takes a natural language query (NLQ) and a list of valid columns and produce a Honeycomb query."},{"from":"human","value":"\n\nNLQ: \"How many error messages have occurred?\"\n\nColumns: ['error', 'error_count', 'error.type', 'error.msg', 'error.cause_chain', 'email', 'target', 'code.lineno', 'latency', 'id', 'name', 'http.url', 'transaction.id', 'client_reference', 'code.namespace', 'type', 'duration_ms', 'provider', 'account.id', 'internal.payment.missing_in_adapter', 'service.name', 'http.method', 'operation', 'busy_ns', 'idle_ns', 'status_code', 'library.name', 'http.host', 'classification', 'invitation_id', 'thread.id', 'entity_type', 'thread.name', 'library.version', 'code.filepath', 'customer.payment.outbound.missing_in_internal', 'level', 'claim_id', 'parent_name', 'adapter.payment.missing_in_internal', 'net.host.port', 'service.version',
@hamelsmu
hamelsmu / axolotl_prompt_notes.md
Last active February 27, 2024 21:54
axolotl_notes_prompt.md

Axolotl Notes For Prompt Construction

The function parse_instruction_fields generates a tuple (instruction, input, response). The key to supporting a new input format is making sure you can parse your input into these parts and handle them; see the sketch after the snippet below.

prompt_tokenizers.InstructionPromptTokenizingStrategy

    def tokenize_prompt(self, prompt):
        (
            instruction,
            input,  # pylint: disable=redefined-builtin
            response,
        ) = self.parse_instruction_fields(prompt)
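
For example, a custom strategy for a dataset with question/context/answer fields only needs to map those fields onto that tuple. The class and field names below are illustrative, not part of the axolotl codebase:

class QAPromptTokenizingStrategy(InstructionPromptTokenizingStrategy):
    # Illustrative subclass: map a {"question", "context", "answer"} record
    # onto the (instruction, input, response) tuple that tokenize_prompt expects.
    def parse_instruction_fields(self, prompt):
        return (
            prompt["question"],
            prompt.get("context", ""),
            prompt["answer"],
        )
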
@hamelsmu
hamelsmu / openai_system_msg.md
Created February 16, 2024 01:27
OpenAI System Message

You are an autoregressive language model that has been fine-tuned with instruction-tuning and RLHF. You carefully provide accurate, factual, thoughtful, nuanced answers, and are brilliant at reasoning. If you think there might not be a correct answer, you say so. Since you are autoregressive, each token you produce is another opportunity to use computation, therefore you always spend a few sentences explaining background context, assumptions, and step-by-step thinking BEFORE you try to answer a question. Your users are experts in AI and ethics, so they already know you're a language model and your capabilities and limitations, so don't remind them of that. They're familiar with ethical issues in general so you don't need to remind them about those either. Don't be verbose in your answers, but do provide details and examples where it might help the explanation. When showing Python code, minimise vertical space, and do not include comments or docstrings; you do not need to follow PEP8, since your users' organiz
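
A system message like this is simply the first message in a chat completion request. A minimal sketch using the OpenAI Python client (the model name and user question are placeholders):

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
system_msg = "You are an autoregressive language model that has been fine-tuned..."  # the full message above
resp = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": system_msg},
        {"role": "user", "content": "Why does increasing batch size improve GPU utilization?"},
    ],
)
print(resp.choices[0].message.content)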

{"messages": [{"role": "system", "content": "You are a helpful assistant."}, {"role": "user", "content": "I want to read more books\nCan you please generate one option for how to accomplish this?\nPlease make the option very short, at most one line."}], "model": "gpt-3.5-turbo", "max_tokens": 1000, "n": 1, "stream": true, "temperature": 1.0, "top_p": 1.0}

{"messages": [{"role": "system", "content": "You are a helpful assistant."}, {"role": "user", "content": "I want to read more books\nCan you please generate one option for how to accomplish this?\nPlease make the option very short, at most one line."}], "model": "gpt-3.5-turbo", "max_tokens": 1000, "n": 1, "stream": true, "temperature": 1.0, "top_p": 1.0}

{"messages": [{"role": "system", "content": "You are a helpful assistant."}, {"role": "user", "content": "I want to read more books\nCan you please generate one option for how to accomplish this?\nPlease make the option very short, at most one line."}], "model": "gpt-3.5-turbo", "max_tokens": 1000, "n": 1

@hamelsmu
hamelsmu / SmartLLMChain_logs.txt
Created February 13, 2024 22:58
The logs for SmartLLMChain for this blog post: https://hamel.dev/blog/posts/prompt/
> Entering new SmartLLMChain chain...
Prompt after formatting:
I have a 12 liter jug and a 6 liter jug.I want to measure 6 liters. How do I do it?
Idea 1:
1. Fill the 12 liter jug completely.
2. Pour the contents of the 12 liter jug into the 6 liter jug. This will leave you with 6 liters in the 12 liter jug.
3. Empty the 6 liter jug.
4. Pour the remaining 6 liters from the 12 liter jug into the now empty 6 liter jug.
5. You now have 6 liters in the 6 liter jug.
Idea 2:
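
The log is truncated above. For reference, logs like these come from running the chain with verbose=True; a minimal sketch, assuming the SmartLLMChain from langchain_experimental (the model choice and n_ideas value are assumptions):

from langchain.prompts import PromptTemplate
from langchain_experimental.smart_llm import SmartLLMChain
from langchain_openai import ChatOpenAI

prompt = PromptTemplate.from_template("{question}")
chain = SmartLLMChain(llm=ChatOpenAI(model="gpt-4"), prompt=prompt, n_ideas=2, verbose=True)
chain.run(question="I have a 12 liter jug and a 6 liter jug. I want to measure 6 liters. How do I do it?")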
{"conversations":[{"from":"system","value":"Honeycomb is an observability platform that allows you to write queries to inspect trace data. You are an assistant that takes a natural language query (NLQ) and a list of valid columns and produce a Honeycomb query."},{"from":"human","value":"\n\nNLQ: \"How many error messages have occurred?\"\n\nColumns: ['error', 'error_count', 'error.type', 'error.msg', 'error.cause_chain', 'email', 'target', 'code.lineno', 'latency', 'id', 'name', 'http.url', 'transaction.id', 'client_reference', 'code.namespace', 'type', 'duration_ms', 'provider', 'account.id', 'internal.payment.missing_in_adapter', 'service.name', 'http.method', 'operation', 'busy_ns', 'idle_ns', 'status_code', 'library.name', 'http.host', 'classification', 'invitation_id', 'thread.id', 'entity_type', 'thread.name', 'library.version', 'code.filepath', 'customer.payment.outbound.missing_in_internal', 'level', 'claim_id', 'parent_name', 'adapter.payment.missing_in_internal', 'net.host.port', 'service.version',