Vicki Boykis (veekaybee)
💫 in the latent space
@vatsalsaglani
vatsalsaglani / mistral_ctx.py
Last active July 2, 2024 13:05
Token counting and message token management for MistralAI
from typing import List, Dict, Literal, Union

from transformers import AutoTokenizer


class MistralAICtx:
    def __init__(self, model_name: str):
        assert "mistral" in model_name, "MistralCtx only available for Mistral models"
        self.tokenizer = AutoTokenizer.from_pretrained(
            "mistralai/Mistral-7B-Instruct-v0.2")
@Maximilian-Winter
Maximilian-Winter / gbnf_grammar_generator.py
Last active June 2, 2024 11:33
GBNF grammar generator for always valid function calls and object creation in JSON with llama.cpp
import inspect
import json
import re
import typing
from inspect import isclass, getdoc
from types import NoneType
from pydantic import BaseModel, Field
from pydantic.fields import FieldInfo
from typing import Any, Type, List, get_args, get_origin, Tuple, Union, Optional
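
Only the imports are visible in this preview. The underlying idea is to walk a Pydantic model's fields and emit a GBNF grammar that constrains llama.cpp to generate JSON matching that schema. A heavily reduced sketch of that idea, using the imports above and assuming Pydantic v2 (the helper, rule names, and example model are mine, not the gist's):

PRIMITIVE_RULES = {str: "string", int: "number", float: "number", bool: "boolean"}

def model_to_gbnf(model: Type[BaseModel]) -> str:
    # One key/value pair per model field, each forced to the matching value rule
    pairs = []
    for name, field in model.model_fields.items():
        value_rule = PRIMITIVE_RULES.get(field.annotation, "string")
        pairs.append(f'"\\"{name}\\"" ws ":" ws {value_rule}')
    body = ' "," ws '.join(pairs)
    return (
        f'root ::= "{{" ws {body} ws "}}"\n'
        'string ::= "\\"" [^"]* "\\""\n'
        'number ::= "-"? [0-9]+ ("." [0-9]+)?\n'
        'boolean ::= "true" | "false"\n'
        'ws ::= [ \\t\\n]*\n'
    )

class GetWeather(BaseModel):
    city: str
    days: int

print(model_to_gbnf(GetWeather))  # grammar that only admits {"city": ..., "days": ...}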
@Tostino
Tostino / inkbot-chunked-summary.txt
Last active July 14, 2024 06:28
Generate a chunked summary prompt example
<#meta#>
- Date: 2023-10-05
- Task: summary
<#system#>
Your main objective is to condense the content of the document into a concise summary, capturing the main points and themes.
<#chat#>
<#user#>
Please read the provided Original section to understand the context and content. Use this understanding to generate a summary of the Original section, incorporating relevant details and maintaining coherence with the Prior Summary.
Notes:
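
The preview cuts off at the Notes section. Purely as an illustration (not part of the gist), a per-chunk prompt in this tag format could be assembled along these lines; the closing <#bot#> tag and the Prior Summary/Original layout are guesses from the visible fragment:

from datetime import date

def build_chunk_prompt(prior_summary: str, chunk: str) -> str:
    # Assemble one chunked-summary prompt in the <#...#> tag format shown above
    return (
        "<#meta#>\n"
        f"- Date: {date.today().isoformat()}\n"
        "- Task: summary\n"
        "<#system#>\n"
        "Your main objective is to condense the content of the document into a "
        "concise summary, capturing the main points and themes.\n"
        "<#chat#>\n"
        "<#user#>\n"
        "Please read the provided Original section to understand the context and "
        "content. Use this understanding to generate a summary of the Original "
        "section, incorporating relevant details and maintaining coherence with "
        "the Prior Summary.\n"
        f"Prior Summary:\n{prior_summary}\n"
        f"Original:\n{chunk}\n"
        "<#bot#>\n"  # assumed tag marking where the model's reply starts
    )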
@anna-geller
anna-geller / test.yml
Created June 12, 2023 14:44
For Vicki
id: test
namespace: dev
tasks:
  - id: deploy
    type: io.kestra.core.tasks.flows.Worker
    tasks:
      - id: cloneRepository
        type: io.kestra.plugin.git.Clone
        url: https://github.com/veekaybee/viberary
        branch: main

Reinforcement Learning for Language Models

Yoav Goldberg, April 2023.

Why RL?

With the release of the ChatGPT model and follow-up large language models (LLMs), there was a lot of discussion of the importance of "RLHF training", that is, "reinforcement learning from human feedback". I was puzzled for a while as to why RL (Reinforcement Learning) is better than learning from demonstrations (a.k.a. supervised learning) for training language models. Shouldn't learning from demonstrations (or, in language model terminology, "instruction fine tuning": learning to imitate human-written answers) be sufficient? I came up with a theoretical argument that was somewhat convincing. But I came to realize there is an additional argument which not only supports the case for RL training, but also requires it, in particular for models like ChatGPT. This additional argument is spelled out in (the first half of) a talk by John Schulman from OpenAI. This post pretty much

@niranjv
niranjv / install_python_36_amazon_linux.sh
Last active January 30, 2023 21:49
Install Python 3.6 in Amazon Linux
# A virtualenv running Python3.6 on Amazon Linux/EC2 (approximately) simulates the Python 3.6 Docker container used by Lambda
# and can be used for developing/testing Python 3.6 Lambda functions
# This script installs Python 3.6 on an EC2 instance running Amazon Linux and creates a virtualenv running this version of Python
# This is required because Amazon Linux does not come with Python 3.6 pre-installed
# and several packages available in Amazon Linux are not available in the Lambda Python 3.6 runtime
# The script has been tested successfully on a t2.micro EC2 instance (Root device type: ebs; Virtualization type: hvm)
# running Amazon Linux AMI 2017.03.0 (HVM), SSD Volume Type - ami-c58c1dd3
# and was developed with the help of AWS Support
@ZZR-china
ZZR-china / falskapp-docker-compose.yml
Created January 23, 2017 13:58
simple flask nginx mongo redis docker-compose.yml
nginx:
  restart: always
  build: ./nginx
  ports:
    - "80:80"
  volumes:
    - /www/static
  volumes_from:
    - web
  links:
@andywirv
andywirv / gist:f312d561c9702522f6d4ede1fe2750bd
Created November 22, 2016 19:58
Update a Lambda function's configuration, specifically its environment variables. AWS CLI version aws-cli/1.11.20
# Quoting the JSON: single quotes around the entire object, double quotes on each property
aws lambda update-function-configuration --function-name=[lambda function name] --environment '{"Variables":{"abc":"124"}}'
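
For reference (not part of the gist), the same update can be made from Python with boto3:

import boto3

lambda_client = boto3.client("lambda")

# Equivalent of the CLI call above: replaces the function's environment variables
lambda_client.update_function_configuration(
    FunctionName="my-function",  # placeholder name, substitute your function
    Environment={"Variables": {"abc": "124"}},
)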

Getting Started in Scala

This is my attempt to give Scala newcomers a quick-and-easy rundown of the prerequisite steps they need to a) try Scala, and b) get a standard project up and running on their machine. I'm not going to talk about the language at all; there are plenty of better resources a Google search away. This is just focused on the prerequisite tooling and machine setup. I will not be assuming you have any background in JVM languages. So if you're coming from Python, Ruby, JavaScript, Haskell, or anywhere else… I hope to present the information you need without assuming anything.

Disclaimer It has been over a decade since I was new to Scala, and when I was new to Scala, I was coming from a Java and Ruby background. This has probably caused me to unknowingly make some assumptions. Please feel free to call me out in comments/tweets!

One assumption I'm knowingly making is that you're on a Unix-like platform. Sorry, Windows users.

Getting the JVM

@cirla
cirla / Philly PUG 2016-03-24.ipynb
Created March 24, 2016 22:56
Analyzing Philadelphia 2012 General Election Results with PySpark