
@veekaybee
veekaybee / normcore-llm.md
Last active May 6, 2024 16:10
Normcore LLM Reads

Anti-hype LLM reading list

Goals: Add links that are reasonable and good explanations of how stuff works. No hype and no vendor content if possible. Practical first-hand accounts of models in prod eagerly sought.

Foundational Concepts

Pre-Transformer Models

@momentmaker
momentmaker / share.md
Last active May 6, 2024 01:36
sharing interesting links

health

wellnessfx - Improve health, fitness, wellness through blood tests, doctors, nutrition

driphydration - Drip Hydration Mobile IV Therapy | NAD, Dehydration, Hangover, Flu, Myers Cocktail

elysiumhealth - Breakthroughs in aging research are no longer a myth of science fiction but scientific fact. Elysium makes first-of-their-kind health products and technologies backed by partnerships with institutions at the forefront of aging research.

obasan - Obasan Mattress

@imba-tjd
imba-tjd / .Cloud.md
Last active May 5, 2024 13:48
☁️ Some free cloud resources

IaaS means the provider supplies hardware, such as a system (which you can choose yourself) or storage space, and you install the software manually. PaaS provides the language runtime and framework (which you can choose). SaaS only lets you use ready-made software (the software itself is the product). BaaS is usually something like a non-relational database, though offerings are not interoperable across vendors and sometimes bundle other things.

Other people's collections

@rain-1
rain-1 / LLM.md
Last active May 5, 2024 07:13
LLM Introduction: Learn Language Models

Purpose

Bootstrap knowledge of LLMs ASAP, with a bias/focus toward GPT.

Avoid being a link dump. Try to provide only valuable, well-tuned information.

Prelude

Neural network links before starting with transformers.

@piscisaureus
piscisaureus / pr.md
Created August 13, 2012 16:12
Checkout github pull requests locally

Locate the section for your GitHub remote in the .git/config file. It looks like this:

[remote "origin"]
	fetch = +refs/heads/*:refs/remotes/origin/*
	url = git@github.com:joyent/node.git

Now add the line fetch = +refs/pull/*/head:refs/remotes/origin/pr/* to this section. Obviously, change the GitHub URL to match your project's URL. It ends up looking like this:
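[remote "origin"]
	fetch = +refs/heads/*:refs/remotes/origin/*
	url = git@github.com:joyent/node.git
	fetch = +refs/pull/*/head:refs/remotes/origin/pr/*

Once that line is in place, git fetch origin downloads every pull request head, and any single pull request, say a hypothetical #999, can be checked out locally with git checkout pr/999.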

@TengdaHan
TengdaHan / ddp_notes.md
Last active April 22, 2024 00:19
Multi-node-training on slurm with PyTorch

Multi-node-training on slurm with PyTorch

What's this?

  • A simple note on how to start multi-node training on the slurm scheduler with PyTorch (a minimal sketch follows this list).
  • Especially useful when the scheduler is too busy to allocate multiple GPUs to you, or when you need more than 4 GPUs for a single job.
  • Requirement: you have to use PyTorch DistributedDataParallel (DDP) for this purpose.
  • Warning: you might need to refactor your own code.
  • Warning: you might be quietly resented by your colleagues for using too many GPUs.
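A minimal sketch of what that setup can look like (my own illustration under stated assumptions, not the gist's code: one slurm task per GPU, the NCCL backend, and MASTER_ADDR/MASTER_PORT exported in the job script):

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def init_distributed():
    # Map slurm's environment variables onto the ones torch.distributed
    # expects. Assumes the job was launched with one task per GPU, e.g.
    #   srun --nodes=2 --ntasks-per-node=4 --gpus-per-node=4 python train.py
    rank = int(os.environ["SLURM_PROCID"])         # global rank across all nodes
    world_size = int(os.environ["SLURM_NTASKS"])   # total number of processes
    local_rank = int(os.environ["SLURM_LOCALID"])  # rank within this node
    # MASTER_ADDR/MASTER_PORT must be exported in the job script, e.g.
    #   export MASTER_ADDR=$(scontrol show hostnames "$SLURM_JOB_NODELIST" | head -n1)
    #   export MASTER_PORT=29500
    dist.init_process_group("nccl", rank=rank, world_size=world_size)
    torch.cuda.set_device(local_rank)
    return local_rank

if __name__ == "__main__":
    local_rank = init_distributed()
    model = torch.nn.Linear(128, 128).cuda(local_rank)
    # DDP synchronizes gradients across all processes during backward()
    model = DDP(model, device_ids=[local_rank])
```
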
@AliMD
AliMD / gist:3344523
Created August 13, 2012 22:28
All github Emoji (Smiles)

All github Emoji (Smiles)

ali.md/emoji

:bowtie: | 😄 | 😆 | 😊 | 😃 | ☺️ | 😏 | 😍 | 😘 | :kissing_face: | 😳 | 😌 | 😆 | 😁 | 😉 | :wink2: | 👅 | 😒 | 😅 | 😓

😩 | 😔 | 😞 | 😖 | 😨 | 😰 | 😣 | 😢 | 😭 | 😂 | 😲 | 😱 | :neckbeard: | 😫 | 😠 | 😡 | 😤 | 😪 | 😋 | 😷

😎 | 😵 | 👿 | 😈 | 😐 | 😶 | 😇 | 👽 | 💛 | 💙 | 💜 | ❤️ | 💚 | 💔 | 💓 | 💗 | 💕 | 💞 | 💘 | ✨

Reinforcement Learning for Language Models

Yoav Goldberg, April 2023.

Why RL?

With the release of the ChatGPT model and follow-up large language models (LLMs), there was a lot of discussion of the importance of "RLHF training", that is, "reinforcement learning from human feedback". I was puzzled for a while as to why RL (Reinforcement Learning) is better than learning from demonstrations (a.k.a. supervised learning) for training language models. Shouldn't learning from demonstrations (or, in language model terminology, "instruction fine-tuning", learning to imitate human-written answers) be sufficient? I came up with a theoretical argument that was somewhat convincing. But I came to realize there is an additional argument which not only supports the case for RL training, but also requires it, in particular for models like ChatGPT. This additional argument is spelled out in (the first half of) a talk by John Schulman from OpenAI. This post pretty much
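For concreteness, here is the standard way the two training objectives are written (a sketch in common notation, not quoted from the post or the talk). Supervised instruction fine-tuning maximizes the likelihood of human-written demonstrations:

$$\mathcal{L}_{\text{SFT}}(\theta) = \mathbb{E}_{(x,y)\sim D}\big[\log \pi_\theta(y \mid x)\big]$$

RLHF instead samples answers from the model itself and scores them with a learned reward model $r_\phi$, typically with a KL penalty keeping the policy $\pi_\theta$ close to the fine-tuned starting point $\pi_{\text{SFT}}$:

$$\mathcal{J}_{\text{RLHF}}(\theta) = \mathbb{E}_{x \sim D,\, y \sim \pi_\theta(\cdot \mid x)}\big[r_\phi(x,y)\big] - \beta\, \mathbb{E}_{x \sim D}\Big[\mathrm{KL}\big(\pi_\theta(\cdot \mid x) \,\big\|\, \pi_{\text{SFT}}(\cdot \mid x)\big)\Big]$$

The structural difference: under supervised fine-tuning the targets $y$ are always human-written, while under RL the expectation runs over the model's own samples, so the model is rewarded or penalized for what it actually generates.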

@jdnichollsc
jdnichollsc / ABC.md
Last active April 16, 2024 03:40
The Job Interview Guide

The Job Interview Guide 💼

And English is a Work in Progress ⌛