Björn Bebensee qmdnls

@MohamedAlaa
MohamedAlaa / tmux-cheatsheet.markdown
Last active April 16, 2024 12:17
tmux shortcuts & cheatsheet

start new:

tmux

start new with session name:

tmux new -s myname
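
attach to a named session (a standard companion command; the preview above is truncated, so this entry is added for completeness):

tmux attach -t myname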
@gunjanpatel
gunjanpatel / revert-a-commit.md
Last active April 12, 2024 15:18
Git HowTo: revert a commit already pushed to a remote repository

Revert the full commit

Sometimes you may want to undo a whole commit with all its changes. Instead of undoing all the changes manually, you can simply tell git to revert the commit, which does not even have to be the last one. Reverting a commit means creating a new commit that undoes all the changes made in the bad commit. The bad commit remains in the history, but it no longer affects the current master or any future commits on top of it.

git revert {commit_id}
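
For example, a typical sequence (using an illustrative commit id) would be to revert the bad commit and then publish the resulting revert commit:

git revert dd61ab32
git push origin master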

About History Rewriting

Delete the last commit

Deleting the last commit is the easiest case. Let's say we have a remote origin with branch master that currently points to commit dd61ab32. We want to remove the top commit. Translated to git terminology, we want to force the master branch of the origin remote repository to the parent of dd61ab32:
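git push origin +dd61ab32^:master

(This command is reconstructed from the description above: dd61ab32^ denotes the parent of dd61ab32, and the leading + forces the push. Note that force-pushing rewrites public history, so anyone who already pulled the deleted commit will have to recover manually.)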

@nzec
nzec / README.MD
Last active February 23, 2024 01:08
DeezLoader Official Page

Thanks to /u/zpoo32 for reporting several issues in this list!

Deemix

  • deemix: just the CLI and the library
  • deemix-pyweb: the app with a GUI
  • deemix-server: just the server part of deemix-pyweb
@federicodangelo
federicodangelo / LineSimplification.cs
Created September 26, 2018 18:42
Iterative version of Ramer–Douglas–Peucker line simplification algorithm
using System;
using System.Collections.Generic;
using System.Drawing;

// https://en.wikipedia.org/wiki/Ramer%E2%80%93Douglas%E2%80%93Peucker_algorithm
namespace Math.Helpers
{
    public static class RamerDouglasPeuckerAlgorithm
    {
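        // A minimal sketch of how the iterative version might proceed (the
        // preview above is truncated; the member names below are illustrative,
        // not necessarily the gist's own). Recursion is replaced by an
        // explicit stack of (first, last) index ranges.
        public static List<PointF> Simplify(IReadOnlyList<PointF> points, float tolerance)
        {
            if (points.Count < 2)
                return new List<PointF>(points);

            var keep = new bool[points.Count];
            keep[0] = keep[points.Count - 1] = true;

            var ranges = new Stack<(int First, int Last)>();
            ranges.Push((0, points.Count - 1));

            while (ranges.Count > 0)
            {
                var (first, last) = ranges.Pop();

                // Find the point farthest from the segment (first, last)
                float maxDist = 0f;
                int index = -1;
                for (int i = first + 1; i < last; i++)
                {
                    float d = PerpendicularDistance(points[i], points[first], points[last]);
                    if (d > maxDist) { maxDist = d; index = i; }
                }

                if (index != -1 && maxDist > tolerance)
                {
                    keep[index] = true;          // the farthest point survives
                    ranges.Push((first, index)); // simplify both halves iteratively
                    ranges.Push((index, last));
                }
            }

            var result = new List<PointF>();
            for (int i = 0; i < points.Count; i++)
                if (keep[i]) result.Add(points[i]);
            return result;
        }

        // Distance from p to the infinite line through a and b
        private static float PerpendicularDistance(PointF p, PointF a, PointF b)
        {
            float dx = b.X - a.X, dy = b.Y - a.Y;
            float len = (float)System.Math.Sqrt(dx * dx + dy * dy);
            if (len == 0f)
                return (float)System.Math.Sqrt((p.X - a.X) * (p.X - a.X) + (p.Y - a.Y) * (p.Y - a.Y));
            return System.Math.Abs(dy * p.X - dx * p.Y + b.X * a.Y - b.Y * a.X) / len;
        }
    }
}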
@IanColdwater
IanColdwater / twittermute.txt
Last active April 3, 2024 19:43
Here are some terms to mute on Twitter to clean your timeline up a bit.
Mute these words in your settings here: https://twitter.com/settings/muted_keywords
ActivityTweet
generic_activity_highlights
generic_activity_momentsbreaking
RankedOrganicTweet
suggest_activity
suggest_activity_feed
suggest_activity_highlights
suggest_activity_tweet
@xiaohk
xiaohk / arxiv-preparation.md
Last active December 11, 2023 15:20
Prepare for an arXiv submission

Submission Steps

  1. Download source code from Overleaf if you use it: menu -> download -> source.

  2. Strip comments and combine all tex files (f01-main.tex, f02-intro.tex, etc.) into one file arxiv_main.tex.

# Replace f01-main.tex with the main tex file in your overleaf project
latexpand --empty-comments f01-main.tex > arxiv_main.tex
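# Optional sanity check (an assumed step, not part of the preview above):
# confirm the combined file still compiles before uploading
pdflatex arxiv_main.tex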
@taviso
taviso / .Xresources
Last active April 4, 2024 10:13
XTerm Configuration
! XTerm resources
!
! Remember to run `xrdb < .Xresources` after changing anything.
!
! Tavis Ormandy <taviso@gmail.com>
! Set the default UI font (menus, toolbar, etc)
XTerm*XftFont: Segoe UI:size=10:antialias=true:style=Regular
! Color of UI Components
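! (Illustrative continuation, since the preview above is cut off; the
! values below are examples, not necessarily the gist's own.)
XTerm*background: #000000
XTerm*foreground: #d8d8d8
XTerm*cursorColor: #d8d8d8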
@yoavg
yoavg / LLMs.md
Last active February 17, 2024 18:39

Some remarks on Large Language Models

Yoav Goldberg, January 2023

Audience: I assume you heard of chatGPT, maybe played with it a little, and were impressed by it (or tried very hard not to be). And that you also heard that it is "a large language model". And maybe that it "solved natural language understanding". Here is a short personal perspective of my thoughts on this (and similar) models, and where we stand with respect to language understanding.

Intro

Around 2014-2017, right at the rise of neural-network-based methods for NLP, I was giving a semi-academic, semi-popsci lecture revolving around the story that achieving perfect language modeling is equivalent to being as intelligent as a human. Somewhere around the same time I was also asked on an academic panel "what would you do if you were given infinite compute and no need to worry about labour costs", to which I cockily responded "I would train a really huge language model, just to show that it doesn't solve everything!". We

Reinforcement Learning for Language Models

Yoav Goldberg, April 2023.

Why RL?

With the release of the ChatGPT model and follow-up large language models (LLMs), there was a lot of discussion of the importance of "RLHF training", that is, "reinforcement learning from human feedback". I was puzzled for a while as to why RL (reinforcement learning) is better than learning from demonstrations (a.k.a. supervised learning) for training language models. Shouldn't learning from demonstrations (or, in language-model terminology, "instruction fine-tuning", learning to imitate human-written answers) be sufficient? I came up with a theoretical argument that was somewhat convincing. But I came to realize there is an additional argument which not only supports the case for RL training, but also requires it, in particular for models like ChatGPT. This additional argument is spelled out in (the first half of) a talk by John Schulman from OpenAI. This post pretty much