@gocarlos
gocarlos / Eigen Cheat sheet
Last active July 3, 2024 07:33
Cheat sheet for the linear algebra library Eigen: http://eigen.tuxfamily.org/
// A simple quickref for Eigen. Add anything that's missing.
// Main author: Keir Mierle
#include <Eigen/Dense>

using namespace Eigen; // Matrix, Dynamic, RowMajor live in the Eigen namespace.
Matrix<double, 3, 3> A; // Fixed rows and cols. Same as Matrix3d.
Matrix<double, 3, Dynamic> B; // Fixed rows, dynamic cols.
Matrix<double, Dynamic, Dynamic> C; // Full dynamic. Same as MatrixXd.
Matrix<double, 3, 3, RowMajor> E; // Row major; default is column-major.
Matrix3f P, Q, R; // 3x3 float matrices.
@sebjai
sebjai / short_term_alpha.ipynb
Last active July 25, 2024 08:35
Market Making in Short-Term Alpha (Chapter 10.4.2 of Algorithmic and High-Frequency Trading by Cartea, Jaimungal, Penalva, published by Cambridge University Press)
@darwing1210
darwing1210 / async_download_files.py
Last active July 19, 2024 20:23
Script to download files in an async way, using Python asyncio
import os
import asyncio
import aiohttp # pip install aiohttp
import aiofile # pip install aiofile
REPORTS_FOLDER = "reports"
FILES_PATH = os.path.join(REPORTS_FOLDER, "files")
def download_files_from_report(urls):
    # The gist preview cuts off here; what follows is a minimal sketch of
    # the rest, not the author's original body: fetch all URLs concurrently
    # and stream each response to disk.
    async def fetch(session, url):
        async with session.get(url) as resp:
            dest = os.path.join(FILES_PATH, os.path.basename(url))
            async with aiofile.async_open(dest, "wb") as f:
                await f.write(await resp.read())

    async def main():
        os.makedirs(FILES_PATH, exist_ok=True)
        async with aiohttp.ClientSession() as session:
            await asyncio.gather(*(fetch(session, u) for u in urls))

    asyncio.run(main())
@mcarilli
mcarilli / nsight.sh
Last active July 25, 2024 14:01
Favorite Nsight Systems profiling commands for PyTorch scripts
# This isn't supposed to run as a bash script; I named it ".sh" for syntax highlighting.
# https://developer.nvidia.com/nsight-systems
# https://docs.nvidia.com/nsight-systems/profiling/index.html
# My preferred nsys (command line executable used to create profiles) commands
#
# In your script, wrap each region of interest with a push/pop pair:
# torch.cuda.nvtx.range_push("region name")
# ...
# torch.cuda.nvtx.range_pop()
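#
# A typical invocation might then look like the sketch below. The flag choices
# are my assumption, not necessarily the author's preferred set, and
# "train.py" is a placeholder script name:
#
#   nsys profile -t cuda,nvtx -s cpu -o my_profile -f true python train.py
#
# -t picks the APIs to trace (here CUDA calls and the NVTX ranges above),
# -s cpu enables CPU sampling, -o names the output report, and -f true
# overwrites an existing report of the same name.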
@hoakbuilds
hoakbuilds / Solana
Created September 28, 2021 12:12
Solana dev resources
Guides / Walkthroughs
Intro to Programming on Solana
https://paulx.dev/blog/2021/01/14/programming-on-solana-an-introduction/
Development Tutorial by Solong
https://solongwallet.medium.com/solana-development-tutorial-things-you-should-know-before-structuring-your-code-807f0e2ee43
Intro to Anchor Framework
https://project-serum.github.io/anchor/getting-started/introduction.html
@ezyang
ezyang / gist:2fe72ebb73a2c4c348bbe2cac1cbcd32
Created January 22, 2022 01:38
CSCI-UA.490: Special Topics in Programming Languages - 2022 syllabus
Mon, Jan 24 - Introduction / JavaScript
Wed, Jan 26 - Haskell Basics
Mon, Jan 31 - Haskell Basics 2 (HW1 basic)
Wed, Feb 2 - Algebraic Data Types / QuickCheck
Mon, Feb 7 - Lambda Calculus (HW2 calculator)
Wed, Feb 9 - Flex Slot for Haskell
Mon, Feb 14 - Type Classes (HW3 lambda)

Some remarks on Large Language Models

Yoav Goldberg, January 2023

Audience: I assume you have heard of ChatGPT, maybe played with it a little, and were impressed by it (or tried very hard not to be). And that you have also heard that it is "a large language model". And maybe that it "solved natural language understanding". Here is a short personal perspective on my thoughts about this (and similar) models, and where we stand with respect to language understanding.

Intro

Around 2014-2017, right within the rise of neural-network-based methods for NLP, I was giving a semi-academic, semi-popsci lecture revolving around the story that achieving perfect language modeling is equivalent to being as intelligent as a human. Somewhere around the same time I was also asked on an academic panel "what would you do if you were given infinite compute and no need to worry about labour costs," to which I cockily responded "I would train a really huge language model, just to show that it doesn't solve everything!"

@akhan619
akhan619 / tokenizers.md
Last active October 31, 2023 10:22
Exploring Tokenizers from Hugging Face

Exploring Tokenizers from Hugging Face

Hugging Face (HF) has made NLP (Natural Language Processing) a breeze. In this post, we are going to take a look at tokenization with a hands-on approach, using the Tokenizers library. We are going to load a real-world dataset containing the 10-K filings of public firms and see how to train a tokenizer from scratch based on the BERT tokenization scheme. In the process we will understand tokenization in detail and some gotchas to keep an eye out for.

Background on NLP (Optional)

If you already have an understanding of the NLP pipeline, you can safely skip this section.

For any NLP task, one of the first steps is pre-processing the data so that it can be fed into our NLP models. For those new to NLP, the general pipeline for any NLP task (text classification, question answering, etc.) starts the same way: raw text must be split into tokens and mapped to numerical IDs before a model can consume it.
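As a taste of where the post is headed, here is a minimal sketch of training a BERT-style WordPiece tokenizer from scratch with the Tokenizers library. The corpus file name and vocabulary size are illustrative placeholders, not the post's actual values:

from tokenizers import BertWordPieceTokenizer

# Train a WordPiece tokenizer (the scheme BERT uses) from a raw text corpus.
# "filings.txt" stands in for the 10-K filings dataset.
tokenizer = BertWordPieceTokenizer(lowercase=True)
tokenizer.train(
    files=["filings.txt"],
    vocab_size=30_000,
    special_tokens=["[PAD]", "[UNK]", "[CLS]", "[SEP]", "[MASK]"],
)

# Inspect the result on a sample sentence.
print(tokenizer.encode("The company reported quarterly earnings.").tokens)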

@rain-1
rain-1 / LLM.md
Last active July 25, 2024 18:44
LLM Introduction: Learn Language Models

Purpose

Bootstrap knowledge of LLMs ASAP, with a bias/focus toward GPT.

Avoid being a link dump. Try to provide only valuable, well-tuned information.

Prelude

Neural network links to read before starting with transformers.