// A simple quickref for Eigen. Add anything that's missing.
// Main author: Keir Mierle

#include <Eigen/Dense>

using namespace Eigen;                // Matrix, Matrix3f, etc. live in the Eigen namespace.

Matrix<double, 3, 3> A;               // Fixed rows and cols. Same as Matrix3d.
Matrix<double, 3, Dynamic> B;         // Fixed rows, dynamic cols.
Matrix<double, Dynamic, Dynamic> C;   // Fully dynamic. Same as MatrixXd.
Matrix<double, 3, 3, RowMajor> E;     // Row major; default is column-major.
Matrix3f P, Q, R;                     // 3x3 float matrices.
import os
import asyncio
import aiohttp  # pip install aiohttp
import aiofile  # pip install aiofile

REPORTS_FOLDER = "reports"
FILES_PATH = os.path.join(REPORTS_FOLDER, "files")

def download_files_from_report(urls):
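    # The body below is a hedged sketch of one plausible implementation, not
    # the author's code: download every URL concurrently and write each file
    # under FILES_PATH. The filename scheme is a placeholder assumption.
    os.makedirs(FILES_PATH, exist_ok=True)

    async def download_one(session, url):
        # Derive a local filename from the last URL segment (placeholder scheme).
        filename = os.path.join(FILES_PATH, url.rsplit("/", 1)[-1])
        async with session.get(url) as response:
            async with aiofile.async_open(filename, "wb") as f:
                await f.write(await response.read())

    async def download_all():
        # One shared HTTP session; gather() runs all downloads concurrently.
        async with aiohttp.ClientSession() as session:
            await asyncio.gather(*(download_one(session, u) for u in urls))

    asyncio.run(download_all())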
# This isn't supposed to run as a bash script; I named it with ".sh" for syntax highlighting.
# https://developer.nvidia.com/nsight-systems
# https://docs.nvidia.com/nsight-systems/profiling/index.html
# My preferred nsys (command-line executable used to create profiles) commands
#
# In your script, write
#     torch.cuda.nvtx.range_push("region name")
#     ...   # the code you want timed under that region
#     torch.cuda.nvtx.range_pop()
# to bracket regions of interest with named NVTX ranges.
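#
# For example, a minimal sketch in Python (the model, optimizer, region names,
# and script name are placeholders, not from the original notes):
#
#     import torch
#
#     def train_step(model, batch, optimizer):
#         torch.cuda.nvtx.range_push("forward")
#         loss = model(batch).sum()          # stand-in loss
#         torch.cuda.nvtx.range_pop()
#         torch.cuda.nvtx.range_push("backward")
#         loss.backward()
#         torch.cuda.nvtx.range_pop()
#         torch.cuda.nvtx.range_push("optimizer step")
#         optimizer.step()
#         optimizer.zero_grad()
#         torch.cuda.nvtx.range_pop()
#
# Each range then shows up as a labeled span on the Nsight Systems timeline,
# e.g. when profiling with:
#     nsys profile -t cuda,nvtx -o my_profile python train.py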
Guides / Walkthroughs
Intro to Programming on Solana
https://paulx.dev/blog/2021/01/14/programming-on-solana-an-introduction/
Development Tutorial by Solong
https://solongwallet.medium.com/solana-development-tutorial-things-you-should-know-before-structuring-your-code-807f0e2ee43
Intro to Anchor Framework
https://project-serum.github.io/anchor/getting-started/introduction.html
Mon, Jan 24 - Introduction / JavaScript
Wed, Jan 26 - Haskell Basics
Mon, Jan 31 - Haskell Basics 2 (HW1 basic)
Wed, Feb 2 - Algebraic Data Types / QuickCheck
Mon, Feb 7 - Lambda Calculus (HW2 calculator)
Wed, Feb 9 - Flex Slot for Haskell
Mon, Feb 14 - Type Classes (HW3 lambda)
Audience: I assume you have heard of chatGPT, maybe played with it a little, and were impressed by it (or tried very hard not to be). And that you have also heard that it is "a large language model". And maybe that it "solved natural language understanding". Here is a short personal perspective on my thoughts about this (and similar) models, and where we stand with respect to language understanding.
Around 2014-2017, right during the rise of neural-network-based methods for NLP, I was giving a semi-academic, semi-popsci lecture revolving around the story that achieving perfect language modeling is equivalent to being as intelligent as a human. Somewhere around the same time I was also asked on an academic panel "what would you do if you were given infinite compute and no need to worry about labour costs", to which I cockily responded "I would train a really huge language model, just to show that it doesn't solve everything!". We
Hugging Face (HF) has made NLP (Natural Language Processing) a breeze. In this post, we are going to take a look at tokenization using a hands-on approach with the help of the Tokenizers library. We are going to load a real-world dataset containing 10-K filings of public firms and see how to train a tokenizer from scratch based on the BERT tokenization scheme. In the process we will understand tokenization in detail and some gotchas to keep an eye out for.
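To make that concrete before we dive in, here is a minimal sketch of training a WordPiece tokenizer (the scheme BERT uses) with the Tokenizers library. The corpus file "filings.txt" and the vocabulary size are placeholder choices for illustration, not the dataset or settings used later in the post:

from tokenizers import Tokenizer
from tokenizers.models import WordPiece
from tokenizers.pre_tokenizers import Whitespace
from tokenizers.trainers import WordPieceTrainer

# Start from an untrained WordPiece model; [UNK] covers out-of-vocabulary text.
tokenizer = Tokenizer(WordPiece(unk_token="[UNK]"))
tokenizer.pre_tokenizer = Whitespace()  # split on whitespace/punctuation first

trainer = WordPieceTrainer(
    vocab_size=30522,  # BERT-base vocabulary size
    special_tokens=["[UNK]", "[CLS]", "[SEP]", "[PAD]", "[MASK]"],
)
tokenizer.train(["filings.txt"], trainer)  # "filings.txt" is a placeholder corpus

print(tokenizer.encode("Hugging Face makes NLP a breeze.").tokens)

In the full BERT scheme, text is also normalized before pre-tokenization (e.g. lowercased for uncased models); the library exposes this via configurable normalizers, which we will get to below.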
If you already have an understanding of the NLP pipeline, you can safely skip this section.
For any NLP task, one of the first steps is pre-processing the data so that it can be fed into our NLP models. For those new to NLP, the general pipeline for any NLP task (text classification, question answering, etc.) is as follows: