Matías Agustín Méndez (matagus)
@willurd
willurd / web-servers.md
Last active May 8, 2024 18:19
Big list of http static server one-liners

Each of these commands will run an ad hoc http static server in your current (or specified) directory, available at http://localhost:8000. Use this power wisely.

Discussion on reddit.

Python 2.x

$ python -m SimpleHTTPServer 8000
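On Python 3, SimpleHTTPServer was merged into the http.server module, so the equivalent one-liner is:

$ python -m http.server 8000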
@jboner
jboner / latency.txt
Last active May 8, 2024 16:32
Latency Numbers Every Programmer Should Know
Latency Comparison Numbers (~2012)
----------------------------------
L1 cache reference                        0.5 ns
Branch mispredict                           5 ns
L2 cache reference                          7 ns   14x L1 cache
Mutex lock/unlock                          25 ns
Main memory reference                     100 ns   20x L2 cache, 200x L1 cache
Compress 1K bytes with Zippy            3,000 ns   3 us
Send 1K bytes over 1 Gbps network      10,000 ns   10 us
Read 4K randomly from SSD*            150,000 ns   150 us   ~1GB/sec SSD
@gokulkrishh
gokulkrishh / media-query.css
Last active May 8, 2024 06:59
CSS Media Queries for Desktop, Tablet, Mobile.
/*
##Device = Desktops
##Screen = 1281px to higher resolution desktops
*/
@media (min-width: 1281px) {
  /* CSS */
}
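The file continues with one block per device class; a tablet range, for example, follows the same shape (the 768px–1024px breakpoints are the common convention, shown here for illustration):

/*
  ##Device = Tablets, iPads (portrait)
  ##Screen = 768px to 1024px
*/
@media (min-width: 768px) and (max-width: 1024px) {
  /* CSS */
}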
@hellerbarde
hellerbarde / latency.markdown
Created May 31, 2012 13:16 — forked from jboner/latency.txt
Latency numbers every programmer should know

L1 cache reference ......................... 0.5 ns
Branch mispredict ............................ 5 ns
L2 cache reference ........................... 7 ns
Mutex lock/unlock ........................... 25 ns
Main memory reference ...................... 100 ns             
Compress 1K bytes with Zippy ............. 3,000 ns  =   3 µs
Send 2K bytes over 1 Gbps network ....... 20,000 ns  =  20 µs
SSD random read ........................ 150,000 ns  = 150 µs

Read 1 MB sequentially from memory ..... 250,000 ns = 250 µs
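To make these gaps tangible, here is a small illustrative Python sketch (not from the gist; the figures are copied from the table above) that rescales everything so one L1 cache reference takes a full second:

# Rescale the table so an L1 cache reference (0.5 ns) lasts one second,
# which makes the relative gaps easier to feel.
LATENCIES_NS = {
    "L1 cache reference": 0.5,
    "Branch mispredict": 5,
    "L2 cache reference": 7,
    "Mutex lock/unlock": 25,
    "Main memory reference": 100,
    "Compress 1K bytes with Zippy": 3_000,
    "Send 2K bytes over 1 Gbps network": 20_000,
    "SSD random read": 150_000,
    "Read 1 MB sequentially from memory": 250_000,
}

SECONDS_PER_NS = 1 / 0.5  # chosen so the L1 reference maps to exactly 1 second

for name, ns in LATENCIES_NS.items():
    s = ns * SECONDS_PER_NS
    if s < 60:
        human = f"{s:,.0f} s"
    elif s < 3600:
        human = f"{s / 60:,.1f} min"
    else:
        human = f"{s / 3600:,.1f} h"
    print(f"{name:<36} {human:>10}")

On this scale a main-memory reference takes over three minutes and an SSD random read takes more than three days' worth of seconds (83 hours), which is the point the table is making.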

@wosephjeber
wosephjeber / ngrok-installation.md
Last active May 6, 2024 20:19
Installing ngrok on Mac

Installing ngrok on OSX

For Homebrew v2.6.x and below:

brew cask install ngrok

For Homebrew v2.7.x and above:
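Homebrew 2.7 deprecated the separate brew cask commands in favor of an --cask flag, so the equivalent is:

brew install --cask ngrok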

@veekaybee
veekaybee / normcore-llm.md
Last active May 6, 2024 16:10
Normcore LLM Reads

Anti-hype LLM reading list

Goals: Add links that are reasonable and good explanations of how stuff works. No hype and, where possible, no vendor content. Practical first-hand accounts of models in prod are eagerly sought.

Foundational Concepts

Pre-Transformer Models

@ohanhi
ohanhi / frp.md
Last active May 6, 2024 05:17
Learning FP the hard way: Experiences on the Elm language

by Ossi Hanhinen, @ohanhi

with the support of Futurice 💚.

Licensed under CC BY 4.0.

Editorial note

@Hellisotherpeople
Hellisotherpeople / blog.md
Last active May 4, 2024 01:57
You probably don't know how to do Prompt Engineering, let me educate you.

You probably don't know how to do Prompt Engineering

(This post could also be titled "Features missing from most LLM front-ends that should exist")

Apologies for the snarky title, but there has been a huge amount of discussion around so-called "Prompt Engineering" these past few months, on all kinds of platforms. Much of it comes from individuals peddling an awful lot of "Prompting" and very little "Engineering".

Most of these discussions are little more than users finding that writing more creative and complicated prompts can help them solve a task that a simpler prompt could not. I claim this is not Prompt Engineering. This is not to say that crafting good prompts is easy, but it does not involve making any kind of sophisticated modification to the general "template" of a prompt.

Others, who I think do deserve to call themselves "Prompt Engineers" (and an awful lot more than that), have been writing about and utilizing the rich new eco-system

@denji
denji / nginx-tuning.md
Last active May 3, 2024 03:57
NGINX tuning for best performance

Moved to git repository: https://github.com/denji/nginx-tuning

NGINX Tuning For Best Performance

For this configuration you can use any web server you like; I decided to use NGINX because it is what I work with most.

Generally, a properly configured NGINX can handle 400K to 500K requests per second (clustered); the most I have seen is 50K to 80K requests per second (non-clustered) at about 30% CPU load. Of course, that was on 2x Intel Xeon CPUs with HyperThreading enabled, but it works without problems on slower machines too.

You must understand that this config is used in a testing environment, not in production, so you will need to find the best way to implement most of these features for your own servers.
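To give a concrete flavor of what such tuning touches, here is a minimal illustrative sketch of common NGINX directives (values are starting points only, not the repository's exact config; benchmark against your own hardware):

# Illustrative values; tune per workload.
worker_processes auto;          # one worker per CPU core
worker_rlimit_nofile 100000;    # raise the open-file limit for busy servers

events {
    worker_connections 4096;    # simultaneous connections per worker
    multi_accept on;            # drain the accept queue in one pass
}

http {
    sendfile on;                # kernel-space (zero-copy) file transfers
    tcp_nopush on;              # coalesce headers and body into full packets
    keepalive_timeout 30;       # drop idle keep-alive connections sooner
}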

@erikbern
erikbern / use_pfx_with_requests.py
Last active May 1, 2024 11:45
How to use a .pfx file with Python requests – also works with .p12 files
import contextlib
import OpenSSL.crypto
import os
import requests
import ssl
import tempfile

@contextlib.contextmanager
def pfx_to_pem(pfx_path, pfx_password):
    ''' Decrypts the .pfx file to be used with requests. '''
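    # The gist preview is truncated above. What follows is a hedged sketch of
    # one way such a context manager can be completed using the pyOpenSSL
    # calls already imported (load_pkcs12 is the older API of the gist's era;
    # it was removed in pyOpenSSL 23.x).
    with tempfile.NamedTemporaryFile(suffix='.pem', delete=False) as t_pem:
        pem_path = t_pem.name
    try:
        with open(pfx_path, 'rb') as f_pfx:
            p12 = OpenSSL.crypto.load_pkcs12(f_pfx.read(), pfx_password)
        with open(pem_path, 'wb') as f_pem:
            # Private key first, then the client cert and any CA chain.
            f_pem.write(OpenSSL.crypto.dump_privatekey(
                OpenSSL.crypto.FILETYPE_PEM, p12.get_privatekey()))
            f_pem.write(OpenSSL.crypto.dump_certificate(
                OpenSSL.crypto.FILETYPE_PEM, p12.get_certificate()))
            for ca_cert in p12.get_ca_certificates() or []:
                f_pem.write(OpenSSL.crypto.dump_certificate(
                    OpenSSL.crypto.FILETYPE_PEM, ca_cert))
        yield pem_path  # path to a PEM bundle that requests accepts via cert=
    finally:
        os.remove(pem_path)

# Usage sketch (hypothetical path and URL):
# with pfx_to_pem('client.pfx', 'secret') as pem:
#     requests.get('https://example.com', cert=pem)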