Matt Hobbs (Nooshu)

@hellerbarde
hellerbarde / latency.markdown
Created May 31, 2012 13:16 — forked from jboner/latency.txt
Latency numbers every programmer should know

L1 cache reference ......................... 0.5 ns
Branch mispredict ............................ 5 ns
L2 cache reference ........................... 7 ns
Mutex lock/unlock ........................... 25 ns
Main memory reference ...................... 100 ns             
Compress 1K bytes with Zippy ............. 3,000 ns  =   3 µs
Send 2K bytes over 1 Gbps network ....... 20,000 ns  =  20 µs
SSD random read ........................ 150,000 ns  = 150 µs

Read 1 MB sequentially from memory ..... 250,000 ns = 250 µs
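
As a rough way to see one of these numbers on your own machine, here's a small sketch (my own, not part of the gist) that times the "read 1 MB sequentially from memory" case; results vary wildly with hardware, runtime and JIT warm-up.

// Rough micro-benchmark: time a sequential read of 1 MB from memory.
const buf = new Uint8Array(1024 * 1024); // 1 MB
let sum = 0;
const start = performance.now();
for (let i = 0; i < buf.length; i++) {
  sum += buf[i];
}
const elapsedNs = (performance.now() - start) * 1e6;
console.log(`Read 1 MB sequentially in ~${Math.round(elapsedNs)} ns (sum=${sum})`);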

@Rich-Harris
Rich-Harris / service-workers.md
Last active April 21, 2024 16:24
Stuff I wish I'd known sooner about service workers

I recently had several days of extremely frustrating experiences with service workers. Here are a few things I've since learned that would have made my life much easier but that aren't particularly obvious from most of the blog posts and videos I've seen.

I'll add to this list over time – suggested additions welcome in the comments or via twitter.com/rich_harris.

Use Canary for development instead of Chrome stable

Chrome 51 has some pretty wild behaviour related to console.log in service workers. Canary doesn't, and it has a load of really good service worker related stuff in devtools.

@veekaybee
veekaybee / chatgpt.md
Last active April 12, 2024 20:16
Everything I understand about chatgpt

ChatGPT Resources

Context

ChatGPT appeared like an explosion on all my social media timelines in early December 2022. While I keep up with machine learning as an industry, I wasn't focused so much on this particular corner, and all the screenshots seemed like they came out of nowhere. What was this model? How did the chat prompting work? What was the context of OpenAI doing this work and collecting my prompts for training data?

I decided to do a quick investigation. Here's all the information I've found so far. I'm aggregating and synthesizing it as I go, so it's currently changing pretty frequently.

Model Architecture

Screencapture and animated gifs

I say "animated gif" but in reality I think it's irresponsible to be serving "real" GIF files to people now. You should be serving gfy's, gifv's, webm, mp4s, whatever. They're a fraction of the filesize making it easier for you to deliver high fidelity, full color animation very quickly, especially on bad mobile connections. (But I suppose if you're just doing this for small audiences (like bug reporting), then LICEcap is a good solution).

Capturing (Easy)

  1. Launch QuickTime Player
  2. Start a new screen recording (File → New Screen Recording)


@radum
radum / charles-map-remote.md
Last active March 21, 2024 08:02
Charles proxy Map Remote over HTTP or HTTPS

The Map Remote tool changes the request location, per the configured mappings, so that the response is transparently served from the new location as if that was the original request.

HTTP

Using this feature for HTTP resources doesn't require anything beyond configuring your Map Remote entry.

Always make sure you clear your cache before you test. Even if Charles is configured properly, you might not see the changes unless the browser fetches the resource from the server again instead of serving it from its local cache.
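
One quick way to sanity-check this from the console (my addition, not part of Charles itself) is to force the browser to bypass its HTTP cache when fetching the mapped resource; the URL below is just a placeholder:

// cache: 'reload' makes the browser hit the network (and therefore Charles)
// instead of answering from its local HTTP cache.
fetch('https://example.com/app.js', { cache: 'reload' })
  .then((res) => res.text())
  .then((body) => console.log(body.slice(0, 200)));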
@stevekinney
stevekinney / web-performance.md
Last active January 31, 2024 14:23
Web Performance Workshop

Web Performance

Requirements

Repositories

@sorenlouv
sorenlouv / cpu-intensive.js
Last active December 19, 2023 06:00
A CPU-intensive operation. Use it to imitate blocking code, test Web Workers, etc.
function mySlowFunction(baseNumber) {
  console.time('mySlowFunction');
  let result = 0;
  for (let i = Math.pow(baseNumber, 7); i >= 0; i--) {
    result += Math.atan(i) * Math.tan(i);
  }
  console.timeEnd('mySlowFunction');
}
mySlowFunction(8); // higher number => more iterations => slower
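
To use it the way the description suggests (testing Web Workers), a minimal sketch could look like the following; the slow-worker.js file name is hypothetical:

// main thread: offload the blocking work so the page stays responsive
const worker = new Worker('slow-worker.js'); // hypothetical file name
worker.postMessage(8);
worker.onmessage = (e) => console.log('worker finished in', e.data, 'ms');

// slow-worker.js (sketch):
//   importScripts('cpu-intensive.js'); // loads mySlowFunction
//   onmessage = (e) => {
//     const t0 = performance.now();
//     mySlowFunction(e.data);
//     postMessage(performance.now() - t0);
//   };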
@O-Zone
O-Zone / transitionEnd.js
Last active November 23, 2023 22:43
Get the name of the browser's transitionEnd event. After calling this, window.transitionEnd will contain the browser-specific name of the transitionEnd event. This is an extract of EvandroLG's transitionEnd snippet, without the support for testing different elements and caching.
(function (window) {
  var transitions = {
        'transition': 'transitionend',
        'WebkitTransition': 'webkitTransitionEnd',
        'MozTransition': 'transitionend',
        'OTransition': 'otransitionend'
      },
      elem = document.createElement('div');

  // Pick the event name for the first transition property the browser supports
  for (var t in transitions) {
    if (elem.style[t] !== undefined) {
      window.transitionEnd = transitions[t];
      break;
    }
  }
})(window);
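
A usage sketch (my addition; assumes an element with id "box" that has a CSS transition defined):

const box = document.getElementById('box');
box.addEventListener(window.transitionEnd, () => {
  console.log('transition finished');
});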
@radum
radum / A Web performance resources.md
Last active August 30, 2023 08:11
Web performance resources

Articles

Performance API

// https://github.com/WICG/paint-timing
performance.getEntries();
performance.getEntriesByType('paint');

// Calculate the time it takes to run something
performance.mark('start');
// ... code to measure ...
performance.mark('end');
performance.measure('duration', 'start', 'end');
console.log(performance.getEntriesByName('duration')[0].duration);
@wesbos
wesbos / commit-msg
Created July 4, 2016 18:55
ESLint 3.0 Git Pre Commit Hook
#!/bin/bash
files=$(git diff --cached --name-only | grep '\.jsx\?$')
# Prevent ESLint help message if no files matched
if [[ $files = "" ]] ; then
exit 0
fi
failed=0
for file in ${files}; do