Goals: Add links that are reasonable and good explanations of how stuff works. No hype and no vendor content if possible. Practical first-hand accounts of models in prod eagerly sought.
Latency Comparison Numbers (~2012)
----------------------------------
L1 cache reference                           0.5 ns
Branch mispredict                            5   ns
L2 cache reference                           7   ns                      14x L1 cache
Mutex lock/unlock                           25   ns
Main memory reference                      100   ns                      20x L2 cache, 200x L1 cache
Compress 1K bytes with Zippy             3,000   ns        3 us
Send 1K bytes over 1 Gbps network       10,000   ns       10 us
Read 4K randomly from SSD*             150,000   ns      150 us          ~1GB/sec SSD
L1 cache reference ......................... 0.5 ns
Branch mispredict ............................ 5 ns
L2 cache reference ........................... 7 ns
Mutex lock/unlock ........................... 25 ns
Main memory reference ...................... 100 ns
Compress 1K bytes with Zippy ............. 3,000 ns = 3 µs
Send 2K bytes over 1 Gbps network ....... 20,000 ns = 20 µs
SSD random read ........................ 150,000 ns = 150 µs
Read 1 MB sequentially from memory ..... 250,000 ns = 250 µs
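These figures are mainly useful for back-of-envelope estimates. As a quick illustration (my arithmetic, not part of the original tables), scaling the 250 µs per-MB sequential-read figure up to 1 GB:

# 1 GB = 1000 MB at 250,000 ns per MB; divide by 1e9 to get seconds
echo '1000 * 250000 / 1000000000' | bc -l   # prints .25 -> 0.25 s per GB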
The web is full of benchmarks showing the supernatural speed of Git even with very big repositories, but unfortunately they measure the wrong variable. Size is not important; the number of files in the repository really is!
Why is that? Because Git works in a very different way compared to Synergy. You don't have to check out a file in order to edit it; Git will notice the change for you automatically. But at what price?
The price is that every Git operation that needs to know which files changed (git status, git commit, etc.) executes an lstat() call for every single file in the working tree.
Wow! So how does that perform on a fairly large repository? Let's find out! For this example I will use a project with 19,384 files in 1,326 folders.
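One way to watch this happen yourself (an illustration, assuming a Linux machine with strace installed; the exact syscall name varies by platform and Git version):

# count the stat-family syscalls issued by a single git status
strace -f -c git status 2>&1 | grep -E 'lstat|statx|newfstatat'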
/*
##Device = Desktops
##Screen = 1281px to higher resolution desktops
*/
@media (min-width: 1281px) {
  /* CSS */
}
Each of the following commands runs an ad hoc http static server in your current (or specified) directory, available at http://localhost:8000. Use this power wisely.
$ python -m SimpleHTTPServer 8000
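SimpleHTTPServer is the Python 2 module name; on Python 3 the equivalent one-liner is:

$ python3 -m http.server 8000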
Moved to git repository: https://github.com/denji/nginx-tuning
For this configuration you can use whatever web server you like; I decided to use nginx because it is the server I mostly work with.
Generally, a properly configured nginx can handle up to 400K to 500K requests per second (clustered); the most I have seen myself is 50K to 80K requests per second (non-clustered) at 30% CPU load. Of course, that was on 2 x Intel Xeon with HyperThreading enabled, but it can work without problems on slower machines.
You must understand that this config is used in a testing environment, not in production, so you will need to find the best way to implement these features for your own servers.
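If you want to measure what your own setup sustains rather than take these figures on faith, a common load generator is wrk (my suggestion, not part of the original gist):

# 12 threads, 400 open connections, 30-second run against a local nginx
wrk -t12 -c400 -d30s http://localhost/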
This gist assumes (a sketch of a matching server-side setup follows the list):
- you have a local git repo
- with an online remote repository (github / bitbucket etc)
- and a cloud server (Rackspace cloud / Amazon EC2 etc)
- your (PHP) scripts are served from /var/www/html/
- your webpages are executed by apache
- apache's home directory is /var/www/
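Under those assumptions, a common way to wire the pieces together (a minimal sketch using a hypothetical repo name site.git, not necessarily this gist's exact script) is a bare repository on the cloud server plus a post-receive hook that checks pushed code out into the web root:

# on the cloud server: a bare repo to receive pushes
mkdir /var/www/site.git && cd /var/www/site.git
git init --bare

# contents of /var/www/site.git/hooks/post-receive (remember: chmod +x)
#!/bin/sh
git --work-tree=/var/www/html --git-dir=/var/www/site.git checkout -f

On your local machine you would then add the server as a remote and deploy with an ordinary git push.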
# put ~/local/bin on PATH so the new node and npm binaries are found
echo 'export PATH=$HOME/local/bin:$PATH' >> ~/.bashrc
. ~/.bashrc
mkdir ~/local
mkdir ~/node-latest-install
cd ~/node-latest-install
# fetch and unpack the latest node source tarball into the current directory
curl http://nodejs.org/dist/node-latest.tar.gz | tar xz --strip-components=1
./configure --prefix=~/local
make install # ok, fine, this step probably takes more than 30 seconds...
# install npm
curl https://www.npmjs.org/install.sh | sh
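Once this finishes, both binaries should resolve from ~/local/bin:

node -v   # prints the installed node version
npm -v    # prints the installed npm version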
#!/bin/sh
# References
# https://www.pythonguis.com/tutorials/packaging-pyqt5-applications-pyinstaller-macos-dmg/
# https://medium.com/@jackhuang.wz/in-just-two-steps-you-can-turn-a-python-script-into-a-macos-application-installer-6e21bce2ee71
# ---------------------------------------
# Clean up previous builds
# ---------------------------------------
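# A guess at the cleanup step named above, based on PyInstaller's
# default output directories:
rm -rf build dist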