Use this to run qdirstat on a server where it is not installed:
ssh root@server 'curl -sL https://git.io/fj42l | perl -- - / -' | qdirstat -c /dev/stdin
Yes, the arguments to perl really are perl -- - dir_to_scan -. Beautiful, isn't it?
/node_modules
/charts.json
/stats.json
Run it in a somewhat interesting directory (e.g. a checkout of the Linux kernel) with fewer than 100k files (otherwise it is too slow).
mv -i ~/.histdb/zsh-history.db ~/.histdb/realhistory
./makedb.sh | sqlite3 ~/.histdb/zsh-history.db
http {
  log_format json_combined escape=json
    '{'
    '"time_iso8601":"$time_iso8601", "remote_addr":"$remote_addr", "remote_user":"$remote_user", "request":"$request",'
    '"status": "$status", "body_bytes_sent":"$body_bytes_sent", "request_time":"$request_time","http_host":"$http_host","host":"$host",'
    '"args":"$args",'
    '"connection":"$connection","content_length":"$content_length","content_type":"$content_type","uri":"$uri","request_filename":"$request_filename",'
    '"http_referrer":"$http_referer", "http_user_agent":"$http_user_agent",'
    '"upstream_connect_time": "$upstream_connect_time", "upstream_response_time":"$upstream_response_time"'
    '}';
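With this log_format, every access-log line is a single JSON object, so the logs can be processed with any JSON tool. A small Python sketch (the example lines and the status aggregation are illustrative, not part of the nginx config):

```python
import json
from collections import Counter

# example lines in the json_combined format configured above
lines = [
    '{"time_iso8601":"2024-01-01T00:00:00+00:00","remote_addr":"127.0.0.1","request":"GET / HTTP/1.1","status":"200","request_time":"0.003"}',
    '{"time_iso8601":"2024-01-01T00:00:01+00:00","remote_addr":"127.0.0.1","request":"GET /missing HTTP/1.1","status":"404","request_time":"0.001"}',
]

# parse each line and tally HTTP status codes
status_counts = Counter(json.loads(line)["status"] for line in lines)
print(status_counts)  # e.g. Counter({'200': 1, '404': 1})
```

Note that escape=json makes nginx escape quotes and control characters in the variable values, so the lines stay valid JSON.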
import { makeClient } from "./makeTypedApi";
import { Api } from "./common";

const api = makeClient(Api);
// has all the HTTP methods like normal methods, e.g.
const results = await api.byDate();
tiny guitar synth in 96 chars of C
works by starting with an array filled with white noise (from /dev/urandom), then continuously feeding it through a low-pass filter at the desired frequency.
this results in a sound pretty similar to a guitar with steel or nylon strings.
<meta charset="utf-8">
<script src="https://unpkg.com/sql.js@1.2.2/dist/sql-asm.js"></script>
<script>
  async function go() {
    const SQL = await initSqlJs();
    const dbres = await fetch("https://rawcdn.githack.com/kotartemiy/newscatcher/b30358cf57c9f8f4a481b51c0a0884a64e0b85b2/newscatcher/data/package_rss.db");
    // load the fetched bytes into an in-memory sql.js database
    const db = new SQL.Database(new Uint8Array(await dbres.arrayBuffer()));
    console.log(db.exec("select name from sqlite_master limit 5"));
  }
  go();
</script>
#!/bin/bash
# usage: `rg --no-line-number --sort-files --pre pdfextract "$@"`
# better and much faster solution: https://github.com/phiresky/ripgrep-all
fname="$1"
cachedir=/tmp/pdfextract
mkdir -p "$cachedir"
# sketch of the caching step: key the cache by a hash of the file path
# and extract text with pdftotext (assumed installed) on a cache miss
cachefile="$cachedir/$(echo "$fname" | md5sum | cut -d' ' -f1).txt"
if [[ ! -e "$cachefile" ]]; then
    pdftotext "$fname" "$cachefile"
fi
cat "$cachefile"
/node_modules
*.log
You can scale a SQLite database to multiple gigabytes in size and many concurrent readers by applying the optimizations below.
(some are applied permanently, while others are reset on each new connection)
pragma journal_mode = WAL;
Instead of writing directly to the db file, write to a write-ahead log and regularly commit the changes back. Allows multiple concurrent readers, and can significantly improve performance.
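A minimal Python sketch of applying this pragma (the database filename is an arbitrary example). journal_mode is one of the persistent settings: it is stored in the database file itself, so later connections see it automatically:

```python
import sqlite3

con = sqlite3.connect("app.db")
# enabling WAL; the pragma returns the resulting mode ("wal" on success)
mode = con.execute("pragma journal_mode = WAL").fetchone()[0]
print(mode)
con.execute("create table if not exists kv (k text primary key, v text)")
con.commit()

# a second connection can read concurrently while the first one writes,
# and inherits the persistent journal mode from the database file
reader = sqlite3.connect("app.db")
print(reader.execute("pragma journal_mode").fetchone()[0])
```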