PRAGMA foreign_keys=OFF;
BEGIN TRANSACTION;
CREATE TABLE bun_migrations ("id" INTEGER NOT NULL, "name" VARCHAR, "group_id" INTEGER, "migrated_at" TIMESTAMP NOT NULL DEFAULT current_timestamp, PRIMARY KEY ("id"));
INSERT INTO "bun_migrations" VALUES(1,'20220926100948',1,'2023-01-29 05:15:20');
INSERT INTO "bun_migrations" VALUES(2,'20220926101947',1,'2023-01-29 05:15:20');
CREATE TABLE bun_migration_locks ("id" INTEGER NOT NULL, "table_name" VARCHAR, PRIMARY KEY ("id"), UNIQUE ("table_name"));
CREATE TABLE "metrics" ("id" INTEGER NOT NULL, "project_id" INTEGER, "name" VARCHAR, "description" VARCHAR, "unit" VARCHAR, "instrument" VARCHAR, "created_at" TIMESTAMP DEFAULT CURRENT_TIMESTAMP, "updated_at" TIMESTAMP DEFAULT CURRENT_TIMESTAMP, PRIMARY KEY ("id"));
INSERT INTO "metrics" VALUES(1,1,'uptrace.tracing.spans','Spans duration (excluding events)','microseconds','histogram','2023-01-29 05:15:19.062477+00:00','2023-01-29 05:15:19.062477+00:00');
INSERT INTO "metrics" VALUES(2,2,'uptrace.tracing.spans','Spans dur
@gabriel-v
gabriel-v / how_to_wireguard.sh
Last active February 4, 2024 22:20
Wireguard configuration for dummies
# install
firefox https://www.wireguard.com/install/
# for macOS use the brew/ports version, not the app
# be root
sudo -i
mkdir /etc/wireguard || true
cd /etc/wireguard
# create keys
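# (the preview ends here; a minimal sketch of the usual key-creation step, assuming the
#  standard wg tooling; not necessarily the gist's exact commands)
umask 077
wg genkey | tee privatekey | wg pubkey > publickey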

Bugs

[1]: GTest fails when run by mach try coverage

Bug no: TODO

Error message: shutil error: Destination path 'Z:\task_1535734500\build\application\firefox\gmp-clearkey' already exists

Further investigation: try running it with mach try fuzzy, and then again with a path prefix, to check whether the failure is specific to coverage runs.

@gabriel-v
gabriel-v / grcov-toolchain-fetch-entry-generator.sh
Last active September 7, 2018 13:00
Script to append to taskcluster/ci/fetch/toolchain.yml from github grcov releases
#!/bin/bash
set -ex
if [ -z "$1" ]; then
    echo "usage: $0 VERSION"
    exit 1
fi
version=$1
echo > toolchain.yml
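# (the preview ends here; below is a hedged sketch of how the rest could look, not the
#  gist's actual code. It assumes the grcov releases live at mozilla/grcov (older versions
#  may be under marco-c/grcov), that jq, sha256sum and GNU stat are available, and the
#  exact toolchain.yml entry layout should be checked against the taskcluster fetch docs.)
release_api="https://api.github.com/repos/mozilla/grcov/releases/tags/v${version}"
curl -sL "$release_api" | jq -r '.assets[].browser_download_url' | while read -r url; do
    name=$(basename "$url")
    curl -sL -o "$name" "$url"
    sha256=$(sha256sum "$name" | cut -d ' ' -f 1)
    size=$(stat -c %s "$name")
    cat >> toolchain.yml <<EOF
${name%%.tar*}-${version}:
  description: grcov ${version} (${name})
  fetch:
    type: static-url
    url: ${url}
    sha256: ${sha256}
    size: ${size}
EOF
done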
@gabriel-v
gabriel-v / download_cloud.sh
Last active October 28, 2017 11:24
Download Liquid Investigations development VMs
#!/bin/bash -ex
# fetch the latest successful cloud image build from Jenkins and unpack it
mkdir -pv factory/images/cloud-x86_64
UBUNTU_CLOUD=https://jenkins.liquiddemo.org/job/liquidinvestigations/job/factory/job/master/lastSuccessfulBuild/artifact/cloud-x86_64-image.tar.xz
curl -L "$UBUNTU_CLOUD" | xzcat | tar -x -C factory/images/cloud-x86_64
# default login credentials for the downloaded VM image
echo '{"login": {"username": "ubuntu", "password": "ubuntu"}}' > factory/images/cloud-x86_64/config.json
  • Monday: full work day
  • Tuesday: free after 18:30
  • Wednesday: free after 17:30
  • Thursday: free after 14:30
  • Friday: free until 13:30 + free after 17:00
+ /opt/hoover/bin/hoover snoop walk testdata
Traceback (most recent call last):
  File "/opt/hoover/snoop/manage.py", line 8, in <module>
    execute_from_command_line(sys.argv)
  File "/opt/hoover/venvs/snoop/lib/python3.5/site-packages/django/core/management/__init__.py", line 367, in execute_from_command_line
    utility.execute()
  File "/opt/hoover/venvs/snoop/lib/python3.5/site-packages/django/core/management/__init__.py", line 359, in execute
    self.fetch_command(subcommand).run_from_argv(self.argv)
  File "/opt/hoover/venvs/snoop/lib/python3.5/site-packages/django/core/management/base.py", line 294, in run_from_argv
    self.execute(*args, **cmd_options)
#!/bin/bash
# Wrapper: create a dedicated flake8 virtualenv on first use, then run flake8 from it.
( ls ~/bin/flake8-venv || virtualenv -p python3 ~/bin/flake8-venv ) &> /dev/null
~/bin/flake8-venv/bin/pip install flake8 &> /dev/null
exec ~/bin/flake8-venv/bin/python -m flake8 "$@"
@gabriel-v
gabriel-v / gist:59790f94fb8ac2771f9b0b045e8b5448
Last active February 20, 2017 11:44 — forked from philipz/gist:04a9a165f8ce561f7ddd
Debian ARM64 (Aarch64) image for QEMU

QEMU version: 2.2.0

HDD init:

  • qemu-img create -f qcow debian8-arm64.img 10G

Netinstall initrd:

  • wget ftp://ftp.ru.debian.org/debian/dists/jessie/main/installer-arm64/20150422/images/netboot/debian-installer/arm64/initrd.gz
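
The preview stops at the initrd download; the matching installer kernel lives in the same netboot directory, and a boot command along the following lines is the usual recipe for the virt machine model (a sketch with assumed paths and memory size, not necessarily the gist's exact invocation):

  • wget ftp://ftp.ru.debian.org/debian/dists/jessie/main/installer-arm64/20150422/images/netboot/debian-installer/arm64/linux
  • qemu-system-aarch64 -M virt -cpu cortex-a57 -m 1024 -nographic -kernel linux -initrd initrd.gz -append "console=ttyAMA0" -drive if=none,file=debian8-arm64.img,id=hd0 -device virtio-blk-device,drive=hd0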
@gabriel-v
gabriel-v / batch-api.md
Last active November 9, 2016 21:13
Draft API for the hoover batch search

Hoover search batch API

The idea is to make it practical to search for a large number of terms at once, without hitting the rate limiter and while keeping decent accuracy.

The solution is to use the Elasticsearch _msearch endpoint with count-only operations, so each individual query returns just its hit count, along with any aggregations that were requested.
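
As a rough illustration (the index name, the ES_URL variable, and the query fields below are assumptions, not part of the draft), the multi-search body pairs one header line with one count-only query per term; setting size to 0 means only hit counts and aggregations come back:

curl -s -H 'Content-Type: application/x-ndjson' "$ES_URL/hoover/_msearch" --data-binary @- <<'NDJSON'
{}
{"size": 0, "query": {"query_string": {"query": "first term"}}}
{}
{"size": 0, "query": {"query_string": {"query": "second term"}}}
NDJSON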

The request