Adam Comerford comerford

comerford / whw03v1.md
Last active March 19, 2024 14:37
Building OpenWRT for the WHW03 V1

Making a recent build by cherry-picking from the @protectivedad repo

This is a verbose, step-by-step-from-scratch account of how I built OpenWRT for my Linksys Velop WHW03 V1, based on the 23.05.2 release with the changes cherry-picked in from a repo reported to be working for the WHW03 V1.

Build Environment Notes

For the build step I use a Docker container, from here:

https://github.com/mwarning/docker-openwrt-build-env

Assuming you already have Docker working, the setup is basically:
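A rough sketch of the steps, inferred from the linked repo; the image tag here is an assumption of mine, so check that repo's README for the current commands:

```shell
# clone the build-environment repo (name taken from the URL above)
git clone https://github.com/mwarning/docker-openwrt-build-env.git
cd docker-openwrt-build-env

# build the container image locally (tag name is arbitrary/assumed)
docker build -t openwrt-build-env .

# run the container interactively; the OpenWRT build happens inside it
docker run -it openwrt-build-env
```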

comerford / agones-udp-ping.py
Last active May 17, 2023 09:22
Python script to ping the Agones UDP ping endpoint and return the RTT
import socket
import time
import sys
# Set the default port
DEFAULT_PORT = 7000
# Set the number of iterations used to calculate an average
ITERATIONS = 3
comerford / get-ue5-custom-s3.ps1
Created March 22, 2023 19:57
Public rclone wrapper
param(
[Parameter(Mandatory=$true)]
[string]$access_key,
[Parameter(Mandatory=$true)]
[string]$secret_key,
[string]$path="c:\unreal-5.0"
)
# this assumes the custom UE5 you want to download is in an S3 bucket, packaged as a self-extracting archive (7zip or similar)
# the last version built needs at least 225GB, but this might be a re-run, so get the amount already downloaded and subtract it if present
$driveInfo = Get-PSDrive -PSProvider 'FileSystem' | Where-Object { $_.Name -eq $path[0] }
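The space check the comments describe — subtracting any partially completed download from the required space — reduces to simple arithmetic; a sketch in Python, using the 225GB figure from the comment above and hypothetical downloaded sizes:

```python
def bytes_still_needed(required_bytes, already_downloaded_bytes):
    """Disk space still needed, allowing for a partially completed download."""
    return max(required_bytes - already_downloaded_bytes, 0)

REQUIRED = 225 * 1000**3  # ~225 GB, per the comment in the script

# fresh run: nothing downloaded yet, so the full amount is needed
print(bytes_still_needed(REQUIRED, 0) == REQUIRED)                      # True

# re-run: 100 GB already on disk, so only the remainder is needed
print(bytes_still_needed(REQUIRED, 100 * 1000**3) == 125 * 1000**3)    # True
```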
comerford / chunktest.js
Created February 27, 2014 13:27
Creating an odd chunk distribution in MongoDB - mistaken pre-split
// start a shell from the command line, do not connect to a database
./mongo --nodb
// using that shell start a new cluster, with a 1MB chunk size
cluster = new ShardingTest({shards: 2, chunksize: 1});
// open another shell (previous one will be full of logging and not actually connected to anything)
./mongo --port 30999
// stop the balancer
sh.stopBalancer()
sh.getBalancerState()
// select test DB, enable sharding
comerford / insert100k.js
Last active October 16, 2022 18:45
Insert 100k Minimal Docs into test DB - MongoDB
// simple for loop to insert 100k records into the test databases
var testDB = db.getSiblingDB("test");
// drop the collection, avoid dupes
testDB.timecheck.drop();
for(var i = 0; i < 100000; i++){
testDB.timecheck.insert(
{_id : i}
)
};
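For comparison, the same idea from a driver, sketched in Python: drivers batch documents into a single call (pymongo's `insert_many`) rather than inserting one at a time. The collection here is a stand-in stub so the sketch is runnable without a mongod; with pymongo you would pass the same list to a real collection:

```python
def make_docs(n=100_000):
    """Build the same minimal documents the shell loop inserts: {_id: 0..n-1}."""
    return [{"_id": i} for i in range(n)]

class StubCollection:
    """Stand-in for a driver collection, just to keep the sketch self-contained."""
    def __init__(self):
        self.docs = []
    def insert_many(self, docs):
        self.docs.extend(docs)

coll = StubCollection()
coll.insert_many(make_docs())
print(len(coll.docs))  # 100000
```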
comerford / oidtest.js
Created December 18, 2014 12:49
Testing ObjectId Snippet
// start a mongo shell to act as a JS interpreter (no db connection required)
mongo --nodb
// store sample id in a variable
var id = new ObjectId("533bc0f60015a0a814000001");
// print out the variable
> id
ObjectId("533bc0f60015a0a814000001")
// try some methods
id.getTimestamp()
ISODate("2014-04-02T07:49:10Z")
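The getTimestamp() call above simply decodes the leading 4 bytes of the ObjectId, which are a big-endian Unix timestamp; the same decoding can be reproduced without a driver:

```python
from datetime import datetime, timezone

def objectid_timestamp(oid_hex):
    """First 4 bytes (8 hex chars) of an ObjectId are a big-endian Unix timestamp."""
    seconds = int(oid_hex[:8], 16)
    return datetime.fromtimestamp(seconds, tz=timezone.utc)

print(objectid_timestamp("533bc0f60015a0a814000001"))
# 2014-04-02 07:49:10+00:00  (matches the ISODate printed above)
```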
comerford / timed_ex_explain.js
Created November 17, 2014 18:16
Timed explain with execution for MongoDB 2.8
// the start/end is not really needed since explain contains timing information
// but, this is useful for comparison with other commands (touch) which do not have such info
var start = new Date().getTime();
db.data.find().explain("executionStats")
var end = new Date().getTime();
print("Time to touch data: " + (end - start) + "ms");
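The same wall-clock wrapper pattern, outside the mongo shell — useful exactly when the operation being timed reports no timing of its own, as with the touch case mentioned above. Names here are illustrative:

```python
import time

def timed(fn, *args, **kwargs):
    """Run fn and return (result, elapsed_ms), mirroring the start/end pattern above."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    elapsed_ms = (time.perf_counter() - start) * 1000
    return result, elapsed_ms

result, ms = timed(sum, range(1_000_000))
print("Time to run: %.1fms" % ms)
```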
comerford / compress_test.js
Created November 12, 2014 16:06
Generating data for MongoDB compression testing
// these docs, in 2.6, get bucketed into the 256 bucket (size without header = 240)
// From Object.bsonsize(db.data.findOne()), the size is actually 198 for reference, so add 16 to that for an exact fit
// with that doc size, 80,000 is a nice round number under the 16MiB limit, so will use that for the inner loop
// We are shooting for ~16 GiB of data, without indexes, so do 1,024 iterations (512 from each client)
// This will mean being a little short (~500MiB) in terms of target data size, but keeps things simple
for(var j = 0; j < 512; j++){
bigDoc = [];
for(var i = 0; i < 80000; i++){
comerford / preheat.js
Last active October 6, 2022 15:13
Function to Pre-Heat data using ObjectID timestamp component in MongoDB
// function will take a number of days, a collection name, an index Field name, and a boolean as args
// it assumes the index is ObjectID based and creates an ObjectID with a timestamp X days in the past
// Finally, it queries that index/data (controlled by boolean), loading it into memory
//
// Example - 2 days data, foo collection, _id index, pre-heat index only
// preHeatData(2, "foo", "_id", true)
// Example - 7 days data, foo collection, _id index, pre-heat data also
// preHeatData(7, "foo", "_id", false)
// Example - 2 days data, bar collection, blah index, pre-heat index only
// preHeatData(2, "bar", "blah", false)
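The trick preHeatData relies on — building an ObjectId whose timestamp component is X days in the past, to use as a range-query bound — can be sketched as follows; zero-filling the remaining 16 hex chars gives the smallest possible ObjectId for that second:

```python
import time

def objectid_lower_bound(days_ago, now=None):
    """Hex ObjectId marking the start of a window `days_ago` days in the past.

    The first 4 bytes of an ObjectId are a Unix timestamp; zero-filling the
    rest yields the minimum ObjectId for that timestamp, suitable for a
    query like {_id: {$gte: ObjectId(bound)}}.
    """
    now = time.time() if now is None else now
    seconds = int(now - days_ago * 86400)
    return "%08x" % seconds + "0" * 16

# with a fixed "now" so the output is reproducible:
print(objectid_lower_bound(2, now=0x533bc0f6 + 2 * 86400))
# 533bc0f60000000000000000
```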
// kills long running ops in MongoDB (taking seconds as an arg to define "long")
// attempts to be a bit safer than killing all by excluding replication related operations
// and only targeting queries as opposed to commands etc.
killLongRunningOps = function(maxSecsRunning) {
    currOp = db.currentOp();
    for (oper in currOp.inprog) {
        op = currOp.inprog[oper-0];
        if (op.secs_running > maxSecsRunning && op.op == "query" && !op.ns.startsWith("local")) {
            // preview was cut off here: close out the print and actually kill the op
            print("Killing opId: " + op.opid + " running for " + op.secs_running + "s");
            db.killOp(op.opid);
        }
    }
};