Sherwood Callaway (shcallaway)
😉 Building Opkit
@shcallaway
shcallaway / README.md
Last active April 15, 2024 14:25
Use jq to parse JSON logs into something more readable

Structured logs are way better than normal logs for a whole bunch of reasons, but they can sometimes be a pain to read in the shell. Take this logline for example:

{"erlang_pid":"#PID<0.1584.0>","level":"error","message":"Got error when retry: :econnrefused, will retry after 1535ms. Have retried 2 times, :infinity times left.","module":"","release":"c2ef629cb357c136f529abec997426d6d58de485","timestamp":"2019-12-17T19:22:11.164Z"}

This format is hard for a human to parse. How about this format instead?

error | 2019-12-17T19:21:02.944Z | Got error when retry: :econnrefused, will retry after 1648ms. Have retried 2 times, :infinity times left.
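The jq filter itself isn't shown in this preview, but a minimal sketch of the transformation looks something like this (field names taken from the logline above; this is an illustrative reconstruction, not necessarily the gist's exact filter):

```shell
# Reshape a JSON logline into "level | timestamp | message" using jq string
# interpolation. -r emits raw text instead of a JSON-quoted string.
echo '{"level":"error","message":"Got error when retry: :econnrefused, will retry after 1535ms.","timestamp":"2019-12-17T19:22:11.164Z"}' \
  | jq -r '"\(.level) | \(.timestamp) | \(.message)"'
```

The same filter works on a whole stream of loglines, one JSON object per line, so you can pipe your service's output straight through it.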
@shcallaway
shcallaway / README.md
Last active February 7, 2024 21:45
Chrome Bookmarklet that opens GitHub commits merged to master over the past week for a particular repo
@shcallaway
shcallaway / README.md
Last active December 14, 2023 17:58
Scan logs archived from Datadog to an S3 bucket for a particular string

To run this in the background and detach the process from your current shell:

$ GREP_ARGS=my-query OUTPUT=datadog-s3-log-scan.txt BUCKET=my-s3-bucket >stdout 2>stderr &
$ disown
@shcallaway
shcallaway / README.md
Last active October 25, 2023 19:15
Datadog reserved log attributes

Datadog's reserved log attributes are confusing as heck. It's not clear what each attribute does, so you can't predict or understand what will happen when you create a mapping. Allow me to demonstrate.

I got this logline from my Datadog S3 archive bucket. It gives you a sense of what logs look like after going through Datadog's opaque transformations.

{
    "_id": "AW5Hc8y8FxIBf2udiA1a", // Log ID generated by Datadog
    "attributes": { // Key-values from the original JSON logline are moved under "attributes"
        "@timestamp": "2019-11-07T19:59:59.804Z",
        "@version": "1",
@shcallaway
shcallaway / README.md
Created September 5, 2023 14:36
Compare Zod schemas for equality

This is a naive way of comparing two Zod schemas to see if they are equivalent. Basically, it works by serializing the two schemas and comparing the resulting strings. This approach is limited because serialization doesn't capture all of the information about certain Zod types, e.g. functions.

@shcallaway
shcallaway / apply-ecr-lifecycle-policy.sh
Last active May 30, 2023 07:00
Apply the same lifecycle policy to all AWS ECR repositories
#!/bin/bash
# Apply policy.json to every ECR repository in the account.
# jq -r emits raw repository names (no surrounding quotes) for xargs.
aws ecr describe-repositories | jq -r '.repositories[].repositoryName' | xargs -I {} aws ecr put-lifecycle-policy --repository-name {} --lifecycle-policy-text "file://policy.json"
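The referenced `policy.json` isn't included in this preview. As a minimal example of the ECR lifecycle policy format (this one expires untagged images after 14 days; adjust the rule to taste):

```json
{
  "rules": [
    {
      "rulePriority": 1,
      "description": "Expire untagged images older than 14 days",
      "selection": {
        "tagStatus": "untagged",
        "countType": "sinceImagePushed",
        "countUnit": "days",
        "countNumber": 14
      },
      "action": { "type": "expire" }
    }
  ]
}
```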
@shcallaway
shcallaway / count-ecr-images.sh
Last active October 19, 2022 04:00
Count the number of images in an AWS ECR repository
#!/bin/bash
# Count distinct images (deduplicated by digest) in the ECR repository passed as $1.
aws ecr list-images --repository-name "$1" | jq '.imageIds | unique_by(.imageDigest) | length'
@shcallaway
shcallaway / README.md
Last active July 11, 2021 16:54
List CMS datasets

List names of CMS datasets in Socrata:

curl -s 'http://api.us.socrata.com/api/catalog/v1?domains=data.cms.gov&search_context=data.cms.gov&limit=2000' | jq -r ".results | .[] | .resource.name"

Put CMS dataset names, ids, descriptions, etc. into CSV:

curl -s 'http://api.us.socrata.com/api/catalog/v1?domains=data.cms.gov&search_context=data.cms.gov&limit=2000' | jq -r ".results | .[] | .resource | [.name, .id, .description, .createdAt, .updatedAt, .data_updated_at] | @csv"
@shcallaway
shcallaway / db-write.py
Created July 5, 2021 00:51
Writes CPTs and cases to MySQL database from CSV
#!/usr/local/bin/python3
# pip3 install mysql-connector-python
import mysql.connector
import argparse
import csv
DB_USER = 'root'
DB_HOST = 'localhost'
DB_NAME = 'opkit'
@shcallaway
shcallaway / sis-parser.py
Created July 4, 2021 20:25
Parse CPT codes and case info from a SIS Complete CSV export
#!/usr/local/bin/python3
import csv
import re
import argparse
CASES_OUT = 'sis-parser-out-cases.csv'
CPTS_OUT = 'sis-parser-out-cpts.csv'
parser = argparse.ArgumentParser(prog="sis-parser.py", description='Parse cases and CPT codes from a SIS Complete CSV export.')