David Howell (davehowell)

davehowell / mojo.md
Last active June 20, 2025 15:27
mojo

In Mojo, composition is achieved by including one or more structs as fields within another struct, rather than using inheritance (which Mojo does not support). This is similar to how composition works in Go and Rust.

Example: composition in Mojo. Suppose you have two structs, Position and Drawable, and you want to compose them into a Sprite struct:

struct Position:
    var x: Int
    var y: Int
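Continuing the example, a minimal sketch of the other two structs, assuming a recent Mojo toolchain (Drawable's field is illustrative, not from the original gist; depending on your Mojo version you may need @value or an explicit __init__ to get a constructor):

struct Drawable:
    var symbol: String

struct Sprite:
    # Composition: Sprite *has a* Position and a Drawable,
    # rather than inheriting from either.
    var position: Position
    var drawable: Drawable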

Add local servers to my (global) user config

claude mcp add --scope user context7 -- npx -y @upstash/context7-mcp
claude mcp add --scope user pickapicon -- npx -y @pickapicon-mcp/latest

claude mcp add --scope user zeplin -- npx -y @zeplin/mcp-server@latest


# You need a Gemini API key: put it in ~/.gemini_api_key and add the following to ~/.zshrc
# export GEMINI_API_KEY="$(cat "$HOME/.gemini_api_key" 2>/dev/null)"
davehowell / math_tricks.md
Last active March 5, 2025 06:37
math_tricks

Converting scales

Normalizing from one linear scale to another, or from linear to logarithmic. Since exp is the inverse of log, converting back from logarithmic to linear can be done with exp (or a power function, depending on the language).

E.g. say you have data on a scale from 2 to 8 million, but want to display it on a scale from 0.1 to 10.

My specific use case where this came up was choosing a penwidth (edge weighting thickness) for a graphviz dot file. I wanted the thickness of the edge to be relative to the size of the flow between nodes.

Task: convert to log scale, e.g. 2 to 8M into 0.1 to 10 for the edge thickness in a Graphviz dot file. Note: think of this as merely adjusting the value using the ratio of your X scale range to your Y scale range, where the X scale is your actual data (here 2 to 8 million) and the Y scale is the range you want (here 0.1 to 10).
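A minimal Python sketch of that ratio idea (the function names and the sample flow value are mine, not from the gist):

import math

def rescale_linear(x, x_min, x_max, y_min, y_max):
    # Map x from [x_min, x_max] onto [y_min, y_max], preserving ratios.
    return y_min + (x - x_min) * (y_max - y_min) / (x_max - x_min)

def rescale_log(x, x_min, x_max, y_min, y_max):
    # Take logs first, then remap linearly, so a few huge flows
    # don't flatten all the small edges to the minimum width.
    return rescale_linear(math.log(x), math.log(x_min), math.log(x_max),
                          y_min, y_max)

# penwidth for a flow of 40,000 on the 2..8M data scale, drawn on 0.1..10
print(rescale_log(40_000, 2, 8_000_000, 0.1, 10))  # ~6.5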

# THE WSL2 with Ubuntu 20.04 and bash 5.0 edition. Mainly for VSCode usage
# This is just the customized parts
# if you can only use /usr/bin/python3 because public apt repo is blocked
alias python='python3'
# create a python venv within the project dir using the current python
# python -m venv .venv
alias vnv='source ./.venv/bin/activate'
davehowell / BigQuery.md
Last active January 16, 2025 00:52
gcp

Execution details

select * from `region-here`.INFORMATION_SCHEMA.JOBS_TIMELINE_BY_PROJECT
where job_id = "bquxjob_blah_blah"

list tables

select
  table_type, table_name
from `your_dataset`.INFORMATION_SCHEMA.TABLES  -- dataset qualifier is a placeholder
davehowell / AWS Data and Analytics New Services.md
Last active February 17, 2023 05:52
AWS Reinvent 2022 Recap

AWS re:Invent recap, Feb 2023

Redshift

Aurora to Redshift serverless “zero ETL” transaction replication

  • MySQL flavour only
  • Redshift Serverless only
  • Create an “integration” on the Aurora side
  • Create a database from the integration on the Redshift side (see the sketch after this list)
  • Cannot replicate deletes
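The Redshift side of that setup is a single statement in the CREATE DATABASE ... FROM INTEGRATION form from the AWS docs; the database name and integration id below are placeholders, not from the recap:

-- Run on the Redshift Serverless side.
-- '<integration_id>' is the id of the "integration" created on the Aurora side.
CREATE DATABASE aurora_zeroetl FROM INTEGRATION '<integration_id>';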
davehowell / Answers.md
Last active April 6, 2023 05:37
databricks
davehowell / merge-schemas.scala
Created October 19, 2021 10:38 — forked from eduardorost/merge-schemas.scala
Merge Schema with structs
import org.apache.spark.SparkConf
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.types._
import org.slf4j.{Logger, LoggerFactory}

object Main {
  val logger: Logger = LoggerFactory.getLogger(this.getClass)

  private lazy val sparkConf: SparkConf = new SparkConf()
    .setMaster("local[*]")

version: 2
jobs:
  terraform-validate:
    docker:
      - image: docker.mirror.hashicorp.services/hashicorp/terraform:light
    steps:
      - checkout
      - run:
          name: Terraform Validate
          command: |