celoyd /
Last active September 15, 2023 17:04
A little CLI workflow for visualizing OSM tile traffic data

This is how to make animations like this (alternate), inspired by Paul Norman’s. This is a write-up of a one-off 45 minute project, so it’s rough around the edges and probably has a few typos; feel free to point them out. It’s mostly command-line work, using tools like GNU parallel and ImageMagick convert; it’s slow and wastes a lot of filesystem space compared to a more monolithic approach in (say) python, but it’s very flexible.

1. Get data

I use curl globs, for example:

mkdir xzs
cd xzs
curl -O '[01-12]-[01-31].txt.xz'
celoyd /
Last active September 14, 2023 21:38
Draw an OSM daily aggregate tile traffic log to an image (proof of concept quality)
""" osm_tile_traffic_file.txt output_image.tiff
Write a float32 TIFF representing tile traffic in the OSM aggregate log.
Charlie Loyd, 2023-09-14. Inspired by
I recommend downloading the .xz compressed files and using xzcat to feed
them to this script. (Doing xz decompression in-script was slow.) E.g.,
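A hedged sketch of the input handling the docstring describes (the function name and structure are mine, not the script's): read lines from stdin when xzcat feeds the script, or fall back to Python's lzma module for a .xz path, which the note above warns is the slower route.

```python
import lzma
import sys

def open_log(path=None):
    """Return a line-oriented text stream for an OSM tile traffic log."""
    if path is None:
        # e.g. `xzcat 2023-01-01.txt.xz | python script.py out.tiff`
        return sys.stdin
    if path.endswith(".xz"):
        # In-script decompression; works, but slower than piping from xzcat.
        return lzma.open(path, "rt")
    return open(path)
```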

Dear Ms Liang,

I am an Oakland resident and a past and future user of OAK.

I want a plan to make OAK a source of local pride and global leadership in responding to the climate crisis. That is not the plan on the table today. This plan only gestures at the climate crisis and aviation’s role in it, and invests nothing serious to help solve it. In fact, what’s proposed would make OAK an increasing part of the problem for years to come.

The plan is insufficient to our moment. I oppose it firmly enough to invest my personal time and resources to publicly work against it.

I hope you will reconsider and submit a better plan. I would expect it to start from the principle that any change to OAK must make it responsible for less CO2e emission. That is, the net climate impact of the airport, calculated globally and not just to the mixing level, and operating as designed, must be at worst steady.

import numpy as np
from skimage import io
import time
import datetime
from ciso8601 import parse_datetime
from sys import argv
timestep = 60 # in seconds
lonsteps = 1 # pixels per degree, i.e., 1 means an image 360 wide, 2 means 720
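To show where `lonsteps` comes in, here is a hedged sketch (mine, not the gist's code) of the accumulation step: assuming each log line looks like `z/x/y count` (check your actual files), convert each slippy-map tile to lon/lat and add its count into a float32 image sized by `lonsteps` pixels per degree.

```python
import numpy as np

def tile_to_lonlat(z, x, y):
    """Northwest corner of a slippy-map tile, in degrees."""
    n = 2.0 ** z
    lon = x / n * 360.0 - 180.0
    lat = np.degrees(np.arctan(np.sinh(np.pi * (1.0 - 2.0 * y / n))))
    return lon, lat

def accumulate(lines, lonsteps=1):
    """Sum per-tile counts into a (180*lonsteps, 360*lonsteps) image."""
    img = np.zeros((180 * lonsteps, 360 * lonsteps), dtype=np.float32)
    for line in lines:
        coords, count = line.split()
        z, x, y = (int(v) for v in coords.split("/"))
        lon, lat = tile_to_lonlat(z, x, y)
        row = min(int((90.0 - lat) * lonsteps), img.shape[0] - 1)
        col = min(int((lon + 180.0) * lonsteps), img.shape[1] - 1)
        img[row, col] += float(count)
    return img
```

The result could then go to `io.imsave(argv[2], img)` as the float32 TIFF the docstring promises.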
celoyd / BMP085.csv
Created October 16, 2021 22:10
BMP085 output: timestamp, temperature in C, pressure in Pa
1634422106.41013 25.8 101336
1634422103.86678 25.8 101330
1634422101.32353 25.8 101330
1634422098.78481 25.8 101325
1634422096.24276 25.9 101327
1634422093.40026 25.8 101334
1634422090.86181 25.9 101331
1634422088.32217 25.8 101338
1634422085.78382 25.8 101330
1634422083.24592 25.9 101331
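A small parsing sketch for the log above (function names are mine): despite the .csv name, the columns are space-separated, in the order epoch seconds, temperature in C, pressure in Pa.

```python
def parse_bmp085(lines):
    """Parse space-separated BMP085 rows: (epoch_s, temp_c, pressure_pa)."""
    rows = []
    for line in lines:
        ts, temp_c, pressure_pa = line.split()
        rows.append((float(ts), float(temp_c), float(pressure_pa)))
    return rows

def mean_pressure_hpa(rows):
    """Mean pressure, converted Pa -> hPa for readability."""
    return sum(r[2] for r in rows) / len(rows) / 100.0
```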
celoyd /
Last active September 1, 2021 14:47
Learning to despeckle sar with speckly targets

These are notes from a one-day project to test a hunch. The idea is to train a convolutional neural network to remove speckle from sar (synthetic aperture radar) using only one other observation – with its own speckles – as the target. This method does not come close to state-of-the-art despeckling, and can be biased by the skewed distribution of noise in a way that makes it useless for quantitative research. However, I hadn’t noticed it in the literature and I think it’s kind of funny, so I’m writing it up.

Everything here is about Sentinel-1 L1 GRD-HD data, since it’s what I used, since it’s free.


Sar observations contain speckle, a form of interference related to the sparkles in reflected laser light. By some definitions speckle is not noise, since it’s physically real outside the sensor and contains information, but we will treat it as noise. Speckle is (close enough to) independent between radar chirps, a.k.a. looks, and even its distribution
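A numerical illustration of that independence (mine, not from the notes): single-look intensity speckle is often modeled as multiplicative unit-mean exponential noise, and because it is independent between looks, averaging N looks shrinks its variance by roughly a factor of N.

```python
import numpy as np

rng = np.random.default_rng(0)
truth = np.full((64, 64), 10.0)  # flat "scene" reflectivity

def one_look():
    # Fresh, independent multiplicative speckle on every call.
    return truth * rng.exponential(1.0, truth.shape)

single_look = one_look()
sixteen_looks = np.mean([one_look() for _ in range(16)], axis=0)
# sixteen_looks has about a quarter the standard deviation of single_look.
```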


Pansharpening notes, mid-2021

First posted in August 2021. This is basically a snapshot of my thinking about pansharpening at that time; I’m not making any substantial updates. Last typo and clarity fixes in February 2023.


This is a collection of notes on how I’ve been approaching convolutional neural networks for pansharpening. It’s an edited version of an e-mail to a friend who had asked about this tweet, so it’s informal and somewhat silly; it’s not as polished as, say, a blog post would be. It’s basically the advice I would give to an image processing hobbyist before they started working on pansharpening.

If you want a more serious introduction, start with the literature review in Learning deep multiresolution representations for pansharpening. Most of the academic work I would recommend is mentioned there.

#!/usr/bin/env python
# Demo a brightened luminance-only inversion on 8-bit images
from skimage import io
import numpy as np
from sys import argv
# RGB <-> YCoCg-R almost straight from Wikipedia
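A hedged sketch of what the fragment above sets up: the exactly reversible integer YCoCg-R lifting transform (per Wikipedia’s description), with only the luma channel inverted and optionally brightened. The `boost` parameter is my own illustration, not part of the original demo.

```python
import numpy as np

def rgb_to_ycocg_r(rgb):
    """Forward YCoCg-R lifting transform on an (..., 3) uint8 array."""
    r, g, b = (rgb[..., i].astype(np.int32) for i in range(3))
    co = r - b
    t = b + (co >> 1)
    cg = g - t
    y = t + (cg >> 1)
    return np.stack([y, co, cg], axis=-1)

def ycocg_r_to_rgb(ycocg):
    """Exact inverse of rgb_to_ycocg_r (steps undone in reverse order)."""
    y, co, cg = (ycocg[..., i] for i in range(3))
    t = y - (cg >> 1)
    g = cg + t
    b = t - (co >> 1)
    r = b + co
    return np.clip(np.stack([r, g, b], axis=-1), 0, 255).astype(np.uint8)

def invert_luma(rgb, boost=0):
    """Invert (and optionally brighten) luminance, leaving chroma alone."""
    ycocg = rgb_to_ycocg_r(rgb)
    ycocg[..., 0] = np.clip(255 - ycocg[..., 0] + boost, 0, 255)
    return ycocg_r_to_rgb(ycocg)
```

Because the lifting steps use integer shifts, the round trip is lossless, which is the appeal of YCoCg-R over plain YCoCg here.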
not-the-ca-or-border.json

Free Idea: Enhancing Astronaut Photography of Earth

Beta version. May contain bad ideas.

By a free idea I mean something that I think is probably fun and probably possible but that I don’t have the combination of time, skill, energy, patience, etc. to do myself. I hope someone does this. I hope someone reads this and does just the specific part that they’re interested in. I’m trying to get the idea out there without giving the impression that it’s my project. It’s just an idea.

To do the whole thing as laid out here I think you’d need at least an intermediate understanding of convolutional neural networks for image processing, access to a GPU, some sense of geography and astronomy (to gut-check your intermediate results), and a reasonable internet connection to download the images.

The idea