Chris Roat (chrisroat)
💭 contributing to the aggregate of tiny pushes by each honest worker
@chrisroat
chrisroat / home.html
Created April 1, 2018 07:12
Multi-user Flask-Dance with SQLAlchemy, via Twitter
<!DOCTYPE html>
<html>
<head>
<title>Flask-Dance Multi-User SQLAlchemy</title>
</head>
<body>
{% with messages = get_flashed_messages(with_categories=true) %}
{% if messages %}
<ul class="flash">
{% for category, message in messages %}
<li class="{{ category }}">{{ message }}</li>
{% endfor %}
</ul>
{% endif %}
{% endwith %}
</body>
</html>
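The Jinja loop above renders one list item per flashed `(category, message)` pair, which is the shape `flask.flash(message, category)` stores. As a plain-Python illustration of what that loop emits (a hypothetical helper, not Flask code):

```python
def render_flash(messages):
    """Render flashed (category, message) pairs the way the template loop does."""
    if not messages:
        return ""
    items = "".join(f'<li class="{cat}">{msg}</li>' for cat, msg in messages)
    return f'<ul class="flash">{items}</ul>'

print(render_flash([("error", "Login failed")]))
# <ul class="flash"><li class="error">Login failed</li></ul>
```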
<!DOCTYPE html>
<html>
<head>
<meta charset="UTF-8">
<title>Jigsaw puzzle</title>
<script type="text/javascript">
function save(filename, data)
{
    // Build a CSV Blob and trigger a browser download via a temporary link.
    var blob = new Blob([data], {type: "text/csv"});
    var link = document.createElement("a");
    link.href = URL.createObjectURL(blob);
    link.download = filename;
    link.click();
}
</script>
</head>
</html>
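A Python counterpart of the JS `save()` helper above would serialize rows into the CSV text that backs the Blob. A minimal sketch, with illustrative column names not taken from the gist:

```python
import csv
import io

def to_csv_text(rows):
    """Serialize a list of rows into CSV text (the data a Blob would hold)."""
    buf = io.StringIO()
    csv.writer(buf).writerows(rows)
    return buf.getvalue()

print(to_csv_text([["piece", "x", "y"], [1, 20, 30]]).splitlines())
# ['piece,x,y', '1,20,30']
```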
@chrisroat
chrisroat / cloudvolume_dask_wrapper.py
Last active December 7, 2019 19:16
Example of wrapping dask-array and cloud-volume objects so they play nicely together
from cloudvolume import CloudVolume
import dask.array as da
import numpy as np


# Wraps a DaskArray to provide a `tostring` method, as well as providing
# pass-throughs for methods needed in interacting with `cloudvolume`.
class DaskWriteToCvWrap:
    def __init__(self, dask_arr):
        self.arr = dask_arr

    def tostring(self):
        # Materialize the lazy dask array and serialize its bytes.
        return np.asarray(self.arr).tobytes()
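The duck-typing idea behind the wrapper can be shown without cloud-volume installed: an object that adds `tostring` while passing other attribute lookups through to the wrapped array. `SerializingWrap` is a hypothetical name used only for this sketch, with a plain NumPy array standing in for the dask array:

```python
import numpy as np

class SerializingWrap:
    """Add a `tostring` method to an array while passing other attrs through."""

    def __init__(self, arr):
        self.arr = arr

    def tostring(self):
        # Serialize the array's contents to raw bytes.
        return np.asarray(self.arr).tobytes()

    def __getattr__(self, name):
        # Pass through shape, dtype, ndim, etc. to the wrapped array.
        return getattr(self.arr, name)

wrapped = SerializingWrap(np.arange(6, dtype=np.uint8).reshape(2, 3))
print(wrapped.shape)            # (2, 3)
print(len(wrapped.tostring()))  # 6
```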
@chrisroat
chrisroat / dj_params_test.py
Last active September 22, 2020 15:05
DataJoint pipeline that processes only subsets of cross products
"""Example of linear pipeline with per-stage parameters.
Note that only one of the two tests can be run at a time, since the schema
is dropped at the end of the test. (The schema is not created per-test,
as datajoint dbs are created on module import here.)
"""
import numpy as np
import pytest
import dask.array as da


# Adapted from dask.array.map_overlap
def map_overlap_multi(func, arr, aux, depth, boundary=None, trim=True, **kwargs):
    """Variation on map_overlap that maps both the data array and the aux data,
    with overlaps in the data array."""
    depth2 = da.overlap.coerce_depth(arr.ndim, depth)
    boundary2 = da.overlap.coerce_boundary(arr.ndim, boundary)
    for i in range(arr.ndim):
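The overlap/trim mechanism that `map_overlap` variants build on can be illustrated in pure NumPy for a single chunk: pad each side by `depth` so the function sees a halo of neighboring values, then trim the halo back off. This toy (`map_with_overlap_1d` is a made-up name) stands in for dask's per-chunk machinery:

```python
import numpy as np

def map_with_overlap_1d(func, arr, depth):
    padded = np.pad(arr, depth, mode="reflect")   # add halo ("boundary")
    result = func(padded)                         # func sees the halo
    return result[depth:len(result) - depth]      # trim halo back off

smoothed = map_with_overlap_1d(
    lambda x: np.convolve(x, np.ones(3) / 3, mode="same"),
    np.arange(8, dtype=float),
    depth=1,
)
print(smoothed.shape)  # (8,)
```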
@chrisroat
chrisroat / histogram_matching.py
Last active November 12, 2020 16:48
Dask-based histogram matching
import dask
import dask.array as da
import numpy as np
from dask.array import reductions
def _match_cumulative_cdf(source, template):
    """
    Return modified source array so that the cumulative density function of
    its values matches the cumulative density function of the template.
    """
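The algorithm the dask version parallelizes can be written in plain NumPy: map each source value to the template value at the same quantile of the empirical CDF. A minimal single-channel sketch of that cumulative-CDF matching:

```python
import numpy as np

def match_cumulative_cdf(source, template):
    src_values, src_unique_indices, src_counts = np.unique(
        source.ravel(), return_inverse=True, return_counts=True)
    tmpl_values, tmpl_counts = np.unique(template.ravel(), return_counts=True)

    # Empirical CDFs of both images, in [0, 1].
    src_quantiles = np.cumsum(src_counts) / source.size
    tmpl_quantiles = np.cumsum(tmpl_counts) / template.size

    # For each source quantile, look up the template value at that quantile.
    interp_values = np.interp(src_quantiles, tmpl_quantiles, tmpl_values)
    return interp_values[src_unique_indices].reshape(source.shape)

rng = np.random.default_rng(0)
src = rng.normal(0, 1, (32, 32))
tmpl = rng.normal(10, 2, (32, 32))
matched = match_cumulative_cdf(src, tmpl)
print(matched.shape)  # (32, 32)
```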
import datajoint as dj

schema = dj.schema('test_aggr')


# An acquisition has several rounds that come in independently over days.
# The preprocessing and processing can proceed independently for each round.
# But there is an analysis done once all preprocessing is done, and
# another analysis done once all processing is done.
# This can be accomplished, in a brittle manner, by creating artificial

# When an acquisition is inserted, the number of images is known
# and all metadata can be inserted.
@schema
class Acquisition(dj.Computed):
    definition = """
    acq_name: varchar(32)
    """

    class ImageMetadata(dj.Part):
        definition = """
        -> Acquisition
        image_index: int
        """
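The gating logic the schema is after can be stated without a database: an aggregate analysis should run only once every expected round has finished its stage. A plain-Python sketch with illustrative names, not DataJoint API:

```python
def ready_for_analysis(expected_rounds, completed_rounds):
    """True once every expected round has been processed for this stage."""
    return set(expected_rounds) <= set(completed_rounds)

print(ready_for_analysis([1, 2, 3], [1, 2]))     # False
print(ready_for_analysis([1, 2, 3], [3, 2, 1]))  # True
```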
import re

import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
from scipy.cluster import hierarchy
from matplotlib.colors import ListedColormap


def cluster_order(d):
    pdist = hierarchy.distance.pdist(d.values)
    linkage = hierarchy.linkage(pdist, method="complete")
    idx = hierarchy.fcluster(linkage, 0.5 * pdist.max(), "distance")
    idx = np.argsort(idx)
    return d.iloc[idx, idx]


def draw(df, symmetric=True):
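The key trick in `cluster_order` is applying one permutation to both axes of a square distance matrix so rows and columns stay aligned. A NumPy-only illustration (the matrix and ordering here are made up, standing in for the fcluster-derived order):

```python
import numpy as np

d = np.array([[1.0, 0.2, 0.9],
              [0.2, 1.0, 0.1],
              [0.9, 0.1, 1.0]])
order = np.array([0, 2, 1])          # stand-in for a cluster-derived ordering
reordered = d[np.ix_(order, order)]  # permute rows and columns together

print(reordered[0, 1])  # 0.9 (the old d[0, 2])
```

Permuting both axes with the same index array keeps the matrix symmetric, which is exactly what a clustered heatmap of pairwise distances requires.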