Kyle McDonald kylemcdonald

cibomahto / format.h
Last active Sep 21, 2021
Pattern file format
//! @brief Pattern file recorder/playback
//! The purpose is to allow capture and playback of streamed pattern data. The
//! recorder is intended to capture raw data and sync packets directly from the
//! listener (i.e., before mapping or color/brightness manipulation is applied).
//! During playback, the raw packets are sent to the mapper for processing.
//! This allows the mapping and output settings to be adjusted after recording.
//! Packets are recorded with a time resolution of 1 ms.
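The header only describes the format, so as an illustrative sketch (class and method names are my own, not from the gist), a recorder with 1 ms timestamps and order-preserving playback might look like:

```python
import time

class PatternRecorder:
    """Illustrative recorder: stores (elapsed_ms, raw_packet) pairs."""

    def __init__(self):
        self.events = []
        self._start = None

    def start(self):
        self._start = time.monotonic()

    def record(self, packet: bytes):
        # timestamp with 1 ms resolution, relative to start()
        elapsed_ms = int((time.monotonic() - self._start) * 1000)
        self.events.append((elapsed_ms, packet))

    def playback(self, send):
        # replay the raw packets at their recorded offsets; mapping and
        # color/brightness processing happen downstream, per the design above
        t0 = time.monotonic()
        for elapsed_ms, packet in self.events:
            delay = elapsed_ms / 1000 - (time.monotonic() - t0)
            if delay > 0:
                time.sleep(delay)
            send(packet)
```

Because raw packets are stored untouched, any mapper or output settings can be changed between recording and playback, which is the point of the design.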
etherscanapi.js
'use strict';
const fetch = require('node-fetch');
const msautils = require('./utils');

let apikey;

// Store the Etherscan API key for subsequent requests
function setApiKey(_apikey) {
  apikey = _apikey;
}
knandersen /
Last active Sep 13, 2022 — forked from ferrihydrite/
Allows you to use Ableton projects and exports as reels for the Make Noise Morphagene eurorack module. Since a few people have found the script not working or had difficulty getting Python to work, I have created a web-based tool:
#!/usr/bin/env python2
# -*- coding: utf-8 -*-
USAGE: -w <inputwavfile> -l <inputlabels> -o <outputfile>
Instructions in Ableton:
Insert locators as splice markers in your project (Create > Add Locator)
Export Audio/Video with
Sample Rate: 48000 Hz
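The core of the workflow above is cutting the exported audio at the locator positions. As a simplified sketch (the actual script instead embeds the markers as cue points in a single WAV so the Morphagene treats them as splices; `split_at_markers` is a hypothetical name):

```python
import numpy as np

def split_at_markers(samples, marker_times, rate=48000):
    """Split an audio array at marker times given in seconds.

    samples: 1-D array of audio samples at `rate` Hz.
    marker_times: locator positions exported from the Ableton project.
    """
    cuts = sorted(int(t * rate) for t in marker_times)
    bounds = [0] + cuts + [len(samples)]
    # consecutive (start, end) pairs give one segment per splice region
    return [samples[a:b] for a, b in zip(bounds, bounds[1:])]
```

The 48000 Hz export rate matters because the Morphagene expects 48 kHz reels; splitting at sample indices derived from seconds only lines up with the locators if the export rate matches.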
Mahedi-61 / cuda_11.3_installation_on_Ubuntu_20.04
Last active Sep 23, 2022
Instructions for CUDA v11.3 and cuDNN 8.2 installation on Ubuntu 20.04 for PyTorch 1.11
### steps ####
# verify the system has a cuda-capable gpu
# download and install the nvidia cuda toolkit and cudnn
# set up environment variables
# verify the installation
### to verify your gpu is cuda-enabled, check:
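The verification step at the end of the list can be sketched in Python (a hedged sanity check, assuming the standard tool names; it only reports what it finds rather than asserting a GPU is present):

```python
import shutil

# Look for the standard CUDA tools on PATH, then ask PyTorch
# (if installed) whether it can actually see the GPU.
for tool in ("nvcc", "nvidia-smi"):
    print(f"{tool}: {shutil.which(tool) or 'not found on PATH'}")

try:
    import torch
    print("torch.cuda.is_available():", torch.cuda.is_available())
except ImportError:
    print("PyTorch is not installed")
```

`torch.cuda.is_available()` returning `True` is the check that matters for the PyTorch 1.11 target named in the description; the tool lookups help localize the problem when it returns `False`.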
carlthome / Signal reconstruction from spectrograms.ipynb
Created May 31, 2018
Try to recover audio from filtered magnitudes when phase information has been lost.
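The notebook itself is unavailable, but the technique the description names is the classic Griffin-Lim iteration. A minimal sketch using SciPy (parameter values are my own choices, not from the notebook):

```python
import numpy as np
from scipy.signal import stft, istft

def griffin_lim(magnitude, n_iter=50, nperseg=256, noverlap=192, seed=0):
    """Estimate a signal whose STFT magnitude matches `magnitude`."""
    rng = np.random.default_rng(seed)
    # start from the target magnitudes with random phase
    spec = magnitude * np.exp(2j * np.pi * rng.random(magnitude.shape))
    for _ in range(n_iter):
        # inverse then forward transform projects the estimate
        # onto the set of consistent STFTs
        _, x = istft(spec, nperseg=nperseg, noverlap=noverlap)
        _, _, spec_hat = stft(x, nperseg=nperseg, noverlap=noverlap)
        # keep the estimated phase, snap back to the known magnitudes
        spec = magnitude * np.exp(1j * np.angle(spec_hat))
    _, x = istft(spec, nperseg=nperseg, noverlap=noverlap)
    return x
```

Each iteration alternates between two constraints: the signal must have the given magnitudes, and it must be a valid STFT of some time-domain signal; the recovered phase is whatever makes those two agree.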
Last active Nov 29, 2020
STFT Benchmarks on CPU and GPU in Python
MIT License
Copyright (c) 2017 Jan Schlüter
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
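The benchmark code is not reproduced here; as a minimal sketch of the same idea, this is a CPU-only timing harness using SciPy (the GPU variant would run `torch.stft` on a CUDA tensor and wrap the timed region in `torch.cuda.synchronize()` calls; sizes are arbitrary):

```python
import time
import numpy as np
from scipy.signal import stft

def bench(fn, reps=20):
    fn()  # warm-up run, excluded from the timing
    t0 = time.perf_counter()
    for _ in range(reps):
        fn()
    return (time.perf_counter() - t0) / reps

x = np.random.randn(1 << 16).astype(np.float32)
cpu_s = bench(lambda: stft(x, nperseg=1024, noverlap=768))
print(f"scipy stft (CPU): {cpu_s * 1e3:.2f} ms per call")
```

The warm-up call and averaging over repetitions matter more on GPU, where lazy initialization and asynchronous launches otherwise distort the first measurement.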
victor-shepardson /
Last active Sep 10, 2022
using pycuda and glumpy to draw pytorch GPU tensors to the screen without copying to host memory
from contextlib import contextmanager
import numpy as np
import torch
from torch import Tensor, ByteTensor
import torch.nn.functional as F
from torch.autograd import Variable
import pycuda.driver
from pycuda.gl import graphics_map_flags
from glumpy import app, gloo, gl
Adversarial variational bayes toy example.ipynb (notebook preview unavailable)
shagunsodhani / Batch
Last active Sep 11, 2022
Notes for "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift" paper

The Batch Normalization paper describes a method to address various issues related to training deep neural networks. It makes normalization part of the architecture itself and reports significant improvements in the number of iterations required to train the network.

Issues With Training Deep Neural Networks

Internal Covariate shift

Covariate shift refers to a change in the input distribution of a learning system. In a deep network, the input to each layer is affected by the parameters of all the preceding layers, so even small parameter changes are amplified as they propagate through the network. The resulting change in the distribution of inputs to the internal layers is known as internal covariate shift.

It is well established that networks converge faster if their inputs are whitened (i.e., zero mean and unit variance) and uncorrelated; internal covariate shift leads to just the opposite.
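The normalization the paper builds into the architecture can be sketched as follows (a minimal forward pass for one mini-batch, assuming 2-D inputs; the paper additionally tracks running statistics for inference):

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """Normalize each feature over the batch, then scale and shift.

    x: (batch, features). gamma and beta are learned per-feature
    parameters that restore the layer's representational power
    after normalization.
    """
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta
```

With `gamma = 1` and `beta = 0` the output of each feature has (approximately) zero mean and unit variance over the batch, which is exactly the whitening property described above, computed per mini-batch instead of over the whole dataset.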