Kyle McDonald (kylemcdonald)

@cibomahto
cibomahto / format.h
Last active Sep 21, 2021
Pattern file format
//! @brief Pattern file recorder/playback
//!
//! The purpose is to allow capture and playback of streamed pattern data. The
//! recorder is intended to capture raw data and sync packets directly from the
//! listener (i.e., before mapping or color/brightness manipulation is applied).
//! During playback, the raw packets are sent to the mapper for processing.
//! This allows the mapping and output settings to be adjusted after recording.
//!
//! Packets are recorded with a time resolution of 1 ms.
//!
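The header above only documents intent; as a rough illustration, here is a minimal Python sketch of how such a recorder might frame packets on disk, assuming a hypothetical layout of a little-endian millisecond timestamp plus a length-prefixed payload (the actual format.h layout is not shown in this excerpt):

import struct

# Hypothetical framing: 4-byte millisecond timestamp, 2-byte payload length,
# then the raw packet bytes. The real pattern file layout may differ.
RECORD_HEADER = struct.Struct('<IH')

def write_record(f, timestamp_ms, packet):
    # timestamps have 1 ms resolution, measured from the start of the recording
    f.write(RECORD_HEADER.pack(timestamp_ms & 0xFFFFFFFF, len(packet)))
    f.write(packet)

def read_records(f):
    # yields (timestamp_ms, packet) pairs for playback
    while True:
        header = f.read(RECORD_HEADER.size)
        if len(header) < RECORD_HEADER.size:
            break
        timestamp_ms, length = RECORD_HEADER.unpack(header)
        yield timestamp_ms, f.read(length)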
etherscanapi.js
'use strict';
const fetch = require('node-fetch');
const msautils = require('./utils');
let apikey;
function setApiKey(_apikey) {
  apikey = _apikey;
}
@knandersen
knandersen / morphagene_ableton.py
Last active Sep 17, 2021 — forked from ferrihydrite/morphagene_audacity.py
Allows you to use Ableton projects and exports as reels for the Make Noise Morphagene Eurorack module.
#!/usr/bin/env python2
# -*- coding: utf-8 -*-
"""
USAGE:
morphagene_ableton.py -w <inputwavfile> -l <inputlabels> -o <outputfile>
Instructions in Ableton:
Insert locators as splice markers in your project (Create > Add Locator)
Export Audio/Video with
Sample Rate: 48000 Hz
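As a rough sketch of the conversion step (not the script itself), here is how locator times in seconds could be turned into splice positions in a 48 kHz reel; the label-file format and the file names below are assumptions made for illustration:

import scipy.io.wavfile as wavfile

SR = 48000  # Morphagene reels are 48 kHz

def load_marker_times(label_path):
    # hypothetical label file: one marker per line, first tab-separated field is time in seconds
    with open(label_path) as f:
        return [float(line.split('\t')[0]) for line in f if line.strip()]

rate, audio = wavfile.read('ableton_export.wav')  # hypothetical file name
assert rate == SR, 'export the Ableton project at 48000 Hz'
splice_samples = [int(t * SR) for t in load_marker_times('locators.txt')]  # hypothetical file name
print('splice points (samples):', splice_samples)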
@Mahedi-61
Mahedi-61 / cuda_10.1_installation_on_Ubuntu_18.04
Last active Sep 8, 2021
CUDA 10.1 Installation on Ubuntu 18.04
#!/bin/bash
## This gist contains instructions for installing CUDA 10.1 and cuDNN 7.6 on Ubuntu 18.04 for TensorFlow 2.1.0
### steps ####
# verify the system has a cuda-capable gpu
# download and install the nvidia cuda toolkit and cudnn
# setup environmental variables
# verify the installation (see the Python check below)
###
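For the final verification step, a quick check from Python confirms that the TensorFlow 2.1.0 build sees the GPU (a sketch; it assumes TensorFlow is already installed in the active environment):

import tensorflow as tf

print(tf.__version__)                          # expect 2.1.0
print(tf.test.is_built_with_cuda())            # True if this build links against CUDA
print(tf.config.list_physical_devices('GPU'))  # non-empty if CUDA 10.1/cuDNN 7.6 are found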
@carlthome
carlthome / Signal reconstruction from spectrograms.ipynb
Created May 31, 2018
Try to recover audio from filtered magnitudes when phase information has been lost.
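The notebook itself is not reproduced here, but as a loose sketch of this kind of magnitude-only reconstruction, librosa's Griffin-Lim implementation can stand in (the file names are placeholders, and this is not necessarily the notebook's exact method):

import numpy as np
import librosa
import soundfile as sf

y, sr = librosa.load('input.wav', sr=None)                 # placeholder file name
S = np.abs(librosa.stft(y, n_fft=1024, hop_length=256))    # keep magnitudes, discard phase
y_hat = librosa.griffinlim(S, n_iter=64, hop_length=256)   # iteratively estimate a consistent phase
sf.write('reconstructed.wav', y_hat, sr)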
@f0k
f0k / LICENSE
Last active Nov 29, 2020
STFT Benchmarks on CPU and GPU in Python
View LICENSE
MIT License
Copyright (c) 2017 Jan Schlüter
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
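The benchmark code itself is not excerpted above (only the license is shown); as a loose sketch of the kind of CPU-vs-GPU STFT timing being compared, here is one way to pit scipy against torch.stft (an assumption about the setup, not the gist's actual harness):

import time
import numpy as np
import scipy.signal
import torch

x = np.random.randn(16000 * 60).astype(np.float32)  # one minute of audio at 16 kHz

t0 = time.time()
_, _, Z = scipy.signal.stft(x, nperseg=1024, noverlap=768)
print('scipy.signal.stft (CPU): %.3f s' % (time.time() - t0))

if torch.cuda.is_available():
    xg = torch.from_numpy(x).cuda()
    win = torch.hann_window(1024, device=xg.device)
    torch.cuda.synchronize()
    t0 = time.time()
    Zg = torch.stft(xg, n_fft=1024, hop_length=256, window=win, return_complex=True)
    torch.cuda.synchronize()
    print('torch.stft (GPU): %.3f s' % (time.time() - t0))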
@victor-shepardson
victor-shepardson / pytorch-glumpy.py
Last active Aug 23, 2021
using pycuda and glumpy to draw pytorch GPU tensors to the screen without copying to host memory
from contextlib import contextmanager
import numpy as np
import torch
from torch import Tensor, ByteTensor
import torch.nn.functional as F
from torch.autograd import Variable
import pycuda.driver
from pycuda.gl import graphics_map_flags
from glumpy import app, gloo, gl
Adversarial variational bayes toy example.ipynb
@shagunsodhani
shagunsodhani / Batch Normalization.md
Last active Sep 16, 2021
Notes for "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift" paper

The Batch Normalization paper describes a method to address several issues that arise when training deep neural networks. It makes normalization a part of the architecture itself and reports significant improvements in the number of iterations required to train the network.

Issues With Training Deep Neural Networks

Internal Covariate Shift

Covariate shift refers to a change in the input distribution to a learning system. In a deep network, the input to each layer is affected by the parameters of all preceding layers, so even small parameter changes get amplified as they propagate through the network. This shifts the input distribution of the internal layers and is known as internal covariate shift.

It is well established that networks converge faster if their inputs are whitened (i.e., zero mean, unit variance) and uncorrelated; internal covariate shift leads to just the opposite.
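A minimal numpy sketch of the transform the paper proposes, normalizing each feature over the mini-batch and then applying the learned scale and shift:

import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    # x has shape (batch, features); normalize each feature over the mini-batch
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)
    # gamma and beta are learned, so the layer can recover the identity transform if needed
    return gamma * x_hat + beta

x = np.random.randn(64, 10) * 3.0 + 5.0           # a shifted, scaled mini-batch
y = batch_norm(x, gamma=np.ones(10), beta=np.zeros(10))
print(y.mean(axis=0).round(3), y.std(axis=0).round(3))  # roughly 0 and 1 per feature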