João Felipe Santos (jfsantos)

@jfsantos
jfsantos / test_phone.yaml
Created Mar 7, 2014
pylearn2 model using MLPWithSource and CompositeLayerWithSource
!obj:pylearn2.train.Train {
    dataset: &train !obj:research.code.pylearn2.datasets.timit.TIMIT {
        which_set: 'train',
        frame_length: &flen 160,
        frames_per_example: &fpe 1,
        samples_to_predict: &ylen 1,
        n_next_phones: 1,
        n_prev_phones: 1,
        #start: 0,
        #stop: 100,
@jfsantos
jfsantos / build.jl
Last active Aug 29, 2015
BinDeps OSX problem
using BinDeps
@BinDeps.setup
@unix_only begin
ecos = library_dependency("ecos",aliases=["libecos"])
end
provides(Sources, URI("https://github.com/ifa-ethz/ecos/archive/master.zip"),
[ecos], os = :Unix, unpacked_dir="ecos-master")
jfsantos / ltsd_vad.py
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import wave
import numpy as np
import scipy as sp
WINSIZE=8192
sound='sound.wav'
def read_signal(filename, winsize):
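The preview cuts off right after the `read_signal` signature. A minimal sketch of what such a helper might look like, assuming it reads a mono 16-bit WAV and zero-pads the signal to a whole number of `winsize` windows — the body below is hypothetical, not taken from the gist; only `WINSIZE = 8192` and the signature come from the preview:

```python
import wave
import numpy as np

WINSIZE = 8192

def read_signal(filename, winsize):
    """Read a mono 16-bit WAV and zero-pad it to a whole number of windows.

    (Hypothetical body: the gist preview truncates before the real one.)
    """
    wf = wave.open(filename, "rb")
    raw = wf.readframes(wf.getnframes())
    wf.close()
    signal = np.frombuffer(raw, dtype=np.int16)
    pad = (-len(signal)) % winsize  # samples needed to fill the last window
    signal = np.concatenate([signal, np.zeros(pad, dtype=np.int16)])
    return signal, len(signal) // winsize
```

An LTSD (long-term spectral divergence) VAD would then process the returned signal window by window, which is why the window count is handed back alongside the samples.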
jfsantos / gist:4a666a3c99756546507b
### Keybase proof
I hereby claim:
* I am jfsantos on github.
* I am jfsantos (https://keybase.io/jfsantos) on keybase.
* I have a public key whose fingerprint is C422 70CC D7E3 C653 09B0 E52E 06AF B67E AD5E 95E5
To claim this, I am signing this object:
@jfsantos
jfsantos / gist:f46214f5165b298030fb
Created Dec 19, 2014
music21 under Python 3 install error
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/private/var/folders/z3/s0__77pd3n1bpym4f75hrzbc0000gn/T/pip_build_jfsantos/music21/setup.py", line 65, in <module>
    include_package_data=True,
  File "/Users/jfsantos/anaconda/envs/py3/lib/python3.3/distutils/core.py", line 148, in setup
jfsantos / fft_gtgram_comparison.py
from gammatone.fftweight import fft_gtgram
from scipy.io.matlab import loadmat
s = loadmat("test.mat")["s"][:,0]
fs = 16000
# gt_py has 260 frames
gt_py = fft_gtgram(s, fs, 0.010, 0.0025, 23, 125)
# gt_mat has 269 frames
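The two comments record a frame-count mismatch (260 vs. 269) between the Python and MATLAB gammatone spectrograms. A common source of such gaps is the framing convention: one implementation counts only full windows, another zero-pads the tail so every hop yields a frame. The formulas below are assumed conventions for illustration, not the actual gammatone package code, and the 10760-sample signal length is hypothetical:

```python
import math

def n_frames_truncating(n_samples, fs, win_t, hop_t):
    """Count only full windows; drop the trailing partial one."""
    win, hop = int(win_t * fs), int(hop_t * fs)
    return 1 + (n_samples - win) // hop

def n_frames_padding(n_samples, fs, hop_t):
    """Zero-pad the tail, so every hop produces a frame."""
    hop = int(hop_t * fs)
    return math.ceil(n_samples / hop)

# With a hypothetical 10760-sample signal and the gist's parameters
# (fs=16000, 10 ms window, 2.5 ms hop), the conventions already disagree:
print(n_frames_truncating(10760, 16000, 0.010, 0.0025))  # → 266
print(n_frames_padding(10760, 16000, 0.0025))            # → 269
```

Whether this particular convention difference explains the 260-vs-269 gap would need checking against both implementations; the sketch only shows that the choice alone produces differences of several frames at these window and hop sizes.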
jfsantos / eval_mlp.jl
ENV["MOCHA_USE_CUDA"] = "true"
using HDF5, JLD, Mocha
X = Array[]
push!(X, rand(Float32, 128,11*129,1,1))
y = Array[]
push!(y, rand(Float32, 128, 129, 1, 1))
#data_layer = AsyncHDF5DataLayer("train", "train.txt", 128, 1000, [:features, :targets], false, [])
jfsantos / test_lms.py
import numpy as np
import scipy.signal as sig
from adaptfilt import lms
if __name__ == '__main__':
    import matplotlib.pyplot as plt
    from scipy.io import wavfile

    sigma = 0.1
    order = 100
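The preview ends just after setting up `sigma` and `order`. For readers without the adaptfilt package, here is a self-contained NumPy sketch of the LMS update itself — this is not adaptfilt's `lms` API, and the toy system, step size `mu`, and 3-tap filter length below are assumptions for illustration:

```python
import numpy as np

def lms_filter(u, d, order, mu):
    """Plain LMS: adapt `order` taps so that w @ recent inputs tracks d."""
    w = np.zeros(order)
    y = np.zeros(len(u))
    e = np.zeros(len(u))
    for n in range(order - 1, len(u)):
        x = u[n - order + 1:n + 1][::-1]  # [u[n], u[n-1], ..., u[n-order+1]]
        y[n] = w @ x                      # filter output
        e[n] = d[n] - y[n]                # instantaneous error
        w = w + 2 * mu * e[n] * x         # stochastic-gradient tap update
    return y, e, w

rng = np.random.default_rng(0)
u = rng.standard_normal(5000)             # white excitation
h = np.array([0.5, -0.3, 0.1])            # hypothetical system to identify
d = np.convolve(u, h)[:len(u)]            # desired (noise-free) output
y, e, w = lms_filter(u, d, order=3, mu=0.01)
# w converges toward h as the error dies out
```

With a noise-free desired signal and white input, the taps converge geometrically to the true system, so after a few thousand samples `w` matches `h` closely.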