Zach Caceres zcaceres

zcaceres /
Last active Oct 9, 2019
Articles Written to Help New JS Devs
zcaceres / Audio Data Augmentation.ipynb
Last active Mar 27, 2019
Some data augmentation for audio
zcaceres /
Last active Jul 21, 2019
Rough Draft: Faster, Better Speech Recognition with Wav2Letter's Auto Segmentation Criterion

Faster, Better Speech Recognition with Wav2Letter's Auto Segmentation Criterion

In 2016, Facebook AI Research (FAIR) broke new ground with Wav2Letter, a fully convolutional speech recognition system.

In Wav2Letter, FAIR showed that systems based on convolutional neural networks (CNNs) could perform as well as traditional recurrent neural network-based approaches.

In this article, we'll focus on an understudied module at the core of Wav2Letter: the Auto Segmentation Criterion (ASG).

[Figure: Architecture of the wav2letter model]
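The criterion's core idea can be sketched in miniature. ASG scores a letter sequence as the sum of per-frame emission scores from the network plus learned letter-to-letter transition scores, and normalizes with a "logadd" over all possible sequences. Below is a minimal NumPy sketch of that normalization term only (the forward pass over the fully connected graph); the `transitions[i, j]` = "score of moving from letter i to letter j" convention is an assumption of this sketch, not a claim about the wav2letter codebase:

```python
import numpy as np

def logadd(x, axis=0):
    # Numerically stable log-sum-exp along an axis.
    m = x.max(axis=axis)
    return m + np.log(np.exp(x - np.expand_dims(m, axis)).sum(axis=axis))

def full_graph_score(emissions, transitions):
    # Forward ("logadd") score over ALL letter sequences.
    # emissions: (T, L) per-frame letter scores from the network.
    # transitions: (L, L) learned transition scores (assumed convention:
    # transitions[i, j] scores a move from letter i to letter j).
    alpha = emissions[0].copy()  # (L,) scores after the first frame
    for t in range(1, len(emissions)):
        # For each next letter j: logadd over all previous letters i.
        alpha = emissions[t] + logadd(alpha[:, None] + transitions, axis=0)
    return logadd(alpha, axis=0)
```

The full ASG loss subtracts this full-graph score from the analogous forward score computed over the graph constrained to the target transcription.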

zcaceres /
Last active Oct 17, 2019
Supercharge Your Bash Workflows with GNU `parallel`

Supercharge Your Bash Workflows with GNU parallel

GNU parallel is a command line tool for running jobs in parallel.

parallel is awesome and belongs in the toolbox of every programmer. But I found the docs a bit overwhelming at first. Fortunately, you can get real work done with parallel using just a few basic commands.

Why is parallel so useful?

Let's compare sequential and parallel execution of the same compute-intensive task.

Imagine you have a folder of .wav audio files to convert to .flac:
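Both approaches can be sketched as shell functions, assuming ffmpeg handles the conversion and GNU parallel is installed. In parallel's syntax, `{}` is each input file and `{.}` is the same filename with its extension stripped:

```shell
# Sequential: convert one file at a time.
seq_convert() {
  for f in *.wav; do
    ffmpeg -i "$f" "${f%.wav}.flac"
  done
}

# Parallel: GNU parallel runs one ffmpeg job per CPU core.
# The ::: separator feeds the .wav files in as arguments.
par_convert() {
  parallel ffmpeg -i {} {.}.flac ::: *.wav
}
```

On a multi-core machine the parallel version finishes roughly N times faster for N cores, since each conversion is an independent, CPU-bound job.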

import zerorpc
from model_fastai import FastaiImageClassifier

PORT = 4242  # hypothetical port; use whatever your Node layer connects to

class PythonServer(object):
    def listen(self):
        server = zerorpc.Server(self)
        server.bind(f'tcp://0.0.0.0:{PORT}')
        print(f'Python Server started listening on {PORT} ...')
        server.run()

    def predict_from_img(self, img_path):
        model = FastaiImageClassifier()
        return model.predict(img_path)
node-python-fastai-3.js
static async invoke(method, ...args) {
    try {
        const zerorpc = PythonConnector.server();
        return await Utils.promisify(zerorpc.invoke, zerorpc, method, ...args);
    }
    catch (e) {
        return Promise.reject(e);
    }
}
node-python-fastai-2.js
// Our prediction endpoint (Receives an image as req.file)
// Assumes an Express app instance named `app` and a multer uploader named `upload`'/predict', upload.single('img'), async function (req, res) {
    const { path } = req.file;
    try {
        const prediction = await PythonConnector.invoke('predict_from_img', path);
        res.json(prediction);
    }
    catch (e) {
        console.error(e);
        res.sendStatus(500);
    }
});
node-python-fastai-1.js
class PythonConnector {
    static server() {
        if (!PythonConnector.connected) {
            console.log('PythonConnector – making a new connection to the python layer');
            PythonConnector.zerorpcProcess = spawn('python3', ['-u', path.join(__dirname, '')]);
            PythonConnector.zerorpcProcess.stdout.on('data', function(data) {
                console.log('python:', data.toString());
            });
            PythonConnector.zerorpcProcess.stderr.on('data', function(data) {
                console.error('python:', data.toString());
            });
        }
    }
}
zcaceres /
Last active Oct 15, 2019
Starter code to use NodeJS with a Python layer for the model.

Deploying a Deep Learning Image Classification Model with NodeJS, Python, and Fastai

TL;DR: Use this to easily deploy a FastAI Python model using NodeJS.

You've processed your data and trained your model and now it's time to move it to the cloud.

If you've used a Python-based framework like fastai to build your model, there are several excellent solutions for deployment, like Django or Starlette. But many web devs prefer to work in NodeJS, especially when the model is only part of a broader application.

My friend Navjot pointed out that NodeJS and Python could run together if we could send remote procedure calls from NodeJS to Python.

zcaceres /
Created Oct 31, 2018
Composer Deep Learning #8
#!/bin/bash
tar -zxvf yourfile.tar.gz