Manuel Romero mrm8488

🏠
Working from home
View GitHub Profile
mrm8488 / highfive.js
Last active August 27, 2015 11:13 — forked from kmoe/highfive.js
module['exports'] = function highFive(hook) {
  // hook.io has a range of node modules available - see
  // https://hook.io/modules.
  // We use request (https://www.npmjs.com/package/request) for an easy way to
  // make the HTTP request.
  var request = require('request');
  // The parameters passed in via the slash command POST request.
  var params = hook.params;
mrm8488 / README.md
Created October 24, 2019 01:09 — forked from notwaldorf/README.md
ServiceWorker code to cache TensorFlow model shards.


One of the problems I have when testing giant TensorFlow models in TensorFlow.js is that they're huge (around 500 MB) and take forever to download every time I refresh the page. This is how I set up my ServiceWorker code so that, at least in testing, I only have to download the model once; after that it stays in the cache for the next load.

mrm8488 / an-inquiry-into-matplotlib-figures.ipynb
Created December 23, 2019 20:55 — forked from akashpalrecha/an-inquiry-into-matplotlib-figures.ipynb
This notebook dives deep into Matplotlib's Figures, Axes, subplots and the very amazing GridSpec!
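The notebook itself is not rendered in this listing. As a rough, self-contained sketch (not taken from the notebook) of the Figure, Axes and GridSpec machinery it explores, assuming nothing beyond stock Matplotlib:

import matplotlib.pyplot as plt
from matplotlib.gridspec import GridSpec

# A Figure is the top-level container; each Axes is one plotting area inside it.
fig = plt.figure(figsize=(6, 4), constrained_layout=True)

# GridSpec carves the figure into a 2x2 grid of cells.
gs = GridSpec(2, 2, figure=fig)

# An Axes can span any rectangular block of GridSpec cells.
ax_top = fig.add_subplot(gs[0, :])    # the whole first row
ax_left = fig.add_subplot(gs[1, 0])   # bottom-left cell
ax_right = fig.add_subplot(gs[1, 1])  # bottom-right cell

ax_top.plot([0, 1, 2], [0, 1, 0])
ax_left.hist([1, 2, 2, 3, 3, 3])
ax_right.scatter([1, 2, 3], [3, 1, 2])

fig.savefig('gridspec_demo.png')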
#!/bin/sh
set -x
# == Swarm training (alpha release) ==
# Setup:
#
# git clone https://github.com/shawwn/gpt-2
# cd gpt-2
# git checkout dev-shard
mrm8488 / app.js
Created February 4, 2020 01:32 — forked from stongo/app.js
Joi validation in a Mongoose model
var mongoose = require('mongoose');
mongoose.connect('mongodb://localhost/test');
var db = mongoose.connection;
// Log connection errors instead of silently discarding them.
db.on('error', console.error.bind(console, 'connection error: '));
from torch.utils.data import IterableDataset, DataLoader

class CustomIterableDataset(IterableDataset):
    def __init__(self, filename):
        # Store the filename in the object's memory
        self.filename = filename
        # And that's it, we no longer need to store the contents in memory

    def __iter__(self):
        # Minimal __iter__ implied by the usage below: stream the raw lines of the file
        return iter(open(self.filename))
# Creating the iterable dataset object
dataset = CustomIterableDataset('path_to/somefile')
# Creating the dataloader
dataloader = DataLoader(dataset, batch_size=64)

for data in dataloader:
    # data is a list containing 64 (= batch_size) consecutive lines of the file
    print(len(data))  # 64

# We still need to separate the text and labels from each other and preprocess the text
class CustomIterableDatasetv1(IterableDataset):
    def __init__(self, filename):
        # Store the filename in the object's memory; the contents stay on disk
        self.filename = filename

    def preprocess(self, text):
        # Hypothetical cleanup step; the preview cuts off before the real body
        return text.lower().strip()

    def __iter__(self):
        # Assumed "text<TAB>label" lines, yielding (preprocessed text, int label) pairs
        return map(lambda l: (self.preprocess(l.split('\t')[0]), int(l.split('\t')[1])), open(self.filename))
dataset = CustomIterableDatasetv1('path_to/somefile')
dataloader = DataLoader(dataset, batch_size=64)

for X, y in dataloader:
    print(len(X))   # 64
    print(y.shape)  # (64,)
    ### Do something with X and y
    ###
class CustomIterableDatasetv2(IterableDataset):
    def __init__(self, filename_en, filename_gm):
        # Store the filenames in the object's memory
        self.filename_en = filename_en
        self.filename_gm = filename_gm
        # And that's it, we no longer need to store the contents in memory

    def __iter__(self):
        # Assumed behaviour: pair line i of the first file with line i of the second file
        return zip(open(self.filename_en), open(self.filename_gm))
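A usage sketch under the assumptions above (placeholder file paths, zip-based pairing of the two files); each batch then holds 64 aligned line pairs:

dataset = CustomIterableDatasetv2('path_to/english_file', 'path_to/german_file')
dataloader = DataLoader(dataset, batch_size=64)

for en_batch, gm_batch in dataloader:
    # Each batch element is a list of 64 raw lines, aligned across the two files by position
    print(len(en_batch), len(gm_batch))  # 64 64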