Manuel Romero (mrm8488)
@mrm8488
mrm8488 / Info-commands.sh
Created January 7, 2020 04:21
Commands to gather useful information about a machine
printenv
ifconfig -a
iptables -L
cat /etc/apache2/sites-enabled/*
netstat -punta
export default function generateSocialImage({
  title,
  tagline,
  cloudName,
  imagePublicID,
  cloudinaryUrlBase = 'https://res.cloudinary.com',
  version = null,
  titleFont = 'arial',
  titleExtraConfig = '',
  taglineExtraConfig = '',
@mrm8488
mrm8488 / pdfToMP3.py
Created January 14, 2020 13:48
Convert PDF file to text and then to audio
import pdftotext
from gtts import gTTS
from sys import argv

with open(argv[1], "rb") as f:
    pdf = pdftotext.PDF(f)

document = "\n\n".join(pdf)
tts = gTTS(document)
print("Saving audio file")
tts.save(argv[1] + ".mp3")
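For very long documents it may help to synthesize the audio in pieces rather than in one call. A hedged helper for that — the chunk size and splitting on blank lines are my assumptions, not part of the gist:

```python
def chunk_text(text, max_chars=4000):
    # Pack paragraphs (blank-line separated) into chunks of at most max_chars
    chunks, current = [], ""
    for para in text.split("\n\n"):
        if current and len(current) + len(para) + 2 > max_chars:
            chunks.append(current)
            current = para
        else:
            current = current + "\n\n" + para if current else para
    if current:
        chunks.append(current)
    return chunks
```

Each chunk can then be passed to `gTTS` and saved as a separate numbered file.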
#!/bin/sh
set -x
# == Swarm training (alpha release) ==
# Setup:
#
# git clone https://github.com/shawwn/gpt-2
# cd gpt-2
# git checkout dev-shard
@mrm8488
mrm8488 / express-cache.js
Created January 18, 2020 04:25
Simple Express cache
'use strict'

var express = require('express');
var app = express();
var mcache = require('memory-cache');

app.set('view engine', 'jade');

var cache = (duration) => {
  return (req, res, next) => {
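The truncated middleware above caches a response body per URL for `duration` seconds. The same idea, framework-free, sketched in Python — the function names and the tuple-based store are illustrative, not from the gist:

```python
import time

_cache = {}  # url -> (expiry_timestamp, body)

def cached(duration, handler):
    # Wrap a handler so its result is reused for `duration` seconds per URL
    def wrapped(url):
        hit = _cache.get(url)
        if hit and hit[0] > time.time():
            return hit[1]  # serve the cached body
        body = handler(url)
        _cache[url] = (time.time() + duration, body)
        return body
    return wrapped

calls = []
handler = cached(10, lambda url: calls.append(url) or f"page:{url}")
print(handler("/a"), handler("/a"), len(calls))  # second call is a cache hit
```

The Express version does the same thing by intercepting `res.send` and keying on the request URL.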
@mrm8488
mrm8488 / from_kaggle_to_colab.py
Created January 21, 2020 10:35
How to import/download a Kaggle dataset into a Google Colab Notebook
# Easy steps to persist a Kaggle profile in Colab, by @mrm8488 (Manuel Romero)
# Download kaggle.json from Kaggle: My Account -> Create New API Token -> auto-downloads as "kaggle.json"
# Import the JSON into the notebook -- run in a cell:
from google.colab import files
files.upload()
# Browse to the downloaded kaggle.json and upload it
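After uploading, the token has to land where the Kaggle CLI looks for it: `~/.kaggle/kaggle.json`, readable only by the owner. A sketch of that step in plain Python — the dummy token written here is a stand-in so the snippet is self-contained; in Colab the real file comes from `files.upload()`:

```python
import json
import os
import stat

# Stand-in token so the sketch runs outside Colab;
# the real kaggle.json comes from files.upload() above
with open("kaggle.json", "w") as f:
    json.dump({"username": "your_user", "key": "your_key"}, f)

# Move the token where the Kaggle CLI expects it and restrict permissions
kaggle_dir = os.path.expanduser("~/.kaggle")
os.makedirs(kaggle_dir, exist_ok=True)
token_path = os.path.join(kaggle_dir, "kaggle.json")
os.replace("kaggle.json", token_path)
os.chmod(token_path, stat.S_IRUSR | stat.S_IWUSR)  # equivalent of chmod 600
```

With the token in place, `!kaggle datasets download -d <owner>/<dataset>` pulls a dataset into the notebook.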
@mrm8488
mrm8488 / memory_profiling.sh
Created January 24, 2020 15:09
Profile memory usage of a script
while ps auxw | grep '[m]yscript'; do sleep 30; done | stdbuf -o0 uniq | ts
# Monitors changes in myscript's memory usage and timestamps each line with ts.
# stdbuf -o0 turns off output buffering; the [m] in the grep pattern keeps the
# grep process itself from matching.
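The same polling idea works in-process. A Linux-only sketch that reads a process's resident set size straight from `/proc` (the helper name is mine, not from the gist):

```python
import os
import time

def rss_kb(pid):
    # The VmRSS line in /proc/<pid>/status is the resident set size in kB
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1])
    return None

# Poll our own process a couple of times, one second apart
for _ in range(2):
    print(rss_kb(os.getpid()), "kB")
    time.sleep(1)
```

Swap `os.getpid()` for the target script's PID to monitor another process.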
@mrm8488
mrm8488 / app.js
Created February 4, 2020 01:32 — forked from stongo/app.js
Joi validation in a Mongoose model
var mongoose = require('mongoose');
mongoose.connect('mongodb://localhost/test');
var db = mongoose.connection;
db.on('error', console.error.bind(console, 'connection error:'));
from torch.utils.data import IterableDataset, DataLoader

class CustomIterableDataset(IterableDataset):
    def __init__(self, filename):
        # Store only the filename; the file contents never need to sit in memory
        self.filename = filename

    def __iter__(self):
        # Stream the file lazily, one line at a time
        with open(self.filename) as f:
            for line in f:
                yield line

# Creating the iterable dataset object
dataset = CustomIterableDataset('path_to/somefile')
# Creating the dataloader
dataloader = DataLoader(dataset, batch_size=64)

for data in dataloader:
    # data is a list of batch_size (=64) consecutive lines of the file
    print(len(data))  # 64 (the last batch may be shorter)
    # We still need to separate the text and labels from each other and preprocess the text
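As the last comment notes, each batch is still a list of raw lines. A hedged sketch of that split step, assuming a `label<TAB>text` line format — that format is my assumption, not stated in the gist:

```python
def split_batch(lines):
    # Split "label<TAB>text" lines into parallel label and text lists
    labels, texts = [], []
    for line in lines:
        label, text = line.rstrip("\n").split("\t", 1)
        labels.append(label)
        texts.append(text)
    return labels, texts

labels, texts = split_batch(["pos\tgreat movie\n", "neg\tboring\n"])
print(labels, texts)
```

The text list can then go through tokenization or any other preprocessing before training.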