Harsh Trivedi (HarshTrivedi) — public gists
#!/bin/sh
# This script will migrate schema and data from a SQLite3 database to PostgreSQL.
# Schema translation based on http://stackoverflow.com/a/4581921/1303625.
# Some column types are not handled (e.g. blobs).
SQLITE_DB_PATH=$1   # path to the source SQLite3 database file
PG_DB_NAME=$2       # name of the target PostgreSQL database
PG_USER_NAME=$3     # PostgreSQL user that owns the target database
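The body of the script is not shown in this preview. As a sketch, an invocation passes the three positional arguments in order; the file name and argument values below are placeholders, not from the original gist:

# Hypothetical usage; replace the placeholders with your own paths and names.
sh sqlite3-to-postgres.sh app.sqlite3 app_db app_user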
HarshTrivedi / 0_reuse_code.js — created April 17, 2016
Here are some things you can do with Gists in GistBox.
// Use Gists to store code you would like to remember later on
console.log(window); // log the "window" object to the console
# Source Link: https://www.digitalocean.com/community/tutorials/how-to-install-and-use-docker-on-ubuntu-18-04
sudo apt update
sudo apt install apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic stable"
sudo apt update
apt-cache policy docker-ce
# You should see output like the following (possibly with a different version number):
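The expected apt-cache output is not reproduced in this preview. For context, the linked DigitalOcean tutorial continues by installing the docker-ce package and checking that the daemon is running; the two commands below follow that tutorial but are not part of the original snippet:

# Continuation per the linked tutorial (not in the original snippet):
sudo apt install docker-ce
sudo systemctl status docker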
#########################################################
local setting = std.extVar("setting"); #options: full, fixture
local num_cores = std.parseInt(std.extVar("num_cores"));
#########################################################
# Set this for memory optimization
local activation_checkpointing = false;
local seed = 100;
local transformer_model_name = "nielsr/nt5-small-rc1";
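This Jsonnet fragment reads its settings through std.extVar, so the values must be supplied when the config is evaluated. As a sketch (the file name and the values are assumptions, not from the original gist), evaluating it with the jsonnet CLI could look like:

# Hypothetical invocation; file name and values are placeholders.
jsonnet --ext-str setting=full --ext-str num_cores=4 config.jsonnet

num_cores is passed as a string because the config converts it with std.parseInt.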
HarshTrivedi / pad_packed_demo.py — last active March 2, 2024, forked from Tushar-N/pad_packed_demo.py
Minimal tutorial on packing (pack_padded_sequence) and unpacking (pad_packed_sequence) sequences in PyTorch.
import torch
from torch import LongTensor
from torch.nn import Embedding, LSTM
from torch.autograd import Variable
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence
## We want to run LSTM on a batch of 3 character sequences ['long_str', 'tiny', 'medium']
#
# Step 1: Construct Vocabulary
# Step 2: Load indexed data (a list of instances, where each instance is a list of character indices)
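The remainder of the gist is not shown in this preview. The sketch below illustrates the packing/unpacking workflow the steps describe; the variable names, the dimensions, and the use of enforce_sorted=False (which assumes PyTorch >= 1.1) are this sketch's own choices, not the gist's.

# Illustrative sketch (not the original gist's code): pad, pack, run the LSTM, unpack.
seqs = ['long_str', 'tiny', 'medium']

# Step 1: construct the character vocabulary, reserving index 0 for padding.
vocab = ['<pad>'] + sorted(set(''.join(seqs)))

# Step 2: index the data; each instance becomes a list of character indices.
indexed = [[vocab.index(ch) for ch in seq] for seq in seqs]

# Pad every instance to the max length and record the true lengths.
lengths = LongTensor([len(seq) for seq in indexed])
padded = torch.zeros(len(indexed), int(lengths.max())).long()
for i, seq in enumerate(indexed):
    padded[i, :len(seq)] = LongTensor(seq)

# Embed, pack (enforce_sorted=False handles the unsorted batch), run the LSTM, unpack.
embed = Embedding(len(vocab), 4)                     # embedding_dim=4, chosen arbitrarily
lstm = LSTM(input_size=4, hidden_size=5, batch_first=True)

packed = pack_padded_sequence(embed(padded), lengths, batch_first=True, enforce_sorted=False)
packed_output, (h, c) = lstm(packed)
output, output_lengths = pad_packed_sequence(packed_output, batch_first=True)
# output has shape (batch=3, max_len=8, hidden_size=5); output_lengths matches lengths.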