tae-jun / Dockerfile
Last active December 31, 2023 16:51
Deploy NVIDIA+PyTorch container using Dockerfile & docker-compose
ARG UBUNTU_VERSION=18.04
ARG CUDA_VERSION=10.2
FROM nvidia/cuda:${CUDA_VERSION}-base-ubuntu${UBUNTU_VERSION}
# An ARG declared before a FROM is outside of a build stage,
# so it can’t be used in any instruction after a FROM
ARG USER=research_monster
ARG PASSWORD=${USER}123$
ARG PYTHON_VERSION=3.8
# To use the default value of an ARG declared before the first FROM,
# use an ARG instruction without a value inside of a build stage:
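# Sketch (not in the original gist): to read the pre-FROM ARGs inside this
# build stage, re-declare them without a value and they pick up their
# defaults again, e.g.
#   ARG CUDA_VERSION
#   RUN echo "building against CUDA ${CUDA_VERSION}"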
williamFalcon / Pytorch_LSTM_variable_mini_batches.py
Last active April 24, 2024 17:53
Simple batched PyTorch LSTM
import torch
import torch.nn as nn
from torch.autograd import Variable  # deprecated since PyTorch 0.4; Variable(t) simply returns t
from torch.nn import functional as F
"""
Blog post:
Taming LSTMs: Variable-sized mini-batches and why PyTorch is good for your health:
https://medium.com/@_willfalcon/taming-lstms-variable-sized-mini-batches-and-why-pytorch-is-good-for-your-health-61d35642972e
"""
// LABjs.jquery.ready -- adds .ready() to $LAB api for wrapping a .wait() and a $(document).ready(...) together
// v0.0.1 (c) Kyle Simpson
// MIT License
(function(global){
	var oDOC = global.document;
	if (!global.$LAB || !global.jQuery) return; // only adapt LABjs if LABjs exists and jQuery is present
	function wrap_API(obj) {
		var ret = {