Vikash Sehwag (VSehwag)
## Make sure to first download model_best_dense.pth.tar from https://www.dropbox.com/sh/56yyfy16elwbnr8/AADmr7bXgFkrNdoHjKWwIFKqa?dl=0
import os
import argparse

import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.utils.data as data
import torchvision.datasets as datasets
import torchvision.transforms as transforms
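After downloading the checkpoint, it needs to be loaded into a model. The snippet below is a hedged sketch of a loader for a `.pth.tar` checkpoint: the exact keys inside `model_best_dense.pth.tar` are an assumption here (many training scripts wrap weights in a `'state_dict'` entry and prefix keys with `'module.'` when trained under `nn.DataParallel`), so the helper handles both layouts.

```python
import torch


def load_checkpoint_state(path, map_location="cpu"):
    """Load a .pth.tar checkpoint and return a clean state_dict.

    Assumes a common training-script layout: either a raw state_dict,
    or a dict with a 'state_dict' key, possibly with 'module.' prefixes
    from nn.DataParallel. (The layout of model_best_dense.pth.tar is an
    assumption, not confirmed by the gist.)
    """
    ckpt = torch.load(path, map_location=map_location)
    state = ckpt.get("state_dict", ckpt) if isinstance(ckpt, dict) else ckpt
    # Strip the 'module.' prefix that nn.DataParallel adds, if present.
    return {
        k[len("module."):] if k.startswith("module.") else k: v
        for k, v in state.items()
    }
```

Typical usage would be `model.load_state_dict(load_checkpoint_state("model_best_dense.pth.tar"))`.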
@VSehwag
VSehwag / bicubic_xla.py
Last active January 6, 2023 23:02
We provide a blazing-fast implementation of bicubic resampling for PyTorch-XLA (which only supports nearest/bilinear resampling)
'''
A standalone PyTorch implementation of fast and efficient bicubic resampling.
Well suited for PyTorch-XLA, which doesn't support PyTorch's native bicubic
resampling, i.e., only nearest/bilinear interpolation are lowered for XLA.
As of now, bicubic resampling is very slow on TPUs with PyTorch-XLA.
This implementation dramatically reduces that overhead and makes it almost
as fast as on a GPU.
## Hacked by: Vikash Sehwag
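The gist's own code is not reproduced here, but one XLA-friendly way to get fast bicubic resampling is to precompute a dense per-axis interpolation matrix and resize with two matmuls, avoiding the dynamic gathers that lower poorly to XLA. The sketch below is an illustration of that idea under stated assumptions (Keys cubic kernel with `a=-0.75` and `align_corners=False` half-pixel coordinates, matching PyTorch's `mode='bicubic'` convention), not the gist's actual implementation.

```python
import torch


def _cubic_kernel(t, a=-0.75):
    # Keys cubic convolution kernel; a=-0.75 matches PyTorch's bicubic.
    t = t.abs()
    near = ((a + 2) * t - (a + 3)) * t * t + 1          # |t| <= 1
    far = ((a * t - 5 * a) * t + 8 * a) * t - 4 * a     # 1 < |t| < 2
    return torch.where(t <= 1, near, torch.where(t < 2, far, torch.zeros_like(t)))


def _resize_matrix(n_in, n_out, dtype=torch.float32):
    # Dense (n_out, n_in) interpolation matrix: resizing becomes a matmul,
    # which lowers to XLA as a single static-shape operation.
    i = torch.arange(n_out, dtype=dtype)
    src = (i + 0.5) * (n_in / n_out) - 0.5              # align_corners=False
    f = src.floor()
    t = src - f
    M = torch.zeros(n_out, n_in, dtype=dtype)
    for k in range(-1, 3):                              # the four bicubic taps
        idx = (f + k).clamp(0, n_in - 1).long()         # clamp at the borders
        w = _cubic_kernel(t - k)
        M.scatter_add_(1, idx.unsqueeze(1), w.unsqueeze(1))
    return M


def bicubic_resize(x, out_h, out_w):
    # x: (N, C, H, W) -> (N, C, out_h, out_w) via one matmul per spatial axis.
    Mh = _resize_matrix(x.shape[-2], out_h, x.dtype)
    Mw = _resize_matrix(x.shape[-1], out_w, x.dtype)
    return Mh @ x @ Mw.transpose(0, 1)
```

Because the kernel weights form a partition of unity and border indices are clamped, a constant image stays constant and a same-size resize is the identity; for upsampling the result agrees with `F.interpolate(..., mode='bicubic', align_corners=False)` up to float32 rounding.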
""" ConvNeXt
Paper: `A ConvNet for the 2020s` - https://arxiv.org/pdf/2201.03545.pdf
Original code and weights from https://github.com/facebookresearch/ConvNeXt, original copyright below
Model defs atto, femto, pico, nano and _ols / _hnf variants are timm specific.
Modifications and additions for timm hacked together by / Copyright 2022, Ross Wightman