import os
import datetime
import random
from typing import Callable, Dict, List, Tuple, Any
import logging
import warnings
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer, PreTrainedModel

#!/bin/bash
cd /content/sapiens/lite || exit
SAPIENS_CHECKPOINT_ROOT=/content/sapiens_0.3b/sapiens_lite_host
MODE='torchscript' ## original: no optimizations (slow), full-precision inference
# MODE='bfloat16' ## A100-class GPUs: faster inference in bfloat16
# MODE='float16' ## V100-class GPUs: faster inference in float16 (no flash attention)
SAPIENS_CHECKPOINT_ROOT=$SAPIENS_CHECKPOINT_ROOT/$MODE
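
On the Python side, a MODE like the one above would typically select an inference dtype. A hypothetical mapping is sketched below; the sapiens lite inference code itself is not part of this preview, so the dictionary and environment-variable name are assumptions:

import os
import torch

# Hypothetical mapping from the MODE variable above to a torch dtype
# (illustrative only; not taken from the sapiens repository).
MODE_DTYPES = {
    "torchscript": torch.float32,  # full precision, no optimizations (slow)
    "bfloat16": torch.bfloat16,    # A100-class GPUs
    "float16": torch.float16,      # V100-class GPUs
}
dtype = MODE_DTYPES[os.environ.get("MODE", "torchscript")]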
@Luvata
Luvata / README.md
Last active May 26, 2024 04:30
Simple message queue with asyncio and FastAPI

A simple message queue application

How to run

  1. First, run the server:

uvicorn app:app --reload

  2. Then interact with the server using cli.py
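
The gist's app.py is not shown in this preview; a minimal sketch of what such an asyncio + FastAPI message queue might contain follows (names such as Job and /enqueue are assumptions, not the gist's actual API):

import asyncio

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
queue: asyncio.Queue = asyncio.Queue()

class Job(BaseModel):
    payload: str

@app.post("/enqueue")
async def enqueue(job: Job):
    await queue.put(job.payload)              # producer: push a message
    return {"queued": job.payload, "pending": queue.qsize()}

@app.on_event("startup")
async def start_worker():
    async def worker():
        while True:
            item = await queue.get()          # consumer: pop and process
            print("processing:", item)
            queue.task_done()
    asyncio.create_task(worker())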
@Luvata
Luvata / arxiv_tex_downloader.py
Last active August 3, 2023 15:23
Download an arXiv paper's source files and merge them into a single .tex file

import os
import shutil
import tarfile
import zipfile
import requests
import fnmatch
import re
import sys

def remove_preamble(latex):
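
The preview breaks off at the function header. A minimal completion is sketched here, assuming the goal is to drop each file's preamble so the merged .tex keeps only one; the gist's actual implementation is not shown:

def remove_preamble(latex):
    # Hypothetical completion: keep only what follows \begin{document}.
    marker = "\\begin{document}"
    idx = latex.find(marker)
    return latex[idx + len(marker):] if idx != -1 else latex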
@Luvata
Luvata / DragGAN.md
Last active May 25, 2023 10:00
A conversation between me and Bing Chat about DragGAN

Summarize this paper

The summary is derived from the current web page context.

This paper presents DragGAN, a method for interactive point-based manipulation of images generated by GANs. The user can click any points on the image and drag them to desired positions, and the method deforms the image accordingly. DragGAN consists of two components: feature-based motion supervision, which guides the points toward their targets, and a point-tracking approach that uses the discriminative features of the generator to locate the points on the image. DragGAN can control various spatial attributes such as pose, shape, expression, and layout across different object categories, and it produces realistic results even in challenging scenarios such as occluded or deformed parts of the object. The paper compares DragGAN with prior methods, shows its advantages in image manipulation and point tracking, and demonstrates how DragGAN can be applied to real images using GAN inversion.

Source: Co
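
The motion-supervision component described in the summary can be sketched schematically. The following is an illustrative reconstruction, not the authors' code: feat stands in for an intermediate generator feature map of shape (1, C, H, W), and handles/targets are float pixel-coordinate tensors of shape (2,).

import torch
import torch.nn.functional as F

def sample_feat(feat, p):
    # Bilinearly sample a (1, C, H, W) feature map at pixel coords p = (x, y).
    _, _, H, W = feat.shape
    gx = 2 * p[0] / (W - 1) - 1          # normalize x to [-1, 1]
    gy = 2 * p[1] / (H - 1) - 1          # normalize y to [-1, 1]
    grid = torch.stack([gx, gy]).view(1, 1, 1, 2)
    return F.grid_sample(feat, grid, align_corners=True).view(-1)  # (C,)

def motion_supervision_loss(feat, handles, targets):
    # Push the features one unit step from each handle toward its target.
    # The features at the handle itself are detached, so optimizing the latent
    # moves the image content rather than letting the reference drift.
    loss = feat.new_zeros(())
    for q, t in zip(handles, targets):
        d = (t - q) / ((t - q).norm() + 1e-8)            # unit direction q -> t
        loss = loss + F.l1_loss(sample_feat(feat, q + d),
                                sample_feat(feat, q).detach())
    return loss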

@Luvata
Luvata / DRIVE.md
Created January 29, 2023 05:04
How to download large files from Google Drive using the terminal
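
The gist body is not shown in this preview. One common approach (an assumption; the gist may use a different tool) is the gdown package, which handles Google Drive's confirmation page for large files:

import gdown

# FILE_ID is a placeholder for the Drive file's id.
url = "https://drive.google.com/uc?id=FILE_ID"
gdown.download(url, "output.zip", quiet=False)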
@Luvata
Luvata / tpu_vm_tutorial.md
Last active November 23, 2022 08:49
Apply for TRC and train your first model on a TPU

How to apply for TRC and use a TPU VM

1. Sign up for TRC

  1. Apply for TRC: https://sites.research.google/trc/
  2. Wait for the email; in the meantime, open the Cloud Console (with the same account you used to apply) and create a new project
  3. When the email arrives, fill your project number into the form (see Dashboard -> Project info -> Project number)
  4. Wait for approval (about 1-2 hours); the confirmation tells you which TPUs you get and in which zones

2. TPU VM
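
The preview cuts off at the TPU-VM step. Once a TPU VM is up, a quick sanity check from Python looks like the following (a sketch assuming JAX with TPU support is installed; the gist's own commands are not shown here):

import jax
import jax.numpy as jnp

print(jax.devices())          # should list TPU devices on a TPU VM
x = jnp.ones((1024, 1024))
print((x @ x).sum())          # a trivial computation placed on the TPU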

"""
This file uses code from
https://github.com/t-vi/pytorch-tvmisc/blob/master/hacks/visualize-jit-models.ipynb
MIT License
Copyright (c) 2017 Thomas Viehmann <tv.code@beamnet.de>
Permission is hereby granted, free of charge, to any person obtaining a copy
from load_data import DataGenerator
import matplotlib.pyplot as plt

N = 10  # N-way: number of classes per task
K = 1   # K-shot: examples per class; the generator draws K*2 (support + query)
B = 3   # meta-batch size

generator = DataGenerator(num_classes=N, num_samples_per_class=K*2)
all_imgs, all_labels = generator.sample_batch("train", B)  # all_imgs: (B, K*2, N, 784) = (3, 2, 10, 784)
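
Since matplotlib is imported but the preview stops here, a plausible continuation (assuming, as in Omniglot, that the 784-dim vectors are flattened 28x28 images; the rest of the gist is not shown) would be:

plt.imshow(all_imgs[0, 0, 0].reshape(28, 28), cmap="gray")  # first image in the batch
plt.show()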