- First, run the server:

  ```bash
  uvicorn app:app --reload
  ```

- Then interact with the server using `cli.py`; a minimal client sketch follows below.
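As a rough illustration, here is a minimal `cli.py` sketch, assuming the app served by uvicorn is an ASGI (e.g. FastAPI) app exposing a JSON `/predict` endpoint on the default port; the route name and payload shape are assumptions, not taken from the source.

```python
# cli.py -- hypothetical client for the uvicorn server above.
# The /predict route and the "text" field are illustrative assumptions.
import sys
import requests

SERVER = "http://127.0.0.1:8000"

def main() -> None:
    text = " ".join(sys.argv[1:]) or "hello"
    resp = requests.post(f"{SERVER}/predict", json={"text": text}, timeout=30)
    resp.raise_for_status()
    print(resp.json())

if __name__ == "__main__":
    main()
```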
```python
import os
import datetime
import random
import logging
import warnings
from typing import Callable, Dict, List, Tuple, Any

import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer, PreTrainedModel
```
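A short sketch of what these imports are typically used for: loading a causal language model and sampling from it. The `gpt2` checkpoint and the generation settings are assumptions for illustration, not from the source.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# "gpt2" is an assumed checkpoint; any causal LM on the Hub would do.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

inputs = tokenizer("The quick brown fox", return_tensors="pt")
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=20, do_sample=True, top_p=0.9)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```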
```bash
#!/bin/bash
cd /content/sapiens/lite || exit

SAPIENS_CHECKPOINT_ROOT=/content/sapiens_0.3b/sapiens_lite_host

MODE='torchscript'  ## original: no optimizations (slow), full-precision inference
# MODE='bfloat16'   ## A100 GPUs: faster inference at bfloat16
# MODE='float16'    ## V100 GPUs: faster inference at float16 (no flash attn)

SAPIENS_CHECKPOINT_ROOT=$SAPIENS_CHECKPOINT_ROOT/$MODE
```
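For context, a hedged sketch of how a `MODE` string like the one above typically maps to a torch dtype at load time; the mapping dict and variable names are illustrative assumptions, not the Sapiens loader itself.

```python
import torch

# Hypothetical mapping from the MODE string above to a torch dtype.
MODE_TO_DTYPE = {
    "torchscript": torch.float32,  # full precision (slow)
    "bfloat16": torch.bfloat16,    # A100-class GPUs
    "float16": torch.float16,      # V100-class GPUs (no flash attention)
}

mode = "torchscript"
dtype = MODE_TO_DTYPE[mode]
print(f"running {mode} inference in {dtype}")
```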
```python
import os
import re
import shutil
import sys
import fnmatch
import tarfile
import zipfile
import requests

def remove_preamble(latex):
    # Minimal completion (assumption; the original stub had no body):
    # keep the source from \begin{document} onward.
    match = re.search(r"\\begin\{document\}", latex)
    return latex[match.start():] if match else latex
```
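A quick, illustrative check of the completed helper above (the sample string is made up):

```python
sample = r"\documentclass{article}\usepackage{amsmath}\begin{document}Hello.\end{document}"
print(remove_preamble(sample))  # -> \begin{document}Hello.\end{document}
```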
Summarize this paper
The summary is derived from the current web page context.
This paper presents DragGAN, a method for interactive point-based manipulation of images generated by GANs. The user clicks points on the image and drags them to target positions, and the method deforms the image accordingly. DragGAN consists of two components: feature-based motion supervision, which drives each handle point toward its target, and a point-tracking approach that uses the discriminative features of the generator to keep locating the handle points as the image changes. DragGAN can control spatial attributes such as pose, shape, expression, and layout across diverse object categories, and it produces realistic results even in challenging scenarios such as hallucinating occluded content or deforming shapes. The paper compares DragGAN with prior methods, showing advantages in both image manipulation and point tracking, and demonstrates that DragGAN can be applied to real images via GAN inversion.
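To make the two components concrete, here is a heavily simplified, hypothetical PyTorch sketch of one edit step; the actual method samples features at sub-pixel positions with bilinear interpolation and optimizes the StyleGAN2 latent code, so every name, shape, and simplification here is an illustrative assumption, not the authors' code.

```python
import torch

def motion_supervision_loss(feat, handle, target, radius=3):
    """Simplified DragGAN-style motion supervision on a (C, H, W) feature map.

    A square patch around the handle, shifted one (rounded) unit step toward
    the target, is pulled to match the frozen features of the current patch,
    nudging image content along the drag direction. Assumes the handle lies
    at least `radius + 1` pixels from the image border.
    """
    d = target.float() - handle.float()
    d = d / (d.norm() + 1e-8)                       # unit drag direction
    r, c = handle.float().round().long().tolist()
    dr, dc = d.round().long().tolist()              # nearest-pixel step
    src = feat[:, r - radius:r + radius + 1, c - radius:c + radius + 1]
    dst = feat[:, r - radius + dr:r + radius + 1 + dr,
               c - radius + dc:c + radius + 1 + dc]
    return (dst - src.detach()).abs().mean()        # L1 motion-supervision loss

def track_point(feat, feat0, handle0, handle, radius=6):
    """Simplified point tracking: nearest-neighbour search in feature space
    for the pixel near the current handle whose feature best matches the
    handle's feature in the initial feature map `feat0`."""
    r0, c0 = handle0.float().round().long().tolist()
    ref = feat0[:, r0, c0]                          # (C,) reference feature
    r, c = handle.float().round().long().tolist()
    top, left = max(r - radius, 0), max(c - radius, 0)
    patch = feat[:, top:r + radius + 1, left:c + radius + 1]
    dist = (patch - ref[:, None, None]).abs().sum(dim=0)
    flat = int(dist.argmin())
    dr, dc = divmod(flat, dist.shape[1])
    return torch.tensor([top + dr, left + dc])
```

In a full edit loop, each step would backpropagate `motion_supervision_loss` into the latent code, regenerate the image and features, then call `track_point` to update the handle position before the next step.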
https://stackoverflow.com/a/67550427
| """ | |
| This file uses code from | |
| https://github.com/t-vi/pytorch-tvmisc/blob/master/hacks/visualize-jit-models.ipynb | |
| MIT License | |
| Copyright (c) 2017 Thomas Viehmann <tv.code@beamnet.de> | |
| Permission is hereby granted, free of charge, to any person obtaining a copy | 
```python
from load_data import DataGenerator
import matplotlib.pyplot as plt

N = 10  # N-way: number of classes per task
K = 1   # K-shot: labelled examples per class (sampled K*2: support + query)
B = 3   # meta-batch size: number of tasks sampled at once

generator = DataGenerator(num_classes=N, num_samples_per_class=K * 2)
all_imgs, all_labels = generator.sample_batch("train", B)  # (B, K*2, N, 784) = (3, 2, 10, 784)
```
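One possible next step, assuming the axis layout in the shape comment above: split the sampled examples into support and query halves along axis 1 and preview one image. Reshaping to 28x28 assumes the 784-dimensional vectors are flattened Omniglot/MNIST-style images.

```python
# Split along the samples-per-class axis: first K support, remaining K query.
support_imgs, query_imgs = all_imgs[:, :K], all_imgs[:, K:]
support_lbls, query_lbls = all_labels[:, :K], all_labels[:, K:]
print(support_imgs.shape, query_imgs.shape)  # (3, 1, 10, 784) each for K=1

# Preview the first support example of the first task.
plt.imshow(support_imgs[0, 0, 0].reshape(28, 28), cmap="gray")
plt.title("task 0, class 0, support example")
plt.show()
```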