"""testing vram in pytorch cuda | |
every time a variable is put inside a container in python, to remove it completely | |
one needs to delete variable and container, | |
this can be problematic when using pytorch cuda if one doesnt clear all containers | |
Three tests: | |
>>> python memory_tests list | |
# creates 2 tensors puts them in a list, modifies them in place, deletes them | |
# in place mod changes original tensors | |
# list and both tensors need to be deleted |
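A minimal sketch of the list test described above (assuming a CUDA device is available; check_mem is a hypothetical helper, not from the gist):

import torch

def check_mem(tag=""):
    # torch.cuda.memory_allocated() reports bytes currently held by live tensors
    print(f"{tag}: {torch.cuda.memory_allocated() / 2**20:.1f} MiB allocated")

a = torch.ones(1024, 1024, device="cuda")
b = torch.ones(1024, 1024, device="cuda")
container = [a, b]
container[0].mul_(2)              # in-place op also changes the original tensor `a`
del a, b
check_mem("after del a, b")       # NOT freed: the list still references both tensors
del container
torch.cuda.empty_cache()
check_mem("after del container")  # allocation now drops to zero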
""" I was writing a dataloader from a video stream. I ran some numbers. | |
# in a nutshell. | |
-> np.transpose() or torch.permute() is faster as uint8, no difference between torch and numpy | |
-> np.uint8/number results in np.float64, never do it, if anything cast as np.float32 | |
-> convert to pytorch before converting uint8 to float32 | |
-> contiguous() is is faster in torch than numpy | |
-> contiguous() is faster for torch.float32 than for torch.uint8 | |
-> convert to CUDA in the numpy to pytorch conversion, if you can. | |
-> in CPU tensor/my_float is > 130% more costly than tensor.div_(myfloat), however tensor.div_() | |
does not keep track of gradients, so be careful using it. |
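A sketch of the conversion order these numbers suggest (the frame shape and the 255 normalizer are illustrative assumptions):

import numpy as np
import torch

frame = np.random.randint(0, 256, (1080, 1920, 3), dtype=np.uint8)  # stand-in for a decoded video frame

# permute while still uint8, convert to torch and move to CUDA, then cast and normalize
tensor = torch.from_numpy(frame).cuda()              # uint8 numpy -> torch, straight to CUDA
tensor = tensor.permute(2, 0, 1).contiguous()        # HWC -> CHW
tensor = tensor.to(dtype=torch.float32).div_(255.0)  # cast, then in-place divide (no autograd history here)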
""" | |
Perceptual Loss Toy Implementation from https://arxiv.org/pdf/1603.08155.pdf
""" | |
import os.path as osp | |
import torch | |
from torch import Tensor | |
import torch.nn as nn | |
import torch.nn.functional as F | |
from torchvision import models |
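The gist body is not rendered above; what follows is a minimal sketch of a perceptual loss in the spirit of Johnson et al., reusing the imports above (the layer indices, the older pretrained=True API, and the plain MSE sum are my assumptions, not necessarily the gist's choices):

class PerceptualLoss(nn.Module):
    """MSE between VGG16 feature maps of a prediction and a target."""
    def __init__(self, layers=(3, 8, 15, 22)):  # relu1_2, relu2_2, relu3_3, relu4_3 in vgg16.features
        super().__init__()
        vgg = models.vgg16(pretrained=True).features.eval()
        for p in vgg.parameters():
            p.requires_grad_(False)
        self.vgg = vgg
        self.layers = set(layers)

    def _features(self, x: Tensor):
        feats = []
        for i, layer in enumerate(self.vgg):
            x = layer(x)
            if i in self.layers:
                feats.append(x)
        return feats

    def forward(self, prediction: Tensor, target: Tensor) -> Tensor:
        loss = prediction.new_zeros(())
        for f_pred, f_target in zip(self._features(prediction), self._features(target)):
            loss = loss + F.mse_loss(f_pred, f_target)
        return loss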
""" | |
>>> import interpolation_diff as idf | |
>>> algs = ("pil", "tensorflow") # ("pil", "torch") | |
>>> save = "/home/z/share/benchestchirescat_%s_%s"%algs | |
>>> idf.test_dif(dtype="float32", mode="bilinear", show=True, algs=algs, crop=(170,370, 180, 380), save=save) | |
# re https://discuss.pytorch.org/t/nn-interpolate-function-unexpected-behaviour/106684/6 | |
""" | |
import io | |
import os |
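The gist body is not rendered above; a minimal sketch of the kind of PIL vs torch comparison it runs (the image path, size, and mean-absolute-difference metric are assumptions):

import numpy as np
import torch
import torch.nn.functional as F
from PIL import Image

def resize_diff(path="test.png", size=(256, 256)):
    """Compare PIL bilinear resize against torch.nn.functional.interpolate."""
    img = Image.open(path).convert("RGB")

    # PIL resize takes (width, height)
    pil_out = np.asarray(img.resize(size[::-1], Image.BILINEAR), dtype=np.float32)

    # torch wants an NCHW float tensor; align_corners=False is the default behaviour discussed in the thread
    ten = torch.from_numpy(np.asarray(img, dtype=np.float32)).permute(2, 0, 1)[None]
    torch_out = F.interpolate(ten, size=size, mode="bilinear", align_corners=False)
    torch_out = torch_out[0].permute(1, 2, 0).numpy()

    return np.abs(pil_out - torch_out).mean()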
"""@xvdp | |
reciprocals for torch.cumsum and torch.cumprod | |
I noticed that torch has cumsum and cumprod but not their reciprocals | |
even thought cumdif and cumdiv have meanings in analysis and probability | |
and are routinely used. | |
Are these interpretations correct? | |
> cumsum can be thought of as a discrete integral | |
> cumdif as discrete derivative |
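A sketch of what such reciprocals could look like, defined so that cumsum(cumdif(x)) and cumprod(cumdiv(x)) recover x (the names come from the gist; these definitions are my assumption of the intent):

import torch

def cumdif(x: torch.Tensor, dim: int = -1) -> torch.Tensor:
    # keep the first element, then consecutive differences: cumsum(cumdif(x)) == x
    return torch.cat((x.narrow(dim, 0, 1), torch.diff(x, dim=dim)), dim=dim)

def cumdiv(x: torch.Tensor, dim: int = -1) -> torch.Tensor:
    # keep the first element, then consecutive ratios: cumprod(cumdiv(x)) == x
    ratios = x.narrow(dim, 1, x.size(dim) - 1) / x.narrow(dim, 0, x.size(dim) - 1)
    return torch.cat((x.narrow(dim, 0, 1), ratios), dim=dim)

x = torch.tensor([1., 3., 6., 10.])
assert torch.allclose(torch.cumsum(cumdif(x), 0), x)
assert torch.allclose(torch.cumprod(cumdiv(x), 0), x)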
""" @xvdp | |
Simple solution to Show frame number and frame time with VLC or other video players | |
Make a subtitle file | |
caveats: | |
only tested in linux | |
requires ffmpeg / only tested with ffmpeg 4.3 | |
only tested with limited number of .mov and .mkv files | |
it may fail if ffprobe reports nb_frames, DURATION or frame_rate with keys not parsed here. | |
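A minimal sketch of the subtitle trick (frame count and fps are passed in by hand here rather than parsed from ffprobe as the gist does):

def frame_srt(path="frames.srt", num_frames=500, fps=29.97):
    """Write an .srt where each cue shows the frame index and its start time."""
    def stamp(seconds):
        ms = int(round(seconds * 1000))
        h, ms = divmod(ms, 3600_000)
        m, ms = divmod(ms, 60_000)
        s, ms = divmod(ms, 1000)
        return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

    with open(path, "w") as srt:
        for i in range(num_frames):
            start, end = i / fps, (i + 1) / fps
            srt.write(f"{i + 1}\n{stamp(start)} --> {stamp(end)}\n"
                      f"frame {i}  t={start:.3f}s\n\n")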
#!/bin/bash
# change cache paths for huggingface, torch, openai (they use ~/.cache/ by default)
# use: set_model_caches.sh /mnt/my/model/cache/path
cache_path=$1
# Check that the argument is provided
if [ -z "$cache_path" ]; then
    echo "Error: Cache path argument is missing."
    echo "Usage: $0 /path_to_cache"
    exit 1
fi