"""
Conversion from WGS-84 to Transverse Mercator
WGS-84 is the World Geodetic System standard from 1984, used by GPS.
Transverse Mercator is a variant of the Mercator projection (a projection of the oblate spheroid of the Earth onto a cylinder) that is used in most mapping systems.
These short functions let the user specify a center latitude and longitude.
Also contains a simpler projection that assumes the Earth is a sphere.
>>> tmercator_meters_to_gps(coords, center)
>>> tmercator_gps_to_meters(coords, center)
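For orientation, the spherical (non-ellipsoidal) case mentioned above can be sketched with the standard forward formulas; the function name, argument layout, and use of the WGS-84 equatorial radius are illustrative assumptions here, not the gist's actual API:
import numpy as np

def sphere_gps_to_meters(lat, lon, lat0=0.0, lon0=0.0, radius=6378137.0):
    # spherical transverse Mercator, forward direction: degrees in, meters out
    lat, lon, lat0, lon0 = np.radians([lat, lon, lat0, lon0])
    b = np.cos(lat) * np.sin(lon - lon0)                                # B = cos(phi) * sin(lambda - lambda0)
    x = radius * np.arctanh(b)                                          # easting
    y = radius * (np.arctan2(np.tan(lat), np.cos(lon - lon0)) - lat0)   # northing relative to the center latitude
    return x, y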
@xvdp
xvdp / set_model_caches.sh
Last active June 14, 2023 23:39
Set DL model caches to a shared mount or away from $HOME/.caches
#!/bin/bash
# change cache paths for huggingface, torch, openai (uses ~/.cache/ by default)
# use: set_model_caches.sh /mnt/my/model/cache/path
cache_path=$1
# Check if the argument is provided
if [ -z "$cache_path" ]; then
    echo "Error: Cache path argument is missing."
    echo "Usage: $0 /path_to_cache"
    exit 1
fi
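The preview cuts off before the cache reassignments themselves; as a rough Python-side sketch of the same effect, the documented HF_HOME and TORCH_HOME environment variables can be pointed at a shared mount before the libraries are imported (the openai cache is omitted here because its variable name is less certain):
import os

cache_path = "/mnt/my/model/cache/path"  # illustrative path, matching the usage line above
os.environ["HF_HOME"] = os.path.join(cache_path, "huggingface")  # huggingface hub and datasets cache
os.environ["TORCH_HOME"] = os.path.join(cache_path, "torch")     # torch.hub model cache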
@xvdp
xvdp / frame_subtitles.py
Last active July 20, 2023 01:05
Simple solution to show frame number and frame time in VLC or other video players.
""" @xvdp
Simple solution to show frame number and frame time with VLC or other video players
Make a subtitle file
caveats:
only tested on linux
requires ffmpeg / only tested with ffmpeg 4.3
only tested with a limited number of .mov and .mkv files
it may fail if ffprobe reports nb_frames, DURATION, or frame_rate with keys not parsed here.
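As a rough sketch of the idea (a fixed frame rate and known frame count are assumed here, whereas the gist parses them from ffprobe output; the names are illustrative):
def _srt_time(t):
    # seconds -> SRT timestamp HH:MM:SS,mmm
    h, rem = divmod(t, 3600)
    m, s = divmod(rem, 60)
    return f"{int(h):02d}:{int(m):02d}:{s:06.3f}".replace(".", ",")

def write_frame_srt(nb_frames, fps, out="frames.srt"):
    # one subtitle cue per frame: the cue text shows the frame index and its start time
    with open(out, "w") as f:
        for i in range(nb_frames):
            t0, t1 = i / fps, (i + 1) / fps
            f.write(f"{i + 1}\n{_srt_time(t0)} --> {_srt_time(t1)}\nframe {i}  t={t0:.3f}s\n\n")

Loading the resulting .srt next to the video in VLC then overlays the frame number during playback.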
@xvdp
xvdp / cuminv.py
Last active July 27, 2022 16:47
cumdiv() and cumdif(): inverses of the torch cumulative functions cumsum() and cumprod()
"""@xvdp
inverses for torch.cumsum and torch.cumprod
I noticed that torch has cumsum and cumprod but not their inverses,
even though cumdif and cumdiv have meanings in analysis and probability
and are routinely used.
Are these interpretations correct?
> cumsum can be thought of as a discrete integral
> cumdif as a discrete derivative
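A minimal sketch of what such inverses could look like (the names mirror the title; this is not necessarily the gist's implementation):
import torch

def cumdif(x, dim=-1):
    # inverse of cumsum: prepend the identity element 0, then take adjacent differences
    return torch.diff(x, dim=dim, prepend=torch.zeros_like(x.narrow(dim, 0, 1)))

def cumdiv(x, dim=-1):
    # inverse of cumprod: divide by the running product shifted by one (identity element 1)
    shifted = torch.cat([torch.ones_like(x.narrow(dim, 0, 1)),
                         x.narrow(dim, 0, x.size(dim) - 1)], dim=dim)
    return x / shifted

With these definitions, cumdif(torch.cumsum(x, 0), 0) and cumdiv(torch.cumprod(x, 0), 0) both recover x, up to floating-point error (and barring zeros in the cumprod case).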
@xvdp
xvdp / page_rank_sketch.ipynb
Last active July 13, 2022 17:41
Page_Rank_Sketch.ipynb
@xvdp
xvdp / hsic.ipynb
Created April 19, 2022 18:52
HSIC.ipynb
@xvdp
xvdp / interpolation_diff.py
Created January 10, 2021 00:12
cv2, PIL.Image, torch, and tensorflow have slightly different interpolation algorithms; curious
"""
>>> import interpolation_diff as idf
>>> algs = ("pil", "tensorflow") # ("pil", "torch")
>>> save = "/home/z/share/benchestchirescat_%s_%s"%algs
>>> idf.test_dif(dtype="float32", mode="bilinear", show=True, algs=algs, crop=(170,370, 180, 380), save=save)
# re https://discuss.pytorch.org/t/nn-interpolate-function-unexpected-behaviour/106684/6
"""
import io
import os
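A minimal sketch of that kind of comparison (the helper name, output size, and the antialiasing remark are my additions, not the gist's):
import numpy as np
import torch
import torch.nn.functional as F
from PIL import Image

def bilinear_diff(img_u8, size=(200, 200)):
    # PIL bilinear resize (PIL scales the filter support when downscaling)
    pil = np.asarray(Image.fromarray(img_u8).resize(size[::-1], Image.BILINEAR), dtype=np.float32)
    # torch bilinear resize (no antialiasing by default, a known source of divergence)
    t = torch.from_numpy(img_u8).permute(2, 0, 1).unsqueeze(0).float()
    tor = F.interpolate(t, size=size, mode="bilinear", align_corners=False)[0].permute(1, 2, 0).numpy()
    return np.abs(pil - tor)  # per-pixel absolute difference between the two results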
@xvdp
xvdp / Image_Moments.ipynb
Created December 26, 2020 20:05
Image Moments, test and Implementation.
@xvdp
xvdp / perceptual_loss.py
Created December 21, 2020 20:45
Implementation of 2d Perceptual Loss utilizing base losses from torch.nn
"""
Perceptual Loss toy implementation from https://arxiv.org/pdf/1603.08155.pdf
"""
import os.path as osp
import torch
from torch import Tensor
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models
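The preview stops at the imports; a minimal sketch of the idea from the cited paper (compare VGG16 feature maps of prediction and target with a base loss from torch.nn) could look like the following, where the class name, layer cutoff, and MSE default are assumptions rather than the gist's choices:
import torch
import torch.nn as nn
from torchvision import models

class PerceptualLoss2d(nn.Module):
    # compare frozen VGG16 feature maps of input and target with a torch.nn base loss
    def __init__(self, cutoff=16, base_loss=None):
        super().__init__()
        # torchvision >= 0.13 weights API; older versions use pretrained=True instead
        vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features[:cutoff].eval()
        for p in vgg.parameters():
            p.requires_grad_(False)
        self.features = vgg
        self.base_loss = base_loss if base_loss is not None else nn.MSELoss()

    def forward(self, x: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        return self.base_loss(self.features(x), self.features(target))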
@xvdp
xvdp / npuint8_torchfloat32.py
Last active April 12, 2024 09:21
numpy uint8 to pytorch float32; how to do it efficiently
""" I was writing a dataloader from a video stream. I ran some numbers.
# in a nutshell.
-> np.transpose() or torch.permute() is faster on uint8; no difference between torch and numpy
-> dividing np.uint8 by a number results in np.float64; never do it, if anything cast to np.float32
-> convert to pytorch before converting uint8 to float32
-> contiguous() is faster in torch than in numpy
-> contiguous() is faster for torch.float32 than for torch.uint8
-> convert to CUDA during the numpy to pytorch conversion, if you can
-> on CPU, tensor/my_float is > 130% more costly than tensor.div_(my_float); however, tensor.div_()
does not keep track of gradients, so be careful using it.
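Putting those notes together, a sketch of the conversion order they suggest (the device, the 255 scale, and the function name are illustrative):
import numpy as np
import torch

def frame_to_tensor(frame: np.ndarray, device="cuda") -> torch.Tensor:
    # HWC uint8 -> CHW float32: permute and make contiguous while still uint8,
    # move to the device during the numpy -> pytorch handoff, then scale in place
    t = torch.from_numpy(frame).permute(2, 0, 1).contiguous()
    return t.to(device, non_blocking=True).to(torch.float32).div_(255.0)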