Matteo Visconti di Oleggio Castello (mvdoc)
GitHub gists
#!/bin/bash
### ABOUT: See: http://gist.github.com/366269
### Runs rsync, retrying on errors up to a maximum number of tries.
### On failure, the script waits for the internet connection to come back up by pinging google.com before continuing.
###
### Usage: $ ./rsync-retry.sh source destination
### Example: $ ./rsync-retry.sh user@server.example.com:~/* ~/destination/path/
###
### IMPORTANT:
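The preview cuts the script off after its header, but the comments above describe the whole control flow: run rsync, and when it fails, ping google.com until the network is back, then try again. A minimal Python sketch of that logic (the rsync flags, retry limit, and sleep interval are assumptions, not the gist's actual values):

import subprocess
import time

def rsync_retry(source, dest, max_tries=20):
    """Retry rsync until it succeeds, waiting for connectivity between tries."""
    for _ in range(max_tries):
        # assumed flags; the gist's own rsync invocation is not shown in this preview
        if subprocess.call(['rsync', '-av', '--partial', source, dest]) == 0:
            return True
        # wait for the connection to come back by pinging google.com
        while subprocess.call(['ping', '-c', '1', 'google.com']) != 0:
            time.sleep(30)
    return False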
import numpy as np

def appendSpherical_np(xyz):
    # append (radius, elevation, azimuth) columns to an (N, 3) array of Cartesian points
    ptsnew = np.hstack((xyz, np.zeros(xyz.shape)))
    xy = xyz[:, 0]**2 + xyz[:, 1]**2
    ptsnew[:, 3] = np.sqrt(xy + xyz[:, 2]**2)
    ptsnew[:, 4] = np.arctan2(np.sqrt(xy), xyz[:, 2])  # for elevation angle defined from the Z-axis down
    # ptsnew[:, 4] = np.arctan2(xyz[:, 2], np.sqrt(xy))  # for elevation angle defined from the XY-plane up
    ptsnew[:, 5] = np.arctan2(xyz[:, 1], xyz[:, 0])
    return ptsnew
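A quick usage sketch (the input here is illustrative, not from the gist):

xyz = np.random.rand(5, 3)   # five random Cartesian points
pts = appendSpherical_np(xyz)
# pts columns 0-2 are x, y, z; column 3 is the radius,
# column 4 the elevation from the Z-axis, column 5 the azimuth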
@mvdoc
mvdoc / nested_cv_parallel.py
Last active November 28, 2017 20:27
Example with PyMVPA and joblib to run nested classification in parallel
from mvpa2.suite import *
# increase verbosity a bit for now
verbose.level = 3
# pre-seed the RNG if you want to investigate the effects and
# therefore need reproducible results
#mvpa2.seed(3)
# we import Parallel and delayed from joblib to run in parallel
from joblib import Parallel, delayed
"""
@mvdoc
mvdoc / jupyter_notebook_config.py
Last active April 30, 2020 22:04
Jupyter notebook post-save hook to rename untitled notebooks to `YYYY-MM-DD_untitled-N.ipynb`
# This file should be put in ~/.jupyter/
# New notebooks will be renamed to YYYY-MM-DD_untitled-N.ipynb instead of Untitled.ipynb
# Please be aware that this code has **NOT** been tested extensively; I wrote it rather quickly,
# so your notebook might be deleted by accident.
#
# USE THIS CODE AT YOUR OWN RISK
#
import os
import re
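The preview ends at the imports. Jupyter's file contents manager supports a post-save hook configured in this file; a minimal, untested sketch of the renaming logic the comments describe (the matching pattern and collision handling here are assumptions, not the gist's own code):

import datetime
import os
import re

def rename_untitled(model, os_path, contents_manager):
    """Rename a just-saved Untitled*.ipynb to YYYY-MM-DD_untitled-N.ipynb."""
    if model['type'] != 'notebook':
        return
    directory, fname = os.path.split(os_path)
    if not re.match(r'^Untitled\d*\.ipynb$', fname):
        return
    today = datetime.date.today().isoformat()
    n = 0
    while True:
        new_path = os.path.join(directory, '{}_untitled-{}.ipynb'.format(today, n))
        if not os.path.exists(new_path):
            break
        n += 1
    # note: the running session still points at the old name after the rename
    os.rename(os_path, new_path)

# `c` is the config object Jupyter provides inside jupyter_notebook_config.py
c.FileContentsManager.post_save_hook = rename_untitled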
@mvdoc
mvdoc / get_size_localcopy_annex.py
Last active May 26, 2024 18:34
Compute total size of git-annexed files with only one local copy
# This script computes the total size of git-annexed files that have only a single local copy.
# It's useful for estimating how much space would be used if all of those files were archived.
import subprocess
from tqdm import tqdm
import json
import os
def get_files_with_one_copy():
    # list annexed files that are present here and have exactly one copy
    try:
        result = subprocess.run(
            ['git-annex', 'find', '--copies=1', '--and', '--not',
             '--copies=2', '--and', '--in=here'],
            capture_output=True, text=True, check=True)
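        # The preview is cut off here. What follows is an assumed completion,
        # not the gist's own code (the original imports json and tqdm, so it
        # likely parses git-annex's JSON output instead of stat-ing each file).
        return result.stdout.splitlines()
    except subprocess.CalledProcessError as e:
        print('git-annex find failed:', e.stderr)
        return []

def total_size(files):
    # sum on-disk sizes; os.path.getsize follows the annex symlinks,
    # so it reports the size of the annexed content itself
    total = 0
    for f in tqdm(files):
        if os.path.exists(f):
            total += os.path.getsize(f)
    return total

if __name__ == '__main__':
    files = get_files_with_one_copy()
    print('Total size of single-copy files: {:.2f} GB'.format(total_size(files) / 1e9))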