@wolfv
wolfv / github_actions.yaml
Last active March 10, 2024 15:28
micromamba usage
name: CI
on:
  push:
    branches: [main]
  pull_request:
    branches: [main]
# Job sketch (assumed, not from the preview): set up micromamba from an environment.yml at the repo root.
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: mamba-org/setup-micromamba@v1
        with:
          environment-file: environment.yml
@madelinegannon
madelinegannon / setup-azure-kinect-on-jetson-x-nx.md
Last active June 17, 2024 20:40
Notes on Setting up the Microsoft Azure Kinect on Ubuntu 18.04
@sbarratt
sbarratt / torch_jacobian.py
Created May 9, 2019 19:40
Get the Jacobian of a vector-valued function that takes batch inputs, in PyTorch.
import torch

def get_jacobian(net, x, noutputs):
    # Replicate the single input once per output dimension so that one
    # backward pass recovers the whole Jacobian at once.
    x = x.squeeze()
    x = x.repeat(noutputs, 1)
    x.requires_grad_(True)
    y = net(x)
    # Backpropagating the identity matrix makes row i of x.grad equal to dy_i/dx.
    y.backward(torch.eye(noutputs))
    return x.grad.data
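
A hypothetical usage sketch (the linear layer and its sizes are illustrative, not from the gist). For a linear layer the Jacobian is exactly the weight matrix, which makes the result easy to check:

import torch
import torch.nn as nn

net = nn.Linear(3, 2)                 # 3 inputs, 2 outputs (illustrative)
x = torch.randn(1, 3)
J = get_jacobian(net, x, noutputs=2)  # shape (2, 3): one row per output
print(torch.allclose(J, net.weight))  # True: d(Wx + b)/dx is W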
@gocarlos
gocarlos / Eigen Cheat sheet
Last active July 3, 2024 07:33
Cheat sheet for the linear algebra library Eigen: http://eigen.tuxfamily.org/
// A simple quickref for Eigen. Add anything that's missing.
// Main author: Keir Mierle
#include <Eigen/Dense>
Matrix<double, 3, 3> A; // Fixed rows and cols. Same as Matrix3d.
Matrix<double, 3, Dynamic> B; // Fixed rows, dynamic cols.
Matrix<double, Dynamic, Dynamic> C; // Full dynamic. Same as MatrixXd.
Matrix<double, 3, 3, RowMajor> E; // Row major; default is column-major.
Matrix3f P, Q, R; // 3x3 float matrices.
@samrocketman
samrocketman / libimobiledevice_ifuse_Ubuntu.md
Last active July 3, 2024 07:05
On Ubuntu 16.04, since the iOS 10 update, libimobiledevice can't connect to my iPhone. This is my attempt to document a fix.

Why this document?

I upgraded my iPhone 5s to iOS 10 and could no longer retrieve photos from it. This was unacceptable for me, so I worked at getting my photos back. This document is my story (on Ubuntu 16.04).

The solution is to compile libimobiledevice and ifuse from source.

Audience

Who is this guide intended for?

@shagunsodhani
shagunsodhani / Batch Normalization.md
Last active July 25, 2023 18:07
Notes for "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift" paper

The Batch Normalization paper describes a method to address several issues that arise when training deep neural networks. It makes normalization a part of the architecture itself and reports significant improvements in the number of iterations required to train the network.

Issues With Training Deep Neural Networks

Internal Covariate Shift

Covariate shift refers to a change in the input distribution of a learning system. In a deep network, the input to each layer depends on the parameters of all preceding layers, so even small parameter updates get amplified as they propagate through the network. This shifts the input distribution of the internal layers and is known as internal covariate shift.

It is well established that networks converge faster if their inputs are whitened (i.e. zero mean, unit variance, and uncorrelated); internal covariate shift leads to just the opposite.
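
A minimal NumPy sketch of the batch-norm transform the paper proposes (the symbols mu, var, gamma, beta follow the paper; the function itself is illustrative): each feature is normalized over the mini-batch, then scaled and shifted by learned parameters.

import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    # x: (batch, features); normalize each feature over the mini-batch
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)
    # learned per-feature scale and shift restore representational power
    return gamma * x_hat + beta

Because gamma and beta are learned, the layer can recover the identity transform if normalization ever hurts a particular layer.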

@cbsmith
cbsmith / random_selection.cpp
Last active August 9, 2022 12:29
Hopefully serves as a reference implementation for random selection of an element from a container.
// -*- compile-command: "clang++ -ggdb -o random_selection -std=c++0x -stdlib=libc++ random_selection.cpp" -*-
//Reference implementation for doing random number selection from a container.
//Kept for posterity and because I made a surprising number of subtle mistakes on my first attempt.
#include <random>
#include <iterator>
template <typename RandomGenerator = std::default_random_engine>
struct random_selector
{
	//On most platforms, you probably want to use std::random_device("/dev/urandom")()
	random_selector(RandomGenerator g = RandomGenerator(std::random_device()())) : gen(g) {}
	//Returns an iterator to a uniformly random element in [start, end)
	template <typename Iter>
	Iter operator()(Iter start, Iter end) {
		std::advance(start, std::uniform_int_distribution<>(0, std::distance(start, end) - 1)(gen));
		return start;
	}
	RandomGenerator gen;
};