wangruohui / Install NVIDIA Driver and CUDA.md
Last active June 29, 2024 09:06
Install NVIDIA Driver and CUDA on Ubuntu / CentOS / Fedora Linux OS
cbaziotis / Attention.py
Last active March 28, 2023 11:50
Keras Layer that implements an Attention mechanism for temporal data. Supports Masking. Follows the work of Raffel et al. [https://arxiv.org/abs/1512.08756]
from keras import backend as K, initializers, regularizers, constraints
from keras.engine.topology import Layer


def dot_product(x, kernel):
    """
    Wrapper for dot product operation, in order to be compatible with both
    Theano and Tensorflow
    Args:
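The preview cuts the helper off mid-docstring. Purely as a point of reference, here is a minimal, hedged sketch of how such a backend-agnostic dot product can be completed; it uses only standard keras.backend calls (K.backend, K.dot, K.expand_dims, K.squeeze) and is not claimed to be the gist's exact code.

# Minimal sketch (assumption, not the gist's verbatim body): a dot product
# that works under both the Theano and TensorFlow backends.
def dot_product(x, kernel):
    if K.backend() == 'tensorflow':
        # TensorFlow needs the 1-D kernel lifted to a column vector before
        # the matmul; squeeze drops the trailing singleton axis afterwards.
        return K.squeeze(K.dot(x, K.expand_dims(kernel)), axis=-1)
    else:
        # Theano handles the vector case directly.
        return K.dot(x, kernel)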
cbaziotis / AttentionWithContext.py
Last active April 25, 2022 14:37
Keras Layer that implements an Attention mechanism, with a context/query vector, for temporal data. Supports Masking. Follows the work of Yang et al. [https://www.cs.cmu.edu/~diyiy/docs/naacl16.pdf] "Hierarchical Attention Networks for Document Classification"
def dot_product(x, kernel):
    """
    Wrapper for dot product operation, in order to be compatible with both
    Theano and Tensorflow
    Args:
        x (): input
        kernel (): weights
    Returns:
    """
    if K.backend() == 'tensorflow':
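The description above refers to the Yang et al. context-vector attention. Purely as an illustration of that mechanism, and not the gist's call() implementation, here is a hedged sketch using only keras.backend ops; the variable names and shapes in the comments are assumptions.

# Hedged illustration of context-vector attention (Yang et al.); names and
# shapes are assumptions, not the gist's code.
# h: (batch, timesteps, features) hidden states
# W: (features, features) weights, b: (features,) bias, u: (features,) context vector
def attention_with_context(h, W, b, u):
    uit = K.tanh(K.dot(h, W) + b)                 # per-timestep hidden representation
    ait = K.sum(uit * u, axis=-1)                 # similarity of each timestep to the context vector u
    a = K.softmax(ait)                            # attention weights over timesteps
    return K.sum(h * K.expand_dims(a), axis=1)    # attention-weighted sum -> (batch, features)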
wangruohui / intel-nvidia.md
Last active July 17, 2024 00:58
Intel for display, NVIDIA for computing

This guide shows how to use Intel graphics to drive the display and NVIDIA graphics for CUDA computing on an Ubuntu 18.04 / 20.04 desktop.

I made this work on an ordinary gaming PC with two graphics devices: an Intel UHD Graphics 630 and an NVIDIA GeForce GTX 1080 Ti. Both show up in the output of lspci | grep VGA:

00:02.0 VGA compatible controller: Intel Corporation Device 3e92
01:00.0 VGA compatible controller: NVIDIA Corporation GP102 [GeForce GTX 1080 Ti] (rev a1)
normandipalo / dinobot_oneshot_il.py
Last active July 16, 2024 13:44
Code snippet for the one-shot imitation learning phase of DINOBot (alignment + replay).
"""
In this script, we demonstrate how to use DINOBot to do one-shot imitation learning.
You first need to install the following repo and its requirements: https://github.com/ShirAmir/dino-vit-features.
You can then run this file inside that repo.
There are a few setup-dependent functions you need to implement, such as getting an RGBD observation from the camera
or moving the robot; you will find them at the top of this file.
"""
import torch
import numpy as np
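The rest of the file is not shown in this preview. As a rough, hedged sketch of the alignment-then-replay loop the description refers to: every helper name below (get_rgbd_observation, extract_correspondences, best_fit_transform, move_robot_relative, replay_trajectory) is a hypothetical placeholder standing in for the setup-dependent functions the docstring says you must implement yourself.

# Hedged sketch of the alignment + replay phase; all helpers are hypothetical
# placeholders, not the gist's actual API.
def one_shot_imitation(bottleneck_rgbd, demo_trajectory, error_threshold=0.01):
    """Servo the end-effector until the live view matches the demo's bottleneck view, then replay."""
    while True:
        live_rgbd = get_rgbd_observation()                        # RGB-D frame from your camera
        # DINO-ViT keypoint correspondences between the live and bottleneck frames.
        points_live, points_demo = extract_correspondences(live_rgbd, bottleneck_rgbd)
        # Rigid transform (R, t) that would move the live keypoints onto the demo keypoints.
        R, t = best_fit_transform(np.asarray(points_live), np.asarray(points_demo))
        if np.linalg.norm(t) < error_threshold:                   # close enough to the demo pose
            break
        move_robot_relative(R, t)                                 # step the end-effector toward alignment
    replay_trajectory(demo_trajectory)                            # replay the recorded demonstration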