hrdxwandg

  • https://www.100.me/
  • shanghai
@agramfort
agramfort / ranking.py
Created March 18, 2012 13:10 — forked from fabianp/ranking.py
Pairwise ranking using scikit-learn LinearSVC
"""
Implementation of pairwise ranking using scikit-learn LinearSVC
Reference: "Large Margin Rank Boundaries for Ordinal Regression", R. Herbrich,
T. Graepel, K. Obermayer.
Authors: Fabian Pedregosa <fabian@fseoane.net>
Alexandre Gramfort <alexandre.gramfort@inria.fr>
"""
@suziewong
suziewong / git.md
Last active September 30, 2024 08:18
How do you handle multiple Git accounts? 1. Multiple Git accounts on one machine (for different sites) 2. Multiple accounts on one machine for the same site (e.g. GitHub)

1. One machine can hold two Git accounts (on different sites)

For different sites you can of course use the same email address. For example, my GitHub, GitLab, and Bitbucket accounts are all monkeysuzie[at]gmail.com. In that case you don't need to worry about key conflicts, because what these sites use to identify you on push/pull is the email address. For example, on my Windows machine I have two accounts, one GitLab and one GitHub, both using the same id_rsa key.

Host github
  HostName github.com
  Port 22

Host gitlab.zjut.com

@mbollmann
mbollmann / attention_lstm.py
Last active August 22, 2024 07:06
My attempt at creating an LSTM with attention in Keras
class AttentionLSTM(LSTM):
    """LSTM with attention mechanism.

    This is an LSTM incorporating an attention mechanism into its hidden states.
    Currently, the context vector calculated from the attended vector is fed
    into the model's internal states, closely following the model by Xu et al.
    (2016, Sec. 3.1.2), using a soft attention model following
    Bahdanau et al. (2014).

    The layer expects two inputs instead of the usual one:
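Not the gist's actual code, but a hypothetical sketch of the Bahdanau-style soft-attention step described above, written against the Keras backend API; the weight names W_a, U_a, v_a and all shapes are assumptions made for illustration.

from keras import backend as K

def soft_attention_context(h, attended, W_a, U_a, v_a):
    # h: (batch, units) current hidden state
    # attended: (batch, timesteps, features) sequence to attend over
    # W_a: (units, att_dim), U_a: (features, att_dim), v_a: (att_dim, 1) learned weights
    # score_t = v_a^T tanh(W_a h + U_a x_t) for every attended timestep t
    e = K.dot(K.tanh(K.expand_dims(K.dot(h, W_a), 1) + K.dot(attended, U_a)), v_a)
    alpha = K.softmax(K.squeeze(e, -1))          # attention weights over timesteps
    return K.batch_dot(alpha, attended, axes=1)  # context vector: (batch, features)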
@cbaziotis
cbaziotis / Attention.py
Last active October 22, 2024 08:31
Keras Layer that implements an Attention mechanism for temporal data. Supports Masking. Follows the work of Raffel et al. [https://arxiv.org/abs/1512.08756]
from keras import backend as K, initializers, regularizers, constraints
from keras.engine.topology import Layer

def dot_product(x, kernel):
    """
    Wrapper for dot product operation, in order to be compatible with both
    Theano and Tensorflow
    Args:
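As a rough illustration (not this gist's exact code), the Raffel et al. feed-forward attention it implements scores each timestep with a small network, softmaxes the scores, and takes a weighted sum of the RNN outputs. A minimal Keras-backend sketch, reusing the dot_product wrapper above; W and b are assumed weight shapes:

def feed_forward_attention(h, W, b):
    # h: (batch, timesteps, features) RNN outputs; W: (features,); b: (timesteps,)
    e = K.tanh(dot_product(h, W) + b)            # one score per timestep: (batch, timesteps)
    a = K.softmax(e)                             # normalize scores into attention weights
    return K.sum(h * K.expand_dims(a), axis=1)   # weighted sum of timesteps: (batch, features)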
@cbaziotis
cbaziotis / AttentionWithContext.py
Last active April 25, 2022 14:37
Keras Layer that implements an Attention mechanism, with a context/query vector, for temporal data. Supports Masking. Follows the work of Yang et al. [https://www.cs.cmu.edu/~diyiy/docs/naacl16.pdf] "Hierarchical Attention Networks for Document Classification"
from keras import backend as K

def dot_product(x, kernel):
    """
    Wrapper for dot product operation, in order to be compatible with both
    Theano and Tensorflow
    Args:
        x (): input
        kernel (): weights
    Returns:
    """
    if K.backend() == 'tensorflow':
        return K.squeeze(K.dot(x, K.expand_dims(kernel)), axis=-1)
    else:
        return K.dot(x, kernel)
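Again only a hypothetical sketch of the Yang et al. attention-with-context computation the layer performs on top of dot_product; W, b, and the context/query vector u are assumed shapes, not the gist's exact variables:

def attention_with_context(h, W, b, u):
    # h: (batch, timesteps, features); W: (features, features); b: (features,); u: (features,)
    uit = K.tanh(K.dot(h, W) + b)                  # project each timestep
    ait = K.softmax(dot_product(uit, u))           # similarity to the context/query vector
    return K.sum(h * K.expand_dims(ait), axis=1)   # weighted sum: (batch, features)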
import numpy as np
from keras import backend as K
from keras.models import Sequential
from keras.layers import Dense, Activation

def fro_norm(w):
    """Frobenius norm."""
    return K.sqrt(K.sum(K.square(K.abs(w))))
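A hypothetical way to put fro_norm to use, for example as a custom loss on a toy model; the layer sizes, optimizer, and loss name are illustrative assumptions, not part of the original snippet:

def frobenius_loss(y_true, y_pred):
    # distance between prediction and target measured in the Frobenius norm
    return fro_norm(y_true - y_pred)

model = Sequential([Dense(8, input_shape=(4,)), Activation('linear')])
model.compile(optimizer='sgd', loss=frobenius_loss)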
@simonnanty
simonnanty / LRUCell.py
Created October 10, 2017 07:18
A TensorFlow implementation of Lattice Recurrent Unit (LRU) from arXiv:1710.02254v1.
import tensorflow as tf
from tensorflow.python.ops import init_ops
from tensorflow.python.ops import array_ops
from tensorflow.python.ops import math_ops
from tensorflow.python.ops.rnn_cell_impl import RNNCell, _linear

class LRUCell(RNNCell):
    """Lattice Recurrent Unit (LRU).

    This implementation is based on:
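A hypothetical usage sketch in TF 1.x style, assuming the cell follows the standard RNNCell interface and takes the usual num_units constructor argument; the placeholder shapes are illustrative:

inputs = tf.placeholder(tf.float32, [None, 20, 64])   # (batch, time, features)
cell = LRUCell(num_units=128)                          # num_units is an assumed argument name
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)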
@alessiamarcolini
alessiamarcolini / time-based_LR_schedule.py
Created May 7, 2018 20:53
Time Based Learning Rate Schedule - Keras
# Compile the model with SGD and time-based learning-rate decay
from keras.optimizers import SGD

epochs = 50
learning_rate = 0.1
decay_rate = learning_rate / epochs
momentum = 0.8
sgd = SGD(lr=learning_rate, momentum=momentum, decay=decay_rate, nesterov=False)
model.compile(loss='binary_crossentropy', optimizer=sgd, metrics=['accuracy'])
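For reference, with decay set this way the old-style Keras SGD shrinks the rate at every weight update t as lr_t = lr / (1 + decay * t); a quick check of the first few values, reusing the variables above:

for t in range(5):
    print(learning_rate / (1 + decay_rate * t))   # 0.1, 0.0998..., 0.0996..., ...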
@alessiamarcolini
alessiamarcolini / drop-based_LR_schedule.py
Created May 7, 2018 21:20
Drop Based Learning Rate Schedule - Keras
import math
from keras.callbacks import LearningRateScheduler

def step_decay(epoch):
    initial_lrate = 0.1
    drop = 0.5
    epochs_drop = 10.0
    lrate = initial_lrate * math.pow(drop, math.floor((1 + epoch) / epochs_drop))
    return lrate

# ...
lrate = LearningRateScheduler(step_decay)
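With these constants the rate starts at 0.1 and halves roughly every 10 epochs (0.1, 0.05, 0.025, ...). A hypothetical call wiring the callback into training, where model, X_train, and y_train are placeholders assumed to exist:

model.fit(X_train, y_train, epochs=50, batch_size=32, callbacks=[lrate], verbose=2)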
@alessiamarcolini
alessiamarcolini / clr.py
Created June 7, 2018 16:04
Cyclical Learning Rates - Keras
def clr(self):
    cycle = np.floor(1 + self.clr_iterations / (2 * self.step_size))
    x = np.abs(self.clr_iterations / self.step_size - 2 * cycle + 1)
    if self.scale_mode == 'cycle':
        return self.base_lr + (self.max_lr - self.base_lr) * \
            np.maximum(0, (1 - x)) * self.scale_fn(cycle)
    else:
        return self.base_lr + (self.max_lr - self.base_lr) * \
            np.maximum(0, (1 - x)) * self.scale_fn(self.clr_iterations)
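To make the policy easier to inspect in isolation, here is a standalone sketch of the plain triangular case (scale_fn(cycle) == 1) of the method above; the default base_lr, max_lr, and step_size values are illustrative:

import numpy as np

def triangular_clr(iteration, base_lr=0.001, max_lr=0.006, step_size=2000):
    cycle = np.floor(1 + iteration / (2 * step_size))
    x = np.abs(iteration / step_size - 2 * cycle + 1)
    # rises linearly from base_lr to max_lr over step_size iterations, then falls back
    return base_lr + (max_lr - base_lr) * np.maximum(0, 1 - x)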