@luckmoon
luckmoon / -.md
Created July 27, 2018 12:50 — forked from ifduyue/-.md
Resolve pycurl: libcurl link-time ssl backend (nss) is different from compile-time ssl backend (openssl)
# yum
There was a problem importing one of the Python modules
required to run yum. The error leading to this problem was:

   pycurl: libcurl link-time ssl backend (nss) is different from compile-time ssl backend (openssl)

Please install a package which provides this module, or
verify that the module is installed correctly.
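The usual remedy is to rebuild pycurl against the SSL backend that libcurl actually links (nss in the error above); a minimal sketch, assuming the offending pycurl was installed with pip:
pip uninstall pycurl
export PYCURL_SSL_LIBRARY=nss    # match the link-time backend named in the error
pip install pycurl --no-cache-dir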
wget https://cmake.org/files/v3.11/cmake-3.11.0.tar.gz
# Compile from source and install
tar zxvf cmake-3.11.0.tar.gz && cd cmake-3.11.0
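# Assumed continuation of the from-source install (not part of the captured preview):
./bootstrap && make -j"$(nproc)" && sudo make install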
@luckmoon
luckmoon / gmmhmm.py
Last active August 9, 2018 14:49 — forked from kastnerkyle/gmmhmm.py
GMM-HMM (hidden Markov model with Gaussian mixture emissions) implementation for speech recognition and other uses
# (C) Kyle Kastner, June 2014
# License: BSD 3 clause
import scipy.stats as st
import numpy as np
class gmmhmm:
    # This class converted with modifications from https://code.google.com/p/hmm-speech-recognition/source/browse/Word.m
    def __init__(self, n_states):
        self.n_states = n_states
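# Hypothetical usage sketch (not shown in the preview): train one model per word class
# and pick the class with the highest score. The fit()/transform() method names and the
# (n_features, n_frames) observation shape are assumptions based on the forked source.
train_obs = [np.random.randn(7, 40) for _ in range(2)]   # fake features for two classes
test_obs = np.random.randn(7, 40)

models = [gmmhmm(3) for _ in train_obs]
for model, obs in zip(models, train_obs):
    model.fit(obs)

scores = [model.transform(test_obs) for model in models]
predicted_class = int(np.argmax(scores))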
@luckmoon
luckmoon / Upgrade vim
Created August 13, 2018 13:32 — forked from yevrah/Upgrade vim
Update to Vim 8 on CentOS 7
# You may use this CentOS 7 repository on Fedora Copr for Vim 8 builds.
# https://copr.fedorainfracloud.org/coprs/mcepl/vim8/
#
# Run these commands on CentOS 7.
# Add this repository:
sudo curl -L https://copr.fedorainfracloud.org/coprs/mcepl/vim8/repo/epel-7/mcepl-vim8-epel-7.repo -o /etc/yum.repos.d/mcepl-vim8-epel-7.repo
# Upgrade Vim to Vim 8:
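# Likely completion (the preview is cut off here; assumes the Copr repo above
# provides the rebuilt packages):
sudo yum -y update "vim*"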
@luckmoon
luckmoon / scroll.py
Created October 8, 2019 07:49 — forked from hmldd/scroll.py
Example of Elasticsearch scrolling using Python client
# coding:utf-8
from elasticsearch import Elasticsearch
import json
# Define config
host = "127.0.0.1"
port = 9200
timeout = 1000
index = "index"
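The preview stops at the configuration; a minimal sketch of the scroll loop itself, using only documented elasticsearch-py calls (the match_all query and sizes are placeholders):
es = Elasticsearch([{"host": host, "port": port}], timeout=timeout)

# Open a scroll context, then keep paging until no hits come back.
response = es.search(index=index, scroll="2m", size=1000,
                     body={"query": {"match_all": {}}})
scroll_id = response["_scroll_id"]
hits = response["hits"]["hits"]

while hits:
    for doc in hits:
        print(json.dumps(doc["_source"]))
    response = es.scroll(scroll_id=scroll_id, scroll="2m")
    scroll_id = response["_scroll_id"]
    hits = response["hits"]["hits"]

es.clear_scroll(scroll_id=scroll_id)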
@luckmoon
luckmoon / weight_init.py
Created October 29, 2019 04:24 — forked from jeasinema/weight_init.py
A simple script for parameter initialization for PyTorch
#!/usr/bin/env python
# -*- coding:UTF-8 -*-
import torch
import torch.nn as nn
import torch.nn.init as init
def weight_init(m):
    '''
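# The preview above is cut off inside weight_init's docstring. Below is a shortened,
# self-contained sketch of what such an initializer typically does (the full script
# covers many more layer types), using standard torch.nn.init calls.
def weight_init_sketch(m):
    if isinstance(m, nn.Conv2d) or isinstance(m, nn.Linear):
        init.xavier_normal_(m.weight)
        if m.bias is not None:
            init.zeros_(m.bias)
    elif isinstance(m, nn.BatchNorm2d):
        init.ones_(m.weight)
        init.zeros_(m.bias)

# Usage: model.apply(...) walks every submodule recursively.
model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.BatchNorm2d(8), nn.ReLU())
model.apply(weight_init_sketch)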
@luckmoon
luckmoon / pad_packed_demo.py
Created December 5, 2019 06:53 — forked from HarshTrivedi/pad_packed_demo.py
Minimal tutorial on packing (pack_padded_sequence) and unpacking (pad_packed_sequence) sequences in PyTorch.
import torch
from torch import LongTensor
from torch.nn import Embedding, LSTM
from torch.autograd import Variable
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence
## We want to run LSTM on a batch of 3 character sequences ['long_str', 'tiny', 'medium']
#
# Step 1: Construct Vocabulary
# Step 2: Load indexed data (list of instances, where each instance is list of character indices)
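The preview cuts off partway through the steps; a condensed sketch of the full pack -> LSTM -> unpack round trip (embedding and hidden sizes are illustrative):
seqs = ['long_str', 'tiny', 'medium']
vocab = ['<pad>'] + sorted({ch for s in seqs for ch in s})
vectorized = [[vocab.index(ch) for ch in s] for s in seqs]

# Sort by length (descending), as pack_padded_sequence traditionally expects.
vectorized.sort(key=len, reverse=True)
lengths = LongTensor([len(v) for v in vectorized])

# Pad every sequence to the longest one (batch_first layout).
padded = torch.zeros(len(vectorized), lengths.max().item(), dtype=torch.long)
for i, v in enumerate(vectorized):
    padded[i, :len(v)] = LongTensor(v)

embed = Embedding(len(vocab), 4, padding_idx=0)
lstm = LSTM(input_size=4, hidden_size=5, batch_first=True)

packed = pack_padded_sequence(embed(padded), lengths, batch_first=True)
packed_output, (h_n, c_n) = lstm(packed)
output, output_lengths = pad_packed_sequence(packed_output, batch_first=True)  # (batch, max_len, hidden)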
@luckmoon
luckmoon / 論文閱讀心得.md
Created April 7, 2020 03:14 — forked from david30907d/論文閱讀心得.md
Every paper I've read gets written up here!!
@luckmoon
luckmoon / tensorflow-graph-error-handling.py
Created December 15, 2020 12:03 — forked from alexwal/tensorflow-graph-error-handling.py
Example of how to handle errors in a tf.data.Dataset input pipeline
import tensorflow as tf
def create_bad_dataset(create_batches=True):
    dataset = tf.data.Dataset.from_tensor_slices([1., 2., 0., 4., 8., 16.])
    # Computing `tf.check_numerics(1. / 0.)` will raise an InvalidArgumentError.
    if create_batches:
        # Demonstrates that error handling works with map_and_batch
        dataset = dataset.apply(tf.contrib.data.map_and_batch(
            map_func=lambda x: tf.check_numerics(1. / x, 'error'), batch_size=2))
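# Sketch of the error-handling step the preview cuts off before (TF 1.x contrib API):
# drop the offending records with ignore_errors(), then iterate the pipeline as usual.
# Assumes create_bad_dataset() above returns the dataset it builds.
dataset = create_bad_dataset().apply(tf.contrib.data.ignore_errors())
iterator = dataset.make_one_shot_iterator()
next_element = iterator.get_next()

with tf.Session() as sess:
    while True:
        try:
            print(sess.run(next_element))
        except tf.errors.OutOfRangeError:
            break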
@luckmoon
luckmoon / gist:3ff508298d6ae10fac52fc4d33918499
Created February 1, 2021 07:59 — forked from glhfgg1024/gist:6d54faf29ccaf5dc7cca8034287e39e0
Copy pretrained weights from "saved_model.pb" into a new model for fine-tuning or transfer learning
import numpy as np
import tensorflow as tf
from tensorflow.python.saved_model import loader
# first, read the pretrained weights into a dictionary
variables = {}
g1 = tf.Graph()
with g1.as_default():
    restore_from = 'pretrained_model/1513006564'
    with tf.Session() as sess:
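        # Assumed continuation (not in the captured preview): load the SavedModel under
        # its serving tag and copy every variable's value into the `variables` dict.
        loader.load(sess, ['serve'], restore_from)
        for var in g1.get_collection(tf.GraphKeys.GLOBAL_VARIABLES):
            variables[var.name] = sess.run(var)

# The second half of the gist would then build the new model in a fresh graph and assign
# variables[...] into the variables whose names match before fine-tuning.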