Mabuchi Akira (developer-mayuan)
"=============================================================================
" dark_powered.vim --- Dark powered mode of SpaceVim
" Copyright (c) 2016-2017 Wang Shidong & Contributors
" Author: Wang Shidong < wsdjeg at 163.com >
" URL: https://spacevim.org
" License: GPLv3
"=============================================================================
" SpaceVim Options: {{{
developer-mayuan / dataset_input_fn.py
Created March 19, 2018 21:04
Dataset input function using the TensorFlow 1.4 API.
def dataset_input_fn(is_training, data_dir, batch_size, num_epochs=1):
    """Input function which provides batches for train or eval."""
    dataset = tf.data.Dataset.from_tensor_slices(
        _filenames(is_training, data_dir))
    if is_training:
        dataset = dataset.shuffle(buffer_size=_FILE_SHUFFLE_BUFFER)
    dataset = dataset.flat_map(tf.data.TFRecordDataset)
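The preview is truncated here. A minimal sketch of how such an input_fn typically continues under the TF 1.4 Dataset API; _parse_record is a hypothetical per-example decoder (e.g. built on tf.parse_single_example) and is not part of the original gist:

import tensorflow as tf

def dataset_input_fn_tail(dataset, is_training, batch_size, num_epochs):
    # _parse_record is an assumed helper that decodes one serialized
    # tf.Example into (features, label); it is not shown in the preview.
    dataset = dataset.map(lambda raw: _parse_record(raw, is_training),
                          num_parallel_calls=4)
    if is_training:
        dataset = dataset.shuffle(buffer_size=10000)
    dataset = dataset.repeat(num_epochs)
    dataset = dataset.batch(batch_size)
    # A one-shot iterator matches the estimator-style input_fn contract.
    features, labels = dataset.make_one_shot_iterator().get_next()
    return features, labels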
git config --global core.editor "vim"
developer-mayuan / nginx-websocket-proxy.conf
Created March 12, 2018 07:49 — forked from uorat/nginx-websocket-proxy.conf
Nginx Reverse Proxy for WebSocket
upstream websocket {
    server localhost:3000;
}
server {
    listen 80;
    server_name localhost;
    access_log /var/log/nginx/websocket.access.log main;
    # Truncated in the preview; WebSocket proxying needs HTTP/1.1 and the
    # Upgrade/Connection headers, roughly as follows.
    location / {
        proxy_pass http://websocket;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
developer-mayuan / hopenet_input.py
Created January 19, 2018 22:51
ImageNet-like TensorFlow input pipeline using the 1.1 API.
# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
'atom-text-editor.vim-mode-plus.insert-mode':
  'j j': 'vim-mode-plus:activate-normal-mode'
  'j k': 'vim-mode-plus:activate-normal-mode'

'atom-text-editor[mini]':
  'j j': 'core:cancel'
  'j k': 'core:cancel'
# Working example for my blog post at:
# https://danijar.github.io/structuring-your-tensorflow-models
import functools
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data


def doublewrap(function):
    """
    A decorator decorator, allowing the decorated decorator to be used
    without parentheses if no arguments are provided.
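The preview stops inside the docstring. A minimal self-contained sketch of this decorator-decorator pattern, mirroring the linked blog post (an illustration, not necessarily the gist's exact code):

import functools

def doublewrap(function):
    """Allow a decorator to be used with or without arguments."""
    @functools.wraps(function)
    def decorator(*args, **kwargs):
        # Bare use, @decorator: the single positional argument is the target.
        if len(args) == 1 and not kwargs and callable(args[0]):
            return function(args[0])
        # Parameterized use, @decorator(...): close over the arguments.
        return lambda wrapee: function(wrapee, *args, **kwargs)
    return decorator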
If this is your first install of dbus, load it automatically on login with:
mkdir -p ~/Library/LaunchAgents
cp /Users/mayuan-mabuchi/Ordnance/anaconda3/envs/Hopenet/org.freedesktop.dbus-session.plist ~/Library/LaunchAgents/
launchctl load -w ~/Library/LaunchAgents/org.freedesktop.dbus-session.plist
If this is an upgrade and you already have the org.freedesktop.dbus-session.plist loaded:
launchctl unload -w ~/Library/LaunchAgents/org.freedesktop.dbus-session.plist
cp /Users/mayuan-mabuchi/Ordnance/anaconda3/envs/Hopenet/org.freedesktop.dbus-session.plist ~/Library/LaunchAgents/
launchctl load -w ~/Library/LaunchAgents/org.freedesktop.dbus-session.plist
developer-mayuan / bottleneck_block.py
Created December 21, 2017 01:38
Bottleneck building block implementation for ResNet.
def bottleneck_block(inputs, filters, is_training, projection_shortcut,
                     strides, data_format):
    """
    Bottleneck block variant for residual networks with BN before convolutions.

    :param inputs: A tensor of size [batch, channels, height_in, width_in]
        (channels_first) or [batch, height_in, width_in, channels]
        (channels_last).
    :param filters: The number of filters for the first two convolutions. Note
        that the third and final convolution will use 4 times as many filters.
    :param is_training: A Boolean for whether the model is in training or
        inference mode. Needed for batch normalization.
    :param projection_shortcut: The function to use for projection shortcuts
        (typically a 1x1 convolution when downsampling the input).
    :param strides: The block's stride. If greater than 1, this block will
        ultimately downsample the input.
    :param data_format: The input format ('channels_last' or 'channels_first').
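The preview ends with the docstring. A minimal sketch of a pre-activation bottleneck body in the TF 1.x layers API; it assumes the batch_norm_relu helper from the gist below and uses plain 'same' padding instead of the fixed padding in the official ResNet code:

import tensorflow as tf

def bottleneck_block_body(inputs, filters, is_training, projection_shortcut,
                          strides, data_format):
    shortcut = inputs
    # Pre-activation: BN + ReLU come before each convolution.
    inputs = batch_norm_relu(inputs, is_training, data_format)
    if projection_shortcut is not None:
        # Project the shortcut so its shape matches the block output.
        shortcut = projection_shortcut(inputs)
    inputs = tf.layers.conv2d(
        inputs, filters=filters, kernel_size=1, strides=1, padding='same',
        use_bias=False, data_format=data_format)
    inputs = batch_norm_relu(inputs, is_training, data_format)
    inputs = tf.layers.conv2d(
        inputs, filters=filters, kernel_size=3, strides=strides,
        padding='same', use_bias=False, data_format=data_format)
    inputs = batch_norm_relu(inputs, is_training, data_format)
    # The final 1x1 convolution expands to 4x the filter count.
    inputs = tf.layers.conv2d(
        inputs, filters=4 * filters, kernel_size=1, strides=1, padding='same',
        use_bias=False, data_format=data_format)
    return inputs + shortcut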
developer-mayuan / batch_norm_relu.py
Created December 21, 2017 01:00
Batch normalization followed by a ReLU activation in TensorFlow.
def batch_norm_relu(inputs, is_training, data_format):
    """
    Performs a batch normalization followed by a ReLU.

    :param inputs: The input tensor.
    :param is_training: A Boolean for whether the model is in training or
        inference mode. Needed for batch normalization.
    :param data_format: The input format ('channels_last' or 'channels_first').
    :return: The normalized and activated tensor.
    """
    inputs = tf.layers.batch_normalization(
        inputs=inputs, axis=1 if data_format == 'channels_first' else 3,
        # The preview is truncated here; typical values from the official
        # ResNet code are momentum=0.997 and epsilon=1e-5.
        momentum=0.997, epsilon=1e-5, center=True, scale=True,
        training=is_training, fused=True)
    inputs = tf.nn.relu(inputs)
    return inputs
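A caveat with tf.layers.batch_normalization(training=...) in TF 1.x: the moving-average update ops are collected in UPDATE_OPS but not run automatically, so the train op must depend on them. A minimal sketch, where optimizer and loss are assumed to be defined by the surrounding model:

# optimizer and loss are placeholders for whatever the surrounding model
# defines; the UPDATE_OPS dependency is the point here.
update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(update_ops):
    train_op = optimizer.minimize(loss)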