
@walkoncross
walkoncross / howto-standalone-toolchain.md
Created August 20, 2016 10:07 — forked from Tydus/howto-standalone-toolchain.md
How to install Standalone toolchain for Android

HOWTO: Cross-compiling on Android

5W1H

What is NDK

NDK (Native Development Kit) is the official Android toolchain, originally intended for users who write native C/C++ code as JNI libraries. It is not designed for compiling standalone programs (./a.out) and is not compatible with automake/cmake etc.

What is Standalone Toolchain

"Standalone" has two meanings here:

  1. The program is standalone (it does not depend on the NDK and needs no helper scripts to run).
  2. The toolchain is made for building standalone programs and libs, and can be used by automake etc.
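As a rough sketch, creating such a toolchain and pointing an autotools build at it looks like this (the NDK path, API level, and install directory below are assumptions, not from the gist):

```shell
# Assumed NDK install path -- adjust for your setup.
export NDK="$HOME/android-ndk-r12b"

# Older NDKs ship make-standalone-toolchain.sh; newer ones also provide
# build/tools/make_standalone_toolchain.py with equivalent options.
"$NDK/build/tools/make-standalone-toolchain.sh" \
    --arch=arm --platform=android-21 \
    --install-dir="$HOME/android-arm-toolchain"

# The result is a GNU-flavored toolchain that automake/cmake can use:
export PATH="$HOME/android-arm-toolchain/bin:$PATH"
./configure --host=arm-linux-androideabi CC=arm-linux-androideabi-gcc
make
```

The `--host` triple tells configure it is cross-compiling, so it stops trying to run the binaries it builds.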

(Optional) Why NDK is hard to use

By default, NDK looks for headers and libs in an Android-flavored directory structure, which differs from the GNU layout, so the compiler cannot find them. For example:

@walkoncross
walkoncross / README.md
Created September 6, 2016 07:50 — forked from GilLevi/README.md
Emotion Recognition in the Wild via Convolutional Neural Networks and Mapped Binary Patterns

Gil Levi and Tal Hassner, Emotion Recognition in the Wild via Convolutional Neural Networks and Mapped Binary Patterns

Convolutional neural networks for emotion classification from facial images as described in the following work:

Gil Levi and Tal Hassner, Emotion Recognition in the Wild via Convolutional Neural Networks and Mapped Binary Patterns, Proc. ACM International Conference on Multimodal Interaction (ICMI), Seattle, Nov. 2015

Project page: http://www.openu.ac.il/home/hassner/projects/cnn_emotions/

If you find our models useful, please add a suitable reference to our paper in your work.

@walkoncross
walkoncross / README.md
Created September 10, 2016 11:37 — forked from GilLevi/README.md
Age and Gender Classification using Convolutional Neural Networks
@walkoncross
walkoncross / readme.md
Created September 10, 2016 11:58 — forked from ishay2b/readme.md
Vanilla CNN caffe model
| name | caffemodel | caffemodel_url | license | sha1 | caffe_commit |
| --- | --- | --- | --- | --- | --- |
| Vanilla CNN Model | vanillaCNN.caffemodel | | unrestricted | b5e34ce75d078025e07452cb47e65d198fe27912 | 9c9f94e18a8909580a6b94c44dbb1e46f0ee8eb8 |

Implementation of the Vanilla CNN described in the paper: Yue Wu and Tal Hassner, "Facial Landmark Detection with Tweaked Convolutional Neural Networks", arXiv preprint arXiv:1511.04031, 12 Nov. 2015. See project page for more information about this project.
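Since a sha1 is listed for the weights, a downloaded copy can be checked against it (the download URL is elided above, so only the verification step is shown):

```shell
# Check the downloaded weights against the sha1 listed in the table above.
echo "b5e34ce75d078025e07452cb47e65d198fe27912  vanillaCNN.caffemodel" \
    | sha1sum -c -
```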

@walkoncross
walkoncross / readme.md
Created September 10, 2016 20:36 — forked from jimgoo/readme.md
CaffeNet fine-tuned on the Oxford 102 category flower dataset
@walkoncross
walkoncross / _Instructions.md
Created October 19, 2016 19:12 — forked from genekogan/_Instructions.md
instructions for generating a style transfer animation from a video

Instructions for making a Neural-Style movie

The following instructions are for creating your own animations using the style transfer technique described by Gatys, Ecker, and Bethge, and implemented by Justin Johnson. To see an example of such an animation, see this video of Alice in Wonderland re-styled by 17 paintings.

Setting up the environment

The easiest way to set up the environment is to load Samim's pre-built Terminal.com snap or to use another cloud service such as Amazon EC2. Unfortunately the g2.2xlarge GPU instances cost $0.99 per hour, and depending on the parameters selected it may take 10-15 minutes to produce a 512px-wide image, so it can cost $2-3 to generate 1 sec of video at 12fps.
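That cost figure can be sanity-checked with shell arithmetic, using the upper frame-time estimate from above:

```shell
# 12 frames per second of video, ~15 minutes per 512px frame,
# g2.2xlarge at $0.99/hour (all figures from the text above).
frames_per_sec=12
min_per_frame=15
rate_cents_per_hour=99

hours=$(( frames_per_sec * min_per_frame / 60 ))   # 3 hours per second of video
cost_cents=$(( hours * rate_cents_per_hour ))      # 297 cents, i.e. about $3
echo "${hours} hours, ~\$$(( (cost_cents + 50) / 100 )) per second of video"
```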

If you do load the

@walkoncross
walkoncross / _readme.md
Created October 25, 2016 22:47 — forked from kevinlin311tw/_readme.md
Deep Learning of Binary Hash Codes CIFAR10
@walkoncross
walkoncross / poolmean.prototxt
Created October 25, 2016 23:20 — forked from vsubhashini/poolmean.prototxt
Translating Videos to Natural Language Using Deep Recurrent Neural Networks
# The network is used for the video description experiments in [1].
# Please consider citing [1] if you use this example in your work.
#
# [1] S. Venugopalan, H. Xu, J. Donahue, M. Rohrbach, R. Mooney, K.Saenko.
# "Translating Videos to Natural Language using Deep Recurrent Neural
# Networks." NAACL-HLT 2015.
name: "mean_fc7_to_lstm"
layer {
name: "data"
@walkoncross
walkoncross / Sequence to Sequence -- Video to Text
Last active November 21, 2016 23:29 — forked from vsubhashini/readme.md
Sequence to Sequence - Video to Text (S2VT)
## Sequence to Sequence -- Video to Text
Paper : [ICCV 2015 PDF](http://www.cs.utexas.edu/users/ml/papers/venugopalan.iccv15.pdf)
Download Model: [S2VT_VGG_RGB_MODEL](https://www.dropbox.com/s/wn6k2oqurxzt6e2/s2s_vgg_pstream_allvocab_fac2_iter_16000.caffemodel?dl=1) (333MB)
[Project Page](https://vsubhashini.github.io/s2vt.html)
### Description
@walkoncross
walkoncross / multiple_ssh_setting.md
Created November 2, 2016 20:19 — forked from jexchan/multiple_ssh_setting.md
Multiple SSH keys for different github accounts

Multiple SSH key settings for different GitHub accounts

Create different public keys

Create different SSH keys according to the article Mac Set-Up Git

$ ssh-keygen -t rsa -C "your_email@youremail.com"
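One key pair per account can then be mapped to a host alias in `~/.ssh/config` (the emails and key names below are placeholders):

```shell
# Placeholder emails and key names -- one key pair per account.
ssh-keygen -t rsa -C "work@example.com"     -f ~/.ssh/id_rsa_work     -N ""
ssh-keygen -t rsa -C "personal@example.com" -f ~/.ssh/id_rsa_personal -N ""

# Map each key to a host alias in ~/.ssh/config:
cat >> ~/.ssh/config <<'EOF'
Host github-work
    HostName github.com
    User git
    IdentityFile ~/.ssh/id_rsa_work

Host github-personal
    HostName github.com
    User git
    IdentityFile ~/.ssh/id_rsa_personal
EOF

# Clone through the alias instead of github.com, e.g.:
#   git clone git@github-work:some-org/some-repo.git
```

Each alias resolves to github.com but presents a different key, so the right account is used per repository.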