This sets up a hub that lets a set of ZeroRPC clients and workers talk to each other.
WARNING: this is not compatible with heartbeats and streaming!
Clients connect to the "in" side of the hub.
Workers connect to the "out" side of the hub.
Guide for merging accelerated convolution into Caffe
====
*Yuanjun Xiong*
---
[TOC]
// Fetch the record at the current cursor position; abort if the read fails.
CHECK_EQ(mdb_cursor_get(mdb_cursor_, &mdb_key_,
        &mdb_value_, MDB_GET_CURRENT), MDB_SUCCESS);
// Deserialize the raw LMDB value into a Datum protobuf.
datum.ParseFromArray(mdb_value_.mv_data,
        mdb_value_.mv_size);
LOG(INFO) << "Read " << item_id << " " << (char*)mdb_key_.mv_data;
// Pack a flat float vector into a Datum of shape (data_len, 1, 1).
int ReadVectorToDatum(float* data_ptr, int data_len, Datum* datum) {
  // Reshape and clear any existing data.
  datum->set_channels(data_len);
  datum->set_height(1);
  datum->set_width(1);
  datum->set_label(0);
  datum->clear_data();
  datum->clear_float_data();
  // Copy the vector into the float_data field.
  for (int i = 0; i < data_len; ++i) {
    datum->add_float_data(data_ptr[i]);
  }
  return 1;
}
<!DOCTYPE html>
<html lang="en">
<head>
<!-- Le styles -->
<link href="../bootstrap/css/bootstrap.css" rel="stylesheet">
<script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.7/jquery.js"></script>
</head>
<body>
This gist holds the model spec for the baseline CNN model on the WIDER dataset.
The CNN structure is AlexNet. Network parameters are initialized using a model pretrained on ImageNet.
The weights can be downloaded at
cuhk_wider_baseline_cnn.caffemodel
Please refer to
This gist holds the Caffe-style model spec for the CVPR'15 paper
"Recognize Complex Events from Static Images by Fusing Deep Channels".
The model has two channels: one for appearance analysis and one for detection bounding box analysis.
The appearance analysis channel has a structure similar to AlexNet and is therefore initialized with a model pretrained on ImageNet.
__author__ = 'alex'

# from pyspark import SparkContext, SparkConf
import nltk
from nltk.corpus import stopwords

# English stopword list and a punctuation-aware word tokenizer.
sw = stopwords.words('english')
tk = nltk.tokenize.WordPunctTokenizer()
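For reference, the tokenize-then-filter pipeline the snippet sets up can be sketched with only the standard library. `WordPunctTokenizer` splits on the regex `\w+|[^\w\s]+` (runs of word characters, or runs of punctuation); the stopword set here is a small illustrative subset of `stopwords.words('english')`, and the function names are hypothetical:

```python
import re

# Illustrative subset of NLTK's English stopword list.
SW = {"the", "is", "a", "of", "and"}

def word_punct_tokenize(text):
    # Same pattern WordPunctTokenizer uses: word runs or punctuation runs.
    return re.findall(r"\w+|[^\w\s]+", text)

def remove_stopwords(tokens):
    # Case-insensitive stopword filtering.
    return [t for t in tokens if t.lower() not in SW]

tokens = word_punct_tokenize("The cat, of course, is happy!")
filtered = remove_stopwords(tokens)
```

The real pipeline would use the full NLTK stopword corpus, but the control flow is the same.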
__author__ = 'Yuanjun Xiong'
"""
This script transforms an image-based Caffe model into its optical-flow-ready form.
The basic approach is to average the three channels of the first set of convolution filters.
The averaged filters are then replicated K times to accept K input frames of optical flow maps.
Refer to "Towards Good Practices for Very Deep Two-Stream ConvNets" for more details.
======================================================================
Usage:
    python build_flow_network.py <caffe root> <first layer name> <image model prototxt> <image model weights> <flow model prototxt> <flow model weights[out]>
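The average-then-replicate step described above can be sketched outside Caffe with plain NumPy. The function name `rgb_to_flow_filters` is illustrative; the actual script reads and writes the first-layer weight blob of the Caffe net:

```python
import numpy as np

def rgb_to_flow_filters(weights, k):
    """Average the RGB channels of first-layer conv filters, then replicate K times.

    weights: array of shape (num_filters, 3, h, w) from the image model.
    Returns an array of shape (num_filters, k, h, w) for a K-frame flow input.
    """
    avg = weights.mean(axis=1, keepdims=True)  # (num_filters, 1, h, w)
    return np.repeat(avg, k, axis=1)           # (num_filters, k, h, w)
```

Replicating the averaged filter keeps the expected magnitude of the first layer's responses roughly unchanged when the input switches from 3 RGB channels to K stacked flow maps.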
youtube-dl -f 'bestvideo[ext=mp4]+bestaudio[ext=m4a]/bestvideo+bestaudio' \
    --merge-output-format mp4 \
    "http://www.youtube.com/watch?v=P9pzm5b6FFY"
# This command downloads the best available video and audio streams separately, then merges them into a single MP4 with the post-processor.