I hereby claim:
- I am abishekk92 on github.
- I am abishekk92_sem3 (https://keybase.io/abishekk92_sem3) on keybase.
- I have a public key ASAKTPWub6-tnDxkc-prAFhEwHLv4_7jyQi3F4FD3pcoDwo
To claim this, I am signing this object:
let mapleader=","
syntax enable
set nobackup
set nowritebackup
set background=dark
colorscheme desert
# English to Japanese dictionary (letter -> syllable).
jp_eng_str = "A= ka, B= tu, C= mi, D= te, E= ku, F= lu, G= ji, H= ri, I= ki, J= zu, K= me, L= ta, M= rin, N= to, O= mo, P= no, Q= ke, R= shi, S= ari, T= chi, U= do, V= ru, W= mei, X= na, Y= fu, Z= zi"
jp_eng_dict = dict(map(lambda a: a.strip().split("="), jp_eng_str.lower().split(",")))
# Convert an English word to a Japanese word.
to_jp = lambda word: "".join(map(lambda a: jp_eng_dict.get(a, '').strip(), word.lower())).title()
# Convert an English name (one or more words) to a Japanese name.
to_jp_name = lambda name: " ".join(map(to_jp, name.split(" ")))
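For illustration, a couple of sample calls (the inputs are made up; the outputs follow from the table above):

# Hypothetical inputs, just to show the mapping in action.
print(to_jp("hello"))            # -> Rikutatamo
print(to_jp_name("John Smith"))  # -> Zumorito Aririnkichiri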
So I tried training a two-layer GRU encoder-decoder recurrent neural network to solve the well-known FizzBuzz problem.
For a maximum sequence length of 5 and 5K toy samples, the network reached 98% validation accuracy in 30 epochs.
Model Summary
============
____________________________________________________________________________________________________
Layer (type)                     Output Shape          Param #     Connected to
====================================================================================================
gru_1 (GRU)                      (None, 5, 128)        63744       gru_input_1[0][0]
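For reference, a minimal sketch of what such a model might look like in Keras. This is not the original code: the vocabulary size is a guess, and the exact layer layout is inferred from the summary above (gru_1 emits a full sequence, suggesting a stacked encoder).

from keras.models import Sequential
from keras.layers import GRU, RepeatVector, TimeDistributed, Dense

max_len, vocab_size, hidden = 5, 16, 128  # assumed hyperparameters

model = Sequential()
# Encoder: two stacked GRUs; the first returns the full sequence
# (matching the (None, 5, 128) shape above), the second keeps only
# its final state as the encoding.
model.add(GRU(hidden, return_sequences=True, input_shape=(max_len, vocab_size)))
model.add(GRU(hidden))
# Repeat the encoded state once per output timestep.
model.add(RepeatVector(max_len))
# Decoder: unroll the state back into an output sequence.
model.add(GRU(hidden, return_sequences=True))
# Per-timestep softmax over the output vocabulary.
model.add(TimeDistributed(Dense(vocab_size, activation='softmax')))
model.compile(loss='categorical_crossentropy', optimizer='adam',
              metrics=['accuracy'])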
Verifying that +abishekk92 is my blockchain ID. https://onename.com/abishekk92
# Pseudocode made concrete in Python; apply_map, apply_filter and bar
# are placeholders for the real map/filter operations and the input.
final = [apply_map(apply_filter(foo)) for foo in bar]
import scala.util.Random.shuffle

object Sort {
  val toSort = List(1234, 23, 45, 56, 1)
  // Insert t into an already sorted list, keeping it sorted.
  def insert(list: List[Int], t: Int): List[Int] = list match {
    case Nil => List(t)
    case x :: xs if x > t => t :: x :: xs
    case x :: xs => x :: insert(xs, t)
  }
  // Insertion sort: fold each element into an initially empty sorted list.
  def insertionSort(list: List[Int]): List[Int] = list.foldLeft(List[Int]())(insert)
  def main(args: Array[String]): Unit = println(insertionSort(shuffle(toSort)))
}
I recently came across how one can parallelize an algorithm in terms of MapReduce by pinning down the operations on the data as a well-defined algebraic structure (a monoid). The idea has strong mathematical grounding and explains why Twitter is so invested in Algebird and Summingbird. I will try to briefly explain what it is and why I am particularly fascinated by it.
In today's world of ever-growing data, the computations we need to run keep getting more and more complex, and they need to be run as efficiently as possible, in both cost and resources. Most of these computations are run as MapReduce jobs on a cluster of commodity-grade hardware.
The idea behind MapReduce is to move the computation to the data rather than aggregate the data to perform the computation.
The computation happens locally on the data (the map step) on all the nodes at the same time, hence the parallelism; the mapped results can then be aggregated and reduced into a single representative value. A toy sketch of this idea follows.
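As a toy illustration of my own (in Python rather than Algebird): integer addition forms a monoid, since it is associative and has an identity (0), so a sum can be reduced on each chunk of data independently and the partial results combined in any grouping without changing the answer.

from functools import reduce

add = lambda a, b: a + b  # the monoid operation
identity = 0              # the monoid identity

data = list(range(1, 101))
# Pretend each chunk lives on a different node.
chunks = [data[i:i + 25] for i in range(0, len(data), 25)]
# Map step: each node reduces its own chunk locally.
partials = [reduce(add, chunk, identity) for chunk in chunks]
# Reduce step: combine the partial results; associativity guarantees
# the same total regardless of how the combines are grouped.
total = reduce(add, partials, identity)
assert total == sum(data)  # 5050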
package abishekk92.executor

import abishekk92.Logging
import org.apache.mesos.{ Executor, ExecutorDriver }
import org.apache.mesos.Protos._

class Demo_frameworkExecutor extends Executor with Logging {
  // Skeleton only: the Executor callbacks (registered, launchTask,
  // killTask, shutdown, ...) still need to be implemented here.
}