JonyD / ML
Last active June 19, 2018 11:44
//Code excerpts
// Load the corpus and set up tokenization (helper methods defined elsewhere in the project).
final SentenceIterator it1 = loadData(fileName);
final TokenizerFactory tf = tokenizeData();
// Word2Vec hyperparameters: embedding dimensionality, RNG seed,
// context window size, and minimum token frequency for the vocabulary.
final int layerSize = 100;
final long seed = 5;
final int windowSize = 50;
final int minWordFrequency = 2;
final Word2Vec vec = trainModel(it1, tf, layerSize, seed, windowSize, minWordFrequency);
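The excerpt above configures a DL4J Word2Vec model. As a hedged illustration in plain Java (no DL4J dependency; the class, method names, and tiny corpus here are hypothetical), the two vocabulary-shaping hyperparameters can be demonstrated directly: `minWordFrequency` drops rare tokens from the vocabulary, and `windowSize` bounds how many neighbouring tokens count as context for each target word.

```java
import java.util.*;

public class Word2VecParamsDemo {
    // Count token frequencies and keep only tokens seen at least minWordFrequency
    // times, mirroring Word2Vec's minimum-frequency vocabulary filter.
    static Set<String> buildVocab(List<String> tokens, int minWordFrequency) {
        Map<String, Integer> counts = new HashMap<>();
        for (String t : tokens) counts.merge(t, 1, Integer::sum);
        Set<String> vocab = new TreeSet<>();
        for (Map.Entry<String, Integer> e : counts.entrySet())
            if (e.getValue() >= minWordFrequency) vocab.add(e.getKey());
        return vocab;
    }

    // Collect (target, context) pairs within windowSize positions of each target:
    // the raw training pairs a skip-gram model learns from.
    static List<String[]> contextPairs(List<String> tokens, int windowSize) {
        List<String[]> pairs = new ArrayList<>();
        for (int i = 0; i < tokens.size(); i++)
            for (int j = Math.max(0, i - windowSize);
                 j <= Math.min(tokens.size() - 1, i + windowSize); j++)
                if (j != i) pairs.add(new String[]{tokens.get(i), tokens.get(j)});
        return pairs;
    }

    public static void main(String[] args) {
        List<String> tokens = Arrays.asList("this", "is", "a", "test", "this", "is", "fun");
        System.out.println(buildVocab(tokens, 2));          // -> [is, this]
        System.out.println(contextPairs(tokens, 1).size()); // -> 12 pairs from a +/-1 window
    }
}
```

A larger `windowSize` yields more (and more loosely related) context pairs per target word, while a higher `minWordFrequency` shrinks the vocabulary to better-attested tokens.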
JonyD / test_file2.txt
Created October 19, 2016 13:48
test2
this is a test 2