@harveyslash
harveyslash / Proposal.md
Last active August 16, 2018 10:54
Proposal

Deep Neural Networks for Large-Scale User Behaviour Analysis

Deep neural networks have shown superior performance in learning complex, abstract representations of data. These representations often correlate with our own understanding of the world. For example, a convolutional neural network learns very high-level concepts such as cars and animals. It is even more interesting that it does so in a simple-to-complex hierarchical order: the first layer learns simple concepts such as edges, these edges are then used as features to learn shapes, and so on.

It must be noted that this immense power of deep neural networks extends beyond images. Deep learning is currently the state of the art in almost all natural language processing tasks, such as sentiment analysis.

Using Deep Learning for Collaborative Filtering
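As a sketch of the kind of model this heading points at, here is a minimal collaborative-filtering recommender based on latent-factor (matrix-factorization) learning, trained by plain SGD. The toy ratings, factor dimension, and hyperparameters below are illustrative assumptions, not part of the proposal; a deep-learning version would replace the dot product with a neural network over learned embeddings.

```python
import random

# Toy (user, item, rating) triples -- illustrative data only.
ratings = [(0, 0, 5.0), (0, 1, 3.0), (1, 0, 4.0),
           (1, 2, 1.0), (2, 1, 2.0), (2, 2, 5.0)]
n_users, n_items, k = 3, 3, 2          # k = latent-factor dimension (assumed)

random.seed(0)
U = [[random.gauss(0, 0.1) for _ in range(k)] for _ in range(n_users)]
V = [[random.gauss(0, 0.1) for _ in range(k)] for _ in range(n_items)]

def predict(u, i):
    """Predicted rating = dot product of user and item factors."""
    return sum(U[u][f] * V[i][f] for f in range(k))

lr, reg = 0.05, 0.01                    # learning rate and L2 penalty (assumed)
for epoch in range(200):                # plain SGD on the squared error
    for u, i, r in ratings:
        err = r - predict(u, i)
        for f in range(k):
            uf, vf = U[u][f], V[i][f]   # cache before updating either factor
            U[u][f] += lr * (err * vf - reg * uf)
            V[i][f] += lr * (err * uf - reg * vf)

mse = sum((r - predict(u, i)) ** 2 for u, i, r in ratings) / len(ratings)
```

After training, the model reconstructs the observed ratings closely and can score unobserved user/item pairs with the same `predict` call.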

Beatest <--> Open Edx Integration

Open Edx provides key features that can be used as a starting point for Beatest MOOC E-Learning.

Key Features to be reused with (almost) no modification:

1. CMS (Content Management System)

Instructors can be registered manually through the Django admin panel. They then have access to create courses, which will be published by Beatest administrators.

@harveyslash
harveyslash / Beatest-edx.md
Last active July 5, 2018 13:06
Welcome file
2. Core LMS (Learning Management System)

Core aspects include Assignments, Videos, Discussions, and Quizzes. They will be used out of the box, as is.

Important Features to be modified with significant changes [1]:

1. Login System

The current login system will be completely redesigned to work with SSO.

Impersonet- Developing a unique writing style by learning from multiple authors [NLP]

By reading vastly different styles of text (such as fiction and philosophy), it may be possible to develop a writing style that takes inspiration from both, resulting in a completely unique style. This is useful for artificial data generation.

Implementation so far/ideas

  • Character level neural language model
  • regularized by GANs
  • learning by just raw sampling yields poor results, so a custom training algorithm is needed (I have a promising one, but it is not yet implemented and tested)
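The core idea above — one character-level model trained on two stylistically different corpora so that samples blend both — can be sketched in miniature with a plain Markov chain. The two corpora and the context length are placeholders; the actual project proposes a neural language model regularized by GANs, which this toy model does not attempt.

```python
import random
from collections import defaultdict

# Two tiny "style" corpora -- placeholders for, e.g., fiction and philosophy.
style_a = "the ship sailed into the storm and the crew held fast "
style_b = "the nature of being is the question that thought must ask "

order = 3                                # context length in characters (assumed)
counts = defaultdict(lambda: defaultdict(int))

# Train a single character-level model on BOTH corpora, so samples mix styles.
for text in (style_a, style_b):
    for i in range(len(text) - order):
        ctx, nxt = text[i:i + order], text[i + order]
        counts[ctx][nxt] += 1

def sample(seed, length, rng):
    """Generate text by repeatedly sampling the next character."""
    out = seed
    for _ in range(length):
        choices = counts.get(out[-order:])
        if not choices:                  # unseen context: stop early
            break
        chars = list(choices)
        weights = [choices[c] for c in chars]
        out += rng.choices(chars, weights=weights)[0]
    return out

rng = random.Random(0)
generated = sample("the", 60, rng)
```

Because both corpora feed the same count table, contexts shared between them (like "the") can continue into either style mid-sentence, which is the raw-sampling behaviour the bullet above notes is too weak on its own.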
\documentclass[12pt]{article}
\pagestyle{empty}
\usepackage{tikz}
\usepackage{comment}
\usepackage{amsmath}
\usepackage{booktabs}
\begin{document}
Text generation models typically learn the distribution of a text and
use this learned information to generate text themselves. By tuning
various hyperparameters, it is possible to fine-tune the tradeoff
between the deviation of the generated text from the distribution and
the robustness of the text.
I present a way to extend this ability of generative models to learn
multiple distributions in order to generate a style of text that is
unique from both distributions.
I also discuss ways to regularize the presented model using Generative
Adversarial Networks.
\end{document}
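The deviation/robustness tradeoff the abstract mentions is most commonly controlled by a sampling temperature applied to the model's output logits. A minimal sketch, with illustrative logits and temperature values (none of these numbers come from the proposal):

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert logits to probabilities; low T sharpens, high T flattens."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)                       # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]                  # illustrative next-character scores

sharp = softmax_with_temperature(logits, 0.5)   # stays close to the mode: robust
flat = softmax_with_temperature(logits, 2.0)    # more deviation / diversity
```

Lowering the temperature concentrates probability mass on the most likely continuation (robust but repetitive text); raising it spreads mass across alternatives (more deviation from the learned distribution).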
# Makefile for CCS Experiment 5 - merge sort
#
#
# Location of the processing programs
#
RASM = /home/fac/wrc/bin/rasm
RLINK = /home/fac/wrc/bin/rlink
#