Himanshu Misra (hmisra)
hmisra / dynamicBarCharts.html (last active August 29, 2015)
Dynamic Bar Chart with D3
<html>
<head>
  <title>Demo for Dynamic Bar Charts</title>
  <script src="https://cdnjs.cloudflare.com/ajax/libs/d3/3.5.5/d3.min.js" charset="utf-8"></script>
  <script>
    function loadChart() {
      var profit = [40, 50, 50, 45, 30, 35, 50, 60, 55, 30, 40, 50];
      var h = 100;
hmisra / RecursiveBayesianFilter (created July 10, 2015)
Recursive Bayesian Filter
library(ggplot2)
library(mvtnorm)
# Plot the Gaussian prior N(3.5, 0.5) over a grid of candidate values of A
plot(seq(1, 6, 0.01), dnorm(seq(1, 6, 0.01), 3.5, 0.5),
     xlab = "A", ylab = "P(A)", main = "Prior Distribution", type = "l")
mean <- c(4, 4)
sigma <- matrix(c(1, 0, 0, 1), nrow = 2, ncol = 2)   # identity covariance
dist <- matrix(rep(1, 36), nrow = 6, ncol = 6)
for (j in c(1:6))
{
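The R snippet above is truncated before the update loop itself. As a hedged sketch (the measurement values and noise level below are made-up illustrations, not the gist's data), a recursive Bayesian filter with a Gaussian prior and Gaussian likelihood can be updated in closed form, since the posterior is again Gaussian:

```python
def gaussian_update(prior_mean, prior_var, measurement, meas_var):
    """One recursive Bayesian update for a Gaussian prior and a
    Gaussian measurement likelihood (conjugate pair, so the
    posterior is Gaussian too)."""
    k = prior_var / (prior_var + meas_var)      # gain: how much to trust the measurement
    post_mean = prior_mean + k * (measurement - prior_mean)
    post_var = (1.0 - k) * prior_var
    return post_mean, post_var

# Start from the gist's prior N(3.5, 0.5^2) and fold in measurements one by one.
mean, var = 3.5, 0.5 ** 2
for z in [4.1, 3.9, 4.0]:                       # illustrative measurements
    mean, var = gaussian_update(mean, var, z, meas_var=0.25)
```

Each pass shifts the mean toward the new measurement and shrinks the variance, which is the "recursive" part: the posterior of one step becomes the prior of the next.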
hmisra / 0_reuse_code.js (created August 7, 2016)
Here are some things you can do with Gists in GistBox.
// Use Gists to store code you would like to remember later on
console.log(window); // log the "window" object to the console
1. Written Memories: Understanding, Deriving and Extending the LSTM : http://r2rt.com/written-memories-understanding-deriving-and-extending-the-lstm.html
2. Recurrent Neural Networks Tutorial, Part 1 – Introduction to RNNs : http://www.wildml.com/2015/09/recurrent-neural-networks-tutorial-part-1-introduction-to-rnns/
3. The Unreasonable Effectiveness of Recurrent Neural Networks: http://karpathy.github.io/2015/05/21/rnn-effectiveness/
4. Understanding LSTM: http://colah.github.io/posts/2015-08-Understanding-LSTMs/
5. Generative models : https://blog.openai.com/generative-models/
class Pacer:
    def __init__(self, campaign_impression_goal=0, refresh_rate=1, total_cycles=1):
        self.campaign_impression_goal = campaign_impression_goal
        self.refresh_rate = refresh_rate
        self.total_cycles = total_cycles
        # number of cycles that have elapsed so far
        self.cycles_so_far = 0
        # optimal rate is the rate at which the distribution would be uniform
        self.optimal_rate = 1.0 * self.campaign_impression_goal / self.total_cycles
        # flag to mark the first cycle to output all the impressions
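The snippet breaks off before the pacing logic itself. As a hedged sketch of what such a class might do (the `next_allocation` method and its uniform-allocation strategy are assumptions, not the original implementation), a minimal pacer that spreads an impression goal evenly across a fixed number of cycles could look like:

```python
class UniformPacer:
    """Minimal sketch: spread an impression goal evenly over a fixed
    number of cycles, carrying rounding leftovers into later cycles."""

    def __init__(self, campaign_impression_goal, total_cycles):
        self.campaign_impression_goal = campaign_impression_goal
        self.total_cycles = total_cycles
        self.served_so_far = 0
        self.cycles_so_far = 0

    def next_allocation(self):
        """Impressions to serve this cycle; returns 0 once all cycles ran."""
        if self.cycles_so_far >= self.total_cycles:
            return 0
        remaining_cycles = self.total_cycles - self.cycles_so_far
        remaining_goal = self.campaign_impression_goal - self.served_so_far
        allocation = round(remaining_goal / remaining_cycles)
        self.served_so_far += allocation
        self.cycles_so_far += 1
        return allocation
```

Because the last cycle receives whatever remains of the goal, the per-cycle allocations always sum to the campaign goal exactly, even when the goal does not divide evenly by the cycle count.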

Understanding Proofs - Universal Approximation Theorem

December 18, 2024

What is the Universal Approximation Theorem?
The Universal Approximation Theorem (UAT) states that a feedforward neural network with a single hidden layer, using a suitable activation function, can approximate any continuous function defined on a compact domain (like the unit cube $[0,1]^n$) as closely as we wish, provided we have enough hidden units. In other words, such networks are universal approximators of continuous functions.

Formal Statement:
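The formal statement is cut off here; one standard formulation (Cybenko's 1989 density theorem for sigmoidal activations, supplied as context rather than as the article's own text) reads:

```latex
% Cybenko (1989): let \sigma be a continuous sigmoidal function. Then
% finite sums of the form
\[
F(x) \;=\; \sum_{j=1}^{N} \alpha_j\, \sigma\!\left(w_j^{\top} x + b_j\right)
\]
% are dense in C([0,1]^n): for every continuous f on [0,1]^n and every
% \varepsilon > 0 there exist N, weights w_j \in \mathbb{R}^n, and scalars
% \alpha_j, b_j \in \mathbb{R} such that
% \sup_{x \in [0,1]^n} |F(x) - f(x)| < \varepsilon.
```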

Framework for Zero-Shot Learning with Large Language Models (LLMs)

This document outlines essential prompting techniques for leveraging zero-shot learning capabilities of Large Language Models (LLMs). These methods allow you to perform a wide variety of tasks without requiring prior task-specific training data.


1. Natural Language Descriptions

Overview

Describe the task or concept in clear, natural language so the model understands what to do.
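As an illustration of the technique (the helper function, the task wording, and the review text are hypothetical examples, not a specific model API), a zero-shot prompt is simply a plain-language task description followed by the input, with no worked examples:

```python
def build_zero_shot_prompt(task_description, input_text):
    """Assemble a zero-shot prompt: a natural-language description of
    the task, the input to act on, and a cue for the model's answer.
    No few-shot examples are included."""
    return (
        f"{task_description}\n\n"
        f"Input: {input_text}\n"
        f"Answer:"
    )

prompt = build_zero_shot_prompt(
    "Classify the sentiment of the following review as positive or negative.",
    "The battery died after two days and support never replied.",
)
print(prompt)
```

The resulting string can be sent to any instruction-tuned LLM; the model relies entirely on the clarity of the task description, which is why precise, unambiguous wording matters most in the zero-shot setting.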

# Resolution - Definitions
1. **The brain's inherent ability to analyze data at different resolutions is remarkable.**
2. **The problem of optimizing data compression** is arguably the framing best suited to attaining intelligence.
3. Transitioning **from material to intelligence** lets us put **life into inorganic material**
(through the whole architecture of computation → Neural Networks → LLMs → Intelligence).
4. **Predicting the future** is the name of the game; each brain does exactly that.

Understanding Proofs - 2 - Noether's Theorem: Symmetry, Invariances and Deep Learning

Jan 7, 2025

Introduction: The Deep Connection Between Symmetry and Invariances

In the landscape of modern science, few principles have proven as universally powerful as Emmy Noether's theorem. Published in 1918, this remarkable insight connects continuous symmetries of physical systems to conservation laws (invariances). Today, over a century later, we're discovering that the same principles govern not just the physical world but also the behavior of artificial neural networks and deep learning systems.

This comprehensive exploration will bridge the gap between classical physics and cutting-edge artificial intelligence, revealing how Noether's insights illuminate both fields. We'll begin with fundamental mathematical principles, progress through classical applications, and ultimately reveal how these same concepts manifest in modern deep learning architectures.
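The connection the introduction describes can be stated compactly; the following is the standard textbook form of the classical theorem, supplied for orientation rather than taken from the article itself:

```latex
% If the Lagrangian L(q, \dot{q}, t) is unchanged under a continuous
% one-parameter transformation q \mapsto q + \epsilon\,\delta q, then
% the Noether charge
\[
Q \;=\; \frac{\partial L}{\partial \dot{q}}\,\delta q
\qquad\text{satisfies}\qquad
\frac{dQ}{dt} = 0
\]
% along any trajectory obeying the Euler--Lagrange equations.
% Time-translation symmetry yields energy conservation; spatial
% translation yields momentum conservation.
```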

Java Developer → AI Systems Builder

A Practical Learning Path with Comprehensive Resources (3–6 months, part-time)

Leveraging your existing software engineering skills to build production AI applications

Phase 0: Foundation Setup (Week 0)

Goal: Get your development environment ready

Essential Setup

  • Python: Anaconda distribution (bundles Python, pip, and the common scientific packages) or Python 3.10+ with pip