Himanshu Misra (hmisra)

hmisra / Agents.md
Last active October 17, 2025 03:47

How to Employ LLMs

What Do They Learn - and Why Do We Consider Them Intelligent?

To harness the intelligence encoded in LLMs effectively, it's important to understand their key emergent behaviors.
During the unsupervised learning phase, when large models are trained on internet-scale datasets for next-token prediction, they tend to learn the following patterns of intelligence:

1. Compressed World Models

LLMs seem to develop internal representations of how the world works — causal relationships, physical laws, social dynamics, logical structures.
They’re not just memorizing text patterns; they’re building models that let them reason about novel situations.
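
To make "next-token prediction" concrete, here is a minimal sketch of the training objective, assuming PyTorch; the random logits stand in for a real model's output:

import torch
import torch.nn.functional as F

# Toy next-token prediction: each position is trained to predict the token
# that follows it, so the targets are the input sequence shifted by one.
vocab_size = 10
tokens = torch.tensor([3, 1, 4, 1, 5, 9])          # a toy token sequence
logits = torch.randn(len(tokens) - 1, vocab_size)  # stand-in for model output

targets = tokens[1:]                     # shifted-by-one targets
loss = F.cross_entropy(logits, targets)  # the pretraining loss
print(f"next-token cross-entropy: {loss.item():.3f}")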

Key Learnings for UI/UX Designers

  1. Treat Prompts as Design Briefs or Constraints

     Some posts highlight a UI/UX Design & Product Ideation prompt as one of the “must-keep” prompt templates. Users often request result-driven prompts for tasks like user journeys, user flows, journey mapping, personas, and information architecture (IA).

     Takeaway: When using a language model for UI/UX work, treat your prompt like a design brief. Specify:
       • Which artifacts you need (flows, mappings, personas, IA)
       • Their structure and constraints (platform, user types)
       • Desired fidelity and style
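
As a concrete illustration, here is a minimal sketch of a prompt assembled like a design brief; the helper's name and field wording are hypothetical, not a canonical template:

# Hypothetical helper that assembles a design-brief-style prompt for any
# chat-style LLM; adjust the fields to the artifacts your brief needs.
def design_brief_prompt(artifact, platform, user_types, fidelity):
    return (
        f"Act as a senior UX designer. Produce: {artifact}.\n"
        f"Platform: {platform}. Primary user types: {', '.join(user_types)}.\n"
        f"Fidelity: {fidelity}.\n"
        "Structure the output as numbered steps with a one-line rationale each."
    )

print(design_brief_prompt(
    artifact="a user journey map for first-time checkout",
    platform="mobile web",
    user_types=["guest shopper", "returning customer"],
    fidelity="low-fidelity outline",
))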

Java Developer → AI Systems Builder

A Practical Learning Path with Comprehensive Resources (3–6 months, part-time)

Leveraging your existing software engineering skills to build production AI applications

Phase 0: Foundation Setup (Week 0)

Goal: Get your development environment ready

Essential Setup

  • Python: the Anaconda distribution (which bundles Python, pip, and the common scientific packages) or plain Python 3.10+ with pip
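
A quick sanity check for this step, assuming only the Python 3.10+ requirement above; the package loop is illustrative, since the list of required libraries is cut off here:

import sys

# Verify the interpreter meets the stated 3.10+ requirement.
assert sys.version_info >= (3, 10), f"Need Python 3.10+, found {sys.version}"

# Extend this tuple with whatever libraries your phase actually requires.
for pkg in ("pip",):
    try:
        __import__(pkg)
        print(f"{pkg}: OK")
    except ImportError:
        print(f"{pkg}: missing")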

Understanding Proofs - 2 - Noether's Theorem: Symmetry, Invariances and Deep Learning

January 7, 2025

Introduction: The Deep Connection Between Symmetry and Invariances

In the landscape of modern science, few principles have proven as universally powerful as Emmy Noether's Theorem. Published in 1918, this remarkable insight connects continuous symmetries of a physical system to its conserved quantities (conservation laws). Today, over a century later, we're discovering that these same principles govern not just the physical world, but also the behavior of artificial neural networks and deep learning systems.

This comprehensive exploration will bridge the gap between classical physics and cutting-edge artificial intelligence, revealing how Noether's insights illuminate both fields. We'll begin with fundamental mathematical principles, progress through classical applications, and ultimately reveal how these same concepts manifest in modern deep learning architectures.
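
Before diving in, it helps to have the classical statement in view. One standard formulation, for a point-particle Lagrangian $L(q, \dot{q}, t)$ whose action is invariant under the continuous transformation $q_i \to q_i + \epsilon\,\delta q_i$, is:

$$Q = \sum_i \frac{\partial L}{\partial \dot{q}_i}\,\delta q_i, \qquad \frac{dQ}{dt} = 0 \quad \text{on solutions of the Euler–Lagrange equations.}$$

The conserved charge $Q$ is the invariance associated with that symmetry; time-translation symmetry yields energy conservation, spatial translation yields momentum, and rotation yields angular momentum.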

# Resolution - Definitions
1. **The brain's inherent ability to analyze data at different resolutions is remarkable.**
2. **The problem of optimizing data compression** is the one best suited to attaining intelligence: a model that compresses data well must have captured its underlying structure.
3. Transitioning **from material to intelligence** allows us to put **life into inorganic material** (through the whole stack of computation → Neural Networks → LLMs → Intelligence).
4. **Predicting the future** is the name of the game; each brain does exactly that (the sketch below makes the prediction-compression link concrete).
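
A minimal sketch of the compression-prediction link referenced in items 2 and 4: under an optimal code, a symbol with predicted probability p costs -log2(p) bits, so a better predictor yields a shorter encoding. The toy text and unigram model below are illustrative:

import math
from collections import Counter

text = "to be or not to be"
counts = Counter(text)
total = len(text)

# Bits per character under a unigram character model vs. a uniform code:
# the model that predicts the text better also compresses it better.
model_bits = sum(-math.log2(counts[c] / total) for c in text) / total
uniform_bits = math.log2(len(counts))

print(f"unigram model: {model_bits:.2f} bits/char")
print(f"uniform code:  {uniform_bits:.2f} bits/char")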

Framework for Zero-Shot Learning with Large Language Models (LLMs)

This document outlines essential prompting techniques for leveraging zero-shot learning capabilities of Large Language Models (LLMs). These methods allow you to perform a wide variety of tasks without requiring prior task-specific training data.


1. Natural Language Descriptions

Overview

Describe the task or concept in clear, natural language so the model understands what to do.
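
A minimal sketch of this technique, assuming any chat-style LLM client; the helper below only assembles the prompt, and its name and wording are illustrative:

def zero_shot_prompt(task_description, input_text):
    # No examples are given: the natural-language description alone
    # tells the model what to do.
    return (
        f"Task: {task_description}\n\n"
        f"Input: {input_text}\n"
        "Output:"
    )

prompt = zero_shot_prompt(
    task_description=(
        "Classify the sentiment of the input as positive, negative, "
        "or neutral. Answer with one word."
    ),
    input_text="The battery lasts all day, but the screen scratches easily.",
)
print(prompt)  # send this string to the chat-completion API of your choice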

Understanding Proofs - Universal Approximation Theorem

December 18, 2024

What is the Universal Approximation Theorem?
The Universal Approximation Theorem (UAT) states that a feedforward neural network with a single hidden layer, using a suitable activation function, can approximate any continuous function defined on a compact domain (like the unit cube $[0,1]^n$) as closely as we wish, provided we have enough hidden units. In other words, such networks are universal approximators of continuous functions.

Formal Statement:
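
One standard formulation, following Cybenko (1989) for sigmoidal activations: for every $f \in C([0,1]^n)$ and every $\varepsilon > 0$, there exist $N \in \mathbb{N}$, $\alpha_i, b_i \in \mathbb{R}$, and $w_i \in \mathbb{R}^n$ such that

$$F(x) = \sum_{i=1}^{N} \alpha_i\, \sigma\!\left(w_i^{\top} x + b_i\right) \quad \text{satisfies} \quad \sup_{x \in [0,1]^n} |F(x) - f(x)| < \varepsilon,$$

where $\sigma$ is a fixed sigmoidal (more generally, any non-polynomial continuous) activation function.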

class Pacer:
    def __init__(self, campaign_impression_goal=0, refresh_rate=1, total_cycles=1):
        self.campaign_impression_goal = campaign_impression_goal
        self.refresh_rate = refresh_rate
        self.total_cycles = total_cycles
        # number of cycles that have elapsed so far
        self.cycles_so_far = 0
        # optimal rate: the per-cycle rate at which distribution would be uniform
        self.optimal_rate = self.campaign_impression_goal / self.total_cycles
        # flag to mark the first cycle, which outputs all the impressions
        # (attribute name assumed; the gist preview truncates here)
        self.first_cycle = True
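
An illustrative usage of the constructor above; the numbers are arbitrary:

# 1,000 impressions spread over 100 cycles gives a uniform target
# of 10 impressions per cycle.
pacer = Pacer(campaign_impression_goal=1000, refresh_rate=1, total_cycles=100)
print(pacer.optimal_rate)  # 10.0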
1. Written Memories: Understanding, Deriving and Extending the LSTM: http://r2rt.com/written-memories-understanding-deriving-and-extending-the-lstm.html
2. Recurrent Neural Networks Tutorial, Part 1 – Introduction to RNNs: http://www.wildml.com/2015/09/recurrent-neural-networks-tutorial-part-1-introduction-to-rnns/
3. The Unreasonable Effectiveness of Recurrent Neural Networks: http://karpathy.github.io/2015/05/21/rnn-effectiveness/
4. Understanding LSTM Networks: http://colah.github.io/posts/2015-08-Understanding-LSTMs/
5. Generative Models: https://blog.openai.com/generative-models/
hmisra / 0_reuse_code.js
Created August 7, 2016 09:01
Here are some things you can do with Gists in GistBox.
// Use Gists to store code you would like to remember later on
console.log(window); // log the "window" object to the console