Anirudh Kamath (kamath)
kamath / .vimrc
Last active March 22, 2023 05:42
vimrc
" NOTE: this is a .vimrc but a lot of plugins here are only compatible with neovim.
" To use this vimrc with neovim, use the init.vim below in ~/.config/nvim/init.vim
" NOTE: coc.nvim needs to be installed separately
" https://github.com/neoclide/coc.nvim
" Map space to vertical split + file explorer
nnoremap <Space> <C-w>v<C-w>l :Ex<CR>
nnoremap <s-Right> <C-w>l
nnoremap <s-Left> <C-w>h
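The init.vim the note above mentions was not captured in this preview; below is a minimal sketch of the conventional bridge file from the Neovim docs, assumed (not confirmed by the gist) to live at ~/.config/nvim/init.vim and to source this .vimrc:

" assumed init.vim: reuse the existing Vim config from Neovim
set runtimepath^=~/.vim runtimepath+=~/.vim/after
let &packpath = &runtimepath
source ~/.vimrc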
kamath / .vimrc
Created September 2, 2021 17:45
call plug#begin('~/.vim/plugged')
Plug 'vim-airline/vim-airline'         " status line
Plug 'vim-airline/vim-airline-themes'  " themes for the status line
Plug 'scrooloose/nerdtree'             " file tree explorer
call plug#end()
let g:airline_powerline_fonts = 1      " use powerline glyphs in airline
set rtp+=/usr/local/opt/fzf            " pick up the Homebrew fzf install
let g:NERDTreeDirArrowExpandable = '▸' " directory arrow in NERDTree
LOAD CSV WITH HEADERS FROM "file:///styles.csv" AS row
MERGE (sc:SubCategory {name: coalesce(row.subCategory, "unknown")})
MERGE (mc:MasterCategory {name: coalesce(row.masterCategory, "unknown")})
MERGE (g:Gender {name: coalesce(row.gender, "unknown")})
MERGE (y:Year {name: coalesce(row.year, "unknown")})
MERGE (t:ArticleType {name: coalesce(row.articleType, "unknown")})
MERGE (b:BaseColor {name: coalesce(row.baseColour, "unknown")})
MERGE (s:Season {name: coalesce(row.season, "unknown")})
MERGE (p:Product {name: row.productDisplayName, id: row.id})
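For context, here is a hedged sketch of running the statement above from Python via the official neo4j driver. The bolt URI, credentials, and the load_styles.cypher filename are placeholders, and LOAD CSV assumes styles.csv sits in the Neo4j instance's import directory:

from neo4j import GraphDatabase

query = open("load_styles.cypher").read()  # hypothetical: assumes the statement above is saved to this file

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))  # assumed endpoint and credentials
with driver.session() as session:
    session.run(query)
driver.close()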
kamath / problem3.py
Last active February 22, 2021 04:09
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # registers matplotlib's 3d projection
import seaborn as sns
sns.set()
import numpy as np
import matplotlib.pyplot as plt
y = np.array([2.85, 1.5, .49, 1.57, 1.9, 0.6, 0.38, 2.33, 1.65, 0.3])
x = np.arange(0, 20, 2)
b = np.ones(3) # 3 parameters, b0-b2
eta = .02 # learning rate
g = lambda x: np.array([1, np.sin(x), np.cos(x)])  # basis functions: yhat = b0 + b1*sin(x) + b2*cos(x)
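The preview cuts off before any fitting happens; here is a minimal sketch, assuming batch gradient descent was the intent, of the update loop for the model yhat = b . g(x). The iteration count is an assumption, not from the gist:

for _ in range(5000):  # assumed iteration budget
    # gradient of sum_i (y_i - b . g(x_i))^2 with respect to b
    grad = sum(-2 * (yi - b @ g(xi)) * g(xi) for xi, yi in zip(x, y))
    b = b - eta * grad / len(x)
print(b)  # fitted b0, b1, b2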
import numpy as np
import matplotlib.pyplot as plt
X = np.array([1.2, 3.2, 5.1, 3.5, 2.6]).reshape(-1, 1)
y = np.array([7.8, 1.2, 6.4, 2.6, 8.1])
ints = np.ones(shape=y.shape)[..., None]
X = np.concatenate((X, ints), 1)
# When l = 0, it's RSS, otherwise we can specify lambda via l term
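The snippet ends at that comment; here is a hedged sketch of the closed-form estimator the comment describes, where ridge() is an assumed helper name rather than the gist's:

def ridge(X, y, l=0.0):
    # b = (X^T X + l*I)^(-1) X^T y; l = 0 reduces to ordinary least squares (pure RSS)
    return np.linalg.solve(X.T @ X + l * np.eye(X.shape[1]), X.T @ y)

b_ols = ridge(X, y)           # l = 0: minimizes RSS alone
b_ridge = ridge(X, y, l=1.0)  # l > 0: penalizes large coefficients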
kamath / analyze.py
Created November 2, 2020 06:14
Homework #7 Explained
def analyze(filename, columns=[], precision=1):
""" Read a CSV file named filename. For each
listed (numeric) column, compute the min, max, and average value.
Generate a table where each row is one of the columns listed
and the columns correspond to the min, average, and max value.
The average grade should be rounded to the number of decimal
places specified by the precision parameter. """
file = open(filename, 'r') # You could've also said "with open(filename, 'r') as file:" and indented the rest of your block
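The preview stops right after the open() call; one hedged way the rest of analyze() could go, using the standard csv module (the exact table formatting is an assumption, not the gist's actual output):

import csv

def analyze(filename, columns=[], precision=1):
    with open(filename, 'r') as file:
        rows = list(csv.DictReader(file))
    print(f"{'column':<15}{'min':>10}{'avg':>10}{'max':>10}")
    for col in columns:
        values = [float(row[col]) for row in rows]
        avg = round(sum(values) / len(values), precision)
        print(f"{col:<15}{min(values):>10}{avg:>10}{max(values):>10}")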
kamath / hashset
Last active October 1, 2020 00:21
from typing import List

class Solution:
    def firstMissingPositive(self, nums: List[int]) -> int:
        if nums == []:
            return 1
        maxval = max(nums)
        if maxval <= 0:
            return 1
        nums = set(nums)  # hash set gives O(1) membership tests
        for i in range(1, maxval):
            if i not in nums:
                return i
        return maxval + 1  # 1..maxval-1 all present, so maxval + 1 is missing
#
# Binary trees are already defined with this interface:
# class Tree(object):
#   def __init__(self, x):
#     self.value = x
#     self.left = None
#     self.right = None
from collections import Counter

def mostFrequentSum(root):
    if root is None:
        return []
    counts = Counter()  # frequency of each subtree sum
    def subtreeSum(node):
        if node is None:
            return 0
        total = node.value + subtreeSum(node.left) + subtreeSum(node.right)
        counts[total] += 1
        return total
    subtreeSum(root)
    return sorted(s for s in counts if counts[s] == max(counts.values()))

Serverless Graph DB

In an increasingly serverless tech industry, graph databases like Neo4j still require you to provision infrastructure. When you start a Neo4j instance, you pay for uptime whether or not the server is actually used, and while you expect it to scale with demand, you have no guarantee that it will.

AWS Glue and Athena - serverless ETLs and databases

AWS Glue has "crawlers" that infer a schema from JSON, text, and CSV files and register it as tables that Amazon Athena, a serverless query engine, can read directly from S3 (plain cloud object storage). A Glue ETL job typically writes its output as Parquet files in S3, which Athena then queries in place as tables. Glue can also run Spark jobs that "relationalize" a crawler's output, meaning you can flatten nested, semi-structured data into structured tables that can be queried with SQL in Athena. Because the output is Parquet, it carries the strong data typing that typical CSV and JSON files don't enforce, and as a compressed columnar format it also takes up far less space than the raw files.
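As a concrete, hedged example of the workflow above: querying a crawled table from Python with boto3. The table, database, and results bucket below are placeholders, not real resources from this post:

import boto3, time

athena = boto3.client("athena")
started = athena.start_query_execution(
    QueryString="SELECT masterCategory, COUNT(*) AS n FROM styles GROUP BY masterCategory",  # assumed table/columns
    QueryExecutionContext={"Database": "fashion_db"},                                        # assumed Glue database
    ResultConfiguration={"OutputLocation": "s3://my-results-bucket/athena/"},                # assumed results bucket
)
query_id = started["QueryExecutionId"]
# Athena runs queries asynchronously: poll until the query finishes, then fetch rows
while True:
    status = athena.get_query_execution(QueryExecutionId=query_id)["QueryExecution"]["Status"]["State"]
    if status in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)
rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]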