Shreck Ye (ShreckYe)
@ShreckYe
ShreckYe / gist:66b53d1434c34e8e9f417e466a72cbef
Created June 22, 2023 10:03
Results of benchmarking some Exposed operations with containerized PostgreSQL (AMD Ryzen 3900X, 32 GB RAM)
benchmarks: com.huanshankeji.exposed.benchmark.EmptyBenchmark.empty
Warm-up 1: 2692396136.352 ops/s
Warm-up 2: 2732507882.089 ops/s
Iteration 1: 2058280496.859 ops/s
Iteration 2: 2072328782.928 ops/s
Average: 2065304639.893 ops/s
benchmarks: com.huanshankeji.exposed.benchmark.PreparedSqlGenerationBenchmark.arguments1M | statementEnum=SelectAll
@ShreckYe
ShreckYe / collaborative-filtering-equaivalent-to-3-layer-neural-network-proof.md
Created July 16, 2020 08:02
A proof showing that collaborative filtering is equivalent to a special type of 3-layer neural network for multi-class classification

In his Coursera course Machine Learning, Andrew Ng introduces a collaborative filtering algorithm whose optimization objective is

$$ \min_{x^{(1)}, \dots, x^{(n_m)}, \theta^{(1)}, \dots, \theta^{(n_u)}} \frac{1}{2} \sum_{(i, j): r(i, j) = 1} \left( (\theta^{(j)})^T x^{(i)} - y^{(i, j)} \right)^2 + \frac{\lambda}{2} \sum_{i = 1}^{n_m} \sum_{k = 1}^n (x_k^{(i)})^2 + \frac{\lambda}{2} \sum_{j = 1}^{n_u} \sum_{k = 1}^n (\theta_k^{(j)})^2 $$

and the vectorized expression is

$$ \min_{X, \Theta} \frac{1}{2} ||{R \circ (X \Theta^T - Y)}||^2 + \frac{\lambda}{2} ||X||^2 + \frac{\lambda}{2} ||\Theta||^2 $$

where $\circ$ denotes the element-wise (Hadamard) product, and $||A||^2$ denotes the sum of the squares of all elements of $A$ (the squared Frobenius norm).
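
As a quick numerical sanity check (not part of the original proof), here is a minimal NumPy sketch that evaluates both forms of the objective on random data and confirms they agree; all sizes, the seed, and the variable names are illustrative assumptions, not anything fixed by the course.

```python
import numpy as np

rng = np.random.default_rng(0)

n_m, n_u, n = 5, 4, 3                # movies, users, latent features (illustrative)
X = rng.normal(size=(n_m, n))        # movie feature vectors x^(i) as rows
Theta = rng.normal(size=(n_u, n))    # user parameter vectors theta^(j) as rows
Y = rng.normal(size=(n_m, n_u))      # ratings y^(i, j)
R = rng.integers(0, 2, size=(n_m, n_u)).astype(float)  # r(i, j) = 1 iff user j rated movie i
lam = 0.1

# Element-wise form: sum over rated (i, j) pairs plus the two regularization terms.
J_sum = 0.5 * sum(
    (Theta[j] @ X[i] - Y[i, j]) ** 2
    for i in range(n_m) for j in range(n_u) if R[i, j] == 1
) + (lam / 2) * np.sum(X ** 2) + (lam / 2) * np.sum(Theta ** 2)

# Vectorized form: 1/2 ||R . (X Theta^T - Y)||^2 + lambda/2 ||X||^2 + lambda/2 ||Theta||^2,
# where the elementwise mask R zeroes out the unrated entries.
J_vec = (
    0.5 * np.sum((R * (X @ Theta.T - Y)) ** 2)
    + (lam / 2) * np.sum(X ** 2)
    + (lam / 2) * np.sum(Theta ** 2)
)

assert np.isclose(J_sum, J_vec)
print(J_sum, J_vec)
```

Multiplying by the mask `R` before squaring is what restricts the vectorized sum to the pairs with $r(i, j) = 1$, which is why the two expressions coincide.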