
@cshjin
cshjin / seq.py
Created July 26, 2023 17:41
OEIS - A360497
# OEIS sequence id A360497: https://oeis.org/A360497
#
# Author: Hongwei Jin <jinh@anl.gov>
# Date: 2023-07-26
from sympy import isprime
import matplotlib.pyplot as plt
import networkx as nx
@cshjin
cshjin / README
Last active September 7, 2020 17:50
Dataset
# Dataset
@cshjin
cshjin / defaultSettings.yaml
Created February 14, 2020 00:17
My latexindent defaultSettings
# defaultSettings.yaml for latexindent.pl, version 3.7.1, 2019-09-07
# a script that aims to
# beautify .tex, .sty, .cls files
#
# (or latexindent.exe if you're on Windows)
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
@cshjin
cshjin / __init__.py
Last active November 20, 2019 17:08
Clean up the log info when running TF
# set tf logging level to suppress info/warning messages
try:
    # import tf 2.X
    import tensorflow.compat.v2 as tf
except Exception:
    # import tf 1.X
    import tensorflow as tf

try:
    # TF 1.x logging API
    tf.logging.set_verbosity(tf.logging.ERROR)
except AttributeError:
    # under compat.v2 the 1.x logging API lives in compat.v1
    tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.ERROR)
print('Using TF {}'.format(tf.__version__))
@cshjin
cshjin / readme.md
Created November 15, 2019 01:12
Collection of Coding Practice

Collection of Coding Practice

This is a collection of online programming practice sites. Several websites provide an online judge (OJ), some provide problem lists, and others provide only result judging (RJ).

List of online resources

| Website   | OJ | RJ | Practice | Challenge | Interviews | Languages                  |
| --------- | -- | -- | -------- | --------- | ---------- | -------------------------- |
| CareerCup |    |    |          |           |            | C, C++, Java, Python, etc. |
| CodeChef  |    |    |          |           |            | C, C++, Java, Python, etc. |
@cshjin
cshjin / kl.py
Last active July 11, 2019 17:12
KL divergence in Numpy
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import numpy as np
import unittest

def softmax(x):
    """ Given a vector, apply the softmax activation function

    Parameters
    ----------
    x : np.ndarray, input vector
    """
    e = np.exp(x - np.max(x))  # shift by the max for numerical stability
    return e / e.sum()
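
The preview stops before the KL computation the gist title refers to. A minimal NumPy sketch of the discrete KL divergence, where the name kl_divergence and the eps smoothing are illustrative assumptions rather than the gist's own code:

def kl_divergence(p, q, eps=1e-12):
    """ D_KL(p || q) = sum_i p_i * log(p_i / q_i) for discrete distributions """
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    return np.sum(p * np.log((p + eps) / (q + eps)))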
@cshjin
cshjin / minimax.py
Last active December 1, 2021 03:13
Solve a minimax problem in tensorflow
""" Solve a minimax problem
The problme is defined as
$$ \min_{x} \max_{y} f(x, y) = x^2(y+1) + y^2(x+1)$$
The first order gradient is
$$ \frac{\partial f}{\partial x} = 2x(y+1) + y^2 $$
$$ \frac{\partial f}{\partial y} = x^2 + 2y(x+1) $$
From the first order optimality condition, the alternatively solver
should solve the problem and converge to a stationary point.
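
The gist itself uses TensorFlow; as a plain illustration of the alternating scheme, here is a minimal Python sketch built from the gradients above. The initial point, step size, and iteration count are assumptions, and plain descent-ascent is not guaranteed to converge on nonconvex-nonconcave problems:

x, y = 0.5, -0.5   # initial point (assumed)
lr = 0.01          # step size (assumed)
for _ in range(1000):
    gx = 2 * x * (y + 1) + y ** 2   # gradient step: descend in x
    x -= lr * gx
    gy = x ** 2 + 2 * y * (x + 1)   # gradient step: ascend in y
    y += lr * gy
print(x, y, gx, gy)  # near-zero gradients indicate a stationary point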
@cshjin
cshjin / matfac.m
Last active September 27, 2019 02:23
Matrix factorization problem implemented in Tensorflow and Matlab (with yalmip)
yalmip('clear')
% target matrix to factorize
T = [0 1 0 1; 1 0 1 0; 0 1 0 1; 1 0 1 0];
% the two rank-1 factors as decision variables
x = sdpvar(4, 1);
y = sdpvar(4, 1);
% random initial guesses for the local solver
assign(x, randn(4, 1));
assign(y, randn(4, 1));
% unit-norm-ball constraints on the factors
const = [x'*x <= 1; y'*y <= 1];
% const = [];
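
The preview cuts off before the objective. Assuming the usual least-squares objective min ||T - x*y'||_F^2 with the same norm-ball constraints, a NumPy sketch via projected gradient descent (all names here are assumptions, not the gist's code):

import numpy as np

T = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]], dtype=float)
rng = np.random.default_rng(0)
x, y = rng.standard_normal(4), rng.standard_normal(4)
lr = 0.05
for _ in range(500):
    R = np.outer(x, y) - T   # residual of the rank-1 fit
    x -= lr * (R @ y)        # gradient of 0.5*||x*y' - T||_F^2 w.r.t. x
    y -= lr * (R.T @ x)      # ... and w.r.t. y
    x /= max(1.0, np.linalg.norm(x))  # project back onto x'x <= 1
    y /= max(1.0, np.linalg.norm(y))  # project back onto y'y <= 1
print(np.linalg.norm(np.outer(x, y) - T))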
@cshjin
cshjin / tf_autograd.py
Created January 16, 2019 01:13
Verify the TensorFlow autogradient against the analytic gradient w.r.t. a matrix
import numpy as np
import pickle
import matplotlib.pyplot as plt
import scipy.io as sio
import tensorflow as tf
tf.enable_eager_execution()
tmm = tf.matmul
def _test():
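
The preview stops at the test stub. A minimal sketch of the kind of check the description suggests, written against the TF 2.x GradientTape API (the gist targets TF 1.x eager mode), with f(X) = tr(X'AX) and its analytic gradient (A + A')X chosen here as the example:

A = tf.constant(np.random.rand(3, 3))
X = tf.Variable(np.random.rand(3, 3))
with tf.GradientTape() as tape:
    # f(X) = tr(X' A X)
    f = tf.linalg.trace(tmm(tmm(X, A, transpose_a=True), X))
g_auto = tape.gradient(f, X)
g_analytic = tmm(A + tf.transpose(A), X)
print(np.allclose(g_auto.numpy(), g_analytic.numpy()))  # expect True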
@cshjin
cshjin / power_method.py
Created September 22, 2018 23:28
Power method
import numpy as np
import sys

# reference: https://en.wikipedia.org/wiki/Power_iteration
def power_method(M):
    """ Return a dominant eigenvector of M via power iteration """
    b = np.random.rand(M.shape[1])
    diff = np.linalg.norm(b)
    while diff > 10 ** (-6):
        b_new = np.dot(M, b)
        b_norm = np.linalg.norm(b_new)
        b_new = b_new / b_norm            # normalize the iterate
        diff = np.linalg.norm(b_new - b)  # change between iterates
        b = b_new
    return b
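
A quick usage check for the routine as completed above; the test matrix and the Rayleigh-quotient estimate are illustrative:

M = np.array([[2.0, 1.0], [1.0, 3.0]])
v = power_method(M)
lam = v @ M @ v  # Rayleigh quotient; v is unit-norm on return
print(lam, np.linalg.eigvalsh(M).max())  # the two values should agree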