
Bump.md

bump2: an optimization problem from A. J. Keane, https://www.southampton.ac.uk/~ajk/bump.html (a sketch of the objective appears after the list below).

Keywords: optimization, test case, random search, python

Problems in mathematical optimization have several different aspects:

  • contest: whose program reaches minimum __ in CPU time __ and time-to-understand __
  • learning how to explore particular function terrains
  • learning how to constrain.
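For reference, a minimal sketch of the objective and constraints, assuming the standard form of Keane's bump function (a maximization problem), with pure random search as the simplest baseline:

```python
import numpy as np

def bump(x):
    """ Keane's bump objective (to be maximized), standard form """
    x = np.asarray(x, dtype=float)
    c = np.cos(x)
    num = np.abs(np.sum(c**4) - 2 * np.prod(c**2))
    den = np.sqrt(np.sum(np.arange(1, len(x) + 1) * x**2))
    return num / den

def feasible(x):
    """ constraints: prod x_i > 0.75, sum x_i < 15 n / 2, 0 < x_i < 10 """
    x = np.asarray(x, dtype=float)
    return (np.prod(x) > 0.75 and np.sum(x) < 15 * len(x) / 2
            and np.all((0 < x) & (x < 10)))

rng = np.random.default_rng(0)
n = 10
best, fbest = None, -np.inf
for _ in range(100_000):  # pure random search, the simplest baseline
    x = rng.uniform(0, 10, n)
    if not feasible(x):
        continue
    f = bump(x)
    if f > fbest:
        best, fbest = x, f
print(fbest, best)
```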
Minimize-trustconstr-noconstraints-bounds.md

scipy.optimize.minimize( method="trust-constr" ) works well with no constraints and no bounds -- so well that it would be nice if bounds worked too, which would make trust-constr the method of choice. But constraints=() together with bounds raises a ValueError in tr_interior_point.py BarrierSubproblem: n_eq is 0, which leads to arrays of shape (0, n_ineq). (I added a couple of debug prints, but a fix is way over my head.)

A test case with logs is under https://gist.github.com/denis-bz .
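A minimal sketch of the kind of call that triggers it (whether it errors depends on the scipy version; the Rosenbrock function here just stands in for any smooth objective):

```python
import numpy as np
from scipy.optimize import minimize, Bounds

def rosen(x):  # classic Rosenbrock test function
    return np.sum(100 * (x[1:] - x[:-1]**2)**2 + (1 - x[:-1])**2)

x0 = np.zeros(5)
# bounds but no constraints -- the combination described above
res = minimize(rosen, x0, method="trust-constr",
               bounds=Bounds(-2 * np.ones(5), 2 * np.ones(5)),
               constraints=())
print(res.x)
```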

Gish.md

Gish: sharing files on gist.github by name

Keywords: file sharing, gist, github, CLI, python, remote file server

gish is a command-line program to copy files between local computers and gist.github.com, using file names or gist ids. An example:

Alice:  gish put @Alice AA.md aa.py bb.py  # upload a gist with 3 files
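Under the hood an upload like this amounts to one call to GitHub's gist REST API. A minimal sketch of that step with requests (the endpoint and JSON shape are GitHub's; the function name and token handling are illustrative, not gish's actual code):

```python
import requests

def put_gist(token, description, paths, public=False):
    """ upload local files as one new gist via the GitHub REST API """
    files = {p: {"content": open(p).read()} for p in paths}
    r = requests.post("https://api.github.com/gists",
                      headers={"Authorization": "token %s" % token},
                      json={"description": description, "public": public,
                            "files": files})
    r.raise_for_status()
    return r.json()["html_url"]  # url of the new gist
```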
noisyUSV.py
#!/usr/bin/env python2
""" A = noisyUSV( n, d, r, noise ): U S V + noise, n x d, rank r """
from __future__ import division
import numpy as np
from numpy.linalg import norm
from etc import znumpyutil as nu
__version__ = "2018-05-15 May denis-bz-py t-online de" # scale noise * S.max
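The listing shows only the file header. A minimal sketch of what the function presumably computes, assuming random orthonormal U and V and noise scaled by S.max() as the version line suggests (the real file uses a local util, znumpyutil, not shown here):

```python
import numpy as np

def noisyUSV(n, d, r, noise, rng=np.random.default_rng(0)):
    """ A = U S V + noise, n x d, rank r before the noise is added """
    U, _ = np.linalg.qr(rng.standard_normal((n, r)))  # n x r orthonormal
    V, _ = np.linalg.qr(rng.standard_normal((d, r)))  # d x r orthonormal
    S = np.sort(rng.random(r))[::-1]                  # decreasing singular values
    A = (U * S) @ V.T                                 # exactly rank r
    return A + noise * S.max() * rng.standard_normal((n, d))
```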
Munich-NO2.md

NO2 in Munich 2016: high traffic => high NO2

[plot: 2016-mu5-hours-junedec]

This plot shows NO2 levels over the day in Munich in June and December 2016. München-Landshuter-Allee, on the left, has about the highest NO2 level in all of Germany, and a lot of traffic: 120,000 to 150,000 cars and light trucks per day.
Surprise: high traffic => high NO2.

0-EPA-air-quality.md
0-Gradient-descent-with-0-crossing.md

Gradient descent with 2-point line fit where gradients cross 0

The Gradient_descent method iterates

xnew = xold - rate(t) * grad(xold)

GD is a workhorse in machine learning because it's so simple, uses only gradients (not function values), and scales to very large x.

rate(t) is a step-size or "learning rate" (aka η, Greek eta).
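My reading of the title's "2-point line fit": when the gradient changes sign between successive iterates, fit a line through the two (x, grad) points and jump to where that line crosses 0, i.e. a secant step. A minimal 1d sketch of the idea (not the gist's actual code):

```python
def gd_zero_crossing(grad, x, rate=0.1, iters=50):
    """ 1d gradient descent; where grad changes sign between steps,
        jump to the zero of the line through the two (x, grad) points """
    g = grad(x)
    for _ in range(iters):
        xnew = x - rate * g
        gnew = grad(xnew)
        if g * gnew < 0:  # gradient crossed 0 between x and xnew
            xnew = x - g * (xnew - x) / (gnew - g)  # secant step
            gnew = grad(xnew)
        x, g = xnew, gnew
    return x

# example: minimize (x - 3)**2, gradient 2 (x - 3); big rate to force overshoot
print(gd_zero_crossing(lambda x: 2 * (x - 3), x=0.0, rate=0.6))  # -> 3.0
```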

half-brokenstick.py
#!/usr/bin/env python2
""" How many of the longest pieces of a randomly-broken stick add up to half its length ? """
# http://demonstrations.wolfram.com/BrokenStickRule
from __future__ import division
import sys
import numpy as np
__version__ = "2014-10-26 oct denis-bz-py t-online de"
np.set_printoptions( 1, threshold=100, edgeitems=5, suppress=True )
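Only the header is shown. A minimal sketch of the simulation the docstring describes: break a stick at n - 1 uniform random points, sort the pieces longest first, and count how many are needed to reach half the length:

```python
import numpy as np

def nhalf(n, ntrials=10_000, rng=np.random.default_rng(0)):
    """ average number of longest pieces needed to reach half the stick """
    counts = np.empty(ntrials)
    for t in range(ntrials):
        cuts = np.sort(rng.random(n - 1))
        pieces = np.diff(np.concatenate(([0.0], cuts, [1.0])))
        longest_first = np.sort(pieces)[::-1]
        counts[t] = np.searchsorted(np.cumsum(longest_first), 0.5) + 1
    return counts.mean()

print(nhalf(10))
```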
exp-L1-L2.py
#!/usr/bin/env python2
""" min_x av |exp - x| at 0.7 -- W Least_absolute_deviations, L1
min_x rms( exp - x ) at 1 -- least squares, L2
are both very flat
which might explain why L1 minimization with IRLS doesn't work very well.
"""
# goo "L1 minimization" irls
# different L1 min problems: sparsity, outliers
from __future__ import division
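A minimal Monte Carlo sketch of the comparison in the docstring: for Exp(1) samples, av |e - x| bottoms out at the median ln 2 ≈ 0.7, rms(e - x) at the mean 1, and both curves are indeed very flat near their minima:

```python
import numpy as np

rng = np.random.default_rng(0)
e = rng.exponential(size=100_000)  # Exp(1) samples
for x in np.linspace(0.4, 1.6, 13):
    l1 = np.mean(np.abs(e - x))        # minimized at the median, ln 2 ~ 0.7
    l2 = np.sqrt(np.mean((e - x)**2))  # minimized at the mean, 1
    print("x %.1f  av|e-x| %.4f  rms(e-x) %.4f" % (x, l1, l2))
```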
0-Adaptive-soft-threshold-smooth-abs.md

Adaptive soft threshold and smooth abs: scale by average |X|

The soft threshold and smooth absolute value functions

[plot: adasoft]

are widely used in optimization and signal processing. (Soft thresholding squeezes small values to 0; if "noise" is small and "signal" large, this improves the signal-to-noise ratio. Smooth abs, also called
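A minimal sketch of both functions with the adaptive scaling in the title (the definitions are standard; scaling the threshold by average |X| is my reading of "adaptive"):

```python
import numpy as np

def soft_threshold(X, t):
    """ shrink toward 0: sign(x) * max(|x| - t, 0) """
    return np.sign(X) * np.maximum(np.abs(X) - t, 0)

def smooth_abs(X, eps):
    """ smooth approximation of |x|: sqrt(x^2 + eps^2) - eps """
    return np.sqrt(X**2 + eps**2) - eps

def adasoft(X, lam=0.5):
    """ adaptive soft threshold: scale the threshold by average |X| """
    return soft_threshold(X, lam * np.mean(np.abs(X)))
```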