
denis-bz

denis-bz / 0-lp_np.md
Last active Jul 17, 2019
lp_np: NumPy <-> GNU Linear Programming Kit GLPK, 2019-07-17 16:06z

lp_np: numpy <--> GLPK, the GNU Linear Programming Kit

Keywords: linear programming, LP, python, numpy, scipy, GLPK, GMPL, translate

lp_np connects the numpy-scipy-python world to the GNU Linear Programming Kit GLPK, to have the advantages of both:

  • numpy and scipy help construct large sparse LP models, and connect to dozens of specialized tools
  • GLPK reads and writes LP models in many formats.
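lp_np itself is not shown here, but the kind of numpy-to-GLPK bridge it describes can be sketched. The helper below (name and signature are my illustration, not lp_np's API) writes a small dense LP in CPLEX LP text format, one of the formats glpsol reads with `--lp`:

```python
import numpy as np

def lp_to_cplex_text(c, A, b):
    """Write min c.x  s.t.  A x <= b, x >= 0  in CPLEX LP text format.
    A minimal sketch: dense A, nonnegative coefficients only."""
    lines = ["Minimize",
             " obj: " + " + ".join(f"{c[j]:g} x{j}" for j in range(len(c))),
             "Subject To"]
    for i, (row, bi) in enumerate(zip(A, b)):
        terms = " + ".join(f"{row[j]:g} x{j}" for j in range(len(row))
                           if row[j] != 0)
        lines.append(f" r{i}: {terms} <= {bi:g}")
    lines.append("End")
    return "\n".join(lines) + "\n"

c = np.array([2.0, 3.0])
A = np.array([[1.0, 1.0], [1.0, 0.0]])
b = np.array([4.0, 2.0])
print(lp_to_cplex_text(c, A, b))
```

The resulting text can be saved and solved with `glpsol --lp model.lp`.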
denis-bz / 0-transp-linprog.md
Created Jun 10, 2019
A 6 x 6 linprog testcase from glpk: 3 methods x sparse / dense give different results, 2019-06-10 12:45z

A 6 x 6 linprog testcase from glpk: 3 methods x sparse / dense give different results

transp-linprog.py below is a 6 x 6 testcase of scipy.optimize.linprog from a GLPK example, transp. In outline:

for method in ["interior-point", "revised simplex", "simplex"]:
    for sparse in [True, False]:
        try:
            linprog( ... )
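The outline can be fleshed out as below, with a tiny stand-in LP (the actual transp data is in the gist, not here). Note that these three methods have since been removed from newer SciPy releases in favor of HiGHS, hence the try/except:

```python
import numpy as np
import scipy.sparse as sp
from scipy.optimize import linprog

# a tiny stand-in LP, NOT the actual transp data: min c.x, A_eq x = b_eq, x >= 0
c = np.array([1.0, 2.0, 3.0])
A_eq = np.array([[1.0, 1.0, 1.0]])
b_eq = np.array([1.0])

for method in ["interior-point", "revised simplex", "simplex"]:
    for sparse in [True, False]:
        A = sp.csr_matrix(A_eq) if sparse else A_eq
        try:
            res = linprog(c, A_eq=A, b_eq=b_eq, method=method)
            print(method, sparse, res.status, res.fun)
        except (ValueError, TypeError) as e:  # method removed, or sparse input rejected
            print(method, sparse, "error:", e)
```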
denis-bz / 0-Lower-risk-stochastic-programming-Dakota-problem.md
Last active May 30, 2019
Lower-risk stochastic programming: the Dakota problem, 29 May 2019

Easy lower-risk stochastic programming: the Dakota problem

Keywords: stochastic programming, risk, linear programming, python

Users of stochastic programming sometimes look only at the "expected" payoff and ignore risk. For example, a well-known tutorial problem, Dakota furniture from Higle 2005, gives a maximum expected profit of $1730, but with a 30% chance of a $650 loss and a 70% chance of a $2750 profit. That looks risky to me (an engineer, not a businessman).
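The expected value behind those numbers is a one-line probability-weighted sum:

```python
# Dakota numbers quoted above:
# 30% chance of a $650 loss, 70% chance of a $2750 profit.
p_loss, loss = 0.30, -650.0
p_win, win = 0.70, 2750.0

expected = p_loss * loss + p_win * win  # probability-weighted payoff
print(expected)
```

The expectation hides the 30% downside entirely, which is the gist's point.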

denis-bz / intergrid.pandoc
Intergrid: interpolate data given on an N-d rectangular grid
============================================================
Purpose: interpolate data given on an N-dimensional rectangular grid,
uniform or non-uniform,
with the fast `scipy.ndimage.map_coordinates`.
Non-uniform grids are first uniformized with `numpy.interp`.
Background:
the reader should know some Python and NumPy.
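The two-step recipe above (uniformize with `numpy.interp`, then interpolate with `map_coordinates`) can be illustrated in 2-d; the grid and query points below are made-up examples, and intergrid does this in N-d:

```python
import numpy as np
from scipy.ndimage import map_coordinates

# data on a non-uniform 2-d rectangular grid
gx = np.array([0.0, 1.0, 3.0, 7.0])        # non-uniform grid lines, x
gy = np.array([0.0, 2.0, 4.0, 5.0, 9.0])   # non-uniform grid lines, y
data = gx[:, None] + 2 * gy[None, :]       # f(x, y) = x + 2 y, linear

query = np.array([[2.0, 3.0],              # (x, y) points to interpolate at
                  [5.0, 8.0]])

# "uniformize": map each physical coordinate to a fractional grid index
ix = np.interp(query[:, 0], gx, np.arange(len(gx)))
iy = np.interp(query[:, 1], gy, np.arange(len(gy)))

# bilinear interpolation in index space (order=1)
vals = map_coordinates(data, np.array([ix, iy]), order=1)
print(vals)
```

Since f is linear, bilinear interpolation recovers it exactly: f(2, 3) = 8 and f(5, 8) = 21.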
denis-bz / glpk-examples-summary
Created Apr 25, 2019
A summary of the problems in glpk-4.65/examples/*.mod
# A summary of the problems in glpk-4.65/examples/*.mod
#           lp/mip  min/max  rows  cols  nnz
assign  p   lp      min        17    64  192
bpp     p   mip     min        11    28   56
cal     p   lp      min         0     0    0
cf12a   p   lp      min        20    40  113
cf12b   p   lp      min        58    41  152
cflsq   p   lp      min        40    40  114
color   p   mip     min        92    48  288
cpp     p   lp      min        30    14   59
denis-bz / 0-scipy-sparse-solve-time.md
Created Feb 26, 2019
timeit scipy.sparse linear solvers: spsolve qmr lgmres splu spilu minres, 2019-02-26

timeit scipy.sparse linear solvers: spsolve qmr lgmres splu spilu

Here is a simple test of 5 scipy.sparse solvers of Ax = b, with A = diag*I + a 4000 x 4000 random-uniform sparse matrix of density 1e-3.

Keywords: sparse linear solver, test case, random matrix, scipy, GMRES, Krylov
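A test matrix of that shape, and a timing of one of the five solvers (spsolve), might be set up as follows; the diagonal shift value is my assumption, chosen to make A comfortably nonsingular:

```python
import time
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

# A = diag*I + random-uniform sparse, 4000 x 4000, density 1e-3
n, density, diag = 4000, 1e-3, 10.0
A = (diag * sp.identity(n, format="csr")
     + sp.random(n, n, density=density, format="csr", random_state=0))
b = A @ np.ones(n)                 # so the exact solution is all-ones

t0 = time.perf_counter()
x = spsolve(A.tocsc(), b)          # direct sparse LU solve
t1 = time.perf_counter()
print(f"spsolve: {t1 - t0:.3f} s, residual {np.linalg.norm(A @ x - b):.2g}")
```

The iterative solvers (qmr, lgmres, minres) take `(A, b)` similarly and return `(x, info)`; timing them the same way reproduces the comparison.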


denis-bz / Diff35.md
Last active Jan 19, 2019
3-point and 5-point finite-difference derivative approximations

3-point and 5-point finite-difference derivative approximations

Central differences like

diff1 = (f_{t+1} - f_{t-1}) / 2,  [0 -1 0 1 0] / 2
diff2 = (f_{t+2} - f_{t-2}) / 4,  [-1 0 0 0 1] / 4

approximate the derivative f'(t) much better than the one-sided difference f_{t+1} - f_t; see e.g. Wikipedia
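A quick numerical check of that claim, on f = sin with a step h of my choosing (the exact derivative is cos):

```python
import numpy as np

h, t = 0.01, 1.0
f = np.sin
exact = np.cos(t)

one_sided = (f(t + h) - f(t)) / h            # error O(h)
diff1 = (f(t + h) - f(t - h)) / (2 * h)      # central, stencil [0 -1 0 1 0]/2, error O(h^2)
diff2 = (f(t + 2*h) - f(t - 2*h)) / (4 * h)  # central, stencil [-1 0 0 0 1]/4, error O(h^2)

print(abs(one_sided - exact), abs(diff1 - exact), abs(diff2 - exact))
```

Both central differences beat the one-sided one by orders of magnitude at this h, since their error shrinks like h^2 rather than h.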

denis-bz / 0-Optimize-ABXC.md
Created Nov 23, 2018
Nonsmooth optimization: the ABXC test problem

ABXC nonsmooth optimization problems

ABXC is a range of nonsmooth optimization problems from Curtis et al.; see the problem description and links in abxc.py below. They're hard to optimize, very noisy near minima, and some are infeasible. Here's a plot of scipy.optimize.trust-constr creeping up an infeasible slope:

[plot: 20nov-plot-abxc]

denis-bz / 0-mopta08-py.md
Last active Nov 3, 2018
Optimizing MOPTA08 (128 variables, 68 constraints) with SLSQP from python
denis-bz / Coord-sketch.py
Created Sep 30, 2018
Trivial sketch of a function of n variables: vary one at a time
#!/usr/bin/env python2
""" coord_sketch: k * dim func() values, varying one coordinate at a time,
give a cheap starting point for further optimization
especially if func() is monotone, up or down, in each coordinate.
"""
from __future__ import division
import sys
import numpy as np
# from https://github.com/SMTorg/smt Surrogate Modeling Toolbox
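The snippet is cut off above; the idea in the docstring, k * dim cheap func() evaluations varying one coordinate at a time, might be sketched like this (function name and signature are my assumptions, not the gist's):

```python
import numpy as np

def coord_sketch(func, x0, steps):
    """Greedy one-coordinate-at-a-time sketch: for each coordinate in turn,
    try the candidate values in `steps` and keep the best point found.
    Costs len(steps) * dim evaluations of func(), a cheap starting point
    for further optimization, especially if func() is monotone per coordinate."""
    x = np.asarray(x0, dtype=float).copy()
    fbest = func(x)
    for j in range(len(x)):
        for s in steps:
            trial = x.copy()
            trial[j] = s
            f = func(trial)
            if f < fbest:
                fbest, x = f, trial
    return x, fbest

# example: a separable quadratic, monotone toward its minimum in each coordinate
func = lambda x: float(np.sum((x - np.array([1.0, 2.0])) ** 2))
xbest, fbest = coord_sketch(func, [0.0, 0.0], steps=[0.0, 1.0, 2.0, 3.0])
print(xbest, fbest)
```

Because the function is separable, the coordinate sweep lands on the exact minimum here; in general it only gives a rough starting point.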