Purpose: compare 4 scikit-learn classifiers on a venerable test case, the MNIST database of 70000 handwritten digits, 28 x 28 pixels.
Keywords: classification, benchmark, MNIST, KNN, SVM, scikit-learn, python
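A minimal sketch of such a 4-classifier comparison, using scikit-learn's small built-in digits set (8 x 8 images) as a fast stand-in for the full 70000-image 28 x 28 MNIST; the classifier choices and parameters here are illustrative, not the benchmark's actual settings. Swap in `fetch_openml("mnist_784")` for the real thing.

```python
# Compare 4 scikit-learn classifiers on handwritten digits.
# Assumption: load_digits (8 x 8) stands in for the full MNIST database.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

X, y = load_digits(return_X_y=True)
Xtrain, Xtest, ytrain, ytest = train_test_split(
    X, y, test_size=0.25, random_state=0 )

classifiers = {
    "KNN": KNeighborsClassifier( n_neighbors=3 ),
    "SVM": SVC( gamma="scale" ),
    "LogReg": LogisticRegression( max_iter=1000 ),
    "RandomForest": RandomForestClassifier( n_estimators=100, random_state=0 ),
}
scores = {}
for name, clf in classifiers.items():
    clf.fit( Xtrain, ytrain )
    scores[name] = clf.score( Xtest, ytest )   # accuracy on held-out quarter
    print( "%-12s accuracy %.3f" % (name, scores[name]) )
```

The same loop runs unchanged on the full MNIST arrays; only the data-loading line changes.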
""" Which {sparse or numpy array} * {sparse or numpy array} products work?
    Which combinations are valid, and what is the result type?
    Here, try-it-and-see on the N^2 combinations
    with `safe_sparse_dot` from scikit-learn, not "*".
See also:
    http://scipy-lectures.github.com/advanced/scipy_sparse/
    https://scipy.github.io/old-wiki/pages/SciPyPackages/Sparse.html
    http://stackoverflow.com/questions/tagged/scipy+sparse-matrix (lots)
"""
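A try-it-and-see sketch of the above with `sklearn.utils.extmath.safe_sparse_dot`; only a 2 x 2 {numpy, csr} grid is shown here, and the recorded type names may vary with the scipy / scikit-learn version.

```python
# Which {sparse or numpy array} * {sparse or numpy array} combinations work,
# and what result type does each give, under safe_sparse_dot ?
import numpy as np
import scipy.sparse as sp
from sklearn.utils.extmath import safe_sparse_dot

A = np.arange( 6, dtype=float ).reshape( 2, 3 )
operands = { "numpy": A, "csr": sp.csr_matrix( A ) }

results = {}
for lname, left in operands.items():
    for rname, right in operands.items():
        prod = safe_sparse_dot( left, right.T )   # 2x3 . 3x2 -> 2x2
        results[(lname, rname)] = type(prod).__name__
        print( "%-5s * %-5s -> %s" % (lname, rname, type(prod).__name__) )
```

All four combinations run; the result stays sparse only when an input is sparse (pass `dense_output=True` to force a dense result).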
# Keywords: scipy sparse dot-product basics tutorial
# A tiny example of how to monitor gradients in theano,
# from http://www.marekrei.com/blog/theano-tutorial/ 9. Minimal Training Example
# denis-bz 2016-11-04 nov
import theano | |
import theano.tensor as TT | |
import numpy as np | |
floatx = theano.config.floatX | |
np.set_printoptions( threshold=20, edgeitems=10, linewidth=100 )
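For readers without theano installed, the same gradient-monitoring idea can be sketched in plain NumPy; this is a stand-in, not the tutorial's theano code. The model and learning rate below are illustrative: a least-squares fit with a hand-derived gradient whose norm is recorded at every step, which is exactly what one would print from theano's symbolic gradient.

```python
# Monitor gradients during a minimal training loop -- NumPy stand-in
# for the theano example above.  Model: y ~ w . x, squared loss.
import numpy as np

rng = np.random.RandomState( 0 )
X = rng.randn( 100, 3 )
w_true = np.array([ 1.0, -2.0, 0.5 ])
y = X.dot( w_true )

w = np.zeros( 3 )
lr = 0.1
gradnorms = []
for step in range( 100 ):
    err = X.dot( w ) - y                # residuals
    grad = X.T.dot( err ) / len(y)      # gradient of mean squared error / 2
    gradnorms.append( np.linalg.norm( grad ))   # <-- the monitoring
    w -= lr * grad
print( "final w:", w, " last |grad|: %.2g" % gradnorms[-1] )
```

Watching `gradnorms` shrink (or not) is the quickest check that a training loop is actually descending.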
#!/usr/bin/env python2
from __future__ import division
import numpy as np
__version__ = "2017-01-13 jan denis-bz-py t-online de"
#...............................................................................
class Covariance_iir( object ):
    """ running Covariance_iir filter, up-weighting more recent data like an IIR filter """
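A sketch of the idea behind such an IIR-style running covariance; the `decay` parameter and class interface here are assumptions for illustration, not necessarily the original class's. Each update decays the weight of all past data by `decay` and folds in the new point, so recent data is up-weighted.

```python
# Exponentially weighted (IIR-style) running mean and covariance.
# Assumption: `decay` in (0, 1] down-weights old data; decay=1 gives
# the ordinary equal-weight running covariance.
import numpy as np

class CovIIR:
    def __init__( self, dim, decay=0.9 ):
        self.decay = decay
        self.mean = np.zeros( dim )
        self.cov = np.zeros(( dim, dim ))
        self.wsum = 0.0        # sum of decayed weights, for normalization

    def update( self, x ):
        x = np.asarray( x, dtype=float )
        self.wsum = self.decay * self.wsum + 1.0
        a = 1.0 / self.wsum    # effective step size, -> 1 - decay
        d = x - self.mean
        self.mean += a * d
        self.cov = (1 - a) * (self.cov + a * np.outer( d, d ))

rng = np.random.RandomState( 1 )
cv = CovIIR( 2, decay=0.99 )
for x in rng.multivariate_normal( [0, 0], [[2, 0.5], [0.5, 1]], size=5000 ):
    cv.update( x )
print( "mean ~", cv.mean )
print( "cov ~\n", cv.cov )
```

With `decay=0.99` the effective memory is roughly the last 100 points, so the estimates track a drifting distribution instead of averaging over all history.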
Algorithms for classification, in particular binary classification, have two different objectives:
Purpose: short, clear code for students of programming and optimization to read and try out, not for professionals.
The soft threshold and smooth absolute value functions are widely used in optimization and signal processing. Soft thresholding squeezes small values to 0; if the "noise" is small and the "signal" large, this improves the signal-to-noise ratio. Smooth abs rounds off the corner of |x| at 0.
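The two functions just described, as NumPy one-liners; the `sqrt(x^2 + eps^2)` form of smooth abs is one common choice among several smoothings, and `t`, `eps` are illustrative parameter names.

```python
# Soft threshold and smooth absolute value, elementwise on arrays.
import numpy as np

def softthresh( x, t ):
    """ shrink x toward 0 by t; values with |x| <= t go to exactly 0 """
    return np.sign( x ) * np.maximum( np.abs( x ) - t, 0 )

def smoothabs( x, eps ):
    """ ~ |x| for |x| >> eps, smooth (differentiable) near 0 """
    return np.sqrt( x * x + eps * eps )

x = np.array([ -3.0, -0.5, 0.0, 0.5, 3.0 ])
print( "softthresh:", softthresh( x, 1 ))
print( "smoothabs: ", smoothabs( x, 0.1 ))
```

Note that `smoothabs(0, eps) = eps`, not 0; subtract `eps` if an exact zero at the origin matters.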
#!/usr/bin/env python2
""" min_x av |exp - x|   at 0.7  -- Wikipedia Least_absolute_deviations, L1
    min_x rms( exp - x )  at 1    -- least squares, L2
    are both very flat,
    which might explain why L1 minimization with IRLS doesn't work very well.
"""
# google "L1 minimization" irls
# different L1 min problems: sparsity, outliers
from __future__ import division
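A sketch of IRLS for the scalar L1 problem in the docstring above, min_x sum |e_i - x|: solve a sequence of weighted least-squares problems with weights 1 / |e_i - x|. The exact minimizer is the median, which for Exponential(1) data is ln 2, about 0.7, matching the docstring's value. The `delta` floor on the residuals is an assumed guard against division by zero, and 50 iterations is an illustrative cap.

```python
# IRLS for min_x sum |e_i - x|  (scalar least absolute deviations).
import numpy as np

def irls_l1( e, niter=50, delta=1e-6 ):
    e = np.asarray( e, dtype=float )
    x = e.mean()                        # L2 solution as the start point
    for _ in range( niter ):
        w = 1.0 / np.maximum( np.abs( e - x ), delta )  # reweight
        x = np.sum( w * e ) / np.sum( w )   # weighted least-squares min
    return x

rng = np.random.RandomState( 0 )
e = rng.exponential( 1.0, size=1001 )
x = irls_l1( e )
print( "irls_l1: %.4f  median: %.4f  ln 2: %.4f"
        % (x, np.median( e ), np.log( 2 )) )
```

In this 1-d case IRLS does reach the median; the docstring's complaint is about the flat objective making convergence slow and fragile in harder settings.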