- mlpack overview{: .language-link #always }
- data formats{: .language-link #cli }
- data formats{: .language-link #python }
- data formats{: .language-link #julia }
- data formats{: .language-link #go }
- data formats{: .language-link #r }
mlpack is an intuitive, fast, and flexible C++ machine learning library with bindings to other languages. It is meant to be a machine learning analog to LAPACK, and aims to implement a wide array of machine learning methods and functions as a "swiss army knife" for machine learning researchers.
This reference page details mlpack's bindings to other languages. Further useful mlpack documentation links are given below.
mlpack bindings for CLI take and return a restricted set of types, for simplicity. These include primitive types, matrix/vector types, categorical matrix types, and model types. Each type is detailed below.
- `int`{: #doc_cli_int }: An integer (i.e., `1`).
- `double`{: #doc_cli_double }: A floating-point number (i.e., `0.5`).
- `flag`{: #doc_cli_flag }: A boolean flag option. If not specified, it is false; if specified, it is true.
- `string`{: #doc_cli_string }: A character string (i.e., `"hello"`).
- `int vector`{: #doc_cli_int_vector }: A vector of integers, separated by commas (i.e., `"1,2,3"`).
- `string vector`{: #doc_cli_string_vector }: A vector of strings, separated by commas (i.e., `"hello","goodbye"`).
- `2-d matrix file`{: #doc_cli_2-d_matrix_file }: A data matrix filename. The file can be CSV (.csv), TSV (.tsv), ASCII (space-separated values, .txt), Armadillo ASCII (.txt), PGM (.pgm), PPM (.ppm), Armadillo binary (.bin), or HDF5 (.h5, .hdf, .hdf5, or .he5), if mlpack was compiled with HDF5 support. The type of the data is detected by the extension of the filename. The storage should be such that one row corresponds to one point, and one column corresponds to one dimension (this is the typical storage format for on-disk data). All values of the matrix will be loaded as double-precision floating point data.
- `2-d index matrix file`{: #doc_cli_2-d_index_matrix_file }: A data matrix filename, where the matrix holds only non-negative integer values. This type is often used for labels or indices. The file can be CSV (.csv), TSV (.tsv), ASCII (space-separated values, .txt), Armadillo ASCII (.txt), PGM (.pgm), PPM (.ppm), Armadillo binary (.bin), or HDF5 (.h5, .hdf, .hdf5, or .he5), if mlpack was compiled with HDF5 support. The type of the data is detected by the extension of the filename. The storage should be such that one row corresponds to one point, and one column corresponds to one dimension (this is the typical storage format for on-disk data). All values of the matrix will be loaded as unsigned integers.
- `1-d matrix file`{: #doc_cli_1-d_matrix_file }: A one-dimensional vector filename. This file can take the same formats as the data matrix filenames; however, it must either contain one row and many columns, or one column and many rows.
- `1-d index matrix file`{: #doc_cli_1-d_index_matrix_file }: A one-dimensional vector filename, where the matrix holds only non-negative integer values. This type is typically used for labels, predictions, or other indices. This file can take the same formats as the data matrix filenames; however, it must either contain one row and many columns, or one column and many rows.
- `2-d categorical matrix file`{: #doc_cli_2-d_categorical_matrix_file }: A filename for a data matrix that can contain categorical (non-numeric) data. If the file contains only numeric data, then the same formats for regular data matrices can be used. If the file contains strings or other values that can't be parsed as numbers, then the type to be loaded must be CSV (.csv) or ARFF (.arff). Any non-numeric data will be converted to an unsigned integer value, and dimensions where the data is converted will be treated as categorical dimensions. When using this format, there is no need for one-hot encoding of categorical data.
- `mlpackModel file`{: #doc_cli_model }: A filename containing an mlpack model. These can have one of three formats: binary (.bin), text (.txt), and XML (.xml). The XML format produces the largest (but most human-readable) files, while the binary format can be significantly more compact and quicker to load and save.
mlpack bindings for Python take and return a restricted set of types, for simplicity. These include primitive types, matrix/vector types, categorical matrix types, and model types. Each type is detailed below.
- `int`{: #doc_python_int }: An integer (i.e., `1`).
- `float`{: #doc_python_float }: A floating-point number (i.e., `0.5`).
- `bool`{: #doc_python_bool }: A boolean flag option (`True` or `False`).
- `str`{: #doc_python_str }: A character string (i.e., `"hello"`).
- `list of ints`{: #doc_python_list_of_ints }: A list of integers; i.e., `[0, 1, 2]`.
- `list of strs`{: #doc_python_list_of_strs }: A list of strings; i.e., `["hello", "goodbye"]`.
- `matrix`{: #doc_python_matrix }: A 2-d arraylike containing data. This can be a list of lists, a numpy ndarray, or a pandas DataFrame. If the dtype is not already float64, it will be converted.
- `int matrix`{: #doc_python_int_matrix }: A 2-d arraylike containing data with a uint64 dtype. This can be a list of lists, a numpy ndarray, or a pandas DataFrame. If the dtype is not already uint64, it will be converted.
- `vector`{: #doc_python_vector }: A 1-d arraylike containing data. This can be a 2-d matrix where one dimension has size 1, or it can also be a list, a numpy 1-d ndarray, or a 1-d pandas DataFrame. If the dtype is not already float64, it will be converted.
- `int vector`{: #doc_python_int_vector }: A 1-d arraylike containing data with a uint64 dtype. This can be a 2-d matrix where one dimension has size 1, or it can also be a list, a numpy 1-d ndarray, or a 1-d pandas DataFrame. If the dtype is not already uint64, it will be converted.
- `categorical matrix`{: #doc_python_categorical_matrix }: A 2-d arraylike containing data. Like the regular 2-d matrices, this can be a list of lists, a numpy ndarray, or a pandas DataFrame. However, this type can also accept a pandas DataFrame that has columns of type 'CategoricalDtype'. These categorical values will be converted to numeric indices before being passed to mlpack, and then inside mlpack they will be properly treated as categorical variables, so there is no need to do one-hot encoding for this matrix type. If the dtype of the given matrix is not already float64, it will be converted.
- `mlpackModelType`{: #doc_python_model }: An mlpack model pointer. This type can be pickled to or from disk, and internally holds a pointer to C++ memory containing the mlpack model. Note that this means that the mlpack model itself cannot be easily inspected in Python; however, the pickled model can be loaded in C++ and inspected there.
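As a minimal sketch of that pickling workflow (assuming the `mlpack` Python package is installed and using the `adaboost` binding documented later on this page; the random data here is purely illustrative):

```python
import pickle

import numpy as np
from mlpack import adaboost

# Train a small model; labels must be (or will be converted to) uint64.
X = np.random.rand(100, 4)
y = np.random.randint(0, 2, size=100).astype(np.uint64)
result = adaboost(training=X, labels=y)
model = result['output_model']

# The model wraps a pointer to C++ memory, but it pickles like any object.
with open('adaboost_model.pkl', 'wb') as f:
    pickle.dump(model, f)

# Later (or in another process), load it back and use it for prediction.
with open('adaboost_model.pkl', 'rb') as f:
    model = pickle.load(f)
result = adaboost(input_model=model, test=np.random.rand(10, 4))
predictions = result['predictions']
```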
mlpack bindings for Julia take and return a restricted set of types, for simplicity. These include primitive types, matrix/vector types, categorical matrix types, and model types. Each type is detailed below.
- `Int`{: #doc_julia_Int }: An integer (i.e., `1`).
- `Float64`{: #doc_julia_Float64 }: A floating-point number (i.e., `0.5`).
- `Bool`{: #doc_julia_Bool }: A boolean flag option (`true` or `false`).
- `String`{: #doc_julia_String }: A character string (i.e., `"hello"`).
- `Array{Int, 1}`{: #doc_julia_Array_Int,1 }: A vector of integers; i.e., `[0, 1, 2]`.
- `Array{String, 1}`{: #doc_julia_Array_String,1 }: A vector of strings; i.e., `["hello", "goodbye"]`.
- `Float64 matrix-like`{: #doc_julia_Float64_matrix-like }: A 2-d matrix-like containing `Float64` data (could be an `Array{Float64, 2}` or a `DataFrame` or anything convertible to an `Array{Float64, 2}`). It is expected that each row of the matrix corresponds to a data point, unless `points_are_rows` is set to `false` when calling mlpack bindings.
- `Int matrix-like`{: #doc_julia_Int_matrix-like }: A 2-d matrix-like containing `Int` data (elements should be greater than or equal to 0). Could be an `Array{Int, 2}` or a `DataFrame` or anything convertible to an `Array{Int, 2}`. It is expected that each row of the matrix corresponds to a data point, unless `points_are_rows` is set to `false` when calling mlpack bindings.
- `Float64 vector-like`{: #doc_julia_Float64_vector-like }: A 1-d vector-like containing `Float64` data (could be an `Array{Float64, 1}`, an `Array{Float64, 2}` with one dimension of size 1, or anything convertible to `Array{Float64, 1}`).
- `Int vector-like`{: #doc_julia_Int_vector-like }: A 1-d vector-like containing `Int` data (elements should be greater than or equal to 0). Could be an `Array{Int, 1}`, an `Array{Int, 2}` with one dimension of size 1, or anything convertible to `Array{Int, 1}`.
- `Tuple{Array{Bool, 1}, Array{Float64, 2}}`{: #doc_julia_Tuple_Array_Bool,1,Array_Float64,2 }: A 2-d array containing `Float64` data along with a boolean array indicating which dimensions are categorical (represented by `true`) and which are numeric (represented by `false`). The number of elements in the boolean array should be the same as the dimensionality of the data matrix. It is expected that each row of the matrix corresponds to a single data point, unless `points_are_rows` is set to `false` when calling mlpack bindings.
- `<Model> (mlpack model)`{: #doc_julia_model }: An mlpack model pointer. `<Model>` refers to the type of model that is being stored, so, e.g., for `CF()`, the type will be `CFModel`. This type holds a pointer to C++ memory containing the mlpack model. Note that this means the mlpack model itself cannot be easily inspected in Julia. However, the pointer can be passed to subsequent calls to mlpack functions, and can be serialized and deserialized via either the `Serialization` package, or the `mlpack.serialize_bin()` and `mlpack.deserialize_bin()` functions.
mlpack bindings for Go take and return a restricted set of types, for simplicity. These include primitive types, matrix/vector types, categorical matrix types, and model types. Each type is detailed below.
- `int`{: #doc_go_int }: An integer (i.e., `1`).
- `float64`{: #doc_go_float64 }: A floating-point number (i.e., `0.5`).
- `bool`{: #doc_go_bool }: A boolean flag option (`true` or `false`).
- `string`{: #doc_go_string }: A character string (i.e., `"hello"`).
- `array of ints`{: #doc_go_array_of_ints }: An array of integers; i.e., `[]int{0, 1, 2}`.
- `array of strings`{: #doc_go_array_of_strings }: An array of strings; i.e., `[]string{"hello", "goodbye"}`.
- `*mat.Dense`{: #doc_go_*mat.Dense }: A 2-d gonum Matrix. If the type is not already `float64`, it will be converted.
- `*mat.Dense (with ints)`{: #doc_go_*mat.Dense_(with_ints) }: A 2-d gonum Matrix. If the type is not already `int64`, it will be converted.
- `*mat.Dense (1d)`{: #doc_go_*mat.Dense_(1d) }: A 1-d gonum Matrix (that is, a Matrix where either the number of rows or number of columns is 1).
- `*mat.Dense (1d with ints)`{: #doc_go_*mat.Dense_(1d_with_ints) }: A 1-d gonum Matrix (that is, a Matrix where either the number of rows or number of columns is 1).
- `matrixWithInfo`{: #doc_go_matrixWithInfo }: A Tuple(matrixWithInfo) containing `float64` data (Data) along with a boolean array (Categoricals) indicating which dimensions are categorical (represented by `true`) and which are numeric (represented by `false`). The number of elements in the boolean array should be the same as the dimensionality of the data matrix. It is expected that each row of the matrix corresponds to a single data point when calling mlpack bindings.
- `mlpackModel`{: #doc_go_model }: An mlpack model pointer. This type holds a pointer to C++ memory containing the mlpack model. Note that this means the mlpack model itself cannot be easily inspected in Go. However, the pointer can be passed to subsequent calls to mlpack functions.
mlpack bindings for R take and return a restricted set of types, for simplicity. These include primitive types, matrix/vector types, categorical matrix types, and model types. Each type is detailed below.
- `integer`{: #doc_r_integer }: An integer (i.e., `1`).
- `numeric`{: #doc_r_numeric }: A floating-point number (i.e., `0.5`).
- `logical`{: #doc_r_logical }: A boolean flag option (i.e., `TRUE` or `FALSE`).
- `character`{: #doc_r_character }: A character string (i.e., `"hello"`).
- `vector of integers`{: #doc_r_vector_of_integers }: A vector of integers; i.e., `c(0, 1, 2)`.
- `vector of characters`{: #doc_r_vector_of_characters }: A vector of strings; i.e., `c("hello", "goodbye")`.
- `numeric matrix`{: #doc_r_numeric_matrix }: A 2-d matrix-like containing `numeric` data (could be a `matrix` or a `data.frame` or anything convertible to a 2-d `matrix`).
- `integer matrix`{: #doc_r_integer_matrix }: A 2-d matrix-like containing `integer` data (could be a `matrix` or a `data.frame` or anything convertible to a 2-d `matrix`).
- `numeric vector`{: #doc_r_numeric_vector }: A 1-d matrix-like containing `numeric` data (could be a `matrix` or a `data.frame` with one dimension of size 1).
- `integer vector`{: #doc_r_integer_vector }: A 1-d matrix-like containing `integer` data (could be a `matrix` or a `data.frame` with one dimension of size 1).
- `categorical matrix/data.frame`{: #doc_r_categorical_matrix/data.frame }: A 2-d array containing `numeric` data. Like the regular 2-d matrices, this can be a `matrix` or a `data.frame`. However, this type can also accept a `data.frame` that has columns of type `character`, `logical` or `factor`. These values will be converted to `numeric` indices before being passed to mlpack, and then inside mlpack they will be properly treated as categorical variables, so there is no need to do one-hot encoding for this matrix type.
- `<Model> (mlpack model)`{: #doc_r_model }: An mlpack model pointer. `<Model>` refers to the type of model that is being stored, so, e.g., for `cf()`, the type will be `CFModel`. This type holds a pointer to C++ memory containing the mlpack model. Note that this means the mlpack model itself cannot be easily inspected in R. However, the pointer can be passed to subsequent calls to mlpack functions, and can be serialized and deserialized via the `Serialize()` and `Unserialize()` functions.
```go
// Initialize optional parameters for Adaboost().
param := mlpack.AdaboostOptions()
param.InputModel = nil
param.Iterations = 1000
param.Labels = mat.NewDense(1, 1, nil)
param.Test = mat.NewDense(1, 1, nil)
param.Tolerance = 1e-10
param.Training = mat.NewDense(1, 1, nil)
param.WeakLearner = "decision_stump"

output, output_model, predictions, probabilities := mlpack.Adaboost(param)
```
```R
R> library(mlpack)
R> d <- adaboost(input_model=NA, iterations=1000,
       labels=matrix(integer(), 0, 0), test=matrix(numeric(), 0, 0),
       tolerance=1e-10, training=matrix(numeric(), 0, 0), verbose=FALSE,
       weak_learner="decision_stump")
R> output <- d$output
R> output_model <- d$output_model
R> predictions <- d$predictions
R> probabilities <- d$probabilities
```
An implementation of the AdaBoost.MH (Adaptive Boosting) algorithm for classification. This can be used to train an AdaBoost model on labeled data or use an existing AdaBoost model to predict the classes of new points. Detailed documentation{: .language-detail-link #cli }Detailed documentation{: .language-detail-link #python }Detailed documentation{: .language-detail-link #julia }Detailed documentation{: .language-detail-link #go }Detailed documentation{: .language-detail-link #r }.
name | type | description | default |
---|---|---|---|
`--help (-h)` | flag | Default help info. Only exists in CLI binding. | |
`--info` | string | Print help on a specific option. Only exists in CLI binding. | `''` |
`--input_model_file (-m)` | AdaBoostModel file | Input AdaBoost model. | `''` |
`--iterations (-i)` | int | The maximum number of boosting iterations to be run (0 will run until convergence.) | `1000` |
`--labels_file (-l)` | 1-d index matrix file | Labels for the training set. | `''` |
`--test_file (-T)` | 2-d matrix file | Test dataset. | `''` |
`--tolerance (-e)` | double | The tolerance for change in values of the weighted error during training. | `1e-10` |
`--training_file (-t)` | 2-d matrix file | Dataset for training AdaBoost. | `''` |
`--verbose (-v)` | flag | Display informational messages and the full list of parameters and timers at the end of execution. | |
`--version (-V)` | flag | Display the version of mlpack. Only exists in CLI binding. | |
`--weak_learner (-w)` | string | The type of weak learner to use: 'decision_stump', or 'perceptron'. | `'decision_stump'` |
name | type | description |
---|---|---|
`--output_file (-o)` | 1-d index matrix file | Predicted labels for the test set. |
`--output_model_file (-M)` | AdaBoostModel file | Output trained AdaBoost model. |
`--predictions_file (-P)` | 1-d index matrix file | Predicted labels for the test set. |
`--probabilities_file (-p)` | 2-d matrix file | Predicted class probabilities for each point in the test set. |
{: #cli_adaboost_detailed-documentation }
This program implements the AdaBoost (or Adaptive Boosting) algorithm. The variant of AdaBoost implemented here is AdaBoost.MH. It uses a weak learner, either decision stumps or perceptrons, and over many iterations, creates a strong learner that is a weighted ensemble of weak learners. It runs these iterations until the change in the weighted training error falls below the given tolerance.
For more information about the algorithm, see the paper "Improved Boosting Algorithms Using Confidence-Rated Predictions", by R.E. Schapire and Y. Singer.
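As a sketch of what training produces (standard AdaBoost notation for the two-class case, not taken verbatim from mlpack's implementation): each boosting iteration $t$ fits a weak hypothesis $h_t$ on reweighted data and assigns it a weight $\alpha_t$, and the resulting strong classifier is

$$
H(\mathbf{x}) = \operatorname{sign}\!\left(\sum_{t=1}^{T} \alpha_t\, h_t(\mathbf{x})\right),
$$

where $T$ is bounded by the iterations parameter and training stops early once the change in the weighted training error drops below the tolerance.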
This program allows training of an AdaBoost model, and then application of that model to a test dataset. To train a model, a dataset must be passed with the `--training_file (-t)` option. Labels can be given with the `--labels_file (-l)` option; if no labels are specified, the labels will be assumed to be the last column of the input dataset. Alternately, an AdaBoost model may be loaded with the `--input_model_file (-m)` option.

Once a model is trained or loaded, it may be used to provide class predictions for a given test dataset. A test dataset may be specified with the `--test_file (-T)` parameter. The predicted classes for each point in the test dataset are output to the `--predictions_file (-P)` output parameter. The AdaBoost model itself is output to the `--output_model_file (-M)` output parameter.

Note: the following parameter is deprecated and will be removed in mlpack 4.0.0: `--output_file (-o)`. Use `--predictions_file (-P)` instead of `--output_file (-o)`.

For example, to run AdaBoost on an input dataset `'data.csv'` with labels `'labels.csv'` and perceptrons as the weak learner type, storing the trained model in `'model.bin'`, one could use the following command:

```sh
$ mlpack_adaboost --training_file data.csv --labels_file labels.csv
  --output_model_file model.bin --weak_learner perceptron
```

Similarly, an already-trained model in `'model.bin'` can be used to provide class predictions from test data `'test_data.csv'` and store the output in `'predictions.csv'` with the following command:

```sh
$ mlpack_adaboost --input_model_file model.bin --test_file test_data.csv
  --predictions_file predictions.csv
```
name | type | description | default |
---|---|---|---|
`copy_all_inputs` | bool | If specified, all input parameters will be deep copied before the method is run. This is useful for debugging problems where the input parameters are being modified by the algorithm, but can slow down the code. Only exists in Python binding. | `False` |
`input_model` | AdaBoostModelType | Input AdaBoost model. | `None` |
`iterations` | int | The maximum number of boosting iterations to be run (0 will run until convergence.) | `1000` |
`labels` | int vector | Labels for the training set. | `np.empty([0], dtype=np.uint64)` |
`test` | matrix | Test dataset. | `np.empty([0, 0])` |
`tolerance` | float | The tolerance for change in values of the weighted error during training. | `1e-10` |
`training` | matrix | Dataset for training AdaBoost. | `np.empty([0, 0])` |
`verbose` | bool | Display informational messages and the full list of parameters and timers at the end of execution. | `False` |
`weak_learner` | str | The type of weak learner to use: 'decision_stump', or 'perceptron'. | `'decision_stump'` |
Results are returned in a Python dictionary. The keys of the dictionary are the names of the output parameters.
name | type | description |
---|---|---|
`output` | int vector | Predicted labels for the test set. |
`output_model` | AdaBoostModelType | Output trained AdaBoost model. |
`predictions` | int vector | Predicted labels for the test set. |
`probabilities` | matrix | Predicted class probabilities for each point in the test set. |
{: #python_adaboost_detailed-documentation }
This program implements the AdaBoost (or Adaptive Boosting) algorithm. The variant of AdaBoost implemented here is AdaBoost.MH. It uses a weak learner, either decision stumps or perceptrons, and over many iterations, creates a strong learner that is a weighted ensemble of weak learners. It runs these iterations until the change in the weighted training error falls below the given tolerance.
For more information about the algorithm, see the paper "Improved Boosting Algorithms Using Confidence-Rated Predictions", by R.E. Schapire and Y. Singer.
This program allows training of an AdaBoost model, and then application of that model to a test dataset. To train a model, a dataset must be passed with the `training` option. Labels can be given with the `labels` option; if no labels are specified, the labels will be assumed to be the last column of the input dataset. Alternately, an AdaBoost model may be loaded with the `input_model` option.

Once a model is trained or loaded, it may be used to provide class predictions for a given test dataset. A test dataset may be specified with the `test` parameter. The predicted classes for each point in the test dataset are output to the `predictions` output parameter. The AdaBoost model itself is output to the `output_model` output parameter.

Note: the following parameter is deprecated and will be removed in mlpack 4.0.0: `output`. Use `predictions` instead of `output`.

For example, to run AdaBoost on an input dataset `'data'` with labels `'labels'` and perceptrons as the weak learner type, storing the trained model in `'model'`, one could use the following command:

```python
>>> output = adaboost(training=data, labels=labels,
  weak_learner='perceptron')
>>> model = output['output_model']
```

Similarly, an already-trained model in `'model'` can be used to provide class predictions from test data `'test_data'` and store the output in `'predictions'` with the following command:

```python
>>> output = adaboost(input_model=model, test=test_data)
>>> predictions = output['predictions']
```
name | type | description | default |
---|---|---|---|
`input_model` | AdaBoostModel | Input AdaBoost model. | `nothing` |
`iterations` | Int | The maximum number of boosting iterations to be run (0 will run until convergence.) | `1000` |
`labels` | Int vector-like | Labels for the training set. | `Int[]` |
`test` | Float64 matrix-like | Test dataset. | `zeros(0, 0)` |
`tolerance` | Float64 | The tolerance for change in values of the weighted error during training. | `1e-10` |
`training` | Float64 matrix-like | Dataset for training AdaBoost. | `zeros(0, 0)` |
`verbose` | Bool | Display informational messages and the full list of parameters and timers at the end of execution. | `false` |
`weak_learner` | String | The type of weak learner to use: 'decision_stump', or 'perceptron'. | `"decision_stump"` |
Results are returned as a tuple, and can be unpacked directly into return values or stored directly as a tuple; undesired results can be ignored with the `_` keyword.
name | type | description |
---|---|---|
`output` | Int vector-like | Predicted labels for the test set. |
`output_model` | AdaBoostModel | Output trained AdaBoost model. |
`predictions` | Int vector-like | Predicted labels for the test set. |
`probabilities` | Float64 matrix-like | Predicted class probabilities for each point in the test set. |
{: #julia_adaboost_detailed-documentation }
This program implements the AdaBoost (or Adaptive Boosting) algorithm. The variant of AdaBoost implemented here is AdaBoost.MH. It uses a weak learner, either decision stumps or perceptrons, and over many iterations, creates a strong learner that is a weighted ensemble of weak learners. It runs these iterations until the change in the weighted training error falls below the given tolerance.
For more information about the algorithm, see the paper "Improved Boosting Algorithms Using Confidence-Rated Predictions", by R.E. Schapire and Y. Singer.
This program allows training of an AdaBoost model, and then application of that model to a test dataset. To train a model, a dataset must be passed with the `training` option. Labels can be given with the `labels` option; if no labels are specified, the labels will be assumed to be the last column of the input dataset. Alternately, an AdaBoost model may be loaded with the `input_model` option.

Once a model is trained or loaded, it may be used to provide class predictions for a given test dataset. A test dataset may be specified with the `test` parameter. The predicted classes for each point in the test dataset are output to the `predictions` output parameter. The AdaBoost model itself is output to the `output_model` output parameter.

Note: the following parameter is deprecated and will be removed in mlpack 4.0.0: `output`. Use `predictions` instead of `output`.

For example, to run AdaBoost on an input dataset `data` with labels `labels` and perceptrons as the weak learner type, storing the trained model in `model`, one could use the following command:

```julia
julia> using CSV
julia> data = CSV.read("data.csv")
julia> labels = CSV.read("labels.csv"; type=Int)
julia> _, model, _, _ = adaboost(labels=labels, training=data,
            weak_learner="perceptron")
```

Similarly, an already-trained model in `model` can be used to provide class predictions from test data `test_data` and store the output in `predictions` with the following command:

```julia
julia> using CSV
julia> test_data = CSV.read("test_data.csv")
julia> _, _, predictions, _ = adaboost(input_model=model,
            test=test_data)
```
There are two types of input options: required options, which are passed directly to the function call; and optional options, which are passed via an initialized struct that allows keyword access to each of the options.
name | type | description | default |
---|---|---|---|
`InputModel` | adaBoostModel | Input AdaBoost model. | `nil` |
`Iterations` | int | The maximum number of boosting iterations to be run (0 will run until convergence.) | `1000` |
`Labels` | *mat.Dense (1d with ints) | Labels for the training set. | `mat.NewDense(1, 1, nil)` |
`Test` | *mat.Dense | Test dataset. | `mat.NewDense(1, 1, nil)` |
`Tolerance` | float64 | The tolerance for change in values of the weighted error during training. | `1e-10` |
`Training` | *mat.Dense | Dataset for training AdaBoost. | `mat.NewDense(1, 1, nil)` |
`Verbose` | bool | Display informational messages and the full list of parameters and timers at the end of execution. | `false` |
`WeakLearner` | string | The type of weak learner to use: 'decision_stump', or 'perceptron'. | `"decision_stump"` |
Output options are returned via Go's support for multiple return values, in the order listed below.
name | type | description |
---|---|---|
`output` | *mat.Dense (1d with ints) | Predicted labels for the test set. |
`outputModel` | adaBoostModel | Output trained AdaBoost model. |
`predictions` | *mat.Dense (1d with ints) | Predicted labels for the test set. |
`probabilities` | *mat.Dense | Predicted class probabilities for each point in the test set. |
{: #go_adaboost_detailed-documentation }
This program implements the AdaBoost (or Adaptive Boosting) algorithm. The variant of AdaBoost implemented here is AdaBoost.MH. It uses a weak learner, either decision stumps or perceptrons, and over many iterations, creates a strong learner that is a weighted ensemble of weak learners. It runs these iterations until the change in the weighted training error falls below the given tolerance.
For more information about the algorithm, see the paper "Improved Boosting Algorithms Using Confidence-Rated Predictions", by R.E. Schapire and Y. Singer.
This program allows training of an AdaBoost model, and then application of that model to a test dataset. To train a model, a dataset must be passed with the `Training` option. Labels can be given with the `Labels` option; if no labels are specified, the labels will be assumed to be the last column of the input dataset. Alternately, an AdaBoost model may be loaded with the `InputModel` option.

Once a model is trained or loaded, it may be used to provide class predictions for a given test dataset. A test dataset may be specified with the `Test` parameter. The predicted classes for each point in the test dataset are output to the `Predictions` output parameter. The AdaBoost model itself is output to the `OutputModel` output parameter.

Note: the following parameter is deprecated and will be removed in mlpack 4.0.0: `Output`. Use `Predictions` instead of `Output`.

For example, to run AdaBoost on an input dataset `data` with labels `labels` and perceptrons as the weak learner type, storing the trained model in `model`, one could use the following command:

```go
// Initialize optional parameters for Adaboost().
param := mlpack.AdaboostOptions()
param.Training = data
param.Labels = labels
param.WeakLearner = "perceptron"

_, model, _, _ := mlpack.Adaboost(param)
```

Similarly, an already-trained model in `model` can be used to provide class predictions from test data `test_data` and store the output in `predictions` with the following command:

```go
// Initialize optional parameters for Adaboost().
param := mlpack.AdaboostOptions()
param.InputModel = &model
param.Test = test_data

_, _, predictions, _ := mlpack.Adaboost(param)
```
name | type | description | default |
---|---|---|---|
`input_model` | AdaBoostModel | Input AdaBoost model. | `NA` |
`iterations` | integer | The maximum number of boosting iterations to be run (0 will run until convergence.) | `1000` |
`labels` | integer vector | Labels for the training set. | `matrix(integer(), 0, 0)` |
`test` | numeric matrix | Test dataset. | `matrix(numeric(), 0, 0)` |
`tolerance` | numeric | The tolerance for change in values of the weighted error during training. | `1e-10` |
`training` | numeric matrix | Dataset for training AdaBoost. | `matrix(numeric(), 0, 0)` |
`verbose` | logical | Display informational messages and the full list of parameters and timers at the end of execution. | `FALSE` |
`weak_learner` | character | The type of weak learner to use: 'decision_stump', or 'perceptron'. | `"decision_stump"` |
Results are returned in an R list. The keys of the list are the names of the output parameters.
name | type | description |
---|---|---|
`output` | integer vector | Predicted labels for the test set. |
`output_model` | AdaBoostModel | Output trained AdaBoost model. |
`predictions` | integer vector | Predicted labels for the test set. |
`probabilities` | numeric matrix | Predicted class probabilities for each point in the test set. |
{: #r_adaboost_detailed-documentation }
This program implements the AdaBoost (or Adaptive Boosting) algorithm. The variant of AdaBoost implemented here is AdaBoost.MH. It uses a weak learner, either decision stumps or perceptrons, and over many iterations, creates a strong learner that is a weighted ensemble of weak learners. It runs these iterations until the change in the weighted training error falls below the given tolerance.
For more information about the algorithm, see the paper "Improved Boosting Algorithms Using Confidence-Rated Predictions", by R.E. Schapire and Y. Singer.
This program allows training of an AdaBoost model, and then application of that model to a test dataset. To train a model, a dataset must be passed with the `training` option. Labels can be given with the `labels` option; if no labels are specified, the labels will be assumed to be the last column of the input dataset. Alternately, an AdaBoost model may be loaded with the `input_model` option.

Once a model is trained or loaded, it may be used to provide class predictions for a given test dataset. A test dataset may be specified with the `test` parameter. The predicted classes for each point in the test dataset are output to the `predictions` output parameter. The AdaBoost model itself is output to the `output_model` output parameter.

Note: the following parameter is deprecated and will be removed in mlpack 4.0.0: `output`. Use `predictions` instead of `output`.

For example, to run AdaBoost on an input dataset `"data"` with labels `"labels"` and perceptrons as the weak learner type, storing the trained model in `"model"`, one could use the following command:

```R
R> output <- adaboost(training=data, labels=labels,
       weak_learner="perceptron")
R> model <- output$output_model
```

Similarly, an already-trained model in `"model"` can be used to provide class predictions from test data `"test_data"` and store the output in `"predictions"` with the following command:

```R
R> output <- adaboost(input_model=model, test=test_data)
R> predictions <- output$predictions
```
```go
// Initialize optional parameters for ApproxKfn().
param := mlpack.ApproxKfnOptions()
param.Algorithm = "ds"
param.CalculateError = false
param.ExactDistances = mat.NewDense(1, 1, nil)
param.InputModel = nil
param.K = 0
param.NumProjections = 5
param.NumTables = 5
param.Query = mat.NewDense(1, 1, nil)
param.Reference = mat.NewDense(1, 1, nil)

distances, neighbors, output_model := mlpack.ApproxKfn(param)
```
```R
R> library(mlpack)
R> d <- approx_kfn(algorithm="ds", calculate_error=FALSE,
       exact_distances=matrix(numeric(), 0, 0), input_model=NA, k=0,
       num_projections=5, num_tables=5, query=matrix(numeric(), 0, 0),
       reference=matrix(numeric(), 0, 0), verbose=FALSE)
R> distances <- d$distances
R> neighbors <- d$neighbors
R> output_model <- d$output_model
```
An implementation of two strategies for furthest neighbor search. This can be used to compute the furthest neighbor of query point(s) from a set of points; furthest neighbor models can be saved and reused with future query point(s). Detailed documentation{: .language-detail-link #cli }Detailed documentation{: .language-detail-link #python }Detailed documentation{: .language-detail-link #julia }Detailed documentation{: .language-detail-link #go }Detailed documentation{: .language-detail-link #r }.
name | type | description | default |
---|---|---|---|
`--algorithm (-a)` | string | Algorithm to use: 'ds' or 'qdafn'. | `'ds'` |
`--calculate_error (-e)` | flag | If set, calculate the average distance error for the first furthest neighbor only. | |
`--exact_distances_file (-x)` | 2-d matrix file | Matrix containing exact distances to furthest neighbors; this can be used to avoid explicit calculation when `--calculate_error` is set. | `''` |
`--help (-h)` | flag | Default help info. Only exists in CLI binding. | |
`--info` | string | Print help on a specific option. Only exists in CLI binding. | `''` |
`--input_model_file (-m)` | ApproxKFNModel file | File containing input model. | `''` |
`--k (-k)` | int | Number of furthest neighbors to search for. | `0` |
`--num_projections (-p)` | int | Number of projections to use in each hash table. | `5` |
`--num_tables (-t)` | int | Number of hash tables to use. | `5` |
`--query_file (-q)` | 2-d matrix file | Matrix containing query points. | `''` |
`--reference_file (-r)` | 2-d matrix file | Matrix containing the reference dataset. | `''` |
`--verbose (-v)` | flag | Display informational messages and the full list of parameters and timers at the end of execution. | |
`--version (-V)` | flag | Display the version of mlpack. Only exists in CLI binding. | |
name | type | description |
---|---|---|
`--distances_file (-d)` | 2-d matrix file | Matrix to save furthest neighbor distances to. |
`--neighbors_file (-n)` | 2-d index matrix file | Matrix to save neighbor indices to. |
`--output_model_file (-M)` | ApproxKFNModel file | File to save output model to. |
{: #cli_approx_kfn_detailed-documentation }
This program implements two strategies for furthest neighbor search. These strategies are:
- The 'qdafn' algorithm from "Approximate Furthest Neighbor in High Dimensions" by R. Pagh, F. Silvestri, J. Sivertsen, and M. Skala, in Similarity Search and Applications 2015 (SISAP).
- The 'DrusillaSelect' algorithm from "Fast approximate furthest neighbors with data-dependent candidate selection", by R.R. Curtin and A.B. Gardner, in Similarity Search and Applications 2016 (SISAP).
These two strategies give approximate results for the furthest neighbor search problem and can be used as fast replacements for other furthest neighbor techniques such as those found in the mlpack_kfn program. Note that typically, the 'ds' algorithm requires far fewer tables and projections than the 'qdafn' algorithm.
Specify a reference set (set to search in) with `--reference_file (-r)`, specify a query set with `--query_file (-q)`, and specify algorithm parameters with `--num_tables (-t)` and `--num_projections (-p)` (or don't, and defaults will be used). The algorithm to be used (either 'ds'---the default---or 'qdafn') may be specified with `--algorithm (-a)`. Also specify the number of neighbors to search for with `--k (-k)`.

Note that for 'qdafn' in lower dimensions, `--num_projections (-p)` may need to be set to a high value in order to return results for each query point.

If no query set is specified, the reference set will be used as the query set. The `--output_model_file (-M)` output parameter may be used to store the built model, and an input model may be loaded instead of specifying a reference set with the `--input_model_file (-m)` option.

Results for each query point can be stored with the `--neighbors_file (-n)` and `--distances_file (-d)` output parameters. Each row of these output matrices holds the k distances or neighbor indices for each query point.
For example, to find the 5 approximate furthest neighbors with `'reference_set.csv'` as the reference set and `'query_set.csv'` as the query set using DrusillaSelect, storing the furthest neighbor indices to `'neighbors.csv'` and the furthest neighbor distances to `'distances.csv'`, one could call

```sh
$ mlpack_approx_kfn --query_file query_set.csv --reference_file
  reference_set.csv --k 5 --algorithm ds --neighbors_file neighbors.csv
  --distances_file distances.csv
```

and to perform approximate all-furthest-neighbors search with k=1 on the set `'reference_set.csv'`, storing only the furthest neighbor distances to `'distances.csv'`, one could call

```sh
$ mlpack_approx_kfn --reference_file reference_set.csv --k 1 --distances_file
  distances.csv
```

A trained model can be re-used. If a model has been previously saved to `'model.bin'`, then we may find 3 approximate furthest neighbors on a query set `'new_query_set.csv'` using that model and store the furthest neighbor indices into `'neighbors.csv'` by calling

```sh
$ mlpack_approx_kfn --input_model_file model.bin --query_file
  new_query_set.csv --k 3 --neighbors_file neighbors.csv
```
name | type | description | default |
---|---|---|---|
`algorithm` | str | Algorithm to use: 'ds' or 'qdafn'. | `'ds'` |
`calculate_error` | bool | If set, calculate the average distance error for the first furthest neighbor only. | `False` |
`copy_all_inputs` | bool | If specified, all input parameters will be deep copied before the method is run. This is useful for debugging problems where the input parameters are being modified by the algorithm, but can slow down the code. Only exists in Python binding. | `False` |
`exact_distances` | matrix | Matrix containing exact distances to furthest neighbors; this can be used to avoid explicit calculation when `calculate_error` is set. | `np.empty([0, 0])` |
`input_model` | ApproxKFNModelType | File containing input model. | `None` |
`k` | int | Number of furthest neighbors to search for. | `0` |
`num_projections` | int | Number of projections to use in each hash table. | `5` |
`num_tables` | int | Number of hash tables to use. | `5` |
`query` | matrix | Matrix containing query points. | `np.empty([0, 0])` |
`reference` | matrix | Matrix containing the reference dataset. | `np.empty([0, 0])` |
`verbose` | bool | Display informational messages and the full list of parameters and timers at the end of execution. | `False` |
Results are returned in a Python dictionary. The keys of the dictionary are the names of the output parameters.
name | type | description |
---|---|---|
`distances` | matrix | Matrix to save furthest neighbor distances to. |
`neighbors` | int matrix | Matrix to save neighbor indices to. |
`output_model` | ApproxKFNModelType | File to save output model to. |
{: #python_approx_kfn_detailed-documentation }
This program implements two strategies for furthest neighbor search. These strategies are:
- The 'qdafn' algorithm from "Approximate Furthest Neighbor in High Dimensions" by R. Pagh, F. Silvestri, J. Sivertsen, and M. Skala, in Similarity Search and Applications 2015 (SISAP).
- The 'DrusillaSelect' algorithm from "Fast approximate furthest neighbors with data-dependent candidate selection", by R.R. Curtin and A.B. Gardner, in Similarity Search and Applications 2016 (SISAP).
These two strategies give approximate results for the furthest neighbor search problem and can be used as fast replacements for other furthest neighbor techniques such as those found in the mlpack_kfn program. Note that typically, the 'ds' algorithm requires far fewer tables and projections than the 'qdafn' algorithm.
Specify a reference set (set to search in) with `reference`, specify a query set with `query`, and specify algorithm parameters with `num_tables` and `num_projections` (or don't, and defaults will be used). The algorithm to be used (either 'ds'---the default---or 'qdafn') may be specified with `algorithm`. Also specify the number of neighbors to search for with `k`.

Note that for 'qdafn' in lower dimensions, `num_projections` may need to be set to a high value in order to return results for each query point.

If no query set is specified, the reference set will be used as the query set. The `output_model` output parameter may be used to store the built model, and an input model may be loaded instead of specifying a reference set with the `input_model` option.

Results for each query point can be stored with the `neighbors` and `distances` output parameters. Each row of these output matrices holds the k distances or neighbor indices for each query point.
For example, to find the 5 approximate furthest neighbors with `'reference_set'` as the reference set and `'query_set'` as the query set using DrusillaSelect, storing the furthest neighbor indices to `'neighbors'` and the furthest neighbor distances to `'distances'`, one could call

```python
>>> output = approx_kfn(query=query_set, reference=reference_set, k=5,
  algorithm='ds')
>>> neighbors = output['neighbors']
>>> distances = output['distances']
```

and to perform approximate all-furthest-neighbors search with k=1 on the set `'reference_set'`, storing only the furthest neighbor distances to `'distances'`, one could call

```python
>>> output = approx_kfn(reference=reference_set, k=1)
>>> distances = output['distances']
```

A trained model can be re-used. If a model has been previously saved to `'model'`, then we may find 3 approximate furthest neighbors on a query set `'new_query_set'` using that model and store the furthest neighbor indices into `'neighbors'` by calling

```python
>>> output = approx_kfn(input_model=model, query=new_query_set, k=3)
>>> neighbors = output['neighbors']
```
name | type | description | default |
---|---|---|---|
`algorithm` | String | Algorithm to use: 'ds' or 'qdafn'. | `"ds"` |
`calculate_error` | Bool | If set, calculate the average distance error for the first furthest neighbor only. | `false` |
`exact_distances` | Float64 matrix-like | Matrix containing exact distances to furthest neighbors; this can be used to avoid explicit calculation when `calculate_error` is set. | `zeros(0, 0)` |
`input_model` | ApproxKFNModel | File containing input model. | `nothing` |
`k` | Int | Number of furthest neighbors to search for. | `0` |
`num_projections` | Int | Number of projections to use in each hash table. | `5` |
`num_tables` | Int | Number of hash tables to use. | `5` |
`query` | Float64 matrix-like | Matrix containing query points. | `zeros(0, 0)` |
`reference` | Float64 matrix-like | Matrix containing the reference dataset. | `zeros(0, 0)` |
`verbose` | Bool | Display informational messages and the full list of parameters and timers at the end of execution. | `false` |
Results are returned as a tuple, and can be unpacked directly into return values or stored directly as a tuple; undesired results can be ignored with the `_` keyword.
name | type | description |
---|---|---|
`distances` | Float64 matrix-like | Matrix to save furthest neighbor distances to. |
`neighbors` | Int matrix-like | Matrix to save neighbor indices to. |
`output_model` | ApproxKFNModel | File to save output model to. |
{: #julia_approx_kfn_detailed-documentation }
This program implements two strategies for furthest neighbor search. These strategies are:
- The 'qdafn' algorithm from "Approximate Furthest Neighbor in High Dimensions" by R. Pagh, F. Silvestri, J. Sivertsen, and M. Skala, in Similarity Search and Applications 2015 (SISAP).
- The 'DrusillaSelect' algorithm from "Fast approximate furthest neighbors with data-dependent candidate selection", by R.R. Curtin and A.B. Gardner, in Similarity Search and Applications 2016 (SISAP).
These two strategies give approximate results for the furthest neighbor search problem and can be used as fast replacements for other furthest neighbor techniques such as those found in the mlpack_kfn program. Note that typically, the 'ds' algorithm requires far fewer tables and projections than the 'qdafn' algorithm.
Specify a reference set (set to search in) with `reference`, specify a query set with `query`, and specify algorithm parameters with `num_tables` and `num_projections` (or don't, and defaults will be used). The algorithm to be used (either 'ds'---the default---or 'qdafn') may be specified with `algorithm`. Also specify the number of neighbors to search for with `k`.

Note that for 'qdafn' in lower dimensions, `num_projections` may need to be set to a high value in order to return results for each query point.

If no query set is specified, the reference set will be used as the query set. The `output_model` output parameter may be used to store the built model, and an input model may be loaded instead of specifying a reference set with the `input_model` option.

Results for each query point can be stored with the `neighbors` and `distances` output parameters. Each row of these output matrices holds the k distances or neighbor indices for each query point.
For example, to find the 5 approximate furthest neighbors with `reference_set` as the reference set and `query_set` as the query set using DrusillaSelect, storing the furthest neighbor indices to `neighbors` and the furthest neighbor distances to `distances`, one could call

```julia
julia> using CSV
julia> query_set = CSV.read("query_set.csv")
julia> reference_set = CSV.read("reference_set.csv")
julia> distances, neighbors, _ = approx_kfn(algorithm="ds", k=5,
            query=query_set, reference=reference_set)
```

and to perform approximate all-furthest-neighbors search with k=1 on the set `reference_set`, storing only the furthest neighbor distances to `distances`, one could call

```julia
julia> using CSV
julia> reference_set = CSV.read("reference_set.csv")
julia> distances, _, _ = approx_kfn(k=1, reference=reference_set)
```

A trained model can be re-used. If a model has been previously saved to `model`, then we may find 3 approximate furthest neighbors on a query set `new_query_set` using that model and store the furthest neighbor indices into `neighbors` by calling

```julia
julia> using CSV
julia> new_query_set = CSV.read("new_query_set.csv")
julia> _, neighbors, _ = approx_kfn(input_model=model, k=3,
            query=new_query_set)
```
There are two types of input options: required options, which are passed directly to the function call; and optional options, which are passed via an initialized struct that allows keyword access to each of the options.
name | type | description | default |
---|---|---|---|
`Algorithm` | string | Algorithm to use: 'ds' or 'qdafn'. | `"ds"` |
`CalculateError` | bool | If set, calculate the average distance error for the first furthest neighbor only. | `false` |
`ExactDistances` | *mat.Dense | Matrix containing exact distances to furthest neighbors; this can be used to avoid explicit calculation when `CalculateError` is set. | `mat.NewDense(1, 1, nil)` |
`InputModel` | approxkfnModel | File containing input model. | `nil` |
`K` | int | Number of furthest neighbors to search for. | `0` |
`NumProjections` | int | Number of projections to use in each hash table. | `5` |
`NumTables` | int | Number of hash tables to use. | `5` |
`Query` | *mat.Dense | Matrix containing query points. | `mat.NewDense(1, 1, nil)` |
`Reference` | *mat.Dense | Matrix containing the reference dataset. | `mat.NewDense(1, 1, nil)` |
`Verbose` | bool | Display informational messages and the full list of parameters and timers at the end of execution. | `false` |
Output options are returned via Go's support for multiple return values, in the order listed below.
name | type | description |
---|---|---|
`distances` | *mat.Dense | Matrix to save furthest neighbor distances to. |
`neighbors` | *mat.Dense (with ints) | Matrix to save neighbor indices to. |
`outputModel` | approxkfnModel | File to save output model to. |
{: #go_approx_kfn_detailed-documentation }
This program implements two strategies for furthest neighbor search. These strategies are:
- The 'qdafn' algorithm from "Approximate Furthest Neighbor in High Dimensions" by R. Pagh, F. Silvestri, J. Sivertsen, and M. Skala, in Similarity Search and Applications 2015 (SISAP).
- The 'DrusillaSelect' algorithm from "Fast approximate furthest neighbors with data-dependent candidate selection", by R.R. Curtin and A.B. Gardner, in Similarity Search and Applications 2016 (SISAP).
These two strategies give approximate results for the furthest neighbor search problem and can be used as fast replacements for other furthest neighbor techniques such as those found in the mlpack_kfn program. Note that typically, the 'ds' algorithm requires far fewer tables and projections than the 'qdafn' algorithm.
Specify a reference set (set to search in) with `Reference`, specify a query set with `Query`, and specify algorithm parameters with `NumTables` and `NumProjections` (or don't, and defaults will be used). The algorithm to be used (either 'ds'---the default---or 'qdafn') may be specified with `Algorithm`. Also specify the number of neighbors to search for with `K`.

Note that for 'qdafn' in lower dimensions, `NumProjections` may need to be set to a high value in order to return results for each query point.

If no query set is specified, the reference set will be used as the query set. The `OutputModel` output parameter may be used to store the built model, and an input model may be loaded instead of specifying a reference set with the `InputModel` option.

Results for each query point can be stored with the `Neighbors` and `Distances` output parameters. Each row of these output matrices holds the k distances or neighbor indices for each query point.
For example, to find the 5 approximate furthest neighbors with `reference_set` as the reference set and `query_set` as the query set using DrusillaSelect, storing the furthest neighbor indices to `neighbors` and the furthest neighbor distances to `distances`, one could call

```go
// Initialize optional parameters for ApproxKfn().
param := mlpack.ApproxKfnOptions()
param.Query = query_set
param.Reference = reference_set
param.K = 5
param.Algorithm = "ds"

distances, neighbors, _ := mlpack.ApproxKfn(param)
```

and to perform approximate all-furthest-neighbors search with k=1 on the set `reference_set`, storing only the furthest neighbor distances to `distances`, one could call

```go
// Initialize optional parameters for ApproxKfn().
param := mlpack.ApproxKfnOptions()
param.Reference = reference_set
param.K = 1

distances, _, _ := mlpack.ApproxKfn(param)
```

A trained model can be re-used. If a model has been previously saved to `model`, then we may find 3 approximate furthest neighbors on a query set `new_query_set` using that model and store the furthest neighbor indices into `neighbors` by calling

```go
// Initialize optional parameters for ApproxKfn().
param := mlpack.ApproxKfnOptions()
param.InputModel = &model
param.Query = new_query_set
param.K = 3

_, neighbors, _ := mlpack.ApproxKfn(param)
```
name | type | description | default |
---|---|---|---|
`algorithm` | character | Algorithm to use: 'ds' or 'qdafn'. | `"ds"` |
`calculate_error` | logical | If set, calculate the average distance error for the first furthest neighbor only. | `FALSE` |
`exact_distances` | numeric matrix | Matrix containing exact distances to furthest neighbors; this can be used to avoid explicit calculation when `calculate_error` is set. | `matrix(numeric(), 0, 0)` |
`input_model` | ApproxKFNModel | File containing input model. | `NA` |
`k` | integer | Number of furthest neighbors to search for. | `0` |
`num_projections` | integer | Number of projections to use in each hash table. | `5` |
`num_tables` | integer | Number of hash tables to use. | `5` |
`query` | numeric matrix | Matrix containing query points. | `matrix(numeric(), 0, 0)` |
`reference` | numeric matrix | Matrix containing the reference dataset. | `matrix(numeric(), 0, 0)` |
`verbose` | logical | Display informational messages and the full list of parameters and timers at the end of execution. | `FALSE` |
Results are returned in an R list. The keys of the list are the names of the output parameters.
name | type | description |
---|---|---|
`distances` | numeric matrix | Matrix to save furthest neighbor distances to. |
`neighbors` | integer matrix | Matrix to save neighbor indices to. |
`output_model` | ApproxKFNModel | File to save output model to. |
{: #r_approx_kfn_detailed-documentation }
This program implements two strategies for furthest neighbor search. These strategies are:
- The 'qdafn' algorithm from "Approximate Furthest Neighbor in High Dimensions" by R. Pagh, F. Silvestri, J. Sivertsen, and M. Skala, in Similarity Search and Applications 2015 (SISAP).
- The 'DrusillaSelect' algorithm from "Fast approximate furthest neighbors with data-dependent candidate selection", by R.R. Curtin and A.B. Gardner, in Similarity Search and Applications 2016 (SISAP).
These two strategies give approximate results for the furthest neighbor search problem and can be used as fast replacements for other furthest neighbor techniques such as those found in the mlpack_kfn program. Note that typically, the 'ds' algorithm requires far fewer tables and projections than the 'qdafn' algorithm.
Specify a reference set (set to search in) with `reference`, specify a query set with `query`, and specify algorithm parameters with `num_tables` and `num_projections` (or don't, and defaults will be used). The algorithm to be used (either 'ds'---the default---or 'qdafn') may be specified with `algorithm`. Also specify the number of neighbors to search for with `k`.

Note that for 'qdafn' in lower dimensions, `num_projections` may need to be set to a high value in order to return results for each query point.

If no query set is specified, the reference set will be used as the query set. The `output_model` output parameter may be used to store the built model, and an input model may be loaded instead of specifying a reference set with the `input_model` option.

Results for each query point can be stored with the `neighbors` and `distances` output parameters. Each row of these output matrices holds the k distances or neighbor indices for each query point.
For example, to find the 5 approximate furthest neighbors with `"reference_set"` as the reference set and `"query_set"` as the query set using DrusillaSelect, storing the furthest neighbor indices to `"neighbors"` and the furthest neighbor distances to `"distances"`, one could call

```R
R> output <- approx_kfn(query=query_set, reference=reference_set, k=5,
       algorithm="ds")
R> neighbors <- output$neighbors
R> distances <- output$distances
```

and to perform approximate all-furthest-neighbors search with k=1 on the set `"reference_set"`, storing only the furthest neighbor distances to `"distances"`, one could call

```R
R> output <- approx_kfn(reference=reference_set, k=1)
R> distances <- output$distances
```

A trained model can be re-used. If a model has been previously saved to `"model"`, then we may find 3 approximate furthest neighbors on a query set `"new_query_set"` using that model and store the furthest neighbor indices into `"neighbors"` by calling

```R
R> output <- approx_kfn(input_model=model, query=new_query_set, k=3)
R> neighbors <- output$neighbors
```
```go
// Initialize optional parameters for BayesianLinearRegression().
param := mlpack.BayesianLinearRegressionOptions()
param.Center = false
param.Input = mat.NewDense(1, 1, nil)
param.InputModel = nil
param.Responses = mat.NewDense(1, 1, nil)
param.Scale = false
param.Test = mat.NewDense(1, 1, nil)

output_model, predictions, stds := mlpack.BayesianLinearRegression(param)
```
```R
R> library(mlpack)
R> d <- bayesian_linear_regression(center=FALSE, input=matrix(numeric(),
       0, 0), input_model=NA, responses=matrix(numeric(), 0, 0), scale=FALSE,
       test=matrix(numeric(), 0, 0), verbose=FALSE)
R> output_model <- d$output_model
R> predictions <- d$predictions
R> stds <- d$stds
```
An implementation of Bayesian linear regression. Detailed documentation{: .language-detail-link #cli }Detailed documentation{: .language-detail-link #python }Detailed documentation{: .language-detail-link #julia }Detailed documentation{: .language-detail-link #go }Detailed documentation{: .language-detail-link #r }.
name | type | description | default |
---|---|---|---|
--center (-c) |
flag |
Center the data and fit the intercept if enabled. | |
--help (-h) |
flag |
Default help info. Only exists in CLI binding. | |
--info |
string |
Print help on a specific option. Only exists in CLI binding. | '' |
--input_file (-i) |
2-d matrix file |
Matrix of covariates (X). | '' |
--input_model_file (-m) |
BayesianLinearRegression file |
Trained BayesianLinearRegression model to use. | '' |
--responses_file (-r) |
1-d matrix file |
Matrix of responses/observations (y). | '' |
--scale (-s) |
flag |
Scale each feature by their standard deviations if enabled. | |
--test_file (-t) |
2-d matrix file |
Matrix containing points to regress on (test points). | '' |
--verbose (-v) |
flag |
Display informational messages and the full list of parameters and timers at the end of execution. | |
--version (-V) |
flag |
Display the version of mlpack. Only exists in CLI binding. | |
name | type | description |
---|---|---|
--output_model_file (-M) |
BayesianLinearRegression file |
Output BayesianLinearRegression model. |
--predictions_file (-o) |
2-d matrix file |
If --test_file is specified, this file is where the predicted responses will be saved. |
--stds_file (-u) |
2-d matrix file |
If specified, this is where the standard deviations of the predictive distribution will be saved. |
{: #cli_bayesian_linear_regression_detailed-documentation }
An implementation of Bayesian linear regression. This model is a probabilistic view and implementation of linear regression: the final solution is obtained by computing a posterior distribution from a Gaussian likelihood and a zero-mean isotropic Gaussian prior distribution on the solution. Optimization is automatic and does not require cross-validation: it is performed by maximizing the evidence function, and the parameters are tuned during this maximization of the marginal likelihood. This procedure embodies Occam's razor, which penalizes overly complex solutions.
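In the standard notation of Bishop (Section 3.3, cited below), the model sketched above is

$$p(\mathbf{w}) = \mathcal{N}(\mathbf{w} \mid \mathbf{0}, \alpha^{-1} \mathbf{I}), \qquad p(\mathbf{y} \mid \mathbf{X}, \mathbf{w}) = \mathcal{N}(\mathbf{y} \mid \mathbf{X} \mathbf{w}, \beta^{-1} \mathbf{I}),$$

where the hyperparameters $$\alpha$$ (prior precision) and $$\beta$$ (noise precision) are chosen by maximizing the log marginal likelihood $$p(\mathbf{y} \mid \mathbf{X}, \alpha, \beta)$$ rather than by cross-validation. This is the textbook formulation, not a description of mlpack internals.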
This program is able to train a Bayesian linear regression model or load a model from file, output regression predictions for a test set, and save the trained model to a file.
To train a BayesianLinearRegression model, the --input_file (-i)
and --responses_file (-r)
parameters must be given. The --center (-c)
and --scale (-s)
parameters control the centering and the normalizing options. A trained model can be saved with the --output_model_file (-M)
output parameter. If no training is desired at all, a model can be passed via the --input_model_file (-m)
parameter.
The program can also provide predictions for test data using either the trained model or the given input model. Test points can be specified with the --test_file (-t)
parameter. Predicted responses to the test points can be saved with the --predictions_file (-o)
output parameter. The corresponding standard deviations can be saved by specifying the --stds_file (-u)
parameter.
For example, the following command trains a model on the data 'data.csv'
and responses 'responses.csv'
with center set to true and scale set to false (so, Bayesian linear regression is being solved), and then the model is saved to 'blr_model.bin':
$ mlpack_bayesian_linear_regression --input_file data.csv --responses_file
responses.csv --center --output_model_file blr_model.bin
The following command uses the 'blr_model.bin'
to provide predicted responses for the data 'test.csv'
and save those responses to 'test_predictions.csv'
:
$ mlpack_bayesian_linear_regression --input_model_file blr_model.bin
--test_file test.csv --predictions_file test_predictions.csv
Because the estimator computes a predictive distribution instead of a simple point estimate, the --stds_file (-u)
parameter allows one to save the prediction uncertainties:
$ mlpack_bayesian_linear_regression --input_model_file blr_model.bin
--test_file test.csv --predictions_file test_predictions.csv --stds_file
stds.csv
- Bayesian Interpolation
- Bishop, C. M., *Pattern Recognition and Machine Learning*, New York: Springer, 2006 (Section 3.3, Bayesian Linear Regression)
- mlpack::regression::BayesianLinearRegression C++ class documentation
name | type | description | default |
---|---|---|---|
center |
bool |
Center the data and fit the intercept if enabled. | False |
copy_all_inputs |
bool |
If specified, all input parameters will be deep copied before the method is run. This is useful for debugging problems where the input parameters are being modified by the algorithm, but can slow down the code. Only exists in Python binding. | False |
input |
matrix |
Matrix of covariates (X). | np.empty([0, 0]) |
input_model |
BayesianLinearRegressionType |
Trained BayesianLinearRegression model to use. | None |
responses |
vector |
Matrix of responses/observations (y). | np.empty([0]) |
scale |
bool |
Scale each feature by their standard deviations if enabled. | False |
test |
matrix |
Matrix containing points to regress on (test points). | np.empty([0, 0]) |
verbose |
bool |
Display informational messages and the full list of parameters and timers at the end of execution. | False |
Results are returned in a Python dictionary. The keys of the dictionary are the names of the output parameters.
name | type | description |
---|---|---|
output_model |
BayesianLinearRegressionType |
Output BayesianLinearRegression model. |
predictions |
matrix |
If test is specified, the predicted responses will be saved here. |
stds |
matrix |
If specified, this is where the standard deviations of the predictive distribution will be saved. |
{: #python_bayesian_linear_regression_detailed-documentation }
An implementation of Bayesian linear regression. This model is a probabilistic view and implementation of linear regression: the final solution is obtained by computing a posterior distribution from a Gaussian likelihood and a zero-mean isotropic Gaussian prior distribution on the solution. Optimization is automatic and does not require cross-validation: it is performed by maximizing the evidence function, and the parameters are tuned during this maximization of the marginal likelihood. This procedure embodies Occam's razor, which penalizes overly complex solutions.
This program is able to train a Bayesian linear regression model or load a model from file, output regression predictions for a test set, and save the trained model to a file.
To train a BayesianLinearRegression model, the input
and responses
parameters must be given. The center
and scale
parameters control the centering and the normalizing options. A trained model can be saved with the output_model
output parameter. If no training is desired at all, a model can be passed via the input_model
parameter.
The program can also provide predictions for test data using either the trained model or the given input model. Test points can be specified with the test
parameter. Predicted responses to the test points can be saved with the predictions
output parameter. The corresponding standard deviations can be saved by specifying the stds
parameter.
For example, the following command trains a model on the data 'data'
and responses 'responses'
with center set to true and scale set to false (so, Bayesian linear regression is being solved), and then the model is saved to 'blr_model':
>>> output = bayesian_linear_regression(input=data, responses=responses,
center=1, scale=0)
>>> blr_model = output['output_model']
The following command uses the 'blr_model'
to provide predicted responses for the data 'test'
and save those responses to 'test_predictions'
:
>>> output = bayesian_linear_regression(input_model=blr_model, test=test)
>>> test_predictions = output['predictions']
Because the estimator computes a predictive distribution instead of a simple point estimate, the stds
parameter allows one to save the prediction uncertainties:
>>> output = bayesian_linear_regression(input_model=blr_model, test=test)
>>> test_predictions = output['predictions']
>>> stds = output['stds']
- Bayesian Interpolation
- Bishop, C. M., *Pattern Recognition and Machine Learning*, New York: Springer, 2006 (Section 3.3, Bayesian Linear Regression)
- mlpack::regression::BayesianLinearRegression C++ class documentation
name | type | description | default |
---|---|---|---|
center |
Bool |
Center the data and fit the intercept if enabled. | false |
input |
Float64 matrix-like |
Matrix of covariates (X). | zeros(0, 0) |
input_model |
BayesianLinearRegression |
Trained BayesianLinearRegression model to use. | nothing |
responses |
Float64 vector-like |
Matrix of responses/observations (y). | Float64[] |
scale |
Bool |
Scale each feature by their standard deviations if enabled. | false |
test |
Float64 matrix-like |
Matrix containing points to regress on (test points). | zeros(0, 0) |
verbose |
Bool |
Display informational messages and the full list of parameters and timers at the end of execution. | false |
Results are returned as a tuple, and can be unpacked directly into return values or stored directly as a tuple; undesired results can be ignored with the _ keyword.
name | type | description |
---|---|---|
output_model |
BayesianLinearRegression |
Output BayesianLinearRegression model. |
predictions |
Float64 matrix-like |
If test is specified, the predicted responses will be saved here. |
stds |
Float64 matrix-like |
If specified, this is where the standard deviations of the predictive distribution will be saved. |
{: #julia_bayesian_linear_regression_detailed-documentation }
An implementation of Bayesian linear regression. This model is a probabilistic view and implementation of linear regression: the final solution is obtained by computing a posterior distribution from a Gaussian likelihood and a zero-mean isotropic Gaussian prior distribution on the solution. Optimization is automatic and does not require cross-validation: it is performed by maximizing the evidence function, and the parameters are tuned during this maximization of the marginal likelihood. This procedure embodies Occam's razor, which penalizes overly complex solutions.
This program is able to train a Bayesian linear regression model or load a model from file, output regression predictions for a test set, and save the trained model to a file.
To train a BayesianLinearRegression model, the input
and responses
parameters must be given. The center
and scale
parameters control the centering and the normalizing options. A trained model can be saved with the output_model
output parameter. If no training is desired at all, a model can be passed via the input_model
parameter.
The program can also provide predictions for test data using either the trained model or the given input model. Test points can be specified with the test
parameter. Predicted responses to the test points can be saved with the predictions
output parameter. The corresponding standard deviations can be saved by specifying the stds
parameter.
For example, the following command trains a model on the data data
and responses responses
with center set to true and scale set to false (so, Bayesian linear regression is being solved), and then the model is saved to blr_model:
julia> using CSV
julia> data = CSV.read("data.csv")
julia> responses = CSV.read("responses.csv")
julia> blr_model, _, _ = bayesian_linear_regression(center=true,
input=data, responses=responses, scale=false)
The following command uses the blr_model
to provide predicted responses for the data test
and save those responses to test_predictions
:
julia> using CSV
julia> test = CSV.read("test.csv")
julia> _, test_predictions, _ =
bayesian_linear_regression(input_model=blr_model, test=test)
Because the estimator computes a predictive distribution instead of a simple point estimate, the stds
parameter allows one to save the prediction uncertainties:
julia> using CSV
julia> test = CSV.read("test.csv")
julia> _, test_predictions, stds =
bayesian_linear_regression(input_model=blr_model, test=test)
- Bayesian Interpolation
- Bishop, C. M., *Pattern Recognition and Machine Learning*, New York: Springer, 2006 (Section 3.3, Bayesian Linear Regression)
- mlpack::regression::BayesianLinearRegression C++ class documentation
There are two types of input options: required options, which are passed directly to the function call, and optional options, which are passed via an initialized struct, which allows keyword access to each of the options.
name | type | description | default |
---|---|---|---|
Center |
bool |
Center the data and fit the intercept if enabled. | false |
Input |
*mat.Dense |
Matrix of covariates (X). | mat.NewDense(1, 1, nil) |
InputModel |
bayesianLinearRegression |
Trained BayesianLinearRegression model to use. | nil |
Responses |
*mat.Dense (1d) |
Matrix of responses/observations (y). | mat.NewDense(1, 1, nil) |
Scale |
bool |
Scale each feature by their standard deviations if enabled. | false |
Test |
*mat.Dense |
Matrix containing points to regress on (test points). | mat.NewDense(1, 1, nil) |
Verbose |
bool |
Display informational messages and the full list of parameters and timers at the end of execution. | false |
Output options are returned via Go's support for multiple return values, in the order listed below.
name | type | description |
---|---|---|
outputModel |
bayesianLinearRegression |
Output BayesianLinearRegression model. |
predictions |
*mat.Dense |
If Test is specified, the predicted responses will be saved here. |
stds |
*mat.Dense |
If specified, this is where the standard deviations of the predictive distribution will be saved. |
{: #go_bayesian_linear_regression_detailed-documentation }
An implementation of Bayesian linear regression. This model is a probabilistic view and implementation of linear regression: the final solution is obtained by computing a posterior distribution from a Gaussian likelihood and a zero-mean isotropic Gaussian prior distribution on the solution. Optimization is automatic and does not require cross-validation: it is performed by maximizing the evidence function, and the parameters are tuned during this maximization of the marginal likelihood. This procedure embodies Occam's razor, which penalizes overly complex solutions.
This program is able to train a Bayesian linear regression model or load a model from file, output regression predictions for a test set, and save the trained model to a file.
To train a BayesianLinearRegression model, the Input
and Responses
parameters must be given. The Center
and Scale
parameters control the centering and the normalizing options. A trained model can be saved with the OutputModel
output parameter. If no training is desired at all, a model can be passed via the InputModel
parameter.
The program can also provide predictions for test data using either the trained model or the given input model. Test points can be specified with the Test
parameter. Predicted responses to the test points can be saved with the Predictions
output parameter. The corresponding standard deviations can be saved by specifying the Stds
parameter.
For example, the following command trains a model on the data data
and responses responses
with center set to true and scale set to false (so, Bayesian linear regression is being solved), and then the model is saved to blr_model:
// Initialize optional parameters for BayesianLinearRegression().
param := mlpack.BayesianLinearRegressionOptions()
param.Input = data
param.Responses = responses
param.Center = true
param.Scale = false
blr_model, _, _ := mlpack.BayesianLinearRegression(param)
The following command uses the blr_model
to provide predicted responses for the data test
and save those responses to test_predictions
:
// Initialize optional parameters for BayesianLinearRegression().
param := mlpack.BayesianLinearRegressionOptions()
param.InputModel = &blr_model
param.Test = test
_, test_predictions, _ := mlpack.BayesianLinearRegression(param)
Because the estimator computes a predictive distribution instead of a simple point estimate, the Stds
parameter allows one to save the prediction uncertainties:
// Initialize optional parameters for BayesianLinearRegression().
param := mlpack.BayesianLinearRegressionOptions()
param.InputModel = &blr_model
param.Test = test
_, test_predictions, stds := mlpack.BayesianLinearRegression(param)
- Bayesian Interpolation
- Bishop, C. M., *Pattern Recognition and Machine Learning*, New York: Springer, 2006 (Section 3.3, Bayesian Linear Regression)
- mlpack::regression::BayesianLinearRegression C++ class documentation
name | type | description | default |
---|---|---|---|
center |
logical |
Center the data and fit the intercept if enabled. | FALSE |
input |
numeric matrix |
Matrix of covariates (X). | matrix(numeric(), 0, 0) |
input_model |
BayesianLinearRegression |
Trained BayesianLinearRegression model to use. | NA |
responses |
numeric vector |
Matrix of responses/observations (y). | matrix(numeric(), 0, 0) |
scale |
logical |
Scale each feature by their standard deviations if enabled. | FALSE |
test |
numeric matrix |
Matrix containing points to regress on (test points). | matrix(numeric(), 0, 0) |
verbose |
logical |
Display informational messages and the full list of parameters and timers at the end of execution. | FALSE |
Results are returned in an R list. The keys of the list are the names of the output parameters.
name | type | description |
---|---|---|
output_model |
BayesianLinearRegression |
Output BayesianLinearRegression model. |
predictions |
numeric matrix |
If test is specified, the predicted responses will be saved here. |
stds |
numeric matrix |
If specified, this is where the standard deviations of the predictive distribution will be saved. |
{: #r_bayesian_linear_regression_detailed-documentation }
An implementation of Bayesian linear regression. This model is a probabilistic view and implementation of linear regression: the final solution is obtained by computing a posterior distribution from a Gaussian likelihood and a zero-mean isotropic Gaussian prior distribution on the solution. Optimization is automatic and does not require cross-validation: it is performed by maximizing the evidence function, and the parameters are tuned during this maximization of the marginal likelihood. This procedure embodies Occam's razor, which penalizes overly complex solutions.
This program is able to train a Bayesian linear regression model or load a model from file, output regression predictions for a test set, and save the trained model to a file.
To train a BayesianLinearRegression model, the input
and responses
parameters must be given. The center
and scale
parameters control the centering and the normalizing options. A trained model can be saved with the output_model
output parameter. If no training is desired at all, a model can be passed via the input_model
parameter.
The program can also provide predictions for test data using either the trained model or the given input model. Test points can be specified with the test
parameter. Predicted responses to the test points can be saved with the predictions
output parameter. The corresponding standard deviations can be saved by specifying the stds
parameter.
For example, the following command trains a model on the data "data"
and responses "responses"
with center set to true and scale set to false (so, Bayesian linear regression is being solved), and then the model is saved to "blr_model":
R> output <- bayesian_linear_regression(input=data, responses=responses,
center=TRUE, scale=FALSE)
R> blr_model <- output$output_model
The following command uses the "blr_model"
to provide predicted responses for the data "test"
and save those responses to "test_predictions"
:
R> output <- bayesian_linear_regression(input_model=blr_model, test=test)
R> test_predictions <- output$predictions
Because the estimator computes a predictive distribution instead of a simple point estimate, the stds
parameter allows one to save the prediction uncertainties:
R> output <- bayesian_linear_regression(input_model=blr_model, test=test)
R> test_predictions <- output$predictions
R> stds <- output$stds
- Bayesian Interpolation
- Bishop, C. M., *Pattern Recognition and Machine Learning*, New York: Springer, 2006 (Section 3.3, Bayesian Linear Regression)
- mlpack::regression::BayesianLinearRegression C++ class documentation
// Initialize optional parameters for Cf().
param := mlpack.CfOptions()
param.Algorithm = "NMF"
param.AllUserRecommendations = false
param.InputModel = nil
param.Interpolation = "average"
param.IterationOnlyTermination = false
param.MaxIterations = 1000
param.MinResidue = 1e-05
param.NeighborSearch = "euclidean"
param.Neighborhood = 5
param.Normalization = "none"
param.Query = mat.NewDense(1, 1, nil)
param.Rank = 0
param.Recommendations = 5
param.Seed = 0
param.Test = mat.NewDense(1, 1, nil)
param.Training = mat.NewDense(1, 1, nil)
output, output_model := mlpack.Cf(param)
R> library(mlpack)
R> d <- cf(algorithm="NMF", all_user_recommendations=FALSE,
input_model=NA, interpolation="average",
iteration_only_termination=FALSE, max_iterations=1000,
min_residue=1e-05, neighbor_search="euclidean", neighborhood=5,
normalization="none", query=matrix(integer(), 0, 0), rank=0,
recommendations=5, seed=0, test=matrix(numeric(), 0, 0),
training=matrix(numeric(), 0, 0), verbose=FALSE)
R> output <- d$output
R> output_model <- d$output_model
An implementation of several collaborative filtering (CF) techniques for recommender systems. This can be used to train a new CF model, or use an existing CF model to compute recommendations. Detailed documentation{: .language-detail-link #cli }Detailed documentation{: .language-detail-link #python }Detailed documentation{: .language-detail-link #julia }Detailed documentation{: .language-detail-link #go }Detailed documentation{: .language-detail-link #r }.
name | type | description | default |
---|---|---|---|
--algorithm (-a) |
string |
Algorithm used for matrix factorization. | 'NMF' |
--all_user_recommendations (-A) |
flag |
Generate recommendations for all users. | |
--help (-h) |
flag |
Default help info. Only exists in CLI binding. | |
--info |
string |
Print help on a specific option. Only exists in CLI binding. | '' |
--input_model_file (-m) |
CFModel file |
Trained CF model to load. | '' |
--interpolation (-i) |
string |
Algorithm used for weight interpolation. | 'average' |
--iteration_only_termination (-I) |
flag |
Terminate only when the maximum number of iterations is reached. | |
--max_iterations (-N) |
int |
Maximum number of iterations. If set to zero, there is no limit on the number of iterations. | 1000 |
--min_residue (-r) |
double |
Residue required to terminate the factorization (lower values generally mean better fits). | 1e-05 |
--neighbor_search (-S) |
string |
Algorithm used for neighbor search. | 'euclidean' |
--neighborhood (-n) |
int |
Size of the neighborhood of similar users to consider for each query user. | 5 |
--normalization (-z) |
string |
Normalization performed on the ratings. | 'none' |
--query_file (-q) |
2-d index matrix file |
List of query users for which recommendations should be generated. | '' |
--rank (-R) |
int |
Rank of decomposed matrices (if 0, a heuristic is used to estimate the rank). | 0 |
--recommendations (-c) |
int |
Number of recommendations to generate for each query user. | 5 |
--seed (-s) |
int |
Set the random seed (0 uses std::time(NULL)). | 0 |
--test_file (-T) |
2-d matrix file |
Test set to calculate RMSE on. | '' |
--training_file (-t) |
2-d matrix file |
Input dataset to perform CF on. | '' |
--verbose (-v) |
flag |
Display informational messages and the full list of parameters and timers at the end of execution. | |
--version (-V) |
flag |
Display the version of mlpack. Only exists in CLI binding. | |
name | type | description |
---|---|---|
--output_file (-o) |
2-d index matrix file |
Matrix that will store output recommendations. |
--output_model_file (-M) |
CFModel file |
Output for trained CF model. |
{: #cli_cf_detailed-documentation }
This program performs collaborative filtering (CF) on the given dataset. Given a list of users, items, and preferences (the --training_file (-t)
parameter), the program will perform a matrix decomposition and then can perform a series of actions related to collaborative filtering. Alternatively, the program can load an existing saved CF model with the --input_model_file (-m)
parameter and then use that model to provide recommendations or predict values.
The input matrix should be a matrix of ratings with three columns, where the first column is the user, the second column is the item, and the third column is that user's rating of that item. Both the users and items should be numeric indices, not names. The indices are assumed to start from 0.
A set of query users for which recommendations can be generated may be specified with the --query_file (-q)
parameter; alternately, recommendations may be generated for every user in the dataset by specifying the --all_user_recommendations (-A)
parameter. In addition, the number of recommendations per user to generate can be specified with the --recommendations (-c)
parameter, and the number of similar users (the size of the neighborhood) to be considered when generating recommendations can be specified with the --neighborhood (-n)
parameter.
For performing the matrix decomposition, the following optimization algorithms can be specified via the --algorithm (-a)
parameter:
- 'RegSVD' -- Regularized SVD using an SGD optimizer
- 'NMF' -- Non-negative matrix factorization with alternating least squares update rules
- 'BatchSVD' -- SVD batch learning
- 'SVDIncompleteIncremental' -- SVD incomplete incremental learning
- 'SVDCompleteIncremental' -- SVD complete incremental learning
- 'BiasSVD' -- Bias SVD using an SGD optimizer
- 'SVDPP' -- SVD++ using an SGD optimizer
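All of these methods compute (exactly or approximately) a low-rank factorization of the ratings matrix; schematically, for a user-item rating matrix $$\mathbf{V}$$,

$$\mathbf{V} \approx \mathbf{W} \mathbf{H},$$

where the inner dimension of $$\mathbf{W}$$ and $$\mathbf{H}$$ is the rank controlled by the --rank (-R) parameter, and predicted ratings are read off the corresponding entries of $$\mathbf{W} \mathbf{H}$$. This is the generic formulation; the algorithms above differ in their constraints and update rules.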
The following neighbor search algorithms can be specified via the --neighbor_search (-S)
parameter:
- 'cosine' -- Cosine Search Algorithm
- 'euclidean' -- Euclidean Search Algorithm
- 'pearson' -- Pearson Search Algorithm
The following weight interpolation algorithms can be specified via the --interpolation (-i)
parameter:
- 'average' -- Average Interpolation Algorithm
- 'regression' -- Regression Interpolation Algorithm
- 'similarity' -- Similarity Interpolation Algorithm
The following ranking normalization algorithms can be specified via the --normalization (-z)
parameter:
- 'none' -- No Normalization
- 'item_mean' -- Item Mean Normalization
- 'overall_mean' -- Overall Mean Normalization
- 'user_mean' -- User Mean Normalization
- 'z_score' -- Z-Score Normalization
A trained model may be saved with the --output_model_file (-M)
output parameter.
To train a CF model on a dataset 'training_set.csv'
using NMF for decomposition and saving the trained model to 'model.bin'
, one could call:
$ mlpack_cf --training_file training_set.csv --algorithm NMF
--output_model_file model.bin
Then, to use this model to generate recommendations for the list of users in the query set 'users.csv'
, storing 5 recommendations in 'recommendations.csv'
, one could call
$ mlpack_cf --input_model_file model.bin --query_file users.csv
--recommendations 5 --output_file recommendations.csv
name | type | description | default |
---|---|---|---|
algorithm |
str |
Algorithm used for matrix factorization. | 'NMF' |
all_user_recommendations |
bool |
Generate recommendations for all users. | False |
copy_all_inputs |
bool |
If specified, all input parameters will be deep copied before the method is run. This is useful for debugging problems where the input parameters are being modified by the algorithm, but can slow down the code. Only exists in Python binding. | False |
input_model |
CFModelType |
Trained CF model to load. | None |
interpolation |
str |
Algorithm used for weight interpolation. | 'average' |
iteration_only_termination |
bool |
Terminate only when the maximum number of iterations is reached. | False |
max_iterations |
int |
Maximum number of iterations. If set to zero, there is no limit on the number of iterations. | 1000 |
min_residue |
float |
Residue required to terminate the factorization (lower values generally mean better fits). | 1e-05 |
neighbor_search |
str |
Algorithm used for neighbor search. | 'euclidean' |
neighborhood |
int |
Size of the neighborhood of similar users to consider for each query user. | 5 |
normalization |
str |
Normalization performed on the ratings. | 'none' |
query |
int matrix |
List of query users for which recommendations should be generated. | np.empty([0, 0], dtype=np.uint64) |
rank |
int |
Rank of decomposed matrices (if 0, a heuristic is used to estimate the rank). | 0 |
recommendations |
int |
Number of recommendations to generate for each query user. | 5 |
seed |
int |
Set the random seed (0 uses std::time(NULL)). | 0 |
test |
matrix |
Test set to calculate RMSE on. | np.empty([0, 0]) |
training |
matrix |
Input dataset to perform CF on. | np.empty([0, 0]) |
verbose |
bool |
Display informational messages and the full list of parameters and timers at the end of execution. | False |
Results are returned in a Python dictionary. The keys of the dictionary are the names of the output parameters.
name | type | description |
---|---|---|
output |
int matrix |
Matrix that will store output recommendations. |
output_model |
CFModelType |
Output for trained CF model. |
{: #python_cf_detailed-documentation }
This program performs collaborative filtering (CF) on the given dataset. Given a list of users, items, and preferences (the training
parameter), the program will perform a matrix decomposition and then can perform a series of actions related to collaborative filtering. Alternatively, the program can load an existing saved CF model with the input_model
parameter and then use that model to provide recommendations or predict values.
The input matrix should be a matrix of ratings with three columns, where the first column is the user, the second column is the item, and the third column is that user's rating of that item. Both the users and items should be numeric indices, not names. The indices are assumed to start from 0.
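As an illustration of this layout, a tiny ratings matrix could be constructed as follows (a sketch; the array contents here are made up for illustration):
>>> import numpy as np
>>> # Each row is one rating: (user index, item index, rating value).
>>> # Users and items are numeric indices starting from 0.
>>> ratings = np.array([[0, 0, 4.0],
...                     [0, 1, 2.5],
...                     [1, 0, 3.0],
...                     [1, 2, 5.0]])
>>> # This matrix would then be passed as the training parameter, e.g.:
>>> # output = cf(training=ratings, algorithm='NMF')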
A set of query users for which recommendations can be generated may be specified with the query
parameter; alternately, recommendations may be generated for every user in the dataset by specifying the all_user_recommendations
parameter. In addition, the number of recommendations per user to generate can be specified with the recommendations
parameter, and the number of similar users (the size of the neighborhood) to be considered when generating recommendations can be specified with the neighborhood
parameter.
For performing the matrix decomposition, the following optimization algorithms can be specified via the algorithm
parameter:
- 'RegSVD' -- Regularized SVD using an SGD optimizer
- 'NMF' -- Non-negative matrix factorization with alternating least squares update rules
- 'BatchSVD' -- SVD batch learning
- 'SVDIncompleteIncremental' -- SVD incomplete incremental learning
- 'SVDCompleteIncremental' -- SVD complete incremental learning
- 'BiasSVD' -- Bias SVD using an SGD optimizer
- 'SVDPP' -- SVD++ using an SGD optimizer
The following neighbor search algorithms can be specified via the neighbor_search
parameter:
- 'cosine' -- Cosine Search Algorithm
- 'euclidean' -- Euclidean Search Algorithm
- 'pearson' -- Pearson Search Algorithm
The following weight interpolation algorithms can be specified via the interpolation
parameter:
- 'average' -- Average Interpolation Algorithm
- 'regression' -- Regression Interpolation Algorithm
- 'similarity' -- Similarity Interpolation Algorithm
The following ranking normalization algorithms can be specified via the normalization
parameter:
- 'none' -- No Normalization
- 'item_mean' -- Item Mean Normalization
- 'overall_mean' -- Overall Mean Normalization
- 'user_mean' -- User Mean Normalization
- 'z_score' -- Z-Score Normalization
A trained model may be saved with the output_model
output parameter.
To train a CF model on a dataset 'training_set'
using NMF for decomposition and saving the trained model to 'model'
, one could call:
>>> output = cf(training=training_set, algorithm='NMF')
>>> model = output['output_model']
Then, to use this model to generate recommendations for the list of users in the query set 'users'
, storing 5 recommendations in 'recommendations'
, one could call
>>> output = cf(input_model=model, query=users, recommendations=5)
>>> recommendations = output['output']
name | type | description | default |
---|---|---|---|
algorithm |
String |
Algorithm used for matrix factorization. | "NMF" |
all_user_recommendations |
Bool |
Generate recommendations for all users. | false |
input_model |
CFModel |
Trained CF model to load. | nothing |
interpolation |
String |
Algorithm used for weight interpolation. | "average" |
iteration_only_termination |
Bool |
Terminate only when the maximum number of iterations is reached. | false |
max_iterations |
Int |
Maximum number of iterations. If set to zero, there is no limit on the number of iterations. | 1000 |
min_residue |
Float64 |
Residue required to terminate the factorization (lower values generally mean better fits). | 1e-05 |
neighbor_search |
String |
Algorithm used for neighbor search. | "euclidean" |
neighborhood |
Int |
Size of the neighborhood of similar users to consider for each query user. | 5 |
normalization |
String |
Normalization performed on the ratings. | "none" |
query |
Int matrix-like |
List of query users for which recommendations should be generated. | zeros(Int, 0, 0) |
rank |
Int |
Rank of decomposed matrices (if 0, a heuristic is used to estimate the rank). | 0 |
recommendations |
Int |
Number of recommendations to generate for each query user. | 5 |
seed |
Int |
Set the random seed (0 uses std::time(NULL)). | 0 |
test |
Float64 matrix-like |
Test set to calculate RMSE on. | zeros(0, 0) |
training |
Float64 matrix-like |
Input dataset to perform CF on. | zeros(0, 0) |
verbose |
Bool |
Display informational messages and the full list of parameters and timers at the end of execution. | false |
Results are returned as a tuple, and can be unpacked directly into return values or stored directly as a tuple; undesired results can be ignored with the _ keyword.
name | type | description |
---|---|---|
output |
Int matrix-like |
Matrix that will store output recommendations. |
output_model |
CFModel |
Output for trained CF model. |
{: #julia_cf_detailed-documentation }
This program performs collaborative filtering (CF) on the given dataset. Given a list of users, items, and preferences (the training
parameter), the program will perform a matrix decomposition and then can perform a series of actions related to collaborative filtering. Alternatively, the program can load an existing saved CF model with the input_model
parameter and then use that model to provide recommendations or predict values.
The input matrix should be a matrix of ratings with three columns, where the first column is the user, the second column is the item, and the third column is that user's rating of that item. Both the users and items should be numeric indices, not names. The indices are assumed to start from 0.
A set of query users for which recommendations can be generated may be specified with the query
parameter; alternately, recommendations may be generated for every user in the dataset by specifying the all_user_recommendations
parameter. In addition, the number of recommendations per user to generate can be specified with the recommendations
parameter, and the number of similar users (the size of the neighborhood) to be considered when generating recommendations can be specified with the neighborhood
parameter.
For performing the matrix decomposition, the following optimization algorithms can be specified via the algorithm
parameter:
- 'RegSVD' -- Regularized SVD using an SGD optimizer
- 'NMF' -- Non-negative matrix factorization with alternating least squares update rules
- 'BatchSVD' -- SVD batch learning
- 'SVDIncompleteIncremental' -- SVD incomplete incremental learning
- 'SVDCompleteIncremental' -- SVD complete incremental learning
- 'BiasSVD' -- Bias SVD using an SGD optimizer
- 'SVDPP' -- SVD++ using an SGD optimizer
The following neighbor search algorithms can be specified via the neighbor_search
parameter:
- 'cosine' -- Cosine Search Algorithm
- 'euclidean' -- Euclidean Search Algorithm
- 'pearson' -- Pearson Search Algorithm
The following weight interpolation algorithms can be specified via the interpolation
parameter:
- 'average' -- Average Interpolation Algorithm
- 'regression' -- Regression Interpolation Algorithm
- 'similarity' -- Similarity Interpolation Algorithm
The following ranking normalization algorithms can be specified via the normalization
parameter:
- 'none' -- No Normalization
- 'item_mean' -- Item Mean Normalization
- 'overall_mean' -- Overall Mean Normalization
- 'user_mean' -- User Mean Normalization
- 'z_score' -- Z-Score Normalization
A trained model may be saved with the output_model
output parameter.
To train a CF model on a dataset training_set
using NMF for decomposition and saving the trained model to model
, one could call:
julia> using CSV
julia> training_set = CSV.read("training_set.csv")
julia> _, model = cf(algorithm="NMF", training=training_set)
Then, to use this model to generate recommendations for the list of users in the query set users
, storing 5 recommendations in recommendations
, one could call
julia> using CSV
julia> users = CSV.read("users.csv"; type=Int)
julia> recommendations, _ = cf(input_model=model, query=users,
recommendations=5)
There are two types of input options: required options, which are passed directly to the function call, and optional options, which are passed via an initialized struct, which allows keyword access to each of the options.
name | type | description | default |
---|---|---|---|
Algorithm |
string |
Algorithm used for matrix factorization. | "NMF" |
AllUserRecommendations |
bool |
Generate recommendations for all users. | false |
InputModel |
cfModel |
Trained CF model to load. | nil |
Interpolation |
string |
Algorithm used for weight interpolation. | "average" |
IterationOnlyTermination |
bool |
Terminate only when the maximum number of iterations is reached. | false |
MaxIterations |
int |
Maximum number of iterations. If set to zero, there is no limit on the number of iterations. | 1000 |
MinResidue |
float64 |
Residue required to terminate the factorization (lower values generally mean better fits). | 1e-05 |
NeighborSearch |
string |
Algorithm used for neighbor search. | "euclidean" |
Neighborhood |
int |
Size of the neighborhood of similar users to consider for each query user. | 5 |
Normalization |
string |
Normalization performed on the ratings. | "none" |
Query |
*mat.Dense (with ints) |
List of query users for which recommendations should be generated. | mat.NewDense(1, 1, nil) |
Rank |
int |
Rank of decomposed matrices (if 0, a heuristic is used to estimate the rank). | 0 |
Recommendations |
int |
Number of recommendations to generate for each query user. | 5 |
Seed |
int |
Set the random seed (0 uses std::time(NULL)). | 0 |
Test |
*mat.Dense |
Test set to calculate RMSE on. | mat.NewDense(1, 1, nil) |
Training |
*mat.Dense |
Input dataset to perform CF on. | mat.NewDense(1, 1, nil) |
Verbose |
bool |
Display informational messages and the full list of parameters and timers at the end of execution. | false |
Output options are returned via Go's support for multiple return values, in the order listed below.
name | type | description |
---|---|---|
output |
*mat.Dense (with ints) |
Matrix that will store output recommendations. |
outputModel |
cfModel |
Output for trained CF model. |
{: #go_cf_detailed-documentation }
This program performs collaborative filtering (CF) on the given dataset. Given a list of users, items, and preferences (the Training
parameter), the program will perform a matrix decomposition and then can perform a series of actions related to collaborative filtering. Alternatively, the program can load an existing saved CF model with the InputModel
parameter and then use that model to provide recommendations or predict values.
The input matrix should be a matrix of ratings with three columns, where the first column is the user, the second column is the item, and the third column is that user's rating of that item. Both the users and items should be numeric indices, not names. The indices are assumed to start from 0.
A set of query users for which recommendations can be generated may be specified with the Query
parameter; alternately, recommendations may be generated for every user in the dataset by specifying the AllUserRecommendations
parameter. In addition, the number of recommendations per user to generate can be specified with the Recommendations
parameter, and the number of similar users (the size of the neighborhood) to be considered when generating recommendations can be specified with the Neighborhood
parameter.
For performing the matrix decomposition, the following optimization algorithms can be specified via the Algorithm
parameter:
- 'RegSVD' -- Regularized SVD using an SGD optimizer
- 'NMF' -- Non-negative matrix factorization with alternating least squares update rules
- 'BatchSVD' -- SVD batch learning
- 'SVDIncompleteIncremental' -- SVD incomplete incremental learning
- 'SVDCompleteIncremental' -- SVD complete incremental learning
- 'BiasSVD' -- Bias SVD using an SGD optimizer
- 'SVDPP' -- SVD++ using an SGD optimizer
The following neighbor search algorithms can be specified via the NeighborSearch
parameter:
- 'cosine' -- Cosine Search Algorithm
- 'euclidean' -- Euclidean Search Algorithm
- 'pearson' -- Pearson Search Algorithm
The following weight interpolation algorithms can be specified via the Interpolation
parameter:
- 'average' -- Average Interpolation Algorithm
- 'regression' -- Regression Interpolation Algorithm
- 'similarity' -- Similarity Interpolation Algorithm
The following ranking normalization algorithms can be specified via the Normalization
parameter:
- 'none' -- No Normalization
- 'item_mean' -- Item Mean Normalization
- 'overall_mean' -- Overall Mean Normalization
- 'user_mean' -- User Mean Normalization
- 'z_score' -- Z-Score Normalization
A trained model may be saved with the OutputModel
output parameter.
To train a CF model on a dataset training_set
using NMF for decomposition and saving the trained model to model
, one could call:
// Initialize optional parameters for Cf().
param := mlpack.CfOptions()
param.Training = training_set
param.Algorithm = "NMF"
_, model := mlpack.Cf(param)
Then, to use this model to generate recommendations for the list of users in the query set users
, storing 5 recommendations in recommendations
, one could call
// Initialize optional parameters for Cf().
param := mlpack.CfOptions()
param.InputModel = &model
param.Query = users
param.Recommendations = 5
recommendations, _ := mlpack.Cf(param)
name | type | description | default |
---|---|---|---|
algorithm |
character |
Algorithm used for matrix factorization. | "NMF" |
all_user_recommendations |
logical |
Generate recommendations for all users. | FALSE |
input_model |
CFModel |
Trained CF model to load. | NA |
interpolation |
character |
Algorithm used for weight interpolation. | "average" |
iteration_only_termination |
logical |
Terminate only when the maximum number of iterations is reached. | FALSE |
max_iterations |
integer |
Maximum number of iterations. If set to zero, there is no limit on the number of iterations. | 1000 |
min_residue |
numeric |
Residue required to terminate the factorization (lower values generally mean better fits). | 1e-05 |
neighbor_search |
character |
Algorithm used for neighbor search. | "euclidean" |
neighborhood |
integer |
Size of the neighborhood of similar users to consider for each query user. | 5 |
normalization |
character |
Normalization performed on the ratings. | "none" |
query |
integer matrix |
List of query users for which recommendations should be generated. | matrix(integer(), 0, 0) |
rank |
integer |
Rank of decomposed matrices (if 0, a heuristic is used to estimate the rank). | 0 |
recommendations |
integer |
Number of recommendations to generate for each query user. | 5 |
seed |
integer |
Set the random seed (0 uses std::time(NULL)). | 0 |
test |
numeric matrix |
Test set to calculate RMSE on. | matrix(numeric(), 0, 0) |
training |
numeric matrix |
Input dataset to perform CF on. | matrix(numeric(), 0, 0) |
verbose |
logical |
Display informational messages and the full list of parameters and timers at the end of execution. | FALSE |
Results are returned in an R list. The keys of the list are the names of the output parameters.
name | type | description |
---|---|---|
output |
integer matrix |
Matrix that will store output recommendations. |
output_model |
CFModel |
Output for trained CF model. |
{: #r_cf_detailed-documentation }
This program performs collaborative filtering (CF) on the given dataset. Given a list of users, items, and preferences (the training
parameter), the program will perform a matrix decomposition and then can perform a series of actions related to collaborative filtering. Alternatively, the program can load an existing saved CF model with the input_model
parameter and then use that model to provide recommendations or predict values.
The input matrix should be a matrix of ratings with three columns, where the first column is the user, the second column is the item, and the third column is that user's rating of that item. Both the users and items should be numeric indices, not names. The indices are assumed to start from 0.
A set of query users for which recommendations can be generated may be specified with the query
parameter; alternately, recommendations may be generated for every user in the dataset by specifying the all_user_recommendations
parameter. In addition, the number of recommendations per user to generate can be specified with the recommendations
parameter, and the number of similar users (the size of the neighborhood) to be considered when generating recommendations can be specified with the neighborhood
parameter.
For performing the matrix decomposition, the following optimization algorithms can be specified via the algorithm
parameter:
- 'RegSVD' -- Regularized SVD using an SGD optimizer
- 'NMF' -- Non-negative matrix factorization with alternating least squares update rules
- 'BatchSVD' -- SVD batch learning
- 'SVDIncompleteIncremental' -- SVD incomplete incremental learning
- 'SVDCompleteIncremental' -- SVD complete incremental learning
- 'BiasSVD' -- Bias SVD using an SGD optimizer
- 'SVDPP' -- SVD++ using an SGD optimizer
The following neighbor search algorithms can be specified via the neighbor_search
parameter:
- 'cosine' -- Cosine Search Algorithm
- 'euclidean' -- Euclidean Search Algorithm
- 'pearson' -- Pearson Search Algorithm
The following weight interpolation algorithms can be specified via the interpolation
parameter:
- 'average' -- Average Interpolation Algorithm
- 'regression' -- Regression Interpolation Algorithm
- 'similarity' -- Similarity Interpolation Algorithm
The following ranking normalization algorithms can be specified via the normalization
parameter:
- 'none' -- No Normalization
- 'item_mean' -- Item Mean Normalization
- 'overall_mean' -- Overall Mean Normalization
- 'user_mean' -- User Mean Normalization
- 'z_score' -- Z-Score Normalization
A trained model may be saved with the output_model
output parameter.
To train a CF model on a dataset "training_set"
using NMF for decomposition and saving the trained model to "model"
, one could call:
R> output <- cf(training=training_set, algorithm="NMF")
R> model <- output$output_model
Then, to use this model to generate recommendations for the list of users in the query set "users"
, storing 5 recommendations in "recommendations"
, one could call
R> output <- cf(input_model=model, query=users, recommendations=5)
R> recommendations <- output$output
// Initialize optional parameters for Dbscan().
param := mlpack.DbscanOptions()
param.Epsilon = 1
param.MinSize = 5
param.Naive = false
param.SelectionType = "ordered"
param.SingleMode = false
param.TreeType = "kd"
assignments, centroids := mlpack.Dbscan(input, param)
R> library(mlpack)
R> d <- dbscan(epsilon=1, input=matrix(numeric(), 0, 0), min_size=5,
naive=FALSE, selection_type="ordered", single_mode=FALSE,
tree_type="kd", verbose=FALSE)
R> assignments <- d$assignments
R> centroids <- d$centroids
An implementation of DBSCAN clustering. Given a dataset, this can compute and return a clustering of that dataset. Detailed documentation{: .language-detail-link #cli }Detailed documentation{: .language-detail-link #python }Detailed documentation{: .language-detail-link #julia }Detailed documentation{: .language-detail-link #go }Detailed documentation{: .language-detail-link #r }.
name | type | description | default |
---|---|---|---|
--epsilon (-e) |
double |
Radius of each range search. | 1 |
--help (-h) |
flag |
Default help info. Only exists in CLI binding. | |
--info |
string |
Print help on a specific option. Only exists in CLI binding. | '' |
--input_file (-i) |
2-d matrix file |
Input dataset to cluster. | **--** |
--min_size (-m) |
int |
Minimum number of points for a cluster. | 5 |
--naive (-N) |
flag |
If set, brute-force range search (not tree-based) will be used. | |
--selection_type (-s) |
string |
If using a point selection policy, the type of selection to use ('ordered', 'random'). | 'ordered' |
--single_mode (-S) |
flag |
If set, single-tree range search (not dual-tree) will be used. | |
--tree_type (-t) |
string |
If using single-tree or dual-tree search, the type of tree to use ('kd', 'r', 'r-star', 'x', 'hilbert-r', 'r-plus', 'r-plus-plus', 'cover', 'ball'). | 'kd' |
--verbose (-v) |
flag |
Display informational messages and the full list of parameters and timers at the end of execution. | |
--version (-V) |
flag |
Display the version of mlpack. Only exists in CLI binding. | |
name | type | description |
---|---|---|
--assignments_file (-a) |
1-d index matrix file |
Output matrix for assignments of each point. |
--centroids_file (-C) |
2-d matrix file |
Matrix to save output centroids to. |
{: #cli_dbscan_detailed-documentation }
This program implements the DBSCAN algorithm for clustering using accelerated tree-based range search. The type of tree that is used may be parameterized, or brute-force range search may also be used.
The input dataset to be clustered may be specified with the --input_file (-i)
parameter; the radius of each range search may be specified with the --epsilon (-e)
parameter, and the minimum number of points in a cluster may be specified with the --min_size (-m)
parameter.
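These two parameters play their standard DBSCAN roles (the textbook definition, not anything mlpack-specific): a point $$p$$ is a core point when its $$\epsilon$$-neighborhood contains at least min_size points, i.e.

$$\left| \{\, q : d(p, q) \le \epsilon \,\} \right| \ge \texttt{min\_size},$$

and clusters are grown by connecting core points whose neighborhoods overlap.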
The --assignments_file (-a)
and --centroids_file (-C)
output parameters may be used to save the output of the clustering. --assignments_file (-a)
contains the cluster assignments of each point, and --centroids_file (-C)
contains the centroids of each cluster.
The range search may be controlled with the --tree_type (-t)
, --single_mode (-S)
, and --naive (-N)
parameters. --tree_type (-t)
can control the type of tree used for range search; this can take a variety of values: 'kd', 'r', 'r-star', 'x', 'hilbert-r', 'r-plus', 'r-plus-plus', 'cover', 'ball'. The --single_mode (-S)
parameter will force single-tree search (as opposed to the default dual-tree search), and --naive (-N)
will force brute-force range search.
An example usage to run DBSCAN on the dataset in 'input.csv'
with a radius of 0.5 and a minimum cluster size of 5 is given below:
$ mlpack_dbscan --input_file input.csv --epsilon 0.5 --min_size 5
name | type | description | default |
---|---|---|---|
copy_all_inputs |
bool |
If specified, all input parameters will be deep copied before the method is run. This is useful for debugging problems where the input parameters are being modified by the algorithm, but can slow down the code. Only exists in Python binding. | False |
epsilon |
float |
Radius of each range search. | 1 |
input |
matrix |
Input dataset to cluster. | **--** |
min_size |
int |
Minimum number of points for a cluster. | 5 |
naive |
bool |
If set, brute-force range search (not tree-based) will be used. | False |
selection_type |
str |
If using a point selection policy, the type of selection to use ('ordered', 'random'). | 'ordered' |
single_mode |
bool |
If set, single-tree range search (not dual-tree) will be used. | False |
tree_type |
str |
If using single-tree or dual-tree search, the type of tree to use ('kd', 'r', 'r-star', 'x', 'hilbert-r', 'r-plus', 'r-plus-plus', 'cover', 'ball'). | 'kd' |
verbose |
bool |
Display informational messages and the full list of parameters and timers at the end of execution. | False |
Results are returned in a Python dictionary. The keys of the dictionary are the names of the output parameters.
name | type | description |
---|---|---|
assignments |
int vector |
Output matrix for assignments of each point. |
centroids |
matrix |
Matrix to save output centroids to. |
{: #python_dbscan_detailed-documentation }
This program implements the DBSCAN algorithm for clustering using accelerated tree-based range search. The type of tree that is used may be parameterized, or brute-force range search may also be used.
The input dataset to be clustered may be specified with the input
parameter; the radius of each range search may be specified with the epsilon
parameter, and the minimum number of points in a cluster may be specified with the min_size
parameter.
The assignments
and centroids
output parameters may be used to save the output of the clustering. assignments
contains the cluster assignments of each point, and centroids
contains the centroids of each cluster.
The range search may be controlled with the tree_type
, single_mode
, and naive
parameters. tree_type
can control the type of tree used for range search; this can take a variety of values: 'kd', 'r', 'r-star', 'x', 'hilbert-r', 'r-plus', 'r-plus-plus', 'cover', 'ball'. The single_mode
parameter will force single-tree search (as opposed to the default dual-tree search), and naive
will force brute-force range search.
An example usage to run DBSCAN on the dataset in 'input'
with a radius of 0.5 and a minimum cluster size of 5 is given below:
>>> dbscan(input=input, epsilon=0.5, min_size=5)
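Since results are returned in a Python dictionary keyed by the output parameter names (see the output table above), the assignments and centroids from such a call can be captured like so (a sketch using the documented output names):
>>> d = dbscan(input=input, epsilon=0.5, min_size=5)
>>> assignments = d['assignments']
>>> centroids = d['centroids']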
name | type | description | default |
---|---|---|---|
epsilon |
Float64 |
Radius of each range search. | 1 |
input |
Float64 matrix-like |
Input dataset to cluster. | **--** |
min_size |
Int |
Minimum number of points for a cluster. | 5 |
naive |
Bool |
If set, brute-force range search (not tree-based) will be used. | false |
selection_type |
String |
If using a point selection policy, the type of selection to use ('ordered', 'random'). | "ordered" |
single_mode |
Bool |
If set, single-tree range search (not dual-tree) will be used. | false |
tree_type |
String |
If using single-tree or dual-tree search, the type of tree to use ('kd', 'r', 'r-star', 'x', 'hilbert-r', 'r-plus', 'r-plus-plus', 'cover', 'ball'). | "kd" |
verbose |
Bool |
Display informational messages and the full list of parameters and timers at the end of execution. | false |
Results are returned as a tuple, and can be unpacked directly into return values or stored directly as a tuple; undesired results can be ignored with the _ keyword.
name | type | description |
---|---|---|
assignments |
Int vector-like |
Output matrix for assignments of each point. |
centroids |
Float64 matrix-like |
Matrix to save output centroids to. |
{: #julia_dbscan_detailed-documentation }
This program implements the DBSCAN algorithm for clustering using accelerated tree-based range search. The type of tree that is used may be parameterized, or brute-force range search may also be used.
The input dataset to be clustered may be specified with the input
parameter; the radius of each range search may be specified with the epsilon
parameter, and the minimum number of points in a cluster may be specified with the min_size
parameter.
The assignments
and centroids
output parameters may be used to save the output of the clustering. assignments
contains the cluster assignments of each point, and centroids
contains the centroids of each cluster.
The range search may be controlled with the tree_type
, single_mode
, and naive
parameters. tree_type
can control the type of tree used for range search; this can take a variety of values: 'kd', 'r', 'r-star', 'x', 'hilbert-r', 'r-plus', 'r-plus-plus', 'cover', 'ball'. The single_mode
parameter will force single-tree search (as opposed to the default dual-tree search), and 'naive
will force brute-force range search.
An example usage to run DBSCAN on the dataset in input
with a radius of 0.5 and a minimum cluster size of 5 is given below:
julia> using CSV
julia> input = CSV.read("input.csv")
julia> _, _ = dbscan(input; epsilon=0.5, min_size=5)
There are two types of input options: required options, which are passed directly to the function call, and optional options, which are passed via an initialized struct, which allows keyword access to each of the options.
| ***name*** | ***type*** | ***description*** | ***default*** |
|------------|------------|-------------------|---------------|
| `Epsilon` | `float64` | Radius of each range search. | `1` |
| `input` | `*mat.Dense` | Input dataset to cluster. | **--** |
| `MinSize` | `int` | Minimum number of points for a cluster. | `5` |
| `Naive` | `bool` | If set, brute-force range search (not tree-based) will be used. | `false` |
| `SelectionType` | `string` | If using point selection policy, the type of selection to use ('ordered', 'random'). | `"ordered"` |
| `SingleMode` | `bool` | If set, single-tree range search (not dual-tree) will be used. | `false` |
| `TreeType` | `string` | If using single-tree or dual-tree search, the type of tree to use ('kd', 'r', 'r-star', 'x', 'hilbert-r', 'r-plus', 'r-plus-plus', 'cover', 'ball'). | `"kd"` |
| `Verbose` | `bool` | Display informational messages and the full list of parameters and timers at the end of execution. | `false` |
Output options are returned via Go's support for multiple return values, in the order listed below.
| ***name*** | ***type*** | ***description*** |
|------------|------------|-------------------|
| `assignments` | `*mat.Dense (1d with ints)` | Output matrix for assignments of each point. |
| `centroids` | `*mat.Dense` | Matrix to save output centroids to. |
{: #go_dbscan_detailed-documentation }
This program implements the DBSCAN algorithm for clustering using accelerated tree-based range search. The type of tree that is used may be parameterized, or brute-force range search may also be used.
The input dataset to be clustered may be specified with the `Input` parameter; the radius of each range search may be specified with the `Epsilon` parameter, and the minimum number of points in a cluster may be specified with the `MinSize` parameter.
The `Assignments` and `Centroids` output parameters may be used to save the output of the clustering. `Assignments` contains the cluster assignments of each point, and `Centroids` contains the centroids of each cluster.
The range search may be controlled with the `TreeType`, `SingleMode`, and `Naive` parameters. `TreeType` can control the type of tree used for range search; this can take a variety of values: 'kd', 'r', 'r-star', 'x', 'hilbert-r', 'r-plus', 'r-plus-plus', 'cover', 'ball'. The `SingleMode` parameter will force single-tree search (as opposed to the default dual-tree search), and `Naive` will force brute-force range search.
An example usage to run DBSCAN on the dataset in `input` with a radius of 0.5 and a minimum cluster size of 5 is given below:
```go
// Initialize optional parameters for Dbscan().
param := mlpack.DbscanOptions()
param.Epsilon = 0.5
param.MinSize = 5

_, _ = mlpack.Dbscan(input, param)
```
| ***name*** | ***type*** | ***description*** | ***default*** |
|------------|------------|-------------------|---------------|
| `epsilon` | `numeric` | Radius of each range search. | `1` |
| `input` | `numeric matrix` | Input dataset to cluster. | **--** |
| `min_size` | `integer` | Minimum number of points for a cluster. | `5` |
| `naive` | `logical` | If set, brute-force range search (not tree-based) will be used. | `FALSE` |
| `selection_type` | `character` | If using point selection policy, the type of selection to use ('ordered', 'random'). | `"ordered"` |
| `single_mode` | `logical` | If set, single-tree range search (not dual-tree) will be used. | `FALSE` |
| `tree_type` | `character` | If using single-tree or dual-tree search, the type of tree to use ('kd', 'r', 'r-star', 'x', 'hilbert-r', 'r-plus', 'r-plus-plus', 'cover', 'ball'). | `"kd"` |
| `verbose` | `logical` | Display informational messages and the full list of parameters and timers at the end of execution. | `FALSE` |
Results are returned in an R list. The keys of the list are the names of the output parameters.
| ***name*** | ***type*** | ***description*** |
|------------|------------|-------------------|
| `assignments` | `integer vector` | Output matrix for assignments of each point. |
| `centroids` | `numeric matrix` | Matrix to save output centroids to. |
{: #r_dbscan_detailed-documentation }
This program implements the DBSCAN algorithm for clustering using accelerated tree-based range search. The type of tree that is used may be parameterized, or brute-force range search may also be used.
The input dataset to be clustered may be specified with the `input` parameter; the radius of each range search may be specified with the `epsilon` parameter, and the minimum number of points in a cluster may be specified with the `min_size` parameter.
The `assignments` and `centroids` output parameters may be used to save the output of the clustering. `assignments` contains the cluster assignments of each point, and `centroids` contains the centroids of each cluster.
The range search may be controlled with the `tree_type`, `single_mode`, and `naive` parameters. `tree_type` can control the type of tree used for range search; this can take a variety of values: 'kd', 'r', 'r-star', 'x', 'hilbert-r', 'r-plus', 'r-plus-plus', 'cover', 'ball'. The `single_mode` parameter will force single-tree search (as opposed to the default dual-tree search), and `naive` will force brute-force range search.
An example usage to run DBSCAN on the dataset in `"input"` with a radius of 0.5 and a minimum cluster size of 5 is given below:
```R
R> output <- dbscan(input=input, epsilon=0.5, min_size=5)
```
<div class="language-decl" id="go" markdown="1">
```go
// Initialize optional parameters for DecisionStump().
param := mlpack.DecisionStumpOptions()
param.BucketSize = 6
param.InputModel = nil
param.Labels = mat.NewDense(1, 1, nil)
param.Test = mat.NewDense(1, 1, nil)
param.Training = mat.NewDense(1, 1, nil)

output_model, predictions := mlpack.DecisionStump(param)
```
</div>
An implementation of a decision stump, which is a single-level decision tree. Given labeled data, a new decision stump can be trained; or, an existing decision stump can be used to classify points. [Detailed documentation](#cli_decision_stump_detailed-documentation){: .language-detail-link #cli }[Detailed documentation](#python_decision_stump_detailed-documentation){: .language-detail-link #python }[Detailed documentation](#julia_decision_stump_detailed-documentation){: .language-detail-link #julia }[Detailed documentation](#go_decision_stump_detailed-documentation){: .language-detail-link #go }.
<div class="language-section" id="cli" markdown="1">
### Input options
| ***name*** | ***type*** | ***description*** | ***default*** |
|------------|------------|-------------------|---------------|
| `--bucket_size (-b)` | [`int`](#doc_cli_int) | The minimum number of training points in each decision stump bucket. | `6` |
| `--help (-h)` | [`flag`](#doc_cli_flag) | Default help info. <span class="special">Only exists in CLI binding.</span> | |
| `--info` | [`string`](#doc_cli_string) | Print help on a specific option. <span class="special">Only exists in CLI binding.</span> | `''` |
| `--input_model_file (-m)` | [`DSModel file`](#doc_cli_model) | Decision stump model to load. | `''` |
| `--labels_file (-l)` | [`1-d index matrix file`](#doc_cli_1-d_index_matrix_file) | Labels for the training set. If not specified, the labels are assumed to be the last row of the training data. | `''` |
| `--test_file (-T)` | [`2-d matrix file`](#doc_cli_2-d_matrix_file) | A dataset to calculate predictions for. | `''` |
| `--training_file (-t)` | [`2-d matrix file`](#doc_cli_2-d_matrix_file) | The dataset to train on. | `''` |
| `--verbose (-v)` | [`flag`](#doc_cli_flag) | Display informational messages and the full list of parameters and timers at the end of execution. | |
| `--version (-V)` | [`flag`](#doc_cli_flag) | Display the version of mlpack. <span class="special">Only exists in CLI binding.</span> | |
### Output options
| ***name*** | ***type*** | ***description*** |
|------------|------------|-------------------|
| `--output_model_file (-M)` | [`DSModel file`](#doc_cli_model) | Output decision stump model to save. |
| `--predictions_file (-p)` | [`1-d index matrix file`](#doc_cli_1-d_index_matrix_file) | The output matrix that will hold the predicted labels for the test set. |
### Detailed documentation
{: #cli_decision_stump_detailed-documentation }
This program implements a decision stump, which is a single-level decision tree. The decision stump will split on one dimension of the input data, and will split into multiple buckets. The dimension and bins are selected by maximizing the information gain of the split. Optionally, the minimum number of training points in each bin can be specified with the `--bucket_size (-b)` parameter.
The decision stump is parameterized by a splitting dimension and a vector of values that denote the splitting values of each bin.
This program enables several applications: a decision stump may be trained or loaded, and then that stump may be used to classify a given set of test points. The decision stump may also be saved to a file for later usage.
To train a decision stump, training data should be passed with the `--training_file (-t)` parameter, and their corresponding labels should be passed with the `--labels_file (-l)` option. Optionally, if `--labels_file (-l)` is not specified, the labels are assumed to be the last dimension of the training dataset. The `--bucket_size (-b)` parameter controls the minimum number of training points in each decision stump bucket.
For classifying a test set, a decision stump may be loaded with the `--input_model_file (-m)` parameter (useful for the situation where a stump has already been trained), and a test set may be specified with the `--test_file (-T)` parameter. The predicted labels can be saved with the `--predictions_file (-p)` output parameter.
Because decision stumps are trained in batch, retraining does not make sense and thus it is not possible to pass both `--training_file (-t)` and `--input_model_file (-m)`; instead, simply build a new decision stump with the training data.
After training, a decision stump can be saved with the `--output_model_file (-M)` output parameter. That stump may later be re-used in subsequent calls to this program (or others).
### See also
- [Decision tree](#cli_decision_tree)
- [Decision stumps on Wikipedia](https://en.wikipedia.org/wiki/Decision_stump)
- [mlpack::decision_stump::DecisionStump class documentation](https://mlpack.org/doc/mlpack-git/doxygen/classmlpack_1_1decision__stump_1_1DecisionStump.html)
</div>
<div class="language-section" id="python" markdown="1">
### Input options
| ***name*** | ***type*** | ***description*** | ***default*** |
|------------|------------|-------------------|---------------|
| `bucket_size` | [`int`](#doc_python_int) | The minimum number of training points in each decision stump bucket. | `6` |
| `copy_all_inputs` | [`bool`](#doc_python_bool) | If specified, all input parameters will be deep copied before the method is run. This is useful for debugging problems where the input parameters are being modified by the algorithm, but can slow down the code. <span class="special">Only exists in Python binding.</span> | `False` |
| `input_model` | [`DSModelType`](#doc_python_model) | Decision stump model to load. | `None` |
| `labels` | [`int vector`](#doc_python_int_vector) | Labels for the training set. If not specified, the labels are assumed to be the last row of the training data. | `np.empty([0], dtype=np.uint64)` |
| `test` | [`matrix`](#doc_python_matrix) | A dataset to calculate predictions for. | `np.empty([0, 0])` |
| `training` | [`matrix`](#doc_python_matrix) | The dataset to train on. | `np.empty([0, 0])` |
| `verbose` | [`bool`](#doc_python_bool) | Display informational messages and the full list of parameters and timers at the end of execution. | `False` |
### Output options
Results are returned in a Python dictionary. The keys of the dictionary are the names of the output parameters.
| ***name*** | ***type*** | ***description*** |
|------------|------------|-------------------|
| `output_model` | [`DSModelType`](#doc_python_model) | Output decision stump model to save. |
| `predictions` | [`int vector`](#doc_python_int_vector) | The output matrix that will hold the predicted labels for the test set. |
### Detailed documentation
{: #python_decision_stump_detailed-documentation }
This program implements a decision stump, which is a single-level decision tree. The decision stump will split on one dimension of the input data, and will split into multiple buckets. The dimension and bins are selected by maximizing the information gain of the split. Optionally, the minimum number of training points in each bin can be specified with the `bucket_size` parameter.
The decision stump is parameterized by a splitting dimension and a vector of values that denote the splitting values of each bin.
This program enables several applications: a decision stump may be trained or loaded, and then that stump may be used to classify a given set of test points. The decision stump may also be saved to a file for later usage.
To train a decision stump, training data should be passed with the `training` parameter, and their corresponding labels should be passed with the `labels` option. Optionally, if `labels` is not specified, the labels are assumed to be the last dimension of the training dataset. The `bucket_size` parameter controls the minimum number of training points in each decision stump bucket.
For classifying a test set, a decision stump may be loaded with the `input_model` parameter (useful for the situation where a stump has already been trained), and a test set may be specified with the `test` parameter. The predicted labels can be saved with the `predictions` output parameter.
Because decision stumps are trained in batch, retraining does not make sense and thus it is not possible to pass both `training` and `input_model`; instead, simply build a new decision stump with the training data.
After training, a decision stump can be saved with the `output_model` output parameter. That stump may later be re-used in subsequent calls to this program (or others).
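As a minimal sketch of this train-then-classify workflow (assuming `X`, `y`, and `X_test` are hypothetical NumPy arrays holding the training points, their labels, and the test points), one could call:
```python
>>> d = decision_stump(training=X, labels=y, bucket_size=6)
>>> stump = d['output_model']
>>> # Reuse the trained stump to classify the new points.
>>> d = decision_stump(input_model=stump, test=X_test)
>>> predictions = d['predictions']
```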
### See also
- [Decision tree](#python_decision_tree)
- [Decision stumps on Wikipedia](https://en.wikipedia.org/wiki/Decision_stump)
- [mlpack::decision_stump::DecisionStump class documentation](https://mlpack.org/doc/mlpack-git/doxygen/classmlpack_1_1decision__stump_1_1DecisionStump.html)
</div>
<div class="language-section" id="julia" markdown="1">
### Input options
| ***name*** | ***type*** | ***description*** | ***default*** |
|------------|------------|-------------------|---------------|
| `bucket_size` | [`Int`](#doc_julia_Int) | The minimum number of training points in each decision stump bucket. | `6` |
| `input_model` | [`DSModel`](#doc_julia_model) | Decision stump model to load. | `nothing` |
| `labels` | [`Int vector-like`](#doc_julia_Int_vector-like) | Labels for the training set. If not specified, the labels are assumed to be the last row of the training data. | `Int[]` |
| `test` | [`Float64 matrix-like`](#doc_julia_Float64_matrix-like) | A dataset to calculate predictions for. | `zeros(0, 0)` |
| `training` | [`Float64 matrix-like`](#doc_julia_Float64_matrix-like) | The dataset to train on. | `zeros(0, 0)` |
| `verbose` | [`Bool`](#doc_julia_Bool) | Display informational messages and the full list of parameters and timers at the end of execution. | `false` |
### Output options
Results are returned as a tuple, and can be unpacked directly into return values or stored directly as a tuple; undesired results can be ignored with the _ keyword.
| ***name*** | ***type*** | ***description*** |
|------------|------------|-------------------|
| `output_model` | [`DSModel`](#doc_julia_model) | Output decision stump model to save. |
| `predictions` | [`Int vector-like`](#doc_julia_Int_vector-like) | The output matrix that will hold the predicted labels for the test set. |
### Detailed documentation
{: #julia_decision_stump_detailed-documentation }
This program implements a decision stump, which is a single-level decision tree. The decision stump will split on one dimension of the input data, and will split into multiple buckets. The dimension and bins are selected by maximizing the information gain of the split. Optionally, the minimum number of training points in each bin can be specified with the `bucket_size` parameter.
The decision stump is parameterized by a splitting dimension and a vector of values that denote the splitting values of each bin.
This program enables several applications: a decision stump may be trained or loaded, and then that stump may be used to classify a given set of test points. The decision stump may also be saved to a file for later usage.
To train a decision stump, training data should be passed with the `training` parameter, and their corresponding labels should be passed with the `labels` option. Optionally, if `labels` is not specified, the labels are assumed to be the last dimension of the training dataset. The `bucket_size` parameter controls the minimum number of training points in each decision stump bucket.
For classifying a test set, a decision stump may be loaded with the `input_model` parameter (useful for the situation where a stump has already been trained), and a test set may be specified with the `test` parameter. The predicted labels can be saved with the `predictions` output parameter.
Because decision stumps are trained in batch, retraining does not make sense and thus it is not possible to pass both `training` and `input_model`; instead, simply build a new decision stump with the training data.
After training, a decision stump can be saved with the `output_model` output parameter. That stump may later be re-used in subsequent calls to this program (or others).
### See also
- [Decision tree](#julia_decision_tree)
- [Decision stumps on Wikipedia](https://en.wikipedia.org/wiki/Decision_stump)
- [mlpack::decision_stump::DecisionStump class documentation](https://mlpack.org/doc/mlpack-git/doxygen/classmlpack_1_1decision__stump_1_1DecisionStump.html)
</div>
<div class="language-section" id="go" markdown="1">
### Input options
There are two types of input options: required options, which are passed directly to the function call, and optional options, which are passed via an initialized struct, which allows keyword access to each of the options.
| ***name*** | ***type*** | ***description*** | ***default*** |
|------------|------------|-------------------|---------------|
| `BucketSize` | [`int`](#doc_go_int) | The minimum number of training points in each decision stump bucket. | `6` |
| `InputModel` | [`dsModel`](#doc_go_model) | Decision stump model to load. | `nil` |
| `Labels` | [`*mat.Dense (1d with ints)`](#doc_go_*mat.Dense_(1d_with_ints)) | Labels for the training set. If not specified, the labels are assumed to be the last row of the training data. | `mat.NewDense(1, 1, nil)` |
| `Test` | [`*mat.Dense`](#doc_go_*mat.Dense) | A dataset to calculate predictions for. | `mat.NewDense(1, 1, nil)` |
| `Training` | [`*mat.Dense`](#doc_go_*mat.Dense) | The dataset to train on. | `mat.NewDense(1, 1, nil)` |
| `Verbose` | [`bool`](#doc_go_bool) | Display informational messages and the full list of parameters and timers at the end of execution. | `false` |
### Output options
Output options are returned via Go's support for multiple return values, in the order listed below.
| ***name*** | ***type*** | ***description*** |
|------------|------------|-------------------|
| `outputModel` | [`dsModel`](#doc_go_model) | Output decision stump model to save. |
| `predictions` | [`*mat.Dense (1d with ints)`](#doc_go_*mat.Dense_(1d_with_ints)) | The output matrix that will hold the predicted labels for the test set. |
### Detailed documentation
{: #go_decision_stump_detailed-documentation }
This program implements a decision stump, which is a single-level decision tree. The decision stump will split on one dimension of the input data, and will split into multiple buckets. The dimension and bins are selected by maximizing the information gain of the split. Optionally, the minimum number of training points in each bin can be specified with the `BucketSize` parameter.
The decision stump is parameterized by a splitting dimension and a vector of values that denote the splitting values of each bin.
This program enables several applications: a decision stump may be trained or loaded, and then that stump may be used to classify a given set of test points. The decision stump may also be saved to a file for later usage.
To train a decision stump, training data should be passed with the `Training` parameter, and their corresponding labels should be passed with the `Labels` option. Optionally, if `Labels` is not specified, the labels are assumed to be the last dimension of the training dataset. The `BucketSize` parameter controls the minimum number of training points in each decision stump bucket.
For classifying a test set, a decision stump may be loaded with the `InputModel` parameter (useful for the situation where a stump has already been trained), and a test set may be specified with the `Test` parameter. The predicted labels can be saved with the `Predictions` output parameter.
Because decision stumps are trained in batch, retraining does not make sense and thus it is not possible to pass both `Training` and `InputModel`; instead, simply build a new decision stump with the training data.
After training, a decision stump can be saved with the `OutputModel` output parameter. That stump may later be re-used in subsequent calls to this program (or others).
### See also
- [Decision tree](#go_decision_tree)
- [Decision stumps on Wikipedia](https://en.wikipedia.org/wiki/Decision_stump)
- [mlpack::decision_stump::DecisionStump class documentation](https://mlpack.org/doc/mlpack-git/doxygen/classmlpack_1_1decision__stump_1_1DecisionStump.html)
</div>
<div class="language-title" id="cli" markdown="1">
## mlpack_decision_tree
{: #cli_decision_tree }
</div>
<div class="language-title" id="python" markdown="1">
## decision_tree()
{: #python_decision_tree }
</div>
<div class="language-title" id="julia" markdown="1">
## decision_tree()
{: #julia_decision_tree }
</div>
<div class="language-title" id="go" markdown="1">
## DecisionTree()
{: #go_decision_tree }
</div>
<div class="language-title" id="r" markdown="1">
## decision_tree()
{: #r_decision_tree }
</div>
#### Decision tree
<div class="language-decl" id="cli" markdown="1">
```bash
$ mlpack_decision_tree [--input_model_file <string>] [--labels_file
<string>] [--maximum_depth 0] [--minimum_gain_split 1e-07]
[--minimum_leaf_size 20] [--print_training_accuracy]
[--print_training_error] [--test_file <string>] [--test_labels_file
<string>] [--training_file <string>] [--weights_file <string>]
[--output_model_file <string>] [--predictions_file <string>]
[--probabilities_file <string>]
```
</div>
<div class="language-decl" id="go" markdown="1">
```go
// Initialize optional parameters for DecisionTree().
param := mlpack.DecisionTreeOptions()
param.InputModel = nil
param.Labels = mat.NewDense(1, 1, nil)
param.MaximumDepth = 0
param.MinimumGainSplit = 1e-07
param.MinimumLeafSize = 20
param.PrintTrainingAccuracy = false
param.PrintTrainingError = false
param.Test = mat.NewDense(1, 1, nil)
param.TestLabels = mat.NewDense(1, 1, nil)
param.Training = mat.NewDense(1, 1, nil)
param.Weights = mat.NewDense(1, 1, nil)

output_model, predictions, probabilities := mlpack.DecisionTree(param)
```
</div>
<div class="language-decl" id="r" markdown="1">
```R
R> library(mlpack)
R> d <- decision_tree(input_model=NA, labels=matrix(integer(), 0, 0),
maximum_depth=0, minimum_gain_split=1e-07, minimum_leaf_size=20,
print_training_accuracy=FALSE, print_training_error=FALSE,
test=matrix(numeric(), 0, 0), test_labels=matrix(integer(), 0, 0),
training=matrix(numeric(), 0, 0), verbose=FALSE,
weights=matrix(numeric(), 0, 0))
R> output_model <- d$output_model
R> predictions <- d$predictions
R> probabilities <- d$probabilities
```
</div>
An implementation of an ID3-style decision tree for classification, which supports categorical data. Given labeled data with numeric or categorical features, a decision tree can be trained and saved; or, an existing decision tree can be used for classification on new points. [Detailed documentation](#cli_decision_tree_detailed-documentation){: .language-detail-link #cli }[Detailed documentation](#python_decision_tree_detailed-documentation){: .language-detail-link #python }[Detailed documentation](#julia_decision_tree_detailed-documentation){: .language-detail-link #julia }[Detailed documentation](#go_decision_tree_detailed-documentation){: .language-detail-link #go }[Detailed documentation](#r_decision_tree_detailed-documentation){: .language-detail-link #r }.
| ***name*** | ***type*** | ***description*** | ***default*** |
|------------|------------|-------------------|---------------|
| `--help (-h)` | `flag` | Default help info. <span class="special">Only exists in CLI binding.</span> | |
| `--info` | `string` | Print help on a specific option. <span class="special">Only exists in CLI binding.</span> | `''` |
| `--input_model_file (-m)` | `DecisionTreeModel file` | Pre-trained decision tree, to be used with test points. | `''` |
| `--labels_file (-l)` | `1-d index matrix file` | Training labels. | `''` |
| `--maximum_depth (-D)` | `int` | Maximum depth of the tree (0 means no limit). | `0` |
| `--minimum_gain_split (-g)` | `double` | Minimum gain for node splitting. | `1e-07` |
| `--minimum_leaf_size (-n)` | `int` | Minimum number of points in a leaf. | `20` |
| `--print_training_accuracy (-a)` | `flag` | Print the training accuracy. | |
| `--print_training_error (-e)` | `flag` | Print the training error (deprecated; will be removed in mlpack 4.0.0). | |
| `--test_file (-T)` | `2-d categorical matrix file` | Testing dataset (may be categorical). | `''` |
| `--test_labels_file (-L)` | `1-d index matrix file` | Test point labels, if accuracy calculation is desired. | `''` |
| `--training_file (-t)` | `2-d categorical matrix file` | Training dataset (may be categorical). | `''` |
| `--verbose (-v)` | `flag` | Display informational messages and the full list of parameters and timers at the end of execution. | |
| `--version (-V)` | `flag` | Display the version of mlpack. <span class="special">Only exists in CLI binding.</span> | |
| `--weights_file (-w)` | `2-d matrix file` | The weight of labels. | `''` |
| ***name*** | ***type*** | ***description*** |
|------------|------------|-------------------|
| `--output_model_file (-M)` | `DecisionTreeModel file` | Output for trained decision tree. |
| `--predictions_file (-p)` | `1-d index matrix file` | Class predictions for each test point. |
| `--probabilities_file (-P)` | `2-d matrix file` | Class probabilities for each test point. |
{: #cli_decision_tree_detailed-documentation }
Train and evaluate using a decision tree. Given a dataset containing numeric or categorical features, and associated labels for each point in the dataset, this program can train a decision tree on that data.
The training set and associated labels are specified with the `--training_file (-t)` and `--labels_file (-l)` parameters, respectively. The labels should be in the range [0, num_classes - 1]. Optionally, if `--labels_file (-l)` is not specified, the labels are assumed to be the last dimension of the training dataset.
When a model is trained, the `--output_model_file (-M)` output parameter may be used to save the trained model. A model may be loaded for predictions with the `--input_model_file (-m)` parameter. The `--input_model_file (-m)` parameter may not be specified when the `--training_file (-t)` parameter is specified. The `--minimum_leaf_size (-n)` parameter specifies the minimum number of training points that must fall into each leaf for it to be split. The `--minimum_gain_split (-g)` parameter specifies the minimum gain that is needed for the node to split. The `--maximum_depth (-D)` parameter specifies the maximum depth of the tree. If `--print_training_error (-e)` is specified, the training error will be printed.
Test data may be specified with the `--test_file (-T)` parameter, and if performance numbers are desired for that test set, labels may be specified with the `--test_labels_file (-L)` parameter. Predictions for each test point may be saved via the `--predictions_file (-p)` output parameter. Class probabilities for each prediction may be saved with the `--probabilities_file (-P)` output parameter.
For example, to train a decision tree with a minimum leaf size of 20 on the dataset contained in `data.arff` with labels `labels.csv`, saving the output model to `tree.bin` and printing the training accuracy, one could call
```bash
$ mlpack_decision_tree --training_file data.arff --labels_file labels.csv \
    --output_model_file tree.bin --minimum_leaf_size 20 \
    --minimum_gain_split 0.001 --print_training_accuracy
```
Then, to use that model to classify points in `test_set.arff` and print the test error given the labels `test_labels.csv` using that model, while saving the predictions for each point to `predictions.csv`, one could call
```bash
$ mlpack_decision_tree --input_model_file tree.bin --test_file test_set.arff \
    --test_labels_file test_labels.csv --predictions_file predictions.csv
```
| ***name*** | ***type*** | ***description*** | ***default*** |
|------------|------------|-------------------|---------------|
| `copy_all_inputs` | `bool` | If specified, all input parameters will be deep copied before the method is run. This is useful for debugging problems where the input parameters are being modified by the algorithm, but can slow down the code. <span class="special">Only exists in Python binding.</span> | `False` |
| `input_model` | `DecisionTreeModelType` | Pre-trained decision tree, to be used with test points. | `None` |
| `labels` | `int vector` | Training labels. | `np.empty([0], dtype=np.uint64)` |
| `maximum_depth` | `int` | Maximum depth of the tree (0 means no limit). | `0` |
| `minimum_gain_split` | `float` | Minimum gain for node splitting. | `1e-07` |
| `minimum_leaf_size` | `int` | Minimum number of points in a leaf. | `20` |
| `print_training_accuracy` | `bool` | Print the training accuracy. | `False` |
| `print_training_error` | `bool` | Print the training error (deprecated; will be removed in mlpack 4.0.0). | `False` |
| `test` | `categorical matrix` | Testing dataset (may be categorical). | `np.empty([0, 0])` |
| `test_labels` | `int vector` | Test point labels, if accuracy calculation is desired. | `np.empty([0], dtype=np.uint64)` |
| `training` | `categorical matrix` | Training dataset (may be categorical). | `np.empty([0, 0])` |
| `verbose` | `bool` | Display informational messages and the full list of parameters and timers at the end of execution. | `False` |
| `weights` | `matrix` | The weight of labels. | `np.empty([0, 0])` |
Results are returned in a Python dictionary. The keys of the dictionary are the names of the output parameters.
| ***name*** | ***type*** | ***description*** |
|------------|------------|-------------------|
| `output_model` | `DecisionTreeModelType` | Output for trained decision tree. |
| `predictions` | `int vector` | Class predictions for each test point. |
| `probabilities` | `matrix` | Class probabilities for each test point. |
{: #python_decision_tree_detailed-documentation }
Train and evaluate using a decision tree. Given a dataset containing numeric or categorical features, and associated labels for each point in the dataset, this program can train a decision tree on that data.
The training set and associated labels are specified with the `training` and `labels` parameters, respectively. The labels should be in the range [0, num_classes - 1]. Optionally, if `labels` is not specified, the labels are assumed to be the last dimension of the training dataset.
When a model is trained, the `output_model` output parameter may be used to save the trained model. A model may be loaded for predictions with the `input_model` parameter. The `input_model` parameter may not be specified when the `training` parameter is specified. The `minimum_leaf_size` parameter specifies the minimum number of training points that must fall into each leaf for it to be split. The `minimum_gain_split` parameter specifies the minimum gain that is needed for the node to split. The `maximum_depth` parameter specifies the maximum depth of the tree. If `print_training_error` is specified, the training error will be printed.
Test data may be specified with the `test` parameter, and if performance numbers are desired for that test set, labels may be specified with the `test_labels` parameter. Predictions for each test point may be saved via the `predictions` output parameter. Class probabilities for each prediction may be saved with the `probabilities` output parameter.
For example, to train a decision tree with a minimum leaf size of 20 on the dataset contained in `data` with labels `labels`, saving the output model to `tree` and printing the training accuracy, one could call
```python
>>> output = decision_tree(training=data, labels=labels, minimum_leaf_size=20,
        minimum_gain_split=0.001, print_training_accuracy=True)
>>> tree = output['output_model']
```
Then, to use that model to classify points in `test_set` and print the test error given the labels `test_labels` using that model, while saving the predictions for each point to `predictions`, one could call
```python
>>> output = decision_tree(input_model=tree, test=test_set,
        test_labels=test_labels)
>>> predictions = output['predictions']
```
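The `probabilities` output is available from the same dictionary; a short sketch (assuming the `output` dictionary from the previous call, and that each row holds one test point's class probabilities):
```python
>>> import numpy as np
>>> probabilities = output['probabilities']
>>> # The predicted class should correspond to the most probable class.
>>> manual_predictions = np.argmax(probabilities, axis=1)
```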
| ***name*** | ***type*** | ***description*** | ***default*** |
|------------|------------|-------------------|---------------|
| `input_model` | `DecisionTreeModel` | Pre-trained decision tree, to be used with test points. | `nothing` |
| `labels` | `Int vector-like` | Training labels. | `Int[]` |
| `maximum_depth` | `Int` | Maximum depth of the tree (0 means no limit). | `0` |
| `minimum_gain_split` | `Float64` | Minimum gain for node splitting. | `1e-07` |
| `minimum_leaf_size` | `Int` | Minimum number of points in a leaf. | `20` |
| `print_training_accuracy` | `Bool` | Print the training accuracy. | `false` |
| `print_training_error` | `Bool` | Print the training error (deprecated; will be removed in mlpack 4.0.0). | `false` |
| `test` | `Tuple{Array{Bool, 1}, Array{Float64, 2}}` | Testing dataset (may be categorical). | `zeros(0, 0)` |
| `test_labels` | `Int vector-like` | Test point labels, if accuracy calculation is desired. | `Int[]` |
| `training` | `Tuple{Array{Bool, 1}, Array{Float64, 2}}` | Training dataset (may be categorical). | `zeros(0, 0)` |
| `verbose` | `Bool` | Display informational messages and the full list of parameters and timers at the end of execution. | `false` |
| `weights` | `Float64 matrix-like` | The weight of labels. | `zeros(0, 0)` |
Results are returned as a tuple, and can be unpacked directly into return values or stored directly as a tuple; undesired results can be ignored with the _ keyword.
| ***name*** | ***type*** | ***description*** |
|------------|------------|-------------------|
| `output_model` | `DecisionTreeModel` | Output for trained decision tree. |
| `predictions` | `Int vector-like` | Class predictions for each test point. |
| `probabilities` | `Float64 matrix-like` | Class probabilities for each test point. |
{: #julia_decision_tree_detailed-documentation }
Train and evaluate using a decision tree. Given a dataset containing numeric or categorical features, and associated labels for each point in the dataset, this program can train a decision tree on that data.
The training set and associated labels are specified with the `training` and `labels` parameters, respectively. The labels should be in the range [0, num_classes - 1]. Optionally, if `labels` is not specified, the labels are assumed to be the last dimension of the training dataset.
When a model is trained, the `output_model` output parameter may be used to save the trained model. A model may be loaded for predictions with the `input_model` parameter. The `input_model` parameter may not be specified when the `training` parameter is specified. The `minimum_leaf_size` parameter specifies the minimum number of training points that must fall into each leaf for it to be split. The `minimum_gain_split` parameter specifies the minimum gain that is needed for the node to split. The `maximum_depth` parameter specifies the maximum depth of the tree. If `print_training_error` is specified, the training error will be printed.
Test data may be specified with the `test` parameter, and if performance numbers are desired for that test set, labels may be specified with the `test_labels` parameter. Predictions for each test point may be saved via the `predictions` output parameter. Class probabilities for each prediction may be saved with the `probabilities` output parameter.
For example, to train a decision tree with a minimum leaf size of 20 on the dataset contained in `data` with labels `labels`, saving the output model to `tree` and printing the training accuracy, one could call
```julia
julia> using CSV
julia> labels = CSV.read("labels.csv"; type=Int)
julia> tree, _, _ = decision_tree(labels=labels,
           minimum_gain_split=0.001, minimum_leaf_size=20,
           print_training_accuracy=true, training=data)
```
Then, to use that model to classify points in `test_set` and print the test error given the labels `test_labels` using that model, while saving the predictions for each point to `predictions`, one could call
```julia
julia> using CSV
julia> test_labels = CSV.read("test_labels.csv"; type=Int)
julia> _, predictions, _ = decision_tree(input_model=tree,
           test=test_set, test_labels=test_labels)
```
There are two types of input options: required options, which are passed directly to the function call, and optional options, which are passed via an initialized struct, which allows keyword access to each of the options.
| ***name*** | ***type*** | ***description*** | ***default*** |
|------------|------------|-------------------|---------------|
| `InputModel` | `decisionTreeModel` | Pre-trained decision tree, to be used with test points. | `nil` |
| `Labels` | `*mat.Dense (1d with ints)` | Training labels. | `mat.NewDense(1, 1, nil)` |
| `MaximumDepth` | `int` | Maximum depth of the tree (0 means no limit). | `0` |
| `MinimumGainSplit` | `float64` | Minimum gain for node splitting. | `1e-07` |
| `MinimumLeafSize` | `int` | Minimum number of points in a leaf. | `20` |
| `PrintTrainingAccuracy` | `bool` | Print the training accuracy. | `false` |
| `PrintTrainingError` | `bool` | Print the training error (deprecated; will be removed in mlpack 4.0.0). | `false` |
| `Test` | `matrixWithInfo` | Testing dataset (may be categorical). | `mat.NewDense(1, 1, nil)` |
| `TestLabels` | `*mat.Dense (1d with ints)` | Test point labels, if accuracy calculation is desired. | `mat.NewDense(1, 1, nil)` |
| `Training` | `matrixWithInfo` | Training dataset (may be categorical). | `mat.NewDense(1, 1, nil)` |
| `Verbose` | `bool` | Display informational messages and the full list of parameters and timers at the end of execution. | `false` |
| `Weights` | `*mat.Dense` | The weight of labels. | `mat.NewDense(1, 1, nil)` |
Output options are returned via Go's support for multiple return values, in the order listed below.
| ***name*** | ***type*** | ***description*** |
|------------|------------|-------------------|
| `outputModel` | `decisionTreeModel` | Output for trained decision tree. |
| `predictions` | `*mat.Dense (1d with ints)` | Class predictions for each test point. |
| `probabilities` | `*mat.Dense` | Class probabilities for each test point. |
{: #go_decision_tree_detailed-documentation }
Train and evaluate using a decision tree. Given a dataset containing numeric or categorical features, and associated labels for each point in the dataset, this program can train a decision tree on that data.
The training set and associated labels are specified with the `Training` and `Labels` parameters, respectively. The labels should be in the range [0, num_classes - 1]. Optionally, if `Labels` is not specified, the labels are assumed to be the last dimension of the training dataset.
When a model is trained, the `OutputModel` output parameter may be used to save the trained model. A model may be loaded for predictions with the `InputModel` parameter. The `InputModel` parameter may not be specified when the `Training` parameter is specified. The `MinimumLeafSize` parameter specifies the minimum number of training points that must fall into each leaf for it to be split. The `MinimumGainSplit` parameter specifies the minimum gain that is needed for the node to split. The `MaximumDepth` parameter specifies the maximum depth of the tree. If `PrintTrainingError` is specified, the training error will be printed.
Test data may be specified with the `Test` parameter, and if performance numbers are desired for that test set, labels may be specified with the `TestLabels` parameter. Predictions for each test point may be saved via the `Predictions` output parameter. Class probabilities for each prediction may be saved with the `Probabilities` output parameter.
For example, to train a decision tree with a minimum leaf size of 20 on the dataset contained in `data` with labels `labels`, saving the output model to `tree` and printing the training accuracy, one could call
```go
// Initialize optional parameters for DecisionTree().
param := mlpack.DecisionTreeOptions()
param.Training = data
param.Labels = labels
param.MinimumLeafSize = 20
param.MinimumGainSplit = 0.001
param.PrintTrainingAccuracy = true

tree, _, _ := mlpack.DecisionTree(param)
```
Then, to use that model to classify points in `test_set` and print the test error given the labels `test_labels` using that model, while saving the predictions for each point to `predictions`, one could call
```go
// Initialize optional parameters for DecisionTree().
param := mlpack.DecisionTreeOptions()
param.InputModel = &tree
param.Test = test_set
param.TestLabels = test_labels

_, predictions, _ := mlpack.DecisionTree(param)
```
| ***name*** | ***type*** | ***description*** | ***default*** |
|------------|------------|-------------------|---------------|
| `input_model` | `DecisionTreeModel` | Pre-trained decision tree, to be used with test points. | `NA` |
| `labels` | `integer vector` | Training labels. | `matrix(integer(), 0, 0)` |
| `maximum_depth` | `integer` | Maximum depth of the tree (0 means no limit). | `0` |
| `minimum_gain_split` | `numeric` | Minimum gain for node splitting. | `1e-07` |
| `minimum_leaf_size` | `integer` | Minimum number of points in a leaf. | `20` |
| `print_training_accuracy` | `logical` | Print the training accuracy. | `FALSE` |
| `print_training_error` | `logical` | Print the training error (deprecated; will be removed in mlpack 4.0.0). | `FALSE` |
| `test` | `categorical matrix/data.frame` | Testing dataset (may be categorical). | `matrix(numeric(), 0, 0)` |
| `test_labels` | `integer vector` | Test point labels, if accuracy calculation is desired. | `matrix(integer(), 0, 0)` |
| `training` | `categorical matrix/data.frame` | Training dataset (may be categorical). | `matrix(numeric(), 0, 0)` |
| `verbose` | `logical` | Display informational messages and the full list of parameters and timers at the end of execution. | `FALSE` |
| `weights` | `numeric matrix` | The weight of labels. | `matrix(numeric(), 0, 0)` |
Results are returned in an R list. The keys of the list are the names of the output parameters.
| ***name*** | ***type*** | ***description*** |
|------------|------------|-------------------|
| `output_model` | `DecisionTreeModel` | Output for trained decision tree. |
| `predictions` | `integer vector` | Class predictions for each test point. |
| `probabilities` | `numeric matrix` | Class probabilities for each test point. |
{: #r_decision_tree_detailed-documentation }
Train and evaluate using a decision tree. Given a dataset containing numeric or categorical features, and associated labels for each point in the dataset, this program can train a decision tree on that data.
The training set and associated labels are specified with the `training` and `labels` parameters, respectively. The labels should be in the range [0, num_classes - 1]. Optionally, if `labels` is not specified, the labels are assumed to be the last dimension of the training dataset.
When a model is trained, the `output_model` output parameter may be used to save the trained model. A model may be loaded for predictions with the `input_model` parameter. The `input_model` parameter may not be specified when the `training` parameter is specified. The `minimum_leaf_size` parameter specifies the minimum number of training points that must fall into each leaf for it to be split. The `minimum_gain_split` parameter specifies the minimum gain that is needed for the node to split. The `maximum_depth` parameter specifies the maximum depth of the tree. If `print_training_error` is specified, the training error will be printed.
Test data may be specified with the `test` parameter, and if performance numbers are desired for that test set, labels may be specified with the `test_labels` parameter. Predictions for each test point may be saved via the `predictions` output parameter. Class probabilities for each prediction may be saved with the `probabilities` output parameter.
For example, to train a decision tree with a minimum leaf size of 20 on the dataset contained in `"data"` with labels `"labels"`, saving the output model to `"tree"` and printing the training accuracy, one could call
```R
R> output <- decision_tree(training=data, labels=labels, minimum_leaf_size=20,
    minimum_gain_split=0.001, print_training_accuracy=TRUE)
R> tree <- output$output_model
```
Then, to use that model to classify points in `"test_set"` and print the test error given the labels `"test_labels"` using that model, while saving the predictions for each point to `"predictions"`, one could call
```R
R> output <- decision_tree(input_model=tree, test=test_set,
    test_labels=test_labels)
R> predictions <- output$predictions
```
<div class="language-decl" id="go" markdown="1">
```go
// Initialize optional parameters for Det().
param := mlpack.DetOptions()
param.Folds = 10
param.InputModel = nil
param.MaxLeafSize = 10
param.MinLeafSize = 5
param.PathFormat = "lr"
param.SkipPruning = false
param.Test = mat.NewDense(1, 1, nil)
param.Training = mat.NewDense(1, 1, nil)

output_model, tag_counters_file, tag_file, test_set_estimates, training_set_estimates, vi := mlpack.Det(param)
```
</div>
<div class="language-decl" id="r" markdown="1">
```R
R> library(mlpack)
R> d <- det(folds=10, input_model=NA, max_leaf_size=10, min_leaf_size=5,
path_format="lr", skip_pruning=FALSE, test=matrix(numeric(), 0, 0),
training=matrix(numeric(), 0, 0), verbose=FALSE)
R> output_model <- d$output_model
R> tag_counters_file <- d$tag_counters_file
R> tag_file <- d$tag_file
R> test_set_estimates <- d$test_set_estimates
R> training_set_estimates <- d$training_set_estimates
R> vi <- d$vi
```
</div>
An implementation of density estimation trees for the density estimation task. Density estimation trees can be trained or used to predict the density at locations given by query points. [Detailed documentation](#cli_det_detailed-documentation){: .language-detail-link #cli }[Detailed documentation](#python_det_detailed-documentation){: .language-detail-link #python }[Detailed documentation](#julia_det_detailed-documentation){: .language-detail-link #julia }[Detailed documentation](#go_det_detailed-documentation){: .language-detail-link #go }[Detailed documentation](#r_det_detailed-documentation){: .language-detail-link #r }.
| ***name*** | ***type*** | ***description*** | ***default*** |
|------------|------------|-------------------|---------------|
| `--folds (-f)` | `int` | The number of folds of cross-validation to perform for the estimation (0 is LOOCV). | `10` |
| `--help (-h)` | `flag` | Default help info. <span class="special">Only exists in CLI binding.</span> | |
| `--info` | `string` | Print help on a specific option. <span class="special">Only exists in CLI binding.</span> | `''` |
| `--input_model_file (-m)` | `DTree<> file` | Trained density estimation tree to load. | `''` |
| `--max_leaf_size (-L)` | `int` | The maximum size of a leaf in the unpruned, fully grown DET. | `10` |
| `--min_leaf_size (-l)` | `int` | The minimum size of a leaf in the unpruned, fully grown DET. | `5` |
| `--path_format (-p)` | `string` | The format of path printing: 'lr', 'id-lr', or 'lr-id'. | `'lr'` |
| `--skip_pruning (-s)` | `flag` | Whether to bypass the pruning process and output the unpruned tree only. | |
| `--test_file (-T)` | `2-d matrix file` | A set of test points to estimate the density of. | `''` |
| `--training_file (-t)` | `2-d matrix file` | The data set on which to build a density estimation tree. | `''` |
| `--verbose (-v)` | `flag` | Display informational messages and the full list of parameters and timers at the end of execution. | |
| `--version (-V)` | `flag` | Display the version of mlpack. <span class="special">Only exists in CLI binding.</span> | |
| ***name*** | ***type*** | ***description*** |
|------------|------------|-------------------|
| `--output_model_file (-M)` | `DTree<> file` | Output to save trained density estimation tree to. |
| `--tag_counters_file (-c)` | `string` | The file to output the number of points that went to each leaf. |
| `--tag_file (-g)` | `string` | The file to output the tags (and possibly paths) for each sample in the test set. |
| `--test_set_estimates_file (-E)` | `2-d matrix file` | The output estimates on the test set from the final optimally pruned tree. |
| `--training_set_estimates_file (-e)` | `2-d matrix file` | The output density estimates on the training set from the final optimally pruned tree. |
| `--vi_file (-i)` | `2-d matrix file` | The output variable importance values for each feature. |
{: #cli_det_detailed-documentation }
This program performs a number of functions related to Density Estimation Trees. The optimal Density Estimation Tree (DET) can be trained on a set of data (specified by `--training_file (-t)`) using cross-validation (with the number of folds specified by the `--folds (-f)` parameter). This trained density estimation tree may then be saved with the `--output_model_file (-M)` output parameter.
The variable importances (that is, the feature importance values for each dimension) may be saved with the `--vi_file (-i)` output parameter, and the density estimates for each training point may be saved with the `--training_set_estimates_file (-e)` output parameter.
Enabling path printing for each node outputs the path from the root node to a leaf for each entry in the test set, or training set (if a test set is not provided). Strings like 'LRLRLR' (indicating that traversal went to the left child, then the right child, then the left child, and so forth) will be output. If 'lr-id' or 'id-lr' are given as the `--path_format (-p)` parameter, then the ID (tag) of every node along the path will be printed after or before the L or R character indicating the direction of traversal, respectively.
This program can also provide density estimates for a set of test points, specified in the `--test_file (-T)` parameter. The density estimation tree used for this task will be the tree that was trained on the given training points, or a tree given as the parameter `--input_model_file (-m)`. The density estimates for the test points may be saved using the `--test_set_estimates_file (-E)` output parameter.
| ***name*** | ***type*** | ***description*** | ***default*** |
|------------|------------|-------------------|---------------|
| `copy_all_inputs` | `bool` | If specified, all input parameters will be deep copied before the method is run. This is useful for debugging problems where the input parameters are being modified by the algorithm, but can slow down the code. <span class="special">Only exists in Python binding.</span> | `False` |
| `folds` | `int` | The number of folds of cross-validation to perform for the estimation (0 is LOOCV). | `10` |
| `input_model` | `DTree<>Type` | Trained density estimation tree to load. | `None` |
| `max_leaf_size` | `int` | The maximum size of a leaf in the unpruned, fully grown DET. | `10` |
| `min_leaf_size` | `int` | The minimum size of a leaf in the unpruned, fully grown DET. | `5` |
| `path_format` | `str` | The format of path printing: 'lr', 'id-lr', or 'lr-id'. | `'lr'` |
| `skip_pruning` | `bool` | Whether to bypass the pruning process and output the unpruned tree only. | `False` |
| `test` | `matrix` | A set of test points to estimate the density of. | `np.empty([0, 0])` |
| `training` | `matrix` | The data set on which to build a density estimation tree. | `np.empty([0, 0])` |
| `verbose` | `bool` | Display informational messages and the full list of parameters and timers at the end of execution. | `False` |
Results are returned in a Python dictionary. The keys of the dictionary are the names of the output parameters.
| ***name*** | ***type*** | ***description*** |
|------------|------------|-------------------|
| `output_model` | `DTree<>Type` | Output to save trained density estimation tree to. |
| `tag_counters_file` | `str` | The file to output the number of points that went to each leaf. |
| `tag_file` | `str` | The file to output the tags (and possibly paths) for each sample in the test set. |
| `test_set_estimates` | `matrix` | The output estimates on the test set from the final optimally pruned tree. |
| `training_set_estimates` | `matrix` | The output density estimates on the training set from the final optimally pruned tree. |
| `vi` | `matrix` | The output variable importance values for each feature. |
{: #python_det_detailed-documentation }
This program performs a number of functions related to Density Estimation Trees. The optimal Density Estimation Tree (DET) can be trained on a set of data (specified by `training`) using cross-validation (with the number of folds specified by the `folds` parameter). This trained density estimation tree may then be saved with the `output_model` output parameter.
The variable importances (that is, the feature importance values for each dimension) may be saved with the `vi` output parameter, and the density estimates for each training point may be saved with the `training_set_estimates` output parameter.
Enabling path printing for each node outputs the path from the root node to a leaf for each entry in the test set, or training set (if a test set is not provided). Strings like 'LRLRLR' (indicating that traversal went to the left child, then the right child, then the left child, and so forth) will be output. If 'lr-id' or 'id-lr' are given as the `path_format` parameter, then the ID (tag) of every node along the path will be printed after or before the L or R character indicating the direction of traversal, respectively.
This program can also provide density estimates for a set of test points, specified in the `test` parameter. The density estimation tree used for this task will be the tree that was trained on the given training points, or a tree given as the parameter `input_model`. The density estimates for the test points may be saved using the `test_set_estimates` output parameter.
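As a minimal sketch of this workflow in the Python binding (assuming `data` and `queries` are hypothetical NumPy matrices with one row per point; the output keys follow the tables above), one could call:
```python
>>> result = det(training=data, folds=10)
>>> det_tree = result['output_model']
>>> train_estimates = result['training_set_estimates']
>>> # Estimate densities for a separate query set using the trained tree.
>>> result = det(input_model=det_tree, test=queries)
>>> query_estimates = result['test_set_estimates']
```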
| ***name*** | ***type*** | ***description*** | ***default*** |
|------------|------------|-------------------|---------------|
| `folds` | `Int` | The number of folds of cross-validation to perform for the estimation (0 is LOOCV). | `10` |
| `input_model` | `DTree` | Trained density estimation tree to load. | `nothing` |
| `max_leaf_size` | `Int` | The maximum size of a leaf in the unpruned, fully grown DET. | `10` |
| `min_leaf_size` | `Int` | The minimum size of a leaf in the unpruned, fully grown DET. | `5` |
| `path_format` | `String` | The format of path printing: 'lr', 'id-lr', or 'lr-id'. | `"lr"` |
| `skip_pruning` | `Bool` | Whether to bypass the pruning process and output the unpruned tree only. | `false` |
| `test` | `Float64 matrix-like` | A set of test points to estimate the density of. | `zeros(0, 0)` |
| `training` | `Float64 matrix-like` | The data set on which to build a density estimation tree. | `zeros(0, 0)` |
| `verbose` | `Bool` | Display informational messages and the full list of parameters and timers at the end of execution. | `false` |
Results are returned as a tuple, and can be unpacked directly into return values or stored directly as a tuple; undesired results can be ignored with the _ keyword.
| ***name*** | ***type*** | ***description*** |
|------------|------------|-------------------|
| `output_model` | `DTree` | Output to save trained density estimation tree to. |
| `tag_counters_file` | `String` | The file to output the number of points that went to each leaf. |
| `tag_file` | `String` | The file to output the tags (and possibly paths) for each sample in the test set. |
| `test_set_estimates` | `Float64 matrix-like` | The output estimates on the test set from the final optimally pruned tree. |
| `training_set_estimates` | `Float64 matrix-like` | The output density estimates on the training set from the final optimally pruned tree. |
| `vi` | `Float64 matrix-like` | The output variable importance values for each feature. |
{: #julia_det_detailed-documentation }
This program performs a number of functions related to Density Estimation Trees. The optimal Density Estimation Tree (DET) can be trained on a set of data (specified by `training`) using cross-validation (with the number of folds specified by the `folds` parameter). This trained density estimation tree may then be saved with the `output_model` output parameter.
The variable importances (that is, the feature importance values for each dimension) may be saved with the `vi` output parameter, and the density estimates for each training point may be saved with the `training_set_estimates` output parameter.
Enabling path printing for each node outputs the path from the root node to a leaf for each entry in the test set, or training set (if a test set is not provided). Strings like 'LRLRLR' (indicating that traversal went to the left child, then the right child, then the left child, and so forth) will be output. If 'lr-id' or 'id-lr' are given as the `path_format` parameter, then the ID (tag) of every node along the path will be printed after or before the L or R character indicating the direction of traversal, respectively.
This program can also provide density estimates for a set of test points, specified in the `test` parameter. The density estimation tree used for this task will be the tree that was trained on the given training points, or a tree given as the parameter `input_model`. The density estimates for the test points may be saved using the `test_set_estimates` output parameter.
There are two types of input options: required options, which are passed directly to the function call, and optional options, which are passed via an initialized struct, which allows keyword access to each of the options.
name | type | description | default |
---|---|---|---|
Folds | int | The number of folds of cross-validation to perform for the estimation (0 is LOOCV) | 10 |
InputModel | dTree | Trained density estimation tree to load. | nil |
MaxLeafSize | int | The maximum size of a leaf in the unpruned, fully grown DET. | 10 |
MinLeafSize | int | The minimum size of a leaf in the unpruned, fully grown DET. | 5 |
PathFormat | string | The format of path printing: 'lr', 'id-lr', or 'lr-id'. | "lr" |
SkipPruning | bool | Whether to bypass the pruning process and output the unpruned tree only. | false |
Test | *mat.Dense | A set of test points to estimate the density of. | mat.NewDense(1, 1, nil) |
Training | *mat.Dense | The data set on which to build a density estimation tree. | mat.NewDense(1, 1, nil) |
Verbose | bool | Display informational messages and the full list of parameters and timers at the end of execution. | false |
Output options are returned via Go's support for multiple return values, in the order listed below.
name | type | description |
---|---|---|
outputModel | dTree | Output to save trained density estimation tree to. |
tagCountersFile | string | The file to output the number of points that went to each leaf. |
tagFile | string | The file to output the tags (and possibly paths) for each sample in the test set. |
testSetEstimates | *mat.Dense | The output estimates on the test set from the final optimally pruned tree. |
trainingSetEstimates | *mat.Dense | The output density estimates on the training set from the final optimally pruned tree. |
vi | *mat.Dense | The output variable importance values for each feature. |
{: #go_det_detailed-documentation }
This program performs a number of functions related to Density Estimation Trees. The optimal Density Estimation Tree (DET) can be trained on a set of data (specified by `Training`) using cross-validation (with the number of folds specified by the `Folds` parameter). The trained density estimation tree may then be saved with the `OutputModel` output parameter.

The variable importances (that is, the feature importance values for each dimension) may be saved with the `Vi` output parameter, and the density estimates for each training point may be saved with the `TrainingSetEstimates` output parameter.

Enabling path printing for each node outputs the path from the root node to a leaf for each entry in the test set, or the training set (if a test set is not provided). Strings like 'LRLRLR' (indicating that traversal went to the left child, then the right child, then the left child, and so forth) will be output. If 'lr-id' or 'id-lr' are given as the `PathFormat` parameter, then the ID (tag) of every node along the path will be printed after or before the L or R character indicating the direction of traversal, respectively.

This program can also provide density estimates for a set of test points, specified in the `Test` parameter. The density estimation tree used for this task will be the tree that was trained on the given training points, or a tree given as the `InputModel` parameter. The density estimates for the test points may be saved using the `TestSetEstimates` output parameter.
name | type | description | default |
---|---|---|---|
folds | integer | The number of folds of cross-validation to perform for the estimation (0 is LOOCV) | 10 |
input_model | DTree | Trained density estimation tree to load. | NA |
max_leaf_size | integer | The maximum size of a leaf in the unpruned, fully grown DET. | 10 |
min_leaf_size | integer | The minimum size of a leaf in the unpruned, fully grown DET. | 5 |
path_format | character | The format of path printing: 'lr', 'id-lr', or 'lr-id'. | "lr" |
skip_pruning | logical | Whether to bypass the pruning process and output the unpruned tree only. | FALSE |
test | numeric matrix | A set of test points to estimate the density of. | matrix(numeric(), 0, 0) |
training | numeric matrix | The data set on which to build a density estimation tree. | matrix(numeric(), 0, 0) |
verbose | logical | Display informational messages and the full list of parameters and timers at the end of execution. | FALSE |
Results are returned in an R list. The keys of the list are the names of the output parameters.
name | type | description |
---|---|---|
output_model | DTree | Output to save trained density estimation tree to. |
tag_counters_file | character | The file to output the number of points that went to each leaf. |
tag_file | character | The file to output the tags (and possibly paths) for each sample in the test set. |
test_set_estimates | numeric matrix | The output estimates on the test set from the final optimally pruned tree. |
training_set_estimates | numeric matrix | The output density estimates on the training set from the final optimally pruned tree. |
vi | numeric matrix | The output variable importance values for each feature. |
{: #r_det_detailed-documentation }
This program performs a number of functions related to Density Estimation Trees. The optimal Density Estimation Tree (DET) can be trained on a set of data (specified by `training`) using cross-validation (with the number of folds specified by the `folds` parameter). The trained density estimation tree may then be saved with the `output_model` output parameter.

The variable importances (that is, the feature importance values for each dimension) may be saved with the `vi` output parameter, and the density estimates for each training point may be saved with the `training_set_estimates` output parameter.
Enabling path printing for each node outputs the path from the root node to a leaf for each entry in the test set, or the training set (if a test set is not provided). Strings like 'LRLRLR' (indicating that traversal went to the left child, then the right child, then the left child, and so forth) will be output. If 'lr-id' or 'id-lr' are given as the `path_format` parameter, then the ID (tag) of every node along the path will be printed after or before the L or R character indicating the direction of traversal, respectively.
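To make these path strings concrete, here is a toy sketch in plain Python (not part of any mlpack binding) that follows such a string through a binary tree stored as nested dicts:

```python
def follow_path(node, path):
    """Walk a binary tree following a DET path string like 'LRL'."""
    for step in path:
        node = node['left'] if step == 'L' else node['right']
    return node

tree = {'left': {'left': 'leaf 0', 'right': 'leaf 1'}, 'right': 'leaf 2'}
print(follow_path(tree, 'LR'))  # -> 'leaf 1'
```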
This program can also provide density estimates for a set of test points, specified in the `test` parameter. The density estimation tree used for this task will be the tree that was trained on the given training points, or a tree given as the `input_model` parameter. The density estimates for the test points may be saved using the `test_set_estimates` output parameter.
```go
// Initialize optional parameters for Emst().
param := mlpack.EmstOptions()
param.LeafSize = 1
param.Naive = false

output := mlpack.Emst(input, param)
```
</div>
<div class="language-decl" id="r" markdown="1">
```R
R> library(mlpack)
R> d <- emst(input=matrix(numeric(), 0, 0), leaf_size=1, naive=FALSE,
verbose=FALSE)
R> output <- d$output
```
</div>
An implementation of the Dual-Tree Boruvka algorithm for computing the Euclidean minimum spanning tree of a set of input points. Detailed documentation{: .language-detail-link #cli }Detailed documentation{: .language-detail-link #python }Detailed documentation{: .language-detail-link #julia }Detailed documentation{: .language-detail-link #go }Detailed documentation{: .language-detail-link #r }.
name | type | description | default |
---|---|---|---|
--help (-h) | flag | Default help info. Only exists in CLI binding. | |
--info | string | Print help on a specific option. Only exists in CLI binding. | '' |
--input_file (-i) | 2-d matrix file | Input data matrix. | **--** |
--leaf_size (-l) | int | Leaf size in the kd-tree. One-element leaves give the empirically best performance, but at the cost of greater memory requirements. | 1 |
--naive (-n) | flag | Compute the MST using O(n^2) naive algorithm. | |
--verbose (-v) | flag | Display informational messages and the full list of parameters and timers at the end of execution. | |
--version (-V) | flag | Display the version of mlpack. Only exists in CLI binding. | |
name | type | description |
---|---|---|
--output_file (-o) | 2-d matrix file | Output data. Stored as an edge list. |
{: #cli_emst_detailed-documentation }
This program can compute the Euclidean minimum spanning tree of a set of input points using the dual-tree Boruvka algorithm.
The set to calculate the minimum spanning tree of is specified with the `--input_file (-i)` parameter, and the output may be saved with the `--output_file (-o)` output parameter.

The `--leaf_size (-l)` parameter controls the leaf size of the kd-tree that is used to calculate the minimum spanning tree, and if the `--naive (-n)` option is given, then brute-force search is used (this is typically much slower in low dimensions). The leaf size does not affect the results, but it may have some effect on the runtime of the algorithm.

For example, the minimum spanning tree of the input dataset `'data.csv'` can be calculated with a leaf size of 20 and stored as `'spanning_tree.csv'` using the following command:

$ mlpack_emst --input_file data.csv --leaf_size 20 --output_file spanning_tree.csv
The output matrix is a three-column matrix, where each row indicates an edge. The first column corresponds to the lesser index of the edge; the second column corresponds to the greater index of the edge; and the third column corresponds to the distance between the two points.
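Since each row of the edge list is (lesser index, greater index, distance), quantities like the total tree weight fall out directly. A short sketch (assuming the output file from the example above):

```python
import numpy as np

# Each row of the emst edge list: lesser index, greater index, distance.
edges = np.genfromtxt('spanning_tree.csv', delimiter=',')
print('edges:', edges.shape[0])              # n - 1 for n input points
print('total MST weight:', edges[:, 2].sum())
```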
name | type | description | default |
---|---|---|---|
copy_all_inputs | bool | If specified, all input parameters will be deep copied before the method is run. This is useful for debugging problems where the input parameters are being modified by the algorithm, but can slow down the code. Only exists in Python binding. | False |
input | matrix | Input data matrix. | **--** |
leaf_size | int | Leaf size in the kd-tree. One-element leaves give the empirically best performance, but at the cost of greater memory requirements. | 1 |
naive | bool | Compute the MST using O(n^2) naive algorithm. | False |
verbose | bool | Display informational messages and the full list of parameters and timers at the end of execution. | False |
Results are returned in a Python dictionary. The keys of the dictionary are the names of the output parameters.
name | type | description |
---|---|---|
output | matrix | Output data. Stored as an edge list. |
{: #python_emst_detailed-documentation }
This program can compute the Euclidean minimum spanning tree of a set of input points using the dual-tree Boruvka algorithm.
The set to calculate the minimum spanning tree of is specified with the `input` parameter, and the output may be saved with the `output` output parameter.

The `leaf_size` parameter controls the leaf size of the kd-tree that is used to calculate the minimum spanning tree, and if the `naive` option is given, then brute-force search is used (this is typically much slower in low dimensions). The leaf size does not affect the results, but it may have some effect on the runtime of the algorithm.

For example, the minimum spanning tree of the input dataset `'data'` can be calculated with a leaf size of 20 and stored as `'spanning_tree'` using the following command:
>>> output = emst(input=data, leaf_size=20)
>>> spanning_tree = output['output']
The output matrix is a three-column matrix, where each row indicates an edge. The first column corresponds to the lesser index of the edge; the second column corresponds to the greater index of the edge; and the third column corresponds to the distance between the two points.
name | type | description | default |
---|---|---|---|
input | Float64 matrix-like | Input data matrix. | **--** |
leaf_size | Int | Leaf size in the kd-tree. One-element leaves give the empirically best performance, but at the cost of greater memory requirements. | 1 |
naive | Bool | Compute the MST using O(n^2) naive algorithm. | false |
verbose | Bool | Display informational messages and the full list of parameters and timers at the end of execution. | false |
Results are returned as a tuple, and can be unpacked directly into return values or stored directly as a tuple; undesired results can be ignored with the _ keyword.
name | type | description |
---|---|---|
output | Float64 matrix-like | Output data. Stored as an edge list. |
{: #julia_emst_detailed-documentation }
This program can compute the Euclidean minimum spanning tree of a set of input points using the dual-tree Boruvka algorithm.
The set to calculate the minimum spanning tree of is specified with the `input` parameter, and the output may be saved with the `output` output parameter.

The `leaf_size` parameter controls the leaf size of the kd-tree that is used to calculate the minimum spanning tree, and if the `naive` option is given, then brute-force search is used (this is typically much slower in low dimensions). The leaf size does not affect the results, but it may have some effect on the runtime of the algorithm.

For example, the minimum spanning tree of the input dataset `data` can be calculated with a leaf size of 20 and stored as `spanning_tree` using the following command:
julia> using CSV
julia> data = CSV.read("data.csv")
julia> spanning_tree = emst(data; leaf_size=20)
The output matrix is a three-column matrix, where each row indicates an edge. The first column corresponds to the lesser index of the edge; the second column corresponds to the greater index of the edge; and the third column corresponds to the distance between the two points.
There are two types of input options: required options, which are passed directly to the function call, and optional options, which are passed via an initialized struct that allows keyword access to each of the options.
name | type | description | default |
---|---|---|---|
input | *mat.Dense | Input data matrix. | **--** |
LeafSize | int | Leaf size in the kd-tree. One-element leaves give the empirically best performance, but at the cost of greater memory requirements. | 1 |
Naive | bool | Compute the MST using O(n^2) naive algorithm. | false |
Verbose | bool | Display informational messages and the full list of parameters and timers at the end of execution. | false |
Output options are returned via Go's support for multiple return values, in the order listed below.
name | type | description |
---|---|---|
output | *mat.Dense | Output data. Stored as an edge list. |
{: #go_emst_detailed-documentation }
This program can compute the Euclidean minimum spanning tree of a set of input points using the dual-tree Boruvka algorithm.
The set to calculate the minimum spanning tree of is specified with the `Input` parameter, and the output may be saved with the `Output` output parameter.

The `LeafSize` parameter controls the leaf size of the kd-tree that is used to calculate the minimum spanning tree, and if the `Naive` option is given, then brute-force search is used (this is typically much slower in low dimensions). The leaf size does not affect the results, but it may have some effect on the runtime of the algorithm.

For example, the minimum spanning tree of the input dataset `data` can be calculated with a leaf size of 20 and stored as `spanning_tree` using the following command:
// Initialize optional parameters for Emst().
param := mlpack.EmstOptions()
param.LeafSize = 20
spanning_tree := mlpack.Emst(data, param)
The output matrix is a three-column matrix, where each row indicates an edge. The first column corresponds to the lesser index of the edge; the second column corresponds to the greater index of the edge; and the third column corresponds to the distance between the two points.
name | type | description | default |
---|---|---|---|
input | numeric matrix | Input data matrix. | **--** |
leaf_size | integer | Leaf size in the kd-tree. One-element leaves give the empirically best performance, but at the cost of greater memory requirements. | 1 |
naive | logical | Compute the MST using O(n^2) naive algorithm. | FALSE |
verbose | logical | Display informational messages and the full list of parameters and timers at the end of execution. | FALSE |
Results are returned in an R list. The keys of the list are the names of the output parameters.
name | type | description |
---|---|---|
output | numeric matrix | Output data. Stored as an edge list. |
{: #r_emst_detailed-documentation }
This program can compute the Euclidean minimum spanning tree of a set of input points using the dual-tree Boruvka algorithm.
The set to calculate the minimum spanning tree of is specified with the `input` parameter, and the output may be saved with the `output` output parameter.

The `leaf_size` parameter controls the leaf size of the kd-tree that is used to calculate the minimum spanning tree, and if the `naive` option is given, then brute-force search is used (this is typically much slower in low dimensions). The leaf size does not affect the results, but it may have some effect on the runtime of the algorithm.

For example, the minimum spanning tree of the input dataset `"data"` can be calculated with a leaf size of 20 and stored as `"spanning_tree"` using the following command:
R> output <- emst(input=data, leaf_size=20)
R> spanning_tree <- output$output
The output matrix is a three-column matrix, where each row indicates an edge. The first column corresponds to the lesser index of the edge; the second column corresponds to the greater index of the edge; and the third column corresponds to the distance between the two points.
```go
// Initialize optional parameters for Fastmks().
param := mlpack.FastmksOptions()
param.Bandwidth = 1
param.Base = 2
param.Degree = 2
param.InputModel = nil
param.K = 0
param.Kernel = "linear"
param.Naive = false
param.Offset = 0
param.Query = mat.NewDense(1, 1, nil)
param.Reference = mat.NewDense(1, 1, nil)
param.Scale = 1
param.Single = false

indices, kernels, output_model := mlpack.Fastmks(param)
```
</div>
<div class="language-decl" id="r" markdown="1">
```R
R> library(mlpack)
R> d <- fastmks(bandwidth=1, base=2, degree=2, input_model=NA, k=0,
kernel="linear", naive=FALSE, offset=0, query=matrix(numeric(), 0, 0),
reference=matrix(numeric(), 0, 0), scale=1, single=FALSE,
verbose=FALSE)
R> indices <- d$indices
R> kernels <- d$kernels
R> output_model <- d$output_model
```
</div>
An implementation of the single-tree and dual-tree fast max-kernel search (FastMKS) algorithm. Given a set of reference points and a set of query points, this can find the reference point with maximum kernel value for each query point; trained models can be reused for future queries. Detailed documentation{: .language-detail-link #cli }Detailed documentation{: .language-detail-link #python }Detailed documentation{: .language-detail-link #julia }Detailed documentation{: .language-detail-link #go }Detailed documentation{: .language-detail-link #r }.
name | type | description | default |
---|---|---|---|
--bandwidth (-w) | double | Bandwidth (for Gaussian, Epanechnikov, and triangular kernels). | 1 |
--base (-b) | double | Base to use during cover tree construction. | 2 |
--degree (-d) | double | Degree of polynomial kernel. | 2 |
--help (-h) | flag | Default help info. Only exists in CLI binding. | |
--info | string | Print help on a specific option. Only exists in CLI binding. | '' |
--input_model_file (-m) | FastMKSModel file | Input FastMKS model to use. | '' |
--k (-k) | int | Number of maximum kernels to find. | 0 |
--kernel (-K) | string | Kernel type to use: 'linear', 'polynomial', 'cosine', 'gaussian', 'epanechnikov', 'triangular', 'hyptan'. | 'linear' |
--naive (-N) | flag | If true, O(n^2) naive mode is used for computation. | |
--offset (-o) | double | Offset of kernel (for polynomial and hyptan kernels). | 0 |
--query_file (-q) | 2-d matrix file | The query dataset. | '' |
--reference_file (-r) | 2-d matrix file | The reference dataset. | '' |
--scale (-s) | double | Scale of kernel (for hyptan kernel). | 1 |
--single (-S) | flag | If true, single-tree search is used (as opposed to dual-tree search). | |
--verbose (-v) | flag | Display informational messages and the full list of parameters and timers at the end of execution. | |
--version (-V) | flag | Display the version of mlpack. Only exists in CLI binding. | |
name | type | description |
---|---|---|
--indices_file (-i) | 2-d index matrix file | Output matrix of indices. |
--kernels_file (-p) | 2-d matrix file | Output matrix of kernels. |
--output_model_file (-M) | FastMKSModel file | Output for FastMKS model. |
{: #cli_fastmks_detailed-documentation }
This program will find the k maximum kernels of a set of points, using a query set and a reference set (which can optionally be the same set). More specifically, for each point in the query set, the k points in the reference set with maximum kernel evaluations are found. The kernel function used is specified with the `--kernel (-K)` parameter.

For example, the following command will calculate, for each point in the query set `'query.csv'`, the five points in the reference set `'reference.csv'` with maximum kernel evaluation using the linear kernel. The kernel evaluations may be saved with the `'kernels.csv'` output parameter and the indices may be saved with the `'indices.csv'` output parameter.

$ mlpack_fastmks --k 5 --reference_file reference.csv --query_file query.csv --indices_file indices.csv --kernels_file kernels.csv --kernel linear
The output matrices are organized such that the entry in row i and column j of the indices matrix corresponds to the index of the point in the reference set that has the j'th largest kernel evaluation with the point in the query set with index i. The entry in row i and column j of the kernels matrix corresponds to the kernel evaluation between those two points.
This program performs FastMKS using a cover tree. The base used to build the cover tree can be specified with the `--base (-b)` parameter.
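For the linear kernel, this layout can be sanity-checked directly: entry (i, j) of the kernels matrix should equal the dot product of query point i with the reference point named by entry (i, j) of the indices matrix. A sketch using the Python binding (filenames are hypothetical):

```python
import numpy as np
from mlpack import fastmks

reference = np.genfromtxt('reference.csv', delimiter=',')
query = np.genfromtxt('query.csv', delimiter=',')

out = fastmks(k=5, reference=reference, query=query, kernel='linear')
indices, kernels = out['indices'], out['kernels']

# indices[i, j] names the reference point with the j'th largest kernel
# value for query point i; kernels[i, j] is that kernel value.
i, j = 0, 0
assert np.isclose(kernels[i, j], query[i].dot(reference[indices[i, j]]))
```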
name | type | description | default |
---|---|---|---|
bandwidth | float | Bandwidth (for Gaussian, Epanechnikov, and triangular kernels). | 1 |
base | float | Base to use during cover tree construction. | 2 |
copy_all_inputs | bool | If specified, all input parameters will be deep copied before the method is run. This is useful for debugging problems where the input parameters are being modified by the algorithm, but can slow down the code. Only exists in Python binding. | False |
degree | float | Degree of polynomial kernel. | 2 |
input_model | FastMKSModelType | Input FastMKS model to use. | None |
k | int | Number of maximum kernels to find. | 0 |
kernel | str | Kernel type to use: 'linear', 'polynomial', 'cosine', 'gaussian', 'epanechnikov', 'triangular', 'hyptan'. | 'linear' |
naive | bool | If true, O(n^2) naive mode is used for computation. | False |
offset | float | Offset of kernel (for polynomial and hyptan kernels). | 0 |
query | matrix | The query dataset. | np.empty([0, 0]) |
reference | matrix | The reference dataset. | np.empty([0, 0]) |
scale | float | Scale of kernel (for hyptan kernel). | 1 |
single | bool | If true, single-tree search is used (as opposed to dual-tree search). | False |
verbose | bool | Display informational messages and the full list of parameters and timers at the end of execution. | False |
Results are returned in a Python dictionary. The keys of the dictionary are the names of the output parameters.
name | type | description |
---|---|---|
indices | int matrix | Output matrix of indices. |
kernels | matrix | Output matrix of kernels. |
output_model | FastMKSModelType | Output for FastMKS model. |
{: #python_fastmks_detailed-documentation }
This program will find the k maximum kernels of a set of points, using a query set and a reference set (which can optionally be the same set). More specifically, for each point in the query set, the k points in the reference set with maximum kernel evaluations are found. The kernel function used is specified with the `kernel` parameter.

For example, the following command will calculate, for each point in the query set `'query'`, the five points in the reference set `'reference'` with maximum kernel evaluation using the linear kernel. The kernel evaluations may be saved with the `'kernels'` output parameter and the indices may be saved with the `'indices'` output parameter.
>>> output = fastmks(k=5, reference=reference, query=query, kernel='linear')
>>> indices = output['indices']
>>> kernels = output['kernels']
The output matrices are organized such that the entry in row i and column j of the indices matrix corresponds to the index of the point in the reference set that has the j'th largest kernel evaluation with the point in the query set with index i. The entry in row i and column j of the kernels matrix corresponds to the kernel evaluation between those two points.
This program performs FastMKS using a cover tree. The base used to build the cover tree can be specified with the `base` parameter.
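As a variation on the example above, the same search with a polynomial kernel, using the `degree` and `offset` parameters from the table (a sketch; `reference` and `query` are assumed to be already-loaded NumPy matrices with one row per point):

```python
from mlpack import fastmks

# Polynomial kernel: k(x, y) = (x . y + offset)^degree.
out = fastmks(k=3, reference=reference, query=query,
              kernel='polynomial', degree=3, offset=1)
indices, kernels = out['indices'], out['kernels']
```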
name | type | description | default |
---|---|---|---|
bandwidth | Float64 | Bandwidth (for Gaussian, Epanechnikov, and triangular kernels). | 1 |
base | Float64 | Base to use during cover tree construction. | 2 |
degree | Float64 | Degree of polynomial kernel. | 2 |
input_model | FastMKSModel | Input FastMKS model to use. | nothing |
k | Int | Number of maximum kernels to find. | 0 |
kernel | String | Kernel type to use: 'linear', 'polynomial', 'cosine', 'gaussian', 'epanechnikov', 'triangular', 'hyptan'. | "linear" |
naive | Bool | If true, O(n^2) naive mode is used for computation. | false |
offset | Float64 | Offset of kernel (for polynomial and hyptan kernels). | 0 |
query | Float64 matrix-like | The query dataset. | zeros(0, 0) |
reference | Float64 matrix-like | The reference dataset. | zeros(0, 0) |
scale | Float64 | Scale of kernel (for hyptan kernel). | 1 |
single | Bool | If true, single-tree search is used (as opposed to dual-tree search). | false |
verbose | Bool | Display informational messages and the full list of parameters and timers at the end of execution. | false |
Results are returned as a tuple, and can be unpacked directly into return values or stored directly as a tuple; undesired results can be ignored with the _ keyword.
name | type | description |
---|---|---|
indices | Int matrix-like | Output matrix of indices. |
kernels | Float64 matrix-like | Output matrix of kernels. |
output_model | FastMKSModel | Output for FastMKS model. |
{: #julia_fastmks_detailed-documentation }
This program will find the k maximum kernels of a set of points, using a query set and a reference set (which can optionally be the same set). More specifically, for each point in the query set, the k points in the reference set with maximum kernel evaluations are found. The kernel function used is specified with the `kernel` parameter.

For example, the following command will calculate, for each point in the query set `query`, the five points in the reference set `reference` with maximum kernel evaluation using the linear kernel. The kernel evaluations may be saved with the `kernels` output parameter and the indices may be saved with the `indices` output parameter.
julia> using CSV
julia> reference = CSV.read("reference.csv")
julia> query = CSV.read("query.csv")
julia> indices, kernels, _ = fastmks(k=5, kernel="linear",
query=query, reference=reference)
The output matrices are organized such that the entry in row i and column j of the indices matrix corresponds to the index of the point in the reference set that has the j'th largest kernel evaluation with the point in the query set with index i. The entry in row i and column j of the kernels matrix corresponds to the kernel evaluation between those two points.
This program performs FastMKS using a cover tree. The base used to build the cover tree can be specified with the `base` parameter.
There are two types of input options: required options, which are passed directly to the function call, and optional options, which are passed via an initialized struct that allows keyword access to each of the options.
name | type | description | default |
---|---|---|---|
Bandwidth | float64 | Bandwidth (for Gaussian, Epanechnikov, and triangular kernels). | 1 |
Base | float64 | Base to use during cover tree construction. | 2 |
Degree | float64 | Degree of polynomial kernel. | 2 |
InputModel | fastmksModel | Input FastMKS model to use. | nil |
K | int | Number of maximum kernels to find. | 0 |
Kernel | string | Kernel type to use: 'linear', 'polynomial', 'cosine', 'gaussian', 'epanechnikov', 'triangular', 'hyptan'. | "linear" |
Naive | bool | If true, O(n^2) naive mode is used for computation. | false |
Offset | float64 | Offset of kernel (for polynomial and hyptan kernels). | 0 |
Query | *mat.Dense | The query dataset. | mat.NewDense(1, 1, nil) |
Reference | *mat.Dense | The reference dataset. | mat.NewDense(1, 1, nil) |
Scale | float64 | Scale of kernel (for hyptan kernel). | 1 |
Single | bool | If true, single-tree search is used (as opposed to dual-tree search). | false |
Verbose | bool | Display informational messages and the full list of parameters and timers at the end of execution. | false |
Output options are returned via Go's support for multiple return values, in the order listed below.
name | type | description |
---|---|---|
indices | *mat.Dense (with ints) | Output matrix of indices. |
kernels | *mat.Dense | Output matrix of kernels. |
outputModel | fastmksModel | Output for FastMKS model. |
{: #go_fastmks_detailed-documentation }
This program will find the k maximum kernels of a set of points, using a query set and a reference set (which can optionally be the same set). More specifically, for each point in the query set, the k points in the reference set with maximum kernel evaluations are found. The kernel function used is specified with the `Kernel` parameter.

For example, the following command will calculate, for each point in the query set `query`, the five points in the reference set `reference` with maximum kernel evaluation using the linear kernel. The kernel evaluations may be saved with the `kernels` output parameter and the indices may be saved with the `indices` output parameter.
// Initialize optional parameters for Fastmks().
param := mlpack.FastmksOptions()
param.K = 5
param.Reference = reference
param.Query = query
param.Kernel = "linear"
indices, kernels, _ := mlpack.Fastmks(param)
The output matrices are organized such that the entry in row i and column j of the indices matrix corresponds to the index of the point in the reference set that has the j'th largest kernel evaluation with the point in the query set with index i. The entry in row i and column j of the kernels matrix corresponds to the kernel evaluation between those two points.
This program performs FastMKS using a cover tree. The base used to build the cover tree can be specified with the `Base` parameter.
name | type | description | default |
---|---|---|---|
bandwidth | numeric | Bandwidth (for Gaussian, Epanechnikov, and triangular kernels). | 1 |
base | numeric | Base to use during cover tree construction. | 2 |
degree | numeric | Degree of polynomial kernel. | 2 |
input_model | FastMKSModel | Input FastMKS model to use. | NA |
k | integer | Number of maximum kernels to find. | 0 |
kernel | character | Kernel type to use: 'linear', 'polynomial', 'cosine', 'gaussian', 'epanechnikov', 'triangular', 'hyptan'. | "linear" |
naive | logical | If true, O(n^2) naive mode is used for computation. | FALSE |
offset | numeric | Offset of kernel (for polynomial and hyptan kernels). | 0 |
query | numeric matrix | The query dataset. | matrix(numeric(), 0, 0) |
reference | numeric matrix | The reference dataset. | matrix(numeric(), 0, 0) |
scale | numeric | Scale of kernel (for hyptan kernel). | 1 |
single | logical | If true, single-tree search is used (as opposed to dual-tree search). | FALSE |
verbose | logical | Display informational messages and the full list of parameters and timers at the end of execution. | FALSE |
Results are returned in an R list. The keys of the list are the names of the output parameters.
name | type | description |
---|---|---|
indices | integer matrix | Output matrix of indices. |
kernels | numeric matrix | Output matrix of kernels. |
output_model | FastMKSModel | Output for FastMKS model. |
{: #r_fastmks_detailed-documentation }
This program will find the k maximum kernels of a set of points, using a query set and a reference set (which can optionally be the same set). More specifically, for each point in the query set, the k points in the reference set with maximum kernel evaluations are found. The kernel function used is specified with the `kernel` parameter.

For example, the following command will calculate, for each point in the query set `"query"`, the five points in the reference set `"reference"` with maximum kernel evaluation using the linear kernel. The kernel evaluations may be saved with the `"kernels"` output parameter and the indices may be saved with the `"indices"` output parameter.
R> output <- fastmks(k=5, reference=reference, query=query, kernel="linear")
R> indices <- output$indices
R> kernels <- output$kernels
The output matrices are organized such that the entry in row i and column j of the indices matrix corresponds to the index of the point in the reference set that has the j'th largest kernel evaluation with the point in the query set with index i. The entry in row i and column j of the kernels matrix corresponds to the kernel evaluation between those two points.
This program performs FastMKS using a cover tree. The base used to build the cover tree can be specified with the `base` parameter.
```go
// Initialize optional parameters for GmmTrain().
param := mlpack.GmmTrainOptions()
param.DiagonalCovariance = false
param.InputModel = nil
param.KmeansMaxIterations = 1000
param.MaxIterations = 250
param.NoForcePositive = false
param.Noise = 0
param.Percentage = 0.02
param.RefinedStart = false
param.Samplings = 100
param.Seed = 0
param.Tolerance = 1e-10
param.Trials = 1

output_model := mlpack.GmmTrain(gaussians, input, param)
```
</div>
<div class="language-decl" id="r" markdown="1">
```R
R> library(mlpack)
R> d <- gmm_train(diagonal_covariance=FALSE, gaussians=0,
input=matrix(numeric(), 0, 0), input_model=NA,
kmeans_max_iterations=1000, max_iterations=250, no_force_positive=FALSE,
noise=0, percentage=0.02, refined_start=FALSE, samplings=100, seed=0,
tolerance=1e-10, trials=1, verbose=FALSE)
R> output_model <- d$output_model
```
</div>
An implementation of the EM algorithm for training Gaussian mixture models (GMMs). Given a dataset, this can train a GMM for future use with other tools. Detailed documentation{: .language-detail-link #cli }Detailed documentation{: .language-detail-link #python }Detailed documentation{: .language-detail-link #julia }Detailed documentation{: .language-detail-link #go }Detailed documentation{: .language-detail-link #r }.
name | type | description | default |
---|---|---|---|
--diagonal_covariance (-d) | flag | Force the covariance of the Gaussians to be diagonal. This can significantly accelerate training. | |
--gaussians (-g) | int | Number of Gaussians in the GMM. | **--** |
--help (-h) | flag | Default help info. Only exists in CLI binding. | |
--info | string | Print help on a specific option. Only exists in CLI binding. | '' |
--input_file (-i) | 2-d matrix file | The training data on which the model will be fit. | **--** |
--input_model_file (-m) | GMM file | Initial input GMM model to start training with. | '' |
--kmeans_max_iterations (-k) | int | Maximum number of iterations for the k-means algorithm (used to initialize EM). | 1000 |
--max_iterations (-n) | int | Maximum number of iterations of EM algorithm (passing 0 will run until convergence). | 250 |
--no_force_positive (-P) | flag | Do not force the covariance matrices to be positive definite. | |
--noise (-N) | double | Variance of zero-mean Gaussian noise to add to data. | 0 |
--percentage (-p) | double | If using --refined_start, specify the percentage of the dataset used for each sampling (should be between 0.0 and 1.0). | 0.02 |
--refined_start (-r) | flag | During the initialization, use refined initial positions for k-means clustering (Bradley and Fayyad, 1998). | |
--samplings (-S) | int | If using --refined_start, specify the number of samplings used for initial points. | 100 |
--seed (-s) | int | Random seed. If 0, 'std::time(NULL)' is used. | 0 |
--tolerance (-T) | double | Tolerance for convergence of EM. | 1e-10 |
--trials (-t) | int | Number of trials to perform in training GMM. | 1 |
--verbose (-v) | flag | Display informational messages and the full list of parameters and timers at the end of execution. | |
--version (-V) | flag | Display the version of mlpack. Only exists in CLI binding. | |
name | type | description |
---|---|---|
--output_model_file (-M) | GMM file | Output for trained GMM model. |
{: #cli_gmm_train_detailed-documentation }
This program computes a parametric estimate of a Gaussian mixture model (GMM), using the EM algorithm to find the maximum likelihood estimate. The model may be saved and reused by other mlpack GMM tools.
The input data to train on must be specified with the `--input_file (-i)` parameter, and the number of Gaussians in the model must be specified with the `--gaussians (-g)` parameter. Optionally, many trials with different random initializations may be run, and the result with highest log-likelihood on the training data will be taken. The number of trials to run is specified with the `--trials (-t)` parameter. By default, only one trial is run.
The tolerance for convergence and maximum number of iterations of the EM algorithm are specified with the `--tolerance (-T)` and `--max_iterations (-n)` parameters, respectively. The GMM may be initialized for training with another model, specified with the `--input_model_file (-m)` parameter. Otherwise, the model is initialized by running k-means on the data. The k-means clustering initialization can be controlled with the `--kmeans_max_iterations (-k)`, `--refined_start (-r)`, `--samplings (-S)`, and `--percentage (-p)` parameters. If `--refined_start (-r)` is specified, then the Bradley-Fayyad refined start initialization will be used. This can often lead to better clustering results.
The `--diagonal_covariance (-d)` flag will cause the learned covariances to be diagonal matrices. This significantly simplifies the model itself and causes training to be faster, but restricts the ability to fit more complex GMMs.
If GMM training fails with an error indicating that a covariance matrix could not be inverted, make sure that the `--no_force_positive (-P)` parameter is not specified. Alternatively, adding a small amount of Gaussian noise (using the `--noise (-N)` parameter) to the entire dataset may help prevent Gaussians with zero variance in a particular dimension, which is usually the cause of non-invertible covariance matrices.
The `--no_force_positive (-P)` parameter, if set, will avoid the checks after each iteration of the EM algorithm which ensure that the covariance matrices are positive definite. Specifying the flag can cause faster runtime, but may also cause non-positive definite covariance matrices, which will cause the program to crash.
As an example, to train a 6-Gaussian GMM on the data in `'data.csv'` with a maximum of 100 iterations of EM and 3 trials, saving the trained GMM to `'gmm.bin'`, the following command can be used:

$ mlpack_gmm_train --input_file data.csv --gaussians 6 --trials 3 --max_iterations 100 --output_model_file gmm.bin
To re-train that GMM on another set of data `'data2.csv'`, the following command may be used:

$ mlpack_gmm_train --input_model_file gmm.bin --input_file data2.csv --gaussians 6 --output_model_file new_gmm.bin
name | type | description | default |
---|---|---|---|
copy_all_inputs | bool | If specified, all input parameters will be deep copied before the method is run. This is useful for debugging problems where the input parameters are being modified by the algorithm, but can slow down the code. Only exists in Python binding. | False |
diagonal_covariance | bool | Force the covariance of the Gaussians to be diagonal. This can significantly accelerate training. | False |
gaussians | int | Number of Gaussians in the GMM. | **--** |
input | matrix | The training data on which the model will be fit. | **--** |
input_model | GMMType | Initial input GMM model to start training with. | None |
kmeans_max_iterations | int | Maximum number of iterations for the k-means algorithm (used to initialize EM). | 1000 |
max_iterations | int | Maximum number of iterations of EM algorithm (passing 0 will run until convergence). | 250 |
no_force_positive | bool | Do not force the covariance matrices to be positive definite. | False |
noise | float | Variance of zero-mean Gaussian noise to add to data. | 0 |
percentage | float | If using --refined_start, specify the percentage of the dataset used for each sampling (should be between 0.0 and 1.0). | 0.02 |
refined_start | bool | During the initialization, use refined initial positions for k-means clustering (Bradley and Fayyad, 1998). | False |
samplings | int | If using --refined_start, specify the number of samplings used for initial points. | 100 |
seed | int | Random seed. If 0, 'std::time(NULL)' is used. | 0 |
tolerance | float | Tolerance for convergence of EM. | 1e-10 |
trials | int | Number of trials to perform in training GMM. | 1 |
verbose | bool | Display informational messages and the full list of parameters and timers at the end of execution. | False |
Results are returned in a Python dictionary. The keys of the dictionary are the names of the output parameters.
name | type | description |
---|---|---|
output_model | GMMType | Output for trained GMM model. |
{: #python_gmm_train_detailed-documentation }
This program computes a parametric estimate of a Gaussian mixture model (GMM), using the EM algorithm to find the maximum likelihood estimate. The model may be saved and reused by other mlpack GMM tools.
The input data to train on must be specified with the `input` parameter, and the number of Gaussians in the model must be specified with the `gaussians` parameter. Optionally, many trials with different random initializations may be run, and the result with highest log-likelihood on the training data will be taken. The number of trials to run is specified with the `trials` parameter. By default, only one trial is run.
The tolerance for convergence and maximum number of iterations of the EM algorithm are specified with the `tolerance` and `max_iterations` parameters, respectively. The GMM may be initialized for training with another model, specified with the `input_model` parameter. Otherwise, the model is initialized by running k-means on the data. The k-means clustering initialization can be controlled with the `kmeans_max_iterations`, `refined_start`, `samplings`, and `percentage` parameters. If `refined_start` is specified, then the Bradley-Fayyad refined start initialization will be used. This can often lead to better clustering results.
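For instance, a hedged sketch of enabling the refined start in the Python binding (parameter names are taken from the table above; the data file is hypothetical):

```python
import numpy as np
from mlpack import gmm_train

data = np.genfromtxt('data.csv', delimiter=',')  # one row per point

# Initialize k-means with the Bradley-Fayyad refined start; 'samplings'
# and 'percentage' control how many subsamples are drawn and how large
# each subsample is.
out = gmm_train(input=data, gaussians=4, refined_start=True,
                samplings=100, percentage=0.02)
gmm = out['output_model']
```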
The `diagonal_covariance` flag will cause the learned covariances to be diagonal matrices. This significantly simplifies the model itself and causes training to be faster, but restricts the ability to fit more complex GMMs.
If GMM training fails with an error indicating that a covariance matrix could not be inverted, make sure that the `no_force_positive` parameter is not specified. Alternatively, adding a small amount of Gaussian noise (using the `noise` parameter) to the entire dataset may help prevent Gaussians with zero variance in a particular dimension, which is usually the cause of non-invertible covariance matrices.
The `no_force_positive` parameter, if set, will avoid the checks after each iteration of the EM algorithm which ensure that the covariance matrices are positive definite. Specifying the flag can cause faster runtime, but may also cause non-positive definite covariance matrices, which will cause the program to crash.
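Continuing the sketch above, the two remedies just described look like this (the noise variance is an arbitrary small value, chosen only for illustration):

```python
# Remedy 1: jitter the data with a little zero-mean Gaussian noise.
out = gmm_train(input=data, gaussians=4, noise=1e-8)

# Remedy 2: restrict the model to diagonal covariances, which also
# speeds up training at the cost of model flexibility.
out = gmm_train(input=data, gaussians=4, diagonal_covariance=True)
```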
As an example, to train a 6-Gaussian GMM on the data in `'data'` with a maximum of 100 iterations of EM and 3 trials, saving the trained GMM to `'gmm'`, the following command can be used:

>>> output = gmm_train(input=data, gaussians=6, trials=3, max_iterations=100)
>>> gmm = output['output_model']
To re-train that GMM on another set of data `'data2'`, the following command may be used:
>>> output = gmm_train(input_model=gmm, input=data2, gaussians=6)
>>> new_gmm = output['output_model']
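Putting the pieces together, a sketch that fits a GMM and then draws new points from it with gmm_generate (which is documented later in this reference; the data file is hypothetical):

```python
import numpy as np
from mlpack import gmm_train, gmm_generate

data = np.genfromtxt('data.csv', delimiter=',')

gmm = gmm_train(input=data, gaussians=6, trials=3)['output_model']
samples = gmm_generate(input_model=gmm, samples=1000)['output']
print(samples.shape)  # 1000 generated points
```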
name | type | description | default |
---|---|---|---|
diagonal_covariance | Bool | Force the covariance of the Gaussians to be diagonal. This can significantly accelerate training. | false |
gaussians | Int | Number of Gaussians in the GMM. | **--** |
input | Float64 matrix-like | The training data on which the model will be fit. | **--** |
input_model | GMM | Initial input GMM model to start training with. | nothing |
kmeans_max_iterations | Int | Maximum number of iterations for the k-means algorithm (used to initialize EM). | 1000 |
max_iterations | Int | Maximum number of iterations of EM algorithm (passing 0 will run until convergence). | 250 |
no_force_positive | Bool | Do not force the covariance matrices to be positive definite. | false |
noise | Float64 | Variance of zero-mean Gaussian noise to add to data. | 0 |
percentage | Float64 | If using --refined_start, specify the percentage of the dataset used for each sampling (should be between 0.0 and 1.0). | 0.02 |
refined_start | Bool | During the initialization, use refined initial positions for k-means clustering (Bradley and Fayyad, 1998). | false |
samplings | Int | If using --refined_start, specify the number of samplings used for initial points. | 100 |
seed | Int | Random seed. If 0, 'std::time(NULL)' is used. | 0 |
tolerance | Float64 | Tolerance for convergence of EM. | 1e-10 |
trials | Int | Number of trials to perform in training GMM. | 1 |
verbose | Bool | Display informational messages and the full list of parameters and timers at the end of execution. | false |
Results are returned as a tuple, and can be unpacked directly into return values or stored directly as a tuple; undesired results can be ignored with the _ keyword.
name | type | description |
---|---|---|
output_model | GMM | Output for trained GMM model. |
{: #julia_gmm_train_detailed-documentation }
This program computes a parametric estimate of a Gaussian mixture model (GMM), using the EM algorithm to find the maximum likelihood estimate. The model may be saved and reused by other mlpack GMM tools.
The input data to train on must be specified with the `input` parameter, and the number of Gaussians in the model must be specified with the `gaussians` parameter. Optionally, many trials with different random initializations may be run, and the result with highest log-likelihood on the training data will be taken. The number of trials to run is specified with the `trials` parameter. By default, only one trial is run.
The tolerance for convergence and maximum number of iterations of the EM algorithm are specified with the `tolerance` and `max_iterations` parameters, respectively. The GMM may be initialized for training with another model, specified with the `input_model` parameter. Otherwise, the model is initialized by running k-means on the data. The k-means clustering initialization can be controlled with the `kmeans_max_iterations`, `refined_start`, `samplings`, and `percentage` parameters. If `refined_start` is specified, then the Bradley-Fayyad refined start initialization will be used. This can often lead to better clustering results.
The `diagonal_covariance` flag will cause the learned covariances to be diagonal matrices. This significantly simplifies the model itself and causes training to be faster, but restricts the ability to fit more complex GMMs.
If GMM training fails with an error indicating that a covariance matrix could not be inverted, make sure that the `no_force_positive` parameter is not specified. Alternatively, adding a small amount of Gaussian noise (using the `noise` parameter) to the entire dataset may help prevent Gaussians with zero variance in a particular dimension, which is usually the cause of non-invertible covariance matrices.
The `no_force_positive` parameter, if set, will avoid the checks after each iteration of the EM algorithm which ensure that the covariance matrices are positive definite. Specifying the flag can cause faster runtime, but may also cause non-positive definite covariance matrices, which will cause the program to crash.
As an example, to train a 6-Gaussian GMM on the data in `data` with a maximum of 100 iterations of EM and 3 trials, saving the trained GMM to `gmm`, the following command can be used:
julia> using CSV
julia> data = CSV.read("data.csv")
julia> gmm = gmm_train(6, data; trials=3, max_iterations=100)
To re-train that GMM on another set of data `data2`, the following command may be used:
julia> using CSV
julia> data2 = CSV.read("data2.csv")
julia> new_gmm = gmm_train(6, data2; input_model=gmm)
There are two types of input options: required options, which are passed directly to the function call, and optional options, which are passed via an initialized struct that allows keyword access to each of the options.
name | type | description | default |
---|---|---|---|
DiagonalCovariance | bool | Force the covariance of the Gaussians to be diagonal. This can significantly accelerate training. | false |
gaussians | int | Number of Gaussians in the GMM. | **--** |
input | *mat.Dense | The training data on which the model will be fit. | **--** |
InputModel | gmm | Initial input GMM model to start training with. | nil |
KmeansMaxIterations | int | Maximum number of iterations for the k-means algorithm (used to initialize EM). | 1000 |
MaxIterations | int | Maximum number of iterations of EM algorithm (passing 0 will run until convergence). | 250 |
NoForcePositive | bool | Do not force the covariance matrices to be positive definite. | false |
Noise | float64 | Variance of zero-mean Gaussian noise to add to data. | 0 |
Percentage | float64 | If using --refined_start, specify the percentage of the dataset used for each sampling (should be between 0.0 and 1.0). | 0.02 |
RefinedStart | bool | During the initialization, use refined initial positions for k-means clustering (Bradley and Fayyad, 1998). | false |
Samplings | int | If using --refined_start, specify the number of samplings used for initial points. | 100 |
Seed | int | Random seed. If 0, 'std::time(NULL)' is used. | 0 |
Tolerance | float64 | Tolerance for convergence of EM. | 1e-10 |
Trials | int | Number of trials to perform in training GMM. | 1 |
Verbose | bool | Display informational messages and the full list of parameters and timers at the end of execution. | false |
Output options are returned via Go's support for multiple return values, in the order listed below.
name | type | description |
---|---|---|
outputModel | gmm | Output for trained GMM model. |
{: #go_gmm_train_detailed-documentation }
This program computes a parametric estimate of a Gaussian mixture model (GMM), using the EM algorithm to find the maximum likelihood estimate. The model may be saved and reused by other mlpack GMM tools.
The input data to train on must be specified with the `Input` parameter, and the number of Gaussians in the model must be specified with the `Gaussians` parameter. Optionally, many trials with different random initializations may be run, and the result with highest log-likelihood on the training data will be taken. The number of trials to run is specified with the `Trials` parameter. By default, only one trial is run.
The tolerance for convergence and maximum number of iterations of the EM algorithm are specified with the `Tolerance` and `MaxIterations` parameters, respectively. The GMM may be initialized for training with another model, specified with the `InputModel` parameter. Otherwise, the model is initialized by running k-means on the data. The k-means clustering initialization can be controlled with the `KmeansMaxIterations`, `RefinedStart`, `Samplings`, and `Percentage` parameters. If `RefinedStart` is specified, then the Bradley-Fayyad refined start initialization will be used. This can often lead to better clustering results.
The `DiagonalCovariance` flag will cause the learned covariances to be diagonal matrices. This significantly simplifies the model itself and causes training to be faster, but restricts the ability to fit more complex GMMs.
If GMM training fails with an error indicating that a covariance matrix could not be inverted, make sure that the `NoForcePositive` parameter is not specified. Alternatively, adding a small amount of Gaussian noise (using the `Noise` parameter) to the entire dataset may help prevent Gaussians with zero variance in a particular dimension, which is usually the cause of non-invertible covariance matrices.
The `NoForcePositive` parameter, if set, will avoid the checks after each iteration of the EM algorithm which ensure that the covariance matrices are positive definite. Specifying the flag can cause faster runtime, but may also cause non-positive definite covariance matrices, which will cause the program to crash.
As an example, to train a 6-Gaussian GMM on the data in `data` with a maximum of 100 iterations of EM and 3 trials, saving the trained GMM to `gmm`, the following command can be used:
// Initialize optional parameters for GmmTrain().
param := mlpack.GmmTrainOptions()
param.Trials = 3
param.MaxIterations = 100

gmm := mlpack.GmmTrain(data, 6, param)
To re-train that GMM on another set of data `data2`, the following command may be used:
// Initialize optional parameters for GmmTrain().
param := mlpack.GmmTrainOptions()
param.InputModel = &gmm
new_gmm := mlpack.GmmTrain(data2, 6, param)
name | type | description | default |
---|---|---|---|
diagonal_covariance | logical | Force the covariance of the Gaussians to be diagonal. This can significantly accelerate training. | FALSE |
gaussians | integer | Number of Gaussians in the GMM. | **--** |
input | numeric matrix | The training data on which the model will be fit. | **--** |
input_model | GMM | Initial input GMM model to start training with. | NA |
kmeans_max_iterations | integer | Maximum number of iterations for the k-means algorithm (used to initialize EM). | 1000 |
max_iterations | integer | Maximum number of iterations of EM algorithm (passing 0 will run until convergence). | 250 |
no_force_positive | logical | Do not force the covariance matrices to be positive definite. | FALSE |
noise | numeric | Variance of zero-mean Gaussian noise to add to data. | 0 |
percentage | numeric | If using --refined_start, specify the percentage of the dataset used for each sampling (should be between 0.0 and 1.0). | 0.02 |
refined_start | logical | During the initialization, use refined initial positions for k-means clustering (Bradley and Fayyad, 1998). | FALSE |
samplings | integer | If using --refined_start, specify the number of samplings used for initial points. | 100 |
seed | integer | Random seed. If 0, 'std::time(NULL)' is used. | 0 |
tolerance | numeric | Tolerance for convergence of EM. | 1e-10 |
trials | integer | Number of trials to perform in training GMM. | 1 |
verbose | logical | Display informational messages and the full list of parameters and timers at the end of execution. | FALSE |
Results are returned in an R list. The keys of the list are the names of the output parameters.
name | type | description |
---|---|---|
output_model | GMM | Output for trained GMM model. |
{: #r_gmm_train_detailed-documentation }
This program computes a parametric estimate of a Gaussian mixture model (GMM), using the EM algorithm to find the maximum likelihood estimate. The model may be saved and reused by other mlpack GMM tools.
The input data to train on must be specified with the `input` parameter, and the number of Gaussians in the model must be specified with the `gaussians` parameter. Optionally, many trials with different random initializations may be run, and the result with highest log-likelihood on the training data will be taken. The number of trials to run is specified with the `trials` parameter. By default, only one trial is run.
The tolerance for convergence and maximum number of iterations of the EM algorithm are specified with the `tolerance` and `max_iterations` parameters, respectively. The GMM may be initialized for training with another model, specified with the `input_model` parameter. Otherwise, the model is initialized by running k-means on the data. The k-means clustering initialization can be controlled with the `kmeans_max_iterations`, `refined_start`, `samplings`, and `percentage` parameters. If `refined_start` is specified, then the Bradley-Fayyad refined start initialization will be used. This can often lead to better clustering results.
The `diagonal_covariance` flag will cause the learned covariances to be diagonal matrices. This significantly simplifies the model itself and causes training to be faster, but restricts the ability to fit more complex GMMs.
If GMM training fails with an error indicating that a covariance matrix could not be inverted, make sure that the `no_force_positive` parameter is not specified. Alternatively, adding a small amount of Gaussian noise (using the `noise` parameter) to the entire dataset may help prevent Gaussians with zero variance in a particular dimension, which is usually the cause of non-invertible covariance matrices.
The `no_force_positive` parameter, if set, will avoid the checks after each iteration of the EM algorithm which ensure that the covariance matrices are positive definite. Specifying the flag can cause faster runtime, but may also cause non-positive definite covariance matrices, which will cause the program to crash.
As an example, to train a 6-Gaussian GMM on the data in `"data"` with a maximum of 100 iterations of EM and 3 trials, saving the trained GMM to `"gmm"`, the following command can be used:

R> output <- gmm_train(input=data, gaussians=6, trials=3, max_iterations=100)
R> gmm <- output$output_model
To re-train that GMM on another set of data "data2"
, the following command may be used:
R> output <- gmm_train(input_model=gmm, input=data2, gaussians=6)
R> new_gmm <- output$output_model
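As a concrete illustration of the troubleshooting advice above, the noise parameter can be supplied directly at training time. This is a minimal sketch, not taken from the original examples; the dataset variable "data" and the chosen noise variance are assumptions:
```R
R> # Hypothetical sketch: add a small amount of zero-mean Gaussian noise to
R> # the data to help avoid Gaussians with zero variance in some dimension,
R> # which is the usual cause of non-invertible covariance matrices.
R> output <- gmm_train(input=data, gaussians=6, noise=1e-8)
R> gmm <- output$output_model
```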
```go
// Initialize optional parameters for GmmGenerate().
param := mlpack.GmmGenerateOptions()
param.Seed = 0

output := mlpack.GmmGenerate(inputModel, samples, param)
```
</div>
<div class="language-decl" id="r" markdown="1">
```R
R> library(mlpack)
R> d <- gmm_generate(input_model=NA, samples=0, seed=0, verbose=FALSE)
R> output <- d$output
```
A sample generator for pre-trained GMMs. Given a pre-trained GMM, this can sample new points randomly from that distribution. Detailed documentation{: .language-detail-link #cli }Detailed documentation{: .language-detail-link #python }Detailed documentation{: .language-detail-link #julia }Detailed documentation{: .language-detail-link #go }Detailed documentation{: .language-detail-link #r }.
name | type | description | default |
---|---|---|---|
--help (-h) | flag | Default help info. Only exists in CLI binding. | |
--info | string | Print help on a specific option. Only exists in CLI binding. | '' |
--input_model_file (-m) | GMM file | Input GMM model to generate samples from. | **--** |
--samples (-n) | int | Number of samples to generate. | **--** |
--seed (-s) | int | Random seed. If 0, 'std::time(NULL)' is used. | 0 |
--verbose (-v) | flag | Display informational messages and the full list of parameters and timers at the end of execution. | |
--version (-V) | flag | Display the version of mlpack. Only exists in CLI binding. | |

name | type | description |
---|---|---|
--output_file (-o) | 2-d matrix file | Matrix to save output samples in. |
{: #cli_gmm_generate_detailed-documentation }
This program is able to generate samples from a pre-trained GMM (use gmm_train to train a GMM). The pre-trained GMM must be specified with the --input_model_file (-m)
parameter. The number of samples to generate is specified by the --samples (-n)
parameter. Output samples may be saved with the --output_file (-o)
output parameter.
The following command can be used to generate 100 samples from the pre-trained GMM 'gmm.bin'
and store those generated samples in 'samples.csv'
:
$ mlpack_gmm_generate --input_model_file gmm.bin --samples 100 --output_file
samples.csv
name | type | description | default |
---|---|---|---|
copy_all_inputs | bool | If specified, all input parameters will be deep copied before the method is run. This is useful for debugging problems where the input parameters are being modified by the algorithm, but can slow down the code. Only exists in Python binding. | False |
input_model | GMMType | Input GMM model to generate samples from. | **--** |
samples | int | Number of samples to generate. | **--** |
seed | int | Random seed. If 0, 'std::time(NULL)' is used. | 0 |
verbose | bool | Display informational messages and the full list of parameters and timers at the end of execution. | False |

Results are returned in a Python dictionary. The keys of the dictionary are the names of the output parameters.

name | type | description |
---|---|---|
output | matrix | Matrix to save output samples in. |
{: #python_gmm_generate_detailed-documentation }
This program is able to generate samples from a pre-trained GMM (use gmm_train to train a GMM). The pre-trained GMM must be specified with the input_model
parameter. The number of samples to generate is specified by the samples
parameter. Output samples may be saved with the output
output parameter.
The following command can be used to generate 100 samples from the pre-trained GMM 'gmm'
and store those generated samples in 'samples'
:
>>> output = gmm_generate(input_model=gmm, samples=100)
>>> samples = output['output']
name | type | description | default |
---|---|---|---|
input_model | GMM | Input GMM model to generate samples from. | **--** |
samples | Int | Number of samples to generate. | **--** |
seed | Int | Random seed. If 0, 'std::time(NULL)' is used. | 0 |
verbose | Bool | Display informational messages and the full list of parameters and timers at the end of execution. | false |

Results are returned as a tuple, and can be unpacked directly into return values or stored directly as a tuple; undesired results can be ignored with the _ keyword.

name | type | description |
---|---|---|
output | Float64 matrix-like | Matrix to save output samples in. |
{: #julia_gmm_generate_detailed-documentation }
This program is able to generate samples from a pre-trained GMM (use gmm_train to train a GMM). The pre-trained GMM must be specified with the input_model
parameter. The number of samples to generate is specified by the samples
parameter. Output samples may be saved with the output
output parameter.
The following command can be used to generate 100 samples from the pre-trained GMM gmm
and store those generated samples in samples
:
julia> samples = gmm_generate(gmm, 100)
There are two types of input options: required options, which are passed directly to the function call, and optional options, which are passed via an initialized struct that allows keyword access to each of the options.

name | type | description | default |
---|---|---|---|
inputModel | gmm | Input GMM model to generate samples from. | **--** |
samples | int | Number of samples to generate. | **--** |
Seed | int | Random seed. If 0, 'std::time(NULL)' is used. | 0 |
Verbose | bool | Display informational messages and the full list of parameters and timers at the end of execution. | false |

Output options are returned via Go's support for multiple return values, in the order listed below.

name | type | description |
---|---|---|
output | *mat.Dense | Matrix to save output samples in. |
{: #go_gmm_generate_detailed-documentation }
This program is able to generate samples from a pre-trained GMM (use gmm_train to train a GMM). The pre-trained GMM must be specified with the InputModel
parameter. The number of samples to generate is specified by the Samples
parameter. Output samples may be saved with the Output
output parameter.
The following command can be used to generate 100 samples from the pre-trained GMM gmm
and store those generated samples in samples
:
// Initialize optional parameters for GmmGenerate().
param := mlpack.GmmGenerateOptions()
samples := mlpack.GmmGenerate(&gmm, 100, param)
name | type | description | default |
---|---|---|---|
input_model | GMM | Input GMM model to generate samples from. | **--** |
samples | integer | Number of samples to generate. | **--** |
seed | integer | Random seed. If 0, 'std::time(NULL)' is used. | 0 |
verbose | logical | Display informational messages and the full list of parameters and timers at the end of execution. | FALSE |

Results are returned in an R list. The keys of the list are the names of the output parameters.

name | type | description |
---|---|---|
output | numeric matrix | Matrix to save output samples in. |
{: #r_gmm_generate_detailed-documentation }
This program is able to generate samples from a pre-trained GMM (use gmm_train to train a GMM). The pre-trained GMM must be specified with the input_model
parameter. The number of samples to generate is specified by the samples
parameter. Output samples may be saved with the output
output parameter.
The following command can be used to generate 100 samples from the pre-trained GMM "gmm"
and store those generated samples in "samples"
:
R> output <- gmm_generate(input_model=gmm, samples=100)
R> samples <- output$output
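Because generation is random, fixing the seed parameter documented above makes the output reproducible. A small sketch under the same assumptions as the example above (the seed value is arbitrary):
```R
R> # Hypothetical sketch: fix the random seed so repeated calls produce the
R> # same 100 samples from the pre-trained GMM.
R> output <- gmm_generate(input_model=gmm, samples=100, seed=42)
R> samples <- output$output
```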
```go
// Initialize optional parameters for GmmProbability().
param := mlpack.GmmProbabilityOptions()

output := mlpack.GmmProbability(input, inputModel, param)
```
</div>
<div class="language-decl" id="r" markdown="1">
```R
R> library(mlpack)
R> d <- gmm_probability(input=matrix(numeric(), 0, 0), input_model=NA,
verbose=FALSE)
R> output <- d$output
```
A probability calculator for GMMs. Given a pre-trained GMM and a set of points, this can compute the probability that each point is from the given GMM. Detailed documentation{: .language-detail-link #cli }Detailed documentation{: .language-detail-link #python }Detailed documentation{: .language-detail-link #julia }Detailed documentation{: .language-detail-link #go }Detailed documentation{: .language-detail-link #r }.
name | type | description | default |
---|---|---|---|
--help (-h) | flag | Default help info. Only exists in CLI binding. | |
--info | string | Print help on a specific option. Only exists in CLI binding. | '' |
--input_file (-i) | 2-d matrix file | Input matrix to calculate probabilities of. | **--** |
--input_model_file (-m) | GMM file | Input GMM to use as model. | **--** |
--verbose (-v) | flag | Display informational messages and the full list of parameters and timers at the end of execution. | |
--version (-V) | flag | Display the version of mlpack. Only exists in CLI binding. | |

name | type | description |
---|---|---|
--output_file (-o) | 2-d matrix file | Matrix to store calculated probabilities in. |
{: #cli_gmm_probability_detailed-documentation }
This program calculates the probability that given points came from a given GMM (that is, P(X | gmm)). The GMM is specified with the --input_model_file (-m)
parameter, and the points are specified with the --input_file (-i)
parameter. The output probabilities may be saved via the --output_file (-o)
output parameter.
So, for example, to calculate the probabilities of each point in 'points.csv'
coming from the pre-trained GMM 'gmm.bin'
, while storing those probabilities in 'probs.csv'
, the following command could be used:
$ mlpack_gmm_probability --input_model_file gmm.bin --input_file points.csv
--output_file probs.csv
name | type | description | default |
---|---|---|---|
copy_all_inputs | bool | If specified, all input parameters will be deep copied before the method is run. This is useful for debugging problems where the input parameters are being modified by the algorithm, but can slow down the code. Only exists in Python binding. | False |
input | matrix | Input matrix to calculate probabilities of. | **--** |
input_model | GMMType | Input GMM to use as model. | **--** |
verbose | bool | Display informational messages and the full list of parameters and timers at the end of execution. | False |

Results are returned in a Python dictionary. The keys of the dictionary are the names of the output parameters.

name | type | description |
---|---|---|
output | matrix | Matrix to store calculated probabilities in. |
{: #python_gmm_probability_detailed-documentation }
This program calculates the probability that given points came from a given GMM (that is, P(X | gmm)). The GMM is specified with the input_model
parameter, and the points are specified with the input
parameter. The output probabilities may be saved via the output
output parameter.
So, for example, to calculate the probabilities of each point in 'points'
coming from the pre-trained GMM 'gmm'
, while storing those probabilities in 'probs'
, the following command could be used:
>>> output = gmm_probability(input_model=gmm, input=points)
>>> probs = output['output']
name | type | description | default |
---|---|---|---|
input | Float64 matrix-like | Input matrix to calculate probabilities of. | **--** |
input_model | GMM | Input GMM to use as model. | **--** |
verbose | Bool | Display informational messages and the full list of parameters and timers at the end of execution. | false |

Results are returned as a tuple, and can be unpacked directly into return values or stored directly as a tuple; undesired results can be ignored with the _ keyword.

name | type | description |
---|---|---|
output | Float64 matrix-like | Matrix to store calculated probabilities in. |
{: #julia_gmm_probability_detailed-documentation }
This program calculates the probability that given points came from a given GMM (that is, P(X | gmm)). The GMM is specified with the input_model
parameter, and the points are specified with the input
parameter. The output probabilities may be saved via the output
output parameter.
So, for example, to calculate the probabilities of each point in points
coming from the pre-trained GMM gmm
, while storing those probabilities in probs
, the following command could be used:
julia> using CSV
julia> points = CSV.read("points.csv")
julia> probs = gmm_probability(points, gmm)
There are two types of input options: required options, which are passed directly to the function call, and optional options, which are passed via an initialized struct that allows keyword access to each of the options.

name | type | description | default |
---|---|---|---|
input | *mat.Dense | Input matrix to calculate probabilities of. | **--** |
inputModel | gmm | Input GMM to use as model. | **--** |
Verbose | bool | Display informational messages and the full list of parameters and timers at the end of execution. | false |

Output options are returned via Go's support for multiple return values, in the order listed below.

name | type | description |
---|---|---|
output | *mat.Dense | Matrix to store calculated probabilities in. |
{: #go_gmm_probability_detailed-documentation }
This program calculates the probability that given points came from a given GMM (that is, P(X | gmm)). The GMM is specified with the InputModel
parameter, and the points are specified with the Input
parameter. The output probabilities may be saved via the Output
output parameter.
So, for example, to calculate the probabilities of each point in points
coming from the pre-trained GMM gmm
, while storing those probabilities in probs
, the following command could be used:
// Initialize optional parameters for GmmProbability().
param := mlpack.GmmProbabilityOptions()
probs := mlpack.GmmProbability(points, &gmm, param)
name | type | description | default |
---|---|---|---|
input | numeric matrix | Input matrix to calculate probabilities of. | **--** |
input_model | GMM | Input GMM to use as model. | **--** |
verbose | logical | Display informational messages and the full list of parameters and timers at the end of execution. | FALSE |

Results are returned in an R list. The keys of the list are the names of the output parameters.

name | type | description |
---|---|---|
output | numeric matrix | Matrix to store calculated probabilities in. |
{: #r_gmm_probability_detailed-documentation }
This program calculates the probability that given points came from a given GMM (that is, P(X | gmm)). The GMM is specified with the input_model
parameter, and the points are specified with the input
parameter. The output probabilities may be saved via the output
output parameter.
So, for example, to calculate the probabilities of each point in "points"
coming from the pre-trained GMM "gmm"
, while storing those probabilities in "probs"
, the following command could be used:
R> output <- gmm_probability(input_model=gmm, input=points)
R> probs <- output$output
Hidden Markov Model (HMM) Training
```go
// Initialize optional parameters for HmmTrain().
param := mlpack.HmmTrainOptions()
param.Batch = false
param.Gaussians = 0
param.InputModel = nil
param.LabelsFile = ""
param.Seed = 0
param.States = 0
param.Tolerance = 1e-05
param.Type = "gaussian"

output_model := mlpack.HmmTrain(inputFile, param)
```
</div>
<div class="language-decl" id="r" markdown="1">
```R
R> library(mlpack)
R> d <- hmm_train(batch=FALSE, gaussians=0, input_file="",
input_model=NA, labels_file="", seed=0, states=0, tolerance=1e-05,
type="gaussian", verbose=FALSE)
R> output_model <- d$output_model
```
An implementation of training algorithms for Hidden Markov Models (HMMs). Given labeled or unlabeled data, an HMM can be trained for further use with other mlpack HMM tools. Detailed documentation{: .language-detail-link #cli }Detailed documentation{: .language-detail-link #python }Detailed documentation{: .language-detail-link #julia }Detailed documentation{: .language-detail-link #go }Detailed documentation{: .language-detail-link #r }.
name | type | description | default |
---|---|---|---|
--batch (-b) | flag | If true, input_file (and if passed, labels_file) are expected to contain a list of files to use as input observation sequences (and label sequences). | |
--gaussians (-g) | int | Number of gaussians in each GMM (necessary when type is 'gmm'). | 0 |
--help (-h) | flag | Default help info. Only exists in CLI binding. | |
--info | string | Print help on a specific option. Only exists in CLI binding. | '' |
--input_file (-i) | string | File containing input observations. | **--** |
--input_model_file (-m) | HMMModel file | Pre-existing HMM model to initialize training with. | '' |
--labels_file (-l) | string | Optional file of hidden states, used for labeled training. | '' |
--seed (-s) | int | Random seed. If 0, 'std::time(NULL)' is used. | 0 |
--states (-n) | int | Number of hidden states in HMM (necessary, unless model_file is specified). | 0 |
--tolerance (-T) | double | Tolerance of the Baum-Welch algorithm. | 1e-05 |
--type (-t) | string | Type of HMM: discrete \| gaussian \| diag_gmm \| gmm. | 'gaussian' |
--verbose (-v) | flag | Display informational messages and the full list of parameters and timers at the end of execution. | |
--version (-V) | flag | Display the version of mlpack. Only exists in CLI binding. | |

name | type | description |
---|---|---|
--output_model_file (-M) | HMMModel file | Output for trained HMM. |
{: #cli_hmm_train_detailed-documentation }
This program allows a Hidden Markov Model to be trained on labeled or unlabeled data. It supports four types of HMMs: Discrete HMMs, Gaussian HMMs, GMM HMMs, or Diagonal GMM HMMs.
Either one input sequence can be specified (with --input_file (-i)), or a file containing a list of files with input sequences can be given (when --input_file (-i) and --batch (-b) are used together). In addition, labels can be provided in the file specified by --labels_file (-l), and if --batch (-b) is used, the file given to --labels_file (-l) should contain a list of files of labels corresponding to the sequences in the file given to --input_file (-i).
The HMM is trained with the Baum-Welch algorithm if no labels are provided. The tolerance of the Baum-Welch algorithm can be set with the --tolerance (-T)
option. By default, the transition matrix is randomly initialized and the emission distributions are initialized to fit the extent of the data.
Optionally, a pre-created HMM model can be used as a guess for the transition matrix and emission probabilities; this is specifiable with the --input_model_file (-m) parameter.
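For example, a 3-state Gaussian HMM could be trained on the observation sequence in 'obs.csv' and saved to 'hmm.bin' with a command along the lines of the following sketch (the filenames are hypothetical, not from the original documentation):
$ mlpack_hmm_train --input_file obs.csv --states 3 --type gaussian
--output_model_file hmm.bin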
name | type | description | default |
---|---|---|---|
batch | bool | If true, input_file (and if passed, labels_file) are expected to contain a list of files to use as input observation sequences (and label sequences). | False |
copy_all_inputs | bool | If specified, all input parameters will be deep copied before the method is run. This is useful for debugging problems where the input parameters are being modified by the algorithm, but can slow down the code. Only exists in Python binding. | False |
gaussians | int | Number of gaussians in each GMM (necessary when type is 'gmm'). | 0 |
input_file | str | File containing input observations. | **--** |
input_model | HMMModelType | Pre-existing HMM model to initialize training with. | None |
labels_file | str | Optional file of hidden states, used for labeled training. | '' |
seed | int | Random seed. If 0, 'std::time(NULL)' is used. | 0 |
states | int | Number of hidden states in HMM (necessary, unless model_file is specified). | 0 |
tolerance | float | Tolerance of the Baum-Welch algorithm. | 1e-05 |
type | str | Type of HMM: discrete \| gaussian \| diag_gmm \| gmm. | 'gaussian' |
verbose | bool | Display informational messages and the full list of parameters and timers at the end of execution. | False |

Results are returned in a Python dictionary. The keys of the dictionary are the names of the output parameters.

name | type | description |
---|---|---|
output_model | HMMModelType | Output for trained HMM. |
{: #python_hmm_train_detailed-documentation }
This program allows a Hidden Markov Model to be trained on labeled or unlabeled data. It supports four types of HMMs: Discrete HMMs, Gaussian HMMs, GMM HMMs, or Diagonal GMM HMMs.
Either one input sequence can be specified (with input_file), or a file containing a list of files with input sequences can be given (when input_file and batch are used together). In addition, labels can be provided in the file specified by labels_file, and if batch is used, the file given to labels_file should contain a list of files of labels corresponding to the sequences in the file given to input_file.
The HMM is trained with the Baum-Welch algorithm if no labels are provided. The tolerance of the Baum-Welch algorithm can be set with the tolerance
option. By default, the transition matrix is randomly initialized and the emission distributions are initialized to fit the extent of the data.
Optionally, a pre-created HMM model can be used as a guess for the transition matrix and emission probabilities; this is specifiable with the input_model parameter.
name | type | description | default |
---|---|---|---|
batch | Bool | If true, input_file (and if passed, labels_file) are expected to contain a list of files to use as input observation sequences (and label sequences). | false |
gaussians | Int | Number of gaussians in each GMM (necessary when type is 'gmm'). | 0 |
input_file | String | File containing input observations. | **--** |
input_model | HMMModel | Pre-existing HMM model to initialize training with. | nothing |
labels_file | String | Optional file of hidden states, used for labeled training. | "" |
seed | Int | Random seed. If 0, 'std::time(NULL)' is used. | 0 |
states | Int | Number of hidden states in HMM (necessary, unless model_file is specified). | 0 |
tolerance | Float64 | Tolerance of the Baum-Welch algorithm. | 1e-05 |
type | String | Type of HMM: discrete \| gaussian \| diag_gmm \| gmm. | "gaussian" |
verbose | Bool | Display informational messages and the full list of parameters and timers at the end of execution. | false |

Results are returned as a tuple, and can be unpacked directly into return values or stored directly as a tuple; undesired results can be ignored with the _ keyword.

name | type | description |
---|---|---|
output_model | HMMModel | Output for trained HMM. |
{: #julia_hmm_train_detailed-documentation }
This program allows a Hidden Markov Model to be trained on labeled or unlabeled data. It supports four types of HMMs: Discrete HMMs, Gaussian HMMs, GMM HMMs, or Diagonal GMM HMMs.
Either one input sequence can be specified (with input_file), or a file containing a list of files with input sequences can be given (when input_file and batch are used together). In addition, labels can be provided in the file specified by labels_file, and if batch is used, the file given to labels_file should contain a list of files of labels corresponding to the sequences in the file given to input_file.
The HMM is trained with the Baum-Welch algorithm if no labels are provided. The tolerance of the Baum-Welch algorithm can be set with the tolerance
option. By default, the transition matrix is randomly initialized and the emission distributions are initialized to fit the extent of the data.
Optionally, a pre-created HMM model can be used as a guess for the transition matrix and emission probabilities; this is specifiable with the input_model parameter.
There are two types of input options: required options, which are passed directly to the function call, and optional options, which are passed via an initialized struct that allows keyword access to each of the options.

name | type | description | default |
---|---|---|---|
Batch | bool | If true, input_file (and if passed, labels_file) are expected to contain a list of files to use as input observation sequences (and label sequences). | false |
Gaussians | int | Number of gaussians in each GMM (necessary when type is 'gmm'). | 0 |
inputFile | string | File containing input observations. | **--** |
InputModel | hmmModel | Pre-existing HMM model to initialize training with. | nil |
LabelsFile | string | Optional file of hidden states, used for labeled training. | "" |
Seed | int | Random seed. If 0, 'std::time(NULL)' is used. | 0 |
States | int | Number of hidden states in HMM (necessary, unless model_file is specified). | 0 |
Tolerance | float64 | Tolerance of the Baum-Welch algorithm. | 1e-05 |
Type | string | Type of HMM: discrete \| gaussian \| diag_gmm \| gmm. | "gaussian" |
Verbose | bool | Display informational messages and the full list of parameters and timers at the end of execution. | false |

Output options are returned via Go's support for multiple return values, in the order listed below.

name | type | description |
---|---|---|
outputModel | hmmModel | Output for trained HMM. |
{: #go_hmm_train_detailed-documentation }
This program allows a Hidden Markov Model to be trained on labeled or unlabeled data. It supports four types of HMMs: Discrete HMMs, Gaussian HMMs, GMM HMMs, or Diagonal GMM HMMs.
Either one input sequence can be specified (with InputFile), or a file containing a list of files with input sequences can be given (when InputFile and Batch are used together). In addition, labels can be provided in the file specified by LabelsFile, and if Batch is used, the file given to LabelsFile should contain a list of files of labels corresponding to the sequences in the file given to InputFile.
The HMM is trained with the Baum-Welch algorithm if no labels are provided. The tolerance of the Baum-Welch algorithm can be set with the Tolerance
option. By default, the transition matrix is randomly initialized and the emission distributions are initialized to fit the extent of the data.
Optionally, a pre-created HMM model can be used as a guess for the transition matrix and emission probabilities; this is specifiable with the InputModel parameter.
name | type | description | default |
---|---|---|---|
batch | logical | If true, input_file (and if passed, labels_file) are expected to contain a list of files to use as input observation sequences (and label sequences). | FALSE |
gaussians | integer | Number of gaussians in each GMM (necessary when type is 'gmm'). | 0 |
input_file | character | File containing input observations. | **--** |
input_model | HMMModel | Pre-existing HMM model to initialize training with. | NA |
labels_file | character | Optional file of hidden states, used for labeled training. | "" |
seed | integer | Random seed. If 0, 'std::time(NULL)' is used. | 0 |
states | integer | Number of hidden states in HMM (necessary, unless model_file is specified). | 0 |
tolerance | numeric | Tolerance of the Baum-Welch algorithm. | 1e-05 |
type | character | Type of HMM: discrete \| gaussian \| diag_gmm \| gmm. | "gaussian" |
verbose | logical | Display informational messages and the full list of parameters and timers at the end of execution. | FALSE |

Results are returned in an R list. The keys of the list are the names of the output parameters.

name | type | description |
---|---|---|
output_model | HMMModel | Output for trained HMM. |
{: #r_hmm_train_detailed-documentation }
This program allows a Hidden Markov Model to be trained on labeled or unlabeled data. It supports four types of HMMs: Discrete HMMs, Gaussian HMMs, GMM HMMs, or Diagonal GMM HMMs.
Either one input sequence can be specified (with input_file), or a file containing a list of files with input sequences can be given (when input_file and batch are used together). In addition, labels can be provided in the file specified by labels_file, and if batch is used, the file given to labels_file should contain a list of files of labels corresponding to the sequences in the file given to input_file.
The HMM is trained with the Baum-Welch algorithm if no labels are provided. The tolerance of the Baum-Welch algorithm can be set with the tolerance
option. By default, the transition matrix is randomly initialized and the emission distributions are initialized to fit the extent of the data.
Optionally, a pre-created HMM model can be used as a guess for the transition matrix and emission probabilities; this is specifiable with the input_model parameter.
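For example, a 3-state Gaussian HMM could be trained on the observation sequence stored in "obs.csv" and kept for use with the other mlpack HMM tools. This is a minimal sketch with a hypothetical filename, not taken from the original documentation:
```R
R> # Hypothetical sketch: train a 3-state Gaussian HMM on the observation
R> # sequence in "obs.csv"; with no labels given, Baum-Welch is used.
R> output <- hmm_train(input_file="obs.csv", states=3, type="gaussian")
R> hmm <- output$output_model
```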
Hidden Markov Model (HMM) Sequence Log-Likelihood
```go
// Initialize optional parameters for HmmLoglik().
param := mlpack.HmmLoglikOptions()

log_likelihood := mlpack.HmmLoglik(input, inputModel, param)
```
</div>
<div class="language-decl" id="r" markdown="1">
```R
R> library(mlpack)
R> d <- hmm_loglik(input=matrix(numeric(), 0, 0), input_model=NA,
verbose=FALSE)
R> log_likelihood <- d$log_likelihood
```
A utility for computing the log-likelihood of a sequence for Hidden Markov Models (HMMs). Given a pre-trained HMM and an observation sequence, this computes and returns the log-likelihood of that sequence being observed from that HMM. Detailed documentation{: .language-detail-link #cli }Detailed documentation{: .language-detail-link #python }Detailed documentation{: .language-detail-link #julia }Detailed documentation{: .language-detail-link #go }Detailed documentation{: .language-detail-link #r }.
name | type | description | default |
---|---|---|---|
--help (-h) | flag | Default help info. Only exists in CLI binding. | |
--info | string | Print help on a specific option. Only exists in CLI binding. | '' |
--input_file (-i) | 2-d matrix file | File containing observations. | **--** |
--input_model_file (-m) | HMMModel file | File containing HMM. | **--** |
--verbose (-v) | flag | Display informational messages and the full list of parameters and timers at the end of execution. | |
--version (-V) | flag | Display the version of mlpack. Only exists in CLI binding. | |

name | type | description |
---|---|---|
--log_likelihood | double | Log-likelihood of the sequence. |
{: #cli_hmm_loglik_detailed-documentation }
This utility takes an already-trained HMM, specified with the --input_model_file (-m)
parameter, and evaluates the log-likelihood of a sequence of observations, given with the --input_file (-i)
parameter. The computed log-likelihood is given as output.
For example, to compute the log-likelihood of the sequence 'seq.csv'
with the pre-trained HMM 'hmm.bin'
, the following command may be used:
$ mlpack_hmm_loglik --input_file seq.csv --input_model_file hmm.bin
name | type | description | default |
---|---|---|---|
copy_all_inputs | bool | If specified, all input parameters will be deep copied before the method is run. This is useful for debugging problems where the input parameters are being modified by the algorithm, but can slow down the code. Only exists in Python binding. | False |
input | matrix | File containing observations. | **--** |
input_model | HMMModelType | File containing HMM. | **--** |
verbose | bool | Display informational messages and the full list of parameters and timers at the end of execution. | False |

Results are returned in a Python dictionary. The keys of the dictionary are the names of the output parameters.

name | type | description |
---|---|---|
log_likelihood | float | Log-likelihood of the sequence. |
{: #python_hmm_loglik_detailed-documentation }
This utility takes an already-trained HMM, specified with the input_model
parameter, and evaluates the log-likelihood of a sequence of observations, given with the input
parameter. The computed log-likelihood is given as output.
For example, to compute the log-likelihood of the sequence 'seq'
with the pre-trained HMM 'hmm'
, the following command may be used:
>>> hmm_loglik(input=seq, input_model=hmm)
name | type | description | default |
---|---|---|---|
input | Float64 matrix-like | File containing observations. | **--** |
input_model | HMMModel | File containing HMM. | **--** |
verbose | Bool | Display informational messages and the full list of parameters and timers at the end of execution. | false |

Results are returned as a tuple, and can be unpacked directly into return values or stored directly as a tuple; undesired results can be ignored with the _ keyword.

name | type | description |
---|---|---|
log_likelihood | Float64 | Log-likelihood of the sequence. |
{: #julia_hmm_loglik_detailed-documentation }
This utility takes an already-trained HMM, specified with the input_model
parameter, and evaluates the log-likelihood of a sequence of observations, given with the input
parameter. The computed log-likelihood is given as output.
For example, to compute the log-likelihood of the sequence seq
with the pre-trained HMM hmm
, the following command may be used:
julia> using CSV
julia> seq = CSV.read("seq.csv")
julia> _ = hmm_loglik(seq, hmm)
There are two types of input options: required options, which are passed directly to the function call, and optional options, which are passed via an initialized struct that allows keyword access to each of the options.

name | type | description | default |
---|---|---|---|
input | *mat.Dense | File containing observations. | **--** |
inputModel | hmmModel | File containing HMM. | **--** |
Verbose | bool | Display informational messages and the full list of parameters and timers at the end of execution. | false |

Output options are returned via Go's support for multiple return values, in the order listed below.

name | type | description |
---|---|---|
logLikelihood | float64 | Log-likelihood of the sequence. |
{: #go_hmm_loglik_detailed-documentation }
This utility takes an already-trained HMM, specified with the InputModel
parameter, and evaluates the log-likelihood of a sequence of observations, given with the Input
parameter. The computed log-likelihood is given as output.
For example, to compute the log-likelihood of the sequence seq
with the pre-trained HMM hmm
, the following command may be used:
// Initialize optional parameters for HmmLoglik().
param := mlpack.HmmLoglikOptions()
_ = mlpack.HmmLoglik(seq, &hmm, param)
name | type | description | default |
---|---|---|---|
input | numeric matrix | File containing observations. | **--** |
input_model | HMMModel | File containing HMM. | **--** |
verbose | logical | Display informational messages and the full list of parameters and timers at the end of execution. | FALSE |

Results are returned in an R list. The keys of the list are the names of the output parameters.

name | type | description |
---|---|---|
log_likelihood | numeric | Log-likelihood of the sequence. |
{: #r_hmm_loglik_detailed-documentation }
This utility takes an already-trained HMM, specified with the input_model
parameter, and evaluates the log-likelihood of a sequence of observations, given with the input
parameter. The computed log-likelihood is given as output.
For example, to compute the log-likelihood of the sequence "seq"
with the pre-trained HMM "hmm"
, the following command may be used:
R> output <- hmm_loglik(input=seq, input_model=hmm)
Hidden Markov Model (HMM) Viterbi State Prediction
```go
// Initialize optional parameters for HmmViterbi().
param := mlpack.HmmViterbiOptions()

output := mlpack.HmmViterbi(input, inputModel, param)
```
</div>
<div class="language-decl" id="r" markdown="1">
```R
R> library(mlpack)
R> d <- hmm_viterbi(input=matrix(numeric(), 0, 0), input_model=NA,
verbose=FALSE)
R> output <- d$output
```
A utility for computing the most probable hidden state sequence for Hidden Markov Models (HMMs). Given a pre-trained HMM and an observed sequence, this uses the Viterbi algorithm to compute and return the most probable hidden state sequence. Detailed documentation{: .language-detail-link #cli }Detailed documentation{: .language-detail-link #python }Detailed documentation{: .language-detail-link #julia }Detailed documentation{: .language-detail-link #go }Detailed documentation{: .language-detail-link #r }.
name | type | description | default |
---|---|---|---|
--help (-h) | flag | Default help info. Only exists in CLI binding. | |
--info | string | Print help on a specific option. Only exists in CLI binding. | '' |
--input_file (-i) | 2-d matrix file | Matrix containing observations. | **--** |
--input_model_file (-m) | HMMModel file | Trained HMM to use. | **--** |
--verbose (-v) | flag | Display informational messages and the full list of parameters and timers at the end of execution. | |
--version (-V) | flag | Display the version of mlpack. Only exists in CLI binding. | |

name | type | description |
---|---|---|
--output_file (-o) | 2-d index matrix file | File to save predicted state sequence to. |
{: #cli_hmm_viterbi_detailed-documentation }
This utility takes an already-trained HMM, specified as --input_model_file (-m), and evaluates the most probable hidden state sequence of a given sequence of observations (specified as --input_file (-i)), using the Viterbi algorithm. The computed state sequence may be saved using the --output_file (-o) output parameter.
For example, to predict the state sequence of the observations 'obs.csv'
using the HMM 'hmm.bin'
, storing the predicted state sequence to 'states.csv'
, the following command could be used:
$ mlpack_hmm_viterbi --input_file obs.csv --input_model_file hmm.bin
--output_file states.csv
name | type | description | default |
---|---|---|---|
copy_all_inputs | bool | If specified, all input parameters will be deep copied before the method is run. This is useful for debugging problems where the input parameters are being modified by the algorithm, but can slow down the code. Only exists in Python binding. | False |
input | matrix | Matrix containing observations. | **--** |
input_model | HMMModelType | Trained HMM to use. | **--** |
verbose | bool | Display informational messages and the full list of parameters and timers at the end of execution. | False |

Results are returned in a Python dictionary. The keys of the dictionary are the names of the output parameters.

name | type | description |
---|---|---|
output | int matrix | File to save predicted state sequence to. |
{: #python_hmm_viterbi_detailed-documentation }
This utility takes an already-trained HMM, specified as input_model, and evaluates the most probable hidden state sequence of a given sequence of observations (specified as input), using the Viterbi algorithm. The computed state sequence may be saved using the output output parameter.
For example, to predict the state sequence of the observations 'obs'
using the HMM 'hmm'
, storing the predicted state sequence to 'states'
, the following command could be used:
>>> output = hmm_viterbi(input=obs, input_model=hmm)
>>> states = output['output']
name | type | description | default |
---|---|---|---|
input | Float64 matrix-like | Matrix containing observations. | **--** |
input_model | HMMModel | Trained HMM to use. | **--** |
verbose | Bool | Display informational messages and the full list of parameters and timers at the end of execution. | false |

Results are returned as a tuple, and can be unpacked directly into return values or stored directly as a tuple; undesired results can be ignored with the _ keyword.

name | type | description |
---|---|---|
output | Int matrix-like | File to save predicted state sequence to. |
{: #julia_hmm_viterbi_detailed-documentation }
This utility takes an already-trained HMM, specified as input_model, and evaluates the most probable hidden state sequence of a given sequence of observations (specified as input), using the Viterbi algorithm. The computed state sequence may be saved using the output output parameter.
For example, to predict the state sequence of the observations obs
using the HMM hmm
, storing the predicted state sequence to states
, the following command could be used:
julia> using CSV
julia> obs = CSV.read("obs.csv")
julia> states = hmm_viterbi(obs, hmm)
There are two types of input options: required options, which are passed directly to the function call, and optional options, which are passed via an initialized struct that allows keyword access to each of the options.

name | type | description | default |
---|---|---|---|
input | *mat.Dense | Matrix containing observations. | **--** |
inputModel | hmmModel | Trained HMM to use. | **--** |
Verbose | bool | Display informational messages and the full list of parameters and timers at the end of execution. | false |

Output options are returned via Go's support for multiple return values, in the order listed below.

name | type | description |
---|---|---|
output | *mat.Dense (with ints) | File to save predicted state sequence to. |
{: #go_hmm_viterbi_detailed-documentation }
This utility takes an already-trained HMM, specified as InputModel, and evaluates the most probable hidden state sequence of a given sequence of observations (specified as Input), using the Viterbi algorithm. The computed state sequence may be saved using the Output output parameter.
For example, to predict the state sequence of the observations obs
using the HMM hmm
, storing the predicted state sequence to states
, the following command could be used:
// Initialize optional parameters for HmmViterbi().
param := mlpack.HmmViterbiOptions()
states := mlpack.HmmViterbi(obs, &hmm, param)
name | type | description | default |
---|---|---|---|
input | numeric matrix | Matrix containing observations. | **--** |
input_model | HMMModel | Trained HMM to use. | **--** |
verbose | logical | Display informational messages and the full list of parameters and timers at the end of execution. | FALSE |

Results are returned in an R list. The keys of the list are the names of the output parameters.

name | type | description |
---|---|---|
output | integer matrix | File to save predicted state sequence to. |
{: #r_hmm_viterbi_detailed-documentation }
This utility takes an already-trained HMM, specified as input_model, and evaluates the most probable hidden state sequence of a given sequence of observations (specified as input), using the Viterbi algorithm. The computed state sequence may be saved using the output output parameter.
For example, to predict the state sequence of the observations "obs"
using the HMM "hmm"
, storing the predicted state sequence to "states"
, the following command could be used:
R> output <- hmm_viterbi(input=obs, input_model=hmm)
R> states <- output$output
Hidden Markov Model (HMM) Sequence Generator
```go
// Initialize optional parameters for HmmGenerate().
param := mlpack.HmmGenerateOptions()
param.Seed = 0
param.StartState = 0

output, state := mlpack.HmmGenerate(length, model, param)
```
</div>
<div class="language-decl" id="r" markdown="1">
```R
R> library(mlpack)
R> d <- hmm_generate(length=0, model=NA, seed=0, start_state=0,
verbose=FALSE)
R> output <- d$output
R> state <- d$state
```
A utility to generate random sequences from a pre-trained Hidden Markov Model (HMM). The length of the desired sequence can be specified, and a random sequence of observations is returned. Detailed documentation{: .language-detail-link #cli }Detailed documentation{: .language-detail-link #python }Detailed documentation{: .language-detail-link #julia }Detailed documentation{: .language-detail-link #go }Detailed documentation{: .language-detail-link #r }.
name | type | description | default |
---|---|---|---|
--help (-h) | flag | Default help info. Only exists in CLI binding. | |
--info | string | Print help on a specific option. Only exists in CLI binding. | '' |
--length (-l) | int | Length of sequence to generate. | **--** |
--model_file (-m) | HMMModel file | Trained HMM to generate sequences with. | **--** |
--seed (-s) | int | Random seed. If 0, 'std::time(NULL)' is used. | 0 |
--start_state (-t) | int | Starting state of sequence. | 0 |
--verbose (-v) | flag | Display informational messages and the full list of parameters and timers at the end of execution. | |
--version (-V) | flag | Display the version of mlpack. Only exists in CLI binding. | |

name | type | description |
---|---|---|
--output_file (-o) | 2-d matrix file | Matrix to save observation sequence to. |
--state_file (-S) | 2-d index matrix file | Matrix to save hidden state sequence to. |
{: #cli_hmm_generate_detailed-documentation }
This utility takes an already-trained HMM, specified as the --model_file (-m)
parameter, and generates a random observation sequence and hidden state sequence based on its parameters. The observation sequence may be saved with the --output_file (-o)
output parameter, and the internal state sequence may be saved with the --state_file (-S)
output parameter.
The state to start the sequence in may be specified with the --start_state (-t)
parameter.
For example, to generate a sequence of length 150 from the HMM 'hmm.bin'
and save the observation sequence to 'observations.csv'
and the hidden state sequence to 'states.csv'
, the following command may be used:
$ mlpack_hmm_generate --model_file hmm.bin --length 150 --output_file
observations.csv --state_file states.csv
name | type | description | default |
---|---|---|---|
copy_all_inputs | bool | If specified, all input parameters will be deep copied before the method is run. This is useful for debugging problems where the input parameters are being modified by the algorithm, but can slow down the code. Only exists in Python binding. | False |
length | int | Length of sequence to generate. | **--** |
model | HMMModelType | Trained HMM to generate sequences with. | **--** |
seed | int | Random seed. If 0, 'std::time(NULL)' is used. | 0 |
start_state | int | Starting state of sequence. | 0 |
verbose | bool | Display informational messages and the full list of parameters and timers at the end of execution. | False |

Results are returned in a Python dictionary. The keys of the dictionary are the names of the output parameters.

name | type | description |
---|---|---|
output | matrix | Matrix to save observation sequence to. |
state | int matrix | Matrix to save hidden state sequence to. |
{: #python_hmm_generate_detailed-documentation }
This utility takes an already-trained HMM, specified as the model
parameter, and generates a random observation sequence and hidden state sequence based on its parameters. The observation sequence may be saved with the output
output parameter, and the internal state sequence may be saved with the state
output parameter.
The state to start the sequence in may be specified with the start_state
parameter.
For example, to generate a sequence of length 150 from the HMM 'hmm'
and save the observation sequence to 'observations'
and the hidden state sequence to 'states'
, the following command may be used:
>>> output = hmm_generate(model=hmm, length=150)
>>> observations = output['output']
>>> states = output['state']
name | type | description | default |
---|---|---|---|
length | Int | Length of sequence to generate. | **--** |
model | HMMModel | Trained HMM to generate sequences with. | **--** |
seed | Int | Random seed. If 0, 'std::time(NULL)' is used. | 0 |
start_state | Int | Starting state of sequence. | 0 |
verbose | Bool | Display informational messages and the full list of parameters and timers at the end of execution. | false |

Results are returned as a tuple, and can be unpacked directly into return values or stored directly as a tuple; undesired results can be ignored with the _ keyword.

name | type | description |
---|---|---|
output | Float64 matrix-like | Matrix to save observation sequence to. |
state | Int matrix-like | Matrix to save hidden state sequence to. |
{: #julia_hmm_generate_detailed-documentation }
This utility takes an already-trained HMM, specified as the model
parameter, and generates a random observation sequence and hidden state sequence based on its parameters. The observation sequence may be saved with the output
output parameter, and the internal state sequence may be saved with the state
output parameter.
The state to start the sequence in may be specified with the start_state
parameter.
For example, to generate a sequence of length 150 from the HMM hmm
and save the observation sequence to observations
and the hidden state sequence to states
, the following command may be used:
julia> observations, states = hmm_generate(150, hmm)
There are two types of input options: required options, which are passed directly to the function call, and optional options, which are passed via an initialized struct that allows keyword access to each of the options.

name | type | description | default |
---|---|---|---|
length | int | Length of sequence to generate. | **--** |
model | hmmModel | Trained HMM to generate sequences with. | **--** |
Seed | int | Random seed. If 0, 'std::time(NULL)' is used. | 0 |
StartState | int | Starting state of sequence. | 0 |
Verbose | bool | Display informational messages and the full list of parameters and timers at the end of execution. | false |

Output options are returned via Go's support for multiple return values, in the order listed below.

name | type | description |
---|---|---|
output | *mat.Dense | Matrix to save observation sequence to. |
state | *mat.Dense (with ints) | Matrix to save hidden state sequence to. |
{: #go_hmm_generate_detailed-documentation }
This utility takes an already-trained HMM, specified as the Model
parameter, and generates a random observation sequence and hidden state sequence based on its parameters. The observation sequence may be saved with the Output
output parameter, and the internal state sequence may be saved with the State
output parameter.
The state to start the sequence in may be specified with the StartState
parameter.
For example, to generate a sequence of length 150 from the HMM hmm
and save the observation sequence to observations
and the hidden state sequence to states
, the following command may be used:
// Initialize optional parameters for HmmGenerate().
param := mlpack.HmmGenerateOptions()
observations, states := mlpack.HmmGenerate(150, &hmm, param)
name | type | description | default |
---|---|---|---|
length | integer | Length of sequence to generate. | **--** |
model | HMMModel | Trained HMM to generate sequences with. | **--** |
seed | integer | Random seed. If 0, 'std::time(NULL)' is used. | 0 |
start_state | integer | Starting state of sequence. | 0 |
verbose | logical | Display informational messages and the full list of parameters and timers at the end of execution. | FALSE |

Results are returned in an R list. The keys of the list are the names of the output parameters.

name | type | description |
---|---|---|
output | numeric matrix | Matrix to save observation sequence to. |
state | integer matrix | Matrix to save hidden state sequence to. |
{: #r_hmm_generate_detailed-documentation }
This utility takes an already-trained HMM, specified as the model
parameter, and generates a random observation sequence and hidden state sequence based on its parameters. The observation sequence may be saved with the output
output parameter, and the internal state sequence may be saved with the state
output parameter.
The state to start the sequence in may be specified with the start_state
parameter.
For example, to generate a sequence of length 150 from the HMM "hmm"
and save the observation sequence to "observations"
and the hidden state sequence to "states"
, the following command may be used:
R> output <- hmm_generate(model=hmm, length=150)
R> observations <- output$output
R> states <- output$state
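The start_state and seed parameters documented above can be combined with the example above; a hedged sketch (the chosen state and seed are arbitrary assumptions):
```R
R> # Hypothetical sketch: begin the sequence in hidden state 2 and fix the
R> # random seed so the generated sequence is reproducible.
R> output <- hmm_generate(model=hmm, length=150, start_state=2, seed=42)
R> observations <- output$output
R> states <- output$state
```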
```go
// Initialize optional parameters for HoeffdingTree().
param := mlpack.HoeffdingTreeOptions()
param.BatchMode = false
param.Bins = 10
param.Confidence = 0.95
param.InfoGain = false
param.InputModel = nil
param.Labels = mat.NewDense(1, 1, nil)
param.MaxSamples = 5000
param.MinSamples = 100
param.NumericSplitStrategy = "binary"
param.ObservationsBeforeBinning = 100
param.Passes = 1
param.Test = mat.NewDense(1, 1, nil)
param.TestLabels = mat.NewDense(1, 1, nil)
param.Training = mat.NewDense(1, 1, nil)

output_model, predictions, probabilities := mlpack.HoeffdingTree(param)
```
</div>
<div class="language-decl" id="r" markdown="1">
```R
R> library(mlpack)
R> d <- hoeffding_tree(batch_mode=FALSE, bins=10, confidence=0.95,
info_gain=FALSE, input_model=NA, labels=matrix(integer(), 0, 0),
max_samples=5000, min_samples=100, numeric_split_strategy="binary",
observations_before_binning=100, passes=1, test=matrix(numeric(), 0, 0),
test_labels=matrix(integer(), 0, 0), training=matrix(numeric(), 0, 0),
verbose=FALSE)
R> output_model <- d$output_model
R> predictions <- d$predictions
R> probabilities <- d$probabilities
```
An implementation of Hoeffding trees, a form of streaming decision tree for classification. Given labeled data, a Hoeffding tree can be trained and saved for later use, or a pre-trained Hoeffding tree can be used for predicting the classifications of new points. Detailed documentation{: .language-detail-link #cli }Detailed documentation{: .language-detail-link #python }Detailed documentation{: .language-detail-link #julia }Detailed documentation{: .language-detail-link #go }Detailed documentation{: .language-detail-link #r }.
name | type | description | default |
---|---|---|---|
--batch_mode (-b) | flag | If true, samples will be considered in batch instead of as a stream. This generally results in better trees but at the cost of memory usage and runtime. | |
--bins (-B) | int | If the 'domingos' split strategy is used, this specifies the number of bins for each numeric split. | 10 |
--confidence (-c) | double | Confidence before splitting (between 0 and 1). | 0.95 |
--help (-h) | flag | Default help info. Only exists in CLI binding. | |
--info | string | Print help on a specific option. Only exists in CLI binding. | '' |
--info_gain (-i) | flag | If set, information gain is used instead of Gini impurity for calculating Hoeffding bounds. | |
--input_model_file (-m) | HoeffdingTreeModel file | Input trained Hoeffding tree model. | '' |
--labels_file (-l) | 1-d index matrix file | Labels for training dataset. | '' |
--max_samples (-n) | int | Maximum number of samples before splitting. | 5000 |
--min_samples (-I) | int | Minimum number of samples before splitting. | 100 |
--numeric_split_strategy (-N) | string | The splitting strategy to use for numeric features: 'domingos' or 'binary'. | 'binary' |
--observations_before_binning (-o) | int | If the 'domingos' split strategy is used, this specifies the number of samples observed before binning is performed. | 100 |
--passes (-s) | int | Number of passes to take over the dataset. | 1 |
--test_file (-T) | 2-d categorical matrix file | Testing dataset (may be categorical). | '' |
--test_labels_file (-L) | 1-d index matrix file | Labels of test data. | '' |
--training_file (-t) | 2-d categorical matrix file | Training dataset (may be categorical). | '' |
--verbose (-v) | flag | Display informational messages and the full list of parameters and timers at the end of execution. | |
--version (-V) | flag | Display the version of mlpack. Only exists in CLI binding. | |

name | type | description |
---|---|---|
--output_model_file (-M) | HoeffdingTreeModel file | Output for trained Hoeffding tree model. |
--predictions_file (-p) | 1-d index matrix file | Matrix to output label predictions for test data into. |
--probabilities_file (-P) | 2-d matrix file | In addition to predicting labels, provide prediction probabilities in this matrix. |
{: #cli_hoeffding_tree_detailed-documentation }
This program implements Hoeffding trees, a form of streaming decision tree best suited to large (or streaming) datasets. This program supports both categorical and numeric data. Given an input dataset, this program is able to train the tree with numerous training options, and save the model to a file. The program is also able to use a trained model or a model from file in order to predict classes for a given test set.
The training file and associated labels are specified with the --training_file (-t)
and --labels_file (-l)
parameters, respectively. Optionally, if --labels_file (-l)
is not specified, the labels are assumed to be the last dimension of the training dataset.
The training may be performed in batch mode (like a typical decision tree algorithm) by specifying the --batch_mode (-b)
option, but this may not be the best option for large datasets.
When a model is trained, it may be saved via the --output_model_file (-M)
output parameter. A model may be loaded from file for further training or testing with the --input_model_file (-m)
parameter.
Test data may be specified with the --test_file (-T)
parameter, and if performance statistics are desired for that test set, labels may be specified with the --test_labels_file (-L)
parameter. Predictions for each test point may be saved with the --predictions_file (-p)
output parameter, and class probabilities for each prediction may be saved with the --probabilities_file (-P)
output parameter.
For example, to train a Hoeffding tree with confidence 0.99 with data 'dataset.arff'
, saving the trained tree to 'tree.bin'
, the following command may be used:
$ mlpack_hoeffding_tree --training_file dataset.arff --confidence 0.99
--output_model_file tree.bin
Then, this tree may be used to make predictions on the test set 'test_set.arff'
, saving the predictions into 'predictions.csv'
and the class probabilities into 'class_probs.csv'
with the following command:
$ mlpack_hoeffding_tree --input_model_file tree.bin --test_file test_set.arff
--predictions_file predictions.csv --probabilities_file class_probs.csv
name | type | description | default |
---|---|---|---|
batch_mode |
bool |
If true, samples will be considered in batch instead of as a stream. This generally results in better trees but at the cost of memory usage and runtime. | False |
bins |
int |
If the 'domingos' split strategy is used, this specifies the number of bins for each numeric split. | 10 |
confidence |
float |
Confidence before splitting (between 0 and 1). | 0.95 |
copy_all_inputs |
bool |
If specified, all input parameters will be deep copied before the method is run. This is useful for debugging problems where the input parameters are being modified by the algorithm, but can slow down the code. Only exists in Python binding. | False |
info_gain |
bool |
If set, information gain is used instead of Gini impurity for calculating Hoeffding bounds. | False |
input_model |
HoeffdingTreeModelType |
Input trained Hoeffding tree model. | None |
labels |
int vector |
Labels for training dataset. | np.empty([0], dtype=np.uint64) |
max_samples |
int |
Maximum number of samples before splitting. | 5000 |
min_samples |
int |
Minimum number of samples before splitting. | 100 |
numeric_split_strategy |
str |
The splitting strategy to use for numeric features: 'domingos' or 'binary'. | 'binary' |
observations_before_binning |
int |
If the 'domingos' split strategy is used, this specifies the number of samples observed before binning is performed. | 100 |
passes |
int |
Number of passes to take over the dataset. | 1 |
test |
categorical matrix |
Testing dataset (may be categorical). | np.empty([0, 0]) |
test_labels |
int vector |
Labels of test data. | np.empty([0], dtype=np.uint64) |
training |
categorical matrix |
Training dataset (may be categorical). | np.empty([0, 0]) |
verbose |
bool |
Display informational messages and the full list of parameters and timers at the end of execution. | False |
Results are returned in a Python dictionary. The keys of the dictionary are the names of the output parameters.
name | type | description |
---|---|---|
output_model |
HoeffdingTreeModelType |
Output for trained Hoeffding tree model. |
predictions |
int vector |
Matrix to output label predictions for test data into. |
probabilities |
matrix |
In addition to predicting labels, provide prediction probabilities in this matrix. |
{: #python_hoeffding_tree_detailed-documentation }
This program implements Hoeffding trees, a form of streaming decision tree suited best for large (or streaming) datasets. This program supports both categorical and numeric data. Given an input dataset, this program is able to train the tree with numerous training options, and save the model to a file. The program is also able to use a trained model or a model from file in order to predict classes for a given test set.
The training dataset and associated labels are specified with the training
and labels
parameters, respectively. Optionally, if labels
is not specified, the labels are assumed to be the last dimension of the training dataset.
The training may be performed in batch mode (like a typical decision tree algorithm) by specifying the batch_mode
option, but this may not be the best option for large datasets.
When a model is trained, it may be saved via the output_model
output parameter. A model may be loaded from file for further training or testing with the input_model
parameter.
Test data may be specified with the test
parameter, and if performance statistics are desired for that test set, labels may be specified with the test_labels
parameter. Predictions for each test point may be saved with the predictions
output parameter, and class probabilities for each prediction may be saved with the probabilities
output parameter.
For example, to train a Hoeffding tree with confidence 0.99 with data 'dataset'
, saving the trained tree to 'tree'
, the following command may be used:
>>> output = hoeffding_tree(training=dataset, confidence=0.99)
>>> tree = output['output_model']
Then, this tree may be used to make predictions on the test set 'test_set'
, saving the predictions into 'predictions'
and the class probabilities into 'class_probs'
with the following command:
>>> output = hoeffding_tree(input_model=tree, test=test_set)
>>> predictions = output['predictions']
>>> class_probs = output['probabilities']
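Putting the pieces together, the following is a hedged end-to-end sketch (the array names dataset, labels, test_set, and test_labels are assumed to be NumPy arrays loaded beforehand, and the accuracy computation is plain NumPy, not part of the binding); it uses only parameters listed in the table above:
```python
import numpy as np
from mlpack import hoeffding_tree

# Train in batch mode: generally better trees, at the cost of memory
# usage and runtime (see the batch_mode parameter above).
output = hoeffding_tree(training=dataset, labels=labels,
                        batch_mode=True, confidence=0.99)
tree = output['output_model']

# Predict on held-out data and score the predictions by hand.
result = hoeffding_tree(input_model=tree, test=test_set)
predictions = result['predictions']
print('accuracy:', np.mean(predictions == test_labels))
```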
name | type | description | default |
---|---|---|---|
batch_mode |
Bool |
If true, samples will be considered in batch instead of as a stream. This generally results in better trees but at the cost of memory usage and runtime. | false |
bins |
Int |
If the 'domingos' split strategy is used, this specifies the number of bins for each numeric split. | 10 |
confidence |
Float64 |
Confidence before splitting (between 0 and 1). | 0.95 |
info_gain |
Bool |
If set, information gain is used instead of Gini impurity for calculating Hoeffding bounds. | false |
input_model |
HoeffdingTreeModel |
Input trained Hoeffding tree model. | nothing |
labels |
Int vector-like |
Labels for training dataset. | Int[] |
max_samples |
Int |
Maximum number of samples before splitting. | 5000 |
min_samples |
Int |
Minimum number of samples before splitting. | 100 |
numeric_split_strategy |
String |
The splitting strategy to use for numeric features: 'domingos' or 'binary'. | "binary" |
observations_before_binning |
Int |
If the 'domingos' split strategy is used, this specifies the number of samples observed before binning is performed. | 100 |
passes |
Int |
Number of passes to take over the dataset. | 1 |
test |
Tuple{Array{Bool, 1}, Array{Float64, 2}} |
Testing dataset (may be categorical). | zeros(0, 0) |
test_labels |
Int vector-like |
Labels of test data. | Int[] |
training |
Tuple{Array{Bool, 1}, Array{Float64, 2}} |
Training dataset (may be categorical). | zeros(0, 0) |
verbose |
Bool |
Display informational messages and the full list of parameters and timers at the end of execution. | false |
Results are returned as a tuple, and can be unpacked directly into return values or stored directly as a tuple; undesired results can be ignored with the _ keyword.
name | type | description |
---|---|---|
output_model |
HoeffdingTreeModel |
Output for trained Hoeffding tree model. |
predictions |
Int vector-like |
Matrix to output label predictions for test data into. |
probabilities |
Float64 matrix-like |
In addition to predicting labels, provide prediction probabilities in this matrix. |
{: #julia_hoeffding_tree_detailed-documentation }
This program implements Hoeffding trees, a form of streaming decision tree suited best for large (or streaming) datasets. This program supports both categorical and numeric data. Given an input dataset, this program is able to train the tree with numerous training options, and save the model to a file. The program is also able to use a trained model or a model from file in order to predict classes for a given test set.
The training dataset and associated labels are specified with the training
and labels
parameters, respectively. Optionally, if labels
is not specified, the labels are assumed to be the last dimension of the training dataset.
The training may be performed in batch mode (like a typical decision tree algorithm) by specifying the batch_mode
option, but this may not be the best option for large datasets.
When a model is trained, it may be saved via the output_model
output parameter. A model may be loaded from file for further training or testing with the input_model
parameter.
Test data may be specified with the test
parameter, and if performance statistics are desired for that test set, labels may be specified with the test_labels
parameter. Predictions for each test point may be saved with the predictions
output parameter, and class probabilities for each prediction may be saved with the probabilities
output parameter.
For example, to train a Hoeffding tree with confidence 0.99 with data dataset
, saving the trained tree to tree
, the following command may be used:
julia> tree, _, _ = hoeffding_tree(confidence=0.99,
training=dataset)
Then, this tree may be used to make predictions on the test set test_set
, saving the predictions into predictions
and the class probabilities into class_probs
with the following command:
julia> _, predictions, class_probs =
hoeffding_tree(input_model=tree, test=test_set)
There are two types of input options: required options, which are passed directly to the function call, and optional options, which are passed via an initialized struct, which allows keyword access to each of the options.
name | type | description | default |
---|---|---|---|
BatchMode |
bool |
If true, samples will be considered in batch instead of as a stream. This generally results in better trees but at the cost of memory usage and runtime. | false |
Bins |
int |
If the 'domingos' split strategy is used, this specifies the number of bins for each numeric split. | 10 |
Confidence |
float64 |
Confidence before splitting (between 0 and 1). | 0.95 |
InfoGain |
bool |
If set, information gain is used instead of Gini impurity for calculating Hoeffding bounds. | false |
InputModel |
hoeffdingTreeModel |
Input trained Hoeffding tree model. | nil |
Labels |
*mat.Dense (1d with ints) |
Labels for training dataset. | mat.NewDense(1, 1, nil) |
MaxSamples |
int |
Maximum number of samples before splitting. | 5000 |
MinSamples |
int |
Minimum number of samples before splitting. | 100 |
NumericSplitStrategy |
string |
The splitting strategy to use for numeric features: 'domingos' or 'binary'. | "binary" |
ObservationsBeforeBinning |
int |
If the 'domingos' split strategy is used, this specifies the number of samples observed before binning is performed. | 100 |
Passes |
int |
Number of passes to take over the dataset. | 1 |
Test |
matrixWithInfo |
Testing dataset (may be categorical). | mat.NewDense(1, 1, nil) |
TestLabels |
*mat.Dense (1d with ints) |
Labels of test data. | mat.NewDense(1, 1, nil) |
Training |
matrixWithInfo |
Training dataset (may be categorical). | mat.NewDense(1, 1, nil) |
Verbose |
bool |
Display informational messages and the full list of parameters and timers at the end of execution. | false |
Output options are returned via Go's support for multiple return values, in the order listed below.
name | type | description |
---|---|---|
outputModel |
hoeffdingTreeModel |
Output for trained Hoeffding tree model. |
predictions |
*mat.Dense (1d with ints) |
Matrix to output label predictions for test data into. |
probabilities |
*mat.Dense |
In addition to predicting labels, provide prediction probabilities in this matrix. |
{: #go_hoeffding_tree_detailed-documentation }
This program implements Hoeffding trees, a form of streaming decision tree suited best for large (or streaming) datasets. This program supports both categorical and numeric data. Given an input dataset, this program is able to train the tree with numerous training options, and save the model to a file. The program is also able to use a trained model or a model from file in order to predict classes for a given test set.
The training dataset and associated labels are specified with the Training
and Labels
parameters, respectively. Optionally, if Labels
is not specified, the labels are assumed to be the last dimension of the training dataset.
The training may be performed in batch mode (like a typical decision tree algorithm) by specifying the BatchMode
option, but this may not be the best option for large datasets.
When a model is trained, it may be saved via the OutputModel
output parameter. A model may be loaded from file for further training or testing with the InputModel
parameter.
Test data may be specified with the Test
parameter, and if performance statistics are desired for that test set, labels may be specified with the TestLabels
parameter. Predictions for each test point may be saved with the Predictions
output parameter, and class probabilities for each prediction may be saved with the Probabilities
output parameter.
For example, to train a Hoeffding tree with confidence 0.99 with data dataset
, saving the trained tree to tree
, the following command may be used:
// Initialize optional parameters for HoeffdingTree().
param := mlpack.HoeffdingTreeOptions()
param.Training = dataset
param.Confidence = 0.99
tree, _, _ := mlpack.HoeffdingTree(param)
Then, this tree may be used to make predictions on the test set test_set
, saving the predictions into predictions
and the class probabilities into class_probs
with the following command:
// Initialize optional parameters for HoeffdingTree().
param := mlpack.HoeffdingTreeOptions()
param.InputModel = &tree
param.Test = test_set
_, predictions, class_probs := mlpack.HoeffdingTree(param)
name | type | description | default |
---|---|---|---|
batch_mode |
logical |
If true, samples will be considered in batch instead of as a stream. This generally results in better trees but at the cost of memory usage and runtime. | FALSE |
bins |
integer |
If the 'domingos' split strategy is used, this specifies the number of bins for each numeric split. | 10 |
confidence |
numeric |
Confidence before splitting (between 0 and 1). | 0.95 |
info_gain |
logical |
If set, information gain is used instead of Gini impurity for calculating Hoeffding bounds. | FALSE |
input_model |
HoeffdingTreeModel |
Input trained Hoeffding tree model. | NA |
labels |
integer vector |
Labels for training dataset. | matrix(integer(), 0, 0) |
max_samples |
integer |
Maximum number of samples before splitting. | 5000 |
min_samples |
integer |
Minimum number of samples before splitting. | 100 |
numeric_split_strategy |
character |
The splitting strategy to use for numeric features: 'domingos' or 'binary'. | "binary" |
observations_before_binning |
integer |
If the 'domingos' split strategy is used, this specifies the number of samples observed before binning is performed. | 100 |
passes |
integer |
Number of passes to take over the dataset. | 1 |
test |
categorical matrix/data.frame |
Testing dataset (may be categorical). | matrix(numeric(), 0, 0) |
test_labels |
integer vector |
Labels of test data. | matrix(integer(), 0, 0) |
training |
categorical matrix/data.frame |
Training dataset (may be categorical). | matrix(numeric(), 0, 0) |
verbose |
logical |
Display informational messages and the full list of parameters and timers at the end of execution. | FALSE |
Results are returned in an R list. The keys of the list are the names of the output parameters.
name | type | description |
---|---|---|
output_model |
HoeffdingTreeModel |
Output for trained Hoeffding tree model. |
predictions |
integer vector |
Matrix to output label predictions for test data into. |
probabilities |
numeric matrix |
In addition to predicting labels, provide prediction probabilities in this matrix. |
{: #r_hoeffding_tree_detailed-documentation }
This program implements Hoeffding trees, a form of streaming decision tree suited best for large (or streaming) datasets. This program supports both categorical and numeric data. Given an input dataset, this program is able to train the tree with numerous training options, and save the model to a file. The program is also able to use a trained model or a model from file in order to predict classes for a given test set.
The training dataset and associated labels are specified with the training
and labels
parameters, respectively. Optionally, if labels
is not specified, the labels are assumed to be the last dimension of the training dataset.
The training may be performed in batch mode (like a typical decision tree algorithm) by specifying the batch_mode
option, but this may not be the best option for large datasets.
When a model is trained, it may be saved via the output_model
output parameter. A model may be loaded from file for further training or testing with the input_model
parameter.
Test data may be specified with the test
parameter, and if performance statistics are desired for that test set, labels may be specified with the test_labels
parameter. Predictions for each test point may be saved with the predictions
output parameter, and class probabilities for each prediction may be saved with the probabilities
output parameter.
For example, to train a Hoeffding tree with confidence 0.99 with data "dataset"
, saving the trained tree to "tree"
, the following command may be used:
R> output <- hoeffding_tree(training=dataset, confidence=0.99)
R> tree <- output$output_model
Then, this tree may be used to make predictions on the test set "test_set"
, saving the predictions into "predictions"
and the class probabilities into "class_probs"
with the following command:
R> output <- hoeffding_tree(input_model=tree, test=test_set)
R> predictions <- output$predictions
R> class_probs <- output$probabilities
// Initialize optional parameters for Kde().
param := mlpack.KdeOptions()
param.AbsError = 0
param.Algorithm = "dual-tree"
param.Bandwidth = 1
param.InitialSampleSize = 100
param.InputModel = nil
param.Kernel = "gaussian"
param.McBreakCoef = 0.4
param.McEntryCoef = 3
param.McProbability = 0.95
param.MonteCarlo = false
param.Query = mat.NewDense(1, 1, nil)
param.Reference = mat.NewDense(1, 1, nil)
param.RelError = 0.05
param.Tree = "kd-tree"

output_model, predictions := mlpack.Kde(param)
R> library(mlpack)
R> d <- kde(abs_error=0, algorithm="dual-tree", bandwidth=1,
initial_sample_size=100, input_model=NA, kernel="gaussian",
mc_break_coef=0.4, mc_entry_coef=3, mc_probability=0.95,
monte_carlo=FALSE, query=matrix(numeric(), 0, 0),
reference=matrix(numeric(), 0, 0), rel_error=0.05, tree="kd-tree",
verbose=FALSE)
R> output_model <- d$output_model
R> predictions <- d$predictions
An implementation of kernel density estimation with dual-tree algorithms. Given a set of reference points and query points and a kernel function, this can estimate the density function at the location of each query point using trees; trees that are built can be saved for later use. Detailed documentation{: .language-detail-link #cli }Detailed documentation{: .language-detail-link #python }Detailed documentation{: .language-detail-link #julia }Detailed documentation{: .language-detail-link #go }Detailed documentation{: .language-detail-link #r }.
name | type | description | default |
---|---|---|---|
--abs_error (-E) |
double |
Absolute error tolerance for the prediction. | 0 |
--algorithm (-a) |
string |
Algorithm to use for the prediction ('dual-tree', 'single-tree'). | 'dual-tree' |
--bandwidth (-b) |
double |
Bandwidth of the kernel. | 1 |
--help (-h) |
flag |
Default help info. Only exists in CLI binding. | |
--info |
string |
Print help on a specific option. Only exists in CLI binding. | '' |
--initial_sample_size (-s) |
int |
Initial sample size for Monte Carlo estimations. | 100 |
--input_model_file (-m) |
KDEModel file |
Contains pre-trained KDE model. | '' |
--kernel (-k) |
string |
Kernel to use for the prediction ('gaussian', 'epanechnikov', 'laplacian', 'spherical', 'triangular'). | 'gaussian' |
--mc_break_coef (-c) |
double |
Controls the fraction of a node's descendant points that serves as the sample size limit before the algorithm recurses. | 0.4 |
--mc_entry_coef (-C) |
double |
Controls how much larger the number of a node's descendants must be than the initial sample size in order for the node to be a candidate for Monte Carlo estimations. | 3 |
--mc_probability (-P) |
double |
Probability of the estimation being bounded by relative error when using Monte Carlo estimations. | 0.95 |
--monte_carlo (-S) |
flag |
Whether to use Monte Carlo estimations when possible. | |
--query_file (-q) |
2-d matrix file |
Query dataset to perform KDE on. | '' |
--reference_file (-r) |
2-d matrix file |
Input reference dataset used for KDE. | '' |
--rel_error (-e) |
double |
Relative error tolerance for the prediction. | 0.05 |
--tree (-t) |
string |
Tree to use for the prediction ('kd-tree', 'ball-tree', 'cover-tree', 'octree', 'r-tree'). | 'kd-tree' |
--verbose (-v) |
flag |
Display informational messages and the full list of parameters and timers at the end of execution. | |
--version (-V) |
flag |
Display the version of mlpack. Only exists in CLI binding. |
name | type | description |
---|---|---|
--output_model_file (-M) |
KDEModel file |
If specified, the KDE model will be saved here. |
--predictions_file (-p) |
1-d matrix file |
Vector to store density predictions. |
{: #cli_kde_detailed-documentation }
This program performs Kernel Density Estimation (KDE), a non-parametric way of estimating a probability density function. For each query point, the program will estimate its probability density by applying a kernel function to each reference point. The computational complexity of this is O(N^2) where there are N query points and N reference points, but this implementation will typically see better performance, as it uses an approximate dual-tree or single-tree algorithm for acceleration.
Dual or single tree optimization avoids many barely relevant calculations (as kernel function values decrease with distance), so it is an approximate computation. You can specify the maximum relative error tolerance for each query value with --rel_error (-e)
as well as the maximum absolute error tolerance with the parameter --abs_error (-E)
. This program runs using a Euclidean metric. The kernel function can be selected using the --kernel (-k)
option. You can also choose which type of tree to use for the dual-tree algorithm with --tree (-t)
. It is also possible to select whether to use the dual-tree or the single-tree algorithm using the --algorithm (-a)
option.
Monte Carlo estimations can be used to accelerate the KDE estimate when the Gaussian kernel is used. This provides a probabilistic guarantee on the error of the resulting KDE instead of an absolute guarantee. To enable Monte Carlo estimations, the --monte_carlo (-S)
flag can be used, and the success probability can be set with the --mc_probability (-P)
option. It is possible to set the initial sample size for the Monte Carlo estimation using --initial_sample_size (-s)
. This implementation will only consider a node as a candidate for Monte Carlo estimation if its number of descendant points is bigger than the initial sample size. This can be controlled using a coefficient that will multiply the initial sample size, which can be set using --mc_entry_coef (-C)
. To avoid using the same amount of computation an exact approach would take, this program recurses into the tree whenever a given fraction of the node's descendant points has already been computed. This fraction is set using --mc_break_coef (-c)
.
For example, the following will run KDE using the data in 'ref_data.csv'
for training and the data in 'qu_data.csv'
as query data. It will apply an Epanechnikov kernel with a 0.2 bandwidth to each reference point and use a KD-Tree for the dual-tree optimization. The returned predictions will be within 5% of the real KDE value for each query point.
$ mlpack_kde --reference_file ref_data.csv --query_file qu_data.csv
--bandwidth 0.2 --kernel epanechnikov --tree kd-tree --rel_error 0.05
--predictions_file out_data.csv
The predicted density estimations will be stored in 'out_data.csv'
.
If no --query_file (-q)
is provided, then KDE will be computed on the --reference_file (-r)
dataset.
It is possible to select either a reference dataset or an input model but not both at the same time. If an input model is selected and parameter values are not set (e.g. --bandwidth (-b)
) then default parameter values will be used.
In addition to the last program call, it is also possible to activate Monte Carlo estimations if a Gaussian kernel is used. This can provide faster results, but the KDE will only have a probabilistic guarantee of meeting the desired error bound (instead of an absolute guarantee). The following example will run KDE using Monte Carlo estimation when possible. The results will be within 5% of the real KDE value with 95% probability. The initial sample size for the Monte Carlo estimation will be 200 points, and a node will be a candidate for the estimation only when it contains more than 700 (i.e. 3.5 * 200) points. If a node contains 700 points and 420 (i.e. 0.6 * 700) of them have already been sampled, then the algorithm will recurse instead of continuing to sample.
$ mlpack_kde --reference_file ref_data.csv --query_file qu_data.csv
--bandwidth 0.2 --kernel gaussian --tree kd-tree --rel_error 0.05
--predictions_file out_data.csv --monte_carlo --mc_probability 0.95
--initial_sample_size 200 --mc_entry_coef 3.5 --mc_break_coef 0.6
name | type | description | default |
---|---|---|---|
abs_error |
float |
Absolute error tolerance for the prediction. | 0 |
algorithm |
str |
Algorithm to use for the prediction ('dual-tree', 'single-tree'). | 'dual-tree' |
bandwidth |
float |
Bandwidth of the kernel. | 1 |
copy_all_inputs |
bool |
If specified, all input parameters will be deep copied before the method is run. This is useful for debugging problems where the input parameters are being modified by the algorithm, but can slow down the code. Only exists in Python binding. | False |
initial_sample_size |
int |
Initial sample size for Monte Carlo estimations. | 100 |
input_model |
KDEModelType |
Contains pre-trained KDE model. | None |
kernel |
str |
Kernel to use for the prediction ('gaussian', 'epanechnikov', 'laplacian', 'spherical', 'triangular'). | 'gaussian' |
mc_break_coef |
float |
Controls the fraction of a node's descendant points that serves as the sample size limit before the algorithm recurses. | 0.4 |
mc_entry_coef |
float |
Controls how much larger the number of a node's descendants must be than the initial sample size in order for the node to be a candidate for Monte Carlo estimations. | 3 |
mc_probability |
float |
Probability of the estimation being bounded by relative error when using Monte Carlo estimations. | 0.95 |
monte_carlo |
bool |
Whether to use Monte Carlo estimations when possible. | False |
query |
matrix |
Query dataset to perform KDE on. | np.empty([0, 0]) |
reference |
matrix |
Input reference dataset used for KDE. | np.empty([0, 0]) |
rel_error |
float |
Relative error tolerance for the prediction. | 0.05 |
tree |
str |
Tree to use for the prediction ('kd-tree', 'ball-tree', 'cover-tree', 'octree', 'r-tree'). | 'kd-tree' |
verbose |
bool |
Display informational messages and the full list of parameters and timers at the end of execution. | False |
Results are returned in a Python dictionary. The keys of the dictionary are the names of the output parameters.
name | type | description |
---|---|---|
output_model |
KDEModelType |
If specified, the KDE model will be saved here. |
predictions |
vector |
Vector to store density predictions. |
{: #python_kde_detailed-documentation }
This program performs Kernel Density Estimation (KDE), a non-parametric way of estimating a probability density function. For each query point, the program will estimate its probability density by applying a kernel function to each reference point. The computational complexity of this is O(N^2) where there are N query points and N reference points, but this implementation will typically see better performance, as it uses an approximate dual-tree or single-tree algorithm for acceleration.
Dual or single tree optimization avoids many barely relevant calculations (as kernel function values decrease with distance), so it is an approximate computation. You can specify the maximum relative error tolerance for each query value with rel_error
as well as the maximum absolute error tolerance with the parameter abs_error
. This program runs using a Euclidean metric. The kernel function can be selected using the kernel
option. You can also choose which type of tree to use for the dual-tree algorithm with tree
. It is also possible to select whether to use the dual-tree or the single-tree algorithm using the algorithm
option.
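To make the O(N^2) computation concrete, here is a plain NumPy sketch of the textbook Gaussian estimator that the tree-based algorithms approximate (an illustration only, not mlpack's implementation; rows are points here, and mlpack's exact scaling of the estimate may differ):
```python
import numpy as np

def naive_gaussian_kde(reference, query, bandwidth):
    # Compare every query point against every reference point: O(m * n).
    diffs = query[:, None, :] - reference[None, :, :]
    sq_dists = np.sum(diffs ** 2, axis=2)
    # Gaussian kernel response, averaged over the reference set.
    return np.exp(-sq_dists / (2.0 * bandwidth ** 2)).mean(axis=1)
```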
Monte Carlo estimations can be used to accelerate the KDE estimate when the Gaussian kernel is used. This provides a probabilistic guarantee on the error of the resulting KDE instead of an absolute guarantee. To enable Monte Carlo estimations, the monte_carlo
flag can be used, and the success probability can be set with the mc_probability
option. It is possible to set the initial sample size for the Monte Carlo estimation using initial_sample_size
. This implementation will only consider a node as a candidate for Monte Carlo estimation if its number of descendant points is bigger than the initial sample size. This can be controlled using a coefficient that will multiply the initial sample size, which can be set using mc_entry_coef
. To avoid using the same amount of computation an exact approach would take, this program recurses into the tree whenever a given fraction of the node's descendant points has already been computed. This fraction is set using mc_break_coef
.
For example, the following will run KDE using the data in 'ref_data'
for training and the data in 'qu_data'
as query data. It will apply an Epanechnikov kernel with a 0.2 bandwidth to each reference point and use a KD-Tree for the dual-tree optimization. The returned predictions will be within 5% of the real KDE value for each query point.
>>> output = kde(reference=ref_data, query=qu_data, bandwidth=0.2,
kernel='epanechnikov', tree='kd-tree', rel_error=0.05)
>>> out_data = output['predictions']
The predicted density estimations will be stored in 'out_data'
.
If no query
is provided, then KDE will be computed on the reference
dataset.
It is possible to select either a reference dataset or an input model but not both at the same time. If an input model is selected and parameter values are not set (e.g. bandwidth
) then default parameter values will be used.
In addition to the last program call, it is also possible to activate Monte Carlo estimations if a Gaussian kernel is used. This can provide faster results, but the KDE will only have a probabilistic guarantee of meeting the desired error bound (instead of an absolute guarantee). The following example will run KDE using Monte Carlo estimation when possible. The results will be within 5% of the real KDE value with 95% probability. The initial sample size for the Monte Carlo estimation will be 200 points, and a node will be a candidate for the estimation only when it contains more than 700 (i.e. 3.5 * 200) points. If a node contains 700 points and 420 (i.e. 0.6 * 700) of them have already been sampled, then the algorithm will recurse instead of continuing to sample.
>>> output = kde(reference=ref_data, query=qu_data, bandwidth=0.2,
kernel='gaussian', tree='kd-tree', rel_error=0.05, monte_carlo=True,
mc_probability=0.95, initial_sample_size=200, mc_entry_coef=3.5,
mc_break_coef=0.6)
>>> out_data = output['predictions']
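To make the roles of the two Monte Carlo coefficients concrete, here is the arithmetic from the example above written out (plain Python, illustrative only):
```python
initial_sample_size = 200
mc_entry_coef = 3.5
mc_break_coef = 0.6

# A node becomes a Monte Carlo candidate once it holds more descendant
# points than mc_entry_coef * initial_sample_size.
entry_threshold = mc_entry_coef * initial_sample_size  # 700.0

# Sampling in such a node stops (and the algorithm recurses) once
# mc_break_coef * 700 of its points have been visited.
break_point = mc_break_coef * entry_threshold          # 420.0
```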
name | type | description | default |
---|---|---|---|
abs_error |
Float64 |
Absolute error tolerance for the prediction. | 0 |
algorithm |
String |
Algorithm to use for the prediction ('dual-tree', 'single-tree'). | "dual-tree" |
bandwidth |
Float64 |
Bandwidth of the kernel. | 1 |
initial_sample_size |
Int |
Initial sample size for Monte Carlo estimations. | 100 |
input_model |
KDEModel |
Contains pre-trained KDE model. | nothing |
kernel |
String |
Kernel to use for the prediction ('gaussian', 'epanechnikov', 'laplacian', 'spherical', 'triangular'). | "gaussian" |
mc_break_coef |
Float64 |
Controls the fraction of a node's descendant points that serves as the sample size limit before the algorithm recurses. | 0.4 |
mc_entry_coef |
Float64 |
Controls how much larger the number of a node's descendants must be than the initial sample size in order for the node to be a candidate for Monte Carlo estimations. | 3 |
mc_probability |
Float64 |
Probability of the estimation being bounded by relative error when using Monte Carlo estimations. | 0.95 |
monte_carlo |
Bool |
Whether to use Monte Carlo estimations when possible. | false |
query |
Float64 matrix-like |
Query dataset to perform KDE on. | zeros(0, 0) |
reference |
Float64 matrix-like |
Input reference dataset used for KDE. | zeros(0, 0) |
rel_error |
Float64 |
Relative error tolerance for the prediction. | 0.05 |
tree |
String |
Tree to use for the prediction ('kd-tree', 'ball-tree', 'cover-tree', 'octree', 'r-tree'). | "kd-tree" |
verbose |
Bool |
Display informational messages and the full list of parameters and timers at the end of execution. | false |
Results are returned as a tuple, and can be unpacked directly into return values or stored directly as a tuple; undesired results can be ignored with the _ keyword.
name | type | description |
---|---|---|
output_model |
KDEModel |
If specified, the KDE model will be saved here. |
predictions |
Float64 vector-like |
Vector to store density predictions. |
{: #julia_kde_detailed-documentation }
This program performs Kernel Density Estimation (KDE), a non-parametric way of estimating a probability density function. For each query point, the program will estimate its probability density by applying a kernel function to each reference point. The computational complexity of this is O(N^2) where there are N query points and N reference points, but this implementation will typically see better performance, as it uses an approximate dual-tree or single-tree algorithm for acceleration.
Dual or single tree optimization avoids many barely relevant calculations (as kernel function values decrease with distance), so it is an approximate computation. You can specify the maximum relative error tolerance for each query value with rel_error
as well as the maximum absolute error tolerance with the parameter abs_error
. This program runs using a Euclidean metric. The kernel function can be selected using the kernel
option. You can also choose which type of tree to use for the dual-tree algorithm with tree
. It is also possible to select whether to use the dual-tree or the single-tree algorithm using the algorithm
option.
Monte Carlo estimations can be used to accelerate the KDE estimate when the Gaussian kernel is used. This provides a probabilistic guarantee on the error of the resulting KDE instead of an absolute guarantee. To enable Monte Carlo estimations, the monte_carlo
flag can be used, and the success probability can be set with the mc_probability
option. It is possible to set the initial sample size for the Monte Carlo estimation using initial_sample_size
. This implementation will only consider a node as a candidate for Monte Carlo estimation if its number of descendant points is bigger than the initial sample size. This can be controlled using a coefficient that will multiply the initial sample size, which can be set using mc_entry_coef
. To avoid using the same amount of computation an exact approach would take, this program recurses into the tree whenever a given fraction of the node's descendant points has already been computed. This fraction is set using mc_break_coef
.
For example, the following will run KDE using the data in ref_data
for training and the data in qu_data
as query data. It will apply an Epanechnikov kernel with a 0.2 bandwidth to each reference point and use a KD-Tree for the dual-tree optimization. The returned predictions will be within 5% of the real KDE value for each query point.
julia> using CSV
julia> ref_data = CSV.read("ref_data.csv")
julia> qu_data = CSV.read("qu_data.csv")
julia> _, out_data = kde(bandwidth=0.2, kernel="epanechnikov",
query=qu_data, reference=ref_data, rel_error=0.05, tree="kd-tree")
The predicted density estimations will be stored in out_data
.
If no query
is provided, then KDE will be computed on the reference
dataset.
It is possible to select either a reference dataset or an input model but not both at the same time. If an input model is selected and parameter values are not set (e.g. bandwidth
) then default parameter values will be used.
In addition to the last program call, it is also possible to activate Monte Carlo estimations if a Gaussian kernel is used. This can provide faster results, but the KDE will only have a probabilistic guarantee of meeting the desired error bound (instead of an absolute guarantee). The following example will run KDE using Monte Carlo estimation when possible. The results will be within 5% of the real KDE value with 95% probability. The initial sample size for the Monte Carlo estimation will be 200 points, and a node will be a candidate for the estimation only when it contains more than 700 (i.e. 3.5 * 200) points. If a node contains 700 points and 420 (i.e. 0.6 * 700) of them have already been sampled, then the algorithm will recurse instead of continuing to sample.
julia> using CSV
julia> ref_data = CSV.read("ref_data.csv")
julia> qu_data = CSV.read("qu_data.csv")
julia> _, out_data = kde(bandwidth=0.2, initial_sample_size=200,
kernel="gaussian", mc_break_coef=0.6, mc_entry_coef=3.5,
mc_probability=0.95, monte_carlo=true, query=qu_data,
reference=ref_data, rel_error=0.05, tree="kd-tree")
There are two types of input options: required options, which are passed directly to the function call, and optional options, which are passed via an initialized struct, which allows keyword access to each of the options.
name | type | description | default |
---|---|---|---|
AbsError |
float64 |
Absolute error tolerance for the prediction. | 0 |
Algorithm |
string |
Algorithm to use for the prediction ('dual-tree', 'single-tree'). | "dual-tree" |
Bandwidth |
float64 |
Bandwidth of the kernel. | 1 |
InitialSampleSize |
int |
Initial sample size for Monte Carlo estimations. | 100 |
InputModel |
kdeModel |
Contains pre-trained KDE model. | nil |
Kernel |
string |
Kernel to use for the prediction ('gaussian', 'epanechnikov', 'laplacian', 'spherical', 'triangular'). | "gaussian" |
McBreakCoef |
float64 |
Controls the fraction of a node's descendant points that serves as the sample size limit before the algorithm recurses. | 0.4 |
McEntryCoef |
float64 |
Controls how much larger the number of a node's descendants must be than the initial sample size in order for the node to be a candidate for Monte Carlo estimations. | 3 |
McProbability |
float64 |
Probability of the estimation being bounded by relative error when using Monte Carlo estimations. | 0.95 |
MonteCarlo |
bool |
Whether to use Monte Carlo estimations when possible. | false |
Query |
*mat.Dense |
Query dataset to perform KDE on. | mat.NewDense(1, 1, nil) |
Reference |
*mat.Dense |
Input reference dataset used for KDE. | mat.NewDense(1, 1, nil) |
RelError |
float64 |
Relative error tolerance for the prediction. | 0.05 |
Tree |
string |
Tree to use for the prediction ('kd-tree', 'ball-tree', 'cover-tree', 'octree', 'r-tree'). | "kd-tree" |
Verbose |
bool |
Display informational messages and the full list of parameters and timers at the end of execution. | false |
Output options are returned via Go's support for multiple return values, in the order listed below.
name | type | description |
---|---|---|
outputModel |
kdeModel |
If specified, the KDE model will be saved here. |
predictions |
*mat.Dense (1d) |
Vector to store density predictions. |
{: #go_kde_detailed-documentation }
This program performs Kernel Density Estimation (KDE), a non-parametric way of estimating a probability density function. For each query point, the program will estimate its probability density by applying a kernel function to each reference point. The computational complexity of this is O(N^2) where there are N query points and N reference points, but this implementation will typically see better performance, as it uses an approximate dual-tree or single-tree algorithm for acceleration.
Dual or single tree optimization avoids many barely relevant calculations (as kernel function values decrease with distance), so it is an approximate computation. You can specify the maximum relative error tolerance for each query value with RelError
as well as the maximum absolute error tolerance with the parameter AbsError
. This program runs using a Euclidean metric. The kernel function can be selected using the Kernel
option. You can also choose which type of tree to use for the dual-tree algorithm with Tree
. It is also possible to select whether to use the dual-tree or the single-tree algorithm using the Algorithm
option.
Monte Carlo estimations can be used to accelerate the KDE estimate when the Gaussian kernel is used. This provides a probabilistic guarantee on the error of the resulting KDE instead of an absolute guarantee. To enable Monte Carlo estimations, the MonteCarlo
flag can be used, and the success probability can be set with the McProbability
option. It is possible to set the initial sample size for the Monte Carlo estimation using InitialSampleSize
. This implementation will only consider a node as a candidate for Monte Carlo estimation if its number of descendant points is bigger than the initial sample size. This can be controlled using a coefficient that will multiply the initial sample size, which can be set using McEntryCoef
. To avoid using the same amount of computation an exact approach would take, this program recurses into the tree whenever a given fraction of the node's descendant points has already been computed. This fraction is set using McBreakCoef
.
For example, the following will run KDE using the data in ref_data
for training and the data in qu_data
as query data. It will apply an Epanechnikov kernel with a 0.2 bandwidth to each reference point and use a KD-Tree for the dual-tree optimization. The returned predictions will be within 5% of the real KDE value for each query point.
// Initialize optional parameters for Kde().
param := mlpack.KdeOptions()
param.Reference = ref_data
param.Query = qu_data
param.Bandwidth = 0.2
param.Kernel = "epanechnikov"
param.Tree = "kd-tree"
param.RelError = 0.05
_, out_data := mlpack.Kde(param)
The predicted density estimations will be stored in out_data
.
If no Query
is provided, then KDE will be computed on the Reference
dataset.
It is possible to select either a reference dataset or an input model but not both at the same time. If an input model is selected and parameter values are not set (e.g. Bandwidth
) then default parameter values will be used.
In addition to the last program call, it is also possible to activate Monte Carlo estimations if a Gaussian kernel is used. This can provide faster results, but the KDE will only have a probabilistic guarantee of meeting the desired error bound (instead of an absolute guarantee). The following example will run KDE using Monte Carlo estimation when possible. The results will be within 5% of the real KDE value with 95% probability. The initial sample size for the Monte Carlo estimation will be 200 points, and a node will be a candidate for the estimation only when it contains more than 700 (i.e. 3.5 * 200) points. If a node contains 700 points and 420 (i.e. 0.6 * 700) of them have already been sampled, then the algorithm will recurse instead of continuing to sample.
// Initialize optional parameters for Kde().
param := mlpack.KdeOptions()
param.Reference = ref_data
param.Query = qu_data
param.Bandwidth = 0.2
param.Kernel = "gaussian"
param.Tree = "kd-tree"
param.RelError = 0.05
param.MonteCarlo = true
param.McProbability = 0.95
param.InitialSampleSize = 200
param.McEntryCoef = 3.5
param.McBreakCoef = 0.6
_, out_data := mlpack.Kde(param)
name | type | description | default |
---|---|---|---|
abs_error |
numeric |
Absolute error tolerance for the prediction. | 0 |
algorithm |
character |
Algorithm to use for the prediction ('dual-tree', 'single-tree'). | "dual-tree" |
bandwidth |
numeric |
Bandwidth of the kernel. | 1 |
initial_sample_size |
integer |
Initial sample size for Monte Carlo estimations. | 100 |
input_model |
KDEModel |
Contains pre-trained KDE model. | NA |
kernel |
character |
Kernel to use for the prediction ('gaussian', 'epanechnikov', 'laplacian', 'spherical', 'triangular'). | "gaussian" |
mc_break_coef |
numeric |
Controls the fraction of a node's descendant points that serves as the sample size limit before the algorithm recurses. | 0.4 |
mc_entry_coef |
numeric |
Controls how much larger the number of a node's descendants must be than the initial sample size in order for the node to be a candidate for Monte Carlo estimations. | 3 |
mc_probability |
numeric |
Probability of the estimation being bounded by relative error when using Monte Carlo estimations. | 0.95 |
monte_carlo |
logical |
Whether to use Monte Carlo estimations when possible. | FALSE |
query |
numeric matrix |
Query dataset to perform KDE on. | matrix(numeric(), 0, 0) |
reference |
numeric matrix |
Input reference dataset used for KDE. | matrix(numeric(), 0, 0) |
rel_error |
numeric |
Relative error tolerance for the prediction. | 0.05 |
tree |
character |
Tree to use for the prediction ('kd-tree', 'ball-tree', 'cover-tree', 'octree', 'r-tree'). | "kd-tree" |
verbose |
logical |
Display informational messages and the full list of parameters and timers at the end of execution. | FALSE |
Results are returned in an R list. The keys of the list are the names of the output parameters.
name | type | description |
---|---|---|
output_model |
KDEModel |
If specified, the KDE model will be saved here. |
predictions |
numeric vector |
Vector to store density predictions. |
{: #r_kde_detailed-documentation }
This program performs Kernel Density Estimation (KDE), a non-parametric way of estimating a probability density function. For each query point, the program will estimate its probability density by applying a kernel function to each reference point. The computational complexity of this is O(N^2) where there are N query points and N reference points, but this implementation will typically see better performance, as it uses an approximate dual-tree or single-tree algorithm for acceleration.
Dual or single tree optimization avoids many barely relevant calculations (as kernel function values decrease with distance), so it is an approximate computation. You can specify the maximum relative error tolerance for each query value with rel_error
as well as the maximum absolute error tolerance with the parameter abs_error
. This program runs using a Euclidean metric. The kernel function can be selected using the kernel
option. You can also choose which type of tree to use for the dual-tree algorithm with tree
. It is also possible to select whether to use the dual-tree or the single-tree algorithm using the algorithm
option.
Monte Carlo estimations can be used to accelerate the KDE estimate when the Gaussian kernel is used. This provides a probabilistic guarantee on the error of the resulting KDE instead of an absolute guarantee. To enable Monte Carlo estimations, the monte_carlo
flag can be used, and the success probability can be set with the mc_probability
option. It is possible to set the initial sample size for the Monte Carlo estimation using initial_sample_size
. This implementation will only consider a node as a candidate for Monte Carlo estimation if its number of descendant points is bigger than the initial sample size. This can be controlled using a coefficient that will multiply the initial sample size, which can be set using mc_entry_coef
. To avoid using the same amount of computation an exact approach would take, this program recurses into the tree whenever a given fraction of the node's descendant points has already been computed. This fraction is set using mc_break_coef
.
For example, the following will run KDE using the data in "ref_data"
for training and the data in "qu_data"
as query data. It will apply an Epanechnikov kernel with a 0.2 bandwidth to each reference point and use a KD-Tree for the dual-tree optimization. The returned predictions will be within 5% of the real KDE value for each query point.
R> output <- kde(reference=ref_data, query=qu_data, bandwidth=0.2,
kernel="epanechnikov", tree="kd-tree", rel_error=0.05)
R> out_data <- output$predictions
The predicted density estimations will be stored in "out_data"
.
If no query
is provided, then KDE will be computed on the reference
dataset.
It is possible to select either a reference dataset or an input model but not both at the same time. If an input model is selected and parameter values are not set (e.g. bandwidth
) then default parameter values will be used.
In addition to the last program call, it is also possible to activate Monte Carlo estimations if a Gaussian kernel is used. This can provide faster results, but the KDE will only have a probabilistic guarantee of meeting the desired error bound (instead of an absolute guarantee). The following example will run KDE using Monte Carlo estimation when possible. The results will be within 5% of the real KDE value with 95% probability. The initial sample size for the Monte Carlo estimation will be 200 points, and a node will be a candidate for the estimation only when it contains more than 700 (i.e. 3.5 * 200) points. If a node contains 700 points and 420 (i.e. 0.6 * 700) of them have already been sampled, then the algorithm will recurse instead of continuing to sample.
R> output <- kde(reference=ref_data, query=qu_data, bandwidth=0.2,
kernel="gaussian", tree="kd-tree", rel_error=0.05, monte_carlo=,
mc_probability=0.95, initial_sample_size=200, mc_entry_coef=3.5,
mc_break_coef=0.6)
R> out_data <- output$predictions
// Initialize optional parameters for KernelPca().
param := mlpack.KernelPcaOptions()
param.Bandwidth = 1
param.Center = false
param.Degree = 1
param.KernelScale = 1
param.NewDimensionality = 0
param.NystroemMethod = false
param.Offset = 0
param.Sampling = "kmeans"

output := mlpack.KernelPca(input, kernel, param)
R> library(mlpack)
R> d <- kernel_pca(bandwidth=1, center=FALSE, degree=1,
input=matrix(numeric(), 0, 0), kernel="", kernel_scale=1,
new_dimensionality=0, nystroem_method=FALSE, offset=0,
sampling="kmeans", verbose=FALSE)
R> output <- d$output
An implementation of Kernel Principal Components Analysis (KPCA). This can be used to perform nonlinear dimensionality reduction or preprocessing on a given dataset. Detailed documentation{: .language-detail-link #cli }Detailed documentation{: .language-detail-link #python }Detailed documentation{: .language-detail-link #julia }Detailed documentation{: .language-detail-link #go }Detailed documentation{: .language-detail-link #r }.
name | type | description | default |
---|---|---|---|
--bandwidth (-b) |
double |
Bandwidth, for 'gaussian' and 'laplacian' kernels. | 1 |
--center (-c) |
flag |
If set, the transformed data will be centered about the origin. | |
--degree (-D) |
double |
Degree of polynomial, for 'polynomial' kernel. | 1 |
--help (-h) |
flag |
Default help info. Only exists in CLI binding. | |
--info |
string |
Print help on a specific option. Only exists in CLI binding. | '' |
--input_file (-i) |
2-d matrix file |
Input dataset to perform KPCA on. | **--** |
--kernel (-k) |
string |
The kernel to use; see the above documentation for the list of usable kernels. | **--** |
--kernel_scale (-S) |
double |
Scale, for 'hyptan' kernel. | 1 |
--new_dimensionality (-d) |
int |
If not 0, reduce the dimensionality of the output dataset by ignoring the dimensions with the smallest eigenvalues. | 0 |
--nystroem_method (-n) |
flag |
If set, the Nystroem method will be used. | |
--offset (-O) |
double |
Offset, for 'hyptan' and 'polynomial' kernels. | 0 |
--sampling (-s) |
string |
Sampling scheme to use for the Nystroem method: 'kmeans', 'random', 'ordered' | 'kmeans' |
--verbose (-v) |
flag |
Display informational messages and the full list of parameters and timers at the end of execution. | |
--version (-V) |
flag |
Display the version of mlpack. Only exists in CLI binding. |
name | type | description |
---|---|---|
--output_file (-o) |
2-d matrix file |
Matrix to save modified dataset to. |
{: #cli_kernel_pca_detailed-documentation }
This program performs Kernel Principal Components Analysis (KPCA) on the specified dataset with the specified kernel. This will transform the data onto the kernel principal components, and optionally reduce the dimensionality by ignoring the kernel principal components with the smallest eigenvalues.
For the case where a linear kernel is used, this reduces to regular PCA.
The kernels that are supported are listed below:
- 'linear': the standard linear dot product (same as normal PCA): K(x, y) = x^T y
- 'gaussian': a Gaussian kernel; requires bandwidth: K(x, y) = exp(-(|| x - y || ^ 2) / (2 * (bandwidth ^ 2)))
- 'polynomial': polynomial kernel; requires offset and degree: K(x, y) = (x^T y + offset) ^ degree
- 'hyptan': hyperbolic tangent kernel; requires scale and offset: K(x, y) = tanh(scale * (x^T y) + offset)
- 'laplacian': Laplacian kernel; requires bandwidth: K(x, y) = exp(-(|| x - y ||) / bandwidth)
- 'epanechnikov': Epanechnikov kernel; requires bandwidth: K(x, y) = max(0, 1 - || x - y ||^2 / bandwidth^2)
- 'cosine': cosine distance: K(x, y) = 1 - (x^T y) / (|| x || * || y ||)
The parameters for each of the kernels should be specified with the options --bandwidth (-b)
, --kernel_scale (-S)
, --offset (-O)
, or --degree (-D)
(or a combination of those parameters).
Optionally, the Nystroem method ("Using the Nystroem method to speed up kernel machines", 2001) can be used to calculate the kernel matrix by specifying the --nystroem_method (-n)
parameter. This approach works by using a subset of the data as a basis to reconstruct the kernel matrix; to specify the sampling scheme, the --sampling (-s)
parameter is used. The sampling scheme for the Nystroem method can be chosen from the following list: 'kmeans', 'random', 'ordered'.
For example, the following command will perform KPCA on the dataset 'input.csv'
using the Gaussian kernel, saving the transformed data to 'transformed.csv'
:
$ mlpack_kernel_pca --input_file input.csv --kernel gaussian --output_file
transformed.csv
name | type | description | default |
---|---|---|---|
bandwidth |
float |
Bandwidth, for 'gaussian' and 'laplacian' kernels. | 1 |
center |
bool |
If set, the transformed data will be centered about the origin. | False |
copy_all_inputs |
bool |
If specified, all input parameters will be deep copied before the method is run. This is useful for debugging problems where the input parameters are being modified by the algorithm, but can slow down the code. Only exists in Python binding. | False |
degree |
float |
Degree of polynomial, for 'polynomial' kernel. | 1 |
input |
matrix |
Input dataset to perform KPCA on. | **--** |
kernel |
str |
The kernel to use; see the above documentation for the list of usable kernels. | **--** |
kernel_scale |
float |
Scale, for 'hyptan' kernel. | 1 |
new_dimensionality |
int |
If not 0, reduce the dimensionality of the output dataset by ignoring the dimensions with the smallest eigenvalues. | 0 |
nystroem_method |
bool |
If set, the Nystroem method will be used. | False |
offset |
float |
Offset, for 'hyptan' and 'polynomial' kernels. | 0 |
sampling |
str |
Sampling scheme to use for the Nystroem method: 'kmeans', 'random', 'ordered' | 'kmeans' |
verbose |
bool |
Display informational messages and the full list of parameters and timers at the end of execution. | False |
Results are returned in a Python dictionary. The keys of the dictionary are the names of the output parameters.
name | type | description |
---|---|---|
output |
matrix |
Matrix to save modified dataset to. |
{: #python_kernel_pca_detailed-documentation }
This program performs Kernel Principal Components Analysis (KPCA) on the specified dataset with the specified kernel. This will transform the data onto the kernel principal components, and optionally reduce the dimensionality by ignoring the kernel principal components with the smallest eigenvalues.
For the case where a linear kernel is used, this reduces to regular PCA.
The kernels that are supported are listed below:
- 'linear': the standard linear dot product (same as normal PCA): K(x, y) = x^T y
- 'gaussian': a Gaussian kernel; requires bandwidth: K(x, y) = exp(-(|| x - y || ^ 2) / (2 * (bandwidth ^ 2)))
- 'polynomial': polynomial kernel; requires offset and degree: K(x, y) = (x^T y + offset) ^ degree
- 'hyptan': hyperbolic tangent kernel; requires scale and offset: K(x, y) = tanh(scale * (x^T y) + offset)
- 'laplacian': Laplacian kernel; requires bandwidth: K(x, y) = exp(-(|| x - y ||) / bandwidth)
- 'epanechnikov': Epanechnikov kernel; requires bandwidth: K(x, y) = max(0, 1 - || x - y ||^2 / bandwidth^2)
- 'cosine': cosine distance: K(x, y) = 1 - (x^T y) / (|| x || * || y ||)
The parameters for each of the kernels should be specified with the options bandwidth
, kernel_scale
, offset
, or degree
(or a combination of those parameters).
Optionally, the Nystroem method ("Using the Nystroem method to speed up kernel machines", 2001) can be used to calculate the kernel matrix by specifying the nystroem_method
parameter. This approach works by using a subset of the data as basis to reconstruct the kernel matrix; to specify the sampling scheme, the sampling
parameter is used. The sampling scheme for the Nystroem method can be chosen from the following list: 'kmeans', 'random', 'ordered'.
For example, the following command will perform KPCA on the dataset 'input'
using the Gaussian kernel, and saving the transformed data to 'transformed'
:
>>> output = kernel_pca(input=input, kernel='gaussian')
>>> transformed = output['output']
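The Nystroem approximation can be enabled in the same way using the documented nystroem_method and sampling parameters; a minimal sketch (variable names are illustrative):
>>> output = kernel_pca(input=input, kernel='gaussian', nystroem_method=True,
  sampling='random')
>>> transformed = output['output']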
name | type | description | default |
---|---|---|---|
bandwidth | Float64 | Bandwidth, for 'gaussian' and 'laplacian' kernels. | 1 |
center | Bool | If set, the transformed data will be centered about the origin. | false |
degree | Float64 | Degree of polynomial, for 'polynomial' kernel. | 1 |
input | Float64 matrix-like | Input dataset to perform KPCA on. | **--** |
kernel | String | The kernel to use; see the above documentation for the list of usable kernels. | **--** |
kernel_scale | Float64 | Scale, for 'hyptan' kernel. | 1 |
new_dimensionality | Int | If not 0, reduce the dimensionality of the output dataset by ignoring the dimensions with the smallest eigenvalues. | 0 |
nystroem_method | Bool | If set, the Nystroem method will be used. | false |
offset | Float64 | Offset, for 'hyptan' and 'polynomial' kernels. | 0 |
sampling | String | Sampling scheme to use for the Nystroem method: 'kmeans', 'random', 'ordered'. | "kmeans" |
verbose | Bool | Display informational messages and the full list of parameters and timers at the end of execution. | false |
Results are returned as a tuple, and can be unpacked directly into return values or stored directly as a tuple; undesired results can be ignored with the _ keyword.
name | type | description |
---|---|---|
output | Float64 matrix-like | Matrix to save modified dataset to. |
{: #julia_kernel_pca_detailed-documentation }
This program performs Kernel Principal Components Analysis (KPCA) on the specified dataset with the specified kernel. This will transform the data onto the kernel principal components, and optionally reduce the dimensionality by ignoring the kernel principal components with the smallest eigenvalues.
For the case where a linear kernel is used, this reduces to regular PCA.
The kernels that are supported are listed below:
- 'linear': the standard linear dot product (same as normal PCA): K(x, y) = x^T y
- 'gaussian': a Gaussian kernel; requires bandwidth: K(x, y) = exp(-(|| x - y || ^ 2) / (2 * (bandwidth ^ 2)))
- 'polynomial': polynomial kernel; requires offset and degree: K(x, y) = (x^T y + offset) ^ degree
- 'hyptan': hyperbolic tangent kernel; requires scale and offset: K(x, y) = tanh(scale * (x^T y) + offset)
- 'laplacian': Laplacian kernel; requires bandwidth: K(x, y) = exp(-(|| x - y ||) / bandwidth)
- 'epanechnikov': Epanechnikov kernel; requires bandwidth: K(x, y) = max(0, 1 - || x - y ||^2 / bandwidth^2)
- 'cosine': cosine distance: K(x, y) = 1 - (x^T y) / (|| x || * || y ||)
The parameters for each of the kernels should be specified with the options bandwidth, kernel_scale, offset, or degree (or a combination of those parameters).
Optionally, the Nystroem method ("Using the Nystroem method to speed up kernel machines", 2001) can be used to calculate the kernel matrix by specifying the nystroem_method parameter. This approach works by using a subset of the data as a basis to reconstruct the kernel matrix; to specify the sampling scheme, the sampling parameter is used. The sampling scheme for the Nystroem method can be chosen from the following list: 'kmeans', 'random', 'ordered'.
For example, the following command will perform KPCA on the dataset input using the Gaussian kernel, and save the transformed data to transformed:
julia> using CSV
julia> input = CSV.read("input.csv")
julia> transformed = kernel_pca(input, "gaussian")
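The Nystroem approximation can be enabled via the documented keyword options; a minimal sketch (variable names are illustrative):
julia> transformed = kernel_pca(input, "gaussian"; nystroem_method=true,
       sampling="random")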
There are two types of input options: required options, which are passed directly to the function call, and optional options, which are passed via an initialized struct, which allows keyword access to each of the options.
name | type | description | default |
---|---|---|---|
Bandwidth | float64 | Bandwidth, for 'gaussian' and 'laplacian' kernels. | 1 |
Center | bool | If set, the transformed data will be centered about the origin. | false |
Degree | float64 | Degree of polynomial, for 'polynomial' kernel. | 1 |
input | *mat.Dense | Input dataset to perform KPCA on. | **--** |
kernel | string | The kernel to use; see the above documentation for the list of usable kernels. | **--** |
KernelScale | float64 | Scale, for 'hyptan' kernel. | 1 |
NewDimensionality | int | If not 0, reduce the dimensionality of the output dataset by ignoring the dimensions with the smallest eigenvalues. | 0 |
NystroemMethod | bool | If set, the Nystroem method will be used. | false |
Offset | float64 | Offset, for 'hyptan' and 'polynomial' kernels. | 0 |
Sampling | string | Sampling scheme to use for the Nystroem method: 'kmeans', 'random', 'ordered'. | "kmeans" |
Verbose | bool | Display informational messages and the full list of parameters and timers at the end of execution. | false |
Output options are returned via Go's support for multiple return values, in the order listed below.
name | type | description |
---|---|---|
output | *mat.Dense | Matrix to save modified dataset to. |
{: #go_kernel_pca_detailed-documentation }
This program performs Kernel Principal Components Analysis (KPCA) on the specified dataset with the specified kernel. This will transform the data onto the kernel principal components, and optionally reduce the dimensionality by ignoring the kernel principal components with the smallest eigenvalues.
For the case where a linear kernel is used, this reduces to regular PCA.
The kernels that are supported are listed below:
- 'linear': the standard linear dot product (same as normal PCA): K(x, y) = x^T y
- 'gaussian': a Gaussian kernel; requires bandwidth: K(x, y) = exp(-(|| x - y || ^ 2) / (2 * (bandwidth ^ 2)))
- 'polynomial': polynomial kernel; requires offset and degree: K(x, y) = (x^T y + offset) ^ degree
- 'hyptan': hyperbolic tangent kernel; requires scale and offset: K(x, y) = tanh(scale * (x^T y) + offset)
- 'laplacian': Laplacian kernel; requires bandwidth: K(x, y) = exp(-(|| x - y ||) / bandwidth)
- 'epanechnikov': Epanechnikov kernel; requires bandwidth: K(x, y) = max(0, 1 - || x - y ||^2 / bandwidth^2)
- 'cosine': cosine distance: K(x, y) = 1 - (x^T y) / (|| x || * || y ||)
The parameters for each of the kernels should be specified with the options Bandwidth, KernelScale, Offset, or Degree (or a combination of those parameters).
Optionally, the Nystroem method ("Using the Nystroem method to speed up kernel machines", 2001) can be used to calculate the kernel matrix by specifying the NystroemMethod parameter. This approach works by using a subset of the data as a basis to reconstruct the kernel matrix; to specify the sampling scheme, the Sampling parameter is used. The sampling scheme for the Nystroem method can be chosen from the following list: 'kmeans', 'random', 'ordered'.
For example, the following command will perform KPCA on the dataset input using the Gaussian kernel, and save the transformed data to transformed:
// Initialize optional parameters for KernelPca().
param := mlpack.KernelPcaOptions()
transformed := mlpack.KernelPca(input, "gaussian", param)
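The Nystroem approximation can be enabled through the documented option struct fields; a minimal sketch (variable names are illustrative):
// Enable the Nystroem approximation with random sampling.
param := mlpack.KernelPcaOptions()
param.NystroemMethod = true
param.Sampling = "random"
transformed := mlpack.KernelPca(input, "gaussian", param)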
name | type | description | default |
---|---|---|---|
bandwidth | numeric | Bandwidth, for 'gaussian' and 'laplacian' kernels. | 1 |
center | logical | If set, the transformed data will be centered about the origin. | FALSE |
degree | numeric | Degree of polynomial, for 'polynomial' kernel. | 1 |
input | numeric matrix | Input dataset to perform KPCA on. | **--** |
kernel | character | The kernel to use; see the above documentation for the list of usable kernels. | **--** |
kernel_scale | numeric | Scale, for 'hyptan' kernel. | 1 |
new_dimensionality | integer | If not 0, reduce the dimensionality of the output dataset by ignoring the dimensions with the smallest eigenvalues. | 0 |
nystroem_method | logical | If set, the Nystroem method will be used. | FALSE |
offset | numeric | Offset, for 'hyptan' and 'polynomial' kernels. | 0 |
sampling | character | Sampling scheme to use for the Nystroem method: 'kmeans', 'random', 'ordered'. | "kmeans" |
verbose | logical | Display informational messages and the full list of parameters and timers at the end of execution. | FALSE |
Results are returned in an R list. The keys of the list are the names of the output parameters.
name | type | description |
---|---|---|
output | numeric matrix | Matrix to save modified dataset to. |
{: #r_kernel_pca_detailed-documentation }
This program performs Kernel Principal Components Analysis (KPCA) on the specified dataset with the specified kernel. This will transform the data onto the kernel principal components, and optionally reduce the dimensionality by ignoring the kernel principal components with the smallest eigenvalues.
For the case where a linear kernel is used, this reduces to regular PCA.
The kernels that are supported are listed below:
- 'linear': the standard linear dot product (same as normal PCA): K(x, y) = x^T y
- 'gaussian': a Gaussian kernel; requires bandwidth: K(x, y) = exp(-(|| x - y || ^ 2) / (2 * (bandwidth ^ 2)))
- 'polynomial': polynomial kernel; requires offset and degree: K(x, y) = (x^T y + offset) ^ degree
- 'hyptan': hyperbolic tangent kernel; requires scale and offset: K(x, y) = tanh(scale * (x^T y) + offset)
- 'laplacian': Laplacian kernel; requires bandwidth: K(x, y) = exp(-(|| x - y ||) / bandwidth)
- 'epanechnikov': Epanechnikov kernel; requires bandwidth: K(x, y) = max(0, 1 - || x - y ||^2 / bandwidth^2)
- 'cosine': cosine distance: K(x, y) = 1 - (x^T y) / (|| x || * || y ||)
The parameters for each of the kernels should be specified with the options bandwidth, kernel_scale, offset, or degree (or a combination of those parameters).
Optionally, the Nystroem method ("Using the Nystroem method to speed up kernel machines", 2001) can be used to calculate the kernel matrix by specifying the nystroem_method parameter. This approach works by using a subset of the data as a basis to reconstruct the kernel matrix; to specify the sampling scheme, the sampling parameter is used. The sampling scheme for the Nystroem method can be chosen from the following list: 'kmeans', 'random', 'ordered'.
For example, the following command will perform KPCA on the dataset "input" using the Gaussian kernel, and save the transformed data to "transformed":
R> output <- kernel_pca(input=input, kernel="gaussian")
R> transformed <- output$output
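The Nystroem approximation can be enabled through the documented options as well; a minimal sketch (variable names are illustrative):
R> output <- kernel_pca(input=input, kernel="gaussian", nystroem_method=TRUE,
     sampling="random")
R> transformed <- output$output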
```go
// Initialize optional parameters for Kmeans().
param := mlpack.KmeansOptions()
param.Algorithm = "naive"
param.AllowEmptyClusters = false
param.InPlace = false
param.InitialCentroids = mat.NewDense(1, 1, nil)
param.KillEmptyClusters = false
param.LabelsOnly = false
param.MaxIterations = 1000
param.Percentage = 0.02
param.RefinedStart = false
param.Samplings = 100
param.Seed = 0

centroid, output := mlpack.Kmeans(clusters, input, param)
```
</div>
<div class="language-decl" id="r" markdown="1">
```R
R> library(mlpack)
R> d <- kmeans(algorithm="naive", allow_empty_clusters=FALSE,
        clusters=0, in_place=FALSE, initial_centroids=matrix(numeric(), 0, 0),
        input=matrix(numeric(), 0, 0), kill_empty_clusters=FALSE,
        labels_only=FALSE, max_iterations=1000, percentage=0.02,
        refined_start=FALSE, samplings=100, seed=0, verbose=FALSE)
R> centroid <- d$centroid
R> output <- d$output
```
</div>
An implementation of several strategies for efficient k-means clustering. Given a dataset and a value of k, this computes and returns a k-means clustering on that data. Detailed documentation{: .language-detail-link #cli }Detailed documentation{: .language-detail-link #python }Detailed documentation{: .language-detail-link #julia }Detailed documentation{: .language-detail-link #go }Detailed documentation{: .language-detail-link #r }.
name | type | description | default |
---|---|---|---|
--algorithm (-a) | string | Algorithm to use for the Lloyd iteration ('naive', 'pelleg-moore', 'elkan', 'hamerly', 'dualtree', or 'dualtree-covertree'). | 'naive' |
--allow_empty_clusters (-e) | flag | Allow empty clusters to persist. | |
--clusters (-c) | int | Number of clusters to find (0 autodetects from initial centroids). | **--** |
--help (-h) | flag | Default help info. Only exists in CLI binding. | |
--in_place (-P) | flag | If specified, a column containing the learned cluster assignments will be added to the input dataset file. In this case, --output_file is overridden. (Do not use in Python.) | |
--info | string | Print help on a specific option. Only exists in CLI binding. | '' |
--initial_centroids_file (-I) | 2-d matrix file | Start with the specified initial centroids. | '' |
--input_file (-i) | 2-d matrix file | Input dataset to perform clustering on. | **--** |
--kill_empty_clusters (-E) | flag | Remove empty clusters when they occur. | |
--labels_only (-l) | flag | Only output labels into output file. | |
--max_iterations (-m) | int | Maximum number of iterations before k-means terminates. | 1000 |
--percentage (-p) | double | Percentage of dataset to use for each refined start sampling (use when --refined_start is specified). | 0.02 |
--refined_start (-r) | flag | Use the refined initial point strategy by Bradley and Fayyad to choose initial points. | |
--samplings (-S) | int | Number of samplings to perform for refined start (use when --refined_start is specified). | 100 |
--seed (-s) | int | Random seed. If 0, 'std::time(NULL)' is used. | 0 |
--verbose (-v) | flag | Display informational messages and the full list of parameters and timers at the end of execution. | |
--version (-V) | flag | Display the version of mlpack. Only exists in CLI binding. | |
name | type | description |
---|---|---|
--centroid_file (-C) | 2-d matrix file | If specified, the centroids of each cluster will be written to the given file. |
--output_file (-o) | 2-d matrix file | Matrix to store output labels or labeled data to. |
{: #cli_kmeans_detailed-documentation }
This program performs K-Means clustering on the given dataset. It can return the learned cluster assignments, and the centroids of the clusters. Empty clusters are not allowed by default; when a cluster becomes empty, the point furthest from the centroid of the cluster with maximum variance is taken to fill that cluster.
Optionally, the Bradley and Fayyad approach ("Refining initial points for k-means clustering", 1998) can be used to select initial points by specifying the --refined_start (-r) parameter. This approach works by taking random samplings of the dataset; to specify the number of samplings, the --samplings (-S) parameter is used, and to specify the percentage of the dataset to be used in each sample, the --percentage (-p) parameter is used (it should be a value between 0.0 and 1.0).
There are several options available for the algorithm used for each Lloyd iteration, specified with the --algorithm (-a) option. The standard O(kN) approach can be used ('naive'). Other options include the Pelleg-Moore tree-based algorithm ('pelleg-moore'), Elkan's triangle-inequality based algorithm ('elkan'), Hamerly's modification to Elkan's algorithm ('hamerly'), the dual-tree k-means algorithm ('dualtree'), and the dual-tree k-means algorithm using the cover tree ('dualtree-covertree').
The behavior for when an empty cluster is encountered can be modified with the --allow_empty_clusters (-e) option. When this option is specified and there is a cluster owning no points at the end of an iteration, that cluster's centroid will simply remain in its position from the previous iteration. If the --kill_empty_clusters (-E) option is specified, then when a cluster owns no points at the end of an iteration, the cluster centroid is simply filled with DBL_MAX, killing it and effectively reducing k for the rest of the computation. Note that the default option when neither empty cluster option is specified can be time-consuming to calculate; therefore, specifying either of these parameters will often accelerate runtime.
Initial clustering assignments may be specified using the --initial_centroids_file (-I) parameter, and the maximum number of iterations may be specified with the --max_iterations (-m) parameter.
As an example, to use Hamerly's algorithm to perform k-means clustering with k=10 on the dataset 'data.csv', saving the centroids to 'centroids.csv' and the assignments for each point to 'assignments.csv', the following command could be used:
$ mlpack_kmeans --input_file data.csv --clusters 10 --algorithm hamerly
--output_file assignments.csv --centroid_file centroids.csv
To run k-means on that same dataset with initial centroids specified in 'initial.csv' with a maximum of 500 iterations, storing the output centroids in 'final.csv', the following command may be used:
$ mlpack_kmeans --input_file data.csv --initial_centroids_file initial.csv
--clusters 10 --max_iterations 500 --centroid_file final.csv
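The refined start strategy described above can be combined with these commands through the documented --refined_start, --samplings, and --percentage options; a minimal sketch (the values here are illustrative):
$ mlpack_kmeans --input_file data.csv --clusters 10 --refined_start
--samplings 100 --percentage 0.1 --output_file assignments.csv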
- K-Means tutorial
- mlpack_dbscan
- Using the triangle inequality to accelerate k-means (pdf)
- Making k-means even faster (pdf)
- Accelerating exact k-means algorithms with geometric reasoning (pdf)
- A dual-tree algorithm for fast k-means clustering with large k (pdf)
- mlpack::kmeans::KMeans class documentation
name | type | description | default |
---|---|---|---|
algorithm | str | Algorithm to use for the Lloyd iteration ('naive', 'pelleg-moore', 'elkan', 'hamerly', 'dualtree', or 'dualtree-covertree'). | 'naive' |
allow_empty_clusters | bool | Allow empty clusters to persist. | False |
clusters | int | Number of clusters to find (0 autodetects from initial centroids). | **--** |
copy_all_inputs | bool | If specified, all input parameters will be deep copied before the method is run. This is useful for debugging problems where the input parameters are being modified by the algorithm, but can slow down the code. Only exists in Python binding. | False |
in_place | bool | If specified, a column containing the learned cluster assignments will be added to the input dataset file. In this case, --output_file is overridden. (Do not use in Python.) | False |
initial_centroids | matrix | Start with the specified initial centroids. | np.empty([0, 0]) |
input | matrix | Input dataset to perform clustering on. | **--** |
kill_empty_clusters | bool | Remove empty clusters when they occur. | False |
labels_only | bool | Only output labels into output file. | False |
max_iterations | int | Maximum number of iterations before k-means terminates. | 1000 |
percentage | float | Percentage of dataset to use for each refined start sampling (use when --refined_start is specified). | 0.02 |
refined_start | bool | Use the refined initial point strategy by Bradley and Fayyad to choose initial points. | False |
samplings | int | Number of samplings to perform for refined start (use when --refined_start is specified). | 100 |
seed | int | Random seed. If 0, 'std::time(NULL)' is used. | 0 |
verbose | bool | Display informational messages and the full list of parameters and timers at the end of execution. | False |
Results are returned in a Python dictionary. The keys of the dictionary are the names of the output parameters.
name | type | description |
---|---|---|
centroid | matrix | If specified, the centroids of each cluster will be written to the given file. |
output | matrix | Matrix to store output labels or labeled data to. |
{: #python_kmeans_detailed-documentation }
This program performs K-Means clustering on the given dataset. It can return the learned cluster assignments, and the centroids of the clusters. Empty clusters are not allowed by default; when a cluster becomes empty, the point furthest from the centroid of the cluster with maximum variance is taken to fill that cluster.
Optionally, the Bradley and Fayyad approach ("Refining initial points for k-means clustering", 1998) can be used to select initial points by specifying the refined_start parameter. This approach works by taking random samplings of the dataset; to specify the number of samplings, the samplings parameter is used, and to specify the percentage of the dataset to be used in each sample, the percentage parameter is used (it should be a value between 0.0 and 1.0).
There are several options available for the algorithm used for each Lloyd iteration, specified with the algorithm option. The standard O(kN) approach can be used ('naive'). Other options include the Pelleg-Moore tree-based algorithm ('pelleg-moore'), Elkan's triangle-inequality based algorithm ('elkan'), Hamerly's modification to Elkan's algorithm ('hamerly'), the dual-tree k-means algorithm ('dualtree'), and the dual-tree k-means algorithm using the cover tree ('dualtree-covertree').
The behavior for when an empty cluster is encountered can be modified with the allow_empty_clusters option. When this option is specified and there is a cluster owning no points at the end of an iteration, that cluster's centroid will simply remain in its position from the previous iteration. If the kill_empty_clusters option is specified, then when a cluster owns no points at the end of an iteration, the cluster centroid is simply filled with DBL_MAX, killing it and effectively reducing k for the rest of the computation. Note that the default option when neither empty cluster option is specified can be time-consuming to calculate; therefore, specifying either of these parameters will often accelerate runtime.
Initial clustering assignments may be specified using the initial_centroids parameter, and the maximum number of iterations may be specified with the max_iterations parameter.
As an example, to use Hamerly's algorithm to perform k-means clustering with k=10 on the dataset 'data', saving the centroids to 'centroids' and the assignments for each point to 'assignments', the following command could be used:
>>> output = kmeans(input=data, clusters=10, algorithm='hamerly')
>>> assignments = output['output']
>>> centroids = output['centroid']
To run k-means on that same dataset with initial centroids specified in 'initial' with a maximum of 500 iterations, storing the output centroids in 'final', the following command may be used:
>>> output = kmeans(input=data, initial_centroids=initial, clusters=10,
max_iterations=500)
>>> final = output['centroid']
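The refined start strategy can be requested with the documented refined_start, samplings, and percentage parameters; a minimal sketch (the values here are illustrative):
>>> output = kmeans(input=data, clusters=10, refined_start=True,
  samplings=100, percentage=0.1)
>>> assignments = output['output']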
name | type | description | default |
---|---|---|---|
algorithm | String | Algorithm to use for the Lloyd iteration ('naive', 'pelleg-moore', 'elkan', 'hamerly', 'dualtree', or 'dualtree-covertree'). | "naive" |
allow_empty_clusters | Bool | Allow empty clusters to persist. | false |
clusters | Int | Number of clusters to find (0 autodetects from initial centroids). | **--** |
in_place | Bool | If specified, a column containing the learned cluster assignments will be added to the input dataset file. In this case, --output_file is overridden. (Do not use in Python.) | false |
initial_centroids | Float64 matrix-like | Start with the specified initial centroids. | zeros(0, 0) |
input | Float64 matrix-like | Input dataset to perform clustering on. | **--** |
kill_empty_clusters | Bool | Remove empty clusters when they occur. | false |
labels_only | Bool | Only output labels into output file. | false |
max_iterations | Int | Maximum number of iterations before k-means terminates. | 1000 |
percentage | Float64 | Percentage of dataset to use for each refined start sampling (use when --refined_start is specified). | 0.02 |
refined_start | Bool | Use the refined initial point strategy by Bradley and Fayyad to choose initial points. | false |
samplings | Int | Number of samplings to perform for refined start (use when --refined_start is specified). | 100 |
seed | Int | Random seed. If 0, 'std::time(NULL)' is used. | 0 |
verbose | Bool | Display informational messages and the full list of parameters and timers at the end of execution. | false |
Results are returned as a tuple, and can be unpacked directly into return values or stored directly as a tuple; undesired results can be ignored with the _ keyword.
name | type | description |
---|---|---|
centroid | Float64 matrix-like | If specified, the centroids of each cluster will be written to the given file. |
output | Float64 matrix-like | Matrix to store output labels or labeled data to. |
{: #julia_kmeans_detailed-documentation }
This program performs K-Means clustering on the given dataset. It can return the learned cluster assignments, and the centroids of the clusters. Empty clusters are not allowed by default; when a cluster becomes empty, the point furthest from the centroid of the cluster with maximum variance is taken to fill that cluster.
Optionally, the Bradley and Fayyad approach ("Refining initial points for k-means clustering", 1998) can be used to select initial points by specifying the refined_start parameter. This approach works by taking random samplings of the dataset; to specify the number of samplings, the samplings parameter is used, and to specify the percentage of the dataset to be used in each sample, the percentage parameter is used (it should be a value between 0.0 and 1.0).
There are several options available for the algorithm used for each Lloyd iteration, specified with the algorithm option. The standard O(kN) approach can be used ('naive'). Other options include the Pelleg-Moore tree-based algorithm ('pelleg-moore'), Elkan's triangle-inequality based algorithm ('elkan'), Hamerly's modification to Elkan's algorithm ('hamerly'), the dual-tree k-means algorithm ('dualtree'), and the dual-tree k-means algorithm using the cover tree ('dualtree-covertree').
The behavior for when an empty cluster is encountered can be modified with the allow_empty_clusters option. When this option is specified and there is a cluster owning no points at the end of an iteration, that cluster's centroid will simply remain in its position from the previous iteration. If the kill_empty_clusters option is specified, then when a cluster owns no points at the end of an iteration, the cluster centroid is simply filled with DBL_MAX, killing it and effectively reducing k for the rest of the computation. Note that the default option when neither empty cluster option is specified can be time-consuming to calculate; therefore, specifying either of these parameters will often accelerate runtime.
Initial clustering assignments may be specified using the initial_centroids parameter, and the maximum number of iterations may be specified with the max_iterations parameter.
As an example, to use Hamerly's algorithm to perform k-means clustering with k=10 on the dataset data, saving the centroids to centroids and the assignments for each point to assignments, the following command could be used:
julia> using CSV
julia> data = CSV.read("data.csv")
julia> centroids, assignments = kmeans(10, data; algorithm="hamerly")
To run k-means on that same dataset with initial centroids specified in initial with a maximum of 500 iterations, storing the output centroids in final, the following command may be used:
julia> using CSV
julia> data = CSV.read("data.csv")
julia> initial = CSV.read("initial.csv")
julia> final, _ = kmeans(10, data; initial_centroids=initial,
max_iterations=500)
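The refined start strategy can be requested with the documented keyword options; a minimal sketch (the values here are illustrative):
julia> centroids, assignments = kmeans(10, data; refined_start=true,
       samplings=100, percentage=0.1)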
There are two types of input options: required options, which are passed directly to the function call, and optional options, which are passed via an initialized struct, which allows keyword access to each of the options.
name | type | description | default |
---|---|---|---|
Algorithm | string | Algorithm to use for the Lloyd iteration ('naive', 'pelleg-moore', 'elkan', 'hamerly', 'dualtree', or 'dualtree-covertree'). | "naive" |
AllowEmptyClusters | bool | Allow empty clusters to persist. | false |
clusters | int | Number of clusters to find (0 autodetects from initial centroids). | **--** |
InPlace | bool | If specified, a column containing the learned cluster assignments will be added to the input dataset file. In this case, --output_file is overridden. (Do not use in Python.) | false |
InitialCentroids | *mat.Dense | Start with the specified initial centroids. | mat.NewDense(1, 1, nil) |
input | *mat.Dense | Input dataset to perform clustering on. | **--** |
KillEmptyClusters | bool | Remove empty clusters when they occur. | false |
LabelsOnly | bool | Only output labels into output file. | false |
MaxIterations | int | Maximum number of iterations before k-means terminates. | 1000 |
Percentage | float64 | Percentage of dataset to use for each refined start sampling (use when --refined_start is specified). | 0.02 |
RefinedStart | bool | Use the refined initial point strategy by Bradley and Fayyad to choose initial points. | false |
Samplings | int | Number of samplings to perform for refined start (use when --refined_start is specified). | 100 |
Seed | int | Random seed. If 0, 'std::time(NULL)' is used. | 0 |
Verbose | bool | Display informational messages and the full list of parameters and timers at the end of execution. | false |
Output options are returned via Go's support for multiple return values, in the order listed below.
name | type | description |
---|---|---|
centroid | *mat.Dense | If specified, the centroids of each cluster will be written to the given file. |
output | *mat.Dense | Matrix to store output labels or labeled data to. |
{: #go_kmeans_detailed-documentation }
This program performs K-Means clustering on the given dataset. It can return the learned cluster assignments, and the centroids of the clusters. Empty clusters are not allowed by default; when a cluster becomes empty, the point furthest from the centroid of the cluster with maximum variance is taken to fill that cluster.
Optionally, the Bradley and Fayyad approach ("Refining initial points for k-means clustering", 1998) can be used to select initial points by specifying the RefinedStart parameter. This approach works by taking random samplings of the dataset; to specify the number of samplings, the Samplings parameter is used, and to specify the percentage of the dataset to be used in each sample, the Percentage parameter is used (it should be a value between 0.0 and 1.0).
There are several options available for the algorithm used for each Lloyd iteration, specified with the Algorithm option. The standard O(kN) approach can be used ('naive'). Other options include the Pelleg-Moore tree-based algorithm ('pelleg-moore'), Elkan's triangle-inequality based algorithm ('elkan'), Hamerly's modification to Elkan's algorithm ('hamerly'), the dual-tree k-means algorithm ('dualtree'), and the dual-tree k-means algorithm using the cover tree ('dualtree-covertree').
The behavior for when an empty cluster is encountered can be modified with the AllowEmptyClusters option. When this option is specified and there is a cluster owning no points at the end of an iteration, that cluster's centroid will simply remain in its position from the previous iteration. If the KillEmptyClusters option is specified, then when a cluster owns no points at the end of an iteration, the cluster centroid is simply filled with DBL_MAX, killing it and effectively reducing k for the rest of the computation. Note that the default option when neither empty cluster option is specified can be time-consuming to calculate; therefore, specifying either of these parameters will often accelerate runtime.
Initial clustering assignments may be specified using the InitialCentroids parameter, and the maximum number of iterations may be specified with the MaxIterations parameter.
As an example, to use Hamerly's algorithm to perform k-means clustering with k=10 on the dataset data, saving the centroids to centroids and the assignments for each point to assignments, the following command could be used:
// Initialize optional parameters for Kmeans().
param := mlpack.KmeansOptions()
param.Algorithm = "hamerly"
centroids, assignments := mlpack.Kmeans(data, 10, param)
To run k-means on that same dataset with initial centroids specified in initial with a maximum of 500 iterations, storing the output centroids in final, the following command may be used:
// Initialize optional parameters for Kmeans().
param := mlpack.KmeansOptions()
param.InitialCentroids = initial
param.MaxIterations = 500
final, _ := mlpack.Kmeans(data, 10, param)
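The refined start strategy can be requested through the documented option struct fields; a minimal sketch (the values here are illustrative):
// Use the Bradley and Fayyad refined start strategy.
param := mlpack.KmeansOptions()
param.RefinedStart = true
param.Samplings = 100
param.Percentage = 0.1
centroids, assignments := mlpack.Kmeans(data, 10, param)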
name | type | description | default |
---|---|---|---|
algorithm | character | Algorithm to use for the Lloyd iteration ('naive', 'pelleg-moore', 'elkan', 'hamerly', 'dualtree', or 'dualtree-covertree'). | "naive" |
allow_empty_clusters | logical | Allow empty clusters to persist. | FALSE |
clusters | integer | Number of clusters to find (0 autodetects from initial centroids). | **--** |
in_place | logical | If specified, a column containing the learned cluster assignments will be added to the input dataset file. In this case, --output_file is overridden. (Do not use in Python.) | FALSE |
initial_centroids | numeric matrix | Start with the specified initial centroids. | matrix(numeric(), 0, 0) |
input | numeric matrix | Input dataset to perform clustering on. | **--** |
kill_empty_clusters | logical | Remove empty clusters when they occur. | FALSE |
labels_only | logical | Only output labels into output file. | FALSE |
max_iterations | integer | Maximum number of iterations before k-means terminates. | 1000 |
percentage | numeric | Percentage of dataset to use for each refined start sampling (use when --refined_start is specified). | 0.02 |
refined_start | logical | Use the refined initial point strategy by Bradley and Fayyad to choose initial points. | FALSE |
samplings | integer | Number of samplings to perform for refined start (use when --refined_start is specified). | 100 |
seed | integer | Random seed. If 0, 'std::time(NULL)' is used. | 0 |
verbose | logical | Display informational messages and the full list of parameters and timers at the end of execution. | FALSE |
Results are returned in an R list. The keys of the list are the names of the output parameters.
name | type | description |
---|---|---|
centroid | numeric matrix | If specified, the centroids of each cluster will be written to the given file. |
output | numeric matrix | Matrix to store output labels or labeled data to. |
{: #r_kmeans_detailed-documentation }
This program performs K-Means clustering on the given dataset. It can return the learned cluster assignments, and the centroids of the clusters. Empty clusters are not allowed by default; when a cluster becomes empty, the point furthest from the centroid of the cluster with maximum variance is taken to fill that cluster.
Optionally, the Bradley and Fayyad approach ("Refining initial points for k-means clustering", 1998) can be used to select initial points by specifying the refined_start parameter. This approach works by taking random samplings of the dataset; to specify the number of samplings, the samplings parameter is used, and to specify the percentage of the dataset to be used in each sample, the percentage parameter is used (it should be a value between 0.0 and 1.0).
There are several options available for the algorithm used for each Lloyd iteration, specified with the algorithm option. The standard O(kN) approach can be used ('naive'). Other options include the Pelleg-Moore tree-based algorithm ('pelleg-moore'), Elkan's triangle-inequality based algorithm ('elkan'), Hamerly's modification to Elkan's algorithm ('hamerly'), the dual-tree k-means algorithm ('dualtree'), and the dual-tree k-means algorithm using the cover tree ('dualtree-covertree').
The behavior for when an empty cluster is encountered can be modified with the allow_empty_clusters option. When this option is specified and there is a cluster owning no points at the end of an iteration, that cluster's centroid will simply remain in its position from the previous iteration. If the kill_empty_clusters option is specified, then when a cluster owns no points at the end of an iteration, the cluster centroid is simply filled with DBL_MAX, killing it and effectively reducing k for the rest of the computation. Note that the default option when neither empty cluster option is specified can be time-consuming to calculate; therefore, specifying either of these parameters will often accelerate runtime.
Initial clustering assignments may be specified using the initial_centroids parameter, and the maximum number of iterations may be specified with the max_iterations parameter.
As an example, to use Hamerly's algorithm to perform k-means clustering with k=10 on the dataset "data", saving the centroids to "centroids" and the assignments for each point to "assignments", the following command could be used:
R> output <- kmeans(input=data, clusters=10, algorithm="hamerly")
R> assignments <- output$output
R> centroids <- output$centroid
To run k-means on that same dataset with initial centroids specified in "initial" with a maximum of 500 iterations, storing the output centroids in "final", the following command may be used:
R> output <- kmeans(input=data, initial_centroids=initial, clusters=10,
max_iterations=500)
R> final <- output$centroid
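The refined start strategy can be requested with the documented options; a minimal sketch (the values here are illustrative):
R> output <- kmeans(input=data, clusters=10, refined_start=TRUE,
     samplings=100, percentage=0.1)
R> assignments <- output$output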
```go
// Initialize optional parameters for Lars().
param := mlpack.LarsOptions()
param.Input = mat.NewDense(1, 1, nil)
param.InputModel = nil
param.Lambda1 = 0
param.Lambda2 = 0
param.Responses = mat.NewDense(1, 1, nil)
param.Test = mat.NewDense(1, 1, nil)
param.UseCholesky = false

output_model, output_predictions := mlpack.Lars(param)
```
</div>
<div class="language-decl" id="r" markdown="1">
```R
R> library(mlpack)
R> d <- lars(input=matrix(numeric(), 0, 0), input_model=NA, lambda1=0,
        lambda2=0, responses=matrix(numeric(), 0, 0), test=matrix(numeric(), 0,
        0), use_cholesky=FALSE, verbose=FALSE)
R> output_model <- d$output_model
R> output_predictions <- d$output_predictions
```
</div>
An implementation of Least Angle Regression (Stagewise/laSso), also known as LARS. This can train a LARS/LASSO/Elastic Net model and use that model or a pre-trained model to output regression predictions for a test set. Detailed documentation{: .language-detail-link #cli }Detailed documentation{: .language-detail-link #python }Detailed documentation{: .language-detail-link #julia }Detailed documentation{: .language-detail-link #go }Detailed documentation{: .language-detail-link #r }.
name | type | description | default |
---|---|---|---|
--help (-h) | flag | Default help info. Only exists in CLI binding. | |
--info | string | Print help on a specific option. Only exists in CLI binding. | '' |
--input_file (-i) | 2-d matrix file | Matrix of covariates (X). | '' |
--input_model_file (-m) | LARS file | Trained LARS model to use. | '' |
--lambda1 (-l) | double | Regularization parameter for l1-norm penalty. | 0 |
--lambda2 (-L) | double | Regularization parameter for l2-norm penalty. | 0 |
--responses_file (-r) | 2-d matrix file | Matrix of responses/observations (y). | '' |
--test_file (-t) | 2-d matrix file | Matrix containing points to regress on (test points). | '' |
--use_cholesky (-c) | flag | Use Cholesky decomposition during computation rather than explicitly computing the full Gram matrix. | |
--verbose (-v) | flag | Display informational messages and the full list of parameters and timers at the end of execution. | |
--version (-V) | flag | Display the version of mlpack. Only exists in CLI binding. | |
name | type | description |
---|---|---|
--output_model_file (-M) | LARS file | Output LARS model. |
--output_predictions_file (-o) | 2-d matrix file | If --test_file is specified, this file is where the predicted responses will be saved. |
{: #cli_lars_detailed-documentation }
An implementation of LARS: Least Angle Regression (Stagewise/laSso). This is a stage-wise homotopy-based algorithm for L1-regularized linear regression (LASSO) and L1+L2-regularized linear regression (Elastic Net).
This program is able to train a LARS/LASSO/Elastic Net model or load a model from file, output regression predictions for a test set, and save the trained model to a file. The LARS algorithm is described in more detail below:
Let X be a matrix where each row is a point and each column is a dimension, and let y be a vector of targets.
The Elastic Net problem is to solve
min_beta 0.5 || X * beta - y ||_2^2 + lambda_1 ||beta||_1 + 0.5 lambda_2 ||beta||_2^2
If lambda1 > 0 and lambda2 = 0, the problem is the LASSO. If lambda1 > 0 and lambda2 > 0, the problem is the Elastic Net. If lambda1 = 0 and lambda2 > 0, the problem is ridge regression. If lambda1 = 0 and lambda2 = 0, the problem is unregularized linear regression.
For efficiency reasons, it is not recommended to use this algorithm with --lambda1 (-l) = 0. In that case, use the 'linear_regression' program, which implements both unregularized linear regression and ridge regression.
To train a LARS/LASSO/Elastic Net model, the --input_file (-i) and --responses_file (-r) parameters must be given. The --lambda1 (-l), --lambda2 (-L), and --use_cholesky (-c) parameters control the training options. A trained model can be saved with the --output_model_file (-M) output parameter. If no training is desired at all, a model can be passed via the --input_model_file (-m) parameter.
The program can also provide predictions for test data using either the trained model or the given input model. Test points can be specified with the --test_file (-t) parameter. Predicted responses to the test points can be saved with the --output_predictions_file (-o) output parameter.
For example, the following command trains a model on the data 'data.csv' and responses 'responses.csv' with lambda1 set to 0.4 and lambda2 set to 0 (so, LASSO is being solved), and then the model is saved to 'lasso_model.bin':
$ mlpack_lars --input_file data.csv --responses_file responses.csv --lambda1
0.4 --lambda2 0 --output_model_file lasso_model.bin
The following command uses the 'lasso_model.bin' to provide predicted responses for the data 'test.csv' and save those responses to 'test_predictions.csv':
$ mlpack_lars --input_model_file lasso_model.bin --test_file test.csv
--output_predictions_file test_predictions.csv
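As described above, setting both --lambda1 and --lambda2 greater than 0 solves the Elastic Net problem instead; a minimal sketch of such a training run (the filenames and values here are illustrative):
$ mlpack_lars --input_file data.csv --responses_file responses.csv --lambda1
0.4 --lambda2 0.1 --output_model_file elastic_net_model.bin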
name | type | description | default |
---|---|---|---|
copy_all_inputs | bool | If specified, all input parameters will be deep copied before the method is run. This is useful for debugging problems where the input parameters are being modified by the algorithm, but can slow down the code. Only exists in Python binding. | False |
input | matrix | Matrix of covariates (X). | np.empty([0, 0]) |
input_model | LARSType | Trained LARS model to use. | None |
lambda1 | float | Regularization parameter for l1-norm penalty. | 0 |
lambda2 | float | Regularization parameter for l2-norm penalty. | 0 |
responses | matrix | Matrix of responses/observations (y). | np.empty([0, 0]) |
test | matrix | Matrix containing points to regress on (test points). | np.empty([0, 0]) |
use_cholesky | bool | Use Cholesky decomposition during computation rather than explicitly computing the full Gram matrix. | False |
verbose | bool | Display informational messages and the full list of parameters and timers at the end of execution. | False |
Results are returned in a Python dictionary. The keys of the dictionary are the names of the output parameters.
name | type | description |
---|---|---|
output_model | LARSType | Output LARS model. |
output_predictions | matrix | If --test_file is specified, this file is where the predicted responses will be saved. |
{: #python_lars_detailed-documentation }
An implementation of LARS: Least Angle Regression (Stagewise/laSso). This is a stage-wise homotopy-based algorithm for L1-regularized linear regression (LASSO) and L1+L2-regularized linear regression (Elastic Net).
This program is able to train a LARS/LASSO/Elastic Net model or load a model from file, output regression predictions for a test set, and save the trained model to a file. The LARS algorithm is described in more detail below:
Let X be a matrix where each row is a point and each column is a dimension, and let y be a vector of targets.
The Elastic Net problem is to solve
min_beta 0.5 || X * beta - y ||_2^2 + lambda_1 ||beta||_1 + 0.5 lambda_2 ||beta||_2^2
If lambda1 > 0 and lambda2 = 0, the problem is the LASSO. If lambda1 > 0 and lambda2 > 0, the problem is the Elastic Net. If lambda1 = 0 and lambda2 > 0, the problem is ridge regression. If lambda1 = 0 and lambda2 = 0, the problem is unregularized linear regression.
For efficiency reasons, it is not recommended to use this algorithm with lambda1 = 0. In that case, use the 'linear_regression' program, which implements both unregularized linear regression and ridge regression.
To train a LARS/LASSO/Elastic Net model, the input and responses parameters must be given. The lambda1, lambda2, and use_cholesky parameters control the training options. A trained model can be saved with the output_model output parameter. If no training is desired at all, a model can be passed via the input_model parameter.
The program can also provide predictions for test data using either the trained model or the given input model. Test points can be specified with the test parameter. Predicted responses to the test points can be saved with the output_predictions output parameter.
For example, the following command trains a model on the data 'data' and responses 'responses' with lambda1 set to 0.4 and lambda2 set to 0 (so, LASSO is being solved), and then the model is saved to 'lasso_model':
>>> output = lars(input=data, responses=responses, lambda1=0.4, lambda2=0)
>>> lasso_model = output['output_model']
The following command uses the 'lasso_model' to provide predicted responses for the data 'test' and save those responses to 'test_predictions':
>>> output = lars(input_model=lasso_model, test=test)
>>> test_predictions = output['output_predictions']
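Setting both lambda1 and lambda2 greater than 0 solves the Elastic Net problem instead; a minimal sketch (the values here are illustrative):
>>> output = lars(input=data, responses=responses, lambda1=0.4, lambda2=0.1)
>>> elastic_net_model = output['output_model']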
name | type | description | default |
---|---|---|---|
input | Float64 matrix-like | Matrix of covariates (X). | zeros(0, 0) |
input_model | LARS | Trained LARS model to use. | nothing |
lambda1 | Float64 | Regularization parameter for l1-norm penalty. | 0 |
lambda2 | Float64 | Regularization parameter for l2-norm penalty. | 0 |
responses | Float64 matrix-like | Matrix of responses/observations (y). | zeros(0, 0) |
test | Float64 matrix-like | Matrix containing points to regress on (test points). | zeros(0, 0) |
use_cholesky | Bool | Use Cholesky decomposition during computation rather than explicitly computing the full Gram matrix. | false |
verbose | Bool | Display informational messages and the full list of parameters and timers at the end of execution. | false |
Results are returned as a tuple, and can be unpacked directly into return values or stored directly as a tuple; undesired results can be ignored with the _ keyword.
name | type | description |
---|---|---|
output_model | LARS | Output LARS model. |
output_predictions | Float64 matrix-like | If --test_file is specified, this file is where the predicted responses will be saved. |
{: #julia_lars_detailed-documentation }
An implementation of LARS: Least Angle Regression (Stagewise/laSso). This is a stage-wise homotopy-based algorithm for L1-regularized linear regression (LASSO) and L1+L2-regularized linear regression (Elastic Net).
This program is able to train a LARS/LASSO/Elastic Net model or load a model from file, output regression predictions for a test set, and save the trained model to a file. The LARS algorithm is described in more detail below:
Let X be a matrix where each row is a point and each column is a dimension, and let y be a vector of targets.
The Elastic Net problem is to solve
min_beta 0.5 || X * beta - y ||_2^2 + lambda_1 ||beta||_1 + 0.5 lambda_2 ||beta||_2^2
If lambda1 > 0 and lambda2 = 0, the problem is the LASSO. If lambda1 > 0 and lambda2 > 0, the problem is the Elastic Net. If lambda1 = 0 and lambda2 > 0, the problem is ridge regression. If lambda1 = 0 and lambda2 = 0, the problem is unregularized linear regression.
For efficiency reasons, it is not recommended to use this algorithm with lambda1 = 0. In that case, use the 'linear_regression' program, which implements both unregularized linear regression and ridge regression.
To train a LARS/LASSO/Elastic Net model, the input and responses parameters must be given. The lambda1, lambda2, and use_cholesky parameters control the training options. A trained model can be saved with the output_model output parameter. If no training is desired at all, a model can be passed via the input_model parameter.
The program can also provide predictions for test data using either the trained model or the given input model. Test points can be specified with the test parameter. Predicted responses to the test points can be saved with the output_predictions output parameter.
For example, the following command trains a model on the data data and responses responses with lambda1 set to 0.4 and lambda2 set to 0 (so, LASSO is being solved), and then the model is saved to lasso_model:
julia> using CSV
julia> data = CSV.read("data.csv")
julia> responses = CSV.read("responses.csv")
julia> lasso_model, _ = lars(input=data, lambda1=0.4, lambda2=0,
responses=responses)
The following command uses the lasso_model to provide predicted responses for the data test and save those responses to test_predictions:
julia> using CSV
julia> test = CSV.read("test.csv")
julia> _, test_predictions = lars(input_model=lasso_model,
test=test)
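Setting both lambda1 and lambda2 greater than 0 solves the Elastic Net problem instead; a minimal sketch (the values here are illustrative):
julia> elastic_net_model, _ = lars(input=data, lambda1=0.4, lambda2=0.1,
       responses=responses)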
There are two types of input options: required options, which are passed directly to the function call, and optional options, which are passed via an initialized struct, which allows keyword access to each of the options.
name | type | description | default |
---|---|---|---|
Input | *mat.Dense | Matrix of covariates (X). | mat.NewDense(1, 1, nil) |
InputModel | lars | Trained LARS model to use. | nil |
Lambda1 | float64 | Regularization parameter for l1-norm penalty. | 0 |
Lambda2 | float64 | Regularization parameter for l2-norm penalty. | 0 |
Responses | *mat.Dense | Matrix of responses/observations (y). | mat.NewDense(1, 1, nil) |
Test | *mat.Dense | Matrix containing points to regress on (test points). | mat.NewDense(1, 1, nil) |
UseCholesky | bool | Use Cholesky decomposition during computation rather than explicitly computing the full Gram matrix. | false |
Verbose | bool | Display informational messages and the full list of parameters and timers at the end of execution. | false |
Output options are returned via Go's support for multiple return values, in the order listed below.
name | type | description |
---|---|---|
outputModel | lars | Output LARS model. |
outputPredictions | *mat.Dense | If --test_file is specified, this file is where the predicted responses will be saved. |
{: #go_lars_detailed-documentation }
An implementation of LARS: Least Angle Regression (Stagewise/laSso). This is a stage-wise homotopy-based algorithm for L1-regularized linear regression (LASSO) and L1+L2-regularized linear regression (Elastic Net).
This program is able to train a LARS/LASSO/Elastic Net model or load a model from file, output regression predictions for a test set, and save the trained model to a file. The LARS algorithm is described in more detail below:
Let X be a matrix where each row is a point and each column is a dimension, and let y be a vector of targets.
The Elastic Net problem is to solve
min_beta 0.5 || X * beta - y ||_2^2 + lambda_1 ||beta||_1 + 0.5 lambda_2 ||beta||_2^2
If lambda1 > 0 and lambda2 = 0, the problem is the LASSO. If lambda1 > 0 and lambda2 > 0, the problem is the Elastic Net. If lambda1 = 0 and lambda2 > 0, the problem is ridge regression. If lambda1 = 0 and lambda2 = 0, the problem is unregularized linear regression.
For efficiency reasons, it is not recommended to use this algorithm with Lambda1 = 0. In that case, use the 'linear_regression' program, which implements both unregularized linear regression and ridge regression.
To train a LARS/LASSO/Elastic Net model, the Input and Responses parameters must be given. The Lambda1, Lambda2, and UseCholesky parameters control the training options. A trained model can be saved with the OutputModel output parameter. If no training is desired at all, a model can be passed via the InputModel parameter.
The program can also provide predictions for test data using either the trained model or the given input model. Test points can be specified with the Test parameter. Predicted responses to the test points can be saved with the OutputPredictions output parameter.
For example, the following command trains a model on the data data and responses responses with lambda1 set to 0.4 and lambda2 set to 0 (so, LASSO is being solved), and then the model is saved to lasso_model:
// Initialize optional parameters for Lars().
param := mlpack.LarsOptions()
param.Input = data
param.Responses = responses
param.Lambda1 = 0.4
param.Lambda2 = 0
lasso_model, _ := mlpack.Lars(param)
The following command uses the lasso_model to provide predicted responses for the data test and save those responses to test_predictions:
// Initialize optional parameters for Lars().
param := mlpack.LarsOptions()
param.InputModel = &lasso_model
param.Test = test
_, test_predictions := mlpack.Lars(param)
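Setting both Lambda1 and Lambda2 greater than 0 solves the Elastic Net problem instead; a minimal sketch (the values here are illustrative):
// Train an Elastic Net model: both penalties are nonzero.
param := mlpack.LarsOptions()
param.Input = data
param.Responses = responses
param.Lambda1 = 0.4
param.Lambda2 = 0.1
elastic_net_model, _ := mlpack.Lars(param)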
name | type | description | default |
---|---|---|---|
input | numeric matrix | Matrix of covariates (X). | matrix(numeric(), 0, 0) |
input_model | LARS | Trained LARS model to use. | NA |
lambda1 | numeric | Regularization parameter for l1-norm penalty. | 0 |
lambda2 | numeric | Regularization parameter for l2-norm penalty. | 0 |
responses | numeric matrix | Matrix of responses/observations (y). | matrix(numeric(), 0, 0) |
test | numeric matrix | Matrix containing points to regress on (test points). | matrix(numeric(), 0, 0) |
use_cholesky | logical | Use Cholesky decomposition during computation rather than explicitly computing the full Gram matrix. | FALSE |
verbose | logical | Display informational messages and the full list of parameters and timers at the end of execution. | FALSE |
Results are returned in an R list. The keys of the list are the names of the output parameters.
name | type | description |
---|---|---|
output_model | LARS | Output LARS model. |
output_predictions | numeric matrix | If --test_file is specified, this file is where the predicted responses will be saved. |
{: #r_lars_detailed-documentation }
An implementation of LARS: Least Angle Regression (Stagewise/laSso). This is a stage-wise homotopy-based algorithm for L1-regularized linear regression (LASSO) and L1+L2-regularized linear regression (Elastic Net).
This program is able to train a LARS/LASSO/Elastic Net model or load a model from file, output regression predictions for a test set, and save the trained model to a file. The LARS algorithm is described in more detail below:
Let X be a matrix where each row is a point and each column is a dimension, and let y be a vector of targets.
The Elastic Net problem is to solve
min_beta 0.5 || X * beta - y ||_2^2 + lambda_1 ||beta||_1 + 0.5 lambda_2 ||beta||_2^2
If lambda1 > 0 and lambda2 = 0, the problem is the LASSO. If lambda1 > 0 and lambda2 > 0, the problem is the Elastic Net. If lambda1 = 0 and lambda2 > 0, the problem is ridge regression. If lambda1 = 0 and lambda2 = 0, the problem is unregularized linear regression.
For efficiency reasons, it is not recommended to use this algorithm with lambda1 = 0. In that case, use the 'linear_regression' program, which implements both unregularized linear regression and ridge regression.
To train a LARS/LASSO/Elastic Net model, the input and responses parameters must be given. The lambda1, lambda2, and use_cholesky parameters control the training options. A trained model can be saved with the output_model output parameter. If no training is desired at all, a model can be passed via the input_model parameter.
The program can also provide predictions for test data using either the trained model or the given input model. Test points can be specified with the test parameter. Predicted responses to the test points can be saved with the output_predictions output parameter.
For example, the following command trains a model on the data "data" and responses "responses" with lambda1 set to 0.4 and lambda2 set to 0 (so, LASSO is being solved), and then the model is saved to "lasso_model":
R> output <- lars(input=data, responses=responses, lambda1=0.4, lambda2=0)
R> lasso_model <- output$output_model
The following command uses the "lasso_model"
to provide predicted responses for the data "test"
and save those responses to "test_predictions"
:
R> output <- lars(input_model=lasso_model, test=test)
R> test_predictions <- output$output_predictions
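Setting both lambda1 and lambda2 greater than 0 solves the Elastic Net problem instead; a minimal sketch (the values here are illustrative):
R> output <- lars(input=data, responses=responses, lambda1=0.4, lambda2=0.1)
R> elastic_net_model <- output$output_model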
// Initialize optional parameters for LinearRegression(). param := mlpack.LinearRegressionOptions() param.InputModel = nil param.Lambda = 0 param.Test = mat.NewDense(1, 1, nil) param.Training = mat.NewDense(1, 1, nil) param.TrainingResponses = mat.NewDense(1, 1, nil)
output_model, output_predictions := mlpack.LinearRegression(param)
</div>
<div class="language-decl" id="r" markdown="1">
```R
R> library(mlpack)
R> d <- linear_regression(input_model=NA, lambda=0,
test=matrix(numeric(), 0, 0), training=matrix(numeric(), 0, 0),
training_responses=matrix(numeric(), 0, 0), verbose=FALSE)
R> output_model <- d$output_model
R> output_predictions <- d$output_predictions
```
</div>
An implementation of simple linear regression and ridge regression using ordinary least squares. Given a dataset and responses, a model can be trained and saved for later use, or a pre-trained model can be used to output regression predictions for a test set. Detailed documentation{: .language-detail-link #cli }Detailed documentation{: .language-detail-link #python }Detailed documentation{: .language-detail-link #julia }Detailed documentation{: .language-detail-link #go }Detailed documentation{: .language-detail-link #r }.
name | type | description | default |
---|---|---|---|
--help (-h) |
flag |
Default help info. Only exists in CLI binding. | |
--info |
string |
Print help on a specific option. Only exists in CLI binding. | '' |
--input_model_file (-m) |
LinearRegression file |
Existing LinearRegression model to use. | '' |
--lambda (-l) |
double |
Tikhonov regularization for ridge regression. If 0, the method reduces to linear regression. | 0 |
--test_file (-T) |
2-d matrix file |
Matrix containing X' (test regressors). | '' |
--training_file (-t) |
2-d matrix file |
Matrix containing training set X (regressors). | '' |
--training_responses_file (-r) |
1-d matrix file |
Optional vector containing y (responses). If not given, the responses are assumed to be the last row of the input file. | '' |
--verbose (-v) |
flag |
Display informational messages and the full list of parameters and timers at the end of execution. | |
--version (-V) |
flag |
Display the version of mlpack. Only exists in CLI binding. |
name | type | description |
---|---|---|
--output_model_file (-M) |
LinearRegression file |
Output LinearRegression model. |
--output_predictions_file (-o) |
1-d matrix file |
If --test_file is specified, this matrix is where the predicted responses will be saved. |
{: #cli_linear_regression_detailed-documentation }
An implementation of simple linear regression and simple ridge regression using ordinary least squares. This solves the problem
y = X * b + e
where X (specified by --training_file (-t)
) and y (specified either as the last column of the input matrix --training_file (-t)
or via the --training_responses_file (-r)
parameter) are known and b is the desired variable. If the covariance matrix (X'X) is not invertible, or if the solution is overdetermined, then specify a Tikhonov regularization constant (with --lambda (-l)
) greater than 0, which will regularize the covariance matrix to make it invertible. The calculated b may be saved with the --output_model_file (-M)
output parameter.
Optionally, the calculated value of b is used to predict the responses for another matrix X' (specified by the --test_file (-T)
parameter):
y' = X' * b
and the predicted responses y' may be saved with the --output_predictions_file (-o)
output parameter. This type of regression is related to least-angle regression, which mlpack implements as the 'lars' program.
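As a sanity check on the algebra above, the regularized solution has the closed form b = (X'X + lambda * I)^-1 X'y. The following minimal NumPy sketch (plain Python, not mlpack, and ignoring the intercept term that mlpack adds by default) shows both the fit and the prediction step:

```python
import numpy as np

def ridge_fit(X, y, lam=0.0):
    """Solve b = (X'X + lam*I)^{-1} X'y; lam = 0 gives ordinary least squares."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# Hypothetical data: 50 points in 5 dimensions, rows are points.
X = np.random.rand(50, 5)
b_true = np.arange(1.0, 6.0)
y = X @ b_true + 0.01 * np.random.randn(50)

b = ridge_fit(X, y, lam=0.1)  # Tikhonov-regularized estimate of b
y_pred = X @ b                # predictions y' = X' * b for test regressors X'
```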
For example, to run a linear regression on the dataset 'X.csv'
with responses 'y.csv'
, saving the trained model to 'lr_model.bin'
, the following command could be used:
$ mlpack_linear_regression --training_file X.csv --training_responses_file
y.csv --output_model_file lr_model.bin
Then, to use 'lr_model.bin'
to predict responses for a test set 'X_test.csv'
, saving the predictions to 'X_test_responses.csv'
, the following command could be used:
$ mlpack_linear_regression --input_model_file lr_model.bin --test_file
X_test.csv --output_predictions_file X_test_responses.csv
name | type | description | default |
---|---|---|---|
copy_all_inputs |
bool |
If specified, all input parameters will be deep copied before the method is run. This is useful for debugging problems where the input parameters are being modified by the algorithm, but can slow down the code. Only exists in Python binding. | False |
input_model |
LinearRegressionType |
Existing LinearRegression model to use. | None |
lambda_ |
float |
Tikhonov regularization for ridge regression. If 0, the method reduces to linear regression. | 0 |
test |
matrix |
Matrix containing X' (test regressors). | np.empty([0, 0]) |
training |
matrix |
Matrix containing training set X (regressors). | np.empty([0, 0]) |
training_responses |
vector |
Optional vector containing y (responses). If not given, the responses are assumed to be the last row of the input file. | np.empty([0]) |
verbose |
bool |
Display informational messages and the full list of parameters and timers at the end of execution. | False |
Results are returned in a Python dictionary. The keys of the dictionary are the names of the output parameters.
name | type | description |
---|---|---|
output_model |
LinearRegressionType |
Output LinearRegression model. |
output_predictions |
vector |
If test is specified, this matrix is where the predicted responses will be saved. |
{: #python_linear_regression_detailed-documentation }
An implementation of simple linear regression and simple ridge regression using ordinary least squares. This solves the problem
y = X * b + e
where X (specified by training
) and y (specified either as the last column of the input matrix training
or via the training_responses
parameter) are known and b is the desired variable. If the covariance matrix (X'X) is not invertible, or if the solution is overdetermined, then specify a Tikhonov regularization constant (with lambda_
) greater than 0, which will regularize the covariance matrix to make it invertible. The calculated b may be saved with the output_model
output parameter.
Optionally, the calculated value of b is used to predict the responses for another matrix X' (specified by the test
parameter):
y' = X' * b
and the predicted responses y' may be saved with the output_predictions
output parameter. This type of regression is related to least-angle regression, which mlpack implements as the 'lars' program.
For example, to run a linear regression on the dataset 'X'
with responses 'y'
, saving the trained model to 'lr_model'
, the following command could be used:
>>> output = linear_regression(training=X, training_responses=y)
>>> lr_model = output['output_model']
Then, to use 'lr_model'
to predict responses for a test set 'X_test'
, saving the predictions to 'X_test_responses'
, the following command could be used:
>>> output = linear_regression(input_model=lr_model, test=X_test)
>>> X_test_responses = output['output_predictions']
name | type | description | default |
---|---|---|---|
input_model |
LinearRegression |
Existing LinearRegression model to use. | nothing |
lambda |
Float64 |
Tikhonov regularization for ridge regression. If 0, the method reduces to linear regression. | 0 |
test |
Float64 matrix-like |
Matrix containing X' (test regressors). | zeros(0, 0) |
training |
Float64 matrix-like |
Matrix containing training set X (regressors). | zeros(0, 0) |
training_responses |
Float64 vector-like |
Optional vector containing y (responses). If not given, the responses are assumed to be the last row of the input file. | Float64[] |
verbose |
Bool |
Display informational messages and the full list of parameters and timers at the end of execution. | false |
Results are returned as a tuple, and can be unpacked directly into return values or stored directly as a tuple; undesired results can be ignored with the _ keyword.
name | type | description |
---|---|---|
output_model |
LinearRegression |
Output LinearRegression model. |
output_predictions |
Float64 vector-like |
If test is specified, this matrix is where the predicted responses will be saved. |
{: #julia_linear_regression_detailed-documentation }
An implementation of simple linear regression and simple ridge regression using ordinary least squares. This solves the problem
y = X * b + e
where X (specified by training
) and y (specified either as the last column of the input matrix training
or via the training_responses
parameter) are known and b is the desired variable. If the covariance matrix (X'X) is not invertible, or if the solution is overdetermined, then specify a Tikhonov regularization constant (with lambda
) greater than 0, which will regularize the covariance matrix to make it invertible. The calculated b may be saved with the output_model
output parameter.
Optionally, the calculated value of b is used to predict the responses for another matrix X' (specified by the test
parameter):
y' = X' * b
and the predicted responses y' may be saved with the output_predictions
output parameter. This type of regression is related to least-angle regression, which mlpack implements as the 'lars' program.
For example, to run a linear regression on the dataset X
with responses y
, saving the trained model to lr_model
, the following command could be used:
julia> using CSV
julia> X = CSV.read("X.csv")
julia> y = CSV.read("y.csv")
julia> lr_model, _ = linear_regression(training=X,
training_responses=y)
Then, to use lr_model
to predict responses for a test set X_test
, saving the predictions to X_test_responses
, the following command could be used:
julia> using CSV
julia> X_test = CSV.read("X_test.csv")
julia> _, X_test_responses = linear_regression(input_model=lr_model,
test=X_test)
There are two types of input options: required options, which are passed directly to the function call, and optional options, which are passed via an initialized struct, which allows keyword access to each of the options.
name | type | description | default |
---|---|---|---|
InputModel |
linearRegression |
Existing LinearRegression model to use. | nil |
Lambda |
float64 |
Tikhonov regularization for ridge regression. If 0, the method reduces to linear regression. | 0 |
Test |
*mat.Dense |
Matrix containing X' (test regressors). | mat.NewDense(1, 1, nil) |
Training |
*mat.Dense |
Matrix containing training set X (regressors). | mat.NewDense(1, 1, nil) |
TrainingResponses |
*mat.Dense (1d) |
Optional vector containing y (responses). If not given, the responses are assumed to be the last row of the input file. | mat.NewDense(1, 1, nil) |
Verbose |
bool |
Display informational messages and the full list of parameters and timers at the end of execution. | false |
Output options are returned via Go's support for multiple return values, in the order listed below.
name | type | description |
---|---|---|
outputModel |
linearRegression |
Output LinearRegression model. |
outputPredictions |
*mat.Dense (1d) |
If Test is specified, this matrix is where the predicted responses will be saved. |
{: #go_linear_regression_detailed-documentation }
An implementation of simple linear regression and simple ridge regression using ordinary least squares. This solves the problem
y = X * b + e
where X (specified by Training
) and y (specified either as the last column of the input matrix Training
or via the TrainingResponses
parameter) are known and b is the desired variable. If the covariance matrix (X'X) is not invertible, or if the solution is overdetermined, then specify a Tikhonov regularization constant (with Lambda
) greater than 0, which will regularize the covariance matrix to make it invertible. The calculated b may be saved with the OutputModel
output parameter.
Optionally, the calculated value of b is used to predict the responses for another matrix X' (specified by the Test
parameter):
y' = X' * b
and the predicted responses y' may be saved with the OutputPredictions
output parameter. This type of regression is related to least-angle regression, which mlpack implements as the 'lars' program.
For example, to run a linear regression on the dataset X
with responses y
, saving the trained model to lr_model
, the following command could be used:
// Initialize optional parameters for LinearRegression().
param := mlpack.LinearRegressionOptions()
param.Training = X
param.TrainingResponses = y
lr_model, _ := mlpack.LinearRegression(param)
Then, to use lr_model
to predict responses for a test set X_test
, saving the predictions to X_test_responses
, the following command could be used:
// Initialize optional parameters for LinearRegression().
param := mlpack.LinearRegressionOptions()
param.InputModel = &lr_model
param.Test = X_test
_, X_test_responses := mlpack.LinearRegression(param)
name | type | description | default |
---|---|---|---|
input_model |
LinearRegression |
Existing LinearRegression model to use. | NA |
lambda |
numeric |
Tikhonov regularization for ridge regression. If 0, the method reduces to linear regression. | 0 |
test |
numeric matrix |
Matrix containing X' (test regressors). | matrix(numeric(), 0, 0) |
training |
numeric matrix |
Matrix containing training set X (regressors). | matrix(numeric(), 0, 0) |
training_responses |
numeric vector |
Optional vector containing y (responses). If not given, the responses are assumed to be the last row of the input file. | matrix(numeric(), 0, 0) |
verbose |
logical |
Display informational messages and the full list of parameters and timers at the end of execution. | FALSE |
Results are returned in an R list. The keys of the list are the names of the output parameters.
name | type | description |
---|---|---|
output_model |
LinearRegression |
Output LinearRegression model. |
output_predictions |
numeric vector |
If test is specified, this matrix is where the predicted responses will be saved. |
{: #r_linear_regression_detailed-documentation }
An implementation of simple linear regression and simple ridge regression using ordinary least squares. This solves the problem
y = X * b + e
where X (specified by training
) and y (specified either as the last column of the input matrix training
or via the training_responses
parameter) are known and b is the desired variable. If the covariance matrix (X'X) is not invertible, or if the solution is overdetermined, then specify a Tikhonov regularization constant (with lambda
) greater than 0, which will regularize the covariance matrix to make it invertible. The calculated b may be saved with the output_model
output parameter.
Optionally, the calculated value of b is used to predict the responses for another matrix X' (specified by the test
parameter):
y' = X' * b
and the predicted responses y' may be saved with the output_predictions
output parameter. This type of regression is related to least-angle regression, which mlpack implements as the 'lars' program.
For example, to run a linear regression on the dataset "X"
with responses "y"
, saving the trained model to "lr_model"
, the following command could be used:
R> output <- linear_regression(training=X, training_responses=y)
R> lr_model <- output$output_model
Then, to use "lr_model"
to predict responses for a test set "X_test"
, saving the predictions to "X_test_responses"
, the following command could be used:
R> output <- linear_regression(input_model=lr_model, test=X_test)
R> X_test_responses <- output$output_predictions
```go
// Initialize optional parameters for LinearSvm().
param := mlpack.LinearSvmOptions()
param.Delta = 1
param.Epochs = 50
param.InputModel = nil
param.Labels = mat.NewDense(1, 1, nil)
param.Lambda = 0.0001
param.MaxIterations = 10000
param.NoIntercept = false
param.NumClasses = 0
param.Optimizer = "lbfgs"
param.Seed = 0
param.Shuffle = false
param.StepSize = 0.01
param.Test = mat.NewDense(1, 1, nil)
param.TestLabels = mat.NewDense(1, 1, nil)
param.Tolerance = 1e-10
param.Training = mat.NewDense(1, 1, nil)

output_model, predictions, probabilities := mlpack.LinearSvm(param)
```
</div>
<div class="language-decl" id="r" markdown="1">
```R
R> library(mlpack)
R> d <- linear_svm(delta=1, epochs=50, input_model=NA,
labels=matrix(integer(), 0, 0), lambda=0.0001, max_iterations=10000,
no_intercept=FALSE, num_classes=0, optimizer="lbfgs", seed=0,
shuffle=FALSE, step_size=0.01, test=matrix(numeric(), 0, 0),
test_labels=matrix(integer(), 0, 0), tolerance=1e-10,
training=matrix(numeric(), 0, 0), verbose=FALSE)
R> output_model <- d$output_model
R> predictions <- d$predictions
R> probabilities <- d$probabilities
```
</div>
An implementation of linear SVM for multiclass classification. Given labeled data, a model can be trained and saved for future use; or, a pre-trained model can be used to classify new points. Detailed documentation{: .language-detail-link #cli }Detailed documentation{: .language-detail-link #python }Detailed documentation{: .language-detail-link #julia }Detailed documentation{: .language-detail-link #go }Detailed documentation{: .language-detail-link #r }.
name | type | description | default |
---|---|---|---|
--delta (-d) |
double |
Margin of difference between correct class and other classes. | 1 |
--epochs (-E) |
int |
Maximum number of full epochs over dataset for psgd | 50 |
--help (-h) |
flag |
Default help info. Only exists in CLI binding. | |
--info |
string |
Print help on a specific option. Only exists in CLI binding. | '' |
--input_model_file (-m) |
LinearSVMModel file |
Existing model (parameters). | '' |
--labels_file (-l) |
1-d index matrix file |
A matrix containing labels (0 or 1) for the points in the training set (y). | '' |
--lambda (-r) |
double |
L2-regularization parameter for training. | 0.0001 |
--max_iterations (-n) |
int |
Maximum iterations for optimizer (0 indicates no limit). | 10000 |
--no_intercept (-N) |
flag |
Do not add the intercept term to the model. | |
--num_classes (-c) |
int |
Number of classes for classification; if unspecified (or 0), the number of classes found in the labels will be used. | 0 |
--optimizer (-O) |
string |
Optimizer to use for training ('lbfgs' or 'psgd'). | 'lbfgs' |
--seed (-s) |
int |
Random seed. If 0, 'std::time(NULL)' is used. | 0 |
--shuffle (-S) |
flag |
Don't shuffle the order in which data points are visited for parallel SGD. | |
--step_size (-a) |
double |
Step size for parallel SGD optimizer. | 0.01 |
--test_file (-T) |
2-d matrix file |
Matrix containing test dataset. | '' |
--test_labels_file (-L) |
1-d index matrix file |
Matrix containing test labels. | '' |
--tolerance (-e) |
double |
Convergence tolerance for optimizer. | 1e-10 |
--training_file (-t) |
2-d matrix file |
A matrix containing the training set (the matrix of predictors, X). | '' |
--verbose (-v) |
flag |
Display informational messages and the full list of parameters and timers at the end of execution. | |
--version (-V) |
flag |
Display the version of mlpack. Only exists in CLI binding. |
name | type | description |
---|---|---|
--output_model_file (-M) |
LinearSVMModel file |
Output for trained linear svm model. |
--predictions_file (-P) |
1-d index matrix file |
If test data is specified, this matrix is where the predictions for the test set will be saved. |
--probabilities_file (-p) |
2-d matrix file |
If test data is specified, this matrix is where the class probabilities for the test set will be saved. |
{: #cli_linear_svm_detailed-documentation }
An implementation of linear SVMs that uses either L-BFGS or parallel SGD (stochastic gradient descent) to train the model.
This program allows loading a linear SVM model (via the --input_model_file (-m)
parameter) or training a linear SVM model given training data (specified with the --training_file (-t)
parameter), or both those things at once. In addition, this program allows classification on a test dataset (specified with the --test_file (-T)
parameter) and the classification results may be saved with the --predictions_file (-P)
output parameter. The trained linear SVM model may be saved using the --output_model_file (-M)
output parameter.
The training data, if specified, may have class labels as its last dimension. Alternately, the --labels_file (-l)
parameter may be used to specify a separate vector of labels.
When a model is being trained, there are many options. L2 regularization (to prevent overfitting) can be specified with the --lambda (-r)
option, and the number of classes can be manually specified with the --num_classes (-c)
parameter, and if an intercept term is not desired in the model, the --no_intercept (-N)
parameter can be specified. The margin of difference between the correct class and other classes can be specified with the --delta (-d)
option. The optimizer used to train the model can be specified with the --optimizer (-O)
parameter. Available options are 'psgd' (parallel stochastic gradient descent) and 'lbfgs' (the L-BFGS optimizer). There are also various parameters for the optimizer; the --max_iterations (-n)
parameter specifies the maximum number of allowed iterations, and the --tolerance (-e)
parameter specifies the tolerance for convergence. For the parallel SGD optimizer, the --step_size (-a)
parameter controls the step size taken at each iteration by the optimizer, and the maximum number of epochs can be set with --epochs (-E)
. If the objective function for your data is oscillating between Inf and 0, the step size is probably too large. There are more parameters for the optimizers, but the C++ interface must be used to access these.
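For reference, the quantity being minimized is (up to implementation details) the L2-regularized multiclass hinge loss, where delta is the margin and lambda the regularization strength; a sketch in LaTeX notation, with one weight vector w_c per class and correct label y_i:

```latex
\min_{W} \; \frac{1}{n} \sum_{i=1}^{n} \sum_{c \neq y_i}
  \max\!\bigl(0,\; w_c^\top x_i - w_{y_i}^\top x_i + \delta\bigr)
  \;+\; \frac{\lambda}{2} \, \lVert W \rVert_F^2
```

A step size that is too large for parallel SGD shows up as this objective oscillating rather than decreasing.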
Optionally, the model can be used to predict the labels for another matrix of data points, if --test_file (-T)
is specified. The --test_file (-T)
parameter can be specified without the --training_file (-t)
parameter, so long as an existing linear SVM model is given with the --input_model_file (-m)
parameter. The output predictions from the linear SVM model may be saved with the --predictions_file (-P)
parameter.
As an example, to train a linear SVM on the data 'data.csv'
with labels 'labels.csv'
, with L2 regularization of 0.1, saving the model to 'lsvm_model.bin'
, the following command may be used:
$ mlpack_linear_svm --training_file data.csv --labels_file labels.csv --lambda
0.1 --delta 1 --num_classes 0 --output_model_file lsvm_model.bin
Then, to use that model to predict classes for the dataset 'test.csv'
, storing the output predictions in 'predictions.csv'
, the following command may be used:
$ mlpack_linear_svm --input_model_file lsvm_model.bin --test_file test.csv
--predictions_file predictions.csv
name | type | description | default |
---|---|---|---|
copy_all_inputs |
bool |
If specified, all input parameters will be deep copied before the method is run. This is useful for debugging problems where the input parameters are being modified by the algorithm, but can slow down the code. Only exists in Python binding. | False |
delta |
float |
Margin of difference between correct class and other classes. | 1 |
epochs |
int |
Maximum number of full epochs over dataset for psgd | 50 |
input_model |
LinearSVMModelType |
Existing model (parameters). | None |
labels |
int vector |
A matrix containing labels (0 or 1) for the points in the training set (y). | np.empty([0], dtype=np.uint64) |
lambda_ |
float |
L2-regularization parameter for training. | 0.0001 |
max_iterations |
int |
Maximum iterations for optimizer (0 indicates no limit). | 10000 |
no_intercept |
bool |
Do not add the intercept term to the model. | False |
num_classes |
int |
Number of classes for classification; if unspecified (or 0), the number of classes found in the labels will be used. | 0 |
optimizer |
str |
Optimizer to use for training ('lbfgs' or 'psgd'). | 'lbfgs' |
seed |
int |
Random seed. If 0, 'std::time(NULL)' is used. | 0 |
shuffle |
bool |
Don't shuffle the order in which data points are visited for parallel SGD. | False |
step_size |
float |
Step size for parallel SGD optimizer. | 0.01 |
test |
matrix |
Matrix containing test dataset. | np.empty([0, 0]) |
test_labels |
int vector |
Matrix containing test labels. | np.empty([0], dtype=np.uint64) |
tolerance |
float |
Convergence tolerance for optimizer. | 1e-10 |
training |
matrix |
A matrix containing the training set (the matrix of predictors, X). | np.empty([0, 0]) |
verbose |
bool |
Display informational messages and the full list of parameters and timers at the end of execution. | False |
Results are returned in a Python dictionary. The keys of the dictionary are the names of the output parameters.
name | type | description |
---|---|---|
output_model |
LinearSVMModelType |
Output for trained linear svm model. |
predictions |
int vector |
If test data is specified, this matrix is where the predictions for the test set will be saved. |
probabilities |
matrix |
If test data is specified, this matrix is where the class probabilities for the test set will be saved. |
{: #python_linear_svm_detailed-documentation }
An implementation of linear SVMs that uses either L-BFGS or parallel SGD (stochastic gradient descent) to train the model.
This program allows loading a linear SVM model (via the input_model
parameter) or training a linear SVM model given training data (specified with the training
parameter), or both those things at once. In addition, this program allows classification on a test dataset (specified with the test
parameter) and the classification results may be saved with the predictions
output parameter. The trained linear SVM model may be saved using the output_model
output parameter.
The training data, if specified, may have class labels as its last dimension. Alternately, the labels
parameter may be used to specify a separate vector of labels.
When a model is being trained, there are many options. L2 regularization (to prevent overfitting) can be specified with the lambda_
option, and the number of classes can be manually specified with the num_classes
parameter, and if an intercept term is not desired in the model, the no_intercept
parameter can be specified. The margin of difference between the correct class and other classes can be specified with the delta
option. The optimizer used to train the model can be specified with the optimizer
parameter. Available options are 'psgd' (parallel stochastic gradient descent) and 'lbfgs' (the L-BFGS optimizer). There are also various parameters for the optimizer; the max_iterations
parameter specifies the maximum number of allowed iterations, and the tolerance
parameter specifies the tolerance for convergence. For the parallel SGD optimizer, the step_size
parameter controls the step size taken at each iteration by the optimizer, and the maximum number of epochs can be set with epochs
. If the objective function for your data is oscillating between Inf and 0, the step size is probably too large. There are more parameters for the optimizers, but the C++ interface must be used to access these.
Optionally, the model can be used to predict the labels for another matrix of data points, if test
is specified. The test
parameter can be specified without the training
parameter, so long as an existing linear SVM model is given with the input_model
parameter. The output predictions from the linear SVM model may be saved with the predictions
parameter.
As an example, to train a linear SVM on the data 'data'
with labels 'labels'
, with L2 regularization of 0.1, saving the model to 'lsvm_model'
, the following command may be used:
>>> output = linear_svm(training=data, labels=labels, lambda_=0.1, delta=1,
num_classes=0)
>>> lsvm_model = output['output_model']
Then, to use that model to predict classes for the dataset 'test'
, storing the output predictions in 'predictions'
, the following command may be used:
>>> output = linear_svm(input_model=lsvm_model, test=test)
>>> predictions = output['predictions']
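If held-out labels are available, the returned predictions can be scored directly. A self-contained sketch with hypothetical random data (any real dataset would be substituted for the rand calls):

```python
import numpy as np
from mlpack import linear_svm

# Hypothetical two-class data: 200 training and 50 test points in 4 dimensions.
train = np.random.rand(200, 4)
labels = np.random.randint(0, 2, 200)
test = np.random.rand(50, 4)
test_labels = np.random.randint(0, 2, 50)

# Train, then classify the held-out points with the trained model.
out = linear_svm(training=train, labels=labels, lambda_=0.1, delta=1)
result = linear_svm(input_model=out['output_model'], test=test)

# Fraction of test points classified correctly.
accuracy = np.mean(result['predictions'] == test_labels)
print('test accuracy:', accuracy)
```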
name | type | description | default |
---|---|---|---|
delta |
Float64 |
Margin of difference between correct class and other classes. | 1 |
epochs |
Int |
Maximum number of full epochs over dataset for psgd | 50 |
input_model |
LinearSVMModel |
Existing model (parameters). | nothing |
labels |
Int vector-like |
A matrix containing labels (0 or 1) for the points in the training set (y). | Int[] |
lambda |
Float64 |
L2-regularization parameter for training. | 0.0001 |
max_iterations |
Int |
Maximum iterations for optimizer (0 indicates no limit). | 10000 |
no_intercept |
Bool |
Do not add the intercept term to the model. | false |
num_classes |
Int |
Number of classes for classification; if unspecified (or 0), the number of classes found in the labels will be used. | 0 |
optimizer |
String |
Optimizer to use for training ('lbfgs' or 'psgd'). | "lbfgs" |
seed |
Int |
Random seed. If 0, 'std::time(NULL)' is used. | 0 |
shuffle |
Bool |
Don't shuffle the order in which data points are visited for parallel SGD. | false |
step_size |
Float64 |
Step size for parallel SGD optimizer. | 0.01 |
test |
Float64 matrix-like |
Matrix containing test dataset. | zeros(0, 0) |
test_labels |
Int vector-like |
Matrix containing test labels. | Int[] |
tolerance |
Float64 |
Convergence tolerance for optimizer. | 1e-10 |
training |
Float64 matrix-like |
A matrix containing the training set (the matrix of predictors, X). | zeros(0, 0) |
verbose |
Bool |
Display informational messages and the full list of parameters and timers at the end of execution. | false |
Results are returned as a tuple, and can be unpacked directly into return values or stored directly as a tuple; undesired results can be ignored with the _ keyword.
name | type | description |
---|---|---|
output_model |
LinearSVMModel |
Output for trained linear svm model. |
predictions |
Int vector-like |
If test data is specified, this matrix is where the predictions for the test set will be saved. |
probabilities |
Float64 matrix-like |
If test data is specified, this matrix is where the class probabilities for the test set will be saved. |
{: #julia_linear_svm_detailed-documentation }
An implementation of linear SVMs that uses either L-BFGS or parallel SGD (stochastic gradient descent) to train the model.
This program allows loading a linear SVM model (via the input_model
parameter) or training a linear SVM model given training data (specified with the training
parameter), or both those things at once. In addition, this program allows classification on a test dataset (specified with the test
parameter) and the classification results may be saved with the predictions
output parameter. The trained linear SVM model may be saved using the output_model
output parameter.
The training data, if specified, may have class labels as its last dimension. Alternately, the labels
parameter may be used to specify a separate vector of labels.
When a model is being trained, there are many options. L2 regularization (to prevent overfitting) can be specified with the lambda
option, and the number of classes can be manually specified with the num_classes
parameter, and if an intercept term is not desired in the model, the no_intercept
parameter can be specified. The margin of difference between the correct class and other classes can be specified with the delta
option. The optimizer used to train the model can be specified with the optimizer
parameter. Available options are 'psgd' (parallel stochastic gradient descent) and 'lbfgs' (the L-BFGS optimizer). There are also various parameters for the optimizer; the max_iterations
parameter specifies the maximum number of allowed iterations, and the tolerance
parameter specifies the tolerance for convergence. For the parallel SGD optimizer, the step_size
parameter controls the step size taken at each iteration by the optimizer, and the maximum number of epochs can be set with epochs
. If the objective function for your data is oscillating between Inf and 0, the step size is probably too large. There are more parameters for the optimizers, but the C++ interface must be used to access these.
Optionally, the model can be used to predict the labels for another matrix of data points, if test
is specified. The test
parameter can be specified without the training
parameter, so long as an existing linear SVM model is given with the input_model
parameter. The output predictions from the linear SVM model may be saved with the predictions
parameter.
As an example, to train a linear SVM on the data 'data
' with labels 'labels
', with L2 regularization of 0.1, saving the model to 'lsvm_model
', the following command may be used:
julia> using CSV
julia> data = CSV.read("data.csv")
julia> labels = CSV.read("labels.csv"; type=Int)
julia> lsvm_model, _, _ = linear_svm(delta=1, labels=labels,
lambda=0.1, num_classes=0, training=data)
Then, to use that model to predict classes for the dataset 'test
', storing the output predictions in 'predictions
', the following command may be used:
julia> using CSV
julia> test = CSV.read("test.csv")
julia> _, predictions, _ = linear_svm(input_model=lsvm_model,
test=test)
There are two types of input options: required options, which are passed directly to the function call, and optional options, which are passed via an initialized struct, which allows keyword access to each of the options.
name | type | description | default |
---|---|---|---|
Delta |
float64 |
Margin of difference between correct class and other classes. | 1 |
Epochs |
int |
Maximum number of full epochs over dataset for psgd | 50 |
InputModel |
linearsvmModel |
Existing model (parameters). | nil |
Labels |
*mat.Dense (1d with ints) |
A matrix containing labels (0 or 1) for the points in the training set (y). | mat.NewDense(1, 1, nil) |
Lambda |
float64 |
L2-regularization parameter for training. | 0.0001 |
MaxIterations |
int |
Maximum iterations for optimizer (0 indicates no limit). | 10000 |
NoIntercept |
bool |
Do not add the intercept term to the model. | false |
NumClasses |
int |
Number of classes for classification; if unspecified (or 0), the number of classes found in the labels will be used. | 0 |
Optimizer |
string |
Optimizer to use for training ('lbfgs' or 'psgd'). | "lbfgs" |
Seed |
int |
Random seed. If 0, 'std::time(NULL)' is used. | 0 |
Shuffle |
bool |
Don't shuffle the order in which data points are visited for parallel SGD. | false |
StepSize |
float64 |
Step size for parallel SGD optimizer. | 0.01 |
Test |
*mat.Dense |
Matrix containing test dataset. | mat.NewDense(1, 1, nil) |
TestLabels |
*mat.Dense (1d with ints) |
Matrix containing test labels. | mat.NewDense(1, 1, nil) |
Tolerance |
float64 |
Convergence tolerance for optimizer. | 1e-10 |
Training |
*mat.Dense |
A matrix containing the training set (the matrix of predictors, X). | mat.NewDense(1, 1, nil) |
Verbose |
bool |
Display informational messages and the full list of parameters and timers at the end of execution. | false |
Output options are returned via Go's support for multiple return values, in the order listed below.
name | type | description |
---|---|---|
outputModel |
linearsvmModel |
Output for trained linear svm model. |
predictions |
*mat.Dense (1d with ints) |
If test data is specified, this matrix is where the predictions for the test set will be saved. |
probabilities |
*mat.Dense |
If test data is specified, this matrix is where the class probabilities for the test set will be saved. |
{: #go_linear_svm_detailed-documentation }
An implementation of linear SVMs that uses either L-BFGS or parallel SGD (stochastic gradient descent) to train the model.
This program allows loading a linear SVM model (via the InputModel
parameter) or training a linear SVM model given training data (specified with the Training
parameter), or both those things at once. In addition, this program allows classification on a test dataset (specified with the Test
parameter) and the classification results may be saved with the Predictions
output parameter. The trained linear SVM model may be saved using the OutputModel
output parameter.
The training data, if specified, may have class labels as its last dimension. Alternately, the Labels
parameter may be used to specify a separate vector of labels.
When a model is being trained, there are many options. L2 regularization (to prevent overfitting) can be specified with the Lambda
option, and the number of classes can be manually specified with the NumClasses
parameter, and if an intercept term is not desired in the model, the NoIntercept
parameter can be specified. The margin of difference between the correct class and other classes can be specified with the Delta
option. The optimizer used to train the model can be specified with the Optimizer
parameter. Available options are 'psgd' (parallel stochastic gradient descent) and 'lbfgs' (the L-BFGS optimizer). There are also various parameters for the optimizer; the MaxIterations
parameter specifies the maximum number of allowed iterations, and the Tolerance
parameter specifies the tolerance for convergence. For the parallel SGD optimizer, the StepSize
parameter controls the step size taken at each iteration by the optimizer, and the maximum number of epochs can be set with Epochs
. If the objective function for your data is oscillating between Inf and 0, the step size is probably too large. There are more parameters for the optimizers, but the C++ interface must be used to access these.
Optionally, the model can be used to predict the labels for another matrix of data points, if Test
is specified. The Test
parameter can be specified without the Training
parameter, so long as an existing linear SVM model is given with the InputModel
parameter. The output predictions from the linear SVM model may be saved with the Predictions
parameter.
As an example, to train a linear SVM on the data 'data
' with labels 'labels
', with L2 regularization of 0.1, saving the model to 'lsvm_model
', the following command may be used:
// Initialize optional parameters for LinearSvm().
param := mlpack.LinearSvmOptions()
param.Training = data
param.Labels = labels
param.Lambda = 0.1
param.Delta = 1
param.NumClasses = 0
lsvm_model, _, _ := mlpack.LinearSvm(param)
Then, to use that model to predict classes for the dataset 'test
', storing the output predictions in 'predictions
', the following command may be used:
// Initialize optional parameters for LinearSvm().
param := mlpack.LinearSvmOptions()
param.InputModel = &lsvm_model
param.Test = test
_, predictions, _ := mlpack.LinearSvm(param)
name | type | description | default |
---|---|---|---|
delta |
numeric |
Margin of difference between correct class and other classes. | 1 |
epochs |
integer |
Maximum number of full epochs over dataset for psgd | 50 |
input_model |
LinearSVMModel |
Existing model (parameters). | NA |
labels |
integer vector |
A matrix containing labels (0 or 1) for the points in the training set (y). | matrix(integer(), 0, 0) |
lambda |
numeric |
L2-regularization parameter for training. | 0.0001 |
max_iterations |
integer |
Maximum iterations for optimizer (0 indicates no limit). | 10000 |
no_intercept |
logical |
Do not add the intercept term to the model. | FALSE |
num_classes |
integer |
Number of classes for classification; if unspecified (or 0), the number of classes found in the labels will be used. | 0 |
optimizer |
character |
Optimizer to use for training ('lbfgs' or 'psgd'). | "lbfgs" |
seed |
integer |
Random seed. If 0, 'std::time(NULL)' is used. | 0 |
shuffle |
logical |
Don't shuffle the order in which data points are visited for parallel SGD. | FALSE |
step_size |
numeric |
Step size for parallel SGD optimizer. | 0.01 |
test |
numeric matrix |
Matrix containing test dataset. | matrix(numeric(), 0, 0) |
test_labels |
integer vector |
Matrix containing test labels. | matrix(integer(), 0, 0) |
tolerance |
numeric |
Convergence tolerance for optimizer. | 1e-10 |
training |
numeric matrix |
A matrix containing the training set (the matrix of predictors, X). | matrix(numeric(), 0, 0) |
verbose |
logical |
Display informational messages and the full list of parameters and timers at the end of execution. | FALSE |
Results are returned in an R list. The keys of the list are the names of the output parameters.
name | type | description |
---|---|---|
output_model |
LinearSVMModel |
Output for trained linear svm model. |
predictions |
integer vector |
If test data is specified, this matrix is where the predictions for the test set will be saved. |
probabilities |
numeric matrix |
If test data is specified, this matrix is where the class probabilities for the test set will be saved. |
{: #r_linear_svm_detailed-documentation }
An implementation of linear SVMs that uses either L-BFGS or parallel SGD (stochastic gradient descent) to train the model.
This program allows loading a linear SVM model (via the input_model
parameter) or training a linear SVM model given training data (specified with the training
parameter), or both those things at once. In addition, this program allows classification on a test dataset (specified with the test
parameter) and the classification results may be saved with the predictions
output parameter. The trained linear SVM model may be saved using the output_model
output parameter.
The training data, if specified, may have class labels as its last dimension. Alternately, the labels
parameter may be used to specify a separate vector of labels.
When a model is being trained, there are many options. L2 regularization (to prevent overfitting) can be specified with the lambda
option, and the number of classes can be manually specified with the num_classes
parameter, and if an intercept term is not desired in the model, the no_intercept
parameter can be specified. The margin of difference between the correct class and other classes can be specified with the delta
option. The optimizer used to train the model can be specified with the optimizer
parameter. Available options are 'psgd' (parallel stochastic gradient descent) and 'lbfgs' (the L-BFGS optimizer). There are also various parameters for the optimizer; the max_iterations
parameter specifies the maximum number of allowed iterations, and the tolerance
parameter specifies the tolerance for convergence. For the parallel SGD optimizer, the step_size
parameter controls the step size taken at each iteration by the optimizer, and the maximum number of epochs can be set with epochs
. If the objective function for your data is oscillating between Inf and 0, the step size is probably too large. There are more parameters for the optimizers, but the C++ interface must be used to access these.
Optionally, the model can be used to predict the labels for another matrix of data points, if test
is specified. The test
parameter can be specified without the training
parameter, so long as an existing linear SVM model is given with the input_model
parameter. The output predictions from the linear SVM model may be saved with the predictions
parameter.
As an example, to train a linear SVM on the data "data"
with labels "labels"
, with L2 regularization of 0.1, saving the model to "lsvm_model"
, the following command may be used:
R> output <- linear_svm(training=data, labels=labels, lambda=0.1, delta=1,
num_classes=0)
R> lsvm_model <- output$output_model
Then, to use that model to predict classes for the dataset "test"
, storing the output predictions in "predictions"
, the following command may be used:
R> output <- linear_svm(input_model=lsvm_model, test=test)
R> predictions <- output$predictions
```go
// Initialize optional parameters for Lmnn().
param := mlpack.LmnnOptions()
param.BatchSize = 50
param.Center = false
param.Distance = mat.NewDense(1, 1, nil)
param.K = 1
param.Labels = mat.NewDense(1, 1, nil)
param.LinearScan = false
param.MaxIterations = 100000
param.Normalize = false
param.Optimizer = "amsgrad"
param.Passes = 50
param.PrintAccuracy = false
param.Range = 1
param.Rank = 0
param.Regularization = 0.5
param.Seed = 0
param.StepSize = 0.01
param.Tolerance = 1e-07

centered_data, output, transformed_data := mlpack.Lmnn(input, param)
```
</div>
<div class="language-decl" id="r" markdown="1">
```R
R> library(mlpack)
R> d <- lmnn(batch_size=50, center=FALSE, distance=matrix(numeric(), 0,
0), input=matrix(numeric(), 0, 0), k=1, labels=matrix(integer(), 0, 0),
linear_scan=FALSE, max_iterations=100000, normalize=FALSE,
optimizer="amsgrad", passes=50, print_accuracy=FALSE, range=1, rank=0,
regularization=0.5, seed=0, step_size=0.01, tolerance=1e-07,
verbose=FALSE)
R> centered_data <- d$centered_data
R> output <- d$output
R> transformed_data <- d$transformed_data
```
</div>
An implementation of Large Margin Nearest Neighbors (LMNN), a distance learning technique. Given a labeled dataset, this learns a transformation of the data that improves k-nearest-neighbor performance; this can be useful as a preprocessing step. Detailed documentation{: .language-detail-link #cli }Detailed documentation{: .language-detail-link #python }Detailed documentation{: .language-detail-link #julia }Detailed documentation{: .language-detail-link #go }Detailed documentation{: .language-detail-link #r }.
name | type | description | default |
---|---|---|---|
--batch_size (-b) |
int |
Batch size for mini-batch SGD. | 50 |
--center (-C) |
flag |
Perform mean-centering on the dataset. It is useful when the centroid of the data is far from the origin. | |
--distance_file (-d) |
2-d matrix file |
Initial distance matrix to be used as starting point | '' |
--help (-h) |
flag |
Default help info. Only exists in CLI binding. | |
--info |
string |
Print help on a specific option. Only exists in CLI binding. | '' |
--input_file (-i) |
2-d matrix file |
Input dataset to run LMNN on. | **--** |
--k (-k) |
int |
Number of target neighbors to use for each datapoint. | 1 |
--labels_file (-l) |
1-d index matrix file |
Labels for input dataset. | '' |
--linear_scan (-L) |
flag |
Don't shuffle the order in which data points are visited for SGD or mini-batch SGD. | |
--max_iterations (-n) |
int |
Maximum number of iterations for L-BFGS (0 indicates no limit). | 100000 |
--normalize (-N) |
flag |
Use a normalized starting point for optimization. It is useful when points are far apart, or when SGD is returning NaN. | |
--optimizer (-O) |
string |
Optimizer to use; 'amsgrad', 'bbsgd', 'sgd', or 'lbfgs'. | 'amsgrad' |
--passes (-p) |
int |
Maximum number of full passes over dataset for AMSGrad, BB_SGD and SGD. | 50 |
--print_accuracy (-P) |
flag |
Print accuracies on the initial and transformed datasets. | |
--range (-R) |
int |
Number of iterations after which impostors need to be recalculated. | 1 |
--rank (-A) |
int |
Rank of distance matrix to be optimized. | 0 |
--regularization (-r) |
double |
Regularization for LMNN objective function | 0.5 |
--seed (-s) |
int |
Random seed. If 0, 'std::time(NULL)' is used. | 0 |
--step_size (-a) |
double |
Step size for AMSGrad, BB_SGD and SGD (alpha). | 0.01 |
--tolerance (-t) |
double |
Maximum tolerance for termination of AMSGrad, BB_SGD, SGD or L-BFGS. | 1e-07 |
--verbose (-v) |
flag |
Display informational messages and the full list of parameters and timers at the end of execution. | |
--version (-V) |
flag |
Display the version of mlpack. Only exists in CLI binding. |
name | type | description |
---|---|---|
--centered_data_file (-c) |
2-d matrix file |
Output matrix for mean-centered dataset. |
--output_file (-o) |
2-d matrix file |
Output matrix for learned distance matrix. |
--transformed_data_file (-D) |
2-d matrix file |
Output matrix for transformed dataset. |
{: #cli_lmnn_detailed-documentation }
This program implements Large Margin Nearest Neighbors, a distance learning technique. The method seeks to improve k-nearest-neighbor classification on a dataset. The method employs the strategy of reducing the distance between similarly labeled data points (a.k.a. target neighbors) and increasing the distance between differently labeled points (a.k.a. impostors), using standard optimization techniques over the gradient of the distance between data points.
To work, this algorithm needs labeled data. It can be given as the last row of the input dataset (specified with --input_file (-i)
), or alternatively as a separate matrix (specified with --labels_file (-l)
). Additionally, a starting point for optimization (specified with --distance_file (-d)
) can be given, having (r x d) dimensionality. Here r should satisfy 1 <= r <= d; consequently, a low-rank matrix will be optimized. Alternatively, a low-rank distance can be learned by specifying the --rank (-A)
parameter (a low-rank matrix with uniformly distributed values will be used as the initial learning point).
The program also requires the number of target neighbors to work with (specified with --k (-k)
). A regularization parameter can also be passed; it acts as a trade-off between the pulling and pushing terms (specified with --regularization (-r)
). In addition, this implementation of LMNN includes a parameter to decide the interval after which impostors must be recalculated (specified with --range (-R)
).
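For intuition about the pulling and pushing terms, the LMNN objective can be sketched as in Weinberger and Saul's formulation, where L is the learned linear transformation, j ~> i means x_j is a target neighbor of x_i, y_il = 1 exactly when x_i and x_l share a label, [.]_+ is the hinge, and the regularization parameter plays the role of the trade-off mu:

```latex
\varepsilon(L) = (1 - \mu) \sum_{j \rightsquigarrow i} \lVert L(x_i - x_j) \rVert^2
  + \mu \sum_{j \rightsquigarrow i} \sum_{l} (1 - y_{il})
    \bigl[\, 1 + \lVert L(x_i - x_j) \rVert^2 - \lVert L(x_i - x_l) \rVert^2 \,\bigr]_+
```

The first term pulls target neighbors together; the second pushes impostors out past the margin.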
Output can either be the learned distance matrix (specified with --output_file (-o)
), or the transformed dataset (specified with --transformed_data_file (-D)
), or both. Additionally, the mean-centered dataset (specified with --centered_data_file (-c)
) can be accessed, provided that mean-centering (specified with --center (-C)
) is performed on the dataset. Accuracy on the initial dataset and the final transformed dataset can be printed by specifying the --print_accuracy (-P)
parameter.
This implementation of LMNN uses AMSGrad, BigBatch_SGD, stochastic gradient descent, mini-batch stochastic gradient descent, or the L-BFGS optimizer.
AMSGrad, specified by the value 'amsgrad' for the parameter --optimizer (-O)
, uses the maximum of past squared gradients. It depends primarily on three parameters: the step size (specified with --step_size (-a)
), the batch size (specified with --batch_size (-b)
), and the maximum number of passes (specified with --passes (-p)
). In addition, a normalized starting point can be used by specifying the --normalize (-N)
parameter.
BigBatch_SGD, specified by the value 'bbsgd' for the parameter --optimizer (-O)
, depends primarily on three parameters: the step size (specified with --step_size (-a)
), the batch size (specified with --batch_size (-b)
), and the maximum number of passes (specified with --passes (-p)
). In addition, a normalized starting point can be used by specifying the --normalize (-N)
parameter.
Stochastic gradient descent, specified by the value 'sgd' for the parameter --optimizer (-O)
, depends primarily on three parameters: the step size (specified with --step_size (-a)
), the batch size (specified with --batch_size (-b)
), and the maximum number of passes (specified with --passes (-p)
). In addition, a normalized starting point can be used by specifying the --normalize (-N)
parameter. Furthermore, mean-centering can be performed on the dataset by specifying the --center (-C)
parameter.
The L-BFGS optimizer, specified by the value 'lbfgs' for the parameter --optimizer (-O)
, uses a back-tracking line search algorithm to minimize a function. The following parameters are used by L-BFGS: --max_iterations (-n)
and --tolerance (-t)
(the optimization is terminated when the gradient norm is below this value). For more details on the L-BFGS optimizer, consult either the mlpack L-BFGS documentation (in lbfgs.hpp) or the vast set of published literature on L-BFGS. In addition, a normalized starting point can be used by specifying the --normalize (-N)
parameter.
By default, the AMSGrad optimizer is used.
Example: suppose we want to learn a distance metric on the iris dataset, with 3 target neighbors, using the BigBatch_SGD optimizer. A simple call for this would look like:
$ mlpack_lmnn --input_file iris.csv --labels_file iris_labels.csv --k 3
--optimizer bbsgd --output_file output.csv
Another call, using the range and regularization parameters with a dataset whose labels are in the last column, could be made as:
$ mlpack_lmnn --input_file letter_recognition.csv --k 5 --range 10
--regularization 0.4 --output_file output.csv
name | type | description | default |
---|---|---|---|
batch_size |
int |
Batch size for mini-batch SGD. | 50 |
center |
bool |
Perform mean-centering on the dataset. It is useful when the centroid of the data is far from the origin. | False |
copy_all_inputs |
bool |
If specified, all input parameters will be deep copied before the method is run. This is useful for debugging problems where the input parameters are being modified by the algorithm, but can slow down the code. Only exists in Python binding. | False |
distance |
matrix |
Initial distance matrix to be used as starting point | np.empty([0, 0]) |
input |
matrix |
Input dataset to run LMNN on. | **--** |
k |
int |
Number of target neighbors to use for each datapoint. | 1 |
labels |
int vector |
Labels for input dataset. | np.empty([0], dtype=np.uint64) |
linear_scan |
bool |
Don't shuffle the order in which data points are visited for SGD or mini-batch SGD. | False |
max_iterations |
int |
Maximum number of iterations for L-BFGS (0 indicates no limit). | 100000 |
normalize |
bool |
Use a normalized starting point for optimization. It is useful when points are far apart, or when SGD is returning NaN. | False |
optimizer |
str |
Optimizer to use; 'amsgrad', 'bbsgd', 'sgd', or 'lbfgs'. | 'amsgrad' |
passes |
int |
Maximum number of full passes over dataset for AMSGrad, BB_SGD and SGD. | 50 |
print_accuracy |
bool |
Print accuracies on the initial and transformed datasets. | False |
range |
int |
Number of iterations after which impostors need to be recalculated. | 1 |
rank |
int |
Rank of distance matrix to be optimized. | 0 |
regularization |
float |
Regularization for LMNN objective function | 0.5 |
seed |
int |
Random seed. If 0, 'std::time(NULL)' is used. | 0 |
step_size |
float |
Step size for AMSGrad, BB_SGD and SGD (alpha). | 0.01 |
tolerance |
float |
Maximum tolerance for termination of AMSGrad, BB_SGD, SGD or L-BFGS. | 1e-07 |
verbose |
bool |
Display informational messages and the full list of parameters and timers at the end of execution. | False |
Results are returned in a Python dictionary. The keys of the dictionary are the names of the output parameters.
name | type | description |
---|---|---|
centered_data |
matrix |
Output matrix for mean-centered dataset. |
output |
matrix |
Output matrix for learned distance matrix. |
transformed_data |
matrix |
Output matrix for transformed dataset. |
{: #python_lmnn_detailed-documentation }
This program implements Large Margin Nearest Neighbors, a distance learning technique. The method seeks to improve k-nearest-neighbor classification on a dataset. The method employs the strategy of reducing the distance between similarly labeled data points (a.k.a. target neighbors) and increasing the distance between differently labeled points (a.k.a. impostors), using standard optimization techniques over the gradient of the distance between data points.
To work, this algorithm needs labeled data. It can be given as the last row of the input dataset (specified with input
), or alternatively as a separate matrix (specified with labels
). Additionally, a starting point for optimization (specified with distance
) can be given, having (r x d) dimensionality. Here r should satisfy 1 <= r <= d; consequently, a low-rank matrix will be optimized. Alternatively, a low-rank distance can be learned by specifying the rank
parameter (a low-rank matrix with uniformly distributed values will be used as the initial learning point).
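As an illustration of the low-rank option, here is a minimal sketch with the Python binding on hypothetical random data; rank=2 requests a (2 x d) transformation:

```python
import numpy as np
from mlpack import lmnn

# Hypothetical labeled data: 100 points in 6 dimensions, 3 classes.
X = np.random.rand(100, 6)
y = np.random.randint(0, 3, 100)

# Learn a low-rank (2 x 6) distance matrix using 3 target neighbors.
result = lmnn(input=X, labels=y, k=3, rank=2)

distance_matrix = result['output']        # the learned transformation matrix
transformed = result['transformed_data']  # the data mapped into the new space
```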
The program also requires the number of target neighbors to work with (specified with k
). A regularization parameter can also be passed; it acts as a trade-off between the pulling and pushing terms (specified with regularization
). In addition, this implementation of LMNN includes a parameter to decide the interval after which impostors must be recalculated (specified with range
).
Output can either be the learned distance matrix (specified with output
), or the transformed dataset (specified with transformed_data
), or both. Additionally, the mean-centered dataset (specified with centered_data
) can be accessed, provided that mean-centering (specified with center
) is performed on the dataset. Accuracy on the initial dataset and the final transformed dataset can be printed by specifying the print_accuracy
parameter.
This implementation of LMNN uses AMSGrad, BigBatch_SGD, stochastic gradient descent, mini-batch stochastic gradient descent, or the L-BFGS optimizer.
AMSGrad, specified by the value 'amsgrad' for the parameter optimizer
, uses the maximum of past squared gradients. It depends primarily on three parameters: the step size (specified with step_size
), the batch size (specified with batch_size
), and the maximum number of passes (specified with passes
). In addition, a normalized starting point can be used by specifying the normalize
parameter.
BigBatch_SGD, specified by the value 'bbsgd' for the parameter optimizer
, depends primarily on three parameters: the step size (specified with step_size
), the batch size (specified with batch_size
), and the maximum number of passes (specified with passes
). In addition, a normalized starting point can be used by specifying the normalize
parameter.
Stochastic gradient descent, specified by the value 'sgd' for the parameter optimizer
, depends primarily on three parameters: the step size (specified with step_size
), the batch size (specified with batch_size
), and the maximum number of passes (specified with passes
). In addition, a normalized starting point can be used by specifying the normalize
parameter. Furthermore, mean-centering can be performed on the dataset by specifying the center
parameter.
The L-BFGS optimizer, specified by the value 'lbfgs' for the parameter optimizer
, uses a back-tracking line search algorithm to minimize a function. The following parameters are used by L-BFGS: max_iterations
, tolerance
(the optimization is terminated when the gradient norm is below this value). For more details on the L-BFGS optimizer, consult either the mlpack L-BFGS documentation (in lbfgs.hpp) or the vast set of published literature on L-BFGS. In addition, a normalized starting point can be used by specifying the normalize
parameter.
By default, the AMSGrad optimizer is used.
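For instance, a call selecting SGD and tuning its three main parameters might look like the following (a sketch with illustrative values only; iris and iris_labels are assumed loaded):

>>> output = mlpack_lmnn(input=iris, labels=iris_labels, k=3, optimizer='sgd',
step_size=0.01, batch_size=50, passes=100, normalize=True)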
Example: suppose we want to learn a distance metric on the iris dataset, with 3 target neighbors, using the BigBatch_SGD optimizer. A simple call for this looks like:
>>> output = mlpack_lmnn(input=iris, labels=iris_labels, k=3,
optimizer='bbsgd')
>>> output = output['output']
Another program call, making use of the range and regularization parameters, with a dataset having labels as the last column, can be made as follows:
>>> output = mlpack_lmnn(input=letter_recognition, k=5, range=10,
regularization=0.4)
>>> output = output['output']
name | type | description | default |
---|---|---|---|
batch_size | Int | Batch size for mini-batch SGD. | 50 |
center | Bool | Perform mean-centering on the dataset. It is useful when the centroid of the data is far from the origin. | false |
distance | Float64 matrix-like | Initial distance matrix to be used as starting point. | zeros(0, 0) |
input | Float64 matrix-like | Input dataset to run LMNN on. | **--** |
k | Int | Number of target neighbors to use for each datapoint. | 1 |
labels | Int vector-like | Labels for input dataset. | Int[] |
linear_scan | Bool | Don't shuffle the order in which data points are visited for SGD or mini-batch SGD. | false |
max_iterations | Int | Maximum number of iterations for L-BFGS (0 indicates no limit). | 100000 |
normalize | Bool | Use a normalized starting point for optimization. It is useful when points are far apart, or when SGD is returning NaN. | false |
optimizer | String | Optimizer to use; 'amsgrad', 'bbsgd', 'sgd', or 'lbfgs'. | "amsgrad" |
passes | Int | Maximum number of full passes over dataset for AMSGrad, BB_SGD and SGD. | 50 |
print_accuracy | Bool | Print accuracies on initial and transformed dataset. | false |
range | Int | Number of iterations after which impostors need to be recalculated. | 1 |
rank | Int | Rank of distance matrix to be optimized. | 0 |
regularization | Float64 | Regularization for LMNN objective function. | 0.5 |
seed | Int | Random seed. If 0, 'std::time(NULL)' is used. | 0 |
step_size | Float64 | Step size for AMSGrad, BB_SGD and SGD (alpha). | 0.01 |
tolerance | Float64 | Maximum tolerance for termination of AMSGrad, BB_SGD, SGD or L-BFGS. | 1e-07 |
verbose | Bool | Display informational messages and the full list of parameters and timers at the end of execution. | false |
Results are returned as a tuple, and can be unpacked directly into return values or stored directly as a tuple; undesired results can be ignored with the _ keyword.
name | type | description |
---|---|---|
centered_data | Float64 matrix-like | Output matrix for mean-centered dataset. |
output | Float64 matrix-like | Output matrix for learned distance matrix. |
transformed_data | Float64 matrix-like | Output matrix for transformed dataset. |
{: #julia_lmnn_detailed-documentation }
This program implements Large Margin Nearest Neighbors, a distance learning technique. The method seeks to improve k-nearest-neighbor classification on a dataset. The method employs the strategy of reducing the distance between similarly labeled data points (a.k.a. target neighbors) and increasing the distance between differently labeled points (a.k.a. impostors), using standard optimization techniques over the gradient of the distance between data points.
To work, this algorithm needs labeled data. It can be given as the last row of the input dataset (specified with input), or alternatively as a separate matrix (specified with labels). Additionally, a starting point for optimization (specified with distance) can be given, having (r x d) dimensionality. Here r should satisfy 1 <= r <= d; consequently, a low-rank matrix will be optimized. Alternatively, a low-rank distance matrix can be learned by specifying the rank parameter (a low-rank matrix with uniformly distributed values will be used as the initial learning point).
The program also requires the number of target neighbors to work with (specified with k). A regularization parameter can also be passed; it acts as a trade-off between the pulling and pushing terms (specified with regularization). In addition, this implementation of LMNN includes a parameter to decide the interval after which impostors must be recalculated (specified with range).
Output can either be the learned distance matrix (specified with output), or the transformed dataset (specified with transformed_data), or both. Additionally, the mean-centered dataset (specified with centered_data) can be accessed, given that mean-centering (specified with center) is performed on the dataset. Accuracy on the initial dataset and the final transformed dataset can be printed by specifying the print_accuracy parameter.
This implementation of LMNN uses AMSGrad, BigBatch_SGD, stochastic gradient descent, mini-batch stochastic gradient descent, or the L-BFGS optimizer.
AMSGrad, specified by the value 'amsgrad' for the parameter optimizer, uses the maximum of past squared gradients. It depends primarily on three parameters: the step size (specified with step_size), the batch size (specified with batch_size), and the maximum number of passes (specified with passes). In addition, a normalized starting point can be used by specifying the normalize parameter.
BigBatch_SGD, specified by the value 'bbsgd' for the parameter optimizer, depends primarily on three parameters: the step size (specified with step_size), the batch size (specified with batch_size), and the maximum number of passes (specified with passes). In addition, a normalized starting point can be used by specifying the normalize parameter.
Stochastic gradient descent, specified by the value 'sgd' for the parameter optimizer
, depends primarily on three parameters: the step size (specified with step_size
), the batch size (specified with batch_size
), and the maximum number of passes (specified with passes
). In addition, a normalized starting point can be used by specifying the normalize
parameter. Furthermore, mean-centering can be performed on the dataset by specifying the center
parameter.
The L-BFGS optimizer, specified by the value 'lbfgs' for the parameter optimizer
, uses a back-tracking line search algorithm to minimize a function. The following parameters are used by L-BFGS: max_iterations
, tolerance
(the optimization is terminated when the gradient norm is below this value). For more details on the L-BFGS optimizer, consult either the mlpack L-BFGS documentation (in lbfgs.hpp) or the vast set of published literature on L-BFGS. In addition, a normalized starting point can be used by specifying the normalize
parameter.
By default, the AMSGrad optimizer is used.
Example: suppose we want to learn a distance metric on the iris dataset, with 3 target neighbors, using the BigBatch_SGD optimizer. A simple call for this looks like:
julia> using CSV
julia> iris = CSV.read("iris.csv")
julia> iris_labels = CSV.read("iris_labels.csv"; type=Int)
julia> _, output, _ = mlpack_lmnn(iris; k=3, labels=iris_labels,
optimizer="bbsgd")
Another program call, making use of the range and regularization parameters, with a dataset having labels as the last column, can be made as follows:
julia> using CSV
julia> letter_recognition = CSV.read("letter_recognition.csv")
julia> _, output, _ = mlpack_lmnn(letter_recognition; k=5, range=10,
regularization=0.4)
There are two types of input options: required options, which are passed directly to the function call, and optional options, which are passed via an initialized struct, which allows keyword access to each of the options.
name | type | description | default |
---|---|---|---|
BatchSize | int | Batch size for mini-batch SGD. | 50 |
Center | bool | Perform mean-centering on the dataset. It is useful when the centroid of the data is far from the origin. | false |
Distance | *mat.Dense | Initial distance matrix to be used as starting point. | mat.NewDense(1, 1, nil) |
input | *mat.Dense | Input dataset to run LMNN on. | **--** |
K | int | Number of target neighbors to use for each datapoint. | 1 |
Labels | *mat.Dense (1d with ints) | Labels for input dataset. | mat.NewDense(1, 1, nil) |
LinearScan | bool | Don't shuffle the order in which data points are visited for SGD or mini-batch SGD. | false |
MaxIterations | int | Maximum number of iterations for L-BFGS (0 indicates no limit). | 100000 |
Normalize | bool | Use a normalized starting point for optimization. It is useful when points are far apart, or when SGD is returning NaN. | false |
Optimizer | string | Optimizer to use; 'amsgrad', 'bbsgd', 'sgd', or 'lbfgs'. | "amsgrad" |
Passes | int | Maximum number of full passes over dataset for AMSGrad, BB_SGD and SGD. | 50 |
PrintAccuracy | bool | Print accuracies on initial and transformed dataset. | false |
Range | int | Number of iterations after which impostors need to be recalculated. | 1 |
Rank | int | Rank of distance matrix to be optimized. | 0 |
Regularization | float64 | Regularization for LMNN objective function. | 0.5 |
Seed | int | Random seed. If 0, 'std::time(NULL)' is used. | 0 |
StepSize | float64 | Step size for AMSGrad, BB_SGD and SGD (alpha). | 0.01 |
Tolerance | float64 | Maximum tolerance for termination of AMSGrad, BB_SGD, SGD or L-BFGS. | 1e-07 |
Verbose | bool | Display informational messages and the full list of parameters and timers at the end of execution. | false |
Output options are returned via Go's support for multiple return values, in the order listed below.
name | type | description |
---|---|---|
centeredData | *mat.Dense | Output matrix for mean-centered dataset. |
output | *mat.Dense | Output matrix for learned distance matrix. |
transformedData | *mat.Dense | Output matrix for transformed dataset. |
{: #go_lmnn_detailed-documentation }
This program implements Large Margin Nearest Neighbors, a distance learning technique. The method seeks to improve k-nearest-neighbor classification on a dataset. The method employs the strategy of reducing the distance between similarly labeled data points (a.k.a. target neighbors) and increasing the distance between differently labeled points (a.k.a. impostors), using standard optimization techniques over the gradient of the distance between data points.
To work, this algorithm needs labeled data. It can be given as the last row of the input dataset (specified with Input), or alternatively as a separate matrix (specified with Labels). Additionally, a starting point for optimization (specified with Distance) can be given, having (r x d) dimensionality. Here r should satisfy 1 <= r <= d; consequently, a low-rank matrix will be optimized. Alternatively, a low-rank distance matrix can be learned by specifying the Rank parameter (a low-rank matrix with uniformly distributed values will be used as the initial learning point).
The program also requires the number of target neighbors to work with (specified with K). A regularization parameter can also be passed; it acts as a trade-off between the pulling and pushing terms (specified with Regularization). In addition, this implementation of LMNN includes a parameter to decide the interval after which impostors must be recalculated (specified with Range).
Output can either be the learned distance matrix (specified with Output), or the transformed dataset (specified with TransformedData), or both. Additionally, the mean-centered dataset (specified with CenteredData) can be accessed, given that mean-centering (specified with Center) is performed on the dataset. Accuracy on the initial dataset and the final transformed dataset can be printed by specifying the PrintAccuracy parameter.
This implementation of LMNN uses AMSGrad, BigBatch_SGD, stochastic gradient descent, mini-batch stochastic gradient descent, or the L-BFGS optimizer.
AMSGrad, specified by the value 'amsgrad' for the parameter Optimizer, uses the maximum of past squared gradients. It depends primarily on three parameters: the step size (specified with StepSize), the batch size (specified with BatchSize), and the maximum number of passes (specified with Passes). In addition, a normalized starting point can be used by specifying the Normalize parameter.
BigBatch_SGD, specified by the value 'bbsgd' for the parameter Optimizer, depends primarily on three parameters: the step size (specified with StepSize), the batch size (specified with BatchSize), and the maximum number of passes (specified with Passes). In addition, a normalized starting point can be used by specifying the Normalize parameter.
Stochastic gradient descent, specified by the value 'sgd' for the parameter Optimizer
, depends primarily on three parameters: the step size (specified with StepSize
), the batch size (specified with BatchSize
), and the maximum number of passes (specified with Passes
). In addition, a normalized starting point can be used by specifying the Normalize
parameter. Furthermore, mean-centering can be performed on the dataset by specifying the Center
parameter.
The L-BFGS optimizer, specified by the value 'lbfgs' for the parameter Optimizer
, uses a back-tracking line search algorithm to minimize a function. The following parameters are used by L-BFGS: MaxIterations
, Tolerance
(the optimization is terminated when the gradient norm is below this value). For more details on the L-BFGS optimizer, consult either the mlpack L-BFGS documentation (in lbfgs.hpp) or the vast set of published literature on L-BFGS. In addition, a normalized starting point can be used by specifying the Normalize
parameter.
By default, the AMSGrad optimizer is used.
Example: suppose we want to learn a distance metric on the iris dataset, with 3 target neighbors, using the BigBatch_SGD optimizer. A simple call for this looks like:
// Initialize optional parameters for MlpackLmnn().
param := mlpack.MlpackLmnnOptions()
param.Labels = iris_labels
param.K = 3
param.Optimizer = "bbsgd"
_, output, _ := mlpack.MlpackLmnn(iris, param)
Another program call, making use of the range and regularization parameters, with a dataset having labels as the last column, can be made as follows:
// Initialize optional parameters for MlpackLmnn().
param := mlpack.MlpackLmnnOptions()
param.K = 5
param.Range = 10
param.Regularization = 0.4
_, output, _ := mlpack.MlpackLmnn(letter_recognition, param)
name | type | description | default |
---|---|---|---|
batch_size | integer | Batch size for mini-batch SGD. | 50 |
center | logical | Perform mean-centering on the dataset. It is useful when the centroid of the data is far from the origin. | FALSE |
distance | numeric matrix | Initial distance matrix to be used as starting point. | matrix(numeric(), 0, 0) |
input | numeric matrix | Input dataset to run LMNN on. | **--** |
k | integer | Number of target neighbors to use for each datapoint. | 1 |
labels | integer vector | Labels for input dataset. | matrix(integer(), 0, 0) |
linear_scan | logical | Don't shuffle the order in which data points are visited for SGD or mini-batch SGD. | FALSE |
max_iterations | integer | Maximum number of iterations for L-BFGS (0 indicates no limit). | 100000 |
normalize | logical | Use a normalized starting point for optimization. It is useful when points are far apart, or when SGD is returning NaN. | FALSE |
optimizer | character | Optimizer to use; 'amsgrad', 'bbsgd', 'sgd', or 'lbfgs'. | "amsgrad" |
passes | integer | Maximum number of full passes over dataset for AMSGrad, BB_SGD and SGD. | 50 |
print_accuracy | logical | Print accuracies on initial and transformed dataset. | FALSE |
range | integer | Number of iterations after which impostors need to be recalculated. | 1 |
rank | integer | Rank of distance matrix to be optimized. | 0 |
regularization | numeric | Regularization for LMNN objective function. | 0.5 |
seed | integer | Random seed. If 0, 'std::time(NULL)' is used. | 0 |
step_size | numeric | Step size for AMSGrad, BB_SGD and SGD (alpha). | 0.01 |
tolerance | numeric | Maximum tolerance for termination of AMSGrad, BB_SGD, SGD or L-BFGS. | 1e-07 |
verbose | logical | Display informational messages and the full list of parameters and timers at the end of execution. | FALSE |
Results are returned in an R list. The keys of the list are the names of the output parameters.
name | type | description |
---|---|---|
centered_data | numeric matrix | Output matrix for mean-centered dataset. |
output | numeric matrix | Output matrix for learned distance matrix. |
transformed_data | numeric matrix | Output matrix for transformed dataset. |
{: #r_lmnn_detailed-documentation }
This program implements Large Margin Nearest Neighbors, a distance learning technique. The method seeks to improve k-nearest-neighbor classification on a dataset. The method employs the strategy of reducing the distance between similarly labeled data points (a.k.a. target neighbors) and increasing the distance between differently labeled points (a.k.a. impostors), using standard optimization techniques over the gradient of the distance between data points.
To work, this algorithm needs labeled data. It can be given as the last row of the input dataset (specified with input), or alternatively as a separate matrix (specified with labels). Additionally, a starting point for optimization (specified with distance) can be given, having (r x d) dimensionality. Here r should satisfy 1 <= r <= d; consequently, a low-rank matrix will be optimized. Alternatively, a low-rank distance matrix can be learned by specifying the rank parameter (a low-rank matrix with uniformly distributed values will be used as the initial learning point).
The program also requires the number of target neighbors to work with (specified with k). A regularization parameter can also be passed; it acts as a trade-off between the pulling and pushing terms (specified with regularization). In addition, this implementation of LMNN includes a parameter to decide the interval after which impostors must be recalculated (specified with range).
Output can either be the learned distance matrix (specified with output), or the transformed dataset (specified with transformed_data), or both. Additionally, the mean-centered dataset (specified with centered_data) can be accessed, given that mean-centering (specified with center) is performed on the dataset. Accuracy on the initial dataset and the final transformed dataset can be printed by specifying the print_accuracy parameter.
This implementation of LMNN uses AMSGrad, BigBatch_SGD, stochastic gradient descent, mini-batch stochastic gradient descent, or the L-BFGS optimizer.
AMSGrad, specified by the value 'amsgrad' for the parameter optimizer, uses the maximum of past squared gradients. It depends primarily on three parameters: the step size (specified with step_size), the batch size (specified with batch_size), and the maximum number of passes (specified with passes). In addition, a normalized starting point can be used by specifying the normalize parameter.
BigBatch_SGD, specified by the value 'bbsgd' for the parameter optimizer, depends primarily on three parameters: the step size (specified with step_size), the batch size (specified with batch_size), and the maximum number of passes (specified with passes). In addition, a normalized starting point can be used by specifying the normalize parameter.
Stochastic gradient descent, specified by the value 'sgd' for the parameter optimizer
, depends primarily on three parameters: the step size (specified with step_size
), the batch size (specified with batch_size
), and the maximum number of passes (specified with passes
). In addition, a normalized starting point can be used by specifying the normalize
parameter. Furthermore, mean-centering can be performed on the dataset by specifying the center
parameter.
The L-BFGS optimizer, specified by the value 'lbfgs' for the parameter optimizer
, uses a back-tracking line search algorithm to minimize a function. The following parameters are used by L-BFGS: max_iterations
, tolerance
(the optimization is terminated when the gradient norm is below this value). For more details on the L-BFGS optimizer, consult either the mlpack L-BFGS documentation (in lbfgs.hpp) or the vast set of published literature on L-BFGS. In addition, a normalized starting point can be used by specifying the normalize
parameter.
By default, the AMSGrad optimizer is used.
Example: suppose we want to learn a distance metric on the iris dataset, with 3 target neighbors, using the BigBatch_SGD optimizer. A simple call for this looks like:
R> output <- mlpack_lmnn(input=iris, labels=iris_labels, k=3,
optimizer="bbsgd")
R> output <- output$output
Another program call, making use of the range and regularization parameters, with a dataset having labels as the last column, can be made as follows:
R> output <- mlpack_lmnn(input=letter_recognition, k=5, range=10,
regularization=0.4)
R> output <- output$output
// Initialize optional parameters for LocalCoordinateCoding().
param := mlpack.LocalCoordinateCodingOptions()
param.Atoms = 0
param.InitialDictionary = mat.NewDense(1, 1, nil)
param.InputModel = nil
param.Lambda = 0
param.MaxIterations = 0
param.Normalize = false
param.Seed = 0
param.Test = mat.NewDense(1, 1, nil)
param.Tolerance = 0.01
param.Training = mat.NewDense(1, 1, nil)
codes, dictionary, output_model := mlpack.LocalCoordinateCoding(param)
R> library(mlpack)
R> d <- local_coordinate_coding(atoms=0,
initial_dictionary=matrix(numeric(), 0, 0), input_model=NA, lambda=0,
max_iterations=0, normalize=FALSE, seed=0, test=matrix(numeric(), 0, 0),
tolerance=0.01, training=matrix(numeric(), 0, 0), verbose=FALSE)
R> codes <- d$codes
R> dictionary <- d$dictionary
R> output_model <- d$output_model
An implementation of Local Coordinate Coding (LCC), a data transformation technique. Given input data, this transforms each point to be expressed as a linear combination of a few points in the dataset; once an LCC model is trained, it can be used to transform points later also. Detailed documentation{: .language-detail-link #cli }Detailed documentation{: .language-detail-link #python }Detailed documentation{: .language-detail-link #julia }Detailed documentation{: .language-detail-link #go }Detailed documentation{: .language-detail-link #r }.
name | type | description | default |
---|---|---|---|
--atoms (-k) | int | Number of atoms in the dictionary. | 0 |
--help (-h) | flag | Default help info. Only exists in CLI binding. | |
--info | string | Print help on a specific option. Only exists in CLI binding. | '' |
--initial_dictionary_file (-i) | 2-d matrix file | Optional initial dictionary. | '' |
--input_model_file (-m) | LocalCoordinateCoding file | Input LCC model. | '' |
--lambda (-l) | double | Weighted l1-norm regularization parameter. | 0 |
--max_iterations (-n) | int | Maximum number of iterations for LCC (0 indicates no limit). | 0 |
--normalize (-N) | flag | If set, the input data matrix will be normalized before coding. | |
--seed (-s) | int | Random seed. If 0, 'std::time(NULL)' is used. | 0 |
--test_file (-T) | 2-d matrix file | Test points to encode. | '' |
--tolerance (-o) | double | Tolerance for objective function. | 0.01 |
--training_file (-t) | 2-d matrix file | Matrix of training data (X). | '' |
--verbose (-v) | flag | Display informational messages and the full list of parameters and timers at the end of execution. | |
--version (-V) | flag | Display the version of mlpack. Only exists in CLI binding. | |
name | type | description |
---|---|---|
--codes_file (-c) | 2-d matrix file | Output codes matrix. |
--dictionary_file (-d) | 2-d matrix file | Output dictionary matrix. |
--output_model_file (-M) | LocalCoordinateCoding file | Output for trained LCC model. |
{: #cli_local_coordinate_coding_detailed-documentation }
An implementation of Local Coordinate Coding (LCC), which codes data that approximately lives on a manifold using a variation of l1-norm regularized sparse coding. Given a dense data matrix X with n points and d dimensions, LCC seeks to find a dense dictionary matrix D with k atoms in d dimensions, and a coding matrix Z with n points in k dimensions. Because of the regularization method used, the atoms in D should lie close to the manifold on which the data points lie.
The original data matrix X can then be reconstructed as D * Z. Therefore, this program finds a representation of each point in X as a sparse linear combination of atoms in the dictionary D.
The coding is found with an algorithm which alternates between a dictionary step, which updates the dictionary D, and a coding step, which updates the coding matrix Z.
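For reference, the weighted l1-norm objective being minimized can be sketched as follows. This follows the standard LCC formulation of Yu et al. (2009); the notation (x_i for the i-th point, z_i for its coding, d_j for the j-th atom of D) is introduced here for illustration, and the internal implementation may differ in details:

$$ \min_{D, Z} \; \sum_{i=1}^{n} \lVert x_i - D z_i \rVert_2^2 + \lambda \sum_{i=1}^{n} \sum_{j=1}^{k} \lvert z_{ij} \rvert \, \lVert d_j - x_i \rVert_2^2 $$

The second term is the weighted l1-norm penalty: a coefficient z_ij is penalized more heavily when the atom d_j is far from the point x_i, which is what pushes the atoms of D toward the manifold on which the data lie.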
To run this program, the input matrix X must be specified (with --training_file (-t)), along with the number of atoms in the dictionary (with --atoms (-k)). An initial dictionary may also be specified with the --initial_dictionary_file (-i) parameter. The l1-norm regularization parameter is specified with the --lambda (-l) parameter.
For example, to run LCC on the dataset 'data.csv'
using 200 atoms and an l1-regularization parameter of 0.1, saving the dictionary --dictionary_file (-d)
and the codes into --codes_file (-c)
, use
$ mlpack_local_coordinate_coding --training_file data.csv --atoms 200 --lambda
0.1 --dictionary_file dict.csv --codes_file codes.csv
The maximum number of iterations may be specified with the --max_iterations (-n)
parameter. Optionally, the input data matrix X can be normalized before coding with the --normalize (-N)
parameter.
An LCC model may be saved using the --output_model_file (-M)
output parameter. Then, to encode new points from the dataset 'points.csv'
with the previously saved model 'lcc_model.bin'
, saving the new codes to 'new_codes.csv'
, the following command can be used:
$ mlpack_local_coordinate_coding --input_model_file lcc_model.bin --test_file
points.csv --codes_file new_codes.csv
name | type | description | default |
---|---|---|---|
atoms | int | Number of atoms in the dictionary. | 0 |
copy_all_inputs | bool | If specified, all input parameters will be deep copied before the method is run. This is useful for debugging problems where the input parameters are being modified by the algorithm, but can slow down the code. Only exists in Python binding. | False |
initial_dictionary | matrix | Optional initial dictionary. | np.empty([0, 0]) |
input_model | LocalCoordinateCodingType | Input LCC model. | None |
lambda_ | float | Weighted l1-norm regularization parameter. | 0 |
max_iterations | int | Maximum number of iterations for LCC (0 indicates no limit). | 0 |
normalize | bool | If set, the input data matrix will be normalized before coding. | False |
seed | int | Random seed. If 0, 'std::time(NULL)' is used. | 0 |
test | matrix | Test points to encode. | np.empty([0, 0]) |
tolerance | float | Tolerance for objective function. | 0.01 |
training | matrix | Matrix of training data (X). | np.empty([0, 0]) |
verbose | bool | Display informational messages and the full list of parameters and timers at the end of execution. | False |
Results are returned in a Python dictionary. The keys of the dictionary are the names of the output parameters.
name | type | description |
---|---|---|
codes | matrix | Output codes matrix. |
dictionary | matrix | Output dictionary matrix. |
output_model | LocalCoordinateCodingType | Output for trained LCC model. |
{: #python_local_coordinate_coding_detailed-documentation }
An implementation of Local Coordinate Coding (LCC), which codes data that approximately lives on a manifold using a variation of l1-norm regularized sparse coding. Given a dense data matrix X with n points and d dimensions, LCC seeks to find a dense dictionary matrix D with k atoms in d dimensions, and a coding matrix Z with n points in k dimensions. Because of the regularization method used, the atoms in D should lie close to the manifold on which the data points lie.
The original data matrix X can then be reconstructed as D * Z. Therefore, this program finds a representation of each point in X as a sparse linear combination of atoms in the dictionary D.
The coding is found with an algorithm which alternates between a dictionary step, which updates the dictionary D, and a coding step, which updates the coding matrix Z.
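Because each point is expressed as a combination of atoms, the quality of a coding can be checked by comparing D * Z against the original data. A minimal NumPy sketch (assuming a data array with one row per point, and that the returned codes and dictionary share that row-major orientation; a transpose may be needed depending on the binding's conventions):

>>> import numpy as np
>>> output = local_coordinate_coding(training=data, atoms=50, lambda_=0.1)
>>> reconstruction = np.dot(output['codes'], output['dictionary'])
>>> error = np.linalg.norm(data - reconstruction)  # small for a good coding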
To run this program, the input matrix X must be specified (with the training parameter), along with the number of atoms in the dictionary (with the atoms parameter). An initial dictionary may also be specified with the initial_dictionary parameter. The l1-norm regularization parameter is specified with the lambda_ parameter.
For example, to run LCC on the dataset 'data'
using 200 atoms and an l1-regularization parameter of 0.1, saving the dictionary dictionary
and the codes into codes
, use
>>> output = local_coordinate_coding(training=data, atoms=200, lambda_=0.1)
>>> dict = output['dictionary']
>>> codes = output['codes']
The maximum number of iterations may be specified with the max_iterations
parameter. Optionally, the input data matrix X can be normalized before coding with the normalize
parameter.
An LCC model may be saved using the output_model
output parameter. Then, to encode new points from the dataset 'points'
with the previously saved model 'lcc_model'
, saving the new codes to 'new_codes'
, the following command can be used:
>>> output = local_coordinate_coding(input_model=lcc_model, test=points)
>>> new_codes = output['codes']
name | type | description | default |
---|---|---|---|
atoms | Int | Number of atoms in the dictionary. | 0 |
initial_dictionary | Float64 matrix-like | Optional initial dictionary. | zeros(0, 0) |
input_model | LocalCoordinateCoding | Input LCC model. | nothing |
lambda | Float64 | Weighted l1-norm regularization parameter. | 0 |
max_iterations | Int | Maximum number of iterations for LCC (0 indicates no limit). | 0 |
normalize | Bool | If set, the input data matrix will be normalized before coding. | false |
seed | Int | Random seed. If 0, 'std::time(NULL)' is used. | 0 |
test | Float64 matrix-like | Test points to encode. | zeros(0, 0) |
tolerance | Float64 | Tolerance for objective function. | 0.01 |
training | Float64 matrix-like | Matrix of training data (X). | zeros(0, 0) |
verbose | Bool | Display informational messages and the full list of parameters and timers at the end of execution. | false |
Results are returned as a tuple, and can be unpacked directly into return values or stored directly as a tuple; undesired results can be ignored with the _ keyword.
name | type | description |
---|---|---|
codes | Float64 matrix-like | Output codes matrix. |
dictionary | Float64 matrix-like | Output dictionary matrix. |
output_model | LocalCoordinateCoding | Output for trained LCC model. |
{: #julia_local_coordinate_coding_detailed-documentation }
An implementation of Local Coordinate Coding (LCC), which codes data that approximately lives on a manifold using a variation of l1-norm regularized sparse coding. Given a dense data matrix X with n points and d dimensions, LCC seeks to find a dense dictionary matrix D with k atoms in d dimensions, and a coding matrix Z with n points in k dimensions. Because of the regularization method used, the atoms in D should lie close to the manifold on which the data points lie.
The original data matrix X can then be reconstructed as D * Z. Therefore, this program finds a representation of each point in X as a sparse linear combination of atoms in the dictionary D.
The coding is found with an algorithm which alternates between a dictionary step, which updates the dictionary D, and a coding step, which updates the coding matrix Z.
To run this program, the input matrix X must be specified (with the training parameter), along with the number of atoms in the dictionary (with the atoms parameter). An initial dictionary may also be specified with the initial_dictionary parameter. The l1-norm regularization parameter is specified with the lambda parameter.
For example, to run LCC on the dataset data
using 200 atoms and an l1-regularization parameter of 0.1, saving the dictionary dictionary
and the codes into codes
, use
julia> using CSV
julia> data = CSV.read("data.csv")
julia> codes, dict, _ = local_coordinate_coding(atoms=200,
lambda=0.1, training=data)
The maximum number of iterations may be specified with the max_iterations
parameter. Optionally, the input data matrix X can be normalized before coding with the normalize
parameter.
An LCC model may be saved using the output_model
output parameter. Then, to encode new points from the dataset points
with the previously saved model lcc_model
, saving the new codes to new_codes
, the following command can be used:
julia> using CSV
julia> points = CSV.read("points.csv")
julia> new_codes, _, _ =
local_coordinate_coding(input_model=lcc_model, test=points)
There are two types of input options: required options, which are passed directly to the function call, and optional options, which are passed via an initialized struct, which allows keyword access to each of the options.
name | type | description | default |
---|---|---|---|
Atoms | int | Number of atoms in the dictionary. | 0 |
InitialDictionary | *mat.Dense | Optional initial dictionary. | mat.NewDense(1, 1, nil) |
InputModel | localCoordinateCoding | Input LCC model. | nil |
Lambda | float64 | Weighted l1-norm regularization parameter. | 0 |
MaxIterations | int | Maximum number of iterations for LCC (0 indicates no limit). | 0 |
Normalize | bool | If set, the input data matrix will be normalized before coding. | false |
Seed | int | Random seed. If 0, 'std::time(NULL)' is used. | 0 |
Test | *mat.Dense | Test points to encode. | mat.NewDense(1, 1, nil) |
Tolerance | float64 | Tolerance for objective function. | 0.01 |
Training | *mat.Dense | Matrix of training data (X). | mat.NewDense(1, 1, nil) |
Verbose | bool | Display informational messages and the full list of parameters and timers at the end of execution. | false |
Output options are returned via Go's support for multiple return values, in the order listed below.
name | type | description |
---|---|---|
codes | *mat.Dense | Output codes matrix. |
dictionary | *mat.Dense | Output dictionary matrix. |
outputModel | localCoordinateCoding | Output for trained LCC model. |
{: #go_local_coordinate_coding_detailed-documentation }
An implementation of Local Coordinate Coding (LCC), which codes data that approximately lives on a manifold using a variation of l1-norm regularized sparse coding. Given a dense data matrix X with n points and d dimensions, LCC seeks to find a dense dictionary matrix D with k atoms in d dimensions, and a coding matrix Z with n points in k dimensions. Because of the regularization method used, the atoms in D should lie close to the manifold on which the data points lie.
The original data matrix X can then be reconstructed as D * Z. Therefore, this program finds a representation of each point in X as a sparse linear combination of atoms in the dictionary D.
The coding is found with an algorithm which alternates between a dictionary step, which updates the dictionary D, and a coding step, which updates the coding matrix Z.
To run this program, the input matrix X must be specified (with the Training parameter), along with the number of atoms in the dictionary (with the Atoms parameter). An initial dictionary may also be specified with the InitialDictionary parameter. The l1-norm regularization parameter is specified with the Lambda parameter.
For example, to run LCC on the dataset data
using 200 atoms and an l1-regularization parameter of 0.1, saving the dictionary Dictionary
and the codes into Codes
, use
// Initialize optional parameters for LocalCoordinateCoding().
param := mlpack.LocalCoordinateCodingOptions()
param.Training = data
param.Atoms = 200
param.Lambda = 0.1
codes, dict, _ := mlpack.LocalCoordinateCoding(param)
The maximum number of iterations may be specified with the MaxIterations
parameter. Optionally, the input data matrix X can be normalized before coding with the Normalize
parameter.
An LCC model may be saved using the OutputModel
output parameter. Then, to encode new points from the dataset points
with the previously saved model lcc_model
, saving the new codes to new_codes
, the following command can be used:
// Initialize optional parameters for LocalCoordinateCoding().
param := mlpack.LocalCoordinateCodingOptions()
param.InputModel = &lcc_model
param.Test = points
new_codes, _, _ := mlpack.LocalCoordinateCoding(param)
name | type | description | default |
---|---|---|---|
atoms | integer | Number of atoms in the dictionary. | 0 |
initial_dictionary | numeric matrix | Optional initial dictionary. | matrix(numeric(), 0, 0) |
input_model | LocalCoordinateCoding | Input LCC model. | NA |
lambda | numeric | Weighted l1-norm regularization parameter. | 0 |
max_iterations | integer | Maximum number of iterations for LCC (0 indicates no limit). | 0 |
normalize | logical | If set, the input data matrix will be normalized before coding. | FALSE |
seed | integer | Random seed. If 0, 'std::time(NULL)' is used. | 0 |
test | numeric matrix | Test points to encode. | matrix(numeric(), 0, 0) |
tolerance | numeric | Tolerance for objective function. | 0.01 |
training | numeric matrix | Matrix of training data (X). | matrix(numeric(), 0, 0) |
verbose | logical | Display informational messages and the full list of parameters and timers at the end of execution. | FALSE |
Results are returned in an R list. The keys of the list are the names of the output parameters.
name | type | description |
---|---|---|
codes | numeric matrix | Output codes matrix. |
dictionary | numeric matrix | Output dictionary matrix. |
output_model | LocalCoordinateCoding | Output for trained LCC model. |
{: #r_local_coordinate_coding_detailed-documentation }
An implementation of Local Coordinate Coding (LCC), which codes data that approximately lives on a manifold using a variation of l1-norm regularized sparse coding. Given a dense data matrix X with n points and d dimensions, LCC seeks to find a dense dictionary matrix D with k atoms in d dimensions, and a coding matrix Z with n points in k dimensions. Because of the regularization method used, the atoms in D should lie close to the manifold on which the data points lie.
The original data matrix X can then be reconstructed as D * Z. Therefore, this program finds a representation of each point in X as a sparse linear combination of atoms in the dictionary D.
The coding is found with an algorithm which alternates between a dictionary step, which updates the dictionary D, and a coding step, which updates the coding matrix Z.
To run this program, the input matrix X must be specified (with the training parameter), along with the number of atoms in the dictionary (with the atoms parameter). An initial dictionary may also be specified with the initial_dictionary parameter. The l1-norm regularization parameter is specified with the lambda parameter.
For example, to run LCC on the dataset "data"
using 200 atoms and an l1-regularization parameter of 0.1, saving the dictionary dictionary
and the codes into codes
, use
R> output <- local_coordinate_coding(training=data, atoms=200, lambda=0.1)
R> dict <- output$dictionary
R> codes <- output$codes
The maximum number of iterations may be specified with the max_iterations
parameter. Optionally, the input data matrix X can be normalized before coding with the normalize
parameter.
An LCC model may be saved using the output_model
output parameter. Then, to encode new points from the dataset "points"
with the previously saved model "lcc_model"
, saving the new codes to "new_codes"
, the following command can be used:
R> output <- local_coordinate_coding(input_model=lcc_model, test=points)
R> new_codes <- output$codes
// Initialize optional parameters for LogisticRegression().
param := mlpack.LogisticRegressionOptions()
param.BatchSize = 64
param.DecisionBoundary = 0.5
param.InputModel = nil
param.Labels = mat.NewDense(1, 1, nil)
param.Lambda = 0
param.MaxIterations = 10000
param.Optimizer = "lbfgs"
param.StepSize = 0.01
param.Test = mat.NewDense(1, 1, nil)
param.Tolerance = 1e-10
param.Training = mat.NewDense(1, 1, nil)
output, output_model, output_probabilities, predictions, probabilities := mlpack.LogisticRegression(param)
R> library(mlpack)
R> d <- logistic_regression(batch_size=64, decision_boundary=0.5,
input_model=NA, labels=matrix(integer(), 0, 0), lambda=0,
max_iterations=10000, optimizer="lbfgs", step_size=0.01,
test=matrix(numeric(), 0, 0), tolerance=1e-10,
training=matrix(numeric(), 0, 0), verbose=FALSE)
R> output <- d$output
R> output_model <- d$output_model
R> output_probabilities <- d$output_probabilities
R> predictions <- d$predictions
R> probabilities <- d$probabilities
An implementation of L2-regularized logistic regression for two-class classification. Given labeled data, a model can be trained and saved for future use; or, a pre-trained model can be used to classify new points. Detailed documentation{: .language-detail-link #cli }Detailed documentation{: .language-detail-link #python }Detailed documentation{: .language-detail-link #julia }Detailed documentation{: .language-detail-link #go }Detailed documentation{: .language-detail-link #r }.
name | type | description | default |
---|---|---|---|
--batch_size (-b) | int | Batch size for SGD. | 64 |
--decision_boundary (-d) | double | Decision boundary for prediction; if the logistic function for a point is less than the boundary, the class is taken to be 0; otherwise, the class is 1. | 0.5 |
--help (-h) | flag | Default help info. Only exists in CLI binding. | |
--info | string | Print help on a specific option. Only exists in CLI binding. | '' |
--input_model_file (-m) | LogisticRegression<> file | Existing model (parameters). | '' |
--labels_file (-l) | 1-d index matrix file | A matrix containing labels (0 or 1) for the points in the training set (y). | '' |
--lambda (-L) | double | L2-regularization parameter for training. | 0 |
--max_iterations (-n) | int | Maximum iterations for optimizer (0 indicates no limit). | 10000 |
--optimizer (-O) | string | Optimizer to use for training ('lbfgs' or 'sgd'). | 'lbfgs' |
--step_size (-s) | double | Step size for SGD optimizer. | 0.01 |
--test_file (-T) | 2-d matrix file | Matrix containing test dataset. | '' |
--tolerance (-e) | double | Convergence tolerance for optimizer. | 1e-10 |
--training_file (-t) | 2-d matrix file | A matrix containing the training set (the matrix of predictors, X). | '' |
--verbose (-v) | flag | Display informational messages and the full list of parameters and timers at the end of execution. | |
--version (-V) | flag | Display the version of mlpack. Only exists in CLI binding. | |
name | type | description |
---|---|---|
--output_file (-o) | 1-d index matrix file | If test data is specified, this matrix is where the predictions for the test set will be saved. |
--output_model_file (-M) | LogisticRegression<> file | Output for trained logistic regression model. |
--output_probabilities_file (-x) | 2-d matrix file | If test data is specified, this matrix is where the class probabilities for the test set will be saved. |
--predictions_file (-P) | 1-d index matrix file | If test data is specified, this matrix is where the predictions for the test set will be saved. |
--probabilities_file (-p) | 2-d matrix file | If test data is specified, this matrix is where the class probabilities for the test set will be saved. |
{: #cli_logistic_regression_detailed-documentation }
An implementation of L2-regularized logistic regression using either the L-BFGS optimizer or SGD (stochastic gradient descent). This solves the regression problem
y = 1 / (1 + e^(-(X * b)))
where y takes values 0 or 1.
This program allows loading a logistic regression model (via the --input_model_file (-m)
parameter) or training a logistic regression model given training data (specified with the --training_file (-t)
parameter), or both those things at once. In addition, this program allows classification on a test dataset (specified with the --test_file (-T)
parameter) and the classification results may be saved with the --predictions_file (-P)
output parameter. The trained logistic regression model may be saved using the --output_model_file (-M)
output parameter.
The training data, if specified, may have class labels as its last dimension. Alternately, the --labels_file (-l)
parameter may be used to specify a separate matrix of labels.
When a model is being trained, there are many options. L2 regularization (to prevent overfitting) can be specified with the --lambda (-L)
option, and the optimizer used to train the model can be specified with the --optimizer (-O)
parameter. Available options are 'sgd' (stochastic gradient descent) and 'lbfgs' (the L-BFGS optimizer). There are also various parameters for the optimizer; the --max_iterations (-n)
parameter specifies the maximum number of allowed iterations, and the --tolerance (-e)
parameter specifies the tolerance for convergence. For the SGD optimizer, the --step_size (-s)
parameter controls the step size taken at each iteration by the optimizer. The batch size for SGD is controlled with the --batch_size (-b)
parameter. If the objective function for your data is oscillating between Inf and 0, the step size is probably too large. There are more parameters for the optimizers, but the C++ interface must be used to access these.
For SGD, an iteration refers to a single point. So to take a single pass over the dataset with SGD, --max_iterations (-n)
should be set to the number of points in the dataset.
Optionally, the model can be used to predict the responses for another matrix of data points, if --test_file (-T)
is specified. The --test_file (-T)
parameter can be specified without the --training_file (-t)
parameter, so long as an existing logistic regression model is given with the --input_model_file (-m)
parameter. The output predictions from the logistic regression model may be saved with the --predictions_file (-P)
parameter.
Note: the following parameters are deprecated and will be removed in mlpack 4: --output_file (-o) and --output_probabilities_file (-x). Use --predictions_file (-P) instead of --output_file (-o), and use --probabilities_file (-p) instead of --output_probabilities_file (-x).
This implementation of logistic regression does not support the general multi-class case but instead only the two-class case. Any labels must be either 0 or 1. For more classes, see the softmax_regression program.
As an example, to train a logistic regression model on the data 'data.csv' with labels 'labels.csv', with L2 regularization of 0.1, saving the model to 'lr_model.bin', the following command may be used:
$ mlpack_logistic_regression --training_file data.csv --labels_file labels.csv
--lambda 0.1 --output_model_file lr_model.bin
Then, to use that model to predict classes for the dataset 'test.csv', storing the output predictions in 'predictions.csv', the following command may be used:
$ mlpack_logistic_regression --input_model_file lr_model.bin --test_file
test.csv --output_file predictions.csv
name | type | description | default |
---|---|---|---|
batch_size | int | Batch size for SGD. | 64 |
copy_all_inputs | bool | If specified, all input parameters will be deep copied before the method is run. This is useful for debugging problems where the input parameters are being modified by the algorithm, but can slow down the code. Only exists in Python binding. | False |
decision_boundary | float | Decision boundary for prediction; if the logistic function for a point is less than the boundary, the class is taken to be 0; otherwise, the class is 1. | 0.5 |
input_model | LogisticRegression<>Type | Existing model (parameters). | None |
labels | int vector | A matrix containing labels (0 or 1) for the points in the training set (y). | np.empty([0], dtype=np.uint64) |
lambda_ | float | L2-regularization parameter for training. | 0 |
max_iterations | int | Maximum iterations for optimizer (0 indicates no limit). | 10000 |
optimizer | str | Optimizer to use for training ('lbfgs' or 'sgd'). | 'lbfgs' |
step_size | float | Step size for SGD optimizer. | 0.01 |
test | matrix | Matrix containing test dataset. | np.empty([0, 0]) |
tolerance | float | Convergence tolerance for optimizer. | 1e-10 |
training | matrix | A matrix containing the training set (the matrix of predictors, X). | np.empty([0, 0]) |
verbose | bool | Display informational messages and the full list of parameters and timers at the end of execution. | False |
Results are returned in a Python dictionary. The keys of the dictionary are the names of the output parameters.
name | type | description |
---|---|---|
output | int vector | If test data is specified, this matrix is where the predictions for the test set will be saved. |
output_model | LogisticRegression<>Type | Output for trained logistic regression model. |
output_probabilities | matrix | If test data is specified, this matrix is where the class probabilities for the test set will be saved. |
predictions | int vector | If test data is specified, this matrix is where the predictions for the test set will be saved. |
probabilities | matrix | If test data is specified, this matrix is where the class probabilities for the test set will be saved. |
{: #python_logistic_regression_detailed-documentation }
An implementation of L2-regularized logistic regression using either the L-BFGS optimizer or SGD (stochastic gradient descent). This solves the regression problem
y = 1 / (1 + e^(-(X * b)))
where y takes values 0 or 1.
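To make the role of the logistic function and the decision boundary concrete, here is a small NumPy sketch of the prediction rule (toy values only; b stands in for learned coefficients):

>>> import numpy as np
>>> X = np.array([[1.0, 2.0], [3.0, 4.0]])  # two points in two dimensions
>>> b = np.array([0.5, -0.25])              # hypothetical learned coefficients
>>> y = 1.0 / (1.0 + np.exp(-X.dot(b)))     # logistic function for each point
>>> classes = (y >= 0.5).astype(int)        # 0.5 is the default decision_boundary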
This program allows loading a logistic regression model (via the input_model
parameter) or training a logistic regression model given training data (specified with the training
parameter), or both those things at once. In addition, this program allows classification on a test dataset (specified with the test
parameter) and the classification results may be saved with the predictions
output parameter. The trained logistic regression model may be saved using the output_model
output parameter.
The training data, if specified, may have class labels as its last dimension. Alternately, the labels
parameter may be used to specify a separate matrix of labels.
When a model is being trained, there are many options. L2 regularization (to prevent overfitting) can be specified with the lambda_
option, and the optimizer used to train the model can be specified with the optimizer
parameter. Available options are 'sgd' (stochastic gradient descent) and 'lbfgs' (the L-BFGS optimizer). There are also various parameters for the optimizer; the max_iterations
parameter specifies the maximum number of allowed iterations, and the tolerance
parameter specifies the tolerance for convergence. For the SGD optimizer, the step_size
parameter controls the step size taken at each iteration by the optimizer. The batch size for SGD is controlled with the batch_size
parameter. If the objective function for your data is oscillating between Inf and 0, the step size is probably too large. There are more parameters for the optimizers, but the C++ interface must be used to access these.
For SGD, an iteration refers to a single point. So to take a single pass over the dataset with SGD, max_iterations
should be set to the number of points in the dataset.
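For example, roughly five passes of SGD over a dataset stored with one row per point could be requested like this (a sketch; data and labels are assumed to be loaded already):

>>> output = logistic_regression(training=data, labels=labels, optimizer='sgd',
max_iterations=5 * data.shape[0])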
Optionally, the model can be used to predict the responses for another matrix of data points, if test
is specified. The test
parameter can be specified without the training
parameter, so long as an existing logistic regression model is given with the input_model
parameter. The output predictions from the logistic regression model may be saved with the predictions
parameter.
Note: the following parameters are deprecated and will be removed in mlpack 4: output and output_probabilities. Use predictions instead of output, and use probabilities instead of output_probabilities.
This implementation of logistic regression does not support the general multi-class case but instead only the two-class case. Any labels must be either 0 or 1. For more classes, see the softmax_regression program.
As an example, to train a logistic regression model on the data 'data' with labels 'labels', with L2 regularization of 0.1, saving the model to 'lr_model', the following command may be used:
>>> output = logistic_regression(training=data, labels=labels, lambda_=0.1)
>>> lr_model = output['output_model']
Then, to use that model to predict classes for the dataset 'test', storing the output predictions in 'predictions', the following command may be used:
>>> output = logistic_regression(input_model=lr_model, test=test)
>>> predictions = output['output']
name | type | description | default |
---|---|---|---|
batch_size | Int | Batch size for SGD. | 64 |
decision_boundary | Float64 | Decision boundary for prediction; if the logistic function for a point is less than the boundary, the class is taken to be 0; otherwise, the class is 1. | 0.5 |
input_model | LogisticRegression | Existing model (parameters). | nothing |
labels | Int vector-like | A matrix containing labels (0 or 1) for the points in the training set (y). | Int[] |
lambda | Float64 | L2-regularization parameter for training. | 0 |
max_iterations | Int | Maximum iterations for optimizer (0 indicates no limit). | 10000 |
optimizer | String | Optimizer to use for training ('lbfgs' or 'sgd'). | "lbfgs" |
step_size | Float64 | Step size for SGD optimizer. | 0.01 |
test | Float64 matrix-like | Matrix containing test dataset. | zeros(0, 0) |
tolerance | Float64 | Convergence tolerance for optimizer. | 1e-10 |
training | Float64 matrix-like | A matrix containing the training set (the matrix of predictors, X). | zeros(0, 0) |
verbose | Bool | Display informational messages and the full list of parameters and timers at the end of execution. | false |
Results are returned as a tuple, and can be unpacked directly into return values or stored directly as a tuple; undesired results can be ignored with the _ keyword.
name | type | description |
---|---|---|
output | Int vector-like | If test data is specified, this matrix is where the predictions for the test set will be saved. |
output_model | LogisticRegression | Output for trained logistic regression model. |
output_probabilities | Float64 matrix-like | If test data is specified, this matrix is where the class probabilities for the test set will be saved. |
predictions | Int vector-like | If test data is specified, this matrix is where the predictions for the test set will be saved. |
probabilities | Float64 matrix-like | If test data is specified, this matrix is where the class probabilities for the test set will be saved. |
{: #julia_logistic_regression_detailed-documentation }
An implementation of L2-regularized logistic regression using either the L-BFGS optimizer or SGD (stochastic gradient descent). This solves the regression problem
y = 1 / (1 + e^(-(X * b)))
where y takes values 0 or 1.
This program allows loading a logistic regression model (via the input_model
parameter) or training a logistic regression model given training data (specified with the training
parameter), or both those things at once. In addition, this program allows classification on a test dataset (specified with the test
parameter) and the classification results may be saved with the predictions
output parameter. The trained logistic regression model may be saved using the output_model
output parameter.
The training data, if specified, may have class labels as its last dimension. Alternately, the labels
parameter may be used to specify a separate matrix of labels.
When a model is being trained, there are many options. L2 regularization (to prevent overfitting) can be specified with the `lambda` option, and the optimizer used to train the model can be specified with the `optimizer` parameter. Available options are 'sgd' (stochastic gradient descent) and 'lbfgs' (the L-BFGS optimizer). There are also various parameters for the optimizer; the `max_iterations` parameter specifies the maximum number of allowed iterations, and the `tolerance` parameter specifies the tolerance for convergence. For the SGD optimizer, the `step_size` parameter controls the step size taken at each iteration by the optimizer. The batch size for SGD is controlled with the `batch_size` parameter. If the objective function for your data is oscillating between Inf and 0, the step size is probably too large. There are more parameters for the optimizers, but the C++ interface must be used to access these.

For SGD, an iteration refers to a single point. So to take a single pass over the dataset with SGD, `max_iterations` should be set to the number of points in the dataset.
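For instance, running SGD for five full passes over an N-point dataset means setting `max_iterations` to `5 * N`. A sketch using the Python binding for illustration (assuming the parameter and output names mirror the tables in this document; the data and labels here are made up):

```python
import numpy as np
from mlpack import logistic_regression

# Hypothetical training set: 1000 points in 10 dimensions, binary labels.
data = np.random.rand(1000, 10)
labels = (np.random.rand(1000) > 0.5).astype(np.uint64)

epochs = 5
# One SGD iteration = one point, so one epoch = data.shape[0] iterations.
output = logistic_regression(training=data, labels=labels, optimizer='sgd',
                             step_size=0.01,
                             max_iterations=epochs * data.shape[0])
lr_model = output['output_model']
```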
Optionally, the model can be used to predict the responses for another matrix of data points, if `test` is specified. The `test` parameter can be specified without the `training` parameter, so long as an existing logistic regression model is given with the `input_model` parameter. The output predictions from the logistic regression model may be saved with the `predictions` parameter.
Note: the following parameters are deprecated and will be removed in mlpack 4: `output`, `output_probabilities`. Use `predictions` instead of `output`, and use `probabilities` instead of `output_probabilities`.
This implementation of logistic regression does not support the general multi-class case but instead only the two-class case. Any labels must be either 0 or 1. For more classes, see the softmax_regression program.
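If your labels use some other encoding (for example -1/+1, or arbitrary class IDs), they must be remapped to 0/1 first. A small NumPy sketch, with a made-up `positive_class` value:

```python
import numpy as np

# Hypothetical raw labels that use class IDs 3 and 7.
raw_labels = np.array([3, 7, 7, 3, 7])
positive_class = 7

# Remap to the 0/1 encoding that logistic regression requires.
labels = (raw_labels == positive_class).astype(np.uint64)
print(labels)  # [0 1 1 0 1]
```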
As an example, to train a logistic regression model on the data `data` with labels `labels` with L2 regularization of 0.1, saving the model to `lr_model`, the following command may be used:
julia> using CSV
julia> data = CSV.read("data.csv")
julia> labels = CSV.read("labels.csv"; type=Int)
julia> _, lr_model, _, _, _ = logistic_regression(labels=labels,
lambda=0.1, training=data)
Then, to use that model to predict classes for the dataset `test`, storing the output predictions in `predictions`, the following command may be used:
julia> using CSV
julia> test = CSV.read("test.csv")
julia> predictions, _, _, _, _ =
logistic_regression(input_model=lr_model, test=test)
There are two types of input options: required options, which are passed directly to the function call, and optional options, which are passed via an initialized struct, which allows keyword access to each of the options.
name | type | description | default |
---|---|---|---|
BatchSize | int | Batch size for SGD. | 64 |
DecisionBoundary | float64 | Decision boundary for prediction; if the logistic function for a point is less than the boundary, the class is taken to be 0; otherwise, the class is 1. | 0.5 |
InputModel | logisticRegression | Existing model (parameters). | nil |
Labels | *mat.Dense (1d with ints) | A matrix containing labels (0 or 1) for the points in the training set (y). | mat.NewDense(1, 1, nil) |
Lambda | float64 | L2-regularization parameter for training. | 0 |
MaxIterations | int | Maximum iterations for optimizer (0 indicates no limit). | 10000 |
Optimizer | string | Optimizer to use for training ('lbfgs' or 'sgd'). | "lbfgs" |
StepSize | float64 | Step size for SGD optimizer. | 0.01 |
Test | *mat.Dense | Matrix containing test dataset. | mat.NewDense(1, 1, nil) |
Tolerance | float64 | Convergence tolerance for optimizer. | 1e-10 |
Training | *mat.Dense | A matrix containing the training set (the matrix of predictors, X). | mat.NewDense(1, 1, nil) |
Verbose | bool | Display informational messages and the full list of parameters and timers at the end of execution. | false |
Output options are returned via Go's support for multiple return values, in the order listed below.
name | type | description |
---|---|---|
output | *mat.Dense (1d with ints) | If test data is specified, this matrix is where the predictions for the test set will be saved. |
outputModel | logisticRegression | Output for trained logistic regression model. |
outputProbabilities | *mat.Dense | If test data is specified, this matrix is where the class probabilities for the test set will be saved. |
predictions | *mat.Dense (1d with ints) | If test data is specified, this matrix is where the predictions for the test set will be saved. |
probabilities | *mat.Dense | If test data is specified, this matrix is where the class probabilities for the test set will be saved. |
{: #go_logistic_regression_detailed-documentation }
An implementation of L2-regularized logistic regression using either the L-BFGS optimizer or SGD (stochastic gradient descent). This solves the regression problem
y = 1 / (1 + e^(-(X * b)))
where y takes values 0 or 1.
This program allows loading a logistic regression model (via the `InputModel` parameter) or training a logistic regression model given training data (specified with the `Training` parameter), or both those things at once. In addition, this program allows classification on a test dataset (specified with the `Test` parameter) and the classification results may be saved with the `Predictions` output parameter. The trained logistic regression model may be saved using the `OutputModel` output parameter.

The training data, if specified, may have class labels as its last dimension. Alternately, the `Labels` parameter may be used to specify a separate matrix of labels.
When a model is being trained, there are many options. L2 regularization (to prevent overfitting) can be specified with the `Lambda` option, and the optimizer used to train the model can be specified with the `Optimizer` parameter. Available options are 'sgd' (stochastic gradient descent) and 'lbfgs' (the L-BFGS optimizer). There are also various parameters for the optimizer; the `MaxIterations` parameter specifies the maximum number of allowed iterations, and the `Tolerance` parameter specifies the tolerance for convergence. For the SGD optimizer, the `StepSize` parameter controls the step size taken at each iteration by the optimizer. The batch size for SGD is controlled with the `BatchSize` parameter. If the objective function for your data is oscillating between Inf and 0, the step size is probably too large. There are more parameters for the optimizers, but the C++ interface must be used to access these.

For SGD, an iteration refers to a single point. So to take a single pass over the dataset with SGD, `MaxIterations` should be set to the number of points in the dataset.
Optionally, the model can be used to predict the responses for another matrix of data points, if `Test` is specified. The `Test` parameter can be specified without the `Training` parameter, so long as an existing logistic regression model is given with the `InputModel` parameter. The output predictions from the logistic regression model may be saved with the `Predictions` parameter.
Note: the following parameters are deprecated and will be removed in mlpack 4: `Output`, `OutputProbabilities`. Use `Predictions` instead of `Output`, and use `Probabilities` instead of `OutputProbabilities`.
This implementation of logistic regression does not support the general multi-class case but instead only the two-class case. Any labels must be either 0 or 1. For more classes, see the softmax_regression program.
As an example, to train a logistic regression model on the data `data` with labels `labels` with L2 regularization of 0.1, saving the model to `lr_model`, the following command may be used:
// Initialize optional parameters for LogisticRegression().
param := mlpack.LogisticRegressionOptions()
param.Training = data
param.Labels = labels
param.Lambda = 0.1
_, lr_model, _, _, _ := mlpack.LogisticRegression(param)
Then, to use that model to predict classes for the dataset `test`, storing the output predictions in `predictions`, the following command may be used:
// Initialize optional parameters for LogisticRegression().
param := mlpack.LogisticRegressionOptions()
param.InputModel = &lr_model
param.Test = test
predictions, _, _, _, _ := mlpack.LogisticRegression(param)
name | type | description | default |
---|---|---|---|
batch_size | integer | Batch size for SGD. | 64 |
decision_boundary | numeric | Decision boundary for prediction; if the logistic function for a point is less than the boundary, the class is taken to be 0; otherwise, the class is 1. | 0.5 |
input_model | LogisticRegression | Existing model (parameters). | NA |
labels | integer vector | A matrix containing labels (0 or 1) for the points in the training set (y). | matrix(integer(), 0, 0) |
lambda | numeric | L2-regularization parameter for training. | 0 |
max_iterations | integer | Maximum iterations for optimizer (0 indicates no limit). | 10000 |
optimizer | character | Optimizer to use for training ('lbfgs' or 'sgd'). | "lbfgs" |
step_size | numeric | Step size for SGD optimizer. | 0.01 |
test | numeric matrix | Matrix containing test dataset. | matrix(numeric(), 0, 0) |
tolerance | numeric | Convergence tolerance for optimizer. | 1e-10 |
training | numeric matrix | A matrix containing the training set (the matrix of predictors, X). | matrix(numeric(), 0, 0) |
verbose | logical | Display informational messages and the full list of parameters and timers at the end of execution. | FALSE |
Results are returned in an R list. The keys of the list are the names of the output parameters.
name | type | description |
---|---|---|
output | integer vector | If test data is specified, this matrix is where the predictions for the test set will be saved. |
output_model | LogisticRegression | Output for trained logistic regression model. |
output_probabilities | numeric matrix | If test data is specified, this matrix is where the class probabilities for the test set will be saved. |
predictions | integer vector | If test data is specified, this matrix is where the predictions for the test set will be saved. |
probabilities | numeric matrix | If test data is specified, this matrix is where the class probabilities for the test set will be saved. |
{: #r_logistic_regression_detailed-documentation }
An implementation of L2-regularized logistic regression using either the L-BFGS optimizer or SGD (stochastic gradient descent). This solves the regression problem
y = 1 / (1 + e^(-(X * b)))
where y takes values 0 or 1.
This program allows loading a logistic regression model (via the `input_model` parameter) or training a logistic regression model given training data (specified with the `training` parameter), or both those things at once. In addition, this program allows classification on a test dataset (specified with the `test` parameter) and the classification results may be saved with the `predictions` output parameter. The trained logistic regression model may be saved using the `output_model` output parameter.

The training data, if specified, may have class labels as its last dimension. Alternately, the `labels` parameter may be used to specify a separate matrix of labels.
When a model is being trained, there are many options. L2 regularization (to prevent overfitting) can be specified with the `lambda` option, and the optimizer used to train the model can be specified with the `optimizer` parameter. Available options are 'sgd' (stochastic gradient descent) and 'lbfgs' (the L-BFGS optimizer). There are also various parameters for the optimizer; the `max_iterations` parameter specifies the maximum number of allowed iterations, and the `tolerance` parameter specifies the tolerance for convergence. For the SGD optimizer, the `step_size` parameter controls the step size taken at each iteration by the optimizer. The batch size for SGD is controlled with the `batch_size` parameter. If the objective function for your data is oscillating between Inf and 0, the step size is probably too large. There are more parameters for the optimizers, but the C++ interface must be used to access these.

For SGD, an iteration refers to a single point. So to take a single pass over the dataset with SGD, `max_iterations` should be set to the number of points in the dataset.
Optionally, the model can be used to predict the responses for another matrix of data points, if `test` is specified. The `test` parameter can be specified without the `training` parameter, so long as an existing logistic regression model is given with the `input_model` parameter. The output predictions from the logistic regression model may be saved with the `predictions` parameter.
Note: the following parameters are deprecated and will be removed in mlpack 4: `output`, `output_probabilities`. Use `predictions` instead of `output`, and use `probabilities` instead of `output_probabilities`.
This implementation of logistic regression does not support the general multi-class case but instead only the two-class case. Any labels must be either 0 or 1. For more classes, see the softmax_regression program.
As an example, to train a logistic regression model on the data `"data"` with labels `"labels"` with L2 regularization of 0.1, saving the model to `"lr_model"`, the following command may be used:
R> output <- logistic_regression(training=data, labels=labels, lambda=0.1)
R> lr_model <- output$output_model
Then, to use that model to predict classes for the dataset `"test"`, storing the output predictions in `"predictions"`, the following command may be used:
R> output <- logistic_regression(input_model=lr_model, test=test)
R> predictions <- output$output
```go
// Initialize optional parameters for Lsh().
param := mlpack.LshOptions()
param.BucketSize = 500
param.HashWidth = 0
param.InputModel = nil
param.K = 0
param.NumProbes = 0
param.Projections = 10
param.Query = mat.NewDense(1, 1, nil)
param.Reference = mat.NewDense(1, 1, nil)
param.SecondHashSize = 99901
param.Seed = 0
param.Tables = 30
param.TrueNeighbors = mat.NewDense(1, 1, nil)

distances, neighbors, output_model := mlpack.Lsh(param)
```
```R
R> library(mlpack)
R> d <- lsh(bucket_size=500, hash_width=0, input_model=NA, k=0,
num_probes=0, projections=10, query=matrix(numeric(), 0, 0),
reference=matrix(numeric(), 0, 0), second_hash_size=99901, seed=0,
tables=30, true_neighbors=matrix(integer(), 0, 0), verbose=FALSE)
R> distances <- d$distances
R> neighbors <- d$neighbors
R> output_model <- d$output_model
```
An implementation of approximate k-nearest-neighbor search with locality-sensitive hashing (LSH). Given a set of reference points and a set of query points, this will compute the k approximate nearest neighbors of each query point in the reference set; models can be saved for future use. Detailed documentation{: .language-detail-link #cli }Detailed documentation{: .language-detail-link #python }Detailed documentation{: .language-detail-link #julia }Detailed documentation{: .language-detail-link #go }Detailed documentation{: .language-detail-link #r }.
name | type | description | default |
---|---|---|---|
--bucket_size (-B) | int | The size of a bucket in the second level hash. | 500 |
--hash_width (-H) | double | The hash width for the first-level hashing in the LSH preprocessing. By default, the LSH class automatically estimates a hash width for its use. | 0 |
--help (-h) | flag | Default help info. Only exists in CLI binding. | |
--info | string | Print help on a specific option. Only exists in CLI binding. | '' |
--input_model_file (-m) | LSHSearch<> file | Input LSH model. | '' |
--k (-k) | int | Number of nearest neighbors to find. | 0 |
--num_probes (-T) | int | Number of additional probes for multiprobe LSH; if 0, traditional LSH is used. | 0 |
--projections (-K) | int | The number of hash functions for each table. | 10 |
--query_file (-q) | 2-d matrix file | Matrix containing query points (optional). | '' |
--reference_file (-r) | 2-d matrix file | Matrix containing the reference dataset. | '' |
--second_hash_size (-S) | int | The size of the second level hash table. | 99901 |
--seed (-s) | int | Random seed. If 0, 'std::time(NULL)' is used. | 0 |
--tables (-L) | int | The number of hash tables to be used. | 30 |
--true_neighbors_file (-t) | 2-d index matrix file | Matrix of true neighbors to compute recall with (the recall is printed when -v is specified). | '' |
--verbose (-v) | flag | Display informational messages and the full list of parameters and timers at the end of execution. | |
--version (-V) | flag | Display the version of mlpack. Only exists in CLI binding. | |
name | type | description |
---|---|---|
--distances_file (-d) | 2-d matrix file | Matrix to output distances into. |
--neighbors_file (-n) | 2-d index matrix file | Matrix to output neighbors into. |
--output_model_file (-M) | LSHSearch<> file | Output for trained LSH model. |
{: #cli_lsh_detailed-documentation }
This program will calculate the k approximate-nearest-neighbors of a set of points using locality-sensitive hashing. You may specify a separate set of reference points and query points, or just a reference set which will be used as both the reference and query set.
For example, the following will return 5 neighbors from the data for each point in `input.csv` and store the distances in `distances.csv` and the neighbors in `neighbors.csv`:
$ mlpack_lsh --k 5 --reference_file input.csv --distances_file distances.csv
--neighbors_file neighbors.csv
The output is organized such that row i and column j in the neighbors output corresponds to the index of the point in the reference set which is the j'th nearest neighbor from the point in the query set with index i. Row j and column i in the distances output file corresponds to the distance between those two points.
Because this is approximate-nearest-neighbors search, results may be different from run to run. Thus, the `--seed (-s)` parameter can be specified to set the random seed.
This program also has many other parameters to control its functionality; see the parameter-specific documentation for more information.
name | type | description | default |
---|---|---|---|
bucket_size | int | The size of a bucket in the second level hash. | 500 |
copy_all_inputs | bool | If specified, all input parameters will be deep copied before the method is run. This is useful for debugging problems where the input parameters are being modified by the algorithm, but can slow down the code. Only exists in Python binding. | False |
hash_width | float | The hash width for the first-level hashing in the LSH preprocessing. By default, the LSH class automatically estimates a hash width for its use. | 0 |
input_model | LSHSearch<>Type | Input LSH model. | None |
k | int | Number of nearest neighbors to find. | 0 |
num_probes | int | Number of additional probes for multiprobe LSH; if 0, traditional LSH is used. | 0 |
projections | int | The number of hash functions for each table. | 10 |
query | matrix | Matrix containing query points (optional). | np.empty([0, 0]) |
reference | matrix | Matrix containing the reference dataset. | np.empty([0, 0]) |
second_hash_size | int | The size of the second level hash table. | 99901 |
seed | int | Random seed. If 0, 'std::time(NULL)' is used. | 0 |
tables | int | The number of hash tables to be used. | 30 |
true_neighbors | int matrix | Matrix of true neighbors to compute recall with (the recall is printed when -v is specified). | np.empty([0, 0], dtype=np.uint64) |
verbose | bool | Display informational messages and the full list of parameters and timers at the end of execution. | False |
Results are returned in a Python dictionary. The keys of the dictionary are the names of the output parameters.
name | type | description |
---|---|---|
distances | matrix | Matrix to output distances into. |
neighbors | int matrix | Matrix to output neighbors into. |
output_model | LSHSearch<>Type | Output for trained LSH model. |
{: #python_lsh_detailed-documentation }
This program will calculate the k approximate-nearest-neighbors of a set of points using locality-sensitive hashing. You may specify a separate set of reference points and query points, or just a reference set which will be used as both the reference and query set.
For example, the following will return 5 neighbors from the data for each point in `input` and store the distances in `distances` and the neighbors in `neighbors`:
>>> output = lsh(k=5, reference=input)
>>> distances = output['distances']
>>> neighbors = output['neighbors']
The output is organized such that row i and column j in the neighbors output corresponds to the index of the point in the reference set which is the j'th nearest neighbor from the point in the query set with index i. Row j and column i in the distances output file corresponds to the distance between those two points.
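As a concrete illustration of that layout, the following sketch (using the Python binding; the reference data is made up) prints the 3rd approximate nearest neighbor of query point 0 and the corresponding distance:

```python
import numpy as np
from mlpack import lsh

# Hypothetical reference set: 100 points in 5 dimensions.
reference = np.random.rand(100, 5)

output = lsh(k=5, reference=reference)
neighbors = output['neighbors']
distances = output['distances']

i, j = 0, 2  # row i = query point 0; column j = its (j+1)'th neighbor
print('index of 3rd nearest neighbor:', neighbors[i, j])
print('distance to that neighbor:   ', distances[i, j])
```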
Because this is approximate-nearest-neighbors search, results may be different from run to run. Thus, the `seed` parameter can be specified to set the random seed.
This program also has many other parameters to control its functionality; see the parameter-specific documentation for more information.
name | type | description | default |
---|---|---|---|
bucket_size | Int | The size of a bucket in the second level hash. | 500 |
hash_width | Float64 | The hash width for the first-level hashing in the LSH preprocessing. By default, the LSH class automatically estimates a hash width for its use. | 0 |
input_model | LSHSearch | Input LSH model. | nothing |
k | Int | Number of nearest neighbors to find. | 0 |
num_probes | Int | Number of additional probes for multiprobe LSH; if 0, traditional LSH is used. | 0 |
projections | Int | The number of hash functions for each table. | 10 |
query | Float64 matrix-like | Matrix containing query points (optional). | zeros(0, 0) |
reference | Float64 matrix-like | Matrix containing the reference dataset. | zeros(0, 0) |
second_hash_size | Int | The size of the second level hash table. | 99901 |
seed | Int | Random seed. If 0, 'std::time(NULL)' is used. | 0 |
tables | Int | The number of hash tables to be used. | 30 |
true_neighbors | Int matrix-like | Matrix of true neighbors to compute recall with (the recall is printed when -v is specified). | zeros(Int, 0, 0) |
verbose | Bool | Display informational messages and the full list of parameters and timers at the end of execution. | false |
Results are returned as a tuple, and can be unpacked directly into return values or stored directly as a tuple; undesired results can be ignored with the _ keyword.
name | type | description |
---|---|---|
distances | Float64 matrix-like | Matrix to output distances into. |
neighbors | Int matrix-like | Matrix to output neighbors into. |
output_model | LSHSearch | Output for trained LSH model. |
{: #julia_lsh_detailed-documentation }
This program will calculate the k approximate-nearest-neighbors of a set of points using locality-sensitive hashing. You may specify a separate set of reference points and query points, or just a reference set which will be used as both the reference and query set.
For example, the following will return 5 neighbors from the data for each point in `input` and store the distances in `distances` and the neighbors in `neighbors`:
julia> using CSV
julia> input = CSV.read("input.csv")
julia> distances, neighbors, _ = lsh(k=5, reference=input)
The output is organized such that row i and column j in the neighbors output corresponds to the index of the point in the reference set which is the j'th nearest neighbor from the point in the query set with index i. Row j and column i in the distances output file corresponds to the distance between those two points.
Because this is approximate-nearest-neighbors search, results may be different from run to run. Thus, the `seed` parameter can be specified to set the random seed.
This program also has many other parameters to control its functionality; see the parameter-specific documentation for more information.
There are two types of input options: required options, which are passed directly to the function call, and optional options, which are passed via an initialized struct, which allows keyword access to each of the options.
name | type | description | default |
---|---|---|---|
BucketSize | int | The size of a bucket in the second level hash. | 500 |
HashWidth | float64 | The hash width for the first-level hashing in the LSH preprocessing. By default, the LSH class automatically estimates a hash width for its use. | 0 |
InputModel | lshSearch | Input LSH model. | nil |
K | int | Number of nearest neighbors to find. | 0 |
NumProbes | int | Number of additional probes for multiprobe LSH; if 0, traditional LSH is used. | 0 |
Projections | int | The number of hash functions for each table. | 10 |
Query | *mat.Dense | Matrix containing query points (optional). | mat.NewDense(1, 1, nil) |
Reference | *mat.Dense | Matrix containing the reference dataset. | mat.NewDense(1, 1, nil) |
SecondHashSize | int | The size of the second level hash table. | 99901 |
Seed | int | Random seed. If 0, 'std::time(NULL)' is used. | 0 |
Tables | int | The number of hash tables to be used. | 30 |
TrueNeighbors | *mat.Dense (with ints) | Matrix of true neighbors to compute recall with (the recall is printed when -v is specified). | mat.NewDense(1, 1, nil) |
Verbose | bool | Display informational messages and the full list of parameters and timers at the end of execution. | false |
Output options are returned via Go's support for multiple return values, in the order listed below.
name | type | description |
---|---|---|
distances | *mat.Dense | Matrix to output distances into. |
neighbors | *mat.Dense (with ints) | Matrix to output neighbors into. |
outputModel | lshSearch | Output for trained LSH model. |
{: #go_lsh_detailed-documentation }
This program will calculate the k approximate-nearest-neighbors of a set of points using locality-sensitive hashing. You may specify a separate set of reference points and query points, or just a reference set which will be used as both the reference and query set.
For example, the following will return 5 neighbors from the data for each point in `input` and store the distances in `distances` and the neighbors in `neighbors`:
// Initialize optional parameters for Lsh().
param := mlpack.LshOptions()
param.K = 5
param.Reference = input
distances, neighbors, _ := mlpack.Lsh(param)
The output is organized such that row i and column j in the neighbors output corresponds to the index of the point in the reference set which is the j'th nearest neighbor from the point in the query set with index i. Row j and column i in the distances output file corresponds to the distance between those two points.
Because this is approximate-nearest-neighbors search, results may be different from run to run. Thus, the `Seed` parameter can be specified to set the random seed.
This program also has many other parameters to control its functionality; see the parameter-specific documentation for more information.
name | type | description | default |
---|---|---|---|
bucket_size | integer | The size of a bucket in the second level hash. | 500 |
hash_width | numeric | The hash width for the first-level hashing in the LSH preprocessing. By default, the LSH class automatically estimates a hash width for its use. | 0 |
input_model | LSHSearch | Input LSH model. | NA |
k | integer | Number of nearest neighbors to find. | 0 |
num_probes | integer | Number of additional probes for multiprobe LSH; if 0, traditional LSH is used. | 0 |
projections | integer | The number of hash functions for each table. | 10 |
query | numeric matrix | Matrix containing query points (optional). | matrix(numeric(), 0, 0) |
reference | numeric matrix | Matrix containing the reference dataset. | matrix(numeric(), 0, 0) |
second_hash_size | integer | The size of the second level hash table. | 99901 |
seed | integer | Random seed. If 0, 'std::time(NULL)' is used. | 0 |
tables | integer | The number of hash tables to be used. | 30 |
true_neighbors | integer matrix | Matrix of true neighbors to compute recall with (the recall is printed when -v is specified). | matrix(integer(), 0, 0) |
verbose | logical | Display informational messages and the full list of parameters and timers at the end of execution. | FALSE |
Results are returned in an R list. The keys of the list are the names of the output parameters.
name | type | description |
---|---|---|
distances | numeric matrix | Matrix to output distances into. |
neighbors | integer matrix | Matrix to output neighbors into. |
output_model | LSHSearch | Output for trained LSH model. |
{: #r_lsh_detailed-documentation }
This program will calculate the k approximate-nearest-neighbors of a set of points using locality-sensitive hashing. You may specify a separate set of reference points and query points, or just a reference set which will be used as both the reference and query set.
For example, the following will return 5 neighbors from the data for each point in `"input"` and store the distances in `"distances"` and the neighbors in `"neighbors"`:
R> output <- lsh(k=5, reference=input)
R> distances <- output$distances
R> neighbors <- output$neighbors
The output is organized such that row i and column j in the neighbors output corresponds to the index of the point in the reference set which is the j'th nearest neighbor from the point in the query set with index i. Row j and column i in the distances output file corresponds to the distance between those two points.
Because this is approximate-nearest-neighbors search, results may be different from run to run. Thus, the `seed` parameter can be specified to set the random seed.
This program also has many other parameters to control its functionality; see the parameter-specific documentation for more information.
```go
// Initialize optional parameters for MeanShift().
param := mlpack.MeanShiftOptions()
param.ForceConvergence = false
param.InPlace = false
param.LabelsOnly = false
param.MaxIterations = 1000
param.Radius = 0

centroid, output := mlpack.MeanShift(input, param)
```
```R
R> library(mlpack)
R> d <- mean_shift(force_convergence=FALSE, in_place=FALSE,
input=matrix(numeric(), 0, 0), labels_only=FALSE, max_iterations=1000,
radius=0, verbose=FALSE)
R> centroid <- d$centroid
R> output <- d$output
```
A fast implementation of mean-shift clustering using dual-tree range search. Given a dataset, this uses the mean shift algorithm to produce and return a clustering of the data. Detailed documentation{: .language-detail-link #cli }Detailed documentation{: .language-detail-link #python }Detailed documentation{: .language-detail-link #julia }Detailed documentation{: .language-detail-link #go }Detailed documentation{: .language-detail-link #r }.
name | type | description | default |
---|---|---|---|
--force_convergence (-f) | flag | If specified, the mean shift algorithm will continue running regardless of max_iterations until the clusters converge. | |
--help (-h) | flag | Default help info. Only exists in CLI binding. | |
--in_place (-P) | flag | If specified, a column containing the learned cluster assignments will be added to the input dataset file. In this case, --output_file is overridden. (Do not use with Python.) | |
--info | string | Print help on a specific option. Only exists in CLI binding. | '' |
--input_file (-i) | 2-d matrix file | Input dataset to perform clustering on. | **--** |
--labels_only (-l) | flag | If specified, only the output labels will be written to the file specified by --output_file. | |
--max_iterations (-m) | int | Maximum number of iterations before mean shift terminates. | 1000 |
--radius (-r) | double | If the distance between two centroids is less than the given radius, one will be removed. A radius of 0 or less means an estimate will be calculated and used for the radius. | 0 |
--verbose (-v) | flag | Display informational messages and the full list of parameters and timers at the end of execution. | |
--version (-V) | flag | Display the version of mlpack. Only exists in CLI binding. | |
name | type | description |
---|---|---|
--centroid_file (-C) | 2-d matrix file | If specified, the centroids of each cluster will be written to the given matrix. |
--output_file (-o) | 2-d matrix file | Matrix to write output labels or labeled data to. |
{: #cli_mean_shift_detailed-documentation }
This program performs mean shift clustering on the given dataset, storing the learned cluster assignments either as a column of labels in the input dataset or separately.
The input dataset should be specified with the `--input_file (-i)` parameter, and the radius used for search can be specified with the `--radius (-r)` parameter. The maximum number of iterations before algorithm termination is controlled with the `--max_iterations (-m)` parameter.

The output labels may be saved with the `--output_file (-o)` output parameter and the centroids of each cluster may be saved with the `--centroid_file (-C)` output parameter.

For example, to run mean shift clustering on the dataset `data.csv` and store the centroids to `centroids.csv`, the following command may be used:
$ mlpack_mean_shift --input_file data.csv --centroid_file centroids.csv
name | type | description | default |
---|---|---|---|
copy_all_inputs | bool | If specified, all input parameters will be deep copied before the method is run. This is useful for debugging problems where the input parameters are being modified by the algorithm, but can slow down the code. Only exists in Python binding. | False |
force_convergence | bool | If specified, the mean shift algorithm will continue running regardless of max_iterations until the clusters converge. | False |
in_place | bool | If specified, a column containing the learned cluster assignments will be added to the input dataset file. In this case, --output_file is overridden. (Do not use with Python.) | False |
input | matrix | Input dataset to perform clustering on. | **--** |
labels_only | bool | If specified, only the output labels will be written to the file specified by --output_file. | False |
max_iterations | int | Maximum number of iterations before mean shift terminates. | 1000 |
radius | float | If the distance between two centroids is less than the given radius, one will be removed. A radius of 0 or less means an estimate will be calculated and used for the radius. | 0 |
verbose | bool | Display informational messages and the full list of parameters and timers at the end of execution. | False |
Results are returned in a Python dictionary. The keys of the dictionary are the names of the output parameters.
name | type | description |
---|---|---|
centroid | matrix | If specified, the centroids of each cluster will be written to the given matrix. |
output | matrix | Matrix to write output labels or labeled data to. |
{: #python_mean_shift_detailed-documentation }
This program performs mean shift clustering on the given dataset, storing the learned cluster assignments either as a column of labels in the input dataset or separately.
The input dataset should be specified with the `input` parameter, and the radius used for search can be specified with the `radius` parameter. The maximum number of iterations before algorithm termination is controlled with the `max_iterations` parameter.

The output labels may be saved with the `output` output parameter and the centroids of each cluster may be saved with the `centroid` output parameter.

For example, to run mean shift clustering on the dataset `data` and store the centroids to `centroids`, the following command may be used:
>>> output = mean_shift(input=data)
>>> centroids = output['centroid']
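To also recover the per-point cluster assignments rather than only the centroids, the `output` value can be read as well; a sketch assuming `labels_only=True` so that `output` holds just the labels (the dataset here is made up):

```python
import numpy as np
from mlpack import mean_shift

# Hypothetical dataset: 200 points in 2 dimensions.
data = np.random.rand(200, 2)

# With labels_only=True, 'output' contains only the cluster assignments.
d = mean_shift(input=data, labels_only=True)
centroids = d['centroid']
assignments = d['output']
print(assignments[:10])
```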
name | type | description | default |
---|---|---|---|
force_convergence | Bool | If specified, the mean shift algorithm will continue running regardless of max_iterations until the clusters converge. | false |
in_place | Bool | If specified, a column containing the learned cluster assignments will be added to the input dataset file. In this case, --output_file is overridden. (Do not use with Python.) | false |
input | Float64 matrix-like | Input dataset to perform clustering on. | **--** |
labels_only | Bool | If specified, only the output labels will be written to the file specified by --output_file. | false |
max_iterations | Int | Maximum number of iterations before mean shift terminates. | 1000 |
radius | Float64 | If the distance between two centroids is less than the given radius, one will be removed. A radius of 0 or less means an estimate will be calculated and used for the radius. | 0 |
verbose | Bool | Display informational messages and the full list of parameters and timers at the end of execution. | false |
Results are returned as a tuple, and can be unpacked directly into return values or stored directly as a tuple; undesired results can be ignored with the _ keyword.
name | type | description |
---|---|---|
centroid | Float64 matrix-like | If specified, the centroids of each cluster will be written to the given matrix. |
output | Float64 matrix-like | Matrix to write output labels or labeled data to. |
{: #julia_mean_shift_detailed-documentation }
This program performs mean shift clustering on the given dataset, storing the learned cluster assignments either as a column of labels in the input dataset or separately.
The input dataset should be specified with the `input` parameter, and the radius used for search can be specified with the `radius` parameter. The maximum number of iterations before algorithm termination is controlled with the `max_iterations` parameter.

The output labels may be saved with the `output` output parameter and the centroids of each cluster may be saved with the `centroid` output parameter.

For example, to run mean shift clustering on the dataset `data` and store the centroids to `centroids`, the following command may be used:
julia> using CSV
julia> data = CSV.read("data.csv")
julia> centroids, _ = mean_shift(data)
There are two types of input options: required options, which are passed directly to the function call, and optional options, which are passed via an initialized struct, which allows keyword access to each of the options.
name | type | description | default |
---|---|---|---|
ForceConvergence | bool | If specified, the mean shift algorithm will continue running regardless of max_iterations until the clusters converge. | false |
InPlace | bool | If specified, a column containing the learned cluster assignments will be added to the input dataset file. In this case, --output_file is overridden. (Do not use with Python.) | false |
input | *mat.Dense | Input dataset to perform clustering on. | **--** |
LabelsOnly | bool | If specified, only the output labels will be written to the file specified by --output_file. | false |
MaxIterations | int | Maximum number of iterations before mean shift terminates. | 1000 |
Radius | float64 | If the distance between two centroids is less than the given radius, one will be removed. A radius of 0 or less means an estimate will be calculated and used for the radius. | 0 |
Verbose | bool | Display informational messages and the full list of parameters and timers at the end of execution. | false |
Output options are returned via Go's support for multiple return values, in the order listed below.
name | type | description |
---|---|---|
centroid | *mat.Dense | If specified, the centroids of each cluster will be written to the given matrix. |
output | *mat.Dense | Matrix to write output labels or labeled data to. |
{: #go_mean_shift_detailed-documentation }
This program performs mean shift clustering on the given dataset, storing the learned cluster assignments either as a column of labels in the input dataset or separately.
The input dataset should be specified with the `Input` parameter, and the radius used for search can be specified with the `Radius` parameter. The maximum number of iterations before algorithm termination is controlled with the `MaxIterations` parameter.

The output labels may be saved with the `Output` output parameter and the centroids of each cluster may be saved with the `Centroid` output parameter.

For example, to run mean shift clustering on the dataset `data` and store the centroids to `centroids`, the following command may be used:
// Initialize optional parameters for MeanShift().
param := mlpack.MeanShiftOptions()
centroids, _ := mlpack.MeanShift(data, param)
name | type | description | default |
---|---|---|---|
force_convergence | logical | If specified, the mean shift algorithm will continue running regardless of max_iterations until the clusters converge. | FALSE |
in_place | logical | If specified, a column containing the learned cluster assignments will be added to the input dataset file. In this case, --output_file is overridden. (Do not use with Python.) | FALSE |
input | numeric matrix | Input dataset to perform clustering on. | **--** |
labels_only | logical | If specified, only the output labels will be written to the file specified by --output_file. | FALSE |
max_iterations | integer | Maximum number of iterations before mean shift terminates. | 1000 |
radius | numeric | If the distance between two centroids is less than the given radius, one will be removed. A radius of 0 or less means an estimate will be calculated and used for the radius. | 0 |
verbose | logical | Display informational messages and the full list of parameters and timers at the end of execution. | FALSE |
Results are returned in an R list. The keys of the list are the names of the output parameters.
name | type | description |
---|---|---|
centroid | numeric matrix | If specified, the centroids of each cluster will be written to the given matrix. |
output | numeric matrix | Matrix to write output labels or labeled data to. |
{: #r_mean_shift_detailed-documentation }
This program performs mean shift clustering on the given dataset, storing the learned cluster assignments either as a column of labels in the input dataset or separately.
The input dataset should be specified with the `input` parameter, and the radius used for search can be specified with the `radius` parameter. The maximum number of iterations before algorithm termination is controlled with the `max_iterations` parameter.

The output labels may be saved with the `output` output parameter and the centroids of each cluster may be saved with the `centroid` output parameter.

For example, to run mean shift clustering on the dataset `"data"` and store the centroids to `"centroids"`, the following command may be used:
R> output <- mean_shift(input=data)
R> centroids <- output$centroid
```go
// Initialize optional parameters for Nbc().
param := mlpack.NbcOptions()
param.IncrementalVariance = false
param.InputModel = nil
param.Labels = mat.NewDense(1, 1, nil)
param.Test = mat.NewDense(1, 1, nil)
param.Training = mat.NewDense(1, 1, nil)

output, output_model, output_probs, predictions, probabilities := mlpack.Nbc(param)
```
```R
R> library(mlpack)
R> d <- nbc(incremental_variance=FALSE, input_model=NA,
labels=matrix(integer(), 0, 0), test=matrix(numeric(), 0, 0),
training=matrix(numeric(), 0, 0), verbose=FALSE)
R> output <- d$output
R> output_model <- d$output_model
R> output_probs <- d$output_probs
R> predictions <- d$predictions
R> probabilities <- d$probabilities
```
An implementation of the Naive Bayes Classifier, used for classification. Given labeled data, an NBC model can be trained and saved, or, a pre-trained model can be used for classification. Detailed documentation{: .language-detail-link #cli }Detailed documentation{: .language-detail-link #python }Detailed documentation{: .language-detail-link #julia }Detailed documentation{: .language-detail-link #go }Detailed documentation{: .language-detail-link #r }.
name | type | description | default |
---|---|---|---|
--help (-h) | flag | Default help info. Only exists in CLI binding. | |
--incremental_variance (-I) | flag | The variance of each class will be calculated incrementally. | |
--info | string | Print help on a specific option. Only exists in CLI binding. | '' |
--input_model_file (-m) | NBCModel file | Input Naive Bayes model. | '' |
--labels_file (-l) | 1-d index matrix file | A file containing labels for the training set. | '' |
--test_file (-T) | 2-d matrix file | A matrix containing the test set. | '' |
--training_file (-t) | 2-d matrix file | A matrix containing the training set. | '' |
--verbose (-v) | flag | Display informational messages and the full list of parameters and timers at the end of execution. | |
--version (-V) | flag | Display the version of mlpack. Only exists in CLI binding. | |
name | type | description |
---|---|---|
--output_file (-o) | 1-d index matrix file | The matrix in which the predicted labels for the test set will be written (deprecated). |
--output_model_file (-M) | NBCModel file | File to save trained Naive Bayes model to. |
--output_probs_file | 2-d matrix file | The matrix in which the predicted probability of labels for the test set will be written (deprecated). |
--predictions_file (-a) | 1-d index matrix file | The matrix in which the predicted labels for the test set will be written. |
--probabilities_file (-p) | 2-d matrix file | The matrix in which the predicted probability of labels for the test set will be written. |
{: #cli_nbc_detailed-documentation }
This program trains the Naive Bayes classifier on the given labeled training set, or loads a model from the given model file, and then may use that trained model to classify the points in a given test set.
The training set is specified with the `--training_file (-t)` parameter. Labels may be either the last row of the training set, or alternately the `--labels_file (-l)` parameter may be specified to pass a separate matrix of labels.

If training is not desired, a pre-existing model may be loaded with the `--input_model_file (-m)` parameter.

The `--incremental_variance (-I)` parameter can be used to force the training to use an incremental algorithm for calculating variance. This is slower, but can help avoid loss of precision in some cases.

If classifying a test set is desired, the test set may be specified with the `--test_file (-T)` parameter, and the classifications may be saved with the `--predictions_file (-a)` output parameter. If saving the trained model is desired, this may be done with the `--output_model_file (-M)` output parameter.

Note: the `--output_file (-o)` and `--output_probs_file` parameters are deprecated and will be removed in mlpack 4.0.0. Use `--predictions_file (-a)` and `--probabilities_file (-p)` instead.
For example, to train a Naive Bayes classifier on the dataset `data.csv` with labels `labels.csv` and save the model to `nbc_model.bin`, the following command may be used:
$ mlpack_nbc --training_file data.csv --labels_file labels.csv
--output_model_file nbc_model.bin
Then, to use `nbc_model.bin` to predict the classes of the dataset `test_set.csv` and save the predicted classes to `predictions.csv`, the following command may be used:
$ mlpack_nbc --input_model_file nbc_model.bin --test_file test_set.csv
--output_file predictions.csv
name | type | description | default |
---|---|---|---|
copy_all_inputs | bool | If specified, all input parameters will be deep copied before the method is run. This is useful for debugging problems where the input parameters are being modified by the algorithm, but can slow down the code. Only exists in Python binding. | False |
incremental_variance | bool | The variance of each class will be calculated incrementally. | False |
input_model | NBCModelType | Input Naive Bayes model. | None |
labels | int vector | A file containing labels for the training set. | np.empty([0], dtype=np.uint64) |
test | matrix | A matrix containing the test set. | np.empty([0, 0]) |
training | matrix | A matrix containing the training set. | np.empty([0, 0]) |
verbose | bool | Display informational messages and the full list of parameters and timers at the end of execution. | False |
Results are returned in a Python dictionary. The keys of the dictionary are the names of the output parameters.
name | type | description |
---|---|---|
output | int vector | The matrix in which the predicted labels for the test set will be written (deprecated). |
output_model | NBCModelType | File to save trained Naive Bayes model to. |
output_probs | matrix | The matrix in which the predicted probability of labels for the test set will be written (deprecated). |
predictions | int vector | The matrix in which the predicted labels for the test set will be written. |
probabilities | matrix | The matrix in which the predicted probability of labels for the test set will be written. |
{: #python_nbc_detailed-documentation }
This program trains the Naive Bayes classifier on the given labeled training set, or loads a model from the given model file, and then may use that trained model to classify the points in a given test set.
The training set is specified with the `training` parameter. Labels may be either the last row of the training set, or alternately the `labels` parameter may be specified to pass a separate matrix of labels.

If training is not desired, a pre-existing model may be loaded with the `input_model` parameter.
The `incremental_variance` parameter can be used to force the training to use an incremental algorithm for calculating variance. This is slower, but can help avoid loss of precision in some cases.
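The precision issue comes from the naive single-pass formula Var(x) = E[x^2] - E[x]^2, which subtracts two large, nearly equal numbers when the data's mean is large relative to its spread. A small NumPy demonstration of the effect (illustrative only; mlpack's incremental algorithm lives in the C++ implementation):

```python
import numpy as np

# Data with a huge mean (1e9) but unit variance.
x = 1e9 + np.random.randn(1_000_000)

# Naive single-pass formula: catastrophic cancellation at this scale.
naive = np.mean(x**2) - np.mean(x)**2

# Two-pass formula (mean first, then squared deviations) stays accurate.
stable = np.mean((x - np.mean(x))**2)

print(naive, stable)  # naive is garbage (possibly negative); stable is ~1.
```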
If classifying a test set is desired, the test set may be specified with the `test` parameter, and the classifications may be saved with the `predictions` output parameter. If saving the trained model is desired, this may be done with the `output_model` output parameter.

Note: the `output` and `output_probs` parameters are deprecated and will be removed in mlpack 4.0.0. Use `predictions` and `probabilities` instead.
For example, to train a Naive Bayes classifier on the dataset `data` with labels `labels` and save the model to `nbc_model`, the following command may be used:
>>> output = nbc(training=data, labels=labels)
>>> nbc_model = output['output_model']
Then, to use `nbc_model` to predict the classes of the dataset `test_set` and save the predicted classes to `predictions`, the following command may be used:
>>> output = nbc(input_model=nbc_model, test=test_set)
>>> predictions = output['output']
name | type | description | default |
---|---|---|---|
incremental_variance | Bool | The variance of each class will be calculated incrementally. | false |
input_model | NBCModel | Input Naive Bayes model. | nothing |
labels | Int vector-like | A file containing labels for the training set. | Int[] |
test | Float64 matrix-like | A matrix containing the test set. | zeros(0, 0) |
training | Float64 matrix-like | A matrix containing the training set. | zeros(0, 0) |
verbose | Bool | Display informational messages and the full list of parameters and timers at the end of execution. | false |
Results are returned as a tuple, and can be unpacked directly into return values or stored directly as a tuple; undesired results can be ignored with the _ keyword.
name | type | description |
---|---|---|
output | Int vector-like | The matrix in which the predicted labels for the test set will be written (deprecated). |
output_model | NBCModel | File to save trained Naive Bayes model to. |
output_probs | Float64 matrix-like | The matrix in which the predicted probability of labels for the test set will be written (deprecated). |
predictions | Int vector-like | The matrix in which the predicted labels for the test set will be written. |
probabilities | Float64 matrix-like | The matrix in which the predicted probability of labels for the test set will be written. |
{: #julia_nbc_detailed-documentation }
This program trains the Naive Bayes classifier on the given labeled training set, or loads a model from the given model file, and then may use that trained model to classify the points in a given test set.
The training set is specified with the `training` parameter. Labels may be either the last row of the training set, or alternately the `labels` parameter may be specified to pass a separate matrix of labels.

If training is not desired, a pre-existing model may be loaded with the `input_model` parameter.

The `incremental_variance` parameter can be used to force the training to use an incremental algorithm for calculating variance. This is slower, but can help avoid loss of precision in some cases.

If classifying a test set is desired, the test set may be specified with the `test` parameter, and the classifications may be saved with the `predictions` output parameter. If saving the trained model is desired, this may be done with the `output_model` output parameter.

Note: the `output` and `output_probs` parameters are deprecated and will be removed in mlpack 4.0.0. Use `predictions` and `probabilities` instead.
For example, to train a Naive Bayes classifier on the dataset `data` with labels `labels` and save the model to `nbc_model`, the following command may be used:
julia> using CSV
julia> data = CSV.read("data.csv")
julia> labels = CSV.read("labels.csv"; type=Int)
julia> _, nbc_model, _, _, _ = nbc(labels=labels, training=data)
Then, to use `nbc_model` to predict the classes of the dataset `test_set` and save the predicted classes to `predictions`, the following command may be used:
julia> using CSV
julia> test_set = CSV.read("test_set.csv")
julia> predictions, _, _, _, _ = nbc(input_model=nbc_model,
test=test_set)
There are two types of input options: required options, which are passed directly to the function call, and optional options, which are passed via an initialized struct, which allows keyword access to each of the options.
name | type | description | default |
---|---|---|---|
IncrementalVariance | bool | The variance of each class will be calculated incrementally. | false |
InputModel | nbcModel | Input Naive Bayes model. | nil |
Labels | *mat.Dense (1d with ints) | A file containing labels for the training set. | mat.NewDense(1, 1, nil) |
Test | *mat.Dense | A matrix containing the test set. | mat.NewDense(1, 1, nil) |
Training | *mat.Dense | A matrix containing the training set. | mat.NewDense(1, 1, nil) |
Verbose | bool | Display informational messages and the full list of parameters and timers at the end of execution. | false |
Output options are returned via Go's support for multiple return values, in the order listed below.
name | type | description |
---|---|---|
output | *mat.Dense (1d with ints) | The matrix in which the predicted labels for the test set will be written (deprecated). |
outputModel | nbcModel | File to save trained Naive Bayes model to. |
outputProbs | *mat.Dense | The matrix in which the predicted probability of labels for the test set will be written (deprecated). |
predictions | *mat.Dense (1d with ints) | The matrix in which the predicted labels for the test set will be written. |
probabilities | *mat.Dense | The matrix in which the predicted probability of labels for the test set will be written. |
{: #go_nbc_detailed-documentation }
This program trains the Naive Bayes classifier on the given labeled training set, or loads a model from the given model file, and then may use that trained model to classify the points in a given test set.
The training set is specified with the `Training` parameter. Labels may be either the last row of the training set, or alternately the `Labels` parameter may be specified to pass a separate matrix of labels.

If training is not desired, a pre-existing model may be loaded with the `InputModel` parameter.

The `IncrementalVariance` parameter can be used to force the training to use an incremental algorithm for calculating variance. This is slower, but can help avoid loss of precision in some cases.

If classifying a test set is desired, the test set may be specified with the `Test` parameter, and the classifications may be saved with the `Predictions` output parameter. If saving the trained model is desired, this may be done with the `OutputModel` output parameter.

Note: the `Output` and `OutputProbs` parameters are deprecated and will be removed in mlpack 4.0.0. Use `Predictions` and `Probabilities` instead.
For example, to train a Naive Bayes classifier on the dataset `data` with labels `labels` and save the model to `nbc_model`, the following command may be used:
// Initialize optional parameters for Nbc().
param := mlpack.NbcOptions()
param.Training = data
param.Labels = labels
_, nbc_model, _, _, _ := mlpack.Nbc(param)
Then, to use `nbc_model` to predict the classes of the dataset `test_set` and save the predicted classes to `predictions`, the following command may be used:
// Initialize optional parameters for Nbc().
param := mlpack.NbcOptions()
param.InputModel = &nbc_model
param.Test = test_set
predictions, _, _, _, _ := mlpack.Nbc(param)
name | type | description | default |
---|---|---|---|
incremental_variance | logical | The variance of each class will be calculated incrementally. | FALSE |
input_model | NBCModel | Input Naive Bayes model. | NA |
labels | integer vector | A file containing labels for the training set. | matrix(integer(), 0, 0) |
test | numeric matrix | A matrix containing the test set. | matrix(numeric(), 0, 0) |
training | numeric matrix | A matrix containing the training set. | matrix(numeric(), 0, 0) |
verbose | logical | Display informational messages and the full list of parameters and timers at the end of execution. | FALSE |
Results are returned in an R list. The keys of the list are the names of the output parameters.
name | type | description |
---|---|---|
output | integer vector | The matrix in which the predicted labels for the test set will be written (deprecated). |
output_model | NBCModel | File to save trained Naive Bayes model to. |
output_probs | numeric matrix | The matrix in which the predicted probability of labels for the test set will be written (deprecated). |
predictions | integer vector | The matrix in which the predicted labels for the test set will be written. |
probabilities | numeric matrix | The matrix in which the predicted probability of labels for the test set will be written. |
{: #r_nbc_detailed-documentation }
This program trains the Naive Bayes classifier on the given labeled training set, or loads a model from the given model file, and then may use that trained model to classify the points in a given test set.
The training set is specified with the training
parameter. Labels may be either the last row of the training set, or alternatively the labels
parameter may be specified to pass a separate matrix of labels.
If training is not desired, a pre-existing model may be loaded with the input_model
parameter.
The incremental_variance
parameter can be used to force the training to use an incremental algorithm for calculating variance. This is slower, but can help avoid loss of precision in some cases.
If classifying a test set is desired, the test set may be specified with the test
parameter, and the classifications may be saved with the predictions
output parameter. If saving the trained model is desired, this may be done with the output_model
output parameter.
Note: the output
and output_probs
parameters are deprecated and will be removed in mlpack 4.0.0. Use predictions
and probabilities
instead.
For example, to train a Naive Bayes classifier on the dataset "data"
with labels "labels"
and save the model to "nbc_model"
, the following command may be used:
R> output <- nbc(training=data, labels=labels)
R> nbc_model <- output$output_model
Then, to use "nbc_model"
to predict the classes of the dataset "test_set"
and save the predicted classes to "predictions"
, the following command may be used:
R> output <- nbc(input_model=nbc_model, test=test_set)
R> predictions <- output$predictions
// Initialize optional parameters for Nca().
param := mlpack.NcaOptions()
param.ArmijoConstant = 0.0001
param.BatchSize = 50
param.Labels = mat.NewDense(1, 1, nil)
param.LinearScan = false
param.MaxIterations = 500000
param.MaxLineSearchTrials = 50
param.MaxStep = 1e+20
param.MinStep = 1e-20
param.Normalize = false
param.NumBasis = 5
param.Optimizer = "sgd"
param.Seed = 0
param.StepSize = 0.01
param.Tolerance = 1e-07
param.Wolfe = 0.9
output := mlpack.Nca(input, param)
R> library(mlpack)
R> d <- nca(armijo_constant=0.0001, batch_size=50,
input=matrix(numeric(), 0, 0), labels=matrix(integer(), 0, 0),
linear_scan=FALSE, max_iterations=500000, max_line_search_trials=50,
max_step=1e+20, min_step=1e-20, normalize=FALSE, num_basis=5,
optimizer="sgd", seed=0, step_size=0.01, tolerance=1e-07, verbose=FALSE,
wolfe=0.9)
R> output <- d$output
An implementation of neighborhood components analysis, a distance learning technique that can be used for preprocessing. Given a labeled dataset, this uses NCA, which seeks to improve the k-nearest-neighbor classification, and returns the learned distance metric. Detailed documentation{: .language-detail-link #cli }Detailed documentation{: .language-detail-link #python }Detailed documentation{: .language-detail-link #julia }Detailed documentation{: .language-detail-link #go }Detailed documentation{: .language-detail-link #r }.
name | type | description | default |
---|---|---|---|
--armijo_constant (-A) |
double |
Armijo constant for L-BFGS. | 0.0001 |
--batch_size (-b) |
int |
Batch size for mini-batch SGD. | 50 |
--help (-h) |
flag |
Default help info. Only exists in CLI binding. | |
--info |
string |
Print help on a specific option. Only exists in CLI binding. | '' |
--input_file (-i) |
2-d matrix file |
Input dataset to run NCA on. | **--** |
--labels_file (-l) |
1-d index matrix file |
Labels for input dataset. | '' |
--linear_scan (-L) |
flag |
Don't shuffle the order in which data points are visited for SGD or mini-batch SGD. | |
--max_iterations (-n) |
int |
Maximum number of iterations for SGD or L-BFGS (0 indicates no limit). | 500000 |
--max_line_search_trials (-T) |
int |
Maximum number of line search trials for L-BFGS. | 50 |
--max_step (-M) |
double |
Maximum step of line search for L-BFGS. | 1e+20 |
--min_step (-m) |
double |
Minimum step of line search for L-BFGS. | 1e-20 |
--normalize (-N) |
flag |
Use a normalized starting point for optimization. This is useful for when points are far apart, or when SGD is returning NaN. | |
--num_basis (-B) |
int |
Number of memory points to be stored for L-BFGS. | 5 |
--optimizer (-O) |
string |
Optimizer to use; 'sgd' or 'lbfgs'. | 'sgd' |
--seed (-s) |
int |
Random seed. If 0, 'std::time(NULL)' is used. | 0 |
--step_size (-a) |
double |
Step size for stochastic gradient descent (alpha). | 0.01 |
--tolerance (-t) |
double |
Maximum tolerance for termination of SGD or L-BFGS. | 1e-07 |
--verbose (-v) |
flag |
Display informational messages and the full list of parameters and timers at the end of execution. | |
--version (-V) |
flag |
Display the version of mlpack. Only exists in CLI binding. | |
--wolfe (-w) |
double |
Wolfe condition parameter for L-BFGS. | 0.9 |
name | type | description |
---|---|---|
--output_file (-o) |
2-d matrix file |
Output matrix for learned distance matrix. |
{: #cli_nca_detailed-documentation }
This program implements Neighborhood Components Analysis, both a linear dimensionality reduction technique and a distance learning technique. The method seeks to improve k-nearest-neighbor classification on a dataset by scaling the dimensions. The method is nonparametric, and does not require a value of k. It works by using stochastic ("soft") neighbor assignments and using optimization techniques over the gradient of the accuracy of the neighbor assignments.
To work, this algorithm needs labeled data. It can be given as the last row of the input dataset (specified with --input_file (-i)
), or alternatively as a separate matrix (specified with --labels_file (-l)
).
This implementation of NCA uses stochastic gradient descent, mini-batch stochastic gradient descent, or the L-BFGS optimizer. These optimizers do not guarantee global convergence for a nonconvex objective function (NCA's objective function is nonconvex), so the final results could depend on the random seed or other optimizer parameters.
Stochastic gradient descent, specified by the value 'sgd' for the parameter --optimizer (-O)
, depends primarily on three parameters: the step size (specified with --step_size (-a)
), the batch size (specified with --batch_size (-b)
), and the maximum number of iterations (specified with --max_iterations (-n)
). In addition, a normalized starting point can be used by specifying the --normalize (-N)
parameter, which is necessary if many warnings of the form 'Denominator of p_i is 0!' are given. Tuning the step size can be a tedious affair. In general, the step size is too large if the objective is not mostly uniformly decreasing, or if zero-valued denominator warnings are being issued. The step size is too small if the objective is changing very slowly. Setting the termination condition can be done easily once a good step size parameter is found; either increase the maximum iterations to a large number and allow SGD to find a minimum, or set the maximum iterations to 0 (allowing infinite iterations) and set the tolerance (specified by --tolerance (-t)
) to define the maximum allowed difference between objectives for SGD to terminate. Be careful---setting the tolerance instead of the maximum iterations can take a very long time and may actually never converge due to the properties of the SGD optimizer. Note that a single iteration of SGD refers to a single point, so to take a single pass over the dataset, set the value of the --max_iterations (-n)
parameter equal to the number of points in the dataset.
The L-BFGS optimizer, specified by the value 'lbfgs' for the parameter --optimizer (-O)
, uses a back-tracking line search algorithm to minimize a function. The following parameters are used by L-BFGS: --num_basis (-B)
(specifies the number of memory points used by L-BFGS), --max_iterations (-n)
, --armijo_constant (-A)
, --wolfe (-w)
, --tolerance (-t)
(the optimization is terminated when the gradient norm is below this value), --max_line_search_trials (-T)
, --min_step (-m)
, and --max_step (-M)
(which both refer to the line search routine). For more details on the L-BFGS optimizer, consult either the mlpack L-BFGS documentation (in lbfgs.hpp) or the vast set of published literature on L-BFGS.
By default, the SGD optimizer is used.
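For example, to learn a distance metric on the data in 'data.csv' with labels in 'labels.csv' and save the learned matrix to 'distance.csv', a command along the following lines may be used (a sketch with hypothetical filenames; all options shown are documented in the tables above):
$ mlpack_nca --input_file data.csv --labels_file labels.csv \
  --output_file distance.csv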
name | type | description | default |
---|---|---|---|
armijo_constant |
float |
Armijo constant for L-BFGS. | 0.0001 |
batch_size |
int |
Batch size for mini-batch SGD. | 50 |
copy_all_inputs |
bool |
If specified, all input parameters will be deep copied before the method is run. This is useful for debugging problems where the input parameters are being modified by the algorithm, but can slow down the code. Only exists in Python binding. | False |
input |
matrix |
Input dataset to run NCA on. | **--** |
labels |
int vector |
Labels for input dataset. | np.empty([0], dtype=np.uint64) |
linear_scan |
bool |
Don't shuffle the order in which data points are visited for SGD or mini-batch SGD. | False |
max_iterations |
int |
Maximum number of iterations for SGD or L-BFGS (0 indicates no limit). | 500000 |
max_line_search_trials |
int |
Maximum number of line search trials for L-BFGS. | 50 |
max_step |
float |
Maximum step of line search for L-BFGS. | 1e+20 |
min_step |
float |
Minimum step of line search for L-BFGS. | 1e-20 |
normalize |
bool |
Use a normalized starting point for optimization. This is useful for when points are far apart, or when SGD is returning NaN. | False |
num_basis |
int |
Number of memory points to be stored for L-BFGS. | 5 |
optimizer |
str |
Optimizer to use; 'sgd' or 'lbfgs'. | 'sgd' |
seed |
int |
Random seed. If 0, 'std::time(NULL)' is used. | 0 |
step_size |
float |
Step size for stochastic gradient descent (alpha). | 0.01 |
tolerance |
float |
Maximum tolerance for termination of SGD or L-BFGS. | 1e-07 |
verbose |
bool |
Display informational messages and the full list of parameters and timers at the end of execution. | False |
wolfe |
float |
Wolfe condition parameter for L-BFGS. | 0.9 |
Results are returned in a Python dictionary. The keys of the dictionary are the names of the output parameters.
name | type | description |
---|---|---|
output |
matrix |
Output matrix for learned distance matrix. |
{: #python_nca_detailed-documentation }
This program implements Neighborhood Components Analysis, both a linear dimensionality reduction technique and a distance learning technique. The method seeks to improve k-nearest-neighbor classification on a dataset by scaling the dimensions. The method is nonparametric, and does not require a value of k. It works by using stochastic ("soft") neighbor assignments and using optimization techniques over the gradient of the accuracy of the neighbor assignments.
To work, this algorithm needs labeled data. It can be given as the last row of the input dataset (specified with input
), or alternatively as a separate matrix (specified with labels
).
This implementation of NCA uses stochastic gradient descent, mini-batch stochastic gradient descent, or the L-BFGS optimizer. These optimizers do not guarantee global convergence for a nonconvex objective function (NCA's objective function is nonconvex), so the final results could depend on the random seed or other optimizer parameters.
Stochastic gradient descent, specified by the value 'sgd' for the parameter optimizer
, depends primarily on three parameters: the step size (specified with step_size
), the batch size (specified with batch_size
), and the maximum number of iterations (specified with max_iterations
). In addition, a normalized starting point can be used by specifying the normalize
parameter, which is necessary if many warnings of the form 'Denominator of p_i is 0!' are given. Tuning the step size can be a tedious affair. In general, the step size is too large if the objective is not mostly uniformly decreasing, or if zero-valued denominator warnings are being issued. The step size is too small if the objective is changing very slowly. Setting the termination condition can be done easily once a good step size parameter is found; either increase the maximum iterations to a large number and allow SGD to find a minimum, or set the maximum iterations to 0 (allowing infinite iterations) and set the tolerance (specified by tolerance
) to define the maximum allowed difference between objectives for SGD to terminate. Be careful---setting the tolerance instead of the maximum iterations can take a very long time and may actually never converge due to the properties of the SGD optimizer. Note that a single iteration of SGD refers to a single point, so to take a single pass over the dataset, set the value of the max_iterations
parameter equal to the number of points in the dataset.
The L-BFGS optimizer, specified by the value 'lbfgs' for the parameter optimizer
, uses a back-tracking line search algorithm to minimize a function. The following parameters are used by L-BFGS: num_basis
(specifies the number of memory points used by L-BFGS), max_iterations
, armijo_constant
, wolfe
, tolerance
(the optimization is terminated when the gradient norm is below this value), max_line_search_trials
, min_step
, and max_step
(which both refer to the line search routine). For more details on the L-BFGS optimizer, consult either the mlpack L-BFGS documentation (in lbfgs.hpp) or the vast set of published literature on L-BFGS.
By default, the SGD optimizer is used.
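For example, to learn a distance metric on a matrix data with labels labels and retrieve the learned matrix, a call along the following lines may be used (a sketch; data and labels are assumed to be numpy arrays that have already been loaded):
>>> output = nca(input=data, labels=labels)
>>> distance = output['output']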
name | type | description | default |
---|---|---|---|
armijo_constant |
Float64 |
Armijo constant for L-BFGS. | 0.0001 |
batch_size |
Int |
Batch size for mini-batch SGD. | 50 |
input |
Float64 matrix-like |
Input dataset to run NCA on. | **--** |
labels |
Int vector-like |
Labels for input dataset. | Int[] |
linear_scan |
Bool |
Don't shuffle the order in which data points are visited for SGD or mini-batch SGD. | false |
max_iterations |
Int |
Maximum number of iterations for SGD or L-BFGS (0 indicates no limit). | 500000 |
max_line_search_trials |
Int |
Maximum number of line search trials for L-BFGS. | 50 |
max_step |
Float64 |
Maximum step of line search for L-BFGS. | 1e+20 |
min_step |
Float64 |
Minimum step of line search for L-BFGS. | 1e-20 |
normalize |
Bool |
Use a normalized starting point for optimization. This is useful for when points are far apart, or when SGD is returning NaN. | false |
num_basis |
Int |
Number of memory points to be stored for L-BFGS. | 5 |
optimizer |
String |
Optimizer to use; 'sgd' or 'lbfgs'. | "sgd" |
seed |
Int |
Random seed. If 0, 'std::time(NULL)' is used. | 0 |
step_size |
Float64 |
Step size for stochastic gradient descent (alpha). | 0.01 |
tolerance |
Float64 |
Maximum tolerance for termination of SGD or L-BFGS. | 1e-07 |
verbose |
Bool |
Display informational messages and the full list of parameters and timers at the end of execution. | false |
wolfe |
Float64 |
Wolfe condition parameter for L-BFGS. | 0.9 |
Results are returned as a tuple, and can be unpacked directly into return values or stored directly as a tuple; undesired results can be ignored with the _ keyword.
name | type | description |
---|---|---|
output |
Float64 matrix-like |
Output matrix for learned distance matrix. |
{: #julia_nca_detailed-documentation }
This program implements Neighborhood Components Analysis, both a linear dimensionality reduction technique and a distance learning technique. The method seeks to improve k-nearest-neighbor classification on a dataset by scaling the dimensions. The method is nonparametric, and does not require a value of k. It works by using stochastic ("soft") neighbor assignments and using optimization techniques over the gradient of the accuracy of the neighbor assignments.
To work, this algorithm needs labeled data. It can be given as the last row of the input dataset (specified with input
), or alternatively as a separate matrix (specified with labels
).
This implementation of NCA uses stochastic gradient descent, mini-batch stochastic gradient descent, or the L-BFGS optimizer. These optimizers do not guarantee global convergence for a nonconvex objective function (NCA's objective function is nonconvex), so the final results could depend on the random seed or other optimizer parameters.
Stochastic gradient descent, specified by the value 'sgd' for the parameter optimizer
, depends primarily on three parameters: the step size (specified with step_size
), the batch size (specified with batch_size
), and the maximum number of iterations (specified with max_iterations
). In addition, a normalized starting point can be used by specifying the normalize
parameter, which is necessary if many warnings of the form 'Denominator of p_i is 0!' are given. Tuning the step size can be a tedious affair. In general, the step size is too large if the objective is not mostly uniformly decreasing, or if zero-valued denominator warnings are being issued. The step size is too small if the objective is changing very slowly. Setting the termination condition can be done easily once a good step size parameter is found; either increase the maximum iterations to a large number and allow SGD to find a minimum, or set the maximum iterations to 0 (allowing infinite iterations) and set the tolerance (specified by tolerance
) to define the maximum allowed difference between objectives for SGD to terminate. Be careful---setting the tolerance instead of the maximum iterations can take a very long time and may actually never converge due to the properties of the SGD optimizer. Note that a single iteration of SGD refers to a single point, so to take a single pass over the dataset, set the value of the max_iterations
parameter equal to the number of points in the dataset.
The L-BFGS optimizer, specified by the value 'lbfgs' for the parameter optimizer
, uses a back-tracking line search algorithm to minimize a function. The following parameters are used by L-BFGS: num_basis
(specifies the number of memory points used by L-BFGS), max_iterations
, armijo_constant
, wolfe
, tolerance
(the optimization is terminated when the gradient norm is below this value), max_line_search_trials
, min_step
, and max_step
(which both refer to the line search routine). For more details on the L-BFGS optimizer, consult either the mlpack L-BFGS documentation (in lbfgs.hpp) or the vast set of published literature on L-BFGS.
By default, the SGD optimizer is used.
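For example, to learn a distance metric on a matrix data with labels labels, a call along the following lines may be used (a sketch; data and labels are assumed to be already loaded, with the required input matrix passed positionally as in other mlpack Julia bindings):
julia> distance = nca(data; labels=labels)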
There are two types of input options: required options, which are passed directly to the function call, and optional options, which are passed via an initialized struct that allows keyword access to each option.
name | type | description | default |
---|---|---|---|
ArmijoConstant |
float64 |
Armijo constant for L-BFGS. | 0.0001 |
BatchSize |
int |
Batch size for mini-batch SGD. | 50 |
input |
*mat.Dense |
Input dataset to run NCA on. | **--** |
Labels |
*mat.Dense (1d with ints) |
Labels for input dataset. | mat.NewDense(1, 1, nil) |
LinearScan |
bool |
Don't shuffle the order in which data points are visited for SGD or mini-batch SGD. | false |
MaxIterations |
int |
Maximum number of iterations for SGD or L-BFGS (0 indicates no limit). | 500000 |
MaxLineSearchTrials |
int |
Maximum number of line search trials for L-BFGS. | 50 |
MaxStep |
float64 |
Maximum step of line search for L-BFGS. | 1e+20 |
MinStep |
float64 |
Minimum step of line search for L-BFGS. | 1e-20 |
Normalize |
bool |
Use a normalized starting point for optimization. This is useful for when points are far apart, or when SGD is returning NaN. | false |
NumBasis |
int |
Number of memory points to be stored for L-BFGS. | 5 |
Optimizer |
string |
Optimizer to use; 'sgd' or 'lbfgs'. | "sgd" |
Seed |
int |
Random seed. If 0, 'std::time(NULL)' is used. | 0 |
StepSize |
float64 |
Step size for stochastic gradient descent (alpha). | 0.01 |
Tolerance |
float64 |
Maximum tolerance for termination of SGD or L-BFGS. | 1e-07 |
Verbose |
bool |
Display informational messages and the full list of parameters and timers at the end of execution. | false |
Wolfe |
float64 |
Wolfe condition parameter for L-BFGS. | 0.9 |
Output options are returned via Go's support for multiple return values, in the order listed below.
name | type | description |
---|---|---|
output |
*mat.Dense |
Output matrix for learned distance matrix. |
{: #go_nca_detailed-documentation }
This program implements Neighborhood Components Analysis, both a linear dimensionality reduction technique and a distance learning technique. The method seeks to improve k-nearest-neighbor classification on a dataset by scaling the dimensions. The method is nonparametric, and does not require a value of k. It works by using stochastic ("soft") neighbor assignments and using optimization techniques over the gradient of the accuracy of the neighbor assignments.
To work, this algorithm needs labeled data. It can be given as the last row of the input dataset (specified with Input
), or alternatively as a separate matrix (specified with Labels
).
This implementation of NCA uses stochastic gradient descent, mini-batch stochastic gradient descent, or the L-BFGS optimizer. These optimizers do not guarantee global convergence for a nonconvex objective function (NCA's objective function is nonconvex), so the final results could depend on the random seed or other optimizer parameters.
Stochastic gradient descent, specified by the value 'sgd' for the parameter Optimizer
, depends primarily on three parameters: the step size (specified with StepSize
), the batch size (specified with BatchSize
), and the maximum number of iterations (specified with MaxIterations
). In addition, a normalized starting point can be used by specifying the Normalize
parameter, which is necessary if many warnings of the form 'Denominator of p_i is 0!' are given. Tuning the step size can be a tedious affair. In general, the step size is too large if the objective is not mostly uniformly decreasing, or if zero-valued denominator warnings are being issued. The step size is too small if the objective is changing very slowly. Setting the termination condition can be done easily once a good step size parameter is found; either increase the maximum iterations to a large number and allow SGD to find a minimum, or set the maximum iterations to 0 (allowing infinite iterations) and set the tolerance (specified by Tolerance
) to define the maximum allowed difference between objectives for SGD to terminate. Be careful---setting the tolerance instead of the maximum iterations can take a very long time and may actually never converge due to the properties of the SGD optimizer. Note that a single iteration of SGD refers to a single point, so to take a single pass over the dataset, set the value of the MaxIterations
parameter equal to the number of points in the dataset.
The L-BFGS optimizer, specified by the value 'lbfgs' for the parameter Optimizer
, uses a back-tracking line search algorithm to minimize a function. The following parameters are used by L-BFGS: NumBasis
(specifies the number of memory points used by L-BFGS), MaxIterations
, ArmijoConstant
, Wolfe
, Tolerance
(the optimization is terminated when the gradient norm is below this value), MaxLineSearchTrials
, MinStep
, and MaxStep
(which both refer to the line search routine). For more details on the L-BFGS optimizer, consult either the mlpack L-BFGS documentation (in lbfgs.hpp) or the vast set of published literature on L-BFGS.
By default, the SGD optimizer is used.
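For example, to learn a distance metric on a matrix input with labels labels, code along the following lines may be used (a sketch; input and labels are assumed to be *mat.Dense values that have already been loaded):
// Initialize optional parameters for Nca().
param := mlpack.NcaOptions()
param.Labels = labels
output := mlpack.Nca(input, param)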
name | type | description | default |
---|---|---|---|
armijo_constant |
numeric |
Armijo constant for L-BFGS. | 0.0001 |
batch_size |
integer |
Batch size for mini-batch SGD. | 50 |
input |
numeric matrix |
Input dataset to run NCA on. | **--** |
labels |
integer vector |
Labels for input dataset. | matrix(integer(), 0, 0) |
linear_scan |
logical |
Don't shuffle the order in which data points are visited for SGD or mini-batch SGD. | FALSE |
max_iterations |
integer |
Maximum number of iterations for SGD or L-BFGS (0 indicates no limit). | 500000 |
max_line_search_trials |
integer |
Maximum number of line search trials for L-BFGS. | 50 |
max_step |
numeric |
Maximum step of line search for L-BFGS. | 1e+20 |
min_step |
numeric |
Minimum step of line search for L-BFGS. | 1e-20 |
normalize |
logical |
Use a normalized starting point for optimization. This is useful for when points are far apart, or when SGD is returning NaN. | FALSE |
num_basis |
integer |
Number of memory points to be stored for L-BFGS. | 5 |
optimizer |
character |
Optimizer to use; 'sgd' or 'lbfgs'. | "sgd" |
seed |
integer |
Random seed. If 0, 'std::time(NULL)' is used. | 0 |
step_size |
numeric |
Step size for stochastic gradient descent (alpha). | 0.01 |
tolerance |
numeric |
Maximum tolerance for termination of SGD or L-BFGS. | 1e-07 |
verbose |
logical |
Display informational messages and the full list of parameters and timers at the end of execution. | FALSE |
wolfe |
numeric |
Wolfe condition parameter for L-BFGS. | 0.9 |
Results are returned in an R list. The keys of the list are the names of the output parameters.
name | type | description |
---|---|---|
output |
numeric matrix |
Output matrix for learned distance matrix. |
{: #r_nca_detailed-documentation }
This program implements Neighborhood Components Analysis, both a linear dimensionality reduction technique and a distance learning technique. The method seeks to improve k-nearest-neighbor classification on a dataset by scaling the dimensions. The method is nonparametric, and does not require a value of k. It works by using stochastic ("soft") neighbor assignments and using optimization techniques over the gradient of the accuracy of the neighbor assignments.
To work, this algorithm needs labeled data. It can be given as the last row of the input dataset (specified with input
), or alternatively as a separate matrix (specified with labels
).
This implementation of NCA uses stochastic gradient descent, mini-batch stochastic gradient descent, or the L-BFGS optimizer. These optimizers do not guarantee global convergence for a nonconvex objective function (NCA's objective function is nonconvex), so the final results could depend on the random seed or other optimizer parameters.
Stochastic gradient descent, specified by the value 'sgd' for the parameter optimizer
, depends primarily on three parameters: the step size (specified with step_size
), the batch size (specified with batch_size
), and the maximum number of iterations (specified with max_iterations
). In addition, a normalized starting point can be used by specifying the normalize
parameter, which is necessary if many warnings of the form 'Denominator of p_i is 0!' are given. Tuning the step size can be a tedious affair. In general, the step size is too large if the objective is not mostly uniformly decreasing, or if zero-valued denominator warnings are being issued. The step size is too small if the objective is changing very slowly. Setting the termination condition can be done easily once a good step size parameter is found; either increase the maximum iterations to a large number and allow SGD to find a minimum, or set the maximum iterations to 0 (allowing infinite iterations) and set the tolerance (specified by tolerance
) to define the maximum allowed difference between objectives for SGD to terminate. Be careful---setting the tolerance instead of the maximum iterations can take a very long time and may actually never converge due to the properties of the SGD optimizer. Note that a single iteration of SGD refers to a single point, so to take a single pass over the dataset, set the value of the max_iterations
parameter equal to the number of points in the dataset.
The L-BFGS optimizer, specified by the value 'lbfgs' for the parameter optimizer
, uses a back-tracking line search algorithm to minimize a function. The following parameters are used by L-BFGS: num_basis
(specifies the number of memory points used by L-BFGS), max_iterations
, armijo_constant
, wolfe
, tolerance
(the optimization is terminated when the gradient norm is below this value), max_line_search_trials
, min_step
, and max_step
(which both refer to the line search routine). For more details on the L-BFGS optimizer, consult either the mlpack L-BFGS documentation (in lbfgs.hpp) or the vast set of published literature on L-BFGS.
By default, the SGD optimizer is used.
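For example, to learn a distance metric on a matrix "data" with labels "labels", a call along the following lines may be used (a sketch; data and labels are assumed to be already loaded):
R> output <- nca(input=data, labels=labels)
R> distance <- output$output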
// Initialize optional parameters for Knn().
param := mlpack.KnnOptions()
param.Algorithm = "dual_tree"
param.Epsilon = 0
param.InputModel = nil
param.K = 0
param.LeafSize = 20
param.Query = mat.NewDense(1, 1, nil)
param.RandomBasis = false
param.Reference = mat.NewDense(1, 1, nil)
param.Rho = 0.7
param.Seed = 0
param.Tau = 0
param.TreeType = "kd"
param.TrueDistances = mat.NewDense(1, 1, nil)
param.TrueNeighbors = mat.NewDense(1, 1, nil)
distances, neighbors, output_model := mlpack.Knn(param)
R> library(mlpack)
R> d <- knn(algorithm="dual_tree", epsilon=0, input_model=NA, k=0,
leaf_size=20, query=matrix(numeric(), 0, 0), random_basis=FALSE,
reference=matrix(numeric(), 0, 0), rho=0.7, seed=0, tau=0,
tree_type="kd", true_distances=matrix(numeric(), 0, 0),
true_neighbors=matrix(integer(), 0, 0), verbose=FALSE)
R> distances <- d$distances
R> neighbors <- d$neighbors
R> output_model <- d$output_model
An implementation of k-nearest-neighbor search using single-tree and dual-tree algorithms. Given a set of reference points and query points, this can find the k nearest neighbors in the reference set of each query point using trees; trees that are built can be saved for future use. Detailed documentation{: .language-detail-link #cli }Detailed documentation{: .language-detail-link #python }Detailed documentation{: .language-detail-link #julia }Detailed documentation{: .language-detail-link #go }Detailed documentation{: .language-detail-link #r }.
name | type | description | default |
---|---|---|---|
--algorithm (-a) |
string |
Type of neighbor search: 'naive', 'single_tree', 'dual_tree', 'greedy'. | 'dual_tree' |
--epsilon (-e) |
double |
If specified, will do approximate nearest neighbor search with given relative error. | 0 |
--help (-h) |
flag |
Default help info. Only exists in CLI binding. | |
--info |
string |
Print help on a specific option. Only exists in CLI binding. | '' |
--input_model_file (-m) |
KNNModel file |
Pre-trained kNN model. | '' |
--k (-k) |
int |
Number of nearest neighbors to find. | 0 |
--leaf_size (-l) |
int |
Leaf size for tree building (used for kd-trees, vp trees, random projection trees, UB trees, R trees, R* trees, X trees, Hilbert R trees, R+ trees, R++ trees, spill trees, and octrees). | 20 |
--query_file (-q) |
2-d matrix file |
Matrix containing query points (optional). | '' |
--random_basis (-R) |
flag |
Before tree-building, project the data onto a random orthogonal basis. | |
--reference_file (-r) |
2-d matrix file |
Matrix containing the reference dataset. | '' |
--rho (-b) |
double |
Balance threshold (only valid for spill trees). | 0.7 |
--seed (-s) |
int |
Random seed (if 0, std::time(NULL) is used). | 0 |
--tau (-u) |
double |
Overlapping size (only valid for spill trees). | 0 |
--tree_type (-t) |
string |
Type of tree to use: 'kd', 'vp', 'rp', 'max-rp', 'ub', 'cover', 'r', 'r-star', 'x', 'ball', 'hilbert-r', 'r-plus', 'r-plus-plus', 'spill', 'oct'. | 'kd' |
--true_distances_file (-D) |
2-d matrix file |
Matrix of true distances to compute the effective error (average relative error) (it is printed when -v is specified). | '' |
--true_neighbors_file (-T) |
2-d index matrix file |
Matrix of true neighbors to compute the recall (it is printed when -v is specified). | '' |
--verbose (-v) |
flag |
Display informational messages and the full list of parameters and timers at the end of execution. | |
--version (-V) |
flag |
Display the version of mlpack. Only exists in CLI binding. | |
name | type | description |
---|---|---|
--distances_file (-d) |
2-d matrix file |
Matrix to output distances into. |
--neighbors_file (-n) |
2-d index matrix file |
Matrix to output neighbors into. |
--output_model_file (-M) |
KNNModel file |
If specified, the kNN model will be output here. |
{: #cli_knn_detailed-documentation }
This program will calculate the k-nearest-neighbors of a set of points using kd-trees or cover trees (cover tree support is experimental and may be slow). You may specify a separate set of reference points and query points, or just a reference set which will be used as both the reference and query set.
For example, the following command will calculate the 5 nearest neighbors of each point in 'input.csv'
and store the distances in 'distances.csv'
and the neighbors in 'neighbors.csv'
:
$ mlpack_knn --k 5 --reference_file input.csv --neighbors_file neighbors.csv \
--distances_file distances.csv
The output is organized such that row i and column j in the neighbors output matrix corresponds to the index of the point in the reference set which is the j'th nearest neighbor from the point in the query set with index i. Row i and column j in the distances output matrix corresponds to the distance between those two points.
name | type | description | default |
---|---|---|---|
algorithm |
str |
Type of neighbor search: 'naive', 'single_tree', 'dual_tree', 'greedy'. | 'dual_tree' |
copy_all_inputs |
bool |
If specified, all input parameters will be deep copied before the method is run. This is useful for debugging problems where the input parameters are being modified by the algorithm, but can slow down the code. Only exists in Python binding. | False |
epsilon |
float |
If specified, will do approximate nearest neighbor search with given relative error. | 0 |
input_model |
KNNModelType |
Pre-trained kNN model. | None |
k |
int |
Number of nearest neighbors to find. | 0 |
leaf_size |
int |
Leaf size for tree building (used for kd-trees, vp trees, random projection trees, UB trees, R trees, R* trees, X trees, Hilbert R trees, R+ trees, R++ trees, spill trees, and octrees). | 20 |
query |
matrix |
Matrix containing query points (optional). | np.empty([0, 0]) |
random_basis |
bool |
Before tree-building, project the data onto a random orthogonal basis. | False |
reference |
matrix |
Matrix containing the reference dataset. | np.empty([0, 0]) |
rho |
float |
Balance threshold (only valid for spill trees). | 0.7 |
seed |
int |
Random seed (if 0, std::time(NULL) is used). | 0 |
tau |
float |
Overlapping size (only valid for spill trees). | 0 |
tree_type |
str |
Type of tree to use: 'kd', 'vp', 'rp', 'max-rp', 'ub', 'cover', 'r', 'r-star', 'x', 'ball', 'hilbert-r', 'r-plus', 'r-plus-plus', 'spill', 'oct'. | 'kd' |
true_distances |
matrix |
Matrix of true distances to compute the effective error (average relative error) (it is printed when -v is specified). | np.empty([0, 0]) |
true_neighbors |
int matrix |
Matrix of true neighbors to compute the recall (it is printed when -v is specified). | np.empty([0, 0], dtype=np.uint64) |
verbose |
bool |
Display informational messages and the full list of parameters and timers at the end of execution. | False |
Results are returned in a Python dictionary. The keys of the dictionary are the names of the output parameters.
name | type | description |
---|---|---|
distances |
matrix |
Matrix to output distances into. |
neighbors |
int matrix |
Matrix to output neighbors into. |
output_model |
KNNModelType |
If specified, the kNN model will be output here. |
{: #python_knn_detailed-documentation }
This program will calculate the k-nearest-neighbors of a set of points using kd-trees or cover trees (cover tree support is experimental and may be slow). You may specify a separate set of reference points and query points, or just a reference set which will be used as both the reference and query set.
For example, the following command will calculate the 5 nearest neighbors of each point in 'input'
and store the distances in 'distances'
and the neighbors in 'neighbors'
:
>>> output = knn(k=5, reference=input)
>>> neighbors = output['neighbors']
>>> distances = output['distances']
The output is organized such that row i and column j in the neighbors output matrix corresponds to the index of the point in the reference set which is the j'th nearest neighbor from the point in the query set with index i. Row i and column j in the distances output matrix corresponds to the distance between those two points.
name | type | description | default |
---|---|---|---|
algorithm |
String |
Type of neighbor search: 'naive', 'single_tree', 'dual_tree', 'greedy'. | "dual_tree" |
epsilon |
Float64 |
If specified, will do approximate nearest neighbor search with given relative error. | 0 |
input_model |
KNNModel |
Pre-trained kNN model. | nothing |
k |
Int |
Number of nearest neighbors to find. | 0 |
leaf_size |
Int |
Leaf size for tree building (used for kd-trees, vp trees, random projection trees, UB trees, R trees, R* trees, X trees, Hilbert R trees, R+ trees, R++ trees, spill trees, and octrees). | 20 |
query |
Float64 matrix-like |
Matrix containing query points (optional). | zeros(0, 0) |
random_basis |
Bool |
Before tree-building, project the data onto a random orthogonal basis. | false |
reference |
Float64 matrix-like |
Matrix containing the reference dataset. | zeros(0, 0) |
rho |
Float64 |
Balance threshold (only valid for spill trees). | 0.7 |
seed |
Int |
Random seed (if 0, std::time(NULL) is used). | 0 |
tau |
Float64 |
Overlapping size (only valid for spill trees). | 0 |
tree_type |
String |
Type of tree to use: 'kd', 'vp', 'rp', 'max-rp', 'ub', 'cover', 'r', 'r-star', 'x', 'ball', 'hilbert-r', 'r-plus', 'r-plus-plus', 'spill', 'oct'. | "kd" |
true_distances |
Float64 matrix-like |
Matrix of true distances to compute the effective error (average relative error) (it is printed when -v is specified). | zeros(0, 0) |
true_neighbors |
Int matrix-like |
Matrix of true neighbors to compute the recall (it is printed when -v is specified). | zeros(Int, 0, 0) |
verbose |
Bool |
Display informational messages and the full list of parameters and timers at the end of execution. | false |
Results are returned as a tuple, and can be unpacked directly into return values or stored directly as a tuple; undesired results can be ignored with the _ keyword.
name | type | description |
---|---|---|
distances |
Float64 matrix-like |
Matrix to output distances into. |
neighbors |
Int matrix-like |
Matrix to output neighbors into. |
output_model |
KNNModel |
If specified, the kNN model will be output here. |
{: #julia_knn_detailed-documentation }
This program will calculate the k-nearest-neighbors of a set of points using kd-trees or cover trees (cover tree support is experimental and may be slow). You may specify a separate set of reference points and query points, or just a reference set which will be used as both the reference and query set.
For example, the following command will calculate the 5 nearest neighbors of each point in input
and store the distances in distances
and the neighbors in neighbors
:
julia> using CSV
julia> input = CSV.read("input.csv")
julia> distances, neighbors, _ = knn(k=5, reference=input)
The output is organized such that row i and column j in the neighbors output matrix corresponds to the index of the point in the reference set which is the j'th nearest neighbor from the point in the query set with index i. Row i and column j in the distances output matrix corresponds to the distance between those two points.
There are two types of input options: required options, which are passed directly to the function call, and optional options, which are passed via an initialized struct that allows keyword access to each option.
name | type | description | default |
---|---|---|---|
Algorithm |
string |
Type of neighbor search: 'naive', 'single_tree', 'dual_tree', 'greedy'. | "dual_tree" |
Epsilon |
float64 |
If specified, will do approximate nearest neighbor search with given relative error. | 0 |
InputModel |
knnModel |
Pre-trained kNN model. | nil |
K |
int |
Number of nearest neighbors to find. | 0 |
LeafSize |
int |
Leaf size for tree building (used for kd-trees, vp trees, random projection trees, UB trees, R trees, R* trees, X trees, Hilbert R trees, R+ trees, R++ trees, spill trees, and octrees). | 20 |
Query |
*mat.Dense |
Matrix containing query points (optional). | mat.NewDense(1, 1, nil) |
RandomBasis |
bool |
Before tree-building, project the data onto a random orthogonal basis. | false |
Reference |
*mat.Dense |
Matrix containing the reference dataset. | mat.NewDense(1, 1, nil) |
Rho |
float64 |
Balance threshold (only valid for spill trees). | 0.7 |
Seed |
int |
Random seed (if 0, std::time(NULL) is used). | 0 |
Tau |
float64 |
Overlapping size (only valid for spill trees). | 0 |
TreeType |
string |
Type of tree to use: 'kd', 'vp', 'rp', 'max-rp', 'ub', 'cover', 'r', 'r-star', 'x', 'ball', 'hilbert-r', 'r-plus', 'r-plus-plus', 'spill', 'oct'. | "kd" |
TrueDistances |
*mat.Dense |
Matrix of true distances to compute the effective error (average relative error) (it is printed when -v is specified). | mat.NewDense(1, 1, nil) |
TrueNeighbors |
*mat.Dense (with ints) |
Matrix of true neighbors to compute the recall (it is printed when -v is specified). | mat.NewDense(1, 1, nil) |
Verbose |
bool |
Display informational messages and the full list of parameters and timers at the end of execution. | false |
Output options are returned via Go's support for multiple return values, in the order listed below.
name | type | description |
---|---|---|
distances |
*mat.Dense |
Matrix to output distances into. |
neighbors |
*mat.Dense (with ints) |
Matrix to output neighbors into. |
outputModel |
knnModel |
If specified, the kNN model will be output here. |
{: #go_knn_detailed-documentation }
This program will calculate the k-nearest-neighbors of a set of points using kd-trees or cover trees (cover tree support is experimental and may be slow). You may specify a separate set of reference points and query points, or just a reference set which will be used as both the reference and query set.
For example, the following command will calculate the 5 nearest neighbors of each point in input
and store the distances in distances
and the neighbors in neighbors
:
// Initialize optional parameters for Knn().
param := mlpack.KnnOptions()
param.K = 5
param.Reference = input
distances, neighbors, _ := mlpack.Knn(param)
The output is organized such that row i and column j in the neighbors output matrix corresponds to the index of the point in the reference set which is the j'th nearest neighbor from the point in the query set with index i. Row i and column j in the distances output matrix corresponds to the distance between those two points.
name | type | description | default |
---|---|---|---|
algorithm |
character |
Type of neighbor search: 'naive', 'single_tree', 'dual_tree', 'greedy'. | "dual_tree" |
epsilon |
numeric |
If specified, will do approximate nearest neighbor search with given relative error. | 0 |
input_model |
KNNModel |
Pre-trained kNN model. | NA |
k |
integer |
Number of nearest neighbors to find. | 0 |
leaf_size |
integer |
Leaf size for tree building (used for kd-trees, vp trees, random projection trees, UB trees, R trees, R* trees, X trees, Hilbert R trees, R+ trees, R++ trees, spill trees, and octrees). | 20 |
query |
numeric matrix |
Matrix containing query points (optional). | matrix(numeric(), 0, 0) |
random_basis |
logical |
Before tree-building, project the data onto a random orthogonal basis. | FALSE |
reference |
numeric matrix |
Matrix containing the reference dataset. | matrix(numeric(), 0, 0) |
rho |
numeric |
Balance threshold (only valid for spill trees). | 0.7 |
seed |
integer |
Random seed (if 0, std::time(NULL) is used). | 0 |
tau |
numeric |
Overlapping size (only valid for spill trees). | 0 |
tree_type |
character |
Type of tree to use: 'kd', 'vp', 'rp', 'max-rp', 'ub', 'cover', 'r', 'r-star', 'x', 'ball', 'hilbert-r', 'r-plus', 'r-plus-plus', 'spill', 'oct'. | "kd" |
true_distances |
numeric matrix |
Matrix of true distances to compute the effective error (average relative error) (it is printed when -v is specified). | matrix(numeric(), 0, 0) |
true_neighbors |
integer matrix |
Matrix of true neighbors to compute the recall (it is printed when -v is specified). | matrix(integer(), 0, 0) |
verbose |
logical |
Display informational messages and the full list of parameters and timers at the end of execution. | FALSE |
Results are returned in a R list. The keys of the list are the names of the output parameters.
name | type | description |
---|---|---|
distances |
numeric matrix |
Matrix to output distances into. |
neighbors |
integer matrix |
Matrix to output neighbors into. |
output_model |
KNNModel |
If specified, the kNN model will be output here. |
{: #r_knn_detailed-documentation }
This program will calculate the k-nearest-neighbors of a set of points using kd-trees or cover trees (cover tree support is experimental and may be slow). You may specify a separate set of reference points and query points, or just a reference set which will be used as both the reference and query set.
For example, the following command will calculate the 5 nearest neighbors of each point in "input"
and store the distances in "distances"
and the neighbors in "neighbors"
:
R> output <- knn(k=5, reference=input)
R> neighbors <- output$neighbors
R> distances <- output$distances
The output is organized such that row i and column j in the neighbors output matrix corresponds to the index of the point in the reference set which is the j'th nearest neighbor from the point in the query set with index i. Row i and column j in the distances output matrix corresponds to the distance between those two points.
// Initialize optional parameters for Kfn().
param := mlpack.KfnOptions()
param.Algorithm = "dual_tree"
param.Epsilon = 0
param.InputModel = nil
param.K = 0
param.LeafSize = 20
param.Percentage = 1
param.Query = mat.NewDense(1, 1, nil)
param.RandomBasis = false
param.Reference = mat.NewDense(1, 1, nil)
param.Seed = 0
param.TreeType = "kd"
param.TrueDistances = mat.NewDense(1, 1, nil)
param.TrueNeighbors = mat.NewDense(1, 1, nil)
distances, neighbors, output_model := mlpack.Kfn(param)
R> library(mlpack)
R> d <- kfn(algorithm="dual_tree", epsilon=0, input_model=NA, k=0,
leaf_size=20, percentage=1, query=matrix(numeric(), 0, 0),
random_basis=FALSE, reference=matrix(numeric(), 0, 0), seed=0,
tree_type="kd", true_distances=matrix(numeric(), 0, 0),
true_neighbors=matrix(integer(), 0, 0), verbose=FALSE)
R> distances <- d$distances
R> neighbors <- d$neighbors
R> output_model <- d$output_model
An implementation of k-furthest-neighbor search using single-tree and dual-tree algorithms. Given a set of reference points and query points, this can find the k furthest neighbors in the reference set of each query point using trees; trees that are built can be saved for future use. Detailed documentation{: .language-detail-link #cli }Detailed documentation{: .language-detail-link #python }Detailed documentation{: .language-detail-link #julia }Detailed documentation{: .language-detail-link #go }Detailed documentation{: .language-detail-link #r }.
name | type | description | default |
---|---|---|---|
--algorithm (-a) |
string |
Type of neighbor search: 'naive', 'single_tree', 'dual_tree', 'greedy'. | 'dual_tree' |
--epsilon (-e) |
double |
If specified, will do approximate furthest neighbor search with given relative error. Must be in the range [0,1). | 0 |
--help (-h) |
flag |
Default help info. Only exists in CLI binding. | |
--info |
string |
Print help on a specific option. Only exists in CLI binding. | '' |
--input_model_file (-m) |
KFNModel file |
Pre-trained kFN model. | '' |
--k (-k) |
int |
Number of furthest neighbors to find. | 0 |
--leaf_size (-l) |
int |
Leaf size for tree building (used for kd-trees, vp trees, random projection trees, UB trees, R trees, R* trees, X trees, Hilbert R trees, R+ trees, R++ trees, and octrees). | 20 |
--percentage (-p) |
double |
If specified, will do approximate furthest neighbor search. Must be in the range (0,1] (decimal form). Resultant neighbors will be at least (p*100)% of the distance to the true furthest neighbor. | 1 |
--query_file (-q) |
2-d matrix file |
Matrix containing query points (optional). | '' |
--random_basis (-R) |
flag |
Before tree-building, project the data onto a random orthogonal basis. | |
--reference_file (-r) |
2-d matrix file |
Matrix containing the reference dataset. | '' |
--seed (-s) |
int |
Random seed (if 0, std::time(NULL) is used). | 0 |
--tree_type (-t) |
string |
Type of tree to use: 'kd', 'vp', 'rp', 'max-rp', 'ub', 'cover', 'r', 'r-star', 'x', 'ball', 'hilbert-r', 'r-plus', 'r-plus-plus', 'oct'. | 'kd' |
--true_distances_file (-D) |
2-d matrix file |
Matrix of true distances to compute the effective error (average relative error) (it is printed when -v is specified). | '' |
--true_neighbors_file (-T) |
2-d index matrix file |
Matrix of true neighbors to compute the recall (it is printed when -v is specified). | '' |
--verbose (-v) |
flag |
Display informational messages and the full list of parameters and timers at the end of execution. | |
--version (-V) |
flag |
Display the version of mlpack. Only exists in CLI binding. | |
name | type | description |
---|---|---|
--distances_file (-d) |
2-d matrix file |
Matrix to output distances into. |
--neighbors_file (-n) |
2-d index matrix file |
Matrix to output neighbors into. |
--output_model_file (-M) |
KFNModel file |
If specified, the kFN model will be output here. |
{: #cli_kfn_detailed-documentation }
This program will calculate the k-furthest-neighbors of a set of points. You may specify a separate set of reference points and query points, or just a reference set which will be used as both the reference and query set.
For example, the following will calculate the 5 furthest neighbors of each point in 'input.csv'
and store the distances in 'distances.csv'
and the neighbors in 'neighbors.csv'
:
$ mlpack_kfn --k 5 --reference_file input.csv --distances_file distances.csv \
--neighbors_file neighbors.csv
The output files are organized such that row i and column j in the neighbors output matrix corresponds to the index of the point in the reference set which is the j'th furthest neighbor from the point in the query set with index i. Row i and column j in the distances output file corresponds to the distance between those two points.
name | type | description | default |
---|---|---|---|
algorithm |
str |
Type of neighbor search: 'naive', 'single_tree', 'dual_tree', 'greedy'. | 'dual_tree' |
copy_all_inputs |
bool |
If specified, all input parameters will be deep copied before the method is run. This is useful for debugging problems where the input parameters are being modified by the algorithm, but can slow down the code. Only exists in Python binding. | False |
epsilon |
float |
If specified, will do approximate furthest neighbor search with given relative error. Must be in the range [0,1). | 0 |
input_model |
KFNModelType |
Pre-trained kFN model. | None |
k |
int |
Number of furthest neighbors to find. | 0 |
leaf_size |
int |
Leaf size for tree building (used for kd-trees, vp trees, random projection trees, UB trees, R trees, R* trees, X trees, Hilbert R trees, R+ trees, R++ trees, and octrees). | 20 |
percentage |
float |
If specified, will do approximate furthest neighbor search. Must be in the range (0,1] (decimal form). Resultant neighbors will be at least (p*100)% of the distance to the true furthest neighbor. | 1 |
query |
matrix |
Matrix containing query points (optional). | np.empty([0, 0]) |
random_basis |
bool |
Before tree-building, project the data onto a random orthogonal basis. | False |
reference |
matrix |
Matrix containing the reference dataset. | np.empty([0, 0]) |
seed |
int |
Random seed (if 0, std::time(NULL) is used). | 0 |
tree_type |
str |
Type of tree to use: 'kd', 'vp', 'rp', 'max-rp', 'ub', 'cover', 'r', 'r-star', 'x', 'ball', 'hilbert-r', 'r-plus', 'r-plus-plus', 'oct'. | 'kd' |
true_distances |
matrix |
Matrix of true distances to compute the effective error (average relative error) (it is printed when -v is specified). | np.empty([0, 0]) |
true_neighbors |
int matrix |
Matrix of true neighbors to compute the recall (it is printed when -v is specified). | np.empty([0, 0], dtype=np.uint64) |
verbose |
bool |
Display informational messages and the full list of parameters and timers at the end of execution. | False |
Results are returned in a Python dictionary. The keys of the dictionary are the names of the output parameters.
name | type | description |
---|---|---|
distances |
matrix |
Matrix to output distances into. |
neighbors |
int matrix |
Matrix to output neighbors into. |
output_model |
KFNModelType |
If specified, the kFN model will be output here. |
{: #python_kfn_detailed-documentation }
This program will calculate the k-furthest-neighbors of a set of points. You may specify a separate set of reference points and query points, or just a reference set which will be used as both the reference and query set.
For example, the following will calculate the 5 furthest neighbors of each point in 'input'
and store the distances in 'distances'
and the neighbors in 'neighbors'
:
>>> output = kfn(k=5, reference=input)
>>> distances = output['distances']
>>> neighbors = output['neighbors']
The output matrices are organized such that row i and column j in the neighbors output matrix corresponds to the index of the point in the reference set which is the j'th furthest neighbor from the point in the query set with index i. Row i and column j in the distances output matrix corresponds to the distance between those two points.
name | type | description | default |
---|---|---|---|
algorithm |
String |
Type of neighbor search: 'naive', 'single_tree', 'dual_tree', 'greedy'. | "dual_tree" |
epsilon |
Float64 |
If specified, will do approximate furthest neighbor search with given relative error. Must be in the range [0,1). | 0 |
input_model |
KFNModel |
Pre-trained kFN model. | nothing |
k |
Int |
Number of furthest neighbors to find. | 0 |
leaf_size |
Int |
Leaf size for tree building (used for kd-trees, vp trees, random projection trees, UB trees, R trees, R* trees, X trees, Hilbert R trees, R+ trees, R++ trees, and octrees). | 20 |
percentage |
Float64 |
If specified, will do approximate furthest neighbor search. Must be in the range (0,1] (decimal form). Resultant neighbors will be at least (p*100) % of the distance as the true furthest neighbor. | 1 |
query |
Float64 matrix-like |
Matrix containing query points (optional). | zeros(0, 0) |
random_basis |
Bool |
Before tree-building, project the data onto a random orthogonal basis. | false |
reference |
Float64 matrix-like |
Matrix containing the reference dataset. | zeros(0, 0) |
seed |
Int |
Random seed (if 0, std::time(NULL) is used). | 0 |
tree_type |
String |
Type of tree to use: 'kd', 'vp', 'rp', 'max-rp', 'ub', 'cover', 'r', 'r-star', 'x', 'ball', 'hilbert-r', 'r-plus', 'r-plus-plus', 'oct'. | "kd" |
true_distances |
Float64 matrix-like |
Matrix of true distances to compute the effective error (average relative error) (it is printed when -v is specified). | zeros(0, 0) |
true_neighbors |
Int matrix-like |
Matrix of true neighbors to compute the recall (it is printed when -v is specified). | zeros(Int, 0, 0) |
verbose |
Bool |
Display informational messages and the full list of parameters and timers at the end of execution. | false |
Results are returned as a tuple, and can be unpacked directly into return values or stored directly as a tuple; undesired results can be ignored with the _ keyword.
name | type | description |
---|---|---|
distances |
Float64 matrix-like |
Matrix to output distances into. |
neighbors |
Int matrix-like |
Matrix to output neighbors into. |
output_model |
KFNModel |
If specified, the kFN model will be output here. |
{: #julia_kfn_detailed-documentation }
This program will calculate the k-furthest-neighbors of a set of points. You may specify a separate set of reference points and query points, or just a reference set which will be used as both the reference and query set.
For example, the following will calculate the 5 furthest neighbors of each point in input
and store the distances in distances
and the neighbors in neighbors
:
julia> using CSV
julia> input = CSV.read("input.csv")
julia> distances, neighbors, _ = kfn(k=5, reference=input)
The output matrices are organized such that the entry at row i and column j in the neighbors matrix is the index of the point in the reference set that is the j'th furthest neighbor of the point in the query set with index i. The entry at row i and column j in the distances matrix is the distance between those two points.
There are two types of input options: required options, which are passed directly to the function call, and optional options, which are passed via an initialized struct whose fields correspond to each of the optional parameters.
name | type | description | default |
---|---|---|---|
Algorithm | string | Type of neighbor search: 'naive', 'single_tree', 'dual_tree', 'greedy'. | "dual_tree" |
Epsilon | float64 | If specified, will do approximate furthest neighbor search with given relative error. Must be in the range [0,1). | 0 |
InputModel | kfnModel | Pre-trained kFN model. | nil |
K | int | Number of furthest neighbors to find. | 0 |
LeafSize | int | Leaf size for tree building (used for kd-trees, vp trees, random projection trees, UB trees, R trees, R* trees, X trees, Hilbert R trees, R+ trees, R++ trees, and octrees). | 20 |
Percentage | float64 | If specified, will do approximate furthest neighbor search. Must be in the range (0,1] (decimal form). Resultant neighbors will be at least (p*100) % of the distance to the true furthest neighbor. | 1 |
Query | *mat.Dense | Matrix containing query points (optional). | mat.NewDense(1, 1, nil) |
RandomBasis | bool | Before tree-building, project the data onto a random orthogonal basis. | false |
Reference | *mat.Dense | Matrix containing the reference dataset. | mat.NewDense(1, 1, nil) |
Seed | int | Random seed (if 0, std::time(NULL) is used). | 0 |
TreeType | string | Type of tree to use: 'kd', 'vp', 'rp', 'max-rp', 'ub', 'cover', 'r', 'r-star', 'x', 'ball', 'hilbert-r', 'r-plus', 'r-plus-plus', 'oct'. | "kd" |
TrueDistances | *mat.Dense | Matrix of true distances to compute the effective error (average relative error) (it is printed when -v is specified). | mat.NewDense(1, 1, nil) |
TrueNeighbors | *mat.Dense (with ints) | Matrix of true neighbors to compute the recall (it is printed when -v is specified). | mat.NewDense(1, 1, nil) |
Verbose | bool | Display informational messages and the full list of parameters and timers at the end of execution. | false |
Output options are returned via Go's support for multiple return values, in the order listed below.
name | type | description |
---|---|---|
distances | *mat.Dense | Matrix to output distances into. |
neighbors | *mat.Dense (with ints) | Matrix to output neighbors into. |
outputModel | kfnModel | If specified, the kFN model will be output here. |
{: #go_kfn_detailed-documentation }
This program will calculate the k-furthest-neighbors of a set of points. You may specify a separate set of reference points and query points, or just a reference set which will be used as both the reference and query set.
For example, the following will calculate the 5 furthest neighbors of each point in `input` and store the distances in `distances` and the neighbors in `neighbors`:
// Initialize optional parameters for Kfn().
param := mlpack.KfnOptions()
param.K = 5
param.Reference = input
distances, neighbors, _ := mlpack.Kfn(param)
The output matrices are organized such that the entry at row i and column j in the neighbors matrix is the index of the point in the reference set that is the j'th furthest neighbor of the point in the query set with index i. The entry at row i and column j in the distances matrix is the distance between those two points.
name | type | description | default |
---|---|---|---|
algorithm | character | Type of neighbor search: 'naive', 'single_tree', 'dual_tree', 'greedy'. | "dual_tree" |
epsilon | numeric | If specified, will do approximate furthest neighbor search with given relative error. Must be in the range [0,1). | 0 |
input_model | KFNModel | Pre-trained kFN model. | NA |
k | integer | Number of furthest neighbors to find. | 0 |
leaf_size | integer | Leaf size for tree building (used for kd-trees, vp trees, random projection trees, UB trees, R trees, R* trees, X trees, Hilbert R trees, R+ trees, R++ trees, and octrees). | 20 |
percentage | numeric | If specified, will do approximate furthest neighbor search. Must be in the range (0,1] (decimal form). Resultant neighbors will be at least (p*100) % of the distance to the true furthest neighbor. | 1 |
query | numeric matrix | Matrix containing query points (optional). | matrix(numeric(), 0, 0) |
random_basis | logical | Before tree-building, project the data onto a random orthogonal basis. | FALSE |
reference | numeric matrix | Matrix containing the reference dataset. | matrix(numeric(), 0, 0) |
seed | integer | Random seed (if 0, std::time(NULL) is used). | 0 |
tree_type | character | Type of tree to use: 'kd', 'vp', 'rp', 'max-rp', 'ub', 'cover', 'r', 'r-star', 'x', 'ball', 'hilbert-r', 'r-plus', 'r-plus-plus', 'oct'. | "kd" |
true_distances | numeric matrix | Matrix of true distances to compute the effective error (average relative error) (it is printed when -v is specified). | matrix(numeric(), 0, 0) |
true_neighbors | integer matrix | Matrix of true neighbors to compute the recall (it is printed when -v is specified). | matrix(integer(), 0, 0) |
verbose | logical | Display informational messages and the full list of parameters and timers at the end of execution. | FALSE |
Results are returned in an R list. The names of the list elements are the names of the output parameters.
name | type | description |
---|---|---|
distances | numeric matrix | Matrix to output distances into. |
neighbors | integer matrix | Matrix to output neighbors into. |
output_model | KFNModel | If specified, the kFN model will be output here. |
{: #r_kfn_detailed-documentation }
This program will calculate the k-furthest-neighbors of a set of points. You may specify a separate set of reference points and query points, or just a reference set which will be used as both the reference and query set.
For example, the following will calculate the 5 furthest neighbors of each point in "input" and store the distances in "distances" and the neighbors in "neighbors":
R> output <- kfn(k=5, reference=input)
R> distances <- output$distances
R> neighbors <- output$neighbors
The output matrices are organized such that the entry at row i and column j in the neighbors matrix is the index of the point in the reference set that is the j'th furthest neighbor of the point in the query set with index i. The entry at row i and column j in the distances matrix is the distance between those two points.
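The true_neighbors and true_distances options in the tables above feed the recall and effective-error diagnostics. As a rough sketch of what those measures mean (illustrative definitions of our own; mlpack's exact computation may differ):

```python
import numpy as np

def recall(neighbors, true_neighbors):
    """Fraction of returned neighbors that are true furthest neighbors,
    matched row by row (one row per query point)."""
    hits = sum(len(np.intersect1d(row, true_row))
               for row, true_row in zip(neighbors, true_neighbors))
    return hits / true_neighbors.size

def effective_error(distances, true_distances):
    """Average relative error of the returned distances."""
    return np.mean(np.abs(true_distances - distances) / true_distances)
```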
// Initialize optional parameters for Nmf().
param := mlpack.NmfOptions()
param.InitialH = mat.NewDense(1, 1, nil)
param.InitialW = mat.NewDense(1, 1, nil)
param.MaxIterations = 10000
param.MinResidue = 1e-05
param.Seed = 0
param.UpdateRules = "multdist"

h, w := mlpack.Nmf(input, rank, param)
R> library(mlpack)
R> d <- nmf(initial_h=matrix(numeric(), 0, 0),
initial_w=matrix(numeric(), 0, 0), input=matrix(numeric(), 0, 0),
max_iterations=10000, min_residue=1e-05, rank=0, seed=0,
update_rules="multdist", verbose=FALSE)
R> h <- d$h
R> w <- d$w
An implementation of non-negative matrix factorization. This can be used to decompose an input dataset into two low-rank non-negative components.
Detailed documentation{: .language-detail-link #cli }
Detailed documentation{: .language-detail-link #python }
Detailed documentation{: .language-detail-link #julia }
Detailed documentation{: .language-detail-link #go }
Detailed documentation{: .language-detail-link #r }
name | type | description | default |
---|---|---|---|
--help (-h) | flag | Default help info. Only exists in CLI binding. | |
--info | string | Print help on a specific option. Only exists in CLI binding. | '' |
--initial_h_file (-q) | 2-d matrix file | Initial H matrix. | '' |
--initial_w_file (-p) | 2-d matrix file | Initial W matrix. | '' |
--input_file (-i) | 2-d matrix file | Input dataset to perform NMF on. | **--** |
--max_iterations (-m) | int | Number of iterations before NMF terminates (0 runs until convergence). | 10000 |
--min_residue (-e) | double | The minimum root mean square residue allowed for each iteration, below which the program terminates. | 1e-05 |
--rank (-r) | int | Rank of the factorization. | **--** |
--seed (-s) | int | Random seed. If 0, 'std::time(NULL)' is used. | 0 |
--update_rules (-u) | string | Update rules for each iteration; ( multdist \| multdiv \| als ). | 'multdist' |
--verbose (-v) | flag | Display informational messages and the full list of parameters and timers at the end of execution. | |
--version (-V) | flag | Display the version of mlpack. Only exists in CLI binding. | |
name | type | description |
---|---|---|
--h_file (-H) | 2-d matrix file | Matrix to save the calculated H to. |
--w_file (-W) | 2-d matrix file | Matrix to save the calculated W to. |
{: #cli_nmf_detailed-documentation }
This program performs non-negative matrix factorization on the given dataset, storing the resulting decomposed matrices in the specified files. For an input dataset V, NMF decomposes V into two matrices W and H such that
V = W * H
where all elements in W and H are non-negative. If V is of size (n x m), then W will be of size (n x r) and H will be of size (r x m), where r is the rank of the factorization (specified by the --rank (-r) parameter).
Optionally, the desired update rules for each NMF iteration can be chosen from the following list:
- multdist: multiplicative distance-based update rules (Lee and Seung 1999)
- multdiv: multiplicative divergence-based update rules (Lee and Seung 1999)
- als: alternating least squares update rules (Paatero and Tapper 1994)
The maximum number of iterations is specified with --max_iterations (-m), and the minimum residue required for algorithm termination is specified with the --min_residue (-e) parameter.
For example, to run NMF on the input matrix 'V.csv' using the 'multdist' update rules with a rank-10 decomposition and storing the decomposed matrices into 'W.csv' and 'H.csv', the following command could be used:
$ mlpack_nmf --input_file V.csv --w_file W.csv --h_file H.csv --rank 10 --update_rules multdist
name | type | description | default |
---|---|---|---|
copy_all_inputs | bool | If specified, all input parameters will be deep copied before the method is run. This is useful for debugging problems where the input parameters are being modified by the algorithm, but can slow down the code. Only exists in Python binding. | False |
initial_h | matrix | Initial H matrix. | np.empty([0, 0]) |
initial_w | matrix | Initial W matrix. | np.empty([0, 0]) |
input | matrix | Input dataset to perform NMF on. | **--** |
max_iterations | int | Number of iterations before NMF terminates (0 runs until convergence). | 10000 |
min_residue | float | The minimum root mean square residue allowed for each iteration, below which the program terminates. | 1e-05 |
rank | int | Rank of the factorization. | **--** |
seed | int | Random seed. If 0, 'std::time(NULL)' is used. | 0 |
update_rules | str | Update rules for each iteration; ( multdist \| multdiv \| als ). | 'multdist' |
verbose | bool | Display informational messages and the full list of parameters and timers at the end of execution. | False |
Results are returned in a Python dictionary. The keys of the dictionary are the names of the output parameters.
name | type | description |
---|---|---|
h | matrix | Matrix to save the calculated H to. |
w | matrix | Matrix to save the calculated W to. |
{: #python_nmf_detailed-documentation }
This program performs non-negative matrix factorization on the given dataset, storing the resulting decomposed matrices in the specified files. For an input dataset V, NMF decomposes V into two matrices W and H such that
V = W * H
where all elements in W and H are non-negative. If V is of size (n x m), then W will be of size (n x r) and H will be of size (r x m), where r is the rank of the factorization (specified by the rank parameter).
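As a quick sanity check of those shapes (plain NumPy, purely illustrative):

```python
import numpy as np

n, m, r = 100, 40, 10
W = np.abs(np.random.randn(n, r))  # (n x r), non-negative
H = np.abs(np.random.randn(r, m))  # (r x m), non-negative
V = W @ H                          # (n x r) @ (r x m) -> (n x m)
assert V.shape == (n, m)
```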
Optionally, the desired update rules for each NMF iteration can be chosen from the following list:
- multdist: multiplicative distance-based update rules (Lee and Seung 1999)
- multdiv: multiplicative divergence-based update rules (Lee and Seung 1999)
- als: alternating least squares update rules (Paatero and Tapper 1994)
The maximum number of iterations is specified with max_iterations, and the minimum residue required for algorithm termination is specified with the min_residue parameter.
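For intuition, here is a rough NumPy sketch of the 'multdist' updates with a residue-based stopping rule, following the description above; this is only an illustration, not mlpack's internal implementation:

```python
import numpy as np

def nmf_multdist(V, rank, max_iterations=10000, min_residue=1e-05):
    """Multiplicative distance-based NMF updates (Lee and Seung 1999)."""
    n, m = V.shape
    rng = np.random.default_rng(0)
    W = rng.random((n, rank))
    H = rng.random((rank, m))
    eps = 1e-12  # guard against division by zero
    for _ in range(max_iterations):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
        # Root mean square residue of the current reconstruction.
        residue = np.sqrt(np.mean((V - W @ H) ** 2))
        if residue < min_residue:
            break
    return W, H
```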
For example, to run NMF on the input matrix 'V' using the 'multdist' update rules with a rank-10 decomposition and storing the decomposed matrices into 'W' and 'H', the following code could be used:
>>> output = nmf(input=V, rank=10, update_rules='multdist')
>>> W = output['w']
>>> H = output['h']
name | type | description | default |
---|---|---|---|
initial_h | Float64 matrix-like | Initial H matrix. | zeros(0, 0) |
initial_w | Float64 matrix-like | Initial W matrix. | zeros(0, 0) |
input | Float64 matrix-like | Input dataset to perform NMF on. | **--** |
max_iterations | Int | Number of iterations before NMF terminates (0 runs until convergence). | 10000 |
min_residue | Float64 | The minimum root mean square residue allowed for each iteration, below which the program terminates. | 1e-05 |
rank | Int | Rank of the factorization. | **--** |
seed | Int | Random seed. If 0, 'std::time(NULL)' is used. | 0 |
update_rules | String | Update rules for each iteration; ( multdist \| multdiv \| als ). | "multdist" |
verbose | Bool | Display informational messages and the full list of parameters and timers at the end of execution. | false |
Results are returned as a tuple, and can be unpacked directly into return values or stored directly as a tuple; undesired results can be ignored with the _ keyword.
name | type | description |
---|---|---|
h | Float64 matrix-like | Matrix to save the calculated H to. |
w | Float64 matrix-like | Matrix to save the calculated W to. |
{: #julia_nmf_detailed-documentation }
This program performs non-negative matrix factorization on the given dataset, storing the resulting decomposed matrices in the specified files. For an input dataset V, NMF decomposes V into two matrices W and H such that
V = W * H
where all elements in W and H are non-negative. If V is of size (n x m), then W will be of size (n x r) and H will be of size (r x m), where r is the rank of the factorization (specified by the rank parameter).
Optionally, the desired update rules for each NMF iteration can be chosen from the following list:
- multdist: multiplicative distance-based update rules (Lee and Seung 1999)
- multdiv: multiplicative divergence-based update rules (Lee and Seung 1999)
- als: alternating least squares update rules (Paatero and Tapper 1994)
The maximum number of iterations is specified with max_iterations, and the minimum residue required for algorithm termination is specified with the min_residue parameter.
For example, to run NMF on the input matrix `V` using the 'multdist' update rules with a rank-10 decomposition and storing the decomposed matrices into `W` and `H`, the following code could be used:
julia> using CSV
julia> V = CSV.read("V.csv")
julia> H, W = nmf(V, 10; update_rules="multdist")
There are two types of input options: required options, which are passed directly to the function call, and optional options, which are passed via an initialized struct whose fields correspond to each of the optional parameters.
name | type | description | default |
---|---|---|---|
InitialH | *mat.Dense | Initial H matrix. | mat.NewDense(1, 1, nil) |
InitialW | *mat.Dense | Initial W matrix. | mat.NewDense(1, 1, nil) |
input | *mat.Dense | Input dataset to perform NMF on. | **--** |
MaxIterations | int | Number of iterations before NMF terminates (0 runs until convergence). | 10000 |
MinResidue | float64 | The minimum root mean square residue allowed for each iteration, below which the program terminates. | 1e-05 |
rank | int | Rank of the factorization. | **--** |
Seed | int | Random seed. If 0, 'std::time(NULL)' is used. | 0 |
UpdateRules | string | Update rules for each iteration; ( multdist \| multdiv \| als ). | "multdist" |
Verbose | bool | Display informational messages and the full list of parameters and timers at the end of execution. | false |
Output options are returned via Go's support for multiple return values, in the order listed below.
name | type | description |
---|---|---|
h | *mat.Dense | Matrix to save the calculated H to. |
w | *mat.Dense | Matrix to save the calculated W to. |
{: #go_nmf_detailed-documentation }
This program performs non-negative matrix factorization on the given dataset, storing the resulting decomposed matrices in the specified files. For an input dataset V, NMF decomposes V into two matrices W and H such that
V = W * H
where all elements in W and H are non-negative. If V is of size (n x m), then W will be of size (n x r) and H will be of size (r x m), where r is the rank of the factorization (specified by the Rank parameter).
Optionally, the desired update rules for each NMF iteration can be chosen from the following list:
- multdist: multiplicative distance-based update rules (Lee and Seung 1999)
- multdiv: multiplicative divergence-based update rules (Lee and Seung 1999)
- als: alternating least squares update rules (Paatero and Tapper 1994)
The maximum number of iterations is specified with MaxIterations, and the minimum residue required for algorithm termination is specified with the MinResidue parameter.
For example, to run NMF on the input matrix `V` using the 'multdist' update rules with a rank-10 decomposition and storing the decomposed matrices into `W` and `H`, the following code could be used:
// Initialize optional parameters for Nmf().
param := mlpack.NmfOptions()
param.UpdateRules = "multdist"
H, W := mlpack.Nmf(V, 10, param)
name | type | description | default |
---|---|---|---|
initial_h | numeric matrix | Initial H matrix. | matrix(numeric(), 0, 0) |
initial_w | numeric matrix | Initial W matrix. | matrix(numeric(), 0, 0) |
input | numeric matrix | Input dataset to perform NMF on. | **--** |
max_iterations | integer | Number of iterations before NMF terminates (0 runs until convergence). | 10000 |
min_residue | numeric | The minimum root mean square residue allowed for each iteration, below which the program terminates. | 1e-05 |
rank | integer | Rank of the factorization. | **--** |
seed | integer | Random seed. If 0, 'std::time(NULL)' is used. | 0 |
update_rules | character | Update rules for each iteration; ( multdist \| multdiv \| als ). | "multdist" |
verbose | logical | Display informational messages and the full list of parameters and timers at the end of execution. | FALSE |
Results are returned in an R list. The names of the list elements are the names of the output parameters.
name | type | description |
---|---|---|
h | numeric matrix | Matrix to save the calculated H to. |
w | numeric matrix | Matrix to save the calculated W to. |
{: #r_nmf_detailed-documentation }
This program performs non-negative matrix factorization on the given dataset, storing the resulting decomposed matrices in the specified files. For an input dataset V, NMF decomposes V into two matrices W and H such that
V = W * H
where all elements in W and H are non-negative. If V is of size (n x m), then W will be of size (n x r) and H will be of size (r x m), where r is the rank of the factorization (specified by the rank parameter).
Optionally, the desired update rules for each NMF iteration can be chosen from the following list:
- multdist: multiplicative distance-based update rules (Lee and Seung 1999)
- multdiv: multiplicative divergence-based update rules (Lee and Seung 1999)
- als: alternating least squares update rules (Paatero and Tapper 1994)
The maximum number of iterations is specified with max_iterations, and the minimum residue required for algorithm termination is specified with the min_residue parameter.