Created: July 8, 2018 15:51
# Time Complexity of Training and Testing
## In terms of the number of training samples ($n$), features ($m$), and classes ($c$)
The following classifiers have computational complexity at most $\mathcal{O}(mnc)$ in both the training and the testing phase:

* k-nearest neighbors is linear at prediction time, and its training is trivial (https://stats.stackexchange.com/questions/219655/k-nn-computational-complexity).
* Naive Bayes is linear for distributions whose parameters can be estimated in linear time (e.g. the Poisson and multinomial distributions).
* Approximate SVM is linear (https://stats.stackexchange.com/questions/96995/machine-learning-classifiers-big-o-or-complexity).
* Logistic regression can be linear (https://cstheory.stackexchange.com/questions/4278/computational-complexity-of-learning-classification-algorithms-fitting-the-p).
* AdaBoost and gradient boosting build on a weak learner and are linear only if that weak learner is; with tree stumps, training takes roughly $f \cdot mn\log(n)$ for $f$ boosting rounds (https://stackoverflow.com/questions/22397485/what-is-the-o-runtime-complexity-of-adaboost).
* Neural networks can be linear in $n$ per epoch for a fixed architecture (https://ai.stackexchange.com/questions/5728/time-complexity-for-training-a-neural-network).
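To make the Naive Bayes case concrete, here is a minimal Gaussian Naive Bayes sketch (the function names are mine, not any library's API): training makes one pass over the data per class, $\mathcal{O}(nm)$ overall, and prediction scores every sample against every class, $\mathcal{O}(nmc)$.

```python
import numpy as np

def fit_gaussian_nb(X, y):
    # One pass over the data per class: O(n * m) training time overall.
    stats = {}
    for c in np.unique(y):
        Xc = X[y == c]
        # Per-class mean, variance (with a small floor), and prior.
        stats[c] = (Xc.mean(axis=0), Xc.var(axis=0) + 1e-9, len(Xc) / len(X))
    return stats

def predict_gaussian_nb(stats, X):
    # Score every sample against every class: O(n * m * c) prediction time.
    labels = np.array(list(stats))
    log_liks = np.stack([
        -0.5 * np.sum(np.log(2 * np.pi * var) + (X - mu) ** 2 / var, axis=1)
        + np.log(prior)
        for mu, var, prior in stats.values()
    ])
    return labels[np.argmax(log_liks, axis=0)]
```

Both phases touch each (sample, feature) pair a constant number of times per class, which is why Naive Bayes sits comfortably inside the $\mathcal{O}(mnc)$ budget.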
The following classifiers are super-linear in either the training or the testing phase:

* SVM training is polynomial with respect to the number of samples (https://stackoverflow.com/questions/16585465/training-complexity-of-linear-svm).
* Decision trees, and consequently random forests, are log-linear: $\mathcal{O}(mn\log n)$ (https://stackoverflow.com/questions/34212610/why-is-the-runtime-to-construct-a-decision-tree-mnlogn).
* Linear and quadratic discriminant analysis (LDA and QDA) are polynomial: $\mathcal{O}(nm^2)$ (https://stats.stackexchange.com/questions/211177/comparison-of-lda-vs-knn-time-complexity).
* The restricted Boltzmann machine (RBM) is polynomial with respect to $m$ (http://scikit-learn.org/stable/modules/generated/sklearn.neural_network.BernoulliRBM.html).
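The $mn\log n$ bound for decision trees comes from the split search: at each node, every feature is sorted ($\mathcal{O}(n\log n)$) and candidate thresholds are swept while class counts are updated incrementally. A minimal sketch of a single split search, assuming the Gini criterion (helper names are hypothetical, not a library API):

```python
import numpy as np

def gini_from_counts(counts, total):
    # Gini impurity from per-class counts: 1 - sum(p_c^2).
    return 1.0 - sum((k / total) ** 2 for k in counts.values())

def best_split(X, y):
    # Exhaustive split search for one tree node:
    # per feature, one sort O(n log n) plus a linear threshold sweep,
    # giving O(m * n log n) in total.
    n, m = X.shape
    classes = np.unique(y)
    best = (None, None, float("inf"))  # (feature, threshold, weighted gini)
    for j in range(m):
        order = np.argsort(X[:, j])    # dominant O(n log n) cost per feature
        xs, ys = X[order, j], y[order]
        left = {c: 0 for c in classes}
        right = {c: int(np.sum(ys == c)) for c in classes}
        for i in range(1, n):
            # Move one sample from right to left: O(1) count update.
            left[ys[i - 1]] += 1
            right[ys[i - 1]] -= 1
            if xs[i] == xs[i - 1]:
                continue  # no valid threshold between equal values
            g = (i * gini_from_counts(left, i)
                 + (n - i) * gini_from_counts(right, n - i)) / n
            if g < best[2]:
                best = (j, (xs[i] + xs[i - 1]) / 2, g)
    return best
```

The incremental count update is what keeps the sweep linear; recomputing the impurity from scratch at every threshold would make each node quadratic in $n$ instead.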