# Time Complexity of Training and Testing
## In terms of training samples ($n$), features ($m$), and classes ($c$)
The following classifiers have computational complexity less than or equal to $\mathcal{O}(mnc)$ in both the training and testing phases:
* k-nearest neighbors is linear: training is trivial (lazy learning) and a brute-force query costs $\mathcal{O}(mn)$.
* Naive Bayes is linear for those PDFs that can be estimated in linear time (e.g. Poisson and Multinomial PDFs).
* Approximate SVM (e.g. a linear kernel with an SGD-style solver) is linear.
* Logistic Regression can be linear (a fixed number of gradient-descent epochs, each costing $\mathcal{O}(mn)$).
* AdaBoost and Gradient Boosting train a weak learner per round and are linear only if that weak learner is; with tree-based weak learners each round costs $\mathcal{O}(mn \log n)$.
* Neural Networks can be linear (a fixed number of epochs over the $n$ samples, with per-sample cost depending on the architecture).
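To make the linear-time claim concrete, here is a minimal sketch (not any particular library's implementation) of multinomial Naive Bayes training: a single pass over the data accumulates per-class feature counts in $\mathcal{O}(nm)$, plus an $\mathcal{O}(cm)$ normalization step.

```python
from collections import defaultdict
import math

def train_multinomial_nb(X, y, alpha=1.0):
    """Train multinomial Naive Bayes in a single O(n*m) pass.

    X: list of n samples, each a list of m non-negative counts.
    y: list of n class labels.
    Returns log class priors and log feature likelihoods.
    """
    n = len(X)
    m = len(X[0])
    class_counts = defaultdict(int)               # samples per class
    feat_counts = defaultdict(lambda: [0.0] * m)  # feature-count sums per class
    for xi, yi in zip(X, y):                      # one pass over the data: O(n*m)
        class_counts[yi] += 1
        fc = feat_counts[yi]
        for j, v in enumerate(xi):
            fc[j] += v
    log_prior = {c: math.log(cnt / n) for c, cnt in class_counts.items()}
    log_like = {}
    for c, fc in feat_counts.items():             # normalization: O(c*m)
        total = sum(fc) + alpha * m               # Laplace smoothing with alpha
        log_like[c] = [math.log((v + alpha) / total) for v in fc]
    return log_prior, log_like
```

Prediction is likewise linear: scoring one sample against all classes costs $\mathcal{O}(mc)$.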
The following classifiers are nonlinear in either the training or the testing phase:
* SVM training is polynomial in the number of samples (typically between $\mathcal{O}(n^2)$ and $\mathcal{O}(n^3)$ for kernel solvers).
* Decision trees, and correspondingly Random Forests, are loglinear: $\mathcal{O}(mn \log n)$.
* Linear (and Quadratic) Discriminant Analysis (LDA and QDA) are polynomial: $\mathcal{O}(nm^2)$, dominated by the covariance computation.
* Restricted Boltzmann Machine (RBM) training is polynomial with respect to $m$.
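The $\mathcal{O}(mn \log n)$ figure for decision trees comes from the per-node split search: for each of the $m$ features, sorting the $n$ samples costs $\mathcal{O}(n \log n)$ and the subsequent threshold scan is $\mathcal{O}(n)$. A minimal sketch for the binary-class case (an illustrative toy, not a production implementation):

```python
def best_split(X, y):
    """Find the best single split by Gini impurity.

    For each of the m features: sort the n samples, O(n log n),
    then scan candidate thresholds, O(n) -- giving the
    O(m * n log n) per-node cost quoted above.
    X: list of n samples, each a list of m numeric features.
    y: list of n binary labels (0 or 1).
    """
    n, m = len(X), len(X[0])

    def gini(pos, total):
        if total == 0:
            return 0.0
        p = pos / total
        return 2 * p * (1 - p)  # binary-class Gini impurity

    total_pos = sum(y)
    best = (float("inf"), None, None)  # (weighted impurity, feature, threshold)
    for j in range(m):
        order = sorted(range(n), key=lambda i: X[i][j])  # O(n log n)
        left_pos = 0
        for k in range(n - 1):                           # O(n) threshold scan
            left_pos += y[order[k]]
            if X[order[k]][j] == X[order[k + 1]][j]:
                continue  # no valid threshold between equal values
            left, right = k + 1, n - k - 1
            score = (left * gini(left_pos, left)
                     + right * gini(total_pos - left_pos, right)) / n
            if score < best[0]:
                thr = (X[order[k]][j] + X[order[k + 1]][j]) / 2
                best = (score, j, thr)
    return best
```

Growing a full tree repeats this search at every node; with balanced splits the overall training cost stays $\mathcal{O}(mn \log n)$ per tree, which a Random Forest multiplies by the number of trees.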