CNN

The training and inference phases of a neural network can be analysed individually.

For backpropagation (training):

O(n^5), where n is simultaneously the number of neurons per layer, the number of layers, and the number of training iterations (see the assumption below).

For forward propagation (inference):

O(n^4), where n is the number of neurons in a layer, including the bias unit.

Assumption: each layer has the same number of neurons, and the number of layers equals the number of neurons per layer.
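
To make the counting concrete, here is a minimal sketch of where these orders come from. It assumes a fully connected network with n neurons per layer and n layers (as above), and additionally assumes a batch of n examples and n training iterations; the function and variable names are illustrative, not taken from the reference.

```python
# Hypothetical sketch: counting multiply-accumulate operations in a
# fully connected network, to illustrate the O(n^4) / O(n^5) figures
# under the assumption n neurons per layer, n layers, and (assumed
# here for illustration) a batch of n examples and n iterations.

def forward_macs(neurons_per_layer, num_layers, batch_size):
    """Multiply-accumulates for one forward pass: each layer applies an
    (n x n) weight matrix to an (n x batch) activation matrix."""
    n = neurons_per_layer
    return num_layers * n * n * batch_size          # ~ L * n^2 * m


def training_macs(neurons_per_layer, num_layers, batch_size, iterations):
    """Backpropagation costs roughly a constant factor more than the
    forward pass, repeated once per training iteration."""
    return iterations * forward_macs(neurons_per_layer, num_layers, batch_size)


if __name__ == "__main__":
    n = 64
    # With layers = batch = iterations = n, the counts grow as n^4 and n^5.
    print(forward_macs(n, n, n))        # ~ n^4
    print(training_macs(n, n, n, n))    # ~ n^5
```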

Reference: Computational Complexity Of Neural Networks

RNN

Given a single-layer RNN with 'a' rectified linear units and inputs of length 'b', a population prediction error of ε can be achieved with at most O((a^4 * b) / ε^2) samples.
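
As a rough illustration of how this bound scales, here is a small sketch that evaluates it for concrete values; the constant hidden by the big-O is unknown, so it is left as a free parameter.

```python
# Hypothetical sketch: evaluating the O((a^4 * b) / eps^2) sample bound
# up to an unknown constant C (the bound only fixes the growth rate).

def rnn_sample_bound(a_units, b_length, eps, constant=1.0):
    """Samples sufficient (up to the constant) for population prediction
    error eps, for a single-layer ReLU RNN with `a_units` units and
    inputs of length `b_length`."""
    return constant * (a_units ** 4) * b_length / (eps ** 2)


if __name__ == "__main__":
    # Halving eps quadruples the bound; doubling the units multiplies it by 16.
    print(rnn_sample_bound(a_units=32, b_length=100, eps=0.1))
    print(rnn_sample_bound(a_units=64, b_length=100, eps=0.1))
```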

Reference: Sample Complexity Bounds for Recurrent Neural Networks with Application to Combinatorial Graph Problems

XGBoost

Let 'd' be the maximum depth of a tree, 'K' the total number of trees, 'x' the total number of non-missing entries in the training data, and 'n' the number of training examples. For the exact greedy algorithm, the complexity is O(K * d * x * log(n)).

To find the optimal split at each node, you need to re-sort the data on each column. This incurs a time complexity per layer that is crudely approximated by O(x * log(n)): the 'x' non-missing entries are spread across the features, and at each layer you sort, per feature, lists of length at most 'n'.
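
As an illustration of where the per-feature cost comes from, here is a simplified sketch of exact greedy split finding on a single feature column. The gain formula is a stripped-down squared-gradient score rather than XGBoost's full regularised objective, and all names are illustrative.

```python
# Hypothetical sketch of exact greedy split finding on one feature column.
# The sort dominates at O(n log n); the scan over split candidates is linear.
# The gain below is a simplified squared-gradient score, not XGBoost's full
# regularised gain.

def best_split_one_feature(values, gradients, hessians, reg_lambda=1.0):
    """Return (best_gain, best_threshold) for one feature at one node."""
    order = sorted(range(len(values)), key=lambda i: values[i])   # O(n log n)
    g_total = sum(gradients)
    h_total = sum(hessians)
    g_left = h_left = 0.0
    best_gain, best_threshold = 0.0, None

    def score(g, h):
        return g * g / (h + reg_lambda)

    for rank, i in enumerate(order[:-1]):                         # O(n) scan
        g_left += gradients[i]
        h_left += hessians[i]
        g_right = g_total - g_left
        h_right = h_total - h_left
        gain = score(g_left, h_left) + score(g_right, h_right) - score(g_total, h_total)
        if gain > best_gain:
            best_gain = gain
            # Split halfway between consecutive sorted feature values.
            best_threshold = (values[i] + values[order[rank + 1]]) / 2.0
    return best_gain, best_threshold
```

Summed over all features at one level of a tree, this sort-and-scan touches roughly the 'x' non-missing entries, which is where the per-layer O(x * log(n)) term comes from; multiplying by depth 'd' and 'K' trees gives O(K * d * x * log(n)).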

Note: the log(n) factor can be reduced further by using binary search.

References: XGBoost: A Scalable Tree Boosting System and XGBoost paper - time complexity analysis
