What does an interpretable RF visualization look like? Out-of-the-box 📦 RF implementations in R and Python compute variable importance over all trees, but how do we get there?
In other words, what would cumulative variable importance for an RF look like?
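One way to see it is that the forest-level importance is just an average of per-tree importances, so we can track how that average evolves as trees are added. Here is a minimal sketch, assuming Python with scikit-learn (the dataset and parameter choices are illustrative, not from the original):

```python
# Sketch: cumulative variable importance as a running mean over trees.
# Assumes scikit-learn's RandomForestRegressor; load_diabetes is just an example dataset.
import numpy as np
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True)
rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Per-tree importances: shape (n_trees, n_features)
per_tree = np.array([tree.feature_importances_ for tree in rf.estimators_])

# Running mean after each tree -> one row per "cumulative" importance snapshot
cumulative = np.cumsum(per_tree, axis=0) / np.arange(1, per_tree.shape[0] + 1)[:, None]

# The final row should closely match the forest's over-all-trees importance
print(np.allclose(cumulative[-1], rf.feature_importances_))
```

Plotting each column of `cumulative` against the number of trees gives one curve per variable, showing how the importance estimate stabilizes as the forest grows.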