
Compute variable importance scores for the predictors in a model.


vi(object, ...)

# S3 method for default
vi(
  object,
  method = c("model", "firm", "permute", "shap"),
  feature_names = NULL,
  abbreviate_feature_names = NULL,
  sort = TRUE,
  decreasing = TRUE,
  scale = FALSE,
  rank = FALSE,
  ...
)



A fitted model object (e.g., a randomForest object) or an object that inherits from class "vi".


Additional optional arguments to be passed on to vi_model, vi_firm, vi_permute, or vi_shap; see their respective help pages for details.


Character string specifying the type of variable importance (VI) to compute. Current options are "model" (the default), for model-specific VI scores (see vi_model for details); "firm", for variance-based VI scores (see vi_firm for details); "permute", for permutation-based VI scores (see vi_permute for details); or "shap", for Shapley-based VI scores (see vi_shap for details). For more details on the variance-based methods, see Greenwell et al. (2018) and Scholbeck et al. (2019).
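As a hedged sketch of how these options are typically selected (the fitted model `fit` and prediction wrapper `pfun` below are placeholder names, not objects defined on this page):

```r
library(vip)

# Hypothetical fitted model `fit` and prediction wrapper `pfun`
vi(fit)                                        # method = "model" (default)
vi(fit, method = "firm")                       # variance-based VI scores
vi(fit, method = "permute", target = "y",      # permutation-based VI scores
   metric = "rmse", pred_wrapper = pfun)
vi(fit, method = "shap", pred_wrapper = pfun)  # Shapley-based VI scores
```

Note that the model-agnostic methods ("permute" and "shap") require a prediction wrapper, while "model" relies on the fitted object's own importance measure.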


Character vector giving the names of the predictor variables (i.e., features) of interest.


Integer specifying the length at which to abbreviate feature names. Default is NULL which results in no abbreviation (i.e., the full name of each feature will be printed).


Logical indicating whether or not to sort the variable importance scores. Default is TRUE.


Logical indicating whether or not the variable importance scores should be sorted in descending (TRUE) or ascending (FALSE) order of importance. Default is TRUE.


Logical indicating whether or not to scale the variable importance scores so that the largest is 100. Default is FALSE.


Logical indicating whether or not to rank the variable importance scores (i.e., convert to integer ranks). Default is FALSE. Potentially useful when comparing variable importance scores across different models using different methods.
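Since scores produced by different methods live on different scales, converting both to integer ranks puts them on common footing. A minimal sketch using the built-in mtcars data:

```r
library(vip)

# Model-specific scores for a linear model, converted to integer ranks
fit <- lm(mpg ~ ., data = mtcars)
vi(fit, rank = TRUE)  # Importance column now contains integer ranks
```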


A tidy data frame (i.e., a "tibble" object) with at least two columns: Variable and Importance. For "lm"/"glm"-like objects, an additional column, Sign, gives the sign (i.e., POS/NEG) of the original coefficient. If method = "permute" and nsim > 1, then an additional column, StDev, gives the standard deviation of the permutation-based variable importance scores.
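To see the extra Sign column, a quick sketch with an "lm" object (for linear models, the importance score is based on the absolute t-statistic, and Sign records whether each coefficient is positive or negative):

```r
library(vip)

# The resulting tibble has three columns: Variable, Importance, and Sign
vi(lm(mpg ~ wt + hp + drat, data = mtcars))
```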


Greenwell, B. M., Boehmke, B. C., and McCarthy, A. J. A Simple and Effective Model-Based Variable Importance Measure. arXiv preprint arXiv:1805.04755 (2018).

Scholbeck, C. A., Molnar, C., Heumann, C., Bischl, B., and Casalicchio, G. Sampling, Intervening, Conditioning, Predicting: A Generalized Framework for Model-Agnostic Interpretations. arXiv preprint arXiv:1904.03959 (2019).


# A projection pursuit regression example
library(vip)

# Load the sample data
data(mtcars)

# Fit a projection pursuit regression model (ppr() is from the built-in
# stats package)
mtcars.ppr <- ppr(mpg ~ ., data = mtcars, nterms = 1)

# Prediction wrapper that tells vi() how to obtain new predictions from your
# fitted model
pfun <- function(object, newdata) predict(object, newdata = newdata)

# Compute permutation-based variable importance scores
set.seed(1434)  # for reproducibility
(vis <- vi(mtcars.ppr, method = "permute", target = "mpg", nsim = 10,
           metric = "rmse", pred_wrapper = pfun))
#> # A tibble: 10 × 3
#>    Variable Importance   StDev
#>    <chr>         <dbl>   <dbl>
#>  1 wt         3.17     0.374  
#>  2 hp         2.18     0.462  
#>  3 gear       0.755    0.367  
#>  4 qsec       0.674    0.240  
#>  5 cyl        0.462    0.158  
#>  6 am         0.173    0.144  
#>  7 vs         0.0999   0.0605 
#>  8 drat       0.0265   0.0564 
#>  9 carb       0.00898  0.00885
#> 10 disp      -0.000824 0.00744

# Plot variable importance scores
vip(vis, include_type = TRUE, all_permutations = TRUE,
    geom = "point", aesthetics = list(color = "forestgreen", size = 3))

# A binary classification example
if (FALSE) {
library(rpart)  # for classification and regression trees

# Load Wisconsin breast cancer data; see ?mlbench::BreastCancer for details
data(BreastCancer, package = "mlbench")
bc <- subset(BreastCancer, select = -Id)  # for brevity

# Fit a standard classification tree
set.seed(1032)  # for reproducibility
tree <- rpart(Class ~ ., data = bc, cp = 0)

# Prune using 1-SE rule (e.g., use `plotcp(tree)` for guidance)
cp <- tree$cptable
cp <- cp[cp[, "nsplit"] == 2L, "CP"]
tree2 <- prune(tree, cp = cp)  # tree with three splits

# Default tree-based VI scores
vi(tree2)

# Computing permutation importance requires a prediction wrapper. For
# classification, the return value depends on the chosen metric; see
# `?vip::vi_permute` for details.
pfun <- function(object, newdata) {
  # Need vector of predicted class probabilities when using log-loss metric
  predict(object, newdata = newdata, type = "prob")[, "malignant"]
}

# Permutation-based importance (note that only the predictors that show up
# in the final tree have non-zero importance)
set.seed(1046)  # for reproducibility
vi(tree2, method = "permute", nsim = 10, target = "Class",
   metric = "logloss", pred_wrapper = pfun, reference_class = "malignant")

# Equivalent (but not sorted)
set.seed(1046)  # for reproducibility
vi_permute(tree2, nsim = 10, target = "Class", metric = "logloss",
           pred_wrapper = pfun, reference_class = "malignant")