Overview

vip is an R package for constructing variable importance plots (VIPs). VIPs are part of a larger framework referred to as interpretable machine learning (IML), which includes (but is not limited to) partial dependence plots (PDPs) and individual conditional expectation (ICE) curves. While PDPs and ICE curves (available in the R package pdp) help visualize feature effects, VIPs help visualize feature impact (either locally or globally). An in-progress, but comprehensive, overview of IML can be found here: https://github.com/christophM/interpretable-ml-book.

Many supervised learning algorithms can naturally emit some measure of importance for the features used in the model, and these approaches are embedded in many different packages. The downside, however, is that each package uses a different function and interface, and it can be challenging (and distracting) to have to remember each one (e.g., remembering to use xgb.importance() for xgboost models and summary.gbm() for gbm models). With vip you get one consistent interface for computing variable importance across many types of supervised learning models from a number of packages. Additionally, the vip package offers a number of model-agnostic procedures for computing feature importance (see the next section), as well as an experimental function for quantifying the strength of potential interaction effects. For details and example usage, visit the vip package website.
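To illustrate the kind of model-agnostic procedure referred to above, here is a minimal sketch of permutation-based importance written in base R only. It is not vip's implementation (vip automates this, and more, across model types behind one interface); the `perm_importance()` helper, the choice of RMSE as the metric, and the `lm()` fit on `mtcars` are all illustrative assumptions.

```r
# Illustrative sketch of permutation importance (NOT vip's own code):
# a feature's importance is the increase in prediction error after
# randomly shuffling that feature's values.
set.seed(101)

perm_importance <- function(model, data, response) {
  rmse <- function(actual, pred) sqrt(mean((actual - pred)^2))
  baseline <- rmse(data[[response]], predict(model, newdata = data))
  features <- setdiff(names(data), response)
  sapply(features, function(f) {
    shuffled <- data
    shuffled[[f]] <- sample(shuffled[[f]])  # break the feature-response link
    rmse(data[[response]], predict(model, newdata = shuffled)) - baseline
  })
}

# Hypothetical example: a linear model on the built-in mtcars data
fit <- lm(mpg ~ wt + hp + drat, data = mtcars)
vi_scores <- perm_importance(fit, mtcars[, c("mpg", "wt", "hp", "drat")], "mpg")
print(sort(vi_scores, decreasing = TRUE))
```

With vip itself, the equivalent would typically be a single call (e.g., computing scores and plotting them) regardless of whether the underlying model came from xgboost, gbm, or elsewhere.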

Features