1. Load the Boston housing data set from the pdp package. These data come from a classic paper that analyzed the relationship between several characteristics (e.g., crime rate, average rooms per dwelling, property tax value) and the median value of homes within a census tract (cmedv). See ?pdp::boston for details and further references.

    • What are the dimensions of this data set?
    • Perform some exploratory data analysis on this data set (be sure to assess the distribution of the target variable cmedv).
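
A possible starting point, assuming the pdp package is installed (the plotting choices below are just one way to do the EDA):

```r
# Load the Boston housing data from the pdp package
library(pdp)
data(boston)

dim(boston)             # dimensions of the data set
str(boston)             # variable types at a glance
summary(boston$cmedv)   # five-number summary of the target

# Assess the distribution of the target variable
hist(boston$cmedv, breaks = 30,
     main = "Distribution of cmedv", xlab = "cmedv")
```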
  2. Split the Boston housing data into a training set and test set using a 70-30% split.

    • How many observations are in the training set and test set?
    • Compare the distribution of cmedv between the training set and test set.
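
One way to do a simple 70-30 split with base R (the seed and the object names `boston_train`/`boston_test` are arbitrary choices, not required by the exercise):

```r
library(pdp)
data(boston)

set.seed(123)  # arbitrary seed for reproducibility
idx <- sample(nrow(boston), size = round(0.7 * nrow(boston)))
boston_train <- boston[idx, ]
boston_test  <- boston[-idx, ]

nrow(boston_train); nrow(boston_test)  # observations in each set

# Overlaid density plots make the train/test comparison easy to see
plot(density(boston_train$cmedv),
     main = "cmedv: train (solid) vs. test (dashed)")
lines(density(boston_test$cmedv), lty = 2)
```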
  3. Load the spam data set from the kernlab package.

    • What is the distribution of the target variable (type) across the entire data set?
    • Create a 70/30 training/test split stratified by the target variable.
    • Compare the distribution of the target variable between the training set and test set.
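
A sketch using `caret::createDataPartition()`, which stratifies automatically when given a factor (rsample's `initial_split()` with a `strata` argument is an alternative):

```r
library(kernlab)
library(caret)
data(spam)

# Class balance across the entire data set
round(prop.table(table(spam$type)), 3)

set.seed(123)  # arbitrary seed
idx <- createDataPartition(spam$type, p = 0.7, list = FALSE)
spam_train <- spam[idx, ]
spam_test  <- spam[-idx, ]

# Stratification should keep these proportions nearly identical
round(prop.table(table(spam_train$type)), 3)
round(prop.table(table(spam_test$type)), 3)
```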
  4. Using the Boston housing training data created in 2), fit a linear regression model that uses all available features to predict cmedv.

    • Create a model with lm(), glm(), and caret::train().
    • How do the coefficients compare across these models?
    • How does the MSE/RMSE compare across these models?
    • Which method is caret::train() using to fit a linear regression model?
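
A sketch of the three fits, assuming a training data frame named `boston_train` from exercise 2 (with no extra arguments, `glm()` defaults to the gaussian family, so all three should agree):

```r
library(caret)

fit_lm  <- lm(cmedv ~ ., data = boston_train)
fit_glm <- glm(cmedv ~ ., data = boston_train)  # gaussian family by default
fit_cv  <- train(cmedv ~ ., data = boston_train, method = "lm",
                 trControl = trainControl(method = "none"))

# Coefficients side by side; caret's final model is itself an lm object
cbind(coef(fit_lm), coef(fit_glm), coef(fit_cv$finalModel))
```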
  5. Using the Boston housing training data created in exercise 2), perform a 10-fold cross-validated linear regression model, repeated 5 times, that uses all available features to predict cmedv.

    • What is the average RMSE across all 50 model iterations?
    • Plot the distribution of the RMSE across all 50 model iterations.
    • Describe the results.
    • Repeat this exercise for the spam data from exercise 3); since the target (type) is binary, be sure to use a more appropriate metric (e.g., AUC or misclassification error).
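
The repeated cross-validation can be set up through caret's `trainControl()`; this sketch again assumes a `boston_train` data frame from exercise 2:

```r
library(caret)

set.seed(123)  # arbitrary seed so the fold assignments are reproducible
cv_lm <- train(
  cmedv ~ ., data = boston_train, method = "lm",
  trControl = trainControl(method = "repeatedcv", number = 10, repeats = 5)
)

mean(cv_lm$resample$RMSE)   # average RMSE over the 50 resamples
hist(cv_lm$resample$RMSE,
     main = "RMSE across 50 resamples", xlab = "RMSE")
```

For the spam variant, adding `classProbs = TRUE` and `summaryFunction = twoClassSummary` to `trainControl()` and passing `metric = "ROC"` to `train()` reports AUC instead of RMSE.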
  6. Repeat exercise 5) on the Boston housing data; however, instead of a linear regression model, use a k-nearest neighbor model that executes a hyperparameter grid search where k ranges from 2–20. How do this model’s results compare to the linear regression results?
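
The grid search can be expressed through caret's `tuneGrid` argument; this sketch assumes the same `boston_train` data frame, and centers/scales the features since k-NN is distance-based:

```r
library(caret)

set.seed(123)  # arbitrary seed
knn_fit <- train(
  cmedv ~ ., data = boston_train, method = "knn",
  preProcess = c("center", "scale"),  # k-NN is sensitive to feature scale
  trControl = trainControl(method = "repeatedcv", number = 10, repeats = 5),
  tuneGrid = expand.grid(k = 2:20)
)

knn_fit$bestTune   # the k selected by the grid search
plot(knn_fit)      # cross-validated RMSE as a function of k
```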
