From 80269d07d58ff4b5391ea645f6e3a4640862581b Mon Sep 17 00:00:00 2001
From: Trevor Campbell
Date: Thu, 9 Nov 2023 21:02:19 -0800
Subject: [PATCH] accuracy -> RMSPE in reg1

---
 source/regression1.Rmd | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/source/regression1.Rmd b/source/regression1.Rmd
index ecf51e05b..36c63eed6 100644
--- a/source/regression1.Rmd
+++ b/source/regression1.Rmd
@@ -761,10 +761,10 @@ Here we see that the smallest estimated RMSPE from cross-validation occurs when
 If we want to compare this multivariable KNN regression model to the model
 with only a single predictor *as part of the model tuning process*
 (e.g., if we are running forward selection as described
 in the chapter on evaluating and tuning classification models),
-then we must compare the accuracy estimated using only the training data via cross-validation.
-Looking back, the estimated cross-validation accuracy for the single-predictor
+then we must compare the RMSPE estimated using only the training data via cross-validation.
+Looking back, the estimated cross-validation RMSPE for the single-predictor
 model was `r format(round(sacr_min$mean), big.mark=",", nsmall=0, scientific = FALSE)`.
-The estimated cross-validation accuracy for the multivariable model is
+The estimated cross-validation RMSPE for the multivariable model is
 `r format(round(sacr_multi$mean), big.mark=",", nsmall=0, scientific = FALSE)`.
 Thus in this case, we did not improve the model by a large amount by adding this
 additional predictor.