Advanced tidymodels
We used feature hashing to generate a smaller set of indicator columns to deal with the large number of levels for the agent and country predictors.
Tree-based models (and a few others) don’t require indicators for categorical predictors. They can split on these variables as-is.
We’ll keep all categorical predictors as factors and focus on optimizing additional boosting parameters.
lgbm_spec <-
  boost_tree(
    trees = 1000, learn_rate = tune(), min_n = tune(),
    tree_depth = tune(), loss_reduction = tune(), stop_iter = tune()
  ) %>%
  set_mode("regression") %>%
  set_engine("lightgbm", num_threads = 1)  # single-threaded within lightgbm

lgbm_wflow <- workflow(avg_price_per_room ~ ., lgbm_spec)

lgbm_param <-
  lgbm_wflow %>%
  extract_parameter_set_dials() %>%
  # constrain the learning rate to 10^-5 through 10^-1 (log10 units)
  update(learn_rate = learn_rate(c(-5, -1)))
Instead of pre-defining a grid of candidate points, we can model our current results to predict what the next candidate point should be.
Suppose that we are only tuning the learning rate in our boosted tree.
We could fit a simple model to the existing results (see the sketch below) and use it to predict and rank new learning rate candidates.
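A minimal sketch of that idea, assuming the tidymodels packages are attached and that grid_results is a hypothetical tibble holding previously resampled learn_rate values and their MAE estimates:

# hypothetical data: columns `learn_rate` and `mae` from earlier resampling
meta_mod <- lm(mae ~ log10(learn_rate), data = grid_results)

# score and rank some new candidate learning rates by predicted MAE
candidates <- tibble(learn_rate = 10^runif(500, min = -5, max = -1))
candidates %>%
  mutate(.pred_mae = predict(meta_mod, candidates)) %>%
  arrange(.pred_mae) %>%
  slice(1)  # the most promising candidate to try next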
A linear model probably isn’t the best choice though (more in a minute).
To illustrate the process, we resampled a large grid of learning rate values for our data to show the relationship between MAE and the learning rate.
Now suppose that we used a grid of three points in the parameter range for learning rate…
We can make a “meta-model” with a small set of historical performance results.
Gaussian process (GP) models are a good choice for modeling performance.
\[\operatorname{cov}(\boldsymbol{x}_i, \boldsymbol{x}_j) = \exp\left(-\frac{1}{2}|\boldsymbol{x}_i - \boldsymbol{x}_j|^2\right) + \sigma^2_{ij}\]
The GP model can take candidate tuning parameter combinations as inputs and make predictions for performance (e.g., MAE).
The variance is mostly driven by spatial variability (the previous equation).
The predicted variance is zero at locations of actual data points and becomes very high when far away from any observed data.
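To make this concrete, here is a rough sketch using the GPfit package (which tune uses for its GP meta-model); the learning rates and MAE values below are made up for illustration:

library(GPfit)

# a few "historical" results: learning rate (rescaled to [0, 1]) and its MAE
x_obs <- matrix(c(0.1, 0.5, 0.9), ncol = 1)
y_obs <- c(11.2, 9.8, 10.5)

gp_mod <- GP_fit(X = x_obs, Y = y_obs)

# predicted mean and variance for a grid of new candidate values
x_new   <- matrix(seq(0, 1, length.out = 50), ncol = 1)
gp_pred <- predict(gp_mod, xnew = x_new)

head(gp_pred$Y_hat)  # predicted mean performance
head(gp_pred$MSE)    # predicted variance: near zero at the observed points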
Your GP makes predictions on two new candidate tuning parameters.
We want to minimize MAE.
Which should we choose?
This isn’t a very good fit but we can still use it.
How can we use the outputs to choose the next point to measure?
Acquisition functions take the predicted mean and variance and use them to balance exploration of the parameter space against exploitation of the current results: exploration focuses on the variance, exploitation is about the mean.
We’ll use an acquisition function to select a new candidate.
The most popular method appears to be expected improvement (EI) above the current best results.
We would probably pick the point with the largest EI as the next point.
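For a metric we want to minimize, EI at a candidate point \(\boldsymbol{x}\) has a standard closed form under the GP's Gaussian prediction (tune's implementation may differ in its details), where \(\mu\) and \(\sigma\) are the predicted mean and standard deviation and \(m_{best}\) is the current best result:

\[\operatorname{EI}(\boldsymbol{x}) = \left(m_{best} - \mu(\boldsymbol{x})\right)\,\Phi(z) + \sigma(\boldsymbol{x})\,\phi(z), \qquad z = \frac{m_{best} - \mu(\boldsymbol{x})}{\sigma(\boldsymbol{x})}\]

where \(\Phi\) and \(\phi\) are the standard normal CDF and density.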
(There are other functions beyond EI.)
Once we pick the candidate point, we measure its performance (e.g., via resampling).
Another GP is fit, EI is recomputed, and so on.
We stop when we have completed the allowed number of iterations or if we don’t see any improvement after a pre-set number of attempts.
We’ll use a function called tune_bayes() that has very similar syntax to tune_grid().

It has an additional initial argument for the initial set of performance estimates and parameter combinations for the GP model.

initial can be the results of another tune_*() function or an integer (in which case tune_grid() is used under the hood to make such an initial set of results).
We’ll run the optimization more than once, so let’s make an initial grid of results to serve as the substrate for the BO.
I suggest using at least the number of tuning parameters plus two as the size of the initial grid for BO.
reg_metrics <- metric_set(mae, rsq)

set.seed(12)
init_res <-
  lgbm_wflow %>%
  tune_grid(
    resamples = hotel_rs,
    # initial grid size: number of tuning parameters plus two
    grid = nrow(lgbm_param) + 2,
    param_info = lgbm_param,
    metrics = reg_metrics
  )
show_best(init_res, metric = "mae") %>% select(-.metric, -.estimator)
#> # A tibble: 5 × 9
#> min_n tree_depth learn_rate loss_reduction stop_iter mean n std_err .config
#> <int> <int> <dbl> <dbl> <int> <dbl> <int> <dbl> <chr>
#> 1 16 12 0.0136 1.91e- 3 9 10.1 10 0.196 Preprocessor1_Model4
#> 2 9 4 0.0415 5.21e- 9 13 10.2 10 0.167 Preprocessor1_Model1
#> 3 25 8 0.00256 9.58e-10 7 14.1 10 0.278 Preprocessor1_Model7
#> 4 22 9 0.00154 5.77e- 6 5 19.3 10 0.326 Preprocessor1_Model5
#> 5 32 3 0.000144 3.02e+ 1 18 47.6 10 0.387 Preprocessor1_Model6
ctrl_bo <- control_bayes(verbose_iter = TRUE) # <- for demonstration
set.seed(15)
lgbm_bayes_res <-
lgbm_wflow %>%
tune_bayes(
resamples = hotel_rs,
initial = init_res, # <- initial results
iter = 20,
param_info = lgbm_param,
control = ctrl_bo,
metrics = reg_metrics
)
#> Optimizing mae using the expected improvement
#>
#> ── Iteration 1 ───────────────────────────────────────────────────────
#>
#> i Current best: mae=10.13 (@iter 0)
#> i Gaussian process model
#> ✓ Gaussian process model
#> i Generating 5000 candidates
#> i Predicted candidates
#> i min_n=32, tree_depth=12, learn_rate=0.0178, loss_reduction=1.03e-10, stop_iter=12
#> i Estimating performance
#> ✓ Estimating performance
#> ♥ Newest results: mae=10.08 (+/-0.175)
#>
#> ── Iteration 2 ───────────────────────────────────────────────────────
#>
#> i Current best: mae=10.08 (@iter 1)
#> i Gaussian process model
#> ✓ Gaussian process model
#> i Generating 5000 candidates
#> i Predicted candidates
#> i min_n=15, tree_depth=14, learn_rate=0.0977, loss_reduction=0.00535, stop_iter=4
#> i Estimating performance
#> ✓ Estimating performance
#> ♥ Newest results: mae=9.719 (+/-0.187)
#>
#> ── Iteration 3 ───────────────────────────────────────────────────────
#>
#> i Current best: mae=9.719 (@iter 2)
#> i Gaussian process model
#> ✓ Gaussian process model
#> i Generating 5000 candidates
#> i Predicted candidates
#> i min_n=38, tree_depth=1, learn_rate=0.1, loss_reduction=0.0809, stop_iter=10
#> i Estimating performance
#> ✓ Estimating performance
#> ⓧ Newest results: mae=15.45 (+/-0.253)
#>
#> ── Iteration 4 ───────────────────────────────────────────────────────
#>
#> i Current best: mae=9.719 (@iter 2)
#> i Gaussian process model
#> ✓ Gaussian process model
#> i Generating 5000 candidates
#> i Predicted candidates
#> i min_n=32, tree_depth=1, learn_rate=0.00833, loss_reduction=1.31e-06, stop_iter=8
#> i Estimating performance
#> ✓ Estimating performance
#> ⓧ Newest results: mae=19.44 (+/-0.33)
#>
#> ── Iteration 5 ───────────────────────────────────────────────────────
#>
#> i Current best: mae=9.719 (@iter 2)
#> i Gaussian process model
#> ✓ Gaussian process model
#> i Generating 5000 candidates
#> i Predicted candidates
#> i min_n=18, tree_depth=8, learn_rate=0.0495, loss_reduction=1.4e-06, stop_iter=5
#> i Estimating performance
#> ✓ Estimating performance
#> ⓧ Newest results: mae=9.757 (+/-0.146)
#>
#> ── Iteration 6 ───────────────────────────────────────────────────────
#>
#> i Current best: mae=9.719 (@iter 2)
#> i Gaussian process model
#> ✓ Gaussian process model
#> i Generating 5000 candidates
#> i Predicted candidates
#> i min_n=3, tree_depth=14, learn_rate=0.0319, loss_reduction=4.02e-09, stop_iter=17
#> i Estimating performance
#> ✓ Estimating performance
#> ⓧ Newest results: mae=9.76 (+/-0.163)
#>
#> ── Iteration 7 ───────────────────────────────────────────────────────
#>
#> i Current best: mae=9.719 (@iter 2)
#> i Gaussian process model
#> ✓ Gaussian process model
#> i Generating 5000 candidates
#> i Predicted candidates
#> i min_n=6, tree_depth=8, learn_rate=0.0883, loss_reduction=1.94e-08, stop_iter=4
#> i Estimating performance
#> ✓ Estimating performance
#> ♥ Newest results: mae=9.712 (+/-0.17)
#>
#> ── Iteration 8 ───────────────────────────────────────────────────────
#>
#> i Current best: mae=9.712 (@iter 7)
#> i Gaussian process model
#> ✓ Gaussian process model
#> i Generating 5000 candidates
#> i Predicted candidates
#> i min_n=6, tree_depth=8, learn_rate=0.025, loss_reduction=7.82e-05, stop_iter=19
#> i Estimating performance
#> ✓ Estimating performance
#> ⓧ Newest results: mae=9.838 (+/-0.17)
#>
#> ── Iteration 9 ───────────────────────────────────────────────────────
#>
#> i Current best: mae=9.712 (@iter 7)
#> i Gaussian process model
#> ✓ Gaussian process model
#> i Generating 5000 candidates
#> i Predicted candidates
#> i min_n=32, tree_depth=6, learn_rate=0.0737, loss_reduction=2.15e-07, stop_iter=8
#> i Estimating performance
#> ✓ Estimating performance
#> ⓧ Newest results: mae=10.06 (+/-0.2)
#>
#> ── Iteration 10 ──────────────────────────────────────────────────────
#>
#> i Current best: mae=9.712 (@iter 7)
#> i Gaussian process model
#> ✓ Gaussian process model
#> i Generating 5000 candidates
#> i Predicted candidates
#> i min_n=5, tree_depth=11, learn_rate=0.0451, loss_reduction=3.45e-10, stop_iter=7
#> i Estimating performance
#> ✓ Estimating performance
#> ♥ Newest results: mae=9.637 (+/-0.156)
#>
#> ── Iteration 11 ──────────────────────────────────────────────────────
#>
#> i Current best: mae=9.637 (@iter 10)
#> i Gaussian process model
#> ✓ Gaussian process model
#> i Generating 5000 candidates
#> i Predicted candidates
#> i min_n=2, tree_depth=7, learn_rate=0.0372, loss_reduction=2.44e-09, stop_iter=11
#> i Estimating performance
#> ✓ Estimating performance
#> ⓧ Newest results: mae=9.761 (+/-0.171)
#>
#> ── Iteration 12 ──────────────────────────────────────────────────────
#>
#> i Current best: mae=9.637 (@iter 10)
#> i Gaussian process model
#> ✓ Gaussian process model
#> i Generating 5000 candidates
#> i Predicted candidates
#> i min_n=26, tree_depth=15, learn_rate=0.00626, loss_reduction=0.00554, stop_iter=16
#> i Estimating performance
#> ✓ Estimating performance
#> ⓧ Newest results: mae=10.79 (+/-0.198)
#>
#> ── Iteration 13 ──────────────────────────────────────────────────────
#>
#> i Current best: mae=9.637 (@iter 10)
#> i Gaussian process model
#> ✓ Gaussian process model
#> i Generating 5000 candidates
#> i Predicted candidates
#> i min_n=29, tree_depth=10, learn_rate=0.0996, loss_reduction=4.5e-05, stop_iter=16
#> i Estimating performance
#> ✓ Estimating performance
#> ⓧ Newest results: mae=9.838 (+/-0.169)
#>
#> ── Iteration 14 ──────────────────────────────────────────────────────
#>
#> i Current best: mae=9.637 (@iter 10)
#> i Gaussian process model
#> ✓ Gaussian process model
#> i Generating 5000 candidates
#> i Predicted candidates
#> i min_n=12, tree_depth=13, learn_rate=0.085, loss_reduction=2.16, stop_iter=9
#> i Estimating performance
#> ✓ Estimating performance
#> ⓧ Newest results: mae=9.795 (+/-0.16)
#>
#> ── Iteration 15 ──────────────────────────────────────────────────────
#>
#> i Current best: mae=9.637 (@iter 10)
#> i Gaussian process model
#> ✓ Gaussian process model
#> i Generating 5000 candidates
#> i Predicted candidates
#> i min_n=4, tree_depth=9, learn_rate=0.0418, loss_reduction=0.00293, stop_iter=7
#> i Estimating performance
#> ✓ Estimating performance
#> ⓧ Newest results: mae=9.75 (+/-0.168)
#>
#> ── Iteration 16 ──────────────────────────────────────────────────────
#>
#> i Current best: mae=9.637 (@iter 10)
#> i Gaussian process model
#> ✓ Gaussian process model
#> i Generating 5000 candidates
#> i Predicted candidates
#> i min_n=6, tree_depth=15, learn_rate=0.0703, loss_reduction=5.15e-10, stop_iter=13
#> i Estimating performance
#> ✓ Estimating performance
#> ⓧ Newest results: mae=9.672 (+/-0.134)
#>
#> ── Iteration 17 ──────────────────────────────────────────────────────
#>
#> i Current best: mae=9.637 (@iter 10)
#> i Gaussian process model
#> ✓ Gaussian process model
#> i Generating 5000 candidates
#> i Predicted candidates
#> i min_n=27, tree_depth=15, learn_rate=0.0956, loss_reduction=3.74e-10, stop_iter=17
#> i Estimating performance
#> ✓ Estimating performance
#> ⓧ Newest results: mae=9.861 (+/-0.197)
#>
#> ── Iteration 18 ──────────────────────────────────────────────────────
#>
#> i Current best: mae=9.637 (@iter 10)
#> i Gaussian process model
#> ✓ Gaussian process model
#> i Generating 5000 candidates
#> i Predicted candidates
#> i min_n=2, tree_depth=11, learn_rate=0.0871, loss_reduction=0.00196, stop_iter=18
#> i Estimating performance
#> ✓ Estimating performance
#> ♥ Newest results: mae=9.601 (+/-0.147)
#>
#> ── Iteration 19 ──────────────────────────────────────────────────────
#>
#> i Current best: mae=9.601 (@iter 18)
#> i Gaussian process model
#> ✓ Gaussian process model
#> i Generating 5000 candidates
#> i Predicted candidates
#> i min_n=2, tree_depth=12, learn_rate=0.0991, loss_reduction=8.45e-06, stop_iter=14
#> i Estimating performance
#> ✓ Estimating performance
#> ⓧ Newest results: mae=9.61 (+/-0.17)
#>
#> ── Iteration 20 ──────────────────────────────────────────────────────
#>
#> i Current best: mae=9.601 (@iter 18)
#> i Gaussian process model
#> ✓ Gaussian process model
#> i Generating 5000 candidates
#> i Predicted candidates
#> i min_n=4, tree_depth=15, learn_rate=0.0206, loss_reduction=1.46e-06, stop_iter=15
#> i Estimating performance
#> ✓ Estimating performance
#> ⓧ Newest results: mae=9.881 (+/-0.177)
show_best(lgbm_bayes_res, metric = "mae") %>% select(-.metric, -.estimator)
#> # A tibble: 5 × 10
#> min_n tree_depth learn_rate loss_reduction stop_iter mean n std_err .config .iter
#> <int> <int> <dbl> <dbl> <int> <dbl> <int> <dbl> <chr> <int>
#> 1 2 11 0.0871 1.96e- 3 18 9.60 10 0.147 Iter18 18
#> 2 2 12 0.0991 8.45e- 6 14 9.61 10 0.170 Iter19 19
#> 3 5 11 0.0451 3.45e-10 7 9.64 10 0.156 Iter10 10
#> 4 6 15 0.0703 5.15e-10 13 9.67 10 0.134 Iter16 16
#> 5 6 8 0.0883 1.94e- 8 4 9.71 10 0.170 Iter7 7
Let’s try a different acquisition function: conf_bound(kappa). We’ll use the objective argument to set it.

Choose your own kappa value (see the sketch after this exercise).

Bonus points: before the optimization is done, press <esc> and see what happens.
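A sketch of what that call might look like; the kappa value and the result name are just placeholders, and larger kappa values favor exploration:

set.seed(42)
lgbm_cb_res <-
  lgbm_wflow %>%
  tune_bayes(
    resamples = hotel_rs,
    initial = init_res,
    iter = 20,
    param_info = lgbm_param,
    metrics = reg_metrics,
    # confidence-bound acquisition function; pick your own kappa
    objective = conf_bound(kappa = 0.1),
    control = ctrl_bo
  )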
Stopping tune_bayes() will return the current results.
Parallel processing can still be used to more efficiently measure each candidate point.
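For example, one way to do that is to register a parallel backend before calling tune_bayes(); the number of workers below is arbitrary:

library(doParallel)

cl <- parallel::makePSOCKcluster(4)  # arbitrary number of workers
registerDoParallel(cl)

# ... call tune_bayes() as before; the resamples for each candidate
# are then processed in parallel ...

parallel::stopCluster(cl)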
There are a lot of other iterative methods that you can use.
The finetune package also has functions for simulated annealing search.
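As a rough sketch, its syntax mirrors tune_bayes(); check the finetune documentation for the details:

library(finetune)

set.seed(20)
lgbm_sa_res <-
  lgbm_wflow %>%
  tune_sim_anneal(
    resamples = hotel_rs,
    initial = init_res,
    iter = 20,
    param_info = lgbm_param,
    metrics = reg_metrics
  )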
Let’s say that we’ve tried a lot of different models and we like our lightgbm model the most.
What do we do now?
We can take the results of the Bayesian optimization and accept the best results:
best_param <- select_best(lgbm_bayes_res, metric = "mae")
final_wflow <-
lgbm_wflow %>%
finalize_workflow(best_param)
final_wflow
#> ══ Workflow ══════════════════════════════════════════════════════════
#> Preprocessor: Formula
#> Model: boost_tree()
#>
#> ── Preprocessor ──────────────────────────────────────────────────────
#> avg_price_per_room ~ .
#>
#> ── Model ─────────────────────────────────────────────────────────────
#> Boosted Tree Model Specification (regression)
#>
#> Main Arguments:
#> trees = 1000
#> min_n = 2
#> tree_depth = 11
#> learn_rate = 0.0871075826616985
#> loss_reduction = 0.00195652467829182
#> stop_iter = 18
#>
#> Engine-Specific Arguments:
#> num_threads = 1
#>
#> Computational engine: lightgbm
We can use individual functions:
final_fit <- final_wflow %>% fit(data = hotel_train)
# then predict() or augment()
# then compute metrics
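For instance, a minimal sketch of those steps; hotel_test is an assumed name for the held-out data (e.g., testing(hotel_split)):

# hotel_test is assumed to be testing(hotel_split)
test_pred <- augment(final_fit, new_data = hotel_test)

test_pred %>%
  mae(truth = avg_price_per_room, estimate = .pred)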
Remember that there is also a convenience function to do all of this:
set.seed(3893)
final_res <- final_wflow %>% last_fit(hotel_split, metrics = reg_metrics)
final_res
#> # Resampling results
#> # Manual resampling
#> # A tibble: 1 × 6
#> splits id .metrics .notes .predictions .workflow
#> <list> <chr> <list> <list> <list> <list>
#> 1 <split [3749/1251]> train/test split <tibble [2 × 4]> <tibble [0 × 3]> <tibble [1,251 × 4]> <workflow>
Test set performance:
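The test set metric estimates are stored in final_res and can be pulled out with collect_metrics():

collect_metrics(final_res)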
Recall that resampling predicted the MAE to be 9.601.