This function takes a list of objects returned by the random_search_resample function and returns a list with the optimal parameters.

performance_measures(
  list_objects,
  eval_metric,
  sort = list(variable = "Median", decreasing = TRUE)
)

Arguments

list_objects

a list of model objects returned by the random_search_resample function

eval_metric

the evaluation metric (the name of the metric function)

sort

a list of arguments specifying how the optimal parameters should be sorted (for variable, one of: 'Min.', '1st Qu.', 'Median', 'Mean', '3rd Qu.', 'Max.')

Value

a list of lists

Details

This function takes a list of objects returned by the random_search_resample function and returns a list with the optimal parameters. Four lists are returned: the first is the grid of parameters evaluated on the train data; the second is the same grid of parameters evaluated on the test data; the third gives summary statistics comparing the predictions of each algorithm with those of the other algorithms; and the fourth shows whether any of the models produced missing values in their predictions.
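
The four components described above can be inspected by position once the function has run. A minimal sketch (the input objects res_rf, res_log_boost and the metric acc are assumed to exist from a prior random_search_resample run; the component order follows the Details paragraph):

```r
# assumed: res_rf and res_log_boost are random_search_resample objects
perf = performance_measures(list_objects = list(rf = res_rf,
                                                logitBoost = res_log_boost),
                            eval_metric = acc,
                            sort = list(variable = 'Median',
                                        decreasing = TRUE))

perf[[1]]    # grid of parameters evaluated on the train data
perf[[2]]    # same grid of parameters evaluated on the test data
perf[[3]]    # summary statistics comparing the algorithms' predictions
perf[[4]]    # flags for missing values in each model's predictions
```
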

Examples

if (FALSE) {

perf = performance_measures(list_objects = list(rf = res_rf,
                                                logitBoost = res_log_boost),
                            eval_metric = acc,
                            sort = list(variable = 'Median',
                                        decreasing = TRUE))

perf

}