Provides comprehensive performance benchmarking capabilities for comparing optimization algorithms with constitutional compliance standards.

Details

This service implements constitutional performance requirements:

  • Maximum execution time: 300 seconds for 10,000+ locations

  • Maximum memory usage: 2GB RAM

  • Statistical significance testing for performance comparisons

  • Scalability analysis across dataset sizes

Public fields

algorithms

List of optimization algorithms to benchmark

datasets

List of test datasets with varying complexity

metrics

Performance metrics to collect (time, memory, accuracy)

results

Collected benchmarking results

config_manager

Configuration and logging manager

Methods


Method get_latest_results()

Usage

BenchmarkingService$get_latest_results()


Method new()

Initialize benchmarking service

Usage

BenchmarkingService$new(
  config_manager = NULL,
  algorithms = c("greedy", "genetic", "simulated_annealing"),
  custom_datasets = NULL
)

Arguments

config_manager

Optional ConfigManager instance

algorithms

Vector of algorithm names to benchmark

custom_datasets

List of custom datasets for benchmarking
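
A minimal construction sketch (assumptions: passing `config_manager = NULL` lets the service create its own manager, per the argument description above; the algorithm names are the documented defaults):

```r
# Create a benchmarking service for two of the built-in algorithms.
# config_manager = NULL -> the service constructs its own manager.
service <- BenchmarkingService$new(
  config_manager = NULL,
  algorithms = c("greedy", "genetic"),
  custom_datasets = NULL
)
```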


Method run_benchmark_suite()

Run comprehensive benchmark suite

Usage

BenchmarkingService$run_benchmark_suite(
  n_iterations = 5,
  include_scalability = TRUE,
  save_results = TRUE
)

Arguments

n_iterations

Number of benchmark iterations per algorithm

include_scalability

Whether to include scalability analysis

save_results

Whether to save results to file

Returns

List containing benchmark results and analysis
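
A hedged sketch of a full suite run (assumes a `service` object created with `BenchmarkingService$new()` as above):

```r
# Five iterations per algorithm, including the scalability analysis;
# results are also written to file because save_results = TRUE.
suite <- service$run_benchmark_suite(
  n_iterations = 5,
  include_scalability = TRUE,
  save_results = TRUE
)
```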


Method benchmark_algorithm()

Benchmark individual algorithm performance

Usage

BenchmarkingService$benchmark_algorithm(
  algorithm_name,
  field_data,
  existing_samples = NULL,
  n_new_samples = 50,
  n_iterations = 5
)

Arguments

algorithm_name

Name of optimization algorithm

field_data

Spatial field data for optimization

existing_samples

Optional existing sample locations

n_new_samples

Number of new samples to select

n_iterations

Number of benchmark iterations

Returns

Benchmark results data frame
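
A sketch of benchmarking a single algorithm (`my_field` is a hypothetical spatial dataset prepared elsewhere; `service` is a `BenchmarkingService` instance):

```r
# Time the greedy algorithm selecting 50 new sample locations,
# repeated 5 times to average out run-to-run variation.
greedy_results <- service$benchmark_algorithm(
  algorithm_name = "greedy",
  field_data = my_field,
  existing_samples = NULL,
  n_new_samples = 50,
  n_iterations = 5
)
```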


Method analyze_scalability()

Analyze algorithm scalability

Usage

BenchmarkingService$analyze_scalability(
  test_sizes = c(100, 500, 1000, 5000, 10000),
  algorithms_to_test = self$algorithms
)

Arguments

test_sizes

Vector of dataset sizes to test

algorithms_to_test

Algorithms to include in scalability analysis

Returns

Scalability analysis results
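
A sketch of a scalability run up to the constitutional 10,000-location bound (testing a subset of algorithms is an assumption made here to keep run time manageable):

```r
# Scale from a small dataset up to the 10,000-location limit
# for two algorithms only.
scalability <- service$analyze_scalability(
  test_sizes = c(100, 1000, 10000),
  algorithms_to_test = c("greedy", "genetic")
)
```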


Method compare_algorithms()

Compare algorithm performance with statistical significance

Usage

BenchmarkingService$compare_algorithms(
  results_df = self$results,
  metric = "execution_time_mean",
  significance_level = 0.05
)

Arguments

results_df

Benchmark results data frame

metric

Performance metric to compare

significance_level

Alpha level for statistical tests

Returns

Statistical comparison results
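
A sketch comparing mean execution times at the default 5% alpha level (relies on results already accumulated in `service$results` by earlier benchmark runs):

```r
# Pairwise comparison of mean execution time across algorithms,
# using the service's stored results by default.
comparison <- service$compare_algorithms(
  metric = "execution_time_mean",
  significance_level = 0.05
)
```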


Method generate_performance_report()

Generate performance report

Usage

BenchmarkingService$generate_performance_report(
  include_plots = FALSE,
  output_format = "text"
)

Arguments

include_plots

Whether to generate performance plots

output_format

Format for report output ("text", "html", "pdf")

Returns

Performance report object
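
A sketch of report generation (the `"html"` format is one of the documented options; `"text"` is assumed to be the default given its position in the list above):

```r
# Produce an HTML report with performance plots included.
report <- service$generate_performance_report(
  include_plots = TRUE,
  output_format = "html"
)
```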


Method validate_system_performance()

Validate system performance capabilities

Usage

BenchmarkingService$validate_system_performance()

Returns

System validation results


Method clone()

The objects of this class are cloneable with this method.

Usage

BenchmarkingService$clone(deep = FALSE)

Arguments

deep

Whether to make a deep clone.