Publication
Surprise Benchmarking: The Why, What, and How
Lawrence Benson; Carsten Binnig; Jan-Micha Bodensohn; Federico Lorenzi; Jigao Luo; Danica Porobic; Tilmann Rabl; Anupam Sanghi; Russell Sears; Pinar Tözün; Tobias Ziegler
In: Proceedings of the Tenth International Workshop on Testing Database Systems (DBTest 2024), Santiago, Chile, June 9, 2024, pp. 1-8, ACM, 2024.
Abstract
Standardized benchmarks are crucial to ensure a fair comparison of performance across systems. While extremely valuable, these benchmarks all use a setup in which the workload is well-defined and known in advance. Unfortunately, this has led to data management systems being over-tuned for particular benchmark workloads such as TPC-H or TPC-C. As a result, benchmark results frequently do not reflect the behavior of these systems in many real-world settings, since real workloads often differ significantly from the "known" benchmark workloads. To address this issue, we present surprise benchmarking, an approach complementary to current standardized benchmarking in which "unknown" queries are exercised during the evaluation.