
Publication

PEEL: A Framework for Benchmarking Distributed Systems and Algorithms

Christoph Boden; Alexander Alexandrov; Andreas Kunft; Tilmann Rabl; Volker Markl
In: Raghunath Nambiar; Meikel Poess (Eds.). Performance Evaluation and Benchmarking for the Analytics Era. 9th TPC Technology Conference (TPCTC-2017), co-located with the 43rd International Conference on Very Large Databases (VLDB), August 28, 2017, München, Germany, ISBN 978-3-319-72401-0, Springer, 2017.

Abstract

During the last decade, a multitude of novel systems for scalable and distributed data processing have been proposed in both academia and industry. While there are published results of experimental evaluations for nearly all of these systems, it remains a challenge to objectively compare different systems' performance. It is thus imperative to enable and establish benchmarks for these systems. However, even if workloads, data sets, or data generators are fixed, orchestrating and executing benchmarks can be a major obstacle. Worse, many systems come with hardware-dependent parameters that have to be tuned and that spawn a diverse set of configuration files. This impedes the portability and reproducibility of benchmarks. To address these problems and to foster reproducible and portable experiments and benchmarks of distributed data processing systems, we present PEEL, a framework to define, execute, analyze, and share experiments. PEEL enables the transparent specification of benchmarking workloads and system configuration parameters. It orchestrates the systems involved, automatically runs the experiments, and collects all associated logs. PEEL currently supports Apache HDFS, Hadoop, Flink, and Spark and can easily be extended to include further systems.
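To make the idea of a declarative experiment definition concrete, the following is a minimal, hypothetical sketch in Scala (the language PEEL is written in). The type and method names (Experiment, execute, BenchmarkSketch) are invented for illustration and are not PEEL's actual API; the two configuration keys are real Apache Flink options, used here only as example parameters.

// Hypothetical sketch only: these names are invented for illustration and are
// NOT PEEL's actual API. It mirrors the abstract's idea of declaring an
// experiment (workload + system configuration) and running it while collecting logs.
object BenchmarkSketch {

  // An experiment bundles a named workload with the system it runs on and the
  // configuration parameters that would otherwise be scattered across
  // hardware-dependent configuration files.
  final case class Experiment(
    name: String,
    system: String,                 // e.g. "flink" or "spark" (assumed identifiers)
    config: Map[String, String],    // transparent parameter specification
    runs: Int
  )

  // Orchestrate one experiment: perform the requested number of runs and
  // gather one log line per run for later analysis.
  def execute(exp: Experiment): Seq[String] =
    (1 to exp.runs).map { r =>
      s"[${exp.system}] ${exp.name} run $r finished with ${exp.config.size} parameters"
    }

  def main(args: Array[String]): Unit = {
    val wordcount = Experiment(
      name   = "wordcount.flink",
      system = "flink",
      config = Map(
        "taskmanager.numberOfTaskSlots" -> "8",   // real Flink option, example value
        "parallelism.default"           -> "32"   // real Flink option, example value
      ),
      runs = 3
    )
    execute(wordcount).foreach(println)   // collected "logs" for analysis
  }
}

In an actual benchmarking framework of this kind, the execute step would additionally set up and tear down the involved systems (e.g. HDFS, Flink, Spark) and copy their log files into a per-run results directory.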
