Project deliverable

D2.3 – Benchmark data for domain-specific and non-regression performance tests

  • 28 Sep 2021

One of the main goals of the TREX project is the implementation of optimised libraries for QMC computation (QMCkl) and for I/O, enabling interoperability and synergistic use of the TREX codes as well as adoption by software packages outside TREX. A key ingredient in accomplishing this is ensuring that the libraries improve performance without sacrificing accuracy. Moreover, since development is an ongoing process, it is vital to detect early whether a new update has introduced regressions in accuracy, performance, or stability.

To address this concern, a set of benchmarks has been compiled. These benchmarks provide high-level tests that users of the QMCkl or I/O libraries can run to check their installation, and that developers of these libraries can run to detect any regression in accuracy or performance. They also allow the overall evolution of performance to be monitored throughout the development of the libraries.

The present document is a short user guide to this set of benchmarks. It describes how to use them, how to exploit the associated repository of performance reports, and how to add new benchmarks to the set. Chapter 2 recaps the content of the deliverable, chapter 3 explains how to use reference values for checking benchmark results, chapter 4 presents the performance reports, and chapter 5 describes the elements needed to add another benchmark to the list. Appendix A presents the list of benchmarks, along with all the information needed to execute and analyse them. Appendix B describes how to generate a performance analysis report for a benchmark. Finally, Appendix C presents the correspondence between the existing reports and the benchmark versions.
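To make the idea of checking results against reference values (chapter 3) concrete, the following is a minimal sketch of such a non-regression test. It assumes a hypothetical benchmark that reports a total energy together with its statistical error bar; the file layout, JSON field names, and tolerance policy are illustrative assumptions, not the actual conventions of the benchmark suite.

    import json
    import sys

    def check_against_reference(result_path, reference_path, n_sigma=3.0):
        """Compare a benchmark result to its stored reference value.

        Assumes both files are JSON with hypothetical fields
        'energy' (mean value) and 'error' (statistical error bar);
        the real benchmarks may use a different layout.
        """
        with open(result_path) as f:
            result = json.load(f)
        with open(reference_path) as f:
            reference = json.load(f)

        # Accept the run if the new value lies within n_sigma combined
        # error bars of the reference (an assumed acceptance criterion,
        # common for statistical QMC results).
        tolerance = n_sigma * (result["error"] ** 2 + reference["error"] ** 2) ** 0.5
        deviation = abs(result["energy"] - reference["energy"])

        print(f"deviation = {deviation:.6f}, tolerance = {tolerance:.6f}")
        return deviation <= tolerance

    if __name__ == "__main__":
        ok = check_against_reference(sys.argv[1], sys.argv[2])
        sys.exit(0 if ok else 1)

Returning a non-zero exit code on failure makes a check of this kind easy to wire into a continuous-integration job, so that accuracy regressions are flagged automatically after each update.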