Performance benchmark structure

We need to store benchmark results somewhere and be able to analyze them later, even after a long time has passed.

I suggest creating a repo called eve-performance and storing the results there.

I propose the following structure:

Level 1: Name of the configuration (e.g. Raspberry Pi 4) 

  - README.cfg - full spec of the hardware
  - SUMMARY.csv - all results in one table with the columns below (a sample row follows this list)

    Type | Config name | Config description | IO test | Result

  - HOWTO.md - how to run the tests
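A sketch of what a SUMMARY.csv row could look like; the values are placeholders, not real results, and the eve_on_rpi4 / seq_write names are hypothetical:

```
Type,Config name,Config description,IO test,Result
io,eve_on_rpi4,<one-line config summary>,seq_write,<pointer to the raw result files>
```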

Level 2: CONFIGS

CONFIG_NAME: where we run (which environment: container / EVE / Ubuntu)

  - README.cfg - a short description of the configuration
  - default.yml - eden config (from ~/.eden)
  - qemu.conf - qemu config (from ~/.eden)
  - Dev….json - copied after the test finishes (from ~/.eden)
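As a minimal sketch of collecting these artifacts, assuming they live directly under ~/.eden as stated above; the eve-performance paths and directory names are hypothetical, and the device JSON is matched with a glob because its full name is truncated here:

```python
import shutil
from pathlib import Path

EDEN_DIR = Path.home() / ".eden"

# Hypothetical destination: <configuration>/configs/<CONFIG_NAME>/
config_dir = Path("eve-performance/rpi4/configs/eve_on_rpi4")
config_dir.mkdir(parents=True, exist_ok=True)

# default.yml and qemu.conf are named in the list above.
for name in ("default.yml", "qemu.conf"):
    src = EDEN_DIR / name
    if src.exists():
        shutil.copy2(src, config_dir / name)

# The device JSON name is truncated above ("Dev….json"),
# so glob for it instead of guessing the full name.
for src in EDEN_DIR.glob("Dev*.json"):
    shutil.copy2(src, config_dir / src.name)
```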

Level 3: TESTS

IO_TEST: name of the fio test

  - test.cfg - full test configuration (see the sketch after this list)
  - README.cfg - test summary
  - fio (or whichever test tool was used) - raw results
  - iostat
  - perf
  - sar
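A minimal sketch of what test.cfg could contain for a sequential-write fio job; the seq_write job name and every parameter value below are illustrative assumptions, not part of the proposal:

```
# hypothetical fio job file; all values are examples
[global]
# Linux async I/O engine, bypassing the page cache
ioengine=libaio
direct=1
# run for a fixed 60 seconds
time_based
runtime=60

# the IO_TEST directory name would match this job name
[seq_write]
rw=write
bs=1m
size=1g
```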

All results should include a timestamp and the raw data.
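Putting the three levels together, the repo could look like this; the directory nesting and all concrete names (rpi4, eve_on_rpi4, seq_write) are assumptions for illustration:

```
eve-performance/
└── rpi4/                        # Level 1: configuration name
    ├── README.cfg               # full hardware spec
    ├── SUMMARY.csv              # all results in one table
    ├── HOWTO.md                 # how to run the tests
    └── configs/
        └── eve_on_rpi4/         # Level 2: CONFIG_NAME
            ├── README.cfg
            ├── default.yml
            ├── qemu.conf
            ├── Dev….json
            └── tests/
                └── seq_write/   # Level 3: IO_TEST (fio job name)
                    ├── test.cfg
                    ├── README.cfg
                    ├── fio/     # raw results
                    ├── iostat/
                    ├── perf/
                    └── sar/
```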