When measuring I/O performance to compare raw performance on hardware with VM performance in EVE, there are three points to consider:
1. How many resources (CPU) are allocated to the virtual machine in EVE.
2. What type of image is specified for the VM in EVE (*.raw, *.qcow2, *.img).
3. What testing pattern is used for comparison.
From recent performance testing in EVE, we got the following results:
Note that the result in the picture is abstract and is shown solely as an example.
...
Why the results can turn out this way:
1. Limited VM (CPU) resources (this problem is being addressed, which also affects point 2).
2. Small block size (4k) -> large number of IOPS -> more CPU consumption and faster filling of the request queue (see the sketch after this list).
3. Slow disk on Host Server.
4. Many layers between the application and the real disk (this can be seen in the picture below).
5. An image file (*.raw, *.qcow2, *.img) passed through to the VM as its disk.
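
To make point 2 concrete, here is a minimal sketch of the arithmetic behind it; the 500 MiB/s throughput figure is an assumption for illustration, not a measured EVE result:

```python
# Illustrative only: shows how a small block size inflates the number of
# requests (and therefore per-request CPU overhead) needed to sustain the
# same throughput. The throughput value is a hypothetical example.

def requests_per_second(throughput_mib_s: float, block_size_kib: float) -> float:
    """I/O requests per second needed to sustain the given throughput."""
    return throughput_mib_s * 1024 / block_size_kib

throughput = 500  # MiB/s, hypothetical
for bs in (4, 64, 1024):  # block sizes in KiB
    iops = requests_per_second(throughput, bs)
    print(f"{bs} KiB blocks -> {iops:.0f} requests/s")

# Prints roughly 128000, 8000 and 500 requests/s respectively: at 4k, each
# mebibyte of data costs 256 separate requests that the whole virtualization
# stack has to process.
```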
What can be done to get the best results:
1. Increase the number of CPUs (in a nutshell: more CPUs = more processing power = more processor time = more I/O slots in a given time period). This will increase performance only to some extent.
2. Use *.raw instead of *.qcow2 (see the conversion sketch below).
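
As a sketch of point 2, a *.qcow2 image can be converted to *.raw with qemu-img before it is handed to the VM; the file names below are hypothetical:

```python
# Minimal sketch, assuming qemu-img is installed on the host.
# "qemu-img convert -O raw" produces a raw image, which removes the qcow2
# translation layer from the VM's I/O path.
import subprocess

src = "disk.qcow2"  # hypothetical source image
dst = "disk.raw"    # resulting raw image

subprocess.run(
    ["qemu-img", "convert", "-f", "qcow2", "-O", "raw", src, dst],
    check=True,
)
print(f"converted {src} -> {dst}")
```

The trade-off is that a raw image gives up qcow2 features such as internal snapshots, compression, and backing files in exchange for a shorter I/O path.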
About testing patterns:
For example, let's take the following pattern from Phoronix: Seq Read - Linux AIO - No - No - 4KB. In this test, the I/O path for a VM in EVE is much longer than the I/O path in the bare-metal test, which inevitably adds I/O latency that must be taken into account in the comparison. With the current diagram and the current VM configuration in EVE, nothing can be done about this in this test.
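
Assuming the Phoronix test is the fio-based one, a roughly equivalent job can be run on both the bare-metal host and the EVE VM so that the pattern stays identical. This is a sketch only: the buffered/direct settings are assumptions that should be matched to the actual Phoronix Test Suite configuration, and the test file path is hypothetical.

```python
# Rough fio equivalent of "Seq Read - Linux AIO - 4KB". The direct-I/O setting
# and the test file location are assumptions and should be aligned with the
# Phoronix Test Suite profile before comparing numbers.
import subprocess

cmd = [
    "fio",
    "--name=seq-read-4k",
    "--rw=read",            # sequential read
    "--ioengine=libaio",    # Linux AIO
    "--bs=4k",              # 4 KiB block size
    "--direct=0",           # assumption: buffered I/O, matching a "No" option
    "--size=1G",            # hypothetical test file size
    "--runtime=60",
    "--time_based",
    "--filename=/tmp/fio-testfile",  # hypothetical path
]
subprocess.run(cmd, check=True)
```

Running the same job in both environments means the remaining difference mostly reflects the longer I/O path of the VM rather than the test pattern itself.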