Latency Measurements
The latency measurements began by compiling the appropriate kernel images, filesystems, bootloaders, and other requisite files on the local development machine. Both kernels were configured as described earlier in Load Generation. Once all files were flashed to the SD card and the system had booted successfully, it was left idle for 3 minutes before any testing. This gave the kernel time to finish initializing internal data structures, such as the random number generator, and to fully complete the boot process, ensuring that internal kernel initialization did not affect the measurement results.
After this procedure, the kernel was ready for the actual measurements. The JH-7110 DevKit includes a DVFS driver, so the CPU frequency can change during a test. To obtain accurate results, the CPU frequency was therefore pinned to its maximum (1.5 GHz).
-
Set the maximum CPU frequency:
echo performance > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
-
Start the desired load category (cpu, os, memory, or storage):
cpu: stress-ng --matrix 4
os: stress-ng --set 4
memory: stress-ng --vm 4
storage: stress-ng --dir 4
-
Run cyclictest:
cyclictest -m -S -p 99 -i 200 -q -D 10m -H 200
(Details of the cyclictest arguments are given in Latency Measurement.)
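The steps above can be sketched as a small set of shell helpers. These function names are our own, not from the original setup, and the script assumes stress-ng and cyclictest are installed on the target board:

```shell
#!/bin/sh
# Hypothetical helpers wrapping one measurement round as described above.

# Map a load category name to the stress-ng invocation used in the text.
load_cmd() {
    case "$1" in
        cpu)     echo "stress-ng --matrix 4" ;;
        os)      echo "stress-ng --set 4" ;;
        memory)  echo "stress-ng --vm 4" ;;
        storage) echo "stress-ng --dir 4" ;;
        idle)    echo "" ;;                 # no background load
        *)       return 1 ;;
    esac
}

# One measurement round: pin the frequency, start the load, run cyclictest.
measure() {
    echo performance > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
    cmd=$(load_cmd "$1") || { echo "unknown load: $1" >&2; return 1; }
    [ -n "$cmd" ] && $cmd &
    cyclictest -m -S -p 99 -i 200 -q -D 10m -H 200
    pkill stress-ng 2>/dev/null || true     # stop the background load
}
```

A round for the memory category would then be started with `measure memory`.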
Each latency measurement lasted a total of 10 minutes, producing approximately 1.2 million latency samples. The same process was repeated for every load category on both the mainline and the PREEMPT_RT-patched kernels without any problems, for a total of 10 different measurement combinations.
The dataset is presented in Latency Measurement Results. The tabular format makes it easy to compare the different categories and kernels. Most importantly, the table contains the absolute maximum latency observed in each measurement. The other key calculated figures are the average latency and its standard deviation; the minimum observed latencies are included for completeness. The cyclictest tool reported a practical clock resolution of 1 µs, so all presented measurements and calculations are rounded to the nearest microsecond.
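As an illustration of how the maximum values can be extracted, the following hypothetical post-processing step (not part of the original procedure) pulls the absolute maximum latency out of the summary that cyclictest prints in histogram mode, which ends with `# Min/Avg/Max Latencies:` lines holding one column per thread. The sample input is fabricated:

```shell
# Find the largest per-thread maximum in a cyclictest -H summary.
printf '# Min Latencies: 00006 00007\n# Avg Latencies: 00009 00010\n# Max Latencies: 00066 00040\n' |
awk '/^# Max Latencies:/ {
    max = 0
    for (i = 4; i <= NF; i++)   # fields 4..NF are the per-thread maxima
        if ($i + 0 > max) max = $i + 0
    print max                   # -> 66 for this sample
}'
```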
Stdev is the standard deviation of the latency and reflects its jitter: the smaller the value, the more deterministic the real-time behaviour. The method of computation and an example are given in Appendix B.
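For reference, a minimal sketch of the computation (the exact method and a full example are in Appendix B); this uses the population standard deviation over a fabricated set of latency samples:

```shell
# Mean and population standard deviation of latency samples, in awk.
# The five sample values are illustrative only, not real measurements.
printf '9\n10\n8\n12\n11\n' | awk '
    { sum += $1; sumsq += $1 * $1; n++ }
    END {
        mean  = sum / n
        stdev = sqrt(sumsq / n - mean * mean)
        printf "avg=%.0f stdev=%.1f\n", mean, stdev   # -> avg=10 stdev=1.4
    }'
```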
| Stress | Real-time Avg (µs) | Real-time Stdev (µs) | Real-time Min (µs) | Real-time Max (µs) | Mainline Avg (µs) | Mainline Stdev (µs) | Mainline Min (µs) | Mainline Max (µs) |
|---|---|---|---|---|---|---|---|---|
| idle | 9 | 2 | 7 | 40 | 8 | 4 | 6 | 66 |
| cpu | 9 | 4 | 7 | 36 | 10 | 6 | 6 | 88 |
| os | 10 | 8 | 7 | 46 | 20 | 16 | 6 | 9162 |
| mem | 12 | 6 | 7 | 130 | 12 | 14 | 6 | 14125 |
| storage | 10 | 7 | 7 | 49 | 16 | 14 | 7 | 3025 |
For the mainline kernel, the lowest maximum latency was measured when the system was idling, as would be expected; in this case the average latency and standard deviation were also the smallest. Introducing CPU load raised the latencies, but the maximum latency stayed under 100 µs. With every other type of load, however, noticeably longer latencies were observed.
Under the OS and memory stress tests, the mainline kernel performed much worse than the real-time kernel: the maximum latencies were in the order of milliseconds, and the average latency was also much higher, with a significantly larger standard deviation.
With the real-time kernel, the observed latencies were very similar across all categories. All measured maximum latencies were well below 200 µs, and the observed standard deviations remained consistently small throughout the tests, suggesting that most of the experienced latencies stayed close to the averages. Therefore, none of the selected loads affected system responsiveness by a significant amount.