LAZY_PREEMPT
The RT patch set's LAZY_PREEMPT configuration is not currently supported on RISC-V, yet it is an important feature of RT-Linux: it improves the performance of non-RT workloads running under a preemptible kernel, so supporting it is necessary. Although upstream RT-Linux does not yet cover the RISC-V architecture, the existing patches for other architectures (ARM/x86) can serve as a reference for implementing it.
An implementation of LAZY_PREEMPT for RISC-V is listed in Appendix A. The patch has been verified with an OpenPLC test; the results show improvements in both latency and jitter.
For more detail about LAZY_PREEMPT, the original commit message is reproduced below.
It has become an obsession to mitigate the determinism vs. throughput loss of RT. Looking at the mainline semantics of preemption points gives a hint why RT sucks throughput wise for ordinary SCHED_OTHER tasks. One major issue is the wakeup of tasks which are right away preempting the waking task while the waking task holds a lock on which the woken task will block right after having preempted the wakee. In mainline this is prevented due to the implicit preemption disable of spin/rw_lock held regions. On RT this is not possible due to the fully preemptible nature of sleeping spinlocks.
Though for a SCHED_OTHER task preempting another SCHED_OTHER task this is really not a correctness issue. RT folks are concerned about SCHED_FIFO/RR tasks preemption and not about the purely fairness driven SCHED_OTHER preemption latencies.
So I introduced a lazy preemption mechanism which only applies to SCHED_OTHER tasks preempting another SCHED_OTHER task. Aside from the existing preempt_count, each task now sports a preempt_lazy_count which is manipulated on lock acquisition and release. This is slightly incorrect, as for laziness reasons I coupled this to migrate_disable/enable, so some other mechanisms get the same treatment (e.g. get_cpu_light).
Now on the scheduler side, instead of setting NEED_RESCHED this sets NEED_RESCHED_LAZY in the case of a SCHED_OTHER/SCHED_OTHER preemption, and therefore allows the waking task to exit the lock-held region before the woken task preempts it. That also works better for cross-CPU wakeups, as the other side can stay in the adaptive spinning loop.
For RT class preemption there is no change. This simply sets NEED_RESCHED and forgoes the lazy preemption counter.
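The decision described in the commit message can be sketched as a small user-space model. This is only an illustration of the logic, not kernel source: the struct, the function names (resched_curr_model, should_preempt_now), and the flag values are hypothetical, chosen to mirror the kernel's TIF_NEED_RESCHED / TIF_NEED_RESCHED_LAZY naming. It assumes, per the commit message, that an RT-class wakeup always sets the hard flag, while a SCHED_OTHER/SCHED_OTHER wakeup sets the lazy flag, which only triggers preemption once preempt_lazy_count drops to zero (i.e. the lock-held region is left).

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical user-space model of the lazy-preemption decision.
 * Names mirror the kernel patch but this is illustrative only. */

enum sched_class { SCHED_OTHER_CLASS, SCHED_RT_CLASS };

#define TIF_NEED_RESCHED      (1u << 0)
#define TIF_NEED_RESCHED_LAZY (1u << 1)

struct task {
    enum sched_class cls;
    int preempt_lazy_count;   /* raised on lock acquisition, dropped on release */
    unsigned int flags;
};

/* Which resched flag does the wakeup path set on 'curr' when
 * 'woken' becomes runnable and would preempt it? */
static void resched_curr_model(struct task *curr, const struct task *woken)
{
    if (woken->cls == SCHED_RT_CLASS) {
        /* RT-class preemption is never deferred. */
        curr->flags |= TIF_NEED_RESCHED;
    } else {
        /* SCHED_OTHER preempting SCHED_OTHER: defer, so that curr
         * can leave its lock-held region first. */
        curr->flags |= TIF_NEED_RESCHED_LAZY;
    }
}

/* At a preemption point: may curr actually be preempted now? */
static bool should_preempt_now(const struct task *curr)
{
    if (curr->flags & TIF_NEED_RESCHED)
        return true;                            /* hard request: always */
    if ((curr->flags & TIF_NEED_RESCHED_LAZY) &&
        curr->preempt_lazy_count == 0)
        return true;                            /* lazy request: only outside
                                                   the lock-held region */
    return false;
}
```

In this model, a fair-class wakeup against a task holding a lock (preempt_lazy_count > 0) marks it lazily and the preemption is deferred until the count returns to zero, while an RT wakeup preempts at the next preemption point regardless.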