In computer operating system design, kernel preemption is a property of some kernels (the cores of operating systems) whereby the CPU can be interrupted in the middle of executing kernel code and assigned other tasks, from which it later returns to finish its kernel tasks.
Details
Specifically, the scheduler is permitted to forcibly perform a context switch (on behalf of a runnable and higher-priority process) on a driver or other part of the kernel during its execution, rather than cooperatively waiting for the driver or kernel function (such as a system call) to complete and return control of the processor to the scheduler.[1][2][3][4] It is used mainly in monolithic and hybrid kernels, where all or most device drivers are run in kernel space. Linux is an example of a monolithic-kernel operating system with kernel preemption.
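For illustration, the following minimal sketch (not taken from the cited sources) shows how Linux kernel code marks a region in which preemption must not occur: between preempt_disable() and preempt_enable() the scheduler will not switch the CPU away, and preempt_enable() itself is a point at which a pending preemption may take place. The per-CPU counter example_hits and the function name are hypothetical.

```c
/*
 * Illustrative sketch only, not code from the cited references.
 * On a preemptible kernel, kernel code may be preempted at almost any
 * point, so code that must stay on the same CPU disables preemption
 * explicitly around the critical region.
 */
#include <linux/preempt.h>
#include <linux/percpu.h>

static DEFINE_PER_CPU(unsigned long, example_hits);   /* hypothetical counter */

static void example_touch_percpu(void)
{
	preempt_disable();              /* no preemption inside this window */
	__this_cpu_inc(example_hits);   /* safe: the task cannot migrate CPUs here */
	preempt_enable();               /* preemption point: scheduler may now run */
}
```

In a kernel configured without preemption these calls reduce to little more than compiler barriers, since kernel code can then lose the CPU only when it yields voluntarily.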
The main benefit of kernel preemption is that it addresses two issues that are otherwise problematic for monolithic and hybrid kernels, in which the kernel consists of one large binary.[5] Without kernel preemption:
- A device driver or other kernel code path that enters an infinite loop or otherwise fails to return cannot be interrupted by the scheduler, so a single fault in kernel code can hang the entire system.
- Some drivers and system calls take a long time to execute and cannot return control of the processor to the scheduler until they complete, which introduces noticeable scheduling latency for other programs (see the sketch after this list).
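As a hedged illustration of the latency issue above (again not code from the cited references), kernel code on a non-preemptible kernel typically breaks up long-running work with explicit scheduling points such as the Linux cond_resched() call, so that a higher-priority task need not wait for the whole loop to finish; with kernel preemption enabled, the scheduler can also interrupt such a loop on its own. The function example_process_table and its arguments are hypothetical.

```c
/*
 * Illustrative sketch only: a long-running kernel loop that yields
 * voluntarily, the cooperative alternative to kernel preemption.
 */
#include <linux/sched.h>

static void example_process_table(unsigned long *table, unsigned long entries)
{
	unsigned long i;

	for (i = 0; i < entries; i++) {
		table[i] = 0;       /* stand-in for real per-entry work */
		cond_resched();     /* explicit scheduling point: lets a
				       higher-priority task run even on a
				       non-preemptible kernel */
	}
}
```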
References
- 1 2 "Preemption under Linux". kernelnewbies.org. 2009-08-22. Retrieved 2016-06-10.
- 1 2 Jonathan Corbet (2003-02-24). "Driver porting: the preemptible kernel". LWN.net. Retrieved 2016-06-10.
- ↑ "FreeBSD Architecture Handbook, Chapter 8. SMPng Design Document, Section 8.3. General Architecture and Design". freebsd.org. Retrieved 2016-06-10.
- ↑ Robert Love (2002-05-01). "Lowering Latency in Linux: Introducing a Preemptible Kernel". Linux Journal. Retrieved 2016-06-10.
- ↑ Robert Love (2010). Linux Kernel Development (3 ed.). Pearson Education. ISBN 978-0672329463.