Task scheduling.
§Scheduler Injection
The task scheduler of an OS is a complex beast, and the most suitable scheduling algorithm often depends on the target usage scenario. To avoid code bloat and offer flexibility, OSTD does not include a gigantic, one-size-fits-all task scheduler. Instead, it allows the client to implement a custom scheduler (in safe Rust, of course) and register it with OSTD. This feature is known as scheduler injection.
The client kernel performs scheduler injection via the inject_scheduler API. This API should be called as early as possible during kernel initialization, before any Task-related APIs are used. This requirement is reasonable since Tasks depend on the scheduler.
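To illustrate the register-once-then-use pattern that this requirement implies, here is a self-contained toy model of scheduler injection. None of the items below are the real ostd APIs: the trivial Scheduler trait, the FifoScheduler type, and the OnceLock-based inject_scheduler are stand-ins written for this sketch only.

```rust
use std::sync::OnceLock;

// Toy stand-ins for the real `Scheduler` trait and `inject_scheduler` function.
trait Scheduler: Sync {
    fn name(&self) -> &'static str;
}

// A global slot that is set exactly once during initialization and read by
// all later task operations; this mirrors the "inject early, use later" rule.
static SCHEDULER: OnceLock<&'static dyn Scheduler> = OnceLock::new();

fn inject_scheduler(sched: &'static dyn Scheduler) {
    if SCHEDULER.set(sched).is_err() {
        panic!("the scheduler must be injected exactly once");
    }
}

struct FifoScheduler;

impl Scheduler for FifoScheduler {
    fn name(&self) -> &'static str {
        "fifo"
    }
}

fn main() {
    // Inject as early as possible, before any task is spawned or woken.
    static FIFO: FifoScheduler = FifoScheduler;
    inject_scheduler(&FIFO);

    // Task-related code can now rely on the injected scheduler being present.
    let sched = SCHEDULER.get().expect("scheduler must be injected first");
    assert_eq!(sched.name(), "fifo");
}
```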
§Scheduler Abstraction
The inject_scheduler API accepts an object implementing the Scheduler trait, which abstracts over any SMP-aware task scheduler. Whenever an OSTD client spawns a new task (via crate::task::TaskOptions) or wakes a sleeping task (e.g., via crate::sync::Waker), OSTD internally forwards the corresponding Arc<Task> to the scheduler by invoking the Scheduler::enqueue method. This allows the injected scheduler to manage all runnable tasks.
Each enqueued task is dispatched to one of the per-CPU local runqueues, which manage all runnable tasks on a specific CPU. A local runqueue is abstracted by the LocalRunQueue trait. OSTD accesses the local runqueue of the current CPU via Scheduler::local_rq_with or Scheduler::mut_local_rq_with, which provide immutable and mutable references to dyn LocalRunQueue, respectively.
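To make the dispatch path concrete, the following is a minimal, self-contained sketch of the enqueue-to-local-runqueue flow. It deliberately models the documented behavior with plain standard-library types rather than the real ostd traits: the FifoScheduler type, the round-robin CPU choice, the explicit cpu parameter, and the VecDeque runqueues are all assumptions made for this sketch, and the flags arguments of the real methods are omitted.

```rust
use std::collections::VecDeque;
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::{Arc, Mutex};

// A stand-in for the real task type; the real scheduler receives `Arc<Task>`.
type Task = u32;

/// A simplified model of an injected scheduler: one FIFO runqueue per CPU,
/// with enqueued tasks dispatched round-robin across the CPUs.
struct FifoScheduler {
    rqs: Vec<Mutex<VecDeque<Arc<Task>>>>, // one local runqueue per CPU
    next_cpu: AtomicUsize,                // round-robin dispatch cursor
}

impl FifoScheduler {
    fn new(cpu_count: usize) -> Self {
        Self {
            rqs: (0..cpu_count).map(|_| Mutex::new(VecDeque::new())).collect(),
            next_cpu: AtomicUsize::new(0),
        }
    }

    /// Models `Scheduler::enqueue`: pick a target CPU (here, round-robin)
    /// and push the task onto that CPU's local runqueue.
    fn enqueue(&self, task: Arc<Task>) -> usize {
        let cpu = self.next_cpu.fetch_add(1, Ordering::Relaxed) % self.rqs.len();
        self.rqs[cpu].lock().unwrap().push_back(task);
        cpu
    }

    /// Models `Scheduler::mut_local_rq_with`: run a closure with mutable
    /// access to one CPU's runqueue (the real API targets the current CPU).
    fn mut_local_rq_with<R>(
        &self,
        cpu: usize,
        f: impl FnOnce(&mut VecDeque<Arc<Task>>) -> R,
    ) -> R {
        f(&mut self.rqs[cpu].lock().unwrap())
    }
}

fn main() {
    let sched = FifoScheduler::new(4);
    let cpu = sched.enqueue(Arc::new(42));
    // OSTD-side code would later pick the next task from that CPU's runqueue.
    let picked = sched.mut_local_rq_with(cpu, |rq| rq.pop_front());
    assert_eq!(picked.as_deref(), Some(&42));
}
```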
The LocalRunQueue trait enables OSTD to inspect and manipulate local runqueues. For instance, OSTD invokes the LocalRunQueue::pick_next method to let the scheduler select the next task to run. OSTD then performs a context switch to that task, which becomes the current running task, accessible via LocalRunQueue::current. When the current task is about to sleep (e.g., via crate::sync::Waiter), OSTD removes it from the local runqueue using LocalRunQueue::dequeue_current.
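The paragraph above describes a small lifecycle for the current task slot, sketched below with a self-contained model: pick_next promotes a queued task to current, current exposes it, and dequeue_current removes it when the task goes to sleep. The FifoLocalRq type and its FIFO policy are assumptions for illustration; a real implementation must satisfy the LocalRunQueue trait itself, including the flag-taking operations referenced in the Enums section below.

```rust
use std::collections::VecDeque;
use std::sync::Arc;

// A stand-in for the real task type, to keep the sketch self-contained.
type Task = u32;

/// A simplified model of a per-CPU local runqueue with FIFO ordering.
struct FifoLocalRq {
    current: Option<Arc<Task>>, // the task currently running on this CPU
    queue: VecDeque<Arc<Task>>, // runnable tasks waiting on this CPU
}

impl FifoLocalRq {
    fn new() -> Self {
        Self { current: None, queue: VecDeque::new() }
    }

    /// Models `LocalRunQueue::current`: the task now running on this CPU.
    fn current(&self) -> Option<&Arc<Task>> {
        self.current.as_ref()
    }

    /// Models `LocalRunQueue::pick_next`: select the next task to run.
    /// This FIFO model promotes the head of the queue to `current` and puts
    /// the previously current task (if any) back at the tail; a real
    /// runqueue may use any prioritization strategy here.
    fn pick_next(&mut self) -> Option<&Arc<Task>> {
        let next = self.queue.pop_front()?;
        if let Some(prev) = self.current.replace(next) {
            self.queue.push_back(prev);
        }
        self.current.as_ref()
    }

    /// Models `LocalRunQueue::dequeue_current`: remove the current task from
    /// the runqueue, e.g., when it is about to sleep.
    fn dequeue_current(&mut self) -> Option<Arc<Task>> {
        self.current.take()
    }
}

fn main() {
    let mut rq = FifoLocalRq::new();
    rq.queue.push_back(Arc::new(1)); // models the effect of `enqueue`
    rq.queue.push_back(Arc::new(2));

    assert_eq!(rq.pick_next().map(|t| **t), Some(1)); // task 1 starts running
    assert_eq!(rq.current().map(|t| **t), Some(1));
    assert_eq!(rq.dequeue_current().map(|t| *t), Some(1)); // task 1 sleeps
    assert_eq!(rq.pick_next().map(|t| **t), Some(2)); // task 2 runs next
}
```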
The interfaces of Scheduler and LocalRunQueue are simple yet (perhaps surprisingly) powerful enough to support even complex and advanced task scheduler implementations. Scheduler implementations are free to employ any load-balancing strategy to dispatch enqueued tasks across local runqueues, and each local runqueue is free to choose any prioritization strategy for selecting the next task to run.

Based on OSTD’s scheduling abstractions, the Asterinas kernel has successfully supported multiple Linux scheduling classes, including both real-time and normal policies.
§Safety Impact
While OSTD delegates scheduling decisions to the injected task scheduler, it verifies these decisions to avoid undefined behavior. In particular, it enforces the following safety invariant:
A task must not be scheduled to run on more than one CPU at a time.
Violating this invariant (e.g., by running the same task on two CPUs concurrently) can have catastrophic consequences, as the task’s stack and internal state may be corrupted by concurrent modifications.
Modules§
- info - Scheduling-related information in a task.

Enums§
- EnqueueFlags - Possible triggers of an enqueue action.
- UpdateFlags - Possible triggers of an update_current action.

Traits§
- LocalRunQueue - A per-CPU, local runqueue.
- Scheduler - An SMP-aware task scheduler.

Functions§
- inject_scheduler - Injects a custom task scheduler implementation into OSTD.