CursorView

Struct CursorView 

Source
pub struct CursorView<C: PageTableConfig> {
    pub cur_va: Vaddr,
    pub mappings: Set<Mapping>,
    pub phantom: PhantomData<C>,
}

Fields§

cur_va: Vaddr
mappings: Set<Mapping>
phantom: PhantomData<C>

Implementations§

Source§

impl<C: PageTableConfig> CursorView<C>

A CursorView is not aware that the underlying structure of the page table is a tree. It treats the page table as an array of mappings of various sizes, and the cursor itself as the current virtual address, moving from low to high addresses. These functions specify the cursor's behavior and provide a simple interface for reasoning about it.

Source

pub open spec fn item_into_mapping(va: Vaddr, item: C::Item) -> Mapping

{
    let (paddr, level, prop) = C::item_into_raw_spec(item);
    let size = page_size(level);
    Mapping {
        va_range: va..(va + size) as usize,
        pa_range: paddr..(paddr + size) as Paddr,
        page_size: size,
        property: prop,
    }
}
Source

pub open spec fn present(self) -> bool

{
    self
        .mappings
        .filter(|m: Mapping| m.va_range.start <= self.cur_va < m.va_range.end)
        .len() > 0
}

This function checks whether the current virtual address is mapped. It does not correspond to a cursor method itself, but defines what it means for an entry to be present: there is a mapping whose virtual address range contains the current virtual address.
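As a sketch, the present check can be modeled in plain Rust, with `usize` standing in for `Vaddr`/`Paddr` and a `Vec` standing in for Verus's `Set` (these names model the spec above; they are not the verified API):

```rust
// Hypothetical executable model of the abstract view's `present` check.
#[derive(Clone, Debug, PartialEq)]
struct Mapping {
    va_range: std::ops::Range<usize>, // [start, end), exclusive end
    pa_range: std::ops::Range<usize>,
    page_size: usize,
}

// `present` holds iff some mapping's VA range contains cur_va,
// mirroring the filter over `mappings` in the spec.
fn present(mappings: &[Mapping], cur_va: usize) -> bool {
    mappings
        .iter()
        .any(|m| m.va_range.start <= cur_va && cur_va < m.va_range.end)
}

fn main() {
    let maps = vec![Mapping {
        va_range: 0x1000..0x2000,
        pa_range: 0x8000..0x9000,
        page_size: 0x1000,
    }];
    assert!(present(&maps, 0x1800)); // inside the range
    assert!(!present(&maps, 0x2000)); // the end bound is exclusive
}
```

Because the invariant makes mappings non-overlapping, at most one mapping can satisfy the filter, which is why `choose()` in `query_mapping` below picks a unique witness.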

Source

pub open spec fn query_mapping(self) -> Mapping

recommends
self.present(),
{
    self.mappings
        .filter(|m: Mapping| m.va_range.start <= self.cur_va < m.va_range.end)
        .choose()
}
Source

pub open spec fn query(self, paddr: Paddr, size: usize, prop: PageProperty) -> bool

{
    let m = self.query_mapping();
    m.pa_range.start == paddr && m.page_size == size && m.property == prop
}
Source

pub open spec fn query_range(self) -> Range<Vaddr>

{ self.query_mapping().va_range }
Source

pub open spec fn cur_slot_range(self, size: usize) -> Range<Vaddr>

{
    let start = nat_align_down(self.cur_va as nat, size as nat) as Vaddr;
    start..(start as nat + size as nat) as Vaddr
}

The VA range of the current slot when mapping a page of the given size. Works for both present and absent mappings: when present, equals query_range() for a mapping of that size; when absent, returns the aligned range that would be mapped.
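A minimal executable sketch of this slot computation, assuming `nat_align_down` is truncation to a multiple of `size` (in the real code `size` is a power-of-two page size):

```rust
// Hypothetical model of cur_slot_range using plain usize arithmetic.
fn align_down(va: usize, size: usize) -> usize {
    va - va % size
}

fn cur_slot_range(cur_va: usize, size: usize) -> std::ops::Range<usize> {
    let start = align_down(cur_va, size);
    start..start + size
}

fn main() {
    // 0x1234 falls in the 4 KiB slot [0x1000, 0x2000).
    assert_eq!(cur_slot_range(0x1234, 0x1000), 0x1000..0x2000);
    // An already-aligned address is its own slot start.
    assert_eq!(cur_slot_range(0x2000, 0x1000), 0x2000..0x3000);
}
```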

Source

pub open spec fn query_item_spec(self, item: C::Item) -> Option<Range<Vaddr>>

recommends
self.present(),
{
    let (paddr, level, prop) = C::item_into_raw_spec(item);
    let size = page_size(level);
    if self.query(paddr, size, prop) { Some(self.query_range()) } else { None }
}

This predicate specifies the behavior of the query method. It states that the current item in the page table matches the given item, mapped at the resulting virtual address range.

Source

pub open spec fn find_next_impl_spec(self, len: usize, find_unmap_subtree: bool, split_huge: bool) -> (Self, Option<Mapping>)

{
    let mappings_in_range = self
        .mappings
        .filter(|m: Mapping| self.cur_va <= m.va_range.start < self.cur_va + len);
    if mappings_in_range.len() > 0 {
        let mapping = mappings_in_range
            .find_unique_minimal(|m: Mapping, n: Mapping| {
                m.va_range.start < n.va_range.start
            });
        let view = CursorView {
            cur_va: mapping.va_range.end,
            ..self
        };
        (view, Some(mapping))
    } else {
        let view = CursorView {
            cur_va: (self.cur_va + len) as Vaddr,
            ..self
        };
        (view, None)
    }
}

The specification for the internal function, find_next_impl. It finds the next mapped virtual address that is at most len bytes away from the current virtual address. TODO: add the specifications for find_unmap_subtree and split_huge, which are used by other functions that call this one. This returns a mapping rather than the address because that is useful when it’s called as a subroutine.
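The search-and-advance behavior can be sketched in plain Rust, with a `Vec` in place of Verus's `Set` and `min_by_key` in place of `find_unique_minimal` (illustrative names, not the verified API):

```rust
#[derive(Clone, Debug, PartialEq)]
struct Mapping {
    va_range: std::ops::Range<usize>,
}

// Model of find_next_impl_spec: pick the mapping with the smallest start
// in [cur_va, cur_va + len); advance the cursor past it, or by len if none.
fn find_next(mappings: &[Mapping], cur_va: usize, len: usize) -> (usize, Option<Mapping>) {
    let found = mappings
        .iter()
        .filter(|m| cur_va <= m.va_range.start && m.va_range.start < cur_va + len)
        .min_by_key(|m| m.va_range.start)
        .cloned();
    match found {
        Some(m) => (m.va_range.end, Some(m)),
        None => (cur_va + len, None),
    }
}

fn main() {
    let maps = vec![
        Mapping { va_range: 0x3000..0x4000 },
        Mapping { va_range: 0x1000..0x2000 },
    ];
    // Searching from 0 over 0x5000 bytes finds the lowest start, 0x1000,
    // and leaves the cursor at that mapping's end.
    let (cur, m) = find_next(&maps, 0, 0x5000);
    assert_eq!(m.unwrap().va_range, 0x1000..0x2000);
    assert_eq!(cur, 0x2000);
    // No mapping starts in [0x4000, 0x5000): the cursor just advances by len.
    let (cur, m) = find_next(&maps, 0x4000, 0x1000);
    assert!(m.is_none());
    assert_eq!(cur, 0x5000);
}
```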

Source

pub open spec fn find_next_spec(self, len: usize) -> (Self, Option<Vaddr>)

{
    let (cursor, mapping) = self.find_next_impl_spec(len, false, false);
    if mapping is Some {
        (cursor, Some(mapping.unwrap().va_range.start))
    } else {
        (cursor, None)
    }
}

Actual specification for find_next. The cursor finds the next mapped virtual address that is at most len bytes away from the current virtual address and returns it, moving the cursor to the end of that mapping's range (or forward by len if nothing is found).

Source

pub open spec fn jump_spec(self, va: usize) -> Self

{
    CursorView {
        cur_va: va as Vaddr,
        ..self
    }
}

Jump just sets the current virtual address to the given address.

Source

pub open spec fn align_up_spec(self, size: usize) -> Vaddr

{ nat_align_up(self.cur_va as nat, size as nat) as Vaddr }
Source

pub open spec fn split_index(m: Mapping, new_size: usize, n: usize) -> Mapping

{
    Mapping {
        va_range: (m.va_range.start + n * new_size)
            as usize..(m.va_range.start + (n + 1) * new_size) as usize,
        pa_range: (m.pa_range.start + n * new_size)
            as usize..(m.pa_range.start + (n + 1) * new_size) as usize,
        page_size: new_size,
        property: m.property,
    }
}
Source

pub proof fn split_if_mapped_huge_spec_preserves_present(v: Self, new_size: usize)

requires
v.inv(),
v.present(),
new_size > 0,
v.query_mapping().page_size > 0,
v.query_mapping().page_size % new_size == 0,
ensures
v.split_if_mapped_huge_spec(new_size).present(),

After split_if_mapped_huge_spec(new_size), a sub-mapping at cur_va still exists. The witness is split_index(m, new_size, k) where k = (cur_va - m.va_range.start) / new_size.

Source

pub proof fn split_if_mapped_huge_spec_decreases_page_size(v: Self, new_size: usize)

requires
v.inv(),
v.present(),
new_size > 0,
v.query_mapping().page_size > new_size,
v.query_mapping().page_size % new_size == 0,
ensures
v.split_if_mapped_huge_spec(new_size).present(),
v.split_if_mapped_huge_spec(new_size).query_mapping().page_size
    < v.query_mapping().page_size,

After split_if_mapped_huge_spec(new_size) on a valid view, the mapping at cur_va has page_size == new_size < m.page_size.

The sub-mapping split_index(m, new_size, k) has page_size == new_size. No other mapping from the original view covers cur_va (mappings are non-overlapping), so query_mapping() must return a sub-mapping with page_size == new_size.
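The witness construction in these two proofs can be checked concretely. Below is a plain-Rust model of split_index (usize in place of Vaddr/Paddr; illustrative, not the verified API), showing that the k-th sub-mapping, with k = (cur_va - start) / new_size, still covers cur_va and has the smaller page size:

```rust
#[derive(Clone, Debug, PartialEq)]
struct Mapping {
    va_range: std::ops::Range<usize>,
    pa_range: std::ops::Range<usize>,
    page_size: usize,
}

// Model of split_index: the n-th sub-mapping of m at granularity new_size.
fn split_index(m: &Mapping, new_size: usize, n: usize) -> Mapping {
    Mapping {
        va_range: m.va_range.start + n * new_size..m.va_range.start + (n + 1) * new_size,
        pa_range: m.pa_range.start + n * new_size..m.pa_range.start + (n + 1) * new_size,
        page_size: new_size,
    }
}

fn main() {
    // A 2 MiB huge page split into 4 KiB sub-mappings.
    let huge = Mapping {
        va_range: 0x20_0000..0x40_0000,
        pa_range: 0x80_0000..0xa0_0000,
        page_size: 0x20_0000,
    };
    let cur_va = 0x20_3123;
    // The witness index from the proof sketch: k = (cur_va - start) / new_size.
    let k = (cur_va - huge.va_range.start) / 0x1000;
    let sub = split_index(&huge, 0x1000, k);
    // The k-th sub-mapping still covers cur_va, so present() is preserved...
    assert!(sub.va_range.start <= cur_va && cur_va < sub.va_range.end);
    // ...and its page size strictly decreased to new_size.
    assert_eq!(sub.page_size, 0x1000);
}
```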

Source

pub open spec fn split_if_mapped_huge_spec(self, new_size: usize) -> Self

{
    let m = self.query_mapping();
    let size = m.page_size;
    let new_mappings = Set::<int>::new(|n: int| 0 <= n < size / new_size)
        .map(|n: int| Self::split_index(m, new_size, n as usize));
    CursorView {
        cur_va: self.cur_va,
        mappings: self.mappings - set![m] + new_mappings,
        ..self
    }
}
Source

pub open spec fn split_while_huge(self, size: usize) -> Self

{
    if self.present() {
        let m = self.query_mapping();
        if m.page_size > size {
            let new_size = m.page_size / NR_ENTRIES;
            let new_self = self.split_if_mapped_huge_spec(new_size);
            new_self.split_while_huge(size)
        } else {
            self
        }
    } else {
        self
    }
}
Source

pub broadcast proof fn lemma_split_while_huge_preserves_cur_va(self, size: usize)

requires
self.inv(),
ensures
#[trigger] self.split_while_huge(size).cur_va == self.cur_va,

split_while_huge only modifies mappings, not cur_va.

Source

pub open spec fn remove_subtree(self, size: usize) -> Set<Mapping>

{
    let subtree = self
        .mappings
        .filter(|m: Mapping| self.cur_va <= m.va_range.start < self.cur_va + size);
    self.mappings - subtree
}
Source

pub open spec fn map_spec(self, paddr: Paddr, size: usize, prop: PageProperty) -> Self

{
    let new = Mapping {
        va_range: self.cur_slot_range(size),
        pa_range: paddr..(paddr + size) as usize,
        page_size: size,
        property: prop,
    };
    let split_self = self.split_while_huge(size);
    CursorView {
        cur_va: split_self.align_up_spec(size),
        mappings: split_self.remove_subtree(size) + set![new],
        ..split_self
    }
}

Inserts a mapping into the view at the current virtual address. Any mappings previously in that range are removed; multiple mappings may be removed if they overlap with a new large mapping.
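A plain-Rust sketch of the remove-then-insert step, assuming the view is already split (so split_while_huge is a no-op) and modeling align_up/align_down over usize (illustrative names, not the verified API). As in the spec, the removal window starts at cur_va while the new mapping occupies the aligned slot:

```rust
#[derive(Clone, Debug, PartialEq)]
struct Mapping {
    va_range: std::ops::Range<usize>,
    page_size: usize,
}

fn align_down(va: usize, size: usize) -> usize {
    va - va % size
}

fn align_up(va: usize, size: usize) -> usize {
    (va + size - 1) / size * size
}

// Model of map_spec on an already-split view: drop every mapping whose
// start falls in [cur_va, cur_va + size) (remove_subtree), insert the new
// mapping over the aligned slot, and return the advanced cursor position.
fn map(mappings: &mut Vec<Mapping>, cur_va: usize, size: usize) -> usize {
    let start = align_down(cur_va, size);
    mappings.retain(|m| !(cur_va <= m.va_range.start && m.va_range.start < cur_va + size));
    mappings.push(Mapping { va_range: start..start + size, page_size: size });
    align_up(cur_va, size)
}

fn main() {
    // Two small pages sit where a 16 KiB mapping is about to go.
    let mut maps = vec![
        Mapping { va_range: 0x1000..0x2000, page_size: 0x1000 },
        Mapping { va_range: 0x3000..0x4000, page_size: 0x1000 },
    ];
    let new_cur = map(&mut maps, 0x123, 0x4000);
    // Both old pages overlapped the new mapping and were removed.
    assert_eq!(maps.len(), 1);
    assert_eq!(maps[0].va_range, 0x0..0x4000);
    assert_eq!(new_cur, 0x4000); // align_up(0x123, 0x4000)
}
```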

Source

pub open spec fn map_simple(self, paddr: Paddr, size: usize, prop: PageProperty) -> Self

{
    let new = Mapping {
        va_range: self.cur_slot_range(size),
        pa_range: paddr..(paddr + size) as usize,
        page_size: size,
        property: prop,
    };
    CursorView {
        cur_va: self.cur_va,
        mappings: self.mappings + set![new],
        ..self
    }
}
Source

pub open spec fn unmap_spec(self, len: usize, new_view: Self, num_unmapped: usize) -> bool

{
    let after_start_split = self.split_while_huge(PAGE_SIZE);
    let at_end = CursorView {
        cur_va: (self.cur_va + len) as Vaddr,
        ..after_start_split
    };
    let after_both_splits = at_end.split_while_huge(PAGE_SIZE);
    let base = CursorView {
        cur_va: self.cur_va,
        ..after_both_splits
    };
    let taken = base
        .mappings
        .filter(|m: Mapping| self.cur_va <= m.va_range.start < self.cur_va + len);
    &&& new_view.cur_va >= (self.cur_va + len) as Vaddr
    &&& new_view.mappings == base.mappings - taken
    &&& num_unmapped == taken.len() as usize

}

Unmaps a range of virtual addresses from the current address up to len bytes. It returns the number of mappings that were removed.

Because the implementation may split huge pages that straddle the range boundaries, the spec first applies split_while_huge at both cur_va and cur_va + len to obtain a “base” set of mappings where all boundary entries are at the finest granularity. Mappings whose va_range.start falls in [cur_va, cur_va + len) are then removed from this base.
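The removal-and-count step can be sketched in plain Rust on a view whose boundary entries are already split to the finest granularity (so both split_while_huge applications are no-ops); names are illustrative, not the verified API:

```rust
#[derive(Clone, Debug, PartialEq)]
struct Mapping {
    va_range: std::ops::Range<usize>,
}

// Model of the removal step of unmap_spec: remove every mapping whose
// start lies in [cur_va, cur_va + len) and return how many were removed.
fn unmap(mappings: &mut Vec<Mapping>, cur_va: usize, len: usize) -> usize {
    let before = mappings.len();
    mappings.retain(|m| !(cur_va <= m.va_range.start && m.va_range.start < cur_va + len));
    before - mappings.len()
}

fn main() {
    let mut maps = vec![
        Mapping { va_range: 0x1000..0x2000 },
        Mapping { va_range: 0x2000..0x3000 },
        Mapping { va_range: 0x5000..0x6000 },
    ];
    // Unmap [0x1000, 0x4000): the first two mappings start inside it.
    let n = unmap(&mut maps, 0x1000, 0x3000);
    assert_eq!(n, 2);
    assert_eq!(maps, vec![Mapping { va_range: 0x5000..0x6000 }]);
}
```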

Source

pub proof fn split_while_huge_compose(self, s1: usize, s2: usize)

requires
s2 <= s1,
ensures
self.split_while_huge(s2) == self.split_while_huge(s1).split_while_huge(s2),

Composition law for split_while_huge: splitting to a finer target s2 <= s1 is the same as first splitting to s1 and then further splitting to s2. This holds because split_while_huge(s1) leaves the current mapping with page_size <= s1, so a subsequent split_while_huge(s2) (with s2 <= s1) produces the same result as applying split_while_huge(s2) directly.
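The composition and idempotence laws can be illustrated on a toy model that tracks only the page size at cur_va, assuming NR_ENTRIES = 512 (as on x86-64 with 4 KiB pages); this models only the size chain, not the full mapping set:

```rust
const NR_ENTRIES: usize = 512;

// Toy model of the page size at cur_va after split_while_huge(size):
// keep dividing by NR_ENTRIES until the page size is <= size.
fn split_page_size(mut page_size: usize, size: usize) -> usize {
    while page_size > size {
        page_size /= NR_ENTRIES;
    }
    page_size
}

fn main() {
    let huge = 512 * 512 * 4096; // a 1 GiB mapping
    let s1 = 512 * 4096; // 2 MiB
    let s2 = 4096; // 4 KiB
    // Composition: splitting straight to s2 equals splitting to s1 first.
    assert_eq!(
        split_page_size(huge, s2),
        split_page_size(split_page_size(huge, s1), s2)
    );
    // Idempotence: a second split at the same size is a no-op.
    assert_eq!(
        split_page_size(split_page_size(huge, s2), s2),
        split_page_size(huge, s2)
    );
}
```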

Source

pub proof fn split_while_huge_idempotent(self, size: usize)

ensures
self.split_while_huge(size).split_while_huge(size) == self.split_while_huge(size),

When the current entry is absent or maps at page_size <= size, split_while_huge(size) is a no-op. Applying a second call with the same size therefore returns the same value.

Source

pub proof fn split_while_huge_noop_implies_page_size_le(self, size: usize)

requires
self.split_while_huge(size) == self,
self.present(),
ensures
self.query_mapping().page_size <= size,

When split_while_huge(size) is a no-op and the view is present(), the mapping at cur_va already has page_size <= size.

Follows from one step of unfolding: if page_size > size, the function would recurse and modify mappings, so it couldn’t be a no-op.

Source

pub proof fn split_while_huge_one_step(self, size: usize)

requires
self.present(),
self.query_mapping().page_size > size,
self.query_mapping().page_size / NR_ENTRIES == size,
ensures
self.split_while_huge(size).mappings =~= self.split_if_mapped_huge_spec(size).mappings,

When the mapping at cur_va is exactly one split-step above size (i.e. query_mapping().page_size / NR_ENTRIES == size), one step of split_while_huge equals split_if_mapped_huge_spec:

self.split_while_huge(size) == self.split_if_mapped_huge_spec(size)

This is because split_while_huge takes one step, split_if_mapped_huge_spec(m.page_size / NR_ENTRIES) = split_if_mapped_huge_spec(size); after that step the sub-mapping at cur_va has page_size == size, so the recursion stops.

Source

pub open spec fn protect_spec(self, len: usize, op: FnSpec<(PageProperty,), PageProperty>, target_page_size: usize) -> (Self, Option<Range<Vaddr>>)

{
    let (find_cursor, next) = self.find_next_impl_spec(len, false, true);
    if next is Some {
        let found = next.unwrap();
        let at_found = CursorView {
            cur_va: found.va_range.start as Vaddr,
            ..self
        };
        let split_view = at_found.split_while_huge(target_page_size);
        let split_mapping = split_view.query_mapping();
        let new_mapping = Mapping {
            property: op(split_mapping.property),
            ..split_mapping
        };
        let new_cursor = CursorView {
            cur_va: split_mapping.va_range.end,
            mappings: split_view.mappings - set![split_mapping] + set![new_mapping],
            ..self
        };
        (new_cursor, Some(split_mapping.va_range))
    } else {
        (find_cursor, None)
    }
}

Models protect_next: find the next mapping in range, split it to target_page_size if it is a huge page, then update its property via op.

target_page_size corresponds to the cursor level after find_next_impl with split_huge = true; this is determined by the page table structure and cannot be derived from the abstract view alone.
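The find-then-update shape of protect can be sketched in plain Rust on a view with no huge pages (so the split step is a no-op), with a `u8` bitmask standing in for PageProperty; these names are illustrative, not the verified API:

```rust
#[derive(Clone, Debug, PartialEq)]
struct Mapping {
    va_range: std::ops::Range<usize>,
    prop: u8, // stand-in for PageProperty
}

// Model of protect_spec on an already-split view: find the next mapping
// starting in [cur_va, cur_va + len), apply `op` to its property, and
// advance the cursor to its end (or by len if nothing is found).
fn protect(
    mappings: &mut [Mapping],
    cur_va: usize,
    len: usize,
    op: impl Fn(u8) -> u8,
) -> (usize, Option<std::ops::Range<usize>>) {
    let found = mappings
        .iter_mut()
        .filter(|m| cur_va <= m.va_range.start && m.va_range.start < cur_va + len)
        .min_by_key(|m| m.va_range.start);
    match found {
        Some(m) => {
            m.prop = op(m.prop);
            (m.va_range.end, Some(m.va_range.clone()))
        }
        None => (cur_va + len, None),
    }
}

fn main() {
    let mut maps = vec![Mapping { va_range: 0x1000..0x2000, prop: 0b111 }];
    // Clear the low bit (say, a write-permission bit) of the next mapping.
    let (cur, range) = protect(&mut maps, 0x0, 0x3000, |p| p & !1);
    assert_eq!(range, Some(0x1000..0x2000));
    assert_eq!(cur, 0x2000);
    assert_eq!(maps[0].prop, 0b110);
}
```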

Trait Implementations§

Source§

impl<C: PageTableConfig> Inv for CursorView<C>

Source§

open spec fn inv(self) -> bool

{
    &&& self.cur_va < MAX_USERSPACE_VADDR
    &&& self.mappings.finite()
    &&& forall |m: Mapping| self.mappings.contains(m) ==> m.inv()
    &&& forall |m: Mapping, n: Mapping| {
        self.mappings.contains(m)
            ==> (self.mappings.contains(n)
                ==> (m != n
                    ==> m.va_range.end <= n.va_range.start
                        || n.va_range.end <= m.va_range.start))
    }

}

Auto Trait Implementations§

§

impl<C> Freeze for CursorView<C>

§

impl<C> RefUnwindSafe for CursorView<C>
where C: RefUnwindSafe,

§

impl<C> Send for CursorView<C>

§

impl<C> Sync for CursorView<C>

§

impl<C> Unpin for CursorView<C>
where C: Unpin,

§

impl<C> UnwindSafe for CursorView<C>
where C: UnwindSafe,
