#[repr(transparent)]
pub struct Segment<M: AnyFrameMeta + ?Sized> {
    pub range: Range<Paddr>,
    pub _marker: PhantomData<M>,
}
A contiguous range of homogeneous physical memory frames.
This is a handle to multiple contiguous frames. It is more lightweight than owning an array of individual frame handles.
Ownership is achieved through the reference-counting mechanism of frames. When constructing a Segment, the frame handles are created and then forgotten, leaving only the reference counts. When dropping it, the frame handles are restored and dropped, decrementing the reference counts.
All the metadata of the frames is homogeneous, i.e., of the same type.
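The forget-and-restore scheme can be pictured with std's Arc standing in for a per-frame handle (a minimal analogy, not the crate's actual code):

use std::sync::Arc;

fn main() {
    let frame = Arc::new([0u8; 16]); // stands in for one frame's handle
    // Constructing the segment: create a handle, then forget it, so that
    // only the incremented reference count records the ownership.
    std::mem::forget(Arc::clone(&frame));
    assert_eq!(Arc::strong_count(&frame), 2);
    // Dropping the segment: restore and drop the forgotten handle by
    // decrementing the count it left behind.
    unsafe { Arc::decrement_strong_count(Arc::as_ptr(&frame)) };
    assert_eq!(Arc::strong_count(&frame), 1);
}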
Fields§
range: Range<Paddr>
    The physical address range of the segment.
_marker: PhantomData<M>
    Marker for the metadata type.
Implementations§
impl<M: AnyFrameMeta + Repr<MetaSlotStorage> + OwnerOf> Segment<M>
pub open spec fn start_paddr_spec(&self) -> Paddr
    recommends self.inv(),
{ self.range.start }

Returns the starting physical address of the contiguous frames.
pub open spec fn end_paddr_spec(&self) -> Paddr
    recommends self.inv(),
{ self.range.end }

Returns the ending physical address of the contiguous frames.
pub open spec fn size_spec(&self) -> usize
    recommends self.inv(),
{ (self.range.end - self.range.start) as usize }

Returns the length in bytes of the contiguous frames.
pub open spec fn nrpage_spec(&self) -> usize
    recommends self.inv(),
{ self.size_spec() / PAGE_SIZE }

Returns the number of pages of the contiguous frames.
pub open spec fn split_spec(self, offset: usize) -> (Self, Self)
    recommends
        self.inv(),
        offset % PAGE_SIZE == 0,
        0 < offset < self.size_spec(),
{
let at = (self.range.start + offset) as usize;
let idx = at / PAGE_SIZE;
(
Self {
range: self.range.start..at,
_marker: core::marker::PhantomData,
},
Self {
range: at..self.range.end,
_marker: core::marker::PhantomData,
},
)
}

Splits the contiguous frames into two at the given byte offset from the start in spec mode.
pub exec fn start_paddr(&self) -> res : Paddr
    requires self.inv(),
    returns self.start_paddr_spec(),

Gets the start physical address of the contiguous frames.
pub exec fn end_paddr(&self) -> res : Paddr
    requires self.inv(),
    returns self.end_paddr_spec(),

Gets the end physical address of the contiguous frames.
pub exec fn size(&self) -> res : usize
    requires self.inv(),
    returns self.size_spec(),

Gets the length in bytes of the contiguous frames.
pub exec fn from_unused(
    range: Range<Paddr>,
    metadata_fn: impl Fn(Paddr) -> (Paddr, M),
) -> r : Result<Self, GetFrameError>

with
    Tracked(regions): Tracked<&mut MetaRegionOwners>
    -> owner: Tracked<Option<SegmentOwner<M>>>

requires
    old(regions).inv(),
    forall |paddr_in: Paddr| {
        (range.start <= paddr_in < range.end && paddr_in % PAGE_SIZE == 0)
            ==> {
                &&& metadata_fn.requires((paddr_in,))
            }
    },
    forall |paddr_in: Paddr, paddr_out: Paddr, m: M| {
        metadata_fn.ensures((paddr_in,), (paddr_out, m))
            ==> {
                &&& paddr_out < MAX_PADDR
                &&& paddr_out % PAGE_SIZE == 0
                &&& paddr_in == paddr_out
                &&& old(regions).slots.contains_key(frame_to_index(paddr_out))
                &&& old(regions).slot_owners[frame_to_index(paddr_out)].usage is Unused
                &&& old(regions)
                    .slot_owners[frame_to_index(paddr_out)]
                    .inner_perms
                    .in_list
                    .points_to(0)
            }
    },
ensures
    (range.start % PAGE_SIZE != 0 || range.end % PAGE_SIZE != 0)
        ==> r == Err::<Self, _>(GetFrameError::NotAligned),
    (range.start % PAGE_SIZE == 0 && range.end % PAGE_SIZE == 0 && range.end > MAX_PADDR)
        ==> r == Err::<Self, _>(GetFrameError::OutOfBound),
    (range.start % PAGE_SIZE == 0 && range.end % PAGE_SIZE == 0 && range.end <= MAX_PADDR)
        ==> range.start < range.end,
    r matches Ok(r) ==> {
        &&& final(regions).inv()
        &&& r.range.start == range.start
        &&& r.range.end == range.end
        &&& r.range.start < r.range.end
        &&& owner@ matches Some(owner) && {
            &&& r.inv()
            &&& r.wf(&owner)
            &&& forall |i: int| {
                0 <= i < owner.perms.len() as int
                    ==> {
                        &&& owner.perms[i].addr()
                            == meta_addr(frame_to_index_spec((range.start + i * PAGE_SIZE) as usize))
                    }
            }
            &&& forall |paddr: Paddr| {
                (range.start <= paddr < range.end && paddr % PAGE_SIZE == 0)
                    ==> !final(regions).slots.contains_key(frame_to_index_spec(paddr))
            }
        }
    },

Creates a new Segment from unused frames.
The caller must provide a closure to initialize metadata for all the frames. The closure receives the physical address of the frame and returns the metadata, similar to core::array::from_fn. A usage sketch follows the error list below.
It returns an error if:
- any of the frames cannot be created, with a specific reason.
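A usage sketch under assumptions: MyMeta is a hypothetical metadata type satisfying the trait bounds, base is a PAGE_SIZE-aligned physical address backed by four unused frames, and the ghost arguments threaded via verus_spec are shown only as comments.

let r = Segment::<MyMeta>::from_unused(
    base..base + 4 * PAGE_SIZE,
    |paddr| (paddr, MyMeta::new(paddr)), // per-frame metadata, like core::array::from_fn
    /* with Tracked(&mut regions) => ghost `owner` out-parameter */
);
match r {
    Ok(seg) => { /* seg covers base .. base + 4 * PAGE_SIZE */ }
    Err(GetFrameError::NotAligned) => { /* the range was not PAGE_SIZE-aligned */ }
    Err(GetFrameError::OutOfBound) => { /* range.end > MAX_PADDR */ }
    Err(_) => { /* some frame could not be acquired */ }
}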
§Verified Properties
§Preconditions
- the metadata function must be well-formed and valid for all frames in the range;
- the metadata function must ensure that the frames can be created and owned by the segment;
- for any frame created via the closure metadata_fn, the corresponding slot in regions must be unused and not dropped in the owner (MetaRegionOwners).

Range constraints (alignment, range.end <= MAX_PADDR, non-emptiness) are runtime-checked in the body; see the postconditions below for the corresponding error variants.
§Postconditions
- if the result is Ok, the returned segment satisfies its invariant with the owner, has the same physical address range as the input, and the owner is Some;
- if the input range is misaligned, the result is Err(NotAligned);
- if the input range exceeds MAX_PADDR, the result is Err(OutOfBound);
- if the input is aligned and within MAX_PADDR and the function terminated, then range.start < range.end (the runtime assert! would otherwise diverge).
pub exec fn split(self, offset: usize) -> r : (Self, Self)

with
    Tracked(owner): Tracked<SegmentOwner<M>>,
    Tracked(regions): Tracked<&mut MetaRegionOwners>
    -> frame_perms: (Tracked<SegmentOwner<M>>, Tracked<SegmentOwner<M>>)

requires
    self.inv(),
    self.wf(&owner),
    offset % PAGE_SIZE == 0,
    0 < offset < self.size(),
    old(regions).inv(),
    owner.relate_regions(*old(regions)),
ensures
    *final(regions) =~= *old(regions),
    r.0.inv(),
    r.0.wf(&frame_perms.0@),
    r.1.inv(),
    r.1.wf(&frame_perms.1@),
    (r.0, r.1) == self.split_spec(offset),
    owner.perms =~= (frame_perms.0@.perms + frame_perms.1@.perms),
    frame_perms.0@.relate_regions(*final(regions)),
    frame_perms.1@.relate_regions(*final(regions)),

Splits the frames into two at the given byte offset from the start.
The resulting frames cannot be empty, so the offset can be neither zero nor the length of the frames.
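A sketch, assuming seg spans at least two pages and owner/regions satisfy the preconditions (ghost arguments and ghost results shown as comments):

let (left, right) = seg.split(PAGE_SIZE /* , with Tracked(owner), Tracked(&mut regions) => frame_perms */);
// left.range  == start .. start + PAGE_SIZE
// right.range == start + PAGE_SIZE .. end
// The original owner's permissions are partitioned between the halves:
// owner.perms == frame_perms.0@.perms + frame_perms.1@.perms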
§Verified Properties
§Preconditions
- the segment must satisfy its invariant (
Self::inv) and the well-formedness relation with the owner (Self::wf); - the offset must be aligned and within bounds;
- the meta region must be valid and the owner must relate to it.
§Postconditions
- the resulting segments satisfy their invariants with the corresponding owners;
- they match
Self::split_spec; - the original owner’s permissions are split between the two new owners;
- both halves continue to relate correctly to
regions(which is unchanged).
pub exec fn slice(&self, range: &Range<usize>) -> r : Self

with
    Tracked(owner): Tracked<&SegmentOwner<M>>,
    Tracked(regions): Tracked<&mut MetaRegionOwners>

requires
    self.inv(),
    self.wf(&owner),
    owner.inv(),
    old(regions).inv(),
    owner.relate_regions(*old(regions)),
ensures
    r.inv(),
    r.range.start == self.range.start + range.start,
    r.range.end == self.range.start + range.end,
    r.range.end <= self.range.end,
    final(regions).inv(),
    final(regions).slots =~= old(regions).slots,
    final(regions).slot_owners.dom() =~= old(regions).slot_owners.dom(),

Gets an extra handle to the frames in the byte offset range.
The sliced byte offset range is indexed by the offset from the start of the contiguous frames. The resulting frames hold extra reference counts.
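A sketch, assuming seg spans at least two pages (ghost arguments shown as comments):

let sub = seg.slice(&(PAGE_SIZE..2 * PAGE_SIZE) /* , with Tracked(&owner), Tracked(&mut regions) */);
// sub.range.start == seg.range.start + PAGE_SIZE
// sub.range.end   == seg.range.start + 2 * PAGE_SIZE
// Reference counts of the covered frames were bumped, so `sub` and `seg`
// can be dropped independently.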
§Verified Properties
§Postconditions
- the resulting slice's range matches the slicing range and is in-bounds (the in-bounds check follows from the diverging assert! in the body);
- regions preserves invariants and key domains. Per-frame state (e.g., that a hypothetical sub_owner.relate_regions(regions) would hold) is not exposed; threading that through the per-frame ref-count bump loop would require a much heavier proof. Mirrors Segment::clone.
See also [vstd::seq::Seq::subrange].
pub exec fn next(&mut self) -> res : Option<Frame<M>>

with
    Tracked(regions): Tracked<&mut MetaRegionOwners>,
    Tracked(owner): Tracked<&mut SegmentOwner<M>>

requires
    old(self).inv(),
    old(self).wf(old(owner)),
    old(owner).inv(),
    old(regions).inv(),
    old(owner).relate_regions(*old(regions)),
ensures
    final(regions).inv(),
    final(self).inv(),
    final(owner).relate_regions(*final(regions)),
    match res {
        None => final(self).range.start == old(self).range.end,
        Some(f) => (
            &&& final(self).range.start == old(self).range.start + PAGE_SIZE
            &&& f.paddr() == old(self).range.start
            &&& final(regions).slots.contains_key(frame_to_index(f.paddr()))
            &&& final(regions).slot_owners[frame_to_index(f.paddr())].raw_count == 0
        ),
    },

Gets the next frame in the segment.
This is the verified counterpart of Iterator::next for Segment<M>.
The trait method is external_body because Verus’s core::iter::Iterator
support can’t thread Tracked metadata through the trait method’s
fixed signature.
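A draining-loop sketch, assuming seg, owner, and regions satisfy the preconditions below (ghost arguments shown as comments):

while let Some(frame) = seg.next(/* with Tracked(&mut regions), Tracked(&mut owner) */) {
    // frame is the page at the old seg.range.start; the start has
    // advanced by PAGE_SIZE.
    let paddr = frame.paddr();
    // ... use the frame ...
}
// On exit, seg.range.start == seg.range.end: the segment is exhausted.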
§Verified Properties
§Preconditions
- segment, owner, and meta regions must satisfy their respective invariants;
- the segment is well-formed with the owner;
- the owner relates correctly to regions (forgotten reference, slot present and consistent, etc.; see SegmentOwner::relate_regions).

All the per-frame coherence conditions previously listed inline are now consequences of owner.relate_regions(regions) instantiated at i = 0.
§Postconditions
- the regions and segment invariants are preserved, and the owner still relates to regions;
- if the result is None, the segment is exhausted (its range.start has reached range.end);
- if the result is Some(f), then f.paddr() is the old range.start, the segment's start advances by PAGE_SIZE, and the frame's slot is present in regions with raw_count == 0.
impl<M: AnyFrameMeta + ?Sized> Segment<M>
pub exec fn drop(self)

with
    Tracked(owner): Tracked<SegmentOwner<M>>,
    Tracked(regions): Tracked<&mut MetaRegionOwners>

requires
    self.inv(),
    self.wf(&owner),
    owner.inv(),
    old(regions).inv(),
    owner.relate_regions(*old(regions)),
    forall |i: int| {
        0 <= i < owner.perms.len() as int
            ==> {
                let idx = frame_to_index((self.range.start + i * PAGE_SIZE) as usize);
                old(regions).slot_owners[idx].inner_perms.ref_count.value() == 1
                    ==> {
                        &&& old(regions).slot_owners[idx].inner_perms.storage.is_init()
                        &&& old(regions).slot_owners[idx].inner_perms.in_list.value() == 0
                    }
            }
    },
ensures
    final(regions).inv(),

Verified drop: iterates over each frame in the segment, decrements its reference count, and (when holding the last reference) tears down the metadata.
Custom drop method (not an impl of [vstd_extra::drop_tracking::Drop]):
regions flows through [TrackDrop::State] (= MetaRegionOwners)
so ManuallyDrop::new(self, Tracked(regions)) stays callable, while
owner is threaded in separately via verus_spec.
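A teardown sketch (ghost arguments shown as comments); unlike a language-level Drop impl, the owner and the meta-region permissions are supplied explicitly by the caller:

seg.drop(/* with Tracked(owner), Tracked(&mut regions) */);
// Each frame's reference count was decremented; frames for which this
// was the last reference had their metadata torn down.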
impl<M: AnyUFrameMeta + OwnerOf> Segment<M>
pub exec fn reader(&self) -> r : VmReader<'_, Infallible>

with
    -> owner: Tracked<VmIoOwner>

requires
    self.inv(),
ensures
    r.inv(),
    owner@.inv(),
    r.wf(owner@),
    r.cursor.vaddr == paddr_to_vaddr_spec(self.start_paddr_spec()),
    r.remain_spec() == self.size_spec(),
    owner@.is_kernel,

pub exec fn writer(&self) -> r : VmWriter<'_, Infallible>

with
    -> owner: Tracked<VmIoOwner>

requires
    self.inv(),
ensures
    r.inv(),
    owner@.inv(),
    r.wf(owner@),
    r.cursor.vaddr == paddr_to_vaddr_spec(self.start_paddr_spec()),
    r.avail_spec() == self.size_spec(),
    owner@.is_kernel,
    !owner@.is_fallible,

impl<M: AnyFrameMeta + ?Sized> Segment<M>
pub open spec fn wf(&self, owner: &SegmentOwner<M>) -> bool
{
&&& self.range == owner.range
&&& owner.perms.len() * PAGE_SIZE == self.range.end - self.range.start
&&& forall |i: int| {
0 <= i < owner.perms.len() as int
==> owner.perms[i].addr()
== meta_addr(frame_to_index((self.range.start + i * PAGE_SIZE) as usize))
}
}

The well-formedness relation between a Segment and its owner:
- the segment’s range matches the range tracked by the owner;
- the number of permissions in the owner matches the number of frames in the segment;
- the physical address of each permission matches the corresponding frame in the segment.
This is only the cross-object relation; callers are expected to also
state self.inv() (and where relevant owner.inv()) alongside. With
owner.inv() the perm-address and length conjuncts are consequences of
self.range == owner.range, but they are kept here for trigger
availability at sites that don’t invoke owner.inv().
Interested readers are encouraged to see frame_to_index and meta_addr for how
we convert between physical addresses and meta region indices.
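For intuition, a hypothetical shape of that conversion (PAGE_SIZE, META_BASE, and META_SLOT_SIZE below are illustrative stand-ins; the authoritative definitions are frame_to_index and meta_addr in the crate):

const PAGE_SIZE: usize = 4096; // illustrative
const META_BASE: usize = 0xffff_8000_4000_0000; // hypothetical metadata base
const META_SLOT_SIZE: usize = 64; // hypothetical slot size

fn frame_to_index_sketch(paddr: usize) -> usize {
    paddr / PAGE_SIZE // one metadata slot per physical frame
}

fn meta_addr_sketch(index: usize) -> usize {
    META_BASE + index * META_SLOT_SIZE // slots laid out contiguously
}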
pub open spec fn kernel_mem_view_covers(&self, view: &MemView) -> bool
{
&&& self.inv()
&&& view.mappings.finite()
&&& view.mappings_are_disjoint()
&&& forall |vaddr: Vaddr| {
paddr_to_vaddr(self.range.start) <= vaddr
< paddr_to_vaddr(self.range.start) + self.range.end - self.range.start
==> {
&&& view.addr_transl(vaddr) is Some
&&& view.memory.contains_key(view.addr_transl(vaddr).unwrap().0)
&&& view.memory[view.addr_transl(vaddr).unwrap().0].inv()
&&& view
.memory[view.addr_transl(vaddr).unwrap().0]
.contents[view.addr_transl(vaddr).unwrap().1 as int] is Init
}
}
&&& forall |paddr: Paddr| {
self.range.start <= paddr < self.range.end
==> {
let vaddr = paddr_to_vaddr(paddr);
&&& view.addr_transl(vaddr) is Some
&&& view.addr_transl(vaddr).unwrap().0 <= paddr
&&& paddr
< view.addr_transl(vaddr).unwrap().0
+ view.memory[view.addr_transl(vaddr).unwrap().0].size@
&&& view.addr_transl(vaddr).unwrap().1
== paddr - view.addr_transl(vaddr).unwrap().0
&&& view.memory.contains_key(view.addr_transl(vaddr).unwrap().0)
&&& view.memory[view.addr_transl(vaddr).unwrap().0].inv()
&&& view
.memory[view.addr_transl(vaddr).unwrap().0]
.contents[view.addr_transl(vaddr).unwrap().1 as int] is Init
}
}
}

Whether a MemView covers the segment through the kernel direct mapping.
This predicate only describes the virtual-to-physical relation and the presence of initialized backing frame contents.
Trait Implementations§
impl<M: AnyFrameMeta + ?Sized> Inv for Segment<M>

open spec fn inv(self) -> bool
{
&&& self.range.start % PAGE_SIZE == 0
&&& self.range.end % PAGE_SIZE == 0
&&& self.range.start <= self.range.end <= MAX_PADDR
}

The invariant of a Segment:
- the physical addresses of the frames are aligned and within bounds.
- the range is well-formed, i.e., the start is less than or equal to the end.
impl<M: AnyFrameMeta + Repr<MetaSlotStorage> + OwnerOf> Iterator for Segment<M>
The remaining items on this impl are the standard provided methods of core::iter::Iterator (size_hint, count, last, nth, map, filter, fold, and so on), inherited with their default behavior. See the core::iter::Iterator documentation for details.
impl<M: AnyFrameMeta + Repr<MetaSlotStorage> + OwnerOf> RCClone for Segment<M>

open spec fn clone_requires(self, perm: MetaRegionOwners) -> bool
{
&&& self.inv()
&&& perm.inv()
&&& forall |pa: Paddr| {
(self.range.start <= pa < self.range.end && pa % PAGE_SIZE == 0)
==> {
let idx = frame_to_index(pa);
&&& perm.slots.contains_key(idx)
&&& has_safe_slot(pa)
&&& perm.slot_owners[idx].inner_perms.ref_count.value() > 0
&&& perm.slot_owners[idx].inner_perms.ref_count.value() + 1
< super::meta::REF_COUNT_MAX
&&& !MetaSlot::inc_ref_count_panic_cond(
perm.slot_owners[idx].inner_perms.ref_count,
)
}
}
}

open spec fn clone_ensures(
    self,
    old_perm: MetaRegionOwners,
    new_perm: MetaRegionOwners,
    res: Self,
) -> bool
{
&&& res.range == self.range
&&& res.inv()
&&& new_perm.inv()
}

exec fn clone(&self, Tracked(perm): Tracked<&mut MetaRegionOwners>) -> res : Self
impl<M: AnyFrameMeta + ?Sized> TrackDrop for Segment<M>

type State = MetaRegionOwners
Mirroring crate::mm::frame::UniqueFrame’s pattern, the tracked
state for ManuallyDrop purposes is just the global
MetaRegionOwners. The SegmentOwner<M> that holds the per-frame
perms is threaded into the custom drop method via verus_spec rather
than via this trait state — so that
ManuallyDrop::new(self, Tracked(regions)) can be called without
combining a &mut MetaRegionOwners borrow with an owned
SegmentOwner into a single tracked tuple.
open spec fn constructor_requires(self, s: Self::State) -> bool
{ true }

open spec fn constructor_ensures(self, s0: Self::State, s1: Self::State) -> bool
{ s0 =~= s1 }

proof fn constructor_spec(self, tracked s: &mut Self::State)

open spec fn drop_requires(self, s: Self::State) -> bool
{ s.inv() }

open spec fn drop_ensures(self, s0: Self::State, s1: Self::State) -> bool
{ true }

proof fn drop_tracked(self, tracked s: &mut Self::State)
Auto Trait Implementations§
impl<M> Freeze for Segment<M>
impl<M> RefUnwindSafe for Segment<M> where M: RefUnwindSafe
impl<M> Send for Segment<M> where M: Send
impl<M> Sync for Segment<M> where M: Sync
impl<M> Unpin for Segment<M> where M: Unpin
impl<M> UnsafeUnpin for Segment<M>
impl<M> UnwindSafe for Segment<M> where M: UnwindSafe
Blanket Implementations§
impl<T> BorrowMut<T> for T where T: ?Sized
    fn borrow_mut(&mut self) -> &mut T

impl<T, VERUS_SPEC__A> FromSpec<T> for VERUS_SPEC__A where VERUS_SPEC__A: From<T>
    fn obeys_from_spec() -> bool
    fn from_spec(v: T) -> VERUS_SPEC__A

impl<I> IntoIterator for I where I: Iterator

impl<T, VERUS_SPEC__A> IntoSpec<T> for VERUS_SPEC__A where VERUS_SPEC__A: Into<T>
    fn obeys_into_spec() -> bool
    fn into_spec(self) -> T

impl<T, U> IntoSpecImpl<U> for T where U: From<T>
    fn obeys_into_spec() -> bool
    fn into_spec(self) -> U

impl<VERUS_SPEC__A> IteratorSpec for VERUS_SPEC__A
    fn obeys_prophetic_iter_laws(&self) -> bool
        Laws relating the iterator to next, expressed in terms of prophetic spec functions. Only iterators that terminate (i.e., eventually return None and then continue to return None) should use this interface.
    fn remaining(&self) -> Seq<<VERUS_SPEC__A as Iterator>::Item>
    fn will_return_none(&self) -> bool
        Does the iterator return None after the above sequence (as opposed to hanging indefinitely on a next() call)? Trivially true for most iterators, but important for iterators that apply an exec closure that may not terminate.
    fn decrease(&self) -> Option<nat>
    fn initial_value_relation(&self, init: &VERUS_SPEC__A) -> bool
        Relates an iterator to its initial value (e.g., my_vec.iter()), allowing more ergonomic/intuitive invariants. When the analysis can infer a spec initial value (by discovering a when_used_as_spec annotation), it places the value in init.