Struct guestmem::GuestMemory
pub struct GuestMemory { /* private fields */ }
A wrapper around a GuestMemoryAccess that provides methods for safely reading and writing guest memory.
Implementations
impl GuestMemory
pub fn new(debug_name: impl Into<Arc<str>>, imp: impl GuestMemoryAccess) -> Self
Returns a new instance using imp as the backing.
debug_name is used to specify which guest memory is being accessed in error messages.
pub fn new_multi_region(
    debug_name: impl Into<Arc<str>>,
    region_size: u64,
    imps: Vec<Option<impl GuestMemoryAccess>>,
) -> Result<Self, MultiRegionError>
Creates a new multi-region guest memory, made up of multiple mappings. This allows you to create a very large sparse layout (up to the limits of the VM’s physical address space) without having to allocate an enormous amount of virtual address space.
Each region will be region_size bytes and will start immediately after the last one. This must be a power of two, be at least a page in size, and cannot fill the full 64-bit address space.
imps must be a list of GuestMemoryAccess implementations, one for each region. Use None if the corresponding region is empty.
A region’s mapping cannot fully fill the region. This is necessary to avoid callers expecting to be able to access a memory range that spans two regions.
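For example, a minimal sketch of a sparse layout with an unmapped hole. It assumes AlignedHeapMemory implements GuestMemoryAccess and exposes a size-based constructor (both assumptions here; substitute whatever backing your platform actually provides):

use guestmem::{AlignedHeapMemory, GuestMemory};

// Three 4 GiB regions, with backing for regions 0 and 2 and a hole at region 1.
const REGION_SIZE: u64 = 4 << 30;
let mem = GuestMemory::new_multi_region(
    "sparse",
    REGION_SIZE,
    vec![
        Some(AlignedHeapMemory::new(0x10000)), // region 0: GPAs starting at 0 (assumed constructor)
        None,                                  // region 1: unmapped hole
        Some(AlignedHeapMemory::new(0x10000)), // region 2: GPAs starting at 2 * REGION_SIZE
    ],
)
.expect("invalid multi-region layout");

// Accesses use absolute GPAs, so region 2 begins at 2 * REGION_SIZE.
mem.write_at(2 * REGION_SIZE, &[1, 2, 3, 4]).expect("write failed");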
pub fn allocate(size: usize) -> Self
Allocates a guest memory object on the heap with the given size in bytes.
size will be rounded up to the page size. The backing buffer will be page aligned.
The debug name in errors will be "heap". If you want to provide a different debug name, manually use GuestMemory::new with AlignedHeapMemory.
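A brief sketch of both paths; the AlignedHeapMemory constructor shown is an assumption, as is the custom debug name:

use guestmem::{AlignedHeapMemory, GuestMemory};

// Heap-backed guest memory; errors will report the debug name "heap".
let mem = GuestMemory::allocate(0x4000);

// The same thing with a custom debug name, assuming AlignedHeapMemory::new
// takes a size in bytes.
let named = GuestMemory::new("vtl0-ram", AlignedHeapMemory::new(0x4000));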
pub fn subrange(&self, offset: u64, len: u64, allow_preemptive_locking: bool) -> Result<GuestMemory, GuestMemoryError>
pub fn full_mapping(&self) -> Option<(*mut u8, usize)>
Returns the mapping for all of guest memory.
Returns None if there is more than one region or if the memory is not mapped.
pub fn iova(&self, gpa: u64) -> Option<u64>
Gets the IO address for DMAing to gpa from a user-mode driver not going through an IOMMU.
pub fn write_at(&self, gpa: u64, src: &[u8]) -> Result<(), GuestMemoryError>
Writes src into guest memory at address gpa.
pub fn write_from_atomic(
    &self,
    gpa: u64,
    src: &[AtomicU8],
) -> Result<(), GuestMemoryError>
Writes src into guest memory at address gpa.
pub fn fill_at(
    &self,
    gpa: u64,
    val: u8,
    len: usize,
) -> Result<(), GuestMemoryError>
Writes len bytes of val into guest memory at address gpa.
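For instance, a small sketch that zeroes one page of a heap-backed GuestMemory (the GPA used is arbitrary):

let mem = GuestMemory::allocate(0x2000);
// Fill the second page (GPAs 0x1000..0x2000) with zeroes.
mem.fill_at(0x1000, 0, 0x1000).expect("fill failed");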
pub fn read_at(&self, gpa: u64, dest: &mut [u8]) -> Result<(), GuestMemoryError>
Reads from guest memory address gpa into dest.
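Together with write_at, this supports simple byte-level round trips; a minimal sketch using an arbitrary GPA:

let mem = GuestMemory::allocate(0x2000);
let src = [0xAAu8; 16];
mem.write_at(0x1000, &src).expect("write failed");

let mut dest = [0u8; 16];
mem.read_at(0x1000, &mut dest).expect("read failed");
assert_eq!(src, dest);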
pub fn read_to_atomic(
    &self,
    gpa: u64,
    dest: &[AtomicU8],
) -> Result<(), GuestMemoryError>
Reads from guest memory address gpa into dest.
pub fn write_plain<T: AsBytes>(
    &self,
    gpa: u64,
    b: &T,
) -> Result<(), GuestMemoryError>
Writes an object to guest memory at address gpa.
If the object is 1, 2, 4, or 8 bytes and the address is naturally aligned, then the write will be performed atomically. Here, this means that concurrent readers (via read_plain) cannot observe a torn write but will observe either the old or new value.
The memory ordering of the write is unspecified.
FUTURE: once we are on Rust 1.79, add a method specifically for atomic accesses that const asserts that the size is appropriate.
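A short sketch of a typed round trip using u32, which already implements AsBytes and FromBytes; the GPA is arbitrary but naturally aligned, so the accesses are atomic as described above:

let mem = GuestMemory::allocate(0x1000);
// 0x100 is 4-byte aligned, so this write is atomic with respect to read_plain.
mem.write_plain(0x100, &0xdead_beef_u32).expect("write failed");

let v: u32 = mem.read_plain(0x100).expect("read failed");
assert_eq!(v, 0xdead_beef);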
pub fn compare_exchange<T: AsBytes + FromBytes + Copy>(
    &self,
    gpa: u64,
    current: T,
    new: T,
) -> Result<Result<T, T>, GuestMemoryError>
Attempts a sequentially-consistent compare exchange of the value at gpa.
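A sketch of a CAS-based increment of a u32 counter. This assumes the inner Result follows the std atomics convention, with Err carrying the value actually observed when the exchange fails:

let mem = GuestMemory::allocate(0x1000);
let gpa = 0x40;

let mut current: u32 = mem.read_plain(gpa).expect("read failed");
loop {
    match mem
        .compare_exchange(gpa, current, current.wrapping_add(1))
        .expect("guest memory access failed")
    {
        Ok(_) => break,                  // exchange succeeded
        Err(actual) => current = actual, // lost a race; retry with the observed value
    }
}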
pub fn compare_exchange_bytes<T: AsBytes + FromBytes + ?Sized>(
    &self,
    gpa: u64,
    current: &mut T,
    new: &T,
) -> Result<bool, GuestMemoryError>
Attempts a sequentially-consistent compare exchange of the value at gpa.
pub fn read_plain<T: FromBytes>(&self, gpa: u64) -> Result<T, GuestMemoryError>
Reads an object from guest memory at address gpa.
If the object is 1, 2, 4, or 8 bytes and the address is naturally aligned, then the read will be performed atomically. Here, this means that when there is a concurrent writer, callers will observe either the old or new value, but not a torn read.
The memory ordering of the read is unspecified.
FUTURE: once we are on Rust 1.79, add a method specifically for atomic accesses that const asserts that the size is appropriate.
pub fn lock_gpns(&self, with_kernel_access: bool, gpns: &[u64]) -> Result<LockedPages, GuestMemoryError>
pub fn probe_gpns(&self, gpns: &[u64]) -> Result<(), GuestMemoryError>
pub fn check_gpa_readable(&self, gpa: u64) -> bool
Checks whether a given GPA is readable.
pub fn write_range(&self, range: &PagedRange<'_>, data: &[u8]) -> Result<(), GuestMemoryError>
pub fn zero_range(&self, range: &PagedRange<'_>) -> Result<(), GuestMemoryError>
pub fn read_range(&self, range: &PagedRange<'_>, data: &mut [u8]) -> Result<(), GuestMemoryError>
pub fn write_range_from_atomic(&self, range: &PagedRange<'_>, data: &[AtomicU8]) -> Result<(), GuestMemoryError>
pub fn read_range_to_atomic(&self, range: &PagedRange<'_>, data: &[AtomicU8]) -> Result<(), GuestMemoryError>
pub fn lock_range<T: LockedRange>(
    &self,
    paged_range: PagedRange<'_>,
    locked_range: T,
) -> Result<LockedRangeImpl<T>, GuestMemoryError>
Locks the guest pages spanned by the specified PagedRange for the 'static lifetime.
Arguments
- paged_range - The guest memory range to lock.
- locked_range - Receives a list of VA ranges to which each contiguous physical sub-range in paged_range has been mapped. Must be initially empty.
Trait Implementations
impl Clone for GuestMemory
fn clone(&self) -> GuestMemory
fn clone_from(&mut self, source: &Self)
impl Debug for GuestMemory
impl Default for GuestMemory
The default implementation is GuestMemory::empty.
Auto Trait Implementations
impl Freeze for GuestMemory
impl !RefUnwindSafe for GuestMemory
impl Send for GuestMemory
impl Sync for GuestMemory
impl Unpin for GuestMemory
impl !UnwindSafe for GuestMemory
Blanket Implementations
impl<T> BorrowMut<T> for T where T: ?Sized
fn borrow_mut(&mut self) -> &mut T
impl<T> CloneToUninit for T where T: Clone
unsafe fn clone_to_uninit(&self, dst: *mut T)