libkrun X11 pipewire support #36

Open
dorindabassey wants to merge 4 commits into magma-gpu:main from dorindabassey:x11-pipewire-support

Conversation

@dorindabassey

This series adds cross-domain IPC support for X11 and PipeWire, enabling better graphics through file descriptor and futex sharing between guest and host.

@dorindabassey dorindabassey force-pushed the x11-pipewire-support branch 2 times, most recently from e4a812b to 11dcf85 on December 9, 2025 at 13:15
Collaborator

@gurchetansingh gurchetansingh left a comment


O_o ... this is a tricky one. I think the core thing you must do here is fit the cross-platform abstraction we're trying to create.

  • You should use rustix, not libc or nix
  • For EVENT_FD, we already have an abstraction for mesa3d_util::Event. Use that.
  • For futex, you need to create that. The good thing is:

https://github.com/m-ou-se/atomic-wait

has a pretty solid API for this over Linux (futex), Windows (WaitOnAddress), and MacOS. Do the Linux implementation, but we'll need stubs for Windows/MacOS (hopefully eventually we'll get to them too). Use rustix instead of libc (most of your changes will be in mesa3d_util). We of course need to properly attribute atomic-wait for the adaptation, so I think the resultant code needs to be BSD-2 licensed.

And also, can you test this change with muvm + libkrun + latest rutabaga_gfx? That'll help us have confidence the change works before we cut the release.

Thank you!

Comment thread Cargo.toml Outdated
Comment thread ffi/src/include/rutabaga_gfx_ffi.h
Comment thread src/rutabaga_core.rs Outdated
Comment thread src/cross_domain/cross_domain_protocol.rs Outdated
@dorindabassey
Author

O_o ... this is a tricky one. I think the core thing you must do here is fit the cross-platform abstraction we're trying to create.

* You should use `rustix`, not `libc` or `nix`

* For `EVENT_FD`, we already have an abstraction for `mesa3d_util::Event`.  Use that.

* For `futex`, you need to create that.  The good thing is:

https://github.com/m-ou-se/atomic-wait

has a pretty solid API for this over Linux (futex), Windows (WaitOnAddress), and MacOS. Do the Linux implementation, but we'll need stubs for Windows/MacOS (hopefully eventually we'll get to them too). Use rustix instead of libc (most of your changes will be in mesa3d_util). We of course need to properly attribute atomic-wait for the adaptation, so I think the resultant code needs to be BSD-2 licensed.

And also, can you test this change with muvm + libkrun + latest rutabaga_gfx? That'll help us have confidence the change works before we cut the release.

Thank you!

Thank you for the feedback, I have addressed the issues and incorporated your suggestions to use rustix. I also tested these changes with muvm (git clone -b val/kqloytrpuwkm https://github.com/valpackett/muvm), libkrun (git clone -b fixkrun https://github.com/dorindabassey/libkrun), and the latest rutabaga_gfx (this PR) by running xeyes in the VM.

Collaborator

@gurchetansingh gurchetansingh left a comment


Sorry for the late response (holiday season). It needs to pass CI, and try to make it more OS-agnostic, i.e., fewer (target_os = "linux") cfg blocks.

CC: @valpackett for any other review comments.

Comment thread src/rutabaga_utils.rs Outdated
Comment thread src/rutabaga_core.rs Outdated
Comment thread src/cross_domain/cross_domain_protocol.rs
Comment thread src/cross_domain/mod.rs Outdated
let event = Event::try_from(MesaHandle {
os_handle: owned_desc,
handle_type: MESA_HANDLE_TYPE_SIGNAL_EVENT_FD,
})?;
Collaborator


Should we be using descriptor.determine_type()?

Author


I changed it to use descriptor.determine_type(), but I guess I'll have to upstream this to Mesa as well, is that right?

Comment thread src/cross_domain/cross_domain_protocol.rs Outdated
Comment thread src/cross_domain/cross_domain_protocol.rs Outdated
Comment thread src/cross_domain/cross_domain_protocol.rs Outdated
Comment thread src/cross_domain/cross_domain_protocol.rs Outdated
Comment thread src/cross_domain/cross_domain_protocol.rs Outdated
Comment thread src/cross_domain/mod.rs Outdated
Comment thread src/cross_domain/cross_domain_protocol.rs Outdated
Comment thread src/cross_domain/cross_domain_protocol.rs Outdated
Comment thread src/cross_domain/mod.rs Outdated
Comment thread src/cross_domain/mod.rs Outdated
Comment thread src/cross_domain/mod.rs Outdated
@dorindabassey dorindabassey force-pushed the x11-pipewire-support branch 4 times, most recently from 0302006 to 3c79e0d on January 6, 2026 at 21:32
Collaborator

@gurchetansingh gurchetansingh left a comment


Okay, finally wrapping my head around what this is trying to do and have more substantive feedback. I think we should create a:

third_party/mesa3d/src/util/rust/atomic_memory_sentinel.rs file.

and that would be the access point to futex and WaitOnAddress like APIs.

use crate::sys::platform::futex;

struct AtomicMemorySentinel {
    memory_mapping: MemoryMapping,
    atomic_ptr: *mut AtomicU32,
    value: ..,
}

impl Drop for AtomicMemorySentinel {
    fn drop(&mut self) {
        futex::wake_all(...)
    }
}

impl AtomicMemorySentinel {
    pub fn new(mapping: MemoryMapping) -> MesaResult<Self>;
    pub fn wait(&self) -> MesaResult<()>;
    pub fn signal(&self) -> MesaResult<()>;
}

And then in the CrossDomain code you can work in terms of Arc<Mutex<AtomicMemorySentinel>>. I feel like the current CrossDomainMemoryWatcher is mixing thread operations and atomic memory operations.

Plus, AtomicMemorySentinel sounds cooler than MemoryWatcher, does it not?!?

Also, cross_domain.rs is getting rather large. Feel free to split into separate files, i.e., cross_domain_worker.rs, cross_domain_atomic_memory_sentinel_worker.rs. Thanks again for your patience!

Comment thread subprojects/packagefiles/rustix-1-rs/meson.build Outdated
Comment thread third_party/mesa3d/src/util/rust/lib.rs Outdated
Comment thread src/rutabaga_core.rs
Comment thread src/cross_domain/cross_domain_protocol.rs Outdated
Comment thread src/cross_domain/cross_domain_protocol.rs Outdated
Comment thread src/cross_domain/cross_domain_protocol.rs Outdated
Comment thread src/cross_domain/mod.rs Outdated
Comment thread src/cross_domain/mod.rs Outdated
Comment thread src/cross_domain/mod.rs Outdated
Comment thread src/cross_domain/mod.rs Outdated
@dorindabassey dorindabassey force-pushed the x11-pipewire-support branch 3 times, most recently from 236561f to cbf2942 on January 13, 2026 at 16:39
@dorindabassey
Author

dorindabassey commented Jan 13, 2026

Okay, finally wrapping my head around what this is trying to do and have more substantive feedback. I think we should create a:

third_party/mesa3d/src/util/rust/atomic_memory_sentinel.rs file.

and that would be the access point to futex and WaitOnAddress like APIs.

use crate::sys::platform::futex;

struct AtomicMemorySentinel {
    memory_mapping: MemoryMapping,
    atomic_ptr: *mut AtomicU32,
    value: ..,
}

impl Drop for AtomicMemorySentinel {
    fn drop(&mut self) {
        futex::wake_all(...)
    }
}

impl AtomicMemorySentinel {
    pub fn new(mapping: MemoryMapping) -> MesaResult<Self>;
    pub fn wait(&self) -> MesaResult<()>;
    pub fn signal(&self) -> MesaResult<()>;
}

And then in the CrossDomain code you can work in terms of Arc<Mutex<AtomicMemorySentinel>>. I feel like the current CrossDomainMemoryWatcher is mixing thread operations and atomic memory operations.

Plus, AtomicMemorySentinel sounds cooler than MemoryWatcher, does it not?!?

Also, cross_domain.rs is getting rather large. Feel free to split into separate files, i.e., cross_domain_worker.rs, cross_domain_atomic_memory_sentinel_worker.rs. Thanks again for your patience!

Thanks for the detailed feedback! I've addressed all the other points. Regarding splitting cross_domain/mod.rs into separate files (cross_domain_worker.rs, cross_domain_atomic_memory_sentinel_worker.rs): I'd prefer to defer this to a follow-up change since I need to rebase this series first, and the file split would make the rebase messy.
Also, I don't know what the last CI error is about; I thought I removed linux_raw_sys.

Collaborator

@gurchetansingh gurchetansingh left a comment


While you don't need to refactor existing code, I still think you should encapsulate the new functionality into its own file to the extent possible, since it does introduce complexity spread throughout the codebase.

I suggest the creation of atomic_memory_sentinel_manager.rs in the src/cross_domain dir. That can encapsulate management of the AtomicMemorySentinelThread(s) and access to them. You don't have to prefix everything with CrossDomain in src/cross_domain.rs -- I think we'll refactor that pattern later.

mod.rs

 struct CrossDomainContext {
     watcher_manager: Arc<Mutex<AtomicMemorySentinelManager>>,
     worker_thread: Option<thread::JoinHandle<()>>,
 }
 
 impl CrossDomainContext {
     fn memory_watcher_new(&mut self, cmd: &CreateNewWatcherCmd) -> Result<()> {
         // Lock the mutex to get mutable access to the manager
         let mut manager = self.watcher_manager.lock().unwrap();
         let (id, eventfd_descriptor) = manager.create_watcher(cmd.fs_id, cmd.handle)?;
 
         self.state.add_job(CrossDomainJob::AddPolledDescriptor(id, eventfd_descriptor));
         Ok(())
     }
 
     fn memory_watcher_destroy(&mut self, cmd: &DestroyWatcherCmd) -> Result<()> {
         let mut manager = self.watcher_manager.lock().unwrap();
         manager.destroy_watcher(cmd.id)?;
 
         self.state.add_job(CrossDomainJob::RemovePolledDescriptor(cmd.id));
         Ok(())
     }
 }
 
 
 struct CrossDomainWorker {
     watcher_manager: Arc<Mutex<AtomicMemorySentinelManager>>,
     wait_ctx: WaitContext,
 }
 
 impl CrossDomainWorker {
     fn handle_fence(&mut self) -> Result<()> {
         let events = self.wait_ctx.wait()?;
         if let Some(event) = events.first() {
             if event.id >= MEMORY_WATCHER_ID_START {
                 let manager = self.watcher_manager.lock().unwrap();
 
                 let ring_message = manager.handle_event(event.id, &event.descriptor)?;
 
                 self.state.write_to_ring(RingWrite::Write(ring_message, None), ...)?;
             }
         }
         Ok(())
     }
 }

atomic_memory_sentinel_manager.rs
 pub struct AtomicMemorySentinelManager {
     watchers: Map<u32, SentinelInstance>,
     next_id: u32,
     virtiofs_table: VirtioFsTable, // A reference to the required dependency
 }
 
 impl AtomicMemorySentinelManager {
     pub fn new(virtiofs_table: VirtioFsTable) -> Self {
         Self {
             watchers: Map::new(),
             next_id: MEMORY_WATCHER_ID_START,
             virtiofs_table,
         }
     }
 
     pub fn create_watcher(&mut self, fs_id: u64, handle: u64) -> Result<(u32, OwnedDescriptor)> {
         let id = self.next_id;
         self.next_id += 1;

         let eventfd = Event::new()?;
         let thread_eventfd = eventfd.try_clone()?;

         let join_handle = thread::spawn(move || {
             // Thread logic that waits on the sentinel and signals `thread_eventfd`.
         });

         Ok((id, eventfd.into()))
     }
 
     pub fn handle_event(&self, id: u32, event_fd: &Event) -> Result<RingMessage> {
        .. 
         Ok(message)
     }
 
     pub fn destroy_watcher(&mut self, id: u32) -> Result<()> {
        ...
     }
 }
 
 impl Drop for AtomicMemorySentinelManager {
     fn drop(&mut self) {
         for (_, watcher) in self.watchers.drain() {
             watcher.shutdown();
         }
     }
 }

Comment thread Cargo.toml Outdated
Comment thread src/cross_domain/cross_domain_protocol.rs Outdated
Comment thread src/cross_domain/cross_domain_protocol.rs Outdated
Comment thread src/cross_domain/mod.rs Outdated
Comment thread src/cross_domain/mod.rs Outdated
Comment thread src/cross_domain/mod.rs Outdated
Comment thread src/cross_domain/mod.rs Outdated
Comment thread third_party/mesa3d/src/util/rust/atomic_memory_sentinel.rs Outdated
Comment thread third_party/mesa3d/src/util/rust/atomic_memory_sentinel.rs Outdated
'--cfg', 'feature="net"',
'--cfg', 'feature="param"',
'--cfg', 'feature="pipe"',
'--cfg', 'feature="thread"',
Collaborator


hmm, this will introduce a linux-raw-sys dependency in Android, which the Rust team will not be happy about. I say keep this, and we'll have to get another patch through rustix before this is merged, but don't add a linux-raw-sys dep in the meson builds.

Comment thread src/cross_domain/atomic_memory_sentinel_manager.rs Outdated
Comment thread third_party/mesa3d/src/util/rust/atomic_memory_sentinel.rs Outdated
Comment thread src/cross_domain/mod.rs
Comment thread src/cross_domain/mod.rs Outdated
Comment thread src/cross_domain/mod.rs Outdated
Comment thread src/cross_domain/mod.rs Outdated
Comment thread src/cross_domain/mod.rs Outdated
Comment thread src/cross_domain/atomic_memory_sentinel_manager.rs Outdated
Comment thread third_party/mesa3d/src/util/rust/atomic_memory_sentinel.rs Outdated
Comment thread third_party/mesa3d/src/util/rust/atomic_memory_sentinel.rs Outdated
This adds the VirtiofsTable type and plumbs it
through RutabagaBuilder and CrossDomain, allowing
file descriptors to be shared between virtio-fs
and virtio-gpu contexts.

Based on libkrun commit 4ccc22d2ff887ae22eeeb9d0434498209dfdc00c

Signed-off-by: Asahi Lina <[email protected]>
Signed-off-by: Dorinda Bassey <[email protected]>
Extract file descriptor receive logic from handle_fence()
into a separate process_receive() method to improve code
organization.

Co-developed-by: Sasha Finkelstein <[email protected]>
Signed-off-by: Dorinda Bassey <[email protected]>
Collaborator

@gurchetansingh gurchetansingh left a comment


Mostly nits, but I think we need to get rid of the double-atomicity and leaking a reference to the atomic value.

I'll also prepare a CL to solve the Rustix/Android issue..

Comment thread third_party/mesa3d/src/util/rust/atomic_memory_sentinel.rs Outdated
Comment thread third_party/mesa3d/src/util/rust/atomic_memory_sentinel.rs Outdated
Comment thread third_party/mesa3d/src/util/rust/atomic_memory_sentinel.rs Outdated
Comment thread third_party/mesa3d/src/util/rust/atomic_memory_sentinel.rs Outdated
Comment thread third_party/mesa3d/src/util/rust/atomic_memory_sentinel.rs
Comment thread third_party/mesa3d/src/util/rust/lib.rs Outdated
Comment thread third_party/mesa3d/src/util/rust/atomic_memory_sentinel.rs Outdated
Comment thread src/cross_domain/atomic_memory_sentinel_manager.rs
Comment thread src/cross_domain/atomic_memory_sentinel_manager.rs Outdated
Comment thread src/cross_domain/mod.rs Outdated
This adds the X11 cross domain channel and an ability to share
memory watchers between the VM and the host for synchronization.
Memory watchers provide a platform-agnostic API over Linux
futex syscalls, allowing shared memory synchronization via
VirtioFS file descriptors.

Signed-off-by: Sasha Finkelstein <[email protected]>
Co-developed-by: Asahi Lina <[email protected]>
Signed-off-by: Asahi Lina <[email protected]>
Signed-off-by: Dorinda Bassey <[email protected]>
This adds a pipewire channel type, increases the maximum number
of fds that can be transferred in a single message, and adds support
for sharing eventfds between the guest and host.

Signed-off-by: Sasha Finkelstein <[email protected]>
Signed-off-by: Dorinda Bassey <[email protected]>
@dorindabassey
Author

Mostly nits, but I think we need to get rid of the double-atomicity and leaking a reference to the atomic value.

I'll also prepare a CL to solve the Rustix/Android issue..

Thanks! I've fixed the pending issues now.

@gurchetansingh
Collaborator

Apologies once again for the delay! Rustix patch almost complete, undergoing testing internally. In the meantime, can you send the mesa3d_util change to upstream Mesa?

ETA for new rutabaga crates.io release at this pace: ~3 weeks

Thank you!

@dorindabassey
Author

Apologies once again for the delay! Rustix patch almost complete, undergoing testing internally. In the meantime, can you send the mesa3d_util change to upstream Mesa?

ETA for new rutabaga crates.io release at this pace: ~3 weeks

Thank you!

No worries! Thank you! Submitted the MR here: https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/39653

@gurchetansingh
Collaborator

bytecodealliance/rustix#1577 <-- Rustix pull request here, could take a few weeks to get a review tho

@gurchetansingh
Collaborator

@dorindabassey the Rustix MR is taking more time than expected.

As a stopgap to make sure you're not blocked, I created a libkrun specific branch and associated crates.io releases:

https://github.com/magma-gpu/rutabaga_gfx/commits/libkrun
https://crates.io/crates/mesa3d_util/0.1.76-libkrun.0
https://crates.io/crates/rutabaga_gfx/0.1.76-libkrun.0

This is similar to the tricks used to support chromeos too. Definitely will merge this into main once we have rustix issues fixed, but in the meantime feel free to use the libkrun crates.io releases to land the rest of the changes you need.

@dorindabassey
Author

@dorindabassey the Rustix MR is taking more time than expected.

As a stopgap to make sure you're not blocked, I created a libkrun specific branch and associated crates.io releases:

https://github.com/magma-gpu/rutabaga_gfx/commits/libkrun https://crates.io/crates/mesa3d_util/0.1.76-libkrun.0 https://crates.io/crates/rutabaga_gfx/0.1.76-libkrun.0

This is similar to the tricks used to support chromeos too. Definitely will merge this into main once we have rustix issues fixed, but in the meantime feel free to use the libkrun crates.io releases to land the rest of the changes you need.

Thanks! This will be helpful; also, the Mesa MR is still pending.

@gurchetansingh
Collaborator

also the Mesa MR is still pending

Ack, but merging that would also break Android unfortunately, and it's much harder and less useful to create branches for Mesa.

@valpackett
Contributor

Sorry for dragging this along even more :) but one extra thing I'm currently working on (for libkrun) is D-Bus over virtgpu, so we could have D-Bus integration with FD-passing (for XDG portals that require it). Camera and screen capture pass PipeWire remotes, but printing and file transfer for drag&drop/copy&paste pass regular file fds. Well, printing does, but file transfer passes O_PATH descriptors (despite the document-portal preferring "normal" ones now). And that's what I wanted to discuss here:

this current type VirtioFsTable = Arc<Mutex<Map<VirtioFsKey, File>>> won't work for O_PATH fds, as in that case there's no handle at all, just the inode number. So I'd have to add a different map/thingy to be able to look up inodes.

But anyway, this whole Mutex<Map<…>> is straight out of libkrun's implementation details, and a different VMM might use a different data structure to store the FDs. (And maybe message-passing instead of mutex-based sharing for concurrency!)

Can we instead make a trait like

pub trait VirtioFsLookup {
    async fn get_open_file(fs_id: u64, handle: u64) -> Result<File>;
}

and then store an Option<dyn VirtioFsLookup + Send + Sync> or something…?

(And then conveniently I'll be able to extend it with a method that returns an O_PATH "File" based on an inode instead of a handle)

dorindabassey pushed a commit to dorindabassey/rutabaga_gfx that referenced this pull request Feb 23, 2026
We would like to land:

magma-gpu#36

But that'll break Android until the following rustix change lands:

bytecodealliance/rustix#1577

and it can take a few months to get something through rustix due
to limited maintainer bandwidth.

Stopgap solution: cut releases and branches for libkrun until
Rustix change is merged.  Unlike the "chromeos" branch, the libkrun
branch will eventually be merged with main.
@dorindabassey
Author

dorindabassey commented Feb 23, 2026

Sorry for dragging this along even more :) but one extra thing I'm currently working on (for libkrun) is D-Bus over virtgpu, so we could have D-Bus integration with FD-passing (for XDG portals that require it). Camera and screen capture pass PipeWire remotes, but printing and file transfer for drag&drop/copy&paste pass regular file fds. Well, printing does, but file transfer passes O_PATH descriptors (despite the document-portal preferring "normal" ones now). And that's what I wanted to discuss here:

this current type VirtioFsTable = Arc<Mutex<Map<VirtioFsKey, File>>> won't work for O_PATH fds, as in that case there's no handle at all, just the inode number. So I'd have to add a different map/thingy to be able to look up inodes.

But anyway, this whole Mutex<Map<…>> is straight out of libkrun's implementation details, and a different VMM might use a different data structure to store the FDs. (And maybe message-passing instead of mutex-based sharing for concurrency!)

Can we instead make a trait like

pub trait VirtioFsLookup {
    async fn get_open_file(fs_id: u64, handle: u64) -> Result<File>;
}

and then store an Option<dyn VirtioFsLookup + Send + Sync> or something…?

(And then conveniently I'll be able to extend it with a method that returns an O_PATH "File" based on an inode instead of a handle)

I'm currently working on implementing the trait-based approach you suggested, though I made some adjustments from your proposal:
pub trait VirtioFsLookup: Send + Sync {
fn get_open_file(&self, fs_id: u64, handle: u64) -> RutabagaResult;
}
I decided to make it sync because the codebase has no async runtime and the call site in AtomicMemorySentinelManager::create_watcher is synchronous. Is this OK for you, or do you prefer the async version?
For the O_PATH support, we can extend the trait with an optional method for inode lookups, but I wonder whether this is something to do in this PR or in a new PR.

@valpackett
Contributor

@dorindabassey sync is fine for everything right now; async was something I suggested only for the very abstract possibility of other/future VMMs wanting to use it :)

For the inode method, just the possibility that I could add it to the trait in a future PR would be excellent.

@mtjhrc

mtjhrc commented Feb 24, 2026

Just a small suggestion, since this is switching to a more general trait anyway:

I think the get_open_file should return OwnedDescriptor instead of File. That seems more semantically correct and the code converts it to an OwnedDescriptor immediately anyway. And maybe call it something like get_exported_fd?

@dorindabassey
Author

dorindabassey commented Feb 24, 2026

Just a small suggestion, since this is switching to a more general trait anyway:

I think the get_open_file should return OwnedDescriptor instead of File. That seems more semantically correct and the code converts it to an OwnedDescriptor immediately anyway. And maybe call it something like get_exported_fd?

Right, thanks! The PR is opened as #44.
