Waylandsink: could not create wl_buffer out of wl_shm memory

I have an application that uses the following pipeline:
appsrc ! queue ! videoconvert ! waylandsink
where I’m pushing I420 frames from libwebrtc.
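
For context, the pipeline is built roughly like this (a minimal sketch: the appsrc element name and the is-live/format properties are illustrative assumptions, not the exact application code):

// Sketch of the pipeline setup; requires the gstreamer and gstreamer-app crates.
use gstreamer::prelude::*;

fn build_pipeline() -> (gstreamer::Pipeline, gstreamer_app::AppSrc) {
    gstreamer::init().unwrap();
    let pipeline = gstreamer::parse::launch(
        "appsrc name=videoappsrc is-live=true format=time \
         ! queue ! videoconvert ! waylandsink",
    )
    .unwrap()
    .downcast::<gstreamer::Pipeline>()
    .unwrap();
    // Fetch the appsrc back out of the parsed pipeline by name.
    let videoappsrc = pipeline
        .by_name("videoappsrc")
        .unwrap()
        .downcast::<gstreamer_app::AppSrc>()
        .unwrap();
    (pipeline, videoappsrc)
}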

This was working on an Arch Linux x86 installation, but when I tried the same setup on a Fedora 42 installation, I got the error: “could not create wl_buffer out of wl_shm memory”.

I tried launching pipelines with videotestsrc and with uridecodebin, using waylandsink as the sink, and those work fine. Is there some configuration or package I’m missing that could cause this?
Both cases were on GStreamer 1.26.0.
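
For reference, the sanity check was along these lines (exact command from memory), and it renders fine on the same Fedora machine:

gst-launch-1.0 videotestsrc ! videoconvert ! waylandsink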

At first sight, a leak in your application could cause this.

I’ll look into it more. But it doesn’t seem to happen on my Arch system or on macOS (which uses glimagesink instead of waylandsink).

I think I found the issue:
This was my logic for pushing I420 frame buffers to appsrc:

tokio::spawn(async move {
    let mut stream = NativeVideoStream::new(track);
    let (mut caps_width, mut caps_height) = (0, 0);
    let mut size = 0;
    let mut offset = [0; 3];
    let mut stride = [0i32; 3];
    let mut i = 0;
    while let Some(frame) = stream.next().await {
        let i420_buffer = frame.buffer.to_i420();
        let width = i420_buffer.width() as usize;
        let height = i420_buffer.height() as usize;
        let data_yuv = i420_buffer.data();
        let strides_yuv = i420_buffer.strides();
        if i < 4 {
            println!(
                "Pipeline: {:?}, ",
                pipeline.current_state(),
            );
            i += 1;
        }
        if caps_width != width || caps_height != height {
            caps_width = width;
            caps_height = height;
            let caps = Caps::builder("video/x-raw")
                .field("width", width as i32)
                .field("height", height as i32)
                .field("format", "I420")
                .field(
                    "framerate",
                    &gstreamer::Fraction::new(30, 1),
                )
                .build();
            videoappsrc.set_caps(Some(&caps));
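            // Recompute the buffer layout (total size, per-plane offsets
            // and strides) that GStreamer expects for I420 at this size.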
            let info = VideoInfo::builder(
                VideoFormat::I420,
                width as u32,
                height as u32,
            )
            .fps(gstreamer::Fraction::new(30, 1))
            .build()
            .unwrap();
            size = info.size();
            let offset_slice = info.offset();
            let stride_slice = info.stride();
            offset[..offset_slice.len()]
                .copy_from_slice(offset_slice);
            stride[..stride_slice.len()]
                .copy_from_slice(stride_slice);
        }
        let mut raw_data = vec![0u8; size];
        // Copy Y plane
        let y_stride = strides_yuv.0 as usize;
        let y_offset = offset[0];
        for row in 0..height {
            let src_start = row * y_stride;
            let dst_start = y_offset + row * stride[0] as usize;
            raw_data[dst_start..dst_start + width]
                .copy_from_slice(
                    &data_yuv.0[src_start..src_start + width],
                );
        }
        // Copy U plane
        let chroma_height = height / 2;
        let u_stride = strides_yuv.1 as usize;
        let u_offset = offset[1] as usize;
        for row in 0..chroma_height {
            let src_start = row * u_stride;
            let dst_start = u_offset + row * stride[1] as usize;
            raw_data[dst_start..dst_start + (width / 2)]
                .copy_from_slice(
                    &data_yuv.1
                        [src_start..src_start + (width / 2)],
                );
        }
        // Copy V plane
        let v_stride = strides_yuv.2 as usize;
        let v_offset = offset[2] as usize;
        for row in 0..chroma_height {
            let src_start = row * v_stride;
            let dst_start = v_offset + row * stride[2] as usize;
            raw_data[dst_start..dst_start + (width / 2)]
                .copy_from_slice(
                    &data_yuv.2
                        [src_start..src_start + (width / 2)],
                );
        }
        let gst_buffer =
            gstreamer::Buffer::from_mut_slice(raw_data);
        if let Err(e) = videoappsrc.push_buffer(gst_buffer) {
            println!("Error {:?}", e);
        }
    }
});
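
As an aside, the same plane-by-plane copy can be written against gstreamer-video’s VideoFrameRef, which derives the destination strides and offsets from the VideoInfo instead of tracking them by hand. This is only a sketch reusing info, data_yuv, strides_yuv, width and height from the code above (requires the gstreamer-video crate); I haven’t verified whether it changes the waylandsink behaviour:

// Alternative to the manual offset/stride bookkeeping above:
// let VideoFrameRef derive the destination layout from `info`.
use gstreamer_video as gst_video;

let mut gst_buffer = gstreamer::Buffer::with_size(info.size()).unwrap();
{
    let buffer_ref = gst_buffer.get_mut().unwrap();
    let mut vframe =
        gst_video::VideoFrameRef::from_buffer_ref_writable(buffer_ref, &info)
            .unwrap();
    // (plane index, source data, source stride, bytes per row, row count)
    let planes = [
        (0usize, data_yuv.0, strides_yuv.0 as usize, width, height),
        (1, data_yuv.1, strides_yuv.1 as usize, width / 2, height / 2),
        (2, data_yuv.2, strides_yuv.2 as usize, width / 2, height / 2),
    ];
    for (plane, src, src_stride, row_bytes, rows) in planes {
        let dst_stride = vframe.plane_stride()[plane] as usize;
        let dst = vframe.plane_data_mut(plane as u32).unwrap();
        for row in 0..rows {
            dst[row * dst_stride..row * dst_stride + row_bytes]
                .copy_from_slice(&src[row * src_stride..row * src_stride + row_bytes]);
        }
    }
}
// gst_buffer is now laid out as described by `info` and can be pushed as before.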

I switched to converting to ARGB and it’s working:

tokio::spawn(async move {
    let mut stream = NativeVideoStream::new(track);
    let (mut caps_width, mut caps_height) = (0, 0);
    let mut i = 0;
    while let Some(frame) = stream.next().await {
        let i420_buffer = frame.buffer.to_i420();
        let width = i420_buffer.width() as u32;
        let height = i420_buffer.height() as u32;
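        // BGRA is 4 bytes per pixel, so a tightly packed frame needs
        // stride = width * 4 and stride * height bytes in total.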
        let stride = width * 4;
        let mut raw_data =
            vec![0u8; (stride * height) as usize];
        frame.buffer.to_argb(
            VideoFormatType::BGRA,
            raw_data.as_mut_slice(),
            stride,
            width as i32,
            height as i32,
        );
        if i < 4 {
            println!(
                "Pipeline: {:?}, ",
                pipeline.current_state(),
            );
            i += 1;
        }
        if caps_width != width || caps_height != height {
            caps_width = width;
            caps_height = height;
            let caps = Caps::builder("video/x-raw")
                .field("width", width as i32)
                .field("height", height as i32)
                .field("format", "ARGB")
                .field(
                    "framerate",
                    &gstreamer::Fraction::new(30, 1),
                )
                .build();
            videoappsrc.set_caps(Some(&caps));
        }
        let gst_buffer =
            gstreamer::Buffer::from_mut_slice(raw_data);
        if let Err(e) = videoappsrc.push_buffer(gst_buffer) {
            println!("Error {:?}", e);
        }
    }
});

What am I doing wrong in the I420 scenario, and why does it work on the other systems?
I’d obviously prefer the ARGB conversion since it’s simpler, but should I be worried about the conversion cost?
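
Back-of-the-envelope, assuming 1920x1080 at 30 fps: BGRA is 4 bytes per pixel, so each frame is 1920 × 1080 × 4 ≈ 8.3 MB (≈ 249 MB/s), while I420 is 1.5 bytes per pixel, so ≈ 3.1 MB per frame (≈ 93 MB/s). That’s roughly triple the memory traffic, on top of the conversion work itself.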