WebRTC pipeline destruction

We’re using webrtcbin for WebRTC sessions with audio and video media, and a single data channel. It all seems to be working perfectly, except when it comes to cleaning up a session.

According to heaptrack there are a number of GStreamer objects lingering on after the session ends, plus some data that was queued up to be sent over the now-closed data channel.

The cleanup at the moment is just:

  1. Remove the bus watch
  2. Set the pipeline state to NULL

        debug!("[WebRTC Session {session_id}] un-watching bus...");
        if let Err(e) = self.pipeline.bus().expect("pipeline bus").remove_watch() {
            error!("[WebRTC Session {session_id}] failed to remove bus watch: {e}");
        }

        debug!("[WebRTC Session {session_id}] setting pipeline state to NULL...");
        match self.pipeline.set_state(gst::State::Null) {
            Ok(_) => {
                debug!("[WebRTC Session {session_id}] pipeline state set to NULL");
            }
            Err(e) => {
                error!(
                    "[WebRTC Session {session_id}] failed to set the pipeline state to NULL: {e}"
                );
            }
        }

Is there anything else that needs to be done to properly release everything?

Also, I don’t know if this is related or normal behaviour, but after closing the connection we get a bus error like

    BUS ERROR from /GstPipeline:pipeline-127859407074357/GstWebRTCBin:webrtcbin19/GstSctpEnc:sctpenc19 - Could not write to resource.

That’s sufficient in general.

Either there’s a bug in webrtcbin, or one in your application. A common cause of such memory leaks is creating reference cycles: for example, connecting a signal handler to webrtcbin or some other element where the closure captures a strong reference to the pipeline (or to something that itself holds a strong reference to the pipeline).
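
As a rough sketch (the signal name and surrounding setup here are only illustrative, not taken from your application), the difference between the leaky and the safe pattern looks roughly like this:

    use gstreamer as gst;
    use gstreamer::prelude::*;

    fn connect_handler(pipeline: &gst::Pipeline, webrtcbin: &gst::Element) {
        // Leaky pattern: the closure owns a strong reference to the pipeline.
        // The pipeline owns webrtcbin, webrtcbin owns the closure for as long
        // as the handler stays connected, and the closure owns the pipeline,
        // so the cycle keeps everything alive after the session ends.
        let pipeline_strong = pipeline.clone();
        webrtcbin.connect("on-negotiation-needed", false, move |_values| {
            let _pipeline = &pipeline_strong; // strong capture -> reference cycle
            None
        });

        // Safe pattern: capture a weak reference and upgrade it inside the
        // closure, bailing out if the pipeline has already been dropped.
        let pipeline_weak = pipeline.downgrade();
        webrtcbin.connect("on-negotiation-needed", false, move |_values| {
            let Some(_pipeline) = pipeline_weak.upgrade() else {
                return None;
            };
            None
        });
    }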

Thanks - the only pipeline reference in our code is a private member of a non-Clone struct which is definitely being dropped, so I don’t think it’s being retained by a reference cycle. Other than that, there are a bunch of signal handlers along these lines:

        // watch for new data channels
        let session_clone = session.downgrade();
        session
            .webrtcbin
            .connect("on-data-channel", false, move |values| {
                let _webrtc = values[0].get::<gst::Element>().expect("Invalid argument");
                let data_channel = values[1]
                    .get::<gst_webrtc::WebRTCDataChannel>()
                    .expect("Invalid argument");

                let session = upgrade_weak!(session_clone, None);

                if let Err(err) = session.on_data_channel(data_channel) {
                    gst::element_error!(
                        session.pipeline,
                        gst::LibraryError::Failed,
                        ("failed to handle data channel: {:?}", err)
                    );
                }

                None
            });

Could gst::element_error! be doing something unexpected?

You’re even using weak references in that code snippet, so that’s not going to be a problem.

I think we’ll need a full, runnable code example here to debug this further. Can you put together a testcase that reproduces the problem?

No, it will just create a message and post it on the bus.
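
For completeness, here is a tiny self-contained sketch of that behaviour (the element and pipeline used are arbitrary, and the no-argument Pipeline::new plus the builder-style ElementFactory::make assume a reasonably recent gstreamer-rs):

    use gstreamer as gst;
    use gstreamer::prelude::*;

    fn main() {
        gst::init().unwrap();

        // Arbitrary pipeline with a single element, just to have a bus to post on.
        let pipeline = gst::Pipeline::new();
        let identity = gst::ElementFactory::make("identity").build().unwrap();
        pipeline.add(&identity).unwrap();

        // element_error! only constructs an ErrorMessage and posts it on the
        // bus; it doesn't tear anything down or change element state.
        gst::element_error!(
            identity,
            gst::LibraryError::Failed,
            ("failed to handle data channel")
        );

        // The message then shows up on the pipeline bus like any other error.
        let bus = pipeline.bus().unwrap();
        let msg = bus
            .timed_pop_filtered(gst::ClockTime::NONE, &[gst::MessageType::Error])
            .unwrap();
        if let gst::MessageView::Error(err) = msg.view() {
            println!("error posted on the bus: {}", err.error());
        }
    }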