Nvcompositor freezes on dynamic pad-resize when switching multi-camera view modes

I’m developing a multi-camera video compositing application for different vehicles using GStreamer on NVIDIA Jetson Orin AGX with ZED X One cameras. The application needs to dynamically switch between different camera view layouts (single camera, dual camera, quad view, etc.) at runtime based on operator commands, similar to switching between different security camera views.
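At a high level, each layout preset is just a set of rectangles in the output frame. A small helper along these lines (a hypothetical sketch, not my production code; `CameraLocation` matches the struct in the MRE below) is enough to generate single/dual/quad presets:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Placement of one camera inside the composited output frame.
struct CameraLocation {
    std::size_t width, height, x_pos, y_pos;
    double alpha;
};

// Tile `count` cameras into a near-square grid inside an out_w x out_h frame:
// 1 camera -> full frame, 2 -> side by side, 3..4 -> 2x2 quad view, etc.
std::vector<CameraLocation> grid_layout(std::size_t count,
                                        std::size_t out_w,
                                        std::size_t out_h) {
    std::vector<CameraLocation> layout;
    if (count == 0) return layout;
    std::size_t cols = static_cast<std::size_t>(
        std::ceil(std::sqrt(static_cast<double>(count))));
    std::size_t rows = (count + cols - 1) / cols;
    std::size_t cell_w = out_w / cols, cell_h = out_h / rows;
    for (std::size_t i = 0; i < count; ++i) {
        layout.push_back({cell_w, cell_h,
                          (i % cols) * cell_w,   // column position
                          (i / cols) * cell_h,   // row position
                          1.0});                 // fully opaque
    }
    return layout;
}
```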

My current pipeline architecture consists of multiple nvarguscamerasrc elements feeding into an nvcompositor, followed by nvv4l2h265enc for hardware encoding and udpsink for network streaming. Each camera source goes through nvvidconv for format conversion and capsfilter for format specification before connecting to the compositor. The pipeline works perfectly for static layouts, producing smooth 30fps H.265 encoded video streams.
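For reference, the static version of one branch of the pipeline is roughly equivalent to this gst-launch-1.0 sketch (the caps, sensor id, and destination host are placeholders, and only one camera branch is shown):

```
gst-launch-1.0 \
  nvcompositor name=comp \
    sink_0::xpos=0 sink_0::ypos=0 sink_0::width=1920 sink_0::height=1080 \
  ! 'video/x-raw(memory:NVMM)' ! nvvidconv \
  ! nvv4l2h265enc ! udpsink host=192.168.1.10 port=5000 \
  nvarguscamerasrc sensor-id=0 \
  ! 'video/x-raw(memory:NVMM),width=1920,height=1080,framerate=30/1' \
  ! nvvidconv ! comp.sink_0
```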

The core issue I’m facing is that when I attempt to dynamically update the nvcompositor sink pad properties (xpos, ypos, width, height, alpha) to change camera layouts at runtime, the video feed freezes completely. The pipeline doesn’t crash or throw errors, but no new buffers flow through the system after the property update. I can see in my logs that the g_object_set calls on the compositor sink pads complete successfully, and the properties appear to be set correctly when I query them back, but the video output becomes static.

I’ve tried several approaches to solve this problem. Initially, I attempted to pause the entire pipeline using gst_element_set_state(pipeline, GST_STATE_PAUSED) before updating properties and then resuming with GST_STATE_PLAYING, but this causes the same freezing behavior. I also tried using pad blocking with gst_pad_add_probe and GST_PAD_PROBE_TYPE_BLOCK_DOWNSTREAM to temporarily halt data flow during property updates, but this also results in frozen output. I experimented with sending custom events and EOS events to force buffer flushing, and even tried gst_pad_mark_reconfigure on the compositor sink pads, but none of these approaches restored proper data flow.

Looking at my implementation, I create all camera branches at startup and connect them to the nvcompositor with initial layout properties. When a layout change is requested, I iterate through each branch and call g_object_set on their respective compositor sink pads to update the positioning and scaling properties. The branches themselves remain connected throughout this process. I’m not unlinking or relinking any pads, just updating the compositor’s internal layout properties.

The expected behavior would be that updating these properties should cause the compositor to immediately reflect the new layout in subsequent output frames, similar to how other GStreamer elements handle dynamic property changes. However, it seems like the nvcompositor either doesn’t support dynamic property updates properly, or there’s a specific sequence of operations required to make it work correctly.

I’m wondering if this is a known limitation of the nvcompositor element, or if there’s a proper way to handle dynamic layout changes. Should I be using a different approach entirely, such as maintaining separate pipelines for each layout and switching between them? Is there a specific order for updating compositor properties or additional steps required after property changes? Are there alternative compositor elements that might handle dynamic updates better?

The application needs to switch layouts frequently during operation, so recreating the entire pipeline for each change introduces unacceptable latency and potential frame drops. Any insights into proper dynamic compositor usage or alternative approaches for runtime layout switching would be greatly appreciated.

I'm not sure what else to try, so here is an MRE in case it helps elicit better advice.

Here's a high-level example of how I construct the pipeline:

#include <gst/gst.h>
#include <iostream>
#include <vector>

struct CameraLocation {
    size_t width, height, x_pos, y_pos;
    double alpha;
};

class VideoPipeline {
public:
    VideoPipeline() {
        pipeline = gst_pipeline_new("video-pipeline");
        compositor = gst_element_factory_make("nvcompositor", "compositor");
        // Encoder omitted for brevity; the real pipeline has nvv4l2h265enc
        // between the compositor and the udpsink.
        sink = gst_element_factory_make("udpsink", "sink");

        gst_bin_add_many(GST_BIN(pipeline), compositor, sink, NULL);
        gst_element_link(compositor, sink);
    }

    void add_camera_branch(size_t camera_sn, const CameraLocation &location) {
        GstElement *src = gst_element_factory_make("nvarguscamerasrc", NULL);
        GstElement *nvvidconv = gst_element_factory_make("nvvidconv", NULL);
        GstElement *capsfilter = gst_element_factory_make("capsfilter", NULL);
        g_object_set(src, "sensor-id", (gint)camera_sn, NULL);
        // caps on the capsfilter omitted for brevity

        gst_bin_add_many(GST_BIN(pipeline), src, nvvidconv, capsfilter, NULL);
        gst_element_link_many(src, nvvidconv, capsfilter, NULL);

        // Request the compositor sink pad explicitly and link to it, so the
        // pad we configure is the same pad that carries the data. (Linking
        // capsfilter straight to the compositor would auto-request one pad,
        // and gst_element_get_request_pad() below would then hand back a
        // second, unconnected pad.)
        GstPad *sink_pad = gst_element_get_request_pad(compositor, "sink_%u");
        GstPad *src_pad = gst_element_get_static_pad(capsfilter, "src");
        gst_pad_link(src_pad, sink_pad);
        gst_object_unref(src_pad);

        // The positional pad properties are gint; cast so g_object_set()
        // doesn't push 64-bit size_t values through its varargs.
        g_object_set(sink_pad,
                     "xpos", (gint)location.x_pos,
                     "ypos", (gint)location.y_pos,
                     "width", (gint)location.width,
                     "height", (gint)location.height,
                     "alpha", location.alpha,
                     NULL);
        gst_object_unref(sink_pad);
    }

    void update_layout(const std::vector<CameraLocation> &locations) {
        for (size_t i = 0; i < locations.size(); ++i) {
            // Request pads keep their names, so an existing "sink_N" pad can
            // be fetched with gst_element_get_static_pad().
            GstPad *sink_pad = gst_element_get_static_pad(
                compositor, ("sink_" + std::to_string(i)).c_str());
            if (!sink_pad)
                continue;
            g_object_set(sink_pad,
                         "xpos", (gint)locations[i].x_pos,
                         "ypos", (gint)locations[i].y_pos,
                         "width", (gint)locations[i].width,
                         "height", (gint)locations[i].height,
                         "alpha", locations[i].alpha,
                         NULL);
            gst_object_unref(sink_pad);
        }
    }

    void start() {
        gst_element_set_state(pipeline, GST_STATE_PLAYING);
    }

private:
    GstElement *pipeline, *compositor, *sink;
};

int main() {
    gst_init(NULL, NULL);

    VideoPipeline pipeline;
    pipeline.add_camera_branch(0, {1920, 1080, 0, 0, 1.0});
    pipeline.add_camera_branch(1, {640, 480, 1920, 0, 0.8});

    pipeline.start();

    // Let the initial layout run briefly before switching; without a wait
    // (or a main loop) the process would exit before any buffers flow.
    g_usleep(3 * G_USEC_PER_SEC);

    std::vector<CameraLocation> new_layout = {
        {1280, 720, 0, 0, 1.0},
        {1280, 720, 1280, 0, 0.8},
    };
    pipeline.update_layout(new_layout);

    g_usleep(3 * G_USEC_PER_SEC);
    return 0;
}

Thanks for providing the MRE.
While trying to repro your case (I don't have an Argus camera available for my AGX Orin, so I'll have to simulate, and I won't be available for that in the next few days), you may also have a look at this post about dynamic compositor sources:

It seems you're trying to stream raw composed video over UDP without any format conversion, encoding, or container. As a first step, try displaying locally if possible.

Thank you for the reply. I have since gotten closer to achieving my goal but am still running into a major problem. I am able to switch between the different view presets quite rapidly (~1s) but the live camera feed stops streaming. At a high-level, I am blocking pads for each camera source, updating properties using g_object_set(), reconfiguring, and then I stop blocking the pads. It appears that the nvarguscamerasrc element is particularly sensitive to such disruptions and may stop streaming entirely. Any way to get around this?

Here is how I’m doing it:

bool ZedVideoPipeline::update_layout(const std::vector<CameraLocation> &camera_locations)
    {
        // Block each compositor sink pad, remembering the probe id that
        // gst_pad_add_probe() returns: that id, not the probe-type flags,
        // is what gst_pad_remove_probe() expects later.
        std::vector<std::pair<GstPad*, gulong>> blocked_pads;
        for (size_t i = 0; i < pipeline_branches.size(); ++i) {
            if (pipeline_branches[i]->compositor_sink_pad) {
                GstPad* pad = pipeline_branches[i]->compositor_sink_pad;
                gulong probe_id = gst_pad_add_probe(
                    pad, GST_PAD_PROBE_TYPE_BLOCK_DOWNSTREAM,
                    [](GstPad *, GstPadProbeInfo *, gpointer) -> GstPadProbeReturn {
                        return GST_PAD_PROBE_OK;  // keep the pad blocked
                    },
                    nullptr, nullptr);
                blocked_pads.emplace_back(pad, probe_id);
            }
        }

        // Note: blocking is asynchronous; a fixed sleep does not guarantee
        // that every pad has actually gone idle yet.
        usleep(10000);

        for (size_t branch_idx = 0; branch_idx < pipeline_branches.size(); ++branch_idx) {
            if (branch_idx < camera_locations.size()) {
                const auto &location = camera_locations[branch_idx];
                pipeline_branches[branch_idx]->update_layout(location);
            } else {
                // Park unused branches as a 1x1, fully transparent window.
                CameraLocation hidden_location = {1, 1, 0, 0, 0.0};
                pipeline_branches[branch_idx]->update_layout(hidden_location);
            }
        }

        GstEvent *reconfigure_event = gst_event_new_reconfigure();
        if (!gst_element_send_event(compositor, reconfigure_event)) {
            std::cout << "WARNING: Failed to send reconfigure event (non-critical)" << std::endl;
        }

        // Remove each probe by its recorded id. Passing
        // GST_PAD_PROBE_TYPE_BLOCK_DOWNSTREAM (numerically 114) here instead
        // of the id fails, leaving every pad blocked permanently -- which
        // matches the "has no probe with id 114" warning and a frozen feed.
        for (size_t i = 0; i < blocked_pads.size(); ++i) {
            gst_pad_remove_probe(blocked_pads[i].first, blocked_pads[i].second);
        }

        return true;
    }
bool ZedVideoPipelineBranch::update_layout(const CameraLocation &location)
    {
        if (!compositor_sink_pad) {
            std::cout << "ERROR: compositor_sink_pad is null, cannot update layout" << std::endl;
            return false;
        }

        if (!GST_IS_PAD(compositor_sink_pad)) {
            std::cout << "ERROR: compositor_sink_pad is not a valid GstPad" << std::endl;
            return false;
        }

        // Guard against zero-sized windows before applying the new layout.
        auto safe = location;
        if (safe.width == 0) safe.width = 640;
        if (safe.height == 0) safe.height = 320;

        // Cast to gint so g_object_set() varargs match the pad property types.
        g_object_set(compositor_sink_pad,
                     "xpos", (gint)safe.x_pos,
                     "ypos", (gint)safe.y_pos,
                     "width", (gint)safe.width,
                     "height", (gint)safe.height,
                     "alpha", safe.alpha,
                     NULL);
        gst_pad_mark_reconfigure(compositor_sink_pad);
        return true;
    }

I have also tried updating the layout without pad blocking, but that resulted in the live feed freezing and a failure to switch layouts at all.

The only warning I am getting is: (zed_pipeline:59041): GStreamer-WARNING **: 17:24:30.048: gstpad.c:1573: pad `0x4ef87f0' has no probe with id `114'

Is this fundamentally a problem with my approach or are there inherent limitations using nvarguscamerasrc for live video feed and dynamic layout manipulation?