Two independent flows in the same pipeline

Hi, I’m trying to write a plugin with source elements for grabbing live viewfinder video as well as full resolution snapshots from a DSLR camera. It adds two new source elements:

edsdkimgsrc - full resolution snapshots (image/jpeg etc.)
edsdklivesrc - low resolution live viewfinder image stream (image/jpeg)

These two sources work well independently, i.e. the following two pipelines work:

gst-launch-1.0 edsdkimgsrc ! jpegdec ! videoscale ! video/x-raw,width=1152,height=768 ! videoconvert ! autovideosink

gst-launch-1.0 edsdklivesrc ! jpegdec ! videoconvert ! queue ! autovideosink

edsdklivesrc produces buffers continuously at 10 fps or so.
edsdkimgsrc only produces buffers when the camera is triggered.

I also want to run the two sources simultaneously, for example to save full resolution snapshots to disk while livestreaming the viewfinder video to some other sink for inspection.

This is the multi-flow pipeline I’m trying to run, using gst-launch-1.0:

gst-launch-1.0 edsdkimgsrc ! jpegdec ! videoscale ! video/x-raw,width=1152,height=768 ! videoconvert ! autovideosink edsdklivesrc ! jpegdec ! videoconvert ! queue ! autovideosink

Note that due to constraints of the camera SDK, these sources need to share a common camera context, which is set up using GStreamer’s existing context sharing features.

Both edsdkimgsrc and edsdklivesrc are configured as live source elements based on GstPushSrc. When the pipeline is started and no image is received from edsdkimgsrc, the edsdklivesrc flow stops running after a short time. If the shutter button is pressed and edsdkimgsrc produces an image, the live flow starts running again after a delay roughly equal to the time between launching the pipeline and pressing the shutter button. Once a single image has been produced by edsdkimgsrc, the edsdklivesrc flow keeps going even though no more buffers arrive from edsdkimgsrc.

I’m guessing this has something to do with time synchronization, or that the pipeline doesn’t enter the appropriate state until edsdkimgsrc produces its first buffer.

Any suggestions about how to make this work?

Would the camera context be shared if you stuck it in a simple Python/C/Rust app that was basically nothing more than just the two separate gst_parse_launch() pipelines?

I don’t think the automatic, query-based method of letting elements share a common context works across elements in separate pipelines. But if you’re using these elements from code rather than via gst-launch, I think you can manually create the context and pass it to each separate pipeline using API calls. I might give this a go… I still want to be able to use these elements using a gst-launch command line though.

I think the issue is that the pipeline doesn’t become PREROLLED without a buffer being pushed out by the source element. For a non-live source element that only generates buffers sporadically, how can we make the pipeline become PREROLLED and enter state PLAYING without sending an initial buffer? I tried sending a GAP event down the pipeline when starting up, and that opened up an empty autovideosink window, but then nothing worked after that, even when pressing the camera shutter button.