How to have the same frame rate when adding tee dynamically?


We have a C++ application that receives a video stream from a camera, applies some processing to the video frames, displays the processed stream in our GUI (a Qt application), and saves the processed stream to an MP4 file on demand.

To do that, we have two pipelines at start:
Recover pipeline: udpsrc -> rtpvrawdepay -> appsink
Render pipeline: appsrc -> videoconvert -> videocrop -> tee -> queue -> d3d11videosink
And when the user asks to save the video, we dynamically add a recording branch to the tee.
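In text form, the render pipeline with the recording branch attached is roughly the following (simplified sketch; the exact encoder/muxer settings and the filesink location are omitted/placeholders):

```
appsrc ! videoconvert ! videocrop ! tee name=t
  t. ! queue ! d3d11videosink
  t. ! queue ! x264enc ! mp4mux ! filesink location=capture.mp4
```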

And we have a callback on the appsink’s “new-sample” signal in the recover pipeline to process the frames. We also use gst_video_overlay_set_window_handle to display the video in our Qt widget.

Everything is OK, except that at start the frame rate is around 17 fps instead of 25 (the camera delivers 25 fps). Once saving is started (i.e. the branch is added to the tee), the frame rate reaches the expected 25 fps and stays at 25 even after recording is stopped (branch removed).
We cannot figure out how to get 25 fps from the start.

Recover pipeline:

Render pipeline without save :

Render pipeline with save :

(Environment: Windows 11 - GStreamer 1.22.6 - Qt C++)

Thanks for your help.

Can nobody help us?

Even just some advice on what to check or where to look would be welcome.

We tried adding some logs to debug. We only see that at start the appsrc drops frames, and it stops dropping when recording is activated.


Did you check the properties of the x264enc element? Maybe there is a way this element forces the pipeline to run at 25 fps, whereas the pipeline without the encoder just drops frames?

If someone has other leads, I’m interested in this problem too! :slight_smile:

The framerate in GStreamer is primarily determined by the data flow. You get as many frames per second as frames get captured/depayloaded/decoded/etc per second (irrespective of what caps might say in most cases).

So if you’re getting fewer frames than expected you need to figure out where the data gets dropped - presumably in the receiver/capture pipeline somewhere?

I see that you have a queue max-size-buffers=25 there. You probably want queue max-size-buffers=25 max-size-time=0 max-size-bytes=0, otherwise the first limit to be hit applies, and frames get dropped because the queue is configured as leaky. Since you have raw video frames you might hit the bytes limit (10 MB by default) quite quickly.

If you have a tee element in your pipeline, it will just forward data as it comes in. If 25 frames per second come in, it will forward 25 frames per second to both branches.

I see you have a capsfilter in your x264enc branch - that shouldn’t really be needed, and in your case it looks plain wrong actually: The incoming data has format=GBRA_12LE (presumably something negotiated with the videosink), so both branches of the tee will receive that format. The caps filter in your encoder branch should make the pipeline error out with a not-negotiated error.

You might want to put tee ! queue ! videoconvert ! video/x-raw,format=I420 ! x264enc tune=zerolatency … in your recording branch, that will convert it appropriately.
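If you build that branch in code rather than with gst-launch, a minimal sketch could look like this (assuming your existing `pipeline` and `tee` pointers; the encoder settings and filesink location are placeholders):

```cpp
#include <gst/gst.h>

// Sketch: attach the recording branch to a running pipeline.
// `pipeline` and `tee` are assumed to come from your existing setup.
static gboolean start_recording(GstElement *pipeline, GstElement *tee) {
    GError *err = NULL;
    GstElement *rec = gst_parse_bin_from_description(
        "queue ! videoconvert ! video/x-raw,format=I420 ! "
        "x264enc tune=zerolatency ! mp4mux ! filesink location=capture.mp4",
        TRUE /* ghost the unlinked queue sink pad */, &err);
    if (!rec) {
        g_printerr("failed to build recording bin: %s\n", err->message);
        g_clear_error(&err);
        return FALSE;
    }

    gst_bin_add(GST_BIN(pipeline), rec);
    // Bring the new branch up to the pipeline's state *before* linking,
    // so it does not hold the tee (and the display branch) back.
    gst_element_sync_state_with_parent(rec);

    GstPad *teepad = gst_element_request_pad_simple(tee, "src_%u");
    GstPad *sinkpad = gst_element_get_static_pad(rec, "sink");
    gboolean ok = gst_pad_link(teepad, sinkpad) == GST_PAD_LINK_OK;
    gst_object_unref(sinkpad);
    return ok;
}
```

(When you stop recording, release the requested tee pad again with gst_element_release_request_pad, ideally from a pad probe so you do not unlink mid-buffer.)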

PS: in git main / 1.23.1 dev release there’s also a qml6d3d11sink now for what it’s worth.


Thanks everybody for your replies.

Indeed, we had a problem with our caps: we built our render pipeline with the caps “video/x-raw, format=RGB, width=1920, height=1080, framerate=25/1”, but when launching the stream these caps were updated (because the frame size can sometimes differ) to “video/x-raw, format=RGB, width=1920, height=1080”. The framerate was missing; adding it back fixed the problem.
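For what it’s worth, we now rebuild the appsrc caps along these lines (a sketch with hypothetical names; the point is keeping the framerate field when the size changes):

```cpp
#include <gst/gst.h>

// Sketch: update appsrc caps after a frame-size change without
// dropping the framerate field. `appsrc`, `width`, `height` are assumptions.
static void update_appsrc_caps(GstElement *appsrc, int width, int height) {
    GstCaps *caps = gst_caps_new_simple("video/x-raw",
        "format", G_TYPE_STRING, "RGB",
        "width", G_TYPE_INT, width,
        "height", G_TYPE_INT, height,
        "framerate", GST_TYPE_FRACTION, 25, 1,  // do not drop this field
        NULL);
    g_object_set(appsrc, "caps", caps, NULL);
    gst_caps_unref(caps);
}
```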

Also, we fixed the queue parameters with max-size-buffers=25 max-size-time=0 max-size-bytes=0.

On the other hand, we didn’t update the record branch to follow:
tee ! queue ! videoconvert ! video/x-raw,format=I420 ! x264enc tune=zerolatency …
To set “video/x-raw,format=I420”, we have to use a capsfilter, no? Why add a videoconvert? We currently don’t get any errors on our pipeline.

We keep in mind that qml6d3d11sink now exists, but upgrading to 1.23.x is not planned yet.


A capsfilter will only restrict/enforce a certain format at a specific point, which will result either in buffers in the right format passing through, or a not-negotiated error being reported if the buffers’ caps do not match.

You need a videoconvert before the capsfilter to make sure that if your buffers are in a different format they get converted to I420.


OK, we added a videoconvert before the capsfilter. Indeed, it seems the frames were badly affected without this videoconvert.
One thing we don’t understand is why/how adding the record branch affects the display branch: it affected the framerate in our first problem, and it affects frame rendering now. Is the same buffer reference shared between the two branches?
Also, videoconvert induces lag; how can we optimize our branches to avoid this?