We have a C++ application that receives a video stream from a camera, applies some processing to the video frames, displays the processed stream in our GUI (a Qt application), and saves the processed stream to an MP4 file on demand.
To do that, at startup we have two pipelines:
Receive pipeline: udpsrc -> rtpvrawdepay -> appsink
Render pipeline: appsrc -> videoconvert -> videocrop -> tee -> queue -> d3d11videosink
And when the user asks to save the video, we dynamically add a branch to the tee, so the render pipeline looks like this:
appsrc -> videoconvert -> videocrop -> tee -> queue -> d3d11videosink
                                       |-> queue -> capsfilter -> x264enc -> mp4mux -> filesink
We have a callback on “new-sample” from the receive pipeline to process frames, and we use gst_video_overlay_set_window_handle to display the video in our Qt widget.
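For reference, a minimal sketch of how we attach the recording branch at runtime (element names are illustrative; the capsfilter and error handling are omitted; gst_element_request_pad_simple is the 1.20+ name, older versions use gst_element_get_request_pad):

    // Sketch: request a new src pad on the tee, build the branch,
    // link it, and sync the new elements with the running pipeline.
    static void add_record_branch(GstElement *pipeline, GstElement *tee,
                                  const gchar *location)
    {
        GstElement *queue = gst_element_factory_make("queue", nullptr);
        GstElement *enc   = gst_element_factory_make("x264enc", nullptr);
        GstElement *mux   = gst_element_factory_make("mp4mux", nullptr);
        GstElement *sink  = gst_element_factory_make("filesink", nullptr);
        g_object_set(sink, "location", location, nullptr);

        gst_bin_add_many(GST_BIN(pipeline), queue, enc, mux, sink, nullptr);
        gst_element_link_many(queue, enc, mux, sink, nullptr);

        // Connect the new branch to the tee.
        GstPad *tee_pad  = gst_element_request_pad_simple(tee, "src_%u");
        GstPad *sink_pad = gst_element_get_static_pad(queue, "sink");
        gst_pad_link(tee_pad, sink_pad);
        gst_object_unref(sink_pad);

        // Bring the new elements up to the running pipeline's state.
        gst_element_sync_state_with_parent(queue);
        gst_element_sync_state_with_parent(enc);
        gst_element_sync_state_with_parent(mux);
        gst_element_sync_state_with_parent(sink);
    }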
All is OK, except that at startup the frame rate is around 17 instead of 25 (the camera runs at 25 fps). If saving is started (i.e. the branch is added to the tee), the frame rate reaches the expected 25 fps and stays there even after recording is stopped (branch removed).
We can’t figure out how to get the frame rate to 25 at startup.
Did you check the properties of the x264enc element? Maybe this element somehow forces the pipeline to run at 25 fps, whereas the pipeline without the encoder just drops frames?
If someone has other leads, I’m interested in this problem too!
The framerate in GStreamer is primarily determined by the data flow. You get as many frames per second as frames get captured/depayloaded/decoded/etc per second (irrespective of what caps might say in most cases).
So if you’re getting fewer frames than expected you need to figure out where the data gets dropped - presumably in the receiver/capture pipeline somewhere?
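One way to narrow that down (a sketch, not from the original post) is to attach buffer probes at a few points and count buffers per second; `depay` is a placeholder for whichever element you want to inspect:

    // Sketch: count buffers passing a pad to measure the actual rate
    // at that point in the pipeline.
    static GstPadProbeReturn
    count_buffer(GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
    {
        guint *counter = static_cast<guint *>(user_data);
        (*counter)++;  // sample and reset this once per second to get fps
        return GST_PAD_PROBE_OK;
    }

    // ... at setup time:
    static guint counter = 0;
    GstPad *pad = gst_element_get_static_pad(depay, "src");
    gst_pad_add_probe(pad, GST_PAD_PROBE_TYPE_BUFFER,
                      count_buffer, &counter, nullptr);
    gst_object_unref(pad);

Comparing the counts before and after each element tells you where frames start disappearing.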
I see that you have a queue max-size-buffers=25 there. You probably want queue max-size-buffers=25 max-size-time=0 max-size-bytes=0, otherwise the first limit to be hit applies, and frames get dropped because the queue is configured as leaky. Since you have raw video frames you might hit the bytes limit (10 MB by default) quite quickly.
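In code that would be roughly (assuming `queue` points to that element):

    // 0 disables the time and byte limits, so only the buffer-count
    // limit applies and raw frames don't trip the 10 MB default.
    g_object_set(queue,
                 "max-size-buffers", 25,
                 "max-size-time",    (guint64) 0,
                 "max-size-bytes",   0,
                 nullptr);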
If you have a tee element in your pipeline, it will just forward data as it comes in. If 25 frames per second come in, it will forward 25 frames per second to both branches.
I see you have a capsfilter in your x264enc branch - that shouldn’t really be needed, and in your case it looks plain wrong actually: The incoming data has format=GBRA_12LE (presumably something negotiated with the videosink), so both branches of the tee will receive that format. The caps filter in your encoder branch should make the pipeline error out with a not-negotiated error.
You might want to put tee ! queue ! videoconvert ! video/x-raw,format=I420 ! x264enc tune=zerolatency … in your recording branch; that will convert it appropriately.
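Built programmatically, the conversion part of that branch might look like this (a sketch; creation of the other elements and the linking are omitted):

    // videoconvert ! video/x-raw,format=I420 ! x264enc tune=zerolatency
    GstElement *convert = gst_element_factory_make("videoconvert", nullptr);
    GstElement *capsf   = gst_element_factory_make("capsfilter", nullptr);
    GstElement *enc     = gst_element_factory_make("x264enc", nullptr);

    GstCaps *i420 = gst_caps_new_simple("video/x-raw",
                                        "format", G_TYPE_STRING, "I420",
                                        nullptr);
    g_object_set(capsf, "caps", i420, nullptr);
    gst_caps_unref(i420);

    // gst_util_set_object_arg() parses the string into the
    // enum/flags value of the property.
    gst_util_set_object_arg(G_OBJECT(enc), "tune", "zerolatency");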
PS: in git main / 1.23.1 dev release there’s also a qml6d3d11sink now for what it’s worth.
Indeed, we had a problem with our caps: we built our rendering pipeline with the caps “video/x-raw, format=RGB, width=1920, height=1080, framerate=25/1”, but when the stream started these caps were updated (because the frame size can sometimes differ) to “video/x-raw, format=RGB, width=1920, height=1080”. The framerate field was missing; adding it back fixed the problem.
Also, we fixed the queue parameters to max-size-buffers=25 max-size-time=0 max-size-bytes=0.
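For reference, setting these caps (with the framerate field) on the appsrc looks roughly like this:

    // Include framerate so it no longer gets lost when the caps are
    // rebuilt for a different frame size.
    GstCaps *caps = gst_caps_new_simple("video/x-raw",
                                        "format",    G_TYPE_STRING, "RGB",
                                        "width",     G_TYPE_INT, 1920,
                                        "height",    G_TYPE_INT, 1080,
                                        "framerate", GST_TYPE_FRACTION, 25, 1,
                                        nullptr);
    g_object_set(appsrc, "caps", caps, nullptr);
    gst_caps_unref(caps);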
On the other hand, we didn’t update the record branch to match:
tee ! queue ! videoconvert ! video/x-raw,format=I420 ! x264enc tune=zerolatency …
To set “video/x-raw,format=I420”, we have to use a capsfilter, no? Why add videoconvert? Currently, we don’t get any errors from our pipeline.
We’ll keep in mind that qml6d3d11sink now exists, but an upgrade to 1.23.x is not planned yet.
A capsfilter will only restrict/enforce a certain format at a specific point, which results either in buffers in the right format passing through, or in a not-negotiated error being reported if the buffers’ caps do not match.
You need a videoconvert before the capsfilter to make sure that if your buffers are in a different format they get converted to I420.
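You can see the difference with two test pipelines (an illustrative sketch; the first one errors out once you try to start it):

    GError *err = nullptr;

    // RGB buffers meet an I420 capsfilter with nothing to convert
    // them: negotiation fails when the pipeline is started.
    GstElement *fails = gst_parse_launch(
        "videotestsrc ! video/x-raw,format=RGB ! "
        "video/x-raw,format=I420 ! fakesink", &err);

    // With videoconvert in between, buffers are converted to I420
    // and the pipeline runs.
    GstElement *works = gst_parse_launch(
        "videotestsrc ! video/x-raw,format=RGB ! videoconvert ! "
        "video/x-raw,format=I420 ! fakesink", &err);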
OK, we added a videoconvert before the capsfilter. Indeed, it seems the frames were rendered incorrectly without this videoconvert.
One thing we don’t understand is why/how adding the record branch affects the display branch: it affected the framerate in our first problem, and it affects frame rendering now. Is the same buffer reference shared between the two branches?
Also, videoconvert introduces lag; how can we optimize our branches to avoid this?
I’m continuing this old thread with some further requirements.
So, following the previous answers, the pipelines now look like this:
Receive pipeline: udpsrc -> rtpvrawdepay -> appsink
Render pipeline with recording:
appsrc -> tee -> queue -> videoconvert -> videocrop -> d3d11videosink
          |-> queue -> videoconvert -> videocrop -> capsfilter -> x264enc -> mp4mux -> filesink
with the record queue configured as queue max-size-buffers=1 leaky=2.
All is OK: the display runs at 25 fps, matching the camera frame rate. The “problem” is the MP4 file: sometimes (depending on the computer our software runs on) its frame rate is < 25 because the record branch drops some frames. I understand that this is normal given the parameters of our pipeline. But we would like the final MP4 file to be at 25 fps, i.e. no frames dropped. If we set the leaky property of the record queue to 0, 25 fps is reached but it causes lag on the live display. Increasing max-size-buffers to 25 or 50 does not change anything: some frames still get dropped because the queue is full (maybe something to investigate, but how?). Is it possible to record without dropping frames with our architecture?
Correct. queue max-size-buffers=1 leaky=2 is quite optimistic; you’re basically at the mercy of the scheduler here. I’m not sure it actually makes sense to have such a low max-size-buffers value.
What do you mean by “involves lags on live-stream (display)”? How do you determine/measure the lag? Lag of what compared to what?
Is mp4mux outputting ‘normal’ mp4 (which would only be readable on EOS) or fragmented mp4 or something else? (e.g. robust mode or somesuch)
queue max-size-buffers=1 leaky=2 is quite optimistic; you’re basically at the mercy of the scheduler here. I’m not sure it actually makes sense to have such a low max-size-buffers value.
=> Yes, but increasing max-size-buffers does not change anything. For now I have set max-size-buffers=3, but I don’t know whether that is really optimal.
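For reference, the record-queue configuration being discussed looks roughly like this in code (values illustrative):

    // leaky: 0 = never drop, 2 = drop the oldest buffers when full.
    // leaky=0 keeps every frame for recording, at the cost of
    // back-pressure on the tee if the branch cannot keep up.
    g_object_set(record_queue,
                 "leaky",            0,
                 "max-size-buffers", 25,
                 "max-size-time",    (guint64) 0,
                 "max-size-bytes",   0,
                 nullptr);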
What do you mean by “involves lags on live-stream (display)”? How do you determine/measure the lag? Lag of what compared to what?
=> The lag is visible: I move my camera and the display doesn’t follow the movement immediately.
Is mp4mux outputting ‘normal’ mp4 (which would only be readable on EOS) or fragmented mp4 or something else? (e.g. robust mode or somesuch)
=> mp4mux outputs ‘normal’ MP4, i.e. readable only on EOS.
Note that I added a videorate element to my record branch. It achieves exactly the expected frame rate by duplicating frames, but I feel this is not optimal.
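The record branch with videorate looks roughly like this (a sketch; the caps values and filesink location are illustrative):

    tee ! queue ! videoconvert ! videorate !
        video/x-raw,format=I420,framerate=25/1 ! x264enc tune=zerolatency !
        mp4mux ! filesink location=out.mp4

videorate duplicates or drops frames as needed so that a constant 25 fps stream enters x264enc.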
Even if I increase max-size-buffers to 25 or 50, frames are dropped because the queue is full and stays full: how can I investigate where the problem occurs?