Latency and Synchronization

Hi guys,
I’m having a hard time wrapping my head around timestamp synchronization in GStreamer.

If I understand correctly:

  • A global pipeline latency is calculated before the pipeline even goes into the PLAYING state
  • The latency is the sum of the minimum reported latencies of all elements in the pipeline
  • To know when to present a frame, the sink has to add the pipeline latency to the buffer PTS (render time = latency + PTS), roughly as sketched below
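
To check whether I’m reading this right, here is the rough mental model I have of the sink’s decision (just an illustration with made-up numbers, not the actual GstBaseSink code):

/* Rough mental model only, not the actual GstBaseSink code.
 * All times are GstClockTime values in nanoseconds. */
#include <gst/gst.h>

static GstClockTime
render_time_for_buffer (GstClockTime base_time,     /* clock time when the pipeline went to PLAYING */
                        GstClockTime segment_start, /* usually 0 in a simple live pipeline */
                        GstClockTime pts,           /* buffer PTS */
                        GstClockTime latency)       /* configured pipeline latency */
{
  /* running time: how far into playback this buffer belongs */
  GstClockTime running_time = pts - segment_start;

  /* the sink waits on the pipeline clock until this absolute clock time */
  return base_time + running_time + latency;
}

int main (void)
{
  /* made-up numbers: pipeline started at clock time 1 s,
   * buffer PTS = 100 ms, pipeline latency = 30 ms */
  GstClockTime t = render_time_for_buffer (1 * GST_SECOND, 0,
      100 * GST_MSECOND, 30 * GST_MSECOND);
  g_print ("render at clock time %" GST_TIME_FORMAT "\n", GST_TIME_ARGS (t));
  return 0;
}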

What I don’t understand:
Is this still the same latency that was calculated at the beginning of the program?
I cannot help but see the global latency as the “best-case latency”. Wouldn’t the buffers then always arrive late, with the “real and random” latency being larger (running time > global latency + PTS)?

I am working with 2 live sources (v4l2src) and a compositor, if that helps.

I’d appreciate any input, thanks!


Hi,

Your understanding is almost correct:

  • the pipeline latency is computed as the maximum of the minimum reported latencies from each sink (i.e. from each branch of the pipeline)

The minimum means the minimum latency at which the elements are guaranteed to produce data; the maximum is how much buffering they have.
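
If you want to see what a pipeline actually settled on, you can run a latency query against it once it has reached PLAYING. A minimal sketch (the pipeline string is only a placeholder, use your own):

#include <gst/gst.h>

int main (int argc, char **argv)
{
  gst_init (&argc, &argv);

  /* placeholder pipeline, replace with your own */
  GstElement *pipeline = gst_parse_launch (
      "videotestsrc is-live=true ! queue ! autovideosink", NULL);

  gst_element_set_state (pipeline, GST_STATE_PLAYING);
  /* block until the state change is done, so the latency has been configured */
  gst_element_get_state (pipeline, NULL, NULL, GST_CLOCK_TIME_NONE);

  GstQuery *query = gst_query_new_latency ();
  if (gst_element_query (pipeline, query)) {
    gboolean live;
    GstClockTime min_latency, max_latency;
    gst_query_parse_latency (query, &live, &min_latency, &max_latency);
    g_print ("live=%d min=%" GST_TIME_FORMAT " max=%" GST_TIME_FORMAT "\n",
        live, GST_TIME_ARGS (min_latency), GST_TIME_ARGS (max_latency));
  }
  gst_query_unref (query);

  gst_element_set_state (pipeline, GST_STATE_NULL);
  gst_object_unref (pipeline);
  return 0;
}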

This latency is only the “algorithmic” latency, for example when an element needs to accumulate a certain amount of data before it can produce an output. For the “processing” latency, which corresponds to CPU time, scheduling, etc., the sinks have a “processing-deadline”: a deadline that you set as the programmer, since GStreamer can’t really know the real-time capacity of the system.
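
For example, assuming a GstBaseSink-based video sink (the 40 ms value below is purely an illustration; the default is 20 ms):

#include <gst/gst.h>

int main (int argc, char **argv)
{
  gst_init (&argc, &argv);

  /* placeholder pipeline, adjust to your own setup */
  GstElement *pipeline = gst_parse_launch (
      "videotestsrc is-live=true ! queue ! xvimagesink name=sink", NULL);
  GstElement *sink = gst_bin_get_by_name (GST_BIN (pipeline), "sink");

  /* allow 40 ms of processing/scheduling time on top of the reported
   * (algorithmic) latency before buffers are considered late */
  g_object_set (sink, "processing-deadline", 40 * GST_MSECOND, NULL);

  gst_object_unref (sink);
  /* ... set the pipeline to PLAYING and run your main loop as usual ... */
  gst_object_unref (pipeline);
  return 0;
}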

Hey, thanks for the input! It was an eye-opener. I think my problem was mixing up the latency of an element with the latency of the pipeline.

This latency is only the “algorithmic” latency, for example when an element needs to accumulate a certain amount of data before it can produce an output. For the “processing” latency, which corresponds to CPU time, scheduling, etc., the sinks have a “processing-deadline”: a deadline that you set as the programmer, since GStreamer can’t really know the real-time capacity of the system.

So under normal conditions the actual latency never exceeds the pipeline latency, since it’s calculated as the time needed to give all elements a chance to reliably produce data, correct?

And is the “gst_base_sink_is_too_late: There may be a timestamping problem, or this computer is too slow.” message then actually caused by exceeding the “processing-deadline”, or by a combination of both latencies?

Thank you so much!

Yes, when you get that “gst_base_sink_is_too_late: There may be a timestamping problem, or this computer is too slow.” message, it’s supposed to mean that the real processing time exceeded the configured processing-deadline. You can increase the processing-deadline if you suspect that the computer is just a little too slow. That said, in my experience it often means that there is a bug somewhere: other elements not properly reporting their latency, or elements accumulating latency in ways they shouldn’t.
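
One thing worth double-checking in that case is that your application re-distributes the latency when an element reports a change, by handling the LATENCY message on the bus. A minimal sketch (names are placeholders):

#include <gst/gst.h>

/* attach with: gst_bus_add_watch (gst_element_get_bus (pipeline),
 *                                 on_bus_message, pipeline); */
static gboolean
on_bus_message (GstBus *bus, GstMessage *msg, gpointer user_data)
{
  GstElement *pipeline = GST_ELEMENT (user_data);

  if (GST_MESSAGE_TYPE (msg) == GST_MESSAGE_LATENCY) {
    /* an element reported a new latency: query the sinks again and
     * re-distribute the result over the whole pipeline */
    gst_bin_recalculate_latency (GST_BIN (pipeline));
  }
  return G_SOURCE_CONTINUE;
}

The latency tracer (GST_TRACERS=latency with GST_DEBUG=GST_TRACER:7) can also help to see what each element actually reports.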

Hi ocrete,

So I’m working with this pipeline:

v4l2src ! a lot of processing ! comp.sink_0
v4l2src ! a little processing ! comp.sink_1
compositor ! textoverlay ! sink

I initially wanted to understand, chronologically, how GStreamer synchronizes timestamped buffers: the whole process from the moment a buffer is captured, through the processing/accumulation latency, up until the final presentation to the user. My actual goal is to analyse the worst-case time difference between the two compositor windows.

If I understand correctly so far, the latency would not affect this (it would not make the time difference drift bigger or smaller), so it comes down to the times at which the frames were captured.

Ideally, when both v4l2srcs capture at the same time, the time difference would be 0 ms. But if we keep one v4l2src “chronologically” in place as a reference and shift the other one forward/backward along the time axis, I think (correct me if I’m wrong) the maximum time difference you could sample at any moment would be one period, so 0 ms <= delta <= 33 ms.

I’ve observed that, at least for me on an NVIDIA Jetson, the capture rate is not constant:
For 30 FPS, instead of a constant 33 ms, the period varies between 32 and 36 ms.
For 60 FPS, instead of a constant 16.67 ms, the period varies between 16 and 20 ms.

The plots show the following values:
src_diff is the difference between the PTS of the current frame and the previous frame → the period
pts_diff is the difference between the current PTS of v4l2src0 and the current PTS of v4l2src1 → the delta

As can be seen in the plots, GStreamer does regulate the timestamps. But I am honestly quite surprised and confused to see these triangle patterns in the plots :sweat_smile:
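
(For context, this is roughly how such timestamps can be logged: one buffer probe per branch, e.g. on the two compositor sink pads. The element/pad names below are placeholders and my actual measurement code may differ slightly.)

#include <gst/gst.h>

/* one probe per branch, distinguished via user_data */
static GstPadProbeReturn
pts_probe (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  static GstClockTime last_pts[2] = { GST_CLOCK_TIME_NONE, GST_CLOCK_TIME_NONE };
  gint branch = GPOINTER_TO_INT (user_data);
  GstBuffer *buf = GST_PAD_PROBE_INFO_BUFFER (info);
  GstClockTime pts = GST_BUFFER_PTS (buf);

  if (GST_CLOCK_TIME_IS_VALID (last_pts[branch]))
    g_print ("branch %d src_diff %" GST_TIME_FORMAT "\n",
        branch, GST_TIME_ARGS (pts - last_pts[branch]));
  last_pts[branch] = pts;

  /* pts_diff (the delta between the two branches) is then computed
   * offline from the logged values */
  return GST_PAD_PROBE_OK;
}

/* attached with something like (once the compositor pads exist):
 *   GstPad *pad = gst_element_get_static_pad (compositor, "sink_0");
 *   gst_pad_add_probe (pad, GST_PAD_PROBE_TYPE_BUFFER,
 *       pts_probe, GINT_TO_POINTER (0), NULL);
 * and the same for "sink_1" with GINT_TO_POINTER (1). */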

  1. Is it normal that the v4l2src capture period varies by a couple of milliseconds?
  2. Could you give some insight into what is actually happening, or into how GStreamer regulates the time difference like this?

Thanks! Best regards!

Hi,

Do you have a queue just after your sources? If not, that could be the source of the variation you are seeing. The source threads are responsible not just for capturing but also for all the processing in the downstream elements, until the data reaches another queue (or some element that starts another streaming thread).

By placing a queue after your sources you ensure they are solely dedicated to capturing (at the right time).
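
Something along these lines, assuming the rest of the pipeline stays as it is (the device paths, the videoconvert placeholders standing in for your processing, and the sink are only examples):

#include <gst/gst.h>

int main (int argc, char **argv)
{
  gst_init (&argc, &argv);

  /* a queue directly after each live source, so the v4l2src streaming
   * threads only capture and hand buffers over; the heavier processing
   * then runs in the queues' own threads */
  GstElement *pipeline = gst_parse_launch (
      "compositor name=comp ! textoverlay ! autovideosink "
      "v4l2src device=/dev/video0 ! queue ! videoconvert ! comp.sink_0 "
      "v4l2src device=/dev/video1 ! queue ! videoconvert ! comp.sink_1",
      NULL);

  gst_element_set_state (pipeline, GST_STATE_PLAYING);
  /* ... run a GMainLoop and watch the bus as usual ... */
  return 0;
}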

It is somehow still the same after adding a queue right after the sources.
I also tried to recreate the pipeline with videotestsrc.
The triangle pattern is still there, but the variation is smaller → 33 … 34 ms instead of 32 … 36 ms

Are these small variations something that could indeed occur?
Or is there maybe something wrong with the way I calculate the graph?

thanks!

By the looks of things, something somewhere can only handle millisecond resolution for timing. So it can’t do 1/30 s (33.3333… ms) and instead produces the closest rounded millisecond.

Maybe v4l2 can only produce millisecond precision?
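
A quick way to sanity-check that theory (pure arithmetic, nothing GStreamer-specific): take ideal 1/30 s capture instants, round them to whole milliseconds, and look at the frame-to-frame differences; they alternate between 33 ms and 34 ms in exactly this kind of repeating pattern.

#include <stdio.h>
#include <math.h>

int main (void)
{
  /* ideal 30 fps capture instants, rounded to whole milliseconds */
  double period_ms = 1000.0 / 30.0;   /* 33.333... ms */
  long prev_ms = 0;

  for (int n = 1; n <= 12; n++) {
    long pts_ms = lround (n * period_ms);   /* millisecond-resolution timestamp */
    printf ("frame %2d: pts = %4ld ms, diff = %2ld ms\n",
        n, pts_ms, pts_ms - prev_ms);       /* prints 33, 34, 33, 33, 34, 33, ... */
    prev_ms = pts_ms;
  }
  return 0;
}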