In my application I use pad offsets to synchronize audio streams. These offsets are applied to the source pads linked to the sink pads of an audiomixer element.
Do I need to account for these offsets by increasing the latency property of the audiomixer? Or are pad offsets automatically taken into account into the pipeline latency?
You need to take them into account yourself if the addition of the pad offset actually increases the latency. It often doesn’t.
Additional question: does it matter whether the pad offset is on the mixer sinkpad, or on its peer?
Theoretically no, but in practice pad offsets on sink pads misbehave due to hard-to-fix bugs, so I would recommend putting the pad offset on a source pad.
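For illustration, roughly what that looks like; a minimal sketch where the queue is just a stand-in for whatever actually feeds the mixer, and the 40 ms is arbitrary:

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

pipeline = Gst.Pipeline.new("p")
queue = Gst.ElementFactory.make("queue", None)       # stand-in for the real input branch
mixer = Gst.ElementFactory.make("audiomixer", None)
pipeline.add(queue)
pipeline.add(mixer)

src_pad = queue.get_static_pad("src")
sink_pad = mixer.get_request_pad("sink_%u")          # audiomixer request pad
src_pad.link(sink_pad)

# Recommended: apply the offset on the source pad feeding the mixer ...
src_pad.set_offset(40 * Gst.MSECOND)
# ... rather than on the mixer's own sink pad:
# sink_pad.set_offset(40 * Gst.MSECOND)              # works on paper, problematic in practice
```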
When/why wouldn’t it? If the offset is not on the “critical path”? In my case I can expect each of the inputs to have nearly identical latency, so then delaying any of them would increase the effective latency, right?
I recall trying that and things not working, so this is very good to know, thanks!
How does min-upstream-latency factor into this? I set this to a worst-case amount. My inputs are SRT streams, and it looks like srtsrc actually runs at lower latency than its latency property indicates, as long as network conditions are good. Does the mixer always delay its output by min-upstream-latency, or is that only how long it is willing/able to buffer internally if needed?
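For reference, this is the property I mean; a minimal sketch, with 500 ms as my made-up worst case rather than a measured value:

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# audiomixer is a GstAggregator subclass, so it exposes both
# min-upstream-latency and latency.
mixer = Gst.ElementFactory.make("audiomixer", None)
mixer.set_property("min-upstream-latency", 500 * Gst.MSECOND)   # assumed worst case
print(mixer.get_property("min-upstream-latency"))               # 500000000 (nanoseconds)
```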
If you delay them you increase the running time of each buffer. By increasing the running time you effectively reduce the latency (or I am confused and need more coffee).
It doesn’t delay anything as long as buffers arrive early enough. It only sets a deadline by which buffers must arrive before the mixer times out and produces output anyway.
The value is taken into account by sinks and other elements that sync to the clock though. They will always delay by that much at least (unless one of the inputs of the mixer or any of the other streams going to any sink has a higher latency, then they delay by that much instead).
I apply only positive offsets (negative offsets didn’t work, the output just goes silent), so that adds to the running time of the buffers. That means they are due to be aggregated later, so the other inputs have to be buffered until then…? No, maybe you are right: delaying any input actually buys you extra time for the other inputs to arrive.
None of my sinks sync to the clock (hlssink, i.e. filesink, and rtpbin into udpsinks to another process), but there are multiple levels of mixers and muxers, which do sync to the clock in live mode if I understand correctly.
So if an upstream mixer has a min-upstream-latency already, I don’t need to also set it on a downstream mixer or muxer?
Because then they all arrive even later than they already do (they were supposed to be rendered before you even received the buffer), and more latency needs to be configured to give them more time.
No, they only use the latency as a deadline for timing out in live pipelines.
The running time of the buffers plus the latency makes up the deadline for timing out, and the element determines when that deadline is reached based on the pipeline clock.
I think the important part to understand about latency in GStreamer is that in a live pipeline the running time of a buffer is the capture time. So at that point the buffer is already exactly just in time, and delaying it any further would make it too late. The latency then gives an additional budget for the buffer to be processed further downstream until it arrives at its destination, so each element on the way (including the source, as it needs a moment from capture until it actually has the buffer ready to send downstream) adds its own latency to the (minimum!) latency in the latency query.
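As a rough mental model of that (plain arithmetic, not GStreamer API; the numbers are invented):

```python
# All values are running time in nanoseconds.
capture_running_time = 1_000_000_000   # live source: buffer running time == capture time
pipeline_latency     =   150_000_000   # budget for getting the buffer downstream

# An element that syncs to the clock (a sink, or a live aggregator deciding
# when to time out) waits until the clock's running time reaches:
deadline = capture_running_time + pipeline_latency

# A buffer that only shows up once the clock is already past that point is late:
arrival_running_time = 1_180_000_000
print("late" if arrival_running_time > deadline else "within the latency budget")
```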
When the latency is configured on a pipeline, all sinks are (by default) queried for their latency, and the maximum of all minimum latencies is then configured on the whole pipeline, so that streams with lower latency are delayed inside the sink by as much as necessary to keep them in sync with the stream with the highest latency.
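You can see what this amounts to by querying the pipeline; a minimal sketch, where the throwaway pipeline and fakesink are just stand-ins for a real setup:

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

pipeline = Gst.parse_launch(
    "audiotestsrc is-live=true ! audiomixer latency=20000000 ! fakesink sync=true"
)
pipeline.set_state(Gst.State.PLAYING)
pipeline.get_state(Gst.SECOND)          # wait for the state change to settle

query = Gst.Query.new_latency()
if pipeline.query(query):
    live, min_latency, max_latency = query.parse_latency()
    print("live:", live, "min:", min_latency, "max:", max_latency)

pipeline.set_state(Gst.State.NULL)
```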
The minimum latency in the latency query is also kind of confusingly named. It’s not the minimum latency that is introduced, but it is the maximum latency that is introduced in the worst case. It is the minimum latency that downstream has to compensate for to allow for buffers to be in time and not consider them all too late.
The maximum latency in the latency query, on the other hand, is the amount of buffering that is provided by the elements along the way. Delaying a stream to match a higher latency (e.g. if the video stream has 200ms latency and the audio stream has 100ms latency, you need to delay the audio by another 100ms to keep both streams in sync) requires buffering somewhere, so this always needs to be higher than the configured latency (the maximum of all minimum latencies).
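With toy numbers (plain arithmetic, not GStreamer API):

```python
# Nanoseconds; (worst-case latency, buffering available) per branch.
video_min, video_max = 200_000_000, 400_000_000
audio_min, audio_max = 100_000_000, 300_000_000

# The configured pipeline latency is the maximum of the minimum latencies.
pipeline_latency = max(video_min, audio_min)        # 200 ms

# The audio path is delayed by the difference to stay in sync with video,
# so the buffering it reported (its maximum latency) must cover that:
audio_extra_delay = pipeline_latency - audio_min    # 100 ms
assert audio_max >= pipeline_latency, "not enough buffering on the audio path"
```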
That’s good to hear, because I had convinced myself that was correct, then drove home thinking about it and found myself thinking “no wait, it’s the other way around, or is it?”
But let me think. If I have one input from, say, a live testsrc, that has no latency, so it sets the deadline for the rest. The mixer knows its upstream latency from the pipeline latency query (which can be “artificially” increased by min-upstream-latency), so it will queue the buffers from the testsrc while they wait for “matching” buffers on the other pads. The latency property configured on the mixer is “extra”, to account for imperfect timing upstream?
Now, if I set an offset on one of the inputs, those buffers also spend some time in the queue, because their running time is ahead of the other live inputs. But if (and only if) I were to set the offset greater than the upstream latency, I’ll be making buffers “FROM THE FUTURE”, and the mixer won’t provide enough queuing, so I need to increase the mixer latency?
It’s in addition, yes, to allow for wrongly reported latency from upstream. It will allow for buffers to arrive later than they should without discarding them (or whatever the aggregator subclass’ timeout behaviour is).
No, any positive offset would delay the buffers and they would be “from the future”. There would have to be enough possible buffering (see maximum latency from the query) upstream of the mixer (e.g. a big enough queue) to allow for delaying by that much.
The mixer’s latency property only allows for buffers to arrive later than the configured latency. You could make use of that when setting a negative pad offset to compensate for buffers all arriving late according to their timestamp and latency budget.
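A minimal sketch of that last idea (the 50 ms is an arbitrary example, and upstream_src_pad stands for whatever source pad feeds the mixer input in question):

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

EXTRA = 50 * Gst.MSECOND

# Allow buffers on the mixer inputs to arrive up to 50 ms after their normal
# deadline before the mixer times out and produces output without them.
mixer = Gst.ElementFactory.make("audiomixer", None)
mixer.set_property("latency", EXTRA)

# ... after linking the input branch's source pad into the mixer:
# upstream_src_pad.set_offset(-EXTRA)   # pull that input's running time back by 50 ms
```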