Dynamically changing rtpbin latencies

Hi hackers!

We’re using rtpbin with ntp-sync=true and buffer-mode=synced for synchronizing streams in our system. The timestamps we carry are from capture time on cameras or encoders.

We are looking into dynamically adjusting the latency property based on measured end-to-end latency to adapt to varying network conditions, keeping our overall distributed pipeline latency as low as possible.

I wonder where the dragons are with this approach? It seems packets already queued in the jitter buffer are not rescheduled when you update latency on a running pipeline?

Looking at gstrtpjitterbuffer.c, the timeout_offset() function includes latency_ns in its calculation:

static inline GstClockTimeDiff
timeout_offset (GstRtpJitterBuffer * jitterbuffer)
{
  GstRtpJitterBufferPrivate *priv = jitterbuffer->priv;
  return priv->ts_offset + priv->out_offset + priv->latency_ns;
}

When PROP_TS_OFFSET is set, update_timer_offsets() is called to reschedule existing packets. However, when PROP_LATENCY is set, update_timer_offsets() does not appear to be called, even though latency_ns is part of timeout_offset().

Am I completely wrong or … is this something that could be changed or is it by design?

All the best
Jonas

What we are trying to avoid is having static minimum latencies configured in the different nodes of our distributed system; instead we want each node to run as close to the actual network latency as possible.

Dynamic latency is not something currently supported by rtpjitterbuffer at all. You would likely need a differently designed jitterbuffer and latency handling in GStreamer to support this.

Thanks!

So, do you, or anyone else have some idea for how we should design this?

We have a bunch of nodes in a cluster making up a distributed processing pipeline … pretty much:

┌────────────┐  SRT  ┌───────────┐  RTP    ┌───────────┐  RTP   ┌──────────┐
│  Camera/   │ ────► │ Ingestion │ ────►   │ Processing│ ─────► │ Consumer │
│  Encoder   │       │   Node    │         │  Nodes    │        │  Nodes   │
└────────────┘       └───────────┘         └───────────┘        └──────────┘

At each step we use rtpbin for receiving and sending RTP between the nodes, and today we need(?) to set some static latencies on the rtpjitterbuffers to make this work well.

The problem is that this prevents us from going as low latency as we want, since we need to account for worst case. Ingestion can vary a lot, and we might construct this jumble of nodes in different configurations where some are capable of much lower latencies than others.

So what options do we have? All of this runs in the same cloud domain, so maybe jitterbuffers are not needed for us at all? But I guess there is no way to have an rtpbin without them?

Should we roll our own rtpbin without jitterbuffer? Or is there some very simple solution to all this that we are missing?

All the best
Jonas