Webrtcsink latency

Hi,

I have a question (again :slight_smile: ) about webrtcsink and how to reduce the latency when sending content.
Basically I have a pipeline like this:
pipeline = Gst.parse_launch(f"webrtcsink name=ws meta=\"meta,name=stream\" appsrc name=camera ! queue ! h264parse ! ws.")
I am feeding H.264 packets from my camera directly into webrtcsink, all on the same computer (localhost). With that I get ~300ms of delay, measured by filming a timer with the camera and displaying the stream in my custom client.
If, instead of webrtcsink, I use a custom producer based on aiortc, I get ~130ms (same signalling server, same consumer client). So I assume something on the webrtcsink side is adding latency.

I had to add a queue, otherwise I get this warning:

0:00:00.232523955 45429 0x3fc14c0 WARN basesink gstbasesink.c:1249:gst_base_sink_query_latency: warning: Pipeline construction is invalid, please add queues.

Using queue2 instead does not remove the warning, by the way. I’ve played with various queue parameters without success (the latency remains the same). I also always get this warning when streaming starts:

0:00:18.324681033 45787 0x7f25f8061400 WARN webrtctransportsendbin transportsendbin.c:457:gst_transport_send_bin_element_query: did not really configure latency of 0:00:00.000000000

Not sure what it means.

Finally, I tried setting the latency property of webrtcbin, which forwards it to rtpbin if I understand correctly. For testing purposes I set the value directly in the webrtcsink source code (Rust plugin). Nothing changes, whatever value I set (more or less than the default 200ms).
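
For reference, instead of patching the source, something like this should also let me set it from the application, assuming this webrtcsink version exposes the consumer-added signal with this signature:

    # Hedged sketch: webrtcsink emits "consumer-added" with the peer id and the
    # webrtcbin it created for that consumer. webrtcbin's "latency" property is
    # in milliseconds and is forwarded to rtpbin.
    ws = pipeline.get_by_name("ws")

    def on_consumer_added(sink, peer_id, webrtcbin):
        webrtcbin.set_property("latency", 100)  # instead of the 200ms default

    ws.connect("consumer-added", on_consumer_added)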

Any idea how to control the latency on the webrtcsink side?

Thanks

That’s expected. You would add a queue of a suitable size to compensate for the latency difference between the different streams (e.g. if your audio stream has 500ms of latency and the video stream 200ms, then the video stream needs at least an additional 300ms of buffering). This will not increase latency. See this explanation about min/max latency and how latency works in GStreamer in general.

That means that at this point at least latency configuration was not successful (the LATENCY event was rejected by upstream). The debug logs will contain more information about what went wrong there.

Most likely you’ll get a LATENCY message on the bus at a later time, and then via gst_bin_recalculate_latency() it should end up with a valid latency configuration. Or possibly even without you having to do anything at all.
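
A minimal sketch of that in the application (Python bindings, assuming a GLib main loop is running so the bus signal watch gets dispatched):

    def on_bus_message(bus, message, pipeline):
        # Recompute and redistribute the pipeline latency whenever an element
        # posts a LATENCY message on the bus.
        if message.type == Gst.MessageType.LATENCY:
            pipeline.recalculate_latency()

    bus = pipeline.get_bus()
    bus.add_signal_watch()
    bus.connect("message", on_bus_message, pipeline)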

Thanks for the details, the link helps. I am still a bit confused.

In my example I have only one stream, so I’m not sure what latency I should compensate for. By the way, I’ve set these properties:

   src.set_property("format", Gst.Format.TIME)
   src.set_property("is-live", True)
   src.set_property("do-timestamp", True)

I’d like to reduce the latency of this video stream first.

My real use case does indeed have multiple streams. It is more like:

pipeline = Gst.parse_launch(f"webrtcsink name=ws meta=\"meta,name=stream\" \
                                appsrc name=camera ! queue ! h264parse ! ws. \
                                alsasrc ! queue ! opusenc audio-type=voice ! audio/x-opus, rate=48000, channels=2 ! ws.")

Running gst-launch-1.0 -v alsasrc ! audiolatency print-latency=true ! alsasink reports a latency of ~150ms. Is that the amount I should set as min-latency on the appsrc to compensate for it? (I tried, and it does not change anything.) Also, the audio latency increases over time, which does not happen if I only stream audio.

On the appsrc, the min-latency should be set to the actual latency your source element introduces (in the worst case). In other words, for the timestamp (running time) of the frames you’re passing into the appsrc, how far in the past (too late) compared to the current clock time are they? When using do-timestamp=true the answer is 0 as outgoing buffers are going to be timestamped based on exactly the current clock time.

At the appsrc you don’t need to worry about the latency of any other streams. That’s something to be worried about at the pipeline level.

OK, the camera has a 30ms delay, but setting min-latency=30 does not change much (it is in ms, right?)

What do you mean at the pipeline level? Is there something I should do?

You would then ideally timestamp frames yourself accordingly. In your case, if you let the appsrc do the timestamping then audio/video sync would be off by 30ms.
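
A rough sketch of what that could look like, assuming a constant ~30ms capture delay and do-timestamp disabled so the manual timestamps are kept (the push_frame helper and the values are just placeholders):

    CAMERA_DELAY = 30 * Gst.MSECOND               # assumed worst-case camera delay

    src = pipeline.get_by_name("camera")
    src.set_property("do-timestamp", False)       # we stamp the buffers ourselves
    src.set_property("min-latency", CAMERA_DELAY) # declare the source latency (in ns)

    def push_frame(data):
        # Timestamp the buffer with the running time at which the frame was
        # actually captured, i.e. "now" minus the camera delay.
        buf = Gst.Buffer.new_wrapped(data)
        clock = src.get_clock()
        if clock is not None:
            running_time = clock.get_time() - src.get_base_time()
            buf.pts = max(0, running_time - CAMERA_DELAY)
        src.emit("push-buffer", buf)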

Add enough queueing on the lower-latency stream to compensate for the difference, and make sure to handle LATENCY messages correctly to update the latency of the pipeline.
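
For example, in the two-stream pipeline above the video branch is the one with the lower latency, so its queue is the one that needs the capacity (a sketch; sized explicitly here for illustration, the queue defaults may already be large enough):

    # Give the video queue enough room to hold the audio/video latency
    # difference (~150ms measured above, rounded up), and make time the only
    # limiting factor. max-size-time is in nanoseconds.
    pipeline = Gst.parse_launch(
        "webrtcsink name=ws meta=\"meta,name=stream\" "
        "appsrc name=camera ! queue name=vqueue max-size-time=200000000 "
        "max-size-buffers=0 max-size-bytes=0 ! h264parse ! ws. "
        "alsasrc ! queue ! opusenc audio-type=voice ! audio/x-opus, rate=48000, channels=2 ! ws."
    )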

Got it, but that is about audio/video sync. Back to my initial question: I have a ~300ms delay sending only video (versus ~130ms with the other producer, both connected to the same consumer). So is it the queue, or something in webrtcsink, that adds the latency?
A perfect pipe would give me only the 30ms delay I get when displaying the camera output directly. I’d like to understand and quantify the WebRTC overhead.

Hard to say without analyzing this in detail in your application. You’d need to check where inside webrtcsink latency is introduced. Theoretically there’s nothing apart from the encoder latency, processing delay, network latency and whatever the receiver introduces on its side.
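
The latency tracer can help narrow that down; something like this (a sketch, set up before GStreamer is initialized) logs the latency per element and for the whole pipeline:

    import os
    # Enable the latency tracer and make its output visible before Gst.init().
    os.environ["GST_TRACERS"] = "latency(flags=pipeline+element)"
    os.environ["GST_DEBUG"] = "GST_TRACER:7"

    import gi
    gi.require_version("Gst", "1.0")
    from gi.repository import Gst
    Gst.init(None)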

Yes, sure. I wanted to double-check that there is no kind of buffering I would be unaware of. Thanks for the help, I’ll let you know if I find something.

One thing you didn’t consider here would be the encoder and decoder latency. That’s why you’ll always have more latency end-to-end over WebRTC than when displaying the camera stream locally.

Well, the camera has a built-in encoder, hence the H.264 packets that I feed directly into the pipeline (and the 30ms delay). The decoder (consumer) and the network are the same whether I use webrtcsink or the aiortc custom producer. That’s why I focus on this part here.

Ah great, that simplifies things a bit. Theoretically you should then not observe any additional latency inside webrtcsink, except for bugs. Let me know what you find!

So, testing these two producers on another Linux machine, with the consumer on a Windows one, I get similar performance. I guess this is OK :slight_smile: There might be something specific to my original setup.
