I have a streaming application using webrtcbin that’s generally working fine. However, there’s one exception: on our university network (whether via Wi-Fi, LAN, or VPN), audio and video are horribly choppy and garbled.
On my home connection, everything streams just fine - I had a four-hour intercontinental connection last week, and despite crappy Wifi on the US side, everything still worked.
Notably, other video call software (e.g. Zoom) doesn’t seem to have the same issues with our uni network; it works fine in both cases. It’s just my GStreamer WebRTC setup that croaks.
I’m a bit lost on how to debug this, TBH - what could I analyze and/or tweak to narrow down where this issue is coming from? I already asked our admins, and they had no idea either.
I’m interested in this topic. When my computer has both Ethernet and Wi-Fi connections active, it seems that WebRTC (especially with webrtcsink) doesn’t automatically prefer the Ethernet connection, which could lead to reduced streaming quality. How can I determine which network card is being used by WebRTC, and is there a way to analyze the network path it’s taking?
Right now, there is no logic for this in libnice; interfaces are simply tried in the order they are returned by getifaddrs(). On Windows, we use GetBestInterfaceEx, but I don’t know of a similar API on Linux or BSD platforms.
Another interesting option would be to actually measure the STUN response time per interface in libnice and prefer the one with the lowest ping.
In any case, the solution for this kind of problem belongs in libnice.
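To illustrate the idea, here is a rough sketch of what that per-interface measurement could look like, using plain sockets rather than the actual libnice code (the STUN message layout follows RFC 5389; the server address and the interface pinning via bind() are assumptions for the sketch):

```python
import os
import socket
import struct
import time

STUN_MAGIC_COOKIE = 0x2112A442


def build_stun_binding_request() -> bytes:
    # STUN header (RFC 5389): message type (Binding Request = 0x0001),
    # message length 0 (no attributes), magic cookie, and a
    # 96-bit random transaction ID.
    return struct.pack("!HHI12s", 0x0001, 0, STUN_MAGIC_COOKIE, os.urandom(12))


def stun_rtt(server, local_addr=("0.0.0.0", 0), timeout=1.0):
    """Round-trip time (seconds) of one STUN Binding Request sent
    from a given local address, or None on timeout/mismatch.
    Binding to a specific interface address pins the outgoing path."""
    req = build_stun_binding_request()
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.bind(local_addr)
        sock.settimeout(timeout)
        start = time.monotonic()
        sock.sendto(req, server)
        try:
            data, _ = sock.recvfrom(2048)
        except socket.timeout:
            return None
        # Accept only a Binding Success Response (0x0101) that echoes
        # our transaction ID (bytes 8..20 of the header).
        if data[:2] != b"\x01\x01" or data[8:20] != req[8:20]:
            return None
        return time.monotonic() - start
```

Running stun_rtt once per candidate interface and sorting by the result would give the "prefer the lowest ping" ordering described above.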
@ocrete OK, that’s related to the streaming path question, but would you also have any ideas for solving (or at least debugging) the original issue (very bad streaming quality on some networks, which does not seem to affect other video livestream software)? Thanks!
Thanks for the hints! I’ve hooked up the get-stats signal now, but there’s a lot of data coming out, so it’s going to take a moment to sift through.
I’m using standard webrtcbin and don’t have any code relating to retransmissions, I was actually assuming this would happen transparently? How would I be able to turn them on?
Right now, I’m using a fixed bitrate of 3 Mbit/s, which IMHO should be low enough not to cause any problems, but dynamic bitrate is something I’ve already been considering.
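To make the sifting manageable, my plan is to flatten the get-stats reply (nested GstStructures) into plain Python dicts and then filter for the loss-related entries. A sketch of the filtering step, with the field names assumed from the inbound-rtp stats entries, so treat them as a best guess:

```python
def sift_rtp_stats(stats: dict) -> list:
    """Pull loss-related counters out of a get-stats reply that has
    already been flattened into nested Python dicts. Entry and field
    names ("inbound-rtp", "packets-lost", ...) are assumptions based
    on the inbound-rtp stats structures."""
    rows = []
    for name, entry in stats.items():
        if not isinstance(entry, dict):
            continue
        if entry.get("type") in ("inbound-rtp", "remote-inbound-rtp"):
            rows.append({
                "id": name,
                "ssrc": entry.get("ssrc"),
                "packets-received": entry.get("packets-received"),
                "packets-lost": entry.get("packets-lost"),
                "jitter": entry.get("jitter"),
            })
    return rows
```

That cuts the firehose down to the handful of per-SSRC counters that actually matter for diagnosing loss.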
Thanks! But hm, how do I actually do that? I assumed that the right way is to set a handler for on-new-transceiver on the webrtcbin and then set the do-nack property on the transceiver object in the handler, but it doesn’t look like that signal actually ever gets triggered…
Connect to on-new-transceiver before requesting pads. Or use get-transceiver after the pads have been created.
Thank you, that was kinda obvious in hindsight
Interesting observation: the newest Chrome (119) seems to dislike the offer that is created when do-nack is enabled, and spits out something like Failed to execute 'setRemoteDescription' on 'RTCPeerConnection': Failed to set remote offer sdp: Failed to add remote stream ssrc: 212544496 to {mid: audio0, media_type: audio} Is this a known issue? I couldn’t find anything related.
Firefox works nicely so far, as does the Python client. I’ll dig a bit more…
And for this exact SDP, the Chrome error is: stream.js:6 DOMException: Failed to execute 'setRemoteDescription' on 'RTCPeerConnection': Failed to set remote offer sdp: Failed to add remote stream ssrc: 198368932 to {mid: audio1, media_type: audio}
So it’s the RTX SSRC for audio. Are retransmissions incompatible with Opus?
EDIT: just to clarify, when I don’t set do-nack, the SDP has only one SSRC per stream and then Chrome doesn’t complain.
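For anyone hitting the same thing: with do-nack enabled, the extra RTX SSRC shows up in the offer as an a=ssrc-group:FID line in the affected media section, so a quick way to see which m= sections carry an RTX SSRC is to scan the SDP text for those groups. A minimal parser sketch (plain text parsing, nothing GStreamer-specific; the sample numbers are made up):

```python
def find_rtx_groups(sdp_text: str) -> list:
    """Return (mid, primary_ssrc, rtx_ssrc) tuples for every
    a=ssrc-group:FID line in an SDP blob, associated with the
    most recent a=mid: line seen above it."""
    groups = []
    current_mid = None
    for line in sdp_text.splitlines():
        line = line.strip()
        if line.startswith("a=mid:"):
            current_mid = line[len("a=mid:"):]
        elif line.startswith("a=ssrc-group:FID "):
            ssrcs = line.split()[1:]
            if len(ssrcs) == 2:
                groups.append((current_mid, int(ssrcs[0]), int(ssrcs[1])))
    return groups
```

If the audio sections list an FID group while Chrome rejects exactly those SSRCs, that would line up with the "RTX for audio" theory.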
However, looking at the streaming stats, the packet loss rate reported e.g. by Firefox is somewhere around 50%, and I assume that’s beyond the point where any amount of error correction can still help.
My current suspicion is that the VPN has some custom QoS rules for known video streaming software like Zoom, Skype etc. and just extremely high drop rates for plain old UDP (which, AFAICT, is what the WebRTC traffic looks like to the VPN).
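For reference, the ~50% figure falls out of the cumulative counters between two stats polls; a small helper I’m using to track it over time (field names assumed from the flattened inbound-rtp entries, same caveat as before):

```python
def loss_fraction(prev: dict, cur: dict) -> float:
    """Fractional packet loss between two stats snapshots, computed
    from the cumulative packets-received / packets-lost counters
    (assumed field names for a flattened inbound-rtp stats entry)."""
    lost = cur["packets-lost"] - prev["packets-lost"]
    received = cur["packets-received"] - prev["packets-received"]
    total = lost + received
    return lost / total if total > 0 else 0.0
```

Polling this every few seconds makes it easy to see whether the loss is constant (which would point at a blanket UDP policy) or bursty (which would point more at congestion).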