Best architecture for stable RTSP camera streaming (LAN + WAN, no port forwarding)

Hi everyone,
I’m trying to choose the most stable way to stream RTSP (H.264) cameras to users both on the LAN and over WAN, without port forwarding at the site.

Current architecture
Cameras → RTSP

  • Edge/server on the LAN pulls RTSP (GStreamer)

  • Edge publishes via WebRTC (LiveKit) to a cloud SFU (VPS)

  • React Native app (mobile + Android TV) subscribes from the cloud SFU to view from anywhere
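For reference, the edge publish path is roughly the following sketch. The camera URL, signalling URL, and token are placeholders, and the `livekitwebrtcsink` element/property names are my assumption based on gst-plugins-rs; the exact element name and caps handling may differ in your build:

```shell
# Sketch: pull RTSP over TCP, parse H.264, publish to a LiveKit SFU.
# CAMERA_IP, SFU_HOST, and TOKEN are placeholders.
gst-launch-1.0 \
  rtspsrc location=rtsp://CAMERA_IP/stream protocols=tcp latency=200 ! \
  rtph264depay ! h264parse ! \
  livekitwebrtcsink signaller::ws-url=wss://SFU_HOST signaller::auth-token=TOKEN
```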

Problem
Some streams are:

  • choppy/jittery

  • black, or show a single frame and then freeze (varies by network/device)

Question
Is WebRTC → cloud SFU the right default approach for maximum reliability here, or would you recommend a different pattern (e.g. LL-HLS/HLS)?

Thanks

WebRTC adjusts its quality depending on the current “network weather”, which can sometimes cause visible artifacts.
Consider HLS or MPEG-DASH with some buffering if low latency isn’t a hard requirement.
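If you go that route, a minimal HLS fallback at the edge could look like this sketch (camera URL and output path are placeholders; segment lengths are just a starting point):

```shell
# Sketch: pull RTSP over TCP and remux H.264 to HLS without re-encoding.
# CAMERA_IP is a placeholder.
ffmpeg -rtsp_transport tcp -i rtsp://CAMERA_IP/stream \
  -c copy -f hls \
  -hls_time 2 -hls_list_size 6 \
  -hls_flags delete_segments+independent_segments \
  stream.m3u8
```

Remuxing with `-c copy` keeps CPU cost low on the edge box; re-encoding is only needed if the camera's H.264 settings don't suit your players.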

Hi Alex. The issue I’m seeing with WebRTC is that only 3–4 of my 6 streams show in the app; the others stay black. The logs show frames arriving for those streams, but none are decoded even though they are being viewed. I added a restart mechanism that unsubscribes and resubscribes to the black streams, and after a few tries they all eventually load. So every connection is possible, which rules out a hardware issue.

I suspect it could be an IDR frame issue: the subscriber never receives an IDR, so the stream stays black, and the unsubscribe/resubscribe essentially retries until it gets what it needs to start decoding.
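One way to check that hypothesis at the source is to inspect the frame types coming off the camera and confirm a keyframe actually arrives every GOP (camera URL is a placeholder):

```shell
# Sketch: dump key_frame/pict_type for the first 10 seconds of the camera feed.
# IDR frames show up as key_frame=1 with pict_type=I; with a 1 s GOP you
# should see roughly one per second. CAMERA_IP is a placeholder.
ffprobe -v error -rtsp_transport tcp \
  -select_streams v:0 \
  -show_entries frame=key_frame,pict_type \
  -of csv=p=0 \
  -read_intervals %+10 \
  rtsp://CAMERA_IP/stream
```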

My cameras use a 1 s GOP, low bitrate and FPS, and baseline profile. GStreamer publishes to my cloud SFU using livekitwebrtcbin.

What do you think is the reason for the initial black streams?