USB camera stutter

Hi, I have an annoying issue that I can’t seem to solve. I’m trying to stream a USB camera and two MIPI CSI cameras through webrtcsink, but I keep getting a stuttering image from the USB camera, along with “lost frames detected: count = 1” errors. My pipeline looks like this:

v4l2src → videoconvert → queue → compositor → queue → videoconvert → webrtcsink

With two more branches like this connected to the compositor for the CSI cameras. I’m using a Jetson Orin Nano. Switching the elements to their hardware-accelerated counterparts already helps, but doesn’t solve it. The camera also has an LED that indicates whether it is streaming, and it’s constantly flickering.
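For reference, the single-USB-camera branch of that pipeline could be written as a gst-launch sketch like the one below. The device path, caps, and compositor pad layout are assumptions, and the two MIPI CSI branches are omitted for brevity:

```shell
# Sketch of the software pipeline described above.
# /dev/video0 and the 640x480 caps are assumptions; adjust for your setup.
gst-launch-1.0 \
  compositor name=comp ! queue ! videoconvert ! webrtcsink \
  v4l2src device=/dev/video0 \
    ! video/x-raw,width=640,height=480 \
    ! videoconvert ! queue ! comp.sink_0
# The two MIPI CSI cameras would feed comp.sink_1 / comp.sink_2 the same way.
```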

Just displaying the image with a simple pipeline works fine; frames only start to get lost once an encoder is connected.

Is this a known problem, and are there any remedies for it? I was thinking of adding a buffer so that frame acquisition can run completely independently of the encoding, but I don’t know how to do that correctly.

Thanks in advance!

What does your pipeline look like with hardware-accelerated counterparts?

What video encoder is being used in this pipeline and the hw-accelerated version?

The structure is the same:

nvv4l2camerasrc → nvvidconv → queue → nvcompositor → queue → nvvidconv → webrtcsink

It’s using x264enc, as the Orin Nano doesn’t have the NVENC hardware encoders. Could it be that the encoder is struggling with the hard edges in the composited image?
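For reference, that hardware-accelerated variant might look roughly like this as a gst-launch sketch. The device path, caps, and pad layout are assumptions; webrtcsink selects the encoder itself, which on the Orin Nano falls back to software x264enc:

```shell
# Hedged sketch of the hardware-accelerated pipeline; adjust devices/caps.
gst-launch-1.0 \
  nvcompositor name=comp ! queue ! nvvidconv ! webrtcsink \
  nvv4l2camerasrc device=/dev/video0 \
    ! 'video/x-raw(memory:NVMM)' \
    ! nvvidconv ! queue ! comp.sink_0
# The CSI cameras would feed comp.sink_1 / comp.sink_2 via their own branches.
```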

This happens when the video node driver is starved and forced to overwrite the currently active buffer instead of delivering it. You may want to try moving your queue in front of videoconvert, but realistically, it’s possible that the CPU processing in videoconvert / compositor is too slow. Have you considered using HW acceleration?
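One way to keep v4l2src servicing the driver even when downstream is slow is a leaky queue directly after the source: the queue drops old frames instead of blocking capture, so the driver is not starved into overwriting its active buffer. A minimal single-camera sketch of that idea (the queue size and autovideosink are assumptions, and dropped frames are the trade-off):

```shell
# Leaky queue directly after the source: capture keeps running, and old
# frames are dropped downstream instead of stalling v4l2src.
gst-launch-1.0 \
  v4l2src device=/dev/video0 \
  ! queue max-size-buffers=4 leaky=downstream \
  ! videoconvert ! autovideosink
```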


I guess so, because picking a lower resolution improves it as well. I’ll see whether I can get a better result on a Raspberry Pi with hardware encoding.

Not sure where you’re using video encoding (compositor output, webrtc?), but you may see different performance depending on the nvcompositor input/output formats. If you don’t need alpha compositing, NV12 (or I420) in NVMM memory may be preferable (this might depend on your Jetson platform and L4T release).
It would be easier to advise if you posted your full pipelines.
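As an illustration of that suggestion, a caps filter could pin a compositor input branch to NV12 in NVMM memory, roughly like this. This is a sketch only; whether nvcompositor actually negotiates these caps depends on the Jetson platform and L4T release:

```shell
# Force NV12 in NVMM memory on one compositor input branch (sketch).
nvv4l2camerasrc device=/dev/video0 \
  ! nvvidconv \
  ! 'video/x-raw(memory:NVMM),format=NV12' \
  ! queue ! comp.sink_0
```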