Trouble with latency

I’ve got a pipeline watching and streaming 1080p at 30 Hz over UDP to localhost.
Question: is it reasonable to run 4 such streams on an 8-core Ryzen?

I’ve manipulated the queues (leaky queue) so any single stream stays within 100 ms or so of the original v4l2src.
Running 2 streams (cores around 50%), the latency grows by about 10 seconds per minute.
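For reference, this is the kind of leaky queue I mean (sizes here are illustrative, not my exact settings):

```shell
# Illustrative fragment, not my exact pipeline: leaky=downstream drops the
# oldest buffers when the queue fills, so the sink always gets fresh frames.
# max-size-buffers=3 bounds the backlog to roughly 100 ms at 30 fps.
gst-launch-1.0 videotestsrc is-live=true \
  ! video/x-raw,framerate=30/1 \
  ! queue leaky=downstream max-size-buffers=3 max-size-bytes=0 max-size-time=0 \
  ! autovideosink
```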

I found a videotestsrc example with a bouncing ball and timer. 1 stream stays within milliseconds; with 2, it drifts.

It very much depends on what you are doing with the streams. Are you encoding? Are you doing transformations? But at a high level, dealing with 4x 1080p30 streams on a modern system shouldn’t be an issue. Can you share the pipeline you are using?

Olivier

This is my unadulterated send pipeline

    pipe = settings.value ("recordPipe",
        "v4l2src device=%1 ! "
        "image/jpeg,width=1920,height=1080,framerate=30/1 ! "
        "jpegdec ! "
        "videoconvert ! "
        "queue max-size-buffers=0 max-size-bytes=0 max-size-time=1000000000 ! "
        "gdkpixbufoverlay name=logo ! "
        "clockoverlay name=DT "
        "halignment=left valignment=top font-desc=\"Sans, 12\" time-format=\"%d %b %Y %H:%M:%S\" ! "
        "textoverlay name=O1 halignment=left valignment=top ! "
        "textoverlay name=O2 halignment=left valignment=top ! "
        "textoverlay name=O3 halignment=left valignment=top ! "
        "textoverlay name=O4 halignment=left valignment=top ! "
        "textoverlay name=O5 halignment=left valignment=top ! "
        "textoverlay name=O6 halignment=left valignment=top ! "
        "textoverlay name=O7 halignment=left valignment=top ! "
        "textoverlay name=O8 halignment=left valignment=top ! "
        "textoverlay name=O9 halignment=left valignment=top ! "
        "textoverlay name=O10 halignment=left valignment=top ! "
        "tee name=t "

        "t. ! queue ! glimagesink name=sink sync=false "

        "t. ! queue leaky=1 ! x264enc tune=zerolatency ! "
        "rtph264pay ! "
        "udpsink host=127.0.0.1 port=%2 "

        "t. !  queue ! "
        "x264enc speed-preset=ultrafast tune=zerolatency byte-stream=true bitrate=%3 key-int-max=%4 ! "
        "mux. pulsesrc device=%5 ! "      // device = 0
        "queue ! audioconvert ! "
        "audioresample ! audio/x-raw, rate=48000 ! "
        "queue ! avenc_aac ! "
        "queue ! mux. qtmux name=mux ! "
        "filesink location=%6").toString ()

    .arg (cardInfo[channel])
    .arg (5000 + channel)
    .arg (bitrate)
    .arg (gop)
    .arg (pulseDev)
    .arg (recFile);

But here is a test pipeline that shows the lag too

gst-launch-1.0 videotestsrc pattern=ball is-live=1 ! video/x-raw,width=1920,height=1080,framerate=30/1 ! timeoverlay font-desc="Sans, 24" ! tee name=t ! queue ! autovideosink t. ! queue ! xvimagesink name=sink sync=false t. ! queue ! x264enc tune=zerolatency ! rtph264pay ! udpsink host=127.0.0.1 port=5001

and received by

gst-launch-1.0 -vc udpsrc port=5001 close-socket=false multicast-iface=false auto-multicast=true ! application/x-rtp, payload=96 ! rtpjitterbuffer ! rtph264depay ! avdec_h264 ! xvimagesink sync=false async=false

I read that udpsrc shows lower latency than tcpserversrc, but I also tried TCP (with no notable difference).
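For anyone comparing, a TCP equivalent might look roughly like this (ports and container choice are illustrative; over a raw TCP byte stream the H.264 needs a container such as MPEG-TS):

```shell
# Sketch only: H.264 in an MPEG-TS container over TCP to localhost.
# Sender:
gst-launch-1.0 videotestsrc is-live=true \
  ! video/x-raw,width=1920,height=1080,framerate=30/1 \
  ! x264enc tune=zerolatency \
  ! mpegtsmux ! tcpserversink host=127.0.0.1 port=5001

# Receiver:
gst-launch-1.0 tcpclientsrc host=127.0.0.1 port=5001 \
  ! tsdemux ! h264parse ! avdec_h264 ! autovideosink sync=false
```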

I’m still having a torrid time with latency.
On the NVIDIA forum, Honey_Patouceul opined that

shmsink/src may result in significant CPU usage. Maybe using a network streaming protocol into localhost could save more resources, but I haven’t really profiled that.

I find the CPU usage is low (20% vs. 50% for udpsink/src) and the timing is perfect, BUT, in contravention of the Unix way, the SHM segment is not shared (rw- r-- ---).
So back to basics: how do I share a pipeline from one user to another user?

User streaming stream1 or stream2

User1 watching stream1
or
User2 watching stream2

latency sub 100ms

User streaming stream1 and stream2

User1 watching stream1
and
User2 watching stream2

latency 5 to 10 secs / minute accumulating
The latency seems to be on (only???) the receive side: when I stop the sender, the receiver keeps receiving.
Here is my receive pipeline:
gst-launch-1.0 -vc udpsrc port=5002 close-socket=false multicast-iface=false auto-multicast=true ! application/x-rtp, payload=96 ! rtpjitterbuffer ! rtph264depay ! avdec_h264 ! xvimagesink sync=false async=false udpsrc port=3446 ! 'application/x-rtp,media=audio,payload=96,clock-rate=22000,encoding-name=L24' ! rtpL24depay ! audioconvert ! autoaudiosink sync=false

Replacing rtpjitterbuffer with queue results in no delay, but artifacts on the display (smear of the bouncing ball).

RidgeRun’s interpipe looks interesting, but it won’t build on Ubuntu 24.04.

Note that my advice on the NVIDIA forum was for a case of passing raw video through shmsrc/shmsink on a Jetson; it can be very different on your own platform.

Also note that shmsink has a perms property.
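For example (the octal value is illustrative; 0666 lets any local user read and write):

```shell
# Sketch: make the shared-memory area world read/write so a second user
# can attach with shmsrc. The perms property takes a standard octal mode.
gst-launch-1.0 videotestsrc \
  ! video/x-raw,width=640,height=480,framerate=30/1 \
  ! shmsink socket-path=/tmp/foo perms=0666 \
            wait-for-connection=false sync=true shm-size=10000000
```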

For RTP over UDP, you may try without audio, just with something like:

gst-launch-1.0 -vc udpsrc port=5002 close-socket=false ! application/x-rtp, encoding-name=H264 ! rtpjitterbuffer latency=0 ! rtph264depay ! h264parse ! avdec_h264 ! xvimagesink

A clod of earth must have clouded my view. I pored over gst-inspect looking for something like this. Thank you!

I send with this

gst-launch-1.0 -v videotestsrc is-live=true pattern=ball background-color=0xff0000f0 foreground-color=0xfff0f000 !
'video/x-raw,width=1280,height=720,format=(string)RGB,framerate=(fraction)30/1' !
timeoverlay font-desc="Sans, 24" !
videoconvert !
shmsink perms=0666 socket-path=/tmp/foo name=/tmp/shm sync=false wait-for-connection=false shm-size=20000000

and receive with this

gst-launch-1.0 -v shmsrc do-timestamp=true socket-path=/tmp/foo name=/tmp/shm !
'video/x-raw,width=1280,height=720,format=(string)RGB,framerate=(fraction)30/1' !
videoconvert !
fpsdisplaysink text-overlay=false sync=false -e

All perfect.
I run the same receive again and again perfect.

Instead (of running again) I run the same receive pipeline as another user.
No errors in the terminal, but the display is utter rubbish (lines, black-and-white exclamation marks, the title bar).

Can anybody tell me what I don’t understand please?

Looks like a resolution mismatch… I also notice that the receiver has framerate 60/1 while the sender seems to have set 30/1.

I’ve tested your case without issues, but I have no X display for the receiver, so I used cacasink instead:

User1 sender:

gst-launch-1.0 -v videotestsrc is-live=true pattern=ball background-color=0xff0000f0 foreground-color=0xfff0f000 ! 'video/x-raw,width=1280,height=720,format=(string)RGB,framerate=(fraction)30/1' ! timeoverlay font-desc="Sans, 24" ! videoconvert ! shmsink perms=0666 socket-path=/tmp/foo name=/tmp/shm sync=false wait-for-connection=false shm-size=20000000

User2 receiver:

gst-launch-1.0 -v shmsrc do-timestamp=true socket-path=/tmp/foo name=/tmp/shm ! 'video/x-raw,width=1280,height=720,format=(string)RGB,framerate=(fraction)30/1' ! videoconvert ! cacasink sync=false

Does this work for you ?

After struggling lots, your help resolved my issues - thanks!
I’ve been fiddling lots, and 60 crept in, in place of 30, while I was trying things.
The bit that solved it for me was your suggestion:
rtpjitterbuffer latency=0

I am using udpsink/src.

But I would like to solve the shm other-user issue (that is a failing of being a Libra - perfectionism), so I’ll give it a try in the morning (Australia time is way out).

For user1, your shm example works correctly.
For user2, no errors reported, but