input-selector not working on Jetson with 2 rtspsrc

Hello friends,
this is a repost of an issue I’m having with input-selector.
It works fine when switching between 2 videotestsrc elements, but fails when I use 2 rtspsrc.
Thanks in advance; any help will be greatly appreciated with a gift from Argentina. :wink:

videotestsrc is-live=true pattern=ball ! video/x-raw,width=160,height=120,framerate=2/1 ! videoconvert ! x264enc ! rtph264pay ! queue ! selector.sink_1 \
videotestsrc is-live=true pattern=snow ! video/x-raw,width=160,height=120,framerate=2/1 ! videoconvert ! x264enc ! rtph264pay ! queue ! selector.sink_2 \
input-selector name=selector ! queue ! whipsink
rtspsrc location=.... ! rtph264depay ! rtph264pay config-interval=-1 ! application/x-rtp,media=video,encoding-name=H264 ! queue ! selector.sink_0 \
rtspsrc location=.... ! rtph264depay ! rtph264pay config-interval=-1 ! application/x-rtp,media=video,encoding-name=H264 ! queue ! selector.sink_1 \
input-selector name=selector ! queue ! whipsink

I just tested this with an AGX Orin running R36.3 against a localhost RTSP server. I haven’t tested whipsink, but the following seems OK:

gst-launch-1.0 -v \
rtspsrc location=rtsp://127.0.0.1:8554/test ! rtph264depay ! rtph264pay config-interval=-1 ! application/x-rtp,media=video,encoding-name=H264 ! queue ! selector.sink_0 \
rtspsrc location=rtsp://127.0.0.1:8554/test ! rtph264depay ! rtph264pay config-interval=-1 ! application/x-rtp,media=video,encoding-name=H264 ! queue ! selector.sink_1  \
input-selector name=selector ! queue ! fakesink dump=1

Note that in my case the H264 stream from the RTSP server includes SPS/PPS (your config-interval setting may add them as well when a key frame happens), and it has an IDR interval of 15 at 30 fps, so a key frame is issued every 15 frames and the receiver can start displaying quickly.

What does the following report?

gst-discoverer-1.0 -v <your_rtsp_src>

You may also try adding h264parse between rtph264depay and rtph264pay.
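As a concrete sketch, this is the localhost test pipeline from above with h264parse inserted into each branch (the RTSP URL is the same placeholder as before; adjust it to your actual source):

```shell
# Test pipeline with h264parse between depay and pay in both branches.
# Requires a live RTSP server at the placeholder URL.
gst-launch-1.0 -v \
rtspsrc location=rtsp://127.0.0.1:8554/test ! rtph264depay ! h264parse ! rtph264pay config-interval=-1 ! application/x-rtp,media=video,encoding-name=H264 ! queue ! selector.sink_0 \
rtspsrc location=rtsp://127.0.0.1:8554/test ! rtph264depay ! h264parse ! rtph264pay config-interval=-1 ! application/x-rtp,media=video,encoding-name=H264 ! queue ! selector.sink_1 \
input-selector name=selector ! queue ! fakesink dump=1
```

h264parse can re-insert SPS/PPS and normalize the stream format, which sometimes helps downstream elements negotiate.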

You may also try to run both pipelines with verbose mode (such as with gst-launch-1.0 -v) and carefully check for differences.

Does decoding/re-encoding help?

gst-launch-1.0 -v \
rtspsrc location=rtsp://127.0.0.1:8554/test ! rtph264depay ! h264parse ! nvv4l2decoder ! queue ! nvv4l2h264enc insert-sps-pps=1 idrinterval=15 insert-vui=1 ! h264parse ! rtph264pay ! application/x-rtp,media=video,encoding-name=H264 ! queue ! selector.sink_0 \
rtspsrc location=rtsp://127.0.0.1:8554/test ! rtph264depay ! h264parse ! nvv4l2decoder ! queue ! nvv4l2h264enc insert-sps-pps=1 idrinterval=15 insert-vui=1 ! h264parse ! rtph264pay ! application/x-rtp,media=video,encoding-name=H264 ! queue ! selector.sink_1  \
input-selector name=selector ! queue ! fakesink dump=1

Note that the Orin Nano has no hardware encoder, so this wouldn’t work on that model. CPU encoding would work with something like:

gst-launch-1.0 -v \
rtspsrc location=rtsp://127.0.0.1:8554/test ! rtph264depay ! h264parse ! nvv4l2decoder ! queue ! nvvidconv ! video/x-raw,format=I420 ! x264enc key-int-max=15 tune=zerolatency ! h264parse ! rtph264pay  ! application/x-rtp,media=video,encoding-name=H264 ! queue ! selector.sink_0 \
rtspsrc location=rtsp://127.0.0.1:8554/test ! rtph264depay ! h264parse ! nvv4l2decoder ! queue ! nvvidconv ! video/x-raw,format=I420 ! x264enc key-int-max=15 tune=zerolatency ! h264parse ! rtph264pay  ! application/x-rtp,media=video,encoding-name=H264 ! queue ! selector.sink_1 \
input-selector name=selector ! queue ! fakesink dump=1

Also note that your videotestsrc case uses a very low resolution and framerate. If your RTSP cameras stream at high resolution and high framerate, you may have to adjust the bitrate, profile and level, and maybe the network sink’s buffer-size for UDP.

Hi friend!

Thank you so much for your help! I’m kinda solo here…

The hardware setup is a Jetson Orin Xavier connected to a 5G internet connection, streaming to Dolby.io.

The software setup is as follows:
source branch 1: test_signal.mp4 → ffmpeg → mediamtx → gstreamer pipeline → unique sink
source branch 2: rtsp camera (VGA@4fps) → gstreamer pipeline → unique sink
unique sink: selector → whipsink

Main goals: avoid exhausting the 5G network quota and avoid decoding/encoding.

How should the pipeline work? Initially stream a test signal while there are no viewers, quickly switch to the camera signal when one or more viewers connect, then switch back to the test signal when there are no viewers again, and so on.
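The switching policy described above (test signal with no viewers, camera with one or more) can be sketched as a plain function, independent of GStreamer. The pad mapping here is an assumption for illustration: the test-signal branch on sink_0 and the camera branch on sink_1.

```python
# Sketch of the viewer-driven source-selection policy.
# Pad names are an assumption: test signal on sink_0, camera on sink_1.

TEST_PAD = "sink_0"    # test-signal branch
CAMERA_PAD = "sink_1"  # RTSP camera branch

def select_pad(viewer_count: int) -> str:
    """Return which input-selector pad should be active."""
    return CAMERA_PAD if viewer_count >= 1 else TEST_PAD

# In a real application the decision would be applied to the running
# pipeline, e.g. (untested sketch using the GStreamer Python bindings,
# assuming the selector element is named "selector" as in this thread):
#
#   selector = pipeline.get_by_name("selector")
#   pad = selector.get_static_pad(select_pad(viewer_count))
#   selector.set_property("active-pad", pad)
```

Switching the active-pad property at runtime avoids restarting the pipeline, so the WHIP session to Dolby.io can stay up across switches.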

Gst-discoverer-1.0 output is below.

Properties:
  Duration: 99:99:99.999999999
  Seekable: no
  Live: yes
  unknown #0: application/x-rtp
    video #1: H.264 (High Profile)
      Stream ID: dc0bbd374c1b74f2484e3d95eacc4ff4a877d09cd19d51a3dc5a4f28354a3dae/video:0:0:RTP:AVP:96
      Width: 1920
      Height: 1080
      Depth: 24
      Frame rate: 4/1
      Pixel aspect ratio: 1/1
      Interlaced: false
      Bitrate: 0
      Max bitrate: 0

I may help if I can, but I don’t have your setup and cannot just guess.
Please provide an MRE (minimal reproducible example), stating your Jetson model and baseboard, L4T version, GStreamer version (and how you built it, if non-standard), your pipelines, and the returned errors.

You may use the -v flag with gst-discoverer-1.0 as suggested for more insight, and share the output.
You may also report any differences between the videotestsrc and rtspsrc cases when run with the verbose flag as suggested.