Delayed linking failed

GStreamer 1.24.7 on Windows. I’m getting intermittent “delayed linking failed” errors, which makes this a little tough to test. Here is my pipeline:

rtspsrc location=rtsp://10.16.14.207:554/s0 timeout=0 name=src !
                application/x-rtp,media=video !
                queue max-size-bytes=20971520 name=video_source_queue max-size-time=0 !
                parsebin !
                identity sync=true !
                fallbackswitch name=video_fallback immediate-fallback=true !
                video/x-h264 !
                tee name=video_tee !
                queue name=video_fake_queue !
                fakesink
            multifilesrc location=Videos/WaitingForStream.mp4 loop=true !
                parsebin !
                video/x-h264, stream-format=avc !
                video_fallback.
            fallbackswitch name=audio_fallback immediate-fallback=true !
                audioconvert !
                audioresample !
                opusenc !
                tee name=audio_tee !
                queue name=audio_fake_queue !
                fakesink
            src. !
                application/x-rtp,media=audio !
                queue max-size-bytes=20971520 max-size-time=0 name=audio_source_queue !
                parsebin !
                decodebin !
                identity sync=true !
                audio_fallback.sink_0
            audiotestsrc wave=silence !
                audio_fallback.sink_1

Basically this takes an rtsp stream, and if it’s not ready or has a failure, it will:

  1. Show the Videos/WaitingForStream.mp4 video.
  2. Play empty audio.

The pipeline ends up getting webrtcbin elements attached, but I don’t think any of that is related. I am listening for warning and error events on the bus and I see this:

gst/parse/grammar.y(918): gst_parse_no_more_pads (): /GstPipeline:pipeline3/GstRTSPSrc:src:
failed delayed linking some pad of GstRTSPSrc named src to some pad of GstQueue named audio_source_queue

This seems to correspond with the stream not playing on the webrtc client. My guess is there’s some hiccup in the audio portion of the stream and maybe the audio pad isn’t there for the rtspsrc, which is causing the queue to fail to link. The stream does have audio. I’m unsure what I need to do to handle this.

I’m thinking the only way to handle this is to use the pad-added signal to wait for the audio and video pads rather than having one big pre-defined pipeline. Is that the case? If so, does that mean it’s not possible to run this reliably with gst-launch?
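If I go that route, the pad-added handler would look at the new pad’s caps and pick a branch. Here’s a sketch of just that decision logic (in Python for illustration; in my app it would hook rtspsrc’s pad-added signal via GStreamer-Sharp and then call link, which is omitted here; the queue names are the ones from my pipeline):

```python
# Sketch of the routing decision a pad-added handler would make, instead
# of relying on gst-launch-style delayed linking. Only the caps
# inspection is shown; the actual signal hookup and pad linking would
# happen in the app.

def route_new_pad(caps_string):
    """Return the queue a new rtspsrc pad should link to, judged by the
    'media' field of its application/x-rtp caps, or None to ignore it."""
    if not caps_string.startswith("application/x-rtp"):
        return None  # not an RTP stream pad; leave it unlinked
    fields = {}
    for part in caps_string.split(",")[1:]:
        if "=" in part:
            key, value = part.strip().split("=", 1)
            fields[key] = value.removeprefix("(string)")
    if fields.get("media") == "video":
        return "video_source_queue"
    if fields.get("media") == "audio":
        return "audio_source_queue"
    return None  # unknown media type: ignore rather than fail the link
```

The key difference from the big parse-launch string is the last line: an unexpected pad gets ignored instead of aborting the delayed link.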

Here is debug output level 6 for rtsp and sdp: GStreamer.txt

I should note that I’m testing on a different PC that has GStreamer 1.24.2 on it, hence the reason that version is listed in the log file.

Linking sometimes requires programming logic that gst-launch-1.0 is simply not smart enough to do. Have you considered writing an app instead?

This is in an app. The pipeline I posted is started with Parse.Launch (GStreamer-Sharp API).

When clients connect, the streams get set up to go out over webrtc. So, each client gets something like the following linked to the video and audio tees:

queue name=video_output_queue max-size-bytes=20971520 max-size-time=0 leaky=downstream ! 
rtph264pay aggregate-mode=zero-latency config-interval=-1 timestamp-offset=0 ! 
webrtcbin name=webrtc bundle-policy=max-bundle
queue name=audio_output_queue max-size-bytes=20971520 max-size-time=0 leaky=downstream ! 
rtpopuspay !
webrtc.

I don’t think the webrtc bit is the issue, though. I think it’s just that it randomly has issues with either the audio or the video from rtspsrc not being there, which completely halts the pipeline. And since I’m just using Parse.Launch, it’s not that much different from using gst-launch, so you might be right that I need to link things manually. I was just hoping to avoid that.

Really, I’d like to understand why this is so random. Even if I link manually I suspect I’m going to be in a situation where either the audio or the video isn’t available. Users are going to be a little confused by that. I suppose I can restart the pipeline until there is both audio and video, though that doesn’t seem great either.
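For what it’s worth, the restart idea would be something like this (just a sketch; build_and_probe is a placeholder for app logic that builds the pipeline, waits briefly, and reports which rtspsrc pads appeared; in the real app each failed attempt would also set the pipeline back to NULL before retrying):

```python
# Sketch of "restart the pipeline until both streams are present".
# build_and_probe is a placeholder: it should construct the pipeline,
# wait a bounded time, and return (have_audio, have_video).

def start_with_retries(build_and_probe, max_attempts=5):
    """Retry startup until a probe reports both audio and video.

    Returns the attempt number that succeeded, or None on giving up."""
    for attempt in range(1, max_attempts + 1):
        have_audio, have_video = build_and_probe()
        if have_audio and have_video:
            return attempt  # keep this pipeline running
        # otherwise tear down and try again
    return None  # give up: stay on the WaitingForStream fallback media
```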

I understand it’s nice if you can avoid custom linking. In general, this comes from ambiguity in the caps: when the caps of each branch are ambiguous, the parser will just link the first stream that arrives to the first matching branch.

I think in your case the use of parsebin introduces an ambiguity. If the first media packet is audio, it gets linked to the parsebin where I think you expected video. Things will eventually fail if that happens; the actual error occurs because the parsebin pad, which is dynamic, has no compatibility with the rest of the pipeline.

I would say that a first thing to try would be to replace that parsebin with rtph264depay. Other similar changes may be needed.

Thanks for the suggestion. It makes sense. I’ve had other pipelines that randomly worked or didn’t work and ended up being caps related. I’ll give this a try and report back.

I was wondering, does the following not solve this?

application/x-rtp,media=video 

and:

application/x-rtp,media=audio

Here’s the current pipeline. Still causing the same issues.

rtspsrc location=rtsp://10.16.14.207:554/s0 name=src timeout=0 !
        application/x-rtp,media=video !
        queue max-size-bytes=20971520 name=video_source_queue max-size-time=0 !
        rtph264depay !
        identity sync=true !
        fallbackswitch name=video_fallback immediate-fallback=true !
        video/x-h264 !
        tee name=video_tee !
        queue name=video_fake_queue !
        fakesink
    multifilesrc location=Videos/WaitingForStream.mp4 loop=true !
        parsebin !
        video/x-h264, stream-format=avc !
        video_fallback.
    fallbackswitch name=audio_fallback immediate-fallback=true !
        audioconvert !
        audioresample !
        opusenc !
        tee name=audio_tee !
        queue name=audio_fake_queue !
        fakesink
    src. !
        application/x-rtp,media=audio !
        queue max-size-bytes=20971520 max-size-time=0 name=audio_source_queue !
        rtpmp4gdepay !
        decodebin !
        identity sync=true !
        audio_fallback.sink_0
    audiotestsrc wave=silence !
        audio_fallback.sink_1

Did some more testing. Took audio out of the equation. Here’s the pipeline:

rtspsrc location=rtsp://10.16.14.207:554/s0 name=src timeout=0 !
        application/x-rtp,media=video !
        queue max-size-bytes=20971520 name=video_source_queue max-size-time=0 !
        parsebin !
        identity sync=true !
        fallbackswitch name=video_fallback immediate-fallback=true !
        video/x-h264 !
        tee name=video_tee !
        queue name=video_fake_queue !
        fakesink
    multifilesrc location=Videos/WaitingForStream.mp4 loop=true !
        parsebin !
        video/x-h264, stream-format=avc !
        video_fallback.

This also has the same problem:

gst/parse/grammar.y(919): gst_parse_no_more_pads (): /GstPipeline:pipeline3/GstRTSPSrc:src:
failed delayed linking some pad of GstRTSPSrc named src to some pad of GstQueue named video_source_queue

And I did try it with both parsebin and rtph264depay.

I decided to try to run a pipeline with gst-launch. I still get the same thing with this:

gst-launch-1.0 -v rtspsrc location=rtsp://10.16.14.207:554/s0 name=src timeout=0 ! 
application/x-rtp,media=video ! 
queue max-size-bytes=20971520 name=video_source_queue max-size-time=0 ! 
rtph264depay ! 
identity sync=true ! 
fallbackswitch name=video_fallback immediate-fallback=true ! 
video/x-h264 ! 
tee name=video_tee ! 
queue name=video_fake_queue ! 
decodebin ! 
autovideosink multifilesrc location=Videos/WaitingForStream.mp4 loop=true ! 
parsebin ! 
video/x-h264, stream-format=avc ! 
video_fallback. 

I ran this with GST_DEBUG=3. Here are the stdout and stderr when it fails:

Failed Pipeline Error
Failed Pipeline Output

Here is stdout and stderr when it doesn’t fail:

Working Pipeline Error
Working Pipeline Output

Haven’t spent a ton of time comparing the files, but I did think this was suspect:

/GstPipeline:pipeline0/GstRTSPSrc:src/GstRtpBin:manager/GstRtpSession:rtpsession0: stats = application/x-rtp-session-stats, rtx-drop-count=(uint)0, sent-nack-count=(uint)0, recv-nack-count=(uint)0, source-stats=(GValueArray)< "application/x-rtp-source-stats\,\ ssrc\=\(uint\)922628121\,\ internal\=\(boolean\)true\,\ validated\=\(boolean\)true\,\ received-bye\=\(boolean\)false\,\ is-csrc\=\(boolean\)false\,\ is-sender\=\(boolean\)false\,\ seqnum-base\=\(int\)-1\,\ clock-rate\=\(int\)-1\,\ octets-sent\=\(guint64\)0\,\ packets-sent\=\(guint64\)0\,\ octets-received\=\(guint64\)0\,\ packets-received\=\(guint64\)0\,\ bytes-received\=\(guint64\)0\,\ bitrate\=\(guint64\)0\,\ packets-lost\=\(int\)0\,\ jitter\=\(uint\)0\,\ sent-pli-count\=\(uint\)0\,\ recv-pli-count\=\(uint\)0\,\ sent-fir-count\=\(uint\)0\,\ recv-fir-count\=\(uint\)0\,\ sent-nack-count\=\(uint\)0\,\ recv-nack-count\=\(uint\)0\,\ recv-packet-rate\=\(uint\)0\,\ have-sr\=\(boolean\)false\,\ sr-ntptime\=\(guint64\)0\,\ sr-rtptime\=\(uint\)0\,\ sr-octet-count\=\(uint\)0\,\ sr-packet-count\=\(uint\)0\;", "application/x-rtp-source-stats\,\ ssrc\=\(uint\)92292413\,\ internal\=\(boolean\)false\,\ validated\=\(boolean\)true\,\ received-bye\=\(boolean\)false\,\ is-csrc\=\(boolean\)false\,\ is-sender\=\(boolean\)true\,\ seqnum-base\=\(int\)-1\,\ clock-rate\=\(int\)11025\,\ rtp-from\=\(string\)10.16.14.207:43598\,\ octets-sent\=\(guint64\)0\,\ packets-sent\=\(guint64\)0\,\ octets-received\=\(guint64\)7370\,\ packets-received\=\(guint64\)20\,\ bytes-received\=\(guint64\)8170\,\ bitrate\=\(guint64\)0\,\ packets-lost\=\(int\)-1\,\ jitter\=\(uint\)265\,\ sent-pli-count\=\(uint\)0\,\ recv-pli-count\=\(uint\)0\,\ sent-fir-count\=\(uint\)0\,\ recv-fir-count\=\(uint\)0\,\ sent-nack-count\=\(uint\)0\,\ recv-nack-count\=\(uint\)0\,\ recv-packet-rate\=\(uint\)9\,\ have-sr\=\(boolean\)false\,\ sr-ntptime\=\(guint64\)0\,\ sr-rtptime\=\(uint\)0\,\ sr-octet-count\=\(uint\)0\,\ sr-packet-count\=\(uint\)0\,\ sent-rb\=\(boolean\)true\,\ 
sent-rb-fractionlost\=\(uint\)0\,\ sent-rb-packetslost\=\(int\)-1\,\ sent-rb-exthighestseq\=\(uint\)46427\,\ sent-rb-jitter\=\(uint\)265\,\ sent-rb-lsr\=\(uint\)0\,\ sent-rb-dlsr\=\(uint\)0\,\ have-rb\=\(boolean\)false\,\ rb-ssrc\=\(uint\)0\,\ rb-fractionlost\=\(uint\)0\,\ rb-packetslost\=\(int\)0\,\ rb-exthighestseq\=\(uint\)0\,\ rb-jitter\=\(uint\)0\,\ rb-lsr\=\(uint\)0\,\ rb-dlsr\=\(uint\)0\,\ rb-round-trip\=\(uint\)0\;" >, rtx-count=(uint)0, recv-rtx-req-count=(uint)0, sent-rtx-req-count=(uint)0;

Seemed like it transferred no data here, but the working pipeline showed data being transferred at approximately the same place.

Now I’ve tried it without all the fallbackswitch stuff. Same error (warning: failed delayed linking some pad of GstRTSPSrc named src to some pad of GstQueue named video_source_queue). Pipeline:

gst-launch-1.0 -v rtspsrc location=rtsp://10.16.14.207:554/s0 name=src timeout=0 ! 
application/x-rtp,media=video ! 
queue max-size-bytes=20971520 name=video_source_queue max-size-time=0 ! 
rtph264depay ! 
identity sync=true ! 
tee name=video_tee ! 
queue name=video_fake_queue ! 
decodebin ! 
autovideosink

And even though the error is actually showing up as a warning, it always coincides with the pipeline failing and exiting.

Since this was random, I put together this Windows batch script:

@echo off
set MAX_LOOPS=100
set COUNTER=0

:loop
if %COUNTER% geq %MAX_LOOPS% (
    echo Maximum loop count reached. Exiting...
    exit /b
)

echo Loop #%COUNTER%

rem Start the GStreamer pipeline in a separate thread and wait for 10 seconds
start "" cmd /c "gst-launch-1.0 -v rtspsrc location=rtsp://10.16.14.207:554/s0 name=src ! application/x-rtp,media=video ! rtph264depay ! decodebin ! autovideosink > output_simple2.txt 2> error_simple2.txt"
timeout /t 10 /nobreak >nul

rem Check if the GStreamer process is still running
tasklist | find /i "gst-launch-1.0.exe" >nul
if %errorlevel% equ 0 (
    echo Process running successfully for 10 seconds, killing it.
    taskkill /f /im gst-launch-1.0.exe >nul
    set /a COUNTER+=1
    goto loop
) else (
    echo Process failed or exited before 10 seconds.
    exit /b
)

Removing the application/x-rtp,media=video caps filter seems to fix the problem, or at least makes it much less likely to trigger; maybe I just didn’t run this test harness over enough loops. But I’m baffled why the caps would cause a problem. I would have thought they’d be the solution to the problem.

I built our app without the caps and I’m still seeing the problems, so apparently the caps aren’t the problem, or at least aren’t the full problem.

Btw, since you can reproduce this in gst-launch, you can use the GST_DEBUG_DUMP_DOT_DIR=some-dir/ environment variable to get a Graphviz dump of the pipeline. Hopefully it dumps one when this error occurs; that will clarify the state of the pipeline. But as this is a linking problem, it can only be related to the wrong branch being linked.

I think you’re right about the wrong branch being linked. It’s almost as if on the successful attempts, the video pad gets added first and then it does the linking correctly to rtph264depay. When it fails, I think it gets the audio pad first and instead of ignoring the pad as I was expecting it to do, it seems to think it’s time to give up. As if it’s not expecting any more pads (in this case, the video pad that hasn’t been added yet).

I did attempt the graphviz. Regardless of what I set for GST_DEBUG_DUMP_DOT_DIR (absolute path, relative path, forward slashes, backslashes, surrounded by quotes), I get this:

0:00:00.286679100 32892 000001FA2B516840 WARN                 default gstdebugutils.c:882:gst_debug_bin_to_dot_file: Failed to open file 'c:\temp\gstreamer_dot\ \0.00.00.286575800-gst-launch.NULL_READY.dot' for writing: No such file or directory
0:00:00.287201400 32892 000001FA2B516840 WARN                 default gstdebugutils.c:882:gst_debug_bin_to_dot_file: Failed to open file 'c:\temp\gstreamer_dot\ \0.00.00.287084500-gst-launch.READY_PAUSED.dot' for writing: No such file or directory

I’m thinking this doesn’t work on Windows. It looks like it’s adding a space between GST_DEBUG_DUMP_DOT_DIR and the filename, so maybe that’s a bug? Odd, since I thought I’ve done a Graphviz dump on Windows before, though with a much older GStreamer version.

Do you think this linking issue is a bug in GStreamer? The simplified pipeline I’m using for testing is something I think most people would try as a first step just to watch the video from an rtsp stream, but there’s no real indication that it’s unreliable when the source has both an audio and a video stream. At the very least, maybe the rtspsrc docs could be updated to explain it?

I’m still trying to understand why the caps filter doesn’t just ignore the audio pad based off media=video, but I’ll admit to not having a very good idea what goes on behind the scenes with this.

It’s kind of unfortunate the stream pad templates aren’t of the form stream_video_%u and stream_audio_%u. Then maybe I could make this work without having to do any coding. I suppose the solution is going to be to remove the linking I’m doing in the pipeline between the rtspsrc and the audio and video elements and then as pads are added, do the linking in code.

You probably forgot to create this folder, and you likely want to drop the sub-directory named with a single space (" ").

Already considered it. Created the folder. Made no difference.

This was it. Took me a long time to realize what was going on. I changed my test script to do this:

start "" cmd /c "set GST_DEBUG=3 && set GST_DEBUG_DUMP_DOT_DIR=c:\temp\gstreamer_dot && gst-launch-1.0 -v rtspsrc location=rtsp://10.16.14.207:554/s0 name=src ! rtph264depay ! decodebin ! autovideosink > output_simple2.txt 2> error_simple2.txt"

Note the space between c:\temp\gstreamer_dot and the &&. Removing the space fixed the issue. I guess I shouldn’t be surprised: it makes sense that an environment variable value can contain trailing spaces, and the space before the && counts as trailing. Now that I have that fixed, I’ll run it until failure and see what it looks like.
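To convince myself, here’s what cmd.exe effectively does with that line: everything between the = and the && becomes the value, space included, which is why the dot files were written under a subdirectory named " ". (This is just a reconstruction of the parsing in Python, not cmd itself.)

```python
# Reconstruction of the cmd.exe gotcha: the value assigned by `set`
# runs all the way up to the `&&`, so the trailing space is kept.
cmdline = r"set GST_DEBUG_DUMP_DOT_DIR=c:\temp\gstreamer_dot && gst-launch-1.0 ..."
assignment = cmdline.split("&&")[0]        # "set GST_DEBUG_DUMP_DOT_DIR=c:\temp\gstreamer_dot "
value = assignment.split("=", 1)[1]        # note the trailing space
print(repr(value))                         # prints 'c:\\temp\\gstreamer_dot '
```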