Hi, I have a pipeline where I receive an RTMP stream, demux it, and replace the audio track with another one coming from WebRTC. The problem is that there can be a significant delay between when the audio and the video become available: audio is available almost immediately, while video might only appear after about 10 seconds. Because of this gap, I get "backwards dts" errors from flvmux:
0:00:13.113041667 27518 0x1348b1800 WARN flvmux gstflvmux.c:1289:gst_flv_mux_buffer_to_tag_internal:<muxer:video> Got backwards dts! (0:00:01.633000000 < 0:00:12.423000000)
What is the correct way to handle this? The first solution that comes to mind is to drop all audio buffers until the video starts, but that seems cumbersome. The second is to somehow combine audiomixer/fallbackswitch/input-selector, which is just as ugly as the first.
Here is a simplified version of my pipeline (in the real setup I use the LiveKit Go SDK with appsrc instead of livekitwebrtcsrc):
gst-launch-1.0 -e \
multiqueue name=mq max-size-buffers=0 max-size-bytes=0 max-size-time=0 \
urisourcebin uri="rtmp://localhost:1934/live/test" parse-streams=true use-buffering=true name=demux \
demux. ! queue max-size-buffers=0 max-size-bytes=0 max-size-time=0 ! mq.sink_0 mq.src_0 ! muxer. \
demux. ! queue max-size-buffers=0 max-size-bytes=0 max-size-time=0 ! decodebin3 ! audioconvert ! queue max-size-buffers=0 max-size-bytes=0 max-size-time=0 ! \
livekitwebrtcsink \
signaller::ws-url=ws://localhost:7880 \
signaller::auth-token=<token> \
audio-caps='audio/x-opus' message-forward=true \
livekitwebrtcsrc async-handling=true \
signaller::ws-url=ws://localhost:7880 \
signaller::auth-token=<token> \
signaller::producer-peer-id=processed_testIdentity \
! decodebin3 ! audioconvert ! avenc_aac ! mq.sink_1 mq.src_1 ! muxer. \
flvmux name=muxer ! queue max-size-buffers=0 max-size-bytes=0 max-size-time=0 ! rtmp2sink location=rtmp://localhost:1934/live/test-remux async=false
Thank you.