Looking for debugging suggestions

Hi,

I’m trying to tackle a long-standing issue and have run out of ideas about what it might be. I’m writing a very basic HLS streaming player.

I’m running into an issue where the player sometimes stops playing video, audio, or both. I’m pretty sure it happens when a BUFFERING message arrives on the bus and the application PAUSES the pipeline, then sets it back to PLAYING once the buffering level has reached 100 again.

The pipeline always returns to PLAYING just fine, but the stream never recovers. I believe the HTTP fragment requests are still happening.
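For reference, my buffering handling is essentially the textbook pattern; a simplified sketch (the function and variable names are placeholders for my actual code):

```c
#include <gst/gst.h>

/* Simplified sketch of my buffering handling; names are placeholders. */
static gboolean
on_bus_message (GstBus *bus, GstMessage *msg, gpointer user_data)
{
  GstElement *pipeline = GST_ELEMENT (user_data);

  if (GST_MESSAGE_TYPE (msg) == GST_MESSAGE_BUFFERING) {
    gint percent = 0;

    gst_message_parse_buffering (msg, &percent);

    /* Pause while buffering, resume once we are back at 100%. */
    if (percent < 100)
      gst_element_set_state (pipeline, GST_STATE_PAUSED);
    else
      gst_element_set_state (pipeline, GST_STATE_PLAYING);
  }

  return TRUE;
}
```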

I’ve tried various video sinks and decoders; it doesn’t seem to make a difference.

I have two .dot graphs attached, “working” and “crash”. “working” is as expected during normal operation; “crash” is after a PAUSED/PLAYING cycle which stopped audio/video playback.

Appreciate any ideas. Cheers.


Attaching the “working” one as well. The forum didn’t let me attach it in the original post.

It seems that in the above one the multiqueue has no audio buffers, while in the below one you have some. IIRC both sinks need to be active for the pipeline to keep running, i.e. if the audio sink is starving, the video sink will wait for it. I don’t often use such pipelines, so I don’t know for sure.

Did you try running video only? I would also try downloading the entire video and running it in a simpler pipeline with GST_DEBUG=4.

Why do you have the video decoder outside of decodebin3?

Can you first try without the external decoder (and without modifying the caps of uridecodebin3)?

Never mind, I just saw the issue. You always need queue elements directly before your sinks.
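Something along these lines (a rough sketch; the decoder/sink variables are stand-ins for whatever your pipeline already has):

```c
/* Rough sketch: each sink branch gets its own queue (and therefore its
 * own streaming thread) directly in front of the sink. The decoder and
 * sink variables are placeholders for your existing elements. */
GstElement *video_queue = gst_element_factory_make ("queue", NULL);
GstElement *audio_queue = gst_element_factory_make ("queue", NULL);

gst_bin_add_many (GST_BIN (pipeline), video_queue, audio_queue, NULL);

gst_element_link_many (video_decoder, video_queue, video_sink, NULL);
gst_element_link_many (audio_decoder, audio_queue, audio_sink, NULL);
```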

Thanks for the suggestions so far. I tried additional queues outside the bin. I guess right after the bin is fine, or do they really have to be immediately before the sink? (If so, I don’t understand why, as I’d think that as long as that branch has its own thread/queue it should be happy.)

I configured them to hold 10 seconds of data; the video queue hits that limit and starves. Still, the multiqueue in the bin doesn’t seem to have any audio at all. Same for the queue I added in the audio branch.
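Concretely, I size the extra queues like this (sketch; `queue` is one of the added queue elements):

```c
/* 10 seconds of data, with the byte and buffer limits disabled so that
 * only the time limit applies. */
g_object_set (queue,
    "max-size-time", (guint64) 10 * GST_SECOND,
    "max-size-bytes", 0,
    "max-size-buffers", 0,
    NULL);
```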

So where is the audio data? It feels to me like it is getting lost inside the uridecodebin.

I use the decoder outside because I’m switching between different hardware and software decoders on different platforms; it makes testing easier for me. I think I tried just letting the decodebin do all the work and hit the same issue, but I would have to test that again to be sure. Either way, I don’t understand why the current design should cause an issue(?).

I tried fiddling with the queue sizes of the multiqueue so it holds more data, and played with the sync and interleave options, but I don’t think this is going to help. Whenever I inspect the pipeline during a stall, there is just no sign of audio buffers anywhere.

Do you have those issues with a regular playbin3 pipeline? Do you have the issue with gst-play-1.0 --use-playbin3 ...? If so, please file an issue with the stream URI.

The fact that there isn’t any audio data doesn’t make sense if hlsdemux2 posted a 100% buffering status.

As for choosing specific decoders depending on the platform, you can override the rank of your preferred decoder elements with the GST_PLUGIN_FEATURE_RANK environment variable, and decodebin3 will then pick the highest-ranked one. See: Running GStreamer Applications
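For example, GST_PLUGIN_FEATURE_RANK=vah264dec:MAX from the shell, or roughly the equivalent in code (a sketch; “vah264dec” is only a stand-in for your preferred decoder):

```c
/* Roughly the programmatic equivalent of GST_PLUGIN_FEATURE_RANK: raise
 * the rank of the preferred decoder so decodebin3 auto-plugs it.
 * "vah264dec" is just an example name. */
GstPluginFeature *feature =
    gst_registry_lookup_feature (gst_registry_get (), "vah264dec");

if (feature != NULL) {
  gst_plugin_feature_set_rank (feature, GST_RANK_PRIMARY + 1);
  gst_object_unref (feature);
}
```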

Yeah, I can definitely trigger it with gst-play-1.0. Under normal operation it plays fine; I guess the main difference is that it just does a recalculate_latency() on a LATENCY message, while I pause the stream on a BUFFERING message.

If I do the same in my application, it seems to not hang. I need to check whether it handles constant buffer under-runs well in that case…
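Roughly, in my bus handler (sketch):

```c
/* Mirror what gst-play-1.0 does: instead of pausing on BUFFERING, just
 * redistribute latency when a LATENCY message arrives on the bus. */
if (GST_MESSAGE_TYPE (msg) == GST_MESSAGE_LATENCY)
  gst_bin_recalculate_latency (GST_BIN (pipeline));
```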

But when I start toggling PAUSE/PLAY in gst-play-1.0 with the space bar, I can make it stall in a similar way to what I experience.

Do you have a public URL this can be reproduced with?

Which version of GStreamer are you using?

I have opened an issue here: