Queue fill priority

I’m creating a pipeline that connects to a camera and runs some AI. The AI is sometimes slower than the production line, so I added a leaky queue that drops buffers.
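Roughly, the topology looks like this (a simplified sketch; the element names, the URL, and the AI stand-in are placeholders rather than my real pipeline):

```shell
# Simplified sketch of the pipeline shape (placeholders, not the real pipeline).
# leaky=downstream makes the queue drop its oldest buffers when it is full.
gst-launch-1.0 rtspsrc location=rtsp://camera/stream protocols=tcp ! \
    rtph264depay ! h264parse ! nvv4l2decoder ! \
    queue max-size-buffers=16 leaky=downstream ! \
    identity sleep-time=100000 ! fakesink   # identity stands in for the slow AI
```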

The problem is that there is an rtpjitterbuffer with its own internal queue, and that queue fills up before mine does.

I’m sure the bottleneck is not the decoder (I’m using the NVIDIA one, which can decode much more than this stream requires).

The behaviour I’m witnessing is that GStreamer prefers to fill up the queues at the beginning of the pipeline rather than the one that is allowed to leak.

I don’t want to drop packets before the decoder, because dropping a delta frame would corrupt the decoded output; I only want to drop frames after they have been decoded.

Is there a way to force the pipeline to prioritize some queues over others?

I have noticed that every queue, the decoder, and the rtpjitterbuffer source pad each run their own streaming task. Should I give those tasks higher priority? And if so, is that a correct practice?

Thanks to everyone who may help me with that.


I’m not sure what exactly you’re observing here or what the problem is.

The rtpjitterbuffer does queue data internally, but it will typically output packets as soon as possible, unless there’s packet loss, in which case it will queue data and wait for more data up to the specified latency.

Unless it’s working in buffering mode for you, where it will buffer a few seconds of data in advance. Are you seeing buffering messages on the pipeline bus?

Is data being streamed via UDP or UDP multicast in your case or is it interleaved via the RTSP connection?

Hello tpm,
Thanks for the reply!
The rtpjitterbuffer does store data, but for some reason it keeps the data for itself and doesn’t push it downstream.
The RTSP connection is in TCP (interleaved) mode, but I don’t believe that’s the problem.
If I use a very fast AI, or the pipeline is not under stress, everything seems to work pretty well.

The problem only happens when there is congestion. Under congestion, I’ve noticed that the rtpjitterbuffer fills up its queue, and so does the queue before the decoder (16/16 buffers).
But none of the other queues fill up at all, even though they could.

To simulate a slowdown, I added a probe near the end of the pipeline that lets one buffer through every 0.5 secs.
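(For reference, the same slowdown can also be simulated with a stock element instead of a custom probe; this is just an illustrative fragment, not my actual setup:)

```shell
# identity can sleep once per buffer; sleep-time is in microseconds,
# so 500000 ≈ one buffer every 0.5 s.
... ! queue max-size-buffers=16 leaky=downstream ! identity sleep-time=500000 ! fakesink
```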

What I witness is that downstream of the decoder, all the components run at 2 fps.

My queue, which sits after the decoder, never fills up, so it can’t drop frames.

As a direct consequence, the rtpjitterbuffer fills up with packets, and so does the queue before the decoder; neither of them can drop packets (and I don’t want them to).

I hope that makes it clearer.

As far as I can tell the only queue that’s set to leaky here is the GstTycoQueue, which does not look like an upstream GStreamer element, and also it’s set to leaky=tyco whatever that means :slightly_smiling_face:

Yeah, it’s almost a regular queue, and tyco means it can leak.
The problem is that it leaks before the decoder and not after.

If I set it to not leak, the rtpjitterbuffer fills up.

There is also a queue after the decoder; it never fills up, and it never leaks under any setting.

It looks like decoding is the bottleneck, but it isn’t: when the pipeline is not slowed down on purpose at its end (the slowdown is a sleep, so it doesn’t consume any resources), the decoding runs perfectly in time.

Basically, it looks like the decoder is unable to push data when there is congestion at the end of the pipeline. I know the elements are supposed to run in different threads (and they do, because I checked), but this is what happens.

Anyway, if you have any clue about the root cause of the problem, I will be very grateful.
And thank you already for the time you’ve taken analysing it.

I will reply to myself, in case someone else gets stuck like me.

The decoder has a limited number of output surfaces (3–5), and they are shared across the whole pipeline. If none of the surfaces has been freed, the decoder won’t accept any new frame.

That’s why it looks like the pipeline hangs at the decoder.

I have noticed this behaviour since we replaced nvstreammux with the new one, which no longer does the resize.

That means it just forwards the frame without touching it, so the surface is not freed as long as the frame is still in flight downstream.

There are 2 fixes that can be implemented.
The easier one is to raise the decoder’s surface count (num-extra-surfaces) to its maximum, so there are enough surfaces in flight for the leaky queue to actually fill up and drop.
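Something along these lines (a sketch; the value is an example, so check the maximum your platform and decoder version accept):

```shell
# Give the decoder more output surfaces so downstream can hold frames
# (in the leaky queue) without starving the decoder's surface pool.
... ! nvv4l2decoder num-extra-surfaces=24 ! queue max-size-buffers=16 leaky=downstream ! ...
```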

The second solution is to add an nvvideoconvert and force it to perform at least one real conversion; that copies the frame out of the decoder’s surface and removes the restriction. Yes, it forces an extra operation, but then you can have as many buffers as you want and you don’t have to worry about it.
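For example (a sketch; forcing a format change in the output caps is one way to make nvvideoconvert do a real conversion instead of passing the buffer through):

```shell
# The caps filter forces an actual conversion (format change), so the frame
# is copied into a new buffer and the decoder surface is released.
... ! nvv4l2decoder ! nvvideoconvert ! 'video/x-raw(memory:NVMM),format=RGBA' ! queue leaky=downstream ! ...
```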

On another note, I have also noticed that nvinferserver has an unbounded internal queue that will eat up all the extra surfaces if you are not careful.

I hope this helps someone facing the same issue.
I know it’s more of an NVIDIA issue than a generic GStreamer one, but maybe someone can benefit from my experience :slight_smile:

Thanks for helping anyway.