GStreamer: overlay video does not restart from frame 0 when re-triggered (decodebin + compositor + alpha)

Hi all,

I’m working on a PyQt5 + GStreamer application with a compositor-based video pipeline. I have multiple video layers (background, numbers, logo, win animation), each with separate alpha videos combined via alphacombine.

The issue I’m struggling with is related to time management of a single overlay branch.


Pipeline description

The pipeline is static and roughly looks like this:

filesrc → decodebin → videoconvert → alphacombine → videoconvert → glupload → gltransformation → gldownload → compositor

The win overlay is connected to compositor.sink_2 and its visibility is controlled using the compositor pad alpha property.
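
For clarity, here is a stripped-down sketch of how the win branch is wired up (the file names are placeholders; the other three layers are built the same way and are omitted here):

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# Simplified win branch only; the real pipeline has three more branches
# (background, numbers, logo) feeding the same compositor.
pipeline = Gst.parse_launch(
    "compositor name=comp ! videoconvert ! autovideosink "
    "alphacombine name=wincombine ! videoconvert ! glupload ! "
    "gltransformation ! gldownload ! comp.sink_2 "
    "filesrc location=win_fill.mov ! decodebin ! videoconvert ! wincombine.sink "
    "filesrc location=win_alpha.mov ! decodebin ! videoconvert ! wincombine.alpha "
)
pipeline.set_state(Gst.State.PLAYING)

# The overlay is hidden by default; visibility is controlled purely via
# the compositor pad's alpha property.
win_pad = pipeline.get_by_name("comp").get_static_pad("sink_2")
win_pad.set_property("alpha", 0.0)
```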

The main pipeline runs continuously in PLAYING.


Current behavior

The win animation branch starts decoding and playing in the background as soon as the pipeline is set to PLAYING, and keeps running from then on.

When an external event (spin_complete_event) is triggered:

  • I simply set compositor.sink_2::alpha = 1.0

  • This reveals the win animation at whatever timestamp it currently is

Because the animation is ~10 seconds long and continuously running, the visual result is that the win animation always appears at a seemingly random frame depending on when the trigger happens.
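
The trigger handling itself is nothing more than this (self.compositor is the compositor element of the pipeline above):

```python
def on_spin_complete_event(self):
    # Reveal the win overlay. The branch has been decoding since the
    # pipeline went to PLAYING, so this shows whatever frame it is on now.
    sink_pad = self.compositor.get_static_pad("sink_2")
    sink_pad.set_property("alpha", 1.0)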

Important clarification:

The animation does not start from frame 0, not even the first time. The alpha is enabled at an arbitrary moment while the animation is already playing.


What I have already tried

  • Pausing / playing the win branch

  • Toggling compositor alpha on/off

  • Keeping the win branch in PAUSED until triggered

None of these reliably reset the decoder position to frame 0 without affecting the rest of the pipeline.

Seeking the entire pipeline is not an option, because all other layers must keep running uninterrupted.


What I want to achieve

Each time the win overlay is triggered:

  • It should start exactly from frame 0

  • It should play exactly once

  • Other layers must continue uninterrupted

  • No pipeline rebuilds (GL + compositor must remain stable)


Questions

  1. Is there a supported / idiomatic way in GStreamer to restart only one branch of a running pipeline?

  2. Is a segment seek on just the win branch the correct solution here? (A rough sketch of what I have in mind is below the questions.)

  3. Should the win overlay be isolated into a GstBin with ghost pads to allow local seeking?

  4. Is there any way to reset decodebin state for a single stream without tearing down the pipeline?
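
To make question 2 concrete, this is roughly what I imagined trying: a flushing segment seek sent only into the win branch, via the src pad of its last element before the compositor (gldownload), rather than seeking the whole pipeline. I am unsure how the compositor would handle the resulting running times:

```python
from gi.repository import Gst

def replay_win_branch(win_gldownload):
    # FLUSH: drop whatever the branch is currently decoding.
    # SEGMENT: post SEGMENT_DONE instead of EOS when the clip finishes once.
    seek = Gst.Event.new_seek(
        1.0, Gst.Format.TIME,
        Gst.SeekFlags.FLUSH | Gst.SeekFlags.SEGMENT,
        Gst.SeekType.SET, 0,
        Gst.SeekType.NONE, -1,
    )
    win_gldownload.get_static_pad("src").send_event(seek)
```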


Extra context

  • Platform: Linux

  • GStreamer 1.x

  • PyGObject

  • Heavy use of GL elements (glupload, gltransformation, compositor)

  • Real-time UI application (freezes or pipeline reconfiguration are not acceptable)


I’m mainly looking for best practices here. I understand that decodebin will keep decoding once the pipeline is in PLAYING, but I’m unsure what the correct architectural solution is for replaying a single overlay animation from the beginning.

Any insights would be greatly appreciated. Thanks!

“the animation is ~10 seconds long and continuously running”

How do you implement this “continuously running” at the moment? Do you do the looping yourself?

I assume you have other inputs going into the compositor already? What are they? Are they live sources or file/locally generated?


I may have misunderstood the problem or what you’re trying to achieve, but here are some possible approaches / solutions that come to mind:

  1. Assuming you have other inputs going into the compositor already, just don’t create the input pad for your on-demand overlay from the start. (I’m not sure this will work in your case; it depends on the sources involved, and you may have to force an output format with alpha on the compositor.) Instead, create the pad only when you want to feed the clip in, in combination with a gst_pad_set_offset() on a source pad upstream of the compositor, configured to the current position / running time of the compositor (which you can query). That way timestamp 0 from the file should feed into the compositor “now”. Whether this is workable depends on how long that decodebin/OpenGL branch takes to produce a first buffer, and on how promptly the overlay has to kick in, I suppose. (See the first sketch after this list.)

  2. If the clip loops continuously and you know its exact length/period, you could query the compositor’s current position when you want to feed the clip in, calculate the remainder until the next repeat/start from that, and then use gst_pad_set_offset() to shift the running time so that the compositor basically throws away the rest of the current loop iteration until the next start. This only works if your input branch can produce data much faster than real time on the machine you’re deploying on (although you could add queues for generous buffering in the right places, depending on how much RAM you have to play with).

  3. You could use something like intervideosink and intervideosrc. You may have to force intervideosrc to the exact output format you want to feed into the compositor, but it should just output repeated black frames while there’s no input on the intervideosink. When you want to feed the clip, you make a separate pipeline such as uridecodebin ! ... ! intervideosink and let it play once to EOS; the frames then reach the compositor via the intervideosrc. The challenge here, I suppose, is flipping the compositor alpha property at “the right time”. You can probably do that by adding a pad probe on the intervideosrc and checking for the GAP flag on the buffers coming out of it, which indicates repeated black frames: when you get the first non-GAP buffer, set the compositor sink pad alpha property as you like. (Although there might still be a bit of queueing inside compositor, so perhaps it needs to be done differently, e.g. using the compositor’s “samples-selected” signal and checking the next/pending buffer on your clip’s input pad or some such.) (See the second sketch after this list.)
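
Roughly what I mean for (1), as an untested sketch (comp is your compositor, win_src_pad the src pad of the last element of your overlay branch before the compositor; this assumes the branch itself is already set up and ready to produce data at this point):

```python
from gi.repository import Gst

def attach_win_overlay(pipeline, comp, win_src_pad):
    # Current running time of the pipeline, i.e. where the compositor is now.
    clock = pipeline.get_clock()
    running_time = clock.get_time() - pipeline.get_base_time()

    # Shift the overlay branch's timestamps forward so that buffer PTS 0
    # maps to the compositor's current running time.
    win_src_pad.set_offset(running_time)

    # Only now create the compositor input pad and link the branch to it.
    comp_pad = comp.get_request_pad("sink_%u")  # request_pad_simple() on >= 1.20
    comp_pad.set_property("alpha", 1.0)
    win_src_pad.link(comp_pad)
    return comp_pad
```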
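
And for (3), a rough sketch of both halves (channel name and URI are made up; the intervideosrc is assumed to already be part of the main pipeline and feeding the compositor, and the side pipeline relies on a GLib main loop, or equivalent bus handling, for the EOS message):

```python
from gi.repository import Gst

def watch_intervideosrc(intervideosrc, comp_pad):
    # Toggle the compositor pad alpha depending on whether intervideosrc
    # is repeating black GAP frames or passing real clip frames.
    def probe_cb(pad, info):
        buf = info.get_buffer()
        is_gap = buf.has_flags(Gst.BufferFlags.GAP)  # GAP == repeated black frame
        comp_pad.set_property("alpha", 0.0 if is_gap else 1.0)
        return Gst.PadProbeReturn.OK

    return intervideosrc.get_static_pad("src").add_probe(
        Gst.PadProbeType.BUFFER, probe_cb)

def play_win_clip_once(uri, channel="win-overlay"):
    # Short-lived side pipeline; its frames reach the main pipeline through
    # an intervideosrc configured with the same channel name.
    clip = Gst.parse_launch(
        "uridecodebin uri=%s ! videoconvert ! videoscale ! "
        "intervideosink channel=%s" % (uri, channel))
    bus = clip.get_bus()
    bus.add_signal_watch()
    bus.connect("message::eos", lambda bus, msg: clip.set_state(Gst.State.NULL))
    clip.set_state(Gst.State.PLAYING)
    return clip
```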

Just some things you could try