Proper time management

I find it difficult to read the documentation from a media-production perspective. From a media-conversion perspective it becomes easier to wrap my head around the documentation (though not much easier). In terms of time management, I need to accomplish start times (or delays), durations (or lengths), etc. The documentation makes mention of seeks (which seem to allow specifying a duration) and segments. (I’m not interested in using GES.)

As far as media production is concerned, segments seem plausible and would at the very least let me control duration (ignoring any pre-rolling or preparation) and start time, but those settings would only apply to that one segment (or clip), which is useful in its own way. But what if I want the segment to start playing after a few seconds of running time? How would I force a delay at any point within the pipeline? I thought about subclassing GstElement, not letting anything pass through before that time, and perhaps subtracting the delay from the running clock. Whether or not that would work, I don’t know, but it seems to me there should be a better way; I just don’t know what that would be.

And what if I want to enforce a maximum duration on the entire pipeline? Should I subclass GstElement and emit an EOS event when I want everything to end? Again, I suspect there is a better way (assuming the above is even possible), but I wouldn’t know what it is. And that says nothing about whether the pipeline would function with an empty buffer.

How should I go about managing time (without GES) to get what I want?

gst_pad_set_offset() on a source pad of an element in the pipeline should allow you to delay / shift the render time of that stream.

For limiting the playback duration you can specify an end to a seek.

I tried a simple script to test out seeking on a continuous stream, but it always goes past the set end time. Not sure what I’m doing wrong.

import gi
gi.require_version('Gst', '1.0')

from gi.repository import Gst
Gst.init(None)

pipeline = Gst.Pipeline.new()

src = Gst.ElementFactory.make('videotestsrc')
conv = Gst.ElementFactory.make('videoconvert')
sink = Gst.ElementFactory.make('autovideosink')

pipeline.add(src, conv, sink)
Gst.Element.link_many(src, conv, sink)

pipeline.seek(1, Gst.Format.TIME, Gst.SeekFlags.FLUSH | Gst.SeekFlags.KEY_UNIT,
              Gst.SeekType.NONE, 0,
              Gst.SeekType.SET, 7 * Gst.SECOND)

state = pipeline.set_state(Gst.State.PLAYING)
if state == Gst.StateChangeReturn.FAILURE:
    print('Failed to play!')
else:
    pipeline.get_bus().timed_pop_filtered(Gst.CLOCK_TIME_NONE, Gst.MessageType.EOS)
    pipeline.set_state(Gst.State.NULL)

Have you checked the return value of the seek? I suspect it failed because the pipeline was in NULL state still.

Try setting the pipeline to PAUSED state, and wait for it to get there (either with get_state() or by waiting for an ASYNC_DONE message on the bus), and then do the flushing seek, and then set the pipeline to PLAYING.


That did the trick for seeking. However, after further testing, setting an offset on a source pad has an unexpected result. It appears that the first frame of the source gets sent through instead of an empty stream. Is there an easy way to prevent data from being passed through until the offset is reached?

Here’s a reproducible example:

import gi
gi.require_version('Gst', '1.0')

from gi.repository import Gst
Gst.init(None)

pipeline = Gst.Pipeline.new()

src = Gst.ElementFactory.make('videotestsrc')
conv = Gst.ElementFactory.make('videoconvert')
sink = Gst.ElementFactory.make('autovideosink')

pipeline.add(src, conv, sink)
Gst.Element.link_many(src, conv, sink)

srcpad = src.get_static_pad('src')
if srcpad:
    srcpad.set_offset(3 * Gst.SECOND)

state = pipeline.set_state(Gst.State.PLAYING)
if state == Gst.StateChangeReturn.FAILURE:
    print('Failed to play!')
else:
    pipeline.get_bus().timed_pop_filtered(Gst.CLOCK_TIME_NONE, Gst.MessageType.EOS)
    pipeline.set_state(Gst.State.NULL)

You mean the first frame (pre-seek) gets displayed?

If you pick a real video sink (not autovideosink) it will have a show-preroll-frame property which you can set to FALSE.

Yeah, setting an offset of 3 seconds seems to cause the first frame to be displayed for 3 seconds, giving the impression that the video was paused for 3 seconds. (After the first 3 seconds, the video plays normally, as expected.)

Since what you said about using a real video sink implies that a filesink would exhibit the same issue, I ran a test with filesink. This time something else unexpected happened: the offset appears to have been ignored. Is there anything that can be done to have a video start playing later in the pipeline, without staining the output with the first frame?

Unlike video sinks, filesink does not look at buffer timestamps or sync to the clock by default.

You can use filesink sync=true to get similar behaviour though.

There are also fakesink and fakevideosink elements for testing with null sinks.

I tried filesink sync=true, but the offset was still ignored.

In the end I would want to render to file. How come the final sink has so much control over what the upstream sources do? I’m not sure how the fake sinks will help me, especially considering what you previously said about real sinks.

I’m assuming GES implements its own machinery to overcome these issues. But GES is something I would like to avoid (e.g. poor documentation, it seems to be lacking in text features, etc.). Is GES my only option?