I would like to get the running time in the need-data callback.
I can’t use gst_segment_to_running_time because I don’t know how to obtain the segment.
Maybe it’s better if I explain the problem I’m facing: my pipeline receives an H.264 video stream and sends it to a splitmuxsink with muxer-factory=mpegtsmux. splitmuxsink splits the file for me every X MB, with a progressive number in the filename. There is also an appsrc that pushes buffers as subtitle_0 to the muxer. Every second I have to write a timestamp and the progressive number of the current video file to a log, so that in the future I can associate a timestamp with a certain video file.
So my idea is to get the running time in the appsrc callback and compare it with the running time in the “format-location-full” callback of splitmuxsink, because that is the place where I’m sure the current video file is being closed and the new one starts.
I’m not sure I really understand the problem you’re trying to solve or what you’d like to achieve - could you rephrase your explanation, or describe the behaviour you get now that you don’t want?
There might be considerable buffering between the source and the output of splitmuxsink.
Just to be sure, there’s no obligation to use the “need-data” callback at all.
You can just push a buffer into appsrc whenever you have a new one and ignore the “need-data” stuff entirely, and if your input isn’t paced at 30 Hz you can set up a timer, I suppose. I’m not sure whether GstBaseSrc/GstAppSrc will do the pacing for you even if you set the duration on the buffer - I think they will just emit “need-data” as soon as the buffer has been pushed and processed downstream?
I chose the “need-data” approach to get, let’s say, a pull mechanism instead of a push one: rather than using a timer, I prefer that GStreamer asks me for more data at the right time, according to the pipeline clock. My appsrc generates a KLV stream with a wallclock time inside, and it gets muxed through the “subtitle” pad of splitmuxsink. So one of my needs is to keep the video and the KLV synchronized.