I have the following pipeline to stream a number of files over webrtc to a web browser.
```
webrtcbin name=webrtcbin
splitmuxsrc name=sms
sms.video_0 ! parsebin ! video/x-h264 ! rtph264pay pt=96 ! webrtcbin.
fallbackswitch name=audio_fallback immediate-fallback=true ! audioconvert ! audioresample ! opusenc ! rtpopuspay ! application/x-rtp,media=audio,encoding-name=OPUS,payload=97 ! webrtcbin.
sms.audio_0 ! decodebin ! identity sync=true ! audio_fallback.sink_0
audiotestsrc wave=silence ! audio_fallback.sink_1
```
I’m providing the list of files to the splitmuxsrc via the format-location signal.
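For reference, the handler is roughly like this (a minimal Python sketch; the `files` list and the example paths are made up for illustration, and the connect call is shown only as a comment):

```
# Hypothetical fragment list; in the real application this comes from elsewhere.
files = [
    "D:/VideoRecording/Test3/clip_0001.mp4",
    "D:/VideoRecording/Test3/clip_0002.mp4",
]

def on_format_location(splitmux):
    # splitmuxsrc's "format-location" signal expects the full, ordered list of
    # fragment paths; it stitches them into a single timeline.
    return list(files)

# Connected with something like:
#   sms.connect("format-location", on_format_location)
```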
This all works fine, but now I have a requirement to support seeking. My understanding is that since the browser sees a live stream rather than a file, the HTML video element won’t show a seek bar, so I implemented my own seek control. I just don’t know what to do on the server side to actually perform the seek.
I’ve tried seeking on the pipeline and on the splitmuxsrc, but neither does anything. I suspect I need to somehow convince the splitmuxsrc to switch to the appropriate file and seek within it, but I haven’t been able to figure out how to do that.
Any suggestions? Thanks.
I’ve been playing around with a non-webrtc pipeline:
```
splitmuxsrc location=D:/VideoRecording/Test3/*.mp4 name=src src.video_0 ! decodebin ! autovideosink
```
This will seek with this code:
```
static void SeekRelative(int seconds)
{
    // Query the current playback position in nanoseconds.
    if (!pipeline.QueryPosition(Format.Time, out var position))
    {
        Console.WriteLine("Failed to query position");
        return;
    }

    // Offset by the requested number of seconds, clamping at the start.
    var newPos = position + seconds * Constants.SECOND;
    if (newPos < 0)
    {
        newPos = 0;
    }

    // Flushing seek to the nearest keyframe.
    pipeline.SeekSimple(Format.Time, SeekFlags.Flush | SeekFlags.KeyUnit, newPos);
}
```
The same seeking code doesn’t work with the original pipeline. My gut says it’s the webrtcbin. I found a project at https://github.com/hith3sh/PyStreamRTC that suggests it does what I want. I haven’t set up Python to test it yet, but I did compare it to my code and I’m not really sure what the difference is. I suppose the next step is to verify that the Python code works, but I was hoping it would be easy for someone to spot what’s different.
I also went down the path of restarting the entire pipeline and skipping ahead to the seek position before the pipeline goes into play state, but that didn’t seem to work either.
Well, after playing with PyStreamRTC and working it out with ChatGPT, I came up with this:
```
def seek(self, time_sec):
    logging.info(f"Seeking to {time_sec} seconds")

    # Pause first, then wait for the state change to complete before seeking.
    self.pipeline.set_state(Gst.State.PAUSED)
    self.pipeline.get_state(Gst.CLOCK_TIME_NONE)

    # Flushing, keyframe-aligned seek to an absolute position.
    success = self.pipeline.seek(
        1.0, Gst.Format.TIME,
        Gst.SeekFlags.FLUSH | Gst.SeekFlags.KEY_UNIT,
        Gst.SeekType.SET, time_sec * Gst.SECOND,
        Gst.SeekType.NONE, 0)
    if not success:
        logging.error("Seek failed")
        return

    self.pipeline.set_state(Gst.State.PLAYING)
```
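To combine this with the relative-seek logic from the C# version, the target-position arithmetic is plain integer math on nanoseconds. A small helper (purely illustrative; the function name and clamping behavior are my own) could look like:

```
GST_SECOND = 1_000_000_000  # Gst.SECOND: nanoseconds per second

def clamped_seek_target(position_ns, delta_sec, duration_ns=None):
    # Compute an absolute seek target from the current position plus an
    # offset in seconds, clamped to [0, duration] when the duration is known.
    target = position_ns + delta_sec * GST_SECOND
    if target < 0:
        target = 0
    if duration_ns is not None and target > duration_ns:
        target = duration_ns
    return target
```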
The strange thing is I remember attempting to pause the pipeline and do the seek in my actual application and it didn’t work, so it will be interesting to see if this solution does work.
Ok, I believe I have this all sorted out. I think the fallbackswitch and/or audiotestsrc was what messed me up. I’ve moved to checking whether the file has audio and using that to choose between a video-only pipeline and an audio/video pipeline.
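For anyone following along, the selection is just string assembly over the original launch description. A rough sketch (assuming the fallbackswitch/audiotestsrc branch is dropped entirely, and that `has_audio` is determined elsewhere, e.g. by probing the file):

```
def build_pipeline_description(has_audio):
    # The video branch is always present.
    desc = ("webrtcbin name=webrtcbin "
            "splitmuxsrc name=sms "
            "sms.video_0 ! parsebin ! video/x-h264 ! rtph264pay pt=96 ! webrtcbin. ")
    if has_audio:
        # Audio is wired straight from splitmuxsrc, with no fallbackswitch.
        desc += ("sms.audio_0 ! decodebin ! audioconvert ! audioresample ! "
                 "opusenc ! rtpopuspay ! "
                 "application/x-rtp,media=audio,encoding-name=OPUS,payload=97 ! webrtcbin. ")
    return desc
```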