I am a little confused by the GStreamer hlssink/hlssink2/hlssink3 elements. I have questions about how play/pause and seeking (i.e. dragging the slider and pausing the video from the browser) work for non-live sources.
In my use case, I have an mp4 file and I want to enable HLS playback. What’s confusing to me is that the documentation for hlssink2 specifically calls out that the element is not an HTTP server, and that playback is supported by pointing an HTTP server at the generated playlist file (easy enough, and there are plenty of examples online). However, the hlssink elements also support a max-files setting that effectively controls the amount of video available in the playlist at any given time (to save disk space, I assume). So if the client video player drags the slider to a location outside of the current playlist, how is this handled, if at all?
The max-files thing is usually only used for live streams.
A VOD playlist would be expected to be static, with all fragments available.
You could of course write a fancy server that generates such VOD fragments from your mp4 file on the fly as they are requested, but the hlssink* elements do not support that mode of operation.
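To make the distinction concrete, here is a rough sketch (in Python; the function names and 6-second segment duration are made up for illustration, this is not GStreamer API) of what the two playlist styles look like. A VOD playlist lists every segment and ends with #EXT-X-ENDLIST, while a sliding-window live playlist, which is roughly what max-files gives you, only advertises the most recent segments and advances #EXT-X-MEDIA-SEQUENCE as old ones are dropped:

```python
def vod_playlist(num_segments, segment_duration=6):
    """Static VOD playlist: every segment is listed, and the list is final."""
    lines = [
        "#EXTM3U",
        "#EXT-X-VERSION:3",
        f"#EXT-X-TARGETDURATION:{segment_duration}",
        "#EXT-X-PLAYLIST-TYPE:VOD",
        "#EXT-X-MEDIA-SEQUENCE:0",
    ]
    for i in range(num_segments):
        lines.append(f"#EXTINF:{segment_duration:.3f},")
        lines.append(f"segment{i:05d}.ts")
    lines.append("#EXT-X-ENDLIST")  # tells the player the stream is complete
    return "\n".join(lines)


def live_window_playlist(next_segment, max_files, segment_duration=6):
    """Sliding window: only the last max_files segments are advertised."""
    first = max(0, next_segment - max_files)
    lines = [
        "#EXTM3U",
        "#EXT-X-VERSION:3",
        f"#EXT-X-TARGETDURATION:{segment_duration}",
        f"#EXT-X-MEDIA-SEQUENCE:{first}",  # players use this to realign
    ]
    for i in range(first, next_segment):
        lines.append(f"#EXTINF:{segment_duration:.3f},")
        lines.append(f"segment{i:05d}.ts")
    # no #EXT-X-ENDLIST: the player keeps polling for playlist updates
    return "\n".join(lines)
```

With the sliding-window style, a player that tries to seek earlier than the first advertised segment has nothing left to fetch, which is why max-files is really a live-streaming knob rather than something a seekable VOD setup should rely on.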
Is there any GStreamer plugin/element/sink that you are aware of that would provide browser playback of an MP4 file with pause/play/seeking support, without having to forward these events to the pipeline manually via a “fancy server”?
I specifically mention browsers to exclude protocols such as RTSP.
Have you tried normal HTML5 video with the mp4 file directly?
I’m not quite sure what the issue is with VOD HLS or DASH. Is it that you want to save disk space and avoid having both the original file and an fMP4 version/copies on the server? Why do you need to have both? Or is it just that you need to make an mp4 file available immediately, without any preprocessing steps?
The system is provided an mp4 from an external service.
The system runs on the edge and is resource-constrained. Some of the videos can be 30 minutes long, in FHD (1080p) or UHD (2160p). Preprocessing these into HLS or DASH would take a while, and I want users to be able to start playback immediately.
I have to be able to add a custom processing step to add a watermark. Currently, this is a company-proprietary GStreamer plugin.
Disk space is not an issue. I just need something that would provide immediate playback with the custom GStreamer step mentioned above.
Currently, the watermark is applied by decoding to raw. I didn’t even know that a watermark could be added to the encoded bitstream?! Is there a GStreamer plugin/open-source tool capable of this?
No, usually it’s applied to a raw stream. I was just asking to make sure we’re on the same page, sometimes people use the same terminology for different things.
Let’s say I could add the “fancy server” logic, as I am in control of the server code as well. What would this look like? Could it be as simple as intercepting the seek event in the browser and passing it on to the server-side GStreamer pipeline? Would hlssink*/webrtcsink handle this appropriately (and would the receiving browser client handle it), or would the logic be more involved?
I guess I just want to understand whether what I am doing goes against the norm, and how much custom code/logic would be involved. Is there a GStreamer plugin that already solves this problem, before I go and implement my own?
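In case it helps frame the “fancy server” option: the usual trick is not to forward browser seek events to a long-running pipeline at all. Instead you publish a complete static VOD playlist up front (segment timings are cheap to compute from the file duration) and transcode each segment lazily the first time a player requests it, mapping the segment index back to a time offset in the source mp4. A rough sketch of that mapping (all names and the fixed 6-second segment duration are hypothetical):

```python
import re

SEGMENT_DURATION = 6.0  # seconds per segment, fixed when the playlist is generated


def segment_time_range(request_path, total_duration):
    """Map a requested segment file to the (start, duration) slice of the
    source mp4 that a per-segment transcode would need to produce.

    Returns None if the path is not a segment or lies past the end.
    """
    m = re.fullmatch(r"segment(\d+)\.ts", request_path)
    if not m:
        return None
    index = int(m.group(1))
    start = index * SEGMENT_DURATION
    if start >= total_duration:
        return None
    duration = min(SEGMENT_DURATION, total_duration - start)
    return (start, duration)
```

On a cache miss the server would run a short pipeline seeked to `start` (conceptually: filesrc ! decode ! your watermark element ! encode ! mux) for just that slice, cache the result, and serve it. A seek in the browser then becomes an ordinary HTTP GET for a different segment, with no seek events ever reaching a pipeline. As noted earlier in the thread, the hlssink* elements don’t implement this lazy mode, so that part would be custom logic.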