I’m using splitmuxsink with muxer = default (mp4mux) and sink = awss3sink to upload 15-sec segments of a stream to GCS (google cloud storage).
Let’s say my stream is from 01:05:00 UTC to 01:05:39 UTC
In this case, my pipeline is creating 2 segments from
- 01:05:00 UTC - 01:05:15 UTC
- 01:05:15 UTC - 01:05:30 UTC
However, the remaining 9 seconds (01:05:30 UTC - 01:05:39 UTC) are lost.
Is there some straightforward way to get 3 files of 15 sec, 15 sec and 9 sec respectively? Basically, I want to flush all the remaining data to GCS.
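One approach I'm considering is sending EOS to the pipeline on shutdown and waiting for the EOS message on the bus before tearing down, since splitmuxsink finalizes the in-progress fragment when EOS reaches it. A minimal sketch, assuming `self->pipeline` (a name I'm using here for illustration) is the top-level pipeline:

```c
/* Sketch: flush the partial segment by sending EOS and waiting
 * for it to drain before setting the pipeline to NULL. */
static void
stop_and_flush (GstElement *pipeline)
{
  GstBus *bus = gst_element_get_bus (pipeline);
  GstMessage *msg;

  /* EOS propagates downstream; splitmuxsink should finalize the
   * current (short) fragment instead of discarding it. */
  gst_element_send_event (pipeline, gst_event_new_eos ());

  /* Block until EOS (or an error) reaches the bus. */
  msg = gst_bus_timed_pop_filtered (bus, GST_CLOCK_TIME_NONE,
      GST_MESSAGE_EOS | GST_MESSAGE_ERROR);
  if (msg)
    gst_message_unref (msg);
  gst_object_unref (bus);

  /* Only now is it safe to tear down. */
  gst_element_set_state (pipeline, GST_STATE_NULL);
}
```

Would this be the recommended way, or does awss3sink need anything extra to complete the multipart upload on EOS?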
One hack is to decrease the max-size-time of a segment from 15 sec to 2 sec to minimize the loss. However, I want to discuss some alternative approaches.
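Another alternative I've seen mentioned is splitmuxsink's "split-now" action signal, which forces the current fragment to close early; emitting it just before shutdown might capture the trailing seconds without shrinking max-size-time. A hedged sketch:

```c
/* Sketch: ask splitmuxsink to close the in-progress fragment.
 * "split-now" is an action signal on splitmuxsink; the split
 * happens at the next keyframe boundary, so pairing it with
 * send-keyframe-requests=TRUE should keep the cut tight. */
g_signal_emit_by_name (self->split_mux_sink, "split-now");
```

I'm not sure whether this is safe to combine with an immediate state change afterwards, though.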
These are the settings for awss3sink:
g_object_set (G_OBJECT (self->gcs_sink),
    "access-key", self->gcs_access_key,
    "bucket", "livestream-recording-service-prod-bucket-temp",
    "endpoint-uri", "https://storage.googleapis.com",
    "force-path-style", TRUE,
    "region", "asia-southeast1",
    "secret-access-key", self->gcs_secret_access_key,
    "sync", TRUE,
    "content-type", "video/mp4",
    NULL);
These are the settings for splitmuxsink:
g_object_set (G_OBJECT (self->split_mux_sink),
    "max-size-time", (guint64) SEGMENT_DURATION * GST_MSECOND,
    "send-keyframe-requests", TRUE,
    "sink", self->gcs_sink,
    NULL);