Mp4mux behaviour with a non-seekable sink (awss3sink)

Hi, I have a pipeline connected to a live source, where the tail of the pipeline is splitmuxsink using the default mp4mux with default settings, but with filesink swapped out for awss3sink (which I believe is not seekable). I’m writing 10-minute files that vary from 50 MB to 250 MB depending on bitrate.
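
Roughly this shape, for reference (a simplified sketch with placeholder bucket/key values; it assumes a GStreamer recent enough to set element-type properties like sink= from the launch line, and since splitmuxsink only auto-updates a location property on its sink, per-fragment S3 keys are handled from application code rather than shown here):

# 10-minute fragments: max-size-time is in nanoseconds (600 s)
gst-launch-1.0 -e videotestsrc is-live=true \
  ! x264enc ! h264parse \
  ! splitmuxsink muxer=mp4mux max-size-time=600000000000 \
      sink="awss3sink bucket=my-bucket key=fragment.mp4 content-type=video/mp4"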

Looking at the files generated, the atoms are as expected: ftyp, then a single mdat, then moov with all its children.

My question is: how is mp4mux writing the size field for the mdat atom? Does it keep the whole thing in memory until the split?

Although I’m not using faststart, I wondered whether it might be using the same file-caching approach anyway, but I can’t see any files in /tmp. How can I tell what it’s doing?

GST_DEBUG=*qtmux*:6 should tell you what it’s doing :slightly_smiling_face:
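
For example, something like this makes the run finite and skims for the relevant lines (the grep filter is just one way to do it):

# num-buffers makes videotestsrc send EOS on its own; grep pulls out mdat-related logs
GST_DEBUG=*qtmux*:6 gst-launch-1.0 -e videotestsrc is-live=true num-buffers=300 \
  ! x264enc ! h264parse ! mp4mux ! filesink location=test.mp4 2>&1 | grep -i mdat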

That’s great, thanks! I think this confirms what I thought. I compared the output of these three commands:

GST_DEBUG=*qtmux*:6 gst-launch-1.0 -e videotestsrc is-live=true \
  ! x264enc ! h264parse ! mp4mux ! filesink location=01.mp4

GST_DEBUG=*qtmux*:6 gst-launch-1.0 -e videotestsrc is-live=true \
  ! x264enc ! h264parse ! mp4mux \
  ! awss3sink bucket=${MY_BUCKET} key=02.mp4 content-type=video/mp4

GST_DEBUG=*qtmux*:6 gst-launch-1.0 -e videotestsrc is-live=true \
  ! x264enc ! h264parse ! mp4mux faststart=true \
  ! awss3sink bucket=${MY_BUCKET} key=03.mp4 content-type=video/mp4

It’s clear that the first command seeks back and rewrites the mdat header at the end; the second defers pushing all the mdat buffers downstream until after it receives EOS, and I assume it keeps them in memory, as there is no logging to indicate they are being stored anywhere else. The last case (which does prompt the use of a temporary file) logs clearly that the buffers are pushed to the temporary file during execution, then re-read and pushed downstream inside the mdat after EOS.
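
One more thing that helped: in the faststart case you can pin the temporary file to a known path with mp4mux’s faststart-file property, which makes it easy to watch the cache grow during the run (the path here is just an example):

# Pin the faststart cache to an explicit path instead of an auto-created temp file
GST_DEBUG=*qtmux*:6 gst-launch-1.0 -e videotestsrc is-live=true \
  ! x264enc ! h264parse \
  ! mp4mux faststart=true faststart-file=/var/tmp/qtmux-cache \
  ! awss3sink bucket=${MY_BUCKET} key=03.mp4 content-type=video/mp4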