Framerate not respected and invalid video properties

Hi,

I’m using GStreamer from OpenCV to read a video, undistort it (in OpenCV), and write it back to disk. Instead of letting OpenCV handle the reading and writing automatically, I am building a GStreamer pipeline to ensure I’m using the GPU (NVIDIA).

This is running on two different targets: my desktop and a Jetson Orin NX (an ARM-based edge computer with an NVIDIA GPU).

Here’s how I read and write the video on my PC:

VideoCapture vid_capture("filesrc location=./input.mp4 ! qtdemux ! h264parse ! nvh264dec ! videoconvert ! appsink sync=false");

int h264 = VideoWriter::fourcc('H', '2', '6', '4');

VideoWriter video_writer("appsrc ! video/x-raw, format=BGR ! videoconvert ! nvh264enc bitrate=4000 ! filesink location=out.avi", h264, 25, video_size);
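For context, the per-frame loop is essentially the following (a simplified sketch; cameraMatrix and distCoeffs stand in for my actual calibration data, loaded elsewhere):

Mat frame, undistorted;
while (vid_capture.read(frame)) {
    // Undistort on the CPU using the calibration loaded elsewhere
    undistort(frame, undistorted, cameraMatrix, distCoeffs);
    video_writer.write(undistorted);
}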

Because the plugins are slightly different on the edge computer, here is what I am running on the Jetson:

VideoCapture vid_capture("filesrc location=./input.mp4 ! qtdemux ! queue ! h264parse ! nvv4l2decoder ! nvvidconv ! video/x-raw,format=BGRx ! videoconvert ! video/x-raw, format=BGR ! appsink sync=false");

int h264 = VideoWriter::fourcc('H', '2', '6', '4');

VideoWriter video_writer("appsrc ! video/x-raw, format=BGR ! queue ! videoconvert ! video/x-raw,format=RGBA ! nvvidconv ! nvv4l2h264enc ! filesink location=out.mp4", h264, 25, video_size);

The encoding and decoding are pretty fast thanks to the GPU plugins. However, the resulting video has a few issues:

  • VLC fails to open it.
  • On both platforms, the file’s properties are not set properly: neither duration nor framerate is set.
  • The desktop’s video opens fine in another video player and looks exactly as expected. However, the video generated by the edge computer plays at about 10 times normal speed. It doesn’t look like frames are missing; the framerate just isn’t respected.

Does anyone have an idea why? Any help would be appreciated!

Thanks,
Antoine

I don’t have experience in this area, but maybe you need mp4mux.


As mentioned by @bka, you should use a container to store your encoded video along with details such as resolution, framerate, encoding…
For AVI files use avimux, for MP4 files use qtmux (better than mp4mux), for MKV files use matroskamux, …
Note that H264 can have different stream-formats, so you may need to add h264parse to do the conversion if needed.
Also note that with the GStreamer backend, OpenCV’s VideoWriter expects a RAW (0) fourcc, since the actual encoding is done by the pipeline.
So you may try:

// Writer encoding into H264 and storing into an AVI container file on your PC
cv::VideoWriter video_writer("appsrc ! video/x-raw, format=BGR ! videoconvert ! nvh264enc bitrate=4000 ! h264parse ! avimux ! filesink location=out.avi", cv::CAP_GSTREAMER, 0, float(fps), cv::Size(width, height));

// Writer encoding into H264 and storing into an MP4 container file with the Jetson HW encoder
cv::VideoWriter video_writer("appsrc ! video/x-raw, format=BGR ! videoconvert ! nvv4l2h264enc bitrate=4000000 ! h264parse ! qtmux ! filesink location=out.mp4", cv::CAP_GSTREAMER, 0, float(fps), cv::Size(width, height));

Hi,

Thanks a lot to both of you for your answers. I didn’t think I would need a muxer since there is no audio track, but adding a container makes a lot of sense. That fixed my issue 🙂

Another question: how do I know when and where I need to add a queue? Is it only needed when there is a real-time source, like an RTSP stream or a camera module?

Thanks again!

Note that the following is just my own current understanding. You may also create a new topic for this question; someone more skilled may provide a canonical answer (or correct my misunderstandings here).

The queue element has many properties for customizing its behaviour; the following applies to a queue with default property values.

A queue adds a synchronization mechanism that allows upstream and downstream subpipelines to run in different threads.

You would add queues at each input of a muxer or compositor.
You may also add queues at each output of a tee or demuxer; see the sketch just after this.
That can prevent deadlocks.
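For example, a writer pipeline that both displays and records could look like the following (just an illustrative sketch, not taken from your pipelines; x264enc and autovideosink are placeholder choices):

// Each tee branch gets its own queue, so display and encoding run in separate threads
// and a slow branch does not block the other
cv::VideoWriter preview_writer("appsrc ! videoconvert ! tee name=t t. ! queue ! autovideosink t. ! queue ! x264enc ! h264parse ! qtmux ! filesink location=preview.mp4", cv::CAP_GSTREAMER, 0, float(fps), cv::Size(width, height));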

I think it is often a good thing to have a queue in each pipeline or subpipeline.
It may be located just after the source, or between parts of the pipeline doing heavy computations, such as between a decoder and an encoder, so that decoding and encoding are done by different threads that can run on different cores if available.

For your OpenCV case, you may add a queue before appsink so that the input pipeline can run in parallel with your OpenCV application. You would likewise add a queue after appsrc in the VideoWriter pipeline.
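Applied to your PC pipelines above, that would give something like this (a sketch, with only the queues added):

// Capture: a queue before appsink lets decoding run in its own thread
cv::VideoCapture vid_capture("filesrc location=./input.mp4 ! qtdemux ! h264parse ! nvh264dec ! videoconvert ! queue ! appsink sync=false", cv::CAP_GSTREAMER);

// Writer: a queue right after appsrc lets encoding run in parallel with your application
cv::VideoWriter video_writer("appsrc ! queue ! video/x-raw, format=BGR ! videoconvert ! nvh264enc bitrate=4000 ! h264parse ! avimux ! filesink location=out.avi", cv::CAP_GSTREAMER, 0, float(fps), cv::Size(width, height));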


Awesome, thanks a lot for the explanation @Honey_Patouceul!

I’ll give it a try and look more into this.

Thanks again for the help!