We have a C++ application that receives streaming video from a camera, applies some processing to the video frames, and displays the processed stream in our GUI (a Qt application).
To do that, we use two pipelines:
Receive pipeline: udpsrc -> rtpvrawdepay -> appsink
Display pipeline: appsrc -> queue -> videoconvert -> videocrop -> d3d11videosink
We have a callback on the "new-sample" signal of the receive pipeline's appsink to process the frames, and we use gst_video_overlay_set_window_handle to render the video inside our Qt widget.
Everything works except for some glitches, as if the frames were interlaced. If, in the callback, we replace the camera frames with solid-colored frames (one frame white, the next black, then blue, then red, and so on), we can see that the displayed frames really are mixed: instead of a fully white frame followed by a fully black frame, we get single frames containing several colors.
If we replace d3d11videosink with glimagesink, the problem disappears (no glitches, no mixed frames). But we thought d3d11videosink was the better choice on Windows, rather than glimagesink.
So, is it a problem to use glimagesink on Windows?
Is there a known issue with d3d11videosink? Are we missing some property settings?
(Environment: Windows 11, GStreamer 1.22.6, Qt/C++)
Thanks for your help.