Frame Dropping in GStreamer Pipeline Not Reducing Total Frame Processing

I’m building a video analysis application in Rust using GStreamer, where users can choose different processing scales (e.g., analyze every frame, every 2nd frame, every 10th frame, and so on). I’ve tried to implement frame dropping with the videorate element, but it only seems to affect the progress display, not the actual frame processing.

For example, with a 30fps video that’s 2 minutes long (~3600 frames):

- When selecting 10x speed (process every 10th frame), the progress shows “360/360 frames”.
- When it reaches “360/360”, it reports completion but keeps processing in the background.
- The actual processing time is the same as when processing all frames.
- The backend appears to still decode and process all 3600 frames (see the probe sketch below).
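
One way to confirm the last point is a buffer-counting pad probe on the decoder’s video pad, attached inside the pad-added callback shown further down. A sketch; the counter and logging here are purely diagnostic:

    // Diagnostic sketch: count buffers leaving decodebin's video pad to see
    // whether every frame is still decoded despite videorate downstream.
    use std::sync::{
        atomic::{AtomicU64, Ordering},
        Arc,
    };

    let decoded = Arc::new(AtomicU64::new(0));
    let decoded_probe = decoded.clone();
    src_pad.add_probe(gst::PadProbeType::BUFFER, move |_pad, _info| {
        let n = decoded_probe.fetch_add(1, Ordering::Relaxed) + 1;
        if n % 100 == 0 {
            println!("decoded {n} frames so far");
        }
        gst::PadProbeReturn::Ok
    });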
Here’s my current pipeline setup:

    fn create_single_pipeline(
        path: &PathBuf,
        data_tx: &Sender<FrameCmd>,
        video_index: usize,
        orientation: Orientation,
        is_muxed: bool,
        processing_scale: f32,
    ) -> Result<gst::Pipeline> {
        let file_info = MediaInfo::from(path)?;
        let duration = file_info.duration;
    
        let video = file_info
            .video
            .first()
            .ok_or(anyhow!("No video streams found in file"))?;
        let video_width = video.width;
        let video_height = video.height;
        let input_fps = f64::from(video.framerate.numer()) / f64::from(video.framerate.denom());
        let target_fps = (input_fps * processing_scale as f64).max(1.0); // Ensure minimum 1 fps
        
        // Adjust estimated frames based on target fps
        let video_estimated_total_frames: u64 = (target_fps * duration.as_secs_f64()).floor() as u64;
    
        debug!(
            "Video {}: input {} x {}, input fps {}, target fps {}, estimated frames {}",
            video_index, video_width, video_height, input_fps, target_fps, video_estimated_total_frames
        );
    
        let pipeline = gst::Pipeline::new();
    
        let src = gst::ElementFactory::make("filesrc")
            .property("location", path.to_str().ok_or(anyhow!("Invalid path"))?)
            .build()?;
    
        let decodebin = gst::ElementFactory::make("decodebin").build()?;
        
        pipeline.add_many([&src, &decodebin])?;
        gst::Element::link_many([&src, &decodebin])?;
    
        let pipeline_weak = pipeline.downgrade();
        let data_tx_clone = data_tx.clone();
    
        decodebin.connect_pad_added(move |dbin, src_pad| {
            let pipeline = match pipeline_weak.upgrade() {
                Some(pipeline) => pipeline,
                None => return,
            };
    
            let (is_audio, is_video) = {
                let media_type = src_pad.current_caps().and_then(|caps| {
                    caps.structure(0).map(|s| {
                        let name = s.name();
                        (name.starts_with("audio/"), name.starts_with("video/"))
                    })
                });
    
                match media_type {
                    None => {
                        element_warning!(
                            dbin,
                            gst::CoreError::Negotiation,
                            ("Failed to get media type from pad {}", src_pad.name())
                        );
                        return;
                    }
                    Some(media_type) => media_type,
                }
            };
    
            let insert_sink = |is_audio: bool, is_video: bool| -> Result<(), Error> {
                if is_audio {
                    Self::setup_audio_pipeline(&pipeline, src_pad, data_tx_clone.clone(), video_index)
                } else if is_video {
                    let queue1 = gst::ElementFactory::make("queue")
                        .property("max-size-buffers", 2u32)
                        .build()?;
                    let videorate = gst::ElementFactory::make("videorate")
                        .property("drop-only", true)    
                        .property("skip-to-first", true)  
                        .build()?;
                    // Set target framerate caps
                    let rate_caps = gst::Caps::builder("video/x-raw")
                        .field("framerate", gst::Fraction::new(
                            (target_fps * 1000.0) as i32,
                            1000
                        ))
                        .build();
                    let rate_filter = gst::ElementFactory::make("capsfilter")
                        .property("caps", &rate_caps)
                        .build()?;
                    let convert = gst::ElementFactory::make("videoconvert").build()?;
                    let scale = gst::ElementFactory::make("videoscale").build()?;
                    pipeline.add_many([&queue1, &videorate, &rate_filter, &convert, &scale])?;
                    gst::Element::link_many([&queue1, &videorate, &rate_filter, &convert, &scale])?;
                    src_pad.link(&queue1.static_pad("sink").unwrap())?;
                    if is_muxed {
                        Self::setup_muxed_video_pipeline(
                            &pipeline,
                            &scale,
                            data_tx_clone.clone(),
                            video_width,
                            video_height,
                            video_estimated_total_frames,
                            video_index,
                        )
                    } else {
                        // ...the remaining (non-muxed) cases follow, elided here

And here’s how I handle the video sink:

    fn setup_video_sink(
        sink: gst_app::AppSink,
        width: u32,
        height: u32,
        estimated_total_frames: u64,
        side: VideoFrameSide,
        video_index: usize,
        data_tx: Sender<FrameCmd>,
    ) -> Result<(), Error> {
        sink.set_callbacks(
            gst_app::AppSinkCallbacks::builder()
                .new_sample(move |appsink| {
                    let sample = appsink.pull_sample().map_err(|_| gst::FlowError::Eos)?;
                    let buffer = sample.buffer().ok_or_else(|| {
                        element_error!(
                            appsink,
                            gst::ResourceError::Failed,
                            ("Failed to get buffer from appsink")
                        );
                        gst::FlowError::Error
                    })?;
    
                    // map_readable() returns a glib::BoolError, which has no
                    // automatic conversion to gst::FlowError, so map it by hand.
                    let map = buffer.map_readable().map_err(|_| gst::FlowError::Error)?;
                    let samples = map.as_slice();
    
                    let frame = match side {
                        VideoFrameSide::Left => VideoFrameData::left(
                            width,
                            height,
                            estimated_total_frames,
                            samples,
                            video_index,
                        ),
                        VideoFrameSide::Right => VideoFrameData::right(
                            width,
                            height,
                            estimated_total_frames,
                            samples,
                            video_index,
                        ),
                    }
                    .map_err(|_| gst::FlowError::Error)?;
    
                    // SendError doesn't convert to FlowError either; treat a
                    // closed channel as a pipeline error.
                    data_tx
                        .send(FrameCmd::video_frame(frame))
                        .map_err(|_| gst::FlowError::Error)?;
                    Ok(gst::FlowSuccess::Ok)
                })
                .build(),
        );
    
        Ok(())
    }

Is my approach to frame dropping with videorate correct? The pipeline still seems to process every frame despite the rate limiting.

Should I be looking at a different approach or different elements to achieve actual frame dropping at decode time?

Is there something specific about using AppSink that might be bypassing the frame dropping?

Environment:

- GStreamer version: Latest
- Operating System: macOS
- Rust GStreamer bindings version: Latest

Any guidance would be greatly appreciated.

I resolved the issue by restructuring the pipeline so that frame dropping takes effect before the expensive decoding work. Initially the videorate element sat behind a queue after the decoder, so every frame was decoded even if it was dropped later, which is why there was no performance improvement. Configuring videorate with the max-rate property and removing the queue elements that were buffering frames allowed backpressure to propagate upstream; this told the decoder the desired frame rate, enabling it to skip unnecessary frames and reduce the processing load.
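
A minimal sketch of the revised video branch (element creation mirrors the question’s code; `target_fps` is as computed earlier, and rounding it up for max-rate is illustrative):

    // Revised branch: videorate directly on the decoder pad, no queue,
    // using max-rate instead of a capsfilter to limit the frame rate.
    let videorate = gst::ElementFactory::make("videorate")
        .property("drop-only", true)
        .property("skip-to-first", true)
        .property("max-rate", target_fps.ceil() as i32)
        .build()?;
    let convert = gst::ElementFactory::make("videoconvert").build()?;
    let scale = gst::ElementFactory::make("videoscale").build()?;
    pipeline.add_many([&videorate, &convert, &scale])?;
    gst::Element::link_many([&videorate, &convert, &scale])?;
    // No queue between decodebin and videorate: a queue would absorb the
    // backpressure that is supposed to slow the decoder down.
    src_pad.link(&videorate.static_pad("sink").unwrap())?;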

Additionally, I configured the decodebin element to reduce buffering by setting properties like post-stream-topology to true and use-buffering to false. I also adjusted the AppSink settings to minimize buffering and prevent blocking, which allowed for smoother frame dropping. These changes ensured that frame dropping happened as early as possible in the pipeline, significantly improving performance by reducing the number of frames that needed to be decoded and processed.