RTSP Stream Pipeline: Saving Frames at 2fps While Ensuring Consistent Output Without Viewing

I am working on a GStreamer pipeline to handle an RTSP stream with the following requirements:

  • Restream the input RTSP stream to a new RTSP output.
  • Save frames from the input stream at a consistent rate of 2fps.

Here is my current pipeline:

rtspsrc location={self.stream_url} protocols=tcp latency=200 !  
rtpjitterbuffer ! rtp{self.codec}depay ! tee name=t  
t. ! queue ! rtp{self.codec}pay name=pay0 pt=96  
t. ! queue ! avdec_{self.codec} ! videorate ! video/x-raw,framerate=2/1 !  
videoconvert ! jpegenc ! appsink name=appsink emit-signals=true sync=false max-buffers=1 drop=true

The RTSP input stream is added to a GStreamer RTSP server for restreaming. However, I am facing an issue:

Frames are saved at 2fps only when I am viewing the output RTSP stream. If no one is viewing the output, the frames stop being saved.

I need the frame-saving process to run independently of whether the output stream is being viewed. Additionally, I would like to make the pipeline more efficient.

Are there any optimizations I can apply to improve the efficiency of this pipeline, especially considering CPU and memory usage?
Are there alternatives or better approaches to saving frames at a consistent rate without being dependent on client connections to the output RTSP stream?

Any insights or suggestions would be greatly appreciated.

Thank you!

Honestly, the easiest way to do this would probably be to make the RTSP server pipeline just the passthrough (relay), and then move the frame saving into a separate client pipeline on the same machine that connects to the relay server.

As you say, the server will only spin up the pipeline for a stream if there is a client, so you'll need some dummy client anyway; the local saving client can play that role (see the sketch below).
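For what it's worth, here is a minimal sketch of that separate saving client. It is not your exact setup: the relay URL, the H.264 depay/decode chain, and the frame-%05d.jpg output pattern are all assumptions to adjust.

# Sketch of the frame saver running as a local RTSP client of the relay.
# Assumes the relay is served at rtsp://127.0.0.1:8554/relay and carries H.264;
# swap the depayloader/decoder for your actual codec. Because this client is
# always connected, it also acts as the "dummy client" that keeps the relay's
# media pipeline alive.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

Gst.init(None)

saver = Gst.parse_launch(
    "rtspsrc location=rtsp://127.0.0.1:8554/relay protocols=tcp latency=200 "
    "! rtph264depay ! h264parse ! avdec_h264 "
    "! videorate ! video/x-raw,framerate=2/1 "
    "! videoconvert ! jpegenc "
    "! multifilesink location=frame-%05d.jpg"
)
saver.set_state(Gst.State.PLAYING)

loop = GLib.MainLoop()
try:
    loop.run()
finally:
    saver.set_state(Gst.State.NULL)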

Other remarks:

  • rtspsrc already includes an rtpjitterbuffer, so the explicit one in your pipeline is redundant.
  • For I-frame-only formats like JPEG you can put the videorate + capsfilter before the decoder, which reduces the amount of work your decoder has to do (see the sketch after this list).
  • For other formats like H.264 it depends on how often you get a keyframe (camera configuration). If bandwidth isn't a problem and you can configure the camera to send keyframes more frequently, you can decode only keyframes in the frame-saving branch. Unclear whether it's worth it for your scenario.
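To make the second point concrete, here is a hedged sketch of what the saving branch of your pipeline could look like when self.codec is jpeg. It is just a fragment, untested against your camera:

# Hedged sketch of the saving branch for the codec == "jpeg" case. The tee sits
# after rtpjpegdepay, so this branch already carries image/jpeg buffers, and
# videorate accepts image/jpeg: the drop to 2 fps happens on compressed frames,
# and no avdec_* / videoconvert / jpegenc is needed before the appsink at all.
saving_branch = (
    "t. ! queue "
    "! videorate ! image/jpeg,framerate=2/1 "
    "! appsink name=appsink emit-signals=true sync=false max-buffers=1 drop=true"
)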

Interesting. Thanks for the reply. The frame-saving pipeline essentially forces the original stream to keep playing. What do you think this will do to CPU usage?

Also, I just read that I could possibly use self.server.set_suspend_mode(GstRtspServer.RTSPSuspendMode.NONE). Will this work? I am using gi with Python, and the version is this:

# gst-launch-1.0 --version
gst-launch-1.0 version 1.18.4
GStreamer 1.18.4
http://packages.qa.debian.org/gstreamer1.0
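On set_suspend_mode: as far as I know that setter lives on the media factory (GstRtspServer.RTSPMediaFactory) or on the media itself, not on the server object. Below is a minimal sketch of where the call would go; the launch line and mount point are placeholders, and the factory still only instantiates its pipeline once a client connects, as mentioned above.

# Hedged sketch: the suspend mode is set on the media factory, not the server.
# The launch line and mount point are placeholders; a client connection is
# still needed before the factory instantiates its pipeline.
import gi
gi.require_version("Gst", "1.0")
gi.require_version("GstRtspServer", "1.0")
from gi.repository import Gst, GstRtspServer, GLib

Gst.init(None)

factory = GstRtspServer.RTSPMediaFactory()
factory.set_launch("( videotestsrc ! x264enc ! rtph264pay name=pay0 pt=96 )")
factory.set_shared(True)
factory.set_suspend_mode(GstRtspServer.RTSPSuspendMode.NONE)

server = GstRtspServer.RTSPServer()
server.get_mount_points().add_factory("/relay", factory)
server.attach(None)

GLib.MainLoop().run()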