Hi, I’m displaying video from v4l2src in a waylandsink, and while doing that I want to dynamically record the video to an mp4. I want user-provided input to determine when recording to the mp4 starts and when it stops. The recording pipeline I’m currently using is:
gst-launch-1.0 v4l2src ! vpuenc_h264 ! h264parse ! mp4mux ! filesink location=sample.mp4 -e
I tried connecting the recording branch to a tee and unlinking it when I’m done recording. That let me record video, but the approach causes problems after starting and stopping the recording multiple times. Does anyone know another way to approach this, preferably by just pausing the recording section of the pipeline?
You may try the intervideo elements, splitting into 3 separate pipelines such as:
import time
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst, GObject
Gst.init(None)
# Video source pipeline
p1 = Gst.parse_launch('videotestsrc is-live=1 ! video/x-raw,width=640,height=480,framerate=30/1 ! timeoverlay ! intervideosink channel=videosrc')
if not p1:
    print('Failed to launch p1')
    exit(-1)
# Video display pipeline
p2 = Gst.parse_launch('intervideosrc channel=videosrc ! queue ! videoconvert ! autovideosink')
if not p2:
    print('Failed to launch p2')
    exit(-1)
# Recording pipeline
p3 = Gst.parse_launch('intervideosrc channel=videosrc ! queue ! videoconvert ! x264enc ! h264parse ! queue ! qtmux ! filesink location=record.mp4')
if not p3:
    print('Failed to launch p3')
    exit(-1)
print('Starting p1 & p2')
p1.set_state(Gst.State.PLAYING)
p2.set_state(Gst.State.PLAYING)
# Run for 3s
time.sleep(3)
print('Starting p3')
p3.set_state(Gst.State.PLAYING)
# Run for 5s
time.sleep(5)
print('Pausing p3')
p3.set_state(Gst.State.PAUSED)
# Wait for 3s
time.sleep(3)
print('Restarting p3')
p3.set_state(Gst.State.PLAYING)
# Run for 5s
time.sleep(5)
print('Sending EOS')
bus = p3.get_bus()
p3.send_event(Gst.Event.new_eos())
msg = bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE, Gst.MessageType.EOS | Gst.MessageType.ERROR)
if msg.type == Gst.MessageType.ERROR:
    print('An error occurred, the recorded video may not be correct')
print('stopping')
p3.set_state(Gst.State.NULL)
p2.set_state(Gst.State.NULL)
p1.set_state(Gst.State.NULL)
exit(0)
Hello, when I use the intervideo elements like below, I get a black screen in the recorded video. Can you tell me what’s wrong?
gst-launch-1.0 -e -v rtspsrc location=rtsp://XXXX ! rtph265depay ! h265parse ! avdec_h265 ! video/x-raw,width=1920,height=1080 ! queue ! intervideosink channel=test
gst-launch-1.0 -e intervideosrc channel=test ! queue ! x264enc ! h264parse ! queue ! qtmux ! filesink location=record.mp4
I didn’t end up using the intervideo elements to record the video dynamically, so I don’t know if this will help you. I was able to record dynamically by ending my capture pipeline in an appsink and pushing the samples from the appsink into a recording pipeline that begins with an appsrc.
intervideosink/intervideosrc are meant to be used within the same process, such as:
gst-launch-1.0 -ev rtspsrc location=rtsp://XXXX ! rtph265depay ! h265parse ! avdec_h265 ! queue ! intervideosink channel=test intervideosrc channel=test ! queue ! x264enc ! h264parse ! queue ! qtmux ! filesink location=record.mp4
If your use case requires 2 different processes, you may use shmsink/shmsrc, or better, stream RTP through the local loopback interface.
Note that depending on your resolution, you may have to adjust the maximum kernel socket buffer size and the MTU; see this post for RTP/VRAW suggestions.
I need to use 2 pipelines, so I used appsrc and appsink, and it works.
Thanks a lot!