I wanted to do dynamic recording; intervideo was suggested and worked perfectly. Now I need to make on-demand snapshots. While testing:
This works perfectly:
gst-launch-1.0 --verbose v4l2src device=/dev/camera1 ! image/jpeg,width=1920,height=1080 ! jpegdec ! videoconvert ! pngenc snapshot=true ! filesink location=snapshot.png
This does not work; the snapshot is blank:
gst-launch-1.0 --verbose v4l2src device=/dev/camera1 ! image/jpeg,width=1920,height=1080 ! jpegdec ! videoconvert ! \
  intervideosink channel=snap \
  intervideosrc channel=snap ! pngenc snapshot=true ! filesink location=snapshot.png
Any hints would be appreciated.
RidgeRun’s interpipe looks ideal, but when I played with it (in the context of dynamic recording) I saw runaway CPU usage. That may have been unrelated. Has anyone had good experience with it, and why is a seemingly useful element shunned? (It has been pending for years.)
My guess would be that it has to do with the camera needing some time to start up and start outputting frames.
intervideosrc will probably get going right away and start outputting black frames for starters because that’s what it does if it doesn’t have any input on the intervideosink side yet. Once the sink has seen the first frame, the source will output one too.
You can test if that theory is correct with something like intervideosrc ! queue ! pngenc ! multifilesink location=frame-%d.png and then inspect the frames one by one.
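As a complete command, the suggested test could look like this (channel name and output path are assumptions matching the earlier posts; videoconvert is added because pngenc only accepts a few raw formats):

```shell
# Dump every frame intervideosrc produces so you can check whether the
# early ones are filler frames before real camera frames arrive:
gst-launch-1.0 --verbose \
  intervideosrc channel=snap ! queue ! videoconvert ! pngenc ! \
  multifilesink location=frame-%d.png
```

Inspecting frame-0.png, frame-1.png, … one by one should show where filler frames stop and real camera frames begin.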
Where has it been “pending for years”?
We have elements with similar functionality upstream now as part of the GStreamer Rust plugins set: rsinter
These may work for you because in this case they can’t / don’t make up filler data.
I don’t think Debian/Ubuntu ship this one yet, but it should be fairly easy to build yourself once you install rustup, which you can use to install a Rust toolchain.
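A rough sketch of building it yourself; the repository URL is the upstream gst-plugins-rs GitLab, but the exact crate name and the target directory layout are assumptions, so check the project README:

```shell
# Install a Rust toolchain via rustup, plus cargo-c for building
# C-ABI GStreamer plugins:
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
cargo install cargo-c

# Build just the inter plugin from gst-plugins-rs:
git clone https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs.git
cd gst-plugins-rs
cargo cbuild -p gst-plugin-inter

# Point GStreamer at the freshly built plugin (path varies by
# target triple and build profile) and verify it is found:
export GST_PLUGIN_PATH="$PWD/target/x86_64-unknown-linux-gnu/debug"
gst-inspect-1.0 intersink
```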
The other question is: what does your pipeline do / look like when you’re not taking a snapshot? Are you doing anything with the camera output or are you just running the camera to take a snapshot every now and then?
If you have a normal video sink or so, those usually have a last-sample property via which you can retrieve the last/current frame, which you then can feed to gst_video_convert_sample() or gst_video_convert_sample_async() to encode.
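A minimal sketch of that approach in Python with the GStreamer GI bindings (assumed installed); the pipeline string and file names are placeholders based on the earlier posts, and error handling is omitted:

```python
# Grab the current frame from a video sink's "last-sample" property
# and convert it to PNG with gst_video_convert_sample().
import gi
gi.require_version("Gst", "1.0")
gi.require_version("GstVideo", "1.0")
from gi.repository import Gst, GstVideo

Gst.init(None)
pipeline = Gst.parse_launch(
    "v4l2src device=/dev/camera1 ! image/jpeg,width=1920,height=1080 ! "
    "jpegdec ! videoconvert ! autovideosink name=sink"
)
pipeline.set_state(Gst.State.PLAYING)

# ... later, whenever a snapshot is requested:
sink = pipeline.get_by_name("sink")
sample = sink.get_property("last-sample")   # None until a frame was shown
if sample is not None:
    caps = Gst.Caps.from_string("image/png")
    png = GstVideo.video_convert_sample(sample, caps, Gst.SECOND)
    buf = png.get_buffer()
    ok, info = buf.map(Gst.MapFlags.READ)
    with open("snapshot.png", "wb") as f:
        f.write(info.data)
    buf.unmap(info)
```

The async variant (`gst_video_convert_sample_async()`) avoids blocking the calling thread while the conversion runs.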
If you don’t do anything else with the camera output you may also want to just keep the frames in JPEG format instead of re-encoding them to PNG, if cpu usage is a concern.
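For instance, a one-shot snapshot that writes the camera’s JPEG frames straight to disk with no decode/re-encode (device path and caps taken from the earlier posts):

```shell
# num-buffers=1 grabs a single frame and then stops; note that some
# cameras need a few warm-up frames before exposure settles, in which
# case a larger num-buffers with multifilesink may work better:
gst-launch-1.0 v4l2src device=/dev/camera1 num-buffers=1 ! \
  image/jpeg,width=1920,height=1080 ! filesink location=snapshot.jpg
```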
The camera pipeline ends in ! intervideosink.
An intervideosrc ! bla bla ! filesink pipeline is run or set to NULL for dynamic recording,
so I have [pretty video] .. [dynamic record]
and now ^ the snapshot branch as well [the output stream is valid].
I get WHITE frames, not BLACK, and the stream is valid and has been for a long time. Let me try multifilesink.
For years RidgeRun have been saying they are trying to get it accepted into mainline.
Once it’s running I’ll look at efficiency, but it is messy:
[camera]--[DateTime Overlay]--[logo overlay]--[textoverlay]--[video]
[video]--[audio muxed in]--[filesink]
t0 ..... t100
[video]--[audio muxed in]--[filesink]
t17 ... t43
snapshot at t5, t27, t54
Of course t0, t100, t17, etc. are just arbitrary times to give an idea.
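The topology above could be sketched with intervideo roughly as follows. The overlay elements (clockoverlay, gdkpixbufoverlay), encoder/muxer choices, and all file names are assumptions; audio is omitted for brevity:

```shell
# Persistent camera + overlay pipeline, always running (t0 .. t100):
gst-launch-1.0 v4l2src device=/dev/camera1 ! image/jpeg,width=1920,height=1080 ! \
  jpegdec ! videoconvert ! clockoverlay ! gdkpixbufoverlay location=logo.png ! \
  textoverlay text="caption" ! intervideosink channel=video

# Started and stopped on demand (t17 .. t43): pull from the channel and record:
gst-launch-1.0 intervideosrc channel=video ! videoconvert ! x264enc ! \
  mp4mux ! filesink location=recording.mp4
```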
Let me find out WHAT the problem is, then I will probably ask for help on HOW.