GStreamer Tee for LiveView and saving streams to MP4 file

I am able to record videos using the Gst.parse_launch() utility; it waits for EOS and gracefully closes the files. When I tee the encoded stream for liveViews while recording, I have to create elements on the fly, which I am pretty sure I am doing correctly; however, I want to reduce or rescale the liveView stream to avoid two 1080p or 4K video streams running simultaneously. So, is it possible to add rescaling in the liveView tee branch and, at the same time, record videos at 4K or 1080p?

import sys

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

cameraPipeline = Gst.parse_launch(
    "XXXXcamerasrc"
    " ! video/x-h265,width=1280,height=720,framerate=25/1,stream-format=byte-stream,"
    "bitrate=8000000,profile=high,bitrate-control-mode=vbr"
    " ! h265parse"
    " ! tee name=t"
    " ! queue"
    " ! splitmuxsink"
    "   location=/mnt/media/output.mp4"
    "   max-size-bytes=1000000000"
)

t = cameraPipeline.get_by_name("t")
cameraPipeline.set_state(Gst.State.PLAYING)

liveBranch = {}

def liveView(host="10.42.0.10", port=5000):
    if liveBranch:
        print("Live view already started")
        return

    print("Starting live view")
    # Create the elements for the live view branch
    q = Gst.ElementFactory.make("queue", "liveQueue")
    rtppay = Gst.ElementFactory.make("rtph265pay", "rtpPay")
    udpsink = Gst.ElementFactory.make("udpsink", "udpSink")
    if not all((q, rtppay, udpsink)):
        print("Failed to create one or more live view elements")
        sys.exit(1)

    udpsink.set_property("host", host)
    udpsink.set_property("port", port)

    # Add the new elements to the running pipeline and link them together
    for e in (q, rtppay, udpsink):
        cameraPipeline.add(e)
    if not q.link(rtppay) or not rtppay.link(udpsink):
        print("Failed to link live view branch")
        sys.exit(1)

    # Bring the new branch up to the pipeline's state before data flows into it
    for e in (q, rtppay, udpsink):
        e.sync_state_with_parent()

    # Request a new source pad from the tee and connect it to the branch
    pad = t.get_request_pad("src_%u")
    if pad.link(q.get_static_pad("sink")) != Gst.PadLinkReturn.OK:
        print("Failed to link tee pad to live view queue")
        sys.exit(1)

    liveBranch.update(q=q, rtppay=rtppay, udpsink=udpsink, pad=pad)
    print("Live view branch created and linked")
    print("Live view started at {}:{}".format(host, port))

To test it, I am using the CLI to create and destroy elements on the fly.

The pipeline could be rearranged to achieve the same. With tee name=t kept after the camerasrc and capsfilter, and given that a decode and re-encode would be required to scale the video before streaming, one would end up with something like the below.

t. → queue → splitmuxsink
t. → queue → avdec_h265 → videoscale → videoconvert → x265enc → queue → rtph265pay → udpsink

One branch of the tee is responsible for the recording and the other for scaling and then streaming. The exact decoder and encoder might differ depending on your system and requirements.

One has to be mindful of the cost of the scaling/re-encode depending on the resolution, the number of parallel streams and the available CPU/hardware resources. Note also the n-threads property on videoscale and videoconvert.
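For illustration, a minimal sketch of that rearranged pipeline as a parse_launch string. The XXXXcamerasrc placeholder, the 640x360 target resolution, the host/port and the use of avdec_h265/x265enc are assumptions; substitute whatever your platform provides (for example hardware decode/encode elements).

rearrangedPipeline = Gst.parse_launch(
    "XXXXcamerasrc"  # placeholder camera source from the original post
    " ! video/x-h265,width=1280,height=720,framerate=25/1,stream-format=byte-stream"
    " ! h265parse"
    " ! tee name=t"
    # Recording branch: muxes the original encoded stream straight to file
    " t. ! queue ! splitmuxsink location=/mnt/media/output.mp4 max-size-bytes=1000000000"
    # Live view branch: decode, downscale, re-encode, payload and stream
    " t. ! queue ! avdec_h265"
    "    ! videoscale n-threads=2 ! videoconvert n-threads=2"
    "    ! video/x-raw,width=640,height=360"
    "    ! x265enc ! h265parse"
    "    ! rtph265pay ! udpsink host=10.42.0.10 port=5000"
)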

Thank you for the reply.

I was thinking the same thing: if I decode and re-encode the liveView stream again (either in software or hardware), it will increase the CPU usage. Instead, tee before the encoder so I can have different stream capabilities according to the need.

So, is it possible to create and destroy elements in the liveView stream on the fly, if we feed two different streams to the encoder?

Instead, tee before the encoder so I can have different stream capabilities according to the need.

Can you clarify? The example shared does have tee before the encoder.

So, is it possible to create and destroy elements in the liveView stream on the fly, if we feed two different streams to the encoder?

Can you clarify further what you mean by two different streams to the encoder?

It is possible to add and remove elements from the pipeline dynamically, at runtime. There is documentation and an example here, assuming that is what you mean.
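For the remove side, a minimal teardown sketch for the branch created by liveView() above; stopLiveView and the liveBranch keys are names from this thread rather than any official API. A pad probe blocks data on the tee's request pad before the branch is unlinked, as in the dynamic-pipelines documentation.

def stopLiveView():
    if not liveBranch:
        print("Live view not running")
        return

    pad = liveBranch["pad"]

    def on_blocked(blocked_pad, info):
        # Data on the tee's request pad is now blocked, so the branch can be
        # unlinked and shut down without disturbing the recording branch.
        blocked_pad.unlink(liveBranch["q"].get_static_pad("sink"))
        for name in ("udpsink", "rtppay", "q"):
            e = liveBranch[name]
            e.set_state(Gst.State.NULL)
            cameraPipeline.remove(e)
        t.release_request_pad(blocked_pad)
        liveBranch.clear()
        print("Live view stopped")
        return Gst.PadProbeReturn.REMOVE

    # Block downstream data flow on the tee pad before dismantling the branch
    pad.add_probe(Gst.PadProbeType.BLOCK_DOWNSTREAM, on_blocked)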

Isn’t it after the encoder? I believe we have a single output stream after the encoder; that’s why, to rescale the liveView stream, I have to decode and re-encode. If the tee were before the encoder, I could specify the caps for both streams.

Yes, that is exactly what I mean: my application has a liveView button which, when pressed, should create these elements and then add and link them.

If you mean that you want to send the same scaled and encoded H.265 stream via multiple sinks, you may introduce another tee after the encoder or the payloader and feed it to multiple sinks.
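For instance, a small self-contained sketch of that idea, with videotestsrc standing in for the camera and placeholder hosts/ports; the second tee sits after the payloader and fans the same RTP stream out to two receivers.

fanOutPipeline = Gst.parse_launch(
    "videotestsrc is-live=true"
    " ! video/x-raw,width=640,height=360,framerate=25/1"
    " ! x265enc ! h265parse ! rtph265pay"
    " ! tee name=live"
    " live. ! queue ! udpsink host=10.42.0.10 port=5000"
    " live. ! queue ! udpsink host=10.42.0.11 port=5000"
)
fanOutPipeline.set_state(Gst.State.PLAYING)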

my application has a liveView button which, when pressed, should create these elements and then add and link them.

Yes, that is possible. For example, one may request a source pad from the tee and then add elements downstream to the tee as required.
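As a sketch of how the button could drive this (an assumption about the application, not part of the thread's code): schedule the add/remove through GLib.idle_add so the pipeline is modified from the GLib main loop, using liveView() from earlier and a teardown function such as the stopLiveView() sketched above.

from gi.repository import GLib

def onLiveViewButtonPressed():
    # Toggle the live view branch; idle_add runs the callback once in the
    # GLib main loop (the functions return None, which stops re-scheduling).
    if liveBranch:
        GLib.idle_add(stopLiveView)
    else:
        GLib.idle_add(liveView)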

Thank you for the help. Cheers 🙂