Is GStreamer suited for a dynamic muxing application (crossfades, blends, etc.)?

Sorry if I don’t get all the terminology right, but I wanted to engage the community before I go down the rabbit hole of GStreamer…it’s quite overwhelming.

I’m making an application which will allow users to set up sources for sinks inside a hierarchical node tree. The sources can be audio/video/both, played from file/web/etc. The sinks are going to be actual Windows-discovered devices (e.g. Bluetooth speakers) as well as some abstractions (e.g. “second monitor + default audio device”). I’m also hoping to add some odd ones in there, such as “smart LEDs” or those LED hologram spinners which can project simple imagery.

As the user plays a new node in the hierarchy…playback will be moving from a previous node to a target node. The previous node and target node will share some rootward nodes in the hierarchy. I want those to keep playing, uninterrupted. The parts of the tree that are not shared (the branches) will fade out and fade in respectively.

While this is happening, someone could easily pick yet another node…and we’d get multiple sets of nodes in the fade-out and fade-in buckets, etc.

In any event, I potentially expect a lot of active pipelines, and I generally just don’t know how dynamic GStreamer is for this sort of purpose.
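To make the fade idea concrete, here’s roughly what I picture for fading out one branch’s audio, pieced together from skimming Python examples (so treat it as a guess at the approach, not working knowledge):

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

Gst.init(None)

# One "branch": a source feeding a volume element we can ramp down.
pipeline = Gst.parse_launch(
    "audiotestsrc ! volume name=fader volume=1.0 ! autoaudiosink")
fader = pipeline.get_by_name("fader")

def fade_out(element, duration_ms=1000, steps=20):
    """Ramp the 'volume' property from its current value down to zero."""
    start = element.get_property("volume")
    state = {"i": 0}

    def step():
        state["i"] += 1
        element.set_property("volume", start * (1 - state["i"] / steps))
        return state["i"] < steps  # returning False stops the timeout

    GLib.timeout_add(duration_ms // steps, step)

pipeline.set_state(Gst.State.PLAYING)
GLib.timeout_add_seconds(2, lambda: (fade_out(fader), False)[1])
GLib.MainLoop().run()
```

Multiply that by every branch entering or leaving the fade buckets, and you can see why I’m asking how dynamic this can get.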

Can anyone comment on whether GStreamer is well suited for this? Can it be done directly through command-line execution, or should I engage the APIs? Is there an example of a GStreamer mixer app, maybe, that would help me?

Thanks in advance.

J

Hi @jasonagorski,

Yes, GStreamer is well suited to audio and video sources. Smart lighting, unfortunately, is not. However, you could develop methods outside the pipeline code to handle the smart-lighting side.
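For example, one pattern is to drop a `level` element into the audio path and drive the lights from its bus messages, entirely outside the pipeline. A rough Python sketch (`set_led_brightness` is a hypothetical stand-in for whatever your lighting API looks like, and unpacking the rms value array can vary with your gst-python version):

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

Gst.init(None)

# 'level' posts an element message (named "level") every interval ns,
# carrying per-channel RMS/peak values in dB.
pipeline = Gst.parse_launch(
    "audiotestsrc ! level interval=100000000 ! autoaudiosink")

def set_led_brightness(value):
    # Hypothetical stand-in for your smart-lighting API.
    print(f"LED brightness -> {value:.2f}")

def on_message(bus, msg):
    s = msg.get_structure()
    if s is not None and s.get_name() == "level":
        rms = s.get_value("rms")  # list of dB values, one per channel
        # Map roughly -60..0 dB onto 0..1 brightness.
        set_led_brightness(max(0.0, min(1.0, (rms[0] + 60) / 60)))

bus = pipeline.get_bus()
bus.add_signal_watch()
bus.connect("message::element", on_message)

pipeline.set_state(Gst.State.PLAYING)
GLib.MainLoop().run()
```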

I’ve been using GStreamer for a few years now and find you need to invest time in understanding the underlying mechanics: bus messages, events, source/filter/sink pads, ghost pads, caps… I’d better stop there.

I’ve spent hundreds of hours playing around with all types of pipelines (decoding, encoding, RTP), using gst-launch-1.0 to validate a working pipeline before cementing it into code. There is plenty on the internet about GStreamer that answers most questions.
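For instance, once a gst-launch-1.0 line behaves, the same description string can go almost verbatim into your code. A minimal Python sketch (the test pipeline is just a placeholder for whatever you proved out on the command line):

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# The exact description string you validated with gst-launch-1.0.
pipeline = Gst.parse_launch("videotestsrc num-buffers=300 ! autovideosink")
pipeline.set_state(Gst.State.PLAYING)

# Block until EOS or an error shows up on the bus.
bus = pipeline.get_bus()
msg = bus.timed_pop_filtered(
    Gst.CLOCK_TIME_NONE, Gst.MessageType.EOS | Gst.MessageType.ERROR)
if msg.type == Gst.MessageType.ERROR:
    err, debug = msg.parse_error()
    print(f"Error: {err.message} ({debug})")
pipeline.set_state(Gst.State.NULL)
```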

To do what you require, you need to study adding bins (each effectively a pipeline of its own) to a running pipeline, and how to attach and detach these source bins’ ghost pads dynamically, using pad blocks so you don’t send an EOS (end-of-stream) event through the pipeline.
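A minimal Python sketch of that pattern, with audiotestsrc/audiomixer standing in for your real sources and sinks (hot-plug a source bin via a ghost pad, then detach it behind a blocking pad probe so nothing downstream sees an EOS):

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

Gst.init(None)

# A running pipeline with a mixer we can hot-plug source bins into.
pipeline = Gst.parse_launch("audiomixer name=mix ! audioconvert ! autoaudiosink")
mixer = pipeline.get_by_name("mix")

def attach_source_bin():
    # Build a self-contained source bin and expose it via a ghost pad.
    bin_ = Gst.Bin.new(None)
    src = Gst.ElementFactory.make("audiotestsrc", None)
    src.set_property("is-live", True)
    bin_.add(src)
    ghost = Gst.GhostPad.new("src", src.get_static_pad("src"))
    bin_.add_pad(ghost)

    pipeline.add(bin_)
    sinkpad = mixer.get_request_pad("sink_%u")  # request_pad_simple on >= 1.20
    ghost.link(sinkpad)
    bin_.sync_state_with_parent()
    return bin_, sinkpad

def detach_source_bin(bin_, sinkpad):
    ghost = bin_.get_static_pad("src")

    def on_blocked(pad, info):
        # The pad is now blocked: no more data flows past it, so we can
        # unlink without an EOS leaking into the shared pipeline.
        pad.unlink(sinkpad)
        mixer.release_request_pad(sinkpad)

        def cleanup():  # tear down from the main loop, not the streaming thread
            bin_.set_state(Gst.State.NULL)
            pipeline.remove(bin_)
            return False

        GLib.idle_add(cleanup)
        return Gst.PadProbeReturn.REMOVE

    ghost.add_probe(Gst.PadProbeType.BLOCK_DOWNSTREAM, on_blocked)

# First source attached up front; further attach/detach calls work the
# same way while the pipeline is live, as the detach below shows.
handle = attach_source_bin()
pipeline.set_state(Gst.State.PLAYING)
GLib.timeout_add_seconds(3, lambda: (detach_source_bin(*handle), False)[1])
GLib.MainLoop().run()
```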

Here are a few repos which have given me many hours of study and inspiration, and which should help with what you’re wanting to do. Not sure what you’re coding in?

  1. GitHub - voc/voctomix: Full-HD Software Live-Video-Mixer in python. Dormant for a while, but others have begun merging changes for small enhancements and fixes. Great! (Python)

  2. GitHub - bbc/brave: Basic Real-time AV Editor - allowing you to preview, mix, and route live audio and video streams on the cloud. Dead, but the concept of linking pipelines to a running pipeline is here. Uses an early version of GStreamer; it could easily be modernised with a later version. (Python)

  3. GitHub - dorftv/dove: DOVE Online Video Editor. Recent and based off Brave. Runs in a Docker container; however, what you’re wanting to do is here… (Python)

I’ve been developing in Go, and I find that, for me, Go has some advantages over Python with GStreamer: speed and a single executable. C and C++ give the same outcome, but I’m hooked on Go; it’s a modern language and, compared to C and C++, has more to offer in packages etc.

Another option is Rust; however, I can’t code in Rust to save myself! Old ways are hard to change.

Hope this information helps you out.

Rob
