Segmentation fault in event message handling while using ipcpipeline

Context: Fedora 39, GStreamer 1.22.11-1 from the official repos (including EPEL and RPM Fusion)

I am currently attempting to set up an application that uses ipcpipeline to offload some of the heavier work of a process into side processes before bringing the data back into the starting process for final handling. This works… for a few seconds. However, there seems to be something in ipcpipeline that doesn’t like navigation events, and handling one causes a segmentation fault.

The current test pipeline that demonstrates the problem does no processing; it just sends the data to the child and gets it back:
(Parent, Primary Pipeline): videotestsrc ! capsfilter ! videoconvert ! queue2 ! ipcpipelinesink
(Child, ipcslavepipeline): ipcpipelinesrc ! queue2 ! ipcpipelinesink
(Parent, ipcslavepipeline): ipcpipelinesrc ! queue2 ! xvimagesink
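
In code form, the wiring looks roughly like this (a simplified sketch, not the exact code in the gist linked below; the second socketpair for the return hop into the parent slave pipeline is omitted, so the child’s ipcpipelinesink is left unconfigured here):

    #include <gst/gst.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main (int argc, char **argv)
    {
      int fds[2];

      gst_init (&argc, &argv);

      /* one bidirectional socketpair per parent<->child hop; the return
       * hop (child ipcpipelinesink -> parent ipcpipelinesrc) would use a
       * second pair, omitted in this sketch */
      socketpair (AF_UNIX, SOCK_STREAM, 0, fds);

      if (fork () == 0) {
        /* child: slave pipeline, state-managed remotely by the parent
         * through the fd, so no set_state call here */
        GstElement *slave = gst_element_factory_make ("ipcslavepipeline", NULL);
        GstElement *src = gst_element_factory_make ("ipcpipelinesrc", NULL);
        GstElement *q = gst_element_factory_make ("queue2", NULL);
        GstElement *sink = gst_element_factory_make ("ipcpipelinesink", NULL);

        g_object_set (src, "fdin", fds[1], "fdout", fds[1], NULL);
        gst_bin_add_many (GST_BIN (slave), src, q, sink, NULL);
        gst_element_link_many (src, q, sink, NULL);
        g_main_loop_run (g_main_loop_new (NULL, FALSE));
      } else {
        /* parent: primary pipeline ending in ipcpipelinesink */
        GstElement *master = gst_parse_launch (
            "videotestsrc ! videoconvert ! queue2 ! ipcpipelinesink name=out",
            NULL);
        GstElement *out = gst_bin_get_by_name (GST_BIN (master), "out");

        g_object_set (out, "fdin", fds[0], "fdout", fds[0], NULL);
        gst_object_unref (out);
        gst_element_set_state (master, GST_STATE_PLAYING);
        g_main_loop_run (g_main_loop_new (NULL, FALSE));
      }
      return 0;
    }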

Source of this pipeline (sorry it’s not the cleanest; I haven’t gone back and refactored things while sorting out whether this even works): gstreamer ipcpipeline segmentation fault · GitHub

When I initially start, it works exactly as expected. The xvimagesink in the parent process slave pipeline shows the expected videotestsrc pattern, including the snow in the bottom corner. Things go wrong when the xvimagesink tries to send a navigation event back through the pipeline after the window is clicked on or a button is pressed. The event is successfully relayed from the parent slave pipeline to the child slave pipeline, and the child process immediately segfaults with this assertion message:

(gst_stream_nano_gray:891275): GStreamer-CRITICAL **: 07:27:32.167: structure_serialize: assertion 'structure != NULL' failed

Backtrace of the child at termination: gstreamer ipcpipeline sigsegv backtrace · GitHub

If it would help, I can post the TRACE logs of each process, although each is in the tens of MB, so I’ll hold off on those unless requested, since they can be regenerated from the source above. That said, the last line from the child process is:

0:00:01.128319082 892358 0xf7bc10 TRACE ipcpipelinecomm gstipcpipelinecomm.c:2080:read_many:<slave_ipc_sink> deserialized message 0x7f8dd80070c0 of type element

which lands just before frame 20 of the backtrace, so the problem appears to happen in the child’s ipcpipelinesink element on receipt of the message from the parent slave pipeline.

Any idea what the problem is? Is chaining ipcpipeline like this not supported? It works properly when running the ipcpipeline example with only one stage. If this is an unsupported use case for ipcpipeline, what is my best alternative for this scatter-and-collect style of processing?

For even more context, if you want to know WHY I want to do this: there are two things I’m trying to do. One is to parallelize CV processing pipelines. The second is real-time encoding of FFV1 on a platform where the processor can’t quite keep up. Since FFV1 can be set up as pure intra-frame, I was going to use an appsink plus a pair of appsrcs to round-robin frames between two processes running avenc_ffv1, then on the return path use appsrc/appsink again to collect the frames and re-emit them in PTS order.
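
To make the round-robin part concrete, the dispatch side would look something like this (a sketch only; names like Dispatch are placeholders of mine, and it shows only the frame fan-out, not the collection/re-ordering side):

    #include <gst/gst.h>
    #include <gst/app/gstappsink.h>
    #include <gst/app/gstappsrc.h>

    typedef struct {
      GstElement *feeds[2];  /* the two appsrcs feeding the avenc_ffv1 workers */
      guint next;            /* round-robin index */
    } Dispatch;

    static GstFlowReturn
    on_new_sample (GstAppSink *sink, gpointer user_data)
    {
      Dispatch *d = user_data;
      GstSample *sample = gst_app_sink_pull_sample (sink);
      GstFlowReturn ret;

      if (sample == NULL)
        return GST_FLOW_EOS;

      /* intra-only FFV1 has no inter-frame dependencies, so alternating
       * frames between two independent encoders is safe; the PTS rides
       * along on the buffer for re-ordering on the collect side */
      ret = gst_app_src_push_sample (GST_APP_SRC (d->feeds[d->next]), sample);
      d->next = (d->next + 1) % 2;
      gst_sample_unref (sample);
      return ret;
    }

    /* hook-up: the appsink needs emit-signals=TRUE for this to fire
     *   g_object_set (appsink, "emit-signals", TRUE, NULL);
     *   g_signal_connect (appsink, "new-sample",
     *       G_CALLBACK (on_new_sample), &dispatch);
     */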

After some more troubleshooting, I’ve found a pipeline that accomplishes what I want, with a master process that both sends to and receives from child processes. What I’ve generally found:

  • ipcpipeline shouldn’t loop between processes. Having an ipcpipeline from the parent to one or more child processes is fine. Use another ipcpipelinesrc/ipcpipelinesink pair to send the data back, though, and you’re going to have a bad time. You might be able to break the various pipelines out into separate GLib main contexts on their own threads, but I didn’t get around to trying that.
  • You can accomplish effectively the same workflow with a slightly different pipeline: the return path can be handled with fdsink/fdsrc instead. So you have the parent pipeline as the One True Pipeline, with ipcpipelinesink feeding an ipcslavepipeline in each child. In the parent, as part of the same pipeline, make another bin using fdsrc to get the data back. The pipelines are all controlled by the parent process pipeline, with message passing via the ipcslavepipelines, but you get the data back without a pipeline loop; see the sketch after this list.
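
A rough sketch of that workaround topology. The gdppay/gdpdepay framing is one option I’d use to keep caps and timestamps intact across the raw fd; the fd plumbing and element names here are illustrative, not lifted from my actual source:

    #include <gst/gst.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main (int argc, char **argv)
    {
      int ipc[2];   /* control + data, parent -> child (ipcpipeline) */
      int ret[2];   /* data-only return path, child -> parent (plain fd) */

      gst_init (&argc, &argv);
      socketpair (AF_UNIX, SOCK_STREAM, 0, ipc);
      pipe (ret);

      if (fork () == 0) {
        /* child: slave pipeline ends in fdsink on the return pipe;
         * gdppay serializes caps + buffers so fdsrc can reconstruct them */
        GstElement *slave = gst_element_factory_make ("ipcslavepipeline", NULL);
        GstElement *src = gst_element_factory_make ("ipcpipelinesrc", NULL);
        GstElement *pay = gst_element_factory_make ("gdppay", NULL);
        GstElement *out = gst_element_factory_make ("fdsink", NULL);

        g_object_set (src, "fdin", ipc[1], "fdout", ipc[1], NULL);
        g_object_set (out, "fd", ret[1], NULL);
        gst_bin_add_many (GST_BIN (slave), src, pay, out, NULL);
        gst_element_link_many (src, pay, out, NULL);
        g_main_loop_run (g_main_loop_new (NULL, FALSE));
      } else {
        /* parent: One True Pipeline with two branches -- the send leg into
         * the child, and an fdsrc branch reading the data back */
        GstElement *master = gst_parse_launch (
            "videotestsrc ! videoconvert ! queue2 ! ipcpipelinesink name=send "
            "fdsrc name=recv ! gdpdepay ! queue2 ! xvimagesink", NULL);

        g_object_set (gst_bin_get_by_name (GST_BIN (master), "send"),
            "fdin", ipc[0], "fdout", ipc[0], NULL);
        g_object_set (gst_bin_get_by_name (GST_BIN (master), "recv"),
            "fd", ret[0], NULL);
        gst_element_set_state (master, GST_STATE_PLAYING);
        g_main_loop_run (g_main_loop_new (NULL, FALSE));
      }
      return 0;
    }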