Has anyone ever seen a GStreamer pipeline that takes rtmpsrc, splits the video and audio out to intervideosink/interaudiosink, then picks them up again from intervideosrc/interaudiosrc and muxes them back together to rtmpsink? As far as I can tell, this is the only way to keep rtmpsink up and running while there are glitches on the rtmpsrc side (after the input times out, the output continues as a black screen at the configured video resolution and framerate).
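Roughly what I have in mind (an untested sketch, assuming the stock inter elements from gst-plugins-bad; the RTMP URLs, channel names, resolution/framerate and encoder choices are just placeholders):

  # ingest pipeline: decode the RTMP input and feed the inter channels
  gst-launch-1.0 \
    rtmpsrc location="rtmp://example.com/live/in" ! flvdemux name=demux \
    demux.video ! queue ! decodebin ! videoconvert ! videoscale ! videorate ! \
      video/x-raw,width=1280,height=720,framerate=30/1 ! intervideosink channel=vch \
    demux.audio ! queue ! decodebin ! audioconvert ! audioresample ! interaudiosink channel=ach

  # output pipeline: keeps rtmpsink alive; intervideosrc goes black after the timeout (ns)
  gst-launch-1.0 \
    flvmux name=mux streamable=true ! rtmpsink location="rtmp://example.com/live/out" \
    intervideosrc channel=vch timeout=3000000000 ! videoconvert ! \
      x264enc tune=zerolatency bitrate=2500 ! h264parse ! queue ! mux. \
    interaudiosrc channel=ach ! audioconvert ! audioresample ! \
      voaacenc bitrate=128000 ! aacparse ! queue ! mux.

The point of running these as two separate pipelines is that the second one never depends on rtmpsrc directly, so a glitch or restart of the ingest side should not tear down the rtmpsink connection.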
Another way I thought about was creating virtual video and audio devices, playing the input rtmpsrc back into them, and then picking them up with v4l2src (and an audio capture source) in separate GStreamer pipelines.
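For reference, that variant would look something like this (sketch only; it assumes a v4l2loopback device at /dev/video10 and the ALSA snd-aloop module for audio, both of which are placeholders):

  # feed the virtual devices from the RTMP input
  gst-launch-1.0 \
    rtmpsrc location="rtmp://example.com/live/in" ! flvdemux name=demux \
    demux.video ! queue ! decodebin ! videoconvert ! v4l2sink device=/dev/video10 \
    demux.audio ! queue ! decodebin ! audioconvert ! audioresample ! alsasink device=hw:Loopback,0,0

  # read them back and push to rtmpsink in a second pipeline
  gst-launch-1.0 \
    flvmux name=mux streamable=true ! rtmpsink location="rtmp://example.com/live/out" \
    v4l2src device=/dev/video10 ! videoconvert ! x264enc tune=zerolatency bitrate=2500 ! h264parse ! queue ! mux. \
    alsasrc device=hw:Loopback,1,0 ! audioconvert ! audioresample ! voaacenc bitrate=128000 ! aacparse ! queue ! mux.

One caveat: as far as I know v4l2src has no built-in timeout/black-frame behaviour like intervideosrc, so the output pipeline may still stall when the input drops.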
Other suggestions are welcome too.
I have seen working examples using videotestsrc + audiotestsrc going over the inter video/audio sink/src elements to autovideosink/autoaudiosink: the test pattern shows up on screen after being pulled from intervideosrc, and the test tone comes out of the speakers (it is not mentioned in the manuals, but I found out that the timeout on the inter elements is given in nanoseconds). I have seen working examples taking rtmpsrc straight to rtmpsink with the audio muxed in, but in that case an input glitch causes an output glitch too. I have also seen examples using udpsink + udpsrc, but again input glitches kill the output, and there are data errors over UDP even on localhost, whatever buffer sizes are set, even though the CPU is almost idle (I have GPU encoding working).
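For completeness, the inter test setup I mean is along these lines (my sketch of it; channel names and caps are arbitrary, and the timeout is in nanoseconds):

  # producer side
  gst-launch-1.0 \
    videotestsrc is-live=true ! video/x-raw,width=640,height=480,framerate=30/1 ! intervideosink channel=test-v \
    audiotestsrc is-live=true ! audioconvert ! interaudiosink channel=test-a

  # consumer side: keeps running (black video / silence) even if the producer is stopped
  gst-launch-1.0 \
    intervideosrc channel=test-v timeout=2000000000 ! videoconvert ! autovideosink \
    interaudiosrc channel=test-a ! audioconvert ! autoaudiosink

That is what makes me think the rtmpsrc-to-rtmpsink case should work the same way once the encoding and muxing are added on the consumer side.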