Synchronised audio and video via multicast


Some of our cameras require that we do not use an RTSP request to stream as they are already streaming via multicast to a different server. I have successfully got video streaming working in this scenario using a pipeline like the following:

gst-launch-1.0 udpsrc uri=udp:// caps="application/x-rtp,media=video,clock-rate=90000,encoding-name=H264" ! rtpjitterbuffer ! rtph264depay ! h264parse ! decodebin ! videoconvert ! autovideosink

(although I am doing this in code instead).

Can anyone advise me on how to add audio as well? It would need to be in sync, using the PTS values and RTCP sender reports, as is usual for audio/video via RTSP.

(Typically in this case the audio uses the same multicast address, with the audio port being the video port + 2.)
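The usual way to get RTCP-based lipsync in GStreamer is to receive both streams through a single rtpbin, giving it the RTCP ports as well as the RTP ports, so it can use the sender reports to align the two streams' clocks. A sketch along those lines, assuming the RTCP ports follow the common RTP-port + 1 convention and using a placeholder multicast address (239.0.0.1), placeholder ports (5000/5002), and a placeholder audio encoding (PCMU) that would all need to match the real camera:

```shell
# Sketch only: address, ports, and audio caps are illustrative placeholders.
# rtpbin correlates the RTCP sender reports (ports 5001/5003) with the RTP
# timestamps, which is what keeps the audio and video branches in sync.
gst-launch-1.0 rtpbin name=rtpbin \
  udpsrc address=239.0.0.1 port=5000 \
    caps="application/x-rtp,media=video,clock-rate=90000,encoding-name=H264" \
    ! rtpbin.recv_rtp_sink_0 \
  udpsrc address=239.0.0.1 port=5001 ! rtpbin.recv_rtcp_sink_0 \
  udpsrc address=239.0.0.1 port=5002 \
    caps="application/x-rtp,media=audio,clock-rate=8000,encoding-name=PCMU" \
    ! rtpbin.recv_rtp_sink_1 \
  udpsrc address=239.0.0.1 port=5003 ! rtpbin.recv_rtcp_sink_1 \
  rtpbin. ! rtph264depay ! h264parse ! decodebin ! videoconvert ! autovideosink \
  rtpbin. ! rtppcmudepay ! mulawdec ! audioconvert ! autoaudiosink
```

rtpbin replaces the standalone rtpjitterbuffer from the video-only pipeline (it creates jitterbuffers internally); the `rtpbin.` links at the end pick up the dynamically created receive pads once packets arrive.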

Thanks for any help,


If you can connect to the cameras via RTSP, it would be helpful to do that just to get the SDP it provides for that stream, as it will contain all the information you need; then you might also be able to simply use sdpsrc.

Thanks - I’ll look into sdpsrc.

Grabbing the SDP on our customer sites is probably not practical, but I can probably reconstruct it based on the info I already know.
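For reference, a hand-written SDP for a multicast session like the one described might look roughly like the following (the address, ports, and audio codec are illustrative placeholders, not values from any real camera):

```
v=0
o=- 0 0 IN IP4 239.0.0.1
s=Camera stream
c=IN IP4 239.0.0.1
t=0 0
m=video 5000 RTP/AVP 96
a=rtpmap:96 H264/90000
m=audio 5002 RTP/AVP 0
a=rtpmap:0 PCMU/8000
```

Saved to a file, this could then be pointed at with sdpsrc's location property, which sets up the udpsrc elements and an internal rtpbin from the SDP description.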


Would you be willing to share an example pipeline containing sdpsrc please?