Hi!
I have a project in which a robot, coded in Rust and running on a Raspberry Pi, is operated remotely. However, it does not receive or transmit any video or audio - only raw data.
I wanted to use WebRTC for remote operation and the GStreamer webrtcsink element along with the Rust bindings seems like an easy drop-in method to achieve this. Using the raw webrtcbin element or another WebRTC library requires implementing a signalling solution, while the GStreamer supplied signalling server works very well for my purpose.
I now have it working (creating the data channel myself in the consumer-added callback), but it seems I am forced to add an audio/video element to my pipeline and then send some empty data just so it will connect to the signalling server. Is there any way to do this without the redundant element?
Thank you!
I have a similar use case. Do you have any more information on this?
I know how to do this in C++ or Python using webrtcbin and manually handling the signaling with a WebSocket. I am also wondering if we can use webrtcsink for this, or maybe webrtcbin and the Rust signaller (Signaller::new(WebRTCSignallerRole::Producer)).
Hi,
If you’re interested, you can see the Rust implementation here.
As I mentioned in the post, I am able to use webrtcsink and the stock signaller. I am just forced to send some empty audio data to make the signaller connect.
So I guess that means you would be replicating the signaller usage as in webrtcsink?
It might be nice to create a new webrtcdata element in Rust, which would be like webrtcsink but without the audio/video parts, and would let you connect to a “channel-created” signal where you could handle the channel logic.
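Agreed. Usage of such a hypothetical webrtcdata element (which does not exist today - the element and signal names below are invented purely for illustration) could look roughly like:

```rust
// Hypothetical API — "webrtcdata" is a proposed element, not a real one.
let wd = gst::ElementFactory::make("webrtcdata").build()?;

// No audio/video pads needed; the element would hand over the ready
// data channel via a "channel-created" signal.
wd.connect("channel-created", false, |values| {
    let _channel = values[1].get::<glib::Object>().unwrap();
    // Handle the channel logic here.
    None
});
```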