Capture audio input from a sound card and pipe it to a WebRTC peer?


I am new to GStreamer. I am trying to capture audio from a sound card on a headless Linux computer and then send it to a single WebRTC peer (a web browser). Only one peer is involved, no broadcasting; it is point to point: [headless Linux audio input] → [single WebRTC client on the Internet].

I managed to capture audio from the card to a WAV file using GStreamer. I have also read about webrtcbin and webrtcsink and watched tutorials on both, but I still cannot grasp how to tie all this together.
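For context, a minimal capture-to-WAV pipeline looks roughly like this (the ALSA device name and output filename are assumptions; substitute your actual `hw:X` device, or `pulsesrc` on a PulseAudio system):

```shell
# Capture raw audio from an ALSA device, convert/resample it,
# and write it to a WAV file. -e sends EOS on Ctrl+C so the
# WAV header is finalized properly.
gst-launch-1.0 -e alsasrc device=hw:0 ! audioconvert ! audioresample \
    ! wavenc ! filesink location=capture.wav
```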

I am familiar with WebRTC and signaling servers, so I don’t mind a solution where I have to provide my own signaling server if needed.

Programming language is not important, I can adapt to that.

Thank You!

Hi @circaeng

You could start with the example from the webrtc README in gst-plugins-rs. The Usage section details how to launch the default signaller, run webrtcsink with audio and video streams, and render those streams in a browser.
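As an untested, audio-only sketch of what the README describes: with the default signalling server from gst-plugins-rs running (`cargo run --bin gst-webrtc-signalling-server`), the sender side could look like this. The ALSA device name and the signaller URI/port are assumptions; check the README for the actual defaults.

```shell
# Feed raw sound-card audio into webrtcsink, which takes care of
# encoding, WebRTC negotiation, and streaming to connecting peers
# via the signalling server.
gst-launch-1.0 alsasrc device=hw:0 ! audioconvert ! audioresample \
    ! webrtcsink signaller::uri="ws://127.0.0.1:8443"
```

The browser side can then consume the stream through the web demo shipped alongside webrtcsink in the same repository.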

See also this example which demonstrates how to integrate with a custom signaller and programmatically build the pipeline.