Hi
I am new to GStreamer. I am trying to capture audio from a soundcard on a headless Linux computer and then send it to a WebRTC peer client (a web browser). There is only one peer involved, no broadcasting; it is point-to-point: [headless Linux audio input] → [single WebRTC client on the Internet]
I managed to capture audio from the card to a WAV file using GStreamer. I have also read about webrtcbin and webrtcsink and watched tutorials on both, but I still cannot grasp how to tie all of this together.
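To show where I am: the capture stage below works for me, and underneath it is my rough guess at the webrtcsink equivalent (the ALSA device name is just an example; `arecord -l` lists the real ones). Is this the right direction?

```shell
# Working: capture from ALSA to a WAV file
# (-e sends EOS on Ctrl+C so the WAV header is finalized)
gst-launch-1.0 -e alsasrc device=hw:0 ! audioconvert ! audioresample \
    ! wavenc ! filesink location=capture.wav

# My guess at the WebRTC version: swap the WAV branch for webrtcsink,
# which (as I understand it) handles encoding and negotiation and talks
# to the gst-plugins-rs signalling server by default
gst-launch-1.0 alsasrc device=hw:0 ! audioconvert ! audioresample \
    ! webrtcsink
```

These are pipeline sketches, not tested end to end; I mainly want to know whether webrtcsink is meant to replace the file branch like this.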
I am familiar with WebRTC and signaling servers, so I don’t mind a solution where I have to provide my own signaling server if needed.
Programming language is not important, I can adapt to that.
Thank you!