Unable to achieve echo cancellation using webrtcdsp plugin

I have two ARM-based boards, each with a mic and a hands-free speaker attached and running Linux. I ran the following GStreamer pipelines on the boards via a shell script.

Sender pipeline on BOARD 1

###########################

B=192.168.8.87
A_BAudSendPort=5000

gst-launch-1.0 alsasrc device='hw:0,0' ! \
webrtcdsp echo-cancel=true noise-suppression=true ! \
webrtcechoprobe ! audioconvert ! \
'audio/x-raw,channels=1,depth=16,width=16,rate=16000,encoding-name=(string)L16' ! \
rtpL16pay ! udpsink host=$B port=$A_BAudSendPort sync=false

Receiver pipeline on BOARD 1

##########################
B_AAudSendPort=5001

gst-launch-1.0 udpsrc port=$B_AAudSendPort ! \
"application/x-rtp,media=(string)audio,clock-rate=(int)16000, width=16, height=16, \
encoding-name=(string)L16, \
encoding-params=(string)1, channels=(int)1, channel-positions=(int)1, \
payload=(int)96" ! \
rtpL16depay ! \
audioconvert ! \
webrtcdsp ! \
webrtcechoprobe name=webrtcechoprobe0 ! \
alsasink sync=false

Sender pipeline on BOARD 2

###########################

B=192.168.21.141
A_BAudSendPort=5001
gst-launch-1.0 alsasrc device='hw:0,0' ! \
webrtcdsp echo-cancel=true noise-suppression=true ! \
webrtcechoprobe ! audioconvert ! \
'audio/x-raw,channels=1,depth=16,width=16,rate=16000,encoding-name=(string)L16' ! \
rtpL16pay ! udpsink host=$B port=$A_BAudSendPort sync=false

Receiver pipeline on BOARD 2

##########################
B_AAudSendPort=5000

gst-launch-1.0 udpsrc port=$B_AAudSendPort ! \
"application/x-rtp,media=(string)audio,clock-rate=(int)16000, width=16, height=16, \
encoding-name=(string)L16, \
encoding-params=(string)1, channels=(int)1, channel-positions=(int)1, \
payload=(int)96" ! \
rtpL16depay ! \
audioconvert ! \
webrtcdsp ! \
webrtcechoprobe name=webrtcechoprobe0 ! \
alsasink sync=false

Even though I used the webrtcdsp plugin in both sender pipelines, I could still hear a loud echo at both ends. What am I missing in the pipelines?
Please help!

  1. You need to run the sender and receiver in the same pipeline. It should just be a matter of combining the two gst-launch strings on each side.
  2. Only webrtcdsp should be linked to the alsasrc, and only webrtcechoprobe to the alsasink.

The rationale is that webrtcechoprobe feeds the playback stream to the canceller (so it knows what to look for in the captured audio), and webrtcdsp tries to remove the audio you just played from the captured stream. If these are separate processes, as you have them now, the capture side has no idea what is being played back and can do no cancellation.
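
To make that concrete, the shape you want on each board is roughly the following. This is only a sketch I haven't run as-is: the ALSA device, the $PEER_IP/$SEND_PORT/$RECV_PORT variables are placeholders to adjust for your setup.

gst-launch-1.0 \
  alsasrc device='hw:0,0' ! audioconvert ! audioresample ! audio/x-raw,rate=16000,channels=1 ! \
  webrtcdsp echo-cancel=true noise-suppression=true ! audioconvert ! \
  rtpL16pay ! udpsink host=$PEER_IP port=$SEND_PORT sync=false \
  udpsrc port=$RECV_PORT caps="application/x-rtp,media=(string)audio,clock-rate=(int)16000,encoding-name=(string)L16,channels=(int)1,payload=(int)96" ! \
  rtpL16depay ! audioconvert ! webrtcechoprobe ! audioconvert ! audioresample ! alsasink sync=false

That keeps capture (alsasrc → webrtcdsp → udpsink) and playback (udpsrc → webrtcechoprobe → alsasink) in one process, so the probe can feed the canceller.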

As you suggested, I combined the pipelines (and made some other changes as well) for both boards:

# Board 1
########

gst-launch-1.0 alsasrc device='hw:0,0' ! audioresample ! audioconvert ! audio/x-raw,rate=16000,channels=1 ! \
webrtcdsp echo-cancel=true echo-suppression-level=2 noise-suppression=true noise-suppression-level=3 ! \
audioconvert ! rtpL16pay ! udpsink host=192.168.8.87 port=5006 async=FALSE \
udpsrc port=5007 caps="application/x-rtp,channels=1,clock-rate=16000" ! rtpjitterbuffer latency=10 ! \
rtpL16depay ! audioconvert ! webrtcechoprobe ! audioconvert ! audioresample ! alsasink sync=false

# Board 2
########
gst-launch-1.0 alsasrc device='hw:0,0' ! audioresample ! audioconvert ! audio/x-raw,rate=16000,channels=1 ! \
webrtcdsp echo-cancel=true echo-suppression-level=2 noise-suppression=true noise-suppression-level=3 ! \
audioconvert ! rtpL16pay ! udpsink host=192.168.21.141 port=5007 async=FALSE \
udpsrc port=5006 caps="application/x-rtp,channels=1,clock-rate=16000" ! rtpjitterbuffer latency=10 ! \
rtpL16depay ! audioconvert ! audioresample ! webrtcechoprobe ! audioconvert ! alsasink sync=false

I am still getting echo. What else am I missing?
Please help!

This seems correct. What version of webrtc-audio-processing are you using?

I am using webrtc-audio-processing version 0.3.1 (and GStreamer version 1.14.4; it was preinstalled on my board running Linux). Actually, GStreamer 1.14.4 needs the webrtc-audio-processing library version to be > 0.2 and < 0.4.

1.14 is really quite old (1.14.5 was released more than 5 years ago). I don’t have any concrete suggestions, but I think it might be worth trying current versions of GStreamer and webrtc-audio-processing.
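
If it helps narrow things down, a few commands to double-check which versions are actually being picked up (assuming pkg-config is installed; the -1 suffix is the .pc file that the 1.x series of webrtc-audio-processing installs):

gst-inspect-1.0 --version
gst-inspect-1.0 webrtcdsp | grep -i version        # plugin version and source module
pkg-config --modversion webrtc-audio-processing    # 0.x series, if installed
pkg-config --modversion webrtc-audio-processing-1  # 1.x series, if installed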

Hi,

I actually tried the same pipelines on two PCs as well, both running GStreamer 1.24.2 and physically placed significantly apart from each other, but echo cancellation still did not work.
Also, with GStreamer 1.24.2, which is the default version that comes installed with Ubuntu 24.04.1, the webrtcdsp plugin was not working even after installing gst-plugins-bad via "apt install".
So I had to manually compile the sources of gst-plugins-bad 1.24.2, which has the webrtc-audio-processing library as a dependency, and I compiled that too for my PC.
The webrtc-audio-processing library version used is 1.3, downloaded from the link below:
http://freedesktop.org/software/pulseaudio/webrtc-audio-processing/webrtc-audio-processing-1.3.tar.gz
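
For reference, the build was roughly along these lines (a sketch only; the prefix and options are illustrative, not the exact commands I ran):

tar xf webrtc-audio-processing-1.3.tar.gz
cd webrtc-audio-processing-1.3
meson setup build --prefix=/usr/local
ninja -C build && sudo ninja -C build install
cd ..

# gst-plugins-bad 1.24.2 with the plugin explicitly enabled
# (PKG_CONFIG_PATH may need to point at the prefix above so meson finds webrtc-audio-processing-1)
cd gst-plugins-bad-1.24.2
meson setup build --prefix=/usr/local -Dwebrtcdsp=enabled
ninja -C build && sudo ninja -C build install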

Can you please try these pipelines on your PCs (preferably Ubuntu 24.04.1 based) and report back with the results? Or, if there is even a minor mistake in the pipelines, could you rectify it?
Appreciate your help!

Just to sanity check – I assume the two boards are far enough apart that there is no possibility of the mics picking up each other’s audio?
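
Also, to take the network out of the equation, you could check that cancellation works at all on one machine with something like the loopback example from the webrtcdsp docs. A rough sketch (device and rate assumed, adjust as needed):

gst-launch-1.0 alsasrc device='hw:0,0' ! audioconvert ! audioresample ! audio/x-raw,rate=16000,channels=1 ! \
  webrtcdsp echo-cancel=true noise-suppression=true ! audioconvert ! webrtcechoprobe ! \
  audioconvert ! audioresample ! alsasink

If you speak into the mic you should hear yourself played back once, and the copy of that playback picked up again by the mic should be cancelled rather than building up into feedback.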

Yes they are! There wasn't any echo when I used Google Meet on the same setup. That's why I suggested you try the same at your end!
I just wanted to be sure that it works on a PC (preferably Ubuntu based, as my PC setup runs Ubuntu), whatever the version of GStreamer and its dependent libraries, so that I have a reference.
Thanks!

Hi @arun ,

Could you please help in resolving the issue?
Thanks!

Heya Vishal, I’m a bit caught up, but will try your pipelines later today if I can.


Hi,
Just to reiterate, it doesn't have to be Ubuntu 24. You could try it on any version of Ubuntu and GStreamer. A proof of concept that webrtcdsp works is primarily the point of interest.
Thanks!