Webrtcsink: echo cancellation example

Hi,

I’m using the webrtcsink plugin. I’d like to know if it’s possible to use the webrtc echo cancellation feature?

Can it be added to a pipeline like gst-launch-1.0 webrtcsink name=ws meta="meta,name=gst-stream" pulsesrc ! opusenc ! audio/x-opus, rate=48000, channels=2 ! ws. ?

Assume that I also receive and play back someone else's sound, so pulsesrc is capturing that sound and creating the echo.

Thanks

Echo cancellation is used when you are both receiving and sending audio. As webrtcsink only sends audio and doesn't receive it, echo cancellation does not apply to a webrtcsink-only pipeline.

If you are receiving audio from the peer through a different pipeline, then echo cancellation may be useful.

The webrtcdsp documentation you linked to contains an example of how to construct a pipeline that performs echo cancellation with near and far streams.
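
For reference, here is a minimal sketch of that shape in Python (an illustration under assumptions, not the documented example: autoaudiosrc/autoaudiosink and a live audiotestsrc stand in for the real capture, playback and received streams):

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

Gst.init(None)

# Near end: capture -> webrtcdsp (echo cancellation) -> wherever the audio gets sent.
# Far end: received audio -> webrtcechoprobe (exposes the far-end signal to the DSP) -> playback.
# Both branches are parsed into one pipeline, which webrtcdsp needs in order to find its probe.
pipeline = Gst.parse_launch(
    "autoaudiosrc ! audioconvert ! audioresample ! webrtcdsp ! fakesink "
    "audiotestsrc is-live=true ! audioconvert ! audioresample ! webrtcechoprobe ! autoaudiosink"
)

pipeline.set_state(Gst.State.PLAYING)
GLib.MainLoop().run()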

Yes, I also receive audio from the peer, using this example from the docs: gst-launch-1.0 playbin uri=gstwebrtc://127.0.0.1:8443?peer-id=[Client ID]

I don't really know how to connect these two pipelines. I was hoping to use pulsesink (to get the stream played back by playbin) and pulsesrc with webrtcdsp to feed the audio stream of webrtcsink.

Not exactly the pipelines you are using, but you might do something like this:

gst-launch-1.0 \
  webrtcsink name=ws meta="meta,name=gst-stream" pulsesrc ! webrtcdsp ! \
    opusenc ! audio/x-opus, rate=48000, channels=2 ! ws. \
  playbin uri=gstwebrtc://127.0.0.1:8443?peer-id=[Client ID] audio-filter=webrtcechoprobe

What we're doing here is making sure both things are running in the same pipeline (which is currently a requirement while using the webrtcdsp plugin).

You plug webrtcechoprobe into your playback path (here we ask playbin to do that for us), and webrtcdsp into the capture path.
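
If you are constructing this from code rather than gst-launch, the audio-filter shorthand corresponds to playbin's audio-filter property, roughly like this (a sketch; the URI and peer id are the placeholders from above, and the webrtcdsp half still has to end up in the same top-level pipeline):

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# playbin handles receiving/playback; audio-filter inserts an element into its audio path,
# here the echo probe that feeds the far-end signal to webrtcdsp.
playbin = Gst.ElementFactory.make("playbin", "playback")
probe = Gst.ElementFactory.make("webrtcechoprobe", "probe")
playbin.set_property("uri", "gstwebrtc://127.0.0.1:8443?peer-id=[Client ID]")
playbin.set_property("audio-filter", probe)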

That’s a good idea that may work. Unfortunately I have the following error:

Additional debug info:
../gst-libs/gst/audio/gstaudiobasesrc.c(847): gst_audio_base_src_create (): /GstPipeline:pipeline0/GstPulseSrc:pulsesrc0:
Dropped 14400 samples. This is most likely because downstream can't keep up and is consuming samples too slowly.

It disappears if I add a ! queue ! after pulsesrc, but the echo is not removed.

Adding the queue makes sense, and should not really affect the functioning of webrtcdsp. It’s expected that it should usually just work. :thinking:

webrtcdsp needs relatively small latency and low jitter to function. Place queues after the DSP, but also lower the source and sink buffer-time and latency-time to reduce the latency.
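
For example (illustrative numbers only; both properties are in microseconds on the audio source/sink elements):

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

src = Gst.ElementFactory.make("pulsesrc", "capture")
sink = Gst.ElementFactory.make("pulsesink", "playback")
# latency-time is the size of each chunk, buffer-time the total buffering, both in microseconds.
src.set_property("latency-time", 10000)
src.set_property("buffer-time", 40000)
sink.set_property("latency-time", 10000)
sink.set_property("buffer-time", 40000)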

Ah, good point. The webrtc-audio-processing 1.x update (in git but not in a release yet) does relax that a little bit (I measured that it's resilient to even over ~100 ms of latency).

Thanks.

I am not sure how to reduce the delay though. I tried to play with buffer-time but it does not change anything, and there is no sound anymore if I decrease latency-time below 10000 (the default value)

gst-launch-1.0 webrtcsink name=ws meta="meta,name=gst-stream" pulsesrc buffer-time=50000 latency-time=9000 ! webrtcdsp ! queue ! opusenc ! audio/x-opus, rate=48000, channels=2 ! ws. webrtcsrc signaller::uri="ws://127.0.0.1:8443" signaller::producer-peer-id=a6283a6e-3dce-4cc0-a567-6f1160c7f143 ! audioconvert ! webrtcechoprobe ! pulsesink buffer-time=50000 latency-time=9000

Hey which repo are you referring to? thx

The main gstreamer repo – gst-plugins-bad is where the plugin lives and was updated. This is the MR: webrtcdsp: Update code for webrtc-audio-processing-1 (!2943) · Merge requests · GStreamer / gstreamer · GitLab

I have trouble compiling this lib from the main gstreamer repo, tag 1.22.6
I’m doing this
meson setup -Dbad=enabled -Dgst-plugins-bad:webrtcdsp=enabled builddir/

webrtc-audio-processing v1.0 is cloned. But I’ve got this error

subprojects/webrtc-audio-processing/meson.build:49:6: ERROR: C++ shared or static library ‘absl_flags_registry’ not found

Looks to be a known and fixed issue. If I use webrtc-audio-processing v1.3 (changed in the .wrap file), I get:

Executing subproject gst-plugins-bad:webrtc-audio-processing

webrtc-audio-processing| Project name: webrtc-audio-processing
webrtc-audio-processing| Project version: 1.3
webrtc-audio-processing| C compiler for the host machine: cc (gcc 11.4.0 “cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0”)
webrtc-audio-processing| C linker for the host machine: cc ld.bfd 2.38
webrtc-audio-processing| C++ compiler for the host machine: c++ (gcc 11.4.0 “c++ (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0”)
webrtc-audio-processing| C++ linker for the host machine: c++ ld.bfd 2.38
webrtc-audio-processing| Run-time dependency absl_base found: YES 20210324
webrtc-audio-processing| Run-time dependency absl_flags found: YES 20210324
webrtc-audio-processing| Run-time dependency absl_strings found: YES 20210324
webrtc-audio-processing| Run-time dependency absl_synchronization found: YES 20210324
webrtc-audio-processing| Run-time dependency absl_bad_optional_access found: YES 20210324
webrtc-audio-processing| Library rt found: YES
webrtc-audio-processing| Dependency threads found: YES unknown (cached)
webrtc-audio-processing| Build targets in project: 956
webrtc-audio-processing| Subproject webrtc-audio-processing finished.

gst-plugins-bad| WARNING: Subproject ‘webrtc-audio-processing’ did not override ‘webrtc-audio-processing’ dependency and no variable name specified
gst-plugins-bad| Dependency webrtc-audio-processing from subproject subprojects/webrtc-audio-processing found: NO

subprojects/gst-plugins-bad/ext/webrtcdsp/meson.build:7:13: ERROR: Dependency ‘webrtc-audio-processing’ is required but not found.

A full log can be found at /opt/gstreamer/builddir/meson-logs/meson-log.txt

Any idea?

The changes to update to webrtc-audio-processing 1.x are in main and not yet in a release, so 1.22.6 won't have those changes, unfortunately.

Thanks. I wanted to compile everything first before changing branch. It looks like v1.22.6 has trouble compiling webrtc-audio-processing with the default settings, but I switched to the main branch and it is fine.

I am still working on reducing the latency to the minimum. Then I’ll try your branch. I’ll let you know

Still using webrtcdsp from the main branch, with this pipe

pipeline = Gst.parse_launch(f'webrtcsink name=ws meta="meta,name=stream" alsasrc ! audio/x-raw,rate=48000 ! webrtcdsp ! queue ! opusenc ! audio/x-opus, rate=48000, channels=2 ! ws. webrtcsrc signaller::uri=XXX signaller::producer-peer-id=XXX ! audio/x-raw, rate=48000 ! webrtcechoprobe ! alsasink')

I've got bidirectional sound but, because of the delay, the echo is not removed. I can reduce the delay using this ALSA configuration:

pcm.lowlatencysink {
    type dmix
    ipc_key 1234
    slave {
        pcm "hw:0,0"
        period_time 0
        period_size 64
        buffer_size 256
        rate 48000
        format S16_LE
        channels 2
    }
    bindings {
        0 0
        1 1
    }
}

pcm.lowlatencysrc {
    type dsnoop
    ipc_key 1
    slave {
        pcm "hw:0,0"
        channels 2
        rate 48000
        period_size 64
        buffer_size 256
        format S16_LE
    }
}

Latency is way better, but if I plug webrtcdsp into the pipe I do not get any sound.

pipeline = Gst.parse_launch(f'webrtcsink name=ws meta="meta,name=stream" alsasrc device=lowlatencysrc ! audio/x-raw,rate=48000 ! webrtcdsp ! queue ! opusenc ! audio/x-opus, rate=48000, channels=2 ! ws. webrtcsrc signaller::uri=XXX signaller::producer-peer-id=XXX ! audio/x-raw, rate=48000 ! webrtcechoprobe ! alsasink device=lowlatencysink')

There are these warnings I don’t understand

0:00:00.422846068  5852 0x7f379000dd80 WARN                    alsa conf.c:5668:snd_config_expand: alsalib error: Unknown parameters {AES0 0x02 AES1 0x82 AES2 0x00 AES3 0x02}
0:00:00.422866859  5852 0x7f379000dd80 WARN                    alsa pcm.c:2664:snd_pcm_open_noupdate: alsalib error: Unknown PCM lowlatencysink:{AES0 0x02 AES1 0x82 AES2 0x00 AES3 0x02}

It seems that the buffer and period sizes are disturbing webrtcdsp; increasing them makes the pipe work.
There is still an echo because of the delay. I'll test your branch.

Edit: actually I am on the main branch, so I’m using the webrtcdsp you mentioned @arun

It finally works. I needed an additional queue before alsasink and these parameters: webrtcdsp delay-agnostic=true echo-suppression-level=3
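
Putting those pieces together with the earlier pipeline, the working version looks roughly like this (a sketch: same XXX placeholders as before, the low-latency ALSA devices defined above, and the webrtcdsp parameters exactly as reported):

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

Gst.init(None)

# Earlier pipeline plus the two changes: delay-agnostic/echo-suppression-level on
# webrtcdsp, and an extra queue before alsasink.
pipeline = Gst.parse_launch(
    'webrtcsink name=ws meta="meta,name=stream" '
    "alsasrc device=lowlatencysrc ! audio/x-raw,rate=48000 ! "
    "webrtcdsp delay-agnostic=true echo-suppression-level=3 ! queue ! "
    "opusenc ! audio/x-opus,rate=48000,channels=2 ! ws. "
    "webrtcsrc signaller::uri=XXX signaller::producer-peer-id=XXX ! "
    "audio/x-raw,rate=48000 ! webrtcechoprobe ! queue ! alsasink device=lowlatencysink"
)
pipeline.set_state(Gst.State.PLAYING)
GLib.MainLoop().run()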

You need to configure latency-time and buffer-time on alsasrc/alsasink. A good value for a non-RT process is between 10 and 20 ms of "latency" (the size of the GStreamer buffers) and about 3 to 4 times the latency for the buffer-time.

Thanks. I’d like to fully understand this.
So, it would be alsasrc device=lowlatencysrc latency-time=10000 buffer-time=30000 ? Are these values going to reduce the latency (I didn’t notice anything when I was testing some values), or is it just to tell gstreamer the actual latency so it can better compensate / synchronize processes?

How precise should I be? I just have a rough idea of the latency using the audiolatency plugin but I don’t know exactly how late is the capture or the playback.

So, these are going to be used to negotiate the period_size and buffer_size in the ALSA API. They won't be used exactly, and your ALSA configuration may impair the ability to negotiate something else. If you translate these guidelines to bytes and use that in your ALSA configuration, it will likely work better.
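
As a rough worked example of that translation (assumptions: 48 kHz, S16LE, 2 channels, and the 10-20 ms / 3-4x guideline above):

# Translating latency-time/buffer-time (microseconds) into ALSA period/buffer sizes (frames),
# assuming 48 kHz, S16LE (2 bytes/sample), 2 channels.
rate = 48000
bytes_per_frame = 2 * 2                       # channels * bytes per sample

latency_time_us = 20000                       # 20 ms per GStreamer buffer
buffer_time_us = 4 * latency_time_us          # 3-4x the latency -> 80 ms here

period_size = rate * latency_time_us // 1_000_000   # 960 frames (3840 bytes)
buffer_size = rate * buffer_time_us // 1_000_000    # 3840 frames

print(period_size, buffer_size)

Compare that with the period_size 64 / buffer_size 256 in the dmix/dsnoop config above, which is far smaller than what those property values would request, and is presumably why increasing them helped earlier.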

In the hypothetical case where these values are kept, alsasrc will have a latency of latency-time, with buffer-time as its drift tolerance (this isn't great, I know).

alsasink, on the other hand, will have a latency of buffer-time; latency-time is the size of the writes made by GStreamer into the ring buffer. Its drift tolerance is configurable, 40 ms by default (well within the webrtcdsp tolerance).
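
The knob referred to here is the drift-tolerance property on the audio sink, in microseconds, e.g.:

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

sink = Gst.ElementFactory.make("alsasink", "out")
print(sink.get_property("drift-tolerance"))   # 40000 us (40 ms) by default
sink.set_property("drift-tolerance", 40000)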

Because the webrtcdsp elements are not informed about how much drift is seen, it is common that delay-agnostic mode is needed. But I have used this element without it in many cases and it worked. Without delay-agnostic, there is usually no initial period of echo before it starts cancelling.
