[d3d11screencapturesrc] DXGI vs WGC

Hi all,
I have software that captures the screen using d3d11screencapturesrc.

I would like to capture the entire screen, but I want to choose which GPU is used for the capture.
I have 2 GPUs: an RTX 4090 and an iGPU inside my CPU.

I know that I cannot select which GPU to use for the screen capture while using DXGI.
The GPU used for the capture is the one where the screen is attached.

Can I select which GPU to use with WGC?

Is it possible to record the entire screen with WGC if I don’t pass a window-handle?

Capturing a monitor using WGC is not recommended because of its poor performance, so capturing a monitor with your RTX GPU can perform worse than DXGI + iGPU. But yes, you can select the GPU and capture a monitor using WGC:

d3d11screencapturesrc capture-api=wgc adapter={your target GPU's DXGI adapter enumeration index; the index can vary depending on the GPU priority setting}

By default, WGC mode captures the primary monitor if window-handle is null. To select a target monitor, use the monitor-index or monitor-handle property.
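Putting the properties above together, a quick end-to-end sanity check could look like the sketch below. The adapter=1 and monitor-index=0 values are assumptions (second DXGI adapter, first monitor); your enumeration order may differ.

```shell
# Capture the first monitor via WGC on the second DXGI adapter
# (adapter=1 and monitor-index=0 are assumed values; verify yours)
# and render it locally to confirm the selection works.
gst-launch-1.0 -v d3d11screencapturesrc capture-api=wgc adapter=1 monitor-index=0 \
    ! queue ! d3d11videosink
```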

hi, thanks for the answer.

my software is “gaming oriented” and I would like to reduce the GPU impact of the screen capture as much as possible.

the idea was to reduce the load on the main GPU and delegate the screen capture to the secondary iGPU.

if I use:
d3d11screencapturesrc capture-api=wgc adapter=1

I can see a 5-6% load on the iGPU, but the framerate is horrendous.
it captures at 4-5 FPS…

is it normal that a 7950X3D iGPU is so slow for this purpose, even though the iGPU load is so low?

Thanks!!!

Hard to tell whether it’s normal or not, but such low frame rate is not surprising.

I have the impression that WGC is a slightly improved screen capture API compared to GDI (I guess WGC might rely on some GDI methods internally), but DXGI still much outperforms WGC.


I know that DXGI is much, much better than WGC; your implementation there is fantastic, I love it.

I am trying to use WGC to cover the use case I explained previously.

I found a very weird thing.

WGC performs way way better on the GPU that is attached to the monitor.

I have a desktop PC with a 7950X3D CPU that has an iGPU and a RTX4090 dGPU.

If I attach the monitor to the dGPU, WGC is much faster on the dGPU than the iGPU. (This seems reasonable)

If I attach the monitor to the iGPU, WGC is much faster on the iGPU than the dGPU. (This is very unreasonable)

"Much faster" means I can capture at 60+ FPS, while I can't get more than 6 FPS when WGC is not performing properly.

I experienced the same thing on my laptop.
The laptop uses a 13900HX CPU that has an iGPU with a RTX4070 dGPU.

From DXDIAG I can see that the laptop's monitor is connected to the iGPU; in fact, if I use the iGPU, WGC is much faster than on the 4070 dGPU.

Another weird thing is that when using WGC on the GPU that has no monitor attached, I can see that both GPUs are being utilized, as if the first GPU passes the data to the second one.
If I use WGC on the GPU that has the monitor connected, only that GPU is used.

Do you think this weird behaviour comes from the GStreamer implementation of the WGC API, or from the Windows API itself?

Thanks :pray:

There could be additional room to improve the performance, but GStreamer relies fully on the WGC API. We just copy the captured d3d11 texture (produced by the WGC API) into our own d3d11 texture, and that's the only additional GPU operation triggered by the element. Note that the texture copy is almost unavoidable, because we don't know how many textures will be consumed by downstream, while WGC requires a fixed-size texture pool.
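To compare the two adapters with numbers rather than impressions, one approach (not from the thread, just a measurement sketch using the stock fpsdisplaysink element) is to run the same capture once per adapter and read the reported framerate; the adapter indices here are assumptions for your system:

```shell
# Measure the effective WGC capture rate on one adapter.
# Run once with adapter=0 and once with adapter=1 (indices are
# system-dependent assumptions) and compare the average fps that
# fpsdisplaysink reports via its last-message property (printed by -v).
gst-launch-1.0 -v d3d11screencapturesrc capture-api=wgc adapter=0 \
    ! fpsdisplaysink text-overlay=false video-sink=fakesink
```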

Btw, the observed behavior (WGC is fast on the GPU the monitor is connected to) sounds quite natural.
