Shared context, GLMemory and glcolorconvert filter issue


What I’m trying to achieve is to have a gstreamer pipeline with OpenGL buffers coming from OBS. On the renderer side I have a shared OpenGL context proven to work with glinterop and nvenc inside the OBS plugin in a separate encoding thread. Now, I would like to remove the encoder implementation from the plugin and use gstreamer pipeline like this:

glcolorconvert name=glcolorconvert ! video/x-raw(memory:GLMemory),format=NV12,width=1920,height=1080,framerate=60/1 ! nvautogpuh264enc name=cudaenc ! h264parse config-interval=-1 ! mpegtsmux

The buffers are coming through appsrc, format=RGBA. I followed this example:

After setting up all the display/GL contexts properly while the pipeline is not playing, I got rid of the glcolorconvert context errors. Using nvautogpuh264enc also fixed the encoder-related errors. Right now I’m stuck here with glcolorconvert:

gst_gl_context_thread_add: assertion 'context->priv->active_thread == g_thread_self ()' failed
glbasefilter gstglbasefilter.c:458:gst_gl_base_filter_decide_allocation:<glcolorconvert> Subclass failed to initialize.

The context was created as wrapped, and activated:

m_gl_context = gst_gl_context_new_wrapped(m_gl_display, (guintptr)m_shared_context_info->context, gl_platform, gl_api);
gst_gl_context_activate(m_gl_context, TRUE);
gst_gl_context_fill_info(m_gl_context, nullptr);

I tried getting past that point by executing the gl_set_caps function directly instead of via gst_gl_context_thread_add, but then glcolorconvert crashes at some later stage.

I would appreciate any hints on how to proceed with debugging this issue.


Ok, got it working!

  1. I forgot to convert the wrapped context back to a regular one, which, by the way, is a somewhat odd API:
    gst_gl_display_create_context(m_gl_display, m_wrapped_context, &m_gl_context, &err)
  2. I used a new X11 display:
    m_gl_display = gst_gl_display_new_with_type(GST_GL_DISPLAY_TYPE_X11);

instead of reusing the pointer from OBS:

Display* dpy = m_obs_context->device->plat->display;
m_gl_display = (GstGLDisplay*)gst_gl_display_x11_new_with_display(dpy);
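Putting the two fixes together, the working setup looks roughly like this (a sketch assembled from the snippets above; `m_shared_context_info->context`, `gl_platform` and `gl_api` come from the OBS plugin and are assumptions from the original code):

```c
/* Sketch of the corrected setup; not runnable without OBS and an X server. */
GError *err = NULL;

/* A fresh X11 display owned by GStreamer, not the Display* from OBS. */
m_gl_display = gst_gl_display_new_with_type (GST_GL_DISPLAY_TYPE_X11);

/* Wrap the external (OBS) GL context so GStreamer can share with it. */
m_wrapped_context = gst_gl_context_new_wrapped (m_gl_display,
    (guintptr) m_shared_context_info->context, gl_platform, gl_api);

/* Create a regular (non-wrapped) context, shared with the wrapped one.
 * This is the context the pipeline actually runs its GL thread on. */
if (!gst_gl_display_create_context (m_gl_display, m_wrapped_context,
        &m_gl_context, &err)) {
  g_error ("failed to create GL context: %s", err->message);
}
```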

So, that example from Collabora does a couple of not-quite-correct things and I would not actually recommend following it to the letter. If you want a more correct example, then subprojects/gst-plugins-base/tests/examples/gl/sdl/sdlshare.c · main · GStreamer / gstreamer · GitLab is a better place to start from. There is also some design documentation available for GStreamer’s OpenGL integration. The GstGLContext documentation also contains some of this information: GstGLContext

A wrapped GL context is only a container for the external GL context pointer. Performing any GStreamer OpenGL operations with a wrapped OpenGL context must have preconditions supplied by the caller. Those preconditions are:

  1. You must control the actual eglMakeCurrent() (or equivalent) call.
  2. You must call gst_gl_context_activate(wrap_context, TRUE), which will not actually perform any OpenGL API call (eglMakeCurrent()) but only allows you to call gst_gl_context_thread_add() in the current thread while the wrapped context is active/current. This lets you call all GStreamer OpenGL functions and have any internal gst_gl_context_thread_add() execute in the current thread of your wrapped GL context. That is its only purpose.

Calling gst_gl_context_thread_add() without these preconditions will result in criticals, errors, or crashes. You cannot use this wrapped GL context as the OpenGL context of a GStreamer pipeline, as gst_gl_context_thread_add() would be called from any number of unspecified threads.

A non-wrapped GL context contains a backing GL thread onto which all OpenGL calls are marshalled (using gst_gl_context_thread_add()). This is what your gst_gl_display_create_context(m_gl_display, m_wrapped_context, &m_gl_context, &err) is performing: it creates a new OpenGL context that is shared with your provided wrapped GL context.

You should ensure that your X11 Display is the same between OBS and GStreamer, as otherwise your OpenGL contexts may not be shareable with some OpenGL drivers.

Now, GStreamer GL elements share OpenGL resources in a couple of ways.

  1. GstGLDisplay → If you have an external display, the GstContext mechanism is used in its entirety. Specifically, GST_CONTEXT queries between adjacent elements or a GST_CONTEXT (synchronous) message to the application.
  2. external (wrapped) GstGLContext. Also uses the GST_CONTEXT mechanism in its entirety.
  3. A ‘local’ GstGLContext only goes through the GST_CONTEXT query process (no synchronous GST_CONTEXT message to the application). If that does not produce a context, the GstGLDisplay is asked for a GstGLContext (using gst_gl_display_ensure_context()). If gst_gl_display_ensure_context() creates an OpenGL context, then the externally provided wrapped OpenGL context is used as the OpenGL context to share with.
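For ways 1 and 2, the usual application-side pattern (as in sdlshare.c) is to answer the pipeline’s context requests from a synchronous bus handler. A sketch, assuming `app_gl_display` and `app_wrapped_context` were created beforehand as shown earlier in the thread:

```c
/* Sketch: providing the GstGLDisplay and the wrapped GstGLContext to
 * the pipeline through the GstContext mechanism. app_gl_display and
 * app_wrapped_context are assumed to exist from the earlier setup. */
static GstBusSyncReply
bus_sync_handler (GstBus *bus, GstMessage *msg, gpointer user_data)
{
  if (GST_MESSAGE_TYPE (msg) == GST_MESSAGE_NEED_CONTEXT) {
    const gchar *type = NULL;
    gst_message_parse_context_type (msg, &type);

    if (g_strcmp0 (type, GST_GL_DISPLAY_CONTEXT_TYPE) == 0) {
      GstContext *ctx = gst_context_new (GST_GL_DISPLAY_CONTEXT_TYPE, TRUE);
      gst_context_set_gl_display (ctx, app_gl_display);
      gst_element_set_context (GST_ELEMENT (GST_MESSAGE_SRC (msg)), ctx);
      gst_context_unref (ctx);
    } else if (g_strcmp0 (type, "gst.gl.app_context") == 0) {
      GstContext *ctx = gst_context_new ("gst.gl.app_context", TRUE);
      GstStructure *s = gst_context_writable_structure (ctx);
      gst_structure_set (s, "context", GST_TYPE_GL_CONTEXT,
          app_wrapped_context, NULL);
      gst_element_set_context (GST_ELEMENT (GST_MESSAGE_SRC (msg)), ctx);
      gst_context_unref (ctx);
    }
  }
  return GST_BUS_PASS;
}
```

Note that if all contexts are set on the pipeline before it leaves NULL, the NEED_CONTEXT message may never fire; the handler is the standard fallback.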

Thank you very much for the detailed explanation and the additional example. Generally the Collabora example was fine for me, but there was some confusion, as the author himself questions the code in a comment (“is this really necessary?”). All of the functions executed in the Collabora code ended up being necessary, in that order, while the pipeline was in the NULL/PAUSED state. The _gst_bus_call with GST_MESSAGE_NEED_CONTEXT was never executed. If I didn’t set everything up beforehand, gst assumed some default (wrong, in my use case) context instead.