How to build this pipeline in C++ with GStreamer

Hello everyone, I am new to GStreamer. I have a pipeline that works on my command line, and now I need to port it to C/C++. How can I translate it? I also have to change the source element at the beginning to appsrc.

fdsrc fd=0 ! video/x-h264,width=1920,height=1080,framerate=30/1,stream-format=(string)byte-stream ! h264parse config-interval=1 ! nvv4l2decoder ! nvvidconv ! video/x-raw,format=(string)BGRx ! videoconvert ! video/x-raw,format=BGR ! appsink sync=false

My attempt is:

GstElement *pipeline = gst_pipeline_new("pipeline");
GstElement *appsrc = gst_element_factory_make("appsrc", "appsrc");
GstElement *h264parse = gst_element_factory_make("h264parse", "h264parse");
GstElement *decoder = gst_element_factory_make("nvv4l2decoder", "decoder");
GstElement *nvvidconv = gst_element_factory_make("nvvidconv", "nvvidconv");
GstElement *videoconvert = gst_element_factory_make("videoconvert", "videoconvert");
GstElement *appsink = gst_element_factory_make("appsink", "appsink");

g_object_set(G_OBJECT(h264parse), "config-interval", 1, NULL);

g_object_set(G_OBJECT(appsrc), "caps",
             gst_caps_new_simple("video/x-h264",
                                 "width", G_TYPE_INT, 1920,
                                 "height", G_TYPE_INT, 1080,
                                 "framerate", GST_TYPE_FRACTION, 30, 1,
                                 "stream-format", G_TYPE_STRING, "byte-stream",
                                 NULL), NULL);

g_object_set(G_OBJECT(appsink), "caps",
             gst_caps_new_simple("video/x-raw",
                                 "format", G_TYPE_STRING, "BGR",
                                 NULL), NULL);

g_object_set(appsink, "emit-signals", TRUE, "sync", FALSE, NULL);
g_signal_connect(appsink, "new-sample", G_CALLBACK(on_new_sample), NULL);

gst_bin_add_many(GST_BIN(pipeline), appsrc, h264parse, decoder, nvvidconv, videoconvert, appsink, NULL);
gst_element_link_many(appsrc, h264parse, decoder, nvvidconv, videoconvert, appsink, NULL);

But it seems I skipped the video/x-raw,format=(string)BGRx and video/x-raw,format=BGR caps, and I don't know how to put them into the C pipeline. Can anyone help me? Thanks!

A bare video/x-raw, format=(string)BGRx in a gst-launch command line is short for capsfilter caps="video/x-raw, format=(string)BGRx".

In code, you would need to use the capsfilter element or gst_element_link_filtered.


Thanks for your help!!! :slightly_smiling_face: Is the following the right way to combine them?

GstElement *pipeline = gst_pipeline_new("pipeline");
GstElement *appsrc = gst_element_factory_make("appsrc", "appsrc");
GstElement *h264parse = gst_element_factory_make("h264parse", "h264parse");
GstElement *decoder = gst_element_factory_make("nvv4l2decoder", "decoder");
GstElement *nvvidconv = gst_element_factory_make("nvvidconv", "nvvidconv");
GstElement *videoconvert = gst_element_factory_make("videoconvert", "videoconvert");
GstElement *appsink = gst_element_factory_make("appsink", "appsink");

g_object_set(G_OBJECT(h264parse), "config-interval", 1, NULL);

g_object_set(G_OBJECT(appsrc), "caps",
             gst_caps_new_simple("video/x-h264",
                                 "width", G_TYPE_INT, 1920,
                                 "height", G_TYPE_INT, 1080,
                                 "framerate", GST_TYPE_FRACTION, 30, 1,
                                 "stream-format", G_TYPE_STRING, "byte-stream",
                                 NULL), NULL);

GstCaps *nvvidconv_caps = gst_caps_new_simple("video/x-raw",
                                              "format", G_TYPE_STRING, "BGRx",
                                              NULL);
/* BGRx sits between nvvidconv and videoconvert in the command line */
gst_element_link_filtered(nvvidconv, videoconvert, nvvidconv_caps);
gst_caps_unref(nvvidconv_caps);

GstCaps *appsink_caps = gst_caps_new_simple("video/x-raw",
                                                "format", G_TYPE_STRING, "BGR",
                                                NULL);
gst_element_link_filtered(videoconvert, appsink, appsink_caps);
gst_caps_unref(appsink_caps);

g_object_set(appsink, "emit-signals", TRUE, "sync", FALSE, NULL);
g_signal_connect(appsink, "new-sample", G_CALLBACK(on_new_sample), NULL);

gst_bin_add_many(GST_BIN(pipeline), appsrc, h264parse, decoder, nvvidconv, videoconvert, appsink, NULL);
gst_element_link_many(appsrc, h264parse, decoder, nvvidconv, videoconvert, appsink, NULL);

There are two more questions I want to ask :thinking:

  1. When calling gst_bin_add_many() and gst_element_link_many(), do I also need to pass appsink_caps and nvvidconv_caps to them?
  2. For the initial appsrc caps video/x-h264,width=1920,height=1080,framerate=30/1,stream-format=(string)byte-stream: what is the best way to add these caps to the pipeline? The way I wrote it here with g_object_set("caps"), or using GstCaps with gst_element_link_filtered() to join the appsrc and h264parse elements?

You must not use gst_element_link_many() to link chains that are already joined elsewhere. gst_element_link_filtered() creates the capsfilter for you and links the elements to it, so you need a separate gst_element_link_many() call for each chain of elements that hasn't already been linked with gst_element_link_filtered().

For the appsrc caps, setting the caps property is fine.


Hello. In addition to the above issues, I'm wondering if a callback function is the only way to get appsink results. Right now I have a feed function like this:

void push_data_to_appsrc(GstElement *appsrc, const uint8_t *data, size_t length) {
    GstBuffer *buffer = gst_buffer_new_allocate(NULL, length, NULL);
    gst_buffer_fill(buffer, 0, data, length);
    /* push_buffer takes ownership of the buffer; no unref needed here */
    gst_app_src_push_buffer(GST_APP_SRC(appsrc), buffer);
}

And currently I can get the result through the signal callback (on_new_sample):

g_object_set(appsink, "emit-signals", TRUE, "sync", FALSE, NULL);
g_signal_connect(appsink, "new-sample", G_CALLBACK(on_new_sample), NULL);

static GstFlowReturn on_new_sample(GstAppSink *appsink, gpointer user_data) {
    GstSample *sample = gst_app_sink_pull_sample(appsink);
    if (!sample)  /* NULL at EOS or when the appsink is shutting down */
        return GST_FLOW_EOS;
    GstBuffer *buffer = gst_sample_get_buffer(sample);
    GstMapInfo map;
    gst_buffer_map(buffer, &map, GST_MAP_READ);

    cv::Mat image(cv::Size(1920, 1080), CV_8UC3, (char*)map.data, cv::Mat::AUTO_STEP);

    cv::imshow("Video Frame", image);
    cv::waitKey(1);

    gst_buffer_unmap(buffer, &map);
    gst_sample_unref(sample);
    return GST_FLOW_OK;
}

Can I call push_data_to_appsrc() and then block until I get the result, all in the same function? Something like:

push_data_to_appsrc()
get_data()

Is this correct? :thinking:

GstElement *pipeline = gst_pipeline_new("pipeline");
GstElement *appsrc = gst_element_factory_make("appsrc", "appsrc");
GstElement *h264parse = gst_element_factory_make("h264parse", "h264parse");
GstElement *decoder = gst_element_factory_make("nvv4l2decoder", "decoder");
GstElement *nvvidconv = gst_element_factory_make("nvvidconv", "nvvidconv");
GstElement *videoconvert = gst_element_factory_make("videoconvert", "videoconvert");
GstElement *appsink = gst_element_factory_make("appsink", "appsink");

g_object_set(G_OBJECT(h264parse), "config-interval", 1, NULL);

g_object_set(G_OBJECT(appsrc), "caps",
             gst_caps_new_simple("video/x-h264",
                                 "width", G_TYPE_INT, 1920,
                                 "height", G_TYPE_INT, 1080,
                                 "framerate", GST_TYPE_FRACTION, 30, 1,
                                 "stream-format", G_TYPE_STRING, "byte-stream",
                                 NULL), NULL);

GstCaps *nvvidconv_caps = gst_caps_new_simple("video/x-raw",
                                                  "format", G_TYPE_STRING, "BGRx",
                                                  NULL);
gst_element_link_filtered(nvvidconv, videoconvert, nvvidconv_caps);
gst_caps_unref(nvvidconv_caps);

GstCaps *appsink_caps = gst_caps_new_simple("video/x-raw",
                                                "format", G_TYPE_STRING, "BGR",
                                                NULL);
gst_element_link_filtered(videoconvert, appsink, appsink_caps);
gst_caps_unref(appsink_caps);

g_object_set(appsink, "emit-signals", TRUE, "sync", FALSE, NULL);
g_signal_connect(appsink, "new-sample", G_CALLBACK(on_new_sample), NULL);

gst_bin_add_many(GST_BIN(pipeline), appsrc, h264parse, decoder, nvvidconv, videoconvert, appsink, NULL);
gst_element_link_many(appsrc, h264parse, decoder, nvvidconv, NULL);

appsrc, h264parse, decoder, nvvidconv -> gst_element_link_many
nvvidconv and videoconvert -> gst_element_link_filtered
videoconvert and appsink -> gst_element_link_filtered