Sending images from RPI --> Laptop running Linux --> Back to RPI

The end goal for the project I'm working on is to stream images from my RPI4 to my Linux laptop, then send the processed images from the Linux laptop back to the RPI4.

I'm currently using this pipeline to stream images from my RPI4, and I've confirmed that it is working correctly:

raspivid -n -t 0 -w 640 -h 480 -fps 25 -b 2000000 -o - | gst-launch-1.0 -v fdsrc ! h264parse ! rtph264pay config-interval=10 pt=96 ! udpsink host=xxx.xx.xxx.xxx port=5000

I'm using OpenCV and C++ on my Linux laptop to receive the incoming images. I'm able to use imshow() to verify that I am receiving the images from the RPI4, but I'm not sure if I have the 'sending' pipeline set up correctly.

This is the receive_pipeline that I know works correctly:

std::string receive_pipeline = "udpsrc port=5000 ! application/x-rtp, encoding-name=H264,payload=96 ! rtph264depay ! decodebin ! videoconvert ! appsink sync=false drop=true";

Here's the send_pipeline that I'm iffy about:

std::string send_pipeline = "appsrc ! videoconvert ! x264enc tune=zerolatency ! rtph264pay config-interval=10 pt=96 ! udpsink host=172.17.141.124 port=5001";
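
For reference, here is a minimal sketch of how these two pipelines are typically driven from OpenCV with the GStreamer backend (the 640x480 @ 25 fps frame parameters are assumptions taken from the raspivid settings above, not confirmed values):

#include <opencv2/opencv.hpp>

int main() {
    std::string receive_pipeline = "udpsrc port=5000 ! application/x-rtp, encoding-name=H264,payload=96 ! rtph264depay ! decodebin ! videoconvert ! appsink sync=false drop=true";
    std::string send_pipeline = "appsrc ! videoconvert ! x264enc tune=zerolatency ! rtph264pay config-interval=10 pt=96 ! udpsink host=172.17.141.124 port=5001";

    cv::VideoCapture cap(receive_pipeline, cv::CAP_GSTREAMER);
    // fourcc = 0 tells the GStreamer backend to push raw frames into appsrc
    cv::VideoWriter writer(send_pipeline, cv::CAP_GSTREAMER, 0, 25.0, cv::Size(640, 480), true);
    if (!cap.isOpened() || !writer.isOpened())
        return 1;

    cv::Mat frame;
    while (cap.read(frame)) {
        // (image processing would go here)
        writer.write(frame);  // frames must stay 640x480 BGR to match the writer's settings
    }
    return 0;
}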

And here is the gst-launch command that I'm using in the RPI4 terminal to receive the images from the Linux laptop:

gst-launch-1.0 -v -e udpsrc port=5001 ! application/x-rtp,encoding-name=H264,payload=96 ! rtph264depay ! decodebin ! videoconvert ! autovideosink

And what is the problem on the PC → RPI side?

You can check whether data is flowing on the receiver by adding an ... ! identity dump=true ! ... element in strategic places. Try it right after udpsrc first to make sure that packets are coming in at all (if not, there might be an IPv4 vs. IPv6 issue; check that it's listening on the right interfaces with netstat -lnp --udp). If that's all good, add it after the depayloader, then after decodebin, etc., to narrow down where the data stops.
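
For example, inserted right after udpsrc in the RPI4 receive command above, it would look like this:

gst-launch-1.0 -v udpsrc port=5001 ! identity dump=true ! application/x-rtp,encoding-name=H264,payload=96 ! rtph264depay ! decodebin ! videoconvert ! autovideosink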

On the RPI → PC sender side you might be able to use the rpicamsrc plugin instead of piping from raspivid.
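
Something along these lines (an untested sketch; the rpicamsrc property names are assumptions based on the gst-rpicamsrc documentation, with the host elided as in the original command):

gst-launch-1.0 -v rpicamsrc preview=false bitrate=2000000 ! video/x-h264,width=640,height=480,framerate=25/1 ! h264parse ! rtph264pay config-interval=10 pt=96 ! udpsink host=xxx.xx.xxx.xxx port=5000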


In addition, you may also try checking just the network + GStreamer path, ruling out OpenCV and your processing on the Linux laptop, by running a GStreamer H264 relay or a decode/re-encode on the laptop, such as:

gst-launch-1.0 -v udpsrc port=5000 ! application/x-rtp, encoding-name=H264,payload=96 ! rtph264depay ! h264parse ! rtph264pay ! udpsink host=172.17.141.124 port=5001

gst-launch-1.0 -v udpsrc port=5000 ! application/x-rtp, encoding-name=H264,payload=96 ! rtph264depay ! decodebin ! videoconvert ! queue ! x264enc key-int-max=30 insert-vui=1 tune=zerolatency ! h264parse ! rtph264pay ! udpsink host=172.17.141.124 port=5001

If this works so far, also be aware that OpenCV VideoCapture (and imshow) may support more formats (such as BGRx) than OpenCV VideoWriter. Try specifying video/x-raw,format=BGR caps before appsink, and also be sure you're pushing BGR frames into the OpenCV writer with the GStreamer backend, as shown below.
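
With BGR caps added before appsink, the capture side of the original receive_pipeline would look like this (a sketch, not tested here):

udpsrc port=5000 ! application/x-rtp, encoding-name=H264,payload=96 ! rtph264depay ! decodebin ! videoconvert ! video/x-raw,format=BGR ! appsink sync=false drop=true

And on the writer side, with the GStreamer backend: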

writer_pipeline_str = 'appsrc ! video/x-raw,format=BGR ! queue ! videoconvert ! x264enc key-int-max=30 insert-vui=1 tune=zerolatency ! h264parse ! rtph264pay ! udpsink host=172.17.141.124 port=5001'
# This is the writer (appsrc) side, so it must be cv2.VideoWriter, not cv2.VideoCapture.
# fourcc=0 means raw frames are pushed into appsrc; (width, height) must match the frames you write.
writer = cv2.VideoWriter(writer_pipeline_str, cv2.CAP_GSTREAMER, 0, float(fps), (width, height))
# and check that writer.isOpened() returns True.