@patrickelectric
Last active March 16, 2024 20:28

Get video from gstreamer udp with python and visualize with OpenCV
#!/usr/bin/env python
import cv2
import gi
import numpy as np

gi.require_version('Gst', '1.0')
from gi.repository import Gst


class Video():
    """BlueRov video capture class constructor

    Attributes:
        port (int): Video UDP port
        video_codec (string): Source h264 parser
        video_decode (string): Transform YUV (12bits) to BGR (24bits)
        video_pipe (object): GStreamer top-level pipeline
        video_sink (object): GStreamer sink element
        video_sink_conf (string): Sink configuration
        video_source (string): UDP source ip and port
    """

    def __init__(self, port=5600):
        """Summary

        Args:
            port (int, optional): UDP port
        """
        Gst.init(None)

        self.port = port
        self._frame = None

        # [Software component diagram](https://www.ardusub.com/software/components.html)
        # UDP video stream (:5600)
        self.video_source = 'udpsrc port={}'.format(self.port)
        # [Rasp raw image](http://picamera.readthedocs.io/en/release-0.7/recipes2.html#raw-image-capture-yuv-format)
        # Cam -> CSI-2 -> H264 Raw (YUV 4:2:0 (12 bits/px), I420)
        self.video_codec = \
            '! application/x-rtp, payload=96 ! rtph264depay ! h264parse ! avdec_h264'
        # Python doesn't have a nibble type; convert YUV (12 bits/px) to
        # OpenCV's standard BGR bytes (8-8-8)
        self.video_decode = \
            '! decodebin ! videoconvert ! video/x-raw,format=(string)BGR ! videoconvert'
        # Create a sink to get data
        self.video_sink_conf = \
            '! appsink emit-signals=true sync=false max-buffers=2 drop=true'

        self.video_pipe = None
        self.video_sink = None

        self.run()

    def start_gst(self, config=None):
        """ Start gstreamer pipeline and sink
        Pipeline description list e.g:
            [
                'videotestsrc ! decodebin',
                '! videoconvert ! video/x-raw,format=(string)BGR ! videoconvert',
                '! appsink'
            ]

        Args:
            config (list, optional): GStreamer pipeline description list
        """
        if not config:
            config = \
                [
                    'videotestsrc ! decodebin',
                    '! videoconvert ! video/x-raw,format=(string)BGR ! videoconvert',
                    '! appsink'
                ]

        command = ' '.join(config)
        self.video_pipe = Gst.parse_launch(command)
        self.video_pipe.set_state(Gst.State.PLAYING)
        self.video_sink = self.video_pipe.get_by_name('appsink0')

    @staticmethod
    def gst_to_opencv(sample):
        """Transform a GStreamer sample into a numpy array

        Args:
            sample (Gst.Sample): Sample pulled from the appsink

        Returns:
            np.ndarray: height x width x 3 BGR image
        """
        buf = sample.get_buffer()
        caps = sample.get_caps()
        array = np.ndarray(
            (
                caps.get_structure(0).get_value('height'),
                caps.get_structure(0).get_value('width'),
                3
            ),
            buffer=buf.extract_dup(0, buf.get_size()), dtype=np.uint8)
        return array

    def frame(self):
        """ Get the latest frame

        Returns:
            np.ndarray: latest BGR image frame, or None if none arrived yet
        """
        return self._frame

    def frame_available(self):
        """Check if a frame is available

        Returns:
            bool: true if a frame is available
        """
        return self._frame is not None

    def run(self):
        """ Start the pipeline and register the callback that updates _frame
        """
        self.start_gst(
            [
                self.video_source,
                self.video_codec,
                self.video_decode,
                self.video_sink_conf
            ])

        self.video_sink.connect('new-sample', self.callback)

    def callback(self, sink):
        sample = sink.emit('pull-sample')
        new_frame = self.gst_to_opencv(sample)
        self._frame = new_frame
        return Gst.FlowReturn.OK


if __name__ == '__main__':
    # Create the video object
    # Add port= if it is necessary to use a different one
    video = Video()

    while True:
        # Wait for the next frame
        if not video.frame_available():
            continue

        frame = video.frame()
        cv2.imshow('frame', frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
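For local testing, a sender that this receiver can consume might look like the sketch below. This is an assumption, not part of the gist: videotestsrc, x264enc, HOST, and PORT are placeholder choices; a real vehicle would use its actual camera source and possibly a hardware encoder.

```python
# Hypothetical sender sketch: builds a gst-launch-1.0 command that produces
# an RTP/H.264 stream the Video class above can consume. HOST and PORT are
# placeholders for a local test; replace videotestsrc with your real source.
HOST = '127.0.0.1'
PORT = 5600

sender = ' '.join([
    'gst-launch-1.0 videotestsrc',           # synthetic test pattern
    '! x264enc tune=zerolatency',            # software H.264 encoder
    '! rtph264pay config-interval=1 pt=96',  # RTP payloading, payload type 96
    '! udpsink host={} port={}'.format(HOST, PORT),
])
print(sender)  # run the printed command in a shell to start streaming
```

Payload type 96 matches the `application/x-rtp, payload=96` caps the receiver pipeline expects.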
@lweingart

Hello again @patrickelectric,

Just out of curiosity, and to further my understanding, could you please tell me what command on the computer sending the images would work with your code as it is?

Thank you.

@doman93

doman93 commented Apr 12, 2021

@patrickelectric I tried to rewrite your code in C++ but it doesn't work. Can you help me figure this out?

#include <iostream>
#include <opencv2/opencv.hpp>

using namespace cv;
using namespace std;

int main(int argc, char** argv)
{
    // Note: a space was missing before "! appsink" in the original string;
    // also make sure your OpenCV build was compiled with GStreamer support,
    // otherwise CAP_GSTREAMER silently fails to open.
    VideoCapture video("udpsrc port=5600 ! application/x-rtp, payload=96 ! rtph264depay ! h264parse ! avdec_h264"
            " ! decodebin ! videoconvert ! video/x-raw,format=(string)BGR ! videoconvert ! appsink emit-signals=true sync=false max-buffers=2 drop=true", CAP_GSTREAMER);

    // "if" is enough here; the original "while" exited on its first iteration anyway.
    if (!video.isOpened()) {
        cerr << "VideoCapture not opened" << endl;
        return -1;
    }

    while (true) {
        Mat frame;
        if (!video.read(frame) || frame.empty()) {
            continue;
        }
        imshow("Camera", frame);
        if (waitKey(1) == 27) {
            break;
        }
    }
    return 0;
}

Your Python code works perfectly in my case, but somehow the C++ version gets stuck. The build file uses the default OpenCV 3.3 of ROS Kinetic.

@HosseinKeipour

Hi,
I have tried to run your code but faced this error:

Traceback (most recent call last):
File "/home/shirin/Downloads/depth/video_udp.py", line 158, in
video = Video()
File "/home/shirin/Downloads/depth/video_udp.py", line 67, in init
self.run()
File "/home/shirin/Downloads/depth/video_udp.py", line 145, in run
self.video_sink.connect('new-sample', self.callback)
TypeError: <gi.GstAutoVideoSink object at 0x7f9cc7c2b100 (GstAutoVideoSink at 0x34b6020)>: unknown signal name: new-sample

I was wondering if you could share your idea!

@rahul-kr2000

You are a savior! Thanks a lot for sharing this.

@tawfiq1200

Can you please comment on the issue I am having here?

@tawfiq1200

Found the fix and posted

@swcho84

swcho84 commented Dec 22, 2022

You are my Hero!

@madjxatw

madjxatw commented Jul 26, 2023

Thanks for sharing. What I am currently interested in is how to reduce the CPU usage (12%~17% CPU on my Intel i9-11900K machine). I've used OpenCV's cv2.VideoCapture(stream_url, cv2.CAP_GSTREAMER), which uses GStreamer as the video capturing backend, and it only consumed around 3%~5% CPU; I have no idea what magic is behind OpenCV.
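The cv2.VideoCapture route mentioned above can be sketched roughly as follows; this is a guess at an equivalent pipeline, not the commenter's exact string, and the port and element choices are assumptions:

```python
# Hypothetical sketch: feeding a pipeline equivalent to the gist's into
# OpenCV's GStreamer backend. The appsink caps request BGR so cv2 receives
# frames it can display directly. PORT is a placeholder.
PORT = 5600  # assumed UDP port, matching the gist's default

pipeline = (
    'udpsrc port={} '
    '! application/x-rtp, payload=96 '
    '! rtph264depay ! h264parse ! avdec_h264 '
    '! videoconvert ! video/x-raw,format=BGR '
    '! appsink drop=true max-buffers=2'
).format(PORT)

# Usage (requires an OpenCV build with GStreamer support):
#   import cv2
#   cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
#   ok, frame = cap.read()
print(pipeline)
```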

@patrickelectric
Author

Thanks for sharing. What I am currently interested in is how to reduce the CPU usage (12%~17% CPU on my Intel i9-11900K machine). I've used OpenCV's cv2.VideoCapture(stream_url, cv2.CAP_GSTREAMER), which uses GStreamer as the video capturing backend, and it only consumed around 3%~5% CPU; I have no idea what magic is behind OpenCV.

Hi, this example was designed to work in any scenario. Decodebin tries to be smart and select the best decoder for you, but it often fails. Try decodebin3, or choose your decoder manually to use what your hardware provides.
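A sketch of that suggestion: the gist's pipeline with decodebin dropped and the decoder named explicitly. The decoder choice is an assumption; substitute the element your hardware exposes (e.g. v4l2h264dec or nvh264dec).

```python
# Hypothetical variant of the gist's pipeline with no decodebin: the H.264
# decoder is picked explicitly instead of being auto-plugged.
PORT = 5600
DECODER = 'avdec_h264'  # assumption: software decoder; swap for a hardware one

config = [
    'udpsrc port={}'.format(PORT),
    '! application/x-rtp, payload=96 ! rtph264depay ! h264parse',
    '! {}'.format(DECODER),
    '! videoconvert ! video/x-raw,format=(string)BGR',
    '! appsink emit-signals=true sync=false max-buffers=2 drop=true',
]
print(' '.join(config))
```

The list has the same shape the gist's start_gst() expects, so it could be passed in place of the default pipeline.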

@madjxatw

I am currently not using decodebin in my pipeline but it still works. Any benefit if I insert a decodebin element into the pipeline?

@anselmobattisti

anselmobattisti commented Jul 27, 2023

I am currently not using decodebin in my pipeline but it still works. Any benefit if I insert a decodebin element into the pipeline?

Plugin documentation:

https://gstreamer.freedesktop.org/documentation/playback/decodebin.html?gi-language=c

"GstBin that auto-magically constructs a decoding pipeline using available decoders and demuxers via auto-plugging."

Using a decodebin, your pipeline will be more resilient with regard to the ingress multimedia stream. If you will only ever have a single type of multimedia stream, and it's working, then your pipeline is safe.

@patrickelectric
Author

patrickelectric commented Jul 27, 2023 via email

@madjxatw

@patrickelectric, thanks for sharing!

@MatomeRampedi

Hi @patrickelectric, is it possible to have the same functionality using a TCP server in GStreamer?

@patrickelectric
Author

Yes, take a look at tcpclientsrc and tcpserversrc.
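A receiver sketch for the TCP route (a guess, not the author's code): it assumes the sender pushes a raw H.264 byte-stream through tcpserversink, so there is no RTP depayloading step; HOST and PORT are placeholder values.

```python
# Hypothetical TCP receiver pipeline for the Video class above.
# Assumes the sender runs something like:
#   gst-launch-1.0 <source> ! x264enc ! tcpserversink host=0.0.0.0 port=5600
# HOST/PORT below are placeholders.
HOST = '192.168.2.2'
PORT = 5600

config = [
    'tcpclientsrc host={} port={}'.format(HOST, PORT),
    '! h264parse ! avdec_h264',  # raw H.264 byte-stream, no RTP layer
    '! videoconvert ! video/x-raw,format=(string)BGR',
    '! appsink emit-signals=true sync=false max-buffers=2 drop=true',
]
# Same shape as what start_gst() expects, so it could replace the UDP pipeline.
print(' '.join(config))
```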
