# Fast reading from the raspberry camera with Python, Numpy, and OpenCV
# Allows to process grayscale video up to 124 FPS (tested in Raspberry Zero Wifi with V2.1 camera)
# Made by @CarlosGS in May 2017
# Club de Robotica - Universidad Autonoma de Madrid
# License: Public Domain, attribution appreciated

import atexit
import time

import cv2
import numpy as np
import subprocess as sp

frames = [] # stores the video sequence for the demo
max_frames = 300
N_frames = 0

# Video capture parameters
(w, h) = (640, 240)
bytesPerFrame = w * h
fps = 250 # setting to 250 will request the maximum framerate possible

# "raspividyuv" is the command that provides camera frames in YUV format
# "--output -" specifies stdout as the output
# "--timeout 0" specifies continuous video
# "--luma" discards chroma channels, only luminance is sent through the pipeline
# see "raspividyuv --help" for more information on the parameters
videoCmd = "raspividyuv -w "+str(w)+" -h "+str(h)+" --output - --timeout 0 --framerate "+str(fps)+" --luma --nopreview"
videoCmd = videoCmd.split() # Popen requires that each parameter is a separate string

cameraProcess = sp.Popen(videoCmd, stdout=sp.PIPE) # start the camera
atexit.register(cameraProcess.terminate) # this closes the camera process in case the python script exits unexpectedly

# wait for the first frame and discard it (only done to measure time more accurately)
rawStream = cameraProcess.stdout.read(bytesPerFrame)

start_time = time.time()

while True:
    cameraProcess.stdout.flush() # discard any frames that we were not able to process in time
    # Parse the raw stream into a numpy array
    frame = np.fromfile(cameraProcess.stdout, count=bytesPerFrame, dtype=np.uint8)
    if frame.size != bytesPerFrame:
        print("Error: Camera stream closed unexpectedly")
        break
    frame.shape = (h, w) # set the correct dimensions for the numpy array

    # The frame can be processed here using any function in the OpenCV library.
    # Full image processing will slow down the pipeline, so the requested FPS should be set accordingly.
    #frame = cv2.Canny(frame, 50, 150)
    # For instance, in this example you can enable the Canny edge function above.
    # You will see that the frame rate drops to ~35 fps and video playback is erratic.
    # If you then set fps = 30 at the beginning of the script, there will be enough cycle time between frames to provide accurate video.
    # One optimization could be to work with a decimated (downscaled) version of the image: deci = frame[::2, ::2]

    frames.append(frame) # save the frame (for the demo)
    #del frame # free the allocated memory

    N_frames += 1
    if N_frames > max_frames: break

end_time = time.time()

cameraProcess.terminate() # stop the camera

elapsed_seconds = end_time - start_time
print("Done! Result: "+str(N_frames/elapsed_seconds)+" fps")

print("Writing frames to disk...")
out = cv2.VideoWriter("slow_motion.avi", cv2.VideoWriter_fourcc(*"MJPG"), 30, (w, h))
for n in range(N_frames):
    #cv2.imwrite("frame"+str(n)+".png", frames[n]) # save frame as a PNG image
    frame_rgb = cv2.cvtColor(frames[n], cv2.COLOR_GRAY2RGB) # video codec requires RGB image
    out.write(frame_rgb)
out.release()

print("Display frames with OpenCV...")
for frame in frames:
    cv2.imshow("Slow Motion", frame)
    cv2.waitKey(1) # request maximum refresh rate
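The decimation trick mentioned in the script's comments is just a NumPy slicing operation; a minimal sketch (the `frame` here is a stand-in array with the script's dimensions, not a real camera frame):

```python
import numpy as np

# Stand-in for one 640x240 grayscale frame from the camera
frame = np.zeros((240, 640), dtype=np.uint8)

# Keep every second row and column: this is a view, not a copy,
# so the decimation itself costs essentially nothing
deci = frame[::2, ::2]  # shape (120, 320)
```

Because the slice is a view into the original frame buffer, any OpenCV function you then run on `deci` touches only a quarter of the pixels.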
For some reason, every Raspberry Camera tutorial states the maximum camera frame rate as 90 FPS. This is not true!
These high frame rates are possible thanks to the great work behind raspividyuv. The above script is the first to connect raspividyuv and Python to achieve ultra-low-latency results. I hope this example code enables many people to integrate efficient computer vision algorithms into all kinds of robots. The script receives grayscale video only, though it could be extended to fetch color as well.
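One possible way to extend it to color (a sketch, untested on hardware): drop `--luma` from `videoCmd` so raspividyuv emits full YUV420 (I420) frames, and adjust the frame size accordingly. The stand-in buffer below takes the place of bytes read from `cameraProcess.stdout`; the OpenCV conversion is shown as a comment:

```python
import numpy as np

# I420 layout: w*h luma bytes, then two (w//2)x(h//2) chroma planes,
# i.e. 1.5 bytes per pixel instead of 1
w, h = 640, 240
bytesPerFrame = w * h * 3 // 2  # 230400 bytes per frame

raw = np.zeros(bytesPerFrame, dtype=np.uint8)  # stand-in for one frame from the pipe
yuv = raw.reshape((h * 3 // 2, w))             # the shape cv2 expects for I420 data

gray = yuv[:h, :]  # the luma plane alone is still the grayscale image
# bgr = cv2.cvtColor(yuv, cv2.COLOR_YUV2BGR_I420)  # full-color (h, w, 3) image
```

Note the chroma planes are quarter-resolution, so the color conversion adds CPU cost; the grayscale path stays as fast as before if you only slice out the luma plane.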
Maximum framerates at multiple resolutions
Example images at each resolution: https://www.dropbox.com/s/k0gzpt15jj0qbqd/raspberry_cameraV2_quality_FPS_comparison.zip (useful to compare the field-of-view in every mode). Note that some of the larger modes produce corrupt frames; this needs to be studied.
Videos (click to open)
The videos are not only recorded, but also processed in real time in the Python script.
Now it is possible to have low-cost vision for fast robots! Even if you don't actually need 120 FPS, the lower time between frames gives you more cycle time to process each image.
Please share your progress too, so we can all learn :-)
This error appears because in Python 3 the pipe is buffered but unseekable (details).
Also, in my case the image was distorted, so I changed the resolution to 640x480 instead of 640x240.
cameraProcess = sp.Popen(videoCmd, stdout=sp.PIPE, bufsize=0) # start the camera - works for me with an unbuffered PIPE.
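A related workaround for the unseekable Python 3 pipe is to avoid np.fromfile entirely: read exact byte counts with read() and parse them with np.frombuffer, which never seeks. A minimal sketch, using an in-memory stream as a stand-in for cameraProcess.stdout:

```python
import io
import numpy as np

w, h = 4, 3  # tiny stand-in dimensions for the sketch
bytesPerFrame = w * h
stream = io.BytesIO(bytes(range(bytesPerFrame)))  # stand-in for cameraProcess.stdout

buf = stream.read(bytesPerFrame)  # a plain read() works on unseekable pipes
frame = np.frombuffer(buf, dtype=np.uint8).reshape(h, w)
```

One caveat: np.frombuffer returns a read-only array, so in-place processing (e.g. assigning into `frame`) would need a `.copy()` first.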
Thanks for the great piece of software.
I got an error message as follows:
The OS is the latest Raspberry Pi OS, with OpenCV 4.5.1 compiled from source on Raspi 3B+, Python 3.7
Is this related to the camera version, or is there some other reason?
Yes, it is installed:
"raspividyuv" Camera App (commit 4a0a19b88b43 Tainted)
Camera Name ov5647
GPS output Disabled
framerate 30, time delay 5000
Preview Yes, Full screen Yes
It is somehow related to the "bufsize" topic, because when I set bufsize=1 I got the message "obtaining file position failed".
It does work with Python 2.7; however, I needed to change "cv2.cv.CV_FOURCC" to "cv2.VideoWriter_fourcc" in line 77 because of the newer version of OpenCV.
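For context on that change: a FOURCC is just four ASCII characters packed little-endian into a 32-bit integer, so cv2.VideoWriter_fourcc(*"MJPG") in OpenCV 3+ computes the same number the old cv2.cv.CV_FOURCC did. A dependency-free stand-in, useful only to see what the value is:

```python
def fourcc(code):
    """Pack a 4-character codec tag into the little-endian int OpenCV expects."""
    return sum(ord(ch) << (8 * i) for i, ch in enumerate(code))

mjpg = fourcc("MJPG")  # same value as cv2.VideoWriter_fourcc(*"MJPG")
```

In a real script you should still call cv2.VideoWriter_fourcc; this sketch only illustrates why the two APIs are interchangeable.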