```dart
// imgLib -> Image package from https://pub.dartlang.org/packages/image
import 'package:image/image.dart' as imglib;
import 'package:camera/camera.dart';

Future<List<int>?> convertImagetoPng(CameraImage image) async {
  try {
    imglib.Image img;
    if (image.format.group == ImageFormatGroup.yuv420) {
      img = _convertYUV420(image);
    } else if (image.format.group == ImageFormatGroup.bgra8888) {
      img = _convertBGRA8888(image);
    } else {
      return null; // unsupported format
    }
    // Convert to png
    imglib.PngEncoder pngEncoder = imglib.PngEncoder();
    List<int> png = pngEncoder.encodeImage(img);
    return png;
  } catch (e) {
    print(">>>>>>>>>>>> ERROR:" + e.toString());
  }
  return null;
}

// CameraImage BGRA8888 -> PNG
// Color
imglib.Image _convertBGRA8888(CameraImage image) {
  return imglib.Image.fromBytes(
    image.width,
    image.height,
    image.planes[0].bytes,
    format: imglib.Format.bgra,
  );
}

// CameraImage YUV420_888 -> PNG -> Image (compression: 0, filter: none)
// Grayscale (uses the Y plane only)
imglib.Image _convertYUV420(CameraImage image) {
  var img = imglib.Image(image.width, image.height); // Create Image buffer
  Plane plane = image.planes[0];
  const int shift = (0xFF << 24);
  // Fill image buffer with plane[0] from YUV420_888
  for (int x = 0; x < image.width; x++) {
    for (int planeOffset = 0;
        planeOffset < image.height * image.width;
        planeOffset += image.width) {
      final pixelColor = plane.bytes[planeOffset + x];
      // color: 0x FF FF FF FF
      //           A  B  G  R
      // Calculate pixel color
      var newVal = shift | (pixelColor << 16) | (pixelColor << 8) | pixelColor;
      img.data[planeOffset + x] = newVal;
    }
  }
  return img;
}
```
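The `_convertYUV420` above copies only the Y (luma) plane and replicates it into all three channels, which is why the result is grayscale. A minimal pure-Dart sketch of that ARGB bit-packing, with no camera or image package needed (the helper name is illustrative):

```dart
// Pack one 8-bit luma value into a 32-bit pixel the way the
// grayscale converter above does: opaque alpha in the top byte,
// the same luma value in the three color bytes.
int packGray(int luma) {
  const int shift = 0xFF << 24; // opaque alpha
  return shift | (luma << 16) | (luma << 8) | luma;
}

void main() {
  // Luma 0xC8 (200) becomes the opaque gray 0xFFC8C8C8.
  print(packGray(0xC8).toRadixString(16)); // ffc8c8c8
}
```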
I used the same approach as yours, but I got an image containing nothing. Now CameraImage also supports jpeg, so I did the same in your code and just added a new conversion type:
```dart
imglib.Image? _convertJpeg(CameraImage image) {
  return imglib.Image.fromBytes(
      image.width, image.height, image.planes[0].bytes,
      format: imglib.Format.rgb, channels: imglib.Channels.rgb);
}
```
Thanks, it helped a great deal.
@GanZhiXiong,
Hi, I'm facing the same issue using the latest ffi: ^1.1.2 in my image-conversion code. Can you please share your solution here?
@Hugand I'm trying to use your C converter implementation. On Android it runs perfectly fine, but when I run on iOS I get the error below:
```
/Users/Username/Library/Developer/Xcode/DerivedData/Runner-bapuesqdvyewspdpyvpssxebolee/Build/Intermediates.noindex/Runner.build/Debug-iphonesimulator/Runner.build/Objects-normal/x86_64/custom_image_converter.o
/Users/Username/Library/Developer/Xcode/DerivedData/Runner-bapuesqdvyewspdpyvpssxebolee/Build/Intermediates.noindex/Runner.build/Debug-iphonesimulator/Runner.build/Objects-normal/x86_64/AppDelegate.o
ld: 1 duplicate symbol for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
note: Using new build system
note: Building targets in parallel
note: Planning build
note: Analyzing workspace
note: Constructing build description
note: Build preparation complete
Could not build the application for the simulator.
Error launching application on iPhone 12 Pro.
Could not build the application for the simulator.
```
The error is simulator related, not pertaining to the C code.
@sikandernoori Actually I've resolved the issue; it was not simulator related after all, it was a code issue. The problem was with the main function in the C script.
The output is a grayscale image. How do I get a color image?
@yh4922 can you provide a reproducible code segment?
```dart
imglib.Image _convertYUV420(CameraImage image) {
  var img = imglib.Image(image.width, image.height); // Create Image buffer
  final int width = image.width;
  final int height = image.height;
  final int uvRowStride = image.planes[1].bytesPerRow;
  final int uvPixelStride = image.planes[1].bytesPerPixel;
  const shift = (0xFF << 24);
  for (int x = 0; x < width; x++) {
    for (int y = 0; y < height; y++) {
      final int uvIndex =
          uvPixelStride * (x / 2).floor() + uvRowStride * (y / 2).floor();
      final int index = y * width + x;
      final yp = image.planes[0].bytes[index];
      final up = image.planes[1].bytes[uvIndex];
      final vp = image.planes[2].bytes[uvIndex];
      // Calculate pixel color
      int r = (yp + vp * 1436 / 1024 - 179).round().clamp(0, 255).toInt();
      int g = (yp - up * 46549 / 131072 + 44 - vp * 93604 / 131072 + 91)
          .round()
          .clamp(0, 255)
          .toInt();
      int b = (yp + up * 1814 / 1024 - 227).round().clamp(0, 255).toInt();
      // color: 0x FF FF FF FF
      //           A  B  G  R
      img.data[index] = shift | (b << 16) | (g << 8) | r;
    }
  }
  return img;
}
```
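The integer coefficients in the conversion above are fixed-point approximations of the usual YUV-to-RGB formulas: 1436/1024 ≈ 1.402, 46549/131072 ≈ 0.355, 93604/131072 ≈ 0.714, 1814/1024 ≈ 1.772, with the constants 179, 44 + 91, and 227 roughly folding in the −128 chroma centering. A standalone sketch of the per-pixel math (the helper name is illustrative; no camera types needed):

```dart
// Hypothetical helper: convert one YUV pixel (all components 0..255)
// to RGB using the same fixed-point coefficients as the code above.
List<int> yuvPixelToRgb(int y, int u, int v) {
  final int r = (y + v * 1436 / 1024 - 179).round();
  final int g =
      (y - u * 46549 / 131072 + 44 - v * 93604 / 131072 + 91).round();
  final int b = (y + u * 1814 / 1024 - 227).round();
  // clamp() returns num, so convert back to int explicitly.
  return [
    r.clamp(0, 255).toInt(),
    g.clamp(0, 255).toInt(),
    b.clamp(0, 255).toInt(),
  ];
}

void main() {
  // Neutral chroma (u = v = 128) should give a near-gray pixel.
  print(yuvPixelToRgb(128, 128, 128)); // close to [128, 128, 128]
}
```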
@sikandernoori This way it can be converted into a color image, but it takes more than 1000 ms per frame on a phone with a Snapdragon 870, and it blocks the UI.
@yh4922 this is the time it takes Dart to convert a YUV420 image (which comes from the Android camera) to RGB. Performance could be much better if you do it in C, possibly enabling NEON or the GPU. You can also do this conversion in OpenCV, which is nicely optimized.
@ramsmart-inno if you are using OpenCV anyway, this image conversion does not cost you any extra APK size.
Thanks for your answers; I see many kinds of solutions.
I now want to convert yuv420 images to color. Can you tell me which solution can do that?
I've seen many that only convert to black and white, and that's not what I want.
In addition, if the camera's imageFormatGroup parameter is set to jpeg you can easily get a color image, but I found that it makes the preview lag, and the experience is very poor.
So I may only be able to convert yuv420.
Thank you very much!
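One way to keep a conversion this slow from freezing the UI is to run it off the main isolate. A minimal pure-Dart sketch of the pattern, assuming `Isolate.run` is available (Dart 2.19+); in a real app you would pass the plane bytes and strides rather than the `CameraImage` itself, since only plain data can cross isolates:

```dart
import 'dart:isolate';
import 'dart:typed_data';

// Stand-in for the heavy YUV -> RGB loop: here it just inverts bytes.
// Only plain sendable data (TypedData, ints) goes to the worker isolate.
Uint8List heavyConvert(Uint8List yPlane) {
  final out = Uint8List(yPlane.length);
  for (var i = 0; i < yPlane.length; i++) {
    out[i] = 255 - yPlane[i];
  }
  return out;
}

Future<void> main() async {
  final fakePlane = Uint8List.fromList([0, 100, 255]);
  // Isolate.run executes the closure on a worker isolate, so the
  // main isolate (and the UI) stays responsive in the meantime.
  final converted = await Isolate.run(() => heavyConvert(fakePlane));
  print(converted); // [255, 155, 0]
}
```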
After a few days of struggling with the CameraImage-to-Image conversion, I managed to improve the method to account for padding on different devices. I tested on several devices, checking the conversion at different camera resolutions, and I think it works.
```dart
imglib.Image convertYUV420ToImage(CameraImage cameraImage) {
  final imageWidth = cameraImage.width;
  final imageHeight = cameraImage.height;
  final yBuffer = cameraImage.planes[0].bytes;
  final uBuffer = cameraImage.planes[1].bytes;
  final vBuffer = cameraImage.planes[2].bytes;
  final int yRowStride = cameraImage.planes[0].bytesPerRow;
  final int yPixelStride = cameraImage.planes[0].bytesPerPixel!;
  final int uvRowStride = cameraImage.planes[1].bytesPerRow;
  final int uvPixelStride = cameraImage.planes[1].bytesPerPixel!;
  final image = imglib.Image(imageWidth, imageHeight);
  for (int h = 0; h < imageHeight; h++) {
    int uvh = (h / 2).floor();
    for (int w = 0; w < imageWidth; w++) {
      int uvw = (w / 2).floor();
      final yIndex = (h * yRowStride) + (w * yPixelStride);
      // Y plane should hold positive values in [0...255]
      final int y = yBuffer[yIndex];
      // U/V values are subsampled: each pixel in the U/V channel of a
      // YUV420 image acts as the chroma value for 4 neighbouring pixels
      final int uvIndex = (uvh * uvRowStride) + (uvw * uvPixelStride);
      final int u = uBuffer[uvIndex];
      final int v = vBuffer[uvIndex];
      // Compute RGB values (fixed-point approximation of YUV -> RGB,
      // with the -128 chroma centering folded into the constants)
      int r = (y + v * 1436 / 1024 - 179).round();
      int g = (y - u * 46549 / 131072 + 44 - v * 93604 / 131072 + 91).round();
      int b = (y + u * 1814 / 1024 - 227).round();
      r = r.clamp(0, 255).toInt();
      g = g.clamp(0, 255).toInt();
      b = b.clamp(0, 255).toInt();
      // Use 255 for the alpha value (no transparency). The channels are
      // packed into a single 4-byte integer.
      final int argbIndex = h * imageWidth + w;
      image.data[argbIndex] = 0xff000000 |
          ((b << 16) & 0xff0000) |
          ((g << 8) & 0xff00) |
          (r & 0xff);
    }
  }
  return image;
}
```
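The uvIndex computation above is the heart of 4:2:0 chroma subsampling: every 2×2 block of luma pixels shares one U/V sample. A standalone sketch of just that indexing (the stride values below are illustrative, not from any real device):

```dart
// Index of the chroma (U/V) sample for the luma pixel at (w, h)
// in a YUV 4:2:0 layout, as in convertYUV420ToImage above.
int uvIndexFor(int w, int h, int uvRowStride, int uvPixelStride) {
  final uvw = w ~/ 2; // same as (w / 2).floor() for non-negative w
  final uvh = h ~/ 2;
  return (uvh * uvRowStride) + (uvw * uvPixelStride);
}

void main() {
  // Example strides: interleaved UV plane with pixel stride 2.
  const uvRowStride = 640, uvPixelStride = 2;
  // All four pixels of a 2x2 block map to the same chroma sample:
  final a = uvIndexFor(10, 4, uvRowStride, uvPixelStride);
  final b = uvIndexFor(11, 5, uvRowStride, uvPixelStride);
  print(a == b); // true
}
```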
This code has some deprecated classes as of Flutter 3.7.1: `_convertBGRA8888(CameraImage image)` has to be rewritten, because `imglib.Image.fromBytes` no longer accepts these direct parameters and the Format enum no longer has a bgra value.
```dart
imglib.Image _convertBGRA8888(CameraImage image) {
  return imglib.Image.fromBytes(
    image.width,
    image.height,
    image.planes[0].bytes,
    order: ChannelOrder.bgra,
  );
}
```
Hi, `img.data[index]` seems to no longer exist; how should the code be modified when using convertYUV420ToImage?
I have the same problem
Hi @juanlabrador, I found a way and it works well: replace `img.data[index] = ...` with `img.setPixelRgba(x, y, r, g, b, hexFF);`
And what should go into each value?
```dart
imglib.Image _convertBGRA8888(CameraImage image) {
  return imglib.Image.fromBytes(
    image.width,
    image.height,
    image.planes[0].bytes,
    order: ChannelOrder.bgra,
  );
}
```
With this, the image result is all broken.
@neoacevedo I used this function.
```dart
static imglib.Image convertBGRA8888ToImage(CameraImage cameraImage) {
  return imglib.Image.fromBytes(
    width: cameraImage.planes[0].width!,
    height: cameraImage.planes[0].height!,
    bytes: cameraImage.planes[0].bytes.buffer,
    order: imglib.ChannelOrder.bgra,
  );
}
```
It's working for iOS, but not on Android.
What imageFormatGroup did you use to create the CameraController?
@federico-amura-kenility I think on Android you should use convertYUV420ToImage. I modified my class after updating the image lib; please try this code.
```dart
part of object_detection;

/// ImageUtils
class ImageUtils {
  ///
  /// Converts a [CameraImage] in YUV420 or BGRA8888 format to
  /// [image_lib.Image] in RGB format
  ///
  static imglib.Image convertCameraImage(CameraImage cameraImage) {
    if (cameraImage.format.group == ImageFormatGroup.yuv420) {
      return convertYUV420ToImage(cameraImage);
    } else if (cameraImage.format.group == ImageFormatGroup.bgra8888) {
      return convertBGRA8888ToImage(cameraImage);
    } else {
      throw Exception('Undefined image type.');
    }
  }

  ///
  /// Converts a [CameraImage] in BGRA8888 format to [image_lib.Image] in RGB format
  ///
  static imglib.Image convertBGRA8888ToImage(CameraImage cameraImage) {
    return imglib.Image.fromBytes(
      width: cameraImage.planes[0].width!,
      height: cameraImage.planes[0].height!,
      bytes: cameraImage.planes[0].bytes.buffer,
      order: imglib.ChannelOrder.bgra,
    );
  }

  ///
  /// Converts a [CameraImage] in YUV420 format to [image_lib.Image] in RGB format
  ///
  static imglib.Image convertYUV420ToImage(CameraImage cameraImage) {
    final imageWidth = cameraImage.width;
    final imageHeight = cameraImage.height;
    final yBuffer = cameraImage.planes[0].bytes;
    final uBuffer = cameraImage.planes[1].bytes;
    final vBuffer = cameraImage.planes[2].bytes;
    final int yRowStride = cameraImage.planes[0].bytesPerRow;
    final int yPixelStride = cameraImage.planes[0].bytesPerPixel!;
    final int uvRowStride = cameraImage.planes[1].bytesPerRow;
    final int uvPixelStride = cameraImage.planes[1].bytesPerPixel!;
    final image = imglib.Image(width: imageWidth, height: imageHeight);
    for (int h = 0; h < imageHeight; h++) {
      int uvh = (h / 2).floor();
      for (int w = 0; w < imageWidth; w++) {
        int uvw = (w / 2).floor();
        final yIndex = (h * yRowStride) + (w * yPixelStride);
        // Y plane should hold positive values in [0...255]
        final int y = yBuffer[yIndex];
        // U/V values are subsampled: each pixel in the U/V channel of a
        // YUV420 image acts as the chroma value for 4 neighbouring pixels
        final int uvIndex = (uvh * uvRowStride) + (uvw * uvPixelStride);
        final int u = uBuffer[uvIndex];
        final int v = vBuffer[uvIndex];
        // Compute RGB values (fixed-point approximation of YUV -> RGB,
        // with the -128 chroma centering folded into the constants)
        int r = (y + v * 1436 / 1024 - 179).round();
        int g = (y - u * 46549 / 131072 + 44 - v * 93604 / 131072 + 91).round();
        int b = (y + u * 1814 / 1024 - 227).round();
        r = r.clamp(0, 255).toInt();
        g = g.clamp(0, 255).toInt();
        b = b.clamp(0, 255).toInt();
        image.setPixelRgb(w, h, r, g, b);
      }
    }
    return image;
  }
}
```
After some trial and error, I found the perfect solution for iOS:
```dart
const IOS_BYTES_OFFSET = 28;

static Image _convertBGRA8888ToImage(CameraImage cameraImage) {
  final plane = cameraImage.planes[0];
  return Image.fromBytes(
    width: cameraImage.width,
    height: cameraImage.height,
    bytes: plane.bytes.buffer,
    rowStride: plane.bytesPerRow,
    bytesOffset: IOS_BYTES_OFFSET,
    order: ChannelOrder.bgra,
  );
}
```
The other solution produced a 1088-wide image with an 8 px black bar. By adding `rowStride` and `bytesOffset`, it is now 1080 wide with no black bars.
I have no idea where the offset of 28 comes from. Does anyone know why 28 works?
@saad-palapa Does anyone know why 28 works?
Now that we know the answer, the explanation is rather easy. The image in memory is 1088 pixels wide, with a bar of 8 extra "black" pixels before the first column of each row, and the 28-byte offset (7 BGRA pixels of 4 bytes each) skips that leading padding (the illustration keeps the 8 "black" extra pixels, but does not keep the real dimensions):
```
XXXXXXXX......................
XXXXXXXX.......... _ .........
XXXXXXXX........ _( )_ .......
XXXXXXXX....... (_(%)_) ......
XXXXXXXX......... (_)\ .......
XXXXXXXX............. | __ ...
XXXXXXXX............. |/_/ ...
XXXXXXXX............. | ......
XXXXXXXX............. | ......
XXXXXXXX......................
XXXXXXXX......................
```
By adding the offset, you feed `Image.fromBytes()` something like:
```
......................XXXXXXXX
.......... _ .........XXXXXXXX
........ _( )_ .......XXXXXXXX
....... (_(%)_) ......XXXXXXXX
......... (_)\ .......XXXXXXXX
............. | __ ...XXXXXXXX
............. |/_/ ...XXXXXXXX
............. | ......XXXXXXXX
............. | ......XXXXXXXX
......................XXXXXXXX
......................
```
The function is smart enough to throw away the extra pixels on the right when the `width` parameter is 1080 and `rowStride` is 1088.
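The effect of `rowStride` and `bytesOffset` can be demonstrated without any camera at all. The sketch below builds a tiny synthetic buffer with padding pixels at the start of each row (all sizes are illustrative, much smaller than the 1080/1088 case above) and extracts the real pixels using a stride and an offset:

```dart
import 'dart:typed_data';

// Extract a width x height image from a padded buffer, skipping
// bytesOffset leading bytes and stepping rowStride bytes per stored
// row - the same job rowStride/bytesOffset do in Image.fromBytes.
List<List<int>> extractPixels(Uint8List buf, int width, int height,
    int bytesPerPixel, int rowStride, int bytesOffset) {
  final rows = <List<int>>[];
  for (var h = 0; h < height; h++) {
    final row = <int>[];
    for (var w = 0; w < width; w++) {
      row.add(buf[bytesOffset + h * rowStride + w * bytesPerPixel]);
    }
    rows.add(row);
  }
  return rows;
}

void main() {
  // 2 real pixels + 2 padding pixels per row, 1 byte per pixel, 2 rows.
  // Padding bytes are 0; real pixel values are 1..4.
  final buf = Uint8List.fromList([0, 0, 1, 2, 0, 0, 3, 4]);
  final img = extractPixels(buf, 2, 2, 1, 4, 2);
  print(img); // [[1, 2], [3, 4]]
}
```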
Hello, I've run into another problem. Has anyone tried converting NV21 from CameraImage to Image?
Some devices (Xiaomi, Motorola) deliver the NV21 format. Can anyone help?
@rraayy you can see how this can be done efficiently with ffi on Medium. The sample code is available at https://github.com/Hugand/camera_tutorial