@benlodotcom
Last active February 3, 2016 17:45
A little demo snippet that you can use to directly access the camera video frames in iOS 4
#import <UIKit/UIKit.h>
#import <AVFoundation/AVFoundation.h>
#import <CoreGraphics/CoreGraphics.h>
#import <CoreVideo/CoreVideo.h>
#import <CoreMedia/CoreMedia.h>
/*!
@class MyAVController
@author Benjamin Loulier
@brief Controller demonstrating how to get direct access to the camera frames using the iPhone SDK 4
*/
@interface MyAVController : UIViewController <AVCaptureVideoDataOutputSampleBufferDelegate> {
AVCaptureSession *_captureSession;
UIImageView *_imageView;
CALayer *_customLayer;
AVCaptureVideoPreviewLayer *_prevLayer;
}
/*!
@brief The capture session takes the input from the camera and captures it
*/
@property (nonatomic, retain) AVCaptureSession *captureSession;
/*!
@brief The UIImageView we use to display the image generated from the imageBuffer
*/
@property (nonatomic, retain) UIImageView *imageView;
/*!
@brief The CALayer we use to display the CGImageRef generated from the imageBuffer
*/
@property (nonatomic, retain) CALayer *customLayer;
/*!
@brief The CALayer subclass provided by Apple to display the video corresponding to a capture session
*/
@property (nonatomic, retain) AVCaptureVideoPreviewLayer *prevLayer;
/*!
@brief This method initializes the capture session
*/
- (void)initCapture;
@end
#import "MyAVController.h"
@implementation MyAVController
@synthesize captureSession = _captureSession;
@synthesize imageView = _imageView;
@synthesize customLayer = _customLayer;
@synthesize prevLayer = _prevLayer;
#pragma mark -
#pragma mark Initialization
- (id)init {
self = [super init];
if (self) {
/*We initialize some variables to nil (they might not be set elsewhere depending on what is commented out)*/
self.imageView = nil;
self.prevLayer = nil;
self.customLayer = nil;
}
return self;
}
- (void)viewDidLoad {
[super viewDidLoad];
/*We initialize the capture*/
[self initCapture];
}
- (void)initCapture {
/*We set up the input*/
AVCaptureDeviceInput *captureInput = [AVCaptureDeviceInput
deviceInputWithDevice:[AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo]
error:nil];
/*We set up the output*/
AVCaptureVideoDataOutput *captureOutput = [[AVCaptureVideoDataOutput alloc] init];
/*While a frame is being processed in the -captureOutput:didOutputSampleBuffer:fromConnection: delegate method, no other frames are added to the queue.
If you don't want this behaviour, set the property to NO */
captureOutput.alwaysDiscardsLateVideoFrames = YES;
/*We specify a minimum duration for each frame (play with this setting to avoid having too many frames waiting
in the queue, because that can cause memory issues). It is the inverse of the maximum framerate.
In this example we set a minimum frame duration of 1/10 second, so a maximum framerate of 10 fps. We say that
we are not able to process more than 10 frames per second.*/
//captureOutput.minFrameDuration = CMTimeMake(1, 10);
/*We create a serial queue to handle the processing of our frames*/
dispatch_queue_t queue;
queue = dispatch_queue_create("cameraQueue", NULL);
[captureOutput setSampleBufferDelegate:self queue:queue];
dispatch_release(queue);
// Set the video output to store frames in BGRA format (it is supposed to be faster)
NSString* key = (NSString*)kCVPixelBufferPixelFormatTypeKey;
NSNumber* value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA];
NSDictionary* videoSettings = [NSDictionary dictionaryWithObject:value forKey:key];
[captureOutput setVideoSettings:videoSettings];
/*And we create a capture session*/
self.captureSession = [[[AVCaptureSession alloc] init] autorelease];
/*We add input and output*/
[self.captureSession addInput:captureInput];
[self.captureSession addOutput:captureOutput];
/*The session retains the output, so we release our reference here*/
[captureOutput release];
/*We use medium quality; on the iPhone 4 this demo would lag too much, since the conversion to UIImage and CGImage demands too many resources at 720p resolution.*/
[self.captureSession setSessionPreset:AVCaptureSessionPresetMedium];
/*We add the Custom Layer (We need to change the orientation of the layer so that the video is displayed correctly)*/
self.customLayer = [CALayer layer];
self.customLayer.frame = self.view.bounds;
self.customLayer.transform = CATransform3DRotate(CATransform3DIdentity, M_PI/2.0f, 0, 0, 1);
self.customLayer.contentsGravity = kCAGravityResizeAspectFill;
[self.view.layer addSublayer:self.customLayer];
/*We add the imageView*/
self.imageView = [[[UIImageView alloc] init] autorelease];
self.imageView.frame = CGRectMake(0, 0, 100, 100);
[self.view addSubview:self.imageView];
/*We add the preview layer*/
self.prevLayer = [AVCaptureVideoPreviewLayer layerWithSession: self.captureSession];
self.prevLayer.frame = CGRectMake(100, 0, 100, 100);
self.prevLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
[self.view.layer addSublayer: self.prevLayer];
/*We start the capture*/
[self.captureSession startRunning];
}
#pragma mark -
#pragma mark AVCaptureVideoDataOutputSampleBufferDelegate
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
fromConnection:(AVCaptureConnection *)connection
{
/*We create an autorelease pool because we are not on the main queue, so our code is
not executed on the main thread; we therefore have to create an autorelease pool for the thread we are in*/
NSAutoreleasePool * pool = [[NSAutoreleasePool alloc] init];
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
/*Lock the image buffer*/
CVPixelBufferLockBaseAddress(imageBuffer,0);
/*Get information about the image*/
uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
/*Create a CGImageRef from the CVImageBufferRef*/
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
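/*BGRA in memory is a 32-bit little-endian word with the alpha component first (ARGB read little-endian), hence the byte order and alpha flags passed below*/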
CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
CGImageRef newImage = CGBitmapContextCreateImage(newContext);
/*We release some components*/
CGContextRelease(newContext);
CGColorSpaceRelease(colorSpace);
/*We display the result on the custom layer. All the display work must be done on the main thread because
UIKit is not thread safe, and as we are not on the main thread (remember we didn't use the main queue)
we use performSelectorOnMainThread to tell our CALayer to display the CGImage.*/
[self.customLayer performSelectorOnMainThread:@selector(setContents:) withObject: (id) newImage waitUntilDone:YES];
/*We display the result on the image view (we need to change the orientation of the image so that the video is displayed correctly).
Same thing as for the CALayer: we are not on the main thread, so ...*/
UIImage *image= [UIImage imageWithCGImage:newImage scale:1.0 orientation:UIImageOrientationRight];
/*We release the CGImageRef*/
CGImageRelease(newImage);
[self.imageView performSelectorOnMainThread:@selector(setImage:) withObject:image waitUntilDone:YES];
/*We unlock the image buffer*/
CVPixelBufferUnlockBaseAddress(imageBuffer,0);
[pool drain];
}
#pragma mark -
#pragma mark Memory management
- (void)viewDidUnload {
self.imageView = nil;
self.customLayer = nil;
self.prevLayer = nil;
}
- (void)dealloc {
[_captureSession release];
[_imageView release];
[_customLayer release];
[_prevLayer release];
[super dealloc];
}
@end
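
A minimal sketch of one way to instantiate MyAVController and display its view, e.g. from a window-based application delegate (the MyAVDemoAppDelegate class and its window outlet are hypothetical, just for illustration, not part of the gist):

#import <UIKit/UIKit.h>
#import "MyAVController.h"

@interface MyAVDemoAppDelegate : NSObject <UIApplicationDelegate>
@property (nonatomic, retain) IBOutlet UIWindow *window;
@end

@implementation MyAVDemoAppDelegate
@synthesize window;

- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions {
    /*Create the controller and show its view full screen in the window;
    accessing the view triggers viewDidLoad, which starts the capture*/
    MyAVController *avController = [[MyAVController alloc] init];
    avController.view.frame = [[UIScreen mainScreen] applicationFrame];
    [self.window addSubview:avController.view];
    [self.window makeKeyAndVisible];
    /*Under manual reference counting you would normally keep avController in a retained
    property and release it in dealloc; it is deliberately kept alive here for brevity*/
    return YES;
}

@end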
@benlodotcom

Hi!
Could you post the piece of code you use to initialize the controller and display its view?
Also, if you want more information about this gist, you can take a look at the post I wrote on my blog: http://www.benjaminloulier.com/posts/2.
Ben

@stefanvm

Hi!
I've tried a very similar approach, both updating the image in the UIView and redrawing my own layer, and failed completely. The picture in the view doesn't get updated at all. The only version that really works is the one with the preview layer. Could you provide the complete project for the version based on redrawing the UIView and the custom layer?
Stefan

@benlodotcom

Hi guys,

Maybe you experienced problems with dispatch queues; I changed the gist so that the main dispatch queue is now used. I also made a quick Xcode project which uses the controller and displays the image view, the CALayer and the preview layer: http://github.com/benlodotcom/MyAVControllerDemo.
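
For anyone applying that change by hand, it amounts to replacing the custom queue setup in initCapture with something along these lines (a sketch, not the exact revision):

/*Deliver sample buffers on the main dispatch queue instead of a dedicated one;
the delegate callback then runs on the main thread, so UIKit and CALayer can be
updated directly, at the cost of doing the frame conversion on the main thread too.
The main queue must not be dispatch_release'd.*/
[captureOutput setSampleBufferDelegate:self queue:dispatch_get_main_queue()];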

Enjoy and let me know if you have more problems.

Ben

@stefanvm

stefanvm commented Jul 1, 2010

This is exactly what I meant: you need the preview layer attached to the capture session to get anything visible.

@stefanvm

stefanvm commented Jul 1, 2010

By the way, using the main queue is not recommended: the frame processing then competes with everything else running on the main thread, which can cause problems under some circumstances. If you create the queue as in your previous version, you get a serial queue by default.
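
For illustration, a dedicated serial queue keeps the frame processing off the main thread while still delivering frames one at a time, in order. A sketch of that setup, using the same captureOutput variable as in the gist (the queue label is arbitrary):

/*A queue created with dispatch_queue_create and a NULL attribute is serial:
sample buffers are delivered one at a time, in capture order, on a background thread*/
dispatch_queue_t cameraQueue = dispatch_queue_create("cameraQueue", NULL);
[captureOutput setSampleBufferDelegate:self queue:cameraQueue];
/*The output retains the queue, so we can release our reference
(pre-ARC dispatch objects must be released manually with dispatch_release)*/
dispatch_release(cameraQueue);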

@benlodotcom

Thanks a lot for your comments guys.

For sure, using the main queue is not recommended in production; it was a quick workaround to make it work. I'm building an AR application and I definitely use another queue there ;-)

Ben

@rnapier

rnapier commented Apr 19, 2012

Really nice sample. Very helpful.
