Getting Raw Video Data into My App Quick and Dirty

I’ve been working with Core Image for iOS as part of a book project that I am involved in.  I’m really excited about what Core Image can do and I’m eager to see what apps use this capability.

However, I wanted to use Core Image to process input from the iPad 2 and iPhone cameras.  The AVFoundation framework and its accompanying tools are a huge topic and there are lots of things that one can do with video.  What I wanted to do was fairly simple, however, and I figured there should be a quick resource that would give me the couple dozen lines I needed to get started.  But, I couldn’t find it.

There’s the AVFoundation programming guide, there’s a bunch of sample code available from Apple, and there are WWDC videos that cover getting camera input.  But, I just wanted that one thing, and I found it a little annoying that I couldn’t find a resource that targeted exactly the piece I was interested in.

So, this is going to be that resource.  How to get raw video into your app, bare minimum.

The first thing you need to do is create a new GLKit project (use ARC too; we won’t be doing traditional memory management).  We’ll be using the capability of Core Image that draws directly to the render buffer.  You can strip out all the code that comes with the template.  We won’t need any of it.

Then we need to add the frameworks.  There are several; some are for the video capture, some for Core Image:

  • AVFoundation
  • CoreVideo
  • CoreMedia
  • QuartzCore
  • ImageIO
  • CoreImage

Once you’ve done that, import those frameworks into your header file and set your GLKViewController up as an AVCaptureVideoDataOutputSampleBufferDelegate. You’ll also need a couple of instance variables. You should have this:


#import <UIKit/UIKit.h>
#import <GLKit/GLKit.h>
#import <AVFoundation/AVFoundation.h>
#import <CoreMedia/CoreMedia.h>
#import <CoreVideo/CoreVideo.h>
#import <QuartzCore/QuartzCore.h>
#import <CoreImage/CoreImage.h>
#import <ImageIO/ImageIO.h>

@interface ViewController : GLKViewController
<AVCaptureVideoDataOutputSampleBufferDelegate> {
    AVCaptureSession *session;

    CIContext *coreImageContext;
    GLuint _renderBuffer;
}

@property (strong, nonatomic) EAGLContext *context;

@end


The delegate protocol means that this class will receive the callback that delivers the raw pixel data. The session object sets up and configures the parameters that control the capture: resolution, camera input, and so on. The Core Image context is required to draw the results of the Core Image filters that we’ll be using.

The coreImageContext needs a render buffer to write to, so we’ll set that up.  We also need a reference to the EAGLContext to tell it to present the contents of the render buffer.

Let’s go ahead and implement setting up the camera and the context in the viewDidLoad method:

#import "ViewController.h"

@implementation ViewController
@synthesize context = _context;

- (void)viewDidLoad {
    [super viewDidLoad];

    self.context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];

    if (!self.context) {
        NSLog(@"Failed to create ES context");
    }

    GLKView *view = (GLKView *)self.view;
    view.context = self.context;
    view.drawableDepthFormat = GLKViewDrawableDepthFormat24;

    // Create and bind the render buffer that Core Image will draw into.
    glGenRenderbuffers(1, &_renderBuffer);
    glBindRenderbuffer(GL_RENDERBUFFER, _renderBuffer);

    coreImageContext = [CIContext contextWithEAGLContext:self.context];

    NSError *error;
    session = [[AVCaptureSession alloc] init];

    [session beginConfiguration];
    [session setSessionPreset:AVCaptureSessionPreset640x480];

    AVCaptureDevice *videoDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:videoDevice error:&error];
    [session addInput:input];

    AVCaptureVideoDataOutput *dataOutput = [[AVCaptureVideoDataOutput alloc] init];
    [dataOutput setAlwaysDiscardsLateVideoFrames:YES];
    [dataOutput setVideoSettings:[NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA] forKey:(id)kCVPixelBufferPixelFormatTypeKey]];

    [dataOutput setSampleBufferDelegate:self queue:dispatch_get_main_queue()];

    [session addOutput:dataOutput];
    [session commitConfiguration];
    [session startRunning];
}

The first section comes mostly from the template.  Here we’re just setting up the EAGLContext.  Next we set up the render buffer.  In the third section we initialize our Core Image context.

Finally, in the fourth section we set up our camera input.  We create our session and then configure it.  We’re setting the session to be 640 pixels wide by 480 pixels tall.  There are other options, including 720p and 1080p.  The more pixels we feed into Core Image, the slower the performance.  A single, simple filter could handle a higher resolution, but for our test, let’s just start here.

Next we set up the input device.  If we wanted to specify the front or back camera, we would call devicesWithMediaType:, which returns an array of devices.  To get the front-facing camera, we iterate through that array and look for AVCaptureDevicePositionFront in each device’s position property.  But, for simplicity we can just use the default here.
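If you do want the front-facing camera, the lookup described above looks something like this (a sketch of the iteration, falling back to the default device if no front camera is found):

AVCaptureDevice *videoDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
for (AVCaptureDevice *device in [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo]) {
    // Look for the camera whose position is front-facing.
    if ([device position] == AVCaptureDevicePositionFront) {
        videoDevice = device;
    }
}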

Next we set up the output.  We are telling it to go ahead and ignore any frames that come in late.  If we were recording we’d want to change this.  Then we set up the color format for the incoming data.

Next we set the delegate, which will receive the callback each frame, and we put that callback on the main queue.

Finally, we finalize our configuration and start the camera running.

Now let’s add the callback method and do something with that data:

-(void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {

    CVPixelBufferRef pixelBuffer = (CVPixelBufferRef)CMSampleBufferGetImageBuffer(sampleBuffer);

    CIImage *image = [CIImage imageWithCVPixelBuffer:pixelBuffer];

    [coreImageContext drawImage:image atPoint:CGPointZero fromRect:[image extent] ];

    [self.context presentRenderbuffer:GL_RENDERBUFFER];
}

The first line takes the incoming sample buffer and converts it into a CVPixelBuffer.  Then we use the Core Image initialization method imageWithCVPixelBuffer: to turn that same data into a CIImage.

Next, we use our coreImageContext CIContext object to draw the image into the render buffer, and finally present that on screen.  There’s another CIContext method that would allow us to write to a CVPixelBuffer; we’d use that if we wanted to record the output of the Core Image filters.
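That recording path isn’t shown in this post, but the call would look roughly like this (a hedged sketch; recordingPixelBuffer is a hypothetical CVPixelBufferRef you’d obtain elsewhere, e.g. from an AVAssetWriterInputPixelBufferAdaptor’s pixel buffer pool):

// Render the filtered CIImage into a pixel buffer instead of the screen.
[coreImageContext render:image toCVPixelBuffer:recordingPixelBuffer];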

If you build and run now, you should have video data being captured and rendered to the screen.  There are a few more methods that came with the template we should probably include.  This is housekeeping that’s outside the scope of this post, but it should be included:

- (void)viewDidUnload {
    [super viewDidUnload];

    if ([EAGLContext currentContext] == self.context) {
        [EAGLContext setCurrentContext:nil];
    }
    self.context = nil;
}

- (void)didReceiveMemoryWarning {
    [super didReceiveMemoryWarning];
    // Release any cached data, images, etc. that aren't in use.
}

- (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation {
    // Return YES for supported orientations
    if ([[UIDevice currentDevice] userInterfaceIdiom] == UIUserInterfaceIdiomPhone) {
        return (interfaceOrientation != UIInterfaceOrientationPortraitUpsideDown);
    } else {
        return YES;
    }
}
So, that works, but so what?  Well, it’s a starting point for playing with Core Image filters.  So, what I’d do next is apply a filter.  Here’s what you’d change to apply a Core Image filter to the video output.  Add the following to the callback, right before the coreImageContext drawImage:atPoint:fromRect: line:

image = [CIFilter filterWithName:@"CIFalseColor" keysAndValues:
    kCIInputImageKey, image,
    @"inputColor0", [CIColor colorWithRed:0.0 green:0.2 blue:0.0],
    @"inputColor1", [CIColor colorWithRed:0.0 green:0.0 blue:1.0],
    nil].outputImage;
The False Color filter takes an image and maps its contents to two colors, in this case dark green and blue. You can see the results.

There are all kinds of things you can do with Core Image filters, and they can be chained together for compound effects.
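For example, here’s a sketch (not from the original project) that pipes the false-color output through a second filter before drawing it; CIHueAdjust takes inputAngle in radians:

image = [CIFilter filterWithName:@"CIFalseColor" keysAndValues:
    kCIInputImageKey, image,
    @"inputColor0", [CIColor colorWithRed:0.0 green:0.2 blue:0.0],
    @"inputColor1", [CIColor colorWithRed:0.0 green:0.0 blue:1.0],
    nil].outputImage;

// Chain: feed the first filter's output into a second filter.
image = [CIFilter filterWithName:@"CIHueAdjust" keysAndValues:
    kCIInputImageKey, image,
    @"inputAngle", [NSNumber numberWithFloat:1.57],
    nil].outputImage;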

This project still needs a lot of tweaking: the video is the wrong size for the screen (this could be fixed with a CI filter if you wanted to do it that way), it breaks when you rotate it out of landscape, and so on.  But, hopefully this helped you get started.  You can download the code here.

14 thoughts on “Getting Raw Video Data into My App Quick and Dirty”

  1. Paul Oct 15, 2011 6:35 pm

    Would this also apply to capturing the raw data from the camera for stills?

    • admin Oct 22, 2011 4:32 am

      There are a number of ways to capture stills. You can use this method. The CIImage can be converted into a CGImage and using the ALAssetsLibrary can be saved to the device’s photo album. I show how this is done in the book. I’ll put that code up in a post in a day or two.
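      A rough sketch of that path (my paraphrase, not the book’s code; it requires linking the AssetsLibrary framework):

      CGImageRef cgImage = [coreImageContext createCGImage:image fromRect:[image extent]];
      ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
      [library writeImageToSavedPhotosAlbum:cgImage
                                orientation:ALAssetOrientationUp
                            completionBlock:^(NSURL *assetURL, NSError *error) {
          // Release the CGImage once the save completes.
          CGImageRelease(cgImage);
      }];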

  2. Usama Ahmed Nov 3, 2011 6:16 pm


    I can’t seem to find the CoreImage framework that you have used in this example. I am using the iOS 4.3 SDK.

  3. Benoit Nolens Nov 23, 2011 2:25 pm


    I used your code and changed the filter to CIGaussianBlur, but the result is a black screen.
    Something like this:
    image = [CIFilter filterWithName:@"CIGaussianBlur" keysAndValues: kCIInputImageKey, image, @"inputRadius", [NSNumber numberWithFloat:3.0], nil].outputImage;

    Any idea why?

  4. Helmi Dec 8, 2011 1:02 pm

    Hi, thanks for posting such a good tutorial. I’m planning to buy your book.
    I can’t seem to run it on my iPod; both run iOS 5. Here’s the error msg:

    *** Terminating app due to uncaught exception ‘NSUnknownKeyException’, reason: ‘[ setValue:forUndefinedKey:]: this class is not key value coding-compliant for the key playGOLButton.’

    • Alex Jun 26, 2012 9:09 am

      Yeah, I’m getting the same error here.. (iOS6 SDK B1)
      Any ideas?

      • admin Jul 13, 2012 4:12 pm

        Usually this means that there’s a connection in interface builder that no longer exists. Look at the IB file and see if there are any connections to missing IBOutlets. At some point I’ll fix this.

  5. Chida Mar 7, 2012 5:38 am

    Hi, this is a very great tutorial. But I want to record the filtered video. I came across a lot of examples, but didn’t find a good tutorial on capturing the video.

    Can you provide some example of recording the video? That would be a great help to me.

  6. Abhishek Nov 20, 2012 1:05 pm

    Thanks for this tutorial.
    I am working on the same thing. But I want to play a raw file using AVCaptureSession, AVCaptureOutput and AVCaptureVideoDataOutput. The code that you have written works for capturing video, converting it into RGB format, and showing it to the user at the same moment. But I want to play a file.
    Can you please help on this?

    • admin Nov 27, 2012 12:38 am

      An AVCaptureSession is specifically for video data coming from a camera. You can play video in a number of ways: if you have the file locally, you can use an AVAssetReader object. If you need to stream a remote file, there’s a new way to do that in iOS 6 that uses AVPlayer; you can read about that here.

      For the AVAssetReader method, I suggest looking at the GPUImage project. Brad Larson has written an incredible library that does all kinds of things with video. One of the example projects is for reading a file, accessing raw frames, and manipulating them in an OpenGL context. Even if that’s overkill for what you’d like to do, it’s a good place to look at how to set that up.

      If you don’t need the raw frames, you can pipe things into an AVPlayerLayer, but I don’t have much experience with that so I’m not sure what the caveats are in that case.

  7. Rudolph Mar 21, 2014 2:02 pm

    This compiles, but displays only a black screen and I get the following message in the console: “CIContexts can only be created with ES 2.0 EAGLContexts”

  8. Rudolph Mar 26, 2014 10:16 am

    Never mind my other comment. Code actually does work, but only with GLES 2 and not 3.

    Thanx for your example, it finally helped me to get my own code to work 🙂
