Conway’s Game of Life Painted with Incoming Video (Core Image Tut)

Combining Core Image and Core Graphics

A few months ago I heard about a great app called Composite. It lets you draw to the canvas using the video feed as paint. It’s really quite cool, and when I started learning about what Core Image could do, I began to wonder whether something similar could be done easily with Core Image filters.

Well, the answer is yes, but I’m not going to just copy what Composite is doing. I wanted to do something more . . . useless. So I’ve implemented Conway’s Game Of Life using the video feed as the cell images. I’ll go through how I did that in this post.

But I’m getting a little ahead of myself, as I am wont to do. Drawing with the video input as canvas involves two technologies: Core Image and Core Graphics. We’ll draw a CGImage using Core Graphics; that CGImage can then be converted into a Core Image version and used in whatever series of CIFilters we want to put it through. Depending on resources, we can use combinations of drawing and Core Image filters to perform all kinds of image and video manipulation.

This post builds on the quick and dirty video post that I wrote several weeks ago. That code is here.

Setting up the Drawing

First we need to set up the drawing classes. We need to add a CGContext to our ViewController class and a method to set it up. Your new header file should look like the following:

Header File:

@interface ViewController : GLKViewController <AVCaptureVideoDataOutputSampleBufferDelegate> {
    AVCaptureSession *session;
    
    CIContext *coreImageContext;
    
    GLuint _renderBuffer;
    
    CGSize screenSize;
    CGContextRef cgcontext;
    
    CIImage *maskImage;
}

@property (strong, nonatomic) EAGLContext *context;

-(void)setupCGContext;

@end

While we’re here we’ll add the maskImage instance variable as well. This is what we’ll use to pass around the CIImage that we’ve drawn. We’ll use that in a minute.

On to the implementation file. We need to put our screen size into our variable and call the setupCGContext method. Both of these should be in the viewDidLoad method:

CGSize scrn = [UIScreen mainScreen].bounds.size;
//[UIScreen mainScreen].bounds returns the size in portrait; we need to switch it to landscape
screenSize = CGSizeMake(scrn.height, scrn.width);
    
if ([[UIScreen mainScreen] respondsToSelector:@selector(scale)]) {
	scl = [[UIScreen mainScreen] scale];
	screenSize = CGSizeMake(screenSize.width * scl, screenSize.height * scl);
} 
[self setupCGContext];

We’re checking for Retina devices in this code block (create an instance float variable called ‘scl’ in the header). When we render to the screen using OpenGL we’re working in pixels, so we need to scale everything up on a Retina device.
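
The point-to-pixel arithmetic above is simple enough to sketch on its own. Here’s a minimal plain-C version (the `Size` struct is just a stand-in for CGSize so the sketch compiles outside the project):

```c
/* Points-to-pixels conversion, mirroring the viewDidLoad code above:
   swap the portrait bounds to landscape, then multiply by the screen
   scale (2.0 on Retina devices, 1.0 otherwise). */
typedef struct { float width, height; } Size;

static Size pixelScreenSize(Size portraitPoints, float scale) {
    Size landscape = { portraitPoints.height, portraitPoints.width };
    landscape.width  *= scale;
    landscape.height *= scale;
    return landscape;
}
```

An iPhone 4S reports 320×480 points in portrait with a scale of 2.0, which comes out to a 960×640-pixel landscape context.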

Next implement the setupCGContext method:

-(void)setupCGContext {
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    
    NSUInteger bytesPerPixel = 4;
    NSUInteger bytesPerRow = bytesPerPixel * screenSize.width;
    NSUInteger bitsPerComponent = 8;
    
    cgcontext = CGBitmapContextCreate(NULL, screenSize.width, screenSize.height, bitsPerComponent, bytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast);
    
    CGColorSpaceRelease(colorSpace);
}
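
The bitmap geometry here is worth spelling out: 4 bytes per pixel (RGBA, 8 bits per component) means bytes-per-row is just 4 × width. A plain-C sketch of the same arithmetic:

```c
/* RGBA8888 bitmap layout math, as passed to CGBitmapContextCreate. */
static unsigned long bytesPerRow(unsigned long width) {
    const unsigned long bytesPerPixel = 4; /* R, G, B, A at 8 bits each */
    return bytesPerPixel * width;
}

static unsigned long totalBytes(unsigned long width, unsigned long height) {
    return bytesPerRow(width) * height;
}
```

For a 960×640 Retina-landscape context that works out to about 2.4 MB of pixel data per frame, which is one reason this bitmap gets reused rather than recreated.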

We’re almost to some results. Next, we need to create the method that will draw a CGImage, and then convert it into a CIImage. Add the following method declaration to your header and implement the method in the .m file:

-(CIImage *)drawGameOfLife {
    CGContextSetRGBFillColor(cgcontext, 1, 1, 1, 1);
    CGContextFillEllipseInRect(cgcontext, CGRectMake(0, 0, screenSize.width, screenSize.height));
    
    CGImageRef cgImg = CGBitmapContextCreateImage(cgcontext);
    CIImage *ci = [CIImage imageWithCGImage:cgImg];
    CGImageRelease(cgImg);
    return ci;
}

We’ll be modifying this method to draw our Game Of Life state, but for now we’ll just draw a great big oval. The last four lines of the method create a CGImage from our context, create a Core Image image from the CGImage, release the CGImage and return the Core Image. By using these steps we can draw anything with Core Graphics, convert it into a CIImage, and then use that image in a filter.

Now that we’ve set up the method to create our CIImage, we need to invoke it and use the CIImage in a filter. We’re also going to scale our incoming video up to match the size of the CIImage that we’ve created. Change the captureOutput method to the following:

-(void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
    
    CVPixelBufferRef pixelBuffer = (CVPixelBufferRef)CMSampleBufferGetImageBuffer(sampleBuffer);
    
    CIImage *image = [CIImage imageWithCVPixelBuffer:pixelBuffer];
    
    float heightSc = screenSize.height/(float)CVPixelBufferGetHeight(pixelBuffer);
    float widthSc = screenSize.width/(float)CVPixelBufferGetWidth(pixelBuffer);
    
    CGAffineTransform transform = CGAffineTransformMakeScale(widthSc, heightSc);
    
    image = [CIFilter filterWithName:@"CIAffineTransform" keysAndValues:kCIInputImageKey, image, @"inputTransform", [NSValue valueWithCGAffineTransform:transform],nil].outputImage; 
    
    maskImage = [self drawGameOfLife];
    
    image = [CIFilter filterWithName:@"CIMinimumCompositing" keysAndValues:kCIInputImageKey, image, kCIInputBackgroundImageKey, maskImage, nil].outputImage;
    
    [coreImageContext drawImage:image atPoint:CGPointZero fromRect:[image extent] ];
    
    [self.context presentRenderbuffer:GL_RENDERBUFFER];
}

Filtering Incoming Video with Drawn CGImage

The first filter we apply to our image (the incoming video still) is CIAffineTransform, which scales the video up (or down) to fit the size of the screen. The second, CIMinimumCompositing, compares the image we created (a white oval on black) against the scaled still. For each pixel, this filter takes the minimum color value of the two inputs. Since black is the absolute minimum color value and white is the absolute maximum, we get black around the outside of the oval and whatever color the video frame has inside it.
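
CIMinimumCompositing’s behavior is easy to model: for each channel of each pixel, take the smaller of the two values. A minimal C sketch of that rule for a single RGBA pixel (the real filter of course runs over the whole image on the GPU):

```c
/* Per-channel minimum of two RGBA pixels, mimicking what
   CIMinimumCompositing computes for every pixel pair. */
typedef struct { float r, g, b, a; } Pixel;

static float minf(float a, float b) { return a < b ? a : b; }

static Pixel minimumComposite(Pixel video, Pixel mask) {
    Pixel out = {
        minf(video.r, mask.r),
        minf(video.g, mask.g),
        minf(video.b, mask.b),
        minf(video.a, mask.a),
    };
    return out;
}
```

Against a white mask pixel (1, 1, 1, 1) the video color passes through unchanged; against a black mask pixel (0, 0, 0, 1) the result is black, which is exactly the oval-on-black effect described above.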

Importing the Game Of Life

Now it’s time to import our Game Of Life model. I’m not going to cover the implementation of the Game Of Life; I based it on a version created by Alan Quatermain. I added a couple of extras beyond the basic game: a random-seed method that populates some of the cells, and a method that adds a specific pattern (crawlers) to the grid.

Copy the files to our project, add the import statement, and create a Game Of Life instance variable in our ViewController class.

Next, initialize the GOL by adding the following lines of code to the viewDidLoad method:

GOL = [[GOLModel alloc] initGameWidth:30 andHeight:20];
[GOL randomPopulate];

Now that we have an initialized GOL object, we’ll need to change our drawGameOfLife method to visualize the state of the game.

Here’s the new drawGameOfLife method:

-(CIImage *)drawGameOfLife {
    int colwidth = screenSize.width / GOL.width; //1
    int rowheight = screenSize.height / GOL.height;
    
    CGContextSetRGBFillColor(cgcontext, 0, 0, 0, 0.4); //2
    CGContextFillRect(cgcontext, CGRectMake(0, 0, screenSize.width, screenSize.height));
    
    NSArray *ar = GOL.cells;
    
    for (int i = 0; i < [ar count]; i++) { //3
        BOOL CELLACTIVE = [[ar objectAtIndex:i] boolValue];
        int x = i % GOL.width;
        int y = (int)(i/GOL.width);
        if (CELLACTIVE) { //4
            
            CGContextSetRGBFillColor(cgcontext, 1, 1, 1, 1);
            CGContextFillEllipseInRect(cgcontext, CGRectMake(x * colwidth + 1, y * rowheight + 1, colwidth - 2, rowheight - 2));
        }
    }
    // 5
    CGImageRef cgImg = CGBitmapContextCreateImage(cgcontext);
    CIImage *ci = [CIImage imageWithCGImage:cgImg];
    CGImageRelease(cgImg);
    return ci;
}

This method now does a couple things:

1) We get the cell width and height by dividing the screen dimensions by the number of columns and rows in the grid.
2) Next, we clear the screen, but with an opacity of 0.4. This makes dead cells take several turns to disappear completely, which makes the visual a little more interesting.
3) Then, iterating through each cell in our GOL model, we calculate the starting x and y coordinates for that cell.
4) If we’ve found an active cell, we draw a white circle at the appropriate position. We offset by +1 and subtract 2 from the width and height to leave a little padding between cells.
5) This code is the same as before, converting the CGImage into a CIImage.
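
The index math in step 3 (mapping the flat cell array onto a grid) is the usual row-major conversion. A small C sketch, using the same 30-wide grid the post initializes:

```c
/* Row-major index <-> (x, y) conversion for a flat cell array,
   matching the loop in drawGameOfLife. */
static int cellX(int i, int gridWidth) { return i % gridWidth; }
static int cellY(int i, int gridWidth) { return i / gridWidth; }
```

With a 30-column grid, flat index 65 lands at column 5 of row 2: the modulo gives the horizontal position within the row, and the integer division counts how many full rows precede the index.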

If you run now, you’ll see a bunch of little circles, randomly placed, instead of one big circle. You should be able to see the video feed through the circles.

Progressing the Game Of Life

Now let’s get the game going. We’ll need to add a button to the xib so we can turn the game on and off. Give the button an IBOutlet called playGOLButton and hook it up to a method called playGOL. We’ll also need a timer, so add an NSTimer *refreshTimer instance variable as well.

Next we’ll implement the method playGOL and add another method, updateGOL that we’ll call with a timer. Add the updateGOL method to our header file. Here’s the implementation of those two methods:

- (IBAction)playGOL:(id)sender {
    if (refreshTimer) {
        [playGOLButton setTitle:@"Play" forState:UIControlStateNormal];
        [playGOLButton setTitle:@"Play" forState:UIControlStateHighlighted];
        [refreshTimer invalidate];
        refreshTimer = nil;
        
    } else {
        [playGOLButton setTitle:@"Stop" forState:UIControlStateNormal];
        [playGOLButton setTitle:@"Stop" forState:UIControlStateHighlighted];
        refreshTimer = [NSTimer scheduledTimerWithTimeInterval:0.5 target:self selector:@selector(updateGOL:) userInfo:nil repeats:YES];
    }
}

-(void)updateGOL:(NSTimer *)timer {
    [GOL update];
    maskImage = [self drawGameOfLife];
}

In the first method, we change the button’s label and then either create a timer that fires the updateGOL method or invalidate the existing timer.

In the updateGOL method we are calling the update method on our GOL object, which progresses the game one step. Then we redraw the maskImage based on the updated GOL model state.
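
The GOLModel implementation isn’t covered in this post, but for reference, the classic rule its update step applies is: a live cell survives with 2 or 3 live neighbors, and a dead cell comes alive with exactly 3. A hedged C sketch of just that rule (the function name is illustrative, not taken from Quatermain’s code):

```c
/* Conway's rule for a single cell (B3/S23).
   alive: current state (0 or 1); neighbors: live-neighbor count (0-8). */
static int nextState(int alive, int neighbors) {
    if (alive)
        return (neighbors == 2 || neighbors == 3);  /* survival */
    return (neighbors == 3);                        /* birth */
}
```

Everything else in an update step is bookkeeping: counting each cell’s eight neighbors and writing the results into a second buffer so the new states don’t contaminate the counts.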

We were calling that same line of code in the captureOutput method. Take it out of captureOutput; we don’t need to run it that often. We should get some performance benefit out of running the drawGameOfLife method only every half second instead of every frame. Also, add the line to the very end of viewDidLoad, otherwise we’ll have a black screen until we push the play button.

Changing Cell Values in the Game Of Life

The next thing we want to do is add multitouch interaction to our GOL model. When we touch the screen, for each touch, we need to calculate the position of the cell in GOL and turn it on or off. The first thing we need to do is turn multitouch on for our view. Put this in the viewDidLoad method:

[self.view setMultipleTouchEnabled:YES];

Next we’ll add a touchesBegan and a touchesMoved method; they’re almost identical:

-(void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    for (UITouch *t in touches) {
        CGPoint loc = [t locationInView:self.view];
        loc = CGPointMake(loc.x * scl, loc.y * scl);
        loc = CGPointMake(loc.x, screenSize.height - loc.y);
        
        int colWidth = (int)screenSize.width / GOL.width;
        int rowHeight = (int)screenSize.height / GOL.height;
        
        int x = floor(loc.x / (float)colWidth);
        int y = floor(loc.y / (float)rowHeight);
        
        if (refreshTimer) {
            [GOL spawnWalkerAtCellX:x andY:y];
        } else {
            [GOL toggleCellX:x andY:y];
        }
    }
    maskImage = [self drawGameOfLife];
}

-(void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    for (UITouch *t in touches) {
        
        CGPoint loc = [t locationInView:self.view];
        loc = CGPointMake(loc.x * scl, loc.y * scl);
        loc = CGPointMake(loc.x, screenSize.height - loc.y);
        
        int colWidth = screenSize.width / GOL.width;
        int rowHeight = screenSize.height / GOL.height;
        
        int x = floor(loc.x / colWidth);
        int y = floor(loc.y / rowHeight);
        
        [GOL toggleCellX:x andY:y];
    }
    maskImage = [self drawGameOfLife];
}

The first thing we do is iterate through the touches set. For each touch we scale the location into pixels and flip it vertically, because the view’s coordinate system and Core Graphics’ are vertically flipped relative to each other.

Once we’ve done that, we can use the screen position and the known cell size to calculate the coordinates of the cell we’ve touched in the GOL model. Then we call the GOL -toggleCellX:andY: method to turn it on or off.

Finally, after we’ve iterated through all our touches and flipped their respective cells, we update the drawing of the GOL in Core Graphics and return the CIImage for our CIFilter chain.

The only difference in the touchesBegan method is that if the game is currently running, instead of toggling an individual cell we call spawnWalkerAtCellX:andY:. This method creates a specific Game Of Life pattern at the touch point, a self-sustaining pattern that moves across the screen unless it encounters another set of active cells.
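
For reference, the “walker” is the Game of Life’s classic glider: five live cells in a 3×3 box. Here’s a hedged sketch of stamping one into a flat cell array (the offsets are the standard glider; the helper is illustrative, not the post’s spawnWalkerAtCellX:andY: implementation):

```c
/* Offsets of the classic glider's five live cells, relative to (x, y):
   . X .
   . . X
   X X X  */
static const int gliderOffsets[5][2] = {
    {1, 0}, {2, 1}, {0, 2}, {1, 2}, {2, 2}
};

/* Stamp a glider into a row-major cell grid, skipping out-of-bounds cells. */
static void stampGlider(int *cells, int gridW, int gridH, int x, int y) {
    for (int i = 0; i < 5; i++) {
        int cx = x + gliderOffsets[i][0];
        int cy = y + gliderOffsets[i][1];
        if (cx >= 0 && cx < gridW && cy >= 0 && cy < gridH)
            cells[cy * gridW + cx] = 1;
    }
}
```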

If you build and run you can manipulate the Game Of Life via touches, either while the game is running, or when it’s stopped.

Here are a couple of walkers in action:

A Few More Unnecessary Additions

Let’s add a few unnecessary visual enhancements to this app. First, let’s use a Perlin noise generator to determine the color of each cell. Import the CZGPerlinGenerator class into the project. You’ll need to set up the generator object in viewDidLoad, like this (also, import the class in the header file and create an instance variable called perlin):

    perlin = [[CZGPerlinGenerator alloc] init];
    perlin.octaves = 1;
    perlin.zoom = 50;
    perlin.persistence = 0.5;

Make sure that block of code is executed before our maskImage = [self drawGameOfLife]; line. Now in our drawGameOfLife method change this line:

CGContextSetRGBFillColor(cgcontext, 1, 1, 1, 1);

To this:

float r = [perlin perlinNoiseX:x * 5 y:y * 5 z:100 t:0] + .5;
float g = [perlin perlinNoiseX:x * 5 y:y * 5 z:0 t:100] + .5;
float b = [perlin perlinNoiseX:x * 5 y:y * 5 z:0 t:0] + .5;
            
CGContextSetRGBFillColor(cgcontext, r, g, b, 1);

Perlin noise is a pseudo-random noise function that produces natural-looking textures. If you use the Clouds filter in Photoshop, you get a two-dimensional Perlin noise texture.
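
The + .5 in each color channel above is just range mapping: the generator’s output is roughly in [-0.5, 0.5], and color components need to land in [0, 1]. A tiny C sketch of that mapping, with a clamp for safety (the output range is inferred from the code above, not from CZGPerlinGenerator’s documentation):

```c
/* Map a noise sample in roughly [-0.5, 0.5] to a color component in [0, 1]. */
static float noiseToColor(float n) {
    float c = n + 0.5f;
    if (c < 0.0f) c = 0.0f;   /* clamp, in case the noise overshoots */
    if (c > 1.0f) c = 1.0f;
    return c;
}
```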

Now the cells look as though you’re viewing the video through different colors of cellophane.

Performance

Finally, I want to talk a little bit about performance. I’m testing this code on an iPhone 4S and an iPad 2, where it runs fairly smoothly, but if you’re on a 3GS or something, I’m guessing it’s pretty crappy.

There are a couple of things you can do, that I’ve tried, to increase performance. You can change your CGContext from RGB to grayscale (you won’t be able to use the Perlin noise color code in that case); this helps quite a bit. You can also reduce the number of cells in the GOL.

I haven’t tested this, but reducing the resolution of the images should help as well. Change the incoming video from 720p to 640×480, and change the cgcontext and Core Graphics code so that it draws a 640×480 image. Then move the scaling filter so that it runs last. According to the WWDC video on Core Image, performance should scale with the resolution of the images, and this would reduce the number of pixels for most of the process by a factor of ~2.5.

If there are any other performance tips on this code I’d like to hear them, I’m new to all these APIs and I may be doing something really inefficiently.

I tried to create a sliding tile puzzle using the video input, and it choked when I went above 16 cells. I was using Core Image filters to crop, then translate, then composite each cell into a tapestry. I’m thinking that if I had the skill to implement this in OpenGL it would run just fine, but I’m a novice with that API.

The other thing I found was that if I tried to do all the filters at once, for all 16 or 25 cells, it just crashed and wouldn’t work at all; I got an insufficient-resources error (something like that). To get around that, I broke the 25 CI filter steps into chunks of 6 and then called the CIContext drawImage:atPoint:fromRect: 5 times (for the 25 cells). This worked, but it was pretty slow on the iPad 2; it ran OK on the 4S. I’m guessing it would be garbage on anything below an iPad 2.

Here is the Completed Code for this post.
