CVOpenGLESTextureCache is Awesome!

In iOS 5, Apple included a new API that seems to be living in obscurity: CVOpenGLESTextureCache. This API speeds up access to data in texture memory. Among other things, this means the ability to record gameplay videos, to use camera data in games and other apps much more efficiently, and ultimately to enable some really cool GPGPU applications on the mobile platform.

It’s an iOS 5-only technology, which is perhaps why it hasn’t gained more attention and use than it has. It was covered in WWDC 2011 session 419, ‘Capturing from the Camera using AVFoundation on iOS 5’.

I’ve put together a little video of a set of classes I’m working on that take advantage of this technology. It’s a modification of the Cocos2D Box2D template: instead of creating boxes with the numbers 1-4 on them, it creates boxes with the camera’s video image mapped onto them. Another class records the video. The whole thing runs at 60fps on my iPhone 4S. Here’s that video:

Here’s a link to the project that contains the classes. If there’s enough interest I’ll build them out a little and put them up on GitHub. As it stands, there are some issues with the code.

It requires a change to the CCTexture2D class in Cocos2D: the .name property needs to be readwrite instead of readonly. Each time a new frame comes in from the camera, the .name property is changed to point to that new texture data. I’m not sure what this might do to the memory management of the CCTexture2D class.
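A rough sketch of that per-frame swap, in Objective-C (identifiers like `textureCache_` and `cameraTexture_` are my own placeholders, not names from the project, and error handling is omitted):

```objc
// Per-frame camera capture callback. textureCache_ was created once at
// setup with CVOpenGLESTextureCacheCreate() against the EAGL context.
CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
size_t width  = CVPixelBufferGetWidth(pixelBuffer);
size_t height = CVPixelBufferGetHeight(pixelBuffer);

// Wrap the camera's pixel buffer in a GL texture -- no copy, no upload.
CVOpenGLESTextureRef texture = NULL;
CVOpenGLESTextureCacheCreateTextureFromImage(
    kCFAllocatorDefault, textureCache_, pixelBuffer, NULL,
    GL_TEXTURE_2D, GL_RGBA, (GLsizei)width, (GLsizei)height,
    GL_BGRA, GL_UNSIGNED_BYTE, 0, &texture);

// Point the (now readwrite) CCTexture2D at the new GL texture name.
cameraTexture_.name = CVOpenGLESTextureGetName(texture);

// After drawing, release the wrapper and flush so the cache can recycle
// the underlying buffer for the next frame.
CFRelease(texture);
CVOpenGLESTextureCacheFlush(textureCache_, 0);
```

Because only the texture name is swapped, CCTexture2D never owns the GL texture it points at, which is exactly why the memory-management question above is open.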

Also, there seems to be a bug that causes this to crash; please send any feedback you have if you try to use it.

Finally, there are some orientation issues with recorded video.

Anything you want recorded needs to be drawn into the JGREcordingRenderTexture object. I think there might be a way to avoid this (Kamcord appears to be doing it, so it’s possible), but for now that’s what I’m doing. In my example I only record the Box2D bodies, not the labels and menu, but you could include them if you drew them into the render texture.

Once I’ve worked it out a little more I’ll post a more in depth tutorial on the usage of the CVOpenGLESTextureCache.

Let me know what you think.

In the past, you could render things in OpenGL, but getting the data back out required glReadPixels, which is notoriously slow. This API fixes that problem.
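For the recording direction, the same cache lets you render straight into a CVPixelBuffer instead of reading pixels back. A minimal sketch, assuming a 640×480 target and an existing `textureCache_` (names are placeholders, error handling omitted):

```objc
// Create an IOSurface-backed pixel buffer so the texture cache can map it.
CFDictionaryRef empty = CFDictionaryCreate(kCFAllocatorDefault, NULL, NULL, 0,
    &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
CFMutableDictionaryRef attrs = CFDictionaryCreateMutable(kCFAllocatorDefault, 1,
    &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
CFDictionarySetValue(attrs, kCVPixelBufferIOSurfacePropertiesKey, empty);

CVPixelBufferRef renderTarget = NULL;
CVPixelBufferCreate(kCFAllocatorDefault, 640, 480,
                    kCVPixelFormatType_32BGRA, attrs, &renderTarget);

// Wrap the buffer in a GL texture and attach it to an FBO.
CVOpenGLESTextureRef renderTexture = NULL;
CVOpenGLESTextureCacheCreateTextureFromImage(
    kCFAllocatorDefault, textureCache_, renderTarget, NULL,
    GL_TEXTURE_2D, GL_RGBA, 640, 480,
    GL_BGRA, GL_UNSIGNED_BYTE, 0, &renderTexture);

glBindTexture(CVOpenGLESTextureGetTarget(renderTexture),
              CVOpenGLESTextureGetName(renderTexture));
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D,
                       CVOpenGLESTextureGetName(renderTexture), 0);

// Anything drawn into this FBO now lands directly in renderTarget, which
// can be handed to an AVAssetWriterInputPixelBufferAdaptor -- no
// glReadPixels anywhere in the path.
```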

Besides getting camera data in and recording video data out from an OpenGL texture, there are lots of potential data-intensive applications that could benefit from using the GPU for processing. There’s a pixel-accurate collision detection system on the Cocos2D forums, for example, that would benefit from exploiting this API.

3 thoughts on “CVOpenGLESTextureCache is Awesome!”

  1. Stephane Feb 20, 2013 8:28 am

    Very interesting project, thanks a lot for sharing it.
    I didn’t experience any crash in my tests yet.

    I had some problems reading the pixel buffer. I found a solution using a CVPixelBufferPool
    and kCVPixelBufferIOSurfacePropertiesKey (which is supposed to be used in order to correctly read that buffer). This allowed me to use two buffers: one for displaying the camera on screen, the other for off-screen processing. Both of them can be altered with OpenGL shaders and FBOs, which can be pretty fun. A 640×480 video capture runs at 60fps on an iPad 2, with some periodic fps drops (don’t know why yet); using 30fps seems much better.

  2. Andrea Mar 7, 2013 5:04 pm

    CVOpenGLESTextureCache is indeed awesome! I put together a demo app with GPU shaders and Camera input and this is the result (camera configured to run at 30 fps on all iOS devices):

  3. infrid Apr 22, 2013 8:15 pm

    I’m incredibly interested in this. I’ve been playing around for a few days trying to work out how best to do this with FBOs, etc. I’m just starting out on this. I would love to see what your perspective is on all of this now, looking back. An updated blog post? Or the GitHub code?
