In iOS 5, Apple included a new API that seems to be living in obscurity: CVOpenGLESTextureCache. This API speeds up access to data in texture memory. Among other things, it makes it possible to record gameplay videos, to use camera data in games and other apps far more efficiently, and ultimately, it enables some really cool GPGPU applications on the mobile platform.
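To give a sense of how the API is used, here's a minimal sketch of the camera-in direction: a texture cache is created once against your EAGLContext, and then each camera frame's CVPixelBufferRef is wrapped directly as an OpenGL ES texture with no copy. Error checking is trimmed, and the variable names (eaglContext, sampleBuffer) are placeholders for whatever your capture pipeline provides:

```objc
// One-time setup: a texture cache tied to your GL context.
CVOpenGLESTextureCacheRef textureCache;
CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL,
                             (__bridge void *)eaglContext, NULL, &textureCache);

// In the AVCaptureVideoDataOutput delegate callback, per frame:
CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVOpenGLESTextureRef texture;
CVOpenGLESTextureCacheCreateTextureFromImage(
    kCFAllocatorDefault, textureCache, pixelBuffer, NULL,
    GL_TEXTURE_2D, GL_RGBA,
    CVPixelBufferGetWidth(pixelBuffer), CVPixelBufferGetHeight(pixelBuffer),
    GL_BGRA, GL_UNSIGNED_BYTE, 0, &texture);

// The result binds like any other texture.
glBindTexture(CVOpenGLESTextureGetTarget(texture),
              CVOpenGLESTextureGetName(texture));

// When done with the frame: CFRelease(texture) and
// CVOpenGLESTextureCacheFlush(textureCache, 0) so the cache can recycle.
```

Note that the texture ref has to be released (and the cache flushed) once you're finished with a frame, or the cache will keep handing you new textures instead of recycling old ones.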
It’s an iOS 5-only technology, which is perhaps why it hasn’t gained more attention and use than it has. It was covered in WWDC 2011 session 419, ‘Capturing from the Camera using AVFoundation on iOS 5’.
I’ve put together a little video of a set of classes I’m working on that take advantage of this technology. It’s a modification of the Cocos2D Box2D template: instead of creating boxes with the numbers 1–4 on them, it creates boxes with the live camera image mapped onto them. Another class records the video. The whole thing runs at 60fps on my iPhone 4S. Here’s that video:
Here’s a link to the project that contains the classes. If there’s enough interest I’ll build them out a little and put them up on GitHub. As it stands, there are some issues with the code.
It requires a change to Cocos2D’s CCTexture2D class: the .name property needs to be readwrite instead of readonly. Each time a new frame comes in from the camera, the .name property is changed to point to the new texture data. I’m not sure what this might do to the memory management of the CCTexture2D class.
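The per-frame swap looks roughly like the sketch below. This assumes the readwrite change to .name has been made; cameraTexture_, lastTexture_, and the textureFromPixelBuffer: helper are hypothetical names standing in for whatever your own capture class uses:

```objc
// AVCaptureVideoDataOutput delegate: point the shared CCTexture2D at the
// GL name of the cache-backed texture for this frame.
- (void)captureOutput:(AVCaptureOutput *)output
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

    // Hypothetical helper wrapping CVOpenGLESTextureCacheCreateTextureFromImage.
    CVOpenGLESTextureRef texture = [self textureFromPixelBuffer:pixelBuffer];

    // Every sprite using cameraTexture_ now draws the new frame.
    cameraTexture_.name = CVOpenGLESTextureGetName(texture);

    // Release the previous frame's texture and flush so the cache recycles it.
    if (lastTexture_) CFRelease(lastTexture_);
    lastTexture_ = texture;
    CVOpenGLESTextureCacheFlush(textureCache_, 0);
}
```

The memory-management worry from above is visible here: CCTexture2D still thinks it owns the GL name it was created with, while the cache actually owns the ones being swapped in, so deallocation order needs care.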
Also, there seems to be a bug that occasionally causes a crash; please send any feedback you have if you try to use it.
Finally, there are some orientation issues with recorded video.
Anything you want recorded needs to be drawn into the JGREcordingRenderTexture object. I think there might be a way to avoid this; Kamcord appears to be doing it, so it’s possible. For now, though, that’s what I’m doing. In my example I record only the Box2D bodies, not the labels and menu, but you could include those too if you drew them into the render texture.
Once I’ve worked it out a little more I’ll post a more in depth tutorial on the usage of the CVOpenGLESTextureCache.
Let me know what you think.
In the past, you could render things in OpenGL, but getting the data back out required glReadPixels, which is notoriously slow. This API fixes that problem.
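The recording direction works the same trick in reverse: instead of reading pixels back, you render into a texture whose backing store is a CVPixelBuffer the video writer can consume directly. A sketch, assuming an AVAssetWriterInputPixelBufferAdaptor named adaptor and the texture cache from earlier (setup and teardown omitted):

```objc
// Grab a pixel buffer from the writer's pool and wrap it as a texture.
CVPixelBufferRef renderTarget;
CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault,
                                   adaptor.pixelBufferPool, &renderTarget);

CVOpenGLESTextureRef renderTexture;
CVOpenGLESTextureCacheCreateTextureFromImage(
    kCFAllocatorDefault, textureCache, renderTarget, NULL,
    GL_TEXTURE_2D, GL_RGBA, width, height,
    GL_BGRA, GL_UNSIGNED_BYTE, 0, &renderTexture);

// Attach it as the FBO's color buffer; everything drawn now lands in
// renderTarget's memory with no glReadPixels copy.
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D,
                       CVOpenGLESTextureGetName(renderTexture), 0);

// ... draw the scene, then hand the frame to the writer:
[adaptor appendPixelBuffer:renderTarget withPresentationTime:time];
```

This is essentially what the recording render texture class does under the hood: the scene is drawn once into a buffer that AVFoundation and OpenGL both understand.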
Besides getting camera data in and recorded video data out of OpenGL textures, there are lots of potential data-intensive applications that could benefit from using the GPU for processing. There’s a pixel-accurate collision detection system on the Cocos2D forums here, for example, that would benefit from exploiting this API.