Mixing two Videos with GPUImage

There was some discussion on the GPUImage GitHub page a little while back about how to mix two video input sources. I have been meaning to investigate this capability for a while. Well, last night I had a burst of productive energy, so I was up late playing with it. This morning I can report my discoveries.

First off, this does work, which is kind of exciting. There are some limitations, but it is functional. I’ve created a sample project that loads two videos, plays them together, and records the result. It does this in real time. You can download that project here.

Here’s a video of the result:

In the video I’m blending the Frankenweenie trailer with a video of the solar system.

Here’s the setup from the sample project; this is the -viewDidLoad method:

-(void)viewDidLoad {
    [super viewDidLoad];

    // First movie source
    NSURL *url = [[NSBundle mainBundle] URLForResource:@"FRNK" withExtension:@"mp4"];
    GPUImageMovie *gpuIM = [[GPUImageMovie alloc] initWithURL:url];
    gpuIM.playAtActualSpeed = YES;

    // Second movie source
    NSURL *url2 = [[NSBundle mainBundle] URLForResource:@"NAN" withExtension:@"mov"];
    GPUImageMovie *movie2 = [[GPUImageMovie alloc] initWithURL:url2];
    movie2.playAtActualSpeed = YES;

    // Blend filter that takes both movies as inputs
    filter0 = [[GPUImageColorDodgeBlendFilter alloc] init];
    [gpuIM addTarget:filter0];
    [movie2 addTarget:filter0];

    // Display the blended result in the GPUImageView
    [filter0 addTarget:_view0];

    [gpuIM startProcessing];
    [movie2 startProcessing];

    isRecording = NO;
}

This code is pretty straightforward. I’m creating two GPUImageMovie objects based on two URLs for the videos. These videos are both 720p. I am setting the .playAtActualSpeed attribute to YES; I’ll talk about this in a second.

Then I create a GPUImageColorDodgeBlendFilter. Then I add the filter as a target to both of the GPUImageMovie objects. The order does matter here, depending on which blend filter I use.
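
As far as I can tell, the first source to call addTarget: on a two-input filter feeds its first input and the second source feeds its second, so with an asymmetric blend like color dodge, swapping these two lines swaps the roles of the videos:

// Which video is the "base" and which is the "blend" input depends on
// the order in which the sources attach to the filter.
[gpuIM addTarget:filter0];   // first input
[movie2 addTarget:filter0];  // second input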

Next I set the view as a target of the filter. The view for this view controller is set up as a GPUImageView object.

Finally, calling startProcessing on each video begins the reading of the videos.

Just this code would be enough to play the videos to the screen, and in my testing with an iPhone 4S and an iPad 3, this much works just fine with two 720p videos.

The last line sets up the isRecording variable which will help determine the state of the recorder.
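
For reference, here’s a sketch of the instance variables this code assumes; the names match the snippets in this post, but the exact declarations in the sample project may differ:

#import <UIKit/UIKit.h>
#import "GPUImage.h"

@interface ViewController : UIViewController {
    GPUImageOutput<GPUImageInput> *filter0; // the blend filter
    GPUImageMovieWriter *mw;                // recreated for each recording
    BOOL isRecording;                       // simple recorder state flag
    // Depending on ARC and the GPUImage version, you may also want strong
    // references to the two GPUImageMovie objects here so they aren't
    // deallocated while playing.
}

// The controller's view, set to the GPUImageView class in Interface
// Builder and referenced in code through the backing ivar _view0.
@property (strong, nonatomic) IBOutlet GPUImageView *view0;

@end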

Here’s the next block of code:

-(void)recordVideo {
    // Record to a file in the temp directory
    NSString *path = [NSTemporaryDirectory() stringByAppendingPathComponent:@"file.mov"];
    NSURL *url = [NSURL fileURLWithPath:path];

    // AVAssetWriter fails if the file already exists, so remove any old one
    NSFileManager *fm = [NSFileManager defaultManager];
    [fm removeItemAtPath:path error:nil];

    mw = [[GPUImageMovieWriter alloc] initWithMovieURL:url size:CGSizeMake(640, 480)];
    [filter0 addTarget:mw];
    [mw startRecording];
}

This block is what sets up and starts the GPUImageMovieWriter object. This object has to be created anew for each movie file recorded.

The first step is to set up an NSURL object that points to a file called ‘file.mov’ in the temp directory.

The second step is to delete any previous file that already exists (this fails silently if no file by that name exists). The reason I do this is that AVAssetWriter (which is what GPUImageMovieWriter uses) will die if the file already exists.
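
A slightly more defensive version of that cleanup (a sketch, not from the sample project) only attempts the removal when the file actually exists, and logs a failure instead of swallowing it:

// Sketch: guard the removal and surface any failure, since a leftover
// file means AVAssetWriter will refuse to start the next recording.
NSFileManager *fm = [NSFileManager defaultManager];
if ([fm fileExistsAtPath:path]) {
    NSError *removeError = nil;
    if (![fm removeItemAtPath:path error:&removeError]) {
        NSLog(@"Could not remove old recording: %@", removeError);
    }
}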

The next step is to create the GPUImageMovieWriter, which is done with the file path and a size. I’m using 640×480 in this example.

Next, add the movie writer as a target of the filter. This means that the filter has two targets: the view and the movie writer.

Finally, I call startRecording on the movie writer. It’s not too difficult to record video with GPUImage!

Now the stopRecording method:

-(void)stopRecording {
    [mw finishRecording];
    // Detach the writer so stale targets don't accumulate on the filter
    // when a new writer is created for the next recording.
    [filter0 removeTarget:mw];

    NSString *path = [NSTemporaryDirectory() stringByAppendingPathComponent:@"file.mov"];

    // Copy the finished movie from the temp directory into the photo album
    ALAssetsLibrary *al = [[ALAssetsLibrary alloc] init];
    [al writeVideoAtPathToSavedPhotosAlbum:[NSURL fileURLWithPath:path] completionBlock:^(NSURL *assetURL, NSError *error) {
        if (error) {
            NSLog(@"Error %@", error);
        } else {
            NSLog(@"Success");
            //NSFileManager *fm = [NSFileManager defaultManager];
            //[fm removeItemAtPath:path error:&error];
        }
    }];
}

The first thing is to call finishRecording on the movie writer. I get a warning here telling me that I am not supposed to call finishRecording on the main thread; this comes from AVFoundation’s AVAssetWriter. If you were doing this for real, you’d want to do some of this on a background thread.

There’s some threading built into the GPUImage project already, but I haven’t dug into that too much, so I’m not sure if there are efficiencies to be gained by moving this to a background thread, or if it would just quiet the warning.
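
As a sketch of that idea, plain GCD is enough to get the finish call off the main thread; whether it buys real efficiency inside GPUImage or just silences the warning, I can’t say:

// Sketch: finish the writer on a background queue to avoid the
// AVAssetWriter main-thread warning.
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    [mw finishRecording];
    dispatch_async(dispatch_get_main_queue(), ^{
        // Back on the main thread for any UI or state updates.
    });
});

Later versions of GPUImageMovieWriter also have a finishRecordingWithCompletionHandler: method, which is a cleaner fit if your copy of the library includes it.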

The next step is to create the same file URL that was used in the previous method. This will be used in the next call.

Finally, using the ALAssetsLibrary, copy the completed movie file from the temp directory to the Photo Album. (The commented-out lines would delete the temp copy afterwards, turning the copy into a true move.)

One last thing:

-(void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    if (isRecording) {
        [self stopRecording];
        isRecording = NO;
    } else {
        NSLog(@"Start recording");
        [self recordVideo];
        isRecording = YES;
    }
}

This just sets up the touch methods so that you start and stop the video recording with a tap. It also manages the isRecording variable so that the app doesn’t try to start a second recording before the previous one is finished, or stop a recording that was never started. This is the poor man’s way to handle this; there are more accurate ways to determine if a recording is in progress.
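
For example, one slightly more robust approach (a sketch, not what the sample does) is to derive the state from the writer itself instead of keeping a separate flag:

// Sketch: treat a live movie writer as "recording in progress".
// Assumes mw is set in -recordVideo and cleared after stopping.
- (BOOL)isCurrentlyRecording {
    return mw != nil;
}

-(void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    if ([self isCurrentlyRecording]) {
        [self stopRecording];
        mw = nil;
    } else {
        [self recordVideo];
    }
}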

Alright, that’s the project. I also included two other filters that I created in my testing. One is a GPUImageTwoVideoTest filter that takes two inputs and determines which to show (on a pixel-by-pixel basis) based on a brightness threshold value.

Another is a GPUImageHueBlendFilter that uses the luminance and saturation of the first video and the hue of the second. This one I will likely contribute back to the project, but it’s a little slow.
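
Both of those follow GPUImage’s usual pattern for custom two-input filters: subclass GPUImageTwoInputFilter and hand it a fragment shader. As a sketch (the shader body here is illustrative, not the exact code from the sample project), the threshold filter could look something like this:

#import "GPUImageTwoInputFilter.h"

// Sketch: show the first video's pixel where it is brighter than a
// threshold, and the second video's pixel everywhere else.
NSString *const kTwoVideoTestShader = SHADER_STRING
(
 varying highp vec2 textureCoordinate;
 varying highp vec2 textureCoordinate2;

 uniform sampler2D inputImageTexture;
 uniform sampler2D inputImageTexture2;
 uniform lowp float threshold;

 void main()
 {
     lowp vec4 base = texture2D(inputImageTexture, textureCoordinate);
     lowp vec4 overlay = texture2D(inputImageTexture2, textureCoordinate2);
     lowp float luminance = dot(base.rgb, vec3(0.2125, 0.7154, 0.0721));
     gl_FragColor = (luminance > threshold) ? base : overlay;
 }
);

@interface GPUImageTwoVideoTestFilter : GPUImageTwoInputFilter
@property (readwrite, nonatomic) CGFloat threshold;
@end

@implementation GPUImageTwoVideoTestFilter

- (id)init {
    if (!(self = [super initWithFragmentShaderFromString:kTwoVideoTestShader])) {
        return nil;
    }
    self.threshold = 0.5;
    return self;
}

- (void)setThreshold:(CGFloat)newValue {
    _threshold = newValue;
    [self setFloat:newValue forUniformName:@"threshold"];
}

@end

The hue blend version would follow the same skeleton with a different shader body.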

So, playing two videos at 720p on recent hardware works great. However, recording to a new video starts to tax the system. After five or ten seconds you may start to get errors in the log. I suspect that reading from two videos and writing to one all at the same time is a little too much.

I did experiment with different resolution sizes of the input videos. As you’d expect, smaller videos were easier to handle.

On my iPad 3rd gen I had fewer errors than on my iPhone 4S, so I’m guessing the new iPhone would have no problems with 720p videos. I wonder if it can handle 1080p? If anybody has a device and tries it out, let me know.

Also, timing. The .playAtActualSpeed attribute is a little bit of code that slows the video down to real time. If this is set to NO, the video will play as fast as the device can process the frames.

It won’t skip frames if the system gets bogged down, so it doesn’t handle speeding things up if that’s necessary. In all my tests it was the writer process that started to have problems, but I suspect that if you have two videos, one with a low resolution and one with a high resolution, the high resolution video could fall out of sync if it couldn’t keep up. There’s no timing coordination between the two videos as the project stands now.

Finally, there is probably an opportunity to sync the timing between the two videos and the writer, so that even if the whole process slows down and can’t keep up with real time, it still writes an output video with the correct synced timings. I don’t think this is in place now, but it shouldn’t be too difficult. If I can find some time, I may try to address these timing features.

One other related point: the GPUImageMovie source is built on AVAssetReader. This object can’t use a remote video as a source, so these classes are limited to local video files. However, I ran across this post on Stack Overflow last night. If it’s correct, there’s an opportunity to write a GPUImageMovie-style class that uses AVPlayerItemVideoOutput to use remote videos as a source. This might alleviate the problem (I’m guessing it’s a disk I/O issue) with the read-two-videos-write-one scenario. Even if it doesn’t, the opportunity to read remote video files and apply filters to them is a big deal.
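
As a rough sketch of that idea (an assumption about how such a class could start, not code from GPUImage), AVPlayerItemVideoOutput hands back CVPixelBuffers, which is the same currency GPUImage trades in; the URL below is hypothetical:

#import <AVFoundation/AVFoundation.h>

// Sketch: pull pixel buffers from a (possibly remote) AVPlayerItem.
// A real GPUImage source class would upload each buffer to an OpenGL ES
// texture and push it down the filter chain.
NSDictionary *attrs = @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
AVPlayerItemVideoOutput *output = [[AVPlayerItemVideoOutput alloc] initWithPixelBufferAttributes:attrs];

AVPlayerItem *item = [AVPlayerItem playerItemWithURL:[NSURL URLWithString:@"http://example.com/remote.mp4"]];
[item addOutput:output];
AVPlayer *player = [AVPlayer playerWithPlayerItem:item];
[player play];

// Later, typically from a CADisplayLink callback:
CMTime now = [item currentTime];
if ([output hasNewPixelBufferForItemTime:now]) {
    CVPixelBufferRef pixelBuffer = [output copyPixelBufferForItemTime:now itemTimeForDisplay:NULL];
    // ... upload to a texture, process, then release:
    if (pixelBuffer) CVBufferRelease(pixelBuffer);
}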

11 thoughts on “Mixing two Videos with GPUImage”

  1. David Spector Oct 9, 2012 12:07 am

    Have you run into issues if one video is significantly longer than the other?

    I am working on a chromakey (using the GPUImageChromaKeyBlendFilter) and find that if the chromakey source video is longer than the movie you are chromakeying on top of, the movie writer calls the completion block and the resulting movie is truncated to the length of the chromakey source movie…

    • admin Oct 10, 2012 9:30 am

      I haven’t tried to use different movie lengths myself. But, I’ll play with it and if I come up with a solution I’ll post it here. If you find one, let me know as well.

    • alfons Oct 25, 2012 8:25 am

      Call disableFirstFrameCheck (or disableSecondFrameCheck) on the GPUImageChromaKeyBlendFilter.

  2. Anderson Nov 7, 2012 2:43 pm

    Hello, great post.
    I downloaded your project, but the GPUImage.h file was not found.

    How do you run it?

    Thanks.

    • admin Nov 8, 2012 3:20 am

      You need to set the ‘Header Search Paths’ variable in Xcode to the location of the GPUImage source.

  3. Puneet Dec 27, 2012 12:18 pm

    Hi, the recorded video is just a black screen. I downloaded the code from the given link and ran it; the video doesn’t show on the Simulator, but it starts recording the video, and I got the movie file from the application directory too, but it’s just a black screen when it plays in QuickTime Player. Please help me.

  4. Mathias Franck Oct 1, 2013 7:37 am

    Hi,
    Thanks and congratulations for that precious example, which exactly meets my needs.
    Unfortunately, I’m trying to run your sample on my iPad Retina, and though I can compile and execute with no error, the result is quite surprising: after a long time displaying a white view, I get images displayed from one movie, or completely red images (???), or images mixed with red, at a very low framerate (one image every 5 or 10 seconds).
    The result is much the same whatever “blend” filter I use.
    When I set “runBenchmark” to YES for the movies, I can see that frames are processed.
    I also had to set “shouldRepeat” to YES for both movies, otherwise an exception was thrown from low-level GPUImage queues after finishing the first movie.

    Did you test this sample on the latest versions of GPUImage?
    Which version of GPUImage did you use the last time you got this sample working? (commit number?)
    Any idea ?
    Thanks for your help.
    Mathias.

  5. Zeeshan Oct 29, 2013 11:50 am

    I have downloaded the sample project given above. When I run the project, it just crashes on the following line.

    *** Assertion failure in -[GPUImageColorDodgeBlendFilter createFilterFBOofSize:], /Users/pl-12/Desktop/TwoVideoTest/Vendor/GPUImage/Source/GPUImageFilter.m:382
    2013-10-29 11:20:27.387 VideoTestingGPUImage[6862:3603] *** Terminating app due to uncaught exception ‘NSInternalInconsistencyException’, reason: ‘Incomplete filter FBO: 36054’

    I wasn’t able to figure out the reason. Please help me out with this problem. Thanks

  6. piyush Nov 24, 2013 5:44 am

    Hi, I am also getting the white screen after running the project. Any ideas why?

  7. Brian Jan 23, 2014 5:27 pm

    Hello,

    Thanks for this code.

    I think something is broken in GPUImage and this code no longer works. Is there any way you could confirm or refute this?

    That is, could you get it to run using the latest GPUImage lib? I will seriously buy an iOS 7 version of your book when it comes out if you can get it to work with the latest GPUImage.

    Also, I had a very hard time getting the code to link, as I kept getting duplicate symbol errors. Not sure if there is something obvious I am doing wrong.

    Thanks,

    -Brian

    • Mathias Franck Apr 1, 2014 8:21 am

      I can confirm that the current version (commit ) is broken; mixing GPUImageMovies together, or with other GPUImage sources, leads to many crashes and synchronization problems…
      It seems that the recent refactoring (GPUImageContext, GPUImageFrameBuffer, filter backing…) has not been extensively tested, which is not surprising, as GPUImage does not provide automated test coverage that might have prevented regressions.

      I keep my own version, with a very simplified but robust implementation of GPUImageMovie that allows mixing, masking, smooth looping, and blending of movies and camera. It is a bit too CPU-heavy (vs. GPU), but at least it does the job very well!
      Just mail me via my website.
