When we explore a new environment, we naturally walk around in it. It will undoubtedly be some time before we can slip on a VR headset and walk around in a video-based landscape; right now that is only possible in a CG-based VR world. Still, for full immersion in 360 VR video, movement is an essential tool, and that creates a problem: either the 360 videographer is seen as a virtual guide, or we have to try to remove them.
Until now, doing that affordably has been a challenge for 360 VR filmmakers. There are many ways to remove a standing object, like a monopod, but moving shots become complex and require rotoscoping, compositing and motion tracking.
However, Mocha, famous for its amazingly easy planar tracking, is working on a new version that could change that drastically. The software, Mocha VR, is still in beta, but I have been fortunate to be involved in testing it, and the results are impressive.
Mocha VR allows you to work in native equirectangular format; there is no need to transform the footage into anything else. Just open the lens module and select Equirectangular from the drop-down list of calibration effects. Selecting that mode reveals the 360 button option. With that engaged, you can use the hand tool to look around your 360 video clip.
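For readers unfamiliar with the format: an equirectangular frame is a flat image whose horizontal axis spans the full 360° of longitude and whose vertical axis spans 180° of latitude. The mapping itself is standard and can be sketched as below; the function name and axis conventions are my own illustration, not anything from Mocha VR.

```python
import math

def equirect_to_dir(u, v, width, height):
    """Map an equirectangular pixel (u, v) to a unit view direction.

    u in [0, width] spans longitude -pi..pi; v in [0, height] spans
    latitude pi/2 (straight up) to -pi/2 (straight down, the nadir,
    where the camera rig itself appears in the frame).
    """
    lon = (u / width) * 2.0 * math.pi - math.pi
    lat = math.pi / 2.0 - (v / height) * math.pi
    x = math.cos(lat) * math.sin(lon)
    y = math.sin(lat)
    z = math.cos(lat) * math.cos(lon)
    return (x, y, z)
```

The bottom row of the frame all maps to the straight-down direction, which is why the rig occupies a stretched band along the bottom of an equirectangular clip.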
Next, you will want to position your view over the top of your 360 camera rig. Try to get a nice flat view of your camera rig movement; a flat plane is crucial for Mocha VR to do its job. Use the Zoom tool to move your view out to see your 360 camera rig and the surrounding area.
After that, use the spline tool to outline the camera rig, then make a second spline to outline the area in front of and behind the rig. Mocha VR does its magic by reading the information in front of and behind the camera rig to replace it in the clip. These layers should be renamed to avoid confusion; a good choice is "rig" and "bg" (for background). The rig layer should be on top and the bg layer below it. This tells Mocha VR which object is to be replaced and which layer supplies the replacement pixels, which is essential for the process to work.
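The layer relationship above can be pictured as a simple top-to-bottom stack: the top layer marks what gets removed, the layer beneath supplies the clean fill. This is only an illustrative sketch of that convention; `Layer` and `removal_roles` are hypothetical names, not part of any Mocha VR API.

```python
from dataclasses import dataclass, field

@dataclass
class Layer:
    name: str                    # e.g. "rig" or "bg"
    spline: list = field(default_factory=list)  # outline points (placeholder)

def removal_roles(layers):
    """Given layers ordered top to bottom, return (to_remove, fill_source).

    Mirrors the stacking rule described above: the topmost layer is
    the object to remove; the layer directly below it provides the
    background pixels used to paint it out.
    """
    return layers[0], layers[1]
```

Renaming the layers "rig" and "bg" is purely for your own sanity when reviewing the stack; it is the ordering, not the names, that drives the removal.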
One point to keep in mind is that you do need to plan this shot to get the best results. If you are on a crowded sidewalk, the process cannot "see" enough clear space to create a good replacement.
Once you track the bg layer and adjust the track if needed, move to the Remove tab and pay attention to the number of frames before and after; these may have to be adjusted to get a clean removal. The Illumination Model decides how the front and back merge. The fastest mode is None, which I used in my test. Linear and Interpolate are the other two choices; these will give you fantastic results but take much more time to complete. The Blend setting is also important, as you are merging the front and back images together, and it needs to be adjusted accordingly.
In my case, I chose two clips to test the process. The first was a walk down a narrow street. I shot this twice, both times with my new helmet-cam setup. The first shot was with my Kodak SP360 4K camera rig.
Here are the results of that test.
The next test was done on the same street using the GoPro Omni on my helmet cam.
As you can see, the results are quite good. This is beta software, and things are always being improved as feedback comes in from the beta tester group. But as you can see, the folks at Imagineer Systems are developing some truly amazing tools for 360 VR filmmakers. Big thanks to Martin Brennan, Product Manager; Mary Poplin, Product Specialist and Mocha Evangelist; and Ross Shain, Chief Marketing and Creative Officer of Imagineer Systems, who invited me into this beta.