Are you wondering how VR/360 video works? Check out our article below to see how it all comes together!
Years ago, viewing a 360-degree video in a VR headset wasn’t all that great. The videos tended to be heavily compressed, suffered from positional tracking or frame-rate issues that caused dizziness, or were simply underwhelming.
Today, the field of 360-degree video is vastly different, and the industry is consistently making strides to address these technical issues. Best practices are being developed, and today’s hardware and software are far superior to their previous iterations. There’s a growing market for VR content, and many tools are being released for content creators, making VR production more accessible - and more affordable - than ever before.
But how does VR/360 video actually work, and what methods are most effective for capturing this data?
How Traditional VR/360 Video Works
There are a few different methods used to create video for virtual reality. On the real-world image capture side, a filmmaker uses a 360-degree camera, which has lenses surrounding its body to shoot in all directions. These 360 cameras range in price from under $100 to tens of thousands of dollars for professional-grade recording. The cameras typically record video from each lens, which is later stitched together by the user in proprietary software included with the camera, or stitched automatically within the camera itself.
Capturing VR/360 Video Without a Camera
You don’t necessarily have to own a 360 camera to create VR videos. You can use a game engine like Unreal Engine 4 or Unity, or a 3D program like Cinema 4D, to capture a digital scene you’ve created and export it as 360 video. This route relies on CGI, of course, but the advantage of this method is that you can produce a ray-traced, incredibly high-quality pre-rendered video that isn’t limited by a camera sensor’s resolution the way a 360 camera would be.
Alternatively, you can use video software (like After Effects or Premiere Pro) to build your 360 video from pre-existing digital assets.
Equirectangular vs. Cubemap Projections
Once the spherical video data is captured, whether by a 360 camera or within a 3D program, this video has to be “unrolled” so that it can be viewed and edited on a computer monitor. The two most common ways to unroll the video are through equirectangular or cubemap projections. Which one you use typically depends on your workflow, and the programs you’ll be using to edit the video.
Equirectangular projection takes a spherical video, and rolls it out flat into a rectangle. Premiere Pro’s production workflow is optimized for the equirectangular format, which is something to keep in mind if you are planning on editing VR/360 video in Premiere Pro.
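To make that “unrolling” concrete, here is a minimal Python sketch (an illustration of the math, not any program’s actual implementation) of how a viewing direction lands on an equirectangular frame: longitude becomes the horizontal axis and latitude the vertical axis.

```python
import math

def equirect_uv(x, y, z):
    """Map a unit direction vector to normalized (u, v) coordinates
    in an equirectangular frame. u spans longitude (the full 360
    degrees of the frame's width), v spans latitude (its height)."""
    lon = math.atan2(x, z)                  # -pi..pi around the vertical axis
    lat = math.asin(y)                      # -pi/2..pi/2 up/down
    u = (lon / math.pi + 1.0) / 2.0         # 0..1 across the frame width
    v = (lat / (math.pi / 2) + 1.0) / 2.0   # 0..1 across the frame height
    return u, v

# Looking straight ahead lands in the center of the frame:
print(equirect_uv(0.0, 0.0, 1.0))  # (0.5, 0.5)
```

Note how the poles of the sphere get stretched across the entire top and bottom rows of the rectangle; that distortion is the usual trade-off of the equirectangular format.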
A cubemap projection divides a video up into six separate zones, each with a designated location in the video.
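The core idea behind that division can be sketched in a few lines of Python (the function name and face labels here are illustrative, not from any particular tool): each viewing direction belongs to whichever cube face its largest-magnitude component points toward.

```python
def cubemap_face(x, y, z):
    """Pick which of the six cube faces a direction vector lands on:
    the face is determined by whichever component of the vector has
    the largest absolute value, and by that component's sign."""
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:
        return "+x" if x > 0 else "-x"
    if ay >= ax and ay >= az:
        return "+y" if y > 0 else "-y"
    return "+z" if z > 0 else "-z"

# A direction mostly pointing downward lands on the bottom face:
print(cubemap_face(0.1, -0.9, 0.2))  # -y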
For a quick visual explanation of these two techniques, check out this short video from Coursera. In addition to equirectangular and cubemap projections, Google’s blog discusses a newer technique called “equi-angular cubemapping,” which allows for better quality video. This could become the predominant projection format for editing 360-degree video in the future.
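As Google’s post describes it, equi-angular cubemapping replaces the linear sampling of a standard cube face with sampling that is uniform in viewing angle. The one-line remap below is a sketch of that idea under the usual convention that a face coordinate runs from -1 to 1 and covers a 90-degree field of view; it is an illustration, not Google’s actual implementation.

```python
import math

def eac_coord(p):
    """Equi-Angular Cubemap remap (sketch): convert a linear cube-face
    coordinate p in [-1, 1] to one that is uniform in viewing angle.
    A standard cubemap packs more pixels near face edges than in the
    center; this spreads samples evenly across each face's 90-degree
    field of view, so pixel density is more consistent on screen."""
    return math.atan(p) / (math.pi / 4)
```

The endpoints map to themselves (eac_coord(1.0) is 1.0), so the face still spans the same field of view; only the distribution of samples inside it changes.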
The evolution of 360 video and virtual reality is going to be an interesting one, and the great thing about getting involved at this stage is that you have a lot of freedom to experiment with what’s possible in VR filmmaking. You also have the opportunity to become an innovator in how you tell your story in virtual reality. And if VR adoption continues to grow, you have time now to hone your VR video editing skill set and gain a competitive advantage in your career.
First time here? ActionVFX creates action stock footage for VFX and filmmaking. (We also have some great free stuff!)
Remember to connect with us on our social networks to stay updated on our latest news, giveaways, announcements and more!