For an updated version of this document, see Support for VR workflows.
Immersive video has been around in different forms for decades, but it hasn’t been able to break through on a large scale until recently.
The most common VR video solution places the viewer inside a sphere and wraps a video stream around it. The experience is even more immersive with stereoscopic video, which gives each eye its own stream of wrapped video. This is the viewing experience provided by YouTube, Google Cardboard, and Facebook.
The key to this solution is capturing, formatting, and encoding in such a way that it can be easily wrapped around a sphere. The easiest way to accomplish this task is through equirectangular projection.
If you think about the surface of a sphere, a single point is defined as having a latitude and a longitude coordinate. Now think about a standard video frame. A video frame has a width and height, with points defined as X and Y coordinates.
Equirectangular projection simply unwraps the sphere, mapping the longitude to the X coordinate, and the latitude to the Y.
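The mapping described above can be sketched in a few lines of code. This is an illustrative sketch, not anything from Premiere Pro; the frame size and the coordinate conventions (longitude from -180 to +180 degrees, latitude from +90 at the top to -90 at the bottom) are assumptions.

```python
def sphere_to_pixel(lon_deg, lat_deg, width, height):
    """Map a point on the sphere to an (x, y) pixel coordinate.

    lon_deg: longitude in degrees, -180 (left edge) to +180 (right edge)
    lat_deg: latitude in degrees, +90 (top) to -90 (bottom)
    """
    # Longitude maps linearly to X, latitude maps linearly to Y.
    x = (lon_deg + 180.0) / 360.0 * width
    y = (90.0 - lat_deg) / 180.0 * height
    return x, y

# A point on the equator, straight ahead, lands in the center of the frame:
print(sphere_to_pixel(0, 0, 3840, 1920))    # (1920.0, 960.0)
# Longitude +180 (directly behind the camera) lands at the far right edge:
print(sphere_to_pixel(180, 0, 3840, 1920))  # (3840.0, 960.0)
```

Note that every latitude gets the full frame width, which is exactly what produces the distortion discussed next.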
Here is a familiar example, the globe laid flat:
Notice that the image widens as it progresses toward the poles. The reason is that the spatial distance between longitude positions decreases the closer you get to a pole: the smaller circumference of a line of latitude near a pole is stretched to the same width as the equator.
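The stretch described above can be quantified. A line of latitude has a circumference proportional to the cosine of the latitude, yet equirectangular projection gives every latitude the full frame width, so pixels are stretched horizontally by a factor of 1/cos(latitude). A quick sketch:

```python
import math

def horizontal_stretch(lat_deg):
    """Horizontal stretch factor of equirectangular pixels at a latitude."""
    return 1.0 / math.cos(math.radians(lat_deg))

print(round(horizontal_stretch(0), 2))    # 1.0, no stretch at the equator
print(round(horizontal_stretch(60), 2))   # 2.0, pixels twice as wide
print(round(horizontal_stretch(85), 2))   # 11.47, extreme stretch near a pole
```

This is why the landmasses near the poles of the flattened globe look so exaggerated.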
This distortion makes it difficult to comprehend what an equirectangular image is composed of. In addition, you’re seeing far more of the scene than you’re typically accustomed to, and positions aren’t intuitive. The object at the far right of the frame is actually behind the camera, and because the left and right edges of the frame meet on the sphere, the object at the far left is directly to its right.
The Golden Gate Bridge catches the eye because it’s toward the center of the frame, but when you’re shooting immersive video, you want to be aware of what’s in the entire scene. Looking at the frame, it’s not apparent that there’s a fence behind us, or that a brave soul is leaping from that fence behind your left shoulder!
To tell a good story, you need to be editorially aware of what is going on in the entire frame, and that is even more of a challenge with equirectangular projection. Adobe Premiere Pro CC 2015.3 provides the tools necessary to look around the sphere, simulating the view you might have through a headset or in a desktop viewer such as YouTube.
Even though working with equirectangular projection can be a challenge, it’s not that different from any other video format. Most of your editing skills, tools, and tricks carry over: trimming, splicing, and track manipulation all work the same. Even many of the most commonly used effects work well with equirectangular projection.
One new feature in the Premiere Pro 2015.3 release can be especially helpful with immersive video: proxy workflows. Some equirectangular video can be as large as 8,000 pixels wide by 8,000 pixels high—much larger than most other frame sizes. Video of such high resolution is difficult to decode in real time, even with hardware acceleration. Premiere Pro’s new proxy workflow allows you to ingest this material and begin editing with it, while lower-resolution 2K versions are generated in the background, ready to take their place on the timeline with the click of a button.
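A back-of-envelope comparison shows why proxies matter here. The 8,000 x 8,000 frame size comes from the text above; the 2048 x 2048 proxy size is an assumption for illustration, since exact proxy dimensions depend on your ingest preset.

```python
# Pixels that must be decoded per frame, at each resolution.
full_pixels  = 8000 * 8000   # 64,000,000 pixels per equirectangular frame
uhd_pixels   = 3840 * 2160   # ~8.3 million, a 4K UHD frame for comparison
proxy_pixels = 2048 * 2048   # ~4.2 million, an assumed 2K square proxy

print(round(full_pixels / uhd_pixels, 1))    # 7.7x the pixels of 4K UHD
print(round(full_pixels / proxy_pixels, 1))  # 15.3x the pixels of the proxy
```

Roughly fifteen times fewer pixels per frame is the difference between stuttering playback and responsive editing on many systems.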
By enabling the VR Video display in either the Program or Source monitor, Premiere Pro allows you to step inside the sphere and view the video from a user’s perspective. Using the VR Video display, you can simulate different viewing experiences with your equirectangular video, for example, the view through a VR headset such as the Oculus Rift, or in a desktop player such as YouTube or Facebook.
To activate the VR Video display, simply click the Settings icon (wrench) on the right side of the monitor, and then choose VR Video > Enable.
A new monitor button that quickly toggles the VR Video display is also available to add to your monitor’s playback controls. To add this button, open the Button Editor by clicking the + icon in the lower right of the monitor, and drag the Toggle VR Video Display button to the desired position under the monitor display.
You can also assign a key of your choice to toggle the VR Video display from either or both of your monitors using the Keyboard Shortcuts dialog box.
Once you enable the VR Video display, your view is from inside the sphere, and your monitor transforms into an experience like this:
The upper center is your interactive window inside the sphere. You can immediately change your perspective by simply clicking and dragging your view inside the sphere, allowing you to easily pan and tilt. At any time, you can quickly recenter your perspective by double-clicking anywhere within the view.
The sliders along the right and bottom edges allow you to control the tilt and pan, respectively.
The numeric fields alongside each slider provide you with exact feedback of your viewing position, but also allow you to enter your own values. Positive values pan to the right and tilt upward, while negative numbers move your perspective to the left and downward.
The radial knob at the bottom of the display gives you visual feedback about which direction you’re currently facing, and you can also click and drag within it to pan your perspective left and right. The knob is unique in that it lets you spin completely around and keep going past your starting point.
The VR Video display works with both monoscopic and stereoscopic video. You can also work with equirectangular footage from a smaller section of a sphere. You can configure it to simulate the view from a headset or desktop, even allowing you to view stereoscopic footage in 3D using a pair of inexpensive red/cyan anaglyph glasses.
To configure the VR Video display, click the Settings icon (wrench) on the right side of the monitor, and then choose VR Video > Settings.
Frame Layout lets you declare if your video is Monoscopic or Stereoscopic, the latter also allowing you to choose between an Over/Under or Side-by-Side layout. If you choose one of the stereoscopic layouts, you’re given a choice of a Stereoscopic View, either the Left or Right eye, along with a red/cyan anaglyph composite.
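The two stereoscopic layouts mentioned above are simple to picture in code: Over/Under stacks the left eye on top of the right eye in one frame, while Side-by-Side places them next to each other. This is a hypothetical illustration with NumPy arrays, not Premiere Pro’s internals; the frame dimensions are assumptions.

```python
import numpy as np

def split_over_under(frame):
    """Return (left_eye, right_eye) from an over/under frame (H x W x 3)."""
    h = frame.shape[0] // 2
    return frame[:h], frame[h:]          # top half = left eye, bottom = right

def split_side_by_side(frame):
    """Return (left_eye, right_eye) from a side-by-side frame (H x W x 3)."""
    w = frame.shape[1] // 2
    return frame[:, :w], frame[:, w:]    # left half = left eye, right = right

frame = np.zeros((1920, 3840, 3), dtype=np.uint8)  # e.g. a 3840 x 1920 frame
left, right = split_over_under(frame)
print(left.shape)                                  # (960, 3840, 3)
```

Declaring the wrong layout in the settings means each “eye” receives half of both images, which is why Frame Layout is the first thing to check when stereo footage looks wrong.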
When working with immersive video, think in degrees of view, in addition to pixel dimensions. The Captured View describes what part of the sphere the video frame represents. Typically, you leave controls set at the full-sphere defaults of 360 horizontal by 180 vertical degrees.
The Monitor View fields allow you to control what portion of the sphere you view—where you can simulate different viewing experiences—while in the VR Video Display mode. For example, using a value of 90 horizontal by 60 vertical degrees approximates an Oculus Rift headset; 160 by 90 degrees simulates viewing within YouTube. Also note that these settings determine the aspect ratio of the view window. For example, 160 by 90 degrees presents a 16:9 view window.
Three common effects used in equirectangular projection are Dissolves, Color (Lumetri), and Speed. These effects do not alter a pixel’s position enough to cause an issue. The basic rule is that if an effect changes a pixel’s vertical position, it’s unlikely to work. Consider how a pixel’s vertical position affects its horizontal distortion once perspective is applied. For example, Horizontal Squeeze or Wipe works, while a simple Picture-in-Picture effect looks distorted, because it scales both the vertical and horizontal dimensions of the image.
Also consider compositing. You can composite equirectangular imagery on top of other equirectangular imagery as long as you do not change its vertical position and scale. You can shoot an object against a green screen in equirectangular and chroma key it on top of a background image. You cannot composite traditional flat video on top of equirectangular without additional steps, and that brings us to the fourth most common effect: Titles and Graphics. To work with flat imagery, such as a title, on top of equirectangular, it has to be projected into the sphere. To composite flat video on top of equirectangular, use third-party plug-ins, such as Mettle’s SkyBox 360/VR tools—specifically the Mettle SkyBox Project 2D effect.
One important built-in effect to be aware of when working with equirectangular video is the Offset effect (found within Video Effects/Distort). Unlike a traditional camera, a VR rig has no single lens to point at the subject; the entire scene is captured at once. When you begin to cut together different shots, you may find that your camera changed orientation, so that the subject you care most about is in a different position from the previous shot. You don’t want users to have to twist their head to bring the subject back into view.
The Offset effect allows you to rotate the entire frame. Apply the Offset effect to the cut segment in the timeline and drag the first number of the Shift Center To parameter pair left or right, depending on how you want to pan the clip. Take care not to shift the second parameter, which would violate the vertical-position rule. One final note: this effect is not real time and requires rendering. Mettle’s Rotate Sphere effect provides a real-time alternative, and it also allows you to tilt vertically and even roll. Use caution when tilting and rolling the sphere, as it can degrade image quality, purely due to the way equirectangular projection works.
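Why a plain horizontal Offset works as a yaw is easy to see in code: shifting every pixel column sideways, with wrap-around at the 360-degree seam, is exactly a rotation of the sphere about its vertical axis. This sketch uses a toy 1 x 8 “frame” (standing in for eight longitudes) so the wrap is visible; it is an illustration of the principle, not Premiere Pro’s implementation.

```python
import numpy as np

def offset_horizontal(frame, shift_px):
    """Shift the frame horizontally with wrap-around (axis 1 = columns).

    Because the left and right edges of an equirectangular frame meet on
    the sphere, a wrapped column shift is equivalent to panning (yawing).
    """
    return np.roll(frame, shift_px, axis=1)

frame = np.arange(8).reshape(1, 8)   # columns 0..7 stand in for longitudes
print(offset_horizontal(frame, 2))   # [[6 7 0 1 2 3 4 5]]
```

A vertical shift has no such equivalence: the top and bottom edges are poles, not a seam, which is why shifting the second Offset parameter breaks the projection.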
Exporting equirectangular video is much like exporting any other type of video, with a few caveats:
The following is an example of H.264 stereoscopic export settings:
Adobe Premiere Pro has long been the standard tool for editors working with immersive VR video. With the 2015.3 release, Premiere Pro provides you with the tools to better view, export, and share VR video. In addition, the market is quickly growing for everything related to VR video: rigs, cameras, equirectangular video stitchers, effects, VR headsets, and even in-application headset monitoring. Immersive VR video is nascent and exploding. It’s different from stereoscopic 3D in that the technology isn’t being driven by a small handful of large electronics manufacturers; it’s being nurtured by hundreds of companies, large and small, many working together, and Adobe continues to contribute.