Understanding Stereoscopic 3D in After Effects

Learn about stereoscopic 3D and how to use it in After Effects.

Understanding stereopsis and stereoscopy

To understand what stereoscopic 3D is, it's necessary to understand perceived depth. There are many cues that help us perceive depth.

Objects in perspective, occlusion, and relative size are good indicators of depth. An object that is farther away is interpreted as such by our brains if it is much smaller than another object next to it. Our brain already knows how big those objects should be in relation to one another. If two objects are roughly the same size in our field of view, and one is occluded by or is occluding another object, our brain infers that one of those objects is in front of the other. (Occlusion means one object is laid on top of the other and obscures the other.) Paintings or games can appear 3D because they obey these rules. After Effects also obeys these rules when you create a 3D composition with a camera.

Another important depth cue is lens blur. If our eyes (or a camera lens) focus on a specific object, and another object next to it appears blurred, our brain knows that the other object is either in front of or behind the focused object. If there is no blur, our brain assumes the two are at a similar distance. You can see this phenomenon clearly as your eyes focus on different objects and the out-of-focus objects in the background blur on your retinas. Our brain interprets this as a depth cue without us realizing it; the effect is subtle because our brain filters it seamlessly into our perception, and it usually goes unnoticed. However, you can train your eyes and brain to consciously experience depth of field by relaxing your eye muscles and using the following (or similar) technique: look through a windshield with water droplets on it at night. When you focus beyond the windshield, the water droplets turn into little halos of color called bokeh. Similarly, when you focus on the droplets, the streetlights in the background turn into bokeh. You can see this effect with one eye closed, so it has nothing to do with stereopsis; instead, it comes from the focusing of our eyes' lenses, much as a camera lens focuses. Understanding depth of field is important when creating realistic images, and it works hand in hand with stereoscopic 3D in After Effects.

Finally, arguably the most powerful depth cue is stereopsis. Stereopsis is the ability of our brain to take two input images from different perspectives and work out how far away different objects are in relation to each other. The key point is that because our eyes are spaced apart on our heads, each eye sees a slightly different perspective of the world in front of us. Look at a nearby object and close one eye, then switch eyes back and forth several times; then try the same exercise on an object that is far away. You notice that the nearby object jumps from side to side in your field of view much more drastically than the faraway object. If the close object is in the same general direction as the faraway object, the close object switches sides of the faraway object. This is the basis of how stereopsis works: your brain takes the relative horizontal distances between objects in your field of view and compares them to understand where those objects sit in relation to each other in depth. It is theorized that pigeons bob their heads to gain depth perception, since their eyes are on opposite sides of their heads and they can't perceive depth otherwise. If you look through only one eye, you lose your stereopsis depth cue. However, if you bob your head from side to side with that eye still closed, you can get a sense of depth again. This separation between the eyes, providing two different perspectives, is the key to stereopsis.

It is important to keep all these depth cues in mind when constructing a stereoscopic 3D composition in After Effects. In the real world, it is possible to give the brain contradictory information and trick it. Optical illusions like the Ames Room, the Infinite Staircase, or tilt-shift photography are all examples of how depth cues can be manipulated and our brains tricked. (Tilt-shift photography is a method in which a post-process depth-of-field blur is added to an image to give a broad landscape the feeling of a miniature.) Since After Effects gives you control of all of these depth cues, it's important to manage their interaction and make sure they don't send the brain too many contradictory signals. In real life, one can manipulate one's surroundings in intelligent ways to create optical illusions, but more often than not, inconsistencies in the digital realm look unnatural and can even cause eyestrain or headaches. Stereopsis, being the most powerful depth cue, is no exception. Make sure that the stereoscopic result is not painful to look at on different screens; the viewing experience can change depending on how big the screen is and how far away the viewer sits.

Stereoscopy is a technique for triggering stereopsis artificially by presenting each eye with a different image. The left eye is shown a view of the scene from a real or virtual camera at the left perspective; similarly, the right eye is shown an image from the right perspective. Because each eye independently receives its own image, our brain fuses the two and we perceive depth. When viewing a stereoscopic 3D scene on a monitor, the elements in the scene tend to pop out of or sink into the screen: stereopsis tells us that an object is closer to or farther from us than the monitor actually is.

Many different devices and systems exist for delivering stereopsis to our brains, but the principle behind all of them is the same: get one eye to see one view, and the other eye to see a different perspective of the same scene. Anaglyph glasses are the oldest method, and by far the cheapest: differently colored lenses filter each eye's view. Red-blue glasses filter out blue on the left eye and red on the right eye. On the display side, the left image is colored red and the right is colored blue, and the images are overlapped, so each eye sees only its associated image. Because of the inherent color distortion, it is difficult to see all the colors accurately with an anaglyph, but the setup is very easy and works well for judging depth and convergence. Polarized glasses work on a similarly simple principle: two images are displayed on screen, one emitting only horizontally polarized light and one emitting only vertically polarized light, and each lens lets through light polarized in only one direction. Active shutter glasses block one eye at a time at a high rate (usually 60 fps), switching the left and right images every frame in sync with the monitor. Some displays, such as those from Alioscopy, use no glasses at all: lenticular lenses on the monitor itself refract the light in different directions so that each eye receives a different perspective simply by being in a different position relative to the screen. There are many more methods for stereoscopy.
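
The color-filtering principle behind red-blue anaglyphs can be sketched as a per-pixel operation. This is an illustrative sketch in plain JavaScript, not an After Effects feature or API; the `{r, g, b}` pixel format is a hypothetical stand-in:

```javascript
// Illustrative sketch of red-blue anaglyph compositing (not After Effects code).
// The combined frame takes its red channel from the left-eye image and its blue
// channel from the right-eye image. The red lens then passes only the left view
// to the left eye, and the blue lens passes only the right view to the right eye.
function anaglyphPixel(leftRGB, rightRGB) {
  return {
    r: leftRGB.r,  // left eye's view, visible through the red lens
    g: 0,          // green is discarded in a pure red-blue anaglyph
    b: rightRGB.b, // right eye's view, visible through the blue lens
  };
}

var combined = anaglyphPixel({ r: 200, g: 120, b: 40 }, { r: 10, g: 90, b: 220 });
// combined is { r: 200, g: 0, b: 220 }
```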

When dealing with stereopsis in the real world, the only things that can vary are the positions of objects in front of you, and the perspective from each eye can only change based on that. The only way to make an object look closer through stereopsis is to actually place it closer. You can’t easily change the distance between your eyes, your field of view, or the aperture of your eyes (at least not voluntarily) to modify the depth of field you perceive. However, in the digital realm, there are many more variables since all of these aforementioned things can be changed. Therefore, there is a high likelihood of introducing confusing depth cues that are contradictory and cause pain when viewing.

3D depth cues in After Effects

Perspective, occlusion, and relative-size depth cues are all handled automatically for you by After Effects, since it places objects in a virtual 3D space. Moving an object farther away along the z-axis of the camera makes that object smaller and places it behind other objects. Changing the camera field of view changes the perspective of the scene. A wide-angle lens gives you more perspective depth cue information than a telephoto lens, for example. Turning on Depth of Field in the camera layer and modifying the aperture adds lens blur according to the focus distance. Also, stereopsis can be added to any 3D composition in After Effects. In short, the concept is simple – create a left camera view and a right camera view of some 3D scene, and render them out. Then use a stereoscopic display to view the composition in stereo.
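
The perspective and relative-size cues follow a simple pinhole relationship: in the After Effects camera model, the Zoom value is (roughly speaking) the distance at which a layer renders at 100% scale, so apparent size falls off as zoom divided by distance. A minimal sketch of that relationship, in plain JavaScript for illustration only:

```javascript
// Illustrative pinhole-projection sketch (not After Effects code). Treating
// zoom as the distance at which a layer appears at 100% scale, a layer's
// apparent scale is zoom / distance.
function apparentScale(zoom, distanceFromCamera) {
  return zoom / distanceFromCamera;
}

var near = apparentScale(1000, 1000); // 1.0: layer at the zoom distance, 100% scale
var far  = apparentScale(1000, 2000); // 0.5: twice as far away, half the size
```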

Creating a stereoscopic scene in After Effects

  1. Start out by taking any composition that has some 3D layers positioned along the z-axis. 

  2. Right-click a layer and select Camera > Create Stereo 3D Rig.

    After Effects creates Left Eye and Right Eye compositions driven by left and right cameras. It also produces an output composition that combines the two views into a format recognized by some stereo-viewing method. If you applied the command to a camera, that camera controls your stereo cameras.

  3. At this point, you can:

    • Put on red-blue anaglyph glasses and see your composition in stereo. Objects pop out or sink into the screen according to their distance from the camera.
    • Go back to your starting composition and adjust your camera position, depth of field, layer placement, or anything else about the scene. 
  4. Play with your scene. It's very easy to see stereoscopic 3D in action when you animate a camera move, move objects closer to the camera, or adjust depth of field (camera aperture, focus distance, and zoom).

Controlling stereoscopy in After Effects

Once your scene is complete, you can begin tweaking the stereoscopic 3D controls. No further changes are required in your main composition.

Switch to the Stereo 3D composition, then find the layer named Stereo 3D Controls. All of the controls necessary for stereoscopic 3D are in two effects on this layer.

Stereo Scene Depth

Stereo Scene Depth is the main control for changing the interaxial separation of the cameras; increasing it spreads the cameras apart. The effect is the same as moving your eyes farther apart, something that is impossible in real life, so this control, if used improperly, can produce very painful results: our eyes and brain are not used to converging much more than the distance between our eyes allows. The last thing you want is to make your viewer go cross-eyed trying to converge on an object that's too close or too far away. To get the most pleasant results, you usually want the camera separation to match the separation of your eyes. However, that is very difficult to do, because your final output could be shown on a (relatively) small 50-inch 3D TV or on a very large IMAX screen. The on-screen distances between objects vary drastically between the two, so a scene that causes eye strain or cross-eye on one screen can be fine on another. For this reason, the Stereo Scene Depth property is measured as a percentage of the composition width: if you change the size of your stereo composition, the stereoscopic calculation stays the same relative to the new size.
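
Because Stereo Scene Depth is a percentage of composition width, the cameras' actual pixel separation scales with the composition. A minimal sketch of that resolution independence, in plain JavaScript with hypothetical values:

```javascript
// Illustrative sketch (not After Effects code): converting a Stereo Scene Depth
// percentage into the cameras' separation in pixels for a given comp width.
function interaxialPixels(stereoSceneDepthPercent, compWidth) {
  return (stereoSceneDepthPercent / 100) * compWidth;
}

// The same percentage yields a proportionally larger pixel separation in a
// larger comp, so the stereo effect stays the same relative to frame size.
var hd  = interaxialPixels(3, 1920);
var uhd = interaxialPixels(3, 3840);
// uhd is exactly twice hd
```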

Changing the Stereo Scene Depth value makes the stereoscopic 3D scene appear to pop out or sink into the screen more. Setting it back to 0 removes all stereoscopy, and everything is on the plane of the screen.

To understand what this control does, consider that moving the cameras apart moves all objects in the scene horizontally, increasing the perceived depth separation. In this way, you can avoid moving an object farther back or closer to the camera to create more depth. Increasing this value increases the maximum amount an object can stick out of or sink into the monitor.

Understanding convergence

When our eyes converge on an object, if there is a difference in horizontal position of that object between the left eye’s image and the right, our mind puts the object together into one and our brain thinks the object is a certain distance away (due to parallax).

When an object appears at the same horizontal location in both the left and right frames, that object's distance from the camera defines the plane of convergence. Any layer at that same distance from the camera is converged upon. Objects that are converged upon appear to sit on the surface of the screen being viewed. Everything closer to the camera than that plane appears to pop out of the screen, and everything farther from the camera appears to be pushed deeper into the screen.

Think of the convergence plane as an anchor point for the stereoscopic 3D space. By moving it, you can shift your 3D objects back and forth and directly control whether objects all sink into the screen, all pop out, or a mix of both. To understand how far those objects stick out in either direction relative to the plane, see the section on Stereo Scene Depth.
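
The behavior described above can be sketched with a simplified parallel-camera parallax formula: with interaxial separation b, convergence distance c, and focal length f, an object at depth z has screen disparity proportional to f * b * (1/z - 1/c). This is a textbook simplification, not After Effects' internal math, and all the values below are hypothetical:

```javascript
// Simplified stereo parallax sketch (not After Effects' internal math).
// Positive disparity here means crossed parallax (the object pops out of the
// screen); negative means uncrossed parallax (it sinks behind the screen);
// zero means the object sits exactly on the convergence plane.
function screenDisparity(focal, interaxial, convergeDist, depth) {
  return focal * interaxial * (1 / depth - 1 / convergeDist);
}

var onPlane = screenDisparity(50, 6.5, 200, 200); // 0: on the convergence plane
var nearer  = screenDisparity(50, 6.5, 200, 100); // > 0: pops out of the screen
var farther = screenDisparity(50, 6.5, 200, 400); // < 0: sinks into the screen
```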

Toe-in or parallel cameras and convergence point

Our eyes angle slightly inward toward the object we are looking at; this is known as toe-in. After Effects simulates toe-in when you select Converge Cameras in the Stereo 3D Controls effect. Using toe-in can give you more control, but there are several factors to consider. When the cameras converge, their rotation changes the perspective of each view, introducing distortion: the perspectives of the left and right cameras no longer line up exactly. When you capture live stereoscopic video, you almost never want your camera rig to toe in; real scenes are almost always shot with parallel cameras, which lets you change the convergence point in post-production without having to correct for perspective distortion. Keep this in mind if you are trying to mix live footage with digital elements. If your scene consists solely of 3D elements in After Effects, it is probably safe, and even preferable, to use converged cameras.

Converged cameras

In After Effects, it is easy to change the convergence point of your stereoscopic 3D camera rig because you can easily change where the cameras point. Make sure that Converge Cameras is selected, and change the Convergence Z Offset property. Increasing this value moves the convergence point farther from the camera, so more of the scene pops out toward you when viewed on a 3D monitor. You can set where the cameras converge by changing the Converge To property. Usually, it is easiest to have the left and right cameras converge to your master camera's point of interest (the default). But it is useful to change it to the camera position (plus, for example, the focus distance as an offset) when trying to match the convergence point and depth of field. Likewise, you can tie the convergence point to the zoom to automatically keep the convergence the same while shifting perspective (changing the field of view of the camera during a dolly in).

Parallel cameras

You can also use parallel virtual cameras. This technique is useful if you need to match live footage and add digital elements to that scene. Keeping the virtual camera orientations consistent with the cameras used in the footage helps keep the perspectives of the digital elements and the stereo footage aligned.

Changing the convergence plane with live footage is as simple as changing the horizontal alignment of the left and right images. Conceptually, it makes sense: each object in the left and right images has a different horizontal offset due to parallax, depending on its depth. If you align the left and right images so that a specific object in your footage appears in exactly the same location when overlapped, your convergence point is located at that object's depth: the distance the object was from the cameras when you shot the footage (or however far the object is from your virtual cameras).
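
That alignment can be sketched as a simple horizontal shift split evenly between the two images, the same way the Footage Convergence slider technique later in this article adds an offset to one eye and subtracts it from the other. This is a plain-JavaScript illustration with hypothetical pixel positions, not an After Effects expression:

```javascript
// Illustrative sketch (not After Effects code): choose the convergence plane on
// parallel-camera footage by shifting the left and right images horizontally so
// that a chosen object overlaps exactly. xLeft and xRight are that object's
// horizontal pixel positions in the left-eye and right-eye frames.
function convergenceShift(xLeft, xRight) {
  var halfShift = (xRight - xLeft) / 2;
  return {
    leftShift: halfShift,   // applied to the left image's x position
    rightShift: -halfShift, // equal and opposite shift for the right image
  };
}

var s = convergenceShift(980, 940); // object sits 40 px apart between the eyes
// After shifting, 980 + s.leftShift and 940 + s.rightShift both equal 960,
// so the object overlaps and lands on the convergence plane.
```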

You can change the 3D Glasses effect's Scene Convergence property to change the convergence plane of parallel cameras. Keep in mind, though, that because it simply offsets the final images, it acts as an additional change to the convergence if you have already converged using the Converge Cameras property with an offset. In general, change the 3D Glasses effect's Scene Convergence property only when using live footage or when Converge Cameras is off.

Increasing the Scene Convergence property moves the convergence plane farther from the camera, so more of the scene appears to pop out of the screen toward the viewer.

In general, the convergence plane of parallel cameras should sit at the zoom distance of your camera. However, when your cameras are parallel, an offset must be taken into account: the cameras are spaced apart, and so are their two perspectives. To get the correct convergence plane, adjust the scene convergence to compensate for the cameras' separation. Subtracting the stereo scene depth (the interaxial separation) achieves this and keeps the convergence point from moving when you use parallel cameras with virtual 3D elements; don't do this when using converged cameras. Set an expression on the 3D Glasses effect's Scene Convergence property to account for this automatically. Also, make sure that the Units property in the 3D Glasses effect is set to % Of Source to match the units of Stereo Scene Depth in the Stereo 3D Controls effect; otherwise, an additional conversion is necessary. After doing this, you can change the Stereo Scene Depth property and your scene convergence doesn't change. As a test, try changing the Stereo Scene Depth property while the 3D Glasses effect's 3D View property is set to Difference: you should not see the black areas move back and forth, only the separation of the objects in front of or behind them. With the following expression applied and Scene Convergence set to 0, the convergence plane for parallel cameras sits at the camera's zoom distance.

3D Glasses effect Scene Convergence property expression:

// Subtract the interaxial separation when the cameras are parallel so that
// the convergence plane stays put as Stereo Scene Depth changes.
try {
    var cameraOffset = effect("Stereo 3D Controls")("Stereo Scene Depth");
    var converge = effect("Stereo 3D Controls")("Converge Cameras");

    if (converge == false) {
        value - cameraOffset;
    } else {
        value;
    }
} catch (e) {
    value;
}

Preview convergence plane with parallel cameras

When working with converged cameras, it is much easier to know how far away your convergence plane is. You have direct access to set the convergence point and offset. 

When dealing with parallel cameras, it is difficult to determine how deep into the scene the convergence plane lies. To preview it, set the 3D View property in the 3D Glasses effect to Difference. Objects that are aligned turn black, and any aligned objects are on the convergence plane. If you then drag the Scene Convergence property value, you should see a darker band move through the scene: this band is the convergence plane moving back and forth. If you switch back to the stereo 3D view and put on your glasses, objects on this convergence plane appear to sit on the plane of the screen.

Match cameras to Maya

A good thing to remember is that our eyes are normally about 6 to 6.5 cm apart. This fact is useful if you are trying to match camera separation in another program, such as Maya. If you import cameras (or nulls) from Maya and they don't line up with the stereo rig camera positions, try adding the following expression to the interaxial separation (the Stereo Scene Depth property) to handle the conversion to After Effects units. In this case, Maya's default units are centimeters, and those units are absolute, so the expression must counteract After Effects' percentage-of-composition-width calculation. However, you may need to rework any keyframes if you change your output size. With this expression, you can drag the property value as you normally would; the expression takes that value and rescales it as needed.

Stereo Scene Depth (interaxial separation) expression to match Maya cameras:

value * (100.0 * 6.5 / thisComp.width);

If your cameras are in the wrong location, make sure to verify where the master camera from Maya is in relation to the left and right. Remember that you can change the configuration in your Stereo 3D Controls effect in After Effects such that the master camera is centered between the left and right cameras, or in the same location as the left (hero left), or the same location as the right (hero right).
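
As a sanity check on the conversion above, here it is as plain JavaScript with hypothetical numbers (Maya's 6.5 cm default eye separation and a 1920-pixel-wide comp):

```javascript
// Sketch of the Maya-to-After Effects unit conversion shown above (illustrative
// only; the comp width and separation are hypothetical). Stereo Scene Depth is
// a percentage of comp width, so an absolute centimeter value is rescaled by
// 100 * separationCm / compWidth to counteract the percentage calculation.
function mayaToSceneDepth(value, compWidth, separationCm) {
  return value * (100.0 * separationCm / compWidth);
}

var depth = mayaToSceneDepth(1, 1920, 6.5);
// A dragged value of 1 becomes roughly a third of a percent of comp width.
```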

Match depth of field to convergence

To get a realistic scene, you usually want to add depth of field, though it is subtle unless you are using a telephoto or macro lens. Usually, you want your focus to match the convergence plane of the cameras. With parallel cameras, it is more difficult, and a little bit of eyeballing is required.

When working with converged cameras, it is very easy to match your focus distance and convergence planes. For example:

  • If you want your focus distance to simply follow your point of interest, right-click the camera layer in the timeline and select Camera > Link Focus Distance To Point Of Interest. Then make sure that your Stereo 3D Controls effect properties are set to converge to the camera point of interest with a 0 offset.

Composite digital 3D elements with stereo footage from real-life cameras

  1. Start with your 3D scene, then create a stereoscopic 3D rig by selecting Camera > Create Stereo 3D Rig.

  2. Import your stereoscopic left-eye and right-eye footage items. Drag your left-eye footage item into your Left Eye Comp composition and your right-eye footage item into your Right Eye Comp composition at the very bottom of your layer stack and leave them as 2D layers. 

  3. Now, if you switch to your stereo 3D view, you should see your 3D elements composited with your stereoscopic 3D footage. 

  4. One final thing needs to be done to truly control the convergence of the footage. Add a Slider Control expression control effect to the Stereo 3D Controls layer in your Stereo 3D composition, and name it Footage Convergence.

  5. Set an expression on the X Position of the left and right footage layers. You first need to separate the position dimensions: select the Position property and choose Animation > Separate Dimensions.

  6. The left layer adds the slider value converted into a percentage of the composition width, and the right layer subtracts it. Make sure to replace YourCompName with the correct name for your stereoscopic 3D composition.

    Expression to set on the left-eye footage layer's X Position property:

    transform.xPosition + (comp("YourCompName Stereo 3D").layer("Stereo 3D Controls").effect("Footage Convergence")("Slider") / 100 * width)

    Expression to set on the right-eye footage layer's X Position property:

    transform.xPosition - (comp("YourCompName Stereo 3D").layer("Stereo 3D Controls").effect("Footage Convergence")("Slider") / 100 * width)

  7. Now you can drag your Footage Convergence slider to change the convergence plane of your stereoscopic 3D footage, and use the Stereo 3D Controls effect to control the convergence of your 3D elements. The 3D Glasses effect's Scene Convergence property changes the convergence of both together. In this situation, it is best to get the two convergence planes to match as closely as possible.

You can’t change the stereoscopic scene depth of footage after you've shot it. Doing so would involve changing the interaxial separation of the cameras and shooting the footage with new perspectives for each camera. It is very difficult to get different perspectives from an image that has already been recorded (though there is research happening in this area). Your best option is to set the Stereo Scene Depth property of your 3D elements to match as closely as possible the separation of the cameras that were used on the shoot. Matching it might be somewhat difficult. Normally, cameras are spaced 6.5 cm apart to be similar to eye separation. But depending on the camera size, it can vary (especially if the body of the camera is wider and it is not possible to place the cameras that close together). It's necessary to do some sort of calculation to compensate for the dimensions of the footage. Also, take into account correct units as mentioned previously, since After Effects operates in units of pixels, not centimeters. It can be easiest just to manually adjust it in this situation.

Remember that to match the camera zoom value in the footage, you need to subtract the cameras' separation from your footage convergence. Difference mode is usually the easiest and fastest way to align the object you want on the convergence plane. For the best (and least painful) composite, match the convergence plane of your 3D elements to that of your stereo footage.

 

ETLAT (edit this, look at that) 

When editing with stereoscopic 3D, it is usually invaluable to be able to see what exactly is happening and how the parameters you are changing affect your stereoscopic 3D rig. There is a simple way to get a sense of this in After Effects:

  1. Open a new composition viewer; set one viewer to your initial scene composition and another to your final stereoscopic 3D composition. Lock both views so that they do not switch.
  2. With your Stereo 3D composition selected, select the controls layer, then lock the Effect Controls panel so it isn't hidden.
  3. Go back to your initial composition and turn on camera wireframes (View > View Options > Camera Wireframes > On). Then switch to a custom view so that you can see your cameras in 3D space.

At this point, you should be able to see three cameras: your master camera, as well as your left and right ones. Changing your settings under Stereo 3D Controls should update the cameras in your initial scene. Try changing the Stereo Scene Depth property to see the cameras separating, or tweak your convergence options to see where the cameras are pointing.

This technique is especially useful when debugging problems, and when trying to match your depth of field to the convergence distance. Both the focus distance and the convergence point are shown when the cameras are converging. With parallel cameras, you can still see your focus distance or point of interest and you can see how this lines up with the perceived convergence point in your final output using the difference mode technique as described earlier.

Hook After Effects up to a 3D TV

It's pretty simple to edit while previewing the stereoscopic 3D effects that you are changing. Anaglyph mode is an inexpensive way to do this. If you happen to have a 3D TV accessible, follow these steps to see your composition and edit in stereoscopic 3D live.

  1. Connect your 3D TV to your computer as a second monitor (DVI or HDMI).
  2. Make sure that your composition dimensions exactly match the resolution of the 3D TV; check the resolution settings for the second monitor.
  3. Change the 3D View property in the 3D Glasses effect to a mode that your 3D TV supports: Stereo Pair (Side By Side), Over Under, or Interlaced Upper L Lower R.
  4. Create a new composition viewer for your stereoscopic 3D scene, and drag it out of the After Effects frame onto the 3D TV. Make sure to lock this viewer.
  5. Make sure that the Magnification Ratio in the viewer is set to 100%.
  6. Press Ctrl+\ (Windows) or Command+\ (Mac OS) twice to make the viewer full screen on the 3D TV.
  7. Turn on the associated 3D mode on your 3D TV.
  8. Put on your glasses, and you should be viewing your composition in stereoscopic 3D.

Lights and cameras and the rig

Left Eye Comp composition and Right Eye Comp composition can produce different camera views because they are precomposed with Collapse Transformations on. They do not inherit camera or light data from the containing composition, but instead use the modified left and right cameras. This is good because the cameras automatically create the correct angles for the stereoscopic view without any manual work.

However, there are two limitations that this introduces:

  • You cannot use multiple cameras, since each stereoscopic 3D rig is always linked to only one master camera. If you need multiple cameras, you will need to make multiple stereoscopic 3D rigs linked to each individual camera, and then edit the stereoscopic 3D scenes together in another composition.
  • Lights do not transfer into precompositions with collapsed transformations. If you create a light in your main composition, that light is not used in your Left Eye and Right Eye compositions, and therefore not in your Stereo 3D composition either. If you need lights, manually copy them into the Left Eye and Right Eye compositions, and make sure that they are identical to the original lights in the main composition; otherwise, you can get different shadows or colors in each eye, which can cause visual discomfort. Adobe recommends connecting the lights in the left and right compositions to their counterparts in the master composition with expressions, linking all properties of the lights, including position, orientation, and light options. You can do this easily with the pick whip: open two timelines to simultaneously show your main composition and either the left or right composition, Alt-click (Windows) or Option-click (Mac OS) the stopwatch for each property of the light, and drag the pick whip to the associated property in the main composition.
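
If you link the lights with expressions, each expression simply points at the matching property in the master composition. For example, assuming a master composition named "Main Comp" and a light named "Key Light" (both hypothetical names), the expressions to set on the linked light's Position and Intensity properties would look like:

    comp("Main Comp").layer("Key Light").transform.position

    comp("Main Comp").layer("Key Light").lightOption.intensity

Repeat this for every property of the light so that both eyes stay in sync.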

Ghosting

When viewing your composition through glasses, you can sometimes see areas that appear doubled; this is called ghosting. You can test for it by closing your right eye: if you can still see any part of the image that only the right eye should be able to see, you have ghosting. It is usually an issue with the way the display shows your content. It can also happen when sharp color contrasts keep the glasses from entirely blocking the image meant for the other eye, so in general, try to suppress areas that ghost. Most likely, though, it is a display synchronization issue or a similar problem with the 3D TV or display device.

Avoiding stereoscopic problems

As you can see, there are many moving parts when working with stereoscopic 3D. As discussed earlier, you have access to many more variables than in real life, so there is much more opportunity for them to fall out of alignment, providing contradictory depth cues and causing eyestrain or headaches. The following are some general principles to keep in mind.

  • Make sure that your depth cues do not give contradictory information.
  • Check your camera zoom; wide-angle lenses cause more distortion when your cameras are converging (toe-in).
  • Match the master camera's focus distance to the distance of the convergence plane; if they differ, the result can be subtly confusing (viewers sense that something is wrong but can't tell what).
  • If integrating live footage, confirm that your camera angles match those of the cameras used for the footage (usually parallel), and that your convergence distance also matches that of the footage.
  • Avoid introducing an extreme amount of parallax. In difference mode, look at the horizontal spacing between the left and right eyes for the closest and farthest objects, and make sure it is not too extreme.
  • If your eyes cannot converge, or it is painful to view the image, try these solutions:

    • Move farther away from the viewing screen when looking through your 3D glasses.
    • Make sure that your convergence point is somewhere predictable and not far off in the distance or very close to the camera where your eyes would go cross-eyed.
    • Reduce the Stereo Scene Depth (interaxial separation). Even if your convergence plane is reasonably located, an object far from the convergence point can still force your eyes to cross, which is painful. Remember that what matters is the relationship between the objects in the scene: compare the horizontal separation of the closest object to that of the farthest. If the two overlaid images look drastically different, that can cause strain.
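
One way to quantify "too extreme": a common rule of thumb (not an After Effects setting) is to keep the total parallax range under roughly 2% of screen width. A minimal sketch of that check, with hypothetical pixel offsets measured in difference mode:

```javascript
// Check whether the parallax range between the nearest and farthest
// objects exceeds a percentage budget of the composition width.
// The 2% default is a common rule of thumb, not an AE parameter.
function parallaxTooExtreme(nearestOffsetPx, farthestOffsetPx, compWidthPx, budgetPct = 2) {
  // Horizontal left/right separation range, as seen in difference mode.
  const rangePx = Math.abs(farthestOffsetPx - nearestOffsetPx);
  return (rangePx / compWidthPx) * 100 > budgetPct;
}

parallaxTooExtreme(-25, 30, 1920); // 55 px range, about 2.9% -> true
parallaxTooExtreme(-10, 15, 1920); // 25 px range, about 1.3% -> false
```

If the check trips, reduce the Stereo Scene Depth or move the convergence plane so that the closest and farthest objects sit closer to it.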

Ghosting can be caused by factors outside your control: hardware synchronization between the glasses and the monitor, the glasses' battery level, or the monitor's dynamic range and refresh rate. Still, there are some things you can do to reduce it. If you are getting ghosting, try the following:

  • Reduce the high-contrast areas
  • Increase brightness
  • Reduce the scene depth so that the separation between elements is smaller
  • Check your display's stereoscopic 3D troubleshooting guide

A final experiment

One interesting experiment is to reverse the depth cues on purpose and get a sense of what happens when things go wrong. You can easily contradict your occlusion and stereoscopic 3D depth cues to produce an interesting illusion: if you select Swap Left-Right in the 3D Glasses effect, all the convergences are reversed, so everything that was sticking out is now pushed in. The result is counterintuitive. An object that is in front of another according to occlusion, relative size, and perspective appears behind it according to the stereo depth cue; it looks as if the background layer is cut out and the foreground layer is sinking into it. The effect is strange, but experiencing it helps you understand how important these depth cues are, and how important it is to keep them all in alignment and agreement.

Adobe, Inc.
