
About video and audio encoding and compression

Recording video and audio to a digital format involves balancing quality with file size and bitrate. Most formats use compression to reduce file size and bitrate by selectively reducing quality. Compression is essential for reducing the size of movies so that they can be stored, transmitted, and played back effectively.

When exporting a movie file for playback on a specific type of device at a certain bandwidth, you must first choose an encoder (codec). Various encoders use different compression schemes to compress the information. Each encoder has a corresponding decoder that decompresses and interprets the data for playback.

A wide range of codecs is available; no single codec is best for all situations. For example, the best codec for compressing cartoon animation is generally not efficient for compressing live-action video.

Compression can be lossless (in which no data is discarded from the image) or lossy (in which data is selectively discarded).
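To make the distinction concrete, here is a minimal Python sketch (not any real codec's pipeline) that round-trips data losslessly with zlib, then imitates lossy compression by quantizing pixel values before compressing. The quantization step is an illustrative stand-in for the perceptual data reduction real lossy codecs perform.

```python
import zlib

# A toy 8-bit grayscale "image": 64 rows of a smooth 256-pixel gradient.
image = bytes(range(256)) * 64

# Lossless: the round trip reproduces the original bytes exactly.
packed = zlib.compress(image)
assert zlib.decompress(packed) == image
print(f"lossless: {len(image)} -> {len(packed)} bytes")

# Lossy (illustrative): quantize each pixel to 16 levels first, discarding
# detail. The result compresses smaller, but the original is not recoverable.
quantized = bytes((b // 16) * 16 for b in image)
packed_lossy = zlib.compress(quantized)
assert zlib.decompress(packed_lossy) != image
print(f"lossy:    {len(image)} -> {len(packed_lossy)} bytes")
```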

You can control many of the factors that influence compression and other aspects of encoding in the Export Settings dialog box. See Encoding and exporting.

For more information about encoding and compression options, see this FAQ entry: "FAQ: What is the best format for rendering and exporting from After Effects?"

Temporal compression and spatial compression

The two general categories of compression for video and audio data are spatial and temporal. Spatial compression is applied to a single frame of data, independent of any surrounding frames. Spatial compression is often called intraframe compression.

Temporal compression identifies the differences between frames and stores only those differences, so that frames are described based on their difference from the preceding frame. Unchanged areas are repeated from the previous frames. Temporal compression is often called interframe compression.
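A toy sketch of the idea, with plain Python lists standing in for frames (real interframe codecs use motion compensation and block-based prediction, not raw per-pixel differences):

```python
# Interframe (temporal) compression, minimally: store the first frame in
# full, then store only the per-pixel differences for the next frame.
frame1 = [10, 10, 10, 200, 200, 10]   # a toy 6-pixel frame
frame2 = [10, 10, 10, 10, 200, 200]   # the bright region shifted right

delta = [b - a for a, b in zip(frame1, frame2)]
print(delta)  # [0, 0, 0, -190, 0, 190] -- unchanged areas become zeros

# The decoder rebuilds frame2 from frame1 plus the stored differences.
rebuilt = [a + d for a, d in zip(frame1, delta)]
assert rebuilt == frame2
```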

Bitrate

The bitrate (data rate) affects both the quality of a video clip and which audiences can download the file promptly, given their bandwidth constraints.

When you deliver video over the Internet, produce files at lower bitrates. Users with fast Internet connections can view the files with little or no delay, but users with slow connections must wait for files to download. If you expect that many viewers have slow connections, keep video clips short so that download times stay within acceptable limits.
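The underlying arithmetic is straightforward. The sketch below (with hypothetical names, and ignoring container overhead) estimates a clip's size and download time from its bitrate, duration, and the viewer's connection speed:

```python
def stream_stats(bitrate_kbps: float, duration_s: float, connection_mbps: float):
    """Rough file size and download time for a clip at a given bitrate.
    Illustrative only; real files add container and audio overhead."""
    size_mb = bitrate_kbps * duration_s / 8 / 1000  # kilobits -> megabytes
    download_s = size_mb * 8 / connection_mbps      # megabytes -> seconds
    return size_mb, download_s

# A 60-second clip at 2,000 kbps over a 4 Mbps connection:
size, wait = stream_stats(2000, 60, 4)
print(f"{size:.0f} MB, ~{wait:.0f} s to download")  # 15 MB, ~30 s
```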

Frame rate

Video is a sequence of images that appear on the screen in rapid succession, giving the illusion of motion. The number of frames that appear every second is known as the frame rate, and it is measured in frames per second (fps). The higher the frame rate, the more frames per second are used to display the sequence of images, resulting in smoother motion. The trade-off for higher quality, however, is that higher frame rates require a larger amount of data, which uses more bandwidth.

When working with digitally compressed video, the higher the frame rate, the larger the file size. To reduce the file size, lower either the frame rate or the bitrate. If you lower the bitrate and leave the frame rate unchanged, the image quality is reduced.
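One way to see the second trade-off: at a fixed bitrate, the encoder has a fixed bit budget per second, so more frames per second leaves fewer bits for each frame. A rough sketch (bits_per_frame is a hypothetical helper, not an encoder setting):

```python
def bits_per_frame(bitrate_kbps: float, fps: float) -> float:
    """Average bit budget the encoder can spend on each frame."""
    return bitrate_kbps * 1000 / fps

# At a fixed 2,000 kbps bitrate, halving the frame rate doubles the
# per-frame budget, which is why quality rises or falls with frame rate.
for fps in (30, 15):
    print(f"{fps} fps -> {bits_per_frame(2000, fps):,.0f} bits/frame")
```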

Because video looks much better at its native frame rate (the frame rate at which it was originally recorded), Adobe recommends leaving the frame rate high if your delivery channels and playback platforms allow it. For full-motion NTSC video, use 29.97 fps; for PAL video, use 25 fps. If you lower the frame rate, Adobe Media Encoder drops frames at a linear rate. If you must reduce the frame rate, the best results come from choosing a rate that divides evenly into the source rate. For example, if your source has a frame rate of 24 fps, reduce the frame rate to 12 fps, 8 fps, 6 fps, 4 fps, 3 fps, or 2 fps.
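The even-division rule amounts to picking a target rate that is a whole-number divisor of the source rate, as this small sketch shows (even_divisor_rates is a hypothetical helper):

```python
def even_divisor_rates(source_fps: int) -> list[int]:
    """Target frame rates that divide the source rate evenly, so frames
    can be dropped at a uniform interval."""
    return [source_fps // n for n in range(2, source_fps)
            if source_fps % n == 0]

print(even_divisor_rates(24))  # [12, 8, 6, 4, 3, 2]
```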

For mobile devices, use the device-specific encoding presets from the Preset Browser panel.

Note:

If you are creating a SWF file with embedded video, the frame rate of the video clip and the SWF file must be the same. If you use different frame rates for the SWF file and the embedded video clip, playback is inconsistent.

Key frames

Key frames are complete video frames (or images) that are inserted at consistent intervals in a video clip. The frames between the key frames contain information about the changes that occur between key frames.

Note:

Key frames are not the same as keyframes, the markers that define animation properties at specific times.

By default, Adobe Media Encoder automatically determines the key frame interval (key frame distance) to use based on the frame rate of the video clip. The key frame distance value tells the encoder how often to re-evaluate the video image and record a full frame, or key frame, into a file.

If your footage has many scene changes or contains fast motion or animation, the overall image quality may benefit from a lower key frame distance. A smaller key frame distance corresponds to a larger output file.

When you reduce the key frame distance value, raise the bitrate for the video file to maintain comparable image quality.
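To see why a smaller key frame distance enlarges the file, the sketch below counts how many complete frames a clip must store at a given interval (key_frame_count is an illustrative helper; real encoders may also insert key frames at scene cuts):

```python
import math

def key_frame_count(duration_s: float, fps: float, distance: int) -> int:
    """Number of complete (key) frames stored for a given key frame distance."""
    total_frames = round(duration_s * fps)
    return math.ceil(total_frames / distance)

# A 60-second clip at 30 fps: shrinking the distance from 90 to 30 frames
# triples the number of full frames the file must carry.
for distance in (90, 30):
    print(f"distance {distance}: {key_frame_count(60, 30, distance)} key frames")
```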

Image aspect ratio and frame size

As with the frame rate, the frame size for your file is important for producing high-quality video. At a specific bitrate, increasing the frame size results in decreased video quality.

The image aspect ratio is the ratio of the width of an image to its height. The most common image aspect ratios are 4:3 (standard television), and 16:9 (widescreen and high-definition television).
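Reducing a frame size to its image aspect ratio is a greatest-common-divisor calculation, as this short sketch shows:

```python
from math import gcd

def image_aspect_ratio(width: int, height: int) -> str:
    """Reduce pixel dimensions to the image aspect ratio."""
    d = gcd(width, height)
    return f"{width // d}:{height // d}"

print(image_aspect_ratio(640, 480))    # 4:3
print(image_aspect_ratio(1920, 1080))  # 16:9
```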

Pixel aspect ratio

Most computer graphics use square pixels, which have a width-to-height pixel aspect ratio of 1:1.

In some digital video formats, pixels aren’t square. For example, standard NTSC digital video (DV) has a frame size of 720x480 pixels, and it’s displayed at an aspect ratio of 4:3. This means that each pixel is non-square, with a pixel aspect ratio (PAR) of 0.91 (a tall, narrow pixel).
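The arithmetic can be sketched as follows; display_size is a hypothetical helper, and 0.9091 is the commonly used NTSC DV pixel aspect ratio that the 0.91 figure above rounds:

```python
def display_size(storage_width: int, height: int, par: float) -> tuple[int, int]:
    """Width a frame occupies on a square-pixel display, given its PAR."""
    return round(storage_width * par), height

# NTSC DV stores 720x480 but displays each pixel narrower than it is tall.
# (The 720-pixel width includes a few non-picture edge pixels, which is why
# the result lands near, rather than exactly at, 4:3.)
w, h = display_size(720, 480, 0.9091)
print(w, h, f"aspect ~{w / h:.2f}")  # 655 480 aspect ~1.36, near 4:3 (1.33)
```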

Interlaced versus noninterlaced video

Interlaced video consists of two fields that make up each video frame. Each field contains half the number of horizontal lines in the frame; the upper field (Field 1) contains all of the odd-numbered lines, and the lower field (Field 2) contains all of the even-numbered lines. An interlaced video monitor (such as a television) displays each frame by first drawing all of the lines in one field and then drawing all of the lines in the other field. Field order specifies which field is drawn first. In NTSC video, new fields are drawn to the screen 59.94 times per second, which corresponds to a frame rate of 29.97 frames per second.

Noninterlaced video frames are not separated into fields. A progressive-scan monitor (such as a computer monitor) displays a noninterlaced video frame by drawing all of the horizontal lines, from top to bottom, in one pass.
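A minimal sketch of the field structure, using a Python list of horizontal lines as a stand-in for a frame:

```python
# A toy 6-line "frame": each entry is one horizontal line.
frame = ["line1", "line2", "line3", "line4", "line5", "line6"]

# Counting lines from 1: the upper field holds the odd-numbered lines,
# the lower field the even-numbered lines.
upper_field = frame[0::2]  # lines 1, 3, 5
lower_field = frame[1::2]  # lines 2, 4, 6

# An interlaced display draws one field, then the other; interleaving
# them restores the full frame a progressive display draws in one pass.
rebuilt = [line for pair in zip(upper_field, lower_field) for line in pair]
assert rebuilt == frame
```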

Adobe Media Encoder deinterlaces video before encoding whenever you choose to encode an interlaced source to a noninterlaced output.

High-definition (HD) video

High-definition (HD) video refers to any video format with pixel dimensions greater than those of standard-definition (SD) video formats. Typically, standard-definition refers to digital formats with pixel dimensions close to those of analog TV standards, such as NTSC and PAL (around 480 or 576 vertical lines, respectively). The most common HD formats have pixel dimensions of 1280x720 or 1920x1080, with an image aspect ratio of 16:9.

HD video formats include interlaced and noninterlaced varieties. Typically, the highest-resolution formats are interlaced at the higher frame rates, because noninterlaced video at these pixel dimensions would require a prohibitively high data rate.

HD video formats are designated by their vertical pixel dimensions, scan mode, and frame or field rate (depending on the scan mode). For example, 1080i60 denotes interlaced scanning of 60 1920x1080 fields per second, whereas 720p30 denotes progressive scanning of 30 noninterlaced 1280x720 frames per second. In both cases, the frame rate is approximately 30 frames per second.
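As a sketch, the naming convention can be parsed mechanically; parse_hd_format is a hypothetical helper that follows the simplified reading above (real broadcast rates are often 59.94 or 29.97 rather than round numbers):

```python
import re

def parse_hd_format(label: str):
    """Split a label like '1080i60' or '720p30' into vertical lines, scan
    mode, and rate, and derive the approximate frame rate."""
    m = re.fullmatch(r"(\d+)([ip])(\d+)", label)
    if not m:
        raise ValueError(f"unrecognized format label: {label}")
    lines, scan, rate = int(m[1]), m[2], int(m[3])
    # For interlaced video the trailing number counts fields, and two
    # fields make one frame; for progressive video it counts frames.
    frame_rate = rate / 2 if scan == "i" else rate
    return lines, scan, rate, frame_rate

print(parse_hd_format("1080i60"))  # (1080, 'i', 60, 30.0)
print(parse_hd_format("720p30"))   # (720, 'p', 30, 30)
```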
