Basic Video Concepts and Terminology

Encode/transcode

Encoding a file means going from a lossless, raw, uncompressed format (for example, WAV for audio) to a lossy, compressed bitstream format (for example, MP4).

Transcoding a file means going from one lossy format to another (for example, FLV to MP4). It can also mean keeping the same format but changing the resolution, bit rate, and so on.

Master video files created with video production equipment and video editing software are often too large and not in the proper format for delivery to online destinations.

To convert digital video to the proper format and specifications for playback on different screens, video files are encoded or transcoded.

During the encoding or transcoding process, the video is compressed to a smaller, efficient file size optimal for delivery to the web and to mobile devices.
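As a concrete illustration, transcoding is commonly done with a tool such as ffmpeg. The sketch below builds (but does not run) an ffmpeg command line for transcoding a master file to H.264 at a smaller delivery resolution; the filenames and parameter values are hypothetical.

```python
def build_transcode_cmd(src, dst, width=640, height=480, video_kbps=800):
    """Return an ffmpeg argument list that transcodes src to H.264
    at the given resolution and video bit rate (a sketch; filenames
    and values here are illustrative)."""
    return [
        "ffmpeg",
        "-i", src,                         # input file (any container/codec ffmpeg reads)
        "-c:v", "libx264",                 # encode the video track with H.264
        "-b:v", f"{video_kbps}k",          # target video bit rate
        "-vf", f"scale={width}:{height}",  # downsample to the delivery resolution
        dst,                               # output container inferred from the extension
    ]

cmd = build_transcode_cmd("master.mov", "delivery.mp4")
```

Running the returned list with `subprocess.run(cmd)` would perform the actual transcode, assuming ffmpeg is installed.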

Video file formats

Similar to a ZIP file, a video file format determines how data is contained within the video file. It is a container format.

A video file usually contains multiple tracks – a video track (without audio) and one or more audio tracks (without video) – that are interrelated and synchronized.

The video file format determines how these different data tracks and metadata are organized.

Some examples of video file formats include MPEG (.mpg), MOV, MXF, WMV, and AVI.

Video codec (coder/decoder)

The file format alone does not determine how the video is delivered or encoded.

A video codec describes the algorithm by which a video is encoded.

A video player decodes the video according to its codec and then displays a series of images, or frames, on the screen.

Codecs minimize the amount of information a video file needs to store in order to play back video.

Rather than information about each individual frame, only information about the differences between one frame and the next is stored.

Because most videos change little from one frame to the next, codecs allow for high compression rates, which results in smaller file sizes.

Common codecs include H.264 (also known as MPEG-4 AVC; the current standard), On2 VP6 (acquired by Google; lower quality), and Sorenson Spark (based on H.263).

H.264 compression is usually about 4:1.
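The difference-only storage described above can be sketched with a toy delta encoder. Real codecs are far more sophisticated (motion compensation, quantization, entropy coding), but the core idea is the same: store the first frame fully, then only the per-pixel changes.

```python
def delta_encode(frames):
    """Store the first frame fully, then only per-pixel differences
    between consecutive frames (a toy sketch of inter-frame coding)."""
    if not frames:
        return []
    encoded = [list(frames[0])]
    for prev, cur in zip(frames, frames[1:]):
        encoded.append([c - p for p, c in zip(prev, cur)])
    return encoded

def delta_decode(encoded):
    """Rebuild full frames by accumulating the stored differences."""
    if not encoded:
        return []
    frames = [list(encoded[0])]
    for diff in encoded[1:]:
        frames.append([p + d for p, d in zip(frames[-1], diff)])
    return frames

# Three tiny 3-pixel "frames" that change very little frame to frame
frames = [[10, 10, 10], [10, 11, 10], [10, 11, 12]]
assert delta_decode(delta_encode(frames)) == frames
```

Because most of the stored differences are zero, they compress extremely well, which is why videos with little motion yield the smallest files.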

Resolution

The height and width of the video in pixels. Most source video is stored at a higher resolution, such as 1920 x 1080, but is downsampled to smaller resolutions for streaming, usually 640 x 480 or smaller.

The resolution is generally downsampled during transcoding.

Doubling the resolution (both dimensions) causes a 4x increase in size.
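The 4x relationship follows directly from the pixel count, as a quick check shows:

```python
def pixel_count(width, height):
    """Pixels per frame; uncompressed size scales with this number."""
    return width * height

# Doubling both dimensions quadruples the number of pixels
assert pixel_count(1280, 720) * 4 == pixel_count(2560, 1440)
```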

Multi-bit-rate streaming allows the player to switch between higher and lower resolutions based on the user's available bandwidth.

Resolutions are often referred to by their height (the smaller dimension) alone; 1080p, for example, really means 1920 x 1080.

Aspect ratio

The ratio of the width to the height of the video. Two common aspect ratios are 4:3 and 16:9.

  • 4:3 (1.33:1) - Used for almost all standard-definition TV broadcast content.
  • 16:9 (1.78:1) - Used for almost all wide-screen, high-definition TV content (HDTV) and movies.

Most customers use 4:3 (640 x 480).
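An aspect ratio can be derived from the pixel dimensions by reducing the width:height fraction, for example:

```python
from math import gcd

def aspect_ratio(width, height):
    """Reduce width:height to lowest terms, e.g. 1920x1080 -> (16, 9)."""
    g = gcd(width, height)
    return (width // g, height // g)

assert aspect_ratio(640, 480) == (4, 3)    # standard definition
assert aspect_ratio(1920, 1080) == (16, 9)  # wide-screen HD
```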

Video bit rate/data rate

The amount of data encoded to make up a single second of video playback, measured in kilobits per second (Kbps).

The higher the video bit rate, the higher the video quality.
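Since the bit rate is data per second, an encoded file's size can be estimated as bit rate times duration. The sketch below adds a hypothetical audio track bit rate to the total, since a typical file carries both tracks.

```python
def estimated_size_bytes(video_kbps, audio_kbps, duration_s):
    """Rough encoded file size: total bit rate * duration, in bytes.
    (Ignores container overhead; the bit rates are illustrative.)"""
    total_bits = (video_kbps + audio_kbps) * 1000 * duration_s
    return total_bits // 8  # 8 bits per byte

# e.g. 800 kbps video + 128 kbps audio for a 60-second clip
size = estimated_size_bytes(800, 128, 60)
```

This estimate comes out to roughly 7 MB for the one-minute clip above, which shows why bit rate, not resolution alone, drives delivery cost.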

Frame rate

The number of frames, or still images, shown for each second of video, measured in frames per second (fps).

Typically, North American TV (NTSC) is broadcast at 29.97 fps.

European and Asian TV (PAL) is broadcast at 25 fps, and movies are at 23.976 or 24 fps.
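Frame rate times duration gives the number of still images a player must decode, which is easy to compute directly:

```python
def total_frames(fps, duration_s):
    """Number of still images in a clip at the given frame rate."""
    return round(fps * duration_s)

# One minute of NTSC video at 29.97 fps
frames_per_minute = total_frames(29.97, 60)
```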

Streaming

  • Unlimited bandwidth - no slowdowns due to excessive viewing
  • Content is protected (Scene doesn't take advantage of this characteristic)
  • Supports seeking into large files
  • Uses RAM to deliver video
  • Dynamic client-side buffer
  • Supports live video broadcasts
  • Better mobile support

Progressive download

  • Older; few use it today
  • Uses more bandwidth on server and client
  • No file protection
  • No seeking into large files
  • No insight into the video experience
  • Relies on disk for video delivery
  • No buffer management
  • No support for live events
