Visual Technology Tutorial, Part 3: Compression
This is Part 3 of a video processing technology training series extracted from RGB Spectrum's Design Guide.
Digital Video
Digital technologies have revolutionized the way we work with both audio and video signals. However, representing information as groups of binary numbers demands substantial computing resources, both memory capacity and processing power. These demands become especially challenging when audio and video signals are involved, because massive amounts of data are needed to translate the characteristics of sound and light into bits.
Digital sound and video have created entirely new industries for both consumer and professional/commercial applications. One of the most important differences between these two types of applications is that professional/commercial users don't just consume content the way consumers do; they often need to manipulate it and combine it with other sources. Information from any number of content sources frequently must be shared with co-workers in the same room or in remote locations around the globe. Digital technologies and networks have made these tasks far more effective than was ever possible in the analog domain.
The rise of digital technology has introduced a new set of challenges, primarily related to the vast amount of data required to represent digital video. For example, an image of 1920x1080 pixels at 24-bit color depth works out to about 6.2 MB per frame. At a frame rate of 60 fps, just one second of this video amounts to roughly 373 MB of data, a sustained rate of about 3 Gbit/s, which is impractical for most current networks and storage systems. This example illustrates why video compression technology is often necessary when working with digital signals in these contexts.
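The arithmetic behind these figures is easy to verify. The short Python sketch below reproduces them from the frame dimensions and color depth quoted above.

```python
# Uncompressed data rate for 1920x1080 video, 24-bit color, 60 fps.
width, height = 1920, 1080      # pixels per frame
bits_per_pixel = 24             # 8 bits each for R, G, and B
frames_per_second = 60

bytes_per_frame = width * height * bits_per_pixel // 8
bytes_per_second = bytes_per_frame * frames_per_second

print(f"Per frame:  {bytes_per_frame / 1e6:.1f} MB")           # ~6.2 MB
print(f"Per second: {bytes_per_second / 1e6:.0f} MB")          # ~373 MB
print(f"Bit rate:   {bytes_per_second * 8 / 1e9:.2f} Gbit/s")  # ~2.99 Gbit/s
```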
Compression
Video compression is a process that reduces and removes redundant video information so that a digital video file or stream can be sent across a network and stored more efficiently. An encoding algorithm is applied to the source video to create a compressed stream that is ready for transmission, recording, or storage. To decode (play) the compressed stream, an inverse algorithm is applied. The time it takes to compress, send, decompress, and ultimately display a stream is known as latency.
A video codec (encoder/decoder) employs a pair of algorithms that work together. The encoding and decoding processes must be matched; video content compressed using one standard cannot be decompressed with a different standard. Different video compression standards use different methods of reducing data, and so results differ in bit rate (i.e., bandwidth), latency, and image quality.
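Real video codecs such as H.264 are far more sophisticated, but the encode/decode pairing and the latency it introduces can be illustrated with a general-purpose lossless compressor. A minimal sketch, using Python's zlib as a stand-in codec on a synthetic frame buffer:

```python
import time
import zlib

# Synthetic "frame": 1920x1080 pixels, 3 bytes per pixel, filled with a
# repeating pattern so the compressor has redundancy to remove.
frame = bytes(range(256)) * (1920 * 1080 * 3 // 256)

start = time.perf_counter()
encoded = zlib.compress(frame, level=6)  # encoding algorithm
decoded = zlib.decompress(encoded)       # the matching inverse algorithm
elapsed = time.perf_counter() - start

assert decoded == frame  # lossless: the output is identical to the input
print(f"Compressed {len(frame)} bytes to {len(encoded)} "
      f"({len(frame) / len(encoded):.1f}:1)")
print(f"Encode + decode time: {elapsed * 1000:.1f} ms")
```

Feeding the zlib stream to a different decoder (for example, bz2.decompress) raises an error immediately, which makes the same point as the paragraph above about mismatched standards.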
Types of compression are often categorized by how much data is preserved through the stages of processing. "Lossless" refers to a compression method in which no data is lost between source and display; the displayed image is identical to the original source image. "Visually lossless" means that the displayed image appears identical to the original, even though some data has been discarded during compression. "Lossy" compression always involves some loss of data during the data-reduction process, but the resulting quality degradation may or may not be noticeable.
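A minimal demonstration of the lossless/lossy distinction, assuming the Pillow imaging library is available: PNG (a lossless format) reproduces every pixel exactly, while JPEG (a lossy format) does not, even when the two images look the same.

```python
from io import BytesIO
from PIL import Image  # Pillow

# A small synthetic test image: a horizontal color gradient.
src = Image.new("RGB", (256, 256))
src.putdata([(x, 128, 255 - x) for y in range(256) for x in range(256)])

def roundtrip(img, fmt, **save_args):
    """Encode the image to an in-memory file and decode it back."""
    buf = BytesIO()
    img.save(buf, format=fmt, **save_args)
    buf.seek(0)
    return Image.open(buf).convert("RGB")

png = roundtrip(src, "PNG")               # lossless
jpg = roundtrip(src, "JPEG", quality=75)  # lossy

print("PNG identical to source: ", png.tobytes() == src.tobytes())  # True
print("JPEG identical to source:", jpg.tobytes() == src.tobytes())  # False
```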
RGB Spectrum is a leading designer and manufacturer of mission-critical, real-time audiovisual solutions for civilian, government, and military organizations. The company offers integrated hardware, software, and control systems built to the highest standards. Since 1987, RGB Spectrum has been helping its customers make better decisions. Better Decisions. Faster.™