High-Resolution Graphics and Interactive Video: Mixing Oil and Water

Copyright (c) 1988 - Future Systems Inc.
PO Box 26 - Falls Church VA 22046 USA
Phone 703/241-1799 - Telex 4996279 - Fax 703/532-0529

Michael Bush

This article appeared in The Videodisc Monitor, March 1988, and appears here by permission.


The Problem

CGA, MCGA, EGA, MDA, PGA, and VGA: The letters from this alphabet soup -- acronyms of different graphics standards of the microcomputer industry -- spell nothing but confusion.

The multitude of proprietary graphics schemes also in the marketplace makes matters even worse. Add to this the interactive videodisc industry's need to display NTSC video, and the situation becomes impossible.

This confusion exacts a high price in lost economies of scale in the production of monitors and graphics cards -- and thus has a direct impact on pricing in the microcomputer industry.

While this effect on peripheral pricing also exists in the interactive videodisc industry, combining high-resolution graphics with television images on the same monitor is perhaps a more serious complication. Indeed, videodisc users who need graphics resolutions greater than 640x200 face a problem known as "interlace jitter."

From my own extensive examination of this jitter problem, I have concluded that display systems that employ interlace techniques are unacceptable for resolutions above 640x200, especially when the user must be in front of the monitor for any length of time. This article will provide some background on the interlace approach and suggest some solutions to the problem.

Some Historical Perspective

Interlace jitter is a vestige of the history of television technology; it even pre-dates the method for obtaining color television that was adopted by the National Television System Committee (NTSC) in 1953. That color system (compatible with the black-and-white, 525-scanline system adopted in the US in 1941) was based on standards for color proposed by RCA in 1949.

In the quest for increasing screen definition, engineers had determined many years earlier that it was possible to send each frame of video in two parts, enabling more lines to be placed on the screen using the standard bandwidth that was available.

The two basic patents for "scanline interlace" (or "intermesh," as it was known at the time), the technique still used by today's televisions, were awarded in 1929 and 1932 to engineers of the Radio Corporation of America (RCA).

Video in the US consists of images displayed at a rate of 30 frames per second, a number derived from the 60-cycle AC power available in the US. (To avoid electrical interference between the power source and the image on the screen, which produced ripple on early TV screens, the internal synchronizing signals in the TV were set to the same frequency as the power source.)
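From those two numbers, the rest of NTSC's timing follows. As a back-of-the-envelope check (using the nominal 30 Hz frame rate; color NTSC actually runs slightly slower, at 29.97 Hz):

```latex
f_{\text{field}} = 2 \times 30\ \text{Hz} = 60\ \text{Hz},
\qquad
f_H = 525\ \tfrac{\text{lines}}{\text{frame}} \times 30\ \tfrac{\text{frames}}{\text{s}}
    = 15{,}750\ \text{Hz} \approx 15.75\ \text{kHz}
```

That roughly 15.75 kHz horizontal rate is the figure to keep in mind when monitor scan rates come up below.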

Restricted by the bandwidth available for TV transmissions, the television's electron guns and control circuitry could not scan the screen's 525 lines in one pass.

Instead, each frame actually consisted of two fields that were scanned to the face of the TV's cathode ray tube in succession. The first field was placed on alternating scanlines, and the second field was "interlaced" onto the lines left vacant by the first field.
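A minimal sketch of that two-field split in C (the 480-line, one-byte-per-pixel frame layout is an illustrative assumption, not an NTSC specification):

```c
#include <string.h>

#define LINES 480           /* visible scanlines (illustrative) */
#define WIDTH 640           /* samples per line (illustrative)  */

/* Split one full frame into the two fields an interlaced display
   scans in succession: field 0 takes the even-numbered lines, and
   field 1 takes the odd lines left vacant by the first pass. */
void split_into_fields(const unsigned char frame[LINES][WIDTH],
                       unsigned char field0[LINES / 2][WIDTH],
                       unsigned char field1[LINES / 2][WIDTH])
{
    for (int y = 0; y < LINES; y += 2) {
        memcpy(field0[y / 2], frame[y],     WIDTH);  /* even lines */
        memcpy(field1[y / 2], frame[y + 1], WIDTH);  /* odd lines  */
    }
}
```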

Because it is difficult to begin the scan of each line at a point precisely aligned with the preceding and following lines, interlacing the two fields causes jitter on the screen. Interlacing is generally acceptable for TV pictures, due to the many colors (and low contrast) contained in each picture.

However, the jitter is not desirable for the high-contrast images produced by computers.

The Implications

Computer images do not require the same bandwidth and scan rates as those of NTSC television. Thus, it is easy to produce computer monitors that can display higher resolutions than those possible with TV. For computer graphics displays, the entire image can be scanned to the CRT screen in one pass of the electron gun.

However, if graphics are to be displayed simultaneously with video, then interlace mode must be selected to maintain compatibility with the NTSC technology explained earlier.

At resolutions of 640x200 and below, this is no problem, since a single pixel is displayed using two scanlines; the same content appears on two adjacent scanlines, substantially reducing the interlace jitter effect. But at higher resolutions the problem appears at its worst, for the reasons explained above.
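A sketch of that line doubling, under the same illustrative buffer assumptions as before: each row of a 200-line image lands on two adjacent scanlines, so the two fields carry identical content at every vertical position and cannot visibly flutter against each other.

```c
#include <string.h>

#define SRC_LINES 200       /* image rows (illustrative)       */
#define WIDTH     640       /* samples per line (illustrative) */

/* Line-double a 640x200 image onto a 400-line interlaced raster.
   Source row y lands on scanlines 2y and 2y+1, so the odd and even
   fields show the same content at every vertical position. */
void line_double(const unsigned char src[SRC_LINES][WIDTH],
                 unsigned char dst[2 * SRC_LINES][WIDTH])
{
    for (int y = 0; y < SRC_LINES; y++) {
        memcpy(dst[2 * y],     src[y], WIDTH);  /* first field's line  */
        memcpy(dst[2 * y + 1], src[y], WIDTH);  /* second field's line */
    }
}
```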

The Solution

You can obtain the peaceful coexistence of high-resolution graphics and NTSC video on the same monitor by digitizing the video in real time. Because each television frame is digitized just before it is displayed, various tricks (such as overlaying computer-generated graphics onto the video) become almost trivial.

This process has not been cheap. Just a few years ago, a rack-mounted, laboratory piece of digital video equipment cost about $20,000. Today, some Scholar workstations used by MIT's Project Athena use a board set that costs about $5,000 (from Parallax of California).

During the past year, Processor Sciences Inc. (Waltham MA) has demonstrated a prototype overlay board that digitizes the video in real-time. Each frame is placed into one of two frame buffers. Then, the video is mixed with the pixel representation of the computer-generated graphics prior to the digital/analog conversion necessary for display on conventional monitors.
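In outline, the mixing stage might work as in the C sketch below. The buffer names, the key-color test, and the dimensions are my illustrative assumptions, not PSI's actual design: graphics pixels replace video pixels wherever the graphics plane is non-transparent, and the merged buffer then feeds the digital/analog converter.

```c
#include <stdint.h>

#define WIDTH  640
#define HEIGHT 480
#define KEY    0x00         /* graphics value treated as transparent */

/* Two frame buffers: while the digitizer fills one with the incoming
   video frame, the other is mixed with the graphics plane and passed
   on toward the digital/analog conversion stage. */
static uint8_t video[2][HEIGHT][WIDTH];

void mix_and_display(int buf,
                     const uint8_t graphics[HEIGHT][WIDTH],
                     uint8_t out[HEIGHT][WIDTH])
{
    for (int y = 0; y < HEIGHT; y++)
        for (int x = 0; x < WIDTH; x++)
            out[y][x] = (graphics[y][x] != KEY)
                          ? graphics[y][x]      /* graphics on top     */
                          : video[buf][y][x];   /* video shows through */
    /* out[] would now be handed to the DAC for display. */
}
```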

PSI's technology uses a set of silicon chips from ITT that allows overlay boards to be produced at very low cost. First announced in 1983 (Monitor 1/84, p. 15), the set implements in digital circuitry all the functions of color television, including the logic elements necessary for analog/digital and digital/analog signal conversion.

In addition to PSI, Matrox of Canada (which will supply the initial hardware for the US Army's Electronic Information Delivery System) has implemented three chips out of this set in its EIDS graphics boards.

However, Matrox's initial design produces NTSC-compatible output -- high-resolution images can be displayed on lower scan-rate, NTSC-compatible monitors.

Unfortunately, this NTSC compatibility is impossible to obtain without interlace and its accompanying jitter. Matrox, however, has announced a non-interlace version of its board set, available in May 1988.

In order to use such a non-interlace board, the monitor must be capable of both high vertical and horizontal scan rates. This high scan rate allows the image (ordinarily scanned to the screen in NTSC in two passes) to be scanned (after digitization) in one pass. This effectively eliminates interlace jitter.
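In round numbers, the horizontal rate must double: keeping NTSC's 525-line raster but painting every line on each 1/60-second pass requires

```latex
f_H^{\text{interlaced}} = 525 \times 30 \approx 15.75\ \text{kHz}
\quad\longrightarrow\quad
f_H^{\text{progressive}} = 525 \times 60 = 31.5\ \text{kHz}
```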

Such special monitors, while easy to produce, will cost more due to lower demand. Presently, the microcomputer graphics marketplace is so fragmented that economies of scale do not exist.

For this reason, manufacturers are building "multiscan" monitors that can handle the graphics modes of yesterday, today, and even a few of tomorrow. Most of these multiscan monitors are capable of displaying the type of picture produced by the display technology described above.

However, multiscan electronics are not particularly inexpensive. If the marketplace could stabilize on one particular graphics resolution, manufacturers could produce a non-multiscan monitor with the necessary economies of scale to lower costs. Wouldn't it be great if the videodisc industry could use the same monitor that would be used throughout the microcomputer industry?

Perhaps we are coming closer: The VGA system used in the new IBM Personal System/2 and the Apple Macintosh II color graphics system are tantalizingly alike -- yet disappointingly different. Both have analog output at a resolution of 640x480 pixels. However, VGA's horizontal scan frequency is 31.5 kHz, while the Mac II's is 35 kHz. Nevertheless, two is better than ten -- or at least, it's cheaper for third-party manufacturers.
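The difference amounts mainly to refresh rate over what is essentially the same 525-line raster (a rough check on my part; exact line counts and blanking intervals vary by implementation):

```latex
f_V^{\text{VGA}} \approx \frac{31{,}500\ \text{Hz}}{525\ \text{lines}} = 60\ \text{Hz},
\qquad
f_V^{\text{Mac II}} \approx \frac{35{,}000\ \text{Hz}}{525\ \text{lines}} \approx 66.7\ \text{Hz}
```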

Conclusions

In those applications in which high-resolution graphics must be used in conjunction with interactive video, display methods that use field interlace are generally unacceptable. The interactive videodisc industry must adopt non-interlace display schemes that enhance progress in crucial areas. We should not be plagued by the limitations of a technology that was patented over 55 years ago.

Second, there are many benefits to be gained from adopting standards for combining computer-generated graphics with video on the same delivery system. I am tired of plugging and unplugging several monitors into and out of several graphics cards and video overlay implementations just to be able to run various pieces of software.

But personal convenience is not the most pressing reason to make the necessary changes in the way the interactive world does its graphics business. A single graphics resolution and overlay technique for interactive videodisc will bring down prices for monitors and display boards. It will even improve the quality of the display that is available: the digital graphics/video technology described above will produce rock-solid images, even at high resolutions.

I think this adds up to a no-lose situation for producers and users alike. The resulting combination of lower costs, ease of implementation, and higher quality will raise the interest level of decision makers who are investigating interactive videodisc as a viable and cost-effective training alternative.