The Latest Thing, or the Last Hurrah?

Latest and greatest PC technologies are often touted as some kind of Second Coming, but the latest example is a third coming — AGP 3.0, the standard behind the new AGP 8X graphics cards. This new version of the Intel-created, royalty-free-licensed specification improves on AGP 4X, promising unprecedented bandwidth and faster graphics performance, especially for 3D game maniacs.

But does it really deliver? Is it worth changing your buying plans today? And can it be counted on for tomorrow?

The AGP 1.0 and 2.0 Legacy

The original Accelerated Graphics Port 1.0 specification was introduced in 1996 and signaled a real shift in the way computer graphics were handled. The AGP interface provides a dedicated, high-bandwidth connection between a PC’s core logic chipset and its graphics controller. It lets 3D textures stored in system memory be delivered directly to the frame buffer memory on a graphics card, bypassing the PCI bus so 3D and video traffic needn’t compete with other hardware for that bus’s 132MB/sec bandwidth.

The original AGP 1.0 design built on PCI bus technology, enhancing it with a direct link to system memory, lower latencies, and higher speeds. The AGP bus is 32 bits wide (transferring 4 bytes per clock cycle); AGP 1.0 allowed for 1X and 2X modes, which supplied 266MB/sec and 533MB/sec of bandwidth respectively. The 1X and 2X labels denote the number of data transfers per cycle of the standard 66MHz bus clock. In addition to the 32 lines for addresses and data, 8 more lines provide what's called sideband addressing, letting the graphics controller issue new requests while continuing to receive data from previous ones.
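Those bandwidth figures fall straight out of a simple formula: peak transfer rate equals the 66MHz base clock, times the number of transfers per clock, times the 4-byte bus width. Here's a back-of-the-envelope sketch (Python, purely illustrative; the 4X and 8X modes covered later in this article follow the same math):

```python
# Back-of-the-envelope AGP bandwidth: a 32-bit (4-byte) bus on a 66MHz base clock,
# with the "X" rating as the number of data transfers per clock cycle.
BASE_CLOCK_MHZ = 66.66   # AGP base clock
BUS_WIDTH_BYTES = 4      # 32-bit data path

def agp_bandwidth_mb_per_sec(multiplier: int) -> float:
    """Peak transfer rate in MB/sec for a given AGP speed multiplier."""
    return BASE_CLOCK_MHZ * multiplier * BUS_WIDTH_BYTES

for mode in (1, 2, 4, 8):
    print(f"AGP {mode}X: ~{agp_bandwidth_mb_per_sec(mode):.0f}MB/sec")
# Prints roughly 267, 533, 1067, and 2133MB/sec -- matching the 266MB/sec,
# 533MB/sec, 1.066GB/sec, and 2.1GB/sec figures quoted for AGP 1X through 8X.
```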

AGP 1.0 was supposed to usher in a new era of 3D power, supplying bandwidth galore for cutting-edge graphics chips. In fact, it was a bumpy transition for many vendors and users, as the AGP and PCI interfaces are incompatible, and hardware upgrades were required to make use of the new specification.

AGP nonetheless became a standard feature quickly, enjoying industry-wide acceptance and virtually eliminating both PCI graphics cards and older architectures such as VL-Bus. Unfortunately, many of the first wave of AGP cards (such as the 3dfx Voodoo boards) were little more than PCI/66 clones that used the AGP slot for little beyond marketing. Others made use of the AGP 2X bus but simply didn't have the onboard hardware to keep it full: their graphics chips didn't process data quickly enough to need the new bandwidth.

In 1998, Intel stepped up to AGP 2.0, which defined an AGP 4X mode (1.066GB/sec) using the same physical interface with lower-voltage signaling — 1.5V, down from 3.3V, with backward-compatible 1X and 2X modes available at both voltages.

At this point, the AGP landscape started to become clearer, and many graphics pretenders were weeded out. This created a much higher performance bar for AGP 4X graphics cards, arguably led by Nvidia, whose GeForce series really helped usher in the AGP 4X technology that’s common today. (Intel also introduced the AGP Pro specification, but this had no performance impact; it was simply a way to provide an additional power rail to higher-end video cards.)

Enter AGP 3.0/8X

Introduced in September 2002, Intel's AGP 3.0 specification keeps the same 32-bit bus while doubling the effective transfer rate again, to 533MHz (the 8X mode), for a data rate of 2.1GB/sec, and cuts the signaling voltage to 0.8V. Backward compatibility has been cut off at the AGP 4X (1.5V) level, which means AGP 8X slots, as specified, cannot accept older AGP 2X (3.3V) graphics adapters.

Beyond the higher bandwidth, the removal of AGP 1.0 (3.3V) support, and some changes to the pin assignments, the remaining revisions are less notable, mostly relating to performance (e.g., fast-write flow control) or feature enhancements (along with the deletion of little-used AGP 2.0 features). While most of these slide under the radar, one intriguing aspect of AGP 3.0 is its optional support for multiple AGP devices.

The standard AGP 3.0 implementation, as seen in Intel's E7205 chipset, allows only AGP 8X or 4X, 1.5V cards. But what's good for Intel's chipsets may not be good for its licensees, who accuse the CPU giant of accelerating obsolescence and consigning older hardware to the scrap heap before the consumer base is ready. So the list of available implementations is quite a bit longer than originally anticipated.

In addition to standard AGP 3.0, for instance, there’s the Universal 1.5V AGP 3.0 Motherboard configuration, which supports only 1.5V graphics cards but allows backward compatibility for AGP 1X and 2X as well as 4X and 8X speeds. Still more versatile is the Universal AGP 3.0 Motherboard design, which allows both 1.5V and 3.3V cards along with all four speed options. You’ll also encounter “AGP 3.0-compatible” platforms, which meet the AGP 8X bandwidth and performance specifications, but may or may not adhere strictly to all AGP 3.0 requirements.

Because the physical connector is shared across generations, this raises some compatibility issues, especially with AGP 4X- or 8X-only motherboards. Graphics cards, with some notable exceptions, adhere to the same AGP 1.0, 2.0, and 3.0 specifications, which means an AGP 8X-compliant video card uses 1.5V signaling and can run in either AGP 8X or 4X mode.

Conversely, Intel AGP 4X and 8X motherboards support only 1.5V graphics cards, and usually ship with a cardboard insert in the AGP slot warning that hardware damage could occur with a 3.3V adapter. AGP may have brought some level of standardization to computer graphics, but it’s important to know each chipset’s capabilities and support before slapping in that old Voodoo Banshee.
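If you want to keep the combinations straight, the rules above boil down to a simple voltage-and-speed match. Here's an illustrative sketch (Python; the slot categories and rules are simplified from the configurations described above, and the names and helper function are ours, not anything from the spec):

```python
# Illustrative AGP slot/card compatibility check, simplified from the
# motherboard configurations described in the article. Not an exhaustive rule set.
SLOT_TYPES = {
    # slot type: (supported signaling voltages, supported speed modes)
    "AGP 3.0 (standard, e.g. Intel E7205)": ({1.5}, {4, 8}),
    "Universal 1.5V AGP 3.0":               ({1.5}, {1, 2, 4, 8}),
    "Universal AGP 3.0":                    ({1.5, 3.3}, {1, 2, 4, 8}),
}

def card_fits(slot: str, card_voltage: float, card_modes: set[int]) -> bool:
    """True if the card's voltage is supported and it shares at least one speed mode."""
    voltages, modes = SLOT_TYPES[slot]
    return card_voltage in voltages and bool(card_modes & modes)

# An AGP 8X card (1.5V, runs at 8X or 4X) works in all three slot types...
print(card_fits("AGP 3.0 (standard, e.g. Intel E7205)", 1.5, {4, 8}))   # True
# ...but an old 3.3V AGP 2X card only fits the fully universal design.
print(card_fits("AGP 3.0 (standard, e.g. Intel E7205)", 3.3, {1, 2}))   # False
print(card_fits("Universal AGP 3.0", 3.3, {1, 2}))                      # True
```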

What Comes Next

By now, you’re probably looking forward to AGP 16X. Actually, that won’t happen: Intel says the parallel architecture of AGP 8X is the last of its kind, and will be replaced in 2004 by none other than PCI — or rather, a new serial I/O technology called PCI Express.

Also known as 3GIO (Intel's third-generation I/O solution), PCI Express is a high-speed, general-purpose interconnect, software-compatible with current PCI, that allows point-to-point data transfers at an estimated 4.2GB/sec. It's meant to be a unifying standard, consolidating a number of input/output architectures within a platform.

Is It Worth It?

AGP technology is certainly light-years ahead of the old PCI bus, and the newest AGP 8X graphics cards from Nvidia, ATI, and others bring plenty of benefits to the table. But the new spec does bring with it some inherent challenges.

The most obvious is public perception, and the natural inclination to assume AGP 8X is twice as fast as AGP 4X. That's far from the truth, because AGP 8X describes bus bandwidth, not overall performance. Any real-world speed benefit depends on the application or game used, the design of the graphics card itself, and even the design of the overall platform.

A prominent example is the way current 3D cards handle texture memory. Intel has long promoted AGP as a way to offload textures to system memory, thereby saving on more costly onboard (that is, on-graphics-board) buffer memory. That promise has largely gone unfulfilled: early on, there were several 4MB and 8MB AGP cards that simply didn't provide the juice to power 3D applications.

Even with the latest dual-channel DDR platforms, general-purpose system memory is just too slow compared to a high-end video card’s memory. This is magnified on a card like ATI’s newest Radeon 9800 Pro, which features a 256-bit interface to 680MHz DDR memory, along with specialized memory-optimization features and high-end color and z-compression algorithms that no PC platform can even approach.

As graphics technology has improved, memory storage and operations have moved almost entirely from the motherboard to the graphics card (with the obvious exception of low-priced desktops and laptops using integrated-graphics chipsets). Even today's entry-level 3D cards feature 64MB of onboard DDR memory, while higher-end products ship with 128MB standard, and 256MB configurations are on the horizon.

While higher AGP speeds do benefit integrated chipsets such as Intel’s 845G and GE, whose only data source is system memory, not even AGP 8X with its 2.1GB/sec bandwidth can hold a candle to the 8GB/sec memory bandwidth of Nvidia’s current entry-level GeForce4 MX 440, let alone the 21.8GB/sec of the top-end Radeon 9800 Pro.
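The scale of that gap falls out of the numbers already quoted: a 256-bit interface moves 32 bytes per transfer, and at an effective 680MHz that works out to roughly 21.8GB/sec, about ten times what AGP 8X can deliver. A quick illustrative calculation (Python, back-of-the-envelope only):

```python
# Rough local-memory bandwidth for a card with a 256-bit (32-byte) interface
# to DDR memory at an effective 680MHz, compared with the AGP 8X bus.
bus_width_bytes = 256 // 8          # 256-bit memory interface
effective_rate_mhz = 680            # DDR effective transfer rate

local_bw_gb = bus_width_bytes * effective_rate_mhz / 1000   # ~21.8 GB/sec
agp8x_bw_gb = 2.1                                           # AGP 8X peak

print(f"Local video memory: ~{local_bw_gb:.1f}GB/sec")
print(f"AGP 8X bus:          {agp8x_bw_gb}GB/sec")
print(f"Ratio:              ~{local_bw_gb / agp8x_bw_gb:.0f}x")
```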

That is not to say higher-speed AGP interfaces don't come in handy, only that their impact has been muted by the more card-centric design of today's graphics accelerators. Other, less bandwidth-intensive forms of data do use the AGP bus: depending on the graphics design, vertex, triangle, and other 3D data are routinely stored in system memory and flow back and forth across it.

AGP 8X Performance

In terms of raw performance numbers, the vast majority of current games and applications show no real advantage from the AGP 8X bus compared to AGP 4X. In most cases, the AGP bus is either not a consideration at all (2D Windows applications) or not the limiting performance factor (3D games and applications).

That said, there is some evidence that future 3D development may change this scenario. For example, one of today’s most demanding 3D games, Unreal Tournament 2003, shows very noticeable AGP 8X framerate gains (in the neighborhood of 15 to 30 percent depending on the graphics card). Game guru John Carmack has been touting AGP 8X as a virtual requirement for the long-awaited Doom III.

Another important factor is the speed of the graphics card itself: you will see more noticeable AGP 8X gains from a top-end card than from a value model. It's regrettable that AGP 8X seems to be turning into a "checkmark" item, reassuring buyers that their purchases are up-to-date even where it's actually irrelevant. In our testing, the upper-middle segment (say, Nvidia's GeForce4 Ti) is the point where AGP 8X starts making sense, where you'll actually notice higher performance with AGP 8X enabled.

That said, there’s no reason not to buy a lower-end AGP 8X card, just as long as you don’t expect a radical performance jump. But if you’re not avid to upgrade this year, there might be some wisdom in saving to buy an all-new system with PCI Express — and Serial ATA, IEEE 1394b, and other advances — in 2004.
