Bigger Wafers, Smaller Dies, New Technologies

Some PC users view a desktop or notebook as a mysterious black box. Some know a good deal about the processor inside, or even picked out a particular brand and model of CPU before buying or building the rest of the PC. But not many pay attention to how the CPU itself was built — they may vaguely remember the old Intel commercials portraying dancing lab workers in clean-room “bunny suits,” but tune out when techies talk about chipmakers’ moving to 300mm wafers or SOI (silicon-on-insulator) technology.

That’s a shame — not because PC shoppers need to know every detail of silicon engineering, but because they like to get faster, more affordable CPUs. And better performance and lower costs are the twin engines behind today’s and tomorrow’s changes in semiconductor manufacturing. Let’s take a quick look at a few major trends.

From Bigger Circles …

Perhaps the most fundamental transition facing chipmakers is the move from carving chips out of 200mm silicon wafers (about 8 inches in diameter) to using 300mm wafers (about 12 inches in diameter, yielding 2.25 times as much surface area per wafer). The logic behind this move is simple: The bigger the wafer, the more chips that can be made from it, and the lower the cost of producing a single chip — roughly 30 percent lower, according to Intel.
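
A quick back-of-the-envelope check shows where that 2.25 figure comes from. The die-count comparison below is a deliberately crude sketch (the die size is hypothetical, and real counts depend on edge exclusion and yield):

```python
import math

# Back-of-the-envelope wafer comparison. This deliberately ignores edge
# exclusion and yield, so the die counts are only rough approximations.

def wafer_area_mm2(diameter_mm: float) -> float:
    """Surface area of a circular wafer, in square millimeters."""
    return math.pi * (diameter_mm / 2) ** 2

DIE_AREA_MM2 = 130  # hypothetical die size, chosen purely for illustration

area_200 = wafer_area_mm2(200)
area_300 = wafer_area_mm2(300)

print(f"Area ratio (300mm vs. 200mm): {area_300 / area_200:.2f}")  # 2.25
print(f"Rough dies per 200mm wafer:   {int(area_200 // DIE_AREA_MM2)}")
print(f"Rough dies per 300mm wafer:   {int(area_300 // DIE_AREA_MM2)}")
```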

Intel opened a 300mm development facility in Oregon in 2001 and plans to have three high-volume 300mm “fabs” (fabrication plants, or chip factories) running by 2004, while AMD has partnered with United Microelectronics Corp. (UMC) to build a Singapore fab scheduled to open in 2005. So if 300mm wafers are such a good idea, why is the CPU industry just now getting around to using them?

The answer is that, apart from technological barriers, there’s a huge economic one: According to Dick Deininger, director of manufacturing technology at AMD in Austin, Tex., a 300mm-wafer fab costs $2.5 billion to $3.5 billion to build, compared to $1.5 billion for a 200mm plant.

If you’re going to invest that much to produce more than twice as many chips from each wafer, you’d better be sure there’s a market out there for them. That’s why Intel began building a 300mm fab in Ireland in June 2000, halted construction when the economy slumped, and restarted work in April 2002. “People have been waiting for enough volume to support the very large investment in building, from the ground up, one of these large fabs,” explains AMD vice president for process technology Craig Sander. “Economics is what has delayed 300mm more than anything else.”

These new facilities offer advantages beyond just making whopper wafers. They are likely to have lower “defect densities,” meaning the number of imperfections per square inch of wafer, adds Peter Glaskowsky, editor in chief of the Microprocessor Report newsletter published by In-Stat/MDR in Scottsdale, Ariz.

While 300mm wafers are the wave of the future, their benefits are eluding consumers today because there aren’t enough 300mm fabs producing enough chips to have a serious impact on the market. It will take “a few years” before that happens, Glaskowsky says.

Nevertheless, the move to 300mm manufacturing is a very big deal. While Moore’s Law holds that processing power doubles roughly every 18 months, a change in wafer size can take a decade to arrive.

…Come Smaller Squares

You can think of the shift to 300mm wafers as analogous to feeding more people at a pizza party by making a bigger pizza (using a larger pan). Another way to feed more partygoers is to serve smaller slices. And within the past year, both AMD and Intel have done so by moving their respective Athlon XP and Pentium 4 CPUs from 0.18- to 0.13-micron process manufacturing; the micron figure refers to the size of the smallest circuit features etched onto the chip.
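
As a rough illustration, and only that, here is a sketch that assumes transistor density scales with the inverse square of the feature size (real chips also depend on design rules and layout), showing that the 0.18-to-0.13-micron shrink packs nearly twice as many transistors into the same area:

```python
# Rough density comparison: treat transistor density as proportional to
# 1 / (feature size)^2. Ignores design rules; for illustration only.

old_process_um = 0.18
new_process_um = 0.13

density_gain = (old_process_um / new_process_um) ** 2
print(f"Approximate density gain: {density_gain:.2f}x")  # about 1.9x
```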

Besides letting manufacturers make more CPUs from each wafer, denser process manufacturing makes the CPUs themselves more efficient. The chip’s transistors can switch faster; they require less energy; the chip runs cooler; and designers can pack more transistors onto the same size die. All these things boost performance.

In the second half of 2003, Intel’s “Prescott” Pentium 4 redesign will take the next step — from 0.13-micron to 0.09-micron process technology, although it’s not called that. Below a tenth of a micron (the size, Intel points out, of a typical virus), it’s fair to say you’ve entered the realm of nanotechnology. So while technicians acquired the habit of saying “0.13-micron” instead of “130-nanometer,” the next process plateau is referred to as 90-nanometer technology.

After Prescott breaks the ice, AMD says 90-nanometer versions of its Opteron server/workstation and Athlon ClawHammer desktop/notebook CPUs will ship in the first half of 2004. Ultimately, the new technology promises chips that will run twice as fast and be half the size of their 0.13-micron predecessors.

More Super Conductors

Besides making the interconnects between transistors shorter, chipmakers are working to make them better conductors. Over the last few years, for instance, one material that’s been migrating into CPUs has been copper, which boosts performance because, as every home electrician knows, it’s an excellent electrical conductor.

Using copper in CPUs would seem to be a no-brainer. But before 1997, lower-conductivity aluminum was used instead, because copper atoms could leak into or “poison” the transistors. IBM pioneered a way to stop that from happening, followed in 1998 by a venture between AMD and Motorola that led to the first copper-interconnect desktop PC processors.

Most chips today take advantage of copper, and vendors are busily pursuing further efficiencies. One innovation, explains Glaskowsky, is to place an insulating layer between the chip’s transistors and the bulk silicon beneath them, a technique called silicon-on-insulator (SOI): “Separating the transistors from the silicon base allows them to move faster, because they’re not being dragged down by having the silicon nearby.”

What drags them down is parasitic capacitance, the tendency of a transistor and the nearby silicon to store unwanted electrical charge. According to AMD, SOI design can cut this drag by 20 to 25 percent, or, alternatively, slice a CPU’s power consumption in half without slowing performance.
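
To see why capacitance matters so much, consider the textbook first-order model of a chip’s switching power, P = C × V² × f. The sketch below uses hypothetical capacitance and voltage values, not AMD’s published figures, to show how trimming capacitance, plus the lower operating voltage it permits, can roughly halve power:

```python
# First-order dynamic (switching) power model: P = C * V^2 * f.
# All numbers below are hypothetical, for illustration only; they are
# not AMD's published figures for any real processor.

def dynamic_power(capacitance_f: float, voltage_v: float, freq_hz: float) -> float:
    """Switching power in watts for a given switched capacitance, supply voltage, and clock."""
    return capacitance_f * voltage_v ** 2 * freq_hz

baseline = dynamic_power(12e-9, 1.60, 2.0e9)                    # ~61 W
less_cap = dynamic_power(12e-9 * 0.78, 1.60, 2.0e9)             # ~22% less capacitance -> ~48 W
less_cap_and_volts = dynamic_power(12e-9 * 0.78, 1.28, 2.0e9)   # plus a lower voltage -> ~31 W

print(f"Baseline:                {baseline:5.1f} W")
print(f"With ~22% less C:        {less_cap:5.1f} W")
print(f"With less C and lower V: {less_cap_and_volts:5.1f} W  (roughly half)")
```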

While SOI technology is available today in the PowerPC processors of high-end Apple Macs, “it will be adopted more slowly by the x86 companies because they need to manufacture chips in higher volumes,” Glaskowsky says. AMD has promised to take the plunge, incorporating SOI in its Hammer family of CPUs due in the first half of 2003.

Rival Intel says it’s chosen a different technique to tweak a processor’s performance — strained silicon, which changes the silicon lattice structure to speed the flow of electrons through it, like stretching a piece of fishnet fabric to make the holes bigger. Intel says the strained-silicon design that will debut with its 90-nanometer “Prescott” CPUs enhances drive current by 10 to 20 percent while adding only 2 percent to the manufacturing cost.

Chipmakers are also looking at the wires within chips in the quest to optimize performance. Intel’s 90-nanometer designs will pair copper with “low-k dielectrics,” a new kind of insulation between wires that increases signal speed and reduces power consumption, and AMD’s manufacturing partner UMC is pursuing the same combination. “The insulation around the wire influences how fast signals go through it,” Glaskowsky explains. “By changing the insulation, you can make the wire carry signals faster.”
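
The relationship Glaskowsky describes is simple at first order: a wire’s signal delay scales with its resistance times its capacitance (RC), and the capacitance scales with the dielectric constant, k, of the surrounding insulation. A minimal sketch, assuming a representative low-k value of about 2.9 (the exact figure varies by process):

```python
# Illustrative first-order interconnect delay comparison.
# Signal delay on a wire scales roughly with R * C, and the wire's
# capacitance scales with the dielectric constant (k) of the insulator
# around it. The low-k value below is an assumed, representative figure.

SIO2_K = 3.9   # conventional silicon dioxide insulation
LOW_K = 2.9    # assumed, representative low-k dielectric

def relative_rc_delay(k: float, relative_resistance: float = 1.0) -> float:
    """Relative RC delay, treating capacitance as proportional to k."""
    return relative_resistance * k

improvement = 1 - relative_rc_delay(LOW_K) / relative_rc_delay(SIO2_K)
print(f"Roughly {improvement:.0%} less wire delay with the low-k insulator")
```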

The Incredible Shrinking Chip?

Some elements within a CPU are even smaller than its manufacturing process size — Intel’s current, 0.13-micron Pentium 4 includes transistors that measure just 60 nanometers, and the company says its 90-nanometer CPUs will feature 50-nanometer transistors whose gate oxides are literally only five atoms thick. Indeed, as processors get smaller and smaller, analysts predict they’ll reach a point where they can’t shrink any further — at least not using silicon.

There’s some debate as to when that point will be reached; some say this silicon Ragnarok is still 10 to 20 years away, while others warn it could occur as soon as five years from now. But there’s no need to lie awake over the prospect, since researchers are busily exploring technologies that could, if necessary, replace silicon.

One potential alternative: carbon nanotubes, cylinders of carbon atoms as small as 10 atoms across — 500 times smaller than today’s silicon-based transistors. Though far from optimized, nanotube transistors that outperform the silicon originals have already been created in the lab, says Phaedon Avouris, manager of nanoscale science and technology for IBM in Yorktown Heights, N.Y.

The first appearance of nanotubes in commercial products is expected next year, with several companies promising flat-panel displays based on the technology. Such screens are touted to have long lifetimes (over 10,000 hours), with wider viewing angles and ultimately lower cost than today’s LCDs.

Says Avouris, “Research has shown us that nanotubes have a remarkable set of electrical properties that free them from the kinds of problems that silicon electronics will face in 10 years or so.”
