
ATI’s Radeon X1900 series graphics cards

Scott Wasson
SO YOU’RE SITTING IN your executive suite just outside of Toronto, reading your favorite website’s stellar review of the GeForce 7800 GTX 512. This new GPU from your main competitor is putting the hurt on your company’s own high-end offering, the Radeon X1800 XT. In fact, the contest is more lopsided than the NFC championship game. Not good. What can you do about it?

If you’re a bigwig at ATI, you’ve got quite a few arrows in your quiver. You have a next-generation GPU design just recently introduced, loaded with the latest bells and whistles. You’ve already made the conversion to a 90nm chip fabrication process, so your transistor budgets are ample. And you have a pretty good idea what it might take to win back the performance crown. You’d probably order up a new chip with a whole lot more power in order to meet the competition head on. You’d want to do something gaudy, something that would be sure to raise eyebrows and also pack a heckuva wallop.

48 pixel shaders ought to do it, don’t you think?

That’s exactly how many pixel shader units ATI has packed into its new GPU, the Radeon X1900. Yes, you read that right: for-tee eight.

If you really wanted to make a splash, perhaps you’d hook two of them together into a CrossFire configuration for a total of 96 pixel shaders churning out eye candy by the bucketload. That oughta show ’em. And then you’d price ’em nice and high, but make sure that cards were widely available on their launch day, with thousands of those puppies lined up at online retailers, ready to sell.

Sounds like a plan to me. In fact, that is very much ATI’s plan for the Radeon X1900 series, and your favorite website has benchmarked the stuffing out of the high-end lineups from ATI and NVIDIA in order to see how these new entries fit into the picture. With 96 pixel shaders tearing through F.E.A.R. like Michael Moore through a loaf of cheese bread, does NVIDIA stand a chance?

R580 pours on the pixel shaders
The chip that powers ATI’s new Radeon X1900 lineup has been known through most of its life until now by its codename, R580. The R580 is the successor to the R520 GPU that powers the Radeon X1800 series of products, and it’s derived from the same basic chip architecture. The only truly major change in the R580 is the expansion of the number of pixel shader units on the GPU. Since R500-class products are very modular internally, ATI can strategically add resources with relative ease. In a nutshell, that’s how we’ve arrived at this mind-boggling situation where there are 48 pixel shader units on a single GPU.


A block diagram of R580’s shader core. Source: ATI.

The R580 block diagram above is very much a statement from ATI about where they think PC graphics is going. Like the mid-range Radeon X1600, the R580 is a radically asymmetrical design, heavy on the pixel shaders and general-purpose registers but relatively easy on some of the resources that have traditionally gone with them. ATI clearly believes that game developers will be making extensive use of the computational power of pixel shaders in future games, and they have spent the R580’s transistor budget accordingly. To give you a better sense of what I’m talking about, have a look at the table below, which shows how the R580 stacks up against the competition in handling various stages of what used to be the pixel pipeline.

                         Vertex    Pixel     Texture   Render      Z         Max.
                         shaders   shaders   units     back-ends   compare   threads
Radeon X1300 (RV515)     4         4         4         4           4         128
Radeon X1600 (RV530)     5         12        4         4           8         128
Radeon X1800 (R520)      8         16        16        16          16        512
Radeon X1900 (R580)      8         48        16        16          16        512
GeForce 6200 (NV44)      3         4         4         2           2/4       ?
GeForce 6600 (NV43)      3         8         8         4           8/16      ?
GeForce 6800 (NV41)      5         12        12        8           8/16      ?
GeForce 7800 GT (G70)    7         20        20        16          16/32     ?
GeForce 7800 GTX (G70)   8         24        24        16          16/32     ?

R580 is slightly smaller than G70

The R580 offers no more power than the R520 in terms of its ability to apply textures to pixels, perform depth comparisons, or write pixels to a frame buffer. Also, like the R520, the R580 has eight vertex shader units—no changes there.

So the basic story is: more pixel shaders. At 650MHz, the Radeon X1900 XTX should have at its disposal 31.2 billion pixel shader cycles per second. Pixel shader units and their computational capabilities vary greatly from one GPU architecture to the next, of course, but the GeForce 7800 GTX 512’s 24 pixel shaders running at 550MHz will only reach 13.2 billion cycles per second. The R580’s rich endowment of shaders is naturally a very good thing, because pixel shaders enable developers to employ all kinds of neat tricks like parallax occlusion mapping and high-dynamic-range lighting to make real-time graphics look more realistic. This bias toward shader power makes the R580 a very forward-looking design, and as is often the case with forward-looking designs, there may be a little bit of trouble with today’s applications. Now, don’t get me wrong here. A chip this capable can run just about any current game very well, but it may not reach its fullest potential while running them. Meanwhile, it has to compete with the likes of the GeForce 7800 GTX 512, which crams in more texturing capability per clock cycle than the R580—something to keep in mind when we turn toward the benchmark results.
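For the curious, those peak shader cycle figures are simple multiplication. Here’s a quick sketch in Python; the function name is my own, and the unit counts and clock speeds come from the specs above:

```python
# Peak pixel-shader cycles per second: units * clock (MHz) * 1e6.
# This is a theoretical ceiling, not a measured throughput.
def shader_cycles_per_sec(units, clock_mhz):
    """Peak pixel-shader cycles per second for a given unit count and clock."""
    return units * clock_mhz * 1_000_000

x1900_xtx = shader_cycles_per_sec(48, 650)   # Radeon X1900 XTX
gtx_512   = shader_cycles_per_sec(24, 550)   # GeForce 7800 GTX 512

print(x1900_xtx / 1e9)  # 31.2 (billion cycles/s)
print(gtx_512 / 1e9)    # 13.2 (billion cycles/s)
```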

This major upgrade in pixel shader power isn’t the only tweak to the R580, though. ATI has also added a couple of minor wrinkles to boost performance in specific situations.

You may recall how we noted a while ago that design limitations prevent some GPUs from achieving optimal performance at really high resolutions. This limitation particularly affects GeForce 6 series GPUs; their performance drops off markedly at resolutions above two megapixels, like 1920×1440 or 2048×1536. NVIDIA isn’t saying exactly what all is involved, but certain internal buffers or caches on the chip aren’t sized to handle more than that. Recent ATI graphics chips, including the Radeon X1800 series, have a similar limitation: their Hierarchical Z buffer can only handle up to two megapixels of resolution. The performance impact isn’t as stark as on the GeForce 6, but these Radeons can only use Hierarchical Z on a portion of the screen at very high resolutions; the rest of the screen must be rendered less efficiently. The R580 has a 50% larger on-chip buffer that raises that limit to three megapixels, so super-high-definition graphics should run more efficiently on Radeon X1900 cards.
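To put some numbers on those resolution limits, here’s a little Python sketch. I’m treating the two- and three-megapixel figures as hard cutoffs, which is a simplification of whatever is actually going on inside the chip, and the function names are mine:

```python
# Sketch of the Hierarchical Z resolution limits described above,
# assuming a hard cutoff: 2 MP for the R520, 3 MP for the R580.
def megapixels(width, height):
    return width * height / 1_000_000

def hier_z_covers_full_screen(width, height, limit_mp):
    """True if Hierarchical Z can cover the whole frame at this resolution."""
    return megapixels(width, height) <= limit_mp

# 1920x1440 is ~2.76 MP: over the R520's 2 MP limit, but within
# the R580's enlarged 3 MP buffer. 2048x1536 (~3.15 MP) exceeds both.
print(hier_z_covers_full_screen(1920, 1440, 2))  # R520: False
print(hier_z_covers_full_screen(1920, 1440, 3))  # R580: True
```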

The R580’s other new trick is a simple but effective optimization. The GPU’s texturing units are designed primarily to fetch traditional four-component textures—with red, green, blue, and alpha—from memory. However, some textures have only a single component, such as those used to store depth values for popular techniques like shadow maps. The R580’s new Fetch4 capability allows it to fetch four adjacent values from a single-component texture at once, potentially raising texture sampling rates substantially.
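Here’s a toy illustration of the idea in Python. This obviously isn’t ATI’s implementation, just a sketch of what “fetch four adjacent values at once” means for a single-component texture like a shadow map; the names and data are invented:

```python
# Illustrative Fetch4 sketch: one fetch returns the 2x2 footprint of
# adjacent texels from a single-component texture, instead of four
# separate fetches.
def fetch4(texture, x, y):
    """Return the 2x2 block of single-component texels at (x, y)."""
    return (texture[y][x],     texture[y][x + 1],
            texture[y + 1][x], texture[y + 1][x + 1])

# A tiny single-channel depth map, like those used for shadow mapping.
depth_map = [
    [0.1, 0.2, 0.3],
    [0.4, 0.5, 0.6],
    [0.7, 0.8, 0.9],
]
print(fetch4(depth_map, 0, 0))  # (0.1, 0.2, 0.4, 0.5)
```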

Neither of these optimizations will make the R520 look out-of-date overnight, but they should be nice to have.

So what is required to cram these tweaks plus 48 pixel shaders into a single chip? Lots of everything—a 90nm fab process, roughly 384 million transistors, and over 314 mm2 of die space. The table below has some rough estimates; caveats to follow.

                         Transistors   Process     Approx. die
                         (millions)    size (nm)   size (sq. mm)
Radeon X1300 (RV515)     105           90          95
Radeon X1600 (RV530)     157           90          132
Radeon X1800 (R520)      321           90          263
Radeon X1900 (R580)      384           90          314.5
GeForce 6200 (NV44)      75            110         110
GeForce 6600 (NV43)      143           110         156
GeForce 6800 (NV41)      190           110         210
GeForce 7800 (G70)       302           110         333

The transistor counts are from ATI and NVIDIA, with each company giving numbers for its own chips. Unfortunately, transistor counts are a source of consternation for both sides, because they seem to count transistors differently from one another. In other words, one should never, ever put those numbers together in a comparison table like the one above. That would be wrong.

Also, the die size numbers are based on my own plastic-ruler measurements of the chips. There are probably more accurate ways of getting this information, like random guessing. I’ve offered my numbers for whatever they’re worth.

However you slice it, the R580 is a big chip. Even though it’s fabbed using a 90nm process, it’s nearly as large as NVIDIA’s G70, and even with a disparity in counting methods, the R580 clearly must have many more transistors than the G70.

The Radeon X1900 family
ATI is spinning the R580 out into a family of four Radeon X1900-series products, and this family is decidedly upper-middle class.

                         GPU clock (MHz)   Memory clock (MHz)   Price
All-In-Wonder X1900      500               960                  $499
Radeon X1900 XT          625               1450                 $549
Radeon X1900 CrossFire   625               1450                 $599
Radeon X1900 XTX         650               1550                 $649

ATI’s new king of the hill is the Radeon X1900 XTX, which is about all of the X’s we can handle in a name. (Secretly, though, I wish XFX would begin making Radeons, so we could be treated to the XFX Radeon X1900 XTX.) The X1900 XTX should be only slightly faster than the Radeon X1900 XT, but you’ll pay a hundred bucks extra for the bragging rights on what may be the fastest single video card known to man. Obviously, both the XT and the XTX should perform very well.

If the performance of just one of these cards isn’t enough for you, you can slap down another $599 for the Radeon X1900 CrossFire Edition. This card is clocked like an X1900 XT, not an XTX, so it will limit performance somewhat for XTX owners in the most efficient CrossFire load-balancing mode, alternate frame rendering. Still, with 96 pixel shaders at your beck and call, I suspect you’ll somehow suffer through.

ATI says all three of these Radeon X1900 cards—the Radeon X1900 XT, XTX, and CrossFire—should be available at online retailers starting today. They have said such things in the past, of course, and missed the date by a week or more. But they’ve been very emphatic about their determination to achieve widespread product availability on the launch date, and I’m hopeful. Some online retailers even started listing the cards for sale ahead of time.

ATI will also begin selling the multimedia-oriented All-in-Wonder X1900 on its own website today, with availability at other online stores “in the coming weeks.” They’re aiming to deliver a European version of the AIW X1900 by the middle of February, as well.


The Radeon X1900 CrossFire (left) and Radeon X1900 XTX (right)

The Radeon X1900 cards themselves look comfortingly familiar, with the same basic layout and dual-slot cooler as the Radeon X1800 XT. In operation, they make about as much noise as a Radeon X1800 XT—or not too terribly much. Those blowers do kick it up a notch or two when running a 3D app or game, but noise levels are similar to other high-end graphics cards.


The Radeon X1900 CrossFire’s compositing engine is unchanged from the X1800

ATI hasn’t made any notable changes to the compositing engine on the Radeon X1900 CrossFire Edition, either. This card has the same set of chips and compositing capabilities as the Radeon X1800 CrossFire master card. This newer CrossFire engine relieves some of the worst shortcomings of the original Radeon X850 CrossFire scheme, but it doesn’t eliminate the need for a special CrossFire Edition card or an external dongle connector.

In case you were wondering, the Radeon X1900 series is more or less a replacement for the Radeon X1800 line. Of course, Radeon X1800 cards should still be available for a time until resellers sell out of their current stock, and there may be some bargains to be had. However, once those are gone, that’s pretty much it. The X1800 series will be no more, and ATI’s product offerings will officially consist of the Radeon X1300, X1600, and X1900 lines. That leaves a gaping hole between the bottom of the X1900 series—the AIW X1900 at $499, if you count AIW cards—and the Radeon X1600 XT at $169. I’m sure ATI will address this vast swath of the market with new products at some point, but apparently the Radeon X1800 won’t live on in a lower price bracket.

Our testing methods
As ever, we did our best to deliver clean benchmark numbers. Tests were run at least three times, and the results were averaged.

Our test systems were configured like so:

Processor                Athlon 64 X2 4800+ 2.4GHz
System bus               1GHz HyperTransport
Motherboard              Asus A8N32-SLI Deluxe (NVIDIA cards) / ATI RD480 CrossFire reference board (ATI cards)
BIOS revision            0806 / 080012
North bridge             nForce4 SLI X16 / Radeon Xpress 200P CrossFire Edition
South bridge             SB450 (ATI system)
Chipset drivers          SMBus driver 4.50 / SMBus driver 5.10.1000.5
Memory size              2GB (2 DIMMs)
Memory type              Crucial PC3200 DDR SDRAM at 400MHz
CAS latency (CL)         2.5
RAS to CAS delay (tRCD)  3
RAS precharge (tRP)      3
Cycle time (tRAS)        8
Hard drive               Maxtor DiamondMax 10 250GB SATA 150
Audio                    Integrated nForce4/ALC850 with Realtek 5.10.0.5900 drivers / Integrated SB450/ALC880 with Realtek 5.10.00.5188 drivers
Graphics (NVIDIA system, all with ForceWare 81.98 drivers):
    GeForce 6800 GS 256MB PCI-E
    Dual GeForce 6800 GS 256MB PCI-E
    XFX GeForce 7800 GT 256MB PCI-E
    Dual XFX GeForce 7800 GT 256MB PCI-E
    MSI GeForce 7800 GTX 256MB PCI-E
    Dual MSI GeForce 7800 GTX 256MB PCI-E
    GeForce 7800 GTX 512 512MB PCI-E
    Dual GeForce 7800 GTX 512 512MB PCI-E
Graphics (ATI system, all with Catalyst 8-203-3-060104a-029367E drivers):
    Radeon X1800 XL 256MB PCI-E
    Radeon X1800 XT 512MB PCI-E
    Radeon X1800 CrossFire + Radeon X1800 XT 512MB PCI-E
    Radeon X1900 XTX 512MB PCI-E
    Radeon X1900 CrossFire 512MB PCI-E
    Radeon X1900 CrossFire + Radeon X1900 XTX 512MB PCI-E
OS                       Windows XP Professional (32-bit)
OS updates               Service Pack 2, DirectX 9.0c SDK update (December 2005)

Thanks to Crucial for providing us with memory for our testing. 2GB of RAM seems to be the new standard for most folks, and Crucial hooked us up with some of its 1GB DIMMs for testing. Although these particular modules are rated for CAS 3 at 400MHz, they ran perfectly for us at 2.5-3-3-8 with 2.85V.

All of our test systems were powered by OCZ PowerStream 520W power supply units. The PowerStream was one of our Editor’s Choice winners in our last PSU round-up.

Unless otherwise specified, the image quality settings for both ATI and NVIDIA graphics cards were left at the control panel defaults.

The test systems’ Windows desktops were set at 1280×1024 in 32-bit color at an 85Hz screen refresh rate. Vertical refresh sync (vsync) was disabled for all tests.

We used the following versions of our test applications:

The tests and methods we employ are generally publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

Pixel-pushing power
I’ve already mentioned that the Radeon X1900 is a forward-looking design that’s heavy on pixel shaders at the expense of other things. The numbers below will illustrate that point. These aren’t measures of pixel-shading power, just basic measures of theoretical peak ability to draw pixels onscreen. These numbers become less relevant as architectures like the X1900 take hold, but they’re still noteworthy indicators of performance potential, especially in older games and apps.

                       Core    Pixels/  Peak fill    Textures/  Peak fill    Memory  Memory bus  Peak memory
                       clock   clock    rate         clock      rate         clock   width       bandwidth
                       (MHz)            (Mpixels/s)             (Mtexels/s)  (MHz)   (bits)      (GB/s)
Radeon X1600 XT        590     4        2360         4          2360         1380    128         22.1
GeForce 6800           325     8        2600         12         3900         700     256         22.4
GeForce 6600 GT        500     4        2000         8          4000         1000    128         16.0
Radeon X800            400     12       4800         12         4800         700     256         22.4
GeForce 6800 GS        425     8        3400         12         5100         1000    256         32.0
GeForce 6800 GT        350     16       5600         16         5600         1000    256         32.0
Radeon X800 XL         400     16       6400         16         6400         980     256         31.4
GeForce 6800 Ultra     425     16       6800         16         6800         1100    256         35.2
GeForce 7800 GT        400     16       6400         20         8000         1000    256         32.0
Radeon X1800 XL        500     16       8000         16         8000         1000    256         32.0
Radeon X850 XT         520     16       8320         16         8320         1120    256         35.8
Radeon X850 XT PE      540     16       8640         16         8640         1180    256         37.8
XFX GeForce 7800 GT    450     16       7200         20         9000         1050    256         33.6
Radeon X1800 XT        625     16       10000        16         10000        1500    256         48.0
Radeon X1900 XT        625     16       10000        16         10000        1450    256         46.4
GeForce 7800 GTX       430     16       6880         24         10320        1200    256         38.4
Radeon X1900 XTX       650     16       10400        16         10400        1550    256         49.6
GeForce 7800 GTX 512   550     16       8800         24         13200        1700    256         54.4

Despite its additional pixel shading power, the Radeon X1900 XT has no more fill rate than the Radeon X1800 XT. In fact, its memory is clocked slightly slower than the X1800 XT, believe it or not. The XTX offers a minor bump over the X1800 XT in terms of fill rates and memory clocks, but it’s not enough to match the multitextured fill rate or memory bandwidth of the GeForce 7800 GTX 512.
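The theoretical peaks in the table above come from simple formulas: core clock times pixels (or textures) per clock for fill rate, and memory clock times bus width for bandwidth. A quick Python sketch (the function names are my own; the figures come from the table):

```python
# Theoretical peak fill rate and memory bandwidth formulas.
def pixel_fill_mpixels(clock_mhz, pixels_per_clock):
    return clock_mhz * pixels_per_clock          # Mpixels/s

def texel_fill_mtexels(clock_mhz, textures_per_clock):
    return clock_mhz * textures_per_clock        # Mtexels/s

def mem_bandwidth_gbps(mem_clock_mhz, bus_width_bits):
    return mem_clock_mhz * bus_width_bits / 8 / 1000   # GB/s

# Radeon X1900 XTX: 650MHz core, 16 pixels/clock, 1550MHz memory, 256-bit bus
print(pixel_fill_mpixels(650, 16))       # 10400
print(mem_bandwidth_gbps(1550, 256))     # 49.6
# GeForce 7800 GTX 512: 550MHz core, 24 textures/clock, 1700MHz memory, 256-bit bus
print(texel_fill_mtexels(550, 24))       # 13200
print(mem_bandwidth_gbps(1700, 256))     # 54.4
```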

3DMark’s synthetic fill rate tests allow us to put those theoretical numbers to the test. Few cards can reach their theoretical peaks in the single-textured test, but most achieve numbers close to their peaks in the multitextured test. The Radeon X1900 XT just trails the X1800 XT here—a virtual tie—while the XTX edges out the 256MB version of the GeForce 7800 GTX.

Quake 4
We tested Quake 4 using our own custom-recorded timedemo. The game was running at its “Ultra” quality settings with 4X antialiasing enabled.

Like Doom 3 before it, Quake 4 is an OpenGL game, and that has generally meant bad things for ATI cards. Nevertheless, the Radeon X1900s put in a respectable showing, though they can’t quite keep pace with the GeForce 7800 GTX 512.

Half-Life 2: Lost Coast
This new expansion level for Half-Life 2 makes use of high-dynamic-range lighting and some nice pixel shader effects to create an impressive-looking waterfront. We tested with HDR lighting enabled on all cards.

Here the pixel shader upgrade helps quite a bit, as the X1900 cards leap out ahead of the Radeon X1800 XT. The GeForce 7800 GTX 512 runs neck and neck with the X1900s, too, until we get into dual-GPU configurations, where the X1900 CrossFire rig is easily fastest.

F.E.A.R.
We tested the next few games using FRAPS and playing through a portion of the game manually. For these games, we played through five 60-second gaming sessions per config and captured average and low frame rates for each. The average frames per second number is the mean of the average frame rates from all five sessions. We also chose to report the median of the low frame rates from all five sessions, in order to rule out outliers. We found that these methods gave us reasonably consistent results.
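For clarity, here’s a small Python sketch of that aggregation, using made-up session numbers rather than our actual data:

```python
# Mean of per-session average frame rates, median of per-session lows.
# The session numbers below are invented for illustration.
from statistics import mean, median

avg_fps_per_session = [62.1, 58.4, 60.3, 61.7, 59.5]  # five 60-second runs
low_fps_per_session = [31, 12, 29, 30, 28]            # note the 12 FPS outlier

reported_avg = mean(avg_fps_per_session)
reported_low = median(low_fps_per_session)  # the median rules out the outlier

print(round(reported_avg, 1))  # 60.4
print(reported_low)            # 29
```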

All of F.E.A.R.’s graphics quality options were set to maximum for our testing. Computer performance was set to medium.

ATI’s new cards are easily quickest amongst the single-card configs, with the X1900 XT averaging over 10 frames per second faster than the GeForce 7800 GTX 512. However, the GTX 512 SLI system manages to snag the top spot overall away from the X1900 CrossFire rig, which doesn’t scale up as well.

Battlefield 2
We’re testing BF2 at an insanely high resolution because it runs really well on just about any of these cards at lower resolutions. Also, BF2 has a built-in frame rate cap of 100 FPS. We didn’t want to turn off the cap, but we did want to see some differences in performance between the cards.

Here’s another instance where the Radeon X1900 XTX proves to be the fastest single card around, but it can’t quite beat out the GTX 512 in SLI. In this case, the X1900 CrossFire system effectively performs more like two Radeon X1900 XT cards than like two XTX cards, since it’s limited by the slower CrossFire card. That keeps it from overtaking the GTX 512 SLI setup.

Guild Wars
Like the two above, we played this game manually and recorded frame rates with FRAPS. In this case, we’re playing an online game, so frame rates were subject to some influence from an uncontrollable outside factor. Regardless, I think the numbers below reflect performance pretty well.

ATI seems to have fixed its CrossFire slowdowns in Guild Wars; the dual-card solution is now faster than a single card, at least. The X1900 cards run a little bit slower than the GeForce 7800 GTX 512, as might be expected given the relatively simple lighting in this game—pixel shader power probably isn’t at a premium.

3DMark06
This will be our first foray into 3DMark06. Let’s see what it can show us about these cards.

3DMark06’s overall score is influenced by four graphics tests and two CPU tests. Overall scores in 3DMark06 track pretty closely with the relative performance we saw from these cards in Half-Life 2: Lost Coast with HDR. The new ATI cards score highest, but the GeForce 7800 GTX 512 isn’t too far behind them.

3DMark06 – SM2.0 score
This next score is determined by the results from two separate graphics tests, both of which use DirectX 9’s Shader Model 2.0.

The X1900 cards come out on top here, especially at higher resolutions.

3DMark06 – SM2.0 test 1

3DMark06 – SM2.0 test 2

Both Radeon X1900 cards outperform the GTX 512 in the two Shader Model 2.0 graphics tests.

3DMark06 – HDR/SM3.0 score
This next score is determined by the results of the two HDR/SM3.0 tests. As you might have guessed, these tests use high-dynamic-range lighting and Shader Model 3.0-based effects.

The X1900s retain their slim advantage over the GTX 512 in the HDR/SM3.0 tests. The R580’s additional pixel shaders give it much higher performance here than the Radeon X1800 XT.

3DMark06 – HDR/SM3.0 test 1
This scene uses the same 3D models as it did in 3DMark05, but the water shader is much improved and the HDR lighting makes for some spectacular scenes.

ATI walks away with this one easily.

3DMark06 – HDR/SM3.0 test 2

This HDR test is more of a contest than the last one. The GTX 512 and the Radeon X1900 XT are closely matched; the XTX is faster than either of them.

3DMark06 – Feature tests

As one might expect, the Radeon X1900 cards come out on top in 3DMark’s simple pixel shader test. I had hoped for a more comprehensive pixel shader test in 3DMark06, but we’ll have to continue using ShaderMark for such things.

Interestingly, the Radeon X1900 XT proves to be a little faster than the Radeon X1800 XT in the vertex shader tests, despite the fact that both GPUs have eight vertex units running at 625MHz. Perhaps ATI has tweaked its vertex units a bit?

Also, note that CrossFire is a net loss in the simple vertex shader test. Somehow, this one doesn’t scale well for ATI. NVIDIA’s SLI wrings a little more performance out of a dual-card config in this same test.

ShaderMark
Next up is ShaderMark, one of the few synthetic pixel shader benchmarks around. Pardon the massive amounts of data I’ve dumped into one graph. These numbers should give you some idea how each of these cards runs various types of pixel shader programs.

There’s much to discuss here, but the thing that jumps out at me first is the performance delta between the Radeon X1800 XT and the Radeon X1900 XT. The X1900 XT is up to 100% faster at times, but usually less so. Tripling the number of pixel shaders on the chip hasn’t paid off, at least in these shader programs, as much as one might have expected.

Overall, the Radeon X1900 XTX appears to have an edge on the GeForce 7800 GTX 512, but it’s not a towering advantage. NVIDIA’s pixel shader units on the G70 look to be very potent, clock for clock, running ShaderMark’s suite of effects. Also, notice that NVIDIA seems to have tweaked its drivers so that the shadow mapping shader runs faster when flow control is enabled. That wasn’t so in our initial Radeon X1000 series review.

ShaderMark also allows us to quantitatively analyze image quality, believe it or not. It does so by comparing output from the graphics card to the output of Microsoft’s DirectX 9 reference rasterizer, so this is more of a quantitative analysis of the deviation from Microsoft’s standard than anything else. The number is reported as the mean square error in the card’s output versus the reference image.
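Here’s a toy Python version of that comparison. The pixel values below are invented, and the real analysis runs per-channel across a full frame, but the arithmetic is the same:

```python
# Mean square error between a card's output and the reference
# rasterizer's output, over corresponding pixel values (toy data).
def mean_square_error(rendered, reference):
    """Average squared difference across corresponding pixel values."""
    diffs = [(a - b) ** 2 for a, b in zip(rendered, reference)]
    return sum(diffs) / len(diffs)

reference = [0.10, 0.50, 0.90, 0.30]   # reference rasterizer output
rendered  = [0.10, 0.52, 0.88, 0.30]   # card output, slightly off

print(round(mean_square_error(rendered, reference), 6))  # 0.0002
```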

The Radeon cards match the reference rasterizer’s output pretty closely in all but a few of the tests. The HDR shaders show a grid-type artifact that’s visible to the naked eye. I suspect this problem could be fixed via a future driver update. Similarly, the shadow mapping routine in ShaderMark doesn’t appear to function correctly at all with these Radeon cards and this driver revision. There’s no shadow! That’s why the mean square error is relatively large for these tests. This problem didn’t occur with old revisions of the ATI drivers. Beyond these two obvious problems, both the ATI and NVIDIA cards produce output very similar to that of the Microsoft reference rasterizer.

Power consumption
We measured total system power consumption at the wall socket using a watt meter. The monitor was plugged into a separate outlet, so its power draw was not part of our measurement. The idle measurements were taken at the Windows desktop with AMD’s Cool’n’Quiet CPU clock throttling function enabled. The cards were tested under load running Half-Life 2: Lost Coast at 1600×1200 resolution with 16X anisotropic filtering and HDR lighting enabled. We turned off Cool’n’Quiet for testing under load.

All of the graphics cards named below except for the two CrossFire setups were tested on the Asus A8N32-SLI mobo. We were forced to use the ATI CrossFire reference mobo, of course, for the CrossFire cards, so power consumption for the CrossFire systems will vary due to the difference in motherboards. Also, please note that the Radeon X1900 XT card shown here is actually the CrossFire master card, so its power consumption is probably slightly higher than a non-CrossFire X1900 XT that lacks the additional chips needed for image compositing.

I believe that our Radeon X1800 XL and XT cards are not wholly representative of the idle power consumption of retail cards, either. Our cards are early review units that lack the idle clock throttling we’ve observed in newer Radeon X1800 cards. Our X1900 review samples, however, do settle down to somewhat lower clock speeds at idle in order to save power and cut down on heat.

These are some surprisingly decent numbers out of a very beefy R580 GPU. The Radeon X1900 XT system eats less power at idle than the GeForce 7800 GTX 512, and the two cards are evenly matched under load—that probably means, given everything, that the Radeon X1900 XT has a performance per watt advantage on the GTX 512. The XTX is a bit more of a power hog, especially under load, thanks to its higher core and memory clocks. Still, ATI seems to be enjoying the benefits of having made the transition to 90nm before NVIDIA.

Conclusions
Well, it looks like the Radeon X1900 XTX has earned the title of the fastest single video card known to man. Oddly, though, two Radeon X1900 cards in a CrossFire config can’t claim the title of the fastest two video cards known to man. The CrossFire card’s marginally slower clock speeds, along with a tendency not to scale quite as well as SLI overall, leave that title with NVIDIA’s GeForce 7800 GTX cards in SLI. Still, the Radeon X1900 series is quite an accomplishment. If they are indeed widely available for sale as planned, they should be very solid choices for those looking to spend an insane amount of money on a graphics card, especially since GeForce 7800 GTX 512 cards have become scarce and expensive since their launch.

I’m probably not the one to tell you whether to pick the XT or the XTX version of the Radeon X1900. I’d choose the XT every time because of its lower price, lower power consumption, and nearly equivalent performance. But if you’re going to be parting ways with upwards of five hundred bucks for a video card, whether you toss in an extra hundred dollars to really go for the gold is between you, ATI, and Newegg—or something like that.

The Radeon X1900 cards are significantly better performers than the Radeon X1800s that they replace. I am a little bit perplexed, though, by ATI’s choices here. They’ve tied up roughly 60 million transistors and 50 square millimeters of die space on the R580 primarily in order to add pixel shading power, but even in ShaderMark, we didn’t see anything approaching three times the performance of the R520. Would this chip have performed any differently in any of our benchmarks with just 32 pixel shader units onboard? Seems like it is limited, in its present form, by other factors, such as pixel fill rate, memory bandwidth, or perhaps even register space. Who knows? Perhaps the R580 will age well compared to its competitors as developers require additional shader power and use flow control more freely. I wouldn’t be shocked to see future applications take better advantage of this GPU’s pixel shading prowess, but no application that we tested was able to exploit it fully. For current usage models, NVIDIA’s G70 architecture appears to be more efficient, clock for clock and transistor for transistor. Here’s hoping that ATI’s forward-looking design does indeed have a payoff down the road.

Then again, maybe I need to get over it. The Radeon X1900 XTX spots the GeForce 7800 GTX 512 advantages in both memory bandwidth and multitextured fill rate, yet the XTX still comes out on top in overall performance. It does so while consuming less power at idle and only a little bit more under load than the GTX 512. Even if we haven’t tapped all of the R580’s potential just yet, we’ve tapped enough to see that it’s the best thing going right now.
