
ATI’s Radeon X700 XT graphics card

Scott Wasson

THE STRUGGLE FOR THE upper hand in the graphics market has never been tighter than it is right now between ATI and NVIDIA. Both companies made giant strides forward this past spring with the introduction of new graphics chips boasting over twice the power of the previous top-of-the-line models. These new GPUs, the Radeon X800 and GeForce 6800, were very evenly matched when they arrived, although ATI’s Radeon X800 had a slight lead in overall performance. The race has only tightened as NVIDIA has refined its drivers and snazzy new games like Doom 3 have arrived.

The contest, however, is just beginning. Sure, some folks buy those $399 graphics cards, but I get a nosebleed just thinking about it. The real action is down under $200, where those of us who don’t host our own reality shows can afford to play. Two weeks ago, NVIDIA pulled the wraps off of the GeForce 6600 GT, a $199 product that offers performance superior to last month’s $299 graphics cards, causing us to proclaim the thing “freaking awesome.” Today, ATI responds with the launch of its own new mid-range GPU, the Radeon X700 XT. Derived from the Radeon X800, the X700 series offers all the fancy-schmancy new features of its big brother, including improved pixel shaders and a revamped memory controller. How does the Radeon X700 XT stack up against the GeForce 6600 GT? Read on to find out.

The Radeon X700 family
The time-honored tradition in designing a mid-range graphics chip goes something like this: saw your high-end product in half, give it half the memory and half the memory bandwidth, and you’re pretty much set. (Sure, there’s engineering work to be done in order to make that happen, but that’s the basic procedure.) This approach has become easier over time as graphics chips have grown in terms of internal parallelism and as engineers have made their chip designs more modular. The ATI R420 (Radeon X800 series) and NVIDIA NV40 (GeForce 6800 series) chips are excellent examples of this wider, more modular approach. Both chips segment their sixteen internal rendering pipelines into sets of four. These “quads” lend themselves to internal reshuffling, as twelve-pipe variants like the Radeon X800 Pro and GeForce 6800 demonstrate.

In the case of the chip behind the Radeon X700 series, the RV410, one would expect ATI to slice an R420 into two and call it a day. That’s not exactly what happened when ATI designed the RV410, however. Yes, ATI did give the RV410 a pair of four-pixel-pipeline quads so that the RV410 is an eight-pipe chip. Rather than cut the RV410 down further, though, ATI elected to retain all six vertex shader units from the X800 series, giving the RV410 a potential edge on the competition. (The NV43 chip that powers the GeForce 6600 series has only three of NVIDIA’s vertex shader units, but those units don’t necessarily deliver the same performance, clock for clock, as ATI’s.)

ATI also held the line on memory, reducing the width of the memory interface from 256 bits in the R420 to 128 bits in the RV410, but keeping 256MB of memory on the cards. At least, that was the original plan. ATI initially told us all three flavors of the Radeon X700 would come with 256MB of memory, but after seeing NVIDIA’s GeForce 6600 GT at $199 with 128MB of memory, the red team apparently changed its mind. When our review unit arrived, it was a version of the X700 XT with 128MB of memory and a $199 price tag. Here’s the rundown on the complete Radeon X700 lineup.

Card            | Core clock (MHz) | Pixel pipelines | Memory clock (MHz) | Memory bus width (bits) | On-board memory | Suggested price
Radeon X700     | 400              | 8               | 600                | 128                     | 128MB           | $149
Radeon X700 Pro | 420              | 8               | 864                | 128                     | 256MB           | $199
Radeon X700 XT  | 475              | 8               | 1050               | 128                     | 128MB           | $199
Radeon X700 XT  | 475              | 8               | 1050               | 128                     | 256MB           | $249

ATI set the clock speeds on the lower-end versions of the Radeon X700 some time ago, but it held off on setting final speeds for the X700 XT until it got a look at NVIDIA’s GeForce 6600 GT. The final clock speeds for the X700 XT are 475MHz for the graphics chip and 525MHz for the memory—25MHz lower than the competing NVIDIA card on the core clock, but 25MHz higher on the memory clock.

Like the competition, the Radeon X700 family will first be available as PCI Express cards, in part because the RV410 GPU has a PCI Express interface onboard, and in part because the big PC makers like Dell already prefer to sell PCI Express-based systems. ATI says Radeon X700 Pro PCI-E cards will begin shipping immediately to the big PC manufacturers, while the other X700 PCI-E cards should be ready to roll some time during October. No firm date has been set yet for online or retail availability of these cards to consumers. Similarly, ATI will eventually introduce AGP versions of the Radeon X700 that use a bridge chip to allow the RV410 GPU to talk to AGP-based systems, but the company hasn’t said yet when those cards will arrive.

The Radeon X700 XT
The Radeon X700 XT card has dimensions very similar to the Radeon 9600 XT that preceded it, and it’s very nearly the same size as a GeForce 6600 GT. The X700 XT’s cooler is similar in size to the 6600 GT’s, as well, but the X700 XT’s cooler is an all-copper affair that’s much, much heavier.

You can see the big rectangular vacancy on our X700 XT review unit in the top picture above. That’s where ATI’s Rage Theater chip will reside, to provide video input capabilities on cards with a VIVO option. ATI says some “made by ATI” cards in North America will be available with VIVO. Notice, also, the four blank pads on the underside of the card where memory chips would generally reside. This is a 128MB card. I’d expect 256MB cards to have memory chips populating those spots.

 

Die size comparison

One of the most important factors in the overall success of these chips will be how much they cost to manufacture. Estimates of things like transistor counts tend to vary, but chip die size is the real question, especially since both the RV410 and NV43 will be manufactured by TSMC on its 110-nanometer chip fabrication process. ATI and NVIDIA don’t like to divulge the die sizes of their chips, but it’s fairly easy to determine once you have the cards in hand. At the right is a photograph of the NV43 and RV410, taken side by side. As you can see, the two chips appear to be just about the same size.

By my measurements, the RV410 is 13 mm wide and 12 mm tall, or 156 square millimeters. The NV43 measures out to 12 mm wide and 13 mm tall—or darn near exactly the same size. That means that, despite their architectural differences, both chips should cost about the same to manufacture.
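Spelled out, the die-area arithmetic is trivial; here’s the quick check, using my measurements above:

```python
# Die areas from the measurements above, in millimeters. Both chips come
# off TSMC's 110nm process, so roughly equal area implies roughly equal
# manufacturing cost per chip.
rv410_area = 13 * 12   # RV410 (Radeon X700): 156 mm^2
nv43_area = 12 * 13    # NV43 (GeForce 6600): 156 mm^2
print(rv410_area, nv43_area)   # 156 156
```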

You may recall that the NV40 (also known as the GeForce 6800) is a little bit larger than the R420 (or Radeon X800). Apparently, NVIDIA has saved some die space on NV43.

 

Catalyst Control Center debuts
ATI supplied us with a beta copy of its Catalyst 4.10 drivers to go along with the Radeon X700 XT card. Like all new video drivers, these drivers are 400% more powerful and 600% more bug-free than the previous revision. This time around, though, there are some really noteworthy changes to ATI’s software. The most visible of those changes is ATI’s new Catalyst Control Center, a fancy new user interface for tweaking graphics options. The Catalyst Control Center features a “3D preview” window that purportedly shows the impact of various graphics settings. Here’s how it looks.

In my view, the preview scene is a little too small, a little too foggy, and a little too atypical to show the likely impact of changes to graphics options. However, for those who are utterly befuddled by the multitude of image quality and performance options available in a modern graphics driver, I’m sure this little scene must be better than absolutely nothing.

As you can see, the Catalyst Control Center looks quite slick. The interface is skinnable, and the Control Center installs with a selection of different skins.

Beyond the look and the preview window, though, I’m mostly unimpressed with Catalyst Control Center in its current state. In part because it requires Microsoft’s .NET Framework, the new drivers with CCC are a massive download, well over three times the size of NVIDIA’s latest drivers and almost twice the size of ATI’s previous driver revs. Also, the Control Center lacks polish, despite all the glitz. Using the default skin, pictured above, the CCC’s responses are sluggish and slow to refresh, as if the Control Center weren’t accelerated by ATI’s own 2D drawing capabilities. Some options have slider labels (low, standard, high) where there are no corresponding slider stops (only low and high are actual options). And on closing, the Control Center insists on relocating the mouse pointer to the center of the screen.

This software isn’t ready for prime time yet. I’m surprised ATI released it to the world as part of its Catalyst 4.9 driver drop. More troubling than the bugs and quirks is the basic interface design, which is sometimes baffling. One of my monitors doesn’t always show up properly via plug-and-play detection, so I have to set the refresh rate manually. On NVIDIA cards, that just means setting the refresh rate in the Windows “Monitor” tab in Display Properties. With ATI’s old drivers, one also had to set the refresh rate via the “Displays” tab in the ATI control panel. This option is positively hidden in Catalyst Control Center, requiring a right-click on a portion of the screen that doesn’t look clickable at all. Took me longer than I’d care to admit to find it; I honestly thought it wasn’t there. In a similar vein, the “no preview” mode for advanced users doesn’t expose all the driver options at once, despite using a massive amount of screen real estate. Some of the options are static in the dialog box, while finding others requires use of a scroll bar. I don’t know why.

I can see what ATI was hoping to do with the Control Center, and its goals are admirable. ATI is obviously trying to improve the usability of its drivers, and the CCC may yet become what it was intended to be. The company needs to rethink some of its decisions in order to get there, though.

 

Catalyst A.I. brings app-specific optimizations
As we’ve reported before, ATI’s Catalyst 4.10 drivers will include application-specific optimizations. NVIDIA has long used app-specific optimizations in its graphics drivers, but ATI has dismissed such things as unnecessary and somewhat unseemly. The change of heart at ATI seems prompted by a number of issues, not least of which is being locked in a battle for graphics supremacy that’s now much tighter than it was over the past couple of years.

Another source of concern for ATI is its adaptive trilinear filtering algorithm, built into all new ATI GPUs since the Radeon 9600. This algorithm boosts performance by skipping certain filtering tasks (trilinear blends and anisotropic sampling) that it deems unnecessary after analyzing the textures involved. This optimized filtering routine generally works well, but in certain cases (such as detail textures in UT2004), the algorithm doesn’t quite handle things properly, resulting in visible image artifacts. ATI’s new application detection facility, dubbed Catalyst A.I., will allow ATI’s drivers to detect an application where there’s a filtering problem and act to correct it by applying more rigorous filtering.

Beyond that, Catalyst A.I. enables ATI to correct for bugs, bad behaviors, and incompatibilities in games, such as the problem with control panel-invoked anisotropic filtering in Doom 3 we recently noted. Rather than allow filtering to happen on the texture that acts as Doom 3’s specular lookup table, Catalyst 4.10 uses shader replacement to substitute in a “mathematically equivalent” pixel shader instruction.
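To picture what that shader replacement amounts to, here’s a minimal sketch in Python rather than shader code. The lookup-table size and specular exponent are hypothetical; the point is just that a texture which merely encodes a function can be swapped for direct math, and filtering that texture (as control panel-invoked aniso would) distorts the encoded function.

```python
import numpy as np

# Hypothetical 1D lookup table encoding a specular power function, the
# way Doom 3 stores its specular lookup in a texture. The table size and
# exponent here are invented for illustration.
LUT_SIZE = 256
EXPONENT = 16.0
lut = np.power(np.linspace(0.0, 1.0, LUT_SIZE), EXPONENT)

def specular_via_lut(n_dot_h: float) -> float:
    """Texture-lookup path: a dependent read into the lookup table."""
    return float(lut[round(n_dot_h * (LUT_SIZE - 1))])

def specular_via_math(n_dot_h: float) -> float:
    """Replacement path: compute the 'mathematically equivalent' function."""
    return n_dot_h ** EXPONENT

# The two paths agree to within the table's quantization error, which is
# why the shader substitution shouldn't change the rendered output.
for x in (0.0, 0.25, 0.5, 0.9, 1.0):
    assert abs(specular_via_lut(x) - specular_via_math(x)) < 0.05
```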

Catalyst A.I. will also, of course, free ATI’s driver team to make games run faster through various sorts of optimizations, even when no bugs are present. For instance, the company is using Catalyst A.I.’s app detection facilities to optimize texture cache mapping in some games, including UT2004 and the Half-Life 2 Source engine. Apparently, cache can be mapped a couple of different ways on ATI’s GPUs, and the default mapping is less than optimal for the way these games access textures.

As of now, the list of apps detected by Catalyst A.I. includes Doom 3, UT2003, UT2004, the Half-Life 2 Source engine, Splinter Cell, Race Driver, Prince of Persia, and Crazy Taxi 3. These apps are detected simply by keying on the name of the executable program. Obviously, the list of apps detected will grow in future driver revs. ATI has stated explicitly, however, that it will never optimize specifically for a synthetic benchmark.
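For the curious, executable-name keying is about as simple as it sounds. Here’s a hedged sketch of the idea in Python; the profile contents and option names are invented for illustration, since ATI hasn’t published the internals of its detection scheme.

```python
# Hypothetical per-application profiles keyed on executable name, loosely
# modeled on the games ATI says Catalyst A.I. detects. The option names
# are made up for illustration.
APP_PROFILES = {
    "doom3.exe":  {"replace_specular_lut_shader": True},
    "ut2004.exe": {"alternate_texture_cache_mapping": True},
    "hl2.exe":    {"alternate_texture_cache_mapping": True},
}

def profile_for(executable_name: str, catalyst_ai_enabled: bool) -> dict:
    """Look up the optimization profile for a launching application."""
    if not catalyst_ai_enabled:
        # Disabling Catalyst A.I. switches off the app-specific tweaks.
        return {}
    return APP_PROFILES.get(executable_name.lower(), {})

print(profile_for("doom3.exe", catalyst_ai_enabled=True))
# {'replace_specular_lut_shader': True}
```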

The Catalyst A.I. optimizations will be turned on by default in ATI’s new drivers, but ATI gives users the option of disabling them via the Catalyst Control Center. Notably, disabling Catalyst A.I. will turn off general optimizations as well as app-specific ones, including ATI’s adaptive texture filtering algorithm; this is the first time the company has offered that option to its customers. Unfortunately, users will not be able to disable ATI’s angle-dependent anisotropic filtering optimization, because angle-based aniso is hard-wired into its graphics chips.

ATI obviously hopes this measure of user control, and this transparency about which applications are being accelerated, will deflect user concerns about app-specific optimizations in its drivers. This desire for openness stands in tension with ATI’s wish to avoid giving away the secrets behind its best optimizations. When I spoke with ATI about this issue, the company was still undecided on how much detail it would disclose about the nature of its optimizations for each application.

 

Our testing methods
As ever, we did our best to deliver clean benchmark numbers. Tests were run at least twice, and the results were averaged. All graphics driver image quality settings were left at their defaults, with the exceptions that vertical refresh sync (vsync) was always disabled and geometry instancing was enabled on the Radeon cards.

Our test system was configured like so:

Component        | AGP test system                   | PCI Express test system
Processor        | Pentium 4 ’E’ 3.4GHz              | Pentium 4 550 3.4GHz
Front-side bus   | 800MHz (200MHz quad-pumped)       | 800MHz (200MHz quad-pumped)
Motherboard      | Abit IC7-G                        | Abit AA8
BIOS revision    | 2.4                               | 1.4
North bridge     | 82875P MCH                        | 925X MCH
South bridge     | ICH5R                             | ICH6R
Chipset drivers  | INF Update 6.0.1.1002             | INF Update 6.0.1.1002
Memory size      | 1GB (2 DIMMs)                     | 1GB (2 DIMMs)
Memory type      | OCZ PC3200 EL DDR SDRAM at 400MHz | OCZ PC2 5300 DDR2 SDRAM at 533MHz
CAS latency      | 2                                 | 3
Cycle time       | 5                                 | 10
RAS to CAS delay | 2                                 | 3
RAS precharge    | 2                                 | 3

Both systems shared the following:

Hard drive | Maxtor MaXLine III 250GB SATA 150
Audio      | Integrated
OS         | Microsoft Windows XP Professional
OS updates | Service Pack 2, DirectX 9.0c

Graphics cards tested:

GeForce 5900 XT 128MB AGP with ForceWare 61.77 drivers
GeForce 6600 GT 128MB PCI-E with ForceWare 65.76 drivers
GeForce 6800 128MB with ForceWare 65.76 drivers
Radeon X600 XT 128MB PCI-E with CATALYST 4.10 beta drivers
Radeon 9800 XT 256MB AGP with CATALYST 4.10 beta drivers
Radeon X800 Pro 256MB AGP with CATALYST 4.10 beta drivers
Radeon X700 XT 128MB PCI-E with CATALYST 4.10 beta drivers

The test systems’ Windows desktops were set at 1152×864 in 32-bit color at an 85Hz screen refresh rate.

We used the following versions of our test applications:

The tests and methods we employed are generally publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

 

Pixel filling power
The basic math involved in graphics performance gets more complex with each new chip generation, but it remains relatively simple for the Radeon X700 XT. Because it has eight pipes running at 475MHz, the X700 XT has sweet, sweet pixel-pushing power in abundance—even more than last year’s top model. Check it out.

Card | Core clock (MHz) | Pixel pipelines | Peak fill rate (Mpixels/s) | Texture units per pixel pipeline | Peak fill rate (Mtexels/s) | Memory clock (MHz) | Memory bus width (bits) | Peak memory bandwidth (GB/s)
Radeon X300 | 325 | 4 | 1300 | 1 | 1300 | 400 | 128 | 6.4
Radeon X600 Pro | 400 | 4 | 1600 | 1 | 1600 | 600 | 128 | 9.6
GeForce FX 5700 Ultra | 475 | 4 | 1900 | 1 | 1900 | 900 | 128 | 14.4
Radeon 9600 XT | 500 | 4 | 2000 | 1 | 2000 | 600 | 128 | 9.6
Radeon X600 XT | 500 | 4 | 2000 | 1 | 2000 | 740 | 128 | 11.8
GeForce 6600 | 300 | 8 | 2400 | 1 | 2400 | TBD | 128 | TBD
Radeon 9800 Pro | 380 | 8 | 3040 | 1 | 3040 | 680 | 256 | 21.8
Radeon 9800 Pro 256MB | 380 | 8 | 3040 | 1 | 3040 | 700 | 256 | 22.4
GeForce FX 5900 XT | 400 | 4 | 1600 | 2 | 3200 | 700 | 256 | 22.4
Radeon X700 | 400 | 8 | 3200 | 1 | 3200 | 600 | 128 | 9.6
Radeon 9800 XT | 412 | 8 | 3296 | 1 | 3296 | 730 | 256 | 23.4
Radeon X700 Pro | 420 | 8 | 3360 | 1 | 3360 | 864 | 128 | 13.8
GeForce FX 5900 Ultra | 450 | 4 | 1800 | 2 | 3600 | 850 | 256 | 27.2
GeForce FX 5950 Ultra | 475 | 4 | 1900 | 2 | 3800 | 950 | 256 | 30.4
Radeon X700 XT | 475 | 8 | 3800 | 1 | 3800 | 1050 | 128 | 16.8
GeForce 6800 | 325 | 12 | 3900 | 1 | 3900 | 700 | 256 | 22.4
GeForce 6600 GT | 500 | 8* | 2000 | 1 | 4000 | 1000 | 128 | 16.0
GeForce 6800 GT | 350 | 16 | 5600 | 1 | 5600 | 1000 | 256 | 32.0
Radeon X800 Pro | 475 | 12 | 5700 | 1 | 5700 | 900 | 256 | 28.8
GeForce 6800 Ultra | 400 | 16 | 6400 | 1 | 6400 | 1100 | 256 | 35.2
GeForce 6800 Ultra OC | 450 | 16 | 7200 | 1 | 7200 | 1100 | 256 | 35.2
Radeon X800 XT Platinum Edition | 520 | 16 | 8320 | 1 | 8320 | 1120 | 256 | 35.8

The X700 XT’s multitextured fill rate is about double that of the card it most directly replaces, the Radeon X600 XT, and it’s also well above the Radeon 9800 XT’s. Compared to the GeForce 6600 GT, the X700 XT is just a tad slower in multitextured fill rate but has a slight edge in memory bandwidth.

Notice the asterisk next to the number of pixel pipes on the GeForce 6600 GT. The NV43 GPU has an unconventional architecture; it essentially decouples earlier stages of the pixel pipeline from later stages, and the chip incorporates a crossbar (or switch) to feed pixel fragments from the eight pixel shader units into four raster operators (ROPs). By keeping these four ROPs fed opportunistically out of its eight pixel shader units, the NV43 makes very efficient use of its resources. This arrangement limits the GeForce 6600 GT’s single-textured fill rate to two gigapixels per second, but it hardly affects real-world performance at all, as our benchmarks will show.

ATI says the X700 XT has eight “full” pixel pipes and no fragment crossbar. The X700 can output eight pixels per clock, and each pipeline can execute a single Z/stencil operation per clock, or two Z/stencil ops per clock when multisampled antialiasing is enabled.
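As a sanity check on the table above, here’s the back-of-the-envelope arithmetic in Python. The min(pipes, ROPs) cap is my simplification of the NV43’s crossbar, not a formula from either company:

```python
def peak_rates(core_mhz, shader_pipes, rops, tex_units_per_pipe,
               mem_mhz_effective, bus_bits):
    """Peak theoretical rates, as tabulated above."""
    # Single-textured pixel output is capped by whichever is scarcer:
    # shader pipes or raster operators (the NV43's crossbar case).
    mpixels = core_mhz * min(shader_pipes, rops)
    # Texturing rate scales with all shader pipes and their texture units.
    mtexels = core_mhz * shader_pipes * tex_units_per_pipe
    # Effective memory clock (MHz) times bus width in bytes, in GB/s.
    bandwidth_gbs = mem_mhz_effective * (bus_bits // 8) / 1000
    return mpixels, mtexels, bandwidth_gbs

# Radeon X700 XT: eight "full" pipes, one ROP per pipe.
print(peak_rates(475, 8, 8, 1, 1050, 128))   # (3800, 3800, 16.8)
# GeForce 6600 GT: eight shader pipes feeding four ROPs via the crossbar.
print(peak_rates(500, 8, 4, 1, 1000, 128))   # (2000, 4000, 16.0)
```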

Ok, propeller head, but how does it perform? Let’s look at some synthetic fill rate tests.

The X700 XT just barely edges out the GeForce 6600 GT in 3DMark’s single-textured fill rate test, despite a massive chasm between the two chips’ theoretical capabilities here. This result demonstrates the elegance of the NV43’s crossbar approach. The multitextured fill rate test comes out just about like one would expect, with the X700 XT just trailing the GeForce 6600 GT.

RightMark shows a somewhat different result. The two cards are essentially tied in single texturing, and the 6600 GT is faster with two textures. After that, though, the X700 XT achieves higher throughput when three or more textures are applied. I should mention that the Catalyst 4.10 beta drivers we were using didn’t seem to get along well with RightMark; the test stuttered a bit while running. Scores are down slightly for the 9800 XT and X600 XT compared to our previous review, and I expect the X700 XT suffered some as a result, too.

To summarize, the Radeon X700 XT can keep pace with the GeForce 6600 GT in terms of raw pixel-pushing power in the majority of our tests, and when it lags, it only lags by a small amount.

 

Doom 3 – Delta Labs
We’ll kick off our gaming benchmarks with Doom 3. Our first Doom 3 test uses a gameplay demo we recorded inside the Delta Labs complex, and it represents the sorts of graphics loads you’ll find in most of the game’s single-player levels. We’ve tested with Doom 3’s High Quality mode, which turns on 8X anisotropic filtering by default.

Note that I’ve included scores for the Radeon X700 XT both with and without Catalyst A.I. enabled. The scores without Catalyst A.I. show how the card would perform without the benefit of any app-specific tweaks and without ATI’s adaptive texture filtering routine.

NVIDIA still owns Doom 3, but with Catalyst A.I., the ATI cards save some face. Scores are up for the X600 XT and 9800 XT thanks to the new drivers, and the X700 XT manages to pull within 10 frames per second at the three higher resolutions with 4X antialiasing enabled.

Incidentally, some folks asked after our last review about the playability of Doom 3 in the game’s High Quality mode on a graphics card with only 128MB of memory. Even with good frame rate averages, some gamers have found that the game stutters too much, and id Software recommends 256MB of video memory for High Quality mode. I played through some of the game with the GeForce 6600 GT and Radeon X700 XT today at 1280×1024 in High Quality, and it confirmed my initial impressions. The GeForce 6600 GT was smooth as glass in Doom 3’s High Quality mode on our test system. The Radeon X700 XT slows down a little bit in some areas, but it’s still smooth enough to be playable. Of course, this was on a fast system with PCI Express and a gig of RAM, so your mileage may vary.

 

Doom 3 – Heat Haze
This next demo was recorded in order to test a specific effect in Doom 3: that cool-looking “heat haze” effect that you see whenever a demon hurls a fireball at you. We figured this effect would be fairly shader intensive, so we wanted to test it separately from the rest of the game.

The relative performance of the cards doesn’t change much in this demo, despite the abundance of shader effects.

 

Counter-Strike: Source beta Video Stress Test
The Counter-Strike: Source beta includes a video stress test that’s essentially the same test Valve used for benchmarking Half-Life 2 performance last year. The video stress test presents a sort of worst-case scenario for graphics in the Source engine, layering surfaces with multiple pixel shader effects on top of each other. If a card does well in the video stress test, you can probably expect it to run Half-Life 2 quite well.

By the way, this is a new revision of the Source engine since last time we tested. We received an update via Steam and had to retest all of the cards, because the performance changed. Scores were higher across the board. Also, this time around, we forced the GeForce FX 5900 XT into DirectX 9 mode instead of leaving it at the game default of DX8.1.

It’s veeeery tight, but the GeForce 6600 GT outdoes the Radeon X700 XT most of the time. Only when 4X antialiasing and 8X aniso filtering are in use at higher resolutions does the X700 XT prevail.

 

Far Cry – Cooler
We’re using the 1.2 version of Far Cry, which includes support for the longer shader programs and geometry instancing capabilities that are part of Shader Model 3.0. We tested the GeForce 6600 GT with the SM3.0 code path and also with the SM2.0 path, in order to show what sort of performance difference the new shader model can make. (All the output is identical, regardless of the shader path. Far Cry uses the newer shader models only for better performance.) For the GeForce 6800, we tested with the SM3.0 path and skipped the SM2.0 path.

Far Cry 1.2 also supports ATI’s Shader Model 2.0b, but oddly, none of the cards showed any real performance difference between the 2.0b code path and the 2.0 path with the Cat 4.10 beta drivers. Nevertheless, we’ve reported the results from the SM2.0b path for all the ATI cards. We also explicitly enabled geometry instancing, which all the Radeon cards here support. (Geometry instancing must be switched on in the ATI driver as well as in the game.)

We used “Very high” for all in-game quality settings, with the exception of water, which was set to “Ultra high.” In other words, everything was maxed out.

This first Far Cry test uses the Cooler level, one of the game’s indoor levels which has lots of long shaders and shadows.

This one isn’t as close as the last test. The 6600 GT is simply faster here, especially at higher resolutions with 4X AA and 8X aniso, where the X700 XT seems to run out of steam.

 

Far Cry – Pier
The Pier level in Far Cry is an outdoor area with dense vegetation, and it makes good use of geometry instancing to populate the jungle with foliage.

Whoa. Could this be the work of the X700 XT’s six vertex shader units? The Pier level is loaded with complex geometry, and the X700 XT performs much better here, relatively speaking.

 

Far Cry – Volcano
Like our Doom 3 “heat haze” demo, the Volcano level in Far Cry includes lots of pixel shader warping and shimmering.

Chalk up another win for the GeForce 6600 GT. With the exception of the outdoor jungle areas, Far Cry seems to run faster on the GeForce. Then again, the jungle areas make up a huge portion of the game.

 

Rome: Total War
To get a bit of a break from the first-person shooters, we’re testing this real-time strategy game, which has some very fancy graphics of its own. We used FRAPS to measure performance during the first 150 seconds of the tutorial battle. This sequence is scripted and repeatable, and it switches between bird’s-eye views and close-ups of the armies. Of course, we turned up all the game’s eye candy to max before testing.

The Radeon X700 XT is faster overall in Rome: Total War than the GeForce 6600 GT. The X700 XT achieves a higher frame rate average and minimum without aniso or AA, and it manages a higher minimum frame rate when 4X AA and 8X aniso are enabled. I wouldn’t be shocked if it were the X700 XT’s edge in vertex shader power that made the difference in this game.

 

Need for Speed Underground
I used FRAPS to record frame rates during the first 25 seconds of a drag race in Need for Speed Underground. I found that I’d tend to total the car in just over 25 seconds most of the time, so that was my limit. All of the game’s quality settings were set to highest, and all of its visual effects were turned on. This game appears to be capped at about 54 frames per second, according to the FRAPS data. That explains the small differences between minimum and average frame rates without AA and aniso.
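The compression effect a frame cap has on the average is easy to see with a toy example; the numbers below are invented for illustration, not FRAPS data:

```python
# Invented per-second frame rates (fps): a game capped near 54 fps spends
# most of its time pinned at the cap, so the average hugs the minimum.
capped = [54, 54, 53, 54, 52, 54, 54, 48, 54, 54]
uncapped = [90, 85, 70, 95, 60, 88, 92, 48, 97, 80]

for name, samples in (("capped", capped), ("uncapped", uncapped)):
    avg, low = sum(samples) / len(samples), min(samples)
    print(f"{name}: avg {avg:.1f} fps, min {low} fps, gap {avg - low:.1f}")
# capped:   avg 53.1 fps, min 48 fps, gap 5.1
# uncapped: avg 80.5 fps, min 48 fps, gap 32.5
```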

It’s a virtual tie without AA and aniso, but with them, the X700 XT falls behind.

 

3DMark03
None of the drivers we’re using are approved by Futuremark for use with 3DMark03, largely because they are too new. Keep that in mind as you look over the results.

The X700 XT is faster in three of the four resolutions in 3DMark03. Only when we get to 1600×1200 does the GeForce 6600 GT catch up. The overall score above is a composite of the four game scores below. Let’s see which game tests gave the X700 XT the edge.

The X700 XT is faster in Game 1 by a hair, but it blows away everything else in the Nature scene, probably because of its six vertex shader units.

The difference between the two tests above tells the story of the contest between the GeForce 6600 GT and the Radeon X700 XT. The GeForce has faster pixel shaders, but the X700 XT has gobs more vertex shader power.

 

3DMark image quality
The Mother Nature scene from 3DMark has been the source of some controversy over time, so I wanted to include some screenshots to show how the three cards compare. On this page and in all the following pages with screenshots, you’re looking at low-compression JPEG images. You can click on the image to open a new window with a lossless PNG version of the image.


DirectX 9 reference rasterizer


Radeon X600 XT


Radeon 9800 XT


Radeon X700 XT


GeForce 6600 GT


GeForce FX 5900 XT

They all look about the same to me.

 

Texture filtering
Next, we’ve tested image filtering performance through a range of modes to see how the cards handle them. This is a simple fill rate test with a single texture, and the impact of various filtering methods is fairly obvious in the results. Notice how we have two sets of results for the GeForce 6600 GT. The primary set of results uses the driver default “Quality” setting, which NVIDIA claims is roughly equal to ATI’s default filtering settings. The second set of results uses NVIDIA’s “High Quality” setting, which purportedly disables various filtering optimizations. Similarly, we have results for the Radeon X700 XT with and without Catalyst A.I. enabled.

The reason you don’t see much difference between the results with Catalyst A.I. and without is that RightMark’s brightly colored, varied textures tend to require lots of filtering, and ATI’s adaptive algorithm is no doubt doing that filtering. In a game where textures are less likely to require such rigorous filtering, the X700 XT’s filtering might scale differently. Here, the X700 XT scales up with more intensive filtering very similarly to how the GeForce 6600 GT scales with its “High Quality” setting.

 

Texture filtering quality
These side-by-side images from Far Cry should show the impact of anisotropic filtering fairly clearly. We’re using trilinear filtering as well as aniso in all cases.


No aniso, 2X, 4X, 8X – Radeon X600 XT


No aniso, 2X, 4X, 8X – Radeon 9800 XT


No aniso, 2X, 4X, 8X – Radeon X700 XT


No aniso, 2X, 4X, 8X – Radeon X700 XT – Catalyst AI disabled


No aniso, 2X, 4X, 8X – GeForce 6600 GT


No aniso, 2X, 4X, 8X – GeForce 6600 GT – High Quality


No aniso, 2X, 4X, 8X – GeForce FX 5900 XT

To the naked eye, they all look about the same. I don’t see any notable difference between the X700 XT’s output with and without Catalyst A.I.

 

Texture filtering quality
Now let’s tilt that same scene on its axis and see how things change.


No aniso, 2X, 4X, 8X – Radeon X600 XT


No aniso, 2X, 4X, 8X – Radeon 9800 XT


No aniso, 2X, 4X, 8X – Radeon X700 XT


No aniso, 2X, 4X, 8X – Radeon X700 XT – Catalyst AI disabled


No aniso, 2X, 4X, 8X – GeForce 6600 GT


No aniso, 2X, 4X, 8X – GeForce 6600 GT – High Quality


No aniso, 2X, 4X, 8X – GeForce FX 5900 XT

All of the GPUs except for the GeForce FX 5900 XT used angle-dependent anisotropic filtering, and they’re all maxing out at about 2X aniso filtering here, no matter what control panel settings we use.

 

Texture filtering and Catalyst AI
Just in case you were wondering whether ATI’s adaptive texture filtering methods really make any difference at all, have a look at the images below. These are the result of a mathematical “diff” operation in an image processing program, followed by a gamma adjustment of 2.5 to bring out the differences.


Diff between default and Catalyst AI disabled – No aniso, 2X, 4X, 8X – Radeon X700 XT


Diff between default and Catalyst AI disabled – No aniso, 2X, 4X, 8X – Radeon X700 XT

There are indeed subtle differences in filtering produced by ATI’s adaptive algorithms, and here we can see them. With Catalyst A.I. enabled, the X700 XT appears to be doing a little less trilinear blending, if I’m looking at this right.
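The diff images above are easy to reproduce with any two screenshots. Here’s a minimal sketch using Pillow; the filenames are placeholders, and the exact gamma handling is my assumption:

```python
from PIL import Image, ImageChops

# Placeholder filenames for the two screenshots being compared.
a = Image.open("x700xt_default.png").convert("RGB")
b = Image.open("x700xt_cat_ai_off.png").convert("RGB")

# Per-pixel absolute difference; identical pixels come out black.
diff = ImageChops.difference(a, b)

# Gamma-adjust by 2.5 to brighten the faint differences:
# out = 255 * (in / 255) ** (1 / 2.5), applied per channel.
GAMMA = 2.5
diff = diff.point(lambda v: int(255 * (v / 255) ** (1 / GAMMA)))

diff.save("x700xt_filtering_diff.png")
```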

 

Edge antialiasing
Below you can see how the different cards scale up to higher sample rates for antialiasing. Unfortunately, I discovered late that Far Cry isn’t compatible with NVIDIA’s control panel-based 8X antialiasing (turn it on and nothing happens). As a result, I’ve had to omit results from NVIDIA’s 8xS mode.

Now let’s take a quick look at the sample patterns used by the different cards. Because there’s lots of overlap between the different GPU families, I’ve only included a few results below. The sample patterns shown for the X700 XT also hold true for a whole range of ATI cards dating back to the Radeon 9700. Likewise, the sample pattern for the GeForce 6600 GT is the same one used in all variants of the GeForce 6800.

[Sample pattern images: 2X, 4X, and 6X/8X antialiasing patterns for the GeForce FX 5900 XT, GeForce 6600 GT, and Radeon X700 XT]
The Radeon X700 XT’s sample patterns are the same tried-and-true patterns used on all of ATI’s R300-derived chips. The 2X and 4X modes use a rotated grid pattern, while the 6X mode uses a sparse sample pattern not aligned to any grid. The GeForce 6600 GT also uses rotated grid patterns, very similar to those on the X700 XT, for its 2X and 4X modes. However, the 6600 GT has a very different 8X mode that combines 2X supersampling with 4X multisampling. This mode offers more than just edge antialiasing, but it does so at the expense of performance.
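The bookkeeping behind that trade-off is simple enough to sketch. Under my own simplified model (not NVIDIA’s documentation), supersampling multiplies shading and texturing work per pixel, while multisampling multiplies only coverage samples:

```python
def aa_samples(ss_factor: int, ms_samples: int) -> tuple[int, int]:
    """Toy model: (coverage samples, shading passes) per pixel."""
    coverage = ss_factor * ms_samples  # edge quality scales with this
    shading = ss_factor                # supersampling re-shades the pixel
    return coverage, shading

print(aa_samples(1, 4))  # plain 4X multisampling: (4, 1)
print(aa_samples(2, 4))  # 8xS, 2X supersampling + 4X multisampling: (8, 2)
```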

 

Antialiasing quality
We’ll start off with non-AA images, just to establish a baseline.


No antialiasing – Radeon X600 XT


No antialiasing – Radeon 9800 XT


No antialiasing – Radeon X700 XT


No antialiasing – GeForce 6600 GT


No antialiasing – GeForce FX 5900 XT

 

Antialiasing quality


2X antialiasing – Radeon X600 XT


2X antialiasing – Radeon 9800 XT


2X antialiasing – Radeon X700 XT


2X antialiasing – GeForce 6600 GT


2X antialiasing – GeForce FX 5900 XT

The X700 XT’s 2X mode looks about like the GeForce 6600 GT’s.

 

Antialiasing quality


4X antialiasing – Radeon X600 XT


4X antialiasing – Radeon 9800 XT


4X antialiasing – Radeon X700 XT


4X antialiasing – GeForce 6600 GT


4X antialiasing – GeForce FX 5900 XT

The image quality contest is again tight at 4X AA. Only the GeForce FX 5900 XT, with its grid-based sample pattern, looks weaker than the rest of the pack.

 

Antialiasing quality


6X antialiasing – Radeon X600 XT


6X antialiasing – Radeon 9800 XT


6X antialiasing – Radeon X700 XT


8X antialiasing – GeForce 6600 GT


8X antialiasing – GeForce FX 5900 XT

ATI’s 6X AA mode continues to be a marvel. I think it does a better job of removing edge jaggies than NVIDIA’s 8X mode.

 
Power consumption
With each of the graphics cards installed and running, I used a watt meter to measure the power draw of our test systems. The monitor was plugged into a separate power source. The cards were tested at idle in the Windows desktop and under load while running our Doom 3 “heat haze” demo at 1280×1024 with 4X AA. We have different power supplies than we used for our GeForce 6600 GT review, so I retested all the cards.

The X700 XT consumes a little more power than the GeForce 6600 GT when running at full tilt.

 
Conclusions
You’ve got to love that this $199 graphics card performs like it does, matching up well against a Radeon 9800 XT. The big question, of course, is how it measures up against the GeForce 6600 GT. As closely matched as these two cards are, that answer is surprisingly complicated.

The GeForce 6600 GT was faster in a majority of our benchmarks, if you want to go by sheer numbers. The 6600 GT waltzed through Doom 3 as expected, took two of the three Far Cry tests, and was easily faster in Need for Speed Underground. The Source engine video stress test was essentially a tie. However, the Radeon X700 XT shone in vertex shader-intensive scenarios, including the outdoor areas in Far Cry, the Rome: Total War demo, and 3DMark03’s Nature test.

So the GeForce 6600 GT seems to have a slight advantage in terms of fill rate and pixel shader power, traditionally the most important measures of a graphics card. The Radeon X700 XT, on the other hand, seems to have a considerable advantage in terms of vertex shader power, and I have to wonder which type of performance will matter most in future games.

For the time being, the GeForce 6600 GT appears to be the card of choice by the slimmest of margins. NVIDIA’s $199 card seems to have slightly higher performance generally than the Radeon X700 XT, and it offers the possibility of upgrading to a dual-card SLI config at some point down the road, when the right motherboards become available. The GeForce 6600 GT also offers Shader Model 3.0 support, packaging together a number of incremental feature enhancements, including a more natural programming model, higher-precision pixel shaders, and 16-bit floating-point frame buffer blends. More importantly, the 6600 GT seems to have very few weaknesses in terms of performance, image quality, and features.

The X700 XT also has few weaknesses, although its relative performance in Doom 3 could be a little better. Otherwise, it’s all good, from best-in-class antialiasing (including funky temporal AA) to 3Dc normal map compression to the razor-sharp video signal output of a “built by ATI” graphics card. The Radeon X700 XT will recommend itself especially to those folks looking to play real-time strategy games and the like, where vertex shader power will be at a premium.

At the end of the day, the battle over mid-range graphics is so close, it may boil down to which company executes better. The race to get these things to retail, and in AGP form, is on. If I were looking to buy a mid-range graphics card, I’d wait for whichever card gets there first, then pounce. 
