
NVIDIA’s GeForce 6200 graphics processor

Geoff Gasior

WHEN NVIDIA first announced the GeForce 6800 series, the company boasted that its new graphics architecture would scale down to mid-range and value markets by the end of the year. GeForce 6 trickle-down has already spawned the GeForce 6600 series, whose performance and feature set are a revelation for the mid-range market. Today, NVIDIA extends the GeForce 6 series even further into the value segment with the GeForce 6200. This four-pipe GeForce 6 brings Shader Model 3.0 support to graphics cards in and around the $129 mark, giving cash-strapped gamers an intriguing new low-end option.

How does the GeForce 6200 fare against competition that includes ATI’s budget Radeons and Intel’s Graphics Media Accelerator 900? Read on to find out.

The GeForce 6200
The GeForce 6200 graphics chip is a four-pipe derivative of the NV43 GPU that powers the GeForce 6600 series. Like the rest of the GeForce 6 line, the 6200 utilizes a fragment crossbar to link pixel shaders and raster operators (ROPs) within the pixel pipeline. Rather than being bound to a single pixel shader, ROPs are free to tackle output from any of the chip’s pixel shaders. This rather promiscuous arrangement allows NVIDIA to pair eight pixel shaders with only four ROPs on the GeForce 6600, saving transistors without catastrophically bottlenecking performance. With the GeForce 6200, NVIDIA pairs four pixel pipes with four ROPs. There’s no transistor savings, but the fragment crossbar may offer a clock-for-clock performance advantage over more traditional designs.
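To make the crossbar idea concrete, here's a minimal throughput sketch in Python. It's purely illustrative, with made-up cycle costs and fragment counts rather than anything from NVIDIA's actual scheduler: shaders feed one shared queue (standing in for the crossbar), and any free ROP drains it.

```python
from collections import deque

def cycles_to_retire(num_fragments, num_shaders, num_rops, shade_cost=1):
    """Toy model: shaders feed a shared queue and any free ROP retires
    work from it. shade_cost = cycles of shader work per fragment."""
    queue = deque()
    to_shade, retired, cycles = num_fragments, 0, 0
    while retired < num_fragments:
        cycles += 1
        emitted = min(num_shaders // shade_cost, to_shade)
        to_shade -= emitted
        queue.extend([0] * emitted)
        for _ in range(min(num_rops, len(queue))):
            queue.popleft()
            retired += 1
    return cycles

# Cheap fragments: an 8-shader/4-ROP chip and a 4/4 chip are both ROP-bound.
print(cycles_to_retire(4000, 8, 4))                # 1000 cycles
print(cycles_to_retire(4000, 4, 4))                # 1000 cycles
# With two cycles of shader work per fragment, eight shaders still keep
# four ROPs fed, while the 4/4 design halves its output rate.
print(cycles_to_retire(4000, 8, 4, shade_cost=2))  # 1000 cycles
print(cycles_to_retire(4000, 4, 4, shade_cost=2))  # 2000 cycles
```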

Like the GeForce 6600 series, the GeForce 6200 has full support for DirectX 9, Shader Model 3.0, and 32-bit floating point data types. The 6200 also packs three vertex shader units, just like the 6600. The two also share a programmable video processor that we’ll have more to tell you about soon. The GeForce 6200 differs from the rest of the GeForce 6 line when it comes to antialiasing, though: its Intellisample 3.0 implementation lacks color and Z-compression. Since low-end cards generally lack the pixel pushing horsepower to make antialiasing viable in games, the lack of Intellisample color and Z-compression isn’t a major flaw.


Looks like NV43 to me

The GeForce 6200 GPU is manufactured by TSMC on a 0.11-micron fabrication process. The die measures 12mm x 13mm according to my tape measure, making it identical in size to the NV43 GPU that powers the GeForce 6600. Isn’t that interesting? When we asked NVIDIA for the 6200’s code name according to the “NV4x” convention, the company would only say the chip was a “derivative” of the NV43. It’s entirely possible that the GeForce 6200 GPU is simply an NV43 with four pixel pipes and Intellisample color and Z-compression disabled. If this is the case, we may see enterprising enthusiasts attempt to unlock the extra pipelines with hardware or software modifications.

Unlike other members of the GeForce 6 line, there will only be one version of the GeForce 6200—no GT, XT, Ultra, or Turbo Golden Sample Special Edition. Clock speeds for the 6200 aren’t written in stone, though. NVIDIA recommends a core clock of 300MHz, but board vendors are free to run faster. There’s also flexibility on the memory clock front. Our GeForce 6200 reference card has DDR memory clocked at an effective 500MHz, which thanks to the 6200’s 128-bit memory bus, gives the card an even 8.0GB/sec of memory bandwidth. Board manufacturers will be free to run higher or lower memory clocks, and they’ll also be able to make cheaper cards that have a narrower 64-bit path to memory.
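The bandwidth math is simple enough to sanity-check. Here's a quick sketch (the effective memory clock is the DDR data rate, and the GB/s figure is decimal, matching the numbers above):

```python
def peak_bandwidth_gbps(bus_width_bits, effective_clock_mhz):
    # bytes per transfer * transfers per second, reported in decimal GB/s
    return (bus_width_bits / 8) * (effective_clock_mhz * 1e6) / 1e9

print(peak_bandwidth_gbps(128, 500))  # 8.0 -- the reference GeForce 6200
print(peak_bandwidth_gbps(64, 500))   # 4.0 -- a cost-reduced 64-bit card
```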


Our GeForce 6200 reference card

As you can see, the GeForce 6200 reference card is a PCI Express affair. NVIDIA doesn’t plan to make an AGP version of the GeForce 6200, leaving the existing GeForce FX products for AGP systems. Since PC builders are already producing lots of machines based on Intel’s 900-series chipsets and PCI Express chipsets are coming soon for the Athlon 64, there should be a burgeoning market for PCI Express graphics cards in the coming months.

The GeForce 6200 is primarily targeted at major OEMs and system integrators, so retail products may not make it to store shelves at places like Best Buy, CompUSA, or Fry’s any time soon. Cards should be available from online retailers for between $129 and $149, if not less. Expect 64-bit flavors of the GeForce 6200 to be even cheaper and, hopefully, clearly marked.

Finally, notice that the GeForce 6200 card lacks “golden fingers.” The 6200 doesn’t support SLI, so you won’t be able to team up two cards in a single system.

 

Our testing methods
All tests were run three times, and the results were averaged, using the following test systems.

Processor Intel Pentium 4 520 2.8GHz
Front-side bus 800MHz (200MHz quad pumped)
Motherboard Gigabyte GA-8I915G-MF
North bridge Intel 915G
South bridge Intel ICH6
Chipset drivers 6.0.1.1002
Memory size 1GB (2 DIMMs)
Memory type OCZ PC3200 EL Platinum Rev 2 DDR SDRAM at 400MHz
CAS latency 2
Cycle time 5
RAS to CAS delay 2
RAS precharge 2
Hard drives Western Digital Raptor WD360GD 37GB
Audio ICH6/ALC850
Graphics ATI Radeon X600 Pro
ATI Radeon X300
NVIDIA GeForce 6200 Intel GMA 900
Graphics driver CATALYST 4.10 hotfix ForceWare 66.81 14.7
OS Microsoft Windows XP Professional with Service Pack 2

We’ll be comparing the GeForce 6200’s performance with a couple of Radeons and Intel’s Graphics Media Accelerator (GMA) 900. In lieu of a real Radeon X600 Pro, I used a Radeon X600 XT clocked at Pro speeds. I also had to underclock our Radeon X300 to get it running at the correct 325MHz core and effective 200MHz memory clock speeds. The reference card I received from ATI was running a 400MHz core and 290MHz memory clock, much faster than cards you can buy on the market.

I should also note that the X300 card has 256MB of memory. This is common practice for low-end cards as manufacturers try to dazzle less savvy buyers with higher numbers. However, we’ve found that low-end cards just don’t have the horsepower to take advantage of 256MB of memory, so the advantage is dubious at best.

We used the following versions of our test applications:

The test systems’ Windows desktop was set at 1280×1024 in 32-bit color at an 85Hz screen refresh rate. Vertical refresh sync (vsync) was disabled for all tests and drivers were left at their default image quality settings. Both ATI and NVIDIA’s default image quality driver settings use adaptive anisotropic filtering algorithms.
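Adaptive anisotropic filtering, in rough terms, varies the number of texture samples per pixel with how stretched the pixel's footprint is in texture space, rather than always paying for the maximum. Here's a minimal sketch of the idea; the derivative inputs and the clamping rule are assumptions for illustration, not either vendor's actual heuristic:

```python
import math

def aniso_samples(dudx, dvdx, dudy, dvdy, max_aniso=8):
    # Measure the pixel footprint's extent along each screen axis in
    # texture space, then take samples proportional to its elongation.
    len_x = math.hypot(dudx, dvdx)
    len_y = math.hypot(dudy, dvdy)
    ratio = max(len_x, len_y) / max(min(len_x, len_y), 1e-6)
    return min(max(round(ratio), 1), max_aniso)

print(aniso_samples(1.0, 0.0, 0.0, 1.0))  # square footprint -> 1 sample
print(aniso_samples(4.0, 0.0, 0.0, 1.0))  # stretched 4:1 -> 4 samples
```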

All the tests and methods we employed are publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

 

Pixel filling power
We’ll kick things off with a look at theoretical peak fill rates and memory bandwidth. Theoretical peaks don’t necessarily determine in-game performance, but they’re a good place to start sizing up the GeForce 6200’s capabilities. I’ve sorted the list below, which includes an array of low-end and mid-range PCI Express graphics options, according to multitextured fill rate.

                 Core clock  Pixel      Peak pixel fill  Texture units  Peak texel fill  Memory clock  Bus width  Peak bandwidth
                 (MHz)       pipelines  (Mpixels/s)      per pipeline   (Mtexels/s)      (MHz)         (bits)     (GB/s)
GeForce 6200     300         4          1200             1              1200             500           128        8.0
Radeon X300 SE   325         4          1300             1              1300             400           64         3.2
Radeon X300      325         4          1300             1              1300             400           128        6.4
GMA 900          333         4          1333             1              1333             400           128        6.4
Radeon X600 Pro  400         4          1600             1              1600             600           128        9.6
Radeon X600 XT   500         4          2000             1              2000             740           128        11.8
GeForce 6600     300         8*         1200             1              2400             TBD           128        TBD
Radeon X700      400         8          3200             1              3200             600           128        9.6
Radeon X700 Pro  420         8          3360             1              3360             864           128        13.8
Radeon X700 XT   475         8          3800             1              3800             1050          128        16.8
GeForce 6600 GT  500         8*         2000             1              4000             1000          128        16.0

* Eight pixel shader pipes feed four ROPs through the fragment crossbar, so peak pixel fill is bound by the ROPs while peak texel fill is not.
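Every peak in the table falls out of simple multiplication. A quick sketch that reproduces a few rows (the ROP count is an explicit parameter where it differs from the shader pipe count, per the asterisked entries):

```python
def peak_fill(core_mhz, pipes, tex_units_per_pipe=1, rops=None):
    rops = pipes if rops is None else rops
    pixel_fill = core_mhz * rops                         # Mpixels/s
    texel_fill = core_mhz * pipes * tex_units_per_pipe   # Mtexels/s
    return pixel_fill, texel_fill

print(peak_fill(300, 4))          # (1200, 1200) -- GeForce 6200
print(peak_fill(400, 4))          # (1600, 1600) -- Radeon X600 Pro
print(peak_fill(500, 8, rops=4))  # (2000, 4000) -- GeForce 6600 GT
```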

In terms of fill rate, the GeForce 6200 brings up the rear. With four pixel pipes and a 300MHz core clock, it can’t even match the peak theoretical fill rates of the Radeon X300 series. The low core clock speed means that the card’s shader units are going to be running slower than the competition, too.

In the memory bandwidth department, the GeForce 6200 looks a little more competitive. The card’s 128-bit memory bus and effective 500MHz memory clock yield 8GB/sec of bandwidth—better than the X300s but shy of the Radeon X600 Pro.

To see how these theoretical peaks pan out in the real world, let’s have a look at 3DMark05’s synthetic fill rate tests. Note that the drivers we’re using for the 6200, the X600 Pro and X300, and even the GMA 900 aren’t approved by Futuremark for use with 3DMark05.

The GeForce 6200’s single texture fill rate is just a hair behind the X600 Pro, but when we start multitexturing, the 6200 is relegated to the back of the pack. Given that the 6200 has the slowest clock speed of the lot, its relatively modest performance isn’t surprising. What is surprising, however, is how close the Intel GMA 900 integrated graphics core gets to its theoretical peak fill rates.

Shader performance
While we’re looking at synthetic tests, let’s have a peek at how the GeForce 6200 fares in 3DMark05’s shader tests. I ran all the cards using 3DMark05’s Shader Model 2.0 code path. Since the 6200 also supports Shader Model 3.0, I also ran it using the SM 3.0 code path.

The 6200’s shader power is impressive even when running the Shader Model 2.0 code path. Based on these scores, I wouldn’t expect much from the Intel GMA 900 in our game tests. It might have fill rate to spare, but shader power is sorely lacking.

 

DOOM 3
I used a couple of DOOM 3 gameplay demos to test the 6200’s performance in what’s arguably the most visually stunning game around. I ran the game with the High Quality detail setting, which enables 8X anisotropic filtering. High Quality might seem a little excessive for a low-end graphics card, but the GeForce 6200 handled it with aplomb.

The first demo takes place in the Delta Labs and is representative of the dark, shadow-filled environments you’ll encounter in the bulk of the game’s levels.

The 6200 wipes the floor with the competition. It’s not even close.

Next, we move on to our heat haze demo. This demo takes place in one of DOOM 3’s hell levels, which are visually quite different from the rest of the game. These levels make liberal use of some snazzy pixel shader-powered heat shimmer effects.

Again, the 6200 dominates. The race is a little closer this time, but not by much. The GMA 900 continues to stumble through DOOM 3 and has some serious problems displaying the heat haze effect properly.

 

Far Cry
Before DOOM 3 hit, Far Cry was arguably the best looking first-person shooter around. The game is loaded with shader effects and, perhaps more importantly, diverse indoor and outdoor environments. We’ll be looking at two of those environments today, both with the game’s high detail image quality setting.

First up we have the Pier level. Welcome to the jungle, folks.

The GeForce 6200 isn’t nearly as dominant in Far Cry as it was in DOOM 3. In fact, the Radeon X600 Pro beats it this time around. The 6200 is clearly faster than the Radeon X300, though, and undoubtedly superior to the bottom-dwelling GMA 900.

From lush jungles to underground interiors, the next Far Cry environment we’ll be looking at is the Volcano level. Like DOOM 3, this level employs heat shimmer effects in a number of places.

Again, the GeForce 6200 plays second fiddle to the Radeon X600 Pro. It’s pretty close, though, and the 6200 definitely has an edge over the Radeon X300.

As we saw in DOOM 3, the GMA 900 is way off the pace in Far Cry. The GMA 900 makes a visual mess of Far Cry’s heat effects shaders, too. The GMA 900’s DirectX 9 compatibility is an, er, far cry from DX9 competence.

 

Counter-Strike: Source
Counter-Strike: Source has officially been released and should hold anxious gamers over until Half-Life 2 hits. CS: Source also includes Valve’s shader-filled video stress test, which showcases the material and shader effects found in Half-Life 2. We used CS: Source’s high detail image quality settings and DirectX 9 code path for all but the GMA 900. The game refused to run on the GMA 900 with anything but the DX 8.1 code path, so scores on that front aren’t directly comparable.

In the CS: Source video stress test, the GeForce 6200 is wedged between the Radeon X300 and X600 Pro. The GMA 900’s performance looks comparatively better here.

Next, we’re looking at in-game Counter-Strike performance with a demo of online gameplay on the cs_italy map.

The GeForce 6200 is stuck between the Radeons again. The card isn’t much slower than the Radeon X600 Pro, although the game seems to be CPU-bound at lower resolutions.

 

Unreal Tournament 2004
Although DOOM 3’s and Far Cry’s visuals are far more impressive than Unreal Tournament 2004’s, the Unreal engine has been licensed by scores of developers. A number of other titles already make use of the engine, and we’re likely to see more in the coming months. Since Unreal Tournament 2004 is a little older, I was able to max out the in-game detail levels and still get playable frame rates in our custom-recorded demo of Onslaught gameplay.

Notice a pattern yet? The GeForce 6200 is faster than the Radeon X300 in Unreal Tournament 2004, but slower than the Radeon X600 Pro. Even in this older game engine, the GMA 900 is no match for the low-end graphics cards we’ve assembled. There’s definitely something wrong with the GMA 900’s performance at 800×600, too.

 

Xpand Rally
The Xpand Rally single-player demo is a new addition to our graphics benchmark suite. To test this game, I used FRAPS to capture frame rates during the replay of a custom-recorded demo, with the game’s “balanced” image quality settings to achieve playable frame rates. The game uses a particularly nice color glow shader effect that doesn’t appear to translate well to the screenshot below. It doesn’t translate well to the GMA 900, either: the GMA 900 doesn’t appear to be applying most of the game’s shader effects, for whatever reason.

Ouch. The GeForce 6200 takes a bit of a beating in Xpand Rally and is just barely able to keep up with the Radeon X300. Let’s have a look at frame rates across the length of our 180-second replay.

The GeForce 6200’s frame rates are reasonably consistent across the length of the demo, at least when compared with its competition. Still, it’s disappointing that the 6200 can’t even best the Radeon X300, especially since Xpand Rally carries NVIDIA’s “The way it’s meant to be played” logo.

 

Antialiasing
To test the GeForce 6200’s antialiasing performance, I used the same Unreal Tournament 2004 demo as in our previous tests. I kept the same in-game detail levels and used a display resolution of 1024×768. The GeForce 6200 was tested with 2X, 4X, and 8X antialiasing while the Radeons were tested with 2X, 4X, and 6X AA. The GMA 900 can’t do antialiasing, so I’ve only included its score without AA.

The Radeons’ antialiasing performance scales much better than the GeForce 6200’s, perhaps in part because the 6200 lacks color and Z-compression. The difference is especially glaring with 4X antialiasing, where the GeForce 6200 is trounced by the Radeon X600 Pro. Even the Radeon X300 squeaks ahead by a few frames per second.

That’s how the GeForce 6200’s antialiasing performs; here’s how it looks. Click on the images below to open an uncompressed PNG in a new window.


No antialiasing, 2X, 4X, 8X – GeForce 6200


No antialiasing, 2X, 4X, 6X – Radeon X600 Pro

For 2X and 4X antialiasing, the Radeon X600 Pro’s gamma-corrected antialiasing looks better than the GeForce 6200’s output. Comparing the GeForce 6200 at 8X to the X600 at 6X is a little more complicated because NVIDIA’s 8X antialiasing algorithm combines both multi and supersampling and affects more than just jagged edges.
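Gamma-corrected antialiasing resolves samples in linear light rather than averaging the gamma-encoded framebuffer values directly. Here's a minimal sketch of why that matters, assuming a 2.2 display gamma and 8-bit values; the exponent and encoding are illustrative assumptions, not ATI's documented hardware behavior:

```python
def resolve(samples, gamma=2.2):
    # Decode to linear light, average, then re-encode for the display.
    linear = [(s / 255) ** gamma for s in samples]
    return 255 * (sum(linear) / len(linear)) ** (1 / gamma)

edge = [255, 0]               # a 2X AA edge pixel: one white, one black sample
print(sum(edge) / len(edge))  # naive average: 127.5, displays too dark
print(resolve(edge))          # ~186, much closer to half intensity on screen
```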

 

Anisotropic filtering
For our anisotropic texture filtering tests, I used the same Unreal Tournament 2004 demo once more. Again, I left in-game detail levels at their highest setting and used a display resolution of 1024×768.

There isn’t much difference in aniso scaling between the GeForce 6200, Radeon X600 Pro, and Radeon X300. The 6200 takes a slightly smaller performance hit between 4X and 16X, but the gaps are nowhere near as dramatic as in our antialiasing results.

Moving from performance to quality, here’s how the GeForce 6200’s anisotropic filtering looks up to 8X. Click on the images below to open an uncompressed PNG in a new window.


No aniso, 2X, 4X, 8X – GeForce 6200


No aniso, 2X, 4X, 8X – Radeon X600 Pro

Anisotropic filtering levels are comparable between the GeForce 6200 and Radeon X600 Pro, at least to my eye.

 

3DMark05 image quality – Game test 1
When NVIDIA extended its GeForce FX line down to the low end with the GeForce FX 5200, it resorted to all sorts of partial precision tricks to improve performance. Unfortunately, dropping precision degraded image quality: at launch, the FX 5200’s DirectX 9 image quality was noticeably inferior to that of high-end GeForce FX cards. To make sure that NVIDIA isn’t doing the same thing again with the GeForce 6200, I compared its output to a GeForce 6800 GT’s in 3DMark05’s game tests using the Shader Model 3.0 path. The X600 Pro can only use 3DMark05’s Shader Model 2.0 path, which produces slightly different images by default, so I haven’t included it here.
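For reference, partial precision means running shader math at FP16 (10-bit mantissa) instead of FP32 (23-bit mantissa). A single rounding step is invisible in an 8-bit framebuffer, but error can compound across a long arithmetic chain. Here's a contrived sketch of the mechanism, with made-up numbers rather than anything from NVIDIA's actual shaders:

```python
import numpy as np

def accumulate(dtype, steps=100):
    # Sum many small lighting-style terms, rounding to `dtype` at each step.
    c = dtype(0.0)
    for _ in range(steps):
        c = dtype(c + dtype(0.01) * dtype(0.7))
    return float(c)

print(accumulate(np.float32))  # ~0.7000
print(accumulate(np.float16))  # drifts from the FP32 result as rounding compounds
```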

Click on the images below to open an uncompressed PNG in a new window.


Game test 1 – GeForce 6200


Game test 1 – GeForce 6800 GT

Everything looks good in game test 1. At least to my eye, the 6200’s output doesn’t look any better or worse than the 6800 GT’s.

 

3DMark05 image quality – Game test 2


Game test 2 – GeForce 6200


Game test 2 – GeForce 6800 GT

The same goes for game test 2…

 

3DMark05 image quality – Game test 3


Game test 3 – GeForce 6200


Game test 3 – GeForce 6800 GT

Game test 3 also shows no apparent differences between the cards. If NVIDIA’s cutting precision, it’s not having a detrimental impact on image quality.

 

Power consumption
I broke out our trusty watt meter to measure overall system power consumption, sans monitor, at the outlet. Power consumption was measured at idle and under a 3DMark05 Return to Proxycon game test load.

System power consumption with the GeForce 6200 closely mirrors that of the Radeon X300. It’s particularly interesting to note that system power consumption with the integrated GMA 900 isn’t much lower than with our discrete graphics cards.

 

Conclusions
When NVIDIA launched the GeForce 6800, it talked up the new architecture as a scalable design that would power a top-to-bottom line of graphics cards. With the GeForce 6200, that top-to-bottom line is complete, at least as far as add-in desktop graphics cards are concerned. The GeForce 6200 brings the impressive rendering capabilities of the GeForce 6 series to the budget space with fewer compromises than one might expect. The GeForce 6200’s strong performance in DOOM 3 shows that this budget card isn’t too short on pixel processing power.

Priced between $129 and $149, the GeForce 6200 will do battle with ATI’s Radeon X300 and Radeon X600 Pro. Against the X300, the GeForce 6200 is clearly superior. However, with the exception of DOOM 3, the X600 Pro’s performance is tough to match. The GeForce 6200 performs exceptionally well given its relatively low clock speeds, but against a card with a 100MHz core and memory clock advantage, there’s only so much it can do.

In some circles, the GeForce 6200 will also compete with Intel’s Graphics Media Accelerator 900. Those who purchase PCs from the big PC makers will likely have the option of going with integrated graphics or trading up to something like the GeForce 6200. Based on the performance of the GMA 900, trading up looks like the only viable option for gaming, at least with newer titles. The GMA 900’s lack of shader power has a devastating impact on performance, and it’s rare that pixel shader effects are even displayed correctly. To make matters worse, the GMA isn’t detected as a DirectX 9 graphics option by some games.

As I wrap things up, I can’t help but be struck by the GeForce 6200 graphics chip’s relatively large die size. It appears to be an NV43 with some of its pixel shaders disabled, and it’s a big chip for what should be a very high volume part. I wouldn’t be surprised to see a GeForce 6100 or 6300 emerge at some point down the road with a smaller die, perhaps with four pixel pipes bound to two ROPs using that fancy fragment crossbar.

Whatever happens in the future, right now the GeForce 6200 is a pretty compelling graphics card for budget-minded gamers. It’s clearly a better option than the Radeon X300, but add-in board manufacturers are going to have to break out some Turbo Golden Sample Special Editions with higher clock speeds to catch the Radeon X600 Pro.
