Nvidia’s GeForce 9600 GT graphics processor

Scott Wasson

For a while there, trying to find a decent DirectX 10-capable graphics card for somewhere around two hundred bucks was a tough assignment. Nvidia had its GeForce 8600 GTS, but that card didn’t really perform well enough to measure up to similarly priced DX9 cards. On the Radeon side of things, AMD had, well, pretty much nothing. You could buy a cheap, slow DX10-ready Radeon or a faster one with a formidable price tag. Between them, crickets chirped as tumbleweeds blew by.

Happily, the GPU makers saw fit to remedy this situation, and in the past few months, we’ve gained an embarrassment of riches in video card choices between about $170 and $250, including the screaming GeForce 8800 GT and a pair of solid values in the Radeon HD 3850 and 3870. Now, that embarrassment is becoming positively scandalous, as Nvidia unveils yet another new GPU aimed at graphics cards below the $200 mark: the GeForce 9600 GT.

Where does the 9600 GT fit into the daunting mix of video cards available in this price range? How does it match up with the Radeon HD 3850 and 3870? Why is this new GPU the first installment in the GeForce 9 series? We have no idea about that last one, but we’ll try to answer those other questions.


The GeForce 9600 GT laid bare

Welcome the new middle management
Let’s get this out of the way at the outset. Nvidia’s decision to make this new graphics card the first in the GeForce 9 series is all kinds of baffling. They just spent the past few months introducing two new members of the 8-series, the GeForce 8800 GT and the confusingly named GeForce 8800 GTS 512, based on a brand-new chip codenamed G92. The G92 packs a number of enhancements over older GeForce 8 graphics processors, including some 3D performance tweaks and improved HD video features. Now we have another new GPU, codenamed G94, that’s based on the same exact generation of technology and is fundamentally similar to the G92 in almost every way. The main difference between the two chips is that Nvidia has given the G94 half the number of stream processor (SP) units in the G92 in order to create a smaller, cheaper chip. Beyond that, they’re pretty much the same thing.

So why the new name? Nvidia contends it’s because the first product based on the G94, the GeForce 9600 GT, represents such a big performance leap over the prior-generation GeForce 8600 GTS. I suppose that may be true, but they’re probably going to have to rename the GeForce 8800 GT and GTS 512 in order to make their product lineup rational again. For now, you’ll just want to keep in mind that when you’re thinking about the GeForce 8800 GT and the 9600 GT, you’re talking about products based on two chips from the same generation of technology, the G92 and G94. They share the same feature set, so choosing between them ought to be a simple matter of comparing price and performance, regardless of what the blue-shirts haunting the aisles of Best Buy tell you.

Not that we really care about that stuff, mind you. We’re much more interested in the price and performance end of things, and here, the G94 GPU looks mightily promising. Because Nvidia has only excised a couple of the SP clusters included in the G92, the G94 retains most of the bits and pieces it needs to perform quite well, including a 256-bit memory interface and a full complement of 16 ROP units to output pixels and handle antialiasing blends. Yes, the G94 is down a little bit in terms of shader processing power and (since texture units are located in the SPs) texture filtering throughput. But you may recall that the GeForce 8800 GT is based on a G92 with one of its eight SP clusters disabled, and it works quite well indeed.

Here’s a quick look at the G94’s basic capabilities compared to some common points of reference.

| | ROP output (pixels/clock) | Texture filtering (bilinear texels/clock) | Texture filtering (bilinear FP16 texels/clock) | Stream processors | Memory interface (bits) |
|---|---|---|---|---|---|
| Radeon HD 38×0 | 16 | 16 | 16 | 320 | 256 |
| GeForce 9600 GT | 16 | 32 | 16 | 64 | 256 |
| GeForce 8800 GT | 16 | 56 | 28 | 112 | 256 |
| GeForce 8800 GTS | 20 | 24 | 24 | 96 | 320 |

The 9600 GT is suitably potent to match up well in most categories with the GeForce 8800 GT and the Radeon HD 3850/3870. Even the older G80-based GeForce 8800 GTS fits into the conversation, although its capacities are almost all higher. As you know, the RV670 GPU in the Radeons has quite a few more stream processors, but Nvidia’s GPUs tend to make up that difference with higher SP clock speeds.

In fact, the GeForce 9600 GT makes up quite a bit of ground thanks to its clock speeds. The 9600 GT’s official “base” clock speeds are 650MHz for the GPU core, 1625MHz for the stream processors, and 900MHz (1800MHz effective) for its GDDR3 memory. From there, figuring out the GPU’s theoretical potency is easy.

| | Peak pixel fill rate (Gpixels/s) | Peak bilinear texel filtering rate (Gtexels/s) | Peak bilinear FP16 texel filtering rate (Gtexels/s) | Peak shader arithmetic (GFLOPS) | Peak memory bandwidth (GB/s) |
|---|---|---|---|---|---|
| Radeon HD 3870 | 12.4 | 12.4 | 12.4 | 496 | 72.0 |
| GeForce 9600 GT | 10.4 | 20.8 | 10.4 | 312 | 57.6 |
| GeForce 8800 GT | 9.6 | 33.6 | 16.8 | 504 | 57.6 |
| GeForce 8800 GTS | 10.0 | 12.0 | 12.0 | 346 | 64.0 |
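For the curious, the arithmetic behind these numbers is simple enough to fit in a few lines. Here’s a quick sketch, purely for illustration, that rebuilds the 9600 GT’s row from its unit counts and base clocks; the three-FLOPS-per-SP figure is the generous counting for GeForce shaders noted a bit further down, and none of this comes from a vendor tool.

```python
# Back-of-the-envelope peak rates for the GeForce 9600 GT at base clocks.
# Unit counts come from the capability table above; 3 FLOPS/SP/clock is the
# generous (MAD + MUL) counting for GeForce shaders.
core_mhz, shader_mhz, mem_mhz_effective = 650, 1625, 1800
rops, bilinear_units, fp16_units, stream_processors = 16, 32, 16, 64
bus_width_bits = 256

pixel_fill    = rops * core_mhz / 1000                         # 10.4 Gpixels/s
bilinear_rate = bilinear_units * core_mhz / 1000               # 20.8 Gtexels/s
fp16_rate     = fp16_units * core_mhz / 1000                   # 10.4 Gtexels/s
shader_gflops = stream_processors * shader_mhz * 3 / 1000      # 312 GFLOPS
bandwidth     = bus_width_bits / 8 * mem_mhz_effective / 1000  # 57.6 GB/s
```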

As expected, the 9600 GT trails the 8800 GT in terms of texture filtering capacity and shader processing power, but it has just as much pixel fill rate and memory bandwidth as its big brother. More notably, look at how the 9600 GT matches up to the GeForce 8800 GTS, a card that was selling for $400 less than a year ago.

Making these theoretical comparisons to entirely different GPU architectures like the RV670 is rather tricky. On paper, the 9600 GT looks overmatched versus the Radeon HD 3870, even though we’ve given the GeForce cards the benefit of the doubt here in terms of our FLOPS estimates. (Another way of counting would cut the GeForces’ FLOPS count by a third.) We’ll have to see how that works out in practice.

Incidentally, the 9600 GT’s performance will be helped at higher resolutions by a feature carried over from the G92: improved color compression. All GeForce 8-series GPUs compress color data for textures and render targets in their ROP subsystems in order to save bandwidth. The G92 and G94 have expanded compression coverage, which Nvidia says is now sufficient for running games at resolutions up to 2560×1600 with 4X antialiasing.

The chip
Like the G92 before it, the G94 GPU is manufactured on a 65nm fabrication process. That leaves AMD with something of an edge, since the RV670 is made using a smaller 55nm process. Nvidia estimates the G94’s transistor count at 505 million, versus 754 million for the G92. AMD seems to count a little differently, but it estimates the RV670 at a sinister 666 million transistors.

Here’s a quick visual comparison of the three chips. By my measurements, the G94 is approximately 240 mm², quite a bit smaller than the G92 at 324 mm² but not as small as the RV670 at 192 mm². Obviously, the G94 is very much in the same class as the RV670, and it should give Nvidia a much more direct competitor to AMD’s strongest product.

The cards
I’ve already told you about the 9600 GT’s basic specs, but the products you’ll see for sale will have a few other common parameters. Among them is this very nice development: cards will come with 512MB of GDDR3 memory, standard. I’m pleased to see this memory size becoming the new baseline for enthusiast-class products. Not every game at every resolution requires more than 256MB of memory, but a mid-range card with 512MB is a much nicer compromise, especially given RAM prices these days. On top of that, running two GPUs in SLI makes a lot more sense with 512MB of memory than it does with 256MB, where you’re facing a serious mismatch in GPU horsepower and available memory.

Most 9600 GTs will sport a single-slot cooler similar to the one on the 8800 GT. Nvidia rates board power consumption at 95W, so the 9600 GT requires the help of a single six-pin PCIe aux power connector. And, as we’ve hinted, prices should slot in just below the 8800 GT at about $169 to $189, according to Nvidia.

Of course, there will be a range of GeForce 9600 GT cards on tap from various board makers, and many of them will be clocked at higher frequencies than Nvidia’s defaults. That’s decidedly the case with the 9600 GT we have in the labs for review, Palit’s flamboyant GeForce 9600 GT Sonic.


Do you find Palit’s palette palatable?

This puppy’s dual-slot cooler is shrouded in bright Lego/Tonka yellow, which pretty effectively broadcasts that this isn’t a stock-clocked creation. In fact, Palit has turned up the GPU core clock to 700MHz, the shader clock to 1.75GHz, and the memory to 1GHz.

Also, there’s an atomic frog. With, I think, a flamethrower. I’ve learned a lot about video cards over the years, but some things I will never fully understand.

Anyhow, Palit has loaded this thing up with more ports than the Pacific Rim. Check it out:

There are two dual-link DVI outputs, an HDMI out, a DisplayPort connector, and an optical S/PDIF output. You’ll need to connect an audio source to the card’s two-pin internal plug (using the supplied internal audio cable) in order for HDMI audio and the S/PDIF output to work, since unlike the RV670, the G94 GPU lacks an audio device. Still, that’s a very impressive complement of ports—the most complete I’ve seen in a video card in some time. You’ve gotta give these guys credit for making something different here.

Unfortunately, different isn’t always better, as we found out when we tried to plug the six-pin portion of our adaptable eight-pin PCIe aux power lead into the card. Regular six-pin-only connectors work fine, but the eight-pin one didn’t fit.

Such problems could be resolved fairly easily by removing the shroud altogether, which exposes the card’s nifty cooler and would probably lead to better cooling anyhow. All other things being equal, I’d prefer a cooler designed to exhaust warm air from the case, like most dual-slot coolers do these days. However, I have to admit that this cooler did a fine job on our test card and made very little noise. This puppy has it where it counts, too, with a copper base connected to copper heatpipes routed into aluminum fins. The fan is speed controlled, and although it can be quite noisy when first booting or in a pre-boot environment, it’s impressively quiet in Windows—even when running a 3D application or game.

Palit’s festival of transgression against the reference design continues at the board level, where the firm has replaced the standard two-phase power with a three-phase design, intended to enhance board longevity and overclocking potential. Again, we like the initiative, and we’ll test the board’s overclocking headroom shortly.

This card ships with a copy of Tomb Raider Anniversary, which apparently isn’t a bad game, as hard as I find that to believe. Palit says the card’s MSRP is $219, but it’s currently selling for $209 on Newegg. Obviously, you’re paying extra for all of the bells and whistles on this card, which take it nearly into 8800 GT territory. Palit has a more pedestrian model selling for $179, as do a number of other board makers, including XFX and Gigabyte. MSI even has one with a 700MHz GPU core selling at that price.

The competition hits the juice
AMD has no intention of ceding ground to Nvidia in this portion of the market without a fight, which is good news for you and me. In order to counter the 9600 GT, AMD and the various Radeon board makers have recently slashed prices and juiced up their Radeon HD 3850 cards. Clock speeds are up, and many of the boards are now equipped with 512MB of GDDR3 memory, as well.

Diamond kindly agreed to send us its latest 3850 for comparison to the 9600 GT. This card is emblematic of the new wave of Radeon HD 3850s. It’s clocked at 725MHz with 512MB of GDDR3 memory running at 900MHz, and it’s selling for $169.99 right now. Those clock speeds put it darn near the base clocks of the Radeon HD 3870, although as we learned in our recent video card roundup, 3870 clocks are making something of a northward migration, as well.

Diamond isn’t alone in offering this class of product. In fact, we paired up the Diamond card with a similarly clocked HIS Hightech TurboX 512MB card for CrossFire testing.

Our testing methods
As ever, we did our best to deliver clean benchmark numbers. Tests were run at least three times, and the results were averaged.

Our test systems were configured like so:

| | Intel X38 system | nForce 680i SLI system |
|---|---|---|
| Processor | Core 2 Extreme X6800 2.93GHz | Core 2 Extreme X6800 2.93GHz |
| System bus | 1066MHz (266MHz quad-pumped) | 1066MHz (266MHz quad-pumped) |
| Motherboard | Gigabyte GA-X38-DQ6 | XFX nForce 680i SLI |
| BIOS revision | F7 | P31 |
| North bridge | X38 MCH | nForce 680i SLI SPP |
| South bridge | ICH9R | nForce 680i SLI MCP |
| Chipset drivers | INF update 8.3.1.1009, Matrix Storage Manager 7.8 | ForceWare 15.08 |
| Memory size | 4GB (4 DIMMs) | 4GB (4 DIMMs) |
| Memory type | 2 x Corsair TWIN2X20488500C5D DDR2 SDRAM at 800MHz | 2 x Corsair TWIN2X20488500C5D DDR2 SDRAM at 800MHz |
| CAS latency (CL) | 4 | 4 |
| RAS to CAS delay (tRCD) | 4 | 4 |
| RAS precharge (tRP) | 4 | 4 |
| Cycle time (tRAS) | 18 | 18 |
| Command rate | 2T | 2T |
| Audio | Integrated ICH9R/ALC889A with RealTek 6.0.1.5497 drivers | Integrated nForce 680i SLI/ALC850 with RealTek 6.0.1.5497 drivers |
| Graphics | Diamond Radeon HD 3850 512MB PCIe with Catalyst 8.2 drivers; Dual Radeon HD 3850 512MB PCIe with Catalyst 8.2 drivers; Radeon HD 3870 512MB PCIe with Catalyst 8.2 drivers; Dual Radeon HD 3870 512MB PCIe with Catalyst 8.2 drivers; Radeon HD 3870 X2 1GB PCIe with Catalyst 8.2 drivers; Palit GeForce 9600 GT 512MB PCIe with ForceWare 174.12 drivers; GeForce 8800 GT 512MB PCIe with ForceWare 169.28 drivers; EVGA GeForce 8800 GTS 512MB PCIe with ForceWare 169.28 drivers; GeForce 8800 Ultra 768MB PCIe with ForceWare 169.28 drivers | Dual Palit GeForce 9600 GT 512MB PCIe with ForceWare 174.12 drivers; Dual GeForce 8800 GT 512MB PCIe with ForceWare 169.28 drivers |
| Hard drive | WD Caviar SE16 320GB SATA | WD Caviar SE16 320GB SATA |
| OS | Windows Vista Ultimate x86 Edition | Windows Vista Ultimate x86 Edition |
| OS updates | KB936710, KB938194, KB938979, KB940105, KB945149, DirectX November 2007 Update | KB936710, KB938194, KB938979, KB940105, KB945149, DirectX November 2007 Update |

Thanks to Corsair for providing us with memory for our testing. Their quality, service, and support are easily superior to no-name DIMMs.

Our test systems were powered by PC Power & Cooling Silencer 750W power supply units. The Silencer 750W was a runaway Editor’s Choice winner in our epic 11-way power supply roundup, so it seemed like a fitting choice for our test rigs. Thanks to OCZ for providing these units for our use in testing.

Unless otherwise specified, image quality settings for the graphics cards were left at the control panel defaults. Vertical refresh sync (vsync) was disabled for all tests.

We used the following versions of our test applications:

The tests and methods we employ are generally publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

Sizing up the new guy
We’ve already talked some about the 9600 GT’s theoretical capabilities. Here’s a quick table to show how it compares with a broader range of today’s video cards, including the juiced-up Diamond Radeon HD 3850 512MB card we’re testing. I’ve included numbers for the Palit card at its higher clock speeds, as well.

| | Peak pixel fill rate (Gpixels/s) | Peak bilinear texel filtering rate (Gtexels/s) | Peak bilinear FP16 texel filtering rate (Gtexels/s) | Peak memory bandwidth (GB/s) | Peak shader arithmetic (GFLOPS) |
|---|---|---|---|---|---|
| GeForce 9600 GT | 10.4 | 20.8 | 10.4 | 57.6 | 312 |
| Palit GeForce 9600 GT | 11.2 | 22.4 | 11.2 | 64.0 | 336 |
| GeForce 8800 GT | 9.6 | 33.6 | 16.8 | 57.6 | 504 |
| GeForce 8800 GTS | 10.0 | 12.0 | 12.0 | 64.0 | 346 |
| GeForce 8800 GTS 512 | 10.4 | 41.6 | 20.8 | 62.1 | 624 |
| GeForce 8800 GTX | 13.8 | 18.4 | 18.4 | 86.4 | 518 |
| GeForce 8800 Ultra | 14.7 | 19.6 | 19.6 | 103.7 | 576 |
| Radeon HD 2900 XT | 11.9 | 11.9 | 11.9 | 105.6 | 475 |
| Radeon HD 3850 | 10.7 | 10.7 | 10.7 | 53.1 | 429 |
| Diamond Radeon HD 3850 | 11.6 | 11.6 | 11.6 | 57.6 | 464 |
| Radeon HD 3870 | 12.4 | 12.4 | 12.4 | 72.0 | 496 |
| Radeon HD 3870 X2 | 26.4 | 26.4 | 26.4 | 115.2 | 1056 |
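Since a couple of these rows reflect factory overclocks, here’s the same napkin arithmetic applied to the two hot-clocked cards in our labs, just as a sanity check on the table. Note that the Radeon figure uses the usual two-FLOPS-per-SP (multiply-add) counting for AMD’s 320 stream processors, versus the three-per-clock counting used for the GeForces.

```python
# Palit GeForce 9600 GT: 16 ROPs, 32 bilinear filtering units, 64 SPs, 256-bit bus,
# running at 700MHz core, 1750MHz shader, and 1GHz (2Gbps effective) GDDR3.
palit_fill  = 16 * 700 / 1000           # 11.2 Gpixels/s
palit_tex   = 32 * 700 / 1000           # 22.4 Gtexels/s
palit_flops = 64 * 1750 * 3 / 1000      # 336 GFLOPS (MAD + MUL counting)
palit_bw    = (256 / 8) * 2000 / 1000   # 64.0 GB/s

# Diamond Radeon HD 3850: 16 ROPs and texture units, 320 SPs, 256-bit bus,
# running at 725MHz core and 900MHz (1.8Gbps effective) GDDR3.
diamond_fill  = 16 * 725 / 1000         # 11.6 Gpixels/s (and Gtexels/s)
diamond_flops = 320 * 2 * 725 / 1000    # 464 GFLOPS (MAD counting)
diamond_bw    = (256 / 8) * 1800 / 1000 # 57.6 GB/s
```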

Now the question is: how do these theoretical numbers translate into real performance? For that, we can start with some basic synthetic tests of GPU throughput.

The single-textured fill rate test is typically limited by memory bandwidth, which helps explain why the Palit 9600 GT beats out our stock GeForce 8800 GT. The multitextured test is more generally limited by the GPU’s texturing capabilities, and in this case, the 8800 GT pulls well away from its upstart sibling. The 9600 GT easily outdoes the Radeon HD 3850 and 3870, though, which is right in line with what we’d expect.

3DMark’s two simple pixel shader tests show the 9600 GT at the back of the pack, again as we’d expect. Simply put, shader arithmetic is the place where Nvidia has compromised most in this design. Whether or not that will really limit performance in today’s games is an intriguing question. We shall see.

Among the GeForce 8 cards, these vertex shader tests appear to track more closely with shader clock speeds than with the total shader power of the card. I don’t think that’s anything worth worrying about.

However, have a look at the difference in scores between the Radeon HD 3850 and 3870 in the simple vertex shader test. This is not a fluke; I re-tested several times to be sure. The 3850 is just faster in the simple vertex shader test—at least until you get multiple GPUs involved. After consulting with AMD, I believe the most likely explanation for the 3870’s low performance here is its use of GDDR4 memory. GDDR4 memory has a transaction granularity of 64 bits, while GDDR3’s is half that. In certain cases, that may cause GDDR4 memory to deliver lower performance per clock, especially if the access patterns don’t play well with its longer burst length. Although this effect is most pronounced here, we saw its impact in several of our game tests, as well, where the Radeon HD 3850 turned out to be faster than the 3870, despite having slightly slower GPU and memory clock frequencies.
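To picture how transaction granularity can bite, here’s a deliberately simplified sketch: assume every memory request gets rounded up to a whole number of minimum-size transactions and look at how much of the transferred data is actually useful. This is only an illustration of the explanation above with made-up request sizes, not a model of the RV670’s memory controller.

```python
import math

def useful_fraction(request_bytes, granularity_bytes):
    """Fraction of transferred bytes the request actually wanted, assuming each
    request is padded out to whole minimum-size transactions (a big
    simplification of how a real memory controller behaves)."""
    transferred = math.ceil(request_bytes / granularity_bytes) * granularity_bytes
    return request_bytes / transferred

# A hypothetical fine-grained 4-byte request:
print(useful_fraction(4, granularity_bytes=4))  # 32-bit (GDDR3-like) granularity: 1.0
print(useful_fraction(4, granularity_bytes=8))  # 64-bit (GDDR4-like) granularity: 0.5
```

If the access pattern consists of larger, well-aligned bursts, the two memory types come out even, which would explain why the penalty only shows up in certain tests.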

Call of Duty 4: Modern Warfare
We tested Call of Duty 4 by recording a custom demo of a multiplayer gaming session and playing it back using the game’s timedemo capability. Since this is a high-end graphics card we’re testing, we enabled 4X antialiasing and 16X anisotropic filtering and turned up the game’s texture and image quality settings to their limits.

We’ve chosen to test at 1680×1050, 1920×1200, and 2560×1600—resolutions of roughly two, three, and four megapixels—to see how performance scales. I’ve also tested at 1280×1024 with the 9600 GT and its closest competitors here, since some of them struggled to deliver completely fluid frame rates at 1680×1050.

The 9600 GT delivers jaw-dropping performance in CoD4, keeping frame rate averages above 40 FPS even at 1920×1200 resolution. That’s astounding, because we’re talking about a great-looking modern game running with 4X antialiasing, 16X aniso filtering, and peak quality options. In fact, the 9600 GT shadows the GeForce 8800 GT, trailing it by only a few frames per second.

Note that in this game, I’ve also provided results for a stock-clocked GeForce 9600 GT (at 650/900MHz), for comparison. Although dropping from the Palit card’s frequencies to Nvidia’s reference clocks puts a little more distance between the 8800 GT and the 9600 GT, it doesn’t drop the 9600 GT into Radeon territory. Both the 3850 and 3870 are consistently slower.

Sadly, just when you need SLI, at 2560×1600, it fails. The story is the same on both the 9600 GT and the 8800 GT. My best theory: they may be running out of video memory, which would explain the big drop in performance.

Enemy Territory: Quake Wars
We tested this game with 4X antialiasing and 16X anisotropic filtering enabled, along with “high” settings for all of the game’s quality options except “Shader level,” which was set to “Ultra.” We left the diffuse, bump, and specular texture quality settings at their default levels, though. Shadows, soft particles, and smooth foliage were enabled. Again, we used a custom timedemo recorded for use in this review.

The 9600 GT’s strong performance continues here, where it again finds itself between the GeForce 8800 GT and the Radeon HD 3870. Dropping down to Nvidia’s base clock frequencies brings the 9600 GT closer to the 3870, but it’s still a tick faster. These are very minor differences of a few frames per second, though, to keep things in perspective.

Half-Life 2: Episode Two
We used a custom-recorded timedemo for this game, as well. We tested Episode Two with the in-game image quality options cranked, with 4X AA and 16X anisotropic filtering. HDR lighting and motion blur were both enabled.

The 9600 GT’s advantage over the Radeon HD 3850 and 3870 carries over to Episode Two. And two 9600 GTs in SLI can deliver playable frame rates at 2560×1600 with 4X AA and 16X aniso, believe it or not.

Crysis
I was a little dubious about the GPU benchmark Crytek supplies with Crysis after our experiences with it when testing three-way SLI. The scripted benchmark does a flyover that covers a lot of ground quickly and appears to stream in lots of data in a short period, possibly making it I/O bound—so I decided to see what I could learn by testing the 9600 GT and its closest competitors with FRAPS instead. I chose to test in the “Recovery” level, early in the game, using our standard FRAPS testing procedure (five sessions of 60 seconds each). The area where I tested included some forest, a village, a roadside, and some water—a good mix of the game’s usual environments.

Please note that all of the results you see below for the Radeons come from a newer graphics driver, version 8.451-2-080123a, than the ones we used for the rest of our tests. This newer driver improved Crysis performance noticeably. These driver enhancements for Crysis should be available to the public soon in Catalyst 8.3.

The cards tend to cluster together at 1280×800, and multi-GPU rendering doesn’t seem to help performance much at all. A low of 23 frames per second isn’t too bad, when you think about it, and I’d classify any of these cards as running Crysis at playable speeds at this resolution. Obviously, there’s very little difference between them.

At 1680×1050, the field begins to separate just a little, while CrossFire and SLI actually start to help. The 9600 GT is technically faster than the Radeons, but not by enough to matter much.

Unreal Tournament 3
We tested UT3 by playing a deathmatch against some bots and recording frame rates during 60-second gameplay sessions using FRAPS. This method has the advantage of duplicating real gameplay, but it comes at the expense of precise repeatability. We believe five sample sessions are sufficient to get reasonably consistent and trustworthy results. In addition to average frame rates, we’ve included the low frame rates, because those tend to reflect the user experience in performance-critical situations. In order to diminish the effect of outliers, we’ve reported the median of the five low frame rates we encountered.
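For clarity, here’s roughly how that aggregation works, sketched with invented numbers; I’m assuming the reported average is simply the mean across the five sessions.

```python
from statistics import mean, median

# Hypothetical FRAPS results: (average fps, lowest fps) for each of five
# 60-second gameplay sessions on one card. The numbers are made up.
sessions = [(64.2, 41.0), (61.8, 37.5), (66.0, 44.1), (63.1, 29.8), (62.5, 40.2)]

avg_fps = mean(avg for avg, low in sessions)    # reported average frame rate: 63.5
low_fps = median(low for avg, low in sessions)  # median of the five lows: 40.2,
                                                # which shrugs off the 29.8 outlier
```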

Because UT3 doesn’t support multisampled antialiasing, we tested without AA. Instead, we just cranked up the resolution to 2560×1600 and turned up the game’s quality sliders to the max. I also disabled the game’s frame rate cap before testing.

The UT3 results pretty much confirm what we’ve seen elsewhere. Any of these cards can, incredibly, run UT3 well enough at this resolution, but you’ll probably want to drop down to 1920×1200 for the best experience with either of the Radeon HD cards. Their performance seemed a little choppy at times to me. And, heh, you’ll probably have to drop down a little bit to match your monitor’s native resolution.

Power consumption
We measured total system power consumption at the wall socket using an Extech power analyzer model 380803. The monitor was plugged into a separate outlet, so its power draw was not part of our measurement. The cards were plugged into a motherboard on an open test bench.

The idle measurements were taken at the Windows Vista desktop with the Aero theme enabled. The cards were tested under load running UT3 at 2560×1600 resolution, using the same settings we did for performance testing.

Note that the SLI configs were, by necessity, tested on a different motherboard than the single cards, as noted in our testing methods section.

Whoa. The 9600 GT draws even less power under load than the Radeon HD 3850. That makes it one heck of an energy efficient GPU, given its performance.

Noise levels
We measured noise levels on our test systems, sitting on an open test bench, using an Extech model 407727 digital sound level meter. The meter was mounted on a tripod approximately 12″ from the test system at a height even with the top of the video card. We used the OSHA-standard weighting and speed for these measurements.

You can think of these noise level measurements much like our system power consumption tests, because the entire systems’ noise levels were measured, including the stock Intel cooler we used to cool the CPU. Of course, noise levels will vary greatly in the real world along with the acoustic properties of the PC enclosure used, whether the enclosure provides adequate cooling to avoid a card’s highest fan speeds, placement of the enclosure in the room, and a whole range of other variables. These results should give a reasonably good picture of comparative fan noise, though.

Unfortunately—or, rather, quite fortunately—I wasn’t able to reliably measure noise levels for most of these systems at idle. Our test systems keep getting quieter with the addition of new power supply units and new motherboards with passive cooling and the like, as do the video cards themselves. I decided this time around that our test rigs at idle are too close to the sensitivity floor for our sound level meter, so I only measured noise levels under load.

Like I said, Palit’s cooler is nice and quiet. Of course, it doesn’t hurt to have a GPU that draws so little power (and thus generates little heat) onboard.

GPU temperatures
Per your requests, I’ve added GPU temperature readings to our results. I captured these using AMD’s Catalyst Control Center and Nvidia’s nTune Monitor, so we’re basically relying on the cards to report their temperatures properly. In the case of multi-GPU configs, well, I only got one number out of CCC. I used the highest of the numbers from the Nvidia monitoring app. These temperatures were recorded while running UT3 in a window.

Yeah, those numbers seemed wrong to me at first, too. I tried again a few times, and the GPU temps never really got any higher. Palit’s copper heatsink with heatpipes may be overkill for this GPU. Then again, it’s the kind of overkill I like.

Overclocking
So how far will this little GPU go? Finding out was surprisingly easy using Palit’s little overclocking tool. I just kept nudging the sliders upward until something went sideways and caused a Vista display driver crash in my 3D test app. Then I backed off a little bit and watched for visual artifacts. In the end, the 9600 GT was stable with an 810MHz GPU core and a 1.9GHz shader clock.

Turning up the memory clock didn’t work out so well, though. Even going to 1050MHz produced immediate visual corruption and a system lock. I gave up and ran a few tests at my known-good overclocked speeds.

A little overclocking gives the 9600 GT enough of a boost to surpass the 8800 GT—just barely. I’m curious to see how much the more pedestrian 9600 GTs out there will overclock. I suspect Palit’s spiffy cooler and three-phase power have given this card more headroom than most.

Conclusions
The pattern in our performance testing was unmistakable: the GeForce 9600 GT is just a little bit slower than a GeForce 8800 GT and a little bit faster than the Radeon HD 3850 and 3870. Of course, that statement needs some qualification, since we tested the 8800 GT and HD 3870 at bone-stock clocks, while the 9600 GT and HD 3850 we tested were both overclocked considerably. But the basic trends we spotted were consistent, even when we reduced the 9600 GT card to Nvidia’s base clock speeds. The 9600 GT also impressed us with the lowest power draw under load of any card we tested and very low noise levels—despite its amped-up clock speeds.

I’m struggling to figure out what’s not to like here. One could argue that the practical performance difference between the Radeon HD 3850 512MB and the GeForce 9600 GT in our testing was largely nil. Both cards ran most games well at common resolutions like 1680×1050, even with quality levels cranked up. Image quality between the two was comparable—and uniformly good. When there was a performance difference between them, it was usually fairly minor.

This is true enough, but the performance differences were large enough in Call of Duty 4 and Half-Life 2: Episode Two to distinguish the 9600 GT as the better choice.

One could also argue that the 9600 GT’s strong performance today may give way to relatively weaker performance down the road. If game developers shift their attention to using more and more complex shaders, the 9600 GT could end up running tomorrow’s games slower than the Radeon HD 3850, which clearly has more shader processing power.

This is a possibility, I suppose.

But at the end of the day, that just means there are a couple of very good choices in the market right now. The GeForce 9600 GT is one heck of a deal in a graphics card at around $179, and that’s something we like very much. If you haven’t upgraded for a while and don’t want to drop a lot of cash when doing so, it’s hard to go wrong with the 9600 GT. This card should offer roughly twice the performance of a DX9-class graphics card like the GeForce 7900 GS or the Radeon X1950 Pro, based on what we’ve seen. If you want to spend a little more and get a GeForce 8800 GT, you’ll get some additional future proofing in the form of shader power and just a little bit higher frame rates in today’s games. Whether that’s worth it will depend on your budget and your priorities. I will say this though: if there has ever been a better time to upgrade, I sure as heck don’t remember it.
