Say that you have a fairly robust graphics chip like the GM204 GPU that powers the GeForce GTX 980, with four Xbox Ones’ worth of shader processing power on tap. That’s pretty good, right? But it’s also expensive; the GTX 980 lists for $549. Say you want to make a more affordable product based on this same tech. That’s when you grab the tiny starter pull cord between your thumb and index finger and give it an adorable little tug. The world’s smallest chainsaw sputters to life. Saw the GM204 chip exactly in half, blow away the debris with a puff of air, and you get the GM206 GPU that powers Nvidia’s latest graphics card, the GeForce GTX 960.

For just under two hundred bucks, the GTX 960 gives you half the power of a GeForce GTX 980—or, you know, two Xbox Ones’ worth of shader processing grunt. Better yet, because it’s based on Nvidia’s ultra-efficient Maxwell architecture, the GTX 960 ought to perform much better than its specs would seem to indicate. Can the GTX 960 live up to the standards set by its older siblings? Let’s have a look.

First, although “half a GTX 980” might not sound terribly sexy, I think this product is a pretty significant one for Nvidia. The GeForce GTX 970 and 980 have been a runaway success, so much so that the company uncharacteristically shared some sales numbers with us: over a million GTX 970 and 980 cards have been sold to consumers, a huge number in the realm of high-end graphics. What’s more, we have reason to believe that estimate is already pretty dated. The current number could be nearly twice that.

But most people don’t buy high-end graphics cards. Even among PC gamers, less expensive offerings are more popular. And the GTX 960 lands smack-dab in the “sweet spot” where most folks like to buy. If the prospect of “way more performance for about $200” sounds good to you, well, you’re definitely not alone.

Also, there is no chainsaw. I probably made an awful lot of hard-working chip guys cringe with my massive oversimplification above. Although the GM206 really does have half of nearly all key graphics resources compared to the GM204, it’s not just half the chip. These things aren’t quite that modular—not that you’d know that from this block diagram, which looks for all the world like half a GM204.

The GM206 has two graphics processing clusters, almost complete GPUs unto themselves, with four streaming multiprocessor (SM) units per cluster, for eight in all. Here’s how the chip stacks up to other current GPUs.



As you can see, the GM206 is a lightweight. The chip’s area is only a little larger than that of the GK106 GPU that powers the GeForce GTX 660 or the Pitcairn chip behind the Radeon R9 270X. Compared to those two, though, the GM206 has a narrower, 128-bit memory interface. In fact, the GM206 is the only chip in the table above with a memory path that narrow; GPUs of this size typically have wider interfaces.

The GM206 may be able to get away with less thanks to the Maxwell architecture’s exceptional efficiency. Maxwell GPUs tend to like high memory frequencies, and the GTX 960 follows suit with a 7 GT/s transfer rate for its GDDR5 RAM. So there’s more throughput on tap than one might think. Beyond that, this architecture makes very effective use of its memory bandwidth thanks to a new compression scheme that can, according to Nvidia’s architects, reduce memory bandwidth use by 17% to 29% in common workloads based on popular games.
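To put those compression numbers in perspective, here’s a quick back-of-the-envelope sketch. Treating Nvidia’s claimed savings as a direct multiplier on effective throughput is our simplifying assumption, not a measured result, but it illustrates why a 128-bit bus isn’t as constraining as it sounds:

```python
# Rough effective-bandwidth math for the GTX 960. Treating Nvidia's claimed
# compression savings as a direct throughput multiplier is a simplifying
# assumption, not a measured result.
bus_width_bits = 128        # GM206 memory interface width
transfer_rate_gtps = 7.0    # GDDR5 transfer rate in GT/s

raw_bw = bus_width_bits / 8 * transfer_rate_gtps   # 112.0 GB/s raw

for savings in (0.17, 0.29):
    # If compression trims traffic by `savings`, the same bus behaves like
    # raw / (1 - savings) worth of uncompressed bandwidth.
    print(f"{savings:.0%} savings -> ~{raw_bw / (1 - savings):.0f} GB/s effective")
```

Run that, and the GTX 960’s 112 GB/s looks more like 135 to 158 GB/s in compression-friendly workloads.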

Interestingly, Nvidia identifies the Radeon R9 285 as the GTX 960’s primary competitor. The R9 285 is based on a much larger GPU named Tonga, which is the only new graphics chip AMD introduced in 2014. (The R9 285 ships with a 256-bit memory interface, although I still believe that the Tonga chip itself probably is capable of a 384-bit memory config. For whatever reason—perhaps lots of inventory of the existing Hawaii and Tahiti chips—AMD has chosen not to ship a card with a fully-enabled Tonga onboard.) Even with several bits disabled, the R9 285 has a much wider memory path and more resources of nearly every kind at its disposal than the GM206 does. If the GTX 960’s performance really is competitive with the R9 285, it will be a minor miracle of architectural efficiency.

With the addition of the 960, the GeForce GTX 900 series now extends from $199 to $549. Like its big brothers, the GTX 960 inherits the benefits of being part of the Maxwell generation. Those include support for Nvidia’s nifty Dynamic Super Resolution feature and some special rendering capabilities that should be accessible via DirectX 12. Furthermore, with a recent driver update, Nvidia has made good on its promise to deliver a new antialiasing mode called MFAA. MFAA purports to achieve the same quality as 4X multisampling, the most common AA method, with about half the performance overhead.

Also, the GTX 960 has one exclusive new feature: full hardware support for decoding H.265 video. Hardware acceleration of H.265 decoding should make 4K video playback smoother and more power-efficient. This feature didn’t make the cut for the GM204, so only the GTX 960 has it.

While the GTX 970 and 980 have 4GB of memory, the GTX 960 has 2GB. That’s still a reasonable amount for a graphics card in this class, although the creeping memory requirements for games ported from the Xbone and PS4 do make us worry a bit.

Notice that the GTX 960’s peak power draw, at least in its most basic form as Nvidia has specified, is only 120W. That’s down from 140W in this card’s most direct spiritual predecessor, the GeForce GTX 660. Maxwell-based products just tend to require less power to achieve even higher performance.

After the success of the GTX 970 and 980, Nvidia’s partners are understandably eager to hop on the GTX 960 bandwagon. As a result, Damage Labs is currently swimming in GTX 960 cards. Five of ’em, to be exact. The interesting thing is that each one of them is different, so folks are sure to find something to suit their style.

In many ways, the Asus Strix GTX 960 is the most sensible of the cards we have on hand. It’s the shortest one—only 8.5″ in length—and is the only one with a single six-pin PCIe aux power input, which is technically all the GTX 960 requires. Even so, the Strix has higher-than-stock GPU clock speeds and is the lone product in the bunch with a tweaked memory clock.

The Strix 960 was also first to arrive on our doorstep, so we’ve tested it most extensively versus competing GPUs.

Pictured above is EVGA’s GTX 960 SSC, or Super Superclocked. I suppose the name tells you most of what you need to know. True to expectations, the SSC has the highest GPU frequencies of any of these GTX 960 cards. At 1279MHz base and 1342MHz boost, it’s well above Nvidia’s reference clocks.

The SSC also has an unusual dual BIOS capability. The default BIOS has a fan profile similar to the rest of these cards: it tells the fans to stop completely below a certain temperature, like ~60°C. Above that, the fan ramps up to keep things cool. That’s smart behavior, but it’s not particularly aggressive. If you’d like to overclock, you can flip a DIP switch on the card to get the other BIOS, which has a more traditional (and aggressive) fan speed profile.

Also, notice the port config on the EVGA card. There are three DisplayPort outputs, one HDMI, and one DVI. A lot of GTX 970 cards shipped with dual DVI and only two DisplayPort outputs, which seemed like a raw deal to me. Most GTX 960s are like this one. Via those three DP outputs, they can drive a trio of G-Sync or 4K (or both!) monitors.

Gigabyte sent us a pair of very different GTX 960 offerings, both clad in striking flat black. The shorter and more practical of the two is the GTX 960 Windforce, with a more-than-adequate dual-fan cooler. The G1 Gaming version of the GTX 960 ups the ante with a massive cooler sporting triple fans and a max cooling capacity of 300W. That’s total overkill—of exactly the sort I like to see.

Both of these cards have six-phase power with a 150W limit. As a result, Gigabyte says, they’ll deliver higher boost clocks even under an extreme load like FurMark. We’ll have to test that claim.

Another distinctive Gigabyte feature is the addition of a second DVI output in conjunction with triple DisplayPort outputs. Gigabyte calls this setup Flex Display. Although the GPU can’t drive all five outputs simultaneously for 3D gaming, I like the extra flexibility with respect to port types.

Last, but by no means least, is the MSI GeForce GTX 960 Gaming 2G. This puppy has a gorgeous Twin Frozr cooler very similar to the one used on MSI’s GTX 970, and that card took home a TR Editor’s Choice award for good reason. In addition to fully passive, fan-free operation below a temperature threshold—a feature all of these GTX 960 cards share—the Gaming 2G’s two fans are controlled independently. The first fan spins up to keep the GPU cool, while the other responds to the temperature of the power delivery circuitry.

Also, notice that these cards have only a single SLI connector at the front. That means the GTX 960 is limited to dual-GPU operation; it can’t participate in three- and four-way teams.

Here’s a summary of the GTX 960s pictured above. Although Nvidia has set the GTX 960’s base price at $199, each of these products offers a little extra for a bit more dough. I’d certainly be willing to spring another 10 or 15 bucks to avoid the somewhat noisy blower from reference versions of the GeForce GTX 660 and 760.

We’ve tested as many different competing video cards against the new GeForces as was practical. However, there’s no way we can test everything our readers might be using. A lot of the cards we used are renamed versions of older products with very similar or even identical specifications. Here’s a quick table that will decode some of these names for you.

Most of the numbers you’ll see on the following pages were captured with Fraps, a software tool that can record the rendering time for each frame of animation. We sometimes use a tool called FCAT to capture exactly when each frame was delivered to the display, but that’s usually not necessary in order to get good data with single-GPU setups. We have, however, filtered our Fraps results using a three-frame moving average. This filter should account for the effect of the three-frame submission queue in Direct3D. If you see a frame time spike in our results, it’s likely a delay that would affect when the frame reaches the display.
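For the curious, here’s a minimal sketch of the sort of filter we’re describing—a trailing three-frame average, which you should treat as an illustrative approximation of the idea rather than our exact tooling:

```python
def moving_average(frame_times_ms, window=3):
    """Trailing moving average over per-frame render times. An illustrative
    approximation of the three-frame filter described above."""
    filtered = []
    for i in range(len(frame_times_ms)):
        span = frame_times_ms[max(0, i - window + 1):i + 1]
        filtered.append(sum(span) / len(span))
    return filtered

# A lone 50-ms spike gets smeared across three ~27.8-ms samples, mimicking
# how the three-frame submission queue can absorb a single slow frame:
print(moving_average([16.7, 16.7, 50.0, 16.7, 16.7]))
```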

We didn’t use Fraps with Civ: Beyond Earth. Instead, we captured frame times directly from the game engine itself using the game’s built-in tools. We didn’t use our low-pass filter on those results.

As ever, we did our best to deliver clean benchmark numbers. Our test systems were configured like so:

Thanks to Intel, Corsair, Kingston, and Gigabyte for helping to outfit our test rigs with some of the finest hardware available. AMD, Nvidia, and the makers of the various products supplied the graphics cards for testing, as well.

Also, our FCAT video capture and analysis rig has some pretty demanding storage requirements. For it, Corsair has provided four 256GB Neutron SSDs, which we’ve assembled into a RAID 0 array for our primary capture storage device. When that array fills up, we copy the captured videos to our RAID 1 array, made up of a pair of 4TB Black hard drives provided by WD.

Unless otherwise specified, image quality settings for the graphics cards were left at the control panel defaults. Vertical refresh sync (vsync) was disabled for all tests.

The tests and methods we employ are generally publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

Do the math involving the clock speeds and per-clock potency of these cards, and you’ll end up with a comparative table that looks something like this:

As you can see, even with relatively high memory clocks, the GTX 960 just has less memory bandwidth than anything else we tested. Versus its closest competitor from the Radeon camp, the R9 285, the GTX 960 has less of everything except for peak pixel fill rate. (Maxwell GPUs tend to have pretty high pixel rates, likely because Nvidia has tuned them for the new era of high-PPI displays.)
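If you want to reproduce those theoretical numbers yourself, the arithmetic is simple. Here’s a sketch using Nvidia’s reference clocks for the GTX 960; keep in mind the factory-tweaked retail cards in this review run somewhat higher:

```python
# Theoretical peaks for a reference-clocked GTX 960. The hot-clocked retail
# cards in this review will land somewhat above these figures.
boost_clock_ghz = 1.178   # Nvidia's reference boost clock
shader_alus     = 1024    # GM206 shader ALU count
rops            = 32      # render output units
texture_units   = 64      # texels filtered per clock

tflops = shader_alus * 2 * boost_clock_ghz / 1000  # ~2.4 TFLOPS (an FMA counts as two ops)
gpix   = rops * boost_clock_ghz                    # ~37.7 Gpixels/s peak fill rate
gtex   = texture_units * boost_clock_ghz           # ~75.4 Gtexels/s filtering rate
gbps   = 128 / 8 * 7.0                             # ~112 GB/s memory bandwidth

print(f"{tflops:.1f} TFLOPS, {gpix:.1f} Gpix/s, {gtex:.1f} Gtex/s, {gbps:.0f} GB/s")
```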

Of course, the numbers above are just theoretical. We can measure how these graphics cards actually perform by running some directed tests.

Traditionally, performance in this color fill test has been limited by memory bandwidth. Few cards reach their theoretical peaks. But check this out: even though it has less memory bandwidth on paper than anything else here, the GTX 960 takes third place in this test. That’s Maxwell’s color compression tech at work. In fact, the only two cards ahead of the GTX 960 are based on GPUs that also have delta-based color compression.

Texture sampling rate is another weakness of the GTX 960 on paper. In this case, the card’s delivered performance is also relatively weak.

The GTX 960’s tessellation and particle manipulation performance, however, is pretty much stellar. This is a substantial improvement over the GTX 660 and 760. In TessMark, the GTX 960 is faster than any Radeon we tested, up to and including the R9 290.

On paper, the GTX 960 gives up nearly a full teraflop to the Radeon R9 285—2.4 versus 3.3 teraflops of peak compute throughput. In these directed tests, though, the GTX 960 only trails the R9 285 slightly. That’s Maxwell’s vaunted architectural efficiency shining through again.

Click the buttons above to cycle through the plots. Each card’s frame times are from one of the three test runs we conducted for that card.

The GTX 960 starts off with a pretty astounding performance in Far Cry 4. However you want to cut it—the frame time plots, the simple FPS average, or our time-sensitive 99th percentile frame time metric—the GTX 960 looks to be every bit the equal of the GeForce GTX 770. To be clear, the GTX 770 is based on a fully-enabled GK104 chip and is essentially the same thing as the GeForce GTX 680, which was selling for $499 a few short years ago. The GTX 770 has almost double the memory bandwidth of the 960, yet here we are.

The R9 285 nearly ties the GTX 960 in terms of average FPS. But have a look at the R9 285’s frame time plot. Thanks to some intermittent but frequent spikes common to all of the Radeons, the R9 285 falls well behind the GTX 960 in our 99th percentile metric. That means animation just isn’t as fluid on the Radeon.

We can better understand in-game animation fluidity by looking at the “tail” of the frame time distribution for each card, which shows us what happens in the most difficult frames.

The Radeon cards’ frame times start to trend upward at around the 92% mark. By contrast, the curves for most of the GeForce cards only trend upward during the last two to three percent of frames. Some of the frames are difficult for any GPU to render, but the Radeons struggle more often.
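In case you’d like to compute the same metric from your own Fraps logs, the 99th percentile frame time falls out of a simple nearest-rank calculation. This is a sketch of the metric’s definition, not necessarily our exact tooling:

```python
import math

def percentile_frame_time(frame_times_ms, pct=99.0):
    """Nearest-rank percentile: the frame time within which `pct` percent
    of all frames were rendered."""
    ordered = sorted(frame_times_ms)
    rank = math.ceil(pct / 100.0 * len(ordered))
    return ordered[min(rank, len(ordered)) - 1]
```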

These “time spent beyond X” graphs are meant to show “badness,” those instances where animation may be less than fluid—or at least less than perfect. The 50-ms threshold is the most notable one, since it corresponds to a 20-FPS average. We figure if you’re not rendering any faster than 20 FPS, even for a moment, then the user is likely to perceive a slowdown. 33 ms correlates to 30 FPS or a 30Hz refresh rate. Go beyond that with vsync on, and you’re into the bad voodoo of quantization slowdowns. And 16.7 ms correlates to 60 FPS, that golden mark that we’d like to achieve (or surpass) for each and every frame.
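Here’s one way to tally that badness from raw frame times, counting only the portion of each frame that stretches past the threshold. Again, consider this a sketch of the concept rather than our exact tooling:

```python
def time_spent_beyond(frame_times_ms, threshold_ms):
    """Total time accumulated past the threshold across all long frames.
    A 60-ms frame contributes 10 ms against a 50-ms threshold."""
    return sum(t - threshold_ms for t in frame_times_ms if t > threshold_ms)

# The thresholds discussed above: 50 ms (20 FPS), 33.3 ms (30 FPS), and
# 16.7 ms (60 FPS). An 8.3-ms line (120 FPS) matters for high-refresh
# displays, as we'll see later.
for limit_ms in (50.0, 33.3, 16.7, 8.3):
    print(limit_ms, time_spent_beyond([16.7, 16.7, 60.0, 16.7], limit_ms))
```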

For the most part, these results confirm what we’ve already seen above. The R9 285 encounters more “badness” at each threshold than the GTX 960 does. The GTX 960 is just delivering smoother animation here. However, revisit the comparison between the GTX 960 and the GeForce GTX 770. Although these two cards have almost identical 99th percentile frame times, the GTX 960 actually fares better in our “badness” metrics. So even though the GTX 960 is based on a smaller chip with less memory bandwidth, its performance is less fragile than the GTX 770’s.

The two ultra-popular MOBAs don’t really require much GPU power. I tried League of Legends and found that it ran at something crazy like 350 FPS on the GTX 960. Still, I wanted to test one of these games to see how the various GPUs handled them, so I cranked up the quality options in DOTA 2 and raised the display resolution to 2560×1440. Then I played back a portion of a game from the Asia championship tourney, since I lack the skillz to perform at this level myself.

The big numbers in the FPS averages above are nice, but pay attention to the 99th percentile frame times for a better sense of animation smoothness. Everything down to the GeForce GTX 760 renders all but the last one percent of frames in less than 16.7 milliseconds. That means nearly every single frame comes in at a 60-FPS rate or better. That’s not just an average; it’s the general case.

I’m not sure what happened with the Radeon HD 7950. This is the same basic product as the Radeon R9 280, and it should be more than fast enough to perform well here. For some reason, it ran into some slowdowns, as the frame time plots show.

I don’t think the GTX 960 and the Radeon R9 285 could be more evenly matched than they are in our DOTA 2 test. Then again, you don’t need much video card to run at a constant 60 FPS in this game.

What’s interesting, for those of us with high-refresh gaming displays, is the time spent beyond 8.3 milliseconds. (At 120Hz, the frame-to-frame interval is 8.3 ms.) As you can see, getting a faster video card will buy you more time below the 8.3-ms threshold. That means smoother animation and more time at 120 FPS, which is DOTA 2‘s frame rate cap.

Since this game’s built-in benchmark simply spits out frame times, we were able to give it a full workup without having to resort to manual testing. That’s nice, since manually benchmarking an RTS is kind of a nightmare.

Oh, and the Radeons were tested with the Mantle API instead of Direct3D. Only seemed fair, since the game supports it.

The contest between the R9 285 and the GTX 960 is again incredibly close, but the R9 285 has a slight edge overall in the numbers above, perhaps due in part to the lower overhead of the Mantle API.

The R9 285 is clearly ahead in the FPS average, but it trails the GTX 960 quite a bit in our time-sensitive 99th percentile metric. Flip over to the Radeons’ frame time plots and you’ll see why: regular, intermittent frame time spikes as high as 45 milliseconds. The magnitude of these spikes isn’t too great, and they don’t totally ruin the fluidity of the game when you’re playing, remarkably enough. Still, they seem to affect about four to five percent of frames rendered, and they’re pretty much constant. Interestingly enough, we’ve seen this sort of problem before on Radeon cards in Borderlands 2. AMD eventually fixed this problem with the Catalyst 13.2 update. It’s a shame to see it return in the Catalyst Omega drivers.

As one would expect, the frame time curves and badness metrics reflect the frame time spikes on the Radeons.

Here’s another case where the R9 285 wins the FPS sweeps but trails in the 99th percentile frame time score. You can see the “fuzzy” nature of the frame times on the mid-range Radeons’ plots, but really it’s nothing to write home about. We’re talking about a couple of milliseconds worth of difference in the 99th percentile frame time.

I wanted to include one more game that would let me do some automated testing, so I could include more cards. So I did. Then I tested the additional cards manually in all of the games you saw on the preceding pages. Since I am functionally insane. Anyhow, here are the results from the nicely automated FPS benchmark in The Talos Principle public test, which is freely available on Steam. This new game from Croteam looks good and, judging by the first few puzzles, makes Portal seem super-easy.

Oh, also, I’ve included all five of the GTX 960 cards we talked about up front here, to give us a glimpse of how they compare.

Wow, so there’s not much daylight between the different variants of the GTX 960. We’re talking a total range of less than a single frame per second at 2560×1440.

What this tells me is that the differences between the GTX 960 cards, such as they are, probably won’t be apparent without overclocking. The fastest card of the bunch, by a smidgen, is the Asus Strix GTX 960, which also happens to be the only one with a tweaked memory clock. Hmm. My plan is to overclock these puppies in a follow-up article, so we can see how they differ when pushed to their limits. Stay tuned for that.

Please note that our “under load” tests aren’t conducted in an absolute peak scenario. Instead, we have the cards running a real game, Crysis 3, in order to show us power draw with a more typical workload.

Here’s where the GM206’s smaller size and narrower memory interface pay dividends: power consumption. Our test rig’s power use at the wall socket with an Asus Strix GTX 960 installed is 78W lower than with a Radeon R9 285 in the same slot. As you’ve seen, the two cards perform pretty much equivalently, so the Radeon is looking at a major deficit in power efficiency.

These new video card coolers are so good, they’re causing us testing problems. You see, the noise floor in Damage Labs is about 35-36 dBA. It varies depending on things I can’t quite pinpoint, but one notable contributor is the noise produced by the lone cooling fan always spinning on our test rig, the 120-mm fan on the CPU cooler. Anyhow, what you need to know is that any of the noise results that range below 36 dBA are running into the limits of what we can test accurately. Don’t make too much of differences below that level.

Yeah, these big coolers are pretty much overkill for the GTX 960 at stock clocks. The smallest one of them all, on the Asus Strix GTX 960, is quiet enough under load to hit our noise floor—we’re talking whisper quiet here—while keeping the GPU at 60°C. Jeez.

The one bit of good news for the AMD camp is that coolers this far into overkill territory for the GTX 960 can tame hotter cards, too. MSI uses essentially the same Twin Frozr cooler on its R9 285, and that card also reaches our noise floor, even while drawing substantially more power and turning it into heat.

As usual, we’ll sum up our test results with a couple of value scatter plots. The best values tend toward the upper left corner of each plot, where performance is highest and prices are lowest.

Although it started life at $249, recent price cuts have dropped the Radeon R9 285’s price on Newegg down to $209.99, the same price as the Asus Strix GTX 960 card we used for the bulk of our testing.

At price parity, the GTX 960 and R9 285 are very evenly matched. The R9 285 has a slight advantage in the overall FPS average, but it falls behind the GeForce GTX 960 in our time-sensitive 99th percentile metric. We’ve seen the reasons why the R9 285 falls behind in the preceding pages. I’d say the 99th percentile result is a better indicator of overall performance—and the GTX 960 leads slightly in that case. That makes the GTX 960 a good card to buy, and for a lot of folks, that will be all they need to know.

It’s a close race overall, though. Either card is a decent choice on a pure price-performance basis. AMD and its partners have slashed prices recently, perhaps in anticipation of the GTX 960’s introduction, without making much noise about it. Heck, the most eye-popping thing on the plot above is that R9 290 for $269.99. Good grief. In many of these cases, board makers are offering mail-in rebates that effectively take prices even lower. Those don’t show up in our scatter plots, since mail-in rebates can be unreliable and kind of shady. Still, AMD apparently has decided to move some inventory by chopping prices, and that has made the contest between the GTX 960 and the R9 285 very tight indeed.

That’s not quite the whole story. Have a look at this plot of power consumption versus performance.

In virtually every case, you’ll pay more for the Radeon than for the competing GeForce in other ways—whether it be on your electric bill, in terms of PSU requirements, or in the amount of heat and noise produced by your PC. The difference between the R9 285 and the GeForce GTX 960 on this front is pretty dramatic.

Another way to look at the plot above is in terms of progress. From the GTX 660 to the GTX 960, Nvidia improved performance substantially with only a tiny increase in measured power draw. From the GTX 770 to 970, we see a similar performance increase at almost identical power use. By contrast, from the R9 280X to the R9 285—that is, from Tahiti to Tonga—AMD saw a drop in power efficiency. Granted, the R9 285 probably isn’t Tonga’s final form, but the story still isn’t a good one.

Nvidia has made big strides in efficiency with the introduction of Maxwell-based GPUs, and the GeForce GTX 960 continues that march. Clearly, Nvidia has captured the technology lead in GPUs. Only steep price cuts from AMD have kept the Radeons competitive—and only then if you don’t care about your PC’s power consumption.

I’ll have more to say about the different flavors of the GTX 960 we have on hand here in Damage Labs after I’ve had a little more time to put them through the wringer. For now, though, if you aren’t interested in overclocking, you might as well take your pick. They’re all blessed with more cooling than strictly necessary, and they’re all whisper-quiet. What’s not to like?


These look nice, but it feels like $210 is too expensive. I’d rather have seen 4GB for this price, and a 2GB version for $170-$180ish.

And when places like Newegg are selling the 4GB R9 290 for $240, and TR is showing those cards to be 50% to twice as fast in games as the GTX 960, I can’t see myself recommending this card to anyone outside of thermally limited HTPC situations.

After reading through the comments… why is there fighting over such a mid-level card? I expect fanboys from each side to fight over the high-end cards, but not the mid-level ones.

– You review the Asus Strix as if it were the stock GTX 960. It’s not. It’s around 10% faster than a reference-clocked GTX 960. In your benchmark graphs you just write “GTX 960,” which makes it seem, to anyone who didn’t check the ‘our testing methods’ page, like you’re testing a stock GTX 960.

– The power usage of your R9 285 seems unrealistically high, even higher than the 280X’s, which doesn’t make any sense. If I compare this to a tech website that I’m 100% sure is unbiased ([url

Please mind that I’m not some crazy person who believes TR is conspiring with Nvidia; I don’t even think TR particularly sides with Nvidia over AMD.

1) I wonder why the Far Cry 4 test is just walking around without shooting anything. No explosions, no flaming arrows, nothing like that. I haven’t played the game, but in FC3 my frame rate (on an R9 270) would drop by quite a high margin during combat, so if I wanted a fluid game, I had to lower the graphics settings.

I suspect that’s because it’s hard to set up a test that is highly repeatable and also involves fighting. However, you could still shoot something or throw explosives (in FC4, at least; I don’t know about Alien).

Additionally, in Borderlands you do fight and shoot a lot, and it seems pretty much impossible to play that test the exact same way every time. So why do you do that in Borderlands and not in the other games?

2) It seems to me that the 285’s low overall 99th percentile frame time scores come from the Borderlands results. It’s quite obviously a bug (maybe bad design in the game, or something in the drivers; it doesn’t matter which). Not only is there a chance it could be fixed (we can’t be sure of that), but it’s just one game. Personally, I would throw this result out of the overall numbers as unrepresentative of the average game. Anyway, my question is: can you calculate and post overall results without the Borderlands numbers? I wonder whether the 285 would still be worse than the 960, or whether the difference would be negligible.

Thank you in advance. I appreciate your work; it’s a really nice review, although I agree with most commenters that the 960 isn’t worth buying right now, with 290 prices this low.

It does look bad to be so far behind on both fronts, doubly so with GPUs, since they’re built on the same process node.

Poor AMD can’t catch a break. The charts don’t look too good in frames/$ or frames/watt. I hope they have something special planned to “Maxwell” their tech (shrink the die and improve performance per watt). Their GPUs are all they really have at the moment that’s worth buying.

This is amazing stuff. Tonga does cool things with only 2GB of RAM and a 256-bit bus. Maxwell is keeping up with a 128-bit bus. What happens when we get 512-bit cards with quadruple the core counts?
