IF YOU WANT to learn about Nvidia's Tesla and GTX480 cards at GDC, don't ask Nvidia; the company has problems with the truth. The real story is found among the users, and they have interesting things to say about the upcoming card's ever-rising TDP.
If you recall, the official story is that the card, in its cut-down and underclocked version, pulls 225W. That number, along with stunningly poor performance, has led to some notable backpedaling. If that isn't bad enough, sources at GDC told SemiAccurate that Nvidia jacked up the TDP by 50W last week without warning.
We will be the first to admit we were wrong about the TDPs of the cards. At CES, we said the GTX480s shown there were pulling 280W, something Nvidia vehemently denied. Engineers beavering away at the things Dear Leader thinks are important, like the style of the wheels on his Ferrari, have been pulled off to work on the cards for some unfathomable reason. Working hard, they have managed to reduce the TDP of the cards by 5W to 275W. Yes, Nvidia finally admitted that the card is the burning pig anyone who has used one knows it is.
There are two problems here, one internal and one external. The internal one is that this is a big flag saying Nvidia admits defeat and has no hope of fixing the problems that plague the chip. Nvidia can't get the power down to a reasonable level, and that is the end of it. The only way to get salable quantities of chips is to jack the power through the roof to mask the broken architecture, so that is what it is doing.
More problematic is the external issue: what about the OEMs? Officially raising the TDP by a very substantial 50W three weeks before launch is massively stupid. Nvidia simply can't do this to OEMs without causing them a lot of pain. High end desktops with lots of space can work around it, but if the system is a little closer to the edge, 20+ percent more TDP can have a profound and negative effect on cooling.
Even worse, think about all the companies that make Fermi-based Tesla cards. If you put four in a system and Nvidia jacks the TDP by 50W per card, that is 200W more you have to dissipate. Three weeks before launch, your cases are built and in a warehouse, your cooling system is finished, and you don't have time to change things, much less test them. 200W is a lot in a 2U server case, and 21 of those in a 42U rack add 4.2kW that you need to dissipate, roughly three hair dryers on full blast.
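For anyone who wants to check that math, here is a minimal back-of-the-envelope sketch. The 50W bump, four cards per 2U server, and 21 servers per 42U rack come from the figures above; the ~1400W hair dryer is an assumed round number that makes the comparison work.

```python
# Back-of-the-envelope rack heat math for the Fermi TDP bump.
# All inputs are the article's figures except HAIR_DRYER_W,
# which is an assumed ballpark for a dryer on full blast.

TDP_INCREASE_W = 50      # last-minute per-card TDP increase
CARDS_PER_SERVER = 4     # Tesla cards in one 2U box
SERVERS_PER_RACK = 21    # 2U servers filling a 42U rack
HAIR_DRYER_W = 1400      # assumed output of one hair dryer

extra_per_server = TDP_INCREASE_W * CARDS_PER_SERVER    # 200 W
extra_per_rack = extra_per_server * SERVERS_PER_RACK    # 4200 W

print(f"Extra heat per server: {extra_per_server} W")
print(f"Extra heat per rack:   {extra_per_rack / 1000:.1f} kW "
      f"(~{extra_per_rack / HAIR_DRYER_W:.0f} hair dryers)")
```

Run it and you get 200W per server and 4.2kW per rack, exactly the numbers above, all of it unplanned three weeks before launch.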
Then there are the laptops. I feel bad for those guys: first Bumpgate, now this. There is no way you can redesign a laptop cooling system in six months, much less three weeks. Silly ODMs, no cookie, but you will probably be blamed by Nvidia PR for 'screwing up' so badly.
In the end, Fermi is turning into a running bad joke. You have to wonder how many high margin orders will be shown the door when word of this leaks out. Nvidia might be "Oak Ridged" a few more times yet. S|A
Charlie Demerjian