Posted on 12/09/2005 2:01:26 PM PST by Lets Roll NOW
A Google engineer has warned that if the performance per watt of today's computers doesn't improve, the electrical costs of running them could end up far greater than the initial hardware price tag.
"The possibility of computer equipment power consumption spiraling out of control could have serious consequences for the overall affordability of computing, not to mention the overall health of the planet."
Over the last three generations of Google's computing infrastructure, performance has nearly doubled, Barroso said. But because performance per watt has remained nearly unchanged, electricity consumption has also almost doubled.
(Excerpt) Read more at news.com.com ...
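To make the arithmetic in the excerpt concrete, here is a back-of-envelope sketch in Python. All the numbers are invented for illustration; they are not Google's actual figures.

    # Back-of-envelope: if performance per watt stays flat,
    # power draw scales linearly with performance.
    # All numbers below are illustrative, not Google's actual figures.

    perf_gen1 = 1.0        # relative performance, generation 1
    perf_gen3 = 2.0        # performance "nearly doubled" over three generations
    perf_per_watt = 0.01   # relative performance per watt, assumed constant

    watts_gen1 = perf_gen1 / perf_per_watt   # 100 (relative) watts
    watts_gen3 = perf_gen3 / perf_per_watt   # 200 (relative) watts

    print(f"Gen 1 power: {watts_gen1:.0f}, Gen 3 power: {watts_gen3:.0f}")
    # Power doubles right along with performance -- which is Barroso's point.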
Breaking news?
So Google is saying that if we use less of their service we can save energy?
I have to agree with Lets Roll NOW... computers do a lot more now than ever. I have built the last four I have used, and my computing power has gone through the roof, with NO noticeable change in wattage usage.
Performance per watt has remained nearly unchanged? What is he smoking?
I wonder if he's taking into consideration automation, cooling, etc. of server farms. But still, I think he's smoking on the job.
A modern, reasonably powerful desktop workstation draws 35-55 watts; I have measured it. I would guess that the lighting and climate control in a server room draw far more than the servers themselves.
Yeah, he's blowing smoke. The trend in electronics has always been smaller, lower power, and (sometimes) faster. Now with the announcement of 0.065 micron chip technology, this guy just missed the boat.
Google causes global warming.
Google causing global warming.
I love it.
HAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHA
This is completely backward. Equipment is getting smaller, more efficient, and yet more capable all the time.
Actually, today's processors do consume significantly more power than their predecessors.
Here is a chart of performance per watt.
http://www.tomshardware.com/2005/11/21/the_mother_of_all_cpu_charts_2005/page4.html
I'm surprised at this. Google is gaining a reputation for having all the best minds and cutting-edge developers in the business. People have been saying that Microsoft better watch out, or Google will eat their lunch.
So you'd think they would know better.
Over the short run, they might double the number of servers without updating them, and therefore double their power usage, but that certainly wouldn't be the case for very long.
Is the performance of the processor an accurate measure of the overall performance of the equipment?
There's a lot more stuff in the case than just the processor. I would think power supplies and hard disks would also factor heavily into the equation.
Nanocomputers to the rescue.
Yes, thanks to leakage. Circuitry can just be sitting there doing nothing, doing no switching, and still pull a lot of power. I've been designing at 0.065 microns for the past 2 years, and it just sucks power like a Lewinsky in heat.
Why? The space between devices, the space between routes, and most importantly the number of layers of atoms stacked vertically on top of each other are so small that the structures become effectively porous. Either the material is effectively imperfect at that scale (creating little paths for electrons to escape), or you get actual electron tunneling: the electron just says "SHAZAM!" and jumps across an insulator.
So we get the speed of tiny dimensions, but the power just climbs to godzooks. We in the industry are working on all sorts of things - materials, circuits, design styles, you name it - to counter the effects of leakage.
It just sucks. And I need to get back to work designing a huge robust power grid for these oh-so-efficient 0.065 micron circuits.....
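For anyone curious how leakage stacks up against switching power, here is a rough sketch using the standard first-order CMOS power model. Every device parameter below is invented for illustration; real numbers vary enormously by chip.

    # First-order CMOS power model:
    #   dynamic power  P_dyn  = alpha * C * V^2 * f   (switching activity)
    #   leakage power  P_leak = V * I_leak            (drawn even while idle)
    # The parameters below are invented for illustration only.

    alpha = 0.1      # activity factor: fraction of gates switching per cycle
    C = 100e-9       # total switched capacitance in farads (illustrative)
    V = 1.2          # supply voltage in volts
    f = 2e9          # clock frequency in hertz
    I_leak = 20.0    # total leakage current in amps (large at 65 nm)

    P_dyn = alpha * C * V**2 * f
    P_leak = V * I_leak

    print(f"dynamic: {P_dyn:.1f} W, leakage: {P_leak:.1f} W")
    # At small geometries, leakage can rival switching power, which is
    # why a chip "just sitting there doing nothing" still pulls real wattage.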
I believe Intel recently announced a new material/process to address heat and size, but targeted 2011 (I think) before it would reach production.
You sure about that number? Is that idle or under load? I don't know much about electricity, but I do know that I'm maxing out my 400 watt PC Power & Cooling power supply (they're one of the only companies who rate their watts at RMS... not MAX power under perfect conditions) on my rig when under full load. I'm running two 10K RPM HDDs in RAID, two 160GB backup drives, high-end graphics, an Athlon 64, 2GB of RAM, an X-Fi sound card, and lots of high-output fans.
I could be way wrong. How (where) would you measure the power draw? I've got a really nice voltmeter that measures just about everything.
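One way to sanity-check a claim like "maxing out a 400 W supply" is a rough component budget. The wattages below are period-typical guesses for the rig described above, not measurements.

    # Rough DC power budget for the rig described above.
    # Every wattage here is a period-typical guess, not a measurement.

    components = {
        "Athlon 64 CPU (load)": 90,
        "high-end graphics card": 110,
        "2x 10K RPM HDD": 2 * 18,
        "2x 160GB backup HDD": 2 * 10,
        "motherboard + 2GB RAM": 50,
        "X-Fi sound card": 10,
        "high-output fans": 25,
    }

    total = sum(components.values())
    print(f"Estimated DC load: {total} W")  # ~341 W, near a 400 W PSU's limit
    # Note: a plug-in wall meter reads AC input, which runs higher than the
    # DC load because of power supply inefficiency (roughly 70-80% back then).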
...But one aspect of the "Why switch processor suppliers?" question hasn't been answered. Intel isn't the only x86 chip maker in town. Why didn't Jobs, ever the maverick, opt for the scrappy challenger, Advanced Micro Devices Inc., instead of the old-money establishment, Intel Corp.?
The reason, industry analysts say, is that Jobs has a clear goal in mind: innovative designs. And such designs require the lowest-voltage chips, which IBM and Freescale Semiconductor Inc. weren't going to make with the PowerPC chip core -- and which AMD has not yet perfected.
"This is a practical, pragmatic Steve Jobs decision," says Shane Rau, program manager for PC semiconductors at market research firm IDC. Intel serves up the most complete line of low-power chips for mobile and small form-factor computers, and a good-looking road map for that line. Also, Intel's mammoth production capacity erases any supply worries. ...
Why Apple picked Intel over AMD
I read a WSJ article earlier this year that described Google's infrastructure as 150,000 PCs and no high-end hardware.