Perhaps someone will helpfully design a "PSU stability dongle" comprising a 26.1 Ohm resistor (standard 1% resistor value sufficient to ensure >450mA from 12V) and a Molex connector. Make it a passthrough.
Of course, it'll have to be a >5W resistor, but that's what I like to call an opportunity to excel: just more room to add some rakish heat sink fins. You could even spend some of your power wasting budget on LEDs instead, but obviously we'll have to charge more for that model. The margins, you see.
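For anyone wondering where 26.1 ohms comes from, the arithmetic is simple (a quick sketch; 26.1 ohms is the nearest standard 1% value that keeps the draw above 450 mA):

    # Dummy-load math for the hypothetical "PSU stability dongle".
    V_RAIL = 12.0  # volts on the +12V rail
    R_LOAD = 26.1  # ohms, a standard 1% (E96) value

    current = V_RAIL / R_LOAD     # ~0.46 A, just above the 450 mA target
    power = V_RAIL ** 2 / R_LOAD  # ~5.5 W, hence the >5W resistor

    print(f"load current: {current * 1000:.0f} mA")  # 460 mA
    print(f"dissipation:  {power:.2f} W")            # 5.52 W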
Only if you want the C6/C7 states, which are the really low power draw ones. If you're the kind of person who builds your own PC, it's unlikely you really care about saving a few watts while in sleep. At worst, you'll get Ivy Bridge-level power consumption.
On the contrary: in 2013, the kind of person who builds their own PC is about the only one likely to care about technical specs. If "good enough" is good enough for you, there's very little reason to build your own box these days.
For performance applications on the desktop, though, power draw at idle doesn't matter much. It's all about power draw at load. Power from the wall is cheap and easy to forget about. If I'm building a machine that draws 750W at load, the draw at idle is fairly insignificant. It's laptops where power draw at idle really matters, and those are rarely home-built.
I exclusively use server or workstation/server-class motherboards from Supermicro or Tyan, not just for myself but also for my parents, so I can largely ignore the hardware for 5+ years and not worry about non-reproducible errors from radiation flipping a DRAM bit. I've done this since 1995-6 and never had one fail in the field except through an act of God (http://en.wikipedia.org/wiki/2011_Joplin_tornado).
For this purpose, decreasing stress from thermal cycling is good.
(Yeah, they're grossly overpowered for what my parents do, but since I'm not including my labor in the cost they really don't cost that much.)
I consider it borderline negligent to use anything other than ECC RAM in any machine where you care about not corrupting your data with a flipped DRAM bit.
Except it's virtually impossible to find a CPU that supports ECC.
I've used ECC in every one of my machines for the last decade or so, except for the very last one, since it was impossible to get without spending a crazy amount of money (a Xeon processor).
The low-end E3 Xeons (E3-1220, E3-1230) are actually rather competitively priced versus the i5/i7. However, this still requires a workstation/server motherboard for ECC support.
It's not virtually impossible. It may be virtually impossible in crappy cheap boards from Taiwan, but if you're buying cheap parts then you're likely not making money off the results that you're producing.
At the moment, the second fastest Xeon E3 (of those with integrated graphics) is $15 cheaper than the equivalent i7. They're not price-competitive with the i5s (which lack HyperThreading), but the E3 processors have been very competitive with the i7s since their introduction. The only downside is they don't get put on sale the way the overclockable consumer parts do, and they aren't widely known.
On the contrary to your contrary: the kind of person who builds their PC is the kind who customizes how it works when they're using it. There's no automatic assumption that they also harbor green feelings and want lower power consumption for idle/sleep.
Hell, many are proud of their beefy 1kW+ power supplies and the ridiculous power load they draw.
Others are proud of their energy-devouring Bitcoin mining setups.
You certainly can't assume that the home builder cares at all about power consumption.
That CPU won't be appreciably more power efficient than a standard i7-3770. It just constrains its clock speeds in order to stay within a lower power ceiling.
TDP is all about cooling capacity and PSU capabilities. If you're worried about saving energy, get the processor that will run at the highest speed while computing so that it can get back into power-saving mode sooner. And you can always under-clock the standard desktop CPUs if you really want to - a 3770 can be configured to behave pretty much like a 3770T by tweaking the base and Turbo multipliers.
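For the curious, the multiplier arithmetic is simple (the 3770/3770T clocks below are Intel's published specs; the rest is just illustration, since the actual BIOS knobs vary by board):

    # Core clock = base clock (BCLK) x multiplier on Ivy Bridge.
    # i7-3770:  34x100 MHz base, 39x turbo (77 W TDP)
    # i7-3770T: 25x100 MHz base, 37x turbo (45 W TDP)
    BCLK_MHZ = 100

    def core_clock_ghz(multiplier):
        return BCLK_MHZ * multiplier / 1000.0

    # Dial a 3770's multipliers down to 3770T-like behavior:
    print(core_clock_ghz(25))  # 2.5 GHz base, like a 3770T
    print(core_clock_ghz(37))  # 3.7 GHz turbo ceiling, like a 3770T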
Fair enough. On the other hand, the segments of the market who think of the computer as a black box that is a bicycle for the mind are even less likely to care about any individual specification. Most people I know who build their own PCs are trying to optimize for some spec that manufacturers don't usually care about: loudness, GPU performance, low power draw, size, aesthetics.
On the other hand, you can't automatically assume they don't care about it at all.
I build most of my desktops, and I certainly want to reduce power usage when I'm not using them. Enough so that I'd pay for a power supply that supports the lower power states.
Not everybody who builds their own PC is doing so to make a ridiculously overpowered system.
I agree. It really depends on why someone is building the PC. Sometimes you want to optimize gaming performance, or video editing performance, or file transfer speed and lower latency for a NAS box; other times you're looking to optimize video playback, low power consumption, and low noise on an HTPC. All of those could be built by oneself, and each will usually require certain choices and trade-offs to achieve the desired goals.
There weren't any legitimate technical reasons for switching from LGA1156 to LGA1155, but Haswell and LGA1150 are introducing major changes to power delivery and voltage regulation. Given that power delivery accounts for something like half the pin count of a modern CPU, a new socket for Haswell is quite justified. (Although, those power delivery changes only really benefit the mobile market, but Intel's long been unwilling to produce different dies for the desktop market and the mainstream mobile market.)
I'm still perplexed why Intel doesn't redesign their CPU socket to be more efficient at delivering power. They're channelling a lot of juice through some very tiny pins or pads, in what's a fairly straight-up evolution of the DIP socket the old 8088 fit into.
Is it somehow not practical to have several bigger pins that can handle more current, rather than literally hundreds of pins dedicated to power? A surprising percentage of the pins on a modern Intel chip do nothing more than power the chip. The traditional pair of +Vcc and GND pins just can't cut it, apparently, and no wonder with some chips drawing over 100W.
Not too long ago video cards started taking a direct feed from the PSU rather than relying on the PCI or PCIe bus. It's surprising the same thing hasn't happened to high-power integrated circuits.
Where to start? Multiple pins supplying power to a die isn't done because nobody could plonk a pair of big connectors in the middle. The resistance within the die would cause different ground levels and voltage drops if power were supplied at only one physical ___location. The PCB has much more space for "thick" copper traces (a complete ground plane) and buffer caps to supply a stable voltage across the entire die.
Second, CPUs have had a dedicated 12V rail on the motherboard for quite some time (the 4- or 8-pin connector near the socket). It plugs in right next to the CPU's voltage regulators.
Third, a GPU die also has multiple power rails and power distribution across the die, just like a CPU. Given the insane power requirements and transistor counts, they likely have even more. The same is true for other power-hungry ICs, for example high-speed DSPs, FPGAs, image sensors and so on.
Fourth, PCI/PCIe can't carry hundreds of watts, and it would not be cost-effective to kit out PCs with heavy-duty connectors and thick mainboard traces just because one or two slots may one day be used for a space heater. Thus another dedicated rail from the PSU.
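To put a number on the voltage-drop point: at CPU-scale currents, even a milliohm in the delivery path matters. A quick sketch with purely illustrative figures:

    # IR drop at CPU scale: V = I * R. Figures are illustrative only.
    core_voltage = 1.0       # volts, a typical modern core supply
    load_current = 100.0     # amps (~100 W at 1 V)
    path_resistance = 0.001  # one milliohm of socket/die distribution path

    drop = load_current * path_resistance  # 0.1 V
    print(f"IR drop: {drop:.2f} V, i.e. {drop / core_voltage:.0%} of the supply")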
When a lot of transistors in an integrated circuit all switch at the same time, it can cause the chip's power and ground voltage levels to shift, relative to the circuit board's power and ground levels. The size of the difference depends on the inductance between the chip and board, i.e. the inductance of the chip's power and ground pins. Inductance can be minimized by connecting a lot of inductors in parallel. Lots of small pins are better than a few big ones.
If the inductance is too high and the chip's power voltage falls below its ground voltage, this will randomize every storage element on the chip. The rule of thumb is that a third of the pins need to be power or ground.
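To put rough numbers on the pin-count argument: N identical pins in parallel divide the per-pin inductance by N, and the bounce is V = L * di/dt. A sketch with purely illustrative values:

    # Ground bounce: V = L * di/dt; N pins in parallel give L_eff = L_pin / N.
    # Both numbers below are illustrative, not from any datasheet.
    L_PIN = 1e-9   # ~1 nH of inductance per pin
    DI_DT = 50e9   # a 50 A current swing in 1 ns

    for n_pins in (1, 10, 100):
        bounce = (L_PIN / n_pins) * DI_DT
        print(f"{n_pins:3d} pins -> {bounce:5.2f} V of bounce")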
You're implying it could be done if the load distribution and power conditioning that's done outside of the chip, which consists of a lot of analog components to help manage rapid changes in power consumption, could somehow be packaged inside the chip.
So, in rough terms, the internals of a large-scale chip are not one big integrated circuit, but a large number of smaller modules that are massively interconnected, then?
That rule of thumb seems to apply to only a particular class of chips. Wouldn't the number of pins be somehow proportional to the power draw, since at higher currents inductance would become a more severe problem? It's just usually the case that more power-hungry chips have more pins, as the 2011 socket is for Intel's flagship CPUs, while the 1155 ones are more commodity-oriented.
With the power voltage dropping below ground: unless you had a floating ground, that would imply reverse current flow, negative voltage, right? Or are you talking about a non-zero-voltage ground? I'm not sure what the presumption is in real-world CPU design.
> That rule of thumb seems to apply to only a particular class of chips. Wouldn't the number of pins be somehow proportional to the power draw, since at higher currents inductance would become a more severe problem?
I think it depends on the chip's speed. The problem is with rapid changes in current. Of course, higher currents can also have higher fluctuations.
> With the power voltage dropping below ground: unless you had a floating ground, that would imply reverse current flow, negative voltage, right? Or are you talking about a non-zero-voltage ground?
Suppose the board's ground rail is 0 V and the power rail is at 12 V. The chip's ground voltage might bounce up to 9 and its power down to 8. It does cause reverse currents and other bad effects.
The vast majority of power efficiency losses are in the copper interconnects within the chip, not the copper pins.
Oh also, if you want fewer power pins, get ready for more heat and way more expensive CPUs. The pins are placed to be as cost effective and efficient as possible to power the individual chip modules (many of which will operate at different voltages). Also, it's way cheaper to convert power on the motherboard than within an already incredibly constrained CPU package.
Edit: "vast majority" as far as power and other connections to the board go, the transistors create way more heat than the interconnects.
It's sorta like plugging your PC into the wall plug next to it, and plugging your toaster into the wall plug in your kitchen... Yeah, it would be easier to daisy-chain surge protectors from one outlet, but that's messy. As with interior design layout, so with circuit design layout?
And (I think) the copper traces are so close that quantum tunnelling can occur: electrons can just decide to jump from one piece of copper to another, straight through silicon or anything else, simply because the copper is so close. This means a Vcc line can be giving charge to an unpowered neighboring circuit or memory cell without magnetically affecting it.
Chips have multiple power pins for signal integrity issues. You want the power source close to where it is being used, particularly for IO pads. Even chips with low power requirements will have multiple power pins.
Just how much of the system is getting shut off in these new C-states? A lot of the best power supplies have a single 12V rail, and the 3.3V and 5V rails are provided by stepping down the voltage from the 12V rail. It seems like any PSU with that kind of design wouldn't be affected unless the total system power consumption drops below 6W, which would require shutting off at least the expansion cards, disk drives, any power-hungry USB devices, and potentially the case fans. Even 4 DIMMs plus the keyboard and mouse necessary to wake up the system might be sufficient to keep the power draw high enough.
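To make that 6W threshold concrete, here's the budget I mean (0.5A minimum on a single 12V rail; the per-component idle figures are my rough guesses, not measurements):

    # Does the rest of the system keep a single-12V-rail PSU above its
    # 0.5 A minimum load? Idle draws below are rough guesses.
    MIN_LOAD_W = 12.0 * 0.5  # 6 W floor

    idle_draws_w = {
        "4x DDR3 DIMMs": 4 * 1.0,
        "case fan":      1.5,
        "USB kbd+mouse": 0.5,
    }

    total = sum(idle_draws_w.values())
    print(f"estimated idle: {total:.1f} W vs a {MIN_LOAD_W:.1f} W floor")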
I'm going to make a wild guess that the C6/C7 states are intended to be used in notebooks and other battery-powered devices. For desktops, the rest of the system will easily lose 5W somewhere else.
FTA, quoting Robert Pearce: "I fully expect the [motherboard] companies to disable C6/C7 in the BIOS (though consumers could enable it if they chose to) as there are simply too many PSU's in the market space which might not work correctly."
... because the motherboard manufacturers can't figure out how to stuff a 240 ohm resistor across the 12V rail?
Edit to be fair: if the power savings from going from the legacy suspend to the Haswell states is less than 600mW, then this isn't worth it and disabling the Haswell states is the right solution. In my experience though, PC motherboards almost never idle at less than 3-5W.
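For reference, the numbers on that bleed resistor (plain Ohm's law; the 240 ohm figure is from the comment above):

    # A 240 ohm resistor across the +12V rail:
    V, R = 12.0, 240.0
    print(f"current: {V / R * 1000:.0f} mA")       # 50 mA of guaranteed load
    print(f"wasted:  {V ** 2 / R * 1000:.0f} mW")  # 600 mW, the figure above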
It's like how motherboard manufacturers couldn't figure out how to keep the voltage regulators on their boards from overheating in the summer and desoldering themselves.
The difference between the 0.5 amp minimum load on the 12 volt supply and the 0.05 amps the CPU draws might be enough to power a CPU fan (the one in Dell's Precision 690, for example, draws about 0.45 amps at 12V). I'm not familiar with other hardware requirements or constraints, so it might not work.
Not likely. The problem is that the PSU will shut down (self-protection) if it sees less than its minimum current draw. It has to do this because the voltage regulation loop goes unstable below the minimum load it was designed for. If the PSU didn't shut down, it would fry things, including itself. Really bad.
Testing whether the PSU works at 0.05A would end up shutting down the computer if the PSU actually needs 0.5A... exactly what happens today. Unfortunately, at that point it is too late to tell the user "your PSU is old" or to quickly connect a resistor across the +12V rails to work around the problem.
As an experiment, find an old PSU and plug it into the wall (nothing connected to the power output connectors). It won't power on. If you connect a 24 ohm 10W resistor (or two 12 ohm 5W resistors in series) into the "hard drive" Molex connector from +12V (yellow) to ground (black), it should start up and be happy.
Disclaimers:
1) The resistor will become quite warm quite quickly (6W), watch your fingers.
2) If you damage something, it is a learning experience, not my fault.
The most likely outcome though is that the supply will just power on and idle without any load. Rapid transients on the rails (say from 100W to ~0W) might cause the over voltage crowbar to trip, though.
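For the curious, the load math on that dummy resistor (plain Ohm's law, using the 24 ohm value):

    # Dummy load for bench-testing an old PSU: 24 ohms across the +12V rail.
    V, R = 12.0, 24.0
    current = V / R      # 0.5 A, enough to satisfy a typical minimum load
    power = V ** 2 / R   # 6 W, which is why the resistor gets hot so fast
    print(f"{current:.2f} A, {power:.0f} W into the resistor")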
There actually shouldn't be any issue at all with the motherboard testing this. Just set a flag in the EEPROM that you're testing the C6/C7 states, and try it; if the CPU exits C6/C7 with the flag set, it passed. If the system goes through a normal boot with the C6/C7 flag still set, it knows that it failed to exit C6/C7 (the supply cut out). This is basically the same procedure many motherboards use to detect whether the overclocking settings you tried were too aggressive and caused the system to fail to boot.
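In pseudocode, that dirty-flag dance looks something like this (a sketch; nvram_read, nvram_write and enter_c7 are invented stand-ins for firmware internals, not any real BIOS API):

    # Sketch of a C6/C7 self-test using a "dirty" flag in non-volatile storage.
    # nvram_read, nvram_write and enter_c7 are hypothetical firmware hooks.

    def test_c6_c7_support(nvram_read, nvram_write, enter_c7):
        if nvram_read("c7_test_pending"):
            # Booting with the flag still set means the previous attempt
            # killed the PSU instead of resuming: record the failure.
            nvram_write("c7_test_pending", False)
            nvram_write("c7_supported", False)
            return False

        nvram_write("c7_test_pending", True)
        enter_c7()  # if the PSU can't handle ~0.05 A, power is lost right here

        # Still running, so we woke from C7 normally: the PSU passed.
        nvram_write("c7_test_pending", False)
        nvram_write("c7_supported", True)
        return True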
It seems to me that in any compliant PSU, not drawing enough current from the +12V2 rail shouldn't trigger a shutdown that results in a loss of power on the +5Vsb rail, so motherboard makers just have to ensure that whatever testing circuitry they use can run off just the standby power.
Ah, yes, the other side of the Halting Problem: not only can you not tell if your program will ever halt, but you also can't tell if your program has been externally halted.
Computers have been doing that since forever, though. Remember all those "your computer wasn't shut down correctly, do try to do better next time, would you" messages? Just write a bit into CMOS before doing a test, then assume the test failed if the bit is still set on next power-up.
After how many years of standby will the savings from the low power state justify replacing a $200 PSU that is 80 Plus Platinum rated? And anyway, power states must be disabled to achieve a decent overclock.
They'd have to change the ATX power socket on the motherboard, not the CPU socket; not impossible but it would be a shame after it's stood for this long.