I don't have any data, but from word of mouth and from the posts and videos I've seen online, 12th gen Intel CPUs seem pretty popular for gaming builds. They're winning in benchmarks against Ryzen 5000 series (as they should, being newer), but they're also cheaper. I'll be curious to see how 13th gen Intel vs. Ryzen 7000 series plays out. There are always complicating factors, such as motherboards for the 7000 series being quite pricey for now.
Intel repeated their Prescott (Pentium 4/Pentium D) strategy of completely removing power limits in order to beat AMD.
As a result, the Intel processors have TDPs and real world power usages >2.5x that of a comparable Ryzen. Sure, they're winning, but at what cost? The 12900K at 240 watts pulls almost the same power as a 280W 64-core Threadripper.
AMD is responding in kind: their new top-end processors pull 170W stock, and their built-in overclocking pushes the chips even higher as long as cooling permits. That looks to put them back in the lead, but it's just not a sustainable strategy.
I'm not sure the actual differences in energy usage are so clear cut. These charts [0], which account for the 12900K spending less time to accomplish tasks than the 5950X, suggest the disparity isn't so terrible. You can always undervolt too; the 13th gen press release includes charts [1] showing the 13900K at 65W matching the performance of the 12900K at 241W.
When a 5800X3D offers equal or better gaming performance at a half to a third of the power consumption (e.g. 220W vs 80W), and you pay $0.65 per kWh during peak times, I can only imagine: "Quite a few".
If electricity was $0.65/kWh for me, I'd move out of the country lol. Assuming a more realistic $0.40/kWh (considered very expensive in the US, where most enthusiasts live), 8 hours of gaming a day, and a 200W power limit, you're paying $19/mo. Not bad.
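For what it's worth, here's that back-of-the-envelope math as a tiny Python sketch (the 200W, 8 h/day, and $0.40/kWh figures are just the assumptions above, not measurements):

    # Rough monthly cost of gaming at a fixed CPU power draw (assumed figures)
    power_w = 200          # assumed CPU power limit while gaming
    hours_per_day = 8      # assumed daily gaming time
    price_per_kwh = 0.40   # USD; deliberately on the expensive side for the US
    kwh_per_month = power_w / 1000 * hours_per_day * 30
    print(f"{kwh_per_month:.0f} kWh/mo -> ${kwh_per_month * price_per_kwh:.0f}/mo")
    # prints: 48 kWh/mo -> $19/mo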
Don't get me wrong, the 5800X3D is a phenomenal CPU. However, like many owners of that CPU, I'm also in the market for a 4090 and intend to use the full 600W power limit with my water loop and outdoor radiator. CPU power consumption is just not an issue for enthusiasts.
Nice, ha ha. I was going to say, look into undervolting. I saw 95% perf at 60% of the power come up, i.e. ~270W; that's actually going to be superb (very cool, quiet, and still extremely performant).
So maybe configure that as an optional profile for titles not needing maximum juice?
I'm quite keen myself.
The power price I mentioned was for peak times in Norway; I'm in Australia, where it's not anywhere near that bad yet ($0.25 AUD/kWh for me).
> I saw 95% perf at 60% of the power come up, i.e. ~270W
That's neat, but 5% is a lot when you're spending $1600. Oh and all these measurements you see tossed around are done with air coolers and stock voltages. Of course more power doesn't help if you're limited by thermals. Specifically, you're limited by hotspots you can't see because they're between the on-die temperature sensors.
> maybe configure that as an optional profile for titles not needing maximum juice?
4K gsync monitors are typically limited to 120Hz. If my GPU is capable of running a title at 1200fps, it will idle 90% of the time and consume only ~60W regardless of the 600W power limit. So tuning this is pointless. I'd still get 1/10th the rendering latency compared to a GPU that's only capable of 120fps. Same logic applies to power limiting your GPU which will increase latency and kill frame pacing in exchange for a tiny reduction in electricity bills.
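To put rough numbers on the idle claim, here's a quick sketch using the figures above (it assumes render time is simply the inverse of the uncapped frame rate, which is a simplification):

    # Frame budget at a 120 Hz cap vs. a GPU that could render 1200 fps uncapped
    refresh_hz = 120
    uncapped_fps = 1200
    frame_budget_ms = 1000 / refresh_hz    # time between displayed frames: ~8.3 ms
    render_ms = 1000 / uncapped_fps        # time actually spent rendering: ~0.83 ms
    idle_fraction = 1 - render_ms / frame_budget_ms
    print(f"render {render_ms:.2f} ms, then idle {idle_fraction:.0%} of each frame")
    # prints: render 0.83 ms, then idle 90% of each frame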
> Oh and all these measurements you see tossed around are done with air coolers and stock voltages. Of course more power doesn't help if you're limited by thermals. Specifically, you're limited by hotspots you can't see because they're between the on-die temperature sensors.
This completely misunderstands heat transfer. If the hotspot temperature is 75°C even while overclocked, you're not limited by thermals: https://www.youtube.com/watch?v=zc-zwQMV8-s
>4K gsync monitors are typically limited to 120Hz. If my GPU is capable of running a title at 1200fps, it will idle 90% of the time and consume only ~60W regardless of the 600W power limit. So tuning this is pointless. I'd still get 1/10th the rendering latency compared to a GPU that's only capable of 120fps. Same logic applies to power limiting your GPU which will increase latency and kill frame pacing in exchange for a tiny reduction in electricity bills.
This completely misunderstands power consumption and the nonlinear relationship between power and clockspeed. Performing the same work in the same time in short bursts of high clocks and high voltage uses more power than constant low clocks and voltage.
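The usual first-order model is dynamic power ≈ C·V²·f, and voltage has to rise roughly with frequency, so under that model the energy for a fixed amount of work grows roughly with the square of the clock. A toy sketch (the linear V-f curve and the numbers are assumptions; real chips have messier curves):

    # Toy model: dynamic power ~ C * V^2 * f, with V assumed proportional to f.
    # The same work done in a short burst at high clocks costs far more energy.
    def energy_for_work(work, freq_ghz, volts_per_ghz=0.25):
        volts = volts_per_ghz * freq_ghz     # crude, assumed V-f relationship
        power = volts ** 2 * freq_ghz        # arbitrary units, capacitance folded in
        time = work / freq_ghz               # higher clocks finish the work sooner
        return power * time

    print(energy_for_work(100, 2.0))   # steady low clocks  -> 25.0
    print(energy_for_work(100, 4.0))   # bursty high clocks -> 100.0 (~4x the energy)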
It matters for SFF (small form factor) builds, where people often want the highest performance possible but are severely limited by cooling. Efficiency becomes extremely important.
Traditionally nobody, but the massive power consumption of the RTX 4090 combined with a power-hungry CPU might make people take notice. When your desktop needs a dedicated circuit, you have a problem.
My microwave uses 1200 or 1500 watts and does not need a dedicated circuit. Sure, I don’t run it continuously for an hour, but that has nothing to do with whether or not it needs a dedicated circuit.
A 16A circuit allows for about 3.8kW at 240V.
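Rough numbers on that, ignoring breaker headroom rules and assuming a worst-case ~1kW system (600W GPU limit plus a power-hungry CPU and the rest of the box):

    # How much of a typical household circuit a ~1 kW gaming PC would occupy
    system_w = 1000                              # assumed worst-case total draw
    for volts, amps in [(240, 16), (120, 15)]:   # typical EU/AU vs. US circuits
        capacity_w = volts * amps
        print(f"{amps}A @ {volts}V = {capacity_w} W "
              f"({system_w / capacity_w:.0%} used by the PC)")
    # prints: 16A @ 240V = 3840 W (26% used by the PC)
    #         15A @ 120V = 1800 W (56% used by the PC)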
With these ludicrous power requirements, you may in fact need to rethink how you use your power. E.g. if the PC is on the same circuit as a non-heat-pump dryer, you'd have to choose between gaming and drying clothes.
Having said that, I saw a reference to a 4090 offering 95% of the performance at 60% of the power usage when undervolted, so that becomes an attractive option now.
I absolutely love my 5800X3D's insanely low power usage (insane meaning performance per watt for gaming in simulator titles, where it runs circles around Intel).
Of the last-gen stuff, the 5800X3D is arguably among the best bang for the buck, including for gaming builds. The applications where Intel's 12900K/12700K hold a significant advantage over the 5800X3D with its 96MB of L3 cache typically aren't applications desperately in need of CPU power. IME it's poorly-optimized or hard-to-optimize software with bad cache locality that most demands speed, and it's in those cases that the X3D delivers.
I was looking for a laptop for work and decent gaming, and I went with an Intel NUC. Everything pointed to Ryzen being far better in most aspects: power, weight, thickness, and importantly battery life. However, at the price point I was looking at, the faux-no-brand Intel with the 3060 was better, even if its chips seem to lag behind for similar generations.