I'm sorta surprised nobody is running liquid-nitrogen-cooled, overclocked chips 'in production' and renting them out.
For some use cases, such as a single-threaded application you just can't optimize any further, it would be worth the added cost and complexity to get a chunk more single-threaded performance.
The extra cooling makes the whole setup extremely inefficient power-wise; the frequency gains are nowhere near linear in power (see the back-of-envelope below).
So outside of first-to-ring-the-bell scenarios, e.g. some high-frequency trading setups, it makes absolutely no sense. Now the real kicker is that such overclocked setups are inherently unstable - so no mass deployment for you.
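To put rough numbers on that non-linearity: classic CMOS dynamic power scales as P ≈ C·V²·f, and near the frequency limit the voltage has to rise roughly with frequency, so power grows roughly with the cube of the clock. A back-of-envelope sketch (the V ∝ f assumption and the numbers are illustrative, not measured):

    # Why overclocking is power-inefficient: assume classic CMOS
    # dynamic power P ~ C * V^2 * f, with voltage scaling roughly
    # linearly with frequency near the limit, so P ~ f^3.
    # Illustrative assumptions, not measurements.

    def relative_power(freq_ratio: float) -> float:
        """Dynamic power relative to stock for a given clock ratio."""
        return freq_ratio ** 3  # V ~ f  =>  P ~ V^2 * f ~ f^3

    for oc in (1.0, 1.1, 1.2, 1.3):
        print(f"{oc:.0%} of stock clock -> ~{relative_power(oc):.2f}x power")

    # 130% of stock clock -> ~2.20x power: a 30% single-thread gain
    # more than doubles the heat you have to remove.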
I work in finance (HFT), and some shops have been using liquid-cooled, heavily overclocked gear for quite some time now. Typically with all cores locked to avoid C-state/P-state transitions (a sketch of what that looks like follows below).
Even immersion cooling is gaining traction and showing up in data centres here and there.
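For the curious, "locking the cores" on Linux usually means pinning the cpufreq governor to performance and disabling deep idle states (e.g. intel_idle.max_cstate=0 and idle=poll on the kernel command line). A minimal sanity-check sketch, assuming the standard Linux cpufreq sysfs layout:

    # Check that every core has its frequency governor pinned to
    # "performance" (no P-state downclocking). Assumes the standard
    # Linux cpufreq sysfs layout; the driver may differ per kernel.
    from pathlib import Path

    for cpu in sorted(Path("/sys/devices/system/cpu").glob("cpu[0-9]*")):
        gov_file = cpu / "cpufreq" / "scaling_governor"
        if not gov_file.exists():
            continue  # cpufreq not exposed for this core
        governor = gov_file.read_text().strip()
        if governor != "performance":
            print(f"{cpu.name}: governor is '{governor}', want 'performance'")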
For higher-power chips, we're running into the limits of air cooling. You can only blow so much air, and heat sinks with enough surface area to cope stop fitting into the chassis. At some point we'll have to change cooling mediums, use chip materials that can run at higher temperatures, and/or improve performance per watt (rough numbers below).
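Rough numbers on where air gives out, using Newton's law of cooling Q = h·A·ΔT; the convection coefficient h for forced air over fins is only on the order of 25-150 W/(m²·K), and the figures below are illustrative assumptions:

    # Back-of-envelope on air-cooling limits via Q = h * A * dT.
    # h = 100 W/(m^2*K) is an optimistic forced-air figure; all
    # numbers here are illustrative assumptions.

    def required_fin_area(tdp_w: float, h: float, delta_t: float) -> float:
        """Fin surface area (m^2) needed to reject tdp_w watts."""
        return tdp_w / (h * delta_t)

    # A 500 W chip, aggressive forced air, fins 50 K above ambient:
    print(f"~{required_fin_area(500, h=100, delta_t=50):.2f} m^2 of fins")
    # ~0.10 m^2 of fin surface to cram into a chassis; halve the
    # temperature delta and the required area doubles.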
AIUI, if you built a custom chip using low-power async/clockless logic, cooling it with LN2 would absolutely make sense: it would run at high speeds while sipping power. Not for commonly available chips, though - those waste power with every clock cycle and release it as heat. (It's gotten to the point where a chip can't even be fully powered on all at once - some part of it is always power-gated and kept "dark".) Any practical subambient cooling system would simply be overwhelmed.
One possible reason it's not done is the very high energy cost of the cooling setup; I'm not sure that cost can be brought down to a practical level (rough numbers below).
Another reason might be the shorter lifetime of many of the parts involved.
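To put a number on that energy cost: the Carnot coefficient of performance, COP = T_cold / (T_hot - T_cold), is a hard upper bound on any cooler, and small cryocoolers reach only a fraction of it. A sketch (the 10%-of-Carnot figure is a loose assumption):

    # Rough energy cost of holding silicon at LN2 temperature.
    # Carnot COP = T_cold / (T_hot - T_cold) is an upper bound;
    # "10% of Carnot" below is a loose assumption, not a datum.

    T_COLD, T_HOT = 77.0, 300.0             # K: LN2 boiling point, room temp
    carnot_cop = T_COLD / (T_HOT - T_COLD)  # ~0.35 at best
    real_cop = 0.10 * carnot_cop            # assumed cryocooler efficiency

    print(f"~{1.0 / real_cop:.0f} W of input power per W of heat removed")
    # ~29 W in per W out: a 300 W overclocked chip could need on the
    # order of 9 kW of compressor power just to stay at 77 K.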
Not nitrogen-cooled, but liquid-cooled overclocked rackmount servers (with consumer gaming CPUs rather than the usual Xeon CPUs) are a thing in HFT.
They're crazy because they are so unstable that you basically can't reboot them and expect them to reliably come back up, so you are stuck on a given kernel version, firmware, etc. (important for, e.g., the custom network cards that often want firmware updates).
You just turn it on, run it for like a year, and then it dies and you trash it and get a new one delivered.
We use heavily overclocked 10980XEs, and beyond the common teething issues with voltages (and some specimens just frying), the survivors definitely offer stability if you don't push them insanely far.
But I'd agree the bleeding edge is generally more bleeding than edge (especially from a performance engineer's perspective), with problems ranging from the kernel not calibrating the TSC correctly (quick check below), to ACPI issues, to faulty (fried) instruction caches.
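On the TSC point: when calibration fails, the kernel quietly falls back to a slower clocksource and timing calls get dramatically more expensive, which is exactly what you don't want on a latency-critical box. A quick check, assuming the standard Linux clocksource sysfs interface:

    # Detect a TSC fallback: if the kernel distrusts the TSC it drops
    # to hpet or acpi_pm and clock_gettime() gets far more expensive.
    # Assumes the standard Linux clocksource sysfs interface.
    from pathlib import Path

    cs = Path("/sys/devices/system/clocksource/clocksource0")
    current = (cs / "current_clocksource").read_text().strip()
    available = (cs / "available_clocksource").read_text().split()

    if current != "tsc":
        print(f"WARNING: clocksource is '{current}', available: {available}")
        print("TSC calibration likely failed; check dmesg for tsc messages")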
Look up the Kingpin Roboclocker. That's the closest anyone has gotten to a fully automated system that could be turned into something you could run for extended periods in the right environment.
Doesn't even need to be nitrogen; while that allows a higher possible OC, there are easier-to-handle liquefied gases in common use (liquid ammonia).
Though ammonia is somewhat unpleasant (and toxic), it's also common in industrial refrigeration, so supplies are steady and it's cheap - solvable in a DC environment.
What is the chance that the ammonia itself would damage parts of the computer? Surely few, if any, machines are rated to handle liquid-ammonia immersion with direct PCB contact.
Also, AFAIK ammonia converts zinc into zinc hydroxide, which is a semiconductor, and liquid ammonia may be electrically conductive even if gaseous ammonia is not. There are so many things that could go wrong here in my mind, but I'd really like to know where you heard about this - or, if you're working with it somehow, please fill in my knowledge gaps.