One possible reason it's not done is the very high energy cost of the cooling setup; I'm not sure it's possible to bring that cost down to a practical level.
Another might be the reduced lifetime of many of the parts involved.
Not nitrogen-cooled, but liquid-cooled overclocked rackmount servers (with consumer gaming CPUs rather than the usual Xeon CPUs) are a thing in HFT.
They’re crazy because they’re so unstable that you basically can’t reboot them and expect them to reliably come back up, so you’re stuck on that given kernel version, firmware, etc. (important for, e.g., the custom network cards that often want firmware updates).
You just turn it on, run it for like a year, and then it dies and you trash it and get a new one delivered.
We use heavily overclocked 10980XEs, and beyond common teething issues with voltages and some specimens just frying, the survivors definitely offer stability if you don't push them insanely far.
But I'd agree that the bleeding edge is generally more bleeding than edge (especially from a performance engineer's perspective), with problems ranging from the kernel not calibrating the TSC correctly to ACPI issues to faulty (fried) instruction caches.
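For anyone wanting to check whether a given box has hit the TSC calibration problem: on Linux you can read the active clocksource from sysfs and cross-check the TSC rate against CLOCK_MONOTONIC_RAW yourself. A minimal sketch in C, assuming x86-64 and a GCC/Clang toolchain (this is a generic diagnostic, not anything from the setup described above):

    // Rough TSC sanity check on Linux/x86-64. If the kernel fell back to
    // "hpet" or "acpi_pm" instead of "tsc", TSC calibration likely failed.
    #include <stdio.h>
    #include <time.h>
    #include <x86intrin.h>

    int main(void) {
        char src[64] = "?";
        FILE *f = fopen("/sys/devices/system/clocksource/"
                        "clocksource0/current_clocksource", "r");
        if (f) { fscanf(f, "%63s", src); fclose(f); }
        printf("active clocksource: %s\n", src);

        // Estimate the TSC frequency against CLOCK_MONOTONIC_RAW over ~100 ms.
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC_RAW, &t0);
        unsigned long long c0 = __rdtsc();
        do { clock_gettime(CLOCK_MONOTONIC_RAW, &t1); }
        while ((t1.tv_sec - t0.tv_sec) * 1000000000LL
               + (t1.tv_nsec - t0.tv_nsec) < 100000000LL);
        unsigned long long c1 = __rdtsc();

        double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
        printf("estimated TSC rate: %.3f GHz\n", (double)(c1 - c0) / ns);
        return 0;
    }

If the clocksource reads hpet or acpi_pm on a machine that should be running off the TSC, that's usually the calibration failure described above.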