I found it curious that there is so much hardware that is at around 20% utilization. Wouldn't it be better to have less hardware with higher utilization and buy cheaper/better hardware later if warranted?
Absolutely not. Servers and disk space are inexpensive. The closer you are to capacity, the less burst room you have. Pushing your hardware to near max capacity is a recipe for disaster.
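To make the burst-room point concrete, here's a minimal sketch (the utilization figures are illustrative, not from the thread): the lower your steady-state utilization, the larger a traffic spike you can absorb before saturating the hardware you already have.

```python
# Illustrative only: how large a traffic spike a box can absorb before
# hitting 100% utilization, given its steady-state utilization.
def max_burst_multiplier(utilization: float) -> float:
    return 1.0 / utilization

for u in (0.2, 0.5, 0.8, 0.95):
    print(f"{u:.0%} steady-state -> can absorb a {max_burst_multiplier(u):.1f}x spike")
```

At 20% utilization a 5x spike still fits on the same boxes; at 80% anything much past ~1.25x tips you over, which is the "recipe for disaster" above.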
R610s are going to have dual PSUs and redundant storage arrangements, so they could potentially experience a hardware failure with no downtime.
Chance of hardware failure in a modern server in a colo, assuming it's been stress tested to find DOA hardware before being put in production? Low.
From a quick spec on Dell's site, with a rough guess for the SSDs, the web servers run around $4,000 each, or $4,500 with a better warranty. Cutting 6 servers would save >$20,000.
Whether that's a useful saving depends entirely on their funding, growth plans, and risk tolerance, but if they were a ramen-profitable-is-goal-one startup, it would be months of runway.
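A quick back-of-the-envelope check of that figure (the per-server prices are the rough guesses quoted above, not vendor quotes):

```python
# Rough check of the quoted saving; prices are the commenter's guesses.
servers_cut = 6
price_base, price_better_warranty = 4_000, 4_500  # USD per web server

print(f"${servers_cut * price_base:,} to ${servers_cut * price_better_warranty:,} saved")
# -> $24,000 to $27,000, consistent with the ">$20,000" figure
```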
They apparently have two developer/sysadmins, so the $20K savings is less than a month of their salaries. For the redundancy and growth that affords, it's a no-brainer.
Probably, but given how much they've bragged (not meant pejoratively) about how cheap their hardware is relative to their traffic, it doesn't seem to be a bank-breaking issue. Also, they could almost certainly be significantly more energy efficient with hardware running near 100% utilization.
Well, when you're putting your own hardware in your own datacenter, it can take hours (or days) to scale capacity. If you have a huge traffic spike, that can really hurt the user experience. It's much better to have spare capacity on standby. We learned this the hard way at Hive7 with our first social game...