
I found it curious that there is so much hardware that is at around 20% utilization. Wouldn't it be better to have less hardware with higher utilization and buy cheaper/better hardware later if warranted?



Absolutely not. Servers and disk space are inexpensive. The closer you are to capacity, the less burst room you have. Pushing your hardware to near max capacity is a recipe for disaster.


Time for the sysadmin math quiz:

10 servers running at 20% utilization. Two servers experience hardware failure. What's the resulting utilization per server?

4 servers running at 50% utilization. Two servers experience hardware failure. What's the resulting utilization per server?

Which approach affords greater redundancy and room for growth?
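A rough back-of-envelope check, assuming the total load stays constant and redistributes evenly across the surviving servers (the counts and utilization figures are the ones from the quiz):

    # Redistribute a constant total load across the servers that survive.
    def utilization_after_failures(servers, utilization, failed):
        total_load = servers * utilization   # total work, in "whole-server" units
        survivors = servers - failed
        return total_load / survivors        # new per-server utilization

    print(utilization_after_failures(10, 0.20, 2))  # 0.25 -> 25% per box, plenty of headroom
    print(utilization_after_failures(4, 0.50, 2))   # 1.00 -> 100% per box, zero headroom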


R610s are going to have dual PSUs and redundant storage arrangements, so they could potentially experience a hardware failure with no downtime.

Chance of hardware failure in a modern server in a colo, assuming it's been stress tested to find DOA hardware before being put in production? Low.

Speccing it quickly on Dell's site, with a rough guess for the SSDs, the web servers run around $4,000 each, or $4,500 with a better warranty. Cutting six servers saves more than $20,000.

Whether that's a useful saving depends entirely on their funding, growth plans, and risk tolerance, but if they were a ramen-profitability-is-goal-one startup, it would be months of runway.
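For reference, that figure is just the ~$4,000-per-box estimate above times the six servers that would be cut (a rough sketch, not a quote):

    per_server = 4000        # rough Dell list-price guess; ~$4,500 with the better warranty
    servers_cut = 6          # e.g. 10 boxes at 20% trimmed down to 4 at 50%
    print(per_server * servers_cut)   # 24000 -> a saving of >$20,000 up front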


They apparently have two developer/sysadmins, so the $20K savings is less than a month of their salaries. For the redundancy and growth that affords, it's a no-brainer.


Probably, but given how much they've bragged (not meant pejoratively) about how cheap their hardware is relative to their traffic, it's unlikely to be a bank-breaking issue. Also, they could almost certainly be significantly more energy efficient with hardware running near 100% utilization.


We wouldn't leave our cars idling 80% of the time, but servers? Well, why not. Every little bit helps, guys.


Well, when you're putting your own hardware in your own datacenter, it can take hours (or days) to scale capacity. If you have a huge traffic spike, that can really hurt the user experience. It's much better to have capacity on standby. We learned this the hard way at Hive7 with our first social game...


RAM is pretty cheap these days...


and less RAM is even cheaper. You don't address the point at all.


The point is: traffic is 50x as expensive as RAM.


I don't understand what you are saying.

The original comment is based on the assumption that less hardware would still easily support the same traffic level.


But it wouldn't support the same burst-over-average traffic, nor the same failure tolerance.



