
The fact is, managing your own hardware is a PITA and a distraction from focusing on the core product. I loathe messing with servers and even opt for "overpriced" PaaS like Fly, Render, or Vercel, because every minute spent messing with and monitoring servers is a minute not spent on product. My tune might change past a certain size, when there's a massive cloud bill and room for full-time ops people, but to offset their salaries, the bill would have to be huge.



That argument makes sense for PaaS offerings like the ones you mention. But for bare "cloud" like AWS, I'm not convinced it saves any effort; it merely swaps one kind of complexity for another. Every place I've worked had full-time people messing with YAML files or doing "something" with the infrastructure, generally trying to work around the (self-inflicted) problems introduced by their cloud provider, whether it's the fact that you get 2010s-era hardware or that you get nickel & dimed on absolutely arbitrary actions that have no relationship to real-world costs.


In what sense is AWS "bare cloud"? S3, DynamoDB, Lambda, ECS?


How do you configure S3 access control? You need to learn & understand how their IAM works.
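
For illustration, even a "simple" read-only grant means authoring an IAM policy document. A minimal sketch with boto3 (bucket name, account ID, and role name are all made up):

    import json

    import boto3  # AWS SDK for Python

    s3 = boto3.client("s3")

    # Hypothetical policy: let one application role read from one bucket.
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "AllowAppRoleReadOnly",
                "Effect": "Allow",
                "Principal": {"AWS": "arn:aws:iam::123456789012:role/app-role"},
                "Action": ["s3:GetObject", "s3:ListBucket"],
                "Resource": [
                    "arn:aws:s3:::example-bucket",
                    "arn:aws:s3:::example-bucket/*",
                ],
            }
        ],
    }

    s3.put_bucket_policy(Bucket="example-bucket", Policy=json.dumps(policy))

Getting the Version string, Principal format, and resource ARNs right is exactly the learning curve in question.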

How do you even point a pretty URL at a Lambda? Last time I looked, you needed to stick an "API gateway" in front (which I'm sure you also get nickel & dimed for).

How do you go from "here's my git repo, deploy this on Fargate" with AWS? You need a CI pipeline which will run a bunch of awscli commands.
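
For example, the tail end of such a pipeline, sketched with boto3 rather than awscli (cluster and service names are made up; assume earlier steps already built the image and pushed it to ECR):

    import boto3

    ecs = boto3.client("ecs")

    # Roll new Fargate tasks so they pull the freshly pushed image.
    ecs.update_service(
        cluster="my-cluster",
        service="my-service",
        forceNewDeployment=True,
    )

    # Block until the rollout settles (or the waiter gives up).
    ecs.get_waiter("services_stable").wait(
        cluster="my-cluster",
        services=["my-service"],
    )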

And I'm not even talking about VPCs, security groups, etc.

A somewhat different skillset than old-school sysadmin (although once you know the sysadmin basics, you realize a lot of these are just the same concepts under a branded name, with arbitrary nickel & diming sprinkled on top), but equivalent in complexity.


How does one install and run Linux/BSD/another UNIX? One needs to learn and understand how a UNIX works.

The essence of the complaint is that one has to have knowledge of something before that something can be used. It seems like a reasonable expectation for just about anything in life.

(The API Gateway in AWS is USD 2.35 for 10 million 32 kB requests; a Lambda can have its own private URL if required; and Fargate does not deploy Git repos, it runs Docker images.)
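
To illustrate that last point about Lambda URLs, wiring one up is a single call; a boto3 sketch (function name is made up):

    import boto3

    lam = boto3.client("lambda")

    # AuthType="AWS_IAM" keeps the endpoint private to signed callers;
    # "NONE" would make it public.
    resp = lam.create_function_url_config(
        FunctionName="my-function",
        AuthType="AWS_IAM",
    )
    print(resp["FunctionUrl"])  # e.g. https://<id>.lambda-url.<region>.on.aws/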


> The essence of the complaint that one has to have the knowledge of something before that something can be used

My point was to disprove that "cloud" is simpler than conventional sysadmin - it is not, and it involves similar effort, complexity and manpower requirements.


I will have to disagree on that.

Cloud is simpler than conventional sysadmin, once its foundational principles are understood and the declarative approach to the cloud architecture is adopted. If I want to run a solution, cloud gives me just that – a platform that simply runs my solution and abstracts the sysadmin ugliness away.
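
As a sketch of what "declarative" means here: in the AWS CDK's Python flavor, you state what should exist and let the platform reconcile it (stack and bucket names are made up):

    from aws_cdk import App, RemovalPolicy, Stack
    from aws_cdk import aws_s3 as s3
    from constructs import Construct


    class StorageStack(Stack):
        def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
            super().__init__(scope, construct_id, **kwargs)
            # Declare the desired state; CloudFormation decides whether
            # to create, update, or leave the bucket alone.
            s3.Bucket(
                self,
                "AppData",
                versioned=True,
                removal_policy=RemovalPolicy.RETAIN,
            )


    app = App()
    StorageStack(app, "storage")
    app.synth()  # emits a CloudFormation template; `cdk deploy` applies it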

I have experienced both sides, including UNIX kernel and system programming, and I don't want to even think about sysadmin unless I want to tinker with a UNIX box on a weekend as a leisure activity.


EC2


I would actually argue that EC2 is a "cloud smell": if you're using EC2, you're doing it wrong.


Counterpoint: if you’re never “messing with servers,” you probably don’t have a great understanding of how their metrics map to your application’s, so if you bottleneck on something, it can be difficult to figure out what to fix. The result is usually that you just pay more money to vertically scale.

To be fair, you did say “my tune might change past a certain size.” At small scale, nothing you do within reason really matters. World’s worst schema, but your DB is only seeing 100 QPS? Yeah, it doesn’t care.


I don’t think you’re correct. I’ve watched junior/mid-level engineers figure things out solely by working on the cloud and scaling things to a dramatic degree. It’s really not rocket science.


I didn't say it's rocket science, nor that it's impossible to do without having practical server experience, only that it's more difficult.

Take disks, for example. Most cloud-native devs I've worked with have no clue what IOPS are. If you saturate your disk, that's likely to cause knock-on effects like increased CPU utilization from IOWAIT, and since "CPU is high" is easy for anyone to understand, the seemingly obvious solution is to get a bigger instance, which, depending on the application, may inadvertently solve the problem. For an RDBMS, a larger instance means a bigger buffer pool / shared buffers, which means fewer disk reads. Problem solved, even though actually fixing the root cause would've cost a tenth or less of what bumping up the entire instance costs.
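
To make that concrete, here's a rough way to eyeball both numbers on a box, sketched with the third-party psutil library (the threshold is made up):

    import time

    import psutil  # third-party: pip install psutil

    # Approximate whole-box IOPS by diffing disk counters over one second.
    before = psutil.disk_io_counters()
    time.sleep(1)
    after = psutil.disk_io_counters()
    iops = (after.read_count - before.read_count) + (
        after.write_count - before.write_count
    )

    # On Linux, iowait is CPU time stuck waiting on disk I/O.
    iowait = psutil.cpu_times_percent(interval=1).iowait

    print(f"approx IOPS: {iops}, iowait: {iowait:.1f}%")
    if iowait > 20:  # made-up threshold
        print("likely disk-bound: a bigger instance's CPU won't fix this")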


> Most cloud-native devs

You might be generalizing from your personal experience. Since 2015, at all of my jobs, everything has been running on some sort of cloud, and I've yet to meet a person who doesn't understand IOPS. If I were a junior, I'd just google "slow X potential reasons" (and from my experience, that's what juniors tend to do). You'll most likely see some references to IOPS and can continue your research from there.

We've learned all these things one way or another. My experience started around 2007ish, when I was renting cheap servers from hosting providers. Others might be dipping their toes into readily available cloud infrastructure and learning it from that end. Both work.


Anecdotal, but I once worked for a company where the product line I built for them after an acquisition was delayed by five months, because that's how long it took to get the hardware ordered and installed in the datacenter. Getting it up on AWS would have been a day's work, maybe two.


Yes, it is death by a thousand cuts. Speccing, negotiating with hardware vendors, data center selection and negotiation, DC engineers/remote hands, managing security cage access, designing your network, network gear, IP address ranges, BGP, secure remote console access, cables, shipping, negotiating with bandwidth providers (multiple, for redundancy), redundant hardware, redundant power sources, UPS. And then you get to plug your server in. Now duplicate the other stuff your cloud might provide, like offsite backups, recovery procedures, HA storage, geographic redundancy. And do it all again when you outgrow your initial DC. Or build your own DC (power, climate, fire protection, security, fiber, flooring, racks).


Much of this is still required in cloud. Also, I think you're missing the middle ground where 99.99% of companies could happily exist indefinitely: colo. It makes little to no financial or practical sense for most to run their own data centers.


Oh, absolutely, with your own hardware you need planning. Time to deployment is definitely a thing.

Really, the one major thing that bites with cloud providers is their ~99.9% margin on egress. The markup is insane.


Writing piles of IaC code like Terraform and CloudFormation is also a PITA and a distraction from focusing on your core product.

PaaS is probably the way to go for small apps.


A small app (or a larger one, for that matter) can quite easily run on infra that's instantiated from canned IaC, like TF AWS Modules [0]. If you can read docs, you should be able to quite trivially get some basic infra up in a day, even with zero prior experience managing it.

[0]: https://github.com/terraform-aws-modules


Yes, I've used several of these modules myself. They save tons of time! Unfortunately, on legacy projects I inherited a bunch of code from individuals who built everything "by hand" and then copy-pasted it everywhere. No reusability.


But that effort has a huge payoff, in that it can be used for disaster recovery in a new region and to spin up testing environments.


I'm with you there: with stuff like fly.io, there's really no reason to worry about infrastructure.

AWS, on the other hand, seems about as time-consuming and hard as using root servers. You're at a higher level of abstraction, but the complexity is about the same, I'd say. At least that's my experience.


I agree with this position and actively avoid AWS complexity.


> every minute messing with and monitoring servers

You're not monitoring your deployments because "cloud"?



