
This isn't a terribly practical article. I don't disagree with mocks being an "alternate reality". The author is entitled to their opinion on whether this is a good or bad thing. That said... what is the alternative? Integration testing all the way down?

The implication here is to work with stubs over mocks (i.e. I need to work with S3; I would then abstract that to provide a StubObjectStore to replace the S3ObjectStore used by other pieces of my code during tests). Great; now I know the rest of my code works against the stub. But at some point, I need confidence that my S3ObjectStore handles everything correctly. Do I give everyone on my team a bucket? Perhaps their own test AWS account? Test it, but only in the pipeline on its way to an intermediate stage? I can't control how AWS writes their SDKs (spoiler alert: they don't stub well), but I need some confidence that my code handles their well-understood behavior at scale. Likewise, I often can't control the libraries and integration points with other systems, and mocking offers a "cheap" (if imperfect) way to emulate that behavior.
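
A minimal sketch of that pattern, assuming Python and boto3; the ObjectStore/StubObjectStore names are illustrative:

    from typing import Protocol

    class ObjectStore(Protocol):
        def put(self, key: str, data: bytes) -> None: ...
        def get(self, key: str) -> bytes: ...

    class S3ObjectStore:
        """The real implementation; exercised only by integration tests."""
        def __init__(self, bucket: str):
            import boto3
            self._s3 = boto3.client("s3")
            self._bucket = bucket

        def put(self, key: str, data: bytes) -> None:
            self._s3.put_object(Bucket=self._bucket, Key=key, Body=data)

        def get(self, key: str) -> bytes:
            return self._s3.get_object(Bucket=self._bucket, Key=key)["Body"].read()

    class StubObjectStore:
        """In-memory stand-in for unit tests: no network, no credentials."""
        def __init__(self):
            self._objects: dict[str, bytes] = {}

        def put(self, key: str, data: bytes) -> None:
            self._objects[key] = data

        def get(self, key: str) -> bytes:
            return self._objects[key]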




For AWS specifically, I prefer to have an AWS account dedicated to each service + stage. For example, if I have an image service that handles S3 uploads (say Lambda, S3, CloudFront, and API Gateway), then I'd deploy a "test" environment to a dedicated AWS account and run tests against that. Since it's fully serverless, it only costs a few pennies to test (or nothing at all).
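
In case it helps, a sketch of what a test against such a deployed stage can look like; TEST_API_ENDPOINT is a hypothetical variable holding the API Gateway URL emitted by the deployment:

    import os
    import urllib.request

    def test_image_upload_roundtrip():
        # Hits the real deployed "test" stage; no emulation involved.
        endpoint = os.environ["TEST_API_ENDPOINT"]
        req = urllib.request.Request(
            f"{endpoint}/images", data=b"fake-image-bytes", method="POST"
        )
        with urllib.request.urlopen(req) as resp:
            assert resp.status == 200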

I try not to develop locally at all anymore. If you're looking for more practical advice, perhaps this will help: https://dev.to/aws-builders/developing-against-the-cloud-55o...


That means that every person who runs the tests needs credentials for that AWS account. That obviously won’t work for an open source project. Even for a company project, how do you distribute those secrets? It adds friction for developers getting their local dev environments set up.

Not only that, but you now need network access to run tests. A network blip or a third party service outage now makes your tests fail.

There is also the possibility that an aborted test run might leave state in S3 that you are now paying for. Someone hits Ctrl-C during a test run and now you have a huge AWS bill.
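
One partial mitigation (a sketch, assuming pytest and boto3; the bucket name is made up) is to write under a unique prefix and clean it up in teardown:

    import uuid
    import boto3
    import pytest

    @pytest.fixture
    def s3_prefix():
        bucket = boto3.resource("s3").Bucket("my-test-bucket")  # hypothetical
        prefix = f"test-run-{uuid.uuid4()}/"
        try:
            yield prefix
        finally:
            # Runs on test failures and Ctrl-C, but a killed process still
            # leaks, so pair this with a bucket lifecycle rule as backstop.
            bucket.objects.filter(Prefix=prefix).delete()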


> That obviously won’t work for an open source project

On the contrary - I worked at Serverless Inc on the Serverless Framework for the last two years, and we used this pattern extensively (and very successfully) in our open source repos.

You can even find an example here which provisions real live AWS infrastructure: https://github.com/serverless/dashboard-plugin/tree/master/i...

We used part of our enterprise SaaS product to provision temporary credentials via STS and an assumable role, and it works great. You could do the same thing with something like HashiCorp Vault.
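
The STS side of that is small; roughly this with boto3 (a sketch, the role ARN is a placeholder):

    import boto3

    # Mint short-lived credentials for the test run.
    creds = boto3.client("sts").assume_role(
        RoleArn="arn:aws:iam::123456789012:role/ci-test-runner",
        RoleSessionName="ci-test-run",
        DurationSeconds=3600,
    )["Credentials"]

    session = boto3.Session(
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )
    s3 = session.client("s3")  # scoped to whatever the role permits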

For Lambda, S3, and DynamoDB, the perpetual free tier means we've never paid to run our own tests. API Gateway isn't free (after the first year), but it's still pennies per month. We've had several cases where test resources stuck around for a long time, but a billing alert and the occasional CloudFormation stack cleanup take care of that.

We still have offline unit tests that cover business logic, but everything else runs against the cloud - even our development environments ship code straight to Lambda.


Why spend money on AWS you don't have to? Use Minio for S3 locally (or on your build server).

Local development is the easiest way to avoid wasting money and resources while debugging and developing.
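
Pointing a client at a local Minio is one line of config; for example, with boto3 (a sketch using Minio's default port and root credentials):

    import boto3

    s3 = boto3.client(
        "s3",
        endpoint_url="http://localhost:9000",  # Minio's default port
        aws_access_key_id="minioadmin",        # Minio's default root user
        aws_secret_access_key="minioadmin",
    )
    s3.create_bucket(Bucket="test-bucket")
    s3.put_object(Bucket="test-bucket", Key="hello.txt", Body=b"hello")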


Speaking from personal experience, our team wasted far more money tinkering with local dev environments and trying to replicate the cloud than we ever did by simply developing against the cloud itself.

The blog post in the parent comment lays out our experience and my thoughts, but because of the pretty generous free tier, I don't think we've ever paid a penny for a build/dev/test AWS account.


I find services like this often work really well with hermetic integration tests:

https://github.com/adobe/S3Mock

It's more realistic than using mock objects/function calls and requires less maintenance.
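
For example (a sketch with boto3; assumes the container was started per the S3Mock README, e.g. docker run -p 9090:9090 adobe/s3mock):

    import boto3

    s3 = boto3.client(
        "s3",
        endpoint_url="http://localhost:9090",  # S3Mock's default HTTP port
        aws_access_key_id="dummy",             # S3Mock ignores credentials
        aws_secret_access_key="dummy",
    )
    s3.create_bucket(Bucket="test-bucket")
    s3.put_object(Bucket="test-bucket", Key="k", Body=b"v")
    assert s3.get_object(Bucket="test-bucket", Key="k")["Body"].read() == b"v"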


Yes, use stubs. But then also have integration tests.

The point is to have easier-to-write, lower-overhead unit tests, and then a few full-fat integration tests that put everything together.

Mocking is a terrible middle ground.
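
A sketch of that split, assuming pytest and reusing the ObjectStore classes from the stub sketch upthread (the marker name and bucket are made up):

    import pytest

    def test_report_roundtrip_with_stub():
        # The bulk of the suite: fast, offline, runs everywhere.
        store = StubObjectStore()
        store.put("report.csv", b"a,b,c")
        assert store.get("report.csv") == b"a,b,c"

    @pytest.mark.integration  # select just these with: pytest -m integration
    def test_report_roundtrip_against_real_s3():
        # The few full-fat tests: real S3, real credentials.
        store = S3ObjectStore("my-integration-bucket")  # hypothetical bucket
        store.put("report.csv", b"a,b,c")
        assert store.get("report.csv") == b"a,b,c"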


That still doesn't address my concern. I cannot test my S3 implementation without either mocks or a very specific emulator of the protocol. AWS happens to be popular enough that some libraries exist to do the latter, but I assert this is the exception rather than the rule for external integrations. You shouldn't be checking in code without at least some unit testing along for the ride, mocks or otherwise. It is indeed no substitute for integration testing, but it can certainly help catch bugs sooner rather than later.


I agree you need some integration tests, but, in my experience, if you define an interface that captures what you need from third parties, you can make 90% of the code you care about unit testable.

For me, this isn’t just theory: I’ve worked at a place that trained its employees to write code this way, and the benefit was obvious.


If your external integration has no local alternative, you are getting locked in to its provider, so you should either not use it or have an abstraction layer and implement an alternative backend.


It's simple enough to extract the interface of the S3 calls you make into a definition against which you can write a test stub, letting unit tests pipe fake data straight in.
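
Something like this (a sketch in Python; the names are made up):

    import json

    def load_config(store) -> dict:
        # Code under test: depends only on the narrow interface it needs.
        return json.loads(store.get("config.json"))

    class FakeStore:
        def get(self, key: str) -> bytes:
            return b'{"retries": 3}'

    def test_load_config_parses_json():
        assert load_config(FakeStore()) == {"retries": 3}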





