
> The winner of course was the guy who understood that 6TiB is what 6 of us in the room could store on our smart phones, or a $199 enterprise HDD (or three of them for redundancy), and it could be loaded (multiple times) to memory as CSV and simply run awk scripts on it.

If it's not a very write-heavy workload but you still want to be able to look things up, wouldn't something like SQLite be a good choice? It can handle databases up to 281 TB: https://www.sqlite.org/limits.html
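
A minimal sketch of that kind of mostly-read setup with Python's built-in sqlite3 module (the events table, its columns, and the file name are invented for illustration):

    import sqlite3

    # Open (or create) a single-file database sitting on that cheap HDD.
    con = sqlite3.connect("events.db")
    con.execute("PRAGMA journal_mode=WAL")  # better read concurrency

    # Hypothetical schema: one wide table of events.
    con.execute("""
        CREATE TABLE IF NOT EXISTS events (
            id      INTEGER PRIMARY KEY,
            user_id TEXT,
            ts      INTEGER,
            payload TEXT
        )
    """)
    con.execute("CREATE INDEX IF NOT EXISTS idx_events_user ON events(user_id)")

    # Lookups go through the index instead of scanning the whole file.
    rows = con.execute(
        "SELECT ts, payload FROM events WHERE user_id = ?",
        ("user-42",),
    ).fetchall()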

It even has basic JSON support, if you're up against some freeform JSON and not all of your data neatly fits into a schema: https://sqlite.org/json1.html
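
If that payload column holds freeform JSON, the built-in json1 functions can extract and filter on fields without forcing everything into a schema; a small sketch, reusing the hypothetical events table from above (the JSON field names are invented):

    import sqlite3

    con = sqlite3.connect("events.db")

    # json_extract() is one of SQLite's built-in json1 functions:
    # pull one field out of the JSON payload and filter on another.
    rows = con.execute(
        """
        SELECT id, json_extract(payload, '$.status') AS status
        FROM events
        WHERE json_extract(payload, '$.country') = 'DE'
        """
    ).fetchall()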

A step up from that would be PostgreSQL running in a container: it supports all sorts of workloads and has more advanced extensions for pretty much anything you might ever want to do, from geospatial data with PostGIS to pgvector, TimescaleDB, etc., while still having a plethora of drivers, not making you drown in complexity, and having no issues with a few dozen or a few hundred TB of data.
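
Spinning that up is a one-liner with the official postgres image (image tag, container name, password, and volume name below are placeholders):

    # Single-node PostgreSQL in a container, data kept on a named volume.
    docker run -d --name pg \
      -e POSTGRES_PASSWORD=changeme \
      -v pgdata:/var/lib/postgresql/data \
      -p 5432:5432 \
      postgres:16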

Either of those is something that most people on the market know, neither will make anyone want to pull their hair out, and both give you quick writes and retrieval as well as proper querying. Not everything needs, or can even work with, a relational database, but it's still an okay tool to reach for once you're past trivial file storage needs. Plus, you have to build a bit less of whatever functionality you might need around the data you store, and there are even nice options for transparent compression.



