
A little while back, I had a conversation with a colleague about sorting entries by "updated at" in the user interface, and to my surprise the backend team hadn't added it.

Many of these "we are going to need it"s come from experience. For example, in the context of data structures (DS), I have made many "mistakes" that I only get right the second time around. These mistakes made writing algorithms for the DS harder, or gave the DS poor performance.

Sadly, it's hard to transfer this underlying breadth of knowledge and intuition for making good tradeoffs. As such, a one-off tip like this is limited in its usefulness.






Database schemas being perfect out of the gate was replaced by reliable migrations.

If it's not data that's essential to serving the current functionality, just add a column later. `updated_at` doesn't have to be accurate for your entire dataset; just set it to `NOW()` when you run the migration.
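For instance, a minimal sketch of that kind of additive migration, assuming EF Core on Postgres (table and column names are illustrative):

  using System;
  using Microsoft.EntityFrameworkCore.Migrations;

  public partial class AddUpdatedAt : Migration
  {
      protected override void Up(MigrationBuilder migrationBuilder)
      {
          // Existing rows get NOW() at migration time; new rows get it as a column default.
          migrationBuilder.AddColumn<DateTime>(
              name: "updated_at",
              table: "documents",
              nullable: false,
              defaultValueSql: "NOW()");
      }

      protected override void Down(MigrationBuilder migrationBuilder)
      {
          migrationBuilder.DropColumn(name: "updated_at", table: "documents");
      }
  }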


Sure, migrations are bearable (especially ones that only add columns).

But for examples like the "updated_at" column or "soft delete" functionality, you often only find out you need it when the operations team suddenly discovers they need that functionality on existing production rows because something weird happened.


In C#-land, we just have it as a standard that ~every table inherits from `ITrackable`, and we wrote a little EF plugin to automatically update the appropriate columns.

  public interface ITrackable
  {
      DateTime CreatedOn { get; set; }
      DateTime ModifiedOn { get; set; }
  }
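The "plugin" can be as small as a SaveChanges override along these lines (a sketch of the idea, not the actual code; assumes EF Core and UTC timestamps):

  using System;
  using Microsoft.EntityFrameworkCore;

  public class AppDbContext : DbContext
  {
      public override int SaveChanges()
      {
          var now = DateTime.UtcNow;
          foreach (var entry in ChangeTracker.Entries<ITrackable>())
          {
              // Stamp new rows with a creation time, and every insert/update with a modification time.
              if (entry.State == EntityState.Added)
                  entry.Entity.CreatedOn = now;
              if (entry.State == EntityState.Added || entry.State == EntityState.Modified)
                  entry.Entity.ModifiedOn = now;
          }
          return base.SaveChanges();
      }
  }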

Saves so much time and hassle.


“Reliable migrations” almost seems like an oxymoron. Migrations are complicated, difficult, and error-prone. I think there’s a good takeaway here around good initial schema design practices: the less you have to morph your schema over time, the fewer of those risky migrations need to run.

My experience over the last decade has been different.

Use a popular framework. Run it against your test database. Always keep backups in case something unforeseen happens.

Something especially trivial like adding additional columns is a solved problem.


Adding additional columns has always been trivial. What is not is the 98% of other things migrations do: managing the schema version, applying ups in order, executing downs correctly, handling FK references. It’s not necessarily the fault of the migration frameworks themselves, of which many exist in varying degrees of quality, but rather that morphing a schema that depends on the underlying shape of the data is a difficult problem with many footguns.

My experience has not been so smooth. Migrations are reasonable, but they're not free, and "always keep backups" sounds like you'd tolerate downtime more than I would.

Even in the best case (e.g. a basic column addition), the migration itself can be a "noisy neighbor" for other queries. It can put pressure on downstream systems consuming CDC (and maybe some of those run queries too, and now your load is even higher).


Anything with state is going to be hard to get right. Couple sticky schema changes to that state and you’re looking at a lot of potential ways things can go wrong: downs that are unnecessarily destructive, rollbacks that corrupt data, migrations applied in the wrong order. Everyone even tangentially involved with any sort of migration system has a war story (or a few) about the creative way the state got wrecked.

Here’s one of mine: a Postgres change applied fine in unit tests, integration, and dev, but not in prod, because the shape of the data (an enum) did not conform to the new constraint.

Another was a monorepo with 5-6 services that talk to each other across DBs, which caused dev to apply the wrong migration to the wrong HEAD, mixing up the DBs. That was a fun one to sort out.


Still depends on what the data represent: you could get yourself into a storm of phone calls from customers if, after your latest release, there's now a weird note saying their saved document was last updated today.

"HOW DARE YOU MODIFY MY DOCUMENTS WITHOUT MY..."


Somewhat related, but I suggest having both the record's updated-at and some kind of "user edit updated-at". I've encountered issues where a data migration ends up touching records and bumping the updated-at, which shocks users: they see the UI reshuffle and think they've been hacked when records show an update at a time they didn't make one.
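Concretely, that can just be two timestamps on the record, one bumped by any write and one bumped only by user edits (a sketch; the names are illustrative):

  using System;

  public class Document
  {
      public int Id { get; set; }

      // Bumped by any write, including backfills and data migrations.
      public DateTime UpdatedAt { get; set; }

      // Bumped only when a user actually edits the record; this is what the UI sorts and displays.
      public DateTime? UserEditedAt { get; set; }
  }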

I mean, this is what audit logs are for, I'd say: generally speaking you want to know what was changed, by whom, and why.

So really you probably just want a reference to the tip of the audit log chain.
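In table terms, that could be an append-only audit table plus a pointer on the record to its latest entry (a sketch; all names are illustrative):

  using System;

  // Append-only audit entries; each links to the previous one, forming a chain per record.
  public class AuditEntry
  {
      public long Id { get; set; }
      public long? PreviousEntryId { get; set; }   // previous link in this record's chain
      public string ChangedBy { get; set; } = "";
      public string Reason { get; set; } = "";
      public string Diff { get; set; } = "";        // what actually changed
      public DateTime ChangedAt { get; set; }
  }

  // The record itself only carries a reference to the tip of its audit chain.
  public class TrackedRecord
  {
      public int Id { get; set; }
      public long? LastAuditEntryId { get; set; }
  }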



