I love this story, so happy for your success. It reads great, and makes me feel great (oddly - maybe it gives me a sense of hope I can do the same thing one day).
Dealing with enterprise is going to be fun; we work with a lot of car companies around the world, and a good chunk of them love to whitelist by thumbprint. That is going to be fun for them.
Also, Never Split The Difference is basically a negotiation-oriented, machismo-tinted version of Nonviolent Communication by Dr. Marshall Rosenberg. I highly recommend both books.
I bounced off that book pretty quickly, so maybe I need to give it another attempt. It started off so generic, all "Here are my expert qualifications and anecdotes".
Yeah I rolled my eyes at that, but publishers basically require the front 10% of non-fiction be like that. It's for the people walking through book stores and picking up books to browse.
I guess I'm not sure what you mean by "doesn't work." I've gone through publishers and self-published. They come with different tradeoffs including the complaints about how publishers want a book to flow as described upthread.
Nobody is immune to mistakes, but a certain class of mistakes¹ should never ever happen to anyone who should know better. And that, in my book, is anybody whose code is used by more people than themselves. I am not saying devs aren't allowed to make stupid mistakes, but if we let civil engineers have their bridges collapse with a "shit happens" attitude, trust in civil engineering would be questionable at best. So yeah, shit happens to us devs, but we should be shamed if it was preventable by simply knowing the basics.
So my opinion is anybody who writes code that is used by others should feel a certain danger-tingle whenever a secret or real user data is put literally anywhere.
To all beginners, that just means that when handling secrets, instead of pressing on, you should pause and make an exhaustive list of who would have read/write access to the secret under which conditions, and whether that is intended. And with things that are world-readable, like a public repo, this is especially crucial.
Another one may or may not be your shell's history, the contents of your environment variables, or whatever you copy-paste into the browser search bar/application/LLM/chat/comment section of your choice, etc.
If you absolutely have to store secrets/private user data in files within a repo it is a good idea to add the following to your .gitignore:
*.private
*.private.*
And then every such file has to have ".private." within the filename (e.g. credentials.private.json). This not only marks it for yourself, it also prevents you from mixing up critical and mundane configuration.
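As a belt-and-braces check, here's a minimal sketch of a helper script (hypothetical, not something from this thread) that uses git's own check-ignore to confirm every *.private* file really is ignored before you commit:

    # Hypothetical pre-commit helper: confirm every *.private* file is gitignored.
    import pathlib
    import subprocess
    import sys

    def main() -> int:
        leaked = []
        for path in pathlib.Path(".").rglob("*.private*"):
            # git check-ignore exits 0 when the path is ignored, non-zero otherwise.
            result = subprocess.run(["git", "check-ignore", "--quiet", str(path)])
            if result.returncode != 0:
                leaked.append(path)
        for path in leaked:
            print(f"NOT ignored, would be committed: {path}", file=sys.stderr)
        return 1 if leaked else 0

    if __name__ == "__main__":
        sys.exit(main())

Wire something like that into a pre-commit hook and the convention is enforced rather than just remembered.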
But better is to spend a day to think about where secrets/user data really should be stored and how to manage them properly.
¹: a non-exhaustive list of other such mistakes: mistaking XOR for encryption, storing passwords in plaintext, using hardcoded credentials, relying on obscurity for security, sending data unencrypted over HTTP, not hashing passwords, using weak hash functions like MD5 or SHA-1, no input validation for stuff that goes into your database, trusting user input blindly, buffer overflows due to unchecked input, lack of access control, no user authentication, using default admin credentials, running all code as administrator/root without dropping privileges, relying on client-side validation for security, using self-rolled cryptographic algorithms, mixing authentication and authorization logic, no session expiration or timeout, predictable session IDs, no patch management or updates, wide-open network shares, exposing internal services to the internet, trusting data from cookies or query strings without verification, etc
> no input validation for stuff that goes into your database
I'd put "conflating input validation with escaping" on this list, and then the list fails the list because the list conflates input validation with escaping.
Good point, as I mentioned, this is a non-exhaustive list. Input validation and related topics like encodings, escaping, etc could fill a list single-handedly.
We use BQ and Metabase heavily at work. Our BQ analytics pipeline is several hundred TBs. In the beginning we had data (engineer|analyst|person)s running amok and running up a BQ bill of around 4,000 per month.
By far the biggest things were:
- partition key was optional -> fix: required
- bypass the BQ caching layer -> fix: make queries use deterministic inputs [2]
It took a few weeks to go through each query using the metadata tables [1], but it was worth it. In the end our BQ analysis pricing was down to something like 10 per day.
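For anyone curious what those two fixes look like in practice, here's a rough sketch with the google-cloud-bigquery Python client. The project/dataset/table and column names are made up, and the exact behaviour of BQ's result cache may differ for your setup:

    # Rough sketch of both fixes with the google-cloud-bigquery client.
    # Project, dataset, table and column names are invented for illustration;
    # the table is assumed to already be date-partitioned on event_date.
    import datetime

    from google.cloud import bigquery

    client = bigquery.Client()

    # Fix 1: make the partition filter required so unfiltered full scans are rejected.
    table = client.get_table("my_project.analytics.events")
    table.require_partition_filter = True
    client.update_table(table, ["require_partition_filter"])

    # Fix 2: keep query inputs deterministic so repeated runs can be served
    # from the result cache. A pinned parameter value, unlike CURRENT_DATE()
    # in the SQL text, is identical between runs of the same report.
    job_config = bigquery.QueryJobConfig(
        query_parameters=[
            bigquery.ScalarQueryParameter("run_date", "DATE", datetime.date.today()),
        ]
    )
    query = """
        SELECT COUNT(*) AS events
        FROM `my_project.analytics.events`
        WHERE event_date = @run_date
    """
    rows = client.query(query, job_config=job_config).result()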
I'm amazed that this has reached the almost-top of HN.
It's a very confused article that (IMO) is AI Slop.
1. Naming conventions is a weird one to start with, but okay. For the most part you don't need to worry about this with PKs, FKs, indices, etc. PG will automatically generate those names for you with a consistent syntax.
2. Performance optimization: yes, indices are great. But don't name them manually; let PG name them for you. Also, where possible _always_ create the index concurrently. This does not lock the table. Important if you have any kind of scale.
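For example, a minimal sketch with psycopg2 (table, column and connection details are invented). The catch is that CONCURRENTLY can't run inside a transaction block, so the connection has to be in autocommit mode:

    # Minimal sketch: building an index without locking the table, via psycopg2.
    # Table, column and DSN are invented; adjust for your own schema.
    import psycopg2

    conn = psycopg2.connect("dbname=app user=app")
    # CREATE INDEX CONCURRENTLY cannot run inside a transaction block,
    # so the connection must be in autocommit mode.
    conn.autocommit = True
    with conn.cursor() as cur:
        # No explicit index name: PG picks one (e.g. orders_customer_id_idx).
        cur.execute("CREATE INDEX CONCURRENTLY ON orders (customer_id)")
    conn.close()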
3. Security is a bit of a weird jump, as you've gone from an app-tier concern to PG management concerns. But I would say RLS isn't worth it, and the best security is going to be tightly controlling what can read and write. Point your reads at a read-only replica.
If you have multiple unique constraints in the same table then it will spit back the same error code. I want to know the name of the specific constraint that was violated.
Always name your indexes. Always name your constraints.
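A rough illustration of why, using psycopg2. The table, column and constraint names (uq_users_email, uq_users_username) are hypothetical names you would have chosen yourself when creating the constraints:

    # Rough sketch of telling two unique constraints apart with psycopg2.
    # Table, column and constraint names are hypothetical.
    import psycopg2
    from psycopg2 import errors

    conn = psycopg2.connect("dbname=app user=app")
    try:
        with conn, conn.cursor() as cur:
            cur.execute(
                "INSERT INTO users (email, username) VALUES (%s, %s)",
                ("a@example.com", "alice"),
            )
    except errors.UniqueViolation as exc:
        # Every unique violation comes back as SQLSTATE 23505; the diagnostics
        # carry the name of the specific constraint that fired.
        if exc.diag.constraint_name == "uq_users_email":
            print("email already registered")
        elif exc.diag.constraint_name == "uq_users_username":
            print("username taken")
        else:
            raise
    finally:
        conn.close()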
I mean if you have that many violations that points to other problems, but sure if you are in that situation I can see how catching by name might be helpful. It's not a panacea.
That exactly. Fintech forced Financial Institutions into Digital Transformation. Now that they have caught up, there is no "next big thing" for Fintech. Crypto might have been it, but it killed itself with terrible UI and a never-ending stream of scams and frauds. I believe there is an AI Agent Internet of Money to end Ad Revenue, but I haven't found the arbitrage model yet.
Halifax is owned by Lloyds banking group, and their current app is just the exact same Lloyds app with a different logo. I know because I bank with both and the apps are identical.
You are correct that, prior to the merging of online services, Halifax had its own app, and it was terrible.
But at that time Lloyds had a great app; they just hadn't unified the back-end tech of all the different bank brands they own.
It wasn't disruption from startups that caused the improvement, it was the parent company taking its time to merge the decent tech it had developed for itself.
Yeah I feel like Monzo and Starling really forced the high street banks to level up their game. A friend of mine was a PM on an app at one of the big high street banks and they said the instruction from the top was explicitly to be like Monzo, and they did iterate, get better at app development, and ship a bunch of features that people like (spending notifications, in app card freezing, etc).
Bear in mind that's a measure of how backwards US banking is, not how advanced.
In the UK, I can't remember the last time I wrote or received a cheque. Maybe twice in the 17 years I've been living here, and certainly not in the last decade.
So with UK cheque usage being a tiny fraction of the US rate, there's simply no demand for it in banking apps.
I get a couple of cheques a year from family in the UK. It's an infrequent transaction but an important one, and cheque scanning is actually the only reason I maintain my legacy bank account.
Congrats!