Hacker News — 1a527dd5's comments

I love this story, so happy for your success. It reads great, and makes me feel great (oddly - maybe it gives me a sense of hope I can do the same thing one day).

Congrats!


Thank you! I think you could do it, just ship something today!


Dealing with enterprise is going to be fun, we work with a lot of car companies around the world. A good chunk of them love to whitelist by thumbprint. That is going to be fun for them.


The best advice I found was in 'Never Split the Difference'. The rest is me being honest and direct.


Yep, I've read a lot of negotiation books and that's the one that explains all the other ones.

Everything else (including this article) is a specific case of the models described in that book.


Also, Never Split The Difference is basically a negotiation-oriented, machismo-tinted version of Nonviolent Communication by Dr. Marshall Rosenberg. I highly recommend both books.


Totally agree! Other similar books using similar models:

Connect, by David Bradford and Carole Robin (on business and personal relationships)

How to Talk so Kids will Listen and Listen so Kids will Talk (on parenting)

Funny how, at the end of the day, all interpersonal relationships work the same way, isn't it? :-)


I bounced off that book pretty quickly, so maybe I need to give it another attempt. It started off so generic: "Here are my expert qualifications and anecdotes".


Yeah I rolled my eyes at that, but publishers basically require the front 10% of non-fiction be like that. It's for the people walking through book stores and picking up books to browse.


That's why we need disruption in the book publishing industry


You have it. You can self-publish.

Publishers are not actually idiots. By and large, they know what sells.


Self publishing doesn't work because there is no filter for quality or editing.


I guess I'm not sure what you mean by "doesn't work." I've gone through publishers and self-published. They come with different tradeoffs including the complaints about how publishers want a book to flow as described upthread.

Reviews are at least something of a filter.


Trying my best not to break the no snark rule [1], but I'm sure your code is 100% bullet proof against all current and future-yet-invented-attacks.

[1] _and failing_.


Nobody is immune to mistakes, but a certain class of mistakes¹ should never ever happen to anyone who should know better. And that, in my book, is anybody whose code is used by more people than themselves. I am not saying devs aren't allowed to make stupid mistakes, but if we let civil engineers have their bridges collapse with a "shit happens" attitude, trust in civil engineering would be questionable at best. So yeah, shit happens to us devs, but we should be shamed if it was preventable by simply knowing the basics.

So my opinion is anybody who writes code that is used by others should feel a certain danger-tingle whenever a secret or real user data is put literally anywhere.

To all beginners that just means that when handling secrets, instead of pressing on, you should pause and make an exhaustive list of who would have read/write access to the secret under which conditions and whether that is intended. And with things that are world-readable like a public repo, this is especially crucial.

Another one may or may not be your shell's history, the contents of your environment variables, whatever you copy-paste into the browser search bar/application/LLM/chat/comment section of your choice, etc.

If you absolutely have to store secrets/private user data in files within a repo it is a good idea to add the following to your .gitignore:

  *.private
  *.private.*
 
And then every such file has to have ".private" within the filename (e.g. credentials.private.json). This not only marks it for yourself, it also prevents you from mixing up critical with mundane configuration.

But it is better to spend a day thinking about where secrets/user data really should be stored and how to manage them properly.
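As a rough sanity check of that naming convention, here is a small Python sketch. It approximates the two .gitignore patterns with fnmatch (real gitignore matching has more rules), and the filenames are invented for illustration:

```python
# Approximate the two .gitignore patterns above with fnmatch.
# Note: this is a simplification; gitignore has extra semantics
# (directory patterns, negation, etc.) that fnmatch does not.
from fnmatch import fnmatch

PATTERNS = ["*.private", "*.private.*"]

def is_covered(name: str) -> bool:
    """True if the filename would be ignored by the patterns."""
    return any(fnmatch(name, p) for p in PATTERNS)

assert is_covered("credentials.private.json")   # marked secret
assert is_covered("api_key.private")            # marked secret
assert not is_covered("config.json")            # mundane config stays tracked
```

The point of the convention is exactly this testability: a single substring in the filename tells both you and the ignore rules which files must never leave your machine.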

¹: a non-exhaustive list of other such mistakes: mistaking XOR for encryption, storing passwords in plaintext, using hardcoded credentials, relying on obscurity for security, sending data unencrypted over HTTP, not hashing passwords, using weak hash functions like MD5 or SHA-1, no input validation for stuff that goes into your database, trusting user input blindly, buffer overflows due to unchecked input, lack of access control, no user authentication, using default admin credentials, running all code as administrator/root without dropping privileges, relying on client-side validation for security, using self-rolled cryptographic algorithms, mixing authentication and authorization logic, no session expiration or timeout, predictable session IDs, no patch management or updates, wide-open network shares, exposing internal services to the internet, trusting data from cookies or query strings without verification, etc.


> no input validation for stuff that goes into your database

I'd put "conflating input validation with escaping" on this list, and then the list fails the list because the list conflates input validation with escaping.


Good point, as I mentioned, this is a non-exhaustive list. Input validation and related topics like encodings, escaping, etc could fill a list single-handedly.


I just wish the default wasn't bash. GHA with pwsh is a much better experience.
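For anyone curious, switching the default is a one-liner in the workflow file. A minimal sketch (workflow, job, and step contents are invented for illustration; `defaults.run.shell` is the relevant setting):

```yaml
# Make pwsh the default shell for every run step in this workflow,
# instead of setting shell: pwsh on each step individually.
name: ci
on: push
defaults:
  run:
    shell: pwsh
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - run: Get-ChildItem Env: | Sort-Object Name
```

The same `defaults` block can also be set per job rather than per workflow.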


https://www.vantage.sh/blog/amazon-s3-tables#s3-tables-cost

I can't make head or tail of the beginning of this sentence:

> Pricing for S3 Tables is all and all not bad.

Otherwise lovely article!


"all and all" is a typo for "all in all" which means "overall", or "taking everything into consideration"

So they are saying the pricing is not bad considering everything it does


There is something here that doesn't sit right.

We use BQ and Metabase heavily at work. Our BQ analytics pipeline is several hundred TBs. In the beginning we had data (engineer|analyst|person)s run amok and run up a BQ bill of around 4,000 per month.

By far the biggest issues were:

- partition key was optional -> fix: required

- bypass the BQ caching layer -> fix: make queries use deterministic inputs [2]

It took a few weeks to go through each query using the metadata tables [1], but it was worth it. In the end our BQ analysis pricing was down to something like 10 per day.

[1] https://cloud.google.com/bigquery/docs/information-schema-jo...

[2] https://cloud.google.com/bigquery/docs/cached-results#cache-...
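The two fixes above can be sketched in BigQuery SQL. Dataset, table, and column names here are invented for illustration; `require_partition_filter` is the table option that makes the partition key mandatory, and the result cache only applies to queries with deterministic inputs:

```sql
-- Fix 1: make the partition filter required, so unfiltered
-- full-table scans fail instead of silently costing money.
ALTER TABLE analytics.events
  SET OPTIONS (require_partition_filter = TRUE);

-- Fix 2: use fixed date literals instead of non-deterministic
-- functions like CURRENT_TIMESTAMP(), which bypass BQ's result
-- cache and force a fresh (billed) scan every run.
SELECT COUNT(*) AS event_count
FROM analytics.events
WHERE event_date BETWEEN '2024-01-01' AND '2024-01-31';
```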


I'm amazed that this has reached the almost-top of HN.

It's a very confused article that (IMO) is AI Slop.

1. Naming conventions is a weird one to start with, but okay. For the most part you don't need to worry about this for PKs, FKs, indices etc.; PG will automatically generate those for you with the correct naming.

2. Performance optimization: yes, indices are great. But don't name them manually; let PG name them for you. Also, where possible _always_ create the index concurrently. This does not lock the table, which is important if you have any kind of scale.

3. Security is a bit of a weird jump, as you've gone from an app-tier concern to PG management concerns. But I would say RLS isn't worth it, and the best security is to tightly control what can read and write. Point your reads at a read-only replica.

4. pg_dump was never meant to be a backup method; see https://www.postgresql.org/message-id/flat/70b48475-7706-426...

5. You don't need to schedule VACUUM or ANALYZE. PG has the AUTO version of both.

6. Jesus, never use a DEFAULT value on a large table in a way that causes a table rewrite; that will cause you downtime. And don't use varchar(n): https://wiki.postgresql.org/wiki/Don%27t_Do_This#Don.27t_use...

This is an awful article.
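Point 2 above can be sketched like this (table and column names are invented for illustration):

```sql
-- Build the index without taking a lock that blocks writes.
-- Note: CONCURRENTLY cannot run inside a transaction block,
-- so run it on its own, outside BEGIN/COMMIT. With no name
-- given, PG generates one (here: orders_created_at_idx).
CREATE INDEX CONCURRENTLY ON orders (created_at);
```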


> don't name them manually let PG name them for you

If you care about catching stuff like unique constraint violations in your code then you should know what the indexes are named.

Always name your indexes and constraints. They're part of your code. How would you catch an exception that you don't know the name of?


You don't need to know the name; I prefer to go by the error code PG spits back at you. In this case 23505.

https://www.postgresql.org/docs/current/errcodes-appendix.ht...

That is much more flexible than hard coding the name.


If you have multiple unique constraints in the same table then it will spit back the same error code. I want to know the name of the specific constraint that was violated.

Always name your indexes. Always name your constraints.
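For illustration (table and constraint names are invented), explicit names are what make the two cases distinguishable:

```sql
CREATE TABLE accounts (
    id       bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    email    text NOT NULL,
    username text NOT NULL,
    CONSTRAINT accounts_email_uniq    UNIQUE (email),
    CONSTRAINT accounts_username_uniq UNIQUE (username)
);

-- Violating either constraint raises the same SQLSTATE (23505),
-- but the error's constraint_name field tells you which rule was
-- broken: accounts_email_uniq vs accounts_username_uniq.
```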


I mean if you have that many violations that points to other problems, but sure if you are in that situation I can see how catching by name might be helpful. It's not a panacea.


It's not that hard to just name the things you care about instead of relying on auto-generated stuff from the database. PG lets you do this easily.

> if you have that many violations that points to other problems

No, it doesn't. This is a sophisticated design choice. You might be out of your depth here.


Okey dokey mate, you do you. Have a great one.


This is a great book that explains PG in a step-by-step way. I learned a ton from it

https://theartofpostgresql.com/


I'm aware of it thanks.


Also useful for migrations, especially if you support multiple DB backends.


> 4. pg_dump was never meant to be a backup method; see https://www.postgresql.org/message-id/flat/70b48475-7706-426...

Those are very recent emails that don't say anything about what it was originally meant for, just that nowadays there are better options.

Though really, the part they quote calls it a backup tool, which does imply it was originally meant to be a backup method.


> I'm amazed that this has reached the almost-top of HN.

The article is objectively bad; the subject is clearly something a lot of people think about, myself included.


They might not be disrupting them, but they are definitely causing competition in the market place again.

My main bank account is with Halifax, everyday spend is with Starling. Then Monzo for anything risky.

Before Starling/Monzo the Halifax app was _crap_. Barely got any updates and was very basic.

Now? The Halifax app is on par with the newer banks', and sometimes even releases new features first (e.g. scanning a cheque to deposit it).


That exactly. Fintech forced Financial Institutions into Digital Transformation. Now they have caught up, there is no "next big thing" for Fintech. Crypto might have been it, but it killed itself by terrible UI and a never ending stream of scams and frauds. I believe there is an AI Agent Internet of Money to end Ad Revenue, but haven't found the arbitrage model yet.


Halifax is owned by Lloyds banking group, and their current app is just the exact same Lloyds app with a different logo. I know because I bank with both and the apps are identical.

Prior to the merging of online services, you are correct that Halifax had its own app and it was terrible. But at that time Lloyds had a great app; they just hadn't unified the back-end tech of all the different bank brands they own.

It wasn't disruption from startups that caused the improvement; it was the parent company taking its time to merge the decent tech it had developed for itself.


Yeah I feel like Monzo and Starling really forced the high street banks to level up their game. A friend of mine was a PM on an app at one of the big high street banks and they said the instruction from the top was explicitly to be like Monzo, and they did iterate, get better at app development, and ship a bunch of features that people like (spending notifications, in app card freezing, etc).


Interesting... We've had scanned check deposits at Chase (US) for at least 15 years, I think.


Bear in mind that's a measure of how backwards US banking is, not how advanced.

In the UK, I can't remember the last time I wrote or received a cheque. Maybe twice in the 17 years I've been living here, and certainly not in the last decade.

So with UK cheque usage being a tiny fraction of the US rate, there's simply no demand for it in banking apps.


Cheque use in the UK is now around two per year per person. (This includes business-to-business cheques.)

The over-65 age group is most likely to use them, and least likely to use an app, so you can see why it wasn't a big priority for most banks.

It's been at least 15 years since the banks stopped giving account holders chequebooks by default. If you want one you have to ask.


I get a couple of cheques a year from family in the UK. It's an infrequent transaction but an important one, and cheque scanning is actually the only reason I maintain my legacy bank account.


most countries abandoned cheques at least 15 years ago...


ppl downvoting facts now

