Hacker News

I'm astounded that anyone even considered permitting LLM-generated code in a browser engine, let alone a technical committee. Why would you risk using a system that's known to generate logic errors and bugs in a system where security and correctness are the highest priorities?



Frankly, I just assumed AI-generated code would be treated with the same suspicion as human code and reviewed.


Reviewing code is harder than writing it.

When _writing code_, you achieve a certain level of understanding because you fundamentally need to make all the decisions yourself (and some of them are informed by facts you have to look up).

When _reading code_, a lot of those decision points are easy to miss or take for granted, which means you don't even notice there were alternatives. Furthermore, you don't look up the underlying facts yourself, so your understanding is shallower, and you have no real opportunity to check whether those facts are actually true, so decisions made on false assumptions get into the codebase.
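
As a rough illustration of that last point, here's a hypothetical snippet (not from any real codebase) where the author's hidden assumption reads as perfectly plausible in review:

    #include <cstddef>
    #include <vector>

    // Returns the element "closest" to the requested index.
    // Hidden decision: the author assumed `values` is never empty.
    int closest(const std::vector<int>& values, std::size_t index) {
        std::size_t last = values.size() - 1;  // underflows to SIZE_MAX when empty
        if (index > last) index = last;        // so the clamp silently never fires
        return values[index];                  // out-of-bounds read on an empty vector
    }

    // The same function with the decision made explicit instead of assumed:
    int closest_checked(const std::vector<int>& values, std::size_t index) {
        if (values.empty()) return 0;          // explicit choice of a fallback value
        std::size_t last = values.size() - 1;
        if (index > last) index = last;
        return values[index];
    }

The writer had to decide what an empty input means; the reviewer just sees a tidy clamp and moves on.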

Finally, reviewing code (to the same level of depth) is significantly more mentally taxing.


To be honest, the whole premise that AI code needs to be banned sounds like a bit of a histrionic caricature to me, so I might not be in the right mindset to accept this either, but this does feel a bit histrionic too. Like, maybe it holds in a vacuum: in a codebase, language, and functionality I'm not familiar with, or if I'm too inexperienced to be diligent, or if I don't bother with tests. Maybe I'm just old, and those things seem like necessary preconditions even though I'd merrily have ignored them 12 years ago, and working on my own now predisposes me to being happy with the work.



