I feel like that's a good option ONLY if the code you are writing will never be deployed to an environment where security is a concern. Security bugs are notoriously difficult to spot, and they frequently slip through reviews even by humans who are actively looking for exactly those kinds of bugs.
I suppose we could ask the question: Are LLMs better at writing secure code than humans? I'll admit I don't know the answer to that, but given what we know so far, I seriously doubt it.