This is more analogous to a company having an internal "not doing crime" division. I do mention in my original post that having specialist skills within legal or compliance to handle specific legal and ethical issues may make sense. But having one team be the "AI police" while everyone else just tries to build AI without responsibility baked into their processes is likely to set up a constant tension, much like the tension companies often have with a "data privacy" team that fights an endless battle to get people to build privacy practices into their systems and workflows.
Police are needed for society when there's no other way to enforce rules. But inside a company, you can just fire people when they misbehave. That's why you don't need police inside your company. You only need police at the base-layer of society, where autonomous citizens interact with no other recourse between them.
Engineers are incentivized to increase profits for the company because impact is how they get promoted. They will often pursue this to the detriment of other people (see: prioritizing anger in algorithmic feeds).
Doing Bad Things with AI is an unbounded liability problem for a company, and it's not the sort of problem that Karen from HR can reason about. It is in the company's best interest to have people who 1) can reason about the effects of AI and 2) are empowered to make changes that limit the company's liability.
The problem is that a company would only fire cavalier AI researchers after the damage is done. Having an independent ethics department means a model wouldn't make its way to production without at least being vetted by someone else. It's not perfect, but it's a ton better than self-policing.