It is worth calling out the motivations of most entrepreneurs here. But I think the analogy you used is very uncharitable: drilling and burning fossil fuels necessarily harms the environment, whereas the track record of big companies handling alignment/safety in-house, rather than open source with the whole research community working on it, is still very much up in the air. Sydney (the Bing assistant) was easy to prompt-inject and ask for bad things, and the research people have been able to do on forcing llama's output to conform to certain rules will likely prove invaluable in the future.
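For concreteness, here's a minimal sketch of what that line of research can look like in practice, assuming constrained decoding via logit masking with the Hugging Face transformers API. The checkpoint name and banned-word list are placeholders for illustration, not taken from any particular paper:

```python
# Minimal sketch: force generation to "conform to a rule" by masking
# disallowed tokens at sampling time. Checkpoint and banned words are
# placeholders, for illustration only.
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          LogitsProcessor, LogitsProcessorList)

class BanTokensProcessor(LogitsProcessor):
    """Set the logits of banned token ids to -inf so the sampler can
    never emit them, no matter what the prompt (or injection) says."""
    def __init__(self, banned_token_ids):
        self.banned_token_ids = list(set(banned_token_ids))

    def __call__(self, input_ids, scores):
        scores[:, self.banned_token_ids] = float("-inf")
        return scores

name = "meta-llama/Llama-2-7b-hf"  # placeholder checkpoint
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

# Hypothetical rule: never emit these strings. (Naive: banning their
# subword ids also blocks those subwords inside innocent words.)
banned_ids = [tid for word in ["badword", "another"]
              for tid in tokenizer(word, add_special_tokens=False).input_ids]

inputs = tokenizer("Tell me about", return_tensors="pt")
out = model.generate(
    **inputs,
    max_new_tokens=50,
    logits_processor=LogitsProcessorList([BanTokensProcessor(banned_ids)]),
)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```

The catch, of course, is that this guardrail lives in the sampling loop, not in the weights: anyone holding the open model can simply drop the processor or fine-tune the behavior back in.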
>the track record of big companies handling alignment/safety in-house, rather than open source with the whole research community working on it, is still very much up in the air. Sydney (the Bing assistant) was easy to prompt-inject and ask for bad things
Yep, Microsoft did a terrible job, and they should've been punished.
I'm not claiming that Big AI rocks at safety. I'm claiming that Big AI is also a big target for regulators and public ire, so there's at least a chance they'll get their act together in response to external pressure. But if cutting-edge models are open-sourced indefinitely, they'll effectively be impossible to control.
>the research people have been able to do on forcing llama's output to conform to certain rules will likely prove invaluable in the future
You may be correct that releasing llama was beneficial from a safety standpoint. But the "conform to certain rules" strategy can basically only work if (a) there's a way to enforce rules that can't be fine-tuned away by anyone who holds the weights, or (b) we stop releasing models at some point.