All of the information AI regurgitates is either already available online as part of its corpus (and therefore the AI plays no particular role in providing access to that information), or completely made up (which is likely to kill more terrorists than anyone else!).
Reiterating other comments, terrorists can't make bioweapons because they lack the facilities and prerequisites, not because they're incompetent.
The "all the info is already online" argument is also an argument against LLMs in general. If you really believe that argument, you shouldn't care one way or another about LLM release. After all, the LLM doesn't tell you anything that's not on Google.
Either the LLM is useful, in which case it could be useful to a terrorist, or it's useless, in which case you won't mind if access is restricted.
Note: I'm not saying it will definitely be useful to a terrorist. I'm saying that companies have an obligation to show in advance that their open-source LLM can't help a terrorist before releasing it.
If LLMs are set to revolutionize industry after industry, why not the terrorism industry? Someone should be thinking about this beyond just "I don't see how LLMs would help a terrorist after 60 seconds of thought". Perhaps the overall cost/benefit is such that LLMs should still be open-source, similar to how we don't restrict cars -- my point is that it should be an informed decision.
And we should also recognize that it's really hard to have this discussion in public. The best way to argue that LLMs could be used by terrorists is for me to give details of particular schemes for doing terrorism with LLMs, and I don't care to publish such schemes.
[BTW, my basic mental model here is that terrorists are often not all that educated and we are terrifically lucky for that. I'm in favor of breakthrough tutoring technology in general, just not for bioweapons-adjacent knowledge. And I think bioweapons have much stronger potential for an outlier terrorist attack compared with cars.]
Top AI researchers like Geoffrey Hinton say that large language models likely have an internal world model and aren't just stochastic parrots, which means they can do more than just repeat strings from the training distribution.
Facilities are a major hurdle for nuclear weapons. For bioweapons they are much less of a problem. The main constraint is competency.
I think you might want to take a look at some of the history here, and particularly the cyclical nature of the AI field for the past 50–60 years. It’s helpful to put what everyone’s saying in context.