You are spreading dangerous misinformation about LLMs. They cannot reliably generalize outside of their training data; ergo, if they are able to give detailed enough information to bootstrap a bioweapon, that information is already publicly available.
Your second point boils down to "this makes fraud easier", which is true of all previous advances in communication technology. Let me ask: what is your opinion of EU Chat Control?
LLMs that are currently public can't. Safety teams are a way to determine whether an unreleased system can or cannot.
> this information is already publicly available.
In a form most people have neither time, nor money, nor the foundational skill necessary to learn.
> let me ask what is your opinion of EU Chat Control?
I could go on for pages about the pros and cons. The TL;DR summary is approximately "both the presence and the absence of perfect secrecy (including but not limited to cryptography) are existential threats to the social, political, and economic systems we currently have; the attacker always has the advantage over the defender[0], so we need to build a world where secrecy doesn't matter, where nobody can be blackmailed, where money can't be stolen. This is going to be extremely difficult to get right, especially as we have no useful reference cases to build on".
[0] extreme example: use an array of high-precision atomic clocks to measure the varying gravitational time dilation caused by the mass of your body moving around, to infer what you just typed on the keyboard.
Do you not see the massive contradiction in your view that "we should build a world where secrecy doesn't matter" and "we need to make sure that LLMs keep secrets?"
I don't think I could've been more explicit, as I just said that secrecy is simultaneously necessary for, and an existential threat to, the world we currently have.
That you’re phrasing it as “training an LLM to keep secrets” suggests a misapprehension: a downloadable LLM that knows secrets fundamentally cannot be expected to keep them. The fixable condition is much weaker: capabilities, i.e. whether an LLM has the capability to do dangerous things at all.
The problem for secrets in general (which is a separate issue from LLMs; I interpreted your question about the EU Chat Control debate as an attempted gotcha, not as a directly connected item) is that no matter what you do, we’re unstable: not having secrets breaks all crypto, which breaks all finance and approximately all of the internet, while having them creates a safe (cyber)space for conspiracies to develop without detection until it’s too late. And no room for conspiracies also means no room to overthrow dictatorships, so if you get one you’re stuck. But surveillance can always beat cryptography, so even having the benefits of crypto is an unstable state.
See also: Gordian Knot.
Find someone called 𐀀𐀩𐀏𐀭𐀅𐀨 to solve the paradox, I hear they’re great.
But LLMs only have the capability to statistically associate strings of words. That's all they are. There is no other capability possible there.
And you admit that they cannot be expected to keep secrets. So what is the point of having a "security" team hammer secret-keeping into them? It doesn't make sense.
I bring up Chat Control since I've noticed most "AI Safety" advocates are also vehemently opposed to government censorship of other communication technology, which is fundamentally incoherent.
> But LLMs only have the capability to statistically associate strings of words. That's all they are. There is no other capability possible there.
The first sentence is as reductive, and by extension the third as false, as saying that a computer can only do logical comparisons on 1s and 0s.
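To make the analogy concrete: composing one trivial logical primitive already yields arithmetic. A toy sketch (the helper names are illustrative, not from anything in this thread): every gate below is built from NAND alone, yet the composition adds integers.

```python
def nand(a: int, b: int) -> int:
    """The single primitive: NOT (a AND b) on bits."""
    return 1 - (a & b)

def xor(a: int, b: int) -> int:
    # Standard 4-gate XOR construction from NAND.
    n = nand(a, b)
    return nand(nand(a, n), nand(b, n))

def full_adder(a: int, b: int, carry: int):
    # Sum bit and carry-out, again from NAND only.
    s1 = xor(a, b)
    total = xor(s1, carry)
    carry_out = nand(nand(a, b), nand(s1, carry))  # (a AND b) OR (s1 AND carry)
    return total, carry_out

def add(x: int, y: int, bits: int = 8) -> int:
    """Ripple-carry addition built entirely from NAND gates."""
    carry, out = 0, 0
    for i in range(bits):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        out |= s << i
    return out

print(add(19, 23))  # 42
```

The primitive really is "only logical comparisons on 1s and 0s"; the capability lives in the composition.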
> So what is the point of trying to have a "security" team hammer secret keeping into them? It doesn't make sense.
Keep secret != Remove capability
If you take out all the knowledge of chemistry, it can't help you design chemicals.
If you let it keep the knowledge of chemistry but train it not to reveal it, the information can still be found and extracted by analysing the weights, finding the bit that functions as a "keep secret" switch, and turning it off.
This is a thing I know about because… AI safety researchers told me about it.
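For a sense of what "finding the switch and turning it off" can look like, here is a minimal toy sketch in the spirit of directional-ablation work from interpretability research. Everything here is illustrative (random stand-in activations and weights, not a real model): estimate the direction separating "refuse" from "comply" activations, then project it out of a weight matrix so the model can no longer write along it.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16  # hidden dimension of a toy layer

# Stand-ins for activations on prompts the model refuses vs. answers.
# In a real model these would come from forward-pass hooks.
refuse_acts = rng.normal(size=(100, d)) + 3.0 * np.eye(d)[0]
comply_acts = rng.normal(size=(100, d))

# The "keep secret switch": the mean direction separating the behaviours.
direction = refuse_acts.mean(axis=0) - comply_acts.mean(axis=0)
direction /= np.linalg.norm(direction)

# "Turning it off": project that direction out of the layer's output,
# so nothing written by this weight matrix has a component along it.
W = rng.normal(size=(d, d))                 # toy output-projection weight
P = np.eye(d) - np.outer(direction, direction)
W_ablated = W @ P

# Original weights write along the direction; ablated ones do not
# (zero up to floating-point error).
before = np.abs(refuse_acts @ W @ direction).mean()
after = np.abs(refuse_acts @ W_ablated @ direction).mean()
```

The point of the sketch is the asymmetry the comment describes: training added the behaviour, but anyone with the weights can undo it with a few lines of linear algebra.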