"What if ChatGPT told someone how to build a bomb?"
That information has been out there forever. Anyone can Google it. It's trivial. AI not required.
"What if ChatGPT told someone how to build a nuke?"
That information is only known to a handful of people in a handful of countries and is closely guarded. It's not in the text ChatGPT was trained on. An LLM is not going to just figure it out from publicly available info.
>The real risk is AI being applied in decision making where it affects humans
100% this. The real risk is people being denied mortgages and jobs, being falsely identified as criminal suspects, or in some other way having their lives turned upside down by an algorithmic decision, with no recourse to have an actual human review the case and overturn it. Yet all the focus is on AI telling people how to develop bioweapons, or possibly saying something offensive.
The information necessary to build a nuclear weapon has been largely available in open sources since the 1960s. It's really not a big secret. The Nth Country Experiment in 1964 showed that a few inexperienced physicists could come up with a working weapons design. The hard part is doing uranium enrichment at scale without getting caught.
"What if ChatGPT told someone how to build a bomb?"
That information has been out there forever. Anyone can Google it. It's trivial. AI not required.
"What if ChatGPT told someone how to build a nuke?"
That information is only known to a handful of people in a handful of countries and is closely guarded. It's not in the text ChatGPT was trained on. An LLM is not going to just figure it out from publicly available info.
>The real risk is AI being applied in decision making where it affects humans
100% this. The real risk is people being denied mortgages and jobs or being falsely identified as a criminal suspect or in some other way having their lives turned upside down by some algorithmic decision with no recourse to have an actual human review the case and overturn that decision. Yet all this focus on AI telling people how to develop bioweapons. Or possibly saying something offensive.