It might help to consider that this comes from a company founded by people who thought that OpenAI was not taking safety seriously.
A radiation therapy machine that can randomly give people radiation doses orders of magnitude greater than what their doctors prescribed is dangerous. An LLM saying something its authors did not like is not. The former actually did happen:
https://hackaday.com/2015/10/26/killed-by-a-machine-the-ther...
Putting a text generator that outputs something someone does not like on the same level as an actual danger to human life is inappropriate, but I do not expect Anthropic’s employees to agree.
Of course, contrarians will say that an LLM could be dangerous if incorporated into something else, but that is a concern for the creator of the larger work. Otherwise, the creators of everything, no matter how inane, would need to worry that their work might be used in something dangerous. That includes the authors of libc, and at that point responsibility is so detached from any actual combined work that it is clear that worrying about what downstream authors do is absurd.
That said, I sometimes wonder whether the claims of safety risks around LLMs are part of a genius marketing campaign meant to hype them, much like how the stickers on SUVs warning about their rollover risk turned out to be a major selling point.