> Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.
I don't think an agent fighting in a video game really counts. There is quite a significant gap between an FPS and a missile launcher, and it would be a waste not to explore how these agents learn in FPS environments.
They intentionally included combat training in the dataset. It's in their Technical Report [0].
How can combat training not be interpreted as "principal purpose or implementation is to cause or directly facilitate injury to people"?
Do you believe the agent was trained to distinguish the game from reality, and to refuse to operate outside a game environment? No safety mechanisms were mentioned in the Technical Report.
This agent could be deployed on a weaponized quad-copter, or on a humanoid platform like Figure 01, Tesla Optimus, or Boston Dynamics' Atlas.
This is a direct violation of Google's AI principles on autonomous weapons development [1].
[0] Screenshot from SIMA Technical Report: https://ibb.co/qM7KBTK
[1] https://ai.google/responsibility/principles/