
combat training agent? [0]

This is a direct violation of Google's AI Principles on autonomous weapons development: [1]

[0] Screenshot from SIMA Technical Report: https://ibb.co/qM7KBTK

[1] https://ai.google/responsibility/principles/




> Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.

I don't think an agent fighting in a video game really counts? There is quite a significant gap between an FPS and a missile launcher, and it would be a waste not to explore how these agents learn in FPS environments.


What counts then?

They intentionally included combat training in the dataset. It is in their Technical Report.

How can combat training not be interpreted as "principal purpose or implementation is to cause or directly facilitate injury to people"?

Do you believe the agent was trained to distinguish the game from reality, and to refuse to operate when not in a game environment? No safety mechanisms were mentioned in the technical report.

This agent could be deployed on a weaponized quadcopter, or on Figure 01 [0] / Tesla Optimus [1] / Boston Dynamics Atlas.

[0] https://twitter.com/Figure_robot/status/1767913661253984474?... [1] https://www.youtube.com/watch?v=cpraXaw7dyc


Dawg, we both know the moment there is any shareholder value to be found in the tech, the ToS changes real quick. Look at OpenAI.





