> Whoever ultimately owns the AI (or the Bazooka) will always dictate how and where the particular tool is used.
Your take confuses me, because in this case the owner is Meta. So yes, they have to think about what tools they make ("should we design a bazooka?") and how they'll use what they made ("what's the target, and when do we pull the trigger?").
They disbanded the team that was tasked with thinking about both.
From the article:
> RAI was created to identify problems with its AI training approaches, including whether the company’s models are trained with adequately diverse information, with an eye toward preventing things like moderation issues on its platforms. Automated systems on Meta’s social platforms have led to problems like a Facebook translation issue that caused a false arrest
Yes. I think that's par for the course; most decisions and team management will be aimed either at producing revenue or at reducing cost.
HR isn't there for employee happiness either. Strictly speaking, they'll do what's needed to attract employees, reduce retention costs through non-monetary measures, and potentially shield the company from lawsuits and other damages.