I think there are elements we agree on (liability), but I don't think this is about any real safety concern. It's more "we're not sure we won't break privacy and data-collection/advertising law with our AI... so we're going to pretend we're trying," and this just looks like Meta stopping the pretending. Naturally that's just my opinion, and it's open to change.
That's more my point, but yes, I can see that maybe I came off as too disagreeable.
edit: In other words, my contention with Meta's statements and your analysis is that I don't really think "safety" is Meta's concern. The knife analogy isn't even necessary (the models are already neutered in this regard, as I see it). Instead, I think they likely know the models will violate many regulations and privacy laws, and they're trying to seed the idea that they built their AI implementation responsibly, so any violation can be written off as just a "hallucination".
It would be great if a reporter truly took Meta to task on what they mean by "safety" and what specifically they are trying to protect people from; I have little hope this will happen.
>but I don't think this is about any real safety concern. It's more "we're not sure we won't break privacy and data-collection/advertising law with our AI... so we're going to pretend we're trying," and this just looks like Meta stopping the pretending. Naturally that's just my opinion, and it's open to change.
Read more carefully. I literally said Meta does not give a shit. We are in agreement on this.
The difference between us is that I don't give a shit either. I agree with Meta's hidden stance on this.
Yes, but what I'm saying is that the knife analogy weakens this position, imo. If it's bullshit (which we agree it is), I personally find such analogies serve to support the bullshit narrative instead of calling BS on it. That's all. I do agree that you and I agree, ultimately.