Yes*. At least for the purpose of understanding what implementations of "AI safety" are most likely to entail, I think that's a very good cognitive model, and one that will lead to high-fidelity predictions.
*But to be slightly more charitable, I genuinely think Stability AI / OpenAI / Meta / Google / Midjourney believe there is significant overlap between the protections that keep the company safe, the ones that keep users safe, and the ones that keep society safe in a broad sense. I just don't think any released/deployed AI product focuses on the latter two; they all focus on the first.
Examples include:
Society + Company: Depictions of Muhammad could result in small but historically significant moments of civil strife.
Individual + Company: Accidentally generating NSFW content at work could harm a user. Sometimes a prompt won't look like it would generate NSFW content but is adjacent enough that it might: e.g. "I need some art in the style of a 2000s R&B album cover" (see: Sade - Love Deluxe, Monica - The Makings of Me, Rihanna - Unapologetic, Janet Jackson - Damita Jo). A minimal sketch of this kind of pre-generation check follows the list.
Society + Company: Preventing the product from being used for fraud, e.g. CAPTCHA solving, forged documents, etc.
Individual + Company: Preventing the generation of child sexual abuse material (CSAM). In the USA, generating it would likely be illegal for both the user and the company.
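To make the company-safety framing concrete, here's a minimal sketch of the kind of pre-generation gate these products run. It is not any vendor's actual pipeline: it assumes OpenAI's public moderation endpoint as the classifier, and `generate_image` is a hypothetical placeholder for the downstream generator.

```python
# Minimal sketch of a provider-side "safety" gate: classify the prompt
# before generation and refuse anything the classifier flags. Not any
# vendor's real pipeline; generate_image() is a hypothetical stand-in.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def generate_image(prompt: str) -> str:
    """Hypothetical placeholder for the actual image generator."""
    return f"<image for {prompt!r}>"


def safe_generate(prompt: str) -> str:
    """Run the prompt through moderation; refuse if flagged."""
    result = client.moderations.create(input=prompt).results[0]
    if result.flagged:
        # The company-protecting move: a refusal costs the user a little,
        # while a false negative could cost the company a lot.
        return "Refused: prompt flagged by the moderation classifier"
    # Note: a benign prompt ("2000s R&B album cover") can pass this text
    # check and still yield NSFW output, which is why deployed systems
    # typically also run a second classifier over the generated image.
    return generate_image(prompt)


if __name__ == "__main__":
    print(safe_generate("art in the style of a 2000s R&B album cover"))
```

The design choice worth noticing is where the thresholds sit: every knob in a gate like this gets tuned to minimize the company's liability first, which is exactly the pattern the footnote above predicts.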