
somebody has a non-disparagement agreement



Trying to be diplomatic, but this is such an unnecessarily snarky, useless response. Google obviously did go slow with their rollout of AI, to the point where most of the world criticized them to no end for "being caught flat-footed" on AI (myself included, so mea culpa).

I don't necessarily think they did it "right", and I think the way they set up their "Ethical AI" team was doomed to fail, but at least they did clearly think about the dangers of AI from the start. I can't really say that about any other player.


AI in Microsoft's hands, when they can't even be ethical about how they develop their own OS. Scary stuff.


> Google obviously did go slow with their rollout of AI, to the point where most of the world criticized them to no end for "being caught flat footed" on AI (myself included, so mea culpa).

They were criticized because they are losing the competition, not because of the rollout; their current tech is weaker than ChatGPT.


Their current tech is weaker because they couldn't release the full version, partly due to additional safeguards (to prevent more people claiming their AI is sentient) and partly due to cost cutting.


how are you so confident about that?


Straight from Sundar himself in https://blog.google/technology/ai/bard-google-ai-search-upda...

> We’re releasing it initially with our lightweight model version of LaMDA. This much smaller model requires significantly less computing power

Translation: we cannot release our full model because it costs too much. We are giving the world a cheap and worse version due to cost cutting.

> It’s critical that we bring experiences rooted in these models to the world in a bold and responsible way. That’s why we’re committed to developing AI responsibly

Translation: we value responsible AI so much that we'd nerf the capability of the AI to be "responsible"

If someone more ambitious than Sundar were CEO, I'm sure the recent events would have turned out very differently.


ChatGPT is also a lightweight model, but it visibly outperforms Bard.


Their current generative AI is weaker because they were focused on many other facets of AI such as AlphaFold and Waymo.


where they haven't yet created revenue-positive products despite billions in investment, while putting their main cash cow (search) at risk by neglecting that area.


Google went slow not due to ethics but because running neural inference is a lot more expensive than serving SERP data from cache.


You're honestly suggesting the inventors of the TPU bailed because they couldn't foot the compute bill?


They use a lot of machine learning for ads and YouTube recommendations - the TPU makes sense there and if anything shows how hard they try to keep costs down. It’s a no-brainer for them to have tried keeping Search as high-margin as possible for as long as possible.


Cade Metz is the same muckraker who forced Scott Alexander to preemptively dox himself. I don’t know Hinton apart from the fact that he’s a famous AI researcher but he has given no indication that he’s untrustworthy.

I’ll take his word over Metz’s any day of the week!


Yes, Cade Metz clearly pushes a certain agenda above all.


That’s not how a non-disparagement clause works.

It puts restrictions on what you’re allowed to say. It doesn’t require you to correct what other people say.

If your badly thought-through assumption were correct, the logical response from him would be to simply say nothing.


Unless he wanted to say something.


I've always thought about leaving a little text file buried somewhere on my website that says "Here are all of the things that Future Me really means when he issues a press statement after his product/company/IP is bought by a billion-dollar company."

But then I remember I'm not that important.


Do it for other reasons, such as inappropriate treatment and abnormal terminations stemming from misbehaving coworkers.

Date stamped

Weird & very uncool coworkers do get hired.


More like HR said, “Well, there is option A where you leave and are free to do what you wish. And then there is option B (points at bag of cash) where you pretend none of this ever happened…”


I assume Geoffrey Hinton has enough bags of cash for his lifetime and a few more on top of that. IDK why someone so well compensated and so well recognized would agree to limit themselves in exchange for a, relatively speaking, tiny bit more cash. That doesn't make the slightest bit of sense.


HR might as well say:

"It doesn't matter if you take the bags of cash or not, we will do our best to destroy your life if you mess with us after you are gone. The bags of cash are a formality, but you might as well accept them because we have the power to crush you either way"


Google HR is going to crush Geoffrey Hinton? I feel like that would work out worse for Google than for him.


Large corporations like Google have a lot of resources and connections to really mess up a single person's life if they want to, with expensive legal action and PR campaigns.

Yeah, they might do their reputation some damage by going after the wrong person, but let's be real here... the worst outcome for Google would likely be miles ahead of the worst outcome for Hinton.

Edit: Note that I'm not actually saying that I think Google and Hinton have this level of adversarial relationship.

I'm just saying that big companies may come after you for speaking out against them, regardless of whether you've accepted hush money or not.

Given that, it's usually worth being tactful when talking about former employers regardless of any payouts you may have accepted or agreements you may have signed.


The Google department responsible for this is called Global Security and Resilience Services. Staffed by ex-military and FBI. Look it up.

