I hope I'm wrong, but given the composition of the donors I'd be surprised if they put much scrutiny on near-term corporate/government misuse of AI, apart perhaps from military robots. There are definitely interesting ethics questions already arising around how large tech companies, law enforcement, etc. are starting to use AI, whether it's Palantir, the FBI, Google, or Facebook, so no argument that it's a timely subject, at least in some of its forms. It'll be interesting to see if they get into that. My guess is they'll want to avoid the parts that overlap too much with data-privacy concerns, partly because a number of their sponsors aren't exactly interested in data privacy, and partly because the ethical debate then becomes more complex (it's no longer purely an "ethics of AI" debate, but one with multiple axes).
I share your concerns. It also worries me that the brightest ML researchers choose to work at companies like Facebook, Google, and Microsoft rather than at public universities. One reason is probably that academia and public grants are too sluggish to keep up with such a fast-paced field. Another is that these companies have the vast amounts of data researchers need to test their ideas.
The downside is that much of the research is probably kept secret for competitive advantage. In my opinion, the public releases are more of a PR and hiring strategy than anything else: by presenting papers at conferences, Google's employees get to know promising researchers and can recruit them.
Others say there's nothing to worry about: Google and Facebook are just today's equivalent of Bell Labs, which made numerous contributions to computing without causing much harm.