As I posted elsewhere, I think this is a conflict between Dustin Moskovitz and Sam Altman. Ilya may have been brought into this without his knowledge (which might explain why he retracted his position).
Dustin Moskovitz was an early employee at FB, and the founder of Asana. He also created (along with plenty of MSFT bigwigs) a non-profit called Open Philanthropy, which was an early proponent of a form of Effective Altruism and also gave OpenAI their $30M grant. He is also one of the early investors in Anthropic.
Most of the OpenAI board members are related to Dustin Moskovitz this way.
- Adam D'Angelo is on the board of Asana and is a good friend to both Moskovitz and Altman
- Helen Toner worked for Dustin Moskovitz at Open Philanthropy and managed their grant to OpenAI. She was also a member of the Centre for the Governance of AI when McCauley was a board member there. Shortly after Toner left, the Centre for the Governance of AI got a $1M grant from Open Philanthropy and McCauley joined the board of OpenAI
- Tasha McCauley represents the Centre for the Governance of AI, which Dustin Moskovitz gave a $1M grant to via Open Philanthropy and McCauley ended up joining the board of OpenAI
Over the past few months, Dustin Moskovitz has also been increasingly warning about AI Safety.
In essence, it looks like a split between Sam Altman and Dustin Moskovitz.
It's clear that Adam himself has a strong conflict of interest too. The GPT store announcement on DevDay pretty much killed his company Poe. And all of this started brewing after the DevDay announcement. Maybe Sam kept it under wraps from Adam and the board.
I've heard others take this stance, but a common response so far has been "Poe is so small as to be irrelevant", "I forgot it exists", etc., in the grand scheme of things here.
Poe has a reasonably strong user base for two reasons:
(i) they allowed customized agents and a store of these agents.
(ii) they had access to GPT-4's 32k context length very early; in fact, they were one of the first to have it.
Both of these kinda became pointless after DevDay. It definitely kills Poe, and I think that itself is a conflict of interest, right? Whether or not it's at a scale to compete is a secondary question.
What matters is how much personal work and money Adam put into Poe. It seems like he's been working on it full-time all year and has more or less pivoted to it away from Quora, which also faces an existential threat from OpenAI (and AI in general).
Either way, Adam's conflict of interest is significant, and it's staggering he wasn't asked to resign from the board after launching a chatbot-based AI company.
What do you mean? Obviously, yes, they have for more than a decade. I don't have many opinions about the former, but how does OpenAI change the relevance of SO?
More confusion - Emmett Shear is a close friend of Sam Altman. He was part of the original 2005 YCombinator class alongside Altman, part of the justin.tv mafia, and later a part-time partner at YCombinator. I don't think he has any such close ties to Dustin Moskovitz. Why would the Dustin-leaning OpenAI board install him as interim CEO?
This whole thing still seems to have the air of a pageant to me, where they're making a big stink for drama but it might be manufactured by all of the original board, with Sam, Ilya, Adam, and potentially others all on the same side.
I find it really good but if you don't like rationality/EY it's really easy to latch on as something to hate (overly smart fanfic does sound cringe on the face of it).
> More confusion - Emmett Shear is a close friend of Sam Altman. He was part of the original 2005 YCombinator class alongside Altman, part of the justin.tv mafia, and later a part-time partner at YCombinator. ... Why would the Dustin-leaning OpenAI board install him as interim CEO?
This was my first thought too: Is this a concession by the board to install a Sam-friendly-ish interim CEO?
Is Emmett Shear really "friends" with Sam Altman? He (Emmett) literally liked a tweet the other day that said something to the effect of: "Congratulations to Ilya on reclaiming the corporation that Sam Altman stole". I'm paraphrasing here, but I don't think Emmett and Sam are friends?
Back in the early 2000s, the PayPal founders were from a handful of universities (UIUC, Stanford) and had a massive alumni network from those two programs. This was called the PayPal Mafia [0].
To this day, any tight collection/network of founders from the same organization is called a "Mafia".
In startup culture, it usually refers to a group of individuals who underwent a formative experience in one company, then went on to start separate individual companies where they all cross-invest, cross-advise, and generally help out each others' companies. The term was originally used to refer to the PayPal mafia [1, notable members include Peter Thiel, Max Levchin, Elon Musk, Chad Hurley, Reid Hoffman, Jeremy Stoppelman, Yishan Wong; notable descendants include SpaceX, Tesla, YouTube, Yelp, LinkedIn, and arguably Facebook]. It has since expanded to the justin.tv mafia [2, members = Justin Kan, Emmett Shear, Kyle Vogt, and Michael Seibel; descendants include Twitch & Cruise]. Arguably the Fairchildren (descendants of Fairchild Semiconductor - Intel, AMD, National Semiconductor, Kleiner Perkins, Sequoia Capital, and by extension Apple, Google, Cisco, Netscape, etc.) and the descendants of General Magic (eBay, Android, iPod/iPhone, Nest, WebTV, and the United States Digital Service) could also be termed "mafias", although they aren't usually referred to as such.
It's not just starting a second company - it's that a group of people who were all bound together by one company end up starting second companies, and they continue to go on to help each other and collaborate in their later ventures.
A sad, understated part of this whole ongoing saga is that so many people are throwing vitriol at Ilya right now. If the speculation here is true, then he was just chased by a mob over pure nonsense (well, at least purer than the nonsense premise beforehand).
Gotta love seeing effective altruists take another one on the chin this year though.
No. It reminds me more of Muddle [1] Ages' intrigue, scheming, and backstabbing, like the Medicis, Cesare Borgia and clan, Machiavelli (and his book The Prince, see Cesare again), etc., to take just one example. (Italy not being singled out here.) And it also reminds me of all the effing feuding clans, dynasties, kingdoms, and empires down the centuries or millennia, since we came down from the trees. I guess galaxies have to be next, and that too is coming up, yo Elon, Mars, etc., what you couldn't fix on earth ain't gonna be fixable on Mars or even Pluto, Dummkopf, but give it your best shot anyway.
Just sounds like billionaire tech bros wanting to appear smarter and more important than they really are, and a weird cult-like obsession with AI at the expense of the rest of humanity, even though AGI won't be possible for like a hundred or so years.
Yes, except their obsession is really with getting even more money than the billions they already have, rather than an obsession with AI for advancing humanity, blah blah blah, which they have to pretend it is (not just like to), otherwise no one will respect them, let alone ass-kiss them (which, deep down, is what they really want, hence all the yes-men they surround themselves with).
Absolutely. It is borderline salacious. Honestly didn't feel too good watching so many people opine on matters they had no real data on and insult people; also the employees of openai announcing matters on twitter or wtv in tabloid style.
I hope people apologize to Ilya for jumping to conclusions.
Yes, it was especially weird to read this on HN to such a big extent. The comments were (and are) full of people with very strong opinions, based on vague tweets or speculation. Quite unusual and hopefully not the new norm...
TechCrunch should be at the forefront with coverage, but their glory days are far behind them. And Valleywag is gone. So I guess it's up to us to gossip on our own.
At best he doesn’t have much integrity and caves to peer pressure. I would respect him more if he stood by his actions. Abandoning them just shows how frivolous his decision-making was.
Perhaps more info will come out that casts a different light, but as of now it seems obvious that he decided to vote to fire Altman for reasons that he’s not willing to clarify and which he abandoned as soon as he saw the overwhelmingly negative reaction. He didn't even say he regrets firing Altman - just that he doesn’t want OpenAI to fall apart.
Yeah, these votes were supposed to be what kept us safe from AI. The whole board was ill-equipped to stay neutral with respect to their own business interests, let alone protect humanity from the irresponsible outcomes of their models.
This is why I detest YC (despite taking part in salacious gossip on here due to my social media addiction). A couple YC friends of mine have been very explicit about how they detest the conspiratorial YC hivemind.
A "conflict" or false opposition can also be used in a theater-like play. Maybe this was a setup to get Microsoft to take on the costs/liability and more. Three board members left in 2023, which allowed this to happen.
The idea of boards might even be an anti-pattern going forward, they can be played and used in essentially rug pull scenarios for full control of all the work of entire organizations. Maybe boards are past their time or too much of a potential timebomb/trojan horse now.
This has been the case as long as companies have existed. Even with all this, companies still have boards because they represent the interests of several people.
Can anyone explain why, when I go to https://twitter.com/sama, I don't see the linked tweet, but if I navigate to https://twitter.com/sama/status/1726594398098780570 I can? Is this a nuance of, like, tweets being privatable? Or deleted tweets remaining directly navigable? Sorry, probably something basic about how Twitter functions.
That's meaningless; he would welcome Ilya back as a defector no matter what. What happens to Ilya later, after he is no longer a board member or in a position of power, will be much more informative.
This is the most logical explanation I have seen so far. Makes me wonder why Dustin Moskovitz himself wasn't on the board of OpenAI in the first place.
Dustin was an early investor in Alameda Research and was also one of the biggest donors to Mind the Gap -- SBF's mom's Super PAC. When SBF was about to go under, Dustin was his first call to try to raise money (this came out in the FTX trial).
Obviously we should all want our altruism to be effective. What is the other side of it? Wanting one's altruism to not really accomplish much?
But with everything that has gone on, I cannot imagine wanting to be an Effective Altruist! The movement by that name seems to do and think some really weird stuff.
The "Think Globally, Act Locally" movement is the competing philosophy. It's deeply entrenched in our culture, and has absolutely dominated philanthropic giving for several decades.
"Think Globally, Act Locally" leads to charities in wealthy cities getting disproportionately huge amounts of money for semi-frivolous things like symphony orchestras, while people in the global south are dying of preventable diseases.
It’s obnoxious because of that exact implication, that pre-existing altruism wasn’t concerned with efficacy. And that’s just not true.
There has been a community of professional practice around measuring impact and directing funds accordingly for decades. Does it always generate perfect results? No, far from it!! Is there room to improve? Absolutely, lots and lots of room!
But it’s hard not to notice that the people who call themselves Effective Altruists have no better a track record (actually, I would say far worse) than the so-called bloated NGOs and international organizations when it comes to efficacy.
I heard directly from Dustin that he was as surprised as anyone by the board’s actions. He is not some hidden mastermind behind the scenes; he has just been personally invested in AI and AI safety for a long time and therefore has many connections to other key players in the space.
If there was, there were 700 people motivated to leak it, and none did. What could they be aware of that the rest of OpenAI would not be aware of? How did they learn about it?
I agree it's probably not something new. But I will observe that OpenAI rank and file employees, presumably mostly working on making AI more effective, are very strongly selected against people who are sympathetic to safety/x-risk concerns.
Correlation =/= causation. This is most likely coincidental. I highly doubt Dustin's differing views caused a (near) unanimous ousting of a completely different company's CEO that had nothing to do with Dustin's primary business.
No board is ever controlled by a CEO by virtue of the title/office. Boards are controlled by directors, who are typically nominated by shareholders. They may control the CEO, although again, in many startups the founder becomes the CEO and retains some significant stake (possibly controlling) in the overall shareholding.
The top org was a 501(c)3 and the directors were all effectively independent. The CEO of such an organisation would never have any control over the board, by design.
We've gotten very used to founders having controlling shareholdings and company boards basically being advisory rather than having a genuine fiduciary responsibility. Companies even go public with Potemkin boards. But this was never "normal" and does not represent good governance. Boards should represent the shareholders, who should be a broader group (especially post-IPO) than the founders.
That isn't relevant to the question. Sam was on the board prior to all of these other directors, and was responsible for selecting them.
The post asks how/why Sam ended up with a board full of directors so far out of alignment with his vision.
I think a big part of that is that the board was down several members, from 9 to 6. Perhaps the problem started with not replacing departing board members and this spiraled out of control as more board members left.
Actually, you're rephrasing the question - it was specifically about "control", not "alignment".
Even if we substitute "alignment" the problem is that the suggestion is still that Sam would have been "better protected" in some way. A 501(c)3 is just not supposed to function like that, and good corporate governance absolutely demands that the board be independent of the CEO and be aligned to the company goals not the CEO's goals.
> good corporate governance absolutely demands that the board be independent of the CEO
CEOs and subordinate executives being on boards are not unusual, and no board (especially a small board) that the CEO (and/or subordinate executives) sits on is independent of the CEO.
By "independent" I don't mean "functions separately". Of course the CEO sits on the board. Sometimes the CFO is on the board too, although subordinate executives usually _should not be_ (they may _attend_ the board, but that's a different thing).
But fundamentally, the CEO _reports to_ the board. That's the relationship. And in a 501(c)3 specifically, the board have a clear requirement to ensure the company is running in alignment with its stated charter.
Whether or not this board got that task right, I don't know, it doesn't seem likely (at least, in hindsight). But this type of board specifically is there for oversight of the CEO, that's precisely their role.