Ford Aerospace was one of the first commercial sites of BSD Unix. The licensing was complicated. We had to buy Unix 32V from AT&T first.
That transaction went through the channel reserved for major corporate documents; AT&T and Ford Motor had a cross-licensing agreement. Eventually, I got a no-cost license agreement embossed with the corporate seals of both the Ford Motor Company and the American Telephone and Telegraph Corporation. Made a copy and taped it onto a VAX. Then I drove up to Berkeley from Palo Alto and Bill Joy gave me a BSD tape.
BSD didn't have networking at that point. We bought 3COM's UNET.[1] That was TCP/IP, written by Greg Shaw. $7,300 for a first CPU. $4,300 for each additional CPU.
It didn't use "sockets"; you opened a connection by opening a pseudo-device. UNET itself was in user space, talking to the other end of those pseudo-devices.
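Roughly, the contrast with the sockets API that 4.2BSD later introduced would look like this. A minimal C sketch; the pseudo-device path is an invented stand-in, not UNET's documented interface:

    #include <fcntl.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>

    int main(void)
    {
        /* UNET-era style (hypothetical path): open a pseudo-device; the
           user-space network daemon holds the other end and speaks TCP. */
        int fd = open("/dev/net0", O_RDWR);
        if (fd >= 0)
            close(fd);

        /* The sockets style that 4.2BSD standardized, for contrast: */
        struct sockaddr_in addr;
        memset(&addr, 0, sizeof addr);
        addr.sin_family = AF_INET;
        addr.sin_port = htons(25);                    /* e.g. SMTP */
        inet_pton(AF_INET, "192.0.2.1", &addr.sin_addr);

        int s = socket(AF_INET, SOCK_STREAM, 0);
        if (s >= 0 && connect(s, (struct sockaddr *)&addr, sizeof addr) == 0)
            (void)write(s, "HELO example\r\n", 14);
        if (s >= 0)
            close(s);
        return 0;
    }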
Once we got that going, we had it on VAX machines, some PDP-11 machines, and some Zilog Z8000 machines.
(The Zilog Z8000 was roughly similar to a PDP-11.)
All of which, along with some other weird machines including a Symbolics LISP machine, eventually interoperated. We had some of Dave Mills' Fuzzballs as routers [2], and a long-haul link to another Ford ___location that connected to the ARPANET. Links included 10Mb/s Ethernet, a DEC device called a DMC that used triaxial coax cables, and serial lines running SLIP. A dedicated 9600 baud serial synchronous line to Detroit was a big expense.
My work in congestion control came from making all this play well together. These early TCP/IP implementations did not play well with others. Network interoperability is assumed now, but it was a new, strange idea back then, in an era when each major computer maker had their own networking protocols. UNET as delivered was intended to talk only to other UNET nodes. I had to write UDP and ICMP, and do a major rewrite on TCP.
When BSD got networking, it was initially intended to talk only over Ethernet, to other BSD implementations. When 4.3BSD came out, it would only talk to some other implementations during alternate 4-hour intervals. I had to fix the sequence number arithmetic, which wrapped incorrectly.
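The fix is the standard modular-arithmetic trick: compare 32-bit sequence numbers by the sign of their difference, which is what the SEQ_LT-style macros in BSD's netinet/tcp_seq.h do. A minimal sketch of the idea in C (not the actual 4.3BSD patch):

    #include <assert.h>
    #include <stdint.h>

    /* Naive comparison: wrong once the 32-bit sequence space wraps. */
    static int seq_lt_naive(uint32_t a, uint32_t b) { return a < b; }

    /* Modular comparison: a precedes b iff (a - b), taken as signed
       32-bit, is negative -- i.e. b is at most 2^31 bytes ahead of a. */
    static int seq_lt(uint32_t a, uint32_t b) { return (int32_t)(a - b) < 0; }

    int main(void)
    {
        uint32_t a = 0xFFFFFFF0u;    /* just below the wrap point       */
        uint32_t b = a + 0x20;       /* 32 bytes later, past the wrap   */
        assert(seq_lt(a, b));        /* modular arithmetic: correct     */
        assert(!seq_lt_naive(a, b)); /* naive compare gets it backwards */
        return 0;
    }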
And finally, it all worked. For a few years, it was said of the TCP/IP Internet that it took "too many PhDs per packet." One day, on the Stanford campus, I saw a big guy with a tool belt carrying an Ethernet bridge (a sizable box in those days) under his arm, and thought, this is finally a working technology.
> The first copy we know to make it out of Berkeley was to Tom Ferrin at UCSF on the 9th of March in 1978. The license was signed on the 13th, the media was an 800 bpi tape, and on the tape was the “Unix Pascal system” and the “Ex text editor.”
UCSF and UCB (like all the UC campuses and enterprises) are technically the same legal entity, incorporated by Article IX, Section 9 of the California constitution. Strange that they signed a license; when I worked for the Regents, we called them MOUs if it was a "contract" with another part of ourselves.
When I started working for UCSD and found out we were licensing BSDi for some boxes, I was sort of confused: why don't we get that for free?
Tom was my manager for a while when I was a programmer at UCSF (~20 years after this event). He ran the Computer Graphics Lab which had a bunch of SGIs and DEC Alpha servers to do molecular modelling and I was 'the first linux guy in the group' (they asked me why I used linux when bsd existed). I remember him saying that whenever he licensed a commercial UNIX he insisted the contract include a section giving him access to the source code for the kernel ("because you never know when you will have to recompile the kernel or debug a kernel-space problem").
He's also known for presenting at USENIX on how to enable a hardware-disabled (but physically present) instruction on a cheaper version of the PDP-11. You'd cut a trace in the microcode and add a jumper wire, and presto! You had a fast floating-point instruction that was previously disabled.
I always thought there was quite a contrast between UCB giving away BSD versus UCSD making its p-System a commercial product. In the short term, it was probably good for UCSD, but in the long term, the p-System is all but forgotten while BSD trucks along.
I thought BSD being free was part of the settlement of AT&T vs. the Regents. The technology transfer office would never sign off on an open source license, even in the early 2000s. You would have to make an end run around them to the office of general counsel to start a new open source project, once granting agencies started requesting open source. Even if you did the end run around tech transfer, you could not use a license that mentioned anything about patents (UC does not want to give away patents, but giving away copyrights was permissible).
Yes, we were told we could not contract with ourselves, so we had to do memorandums of understanding when we had revenue or other agreements between units.
We also had a policy to follow state law (which does not apply to UC unless UC is specifically mentioned in the law).
As noted above, the University of California is given special status by a section of the California constitution. It's presumably clauses like the fact that it's governed by the Regents "...subject only to such legislative control as may be necessary to insure the security of its funds and compliance with the terms of the endowments of the university..." that mean it isn't normally affected by new laws.
There's an alternate universe, almost identical to our own, where BSD 4.4 was released slightly earlier, its legal status was not ambiguous, it got ported to the 386 in '90, Linus never bothered to write Linux, and every machine from smartphone to server runs some version of BSD.
«
My first choice was to take the BSD 4.4-Lite release and make a kernel. I knew the code, I knew how to do it. It is now perfectly obvious to me that this would have succeeded splendidly and the world would be a very different place today.
RMS wanted to work together with people from Berkeley on such an effort. Some of them were interested, but some seem to have been deliberately dragging their feet: and the reason now seems to be that they had the goal of spinning off BSDI. A GNU based on 4.4-Lite would undercut BSDI.
So RMS said to himself, "Mach is a working kernel, 4.4-Lite is only partial, we will go with Mach." It was a decision which I strongly opposed. But ultimately it was not my decision to make, and I made the best go I could at working with Mach and doing something new from that standpoint.
This was all way before Linux; we're talking 1991 or so.
»
> Some of them were interested, but some seem to have been deliberately dragging their feet: and the reason now seems to be that they had the goal of spinning off BSDI.
As a VP of the Free Software Foundation, the author of the quote you've shared is not well-positioned to recognize the reason a person might not enthusiastically cooperate with Richard Stallman: a fairly large percentage of the people who deal with him find him supremely annoying.
> I do think his intransigence was integral to getting the Free Software movement off the ground.
I'm not sure a Linux-like ecosystem would be possible on BSD licenses. With the GPL, company A is free to contribute code without the fear that competitor B will take it, make some improvement, and pack it into their proprietary product to undercut company A. That's an important selling point: your tech is licensed as a commodity that cannot be exploited to gain a competitive edge over you.
> I wish I had done.
A friend of mine who hosted him says "it was an honor, but not a pleasure".
Yeah, I know, it's a somewhat complicated story. But even the kernel has, as the Wikipedia article confirms, considerable BSD heritage, and the userland is even more BSD-flavored. I would think that the vast majority of Mac / iOS programmers have never made a direct Mach system call in their code.
At the very least, you can say that OS X is more BSD than Linux is GNU.
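To make "direct Mach system call" concrete, here is a minimal sketch (macOS only) that queries the current task's memory usage through task_info(), a Mach interface rather than a POSIX or framework call:

    #include <mach/mach.h>
    #include <stdio.h>

    int main(void)
    {
        /* mach_task_self() yields the Mach port for our own task;
           task_info() is a Mach interface, not POSIX. */
        struct task_basic_info info;
        mach_msg_type_number_t count = TASK_BASIC_INFO_COUNT;
        kern_return_t kr = task_info(mach_task_self(), TASK_BASIC_INFO,
                                     (task_info_t)&info, &count);
        if (kr != KERN_SUCCESS) {
            fprintf(stderr, "task_info: %s\n", mach_error_string(kr));
            return 1;
        }
        printf("resident size: %lu bytes\n", (unsigned long)info.resident_size);
        return 0;
    }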
The vast majority also doesn't do POSIX stuff directly; it's Objective-C and Swift framework calls instead.
The UNIX compatibility on NeXTSTEP, and the follow-on culture on OS X, was that the BSD layer existed to bring UNIX software into the system and win those DoD contracts, not so much for software born on the platform.
My uninformed understanding is: Darwin is open source, so when they improve/extend/modify it for their own needs (Darwin is still the foundation of macOS, iOS, etc.), they must make those changes available.
No; open source encompasses copyleft, which broadly requires derivative works to be under the same or similar license, and permissive licenses, which generally don't care and are happy to let you incorporate the code in differently licensed derivatives up to and including producing completely proprietary programs using permissively licensed open source code. Darwin largely uses permissive license code, in particular from the BSDs, which means that it's under very little obligation to share. (IANAL and this is somewhat oversimplified, but that's the gist)
Actually, you don't have to. For GPL software you must provide the source code for the binary and can't put it under NDA, which many fail to comply with. But you don't have to upload the files to the Internet or anything. And this is about BSD code, so it's even less than that: you only have to display the correct attributions.
You can refuse to distribute the binary and/or source code, or your private copy thereof, entirely, with FOSS. You don't have to send your fork to non-users who randomly ask for it, even under the AGPLv3. Compliance is a perfectly fine and easy obligation that is slightly beneficial to everyone.
> How exactly would someone prove they improved/extended/modified the source?
Disassembly would work.
> These sort of copy-left licenses rely an awful lot on good faith, and that does not contend for human nature.
This is not a copyleft license. Also, copyleft licenses rely on lawsuits more than on good faith.
> Even if someone could prove it - good luck legally compelling Apple to do anything.
If Apple gets sued, they do have to show up in court. If they lose, they must pay damages. If they're enjoined and do not comply with the injunction, the courts can take any number of actions against them in response. I don't sit on Apple's board, but I can't imagine them preferring an arbitrarily large fine over releasing GPL-licensed code that Apple modified without complying with the license.
> If Apple gets sued, they do have to show up in court
This is a very naïve view of the legal process. Apple could, and would, drag the process out and make it as costly and lengthy as possible. They can outlast any feasible plaintiff, especially in the FOSS world (which is what we're discussing here).
It matters not how just or righteous your case may be. It only matters how deep your pockets are.
Apple's are much deeper than FSF or any other organization that would bring a suit. It's that simple...
> Disassembly would work.
Doing so would violate their EULA/TOS and expose you to a suit as well. Fun times.
> Also, copyleft licenses rely on lawsuit, more than good faith
How many of those cases have actually seen a courtroom, and in how many did the plaintiffs actually prevail?
My understanding is that git and mercurial were created independently around the same time, so in a world where Torvalds didn't create git we'd probably all use hg. Although, it isn't super hard to imagine him still working in system software and eventually creating git under almost the same circumstances, just that it would be used on a BSD rather than Linux.
I studied CS at university, and there was this guy who was a computer geek (one of very few among my fellow students), but actually he was just your average gamer with close to zero knowledge of anything computer-related beyond double-clicking the shortcut to CS / Half-Life / WarCraft / Dota / whatever else was popular back then.
I met him 15 years later. We had some casual chat about this and that. And at some point he said—
Well, I have to give you a bit more of the context. He was a junior front-end (React, of course) developer at that point. Yes, 15 years later! He had clearly only just arrived in the profession, judging by his wild incompetence, although he had sold himself to the company as a senior! Still, he was very incompetent, and I remember wondering, wide-eyed, how on Earth any company would even hire such a guy, let alone give him a senior role!
He was talking about this front-end thing: that there's this other OS, Ubuntu, and he thinks he'll try it instead of Windows, because, drumroll, for some reason (for some reason!) this git thing works better on Ubuntu than it does on Windows. My first try of Linux was some 15 to 20 years before that conversation, so I nearly spilled my coffee on him laughing.
And here is your comment, casually assuming everyone on HN knows that history of git, and that the git guy is somehow related to this Ubuntu thing, you know.
Mercurial was also created by a Linux kernel developer, and its development started a few days after Git's (the reason being that Linux had previously used BitKeeper for version control, but BitKeeper stopped offering free licenses).
Interesting - that I didn’t know. Based on my personal p4 experience, I’m sorry anybody else had to use it.
That said, nobody has mentioned that git (and hg) were preceded by and directly inspired by Larry McVoy's BitKeeper[0]. Indeed, I've heard rumor that both "git" and "mercurial" were named after people's perceptions of each other's personalities during the Linux/BitKeeper breakup/debacle.
There is an alternative universe where Scott G. McNealy has a backbone and doesn't stop participating in BSD development.
Sun almost upstreamed NeWS, and they did a ton of other stuff that should have continued to go into BSD. With Sun driving as the primary BSD advocate, BSD would have dominated, being both open and backed by a powerful company.
Sun really had the largest Unix mind-share, and most applications were targeted at Sun workstations. Eventually this resulted in the Unix wars, because lots of people were afraid of Sun/AT&T.
Sun could have avoided that and simply made itself and BSD the standard. If that standard was open, no powerful alliance would have formed against them. AT&T would still have tried, but a non-open alliance led by AT&T wouldn't have worked out well.
But because of a bunch of lawyers, and eventually one bad quarter, Sun went all-in on proprietary and joined itself to AT&T.
That BSD would most likely be similar to what Linux is today. There's this idea that this alternative universe would have one of the BSD variants, as they exist today, be dominant, but colour me skeptical on that.
RedHat would have been a "BSD company", systemd would have been a "BSD thing", Hans Reiser would still have written ReiserFS, but for BSD, Steve Ballmer would have ranted that "BSD is a cancer", etc.
There would be some technical differences of course, but I suspect many of them would be relatively small and not all that interesting.
So basically it would have been "Linux, but with fewer Penguins and more Satanism".
No, there would have been some big differences. The first is that threads would have had a kernel representation, unlike in Linux where the kernel just knows about processes. This mistake carries over to tools like strace, which grab only one thread by default (you need strace -f to follow the others) because they don't know about the rest.
Secondly, with an integrated user space and kernel, packaging and distributions would not be a necessity. That would change a lot of things.
Microsoft took the TCP implementation in Windows from BSD, as shown in the copyright notice. That relationship would have been different too.
We would not have alternate libcs. Containerization would be built around slightly more heavyweight jails, which look like real systems, since more software would be assuming that environment.
With systemd there is a real need being addressed, but I would hope the solution there would be better.
You're assuming that BSD systems would have evolved the same way over the last 35 years. I don't think that's a given. People like to tinker and would have created alternative ${anything}s for BSD systems. And even with a bundled userspace and kernel you still have distros; e.g. PC-BSD did the whole "fat packages" thing 20 years ago, and there have been tons of "distros" based on various BSD systems. Jails weren't created until around the late 90s (1998 or 1999 IIRC), and who knows what that would have looked like if (Free)BSD had massively taken off in the same way Linux did, and who knows if "FreeBSD cgroups" wouldn't be a thing in addition to jails?
Or: all the people that worked on those Linux things wouldn't have worked in a completely different field: they would have worked on BSD, presumably with (roughly) the same interests, and (roughly) the same opinions. Of course it wouldn't be exactly the same, but roughly, from a high level? It would probably be very similar.
The threading stuff was being worked on at the same time across Linux and BSD and System V derived systems, and this was during the early Unix wars. Cgroups came long after jails, and I think the need would have been less, although there are reasons they look the way they do.
The key difference would be whether they kept permissive licensing or whether copyleft BSD derivatives would appear. For example, without the GPL forcing the kernel to stay open source, Android phones would be a lot harder to modify.
Most Android phones are already so locked down and partly proprietary that it's a right pain to meaningfully modify anything without jumping through tons of hoops.
In short, I don't think it would have made much of a difference. "GPL vs. BSD" is an old debate that's been done a million times, and I've never seen a convincing argument one way or the other. Plus, there have long been copyleft (GPL, CDDL) parts in many BSD systems, both in userspace and in the kernel.
To be clear, the only GPL source in Android is the kernel. Google took great pains[0] to avoid shipping copyleft code. This is why they were able to make most of Honeycomb proprietary[1]. They can't withhold source on the kernel, but most of what makes Android Android isn't part of Linux and thus the GPL does not apply[2].
> For userspace (nonkernel) software, Google prefer Apache 2.0 (and similar licenses such as BSD and MIT) over other licenses such as the GNU Lesser General Public License (LGPL).
Android is already permissively licensed and that's half of why Google can legally lock things down[3] so easily. The other half being that Google requires a CLA from outside contributors, which is just plain silly for a permissively licensed project. Then again, GNU did the same thing at one point.
In the alternate world where BSD was unambiguously legal and RMS was the Ted Nelson[4] of Free Software, I don't think they would have independently invented copyleft. A lot of the BSD people had very reactionary opinions to the concept, and while some of it might be motivated by their personal disdain for RMS, it's hard to separate RMS and copyleft.
Given all that, a legally unambiguous BSD almost certainly would have wound up being eaten by Microsoft rather than AT&T and the UNIX Wars. All the anger from early 2000s Microsoft about Linux was solely because the GPL prevented them from just writing a better version of Linux or GNU that other people couldn't have. Microsoft was very good at taking open standards and extending them with the goal of making their implementation replace the standard. The GPL copyleft was the perfect thorn in the foot of a Microsoft that liked to cheat the norms of standards bodies.
[0] At least on early versions of Android, this went as far as changing all the system paths. GNU and Android can technically coexist under the same Android/Linux kernel.
[1] Even the FOSS release of Ice Cream Sandwich deliberately omitted all the Honeycomb-related tags to frustrate attempts to use Honeycomb under the FOSS terms of the later AOSP release.
[2] Linux is commonly stated to be GPLv2, but it additionally has an exception stating that user-mode code never trips the GPL copyleft, as a "just in case" sort of thing. This is the reason why Nvidia is allowed to have kernel modules, and also the reason why Linus explicitly refused GPLv3. The v3 anti-TiVo clause forbids you from shipping an application on the same consumer electronics system as GPL software, if the application refuses to run if the GPL software is modified. This is perfectly reasonable for GNU but flies in the face of the whole "user mode APIs are not copylefted" thing.
> Given all that, a legally unambiguous BSD almost certainly would have wound up being eaten by Microsoft rather than AT&T and the UNIX Wars. All the anger from early 2000s Microsoft about Linux was solely because the GPL prevented them from just writing a better version of Linux or GNU that other people couldn't have. Microsoft was very good at taking open standards and extending them with the goal of making their implementation replace the standard. The GPL copyleft was the perfect thorn in the foot of a Microsoft that liked to cheat the norms of standards bodies.
This is a strange counterfactual to propose, given that Microsoft at one time sold the most-installed Unix on the market: Xenix, which it later abandoned. Given that they were licensing Version 7 from AT&T, why would a tweak to BSD licensing result in them "eating" it?
Microsoft's biggest existential risk in the 80s was not having a "real OS"[0]. They had DOS, which they bought from SCP[1] to fix a problem IBM was having, and they had XENIX, which they had to license from AT&T. DOS was technically unsound: it almost sort of worked for personal computing, but nobody was going to build workstations out of IBM PCs running DOS. They wanted to build a "real OS" on top of XENIX, going as far as adding porting aids between DOS 2 and XENIX in what they dubbed "XEDOS". Then AT&T started selling System V directly in 1983, and Microsoft chickened out of UNIX.
In this environment, Microsoft wound up making several attempts at a home-grown "real OS": Multitasking DOS 4, and ADOS, which then became OS/2. None of these succeeded. OS/2 got close, but Microsoft really didn't like working with IBM, and Windows 3[2] was selling way better. Microsoft ultimately got itself out of this dilemma with NT and Windows 95, both of which used a slightly modified Win32 API that was "real OS" compatible. However, that was almost a decade during which any number of companies could have upended, if not killed, the company.
BSD didn't start separating itself from AT&T-owned code until 1989. Had that happened earlier, Microsoft could have pivoted to a BSD-based XENIX and saved itself the cost of an AT&T license (which increased drastically with SysV). Microsoft could have shipped Windows on top of that instead of, say, the hackery that was WIN-OS/2, or waiting for Dave Cutler to design and build NT.
[0] Virtual memory, apps run in protected mode ring 3, and preemptive multitasking.
[1] Seattle Computer Products, not the SCP Foundation
[2] Which is not a "real OS" in the sense that it's technically near identical to the Macintosh's system software.
Sometimes I wonder how much they regret having sold off Xenix, and later not keeping better POSIX support on NT, having to ship a Linux VM in the end.
I, for one, only bothered with Linux initially after having learned UNIX on Xenix at technical school and used DG/UX on the university campus, because POSIX support on Windows NT was so flaky; later on it was more practical to deal with dual boot than with SUA.
I feel a bit of history is slipping away here. Linux forced a lot of companies and individuals to keep things together in a way that uniquely unified the industry against Windows and custom embedded software stacks.
Without the Linux kernel we’d have a bunch of competing proprietary forks bickering at every turn.
You could have forgotten about video drivers from major vendors like Nvidia; even with Linux, those were an anomaly.
Huh? We have many competing forks: Debian, Red Hat, Ubuntu, Arch... Sure, there is mostly one kernel, but there is a lot more to an OS than a kernel.
Linux didn't force anyone to keep stuff together; it's simply to your advantage to work together most of the time. Anyone who didn't soon paid a price, as the things they wanted couldn't easily be brought to them.
I may not have made my point clearly, but I have seen from the inside how companies reluctantly use the Linux kernel because it's the industry standard and the board support package comes with Linux.
If the BSDs had won, there would have been very little incentive to upstream patches or provide source code to customers, but a lot of incentive to pitch the vendor's proprietary additions as something special. Every company selling a solution would have been tempted to close off the source and give nothing back. Back then, keeping as much as possible in-house was a knee-jerk reaction.
Everything begins with hardware support.
The standard distributions also all used glibc and the GNU utilities, which are also GPL. For the longest time, all the standard distributions were based on either Red Hat or Debian.
I think Linux shaped the industry in very particular ways that would not have come naturally in a non-GPL world. Things today are by and large extremely open, with GitHub etc. being almost a force of nature, but my hunch is that this outcome was not a given and would not just have happened in some manifest-destiny way. I think the world could easily have tipped onto a much more proprietary path without GNU and Linux.
It seems to me the incentive is mostly that it costs tons of effort to maintain these things in-house, and that it doesn't actually hurt the business to share (most) things.
Does copyleft help? Perhaps, but that's not clear to me. There are tons of successful non-copyleft projects so copyleft certainly doesn't seem to be a requirement, and look at how many companies are violating the GPL and basically just keep doing it. While GPL suits do happen (just last week in France), in general they're exceedingly rare, and the risk consists of a bit of negative publicity among a small number of people.
All of this is of course an old discussion, and to be fair I don't think anyone can be sure of anything here which is why people have been discussing this for 30 years. Basically we'd have to construct an alternative universe to be sure.
I was a BSD admin for 4 years back in the early 00s, after running Sun Solaris DNS servers for a well-known ___domain registrar. I ran web servers, and later firewalls, using FreeBSD. While I really like BSD for a server, I've had terrible times with it on laptops. I've tried them all: Free, Net, Open, Ghost, and others that are no longer being developed. I've standardized on Debian over the years because it just works. I buy refurb Lenovo laptops from eBay for about $120.
I would like to see BSD grow and become more popular on the desktop. For now, Debian is just super stable and good enough.
Kirk McKusick gives a great ~45min talk called the history of BSD at conferences. There are essentially 3 versions (one about the lawsuit, one about the TCP wars with BBN, and one about the early history), and he asks the audience to vote on which version they want to hear. Search for Kirk McKusick history of BSD on YouTube, or come to a BSD conference to hear it.
And I'd recommend coming to a conference, just so you can talk to Kirk. He's one of the most down to earth, friendly and just generally nice luminaries I've ever talked to in person.
Yes, I have fond memories of the one or two times I heard him as well.
One bit I remember from his presentation was his parable of BSD as the building of a road, with Bill Joy (IIRC) hacking a path through the jungle with a machete, someone else driving a bulldozer along that path, somebody pouring asphalt, somebody adding streetlights, and finally somebody (I seem to recall it was Sam Leffler) painting the streetlights.
Beautiful bit of history. I remember quite a lot of it, being old.
Nice work by the author. Without people taking the time to write things like this up, history gets lost.
Years ago I built one of the first local search engines for the Netherlands, called Search.NL. I am indebted to the writer of the only page that still refers to that history:
I was a Solaris admin in those days. No one was impressed with the changes. Even after all these years, I still recall the BSD command differences. For whatever reason, I still sometimes enter a random BSDism on the Linux command line and then say, "Oops, wrong command. I cannot believe I did that." Especially since I've run Debian stable for the better part of 20 years, on and off, with attempts to like other distros like Fedora or Arch. Still working in IT, closing in on 60 years of age. Still love being a sysadmin. Still love writing shell scripts and automating things.
I used to work with Sun-3s, SPARCstations, and other Sun hardware at the time Sun was transitioning from the BSD-flavoured SunOS to Solaris. There was confusion at many levels, including branding. The SysV flavour was initially called SunOS 5, not Solaris. Later they decided to use the Solaris name for later SunOS releases and drop the SunOS name altogether. There was some confusion about the Solaris 1.x vs. 2.x naming, since Solaris 1.x was really the BSD-flavoured SunOS, while Solaris 2.x was a completely different beast, being all SVR4.
What first struck me when we moved from SunOS to Solaris was just how clean and simple SunOS was to manage, from the directory structure to the /etc configurations to device file names. Yes, there were different ways in Solaris to address the disk block device (physical, logical, etc.) but it always made me cringe that something I'd addressed as '/dev/sd0a' now became something like '/devices/iommu@0,10000000/sbus@0,10001000/espdma@5,8400000/esp@5,8800000/sd@0,0:a', or at best '/dev/dsk/c0t0d0s0'. The /etc/ filesystem hierarchy also got a lot more complicated. I felt the SVR4 Solaris was far less elegant than SunOS.
I guess it was also because of the relative maturity of SunOS, but I felt things just worked as expected, and the OS itself was generally snappier than SVR4/Solaris on the same SPARC hardware. Eventually we had no choice but to move to SVR4/Solaris, but I still have fond memories of a simpler time and a cleaner SunOS.
This was the moment many former Sun people identify as the 'death of original Sun'.
Sun wanted to make SMP work, so Solaris was a huge amount of work and was incredibly buggy for a long time. Sun effectively lost a huge amount of time going through this transition, and it took years to get back to the same level of stability. Bryan Cantrill talks about this in LISA11 - Fork Yeah! The Rise and Development of illumos [1].
But not everybody agrees: if you read 'Sunburst: The Ascent of Sun Microsystems', written in 1991, the authors are very much in favor of the AT&T deal and explicitly call out the BSD people as 'anti-Standard' and thus anti-Sun.
The real difference there is the argument about what a standard is. Their approach is basically that Unix belongs to AT&T, and therefore AT&T is the standard. The (in my opinion) more correct argument was that BSD/Sun had the most software and was the de facto standard, and being more open and continuing to be the standard was what Sun should have done.
Another incredibly important battle was over X vs. NeWS. NeWS was arguably technically superior; there is a nice talk by James Gosling about it [2]. It still had technical problems, but it would have been a great base to build on. But because some companies, led by DEC, made X open(ish) and many embraced it, X became 'the standard', and thus many at Sun basically said 'Well, X is the standard, so we have to switch to X' instead of saying 'let's make NeWS open and try to be the standard'.
The Sun transition also hurt their customers. In "LIFE UNDER THE SUN: My 20-Year Journey at Sun Microsystems", David Yen mentions customers bitterly complaining about it. Sun made the transition very, very aggressively, basically telling customers to STFU. David Yen loved the aggressive approach because he was selling the massive SMP servers that needed Solaris; SMP servers were clearly where Sun saw the future.
If Sun had embraced BSD, open sourced NeWS, and made the SMP transition in the open, Unix might have gone into the 90s as an unstoppable juggernaut.
The Sword of Damocles explanation of BSDi's ending in this article is utter bullshit. BSDi knew it was in direct competition with Red Hat in the late 1990s. The president, Rob Kolstad, ran a very lean operation with only about 20 employees, while Red Hat had managed to IPO and was heavily capitalized. Some BSDi employees, one in particular, argued for an IPO, and that particular guy, from MIT, guaranteed that with an IPO everyone would be rich within a year. The palace revolt worked, the MIT guy took over, Rob Kolstad was deposed, and the MIT guy bankrupted the company in just one year (record time). The pieces were sold off to Wind River, the maker of VxWorks, which cancelled development a few years later, and that was the end of BSD/OS and BSDi.
I have the sources to "JHU UNIX", a direct fork of one of the earliest BSDs, though I don't recall hearing which. They were recovered off data tapes (and are still in an arcane raw binary format) from the late '70s / very early '80s.
And this is why UNIX took over the server world, and C spread everywhere: had it been a commercial product sold by AT&T at a price tag similar to VMS, MVS, and the rest, history would have been quite different from the one portrayed in the article.
[1] https://archive.org/details/bitsavers_3Com3ComUN_1019199/pag...
[2] https://eecs.engin.umich.edu/stories/remembering-alum-david-...