Cooperative Linux – running Linux as a Windows driver (2011) (colinux.org)
115 points by ornxka on July 10, 2020 | 58 comments



I used colinux as part of my personal plan to migrate away from Windows back in ~2005-2006.

I wanted the transition to be as painless as possible, and I also wanted to be able to use both systems at the same time for a while, while getting used to Linux & its applications.

The first step was starting to use multiplatform & open applications, so I left IE, Outlook and Office for the Windows versions of Firefox, Thunderbird and OpenOffice.

Once I got accustomed to that, I installed Debian in dual boot and moved all my files to ReiserFS. (This was before the unfortunate events involving its creator; eventually I moved to ext4, and years later I made a super cool in-place conversion to btrfs, but that's another story.)

Now I was accustomed to open applications, and my data was on Linux, but I still found myself frequently needing to start Windows for a lot of reasons. But then all my emails and documents were locked in the Linux partition.

Here coLinux came to the rescue: while on Windows, I started a coLinux instance (it was super fast, no way a VM could have served that purpose), mounted the physical partition in it, and shared it to the Windows machine via Samba.

It was a long journey, but it taught my younger self a lot, and above all it made me appreciate how powerful and flexible an open software ecosystem can be. None of that would have been possible with proprietary technologies.

Many years later, I have never looked back, and I am grateful to all the wonderful, passionate people who made this possible.


I'm the original author of this project. It warms the heart to see that it helped so many fellow hackers to introduce themselves to Linux!

Recalling this makes me feel like quite an old hack: I started this back in December 2003, without Git, through mailing lists, and on SourceForge - oh boy. It would have definitely been easier maintaining this project with all the tech, the reach, and the number of potential contributors that we have today.


Man, without you I don't think I would have survived in the corporate world. If it wasn't for the fact that I could install coLinux to turn my work XP into an actually functional OS, I don't know what I would have done.

Nothing (almost!) against Windows, but with experience steeped in Xenix, Solaris and Interactive... let's say that coLinux was also pretty effective as an anti depressant :D


Thanks. Used it quite a lot to do multiplatform development with C#/.NET


I remember running this (way earlier than 2011, like 2003?) and it was really cool but it’d slow down the real time clock on the ‘host’ (co-habitating?) Windows install. I figured that was due to the clock being tied to cpu cycles and Linux stealing some of those. I’m not so sure about that explanation anymore, but I don’t have a better hypothesis either. Funny how that little detail stuck with me for almost 20 years…


I thought it was due to the timer's frequency being programmable, and both operating systems setting and assuming different values. It's been a long time though and my memory is fuzzy.
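
For what it's worth, the legacy timer in question is the 8254 PIT, whose interrupt rate is set by writing a divisor of its ~1.193182 MHz input clock. A minimal sketch of reprogramming it (kernel-style code that only makes sense in ring 0; an illustration of the mechanism, not coLinux's actual source):

    /* Reprogram legacy 8254 PIT channel 0 to fire `hz` times per second.
     * If one OS programs, say, 1000 Hz while the other keeps accounting
     * ticks at its own assumed rate, its wall clock drifts. */
    #include <stdint.h>

    #define PIT_INPUT_HZ 1193182u  /* PIT input clock, ~1.193182 MHz */

    static inline void outb(uint16_t port, uint8_t val)
    {
        __asm__ volatile ("outb %0, %1" : : "a"(val), "Nd"(port));
    }

    void pit_set_rate(unsigned hz)
    {
        uint16_t divisor = (uint16_t)(PIT_INPUT_HZ / hz);
        outb(0x43, 0x36);                  /* channel 0, lo/hi access, mode 3 */
        outb(0x40, divisor & 0xff);        /* low byte of divisor  */
        outb(0x40, (divisor >> 8) & 0xff); /* high byte of divisor */
    }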


That reminds me of the one annoyance of the very cool SoftICE debugger: the clock would stop while it was active.


Little trivia: there has been an official UNIX subsystem in Windows since 2000: https://en.wikipedia.org/wiki/Windows_Services_for_UNIX It was based on a product called Interix, and I believe it was not maintained for a long time (but continued to ship). IIRC the reason it was there was to satisfy DoD contracts or something similar that required POSIX compliance.


Your timeline is a bit off; Windows NT, released in 1993, already had a POSIX personality.

https://en.wikipedia.org/wiki/Architecture_of_Windows_NT

Had Microsoft taken care of it like other OS vendors did with their POSIX compatibility layers, instead of using it just for DoD contracts and Win32 migrations, FOSS UNIX clones would never have mattered.

The only reason I started tinkering with Linux was that Xenix and DG/UX were a little hard to come by for work assignments at home, and stuff like Coherent wasn't really a solution.


> FOSS UNIX clones would never have mattered.

I am not sure about that. NT licenses weren't cheap back in the day, and Linux was $0. (And didn't come with baggage like a more expensive SKU to support more CPUs)


NT licenses were included in the computer price anyway. I was running NT during university, and eventually dual booting with Linux.


“has been” isn’t the case; it was removed in Windows 8. It worked on some Windows 7 variants, though, with a separate download of tools.


Ah, I remember that. NetBSD/pkgsrc developers bragged about how portable their software was, that they could even do their development on Windows using that thing.


To be fair, "that thing" was a real UNIX OS running on top of the NT kernel.


I used to use this in the days of XP. It was an amazing way for me to learn Linux without losing Windows. I was sad when I had to move on without it due to the limitations of 32-bit and 4GB of RAM (of which I could only actually use 3.2GB at the time). Strange that this abandoned relic of a time long past showed up here on HN.


My story with coLinux was that it was also my first experience with Linux, also around 2008. I wanted to try it but the thought of installing it and dual booting seemed a bit scary to me, so when I came upon a solution that supposedly let me run Linux and Windows at the same time, I was thrilled. However, since I was very young and knew nothing about Linux or even computers in general, I never managed to actually run any Linux programs with it. I don't remember if I just didn't install a disk image with a distribution on it or what, but since I didn't get it to work, I assumed that it was some sketchy project that amounted to snake oil.

Fast forward to earlier today, after using Linux (proper) for the past 12 years, I came across another comment on HN that mentioned it and realized that even today I didn't really understand how the promises of coLinux were possible - how can you run Linux and Windows at the same time? Was it some kind of userland virtualization, like Wine, or was it just a fancy virtual machine, or what? After reading about it I realized it was in fact a very real project that actually did work, and that it used a very novel method of running two operating systems together without "virtualization" as people typically know it, where there is a "host" and a "guest" separated by hardware isolation mechanisms.

Instead, it works by literally just running the Linux kernel inside of a Windows driver, with some bootstrapping code to allocate memory to it, some glue for Windows/Linux context switching (with control yielding from Linux to Windows after a time slice, and control passing from Windows to Linux via a userland daemon in Windows calling into the coLinux driver on a timer), and a mechanism for ferrying interrupts between the two kernels. This basically amounts to "cooperative multitasking", which hadn't really been a thing since segmentation and paging were introduced at least as far back as the early 90's, and as far as I'm aware hasn't been used as a technique for serious virtualization since (for probably the obvious reasons).
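
For the curious, that cooperative handoff can be modeled in user space with POSIX ucontext. This is purely an analogy of the control flow - the real driver swaps full ring-0 state, including the MMU context, and the names here are invented:

    /* Two "kernels" voluntarily passing the CPU back and forth. */
    #include <stdio.h>
    #include <ucontext.h>

    static ucontext_t windows_ctx, linux_ctx;

    static void linux_kernel(void)
    {
        for (int slice = 0; slice < 3; slice++) {
            printf("linux: running slice %d\n", slice);
            swapcontext(&linux_ctx, &windows_ctx); /* voluntarily yield */
        }
    }

    int main(void)
    {
        static char stack[64 * 1024];              /* the guest's stack */

        getcontext(&linux_ctx);
        linux_ctx.uc_stack.ss_sp = stack;
        linux_ctx.uc_stack.ss_size = sizeof stack;
        linux_ctx.uc_link = &windows_ctx;          /* return here if it exits */
        makecontext(&linux_ctx, linux_kernel, 0);

        for (int tick = 0; tick < 3; tick++) {
            printf("windows: timer tick %d, handing CPU to the guest\n", tick);
            swapcontext(&windows_ctx, &linux_ctx); /* give Linux a slice */
        }
        return 0;
    }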

It was pretty fascinating learning that this thing I'd tried so many years ago (and hadn't managed to get working, sadly) had such a novel approach to virtualization, so I thought it was interesting enough to share here.


Some mandatory mentions:

https://en.wikipedia.org/wiki/MkLinux - Linux on top of the Mach microkernel; I think it was sponsored by Apple (its book, especially, was great)

https://en.wikipedia.org/wiki/L4Linux


There is now PureDarwin[1][2] also. It's still actively developed - last commit was a few hours ago.

[1] http://www.puredarwin.org/

[2] https://github.com/PureDarwin/PureDarwin


Reading through the FAQ, it looks like the kernel can only run on one physical core, and SMT/hyperthreading isn't enabled (maybe that's changed).

Seems cute, but WSL has the full support of the Windows scheduler, which is gonna make the practical choice of tool obvious for folks with Linux workload requirements.


coLinux being 32-bit only seems like a somewhat bigger problem.


Might have something to do with the fact that coLinux is almost as old as Linux itself. I was surprised to see it on the front page of HN, it was an iffy solution back when it was a semi-viable option. Now that hardware-assisted VMs with very low overhead have been commonplace for a decade, it's pretty much completely obsolete.

Edit: I guess development on it started in 2004, it was only a few years later that hardware-assisted virtualization started becoming relatively mainstream.


I imagine that's a more tractable problem to solve but as it stands this is definitely the bigger issue.


Not in 2011, it didn't.


It's a shame the 64-bit port was never completed. WSL might never have needed to happen, although I'm confident it would have happened anyway.


I actually remember the discussion on that. The reason the 64-bit port didn't happen can be traced to one key difference between Windows and Linux: on 64-bit Windows, sizeof(long) is 4, while on 64-bit Linux it is 8. While this looks superficial and has known workarounds like INT_PTR and intptr_t, it's a deal-breaker when you want to marry two huge codebases that aren't using these workarounds properly.
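
To make the difference concrete, here's a small standalone C program (nothing coLinux-specific) showing the LP64 vs LLP64 split:

    /* 64-bit Linux is LP64:    sizeof(long) == 8
     * 64-bit Windows is LLP64: sizeof(long) == 4
     * Round-tripping a pointer through a plain 'long' works on the former
     * and silently truncates on the latter; intptr_t works on both. */
    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        printf("long=%zu  void*=%zu  long long=%zu\n",
               sizeof(long), sizeof(void *), sizeof(long long));
        /* 64-bit Linux:   long=8  void*=8  long long=8
         * 64-bit Windows: long=4  void*=8  long long=8 */

        intptr_t p = (intptr_t)(void *)main;  /* safe on both data models */
        printf("ptr=%p\n", (void *)p);
        return 0;
    }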

Microsoft had enough resources to painstakingly go through massive amounts of existing code, catalogue the differences, and eventually deliver WSL. The small enthusiast group behind coLinux would never commit to anything like this, since it's hard to imagine a less rewarding and less intellectually stimulating job.


While it is true that on 64-bit Windows sizeof(long) is 4 while on 64-bit Linux it is 8, and while it may have been a problem for coLinux (I don't know how it was built, though I doubt it involved much MSVC anyway, because the Linux codebase uses tons of GNUisms), I don't think it was very relevant for WSL1 (they only had to implement the syscall APIs correctly), and probably even less so for the core of WSL2. WSL2 is basically a VM plus some integration bits (think of your VMware / VirtualBox integration tools). This may be tricky for the DirectX forwarding, but I'm not really following the recent devs, and MS made it work already anyway (not upstream, but they have the code and they posted it on the LKML, so you can take a look at it), pretty much with a gigantic copy-paste of tons of data structures, so they somehow managed to handle that.

And I suspect it was not even too hard anyway: identify which universe a type comes from, then identify where you need compat, and use an explicit size to set up the typedef, and it's OK. Windows typically uses uppercase defined types like LONG, and Linux does not, so it's hard to mix them up unintentionally.
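
A sketch of what such a compat shim could look like; the type and struct names are hypothetical, just to illustrate the explicit-width idea:

    /* Pin each universe's 'long' to an explicit width so a structure
     * crossing the boundary means the same thing to both compilers.
     * Alignment/packing rules would still need care in real code. */
    #include <stdint.h>
    #include <stdio.h>

    typedef int32_t nt_long_t;   /* Windows LONG is 32-bit everywhere */
    typedef int64_t lnx_long_t;  /* 'long' on 64-bit Linux is 64-bit  */

    struct co_shared_msg {       /* hypothetical shared structure */
        lnx_long_t lnx_jiffies;  /* 64-bit member first limits padding */
        nt_long_t  nt_status;
    };

    int main(void)
    {
        /* Member widths are now identical whichever compiler builds this. */
        printf("sizeof(struct co_shared_msg) = %zu\n",
               sizeof(struct co_shared_msg));
        return 0;
    }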


OK, I've found the page [0] outlining the blockers.

It's not a hard task; it's just extremely boring and repetitive, so open-source developers, who are driven by fun rather than tickets and deadlines, won't bother doing it.

[0] https://colinux.fandom.com/wiki/Dashboard_for_developing_a_6...


Oh, I see the architecture; it makes sense in their case to consider it an important issue, depending on the surface area to audit and the amount of work it would take. The architectures of WSL1 and WSL2 are different, and I think neither has this problem (maybe occasionally in some very limited, indirect fashion that is easy to handle, but both were developed for 64-bit from the beginning, so they obviously did not attempt to share the same structs between both worlds with bare 'long' in them).


On a tangential note, I must say that I do not quite appreciate this obsession with 64 bits, especially for "smaller" use cases, such as a PC for "everyday" use. There are still plenty of perfectly usable machines sold with 4GB of RAM. Also, pointers are everywhere, and 64-bit ones take twice as much space, so the gain is not as big as one might think.


I still use an EeePC on a fairly regular basis, so I agree with your statement. But I also think that the desire to drop support for older platforms is itself a concession that it's too hard to engineer software at an abstract enough level where the ISA doesn't really matter. I understand that maybe fewer people are using those machines, but outside of some really platform-dependent code in the deeper parts of the kernel, how much should the ISA really matter? Of course, in reality it does, in the sense that virtually nobody writes truly "portable" C code; instead we encode all kinds of implicit assumptions about things. But that doesn't mean we should accept this, or that the solution is to bake those assumptions into the system as a whole and stop supporting architectures that violate them. Instead, supporting more architectures and finding more instances where implicit assumptions about hardware behavior are violated is actually beneficial to constructing a more abstract and easily portable system, which incidentally can benefit all the other architectures as well.


A few years after the project was created, I was contacted occasionally by people who wanted to assist in developing a 64-bit port. This required the same technical depth in kernel hacking as recreating the thing, and quite a lot of work. Unfortunately I was occupied with other things, and it never came to fruition. Also, hardware-assisted virtualization came along around the same time and made the project somewhat obsolete. It was still very good for its time!


Licensing issues? Too power hungry for embedded ARM devices?


As reasons why the port never happened? I don't really know, but the sense I get is that it just got too hard to find and sustain the kind of developer interest it needed. Again, though, I don't really know; my familiarity with coLinux tops out at having been an enthusiastic user, and even that was quite a long time ago now.

As reasons for WSL to happen anyway? It's hard to know, despite my earlier glib comment that it would. It's plausible that a 64-bit coLinux might have found enough adoption to provide the same support for Microsoft's Windows strategy that WSL does, although coLinux's FOSS licensing model would certainly complicate that a lot - but, on the third hand, how much does it cost to convince a dev team it's time for an acquihire and a community fork? And how much is WSL or something like it worth to Microsoft, anyway? I know a fair number of devs who are glad to have it, but the next I meet who actually switched (or switched back) to Windows for it will be the first.


I used this to stay sane at my first dev job at IBM--we were allowed to "use Linux" but most of the required software wouldn't run. Colinux to the rescue!


I loved coLinux! Was super simple to set up and way less overhead than a VM.

People talk a lot about WSL2 as if it's "already here", but WSL2 still requires the latest update of Windows 10 from two months ago, or a slightly older build along with an "insider preview" activated copy of Windows, and you have to enable Hyper-V. Not all devices/users will be able to support all of this and it will break compatibility with some apps.

Given that, your other option is WSL1, which is not really a replacement. If you want to run Docker on it, you either have to install Docker for Windows (which may also conflict with Hyper-V apps) or use the oldest stable version of Docker on Linux along with some annoying hacks to be able to run a container. After all of this, you finally run some apps, and it feels like half your system's performance is gone, with the apps running at about 1/5th their normal speed.

Maybe in a year most people will be on the right build of Windows for WSL2, and maybe we'll have patched all the Hyper-V conflicting apps, and maybe there'll be a way to use it without a long HOWTO and researching buggy commands. Until then, a VM is way easier, more functional, faster, and more reliable.


>"How does it work

Unlike in other Linux virtualization solutions such as User Mode Linux (or the aforementioned VMware), special driver software on the host operating system is used to execute the coLinux kernel in a privileged mode (known as ring 0 or supervisor mode).

By constantly switching the machine's state between the host OS state and the coLinux kernel state, coLinux is given full control of the physical machine's MMU (i.e., paging and protection) in its own specially allocated address space, and is able to act just like a native kernel, achieving almost the same performance and functionality that can be expected from a regular Linux which could have run on the same machine standalone."

This sounds great and is a noteworthy achievement!

Favorited.

Although (and this is just me geeking out here, and doing some imaginary future engineering in my mind!), wouldn't it be great if someone modified both Linux and Windows -- to use a common, neutral, small-as-possible microkernel?

You know, take out the common stuff that both of them do, and put all of that into a microkernel, then change them so they can happily run alongside one another, supported by that microkernel at the center?

Note to future self (when I have the time): Look into doing this... what would be learned about microkernels and microkernel design -- if someone went down this path?

(Yes, I know about Mach and other microkernels... maybe the job would be fitting Linux and other OS's to an existing microkernel... that would certainly be easier than generating the microkernel from scratch -- but not as educational...<g>)


I used it for many years, up to the end of 2008. Then I realized that I was using only software that was also available on Linux, and I installed Ubuntu 8.04 with Windows in VirtualBox (IE was important back then).

coLinux worked very well. Thanks to the authors.


There was a "distribution" called "pubuntu" (aka Portable Ubuntu) that makes use of coLinux, along with an X server (probably Xming) and even PulseAudio running on the host Windows system. For a while I used it to play around with Android porting (that was around Gingerbread), compiling kernels and Android with no issues at all. Fun times.


Good old times; I made the CentOS, Fedora and openSUSE images.

Used this a lot for cross-platform development, like C#/.NET and Mono. Used it with an X server on Windows (Xming?) and ran MonoDevelop or gvim.


Why didn't Microsoft go with something like this for WSL, instead of using a VM for WSL2?


There's an interesting thread [1] on github.com regarding this.

Here is a copy of the comment I added there:

I was not surprised when Microsoft took this approach with WSL2, as I'd gone down a similar path when writing coLinux. I considered extending an existing project named LINE, which is more similar in architecture to WSL1. Not having the full resources of a development team, I took the easier cooperative VM approach. Re-implementing stuff is a pain. Integrating is more fun.

However, I think both WSL1 and WSL2 have their added value as active projects. WSL1 is good for being more 'in the Windows ___domain of things' compared to WSL2, and for not requiring a hardware-accelerated hypervisor behind the scenes. As for WSL2: back when writing coLinux, I imagined the things that could have been gained if I had had access to the Windows internals, especially regarding memory management. I believe that the Windows dev team has a much better chance of making Linux more compatible and smoothly integrated, in a performant way, with this approach.

Perhaps it's too late, but I wish the two approaches had been given different names, so as not to suggest that one is entirely newer or better than the other.

[1] https://github.com/microsoft/WSL/issues/4022


Cooperative multitasking is, in general, less than optimal in terms of stability. Both kernels run in the same address space and must voluntarily cede control to the other, so any problem with either one writing to the wrong place or failing to yield can cause a system failure. In addition, there is the latency introduced when Linux has control and receives a hardware interrupt that it must then ferry to Windows for processing. In general, they can't both be running at the same time, unlike with other virtualization methods.
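
That ferrying is easy to picture as a queue that the Linux side fills while it owns the CPU and the Windows side drains at the next switch; a toy sketch with invented names (the real coLinux plumbing is more involved):

    #include <stdio.h>

    #define IRQ_QUEUE_LEN 64

    static unsigned char pending_irqs[IRQ_QUEUE_LEN];
    static int head, tail;

    /* Stub standing in for whatever reinjects the IRQ into Windows. */
    static void forward_to_windows_isr(unsigned char irq)
    {
        printf("replaying IRQ %u to the host\n", irq);
    }

    /* Runs on the coLinux side while Linux owns the CPU. */
    void co_record_host_irq(unsigned char irq)
    {
        pending_irqs[tail] = irq;
        tail = (tail + 1) % IRQ_QUEUE_LEN;  /* overflow handling omitted */
    }

    /* Runs right after control returns to Windows. */
    void co_replay_pending_irqs(void)
    {
        while (head != tail) {
            forward_to_windows_isr(pending_irqs[head]);
            head = (head + 1) % IRQ_QUEUE_LEN;
        }
    }

    int main(void)
    {
        co_record_host_irq(1);    /* e.g. keyboard fires during a Linux slice */
        co_record_host_irq(14);   /* e.g. primary IDE */
        co_replay_pending_irqs(); /* Windows gets the CPU back */
        return 0;
    }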

The real strength of this approach I think lies in the total time and manpower required to get it working - the paper on their web site says that from the day he sat down to start the project, it took him roughly one month until he was able to run KDE programs, and the total modifications to the Linux kernel were only a few thousand lines of code. I find this pretty incredible in itself.


That's true. The effort was very concentrated.

Those days in late 2003 were crazy. Waking up at noon to work on it until 9:00pm, and then off to a night shift writing boring test-script stuff until the morning. And also weekends. It was like a full-time job with extra hours during that month. It was full head-on stamina at age 21, and I don't even drink tea or coffee.

Interestingly, if it wasn't for the boring night-shift job I had back then, I would have never found all that time back at home during the day to do all this. And once I figured out how I wanted to write it, nothing stopped me until it worked.


Security and OS stability come to mind.

In a time when we are finally migrating to better-isolated architectures on mainstream OSes, having a full OS running as a driver isn't that appealing.


I used to use andLinux for my CS work before I grew out of gaming for good. It was great.


Interesting how this compares with the newer Windows Subsystem for Linux.


Basically, WSL2 is a VM, whereas coLinux acts like a device driver.


These two are not mutually exclusive. To be more exact, coLinux is a VM running under a device driver. The driver is mainly just a bridge to get into the privileged execution path and manage the resources it needs.


Yes, I would like to know too - is it more battery friendly or less? Does it have upsides or downsides compared to WSL1 or WSL2?


One downside is that it's old and only works on 32-bit machines. It also only mentions up to Windows 7; I wonder if it would work on Win10 (is there a 32-bit Win10?)


Yes, 32-bit Windows 10 exists... (and is in fact the only choice for those who want to run 16-bit Windows apps)


If I understand correctly, this is a limitation of x86_64 backwards compatibility rather than a limitation of Windows. Enough of the 16 bit behavior was preserved in 32 bit protected mode for Win16 apps to run, but nearly all of it doesn't work in 64 bit long mode (things like virtual 8086 mode aren't available in 64 bit mode). There's also all the GDT, LDT, TSS stuff on 32 bit x86 that was either dropped or reduced for x86_64.

Your only option at this point is to use a virtual machine. Intel has supported real mode VMs since Sandy Bridge and I'm not sure when AMD got it, but all of the Zen parts definitely support it.


It was just a choice Microsoft made. Wine happily runs win16 apps on x86_64 Linux.

There was a Wine-derived thing that used to work on Win64 to run win16 apps, but something in later Windows versions broke that.
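
For reference, the Linux facility that win16 support builds on is modify_ldt(2), which can still install 16-bit segment descriptors on x86_64. A rough sketch, with placeholder base/limit values rather than anything Wine really uses:

    /* Install a 16-bit code-segment descriptor into the LDT. */
    #include <asm/ldt.h>        /* struct user_desc */
    #include <stdio.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    int main(void)
    {
        struct user_desc d = {
            .entry_number    = 0,
            .base_addr       = 0,        /* placeholder */
            .limit           = 0xffff,   /* placeholder */
            .seg_32bit       = 0,        /* 16-bit segment */
            .contents        = 2,        /* code segment */
            .read_exec_only  = 0,
            .limit_in_pages  = 0,
            .seg_not_present = 0,
            .useable         = 1,
        };
        /* 0x11 = write an LDT entry using the modern user_desc layout */
        long ret = syscall(SYS_modify_ldt, 0x11, &d, sizeof(d));
        printf("modify_ldt returned %ld\n", ret);
        return 0;
    }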


> There was a Wine-derived thing that used to work on Win64 to run win16 apps

Does that mean winevdm doesn't work anymore?

https://github.com/otya128/winevdm


Oh, I didn't realize they'd worked around MS removing LDT access. They did it by running in a 16-bit x86 emulator, which is a pretty big hammer, but I guess win16 apps are old enough that the performance is still more than fine on modern CPUs.


A bigger reason for WoW not being available on 64-bit Windows is that HANDLEs would get truncated in that case, which isn't wanted behavior.


It's a limitation of Windows; you can still have 16-bit legacy-mode code segments on amd64 processors under a 64-bit OS. The issue is that handles on 64-bit Windows exceed the maximum HANDLE table size that win16 expects.

It was a tradeoff to intentionally break compatibility at a pain point where it would be acceptable to their customer base.


You could use WineVDM.



