
Given that this is owned by Broadcom now, and they are going all in on squeezing every last drop from ESXi and similar offerings, I wonder what's going to happen with Fusion in the future; while for now you only pay for commercial usage, maybe they are going to let it rot over the years until it's no longer cutting-edge software? Or would they keep it as a loss leader?



If you want to virtualize something with good performance on desktop Windows you use Hyper-V; if you want to do it on a Mac you use Apple's Virtualization framework; if you want to do it on Linux you use KVM.

Desktop virtualization products used to bring the secret sauce with them; now that every OS ships with a well-integrated and well-supported type 1 hypervisor, they have lost much of the reason for existing. There's only so much UI you can put in front of off-the-shelf OS features and still charge hundreds of dollars per year for.


They still need to. You are glossing over the fact that you need to provide device access, USB access, graphics, and a lot of things that are not necessarily provided by the "native" hypervisor (HyperKit does not do even half of what Parallels does, for instance).


> have lost much of the reason

I didn't say they have no reason to exist. I indicated they are moving towards becoming UI shells around standard OS features and/or other commodity software, which they are. Look at UTM, for instance. Even VMware Workstation and VirtualBox on Windows use HyperV under the hood if you have HyperV or WSL features enabled.

While everyone still seems to be busy disagreeing with me because of <insert favorite feature>, I'll mention that HyperV does have official support for transparent GPU paravirtualization with NVIDIA cards, and there are plenty of other open projects in the works that strive to "bleed through" graphics/GPU/other hardware acceleration APIs from host to guest on other platforms and hypervisors. With vendors finally settling on virtio as somewhat of a 'standard pipe' for this, expect rapid progress to continue.


> Even VMware Workstation and VirtualBox on Windows use HyperV under the hood if you have HyperV or WSL features enabled.

VirtualBox is consistently (and significantly) slower when it uses HyperV as a backend than when it uses its original driver, and many features are not supported at all with HyperV. In fact, the GUI actually shows a "tortoise" icon in the status bar when running with the HyperV backend.


What is the use case for using Virtualbox with Hyper-V? Why not just use Hyper-V directly?


For a start, the list of operating systems Hyper-V supports is an order of magnitude shorter than what VirtualBox supports. Likewise for emulated hardware, like 3D, as mentioned a number of times here. The GUI is also much better in VirtualBox.

And Windows often forces HyperV onto you, taking exclusive control of the CPU's virtualization features, thereby forcing VirtualBox to either use Hyper-V as a (terrible) backend... or not run at all.


The use case is mainly interoperability with VirtualBox; they can still keep their own disk/VM formats, guest tools, etc. and use HyperV as the 'virtualization engine'. Users that have workflows that call out to VirtualBox can continue to work; a lot of VM image tools (Vagrant, Packer) continue to work, etc.

But yes, of course you can also change your tools to use HyperV directly.

"Having to use HyperV" is not actually anything nefarious as the other comment seems to imply. You can't have two type 1 hypervisors running cooperatively on the bare metal and you cant implement your type 2 hypervisor hooks if you have a type 1 hypervisor running. So if you have enabled HyperV directly or indirectly by using WSL2 or installing any of the container runtime platforms (Docker Desktop et al) that use it, then you have to use HyperV as your hypervisor.

Note this is different than nested virtualization (ESXi on HyperV, etc.) which is supported but a completely different beast.

For the same reason you cannot run Xen and KVM VMs simultaneously on Linux (excepting nested virtualization).


> "Having to use HyperV" is not actually anything nefarious as the other comment seems to imply. You can't have two type 1 hypervisors running cooperatively on the bare metal and you cant implement your type 2 hypervisor hooks if you have a type 1 hypervisor running.

The nefarious part is that Windows enables Hyper-V even if you don't actually use Hyper-V VMs and never will. KVM doesn't take exclusive control of VMX until you _actually_ run a KVM VM.

By the way, the distinction between type 1 / type 2 is purely academic at this point: there is no definition under which KVM is a type 1 hypervisor and VirtualBox isn't, as they are _literally_ the same thing conceptually: both are a kernel module that implements a VMX manager/root. Same on Windows. The only remaining type 2 hypervisor these days is kqemu, which can still work in binary translation mode (and therefore can work even without access to VMX).


> The nefarious part is that Windows enables Hyper-V even if you don't actually use Hyper-V VMs and never will.

It does not actually enable it by default, but there are many settings or apps that can cause it to become enabled: virtualization-based security, WSL, container tools, etc. Providing a hypervisor and related functionality is part of what a modern OS kernel should do! It's not nefarious!
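
If you're not sure how it ended up enabled on a given box, the hypervisor launch setting is easy to inspect. A quick sketch (Python shelling out to bcdedit; needs an elevated prompt, and there's nothing official about doing it this way):

    import subprocess

    # Show whether the Hyper-V hypervisor is set to load at boot.
    # "bcdedit /set hypervisorlaunchtype off" disables it, "auto" re-enables it
    # (reboot required either way).
    out = subprocess.run(
        ["bcdedit", "/enum", "{current}"],
        capture_output=True, text=True, check=True,
    ).stdout

    for line in out.splitlines():
        if "hypervisorlaunchtype" in line.lower():
            print(line.strip())  # e.g. "hypervisorlaunchtype    Auto"
            break
    else:
        print("hypervisorlaunchtype not set; the hypervisor won't load at boot")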


Why is the Hyper-V backend so much slower?


In truth it's not; however, if the software has to do extra things (for instance, translate I/O calls to a VirtualBox disk format that Hyper-V cannot natively support, or do an extra memcpy on the video framebuffer to get its UI to work), then there will be unavoidable performance impacts. How fast a guest OS "feels" is mostly down to the performance of the virtualized devices and not necessarily to the overhead of virtualization itself.
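
To put a number on the framebuffer example (illustrative resolution and refresh rate, nothing measured):

    # Back-of-the-envelope cost of one extra framebuffer copy per frame.
    width, height, bytes_per_pixel, fps = 2560, 1440, 4, 60

    frame_bytes = width * height * bytes_per_pixel   # ~14.7 MB per frame
    extra_copy_gb_s = frame_bytes * fps / 1e9        # added memory traffic

    print(f"{frame_bytes / 1e6:.1f} MB per frame, "
          f"{extra_copy_gb_s:.2f} GB/s of extra copying just to show the screen")

That kind of overhead lives in the device/UI plumbing, not in the hypervisor itself.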


Yes, it is; even the official documentation mentions it (and recommends you disable Hyper-V), and it is FAQ #1 on the support website.

One of the reasons mentioned is that VirtualBox runs (some) emulated devices in kernel space but is not allowed to do so when running with Hyper-V. The official API forces custom devices to live strictly in user space; only a few basic hardcoded devices are emulated from kernel space.

The "secret sauce" of a desktop virtualizer is in part in the selection of devices it emulates, so this severely cripples VirtualBox.


GPU-P is a pain to keep updated and is half-baked. So many things just don't see or use the NVIDIA drivers properly. If you want 60 fps, Parsec is the only solution I've found for desktop/mobile NVIDIA graphics.


If accessing Windows, look at enabling RemoteFX with an H.264 hardware stream and YUV444 in RDP. See: https://techcommunity.microsoft.com/t5/security-compliance-a...

Once I discovered that, I stopped looking at Parsec. Moonlight/Sunshine (whatever the pair is) is... terrible. And when I was looking, YUV444 wasn't a feature, or at least not one anybody actually knew how to use.


Agreed. Virtualized 3d acceleration in particular still has quite a bit of "secret sauce" left in it.


Today this is mostly implemented by having a guest driver pass calls through to a layer on the host that does the actual rendering. While I agree that there is a lot of magic in making such an arrangement work, it's a terrible, awful idea to suggest that relying on a vendor's emulation layer is how things should be done today.

Proper GPU virtualization and/or partitioning is the right way to do it, and the vendors need to get their heads out of their ass and stop restricting its use on consumer hardware. Intel already does; you can use GVT-g to get a guest GPU on any platform that wants to implement it.


So you say having a decoupled arrangement in software (which happens to be a de facto open standard) is a "terrible awful idea" and that instead you should just rely on whatever your proprietary hardware graphics vendor proposes to you? Why?

And that's assuming they propose anything at all.

Even GVT-g breaks every other Linux release, is at risk of being abandoned by Intel (e.g. how they already abandoned the Xen version) or limited to specific CPU market segments, and already has ridiculous limitations such as a limit on the number of concurrent framebuffers AND framebuffer sizes (why? VMware Workstation offers you an infinitely resizable window, does it with 3D acceleration just fine, and I have never been able to tell if they have a limit on the number of simultaneous VMs... ).

In the meanwhile "software-based GPU virtualization" allows me to share GPUs in the host that will never have hardware-based partitioning support (e.g. ANY consumer AMD card), and allows guests to have working 3D by implementing only one interface (e.g. https://github.com/JHRobotics/softgpu for retro Windows) instead of having to implement drivers for every GPU in existence.


> So you say having a decoupled arrangement in software (which happens to be a de facto open standard) is a "terrible awful idea" and that instead you should just rely on whatever your proprietary hardware graphics vendor proposes to you? Why?

Sandboxing, and resource quotas / allocations / reservations.

By itself, a paravirtualized GPU just treats every userland workload that any guest launches onto the GPU as a sibling of all the others — exactly as if there were no virtualization and you were just running multiple workloads on one host.

And so, just like multiple GPU-using apps on a single non-virtualized host, these workloads will get "thin-provisioned" the resources they need, as they ask for them, with no advance reservation; and workloads may very well end up fighting over those resources, if they attempt to use a lot of them. You're just not supposed to run two things that attempt to use "as much VRAM as possible" at once.

This means that, on a multi-tenant hypervisor host (e.g. the "with GPU" compute machines in most clouds), a paravirtualized GPU would give no protection at all from one tenant using all of a host GPU's resources, leaving none left over for the other guests sharing that host GPU. The cloud vendor would have guaranteed each tenant so much GPU capacity — but that guarantee would be empty!

To enforce multi-tenant QoS, you need hardware-supported virtualization — i.e. the ability to make "all of the GPU" actually mean "some of the GPU", defining how much GPU that is on a per-guest basis.

(And even in PC use-cases, you don't want a guest to be able to starve the host! Especially if you might be running untrusted workloads inside the guest, for e.g. forensic analysis!)
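
A toy model of the allocation-policy difference being described (made-up numbers, not tied to any real driver or API):

    # Toy model: thin-provisioned VRAM vs. per-guest reservations.
    class GpuVram:
        def __init__(self, total_mb, reservations=None):
            self.total = total_mb
            self.used = {}                      # guest -> MB currently allocated
            self.reserved = reservations or {}  # guest -> MB guaranteed (empty = thin)

        def alloc(self, guest, mb):
            cap = self.reserved.get(guest, self.total)  # thin: cap is "the whole GPU"
            if self.used.get(guest, 0) + mb > cap:
                raise MemoryError(f"{guest}: over its {cap} MB share")
            if sum(self.used.values()) + mb > self.total:
                raise MemoryError(f"{guest}: host GPU exhausted by other tenants")
            self.used[guest] = self.used.get(guest, 0) + mb

    # Thin-provisioned: tenant A can starve tenant B.
    thin = GpuVram(8192)
    thin.alloc("tenant-a", 8192)
    try:
        thin.alloc("tenant-b", 1024)
    except MemoryError as e:
        print(e)  # tenant-b: host GPU exhausted by other tenants

    # Partitioned: each tenant's guarantee actually holds.
    part = GpuVram(8192, reservations={"tenant-a": 4096, "tenant-b": 4096})
    try:
        part.alloc("tenant-a", 8192)
    except MemoryError as e:
        print(e)  # tenant-a: over its 4096 MB share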


Why does multi-tenant QoS require hardware-supported virtualisation?

An operating system doesn't require virtualisation to manage application resource usage of CPU time, system memory, disk storage, etc – although the details differ from OS to OS, most operating systems have quota and/or prioritisation mechanisms for these – why not for the GPU too?

There is no reason in principle why you can't do that for the GPU too. In fact, there have been a series of Linux cgroup patches going back several years now, to add GPU quotas to Linux cgroups, so you can setup per-app quotas on GPU time and GPU memory – https://lwn.net/ml/cgroups/20231024160727.282960-1-tvrtko.ur... is the most recent I could find (from 6-7 months back), but there were earlier iterations broader in scope, e.g. https://lwn.net/ml/cgroups/20210126214626.16260-1-brian.welt... (from 3+ years ago). For whatever reason none of these have yet been merged to the mainline Linux kernel, but I expect it is going to happen eventually (especially with all the current focus on GPUs for AI applications). Once you have cgroups support for GPUs, why couldn't a paravirtualised GPU driver on a Linux host use that to provide GPU resource management?

And I don't see why it has to wait for GPU cgroups to be upstreamed in the Linux kernel – if all you care about is VMs and not any non-virtualised apps on the same hardware, why couldn't the hypervisor implement the same logic inside a paravirtualised GPU driver?
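
For the sake of argument, here is roughly what that host-side enforcement could look like if a GPU controller lands. The gpu.* file names below are hypothetical (the patches above are not mainline); the surrounding cgroup-v2 mechanics mirror how cpu.max works today:

    from pathlib import Path

    CGROUP_ROOT = Path("/sys/fs/cgroup")

    def limit_vm_gpu(vm_pid: int, name: str, gpu_share_pct: int, vram_mb: int):
        """Put a VM's host process into a cgroup with (hypothetical) GPU limits."""
        cg = CGROUP_ROOT / name
        cg.mkdir(exist_ok=True)

        # Hypothetical knobs, modeled on cpu.max's "quota period" format.
        (cg / "gpu.max").write_text(f"{gpu_share_pct * 1000} 100000\n")
        (cg / "gpu.memory.max").write_text(f"{vram_mb * 1024 * 1024}\n")

        # Real cgroup-v2 step: move the VM's QEMU process into the group.
        (cg / "cgroup.procs").write_text(f"{vm_pid}\n")

    # e.g. limit_vm_gpu(qemu_pid, "vm-guest1", gpu_share_pct=25, vram_mb=2048)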


> Sandboxing, and resource quotas / allocations / reservations.

But "sandboxing" is not a property of hardware-based virtualization. Hardware-based virtualization may even increase your surface attack, not decrease it, as now the guest directly accesses the GPU in some way software does not fully control (and, for many vendors, is completely proprietary). Likewise, resource quotas can be implemented purely in a software manner. Surely an arbitrary program being able to starve the rest of the system UI is a solved problem in platforms these days, otherwise Android/iOS would be unusable... Assuming the GPU's static partitioning is going to prevent this is assuming too much from the quality of most hardware.

And there is an even bigger elephant in the room: most users of desktop virtualization would consider static allocation of _anything_ a bug, not a feature. That's precisely why most desktop virtualization wants to do thin provisioning of resources even when it is difficult to do so (e.g. memory). I.e. we are still seeing this from the point of view of server virtualization, which just shows how desktop virtualization and server virtualization have almost diametrically opposed goals.


A soft-GPU driver backed by real hardware "somewhere else" is a beautiful piece of software! While it certainly has applications in virtual machines, and may even be "optimal" for some use cases like desktop gaming, it ultimately doesn't fit the modern definition of "virtualization":

I am talking about virtualization in the sense of being able to divide the hardware resources of a system into isolated domains and give control of those resources to guest operating systems. Passing API calls from guest to host for execution inside of the host ___domain is not that. A GPU providing a bunch of PCIe virtual functions which are individually mapped to guests interacting directly with the hardware is that.

GPU virtualization should be the base implementation and paravirtualization/HLE/api-passthrough can still sit on top as a fast-path when the compromises of doing it that way can be justified.


I would say the complete opposite. The only reason one may have to use a real GPU driver backed by a partitioned GPU is precisely desktop gaming, as there you are more interested in performance than anything else and the arbitrary limits set by your GPU vendor (e.g. 1 partition only) may not impact you at all.

If you want to really divide hardware resources, then, as I argue in the other thread, doing it in software is clearly a much more sensible way to go. You are not subject to the whims of the GPU vendor, and the OS, rather than the firmware, controls the partition boundaries. Same as what has been done for practically every other virtualized device (CPUs, memory, etc.). We never expected the hardware to need to partition itself; I'd even have a hard time calling that "virtualization" at all. Plus, the way hardware is designed these days, it is highly unlikely that the PCI virtual functions of a GPU function as an effective security boundary. If it weren't for performance, using hardware partitioning would never be a worthwhile tradeoff.


Yeah, if you care about 3D acceleration on a Windows guest and aren't doing PCIe passthrough, then KVM sure isn't going to do it. There is a driver in the works, but it's not there yet.

edit: I made a mistake and got confused in my head with QEMU and the lack of paravirtualized support. (It does have a PV 3D Linux driver, though.)


KVM will happily work with real virtual GPU support from every vendor; it's the vendors (except for Intel) that feel the need to artificially limit who is allowed to use these features.


I was mostly hoping QEMU would get paravirtualized support some day, because it is leagues ahead of VMware Player in speed. Everyone's hopes are riding on https://github.com/virtio-win/kvm-guest-drivers-windows/pull....


I guess my comments make it sound like I don't appreciate this type of work; I absolutely do. An old friend of mine[1] was responsible for the first 3D support in the VMware SVGA driver, so this is a space I have been following for literally decades at this point.

I just think it should be the objective of vendors to offer actual GPU virtualization first and to support paravirtualization as an optimization in the cases where it is useful or superior and the tradeoffs are acceptable.

[1] https://scanlime.org/


There has been a driver "in the works" for the past decade. Never coming. MS/Apple do not make it easy anyway.


Do any of the commercial hypervisors do that today?


Pretty much all of them do, though the platform support varies by hypervisor/guest OS. Paravirtualized (aka non-passthrough) 3D acceleration has been implemented for well over a decade.


However NVIDIA limits it to datacenter GPUs. And you might need an additional license, not sure about that. In their view it's a product for Citrix and other virtual desktops, not something a normal consumer needs.


Yes and no; you can use GPU partitioning in Hyper-V with consumer cards and Windows 10/11 client on both sides, it’s just annoying to set up, and even then there’s hoops to jump through to get decent performance.

If you don’t need vendor-specific features/drivers, then VMware Workstation (even with Hyper-V enabled) supports proper guest 3D acceleration with some light GPU virtualization, up to DX11 IIRC. It doesn’t see the host’s NVIDIA/AMD/Intel card and doesn’t use that vendor’s drivers, so there’s no datacenter SKU restrictions. (But you are limited to pure DX11 & OpenGL usage, no CUDA etc.)


VMware Workstation still has a massive leg up in 3D (and to some extent, 2D) video acceleration. Many programs need this to run smoothly these days.


Am I the only one who explicitly does not want a type 1 hypervisor on my desktop? Am I outdated?

I like workstation and virtualbox because they're controllable and minimally impactful when I'm not using them.

Installing Hyper-V (and historically even WSL; not sure if that's still the case, but it was never sufficiently explicit) now makes my primary OS a guest, with potential impact on my gaming, multimedia, and other performance (and occasional flaky issues with drivers and whatnot).

Am I the only grouchy geezer here?:-)


Apart from Hyper-V and WSL, some Windows 11 security features also depend on virtualization.

Did you measure the performance hit? How often did you encounter driver trouble?


I used to worry about this overhead too but this appears to be nothing on modern CPUs. I had minuscule differences here-and-there on Intel 9th gen (9900K) but my current Intel 13th gen (13900K) has zero performance decrease with HV enabled. (At least on any perceptible level)


Thanks! Can you share your usage patterns: what kind of usage, and does it include heavy gaming and media?

Note, I'm less worried about percentage performance than about some things just not working well at all, because of assumptions of direct hardware access versus the reality of running under Hyper-V. I.e. are ALL hardware calls and capabilities 100% completely available once your main Windows install is running as a VM? Not "most" or "the majority should be good", but actually, seamlessly, all of them? My understanding was no, but things may have changed for the better.


Hyper-V doesn't have 3D acceleration, so if you play games or want to use Linux desktops it's pretty bad.


WSL2 seems to virtualize the GPU pretty well; I had an easier time getting my GPU to work for machine learning inside WSL2 than I have had with plain Windows and Linux in the past.


It does, but it's a whole rabbit hole of specialized settings from what I can tell. Toying around with GPU-PV in Hyper-V with a Windows 11 guest was complicated and ultimately had performance and compatibility problems. (My previous PC would even deadlock when I used the onboard video encoder from within the VM.)


If you want to pass through USB, SCSI, or something like that, then VMware Workstation is better than Hyper-V for sure.


And if you want to do it on FreeBSD you use bhyve.


Snapshot support in at least libvirt-based KVM still sucks. :(

As in, technically it supports snapshots.

But try to have a tree of different snapshots of a VM based upon different points in time.

Super useful when doing integration work and testing out various approaches that evolve over time as things progress.

With VMware Workstation I've been able to do that for years; with libvirt-based KVM it's not even possible.

To be clear, I really wish it could be done in Libvirt/KVM too. ;)
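
For reference, this is roughly what the snapshot API looks like from libvirt-python (the ___domain name "devbox" is made up; whether the resulting tree is actually usable for the branching workflow above is exactly the pain point):

    import libvirt

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("devbox")

    # Take a named snapshot of the current state.
    dom.snapshotCreateXML("""
      <domainsnapshot>
        <name>before-approach-a</name>
        <description>baseline before trying approach A</description>
      </domainsnapshot>
    """)

    # Walk what libvirt knows about parent/child relationships.
    for snap in dom.listAllSnapshots():
        try:
            parent = snap.getParent().getName()
        except libvirt.libvirtError:
            parent = "(root)"
        print(f"{snap.getName()}  <-  {parent}")

    # Roll back to a given point in time.
    dom.revertToSnapshot(dom.snapshotLookupByName("before-approach-a"))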


I would never use "performance" and "Hyper-V" in the same sentence.


It’s not the UI you charge for, it’s the Enterprise Support Plan.


Hyper-V does not have PCI passthrough, while ESXi does, and with that you lost me. Also, I want to test my multiplatform software on all major OSes (macOS included); ESXi is then the only one that can run Darwin in parallel with the rest.


Worse, VMware on Windows is actively worse than Hyper-V.


Workstation was already rotting for all intents and purposes. Fusion has likely also been rotting since the switch to ARM, but I never tried it.

The entire space ("desktop virtualization") is dead. Even VirtualBox which I praised a year ago seems to be slowing down.

This likely just poisons the well for a market they had all but abandoned.


VirtualBox was never good; it always felt half-baked.


My point is that at least their paid support fixed the things I asked them to fix; I cannot say the same of VMware, where support was already non-existent a couple of years ago (I stopped using them the moment someone here on HN said the entire Workstation staff had been fired and replaced with a skeleton overseas crew, and this was way before Broadcom).


Years ago (pre-Oracle) it was enough though: I have fond memories of using it with Vagrant and being happy.


Yes, when it was Sun VirtualBox I remember it was a favorite for testing out other operating systems for free with a simple UI. It wasn't the most powerful or flexible, but it's what was recommended if you wanted to (for example) try Ubuntu on your Windows host without dual boot or using another disk, etc.


Pretty much all desktop virtualization/VDI/etc. products have been de-emphasized by essentially everybody, except to the degree that they're a largely free byproduct of server virtualization. I doubt any company is devoting more than minimal resources to these products, maybe Apple more than others. Red Hat, for example, even sunsetted its "traditional" enterprise virtualization product in favor of KubeVirt on OpenShift. And its VDI product was pretty much abandoned years ago.


There are new VDI contenders still coming up though. This caught my attention recently (due to trialling Proxmox for another purpose):

https://www.youtube.com/watch?v=tLK_i-TQ3kQ

There's clearly still demand for VDI solutions too. A recent example:

https://forum.proxmox.com/threads/vdi-solution-for-proxmox.1...


I don’t know anything about high-end workstations really. But I wonder if the whole ecosystem is in a rough spot generally? Seems like cloud tooling is always getting easier.

Shame really, people do fun stuff with excess compute.


Fusion became basically worthless when you couldn't easily run Intel Windows on a Mac anymore because the underlying processor changed.


Worthless for some use cases, but there are reasons to run Mac-on-Mac VMs, including testing, development, and security (isolation). The first two also apply to some folks (maybe not many) for Linux VMs.


ARM Windows seems to be able to run x86 apps, though. I use ARM Windows on an M1 Mac to run BMW E-Sys and it works well enough.


Can’t even run x86 Linux on Mac right now


UTM supports Rosetta in Linux VMs: https://docs.getutm.app/advanced/rosetta/

The OS still needs to be ARM, as far as I know, but you can then use Rosetta to speed up x86_64 Linux binaries.

Docker Desktop also uses this to run x86_64 Docker images, and in many cases performance is quite close to the native ARM binaries, but this heavily depends on the workload.
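
You can see this from the Docker side too: force the amd64 variant of an image and the container still reports x86_64 even though the host is arm64. A sketch (assumes Docker Desktop is running, with Rosetta enabled in its settings):

    import subprocess

    # Run the amd64 variant of a small image and print the architecture it sees.
    arch = subprocess.run(
        ["docker", "run", "--rm", "--platform", "linux/amd64",
         "alpine", "uname", "-m"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()

    print(arch)  # "x86_64", even though the host CPU is arm64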


Linux has always been a bit of an easier deal because you can (often, not always) just get a version for ARM that is "close enough".


In my case that isn't close enough, as we have deps that aren't ported to ARM.


QEMU works as well as it always did, which is to say slow as hell but good enough for automation in a pinch.


... I've had a Fusion licence since v10 was current (~2017) and I don't think I've used it to run Windows even once.


I wonder how common that is though.

In terms of people who might consider Fusion you have:

- People who only use Windows

- People who only use macOS

- People who only use Linux

- People who virtualize Windows on macOS

- People who virtualize Linux on macOS

- People who run FreeBSD or similar on their computers

- People who virtualize FreeBSD or similar on macOS

- People who virtualize various operating systems on Windows

- People who virtualize various operating systems on Linux

- People who virtualize various operating systems on FreeBSD or similar

And I would guess that the largest group of people that use Fusion use it for running Windows in a VM on macOS.

I would guess that the people who develop for Linux servers would mainly use Docker if they run macOS, and that also relies on VM, but not using Fusion.


What about people who virtualize various operating systems on macOS? That was my entire team at a prior engagement (at Microsoft, as it happens…). I suspect it's a large number; developers tend to like macOS, so if you're making a cross-platform application and want to be able to test anything at all, you need a VM.


> I would guess that the people who develop for Linux servers would mainly use Docker if they run macOS, and that also relies on VM, but not using Fusion.

x86 Docker on an ARM Mac is an insanely complex setup: it runs an ARM Linux VM inside Hypervisor.framework that then uses a Rosetta client via binfmt, which <somehow> communicates with the host macOS to set up all the Rosetta-specific stuff and prepare the client process to use TSO for fast x86 memory access.

Unfortunately, Apple heavily gates anything Rosetta; I'm amazed Docker got enough coordination done with them. QEMU didn't: as a result they don't support anything Apple ARM-specific and don't plan to unless Apple significantly opens up access and documentation; TSO, for example, is gated behind private entitlements.


There's surely no mystery as to how Docker is doing this:

https://developer.apple.com/documentation/virtualization/run...


Yeah, that's a "how to use it in the simple case"; it's not a "here is how this shit works under the hood so you can use it for more than just running userland processes", and it also doesn't state the limitations (e.g. which instructions are supported and which are not).


I had Fusion and ran Windows with it early on (it could even play some games!) and since I had it, I used it for Linux and some other things.

Those are now done with an old ESXi box or other forms of VMs. Maybe I should still look into the various VM options, but I don't have any pressing needs.


The argument given was that VMware became useless because of the switch to ARM.

There are more hypervisor managers available on macOS now than there have ever been before, largely because Apple provides the underlying framework to do most of the hard work... but there is clearly significant demand to run VMs on ARM Macs still, regardless of whether that includes running Windows (which does exist for ARM too).


Well, I use Parallels to run a Windows VM for work (on ARM). It's its own little bubble universe, completely isolated from my Mac desktop, but available at a swipe.

I do use Fusion as well (on my laptop), and have a Windows VM there as well, but solely to run older games. Works fine.



