FreeBSD Bhyve Virtualization (vermaden.wordpress.com)
143 points by vermaden on Aug 18, 2023 | 29 comments



I've recently switched from Proxmox to bhyve while upgrading my dedicated box lease. I use it for prod mail & web and various lab work. I should have done it earlier, as I don't really need the things bhyve lacks (suspend, HA/clustering) for my use cases.

    # vm list
    NAME         DATASTORE  LOADER     CPU  MEMORY  VNC  AUTO     STATE
    access       default    bhyveload  2    8G      -    No       Stopped
    cicd         default    bhyveload  2    6G      -    No       Running (33065)
    feeds        default    grub       2    4G      -    No       Running (35227)
    gate         default    bhyveload  2    4G      -    Yes [1]  Running (21012)
    k8s-master0  default    grub       2    4G      -    Yes [4]  Running (1543)
    k8s-node0    default    grub       6    64G     -    Yes [5]  Running (9988)
    mail         default    bhyveload  3    8G      -    Yes [3]  Running (87082)
    ml           default    grub       4    24G     -    No       Running (64236)
    registry     default    bhyveload  2    2G      -    No       Running (47712)
    repos        default    bhyveload  2    2G      -    No       Running (14767)
    web          default    bhyveload  2    4G      -    Yes [2]  Running (92365)
Other than that, I have been using bhyve on my laptop daily since around 2015-2016. It was somewhat painful at first: I had to bake a CD key into a Windows ISO for a headless install, but now VNC support exists and it's easy to output any graphical installer via VNC.

bhyve doesn't offer an API and doesn't have the most user-friendly interface (vm-bhyve[1] to the rescue!), but overall I couldn't be happier with its set-and-forget stability, typical for FreeBSD.
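For anyone who hasn't tried it, a typical vm-bhyve workflow is only a handful of commands. A rough sketch, assuming the default template and the ISO are already in place; the NIC, VM name and sizes are just examples:

    # vm switch create public
    # vm switch add public em0
    # vm create -s 20G testvm
    # vm install testvm FreeBSD-13.2-RELEASE-amd64-disc1.iso
    # vm console testvm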

[1] https://github.com/churchers/vm-bhyve


You could just use plain libvirt/KVM; virt-manager works well enough and can be used to manage remote libvirt hypervisors too.

At my previous company we pretty much went from Proxmox to the bare tools once we got the hang of it.


I'm using a system that I know and a hypervisor that I have come to trust. The previous setup had different requirements.


Hey vermaden, great article! I saw you had some problems with virt-manager, and I did too! There is a solution to your problem:

> I was able to start FreeBSD 13.2 installation … but it got frozen at the kernel messages and nothing more happened.

If you look at the bootloader screen, you'll notice it says something like "5. Cons: Dual (Serial primary)".

Pressing the 5 key at the bootloader screen will toggle the output from "Serial primary" to "Video primary" or something to that effect. Once you toggle this setting, it should fix the output hanging and you should get a login prompt.
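If you want that choice to persist across reboots, I believe the same thing can be set in the guest's /boot/loader.conf by listing the video console first; a sketch, assuming you want video to be the primary console:

    console="vidconsole,comconsole"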

The other way to fix this is through the virt-manager menu: you can simply switch the view from the Graphical Console to the Serial Console, under View -> Consoles -> Serial 1.

Hope this helps make your experience a little better under virt-manager.


Thank You - I will check that out.

If it works out I will update the article and maybe even add another dedicated virt-manager/libvirt Bhyve article.

Regards, vermaden


For FreeBSD VMs I would recommend removing the VNC console. If the only console is a TTY, FreeBSD does the right thing out of the box. It also requires a lot less bandwidth to tunnel a serial console than a video stream of a text console.
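For reference, with the libvirt bhyve driver the serial console is, as far as I know, defined as an nmdm device in the domain XML; a sketch where the device paths are just examples:

    <serial type='nmdm'>
      <source master='/dev/nmdm0A' slave='/dev/nmdm0B'/>
      <target port='0'/>
    </serial>

You can then attach to it from the host with something like 'cu -l /dev/nmdm0B'.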


I tried it and the Serial console was greyed out - I could not select it.

It would probably work if I forced that in the FreeBSD loader(8), but there are other issues.

The Network Type 6 is not supported, so I was not able to create a NAT network.

I could start a VM without any networks, but that would render it unusable for me.

I will try again in the future when the libvirt(8) driver for Bhyve is better implemented and more complete.

Regards, vermaden


Great article! As a FreeBSD user (albeit server-side only), I always enjoy reading articles like this. Keep up the good work vermaden, your blog is a real treat!

Bhyve virtualization is of special interest to me, as I'm in the process of writing my own little (small-time, amateur-use only) wrapper for bhyve. Essentially a re-implementation of a subset of churchers/vm-bhyve in Python.

I was under the impression that virtio-blk + zvols were the best option for storage (although I haven't done any tests so far, just a naive assumption)! Will definitely explore the nvme + raw-file route now.
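For a wrapper, the difference at the bhyve(8) level should just be the device emulation and the backing path; roughly like this, where the slot numbers and paths are made-up examples:

    # virtio-blk backed by a zvol
    -s 4:0,virtio-blk,/dev/zvol/zroot/vm/guest/disk0

    # NVMe emulation backed by a raw file
    -s 4:0,nvme,/vm/guest/disk0.img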


There was a bunch of discussion about the live migration features of bhyve on a recent episode of “Oxide and Friends”

https://m.youtube.com/watch?v=eQR98smFYTc


It's funny this article popped up. I have been trying to get the Oxide VMM working on FreeBSD in my spare time.

Too bad I don't know nearly enough about Rust, bhyve, OpenBSD or illumos to ever come close to getting this working.


While illumos carries a port of bhyve which is largely similar to upstream FreeBSD, there are several areas where it diverges. Propolis (the userspace VMM component) relies on some of those differences to function, especially when it comes to live migration.


Well, I recognize that alias from the Propolis repo :).

Thanks for the heads up. Will probably ditch this effort.

Really want to play around with some of the Oxide stuff, but it's hard with illumos not supporting nested virt.


I check every now and then to see if there is macOS guest support; I would really like to use BSD as a host and completely drop Apple hardware since they started soldering SSDs. I have the last Intel MBP, and sure, ARM is powerful, but in 5 years of heavy use it will be a brick in need of a new mainboard, and that makes no sense to me.

I guess the reality is that in a year or two I'll land on Fedora and Proxmox or something. I have seen plenty of interesting content on Proxmox and PCIe passthrough, but very little showing actual pro users with this kind of setup, so the macOS workstation VM idea might not be as practical as I'm hoping.


I'm using Bhyve and doing PCIe passthrough of some NVMe drives to a Linux guest.

To my slight surprise, PCIe passthrough was device dependent, i.e. not all PCIe devices were supported. I'm running TrueNAS, which runs a slightly older FreeBSD kernel, and from what I could gather GPU passthrough would not work there. IIRC the latest FreeBSD has some support for that.

But network cards should work, and NVMe drives definitely work.

However, it wasn't smooth sailing. Using the approach from the wiki[1] did not work directly; I had to mask the devices using a pre-init boot script with

    devctl set driver -f nvme1 ppt
[1]: https://wiki.freebsd.org/bhyve/pci_passthru
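For comparison, the wiki approach reserves the device for the ppt driver at boot via loader.conf, and the device is then attached to the guest on the bhyve command line; roughly like this, where the bus/slot/function and slot numbers are examples (check pciconf -lv for yours):

    # /boot/loader.conf
    pptdevs="2/0/0"

    # on the bhyve(8) command line
    -s 7:0,passthru,2/0/0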


I just researched a bit; a Mac OS X guest VM with PCIe passthrough seems possible on Linux.

Dropping the links below:

https://github.com/kholia/OSX-KVM

https://github.com/yoonsikp/macOS-KVM-PCI-Passthrough


I've been looking at this for years; there are options if you have suitable hardware (there are a number of caveats). Proxmox has looked the most promising for a while.


Why do you need macOS guest support? IIRC none of the official use cases (like being forced to develop/release App Store apps only from macOS) would work in an unsupported virtualised environment, due to Apple's shenanigans with hardware and software checks.


I prefer macOS as a daily-driver operating system over Linux or Windows for development and general computing.


Huh? When I still virtualized macOS, everything worked just fine. I think the last one was 11 or 12. ESXi has since dropped support, and macOS doesn't have much relevance for me anymore.

I heard the Apple Silicon platform is a bit tighter, but that can't really be virtualized on x86.


bhyve in jails is a godsend.

I can create a jail, pass bhyve through to it, set resource limits, and assign a virtual NIC so the jail can use firewalls. Then I hand the jail IP to a client, enabling them to create as many virtual machines as their resources allow.
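For anyone wanting to try this, the key jail parameter is allow.vmm; a minimal jail.conf sketch, where the name, interface, paths and ruleset number are all made up:

    # /etc/jail.conf
    client1 {
        path = "/jails/client1";
        host.hostname = "client1.example.org";
        vnet;
        vnet.interface = "epair1b";
        allow.vmm;                # let the jail use the vmm(4) subsystem
        mount.devfs;
        devfs_ruleset = 25;       # a ruleset that exposes vmm, tap and nmdm devices
        exec.start = "/bin/sh /etc/rc";
        exec.stop  = "/bin/sh /etc/rc.shutdown";
    }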

Using jails like this in itself adds another level of security.

If a VM escape does happen, the attacker is still isolated to the jail. And if I wanted to be truly paranoid, I could create a fortress jail with sub-jails, and let bhyve operate in a jail inside the sub-jail.

Backups are just a matter of backing up the jail; ZFS does this without breaking a sweat.
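In practice that's something like the following, where the dataset and host names are examples:

    # zfs snapshot -r zroot/jails/client1@daily
    # zfs send -R zroot/jails/client1@daily | ssh backuphost zfs receive -u tank/client1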


That's an interesting way to do shared hosting. Can jails limit by CPU time?


You can with rctl [0][1].
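For example, something like this caps a jail at half of a CPU core and shows its current usage (the jail name and percentage are examples):

    # rctl -a jail:client1:pcpu:deny=50
    # rctl -u jail:client1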

Another hack was using the tcsh shell and /etc/login as a man-in-the-middle to launch a process with CPU limits applied.

[0] https://klarasystems.com/articles/controlling-resource-limit...

[1] https://wiki.freebsd.org/Hierarchical_Resource_Limits


vermaden's site is a must stop for all FreeBSD users. A lot of information and great articles! I'll take a look at bhyve at some point. Thanks for the article, it does seem to be a comprehensive guide to bhyve!


The only thing missing is migration from old iohyve-saved ZFS images to vm-bhyve or libvirt.

Otherwise, a super and very complete guide. Well worth bookmarking if you want to run virtualised OSes on FreeBSD.


From what I checked, iohyve puts disk images as ZFS zvol volumes like:

- /dev/zvol/zroot/iohyve/VMNAME/disk0

So the migration would be:

- create your new VM with a disk of the same size

- check your new VM disk location in 'vm-bhyve':

    # vm info VM | grep system-path 
    system-path: /vm/VM/disk0.img
- copy the contents of the 'iohyve' disk into the VM disk:

    # dd bs=1m if=/dev/zvol/zroot/iohyve/VMNAME/disk0 of=/vm/VM/disk0.img
- start your new VM

This should work.

Same with libvirt/virt-manager - just copy your VM image from the /var/lib/libvirt/images/ dir and it should work the same.

Hope that helps.



Nested virtualization?


True, it would be interesting to know how well WSL works.

And some of the complex multi-network-device configurations; I had iohyve-administered VMs with a backend filestore network as well as a front door.

Iohyve used ZFS properties embedded in the filesystem to hold configuration, which was clever, arguably too clever.
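If memory serves, you can inspect that configuration directly with something like the following (the dataset name is an example):

    # zfs get all zroot/iohyve/VMNAME | grep iohyve: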


The problem with nested virtualisation is that it isn't fully hardware accelerated. The missing features (shadow paging, etc.) have to be emulated in software. Such emulation requires lots of delicate, error-prone code.


I seem to remember zvols having known performance issues that were greatly improved in the latest OpenZFS release.

If you read https://github.com/openzfs/zfs/issues/11407

It should be in there somewhere.

A dataset plus a raw file will almost always be faster though, due to the rest of the issues mentioned.
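For anyone wanting to compare the two on their own pool, creating the backing stores is roughly the following, where the sizes and dataset names are examples:

    # zvol-backed disk
    zfs create -V 20G zroot/vm/guest0

    # dataset plus a sparse raw file
    zfs create zroot/vm/guest1
    truncate -s 20G /zroot/vm/guest1/disk0.img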



