Quickemu: Quickly run optimised Windows, macOS and Linux virtual machines (github.com/quickemu-project)
452 points by overbytecode on Jan 30, 2024 | 133 comments



Shout out to https://virt-manager.org/ - works much better for me, supports running qemu on remote systems via ssh. I used to use this all the time for managing bunches of disparate vm hosts and local vms.


virt-manager is one of the most underrated pieces of software there is. It's a powerhouse and I use it all the time. It is going to expect you to know some basic terminology about VMs, but it reminds me a lot of the old skool GUIs that were packed with features and power.

If your needs are simple or you're less technical with VMs, Gnome Boxes uses the same backend and has a beautiful, streamlined GUI. With the simplicity of course comes less flexibility, but the cool thing is you can actually open Gnome Boxes VMs with virt-manager should you later need to tweak a setting that isn't exposed through Boxes.


I’m so appreciative that virt-manager has a GUI that crafts the XML and then lets you edit it directly. It really eased the transition from beginner to competent user of the program for me.


Agreed, it's much better than nothing, though I still don't know how to port forward.


Absolutely love virt-manager. I try gnome-boxes every so often and it just doesn’t compare. I guess its interface is easier for beginners.


I know it's not the same thing, but Quickemu happily works over SSH too.

Run it on a remote system via ssh, and it will "X-forward" the Qemu console to my local Wayland session in Fedora.

The first time I ran it I thought I was running in headless mode, so when a window popped up it was quite surprising. :)


It's wild how important and useful a program that does nothing but configuration can be.

Imagine what life would be like if configuration was separated from the software it configures. You could choose your favorite configuration manager, and use that, rather than learn how each and every program with a UI reinvented the wheel.

The closest thing we have are text configuration files. Every program that uses them has to choose a specific language, and a specific place to save its configs.

An idea I've been playing with a lot lately is a configuration intermediary. Use whatever language/format you want for the user-facing config UI, and use that data as a single source of truth to generate the software-facing config files.
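A very rough toy sketch of what I mean, in shell (all file names and keys here are made up):

  #!/usr/bin/env bash
  # settings.conf is the single source of truth: one KEY=VALUE per line,
  # e.g. EDITOR=vim and TAB_WIDTH=4, edited through whatever UI you like.
  source ./settings.conf

  # generate a program-specific config file from the shared values
  mkdir -p ~/.config/someeditor
  printf 'editor = %s\ntab_width = %s\n' "$EDITOR" "$TAB_WIDTH" \
    > ~/.config/someeditor/config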


You have some incumbent competition already, in case you're not aware, and I'd say many of these are closer to what you're describing than text configuration files.

You would do well to learn from past and current attempts. This book should be enlightening (and yes, Elektra is very much alive): https://www.libelektra.org/ftp/elektra/publications/raab2017...

It would also be a useful exercise to write a new configuration UI for an existing configuration backend (preferably one already in use by software you wish had better configuration) - even if you do end up aiming at your own standard (xkcd.com/927), it should give you some clarity on ways to approach it.


The irony here is that the problem you have proposed - the complexity introduced by creating a new solution - is the same problem that each solution is intended to solve.

That means that any adequate solution should recursively resolve the problem it introduces.

oh, and also thank you for introducing me to Elektra. That was very helpful of you.


Libvirt and virt-manager are just a simplified user interface to the real software, which is qemu (and KVM). They solve pretty trivial problems, like parsing a config file and passing the right options to the qemu binary.

Yes, they have some additional useful administration features like start/stop based on a config file, serial console access, but these are really simple to implement in your own shell scripts. Storage handling in libvirt is horrible, verbose, complex, yet it can't even work with thin LVs or ZFS properly.

Unless you just want to run stuff the standard corporate way and do not care about learning fundamental software like qemu and shell, or require some obscure feature of libvirt, I recommend using qemu on KVM directly, using your own scripts. You'll learn more about qemu and less about underwhelming Python wrappers, and you'll have more control over your systems.
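To be concrete, the kind of script I mean can be as small as this (disk and ISO names are placeholders; create the disk once with qemu-img):

  #!/bin/sh
  # qemu-img create -f qcow2 disk.qcow2 40G
  exec qemu-system-x86_64 \
    -enable-kvm -cpu host -smp 4 -m 8G \
    -drive file=disk.qcow2,if=virtio \
    -cdrom install.iso \
    -nic user,model=virtio-net-pci \
    -display gtk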

Also, IBM/Red Hat seems to have deprecated virt-manager in favour of (of course) a new web interface (Cockpit).

Quickemu seems to be of more interest, as it allows launching a new VM right after a quick look at the examples, without wasting time learning a big, complicated UI.


The real advantage to libvirt is that it also works with things other than qemu.


Who uses libvirt with anything other than qemu? Xen has its own much better UI tools, like the xl toolstack, or xcp-ng with Xen Orchestra.

If you mean libvirt can provide a single UI to different hypervisors, that is true, but I don't see any technical reason to have a single UI to different hypervisors. It just provides a familiar clicky UI for non-technical users who do not want to bother learning hypervisor features and their dedicated tools.


> Quickemu seems to be of more interest, as it allows launching a new VM right after a quick look at the examples, without wasting time learning a big, complicated UI.

Why would anyone want a qt frontend when you can call a cli wrapper, or better yet the core binary directly?


Libvirt can dump configs as scripts. If virsh/virt-manager does 90% of the tedious work with a fine UI that can then be replicated later, I guess libvirt wins here.


You've missed the point of libvirtd.


That's possible, but you also seem to have missed mine, which is that libvirt is not a great time investment if you want to learn how qemu works, or to have the most flexibility and control over its deployment.


I just like passing options to QEMU on the command line. This works well for some older OSes like Windows NT on MIPS, or Ultrix.


This is the way.


Anyone running virt-manager on mac connecting to a headless linux hypervisor on the same network? I tried installing it through "brew", but was getting many random errors.

I thought about running it over the network using XQuartz, but I'm not sure how maintained / well supported that is anymore.


In virt-manager, you should* be able to go to file -> add connection.

Select Hypervisor "Custom URL", and enter: qemu+ssh://root@<host>/system

And Bob's your uncle.

It works great for me! This means it likely won't work for you until you've paid the proper penance to the computer god.
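The same URI works from the command line too, which is handy for checking the SSH side before blaming virt-manager (host name is a placeholder):

  $ virsh -c qemu+ssh://root@<host>/system list --all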


This might not fit your use case but what I do is:

  ssh -L 5901:localhost:5901 username@hypervisor

on the hypervisor, start Qemu with -vnc :1

Then open a local VNC client like RealVNC and connect to localhost:1


I did this several years ago with no issues, though I haven't tried it any time recently.


I just wish it had a web interface option, but I guess there is always proxmox.


Or if you're feeling a little more adventurous https://github.com/retspen/webvirtcloud


Cockpit has a good web interface for libvirt


Virt Manager is fantastic. I've used it for more than a decade and it's been rock solid throughout.

Being able to connect to my TrueNAS Scale server and run VMs across the network is the icing on the cake.


At this point, you could probably use Qubes OS, which is basically an OS that runs everything in VMs with a great interface. My daily driver, can't recommend it enough.


I wish Quickemu would make it easier to interface with libvirt, but apparently that's been marked as out of scope for the project.


I've been using proxmox to manage my containers and vms.

Do people normally move from virt-manager to proxmox or the opposite?


Just a security reminder from the last time this got posted[1]

This tool downloads random files from the internet, and checks their checksums against other random files from the internet. [2]

This is not the best security practice. (The right security practice would be to have the GPG keys of the distro developers committed in the repository, and to check all files against those keys.)

This is not downplaying the effort that was put into this project to find the correct flags to pass to QEMU to boot all of these.

[1] https://news.ycombinator.com/item?id=28797129

[2] https://github.com/quickemu-project/quickemu/blob/0c8e1a5205...
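For reference, the keyring-based check being described looks roughly like this for Ubuntu ISOs (the key file name is illustrative; the point is that the key is committed/shipped in advance, not fetched from the same site as the ISO):

  $ gpg --import ubuntu-cd-signing-key.asc      # key obtained out of band / committed to the repo
  $ gpg --verify SHA256SUMS.gpg SHA256SUMS      # the checksum list is signed by the distro
  $ sha256sum -c SHA256SUMS --ignore-missing    # the ISO is checked against the signed list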


Can someone explain how this is a security problem? While GPG key verification would be the best way to ensure authenticity, it's doing nothing different from what almost everyone does: download the ISO from the distro's own HTTPS site. It then goes beyond what most people do and validates that the hashes match.


IMO you're exactly right.

I just looked at the shell script and it's not "random" at all, it's getting both the checksum and the ISO from the official source over TLS.

The only way this technique is going to fail is if the distro site is compromised, their DNS lapses, or if there's a MITM attack combined with an incorrectly issued certificate. GPG would be more robust but it's hardly like what this tool is doing is some unforgivable failure either.

It's not that the OP is wrong but I think they give a really dire view of what's happening here.


Getting the signature and the file from the same place is questionable practice in itself. If the place is hacked, then all the hacker needs to do is hash his own file, which has happened in at least one high-profile case [0]. And this practice doesn't even offer any extra protection if the resource was accessed with HTTPS in the first place.

[0] https://www.zdnet.com/article/hacker-hundreds-were-tricked-i...


Absolutely true, but one additional factor (or vector) is that this adds a level of indirection. That is, you're trusting the Quickemu people to apply the same diligence you yourself would when downloading an ISO from, say, ubuntu.com, for each and every target you can conveniently install with Quickemu.

It's a subtle difference, but the trust-chain could indeed be (mildly) improved by re-distributing the upstream gpg keys.


Eh, you can fetch the GPG keys from some GPG keyserver, it's not like those keys are just random files from the Internet. They're cross-signed, after all!


How do you know which keys to get? Let me guess... you read their website.


> It then goes beyond what most people do and validates that the hashes match.

It might go above and beyond what most people are doing, but not what most tools are doing. Old school package managers are still a step ahead in this area, because they use GPG to check the authenticity of the data files, independent of the transport channel. A website serving files and checksums is one such channel. This enables supporting multiple transport channels: back then it was a network of FTP mirrors; today it might be BitTorrent, IPFS, or a CDN. And GPG supports revoking trust. Checksums that are hardcoded into a tool cannot be revoked.

As soon as we start to codify practices into tools, they become easier to game and attack. Therefore tools should be held to higher security standards than humans.


Because you wrote HTTPS in italics... HTTPS doesn't mean anything. Both good and bad actors can have perfectly valid HTTPS configured. It is not a good indicator of the trustworthiness of the actual thing you download.


> HTTPS doesn't mean anything.

That's not accurate at all. HTTPS should mean "we've validated that the content you're receiving comes from the registered ___domain that you've hit". Yes, it's possible that the ___domain host itself was compromised, or that the ___domain owner himself is malicious, but at the end of the day you have to trust the entity you're getting the content from. HTTPS says, importantly, "You're getting the content from whom you think you're getting it from."


Yes, but we abandoned that idea a while ago. There are no more green locks in browsers. Nobody buys those expensive certificates that prove ownership. When you curl something it doesn't show anything unless the certificate is actually invalid.

You are correct about what it _should_ mean, but the reality today is that it doesn't mean anything.


No, it still means that you've connected to the ___domain that you wanted to connect to and the connection is reasonably resistant to MITM attacks. It doesn't say anything about who controls the ___domain, but what it provides still isn't nothing.


It is not about the ___domain.

"It is not a good indicator of trustworthiness of the actual thing you download."

I just downloaded something with malware from github.com. I indeed wanted to connect to github.com and I trust that it is GitHub.com. But again... it did not say _anything_ about the trustworthiness of the _actual_ thing I did, which was to download an asset from that ___domain.

That is my point. In the context of this discussion about downloading dependencies.


But by using GPG to check the authenticity of the actual files that are downloaded, we can remove the web site -- whether HTTPS is sufficiently secure or not -- from the trust chain altogether. The shorter the trust chain, the better.


> HTTPS says, importantly, "You're getting the content from whom you think you're getting it from."

You need certificate pinning to know this for sure, due to the existence of MITM HTTPS spoofing in things like corporate firewalls. HTTPS alone isn't enough; you have to confirm the certificate is the one you expected. (You can pin the CA cert rather than the leaf certificate if you want, if you trust the CA; that still prevents MITM spoofing.)
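For a download script specifically, curl can enforce a pin directly; a rough sketch (the hash and URLs are placeholders):

  # extract and hash the server's public key once, out of band
  $ openssl s_client -connect releases.example.org:443 </dev/null 2>/dev/null \
      | openssl x509 -pubkey -noout \
      | openssl pkey -pubin -outform der \
      | openssl dgst -sha256 -binary | base64
  # later, refuse the download unless the key still matches
  $ curl --pinnedpubkey 'sha256//AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=' \
      -O https://releases.example.org/distro.iso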


I’m not aware of any HTTPS MITM that can function properly without adding its own certificate to the trusted roots on your system (or dismissing a big red warning for every site), so I don’t think certificate pinning is necessary in such an environment (if the concern is MITM by a corporate firewall).

An attacker would still need to either have attacked the ___domain in question, or be able to forge arbitrary trusted certificates.


If an attack requires compromising my operating system certificate store, I'm reasonably comfortable excluding it from most of my threat models.


Obviously you choose your own relevant threat models, but it's common to do this in iOS apps -- many apps include it in their threat models. Pinning the CA cert is what Apple recommends to app developers. It's not an unreasonable thing to do.

https://developer.apple.com/news/?id=g9ejcf8y


That link discusses how to do it but not why. The most likely thing that occurs to me is that iOS apps consider the user a potentially hostile actor in their threat model, which is... technically a valid model, but in the context of this thread I don't think that counts as a real concern.


I wouldn't be so sure about "most". I and everyone I worked with took GPG verification seriously. I always verify ISOs. Always.

People who are responding to you with "you are absolutely right" might not represent the silent majority (within our field, not talking about normal users).



Trust is an input into any security equation. Do you trust all sources of these files? I don't think anyone was challenging GPG.


How much of this is outdated practice? Shouldn't TCP/TLS be doing checksum and origin signing already?

In the days of FTP, checksums and GPG were vital. With HTTP/TCP you need GPG more than the checksum, since TCP already handles retries, checksums, etc., but you still want both because of MitM.

But with HTTPS, how does it still matter? HTTPS does both integrity verification and origin authentication for you.


TLS prevents a different kind of attack, the MitM one which you describe.

GPG signing covers this threat model and much more; the threats include:

* The server runs vulnerable software and is compromised by script kiddies, who then upload arbitrary packages to the server

* The cloud provider is compromised and attackers take over the server from the cloud provider's admin account

* Attackers use a vulnerability (in SSH, HTTPd, ...) to upload arbitrary software packages to the server

GPG doesn't protect against the developer machine getting compromised, but it guarantees that what you're downloading has been issued from the developer's machine.


I agree, but I think that model of GPG is not how it's used any more. I think nowadays people upload a one-shot CI key, which is used to sign builds. So you're basically saying "The usual machine built this". Which is good information, don't get me wrong, but it's much less secure than "John was logged into his laptop and entered the password for the key that signed this"

So, you're right, that GPG verifies source, whereas TLS verifies distribution. I suppose those can be very different things.

Perhaps counter example: https://launchpad.net/~lubuntu-ci/+archive/ubuntu/stable-bac...

> The packages here are from the latest upstream release with WORK IN PROGRESS packaging, built from our repositories on Phabricator. These are going to be manually uploaded to the Backports PPA once they are considered stable.

And presumably "manually" means "signed and uploaded"


No established GNU/Linux distribution is going to half-ass GPG signing as you've implied.


Which part is half ass? Manual or automatic?


One-shot CI keys. I guess I shouldn't have used that term; it certainly is more work than doing otherwise.

Nevertheless, their advantages offer nothing of value in this context. At least, I think so. Correct me if I'm wrong.


> They then upload arbitrary packages to the server

And change the instructions to point to a different GPG key (or none at all).

I think the only situation it possibly helps in is if you are using untrusted mirrors. But then a simple checksum does that too. No need for GPG.


The "different gpg key" would be flagged by a package manager, but (critically) not this tool.


Also, the author types his user password during live streams, on a mechanical keyboard, while the microphone is on.


You mean that the sound of each key is unique and sufficiently different from the others? Or does it have to do with how a person types?



I’ll be yodeling while typing from now on. Happy open-spacing everyone.


It doesn’t need to be unique, it just needs to leak enough information to decrease the search space enough to where brute force (or other methods) can kick in.


Each key will produce a different sound, even on a touch-screen keyboard, because each key sits in a different position on the surface and has a different position relative to the microphone - it may just be more difficult and require a higher-quality microphone.

Once you isolate and cluster all the key sounds you end up with a simple substitution cipher that you can crack in seconds.


While this comment doesn't seem 100% serious, I wonder if this kind of attack is made less effective by the trend in mechanical keyboards to isolate the PCB and plate from the case acoustically, e.g. gasket mount, flex cuts, burger mods. In my experience the effect of these changes is each key sounds more similar to the others rather than the traditional case mount setup where each key sound changes drastically based on its proximity to mounting points.


Poe's law strikes again.


It doesn't download "random files from the internet", it seems to be using original sources only.


If you don't control the source, you can't guarantee that what it points to today is what it points to tomorrow.


FWIW:

- Signatures are checked for macOS now

- No signatures are available for Windows

Maybe this year attention from Hacker News will encourage someone to step up and implement signature checking for Linux!


Still orders of magnitude better security practice than using any proprietary software or service.


My red flag was that there is no explanation of what an "optimised" image is.


UTM[0] does this quite well on macOS. They also have a small gallery[1] of pre-built images.

0. https://mac.getutm.app/

1. https://mac.getutm.app/gallery/


UTM even works on iPads! I was able to run Arch Linux in TTY mode quite well.

https://docs.getutm.app/installation/ios/


libvirt ships with virt-install which also allows for quickly creating and auto-installing Windows and many Linux distributions. I haven't tried it with mac.

Here's a recent example with Alma Linux:

  $ virt-install --name alma9 --memory 1536 --vcpus 1 --disk path=$PWD/alma9.img,size=20 --cdrom alma9.iso --unattended
Then you go for a coffee, come back and have a fully installed and working Alma Linux VM. To get the list of supported operating systems (which varies with your version of libvirt), use:

  $ osinfo-query os


Also

  $ virt-builder fedora-39
if you wanted a Fedora 39 disk image. (Can be later imported to libvirt using virt-install --import).
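Putting the two together, roughly (option spellings from the man pages, worth double-checking on your virt-builder/virt-install versions):

  $ virt-builder fedora-39 --format qcow2 -o fedora39.qcow2 \
      --hostname f39 --root-password password:changeme
  $ virt-install --import --name f39 --memory 2048 --vcpus 2 \
      --disk path=$PWD/fedora39.qcow2 --os-variant fedora39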


virt-builder is awesome for quickly provisioning Linux distros. It skips the installer because it works from template images. You can use virt-builder with virt-manager (GUI) or virt-install (CLI).


Does virt-install automatically download the ISOs? When I try it, I get the following message:

    $ virt-install --name alma9 --memory 1536 --vcpus 1 --disk path=$PWD/alma9.img,size=20 --cdrom alma9.iso --unattended
    ERROR    Validating install media 'alma9.iso' failed: Must specify storage creation parameters for non-existent path '/home/foo/alma9.iso'.


It is not obvious what the result of this would be. What hostname will it have? How will the disk be partitioned? What packages will be installed? What timezone will be set? What keyboard layout will be set? And so on.


virt-install can be given all of those parameters as arguments[0], too; parent just didn't post an obnoxiously large shell line to demonstrate.

[0]: https://linux.die.net/man/1/virt-install
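For example, hostname, partitioning, packages, timezone and keyboard can all come from a kickstart file injected into the installer; a rough sketch for an EL9-style guest (ks.cfg contains whatever you put in it):

  $ virt-install --name alma9 --memory 2048 --vcpus 2 \
      --disk path=$PWD/alma9.img,size=20 \
      --___location alma9.iso \
      --initrd-inject ks.cfg --extra-args "inst.ks=file:/ks.cfg console=ttyS0" \
      --graphics none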


To do this I had to install libosinfo-bin


The convenience of such a tool is great, but it's also ~5000 lines of bash across the two main scripts.

I'd want to vet such a thing before I run it, but I also really don't want to read 5000 lines of bash.


While I agree in general that shell script is not usually fun to read, this particular code is really not bad.

Not sure if this will sway you, but for what it's worth, I did read the bash script before running it, and it's actually very well-structured. Functionality is nicely broken into functions, variables are sensibly named, there are some helpful comments, there is no crazy control flow or indirection, and there is minimal use of esoteric commands. Overall this repo contains some of the most readable shell scripts I've seen.

Reflecting on what these scripts actually do, it makes sense that the code is fairly straightforward. At its core it really just wants to run one command: the one to start QEMU. All of the other code is checking out the local system for whether to set certain arguments to that one command, and maybe downloading some files if necessary.


I do see that it is better structured, but like any other bash script it relies heavily on global variables.

For example, `--delete-vm` is effectively `rm -rf $(dirname ${disk_img})`, but the function takes no arguments. It's getting the folder name from the global variable `$VMDIR`, which is set by the handling of the `--vm` option (another global variable named $VM) to `$(dirname ${disk_img})`, which in turn relies on sourcing a script named `$VM`.

First, when it works, it'll `rm -rf` the parent path that the VM's disk_img variable is set to, irrespective of whether it exists or is valid, as dirname doesn't check that - it just tries to snip the end off the string. Enter an arbitrary string, and you'll `rm -rf` your current working directory, as `dirname` just returns ".".

Second, it does not handle relative paths. If you pass `--vm somedir/name` with `disk_img` just set to the relative file name, it will not resolve `$VMDIR` relative to "somedir" - `dirname` will return ".", resulting in your current working directory being wiped rather than the VM directory.

Third, you're relying on the flow of global variables across several code paths in a huge bash script, not to mention global variables from a sourced bash script that could accidentally mess up quickemu's state, to protect you against even more broken rm -rf behavior. This is fragile and easily messed up by future changes.

The core functionality of just piecing together a qemu instantiation is an entirely fine and safe use of bash, and the script is well-organized for a bash script... But all the extra functionality makes this convoluted, fragile, and one bug away from rm -rf'ing your home folder.
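For comparison, a more defensive shape for that one operation might look something like this (purely illustrative, not the project's actual code):

  # take the disk image path as an argument instead of reading globals
  delete_vm() {
    local disk_img=$1 vmdir
    vmdir=$(realpath -e -- "${disk_img}") || return 1   # refuse paths that don't exist
    vmdir=$(dirname -- "${vmdir}")
    case "${vmdir}" in
      /|"${HOME}"|"${PWD}") echo "refusing to delete ${vmdir}" >&2; return 1 ;;
    esac
    rm -rf -- "${vmdir}"
  }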


Why is it different from any other software just because it is a shell script? Do you read the kernel sources for your OS before running it? Your web browser? My point is not that we should blindly run things, but that we all have criteria for what software we choose to run that typically doesn't rely on being familiar with its source code.


Well, yes, I read code of (and contribute to) the kernel and web browsers I use, but that's not really relevant.

There's a big difference between "large, structured projects developed by thousands of companies with a clear goal" vs. "humongous shell script by small group that downloads and runs random things from the internet without proper validation".

And my own personal opinion: The venn diagram of "Projects that have trustworthy design and security practices", and "projects that are based on multi-thousand line bash scripts" is two circles, each on their own distinct piece of paper.

(Not trying to be mean to the developers - we all had to build our toolkits from somewhere.)


Heh, this reminds me a bit of when on live television Contessa Brewer tried to dismiss Mo Brooks with "well do you have an economics degree?" and he actually did and responded with "Yes ma'am I do, highest honors" :-D [1]

I have no problem with (and have written a few) giant bash scripts, and I completely agree with you. A giant bash script isn't going to have many eyes on it, whereas a huge project like the kernel is going to get a ton of scrutiny.

[1] https://www.youtube.com/watch?v=5mtQyEd-zS4


I believe GP implicitly assumes that bash (and generally POSIX-y shell script) has lots of quirks and footguns (to which I generally agree).

After skimming through the source code though, I'd say the concerns are probably overstated.


Probably going to catch some flak for this comment but... if you are that concerned with it, and have some free time, you could always use ChatGPT to talk about the code. A prompt could be: "You are a linux guru, and you have extensive experience with bash and all forms of unix/linux. I am going to be pasting a large amount of code in a little bit at a time. Every time I paste code and send it to you, you are going to add it to the previous code and ask me if I am done. When I am done we are going to talk about the code, and you are going to help me break it down and understand what is going on. If you understand you will ask me to start sending code, otherwise ask me any questions before you ask for the code."

I have used this method before for some shorter code (sub-1000 lines, but still longer than the prompt allows) and it works pretty well. I will admit that ChatGPT has been lazy of late, and sometimes I have to specifically tell it not to be lazy and give me the full output I am asking for, but overall it does a pretty decent job of explaining code to me.


I'd say this is a general issue with software: how and what you do to establish trust, and what expectations/responsibilities there are for a developer and a user. The "many eyes make all bugs shallow" phrase does seem to be a bit of a thought-terminating cliché for some users - if it's open to scrutiny then it must be fine - conjuring an image of roaming packs of code auditors inspecting everything (I'd expect them to be more on the malicious side than the benevolent).

Over on Windows, there's been a constant presence of tweak utilities for decades that attract people trying to get everything out of their system, on the assumption that 'big corp' developers don't have the motivation to do so and leave universally useful options on the table behind quick config or registry tweaks. One that comes to mind, which I see occasionally, is TronScript. If I had to bet on whether it passes the 'sniff test', given its history and participation I'd say it's good, but it presents itself as automation, abstracting away the details and hoping it makes good decisions on your behalf. While you could dig into it and research/educate yourself on what is happening and why, for many it might as well be a binary.

I think the only saving grace for this is that most of these tools have a limited audience, so they're not worth compromising. When one brand does become used often enough you may get situations like CCleaner from Piriform, which was backdoored in 2017.


Googled that, found the GitHub with a <h1> of

> DO NOT DOWNLOAD TRON FROM GITHUB, IT WILL NOT WORK!! YOU NEED THE ENTIRE PACKAGE FROM r/TronScript

I see later it mentions you can check some signed checksums but that doesn't inspire confidence. Very much epitomises the state of Windows tweaky utilities vs stuff you see on other platforms.


Looks interesting, but would someone be so kind as to point out if there are any advantages for a guy like me who just runs Win 11 in VirtualBox under Ubuntu from time to time?


Hard to answer this question as it largely depends on what you are doing with your VM. This appears to be a wrapper for QEMU and tries to pick reasonable settings to make spinning up new VMs easier.


Especially regarding GPU acceleration... Running video conferencing inside Windows inside VirtualBox is almost impossible, and even modestly complex GUI apps have significant lag there.


Does qemu allow GPU acceleration while running with a single GPU? From the video on the website it appears so; however, from what I've read (at least with AMD iGPUs) it doesn't seem to work.


Install the guest additions and enable 3D acceleration in the emulated video card settings.

Also, give it 128 MB of RAM as a minimum.


I think it is more an alternative to Gnome Boxes, where the tool takes care of downloading the latest image in addition to offering a default config specific to that distro/OS, and additionally supports dirty OSes like Windows and macOS.


If it actually runs macOS then it's a huge advantage over installing it in VirtualBox or VMware, where it's very difficult to get running (you have to patch various things).


For Linux I highly recommend Incus/LXD. Launching a VM is as simple as

  $ incus launch images:ubuntu/22.04 --vm my-ubuntu-vm

After launching, access a shell with:

  $ incus exec my-ubuntu-vm /bin/bash

Incus/LXD also works with system containers.


One thing I love but that is rarely mentioned is systemd-nspawn. You do `docker create --name ubuntu ubuntu:22.04` and then `docker export ubuntu` to create a tar from an arbitrary docker image. Then you extract that to `/var/lib/machines/ubuntu`. Make sure to choose an image with systemd or install systemd in the container. Finally do `machinectl start ubuntu` and `machinectl shell ubuntu` to get inside.

systemd-nspawn is very simple and lightweight and emulates a real Linux machine very well. You can take an arbitrary root partition based on systemd and boot it using systemd-nspawn and it will just work.
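For reference, the whole flow is only a handful of commands (run as root; the image name is just an example, and it needs systemd inside as noted above):

  $ docker create --name ubuntu ubuntu:22.04
  $ mkdir -p /var/lib/machines/ubuntu
  $ docker export ubuntu | tar -x -C /var/lib/machines/ubuntu
  $ machinectl start ubuntu
  $ machinectl shell ubuntu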


systemd-nspawn is simple but AFAIK it doesn't do any security other than the kernel namespacing. Docker is even worse because it runs containers as root, which means a rogue process can take over the host very easily.

Incus/LXD runs containers as normal users (by default) and also confines the whole namespace in apparmor to further isolate containerized processes from the host. Apparmor confinement is also used for VMs (the qemu process cannot access anything that is not defined in the whitelist)


Docker runs containers as the user you tell it to. Same with nspawn. There's not much difference between them in that respect.

Nspawn does seccomp-based filtering, similar to the usual systemd services.


Are there any numbers on the performance change vs. naively running a VM? I usually run a Linux guest inside a Linux host and am frequently disappointed by the guest performance. I have never done any research on tuning the VM experience, so I am curious how much I might be missing. 5% faster? 100%?


How are you running them? Running KVM/Qemu with appropriate settings gives near metal performance.


virt-manager with a PopOS host, usually an Ubuntu/PopOS guest, on a Ryzen 5500 (? something in that series). I do not know what virt-manager runs under the hood. Again, I've never done anything other than install virt-manager, so I would be happy to read a guide on any recommended configuration settings.


Does it run natively on Arm (Apple Silicon)? How about the latest versions of macOS? Is there graphic acceleration? How's network handled?


Wonder what the difference is with Proxmox and if there’s any optimisation done here that I can manually recreate in my Proxmox environment.


This is staggeringly different from Proxmox. Proxmox is made for labs and datacenters that need to host lots of servers as VMs. Quickemu looks like it is mainly geared toward desktop use.


It's a QEMU wrapper. I don't know how this is useful. It might save you 2 minutes. Maybe more with Windows 11 because of TPM.


Quickemu gives me the ability to instantly spin up a full blown VM without fiddling with QEMU configurations, just by telling it what OS I want.

This might be less useful for those who are quite familiar with QEMU, but it’s great for someone like me who isn’t. So this saves me a whole lot more than 2 minutes. And that’s generally what I want from a wrapper: improved UX.
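For anyone curious, the basic flow is roughly this (going from memory of the README, so double-check the exact syntax):

  $ quickget ubuntu 22.04              # downloads the ISO and writes ubuntu-22.04.conf
  $ quickemu --vm ubuntu-22.04.conf    # boots it with sensible defaults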


Looks like this tries to use better default settings for qemu, which doesn't always have good defaults.

I think that is useful practically, as a learning tool, and as a repository of recommended settings.


This is what we are really missing: something like "here are 'good enough' command-line args that you can use to boot $OS with qemu". Quickemu seems to try to help here.


> Quickemu is a wrapper for the excellent QEMU that attempts to automatically "do the right thing", rather than expose exhaustive configuration options.

As others have said, it's to get past the awful QEMU configuration step. It makes spinning up a VM as easy as VirtualBox (and friends).


I couldn’t answer this from the site. Will this let me run macOS Catalina on an M2 Mac Studio with usable graphics performance? Because that would give me back a bunch of 32-bit games I didn’t want to give up.


No. It will be slow as hell

But something like El Capitan will be somewhat acceptable, and Lion will be actually usable.


Is there something similar to this but for Windows 10 or 11? I want a Windows GUI for QEMU to build some Linux machines. I tried QtEMU but didn't like it. Thanks in advance.


Anyone know if I can legitimately make and submit iPhone builds off a macOS VM?


Technically, yes, probably. You’ll be breaking Apple’s ToS though, so it depends how big of a fish you are as to whether Apple cares.


I don't think you can. All virtualized macOS machines, IIRC, can't fully install the tools necessary to build software for macOS. For example, I don't believe you will ever be able to sign and staple the app.

I would really love to have someone prove me wrong in this thread, but I've never found a solution other than building on macOS hardware, which is such a pain to maintain.

I have multiple old macOS machines that I keep in a stable state just so I can be sure I'll be able to build our app. I'm terrified of a failure or of clicking the wrong update button.


You can run codesign just fine in a VM.


I really appreciate your comment, I'm hoping I am wrong about my experiences!

But, this is the issue I believe:

https://mjtsai.com/blog/2023/09/15/limitations-on-macos-virt...

(or, the original is here: https://eclecticlight.co/2023/12/26/when-macos-wont-work-wit...)

You cannot log in using an Apple ID. If you can't do that, aren't you prevented from basically doing any kind of stapling and/or retrieving certificates for signing?

I would LOVE to be wrong about this. You've done that?


This is only true for products based on the Virtualization framework. Intel “Macs” can sign in just fine. (Also, I think you can authenticate things with an API key these days rather than your credentials?)


Meaning, Intel vms? This is great. I'll check it out.


Yep. Give it a try!


How will they know?


Sadly “macOS Monterey, Big Sur, Catalina, Mojave & High Sierra”


Why is it sad?


Probably because the two latest major versions - Ventura (13.x) and Sonoma (14.x) - are not included in that list, and may not be supported. Patches to older versions may be supported. Apple's patch policy, according to Wikipedia:

> Only the latest major release of macOS (currently macOS Sonoma) receives patches for all known security vulnerabilities.

> The previous two releases receive some security updates, but not for all vulnerabilities known to Apple.

> In 2021, Apple fixed a critical privilege escalation vulnerability in macOS Big Sur, but a fix remained unavailable for the previous release, macOS Catalina, for 234 days, until Apple was informed that the vulnerability was being used to infect the computers of people who visited Hong Kong pro-democracy websites.


As someone who has recently struggled with setting up both macOS and Windows VMs with qemu, this was really useful and really easy to set up.


quickemu has been great, really convenient for running a performant Windows VM on my Linux laptop.


Would this be how I get to run PC games on Steam on my Mac?


No, that would be either Crossover [0] or Game Porting Toolkit [1] (easily run via Whisky [2]).

[0] https://www.codeweavers.com/crossover

[1] https://www.applegamingwiki.com/wiki/Game_Porting_Toolkit

[2] https://getwhisky.app/


Something like macOS Parallels would be nice on Linux.



