Shout out to https://virt-manager.org/ - works much better for me, supports running qemu on remote systems via ssh. I used to use this all the time for managing bunches of disparate vm hosts and local vms.
virt-manager is one of the most underrated pieces of software out there. It's a powerhouse and I use it all the time. It is going to expect you to know some basic terminology about VMs, but it reminds me a lot of the old skool GUIs that were packed with features and power.
If your needs are simple or you're less technical with VMs, Gnome Boxes uses the same backend and has a beautiful, streamlined GUI. With the simplicity of course comes less flexibility, but the cool thing is you can actually open Gnome Boxes VMs in virt-manager should you later need to tweak a setting that isn't exposed through Boxes.
I’m so appreciative that virt-manager has a GUI that crafts the XML and then lets you edit it directly. It really eased my transition from beginner to competent user of the program.
It's wild how important and useful a program that does nothing but configuration can be.
Imagine what life would be like if configuration was separated from the software it configures. You could choose your favorite configuration manager, and use that, rather than learn how each and every program with a UI reinvented the wheel.
The closest thing we have is text configuration files. Every program that uses them has to choose a specific language and a specific place to save its configs.
An idea I've been playing with a lot lately is a configuration intermediary. Use whatever language/format you want for the user-facing config UI, and use that data as a single source of truth to generate the software-facing config files.
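To sketch the idea very roughly (everything here is made up for illustration: the `settings.env` source file, the key names, and the target formats are all hypothetical):

```
#!/usr/bin/env bash
# Hypothetical sketch: one user-facing source of truth (settings.env)
# gets rendered into the formats the individual programs expect.
set -euo pipefail

# settings.env contains lines like: EDITOR_FONT_SIZE=12
source ./settings.env

# Render an INI-style config for a made-up editor
cat > editor.ini <<EOF
[ui]
font_size = ${EDITOR_FONT_SIZE}
EOF

# Render a YAML-style config for a made-up terminal
cat > terminal.yml <<EOF
font:
  size: ${EDITOR_FONT_SIZE}
EOF
```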
You have some incumbent competition already, in case you're not aware, and I'd say many of these are closer to what you're describing than text configuration files.
Would also be a useful exercise to write a new configuration UI for existing configuration backend(s) (preferably something already used by software you already wish had better configuration) - even if you do end up aiming at your own standard (xkcd.com/927), it should give you some clarity on ways to approach it.
The irony here is that the problem you have proposed - the complexity introduced by creating a new solution - is the same problem that each solution is intended to solve.
That means that any adequate solution should recursively resolve the problem it introduces.
oh, and also thank you for introducing me to Elektra. That was very helpful of you.
Libvirt and virt-manager are just a simplified user interface to the real software, which is QEMU (and KVM). They solve pretty trivial problems, like parsing a config file and passing the right options to the qemu binary.
Yes, they have some additional useful administration features, like start/stop based on a config file and serial console access, but these are really simple to implement in your own shell scripts. Storage handling in libvirt is horrible: verbose, complex, and yet it can't even work with thin LVs or ZFS properly.
Unless you just want to run stuff the standard corporate way and don't care about learning fundamental software like qemu and the shell, or you require some obscure feature of libvirt, I recommend using qemu with KVM directly, via your own scripts. You'll learn more about qemu and less about underwhelming Python wrappers, and you'll have more control over your systems.
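For reference, a minimal script along those lines might look like this (disk path, sizes, and the forwarded port are placeholders; adjust to taste):

```
#!/usr/bin/env bash
# Minimal KVM guest, no libvirt: 4 GiB RAM, 4 cores, virtio disk,
# user-mode networking with SSH forwarded to host port 2222.
# (Create the disk once with: qemu-img create -f qcow2 disk.qcow2 20G)
set -euo pipefail

qemu-system-x86_64 \
  -enable-kvm \
  -cpu host \
  -smp 4 \
  -m 4G \
  -drive file=disk.qcow2,if=virtio,format=qcow2 \
  -nic user,model=virtio-net-pci,hostfwd=tcp::2222-:22 \
  -display none -serial mon:stdio
```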
Also, IBM/Red Hat seems to have deprecated virt-manager in favour of (of course) a new web interface (Cockpit).
Quickemu seems to be of more interest, as it allows launching a new VM right after a quick look at the examples, without wasting time learning a big, complicated UI.
Who uses libvirt with anything other than qemu? Xen has its own, much better tools, like the xl toolstack, or xcp-ng with Xen Orchestra.
If you mean libvirt can provide a single UI for different hypervisors, that is true, but I don't see any technical reason to have a single UI for different hypervisors. It just provides a familiar clicky UI for nontechnical users who don't want to bother learning their hypervisor's features and its dedicated tools.
> Quickemu seems to be of more interest, as it allows launching a new VM right after a quick look at the examples, without wasting time learning a big, complicated UI.
Why would anyone want a qt frontend when you can call a cli wrapper, or better yet the core binary directly?
Libvirt can dump configs as scripts. If virsh/virt-manager does 90% of the tedious work through a fine UI that can then be replicated later, I guess libvirt wins here.
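If I remember the tooling correctly, virsh can translate a ___domain definition into the raw qemu command line libvirt would run, which is roughly what I mean (the exact syntax varies a bit between libvirt versions):

```
# Dump the ___domain XML, then translate it into the qemu argv libvirt would use.
# (Newer libvirt may want --xml / --___domain instead of a positional file.)
virsh dumpxml myvm > myvm.xml
virsh domxml-to-native qemu-argv myvm.xml
```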
That's possible, but you also seem to have missed mine, which is that libvirt is not a great time investment if you want to learn how qemu works, or to have the most flexibility and control over its deployment.
Anyone running virt-manager on mac connecting to a headless linux hypervisor on the same network? I tried installing it through "brew", but was getting many random errors.
I thought about running it over the network using XQuartz, but I'm not sure how maintained / well supported that is anymore.
At this point, you could probably use Qubes OS, which is basically an OS that runs everything in VMs with a great interface. My daily driver, can't recommend it enough.
Just a security reminder from the last time this got posted[1]
This tool downloads random files from the internet and checks their checksums against other random files from the internet. [2]
This is not the best security practice. (The right security practice would be to have the GPG keys of the distro developers committed in the repository and to check all files against those keys.)
This is not to downplay the effort that was put into this project to find the correct flags to pass to QEMU to boot all of these.
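For comparison, the usual manual GPG-based workflow for, e.g., an Ubuntu ISO looks roughly like this (URLs and filenames are illustrative; ideally the signing key comes from a trusted, out-of-band source rather than the same server):

```
# Fetch the checksum list and its detached signature from the mirror.
wget https://releases.ubuntu.com/22.04/SHA256SUMS
wget https://releases.ubuntu.com/22.04/SHA256SUMS.gpg

# Verify the checksum list against the distro's signing key
# (already imported into the local keyring from a trusted copy).
gpg --verify SHA256SUMS.gpg SHA256SUMS

# Only then check the ISO against the now-trusted checksums.
sha256sum -c SHA256SUMS --ignore-missing
```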
Can someone explain how this is a security problem? While GPG key verification would be the best way to ensure authenticity, it's doing nothing different from what almost everyone does: download the ISO from the distro's own HTTPS site. It then goes beyond what most people do and validates that the hashes match.
I just looked at the shell script and it's not "random" at all, it's getting both the checksum and the ISO from the official source over TLS.
The only way this technique is going to fail is if the distro site is compromised, their DNS lapses, or if there's a MITM attack combined with an incorrectly issued certificate. GPG would be more robust but it's hardly like what this tool is doing is some unforgivable failure either.
It's not that the OP is wrong but I think they give a really dire view of what's happening here.
Getting the checksum and the file from the same place is questionable practice in itself. If that place is hacked, all the attacker needs to do is hash their own file, which has happened in at least one high-profile case [0]. And this practice doesn't even offer any extra protection if the resource was fetched over HTTPS in the first place.
Absolutely true, but one additional factor (or vector) is that this adds a level of indirection. That is, you're trusting the Quickemu people to exercise the same diligence you yourself would when downloading an ISO from, say, ubuntu.com, for each and every target you can conveniently install with Quickemu.
It's a subtle difference, but the trust-chain could indeed be (mildly) improved by re-distributing the upstream gpg keys.
Eh, you can fetch the GPG keys from some GPG keyserver, it's not like those keys are just random files from the Internet. They're cross-signed, after all!
> It then goes beyond what most people do and validates that the hashes match.
It might go above and beyond what most people are doing, but not what most tools are doing. Old-school package managers are still a step ahead in this area, because they use GPG to check the authenticity of the data files independently of the transport channel. A website serving files and checksums is one such channel. This enables supporting multiple transport channels: back then it was a network of FTP mirrors; today it might be BitTorrent, IPFS, or a CDN. And GPG supports revoking trust. Checksums that are hardcoded into a tool cannot be revoked.
As soon as we start to codify practices into tools, they become easier to game and attack. Therefore tools should be held to higher security standards than humans.
Because you wrote HTTPS in italics... HTTPS doesn't mean anything. Both good and bad actors can have perfectly valid HTTPS configured. It is not a good indicator of the trustworthiness of the actual thing you download.
That's not accurate at all. HTTPS should mean "we've validated that the content you're receiving comes from the registered ___domain that you've hit". Yes, it's possible that the ___domain host itself was compromised, or that the ___domain owner himself is malicious, but at the end of the day you have to trust the entity you're getting the content from. HTTPS says, importantly, "You're getting the content from whom you think you're getting it from."
Yes, but we abandoned that idea a while ago. There are no more green locks in browsers. Nobody buys those expensive certificates that prove ownership. When you curl something, it doesn't show anything unless the certificate is actually invalid.
You are correct that it _should_ mean that, but the reality today is that it doesn't mean anything.
No, it still means that you've connected to the ___domain that you wanted to connect to and the connection is reasonably resistant to MITM attacks. It doesn't say anything about who controls the ___domain, but what it provides still isn't nothing.
"It is not a good indicator of trustworthiness of the actual thing you download."
I just downloaded something with malware from github.com. I indeed wanted to connect to github.com and I trust that it is github.com. But again... it did not say _anything_ about the trustworthiness of the _actual_ thing I did, which was to download an asset from that ___domain.
That is my point. In the context of this discussion about downloading dependencies.
But by using GPG to check the authenticity of the actual files that are downloaded, we can remove the web site -- whether HTTPS is sufficiently secure or not -- from the trust chain altogether. The shorter the trust chain, the better.
> HTTPS says, importantly, "You're getting the content from whom you think you're getting it from."
You need certificate pinning to know this for sure, due to the existence of MITM HTTPS spoofing in things like corporate firewalls. HTTPS alone isn't enough; you have to confirm the certificate is the one you expected. (You can pin the CA cert rather than the leaf certificate if you want, if you trust the CA; that still prevents MITM spoofing.)
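As a concrete illustration, curl can pin a server's public key so even a firewall-injected certificate chain gets rejected (example.com, the file, and the hash are placeholders):

```
# Extract and hash the server's current public key once, out of band.
openssl s_client -connect example.com:443 -servername example.com </dev/null 2>/dev/null \
  | openssl x509 -pubkey -noout \
  | openssl pkey -pubin -outform DER \
  | openssl dgst -sha256 -binary | base64

# Later downloads refuse any cert chain whose key doesn't match the pin.
curl --pinnedpubkey 'sha256//PASTE_BASE64_HASH_HERE=' -O https://example.com/file.iso
```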
I’m not aware of any HTTPS MITM that can function properly without adding its own certificate to the trusted roots on your system (or dismissing a big red warning for every site), so I don’t think certificate pinning is necessary in such an environment (if the concern is MITM by a corporate firewall).
An attacker would still need to either have attacked the ___domain in question, or be able to forge arbitrary trusted certificates.
Obviously you choose your own relevant threat models, but it's common to do in iOS apps--many apps are including it in their threat models. Pinning the CA cert is what Apple recommends to app developers. It's not an unreasonable thing to do.
That link discusses how to do it but not why. The most likely reason that occurs to me is that iOS apps consider the user a potentially hostile actor in their threat model, which is... technically a valid model, but in the context of this thread I don't think that counts as a real concern.
I wouldn't be sure about most. I and everyone I worked with took gpg verification seriously. I always verify isos. Always.
People who are responding to you with "you are absolutely right" might not represent the silent majority (within our field, not talking about normal users).
How much of this is outdated practice? Shouldn't TCP/TLS be doing checksum and origin signing already?
In the days of FTP, checksums and GPG were vital. With HTTP over TCP, the separate checksum matters less (TCP already handles retries and checksumming), so GPG matters relatively more, but you still want both because of MitM.
But with HTTPS, how does it still matter? It does both the integrity verification and the signature check for you.
TLS prevents a different kind of attack, the MitM one which you describe.
GPG signing covers this threat model and much more; the threats include:
* The server runs vulnerable software and is compromised by script kiddies, who then upload arbitrary packages to the server
* The cloud provider is compromised and attackers take over the server via the cloud provider's admin account
* Attackers use a vulnerability (in SSH, HTTPd, ...) to upload arbitrary software packages to the server
GPG doesn't protect against the developer machine getting compromised, but it guarantees that what you're downloading has been issued from the developer's machine.
I agree, but I think that model of GPG is not how it's used any more. I think nowadays people upload a one-shot CI key, which is used to sign builds. So you're basically saying "The usual machine built this". Which is good information, don't get me wrong, but it's much less secure than "John was logged into his laptop and entered the password for the key that signed this"
So, you're right, that GPG verifies source, whereas TLS verifies distribution. I suppose those can be very different things.
> The packages here are from the latest upstream release with WORK IN PROGRESS packaging, built from our repositories on Phabricator. These are going to be manually uploaded to the Backports PPA once they are considered stable.
And presumably "manually" means "signed and uploaded"
It doesn’t need to be unique, it just needs to leak enough information to decrease the search space enough to where brute force (or other methods) can kick in.
Each key will produce a different sound, even on a touchscreen surface keyboard, because each key sits at a different position on the surface and has a different position relative to the microphone - it may just be more difficult and require a higher-quality microphone.
Once you isolate and cluster all the key sounds you end up with a simple substitution cipher that you can crack in seconds.
While this comment doesn't seem 100% serious, I wonder if this kind of attack is made less effective by the trend in mechanical keyboards to isolate the PCB and plate from the case acoustically, e.g. gasket mount, flex cuts, burger mods. In my experience the effect of these changes is each key sounds more similar to the others rather than the traditional case mount setup where each key sound changes drastically based on its proximity to mounting points.
libvirt ships with virt-install which also allows for quickly creating and auto-installing Windows and many Linux distributions. I haven't tried it with mac.
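For example, something along these lines kicks off an unattended install (I'm going from memory, so the exact option names and osinfo short-id may differ between virt-install versions):

```
# Unattended AlmaLinux install straight from the distro's install tree;
# virt-install fetches the kernel/initrd and drives the installer itself.
virt-install \
  --name alma9 \
  --memory 4096 \
  --vcpus 2 \
  --disk size=20 \
  --install os=almalinux9 \
  --unattended admin-password-file=/root/vm-pass.txt
```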
Then you go for a coffee, come back and have a fully installed and working Alma Linux VM. To get the list of supported operating systems (which varies with your version of libvirt), use:
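```
virt-install --osinfo list
```

(That flag is from newer virt-install releases; if memory serves, `osinfo-query os` from libosinfo is the older equivalent.)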
virt-builder is awesome for quickly provisioning Linux distros. It skips the installer because it works from template images. You can use virt-builder with virt-manager (GUI) or virt-install (CLI).
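A typical invocation, to make the comparison concrete (the template name is from memory; `virt-builder --list` shows what your version actually ships):

```
# Build a ready-to-boot disk image from a template, then import it as a VM.
virt-builder almalinux-9.2 --size 20G --format qcow2 -o alma.qcow2

virt-install --name alma-test --memory 4096 --vcpus 2 \
  --disk ./alma.qcow2 --import --osinfo almalinux9
```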
It is not obvious what the result of this would be. What hostname will it have? How will the disk be partitioned? What packages will be installed? What timezone will be set? What keyboard layout will be set? And so on.
While I agree in general that shell script is not usually fun to read, this particular code is really not bad.
Not sure if this will sway you, but for what it's worth, I did read the bash script before running it, and it's actually very well-structured. Functionality is nicely broken into functions, variables are sensibly named, there are some helpful comments, there is no crazy control flow or indirection, and there is minimal use of esoteric commands. Overall this repo contains some of the most readable shell scripts I've seen.
Reflecting on what these scripts actually do, it makes sense that the code is fairly straightforward. At its core it really just wants to run one command: the one that starts QEMU. All the other code inspects the local system to decide whether to set certain arguments for that one command, and maybe downloads some files if necessary.
I do see that it is better structured, but as any other bash script it relies heavily on global variables.
For example, `--delete-vm` is effectively `rm -rf $(dirname ${disk_img})`, but the function takes no arguments. It's getting the folder name from the global variable `$VMDIR`, which is set by the handling of the `--vm` option (another global variable named $VM) to `$(dirname ${disk_img})`, which in turn relies on sourcing a script named `$VM`.
First, when it works, it'll `rm -rf` the parent of whatever path the VM's `disk_img` variable is set to, irrespective of whether that path exists or is valid - `dirname` doesn't check that; it just snips the end of the string. Enter an arbitrary string, and you'll `rm -rf` your current working directory, as `dirname` just returns ".".
Second, it does not handle relative paths. If you pass `--vm somedir/name` with `disk_img` set to just the relative file name, it will not resolve `$VMDIR` relative to "somedir" - `dirname` will return ".", resulting in your current working directory being wiped rather than the VM directory.
Third, you're relying on the flow of global variables across several code paths in a huge bash script, not to mention global variables from a sourced bash script that could accidentally mess up quickemu's state, to protect you against even more broken rm -rf behavior. This is fragile and easily messed up by future changes.
The core functionality of just piecing together a qemu instantiation is an entirely fine and safe use of bash, and the script is well-organized for a bash script... But all the extra functionality makes this convoluted, fragile, and one bug away from rm -rf'ing your home folder.
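To make that concern concrete, this is the kind of guard I'd want around the delete path (just a sketch, not quickemu's actual code; the variable names mirror the ones discussed above):

```
# Sketch of a defensive delete, not the actual quickemu implementation.
delete_vm() {
  local disk_img="$1"

  # Refuse to proceed unless the disk image really exists as a file.
  [[ -f "${disk_img}" ]] || { echo "No such disk image: ${disk_img}" >&2; return 1; }

  # Resolve to an absolute path so a relative disk_img can't turn into ".".
  local vmdir
  vmdir="$(dirname "$(realpath "${disk_img}")")"

  # Never delete obviously wrong targets.
  case "${vmdir}" in
    /|"${HOME}"|"${PWD}") echo "Refusing to delete ${vmdir}" >&2; return 1 ;;
  esac

  rm -rf -- "${vmdir}"
}
```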
Why is it different from any other software just because it is a shell script? Do you read the kernel sources for your OS before running it? Your web browser? My point is not that we should blindly run things, but that we all have criteria for what software we choose to run that typically doesn't rely on being familiar with its source code.
Well, yes, I read code of (and contribute to) the kernel and web browsers I use, but that's not really relevant.
There's a big difference between "large, structured projects developed by thousands of companies with a clear goal" vs. "humongous shell script by small group that downloads and runs random things from the internet without proper validation".
And my own personal opinion: The venn diagram of "Projects that have trustworthy design and security practices", and "projects that are based on multi-thousand line bash scripts" is two circles, each on their own distinct piece of paper.
(Not trying to be mean to the developers - we all had to build our toolkits from somewhere.)
Heh, this reminds me a bit of when on live television Contessa Brewer tried to dismiss Mo Brooks with "well do you have an economics degree?" and he actually did and responded with "Yes ma'am I do, highest honors" :-D [1]
I have no problem with (and have written a few) giant bash scripts, and I completely agree with you. A giant bash script isn't going to have many eyes on it, whereas a huge project like the kernel is going to get a ton of scrutiny.
Probably going to catch some flack for this comment but... if you are that concerned with it, and have some free time, you could always use chatgpt to talk about the code. A prompt could be:
"You are a linux guru, and you have extensive experience with bash and all forms of unix/linux. I am going to be pasting a large amount of code in a little bit at a time. Every time I paste code and send it to you, you are going to add it to the previous code and ask me if I am done. When I am done we are going to talk about the code, and you are going to help me break it down and understand what is going on. If you understand you will ask me to start sending code, otherwise ask me any questions before you ask for the code."
I have used this method before for some shorter code (sub 1000 lines, but still longer than the prompt allows) and it works pretty well. I will admit that ChatGPT has been lazy of late, and sometimes I have to specifically tell it not to be lazy and give me the full output I am asking for, but overall it does a pretty decent job of explaining code to me.
I'd say this is a general issue with software, most generally how and what you do to establish trust, and what expectations/responsibilities there are for a developer and user. The "many eyes make all bugs shallow" phrase does seem to be a bit of a thought-terminating cliché for some users: if it's open to scrutiny then it must be fine, conjuring an image of roaming packs of code auditors inspecting everything (I'd expect them to be more on the malicious side than the benevolent one).
Over on Windows, there's been a constant presence of tweak utilities for decades that attract people trying to get everything out of their system, on the assumption that 'big corp' developers don't have the motivation to do so and leave easy wins on the table behind quick config or registry tweaks that are universally useful. One that comes to mind, which I see occasionally, is TronScript. If I had to bet, it passes the 'sniff test' given its history and participation, but it presents itself as automation, abstracting away the details and hoping it makes good decisions on your behalf. While you could dig into it and research/educate yourself on what is happening and why, for many it might as well be a binary.
I think the only saving grace here is that most of these tools have a limited audience, so they're not worth compromising. When one brand does get used widely enough, you may get situations like CCleaner from Piriform, which was backdoored in 2017.
> DO NOT DOWNLOAD TRON FROM GITHUB, IT WILL NOT WORK!! YOU NEED THE ENTIRE PACKAGE FROM r/TronScript
I see later it mentions you can check some signed checksums but that doesn't inspire confidence. Very much epitomises the state of Windows tweaky utilities vs stuff you see on other platforms.
Looks interesting, but would someone be so kind as to point out if there are any advantages for a guy like me who just runs Win 11 in VirtualBox under Ubuntu from time to time?
Hard to answer this question as it largely depends on what you are doing with your VM. This appears to be a wrapper for QEMU and tries to pick reasonable settings to make spinning up new VMs easier.
Especially regarding GPU acceleration... Running video-conferencing inside windows inside vbox is almost impossible, and even modestly complex GUI apps have a significant lag there.
Does qemu allow GPU acceleration while running with a single GPU? From the video on the website it appears so, however from what I’ve read (at least with amd igpus) it doesn’t seem to work.
I think it's more an alternative to GNOME Boxes, where the tool takes care of downloading the latest image, offers a default config specific to that distro/OS, and additionally supports dirty OSes like Windows and macOS.
If it actually runs MacOS then it's a huge advantage over installing it in VirtualBox or VMware, where it's very difficult to get it running (you have to patch various things).
One thing I love but rarely see mentioned is systemd-nspawn. You do `docker create --name ubuntu ubuntu:22.04` and then `docker export ubuntu` to create a tar from an arbitrary docker image. Then you extract that to `/var/lib/machines/ubuntu`. Make sure to choose an image with systemd or install systemd in the container. Finally do `machinectl start ubuntu` and `machinectl shell ubuntu` to get inside.
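Roughly, the full sequence looks like this (machine name and image are just examples):

```
# Turn an arbitrary Docker image into a systemd-nspawn machine.
docker create --name ubuntu ubuntu:22.04
sudo mkdir -p /var/lib/machines/ubuntu
docker export ubuntu | sudo tar -x -C /var/lib/machines/ubuntu

# The image needs systemd inside; boot it and get a shell.
sudo machinectl start ubuntu
sudo machinectl shell ubuntu
```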
systemd-nspawn is very simple and lightweight and emulates a real Linux machine very well. You can take an arbitrary root partition based on systemd and boot it using systemd-nspawn and it will just work.
systemd-nspawn is simple but AFAIK it doesn't do any security other than the kernel namespacing. Docker is even worse because it runs containers as root, which means a rogue process can take over the host very easily.
Incus/LXD runs containers as normal users (by default) and also confines the whole namespace in apparmor to further isolate containerized processes from the host. Apparmor confinement is also used for VMs (the qemu process cannot access anything that is not defined in the whitelist)
Are there any numbers on the performance change vs. naively running a VM? I usually run a Linux guest inside a Linux host and am frequently disappointed by the guest performance. I have never done any research on tuning the VM experience, so I am curious how much I might be missing. 5% faster? 100%?
virt-manager with a PopOS host, usually an Ubuntu/PopOS guest, on a Ryzen 5500 (? something in that series). I don't know what virt-manager runs under the hood. Again, I've never done anything other than install virt-manager, so I would be happy to read a guide on any recommended configuration settings.
This is staggeringly different from Proxmox. Proxmox is made for labs and datacenters that need to host lots of servers as VMs. Quickemu looks like it is mainly geared toward desktop use.
Quickemu gives me the ability to instantly spin up a full blown VM without fiddling with QEMU configurations, just by telling it what OS I want.
This might be less useful for those who are quite familiar with QEMU, but it’s great for someone like me who isn’t. So this saves me a whole lot more than 2 minutes. And that’s generally what I want from a wrapper: improved UX.
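For what it's worth, the whole flow is basically two commands (from my reading of the README, so the generated config filename may differ slightly):

```
# Download the Ubuntu 22.04 ISO, verify it, and write a ready-made .conf.
quickget ubuntu 22.04

# Boot the VM described by that config.
quickemu --vm ubuntu-22.04.conf
```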
this is what we are really missing, something like: "here are 'good enough' cmd line args that you can use to boot $OS with qemu". Quickemu seems to try to help here.
> Quickemu is a wrapper for the excellent QEMU that attempts to automatically "do the right thing", rather than expose exhaustive configuration options.
As others have said, it's to get past the awful QEMU configuration step. It makes spinning up a VM as easy as VirtualBox (and friends).
I couldn’t answer this from the site. Will this let me run macOS Catalina on an M2 Mac Studio with usable graphics performance? Because that would give me back a bunch of 32-bit games I didn’t want to give up.
Is there something similar to this but for Windows 10 or 11? I want a Windows GUI for QEMU to build some Linux machines. I tried QtEMU but didn't like it. Thanks in advance.
I don't think you can. All virtualized MacOS machines, iirc, can't fully install the tools necessary to build software for MacOS. For example, I don't believe you will ever be able to sign and staple the app.
I would really love to have someone prove me wrong on this thread but I've never found a solution other than building on MacOS hardware, which is such a pain to maintain.
I have multiple old MacOS machines that I keep in a stable state just so I can be sure I'll be able to build our app. I'm terrified of failure or just clicking the wrong update button.
You cannot login using AppleID. If you can't do that, aren't you prevented from basically doing any kind of stapling and/or retrieving certificates for signing?
I would LOVE to be wrong about this. You've done that?
This is only true for products based on the Virtualization framework. Intel “Macs” can sign in just fine. (Also, I think you can authenticate things with an API key these days rather than your credentials?)
Probably because the two latest major versions - Ventura (13.x) and Sonoma (14.x) - are not included in that list and may not be supported. Patches to older versions may be supported. Apple's patch policy, according to Wikipedia:
```
Only the latest major release of macOS (currently macOS Sonoma) receives patches for all known security vulnerabilities.
The previous two releases receive some security updates, but not for all vulnerabilities known to Apple.
In 2021, Apple fixed a critical privilege escalation vulnerability in macOS Big Sur, but a fix remained unavailable for the previous release, macOS Catalina, for 234 days, until Apple was informed that the vulnerability was being used to infect the computers of people who visited Hong Kong pro-democracy websites.
```