Has anyone got a checklist for package managers? There are so many it'd be nice to be able to predict what problems they're going to face in the future (and hopefully prompt better design in the process)
In all seriousness, there needs to be a good way to [spray your stuff](http://dilbert.com/strip/2013-06-09) all over a system with the rest of the system knowing about it. FPM does a decent job of this.
So you are developing a package in some language and you package it with [insert language-specific package manager here]: you are now done. If you instead package it for [insert system package manager here], then to achieve a similar level of coverage you must not only figure out how to create packages for 17 different systems, you also have to either work out the standards and procedures required to get your package included on each platform, or work out how to host your own repo and instruct users on how to set it up. That may involve variations in setup and instructions between multiple versions of the OS that are in use concurrently.
This could be 1000x as much work for not much gain.
Right, much more work for little gain for the developer.
Then you have a bunch of developers all taking the easy way. And now all the system administrators running systems with those projects can't trust their packaging tools anymore, and have to do a lot more work in their daily workflow.
Oh, and as a bonus, none of these systems have a secure update mechanism, so you've bypassed the inherent code whitelisting that comes with a system package manager with a centralised repository and a requirement for strong package signatures. At best, they do what PyPI does and optionally allow a developer to specify a GPG key. So you have to do `pip install <bla>` and pray to the computer god that the developer of <bla> and all the developers of all its dependencies cared about code integrity and set up GPG keys. From experience, the chances of that are approximately 0%, because it's not a requirement.
Ideally, your language package manager should know how to build (candidate) packages for common system package managers (e.g. `setup.py bdist_rpm` or `setup.py bdist_debian`). That way, the technical end user can install the package in a way their package manager knows about. If your package becomes widely used, it also simplifies (but does not remove) the job of the distribution's packager to build an official package.
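For a plain setuptools project that can be as little as the following (assuming the rpm/rpmbuild tools are installed; the output lands in `dist/`):

```
$ python setup.py sdist         # source tarball, useful to distro packagers
$ python setup.py bdist_rpm     # candidate .rpm built from the same metadata
```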
This doesn't work so well for micro-dependencies, though. The TeTeX distribution in Fedora is bad enough; I'd hate to see the same done with node packages.
This isn't really usable if you develop with the latest packages for dependencies. While I could generate an rpm for my python module, Fedora and CentOS/RHEL are going to have incompatible dependencies. I could package up my dependencies too, but then I'll break system software (a lot of fedora's tooling is written in Python).
A much better solution would involve making languages like Python accept the Python version as a dependency while making PYTHONPATH construction more flexible with common tooling. Better PYTHONPATH construction would enable installation by namespace+version, such that we could have something like the sketch below:
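(A purely hypothetical layout; nothing in pip or CPython works this way today, it is just an illustration of what namespace+version installs might look like:)

```
$HOME/.pymodules/requests/2.9.1/requests/__init__.py
$HOME/.pymodules/requests/2.12.4/requests/__init__.py

# each application then selects the version it was developed against
PYTHONPATH=$HOME/.pymodules/requests/2.9.1 python appA.py
PYTHONPATH=$HOME/.pymodules/requests/2.12.4 python appB.py
```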
Python in particular is still a bit challenging in that the compiler assumes it can spit out byte-compiled code all over the place (.pyc), but if someone were going through the tooling effort, they'd presumably find a way (or get something merged if it doesn't exist) to add a shadow directory of .pyc files, keyed by version and cached somewhere user-writable (~/.pymodules, for example).
Not that it is ready for primetime yet, but Guix solves this problem well by allowing each user/application to have its own dependencies at different versions, while still letting them share when they want the same version.
The main distribution will only ever have free/open-source software in it, but it is trivial to make your own packages/package repositories and share them, since a package definition is just a Guile code file.
That only really works in a world where all of the users on the machine that can run Python also can write to shared library directories. That seems kinda awkward from a security perspective unless you're already using containerization (in which case you don't care about any of this).
This is certainly better than it was (.pyc files just didn't co-exist sanely across python versions), but it doesn't provide a method for running multiple versions of libraries between different apps.
> That only really works in a world where all of the users on the machine that can run Python also can write to shared library directories.
Far from it. It's up to the system package manager to generate the .pyc files for the globally installed version of the libraries. __pycache__ directories allow this to be done cleanly for any installed interpreter that supports them. It has all the information it needs, and users need not get involved.
Unless the user is running a copy of Python that wasn't installed by the package manager, there should be no need for them to cause the .pyc files to be generated. Besides, .pyc files are an optimisation so that parsing and bytecode compilation don't have to happen repeatedly; they aren't strictly required. Without them startup is slower, but the code still runs.
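For reference, PEP 3147's `__pycache__` scheme tags each cache file with the interpreter that produced it, so a packager can pre-compile for every installed Python without collisions. A quick illustration (the path is Debian-style and the version tag will match whatever interpreter you run):

```
$ python3 -m compileall /usr/lib/python3/dist-packages/foo/
$ ls /usr/lib/python3/dist-packages/foo/__pycache__/
__init__.cpython-35.pyc  util.cpython-35.pyc
```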
> it doesn't provide a method for running multiple versions of libraries between different apps
Indeed, and I pointed that out in my comment in the addendum. Virtual environments are still pretty much the state of the art in that regard, unfortunately. It's possible that some PYTHONPATH magic might help with that to some degree, but that would be messy.
This is especially true with native code: between apt/yum/zypper/pacman on Linux, and pacman as part of msys2 on Windows, you really should be using your system package manager (and if you need to, build your own packages; it really isn't that much harder than using any of these language-specific ones, I swear).
There was a tool I used years ago that would build rpms from makefile-based builds. Basically, you would do `./configure && make && [app name] make install`; it would then run a dummy `make install`, monitor the syscalls that did the copying etc., and make an rpm from that. That was easy enough for making packages from tarballs. Anything more complicated (like having to actually know the package file formats): forget it.
Anyone know the name of this tool? It was unmaintained for a long time already when I stopped using Linux, and that was 10 years ago.
It was probably checkinstall. Today I'd really recommend learning how to make a .rpm or .deb the proper way; it's really not that hard. My experience with Debian packaging is fairly limited, but the typical RPM spec for an autotools-based build is as simple as this (minus the package metadata, because that's kinda noisy):
```
%prep
%setup -q

%build
%configure
make %{?_smp_mflags}

%install
make install DESTDIR=%{buildroot}

%files
/usr/*
```
Boom, there you go: build it with rpmbuild or mock and you're done. On the Debian side, debian/rules is just a Makefile with a bunch of magic targets that get called, and 99% of the time debhelper will automatically figure out what you are trying to package (autotools, python, ruby, java, node, etc.) and build/install/package it correctly for you, so most debian/rules files are only a couple of lines (see below).
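For the record, the whole dh-style `debian/rules` for such a project is typically just this (the indented line must start with a literal tab, as in any Makefile):

```
#!/usr/bin/make -f
%:
	dh $@
```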
You only even need to go the debhelper route if you're intending your package to go upstream into Debian or Ubuntu itself. You can go a looooong way with just `DEBIAN/control` and `dpkg-deb --build`.
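A minimal sketch of that route, with made-up package and path names (`mytool`, `/opt/mytool`); the control fields shown are the standard required ones:

```
$ mkdir -p mytool-pkg/DEBIAN mytool-pkg/opt/mytool
$ cp -r build/* mytool-pkg/opt/mytool/
$ cat > mytool-pkg/DEBIAN/control <<EOF
Package: mytool
Version: 1.0-1
Architecture: amd64
Maintainer: You <you@example.com>
Description: mytool, packaged the quick-and-dirty way
EOF
$ dpkg-deb --build mytool-pkg mytool_1.0-1_amd64.deb
```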
Do any system package managers support per-user installations, or are they designed mainly to install to system directories? Pip for example has the option to install packages into the user's home directory, which allows for easy cleanup. If developers want to experiment with a library for a project, they might not want everything chucked into /usr/ and made part of the system.
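pip's version of this, for what it's worth: on Linux everything lands under `~/.local`, so cleanup is just removing a directory (or using `pip uninstall`):

```
$ pip install --user requests    # installs to ~/.local/lib/pythonX.Y/site-packages
$ pip uninstall requests         # removes it again; /usr is never touched
```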
This is actually why I immediately discard use of the system package manager. One library version set per OS installation means that I can't run two things on the same machine unless I get really lucky with dependency versioning. Now the only way to get things done is to spin up one virtual machine per project, and completely setting up a new development environment per project is a huge pain in the ass that can only be optimized so far.
nix/guix is a sort of reasonable answer to this issue, I guess, but then you have to develop on nix/guix and hope that apt/rpm/pacman can handle your dependencies.
At least for guix, you can install it on top of any distribution and use it instead of the system package managers. I had some troubles with this when I did it, but nothing deal breaking and it isn't 1.0 yet so I imagine some of it will be cleaned up.
GNU Guix[0] supports unprivileged package management, among other features [1]. Each user may maintain one or more "profiles" containing whichever packages they'd like. On top of that, they have access to a universal virtualenv-like tool called "guix environment"[2] that can create arbitrary software environments for development or even just to do one-off experiments. It can even create containers for maximal development environment purity.
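A few concrete invocations, assuming Guix has been installed on top of whatever distro you already run (none of these need root):

```
$ guix package -i python guile                  # install into this user's ~/.guix-profile
$ guix environment --ad-hoc gcc-toolchain make  # throwaway shell containing just these packages
$ guix environment --container --ad-hoc python  # the same, but inside an isolated container
```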
Generally the purpose of a system package manager is to install packages for the system. I think the GP's assumption was that the context is "how do I set up a system whose primary purpose is deployment of X", not "how do I write X on a system whose primary purpose is development".
Then again, I've had good mileage out of making .debs that plonk dev work and deps into /opt locally. YMMV.
FreeBSD ports and pkg allow you to set the installation base with $LOCALBASE and $PREFIX (I guess other BSDs do too). Also, AFAIK, Guix. And BSD systems use /etc, /bin, and /usr/bin for OS stuff, and /usr/local for packages. Even the config files go to /usr/local/etc (NetBSD is the same, but under /usr/pkg).
When system package managers get properly documented, maybe... when instead of having one package manager per system we have a single package manager for every system... otherwise no: use the package manager that comes with the language, if there is one.
Given how many non-obvious but easily-solvable problems there are with package management... this might actually not be a terrible idea. The biggest issue would be making it sufficiently flexible to integrate with anything that wants packages, including things like video game modding platforms and programming languages with in-language namespacing/module/package features.
I'm irritated by the loading screen and the enormous list of JavaScript pulled in. That's a simple static page using a few css animations. Is adding bloat considered "modern webdesign"?
From the down-votes, I gather that people may have missed what I was trying to say. The left-pad discussion generally divided people into 3 general opinions:
1) Use a monolithic library like lodash
2) Use a library for each function (note that lodash can also be used in this way but we are already far off-topic)
3) Inline these small functions.
So a website loading lots of monolithic libraries and using JS for everything is actually the opposite extreme of what caused that drama. If I'm wrong or irrelevant, please tell me how instead of down-voting, because I sincerely don't see it. Thank you.
Well, you can add bloat by inlining it too, if it's repeated many times. I thought bloat was unused code loaded as a side effect of code that one actually depends on, for example as part of a big library. That makes a huge difference if you are targeting the client side.
I mean, you could also have it as a helper function in your project but that decreases the re-usability to copy-pasting across projects.
In the end, effective or not, I can see it as an attempt at reducing bloat. I see that many don't agree, but I'm afraid they don't take into account the constraints imposed by sending code to the clients rather than a binary.
>I'm irritated by the loading screen and the enormous list of JavaScript pulled in.
Why, did you have to count the JavaScript scripts and/or volume? On a web browser JS loading is usually transparent, unless one opens the developer tools.
So, either the page was slow for you (a legitimate concern, but the same volume could be matched by 1-2 image assets, so it's not particularly a JS issue), or there's absolutely no reason to care how much JS it loads -- especially given that it's a useful open source project to provide people with packages for Qt.
I'm personally getting so sick of all the package managers I have to use, I'm more and more inclined to not use a technology if it comes with a package management system.
Seriously, what is going on with the OS vendors? It's like... the Web came along and now nobody wants to build better operating systems. Shouldn't package management be a function of the OS? It's really just filesystem/tree management, and this really, really should be something we can use the OS for.
However, it's just not happening. Language vendors have to solve this problem because the OS vendors are asleep at the wheel. I think it's apathy: the browser has replaced the OS, and very poorly. Nobody wants to fix this problem, I guess...
>I'm personally getting so sick of all the package managers I have to use, I'm more and more inclined to not use a technology if it comes with a package management system.
So, you're better off with a technology that needs you to manually manage packages and dependencies?
>Seriously, what is going on with the OS vendors?
What about them? What do they have to do with Node, QT, Python, and other communities with package managers?
OS X has the App Store for OS X apps, Windows has something similar, and Linux distros have package managers for their userlands. Other projects have nothing to do with the OS vendors' schedules...
> So, you're better off with a technology that needs you to manually manage packages and dependencies?
No, you're better off having a single well-integrated system package manager. Once you start stuffing shit into system directories behind its back, you lose all oversight. You can't even answer a simple question such as "where did this file in /bin come from?" anymore. You can't tell the package manager to "verify all installed files in system directories against the checksums listed in your database", because the package manager doesn't know about all the files.
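Compare what you get when the system package manager does know about everything; these are standard rpm/dpkg commands, nothing exotic:

```
$ rpm -qf /bin/ls      # which package owns this file?
$ rpm -Va              # verify every installed file against the rpm database
$ dpkg -S /bin/ls      # Debian equivalent of rpm -qf
$ debsums -c           # Debian: list installed files whose checksums have changed
```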
I agree with GP, I avoid projects that require me to use a third party package manager if at all possible. If only because none of them seem to understand the reason for code signing.
>No, you're better off having a single well-integrated system package manager.
And wait for MS and Apple and your distro to keep up with ALL upstream changes? And have them approve what goes in there? Yeah, good luck with that...
>Once you start stuffing shit into system directories behind its back you lose all oversight. You can't even answer a simple questions such as "where did this file in /bin come from?" anymore.
You'd be surprised. E.g. brew files are in the Cellar dir, npm modules are in node_modules, global pip files are in their own folder, etc.
No package manager I use (and I use several, and all are quite popular) messes with system directories.
>You'd be surprised. E.g. brew files are in the Cellar dir, npm modules are in node_modules, global pip files are in their own folder, etc.
Yeah, you're pretty competent at remembering all that, because you have to, over and over again, rebuild this ontology... probably as part of a job or profession. Therefore "it's good for you".
But, outside that box, is a world where applications, developers, the operating system .. and most important of all: users, don't have to hire people such as yourself to maintain all that gluck. And in that world, the OS and Apps guys, when they build Frameworks, don't make the mistake of -DRY/NIH/RTW'ing themselves, over and over and over again (I count 4 in your list, but there are many, many more sorts of this kind of thing) .. ad infinitum.
It is a huge mess, modern software development. There is nothing efficient, effective, or "intelligent" about everyone having their own idea about how to add packages to a system. We learn this every single time some language/tech/stack du jour forks off its own little mind about how this all should be done.
It's because there isn't enough attention being paid, by the people who assemble the operating systems, to the needs of... what admittedly becomes a huge reason to stay "stuck" on the OS once you get it all set up, and have a team of monkeys to keep it from falling over just because someone did an install somewhere, touched a .file accidentally, didn't quite properly set up a BOGUS_ENV before doing configure, etc. and etc. and /etc.
In the case of Linux, the kernel alone does not make an OS, so consider the various Linux distributions, like Debian for example: each (well, each of the large ones) comes with its own package management tools.
So currently the "state of the art" is: create a package for each OS (which means multiple OSes of flavour "Linux").
I really wish there were a generic tool that would take a source tree as input and output packages for the most common package management systems.
You could have a look at CPack: https://cmake.org/Wiki/CMake:Packaging_With_CPack
It was created to be used with the C++ build system CMake, but can be used without it too. I use it to automatically build debian packages from Jenkins, but it also supports tarballs, RPMs, NSIS (Windows), PackageMaker (OS X), and cygwin packages. Hope this helps a bit!
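For the curious, the CPack part of a CMakeLists.txt really is only a handful of lines; a minimal sketch (project and target names are made up, the CPACK_* variables are the standard ones, and each generator needs its native tool such as rpmbuild installed):

```
cmake_minimum_required(VERSION 2.8)
project(mytool)
add_executable(mytool main.cpp)
install(TARGETS mytool DESTINATION bin)

set(CPACK_GENERATOR "DEB;RPM;TGZ")
set(CPACK_PACKAGE_VERSION "1.0.0")
set(CPACK_DEBIAN_PACKAGE_MAINTAINER "You <you@example.com>") # the DEB generator requires this
include(CPack)
```

After that, `cmake . && make package` produces the .deb/.rpm/.tar.gz in the build directory.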
That's precisely the point: we already have package managers. It's just "unfashionable" to use them.
What I would do if my language needed package management: write an abstraction for whatever package managers exist on the platform I'm targeting. So my 'package manager' would just be a smart shell around apt-get, or yum, or whatever.
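A toy sketch of that idea (a hypothetical script; the hard part it glosses over is mapping your language's package names onto each distro's names):

```
#!/bin/sh
# mylang-pkg-install <package>: dispatch to whatever system package manager is present
pkg="$1"
if command -v apt-get >/dev/null 2>&1; then
    sudo apt-get install -y "$pkg"
elif command -v dnf >/dev/null 2>&1; then
    sudo dnf install -y "$pkg"
elif command -v yum >/dev/null 2>&1; then
    sudo yum install -y "$pkg"
elif command -v pacman >/dev/null 2>&1; then
    sudo pacman -S --noconfirm "$pkg"
else
    echo "no supported package manager found" >&2
    exit 1
fi
```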
Okay, this looks like an idea ripe for attention...
How exactly do you see it done by the OS vendors? Should they try to support every conceivable programming language out there? Should they force every language to use a certain package format?
Until languages reach a stage where they have totally transparent interoperability - which is the ability to use a library in your language regardless of the language the library was written in - OS vendors have a monumental (impossible) task on their hands.
Windows tried this with COM and ActiveX and I don't think that succeeded entirely.
Things like CLR or JVM are a partial solution, but you can't expect a C++ programmer to work with the JVM or a C guy to make his hack compatible with CLR.
I guess the way we have it right now is the natural "best" way. Let's hope someone somewhere gets enlightened and enlightens everyone else.
I had hoped, by now, that OS vendors would have evolved a system of managing packages in a filesystem that was standardized, easily portable across operating systems, and easy to implement with a little forethought from anyone wishing to package their apps for the system or add support for the package/bundle in their language as a first-class implementation. In many ways we have this: at least in the Games world, where we can't expect the user to do package management to just play a game, we've got things like good ol' .zip files masquerading as filesystems, and thus .. our game is really a little OS with its own universe. The other end of the scale.
But it's really, really hard, it seems, to get everyone to agree to these kinds of standards and share things across the filesystem in a way that is easily maintainable by real people as well as automated systems. I don't truly believe all this new school of thought, either, about filesystems having to go away; to me this is just a cop-out by folks who should be taking responsibility for teaching people what a filesystem is (since we have so many of them) before we start throwing them away and replacing them with... something... which inevitably doesn't solve any of the filesystem problems, but does mean less code has to be maintained.
Which is why I think language people do this "NIH"-ification of the problem: nobody is enforcing any rules, so Language-X has to have its own way of laying things out, providing common code, isolating things into common/routine layouts, and so on.
For me, at least, it truly is the case that if your language requires its own package management, it's broken and isn't any better than any of the umpteen languages that preceded it, no matter how sexy it is, all of which also had "package management".
The whole area is just broken and isn't fixed yet. I don't know all the answers, but I have a feeling a good start might be: build an OS, an execution environment, and a user interface using one language and one language only, and make all sub-systems of that OS utilize a standard, built-in, intelligently designed means of handling dependency graphs for user sub-modules.
The decision was between Node, Qt, and Go. At the end of the day, we wanted the possibility to share code between the client and server and Go's server side support is much better than Qt's so we decided to implement it in Go. Go is great for command line tools and being able to cross-compile a Windows binary from a Mac is awesome. Go's static binary approach made deployment trivial in comparison to Node so that made it a winner :)
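(For anyone who hasn't tried it, the cross-compile in Go 1.5+ really is just a couple of environment variables; the output names here are only examples:)

```
$ GOOS=windows GOARCH=amd64 go build -o qpm.exe   # built on a Mac, runs on Windows
$ GOOS=linux GOARCH=amd64 go build -o qpm
```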
Qt isn't really C++-only anymore, at least with QML. Notwithstanding the use of JS for QML itself, there exist some bindings to QML in non-JS, non-C++ languages. So I guess there is demand for Qt (or at least some parts of it) in non-C++-worlds.
>Qt isn't really C++-only anymore, at least with QML. Notwithstanding the use of JS for QML itself, there exist some bindings to QML in non-JS, non-C++ languages. So I guess there is demand for Qt (or at least some parts of it) in non-C++-worlds.
All of these worlds would be loading a ton of C++ Qt base libs in any case.
As someone mentioned including node.js in QML projects: does anyone know if there exists a QML port of fabric.js (or a similar canvas library)? I would need a library to create all kinds of geometric shapes (rectangles, circles, ...) and the ability to move them around, snap them to other objects, resize them, and so on.
I am currently trying to implement that myself with the QML canvas, but it is not that easy to get a smoothly working library.
I'm not sure you gain a lot of confidence from me if you build a package manager in Go and then avoid distributing it using Go's excellent package manager (i.e. `go get qpm.io/qpm`, done)... ;-)
If you will pardon me for being lazy and not looking at the source: is there a reason for that?
I see a lot of people here don't seem to get the purpose of this. Having played a bit with C++ and pkg-config[1]... I assume that pkg-config is the kind of software this project tries to improve on? I can see how that might make sense for Qt, which already has some properties of a runtime/standard library (e.g. QString). And/or perhaps it copies Go's package management for C++/Qt?
I suppose I can't really say I've seen the need that this project fills -- but I still wish it well :-)
Very valid point about 'go get'! The main problem currently is that we put all the source dependencies in the repo as Git submodules and use the repo itself as the GOPATH. We do this because we are using some dependencies whose APIs are still changing often (eg: Protobuf/GRPC) and we wanted control over the exact SHA1 we were using. Now that the vendoring experiment is enabled in Go 1.6, we can probably use that to solve this. Thanks for bringing this up!
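With the Go 1.6 vendor/ convention the repo would end up looking roughly like this, with the pinned revisions copied in rather than referenced as submodules:

```
qpm/
  main.go
  vendor/
    github.com/golang/protobuf/...
    google.golang.org/grpc/...
```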
As for the problem this solves: this is a package manager for Qt app development. It doesn't try to operate at a system level like pkg-config. There's no system-wide shared code; every app you are building has its own local copy of the code it needs, like npm's node_modules. In addition to downloading dependencies, the tool generates some code to cleanly add them to your project without you having to mess with include paths or other settings. It also encourages things like proper namespacing (both C++ and QML) to avoid collisions.
C++ works with qpm, but it doesn't package anything in binary form. It just pulls down the source code from Github and generates some boilerplate to wire it up to the application project.
It's a method on a QString object? No, no, no! Look at all the other methods on QString, what a mess! What happened to do one thing and do it well? This needs to be in a separate downloadable package. That package could be so simple and minimal.
My guess is that these Qt people probably use an IDE and want to be able to immediately see what methods are available in the context of a QString. Then they want code completion as they supply parameters to it. That's so OO. (OK, the method is immutable and composable, but it's still blatant 90s object orientation.)