Unity's (and many other Linux UI projects', really) current state: reinventing the wheel, with a chance of hitting a usability sweet spot just before the next rewrite.
Unity 8 looks a lot like Unity 7 which I use daily. While a lot is changing underneath there's definitely a feeling of continuation in the user experience.
The 3D desktop effects, however, don't seem to be the future. I loved the Compiz cube when it first came out, and it impressed onlookers at the time, but these things don't add to your enjoyment of using the system after a (fairly short) while, and they never make using it easier.
I have to admit I still like and use the Compiz cube, but what I really like is the Wobbly Windows effect. It may seem cheesy to many, but it's oddly satisfying to me. And it's really smooth even in VirtualBox.
I love my wobbly windows. It's an effect that you can only do on an open source desktop. I don't doubt that Microsoft or Apple could implement wobbly windows if they really wanted to, but it's a silly effect and serves no real purpose, so they don't. It's a bit of bling that makes my desktop look futuristic when I drag terminal windows around.
Yep, and I'm running this on an old Celeron, using mirscreencast and piping to ffmpeg to record the video. So what you see is Unity 8 running on old hardware with the CPU load at 100% :D
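Roughly, the pipeline looks like this (the exact mirscreencast flags, frame size, and pixel format are from memory and vary between Mir versions, so treat it as a sketch rather than a recipe):

```
$ mirscreencast -m /run/mir_socket --stdout -n 600 \
    | ffmpeg -f rawvideo -pixel_format bgra -video_size 1366x768 \
             -framerate 30 -i - -c:v libx264 unity8-demo.mp4
```

mirscreencast dumps raw frames, so ffmpeg has to be told the geometry and pixel format explicitly; there's no container for it to read them from.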
The cube is too distracting for most people to use in regular daily work, but more subtle 3D desktop effects really do make the experience more pleasant and are an important part of a modern, functional desktop environment. The "Exposé" functionality popularized by OS X is one widely used and fully incorporated example. Others include smooth animations on minimization, smooth flyouts (e.g., the Windows 10 Start Menu), transparency, and tiling effects.
I get the feeling that as soon as Wayland gets significant uptake and starts being less than broken for most users, it will instantly be replaced with something even more bizarre and inexplicable. The OSS community seems to view constant change -- and churn -- as an inherent good.
And you forget the constant churn of protocols, APIs, services, and whatsits in other domains not related to the display server. init to upstart to systemd is one example. It'd be one thing if there were a more robust rearchitecting in the spirit of the original (like runit in the init space), but nope -- it's always "that which exists is all legacy cruft, and must all be burned to the ground". And once problems come up, the new stuff too must be completely razed before starting afresh.
The current push to wipe X from the pages of history and pave the way for a Brave New Wayland World reminds me more of GNOME's troubled development cycle, marked by a complete churn of APIs, components, and UI standards every few years, than it does of the Unix tradition, which has kept knowledge alive and broadly transferable from platform to platform for 40 years.
Mir is Canonical's attempt at a power grab over Wayland, like someone else said below.
> init to upstart to systemd is one example
Upstart was another Canonical brainchild.
As for sysvinit, it's a similarly old system with a lot of cruft. systemd brought the Linux desktop and server an insane amount of QoL that they were sorely lacking.
Honestly, there's no nice way to put it: It's incredibly arrogant of you to claim that various systems are useless simply because you can't see why they'd be useful.
That OSS community that "loves churn", the people working on Wayland and such: it's a talented bunch that is, in the majority of cases, giving its talent and time away for the good of the community. They're certainly not forcing you to use what they're making (as you seem to think they are), so it's incredibly thoughtless of you to shit on their work like that.
I hate sysvinit, but the init binary itself is quite small and cruft-free. It's the /etc/rcN.d stuff that has all the cruft. This boggled my mind (and got me hacked) when I came off Slackware, where I'm used to fine-tuning service access by tweaking a shell script or two.
Had systemd been a framework like runit, which nicely combines Unix primitives into an easy-to-use and featureful framework, I wouldn't whinge so much about it. But its design is monolithic (despite having 60-odd binaries, they all mutually and tightly depend on one another), its code is not that great and it's basically an inner platform in its own right. And all that isn't necessary to achieve the goals of dependency-based startup, process isolation, etc.
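For contrast, a runit service is just a directory containing an executable run script; the service name and flag below are hypothetical:

```
#!/bin/sh
# /etc/sv/mydaemon/run -- runit supervises whatever this script execs
exec 2>&1                    # route stderr to the service's logger
exec mydaemon --foreground   # the daemon must stay in the foreground
```

Supervision, restarts, and logging then come from small composable tools (sv up mydaemon, sv status mydaemon), which is exactly the "Unix primitives" design being contrasted with systemd here.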
Sendmail belongs in the dustbin of history. But its replacements, like qmail and Exim, preserved its spirit while fixing its deficiencies. Thus it is with init and runit. The current environment does not favor keeping the spirit of the past alive, but burning it to the ground and erasing it from memory. And when serious deficiencies are found in the new hotness, the answer is to burn that to the ground too. I blame the GNOME project, whose manifesto was called "Let's Make Unix Not Suck"; making Unix not suck involved writing a Windows-like inner platform and encouraging or coercing developers to target that instead. Famously, GNOME has undergone at least two slash-and-burn cycles, and its latest incarnation is known for its instability.
Nahh, that's not why. They got rid of the idea of the X protocol and network transparency.
I like being able to log in to remote Linux machines and run programs that show up locally on my desktop. And since it's not some framebuffer, I know the X commands are being sent and then rendered on my side. That's loads better than some dumb pixel maps.
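Concretely, that's one flag (the hostname is an example; -X requests X11 forwarding):

```
$ ssh -X me@devbox.example.com
remote$ xterm &    # runs on the remote machine, displays on the local desktop
```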
And X is extensible. Plugins aren't easy to write, but you can do it. We have screen-for-X (Xpra), MPX (multiple pointers/keyboards), OpenGL, and loads more. Yeah, some of it could be considered cruft, or multiple ways to do the same thing, but that's the Unix Way.
None of that is relevant to parent's comment about "churn". I guess they're assuming the X11 ecosystem is literally the same as the Javascript one, or something.
I'm honestly astonished at how quickly people jump onto Wayland to complain about it: about how things are changing, how it's not perfect, etc. Wayland is an ongoing answer to a plethora of long-term problems that Linux is facing: display-level app sandboxing, screen-lock security, high-DPI (scaling) support, etc. Dropping support for 30 years of legacy that is no longer relevant is just a bonus.
In the real world, things like multi-pointer never worked (neither GTK nor Qt really supports it). High-DPI monitors, on the other hand, are getting more and more common. So yeah, priorities.
And the Wayland devs aren't some cabal out to prevent people from implementing the things you mention, or from creating a solid plugin interface. They're just lacking in manpower, because 1) it's hard and 2) few are seriously paying for that development to happen.
So when a company like Canonical decides to do their own version of wayland, not only are they actively removing potential users from the display server, and duplicating the migration effort, but they're also impeding development on wayland itself. So if you're annoyed multipointer isn't supported in Wayland, I can point you towards whose fault it is.
I think the general frustration is not that X is being replaced, rather that practically anything the OSS world does quickly fragments over minor differences, resulting in a very inefficient division of already-scarce labor.
I believe this happens because there is no real unifying force other than good feelings. In a company, people are forced to make reasonable efforts to work together or they imperil their paycheck. In the open-source world, people won't volunteer their time to work on your young project if it doesn't fit their taste profile quite exactly. Since it's rare for that to happen, lots of people spend lots of time re-inventing the same basic stuff and very few people spend time working on the difficult 20% of problems that would yield an 80% improvement in overall experience.
We see this over and over in the open-source world, and it's not without its benefits, but particularly in core underlying components like display servers, it can be very frustrating because you just want everyone to adopt something and move forward. The delays caused by the fragmentation can impair things for years as driver vendors play a "wait and see" game and talented developers look at it and say "Wow, that looks like a mess, I'll stick with Windows for now".
Personally I think Mir is a straight-up power grab. Canonical said, "Why should we be bound by what these Wayland dweebs say? Even if their stuff is great, we make the biggest desktop Linux distribution and we want to be in control. We will make our own display server." It is frustrating from the outside, but I guess you can't argue that it's not in Canonical's interest, and they may indeed end up winning the war (though it didn't work out that way with Upstart).
Ubuntu is arguably the most widely used desktop Linux distro, and has likewise become very popular on the server. In this case they felt that they had a better handle on the UI needs for the graphics system in place, and to be fair, they probably do. Canonical is free to do wtf they want... there are places where they've gone with the larger Linux community and other upstream things (like systemd).
I really don't fault them too much for their decision, especially depending on how much (or how little) their input was considered regarding Wayland... I'm also not sure when each respective project was started.
I do find a few quirks around Ubuntu... my HTPC box is about the only Ubuntu UI I work with; the rest are VM images running a server version. The Intel audio (via HDMI) often doesn't come back after sleep/suspend, and there are other issues (it's an i3-5010U NUC), but for the most part I appreciate the Unity UI. That said, I don't do much beyond launching Kodi or Chrome, and opening a terminal to fix problems after updates more often than I'd like.
Also, sometimes you need a dissenting opinion in order to make something better. As much as I've disliked some of the delays from MS wrt browser enhancements, a lot of the time what comes out after is arguably better than what came before... although I do still feel that MS should have just adopted SQLite for web-sql like everyone else.
It really does depend. In the end, canonical is dedicating developer resources to make something better, and is a much smaller company with far fewer resources than Intel.
If you use X11 forwarding with any application that uses a modern toolkit (Gtk+ 3, Qt 5, maybe also Gtk+ 2 and Qt 4) you are just pushing uncompressed pixmaps over the wire, as a bad substitute for VNC/RDP/etc.
Xpra adds image compression on top, AFAIK, so it should be slightly better.
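Typical Xpra usage looks something like this, as far as I know (display number and host are examples, and the CLI has shifted a bit between xpra versions):

```
remote$ xpra start :100 --start-child=xterm
local$  xpra attach ssh:me@devbox.example.com:100
```

Unlike plain ssh -X, you can detach and reattach later without killing the applications, which is why people call it "screen for X".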
The entire video (https://youtu.be/RIctzAQOe44) provides a good overview as to why people that have been working on X11 for many many years decided to start Wayland.
Whose fault is that? The toolkit's? Or the protocol's? Expecting Wayland to sop up the slop because by definition it communicates at RAM-bandwidth speeds does not remove the fact that you have slop, and eventually it will bite you.
Most GUIs are relatively static. They are not Crysis. Even if you hate X's drawing primitives and believe absolutely in the power of client-side rendering, there are huge wins to be had simply by pre-rendering stuff and storing it server-side as pixmaps, then using XCopyArea calls to show them on the screen. Sheesh, I was doing that when I was 18 years old, and I figured it out using just the man pages. It's not hard. It's not even particularly burdensome. It's just a case of "herp, derp, we don't want to think about our rendering code" from the toolkit crowd. Do we have to burn the entire display stack to conform to toolkits written the stupid way around?
Well, I'd be all for a rewrite. But in doing so, we'd be throwing all existing X Windows code down the drain. I don't like that at all.
I also want network transparency. Right now, I can run a Windows program remotely, via X and WINE. I can't do that on Windows proper without a whole lot of setup on a Windows Server (I've done it). Yeah, X may be slow. It may have a crappy security model. But in the end, it's better than what MS has and better than what Mac has.
I could imagine something different: go back to core principles and write a server using current design paradigms and knowledge of hardware. Get the server side up and running well. Write an emulation layer that allows existing X apps to use it. And then start porting from X to, say, Ywindows. :)
(Then again, I wished that DRM existed for Linux, so the graphical subsystem could have its own top, memory manager, process scheduler, and such. Then that would have set up Linux to be the dominant force in computation. Alas, it never happened that way.)
There's nothing inherently insecure about the X protocol. Sandboxing X would involve simply not providing information about drawables not created by the client. That means filtering out input events destined for those drawables and returning blank pixmaps when their contents are asked for.
That the X server doesn't do this, and doesn't provide knobs to control who is trusted and who isn't, is a shame. But again, we don't have to burn the entire graphics stack because security.
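To be fair, a coarse version of this already exists via the SECURITY extension: ssh -X is supposed to hand clients an "untrusted" cookie, and you can mint one yourself (paths are examples; the timeout is in seconds):

```
$ xauth -f /tmp/untrusted.auth generate :0 . untrusted timeout 3600
$ XAUTHORITY=/tmp/untrusted.auth some-sketchy-app
```

Untrusted clients can't snoop input or read the contents of other clients' windows, but the mechanism is all-or-nothing rather than the per-drawable filtering described above.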
I have been a KDE user for 10 years now, and I can't emphasize enough how annoyed I am about this.
Every major release breaks something.
And it takes months and years for seemingly basic things to come back, if they come back at all. For example Shift+Del for files on the desktop https://bugs.kde.org/show_bug.cgi?id=344969 .
Right now I am especially pissed about the state of multi-monitor support for my Thinkpad + docking station. It is completely broken since KDE 5.
I can't undock, dock, or suspend, or any combination of those, without everything crashing. (That means I can't carry my laptop around the office.)
KDE also has trouble arranging the desktops, keeping task bars on the correct monitor, or starting the desktop correctly at all.
The middle mouse button of the external keyboard doesn't work after suspend.
If I have two external monitors, the frame rate drops significantly and I get screen tearing while moving windows around. (Not sure whose fault that is; maybe the graphics drivers.)
Etc. :(
I know, open source, payment, free time. You can't really blame anyone for this.
The incentives and accountability that exist for commercial products aren't there.
With Microsoft's wish to be more like Google and the new telemetry features, I am either looking at a Mac next time or will at least move to GNOME or Unity.
What also sucks is that these issues have led me to use only a small subset of programs, like the console, a browser, and GIMP (no fancy editor, no mail client), to avoid having anything important break.
Do you really need KDE at all, then? I started using a much more minimal setup (dwm) because of a similar experience, and everything is rock solid; it's so refreshing! I was on Mac for years because it stayed out of my way most of the time, but that stopped being true in the last few years. Maybe get a loaner and try it for a bit before you commit to it; I can imagine swapping out of frustration just to find your new setup is equally annoying wouldn't be fun!
Worse, by embracing web applications the GNU/Linux community made those GNU/Linux UIs even less relevant.
Compared with other desktop OSes, not only do those UIs keep getting rewritten, but there is also a lack of common infrastructure for modern desktop UIs, akin to the framework modules on other desktop systems.
So anyone that cares about UI/UX has a more enjoyable experience in other systems.
Sadly, I have to agree, though hearing this must be very discouraging to developers.
I'm running Ubuntu 15.04 and it's broken in so many basic ways: it crashes when you open the calendar widget on the wrong day of the month, apps only refresh the UI when you move the window around, it isn't even able to format a USB stick with FAT out of the box, etc.
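For the record, it's a one-liner from a terminal, which is exactly the kind of answer a regular desktop user shouldn't need (the device name below is an example; picking the wrong one destroys data):

```
$ sudo mkfs.vfat -F 32 /dev/sdb1
```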
I can't back it up with numbers, but my feeling is that desktop Linux peaked around 2010 (Ubuntu 14 LTS) and has been regressing since. In times of shrinking desktop usage, rather than inventing new stuff all the time, it'd be nice if distros could focus on providing a robust runtime environment for the OSS desktop software we already have. Maybe I should use Slackware 14.2 (systemd-free; its setup on modern hardware with NVMe, UEFI, etc. seems painful, though).
But I'm not complaining; all the work is done mostly by enthusiasts, and commercial OSs don't work well either.
You are using a version that has reached its end of life months ago. If you are looking for stability, you should stick to LTS versions. 16.04 should be reliable enough by now, and you can stick to it for years.
The 2010 version you speak of should be 10.04. Or do you mean 14.04, which was released in 2014?
I actually grew used to Unity after some time, so much so that I now prefer the Windows taskbar to be on the side of the screen rather than on the bottom when I am on a Windows box.
EDIT: Removed part of the text that was actually about ccsm
Yes, I fully agree that use of the sides of the letter-box shaped wide screens was a good move, probably an inheritance of Unity from its Netbook incarnation. However, I found aspects of Unity in previous versions a little 'busy' and reverted to Gnome (I am one of those people who actually like Gnome Shell. Probably the result of 18 months on dwm with meta key bound to Mod4).
Busyness: 1) Hit Alt-F to bring up (say) the Save As dialog box within an application, but hesitate a few milliseconds on the Alt key and you get the HUD instead. (HUD was a genius move for using desktop apps on a tablet or similar, but you need to be able to bind it to something like the CapsLock key.)
2) For quite a few releases, Alt-chord mnemonics were broken; they have now been restored, except you have to release the Alt key after the first note of the chord. For instance, to insert a formula in LibreOffice Writer you type Alt-IOF, but in Unity (and Gnome) you actually have to type: press Alt, type I, release Alt, type OF.
3) I recollect (14.04?) that moving the mouse pointer anywhere near the top-left corner brought up the Dash by default, but sometimes you wanted a menu item.
4) Top bar menus on a large screen with multiple windows and multiple applications: a huge trek to the top bar with the risk of the target application losing focus.
5) With a maximised window, the user goes to close the application but brings up the shutdown menu by mistake (top right of screen).
6) For ages, the Dash took a long time to appear compared to Gnome (hit the Windows key in a live session of Ubuntu and compare with hitting it in a live session of Fedora Workstation).
PS: Canonical Design used to release details of user testing of Unity. Not so much recently. Anyone know of any user testing results on this?
I think this shows why Linux desktop environments aren't up to par with other systems. At one point the recorder clicks what I assume is a button to open an image they downloaded in "Gallery", and nothing happens. Did it open in the background? Is it loading? No idea.
Also, another issue is that there are thousands of different ways across all of the apps for how to interact with them. Some have "Quit" options, some have top bars, some have icons everywhere. There isn't cohesion. I wish there were a "Linux Desktop Standard" like OS X and Windows have, but no one would ever agree on anything, so it would go nowhere, unfortunately.
> I think this shows why Linux desktop environments aren't up to par with other systems.
I find the top bar menus well done on Unity compared to what we have on macOS. Take the sound menu, for example: on macOS you have a small slider to control the volume, and that's all. You can't even control iTunes from macOS's top bar; you have to install a third-party app. On Unity I can control the sound not only with a slider but also by scrolling on the sound icon, without even opening the menu. Players like Spotify are nicely integrated by default, and I have basic controls like play/pause/backward/forward as well as album artwork. That sounds like a simple menu, but it doesn't even exist on macOS nor (IIRC) on Windows.
> I find the top bar menus well done on Unity compared to what we have on macOS
I completely agree. I like it much better than the stupid uniform menu of macOS. I use Manjaro XFCE as my only desktop OS and I like the UI for the most part. I just really dislike the inconsistencies.
Most people who see me using my computer think it looks "cool" and are wowed that my Thinkpad X220 runs that smoothly. They'd love to use it, but I know most people wouldn't put up with the 50 different ways to interact with very basic menus. Even file picking is a pain (no icons), navigating with the arrow keys is broken, and the "recent" files don't actually work.
Edit: In general, my issue is cohesion. The Linux desktop environment is Good Enough (TM) for people and already looks fantastic. It just needs polish, which it isn't going to get for a while, in my opinion. I'd happily pay a good 10 bucks to someone who's willing to do it, and it looks like elementary OS are the only people who are. Sadly it's Debian-based, and I prefer Arch-based distros.
I have plans, but limited time. The base system is starting to come together, but I need to do a couple more iterations before I'll be willing to move on. I should probably do some write-ups while I'm working on the stuff.
>Did it open in the background? Is it loading? No idea.
This happens sometimes on my Mac.
Along with the computer freezing or giving a black screen when I remove an HDMI cable or other external screen, random keychain messages asking for passwords on boot, somehow losing access to my Apple account which prevented updates and required a phone call to Apple, crashes...
I use Ubuntu at home and don't notice any obvious deficiencies.
Not that I'm doubting you, but none of those have ever happened to me while using macOS. No OS is perfect, and you clearly had your share of issues with macOS, but I think the difference is that experiences like yours are the exception on macOS where on the Linux desktop they are closer to the rule.
EDIT
Not sure why all the downvotes. Ignoring the problems is how they never get fixed. Go read all the recent threads about trying to replace a MBP with Linux that highlight the issues with desktop Linux: power management, HiDPI, drivers, UI inconsistencies, etc.
This may just be me but UI inconsistencies are a minor annoyance compared to the iCloud account/login fiasco that Apple regularly puts my family through.
OS update? Sweet, let's apply that and see those new features... "Please login to your account to continue..."
OK, this is my wife's Mac so I think the password for that is... No, that wasn't it. Alright it must be... Nope. Then it's definitely... Damnit. Now the account is locked!
I'll take UI inconsistency over that garbage any day. The password is for a cloud service and not the password for the host itself. So to apply updates in Mac land you not only need the password for an account with administrative access you need the password for the 3rd party Apple account. It's an OS update! Why does it require logging in to a 3rd party service just to upgrade?
Then if the account gets locked, it gets locked everywhere, so suddenly my wife's iPod can't install stuff or apply updates either until the account is unlocked, which requires a password change, which requires reentering that password on n Apple devices.
I'll just deal with the minor annoyance of having to use GIMP's interface alongside Dolphin's (KDE's file manager).
You can go to App Store -> Settings and change "Free Downloads" to "Save Password". I get prompted only to sudo when doing app and OS updates.
With that said, I'm still confused about this 'fiasco'. There is signing into the computer, and signing into various services. One of those services happens to be iCloud/apple account. I'm sure you are having some issues, but most do not.
Everything else you're ranting about is called good security.
Windows has two control panel apps. One in the Windows 8 app style, and the old one from Vista and up that has been mutilated repeatedly.
The styling and even UI elements of Office vs everything else are wildly different.
There is no cohesion there, and there never needed to be any. On Android practically every corporate app is a completely custom styled atrocity with no common behavior, but Android took over the world.
What matters is that what the developer gives you out of the box is uniform and consistent. Anything you yourself install after that can always, and often will, break that common theming. And that is by necessity a product of an unrestricted development platform - if developers can change everything, they will.
>Windows has two control panel apps. One in the Windows 8 app style, and the old one from Vista and up that has been mutilated repeatedly.
However, unlike others, MS is in the process of gradually cleaning things up and updating their UI to comply with their new UI standard. Change is a slow and delicate process when a high level of backwards compatibility is a must.
"Also, another issue is that there are thousands of different ways across all of the apps for how to interact with them. Some have "Quit" options, some have top bars, some have icons everywhere. There isn't cohesion."
Most users dislike the Ribbon menu. When I've set up systems for people and family and they requested "Word", they were expecting this [0]. When they see the Ribbon menu they feel, "I don't want to learn this, I already know Word, this isn't Word". The average user also doesn't open or change anything in a browser; all they do is type into Google. Nonetheless, they dislike those changes too. Cohesion is the most important thing in an environment.
Oh sure, but those are still three different menu styles for Windows. I have never tried the Ribbon thing, but I think I wouldn't like it as a casual LibreOffice user.
Three window modes in Windows: full screen, borderless, and windowed.
Full screen will take up the entirety of the current monitor with no menu bar. If the resolution of the app doesn't match the monitor, it will either skew the content (common) or add a black border (uncommon) to fill the unmatched window space. Removing focus from a full screen app will minimize it to the tray. Games and some video players are typically the only apps that use full screen.
Borderless is the same as full screen, except the content will not be minimized if the app loses focus. Text editors/IDEs in "distraction free" mode, full-screen mode for browsers, and games set to borderless are typical use cases for this mode.
Windowed has three icons in the top right, always on the title bar, in this order: minimize, restore/maximize, and exit. The title bar is draggable. Double-clicking the title bar will maximize or restore down the application.
The notable exception, where behavior is inconsistent, is the exit button. It used to be that the exit button would cleanly shut down the responsible process. Devs now sometimes add a menu option that makes the exit button close the window while leaving the process running in the background; the app still resides in the system tray. Apps with this behavior are usually communication software (Skype, Slack, ICQ) or something that processes data in the background, where a main GUI isn't always needed.
Fun fact: if you use Windows 10 LTSB, the Windows Store is disabled/removed, and since Microsoft replaced the default image viewer in Windows 10 with a "store app" gallery, there's no image viewer in Windows 10 LTSB that you can use (other than Paint, which you have to open manually).
But I agree with the general sentiment. Linux distros often have this problem of "not getting" what a typical regular user might need, because the distro developers and their core users still prefer doing most things on the command line, which they say is "easier" (and is virtually impossible for a regular OS user, but that aspect often seems to escape them).
And Ubuntu is one of the more user-friendly distros out there, but even it can't get many things right. Hopefully this will improve with the arrival of flatpaks/snaps, and with a bigger focus on "apps" (rather than packages).
Well, the last time I tried KDE Plasma I actually felt a little bit overwhelmed. And I guess the less technical the person using it, the more overwhelming it gets.
(I think Neon somewhat resolves most problems, though. But I haven't had the time to look at it yet.)
> I think this shows why Linux desktop environments aren't up to par with other systems. At one point the recorder clicks what I assume is a button to open an image they downloaded in "Gallery", and nothing happens.
I try those new shiny DEs from time to time and always come back to xmonad.
In particular, I need to be able to have desktop #x on display #y and desktop #z on display #t, i.e. virtual desktops independent of the multi-display setup, a feature that other DEs/WMs never seem to provide.
This is very practical to, say, keep your emails always visible on one display while switching between other tasks on other displays.
Sure, sometimes a release of xmonad will make Chrome's dropdown menus stop working [1], or some other weirdness, but all in all it's quite stable, and I can focus on creating a workflow that really suits me and that I never need to learn again.
https://en.wikibooks.org/wiki/Guide_to_X11/Window_Managers