If true, this would be a huge reason to get a Kobo over a Kindle.
I use Calibre and have owned multiple Kindles; I just plug in once every few months to copy books over USB.
Depends on the metric. I found Nvidia drivers to be better on every platform. It's just that, because they're closed source, they're a PITA to install and update; everything in between is great.
I've switched to AMD for religious reasons, though. Even the update story is worse for AMD on Windows; at least it was when I had an RX 6900 XT.
When I was using my 1080 Ti on Debian with the official driver, I would frequently have problems on my second screen, where there seemed to be no video acceleration. I could fix this temporarily through some setting in the Nvidia driver, but it would stop working after a reboot.
I had several bad updates (this was when using Fedora) and was once left with no working graphics 30 minutes before I had to start work. I ended up plugging in a really old AMD GPU to get through the day, and then spent several hours faffing to get graphics back up.
I will only buy AMD/Intel cards now because they're plug and play on Linux. I've had no problems with my AMD card on Debian 12. On Debian 11 I had to enable the non-free repo to install the relevant firmware. The 1080 Ti, as awesome as it was at the time, only worked properly and reliably under Windows.
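For anyone hitting the same thing: a rough sketch of what enabling that looked like on Debian 11 (bullseye). I believe firmware-amd-graphics is the right package for AMD GPUs, but check which one your card needs:

    # /etc/apt/sources.list: add contrib and non-free to the bullseye line
    deb http://deb.debian.org/debian bullseye main contrib non-free

    # then pull in the AMD GPU firmware blobs
    sudo apt update && sudo apt install firmware-amd-graphics

On Debian 12 this became a non-issue because the installer enables the new non-free-firmware component by default.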
The other issue with Nvidia is that when they stop supporting your card, their drivers will sooner or later stop working with the kernel. I have an older machine that works quite well, and I had to buy another GPU because the legacy Nvidia drivers do not work with new kernels. The hardware itself works fine.
Since it was Debian, I have to ask: was the driver already 10 years old when you used it? I also had a 1080 Ti (and a regular 1080); it worked flawlessly in Windows 10, FreeBSD, and NixOS. I don't recall which versions I ran, though.
As I said, there was an issue where one screen wouldn't be accelerated. This was on both Debian and Fedora, around 2020. I started using Debian around version 10, I think.
It is mostly the same driver on both platforms. In any case, I use Nvidia’s 565.77 driver on Linux with KDE Plasma 6.2 and Wayland. It works well for me on my RTX 3090 Ti.
Probably because there's a free OSS equivalent to most software.
The free equivalent to meeting up with friends/developers is a zoom call, and those suck.
I think it's straightforward. They made a decision to pay top dollar because they had ambitious plans and wanted the Silicon Valley types.
All went well.
Then, as this part of the company grew, some bean counter decided it was a huge expense, and something had to be done.
I suspect these Walmart Labs people were costing triple the standard Walmart webdev, so to the bean counters the path forward was obvious.
It's really unfortunate when non-tech people make decisions like this. I've worked at a FAANG for 10 years, and before that I was at HP and other mid-sized companies. HP's average principal engineer would be outperformed by our interns.
This doesn't make sense given what the person you're responding to is claiming. If several people, over several non-overlapping time periods, had the same positive experience followed by decline, then what you're saying is that the company has memory loss every few years: it decides to go for the pros, then remembers it didn't actually like doing that.
Maybe that is what's happening, and management is just cycling through the same way the engineers are. Walmart does have plenty of money to relearn lessons every few years, but I also would be surprised for the same reason - they didn't find billions of dollars under a rock. They're good at making money. Making the same mistake with personnel repeatedly is not a good way to make money (or maybe it is, what do I know).
As an employee, you are generally evaluated based on your perceived impact, at any level up to and including the CEO. This is easily twisted into change for the sake of change.
For a software engineer, this does lead to some pear-shaped decision-making, like adopting a new UI framework every few years, or whatever the case may be, but these kinds of things are overall pretty benign compared to the same problem in the management of the business.
In the management of the business, the correct decision may be to stay the course on something that a lot of outsiders, pundits, and new grads want to take pot shots at and second-guess, like what industry you should even be in or how you should structure the company itself.
Let's say you are Visa and your transaction processing runs on IBM mainframes, as an example. Everything is working within known parameters and risks, has a long and predictable roadmap, whatever. Being the guy who says "OK, we are going to keep doing this for 10 years and re-evaluate periodically if needed, but this is the plan of record" takes massive guts, and should be paid at least as well as the guy who says "throw everything out and do this risky, untested thing instead", but very few managements actually work like that.
The same waffling happens with remote vs. RTO: either having the guts to take a particular stand, or kowtowing to what you perceive to be the popular/prevailing opinion a CEO ought to have at this moment.
It can also lead directly to the situation you are describing where a decision keeps getting remade, perhaps even in a flip flop loop, to the benefit of multiple generations of "decision makers".
I appreciate the added context. I agree, this kind of flip-floppy, turbulent thing does of course happen. My whole comment was poorly written. I was mostly trying to say that it seems unlikely that some time ago someone said "we should hire people and treat them well for a while then treat them poorly and do it all over again."
Except I don't even really agree with that. That's how companies treat employees all the time, and not just in software: floor workers, warehouse folk, and everyone else.
Yeah, and my comment is addressing some of the broader motif of the thread and article, since I didn't want to leave multiple comments. As for your hypothetical quotation, the "we should hire people..." bit: I agree that it would rarely, if ever, go down like that. Instead, the impact on people's lives and livelihoods is collateral damage from the need to be perceived as a change-maker, or in charge, or whatever. The fact that it keeps repeating is just an artifact of people being rewarded for retreading the same ground, because institutional memory is, for whatever reason, not part of the control system's feedback design.
It goes both ways. The comp being paid in the tech industry right now would pay off an average single-family home in California in under 5 years. At this point these people are driving around in Porsches with second homes, choosing to retire at 45.
But that's Scott's point. If the OS devs had thought through this from the beginning, app devs wouldn't have to keep dealing with breakage. iOS devs have other issues, but not these.
Apple and Google approached the mobile OS from opposite sides. Apple locked everything down and has gradually been adding access/features. Google left the barn door open, and is now trying to shut it. I know which OS/API I'd rather program against.
Heh, I never worked on iOS, but based on what I heard from our iOS team at the time, I don't think iOS was any better. Though a lot of the frustrations back then were largely app review issues rather than API stability: trying to push out a big feature release or bug fix and getting rejected because the reviewer found a new way to follow 20 links from the help center website to a page that let you sign up for a subscription outside the App Store...
The web might be a better counterexample: it started super locked down, but has slowly gained useful functionality like notifications, USB, serial, and GPU access within the sandbox model. That encourages more investment over time as new functionality is added, rather than annoying devs as useful functionality (documented or undocumented) is taken away.
iOS doesn't regularly break these mainstay APIs, but when they do break one, they never backport the replacement, unlike Google.
One example of an API where we lost power in exchange for security was UIWebView -> WKWebView.
It can end up being far more annoying than usual, even for smaller APIs, because you have to maintain both versions of the API until you get the green light to raise the minimum supported iOS version.
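For anyone who hasn't lived through it, here's a minimal sketch of that dual-maintenance pattern, assuming an SDK old enough to still ship UIWebView (it has since been removed). The availability check branches at runtime, and the legacy path can only be deleted once the deployment target passes iOS 8:

    import UIKit
    import WebKit

    // Both branches have to keep working until the app's minimum
    // supported iOS version reaches 8.0.
    func makeWebView(frame: CGRect) -> UIView {
        if #available(iOS 8.0, *) {
            // Replacement API: WKWebView renders out of process,
            // which is the security win, at the cost of some
            // UIWebView power (e.g. synchronous JS evaluation).
            return WKWebView(frame: frame, configuration: WKWebViewConfiguration())
        } else {
            // Legacy API: UIWebView, kept alive only for old devices.
            return UIWebView(frame: frame)
        }
    }

Every call site that touched the web view's extra features needed similar forks, which is why even a "small" API swap could linger in a codebase for years.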