Thing is, in the ‘olden days’, the only thing you could do with a home computer was tinker - the job of a computer was to be a computer but little else. And the number of people interested in doing that was small.
Today, the job of a computer is everything. Computers are in everything and do everything. They are our interface to the world. Their value is what they enable - writing, tax returns, video consumption, gaming and everything else, not the fact they’re a ‘computer’.
For those of us who want a computer to tinker with, we’ve never had it so good. There are so many more options for hackable tech than there have ever been. Just because most people don’t want to do that doesn’t mean they’re wrong; it just means they have different priorities.
I love the fact I don’t have to muck about with my iPhone to get it to work just as much as I love mucking about with my Raspberry Pi to get it to work.
> I love the fact I don’t have to muck about with my iPhone to get it to work ...
I have changes I want to make to my iPhone. Some of them would be as easy as adding a trigger to an SQLite database that a stock "app" uses. I'm not "allowed" to.
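To make that concrete, here's a rough sketch of the kind of change I mean, written against a made-up schema since I obviously can't show whatever a real stock app uses (Python's built-in sqlite3 module, with invented table and column names and an invented database path):

    # Hypothetical sketch only: the table/column names ("messages", "archive",
    # "body", "created_at") and the database path are invented for illustration.
    # It assumes a "messages" table with those columns already exists.
    import sqlite3

    conn = sqlite3.connect("appdata.sqlite")
    conn.executescript("""
    CREATE TABLE IF NOT EXISTS archive (
        id INTEGER,
        body TEXT,
        created_at TEXT
    );

    -- Copy every newly inserted message into the archive table.
    CREATE TRIGGER IF NOT EXISTS archive_new_messages
    AFTER INSERT ON messages
    BEGIN
        INSERT INTO archive (id, body, created_at)
        VALUES (NEW.id, NEW.body, NEW.created_at);
    END;
    """)
    conn.commit()
    conn.close()

That's a five-minute change on any machine I control; on the phone, the file is simply off limits.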
I don't think companies should be legally permitted to take away the rights of an owner to do what they please with their own property. Locked bootloaders, e-fuses, crypto keys buried inside hardware, etc, should all be able to be overridden by a physical interlock. (Like Google did with Chromebooks, for example.)
Flip that switch and you lose the "walled garden". Okay. So be it. No running software that "trusts" my device (which is, arguably, misplaced trust anyway). No "app store" for me. I can live with that. It's my device and my choice to make. I get to make the call to give up "security" for freedom.
It's morally wrong to take away the ability of the owner to do what they wish with their devices. It should be illegal too.
Believe me, in general I'm with you. But at the end of the day you just don't have to buy an iPhone. Get an Android device, flash it with Linux, buy a PinePhone, do whatever you want! But Apple doesn't sell tinkerers' devices, they sell a walled garden. Why force them to spend even 10 seconds of effort on something that isn't their business, when there are plenty of other options?
There's a reason I don't have any Apple device: they don't make anything that I want. It seems pretty pointless to mandate that all companies must sell products that I like.
Don't you suppose you're being a bit dogmatic about it? Every device has to be as flexible as possible or it's against the law? Genuinely not trying to straw man, but should my oven manufacturer have to account for me wanting to turn it into a space heater if I feel like it?
Companies make and sell products for specific use-cases all the time. Saying they can't sell something unless they also make the product able to break away from that specific use case, regardless of the time and cost to the company and consumers for parallel built-in systems or whatever the lift may be, doesn't sound feasible to me.
I don't know, I'm on your side that I should be able to dismantle my iPhone and use the parts to my heart's content if I want to. But I'm not sure that it's something Apple has to build into the system just in case someone feels like doing it, especially since I could buy any number of other computing devices that might work just as well.
What Google did with the Chromebooks-- an interlock that zeros the manufacturer-provided root-of-trust-- is a good example of all that I want. No more, no less. Let me flip a switch and make the device "untrusted" so that I can load my own software.
I am being dogmatic. I can't think objectively about this because I have yet to see how it benefits society to absolutely remove the rights of owners to legally do what they please with their property (at least in terms of chattels).
Even with an iPhone, you have the right to flash whatever firmware you feel like. This has been upheld in US Supreme Court cases. Apple simply is under no obligation for it to be easy for you to do so. Jailbroken iPhones aren't illegal. Apple treats them as a security risk, since mechanisms for jailbreaking iPhones exploit security flaws in order to get root-level code execution. I can't say they're wrong to do so! It's no secret that they do. Or that Apple doesn't ship anything but iOS to an iPhone, or that you won't be able to install apps from outside the App Store. These are marketed as features.
Most people are interested in solving a task, and their device helps them with this. They are uninterested in general-purpose computers. And there's no shortage of them for the rest of us! Android devices are the obvious choice here, with their user-exposed flashable ROMs. (Not just the OS, but every ROM is exposed through fastboot) Or PinePhone. Or whatever Firefox is doing. You've got choices for tinkerers and general-purpose computation, it just doesn't live in a device with an apple logo.
> you have the right to flash whatever firmware you feel like... Apple simply is under no obligation for it to be easy for you to do so
This is disingenuous. Apple is not being called out for failing to create desirable functionality for iPhones. Rather, Apple (et al.) is being criticized for deliberately acting to constrain functionality. The first criticism would be asking manufacturers to do work to improve their products. The second criticism is telling them to stop crippling devices they sell.
> Don't you suppose you're being a bit dogmatic about it? Every device has to be as flexible as possible or it's against the law? Genuinely not trying to straw man, but should my oven manufacturer have to account for me wanting to turn it into a space heater if I feel like it?
You're reversing the burden. Apple is spending a lot of money to NOT make it possible to tinker with it.
> should my oven manufacturer have to account for me wanting to turn it into a space heater if I feel like it?
No, because their failure to do this has no structural consequences for the market in ovens or space heaters. Apple’s control over the market for software that can run on 2 billion iOS devices, on the other hand, is a big deal. Some think it’s Apple’s just deserts for creating a platform that users like; others think the government should end that control using the antitrust laws passed to limit corporate power at this scale.
I mean, if I sell you a brick, it's not gonna connect to wifi. I'm not advertising it as connecting to wifi, so if you buy it then it's really your fault for thinking it will. Similarly, Apple doesn't prevent you from doing anything you want with an iPhone. You can (if you so choose) saw it in half, open it up and try to figure it out, and interact with it in any way you please. If those interactions don't yield results you want, you really can't blame Apple since they pretty much told you what you were getting.
Now there are some real scummy practices that Apple uses, and those should be regulated. Software that detects tampering and shuts down the device? Sure, get rid of it. Some vital service they can (and do) shut down at any time, bricking the device? Regulate it to nothing. But ultimately 'do what you want with your property' is completely different from 'sell whatever I personally want in a device regardless of how others feel'. Apple sells a walled garden by design, and buying one of their devices pretty much entails realizing that.
You have the right to do whatever you want to with your iPhone. Apple is not obligated to make it easy for you to do so, or honor their warranty if you choose to fiddle with it. The law getting involved here would be total overreach, and I say this as someone who’s politically left, a total commie by US standards.
The technical protections in a modern device are impossible, from a practical perspective, for an individual to defeat. For all practical purposes Apple has removed the rights of owners to do what they please with the devices they own. It's not that they didn't make it easy-- they made it completely impossible.
I am not mature, informed, or articulate enough to argue about this objectively. I cannot conceive of how society benefits from taking away the rights of owners.
But you can cooperate with others to figure out how to defeat those protections, or not buy into the Apple ecosystem in the first place. I agree they make it very difficult to fully personalize their products and I share your dislike, but to me this is like complaining about a case being glued shut or requiring a special driver to access.
>but to me this is like complaining about a case being glued shut or requiring a special driver to access.
Both of those are fairly valid complaints though. Intentionally making a device hard to access or repair is almost as bad as not allowing it at all. It shows an actual concerted effort to stop consumers, or at least make it as difficult for them as possible.
That's basically the definition of anti-consumer practices.
It in no way benefits me to not be able to use a standard screwdriver to open something, and to instead have to either purchase an expensive proprietary tool or, at worst, take it to an 'authorized' repair shop that has the license to use the proprietary tool.
OK, but that's when you make the decision not to buy it in the first place. I've managed to go my whole life without buying an Apple product because I always thought their products were kinda user-hostile and unfriendly to hacking/customization. On the other hand, I don't feel that Apple has any obligation to deliver the sort of product I want, when they have so many customers that actively prefer their walled garden/black box approach.
Occasionally I find myself thinking I'd like to own an Apple product, eg when I run across a cool iPad app or something, but this rarely persists longer than the few minutes it takes me to remember that I don't want to reorient around the Apple ecosystem.
I’m sorry to say, but your first paragraph is blatantly false. The jailbreaking community has been active since the iPhone product line existed, up to this day. Feel free to look it up. They provide instructions that a high school script kiddie is able to follow. I know because I was one of them.
If you were not motivated enough to research the options available to you, you cannot in good faith argue that Apple did anything wrong here.
I'm well aware of jailbreaks. I've jailbroken a number of Apple devices and some consoles.
In every "jailbreak" case, the manufacturer's intention was to make practically impossible owner control of devices. Jailbreaks are about exploiting bugs. The manufacturer's intention remains the same-- to remove owner control. Just because Apple made mistakes in their implementation of their architecture of control doesn't change their intention.
re: impractical for individuals - Try un-blowing an e-fuse in a Nintendo Switch. That's practically impossible. That's the kind of protections I'm talking about.
> Believe me, in general I'm with you. But at the end of the day you just don't have to buy an iPhone. Get an Android device, flash it with Linux, buy a PinePhone, do whatever you want!
The problem with this argument is the same as any essential utility. At some point, it becomes impossible to live a normal life without access to a certain facility. If access is only available through a monopoly provider or a small enough number of providers that they tend to move the market in unison even if not actively collaborating, that can become a serious problem.
This is why we have government regulation of essential markets. Regulators might impose pricing limits in violation of the "free market", to prevent exploitation of vulnerable customers (and in this context, remember that most or all customers might be in that category) where competition fails to do so effectively on its own. But laws and regulations can also impose other safeguards, such as requiring honest advertising, adequate privacy and security protections, or interoperability.
I think it is no longer credible, at least in my country (the UK) and others like it, to argue that a modern smartphone is a luxury. People use their phones to access government services, shops and home delivery services, banks and financial services. They are by far the dominant communication device of our age, not just for calls but also texts, emails, numerous Internet-based communications channels. Some people no longer have any other reasonably convenient means to access those services and communications. Some of those services and channels are provided exclusively via mobile apps and simply aren't available to those who don't have a phone (which is in itself a problem, because obviously not everyone does, and this is making things very difficult for some demographics during the strange times we are living in).
This being the case, it is reasonable to argue that people should not have to choose between two dominant ecosystems for their phone when both have serious problems in areas like privacy, security, reliability and data lock-in, some of which are a direct result of the interests of the providers of those two dominant platforms. Most people have no realistic ability to protect themselves from those risks, or any such ability relies essentially on luck (for example, the availability of jailbreaks and the continued operation of essential apps despite any jailbreak that has been applied).
It may well be that internet access in general is an essential utility, though I'd disagree that any device in particular is. However, the fact that the 'two dominant platforms' continue existing is mostly due to people's personal preferences. Besides, saying that Android devices even approach the same level of locked-downness of Apple devices is absurd; I run an open source version of android on all my mobile devices, and not once was it particularly difficult to install.
Even more, it doesn't seem to me that cell phone manufacturers are currently moving the market in particular unison. There are still more and less open devices, just as there are more or less expensive ones. Perhaps we're moving slowly towards that, but we're certainly a long way away as of now.
My primary issue is the difference between ensuring that no one has to get screwed and ensuring that no one can get a product they want. So long as there are more open devices, no one is compelled to use an iPhone. Anyone who wants to be protected from big bad Apple may simply refrain from paying them. A utility is generally regulated not because of its vitality, but because of inherent restrictions on consumer choice. Unless it becomes impossible to just not buy Apple's shitty locked-down hardware, it doesn't make sense to constrain them.
Yep, although getting it to work if you root your device takes a few extra minutes. More importantly, to what extent is it Apple's fault that banking apps don't work on my android phone? Even if every device had a little hardware switch that would grant root access, those apps would then be under no obligation to function. And if you then go after the apps, why did you bother with the phone anyway?
As someone who owns a Pinephone, don't buy a Pinephone if you're expecting a functional, usable device. It might get there in a year or more, but it's nowhere near there now.
That's like saying you don't have to buy a car, or you don't have to get an air conditioner. Sure, you could tolerate a 2 hour bus ride and a summer with 90+ degree heat waves, but there is no reason to put yourself through that given the cost of the alternative luxury is not insurmountable even for working people in expensive places.
We are in a similar place in the mobile world, where life is getting increasingly reliant on electronic luxuries. If you want to take advantage of the convenient electric scooter or rideshare, you need an app. If you want to cash a check without going to a bank or paying a fee with a third party like a grocery store, you need an app. If you want to get early warnings from the city and/or state about earthquakes of all things, you need an app (MyShake). And for all those apps, you can't build from source on your own hardware. There is rarely a mobile website alternative. You need to get them from one of two centralized and moderated marketplaces, where only sanctioned and licensed devices may participate. You could make a stand here with a dumb phone and yell at the clouds that the world is this way, and it really is bad that it is this way, but you'd realistically be living a decade in the past and miss out on all this useful functionality.
No it's not! The parent listed other phones as options, not "do without a phone".
Following your weird examples the parent's comment is like saying you don't have to buy a *Ford* car, or a *Carrier* AC. There are many more car and AC options if you don't like Ford or Carrier.
They also listed the Pinephone, which had no facility to do most of those things. It's either Apple or Android, and both of these are making it harder and harder over time to run your own software stack.
But I mean, you can just buy a different type of car. Like, if you feel really strongly that every car ought to have a headphone jack because otherwise consumers are stripped of their right to play music through a headphone jack in their car, you can buy a car that has one. You aren't forever stuck on the bus, you just have to look at more than one whole car before you decide which one to buy.
Same goes for Apple; I would never use an Apple device, because I like to have control over my technology. Somehow, despite my amazingly brave stand against tyranny and capitalism, I still have a phone, running a LineageOS install that took me 20 minutes and an online guide to get running years ago. On the other hand, if I wanted to give my grandma a device I was confident she couldn't somehow force into a bad state, I would like it to be legal to sell me one.
This is exactly why Richard Stallman created GNU and the GPL. He might seem like an idealist or even an extremist but he's been warning us about this very thing for decades now.
The Right to Read is coming closer each day. Almost nobody cares. It seems like an increasing number of technical people want that future to happen. It fills me with immeasurable sadness. I am having a really hard time coming to terms with the fact that computers aren't "for" me anymore.
I think this is related to how techies have given up selling things, and now sell subscriptions, or best of all, remotely terminated licenses for “purchase”. If you have a general purpose device, you’ll find all those artificial limitations can just as easily be removed, thus destroying the rent.
Most people I know who are any good at programming had started learning about it by modifying something either on their own computers (pre-2000) or on the web (post 2000). They took things they found interesting or useful and somehow introspected and changed them. Good chunks of skills gained by that were transferable to professional environment.
Today, this is not how people get into tech. There is an ever increasing gap between technologies used for professional computing and things that are observable and modifiable by a normal person out there.
Curiosity and experimentation have been replaced by (appropriately named) coding bootcamps.
This is a problem EEs currently talk about. People good at analog electronics are starting to age out of the workforce. There are a ton of people who got into electronics 40+ years ago by fiddling with things. This is harder today than it was back when through-hole was how components were made (though the maker movement has changed that trajectory IMO). I could easily see the same problem with CS in general as access to general computing becomes less widespread. I’ve even heard rumor it’s happened to some degree already when people talk about an unusual age band for devs born around +/- 1975 who tend to have a particularly good grasp of computers due to growing up with the first personal computers.
I agree that the proportion of people getting into tech via curiosity and experimentation has decreased.
However, I'm not convinced that the absolute number has decreased. It's completely possible that larger numbers than ever of people are getting into tech as an extension of their own curiosity, and are simply less visible due to being outnumbered by the masses from bootcamps.
You are talking about a barrier to entry (in the sense that it used to be easier) for a person who is new to general purpose computing to get onboarded, while the person you are replying to is talking about the increase in the number of systems you can work with presently.
I think it's actually easier today, because of the large amount of free, well-made educational resources out there: Stack Overflow, Raspberry Pi, instant IDEs from just typing in a URL, scriptable and popular online games like Roblox, and more. In the past you had books you had to buy, maybe some help on IRC if you were lucky, and BASIC on your computer, which you were encouraged to use to get some basic games running if you were a Gen X kid.
As a kid I got into programming in BASIC on my Acorn Archimedes because I had a book on BASIC. However, I never got further than that because I didn’t have access to any more advanced programming books.
Now, all the information about everything is available within a few minutes of searching.
But most tinkering journeys do not start with dedicated hardware. They start with hardware intended for something else, generally on a quest to solve some problem or do something cool.
> I love the fact I don’t have to muck about with my iPhone to get it to work...
Imagine being able to quickly and easily add your own custom programs to your iPhone without asking anyone's permission though? Also, what if - as the owner of that device - you had full control over its capabilities like you do with a desktop computer? Adding these abilities certainly would not be the downfall of phone ecosystems as we know them. People that want convenience wouldn't leave their app store.
The fact that they are our interface to the world is exactly the reason why we need to take control of them away from the Apple/Google oligarchy. People need to stop comparing these devices to gaming consoles, which don't impact the world even a fraction as much as smartphones, when we're talking about making new rules for running smartphone platforms.
> Imagine being able to quickly and easily add your own custom programs to your iPhone without asking anyone's permission though?
Cool? I'm a software engineer and I have no desire to do this. The number of people who care about this feature is vanishingly low. I think it is reasonable to lament losing a particular feature for tinkerers, but it becomes unreasonable to demand that businesses cater to this very small niche.
> Adding these abilities certainly would not be the downfall of phone ecosystems as we know them. People that want convenience wouldn't leave their app store.
Downfall? No. But locked down systems are a somewhat effective way of keeping badware off of devices for the masses.
The key here is that it's "badware as defined by Apple and Google" though. Who watches the watchers??
If you're not advocating self reliance and self governance for people, you're not thinking long term. It's historically proven bad advice to willingly give all control to central powers.
> ...locked down systems are a somewhat effective way of keeping badware off of devices for the masses.
Developers are not the masses. The masses aren't installing the necessary tools and compiling their own programs and they never will. I'd settle for signing a written agreement with Apple and waiting a week for approval when I buy my phone so that I can actually control my own device.
I'm not arguing for letting me put my apps on everyone's iPhones, just mine.
I agree with you. We have these amazing devices in our pockets, but we can only install things on them that are approved by enormous, faceless corporations whose incentives are often misaligned with our own well-being.
I don't advocate removing all safety checks from phones, but making somebody jump through 15 hoops to install a non-corporate-approved app is stupid.
Secondly, these companies stop a lot of badware, but they also act as gatekeepers to stop anti-establishmentware that may be for the public benefit at the detriment of the gatekeepers' stranglehold on power.
In other words, these devices are systems of perpetuating the status-quo and safeguarding the profits of the ones making them. This in itself I'm not upset about: of course capital seeks to cement its own power. The real issue I have is how much potential is destroyed as collateral damage.
Just as it’s not possible to provide “Lawful Access to Encryption” (master keys) without weakening crypto, it’s not possible to “quickly and easily add your own custom programs without asking permission” that _also_ provides the safety so your mom or grandfather don’t install something that allows crooks to drain their bank accounts.
General purpose computers are great for people who know what they’re doing (although there are some _really_ good phishing scams out there that can even fool trained people), but they’re an absolute disaster for the vast majority of people who _don’t_ become specialists in computer security.
I _just_ had a support ticket opened by someone who goes between different stores for a client…and this man sent his password in the clear in the support ticket. This man isn’t an idiot, it’s literally _not_ in his job description to be a computer security expert (his job has to do with hardware sales or lumber or something like that).
This man would be better served with two things we’re building later this year (AD integration and an Android version of the employee app), because then he can just log in with the same (probably simple and insecure) password that he uses for his Windows log-in at work. He is the type of person that, when told to do something, would simply install program X because someone told him to do so (never mind that program X is actually malware).
So no, while I think that Apple’s signing restrictions are a little on the draconian side, I don’t think that Windows 95-like “permissions” are what most people want or need.
So, you're saying that there is absolutely no way, no how that we can put up a fence to keep people with a low understanding of tech from compiling and installing some random app?
I don't buy it.
> I don’t think that Windows 95-like “permissions” are what most people want or need.
That's an extremely bad faith take on my argument because there's a very wide spectrum of possibilities between the Windows 95 free-for-all and what we have with iOS. You're presenting a false dichotomy.
For one thing, iOS could easily put up tons of scary warnings before letting you sideload things. That would be enough to dissuade most people from doing it. However, I'd be willing to go to extremes to get control over the devices which I supposedly own. Make me come into the Apple store and sign away any rights to a warranty from Apple. Make me pay extra. Whatever you want - just don't put every single user in prison because a likely majority of people can't handle making good decisions on what software to install.
The idea that we must remove any and all control from users to protect the innocent is just as bad as the idea to have a War on Drugs - and I highly suspect that these preventions are actually in place for the same reason: to actually control people and rake in profits, not to protect them.
Google puts up tons of scary warnings for sideloading…and IIRC, that’s part of Epic’s case against Google (the warnings discourage people from installing alternative stores or from alternative means), in addition to Google’s position on Play. (I do not believe that Epic has a meaningful case on any front, but that’s ultimately for the courts to decide. The courts often decide wrongly, as was recently done for Oracle v Google re: the copyright status of APIs.)
You say that Apple can make you pay extra. OK. Here’s a $99/year developer contract with which you can develop and install software that you want, as you want (I believe that these builds are good for ~90 days, so you recompile/reinstall every 90 days; unlike the 7 days previously mentioned). But people don’t _like_ that and have said that’s unfair.
I am completely saying that there’s no _meaningful_ way we can put up a fence to keep people who _shouldn’t_ be running random apps from doing so. We can’t keep users from clicking on _links_ that they shouldn’t be clicking on. Just this morning, I had a neighbour ask me about one of those full-screen “WARNING FROM MICROSOFT YOUR COMPUTER IS INFECTED” pop-ups; even though she was smart enough not to click on anything, she _still_ copied down the phone number to maybe call the scammer. My father, a couple of weeks ago, didn’t know about Ctrl-W / Command-W on a similar full-screen hijack that had affected both his Chromebook (locked down) and my mom’s MacBook (mostly not locked down).
I remember a few years ago there were a number of minor malware issues that were caused by people following instructions randomly on the internet to open the javascript console and “paste this in to see something neat”. So no, I don’t believe we can put a fence that protects the ignorant / unready / unwise but enables the people who think that they know better (and actually sometimes might).
If you can crack that, then there’s going to be a lot of people who will be at your door to reward you…and then many more looking for the backdoors you left so that they can continue to infiltrate systems for their own rewards (whether state actors or criminal actors).
You mention that the JavaScript console has been used maliciously – doesn’t its continued existence show that we can put up a fence that’s good enough? Even the most locked down corporate PCs still provide access to the console. I’ve also never heard of anyone being tricked into rooting an Android device or enabling USB debugging.
It shows just the opposite. These instructions were being provided to people who should never be opening the JavaScript console because they couldn’t understand what they were pasting into said console. And for some browsers, that means potentially opening up things like USB because of inane standards like WebUSB.
Maybe not to write them, but to run them you need Apple to sign them. A free developer account can be used to have Apple sign your app, but then Apple will only give you permission to run that app for 7 days, and only give you permission to install 3 apps signed that way at a given time. In any case, side loading iOS applications absolutely requires Apple's permission.
Freedom to modify is not the same as zero cost. You paid for the iPhone hardware, after all. Just pretend it cost $99 more and came with the right to load your own apps.
Freedom to modify is also not the same as a renewable 7-day licence to run up to 3 custom apps, as long as you maintain a $99/year subscription and a device that can run Xcode.
I would love to fully access the data on my phone. But of course it would also introduce security holes currently unimagined. Maybe someone can devise a secure sandbox model to enable this, but after Java's failure to resolve this problem, I'm not holding my breath.
Why is giving you, a person, access to all of your data a problem?
The root cause is the complete lack of facilities to quickly and simply delegate capabilities to applications. There is no way to tell the OS, give this file to this application. Instead of trusting nothing, and providing the PowerBox facility to allow the user to do this, all the popular OSs just trust the applications with everything that the user has permissions to access, by default.
---
If you want to pay for a purchase at a store with cash, you carry a number of units of currency... and you only give the clerk the appropriate amount, and you can count the change to verify the transaction is correct.
Giving you, a person, access to your cash is a solved problem. 8)
There is no equivalent way to work with an application on mainstream OSs. This is collective insanity, which I've been calling out for more than a decade.
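To make the PowerBox idea concrete, here's a toy sketch in Python. Nothing in it is a real OS API; the names (powerbox_pick, confined_app, "notes.txt") are invented for illustration. It just contrasts ambient authority with handing an application a single capability the user chose:

    # Toy illustration of the "PowerBox" pattern described above.
    # powerbox_pick stands in for a trusted, user-driven file picker;
    # none of these names correspond to a real OS facility.

    def ambient_app(path):
        # Ambient authority: the app can open ANY path the user could.
        with open(path) as f:
            return f.read()

    def confined_app(file_obj):
        # Capability style: the app receives exactly one already-open file
        # and has no way to reach the rest of the filesystem through it.
        return file_obj.read()

    def powerbox_pick(path):
        # The *user* chooses the file, the trusted shell opens it,
        # and only the handle is delegated to the application.
        return open(path)

    if __name__ == "__main__":
        handle = powerbox_pick("notes.txt")  # example filename
        print(confined_app(handle))
        handle.close()

The point is the shape of the interface: the application never sees a path or a directory, only the one object it was explicitly given.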
iOS works the way it does to make money for Apple. First it was to appease carriers to enable the devices to exist. Then it was to appease publishers to get content on the devices. Now it's to make that sweet, sweet app store commission. That there are benefits for device owners who are willing to give up their freedom is a happy accident.
Note that every stage you have described has been called "security". Security is inherently a multi-perspective endeavour, and the word gets abused by powerful entities looking to preserve their own security, likely at the expense of your own (eg the TSA). I do agree that end users receive benefits from Apple's security, but to insist that explains the whole story is hopelessly ignorant.
> Imagine being able to quickly and easily add your own custom programs to your iPhone without asking anyone's permission though?
Or even better, imagine being able to quickly and easily add your own custom programs to someone else’s iPhone, without asking anyone’s permission though?
But those aren't the same thing. Me being able to install what I want on my phone doesn't give me the ability to install it on other people's phones.
I currently have the ability to install apps on my phone from the app store, but I can't make that decision for someone else's phone. So why would it be different for software that doesn't come from the app store?
The point is that these are security boundaries. Look at all the malware that was being spread by people posting bad Fortnite apks. I posted a Fortnite apk to a forum and I included some of my own custom code. I made a decision to install that on other people’s phones. (I didn’t really do this)
Obviously this is a bit self-serving on Apple’s part - restricting software installs begins to feel like a protection racket. But it still serves as a security perimeter.
Sure, but users still have to make the decision to install that apk.
Yes, there's a security risk, but what's really being said here is "we're going to decide what you're allowed to do so you won't hurt yourself".
And I guess I just don't think that should be their decision to make.
Or to put it another way: Apple can keep the app store, and even make it the default, but there should be a setting to allow installing things from outside the app store. It can even be behind a few scary popups warning the user of the danger.
> Sure, but users still have to make the decision to install that apk.
They certainly didn't make a decision to install my malware in that apk!
> Yes, there's a security risk, but what's really being said here is "we're going to decide what you're allowed to do so you won't hurt yourself".
I'm sure you won't hurt yourself - most iPhone users will though.
> Or to put it another way: Apple can keep the app store, and even make it the default, but there should be a setting to allow installing things from outside the app store. It can even be behind a few scary popups warning the user of the danger.
Yeah, I agree with you - as long as it's not something as simple as a click-through past a few worthless scary warnings that everyone ignores anyway, the kind that exposes non-technical users to bogus certificates and downloaded malware on Windows and Android. If it were something like macOS's System Integrity Protection deactivation, where you have to restart the whole system and execute a command through some obscure interface, it could work.
>For those of us who want a computer to tinker with, we’ve never had it so good.
I disagree to an extent. While the hardware is much, MUCH faster, efficient, reliable.. it isn't something we can tinker with anymore.
Would you feel safe picking up a random USB drive and running the programs on it? A huge chunk of the fun of early PCs is gone. You can't just find new stuff and try it out to see how well it works.
If it's open source, you get dependency hell, if it works at all. Plus there could be any number of backdoors or bugs in it waiting to subvert your system. Plus the ever-present threat of having all of your passwords and/or data exfiltrated to who-knows-where.
In the days of 2 floppy disk machines, none of this was a worry.
> Would you feel safe picking up a random USB drive and running the programs on it?
Sure, why not? Just don't run it on the same computer that you access your sensitive data on. Given that you can buy a fully functional and tinker-friendly computer for like 20 bucks, this seems like a pretty straightforward solution.
Why $20/pop? Just reuse the same untrusted computer for all untrusted things?
Not to mention, sticking random floppies in your main computer was never exactly safe to begin with. The heyday of the Michelangelo virus was 1991, if memory serves.
>Why $20/pop? Just reuse the same untrusted computer for all untrusted things?
Because the hardware could have something planted in it... and if what you're tinkering with turns out to be useful, then what? Then you have to spend $20 for another computer that you can trust with that one little thing.
Yes, in the days of MS-DOS, there were viruses spread by floppy disk, but those could be guarded against fairly easily. You could always start fresh with a clean copy of your OS and use it to clean up the mess.
A better way is to have an OS that protects the hardware, and itself. Then you can have a single computer for everything, without having to trust any piece of application code, ever.
This is what I personally feel is just missing these days. I was never part of the leading edge of hobby computing.
I was very far off, geographically, growing up a teenager in India. But back then, 1998 - 2006 ish, I feel the Internet was a lot more about experiments. I would read up on some software or hardware group all the time. LUGs were way popular. I used to be on chat rooms from MIT or other unis every other day.
Nowadays most people I see of that age group are happy with a YouTube video on their smartphones. General purpose of computers is simply not visible. Not even to the level that I, a curious teenager, had experienced in those years from so far in India.
We have too many tall-walled gardens now.
Update: I should add that I am happy with how powerful computers are these days, but they are not for the purpose of learning about or tinkering with the actual device or its software. It is just a way to consume media for the mainstream.
Software is economically worthless without making it artificially scarce. A lot of "software innovation" is not about building better software, but building software you can charge for, which means "walled gardens", patents, DRM, license servers, anti-tampering mechanisms, and the like. In essence all content (music, movies, software, books, etc) follows the same pattern.
Note that profit-driven entities can give away software, but only if it supports another artificially scarce good. Google rents space on its search tool, as Facebook rents space on its social media tool. Ad space is scarce, and made more valuable the more Google and Facebook give away (which is proportional to the attention they capture).
But devs generally don't like artificial scarcity, so the platforms need a path in for them. I think Apple demonstrates this most clearly: the iPad and iPhone are software consumption devices, but the Mac is also a software creation device, and therefore is generally more open, free, and changeable.
> Software is economically worthless without making it artificially scarce. A lot of "software innovation" is not about building better software, but building software you can charge for, which means "walled gardens", patents, DRM, license servers, anti-tampering mechanisms, and the like. In essence all content (music, movies, software, books, etc) follows the same pattern.
You remind me of Cory Doctorow's "The Coming War on General Computation" keynote, which interestingly didn't notice that streaming and similar cloud-based services would become the DRM of the last decade.
Or, as it turns out, the rising appeal of non-general purpose devices that are more efficient.
I think you mean a different definition of economic worth than the GP post. One meaning is the developer's ability to sell the software, the other meaning is the value derived by using the software.
I maintain an open source project that sits in a very small niche. Based on downloads, stars on GitHub, and comments on forums, I estimate that hundreds or thousands of people find it useful. I do get the occasional donation, but if I calculated dollars per hour that I have put into this project, it would probably be in the single digits.
Economically worthless is a very bad phrasing, but the concept is very real (economists go with non-excludable, but that has an underwhelming impact on laypeople): software authors have a really hard time capturing the value they create.
I've always thought of it as the turnstile in front of the theme park. Without that turnstile, the theme park is economically worthless no matter how good it is.
And not only that... but a theme park without an entry fee is still a money-maker via snacks, drinks, souvenir cups, and extra-cost rides and line-skipping.
In much the same way that a company can produce an open-source product and charge for support, feature development, consulting, data feeds and sometimes SaaS operation.
I just mean you can't charge for it directly. "Utility" is perhaps what you mean. Note that the most important achievements (scientific and artistic) in human history were economically worthless for their authors, but have had enormous indirect positive economic benefits.
Yes, but that's not what the parent meant; they meant it in a money-making way, which is something big companies realized a long time ago and capitalized on. The general trend has been to create attractive gardens, get people used to them, then raise up the walls. In the end even open source has started to make a bit less sense to its creators and maintainers, who are people and need to eat too.
Control of where Linux development is going is indeed economically worth a lot. Copies of the Linux kernel are near enough to worthless for the distinction not to matter.
>Software is economically worthless without making it artificially scarce.
If that were the case no software would be written except by hobbyists as a kind of joke. Software provides value other than being scarce and it's really the maintenance of the software that people are wanting (those paying attention at least) not the software itself.
> Software is economically worthless without making it artificially scarce.
Worthless to whom? Maybe for corporations who want to make billions by exploiting a legal monopoly. For the rest of humanity, abundant software is extremely valuable.
The difference is that those people who are happy with YouTube videos wouldn’t have been tinkering with computers in 2006. You have to remember that tinkering with computers was and still is a nerdy niche. Let’s say there are 1000 people in the world. Of those, 10 are hackers (in the “tinker with computers” sense, not in the malicious sense), 20 are enthusiasts, the rest are neither. In 1990, you’d have 15 people on the net talking about computers and doing stuff with them. Everyone else is watching TV, reading books, whatever. In 2006, you have 100 people: 10 hackers, 20 enthusiasts, 70 people who found cat videos. In 2021 you have 990 people on the net: the same hackers and enthusiasts but also a whole bunch of people who are here for the cat videos. Does that detract from what the 10 hackers are doing? I don’t think so. In fact, it adds opportunity to the hackers and enthusiasts.
We didn’t lose the hacker spirit. Go look at hackaday.com to see what people are working on. We simply gained an audience.
You are creating a false dichotomy of cat videos vs. hacker. I've watched a cat video or two. I've also spent the vast majority of my life programming (coming up on 40).
My point is, video games were my gateway to programming. I got into mods and then expanded from there. I'm sure I am not alone. It's these natural paths that spark curiosity which are being cut off.
Is there any evidence that these paths are being cut? Mod communities for video games seem like they are thriving. Look at the sheer array of Minecraft mods, or mods for nearly any popular game on Steam.
Combine this with popular and easy game making software like RPGMaker, or even the ease that someone can set up a basic game on Unity. Beyond this, there are programs like Scratch and Processing that allow for fun and relatively easy computational tinkering.
I doubt the natural paths of curiosity have shrunk, though they may look different. Rather, the number of people using computers has massively increased, and the number of computer "tinkerers" is just a smaller proportion of people of the larger whole.
I am oversimplifying the situation. There are more than 1000 people in the world, and everyone can enjoy cat videos. My main point is not that the world is divided into hackers and cat video lovers. (There are two types of people: those who divide people into two categories and those who don’t :)).
My point is that there is a small percentage of the population who are makers and a large percentage who are consumers. The Internet of 30 years ago was mostly populated by the makers so it felt like everyone was a maker. Now it is much more representative of the real public because the consumers joined. It now feels like it’s mostly consumers because in the world’s population consumers outnumber makers by a large percentage. However, and this is the crux of my point, the absolute number of makers did not go down and in fact went up. You won’t find many makers in your Facebook feed, in relative numbers. But maker communities are bigger, better, easier to find, and easier to join than ever before. Just because consumers have joined the makers does not mean that things are worse. They are better. The rest is nostalgia.
Why the negative outlook? The world now is way more awesome for hackers than it was back then!
1. You can find any information you want, in an instant.
2. Hardware is very cheap, computers can be tiny, you can order components from anywhere
3. Robotics and electronics are way cooler now, with drones etc.
4. Software development is way more productive with all the libraries and tools we have right now.
5. When I was a kid, games were made by 1 or a few people. It quickly evolved into big studios. But nowadays, a solo or tiny team can release super successful games.
> 2. Hardware is very cheap, computers can be tiny, you can order components from anywhere
In the past I could get any component I might need from a local store in 20 minutes. Now I've got a 2 week to 3 month turnaround on shipping from Shenzhen.
In the past, there were only enough components to fill a few shelves. Also, you can get many, many components from local stores now, if you're willing to pay the 4x-10x premium.
Not many people are, so local stores aren't as common.
In 2010, the local store had more components than I can order at Reichelt, and was cheaper than AliExpress (if you include shipping).
I really miss having a place (back then we even had a dozen of them within a few km) with a huge selection carrying every component you might ever need, and with enough technical expertise that you could show your circuit diagram and get feedback on how to improve it. This is something only seen in Shenzhen today.
Now I just don't do any small projects anymore, as there are no stores in the entire state left, and the extreme premium in shipping and waiting time of AliExpress just isn't worth it anymore.
>I really miss having a place (back then we even had a dozen of them within a few km) with a huge selection carrying every component you might ever need
That sounds very atypical. I live outside a major US metro and I know of one good computer store in the area (Micro Center). I imagine there a few small places that carry electronic components but not sure how many. And it wasn't much different 10 to 20 years ago unless you count Radio Shack which wasn't really all that great.
I count RadioShack, because it was great. As late as 2016 they had their transistor, resistor, capacitor buckets and speaker wire. You couldn't get arbitrary ICs there, but for basic components or last-minute bodges they were key. And now they're no longer carried.
>You can get many many components from local stores now, if you're willing to pay the 4x-10x premium
Well. No. RadioShack is closed. Fry's is closed. If I needed a resistor, I honestly wouldn't know where to go. Probably AliExpress or DigiKey. There's no local stores left.
Don't get me wrong, I am not talking about myself here. I am still an absolute curious fellow. I just started learning woodworking, I am 37 years old now. I start learning something new each year.
I meant the current trend of digital products which are, in my opinion, more like a TV than "computers".
I have also been disillusioned of late with the computing hobbies that have fueled me most of my life. I'm still heavily involved with computing for my employment, but there's just something that doesn't spark my curiosity as much as it used to.
Interesting that you mentioned you've moved to woodworking, as I found a way to rekindle my curiosity by exploring older technology as well. I've dived head first into antique clock and watch repair.
I took up painting, as I'm also involved in computers as my day job. The contrast is wonderful: there is freedom in it, there is a sense of limitless possibilities and no pressure or hard deadlines. I've often said to myself that if I had gone to art school, I'd probably be interested in computers now. I had also become disillusioned with computers a while ago, and I think there is a limit to any interest without taking a break. But recently, about a year ago, I started to become interested in programming again; one thing was using Racket/Lisp/Scheme and the second was Linux. I'm a new Linux user and I am very stoked about it.
What, if any, resources did you use to learn woodworking? In-person courses and hackerspaces are not available in my area right now and I wonder if there are any good online classes. And by that I'm not talking about those quick-cut YouTube videos where you need a workshop worth 50k in machinery and tools to properly follow "this one easy trick" ;-)
If you have any resources you can share/recommend, I'd be very curious also ;-)
Sounds like you might like Woodworking for Mere Mortals (https://woodworkingformeremortals.com/). I haven't taken any of the courses Steve offers but I'm a huge fan of his YouTube videos.
Rex Krueger has the most budget-conscious channel I know of (https://www.youtube.com/c/RexKrueger). His "Woodworking for Humans" series starts out with just needing three tools and bootstrapping your way to more.
Personally I started with restoring discarded and thrifted furniture. The initial outlay was just for some glue, varnish, and sandpaper.
> 5. When I was a kid, games were made by 1 or a few people. It quickly evolved into big studios. But nowadays, a solo or tiny team can release super successful games.
And games will look exponentially better considering the tools we have at our disposal now. Also, selling millions of units of one game is "possible" even for solo developers (while it's obviously going to be a superstar kind of probability). In the 90s, reaching that kind of market was impossible without the support of a huge corporation.
From my perspective I think it might feel that way because back then it was mostly tech-curious people who used the internet. Now it's mainstream and everyone is connected, which means most of the content is produced for mass consumption. Back when I got my first computer and got online, in around 1996 I think, computers were still quite expensive and I would have killed to have access to the information and tinkerboards/cheap computers that are available today.
I clearly remember my friends' parents being scared of me since I used to open up PCs and see what components were in them.
I have been using laptops that can not be easily opened/upgraded for the last 4-5 years. Devices are being made mainly for people to consume media and that's it. I think computers of the earlier era and now are not at all comparable. The devices now are not really "computers" as they used to be. They are opaque all-soldered-and-sealed boxes. How will you inspire anyone to tinker with them?
Adding a graphics card to a box was such a cool thing for all of my gamer pals. Video cards, audio cards, or any upgrades would naturally involve casual talks about tech. Who does that these days, unless you own and upgrade a desktop?
I think you're comparing the wrong things. When I was a kid, I tinkered with clocks and watches. I thought digital ones were opaque boxes and remember calling them boring. The mechanical clocks were much more fun and reconfigurable.
Does that mean we've sadly come to the end of the era of hobbyist-friendly alarm clocks? Maybe, but so what? We then got computers instead. Now we have Raspberry Pis for kids who see cellphones as boring black boxes. There's always something, and it's fine if it's not the same thing it used to be. The same thing happened with cameras - people used to build their own cameras out of bits of wood and develop their own pictures from glass plates, but cheap film cameras eliminated that hobby from the mainstream. It's fine. Technology moves on. Now we have 3D printers that enable kids to make things they never could have before without access to an expensive milling machine.
I never liked to tinker with hardware. Tinkering with software was more fun, and the hardware that I can run my self-made software on is now more powerful than ever. Even an iPad, or rather, especially an iPad.
Maybe we could make an Internet for Age 35+ or so. Remember when one could take over Yahoo chats with Javascript? With containers and stuff, the Internet could be a lot of fun again. App battles in the park by old timers on the chess tables.
The mainstream was never going to be part of the tinkerers and hackers. But the depressing thing is to what extent todays hackers have become consumers.
But we've still got Linux and a lot of hardware support for that. It'll likely be the last bastion of general purpose computing.
I was a computer geek from about 1981 onwards, starting with Commodore PETs at high school. These machines were uniquely accessible. You could do surprisingly creative graphics using only the graphic characters hardwired into the machine (no bitmaps, colours or sprites). But these characters were printed right on the keyboard, and could be used in normal BASIC "PRINT" statements to do quite performant animations and such, without any advanced programming skill.
That was as good as it got. The difference between a beginner dabbling with his first PRINT loop and an advanced assembly language programmer wasn't that great because the machine was so limited; you could go from one extreme to the other in skill in a year or two if you were interested enough. My natural response to seeing the "MISER" adventure, after solving it of course, was "I have all the programming skills to make a game like that" and did so.
And while then as now, only <1% of the general population was interested enough to get good at programming these things, another 10-20% was interested enough to hang around, dabble, try the latest cool programs that came down the pipe, or were made by the 1%. I had people playing a game that I made that consisted of one (long) line of BASIC.
Then, it seemed, most of the rest of the population (the other 80-90%) got computers too, Commodore 64s mostly where I lived. And still, even if only a tiny minority actually programmed their own stuff, it felt like a part of a vibrant scene, you could always show off your stuff.
With modern machines, even without the discouragement of locked down operating systems, missing accessible programming languages and such... there is just no hope of making something "cool" you can impress your friends with. So the tiny minority that does learn HTML5, Javascript, Arduino programming, what have you, is relatively obscure and seems to be a shadow of its former self. It isn't really. It's just that 99.99% of the computing power now in the hands of the population will never be tinkered with.
>there is just no hope of making something "cool" you can impress your friends with
I believe there is hope of making cool graphics effects with GPU shaders. GPU tech is still advancing similar to how CPUs were back then, so people have not yet explored all the possibilities of the hardware. You can do impressive things just by experimenting even if you don't have a lot of theoretical knowledge. Sites like shadertoy.com make it easy to get started.
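To give a flavour: a Shadertoy fragment shader is just a function from pixel coordinates (plus a time uniform) to a colour. Here's a deliberately simple CPU-side sketch of that idea in Python - assuming numpy and Pillow are installed - so you can see the per-pixel logic without any GPU setup; the real thing would be a few lines of GLSL in Shadertoy's mainImage() running per pixel on the GPU:

    import numpy as np
    from PIL import Image

    W, H, t = 320, 180, 0.0  # resolution and a "time" value, like a shader uniform

    # pixel coordinates normalized to roughly [-1, 1], similar to what you'd
    # derive from fragCoord in a fragment shader
    y, x = np.mgrid[0:H, 0:W]
    u = (x / W) * 2.0 - 1.0
    v = (y / H) * 2.0 - 1.0

    # classic "plasma" look: sums of sines of position and time, mapped to RGB
    r = 0.5 + 0.5 * np.sin(10.0 * u + t)
    g = 0.5 + 0.5 * np.sin(10.0 * v + 2.0 * t)
    b = 0.5 + 0.5 * np.sin(10.0 * (u + v) + 3.0 * t)

    frame = (np.stack([r, g, b], axis=-1) * 255).astype(np.uint8)
    Image.fromarray(frame, "RGB").save("plasma.png")

Sweep t in a loop and you have an animation; port the three sin() lines into a shader and the GPU does the same thing at 60fps.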
> With modern machines, even without the discouragement of locked down operating systems, missing accessible programming languages and such... there is just no hope of making something "cool" you can impress your friends with.
Every modern computing machine I know allows you to add a very rich set of coding tools, other than unjailbroken mobile devices perhaps. Yeah, poor access to mobile sucks, but if you feel the need to hack, you won't let walled gardens stop you.
Web scraping is an insanely rich source of data for building cool tools. Then you might add a bit of simple machine learning, like a voice-based interface or a GAN-based video face-melter or visualizing web-scraped results on a map. These sorts of tricks were hard or impossible to do 10 or 20 years ago. But not today.
If you want to hack, I'd start by immersing yourself in Linux. Or Raspberry Pi. Better yet, both.
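To make the scraping point concrete: a first experiment really is only a dozen lines of Python with requests and BeautifulSoup. The URL below is just a placeholder - point it at whatever you're actually curious about (and check the site's robots.txt first):

    import requests
    from bs4 import BeautifulSoup  # pip install requests beautifulsoup4

    # placeholder target page -- swap in a real site you care about
    URL = "https://example.com/listings"

    html = requests.get(URL, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")

    # pull out every link and its text; from here it's a small step to dump
    # the results into a CSV, onto a map, or into a toy ML model
    for a in soup.find_all("a", href=True):
        print(a["href"], a.get_text(strip=True))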
You are looking at this through the eyes of someone who is already comfortable with their understanding of computers and software. To a kid with no prior knowledge, the entry barriers might appear impossibly high.
No, they're lower than ever. They're one google search away from a huge amount of usable resources, youtube videos, web IDEs that they can use on their locked down iPad / School Chromebook and more.
And to do anything non-trivial requires years of tediously building up a knowledge of all the abstraction layers and dependencies between you and the machine. For a long time there is nothing you can do that a million others haven't already done far better and made readily accessible.
I can personally attest to what a massive turn-off this is. I grew up in an age when computers were a part of everyday life, but their inner workings were hidden behind a mountain of ugly obfuscation. If you find the rare trivial task, like making stamped-out websites, fascinating, good for you. But for people like me who couldn't care less, it takes some sort of external impetus to actually discover their interest in computing. In my case it was a half-assed mandatory programming class in engineering school where I found out I had a talent for it, and discovered an interest in the inner workings of things I had been taking for granted all my life.
Just because resources are easy to find doesn't mean anybody cares for finding them.
The percentage of people who tinkered and explored, and did all the fun stuff back then is probably still the same as now. But back then most of the "normal" people didn't use the internet at all, and now use it for youtube and facebook.
Classic PCs and even some laptops, especially those with open operating systems, are still fantastic tools for learning and tinkering. There are just more options now for people who simply want to consume media without thinking too much about everything else. The PC enthusiast scene is getting bigger every year.
Nonsense. Linux makes many details about computers much more explicit than Windows does. Even installing Ubuntu teaches you something about how computers boot, what partitions are, and what file systems do. Linux also makes it easy to get started in programming at the OS level -- thanks to the Internet it is way easier to find information about how everything works.
So from a learning-experience perspective, a kid tinkering with Linux might gain more insight into how computers work than, say, on Windows or Mac.
Agreed. I still remember my first foray into installing Linux. The PC wouldn't boot afterwards and I had no one to shepherd me through it. These days, if I want information on how to get something working, 9 times out of 10 someone has already posted it for your benefit.
Some things are worse, but some things got infinitely better.
That's today, but it wasn't so initially. You'd learn about your monitor's scan modes, how the disk is structured, dumping bytes between devices, optimizing for load times by placing bytes on the outer sectors of floppy disks, using RAM as a disk to speed things up, and a whole lot of "useless" things that build machine sympathy.
Yes, but it's also true that drivers and peripherals are generally a lot more complicated today and hardware services more federated across layers of dependency and security than in the days of DOS or Windows 3. Reverse engineering today's Windows 10 wifi-based encrypted RAID disk array is a good deal trickier than a 30 year-old DOS FAT IDE drive ever was.
Well, you still have easier and more difficult distros. You can go the easy route and play with Ubuntu or Fedora, or you can approach Linux from the hard side and try Arch or Gentoo.
I feel very similar. It took less than two decades to go from tinkerers, creators, and just general curiosity to somewhat mindless consumption of videos or photos.
When the Covid wave started last year, we had to assess who had the necessary equipment at home to be able to work remotely. A lot of people don't even have a computer at home; all they need is a phone or a tablet, which are created for consumption, not for being creative.
It's worth noting that the article is about the development of specialized hardware rather than walled gardens. In many cases this can be hugely beneficial. Not only has it allowed for the development of energy efficient hardware to watch videos on a smartphone, the same concept has allowed for the development of efficient encoding hardware for video to serve that market. Likewise, the energy efficient processors that enable media consumption have other applications. In the end we carry around a tremendous amount of general purpose computing power in our pockets.
Walled gardens and specialization are somewhat different concepts. The former is primarily concerned with restricting how hardware is used, while the latter is mostly concerned with optimizing hardware for specific applications. While there are times when these two concerns can overlap, this need not be the case.
This can be seen historically. Early microprocessors were developed for integer calculations, with floating point operations being done in software or with an optional floating point coprocessor. Floating point was integrated, and became a fundamental part of our notion of general purpose microprocessors, later in the game. The path taken in computer graphics was much more complex, but it is also worth noting that it started off as specialized hardware. Experimentation resulted in GPUs becoming a generalized tool. Recent developments have encouraged them to diverge into a multitude of different specialized tools. Both of these examples illustrate how specialization has broadened what can be done, without creating an environment that restricts what can be done.
While I agree that walled gardens encourage media consumption at the expense of experimentation or creation, this is more of a cultural thing. There is far more money to be made in the former than the latter, so that is what the market focuses upon.
Except that the algorithm is optimized for revenue and attention-grabbing content (the opposite of university material), and will gradually steer watching habits in that direction.
I hear people saying this a lot, and I admit I sometimes think it myself. But I'm not sure how true it is, as opposed to just a perception.
> Not even to the level that I, a curious teenager, had experienced in those years
I think therein lies the rub. Teenagers curious about computers aren't very common. And even back then (early 2000s), my experience in Europe was the same [0]. Most of the people in my school didn't really care about computers. They had other interests and pursuits.
The difference today is that computers are much more widespread because they allow people to do much more diverse things. At the time, they would maybe type up a report or something and that's it. There wasn't Facebook et al., so many people didn't spend any significant amount of time in front of the computer. Idle time would mostly be spent in front of the TV or hanging out with friends.
I think the reason this perception comes up is that at the time, people who "spent time on computers" were doing so out of curiosity and interest in learning about them. So the shortcut is that since now many more people are spending a good chunk of their day in front of the computer, that must mean they're as interested in them as the "curious teenagers" of our time. That's simply not true. Just look at what they do on the computer. It's mostly idle scrolling on some form of social media.
And, to address the subject of the OP, I'm also wondering whether there's an actual decline in the "general-purposeness" of the computer, as opposed to there just being more "non general-purpose" computers.
There's a definite decline in the percentage, and maybe even in absolute numbers if we look at people who used to use computers because they had no alternative but who are now better served by smartphones or tablets and also for uses which need "specialized processors" (say ML).
I draw a parallel to my feeling whenever I see discussions about iOS being locked down and people expecting the whole of the HN crowd to be up in arms about it. In my case, I was one of those curious teenagers of yore. I still love tinkering with computers, etc. But phones? I just can't be bothered to care about them. Can I make calls and check some maps? It's great!
I have an iPhone and treat it pretty much like a dishwasher. I barely have any apps installed. Neither allows me to install whatever random app I want. It's not an issue for my dishwasher, and frankly, nor is it for my iPhone. Whatever "general computing" I feel like doing, I prefer doing it on an actual computer.
Of course, there are other philosophical considerations in this discussion which I understand and can get behind, but the point is that, to many people, a computer is an appliance. They want it to do certain things. Does it do that? Great! If it doesn't, it's not fit for purpose. They simply do not care if some guy somewhere would like to do something with it but can't because Microsoft / Apple are locking it down.
---
[0] I went to school in the suburbs of Paris, so many of the kids' parents were pretty well-off and many were even working in computer-related fields, so there wasn't any access problem. Practically all of them actually had at least one up-to-date computer at home.
I don't think it's curiosity about computers - it's more general curiosity about how certain kinds of systems work. It isn't even about appliances and mass consumption. That's a relevant view, but not a very revealing one.
Computers are an idealised example of an organised system which is assembled/operated by solving puzzles. Some people enjoy solving those kinds of puzzles. Most people don't. The first kind of person doesn't really understand the second kind of person - and vice versa.
Something interesting and disturbing has been happening over the last twenty years. The Internet stopped being a technological puzzle and became a social and political puzzle.
This kind of "cultural engineering" - where the currency is attention, influence, patronage, and sometimes actual spending, not puzzle prowess - appeals to a completely different kind of person. And these people have been driving the development of the Internet for a while now.
People who like solving computer puzzles have been completely blindsided by this and continue to act as if technology is still somehow primary - when it isn't, and hasn't been for a while.
Lots of technologies start out appealing to technologists until they mature enough to reach mainstream adoption. It's not disturbing, it's a sign that they were successful! Cars did the same, and aeroplanes, and electricity. I'm happy the internet is now as common and easy to use as electricity. Geeks can move on to the next interesting immature technology. Perhaps cryptocurrency or machine learning or whatever. The world is full of exciting things.
Cryptocurrency has now pretty solidly transitioned from hobbyist/enthusiast to monetization through centralization. Which is a little ironic but that’s how it goes.
> Computers are an idealised example of an organised system which is assembled/operated by solving puzzles. Some people enjoy solving those kinds of puzzles. Most people don't. The first kind of person doesn't really understand the second kind of person - and vice versa.
Plenty of puzzle lovers don't care about computers at all. Plenty of good programmers and tinkerers don't care about other kinds of puzzles.
> Something interesting and disturbing has been happening over the last twenty years. The Internet stopped being a technological puzzle and became a social and political puzzle.
The “internet” as a thing is still capable of the same things it was 20 years ago. The proportion of people using the internet who care more about the social and political puzzle rather than the technological is what dramatically increased.
> But phones? I just can't be bothered to care about them. Can I make calls and check some maps? It's great!
I understand where you are coming from, and it makes 100% sense if you own a PC/laptop as well. But guess what? In a lot of places the majority of population (including younger people) do NOT have them. Their first and only experience of computers is a mobile phone. Even though "teenagers who care about computers" is a low number, now a lot of them won't realize that something like a general purpose computer exists (until way later, if they end up using a computer at work).
I grew up being fascinated with how useful and flexible computers are and fell in love with creating software. I'm quite sure that would not have happened if I had been born during the last 10 years.
I do get your point, but I don't agree it supports the thesis that there's a "regression" happening.
Broadly, I could see two situations in which people only get a phone and have no computer: either "poor" countries, where they can't afford a computer and will get a phone that's better than nothing, and very affluent ones where they'll just get a phone or tablet and not care. Although I'm not sure that this second category has no computer.
While I do sympathize with this, and I think that it would more likely than not allow people to discover this curiosity by, as you say, being fascinated with how flexible phones would be, my point is somewhat different: this isn't "going back". Phones never were open. And I'm pretty sure that the people who can't afford a computer now couldn't afford one in the 90s.
As such, for me, this is more of a lost opportunity to progress rather than a regression.
[for simplicity, I'm using the term "computer" as a desktop/laptop computer]
> Phones never were open
They certainly were more open than today. It was much easier to fiddle around with the software with smartphones that Nokia used to have, for example. Both iOS and Android were easier in that respect, too. So there's clearly been a regression there. However, if you meant "phones were never as open as PCs", ah yes then I agree.
> As such, for me, this is more of a lost opportunity to progress rather than a regression.
I would agree with the spirit of that, but the problem is that a lot of computers are actually being "dumbed down" so that they are more similar to phones simply because a larger population is used to them. For example most young people might not find it weird if some day MS or Apple make it painfully difficult to install PC/Mac apps from outside their stores.
You're right we are not there yet (yay?), but we seem to be moving towards that.
>It was much easier to fiddle around with the software with smartphones that Nokia used to have
Uh, as a young person during that time, I think there may have been one kid in my high school who was the right combination of nerdy and well-off to get a smartphone. It was still cool if your flip-phone came with any internet-connected features at all.
Is it really any more limiting? They can make programs in Scratch or other web apps on their phone. It's not "real" native software but it wasn't in the 1980's either. Kids then used BASIC interpreters. Compilers for "real" programming languages like C were unaffordable. Even assembly language and machine language were often inaccessible because computers didn't have an assembler or hex editor and you couldn't get new software like that without spending money.
You can’t use Scratch on your iPhone, and the built-in BASIC programming was far more “native” than any similar smartphone equivalent we have today. My TI-89 calculator remains way more programmable than my smartphone.
I’m not sure why it has become so common on HackerNews to defend this trend and/or pretend it’s not a trend.
Isn’t that the wrong comparison though? If you want to run scratch just use something other than an iPhone.
When I was a kid we had 8bit micros. They were programmable but they were also eye wateringly expensive. So it was a struggle to own anything you could program at all.
Also they all came with closed source, proprietary OSs.
I used to spend a fair amount of time just staring at adverts for things I couldn’t afford and that would be obsolete before I could.
Now you can get a whole computer given away free on the front cover of a magazine and it will do a pretty good job of running scratch for you.
Machine language wasn't totally inaccessible by that metric; it was quite common to store ML in DATA statements and POKE it into the desired memory locations. Depending on the platform it would either be directly called with SYS or CALL type statements, or it'd be vectored through USR() (or perhaps the ampersand in Applesoft).
That's true. I was mainly thinking of a struggle I had trying to create a binary .com file containing ML. The BASIC had no function for writing an arbitrary binary file. In hindsight, I suppose you could use one of those ML techniques you mentioned to do file I/O but that would have seemed out of reach in difficulty to me as a teenager.
> Most of the people in my school didn't really care about computers
When I was in university in the Netherlands studying CS in the early 90s, many (I would say most) didn't care about computers; they were there because of 'money later' or 'making games'. By far most never wrote a line of code. It was frightening (to me) how much they had to catch up, as the lectures were fast-paced. Many changed to psychology or law in the first 2 years.
I think there is another issue: when most of us got involved with computers, they were devices on which almost anything could be done but hardly anything had been done yet. This attracted explorer types, but it also meant that people who wanted something on the computer that didn't exist were pushed to develop it themselves.
Now we live in an age where multiple versions of nearly everything seem to be available. "Just buy what you want, don't learn to make it" would seem to be the imperative of the day.
You are right overall but the smartphone does not inspire tinkering at any level. It feels closer to the TV to me than the computer. Computers used to be boxes which, even non-developers, would open up and upgrade RAM, HDD, etc.
That encouraged tinkering to a high degree. I remember having lengthy talks with gamers or videographers about what components to use to build a PC. The more computers become closed boxes that cannot be opened, the less reason casual onlookers will have to even configure them.
The industry is simply not giving people a box to open.
> The industry is simply not giving people a box to open.
The industry is, it’s people who don’t want it.
The sealed boxes are some combination of more resistant to damage, longer-lasting, and smaller in form factor than a modular box can be; the sealed software is less prone to malware and blue screens of death.
The cloud makes things work like an appliance, rather than having to worry about dying drives and backups and figuring out port forwarding on your router. It also helps companies extract maximum rents on their product, but the point is it came with some benefits that consumers do value.
I don't think the problem is curiosity around computers. Back in the day, computers were just a novelty and weren't yet essential to most jobs. It's fine if people didn't care then.
Nowadays, however, most white-collar jobs and daily life tasks involve computers at some point, and knowing how to use and program them would make these tasks more efficient - almost too efficient, to the point where it goes against corporate interests (it turns out society has normalized and encourages business models that bank on artificially created or maintained inefficiency), which is partly why general-purpose computing is on the decline now.
This looks spot on. As I understand it, what you're saying is that folks curious about computers shouldn't expect others in general to be curious about them too, despite their ubiquity?
> But back then, 1998 - 2006 ish, I feel the Internet was a lot more about experiments. I would read up on some software or hardware group all the time. LUGs were way popular. I used to be on chat rooms from MIT or other unis every other day.
> Nowadays most people I see of that age group are happy with a YouTube video on their smartphones. General purpose of computers is simply not visible.
Have a look at GNU/Linux phones recently developed:
The challenge with technology these days is that while the capabilities have gone in one direction, the culture has not necessarily gone in the direction many of us who came of age during an earlier era wish it might have. To some extent, I guess this is just a part of what happens when cutting edge things go mainstream -- no frontier lasts forever, sadly.
You can't compare your back-then self to the general population now and assume it's a good comparison between generations.
Plenty of your peers were not on those chat rooms. They did not interact with tech like you did, or did not interact with tech at all. They had different hobbies and interests, different habits than you.
Note that the article is about innovation and performance gains in general-purpose processors slowing, and about the increasing shift onto specialized computing engines by those who seek further performance gains for specific workloads.
This was foreshadowed when NetBurst couldn't break 4 GHz in 2004-2005 and CPUs had to shift to multi-core. This bought "classic" CPUs more time, but CUDA showed up in 2007 and GPUs went from strictly specialized computing engines to general-purpose (in research, if not yet in the home). CPUs have also been steadily gaining SIMD extensions.
Now GPUs are showing promise for NN workloads, but in environments where the stack is tightly controlled, NN co-processors are showing up. This is because tightly controlling the stack has the benefits of being able to optimize and harmonize software and hardware, and interop outside of the stack (and in some cases, stack longevity) is not a factor.
The article isn't truly about how more and more computing environments tightly control their stack, but that mechanism does play a part in the design choices that result.
The article discusses the current trend, but you hint at the possibility of a trend reversal. CPUs are more and more multicore, with wider SIMD. Many now have dynamic or low clock rates. GPUs, which used to have actual fixed functions, are now more and more just GPGPUs. This, to me, is a convergence.
This seems a little sensationalist to me. The GPU is a general-purpose device optimized for highly parallel workloads. One could just as easily say that a traditional CPU is a special-purpose device optimized for branch-heavy code at the expense of parallel throughput.
The current state of GPUs is based on Nvidia's decision to focus on GPGPU instead of being a DirectX accelerator forever. They specifically decided to make the device more general-purpose than it needed to be at the time. Presumably they thought there was a chance to start another virtuous cycle for parallel computing.
I don't think we are close to realizing the GPU's potential as a general-purpose device. Imagine the kind of software we will have when every phone has the equivalent of today's biggest desktop GPU inside it.
The next revolutionary general-purpose device will probably feel like a niche product at first, just as the 3Dfx Voodoo did in 1996.
The article is factually correct, but I feel that the analysis is missing some of the historical context. Here's a quite probably strained analogy to cosmology:
Microprocessors and CMOS upended the computer industry, to the extent that a few years later, all the big companies in computing, up to that point, were in precipitous decline (even IBM, which embraced the new world order with the PC, only delayed this reckoning.)
In those days, microprocessors and DRAM alone were the cutting edge of technology, and they opened up all sorts of possibilities (though this also depended on some additional special-purpose equipment, notably for graphics and networking.)
One might draw an analogy here to the Big Bang. What happened next was like a period of cosmic inflation, in which Moore's law and Dennard scaling (plus fiber optics and the wiring of the world) created exponential growth. As in cosmology, we end up with a much bigger but quite uniform universe, in this case because the growth of the basic technology alone was enough fuel for all the innovation we could come up with.
It was only after inflation that the universe became really interesting. It is still overwhelmingly hydrogen, but it has differentiated: there are also galaxies, stars, planets and people.
So, the long-delayed conclusion to this analogy: innovation in computing is now more mature, but it is not shrinking, it has diversified - and there are still 10^n (for some large n) 8- and 16-bit microprocessors being made, and there are still hobbyists doing clever things with what is now basic, even primitive, hardware (that, not so long ago, was unattainable) - but we also have emerging technologies (machine translation and autonomous vehicles, for example) that quite a few people, not so long ago, assumed would be forever beyond the capabilities of mere machines.
I had a similar reaction to the article, and I appreciate your cosmic analogy.
The metaphor that came to my mind was the long persistence of steam power after electric motors were invented. Initially, 'going electric' meant replacing your one giant steam boiler with one giant electric motor, and doing everything else the same way. The factory remained organized around the drive shafts and pulleys and mechanical distribution systems. There was little advantage. It took decades of experimentation to gain the insights of how many small motors and task lighting could allow a factory to be optimized for task flow, not power distribution. And then it took longer for those insights to diffuse through slow human networks.
Hardware has developed so fast that software had no time or incentive to mature. Each decade's tech just gets ossified into the stack because hardware was making the stack faster at a better rate than human insight.
Every few decades, we get lucky and an invention happens when the ecosystem can use it... so we get compilers and sql and automated tests. But then other insights like immutable data structs just stay niche.
I hope that as hardware progress flattens, opportunities emerge for better software paradigms. Maybe this will coincide with the craft of software becoming introspective about its myriad social issues.
If by decline most people mean "computers being more affordable, powerful, and with more languages, compilers, documentation, FOSS and proprietary software available than ever, not even close to the 'heyday' of the general-purpose computer era they have in mind in the 80s or so", then they're right. Though I think they mean "I want a specific vendor to make a product they don't want to make".
(This is not about TFA, which examines a more well-defined "universal vs specialized processor" dichotomy, but about the frequent laments about general purpose computing.)
Some people literally created an online dumpster fire. You send it an email, it prints it out, and the paper goes into a flaming dumpster. It then sends you the video of this happening.
Why? Just because they thought it would be cool. (Er, hot.)
It’s happening in the tools for thought community on Twitter. It’s more about the software layer and innovations in human computer interaction design. A lot of the ideas from the 60’s and 70’s are having a resurgence like Memex, backlinks, moldable dev environments etc.
> We do mean that the economic cycle that has led to the usage of a common computing platform, underpinned by rapidly improving universal processors, is giving way to a fragmentary cycle, where economics push users toward divergent computing platforms driven by special purpose processors.
computing in my view is rapidly subliming away into the cloud. whether users will have much computer at all in front of them is questionable, or perhaps semi-vestigial. work/applications are headed into data centers. want photoshop today? even the old-fashioned install & run it yourself version has major cloud connectivity.
even within the world of consumer electronics, I don't see specialization as a trend. video game consoles have tended towards "exactly like a pc" over time. cell phones are pc's with the most proprietary/controlled drivers on the planet. 5g base stations are pc software-defined-networks plus gobs of general purpose dsps/sdrs. in the smaller computing/device world, arm and espressif and soon risc-v are cutting out larger & larger swaths.
thus far ai processors have had fairly wide ability to operate models in a cross-architecture fashion. make sure you can run onnx or tf-lite. big, only marginally customized gpus still have a huge presence. there is variety here but remarkably little end user differentiation.
where are the specialized systems this write-up talks about? where are the fragmented systems? the door is opening as we try to re-learn how to make silicon foundries, how to do open source chip making, but I haven't seen what's being talked about here: specialization, bifurcation of capabilities.
but cloud? cloud is murdering the shit out of general purpose computing. applicationization of software effectively turns software into something as fixed as hardware for end users. we have no power, no ability to adapt or change or see the computing. it's 100% what is given to us. every system is 100% special/specialized, and in the post Pax Intertwingularis death of the api & interoperability, that means radical rigidity. systems are radically more specialized and less general, but it's not a hardware problem like this paper asserts, and it's not something on the horizon: it's already happened, it's already made the general purpose computer obsolete, and it's at the software level, it's about where most of the world's computing is run, on special systems, on the clouds.
Fab numbers are astronomical. "TSMC recently announced plans for a $19.5B 3nm plant for 2022". How can the next computing paradigm even take off when all the oxygen is depleted in chasing "proven" technologies?
What are the other options you would want to replace those proven technologies with? Sure, there are better semiconductor materials but they're expensive and already used where it makes economical sense. Nanotubes or graphene can't be handled at scale yet. Superconductors need huge contraptions for cooling. Using organic molecules is messy, see OLED.
Optical parts might help, but that is difficult with silicon which brings us back to the other semiconductor options. Spintronics are already being put to use but those things have eye-watering prices.
Look at Intel: even incremental changes can go wrong and cost you your competitiveness.
Classic innovator's dilemma. I personally am biased to silicon optics after seeing a few impressive demos. Albeit in laboratory conditions, under controlled settings ;)
That's not as much money as it sounds for the industry.
Apple revenue is more than that each month.
There's enough money around. So it seems unlikely that the cost of TSMC silicon fabs would be the obstacle to others developing alternative computing paradigms.
"Personal" computers were subversive when first introduced. They were designed for complete control by an individual. They were also, initially, not networked. When networks first appeared it was with the assumption that the user community of the network did not have a persistent high level of malicious security threats.
In a kludgy, gradual way personal computers were tamed. This was inevitable. If you network completely general purpose, user-controlled machines in an institutional environment you will get security chaos.
Some of this, though, like putting digital rights management into personal computers, was tragically evil. What we are left with is neither here nor there: Computers are not secure. Our communications are not secure and are under constant surveillance. And we do not control machines we supposedly own. Lose, lose, lose.
You cannot have security and surveillance and central control of "personal" devices. You cannot protect "digital rights" and sell entertainment content to users that truly control their personal computers.
There's no doubt that we're getting an ever-widening variety of clients, but this "general purpose" notion is a bit slippery. I remember when "general purpose" meant that not only did computers crunch numbers for science and engineering, but they could finally handle more general business computing such as accounting and text-heavy business data processing, which was much larger than number crunching--more general purpose.
Then we got microcomputers, and we added consumers and individual employees to "general purpose", and "word processing, word processing, word processing, and spreadsheets" became the new definition of general purpose computing, dwarfing big iron data processing. Then laser printers and desktop publishing, then Photoshop, CDs and "multimedia", and then The Internet (meaning the web) exploded and general purpose computing suddenly meant email, instant messaging, and so many new home pages that Yahoo could hardly list them all.
In the Internet Boom, nobody defined general purpose computing as mostly word processing, much less business data processing, and forget about scientific calculating.
Fast forward a couple more decades, and general purpose computing is streaming YouTube celebs, texting, binge watching streaming "TV", getting most of your news managed by Big Tech censors so you can't "misunderstand", keeping up with ever-streaming timelines of your 900 closest friends, going to work or school via Zoom, doing most of your shopping....
And where is word processing in this general purpose computing? Probably bigger than ever, but now buried in the "other" category because newer components of general purpose computing dwarf it.
So if some of this is done with a laptop, and some with a "phone", and some with a flat screen in your car, and some with your TV remote, and some with an iPad...I suspect that "computers" could easily be described as either declining or growing for general purposes depending on which of many reasonable definitions you employed.
You are trying to define tasks as "general purpose". They aren't and never can be.
The point is that the basic modern computer design works well (enough) for almost all of those tasks. For the most part all that you need is the basic operations on 32 or 64 bit "integers": move, compare, add, subtract, and, or, xor, shift left, shift right.
With this you can do spreadsheets, word processing, database, accounting, desktop publishing, web browsing, email, instant messaging, decoding MP3 audio and JPEG photos and MPEG video. And show it all on a GUI on a high resolution bitmapped display.
You can increase the efficiency of some of those tasks with hardware and special purpose instructions for floating point arithmetic (though spreadsheets for example hardly benefit) and SIMD processing for media. See my other (top level) post.
But you don't need them, especially if you have many general purpose CPU cores.
Don't forget the original Macintosh did basically all of those tasks with just a simple integer-only CPU running at 8 MHz. No FPU, no GPU, no media instructions.
A modern multi-GHz CPU can do all those tasks acceptably using only simple integer instructions as long as you don't insist on very high resolution video (and games) at high frame rates.
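To make the integer-only claim concrete, here's a toy fixed-point sketch in Python: fractional math done with integer multiplies and shifts, which is roughly how a lot of pre-FPU software (audio mixing, 2D graphics) handled fractional values. The core arithmetic (fix_mul) uses only integer multiply and shift; floats appear only at the edges for convenience:

    # Q16.16 fixed point: the low 16 bits are the fraction
    FRAC_BITS = 16
    ONE = 1 << FRAC_BITS

    def to_fix(x: float) -> int:
        return int(round(x * ONE))

    def from_fix(f: int) -> float:
        return f / ONE

    def fix_mul(a: int, b: int) -> int:
        # the raw product carries 32 fraction bits; shift back down to 16
        return (a * b) >> FRAC_BITS

    # e.g. apply a 0.8 "volume" gain to a sample of 0.5
    gain, sample = to_fix(0.8), to_fix(0.5)
    print(from_fix(fix_mul(gain, sample)))  # ~0.4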
>And show it all on a GUI on a high resolution bitmapped display.
Minor quibble aligning with my minor quibble with the article itself: modern graphics has been depending on a dedicated graphics subprocessor for ages now. Even most Linux distros that aren't targeting extremely constrained hardware vend a graphics subsystem that assumes an accelerator card.
Perhaps the graphics card can even be noted as the harbinger of the specialized-hardware trend (if one ignores the audio coprocessors before it). This trend has been riding alongside the CPU trend for ages.
This is a very hardware-focused overview of technology. To summarise: Moore's Law is dead, specialised processors take over, and they will be owned by specific companies.
That is all true as I have been posting pretty much exactly the same on HN for a few years. The real question is though, does it matter?
I am not convinced, at least not yet, that what we are doing now on a computer is that much different from 20 years ago: Excel, web browsing, media consumption, and content creation like Photoshop. The only real recent innovation is machine learning, which greatly increases productivity in certain niches.
And it does not seem to me that any of the above activities are going to fundamentally shift. There is nothing in terms of hardware tech limitations holding up performance for further productivity increases; rather, it is software that is not getting much improvement, if you consider that the worst and best software can differ in performance by 100x.
To give an example, I would bet that with a few billion dollars we could create GPU hardware that gets 80% of the way to an Nvidia or AMD GPU. I am also willing to bet that even with 10 billion dollars of funding we can't get a CUDA-compatible library or drivers that are 80% of what Nvidia is offering today.
This fragmentation means that parts of computing will progress at different rates. This will be fine for applications that move in the 'fast lane,' where improvements continue to be rapid, but bad for applications that no longer get to benefit from field-leaders pushing computing forward, and are thus consigned to a 'slow lane' of computing improvements. This transition may also slow the overall pace of computer improvement, jeopardizing this important source of economic prosperity.
This is perceptive, but also red ocean thinking. Moving ML to GPU is only bad for other workloads if CPU languishes as a result. But I think there's reason to believe it's a blue ocean. The more performance ML gets out of GPU, the more money the ML business can pour into semiconductors, batteries, EDA, and so forth. CPU benefits from all of these.
I think you can see this at work in the laptop market. Cell phone R&D has driven battery technology. Server R&D has driven efficient high-performance x86 cores. Both have driven power-efficient process nodes. Put them together and you can make incredible laptops.
"Decline" as in "Still growing but not as much as some other things". This analysis sounds like sour grapes. "Phones get all the love!" doesn't actually mean that general-purpose computing is declining. Just the opposite. It continues to grow in power as it always has.
I agree, and the pool of tinkerers and intellectually curious people has been growing steadily. There are so many free resources available that the barrier to entry has lowered enormously. However, the enormous growth has gone towards consumption, and that is not necessarily a bad thing.
That was a decade or more ago. Now we're getting heavier and heavier JavaScript and WebASM running locally on our "dumb terminals" -- and with them using local permanent storage too.
In some ways I don't mind that. If all the normal people are off using their terminals then perhaps workstations will shift back towards supporting advanced users.
Anyway, I don't see the problem. I don't see how e.g. better GPUs cannibalize the performance of general-purpose CPUs, and I suspect the market for general purpose computing is already saturated. You can buy CPUs for a couple of bucks (little ones), you have a supercomputer in your pocket (phone), and people only buy fancy chips to play video games.
It's declining perhaps, yes, but it's not there yet. Because modern workloads shifted mostly towards gaming, crypto-mining and web browsers doesn't mean that general-purpose computers declined. It only means they haven't developed very fast, but that is thanks to Intel and Microsoft.
In my opinion, the OS needs to go all virtual+distributed for general-purpose to survive. Or else all hardware will become web browser accelerators. Even now all browsers already have their own hardware drivers for video.
What I find is that people of my generation have some basic idea of at least some computer jargon because of what we had to learn just to get our PCs working in the 80s and 90s. What in fuck's name is a parity bit? We learned because we wanted to get our modems configured to play Doom together when our moms weren't on the phone. I'm not talking about people who went into computers, I'm just talking about the people who used them.
Nowadays everything just works so damn well. My phone works with my printer without any driver installation let alone mucking around. So nobody has any idea how anything works.
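(For anyone who never had to set "7E1" on a modem: the parity bit is an extra bit chosen so that the count of 1s in each transmitted character comes out even, or odd, depending on the setting - a crude single-bit error check. A minimal sketch in Python, using the common even-parity convention:)

    def even_parity_bit(data: int) -> int:
        # parity bit that makes the total number of 1 bits even (7-bit data)
        return bin(data & 0x7F).count("1") & 1

    # 0b1010001 has three 1s, so even parity appends a 1; 0b1010011 has four
    assert even_parity_bit(0b1010001) == 1
    assert even_parity_bit(0b1010011) == 0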
The people who grew up knowing what a parity bit was are in the IT consulting business now, myself included. I think that's how things naturally change. IT and computer hardware have become more complex and more popular at the same rate, so you now need people and abstraction for that. It doesn't have to be a bad thing though.
I really don't think it's any more complex than it was 20-30-40 years ago. At the end of the day, none of this GUI abstraction does anything more than what the computer can do at the command line, and that's practically been etched in stone at this point—some of these bash commands are older than my boomer parents. Instead, people just don't have to bother learning beyond the abstraction to get work done anymore. It's just like cars: you don't need to know how to drive a stick shift and all the mechanical theory behind shifting gears smoothly if you have an automatic transmission instead.
I think consumer devices are going to remain consumerified, but we will see an increase in people across disciplines who need and make use of general compute power. More industries are starting to recognize the importance of statistical modelling and interpreting data, and that means being able to develop bespoke analysis on general purpose computing hardware. Right now it's just the data scientists and engineers doing this work, but I wouldn't be surprised if this grows to include investors, accountants, actuaries, urban planners, etc, in the coming decades. I expect we will see more smaller companies with unique cluster offerings to compete against AWS and their increasing prices, especially as hardware grows cheaper by the year.
Yeah, computers are everywhere now. “Collectively, these findings make it clear that the economics of processors has changed dramatically, pushing computing into specialized domains that are largely distinct and will provide fewer benefits to each other. Moreover, because this cycle is self-reinforcing, it will perpetuate itself, further fragmenting general purpose computing. As a result, more applications will split off and the rate of improvement of universal processors will further slow.”
I really believe that distributed systems will carry more weight in the future, that virtual machines will be able to abstract both local and remote resources, and that new paradigms will be created in order to program these. We are still in the early days of programming and there's a lot of room for innovation.
I think that as computers fail to get faster, we won't be able to improve performance without making application-specific logic units; however, we may still be able to keep them general purpose.
What I'm most interested in is what the trends will be in how those specialized circuits are developed.
Circuit design has been marching steadily along for decades, and between the tools available to allow non-specialists to go from abstract hardware description to image-ready chip specification and the ever-cheapening, ever-broadening capabilities of custom chip fab, are we approaching an era where fabricating mid-range specialized circuits is on par with the difficulty of writing and compiling software, only a bit slower?
I'm waiting for automatic code optimization. Write the code going from an input to an output however you like, show the computer your input and output, have it rewrite the most optimal (and probably very weird-looking) code possible to go from your supplied input to your supplied output, and reap the efficiency rewards of mastering a stack of arcane low-level reference books without spending the thousands of hours of your precious and limited free time to do so. That would be a huge leap imo.
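Something in that spirit exists in research under the name "superoptimization": search for the shortest instruction sequence that matches a specification (or, more loosely, a set of examples). Here's a deliberately tiny Python sketch of the example-driven idea - the op set and examples are made up, and real tools search over actual machine instructions and verify against a spec rather than trusting a handful of examples:

    from itertools import product

    # toy "instruction set" of unary integer ops
    OPS = {
        "add1": lambda x: x + 1,
        "dbl":  lambda x: x * 2,
        "neg":  lambda x: -x,
        "sq":   lambda x: x * x,
    }

    def run(seq, x):
        for name in seq:
            x = OPS[name](x)
        return x

    def find_program(examples, max_len=3):
        """Brute-force the shortest op sequence consistent with all examples."""
        for length in range(1, max_len + 1):
            for seq in product(OPS, repeat=length):
                if all(run(seq, x) == y for x, y in examples):
                    return seq
        return None

    # these examples are satisfied by ("add1", "dbl"), i.e. (x + 1) * 2
    print(find_program([(1, 4), (2, 6), (5, 12)]))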
The fact that I can use Python to program a game on a raspberry pi, and use the same language to train GPT-3 is evidence that general purpose computing has not been declining...
What the authors have completely missed in their analysis is the rapidly-growing trend of specialising standard general-purpose CPUs -- that is adding special-purpose instructions to general purpose CPUs instead of adding special-purpose processors.
This process started a couple of decades ago, adding instructions to assist in the calculation of hashes and encryption, and relatively narrow SIMD parallel processing to assist multimedia.
Now virtually all high volume general purpose processors have such instructions.
If you count simple floating point add, subtract, multiply, divide as special purpose then this process started 50+ years ago.
The advantage of doing this is less added area and power consumption, and the ability to mix special purpose and general purpose operations at a much finer-grained level. Sometimes it's worth copying a few MB of data across to a GPU's local RAM, downloading a special program to it, and then copying the results back; often it isn't.
The number of potential special purpose operations that are useful to someone is probably unbounded. But each one might be useful to only a small number of people. It's not feasible to just keep on adding everything someone thinks of to the volume leader mass-market processor.
Three related things are happening to help with this:
1) adding reconfigurable hardware to a general purpose processor or embedding the processor into reconfigurable hardware. Here we have Xilinx "Zynq" and MicroChip "PolarFire SoC" with ARM and RISC-V (respectively) hard CPU cores inside an FPGA. We also have Cypress PSoC which I believe is more like adding a small amount of reconfigurable hardware on the side of a conventional CPU core. If the performance needs of the general-purpose part of the processing are relatively low then you can use a "soft core" CPU built from the FPGA resources themselves. Each FPGA vendor has had their own custom instruction set and CPU core, but now people are moving more and more to instruction sets and cores they can use on any FPGA -- chiefly RISC-V.
2) making custom chips with a standard CPU core augmented with a few custom instructions / execution units. Again, much of this activity is centred around RISC-V though ARM has announced support for this with one or two of their standard CPU cores, initially the Cortex A35. Going into full production of a chip like this has costs in the low millions of dollars, with incremental unit costs as low as $1 to $10. Small numbers of custom chips (100+) can be made for $5 to $500 each depending on the size of the chip and the process node -- bigger, slower nodes are cheaper.
3) adding special purpose instructions that can be more flexibly applied to a larger range of problems. The main contender here is support for "Cray-style" processing of (possibly) long vectors of flexible length. If appropriately designed, the same program can run at peak efficiency (for that chip) on CPUs with vastly different vector register lengths. This is in contrast to traditional SIMD, where the program has to be rewritten every time a CPU is made with longer vector registers -- and it is very inconvenient to deal with data set sizes that are not a multiple of the vector length. (A toy sketch of this strip-mined style of loop follows below.)
If suitable primitives are included for predication of vector elements and divergent and convergent calculations then such a vector processor can run the same algorithms as GPUs (e.g. directly compiling CUDA and OpenCL to them). CPUs with sufficiently long vector registers can then compete directly in performance with GPUs on GPU-style code. All while staying tightly integrated with general purpose computations.
ARM SVE and the RISC-V V extension are the examples of this, with I think the RISC-V version being the more flexible and forward-looking.
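As promised above, here's a toy Python simulation of that strip-mined, vector-length-agnostic style: the "hardware" grants up to VLMAX elements per pass, so the same loop runs unchanged on machines with different vector lengths and handles ragged tails with no special casing. The vsetvl function below is only a stand-in for the flavour of the real RISC-V instruction, not its full semantics:

    def vsetvl(remaining: int, vlmax: int) -> int:
        # stand-in for the vsetvl idea: the hardware grants up to VLMAX lanes
        return min(remaining, vlmax)

    def vec_add(a, b, vlmax):
        out = [0] * len(a)
        i = 0
        while i < len(a):
            vl = vsetvl(len(a) - i, vlmax)  # how many lanes this pass
            out[i:i + vl] = [x + y for x, y in zip(a[i:i + vl], b[i:i + vl])]
            i += vl
        return out

    # identical code, different "hardware": a 4-lane and a 128-lane machine
    data = list(range(10))
    assert vec_add(data, data, vlmax=4) == vec_add(data, data, vlmax=128)

With fixed-width SIMD, by contrast, the vector width is baked into the instructions, so widening the registers means recompiling or rewriting, and the tail has to be handled by hand.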