The problem with this "You are not the user" criticism is that it misses the point: the users who totally rely on your product are the ones living with the edge cases. Most other users could just switch elsewhere and it wouldn't bother them one iota. This is why MS Office has held on to the market so strongly: seemingly every finance department relies on a different obscure feature that isn't replicated in the alternatives.
This has been happening over at Apple too, where the power user experience has deteriorated dramatically now that they've decided that crowd is already stuck in the ecosystem. It's a dangerous game, and it's what caused MS to come unstuck as Linux rose up. Right now you'd be hard pressed to argue that OS X, Windows, Android, Chrome OS or iOS is headed in a direction power users (or content creators) want or need. This will lead to a division like in the early 90s again, where to do serious work you need a "workstation" that is quite different from a normal machine.
Could you give examples of such developments on OS X? I'd consider myself a power user, but the issues I have with OS X are not related to features, rather to its stability and complexity (which result in more bugs). I liked it best back at version 10.4, but today we have way more features, lots of them directed at power users.
When releasing iWork 2013, Apple ditched all AppleScript bindings[1]. In later updates, the bindings returned[2], but it still caused uncertainty about the future for users with AppleScript workflows.
Something similar happened when they released Final Cut Pro X[3], also fixed in later updates. But again: uncertainty, and many users prematurely switching to alternatives (like Adobe Premiere).
Yes, the new iWork (if it can still be called that) is clearly a step back: no new features and a dumbed-down UI. I don't consider it part of the OS, though.
I recently upgraded OS X and couldn't read Keynote presentations from anyone else in the office. I'd honestly pay them $100 to take the crippling upgrade back at this point.
>This will lead to a division like in the early 90s again, where to do serious work you need a "workstation" which will be quite different to a normal machine.
I'm not comfortable with that... at all, but it may not be a bad idea.
As Alan Kay says, computing is the one field where, thanks to Moore's Law, you can live in the future just by spending enough money. He puts forth a pretty good argument that one of the problems with modern computing is that it's too incremental, and one of the reasons for that is that most programmers (and students) use consumer-grade laptops and desktops, and thus they're stuck writing code for yesterday's state-of-the-art hardware. On the other hand, programmers using workstations can write for tomorrow's consumers. He attributes much of the breakthrough progress of 1970s Xerox PARC to its willingness to buy the researchers and programmers $60,000 workstations (that's adjusted for inflation, and they built 80 of them)[1][2]. The expressed goal was to be able to develop software for the computer that would exist 10 years hence.
The first Alto (mostly designed and built by Chuck Thacker at Xerox Parc) started working in April 1973. The original cost target was around $15K in the dollars of that time. We eventually made more than 1500 of them (actually close to 2000 by some estimates) during the 70s.
I recall that the budgeting was actually about $22K per machine, which would be roughly more than $100K in today's dollars.
There are a variety of points here. The biggest one is that -- being a rather practical field -- we tend to have ideas that are possible. A factor of 10-50 allows us to have ideas that our subconscious minds reject automatically, because we are used to working within "normal". We have to shift or get beyond "normal" to make big progress.
I may be missing something, but how would giving programmers more powerful hardware prevent incremental changes? Even if I'm using a top of the line rig, it's unlikely that anyone actually using the software I write will have that same level of technology available. I can only see this as a benefit in specific cases where the limits of modern hardware are being challenged, which are far fewer today than they once were.
The point is that you can write something that takes advantage of hardware that will be consumer grade in x years, rather than hardware that is consumer grade now. I thought parent stated that fairly clearly.
Ah, that makes sense. However, since most applications outside of video editing, gaming, etc. are not resource intensive, why shouldn't programmers just use consumer-grade hardware? An email client running on 2014's hardware and one on 2019's hardware are probably not going to be substantially different.
The "email" clients of the future are going to be dealing with multi-terabyte caches[1] with lots of high-bitrate audio and 5K+ video and images. A robust and competitive email client should be able to do real-time summarization, translation, text-to-speech, speech-to-text, and index into media files (I should be able to query for a phrase and get the relevant portion of a video or audio clip in the search results).
We have all of the algorithms now, so the hardest part is probably developing a good UI. So you need a super fast computer so that:
(1) You can power the UI
(2) You can rapidly iterate in response to user testing.
[1] By "cache" I mean stuff that is stored locally rather than in the cloud; I'm not talking about CPU caches which will probably stay about the same.
Can you point to active research in the area of interactive applications with local or multi-tier caching? Everything in the news is about "cloud". The closest I've seen to cache-oriented apps were the cloudlets at CMU, http://elijah.cs.cmu.edu.
> We have all of the algorithms now
Aren't some of those algorithms patented, e.g. speech recognition from SRI, Nuance, Apple/MS/Google, IBM, AT&T and increasingly implemented in centralized cloud storage rather than at the edge? How about lack of access to training data?
I think the point here is that we don't see this kind of research because nobody invests in the super fast and expensive workstations that could mimic future hardware.
I don't think speech recognition algorithms are patented. At least not the Google ones, since AFAIK they use neural networks. You could train your model centrally and then ship all the neurons, weights and biases off to the individual device, and keep the training data secret.
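As a minimal sketch of that split (a toy logistic-regression "network" in Python with NumPy and made-up data; nothing here reflects any real speech stack), the vendor trains on private data and exports only the parameters, and the device runs inference from those parameters alone:

    import json
    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # --- Central training (runs on the vendor's servers; private data stays here) ---
    def train(X, y, lr=0.1, epochs=500):
        w = np.zeros(X.shape[1])
        b = 0.0
        for _ in range(epochs):
            p = sigmoid(X @ w + b)
            w -= lr * (X.T @ (p - y)) / len(y)
            b -= lr * np.mean(p - y)
        return w, b

    # Private training data (never leaves the server).
    X_train = np.array([[0.1, 0.9], [0.9, 0.2], [0.2, 0.8], [0.8, 0.1]])
    y_train = np.array([1, 0, 1, 0])
    w, b = train(X_train, y_train)

    # --- What actually ships to the device: just the learned parameters ---
    model_blob = json.dumps({"weights": w.tolist(), "bias": b})

    # --- On-device inference (no training data or training code needed) ---
    params = json.loads(model_blob)
    def predict(x):
        return sigmoid(np.dot(params["weights"], x) + params["bias"]) > 0.5

    print(predict(np.array([0.15, 0.85])))  # classified using only the shipped weights

A production acoustic model is vastly larger, but the separation is the same: what leaves the data center is the learned parameters, not the corpus they were learned from.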
It's a 'strike price' R&D programme: work on expensive stuff _now_ on the assumption that it will be cheap/mainstream in x years. If you wait for it to be cheap enough before you start, you'll be outgunned by the crew that started 6 months before you did. That's iterative of course; how far back you go is the decision that counts. Someone once said "10x faster isn't just faster - it's different". I like that metaphor: in CS you might throw away processor cycles for the GUI at a time when everyone else is optimising the metal for a text interface to eke out a millisecond or two...
Developing bleeding edge software takes time. So you start writing your SW on a top end machine; in two years it becomes mainstream, which might coincide with the time of your first release. Moore's law worked perfectly for you here.
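As a rough back-of-the-envelope (assuming consumer hardware roughly doubles in capability every two years, which is a loose reading of Moore's law rather than a precise one), the 10-50x factor mentioned upthread puts a workstation roughly 7-11 years ahead of consumer machines, which lines up with the "10 years hence" target:

    import math

    # If consumer hardware gets ~2x more capable every two years (an assumption,
    # not a law), how long until today's workstation-class speedup is mainstream?
    def years_until_mainstream(speedup_factor, doubling_period_years=2.0):
        return doubling_period_years * math.log2(speedup_factor)

    for factor in (4, 10, 50):
        print("%2dx head start ~ %.0f years ahead of consumer hardware"
              % (factor, years_until_mainstream(factor)))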