I created a GitHub repo that tracks "classic papers" of machine learning that folks on HN have enjoyed in the past. Feel free to make a PR if you think I'm missing something good.
I think my inaugural blog post/website entry will be on the fun I've been having reading old books and old papers. Some are long gone for good reason, but some have clearly been replaced for no reason.
Does anybody have more resources on the research done on Ubiquitous Computing at PARC?
Steve Jobs saw some amazing things decades ahead of their time during his famous tour, and the iPad wasn't released all that long ago. I'm curious what other kinds of things have been forgotten.
This consists primarily of heavily excerpted copies of the papers. While some editorial choices may make a lot of sense, you don't know what you don't know between the ellipses.
The last entry is 1979, which happens to coincide with the year of my birth. Apparently I am the final product of computer science research, and there is nothing left to discover?
Anyways, it's an interesting question what ought to be on this list if it continued to the present. Something blockchain related, probably. And something machine learning related. There's been a lot of progress in programming language design and theory, producing Haskell, Rust, Agda, Idris, etc. The Linux kernel also appeared in that time, though there's not that much that's fundamentally revolutionary about Linux from a computer science perspective (just a lot of little things). Along with Linux came the rise of free/open source software.
I'd put the original paper describing Google on that list. BitTorrent maybe -- some of the other proper DHTs are more interesting, but BitTorrent is what actually worked for a lot of people.
Whitted's ray tracing paper is from 1979. That's had a big impact on graphics (and probably an even bigger one going forward). Then there was path tracing, and photon mapping, and the study of building and efficiently traversing bounding volume hierarchies, which is what makes ray tracing computationally tractable.
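To make the BVH point concrete, here's a minimal traversal sketch in Python (the node and primitive types are hypothetical stand-ins, not from Whitted's paper or any particular renderer): one missed bounding box culls an entire subtree, so a typical ray does roughly O(log n) box tests instead of testing all n primitives.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Node:
        # Hypothetical BVH node: an axis-aligned box plus children or a
        # leaf's primitive list. Primitives are any objects exposing
        # intersect(origin, inv_dir) -> hit-or-None, where a hit has .t.
        box_min: tuple
        box_max: tuple
        left: Optional["Node"] = None
        right: Optional["Node"] = None
        primitives: Optional[list] = None   # set only on leaf nodes

    def hit_aabb(origin, inv_dir, box_min, box_max):
        # Slab test: does the ray hit this axis-aligned bounding box?
        t_near, t_far = 0.0, float("inf")
        for a in range(3):
            t1 = (box_min[a] - origin[a]) * inv_dir[a]
            t2 = (box_max[a] - origin[a]) * inv_dir[a]
            t_near = max(t_near, min(t1, t2))
            t_far = min(t_far, max(t1, t2))
        return t_near <= t_far

    def closest_hit(node, origin, inv_dir):
        # The whole point of the BVH: one box miss culls a whole subtree.
        if node is None or not hit_aabb(origin, inv_dir,
                                        node.box_min, node.box_max):
            return None
        if node.primitives is not None:     # leaf: test actual geometry
            hits = [p.intersect(origin, inv_dir) for p in node.primitives]
            return min((h for h in hits if h is not None),
                       key=lambda h: h.t, default=None)
        hit_l = closest_hit(node.left, origin, inv_dir)
        hit_r = closest_hit(node.right, origin, inv_dir)
        if hit_l and hit_r:
            return hit_l if hit_l.t <= hit_r.t else hit_r
        return hit_l or hit_r

Real renderers typically flatten the tree and traverse it iteratively rather than recursing, but the pruning logic is the same idea.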
FFTs are another technology that's been pretty relevant in modern times, though the idea has been around a lot longer than 1979: Cooley and Tukey published the modern algorithm in 1965, and the core trick apparently goes back to Gauss.
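As a quick illustration of that relevance (a sketch using numpy; nothing here comes from the original papers): the naive DFT costs O(n^2), the FFT O(n log n), and they compute the same transform.

    import numpy as np

    def naive_dft(x):
        # O(n^2): multiply by the full DFT matrix exp(-2*pi*i*j*k/n).
        n = len(x)
        k = np.arange(n)
        return np.exp(-2j * np.pi * np.outer(k, k) / n) @ x

    x = np.random.rand(1024)
    # np.fft.fft computes the same thing in O(n log n).
    assert np.allclose(naive_dft(x), np.fft.fft(x))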
History shows that research, at least the best of it, is powerful stuff, powerful enough to change civilizations; perhaps the most powerful stuff there is.
The OP shows some of the history of some research and is thus good for better understanding how research works.
One can observe that for all or nearly all the topics in the OP, the best present treatments are much better sources than the historical documents. So, (a) even for those now-famous research documents, the time since has brought a lot of improvements and advancements, often even in the foundations, and (b) curiously, for someone wishing to study and learn for applications or additional research, bluntly, it's nearly always best just to ignore the historical documents.
Another point: it seems clear that integrated digital circuits with feature sizes under 10 nanometers, communications via solid-state lasers lighting long-haul optical fibers, operating systems and other general purpose software, means of developing new software, and the Internet represent good opportunities, likely new versions of copper, bronze, iron, concrete, open-ocean sailing, steam, oil, electricity, electronics, etc. Then, bluntly, taking such opportunities mostly means (a) finding a need and (b) using these new tools to meet it, and in that work the foundations of computer science are only occasionally directly relevant. In simple terms, we can all learn to develop the software: get a laptop computer, download a lot of software, learn some programming languages and some software libraries, and start meeting the needs.
Of the topics of those historical documents, the ones I encountered in some detail include discrete optimization, information theory, the RSA public key cryptosystem, algebraic coding theory, quite a lot of numerical analysis (e.g., Gaussian elimination and its alternatives), a lot in probability (and a lot in statistics), and a lot of the usual algorithms, from heapsort to discrete dynamic programming. I also covered a lot of topics not on that list, in total more topics than are on it. Lesson for others: I spent too much time on those topics and went much deeper than was really useful for developing solutions to real problems.
Net lesson: get on with something current and useful to meet a need, (a) be careful not to spend too much time on those foundational topics, and (b) in simple terms, for the historical documents, just f'get about them.
The thing about a lot of these classic papers is that they are often not very easy to digest compared with modern retellings of their subject matter. They typically represent the genesis of an idea that became influential and has since been popularized.
Even if the paper's authors are gifted writers (which isn't common), these papers represent early iterations of the presentation of an idea. Later, people who specialize in writing will come along and refine the presentation.
There's a flipside to this viewpoint: it's often (though not always) helpful to read and know the original formulation of a concept. Even if there is a better subsequent development, the original statement is the idea as it was first presented. More importantly, for ideas which have literally changed our understanding, they mark the border between the before and after worlds. Much of what makes such explanations seem so foreign is that we come to them with our "after" minds, already fertilised with the ideas they contain, rather than with the beginner's mind, or the prior belief system's indoctrination, with which contemporary readers would have arrived at the material.
Much derivative treatment of great ideas is itself not particularly good or clear, so readers and students are left with the problem of having to hunt down good sources. Better to specifically call those out if you know them than to cast broad aspersions on the original authors' treatments.
This question arises often in the study of philosophy, as well as other fields, and though there are some original authors who are notoriously prickly (Hegel, by most accounts; Kant can be difficult), there are a surprising number who are clear, and even fun and amusing to read.
I'd say don't be afraid of original sources. If they prove difficult, look for recommended better treatments. But if you can, rely on originals.
I tried feeding the DOIs to Sci-Hub, but unfortunately they don't have these documents yet. If anyone here has access, may I suggest submitting them to Sci-Hub or Library Genesis? It is, after all, a bit odd to see papers like "Prior Analytics (~350 BCE)" and "The True Method (1677)" hidden behind a paywall.
I believe if you search the title of the book ("Ideas That Created the Future"), you can find it on Library Genesis. If anyone asks, you didn't hear it from me.
If you don't want to sign up or pay for this particular source, you can use your favorite search engine, enter the titles of the papers, and find all of them online for free.
I'd also wager that, like the book these come from, the PDFs linked here are edited versions of the papers (some edited to conform to modern notation, others to fit the book itself). If you use a search engine and find the actual papers, you can get the unedited versions.
https://github.com/daturkel/learning-papers