Well, Wolfram's thesis is that analytical representation is not the correct frame for modeling the world, and that algorithmic/procedural representation (e.g. simulation) is.
This isn't particularly novel, but it is a key insight of computer science. I just don't think that Wolfram deserves credit for it.
Edit for clarification:
The key insight being that you can model things algorithmically. Whether it's "correct" or not is a different matter. But this type of modeling is different from analytical modeling.
No, this is a mischaracterization of what Wolfram is saying. Wolfram says: why consider computer programs to be valuable as mere approximations to the underlying analytic models, when you can consider computer programs themselves to be an entirely new and unexplored land of complex and interesting models.
Traditional computer science considers computer programs to be merely a means to an end.
A quite delicious irony is given by the example of the Navier-Stokes equations. They are themselves an idealization of the flow of fluids composed of discrete particles. Because of our obsession with continuous models, we mostly resort to laboriously solving them numerically -- but it turns out that simple lattice gas cellular automaton models are actually 1) fairly accurate, 2) much more computationally efficient, and 3) more suggestive of the underlying microscale physics.
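To make that concrete, here is a toy sketch (not from the thread) of the HPP lattice gas, the simplest cellular-automaton fluid model: booleans on a grid, four velocity channels, and a purely local collide-and-stream rule that conserves particle number and momentum. The array layout and channel ordering are my own choices for illustration.

```python
import numpy as np

def hpp_step(cells):
    """One update of the HPP lattice gas: collide, then stream.

    cells: boolean array of shape (4, H, W); channel k holds particles
    moving in direction k (0: +x, 1: -x, 2: +y, 3: -y). Periodic walls.
    """
    e, w, n, s = cells  # east, west, north, south channels
    # Collision: two particles meeting head-on (with the other channels
    # empty) scatter into the perpendicular pair. This conserves both
    # particle number and momentum (the momentum is zero either way).
    ew = e & w & ~n & ~s
    ns = n & s & ~e & ~w
    e, w = (e & ~ew) | ns, (w & ~ew) | ns
    n, s = (n & ~ns) | ew, (s & ~ns) | ew
    # Streaming: each particle moves one cell along its direction.
    return np.stack([
        np.roll(e, 1, axis=1),
        np.roll(w, -1, axis=1),
        np.roll(n, -1, axis=0),
        np.roll(s, 1, axis=0),
    ])

rng = np.random.default_rng(0)
cells = rng.random((4, 32, 32)) < 0.2  # random initial gas
before = cells.sum()
for _ in range(100):
    cells = hpp_step(cells)
assert cells.sum() == before  # particle number is exactly conserved
```

(To be fair, HPP's square lattice is too symmetric to recover isotropic Navier-Stokes behavior; that takes the hexagonal FHP variant. But the flavor -- bitwise local rules instead of numerical PDE solvers -- is the same.)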
To step back a bit, a helpful analogy would be that simple computations are the 21st century equivalent of differential equations, which were studied rigorously in their own right starting in the 18th century, often prior to their application to concrete problems.
Whether this intuition will turn out to be prescient, or whether it will fizzle out, is another question.
That doesn't make sense in the context of my comment, and besides, I don't think that's true.
If you take the Turing Test to its logical conclusion, a program that is indistinguishable from a human intelligence is a human intelligence. There's no means to some other end. The program is the intelligence.
This is also something that Turing came up with in the 40s & 50s. Not exactly novel.
Uh... well, I'm getting Wolfram's take right. So you don't think my analogy is true? It's just an analogy.
But it's clear that it has methodological implications for how one does science:
For example, if you think computers are just a way to simulate continuous systems, it would not occur to you to sample random programs and see what they do. It would not occur to you to enumerate simple programs. And you wouldn't think it very interesting that such and such a simple program can do such and such a computation.
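"Enumerating simple programs" can be taken quite literally: loop over all 256 elementary cellular automaton rules (Wolfram's standard example) and see what each one does. A rough sketch -- the crude three-way classification below is my own shorthand, not NKS's four behavior classes:

```python
from collections import Counter

def eca_step(row, rule):
    """Apply one step of elementary cellular automaton `rule` (0-255)
    to a tuple of 0/1 cells, with periodic boundaries. The neighborhood
    (left, center, right) indexes a bit of `rule`, Wolfram-style."""
    n = len(row)
    return tuple(
        (rule >> (row[(i - 1) % n] * 4 + row[i] * 2 + row[(i + 1) % n])) & 1
        for i in range(n)
    )

def behaviour(rule, width=31, steps=64):
    """Run from a single black cell and report whether the evolution
    dies out, falls into a previously seen row, or keeps producing
    new rows for the whole run."""
    row = tuple(1 if i == width // 2 else 0 for i in range(width))
    seen = {row}
    for _ in range(steps):
        row = eca_step(row, rule)
        if sum(row) == 0:
            return "dies out"
        if row in seen:
            return "repeats"
        seen.add(row)
    return "keeps going"

# Enumerate all 256 simple programs and tally their behaviour.
print(Counter(behaviour(r) for r in range(256)))
```

The point of the exercise is the mindset shift: the programs are the objects of study, surveyed exhaustively like a naturalist cataloguing species, rather than built one at a time to approximate something else.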
If you did, it would. And if you were ambitious enough, you would actually try hunting for the program that computes the universe, as Wolfram has been doing on a cluster in his basement (I love this tidbit) for some years now: http://www.ted.com/talks/stephen_wolfram_computing_a_theory_...
No, you're making two assertions: first, that everyone positively asserts that the universe is continuous, and second, that computer scientists see algorithmic systems as being simulations of analytical systems.
The first I am agnostic to (although I do like the Feynman quote at the top of http://arxiv.org/pdf/quant-ph/0206089v2 which was linked to above), and the second is most certainly false, as I have indicated above.
The evidence in NKS to support Wolfram's assertion that the universe is a simple program is circumstantial at best, and so his windmill-tilting quest for the program that is the universe seems quixotic at best and arrogantly foolhardy at worst.
Analytical modeling, algorithmic modeling, or whatever other model someone wants to use to represent reality are models until you can prove them to actually be fundamentally connected with the manner in which reality functions.
Re: NKS theory of physics. You're right, it's far from convincing. But it is intriguing speculation, and I think he adequately hedges it as such. One fascinating partial result is that the natural restriction he introduces for graph automata to be deterministic is enough to induce special and general relativity. That's pretty eerie!
Re models: now we're getting into epistemology. I don't think the aim you ascribe to scientists to "prove them (models) to actually be fundamentally connected with the manner in which reality functions" has much meaning when one is talking about, say, quantum field theory. How do I connect the mathematics of QFT with what is "really going on"? You can't. It just is.
> "One fascinating partial result is that the natural restriction he introduces for graph automata to be deterministic is enough to induce special and general relativity. That's pretty eerie!"
That sounds remarkable indeed; can you point to an online reference that has more info on this? Thanks.