The cool thing about Conway's Game of Life is that you can't predict it without running the full simulation; there is no shortcut. It relates to the undecidability of recurrent processes as seen from the outside.
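A minimal sketch of what "no shortcut" means in practice, assuming nothing beyond the standard Life rules (plain Python, names my own):

```python
# Conway's Game of Life on a small wrapped grid: in general, the only
# way to know generation n is to compute generations 1..n-1 first.
def step(grid: list[list[int]]) -> list[list[int]]:
    h, w = len(grid), len(grid[0])
    nxt = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Count the 8 neighbours, wrapping at the edges (torus).
            n = sum(grid[(y + dy) % h][(x + dx) % w]
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                    if (dy, dx) != (0, 0))
            # Birth on exactly 3 neighbours; survival on 2 or 3.
            nxt[y][x] = 1 if n == 3 or (n == 2 and grid[y][x]) else 0
    return nxt

def generation(grid: list[list[int]], n: int) -> list[list[int]]:
    # Irreducibility in miniature: no closed form, just n applications.
    for _ in range(n):
        grid = step(grid)
    return grid
```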
"Computation irreducibility"[0] is Mr. Wolfram's word for it and I believe it has some relationship to his CA physics, but I won't pretend I understand.
Yes. He relates an interpretation/definition of the second law of thermodynamics (the increasing-entropy thing) to his irreducibility, and builds a computational theory about the role and importance of the physics "observer" in these analyses. Computational irreducibility is basically a statement both about the intrinsic requirement for computation to arrive at the future, and about the computational capability of the "observer" (a model, or our brains) to arrive, or not, at the future more efficiently.
I am a computational person, not a scientist, and I think science people find him to be speaking total garbage. That seems a correct assessment to me. His model of the world, from a physics perspective, seems wrong. Nevertheless, I personally find his computational lens/bias useful.
> I am a computational person, not a scientist, and I think science people find him to be speaking total garbage. That seems a correct assessment to me. His model of the world, from a physics perspective, seems wrong.
I don't think it's that he is speaking garbage; he is basically talking about digital physics, which is a real theory being considered and researched, not pseudoscience.
But he doesn't work with the scientific community at all; he just writes his long essays, uses his own terms, and ignores anyone else doing similar work. He then gets upset when scientists don't simply defer to him.
Though most academic disciplines have a strong "not invented here" bias: it makes you ignore anything outside your citation bubble, or from different fields with somewhat different conventions and terms, even if the other people are academics as well.
Kaggle's competitions [0] do pull in a lot of impressive little pieces of code, a number of which actually do take shortcuts. The competitions do define a few constraints to make shortcuts more feasible, and there is luck involved, not just deterministic results.
Wouldn't every Turing-complete cellular automaton have this property? What would be an example of a nontrivial (i.e., sufficiently expressive) CA that is "predictable"?
One example would be a CA that takes an exponential number of steps to emulate n steps of a Turing machine. That lets you predict exponentially far into the CA's future by running the Turing machine instead.
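To make the trade concrete, here's a hypothetical sketch; the `tm_step` function and the exact 2**n slowdown are assumptions for illustration, not any specific published construction:

```python
import math

# Suppose a CA needs 2**n of its own steps to emulate n steps of some
# Turing machine `tm_step`. Then the CA's far future is cheap to reach
# by running the TM instead.
def predict_slow_ca(tm_step, tm_state, ca_steps: int):
    tm_steps = int(math.log2(ca_steps))  # invert the 2**n slowdown
    for _ in range(tm_steps):
        tm_state = tm_step(tm_state)
    # Decoding tm_state back into a CA configuration is omitted; the
    # point is the work is O(log ca_steps) rather than O(ca_steps).
    return tm_state
```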
This insight is why I stopped trying to use CAs as my underlying computational substrate in genetic programming experiments. It is much, much cheaper to run something like brainfuck programs than it is to simulate a CA on a practical computer.
A switch statement over 8 instructions contained in a contiguous byte array will essentially teleport its way through your CPU's architecture.
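A minimal sketch of that kind of dispatch loop, in Python rather than C for brevity (Python 3.10+ for `match`; the I/O instructions are stubbed out):

```python
def run(program: str, tape_len: int = 30000) -> bytearray:
    # Precompute matching-bracket jump targets.
    jumps, stack = {}, []
    for i, c in enumerate(program):
        if c == '[':
            stack.append(i)
        elif c == ']':
            j = stack.pop()
            jumps[i], jumps[j] = j, i
    tape, dp, ip = bytearray(tape_len), 0, 0
    while ip < len(program):
        match program[ip]:
            case '>': dp += 1
            case '<': dp -= 1
            case '+': tape[dp] = (tape[dp] + 1) % 256
            case '-': tape[dp] = (tape[dp] - 1) % 256
            case '[':
                if tape[dp] == 0:
                    ip = jumps[ip]  # skip past the loop body
            case ']':
                if tape[dp] != 0:
                    ip = jumps[ip]  # loop back to the matching '['
            case _:
                pass                # '.', ',' (I/O) ignored in this sketch
        ip += 1
    return tape
```

For instance, `run('++[->++<]')[1] == 4`: each instruction is a trivially predictable table lookup over a flat array.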
I feel like CAs (single- or multi-state) would work quite well on dedicated hardware; how big could the grid even get? I may be missing the obvious, but they do seem easier to scale than cores and manual dispatch.
But otherwise yeah, not the most efficient on current CPUs.
To be fair, a one-dimensional CA is effectively a sort of UTM with a weird program counter and instruction set. I think the more useful CAs will tend to be of the higher-dimensional variety (beyond 2D/3D). Simulating a CA in hyperspace seems problematic even if you intend to purpose-build the hardware.
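For concreteness, an elementary (one-dimensional, two-state, nearest-neighbour) CA step under the standard Wolfram rule numbering; rule 110 in this scheme is the one famously proven Turing complete:

```python
def ca_step(cells: list[int], rule: int = 110) -> list[int]:
    # The row is the "tape"; the 8-entry rule table, packed into the
    # bits of `rule`, is the "instruction set". Edges wrap around.
    n = len(cells)
    return [(rule >> (4 * cells[i - 1] + 2 * cells[i]
                      + cells[(i + 1) % n])) & 1
            for i in range(n)]
```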
I feel like I'm too dumb to tell whether these CA articles of his are interesting, or just numerology deep-dives like the work of people who spend their lives "investigating" the Torah.
From the article, for some context for what I'm about to say:
"And indeed, what we’ll often see is that the more optimized a structure is, the less modular it tends to be. If we’re going to construct something “by hand” we usually need to assemble it in parts, because that’s what allows us to “understand what we’re doing”. But if, for example, we just find a structure in a search, there’s no reason for it to be “understandable”, and there’s no reason for it to be particularly modular."
If I were to sum up what I will politely call Wolfram's Conjecture, it is that there is some other way to start from an understanding of cellular automata and derive from it the ability to model systems, and presumably to build systems in cellular automata, that are not characterized by the modularity shown in human constructions. Something like chaos mathematics, but for cellular automata; and presumably something that goes beyond statistical characterization into some deeper understanding, analogous to our ability to deeply understand modular systems.
For the purposes of what I'm saying here, I'm virtually equating "understanding" with "the ability to meaningfully manipulate". You or I may not be able to manipulate one of the human-constructed systems in Life, but there is a path to learning how that is observably human-comprehensible and can be conveyed through human communication, because it has been. By contrast, the article shows many cases of "humans constructed this system with property X, and by random search we found a much more efficient system with that property, but it is not something a human could ever have reasonably designed".
What I may less charitably call "Wolfram's Mistake" is that there is, at the moment, frankly no evidence that I can see that such a thing is true, despite his having searched for it for a very long time.
There are two angles on that: one, is there any such understanding that is accessible to human cognition? And two, is there any such understanding at all, without that limitation, but attainable by some finite level of intelligence significantly smaller than the level necessary to just brute-force the problem?
Or, to put it another way, it is not hard to hypothesize intelligences that could just glance at a description of Life and then simply simulate systems internally, directly looking for whatever you like. You may consider them something like "what if a human had a modern computer built directly into their brain?" They would have vastly, vastly more raw computational power than a human for this purpose; but is there any sort of understanding of cellular automata in general, or even Life in particular, that is meaningfully more compact than simply running the automata?
If you want to dig more into what it means to "understand" something, I refer you to "Why Philosophers Should Care About Computational Complexity" https://www.scottaaronson.com/papers/philos.pdf rather than trying to elaborate further in this post. An "understanding" of this sort of cellular automaton, whatever it may be, no matter how alien the system that has it, should be meaningfully more computationally compact than the exponential complexity of simply having a table of all possibilities in memory.
From this perspective, "Wolfram's Mistake" is that there is no such understanding: there is fundamentally no way to create a computational model of the phenomena in a cellular automaton that is simpler than the cellular automaton itself, in the general case.
Which means that rather than being the key to understanding, cellular automata are actually a really bad lens through which to try to view the world for any sort of understanding by us finite beings. As incomplete and simplified as our many other lenses may be, at least they work in some cases. Cellular automata don't seem amenable to compact theories of any utility. (And by "any utility", I mean any utility: scientific, mathematical, engineering, practical, all of them.) There are a few places where they're useful, but not very many.
An interesting aspect is that while all Turing-complete systems ultimately exhibit this behavior, they do not all seem to do so equally. Lambda calculus, for instance, has all the same chaos in some sense, if for no other reason than that you can always implement the Game of Life directly in lambda calculus. Yet lambda calculus also admits human understanding; I'd pick it as one of the most human-friendly Turing-complete formalisms, and turning it into an actually useful programming language is on the order of a "useful learning exercise". Turing machines are somewhere in between: we can work with them, but they do tend to explode into chaos relatively easily. Cellular automata, by contrast, require extreme human effort just to get halfway tame. There's an interesting study to be done here about why that is, which is currently well beyond me.
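As a small illustration of that human-friendliness, here are Church numerals transcribed directly into Python lambdas (a standard exercise, not anything from the article):

```python
# Church numerals: numbers encoded as functions. Even "compiled" to
# Python, the encoding stays readable and directly manipulable.
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
add  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def to_int(n) -> int:
    # Decode a numeral for inspection.
    return n(lambda k: k + 1)(0)

two = succ(succ(zero))
print(to_int(add(two)(two)))  # prints 4
```

Doing the same construction inside the Game of Life takes enormous machinery; here it takes five lines.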
Looking at Wolfram's Conjecture another way: perhaps this is somehow not correct, and we're just not looking at CAs correctly; if we did, they might become more useful than Turing machines or perhaps even lambda calculus. Again, I just don't see the evidence for this.
He’s been at this for decades without a single prediction of any phenomenon in nature. His theories can’t even reproduce known physical phenomena correctly. So you can be dumb and still correctly conclude that it’s basically numerology.
Whilst it is not framed as such, there are some interesting high-level takeaways in here for AI, particularly around, e.g., program synthesis/induction and ARC.
I think Wolfram could contribute a lot if he shifted his focus towards these domains.
This article is fascinating! Some of the concepts in it, like _computational irreducibility_, seem to be core concepts that live in a ___domain so low-level I'm not sure where to define its bounds; at the same fuzzy level as "evolution".
Even if you are a hater, this is a very interesting and information-rich overview.
Wolfram is relentlessly litigating his place in history and this piece is no exception but I enjoyed the almost manic amount of detail. Long read indeed.
> if we want to get closer to the study of the pure phenomenon of innovation
Innovation in the real world is often driven by the usual incentives of capitalism, like the basic need to out-compete competitors by improving quality or lowering costs. I do not really think the Game of Life serves as a model for innovation in the real world; it might serve as a model of the pure phenomenon of innovation. In the real world, even pure math research is motivated by applied math and by monetary factors like NSF grants, etc.
I agree. Going back further, we see that the drivers of innovation were God providing for man (Bible), people providing for their needs/wants at a societal level (most systems), individuals pursuing what they enjoy, seemingly random acts that go against reason, etc.
This was enacted via specific processes (the human brain) using resources and the environment within constraints, sometimes with surplus. It involved local and global phenomena that were both dependent and independent.
In short, it's nothing like cellular automata or most simplistic models of the world. We'd have to model the above within this world's laws to know what drives innovation among humans in this world.
I'm not sure what you're saying is inconsistent with parent's comment: you're both talking about different points in time in humanity's history – the snark seems unwarranted.
Calling the parent myopic while saying "Also, humanity is hardly an economy of capitalism any more. More similar to oligarchical capital feudalism." is deserving of snark.
In the context of 5 centuries of capitalism we live in an age of unprecedented equality, freedom, and opportunity.
Further, observed over the last 5,000 years of human civilization, the stupendous rise of innovation under capitalism is so dramatic that implying it is not responsible for the wealth we enjoy today beggars belief.
Wolfram's problem is mindset, IMO. There's a certain kind of highly intelligent mind that, because of fame, money, habit, or personality, becomes intellectually lazy. Instead of doing the proofs, research, careful discussion, and reconceptualizing that lead to useful theories and conclusions, they jump straight to the end, because they think they're smart. This produces some impressive-sounding theory, which upon closer inspection proves to be too vague to actually be testable, if not to contain outright errors. People I put in this group include Wolfram and his computational theories, Langan and his Cognitive-Theoretic Model of the Universe, and the Italian school of algebraic geometry. There's an interesting inversion here: a reasonable person thinks that someone who makes correct theories is intelligent; these people think they are intelligent, and therefore that their theories are correct.
My impression is different: I think Wolfram is plenty hard-working, but his internal mental life is so full of fantasies of unlimited success and fame that he cannot manage an impartial-enough evaluation of his own work output.
"your impression", is true only in the Ruliad. Ask people who knew him back in the 80's... he never worked on anything on his own that has amounted to anything. He is a real life crank, crackpot and all around charlatan.
Mathematica is the product of 30 years of work on Macsyma and Maple systems at MIT and elsewhere, which he cracked its codebase, usurped and sold to an unsuspecting academic community back when only the likes of Bill Gates would do such as thing.
He also missed the entire LLM wave because he refused to open source WolframAlpha codebase and work with others in the NLP community.
It's unfortunate that people seem prejudiced against Wolfram at this point, when the field still has a lot to explore and learn. Cellular automata are so powerful, yet so conceptually simple, that it wouldn't surprise me if they did hold revelations about more fundamental concepts of the universe.
What if the fundamentals of the universe are so simple it'd shock us, we just haven't looked at it from the right perspective yet, and we're over-complicating it?
People should be able to separate the merits of the work from the worker but people didn't just randomly decide to not like Wolfram. He is a narcissist of the highest order.
But researcher? Does he actually research anything? He just claims lots of things. He is halfway to being a fiction author; it's just that he would never acknowledge it.
It’s more like a continuous eye roll since 2002 when he released _A New Kind of Science_.
I spent around 15 years working on stochastic lattice models. They can be amazing. They can also fail to capture the essence of the problem. Same with cellular automata.
They’re definitely not _new_. They weren’t new in 2002. I’ve always viewed Conway’s Game of Life as an interesting deterministic variant of an Ising model, and Ising models date back to the 1920s.
Most physicists I knew at the time looked at the book, shrugged, and kept working on what they were doing. I love lattice models, cellular automata, lattice fluid models, and the like. But they’re just one class of useful model.
And yet because Wolfram has a perpetual money machine called Mathematica, he’s got a huge megaphone to advocate for himself.
I’ll keep rolling my eyes. I don’t hate the guy, but he is just a little too into self-promotion for my taste.
> And yet because Wolfram has a perpetual money machine called Mathematica, he’s got a huge megaphone to advocate for himself.
Wolfram primarily posts to his blog and occasionally publishes a book. He’s not exactly buying a marketing blitz with all of his money. Wolfram Research itself is primarily focused on other things too.
I don’t know what it is about Stephen Wolfram that drives people crazy. Yes, he’s self-aggrandizing, but he’s hardly unique in that respect. Simply read past it, or roll your eyes and move on. But apparently there are more than a few people who can’t help themselves (even this thread is an example).
Not to weigh in against Wolfram one more time, but he's (finally?) renamed Mathematica to just "Wolfram". Why he would toss out years of brand-related goodwill is beyond me.
Have you opened Mathematica recently? Or visited the product page for Mathematica[1]? The only change has been branding the language itself as the “Wolfram Language”, where Mathematica is just one of their product offerings.
Mathematica is open on my computer as we speak (or, rather, now Wolfram.app). The "About" screen says "Wolfram 14.2". I have a seat on a site licence.
Mathematica (MMA) and the Wolfram Language (WL) used to be one and the same. But now a user could be using WL in a web-based notebook, through Wolfram Alpha, or even in SystemModeler.
The brand name “Mathematica” isn’t going anywhere, not after nearly forty years. It’s basically marketing being like “how do we communicate updates to WL as not just being updates to MMA?”.
> The brand name “Mathematica” isn’t going anywhere
But it’s not what it used to be. Now you don’t “run” Mathematica; you “access” it by running another program. It’s basically marketing weakening one brand to strengthen the other.
>Why he would toss out years of brand-related goodwill is beyond me.
I suspect that Wolfram has been the bigger name ever since Wolfram Alpha became a thing; I'm sure way more people interact with that than with Mathematica. Besides, as far as I can tell, it's still named and marketed as Wolfram Mathematica, so I'm not really sure where you got the idea it was renamed.