TLA+ is a language that allows you to specify your system at any level of detail. As such, it is more expressive than any programming language is or could ever be.
TLA+ does indeed have a model checker, but model checkers don't "try many inputs." They check the entire state space, covering potentially infinitely many executions.
> TLA+ is a language that allows you to specify your system at any level of detail. As such, it is more expressive than any programming language is or could ever be.
I don't see that this necessarily follows? In most programming languages it's normal to have high level functions that describe your program in broad strokes and lower-level functions that describe your program in more detail. If you're talking about the ability to check aspects of program correctness without necessarily implementing an executable program, several languages support a technique of using "undefined" or similar to typecheck a "program" that can't actually run, or maybe can't even be built.
It does follow from the need for arbitrary nondeterminism (basically, the unconstrained use of existential quantifiers) for arbitrary-level specification, and the absolute inability to compile/interpret such uses in any feasibly efficient manner. In TLA+ you could describe an algorithm that "somehow" performs some operation, without specifying how. E.g. in TLA+ you can say "this component somehow sorts a list efficiently," if the particular sorting algorithm is not central to what you want to specify.
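As a minimal sketch of that idea (all names here are hypothetical, not taken from any particular spec): `Sort` below says only that the result is *some* sorted permutation of the input. No algorithm is given, and the `CHOOSE` ranges over an infinite set of sequences, so no tool could compile it into an efficient executable -- which is exactly the point.

```tla
---- MODULE SomehowSort ----
EXTENDS Naturals, Sequences

IsSorted(t) == \A i, j \in 1..Len(t) : (i < j) => (t[i] <= t[j])

(* t is a permutation of s: some bijection on positions maps s onto t. *)
IsPerm(s, t) ==
    /\ Len(s) = Len(t)
    /\ \E f \in [1..Len(s) -> 1..Len(s)] :
        /\ \A i, j \in 1..Len(s) : (f[i] = f[j]) => (i = j)
        /\ \A i \in 1..Len(s) : t[i] = s[f[i]]

(* "Somehow" sort: any sorted permutation will do. Nothing here says
   HOW the sorting is performed, and this cannot be compiled.         *)
Sort(s) == CHOOSE t \in Seq(Nat) : IsPerm(s, t) /\ IsSorted(t)
====
```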
The requirement that descriptions of computation be mechanically translated into something you can run is an extremely constraining one (both theoretically and in practice -- most high-level descriptions of computational systems cannot be mechanically turned into executables). If you remove it, you get something that is not a programming language, but that is strictly more expressive. So, if all specifications of computations (programs) in your language can be interpreted/compiled into an executable, it necessarily follows that it is less expressive than TLA+.
While programming languages could add constructs for such specifications (contracts/dependent types), they cannot compile them into an executable. This means you get a language that really contains two separate languages (and they must be separate for compilation to work): one for nondeterministic specification and one for deterministic specification, with only the latter used to describe actual computation. The result is a language that is both an exceptionally complex specification language and an exceptionally complex programming language -- basically the worst of both worlds. That such languages have only ever been used for software substantially smaller and simpler than jQuery is the empirical manifestation of that (not that such use cases aren't important, but that you can only effectively specify tiny programs in full does imply that their expressivity is limited).
> If you remove it, you get something that is not a programming language, but that is strictly more expressive. So, if all specifications of computations (programs) in your language can be interpreted/compiled into an executable, it necessarily follows that it is less expressive than TLA+.
> While programming languages could add constructs for such specifications (contracts/dependent types), they cannot compile them into an executable. This means you get a language that really contains two separate languages (and they must be separate for compilation to work): one for nondeterministic specification and one for deterministic specification, with only the latter used to describe actual computation. The result is a language that is both an exceptionally complex specification language and an exceptionally complex programming language -- basically the worst of both worlds.
I'm not convinced. Using "phantom types" to express additional specifications that don't get compiled is a fairly common/widespread technique (indeed any use of generics in Java could be considered an example). There is a lot of overlap between the things you want to do with specifications and the things you want to do with executable code (composition, structuring, reuse), and in a lot of simple cases (plumbing rather than logic - but I'd estimate that's the majority of business code) one can be inferred directly from the other. So to my mind there's a lot of value to be had from a language that integrates specification and executable and is able to express both.
I'm not talking about phantom types/ghost variables, and I don't mean "not compiled" but "cannot possibly be compiled" -- not compilable even in principle. You can easily write non-computable specifications in TLA+, and it's useful! (e.g., to state that a certain program decides halting in certain circumstances). You can also very easily define things like membership in NP with angelic nondeterminism. Many interesting and useful things you can say about a computation cannot themselves be compiled into computations.
> There is a lot of overlap between the things you want to do with specifications and the things you want to do with executable code
Sure. Programs are a special case of specifications. TLA+ easily expresses both. The tradeoff is this: is every specification of an algorithm compilable or not? While TLA+ can easily specify algorithms at any level of determinism, not every specification of an algorithm is compilable -- i.e., some may be too nondeterministic. For example, even Quicksort is not compilable. Quicksort implementations that can run (i.e., those expressed as programs) do not actually specify Quicksort, but particular refinements of it (see https://pron.github.io/posts/tlaplus_part3).
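Loosely following the idea in the linked post, here is a hedged TLA+ fragment (hypothetical names; the helper `SubseqPerms` is assumed, not defined) in which one Quicksort step picks *some* pending interval and *some* pivot position, both left nondeterministic. Any concrete pivot rule -- "always the first element," "median of three" -- merely constrains these choices, i.e., refines this spec:

```tla
VARIABLES seq, pending  \* pending: set of <<lo, hi>> intervals still to sort

(* SubseqPerms(s, lo, hi) is an assumed helper: the set of sequences
   equal to s outside lo..hi and a permutation of s within lo..hi.   *)

Step ==
    \E iv \in pending :
        LET lo == iv[1]  hi == iv[2] IN
        \E p \in lo..hi :   \* ANY pivot position is allowed
            /\ seq' \in { t \in SubseqPerms(seq, lo, hi) :
                            \A i \in lo..(p-1), j \in p..hi : t[i] <= t[j] }
            /\ pending' = (pending \ {iv}) \cup
                            { e \in {<<lo, p-1>>, <<p+1, hi>>} : e[1] < e[2] }
```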
> So to my mind there's a lot of value to be had from a language that integrates specification and executable and is able to express both.
As I said above, TLA+ easily expresses both (in fact, there are research tools that compile C and Java bytecode to TLA+). But to guarantee compilability you must demarcate the executable part (a language like, say, Agda, does this with a formal separation between types and "terms"; Java+JML does this with a formal separation between code and specification). As for languages that are made of two sub-languages -- one that affords arbitrary nondeterminism and one that can always be compiled -- sure, that has value, but that value comes at an extremely steep price. Sometimes that price may be worth it; sometimes not. Not to mention that even those languages rarely achieve TLA+'s strengths, because very often TLA+ is used not to specify a particular executable, but a whole system (which includes multiple executables and their environment). In those very common cases the ability to compile becomes pure cost.
What TLA+ shows is the great value in giving up the ability to compile every specification -- an extremely expressive specification language that can be easily learned and used. I've written someplace else (https://www.reddit.com/r/tlaplus/comments/abi3oz/using_tla_t...) that TLA+ might well be the first rich formal logic (as opposed to programming languages that have no quantification) in the history of formal logic that is actually used by practitioners whose job isn't to use formal logic all day (or who haven't been trained in formal logic for a very long time) -- in effect, the first to achieve formal logic's stated goal as a simple tool for practitioners. A goal that was abandoned by logicians very early on.
In TLA+, you could write a realizable implementation of Quicksort, but also specify Quicksort itself and show that any implementation works. You could do the same in, say, Agda (or Java+JML), but at a much higher cost. TLA+'s power is in being extremely expressive, and at the same time extremely simple (never mind Agda - it is simpler than Python).
TLA+ lets you work at very different scales. I have one spec I'm working on where the "___domain model" is multiple nodes trying to reach consensus, where some fraction of the nodes are adversarial and intentionally trying to subvert the others. With that you can ask questions like "what is the maximum fraction that can be adversarial while still guaranteeing these specific global system properties?"
Another one involved modeling several microservices, each of which had its own team and codebase. That kind of scale is really hard to do at the lines-of-code level of verification.
Normal testing feeds selected inputs to the software and checks for the corresponding expected outputs.
QuickCheck feeds random values to the software and checks that stated properties hold for each.
TLA+ takes a model of the system (not the software), explores the entire state space, and looks for anything that violates the given logical predicates. It is a different layer of abstraction.
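As a toy illustration (a hypothetical spec, not one from this thread): TLC, TLA+'s model checker, computes every state reachable from `Init` via `Next` and checks the invariant in each one, rather than sampling inputs:

```tla
---- MODULE Counter ----
EXTENDS Naturals
VARIABLE x

Init == x = 0

Next == \/ (x < 5) /\ (x' = x + 1)
        \/ (x = 5) /\ (x' = 0)

(* TLC enumerates all reachable states -- here x \in 0..5 --
   and reports any state that violates this invariant.        *)
Inv == x <= 5
====
```

Running TLC on this with invariant `Inv` exhaustively visits the six reachable states; a violation would come back with a concrete counterexample trace, not a failing input.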
The idea that "you don't test a use-case, you test many inputs that may work differently" is solved by fuzzing/test factories.
So TLA+ without the fuzzing leaves what, temporal stuff? Did I miss anything?