I think I can explain a little, but the caveat is that it's a bit of the blind leading the blind, i.e. this isn't my field, but I work with people who are working on this, and they've talked to me a little about it.
Declarative programming is more or less orthogonal to differentiable programming. You're right that declarative as a paradigm leaves the implementation details up to the "compiler," so you can have arbitrary implementations created in response to one declared specification. Often that means the compiler can be tweaked under the hood to produce better results. But the thing is that those compiler changes and optimizations are just that: arbitrary. You can't "know" or "prove" anything about them, which means progress mostly depends on human creativity and intelligence.
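To make the declarative side concrete, here's a tiny sketch (SQLite via Python, just standing in for any declarative system; the table and data are made up):

    import sqlite3

    # Declarative: we state *what* we want; the engine's query planner
    # decides *how* to compute it (scan order, index use, join strategy).
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (customer TEXT, total REAL)")
    conn.executemany("INSERT INTO orders VALUES (?, ?)",
                     [("alice", 10.0), ("bob", 25.0), ("alice", 5.0)])

    query = "SELECT customer, SUM(total) FROM orders GROUP BY customer"

    # One declared specification...
    print(conn.execute(query).fetchall())

    # ...and whatever execution plan the engine happened to pick for it.
    # A different engine, version, or index could pick a different plan,
    # and nothing in the query itself tells you which one you'll get.
    print(conn.execute("EXPLAIN QUERY PLAN " + query).fetchall())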
That is fine and all, but differentiable programming is asking/answering the question: Ok, but what if we COULD prove something here? What if the units of computation could be guaranteed to have certain mathematical properties that make them isomorphic to other formalisms?
And why do we care about that?
Well, in math what happens is that some people will start with their favorite formalism, prove a bunch of stuff about it, and figure out how to do interesting calculations or transformations on it. Like "Ah, well if you have a piece of data that conforms strictly to the following limitations, then from that alone we can calculate this very interesting answer to a very interesting question."
But a lot of the time those mathematical paradigms can't talk to each other -- in programming terms, their APIs just aren't compatible. Like raster vs. vector images: both image storage/display paradigms are "about" the same thing, but a lot of the operations you can do on one don't even make sense to try on the other, and our ability to translate back and forth between them is a little wonky. Math formalisms are a bit like that, a lot of the time.
So it's very interesting in math world when someone proves that a formalism in one paradigm can be transformed perfectly into a formalism from another paradigm. All of a sudden all the operations available in either paradigm become available in both paradigms because you can always just take your data, transform it, do the operation, then transform the answer back.
(Side note: this is why some people are excited about Category Theory: it's like a mathematical Rosetta Stone. I.e. a lot of things can be translated into and out of category theory, and in turn all those things that were previously in separate magisteria become interchangeable.)
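To make that "transform, do the operation, transform back" move concrete, here's a small Python/numpy sketch of one classic instance, the convolution theorem (the arrays are just random test data):

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(size=256)
    k = rng.normal(size=256)

    # Paradigm A: convolve directly in the "time domain".
    direct = np.convolve(x, k, mode="full")

    # Paradigm B: hop to the frequency domain, where convolution becomes
    # plain pointwise multiplication, then hop back.
    n = len(x) + len(k) - 1                      # pad so the hop is lossless
    via_fft = np.fft.irfft(np.fft.rfft(x, n) * np.fft.rfft(k, n), n)

    # Same answer either way -- the operation was just easier over there.
    print(np.allclose(direct, via_fft))          # True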
Ok so, back to differentiable programming. If you suddenly have a way to conceive of your program / unit of computation as a differentiable function, then right off the bat you get access to all the tools ever created for calculus. The optimization thing where you find the gradient of the program and follow it toward some target is just one of the things. You also get a huge suite of tools that let you enter the world of provably correct programs, for example.
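Here's a minimal sketch of that gradient-following idea in JAX (the toy "program" and numbers are mine, purely for illustration):

    import jax

    # A small "program": iterate a damped update a fixed number of times,
    # then measure how far the final state lands from a target. It doesn't
    # look like a neural network, but it's still a differentiable function
    # of its parameter, so jax.grad gives the exact gradient of the whole
    # thing end to end.
    def program(param, target=2.0, steps=50):
        x = 0.0
        for _ in range(steps):           # ordinary control flow, traced through
            x = x + 0.1 * (param - x)
        return (x - target) ** 2         # squared miss distance

    grad_fn = jax.grad(program)

    # Follow the gradient downhill until the program hits its target.
    param = 0.0
    for _ in range(200):
        param = param - 0.5 * grad_fn(param)

    print(param, program(param))         # param settles near 2.01, loss near 0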
You also get access to all the tools of all the math that can be translated to and from calculus, which is... a lot of it. I wish I knew more about math so I could rattle off the 100 ways that would help, but I can't, so instead I'll just say that I think it would be a game-changer for creating optimized, robust systems that work way, way better than our current tech.