Perhaps I don't understand it fully, but my view of dynamic programming is that you can always get the same benefits using a caching function and just writing your program naturally.
This approach has worked for me for a variety of DP problems I've tried, and I've always wondered if there are functions for which the simple memoization trick does not work (although I cannot figure out why that would be the case). Also, the caching code is much easier to read and requires no separate step of building up some auxiliary array and then backtracking through it to get the final result.
Can someone provide me a counter-example and some intuition for why caching doesn't work in that case?
(Incidentally, in most functional languages, you can write a generic memoization function once and never have to write it again. In Python, you can even write it as a decorator, so making a function cacheable is as simple as prepending @memoize to its definition.)
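For concreteness, a minimal sketch of such a decorator (the memoize/fib names are just illustrative; Python's built-in functools.lru_cache does the same job):

import functools

def memoize(f):
    cache = {}

    @functools.wraps(f)
    def wrapper(*args):
        # Compute each argument tuple only once, then reuse the stored result.
        if args not in cache:
            cache[args] = f(*args)
        return cache[args]
    return wrapper

@memoize
def fib(n):
    # Written "naturally" as the recursive definition; the cache makes it linear time.
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(100))  # answers immediately instead of taking exponential time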
I always considered memoization a type of dynamic programming, rather than something different from dynamic programming. Wikipedia at least uses this same nomenclature -
Anyways, there are cases where memoization takes orders of magnitude more memory. Consider a problem where calculating F(x, y) depends on F(x - 1, y) and F(x, y - 1). (Recursively calculating n choose k is one such example.) Let's say the base cases F(x, 0) and F(0, y) are non-recursive and take O(1) time. If you use memoization, finding F(n, n) takes O(n^2) memory. But if you fill an array bottom-up, you can calculate all the F(1, y), then all the F(2, y), and once you have all the F(2, y) you no longer need the memory for the F(1, y). Et cetera. So you only need O(n) memory.
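A rough sketch of that rolling-row idea (the function name and the lattice-path framing are just illustrative; with these base cases F(n, n) equals C(2n, n)):

def f_tabulated(n):
    """Compute F(n, n) keeping only one row at a time: O(n) memory."""
    row = [1] * (n + 1)              # row 0: F(0, y) = 1 for all y
    for x in range(1, n + 1):
        new_row = [1]                # F(x, 0) = 1
        for y in range(1, n + 1):
            # F(x, y) = F(x, y - 1) + F(x - 1, y)
            new_row.append(new_row[y - 1] + row[y])
        row = new_row                # the previous row is no longer needed
    return row[n]

# A memoized recursive version would retain all O(n^2) intermediate F(x, y) values in its cache.
print(f_tabulated(10))  # 184756 == C(20, 10)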
Dynamic programming is usually more efficient. The two will be on the same order, but the constant factor for dynamic programming is usually lower. However, if all that matters is that you're not getting into exponential territory, memoization should be fine.
Agreed. The pseudo-code had it in a more readable style. If it must be compressed onto one line then
a = 0; b = (0, 0); c = float('infinity')
is preferable if a, b, and c are logically unrelated compared to
a, b, c = 0, (0, 0), float('infinity')
especially as the list gets longer. Why should the reader first count to find where variable `foo' is on the left-hand side before then counting the same number of places on the right?
I agree about compound statements, but I think most people will use the tuple-assignment style, especially when it's simply initializing variables. There is no reason to expand it beyond one line.
There is a reason: it reduces the reading distance from variable to initial value. It also reduces the effort needed to map them to each other. This is especially true when you are using parentheses, which change the meaning of the commas.
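A small illustration of that point (variable names are arbitrary): the comma builds a tuple inside the parentheses but separates assignment targets and values outside them, so the reader has to track nesting to pair each name with its value.

a, b, c = 0, (0, 0), float('infinity')   # b gets the pair (0, 0)
d, e = (0, 0), 1                         # d gets the pair (0, 0), e gets 1
# d, e = (0, 0)                          # by contrast, this unpacks: d = 0, e = 0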
Even more interesting, there are now some compiler optimization algorithms which can theoretically do some of this automatically. Look up incrementalization on CiteSeerX, etc.