> Locality: traditional runtime analysis focuses on Big O, the number of operations performed. However, in modern computing, moving data around in memory can be very time-consuming.
I need to nitpick here... Big O notation is a way to describe growth rates of functions. You can count data movements (or anything else) with Big O.
Moreover, "moving data around in memory can be very time-consuming" means that in the end, it is still about time, not about memory.
So the correct way would be to translate memory access into time, but that means modelling the memory hierarchy, modelling caches in general, and perhaps ultimately modelling the specific caching strategies in use.
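To make that concrete, here is a minimal sketch (my own illustration, not from the course): the two loops below perform exactly the same N² additions, so their Big O operation counts are identical, yet the traversal order alone usually changes the running time by a large factor because of how the accesses map onto cache lines.

```c
#include <stdio.h>
#include <time.h>

#define N 2048  /* hypothetical size, chosen so the 32 MB matrix exceeds typical caches */

static double a[N][N];

int main(void) {
    /* Fill the matrix so the sums are non-trivial. */
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            a[i][j] = 1.0;

    /* Row-major traversal: consecutive accesses are adjacent in memory,
       so each cache line fetched is fully used before moving on. */
    clock_t t0 = clock();
    double sum_row = 0.0;
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            sum_row += a[i][j];
    double row_secs = (double)(clock() - t0) / CLOCKS_PER_SEC;

    /* Column-major traversal: consecutive accesses are N doubles apart,
       so almost every access touches a different cache line. */
    clock_t t1 = clock();
    double sum_col = 0.0;
    for (int j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
            sum_col += a[i][j];
    double col_secs = (double)(clock() - t1) / CLOCKS_PER_SEC;

    printf("row-major: %.3f s, column-major: %.3f s (sums %.0f / %.0f)\n",
           row_secs, col_secs, sum_row, sum_col);
    return 0;
}
```

Both traversals are Θ(N²) in operations; only the memory access pattern, and therefore the time spent waiting on memory, differs.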
The limits on memory access are physical, as illustrated by Grace Hopper's famous video about nanoseconds and the speed of light.
Should algorithms always assume they need to model caches, since caches are never going away? When determining computational complexity, every step, including a memory access, is treated as taking the same unit of time, but real memory doesn't behave that way and never will.
Yes, the memory hierarchy is discussed in the course, and algorithms are shown that are designed to take account of this. Older algorithms tended to only consider computation time, not memory access time.
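One standard way such algorithms account for the hierarchy is loop blocking (tiling); what follows is a hedged sketch of that general idea, not necessarily the specific algorithms the course presents. A matrix transpose is rewritten to work on B×B tiles so that the cache lines brought in for the destination are reused before being evicted, while the operation count stays O(N²).

```c
#include <stdio.h>

#define N 2048
#define B 64   /* hypothetical tile size; in practice tuned to the cache */

static double src[N][N], dst[N][N];

/* Naive transpose: dst is written with stride N, so writing one row of
   src scatters across N different cache lines of dst. */
void transpose_naive(void) {
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            dst[j][i] = src[i][j];
}

/* Tiled transpose: the same assignments, grouped into B x B blocks so
   the few cache lines belonging to one block stay resident while that
   block is processed. */
void transpose_tiled(void) {
    for (int ii = 0; ii < N; ii += B)
        for (int jj = 0; jj < N; jj += B)
            for (int i = ii; i < ii + B; i++)
                for (int j = jj; j < jj + B; j++)
                    dst[j][i] = src[i][j];
}

int main(void) {
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            src[i][j] = i * (double)N + j;

    transpose_naive();
    transpose_tiled();
    printf("spot check: dst[1][0] = %.0f (expected 1)\n", dst[1][0]);
    return 0;
}
```

Cache-oblivious variants achieve a similar effect recursively without knowing B, but the blocked version is the simplest way to see how the memory hierarchy changes an algorithm's design even though its Big O operation count does not change.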
Candid question: isn't the growth rate of functions the derivative of the Big O complexity instead?
O(1) does not mean that the growth rate is constant. It means that the number of operations is constant; since its derivative is zero, the growth is nil.
O(1) means the number of operations is bounded. f = O(g) measures growth in the sense that f/g = O(1), i.e. g grows quickly enough to cancel out any tendency of f to go to infinity.
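For reference, the standard formal definition (textbook material, not quoted from the thread) makes both readings precise and shows that no derivative is involved:

```latex
% Big O as a bound on growth (assuming g(n) > 0 for all large n):
\[
  f(n) = O\bigl(g(n)\bigr)
  \;\iff\;
  \exists\, C > 0,\ n_0:\ \forall n \ge n_0,\ |f(n)| \le C\, g(n)
  \;\iff\;
  \limsup_{n \to \infty} \frac{|f(n)|}{g(n)} < \infty .
\]
% In particular, f(n) = O(1) only says that f is eventually bounded
% by some constant C; it need not be constant itself:
\[
  f(n) = O(1) \;\iff\; \exists\, C > 0:\ |f(n)| \le C \ \text{for all large } n .
\]
```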