I mostly agree, but I think it fundamentally relies on a good query plan optimizer to figure out the proper join order, rewrite predicates, etc.
So I think I take the position that the machine is not perfect [1], and doesn't always provide a perfect abstraction of a fast declarative answering interface. Sometimes you really do need to tell it how to access the data. This is why, for example, some SQL query engines let the user provide join hints.
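As a concrete illustration of a join hint (SQL Server syntax; table and column names are made up for the example), the user can override the optimizer's choice of join strategy directly in the query:

```sql
-- Force a hash join instead of letting the optimizer pick
-- the join algorithm (nested loops, merge, hash, ...).
SELECT o.user_id, o.amount
FROM orders AS o
INNER HASH JOIN users AS u
  ON o.user_id = u.user_id;
```

Without the HASH keyword, the engine is free to choose; with it, you're telling the machine how to access the data.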
That said, I do agree that procedural queries are mostly a quick fix, and not very future-proof (against future improvements to the query engine).
And FWIW, Spark's DataFrame API [2] is not actually that procedural; it lets you specify something that feels like a direct query execution plan, but actually still gets optimized underneath.
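A rough sketch of what I mean (assumes a SparkSession named `spark` and hypothetical Parquet paths; not a definitive example):

```scala
// The method chain *reads* like an execution plan...
val orders = spark.read.parquet("/data/orders")
val users  = spark.read.parquet("/data/users")

val result = orders
  .filter($"amount" > 100)            // "filter first"...
  .join(users, Seq("user_id"))        // ..."then join"
  .select("user_id", "amount", "name")

// ...but Catalyst still optimizes it: the plan printed here
// (predicate pushdown, join reordering, column pruning) can
// differ from the order in which the calls were written.
result.explain(true)
```

So the DataFrame API gives you procedural-feeling ergonomics while keeping the declarative contract underneath.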
[1] http://www.vldb.org/pvldb/vol9/p204-leis.pdf
[2] https://spark.apache.org/docs/latest/api/scala/index.html#or...