
No. Automatic differentiation is superior to the complex step method in every way.

The only reason to use the complex step method is if you're using legacy languages where it's difficult to implement dual numbers but you have good support for complex numbers. I don't think anybody should be using the complex step method in new applications.

Some reading ~ http://aero-comlab.stanford.edu/Papers/martins.aiaa.01-0921....
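To make the comparison concrete, here's a minimal sketch of both (mine, not taken from the linked paper): a hand-rolled dual number propagates the derivative exactly with no free parameter, while the complex-step estimate of the same function still needs a step size h. Function and numbers are made up for illustration.

    import cmath
    import math

    class Dual:
        # dual number a + b*eps with eps**2 == 0; .der carries the derivative
        def __init__(self, val, der=0.0):
            self.val, self.der = val, der
        def __add__(self, other):
            other = other if isinstance(other, Dual) else Dual(other)
            return Dual(self.val + other.val, self.der + other.der)
        __radd__ = __add__
        def __mul__(self, other):
            other = other if isinstance(other, Dual) else Dual(other)
            return Dual(self.val * other.val,
                        self.val * other.der + self.der * other.val)
        __rmul__ = __mul__

    def dual_sin(x):
        # sin lifted to dual numbers (chain rule built in)
        return Dual(math.sin(x.val), math.cos(x.val) * x.der)

    def f_dual(x):                    # f(x) = x*sin(x) + x on dual numbers
        return dual_sin(x) * x + x

    def f_complex(z):                 # the same f on complex numbers
        return cmath.sin(z) * z + z

    x0 = 1.3
    ad = f_dual(Dual(x0, 1.0)).der                   # no step size anywhere
    cs = f_complex(complex(x0, 1e-20)).imag / 1e-20  # complex step, needs h
    exact = math.sin(x0) + x0 * math.cos(x0) + 1.0
    print(ad, cs, exact)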




wtf

Automatic differentiation only works for the simplest functions for which you already know what the Taylor series looks like. For those cases, you might as well just hardcode derivative functions and the basic derivative rules (linearity and chain rule). It is not a general-purpose method.

For functions that you can't even express by a simple formula, you still have to rely on finite differencing.
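To sketch what I mean, central differencing of a black-box function (toy function and step size chosen just for illustration):

    import math

    def black_box(x):
        # stand-in for a function with no convenient closed form
        return math.sin(x) * math.exp(-x * x)

    def central_diff(f, x, h=1e-6):
        # O(h**2) truncation error; rounding limits how small h can usefully be
        return (f(x + h) - f(x - h)) / (2.0 * h)

    print(central_diff(black_box, 0.5))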

Don't call "legacy" anything that you don't understand.


The question was about automatic differentiation vs. the method in the article (the complex step method), which is essentially a defective variant of automatic differentiation with a spurious numerical parameter.

Automatic differentiation works for any composition of those ‘simplest functions’ you mention, which is quite a lot of stuff, including whole programs.
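To sketch the point (toy code, not any particular library's API): each primitive carries its own derivative rule, and the chain rule composes them through loops and branches like any other program.

    import math

    def d_sin(v, dv):
        # (value, derivative) pair for sin
        return math.sin(v), math.cos(v) * dv

    def d_mul(a, da, b, db):
        # product rule
        return a * b, a * db + da * b

    def program(x, dx=1.0):
        v, dv = x, dx
        for _ in range(10):                  # an arbitrary loop of primitives
            s, ds = d_sin(v, dv)
            q, dq = d_mul(v, dv, v, dv)      # v*v
            v, dv = s + 0.1 * q, ds + 0.1 * dq
        if v > 0:                            # branches work the same way
            return d_mul(v, dv, v, dv)       # v**2
        return -v, -dv

    value, derivative = program(0.7)
    print(value, derivative)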

Approximation methods have their place, sure. Sometimes they're good enough and sometimes it's all you can do. What does that have to do with anything?


Thanks for the link, btw. I'm in the process of digesting the paper.


I was objecting to the suggestion that the complex finite differencing scheme was worthless ("legacy") and automatic differentiation is all we should ever do from now on.


Can you comment on the space of functions which you know to be complex differentiable but for which you can't analytically evaluate/approximate the real derivative?

If so, I look forward to learning about it. If not, you were throwing stones from inside a glass house.


Sure, a simple example is the derivative of Riemann zeta for Re s < 1. Granted, usually you work with the logarithmic derivative which is a lot more tractable, but if for whatever reason you need the actual derivative, you're gonna have a lot of trouble with automatic differentiation, as the typical analytic continuation formulas involve some complicated improper integrals.
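If it helps, here's a quick check of that example, assuming mpmath is available: mpmath can evaluate zeta and its derivative in the strip, which gives a reference to compare a complex-step estimate against. The step size and evaluation point are my choices.

    from mpmath import mp, zeta, mpc, im

    mp.dps = 30
    s = mp.mpf("0.3")                        # a point with Re s < 1
    h = mp.mpf("1e-20")

    reference = zeta(s, derivative=1)        # mpmath's own zeta'
    complex_step = im(zeta(mpc(s, h))) / h   # complex-step estimate

    print(reference)
    print(complex_step)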

There are also functions that you only know by sampling (e.g. ocean temperatures) for which you assume smoothness. You need to pick an interpolation method, but sometimes you do not interpolate beyond the sampling points, because that's just making up numbers. When you're limited by your original sampling step size, you have little recourse but to compute derivatives by some finite differencing scheme.
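Something like this, with made-up numbers standing in for the sampled data: numpy.gradient takes central differences in the interior of the grid and one-sided differences at the ends, and the resolution is fixed by the original sampling.

    import numpy as np

    depth = np.linspace(0.0, 200.0, 21)        # sample locations fixed by the survey (m)
    temp = 20.0 * np.exp(-depth / 80.0) + 4.0  # hypothetical sampled temperatures (deg C)

    dtemp_ddepth = np.gradient(temp, depth)    # finite differences on the sampling grid
    print(dtemp_ddepth[:3])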



