Whilst what you say is accurate, it has nothing to do with the comment you are replying to, which was specifically about the shape of loss functions in deep learning and recent research on how that shape affects their optimization.
The grandparent comment asked, "In short, are the Krylov methods (or perhaps other approaches) a bit too precise in their attraction to critical points?" to which you replied, "Exactly. Saddle points are like a magnet for them!" There's little room for ambiguity here, and it is categorically false. Your linear solver has nothing to do with saddle points, and Krylov methods are not attracted to them.
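For what it's worth, the distinction is easy to show in a toy sketch (the 2-D objective and all names here are my own illustration, not from either comment): on f(x, y) = x^2 - y^2, a single exact Newton step lands on the saddle at the origin from an arbitrary start, whether the inner system H d = grad f is solved directly or by a Krylov iteration. The attraction to the critical point comes from the Newton update itself, not from which linear solver computes the step.

```python
import numpy as np

# Toy objective with a saddle at the origin: f(x, y) = x^2 - y^2.
def grad(z):
    return np.array([2.0 * z[0], -2.0 * z[1]])

H = np.diag([2.0, -2.0])  # constant Hessian, indefinite (eigenvalues +2, -2)

def cg(A, b, iters=2):
    """Plain conjugate gradient, a basic Krylov method. CG is only
    guaranteed for SPD systems; on this 2x2 indefinite example it
    happens to terminate exactly in two iterations."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    for _ in range(iters):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return x

z0 = np.array([1.0, 2.0])                   # arbitrary starting point
step_krylov = cg(H, grad(z0))               # Krylov solve of H d = grad f(z0)
step_direct = np.linalg.solve(H, grad(z0))  # direct solve of the same system

print(z0 - step_krylov)  # Newton update lands on the saddle at (0, 0)
print(z0 - step_direct)  # identical result: the solver choice is irrelevant
```

The point of the sketch: swapping the direct solve for the Krylov solve changes nothing, because both just solve the same linear system; it is the Newton fixed-point structure that targets critical points of every kind, saddles included.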