It’s a pretty bad system, since it takes O(n^2) time to produce n integers, but ¯\_(ツ)_/¯. Haskell avoids the extra cost through immutable self-reference instead of creating a new generator each time.
EDIT: Wrong, the Haskell code is linear. See child comments.
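For reference, here's a minimal Haskell sketch of the contrast. The Python generator code isn't quoted in this thread, so the "new generator each time" shape below is a reconstruction, and the names are ours:

```haskell
-- Self-referential, shared list: `map` consumes the very cells it has
-- already produced, so each new element is one addition on the previous
-- one. Linear in the number of elements demanded.
intsShared :: [Integer]
intsShared = 1 : map (+1) intsShared

-- Re-creating the source at every level (analogous to spawning a fresh
-- generator each time): each recursive call builds its own inner list,
-- so the nth element gets threaded up through n-1 layers of `map`.
-- Quadratic overall, at least when compiled without optimisations that
-- could re-introduce sharing.
intsFresh :: Integer -> [Integer]
intsFresh start = start : map (+1) (intsFresh start)
```

`take 5 intsShared` and `take 5 (intsFresh 1)` both give `[1,2,3,4,5]`; only the amount of work differs.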
The extra cost is not in the recursive calls, of which there is only one per returned number. The cost is in producing a yielded value n by starting with 1 and adding 1 to it (n-1) times. The given Haskell code:
ints = 1 : map (+1) ints
has the exact same problem, and it's just as quadratic. It might have a somewhat better constant factor, though (even apart from Python being interpreted), because there are fewer function calls involved.
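To spell out the cost model being claimed here (an illustration of the argument, not code from the thread; the names are ours): if each yielded value n really were rebuilt from 1 independently, producing n values would take 1 + 2 + … + (n-1) additions, i.e. O(n^2):

```haskell
-- Rebuild the nth value from scratch: n-1 additions per value.
nthInt :: Int -> Integer
nthInt n = foldl (+) 1 (replicate (n - 1) 1)

-- Producing the first n values this way really is quadratic.
intsRecomputed :: [Integer]
intsRecomputed = map nthInt [1 ..]
```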
Interesting! My intuition was wrong. I neglected to fully appreciate that the list is memoised.
What's happening, if I'm not mistaken, is that the unevaluated tail of the list is at all times a thunk that holds a reference to the cons cell holding the previous list item. Hence this is more like `iterate (+1) 1` than it seems at first glance.
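For comparison, the Prelude defines `iterate` essentially as `iterate f x = x : iterate f (f x)`, so the two spellings build the same chain of one-addition-per-cell thunks (names ours):

```haskell
ints1, ints2 :: [Integer]
ints1 = 1 : map (+1) ints1   -- each tail thunk references the cell just built
ints2 = iterate (+1) 1       -- each next cell applies (+1) to the previous value
```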
I am fairly certain lists in Haskell are just ordinary singly linked lists, with each cell pointing to the next item; there's no need to maintain a reference to the previous cell. The last cell in any infinite list (or any finite list that hasn't been fully evaluated) will be an unevaluated thunk, but that's not really observable to the program/programmer.
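One way to see the memoisation directly is to count the additions with `Debug.Trace` (a sketch; the instrumentation and trace message are ours):

```haskell
import Debug.Trace (trace)

-- Each (+1) thunk announces itself exactly once, when its cell is
-- first forced; after that the evaluated cell is reused in place.
ints :: [Int]
ints = 1 : map (\x -> trace "+1" (x + 1)) ints

main :: IO ()
main = do
  print (take 10 ints)  -- forces cells 1..10: "+1" appears nine times (on stderr)
  print (take 10 ints)  -- already evaluated: no further "+1" lines
```

Nine additions for ten elements, and none on re-traversal: the list is computed once, linearly.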