Decent, fast, incremental, aggressive, safe GC is hard.
Parallel processing, especially across multiple CPUs and multiple boxes, is hard.
Mixing the two can lead to, well, let's just call them "undesirable interactions of features".
The specific example he gives, inserting node B between nodes A and C, is easily solved by never letting the only pointer to C live in a register: set B.NEXT from A.NEXT before setting A.NEXT to point to B, so C stays reachable from the heap throughout.
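Concretely, the ordering looks something like this; the NODE struct and the INSERT-AFTER name are my own illustration, not anything from his post:

    ;; A minimal sketch of the insertion order described above,
    ;; using a hypothetical singly linked NODE struct.
    (defstruct node value next)

    (defun insert-after (a b)
      "Insert node B directly after node A."
      ;; Step 1: B.NEXT <- A.NEXT, so the old successor (C) stays
      ;; reachable from the heap the whole time and never exists
      ;; only in a register or temporary.
      (setf (node-next b) (node-next a))
      ;; Step 2: A.NEXT <- B, completing the splice.
      (setf (node-next a) b)
      b)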
But that's not the point. The point is that despite some fantastic advances in computing languages and runtime systems, sometimes we still have to think about these things.
His concern is valid. While I think we should never go back to C++ and give up memory safety, he makes a good point: his languages are not expressive enough to let the programmer do what he wants.
Maybe more languages should have value types, like C# does. Maybe they should also have linear types and/or uniqueness types.
Lots of mechanisms exist to enable aggressive (and scalable) memory management. There is no reason why we should just accept the current state of the art.
His languages are Ruby and Common Lisp. Dunno about Ruby, but Lisps give you plenty of control over GC, up to and including turning it off. Usually, though, the programmer knows better and reaches for something portable: object pools, fixed containers in static memory areas, and destructive operations instead of fresh consing. The less portable options include weak pointers, static arrays, weak vectors, weak hash tables, and finalizers, the last of which is a callback mechanism that lets you register a function for Lisp to invoke when the object it is associated with is about to be garbage collected.
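Here is a rough SBCL-flavoured sketch of two of those mechanisms, weak pointers and finalizers; the demo function is my own illustration, built on the SB-EXT calls MAKE-WEAK-POINTER, WEAK-POINTER-VALUE, FINALIZE and GC.

    ;; Weak pointers and finalizers in SBCL (symbols live in SB-EXT).
    (defun weak-and-finalizer-demo ()
      (let* ((obj (make-array 1024))                 ; some throwaway object
             (weak (sb-ext:make-weak-pointer obj)))
        ;; Register a callback to run once OBJ has become garbage.
        ;; The closure must not capture OBJ itself, or it would keep
        ;; the object alive forever.
        (sb-ext:finalize obj (lambda () (format t "~&object reclaimed~%")))
        (print (sb-ext:weak-pointer-value weak))     ; still reachable here
        (setf obj nil)                               ; drop the strong reference
        (sb-ext:gc :full t)
        ;; After a full collection the weak pointer may now return NIL
        ;; and the finalizer may have run (no guarantees on timing).
        (print (sb-ext:weak-pointer-value weak))))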
In SBCL, you can even allocate your objects with C's malloc and release them with free! That's right: SB-ALIEN:MAKE-ALIEN and SB-ALIEN:FREE-ALIEN. You can also allocate objects in "foreign" space and tag them yourself, so that Lisp treats them as its own objects.
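A minimal sketch of what that manual management looks like; MAKE-ALIEN and FREE-ALIEN are the real SB-ALIEN operators, while the SUM-FIRST-N wrapper and the use of DEREF for element access are my own illustration.

    ;; The buffer below lives in foreign (malloc'd) memory outside the
    ;; Lisp heap, so the GC never moves or reclaims it; we free it ourselves.
    (defun sum-first-n (n)
      (let ((buf (sb-alien:make-alien sb-alien:int n))) ; ~ malloc(n * sizeof(int))
        (unwind-protect
             (progn
               (dotimes (i n)
                 (setf (sb-alien:deref buf i) i))       ; buf[i] = i
               (let ((sum 0))
                 (dotimes (i n sum)
                   (incf sum (sb-alien:deref buf i))))) ; sum += buf[i]
          (sb-alien:free-alien buf))))                  ; explicit free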