OP is just saying that you don't have to permit operations such as addition or incrementing on range types, in which case you don't need the corresponding type-level operations.
> That is certainly a valid way to address your question – i.e. don't allow incrementing said type. Force converting it to a type that supports incrementing, and then from that the developer can, if they so choose, convert it back to an appropriate range type, including the original range type if suitable.
The quoted part above is an argument for dependent types. The conversion back to a range type creates a type that depends on a value, which is the essence of dependent typing.
No, I think the idea is that you'd get a runtime exception if the value were outside the range. No need for dependent types. Conceptually it is no different from casting, say, a 64-bit integer to a 32-bit integer: if the value doesn't fit in 32 bits, then (depending on the language semantics) you either get a runtime error or some kind of nonsense value. You do not need to introduce dependent types into your language to enable such casts, as long as you're willing to enforce the relevant checks only at runtime, or forgo such checks altogether.
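To make the casting analogy concrete, here's a minimal Rust sketch (Rust, the `Percent` type, and its 0..=100 bounds are my own illustration, not from the thread): the range type deliberately supports no arithmetic, so you convert out to a plain integer, increment there, and convert back through a runtime-checked `TryFrom`, exactly like a checked 64-bit to 32-bit narrowing.

```rust
use std::convert::TryFrom;

/// A hypothetical range type: an integer constrained to 0..=100.
/// Note there is deliberately no `impl Add` or increment on it.
#[derive(Debug, Clone, Copy, PartialEq)]
struct Percent(u8);

impl TryFrom<i64> for Percent {
    type Error = String;

    // The runtime check: the conversion back into the range type
    // succeeds only if the value actually lies in the range.
    fn try_from(v: i64) -> Result<Self, Self::Error> {
        if (0..=100).contains(&v) {
            Ok(Percent(v as u8))
        } else {
            Err(format!("{v} is outside 0..=100"))
        }
    }
}

fn main() {
    // Same idea as a narrowing integer cast: checked at runtime.
    assert!(i32::try_from(5_000_000_000_i64).is_err());

    // Convert out of the range type, increment, convert back.
    let p = Percent::try_from(99_i64).unwrap();
    let bumped = i64::from(p.0) + 1;
    assert_eq!(Percent::try_from(bumped), Ok(Percent(100)));

    // One more increment pushes the value out of range: a runtime
    // error, with no dependent types involved.
    assert!(Percent::try_from(bumped + 1).is_err());
}
```

The point is that the check lives in `try_from` and fires at runtime; a dependently typed language could instead discharge that check at compile time, but nothing in this workflow requires it.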
I think the original comment is imprecise: "don't allow incrementing said type" can be read either as "don't allow incrementing values of said type" or, literally, as "don't allow incrementing the type itself". Both your interpretation and mine are defensible, depending on how one chooses to read the comment.