I think it comes down to the syntax used to write regexes. It's borderline an assembly-language level of abstraction. Better language tools would make them easier for people to grok. Basically, instead of characters like ?^()$ and the like, let's use words people understand.
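Python's stdlib already gestures at this with re.VERBOSE, which lets you space a pattern out, comment it, and name the pieces. A minimal sketch (the date pattern is just an illustration):

    import re

    # Verbose mode: whitespace in the pattern is ignored, comments are
    # allowed, and named groups give each piece a readable name.
    date = re.compile(r"""
        (?P<year>  \d{4}) -   # four-digit year
        (?P<month> \d{2}) -   # two-digit month
        (?P<day>   \d{2})     # two-digit day
    """, re.VERBOSE)

    m = date.match("2024-06-01")
    print(m.group("year"), m.group("month"), m.group("day"))  # 2024 06 01

It doesn't replace the operator soup, but it shows the appetite for naming things is already there.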
As @aranchelk said, LL and LR grammars are literally that. They let you assign names to regexes and compose them.
Libraries like Python's lrparsing [0] let you assign regexes (a.k.a. tokens) to variables, then build grammars by combining them using Python expressions. For example:
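Something along the lines of the expression grammar in the lrparsing docs (trimmed from memory, so treat the details as illustrative rather than exact):

    import lrparsing
    from lrparsing import Prio, Ref, THIS, Token, Tokens

    class ExprParser(lrparsing.Grammar):
        # Tokens are just named regexes, declared once in a registry.
        class T(lrparsing.TokenRegistry):
            integer = Token(re="[0-9]+")
            ident = Token(re="[A-Za-z_][A-Za-z_0-9]*")

        # Rules combine tokens and other rules with Python operators:
        # + is sequence, | is choice, << marks left associativity.
        expr = Ref("expr")             # forward reference enables recursion
        atom = T.ident | T.integer | Token("(") + expr + Token(")")
        expr = Prio(                   # precedence, highest first
            atom,
            THIS << Tokens("* /") << THIS,
            THIS << Tokens("+ -") << THIS)
        START = expr                   # the grammar's entry point

    tree = ExprParser.parse("1 + 2 * (3 + four)")
    print(ExprParser.repr_parse_tree(tree))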
These grammars are more complex, with more rules you have to follow, but more checking is done when they are compiled, so they tend to mostly work once they do compile. They are what you asked for, but there is no free lunch.
On the downside, lrparsing is pure Python, so it's slower than Python's built-in regexes.
It's not just the syntax. Regexes aren't directly composable (unless you just mash strings together). They can't define nested structures, i.e. no recursion, and they can't maintain context.
We don't need better language tools. Better parsers can be, and already have been, implemented in libraries.
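To make the nesting point concrete: regular languages can't count, so no single regex matches arbitrarily deep balanced parentheses, while a few lines of recursion handle it trivially. A hand-rolled sketch (parse_group is a hypothetical helper, not from any library):

    def parse_group(s: str, i: int = 0) -> int:
        """Parse a balanced '(...)' group starting at s[i]; return the
        index just past its closing ')'."""
        if i >= len(s) or s[i] != "(":
            raise ValueError(f"expected '(' at {i}")
        i += 1
        while i < len(s) and s[i] != ")":
            if s[i] == "(":
                i = parse_group(s, i)   # recursion handles the nesting
            else:
                i += 1                  # anything else is plain content
        if i >= len(s):
            raise ValueError("unbalanced: missing ')'")
        return i + 1

    print(parse_group("(a(b)(c(d)))"))  # 12 -- consumed the whole string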
I played with parser combinators in Elm and found them very useful compared to regexes; the issues I hit were mainly down to the Elm implementation. Not sure how effective they are in Rust or Go.
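For anyone who hasn't seen them, the core idea ports to a few lines of Python (a hand-rolled sketch, not any particular library): a parser is a function from input to (value, rest) or None, and combinators glue parsers together with plain function calls instead of string mashing.

    from typing import Callable, Optional, Tuple

    # A parser is a function: input -> (value, remaining input), or None.
    Parser = Callable[[str], Optional[Tuple[str, str]]]

    def char(c: str) -> Parser:
        """Match a single literal character."""
        return lambda s: (c, s[1:]) if s[:1] == c else None

    def seq(p: Parser, q: Parser) -> Parser:
        """Run p, then q on what's left, concatenating the results."""
        def run(s):
            r1 = p(s)
            if r1 is None:
                return None
            v1, rest = r1
            r2 = q(rest)
            if r2 is None:
                return None
            v2, rest2 = r2
            return v1 + v2, rest2
        return run

    def alt(p: Parser, q: Parser) -> Parser:
        """Try p; fall back to q if p fails."""
        return lambda s: p(s) or q(s)

    ab_or_ac = seq(char("a"), alt(char("b"), char("c")))
    print(ab_or_ac("acxyz"))  # ('ac', 'xyz')
    print(ab_or_ac("azxyz"))  # None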
That is true, though, or perhaps "state" is the better word? I know I had to come up with an algorithm because regexes alone couldn't do what I wanted (not even with advanced features like lookahead, lookbehind, etc.).