I think the worst case here is the AI writing a regex that mostly works but fails on edge cases that you don't think to test but will encounter in production.
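To make that concrete, here's a hypothetical "mostly works" regex (an email matcher I made up for illustration) that passes the obvious happy-path test but quietly rejects valid inputs you'd only meet in production:

```python
import re

# A naive email regex: \w only covers [A-Za-z0-9_], which is the trap here.
EMAIL = re.compile(r"^\w+@\w+\.\w+$")

# The case you thought to test:
assert EMAIL.match("alice@example.com")

# The cases production will send you:
assert not EMAIL.match("first.last@example.com")  # dot in local part
assert not EMAIL.match("bob@mail.example.co.uk")  # nested subdomains
assert not EMAIL.match("dev+test@example.com")    # plus-addressing
```

All four assertions pass, which is exactly the problem: the regex looks done until the unrepresented inputs arrive.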
OK, but back to the regex. That's just pattern matching, and ML/AI has been shown to be amazing at pattern matching (albeit underwhelming at most other tasks). I would trust an AI/ML-generated regex, but only because such structures are easily testable. This tool from the University of Trieste has probably been around for 10+ years--likely on the order of 1e3 parameters, not 1e10.
The tool you linked uses a GA (not a NN) to find a short regex that gives correct answers on whatever test cases you provide. If your test cases are correct, the output is guaranteed to be correct at least on those cases--behavior outside them is only as good as your test coverage.
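The GA idea can be sketched in a few lines. This is a heavily simplified toy, not the Trieste tool's actual algorithm: candidates are raw pattern strings, fitness is the number of example cases a candidate gets right (invalid regexes are penalized), and mutation randomly edits one character.

```python
import random
import re

# Toy example cases (made up): the target should fully match the
# positives and reject the negatives. A perfect score is 6.
POSITIVES = ["cat", "cart", "cast"]
NEGATIVES = ["car", "cats", "dog"]
GENES = list("abcdefgrst.?*+")

def fitness(pattern: str) -> int:
    """Count of example cases the pattern gets right; invalid regexes score -1."""
    try:
        rx = re.compile(pattern)
    except re.error:
        return -1
    right = sum(1 for s in POSITIVES if rx.fullmatch(s))
    right += sum(1 for s in NEGATIVES if not rx.fullmatch(s))
    return right

def mutate(pattern: str) -> str:
    """Randomly insert, delete, or replace one character."""
    i = random.randrange(len(pattern) + 1)
    op = random.choice(["insert", "delete", "replace"])
    if op == "insert" or not pattern:
        return pattern[:i] + random.choice(GENES) + pattern[i:]
    i = min(i, len(pattern) - 1)
    if op == "delete":
        return pattern[:i] + pattern[i + 1:]
    return pattern[:i] + random.choice(GENES) + pattern[i + 1:]

def evolve(generations=200, pop_size=50):
    pop = ["cat"] * pop_size  # seed population
    for _ in range(generations):
        # Truncation selection: keep the top fifth, refill by mutation.
        pop = sorted(pop, key=fitness, reverse=True)[: pop_size // 5]
        pop += [mutate(random.choice(pop)) for _ in range(pop_size - len(pop))]
    return max(pop, key=fitness)

random.seed(0)
best = evolve()
print(best, fitness(best))
```

Note the point from the comment above: a pattern like `ca.?t` scores a perfect 6 on these examples, yet whether it is "the right regex" depends entirely on whether the six examples capture your real intent.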
GPT-3 is known to be confidently wrong at things like basic arithmetic, so I really wouldn't trust it in scenarios like this. Maybe one day I'll be able to trust NN-generated code without testing it first, but we're not there yet.
It's a form of defense in depth. Ideally your application has specific test cases, property-based testing, linting, and whatever other forms of static and dynamic analysis you can think of. But if your code is obfuscated and/or you don't have a clear mental model of what it does, that adds a layer of uncertainty and could hurt debuggability.
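Property-based testing of a regex can even be hand-rolled with the stdlib (a real project would reach for a library like Hypothesis); the regex and the property here are hypothetical examples:

```python
import random
import re

# Hypothetical regex under test: should match any decimal integer,
# with an optional sign.
INTEGER = re.compile(r"^[+-]?\d+$")

# Property 1: for every int n, the regex must accept str(n).
random.seed(42)
for _ in range(1000):
    n = random.randint(-10**12, 10**12)
    assert INTEGER.fullmatch(str(n)), f"rejected valid integer {n}"

# Property 2: appending junk to a valid integer must make it reject.
for junk in [" ", "x", ".", "--"]:
    assert not INTEGER.fullmatch("42" + junk)
```

Instead of enumerating cases by hand, you state an invariant and throw generated inputs at it, which is exactly the kind of check that catches the edge cases you didn't think of.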
Linked lists, hash functions, etc. are mostly solved problems with clearly defined interfaces, built and tested against edge cases by humans. Each regex is a special snowflake.