All they wanted was to fork the project so they could "clean up" the code to their own taste, manage the community themselves, and so on. It wasn't as if all their patches were flawless and Bram simply wouldn't accept them.
Just dumping any old code into a project the first time it's demanded is a sign not of enlightened maintenance but of gross negligence. Getting a lot of feedback on patches is a good thing; it's not some kind of aggression.
What does PG need to improve, then? I see a lot of discussion of how Oracle is better at this or that but not much discussion on how PG will come to parity and how we'll know when it's finally good enough to use.
There are some things (better replication out of the box, higher performance).
> but not much discussion on how PG will come to parity
That's because this subthread started with "No one ever chooses Oracle or Sybase on technical merits." - neither Postgres's strengths nor its needed/planned improvements are relevant to refuting that position.
> on how PG will come to parity and how we'll know when it's finally good enough to use.
Just because Oracle has some features that postgres doesn't match doesn't mean postgres isn't good enough. There are a lot of features where postgres is further along than Oracle, too. For a good number of OLTP-ish workloads, postgres is faster.
We're talking about large and complex products here - it's seldom the case that one project/product is going to be better than all others in all respects. Postgres has been good enough to use for a long time.
If you're interested in which areas postgres needs to improve, I'm happy to talk about that too.
It's worse than that. You're not just choosing a reliable solution, or even a solution with a conservative reputation so that your butt is covered. You're choosing a solution which may actually be worse, because when you fail to deliver for someone else, you have a scapegoat. If there is nobody else to blame, you might take the blame yourself for having chosen open source, with no specific party to point at (the hot potato stopped with you).
You can absolutely do string handling in a common subset of Python 2 and Python 3. It won't work for legacy string handling code if you relied on implicit coercion, but then the same apps are unlikely to support unicode either.
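For illustration, here's a minimal sketch of that common subset, assuming UTF-8 at the I/O boundaries (the function names are mine, purely illustrative):

    # -*- coding: utf-8 -*-
    # Runs unchanged on Python 2 and 3: be explicit about bytes vs. text
    # instead of relying on Python 2's implicit coercion.
    from __future__ import unicode_literals  # "" literals are text on both

    def read_greeting(raw):
        # raw is bytes off the wire; decode explicitly at the boundary.
        return 'hello, ' + raw.decode('utf-8')

    def write_greeting(name):
        # Encode explicitly on the way out; b"" literals are bytes on both.
        return ('hello, ' + name).encode('utf-8')

    assert read_greeting(b'world') == 'hello, world'
    assert write_greeting('world') == b'hello, world'

The discipline is simply: decode on input, work in text, encode on output.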
> It won't work for legacy string handling code if you relied on implicit coercion
That's far from the only pitfall. For instance, parts of the Python 2 stdlib rely on implicit coercion to bytestrings, whereas the Python 3 versions have been properly converted to unicode (I discovered that when I tried to disable implicit coercion in one of the codebases I work with, for cleanup/migration purposes).
So cross-version string handling is not just a matter of properly splitting bytestrings and text strings, but also of finding out where "native strings" need to remain.
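To make the "native string" point concrete, here's a rough sketch (the native_str helper is hypothetical, though libraries like six ship similar ones). WSGI is the classic case: things like status lines must be whatever str is on each version - bytes on Python 2, text on Python 3:

    import sys

    def native_str(s, encoding='ascii'):
        # Hypothetical helper: coerce to the version-native str type.
        if isinstance(s, str):
            return s                   # already native on this version
        if sys.version_info[0] == 2:
            return s.encode(encoding)  # text -> bytes (Python 2's str)
        return s.decode(encoding)      # bytes -> text (Python 3's str)

    # e.g. a WSGI status line must be a native str on both versions:
    status = native_str(u'200 OK')
    assert isinstance(status, str)

That's the part that trips people up: some values must stay bytes, some must become text, and a third class must track whichever type str happens to be.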
Well, now you're getting into my personal reason for not adopting Python 3. Python 2 will remain just as useful as it has been; Python 3 will not significantly extend that usefulness.
So I don't think there's much to be gained.
The Python 3 syntax and new features are not massively better than the sweet spot hit by 2; they're not even incrementally better. They're just massively more complicated.
So the cost/benefit ratio of moving from Python 2 to 3 just isn't there. People are switching because it's "the done thing", not because they really gain anything.
I was surprised as hell when I realized I wouldn't use Python 3, but it is a rational and considered opinion.