> Synchronizing data in real-time is computationally expensive
No, it's not. I was responsible for implementing, deploying, and managing the infrastructure at lever.co when we were a tiny fledgling startup. The entire application is built on top of JSON OT. I took some measurements at one point when we had ~thousands of active browser sessions of our app. At the time we were seeing about 1-2 OT merges (transform + re-apply) per day. All the other, non-concurrent operations can get sent straight to the database. I don't know what the numbers are now, but I'd be shocked if OT ever becomes the bottleneck.
For text-based OT you'll see more concurrent edits. But for text-based OT, I have a little C library that can comfortably do 20 million simple text OT transforms/second on a single core of my old 2012 MacBook Air. Good luck making that a bottleneck.
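To give a sense of why a single transform is so cheap, here's a toy sketch (TypeScript, not the actual C library, and with ops reduced to bare single inserts):

```typescript
// Toy sketch of a text OT transform, reduced to single-insert ops.
// Real libraries handle full insert/delete/retain op lists; this just
// shows why one transform is a handful of comparisons and additions.
type Insert = { pos: number; text: string };

// Rewrite `op` so it applies correctly after `other` has been applied.
// `opWins` is an arbitrary-but-consistent tie-break for equal positions.
function transformInsert(op: Insert, other: Insert, opWins: boolean): Insert {
  if (other.pos < op.pos || (other.pos === op.pos && !opWins)) {
    // The concurrent insert landed at or before our position: shift right.
    return { pos: op.pos + other.text.length, text: op.text };
  }
  return op; // Otherwise our position is unaffected.
}
```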
I guess it somewhat depends on what kind of OT we're talking about (text or JSON). In any case, I don't think the bottleneck would be the OT algorithm itself. More likely, the bottleneck would be the number of messages (HTTP requests or WebSocket frames) required to send each individual operation between the client and server.
If you have text-based OT and you send an operation over the wire each time a user presses a key, and if you do this for every single input field in your app, it's going to add up.
Not saying it's not feasible, but it's not going to be as fast as making a plain REST call for those use cases that don't require syncing.
That said, I wouldn't be surprised if OT turns out to be much faster than differential transform in terms of raw algorithm speed, but again, I think the bottleneck will be the number of frames/requests that need to go over the wire.
> Not saying it's not feasible, but it's not going to be as fast as making a plain REST call for those use cases that don't require syncing.
The whole point of OT is that it lets you fearlessly merge concurrent edits. You can lean on that to rate-limit messages to or from the client, if you need to. Because the client can always apply its own edits immediately to its own local model, there's no perceived latency. So, if you design your system right, you can batch up changes at any granularity you like (per page, per form, per second, dynamically based on load, whatever). Of course, the tradeoff is that you lean on the OT system more by batching up edits. But it can be made effectively free to transform if the edits modify different parts of the database (using range trees to cull, etc.).
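A sketch of that batching idea (hypothetical names, not any particular OT library's API):

```typescript
// Sketch of client-side batching: apply edits locally at once for zero
// perceived latency, flush them to the server at a chosen granularity.
// `applyLocally` and `sendOps` are illustrative callbacks, not a real API.
type Op = { pos: number; ins?: string; del?: number };

class BatchingClient {
  private pending: Op[] = [];
  private timer: ReturnType<typeof setTimeout> | null = null;

  constructor(
    private applyLocally: (op: Op) => void,
    private sendOps: (ops: Op[]) => void,
    private flushMs = 1000, // per page, per form, per load level, whatever
  ) {}

  edit(op: Op) {
    this.applyLocally(op); // the user sees the change immediately
    this.pending.push(op);
    if (this.timer === null) {
      this.timer = setTimeout(() => this.flush(), this.flushMs);
    }
  }

  private flush() {
    this.timer = null;
    if (this.pending.length > 0) {
      this.sendOps(this.pending); // one frame instead of one per keystroke
      this.pending = [];
    }
  }
}
```

The server then transforms the whole batch against whatever landed concurrently, which is exactly the work OT is built to do.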
But yeah - it is much faster than differential transform. I'm not sure how good diffing algorithms are in practice, but best case they need to scan the entire document. On the other hand, OT only requires size and computation proportional to how much was changed. If we're collaboratively editing a 10kb text file, changes will mostly only be a few bytes each. And applying each change using a good rope library is an O(log n) operation.
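As a rough illustration (an assumed op encoding, not any specific library's wire format):

```typescript
// An edit to a 10kb document encoded as skip/insert/delete components.
// The op's size tracks the change, not the document: this one is a few
// bytes regardless of how big the file is.
type Component = number | { ins: string } | { del: number };
const op: Component[] = [5120, { ins: "hi" }]; // skip 5120 chars, insert "hi"
// A rope (a balanced tree of string chunks) applies this in O(log n),
// where a flat string would need an O(n) copy.
```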
Of course, all this requires a database which supports OT out of the box *whistles innocently*