It's no secret that most of "Big 4" consulting is about telling directors what they want to hear anyway (e.g. layoffs), but wrapping that in a glossy report with a logo on it.
Not a fan of the Trump administration, but I imagine the official Pentagon communications systems must be extremely clunky and annoying, and about 20 years behind civilian tech.
During the UK Covid-19 inquiry into government decision-making at that time, it came to light that most of the UK cabinet were coordinating via WhatsApp groups. Again, I'm not a fan of Boris and Dom Cummings, but this makes some sort of sense to me. I recognise the need for government teams to have quick, convenient chat available to them. Things move too fast these days to wait for the next cabinet meeting or to arrange things via a series of phone calls.
The problem here is that the convenience comes at the expense of proper identity management. SignalGate is a good example of the principle. Some Apple convenience feature "helped" the user by putting the reporter's phone number into the address book under the identity of a government official. Signal then cheerfully used that incorrect phone number to add the reporter to the group chat.
That 20-year-old tech is simply more secure... specifically because it is less convenient. By doing things the way they do them, they can enforce the desired levels of security by controlling physical access to the equipment. With something like Signal, that access is entirely the responsibility of the user. The user will inevitably mess that up, particularly when things get exciting... and Signal is not even all that good at preventing the user from messing the identity thing up.
You are right, but I'd also say that high security brings a lot of friction that slows down decision-making. Irrespective of Trump and his friends (whom I don't like), as a point of principle I think world leaders have to choose between secure-and-slow vs fast-with-a-risk-of-leaks. For most purposes, fast with a risk of leaks is going to be the better trade-off.
I hear you both. Frankly, I think we could use a little friction in communication to slow it, and the resulting decisions, down. I don't know about everyone else, but I don't make my best decisions on the fly.
And unarchived. It's very convenient to not have to do things in meetings with minutes where people might later question your decisions. Or report them to the police.
Yes, you are right, and these people probably are crooks. But in principle I think politicians should be able to have private conversations. These used to happen in literal back rooms, but these days everyone is geographically spread out and that's not so possible. Formal decisions should be ratified in official minuted meetings, but informal chat should also be possible, because people need to actually talk to each other in an unguarded way to figure things out sometimes. At the moment the principle seems to be "anything that a politician types to anyone else should be archived for later perusal", and I'm not sure that that's going to give us better decisions.
> [...] politicians should be able to have private conversations. [...] people need to actually talk to each other in an unguarded way to figure things out sometimes.
Which works fine as long as there are no bad actors who may bribe, corrupt, blackmail, etc. Unfortunately that is not the reality we live in and one way[0] of counteracting the bad actors is to enforce transparency with things like "everything must be recorded and archived".
Right but what is the cost of insisting that "everything must be recorded and archived". Are you going to strap recording devices onto everyone in congress? You have to have a mix of safeguards but also practicality, surely?
"Is this politican bad" is not a very good conversation for HN, but "what technology should politicians be using to make them effective" is a good topic for HN, and thats what I'm trying to have a conversation about
My kneejerk reaction is the same as yours, but the fact that they were using disappearing messages gives it away: they're using Signal to get around their legal reporting requirements. Even if they have other motivations, what they're doing is illegal.
Also, I complain a lot about Teams, but my understanding is that the modern DoD basically runs on Microsoft, AWS (also Google?), just the same as private companies. Probably not Zoom, which is unfortunate from a usability perspective but also wise, I think.
The whole world had to shift online with about 2 weeks notice, so I'll forgive them that. At the time I was kind of impressed to be honest that red tape didn't bring the govt machinery to a halt and that they were actually able to improvise a bit. But yes Zoom is not generally the platform I'd want them to use.
There were better alternatives and they had more time than that (when lockdown was possible but not enforced) to prepare.
IIRC the French installed a government-controlled Jitsi server. That plus a VPN would be a whole lot more secure.
If you do not have things in place I think "we need to discuss state secrets securely" would have been clearly sufficient to justify an exemption to lockdown rules.
Can you name a popular civilian tech that blocks adding random journalists to small chat groups? That includes strong identity guarantees? That meets compliance requirements around logging calls?
Bloomberg might come the closest on this. Why don't you go out and price a Bloomberg terminal for yourself, at the grade that lets you trade options with other Bloomberg terminal owners over the chat interface?
I don't like Trump, but I'm interested in the idea of what technology we want our politicians to use if we actually want them to be functional teams. This seems like a topic that might be good to talk about on HN.
This 'single line of code' headline trend is dumb. Of course a single line of code can fuck everything up; code is complicated and that's how it works. It's not knitting.
Taking offense when none was tendered is a special kind of social malfeasance that has gained popularity among the idle and boorish class of recent years. I appreciate it as a facile outward indication of low character and questionable intellect.
Well that may be. But I'm talking more about differences between 2003 and now. Regardless of cancel culture etc, there is a physical possibility of reply/response now, that did not exist in 2003. Perhaps the two things are related?
Daniel Dennett had a good few paragraphs about this in Consciousness Explained - the Turing Test is supposed to be challenging/adversarial. The example Dennett gave was telling the AI a joke and then asking it to reflect on and explain the joke and come up with some alternative punchlines. (I note that contemporary LLMs would still be good at that, but when the book was written in 1991, that sort of interaction with an AI was unthinkable.)
Do the goalposts have to keep moving until we can no longer find any gap in common knowledge or eccentric behavior in AI? If so, what does that say about eccentric human beings?
Of course; that's the point of an adversarial test, to free the interrogators to use all their human intelligence to place the goalposts wherever they judge best. There will always be individual humans who'd fail any sane version of the test (illiterate, comatose, etc.), so the test is meaningful only as a statistical aggregate.
To me it just sounds like you're holding interrogators to an unreasonably high standard in order to deny the findings of the study. If we're talking about statistical aggregates, knowing that the average person lacks the knowledge to exploit known biases of current AI models is enough to dismiss the expectation that interrogators should target them specifically. Commenters also seem to be missing the fact that this is a situation where the interrogator does not know if they are conversing with an AI model or a human being. I wouldn't expect someone to go all out boxing a punching bag if I told them there's a 50% chance that there's another person trapped in there. I've never seen the Turing Test described in such demanding terms, and a look at the Wikipedia page contradicts the definitions pushed forward here.
Perhaps another name should be coined to describe the level of perfection that critics expect from this. It sounds like what you want is something akin to a comprehensive test for AGI.
If your standard for how hard the interrogator should try isn't "as hard as they can", then what do you propose instead? It's always possible to fool a sufficiently lazy human, so you need something.
> It sounds like what you want is something akin to a comprehensive test for AGI.
Since you mentioned Wikipedia, their first proposed test for AGI is Turing's:
I (generally, not from you) see a motte-and-bailey game, where the strongest versions of Turing's test are described as equivalent to AGI, and then favorable results on weaker versions are used to claim we've achieved it. I think those weaker results are significant, probably in economically important ways, though mostly socially destructive. I think this preprint is mostly good. I don't like that conflation, though.
>To me it just sounds like you're holding interrogators to an unreasonably high standard in order to deny the findings of the study.
There isn't a THE Turing test. On a deep philosophical level, a Turing test is a kind of never-ending test we run on everyone we interact with, all the time. I don't want to get too deep into the weeds of philosophy here, but the idea is that we are talking about verifying intelligence in general, just like we verify any scientific theory through replication.
In a very scientific way, it's just another case of perpetual falsifiability. The same way that Newtonian physics is a "fact" until it isn't, an AI passes a Turing test until it doesn't.
Here are some example questions that Turing proposed when initially describing the test:
>"I have K at my K1, and no other pieces. You have only K at K6 and R at R1. It is your move. What do you play?"
>"In the first line of your sonnet which reads "Shall I compare thee to a
summer's day," would not "a spring day" do as well or better?"
It seems to me that it isn't a movement of the goalposts to demand that the interrogators are adversarial and as challenging as possible - it's what Turing originally envisioned.
As such, "The AI did not pass the Turing test because the interrogators were not sufficiently challenging" becomes a standard impossible to beat. The reductio on this is that in order for AI to pass the Turing test, it has to fool everyone on the planet which is not what I believe is intended.
Rather, we should set an upper bound on what a reasonable interpretation of "as challenging as possible" means.
That's true; in recent years it's been less of a disaster, with lots of good CSV libraries for various languages. In the 90s CSV was a constant footgun; perhaps that's why they went crazy and came up with XML.
I've been thinking about this a lot - nearly every problem these days is a synchronisation problem. You're regularly downloading something from an API? That's a sync problem. You've got a distributed database? Sync problem. Cache invalidation? Basically a sync problem. You want online and offline functionality? Sync problem. Collaborative editing? Sync problem.
And 'synchronisation' as a practice gets very little attention or discussion. People just start with naive approaches like 'download what's marked as changed' and then get stuck in the quagmire of known problems and known edge cases (handling deletions, handling transport errors, handling changes that didn't get marked with a timestamp, how to repair after a bad sync, dealing with conflicting updates, etc.).
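To make that concrete, here's roughly what the naive version looks like (a sketch in TypeScript with made-up names, not any real API), with the usual failure points marked:

    type Row = { id: string; updatedAt: number; data?: unknown };

    const local = new Map<string, Row>();
    let lastSyncAt = 0;

    async function naiveSync(fetchChangedSince: (t: number) => Promise<Row[]>) {
      const changed = await fetchChangedSince(lastSyncAt);
      for (const row of changed) {
        // Clobbers any concurrent local edit: no conflict handling at all.
        local.set(row.id, row);
      }
      // Hard deletions never appear in `changed`, so stale rows live
      // forever (hence tombstones). Wall-clock time means server rows
      // committed slightly "in the past" get skipped, and a crash right
      // here leaves no way to repair except a full re-download.
      lastSyncAt = Date.now();
    }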
The one piece of discussion or attempt at a systematic approach to 'synchronisation' I've seen recently is Conflict-free Replicated Data Types https://crdt.tech, which essentially restricts your data and the rules for dealing with conflicts to situations that are known to be resolvable, and then packages it all up into an object.
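The canonical toy example is a grow-only counter: each replica increments only its own slot, and merge takes the per-replica max, so updates can arrive in any order and replicas still converge. Roughly:

    type GCounter = Map<string, number>; // replicaId -> count

    function increment(c: GCounter, replicaId: string): void {
      c.set(replicaId, (c.get(replicaId) ?? 0) + 1);
    }

    function merge(a: GCounter, b: GCounter): GCounter {
      const out = new Map(a);
      for (const [id, n] of b) out.set(id, Math.max(out.get(id) ?? 0, n));
      return out;
    }

    function total(c: GCounter): number {
      return [...c.values()].reduce((sum, n) => sum + n, 0);
    }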
> The one piece of discussion or attempt at a systematic approach I've seen to 'synchronisation' recently is to do with Conflict-free Replicated Data Types https://crdt.tech
I will go against the grain and say CRDTs have been a distraction, and the overfocus on them has been delaying real progress. They are immature and highly complex, thus hard to debug and understand, and they have extremely limited cross-language support in practice - let alone any indexing or storage-engine support.
Yes, they are fascinating, and yes, they solve real problems, but they are absolute overkill for your problems (except collab editing), at least currently. Why? Because they are all about conflict resolution. You can get very far without addressing this problem: for instance a cache, like you mentioned, has no need for conflict resolution. The main data store owns the data, and the cache follows. If you can have single ownership (single writer), or last-write-wins, or similar, you can drop a massive pile of complexity on the floor and not worry about it. (In the rare cases where it's necessary, like Google Docs or Figma, I would be very surprised if they used off-the-shelf CRDT libs - I would bet they have extremely bespoke, ___domain-specific data structures that are inspired by CRDTs.)
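To illustrate how much complexity that drops: last-write-wins is maybe a dozen lines with no library at all (a sketch; a timestamp plus a deterministic tie-break):

    type LWW<T> = { value: T; ts: number; replica: string };

    function write<T>(value: T, replica: string): LWW<T> {
      return { value, ts: Date.now(), replica };
    }

    function mergeLWW<T>(a: LWW<T>, b: LWW<T>): LWW<T> {
      if (a.ts !== b.ts) return a.ts > b.ts ? a : b;
      return a.replica > b.replica ? a : b; // deterministic tie-break
    }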
Instead, what I believe we need is end-to-end bidirectional stream-based data communication, simple patch/replace data structures to efficiently notify of updates, and standard algorithms and protocols for processing it all. Basically, adding async reactivity to the read path of existing data engines like SQL databases. I believe even this is a massive undertaking, but it's feasible and delivers lasting, tangible value.
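For a sense of what I mean, the updates flowing over such a stream could be as simple as this (my own hypothetical shapes, not any existing standard):

    type Update =
      | { kind: "snapshot"; rows: [string, unknown][] } // initial full state
      | { kind: "replace"; id: string; row: unknown }   // row changed
      | { kind: "delete"; id: string };                 // row removed

    function apply(state: Map<string, unknown>, u: Update): Map<string, unknown> {
      const s = new Map(state);
      switch (u.kind) {
        case "snapshot": return new Map(u.rows);
        case "replace": s.set(u.id, u.row); return s;
        case "delete": s.delete(u.id); return s;
      }
    }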
Indeed, the simple approach of "send your operations to the server and it will apply them in the order it receives them" gives you good-enough conflict resolution in many cases.
It is still tempting to turn to CRDTs to solve the next problem: how to apply server-side changes to a client when the client has its own pending local operations. But this can be solved in a fully general way using server reconciliation, which doesn't restrict your operations or data structures like a CRDT does. I wrote about it here: https://mattweidner.com/2024/06/04/server-architectures.html...
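The gist, as a simplified sketch (made-up names; the post has the real details): keep the pending local ops and replay them on top of whatever authoritative state the server sends.

    type State = Map<string, unknown>;
    type Op = (s: State) => State;

    class Client {
      private serverState: State = new Map(); // last authoritative state
      private pending: Op[] = [];             // local ops not yet acked

      view(): State {
        // What the UI renders: server truth plus optimistic local ops.
        return this.pending.reduce((s, op) => op(s), this.serverState);
      }

      applyLocal(op: Op): void {
        this.pending.push(op); // ...and send it to the server
      }

      onServerUpdate(newServerState: State, ackedCount: number): void {
        this.serverState = newServerState;             // server wins
        this.pending = this.pending.slice(ackedCount); // drop acked ops
        // view() now replays the remaining pending ops on top; the ops
        // themselves are unrestricted, unlike in a CRDT.
      }
    }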
> how to apply server-side changes to a client when the client has its own pending local operations
I liked the option of restoring and replaying on top of the updated server state. I'm wondering when this causes perf issues? Local changes should still propagate fast after e.g. a network partition, even if the person has queued up a lot of them (say, during a flight).
Anyway, my thinking is that you can avoid many consensus problems by just partitioning data ownership. The "like" example is interesting in this way. A like count is an aggregate based on multiple data owners, and everyone else just passively follows with read replication. So thinking in terms of shared write access is the wrong problem description, imo, when in reality "liked posts" is data exclusively owned by each of the different nodes doing the liking (subject to a limit of one like per post). A server-side aggregate could exist, but it is owned by the server, so no shared write access is needed.
Similarly, say you have a messaging service. Each participant owns their own messages and others follow; no conflict resolution is needed. However, you can still break the protocol (say, liking twice); such updates can be considered malformed and e.g. ignored. In some cases you can copy someone else's data and make it your own, for instance to protect against impersonation: say that you can change your own nickname, and others follow. This can be exploited to impersonate, but you can keep a local copy of the last-seen nickname and then display a "changed name" warning.
Anyway, I’m just a layman who wants things to be simple. It feels like CRDTs have been the ultimate nerd-snipe, and when I did my own evaluations I was disappointed with how heavyweight and opaque they were a few years ago (and probably still).
> Yes, they are fascinating, and yes, they solve real problems, but they are absolute overkill for your problems (except collab editing), at least currently. Why? Because they are all about conflict resolution. You can get very far without addressing this problem: for instance a cache, like you mentioned, has no need for conflict resolution. The main data store owns the data, and the cache follows. If you can have single ownership (single writer), or last-write-wins, or similar, you can drop a massive pile of complexity on the floor and not worry about it. (In the rare cases where it's necessary, like Google Docs or Figma, I would be very surprised if they used off-the-shelf CRDT libs - I would bet they have extremely bespoke, ___domain-specific data structures that are inspired by CRDTs.)
I agree with this. CRDTs are cool tech but I think in practice most folks would be surprised by the high percentage of use cases that can be solved with much simpler conflict resolution mechanism (and perhaps combined with server reconciliation as Matt mentioned). I also agree that collaborative document editing is a niche where CRDTs are indeed very useful.
> what I believe we need is end-to-end bidirectional stream-based data communication
I suspect the generalized solution is much harder to achieve, and looks more like batch-based reconciliation of full snapshots than streaming or event-driven.
The challenge is if you aim to sync data sources where the parties managing each data source are not incentivized to provide robust sync. Consider Dropbox or similar, where a single party manages the data set, and all software (server and clients), or ecosystems like Salesforce and Mulesoft which have this as a stated business goal, or ecosystems like blockchains where independent parties are still highly incentivized to coordinate and have technically robust mechanisms to accomplish it like Merkle trees and similar. You can achieve sync in those scenarios because independent parties are incentivized to coordinate (or there is only one party).
But if you have two or more independent systems, all of which provide some kind of API or import/export mechanism, you can never guarantee those systems will stay in sync using a streaming or event-driven approach. Worse, those systems will inevitably drift out of sync, or worse still, will propagate incorrect data across multiple systems, which can then only be reconciled by batch-like point-in-time snapshots. That raises the question of why use streaming if you ultimately need batch to make it work reliably.
Put another way, people say batch is a special case of streaming, so just use streaming. But you could also say streaming is a fragile form of sync, so just use sync. But sync is a special case of batch, so just use batch.
I agree! Lots more things are sync. Also: the state of my source files -> my compiler (in watch mode), about 20 different APIs in the kernel - from keyboard state to filesystem watching to process monitoring to connected USB devices.
Also, HTTP caching is sort of a special case of sync - the cache (say, nginx) is trying to keep a synchronised copy of a resource from the backend web server. But because there's no way for the web server to notify nginx that the resource has changed, you get both stale reads and unnecessary polling. Doing fan-out would be way more efficient than a keep-alive header if we had a way to do it!
CRDTs are cool tech. (I would know - I’ve been playing with them for years). But I think it’s worth dividing data interfaces into two types: owned data and shared data. Owned data has a single owner (eg the database, the kernel, the web server) and other devices live down stream of that owner. Shared data sources have more complex systems - eg everyone in the network has a copy of the data and can make changes, then it’s all eventually consistent. Or raft / paxos. Think git, or a distributed database. And they can be combined - eg, the app server is downstream of a distributed database. GitHub actions is downstream of a git repo.
I’ve been meaning to write a blog post about this for years. Once you realise how ubiquitous this problem is, you see it absolutely everywhere.
And then there's the third super-special category of shared data with no central server, and where only certain users should be allowed to perform certain operations. This comes up most often in p2p networks, censorship resistance etc.
In most cases, the easiest approach there is just "slap a blockchain on it", as a good and modern (think Ethereum, not Bitcoin) blockchain essentially "abstracts away" the decentralization and mostly acts like a centralized computer to higher layers.
That is certainly not the only viable approach, and I wish we looked at others more. For example, a decentralized DNS-like system, without an attached cryptocurrency, but with global consensus on what a given name points to, would be extremely useful. I'm not convinced that such a thing is possible, you need some way of preventing one bad actor from grabbing all the names, and monetary compensation seems like the easiest one, but we should be looking in this direction a lot more.
> And then there's the third super-special category of shared data with no central server, and where only certain users should be allowed to perform certain operations. This comes up most often in p2p networks, censorship resistance etc.
In my mind, this is just the second category again. It’s just a shared data system, except with data validation & Byzantine fault tolerance requirements.
It’s a surprisingly common and thorny problem. For example, I could change my local git client to generate invalid / wrong hashes for my commits. When I push my changes, other peers should - in some way - reject them. PVH (of Ink&Switch) has a rule when thinking about systems like this. He says you’re free to deface your own copy of the US constitution. But I don’t have to pull your changes.
Access control makes the BFT problem much worse. The classic problem is that if two admins concurrently remove each other, it’s not clear what happens. In a crdt (or git), peers are free to backdate their changes to any arbitrary point in the past. If you try and implement user roles on top of a crdt, it’s a nightmare. I think CRDTs are just the wrong tool for thinking about access control.
I can't wait to read that blog post. I know you're an expert in this and respect your views.
One thing I think that is missing in the discussion about shared data (and maybe you can correct me) is that there are two ways of looking at the problem:
* The "math/engineering" way, where once state is identical you are done!
* The "product manager" way where you have reasonable-sounding requests like "I was typing in the middle of a paragraph, then someone deleted that paragraph, and my text was gone! It should be its own new paragraph in the same place."
Literally having identical state (or even identical state that adheres to a schema) is hard enough, but I'm not aware of techniques to ensure 1) identical state 2) adhering to a schema 3) that anyone on the team can easily modify in response to "PM-like" demands without being a sync expert.
> And 'synchronisation' as a practice gets very little attention or discussion. People just start with naive approaches like 'download what's marked as changed' and then get stuck in the quagmire of known problems and known edge cases (handling deletions, handling transport errors, handling changes that didn't get marked with a timestamp, how to repair after a bad sync, dealing with conflicting updates, etc.).
I've spent 16 years working on a sync engine and have worked with hundreds of enterprises on sync use cases during this time. I've seen countless cases of developers underestimating the complexity of sync. In most cases it happens exactly as you said: start with a naive approach and then the fractal complexity spiral starts. Even if the team is able to do the initial implementation, maintaining it usually turns into a burden that they eventually find too big to bear.
CRDTs work well for linear data structures, but there are known issues with hierarchical ones. For instance, if you have a tree, two clients could each send a valid transaction whose combination makes a node an ancestor of itself.
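Concretely, as a sketch:

    // Replicas A and B start from the same tree: X and Y under root.
    const parent = new Map<string, string>([["X", "root"], ["Y", "root"]]);

    parent.set("X", "Y"); // replica A: move X under Y
    parent.set("Y", "X"); // replica B, concurrently: move Y under X

    // Each move is valid on its own, but after merging both, X and Y
    // point at each other and neither is reachable from the root.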
That said, there’s work that has been done towards fixing some of those issues.
Evan Wallace (I think he's the CTO of Figma) has written about a few solutions he tried for Figma's collaborative features, and Martin Kleppmann has a paper proposing a solution.
Martin Kleppmann in one of his recent talks about the future of local-first, mentions the need for a generic sync service for the 'local-first end-game' [0] as he calls it. Standardization is needed. Right now everyone and their mother is doing sync differently and building production platforms around their own protocols and mechanisms.
The problem is that the requirements can be vastly different. A collaborative editor is very different from, say, syncing encrypted blobs. Perhaps there is a one-size-fits-all, but I doubt it.
I've been working on sync for the latter use case for a while and CRDTs would definitely be overkill.
Automatic conflict resolution will always be limited. For example, who seriously believes that we'll ever be able to fully automate the handling of merge conflicts in version control? (Even if we recorded every single edit operation at the syntax-tree level.) And in regular documents the situation is worse, because you don't have formal parsers and type checkers and unit tests for them. Even for schematized structured data, there are similar issues at the semantic level, which a mere "it conforms to the schema" doesn't solve.
As long as all clients agree on the order of CRDT operations then cycles are no problem. It's just an invalid transaction that can be dropped. Invalid or contradictory updates can always happen (regardless of sync mechanism) and the resolution is a UX issue. In some cases you might want to inform the user, in other cases the user can choose how to resolve the conflict, in other cases quiet failure is fine.
Unfortunately, a hard constraint of (state-based) CRDTs is that merging causally concurrent changes must be commutative. That is, clients may never agree on the order of CRDT operations, yet they must arrive at the same state after applying them in any order.
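A grow-only set is the cleanest illustration: merge is set union, which is commutative, associative, and idempotent (formally, the states form a join-semilattice), so delivery order and duplicate delivery stop mattering.

    const union = (p: Set<string>, q: Set<string>) => new Set([...p, ...q]);

    const a = new Set(["x", "y"]); // replica A's state
    const b = new Set(["y", "z"]); // replica B's state

    // union(a, b) and union(b, a) are both {x, y, z}, and merging the
    // same state twice changes nothing. An operation whose merge doesn't
    // commute (plain overwrite, say) can't be expressed this way.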
I don't think that's required, unless you definitionally believe otherwise.
When clients disagree about the order of events and a conflict results, clients can be required to roll back (apply the inverse of each change) to the last point in time where all clients were in agreement about the world state. Then all clients re-apply all changes in the new, now-agreed-upon order. At that point every change has been applied, there is agreement about the world state, and the process starts anew.
This way multiple clients can work offline for extended periods of time and then reconcile with other clients.
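As a sketch (hypothetical shapes): this scheme requires every change to carry its inverse, and then reconciliation is mechanical:

    type State = Map<string, unknown>;
    type Change = { apply: (s: State) => State; invert: (s: State) => State };

    function reconcile(state: State, localSince: Change[], agreedOrder: Change[]): State {
      // 1. Roll back to the last state all clients agreed on.
      for (let i = localSince.length - 1; i >= 0; i--) {
        state = localSince[i].invert(state);
      }
      // 2. Replay everyone's changes in the newly agreed order.
      for (const c of agreedOrder) state = c.apply(state);
      return state;
    }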
I've looked at CRDTs, and the concept really appeals to me in the general case, but in the specific cases, my design always ends up being "keep-all-the-facts" about a particular item. But then you defer the problem of 'which facts can I throw away?'. It's like inventing a ___domain-specific GC.
I'd love to hear about any success cases people have had with CRDTs.
There was an article on this website not so long ago about using CRDTs for collaborative editing, and there was a silly example showing how leaky this abstraction can be. What if you have the word "color", and one user replaces it with "colour" while another deletes the word; what does the CRDT do in this case? Well, it merges the two edits into "u". This sort of thing makes me skeptical of using CRDTs for user-facing applications.
There isn’t a monolithic “CRDT” in the way you’re describing. CRDTs are, broadly, a kind of data structure that allows clients to eventually agree on a final state without coordination. An integer `max` function is a simple example of a CRDT.
The behavior the article found is peculiar to the particular CRDT algorithms they looked at. But they’re probably right that it’s impossible for all conflicting edits to “just work” (in general, not just with CRDTs). That doesn’t mean CRDTs are pointless; you could imagine an algorithm that attempts to detect such semantic conflicts so the application can present some sort of resolution UI.
> There isn’t a monolithic “CRDT” in the way you’re describing.
I can't blame people for thinking otherwise, pretty much every self-called "CRDT library" I've come across implements exactly one such data structure, maybe parameterized.
It's like writing a "semiring library" and it's simply (min, +).
It's still early, but we have a checkpointing system that works very well for us. And once you have checkpoints you can start dropping inconsequential transactions in between checkpoints, which you're right, can be considered GC. However, checkpointing is desirable anyway otherwise new users have to replay the transaction log from T=0 when they join, and that's impractical.
For me the main issue with CRDTs is that they have a fixed merge algorithm baked in - if you want to change how conflicts get resolved, you have to change the whole data structure.
I feel like the state of the art here is slowly starting to change. For too many years, I think, CRDTs got caught up in "conflict-free" as a "manifest destiny" sort of thing rather than a "hope and prayer", assuming they'd keep finding the right fixed merge algorithm for every situation. I started watching CRDTs from the perspective of source control, with a strong inkling that "data is always messy" and "conflicts are human" (conflicts are kind of inevitable in any structure trying to encode data made by people).
I've been thinking for a bit that it is probably about time the industry renamed that first C to something other than "conflict-free". There is no freedom from conflicts. There's conflict resistance, sure, and CRDTs can provide, in their various data structures, a lot of conflict resistance. But at the end of the day, if the data structure is meant to encode an application for humans, it needs every merge tool, review tool, and audit tool it can offer to deal with those conflicts.
I think we're finally starting to see some of the light in the tunnel in the major CRDT efforts and we're finally leaving the detour of "no it must be conflict-free, we named it that so it must be true". I don't think any one library is yet delivering it at a good high level, but I have that feeling that "one of the next libraries" is maybe going to start getting the ergonomics of conflict handling right.
This seems right to me -- imagine being able to tag objects or sub-objects with conflict-resolution semantics in a more supported way (LWW; edits from a human; edits from automation; human resolution required, with or without optimistic application of defaults; etc.).
Throwing small language models into the mix could make merging less painful too — like having the system take its best guess at what you meant, apply it, and flag it for later review.
I just want some structure where it is conflict-free most of the time but I can write custom logic in certain situations that is used, sort of like an automated git merge conflict resolution function.
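Something like this is what I'm picturing (a sketch of the ergonomics, not a real library): trivial cases merge automatically, with an escape hatch like a git merge driver:

    type Resolver<T> = (base: T, ours: T, theirs: T) => T;

    function merge<T>(base: T, ours: T, theirs: T, resolve?: Resolver<T>): T {
      if (ours === theirs || theirs === base) return ours; // no real conflict
      if (ours === base) return theirs;                    // only they changed it
      if (resolve) return resolve(base, ours, theirs);     // custom app logic
      throw new Error("conflict: no resolver registered");
    }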
I've been running into this with automated regex edits. Our product (Relay [0]) makes Obsidian real-time collaborative using yjs, but I've been fighting with the automated process that rewrites markdown links within notes.
The issue happens when a file is renamed by one client, and then all other clients pick up the rename and make the change to the local files on disk. Since every edit is broken down into delete/keep/insert runs, the automated process runs rapidly in all clients and can break the links.
I could limit the edits to just one client, but it feels clunky. Another thought I've had is to use ytext annotations, or just also store a ymap of the link metadata and only apply updates if they meet some kind of check (kind of like schema validation for objects).
If anyone has a good mental model for modeling automated operations (especially find/replace) in ytext please let me know! (email in bio).
Absolutely. My current product relies heavily on a handful of partner systems, adds an opinionated layer on top of them, and propagates data to CRM, DW, and other analytical systems.
One early insight was that we needed a representation of partner data in our database (and the downstream systems need a representation of our opinionated view as well). This is clearly an (eventually consistent) synchronization problem.
We also realized that we sometimes fail to sync (due to bugs, timing, or whatever) and need a regular process to resync data.
We've ended up with a homegrown framework that does both things, such that the same business logic gets used in both cases. This also makes it easy to backfill data if a chosen representation changes.
We're now on the third or fourth iteration of this system and I'm pretty happy with it.
Once you add a periodic resync you have moved the true synchronization away from the online "(eventually consistent) synchronization" and into the batch resync. At that point the online synchronization is just a performance optimization on top of the batch resync.
I've been in that situation a lot, and I'd always carefully consider if you even need the online synchronization at that point. It's pretty rarely required.
In our case it absolutely is. There are user facing flows that require data from partner systems to complete. Waiting for the next sync cycle isn't a good UX.