TCP/IP is so entrenched in everything, literally, that it will still be in use when we leave this planet and it gets swallowed up by the sun. The news will report the destruction of Earth and explain that the longest-lasting legacy of the planet and its inhabitants is TCP/IP, then report that the transition to IPv6 is going well and everyone will be on it soon...
I think the real issue is that the operating systems did not properly abstract away the APIs, protocols and networks. On Plan 9 the dial string is my favorite part of networking, because you optionally specify the network in addition to the address and port, all in a single string: net!address!service. To dial an ssh server on port 1234 you run "ssh user@tcp!server.net!1234". The network database (ndb) can then be set up to alias names with protocols or ports, so you can omit parts of the dial string for known services, e.g. ssh defaults to tcp and port 22. The dial string neatly wraps up the entire network connection into a single string, relieving the program of having to offer command-line arguments for port numbers, which leaks protocol details into the code and makes things ugly and difficult to change.
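For what it's worth, here is a minimal client sketch in Plan 9 C: dial() takes the whole dial string, so nothing in the program knows or cares which network is underneath. The address is just the server.net!1234 example from above.

    #include <u.h>
    #include <libc.h>

    /* minimal Plan 9 client sketch: dial() takes the whole dial string,
       so the program never parses addresses, ports or protocols itself */
    void
    main(void)
    {
        int fd;

        fd = dial("tcp!server.net!1234", 0, 0, 0);
        if(fd < 0)
            sysfatal("dial: %r");
        write(fd, "hello\n", 6);
        close(fd);
        exits(0);
    }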
This lets your server take a dial string and listen on any network - even if that network isn't IP! Say you wanted to bring IPX/SPX back from the dead and use it in any program: you write an IPX/SPX stack that binds over /net, then tell your server to listen on spx!*!555. Now your client also runs an IPX/SPX stack bound over /net, you hand the client program the dial string spx!server!555, and your program starts communicating over IPX/SPX to the server. Done. Want to connect to an ssh server using quic on Plan 9 (assuming an imaginary quic stack)? Just change the dial string to quic!host!port. Done.
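The server side is the same idea with announce()/listen()/accept(). A rough sketch, using the spx!*!555 string from above (the IPX/SPX stack itself is, of course, imaginary):

    #include <u.h>
    #include <libc.h>

    /* server-side sketch: announce() also takes a dial string, so the same
       code listens on tcp or on a hypothetical IPX/SPX stack bound over /net,
       depending only on the string it is handed */
    void
    main(void)
    {
        int acfd, lcfd, dfd;
        char adir[40], ldir[40];

        acfd = announce("spx!*!555", adir);
        if(acfd < 0)
            sysfatal("announce: %r");
        for(;;){
            lcfd = listen(adir, ldir);
            if(lcfd < 0)
                sysfatal("listen: %r");
            dfd = accept(lcfd, ldir);
            if(dfd >= 0){
                write(dfd, "hi\n", 3);
                close(dfd);
            }
            close(lcfd);
        }
    }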
Once you use an OS with clean and simple abstractions you are left craving more of it. I wish more people took an interest in building much more radical operating systems.
That assumes that the speed of light is an insurmountable speed limit. We can't be 100% sure of that.
But even if it is, TCP can be used with any latency if you configure the timeouts accordingly and maintain a large enough send buffer to allow retransmissions.
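Roughly, on Linux that amounts to something like the following; the buffer size and timeout are illustrative numbers, not anything tuned for a real interplanetary link:

    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    /* stretch TCP's patience for an extreme-latency link (Linux) */
    int make_patient_socket(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);

        /* keep lots of unacknowledged data in flight so the pipe stays full
           (the kernel may clamp this to net.core.wmem_max) */
        int sndbuf = 64 * 1024 * 1024;
        setsockopt(fd, SOL_SOCKET, SO_SNDBUF, &sndbuf, sizeof sndbuf);

        /* don't abort the connection until data has gone unacknowledged
           for 30 minutes; TCP_USER_TIMEOUT is in milliseconds */
        unsigned int timeout_ms = 30 * 60 * 1000;
        setsockopt(fd, IPPROTO_TCP, TCP_USER_TIMEOUT,
                   &timeout_ms, sizeof timeout_ms);

        return fd;
    }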
I don't think the volume of data the average user will want to send to Mars over the next 100 years will be an important enough factor. I envision Mars comms using some kind of relay service, to which you'd communicate using TCP.
Earth-Mars will be a store and forward network not unlike the old UUCP or BBS networks like FidoNet. You will send blobs of data via something that probably looks like a replicated S3 bucket and get notified when replication of an object has occurred.
IP is usable to the Moon but a lot of protocols would need latency constants adjusted and very large packets would be desirable. Store and forward with custom replication protocols might still be more efficient for large volumes of data.
Actually, even TCP is going to be replaced. I just want to point to Google's initial work on QUIC, which has now become a standard. It is based on UDP and re-implements TCP's features on top of UDP, together with cryptography, while allowing IP address changes so that moving clients can keep their connections.
And yet, even with QUIC and a brand new Google phone on a Google wifi network and a Google Fi cell connection, I still can't walk away from my house and have a video call migrate from wifi to 5G without the audio glitching and dropping frames.
All the mobility benefits of QUIC mean nothing when the rest of the software stack isn't designed to let it work.
* Android lets an application talk over both wifi and 5G at the same time.
* Android exposes information about signal strength on a per-packet basis, so that the application can decide at some point that too many packets are too close to not being received, so it's time to send data over 5G in addition. It should also expose data about packet retransmissions at the physical layer (i.e. collisions and backoff times due to another device using the medium).
* When the 5G data flow is established and channel parameters like delay and loss are characterized, then stop sending data over wifi.
* And since this process never involved delaying any data by a roundtrip, nor any packet loss, there is no need to drop any frames.
Note that the cell network has been able to hand off voice calls from one cell tower to another without glitches since the '80s!
I think the factor you're missing is that the WiFi and 5G connections may have substantially different latencies to the other person. Going from a lower-latency to a higher-latency connection will always involve audio dropping out and video pausing. And in the opposite direction, it's preferable to skip over a few frames in order to reduce latency, rather than maintain higher latency.
I wonder if you don't see this with cell phones because the latency is generally identical, or if it's just less noticeable with audio than with video? I guess I'd also wonder if cell towers really do hand off without glitches, since there always seem to be glitches when you're driving, but you don't have the slightest idea whether they're from tall buildings or interference or Bluetooth or handoffs or what, or even on your end or the other person's end.
Video conferencing has latency in the 300ms - 1000ms range[1]. The actual network component of that is pretty small. And video conferencing software already has logic for time stretching/compression to handle variable latency - typically they'll speed up or slow down gaps between words and sentences.
Actually no -- things like FaceTime are more like 90 ms, while other common products like Zoom and Meet are around 150-200. These are minimums with somebody in your own city. This is actually from my own testing in the past, where FaceTime was the clear winner, since Apple seems to care a lot about latency and its software is custom written for its own hardware.
But networks absolutely can add major latency; have you never had a slow Zoom call? It's because of congestion building up and radio interference, not Zoom's software. That's what leads to things like 1,000 ms latency, which makes back-and-forth conversation very difficult. Moderate-to-major perceptible latency issues in conversation are always because of the network.
And yes some products do time stretching but that's also what people often call glitches because it's very weird.
Why not? It's not obvious to me why a seamless transition would be impossible.
Isn't the whole point of TCP that individual packets can take different paths over different networks and when they reach the destination they can be sequenced? Why should changing the network disrupt the individual packets from traveling independently?
There are several things that make this difficult. Much of the difficulty relates to the device changing its network address. Seamless transition requires that the application can do all of the following (one transport-layer alternative, MPTCP, is sketched after this list):
- Find the new address; i.e. Cell provider vs. Residential/Business ISP
- Associate the new address with the same flow
- Duplicate packets and reassemble them, or switch to the "better" path/interface.
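For what it's worth, Multipath TCP is one existing transport-layer answer to the "same flow across changing addresses" problem. A rough Linux sketch (needs kernel 5.6+ and a peer that also speaks MPTCP):

    #include <netinet/in.h>
    #include <sys/socket.h>

    #ifndef IPPROTO_MPTCP
    #define IPPROTO_MPTCP 262   /* Linux 5.6+ */
    #endif

    /* the application just asks for a multipath socket; adding and removing
       the WiFi and cellular addresses within the same flow is then handled
       by the kernel's path manager, not by the app */
    int open_multipath_socket(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, IPPROTO_MPTCP);
        if (fd < 0)
            fd = socket(AF_INET, SOCK_STREAM, 0);   /* fall back to plain TCP */
        return fd;
    }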
On both Android and iOS, a regular app can't choose to send packets over both 5G and wifi at the same time. That's needed to set up a new connection while still using the old one.
Never is a long time. QUIC isn't great for this, because while it has IP address flexibility, it's not designed to have multiple paths simultaneously active.
If you design for this use case, you can make it work today; especially since the user is on a video call and only asking for the audio to be glitchless. Sending audio over both wifi and cell is possible and simple and would solve audio at the expense of double the audio bandwidth. More bandwidth efficient methods are left to the reader.
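Something like the following is all the "simple" version amounts to: one UDP socket per interface and every audio packet written to both. The local addresses are placeholders, and on Android/iOS you'd still need OS cooperation to keep the cellular interface usable while WiFi is up.

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stddef.h>
    #include <sys/socket.h>

    /* bind a UDP socket to a specific interface's local address,
       e.g. "192.0.2.10" (WiFi) and "198.51.100.7" (cellular) as placeholders */
    static int bind_to_local(const char *local_ip)
    {
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        struct sockaddr_in local = {0};
        local.sin_family = AF_INET;
        inet_pton(AF_INET, local_ip, &local.sin_addr);
        bind(fd, (struct sockaddr *)&local, sizeof local);
        return fd;
    }

    /* duplicate every audio packet across both paths */
    void send_on_both(int wifi_fd, int cell_fd, const void *pkt, size_t len,
                      const struct sockaddr_in *dst)
    {
        sendto(wifi_fd, pkt, len, 0, (const struct sockaddr *)dst, sizeof *dst);
        sendto(cell_fd, pkt, len, 0, (const struct sockaddr *)dst, sizeof *dst);
    }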
TCP is fine and isn't going anywhere. QUIC is an overengineered contraption that only exists to serve more ads. SSH doesn't need it. Postgres doesn't need it. Nobody needs QUIC except Google.
QUIC is not without defects, I'm with you there, but almost any "web-scale" company dealing with a lot of cellular connections would disagree with your statement: Uber, Verizon, Cloudflare, Meta, and Fastly, among others, some of which have reported very significant latency reductions.
SSH's "master" mode connection sharing would benefit from QUIC / HTTP/3 head of line blocking elimination just as much as browsers would.
Run a heavy rsync using connection sharing with an interactive session and you'll notice the interactive sessions latency suffering noticably.
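For anyone who wants to reproduce that: connection sharing is just a few lines of ~/.ssh/config (the host name here is a placeholder), after which every session to that host rides one shared TCP connection.

    # example ~/.ssh/config for connection sharing ("master" mode)
    Host example.org
        ControlMaster auto
        ControlPath ~/.ssh/cm-%r@%h:%p
        ControlPersist 10m

With that in place, an interactive "ssh example.org" and a big "rsync -a somedir/ example.org:somedir/" share the same TCP connection, and a single lost segment stalls both of them.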
Long-lived SSH connections would benefit from QUIC / HTTP/3 session survivability.
The fact that it takes so long to transition to IPv6, like many other obstructions to technological/social progress, comes down to economics: the status quo is a business gain for a certain small group of people and organisations.
I'd say that it's not that there's some gain causing people/organizations to be obstructive, but rather that IPv4 and its address shortage simply isn't a problem for the influential people and organizations. They're doing just fine, and for them there is nothing that needs to be fixed; IPv6 is needed by "someone else" who can be safely ignored with no noticeable effect on the bottom line.
I propose the Willis law (are you allowed to name a law after yourself?):
The spread and reach of TCP/IP will accelerate at a pace faster than that of the transition to IPv6 indefinitely, therefore we are saddled with IPv4 until the end of time.
In fact, we know that IPv4 lives in the observable universe. You can travel at the speed of light towards IPv6, but the expansion rate of space-time itself (you can think of space-time as a series of tubes) is not limited by the speed of light. Thus, IPv6 is only theorized to exist, in order to explain other physical phenomena. Worse, even if it were proven to exist by a distant observer, there's no way for them to relay that information back to us, because our routers here at Earth would drop the packets.
It's just plain cost. Reworking things takes engineering time, and engineers need to be paid.
If I were to list impediments to the technical development of the Internet, local telco monopolies would be much higher in the list, but it's another kettle of fish.
That's not necessary; listening on the wildcard IPv6 address without the "IPv6 only" flag enabled on the socket is enough. An IPv4 connection arriving at it is mapped to a special IPv6 address, so the code doesn't have to care about IPv4.
Unless you were using Windows, where IIRC the IPv4 and IPv6 stacks were separate, and a single socket couldn't be used to listen to both at the same time. (This might already have been fixed in more recent Windows releases.)
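On the Unix side it's just a flag on the listening socket; a rough sketch (error handling omitted, port number arbitrary):

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    /* one wildcard IPv6 listener; with IPV6_V6ONLY off, IPv4 clients appear
       as IPv4-mapped addresses (::ffff:a.b.c.d) on the same socket */
    int listen_dual_stack(void)
    {
        int fd = socket(AF_INET6, SOCK_STREAM, 0);

        int v6only = 0;   /* 0 = also accept IPv4; some systems default to 1 */
        setsockopt(fd, IPPROTO_IPV6, IPV6_V6ONLY, &v6only, sizeof v6only);

        struct sockaddr_in6 addr = {0};
        addr.sin6_family = AF_INET6;
        addr.sin6_addr   = in6addr_any;
        addr.sin6_port   = htons(8080);   /* arbitrary example port */

        bind(fd, (struct sockaddr *)&addr, sizeof addr);
        listen(fd, SOMAXCONN);
        return fd;
    }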
People wanted to change the version bitfield from 4 to 6 and increase the number of bytes in the IP address field. Simple, KISS.
Instead of that we got a fragile, backward-incompatible, unnecessarily complex, jack-of-all-trades protocol. And people are surprised that even decades after the release of IPv6, it is still being avoided like the plague.
This comes up often. What do you think would happen if you sent an "IPv4+" packet? All the ossified equipment would drop it. So you have to replace all this equipment, ideally _once_, so we ought to make it count.
Consider that NAT64 is able to translate between IPv4 and IPv6 at the packet level. If the protocols were as different as you suggest, this wouldn't be possible.
Doesn't IPv6 remove broadcast, thus removing DHCP and ARP?
I mean, hosts need to be 'dual stack' to support IPv4 and IPv6 together - whereas TheLoafOfBread's proposal would allow support of both with one backward-compatible stack.
NAT64 is never in a position to need to translate DHCP & ARP, those live fully on one side of NAT64 ("4" side for DHCP & ARP, "6" side for SLAAC/DHCPv6 & NDP).
NAT64 is involved on the level of TCP & UDP, not lower level protocols.
DHCP and ARP use 32-bit addresses, so those protocols had to change regardless.
I would challenge you to find someone who knows the difference between Ethernet broadcast and multicast without looking up the answer. They are very similar mechanisms.
I know you are joking but I don't get the longevity of this joke. I've been running Debian with XFCE for more than 12 years, on PCs custom-built and branded, and on laptops both of enterprise quality and retail junk. Heck, our entire B2B commerce business runs the same setup, with far fewer (read: zero) issues compared to when we were using Windows 98, 98SE, XP SP3, 7, Vista, and 10, at which point we declined Microsoft's telemetry and ads and forced OneDrive shenanigans and promptly switched everyone to Debian+XFCE.
99.9% of the time, modern Linux works out of the box, every time.
So I'd say it's on par with Windows versions, except I practically never need to download and install drivers, or face ads.
Maybe I'm on a different planet on which the year of the Linux desktop has arrived for longer than a decade.
The year of IPv6 is already here, my home is already on IPv6 as is my mobile provider. Normal ISP and mobile provider in New York, not anything particularly geeky.
IPv6 is waiting on stubborn cloud networks and CDNs. Most of the edge now has it. My guess is that it’s mostly reluctance to introduce complexity when most customers are not asking for it. The biggest holdouts seem to be connected to Microsoft, with GitHub being one of the most relevant.
The Linux desktop will arrive if we just wait for Microsoft to keep making Windows worse. Linux doesn’t have to get better. MS just needs to incorporate more ads.
>IPv6 is waiting on stubborn cloud networks and CDNs. Most of the edge now has it.
Depends on the ISP. E.g. a lot of Verizon/Frontier FiOS residential customers don't have IPv6. Google statistics say the USA is at ~47% IPv6; Frontier FiOS is below that.