It's not just web apps; TCP itself behaves badly under high-latency conditions. If you have no choice but to use high-latency links, you are better off with a protocol like QUIC, which has experimented with forward error correction. If you need to send a lot of data over a high-latency link, you need to start looking at less common approaches.
When the link is unreliable -- as it is in most wireless systems -- TCP also tends to behave suboptimally as it assumes lost packets are due to congestion. Again, if you need to deal with unreliable links, forward error correction is your friend.
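To make the "loss is treated as congestion" point concrete, here is a minimal sketch using the classic Mathis et al. approximation for loss-limited TCP throughput, roughly MSS * C / (RTT * sqrt(p)). The RTT and loss-rate combinations below are illustrative assumptions, not measurements of any real link.

    # Back-of-envelope: loss-limited TCP throughput (Mathis et al. approximation).
    # throughput ~ (MSS * C) / (RTT * sqrt(p)), with C ~ 1.22 for periodic loss.
    # The RTT / loss values below are illustrative assumptions, not measurements.
    from math import sqrt

    MSS = 1460 * 8          # maximum segment size, in bits
    C = 1.22                # constant from the Mathis model

    def tcp_throughput_bps(rtt_s, loss_rate):
        return (MSS * C) / (rtt_s * sqrt(loss_rate))

    for label, rtt, p in [
        ("LAN     (1 ms RTT, 0.01% loss)", 0.001, 1e-4),
        ("LEO-ish (30 ms RTT, 0.1% loss)", 0.030, 1e-3),
        ("GEO    (600 ms RTT, 1% loss)  ", 0.600, 1e-2),
    ]:
        print(f"{label}: {tcp_throughput_bps(rtt, p) / 1e6:.1f} Mbit/s")

The steep drop as RTT and loss grow together is the effect being described: standard loss-based TCP backs off on every drop, whether or not the link was actually congested.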
Have a look at Google's BBR congestion control algorithm [1]. This is perhaps a solved problem now (it's still not supported on all platforms, but only the side sending the bulk of the data needs to have it).
No, BBR does deliver significant performance improvements on high packet loss links. It is designed as a general-purpose algorithm (not just for high-bandwidth links): they are using it on the YouTube (and other) content servers as well as between their datacenters. I believe optimising delivery of content to high-loss, high-latency mobile users was one of the use cases they designed for.
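For what it's worth, on a Linux sender (kernel 4.9+ with the tcp_bbr module available) BBR can be enabled system-wide via the net.ipv4.tcp_congestion_control sysctl, or per socket as in the rough sketch below, which illustrates the "only the sending side needs it" point. The host and port are placeholders, and the silent fallback is an assumption about how you might want to degrade.

    # Sketch: opting a single sending socket into BBR on Linux (kernel 4.9+ with
    # the tcp_bbr module available). Only the side sending the bulk of the data
    # needs this; the receiver's stack is unchanged. Host/port are placeholders.
    import socket

    def connect_with_bbr(host, port):
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        try:
            # TCP_CONGESTION is Linux-specific (exposed by Python 3.6+).
            s.setsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, b"bbr")
        except (AttributeError, OSError):
            pass  # fall back to the system default (e.g. cubic) if BBR is absent
        s.connect((host, port))
        return s

    if __name__ == "__main__":
        conn = connect_with_bbr("example.com", 80)
        if hasattr(socket, "TCP_CONGESTION"):
            # Read the option back to confirm which algorithm is actually in use.
            print(conn.getsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, 16))
        conn.close()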
It sounds similar to fiber latency. I've never thought about this before, but shouldn't a round trip to a LEO satellite be in the range of 10 ms? (1,200 km / 300,000 km/s) = 4 ms, x 2 = 8 ms. I wonder what the complete breakdown is; is most of it inter-satellite optical communication? Does the simplified topology offset the additional 8 ms?
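A quick sketch of the propagation arithmetic behind that comparison, assuming the vacuum speed of light for the radio leg, roughly c/1.47 for light in fiber, and ignoring all queueing and processing. The 1,000 km fiber path and the 1.5x routing penalty are made-up illustrative figures, not data about any real network.

    # Propagation delay only; queueing, processing, and switching are ignored.
    # Speeds: ~300,000 km/s in vacuum, ~204,000 km/s in fiber (n ~ 1.47).
    C_VACUUM_KM_S = 299_792
    C_FIBER_KM_S = C_VACUUM_KM_S / 1.47

    def one_way_ms(distance_km, speed_km_s):
        return distance_km / speed_km_s * 1000

    leo_altitude_km = 1200  # figure quoted in the comment above
    print(f"ground -> LEO sat -> ground: {2 * one_way_ms(leo_altitude_km, C_VACUUM_KM_S):.1f} ms")

    # For comparison: a 1,000 km terrestrial path over fiber, first as the crow
    # flies, then with a 1.5x routing penalty (fiber rarely follows a geodesic,
    # as the Google Fiber anecdote below illustrates).
    path_km = 1000
    print(f"1,000 km over fiber (straight):   {2 * one_way_ms(path_km, C_FIBER_KM_S):.1f} ms RTT")
    print(f"1,000 km over fiber (1.5x route): {2 * one_way_ms(1.5 * path_km, C_FIBER_KM_S):.1f} ms RTT")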
I had Google Fiber installed in my apartment a year ago. I was walking through the parking garage and I caught one of the techs doing alignment splicing. I asked him where the nearest Google Fiber hut was and he said about a quarter mile away, near a Taco Bell. I remarked that's pretty good, but he said, well, it's even longer than that because it's not line of sight. He pulled out a tool that could measure the total length of installed fiber via reflection... It was over 20,000 feet. And that was just to go from the garage to a fiber hut 1/4 mi away. I don't know how the topology of Google's network works, but I thought that was crazy.
Granted, LEO satellites are roughly 1,200 km away, but it's worth remembering that fiber paths are nowhere near line-of-sight either.
Traditional satellite internet goes through geostationary satellites roughly 36,000 km up, so that's a ping of about 240 ms, minimum. That's assuming everything else in the system happened instantaneously.
If you can, however, manage to solve the whole problem of aiming at a moving target 360 kilometers away, then maybe you could do a LEO network, and yeah, the latency would be closer to 1.2 ms each way (roughly 2.4 ms round trip), again assuming literally everything else was instantaneous.
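Those two figures are easy to sanity-check from geometry alone. The sketch below computes round-trip propagation delay to a satellite as a function of elevation angle (the elevation angles are arbitrary illustrative choices); it also shows why "minimum" matters: the 1.2 ms-per-leg figure only holds when the satellite is nearly overhead.

    # Minimum ping = pure propagation to the satellite and back; "minimum" really
    # does mean the satellite is high in the sky. Slant range grows quickly as the
    # elevation angle drops, which is part of why tracking matters.
    from math import sqrt, sin, cos, radians

    C_KM_S = 299_792
    R_EARTH_KM = 6371

    def slant_range_km(altitude_km, elevation_deg):
        """Distance from a ground terminal to a satellite at the given elevation."""
        e = radians(elevation_deg)
        r = (R_EARTH_KM + altitude_km) / R_EARTH_KM
        return R_EARTH_KM * (sqrt(r * r - cos(e) ** 2) - sin(e))

    def rtt_ms(altitude_km, elevation_deg):
        return 2 * slant_range_km(altitude_km, elevation_deg) / C_KM_S * 1000

    for elev in (90, 40, 25):
        print(f"LEO @ 360 km, {elev:2d} deg elevation: {rtt_ms(360, elev):5.2f} ms RTT")
    print(f"GEO @ 35,786 km, overhead:      {rtt_ms(35_786, 90):5.1f} ms RTT")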
As you note, LEO is very different from geostationary. A moving target doesn't strike me as impossible: a LEO satellite has a very predictable path, a stationary dish has a predictable motion relative to that path, and a mobile phone knows where it is at all times.
There are several companies building (and a few shipping) flat-panel phased-array antennas that can "point" at a moving target 360 km away. They can also switch to a new satellite instantly.
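To put a rough number on "moving target": the sketch below estimates circular-orbit speed and the peak angular rate a ground antenna has to track during an overhead pass, ignoring Earth's rotation (which would reduce the rate slightly). On the order of a degree per second at 360 km is awkward for a cheap mechanical mount but easy for electronic beam steering, which is presumably why the flat-panel designs matter.

    # Rough tracking requirement for a ground antenna: circular-orbit speed and
    # the peak angular rate during an overhead pass (Earth's rotation ignored,
    # so this slightly overstates the rate). Altitudes are the ones discussed above.
    from math import sqrt, degrees

    MU_EARTH = 398_600        # km^3/s^2
    R_EARTH_KM = 6371

    def orbital_speed_km_s(altitude_km):
        return sqrt(MU_EARTH / (R_EARTH_KM + altitude_km))

    def peak_angular_rate_deg_s(altitude_km):
        # Fastest apparent motion happens when the satellite is directly overhead:
        # angular rate ~ tangential speed / distance.
        return degrees(orbital_speed_km_s(altitude_km) / altitude_km)

    for h in (360, 1200):
        print(f"h = {h:4d} km: {orbital_speed_km_s(h):.2f} km/s, "
              f"peak ~{peak_angular_rate_deg_s(h):.2f} deg/s at the antenna")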
Capacity would be a huge problem though, even with 5000 satellites, if you wanted to rival fixed-line or cellular wireless networks for a significant part of the world's population...
To put that in perspective, in Australia we have 13,400 cellular towers [1], with an extra 9,000 proposed to be built, to cover just 26.3 million subscribers [2]. This is to deliver 2.8% of the data downloaded over the last year (97.2% having been delivered over fixed line) [2].
So, with a LEO constellation, you might have a few dozen satellites over the populated parts of Australia at any one time, each of which has a finite amount of capacity (both uplink/downlink spectrum and capacity back to the ground station) shared between all the users.
I think this network could be a massive benefit for rural areas, developing countries, maritime markets, etc., but to think that it would have the capacity to deliver anything rivalling cellular networks to most subscribers in metro/suburban areas is very far-fetched.
> a 2015 proposal from Samsung has outlined a 4600-satellite constellation orbiting at 1,400 kilometers (900 mi) that could provide a zettabyte per month capacity worldwide, an equivalent of 200 gigabytes per month for 5 billion users of internet data
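Taking the quoted Samsung numbers at face value (1 ZB/month read as 10^21 bytes, 4,600 satellites, 5 billion users, a 30-day month), the back-of-envelope below shows what they imply per satellite. This is illustrative arithmetic, not a link-budget analysis.

    # Sanity check on the Samsung numbers quoted above (1 ZB/month, 4,600
    # satellites, 5 billion users), with 1 ZB taken as 10^21 bytes and a month
    # as 30 days. Illustrative arithmetic only.
    ZB = 10**21                      # bytes
    SATELLITES = 4600
    USERS = 5_000_000_000
    MONTH_S = 30 * 24 * 3600

    per_user_gb = ZB / USERS / 10**9
    per_sat_avg_gbps = ZB * 8 / SATELLITES / MONTH_S / 10**9

    print(f"per user per month: {per_user_gb:.0f} GB")              # ~200 GB
    print(f"avg per satellite:  {per_sat_avg_gbps:.0f} Gbit/s sustained")

Sustaining roughly 670 Gbit/s per satellite around the clock is the scale the headline claim implies, which is a useful yardstick for the capacity scepticism above.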
That is actually surprisingly low. How many of those towers are due to geographic constraints rather than capacity constraints? Each town tends to have its own tower, and then a tower often exists between towns to fill dead spots. I can get 3G or 4G along almost the entire length of my state, and I can imagine in a few of those spots it's just me and a handful of others that are connected.
Really interested to see how this plays out. Given the number of satellites needed to make this happen, and the number of companies launching similar initiatives, it seems low-Earth-orbit real estate is about to get pretty competitive. Does anyone know if there is a limit on the number of satellites that can occupy LEO? SpaceX alone wants to launch around 5,000.
That just means SpaceX will become one more giant ISP that is not required to support net neutrality (NN) in the US. I love SpaceX, and I'm optimistic that they will be a much better provider than, e.g., Comcast (low bar, I realize)... but they are not the solution to the FCC/NN debacle.
https://en.wikipedia.org/wiki/SpaceX_satellite_constellation