Nokia to demonstrate a technique for terabit-speed data over optical-fiber (zdnet.com)
104 points by wallflower on Sept 24, 2016 | hide | past | favorite | 49 comments



"Researchers will this week demonstrate a newly-refined data-transmission technique that can deliver one terabit per second (Tbps) over optical fiber"

We already had 10 Tbps per fiber pair: http://www.extremetech.com/internet/231074-googles-faster-un...


There's supposedly a 255 Tbps multi-mode cable[0] as well.

[0]: http://www.extremetech.com/extreme/192929-255tbps-worlds-fas...


Just to clarify, this is not regular multi-mode fiber, but a single fiber with seven cores.

A singlemode fiber is limited to about 100 Tbps per fiber at 10 bits/s/Hz. It is not obvious to me how much bandwidth usage has to grow before it becomes more economical to deploy this new kind of fiber rather than multiple (existing) singlemode fibers. For example, for high-bandwidth applications, MPO connectors with 12 or 24 fibers per connector are used.


I'm curious as to where you came up with your symbol rate numbers. The theoretical limit[1] is 2 bits/Hz, which for infrared light (~400 THz) would put the limit at ~800 Tbps.

[1] https://en.wikipedia.org/wiki/Nyquist_rate


The Nyquist signaling rate only applies to binary/two-level signals. Most optical communications systems use many levels to improve the data rate. StackExchange discussion: http://electronics.stackexchange.com/questions/21854/why-is-...
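To make the distinction concrete: for an idealized noiseless channel, the Nyquist limit with M signal levels is R = 2·B·log2(M), so the 2 bits/s/Hz figure only holds for binary (M = 2) signaling. A minimal sketch with illustrative numbers (the 10 GHz channel bandwidth is just an example, not a fiber parameter):

```python
import math

def nyquist_max_rate(bandwidth_hz, levels):
    """Nyquist maximum bit rate over a noiseless channel:
    R = 2 * B * log2(M), where M is the number of signal levels."""
    return 2 * bandwidth_hz * math.log2(levels)

# Binary signaling over an example 10 GHz channel: 2 bits/s/Hz -> 20 Gbps
print(nyquist_max_rate(10e9, 2))   # 20000000000.0
# 16 amplitude levels (as in PAM-16) quadruple that: 8 bits/s/Hz -> 80 Gbps
print(nyquist_max_rate(10e9, 16))  # 80000000000.0
```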


Ahhh, that makes sense! I wasn't aware that the Nyquist rate only applied to binary signals. Thanks for the SE link.


The usable part of infrared light (the amplifiable C and L bands in single-mode fiber) is about 10 THz. According to Shannon's theorem, the theoretical upper limit for spectral efficiency in a linear channel with 30 dB SNR is 10 bits/s/Hz, independent of modulation format. This is how I derived my rough estimate of the capacity of single-mode fiber.

The Nyquist rate is modulation dependent. Optical fiber can certainly achieve spectral efficiency over 2 bits/s/Hz.
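The parent's ballpark can be reproduced with Shannon's capacity formula; the 10 THz usable band and 30 dB SNR are the figures from the comment above, so this is a back-of-envelope sketch rather than a model of a real fiber link:

```python
import math

def shannon_capacity(bandwidth_hz, snr_db):
    """Shannon capacity C = B * log2(1 + SNR), with SNR given in dB."""
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr_linear)

# ~10 THz of amplifiable C+L band at 30 dB SNR:
capacity = shannon_capacity(10e12, 30)
print(capacity / 1e12)   # ~99.7 (Tbps)
print(capacity / 10e12)  # ~9.97 bits/s/Hz spectral efficiency
```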


Ok, Throndor provided a link with a better explanation of this. I have some reading to do!


What technology determines how cable bandwidth is segmented to different tenants on fiber optic trunk connections? In theory 10 Tbps would divide out to something like 1,000 dedicated 10Gbps L2 connections but I'm guessing there's a lot more to it than that.

Either way it's pretty cool to see the increases in trunk capacity. This is enabling for a huge range of applications.

(Edit: Bad math on bandwidth, must.have.coffee!)


Bandwidth can be segmented off using multiple technologies. Basically you can either dedicate wavelengths per tenant (typically 96 wavelengths on a DWDM system) or dedicate lower bitrates per customer (for example 1 Gbps) and mux them together with other traffic until you fill up a 10G or 100G wavelength.
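The division sketched above works out as follows; the 96-wavelength figure is from this comment, and the 100G/1G rates are illustrative, not any specific vendor's system:

```python
# How trunk capacity might divide across tenants on a DWDM system.
# 96 wavelengths and the 100G / 1G rates are illustrative figures only.
WAVELENGTHS = 96
GBPS_PER_WAVELENGTH = 100
GBPS_PER_TENANT = 1

trunk_gbps = WAVELENGTHS * GBPS_PER_WAVELENGTH
print(trunk_gbps)  # 9600, i.e. ~9.6 Tbps of trunk capacity

# Option 1: dedicate whole wavelengths -> up to 96 tenants at 100 Gbps each.
# Option 2: mux lower-rate services into each wavelength:
tenants = WAVELENGTHS * (GBPS_PER_WAVELENGTH // GBPS_PER_TENANT)
print(tenants)     # 9600 dedicated 1 Gbps tenants
```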


Thanks, that got me started. :)


The main difference is that this demonstration uses cables within Deutsche Telekom's network, so it's not a laboratory setup but a real-world test. (German source: http://www.golem.de/news/saser-muenchner-forscher-uebertrage...)


According to the article, FASTER has already entered service. It's the name of a deployed submarine cable, not a lab demo of new fiber technology.


Uh, the parent is talking about an in-operation undersea cable. So it's not a laboratory result; it's a cable deployed in really harsh conditions.


I am barely able to get a 20 Mbps ADSL2 connection here in Australia, let alone 100 Mbps. I doubt I'll see even a 1 Gbps connection become available in Australia in my lifetime. Lab conditions and real-world conditions are two different things. You can have a fast connection, but if the bandwidth isn't there to meet the demand, then you just end up with a fast link to the exchange and, from there, plain old slow internet.


This is all about core networks; the comparison to residential service is just bad journalism.

It seems, reading between the lines, that they were able to get a 10x bandwidth improvement over existing fiber with fancy new physical-layer encoding (QAM with some statistical magic).

And it's not in a lab; these are experiments between major German cities.
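The "statistical magic" is plausibly probabilistic constellation shaping, where low-energy constellation points are transmitted more often than high-energy ones, trading a little entropy (bits per symbol) for lower average power. A toy sketch under that assumption; the four-level alphabet and the probabilities below are made up for illustration:

```python
import math

def entropy_bits(probs):
    """Shannon entropy (bits/symbol) of a discrete distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def avg_power(amplitudes, probs):
    """Mean squared amplitude under the given point probabilities."""
    return sum(p * a * a for a, p in zip(amplitudes, probs))

amplitudes = [1, 3, 5, 7]            # hypothetical 4-level alphabet
uniform = [0.25, 0.25, 0.25, 0.25]   # plain equiprobable signaling
shaped  = [0.40, 0.30, 0.20, 0.10]   # favors low-amplitude points

print(entropy_bits(uniform), avg_power(amplitudes, uniform))  # 2.0 bits, power 21.0
print(entropy_bits(shaped),  avg_power(amplitudes, shaped))   # ~1.85 bits, power ~13.0
```

The shaped distribution carries slightly fewer bits per symbol but at markedly lower average power, which is the trade the real systems exploit to run closer to the Shannon limit.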


If it means replacing much of the current fiber-optic equipment, it's not going to be popular any time soon.


This is ridiculous.

Last-mile operators don't care very much about speed, but core network operators definitely do because performance is money. Replacing fiber optic equipment is damn near a trivial technical operation and business decision.


Not if you've bought enough politicians and can simply tell them to mandate data-rate limits on the core network they will pay you to build (as in the OP's country).


If it's between upgrading equipment and adding more lines I think it will be a very popular option.


Why not? Replacing cables is cheap if you built the infrastructure properly.

Switching my entire district from ADSL to fiber-to-the-building took a team of a dozen workers only 2 weeks, because all cabling between buildings and all street cabling is in cable tunnels where the cabling can be easily replaced with RC vehicles.

That means replacing those cables again in a few decades will be basically free, too.

For backbones, the same applies, except there the reward is even higher: They’re all in cable tunnels, where they can easily be replaced.


> That means replacing those cables again in a few decades will be basically free, too.

I think you and I have different definitions of free. There are still substantial costs in both labor and materials to replace the outside plant, even if there is existing ducting in place with free space.


Almost all of the costs of replacing the cabling in the US today is digging a trench, or getting access to privately owned telephone poles.

As evidenced by Google Fiber’s struggles.

If that cost is eliminated, replacing fiber outside is as cheap as replacing it inside a datacenter.


> Almost all of the costs of replacing the cabling in the US today is digging a trench, or getting access to privately owned telephone poles.

Firstly, if you have existing ducts there is no need for trenching. Secondly, a lot of trenching is replaced by directional boring and, where possible, by microtrenching or plowing. Thirdly, gaining access to poles isn't usually the problem; what takes time and money is getting them into such a state that new cable can be installed. Fourthly, your largest cost is always going to be labor, regardless of what you do. Lastly, even ignoring all the other costs, there is still equipment and materials, which will account for at least a fifth of the project costs.

So in summary, no almost all the cost is not in the line items you specify. A lot of money is going to get spent no matter what when you replace your outside plant.

> As evidenced by Google Fiber’s struggles.

Google is also being actively hampered by competitors that own the poles they need access to.

> If that cost is eliminated, replacing fiber outside is as cheap as replacing it inside a datacenter.

That's a bit optimistic, as the outside plant is far larger than any datacenter, is not administered by a single party, and is not a controlled environment.


Around here they're on telephone poles - we moved into an apartment in a building that wasn't wired for anything but POTS, so they just sent someone up the phone pole to pull a fiber cable straight into our 3rd-floor apartment.


Interesting; how many homes in a district?


Have fun counting this suburb: http://i.imgur.com/T3jcRzs.png (I literally don’t know, but pulled this image from HERE Maps – Google Maps doesn’t have the suburb on it at all yet)


Equipment gets replaced all the time. Obviously there is a diffusion curve. It's not only absolute costs that matter; costs per bit or bit-mile matter too. Demand, and its economics, determine how popular it will become and how soon.

One thing is undeniable though: bandwidth demand is steadily growing, and there isn't a clear end in sight. And where there is demand, there is progress.


Well, you do live in probably the most Internet-hostile part of the world.


I can think of a few more hostile places. Cuba is doing well to qualify, but North Korea is still king of the hill.


Have you lived through the jump from 9600 bps or 14.4k to 20 Mbps?


The problems in Australia are entirely political in nature (corruption/collusion).

There is absolutely ZERO reason for a national fiber backbone to enforce data caps and encourage metered plans, but there it is, in Australia. Countries considered the backwater of Europe, like Romania, roll on the floor laughing at you ($15 unmetered 1 Gbit home connections).


I agree with you that the nature of the problem in Australia is political, but not that the main root cause is corruption/collusion.

The main problem is that the politicians made promises they cannot keep: cheap fiber to everyone at the same price, at no cost to the state since it will pay back itself.

Well, turns out it's not cheap to build out fiber to everyone, costs differ enormously between the city and the bush and there isn't enough revenue coming in to pay for it.

Since no politician will ever admit they were wrong, consumer prices can't be raised and the buildout cannot be stopped.

Desperate to save money and face, the politicians opted to switch from fiber to DSL, bought up all the cable TV companies to substitute CATV for fiber, forbade any competition with the national fiber company, and proceeded to squeeze every last penny out of the Internet service providers, since they couldn't touch consumer prices.

While there are zero reasons for data caps and metered plans, the national fiber network is so expensive to use, ISPs have no other option but to use data caps and metering so as not to go out of business immediately.

The end result is very poor and very slow service on the national fiber network with customers complaining and wanting to stay on old ADSL connections, which is ironic. Unfortunately they can't since the copper network is being decommissioned, so as not to compete with the national fiber network and to bring in more money to the government owned company.


It's a community-organization problem. If only you connect, it will be costly, but if a few hundred people come together, it will be cheap.


If you voted Liberal in the last election then sorry, you made your bed; otherwise I sympathise. This is for backbone infrastructure, though, and I think it's likely you will see gigabit connections in the next 10 years, just at an enormous cost to the taxpayer, or through 5G or some such.


Too bad Deutsche Telekom is blocking fiber build-out for customers in Germany by pushing VDSL vectoring.


Is that the reason for the slow fiber roll-out in Germany? I just looked at Berlin and there's a small neighborhood in Mitte that has it, otherwise there's nothing. Can't any other company than Telekom lay out fiber?


Telekom is toying with (super) vectoring and G.fast to avoid having to roll out fiber to every household. I guess it makes financial sense, because they are still moving the network access points closer to homes, and those are usually connected with fibre. However, this will scale to 500-1000 Mbps at best, and probably only for a few people, so at some point they will have to do FTTH, especially because cable providers are already pushing 400 Mbps in many places in Germany.

DNS.NET and Versatel (now 1&1) have their own fibre networks in some areas of Berlin, and they actually do FTTH if you are one of the lucky few. I live in a pretty dense and central area of Berlin and all I get is 50 Mbps VDSL, because I want to avoid the cable providers, which have an inferior network. For private usage it's still okay, and I have access to 1 Gbps at work, but I still can't wait for vectoring to arrive.


Any idea why we have nothing like M-net though? I understand Berlin is poor, but with all the tech startups nowadays, I'm sure there would be demand in a few neighborhoods.

This is one ___domain where Berlin is clearly lagging behind most European cities. LTE coverage is another. I personally don't care much, but the reliability of Telekom's network in Berlin is a total joke (Vodafone seems a bit better; O2/E-Plus suck, but it's cheap at least).


Probably because Berlin's status as a tech hub is a relatively recent development, and before that the city was way behind other major German cities that had strong economies for decades. DNS.NET does FTTH, but they are super small compared to M-net, so I hope this will get better in the next few years. Cable in the west of Berlin through Kabel Deutschland is fast and okay; in the east, with Telecolumbus, it sucks pretty badly in some areas.

I have no problem with Telekom LTE, though; it seems to generally work fine and gives me >50 Mbps in most areas. It's often faster than my 50 Mbps at home.


Can you explain:

"I want to avoid the cable providers that have an inferior network"

I live in Köln, and because of the distance to the nearest loop, I don't get the 100 Mbit offer from Telekom… only 18 Mbit VDSL at best. I went to UnityMedia cable and get 200 Mbit. Unity is an Evil Media Company, but I haven't noticed anything wrong with their network. So what am I missing?


Kind of a Berlin problem, I guess. In the places I lived before, I had UnityMedia and was generally pretty happy with it. In west Berlin you have Kabel Deutschland, which is also okay (but offers only fair-use flat rates, yuck), but in the east, where I live, there is only Telecolumbus if you choose cable. While they offer 400 Mbps, there are a lot of dense places where people complain about <10 Mbps in the evenings or on weekends.

In general, with cable, more people have to share the same bandwidth, especially if the provider cheaped out and only does what's necessary. You don't even get your own IPv4 address; you are basically sharing public IPs with other people through NAT, which makes accessing anything at your home more complicated as well.


DS-Lite + shoddy AFTRs unless you're on a business contract, meaning a subpar IPv4 experience. At least we have router freedom now.


If memory serves, there have been commercial multiplexers that could do that for a long time.


Just a thought - maybe Google will buy the new Nokia this time to expand Fiber...


Google's problem in rolling out fiber is not the equipment; it's putting the actual cable on poles and in ducts. Buying Nokia would not help one bit with that.


I'd love it if Google Fiber would come to my area (suburbs of eastern Mass.) regardless of speed! One gigabit sounds so much better than what we have now.


Ok this is interesting tech and I like the light being shined on the remnants of Nokia occasionally, but the headline is misleading clickbait. They compare the new speed against today's fast consumer fiber, even though they say in the article that this is a new tech for backbones. It feels like the headline just tries to work "Google" into it as an adversary to get the clicks :(

Go go Nokia Bell, go away zdnet ;)


I'm excited to hear the new BS excuses for data caps existing once common infrastructure can handle this bandwidth.



