
I wish we could start over and redesign everything from Ethernet up to TLS with the lessons we now understand. So many layers could be merged, security could be so much easier, IP addressing hassles could be unnecessary.

But all the stuff that seems obvious now couldn't have been learned without the decades of kind of terrible hackery that is OSI and the associated awful stuff like DNS.

I'm not sure what the moral of that story is or how anything could have been done any better. Maybe the software APIs could have been less coupled to the actual protocols; things could have been designed so app code had to be agnostic to the number of bits in an IP address, etc.




I wonder if we will see more of the network layer move into the application layer, like what I think happened with QUIC.

I’m not really a network guy, but the thing I find interesting about QUIC is that it’s upgradable. Normalising regular browser updates was a godsend for the web, while TCP/IP remains hard to upgrade because you still, in 2023, have to upgrade your whole OS.

Protocols like TLS that sit outside the OS seem to have been much more dynamic, and that too is the promise of QUIC. It might be interesting to find operating system primitives that enable use of the networking hardware without having to implement the protocol itself. Although that stuff is above my pay grade.


The need to upgrade the OS is only a small part of the problem.

On the one hand, the fact that TCP is unchanging has led to network cards supporting TCP directly. You can shove a large segment of data into the network card and it will split the data up into TCP segments and transmit them. There are also big content companies that let the kernel do all TLS encryption; in some cases that can also be offloaded to hardware.
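
For illustration: from userspace, "shoving a large segment into the card" is just one big write, and whether the NIC does the segmentation itself is something ethtool will tell you. A rough, untested sketch (interface name, host and port are placeholders):

    import socket
    import subprocess

    # Show the NIC's offload features; look for "tcp-segmentation-offload: on".
    # "eth0" is a placeholder interface name.
    subprocess.run(["ethtool", "-k", "eth0"], check=False)

    payload = b"x" * (4 * 1024 * 1024)  # 4 MiB handed to the kernel in one call

    # From the application's point of view this is a single write; the kernel
    # (or, with TSO enabled, the NIC itself) chops it into MSS-sized segments.
    with socket.create_connection(("192.0.2.10", 9000)) as s:
        s.sendall(payload)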

The big problem for TCP is all the boxes on the internet that understand TCP and don't understand TCP extensions. That could be home routers doing NAT, firewalls, etc. Those boxes are not only hard to upgrade, they also break things. QUIC fixes this by making sure that as much as possible is encrypted.
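
Concretely: about the only thing a middlebox can still parse in a QUIC packet is the invariant part of the header, i.e. the version and the connection IDs (RFC 8999/9000); packet numbers, ACKs and everything else are encrypted. A toy parse of the long header, just to show how little is left to ossify on:

    def quic_long_header_fields(pkt: bytes):
        """The only fields an on-path box can read from a QUIC long header."""
        assert pkt[0] & 0x80, "not a long header"
        version = int.from_bytes(pkt[1:5], "big")
        dcid_len = pkt[5]
        dcid = pkt[6:6 + dcid_len]
        scid_len = pkt[6 + dcid_len]
        scid = pkt[7 + dcid_len:7 + dcid_len + scid_len]
        # Everything past this point (packet number, frames, ACK ranges, ...)
        # is encrypted, so middleboxes can't grow dependencies on it.
        return version, dcid, scid

    # Toy packet: flags byte, version 1, a 4-byte DCID and a 4-byte SCID.
    pkt = (bytes([0xC0]) + (1).to_bytes(4, "big")
           + bytes([4]) + b"\xaa\xbb\xcc\xdd"
           + bytes([4]) + b"\x11\x22\x33\x44"
           + b"...ciphertext...")
    print(quic_long_header_fields(pkt))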

Some ISPs monitor TCP headers and from retransmissions they can figure out where problems are in the network. QUIC will take that ability away.
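
That is, the kind of passive inference that works today but not with QUIC; a toy version (the observed tuples are made up, pretend they came from a port mirror):

    from collections import Counter

    # (src, dst, seq) as seen on the wire by a passive monitor. A repeated
    # sequence number for the same flow usually means a retransmission, which
    # lets an ISP localize loss without ever touching the payload.
    observed = [
        ("198.51.100.7", "203.0.113.9", 1000),
        ("198.51.100.7", "203.0.113.9", 2460),
        ("198.51.100.7", "203.0.113.9", 2460),  # retransmitted segment
        ("198.51.100.7", "203.0.113.9", 3920),
    ]

    retransmissions = sum(n - 1 for n in Counter(observed).values() if n > 1)
    print("inferred retransmissions:", retransmissions)
    # QUIC encrypts its packet numbers and never reuses them, so this signal
    # simply isn't visible to the network any more.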

So QUIC is a mixed blessing.


I disagree.

If there’s a financial incentive (better performance or whatever) then the server side will upgrade; it’s a financial and timing decision with strong motivators.

But the client side problem is 5-6 orders of magnitude larger than the server side problem - and to make things worse, they basically don’t care.

People seem happy enough to upgrade their browser, but upgrading their OS is a big deal. Even I don’t like doing it, and I consider it important.

So I think a reasonable rule of thumb would be, the server side will take care of itself. But if you need your clients to reboot their computer to upgrade the network stack to improve your server performance, well, it ain’t gonna happen.


This is where a lot of high-performance stuff went in the last decade or so. HPC, HFT, AI training, several storage services, and clouds pretty much all do their networking in something other than a kernel. It turns out you can get huge performance gains by not using a one-size-fits-all networking system.

It only takes a few more steps for this model to trickle down to general applications, as HTTP and RPC stacks pick it up.


> I wonder if we will see more of the network layer move into the application layer, like what I think happened with QUIC.

https://lwn.net/Articles/169961/


Or the kernel becomes even less of a monolith on all fronts and more userland-like.

Both io_uring and eBPF would allow us to handle more complex tasks inside the kernel, reducing context-switch losses while still allowing the benefits of an upgradeable structure without having to upgrade the OS or even the kernel itself.
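
For example, loading a tiny probe into the running kernel with bcc (a minimal sketch; assumes the bcc toolchain, a recent kernel and root, and the probe point is just an illustration):

    from bcc import BPF

    # A small eBPF program compiled and loaded into the *running* kernel; the
    # kprobe__ prefix makes bcc attach it to tcp_sendmsg() automatically.
    prog = r"""
    int kprobe__tcp_sendmsg(struct pt_regs *ctx) {
        bpf_trace_printk("tcp_sendmsg\n");
        return 0;
    }
    """

    b = BPF(text=prog)
    b.trace_print()  # stream events; no reboot, no kernel rebuild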


Yeah. It strikes me that the lack of a stable Linux kernel ABI is part of the reason why we can't easily upgrade bits of it. Upgrading a kernel module would be a good solution (it could even be done by applications) but IIUC it's infeasible.

But still, it seems conceivable that a networking kernel module could talk exclusively to an adaptor shim that is part of the kernel.

idk, I actually haven’t so much as compiled a kernel since the late 90s, so I’m pretty much talking out the wrong end :)


> I wish we could start over and redesign everything ...

I wish we all could learn that "because it works now" is a valid reason to resist change, and that ensuring backward compatibility is a condicio sine qua non for every change that we want adopted by people at large.


"Because it works now" is a perfectly good reason, I was more talking about a hypothetical "wave a wand and change all devices overnight" scenario.

I think an IPv8 (apparently we skip odd numbers) could be a real practical thing one day though, because a lot of things really don't quite work all that well. The classic "it's always DNS" meme seems to be very real. Tying TLS to domains instead of IPs impedes anything on LAN. IPv6 has too much inconsistency in how many bits are allocated to customers, and would be much saner with a more structured ISP/Region/Subnet/DeviceChosenIDThatIsTheSameEvenIfYouChangeISPs scheme so every ISP got the same number of bits.

Insecure DNS doesn't need to exist outside of mDNS, and even mDNS could at least have pairing prompts. If all records are signed and everything is secured at the IP level (embedding key hashes in the IP), then certificate authorities don't need to exist either.

MACs don't really need to be a thing either, outside of industrial niche stuff; if we mostly just do IP traffic, that extra framing and the lookup/translation layer can go away.

Some level of onion routing could probably just be built right into IP itself. There's not much reason an ISP or even the local wifi router needs to know the destination I'm sending a packet to; if there's a fixed hierarchical structure, it just needs to know the recipient's ISP, and the rest could potentially be encrypted at the cost of some negotiation setup complexity.


> I think an IPv8 (apparently we skip odd numbers)

A lucky few of us have had the pleasure of working with ST-II, AKA IPv5:

https://en.wikipedia.org/wiki/Internet_Stream_Protocol


> tying TLS to domains instead of IPs impedes anything on LAN,

TLS doesn't care what's in the certs; that's an application concern. IP addresses work fine in X.509 certs, but they're discouraged because of the difficulty of validating that control of the IP will continue through the certificate's validity period. But this wouldn't much help for your LAN use case, assuming you're using RFC 1918 addresses, because no one can prove control of those, so no publicly trusted CA should issue certs for them. Same thing if you use IPv6 link-local, although I don't know too many people typing those addresses; mostly people want to type in a ___domain name, and if you make a real one work (which is doable), you can use a real cert; if not, not.
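
For what it's worth, an IP in a cert is just an iPAddress SubjectAlternativeName entry, and TLS stacks are happy with it as long as the chain is trusted, which for RFC 1918 space means running your own CA. A sketch with the Python `cryptography` package (self-signed, address is a placeholder):

    import datetime
    import ipaddress

    from cryptography import x509
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec
    from cryptography.x509.oid import NameOID

    key = ec.generate_private_key(ec.SECP256R1())
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "192.168.1.10")])
    now = datetime.datetime.now(datetime.timezone.utc)

    cert = (
        x509.CertificateBuilder()
        .subject_name(name)
        .issuer_name(name)  # self-signed; a private CA would sign this instead
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=90))
        .add_extension(
            # The iPAddress SAN entry: TLS itself has no problem with this.
            x509.SubjectAlternativeName(
                [x509.IPAddress(ipaddress.ip_address("192.168.1.10"))]
            ),
            critical=False,
        )
        .sign(key, hashes.SHA256())
    )
    print(cert.subject)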

> IPv6 has too much inconsistency in how many bits are allocated to customers, and would be much saner with a more structured ISP/Region/Subnet/DeviceChosenIDThatIsTheSameEvenIfYouChangeISPs scheme so every ISP got the same number of bits.

DeviceChosenID that is the same everywhere is kind of a huge privacy thing that we already had and mostly rejected. Giving every ISP the same number of bits is silly anyway; Comcast needs more bits than [ISP that only serves California]. Hierarchical addressing is lovely for humans, but not necessarily a great way to manage networks; each regional portion of the network is going to end up doing its own BGP anyway, at which point the hierarchy doesn't make a huge difference.

> Some level of onion routing could probably just be built right into IP itself, there's not much reason an ISP or even the local wifi router needs to know the destination I'm sending a packet to, if there's a fixed heirarchal structure, it just needs to know the recipients ISP, and the rest could potentially be encrypted at the cost of some negotiation setup complexity.

Source routing exists in RFCs but was quickly dropped. There are a lot of security reasons, but it also just doesn't work that well: the destination is needed to make the best routing decisions.

Say you're in Chicago, sending to me near Seattle, and our ISPs peer in Virginia and Los Angeles (which is a bit contrived, but eh). If you send to the nearest peering, traffic will go east to Virginia, then west to Seattle. If you look up by destination, you'll more likely go west to LA, then north to Seattle, and the total distance will be much less.


You could prove control of an IP address if addresses were longer and the DeviceGeneratedUniquePart was a hash of the certificate.

If you need to renew, you just get a new IP and tell DNS about it; if you're using fixed IPs and can't easily renew without manual work, you're still better off than with no encryption.

Instead of proving control of a ___domain, you'd be proving that an IP is one of the correct ones for a ___domain.
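
So validation would collapse to a hash comparison, something like this (toy sketch; the hash choice and suffix length are made up):

    import hashlib

    def ip_matches_key(ip_bytes: bytes, public_key_der: bytes,
                       suffix_bytes: int = 16) -> bool:
        # Toy check: the device-chosen suffix of the address must equal a
        # truncated hash of the presented public key.
        expected = hashlib.sha256(public_key_der).digest()[:suffix_bytes]
        return ip_bytes[-suffix_bytes:] == expected

    # (Signed) DNS then maps a ___domain to one or more such addresses, and
    # "am I talking to the right server?" is just ip_matches_key(addr, key).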

Tech is advanced enough now that we don't need to conserve every single bit. What you lose in efficiency, you gain in being able to easily tell what part of the IP corresponds to one customer, for anti-DDoS rate limiting and the like.

Source routing would probably be best as an optional feature. But if you had a hierarchy with a part explicitly tied to the region, you could source route to at least the county level, or even the data center level, without needing to reveal the exact destination.


You don't understand the power of the Lindy effect. All of the dated infrastructure we have now has ultimately survived the test of time.

https://en.wikipedia.org/wiki/Lindy_effect

By the way, DNS is wonderful.


Lindy is pretty much just natural selection for ideas, right? It doesn't prove they are optimal.

It does seem to suggest that switching protocols as frequently as we switch web frameworks would be awful, though. The benefits of having any universal standard at all can outweigh the flaws in just about anything, even IPv4 or in some cases even analog stuff.


Exactly. It's like the recurrent laryngeal nerve of the giraffe. It did the job for millions of years, but there is no good reason other than history for why it still needs to loop around the aorta.

https://www.mcgill.ca/oss/article/student-contributors-did-y...


> Lindy is pretty much just natural selection for ideas, right? It doesn't prove they are optimal.

You missed the point. The Lindy effect shows that being optimal is not necessary for an idea to be successful. Your analogy with natural selection is valid from this point of view, too.


Then 'we' would miss out on the learning, inviting larger mistakes, more likely at a grander scale. That starting over isn't viable is powerful knowledge. The network, if you like, is a learning organism with chaotic knowledge distributed and embedded within it.


I'm curious, what exactly would you redesign? Seems to me that it's intrinsically necessary to have DNS to translate between human-readable names and machine-efficient addresses. What am I missing?


The main thing is I would collapse most of the layers.

Something like DNS is necessary, but I'd use individually digitally signed records (except for mDNS), and if you couldn't reach upstream, you wouldn't ever just drop stuff out of the cache.
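
Roughly what "individually signed records" would mean in practice; a sketch with Ed25519 (the record format and key handling are invented for illustration):

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # The zone operator signs each record on its own, so a cached copy stays
    # verifiable even when upstream is unreachable.
    zone_key = Ed25519PrivateKey.generate()
    record = b"printer.example. 3600 IN AAAA fd00::1234"
    signature = zone_key.sign(record)

    # Later, a resolver with no upstream connectivity re-checks its cache:
    try:
        zone_key.public_key().verify(signature, record)
        print("record still authentic, keep serving it")
    except InvalidSignature:
        print("tampered, drop it")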

I'd get rid of certificates entirely, though. Without any insecure DNS, you wouldn't need certificate authorities at all.

I'd also change IP addresses to be longer, maybe even 256 bits. 48 bits would be reserved for identifying an ISP (and large ISPs would have different codes for different regions), 16 would be reserved for whatever the ISP wanted to do with them, 32 would mark a customer, 16 would be for a customer subnet, and the rest would be chosen by the device, based on the hash of a public key.
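
A sketch of that layout (the field order, SHA-256, and the truncation are just my reading of the proposal):

    import hashlib

    def build_address(isp: int, isp_defined: int, customer: int, subnet: int,
                      device_public_key_der: bytes) -> bytes:
        """Toy 256-bit address: 48-bit ISP/region, 16 ISP-defined bits,
        32-bit customer, 16-bit customer subnet, 144-bit hash of the device key."""
        prefix = (isp.to_bytes(6, "big")            # 48 bits: ISP + region code
                  + isp_defined.to_bytes(2, "big")  # 16 bits: ISP's own use
                  + customer.to_bytes(4, "big")     # 32 bits: customer
                  + subnet.to_bytes(2, "big"))      # 16 bits: customer subnet
        device = hashlib.sha256(device_public_key_der).digest()[:18]  # 144 bits
        return prefix + device                      # 112 + 144 = 256 bits

    addr = build_address(isp=0xC0FFEE, isp_defined=0, customer=42, subnet=1,
                         device_public_key_der=b"example device key DER")
    print(addr.hex())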

All communication would be secured at the IP level: if you have the right IP, it's all good, and if you want to refresh your "certificate" you just get a new IP and tell DNS about it.

Since the last section of the address is enough to uniquely identify the device, you can move between providers while still keeping your cryptographic identity.

Which also means that any kind of decentralized DHT routing can work transparently; the first half of the address is just routing info you can ignore if you have a better route, like being on the same LAN.

DNS servers could also let you look up the routing info given just the crypto ID of a node, so you don't need a true static IP.

You could also do the same lookup with a local broadcast, and then cache the result for later use, so your phone can find your IoT devices, and then access them on the go with the cached routing info, if DNS is down.

A special TLD could exist that's just the crypto ID in base64 and doesn't require any registration. You can't spam too many of them, because customer numbers can be reliably identified from the IP, unlike with IPv6, which makes that rather hard.


It's really hard to prove anything about doing things differently in this space; I'm certain it's not an unequivocally good thing.



