This is true and IMO does not get enough discussion.
It's a relatively easy problem to solve, but I'm not sure that hardcoding IP addresses into programs is the best solution for users who do not compile from source.
Out of curiosity I once reviewed the source code for a certain "onion router" program and found a handful of hardcoded IP addresses. Servers at MIT if I recall correctly. If these servers are inaccessible, what is the effect?
Perhaps "bootstrapping" is simply a matter of voluntarily downloading a trusted file, containing some reachable IP addresses.
Can millions of people all download the same file? If not, then how do Firefox, Adobe Flash, or Chrome get installed on so many computers?
I consider the web (cf. the internet) more or less "bootstrapped". This is because of the preference for ___domain names over IP addresses, which necessitates lookups.
Bootstrapping all starts with one file: root.zone.
Since 1993, this file has not changed very often. The IP address for the server where one can download it does not change very often either.
Once I have that file, whether the root servers are working is irrelevant. Don't need them. Only need TLD nameservers, .com, etc. to be working. And of course one can download a local copy of the .com zone, etc., as a backup.
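For concreteness, here is a minimal sketch of that bootstrap step: fetch root.zone and pull the glue addresses of the .com nameservers out of it, so resolution can start at the TLD level even if the root servers are later unreachable. The URL below is the InterNIC copy of the file; substitute whichever source you trust.

    # Sketch: extract the .com nameservers' glue addresses from root.zone.
    # Assumes the InterNIC copy of the file; any trusted copy works.
    import urllib.request

    def com_nameserver_ips(url="https://www.internic.net/___domain/root.zone"):
        lines = [l.split() for l in
                 urllib.request.urlopen(url, timeout=30).read().decode().splitlines()]
        # NS records look like:  com.  172800  IN  NS  a.gtld-servers.net.
        ns_names = {f[4] for f in lines
                    if len(f) == 5 and f[0] == "com." and f[3] == "NS"}
        # Glue A records look like:  a.gtld-servers.net.  172800  IN  A  192.5.6.30
        return [f[4] for f in lines
                if len(f) == 5 and f[3] == "A" and f[0] in ns_names]

    print(com_nameserver_ips())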
But assuming those root servers are working, I do not even need this file. Long ago I memorized the IP addresses for one root server and one .com nameserver.
As long as I have a means to send DNS packets[1] I can look up the IP address for any ___domain name and I can "surf the web". Or at least get myself out of a jam without the use of recursive DNS servers.
1. Even netcat will work, as explained in its tutorials. I do not use "dig" or BIND libraries, nor do I ever need to set the recursion-desired bit. I do the resolution with non-recursive queries, using some helper scripts/programs I wrote. Interestingly, this method beats a recursive resolver with a cold cache in nearly all cases.
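To make the footnote concrete, here is a rough sketch of iterative (non-recursive) resolution. It is not the helper scripts mentioned above; it leans on dnspython for packet handling, starts at a.root-servers.net (198.41.0.4), and skips CNAME chasing and glueless referrals for brevity.

    # Iterative resolution: follow referrals from a root server down to an
    # authoritative answer, never setting the recursion-desired bit.
    import dns.flags
    import dns.message
    import dns.query
    import dns.rdatatype

    def iterate(qname, server_ip="198.41.0.4"):     # a.root-servers.net
        while True:
            q = dns.message.make_query(qname, dns.rdatatype.A)
            q.flags &= ~dns.flags.RD                # non-recursive query
            resp = dns.query.udp(q, server_ip, timeout=3)
            # An answer section with A records means we are done.
            for rrset in resp.answer:
                if rrset.rdtype == dns.rdatatype.A:
                    return [r.address for r in rrset]
            # Otherwise it is a referral: take a glue address from ADDITIONAL
            # and ask the next nameserver down the tree.
            glue = [r.address for rrset in resp.additional
                    if rrset.rdtype == dns.rdatatype.A for r in rrset]
            if not glue:
                raise RuntimeError("referral without glue (or a CNAME); not handled here")
            server_ip = glue[0]

    print(iterate("example.com"))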
For the newly decentralized web, we will need supernodes that do not pass traffic but only provide port information to allow users to connect with each other through NAT.
The IP addresses of these supernodes will be the "bootstrap".
There could be millions of supernodes.
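A minimal sketch of what such a supernode might look like, assuming nothing more than a bare UDP exchange (the port number and supernode address here are made up for illustration): the node never relays payload traffic, it only reports the public address and port each client was seen from, which is the information two NATed peers need to attempt hole punching.

    # Toy "supernode": relays no traffic, only reports each client's
    # externally visible address:port. Port 9999 and the supernode address
    # below are illustrative, not part of any real deployment.
    import socket

    def run_supernode(bind=("0.0.0.0", 9999)):
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(bind)
        while True:
            _, (ip, port) = sock.recvfrom(512)
            sock.sendto(f"{ip}:{port}".encode(), (ip, port))   # tell the sender what we saw

    def discover_self(supernode=("203.0.113.1", 9999)):
        """Client side: learn our public endpoint so we can share it with a peer."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.settimeout(3)
        sock.sendto(b"hello", supernode)
        reply, _ = sock.recvfrom(512)
        return reply.decode()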
Today, the "web" is addresses of servers controlled by companies pandering to advertisers.
The "decentralized web" is addresses of users that can connect with each other to form communities. Those communities can be connected with each other, and so on.
The "content" being served to users by today's web is largely "user-generated". But overrun with advertising and "analytics". The content is served indiscriminantly to the open internet. And even if there are access controls, because the data is centralized with a handful of companies it's an easy target for third parties who want access to it.
Tomorrow's decentralized web allows user to selectively exchange their content with each other, without the companies and advertisers in the middle. Third parties wanting access may have to be more clever and more selective.
I agree. It doesn't get enough discussion because it would disrupt the hype about blockchain, when in fact, as you said, the problem is relatively easy to solve.
<<< Servers at MIT if I recall correctly. If these servers are inaccessible, what is the effect? >>>
Precisely, that is the problem.
<<< Perhaps "bootstrapping" is simply a matter of voluntarily downloading a trusted file, containing some reachable IP addresses. >>>
Could be a solution. The issue is, again: where do we download it from? If we download it from a centralized source then the decentralization does not exist. There are suggestions to solve this with Bitmessage or Telehash, but both of those have this very issue with bootstrapping. Using Bitmessage or Telehash to bootstrap another network is just kicking the can down the road. I understand you didn't suggest this :-) I am just saying.
The problem with DNS is that an authority can ban a ___domain name, and the idea of permissionless systems is that no central authority should control users' access.
mDNS and UDP multicast work fine on local networks, and we are working to solve this on global networks as well. IPv6 anycast looks promising, but I don't have a prototype yet.
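For the local-network part, a rough sketch of what multicast peer discovery can look like (this is not the poster's prototype; the group address and port are arbitrary choices for illustration):

    # Rough sketch of LAN peer discovery over UDP multicast.
    import socket
    import struct

    GROUP, PORT = "239.255.42.42", 42424   # illustrative multicast group/port

    def announce(node_id: bytes):
        """Broadcast our presence to the local multicast group."""
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)  # LAN only
        s.sendto(node_id, (GROUP, PORT))

    def listen():
        """Print the address of every peer that announces itself."""
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        s.bind(("", PORT))
        mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
        s.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
        while True:
            node_id, addr = s.recvfrom(512)
            print("peer", node_id.decode(errors="replace"), "at", addr)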
Note I mentioned the web and DNS only as an example of the bootstrapping issue, not as a solution for how to overcome centralization. (And to illustrate how I work with the web's bootstrap requirements -- using the root.zone file.)
The need to bootstrap is ubiquitous. Even a user's internet connection is "bootstrapped". She has to know at least an RFC1918 address to get started.
Disseminating a list of addresses of working supernodes so users can form networks and connect to each other should not be an insurmountable problem.
The list does not have to be disseminated via the network. Remember the days of POPs and dial-in numbers? If the user had no internet connection, then how did she get the dial-in numbers?
This is not a difficult problem.
re: ability to "ban a domanin name"
When authorities "ban a ___domain name" via DNS, they only ban lookups using certain DNS servers. The server at the IP address associated with the ___domain name could still be accessible.
The reason banning DNS lookups in order to take sites "offline" is so effective is that these sites are usually doing something shady and need to keep changing IP addresses frequently. No one knows what IP address they will be using in the future. They are very reliant on DNS.
Otherwise, if we are dealing with a legitimate site that changes its IP only infrequently, it would be futile to try to "ban" it via DNS.
It would be like expecting every nerd worldwide to forget that ftp.internic.net is associated with 192.0.32.9 or that example.com is associated with 93.184.216.34.
Some will have saved this information. There are publicly available archives of historical DNS data.
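As a concrete illustration of the point above that the server behind a "banned" name can still be reachable: connect to the remembered IP and supply the name only in the Host header. Plain HTTP is shown for brevity; over HTTPS the client would also need to send the hostname as the TLS SNI. The address is the example.com one mentioned earlier.

    # Reach a site with no DNS lookup at all: connect to the remembered IP
    # and pass the hostname only in the Host header (93.184.216.34 is the
    # example.com address mentioned above).
    import http.client

    conn = http.client.HTTPConnection("93.184.216.34", 80, timeout=5)
    conn.request("GET", "/", headers={"Host": "example.com"})
    resp = conn.getresponse()
    print(resp.status, resp.reason)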