Infosphere: Tilting the balance.
(Internet Evolution and Nanotechnology).

S. Osokine.
Server Architect, Infolio.
osokin@osokin.com
30 Mar 2001.

Contents

1. The Big Crash
2. "Good help is hard to come by"
3. "Good hacker is a silent hacker"
4. Balancing the "Infosphere"
5. Infosphere. The Limits to Growth
6. Gnutella and Distributed 'Broadcast-Route' Protocols
7. From Infosphere to the Physical World
8. What are the 'Network Immune System' Components?
9. How to Grow the Nanonet without Nanomachines?
10. References


1. The Big Crash

   "On January 15, 1990, AT&T's long-distance telephone switching system crashed."

   This is the first phrase of Bruce Sterling's excellent book The Hacker Crackdown [1]. The book tells a riveting story, describing exactly how a large part of the continental US lost its phone service for nine hours, why that crash happened, and what happened afterwards. In a sense, some social effects of that crash - like aftershocks after a really big earthquake - can be felt even now, more than eleven years after the event.

   It took a whole book (and a considerable literary talent!) to tell the full story of that crash. Here, however, we will concentrate on just a few technical aspects of the story and use them as 'anchors' to illustrate some practical ties between several seemingly unrelated fields of knowledge - such as nanotechnology, the Gnutella music-copying software, and the methods used to protect computer networks against attacks.

   As far as network-wide crashes go, there was nothing really special about that particular one. The huge computers used to route long-distance phone calls went out of service, and all attempts to put them back online failed immediately after being tried. Similar scenarios had been observed on the Internet long before that crash - the Morris Worm was already ancient history in 1990.

   What made this crash special was the fact that it was arguably the first case in which pure information (in the form of software) disrupted something as essential and taken for granted as phone service on a continental scale. If we define 'magic' as a direct way of causing material effects by purely informational means [2], then January 15, 1990 is a prime candidate for going down in history as the Birthday of Magic.

   Several aspects of what was going on are important in the context of this article.

   First, the original assumption of the AT&T technicians was that the network went down because of a sophisticated hacker attack. This assumption led to the 'hacker crackdown' eloquently described by Sterling, even though it eventually turned out that a bug in the AT&T software had caused the crash. Naturally, the knee-jerk reaction cost several people some time behind bars on unrelated charges, but that is not what is really interesting here.

   What is interesting is the fact that a very simple bug was able to cause such major disruptions throughout the whole system. Even if hackers had really set out to disrupt the phone service, they would have encountered major problems along the way and might well have failed, whereas a small piece of garbled data achieved the same result with no effort.

   If we compare hacker activity to viral and bacterial diseases that multiply foreign DNA in the organism, this crash was closer to cancer, where an error in the native cell DNA can cause significant problems for the whole organism.

   Second, the only reason a full nationwide phone system collapse was avoided was that some switches still ran the 'obsolete' code, which was gradually being replaced by the new (and buggy!) software. The switches with the old software acted as 'naturally immune hosts' - the fraction of a population that cannot be affected by a particular disease through sheer luck.

   Third, during the crash and immediately afterwards it was impossible to tell the difference between malicious activity and a 'normal' bug in the code. The effects were precisely the same and had to be fought in the same way regardless of the underlying cause. The response mounted by the technicians was largely independent of their theories about the cause of the crash. The phone system was down, it had to be brought back up, and that is exactly what they did after nine hours of frantic effort.

   The old software was reinstalled throughout the system, effectively erasing the defective DNA in the phone network 'organism'. If someone had told the technicians at that point that what they did closely resembled an immune system response to foreign proteins, and that they themselves were essentially doing the job performed in an organism by immune system cells, they would probably have been surprised. But that is exactly what they did. The phone system had every reason to be proud of an immune system that was able to identify the threat, come up with a cure and distribute it throughout the organism in mere hours.

   The problem is, the AT&T phone system was very lucky.

2. "Good help is hard to come by".

   Very few organisms can boast an immune system whose T- and B-cells have the intellectual level of Bell Labs engineers. In fact, most of today's corporate networks do not have engineers of that caliber. The AT&T long-distance network was a Very Important Network, and it got all the VIP (oops, VIN) treatment it deserved.

   However, the Internet and the number of computers attached to it grow exponentially, while the educational system and the human reproduction rate place certain limits on the number of IT personnel. In short, the total IQ of all the engineers in the world, shared between all the computers in the world, approaches zero very quickly when counted on an 'IQ units per computer' basis.

   Of course, some measures are being taken to counteract this process. The Computer Emergency Response Team (CERT), created shortly after the Morris Worm, collects the best and latest knowledge about security threats and distributes it among network users. The problem is that for every computer someone still has to regularly read the CERT bulletins and make sure that all the latest countermeasures listed there (software upgrades, patches, configuration settings, etc.) are applied to that particular machine.

   Not surprisingly, even that level of 'health care' is unavailable for many (arguably most) computers on the Internet. Even if every corporation somehow managed to hire knowledgeable IT staff, which is not easy (and not cheap!) to do these days, that would not be the end of the story. The proliferation of home networks, permanent connections, DSL and cable puts an amazing number of networked computers into the hands of people who have no idea what CERT is and would not recognize a patch if it hit them on the head.

   Naturally, this is not the fault of these people - it is a network-wide problem that has to be solved at the network infrastructure level with automatic tools. For example, virus-protection software does an amazing job of constantly updating the virus databases on millions of machines.

   Unfortunately, in order to use it, the user has to know what anti-virus software is in the first place, which is also far from certain these days. To make a long story short, any hacker who decides to go out on the net and compromise some machines can do it easily - there are no omnipotent 'security forces' that will protect a computer without skilled human help, and such help is hard to come by.

   From the security standpoint, this situation is bound to deteriorate rapidly. The number of connected computers grows very fast. PDAs, cell phones, wireless pagers - all these devices have a CPU that runs some code and is thus vulnerable to a potential attack (or to self-inflicted damage, which can be just as bad from a practical standpoint, as the Big Crash story shows).

   As the number of such devices increases, the hope of somehow keeping all of them out of trouble disappears pretty fast. Fortunately, it is not necessary to do so.

3. "Good hacker is a silent hacker".

   Many computers on the Internet are compromised - that is, they are routinely used by people who are not supposed to use them. Even more (arguably most) computers contain bugs in their code that can potentially lead to very serious problems under certain conditions.

   The Internet has been in this state for years - one might say that this is a natural state of affairs for the network, and the exponential Internet growth won't make the situation any better (or any different, depending on one's viewpoint).

   Every time the 'hacker' or 'vulnerability' problem is raised in the public mind by the media, it is usually the result of some hacker or bug causing an effect that is 'reportable'. Someone steals money from a bank, defaces a Web site or jams a major Web service with phony requests, and it instantly becomes news. At the same time hundreds of hackers and thousands of bugs just sit there silently, minding their own business, not interfering with anyone's activity - and no one notices them.

   The Internet as a whole might be regarded as an organism concerned with its own survival. Many years of Internet 'evolution' have led to the creation of certain 'standard practices'. Some of these practices are codified, some are passed from one sysadmin to another, and some probably exist only on the subconscious level - a system of that complexity is bound to have many well-hidden ways of controlling itself in order to survive.

   Pretty much as an organism does not really care about any single one of its cells, the existence of the Internet does not depend on the existence or well-being of any particular computer, so its evolutionarily developed 'rules' tend to be lenient towards the 'silent hackers'. Everyone knows that these hackers are out there and that some computers are compromised - but so what? If Windows (or some other OS) has dangerous bugs or security flaws, that is not a good enough reason to ban it from the network, as long as the existence of these bugs does not threaten the network as a whole.

   Now, a deliberate destructive attack or a serious bug (like the one that caused the Big Crash) is quite another matter. Every time this happens, the Internet's resources are mobilized to fight the threat in a cooperative effort that can literally span the globe. A new virus can make whole countries unavailable on the net, and something has to be done about it - fast. After the cure is found, the network returns to normal, and an occasional attempt by some uninformed user to double-click the "anna_kournikova.jpg" mail is no longer a cause for global concern.

   Looking at the whole Internet as an entity, such behavior is strikingly similar to the 'primary' and 'secondary' immune responses in an organism. A new virus can make an organism sick for weeks until the proper countermeasures (antibodies) enter the 'production state'. This is the 'primary' immune response. If, however, the organism survives this particular sickness, then every time it encounters the same virus the 'secondary' immune response is fast and decisive. The existing antibodies attack the virus before it has had a chance to replicate, and the resulting virus destruction is normally not even noticeable to the attacked organism. Sure, some cells might be destroyed in the process, but who cares about individual cells?

4. Balancing the "Infosphere".

   This whole process of the Internet protecting itself against attacks can be viewed as a continuous struggle between 'good' and 'bad' information for access to the Internet's storage space and CPU power.

   'Bad' information (data and code) can appear on individual hosts in the form of Trojans and viruses, can be downloaded when a host password is compromised, and so on. In many cases the 'bad' information also physically removes the 'good' information from the host - for example, self-monitoring tools can be replaced by 'bad' versions that hide the attack from detection. Destructive viruses can wipe out the vital data of entire companies.

   'Good' information tries to fight the 'bad'. 'Good' code can scan files for virus signatures, detect suspicious access patterns and so on, deleting the 'bad' information when it is found.

   In any case, barring the final victory of one side (which seems unlikely), a certain balance is usually achieved between the 'good' and the 'bad' information in the "infosphere" - the space where information of different kinds coexists. Pretty much as predators and herbivores can coexist in the ecosphere, attacking programs of different sorts, security programs and application programs can coexist in the same infosphere at the same time. It is difficult to call this coexistence peaceful, but the same applies to ecological coexistence.

   On one hand, the analogy with predators and herbivores might seem superficial, but on the other hand hackers and security people really do need each other. Hackers justify the existence of security and validate the approaches it uses; security provides the challenges for the hackers.

   Pretty much like the real ecosphere, the infosphere is controlled by a huge number of very subtle mechanisms. From the opinion of a hacker's peer group, to legislative efforts on the national level, to new ways of deflecting denial-of-service attacks, to the latest bugs in some OS or programming language code - all these factors control the info-balance.

   Many of these controlling factors are social in nature - roughly speaking, behind many advances of the 'good' or the 'bad' code there might be a real live person who has consciously or unconsciously helped to move the 'front line' one way or the other. This is natural, but despite the advances in automatic attack and automatic defense capabilities, today humans largely control the battle in a very 'hands-on' fashion. Every computer is still considered to deserve individual treatment, and every computer normally has a real live human being who cares (or at least is supposed to care) about its state and tries to affect it.

   This cannot continue forever.

5. Infosphere. The Limits to Growth.

   There are two factors limiting direct, 'hands-on' human intervention in the infosphere balance.

   The first one is the sheer size of the infosphere as it keeps growing. A virus-protection company can maintain a Web site with an automatic virus database update feature for only so many clients. After the number of clients reaches a certain limit, the Web site will collapse regardless of its capacity - the central distribution solution just does not scale.

   Right now the virus-protection company might be happy to expand its client-handling capacity, but only because there is a human being behind every request. This human being can pay some money to offset the Web site maintenance cost, or be a target for advertising, but what if the download requests for the virus database start arriving from faceless networking entities? What if there are several thousand computers for every human being on the face of the Earth, and all of them want the newest virus data? These faceless entities will put the same load on the central servers as real live people at their displays, but they won't see any advertising and won't increase the revenues. When this happens, the cost and complexity of the Web site maintenance become prohibitive pretty fast, and several thousand computers for every human is not even a very large number [3].
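
   To get a feel for the numbers, here is a back-of-envelope estimate; the population, the per-person device count and the update frequency below are illustrative assumptions, not figures taken from any source:

      # Back-of-envelope load on a central update server.
      # All numbers are illustrative assumptions.
      humans = 6e9                # rough world population, 2001
      devices_per_human = 3000    # "several thousand computers per human"
      updates_per_day = 1         # each device checks for new virus data daily

      requests_per_second = humans * devices_per_human * updates_per_day / 86400
      print(f"{requests_per_second:.1e} requests/sec")   # ~2.1e8 requests per second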

   So clearly some distributed solution is necessary to deliver the data to the 'network edges' - computers should be able to get the data from their peers, not from a central location. Of course, this approach opens brand-new doors for 'bad code' to subvert these distributed data transfers in order to avoid detection, or to cause something resembling 'autoimmune diseases', in which legitimate software is attacked and removed by the security mechanisms. This can be partially prevented by sophisticated encryption and authentication technologies, but some new ways for the 'bad code' to propagate itself will certainly be created in the process, if only because of bugs in the encryption and authentication software.

   However, it seems certain that there is no alternative to that approach if the network keeps growing. The 'central server' approach ultimately does not scale, but the same is true for data exchange in general. Today almost every byte of data traveling through the Internet passes through some ISP access point - a computer paid for and maintained by an Internet Service Provider. The number of ISPs is limited by business realities, and the number of machines that an ISP can directly control is limited by its revenue flow.

   ISP customers pay ISPs for access to the Internet. So as the number of networked computers grows, the situation described above for the virus-protection servers will be reproduced on the ISP level. The 'ISP-type' solution not only fails to scale technically as the Internet grows toward infinity; it also becomes economically unfeasible and functionally unnecessary at the same time.

   The bulk of the information on an infinitely large (and thus infinitely dense) Internet can be accessed directly, because it is statistically bound to be on computers that are close by. Big corporate LANs can already keep most of their intranet traffic inside the LAN. The Internet backbone trunks and servers can probably survive (although in a different shape), because they will provide the global communication functionality, but the ISPs are likely to find themselves in a very uncomfortable position. They will try to sell access, which will be cheaper and more convenient to get on a peer-to-peer basis in a distributed fashion, especially since the ISPs won't be technically able to provide access service for an infinite number of machines anyway.

   The second obstacle to direct human control of the infobalance is the rate of the attacks. Today new viruses appear daily. This is because today's viruses are mainly products of 'manual labor'. At the very least, someone has to launch a virus-making utility, select a virus from the menu, and change it subtly to 'fool' the existing detection software. This process takes time and is very slow by computer standards. Besides, viruses are generally traceable to the creator host, which normally has some human associated with it, and this human might be extra-cautious because of the danger of detection.

   When the number of computers exceeds the number of humans by several orders of magnitude, virus creation can be fully automated, and the task of fighting the viruses will become much more complex. Producing virus 'antibodies' manually becomes very difficult when new viruses are being generated somewhere on the Net several times per second.

   The same applies to bugs. Arguably, bugs are a more serious threat to the network than deliberate attacks, because there are many more of them. The code development process generates bugs with the certainty of a nuclear explosion generating radiation, and there is nothing to be done about that. Code is buggy. It is an axiom of software development, and some bugs can be worse than very elaborate hacking efforts.

   So automatic mechanisms should take care of antibody production in pretty much the same fashion as the human body produces antibodies against new viruses and other unknown proteins without any conscious effort. When the network encounters 'new' viruses or other 'foreign proteins', new 'antibodies' have to be automatically generated and automatically distributed over the whole network segment that has never met this virus yet and has no 'antibodies' against it.

   Naturally, in live organisms there are certain classes of diseases that survive by subverting the immune mechanism itself. Autoimmune diseases, for example, cause the body to treat its own proteins as foreign ones. Massive viral doses have been observed to cause the loss of immune function due to immune system 'confusion' - it could no longer tell the difference between 'own' and 'foreign' proteins based on their relative frequency [4]. And of course, the HIV virus attacks the immune system directly, opening the gates for other diseases.

   The same has to be true for the network immune systems.

   These facts, however, do not make immune systems in general, and the 'networking immune system' in particular, useless. So in any case (whether a centralized or a distributed antibody production method is used), the existence of a scalable data-propagation mechanism suitable for infinitely large networks seems essential for network health and growth. In live organisms similar mechanisms (like blood flow) are taken for granted, but the ability of networks to exchange data when the number of hosts is infinitely large is a big problem by itself. Even though the addressing problem can be solved with a big enough address space (like IPv6), the routing problems alone can cripple any network when the routing is done in a hierarchical fashion.

   As computers freely change their physical location inside the network, the current approach to routing, which requires a multilevel system of routing gateways, experiences the same scalability problems as the ones noted above for the central servers and the ISPs. It seems likely that the Internet can keep growing only if the present-day routing approach is reserved for the data that has to be routed that way - essentially, for the data that fulfills the global communication function. The bulk of the traffic can and should be shifted to peer-to-peer channels and is likely to flow between machines that are relatively close in physical space. The goal here is to increase the global carrying capacity of the data transfer media (for example, of the frequency band for wireless communications) by using the same channels for different traffic at different physical points. Systems like cellular networks, 802.11b and Bluetooth already do that - low-power transmissions allow the frequency bands to be reused.

   Enter Gnutella.

6. Gnutella and Distributed 'Broadcast-Route' Protocols.

   Gnutella is a file searching and downloading protocol firmly associated in the public mind with its better-known cousin Napster. However, as far as this document is concerned, it is anything but Napster. Even though, like Napster, it uses peer-to-peer links between hosts to transfer data, unlike Napster it has no central server to handle the requests. In fact, the Gnutella network has no well-defined structure at all and can be very fluid and constantly changing. Every host tries to maintain connections to 4-5 other hosts, these hosts in turn try to maintain connections to different hosts, and so on.

   The resulting structure can be viewed as a very complex graph connecting thousands of machines within a several-hop distance. Every time a host wishes to find something on the network, it issues a search request that is propagated (broadcast) over these links, and the responses (if any) are routed back along the paths used by the corresponding requests. No individual computer is important to the Gnutella network's health; if some machine drops out, its peers just find other hosts to connect to and continue functioning as if nothing had happened.
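
   The basic mechanics can be captured in a few dozen lines. The following is a minimal Python sketch of the general 'broadcast-route' idea (duplicate suppression, a hop limit, and responses retracing the request path); it is an illustration of the principle, not the actual Gnutella message format, and the field names and TTL default are assumptions:

      # Minimal sketch of a 'broadcast-route' node: requests are flooded to all
      # neighbors (with a hop limit), and responses retrace the request's path.
      class Node:
          def __init__(self, node_id):
              self.node_id = node_id
              self.neighbors = []            # other Node objects (typically 4-5)
              self.seen = {}                 # request_id -> neighbor it arrived from

          def handle_request(self, request_id, query, ttl=7, came_from=None):
              if request_id in self.seen:
                  return                     # duplicate - drop it
              self.seen[request_id] = came_from
              if self.matches(query):        # local hit: answer back along the path
                  self.route_response(request_id, f"hit at {self.node_id}")
              if ttl > 1:                    # forward with a decremented hop limit
                  for peer in self.neighbors:
                      if peer is not came_from:
                          peer.handle_request(request_id, query, ttl - 1, self)

          def route_response(self, request_id, response):
              came_from = self.seen.get(request_id)
              if came_from is None:
                  print(f"{self.node_id} received: {response}")   # we originated it
              else:
                  came_from.route_response(request_id, response)  # one hop back

          def matches(self, query):
              return False                   # placeholder for a real local search

   The duplicate suppression and the hop limit are what keep a single request from flooding the entire network indefinitely, while the 'seen' table is what lets a response find its way back to the originator without any global addressing.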

   Early versions of Gnutella had no flow control, which caused Gnutella network overloads and led to the widespread illusion that the protocol is not scalable. This problem is, however, being addressed [5], and future versions of the protocol will be scalable regardless of the number of hosts in the network, the frequency of the requests or the intended network application. File-searching requests are not the only possible Gnutella requests - the protocol was designed for general-purpose requests that would be able to reach as many hosts as possible on networks with very low individual host reliability and lifetime.

   Today the Gnutella protocol exists in the normal Internet environment and uses the normal ISPs and backbone routers to transfer data. In fact, of the three 'scalability breakdown' points mentioned in the previous section (central servers, ISPs, routers), only the first one is removed. This is not a strict requirement, though. The Gnutella protocol can easily be modified (or a similar broadcast-route protocol can be created) to use different transport mechanisms - wireless broadcasts or whatever comes in handy.

   These qualities make Gnutella and similar broadcast-route protocols almost ideal transport mechanisms for immune information in large-scale networks. A new batch of 'antibodies' can be propagated as a request; a request for antibodies against a particular virus signature could be broadcast over the network and answered with the antibody design for that particular virus; other similar uses are also possible.
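
   As an illustration of how such immune traffic might be layered on top of the broadcast-route mechanism, the sketch below extends the Node class from the earlier example with two hypothetical message types - neither of them part of the real Gnutella protocol or of any existing immune system design: an 'antibody push' that floods a new detection/cleanup recipe, and an 'antibody query' that asks the surrounding hosts whether they already have a cure for a given signature.

      # Hypothetical immune messages on top of the broadcast-route Node above.
      # Message types and field names are illustrative assumptions only.
      import uuid

      class ImmuneNode(Node):
          def __init__(self, node_id):
              super().__init__(node_id)
              self.antibodies = {}             # virus signature -> 'antibody' recipe

          def matches(self, query):
              if query["type"] == "ANTIBODY_PUSH":
                  # Store the pushed antibody locally; nothing to route back.
                  self.antibodies[query["signature"]] = query["antibody"]
                  return False
              if query["type"] == "ANTIBODY_QUERY":
                  # Answer (via the normal back-routing) if we already have a cure.
                  return query["signature"] in self.antibodies
              return False

          def push_antibody(self, signature, antibody):
              self.handle_request(uuid.uuid4().hex,
                                  {"type": "ANTIBODY_PUSH",
                                   "signature": signature,
                                   "antibody": antibody},
                                  ttl=7)

          def request_antibody(self, signature):
              self.handle_request(uuid.uuid4().hex,
                                  {"type": "ANTIBODY_QUERY",
                                   "signature": signature},
                                  ttl=7)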

   Of course, immune information is not the only data that can be carried this way. Ultimately this approach can be used to transfer most of the data on an infinite (or very large) network.

   Even though every individual Gnutella host broadcast can reach only a limited number of neighbors (typically several thousand), the Gnutella network has a natural capability to replicate useful content. For example, the antibodies for a viral attack in progress will be quickly replicated all over the network, reaching further and further if the viral attack happens to be a widespread one.

7. From Infosphere to the Physical World.

   So far we have made no assumptions about the physical location of the billions (or trillions) of computers that will make up the Internet sooner or later. One computer could be in India, the one connected to it in Japan, and still another in California. In the Infosphere, where all the battles are virtual, what difference does it make? From half a globe away, we can still replace the enemy code with our code, 'nuke' the enemy machine with a barrage of requests, or do something similar. We are just fighting 'memory wars' here, and even if the results can cause hundreds of billions in damages, no one is directly hurt, right?

   The introduction of molecular nanomachines [3] into the picture adds some new wrinkles to the security situation. For one, the struggle for storage and CPU resources is no longer purely virtual - the winning computer can literally 'eat' the losing computer, converting its molecules into a copy of the winner, and launch the winning code on the same molecules organized into a new structure.

   From a purely informational standpoint, the difference is minimal. If a nanomachine with the 'bad code' eats a nanomachine with the 'good code', nothing really special happens from the viewpoint of the infosphere. One host loses the 'good code'; one (or two, or ten) hosts now have the 'bad code'. This fact should be noticed by the immune system, and the appropriate response should be mounted.

   The biological analog of this situation might be 'costimulation' [4]. In certain situations the Th-cell activates the antibody generation process only when two events occur simultaneously: an unknown protein is encountered and a 'damage signal' is received from the body tissues, making it statistically probable that it was this unknown protein that caused the damage. Closer to home, the company system administrator can effectively play the role of the Th-cell, safely ignoring the frequent firewall alerts until a user simultaneously calls him and asks: "Listen, what's up with these two hundred "I Love You" messages in my mailbox?" This second signal activates the sysadmin and makes him abandon his game of Tetris and fight the infection.
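
   In networking terms, this two-signal rule might look roughly like the following sketch; the event names, the time window and the decision rule are illustrative assumptions rather than any published algorithm:

      # Two-signal ('costimulation') activation sketch: react to an unknown
      # signature only if a damage report arrives close enough in time.
      # The 60-second window and the event names are illustrative assumptions.
      import time

      class Costimulator:
          def __init__(self, window_seconds=60):
              self.window = window_seconds
              self.unknown_events = []     # (timestamp, signature) - signal one
              self.damage_events = []      # timestamps of damage reports - signal two

          def saw_unknown(self, signature):
              self.unknown_events.append((time.time(), signature))
              return self._check()

          def saw_damage(self):
              self.damage_events.append(time.time())
              return self._check()

          def _check(self):
              """Return the signatures that should trigger an immune response."""
              now = time.time()
              recent_damage = any(now - t < self.window for t in self.damage_events)
              if not recent_damage:
                  return []                # signal one alone is ignored
              return [sig for t, sig in self.unknown_events if now - t < self.window]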

   From the physical standpoint, however, the result of an absent immune response might be disastrous. One estimate [6] states that it could take as little as 20 minutes for runaway nanomachines to convert the entire planetary biomass to 'nanomass'. Of course, there are all kinds of restrictions aimed at preventing this. The nanomachines are supposed to be physically unable to replicate outside a controlled environment, etc., etc. Still, it seems a virtual certainty that sooner or later someone will create a nanomachine without such restrictions. Why this would happen is largely irrelevant. Whether it is done for military purposes or for fun, happens as a result of a mutation, or just because someone forgets that a 'break' statement does not break out of an 'if' statement, breaking out of the enclosing 'switch' instead [1], the results will be the same. Uncontrolled replication similar to the Morris Worm would become a very real threat.

   This scenario makes two things certain. First, the network should be ready for this situation. When it happens, it will be too late to start looking for a cure. It took nine hours to contain the 1990 AT&T crash, which might be far too long.

   Second, the response should be largely automatic. A case in point [1]: on September 17, 1991 a group of AT&T switching stations in New York simply ran out of power and shut down, leaving the JFK, La Guardia and Newark airports without any voice or data communications. The backup batteries also failed. 500 flights were cancelled and another 500 delayed (one of the affected passengers was the FCC Chairman). The technicians who should have heard the alarms when the switches lost power were absent - they were attending a training class about the alarm systems for the power room.

   The point here is that even if human reaction time were fast enough to come up with countermeasures, bad luck or incompetence could still get in the way. A distributed automatic response throughout the network might make more sense.

   In any case, the immune response signal has to be propagated throughout the whole network. Otherwise the situation might resemble every user having to fight the "I Love You" virus on his own, which would be a rather hopeless undertaking. When any host has a good idea of what is going on, it is its obligation to the network to propagate this information and help others fight the same problem.

   Now, setting aside the most basic features of the networking immune system (such as the 'broadcast-route' transport) - what exactly are the immune mechanisms and how should they interact?

8. What are the 'Network Immune System' Components?

   The proper honest answer should be: who knows?

   Actually, I might finish right here, but let me elaborate. I could say that 'costimulation' (described earlier) should be present to protect the nanonetwork from autoimmune responses, I could mention other mechanisms, etc., etc., etc…

   But first, it is more interesting to consult the original work [4] for the list of possible immune mechanisms.

   And second, truth be told, no one can say anything precise about the mechanisms required for nanonetwork safety today. The attempt to directly project the biological mechanisms into the networking context is certainly useful to some extent - at least both biological and networking objects function under the same set of physical rules, and the general approaches to their security should have a lot in common. However, there are lots of things we don't know about biological immune systems that might be vitally important. And biological immune systems were created to protect the well-being of biological species, which is not exactly the same as protecting the well-being of the Internet, of the nanonet made of nanomachines, and of the biosphere surrounding the nanonet.

   It could be said that in the nanonet immune system the Gnutella-type protocol plays the role of both the vascular system (which carries information inside the body) and the health-care system (which carries vaccines from the research centers to the patients).

   It could be said that hackers play a role similar to trace quantities of foreign proteins. According to one theory, these stay in the body for years, continuously stimulate the B-cells and cause the acquired immunity to be 'remembered' by the immune system, even though the B-cell lifetime is measured in days [4].

   All such analogies are obviously superficial. The immune system is the result of a long evolutionary process that has the survival of the species (not even of the individual organism!) as its goal. The immune system needed for the nanonets should have the prosperity of humanity as its goal, and it is a separate and interesting question how to formalize such a requirement, and whether it is possible to do that at all. Regardless of how stupid it might seem, some evolutionary requirement of this sort should be explicitly included in the nanonet design. Otherwise the nanonet might eventually confuse its own survival goals with humanity's goals, and given the [almost] plenipotentiary power of a network made of molecular nanomachines, it is hard to imagine what might cause it to change its mind.

   The problem is, today we cannot exactly breed the nanonets the way you breed dogs - to be obedient, friendly and protective - even though that approach would probably be pretty close to an ideal one. First, we have just one global network - the Internet. It is kind of difficult to replicate it, change the development approaches and see what happens. Second - and more important - we have no idea what exactly we want to have even when we develop the Internet. Sure, it experiences all sorts of evolutionary pressure from different directions, but it would probably be difficult even to list the main pressure groups.

   Trying to determine the direction of this evolution would be even more difficult, and trying to control it is close to impossible at our present level of knowledge.

   So today the question "What are the network immune system components?" just does not make much sense - it is too early to ask it. The proper question would be: "How do we choose the components for the network immune system?"

   The specific mechanisms that would allow the nanonet to function successfully and to fight external attacks, internal bugs and mutations probably have to be developed gradually, using an evolutionary approach. The 'closed diamond sphere' nanofabrics suggested in [3] have one very significant drawback - they are absolutely incapable of telling us anything about the interactions between large groups of these nanomachines. This approach is like creating a single cell in a test tube and starting to grow the whole organism without giving a thought to whether it is going to be a sheep or a tiger. Not to mention the next-level question: "What are the obedience training methods for tigers?"

   Sure, the 'closed sphere' development might be useful for creating specific mechanisms, but it appears more important to 'grow' the whole network in a fashion that would create a 'positively balanced' infosphere within it.

   And a good time to start doing that appears to be… how about right now?

9. How to Grow the Nanonet without Nanomachines?

   Not that difficult, really.

   Remember, we are talking software here. Software does not care whether there is a physical world outside the infosphere or not - whatever the input signals are going to be, we can easily simulate them (including the panicked 'costimulation' distress signal: "Something is killing me!").

   Besides, such a precise simulation is not the first-priority task anyway.

   It is more important to create a network that would be able to maintain the 'infobalance' (the infosphere balance) for an indefinitely long time regardless of external and internal factors, pretty much as the sealed glass-ball ecospheres can survive for a long time. (Well, I think these 'glass ball' ecospheres eventually die, but we have an important advantage here - the infosphere does not have to be a totally closed system. We can always 'nudge' it in the right direction if things appear to go wrong.)
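
   As a toy example of what maintaining and 'nudging' the infobalance could mean, here is a crude predator-prey style simulation of the 'good' and 'bad' code populations; the equations and every coefficient in them are arbitrary assumptions, chosen only to show that such a balance can be modeled and steered:

      # Toy 'infobalance' simulation: Lotka-Volterra-style dynamics between
      # 'good' code (defenses) and 'bad' code (attacks). All coefficients are
      # arbitrary illustrative assumptions - the point is only that a balance
      # can exist and be 'nudged' by changing the parameters.
      def simulate(steps=1000, dt=0.01,
                   good=1.0, bad=0.5,
                   good_growth=0.8,      # new defenses written by people
                   good_loss=0.6,        # defenses disabled by attacks
                   bad_growth=0.5,       # attacks that feed on vulnerable hosts
                   bad_loss=0.7):        # attacks removed by defenses
          history = []
          for _ in range(steps):
              d_good = (good_growth - good_loss * bad) * good
              d_bad = (bad_growth * good - bad_loss) * bad
              good += d_good * dt
              bad += d_bad * dt
              history.append((good, bad))
          return history

      # Example 'nudge': raising good_growth models extra human effort on defense.
      final_good, final_bad = simulate(good_growth=1.0)[-1]
      print(f"good={final_good:.2f}, bad={final_bad:.2f}")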

   And the best candidate for the Nanonet v. 1.0 is, naturally, the Internet.

   The Internet accumulates the combined effort of millions of designers, architects, programmers, administrators, hackers, computer technicians and users who have been collectively creating it for years. There are literally millions of man-years of work in the Internet in its current state. Some people were trying to improve it, others to break it, still others to use it or to overload it. It is still here. Of course it is not ideal. Of course there are problems. But as a starting point for the Nanonet evolution, it is infinitely closer to our final goal than if we tried to define the set of security rules from scratch. The Internet is a survivor of a twenty-year-old 'war in memory' game with no rules. And it won this game.

   Today the Internet is essentially a giant 'diamond sphere' nanofabric [3] that is capable of holding the whole 'Nanonet' inside. Whatever hackers do today, they cannot damage the environment or destroy the whole biosphere. As far as nanotechnology is concerned, the Internet in its present state is a giant 'sandbox environment' where dangerous experiments can be performed without grave risk.

   The Internet is also developing in the right direction. It grows exponentially, and the approaches suggested above for the nanonets would eventually be tried on the Internet first. In fact, some of these approaches are already being tried - computer immunology [7], general-purpose peer-to-peer networks - similar difficulties lead to similar solutions.

   Several practical approaches are possible.

   First, the Internet can be used for 'validation'. For example, some computer immunology approaches might work on the Internet. Others might fail. After all, the Internet 'organism' is very different from a biological organism. The approaches that work on the Internet are quite likely to be useful for the Nanonet, and the other way around.

   Second, the Internet could eventually just be gradually expanded to also handle the communications between the molecular nanomachines. Then the 'Internet DNA' - the set of written and unwritten rules, RFCs, known (and unknown!) bugs, code development approaches, safe practices, and superstitions that control its existence - can simply be left in place without trying to 'adapt' it to a brand-new network. This approach has some very obvious dangers, but from the practical standpoint it is quite likely to be used - today every new device is eventually connected to the Internet, and it would probably take some conscious effort not to use this approach.

   Third, the Internet can be studied from the infosphere standpoint. Information coexistence in the Internet infosphere should be studied in a fashion similar to the way ecology approaches biological coexistence in the biosphere. Pretty much as today we know that it is not a good idea to kill all the sparrows or to bring rabbits to Australia, this research should lead to the 'verbalization' of 'safe Internet practices', bringing them to a conscious and 'formalizable' level. Then the same practices (or rather the informational laws that are the underlying reason for these practices) can be applied to the nanonets.

   What is important here is that these 'safe Internet practices', as they evolve on an Internet consisting of billions of machines, then trillions of machines, and so on, will always be the practices that tilt the infosphere balance towards the 'good code'.

   Eric Drexler in [3] tries to answer a basically unanswerable question: how do you make nanomachines capable of withstanding deliberate attacks from 'battle bots' specifically developed for destruction? International cooperation, all sorts of treaties, etc., etc., can only go so far (though the power of the social aspects should not be underestimated - this is something that has prevented nuclear war, after all).

   Drexler does suggest using simulations and research in order to be able to eventually withstand the nanotechnology threat [8], but the weakest point of these particular suggestions is that nanotechnology is presumed to be used for that purpose. This creates a chicken-and-egg problem: how do you create the nanotechnology for the threat research and make this technology non-threatening at the same time? All kinds of ingenious ways to deal with that issue have been suggested (space labs, non-replicating assemblers, immediate destruction of the first-generation protein-based assemblers and so on), but today we seem to have an instrument that allows us to do some real infobalance research - the Internet.

   Hopefully the Internet research approach suggested above can help us bring the various doomsday scenarios from the fairy-tale domain into the scientific domain, actually giving us some hard theory about the ways free information propagates itself.

   Of course, this can be very difficult, but then the 'Internet expansion' approach allows us to just create a stable infosphere that has been proven to withstand thousands of attacks from different angles, even if we do not fully understand why and how this happens.

   Such a stable infosphere (today it is the Internet) has a very important property: the number of people trying to destroy it is much smaller than the number of people striving to keep it in working condition. The sheer number of man-years invested in defense exceeds the effort invested in destructive approaches by several orders of magnitude.

   As a result, global attacks on the Internet typically fail simply because it is very difficult for a small group of people to break something that is maintained by millions of people. In fact, not many global attacks are even attempted - typically the attacks have a very narrow Internet segment (or even a single machine) as their goal.

   Hopefully, if the gradual evolution of the Internet into the Nanonet is done right, this relationship will persist, and even if a whole nation-state commits its resources to destruction, the combined mass of the rest of the network will be enough to withstand and nullify such an attack. Maybe the Nanonet does not even have to be deployed before the actual attack or runaway incident, although this proposition seems shaky. Due to the Nanonet's own bugs, its instant deployment can be more dangerous than the original attack, so it is much safer to use the gradual deployment path.

   Actually, most of the spectacular Internet-wide crashes (the Morris Worm, early TCP meltdowns, network 'flapping', etc.) were not the results of deliberate attacks at all, but rather of code bugs or some improperly understood data transfer effect. We can probably expect this trend to continue into the future, with bugs being more dangerous than deliberate attacks - which, of course, does not make the deliberate attacks any less dangerous or remove the need for very sophisticated mechanisms to fight them.

   Finally, it should be noted that regardless of nanotechnology's future, the approaches suggested in this document can prove useful (and even essential) for 'normal' Internet growth and bring us many unexpected benefits along the way.

10. References.

   [1] Bruce Sterling. The Hacker Crackdown.
    http://www.lysator.liu.se/etexts/hacker/

   [2] K. Eskov. Our answer to Fukujama. (in Russian!)
    http://www.kulichki.com:8100/moshkow/PROZA/ESKOV_K/pub_fuj.txt

   [3] K. Eric Drexler. Engines of Creation.
    http://www.foresight.org/EOC/

   [4] Steven A. Hofmeyr. An Interpretative Introduction to the Immune System.
    http://www.cs.unm.edu/~steveah/imm-overview-new.pdf

   [5] S. Osokine. The Flow Control Algorithm for the Distributed 'Broadcast-Route' Networks with Reliable Transport Links.
    http://www.grouter.net/gnutella/flowcntl.htm

   [6] Robert A. Freitas Jr. Some Limits to Global Ecophagy by Biovorous Nanoreplicators, with Public Policy Recommendations.
    http://www.foresight.org/NanoRev/Ecophagy.html

   [7] Stephen A. Hofmeyr, Stephanie Forrest. Architecture for an Artificial Immune System.
    http://www.cs.unm.edu/~steveah/ecj.pdf

   [8] K. Eric Drexler and Chris Peterson, with Gayle Pergamit. Unbounding the Future: the Nanotechnology Revolution.
    http://www.foresight.org/UTF/Unbound_LBW/index.html

*** End of the Document ***
