From World-Wide Web to Global Brain
Designing a nervous system for the super-organism
A number of recently developed mechanisms, such as software agents, knowledge discovery and associative learning, are likely to transform the World-Wide Web into an intelligent network, giving users access to the whole of human knowledge in a simple and intuitive way. This network will function in a way similar to the human brain, using associations between documents to guide the diffusion of "thoughts" over the Web. Such a global brain will function as a nervous system for the social superorganism, the integrated system formed by the whole of human society.
Dr. Francis Heylighen is a Senior Research Associate ("Onderzoeksleider") for the Belgian National Fund for Scientific Research (NFWO). He works at the Free University of Brussels (VUB), where he is an Associate Director of the transdisciplinary research Center "Leo Apostel" (CLEA). He is an editor of the world-wide Principia Cybernetica Project for the collaborative development of an evolutionary-systemic philosophy.
Society as a super-organism
It is an old idea, dating back at least to the ancient Greeks, that the whole of human society can be viewed as a single organism. Many thinkers have noticed the similarity between the roles played by different organizations in society and the functions of organs, systems and circuits in the body. For example, industrial plants extract energy and building blocks from raw materials, just like the digestive system. Roads, railways and waterways transport these products from one part of the system to another one, just like the arteries and veins. Garbage dumps and sewage systems collect waste products, just like the colon and the bladder. The army and police protect the society against invaders and rogue elements, just like the immune system.
Such initially vague analogies become more precise as the understanding of organisms increases. The fact that complex organisms, like our own bodies, are built up from individual cells led to the concept of the superorganism. If cells aggregate to form a multicellular organism, then organisms might aggregate to form an organism of organisms: a superorganism. Biologists agree that social insect colonies, such as ant nests or beehives, are best seen as such superorganisms. The activities of a single ant, bee or termite are meaningless unless they are understood in terms of their contribution to the survival of the colony.
Individual humans may seem similar to the cells of a social superorganism, but they are still much more independent than ants or cells (Heylighen & Campbell, 1995). This is especially clear if we look at the remaining competition, conflicts and misunderstandings between individuals and groups. However, there seems to be a continuing trend towards further integration. As technological and social systems develop into a more closely knit tissue of interactions, transcending the old boundaries between countries and cultures, the social superorganism seems to turn from a metaphor into a reality.
The superorganism and its global brain - a brief review of the literature
Several authors have suggested that the integration of humanity into a social super-being is as unavoidable as the evolution of multicellular organisms. The French paleontologist and Jesuit priest Pierre Teilhard de Chardin (1955) was perhaps the first to discuss this evolution towards super-human integration. He coined the term "noosphere" (mind sphere) to denote the growing network of thoughts, information and communication that envelops the planet. The Russian computer scientist Valentin Turchin applied the principles of cybernetics to analyse this evolution (1977). He introduced the concept of "metasystem transition" to denote the evolutionary emergence of a "metasystem", a higher level of complexity which integrates the systems of the level below. He noted that humanity is undergoing such a metasystem transition towards a superbeing, caused by the growing connections between the nervous systems of individual humans. The American biologist Gregory Stock (1991) wrote a popular account of this process, where the individual is increasingly tied to others through technology. He called the emerging superorganism "Metaman".
The domain where integration seems to be moving ahead most quickly is the development of communication media. In the society as super-organism metaphor, the communication channels play the role of nerves, transmitting signals between the different organs and muscles. In more advanced organisms, the nerves develop a complex mesh of interconnections, the brain, where sets of incoming signals are integrated and processed. After the advent in the 19th century of one-to-one media, like telegraph and telephone, and in the first half of this century of one-to-many media, like radio and TV, the last decade in particular has been characterized by the explosive development of many-to-many communication networks. Whereas the traditional communication media link sender and receiver directly, networked media have multiple cross-connections between the different channels. Moreover, the fact that the different "nodes" of the digital network are controlled by computers allows sophisticated processing of the collected data, reinforcing the similarity between the network and the brain. This has led to the metaphor of the world-wide computer network as a "global brain".
Peter Russell (1983, 1995), a British physicist who became interested in Eastern religions, developed the Global Brain theme in a "New Age" vision, emphasizing consciousness-raising techniques like meditation that might help individuals to live more synergetically. Joël de Rosnay (1986, 1995), a French futurologist, analysed the emergence of the cybernetic superorganism (which he calls the "cybionte") and its "planetary brain", by means of concepts from systems theory and the theories of chaos, self-organization and evolution. He emphasizes the role of the Internet and the different multimedia technologies. A similar focus can be found in the work of the German complexity theorist Gottfried Mayer-Kress, who explores the analogies between the Internet and complex adaptive systems like the brain, and the applications of the network to solving large-scale, complex problems. More and more people seem to notice the similarities between the dynamic, informational network structures of the Internet (and in particular, the World-Wide Web) on the one hand, and the human brain on the other.
Most recently, the American mathematician Ben Goertzel, in a 1996 proposal for using the World-Wide Web to implement globally distributed cognition, introduced the concept of the "WorldWideBrain": a massively parallel intelligence, consisting of structures and dynamics emergent from a community of intelligent WWW agents, distributed worldwide.
However, none of the proposals I have read so far is very clear on precisely how this emerging intelligent network would perform the basic functions of a brain: learning, thinking, solving problems, and discovering new concepts. My collaborator Johan Bollen and I have started to address this very problem, through a combination of theoretical reasoning, experimental tests with small networks and computer simulations (Heylighen & Bollen, 1996; Bollen & Heylighen, 1996; Heylighen, 1996). The following gives a simple account of how we imagine that a smart World-Wide Web might function. This approach builds on Turchin's theory of Metasystem Transitions, which we are developing further, together with Turchin and Cliff Joslyn, as part of the Principia Cybernetica Project.
The world-wide web as an associative memory
The present World-Wide Web, the distributed hypermedia interface to the information available on the Internet, is in a number of ways similar to a human brain, and is likely to become more so as it develops. The core analogy is the one between hypertext and associative memory. Links between hyperdocuments or nodes are similar to associations between concepts as they are stored in the brain. However, the analogy goes much further, including the processes of thought and learning.
Retrieval of information can in both cases be seen as a process of spreading activation: nodes or concepts that are semantically "close" to the information one is looking for are "activated". The activation spreads from those nodes through their links to neighbouring nodes, and the nodes which have received the highest activation are brought forward as candidate answers to the query. If none of the proposals is acceptable, those that seem closest to the answer are again activated and used as sources for a new round of spreading. This process is repeated, with the activation moving from node to node via associations, until a satisfactory solution is found. Such a process is the basis for thinking. In the present Web, spreading activation is only partially implemented, since a user normally selects nodes and links sequentially, one at a time, and not in parallel as in the brain. Thus, "activation" does not really spread to all neighbouring nodes, but follows a linear path.
A first implementation of such a "parallel" activation of nodes might be found in WAIS-style search engines (e.g. Lycos), where one can type in several keywords and the engine selects those documents that contain a maximum of those keywords. For example, entering the words "pet" and "disease" might bring up documents that have to do with veterinary science. This only works if the document one is looking for actually contains the words used as input. However, there might be other documents on the same subject that use different words (e.g. "animal" and "illness") to discuss the issue. Here, again, spreading activation may help: documents about pets are normally linked to documents about animals, and so a spread of the activation received by "pet" to "animal" may be sufficient to select the searched-for documents. However, this assumes that the Web is linked in an intelligent way, with semantically related documents (about "pets" and "animals") also being near to each other in hyperspace. To achieve this we need a learning process.
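The following sketch in Python illustrates the principle on a toy network. The words, link strengths and parameters are invented for the illustration; they are not taken from any actual experiment.

```python
# Toy associative network: links[a][b] is the strength of the
# association leading from node a to node b. All values are invented.
links = {
    "pet":     {"dog": 0.9, "animal": 0.7, "disease": 0.3},
    "dog":     {"pet": 0.8, "animal": 0.6},
    "animal":  {"pet": 0.5, "illness": 0.4},
    "disease": {"illness": 0.8, "animal": 0.2},
    "illness": {"disease": 0.8},
}

def spread_activation(sources, steps=2, decay=0.5):
    """Activate the source nodes, then let activation flow along
    the weighted links for a fixed number of steps."""
    activation = {node: 1.0 for node in sources}
    for _ in range(steps):
        new = dict(activation)
        for node, value in activation.items():
            for neighbour, weight in links.get(node, {}).items():
                # Each node passes on a decayed fraction of its activation.
                new[neighbour] = new.get(neighbour, 0.0) + decay * value * weight
        activation = new
    # The most highly activated nodes are the candidate answers.
    return sorted(activation.items(), key=lambda item: -item[1])

print(spread_activation(["pet", "disease"]))
# "animal" and "illness" gather activation although they were not in the
# query, because they are associatively close to the query terms.
```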
Learning webs
In the human brain knowledge and meaning develop through a process of associative learning: concepts that are regularly encountered together become more strongly connected (Hebb's rule for neural networks). At present such learning in the Web only takes place through the intermediary of the user: when a maintainer of a web site about a particular subject finds other web documents related to that subject, he or she will normally add links to those documents on the site. When many site maintainers are continuously scanning the Web for related material, and creating new links when they discover something interesting, the net effect is that the Web as a whole effectively undergoes some kind of associative learning.
However, this process would be much more efficient if it could work automatically, without anybody needing to create links manually. It is possible to implement simple algorithms that make the web learn (in real time) from the paths which its users follow through the linked documents. The principle is simply that links followed by many users become "stronger", while links that are rarely used become "weaker". Some simple heuristics can then propose likely candidates for new links, and retain the ones that gather the most "strength". The process is illustrated by our adaptive hypertext experiment, where a web of randomly connected words self-organizes into a semantic network by learning from the link selections made by its users. If such learning algorithms could be generalized to the Web as a whole, the knowledge existing in the Web could become structured into a giant associative network which continuously adapts to the pattern of its usage.
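A minimal sketch of such a learning rule follows. The reward and decay constants, and the transitivity heuristic for proposing new links, are illustrative assumptions rather than a description of the actual experimental algorithms.

```python
from collections import defaultdict

# strength[a][b]: how strongly document a is associated with document b.
strength = defaultdict(lambda: defaultdict(float))

REWARD = 0.1    # reinforcement for a link the user actually follows
DECAY = 0.999   # slow weakening applied to links over time

def learn_from_path(path):
    """Update link strengths from one user's sequence of visited documents."""
    # Hebb's rule: documents visited in succession become more strongly linked.
    for a, b in zip(path, path[1:]):
        strength[a][b] += REWARD
    # Transitivity heuristic: if a user went a -> b -> c, then a -> c is a
    # plausible candidate for a new, direct link.
    for a, b, c in zip(path, path[1:], path[2:]):
        strength[a][c] += REWARD / 2

def decay_all():
    """Links that are rarely used gradually become weaker."""
    for a in strength:
        for b in strength[a]:
            strength[a][b] *= DECAY

learn_from_path(["dog", "pet", "animal"])
learn_from_path(["dog", "pet", "veterinary"])
# After many such paths, the strongest links form a semantic network.
```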
Answering ill-posed questions
We can safely assume that in the coming years virtually the whole of human knowledge will be made available electronically over the networks. If that knowledge is then semantically organized as sketched above, processes similar to spreading activation should be capable of retrieving the answer to any question for which an answer exists somewhere. The spreading activation principle allows questions that are ill-posed: you may have a problem without being able to formulate clearly what you are looking for, just some ideas about the things it has to do with.
Imagine the following situation: your dog is continuously licking mirrors. You don't know whether you should worry about that, or whether it is just normal behavior, or perhaps a symptom of some kind of disease. So you try to find more information by entering the keywords "dog", "licking" and "mirror" into a Web search. If there were a "mirror-licking" syndrome described in the literature on dog diseases, such a search would immediately find the relevant documents. However, the phenomenon may just be an instance of the more general observation that certain animals like to touch glass surfaces. A normal search on the above keywords would never find a description of that phenomenon, but the spread of activation in a semantically structured web would reach "animal" from "dog", "glass" from "mirror" and "touching" from "licking", thus activating documents that contain all three concepts. This example can easily be generalized to the most diverse and bizarre problems. Whether it has to do with how you decorate your house, how you reach a particular place, how you remove stains of a particular chemical, or what the natural history of the Yellowstone region is: whatever your problem, if some knowledge about the issue exists somewhere, spreading activation should be able to find it.
For the more ill-structured problems, the answer may not come immediately, but be reached after a number of steps. Just as in normal thinking, formulating part of the problem brings up certain associations, which may call up others, which in turn make you reformulate the problem in a better way; this leads to a clearer view of the problem, and again a more precise description, and so on, until you get a satisfactory answer. The web will not only provide straight answers but also general feedback that will direct you in your efforts to get closer to the answer.
We have tested this principle in practice with the associative network derived from our learning web experiment. There exists a TV game (called "Pyramid" on the French station A2) where players have to guess words, using a minimal number of clue words provided by their partners. For example, if you have to guess "boat", your partner might suggest the clues "vehicle" and "water". If that is not sufficient to make you guess correctly, depending on your answer your partner might add an additional clue, e.g. "sail". We implemented spreading activation in our experimental network so that it would be able to play this game. The user selects a combination of words from the network (e.g. "control" and "society"), and the spreading activation mechanism finds the words that are most closely related to the combined clues (e.g. "government"). Within the limitations of our data (at present the network contains only 150 words), the network seems to play the game about as well as a human player.
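A sketch of how such a guessing mechanism might be implemented, reusing the toy network and the spread_activation function from the earlier sketch; the data remain illustrative, not the actual experimental network.

```python
def guess_from_clues(clues):
    """Return the non-clue word that receives the most combined
    activation when all the clue words are activated together."""
    activation = dict(spread_activation(clues, steps=1))
    candidates = {word: act for word, act in activation.items()
                  if word not in clues}
    return max(candidates, key=candidates.get)

# On the toy network above, "disease" is associated with both clues:
print(guess_from_clues(["pet", "illness"]))   # -> "disease"
# In the experimental 150-word network, clues like "control" and
# "society" would analogously yield "government".
```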
From thought to web agent
The mechanisms we have sketched allow the Web to act as a kind of external brain, storing a huge amount of knowledge while being able to learn and to make smart inferences, thus allowing you to solve problems for which your own brain's knowledge is too limited. In order to use that cognitive power effectively, the distance or barrier between internal and external brain should be minimal. At present, we still enter questions by typing keywords into specifically chosen search engines. This is rather slow and awkward compared to the speed and flexibility with which our own brain processes thoughts. Several mechanisms can be conceived to accelerate that process.
First, there have already been some experiments in which people steer a cursor on a computer screen simply by thinking about it: the brain waves associated with particular thoughts (such as "up", "down", "left" or "right") are registered by sensors and interpreted by neural network software, which passes its interpretation on to the computer interface in the form of a command, which is then executed. If such direct brain-computer interfaces become more sophisticated, it would suffice to merely think about your dog licking mirrors for the documents explaining that behavior to pop up on your screen.
Second, the search process itself should not require you to select a number of search engines in different places on the Web. The new technology of net agents is based on the idea that you formulate your problem or question, and that the request itself then travels over the Web, collecting information in different places, and sends you back the result once it has explored all promising avenues. A software agent is a small message or script embodying a description of the things you want to know, a list of provisional results, and an address where it can reach you to send back the final solution.
Using our experimental network, we have simulated an agent that searches for information in the network using associations. The agent gets a target word that it needs to find, and a random starting position. From that position, it explores the available links by selecting the one with the highest association to the target. It repeats this process at each new position, until the target is found. In most cases, the target is reached very quickly, in a way similar to the way a human user would select hypertext links.
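A minimal sketch of such an association-following agent, reusing the toy links table from the earlier sketch and assuming that the strength of a node's direct link towards the target can serve as the association measure:

```python
import random

def agent_search(target, max_steps=100):
    """Walk the toy network from a random start, always following the
    link whose destination is most strongly associated with the target."""
    position = random.choice(list(links))
    for step in range(max_steps):
        if position == target:
            return position, step             # target found
        neighbours = links.get(position, {})
        if not neighbours:
            return None, step                 # dead end
        # Greedy choice: prefer the target itself if it is directly
        # linked; otherwise pick the neighbour whose own link towards
        # the target is strongest.
        position = max(neighbours,
                       key=lambda n: 1.0 if n == target
                       else links.get(n, {}).get(target, 0.0))
    return None, max_steps                    # gave up

print(agent_search("illness"))
```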
In the future intelligent web, such agents could play the role of external thoughts. Your thought would initially form in your own brain, then be translated automatically via a neural interface to an agent or thought in the external brain, continue its development by spreading activation, and come back to your own brain in a much enriched form. With a good enough interface, there should not really be a clear boundary between "internal" and "external" thought processes: the one would flow naturally and immediately into the other.
Integrating individuals into the super-brain
Interaction between internal and external brain does not always need to go in the same direction. Just as the external brain can learn from your pattern of browsing, it could also learn from you by directly asking you questions. A smart web would continuously check the coherence and completeness of the knowledge it contains. If it finds contradictions or gaps, it would try to locate the persons most likely to understand the issue (typically the authors or active users of a document), and direct their attention to the problem. In many cases, an explicit formulation of the problem will be sufficient for an expert to quickly fill in the gap, using implicit (associative) knowledge which has not yet been entered clearly into the Web. Many "knowledge acquisition" and "knowledge elicitation" techniques exist for stimulating experts to formulate their intuitive knowledge in such a way that it can be implemented on a computer.
In that way, the Web would learn implicitly and explicitly from its users, while the users would learn from the Web. Similarly, the web would mediate between users exchanging information, answering each other's questions. In a way, the brains of the users themselves would become nodes in the Web: stores of knowledge directly linked to the rest of the Web which can be consulted by other users or by the web itself.
Though individual people might refuse to answer requests received through the super-brain, no one would want to miss the opportunity to use the unlimited knowledge and intelligence of the super-brain to answer their own questions. However, normally you cannot continuously receive a service without giving anything in return. People will stop answering your requests if you never answer theirs. Similarly, one could imagine that the intelligent Web would operate on the simple condition that you can use it only if you provide some knowledge in return.
In the end the different brains of users may become so strongly integrated with the Web that the Web would literally become a brain of brains: a super-brain. Thoughts would run from one user via the Web to another user, from there back to the Web, and so on. Thus, billions of thoughts would run in parallel over the super-brain, creating ever more knowledge in the process.
The brain metasystem
The creation of a super-brain is not yet sufficient for a metasystem transition beyond the level of human thought: what we need is a higher level which somehow steers and coordinates the thoughts of the level below. The next level may be called metarationality: the capacity to automatically create new concepts, rules and models, and thus change one's own way of thinking. This would make thinking in the Web not just quantitatively, but qualitatively different from human thought. It would in a sense automatize the creativity of the scientific genius, who develops wholly new visions of the world, and make these creative powers available to everyone.
An intelligent Web could extend its own knowledge by the process of "knowledge discovery" or "data mining" (Fayyad & Uthurusamy, 1995). This is based on an automatization of the mechanisms underlying scientific discovery: a set of more abstract concepts or rules is generated which summarizes the available data, and which, by induction, makes it possible to produce predictions for situations not yet observed. As a simple illustration, if an exhaustive search turned up that most documented cases of mirror-licking dogs also suffered from a specific nervous disease, a smart Web might infer that mirror-licking is a symptom of that disease, and that new cases of mirror-licking dogs are likely to suffer from the same disease, even though that rule was never entered in its knowledge base and was totally unknown until then.
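A toy illustration of this inductive step; the case records and the acceptance threshold are invented for the example.

```python
# Invented case records: each record notes whether a dog licks mirrors
# and whether it suffers from a (hypothetical) nervous disease.
cases = [
    {"licks_mirrors": True,  "nervous_disease": True},
    {"licks_mirrors": True,  "nervous_disease": True},
    {"licks_mirrors": True,  "nervous_disease": False},
    {"licks_mirrors": False, "nervous_disease": False},
]

def rule_confidence(cases, symptom, condition):
    """Confidence of the rule 'symptom -> condition': the fraction of
    cases showing the symptom that also show the condition."""
    with_symptom = [c for c in cases if c[symptom]]
    if not with_symptom:
        return 0.0
    return sum(c[condition] for c in with_symptom) / len(with_symptom)

conf = rule_confidence(cases, "licks_mirrors", "nervous_disease")
if conf > 0.6:  # arbitrary acceptance threshold
    print(f"Induced rule: mirror-licking suggests nervous disease "
          f"(confidence {conf:.2f})")
```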
Many different techniques are available to support such discovery of general principles, including different forms of statistical analysis, genetic algorithms, inductive learning and conceptual clustering, but these still lack integration. The controlled development of knowledge requires a unified metamodel: a model of how new models are created and evolve. A possible approach to develop such a metamodel might start with an analysis of the building blocks of knowledge, of the mechanisms that (re)combine building blocks to generate new knowledge systems, and of a list of selection criteria, which distinguish 'good' or 'fit' knowledge from 'unfit' knowledge.
References
Francis Heylighen & Donald T. Campbell (1995): "Selection of Organization at the Social Level: obstacles and facilitators of Metasystem Transitions", World Futures: the Journal of General Evolution, Vol. 45:1-4, p. 181.
Pierre Teilhard de Chardin (1955): "Le Phénomène Humain" (Seuil, Paris); translated as "The Phenomenon of Man" (Harper & Row, New York, 1959).
Joël de Rosnay: several books in French, including "L'Homme Symbiotique. Regards sur le troisième millénaire" (Seuil, Paris, 1996), "Le Cerveau Planétaire" (Olivier Orban, Paris, 1986) and "Le Macroscope" (Seuil, Paris, 1975).
Gottfried Mayer-Kress: several papers, including Gottfried Mayer-Kress & Cathleen Barczys (1995): "The Global Brain as an Emergent Structure from the Worldwide Computing Network", The Information Society 11 (1).