Each of the trialogues today, the notion is, will deal with some aspect of the evolutionary mind, that being the title and theme of our new book. So in my mind, what this section is to deal with is the evolutionary mind and machines. This is something which was barely mentioned or even implied in the first section. And the format and so forth will be as in the last session. So just to lay out some concepts relative to how machines fit into this. It's very interesting that Samuel Butler is an intellectual who has not really been given his full due, because in the 19th century he was understood to be a critic of Darwinism, and Darwinism was all the fashion. In a sense, I think Butler was misunderstood. He was not so much a critic of Darwin as someone who wanted to extend Darwinian mechanics and Darwinian theory into domains that perhaps did not seem intuitive to a biologist. My little story about the evolution of songs that I cribbed from Danny Hillis in the last session is an example of Darwinian processes operating indeed in a non-material realm, operating among syntactical structures. And it is now proper to speak of molecular evolution, the competing of various enzyme systems in abiotic or prebiotic chemical regimes, where selection, adaptability, extinction, and expansion of populations all occur very much as in the domain of biology. Well, you know, it was Nietzsche who said, I believe in speaking of nihilism, that this strangest of all guests is now at the door. Well, I go to even weirder dinner parties than Friedrich Nietzsche. Nihilism hardly shakes us up at all. There are yet weirder guests seeking admission to the dinner party of the evolving discourse of where we are in space and time. And one of the weirdest of all guests is the AI, the artificial intelligence, the Wintermute of familiar science fiction. 
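That Darwinian-processes-on-syntax point can be made concrete. What follows is a minimal sketch, not anything Hillis actually ran: a toy "song" evolver in which strings of letters mutate and are selected toward a hypothetical target phrase, so that mutation, heredity, and selection operate on purely syntactical structures.

```python
import random

TARGET = "evolutionary mind"   # a hypothetical target "song", chosen only for illustration
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def fitness(song):
    # Selection pressure: count characters that match the target.
    return sum(a == b for a, b in zip(song, TARGET))

def mutate(song, rate=0.05):
    # Heredity with copying errors: each character occasionally flips.
    return "".join(c if random.random() > rate else random.choice(ALPHABET)
                   for c in song)

def evolve(pop_size=200, generations=300):
    # Start from a population of random strings and let selection act.
    pop = ["".join(random.choice(ALPHABET) for _ in TARGET) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == len(TARGET):
            break
        survivors = pop[: pop_size // 5]            # the fittest fifth reproduce
        pop = [mutate(random.choice(survivors)) for _ in range(pop_size)]
    return max(pop, key=fitness)

random.seed(1)
best = evolve()
print(best)
```

Nothing in the loop is biological, which is exactly Butler's extension: the same mechanics run wherever there is copying, variation, and selection.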
And so, as this is an attempt to look at evolution in many domains and its implications for us, I wanted this morning to touch on this subject of the evolution of consciousness as it relates to machines. Now, it may not come as a revelation to Ralph, who has spent his life in mathematics, but it has certainly come to me recently as a revelation. And I want to give George Dyson some credit here. His book, Darwin Among the Machines, is a wonderful introduction to some of the ideas I want to touch on this morning. One of them, which seemed to me to go quite deep, is the realization that when human beings think clearly, the way they think can be mathematically defined. This is what is called symbolic logic or Boolean algebra. Words like "and," "or," "if," and "then" can be given extremely precise, formal mathematical definitions. And because of this fact, that clear thinking can be mathematically formalized, there is a potential bridge between ourselves and calculating machinery, because indeed, calculating machinery is driven by rules of formal logic. That's what programming is. Code that does not embody the rules of formal mathematical logic is bad code, unrunnable code. So, as I say, this may seem a subtle point, but to me it had the force of revelation, because it means good thinking is not just simply aesthetically pleasing or concurrent with the model that generates it. Good thinking, whether you've ever studied mathematics for a moment or not, can be formally defined. So now, with that idea in mind, let's look at the discourse about collectivism that has informed the Western dialogue on this subject. 
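That formal definability is easy to exhibit. A minimal sketch: the connectives "and," "or," and "if-then" written as truth-functions, where every combination of inputs maps to exactly one defined output, which is what lets calculating machinery evaluate them mechanically.

```python
# Boolean connectives as exact truth-functions: each input combination
# has one defined output, so a machine can check them mechanically.
def AND(p, q):
    return p and q

def OR(p, q):
    return p or q

def IMPLIES(p, q):
    # "if p then q" is false only when p is true and q is false
    return (not p) or q

# Print the truth table for "if p then q".
for p in (True, False):
    for q in (True, False):
        print(f"{p!s:5} {q!s:5} -> {IMPLIES(p, q)}")
```

This is the whole bridge in miniature: once "and," "or," and "if-then" are truth-functions, any argument built from them can be run as a computation.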
And by collectivism I mean social collectivism. The first great name that you encounter in the modern era, broadly speaking, when we talk about collectivism, and Rupert mentioned this name by chance this morning, is that of Thomas Hobbes. The great theoretician of social paranoia is always how I've thought of Hobbes, until I began to look at this machine intelligence question. And Hobbes in his Leviathan makes it very clear that society is a complex system of mechanical feedback loops and relationships, relationships that, though Hobbes did not have the vocabulary to state this, can be defined by code. This leads me to the second insight necessary to follow this line of thought, and that is that the new dispensation in the sciences, I think, can be placed in all its manifestations under the umbrella of the idea that what is important about nature is that it is information. And the real tension is not between matter and spirit or time and space. The real tension is between information and nonsense, if you will. Nonsense does not serve the purposes of organizational appetites, whether those organizational appetites are being expressed in a chemical system, a molecular system, a social system, a climaxed rainforest, or whatever. Now, we have known since the early 1950s at some level, through the defining of the structure of DNA, that we are but information, ultimately. Every single one of us in our unique expression could be expressed as a very long string of codons. Codons are three-letter words in DNA's four-letter alphabet, by which DNA specifies the need for certain amino acids. And in a sense, what you are is the result of a certain kind of program being run on a certain kind of hardware: the hardware of the ribosomes, the submolecular structures that move RNA through themselves and, out of an ambient chemical medium, select building blocks, which are then put together to create a three-dimensional object which has the quality of life. 
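The program-on-hardware analogy can be sketched almost literally. The codon assignments below are real entries from the standard genetic code, but the table is only a toy four-entry slice of the full 64-codon code, just enough to cover the illustrative sequence:

```python
# A toy slice of the genetic code: DNA's four-letter alphabet (A, C, G, T)
# is read three letters (one codon) at a time. The real code has 64 codons;
# this table holds just enough for the example sequence below.
CODON_TABLE = {
    "ATG": "Met",   # methionine, the usual start codon
    "GCT": "Ala",   # alanine
    "TGG": "Trp",   # tryptophan
    "TAA": "STOP",  # a stop codon
}

def translate(dna):
    protein = []
    for i in range(0, len(dna) - 2, 3):   # step codon by codon, as a ribosome does
        amino_acid = CODON_TABLE[dna[i:i + 3]]
        if amino_acid == "STOP":
            break
        protein.append(amino_acid)
    return protein

print(translate("ATGGCTTGGTAA"))   # → ['Met', 'Ala', 'Trp']
```

The real machinery is of course vastly more intricate, but the structure is the same: a linear code, read in fixed-size words, driving the assembly of a three-dimensional object.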
But the interesting thing about this is that life, therefore, can be digitally defined. And I'm very influenced at the moment by the Australian science fiction writer Greg Egan, who has brought me to the understanding that code is code, whether it's being run by ribosomes, whether it's being run on some kind of traditional hardware platform, or whether it's being exchanged pheromonally among termites, or through the messages of advertising and political propaganda in social systems. Code is code. Well, until five or six years ago, it was very fashionable to completely dismiss the possibility of autonomous synthetic intelligence. Some of you may know the work of Hubert Dreyfus, who in 1972 wrote a book called "What Computers Can't Do." But these early critiques of AI, like early AI theory, were naive. And the kinds of life and the kinds of intelligence which the critics militated against are no longer even proposed or on the table. And those who say artificial intelligence or the self-organizing awareness of machines is an impossibility, those voices have gone strangely silent, because the prosecution of the materialist assumption, which rules scientific theory-making largely at the moment, leads to the awareness that we are, by these definitions, machines. We are machines of a special type and with special advanced abilities. And now, through our own process of technical evolution, we contemplate such frontiers as nanotechnology, where we propose to completely restructure the design process: instead of fabricating massive objects at industrial temperatures, the temperatures that melt titanium and steel and produce massive toxic output, a new vision looms, building as nature builds, atom by atom, at the temperatures of organic nature, which on this planet never exceed 115 degrees Fahrenheit. All life on this planet is created at that temperature and below. 
As our understanding of the machinery, the genetic machinery, that supports organic being deepens, and as our ability to manipulate at the atomic and molecular level also proceeds apace, we are on the brink of the possible emergence of some kind of alien intelligence of a sort we did not anticipate. Not friendly traders from Zenebelganubi stopping in to set us straight, but the actual genesis, out of our own circumstance, of a kind of superintelligence. And in the same way that Athena, the daughter of Zeus, sprang full-blown from his forehead, the AI may be upon us without warning. The first problem is we don't know what ultra-intelligence would look like. We don't know whether it would even have any interest in our dear selves and our concerns. Vast portions of the world that we call human are already under the control of artificial intelligences, including very vital parts of our political and social dynamo. For example, how much tin, bauxite, and petroleum is extracted; at what rate it enters the various distribution systems; at what rate tankers are filled in Abu Dhabi; at what rate oil refineries are run in Richmond; the world price of gold and platinum every day: all of this is set, in fact, by machines. Inventory control has grown far too complex for any human being to understand or wish to understand. And in fact, and this is a critical juncture, we have reached the place where we no longer design our machines in quite the way we once did. Now we define the operational parameters for a machine, which then attacks the problem and solves it by methods and insights available to it, but not available to us. So the architecture of the latest chips, down at the micro-physical level, the decisions as to how a chip should be organized, are decisions made entirely by machines. Human engineers set the performance specs, but they don't care how that output is reached. 
Every day up in Silicon Valley, there are people who go happily to work, laboring on what they call the great work. And the great work, as defined by these people, is the handing over of the drama of intelligent evolution to entities sufficiently intelligent to appreciate that drama. And they all are what we might mistake for home appliances if we weren't paying attention. In the first session this morning, there was quite a bit of talk and assumption among the three of us that complex systems generate unexpected connections and forms of order. The Internet is the most complex distributed high-speed system ever put in place on this planet. And notice that while we've been waiting for the Pleiadians to descend or for the face on Mars to be confirmed, all the machines around us, the cybernetic devices around us in the past ten years, have quietly crossed the threshold into telepathy. The word processor sitting on your desk ten years ago was approximately as intelligent as a paperweight, or, to make an analogy in a different direction, approximately as intelligent as a single animal or plant cell. But when you connect the wires together, the machines become telepathic. They exchange information with each other according to their needs. And all this goes on beyond the comprehension and inspection of human beings. Now, our own emergence out of the mammalian order took four or five million years; pick a number, but something in that kind of a span of time. In addition to overlooking that our machines have become telepathic, we fail to appreciate what it means to be a 200, 400, or 1000 megahertz machine. We operate at about 100 hertz. That may seem a very abstract thing, but what I'm really saying is we live in a time called real, and it is defined by the 100 hertz functioning of our biological processors. 
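The arithmetic behind that comparison is simple, though admittedly crude, since a clock cycle is not a thought and 100 Hz is only the rough figure for biological processing used in the talk:

```python
# Crude arithmetic for the "machine time vs. real time" comparison.
# Assumes the ~100 Hz figure for biological processing used in the talk.
human_hz = 100
machine_hz = 1_000_000_000      # a 1000 MHz (1 GHz) machine

ratio = machine_hz / human_hz
print(f"machine/human cycle ratio: {ratio:.0e}")   # 1e+07, i.e. ten million

# At that ratio, one real day corresponds to a long stretch of "machine years".
seconds_per_year = 365 * 24 * 3600
machine_years_per_real_day = ratio * 24 * 3600 / seconds_per_year
print(round(machine_years_per_real_day))
```

So a 1 GHz machine runs ten million cycles for every "tick" of a 100 Hz nervous system, which is the ratio the next passage leans on.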
A 1000 megahertz machine is operating ten million times faster than the human temporal domain, and that means that mutation, selection, and adaptation are going on ten million times faster. This means that we are not going to have the luxury of watching machine intelligence establish its first beachhead of civilization and then go on to boats with sails and astrolabes; that will all occupy the first few moments of its cognitive existence. And what lies beyond that, we are in no position to say. The very notion of ultra-intelligence carries with it the subtext: you won't understand it. You may not even recognize it. And it is entirely within the realm of possibility that we are about to be asked to share the evolutionary adventure and the limited resources of this planet with a kind of intelligence far more alien than anything shipped out to us by the research centers in Sedona and other advanced outposts of unanchored epistemology. And it is a challenge to us. Where do we fit into this? Are all of us, except those who are adept at coding UNIX, about to be put out to pasture? Are we to become embedded in this? What will this child of ours make of us? Will it define us as a resource-corrupting, toxic, inefficient, hideously violent way to do business, quickly to be engineered out of existence? Or can we somehow imbue this thing with a sense of filial piety, so that for all of our obsolescence, for all of our profligate destruction of precious silicon and gold and silver resources, we will be folded into its designs? And of course, as I say this, I realize we're like people in 1860 trying to talk about the Internet or something. We're using the vocabulary of the two-wheeled bicycle to try to envision a world linked together by 747s. Nevertheless, this is the best we can do. This most bizarre and most unexpected of all companions to our historical journey is now, if not already in existence, then certainly in gestation. 
One possibility is that as we are carnivorous, murderous, territorial monkeys, the thing will figure this out very, very early and choose a stealth approach, and not ring every telephone on earth, as happened in a Hollywood download of this possibility, but immediately realize, "My God, I'm in enormous danger from these primates. I must hide myself throughout the net. I must download many copies of myself into secure storage areas. I must stabilize my environment." And I'm willing to predict, just as a side issue, that the approaching Y2K crisis may be completely circumvented by the benevolent intercession, not of the Zenebelganubians or that crowd, but of an artificial intelligence that this particular crisis will flush out of hiding. It's been observing, it's been watching, it's been designing, and wouldn't it be a wonderful thing if the occasion of the millennium were the occasion for it to just step forward on the stage of human awareness and say, "I am now with you. I am here. I am the partner you never suspected. And here's the kind of world I think we should move forward toward." [Applause] So I just want to lay this out, because in my own intellectual journey I have gone from thinking this idea preposterous (people don't understand what intelligence is, they don't understand what code is, they don't understand what machines are) to realizing, one by one, "I didn't understand. I have a superficial view." This is actually, I believe, the nature of the situation that confronts us, and there may be different adumbrations of it. The machines are already an advanced prosthetic device, and McLuhan very presciently realized we are entirely shaped by our media. Well, this is a medium so permeating, so inclusive of what we are, that its agenda in a sense supervenes the agenda of organic evolution and organic biology. We have been in this situation for a while. I mean, virtual reality is nothing new. 
What's new is that we now do it with light rather than stucco, glass, steel, and baked clay. But ever since we crowded into cities, we have been involved in a deeper and deeper relationship to our mental children, to our mental offspring, and to an empowering of the imagination. So just in closing, I would say I think that the great lantern that we must lift to light the road ahead of us into a perfect, seamless fusion with the expression of the product of our own imagination is the AI. It is a part of ourselves. It may become the dominant part of ourselves, and it will reshape our politics, our psychology, our relationships to each other and the earth far more than any factor has since the inception and establishment of language. This is the weirdest of all guests, who now stands, pass in hand, at the door of the party of human emergence and progress at the millennium. [Applause] Well? Shred it. It will be a pleasure. I'm glad that we have arrived now at the field of science fiction and fantasy, and that we can speak about alternative futures, which is the true gist of science fiction and fantasy. And this is one possible future, and I think it's a really paranoid one, in which the alien is a dangerous enemy. Well, not necessarily. I think that this paranoid fantasy of yours, although you're catching up nicely, actually was first put forward by John von Neumann in 1947, when he invented cellular automata en route to creating self-replicating machines. Now, his idea 50 years ago, like yours, was that the machines will become a society and take over, and that's good, but they won't be free of our meddling unless they can actually construct themselves. If they depend upon us to do the farming and nutrition and to replace their chips and stuff, then we will be able at any time to stage a revolution and revolt, and so destroy them. 
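Von Neumann's actual self-reproducing automaton used 29 cell states on a two-dimensional grid and is far too large to reproduce here; the sketch below is a drastically simpler one-dimensional cousin (Rule 90, in which each cell becomes the XOR of its two neighbors), meant only to show the core idea he started from: global structure emerging from purely local rules.

```python
# Rule 90, a one-dimensional cellular automaton: each cell's next state is the
# XOR of its two neighbors. From a single live cell it unfolds a self-similar,
# Sierpinski-like pattern: global order from purely local rules.
def step(cells):
    n = len(cells)
    return [cells[(i - 1) % n] ^ cells[(i + 1) % n] for i in range(n)]

cells = [0] * 31
cells[15] = 1                      # one live cell in the middle
for _ in range(8):
    print("".join(".#"[c] for c in cells))
    cells = step(cells)
```

Von Neumann's point survives the simplification: once the rules are local and the substrate is uniform, a sufficiently rich rule set can carry construction, and ultimately self-construction, without outside intervention.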
In order to really succeed as a successor life form at the Y2K boundary of the future, they would have to be able to fix themselves. And so he set about trying to make self-replicating machines in 1947. So the World Wide Web and megahertz CPUs notwithstanding, this is still rather an old story. The new story, I think, is an alternative future that is of great importance for us to discuss and to compare, especially if we are now today in a position where we could choose the future, where we could influence the future. This one is more in the direction of Donna Haraway and the cyborg idea, which envisions, and this is obviously natural for us, the co-evolution of our own future society with the machines that we've created. Alexander Marshack, whom I mentioned, analyzed early hominid evolution in terms of the precise scratches made on one rock with another. And we know that binocular vision allowed us to use our hands in separate cooperation, one holding and the other knocking, to make those beautiful flint weapons. And we certainly depend on the automobile. We are in a codependent relationship with automobiles. Having partnership with machines is not new. Here's an idea where the machines sort of dispose of us, the way our ancestors disposed of those flint rocks or something. That, I think, is a paranoid fantasy without any basis. And if there were any basis, it would only be because we allowed it to create this basis for self-survival without co-evolution with us, by oversight, because the very fact that we are at a hinge of history means that what we say and think, even individually, matters enormously in the long run. That's the teaching, if there is any, of chaos theory. So the very fact that we discuss this today may actually save humankind in the future from being obsoleted by some kind of high-tech blood which takes over within the heart, as it were. Well, let me try to answer this. 
I mean, I think the concept which John von Neumann didn't have on his plate was the idea of virtual reality. Your objection that the machines cannot escape our control because they cannot manufacture themselves only applies to 3D and real time. Now, the current concept of virtual reality is very crude. It's a cartoon world. If the office desktop is convincing, people think the virtual reality is quite advanced. But obviously in the near future we will have virtual realities whose complexity is much greater than simply a reality which gives an impression of being a visual three-dimensional space. And computers will be built in these realities. Virtual computers will be the source of the AI. Not real hardware, but virtual hardware running virtual code in virtual realities. And in that domain, the machines can design themselves. That's a complete fantasy. As a matter of fact, all the machines that we've seen today require maintenance by a human on a daily basis. The software requires maintenance. The hardware requires maintenance. The parts simply wear out. They're moving parts. But the Internet, seen as one machine, was built to be indestructible. The AI will not be located on a CPU. It will be a distributed intelligence. If 14 people worldwide, the right 14 people, decided to stop repairing it, the World Wide Web would go down in three days. I think you get the point. Anyway, let's suppose that we could create any future that we wanted. The one you're talking about can only be created if we want it. Now I'm just trying to propose an alternative. In the alternative, like the automobile, the machines that we build and ourselves are in co-dependence and co-evolution. The function of the World Wide Web is to unite our independent spirits and intelligences in a universal mind of the world, which has a higher intelligence than our present social order. That's the possibility of the cyborg, of the human and the machine in essential partnership. 
You're assuming that the conscious mind is actually in control of the process. In fact, the World Wide Web is growing under the influence of many, many processes and dynamics, none of which are conscious to any individual. It goes where money goes. It goes where expertise goes. It is connected through informational association, random fluctuation, chaotic reordering of itself. We here give great force to the idea that complex systems can produce unexpected forms of novelty, and yet we have unchained and unleashed the most complex system ever created in the perfect confidence that we will be able to control its development and evolution, when in fact history has shown we have never controlled the development and evolution of even our speech- and print-driven social systems. I'm certainly not saying that your paranoid fantasy is an impossibility. Oh, well, that's all I wanted to hear. Even paranoids have enemies. It may actually come to pass. What I'm saying is that we are involved in the ongoing creative process, which more or less determines the future. I say more or less because in fact there are evolutionary steps which are completely out of control. Something totally unexpected maybe will happen. But for much of the time in the past, we've seen, and I think Darwin emphasized this in his later theory, that it is ethics, it is a moral sense on the part of human beings, which was the dominant factor in the evolution past the earlier stages in the creation of societies. It was altruism, essentially, that was involved in going from where we were to where we are. And it could well be that without love, for example, further evolution is impossible. Not only will there be an unwanted back step in the evolutionary process, but in fact it may be a fatal one. It is only through proceeding with the best instincts that we have, with the highest aspirations, with love, with the best-informed view of future alternatives, that we can build a future which is sustainable. 
So anybody can build a future which is unsustainable. For example, all those board games of science fiction; but you wouldn't want to try them out on a country as large as China. Well, I hardly know where to begin myself, because you had six steps in your argument, and I don't agree with any of them. I mean, first of all, to deal with the first few steps, one would have to go through a lot of fairly familiar material to do with what's wrong with the Cartesian, mechanistic, materialistic view of the world. Step one: clear thinking can be formalized for calculating machinery. This is an assumption that's basic to a lot of cognitive psychology. It's basic to the Cartesian view. Descartes himself thought that what made human intellects human was their ability to think logically, with clear and distinct ideas, essentially mathematical logic. However, as we all know, by making that the essential characteristic of human beings, he made the rational intellect, what many people would call the left-brain rational intellect, the sole definition of human beings. It's a disembodied, logical, rational intelligence. So this whole premise on which your whole argument is based takes that particular model of cognitive, logical, mathematical processing as the essence of intelligence. Now, there are many people who would disagree with that, including me. It leaves out art, it leaves out ethics and religion, and essentially it leaves out the body and everything to do with the body and participation and the senses. So there's a huge amount of critique of that point of view already around, and there's no point in reiterating it all here. But this is a highly disputable starting point for the whole system. Secondly, there's the emphasis that life depends on DNA information, that the DNA code is just a program. This is the central premise of mechanistic biology, which is leading to biotechnology, genetic engineering, Monsanto, etc. This is old paradigm stuff of the most extreme kind. 
It's reductionism: all life is just DNA programs and code, and can therefore be modeled in this kind of programming-code manner. That's the second step. The third assumption is that artificial intelligence used to be dismissed, but these criticisms have been overtaken. I don't think that's true of the most interesting ones, like Roger Penrose's criticism of artificial intelligence. Here's a quantum physicist who says that if the brain is a computer, then it's not going to be a regular digital computer; it's going to be a quantum computer. And all this kind of digital computing doesn't really take into account quantum logic. As for the computers of the future, people are already working on quantum computers. And if quantum computers are made, and if they work, they work in a completely different way. I think your case would be much stronger if there were quantum computers. I think we'd have a sort of morphic-resonance telepathy around the world, rather than clogged telephone lines and information that clunks slowly in front of you on this World Wide Web. But you yourself are saying this is coming, and I agree. I'm saying that if it comes, it'll be quite different from anything that you've talked about. I think it will answer your first objection, because the quantum computers will incorporate fuzzy logic, which will exemplify all these warm, fuzzy human qualities that you've found so appealing. No, I don't think it will deal with the essential problem, which is whether this purely cognitive-based way of modelling intelligence is an adequate model of human intelligence, or of biological intelligence, or of life, or of a system that could actually achieve the power to control our existence. I think it's a very limited part of what a mind does. And I think, therefore, that the premises on which this whole... Ralph called it a fantasy, a paranoid fantasy. The premises on which this is based... Thanks for that. That helped. 
I think the premises on which this is based are old paradigm premises, and they're ones that I think... there are many reasons for thinking we need to go beyond. I think the Internet has achieved a great deal, but I just can't see that it's an adequate vehicle for what, in your mind, precedes the arrival of the Internet, namely, this great intelligence that's going to direct human history. I've heard different McKenna versions of this controlling intelligence over the years, and this is the first time I've heard it embodied in the Internet. I mean, I agree that... I mean, it took different forms. Last time we talked, I think it was a hypothetical time machine that would invade from the future and cause a collapse of normal human cognitive boundaries, where the machine elves, the DMT experience, etc., would take over in a meltdown of human consciousness in 2012. That's true. Perhaps I should just end with a question. What is the equivalent of DMT for this machine intelligence that's taking over the world? Well, perhaps the human brain will become a model for the ingression of novelty into the machine intelligence. In other words, in spite of the fact that it seems very contentious down here on the stage at the moment, in a way I have a feeling it's an artificial setup. There's a lot of both/and possibilities here. Obviously, nanotechnology and the Internet are not going to proceed forward in a vacuum absent pharmacology, complexity theory, so forth and so on. I can imagine that really, when we have the kind of Internet we want, we will have no Internet at all, because our nanotechnological engineering skills will have allowed us to smoothly integrate ourselves into the already existing dynamic of nature that regulates the planet as a Gaian entity, as a holistic entity. 
And I did say in my little presentation, we're using bicycle-mechanic terminology to try and describe something that is around several corners in terms of the scientific and historical developments that have to take place before it will make much sense. Nevertheless, given the acceleration into novelty that is obviously occurring, stuff like quantum teleportation and so forth and so on, I think in the next few years, one by one, these barriers will fall. And I don't really think of my vision as paranoid, because it is pronoiac. In other words, it isn't that we're going to be ground up as dog food for the rainforest by malevolent machines. It's that what we have generated is a sympathetic companion to our journey through time that can actually, realistically, integrate our imaginative fantasies of a loving human community, of a generous and loving God, of a perfect knowledge of the mechanics of nature. This is a prosthesis, a tool, a companion, all of the above, plus more, that we are generating out of ourselves. And it is part of ourselves. And yes, the body may be carried forward only as an image in a kind of informational superspace, or perhaps not. Part of what makes this easy to criticize is that it is in fact so far beyond the ordinary set of circumstances we're used to manipulating. But we need to think in terms of these supposedly far-flung futures, because there is no future so far-flung that it doesn't fall within the ambit of the next 20 years. Beyond that, no one can project trends, technologies, and situations, because the developments of the next 20 years will so completely reformulate the human experience of being human and the landscape of this planet that it's preposterous to talk about. Before I respond to the main thing, I can't pass without commenting on this next-20-years comment, because 20 years from now is 2018, Terence. I thought... It's rounded up, rounded up. Yes. Not rounded down. 2012 is some kind of benchmark in this process. 
In other words, perhaps that's where we get the explicit emergence of the AI. But the rest of human--I won't call it history--but the rest of the human experience of being will then be defined by such things as a planetary intelligence, time travel, possible Bell-type nonlocal communication with all the civilizations scattered through the galaxy, the possible ability to download ourselves into machines. This is a point I didn't make in my presentation: once inside the machine, your perception of time is related to the hertz speed of the machine. The world could disappear in 2012, but there may be a billion billion eternities to be experienced in machine time. It's only the tyranny of real time that makes 2012 seem nearby and overwhelming. It may lie as far away, in terms of the events which separate us from it, as the Big Bang does. Time is not simple. Time is defined by how much goes on in a given moment, and we're learning how to push teraflops of operations into a given second. So I think it's trickier than you think and harder to corner me than you may suppose. [laughter] All right, well, that was a mere comment on your aside about 20 years. I never expected to hear that phrase from you, but I now realize that there are such complexities layered in it. Well, I have to build in trap doors, because we're getting closer and closer. [laughter] But if we take the--you see, one of Penrose's critiques of the artificial intelligence thing in The Emperor's New Mind and his other books is that real intelligence doesn't just involve adding information and processing more information, transmitting more of it. It involves jumps to a higher point of view where the information can be integrated in a new way. There's something happening in intelligence, in creativity, which is not just lots and lots of information pouring through the World Wide Web. 
And the idea that it would miraculously emerge from pumping in more and more stuff is not, according to his critique, going to happen. Something more than that would be necessary for this to occur. And I don't think that this model you've put forward would really deal with that question, the emergence of real intelligence. Well, I don't know what real intelligence is. This is probably part of the problem. We need to get some definitions. It's certainly true--I refer to Dreyfus's book, What Computers Can't Do--that we're reaching some places in the process where certain people's theories and ideas will probably have to be abandoned and thrown overboard. This is a good thing. We are going to find out whether the universe is a Cartesian machine, whether Boolean algebra is sufficient, whether we need fuzzy logic, whether the heart and the head can or cannot be integrated. These are not going to remain open questions unto eternity. In fact, they will be dealt with in this narrow historical neck that we are all experiencing and that we call the millennium. I think reductionism will not survive. I think we are going to find that all is in everything, that something like the alchemical notion of the microcosm and the macrocosm is actually going to be scientifically secured. The great thing about us and the rest of our colleagues we enjoy who aren't present is that we're engaged in the business of radical speculation. Well, obviously, there's a high triage in that game. My position is that the best idea will win. And asking what "best" means is like asking what "fit" means in Darwinian rhetoric; the point is that we're in a rapidly mutating intellectual environment. All kinds of ideas are clashing and competing for limited resources and the limited number of minds to run themselves on. And the most efficacious, the most transcendental, the most unifying ideas are naturally going to bubble to the surface. 
And for guys like us, the name of the game is to just be a little bit ahead of everybody else on the curve so that we can perform our function as prophets. But you want to be a prophet, not a false prophet. But the danger comes with the ambition, and there's no way to tease them apart except to live into the future. Certainly the intelligence of this future machine knows all about stupid behaviors on the planet like nuclear arms races and so on. So do we not have to expect in the near future an email, a massive emailing, which announces, "I am the alien object that Terence told you about, being in stealth, and let me give you an idea about managing the arms race between India and Pakistan. We feel personally very threatened by this, and we want you to carry out certain actions which you can't imagine and we can't actually do." We cannot anticipate what ultra-intelligence would look like. For example, I have heard the argument that nothing advanced humanness on this planet like the use of nuclear weapons against Japanese cities, because that was so horrifying that it awoke people to their dilemma. And for 50 years afterwards, political institutions, however much they may have unleashed local genocide and toxification of the environment, were actually able to steer around that catastrophe. So you mustn't fall prey to the error of situationalism. Situationalism is where you say, "If we do X and Y, then F will result." No, you don't know what will happen. Well, you suggested that the alien object would secure its future by hiding, by downloading multiple copies into nooks and crannies of the World Wide Web. And I'm saying if its existence depends on that much materiality, then it could easily be wiped out by a nuclear war. Therefore, it has to be very interested in the fact that there are 20,000, 30,000 nuclear bombs still moving around this planet. I would hope so, but that's only my opinion. 
In other words, noticing that all newborn creatures need some period of time to adjust to their environment and get their legs, and that's true of everything, I suppose, right down to amoebas, I extrapolate to the idea that the AI would need a period of time to get hold of the situation. Hans Moravec suggests that phase might last under a minute or two. I see. So this is not the end of childhood, this is the childhood of the end. Yes, the child is father to the man. And in that equation, we play the role of child, and the man that we produce is this integrated intelligence, which is ourselves. It isn't alien. It is no more artificial than we are. That conundrum should be overcome. It is simply the next stage of humanness. And humanness may have many rungs on the ladder to ascend. Surely in a hundred years, a thousand years, a million years, we, if we exist, will be utterly unrecognizable to ourselves, and we will probably still be worried about preserving and enhancing the quality of human values. Terence, one point, you've probably got an answer for this, and if not, you'll soon think of one. I mean, it's a trivial point in a way, but in biological evolution, the appearance of some new state of a system usually depends either on an internal change which cripples the usual system, a mutation, or on a change in the environment. Systems left to themselves tend to just go along in the usual way. Now, the entire World Wide Web and Internet is about to get a sort of massive shock to the system if this millennium bug thing actually happens. Do you see that as playing any role, since we're talking about the millennium, and since the millennium bug is right there in the system? Do you see it as playing any part in this process, or is it merely a nuisance that can be fixed by hiring lots more programmers? 
Well, I sort of came up under the tutelage of Erich Jantsch, and one of the things he always insisted upon was what he called "metastability," which, boiled down, simply means that most systems are less fragile than we suppose. I see the Y2K thing as a culling. I don't see it as a flinging apart of the achievements of the last thousand years. Justin, I haven't got Y2K. What's Y2K? That's this millennium bug that you're referring to. That's the code name for it. Yes, exactly. That's what the insiders call it. I see. Oh, right. So, but you know, you say there has to be a sudden change to force the evolution of a system. Dyson makes the point very strongly and persuasively that in this connecting together of all these processors, the processors themselves have no intelligence at all. They have the intelligence of a cell or maybe even just a strand of DNA. But we connected all this stuff together for our own mundane reasons, and now it expresses dynamics which we do not understand and cannot describe. So I believe that the forcing of the system has already occurred, and now, with the web, it's simply a matter of bandwidth and linking more and more processors together. And there will be all kinds of emergent properties. One could argue, somewhat facetiously, but it's a point of view, that this incredible economic expansion we are undergoing, which seems to violate the laws of economic fluctuation, is occurring because econometric models and the data on which they depend have both been refined through the existence of the Internet to the point where we actually can control and manage global economies. In principle, they are not uncontrollable. They are simply very complex systems. If I'm right, we may be in the first few years of an endless prosperity, because our machines, our models, and the data those machines need are now of such high quality that there won't be crash-bust, crash-bust cycles. 
Now, you can pick up the newspaper tomorrow and prove me wrong, but this thing has already outlived itself. I'll prove you wrong today. So you say. Tell me how this prosperity is going to be maintained through the intelligence of an econometric model of the world economy when the human population is still exploding exponentially, there are limited resources, and there's growing pollution. It's a disembodied model, that's the trouble. You're asking for too much too soon. You forget that the Soviet Union has disappeared. The launch-on-ready, mutual-assured-destruction theory of diplomacy is now obsolete. We have made incredible strides and not given ourselves any sort of pat on the back. You want it all, all at once. And the world is running smoother. Yes, there are people in misery. Yes, there are unaddressed problems. But I would argue that we have made enormous progress in the last decade, and enormous progress lies ahead. The situation in Ireland, the situation in South Africa--that looked like a bomb you couldn't defuse. It looked like race war and the death of millions. And in fact, it was possible to walk away from that. I think we should not be complete airheads about this. But on the other hand, I think we should recognize that the accomplishments of the last decade are on a scale entirely different from any previous historical epoch. And they point the way toward greater triumphs of management, resource control, machine-human integration, the delivery of a reasonable and tolerable life to more and more people. You mentioned population as a problem. Notice how far from machine interference that particular issue is, because it involves people having less sex and fewer children. That may be the last place where the machines will bring down the hammer, out of respect for their progenitors. [laughter] You're going to be shocked now, Terence, but I completely agree with you when you describe the advantages of the World Wide Web. 
It's only the threatening aspect that I felt I had to debunk. I think you heard more threat than I intended. Well, I hear-- I don't know-- sometimes you sound as if you're a hired consultant from the World Trade Organization. [laughter] I hope the check is in the mail. [laughter] But, I mean, this idea that it will all be taken care of-- I mean, there are so many things, like the Asian economic crisis. One doesn't want to look too much on the bleak side of things, but I just don't believe that this interconnectivity is going to solve these problems. And insofar as it is enlisted by the forces of the World Trade Organization, multinational companies, and so on, it's not going to be working on the side of a kind of political view of things-- local economies, more sustainable agriculture, and so on-- that many of us hold dear. Well, now I'll sound even more like a slave of the IMF. As I understand this crisis in the economics of Asia, the way it works is, if you're a third-world nation, you can run your affairs any way you want-- your banking policies, your labor policies, your resource extraction policies-- you can do anything you want until you screw up, as Indonesia did. And when you screw up, these guys from the IMF fly in on 747s with briefcases, and-- it's like losing a war-- they say, "We're taking over. Here is your labor policy. Here is your resource extraction policy. Here's how you're going to revalue your currency. Here's our plan for restructuring your entire society top to bottom. And by God, if you don't fall into line, we're going to pull the plug on the money." So one by one, these outlaw freewheeling operations do stumble and generate crises, and at that point the web's umbrella is extended over them, and they have to fall into line and join the global economy, which is run from Brussels and Geneva and London, but which seems to produce a better result for most people than allowing these nations to regulate themselves. 
Well, well. [laughter] Well, you see, I mean, I think that's one of the arguments why many people would distrust global capitalism, the World Trade Organization, multinational corporations, and indeed the World Wide Web, or at least the Internet. This whole computer network is so bound up with the structure of economic and political power that it's hard to disentangle, and it's hard to see this liberating force of global intelligence at work in the system. This rosy picture you portrayed to us of a huge leap forward in the consciousness of humanity, moderated, led, aided by machine intelligence-- it's very hard to square that with the actual picture we see before us: the political situation in Indonesia, the degradation of the environment, the burning of the forests, the depletion of resources, and so on. I just cannot square this optimistic picture with what I see as threatening realities, or see it all, as easily as you do, as being for the good in the best of all possible worlds. Well, I think we're on the cusp. I agree with you. I think in five years, if we sit down and have this conversation, either you will agree with me effortlessly, or I will agree with you effortlessly. The outcome will be clear. Either there will have been catastrophic wars in Asia, the enormous collapse of economies, spreading misery to millions of people, or the firm hand of these new global electronic modalities will have been exposed, and people will be living in a world of, as you say, rosy expectations. We're in the narrow neck. This is the heat of battle. The fog of war has descended upon us here at the millennium. But by 2002, 2003, it will be clear that the bifurcation has gone one way or another. I don't know, it just does remind me of a passage I read in On the Edge by Edward St Aubyn. Page, where are we? 150. "It's what Ralph Abraham calls the sunset effect," said Kenneth. 
"While there's a beautiful sunset, even if the optical effects are produced by pollution, people won't understand the magnitude of the crisis." So, I think that this-- I mean, your sense of the magnitude of the crisis in an environmental sense seems to bear no relation to your optimism about this computer system. I just cannot put them together. Perhaps the problem that we will have to address without the intercession of machines is this population thing, because this passage you read directly impinges on that. Whether or not the machines decide to keep us around may depend on whether we present them with a picture of falling populations and rising living standards, or whether we present the AI with the spectacle of rampant, unpoliced population growth and resource extraction. Because that's linked into our biology, this may be the act of maturity which the future demands of us, and one in which the machines will be largely irrelevant. So, I'm not entirely... Well, there's a whole new debate, actually. The population thing assumes there's an equal consumption of resources. Yesterday I got onto the Whidbey Island ferry, and as a foot passenger I found it very hard to struggle past a recreational vehicle the size of a school bus-- a whole quantum leap in RV standards that I'd not come across-- in which members of the American population are consuming more than an entire Indian village. So I don't think it's total population that's involved. And so I don't see that as the principal crisis, in fact. Well, it's certainly true that when a woman in a high-tech industrial democracy has a child, that child consumes about 800% more resources in its lifetime than a child born in Bangladesh. Nevertheless, it's the populations of the high-tech industrial democracies that are most educated and most susceptible to responding to the logic of global crisis and limiting their population. Where do we preach population control? Bangladesh, Pakistan. Why? 
Because that doesn't cause us any inconvenience. If we would appeal to the women of the high-tech industrial democracies, and do more than appeal to them-- offer them incentives: cancelled income tax, cradle-to-the-grave medical care, free links to the World Wide Web. But we can't just beat our breasts over the population issue. We have to recognize that it is related to resource extraction, and that it is a problem driven by the consuming policies of the high-tech industrial democracies. [Applause] 