Path: igor.rutgers.edu!planchet.rutgers.edu!nanotech
From: robj@netcom.com (Rob Jellinghaus)
Newsgroups: sci.nanotech
Subject: Re: Argument against self-reproducing nanites, 5
Message-ID:
Date: 11 May 94 02:12:18 GMT
Sender: nanotech@planchet.rutgers.edu
Organization: Netcom Online Communications Services (408-241-9760 login: guest)
Lines: 66
Approved: nanotech@aramis.rutgers.edu

JeffreyJ8 writes a fairly good summary of a not uncommon view on the difficulty and danger of producing "self-replicating nanites". The entire discussion was somewhat hampered, however, by the lack of any definition of exactly what "self-replicating nanites" are. It seems that "SRNs" are nanotechnologically-engineered devices on approximately the scale of a bacterium, that are intrinsically capable of self-reproduction in an arbitrary environment. JeffreyJ8 enumerates many computational and physical reasons why such devices would be hard to build given medium-term estimates of our design capability, and hints at some hazards of having them around. All well and good. Only a couple of observations:

In article jeffreyj8@aol.com (JeffreyJ8) writes:
>I think that real, practical, achievable nanites will be quite prosaic in
>nature. These nanites will be composed of durable materials which provide a
>rigid structure (which would be much easier to engineer than flexible
>materials

Why would SRNs necessarily need a flexible structure? They could, in principle, construct rigid shells within which they could do work, thus obtaining protection from the environment without undue flexibility.

>They
>will be mass produced by dedicated assembler devices, and will be chemically
>powered by external sources.

In order for mass production at the atomic scale to be practical, some degree of self-assembly and self-reproduction will be required. "Mass production" on the nanoscale in essence _implies_ some form of replicating nanite, albeit, as you say, one which can only replicate in a precisely defined environment (probably containing complex prebuilt structural components and external control signals).

>I think there are many other reasons for avoiding the production of
>self-reproducing nanites which have little or nothing to do with the technical
>difficulty of the task. There are sociological, political, and economic
>considerations, as well.

Unfortunately, I think many of these other reasons also apply to the production of _any_ form of molecular self-assembly technology. You have made the case against autonomous, self-replicating nanites. But there will very likely be autonomous (non-replicating) nanites and replicating (non-autonomous) nanites, and I think many of the considerations you allude to also argue against these nanites as well....

On a slightly longer scale, I don't think nanites are the problem any more. I strongly doubt that humans will be able to impose permanent, ubiquitous restrictions on nanotechnological development. Given that as a premise, I foresee a virtually indefinite rise in available computing power, with a concomitant ability to design and/or evolve myriads of autonomous devices. Those trends, extrapolated by a century or two, lead directly to new forms of life, evolving on their own beyond our ability to comprehend them.

In other words, your arguments against SRNs are a weak form of an argument against the occurrence of the Singularity. While you may well be correct in believing that early nanotech will not involve creating SRNs, I think your arguments weaken significantly in the long term.
--
Rob Jellinghaus   robj@netcom.com   uunet!netcom!robj

Path: igor.rutgers.edu!planchet.rutgers.edu!nanotech
From: news@zurich.ibm.com
Newsgroups: sci.nanotech
Subject: Re: Argument against self-reproducing nanites, 5
Message-ID:
Date: 16 May 94 01:38:55 GMT
Sender: nanotech@planchet.rutgers.edu
Organization: IBM Zurich Research Laboratory
Lines: 14
Approved: nanotech@aramis.rutgers.edu

> argument against the occurrence of the Singularity. While you may

Forgive a stupid question: I have not seen the answer in the FAQ (maybe I didn't look hard enough?!). What is this Singularity? Is it a science-fiction thing invented by Vernor Vinge? Could someone please define this concept?

Thanks
Morten Holm Pedersen
IBM Zurich Research Laboratory

[I believe that Vinge's Singularity is what was being referred to. It is not part of the technical domain of nanotechnology, although nanotechnology may well be a part of the progress toward a Singularity. --JoSH]

Path: igor.rutgers.edu!planchet.rutgers.edu!nanotech
From: eder@hsvaic.hv.boeing.com (Dani Eder)
Newsgroups: sci.nanotech
Subject: Re: The Singularity
Message-ID:
Date: 23 May 94 03:51:48 GMT
Sender: nanotech@planchet.rutgers.edu
Organization: Boeing AI Center, Huntsville, AL
Lines: 136
Approved: nanotech@aramis.rutgers.edu

> What is this Singularity? Is it a science-fiction thing
>invented by Vernor Vinge?

The Singularity

Human history has been characterized by an accelerating rate of technological progress. It is caused by a positive feedback loop. A new technology, such as agriculture, allows an increase in population. A larger population has more brains at work, so the next technology is developed or discovered more quickly. In more recent times, larger numbers of people are liberated from peasant-level agriculture into professions that entail more education. So not only are there more brains to think, but those brains have more knowledge to work with, and more time to spend on coming up with new ideas. We are still in the transition from mostly peasant-level agriculture (most of the world's population is in undeveloped countries), but the fraction of the world considered 'developed' is constantly expanding. So we expect the rate of technological progress to continue to accelerate because there are more and more scientists and engineers at work.

Assume that there are fundamental limits to how far technology can progress. These limits are set by physical constants such as the speed of light and Planck's constant. Then we would expect that the rate of progress in technology will slow down as these limits are approached. From this we can deduce that there will be some time (probably in the future) at which technological progress will be at its most rapid. This is a singular event in the sense that it happens once in human history, hence the name 'Singularity'. This is my definition of the concept.

Vernor Vinge, in his series of stories 'The Peace War' and 'Marooned in Real Time', had a different definition. He implicitly assumed that there was no limit to how far technology could progress, or that the limit was very, very high. The pace of progress became very rapid, and then at some point mankind simply disappeared in some mysterious way. It is implied that they ascended to the next level of existence or something. From the point of view of the 20th century, mankind had become incomprehensibly different. So that time horizon when we can no longer say anything useful about the future is Vinge's Singularity.
One would expect that his version of the Singularity would recede in time as time goes by, i.e. the horizon moves with us.

When will the Singularity Occur?

The short answer is that the near edge of the Singularity is due about the year 2035 AD. Several lines of reasoning point to this date. One is simple projection from human population trends. Human population over the past 10,000 years has been following a hyperbolic growth trend. Since about 1600 AD the trend has been very steadily accelerating, with the asymptote located in the year 2035 AD. Now, either the human population really will become infinite at that time (more about that later), or a trend that has persisted over all of human history will be broken. Either way it is a pretty special time.

If population growth slows down and the population levels off, then we would expect the rate of progress to level off, then slow down as we approach physical limits built into the universe. There's just one problem with this naive expectation - it's the thing you are probably staring at right now - the computer. Computers aren't terribly smart right now, but that's because the human brain has about a million times the raw power of today's computers. Here's how you can figure the problem: 10^11 neurons with 10^3 synapses each, with a peak firing rate of 10^3 Hz, makes for a raw bit rate of 10^17 bits/sec. A 66 MHz processor chip with 64 bit architecture has a raw bit rate of 4.2x10^9. You can buy about 100 complete PC's for the cost of one engineer or scientist, so about 4x10^11 bits/sec, or about a factor of a million less than a human brain.

Since computer capacity doubles every two years or so, we expect that in about 40 years, the computers will be as powerful as human brains. And two years after that, they will be twice as powerful, etc. And computer production is not limited by the rate of human reproduction. So the total amount of brain-power available, counting humans plus computers, takes a rapid jump upward in 40 years or so. 40 years from now is 2035 AD.
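A quick check of the arithmetic above, as a sketch in Python (the code is an editorial illustration, not part of the original post; the figures and the two-year doubling time are the ones Eder quotes):

import math

# Eder's figures, as quoted above.
brain_rate = 1e11 * 1e3 * 1e3       # neurons x synapses x peak Hz = 1e17 bits/sec
chip_rate  = 66e6 * 64              # 66 MHz, 64-bit chip: ~4.2e9 bits/sec
budget     = 100 * chip_rate        # ~100 PCs per engineer: ~4.2e11 bits/sec

shortfall = brain_rate / budget     # ~2.4e5 -- "about a factor of a million"
doublings = math.log(shortfall, 2)  # doublings needed to close the gap
years     = 2 * doublings           # at one doubling every two years
print("shortfall %.1e, years to parity %.0f" % (shortfall, years))

The computed shortfall is nearer a quarter million than a million, which puts parity at roughly 36 years rather than 40; either way the crossover lands in the 2030s.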
Can the Singularity be avoided?

There are a couple of ways the Singularity might be avoided. One is if there is a hard limit to computer power that is well below the human-equivalent level. Well below means like a factor of 1000 below. If, for example, computer power were limited to only a factor of 100 short of human capacity, then you could cram 100 CPU chips in a box and get the power you wanted. And you would then concentrate on automating the chip production process to get the cost down. Current photolithography techniques seem to be good for a factor of 50 improvement over today's chips (maybe a real expert can correct this figure for me if I am off). So it seems that we need at least one major process change before the Singularity, and maybe that process change doesn't exist.

Another way to possibly avoid the Singularity is by humans messing themselves up sufficiently. The argument goes that the work involved in killing people is roughly constant over time, but the energy and wealth available to each person goes up over time. So it becomes easier over time for small numbers of people to kill ever larger numbers of people. Then, given a small but finite rate of loonies bent on mass murder, you eventually kill off large numbers of people and set things back. The usual technologies pointed to are nuclear weapons and engineered plagues. One can describe scenarios like the hobbyist mad scientist of the future extracting Uranium from sea-water (where it is present in a few parts per billion), and then separating the U-235 with a home mass-spectrometer, and building a bomb with his desktop milling machine. It all is designed on his 'SuperCAD version 9.0' design software.

Some Other Interesting Thresholds

Human life expectancies have been increasing at about 0.1 years per calendar year. If the rate of progress in medical areas increases by a factor of 10, then life expectancy will be increasing as fast as you are aging. This means your projected lifespan suddenly jumps from being in the mid to upper 80 year range to a much larger number. From my point of view as a 36 year old, biotechnology is making gratifyingly rapid progress even today, and I hope that this will feed jumps in life expectancy in the future.

Whether the size of a factory or a Drexler-style assembler, the complexity of a self-replicating machine is probably about constant. At some point we will have tools capable of modeling and designing such machines, and shortly thereafter building them. A finite investment in building the first such machine will yield an exponentially expanding output. This has radical consequences for wealth levels, etc. Even nearly self-replicating machines (say 99% capable) will have dramatic economic effects.

Dani Eder
--
Dani Eder/Rt 1 Box 188-2/Athens AL 35611/(205)232-7467

Path: igor.rutgers.edu!planchet.rutgers.edu!nanotech
From: leech@cs.unc.edu (Jon Leech)
Newsgroups: sci.nanotech
Subject: Singularity
Message-ID:
Date: 23 May 94 18:52:07 GMT
Sender: nanotech@planchet.rutgers.edu
Lines: 72
Approved: nanotech@aramis.rutgers.edu

Dani Eder writes:
>When will the Singularity Occur?
>
>The short answer is that the near edge of the Singularity is due about
>the year 2035 AD. Several lines of reasoning point to this date. One
>is simple projection from human population trends. Human population
>over the past 10,000 years has been following a hyperbolic growth trend.
>Since about 1600 AD the trend has been very steadily accelerating with
>the asymptote located in the year 2035 AD. Now, either the human
>population really will become infinite at that time (more about that

I don't understand what this means. The trend can accelerate all it wants without the population becoming infinite. Maybe you are using infinite to mean some very large number? And with roughly 2 generations between now and 2035, it's hard to see how the population could increase much more than 16 times or so even if every woman on the planet decided to have a large family.

>Can the Singularity be avoided?
>
>There are a couple of ways the Singularity might be avoided. One
>is if there is a hard limit to computer power that is well below the
>human-equivalent level. Well below means like a factor of 1000
>below. If, for example, computer power were limited to only a
>factor of 100 short of human capacity, then you could cram 100 CPU
>chips in a box and get the power you wanted.

There seems to be an assumption here that "computer power" is some linear function of how many switching elements are available, so by plugging in 100 CPUs, we have 100 times the C.P. This overlooks problems of hardware (wiring costs), software (generating truly scalable algorithms that fit these architectures) and the problem domain - the size of a problem imposes a limit on how much parallelism can be used. So far, this has worked to the advantage of the dreadfully slow, not very parallel computers we've built. The processors get faster and there are more of them, so use a larger matrix in the LINPACK benchmark, and so on. I imagine we might find more and more important problems that are too small to scale with the increasingly complex machines they'll be running on.

>Some Other Interesting Thresholds
>
>Human life expectancies have been increasing at about 0.1 years
>per calendar year. If the rate of progress in medical areas increases
>by a factor of 10, then life expectancy will be increasing as fast
>as you are aging.

I don't know much about this area, but I thought that *maximum* human lifespan was not increasing very rapidly, and more and more people are dying due to, essentially, wearing out - so if you look at the mortality rate curve over time, it's forming a sharper and sharper peak down at the relatively inflexible limit. Maybe biotechnology can help with bodies wearing out, but it seems like a different problem from fixing externally caused problems like disease.

>Whether the size of a factory or a Drexler-style assembler, the complexity
>of a self-replicating machine is probably about constant. At some point
>we will have tools capable of modeling and designing such machines, and
>shortly thereafter building them. A finite investment in building the
>first such machine will yield an exponentially expanding output.

One thing I wonder about these schemes is how they cope with changing the use of the surplus fraction they devote to building "wealth" (as opposed to more factories). Suppose Apple had built a S.R. factory in 1984 that had busily replicated itself and was now cranking out Apple IIs for 5 cents each. How many people would want one? How about an IC fab line suitable for building 2102 (1K bit) memory chips (note that the cost of new fab facilities to work in denser technologies is climbing fast, perhaps faster than the complexity of the chips they produce).

Jon
__@/

Path: igor.rutgers.edu!planchet.rutgers.edu!nanotech
From: brauchfu@fnugget.intel.com (Brian D. Rauchfuss)
Newsgroups: sci.nanotech
Subject: Re: The Singularity
Message-ID:
Date: 25 May 94 03:59:03 GMT
Sender: nanotech@planchet.rutgers.edu
Organization: INTEL.FOLSOM
Lines: 27
Approved: nanotech@aramis.rutgers.edu

In article eder@hsvaic.hv.boeing.com (Dani Eder) writes:
>...
>is simple projection from human population trends. Human population
>over the past 10,000 years has been following a hyperbolic growth trend.
>Since about 1600 AD the trend has been very steadily accelerating with
>the asymptote located in the year 2035 AD. Now, either the human
>population really will become infinite at that time (more about that
>later), or a trend that has persisted over all of human history will
>be broken. Either way it is a pretty special time.

This does not jibe with my information, which is that human population increases 2-3% per year, an exponential, not hyperbolic growth pattern. Hyperbolics are created by 1/x as x->0; what are humans losing which would cause them to have more babies?

This is a more important point than it might appear; there has been a general realization that higher populations in the future will tend to create more problems with resource scarcity and crowding than are solved by the greater brainpower. So the question is, will we be able to solve problems with greater technology faster than they are created by an exponential population growth? Can even nanotechnology keep up with such problems?
(The answer, btw, is "no". I do not know of any solution to population growth but to limit it to linear or less, and nanotechnology will exacerbate such problems by increasing lifetimes)

BDR

Path: igor.rutgers.edu!planchet.rutgers.edu!nanotech
From: jsn@cegt201.bradley.edu (John Novak)
Newsgroups: sci.nanotech
Subject: Re: Singularity
Message-ID:
Date: 25 May 94 04:00:15 GMT
Sender: nanotech@planchet.rutgers.edu
Organization: Bradley University
Lines: 89
Approved: nanotech@aramis.rutgers.edu

In leech@cs.unc.edu (Jon Leech) writes:
>>The short answer is that the near edge of the Singularity is due about
>>the year 2035 AD. Several lines of reasoning point to this date. One
>>is simple projection from human population trends. Human population
>>over the past 10,000 years has been following a hyperbolic growth trend.
>>Since about 1600 AD the trend has been very steadily accelerating with
>>the asymptote located in the year 2035 AD. Now, either the human
>>population really will become infinite at that time (more about that
> I don't understand what this means. The trend can accelerate all it
>wants without the population becoming infinite. Maybe you are using infinite
>to mean some very large number?

Well, this is an artifact of using such a simple and idealized mathematical model as 'a hyperbolic growth trend' on such a non-ideal and real world problem like population prediction. If we assume for a moment that population growth not only presently fits, but is in fact properly modeled by a hyperbolic function (along the entire time axis, yet...) with an asymptote at 2035 AD, then, by definition, our population is as close to infinity as you care to name at 2035 minus a little bit. _At_ 2035 AD, it's rather undefined. And depending on your model, at 2035 plus a little bit, it could be several things.

Personally, I think there are a number of problems with such a blanket assumption, even laying aside the fact that I haven't checked a reference to see for myself that the population curve even resembles a hyperbolic function. It could turn out to be something like an offset hyperbolic tangent function (i.e., an S-curve type structure). The largest problem is that our data are, by their very nature, approximations, and somewhat discretized in nature along the time axis. We _don't_ know when every child is born and every child dies. Another problem is that we're discretized _again_ in the population axis -- we don't have half-people. And we're completely ignoring the possibility of effects on the population which have been negligible and trivial until now, like resource space and social attempts to attain zero population growth.

> And with roughly 2 generations between now and 2035, it's hard to see
>how the population could increase much more than 16 times or so even if
>every woman on the planet decided to have a large family.

Another very valid argument.

>>Can the Singularity be avoided?

(Hell. Assuming for the moment that it's coming, do we _want_ it avoided?)

>>Some Other Interesting Thresholds
>>Human life expectancies have been increasing at about 0.1 years
>>per calendar year. If the rate of progress in medical areas increases
>>by a factor of 10, then life expectancy will be increasing as fast
>>as you are aging.
> I don't know much about this area, but I thought that *maximum* human
>lifespan was not increasing very rapidly, and more and more people are dying
>due to, essentially, wearing out - so if you look at the mortality rate
>curve over time, it's forming a sharper and sharper peak down at the
>relatively inflexible limit. Maybe biotechnology can help with bodies
>wearing out, but it seems like a different problem from fixing externally
>caused problems like disease.

Aside-- this is an incredibly vague statement. Increasing .1 y/y for how long?? Peg my lifespan at 80 years, right now. 800 y * .1 y/y = 80 y. Surely lifespans were longer than 0 years in 1200 AD... :-)

Second aside, on the subject of Other Thresholds-- I seem to remember reading (here, possibly) that according to projections, information storage density reaches 1 bit per atom in 2032 AD. Does this ring a bell, or is my poor brain fabricating numbers, again?

--
John S. Novak, III
jsn@cegt201.bradley.edu
jsn@camelot.bradley.edu

Path: igor.rutgers.edu!planchet.rutgers.edu!nanotech
From: jarice@delphi.com
Newsgroups: sci.nanotech
Subject: Re: The Singularity
Message-ID:
Date: 26 May 94 18:04:35 GMT
Sender: nanotech@planchet.rutgers.edu
Organization: Delphi (info@delphi.com email, 800-695-4005 voice)
Lines: 31
Approved: nanotech@aramis.rutgers.edu

Dani Eder writes:
>human brain has about a million times the raw power of today's computers.
>Here's how you can figure the problem: 10^11 neurons with 10^3 synapses
>each with a peak firing rate of 10^3 Hz makes for a raw bit rate of
>10^17 bits/sec. A 66 MHz processor chip with 64 bit architecture has
>a raw bit rate of 4.2x10^9. You can buy about 100 complete PC's for
>the cost of one engineer or scientist, so about 4x10^11 bits/sec, or
>about a factor of a million less than a human brain.
>
>Since computer capacity doubles every two years or so, we expect that
>in about 40 years, the computers will be as powerful as human brains.

BYTE magazine dated March 1994, pg. 32 has a short article about the new Intel Ni1000 neural network chip. "According to Mark Holler, director of Intel's neural-network group, the Ni1000 chip performs TEN BILLION operations per second and is capable of recognizing 40,000 patterns per second...."

How does ops/sec compare to your rough calculation of 10^17 bits per second? If we take this as current (1994) state of the art, and double it every 18 months instead of 2 years (which more accurately reflects the most recent computing power trend), when do we achieve human parity? I suspect that it is far closer than 40 years.

Moravec in his book, "Mind Children", estimated that 5 teraops/sec was required. Based on 100 Ni1000 chips producing 1 teraop, it appears that a parallel computer with 500 Ni1000 chips would equal the human brain's power. Even if you double that power to allow for software inefficiencies, we're now up to 1,000 Ni1000 chips. At a cost of, say, $500/chip, we have a chip cost of $500,000. Tripling this to pay for power supplies, optical back-planes, etc, and we're up to $1.5 mil for a human equivalent supercomputer that could be built today. Still think it'll take 40 years?

Path: igor.rutgers.edu!planchet.rutgers.edu!nanotech
From: news@zurich.ibm.com
Newsgroups: sci.nanotech
Subject: Re: The Singularity
Message-ID:
Date: 26 May 94 18:07:18 GMT
Sender: nanotech@planchet.rutgers.edu
Organization: IBM Zurich Research Laboratory
Lines: 16
Approved: nanotech@aramis.rutgers.edu

Dani Eder (eder@hsvaic.hv.boeing.com) wrote:
>
> What is this Singularity? ...
> Human history has been characterized by an accelerating rate of
> technological progress. It is caused by a positive feedback loop.
[etc]

Thanks for your careful explanation. I agree that the next century will be interesting for mankind. I will retire in 2035, then we can discuss your projections...

Regards
Morten Holm Pedersen
IBM Zurich Research Laboratory

Path: igor.rutgers.edu!planchet.rutgers.edu!nanotech
From: ttf@dsg59.nad.ford.com (Tihamer Toth-Fejel)
Newsgroups: sci.nanotech
Subject: The Singularity (and population)
Message-ID:
Date: 26 May 94 18:09:12 GMT
Sender: nanotech@planchet.rutgers.edu
Lines: 90
Approved: nanotech@aramis.rutgers.edu

>eder@hsvaic.hv.boeing.com (Dani Eder) writes:
>Human population over the past 10,000 years has been
>following a hyperbolic growth trend.

BDR writes: Human population increases 2-3% per year, an exponential, not hyperbolic growth pattern. Hyperbolics are created by 1/x as x->0; what are humans losing which would cause them to have more babies?

Very good point! Singularity proponents keep missing this important distinction. There is no asymptote for exponentials. Caveat: exponential growth *does* eventually hit unbendable physical limits, so the Singularity, in one form or another, *is* coming.

There has been a general realization that higher populations in the future will tend to create more problems with resource scarcity and crowding than are solved by the greater brainpower.

That "general realization" has been wrong since Malthus first proposed it. We know that it is wrong -- otherwise we would all be starving right now. The Club of Rome and Meadows et al. gave Malthus more validity because they used a computer (GIGO), with that "general realization" as an erroneous assumption. Again, by their predictions, we should all be starving right now. Their model also assumed that the environment of the biosphere is limited to the lower edges of Earth's atmosphere. Wrong! The noosphere (Teilhard De Chardin's term for our sphere of cognitive influence) now extends at least to geosynchronous orbit (I don't know the probability of readers getting this message via a comsat). And the Pioneer and Voyager probes are out of our solar system, so the limits to growth have not been hit yet.

Why do you believe in limits to growth when all the predictions made by Malthus and Club of Rome have been incorrect? Correct me if I'm wrong, but because they are based on things we know. The problem is that they ignore what we don't know, or don't quite know how to get to (e.g. nanotech and space settlement). And the history of the human race has been a continuous process of expanding what we know. Even the limits we do find (lightspeed, Gödel's theorem) are not very limiting. I challenge you to name a single resource that is limited, given nanotechnology.

As far as crowding is concerned, why are people all over the world leaving farmlands to cram into cities? Have you figured out how small an area the entire population of Earth could fit into at suburban densities? Do the numbers yourself -- and the rest of the world would be *empty*. Yeah, I consider 90 billion a bit much, but that is because I like wilderness, and we're still stuck on this single planet. But nanotechnology will take care of that.
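Doing those numbers, as the post invites (a Python sketch added for illustration; the quarter-acre lot and 2.5-person household are the suburban-density assumptions Eder uses elsewhere in this thread, and the population is the circa-1990 figure of 5.3 billion):

ACRE_M2 = 4046.86                              # square meters per acre
people = 5.3e9                                 # world population, ca. 1990
households = people / 2.5                      # 2.5 people per household
area_km2 = households * 0.25 * ACRE_M2 / 1e6   # quarter-acre lots
land_km2 = 149e6                               # Earth's land area, roughly
print("%.1e km^2, or %.1f%% of all land" % (area_km2, 100 * area_km2 / land_km2))

That comes to about 2.1 million square kilometers, around 1.4% of the land surface; the rest would indeed be empty.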
Will we be able to solve problems with greater technology faster than they are created by an exponential population growth?

It has so far. I recently read in the newspaper that this year's Unicef report says that there are fewer children starving now than 20 years ago (when Limits to Growth first appeared). This makes sense because the doubling time for technology is shorter than it is for humans. I'm betting that it will be true even if you can xerox yourself (look out! here come billions and billions of Keith Hensons!).

Can even nanotechnology keep up with such problems? (The answer, btw, is "no". I do not know of any solution to population growth but to limit it to linear or less).

Let me question your assumptions - you seem to persist in seeing population growth as a "problem". I see the problem as ignorance and injustice (wrt people starving today). But if nanotech increases the comfortable carrying capacity of this planet to 90 billion (which it can, if Eric is anywhere close in his predictions), what is *inherently* wrong with that? Considering Asimov's whimsical prediction of a wall of human flesh expanding at lightspeed, what is *inherently* wrong with that? If you substitute mind for flesh, his attempt at parody is actually a pretty good approximation of how we will get to the Omega Point (when the entire universe becomes conscious). As an aside, how do you think uploading will affect population growth?

Nanotechnology will exacerbate such problems by increasing lifetimes.

Increasing lifetimes will negligibly affect population, unless women decide to reverse menopause using nanotech. Number of children per couple is a much more important factor for population growth.

*****************************************************************
ttf@dsg59.nad.ford.com (Tihamer Toth-Fejel)
Office: 313 594-2165  845-7918, 3646 (Secretary)
Fax: 313 594-7837  Home: 313 662-4741
Concept 2010 Design Studio, Ford, Dearborn, Michigan.

Path: igor.rutgers.edu!planchet.rutgers.edu!nanotech
From: eder@hsvaic.hv.boeing.com (Dani Eder)
Newsgroups: sci.nanotech
Subject: Re: Singularity
Message-ID:
Date: 26 May 94 18:11:11 GMT
Sender: nanotech@planchet.rutgers.edu
Organization: Boeing AI Center, Huntsville, AL
Lines: 126
Approved: nanotech@aramis.rutgers.edu

leech@cs.unc.edu (Jon Leech) writes:
>Dani Eder writes:
>>When will the Singularity Occur?
>>
>>Human population
>>over the past 10,000 years has been following a hyperbolic growth trend.
>
> I don't understand what this means. The trend can accelerate all it
>wants without the population becoming infinite. Maybe you are using infinite
>to mean some very large number?
>

What a hyperbolic trend means is this: if you plot 1/population versus year you get a straight line. If you take population figures from 1600 AD to today and plot their inverses, it is very nearly a straight line which crosses zero at about 2035 AD. If 1/population is zero, then the population is infinite. Going back all the way to 8000 BC, the trend is similar, but has a few changes in slope and bumps due to various causes.

Now, I agree with you that human population can't become infinite by any means, and can't grow more than a few times its present value by normal biological means. So one of two things can happen:

(1) A trend spanning all of human history will be broken within one generation, or,

(2) The trend will persist right up to very near the theoretical date.
(2a) Humans could be produced in large quantities by artificial means [which I don't consider very likely]
(2b) Counting 'intelligent entities' as the value being plotted, the number of humans plus artificial intelligences can possibly increase in number very rapidly.
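Eder's straight-line claim is easy to test. Here is a sketch in Python (an editorial illustration; the least-squares fit is not his method, and the data are the eight post-1650 population estimates from the table he posts later in this thread):

years = [1650, 1750, 1850, 1900, 1930, 1950, 1980, 1990]
pops  = [0.550, 0.725, 1.175, 1.600, 2.000, 2.565, 4.477, 5.333]  # billions
inv   = [1.0 / p for p in pops]   # hyperbolic growth: 1/pop falls linearly

n  = len(years)
mx = sum(years) / float(n)
my = sum(inv) / float(n)
slope = sum((x - mx) * (y - my) for x, y in zip(years, inv)) \
        / sum((x - mx) ** 2 for x in years)
intercept = my - slope * mx
print("1/population reaches zero around %.0f AD" % (-intercept / slope))

The fit crosses zero near 2029 AD, which matches Eder's own extrapolation below (2029 by the average slope, 2035 by an eyeballed fit).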
>> If, for example, computer power were limited to only a
>>factor of 100 short of human capacity, then you could cram 100 CPU
>>chips in a box and get the power you wanted.
>
> There seems to be an assumption here that "computer power" is some
>linear function of how many switching elements are available, so by plugging
>in 100 CPUs, we have 100 times the C.P. This overlooks problems of hardware
>(wiring costs), software (generating truly scalable algorithms that fit
>these architectures) and the problem domain - the size of a problem imposes
>a limit on how much parallelism can be used. So far, this has worked to the
>advantage of the dreadfully slow, not very parallel computers we've built.
>The processors get faster and there are more of them, so use a larger matrix
>in the LINPACK benchmark, and so on. I imagine we might find more and more
>important problems that are too small to scale with the increasingly complex
>machines they'll be running on.

But the particular problem we are discussing here, producing human-like intelligence in a computer of sufficient raw power, seems to be a very parallel process in the current implementation (i.e. the neural nets in your brain).

>>Some Other Interesting Thresholds
>>
>>Human life expectancies have been increasing at about 0.1 years
>>per calendar year. If the rate of progress in medical areas increases
>>by a factor of 10, then life expectancy will be increasing as fast
>>as you are aging.
>
> I don't know much about this area, but I thought that *maximum* human
>lifespan was not increasing very rapidly, and more and more people are dying
>due to, essentially, wearing out - so if you look at the mortality rate
>curve over time, it's forming a sharper and sharper peak down at the
>relatively inflexible limit. Maybe biotechnology can help with bodies
>wearing out, but it seems like a different problem from fixing externally
>caused problems like disease.
>

I don't find maximum lifespan as interesting from a practical standpoint as average life expectancy. After all, as a relatively average person, I am likely to live around the average expectancy. Only one person in a million or some other small fraction reaches the maximum lifespan. Since we are only now beginning to have an understanding of what causes the maximum lifespan value to be around 110-120 years, I would say it is premature to call it an 'inflexible limit'. We simply did not have the knowledge before now to do anything about it. We do know that somehow the aging clock gets reset when babies are made, and that some lines of lab-grown cancer cells seem to be immortal, so in principle there may be a way to get around the 120 year barrier.

> One thing I wonder about these schemes is how they cope with changing
>the use of the surplus fraction they devote to building "wealth" (as opposed
>to more factories). Suppose Apple had built a S.R. factory in 1984 that had
>busily replicated itself and was now cranking out Apple IIs for 5 cents
>each. How many people would want one? How about an IC fab line suitable for
>building 2102 (1K bit) memory chips (note that the cost of new fab
>facilities to work in denser technologies is climbing fast, perhaps faster
>than the complexity of the chips they produce).

In order for a self-replicating system to make economic sense, it must replicate faster than the change in technology lowers costs. For example, in the case you cite, you can buy about twice as much computer power for the same dollars every two years. So you have a choice of holding on to your money for a couple of years and buying a more powerful machine, or buying the output of the replicating factory. If it takes 2 years for the factory to copy itself, then it could produce 2 Apple IIs at the end of that time, so you again get twice the computer power for the same dollars, and it is a push compared to buying the better machine (or buying 2 comparable newer models at half price). If the factory can copy itself in one year, then you can get 4 Apple IIs out at the end of that time, versus the 2 by newer manufacturing techniques. So the self-replicating factory is a winner.

Also, there is nothing that precludes the existence of a self-improving factory. As better processes are developed, you instruct the factory to build a newer generation factory rather than simply a copy of itself. Note that the world economy is an example of such a self-improving system. The interesting questions are can you build a system smaller than the world as a whole - say the size of an industrial park or a pinhead, and can you build a system that does not have humans in the loop (or a very small requirement for humans), since humans are a sharply limited resource (there's only 3 billion units in stock (adults) and the orders for new production take 20 years to fill).

--
Dani Eder/Rt 1 Box 188-2/Athens AL 35611/(205)232-7467

Path: igor.rutgers.edu!planchet.rutgers.edu!nanotech
From: bernardh@wimsey.com (Bernard J Hughes)
Newsgroups: sci.nanotech
Subject: Re: The Singularity
Message-ID:
Date: 26 May 94 18:11:32 GMT
Sender: nanotech@planchet.rutgers.edu
Organization: Pacific Press
Lines: 27
Approved: nanotech@aramis.rutgers.edu

In article brauchfu@fnugget.intel.com (Brian D. Rauchfuss) wrote:
> So the question is, will we be able to solve problems with greater
> technology faster than they are created by an exponential population growth?
> Can even nanotechnology keep up with such problems?
>
> (The answer, btw, is "no". I do not know of any solution to population growth
> but to limit it to linear or less, and nanotechnology will exacerbate such
> problems by increasing lifetimes)
>
> BDR

I would say the answer is "yes". If you assume that "people" will remain instantiated in a natural biological form then the answer would be "no". But if you could "download" whatever constitutes a person to another less matter intensive form, then you could see a massive increase in "population". Such creatures could no longer fully be described as human, but that seems to me a major part of the concept of The Singularity. The Singularity, as I see it, is the point at which the human becomes the Transhuman, and therefore unpredictable to us.

--
Bernard Hughes (604) 251-7381 Vancouver B.C  bernardh@wimsey.com

Path: igor.rutgers.edu!planchet.rutgers.edu!nanotech
From: kickaha@smug.student.adelaide.edu.au (Sundance Bilson-Thompson)
Newsgroups: sci.nanotech
Subject: Re: The Singularity
Message-ID:
Date: 26 May 94 18:12:18 GMT
Sender: nanotech@planchet.rutgers.edu
Organization: The University of Adelaide
Lines: 36
Approved: nanotech@aramis.rutgers.edu

Dani Eder (eder@hsvaic.hv.boeing.com) wrote:
: The short answer is that the near edge of the Singularity is due about
: the year 2035 AD. Several lines of reasoning point to this date. One
: is simple projection from human population trends. Human population
: over the past 10,000 years has been following a hyperbolic growth trend.
: Since about 1600 AD the trend has been very steadily accelerating with
: the asymptote located in the year 2035 AD. Now, either the human
: population really will become infinite at that time (more about that
: later), or a trend that has persisted over all of human history will
: be broken. Either way it is a pretty special time.

I'd like to know where you drew that 'hyperbolic' bit from. Every other reference to human population growth that I was aware of says it's been exponential. This makes more sense because each generation has n children per parent, who have n children each, who in turn have n children each, hence the population of the 'm'th generation (StarTrek: the 'm'th generation :-) will be i*n^m where i is the population of the initial human generation (generation zero). This does not reach an asymptote at any point; it just gets bigger continually. Of course, a hyperbolic function doesn't reach infinity at its asymptote for that matter; it becomes undefined, and becomes negative on the other side of the asymptote. Somehow I don't think this is a valid model for population growth. The curve will _never_ become vertical, and so the year 2035 AD loses all significance based on this facet of your argument.

Cheers, Sundance

************************************************************************
Sundance O. Bilson-Thompson.      * "WHAT DO WE WANT ! ?"
Adelaide, South Australia         * "QUANTUM UNCERTAINTY !!"
student Mathematical Physicist    * "WHEN DO WE WANT IT ! ?"
and Redhead fanatic.              * "Aw......sometime this week."
========================================================================
kickaha@smug.student.adelaide.edu.au
***********************************************************************

Path: igor.rutgers.edu!planchet.rutgers.edu!nanotech
From: arromdee@jyusenkyou.cs.jhu.edu (Ken Arromdee)
Newsgroups: sci.nanotech
Subject: Re: Singularity
Message-ID:
Date: 31 May 94 03:37:23 GMT
Sender: nanotech@planchet.rutgers.edu
Organization: Johns Hopkins University CS Dept.
Lines: 21
Approved: nanotech@aramis.rutgers.edu

In article Dani Eder wrote:
>Now, I agree with you that human population can't become infinite by
>any means, and can't grow more than a few times its present value
>by normal biological means. So one of two things can happen:
>(1) A trend spanning all of human history will be broken within one
>generation, or,

There was an Analog article around 1961 or so (I first read it long afterwards in a reprint collection :-)) which applied the same sort of analysis to the maximum speed at which humans can travel. The conclusion was that we would have FTL drives around 1985, because the curve was getting steeper fast. If the growing figure is affected by some sort of limit that hasn't had much effect yet, you may end up with an S-shaped curve that starts level, gets steep, and levels out again as it approaches the limit.

--
Ken Arromdee (email: arromdee@jyusenkyou.cs.jhu.edu)
ObYouKnowWho Bait: Stuffed Turkey with Gravy and Mashed Potatoes
"You, a Decider?" --Romana  "I decided not to." --The Doctor

Path: igor.rutgers.edu!planchet.rutgers.edu!nanotech
From: eder@hsvaic.hv.boeing.com (Dani Eder)
Newsgroups: sci.nanotech
Subject: Re: The Singularity
Message-ID:
Date: 31 May 94 03:38:45 GMT
Sender: nanotech@planchet.rutgers.edu
Organization: Boeing AI Center, Huntsville, AL
Lines: 93
Approved: nanotech@aramis.rutgers.edu

brauchfu@fnugget.intel.com (Brian D. Rauchfuss) writes:
>eder@hsvaic.hv.boeing.com (Dani Eder) writes:
>>...
>>is simple projection from human population trends.
>>Human population over the past 10,000 years has been following a hyperbolic growth trend.
>
>This does not jibe with my information, which is that human population
>increases 2-3% per year, an exponential, not hyperbolic growth pattern.

My sources are (1) World Almanac and Book of Facts, 1992 edition and (2) Statistical Abstract of the United States, 1988 edition. I can't plot a graph in ASCII, but here's the data:

Year    Population   Inverse        Change in      Growth Rate
(A.D.)  (Billions)   (1/Billions)   Inverse/year   (%/year)
   1      0.200        5.000          -0.0019        0.06
1650      0.550        1.818          -0.0044        0.28
1750      0.725        1.379          -0.0053        0.48
1850      1.175        0.851          -0.0045        0.62
1900      1.600        0.625          -0.0042        0.75
1930      2.000        0.500          -0.0055        1.25
1950      2.565        0.390          -0.0056        1.87
1980      4.477        0.223          -0.0036        1.77
1990      5.333        0.187

As can be seen by inspection of the fourth and fifth columns, the data since 1650 is much closer to a hyperbolic than an exponential. The inverse of a hyperbolic is a line of constant slope, and the slope varies by +10% to -25% of its average value of -0.0048 over the period 1650-1990 A.D. An exponential has a constant percentage growth rate. Here the growth rate varies from -60% to +175% of its average value of 0.67% per year over the same time period. Extrapolating the average change in the inverse gives 39 years from 1990, or 2029 A.D. An eyeballed line fit to the graphed data indicates 2035 A.D.

>This is a more important point than it might appear; there has been a general
>realization that higher populations in the future will tend to create more
>problems with resource scarcity and crowding than are solved by the greater
>brainpower. So the question is, will we be able to solve problems with greater
>technology faster than they are created by an exponential population growth?
>Can even nanotechnology keep up with such problems?
>
>(The answer, btw, is "no". I do not know of any solution to population growth
>but to limit it to linear or less, and nanotechnology will exacerbate such
>problems by increasing lifetimes)
>
>BDR

I invite Mr. Rauchfuss to support his claim that brains lose out to bodies with increasing population. The simplest argument I can make is that every brain comes with two hands to feed it, which is a linear problem. In more detail, the Census Bureau projects that world population growth will fall to 1.4% per year by 2010, since birth rates are falling not only in the developed world, but in much of the undeveloped world also.

Resources are not, on the whole, getting scarcer. The fossil fuels which have been burned in the past are still on Earth. Some of that carbon is in the form of CO2 in the atmosphere, but it still is here. Given sufficient need and available power, you can get it back out.

As far as crowding and such, here's an example. Based on closed life support studies for spacecraft, it takes 10 square meters of growing area to feed one person. So a good-sized mountain hollowed out could provide all the food for North America. The rest of the land could be returned to its natural state except for the 1% occupied by people at a comfortable suburban density (1/4 acre lots with 2.5 people per household). This example assumes well understood technology that probably will be test flown in the next 10 years on the Space Station.

The REAL limits to population growth at 2% per year are when your civilization is expanding in a spherical shell of 150 light year radius at the speed of light. (The volume of a light-speed sphere grows as t^3, so its fractional growth rate is 3/t per year, which falls to 2% at t = 150 years.) Beyond that point a 2% growth rate would require expansion rates faster than light.
After that, population growth is limited to cubic.

Dani Eder
--
Dani Eder/Rt 1 Box 188-2/Athens AL 35611/(205)232-7467

Path: igor.rutgers.edu!planchet.rutgers.edu!nanotech
From: sterner@upenn5.hep.upenn.edu (Kevin Sterner)
Newsgroups: sci.nanotech
Subject: Re: The Singularity (and population)
Message-ID:
Date: 31 May 94 03:40:26 GMT
Sender: nanotech@planchet.rutgers.edu
Organization: University of Pennsylvania
Lines: 19
Approved: nanotech@aramis.rutgers.edu

ttf@dsg59.nad.ford.com (Tihamer Toth-Fejel) writes:
>I challenge you to name a single resource that is limited, given
>nanotechnology.

Energy. (Granted, it's a very big limit...)

A more serious (and related) limit is the ability to dissipate heat. I'm currently developing a very high resolution detector system that will depend upon *very*-VLSI. The ability of the device to dissipate heat looks like it might drive the design. We haven't even attempted to tackle that problem yet--it just looms on the horizon like a hurricane. I suspect it will loom on the horizon of many nanotech systems, too.

--
K.
------------------------------------------------------------------------------
Kevin L. Sterner | U. Penn. High Energy Physics | Smash the welfare state!
------------------------------------------------------------------------------

Path: igor.rutgers.edu!planchet.rutgers.edu!nanotech
From: leech@cs.unc.edu (Jon Leech)
Newsgroups: sci.nanotech
Subject: Re: The Singularity (and population)
Message-ID:
Date: 31 May 94 03:40:58 GMT
Sender: nanotech@planchet.rutgers.edu
Organization: The University of North Carolina
Lines: 9
Approved: nanotech@aramis.rutgers.edu

In article ttf@dsg59.nad.ford.com (Tihamer Toth-Fejel) writes:
|> I challenge you to name a single resource that is limited, given
|> nanotechnology.

Energy, mass, and communications bandwidth. Admittedly it will likely be a while before these are real problems.

Jon
__@/

Path: igor.rutgers.edu!planchet.rutgers.edu!nanotech
From: dsiebert@icaen.uiowa.edu (Doug Siebert)
Newsgroups: sci.nanotech
Subject: Re: The Singularity
Message-ID:
Date: 31 May 94 03:42:36 GMT
Sender: nanotech@planchet.rutgers.edu
Organization: Iowa Computer Aided Engineering Network, University of Iowa
Lines: 23
Approved: nanotech@aramis.rutgers.edu

jarice@delphi.com writes:
>BYTE magazine dated March 1994, pg. 32 has a short article about
>the new Intel Ni1000 neural network chip. ...
>... Still think it'll take 40 years?

No, it won't take 40 years. It'll take longer. Software development isn't moving at nearly the pace hardware is. If you sold a human-brain-power equivalent box for $1000 we'd be no closer to duplicating a human brain (or even a bee's brain, probably) than we are today. Getting the hardware is by far the easiest part of the battle. Just read comp.risks sometime and see how much problem simple non artificially intelligent software still gives us, even software that is written for life and death and/or million/billion dollar systems that you would expect would be checked out a bit more closely for bugs than your average Microsoft program for your PC.

--
Doug Siebert            | I have a proof that everything I have stated above
dsiebert@isca.uiowa.edu | is true, but this .sig is too small to contain it.

Path: igor.rutgers.edu!planchet.rutgers.edu!nanotech
From: jlewis@bigdog.engr.arizona.edu (Jay A. Lewis)
Newsgroups: sci.nanotech
Subject: Re: Singularity
Message-ID:
Date: 31 May 94 03:45:23 GMT
Sender: nanotech@planchet.rutgers.edu
Organization: University of Arizona, CCIT
Lines: 29
Approved: nanotech@aramis.rutgers.edu

Obviously picking a particular year like 2035 or 2032 could never be more than a guesstimate. It does seem that many trends are approaching some sort of maximum within a decade or so of that date. I have never heard it described this way, but I like the concept. Any half-decent catastrophe will set it back, and a major breakthrough will accelerate it. I think a better mathematical representation would be an overdamped wave of some sort.

Technological advances and population growth can only accelerate until they reach some point of human limitation, then level off. For population, the more technologically advanced countries seem to be having smaller families (a leveling off of growth). As for technology, its growth is slowed by the ability of people to adjust to new technology. I think a symptom that this is happening already is the commercials for products that haven't been developed yet (have you seen the AT&T commercials? You will.)

Strictly speaking there will be no 'singularity'; its definition is arbitrary. Pick a population number and there is still room for one more person to be born. Pick a computer speed and a computer will be accelerated a fraction faster somewhere.

--
jlewis@bigdog.engr.arizona.edu
---- PGP 2.3 Public Key available upon request ----

Path: igor.rutgers.edu!planchet.rutgers.edu!nanotech
From: eder@hsvaic.hv.boeing.com (Dani Eder)
Newsgroups: sci.nanotech
Subject: Re: The Singularity
Message-ID:
Date: 1 Jun 94 18:43:22 GMT
Sender: nanotech@planchet.rutgers.edu
Organization: Boeing AI Center, Huntsville, AL
Lines: 59
Approved: nanotech@aramis.rutgers.edu

kickaha@smug.student.adelaide.edu.au (Sundance Bilson-Thompson) writes:
>Dani Eder (eder@hsvaic.hv.boeing.com) wrote:
>
>: Human population
>: over the past 10,000 years has been following a hyperbolic growth trend.
>
>I'd like to know where you drew that 'hyperbolic' bit from. Every other
>reference to human population growth that I was aware of says it's been
>exponential. This makes more sense because each generation has n children

The source of that 'hyperbolic' bit is the actual population estimates, drawn from sources such as the US Bureau of the Census. I invite you to provide even one reference that shows an exponential trend for the period I cited (either from 8000 BC to present or 1600 AD to present). I want to hammer this point home because it is a widely believed fallacy: world population growth rates peaked at 2.0 percent per year at exactly the time that the "Limits to Growth" study was published. They were never as high before, and without the effects of computer intelligence, would not be as high in the future.

Exponential growth is characterized by a constant percentage increase per year. Human population has not exhibited this trend except for fleeting periods such as the 1960s and 1970s when the rate seemed to be constant because it was in fact peaking. The simplistic model that assumes that families average n children, where n>2.1 - leading to a fixed growth rate - is not in accord with the historical data. People have varied the number of children they have had over history.
For example, in the United States in the 20th century, birth rates were relatively low in the 1920s and 1930s, nearly down to the replacement rate (of 2.1), then rose to about 3.5 children per woman in the 1950s and 1960s, causing the well known 'Baby Boom' generation.

The more accurate model is that in times over 200 years ago the world was characterized by high birth rates and high death rates, with the difference between the two being relatively small - yielding a small population growth rate. Innovations in farming, the clearing of forests for agriculture, etc., yielded a growing population. Each time the population doubled, there were twice as many minds to come up with the next improvement, so they came faster and faster. These improvements increased the survival rates for children and lowered death rates for adults, widening the gap between them and hence leading to an increasing population growth rate. After a while (about 2 generations in what is now the developed world, apparently a shorter period in the now developing world) people realized that they no longer needed to give birth to 4 children to get 2 survivors, so the birth rates come down. In the developed world we are mostly through the transition, and approaching population equilibrium. In the developing world we are still in transition, but birth rates are headed down over most of the developing world.

Dani Eder
--
Dani Eder/Rt 1 Box 188-2/Athens AL 35611/(205)232-7467

Path: igor.rutgers.edu!planchet.rutgers.edu!nanotech
From: archer@frmug.fr.net (Vincent Archer)
Newsgroups: sci.nanotech
Subject: Re: Singularity
Message-ID:
Date: 1 Jun 94 18:43:47 GMT
Sender: nanotech@planchet.rutgers.edu
Organization: FrMug Usenet BBS
Lines: 82
Approved: nanotech@aramis.rutgers.edu

Dani Eder wrote:
>What a hyperbolic trend means is this: if you plot 1/population versus
>year you get a straight line. If you take population figures from
>1600 AD to today and plot their inverses, it is very nearly a straight
>line which crosses zero at about 2035 AD. If 1/population is zero, the
...
>(1) A trend spanning all of human history will be broken within one
>generation, or,

If you count 4 centuries as "all of human history", then there's something broken in my history textbooks :-) Hmm, I wonder. Does this model predict, say, 2 persons in year 4000 BC? (lots of smileys added...)

As we go back in time, the numbers are less and less accurate, and the number of population 'models' that can accommodate the distribution increases. I think that a logarithm of the population would also yield "very nearly a straight line".

>I don't find maximum lifespan as interesting from a practical standpoint
>as average life expectancy. After all, as a relatively average person,
>I am likely to live around the average expectancy. Only one person in
>a million or some other small fraction reaches the maximum lifespan.
>Since we are only now beginning to have an understanding of what causes
>the maximum lifespan value to be around 110-120 years, I would say
>it is premature to call it an 'inflexible limit'. We simply did not
>have the knowledge before now to do anything about it.

What is important is the curve "expected life span/current age". It is a rising curve (as you age, your expected life span increases; if you can expect to live to age 68 as a baby, you can expect to live to age 75 as an adult, and to age 82 when you retire), but that curve finally crosses the diagonal. And you die.
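Archer's diagonal-crossing and Eder's earlier 0.1-years-per-year threshold can be combined in one toy loop (a Python sketch added for illustration; the starting age of 36 and the 80-year expectancy echo figures from the thread, while the 200-year cutoff is an arbitrary assumption):

# Does your projected death date recede faster than you age?
# 'gain' = years of remaining life expectancy medicine adds per calendar year.
def age_at_death(age=36.0, expectancy=80.0, gain=0.1, horizon=200):
    for _ in range(horizon):
        if age >= expectancy:      # the curve has crossed the diagonal
            return age
        age += 1.0
        expectancy += gain         # medicine pushes the limit outward
    return None                    # never caught within the horizon

print(age_at_death(gain=0.1))      # ~85: the diagonal is crossed, you die
print(age_at_death(gain=1.0))      # None: at 1 y/y you outrun the curve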
>We do know that somehow the aging clock gets reset when babies are made, During which time, cells are scrambled, our gene mixed a lot, and everything restarted from scratch. >and that some lines of lab-grown cancer cells seem to be immortal, so >in principle there may be a way to get around the 120 year barrier. But don't expect to maintain your 'personal integrity' that way. Entropic decay still rages within your body, as in any structure. There's no way to avoid it, unless you produce a clean brand-new structure. You then just need to transfer your personality/memory/whatever in that clean slate. >In order for a self-replicating system to make economic sense, it must >replicate faster than the change in technology lowers costs. For example, Or faster than the change in technology renders old technology obsolete. The examples you are describing are akin to the fabled "man-month" that is the bane of project planners. Sure, with your system, you can produce 100 Apple ][ for the price of a mid-range 486 PC. But how would you use these? They're powerful enough but are they useful enough. Nobody believes that, by hiring the whole of China, a 1,000 man-day project can be completed in one hour. That's what your example is suggesting. >Also, there is nothing that precludes the existence of a self-improving >factory. As better processes are developed, you instruct the factory >to build a newer generation factory rather than simply a copy of itself. Now, that's what I want to get. Not merely faster, but better. >Note that the world economy is an example of such a self-improving system. >The interesting questions are can you build a system smaller than the >world as a whole - say the size of an industrial park or a pinhead, and >can you build a system that does not have humans in the loop (or a very >small requirement for humans), since humans are a sharply limited >resource (there's only 3 billion units in stock (adults) and the orders >for new production take 20 years to fill). Out of which about 10 to 15% (varying) of this resource is unemployed. -- Vincent Archer Email: archer@frmug.fr.net Don't worry if it doesn't work right. If everything did, you'd be out of a job. Path: igor.rutgers.edu!planchet.rutgers.edu!nanotech From: jfitz@rain.com (John K. Fitzpatrick) Newsgroups: sci.nanotech Subject: Re: Singularity Message-ID: Date: 3 Jun 94 13:57:12 GMT Sender: nanotech@planchet.rutgers.edu Organization: Teleport - Portland's Public Access (503) 220-1016 Lines: 24 Approved: nanotech@aramis.rutgers.edu In article jlewis@bigdog.engr.arizona.edu (Jay A. Lewis) writes: > As for >technology, its growth is slowed by the ability of people to adjust to >new technology. But we are reaching the point where our technology adjusts to people. Since the Singularity may be the junction of the posthuman, people on this side of it, as it sweeps through the world, will just be along for the ride. The hyper-flexible, -extensible, -adjustable technology may adapt to our weaknesses, not waiting for us to tell it what we want from it. As it does so, we become posthuman being(s), possibly deathless. I think we should think hard about wether we can make it worth the machines effort to help us out. "Why should adapt to the puny humans?" Perhaps it will be our sensory systems that the machines will want to use, trading us immortality for the perpetual use. Or perhaps they will dis us. John John K. Fitzpatrick jfitz@rain.com Your part in the dance is to disapprove of my part in the dance. 
- Bela Bartok Path: igor.rutgers.edu!planchet.rutgers.edu!nanotech From: robj@netcom.com (Rob Jellinghaus) Newsgroups: sci.nanotech Subject: Re: The Singularity Message-ID: Date: 3 Jun 94 13:58:50 GMT Sender: nanotech@planchet.rutgers.edu Organization: Netcom Online Communications Services (408-241-9760 login: guest) Lines: 43 Approved: nanotech@aramis.rutgers.edu In article dsiebert@icaen.uiowa.edu (Doug Siebert) writes: >If you sold a human-brain-power >equivalent box for $1000 we'd be no closer to duplicating a human brain (or >even a bee's brain, probably) than we are today. This is a pretty large assertion. Are you really saying that a, what, 10^12 or so increase in computer price-performance would make _no_ difference to our ability to construct autonomous, intelligent-seeming robots? (Which is what we'd be doing in emulating a bee's brain.) >Getting the hardware is >by far the easiest part of the battle. Just read comp.risks sometime and see >how much trouble simple non-artificially-intelligent software still gives us, The (excellent) RISKS digest often discusses under-engineered software, primarily in terms of its failures to perform its (specific, complex) functions. That this is a problem is undeniable. But hey, lots of software does amazing things with pretty good reliability. The fact that we have problems building perfect systems doesn't say much about our ability to do a hell of a lot more given more cycles. In fact, human error itself (interacting with technology) is at the root of many of the failures in RISKS. What does this say about the failure modes of increasingly complex (or "intelligent", take your pick) systems? On another note, someone cogently points out that the "Singularity", as a measure of "technological advance", is bogus; there is no necessary asymptote to processor power, or population, or what have you. I sometimes think of the Singularity as the point in time beyond which we humans (1990's Homo Sapiens) are unable to envision at all what the world is like; technology has opened too many currently-unimaginable (literally) possibilities. This defines the Singularity as the event horizon of our current ability to foresee (i.e. look into) the future. By this measure, the Singularity is not a vertical slope on a time-axis graph, which we collide with in year X. It's more like a moving target, which recedes as we get closer to it. Of course, it's also so vaguely defined as to be virtually useless except as a thought-provoking diversion. Oh well :-) -- Rob Jellinghaus robj@netcom.com uunet!netcom!robj Path: igor.rutgers.edu!planchet.rutgers.edu!nanotech From: cohenb@slc.com (Bruce Cohen) Newsgroups: sci.nanotech Subject: Risks (was Re: The Singularity) Message-ID: Date: 3 Jun 94 13:59:59 GMT Sender: nanotech@planchet.rutgers.edu Organization: ElectroPolitical Engineering Enterprises Lines: 32 Approved: nanotech@aramis.rutgers.edu In article dsiebert@icaen.uiowa.edu (Doug Siebert) writes: > Just read comp.risks sometime and see > how much trouble simple non-artificially-intelligent software still gives us, > even software that is written for life and death and/or million/billion dollar > systems that you would expect would be checked out a bit more closely for bugs > than your average Microsoft program for your PC. Slightly off the subject of the singularity, but very much to the point of our ability to build and maintain extremely complex systems like software and nanostructures ...
Everyone involved in the development and use of complex artifacts should read "Digital Woes" by Lauren Wiener. This book makes a strong case for the notion that we will *never* be able to produce known-safe or bug-free complex systems (software is the particular subject here) because of the nature of the design and production processes and the way humans think and work. The lesson to be learned is that we're not perfect, our artifacts are not perfect, and that we need to plan for that. Especially when dealing with technologies whose potential impact on the world is so great as with nanotech, we ought to adopt the motto "There ain't no such thing as a bug-free product." -- ----------------------------------------------------------------------------- No object should exist that doesn't pay for itself by accepting computational responsibility. - Kent Beck ----------------------------------------------------------------------------- Bruce Cohen, Servio Corporation | email: cohenb@slc.com 15400 NW Greenbrier Pkwy, Suite 280 | phone: (503)690-3602 Beaverton, OR USA 97006 | fax: (503)629-8556 Path: igor.rutgers.edu!planchet.rutgers.edu!nanotech From: brauchfu@fnugget.intel.com (Brian D. Rauchfuss) Newsgroups: sci.nanotech Subject: Re: The Singularity Message-ID: Date: 3 Jun 94 14:01:36 GMT Sender: nanotech@planchet.rutgers.edu Organization: INTEL.FOLSOM Lines: 31 Approved: nanotech@aramis.rutgers.edu In article eder@hsvaic.hv.boeing.com (Dani Eder) writes: >I invite Mr. Rauchfuss to support his claim that brains lose out to bodies >with increasing population. The simplest argument I can make is that >every brain comes with two hands to feed it, which is a linear problem. My simplest argument is to point out that the finite energy and matter supply of the nearby stars cannot support an exponential growth for very long. 1000 years of the present 1.7% growth produces 20 million times the present population, or 10^17 people. 1000 years is a very short time, especially to people who expect nanotechnology to extend lives to this length. (Having most of the population moving away from the center at near the speed of light is a possible solution, though very resource intensive) >In more detail, the Census Bureau projects that world population growth >will fall to 1.4% per year by 2010, since birth rates are falling not >only in the developed world, but in much of the undeveloped world also. So perhaps the problem will solve itself. We do not have a good method of predicting human behavior; the exponential model merely sounds reasonable (as opposed to best fitting the data) given the human tendency to have the same-size families as their parents. It is difficult to predict what the population will do in a nanotech world. It may be that the economic cost of children falls low enough that everyone will want a few (and give them to the robonanny when they are difficult, so the time investment would be small too!). This plus extremely long lives would cause a population boom. OTOH, perfect birth control might produce a negative population growth rate.
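[Ed. note: Rauchfuss's multiplier is easy to verify; a minimal Python check follows. The 5.6 billion starting population is an assumed mid-1990s figure, not from his post.

    # 1000 years of compounding at 1.7%/yr
    rate, years = 0.017, 1000
    start_pop = 5.6e9                                      # assumed world population, mid-1990s

    multiplier = (1.0 + rate) ** years
    print("multiplier: %.1e" % multiplier)                 # ~2.1e7, i.e. ~20-million-fold
    print("population: %.1e" % (start_pop * multiplier))   # ~1.2e17, matching his 10^17

--Ed.]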
Brian Path: igor.rutgers.edu!planchet.rutgers.edu!nanotech From: fred.hapgood@channel1.com (Fred Hapgood) Newsgroups: sci.nanotech Subject: The Singularity Message-ID: Date: 3 Jun 94 14:02:06 GMT Sender: nanotech@planchet.rutgers.edu Organization: Channel 1(R) 617-864-0100 Info Lines: 28 Approved: nanotech@aramis.rutgers.edu DS> > Message-ID: > Newsgroups: sci.nanotech > Organization: Iowa Computer Aided Engineering Network, University of Iowa DS> jarice@delphi.com writes: >BYTE magazine dated March 1994, pg. 32 has a short article about >the new Intel Ni1000 neural network chip. ... >... Still think it'll take 40 years? DS> No, it won't take 40 years. It'll take longer. Software development isn't moving at nearly the pace hardware is. There must be a better way to do this software business -- think of the tremendous functionality of a pigeon or rat or bat brain relative to the biggest of our machines. In the case of a pigeon you have a computer the size of a raisin, running at 20 hertz, drawing virtually no power, that can be trained on a young leaf of one species in one orientation and distance and that will then match an old leaf of the same species shown in a different orientation and distance. There's a good idea hidden in there somewhere, and we don't know what it is yet. Once we tumble to it, software development will become a lot easier. --- ~ SPEED 2.0b #1339 ~ Path: igor.rutgers.edu!planchet.rutgers.edu!nanotech From: ttf@dsg59.nad.ford.com (Tihamer Toth-Fejel) Newsgroups: sci.nanotech Subject: The Singularity Message-ID: Date: 6 Jun 94 18:52:16 GMT Sender: nanotech@planchet.rutgers.edu Lines: 22 Approved: nanotech@aramis.rutgers.edu Rob Jellinghaus writes: the "Singularity", as a measure of "technological advance", is bogus; there is no necessary asymptote to processor power, or population. Are you saying that a human wall of flesh can expand from Earth at faster than the speed of light? WRT processor power - I will bet that when a brain becomes much bigger than 1 AU, the speed of light will limit the communication between different portions of its consciousness, and its consciousness will not advance beyond that point. Either scenario ends up in the Omega Point. But you are right. 1990's homo sapiens cannot imagine what anything will be like. It's more like a moving target, which recedes as we get closer to it. Hmm. Like Achilles chasing Zeno's tortoise? But he does catch up, because Zeno's paradox is flawed (fortunately, calculus took care of it). Tihamer Toth-Fejel Office: 313 594-2165 845-7918, 3646 (Secretary) Fax: 313 594-7837 Home: 313 662-4741 Concept 2010 Design Studio, Ford, Dearborn, Michigan. ***************************************************************** Path: igor.rutgers.edu!planchet.rutgers.edu!nanotech From: bernardh@wimsey.com (Bernard J Hughes) Newsgroups: sci.nanotech Subject: Re: Singularity Message-ID: Date: 6 Jun 94 18:53:22 GMT Sender: nanotech@planchet.rutgers.edu Organization: Pacific Press Lines: 19 Approved: nanotech@aramis.rutgers.edu In article , jfitz@rain.com (John K. Fitzpatrick) wrote: > I think we should think hard about whether we can make it worth > the machines' effort to help us out. "Why should we adapt to the puny > humans?" Why should humans help each other out? I doubt if we will get to the Singularity unless we learn more about cooperation and what makes it work, then build that knowledge into our machines. As I see it, the line between machines/humans is a false one generated by the simple Newtonian machine examples we see today.
I think any complex entities, whether biological or artificial, will have similar problems of identity, self-interest, etc. I expect to live in interesting times... Bernard Hughes (604) 251-7381 Vancouver B.C bernardh@wimsey.com ----- "Creative Laziness at its Best"-------- Path: igor.rutgers.edu!planchet.rutgers.edu!nanotech From: kamchar@ibm.cl.msu.edu (SunCat) Newsgroups: sci.nanotech Subject: Critter minds WAS:Re: The Singularity Message-ID: Date: 6 Jun 94 19:04:44 GMT Sender: nanotech@planchet.rutgers.edu Organization: Money Sucking University Lines: 29 Approved: nanotech@aramis.rutgers.edu In article , fred.hapgood@channel1.com (Fred Hapgood) wrote: [...] >There must be a better way to do this software business -- think >of the tremendous functionality of a pigeon or rat or bat brain >relative to the biggest of our machines. In the case of a pigeon ... In _Queen of Angels_ Greg Bear uses the idea of taking part of the intelligence of squirrels and horses and instantiating it in mobile AI-equipped robots. SunCat>>>>>>>>>>>>>>> kamchar@ibm.cl.msu.edu "They that can give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety" - Ben Franklin [Oddly enough, in WWII B.F. Skinner ran a project for the military in which he built "smart bombs" using live trained pigeons ... Another SF reference of interest is Allen's The Modular Man. I would imagine, though, that by the time we can do this we'll have discovered better techniques. We can now build ornithopters but fixed wings and propellers are more efficient... --JoSH] Path: igor.rutgers.edu!planchet.rutgers.edu!nanotech From: ebrandt@jarthur.cs.hmc.edu (Eli Brandt) Newsgroups: sci.nanotech Subject: Re: The Singularity Message-ID: Date: 7 Jun 94 18:30:36 GMT Sender: nanotech@planchet.rutgers.edu Organization: Harvey Mudd College, Claremont CA Lines: 11 Approved: nanotech@aramis.rutgers.edu In article , Brian D. Rauchfuss wrote: >(Having most >of the population moving away from the center at near the speed of light is >a possible solution, though very resource intensive) This is no solution to a demand for exponentially-increasing amounts of matter. Speed-of-light expansion gives you cubic increase, in a Euclidean space (which is close enough to the truth). Eli ebrandt@hmc.edu finger for PGP key. Path: igor.rutgers.edu!planchet.rutgers.edu!nanotech From: news@zurich.ibm.com Newsgroups: sci.nanotech Subject: Re: Critter minds WAS:Re: The Singularity Message-ID: Date: 7 Jun 94 18:31:02 GMT Sender: nanotech@planchet.rutgers.edu Organization: IBM Zurich Research Laboratory Lines: 10 Approved: nanotech@aramis.rutgers.edu > Oddly enough, in WWII B.F. Skinner ran a project for the military in > which he built "smart bombs" using live trained pigeons ... I saw a program on Danish TV some time ago, where they were using pigeons for finding people at sea. Put aboard a helicopter, the pigeon would ring a bell when seeing a life-vest in the water beneath. Because pigeons have far better eyes than humans, that seemed to work quite well, but I don't know if it is in widespread use.
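[Ed. note: Eli Brandt's cubic-versus-exponential point a few posts back rewards a quick numerical look. A minimal sketch under an arbitrary normalization (both curves set to 1 at year 1); the 1.7%/yr figure is the one quoted in the thread, and only the shapes of the curves matter.

    import math

    def demand(t):                   # demand compounding at 1.7%/yr
        return math.exp(0.017 * t)

    def supply(t):                   # matter reachable at lightspeed: the volume
        return float(t) ** 3         # of a sphere grows as the cube of time

    crossing = None
    for t in range(2, 5000):         # find where the exponential re-overtakes the cubic
        if demand(t) > supply(t) and demand(t - 1) <= supply(t - 1):
            crossing = t
    print("demand outruns lightspeed expansion for good around year", crossing)

At these parameters the cubic keeps up for only about 1,250 years; after that, no expansion speed helps an exponentially growing demand. --Ed.]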
Morten Path: igor.rutgers.edu!planchet.rutgers.edu!nanotech From: robj@netcom.com (Rob Jellinghaus) Newsgroups: sci.nanotech Subject: Re: The Singularity Message-ID: Date: 7 Jun 94 18:37:33 GMT Sender: nanotech@planchet.rutgers.edu Organization: Netcom Online Communications Services (408-241-9760 login: guest) Lines: 30 Approved: nanotech@aramis.rutgers.edu In article ttf@dsg59.nad.ford.com (Tihamer Toth-Fejel) writes: > > Rob Jellinghaus writes: > the "Singularity", as a measure of "technological advance", is > bogus; there is no necessary asymptote to processor power, > or population. >Are you saying that a human wall of flesh can expand from Earth at >faster than the speed of light? I was talking about vertical asymptotes, not horizontal ones. There is (as far as we know) no point at which the rate of increase of processor power or population becomes infinite. There are probably hard limits to how fast either can grow, in fact. But one conventional notion of the "singularity" (as Dani Eder mentioned) is a point at which the graph of technology/population versus time acquires a vertical slope, and that's the notion I think is fallacious. > It's more like a moving target, which recedes as we get closer to it. >Hmm. Like Achilles chasing Zeno's tortoise? But he does catch up, because >Zeno's paradox is flawed (fortunately, calculus took care of it). No, not quite like that. As we get closer to the future, our foresight _of_ the future becomes clearer. I see no reason to think we will reach a point where we cannot make good guesses _at all_ about the future--even if we can only see (say) a year ahead at a time. (And by "we" I mean "life as of that future time", not necessarily "we ordinary homo sapiens reading this newsgroup right now.") -- Rob Jellinghaus robj@netcom.com uunet!netcom!robj Path: igor.rutgers.edu!planchet.rutgers.edu!nanotech From: sarmdj@thor.cf.ac.uk (MATTHEW JONES) Newsgroups: sci.nanotech Subject: Re: The Singularity Message-ID: Date: 10 Jun 94 02:40:39 GMT Sender: nanotech@planchet.rutgers.edu Organization: University of Wales College at Cardiff Lines: 14 Approved: nanotech@aramis.rutgers.edu In article ttf@dsg59.nad.ford.com (Tihamer Toth-Fejel) writes: >when a brain becomes much bigger than 1 AU, the speed of light will >limit the communication between different portions of its >consciousness, and its consciousness will not advance beyond that point. >Either scenario ends up in the Omega Point. But you are right. 1990's >homo sapiens cannot imagine what anything will be like. > Arthur C. Clarke has a word to say on this one: paraphrasing from "Report from Planet Three": "When God's Children are in trouble, he/she's going to come as fast as he can. Which is the speed of light, unfortunately, in his Creation!" MATT Path: igor.rutgers.edu!planchet.rutgers.edu!nanotech From: jarice@delphi.com Newsgroups: sci.nanotech Subject: Re: The Singularity Message-ID: Date: 10 Jun 94 23:27:47 GMT Sender: nanotech@planchet.rutgers.edu Organization: Delphi (info@delphi.com email, 800-695-4005 voice) Lines: 58 Approved: nanotech@aramis.rutgers.edu Doug Siebert writes: >jarice@delphi.com writes: > >>BYTE magazine dated March 1994, pg. 32 has a short article about >>the new Intel Ni1000 neural network chip. ... >>... Still think it'll take 40 years? > > > >No, it won't take 40 years. It'll take longer. Software development isn't >moving at nearly the pace hardware is.
If you sold a human-brain-power Doug Siebert asserts that software development is moving slowly, and therefore developing human-equivalent intelligence is linked weakly, if at all, to raw processing power. I dispute that. I would argue that, in fact, software development is progressing roughly as fast as computer processing power. My rationale? The development of software development techniques which do not require human programmers to write every line of code, i.e., 1) Neural networks (which the Ni1000 chip was designed to imitate), and more importantly, 2) Evolutionary programming, in which programs are generated randomly and then competed against a selection criterion. Then you take the top 10%, mix their programming code (mate them), generate new programs, and compete them, ad infinitum until a useful program is evolved. Evolutionary programming, more generally known as Artificial Life, is in my opinion more applicable to the development of human-level intelligence. To quote from the book "Artificial Life II", pg. 831, "Artificial life provides the possibility for Lamarckian evolution to act on the material composition of the organisms themselves. Once we can manipulate the genome directly...we can modify our offspring according to our perception of their needs." In other words, directed, intelligent evolution which occurs at lightning speed. I think we'll see the emergence of human-level machine intelligence within 20 years. That intelligence will do two things: It will continue to increase its capabilities by directed self-evolution and by designing better computer hardware (cycle continues with no evident limit), and it will avail itself of existing technology and improve that technology which it needs. I think that will certainly include nanotechnology. So while machine intelligence doesn't need nanotechnology, if machine intelligence evolves prior to nanotech, nanotech (and virtually every conceivable physically possible technology) will emerge soon after. Good luck to humans - it's either death or godhood. [It is also worth noticing that in, e.g., numerical analysis, which has a history long enough to begin to draw conclusions, algorithmic speedups on particular problems have tended to be about equal to hardware on the average; that is, over the same historical period that machines became 10 times as fast, new algorithms would be discovered that gave another factor of 10, for a total speedup of 100. --JoSH] Path: igor.rutgers.edu!planchet.rutgers.edu!nanotech From: fred.hapgood@channel1.com (Fred Hapgood) Newsgroups: sci.nanotech Subject: The Singularity Message-ID: Date: 12 Jun 94 17:22:48 GMT Sender: nanotech@planchet.rutgers.edu Organization: Channel 1(R) 617-864-0100 Info Lines: 78 Approved: nanotech@aramis.rutgers.edu > Doug Siebert asserts that software development is moving slowly, > and therefore developing human-equivalent intelligence is linked > weakly, if at all, to raw processing power. I dispute that. > I would argue that, in fact, software development is progressing > roughly as fast as computer processing power. My rationale? > The development of software development techniques which do > not require human programmers to write every line of code, i.e., > 1) Neural networks (which the Ni1000 chip was designed to > imitate), and more importantly, > 2) Evolutionary programming, in which programs are generated randomly > and then competed against a selection criterion.
Then you take > the top 10%, mix their programming code (mate them), generate > new programs, and compete them, ad infinitum until a useful > program is evolved. I don't find these persuasive examples, but that's because I suspect that most of the truly interesting applications (like the robot engineers we will need to make nanotechnology really work) will require animal-type intelligence, and so far we have made zero progress in finding out how bats and rats do what they do. Bats and rats and pigeons (etc.) couple some kind of very powerful learning 'trick' to an equally impressive pattern classification/matching 'trick', and with damn close to no hardware at all. You can see these abilities at work in both their sensory modalities and motor control. Blows me away. Animals have figured the metaphor thing out. They know how to use their life experience to generate good connections between memories without generating bad ones; how to use metaphors to extend their own programming. Human animals can say things like 'this person's thinking is soft', thus making an implicit parallel with the experience of sinking down into a pillow. This is an amazing feat of connectiveness, especially when you consider that we don't get overwhelmed with crazy connections that make no sense. (Except for schizophrenics, who suffer from exactly this disease. Even so, their minds are still amazing.) Try to think of what a machine would be like that could make and utilize that kind of connection on the fly and still be functional. Consider the difference between the history of Go and chess programming. It is still the case that after twenty years of development a human can pick up the rules and play better than the best Go program after only a month of practice, if that much, whereas the ratings of commercially available chess programs will probably pass that of the world chess champion in this decade. That's because chess programmers found a way to get machines to play chess that didn't require self-programming through metaphors; so far nobody has found any such workaround in Go. You read a lot of dishwater about neural nets, but the test is of course real applications. When the day comes when I see good Go programs out there -- programs that can beat half the professionals -- I will know that the metaphor trick has been solved for machines, too. (Or if I see a chess machine that can start off knowing nothing but the rules and get reasonably good just by playing games.) Once machines have animal-type intelligence they will become steadily wiser with time. Old machines will be worth more than new machines, since old machines will have had far more time to sit and think about their functions and materials, their ends and means, over a variety of operating environments and adventures. On the other hand they will also develop eccentricities and foibles and fetishes and personality traits which might diminish their value. There might be no market for used machines anyway, since they will adapt to their owners and their owners, having like intelligence, will adapt to them in turn. Maybe owners will be as reluctant to sell them as you or I would be to sell the family pet. Perhaps even if they did, the machine would have a nervous breakdown. Anyway, getting machines to do the metaphor thing will be a giant step towards the singularity.
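[Ed. note: the selection-and-recombination loop jarice describes, quoted at the top of this exchange, is concrete enough to sketch. Below is a minimal, hypothetical Python version that evolves bit-strings toward an arbitrary all-ones target; real evolutionary programming breeds executable code, which this toy does not attempt - the point is only the generate/score/keep-the-top-10%/mate cycle.

    import random

    GENOME_LEN, POP, KEEP = 32, 100, 10    # keep the top 10%, per the recipe
    TARGET = [1] * GENOME_LEN              # arbitrary toy selection criterion

    def fitness(genome):                   # score: bits matching the target
        return sum(g == t for g, t in zip(genome, TARGET))

    def mate(a, b):                        # mix the parents' code, with rare mutation
        child = [random.choice(pair) for pair in zip(a, b)]
        if random.random() < 0.1:
            child[random.randrange(GENOME_LEN)] ^= 1
        return child

    pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP)]
    gen = 0
    while max(fitness(g) for g in pop) < GENOME_LEN:
        pop.sort(key=fitness, reverse=True)
        elite = pop[:KEEP]                 # "take the top 10%"
        pop = elite + [mate(random.choice(elite), random.choice(elite))
                       for _ in range(POP - KEEP)]
        gen += 1
    print("useful program evolved after", gen, "generations")

This usually converges in a few dozen generations - which also illustrates Hapgood's complaint: the toy works precisely because its selection criterion is trivial to state. --Ed.]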
--- ~ SPEED 2.0b #1339 ~ Path: igor.rutgers.edu!planchet.rutgers.edu!nanotech From: jsn@cegt201.bradley.edu (John Novak) Newsgroups: sci.nanotech Subject: Re: The Singularity Message-ID: Date: 17 Jun 94 04:03:39 GMT Sender: nanotech@planchet.rutgers.edu Organization: Bradley University Lines: 66 Approved: nanotech@aramis.rutgers.edu In fred.hapgood@channel1.com (Fred Hapgood) writes: [Most of an interesting article on animal intelligence and manufacturing the same, deleted.] >Once machines have animal-type intelligence they will become >steadily wiser with time. Old machines will be worth more than >new machines, since old machines will have had far more time to >sit and think about their functions and materials, their ends and >means, over a variety of operating environments and adventures. Does this necessarily follow? I've been curious about this (as a non-expert in such fields) for quite some time. To make a blanket statement about older machines always being worth more than newer machines seems a bit excessive, unless we make the caveat that we are restricting the discussion to one particular generation of machine. A housefly, for instance, has the instinct and 'knowledge' to try to avoid my flyswatter. It does so very very well, in fact. (Well enough to make me get a better weapon-- a can of bug spray, in fact, but that's beside the point.) It does not, however, nor I suspect will it _ever_, figure out that my taking a swipe at it is a sure way to eventually get itself injured and killed. A dog, on the other hand, is a complex enough mechanism to figure out that, if it doesn't want to get a rap on the snout, it should stay the hell away from the candy dish, and off the couch. With sufficient training, dogs can not only avoid punishments, but can actively convey information, by barking up a storm if they smell drugs, for instance. Humans, of course, blow these operating criteria to hell and back by actually coming up with genuinely new ideas, and passing them on to other human beings. (Actually, I've been told that all primates can be taught to pass information on. Is this true?) So it seems to me that there are, for biological systems, certain levels of complexity, certain learning curves, and certain fundamental limits for each particular hardware and software arrangement. A housefly is never going to learn to wait until the human isn't looking before it divebombs the pot of chili, but a human will learn this right quickly. Following this, it seems likely that, yes, within a given generation of hardware, the oldest machines-- at least the machines which have been running the longest without going 'insane' due to overtraining, or idiosyncrasies, or overconnectivity or whatever-- will tend to be the most valuable. But it further follows that, after a given technology revolution, we might be able to produce machines that, in one year's operating time, become as adept at their functions as machines which previously needed to be left running for five or ten years. Or machines which have less of a tendency to suffer mental-like diseases after long periods of learning, which could once again push the demand for physically newer machines. And, while there are other questions I could ask, I'm left with one semi-philosophical question. Is there, in fact, a fundamental limit to how quickly a machine can be taught to learn? And tangent to this, is it possible to create a machine which learns faster than a human being? -- John S.
Novak, III jsn@cegt201.bradley.edu jsn@camelot.bradley.edu Path: igor.rutgers.edu!planchet.rutgers.edu!nanotech From: svetter@maroon.tc.umn.edu (Steven C. Vetter) Newsgroups: sci.nanotech Subject: Re: The Singularity Message-ID: Date: 17 Jun 94 04:06:40 GMT Sender: nanotech@planchet.rutgers.edu Lines: 22 Approved: nanotech@aramis.rutgers.edu In message <9406102327.AA09935@planchet.rutgers.edu> writes: > I would argue that, in fact, software development is progressing > roughly as fast as computer processing power... > ... > [It is also worth noticing that in, e.g., numerical analysis, ... > algorithmic speedups on particular problems have tended to > be about equal to hardware on the average; that is, over the > same historical period that machines became 10 times as fast, > new algorithms would be discovered that gave another factor of 10, > for a total speedup of 100. > --JoSH] I agree. I once talked with a computer weather forecasting expert who had been active in the field for 30 years. He says that computer forecasting of the weather has improved by 6 orders of magnitude in 3 decades; three of those magnitudes are attributable to hardware improvements and the other three to algorithmic improvements. Steven C. Vetter Computer Solutions, Inc. 9653 Wellington Lane Woodbury MN, 55125 USA Path: igor.rutgers.edu!planchet.rutgers.edu!nanotech From: jarice@delphi.com Newsgroups: sci.nanotech Subject: Re: The Singularity Message-ID: Date: 17 Jun 94 04:22:32 GMT Sender: nanotech@planchet.rutgers.edu Organization: Delphi (info@delphi.com email, 800-695-4005 voice) Lines: 64 Approved: nanotech@aramis.rutgers.edu Fred Hapgood writes: > > 1) Neural networks (which the Ni1000 chip was designed to > > imitate), and more importantly, > > 2) Evolutionary programming, in which programs are generated randomly > > and then competed against a selection criterion. Then you take > > the top 10%, mix their programming code (mate them), generate > > new programs, and compete them, ad infinitum until a useful > > program is evolved. My original message stated that for the above reasons, software is advancing as rapidly as hardware development. Fred Hapgood responded: >I don't find these persuasive examples, but that's because I >suspect that most of the truly interesting applications (like the >robot engineers we will need to make nanotechnology really work) >will require animal-type intelligence, and so far we have made >zero progress in finding out how bats and rats do what they do. >Bats and rats and pigeons (etc.) couple some kind of very >powerful learning 'trick' to an equally impressive pattern >classification/matching 'trick', and with damn close to no >hardware at all. You can see these abilities at work in both >their sensory modalities and motor control. Blows me away. No- we've made substantial progress in determining how lower animals do what they do. Check out "Artificial Life II, Proceedings of the workshop on artificial life held Feb 1990 in Santa Fe, New Mexico". The concepts of "emergence", "complex systems", etc., are pertinent. Emergence is the idea that you take X number of components (for component substitute any structure/rule that reacts in a relatively simple way to inputs), put them together, and you get complex behavior from that interaction which could not be predicted from the rules that each sub-unit uses to interact. Examples include economies, the immune system, and the brain. Specifically, one can model the behavior of, say, a cockroach, by surprisingly few rules: Avoid light.
If you detect food, move in the direction you are facing for a random number of seconds. If the smell of food has decreased, spin around randomly, and proceed in the direction you are facing for a random number of seconds. A computer simulation of these rules acts very similarly to a real cockroach. >Once machines have animal-type intelligence they will become >steadily wiser with time. Old machines will be worth more than >new machines, since old machines will have had far more time to >sit and think about their functions and materials, their ends and >means, over a variety of operating environments and adventures. No- old machines will 'give birth' to new machines via Lamarckian evolution. Lamarck, of course, was the French naturalist who believed that acquired traits could be passed on to offspring- i.e., that if you cut the tails off of bulldogs enough times, the puppies will be born without tails. We now know that this is (mostly) not true for biological organisms. For machine intelligence, however, the machine will determine what traits, based on its own experience, its offspring should have, including traits the parent does not have. And so on. Blindingly rapid evolution. The newest will be the best, because they will have the cumulative experience of all previous machines, and will have been improved based on that experience. The robot gods are on their way, and I want to go with them. Jim Rice (JARICE@DELPHI.COM) Path: igor.rutgers.edu!planchet.rutgers.edu!nanotech From: fred.hapgood@channel1.com (Fred Hapgood) Newsgroups: sci.nanotech Subject: The Singularity Message-ID: Date: 21 Jun 94 20:39:16 GMT Sender: nanotech@planchet.rutgers.edu Organization: Channel 1(R) 617-864-0100 Info Lines: 31 Approved: nanotech@aramis.rutgers.edu JN> And, while there are other questions I could ask, I'm left with > one semi-philosophical question. Is there, in fact, a > fundamental limit to how quickly a machine can be taught to > learn? Do you mean 'taught to learn' or 'learn'? I suspect there might be a fundamental limit on how fast _a_ machine can learn non-trivial things. Trivial learning is learning you can do with models, like a chess machine 'learning' about chess by playing with itself. Non-trivial learning requires interacting with new environments, dealing with surprises, coming up with theories that fail six times before they start to click. The more non-trivial the item the more time required for the particular integration. You can speed up hardware but you can't speed up the universe, and it is the universe that controls the rate of learning, by deciding how often it wants to slap you upside the head. On the other hand you can increase the number of intercommunicating entities interacting with the universe to very large numbers. Right now only N people on the planet are seriously involved with non-trivial learning. With good AI we could increase that number to N^N, which would goose the process nicely. > And tangent to this, is it possible to create a machine which > learns faster than a human being? Which human being? Me or Feynman?
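[Ed. note: jarice's cockroach rules two posts back are simple enough to run. A minimal sketch, assuming a flat 2-D world with a single food source and an invented smell gradient; the avoid-light rule is omitted because this toy world has no light.

    import math, random

    FOOD = (50.0, 50.0)

    def smell(x, y):                       # invented gradient: stronger nearer the food
        return 1.0 / (1.0 + math.hypot(x - FOOD[0], y - FOOD[1]))

    x, y = 0.0, 0.0
    heading = random.uniform(0, 2 * math.pi)
    last = smell(x, y)
    for step in range(1, 10001):
        if math.hypot(x - FOOD[0], y - FOOD[1]) < 1.0:
            print("reached the food in", step, "steps")
            break
        if smell(x, y) < last:             # smell decreased: spin around randomly
            heading = random.uniform(0, 2 * math.pi)
        last = smell(x, y)
        for _ in range(random.randint(1, 5)):   # proceed as facing, a random while
            x += math.cos(heading)
            y += math.sin(heading)
    else:
        print("wandered 10000 steps without finding the food")

The 'organism' usually blunders its way to the food by run-and-tumble, much the way jarice describes the simulated roach behaving. --Ed.]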
--- ~ SPEED 2.0b #1339 ~ Path: igor.rutgers.edu!planchet.rutgers.edu!nanotech From: fred.hapgood@channel1.com (Fred Hapgood) Newsgroups: sci.nanotech Subject: The Singularity Message-ID: Date: 21 Jun 94 20:39:36 GMT Sender: nanotech@planchet.rutgers.edu Organization: Channel 1(R) 617-864-0100 Info Lines: 48 Approved: nanotech@aramis.rutgers.edu > >I don't find these persuasive examples, but that's because I >suspect that most of the truly interesting applications (like the >robot engineers we will need to make nanotechnology really work) >will require animal-type intelligence, and so far we have made >zero progress in finding out how bats and rats do what they do. j> > No- we've made substantial progress in determining how lower > animals do what they do. Check out "Artificial Life II, > Proceedings of the workshop on artificial life held Feb 1990 > in Santa Fe, New Mexico". I agree we can build machines to the point where they can emulate bugs -- Shannon was doing as much with relays in the 50s -- but bats and rats are a different order of creature. You can train a pigeon on a leaf of an unknown species of tree (a tree from a strange climate) and it will recognize another specimen of that leaf even when the new specimen a) is in a different orientation along all three axes, b) has different dimensions (is pictured at a different distance), and c) is older (darker, dirtier, more tattered). With latency rates of a fraction of a second! That, my friend, is learning-by-analogy with a vengeance and nothing we have done in thirty years of AI comes close to comparing with it. The fact that a pigeon can do this with a brain the size of a raisin, much of which is dedicated to other tasks, running at a clock rate not much faster than I can wave my finger, is egregiously humiliating but undeniable. > The concepts of "emergence", "complex systems", etc., are pertinent. They sure are. They are examples of the lengths to which people will go to shield themselves from their own intellectual bankruptcy. If you want to build something that produces a certain output you have to have a theory about how to generate that output. It's hard, but that's life. You can run a foreign policy by waving your hands in the air and venting pieties about "complex systems" but you do not do engineering that way. It's not obvious that learning-by-analogy requires that much "complexity", defined however you like, anyway. We don't even know *that*. --- ~ SPEED 2.0b #1339 ~ Path: igor.rutgers.edu!planchet.rutgers.edu!nanotech From: chris@efi.com (Chris Phoenix) Newsgroups: sci.nanotech Subject: Re: The Singularity Message-ID: Date: 22 Jun 94 22:35:07 GMT Sender: nanotech@planchet.rutgers.edu Organization: Electronics For Imaging Lines: 21 Approved: nanotech@aramis.rutgers.edu In article fred.hapgood@channel1.com (Fred Hapgood) writes: >Do you mean 'taught to learn' or 'learn'? I suspect there might >be a fundamental limit on how fast _a_ machine can learn >non-trivial things. Trivial learning is learning you can do with >models, like a chess machine 'learning' about chess by playing >with itself. Non-trivial learning requires interacting with new >environments, dealing with surprises, coming up with theories >that fail six times before they start to click. Unless I misunderstood something, much neural net work and all cellular automata and genetic algorithm work is "learning you can do with models," as is a lot of physical science nowadays.
With chaos and emergent behaviors, models can be very complex and interesting, and in some problem spaces (such as AI, which is in this case both learner and subject) learning from models can be extremely useful. -- Chris Phoenix, chris@efi.com, 415-286-8581 "Yet money is a faithful _mirror_ -- for the more he works, the more he is paid; the better he works, the better he is paid ... except that more and better, _in_ the mirror, flatten to the same thing." -- Samuel R. Delany Path: igor.rutgers.edu!planchet.rutgers.edu!nanotech From: fred.hapgood@channel1.com (Fred Hapgood) Newsgroups: sci.nanotech Subject: The Singularity Message-ID: Date: 24 Jun 94 06:53:07 GMT Sender: nanotech@planchet.rutgers.edu Organization: Channel 1(R) 617-864-0100 Info Lines: 56 Approved: nanotech@aramis.rutgers.edu c> Unless I misunderstood something, much neural net work and all > cellular automata and genetic algorithm work is "learning you can do > with models" ... One of the more apt definitions of intelligence, artificial or not, I've heard, is: that ability required to understand and manipulate the real, given, world. The real world, where light energies run through 10 orders of magnitude and every hour you face a different mixture of environmental water and dust and wind, and objects come in a hundred different textures, and sounds are always either muffled or echoed or both, and surfaces are slippery and dirty and crack and tear, and physical threats, sometimes animated by a high level of intelligence themselves, lurk everywhere. A world in motion, where the observer is also in motion, and everyone is interacting, often at high speeds. The world of bats and rats; not, I do not need to point out, cellular automata. This distinction should count for people interested in nanotechnology, since our robots are going to have to be out there, down there, in the real world, and completely on their own when it comes to dealing with the incidents and accidents of the nanoscale. They are going to have to figure out how to get their job done regardless of what surprises they find when they go to work in the morning. Therefore, it behooves nanotechies especially to keep a sharp eye on what is snake oil and what is not. From 65-75 AI people hoped to build a general purpose, highly-robust, context-independent, problem solver. Then they gave up on that, and from 75-85 tried to build highly specialized, context-dependent, aggregations of kludges that would get by, though without any generality. After 85 many people gave up on those, and threw themselves on the mercy of non-concepts like neural nets and emergent behavior and genetic algorithms, which are identical to hoping that if we give the computer the right food it will save us from the headache of figuring out what we're trying to do. I don't believe anybody is going to pull these chestnuts out of the fire for us. We have to go back to the objectives of the first generation and sit down and figure out what the trick is. It just can't be that hard. You can put a Norway rat in a ship or a field or a city street or a piece of functioning machinery and it will map it out and deal with it. It will find out where the food and water is and figure out what needs to be done to get the necessities, even if the behavior required is pretty non-standard. That's the skill we need for our machines -- learning by analogy, generalizing, figuring out how to describe the world in a way that both captures what is relevantly new and resonates with past experience.
I refuse to believe the trick is impossible to decode. Anyway, we might as well get to it, 'cause nanotechnology is going nowhere until we do. --- ~ SPEED 2.0b #1339 ~ Path: igor.rutgers.edu!planchet.rutgers.edu!nanotech From: eder@hsvaic.hv.boeing.com (Dani Eder) Newsgroups: sci.nanotech Subject: Re: The Singularity Message-ID: Date: 24 Jun 94 07:00:07 GMT Sender: nanotech@planchet.rutgers.edu Organization: Boeing AI Center, Huntsville, AL Lines: 39 Approved: nanotech@aramis.rutgers.edu fred.hapgood@channel1.com (Fred Hapgood) writes: > > That, my friend, is learning-by-analogy with a vengeance and > nothing we have done in thirty years of AI comes close to > comparing with it. The fact that a pigeon can do this with a > brain the size of a raisin, much of which is dedicated to other > tasks, running at a clock rate not much faster than I can wave > my finger, is egregiously humiliating but undeniable. > Ah, but since a brain with 0.5 cc volume (such as a pigeon's) should have about 30 million neurons, and guessing at 100 synapses per neuron, we have a 3 billion switch machine running at a peak rate of 1 kHz (the max firing rate of neurons). Allowing for an average firing rate of 100 Hz, you still have a gross bit rate of 300 billion/sec. (This assumes that a synapse firing is all or nothing, hence a binary activity). A 64-bit processor running at 100 MHz has a gross bit rate of 6.4 billion, so I would not expect pigeon-level capability for a few more hardware generations. If we hypothesize that each synapse represents a connection with one of 10,000 nearby neurons, then each synapse represents 13 bits of address data. Allowing nothing for the state data or the transfer function represented by the neuron summing its inputs, you would still need 6 gigabytes of lookup data to equal the mapping complexity of a pigeon. Again, this is a few generations of RAM growth away. So I don't see that we should be humiliated. I don't think that we have the hardware you would expect to require to match a pigeon. That we have been able to get insect-level performance is not surprising, since we DO have computers of that capacity. Dani Eder -- Dani Eder/Rt 1 Box 188-2/Athens AL 35611/(205)232-7467 Path: igor.rutgers.edu!planchet.rutgers.edu!nanotech From: im93gnt@brunel.ac.uk Newsgroups: sci.nanotech Subject: Re: The Singularity Message-ID: Date: 27 Jun 94 04:53:38 GMT Sender: nanotech@planchet.rutgers.edu Lines: 40 Approved: nanotech@aramis.rutgers.edu ---much of message deleted... . . From 65-75 AI people hoped to build a general purpose, . highly-robust, context-independent, problem solver. Then they . gave up on that, and from 75-85 tried to build highly . specialized, context-dependent, aggregations of kludges that . would get by, though without any generality. After 85 many . people gave up on those, and threw themselves on the mercy of . non-concepts like neural nets and emergent behavior and genetic . algorithms, which are identical to hoping that if we give the . computer the right food it will save us from the headache of . figuring out what we're trying to do. . . I don't believe anybody is going to pull these chestnuts out of . the fire for us. We have to go back to the objectives of the . first generation and sit down and figure out what the trick is. . It just can't be that hard. You can put a Norway rat in a ship . or a field or a city street or a piece of functioning machinery . and it will map it out and deal with it. It will find out where .
the food and water is and figure out what needs to be done to . get the necessities, even if the behavior required is pretty . non-standard. That's the skill we need for our machines -- . learning by analogy, generalizing, figuring out how to describe . the world in a way that both captures what is relevantly new and . resonates with past experience. I refuse to believe the trick . is impossible to decode. Anyway, we might as well get to it, . 'cause nanotechnology is going nowhere until we do. --- . ~ SPEED 2.0b #1339 ~ ........... Yes, but a Rat, or a Human, or any mammal, and most of the vertebrates, are parallel-wired. In fact they are very complexly wired, with each neuron in the human brain being connected to not just the one in front, and the one on each side, but to many neurons, in all directions. This hypercoupling (for want of a better word) is almost certainly what makes the difference! When computers are wired up this way, as they probably will be in the next few (21?) years, THEN we will notice a difference. Meanwhile, we/you are comparing cheese with cows. Greg.Tingey. Path: igor.rutgers.edu!planchet.rutgers.edu!nanotech From: ttf@dsg59.nad.ford.com (Tihamer Toth-Fejel) Newsgroups: sci.nanotech Subject: AI and nanotech (was The Singularity) Message-ID: Date: 27 Jun 94 04:55:48 GMT Sender: nanotech@planchet.rutgers.edu Lines: 46 Approved: nanotech@aramis.rutgers.edu intelligence: that ability required to understand and manipulate the real, given, world. But how does one measure an entity's "understanding"? The best objective measure I've seen is how well it predicts the future. from 75-85 [the AI researchers] tried to build highly specialized, context-dependent, aggregations of kludges that would get by, though without any generality. It seems that you are talking about rule-based systems, and in specialized applications, people are making money with them. You're right though, it did not fulfill our dreams. non-concepts like neural nets and emergent behavior and genetic algorithms. Again, for some specialized applications, these techniques *are* useful. OTOH, you are right, we still haven't figured out general intelligence. In fact, because neural nets aren't physical symbol systems, they are extremely difficult to debug, and I despair of us ever getting anywhere with 'em. We have to ... figure out what the trick is. It just can't be that hard. Obviously it *is* very difficult, especially when you're working with a computer with as much processing power as a grasshopper. I refuse to believe the trick is impossible to decode. You listed some good subcomponents of the trick. Given that we've only been working on the problem for thirty or forty years, I'd say we're doing pretty well. Patience. We'll get there. Anyway, we might as well get to it, 'cause nanotechnology is going nowhere until we do. Wrong. Nanotech does not need AI. Don't be embarrassed about making that error -- Drexler made the same one in EoC (he has since recanted). Protein recognition does not require intelligence -- it doesn't even need pattern recognition -- all you need is lots of templates and Brownian motion takes care of the rest. And if you have a controlled input stream, you don't even have to depend on Brownian motion. Tihamer Toth-Fejel Office: 313 594-2165 845-7918, 3646 (Secretary) Fax: 313 594-7837 Home: 313 662-4741 Concept 2010 Design Studio, Ford, Dearborn, Michigan.
***************************************************************** Path: igor.rutgers.edu!planchet.rutgers.edu!nanotech From: robj@netcom.com (Rob Jellinghaus) Newsgroups: sci.nanotech Subject: Re: The Singularity Message-ID: Date: 27 Jun 94 04:56:47 GMT Sender: nanotech@planchet.rutgers.edu Organization: Netcom Online Communications Services (408-241-9760 login: guest) Lines: 62 Approved: nanotech@aramis.rutgers.edu In article fred.hapgood@channel1.com (Fred Hapgood) writes: > One of the more apt definitions of intelligence, artificial or > not, I've heard, is: that ability required to understand and > manipulate the real, given, world. ... > This distinction should count for people interested in > nanotechnology, since our robots are going to have to be out > there, down there, in the real world, and completely on their > own when it comes to dealing with the incidents and accidents of > the nanoscale. Are you talking about molecular manufacturing? If so, I think you're very wrong. Assemblers for molecular manufacturing will not have to deal with major randomness, any more than lithography devices for chipmaking have to today. Manufacturing is a controlled process. Certainly the lack of progress in AI has made very little difference to the ongoing progress of the computer industry, doubling price/performance every year. If what you're talking about is autonomous nanotech-based robots dealing with the real world, then your argument has some force, but is no longer relevant to the near-term problems of nanotech-based industry. > After 85 many [AI] people gave up on [trying to build planned > intelligent systems], and threw themselves on the mercy of > non-concepts like neural nets and emergent behavior and genetic > algorithms, which are identical to hoping that if we give the > computer the right food it will save us from the headache of > figuring out what we're trying to do. This seems to me to bespeak a fundamental misunderstanding of how real-world intelligent systems work. For crying out loud, Fred, _you and I_ are intelligent _because_ of natural laws of emergent behavior! All the "intelligent" creatures you mention evolved through precisely the "dumb" mechanisms of variation and selection that you're criticizing here. Humans are simply not capable of creating systems beyond a certain degree of complexity; we _need_ to reach into nature's evolutionary bag of tricks to move forwards. > I don't believe anybody is going to pull these chestnuts out of > the fire for us. We have to go back to the objectives of the > first generation and sit down and figure out what the trick is. > It just can't be that hard. You can put a Norway rat in a ship > or a field or a city street or a piece of functioning machinery > and it will map it out and deal with it. You know how the Norway rat does it? Incredible amounts of processing power--a rat's brain is a parallel processor that may well be more powerful than any computer humans have _ever_ built--combined with millions of years of programming via evolution. We _know_ the trick. The trick _is_ the "non-concepts" you're criticizing current researchers for studying. I strongly suggest you read Steven Levy's book _Artificial Life_ or Kevin Kelly's new book _Out of Control: The Rise of Neo-biological Civilization_ for more information on the "non-concepts" you are so erroneously deriding. In fact, I recommend the latter book to anyone interested in "the Singularity" (by whatever definition).
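[Ed. note: for readers keeping score in the pigeon-numbers dispute, the arithmetic traded between Eder and Hapgood is easy to lay side by side. A minimal sketch; all the figures are the thread's estimates, not measurements.

    import math

    neurons = 30e6                         # Eder's pigeon-brain estimate

    for label, synapses, rate_hz in [("Eder (100 syn, 100 Hz)  ", 100, 100),
                                     ("Hapgood (10 syn, 20 Hz) ", 10, 20)]:
        switches = neurons * synapses
        print("%s: %.1e switches, %.1e bits/sec"
              % (label, switches, switches * rate_hz))

    print("64-bit CPU at 100 MHz: %.1e bits/sec" % (64 * 100e6))
    print("address bits for 1 synapse among 10,000 neighbors: %.1f" % math.log2(1e4))
    print("lookup table at 13 bits/synapse: %.1e bytes" % (30e6 * 100 * 13 / 8))

With Eder's figures the pigeon outruns the 1994 workstation by a factor of about fifty and the lookup table lands near 5 gigabytes; with Hapgood's lower figures the two machines come out roughly even - which is why the same comparison can be read as supporting either position. --Ed.]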
-- Rob Jellinghaus robj@netcom.com uunet!netcom!robj
Path: igor.rutgers.edu!planchet.rutgers.edu!nanotech From: fred.hapgood@channel1.com (Fred Hapgood) Newsgroups: sci.nanotech Subject: Singularity Message-ID: Date: 27 Jun 94 05:03:21 GMT Sender: nanotech@planchet.rutgers.edu Organization: Channel 1(R) 617-864-0100 Info Lines: 68 Approved: nanotech@aramis.rutgers.edu e> Ah, but since a brain with 0.5 cc volume (such as a pigeon's) should have about 30 million neurons, and guessing at 100 synapses per neuron, we have a 3 billion switch machine running at a peak rate of 1 kHz (the max firing rate of neurons). Allowing for an average firing rate of 100 Hz... In the first place, I think these numbers are way off. While my references are out on loan, my recollection is that 10 synapses and 20 hertz would be a lot closer, as averages. Granted there are specialized neurons that do better. But it doesn't matter. Engaging with the issue on this level is encouraging the hardware fetishism that I found so depressing in the first place. For decades people thought the way to solve the problem of autonomous machine navigation was with vision. Millions of dollars worth of hardware and thousands of man-years of effort were flushed down that rathole. Then a mechanical engineer named Ed MacLeod, who knew nothing about this history, was handed the same problem on a project of his and solved it by hanging a few transponders in the environment and sticking a laser on the robot. The laser sweeps the environment at a given speed and a computer records when it hits the transponders and calculates back to get the exact position of the robot. And I mean exact: positions within millimeters; updates in hundredths of a second; vectors to tenths of a degree. System works in three dimensions with many interacting vehicles running at speeds in excess of 50 mph. Problem not just solved; problem crushed right into problem juice, and for about the same hardware that you have on your desk right now. What was the secret here? Not hardware, anyway. About five years ago a tropical biologist named Tom Ray decided it would be neat to have a computer program that would evolve by natural selection. Not evolve in the fake way most evolution modelling programs evolve, by selecting from a box of pretested, debugged mutations, but evolve the way natural organisms do, with every instruction modifiable in any direction. He did not of course know that people had been trying to do this for years, but failing, because (these people said) computer programs are just too brittle. I know that Connection Machines had been pressed into service here, though I do not know why. Ray didn't know this history, but he knows lots and lots of biology, and whenever a problem came up he reasoned-by-analogy: "What is this like in nature," he asked himself, "and how did nature solve that problem?" That perspective was so powerful that in a few months he was running huge ecologies of co-evolving digital organisms, in which every new adaptation was a surprise. And he was doing this on his Toshiba laptop. Now you tell me a case in which somebody had no idea what he was doing, but solved his problem anyway by piling on the hardware. It is not true that, if you don't know how to make a machine do what you want, the thing to do is make it bigger. That is Star Wars thinking. Generally the way to solve problems is to formulate them in the simplest possible terms, and when they are solved, the principles behind their solution can also be demonstrated in simple models.
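[Ed. note: the transponder fix in the MacLeod story is itself a simple model of the kind just described: with bearings to two known beacons, position follows from intersecting the bearing rays - elementary geometry, no vision required. A minimal 2-D Python sketch, assuming idealized noise-free absolute bearings and invented beacon coordinates; the real system also recovers heading and works in three dimensions, which this does not attempt.

    import math

    def locate(b1, theta1, b2, theta2):
        # The robot lies where the two measured bearing rays, traced back
        # from the beacons, intersect:  b1 - t1*d1 == b2 - t2*d2.
        # (Degenerate if the two rays are parallel.)
        d1 = (math.cos(theta1), math.sin(theta1))
        d2 = (math.cos(theta2), math.sin(theta2))
        det = d1[1] * d2[0] - d1[0] * d2[1]        # 2x2 system, Cramer's rule
        rx, ry = b2[0] - b1[0], b2[1] - b1[1]
        t1 = (rx * d2[1] - ry * d2[0]) / det
        return (b1[0] - t1 * d1[0], b1[1] - t1 * d1[1])

    # Robot actually at (3, 4); beacons at (10, 4) and (3, 10) are seen at
    # bearings 0 and pi/2.  The fix recovers the position exactly.
    print(locate((10.0, 4.0), 0.0, (3.0, 10.0), math.pi / 2))   # -> (3.0, 4.0)

--Ed.]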
This suggests to me that when learning-by-analogy is cracked, it is likely to be by a Russian working in St. Petersburg on a 386 with 2 megs of RAM. He won't be able to make a pigeon out of it; he won't be able to do the leaf trick. But I bet it will play better Go than the biggest damn Connection Machine. --- ~ SPEED 2.0b #1339 ~ Path: igor.rutgers.edu!planchet.rutgers.edu!nanotech From: mentat@telerama.lm.com (Godshatter) Newsgroups: sci.nanotech Subject: Re: The Singularity Message-ID: Date: 28 Jun 94 20:01:52 GMT Sender: nanotech@planchet.rutgers.edu Organization: Telerama Public Access Internet, Pittsburgh, PA Lines: 54 Approved: nanotech@aramis.rutgers.edu Fred Hapgood (fred.hapgood@channel1.com) wrote: > One of the more apt definitions of intelligence, artificial or > not, I've heard, is: that ability required to understand and > manipulate the real, given, world. Nanoscale robots don't need individual intelligence any more than ants do. If nonintelligent cells can create plants and animals much more complex than anything we will build with assemblers - at least to begin with - why do you assume it will take intelligence within each assembler to build things? A network of a million or a billion assemblers could be quite intelligent without the assemblers individually having as much intelligence as a frog. Besides, who said they would be on their own? > From 65-75 AI people hoped to build a general purpose, > highly-robust, context-independent, problem solver. Then they > gave up on that, and from 75-85 tried to build highly > specialized, context-dependent, aggregations of kludges that > would get by, though without any generality. After 85 many > people gave up on those, and threw themselves on the mercy of > non-concepts like neural nets and emergent behavior and genetic > algorithms, ... It seems like a reasonable approach seeing as we're here, and we are creating systems with collectively greater intelligence as well as sensory capabilities no bat or rat will ever attain on its own. The "nonconcepts" you refer to are the ones responsible for our being here. >I don't believe anybody is going to pull these chestnuts out of > the fire for us. We have to go back to the objectives of the > first generation and sit down and figure out what the trick is. What would you expect after a few billion years? Rats and other animals may do certain things like find food and a mate very well, but while the range of ability grows as you get to higher mammals, at the level of bats and rats that you're talking about the list of these activities is very short and they can learn very little that lies outside this narrow range. That's why when teaching them to run mazes, they have to be rewarded with finding food or they would never learn. Besides, individual assemblers don't need to map the whole of whatever it is they are constructing any more than your neurons have a map of your brain. Anyway, ants and other social insects collectively build things no rat could conceive of with far less individual intelligence. The concept of emergence - if a bit fuzzy - does not seem to me an intellectually bankrupt concept since it seems the best way so far to try to account for intelligence arising out of an agglomeration of nonintelligent parts. You seem to want to tread old ground that was abandoned for good reason - the approaches of the first generation didn't work and the newer ones - while probably not a complete theory - at least work better.
You seem to want to find a set of equations or a system of Aristotelian logic to describe intelligence; that's primarily what this so-called first generation you refer to was trying to do. I think you'll be looking a long time. Path: igor.rutgers.edu!planchet.rutgers.edu!nanotech From: fred.hapgood@channel1.com (Fred Hapgood) Newsgroups: sci.nanotech Subject: The Singularity Message-ID: Date: 28 Jun 94 20:13:44 GMT Sender: nanotech@planchet.rutgers.edu Organization: Channel 1(R) 617-864-0100 Info Lines: 72 Approved: nanotech@aramis.rutgers.edu r> You know how the Norway rat does it? Incredible amounts of processing power ... This is just a theory. You have no way of knowing this. The competing theory is that the Norway rat "does it" by connecting its neurons up in the right way. That's my theory. Of course I have no way of knowing that either. Maybe you're right, and as soon as we build a big enough machine it will start to work all by itself. Historically it hasn't usually worked that way, plus waiting around for Intel to solve our problems seems intellectually debilitating. But to each his own. r> I strongly suggest you read Steven Levy's book _Artificial Life_ > or Kevin Kelly's new book _Out of Control: The Rise of Neo-biological > Civilization_ for more information on the "non-concepts" you are so > erroneously deriding. I strongly suggest you reread them. Emergent behaviors have their place, but the development of intellectually creative machines comes nowhere near that neighborhood. Emergence works as an engineering strategy when (among many other constraints) the specifications are flexible to the point of being absent. Mark Tilden is pursuing a sort of emergent robotics in which he builds lots of cheap machines with very long operating lives, sits around, and watches them interact. When he sees something interesting, he builds another machine in which that behavior is optimized, adds it to the mix, and watches some more. The thing to note here is that Tilden is enormously imaginative in thinking up interesting and offbeat 'applications' for the behaviors that he sees. One of his robots is an 'automatic bedwetter'. Another is a 'dustbunny cowboy'. These applications were dreamed up after the fact. You do not play around with emergent behaviors if you have a specific application in mind. Tom Ray is building an environment optimized to promote the emergence of very complex digital structures. I "strongly suggest" you read the paper he wrote describing his project, which could not be clearer about the proper role of emergent design in engineering. He makes the same point I have here, only more graphically. Learning-by-analogy is a specialized behavior, and building it is going to take focussing on what makes that kind of learning different. It is not of primary interest from the point of view of this objective that rats can learn to run mazes; nor that they can learn to learn to run mazes (so that they take less time to solve each new maze); nor that they can learn to deal with widely differing environments, of which mazes are just special cases. What matters from the pov of l-b-a is that all these modes of learning run off the same instruction set; they are iterations of the same trick. It is this trick we need to figure out. There is a representational language out there somewhere that facilitates the classification of patterns across radically different contexts, while allowing accurate specification of the differences between these patterns. 
(Learning by analogy fails if you start identifying the analogy with the reference; the symbol with the ground; the map with the model.) Expecting emergent behaviors to come up with this language is like waiting for them to spit out BASIC. r> Humans are simply not capable of creating systems beyond a certain degree of complexity. Uh-huh. --- ~ SPEED 2.0b #1339 ~ Path: igor.rutgers.edu!planchet.rutgers.edu!nanotech From: minsky@media.mit.edu (Marvin Minsky) Newsgroups: sci.nanotech Subject: Re: The Singularity Message-ID: Date: 30 Jun 94 18:51:16 GMT Sender: nanotech@planchet.rutgers.edu Organization: MIT Media Laboratory Lines: 30 Approved: nanotech@aramis.rutgers.edu In article mentat@telerama.lm.com (Godshatter) writes: >Besides, individual assemblers don't need to map the whole of >whatever it is they are constructing any more than your neurons have a map of >your brain. Anyway, ants and other social insects collectively build things >no rat could conceive of with far less individual intelligence. >The concept of emergence - if a bit fuzzy - does not seem to me an >intellectually bankrupt concept since it seems the best way so far to try >to account for intelligence arising out of an agglomeration of >nonintelligent parts. You seem to want to tread old ground that was >abandoned for good reason - the approaches of the first generation didn't >work and the newer ones - while probably not a complete theory - at least >work better. On the contrary, one could argue that the newer ones work well on different problems, especially on some pattern recognition ones, but do not work as well on problem solving. As for the collective intelligence of those social insects, it is easy to be carried away by this. Ed Wilson points out that a solitary bumblebee shows almost all the behavior that "emerges" from the hives of social bees -- I think he said that he observed in bumblebees on the order of 80 percent of the elements of behavior that he cataloged in honeybees -- during the course of one individual's lifetime. In other words, at one time it behaves like a worker, at other times a soldier (but not so suicidally), and at other times like a drone or a queen. (If anyone can supply the original quote, I'd appreciate it. I presume that some of the food-instruction dance would be among the missing 20 percent.) Path: igor.rutgers.edu!planchet.rutgers.edu!nanotech From: robj@netcom.com (Rob Jellinghaus) Newsgroups: sci.nanotech Subject: Re: The Singularity Message-ID: Date: 30 Jun 94 18:51:50 GMT Sender: nanotech@planchet.rutgers.edu Organization: Netcom Online Communications Services (408-241-9760 login: guest) Lines: 87 Approved: nanotech@aramis.rutgers.edu First off, I apologize for my arrogance in the post to which you're responding. "I strongly suggest" was unnecessary and rude. In article fred.hapgood@channel1.com (Fred Hapgood) writes: >r> You know how the Norway rat does it? Incredible amounts of > processing power ... > > This is just a theory. You have no way of knowing this. The > competing theory is that the Norway rat "does it" by connecting > its neurons up in the right way. That's my theory. You omitted my mention of the millions of years of evolutionary programming, which led to the "right way" in the first place. But you are very correct: we might be able to leapfrog the algorithms nature came up with. Then again, we might find it beyond our ability, and we might need to fall back on artificial evolution for (at least) portions of the process.
> waiting around for Intel to solve our problems seems > intellectually debilitating. But to each his own. Well, I'm not even utterly clear what problems it is we're talking about. The problem I'm most concerned with is getting molecular manufacturing online, and there is plenty to be done there that has nothing to do with emergent behavior. > Emergent behaviors have > their place, but the development of intellectually creative > machines comes nowhere near that neighborhood. This is a pretty aggressive claim. The notion of "engineering creativity" _without_ emergent behavior is somewhat problematic. Kelly goes into this in _Out of Control_ (the title is all about this issue). > You do not play > around with emergent behaviors if you have a specific > application in mind. Um, are you saying that all the people doing neural net technology for many useful applications (handwriting, vision, etc.) aren't playing with emergent behavior? > Learning-by-analogy is a specialized behavior, and building it > is going to take focussing on what makes that kind of learning > different. It is not of primary interest from the point of view > of this objective that rats can learn to run mazes; nor that > they can learn to learn to run mazes (so that they take less > time to solve each new maze); nor that they can learn to deal > with widely differing environments, of which mazes are just > special cases. What matters from the pov of l-b-a is that all > these modes of learning run off the same instruction set; they > are iterations of the same trick. > It is this trick we need to figure out. An interesting way to describe it. The question then becomes, what kind of processing substrate will the "trick" require? Rod Brooks, for one, posits that the _only_ way to get this kind of high-level learning behavior at an acceptable speed is by creating a subsumption architecture with layered behaviors and a lot of parallel activity going on. This is again an emergent-behavior-based way of attacking the l-b-a problem. Is it your belief that there is a simple way of implementing this "trick" that doesn't require such an architecture? > Expecting > emergent behaviors to come up with this language is like waiting > for them to spit out BASIC. Maybe. But it's not like we must either let it run hands-off for millennia, or design the entire thing ourselves. It'll be interesting to see how amenable this language is to conventional (sequential) thinking. >r> Humans are simply not capable of creating systems beyond a > certain degree of complexity. > >Uh-huh. OK, point well taken. Augmented by computers, compilers, and software, humans may well be able to create arbitrarily complex systems. I do think, though, that Kelly in _Out of Control_ does argue convincingly that beyond a certain degree of complexity, humans do lose the ability to completely understand (or control) everything that's happening in the system. -- Rob Jellinghaus robj@netcom.com uunet!netcom!robj Path: igor.rutgers.edu!planchet.rutgers.edu!nanotech From: fred.hapgood@channel1.com (Fred Hapgood) Newsgroups: sci.nanotech Subject: Singularity Message-ID: Date: 30 Jun 94 18:53:36 GMT Sender: nanotech@planchet.rutgers.edu Organization: Channel 1(R) 617-864-0100 Info Lines: 56 Approved: nanotech@aramis.rutgers.edu m> Nanoscale robots don't need individual intelligence any more than ants do.
We want to use nanotech to build cities, rockets, cell repair machines. All in all, a more exacting set of applications than anthills. > If nonintelligent cells can create plants and animals much more complex than anything we will build with assemblers - at least to begin with - why do you assume it will take intelligence within each assembler to build things? A network of a million or a billion assemblers could be quite intelligent without the assemblers individually having as much intelligence as a frog. The closer the processor running the AI is to the sensors and actuators, the faster it can work, though obviously there are tradeoffs. The analogy with cells goes nowhere for me. Biology has had a long time to squeeze a tremendous number of tweaks out of a very narrow vocabulary of operators. We are going to want to work with many more kinds of tools over a much wider range of applications over a much shorter period of time. Given those constraints, the density of high-level decision making, both in terms of figuring out the initial design and in picking up after Murphy, is going to be very high. That's why I think real AI is essential for nanotech. It's very hard for me to imagine nanotech happening if humans have to make all these decisions one by one themselves. m> It seems like a reasonable approach seeing as we're here, and we are creating systems with collectively greater intelligence as well as sensory capabilities no bat or rat will ever attain on its own. The "nonconcepts" you refer to are the ones responsible for our being here. I have no reason to believe that emergence is going to end up at the same place twice, not that the time it would take to find out is acceptable anyway. Even if we rewound the history of the earth to its starting point and let it go again, there is no particular reason to believe that high-level intelligence would evolve a second time, and at least there you would be working with the physical world, not a computer simulation. m> Anyway, ants and other social insects collectively build > things no rat could conceive of with far less individual > intelligence. See top of message. Path: igor.rutgers.edu!planchet.rutgers.edu!nanotech From: hagerman@ece.cmu.edu (John Hagerman) Newsgroups: sci.nanotech Subject: Re: The Singularity Message-ID: Date: 4 Jul 94 16:29:38 GMT Sender: nanotech@planchet.rutgers.edu Organization: Carnegie Mellon University Lines: 22 Approved: nanotech@aramis.rutgers.edu robj@netcom.com (Rob Jellinghaus) writes: > > Augmented by computers, compilers, and software, humans may well be > able to create arbitrarily complex systems. I do think, though, > that Kelly in _Out of Control_ does argue convincingly that beyond a > certain degree of complexity, humans do lose the ability to > completely understand (or control) everything that's happening in > the system. I haven't read Kelly so I don't know just what you mean by "lose the ability to completely understand or control." But it seems to me that losing understanding and control is the goal, from a certain point of view. We are impressed by the rat because we are unable to predict how it will behave in a new situation, beyond knowing that it will try to behave in its own best interest. Similarly, an artificial system whose goal is flexibility must be able to handle new situations that the designer never considered.
A designer must be willing to set his or her system free in the world trusting that the general instructions he or she gave the system are sufficient for the system to be able to deal with any situation it may encounter. :-) - John Path: igor.rutgers.edu!planchet.rutgers.edu!nanotech From: fred.hapgood@channel1.com (Fred Hapgood) Newsgroups: sci.nanotech Subject: The Singularity Message-ID: Date: 4 Jul 94 16:33:15 GMT Sender: nanotech@planchet.rutgers.edu Organization: Channel 1(R) 617-864-0100 Info Lines: 80 Approved: nanotech@aramis.rutgers.edu r> First off, I apologize for my arrogance in the post to which > you're responding. "I strongly suggest" was unnecessary and rude. No problem. I got a little overenthusiastic myself. Especially by the high standards of this group. r> Well, I'm not even utterly clear what problems it is we're talking about. Perhaps they are in flux. The issues I raised at the start were the definition of the nature of creative reasoning and the role of automatic variants thereof in the development of nanotech. f> Emergent behaviors have their place, but the development of > intellectually creative machines comes nowhere near that > neighborhood. r> This is a pretty aggressive claim. The notion of "engineering > creativity" _without_ emergent behavior is somewhat problematic. > Kelly goes into this in _Out of Control_ (the title is all about > this issue). Perhaps we are thinking of different things by 'intellectual creativity'. I mean a computer that acts like a good engineering consultant does today: helps you refine your specifications, comes up with ingenious solutions, designs and builds the product, and bills you extra for special changes. f> You do not play around with emergent behaviors if you have a specific application in mind. r> Um, are you saying that all the people doing neural net technology for many useful applications (handwriting, vision, etc.) aren't playing with emergent behavior? Ooh, you tempt me. But I will take the high road. The term 'emergent behavior' needs definition. Bear in mind that I place learning right at the center of creative thought -- creative thought is just a special form of learning (i.e., learning by analogy), and l-b-a is very likely to be 'creative', that is, surprising to the observer. (Because l-b-a recycles or reuses the experience of that particular reasoning entity, forcing a rapid development of individuality.) So if you want to define 'emergent behaviors' to mean outputs not specified in the code, then I'm a fan too. (At the extreme you could argue that a *calculator* uses emergence.) I'm using the term to mean the hope that the trick of learning by analogy can itself be found by some kind of emergent lashup. f> What matters from the pov of l-b-a is that all these modes of learning run off the same instruction set; they are iterations of the same trick. It is this trick we need to figure out. r> Rod Brooks, for one, posits that the _only_ way to get this kind of high-level learning behavior at an acceptable speed is by creating a subsumption architecture with layered behaviors and a lot of parallel activity going on. This is again an emergent-behavior-based way of attacking the l-b-a problem. Is it your belief that there is a simple way of implementing this "trick" that doesn't require such an architecture? Yes. I like Brooks's emphasis on the feedback provided by physical being, but the trick here is the representation language used to encode that feedback.
There must be some way of writing it up so that the routines recognize analogies to themselves regardless of the nature of the original input. This is a high-level ability. It requires seeing connections among patterns no matter whether they come from the physical universe, the social universe, or the computational universe. It means being able to come up with metaphors like 'tender is the night' -- metaphors that not only cross semantic borders but work, make sense, are useful. I see this language as general-purpose and highly portable by definition. I can't imagine that developing it won't require reasoning from above. I know it developed from below before, but who knows what the odds were of that, or of it happening again? Especially given the tremendous difference in medium and environment. Path: igor.rutgers.edu!planchet.rutgers.edu!nanotech From: kyfho@delphi.com (Thomas Radloff) Newsgroups: sci.nanotech Subject: Re: The Singularity Message-ID: Date: 4 Jul 94 16:57:51 GMT Sender: nanotech@planchet.rutgers.edu Organization: Delphi (info@delphi.com email, 800-695-4005 voice) Lines: 19 Approved: nanotech@aramis.rutgers.edu Fred Hapgood writes: > bankruptcy. If you want to build something that produces a > certain output you have to have a theory about how to generate > that output. It's hard, but that's life. You can run a foreign So what do you do with children? I have no "certain output" in mind for my herd beyond vague requirements of health, happiness, etc. I am certainly not a child psychologist and even if I was, my "theory" of how these critters operate would be quite simplistic. Nevertheless, my good ole seat of the pants guide-their-development approach may quite well work. [Indeed it may, but don't take it for granted in a discussion of this kind. One of evolution's favorite tricks to change the value of some parameter of a species is simply to kill off all the individuals with the wrong value... I'm guessing that the next millennium sees more variation in the basic human critter than the past 100 did. --JoSH] Path: igor.rutgers.edu!planchet.rutgers.edu!nanotech From: kyfho@delphi.com (Thomas Radloff) Newsgroups: sci.nanotech Subject: Re: Teleportation Message-ID: Date: 12 Jul 94 19:43:21 GMT Sender: nanotech@planchet.rutgers.edu Organization: Delphi (info@delphi.com email, 800-695-4005 voice) Lines: 24 Approved: nanotech@aramis.rutgers.edu Welfare "rights" are a stellar example of anachronistic ethics that will be annihilated by these changes. Under the guise of helping the less fortunate, some groups exercise raw political power to extract resources from some to distribute to others. As much as I don't like that result, it is still compatible with the ethic of do what you like. However, it is an interesting question to wonder if a greater diffusion of power will liberate or enslave. The virtual realities discussed here and elsewhere, notions of the singularity, etc., _require_ that ethics be overhauled. The simple choice of doing nothing will only let entities that have evolved survivably superior ethics dominate. Not a call for action, just food 4 thought. [Ethics are (one of the formulations of) sets of rules which encode the externalities in an otherwise individually-based scenario of interaction.
Discussions of such rulesets, which include politics and religion, would be appropriate to sci.nanotech ONLY if they: (a) are dispassionate and analytical in tone, (b) analyze differences in the logic of externalities in some specific detail ("everything's cheaper" is not acceptable), and (c) relate specific ruleset phenomena to specific technological differences. I.e., these are areas highly susceptible to flamage, and a heavier-than-usual editorial hand will be exhibited. --JoSH] Path: igor.rutgers.edu!planchet.rutgers.edu!nanotech From: kyfho@delphi.com (Thomas Radloff) Newsgroups: sci.nanotech Subject: Re: Cost of research Message-ID: Date: 12 Jul 94 19:44:25 GMT Sender: nanotech@planchet.rutgers.edu Organization: Delphi (info@delphi.com email, 800-695-4005 voice) Lines: 19 Approved: nanotech@aramis.rutgers.edu Bcousert writes: >Massive financing by the US Government and possibly others for the >development of Nanotechnology, perhaps as much as an Apollo mission >or Manhattan project. > >Couldn't post-breakthrough technologies provide a way to pay off the >debt in a short time? I would imagine that full scale nanosingularity stuff would render obsolete primitive artifacts such as T-Bonds. However, how to quantify the "cost" of letting Mad Uncle Sam have these things? Imagine the universe converted into a grey goo of innumerable nano-tax forms. I can imagine a future Declaration of Independence, giving new meaning to the phrase: "they have sent hither swarms of nanocrats to harass our intelligences and eat out their substance." But seriously, the largest concentration of power on earth, with a demonstrated willingness to get its way, would be a dangerous thing to allow access to this stuff, if everybody else doesn't get it first. =:o) Path: igor.rutgers.edu!planchet.rutgers.edu!nanotech From: mentat@telerama.lm.com (Godshatter) Newsgroups: sci.nanotech Subject: Re: The Singularity Message-ID: Date: 12 Jul 94 21:00:08 GMT Sender: nanotech@planchet.rutgers.edu Organization: Telerama Public Access Internet, Pittsburgh, PA Lines: 54 Approved: nanotech@aramis.rutgers.edu Marvin Minsky (minsky@media.mit.edu) wrote: > In article mentat@telerama.lm.com (Godshatter) writes: > >Besides, individual assemblers don't need to map the whole of > >whatever it is they are constructing any more than your neurons have a map of > >your brain. Anyway, ants and other social insects collectively build things > >no rat could conceive of with far less individual intelligence. > >The concept of emergence - if a bit fuzzy - does not seem to me an > >intellectually bankrupt concept since it seems the best way so far to try > >to account for intelligence arising out of an agglomeration of > >nonintelligent parts. You seem to want to tread old ground that was > >abandoned for good reason - the approaches of the first generation didn't > >work and the newer ones - while probably not a complete theory - at least > >work better. > On the contrary, one could argue that the newer ones work well on > different problems, especially on some pattern recognition ones, but > do not work as well on problem solving. True, but my response was to Mr. Hapgood's assertion that assemblers would need animal intelligence to perform their duties; and while "first generation" AI, as he calls it, has produced some limited success in such areas as expert systems, it seems to me to have been a failure in developing a system of rules that would exhibit that kind of intelligence.
> As for the collective intelligence of those social insects, it is easy > to be carried away by this. Ed Wilson points out that a solitary bumblebee shows > almost all the behavior that "emerges" from the hives of social bees -- > I think he said that he observed in bumblebees on the order of 80 percent of the > elements of behavior that he cataloged in honeybees -- during the > course of one individual's lifetime. In other words, at one time it > behaves like a worker, at other times a soldier (but not so > suicidally), and at other times like a drone or a queen. I've not heard of this research in particular, and it's been quite a while since I've read much about insects. But my original reference was to ants, who have the most advanced societies. It would be hard for soldiers with those huge jaws to behave like workers, and for the workers to survive without protection or get much done without the chemical instructions from the queen. I have also read of experiments with wasps that showed that if only one more layer of tissue paper was placed over the nest exit than the wasp had placed there, it would wander around and die rather than make the very slight effort needed to escape. I don't think there's much intelligence displayed in that case. I was unclear in implying that an ant colony was intelligent. My thought was that we could stretch the idea of a group of programmed mechanisms communicating with each other and with smarter mechanisms either nearby or far away to create something a lot smarter than any one of them alone. Requiring that assemblers have intelligence is to me like thinking that your hands must have intelligence to type. Aside from some reflexive actions - hot stoves and the like - they get instructions for every movement. In a factory, assemblers could be operated the same way as conventional factory automation is now. Path: igor.rutgers.edu!planchet.rutgers.edu!nanotech From: mentat@telerama.lm.com (Godshatter) Newsgroups: sci.nanotech Subject: Re: Singularity Message-ID: Date: 12 Jul 94 21:00:43 GMT Sender: nanotech@planchet.rutgers.edu Organization: Telerama Public Access Internet, Pittsburgh, PA Lines: 104 Approved: nanotech@aramis.rutgers.edu Fred Hapgood (fred.hapgood@channel1.com) wrote: > m> Nanoscale robots don't need individual intelligence any more > than ants do. > We want to use nanotech to build cities, rockets, cell repair > machines. All in all, a more exacting set of applications than > anthills. We can already build cities and rockets without nanotech; as for cell repair machines, it seems that using biotech methods to stimulate the body to do what it does already - destroy old and damaged cells and replace them with new ones - would be a more efficient use of resources. Repairing cells seems like repairing microchips. A better idea is to alter the genetic code of cells to avoid the need for future repairs. You don't need intelligence on the assembler for that, though. > > If nonintelligent cells can create plants and animals much > more complex than anything we will build with assemblers - at > least to begin with - why do you assume it will take > intelligence within each assembler to build things? A network > of a million or a billion assemblers could be quite > intelligent without the assemblers individually having as > much intelligence as a frog.
> The closer the processor running the AI is to the sensors and > actuators, the faster it can work, though obviously there are > tradeoffs. There are at least two ways I can think of around that problem. You could use micromachines near the assemblers. They can carry a lot more processing power and can act as supervisors and network managers. They could even operate the assemblers entirely, so that all the assemblers would need is simple navigation and manipulative equipment. Anything the micromachines couldn't handle would be sent up the line to larger computers or even human intervention. You seem to think that nanotech will develop in a vacuum and that it will work in complete isolation from the rest of the universe. Another way is to have a human teleoperate the assemblers. In the human body, for example, a knowledgeable person could recognize problems and direct appropriate repairs. The assemblers would receive a map of what the offending tissue should look like and could bring it into line with that internal image. No intelligence required. The human could watch individual assemblers, but more likely would merely direct gross operations and let expert systems or some other software/hardware do the details. The assemblers wouldn't need to know anything beyond what they were working on at the time. > The analogy with cells goes nowhere for me. Biology has had a > long time to squeeze a tremendous number of tweaks out of a very > narrow vocabulary of operators. We are going to want to work > with many more kinds of tools over a much wider range of > applications over a much shorter period of time. Given those > constraints, the density of high-level decision making, both in > terms of figuring out the initial design and in picking up after > Murphy, is going to be very high. That's why I think real AI is > essential for nanotech. It's very hard for me to imagine > nanotech happening if humans have to make all these decisions one > by one themselves. At the rate things are accelerating I doubt that time is anything to worry about. Besides, you don't have to have superintelligence right on top of things. Anything the micromachines - or a notebook machine twenty years from now - can't handle could go to a multimegateragigaprocessor halfway around the world or in orbit. No time constraint here. As to all those decisions: my computer makes lots of "decisions" very quickly whenever I use my OCR system. I tell it to scan and display the results. If I don't like them - the error rate, not the message - I change the contrast and/or other parameters and try again. It can even decide about the contrast if I set it that way. The same for building a model with a stereolithograph. Design what you want on the screen and the computer decides where to put the laser and when. How much intelligence does that take? Building things with assemblers will be similar, though granted much more complex. But the principle is the same. Describe what you want - the more detail probably the better - and use the software and knowledge bases that will be built by people with computer assistance - writing the actual program code perhaps, since object-oriented programming will be much farther along a few years from now - and fine-tune your results. After it's right, share the results of your success with everyone else. You can build up quite a set of designs and eliminate a lot of false trails very quickly when people can collaborate from anywhere.
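[An editorial sketch of the control scheme described above: an assembler diffs the site it can see against the target map it was handed, fixes what it recognizes, and sends the rest "up the line" to its supervisor. The site names, materials, and two-level hierarchy are all invented for illustration:

    TARGET = {"site1": "collagen", "site2": "elastin", "site3": "collagen"}
    KNOWN_FIXES = {"collagen", "elastin"}   # what this assembler can build

    def assembler_pass(observed):
        repairs, escalations = [], []
        for site, want in TARGET.items():
            have = observed.get(site)
            if have == want:
                continue                      # already matches the map
            if want in KNOWN_FIXES:
                repairs.append((site, want))  # handle it locally
            else:
                escalations.append(site)      # the micromachine's problem
        return repairs, escalations

    observed = {"site1": "collagen", "site2": "damaged", "site3": None}
    repairs, escalations = assembler_pass(observed)
    print(repairs)        # [('site2', 'elastin'), ('site3', 'collagen')]
    print(escalations)    # [] -- nothing here needs to go up the line

No intelligence in the loop, as the post says: the judgment lives in whoever drew up the target map.]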
You don't seem to appreciate the synergy of communication and computers that is already having a major impact on research, business, popular culture, etc. around the world. This process is only going to accelerate. Besides, you still haven't explained why you think assemblers need to be smart enough to be on their own. As I recall from the "gray goo" debates, it didn't seem likely - or desirable - to be designing nanotech devices that could survive in the wilderness. They are more likely to be in a factory or some other controlled setting where the materials they need will be readily available and where they will not need to find food or water or recognize leaves from any angle. If they don't know what they're looking at, they can ask. More likely they will be tools at the ends of smart devices outside the body or right beside the assembly line - and the human foreman will be just a pager away. Path: igor.rutgers.edu!planchet.rutgers.edu!nanotech From: bam0511@utarlg.uta.edu (MOSELEY,BO,AUSTIN) Newsgroups: sci.nanotech Subject: Re: The Singularity Message-ID: Date: 14 Jul 94 16:15:47 GMT Sender: nanotech@planchet.rutgers.edu Organization: The University of Texas at Arlington Lines: 39 Approved: nanotech@aramis.rutgers.edu In article , mentat@telerama.lm.com (Godshatter) writes... >Marvin Minsky (minsky@media.mit.edu) wrote: > I've not heard of this research in particular, and it's been quite a >while since I've read much about insects. But my original reference was >to ants, who have the most advanced societies. It would be hard for >soldiers with those huge jaws to behave like workers, and for the workers There is a lot of activity in the field of insect research. Texas has two developments which are interesting. One is the introduction of Africanized bees to the southern part of the state. The bees are running into all kinds of new phenomena, from new ecological structures to new predators to a different climate that they have not seen before. It will be interesting to see what new behavior emerges. The fire ants are a different story. Twice this century they have been beaten back by native ants which are more aggressive. But this time, the fire ants have a new social structure, sets of communal hills. Instead of one hill per 300 m^2, they now have ten times that density - since fire ants no longer fight each other, but cooperate with or ignore fire ants from other nests. The result is a much higher carrying capacity and a "quantity has a quality" problem for the native ants. There is a lot of complaining that insecticides have hurt the fire ants' ant enemies, but pesticides have been used before. The culprit is the emergence of a new social structure for the ants. It's like Greek national identity vs. city-states. Fire ants are also stressing everything that walks or crawls, from toads to rats. If a fire ant finds a rat nest, it's the end of the baby rats. Same for the bird nests. On the grassland to the east of Dallas, from my observations, the main food is immature crickets. Bee hives take a beating as well. The ticks and mosquitoes have suffered too. Fire ants also have a heat problem when they raise their young - which they do in batches: they carry all the eggs to the top layer of the nest, the largest on the outside, then take them down at night. I don't know if the batch phenomenon is related to the heavy thunderstorms - which could kill off most of the ants - or if it is by design. Austin bam0511