Robotics Sci-Fi Science

IEEE Special Report On the Singularity 483

jbcarr83 writes "IEEE Spectrum is running a special issue on the technological singularity as envisioned by Vernor Vinge and others. Articles on both sides of the will-it/won't-it divide appear, though most take the 'it will' approach. I found Richard A.L. Jones' contribution, 'Rupturing The Nanotech Rapture,' to be of particular interest. He puts forward some very sound objections to nanomachines of the Drexler variety."
  • by BPPG ( 1181851 ) <bppg1986@gmail.com> on Tuesday June 03, 2008 @10:53AM (#23639339)
    aw, I just can't do it.
    • by spun ( 1352 ) <loverevolutionary@@@yahoo...com> on Tuesday June 03, 2008 @10:56AM (#23639387) Journal
      I for one welcome our apathetic first posting overlords, and remind them that as a trusted slashdot personality, I can be helpful in rounding up others to toil in their underground overlord welcoming centers.
      • by JebusIsLord ( 566856 ) on Tuesday June 03, 2008 @11:19AM (#23639707)
        Just think, when The Time comes, AIs will compete over first posts within picoseconds of each other. New memes will be invented, spread, and forgotten in milliseconds, and dupes will (hopefully) finally be a thing of the past.
        • Re: (Score:3, Funny)

          by Surt ( 22457 )
          Nahhh ... they'll be duping so fast the dupes will eventually achieve sentience and reproductive ability of their own, and live forever. Eventually, the entire universe will be filled with nothing but dupes.
    • by fm6 ( 162816 ) on Tuesday June 03, 2008 @11:35AM (#23639963) Homepage Journal
      What? Our singular overlords? Our eschatonic overlords.

      Is it just me, or does all this poorly-reasoned "singularity" crap have a religious feel to it?
      • by JebusIsLord ( 566856 ) on Tuesday June 03, 2008 @11:40AM (#23640053)
        I don't think it is poorly reasoned, but it does definitely have a religious feel to it - in a good way! It gives us God-is-dead scientific-types something to strive for (enlightenment, immortality) with an actual basis in fact.

        If you assume that:

        * Technology will continue to improve exponentially (it is right now - see Moore's law)
        * The brain is a fully deterministic computer.

        Then it is a fair assumption that we will eventually design a superbrain. The superbrain will design a super-duper brain, and the chain reaction (singularity) will be upon us. I can't wait!
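        A back-of-envelope sketch of that chain reaction in Python (the doubling per generation and the "a smarter designer works proportionally faster" rule are illustrative assumptions, not measurements):

            # Toy model of recursive self-improvement: each generation designs
            # a successor twice as capable, and a designer twice as smart
            # finishes its design in half the time.
            def chain_reaction(generations=10, first_design_years=10.0):
                capability = 1.0  # human-level = 1.0, in an illustrative unit
                elapsed = 0.0
                for g in range(1, generations + 1):
                    elapsed += first_design_years / capability
                    capability *= 2  # assumed doubling per generation
                    print(f"gen {g}: {capability:5.0f}x after {elapsed:6.2f} years")

            chain_reaction()

        The elapsed times form a geometric series, 10 + 5 + 2.5 + ..., that converges to 20 years: under these assumptions, every generation arrives within a finite window, which is the "singularity" claim in miniature.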
        • by fm6 ( 162816 ) on Tuesday June 03, 2008 @12:32PM (#23640805) Homepage Journal

          Technology will continue to improve exponentially (it is right now - see Moore's law)
          Dude, Moore's "law" will cease to apply in just a few years. It's not a law of nature; it's an observation that chip fab facilities double their resolution every 18 months. But you can only take that so far. Soon, quantum tunneling will make it impossible to print circuits any finer (a rough sketch of that arithmetic follows below).

          Understand, I'm not claiming that progress in creating smaller and cheaper circuits will suddenly come to a halt. There's too much economic demand for that to happen. But development will have to switch to new technologies, and progress almost certainly will be slower.

          Every kind of growth has limits. If it didn't, bacteria would still be the most conspicuous life form on the planet. Science and technology are no different. They're dependent on natural resources, an educated pool of skilled workers, and a lot of other factors.

          Then it is a fair assumption that we will eventually design a superbrain. The superbrain will design a super-duper brain, and the chain reaction (singularity) will be upon us. I can't wait!
          Again with the silly curve drawing. "Smarter" isn't something you can make just by packing in enough circuits. People don't even know what "smarter" is. Probably they'll figure it out eventually. But it's already clear that "smarter" is more complicated than any technology we'll see in the next century or so.
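          Picking up the tunneling point above, here is a rough sketch of how few 2D halvings are left (Python; the 45 nm starting node and the ~0.2 nm silicon atom spacing are round illustrative numbers, not precise figures):

              # Halve the feature size every two years (a rough Moore's-law
              # cadence) until features approach atomic dimensions, where
              # quantum tunneling dominates.
              feature_nm = 45.0  # roughly the leading-edge node in 2008
              atom_nm = 0.2      # approximate spacing of silicon atoms
              year = 2008
              while feature_nm > atom_nm:
                  feature_nm /= 2
                  year += 2
                  print(f"{year}: ~{feature_nm:.3f} nm features")

          Only about eight halvings fit between 45 nm and atomic scale - a couple of decades at most before plain 2D shrinkage runs out.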
          • by fyngyrz ( 762201 ) * on Tuesday June 03, 2008 @02:26PM (#23642421) Homepage Journal

            Dude, Moore's "law" will cease to apply in just a few years. It's not a law of nature; it's an observation that chip fab facilities double their resolution every 18 months. But you can only take that so far. Soon, quantum tunneling will make it impossible to print circuits any finer.

            When relays reached max density, tubes appeared. Tubes were shrunk, and at about the time when they couldn't get a lot smaller, transistors appeared. Those shrunk for a while, then ICs appeared. ICs have been shrinking for a while, with various technologies, each able to go smaller than before, driving that change. Now ICs are within reach of maximum density in the 2D zone, but the 3rd dimension beckons, especially to low-power (hence low-heat) technologies. Two layers gives a doubling in the same 2D space; four does it again... that's probably good for quite a few doublings before the 3rd dimension becomes unwieldy.

            In the meantime, can we anticipate what might come next? Biologicals are one possibility; look at the brain: 3D, and it fits in a funny shape. Brains come in all sizes, and who is to say that the one we have is either the best design or the largest or the fastest? What if materials that work 2x as fast as our neurons are found?

            Look at the recent development of memristors; how many people saw that coming? Not many! And they're not even in hardware yet. They have the potential to spike memory density up, power down, speed up, and more... because they aren't transistors at all. And they're small. In fact, the smaller they are, the better they seem to work. There's a limit in there somewhere, but still, how cool is that?

            Furthermore, Moore's law is just one aspect of technology; we are also experiencing doublings along many other paths (see Ray Kurzweil's observations for details on that), and some of them aren't about materials or hardware - they're about knowledge leveraging next steps. For instance, in the late 1970s we had microprocessors that were very capable, but we didn't have many kinds of software. If we had had it at the time, we could have done more, earlier... nothing but knowledge. But instead, many of these software technologies didn't show up for years.

            Yet we could take one of those microprocessors (a 6809 or a Z80, for instance) and program all *manner* of cool things on them today, were it called for. And give them huge memory spaces, too. To put it another way, with what I know after 40 years of programming, if I could go back in time to 1979, what I now know how to do with microprocessors would make me a very rich man. Technology has come a long way regardless of Moore's law. Technology multiplies itself.

            Honestly, there is nothing that falls so flat on my ears as doomlike predictions of technology reaching an unbreachable wall. Not going to happen. What's going to happen is technology will continue to double. The consequences of that are shrouded in mystery, but the one thing that is clear is that there will be extremely significant consequences.

            Here's an observation for you: when you have projects that are dependent upon technologies that are experiencing doublings in a particular time period, those projects will typically get 1/2 of the total work done in the last time period.

            For example, four time periods of doublings: 1 + 2 + 4 + 8 + 16 - a total factor of 31, of which 16 occurred in the last period. It is because of this that projects like the Human Genome Project look stalled at first; half of the work required to get them done will occur in the last doubling period (about a year in that case). I suspect that's exactly what we're looking at with fusion as well; we're just not far enough up the curve yet.
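            The arithmetic behind that, as a minimal Python sketch (the period counts are generic, not tied to any particular project):

                # Cumulative progress under doubling: with d doubling periods
                # the total is 1 + 2 + ... + 2**d = 2**(d+1) - 1, and the
                # final period alone contributes 2**d - just over half.
                for d in range(1, 9):
                    total = 2**(d + 1) - 1
                    last = 2**d
                    print(f"{d} doublings: total {total:3d}, "
                          f"last period {last:3d} ({100 * last / total:.0f}%)")

            For d = 4 this reproduces the 31-and-16 example above: about 52% of all the work lands in the final period, which is why such projects look stalled early on.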

        • by Tenebrousedge ( 1226584 ) <tenebrousedgeNO@SPAMgmail.com> on Tuesday June 03, 2008 @12:54PM (#23641087)
          Somewhere in the article(s) it mentions that exponential increases in intelligence would probably equate to exponential increases in resources. There are physical limits to intelligence that we'll run into sooner or later--there will be a point where we can't shrink that transistor, or find another part that is smaller that does the same task.
          • Re: (Score:3, Funny)

            by somersault ( 912633 )

            Somewhere in the article(s) it mentions that exponential increases in intelligence would probably equate to exponential increases in resources.
            We'll soon be needing more coffee than the arable land of the world can produce just to wake our smart-phones up in the morning!
        • Re: (Score:3, Insightful)

          * Technology will continue to improve exponentially (it is right now - see Moore's law)

          However, this will not and cannot continue ad nauseam, because eventually the laws of thermodynamics catch up with you.

          * The brain is a fully deterministic computer.
          This is an unproven article of faith, and it runs contrary to a lot of evidence suggesting the brain uses at least some semi-chaotic processes. Most natural systems do.
          • Re: (Score:3, Insightful)

            by lgw ( 121541 )
            All natural systems are deterministic. Those who believe otherwise just want to believe that the soul is a real thing. :)
            • Re: (Score:3, Funny)

              by somersault ( 912633 )

              Those who believe otherwise just want to believe that the soul is a real thing.
              I just spent real money on a James Brown album, you insensitive clod!
  • Sounds like someone is giving up on checking for dupes, or expects a little too much from a bunch of people who can't even RTFA after it's posted. Or possibly some nice *whooosh*es for me.
  • by Anonymous Coward on Tuesday June 03, 2008 @10:59AM (#23639423)
    We're all living virtualized replays of our lives prior to the singularity happening. It's an infinite loop, but it's our only way of dealing with it.
  • by unity100 ( 970058 ) on Tuesday June 03, 2008 @11:00AM (#23639429) Homepage Journal
    Now all I need to do is harness the power of this singularity, using the nanotech rupture, to build my army of Vernor Vinge nanomachines!!!
  • The what? (Score:3, Interesting)

    by Hatta ( 162192 ) on Tuesday June 03, 2008 @11:01AM (#23639443) Journal
    Singularity, that's the thing at the center of a black hole right? What's that got to do with nanotech and AI?
    • Re:The what? (Score:5, Informative)

      by BPPG ( 1181851 ) <bppg1986@gmail.com> on Tuesday June 03, 2008 @11:05AM (#23639507)

      Singularity, that's the thing at the center of a black hole right? What's that got to do with nanotech and AI?

      Mankind has been progressing technologically in steps that seem to get closer and closer together. The theory is that at some point, technological advances will begin to happen all at once, with the emergence of things like sentient AI and usable quantum engineering. Basically, technological transcendence.

      It's a pretty silly idea, but everyone has their own vision of nirvana.

      • Re:The what? (Score:5, Informative)

        by ushering05401 ( 1086795 ) on Tuesday June 03, 2008 @11:14AM (#23639631) Journal
        IIRC, the term singularity can refer to any place where predictive systems appear to break down.

        I was listening to a talk on hypercanes quite some time ago, and the lecturer was using the term singularity to describe the point beyond which the weather system became self-sustaining - a situation for which the predictive equations could not account. Once the predictive systems are expanded, the 'singularity' is 'pushed back' to the point where the systems break down again.
      • Re:The what? (Score:4, Interesting)

        by maxume ( 22995 ) on Tuesday June 03, 2008 @11:16AM (#23639649)
        The real entertainment begins when we figure out that we are already living in the singularity, and that it is going to end soon. That is, a plateau is at least as likely an outcome of a hockey stick graph as a singularity. Hard physical limits and all that noise.
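        One way to see that point (a minimal sketch in Python; the growth rate and carrying capacity are arbitrary illustrative values): logistic growth is indistinguishable from a runaway exponential until it nears its hard limit.

            import math

            # Logistic growth: exponential at first, then a plateau at the
            # carrying capacity instead of a divergence.
            def logistic(t, rate=0.5, capacity=1000.0, x0=1.0):
                return capacity / (1 + (capacity / x0 - 1) * math.exp(-rate * t))

            for t in range(0, 41, 5):
                print(f"t={t:2d}: {logistic(t):8.1f}")
            # Early values double like a hockey stick; past the midpoint they
            # flatten out at the hard physical limit.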
        • Re: (Score:2, Insightful)

          by Bazer ( 760541 )
          "Nature doesn't like singularities" That's a quote I often heard form my physics professor. We're physically bound to hit a wall.
        • Re: (Score:3, Interesting)

          by slashname3 ( 739398 )
          I think you have hit on something. We are getting close to a resource issue that will impact the continued increase in technology and knowledge. We are now operating on a global scale. Where previously we operated on a more local scale, involving hundreds of square miles at most, we now are drawing on resources all over the planet. This singularity will eventually stagnate and implode if we are not able to obtain additional resources. And at this point the only new resources that we can tap are going to
      • Re:The what? (Score:5, Interesting)

        by bunratty ( 545641 ) on Tuesday June 03, 2008 @11:17AM (#23639669)

        I think you're missing the point of the singularity. Mankind has progressed at a rate limited by his brain, which is determined by genetics. Our brains have a bounded capacity and rate of operation, and our brain can evolve at only a very slow rate. Therefore, our rate of advancement has been bounded.

        On the other hand, if we develop beings with an artificial intelligence equal to the smartest scientists, they could potentially develop a second generation that would be improved. That generation could operate more quickly and be smarter, and develop a third generation even more quickly. Essentially, the limit to our rate of advancement would be removed, and that would cause technological advances to happen very quickly. In a short period of time, we could find ourselves surrounded by beings that seem like gods to us. I think it's less a matter of whether it will happen, and more a matter of when and how it will happen.

        • by servognome ( 738846 ) on Tuesday June 03, 2008 @11:29AM (#23639863)

          On the other hand, if we develop beings with an artificial intelligence equal to the smartest scientists, they could potentially develop a second generation that would be improved.
          The scientists could just get laid, have kids and accomplish the same thing :)
        • Re:The what? (Score:4, Insightful)

          by Goaway ( 82658 ) on Tuesday June 03, 2008 @02:17PM (#23642303) Homepage

          On the other hand, if we develop beings with an artificial intelligence equal to the smartest scientists, they could potentially develop a second generation that would be improved. That generation could operate more quickly and be smarter, and develop a third generation even more quickly.
          This is within the realm of possibility, although it represents a very narrow and simplified view of what intelligence is.

          Essentially, the limit to our rate of advancement would be removed, and that would cause technological advances to happen very quickly. In a short period of time, we could find ourselves surrounded by beings that seem like gods to us.
          However, this in no way follows from the earlier assumptions. This is the essential mistake made by all proponents of the "singularity".

          You are assuming that each generation of intelligences can not only create an intelligence smarter than themselves, but one that is as much smarter than themselves as they are smarter than their predecessors. That is definitely not something which is guaranteed to be true, and I would go so far as to say it is most likely false.

          It doesn't matter that your infinite sequence is always strictly increasing; it's not necessarily going to go to infinity.

          For instance, say you manage to create an intelligence twice as intelligent as you. This one puts all its intelligence into creating another intelligence, and manages to create one which is 2.5 times as intelligent as you. That one manages 2.75. And so on, until you top out at three times. No runaway evolution happens, because intelligence turns out to be really hard.
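          That example sequence, made concrete in Python (the "each generation closes half the remaining gap" rule is one assumption that reproduces exactly those numbers):

              # A strictly increasing sequence of intelligences that never
              # passes a hard ceiling of 3x: 2, 2.5, 2.75, 2.875, ...
              intelligence = 2.0
              ceiling = 3.0
              for generation in range(1, 11):
                  print(f"gen {generation}: {intelligence:.4f}x")
                  intelligence += (ceiling - intelligence) / 2
              # Monotone growth forever, yet bounded: increasing returns are
              # not the same thing as runaway returns.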
      • Yes! I predict in 40 years we'll predict in 40 years that we'll reach a prediction....
      • Re: (Score:3, Insightful)

        Believe what you want, but I sincerely doubt anything like that will happen when corporate interests keep stifling innovation.

        We're still stuck with primitive programming languages, defective (by design?) platforms, unimplementable document formats (OOXML anyone?), carbon-based power plants, software patents....

        There CANNOT be any singularity. The chilling effect of Mankind's stupidity is a factor too great to ignore.
        • "The chilling effect of Mankind's stupidity is a factor too great to ignore."

          You raise a good point.

          Many proponents of the singularity suggest a 'critical mass' type scenario, where the option not to proceed with these developments is effectively removed from man's control in the very near future.

          This point of view seems more than a little optimistic right now, but if we succeed in surviving another couple of decades... who knows.
      • by Hatta ( 162192 )
        So, do they have graphs showing "technological advancement" approaching some sort of asymptote? How do you quantify a "step"? How good is the fit to the curve? What reason is there to believe that we're not going to reach a plateau?

        If we are approaching an asymptote, that asymptote must cross the time axis at some point. That would, in theory, be the date of the singularity. When is it predicted for?
    • I guess the idea is that at some point there will be some form of technological "superhuman" intelligence, and they call that point in time the "crossing the event horizon" of the technical singularity.
      Compared to the likelihood that we blow ourselves up in a nuclear war or an ecological catastrophe, I'm not too worried.
      • by bistromath007 ( 1253428 ) on Tuesday June 03, 2008 @11:13AM (#23639619)
        People who consider the singularity something to be worried about missed the point and/or watch too many movies. A technological singularity is not a world-ending scenario. It's the first step on the road to divinity.
        • As a guy heavily into nano-research I personally am very worried about letting a genie out of the bottle that we won't be able to get back in. Your experience may vary.
          In case you missed it, recent research showed that carbon nanotubes, despite earlier reports to the contrary, carry quite a cancer risk. So making self-organizing molecules with lethal potential sounds like as safe an idea as splitting the atom. Someone will do it, but where it leads only (enter deity/superhuman ai/advanced life form of c
        • by geekoid ( 135745 )
          If it creates a sentient being that needs to compete with humans, it could mean the end of the world, to humans.

          Not that it would happen. It's silly on the surface and doesn't hold up. It's the same failing of the human mind, dressed up with mystical technology instead of a security guard in the sky.
    • Re: (Score:3, Informative)

      by Lord Lode ( 1290856 )
      Look it up on Wikipedia: http://en.wikipedia.org/wiki/Technological_singularity It's the point where machines will be able to evolve technology faster than human thinking can. So, it's the point where we are no longer the most significant sentient race on this planet.
    • Re: (Score:3, Informative)

      In this context it refers to a technological singularity as posited by Vernor Vinge, Eric Drexler, et al.: an accelerating feedback loop of technology leading to better, faster technology is cycling down towards zero time between improvements, at which point progress will literally leap forward faster than we can imagine - a singularity in technology.

      Most singulitarians expect the first human-equivalent AIs circa 2025 and the singularity circa 2045.
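      The "zero time between improvements" claim can be written as a convergent geometric series (a sketch; the constant ratio r between successive improvement cycles is the singularitarians' assumption, not an established fact). If the first cycle takes t_0 and each later cycle takes a fraction r < 1 of the previous one, the total time for infinitely many improvements is finite:

          T = \sum_{n=0}^{\infty} t_0 r^n = \frac{t_0}{1 - r}

      With t_0 = 10 years and r = 1/2, every remaining improvement fits inside T = 20 years, which is how a finite "date" gets pinned on the singularity.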
  • hmmm. (Score:5, Interesting)

    by apodyopsis ( 1048476 ) on Tuesday June 03, 2008 @11:11AM (#23639591)
    Let's imagine you can upload your mind into a machine.

    It will not be you. It will be a copy. You will still be the one that dies afterwards.

    It would be you if a progressive upgrade path could be found from a biochemical to a mechanical/electrical system.

    The copy, however, will believe that he is you, as he will have no memory of his existence after the "transfer" - unlike poor flesh-you in the xerox machine.

    Who has legal rights until/after fleshy death?

    Even then the copy will be subject to mechanical breakdowns, loss of sensation, and other issues interacting with the real world.

    Would they want to interact with normal world? Would they prefer a virtual world?

    As a society I feel that we are nowhere near ready for such questions, and in any case I strongly suspect individual sanity would not survive transfer.

    For a good fictional account of this (there are many), I still hold up the Gateway books by Frederik Pohl - and the death of Robinette Broadhead and the society of electronic people stored after death.

    In the book, to interact with us really slow and boring humans, he creates an electronic avatar and animates it whilst having a fun time in a virtual fantasy world, checking in on it every once in a while to see if anything interesting has been said and instructing it on what to say next.
    • Re:hmmm. (Score:4, Insightful)

      by Yvanhoe ( 564877 ) on Tuesday June 03, 2008 @11:19AM (#23639695) Journal

      Let's imagine you can upload your mind into a machine. It will not be you. It will be a copy. You will still be the one that dies afterwards.
      That is, sir, a very hotly debated point :-)
      When you copy a Linux binary, it is a Linux binary, as well as a copy of one. This whole thing touches on the essence of what "being" means. If you were to instantly copy yourself right now, you would have one instance of you thinking "Well, the copy is not me!" and another one thinking "Whee, I am the digital one, I am the one who gets immortality, yay!"
      Thinking of people as instantiable things requires a little time to get used to.
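      The instantiable-people idea in code form (Python; Mind and its fields are hypothetical stand-ins, purely for illustration):

          import copy

          # Two instances start bit-identical, then diverge the moment they
          # have different experiences - neither has a stronger claim to be
          # "the original process" than a copied binary does.
          class Mind:
              def __init__(self):
                  self.memories = ["everything up to the upload"]

          flesh = Mind()
          digital = copy.deepcopy(flesh)  # the "upload"
          flesh.memories.append("Well, the copy is not me!")
          digital.memories.append("Whee, I am the digital one!")
          print(flesh.memories == digital.memories)  # False: two people now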
      • Re:hmmm. (Score:4, Interesting)

        by Nursie ( 632944 ) on Tuesday June 03, 2008 @11:40AM (#23640045)
        >That is, sir, a very hotly debated point :-)

        Why?

        Consciousness will not be split across the two new instances, and if a non-destructive reading has taken place there is no magic that will make your consciousness jump to the computer. You, in meatspace, will still have a continuous existence and you, in meatspace, are not suddenly immortal.

        It would be like giving birth, perhaps: you spawn off a part of yourself. To me it would feel utterly futile. Where's the benefit to me (by which I mean my internal monologue, my continuous experience of life), other than in terms of vanity?
        • by Jhan ( 542783 )

          [When copied into computer]

          Consciousness will not be split across the two new instances, and if a non-destructive reading has taken place there is no magic that will make your consciousness jump to the computer. You, in meatspace, will still have a continuous existence and you, in meatspace, are not suddenly immortal.

          Consciousness, if defined as an unbroken chain of remembered events, isn't singular. It will indeed split. Both beings - you in your flesh, and you in the silicon - will have exactly the sam

          • by Nursie ( 632944 )
            It's the flesh one I'm concerned about! And in fact that's the original point: that the copy on the machine is a copy and not a transfer.
    • Re:hmmm. (Score:5, Funny)

      by Rogerborg ( 306625 ) on Tuesday June 03, 2008 @11:34AM (#23639937) Homepage

      Would they want to interact with normal world? Would they prefer a virtual world?

      Surely that's not even a serious question. If I could choose between hanging out with you meatsicles, or living in a perfect copy of meatspace but with a flawless copy of the flawless Alyson Hannigan oiled up and duct-taped to a water-bed, it's not a matter of if, it's a matter of when and how much.

    • Re:hmmm. (Score:5, Interesting)

      by Bazman ( 4849 ) on Tuesday June 03, 2008 @11:35AM (#23639951) Journal
      So tell me, when you go to sleep at night, or perhaps go under general anaesthetic in hospital, what wakes up? Is it you, or is it a copy of you? And importantly, how could you tell?

      • Comment removed based on user account deletion
      • Re: (Score:3, Insightful)

        by naoursla ( 99850 )
        I am a different person now than I was ten years ago. That person from ten years ago is dead.

        In fact, every moment of the day I die and am recreated again as a slightly different individual.

        I am the phoenix and you can too!
      • Re: (Score:3, Interesting)

        by lysse ( 516445 )
        But if I meet another entity who originated as an exact copy of me, I will still recognise that the entity is not me, and the entity will recognise that I am not it. Self-consciousness is non-transferable by definition; we become self-conscious not with any positive realisation of our own individuality, but with the dawning awareness of our own isolation from others - it isn't "I am nobody else", it's "Nobody else is me". Exact duplication doesn't alter that.

        Moreover, even if we are merely the sum of our ac
    Let's imagine you can upload your mind into a machine.

      It will not be you. It will be a copy. You will still be the one that dies afterwards.


      What if you slowly replace each brain cell that dies with a synthetic replica? Eventually, your mind will be synthetic, a machine; but if that machine is not you, at what point do you lose your consciousness?

      We all have brain cells die all the time and grow new ones without (at least observably to ourselves) losing our consciousness (more so than other aft
  • Light Speed Rule (Score:5, Insightful)

    by arthurpaliden ( 939626 ) on Tuesday June 03, 2008 @11:19AM (#23639705)
    Although new developments are happening faster and faster, the energy to generate them (money) is getting greater and greater. So to get to a point where developments happen concurrently, or very very close together, will require vast amounts of money. Probably more than is currently available.
    • by samkass ( 174571 )
      But what does "money" really represent? Money availability is not a constant thing, and money is a vague measurement of productivity or work done. I think if you go back to your original statement about "energy" it's actually more accurate. There is a lot of stored solar energy in the ground in the form of oil, but at some point we're going to catch up to the "now" and have only as much solar energy as is available each minute from the sun.

      There are really only a very small number of sources of energy on
      • I used 'money' as a representation of the resources required, since it is a term that everyone understands and generally equates to 'energy-materials-brain power-etc...'.
    • Re:Light Speed Rule (Score:4, Interesting)

      by vertinox ( 846076 ) on Tuesday June 03, 2008 @12:18PM (#23640541)
      Although new developments are happening faster and faster, the energy to generate them (money) is getting greater and greater.

      Short term. That's what inflation does on its own.

      Long term. Technology actually saves costs and increases productivity. A single scientist today with a desktop computer and the internet is more productive than 100 in 1908 with slide rules and a large library.

      If nothing else, those scientists in 1908 had to spend time looking up materials in their reference sources, do very complex calculations by hand, and, if they needed to correspond with their peers, deal with the postal service and transatlantic journeys for their letters to reach their friends across the pond.

      So while the costs appear to increase (probably due to inflation and energy costs), productivity increases just as fast, if not faster.

      And speaking of the energy crisis... I believe the current crisis will actually benefit alternative technologies and force companies to really consider more efficient ways of using, and eventually creating, their own energy.

      Imagine, if you would, a world where solar cells are so efficient you don't even need to bother with a real power grid. In reality, I don't think the singularity will be created by a bunch of nerds with fancy algorithms but by corporations who create technologies out of competitive necessity.
  • by Surt ( 22457 ) on Tuesday June 03, 2008 @11:23AM (#23639787) Homepage Journal
    Not going to happen. We're approaching atomic-level composition already. Even with quark composition, the computing capacity of the fastest thing that can be built won't go up that much. Computers will never be more than 10^15 times faster than they are today. Even quantum computing doesn't solve the NP problem.
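    For scale, even a hard 10^15 ceiling leaves a long run of doublings (Python; the 18-month cadence is the usual Moore's-law figure, used here only for the arithmetic):

        import math

        # How many doublings fit under a 10^15 total speedup, and how long
        # would they take at one doubling every 18 months?
        ceiling = 1e15
        doublings = math.log2(ceiling)  # about 49.8
        years = doublings * 1.5
        print(f"{doublings:.1f} doublings, ~{years:.0f} years at Moore's pace")
        # Roughly 75 years of exponential growth before even that wall is
        # hit: a cap, but not an imminent one.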
  • by gmuslera ( 3436 ) on Tuesday June 03, 2008 @11:27AM (#23639841) Homepage Journal
    I think that is a bigger challenge than most fans of any sort of singularity think. Yes, in the big scheme of things it could be pretty close, but that "close" could be centuries from now.

    Consider that, according to most classic sci-fi books, we should already have humanoid bots walking among us, have colonized most of the solar system's planets (even visited and returned from other stars/galaxies), sent manned probes to Jupiter, and have flying cars and/or Mr. Fusion (not as exceptions, but as something everyone has), etc. Some "practical" issues delayed all that a bit: no way was found to travel safely faster than light, antigravity wasn't discovered, duplicators just aren't there, neither is teleportation (with flies in it or not), and even a full-grown clone with my memory and conscience is a little hard to get.

    Worse than that, the practical issues aren't just technical ones. Economic, ethical, social, and safety issues are as good at keeping us from some utopian sci-fi society as the lack of FTL travel.

    In this category falls any kind of machine that talks, and in fact thinks, like a human, including handling context and perceiving reality the way a human does. It's very common in movies and sci-fi stories, but AFAIK it's still a bit far off in time.
    • If I remember right, the singularists would agree with you; if we advance at the speed we currently do, AI is centuries away. Where they would disagree with you is in assuming that we will continue to advance at our current rate.

      I've heard estimates that we will experience 20,000 years of progress (at the current rate) during the next 100 years. It sounds insane, but if you think back to the year 1900... no polio vaccine, no relativity/quantum mechanics, no airplanes, no electronics, no radar, no X-rays,
    • by Hatta ( 162192 ) on Tuesday June 03, 2008 @12:27PM (#23640707) Journal
      AI won't ever be "just like" human intelligence for much the same reason that artificial flight won't ever be just like avian flight. AI would have to be designed exactly like a human brain, taking into account eons of evolutionary kludges. It doesn't make sense to do this, when a simpler design will work better in many ways.

      Of course, people will complain that it's not real intelligence if it can't be mistaken for a human. To them I say: a plane cannot be mistaken for a bird; does it not really fly?
  • Real Singularity... (Score:2, Interesting)

    by ShiNoKaze ( 1097629 )
    Is apparently augmenting our intelligence with a way to get more intelligence... I'm so going to laugh if they find out that intelligence is not a "value" that can be increased. Of course raw computing power can be, but that's not intelligence. It's far closer to actually being philosophical viewpoints and the ability to distinguish one from another. They're totally gonna find the world's smartest computer is a philosophy major and sits around doing the electronic equivalent of smoking pot all day. Cuz
  • Debug time (Score:4, Interesting)

    by Animats ( 122034 ) on Tuesday June 03, 2008 @11:36AM (#23639975) Homepage

    Soon, if not already, biotech will be able to create genetically modified humans. But it will take a century or so to tell if a given mod was an improvement. It's going to be a very slow development cycle.

  • This is ridiculous (Score:5, Interesting)

    by LS ( 57954 ) on Tuesday June 03, 2008 @11:42AM (#23640077) Homepage
    The "singularity" will always be somewhere beyond the horizon of our predictive abilities. The flaw with the concept is that somehow this event will hit, like a sonic boom. But as we advance, our connectivity and knowledge advance, and our understanding of the world and ability to predict our future also advance (especially if we start augmenting our minds), so that singularity will always be ahead of us.

    From another angle, this is really no different from predictions of rayguns and flying cars decades ago. Have you seen the state of AI and nanotech? It hasn't progressed qualitatively for quite some time. We've got microscopic gears and shitty speech recognition. What makes everyone think that we aren't going to hit some serious physical limits, or that human civilization is stable enough to support this kind of continued advance?

    It's just religion. Nerd religion, but still religion.

    LS
  • slashdot post comments on you.
  • Qin Shi Huang (Score:4, Insightful)

    by bxwatso ( 1059160 ) on Tuesday June 03, 2008 @11:53AM (#23640211)
    Qin Shi Huang was the first emperor of China, and his best scientists tried to develop the singularity of eternal life. To that end, he poisoned himself by ingesting mercury.

    Humans can make small machines, but that completely ignores the fact that we have very limited knowledge about the workings of our cells and we really don't even know what sentient life is.

    In the grand scheme of things, we are only a few steps down the road from Qin Shi Huang. Every generation talks up unlimited life spans, and it is always BS.

    In other words, be prepared to die like everyone else.

  • He puts forward some very sound objections to nanomachines of the Drexler variety.

    Really? I found his objections to be fairly imprecise. For instance:

    the cogs and gears ...have some questionable chemical properties. They are essentially molecular clusters with odd and special shapes, but it's far from clear that they represent stable arrangements of atoms that won't rearrange themselves spontaneously. These crystal lattices were designed using molecular modeling software, which works on the principle that if valences are satisfied and bonds aren't too distorted from their normal values,

  • All well and good (Score:4, Insightful)

    by vorlich ( 972710 ) on Tuesday June 03, 2008 @12:32PM (#23640801) Homepage Journal
    provided the dynamo of technological advancement in society is in some way related to scientific breakthrough. The real world does not appear to bear this out, since what we consider advancement is a phenomenon of the existing economic system. The right discovery has to appear at the right time or it falls by the wayside as unprofitable.
    The slavery-based imperialist economies of the past relied on captive, expendable human labour and looting. There was no compelling need for mechanical transport when slaves could carry you, and no need for extensive infrastructure when the roads were primarily intended to enforce the rule of the empire through the rapid movement of armies. Nor was there any extensive profit in consumer retailing when the majority of the population, locked into feudalism, did not have the surplus income to spend. The Romans had an extensive and often surprising level of technology that the traditional teaching of classical history fails to address at the high-school level. They had fast food similar to burgers, but no extensive empire-encompassing franchise with the motto "Id amo"; nor did their technological abilities extend much past properly constructed water and sewer systems and roads for the majority of the populace. They had all the resources, both physical and intellectual, to develop into a technologically advanced society, but they did not and could not.
    It was not until much later, long after the system that was the Roman Empire had vanished and after the Black Death devastated the populations of Europe, that feudalism ended and human labour became a valuable resource. It was at this point that the cost-effectiveness of machines became apparent, and people were willing to invest time and money in their development and make a profit. The profit part doesn't necessarily appear as the direct result of new knowledge or research. On the contrary, some of the finest examples of our technological advancement, antibiotics and anti-malarials for example, are a direct result of military strategic planning and had nothing at all to do with either venture capitalism or pro bono publico development.

    So yes, The Singularity, just like The End of History (or dare I suggest even the Flying Car!), might be very pleasant but also equally difficult to either pin down precisely or predict accurately.
