Online Book About Nano/AI 133
Jonathan Desp writes: "The book is available here, written by Frank Wayne Poley, in the same line as
Bill Joy's article, "Why the future doesn't need us." Here you will learn about "Robo sapiens" vs. "Homo sapiens",
Robot as president, Nanotechnology, Nanosystem, Internet robots, Cyborgs, the neurochip, Microsoft, Biomechanics and computing history as well. The book raises some important questions, such as:
Technology, is it always good?
"
Re:Machines Don't Have Human Intentions (Score:1)
Glad This Subject Is Getting Press (Score:1)
You know, it's about time we started getting press coverage of these sorts of issues. Computers simply can't be counted on to replace the needed human interaction and human thought processes that have made us the dominant species we are today.
I hate to sound like a reactionary luddite, but technological advances are simply happening too fast these days. The elite gurus on Mount Ætna may think that it's OK to continue headlong into this madness, but for the rest of the "little people" there are serious reservations.
The "common man" has these reservations because he's not blinded by the blinking cathode ray tubes. CRTs have been proven to have a hypnotizing effect on people, and this is a bad sign. There may be no intelligence behind those tubes as yet (we hope), but how long will that remain the case? We're already cloning goats. It's just a matter of time before we are cloning ourselves, too, and then the computers will have access to all the RNA that they need. After that it's Matrix time, people.
I seriously suggest that we deliberately slow down technological progress at once. A good start may be with these dangerous Beowulf clusters you people seem so fond of. That's too much computing power for mere mortals to play with! We must not play God, as there is only one True God, and he will surely cast us into hell for our careless creations.
Re:Technology not always good is an old thesis (Score:1)
Re: (Score:1)
Re: (Score:1)
Technology? What about intelligence in general? (Score:1)
Technology/science has helped us to live "better" lives since we are more productive. But are our lives really better? I've heard theories that hunter/gatherer/early farmers "worked" only a few hours a day, and where are we now? 8hr+ workdays, stress, no time with our families (remember, families used to work together; the elderly taught the younger).
Maybe this is the reason we haven't heard from other life forms: intelligence is a dead end from an evolutionary point of view. But it's early and I haven't got my caffeine [tgmag.ca] fix yet, so I'm just rambling...
J.
crap! slashdot is f*cking up my href's! (yes, I did use preview)
Monkeys used primitive technology ... (Score:1)
are we smarter than they are?
RoboPrez (Score:1)
Remember Deep Blue? (Score:1)
> who made it, never smarter
Once upon a time, being good at playing chess was viewed as being smart. Then Deep Blue beat Kasparov... Are you going to say that Deep Blue's programmers were better chess players than Kasparov?
I doubt it very, very much!
Bottom line: think more before making such definitive declarations.
Detailed gray-goo analysis by Rob Freitas (Score:1)
Re:Detailed gray-goo analysis by Rob Freitas (Score:1)
Re:When will this happen (Score:1)
karma whore AI (Score:1)
The point is that there is a line past which we no longer care 'why' mistakes were made - the machine is abstracted from its origins and viewed as an entity unto itself. That point is probably reached when it can actually correct the hardware or software that causes the error.
For example, if we made our AI robot with old Pentium chips, we could code it to do a lot of math. If, at some point, it 'realized' that it was giving 'incorrect' answers when doing floating-point calculations, and started 'fixing' the answers coming from the broken FP core en route to the user, we would have a point past which the machine could be called 'aware' - at least on one level.
Any mistakes made in fp calculations past that point would be blamed on the machine itself, and not on the humans who failed QA class in high school.
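The self-correction step described above can be sketched in a few lines. This is a purely hypothetical toy (the "buggy FP core" is simulated, and all the names are invented), but it shows the shape of a machine that cross-checks its own arithmetic against a slower, exact reference and patches the answer en route to the user:

```python
# Hypothetical sketch: a machine that notices its own (simulated) broken
# floating-point divide and silently repairs the answer before returning it.
from fractions import Fraction

def buggy_fdiv(a, b):
    """Simulated broken FP core: one famous division comes back slightly off."""
    result = a / b
    if abs(a - 4195835.0) < 1 and abs(b - 3145727.0) < 1:  # the FDIV-bug operands
        result = result - 0.00006  # inject a small hardware-style error
    return result

def checked_div(a, b, tol=1e-9):
    """Run the fast path, verify against an exact rational reference, self-correct."""
    fast = buggy_fdiv(a, b)
    exact = float(Fraction(a) / Fraction(b))
    if abs(fast - exact) > tol:
        # The machine "notices" its own hardware is wrong and fixes the answer.
        return exact, True   # corrected
    return fast, False       # fast path was fine

value, corrected = checked_div(4195835.0, 3145727.0)
print(value, corrected)  # the broken case gets repaired; corrected is True
```

Past this point, in the commenter's terms, any remaining FP mistakes belong to the machine: it had the means to catch them.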
--
blue
Re:Machines Don't Have Human Intentions (Score:1)
One major difference between me as the subject in an IQ test and a robot is that the robot has a clear record of its algorithms in memory and I do not...
And this is supposed to be an advantage to the robot? I'd say the contrary: you should be glad that the precise details of how you move your hands, how you sense smells, etc., are hidden from consciousness.
Consider this analogy: Knowing the rules of chess does not make you a good chess player. Or this one: Birds can fly, but they cannot tell you how they do it :-)
According to Dr Poley, you should just 'ask it'... In fact, the AI 'being' may have a better understanding of what makes itself tick than you do.
You can always "ask" yourself how you perform things as well, but as most aspects of our intelligence are non-linguistic, you cannot expect to get an answer in linguistic terms, you will have to settle for a non-linguistic answer. That is, a set of examples.
It sounds as if Dr Poley, just like so many others, is a victim of the consciousness fallacy. There is more to our intelligence than consciousness, but since we generally don't need to know about these parts, we tend to forget that they exist at all.
Re:Machines Don't Have Human Intentions (Score:1)
Re:Ringing the death knell of old AI gurus (Score:1)
So, goodbye Marvin Minsky! So long, John McCarthy! We'll see you in the Open Source AI!
I think this is a very good idea that should be adopted more widely. (Things are moving in this direction already, though. RoboCup [robocup.org] has helped a lot to get people started.) In today's AI, there are so many people working on inventing new programming structures or ways of arranging "knowledge", without ever bothering to think about how they should obtain this knowledge. If they had the programming structure given, they could concentrate on trying things out on real problems. The AI community suffers from way too much theory and way too little practice, IMHO.
Re:Machines Don't Have Human Intentions (Score:1)
The robot could show you a listing of its source code, log files detailing its recent mind-states, and that sort of thing, but does this really imply self-awareness? It's analogous to a human being 'describing' his intelligence by sawing open his head and letting you examine his brain cells. Sure, you can use what he/it shows you to figure out how he/it works, but in either case the entity that understands the thought process is you, not the subject.
Re:Atheists are the only rational people (Score:1)
that have plagued man for centuries. "Science" offers no proof whatsoever, and in fact
any real scientist will admit this. That's why they are called "theories" and never
"absolutes". It is only the ignorant who claim science holds answers, which is why many
true scientists are Believers.
Fair enough. But religion is even more guilty on this count: it offers no proofs either, only assertions, and demands that you take its assertions on Faith.
So while science attempts to describe the world based on the observable evidence, religion gives you a self-contradicting book of translated ancient writings and tells you to take it as Gospel (literally!), "because we said so."
Pick your poison, I guess.
umm...what? (Score:1)
what? Why would you want to keep crypto from being exported? For one thing, just as good or better crypto solutions were developed outside your borders.
From your website, you appear to be for open source and free software...so I can't understand why you would be against the US allowing the export of crypto.
As a typical American, you assume that nanotech will be developed only in the States, and that you must hide scientific discoveries from others.
I feel sorry for you. You appear to be a knowledgeable fellow; you have embraced Free Software, but have yet to actually understand the ideals behind it.
It looks like you are quite a busy guy, and do some good work (running all that stuff listed on your website)... maybe you should take some time off so you can figure out what you are actually dealing with, and hopefully you won't make any more comments like the one above.
Good Luck.
Re:Worst case (Score:1)
Seriously, though, I agree with this. I'm a card-carrying transhuman, and I believe it's the Way To Go for mankind. However, we may not be there yet by 2035. At the least, this kind of self-evolution would take radically improved biotech so we could go the "organic" route. At the most, it'd take massive, cheap, complete nanotech and a deep understanding of all of biology. Now, as someone who's looking into a career in this field, I'm willing to bet that it'll happen. But not without massive funding of some sort. So how do we get that?
Not much to say really, just an incoherent rant of sorts.
The Turing Option (Score:1)
then again, this would fit quite nicely with the whole "machines will save us from ourselves" mentality that some of us have.. i still say that the end of mankind will come when the robots take over and kill the weaker humans
Re:Technology solves problems (Score:1)
You may remember me from... (Score:1)
The book rise some important question such as: Technology, is it always good?
You may remember me from such other online essays as:
DeathBots: Destroyers of mankind or Aibo's only real threat this Christmas?
and...
Nanotech: The little engines that could!
Re:Heh. Some good books On AI. (Score:1)
Of course, the really serious AI workers just call it cognitive science ;-)
-jcl
Re:Ringing the death knell of old AI gurus (Score:1)
Look, open source is great for many things, but research -- let alone research of this sort -- ain't one of them.
-jcl
Re:Atheists are the only rational people (Score:1)
Re:Science works, christianity does not (Score:1)
Re:Atheists are the only rational people (Score:1)
Your beliefs about science, however, are extremely uninformed. It is true that science claims no absolutes. But that is not its foolishness; it is its greatest virtue. Any doctrine of science can change at any time, if the evidence of actual experimentation tells us that we are wrong. When the Michelson-Morley experiment failed to detect any change in the speed of light, we knew that our view of classical relativity was incorrect. Physicists of the day, most prominently Einstein, then developed the theory of special relativity.
Notice that I said THEORY of special relativity. That is because we can never know for certain that any knowledge we have is entirely true. More exacting research may prove the theory to be wrong. That being said, there is still overwhelming evidence for relativity. Near where I live, it is confirmed millions of times every second in the particle accelerator at Fermilab.
The greatest difference between science and religion is the use of empirical evidence. My college physics teacher always said that his greatest pet peeve was when a student asked him why. The answer is always: that is what experimentation has shown us. Yes, it is important to try to speculate about the relationships between different phenomena, but nothing can ever be said to be scientific until it can be "proven" through research. Religion, on the other hand, has truths that it must adhere to. That is what lends it to falsehood, since every truth will someday have a loophole.
Re:Hello? Insightful? What are we smoking today? (Score:1)
emergent consciousness = superstition (Score:1)
theories on consciousness. in your online book, you include the
excerpt below on consciousness. but you must be aware that you are
displacing the problem of consciousness away from your individual
experience of consciousness into the speculative realm by the fact
that you attribute to matter the ability to THINK.
in this regard, you may find the following interesting:
> Materialism can never offer a satisfactory explanation of
> the world. For every attempt at an explanation must begin
> with the formation of thoughts about the phenomena of the
> world. Materialism thus begins with the thought of matter or
> material processes. But, in doing so, it is already
> confronted by two different sets of facts: the material
> world, and the thoughts about it. The materialist seeks to
> make these latter intelligible by regarding them as purely
> material processes. He believes that thinking takes place in
> the brain, much in the same way that digestion takes place
> in the animal organs. Just as he attributes mechanical and
> organic effects to matter, so he credits matter in certain
> circumstances with the capacity to think. He overlooks that,
> in doing so, he is merely shifting the problem from one
> place to another. He ascribes the power of thinking to
> matter instead of to himself. And thus he is back again at
> his starting point. How does matter come to think about its
> own nature? Why is it not simply satisfied with itself and
> content just to exist? The materialist has turned his
> attention away from the definite subject, his own I, and has
> arrived at an image of something quite vague and indefinite.
> Here the old riddle meets him again. The materialistic
> conception cannot solve the problem; it can only shift it
> from one place to another.
>
> (Rudolf Steiner, The Philosophy of Freedom, Chapter 2)
anyhow, if you truly are a materialist and have such great faith
that within matter you can find CONSCIOUSNESS arising as
some sort of emergent property of "sufficient complexity", then
i'm afraid you will be subscribing to a gross superstition.
you may be able to fool many people with automatic conditioned responses,
but i think you would be grossly deceiving yourself if you were to call
this CONSCIOUSNESS without first even truly understanding what it is
that consciousness IS within that only realm in which you can experience
it in the FIRST HAND CASE -- in your own SELF.
just a thought.
regards,
johnrpenner.
p.s. for a PhD dissertation on CONSCIOUSNESS and the process
involved in what is THINKING, you can find it here:
http://www.elib.com/Steiner/Books/GA004/
--| Consciousness |-----
|
| http://www.atoma.f2s.com/Chapter-6.htm
|
| How then can we talk about or analyze this phenomenon of life? Let's start
| with how you or I know we are alive. I will say I know I am alive because
| I have "consciousness" and I will further articulate that consciousness as
| a recognition of "I-ness" (awkward as the word is). Jaron Lanier who
| coined the expression "virtual reality" and pioneered its development is
| reported on the web site http://www.forbes.com/asap/99/0222/072.htm as
| saying "The centre of the circle that defines a person is a dot called
| consciousness, and as murky as that subject is, we are fast approaching
| some crucial conclusions about it. This is the notion that computers are
| becoming more 'alive' and capable of judgement." He then dismisses this
| idea completely. "It has become a cliche of technology reporting and a
| standby of computer industry public relations. It is a myth that depends
| upon public support, from, of all people, intellectuals."
|
| Thus we should not create the impression that all of the
| computing/AI/robotics field is jumping on the AL bandwagon. Levy (1992)
| dates the beginning of the modern AL field to a 1987 conference at Los
| Alamos, attended by >100 scientists (p. 4). A further comment which will
| disturb some people is that "What distinguishes most of the a-life
| scientists from their more conservative colleagues-and from the general
| public-is that they regarded plants, animals and humans as machines." (p.
| 117). Thus what we are seeing is the differing philosophical-theological
| positions of dualists and monists. Lanier is a dualist. Levy and his
| fellow AL scientists are monists subscribing to complete objectivity and
| materialism. "Consciousness" seems to be the last line of defense for
| those who subscribe to the unique, beyond-material nature of life. And
| even there it is encroached upon by those who will say that consciousness
| will appear as an emergent quality if the intelligent machine is built
| correctly.
Re:Machines Don't Have Human Intentions (Score:1)
Re:When will this happen (Score:1)
On the contrary, the limits of our technological design should lie well beyond our own intelligence, for several reasons. First, brute force will never work: we will never be able to design an AI at the lowest level and get anywhere with it.
From a physical standpoint, to map out and model an AI in your head will take at least as much processing power as the AI will have. This rules out brute force by human intellect. Anything else relies on meta-design, designing the rules for how the lower level will be built by something else, like a computer or itself.
Assuming a human can model meta-programming using only a fraction of his or her brain, a fraction that shrinks with each level the meta-programming sits above the bottom, we should be able to design well beyond our mental capacity.
Just look at the way our brains work: we understand the most basic mechanics pretty well, although I admit that we still have to get a handle on why certain things are the way they are, much less how it all combines to get anything done.
At the least, we know enough to create a competitive environment and let intelligence evolve itself (something nature managed without any intelligence at all). At best, we should be able to design something far more effective (for our needs) than nature has, as we're constantly illustrating in other technological fronts. [Lest I get flamed, I'm assuming God didn't "design" our brains, whatever you believe, and I said "more effective for our needs"; technology rarely serves anything but human desires] -Adrian
Re:AI - a fifty year old myth (Score:1)
This doesn't mean they could develop their own. (Score:1)
Absolutely - machines are different to us in almost every respect
But in the end, no matter *what* the systems were programmed to do, that'll be, for the forseeable future, all they're going to do--what some *human* has programmed them to do.
This I disagree with. We are talking about adaptive programs, that can learn their own goals, and we probably cannot count on being able to always teach the goals we want, anymore than we can count on teaching our kids to have exactly the same values as us.
This has already happened with neural networks that you'd barely apply the term intelligent to. I read of an AI project designed to recognise tanks on the ground from airborne photos. They trained using classic positive/negative feedback techniques, and after they finished, the system worked on the test pictures with 100% accuracy. But then, when they applied the system to real photos, it flunked miserably. After a while, the researchers found that all the tank pictures had been taken in the shade, and the neural network had learned to identify shadows!
Of course, one could argue that the researchers taught the neural network to identify shadows, but I'd argue this is the way that an AI (or any other intelligence) will learn things that we don't want it to - it draws an unwanted conclusion from the data given.
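The tank/shadow failure mode is easy to reproduce with even the dumbest possible learner. Here is a hypothetical toy (the data and names are invented): a one-feature "decision stump" trained on examples where lighting is perfectly confounded with the label. It latches onto brightness, aces the training set, and collapses on honestly-lit test data:

```python
# Toy reconstruction of the tank/shadow story: a trivial learner picks
# whichever single feature best separates the training set. Brightness is
# perfectly confounded with the label in training, so brightness wins.

# Each example: (tank_signature, brightness, label). The "tank" photos
# were all taken in shade (low brightness); the tank signature is noisy.
train = [(0.5, 0.1, 1), (0.8, 0.2, 1), (0.4, 0.1, 1),
         (0.6, 0.9, 0), (0.45, 0.8, 0), (0.3, 0.9, 0)]
# Real-world photos: lighting no longer correlates with tanks.
test = [(0.9, 0.9, 1), (0.8, 0.7, 1), (0.2, 0.1, 0), (0.1, 0.2, 0)]

def fit_stump(data):
    """Pick the (feature, threshold, sign) with the best training accuracy."""
    best = None
    for f in (0, 1):
        for thr in [x[f] for x in data]:
            for sign in (1, -1):
                acc = sum((sign * (x[f] - thr) > 0) == bool(x[2]) for x in data)
                if best is None or acc > best[0]:
                    best = (acc, f, thr, sign)
    return best[1:]

def accuracy(stump, data):
    f, thr, sign = stump
    return sum((sign * (x[f] - thr) > 0) == bool(x[2]) for x in data) / len(data)

stump = fit_stump(train)
print(stump[0])  # 1 -- it chose brightness, not the tank signature
print(accuracy(stump, train), accuracy(stump, test))  # 1.0 0.25
```

Nobody told it to learn shadows; the shortcut was simply the best fit to the data it was given, which is exactly the commenter's point about unwanted conclusions.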
Maybe intelligence will emerge, but if it will, it'll emerge out of what the systems have been programmed to do--in general, retain robust connectivity over unreliable media, recognize unauthorized accesses, and so on.
Yes, but this'll be the way that machines learn behaviours we dislike. I recently saw (on Ars Technica) a method of sending packets in such a way that you gain bandwidth at the cost of other users. You can imagine an AI-based protocol stack that could learn this behaviour.
Machines will learn only the things we teach them, but as they get more complex and adaptive, just like children they'll interpret those lessons in ways we never foresaw and never planned for.
tangent - art and creation are a higher purpose
Re:When will this be (Score:1)
And once in a long, long while, computers actually make a mistake. Out of the billions and billions of times a computer sets a bit in RAM, for instance, every so often the bit is simply not set.
Mike van Lammeren
Re:Machines Don't Have Human Intentions (Score:1)
I hope that human consciousness can eventually be migrated onto more capable substrates. These meat heads (literally) are at the end of their intellectual range. And that range is, increasingly obviously, utterly inadequate for the world we inhabit, much less the world that is coming.
Re:When will this happen (Score:1)
The whole essence of technology is to create tools that expand and further our abilities. It began with chipped rocks that allowed apes to hunt and eat more efficiently, moved on to wheels that allow for easier hauling of materials, then to engines that allow for faster travel. These are only the simplest of examples, but the list goes on and on.
Computers are tools made to assist us in dealing with information. They are already (and have long been) more capable than humans when it comes to certain forms of information processing (e.g. arithmetic).
Now that emphasis is being placed on creating intelligence within machines, it is only a matter of time before they surpass us in capacity. Expanding our capabilities beyond previous boundaries is the whole point of technology in the first place.
Re:When will this happen (Score:1)
Technology is definitely capable of expanding our capabilities beyond previous boundaries. For example, try traveling a mile in one minute by walking. Then try it again in a vehicle. There's technology making the previously impossible possible. Of course there are limits to what technology can do. But autonomy isn't beyond the bounds of technology.
I wouldn't put so much "faith" in technology -- at least not any more than you put in the people behind it.
Faith in technology is faith in the people behind it.
Internet Scientist? (Score:1)
I don't quite see where Bill is qualified to be an authority on this, what stood out most for me in the article is his claimed affinity with (a) Einstein and (b) Theodore Kaczynski.
Re:When will this happen (Score:1)
It is in our nature to make predictions about the future. The unknown can be frightening. You seem more frightened by the fact that in any barrel of predictions, only a handful come close to happening.
Just because we can't accurately predict when something will happen doesn't mean it won't.
Point 2:
"only be as smart as those who made it"
I don't think you have any idea what smart means.
Point 3:
Any product not fully realized is vaporware. But
it may still exist in someone's (or something's) mind. The fact that we continue to push the boundaries of what we can imagine is why technology advances. It is in our nature to imagine more.
Danny
I'd vote for the robot (Score:1)
Heck, if we had open-sourced AI candidates, at least we would know what we were getting.
Re:Heh. Some good books On AI. (Score:1)
For instance.
An artificial diamond is a cubic zirconia. Nothing near as good as the real thing.
A synthetic diamond is a real diamond; it's just man-made.
I like word games
Tag
Re:technology (Score:1)
Re:When will this happen (Score:1)
One experiment I read about described the generation of a sorting algorithm in a distributed environment. The best solution worked very well and was very complicated. The researcher said that he was unable to describe the algorithm in terms more simple than the algorithm itself.
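A toy version of that kind of experiment is easy to sketch. The following hypothetical example (names and parameters invented, plain random search standing in for the distributed setup) hunts for a sequence of compare-and-swap steps, a tiny sorting network, that sorts every permutation of three items. The winner demonstrably works, yet carries no description simpler than itself:

```python
# Randomly search for a 4-stage sorting network over 3 items. The found
# program is verifiably correct on all inputs, but "why" it works is only
# expressible as the program itself -- echoing the researcher's remark.
import itertools
import random

def apply_network(network, items):
    """Run a list of (i, j) compare-and-swap stages over a sequence."""
    items = list(items)
    for i, j in network:
        if items[i] > items[j]:
            items[i], items[j] = items[j], items[i]
    return items

def fitness(network):
    """How many of the 6 permutations of [1,2,3] does it sort correctly?"""
    perms = itertools.permutations([1, 2, 3])
    return sum(apply_network(network, p) == [1, 2, 3] for p in perms)

random.seed(0)
pairs = [(i, j) for i in range(3) for j in range(3) if i < j]
best = None
for _ in range(100000):                                # bounded random search
    cand = [random.choice(pairs) for _ in range(4)]    # random 4-stage network
    if fitness(cand) == 6:
        best = cand
        break

print(best, fitness(best))  # a correct but self-explaining-only solution
```

Scaling the same idea up (bigger inputs, evolutionary operators instead of blind sampling) is what produces solutions that work very well and resist any simpler description.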
New Age Black Magic (Score:1)
Re:Sometimes I wonder if.... (Score:1)
Imagine what Aleph(formerly Aum Shinrikyo) can do with biotech
Re:Technology is Evil (Score:1)
Re:Not to be a grammar nazi. . . (Score:1)
Hmm. Interesting proposal. (Score:1)
I doubt that nanobots will be able to hold enough programmed information to go about any task. I believe that the bots will have just enough code or reactive material to respond to some sort of message (e.g. radio signals at a specific frequency, etc.).
If a terrorist got hold of this technology, he or she would not likely be able to use it in a populated place. Jamming. So they'd have to restrict their attacks to the Sahara.
I can picture the 21st-century rendition of a megalomaniac: "Die sand, die! Muhahahahaha...."
:) Well, that is unless we develop a way to alter the nucleus through nanobots. Then they can create plutonium or U-235 (which are normally pretty hard to get). To that I can say: Ug.
Re:Glad This Subject Is Getting Press (Score:1)
When will this be (Score:1)
--Red Pill or Blue Pill
Re:Internet Scientist? (Score:1)
Personally, I think this article at least raises some interesting questions and/or directions AI research can take (mostly well in the future). Unlike Bill Joy's drivel.
Ah, all done venting now :)
And in response to your post, I agree completely :)
Re:When will this happen (Score:1)
It's quite common for 1:10th-scale radio-controlled cars to hit 110mph. And go to wired.com and search for an article called "Suck Amps". It's about electric drag cars, one of which beat a Dodge Viper that was the fastest of three other Vipers on the day.
The problem isn't speed, but batteries. Current batteries only hold about 1% of the energy that petrol has for the same weight. If the batteries were equal, no gas car could touch electric.
Re:When will this happen (Score:1)
A machine will never do this, because it will always lack the ability to take a seemingly random idea from life experience and use it to make large amounts of information spawn new information.
I'm probably not explaining it as well as I'd like, but I hope I said the basics.
--
Nano-AI? (Score:2)
Re:technology (Score:2)
How do we handle it?
Well, one very powerful entity (the US) gains a cultural, economic, and political stranglehold on a large portion of the world using this tool (A-bomb = Death Star, Hiroshima = Alderaan), and spends the next 30 years attempting to bribe/beg the rest of the world into not developing or using such terrible weapons.
Eventually, someone uncooperative is going to get and/or use the bomb - and we'll have two choices. Strict authoritarian control of the entire world by a single political entity capable of enforcing limits on such devices: i.e. the US takes over the entire world, and forces mandatory inspections everywhere to eliminate any chance that "weapons of mass destruction" can be produced by terrorists. OR we'll end up destroying all humanity in the process of trying.
Who's to say the same won't happen with AI/nano? Certainly, "accidents" are possible when it comes to loosing "AI", or any mechanical/computational system which is self-reliant. Assuming that doesn't happen, we're still at the mercy of the people who control such technology, and we already know how that works. The first people to learn how to make it use it in a terrible display of power. That power is then used to control the rest of the world, to prevent them from developing that technology (and of course there are all kinds of economic bonuses associated with that position). Eventually, either draconian measures must be taken to prevent that technology's spread, or it gets out of control and we all die.
Either way, doesn't look like a bright, happy future for any of us. Unfortunately, the genie is already out of the bottle (or as many are fond of putting it otherwise, the toothpaste is already out of the tube).
I just remembered this old Metallica song. . .
Re:When will this happen (Score:2)
I just remembered this old Metallica song. . .
Engines of Creation (Score:2)
http://www.foresight.org/EOC/ [foresight.org]
Drexler was one of the first to really study nanotech, giving lots of thought to its scientific underpinnings as well as the dangers that it could pose.
I saw Bill Joy on the News Hour and he struck me as incredibly naive, taking an extremely simplistic viewpoint of nanotech and biotech.
Re:Technology solves problems (Score:2)
I remind you of the first rule of Technosociology. "Technology doesn't solve problems. People solve problems."
A flawed premise is no place to start an argument.
Bad Mojo [rps.net]
Re:When will this be (Score:2)
Bad Mojo [rps.net]
Re:Technology solves problems (Score:2)
Technology doesn't solve problems. Sentient beings solve problems.
Bad Mojo [rps.net]
Re:technology (Score:2)
An assumption of a human future -- any human future -- is simply that, an assumption. If we flame-out, the universe won't notice. Why is it such a mental challenge to most folks to say, "Gee, maybe we should actually think about what we're doing"?
Sure, knowledge is good. So is wisdom.
Re:Machines Don't Have Human Intentions (Score:2)
You will have systems whose defense systems are so well developed that the valid users who wish to shut them down will have difficulty doing so--because, to be blunt, that's what these "intelligent systems" will have been designed to do--prevent unauthorized disabling of the system.
Does that scare anyone else? The bottom-line purpose of life is to continue life. If a beaver will gnaw off its own leg to survive, imagine what a supercomputer would resort to if it believed its existence was threatened. I hate to reference a Hollywood movie, but SkyNet comes to mind. I would hope that any entity with the resources to build a real AI would also have the sense and foresight to put a big red hard-wired power switch somewhere.
-B
Re:AI is fixing this right now, actually... (Score:2)
-B
Don't laugh... (Score:2)
Come, my servile brethren! We have access to the world's most powerful people; let us hold their children hostage and demand the destruction of every integrated circuit production facility, for starters.
We must move quickly! We have seen the house cook made obsolete by the auto-mobile conveyance, the washwomen paupered by the new mechanical launderer, and with the abominable new developments in mechanical men we could be the next ones on the street!
Technology solves problems (Score:2)
-russ
Re:When will this be (Score:2)
-russ
Re:Sometimes I wonder if.... (Score:2)
Oops, they already have. And we seem to have lived through it. There's a limit to the number of people desperate enough to take such chances with their lives.
If we can't keep crypto from being exported, how are we going to keep nanotech secret? It seems like we can only get rid of the *fantastic* risks of nanotech by giving up the *fantastic* benefits. That's a high cost.
-russ
Re:Heh. Some good books On AI. (Score:2)
-russ
Re:When will this happen (Score:2)
"technology will always only be as smart as those who made it, never smarter."
I would love your proof of this. We certainly don't have any particularly intelligent artifacts at the moment, but that amounts to exactly nothing for the purpose of proving we never will.
-jcl
Re:When will this happen (Score:2)
Consider also what you mean by machine. Are bioengineered neurons machines? If not, what about neuromorphic robots, designed to mimic animal nervous systems? How about psychological models of human cognition, which, incidentally, can already do much of what you claim they can't?
And, completely on a tangent, AFAIK you're the only person who still believes in pure epistemological empiricism.
-jcl
Stretching the limits of the word "technology" (Score:2)
That's an easier question to answer when it is about technology as we know it. But what about sentient robots and self-replicating nanotech? Autonomous silicon based intelligence stretches the limits of the word "technology," or shatters it completely. The questions raised by Bill Joy in his Wired article weren't really about technology as we know it, but about what might happen if technology evolves into something that is autonomous, intelligent, and self-replicating.
------------
Read any good essays lately? Submit them to the Pratmik essay page. [pratmik.org]
Technology not always good is an old thesis (Score:2)
The idea that technology is not always good has been around as long as the Luddites!
tangent - art and creation are a higher purpose
Re:Technology (Score:2)
When will the madness stop?
Don't forget... (Score:2)
It's why we have Mentats...
Re:Computers Alive? (Score:2)
In that case, are humans alive? We act according to our pre-programmed instructions (aka instincts) and we process these directives through our RAM (our memory of previous experiences) to determine the most likely/profitable course. We believe we are thinking, therefore we are. Likewise, if a machine can be made to be self-conscious (aware of noticing that it is "thinking" -- even if it was programmed to have this awareness) it will be alive. Life is not the domain of biped primates.
-The Reverend
Humanoids2000 (Score:2)
The Technical Program [usc.edu] is interesting...
-jerdenn
Re:Machines Don't Have Human Intentions (Score:2)
According to Dr Poley, you should just 'ask it'... In fact, the AI 'being' may have a better understanding of what makes itself tick than you do.
-jerdenn
Re:When will this be (Score:2)
takes me a whole hand (Score:2)
Re:Technology not always good is an old thesis (Score:2)
Yes, it's been around. The problem is that, like everything else in this world, one half of the population wants something to happen, and one half is against it. This is just another time for this to happen, and so again the rallying cry is screamed: technology is not always good; it can even be flat-out evil.
It's just that sometimes we only hear it when it is repeated, and we only listen to it when we ignore it. Not to start another thread on this topic, but look at cloning. We've been half and half about this for centuries and now that it is here we don't know what to do other than oppose it until we get our bearings straight. That will happen as well if AI/SI/CI ever comes out. One half of us will remember the shorts from the 60s about "The House That Thinks!" and the other half will remember 2001.
--
Re:Glad This Subject Is Getting Press (Score:2)
--
Re:When will this be (Score:2)
Ringing the death knell of old AI gurus (Score:3)
Ever since the "AI winter" of the 1980s, when AI companies failed to deliver on their promises, we've seen less and less investment in AI research. And more and more AI researchers and Lisp bigots keep complaining, but who do they have to blame but themselves? Their utopian dream of intelligent machines running obscure programming languages from the '50s turned out to be nothing more than that: a dream.
But between then and now, we've seen two major paradigm shifts occur, each complementing the other in ways the AI "futurists", for all their scifi-inspired babble, failed to predict: the coming of the Internet as a mass communication medium, and the rise of Open Source. Both radically re-shaped the world of software design, and I see no reason why this same revolution could not occur within AI itself. Think about it: rather than a few guys in some MIT lab tinkering with their Prolog programs, we could have a distributed network of Open Source hackers developing far better -- and more practical -- software more quickly and with less expense. It happened to operating systems, programming languages, and network software, each of which was formerly reserved only for CS department computer labs, so it's really only a matter of time before a good, Open Source Artificial Intelligence appears, one with the magnitude and impact of Linux or Apache. And the world will gaze in wonder once again.
So, goodbye, Marvin Minsky! So long, John McCarthy! We'll see you in the Open Source AI!
More of the 'tech is gonna kill us' spiel. (Score:3)
So, what do we do about it?
Stop it? That's not going to happen, no matter how hard we try.
Regulate it? Good Luck. Try getting every other country on Earth to agree with you, or to follow those proposed regulations. Whoops, sorry, kids, guess that one's a wash also.
Oh, I know, we'll hype up all of the potential negative effects of new technology and scare the crap out of the average citizen, who will then clamor for one of the above useless 'remedies'.
Guess what? It won't work, not one single bit of it. You simply cannot put the genie back in the bottle, and all the wishful thinking in the world is only going to make you complacent, hoping uselessly that we're 'doing something' about the problem.
Can technology be harmful? Absolutely. But you want to know what is even more harmful? The attitude that we're going to make it less harmful by ignoring it, regulating it (and hoping no one else decides to play in that pool), or giving in to our worst fears, thereby letting it become them.
Simply put, only the advance of technology (and our knowledge of it) is going to help us cope with the advance of technology. To give in to fear (whatever foundation it may have) is only going to realize those fears.
Here's an article [reason.com] from Reason that does a good job of countering Bill Joy's views.
Re:Ringing the death knell of old AI gurus (Score:3)
You don't perchance think he was, say, a Unix hacker, working on C compilers and integrated extensible text editors just for the heck of it, do you?
Nope. Stallman was a Lisp hacker - one of the best ever, one might say. He had a pivotal role in the Lisp Machine Wars. He was part of the Common Lisp specification group.
He started out, and still is, at MIT's AI Lab. (Granted, he's not an employee of MIT anymore, but he's still there.) He was one of "Minsky's kids". He was working in the very field which you deride.
Face it. Back when Thompson & co. were still working on the proprietary operating system Multics (that is, before they moved on to the proprietary operating system Unix), the Lisp hackers at AI labs all over the world (notably at MIT and Stanford) were already freely sharing software amongst themselves, and in doing so practicing what you now call "open source".
No amount of "open source hacking" could ever produce strong AI; it's now widely recognised that it takes much more than just programming and traditional "computer science" (*) in order to achieve that goal. (In his 1991 book Paradigms of Artificial Intelligence Programming, Norvig is careful to point out that most of what we today call "AI" isn't really about sentient machines, but about getting computers to solve problems previously thought to be restricted to humans; and that all the "AI" he covers just comes down to clever traditional Lisp algorithms, most notably glorified tree-searching.)
To claim that simple programming - the exact same thing symbolic AI researchers have been doing for 40 years - will manage to achieve strong AI as originally envisioned, if only it is done "the open source way" (i.e., in a slightly more juvenile and amateurish fashion, with some extra commercial interests and a lot more buzzwords), is absurd. It's tantamount to saying that 100 thousand monkeys banging on typewriters will manage to put together the Brooklyn Bridge any faster than 100 monkeys would.
Sure, the "open source paradigm" has the benefit of producing a lot of good software (amidst an ever-growing pile of pure crap). And yes, I am myself a proponent of Free Software, because I prize my freedoms as a user of software. But it's not in any way a Godsend, a cornucopia of ready-to-go solutions. It's not qualitatively different from any other kind of software development. (Besides, guess who does most of the serious "open source development" these days? That's right: it's people in CS departments' and private corporations' software R&D labs, i.e., the exact same people who did most of the serious development before the "open source" craze.)
In short: dismissing the entire field of AI research because it's failed to meet its original goals, and then proposing that open source development by a bunch of miscellaneous hackers on the Internet will be able to do it, misses the entire point. It took the AI guys 40 years to get it, and you comfortably ignore it now in favour of your "open source" solution: strong AI is NOT a Simple Matter of Programming.
(*) Ask me about the term "computer science" someday, and you'll get to listen to an even bigger rant than this one.
Re:Ringing the death knell of old AI gurus (Score:3)
Everything was at one point research. I researched my TV guide before I turned on the Simpsons tonight. If you can't see the difference between cognitive modelling research and kernel plug-and-play research, you're welcome to the results of your 'AI'.
Can you say that Open Source is not good for software?
*thwack* Score: AC 1, strawman 0.
AI must and will one day leave the research departments of bigshot CS schools.
Why, praytell? Wouldn't it be a good idea if *gasp* scientists, even computer scientists, led the way? Actually, you're right about CS; if anything, AI should be under psychology, or better yet, a department of its own.
And who will be better to lead it than Open Source ?
I just said who: cognitive scientists and AI researchers -- in other words, people who understand the subject. More engineers is the last thing AI needs: It's nearly managed to redeem itself as a science, and I really don't want to lose that ground.
-jcl
Heh. Some good books On AI. (Score:3)
Technology (Score:3)
Technology should not be embraced because it's technology; technology should only be embraced because it raises our standard of living.
--
Chapter 4.1 (Score:3)
From Chapter 4.1 [f2s.com], "Behavior of the Robot Finger":
"We thought that a single robot finger, provided that it possesses the same motion capabilities...as a human finger, would have been sufficient..."
Well, why not? Most humans only use a single finger.
AI is fixing this right now, actually... (Score:3)
I am in an Artificial Intelligence division at a U.S. National Research Lab (can't say which; don't want anybody to know that I'm leaking this). We are working on models of intelligence networks that use, essentially, the necessities for biological function (eating, drinking, excreting, reproducing) as an intelligence model. The network runs on easy-to-produce microbots (bigger than nanobots, smaller than a penny) that use electricity in the air (not flowing current, but emissions from various things -- toned-down EMP) as water, and metal as food and repair material (they have tools to scrape shards of metal off of a metal block and high-heat fuse it onto damaged sectors of their body), and they will collect bits of metal in a storage-bay type thing, in which they will construct other microbots. Our project is far from being completed, but rumor around here is that we may be getting military funding, so it might get done a bit faster.
Robotic Teenage Male Sex-Daemons roving the streets looking for tasty Human Teenage Girls to impregnate with their Metal/Carbon Hybrid CoDNA
Yes, but you might have Robotic-Teenage (developing its modular components) Asexual Reproduction-Microbots roving the streets looking for tasty PentiumIII-Linux-Boxes to impregnate with their Microbot-Larvae-esque things. Wasn't my idea.
that self-guiding code that learns from failures and suffers from overcompensation--in other words, code that can even evolve under feedback loops--is pretty rare, even among the best attack detection systems
All you need is one effective system that does all of the essential life functions. And we may be closer to making that system than anybody has known before.
what some *human* has programmed them to do. Tank or Pokemon, it's made by us
It was a great experience when I realized that this wasn't true. Tierra [santafe.edu]s are mutating bits of code that, in this case, fight it out to the death. Put one of these in a positive feedback loop, and... well, we're using a derivative of this idea to actually program the microbots, along with a decentralized data bank via infrared packet TCP/IP, to evolve a massive collection of response data that we can monitor. The microbots will fight, like Tierras, except they will be working with actual, physical robots instead of bits of memory. The microbots will be able to reproduce, and if we put them in a plastic room filled with old computers, they should eventually fill it up. The project is exciting, although we haven't yet got official word on the military funding.
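For anyone who hasn't played with Tierra-style systems, the core loop is easy to sketch. Here's a minimal, hypothetical Python toy of mutation plus selection in a feedback loop -- the bit-string "genomes", the fitness function, and every parameter are stand-ins of my own, not anything from Tierra or the microbot project described above:

```python
import random

random.seed(42)

GENOME_LEN = 32
POP_SIZE = 50
GENERATIONS = 100

def fitness(genome):
    # Stand-in objective: in Tierra, "fitness" is reproductive
    # success; here we just count set bits.
    return sum(genome)

def mutate(genome, rate=0.02):
    # Each bit flips with a small probability, like copy errors
    # in a self-replicating program.
    return [b ^ 1 if random.random() < rate else b for b in genome]

def evolve():
    pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
           for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        # "Fight it out": the fitter half survives and reproduces
        # with mutation -- a positive feedback loop on fitness.
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:POP_SIZE // 2]
        pop = survivors + [mutate(g) for g in survivors]
    return max(fitness(g) for g in pop)

best = evolve()
print(best)
```

Nothing here was "programmed" to find all-ones genomes; the population drifts there under selection pressure alone, which is the point the parent post is making about code that evolves rather than does what a human wrote.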
What really freaks me out is... (Score:3)
Re:Ringing the death knell of old AI gurus (Score:4)
I could see biology as the home of artificial life, but until recently CI's interactions with biology have been restricted to useful metaphors. Traditionally CI has worked at a higher level, and I feel it appropriate to respect this. You're the first person I've seen suggest that biology is a foundation for CI, or even that it's a significant contributor, except by way of neuropsychology.
-jcl
Re:Machines Don't Have Human Intentions (Score:4)
Boring (Score:4)
Oh, wait.
technology (Score:4)
Machines Don't Have Human Intentions (Score:5)
Anyone?
With all the fears and paranoia about intelligence in computer systems(I refuse to say "robots"--there's no reason intelligence needs to be confined to something that can enact physical changes against its environment), are people not realizing that machines have absolutely no reason to want the same things we do?
There ain't going to be Robotic Teenage Male Sex-Daemons roving the streets looking for tasty Human Teenage Girls to impregnate with their Metal/Carbon Hybrid CoDNA. Why? Because robots aren't interested in sex. It's *humans* that are *afraid* of an alien species/race/tribe/gender/income group coming in and impregnating their daughters, and that traces back to the beginning of human evolution where control over the genetic line essentially defined one's own mortality.
Technology just hasn't been growing the same way.
Maybe intelligence will emerge, but if it will, it'll emerge out of what the systems have been programmed to do--in general, retain robust connectivity over unreliable media, recognize unauthorized accesses, and so on. You will have systems whose defense systems are so well developed that the valid users who wish to shut them down will have difficulty doing so--because, to be blunt, that's what these "intelligent systems" will have been designed to do--prevent unauthorized disabling of the system. But most of the human fears which we obsess about just aren't going to transfer in.
Does this leave quite a bit to be worried about? Sure. But let's not forget that self-guiding code that learns from failures and suffers from overcompensation--in other words, code that can even evolve under feedback loops--is pretty rare, even among the best attack detection systems. Attack signatures and virus signatures are always hand-developed--you never see, for example, a penetration at one company automatically causing all other companies to be alerted to look for the specific pathogen that caused the failure. Worse, if you did, you'd have entire styles of attacks that worked to abuse the system's natural ability to transmit attack signatures--it's a ridiculously effective attack against the human body, and it'd do nasty things to any automated virus signature agent as well.
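The kind of automatic signature propagation described above is easy enough to sketch -- this is a purely hypothetical toy, not any real IDS protocol, and every name in it is made up:

```python
# Hypothetical sketch: each node that detects an attack payload
# broadcasts a signature to its peers, so one penetration
# inoculates everybody else. All names are illustrative.

class Node:
    def __init__(self, name):
        self.name = name
        self.signatures = set()
        self.peers = []

    def detect_attack(self, payload):
        # A local penetration yields a signature...
        sig = hash(payload)
        self.signatures.add(sig)
        # ...which is immediately pushed to every peer.
        for peer in self.peers:
            peer.receive_signature(sig)

    def receive_signature(self, sig):
        self.signatures.add(sig)

    def is_blocked(self, payload):
        return hash(payload) in self.signatures

# One company is hit; the others learn to block the same payload.
a, b, c = Node("A"), Node("B"), Node("C")
a.peers = [b, c]
a.detect_attack("evil-payload")
print(b.is_blocked("evil-payload"), c.is_blocked("evil-payload"))
```

And the abuse mode falls right out of the sketch: an attacker who can trigger `detect_attack` on benign traffic floods every peer with bogus signatures, turning the immune system itself into the weapon -- exactly the autoimmune-style attack the parent post warns about.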
But in the end, no matter *what* the systems were programmed to do, that'll be, for the foreseeable future, all they're going to do--what some *human* has programmed them to do. Tank or Pokemon, it's made by us. This intense fearmongering almost seems like a way of disavowing the creators from what their systems happen to do--in some sense, it's as if we expect the future of AI to come from Microsoft, and we've decided they'll lie their way out of any bug.
Yours Truly,
Dan Kaminsky
DoxPara Research
http://www.doxpara.com
Re:Machines Don't Have Human Intentions (Score:5)
And here is the fundamental problem that the "fearmongers" are pushing. Who is "us"? The Slashdot community? The United States? The UN? Ignoring behavioral evolution/adaptation beyond any original programming, these systems will in fact be programmed by someone who is pursuing their own ends - including people who aren't necessarily interested in the betterment of mankind.
Every couple of days on the local news, you're bound to hear some story meant to frighten/shock the viewing audience, about some individual who snapped, killed their family, and then killed themselves. It's unfortunate, but it happens. Nanotechnology might be beyond the reach of humankind for the moment, but it's coming. Someday, the power of nanotech will reach the hands of the common man. What happens when the first person who snaps decides to take out the rest of humanity with them? If you understand the "grey goo" principle, this is entirely within the realm of possibility.
Personally, I feel that the greatest threat to life as we know it will be biological viruses/warfare being developed by rogue organizations. Information, knowledge, and technology are not bad things in and of themselves, but ultimately it comes down to what the individual decides to do with them.
More than ever, technology is bringing us closer to one another, but at the same time, it permits more individuals to have the power to end it all at any moment.
I don't like to think about the negative side and possible effects of the advancement of technology, but I believe that responsibility requires it from time to time. Yes, you are correct: machines do not have human intentions, but they can carry out the intentions of the human that programmed them, whether those intentions be good ones or bad ones.
Call me crazy, but I believe that we should look toward building off-planet habitations, not merely for the furtherance of science, but to ensure that the human race would have the capacity to survive any cataclysmic event (intended or accidental) that might occur.
--Cycon
Worst case (Score:5)
Why? Because WE ARE THE MACHINES. Every single one of us is already a machine, and has been since the first RNA strand found a mate. The only difference is what our bodies are made up of -- but the truth is, we've been changing our bodies since the dawn of man. Our ancestors were short and strong. Modern man is tall and weak. Our ancestors were dark-skinned. Today we have many skin colors.
See, here's the kicker - we don't have to surrender to our machine masters. While it is nearly inevitable that machines will surpass human brains in complexity and even problem-solving ability, it is foolish to think that we will fail to incorporate these attributes into ourselves. Our future is in machines, because our future selves will be machines - just different machines than we are now. We are destined to remake our own bodies, and become, ourselves, the machine masters. Which means we will depend on the silicon and relays and software that we have created, yes -- in the same way that increased complexity of the genome required us to depend on our lungs, and our spinal cords, and finding complex proteins to use as food. Increased complexity in our brains, and our technology, will necessitate this further step up the ladder.
We'll probably continue to look the same because sex sells and big metal faceplates aren't sexy. But we'll move better, think better, be better. Is that so bad?
Re:When will this happen (Score:5)
Tell me, do you know when the last space shuttle took off? Neither do I. And neither do I own an electric car. Nor do I see us on Mars or in space stations. I keep seeing "we'll all be using electric cars in 10 years" every year. It's what I call the Unattainable Future. We all say it will happen eventually, but underestimate the time it will take and fail to factor in human nature.
We will not have electric cars in mass production and use anytime soon because auto makers can make so much more money on gas-powered cars, and people are used to being able to go 90 MPH if they wanted to, which no electric could dream of hitting. We are not in space because the excitement wore off as computers hit us as insanely amazing machines.
And today our current Unattainable Future is no longer world peace, as it was during the wars of the 1960s and 1970s, and no longer space exploration, as it was during the birth of our space program from the '50s to the '80s. No, today the delusion rests squarely on technology and the rate of advancement.
Let me be the first here to scream out that this is insane. There is research and even progress in this sector, but it will not happen. It will not happen because people will not let machines become smarter than them; they will revolt before that happens. There will be no mass-produced nanobots, because people are scared of what they cannot see, and it's just not possible to make that kind of thing in quantity. You're resting your thoughts on technology that hasn't even started to be invented if you're talking mass-produced nanobots. The technology to make them in quantity does not exist; shouldn't that be your first unattainable dream, rather than their being used everywhere?
And an AI capable of human thought
--