
Things That Make Us Smart: Defending Human Attributes in the Age of the Machine

Something we all strive for and pride ourselves on is our intelligence. But are there things that can make us smarter than the machines we've made? Clampe takes a look at Donald A. Norman's book Things That Make Us Smart: Defending Human Attributes in the Age of the Machine to find out.
Things That Make Us Smart: Defending Human Attributes in the Age of the Machine
author: Donald A. Norman
publisher: Perseus Books
rating: 8/10
reviewer: Clampe
ISBN: 0201626950
summary: Hey, it turns out that we are smart!

The Scenario

Many people may be familiar with one of Norman's other books, "The Design of Everyday Things". Well, the good news is that this book is just as engaging, and the bad news is that it isn't all that much different. Norman, the uber-advocate of user-centered design, uses this book to debunk the motto of the 1933 Chicago World's Fair: "Science Finds, Industry Applies, and Man Conforms".

Things That Make Us Smart has a double meaning. Norman spends a decent chunk of this book explaining how humans have well-defined cognitive abilities, like pattern recognition. Not only do we have these cognitive abilities, we're good at them. If you are ever at a cocktail party (which most geeks avoid) and someone says your name, you are likely to pick it out of a host of other ambient noises in the room. So the first meaning of the title is that humans have abilities that prove we are smart. The second meaning is that we have created cognitive artifacts to extend the limits of our minds. These are the "things" that make us smart. Now, we are used to thinking of tools as expanding our physical abilities. We use a hammer so we don't pulp our hands while smashing them against the head of a nail. We use a car because we can't run that fast. What Norman explains is that we have also created a ton of tools that help us expand our minds' capabilities.

It starts with cuneiform. We have bad memories for facts, so when we wanted to remember facts we started writing them on clay tablets. Books were great innovations, since they not only help us remember stuff, but allow us to write down our thoughts and share them with people we'll never meet. Computers are the latest tool we've developed to expand our cognitive abilities. They do things we can't do very well, and vice versa. We can only hold about five to seven things in our memory at any one time. Computers can handle lots more. Still, before you start looking for your personal HAL, there are important things that we can do that our computers cannot. Even computers running a Linux OS. A computer sees a picture of a butterfly as just dots on a screen (yes, I know they are working on this at Bellcore) while we are immediately able to apply meaning to those dots. My favorite example that Norman uses is shooting a free throw. The very things that are easy for us, like identifying the hoop, are incredibly tough for the computer. However, whereas we have big trouble with accuracy, the computer can shoot all day once it has figured out the calculation.

Now this would be a great situation if we were intelligent about it. Technology helps us do the things nature did not wire our brains to do. However, so much of the current technology market is centered around what the machine needs or can do that we expect humans to conform to the technology, making us the tools of the machines. It's an easy trap to fall into. Humans can tolerate a lot of ambiguity. It's one of those things that makes us smart, but Norman argues that we should be designing in a way that augments our lives, not living in a way that validates our design.

What's Good?

This is a great introduction to Donald Norman for those who have not read him. A great bathroom book, you can skip around a lot, and the examples are engaging. The early part of the book also does a great job of teaching cognitive psychology, with sensible examples and descriptions of human cognitive processes. Also, the theory of user-centered design is extremely important, and Norman does a wonderful job of supporting its tenets.

What's Bad?

If you already know a lot about Human Computer Interaction, or are pretty good with cognitive psychology, this book may seem too slow. Also, it's only a mild variation on other Norman books, though if you've not read any to this point, start with this one. The second half of the book is light on quantitative evidence, but that's more because you've entered the land of large statements from the author about the meaning of life than because he doesn't know how to quantify results.

So What's In It For Me?

If you do any programming, or put together sites for general viewing, this is a valuable book for its argument for user-centered design. Almost anyone can get something out of this offering, from the defense of lowly human cognition to the descriptions of how we can use technology more intelligently.

Other important links...

Buy this book at Amazon.

Buy Norman's latest book, The Invisible Computer, which we'll review soon. If you're interested in serious usability engineering, the book to get is Usability Engineering by Jakob Nielsen.



Comments Filter:
  • by Anonymous Coward
    Plato gives a Socrates quote deriding writing as a crutch to good memory and more falsifiable than a good witness. Writing is one of the first artificial cognitive aids, of which computer "intelligence" is a still-imperfect descendant.
  • by Anonymous Coward
    However, so much of the current market of technology is centered around what the machine needs or can do that we are expecting humans to conform to the technology, making us the tools to the machines.

    Isn't it ironic that such a statement (which is true, btw) would show up on a Linux related site? Linux is the epitome of user-hostile design. Reading this book (and using a Mac for a month or two) would probably open some eyes around here.

  • Would we? If we ended up with a truly conscious, thinking being, could we simply turn it off? Assuming that we can create such a thing, flipping the power off would be awfully close to killing a human.

    Your point? We kill other humans all the time, oftentimes for some pretty silly reasons (e.g., nationalism). I don't think that assuring our place at the top of earth's evolutionary ladder is at all as trivial as you seem to assume it is.

    Trust me -- if humanity in general ever had reason to think that its place at the top of the evolutionary food chain was in contest, we'd do what we had to do to ensure that it wasn't -- even if it means killing a potentially intelligent and sentient being and (if necessary, or maybe even if not) its creators. It's kind of a sad commentary, but we can't help it. It's hardcoded into our DNA by the experience of a thousand generations.

    ----

  • by Skyshadow ( 508 ) on Wednesday September 29, 1999 @05:24AM (#1650587) Homepage
    It'll happen; computers have the advantage of scaling much better than our wetware can. You could build a huge computer in orbit *far* more easily than you could build a better human brain. Given that kind of scalability, it'll only be a matter of time and some pretty remarkable programming before we come up with a computer that's smarter than any human.

    At that point, it'll be interesting to see if Darwin takes over. I don't foresee a "Terminator"-type war or anything like that; chances are we'd just get nervous and lobotomize or power off the thing.

    I'm thinking that the evolutionary imperatives hardcoded into our wetware will probably eventually doom any really generally intelligent system. Instead, you'll probably see a Star Wars-type solution where you have systems that do one or two things really well, but are functionally inept in other places (C3P0 can understand 10 million forms of communication, but a unilingual battle droid could kick his shiny metal ass). That way, we can remain the masters (which, as a human, is the way I prefer it).

    ----

  • Your arguments have nothing to do with this book. You are looking at what you think this book says, what the Mac is, and assume that the Mac is the end-all of human factors.

    Not so. A large part of user interface design is speeding up the expert. There are people who count the milliseconds it takes to complete a task.

    Sure, as an expert, Unix is more productive than Windows could be (if you were an expert in Windows instead). Windows assumes some things that are not true for you, but are for most people. When playing a game, Windows is the wrong way to go, which is why most games bypass Windows for a single full-screen task.

    Your transmission argument is a recognized part of human factors. AT&T spends a lot of money building a system that only expert operators can use, and then a lot of money training operators to become experts. They are not stupid; they know that the expert-only system will in the end save them money, as the expert operators need less time to do their job, meaning they can help more people faster. This system that AT&T has designed is considered one of the triumphs of user interface design.

    Proper user interface design requires the goals be set in stone. In AT&T's case, speed was the number one goal; in the Mac's case, they can sacrifice a little speed once in a while. However, Bruce Tognazzini in Tog on Interface reports that for some operations, users who used the keyboard shortcuts said they got the job done faster, but the stopwatch said otherwise! This regards a few very specific situations, and can't be generalized to everything. The point can, though: often while you think you are productive, you are less productive; you just feel like you are working. Step back and examine your habits objectively: is Unix more productive, or does it just seem that way because you have to think about the way to do things, as opposed to the Mac, where you merely had to move the mouse to the right position?

  • The way I look at it, the Open Source model forces a software corporation to abandon its original business model, which is renting intellectual property, along with a little market-share enforcement via proprietary protocols and file formats, and adopt a business model of selling service.

    Now, if a company makes all of its money off of supporting its product, there's NO incentive to engineer usability into that product.

    Now, we all know that if a software engineer put enough effort into automation and UI design, a kindergartner with a mouse could effectively admin a network of database servers (we're talking ideal situations here). But if they can't make money off of supporting the software (because the software supports itself), and if they can't make money selling software licenses, forcing upgrades, etc., then they can't make money, period.

    Put that in your Cathedral and Bazaar and smoke it.

    "The number of suckers born each minute doubles every 18 months."
  • And humanity may not ever make a machine of that magnitude again, for 100 years or so. But eventually, history will be forgotten, and someone will try it again, or perhaps it may happen "accidentally" through processes we do not understand, networking of many "stupefied" computers, genetic algorithms, etc. If that machine ever comes up with a "survival mechanism" that allows it to compete in the physical world, and if that same machine also puts 2+2 together (potentially, I am a threat to them, and if they see that they will destroy me, and they do not now, but eventually may, so I must destroy them as a matter of survival), you can kiss Humanity goodbye, along with McDonalds, tennis shoes, Madonna, and bad haiku.

    "The number of suckers born each minute doubles every 18 months."
  • The brain is designed to accept certain things being in it, and not others. Perhaps by "fooling" the brain into accepting a nanochip, we can avoid the whole rejection problem altogether. Again, our theoretical neuro-chemicals come into play, but it isn't really all that hard to see as a reality...

    I'm no doctor, but I was under the impression that rejection took place on a far lower level than sentience. I thought rejection was purely chemical in nature...?
  • I've always wondered why so much effort is spent in AI and trying to develop a computer that can think. What's the point? Computers are useful precisely *because* they are deterministic, fast at simple arithmetic and boolean operations, and able to store huge quantities of data -- all categories where the human mind happens to be less than perfect. Personally, I suspect that human intelligence is a design compromise -- that, for example, the pattern recognition and more 'fuzzy' logic needed to deal with ambiguity come at the expense of being a little finite-state machine. (Yeah, computers can do this stuff, the same way humans can do coordinate space transformations...) And even if someone does build an intelligent machine, what's the point?
    It just seems like a big gee-whiz project to me. Maybe it's created 'incidental' advances in other areas of computing, but I think that once someone gets voice recognition working, the only AI project -that I'm aware of- which seems to have really interesting possibilities (dictating code would be cool) will be done.

    Daniel
  • What didn't make any sense is to have the droids speak to each other, since they all were centrally controlled by the mainframe in the blockade ship...


  • And you know this... how? Divine inspiration? Akashic record? If you can't give me evidence, at least offer a cogent argument.



    Ah, yes. They may have feelings and thoughts, but not a soul, eh? Descartes said that a tortured dog mimicked pain, but he couldn't really feel it, because he had no soul. And you need a soul to feel, right?


    Panache? Savoir faire? Grit? Pheromones?
  • Perhaps no practical uses have been suggested for this because practical people have seen no evidence for this. *sigh* I would love for any of this to be so, really, but it seems that the better the measurements or statistical analysis, the closer to zero are the paranormal "micro" abilities.
  • Would we? If we ended up with a truly conscious, thinking being, could we simply turn it off? Assuming that we can create such a thing, flipping the power off would be awfully close to killing a human.

    Uh, hello? You do it every day, it's called "buying meat".

    I can't understand what's supposed to be bad about killing anyway. It doesn't matter to the killed afterwards, does it? What's bad is the side effects: fear, grief, pain, and such. These feelings (and other suffering) are why I don't buy meat (or eggs, or milk, or ...).
  • It'll happen; computers have the advantage of scaling much better than our wetware can. You could build a huge computer in orbit *far* more easily than you could build a better human brain. Given that kind of scalability, it'll only be a matter of time and some pretty remarkable programming before we come up with a computer that's smarter than any human.

    You can look at the way civilization has evolved as kind of a scaling of human minds. It's pretty impressive, no? For that matter, the development of Linux is another example.

    As far as computers being "smarter" than humans, think about other machines that exceed human abilities, but we don't much worry about them. For example, cars can go faster than you can run, but so what?

    We don't need to build "brains" any more than we need to build "walkers" (i.e., machines that walk); wheels work much better.

    ...richie

  • Isn't it ironic that such a statement (which is true, btw) would show up on a Linux related site? Linux is the epitome of user-hostile design. Reading this book (and using a Mac for a month or two) would probably open some eyes around here.

    Linux is not hostile to users. Linux/Unix was designed for programmers and programmers love it.

    How many people do you know that go home at night to write code on their Windows or Macs for the fun of it?

    BTW, I've read all of Donald Norman's books and I think they are great.

    ...richie

  • From the moment these intelligent machines exist, technological change becomes unimaginably fast- machines designing better machines.

    How does this follow? Perhaps an "intelligent" machine will have so many layers over the basic computation units that it will be just as good as humans at, let's say, arithmetic.

    Your brain cells can perform some pretty amazing electro-chemical feats, but do YOU know anything about what they do?

    People will be strangely irrelevant.

    On cosmic scale people ARE irrelevant. However, what matters is how people relate to other people. I don't believe that machines can replace that.

    ...richie

  • maybe they will use genetic algorithms. there is no need to understand how things work.

    Maybe. But keep in mind that evolution took 3 billion years to evolve human "intelligence". I don't see why the machines should be able to do it faster.

    ...richie "the skeptic"

  • That was exactly my point, but I once again managed to make it as clear as ketchup. A healthy brain is most likely already operating at 'optimum efficiency'.
    Still the analogy with a neural network holds...
    If one node sends information to another node faster than that node can process information, communication is stalled, or data loss occurs.
  • Cranking up the voltage doesn't make a CPU run faster, just hotter, and if you overdo it just a teeny-tiny bit, it stops working.

    The brain is a 'neural' computer. One of the factors that determines the speed of a neural system is the number of 'neurons'. If you could increase the number of neurons in your brain, and manage to insert them in the right place, you might find an increase in speed. The reverse is observed regularly. Brain damage that randomly takes out neurons (due to diseases like Alzheimer's or encephalitis, or drug/alcohol abuse, for instance) often causes the victim to think more slowly, and have a lowered IQ.

    Another factor in overall speed is the speed of communication between the 'neurons'. This communication takes place via chemicals called neurotransmitters. If you could find a way to optimize the transmission or production of neurotransmitters in your brain, you might increase the efficiency of communication, and thereby overall speed.
    The reverse is often observed. Horrible diseases like Parkinson's disease decrease the availability of neurotransmitters (dopamine in the case of Parkinson's), causing the poor victims (among many other things) to think more slowly, sometimes coming to an almost complete stop.
    Also, many people use drugs to manipulate the levels of certain neurotransmitters. This seems to decrease the efficiency of the brain: users of cocaine may not actually notice they are speaking raving nonsense when they're high, and long-term users often display a decrease in receptors for certain neurotransmitters, probably caused by the brain trying to compensate for the abnormal levels of the chemical (some patients with diseases of the brain benefit from prescription drugs that manipulate levels of neurotransmitters, but if it ain't broken, don't fix it).

    There are probably other properties that determine the speed of a neural network, but I don't know much about the subject. Could someone post a link to some info?
  • Would you be able to theoretically 'overclock' your brain?

    I think that's what caffeine is for.

  • My new sig...

    We are Borg, resistance is futile....

  • me: DHEA, L-cys., DMAE, 5-HTP...
  • The way I feel is that, yes, poetry is a lingual thing. And now I will enter a subject known as "theology". Angels communicate via telepathy--they're beings of spirit, after all, as opposed to beings with physical manifestations such as ourselves. They sing praises to God, and I have gotten the impression that the praises are beautiful.

    Anyway, in this half-baked bit of philosophy, perhaps there are/can be subtle nuances to telepathic communication that we, being non-telepathic, can't think of, as we've never experienced the medium. Or perhaps someone's memory of an ocean beach may actually be better in some way than poetry describing it.

    Not that poetry isn't a really neat (albeit, one might say, primitive) art form. When I played around with a GCS called ZZT (zzt.org may still be up; if not, zeux.org has a ZZT section...), we tried to squeeze as much as we could out of it--people figured out how to develop several different genres of games out of the inherent top-down sort of thing. Its graphics were plain ASCII, but people squeezed really nifty graphics out of it--lush forests, arid deserts, etc.
    We did this despite the numerous other systems available, such as actually programming in C or Pascal (barring the hurdles that we call difficulty and learning).

    Anyway... I wonder if angels play around with language and poetry. We certainly could still do so if we developed some kind of primitive thought-transfer... system.

    But, as a side note, this was generated on-the-fly and is fairly stupid, and such.
  • If you've ever seen One Flew Over The Cuckoo's Nest, you're probably familiar with electroshock therapy. It's similar to what a defibrillator does, only for the brain. It is used to disrupt the electrical pattern of the brain for a variety of reasons... Ending an epileptic seizure is one; stupefaction (forced complacency and the removal of higher-level free thought) is another.

    Raising the charge in the brain is a very bad idea since the circuitry is designed for a very specific environment. However...

    Boosting the levels of neurotransmitters that facilitate signal propagation across the synaptic gap might be promising. This is what many drugs do. But, while drugs tend to act on the brain as a whole, with a particular tweak in the pleasure-center areas, a boost in the cognitive or sensory regions might be what you're proposing. This, coupled with repeatedly firing the desired pathways in order to 'pave the road' for future use, should improve whatever skill or perception you're interested in boosting.

    I'd recommend plenty of rest, exercise, a balanced diet, productive intellectual activity and adequate fresh air. :)
  • That brings up an interesting point of ESP: telepathy. I'm not sure if any of you other /.'ers have noticed this, but I seem to think about things just before someone in my immediate vicinity says something about them. This is an extremely common occurrence; I notice it maybe 20-30 times a day, and I'm sure there are other instances which I do not notice. The only time I really notice it is when the conversation switches.

    The only thing I could sum this up to is telepathy. My theory is this: a human brain operates on a frequency and emits a small amount of waves (radio waves?). Another brain which is tuned into that particular frequency, and sensitive enough to the lowered power (by not being the transmitter), can receive these thoughts. The only hitch--I can't tell if these thoughts are mine or someone else's, so I don't know if I'm reading someone else's mind when I do this. Call me wacko; maybe I am.

    If you had two processors, which also run off a specific frequency, one of them simply powered up and the other one doing a specific task, would it be possible to register in the idle one what the other processor is doing? Just a question.

  • Usually it doesn't have anything to do with what's going on around us. Most of the time it's on a completely unrelated subject that has nothing to do with our surroundings. If it is external stimuli, I take that into account and really don't think about the possibility of the thought originating from telepathic ability; I just attribute it to, 'Oh, I saw that the same time they did.'
  • You'd likely die.

    The electrical impulses that run throughout the brain are the result of highly complex chemical reactions. Overclocking a computer is, in a very real sense, like trying to run water at 100 pounds per square inch through a pipe only rated for 90: it might work, but don't blame the manufacturer if you bust the thing.

    Brains, on the other hand, are parts of bodies, which produce their own electricity (synthesized from nutrients and oxygen). Just adding more electricity to the mix would cause seizures, the possible disruption of autonomic processes (like your heartbeat), et cetera.

    And, if you do it enough, you'll explode. But anyone who watched Max Headroom knows that. ;)
  • Suppose you had a cake recipe that said "...bake at 250F for 30 minutes". If you cranked up your oven to 750F, could you bake the same cake in 10 minutes? No, you'd probably end up with something that was charred on the outside and undercooked on the inside. I suspect that "overclocking the brain" would lead to a similar result.
  • Sounds good; unfortunately, we are NOT simply electrical impulses... electrical signals run the lengths of the neurons, but chemical signals cross the synapses. Sorry... speeding up diffusion is much harder. You could heat things up, but then that would interfere with lots of other vital cellular processes (including the generation of new neurochemicals to continue sending signals). You could replace the synaptic fluid with a different material, but nothing other than a water-based material could possibly dissolve all the chemicals needed. Basically, there's no good way I can come up with to simply speed up the brain.

    However... speed isn't everything. Massive parallelism is incredibly powerful.

    Michael Chermside

  • We really don't know what the limits of our brains are. We have examples of idiot savants and autistic people as extremes, but they aren't quite right. Or child geniuses, prodigies, and talent.

    I'd imagine that, as fast as we make our artificial brains and intelligences, people would keep up and stay ahead. I'd imagine it would actually be easier to train and grow a person surrounded by weather data all their life, and have a much more accurate prediction device than any supercomputer.

    Ick. Imagine. A kid, from birth, born blind and deaf but hooked up through various sensors to weather devices all over the world, and as an adult being able to 'see' weather?

    We'll come up with ever more efficient message passing algorithms and encoding methods to overcome the natural inefficiency of communication, and begin to harness the power of multiprocessing, human style.

    We'll figure out how to train, teach, and educate kids so that we stay ahead of the race. Everyone knows how much of a joke most school systems in the US are, right? How much time is wasted with bullying, rote exercises, stupid lectures? Does anyone else think the system actually slows down and retards the children?

    Just how capable are we?


    -AS
  • Without going into flames here=)

    Newer VW cars have something called Tiptronic transmissions; normally they behave as automatics, but when you flip a switch you can control the up- or downshift at appropriate RPMs, giving you, in reality, both automatic and manual transmissions in one.

    And in the computing world we have BeOS and the soon to be released Mac OSX to give us intuitive interfaces and immediate productivity as well as powerful and useful design.

    So it may very well be possible to have your cake and eat it too =)


    -AS
  • I totally agree.

    I'm not saying rid ourselves of ambiguity and vagueness. They are way too much fun. But, like Graffiti is to Palms and Visors, I'm imagining some sort of reduced-instruction-set language for enhanced communication.

    Being able to think to each other will open other opportunities for poetry and creativity as well. Imagine 'seeing' someone else's imagination. I believe our imagination is every bit as powerful and stimulating as a real sunset, a real rainbow, a landscape, or artwork. Heck, every piece of art or poetry is, in effect, hampered by our inability to express the concept perfectly, limited as it were by two-dimensional mediums, paints, pigments, materials, etc.

    Painting across the cyberscape =)


    -AS
  • by Anonymous Shepherd ( 17338 ) on Wednesday September 29, 1999 @06:47AM (#1650617) Homepage
    A good deal of the research is done in order to better understand ourselves.

    There are also those who study it just to do it. I don't know that anyone is doing it to make a better brain (yet).

    There are also autistic people and idiot savants who actually demonstrate some of the tradeoffs we may have made to gain our intelligence. I also don't know whether the ability to tap into our brains in the same way can be reconciled with retaining human flexibility and adaptability.

    But the fact that people are born with supercomputing capabilities means that it may be very possible, for a slight trade-off in social ability or manual dexterity or what-not, to gain ever more brain power.


    -AS
  • by Anonymous Shepherd ( 17338 ) on Wednesday September 29, 1999 @06:57AM (#1650618) Homepage
    How do we know that it's possible to surpass human intelligence? That humans just won't keep advancing? Do we know that there are limitations to what we can do?

    I'd say that our brains are marvelously developed but have been underutilized for the past thousand years, barring the musical genius or the uber-warrior-general. We keep pumping information and data into ourselves and our children, and surprisingly, we keep up. Imagine humanity years from now, born plugged into a network, some sort of global consciousness. Is it possible? I think so. Can we keep up? Yes to that, too. Imagine communication without all of the ambiguity and vagueness of body language and speech. Sure, it won't be perfect, but it's an incredible vision.


    -AS
  • by Anonymous Shepherd ( 17338 ) on Wednesday September 29, 1999 @09:21AM (#1650619) Homepage
    Would anyone be willing to comment on this?

    Just how much can we scale? No one really knows, right? We have extreme cases, like idiot savants and autistic people, as well as geniuses and prodigies and such, but as we become ever more connected and ever more inundated with an influx of information and sensory data, will we cope?

    I would like to think so. I would imagine that even as computers get faster and more powerful, as we get more resources, we'd just adapt more deftly. We'd invent languages and message-passing technologies to reduce overhead and miscommunication, and increase the capability and efficiency of our world. It's now the world of 24x7, right? I guess one of the biggest limitations right now is sleep. Once we can tackle that problem, we'd have twice as much brainpower*time, right?


    -AS
  • What happens if you boost the electrical charge, just a little bit mind you.
    You'd turn into a Jerry Springer fan. The brain's cognitive strengths come from the fact that it's not a binary system; synapses can be more than just on or off. To that extent, the brain's a lot like a quantum computer (is that in someone's sig?).
    If you increase the signal, you're changing the contents of the brain, not making it faster.
  • Any idea to what extent you would alter someone's personality by minutely adjusting the signal to the brain?
    If you can manage it, there's a Nobel prize in it for you.
    Of course, ECT will fry the brain in a random way, but that's not what you were on about, I suspect.
    As for governmental intervention: sounds like the CIA's MK-ULTRA experiments (sort of).
  • Another question, would the person's personality revert to 'normal' if you removed the extra power?
    My guess is no. The brain works so well because it remembers its state. If it were to reset at the drop of a voltage, we'd all wake up every morning to a brand new experience.
  • Of course writing is more falsifiable than a good witness --- that's why legal documents are required to be notarized and witnessed; otherwise they wouldn't be trustworthy, right? :)

  • by Kaa ( 21510 ) on Wednesday September 29, 1999 @06:08AM (#1650624) Homepage
    What happens if you boost the electrical charge, just a little bit mind you. Would you be able to theoretically 'overclock' your brain?

    Well, first, overclocking has to do with increasing the frequency of the basic 'pulse' of the chip, thus making all operations a bit faster. There is no direct connection with increasing the voltage supplied to the chip. If you just increase the voltage supplied to the CPU, it will not run faster.

    Second, if you stick electrodes into a human brain and send appropriate current through appropriate parts, you can get strange and interesting results. A typical effect is being able to remember with perfect clarity a scene that you thought you forgot completely. A Spanish researcher named Delgado (?) did a fair amount of experimentation around 20 years ago. I don't think a lot of people work on it now, mostly for ethical reasons. Most of the work is being done with people who are heavily mentally ill.

    Kaa
  • I've never taken a course in engineering, but let me ask you guys-- is the idea of "designing in a way that augments our lives, not living in a way that validates our design" really so radical? Do professors still teach that "man conforms" to technology? Do programmers still expect users to learn a new input behavior for each application?
  • Self awareness. This is the first step to an artificial consciousness. But in reality, the nuances of intelligence may be beyond the scope of artificial replication.

    We may create a computer fully aware of itself and others, conscious of all entities in the scope of its senses, capable of reactions and interactions. Capable of solving problems outside a static finite state automaton (or chaotically complex enough to make the states fuzzy).

    Emotions and curiosity and multivalent solutions are all within the scope of its abilities. Morality and sensibility can be replicated. Truth and beauty are open to consciousness.

    But will it be able to comprehend the abstract natures, the duality between non-linear mathematics and temporal reality? We have never solved these problems to the satisfaction of prediction. (See the satellites slowing down article ...). Will our successor be able to comprehend these things?

    If not, will another iteration of intelligence, perhaps an induced wet-matter being to succeed our artificial intelligence, be able to solve these problems?

    It is within our capability to pinch a touch of mathematics and draw out reasonable truths. If the artificial intelligence does not draw these out, will it create intelligence that can? Knowingly doing so might undermine self-preservation, just as our procreation of artificial life undermines our own survival as the dominant species.

  • Just how much can we scale?

    I wouldn't say we can scale. At least not in the way a computer can. Our neural processing capabilities are pretty much fixed. Also, humans don't share information very efficiently, so it's difficult to solve a hard problem by throwing more humans at it.

    It's interesting that we are so self-centric that we cannot solve many global problems (pollution, population, welfare, etc). A computer "person" has the potential to be a single distributed being, so death would be very difficult and then all problems become global in nature.

    I guess one of the biggest limitations right now is sleep.

    It's been argued convincingly that much of the brain organization occurs during sleep. Without sleep, it's doubtful that we would be able to think at all.

    I would say our biggest limitation is death and decay of the body. All the information gained during a lifetime is lost and has to be relearned by later generations. There are big limits to what you can learn in 60 or so years - especially when you spend most of your time on survival.
  • ...keep in mind that evolution took 3 billion years to evolve human "intelligence". I don't see why the machines should be able to do it faster.
    Evolution is a horribly slow designer. Human intelligence works much faster than evolution. But we are limited by the fact that we haven't been able to design smarter people. If we manage to design a machine that is smarter than a person, and the machine is in turn able to design a smarter machine than anything we could come up with, then away we go! It doesn't particularly matter how much smarter each generation is; after five or twenty or a hundred generations, machine intelligence will dwarf human intelligence. And that will be nifty!
    ..brian "the optimist"
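    The compounding claim above is easy to check with a bit of arithmetic. Here is a toy model; the 10% per-generation gain and the "human-level = 1.0" scale are invented purely for illustration:

    ```python
    # Toy model of the "away we go!" argument: even if each machine generation
    # is only 10% smarter than its designer, the gains compound geometrically.
    # The 10% rate and the "human-level = 1.0" unit are made-up illustrations.

    smarts = 1.0              # first machine: roughly human-level
    for generation in range(100):
        smarts *= 1.10        # each generation designs one 10% smarter

    print(f"{smarts:.0f}")    # thousands of times the starting level
    ```

    Even at 1% per generation the same loop passes twice human-level within 70 generations, which is the poster's point: the size of the per-step gain hardly matters once the loop closes.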
  • Yes, the post might be considered flamebait, but I'm going to treat it as a thread-starter on the issue of user-friendliness.

    I challenge the assumption that "users" are all the computer-clueless. I am certainly a user, but I also write software. For me, "user-friendly" takes on a whole different meaning. A Unix-like environment is most certainly friendly to me. I can go beyond interfaces that are designed to be "intuitive" and use interfaces that are designed to be "productive". Do I have to read a few HOWTOs? Yes.

    But the time spent learning (for example) VIM is repaid with gained productivity. I get more done in less time once I've mastered the interface.

    It is a natural trend when mastering any technology. You start out wanting to be productive right away. Later, as you master the basics, you look for the shortcuts to make yourself more productive.

    It may be a fair criticism to say that Unix concentrates on productivity shortcuts at the expense of immediate productivity. It may also be fair to say that interfaces perceived as "friendly" (like the Mac) concentrate on immediate usability at the expense of long-term productivity.

    I look at it this way:
    A stick shift is not as "friendly" as an automatic transmission. I still prefer a stick because I like having torque on demand. I hate waiting for the RPMs to get high enough for an automatic to downshift when I'm trying to pass a semi. I have more control with the stick.

    Comparing Mac to Unix is like comparing automatic to stick. It all depends on what you want to do with your machine. The ultimate interface might combine both, but it might also be so bloated and inconsistent that nobody would want to use it. Can you imagine having both automatic and manual transmissions in your car?
  • The LAN makes a lot of sense, considering the droids were networked to an orbital craft anyhow. I can't imagine it being to prevent ECM; if you can forge a LAN order, you can also jam the LAN and shut the force down.

    However, there is a good reason for them to speak verbally. To get back on topic, the use of verbal communications allows bioforms to interact better with battle droids. The Federation was at a tremendous disadvantage with an all-droid army, as those droids were especially bad at dealing with odd situations (such as Qui-Gon's request to board the ship--it had to think for a few seconds before deciding to arrest them).

    I'm assuming that the Federation simply didn't have the manpower and were stuck with an all-droid army. If you had the money and the manpower, you could have a small bioform officer corps supplemented by droids. In that case, the bioforms have to know what their droids are doing, so the verbal communication is key.

  • It still does, to a degree. Even when the droids aren't talking to you, hearing them talk amongst themselves gives you (as a soldier working with the droids) vital clues as to what they were doing. The idea is that you, as their superior, could hear them quickly discuss a bad plan, and interrupt them. Besides, if I was one of the Federation types, I'd want my droids talkative so I could make sure they weren't going nonlinear behind my back.
  • I seem to think about things just before someone in my immediate vicinity says something about it. This is an extremely common occurrence,...

    The only thing I could sum this up to is telepathy.

    Or it could be simply that both of you are noticing and responding to the same stimuli. You would notice this especially when the conversation switches because the old conversational thread was running down and your minds were casting about for something to stimulate a new one. You happen to attend to the same one at the same time because you are both looking for it just then.
  • Isn't it amazing that this topic should come up so near The Programmer's Stone, and yet be seen as a different one? What makes intelligence is understanding, which is the basis of mapping knowledge. (Or is it vice versa?) Machines don't understand anything. Machines can keep a lot of things in memory at once, and perhaps even make connections between these things, but they never (yet) gain any understanding of deeper connections.

    This was how Deckard spotted the replicants in Blade Runner. They knew the facts (packing), and even some connections between them, but not the meta-connections (mapping); at some level they failed to make connections that are obvious to most people.

    The ability to understand is what makes us smarter than machines. No matter how big and fast the machines are, they don't have imaginary friends.
  • In the Terry Pratchett book "Men at Arms" Cuddy the dwarf designs a cooling helmet for Detritus the troll. Pratchett's trolls are made of silicon, and have superconducting brains. Hence a warm troll is a stupid troll, and a cold troll is a very intelligent troll.

    (And this is not a troll).

    Paul.

  • Ick. Imagine. A kid, from birth, born blind and deaf but hooked up through various sensors to weather devices all over the world, and as an adult being able to 'see' weather?

    Oh dear. You've just described The Child Buyer by John Hersey. Basically, the defense industry takes super intelligent kids, cuts off their senses and turns them into hugely intelligent, but essentially mindless, computing devices.

    It was pretty disturbing. Pick it up if you can find a copy (It seems to be out of print -- maybe try a library?).

  • >>But keep in mind that evolution took 3 billion years to evolve human "intelligence". I don't see why the machines should be able to do it faster.

    It is because we do everything better than nature. We can evolve breeds of cattle much faster than nature did and we can provide more food than any other of nature's animals. We are merely the student that surpassed his teacher. Since we also have nature's examples to work off of, it shouldn't take us that long to have "intelligent" machines. I just hope it takes about another 30 years so I am retired before it happens.
  • Why wouldn't a robot be able to be conscious? Our brains are simply a collection of neurons that pass chemicals to each other and work electrically internally. This we know. That is not much different than how computers work. Since it took nature billions of years to make our brains, I can see why it is taking us a while to do this in computers. But I don't see conscious computers as impossible, we just might not see it for a few decades.

  • It is important to note that the nature of human memory is fundamentally different than the externalized computer storage of "memory". If you study neuro-psychology, you will understand that scientists have had the utmost difficulty in localising memory in the human brain. That is because human memory is not like RAM at all. Every time you recall something, you are not doing a lookup from a physical-electronic memory address. Instead, the impression is brought up as an entirely new creation within your consciousness. To speak of machine sentience, you must consider this very fundamental difference between machine "memory" and human memory, which is an aspect of self-consciousness.

    see: http://home.earthlink.net/~johnrpenner/Articles/HumansNotRobots.html


  • Yes, it took billions of years to get from single cells to human intelligence - but as progress was made, progress became faster.

    It only took hundreds of millions of years to get from vertebrates to mammals, millions to get from mammals to hominids, hundreds of thousands to get from hominids to humans, tens of thousands to go from hunter-gatherer humans to agricultural ones, hundreds to go from agricultural humans to industrial ones, decades to go from industrial to electronic, and a matter of only years to go from electronic to digital computer networks. In fact change has been so rapid that while you and I, gentle reader, are cyberwhatering our brains out on /., there are still hunter-gatherers hunting and gathering in the Amazon.

    There's a Scientific American commentary [sciam.com] from about two years back on this subject that you might want to read.

  • Is it not feasible to utilize neuro-reactive chemicals to force faster synaptic responses, keep the "gates" between synapses open, and boost electric signal in the brain?

    Sure - it's called epilepsy. :-)
  • How long until Kryotech sells a helmet so you can
    cool your brain to -36 degrees celsius so you can overclock it?
  • However... speed isn't everything. Massive parallellism is incredibly powerful.

    now there's an interesting idea. how about getting brains to work together in parallel? I wonder how viable this idea is.

  • While an electrical impulse travels along the length of a single neuron, the transmission in the gap between neurons (which takes up most of the time and gives you most of the interesting side effects), or synapse, is actually a matter of chemistry.

    Which isn't to say that you wouldn't be able to reengineer the brain with electromechanical implants or somesuch - there are some things, like rote memory (as Norman and the reviewer point out), which the brain is bad at. Our eyes, as well - the primary processing could be very much improved, and the brain itself would almost certainly adapt to the increased or more acute signal coming to it. What would be needed is a method to both read and write to the brain, but I think that that is certainly possible given time and dollars.

    I would dare say that these would be implemented before a true artificial brain, because when compared to an artificial brain they seem so much easier to accomplish. (Not to mention that they're necessary preliminary steps to creating the artificial brain).

    --
  • Can you imagine having both automatic and manual transmissions in your car?

    At least one modern car has had this. It never caught on. I've never had the opportunity to drive one, so I don't know if there were usability flaws, or other technical problems, or if the manufacturer just decided to drop it because it wasn't popular enough.

    (I believe that the car was the Mercury Mystique, but I haven't been able to find a link for you...)

    -Ender
  • Not overtly, but such ideas are built into all sorts of technologies, from doors on up on the complexity scale to nuclear power station control centers.

    Do programmers still expect users to learn a new input behavior for each application?

    Unfortunately, yes, and this is one area where Linux is weak. Coming from OS/2 and Windows, I'd grown accustomed to standard keystrokes for all applications. Now, moving to Linux and loading up applix-ware, I suddenly have to remember a whole new set of keystrokes. It gives me 1988 DOS flashbacks.

    I had the good fortune of taking a class from Dr. Norman on user centered design over ten years ago, when he was just starting on the book publishing route. And while he sometimes sounds like a more literate Andy Rooney, everyone who wants to call themselves a programmer should read at least one of his books. At the very least, it will open your eyes to the number of "human errors" that are caused by poor design, and perhaps make you think a little more about how to avoid this sort of thing in the first place.

    Usually the problem isn't so much that designers overtly demand that users conform to their design. Instead, it is merely a matter of designers not even thinking about what designs might confuse a user. The book that best describes this, I think, is Norman's "The Design of Everyday Things", as it uses simple things like doors, ovens, showers and such to show how simple design decisions can wildly affect how usable an item is. If you've ever accidentally pushed on a door marked "pull", you've run into an example of something that was designed poorly, without thought.
  • Off topic I guess, but has anybody wondered why those hi-tech battle droids in Star Wars TPM actually "spoke" or had opinions? A wireless LAN would be far more appropriate, with an AI/KBS voting system upon which decisions to take.

    Oh yeah, it's only a film you say but...

    (Don't you just hate geeks who point out technical flaws in technical films?)

    Mong.

    * Paul Madley ...Student, Artist, Techie - Geek *
  • You'd turn into a Jerry Springer fan. The brain's cognitive strengths come from the fact that it's not a binary system; synapses can be more than just on or off. To that extent, the brain's a lot like a quantum computer (is that in someone's sig?).
    If you increase the signal, you're changing the contents of the brain, not making it faster.


    Hmmm... Any idea to what extent you would alter someone's personality by minutely adjusting the signal to the brain? If what you say is right and altering the signal alters the contents and by extension the personality of someone then you could potentially 'adjust' anyone away from 'undesirable' behaviour with a simple voltage regulator type implant device... Unless the results are unpredictable. I wonder if the government has tried this one on an unsuspecting populace yet?

    Kintanon
  • If you can manage it, there's a Nobel prize in it for you.
    Of course, ECT will fry the brain in a random way, but that's not what you were on about, I suspect.
    As for governmental intervention: sounds like the CIA's MK-ULTRA experiments (sort of).


    Hmmm... I think I'll go buy a couple of white rats and some general anesthetic... Where's my exacto knife and my electrical tape...>:)

    I doubt I could do anything like this by jacking a Rat into a 9v Battery or anything... That'd just toast his head. Someone would need a way to barely increase the voltage until a noticeable difference occurred, without a significant risk of toasting the subject's brain. I don't think I have the equipment for that in my basement just yet...

    Another question, would the person's personality revert to 'normal' if you removed the extra power?
    I would so love to test this if I only knew of a safe way of boosting the signal through the brain...

    Kintanon
  • In the same breath, I can't help but think of the possibilities of integrating an incredibly fast, single-task piece of hardware into a "slow" or "underdeveloped" portion of the brain, and improving it (the brain). I'm not talking about replacing the entirety of the pink mush, but rather, improving where improvement seems to be needed.


    It seems like you would have a LOT of concerns about rejection when implanting something into the brain... I imagine the possibility is there and the technology is almost there, but would even this much of a change in some 'unused' brain portion result in a personality change because of the interference?

    Kintanon
  • My guess is no. The brain works so well because it remembers its state. If it were to reset at the drop of a voltage, we'd all wake up every moring to a brand new experience.


    Hmmm... So, assuming someone found a way to do this, you could nudge the signal a little, keep the signal steady for a few days/weeks/months, then boost it again once the brain was adjusted to it. I wonder what kind of behavioral patterns this would lead to. I wonder if anyone has tried it? *heads for google*
    Heck, if the principle is sound then in theory that would be a way to remove aggressive behaviour from our society (or brainwash the masses into obeying my every whim...).


    Kintanon
  • Suppose you had a cake recipe that said "...bake at 250F for 30 minutes". If you cranked up your oven to 750F, could you bake the same cake in 10 minutes? No, you'd probably end up with something that was charred on the outside and undercooked on the inside. I suspect that "overclocking the brain" would lead to a similar result.

    Ahhh, but what if you only pushed it up to say... 260F? You could bake it for 27 minutes and it would come out fine, and voila! you saved 3 minutes.>:) I'm not talking about plugging your head into the wall. I'm talking about a minute signal boost of the electrical impulses flowing through your brain.

    Kintanon
  • Well, first, overclocking has to do with increasing the frequency of the basic 'pulse' of the chip, thus making all operations a bit faster. There is no direct connection with increasing the voltage supplied to the chip. If you just increase the voltage supplied to the CPU, it will not run faster.

    Forgive my miswording, sorry about that.
    Though as you say here:

    Second, if you stick electrodes into a human brain and send appropriate current through appropriate parts, you can get strange and interesting results. A typical effect is being able to remember with perfect clarity a scene that you thought you forgot completely. A Spanish researcher named Delgado (?) did a fair amount of experimentation around 20 years ago. I don't think a lot of people work on it now, mostly for ethical reasons. Most of the work is being done with people who are heavily mentally ill.


    Apparently pure current can have an effect on the brain. So it is theoretically possible that what I conjectured is applicable.

    Kintanon

  • As I was reading the review of this book I had a thought, just a tiny one. The brain is made up of synapses, neurons, etc... These things run off of electrical current much like the processor in the 'puters we have everywhere. Sooo... What happens if you boost the electrical charge, just a little bit mind you. Would you be able to theoretically 'overclock' your brain? What kind of effect would this have on your thought processes?
    I doubt it's possible even if I understood more about the workings of the brain than I do, but it's still an interesting idea. We are, after all, only electrical impulses at the base of our thought.

    Kintanon
  • I'm thinking that the evolutionary imperatives hardcoded into our wetware will probably eventually doom any really generally intelligent system.

    I'm not sure I agree with you on this...it has been pointed out, even by Richard Dawkins (the uber-evolutionist) that as soon as we gained consciousness, we stopped being slaves to our genetic heritage. Our behaviour is now determined (mostly) by what we consider good for ourselves, rather than what is good for our genes. Contraceptives for example, or choosing career over family.

    As long as we believed we would benefit from an AI, I don't see us pulling the plug. And society has proven remarkably poor at seeing the negative consequences of technology we develop.

    Dana
  • Of course, this is not to say that computers will be incapable of acting human or simulating a self-aware being. I think eventually computers (robots) will be able to simulate human actions/emotion, but they will still lack that "human nature".

    Of course, this gives rise to the question - how do we know that something which *seems* self-aware *is not* self-aware? Surely anything which has the appearance of consciousness - which passes the Turing test, whatever - should be considered conscious? After all, how do we know that anybody else is truly conscious? We can't - it's simply a reasonable assumption, given that
    a] they're humans as well (probably), and
    b] they seem self-aware.
    It would seem that if a is false, then many seem to consider b to be insufficient - why is that?

    - REAL /.ers enter their .sigs manually.
  • Yes, part of AI is the fact that it is impressive.

    However, it does have practical aspects:
    Signal lag between Mars and Earth for the Pathfinder mission was about 10 minutes. The robot had to have a certain ability to make its own decisions - whether to go on, turn back, etc. However, as such missions become more complicated, and reach further from Earth (say, Europa, Titan), then obviously a computer capable of weighing various data and making intelligent decisions based on them will be better suited than a dumb computer using what is essentially an if-then loop. The same goes for any situation where a computer is 'in charge' - nuclear power plants, etc. After all, if the computer has to consult humans, then there is immediate potential for misunderstanding, loss of time, etc.

    Secondly, one of the things that humans are good at is pattern recognition - but one of the things we're bad at is storing vast amounts of data in our conscious minds (ever had the feeling that you recognise something/one from somewhere, but don't know where?). If a computer had such a pattern recognition ability, it could apply it to much larger sample sets of data than a human could in the same time. A recognition of this limitation is the basis of the novella "Sucker Bait" by Isaac Asimov, where a Mnemonic Service is trained from birth to have perfect memories, and then apply the human pattern recognition ability to this information - precisely because computers were unable to.

    - An AI discussion - one of the few things that brings me out of my shell.
  • The voltage of electricity does not overclock - increasing frequency does. All you'd achieve with increasing the voltage of any system is burning out connections.

    Harry
  • Hmmm... Any idea to what extent you would alter someone's personality by minutely adjusting the signal to the brain? If what you say is right and altering the signal alters the contents and by extension the personality of someone then you could potentially 'adjust' anyone away from 'undesirable' behaviour with a simple voltage regulator type implant device... Unless the results are unpredictable. I wonder if the government has tried this one on an unsuspecting populace yet?

    Each neuron in the brain is linked to hundreds of other neurons, and each of those links is of a different strength and can be excitatory or inhibitory. Thus if neuron A has an excitatory connection to neuron C, while neuron B has an inhibitory one, the energy level of C will be (very roughly):

    (energy level of A x connection strength between A and C) - (energy level of B x connection strength between B and C)

    In reality C will get inputs from lots more connections and it will also affect loads of other neurons. This means that a small change to the energy level of one neuron can have pretty unpredictable results.

    In order to accurately predict what effect your changes would have you'd really need a map of the interconnections currently existing between the neurons. Of course the pattern of interconnections is quite different for different people, so a generic map probably wouldn't work.

    Ah well, back to the old drawing board...
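    The rough formula above can be sketched in a few lines of code (a toy illustration only; the activity levels and connection strengths here are invented, and real synaptic integration is far messier):

    ```python
    # Toy sketch of the weighted-sum idea above: a neuron's input is the sum of
    # (source activity x connection strength), with excitatory strengths positive
    # and inhibitory strengths negative. All the numbers below are made up.

    def net_input(connections):
        """connections: list of (source_activity, strength) pairs feeding one neuron."""
        return sum(activity * strength for activity, strength in connections)

    # Neuron C gets an excitatory input from A and an inhibitory input from B.
    a_level, b_level = 1.0, 0.5
    c_level = net_input([(a_level, 0.5), (b_level, -0.25)])
    print(c_level)  # 1.0*0.5 - 0.5*0.25 = 0.375
    ```

    Since each neuron's output is itself an input to many other such sums, nudging any one strength ripples through every downstream neuron - which is exactly why a small global "signal boost" has unpredictable results.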

  • Trust me -- if humanity in general ever had reason to think that its place at the top of the evolutionary food chain was in contest, we'd do what we had to do to ensure that it wasn't -- even if it means killing a potentially intelligent and sentient being and (if necessary, or maybe even if not) its creators. It's kind of a sad commentary, but we can't help it. It's hardcoded into our DNA by the experience of a thousand generations.

    This is a common misunderstanding of evolutionary theory. The idea that humans, as a race, should be on top of the food chain hasn't been coded into our DNA at all. What has been coded into our DNA is the idea that we, personally, should reproduce as much as possible, and then protect those copies of the genes which we passed down.

    As a result of this, the most likely outcome is for us to allow machines to take over the world. Think about it. If the opportunity exists to use technology to make ourselves smarter (and presumably therefore more successful in passing down our genes), those who do not want to use the technology will be left behind as a result of natural selection.

    To sum up, we'll do what's best for the propagation of our genes. Not the genes of other humans.

  • Horrible diseases like Parkinson's disease decrease the availability of neurotransmitters (dopamine in the case of Parkinson's), causing the poor victims (among many other things) to think more slowly, sometimes coming to an almost complete stop.

    Of course, when the reverse happens (extra dopamine) you get schizophrenia. The brain is way more complicated than a computer processor, and any change (in electrical or chemical properties) can usually cause adverse effects. There's a reason why "you don't have to be a brain surgeon" to do most things. It's because of the sensitivity of the brain.

  • I don't find it impossible at all for systems to surpass human intelligence. The past 5 years alone have seen immense advances in representing biological systems in software (neural networks, self organizing feature maps ala Kohonen, etc...).

    And what if we build a box with more smarts? A requisite of volition is a value system (what should I do and why?). Do you suppose that a system which surpasses human cognitive abilities would develop an inferior value system? Or would it see value in productive effort (good)? Would it determine that nihilistic, destructive behavior is evil? Perhaps we might learn something about love and kindness from a digital uber-brain.

    Morality, IMHO, doesn't come from on-high, or by vote, but like any other human advance must be discovered.
  • Imagine communications without all of the ambiguity and vagueness of body language and speech. Sure it won't be perfect, but it's an incredible vision.

    Incredible? Incredibly depressing. It's the ambiguity of our language that makes poetry beautiful--hell, the ambiguity of language makes poetry possible. It's the ambiguity of language that lets prose stimulate our imagination; if we knew exactly what the writing was saying, no less, no more, there would be no room for creative thought on our end.

    Body language, and the inadequacies of spoken languages are what make human interactions so interesting. You can "read" a person you've known for years better than someone you've just met.

    There are certain aspects of our lives that I don't think should be optimized for the greatest efficiency possible.


  • faster != better

  • Actually, it would depend on the speed of the oven. It doesn't matter what temperature you set it to, ovens always heat up at the same rate. And if you've ever cooked something when you're hungry, you know that this rate is not fast enough.
  • Since a lot of people are making the comparison between a brain and a CPU, I suggest you check out this article at the Washington Post:

    The Infinite Brain [washingtonpost.com]

    It's mainly about the "plasticity" of the human brain; its ability to actually change the way that it is arranged. Imagine a computer being able to rewire its circuitry at will, daily, or when some error occurred (like loss of a piece of hardware).

  • Poetry isn't hampered by our inability to express ourselves perfectly; it exists because of it. It is the flaws in language, the double meanings, the vastness of a language that enable us to twist words as poets do.

    If you enable instant and perfect communication, you eliminate art, since art is essentially a means to overcome our inability to express certain thoughts, ideas or emotions.

  • The problem with the Turing Test (and the test used to see if something is alive--not sentient, just alive) is the way that they were developed.

    People didn't say "Alright, here is what it takes to be sentient [or alive], let's go find out what fits our paradigm," they said "Alright, this is sentient [or alive], let's develop a standard to fit it."

    Which brings up another point: Can something be sentient, but not alive? One of the current definitions of something being alive is that it can reproduce. If we were able to create a computer that was sentient, but we created it in such a way that it could not produce others like itself, it would be (by current definitions) sentient but not alive.

    So our definitions of alive/sentient are biased towards what we know is alive/sentient. We haven't taken other life forms into account, because we simply haven't had exposure to them.

    Flesh is very special, when you consider that despite all of our intellectual achievements, we haven't come anywhere close to developing something on the same level as the human body (including the brain).

  • Found somewhere...

    It is by caffeine alone I set my thoughts in motion.

    It is by the beans of Java that my thoughts acquire speed,
    the hands acquire shaking, the shaking becomes a warning.
    It is by caffeine alone I set my thoughts in motion.
  • We don't need to build "brains" any more than we need to build "walkers" (i.e. machines that walk); wheels work much better.
    Do they? Wheels are efficient in their milieu, but they are limited. Wheels are great if you have a smooth roadway or a rail for them to ride on. When you get into slippery, boggy or uneven surfaces, you sometimes can't use wheels at all. There is a reason that the Dante robot (sent to probe a volcano) was a six-legged walker instead of a wheeled or tracked machine; legs can handle things that the others cannot.

    Roads and rails can be viewed as an example of fitting the problem to our solution, adapting the way we live to our technology instead of vice versa. (Wouldn't *you* rather have grass, flowers and trees than streets and highways?) One of the reasons we might want to build artificial brains is because our computers don't have the flexibility to deal with the world in the ways we'd like them to. How much hassle have we gone through to "pave over" problems to make them suitable for our computer systems to handle? Food for thought.

  • Things can only FEEL and LOVE because they are living... Without an integral GROWTH or FEELING organism, you will never be able to teach machines how to CARE [earthlink.net] or LOVE. Logic, with all its merits, cannot be the basis for love; only LIFE can be the basis for love.

    Relevant point. Multiple and emotional intelligences might be effectively simulated by "yes/no" logic, but in believing the simulation, are we even greater fools?

    Then again, if genetic algorithms thrive less from our "teaching" and more from their own coopetitive evolution, are they not alive? Isn't there some chance that some kind of nano-wetware [slashdot.org] "machine" clusters will somehow converge with our egomaniacally sentient, self-aware and "living" selves?

    Re: "feeling" and "self-awareness" as signs of "intelligence": many suspect that any and all life feels love (as the desire to thrive) and fear (as awareness of a threat to survival). How well "life" avoids fear (pain) and realizes love (pleasure) may depend on how its intelligence evolves.
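The genetic algorithms mentioned above really do improve without explicit "teaching": fitness emerges from selection, crossover, and mutation alone. A minimal sketch on the classic "count the one-bits" toy problem (all parameters here are invented for illustration):

```python
import random

random.seed(1)

GENOME_LEN = 20

def fitness(genome):
    # "Thriving" here is simply the count of 1-bits.
    return sum(genome)

def evolve(pop_size=30, generations=60, mutation=0.02):
    pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Competition: the fitter half survives and breeds.
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            # Cooperation: two parents mix genomes at a crossover point.
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, GENOME_LEN)
            child = a[:cut] + b[cut:]
            # Mutation: each bit flips with small probability.
            child = [bit ^ (random.random() < mutation) for bit in child]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))  # approaches the maximum of 20 without any "teacher"
```

Nobody tells the population which bits to set; the pressure of selection alone drives it toward the optimum, which is what makes the "is this a kind of life?" question above more than rhetorical.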

  • I don't foresee a "Terminator"-type war or anything like that; chances are we'd just get nervous and lobotomize or power off the thing.

    Would we? If we ended up with a truly conscious, thinking being, could we simply turn it off? Assuming that we can create such a thing, flipping the power off would be awfully close to killing a human.

    I'm thinking that the evolutionary imperatives hardcoded into our wetware will probably doom any really general intelligent system eventually. Instead, you'll probably see a Star Wars-type solution, where you have systems that do one or two things really well but are functionally inept everywhere else (C3P0 can understand 10 million forms of communication, but a unilingual battle droid could kick his shiny metal ass). That way, we can remain the masters (which, as a human, is the way I prefer it).

    I think this depends on how you look at things. When you say you prefer humans as masters, do you mean our particular species, or those with our peculiar qualities (rational thought, consciousness)? If our machines can develop these qualities (it isn't proven, after all), will they really be less "human" than we are?


  • I think humans are infinitely scalable. I think we are naturally technological creatures, and I think the instruments we create are part of the natural cycle of human life. We don't typically think about it this way, but every tool we use is a technological implement of some type, from the fork to the PC. Nor is our technological advance limited to physical implements. A significant part of human progress is refining intellectual tools, such as logic, languages, and conceptual structures. Such intangible advances have made the "real" ones possible. It's the nature of human beings to progress outward towards greater intellectual and "spiritual" capacity (smell a Hegelian here?). I think we've always been inseparable from our technological instruments, and it's only a matter of time before the mechanical devices we create become physically (eventually, intellectually) integrated into ourselves.
  • I noticed this quote at the bottom of the page after reading this article:

    " ... though his invention worked superbly -- his theory was a crock of sewage from beginning to end. -- Vernor Vinge, "The Peace War" "
  • Even if bots could think, learn, have emotions, have feelings just like humans, turning them off is not nearly as bad as killing a human, or even a cockroach, for that matter.

    Killing a human is an irreversible process, while one can always retain a bot's memory and turn it on again whenever it is convenient.

    It is more like force-feeding them sleeping pills.
  • Would we? If we ended up with a truly conscious, thinking being, could we simply turn it off? Assuming that we can create such a thing, flipping the power off would be awfully close to killing a human.

    I suppose we'd have to redefine what constitutes a "living" organism, and what makes our "OK to slaughter" list and what doesn't. Is the robot smart on the same level as a cow? Kill it. Is the robot smart on the same level as a dolphin? Kill it, unless you like it. Is the robot smart on the same level as Einstein? Well, it gets a little more difficult there, doesn't it?

    And is raw intelligence the only thing that defines a "killable" creature? Do plants "think" on such radically different terms that we cannot even begin to understand it? Is it ok to kill them?

    To kill a robot would be to shut it off. By the same logic, that isn't much different from killing a human: if the robot could think, learn, have emotions, have feelings, just like a human, would it not be alive as well?

    Perhaps a slightly more philosophical take on it is required - does a "living" computer have a soul? It would be wrong to kill any creature with a "soul", would it not?

    As humans, are we nothing more than sophisticated "living computers"? Ah, the mysteries of life..


    .------------ - - -
    | big bad mr. frosty
    `------------ - - -
  • It seems like you would have a LOT of concerns about rejection when implanting something into the brain... I imagine the possibility is there and the technology is almost there, but would even this much of a change in some 'unused' brain portion result in a personality change because of the interference?

    Rejection is, of course, always an issue. Luckily, science and medicine have provided workarounds for us. Even today, we have the ability to place all manner of bits and pieces all over the human body, with only small chances of rejection. Now, the brain is an area which we haven't delved (pardon the pun) into as much as, say, hands or feet, but it is only a matter of time before we truly understand why and what makes it reject things.

    The brain is designed to accept certain things being in it, and not others. Perhaps by "fooling" the brain into accepting a nanochip, we can avoid the whole rejection problem altogether. Again, our theoretical neuro-chemicals come into play, but it isn't really all that hard to see as a reality...


  • I'm no doctor, but I was under the impression that rejection took place on a far lower level than sentience. I thought rejection was purely chemical in nature...?

    That's exactly the point: sentience is, in effect, chemical in nature. "Fooling" the brain would be chemical in nature. Thus, we could use chemical additives to alter the natural ones and eliminate rejection.

    neat! :)

  • Hmmm... Any idea to what extent you would alter someone's personality by minutely adjusting the signal to the brain?

    Is it not feasible to utilize neuro-reactive chemicals to force faster synaptic responses, keep the "gates" between synapses open, and boost the electrical signal in the brain?

    I'm sure that "optimizing" the way our brain works would make us very different people, to say the least.

    In the same breath, I can't help but think of the possibilities of integrating an incredibly fast, single-task piece of hardware into a "slow" or "underdeveloped" portion of the brain, and improving it (the brain). I'm not talking about replacing the entirety of the pink mush, but rather improving where improvement seems to be needed.

    Furthermore, it stands to reason that by using these "neuro-additives" in conjunction with the hardware, we could open the pathways to otherwise under-used or completely unused portions of the brain, tapping their (currently unexplored) abilities.

    Could the next stage of human evolution be the amalgamation of man and machine? It is the great question of all things sci-fi, but is it really that far from the truth?


  • just a thought - have you ever used speed??
  • When computers can consistently pass the Turing Test (or some equivalent response-based criterion) over a wide range of testers, I'll consider them as sentient as humans. Because when you get down to it, everyday life is nothing more than an extended version of the Turing Test. Think about it: why do we consider humans to be sentient? We are nowhere near understanding how the human mind truly works. All we really have to base our ideas of sentience on are people's actions, reactions, and flesh. And frankly, I don't think the flesh is all that special. Figuratively speaking, it's just the medium on which the code is stored.
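The "response-based criterion" above can be made concrete: a judge sees only two anonymous transcripts and must say which came from the machine. If the judge's hit rate stays at chance, the machine passes. A toy sketch (the prompts, canned replies, and judge are all invented for illustration):

```python
import random

random.seed(0)

CANNED = {"2+2?": "4, obviously", "favorite color?": "blue, I guess"}

def human(prompt):
    # Stand-in for a person's replies.
    return CANNED.get(prompt, "hmm, not sure")

def machine(prompt):
    # Stand-in for the program under test; here it mimics the human exactly.
    return CANNED.get(prompt, "hmm, not sure")

def trial(judge, prompts):
    """One imitation-game round: the judge sees two anonymous
    transcripts and guesses which one is the machine."""
    pair = [("human", human), ("machine", machine)]
    random.shuffle(pair)
    transcripts = [[respond(p) for p in prompts] for _, respond in pair]
    guess = judge(transcripts)          # index 0 or 1
    return pair[guess][0] == "machine"  # did the judge catch it?

def naive_judge(transcripts):
    # With identical transcripts there is nothing to go on,
    # so this judge can only guess.
    return random.randrange(2)

prompts = ["2+2?", "favorite color?"]
caught = sum(trial(naive_judge, prompts) for _ in range(1000))
print(caught / 1000)  # hovers near 0.5: chance level, i.e. a "pass"
```

Notice that nothing in the loop inspects flesh or circuitry; the verdict rests entirely on responses, which is exactly the poster's point.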
