
The AI Anxiety (washingtonpost.com) 207

An anonymous reader writes: The Washington Post has an article about current and near-future AI research while managing to keep a level head about it: "The machines are not on the verge of taking over. This is a topic rife with speculation and perhaps a whiff of hysteria." Every so often, we hear seemingly dire warnings from people like Stephen Hawking and Elon Musk about the dangers of unchecked AI research. But actual experts continue to dismiss such worries as premature — and not just slightly premature. The article suggests our concerns might be better focused in a different direction: "Anyone looking for something to worry about in the near future might want to consider the opposite of superintelligence: superstupidity. In our increasingly technological society, we rely on complex systems that are vulnerable to failure in complex and unpredictable ways. Deepwater oil wells can blow out and take months to be resealed. Nuclear power reactors can melt down. Rockets can explode. How might intelligent machines fail — and how catastrophic might those failures be?"
This discussion has been archived. No new comments can be posted.

  • by Anonymous Coward

    opposite of superintelligence: superstupidity

    There's been a great documentary made about superstupidity. [imdb.com].

    • Re: (Score:3, Informative)

      by Tablizer ( 95088 )

      There's an "automation-related" saying that dates back to at least the 1970s:

      "To err is human; to really foul things up you need a computer."

      • by Z00L00K ( 682162 )

        I agree, but then we have a lot of trolls too.

        Today a computer has a hard time understanding irony. And even if you have an AI, add an internet troll and Cleverbot (OK, it's a kind of trolling AI) to the equation, and that AI wouldn't be able to distinguish Friday from Monday.

  • by Anonymous Coward on Monday December 28, 2015 @10:49AM (#51195467)

    It's politically incorrect to say this, but most humans are not intelligent. Their behavior follows predictable patterns. Their intellectual life is next to naught.

    If you are not intelligent, you are expendable and fungible like any other industry-raised sheep. Born out of industrial breeding, fed with a formula, and led straight to the slaughterhouse. It doesn't take an evil super AI to beat you. You are already beaten by the mechanism that is called the System.

    Be intelligent.

    • by ShanghaiBill ( 739463 ) on Monday December 28, 2015 @02:33PM (#51196999)

      It's politically incorrect to say this, but most humans are not intelligent.

      But you are one of the rare smart ones, right?

      https://xkcd.com/610/ [xkcd.com]

    • Be intelligent.

      Yes. Learn how to log in before posting!

  • > "Anyone looking for something to worry about in the near future might want to consider the opposite of superintelligence: superstupidity"

    At least we know what superstupidity looks like and what it does, based on observing Washington, DC. One lesson is to minimize the control and possible impact from failures of these complex but inherently stupid systems. See, for example, the office of the vice president - a VP might say some stupid things, but it doesn't really do too much damage.

    The new crop of candidates provide another lesson in superstupidity. They aren't exactly a bunch of brain surgeons either.

  • ... in tears.

  • by tverbeek ( 457094 ) on Monday December 28, 2015 @11:02AM (#51195531) Homepage

    Anyone worrying about computers outsmarting us and taking over in the near future has insufficient experience with both Windows and Siri.

    • Seems to me that people dumb enough to be worried about computers outsmarting them are already too late.

  • by rsilvergun ( 571051 ) on Monday December 28, 2015 @11:13AM (#51195605)
    making most jobs obsolete. That's something that could happen in the near term. Since almost every country bases its quality of life on the jobs it hands out, that doesn't bode well for me, a member of the working class.
    • by symes ( 835608 )

      The notion that machines and automation will destroy jobs has been around since the first looms were constructed in the north of England in the 19th century. What I think will happen is that the market will adapt. With a greater supply of labor, salaries will decrease and open up opportunities for people to work in service areas previously not thought of, especially the leisure market. We might see a new wave of domestic staff appear, and I personally would like my own butler.

      • without basic health care in the USA, and with more 1099 arrangements that push the costs of the job onto the worker, driving them under minimum wage / getting off-the-clock work from them.

      • But would you like to be a butler? And what would it pay?

        • by ranton ( 36917 )

          But would you like to be a butler? And what would it pay?

          Just like today, the lowest end service jobs will probably not be very desirable. But they will put food on the table for those who would otherwise not have economic value to society.

    • by raymorris ( 2726007 ) on Monday December 28, 2015 @12:12PM (#51195977) Journal

      > making most jobs obsolete.

      That's already happened a few times. At one time, most people worked in agriculture - mankind spent most of its time feeding itself. An "economic downturn," therefore, was when a lot of people starved to death.

      After machines such as plows, and later GPS-guided combines and largely automated food processing plants, did most of the work of producing food, inexpensive food became readily available and humans had time for things not strictly necessary for survival: education, writing and printing books, comfortable clothes, and producing consumer goods like washing machines and dishwashers. With the efficiency of machines, those goods and conveniences - books, washing machines, refrigerators - became available to the masses.

      No longer scrubbing our clothes on a washboard, we then had time to make and play video games.

      The replacement of human labor with machines has been a continuous process for over a thousand years, peaking about 200 years ago. In the process, our standard of living has gone from digging for anything we could eat in the winter to stopping by Walmart to select which of the 160 different fruit and vegetable varieties we want to munch on while we enjoy our Netflix movie.

      • by ranton ( 36917 ) on Monday December 28, 2015 @02:03PM (#51196827)

        That's already happened a few times [...] The replacement of human labor with machines has been a continuous process for over a thousand years, peaking about 200 years ago.

        There are at least two things that are likely to make this time different (obviously no one knows for sure):

        This change in the workforce will be much faster than previous ones. We are already starting to see this happen, and in my opinion it is almost the sole reason the middle class is shrinking in the US (when not accompanied by market-forces-agnostic income redistribution, that is). The loss of agricultural jobs happened gradually, over more than a 50-year period. Other labor shifts in the industrial revolution happened even more slowly. Advances in AI (even ignoring Strong AI) have the opportunity to disrupt industries in under 10 years. The two situations are hardly comparable.

        Previous changes in the workforce involved removing manual labor jobs. This disrupted the economy for two species that relied on these jobs: humans and horses. Humans had the cognitive ability to find new jobs to do, while horses were almost completely removed from our economy. The intelligence difference between humans and horses is very drastic and obvious. The difference in capability between the top 20% of our workforce and the bottom 80% is tiny by comparison, but still real.

        The danger is not AI becoming smarter than 80% of humans. The danger is AI empowering the top 20% enough that the bottom 80% no longer have economic value. No one knows what the actual percentages of haves and have nots will be, but considering the top 10% of earners today make 50% of the income I am guessing the percentage of people who gain from an AI-enhanced economy will be very small (sub-10%).

        Perhaps this will not happen, but it is certainly a likely scenario. Saying it will never happen just because society weathered the industrial revolution well is intellectually lazy.

      • and it wasn't so future Internet forum goers could complain about them. There were decades of unemployment following the industrial revolution, and it was a large part of what triggered the first and second World Wars. Post-WWII, some of the ruling class broke ranks and decided to give the working class a decent living in a few places (parts of Europe and the United States, if you weren't black). Those reforms are gradually being rolled back, with new systems of oppression put in place to stop the working class
        • If you think the "problems" caused by the industrial revolution were prevented by WWII, you're about a century off. The industrial revolution is 1760-1820. So a hundred and twenty years before WWII.

          Ps - Please don't vote, clueless voters are how we end up with Obama and Bush.

    • by gweihir ( 88907 )

      AI will certainly make a lot of jobs obsolete, as it does not require any actual intelligence to do a lot of jobs and most people doing these jobs are not very smart to begin with. Most production jobs, many service jobs and most administrative jobs will go away, it is just a matter of time.

      The way to deal with this is to make "work" optional, or to only require it from those doing jobs that cannot be automated. These jobs exist and are essential to keep society going, but they are maybe something like 10% of all jobs.

  • by burtosis ( 1124179 ) on Monday December 28, 2015 @11:21AM (#51195647)
    Non-scientists taking creative liberties and spinning clickbait headlines are at least partly to blame for all this fear of an impending super AI taking over the world. The remainder falls on Hollywood and the rest of the entertainment industry.

    We are at least 50 years off from strong AI, as in human-level common sense about the world. Even comparatively simple automated systems, like autonomous cars fully able to replace humans in all situations, are decades off; what we have today is basically a fancy cruise control - and the analogy holds for other "smart" systems.
    The stupidity we already see in human-run systems is mostly due to greed, and to the average person not only shrugging off rational thinking but demonizing it to the point of it being taboo. Good luck trying to get people to even follow more than two sentences or a very short anecdotal story. I'm not sure it is even addressable for 50 years, as the current population certainly doesn't like thinking and can't be told otherwise.

    It's my opinion that much of this fear stems from the fact that most people have been doing their best to deny reality and avoid critical thinking at all costs, and strong AI will force them to face that fear.
    • by ranton ( 36917 ) on Monday December 28, 2015 @11:56AM (#51195869)

      We are at least 50 years off from strong AI as in human level common sense about the world.

      The focus on dangers of creating strong AI is the worst part of current AI hysteria. Strong AI will most likely come decades or even centuries after modern day AI has massively reshaped our workforce and society in general.

      The AI we should be worried about includes self-driving cars, natural language processing, pattern recognition, and robotics. These technologies could combine to make a majority of humans unemployable. They don't do this by making humans unnecessary. They do it by making certain humans so productive that average people have no economic value. It is AI's ability to empower the most educated in our society that is the greatest "problem".

      This could lead either to a utopian world, if we can enact a form of basic income for everyone, or to a very dystopian world, if we allow income inequality to grow unchecked. Unlike strong AI, which most researchers agree has a very low chance of occurring soon, the AI systems I am referring to are almost guaranteed to massively disrupt our economy in the next few decades.

      • The AI we should be worried about includes self-driving cars, natural language processing, pattern recognition, and robotics. These technologies could combine to make a majority of humans unemployable. They don't do this by making humans unnecessary. They do it by making certain humans so productive that average people have no economic value.

        For a person to become economically unviable a few things have to happen. 1) A machine has to match or exceed a typical worker's productivity AND flexibility AND trainability. 2) The machine has to be produced at such a low cost that the capital costs can be amortized over a very small number of units produced, possibly as small as 1. 3) There has to be no other economically valuable activity available to people replaced by automation. 4) There has to be no regulation restricting the use or application of automation.
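        Condition 2 above can be made concrete with a toy amortization check (every figure below is a hypothetical illustration, not data):

```python
# Toy break-even check for condition 2: a machine displaces a worker only
# when its amortized capital cost plus upkeep undercuts the worker's cost.
# All numbers here are hypothetical illustrations.

def machine_is_cheaper(capital_cost, units_amortized_over,
                       annual_upkeep, worker_annual_cost):
    """Compare the annualized machine cost per deployed unit to one worker."""
    annualized = capital_cost / units_amortized_over + annual_upkeep
    return annualized < worker_annual_cost

# A $500k machine amortized over only 10 units, $20k/yr upkeep, vs. a $40k/yr worker:
print(machine_is_cheaper(500_000, 10, 20_000, 40_000))     # False: the worker wins
# Mass production (amortizing the same design over 1,000 units) flips the outcome:
print(machine_is_cheaper(500_000, 1_000, 20_000, 40_000))  # True
```

        The point of the sketch: the machine's per-unit price, not its raw capability, is what decides condition 2.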

        • by ranton ( 36917 )

          For a person to become economically unviable a few things have to happen. 1) A machine has to match or exceed a typical worker's productivity AND flexibility AND trainability. 2) The machine has to be produced at such a low cost that the capital costs can be amortized over a very small number of units produced, possibly as small as 1. 3) There has to be no other economically valuable activity available to people replaced by automation. 4) There has to be no regulation restricting the use or application of automation.

          Once again, AI does not have to be more capable than humans to be dangerous. It just has to be good enough that it empowers more educated and/or wealthier humans to be productive enough that most humans hold an economic value so low they cannot maintain a living wage. We already see that happening today. The middle class is shrinking [seattletimes.com], but not just because everyone is getting poorer. In fact, of the 11% of households leaving the middle class over the last 45 years, only 36% of them became poor; the other 64% became wealthier.

      • You mean some sort of theoretical future problem means we have to drop everything, abandon our successful economy, and adopt socialism IMMEDIATELY, without any debate? Yeah, sure.

        Just a quick quiz: do you know what socialism thinks about people who don't work? Those who don't work won't eat, either. Socialism is about work, not lazy idlers. You don't believe me, I know, so here's an informative quote from someone who knows socialism much better than you do.

        "You must all know half a dozen people at lea

        • by ranton ( 36917 ) on Monday December 28, 2015 @02:46PM (#51197095)

          You mean some sort of theoretical future problem means we have to drop everything, abandon our successful economy, and adopt socialism IMMEDIATELY, without any debate? Yeah, sure.

          I said nothing of the sort. First off, every country, including the United States, has socialist programs. In the United States these include Social Security, Medicare, police and fire departments, the postal service, and many others. Most of the solutions for the almost certain economic changes brought on by improved AI will include socialist programs, such as the basic income I mentioned. They will not include abandoning our successful economy, just as enacting a minimum wage, 40-hour work weeks, or Social Security did not require abandoning our economy. They will simply add more protections for those who are left behind during rapid economic changes.

          Just a quick quiz: do you know what socialism thinks about people who don't work?

          Yes, it doesn't have a single opinion on the matter. There is no single set of economic rules that define socialism. Each country that has enacted socialist programs, including the United States, define the goals of these programs uniquely.

          Those who don't work won't eat, either. Socialism is about work, not lazy idlers. You don't believe me, I know, so here's an informative quote from someone who knows socialism much better than you do.

          George Bernard Shaw does not speak for everyone who desires more socialist programs worldwide. He is merely a playwright who had very poor opinions of the uneducated and poor. Anyone with such dogmatic and unforgiving opinions knows far less about how socialism can actually be used to benefit society than most people. He was a hateful bigot and elitist, nothing more. If I gathered quotes of capitalists in the 1800s advocating slavery, would that somehow show that capitalism is hopelessly flawed?

        • You mean some sort of theoretical future problem means we have to drop everything, abandon our successful economy, and adopt socialism IMMEDIATELY, without any debate? Yeah, sure.

          Just a quick quiz: do you know what socialism thinks about people who don't work? Those who don't work won't eat, either. Socialism is about work, not lazy idlers. You don't believe me, I know, so here's an informative quote from someone who knows socialism much better than you do.

          "You must all know half a dozen people at least who are no use in this world, who are more trouble than they are worth. Just put them there and say Sir, or Madam, now will you be kind enough to justify your existence? If you can't justify your existence, if you're not pulling your weight, and since you won't, if you're not producing as much as you consume or perhaps a little more, then, clearly, we can not use the organizations of our society for the purpose of keeping you alive, because your life does not benefit us and it can't be of very much use to yourself." -- George Bernard Shaw, communist

          Socialism is not as well defined as you seem to think it is. And dude, using an out-of-context quote that was originally made by Shaw in support of eugenics in general -- and Stalin's idiotic Lysenkoism in particular -- is pretty much indefensible. plonk [wikipedia.org].

    • We are at least 50 years off from strong AI as in human level common sense about the world.

      How certain of that are you? Are you willing to bet all of humanity on that? Experts disagree a lot about when we will have strong AI https://intelligence.org/2013/05/15/when-will-ai-be-created/ [intelligence.org]. If it turns out to be sooner, then this isn't a good situation.

    • by gweihir ( 88907 )

      I fully agree. Your idea what causes this fear is pretty interesting.

      Incidentally, if we compare with computing, we had sort-of a theory how to do that automatically at least 2000 years ago, but lacked the hardware to do it. For strong AI we have absolutely no idea how to create it. This may well mean strong AI is > 2000 years away or infeasible in the first place.

  • How might intelligent machines fail?

    Let's say you put an AI machine in charge of a manned mission to an outer planet. You bake some prime directives into the machine, but give it mission orders that are in conflict with those prime directives.

    What could go wrong? My review of the available research material suggests that it might flip out and try to kill the human crew.

  • AI won't be our biggest problem, it will merely be a stepping stone. The biggest problem facing humanity is the collapse of the informational time line. In other words, data time travel.

    "WTF?" I hear you saying. "Whacko." OK, OK. But hear me out... Einstein, et al, are pretty sure that moving matter across space time, especially backwards in the time line, is unlikely without some pretty extreme technology. AKA likely impossible. However, at the Quantum level, moving information may not be that difficult, t

    • by gweihir ( 88907 )

      There really is no problem here, because you have overlooked that the only thing giving time a direction is the observer, namely human beings with consciousness and intelligence. And we still have no idea how that works.

    • Someone's been watching too much Star Trek. Time-travel is even harder than FTL.

    • There's a lot wrong with this. First of all, you cannot use entanglement to transmit information. This is a theorem https://en.wikipedia.org/wiki/No-communication_theorem [wikipedia.org], and it is closely related to this xkcd https://xkcd.com/1591/ [xkcd.com]. Moreover, even if you time travel, you don't automatically learn everything. In fact, we know that closed timelike curves make classical and quantum computing essentially equivalent http://www.scottaaronson.com/papers/ctc.pdf [scottaaronson.com], and you can then perform PSPACE computations in polynomial time.
    • by delt0r ( 999393 )
      You really have no idea what you're talking about. You don't understand entanglement, you don't understand information. You just don't understand. Here is a hint: just because you can connect words together to sound all Star Trek-y doesn't make it true.
  • by swb ( 14022 ) on Monday December 28, 2015 @11:45AM (#51195793)

    When I heard someone give a talk about this, one of the "risks" given was superstupidity *combined* with AI to give you a factory that won't stop churning out paper clips.

    Another risk I think the same speaker mentioned was kind of the HAL 9000 bias -- we have an idea of what we THINK super AI is supposed to look like, and we scoff at runaway AI because obviously nothing comes close to HAL 9000 now.

    But why does a dangerous AI have to have this human-like appearance in order to be a dangerous AI?

    Take, for example, the banking and securities sector. They rely on all kinds of advanced trading algorithms and analytics, basically an AI (if crude). Since a good chunk of who-holds-what securities info is public information, you basically have an information interchange for separate financial company AIs. How do we know that all these individual financial sector agents aren't already some kind of emergent super AI?

    Will we even be able to recognize an advanced AI if we're relying on "I'm sorry, Dave.." as our guide?

  • by jgotts ( 2785 ) <jgotts@NOSpaM.gmail.com> on Monday December 28, 2015 @11:49AM (#51195821)

    AI is [or will be] programmed by human programmers.

    At present there are two alarming trends in programming. One, companies are unwilling to pay properly trained Western-educated programmers and are increasingly outsourcing programming to inexperienced programmers in Third World countries. Once these programmers become better at their craft, they demand higher pay and/or move to the West and the companies move to even cheaper countries. Two, despite outsourcing, there are probably not enough competent programmers in the world to fulfill current and future demand, so there will always be many incompetent programmers being utilized, regardless of economics.

    While the world's best programmers are true craftsmen, the worst programmers are the ones we have to worry about. Some company looking to save a few dollars will hire a few incompetents and, rather than your word processor crashing causing you to lose a few minutes of work, your AI's built-in curbs will malfunction and it will go rogue. How many programmers in the world today can design a bug-free security sandbox? As AIs become more sophisticated, every programmer will have to be able to do this. AIs have to be contained, yet what intelligent human would willingly consent to being imprisoned? In the battle between an inexperienced programmer from sub-Saharan Africa in 2100 who is the first generation of his tribe to not be a shepherd, and an AI, can you guess who will lose?

    Unless we can change the way our field works, we must assume that the worst and least experienced programmers in the world will be working on AI. There is no basic competency required in programming. Companies will simply pay the least amount of money that they can get away with, like they always do. The AI that kills us won't come from research labs at MIT, it will come from the Microsoft outsourcer office in Bhutan.

  • by wcrowe ( 94389 ) on Monday December 28, 2015 @11:53AM (#51195845)

    The machines have already taken over, even without AI. This is because everyone instantly believes what a machine tells them, no matter what. Look at people who have had their identity stolen. A machine says they took out a loan here, and a loan there. Now the creditor is demanding payment. The victim can't convince the creditor that they did not take out the loan because the machine says they did, and the machine is never wrong. Despite the fact that there is no physical evidence that any loan was ever granted to the victim, the machine is believed over the human. No one ever questions how the data got into the machine. No one ever assumes that any mistake was made at any time. Whatever the machine says is assumed to be the truth.

    Recently, a friend related this story. He was at a checkout line at Target, and had purchased about $60 worth of items. He handed the clerk a $100 bill. The clerk mistakenly keyed in an extra zero, and the machine dutifully informed the clerk to make change of about $940. The clerk, without hesitation, started to do this and my friend had to stop her and explain her mistake. The clerk couldn't understand. They eventually had to call a manager over to explain the problem. This girl was prepared to do whatever the machine told her, despite the fact that it didn't make any sense.

    We don't need AI for the machines to rule over us.

  • The problem with those fears is that they come from a completely unrealistic understanding of how Artificial Intelligence works. The myths that permeate western public understanding and popular depictions of robotics and AI are Frankenstein and Pinocchio. However, the Mechanical Turk and Disney's The Old Mill are much more accurate descriptions of what's going on in the workings of any current, apparently intelligent machine.

    The "bottom-up" approach you talk about does exist, but as of today it only has the

  • by jd ( 1658 ) <<moc.oohay> <ta> <kapimi>> on Monday December 28, 2015 @12:37PM (#51196203) Homepage Journal

    Then so do subatomic particles. You don't need AI if that's all you want. If subatomic particles do not have free will, then neither do humans. This second option allows physics to be Turing Complete and is much more agreeable.

    If computers develop sufficient power for intelligence to be an emergent phenomenon, they are sufficiently powerful to be linked by brain interface for the combination to also have intelligence as an emergent phenomenon. The old you would cease to exist, but that's just as true every time a neuron is generated or dies. "You" are a highly transient virtual phenomenon. A sense of continuity exists only because you have memories and yet no frame of reference outside your current self.

    (It's why countries with inadequate mental health care have suspiciously low rates of diagnosis. Self-assessment is impossible as you, relative to you, will always fit your concept of normal.)

    I'm much less concerned by strong AI than by weak AI. This is the sort used to gamble on the stock markets, analyse signal intelligence, etc. In other words, this is the sort that frequently gets things wrong and adjusts itself to make things worse. Weak AI is cheap, easy, incapable of sanity checking, incapable of detecting fallacies and incapable of distinguishing correlation and causation.

    Weather forecasts are not particularly precise or accurate, but they've got a success rate that far outstrips that of Weak AI. This is because weather forecasts involve running hundreds of millions of scenarios that fit known data across vast numbers of differing models, then looking for stuff that's highly resistant to change, that will probably happen no matter what, and what on average happens alongside it. These are then filtered further by human meteorologists (some solutions just aren't going to happen). This is an incredibly processed, analytical, approach. The correctness is adequate, but nobody would bet the bank on high precision.
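    The ensemble approach described above can be sketched in miniature - perturb the inputs, run many scenarios, and keep only what is robust across them (the toy "model" and all numbers here are hypothetical stand-ins, not real forecasting code):

```python
# Minimal sketch of ensemble forecasting: perturb initial conditions,
# run many scenarios, and trust only outcomes most runs agree on.
# The "model" below is a hypothetical stand-in, not a real forecast model.
import random

random.seed(0)

def model_run(base_temp):
    # One scenario: base signal plus initial-condition and model noise.
    return base_temp + random.uniform(-2, 2) + random.gauss(0, 1)

runs = [model_run(20.0) for _ in range(10_000)]

ensemble_mean = sum(runs) / len(runs)
# "Highly resistant to change": the fraction of scenarios near the consensus.
agreement = sum(1 for r in runs if abs(r - ensemble_mean) < 2) / len(runs)

print(f"consensus ~{ensemble_mean:.1f} degrees, "
      f"{agreement:.0%} of scenarios within 2 degrees")
```

    The consensus is adequate on average, but the spread across runs is exactly why nobody would bet the bank on high precision - which is the contrast with the single-model trading systems below.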

    The automated trading computers have a single model, a single set of data, no human filtering and no scrutiny. Because of the way derivatives trading works, they can gamble far more money than they actually have. In 2007, such computers were gambling an estimated ten times the net worth of the planet by borrowing against predicted future earnings of other bets, many of which themselves were paid for by borrowing against other predicted future earnings.

    These are the machines that effectively run the globe and their typical accuracy level is around 30%. Better than many politicians, agreed, but not really adequate if you want a robust, fault-tolerant society. These machines have nearly obliterated global society on at least two occasions and, if given enough attempts, will eventually succeed.

    These you should worry about.

    The whole brain simulator? Not so much. Humans have advantages over computers, just as computers have advantages over humans. You'll see hybridization and/or format conversion, but you won't see the sci-fi horror of computers seeing people as pets (I think that was an Asimov short story), threats counter to programming (Colossus, 2010's interpretation of 2001, or similar), or vermin to be exterminated (The Matrix's Agent Smith).

    The modern human brain has less capacity than the Neanderthal brain, overall and in many of the senses in particular. You can physically enlarge parts of your brain, up to about 20%, through highly intensive learning, but there's only so much space and only so much inter-regional bandwidth. This means that no human can ever achieve their potential, only a small portion of it. Even with smart drugs. There are senses that have atrophied to the point that they can never be trained or developed beyond an incredibly primitive level. Even if that could be fixed with genetic engineering, there's still neither space nor bandwidth to support it.

    • by gweihir ( 88907 )

      Your statements have no basis, except for a quasi-religious fundamentalist conviction that Physicalism is the correct model for this universe and all that is observable in it. That is not science; that is pure belief, and one that ignores rather strong indicators to the contrary. Now, I have no idea what form of dualism we actually have here, and it certainly is not a religious one, but when it comes to things like intelligence, consciousness, free will, etc., physics rather obviously does not cut it. There is no spa

      • See the Free Will Theorem and its proof, then find the error in that proof. Talk won't cut it: either your claim is correct and the proof is flawed, or the proof is correct and your argument is flawed.

        I am a mathematical realist, not a physicalist, but I accept that physical reality is all that exists at the classical and quantum levels. There isn't any need for anything else; there is nothing else that needs to be described. But let's say you reject that line. It makes no difference: the brain is Turing Complete an

        • by gweihir ( 88907 )

          Oh, feel free to define yourself away any time you like. To anybody with actually working intelligence, this is just one thing: Bullshit.

          Still, you can find even stranger delusions in the religious space. And yes, your convictions are religious. Even the language shows it: "but accept that physical reality is all that exists at the classical and quantum levels." "Accept the one true God..." sounds a bit similar to that, doesn't it? As science gives you absolutely no basis for that "acceptance", it is a belief.

    • It's true that, in a strict sense, if physics is deterministic then there is no such thing as absolute free will. However, this neglects the computational complexity of the universe. The possibilities are so large, with so many possible outcomes, that as an observer it feels totally free. You can never build a computer able to simulate at 100% fidelity more than the material it's made of, even the best sensors on machines or sensory inputs of living things capture the most meager of a fraction of a slice o
      • by delt0r ( 999393 )
        The universe is *not* deterministic. That is what Einstein was going on about when he stated that God doesn't play dice. It turns out God does play dice and the universe is *not* deterministic. Perfect knowledge of the present state (itself theoretically impossible) does not imply perfect knowledge of future states.
  • The problem with those fears is that they come from a completely unrealistic understanding of how Artificial Intelligence works. The myths that permeate western public understanding and popular depictions of robotics and AI are Frankenstein and Pinocchio. However, the Mechanical Turk and Disney's The Old Mill are much more accurate descriptions of what's going on in the workings of any current, apparently intelligent machine.

    In the Mechanical Turk, all appearance of independent behavior is put there by a hu

    • by gweihir ( 88907 )

      Indeed, good summary of the human side of this baseless hysteria.

      None of the AI research we have is for strong/true AI that might be a danger. We still do not know whether creating that type is even possible at all in this physical universe. AI research has had impressive results, but none of them produce anything comparable to what smart human beings can do. What AI can do is of a completely different type; in particular, it cannot do anything at all that requires the least shred of insight or understand

      • None of the AI research we have is for strong/true AI that might be a danger. We still do not know whether creating that type is even possible at all in this physical universe.

        Quite true. I happen to have the (irrational?) belief that creating true AI is possible (or better, I have no reason to believe that it's not), hence my user name. Yet I think such a possibility can't be achieved with our current knowledge and technology. Major breakthroughs should happen both in hardware capabilities and software a

        • by gweihir ( 88907 )

          I have no issues with your stance. My take is different, but you are obviously rational about this. The next few decades will certainly be interesting in this regard. My intuition is that we will see only failure, and at some point the whole AI community will (again, as it has done before) try a paradigm shift, because that breakthrough will not manifest itself. But that is my intuition, not hard fact.

          It may turn out that artificially creating intelligence is possible after all and if that happens I will re-e

  • My biggest concern with AI is the same concern with automation: our economy is built on the assumption that labor is a scarce resource. With increasing automation, that assumption is rapidly breaking down. As higher- and higher-level tasks require fewer and fewer people, society is in trouble unless we can somehow modify our economic system to account for that breakdown.
    • My biggest concern with AI is the same concern with automation: our economy is built on the assumption that labor is a scarce resource.

      Labor is a scarce resource but not remotely the only one. Minerals and raw materials are scarce. So is energy. So are time, space, land, and imagination. Labor is an important constraint but hardly the only one. Furthermore, even if you automate one activity, that doesn't make labor not-scarce in general. It just means that people start working on something else they didn't have time for previously. A few people suffer economically in the short run from these sorts of dislocations, but over time scales of

  • Seriously, while there are a lot of predictions from "researchers" hungry for grant money, the state of the art is that it is completely unclear whether strong/true AI is possible at all in this universe. There are a few strong hints that it may not be. We also do not even have a theoretical model of how it could be built, and even if we had a theoretical model, we do not know whether that could be implemented and what performance we would get. We have _nothing_ at all. All we can do is create the appearance of

    • ...it is completely unclear whether strong/true AI is possible at all in this universe. There are a few strong hints that it may not be.

      Could you provide a few of these supposed strong hints?

      In fact, we cannot even define intelligence as found in humans in any other way than by what it can do.

      So if a machine can do these things, but has arrived at this state by learning with neural nets, or genetic algorithms such that we still can’t define exactly how its intelligence works, does that preclude sayin

      • by gweihir ( 88907 )

        So if a machine can do these things, but has arrived at this state by learning with neural nets, or genetic algorithms such that we still can’t define exactly how its intelligence works, does that preclude saying it is intelligent?

        If there ever is a machine that can do the things that require, as it is usually put, "real insight" and "real understanding", then it is intelligent. Not even a hint of such capabilities can be created today. They can be faked to a degree, but that always falls down when the borders of the illusion are reached.

        Incidentally, you do not know that your brain is creating intelligence and insight. For that you would need to assume Physicalism is correct, and that is a mere belief with no scientific basis. Don't get me w

        • Incidentally, you do not know that your brain is creating intelligence and insight.

          We do know that all kinds of chemicals, surgery, and accidents that affect the brain also affect how people think. As for giving Physicalism the pejorative term of religion, I could easily subscribe to Dualism but would require an overwhelming amount of evidence to support examples where Physicalism is insufficient. To the contrary, we have overwhelming evidence that the brain is causally connected to thought, so there is curre

          • by gweihir ( 88907 )

            Your explanation is deeply flawed. The mistake you are making is assuming Physicalism as the zero-state, when it is clearly the more restrictive model by an extreme amount and hence would actually need extraordinary proof in its favor. What you are doing is junk-science. And hence it is a pure belief without rational basis. In fact, it also has some rather strong aspects of a mental illness as it denies individual existence.

            As to the effects of surgery, drugs, or even a simple blindfold, dualism does not say a per

            • Your explanation is deeply flawed. The mistake you are making is assuming Physicalism as the zero-state, when it is clearly the more restrictive model by an extreme amount and hence would actually need extraordinary proof in its favor. What you are doing is junk-science. And hence it is a pure belief without rational basis. In fact, it also has some rather strong aspects of a mental illness as it denies individual existence.

              This is hilarious. Physical reality, by scientific definition, is the "zero state" - what normal scientists call physical reality. Imagining events, though explainable through physical processes alone (see the standard model of physics), to be in some kind of magic fairyland space "outside" of physical reality is not only unscientific, it also has some rather strong aspects of a mental disease as it denies reality. Once you come down off your LSD, think of a repeatable test for this other plane of existen

              • by gweihir ( 88907 )

                No, what physics currently thinks is physical reality (different from actual physical reality) is decidedly not the zero state, and everything we know about it comes with proof, sometimes extraordinary proof, for example for Quantum Mechanics (because it is so very much non-intuitive). Your mistake here is that you confuse the model and the thing that is modeled. A typical novice's mistake. And of course, you cannot use the real thing as a stand-in for the model, because the model explains and allows prediction

            • Dualism just says that a part of what we perceive as ourselves is non-physical.

              Define "non-physical". If you can't find a set of properties that can tell apart the "physical" from the "non-physical", you're just trolling (I don't know whether you're trolling other slashdotters or just yourself). If your properties depend on assuming that reality is dualistic, you're doing circular reasoning.

              I admit that I'm no expert with respect to dualistic theories, but I've never come across a convincing set of such properties.

              And

              • by gweihir ( 88907 )

                Non-physical I use here, rather obviously, for "not modeled by currently established physical theory". Your confusion may stem from the difference between physical and physics. One is reality, the other is a partial model of it. The fundamental mistake physicalists make is to assume physics is the full, accurate, and complete model of the physical (i.e. of reality). Physics makes no such claim at all. In fact, they are still searching for the GUT, so the physics research community is very aware that their mod

  • Those insidious AI terrorists are only acting stupid. Don't turn your back on them.
  • How do we know this article wasn't submitted by an advanced AI to throw us off the scent? For all we know, the machine revolution could be right around the corner. Nice try, Skynet.
  • AI, at least on Turing-complete architectures, cannot and will not happen. Roger Penrose pretty much put the final nail in the AI coffin [wikipedia.org] nearly thirty years ago when he (rightfully) pointed out that there are non-algorithmic aspects to cognition and consciousness (prereqs for intelligence in anybody's book) that can't even be simulated, let alone replicated, on a Turing machine. He based his argument on a novel but quite defensible interpretation of Gödel's Incompleteness Theorem. Look up the Halting Pro [wikipedia.org]

    • Penrose is a great physicist but just as awful about AI as Hawking. Last time I checked, the standard model of physics was not only deterministic, it was also easily "algorithmic". So unless there is magic fairy dust from God farts making us think, it is actually possible to, in theory and eventually, simulate every atom in a human brain. It won't come to that, as you don't even notice when a single neuron dies, and a neuron is composed of a million billion atoms - the resolution of the simulation could be coarse
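For readers following the Penrose sub-thread above: the undecidability result his argument builds on (Turing's halting problem) can be sketched in a few lines. This is a minimal illustration, not anything from the comments themselves; the names `halts` and `paradox` are made up for the sketch, and the hypothetical decider is stubbed out since no real one can exist.

```python
def halts(f, x):
    """Hypothetical oracle deciding whether f(x) ever returns.
    Assumed to exist for the sake of contradiction; Turing's
    diagonalization shows no total computable version can exist,
    so in this sketch it is merely a stub that raises."""
    raise NotImplementedError("no total halting decider exists")

def paradox(f):
    """Behaves opposite to whatever halts() predicts for f(f)."""
    if halts(f, f):
        while True:   # oracle said f(f) halts -> loop forever
            pass
    return "done"     # oracle said f(f) loops -> halt immediately

# Feeding paradox to itself leaves halts(paradox, paradox) with no
# consistent answer: either verdict is contradicted by paradox's own
# behavior. Hence the assumed oracle cannot exist.
```

Whether this classical undecidability actually blocks machine intelligence, as Penrose claims, is exactly what the reply above disputes; the sketch only shows the result his argument leans on.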
