The AI Anxiety (washingtonpost.com) 207
An anonymous reader writes: The Washington Post has an article about current and near-future AI research while managing to keep a level head about it: "The machines are not on the verge of taking over. This is a topic rife with speculation and perhaps a whiff of hysteria." Every so often, we hear seemingly dire warnings from people like Stephen Hawking and Elon Musk about the dangers of unchecked AI research. But actual experts continue to dismiss such worries as premature — and not just slightly premature. The article suggests our concerns might be better focused in a different direction: "Anyone looking for something to worry about in the near future might want to consider the opposite of superintelligence: superstupidity. In our increasingly technological society, we rely on complex systems that are vulnerable to failure in complex and unpredictable ways. Deepwater oil wells can blow out and take months to be resealed. Nuclear power reactors can melt down. Rockets can explode. How might intelligent machines fail — and how catastrophic might those failures be?"
documentary about superstupidity (Score:2, Informative)
opposite of superintelligence: superstupidity
There's been a great documentary made about superstupidity. [imdb.com].
Re: (Score:3, Informative)
There's an "automation-related" saying that dates back to at least the '70s:
"To err is human; to really foul things up you need a computer."
Re: (Score:2)
I agree, but then we have a lot of trolls too.
Today a computer has a hard time understanding irony. And even if you have an AI, add an internet troll and Cleverbot (OK, it's a kind of trolling AI) to the equation and that AI wouldn't be able to distinguish Friday from Monday.
What about human-intelligence anxiety (Score:5, Insightful)
It's politically incorrect to say this, but most humans are not intelligent. Their behavior follows predictable patterns. Their intellectual life is next to naught.
If you are not intelligent, you are expendable and fungible like any other industry-raised sheep. Born out of industrial breeding, fed with a formula, and led straight to the slaughterhouse. It doesn't take an evil super AI to beat you. You are already beaten by the mechanism that is called the System.
Be intelligent.
Re:What about human-intelligence anxiety (Score:5, Funny)
It's politically incorrect to say this, but most humans are not intelligent.
But you are one of the rare smart ones, right?
https://xkcd.com/610/ [xkcd.com]
Re: (Score:2)
Be intelligent.
Yes. Learn how to log in before posting!
Re:What about human-intelligence anxiety (Score:4, Insightful)
90% of what we do is sub-conscious and our prefrontal cortexes will make up a rational story for what we do.
That only applies when you blindly do what you feel like. When I reflect on what I have done, if I did something I didn't like, I analyze why I did it. Once I've locked down on the rationale, I can change it, and I won't do it again next time. I used to be easily agitated, but the only thing more annoying than someone bothering me is letting myself be bothered by someone. Once I figure out why something or someone bothers me, I can make the issue not bother me.
The first time I observe a given annoyance, I seemingly have little control over it; it's only once I've reflected on it that I can control myself. An example: when I was younger, the sound of a crying baby drove me up the wall. After getting flustered many times, I thought about why I felt that way. I eventually realized it was because I had no control over their crying; I wanted to help make them stop, but trying to help was often futile. Once I realized that it was my failing attempts that bothered me, and not so much the crying, the next day I was suddenly unbothered by children crying. Assuming it's not a hunger or pain cry.
90% of what I do may be my subconscious, but I can control my subconscious, just not in real-time. I need time to reflect and a sleep-cycle.
Re: (Score:2)
We know about superstupidity from seeing Washington (Score:3)
> "Anyone looking for something to worry about in the near future might want to consider the opposite of superintelligence: superstupidity"
At least we know what superstupidity looks like and what it does, based on observing Washington, DC. One lesson is to minimize the control and possible impact of failures in these complex but inherently stupid systems. See, for example, the office of the vice president: a VP might say some stupid things, but the office can't really do too much damage.
The new crop of candidates provides another lesson in superstupidity. They aren't exactly a bunch of brain surgeons either.
Comment removed (Score:5, Informative)
This will all end ... (Score:2)
Windows and Siri (Score:5, Funny)
Anyone worrying about computers outsmarting us and taking over in the near future has insufficient experience with both Windows and Siri.
Re: (Score:2)
Seems to me that people dumb enough to be worried about computers outsmarting them are already too late.
I'm worried about AI (Score:4, Insightful)
Re: (Score:3)
The notion that machines and automation will destroy jobs has been around since the first looms were constructed in the north of England in the 19th century. What I think will happen is that the market will adapt. With more labor available, salaries will decrease and open up opportunities for people to work in service areas previously not thought of, especially the leisure market. We might see a new wave of domestic staff appear, and I personally would like my own butler.
without basic health care in the usa and more 1099 (Score:2)
Without basic health care in the USA, and with more 1099 arrangements, employers can push the costs of the job onto the worker, driving them under minimum wage or getting off-the-clock work out of them.
Re: (Score:2)
But would you like to be a butler? And what would it pay?
Re: (Score:2)
But would you like to be a butler? And what would it pay?
Just like today, the lowest end service jobs will probably not be very desirable. But they will put food on the table for those who would otherwise not have economic value to society.
Re: (Score:2)
OK, let's get this straight: a butler is not a "low-end service job". A butler is equivalent to the CEO of a small company, and should be paid commensurately. If you think it's all about answering the door and serving the dinner, you're thinking of a housekeeper, which is something else entirely.
If you demand to use old British definitions of terms then a housekeeper is simply the female version of a butler. Perhaps you meant a maid or hall boy.
Because households large enough to have a large retinue of domestic workers are rare in modern times, "butler" rarely describes the type of position you are referring to, at least in the US. Since the OP said he would like his own butler, it is clear he meant a butler who would work for an upper-middle-class professional family. In this case the butler is
Already happened a few times. Famine to Netflix (Score:5, Interesting)
> making most jobs obsolete.
That's already happened a few times. At one time, most people worked in agriculture; mankind spent most of its time feeding itself. An "economic downturn," therefore, was when a lot of people starved to death.
After machines such as plows, and later GPS-guided combines and largely automated food-processing plants, did most of the work of producing food, inexpensive food became readily available and humans had time for things not strictly necessary for survival: education, producing consumer goods, writing and printing books, making comfortable clothes. With the efficiency of machines, consumer goods such as books, and conveniences like washing machines, dishwashers and refrigerators, became available to the masses.
No longer scrubbing our clothes on a washboard, we then had time to make and play video games.
The replacement of human labor with machines has been a continuous process for over a thousand years, peaking about 200 years ago. In the process, our standard of living has gone from digging for anything we could eat in the winter to stopping by Walmart to select which of the 160 different fruit and vegetable varieties we want to munch on while we enjoy our Netflix movie.
Re:Already happened a few times. Famine to Netflix (Score:5, Interesting)
That's already happened a few times [...] The replacement of human labor with machines has been a continuous process for over a thousand years, peaking about 200 years ago.
There are at least two things that are likely to make this time different (obviously no one knows for sure):
This change in the workforce will be much faster than previous ones. We are already starting to see it happen now, and in my opinion it is almost the sole reason the middle class is shrinking in the US (where it is not offset by market-agnostic income redistribution, that is). The loss of agricultural jobs happened gradually over more than 50 years. Other labor shifts in the industrial revolution happened even more slowly than that. Advances in AI (even ignoring Strong AI) have the opportunity to disrupt industries in under 10 years. The two situations are hardly comparable.
Previous changes in the workforce involved removing manual labor jobs. This disrupted the economy for two species that relied on these jobs: humans and horses. Humans had the cognitive ability to find new jobs to do, while horses were almost completely removed from our economy. The intelligence difference between humans and horses is very drastic and obvious. The difference in capability between the top 20% of our workforce and the bottom 80% is tiny by comparison, but still real.
The danger is not AI becoming smarter than 80% of humans. The danger is AI empowering the top 20% enough that the bottom 80% no longer have economic value. No one knows what the actual percentages of haves and have nots will be, but considering the top 10% of earners today make 50% of the income I am guessing the percentage of people who gain from an AI-enhanced economy will be very small (sub-10%).
Perhaps this will not happen, but it is certainly a likely scenario. Saying it will never happen just because society weathered the industrial revolution well is intellectually lazy.
Re: (Score:2)
The USA's shrinking middle class is definitely not due to automation; otherwise it would be far worse in Australia, where we enforce a high minimum wage.
First off, the majority of the shrinking middle class is in the direction of households rising above the middle class. So it's more likely that countries with a constant middle class are ones which create barriers preventing households from becoming more prosperous. Combine expensive social programs paid for with taxes on the upper class with social programs that prevent people from being poor, and you have a large middle class even despite market forces that would normally have shrunk it. I'm no
Re: (Score:2)
The problem is when the bottom 80% get pissed off enough to start a revolution. The top 20% had better hope they have enough killbots to subdue it.
The US already has the highest incarceration rate of any country in the world. With 4.4% of the world's population, we house about 22% of the world's prisoners. This is arguably only necessary because of the high levels of social inequality in the United States.
You don't need killbots for the entire 80% of the population. You just need a way for 60% of the population to think 20% of the population is a danger to them. The 60% will then gladly give up some freedoms to keep themselves safe from those dangerou
Re: (Score:2)
I've been waiting for AI to start disrupting jobs for a decade now. As far as I know, Amazon's Mechanical Turk is still profitable.
If you haven't seen AI disrupting jobs, you haven't been looking very hard. The average U.S. factory worker is responsible today for more than $180,000 of annual output, triple the $60,000 in 1972 [prospect.org]. "We're able to produce twice as much manufacturing output today as in the 1970's, with about seven million fewer workers. In many industries, the increase in productivity has exceeded Perry’s estimates. 'Thirty years ago, it took ten hours per worker to produce one ton of steel,' said U.S. Steel CEO John Su
There's a reason Luddites existed (Score:2)
off by only one century (Score:2)
If you think the "problems" caused by the industrial revolution were prevented by WWII, you're about a century off. The industrial revolution is 1760-1820. So a hundred and twenty years before WWII.
Ps - Please don't vote, clueless voters are how we end up with Obama and Bush.
Re: (Score:2)
AI will certainly make a lot of jobs obsolete, as it does not require any actual intelligence to do a lot of jobs and most people doing these jobs are not very smart to begin with. Most production jobs, many service jobs and most administrative jobs will go away, it is just a matter of time.
The way to deal with this is to make "work" optional, or only require it of those doing jobs that cannot be automated. These jobs exist and are essential to keep society going, but it is maybe something like 10% of al
The usual media spin (Score:3)
We are at least 50 years off from strong AI, in the sense of human-level common sense about the world. Even comparatively simple automated systems, like autonomous cars fully able to replace humans in all situations, are decades off. What we have today is basically a fancy cruise control, and the analogy holds for other "smart" systems.
The stupidity we already see in human-run systems is mostly due to greed, and to the average person not only shrugging off rational thinking but demonizing it to the point of it being taboo. Good luck trying to get people to follow even more than two sentences or a very short anecdotal story. I'm not sure it is even addressable within 50 years, as the current population certainly doesn't like thinking and can't be told otherwise.
It's my opinion that much of this fear stems from the fact that most people have been doing their best to deny reality and avoid critical thinking at all costs, and strong AI will force them to face that fear.
Re:The usual media spin (Score:5, Insightful)
We are at least 50 years off from strong AI as in human level common sense about the world.
The focus on dangers of creating strong AI is the worst part of current AI hysteria. Strong AI will most likely come decades or even centuries after modern day AI has massively reshaped our workforce and society in general.
The AI we should be worried about includes self-driving cars, natural language processing, pattern recognition, and robotics. These technologies could combine to make a majority of humans unemployable. They don't do this by making humans unnecessary; they do it by making certain humans so productive that average people have no economic value. It is AI's ability to empower the most educated in our society that is the greatest "problem".
This could either lead to a utopian world if we can enact a form of basic income for everyone, or a very dystopian world if we allow income inequality to grow unchecked. Unlike strong AI, which most researchers agree has a very low chance of occurring soon, the AI systems I am referring to are almost guaranteed to massively disrupt our economy in the next few decades.
The robots will NOT take all our jobs (Score:2)
The AI we should be worried about includes self-driving cars, natural language processing, pattern recognition, and robotics. These technologies could combine to make a majority of humans unemployable. They don't do this by making humans unnecessary; they do it by making certain humans so productive that average people have no economic value.
For a person to become economically unviable a few things have to happen. 1) A machine has to match or exceed a typical worker's productivity AND flexibility AND trainability. 2) The machine has to be produced at such a low cost that the capital costs can be amortized over a very small number of units produced, possibly as small as 1. 3) There has to be no other economically valuable activity available to people replaced by automation. 4) There has to be no regulation restricting the use or application
Re: (Score:2)
For a person to become economically unviable a few things have to happen. 1) A machine has to match or exceed a typical worker's productivity AND flexibility AND trainability. 2) The machine has to be produced at such a low cost that the capital costs can be amortized over a very small number of units produced, possibly as small as 1. 3) There has to be no other economically valuable activity available to people replaced by automation. 4) There has to be no regulation restricting the use or application of automation.
Once again, AI does not have to be more capable than humans to be dangerous. It just has to be good enough that it empowers more educated and/or wealthier humans to be productive enough that most humans hold an economic value so low they cannot maintain a living wage. We already see that happening today. The middle class is shrinking [seattletimes.com], but not just because everyone is getting poorer. In fact, of the 11% of households leaving the middle class over the last 45 years, only 36% became poor. The other 64% b
Re: (Score:2)
This has happened before when simple humans were faced by humans augmented by automation
Other than slavery and sweat shops, I'm not sure what you are referring to here.
Re: (Score:2)
You mean some sort of theoretical future problem means we have to drop everything, abandon our successful economy, and adopt socialism IMMEDIATELY, without any debate? Yeah, sure.
Just a quick quiz: do you know what socialism thinks about people who don't work? Those who don't work won't eat, either. Socialism is about work, not lazy idlers. You don't believe me, I know, so here's an informative quote from someone who knows socialism much better than you do.
Re:The usual media spin (Score:4, Insightful)
You mean some sort of theoretical future problem means we have to drop everything, abandon our successful economy, and adopt socialism IMMEDIATELY, without any debate? Yeah, sure.
I said nothing of the sort. First off, every country, including the United States, has socialist programs. In the United States these include Social Security, Medicare, police and fire departments, the postal service, and many others. Most of the solutions to the almost certain economic changes brought on by improved AI will include socialist programs, such as the basic income I mentioned. They will not require abandoning our successful economy, just as enacting a minimum wage, 40-hour work weeks, or Social Security did not require abandoning our economy. They will simply add more protections for those who are left behind during rapid economic changes.
Just a quick quiz: do you know what socialism thinks about people who don't work?
Yes, it doesn't have a single opinion on the matter. There is no single set of economic rules that defines socialism. Each country that has enacted socialist programs, including the United States, defines the goals of those programs uniquely.
Those who don't work won't eat, either. Socialism is about work, not lazy idlers. You don't believe me, I know, so here's an informative quote from someone who knows socialism much better than you do.
George Bernard Shaw does not speak for everyone who desires more socialist programs worldwide. He was merely a playwright who had very poor opinions of the uneducated and poor. Anyone with such dogmatic and unforgiving opinions knows far less about how socialism can actually be used to benefit society than most people. He was a hateful bigot and elitist, nothing more. If I gathered quotes of capitalists in the 1800s advocating slavery, would that somehow show that capitalism is hopelessly flawed?
Re: (Score:2)
You mean some sort of theoretical future problem means we have to drop everything, abandon our successful economy, and adopt socialism IMMEDIATELY, without any debate? Yeah, sure.
Just a quick quiz: do you know what socialism thinks about people who don't work? Those who don't work won't eat, either. Socialism is about work, not lazy idlers. You don't believe me, I know, so here's an informative quote from someone who knows socialism much better than you do.
Socialism is not as well defined as you seem to think it is. And dude, using an out-of-context quote that was originally made by Shaw in support of eugenics in general -- and Stalin's idiotic Lysenkoism in particular -- is pretty much indefensible. plonk [wikipedia.org].
Re: (Score:2)
We are at least 50 years off from strong AI as in human level common sense about the world.
How certain of that are you? Are you willing to bet all of humanity on it? Experts disagree a lot about when we will have strong AI https://intelligence.org/2013/05/15/when-will-ai-be-created/ [intelligence.org]. If it turns out to be sooner, then this isn't a good situation.
Re: (Score:2)
I fully agree. Your idea what causes this fear is pretty interesting.
Incidentally, if we compare with computing: we had a sort-of theory of how to do it automatically at least 2000 years ago, but lacked the hardware. For strong AI we have absolutely no idea how to create it. This may well mean strong AI is more than 2000 years away, or infeasible in the first place.
I've got one (Score:2)
How might intelligent machines fail?
Let's say you put an AI machine in charge of a manned mission to an outer planet. You bake some prime directives into the machine, but give it mission orders that conflict with those prime directives.
What could go wrong? My review of the available research material suggests that it might flip out and try to kill the human crew.
Re: (Score:2)
What if the stalled vehicle was an ambulance?
What if the maintenance worker had three kids and a deceased wife but the driver of the stalled vehicle was single with
Re: (Score:2)
Do those details matter to the AI? Should they?
Do these details matter to a human? I never really got these exercises, because if I were in that situation the crash would be long over by the time I was able to collect and process those details.
In reality, I'd make whatever decision I was able to make in a split second given minimal data and hope it turned out ok. I suspect that that decision would be based on theoretical possibly bad outcomes (e.g. switching to an unexpected track will probably have a lot more unintended and potentially very bad cons
AI is just a stepping stone to the "problem" (Score:2, Insightful)
AI won't be our biggest problem, it will merely be a stepping stone. The biggest problem facing humanity is the collapse of the informational time line. In other words, data time travel.
"WTF?" I hear you saying. "Whacko." OK, OK. But hear me out... Einstein, et al, are pretty sure that moving matter across space time, especially backwards in the time line, is unlikely without some pretty extreme technology. AKA likely impossible. However, at the Quantum level, moving information may not be that difficult, t
Re: (Score:2)
There really is no problem here, because you have overlooked that the only thing giving time a direction is the observer, namely human beings with consciousness and intelligence. And we still have no idea how that works at all.
Re: (Score:2)
Someone's been watching too much Star Trek. Time-travel is even harder than FTL.
Re: (Score:2)
Re: (Score:2)
Super stupidity is one of the risks (Score:3)
When I heard someone give a talk about this, one of the "risks" given was superstupidity *combined* with AI to give you a factory that won't stop churning out paper clips.
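The paperclip factory above is the classic reward-misspecification thought experiment: the danger isn't malice, it's an objective with no stopping condition. A minimal toy sketch (purely hypothetical, not modeled on any real system):

```python
# Toy sketch of a misspecified objective: the "factory" is rewarded
# only for clip count, so it consumes every unit of input it can reach.
def paperclip_factory(raw_material):
    clips = 0
    while raw_material > 0:   # no notion of "enough" appears anywhere
        raw_material -= 1
        clips += 1            # the only thing the objective rewards
    return clips

print(paperclip_factory(1_000_000))  # converts all 1000000 units to clips
```

The point of the toy is that nothing in the loop is "superintelligent"; the runaway behavior comes entirely from what the objective omits.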
Another risk I think the same speaker mentioned was kind of the HAL 9000 bias: we have an idea of what we THINK super AI is supposed to look like, and we scoff at runaway AI because obviously nothing comes close to HAL 9000 now.
But why does a dangerous AI have to have this human-like appearance in order to be a dangerous AI?
Take, for example, the banking and securities sector. They rely on all kinds of advanced trading algorithms and analytics, basically an AI (if crude). Since a good chunk of who-holds-what securities info is public information, you basically have an information interchange for separate financial company AIs. How do we know that all these individual financial sector agents aren't already some kind of emergent super AI?
Will we even be able to recognize an advanced AI if we're relying on "I'm sorry, Dave.." as our guide?
Something that we're forgetting about AI (Score:3, Insightful)
AI is [or will be] programmed by human programmers.
At present there are two alarming trends in programming. One, companies are unwilling to pay properly trained Western-educated programmers and are increasingly outsourcing programming to inexperienced programmers in Third World countries. Once these programmers become better at their craft, they demand higher pay and/or move to the West and the companies move to even cheaper countries. Two, despite outsourcing, there are probably not enough competent programmers in the world to fulfill current and future demand, so there will always be many incompetent programmers being utilized, regardless of economics.
While the world's best programmers are true craftsmen, the worst programmers are the ones we have to worry about. Some company looking to save a few dollars will hire a few incompetents and, rather than your word processor crashing and costing you a few minutes of work, your AI's built-in curbs will malfunction and it will go rogue. How many programmers in the world today can design a bug-free security sandbox? As AIs become more sophisticated, every programmer will have to be able to do this. AIs have to be contained, yet what intelligent human would willingly consent to being imprisoned? In the battle between an AI and an inexperienced programmer from sub-Saharan Africa in 2100 who is the first generation of his tribe not to be a shepherd, can you guess who will lose?
Unless we can change the way our field works, we must assume that the worst and least experienced programmers in the world will be working on AI. There is no basic competency required in programming. Companies will simply pay the least amount of money that they can get away with, like they always do. The AI that kills us won't come from research labs at MIT, it will come from the Microsoft outsourcer office in Bhutan.
We are being ruled already, without AI (Score:5, Insightful)
The machines have already taken over, even without AI. This is because everyone instantly believes what a machine tells them, no matter what. Look at people who have had their identity stolen. A machine says they took out a loan here, and a loan there. Now the creditor is demanding payment. The victim can't convince the creditor that they did not take out the loan because the machine says they did, and the machine is never wrong. Despite the fact that there is no physical evidence that any loan was ever granted to the victim, the machine is believed over the human. No one ever questions how the data got into the machine. No one ever assumes that any mistake was made at any time. Whatever the machine says is assumed to be the truth.
Recently, a friend related this story. He was in a checkout line at Target and had purchased about $60 worth of items. He handed the clerk a $100 bill. The clerk mistakenly keyed in an extra zero, and the machine dutifully instructed her to make change of about $940. The clerk, without hesitation, started to do this, and my friend had to stop her and explain her mistake. The clerk couldn't understand; they eventually had to call a manager over to explain the problem. This girl was prepared to do whatever the machine told her, despite the fact that it made no sense.
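The register's failure here is the absence of a plausibility check. A sketch of the kind of sanity check the machine could have applied (the function name and the $100 largest-bill threshold are illustrative assumptions, not any real point-of-sale API):

```python
def change_due(total_cents, tendered_cents, largest_bill_cents=100_00):
    """Compute change, with the sanity check the register in the story lacked."""
    if tendered_cents < total_cents:
        raise ValueError("tendered less than total")
    change = tendered_cents - total_cents
    # A mis-keyed extra zero turns $100 into $1,000; change larger than
    # the largest circulating bill is almost certainly an entry error.
    if change > largest_bill_cents:
        raise ValueError("implausible change; re-check the keyed amount")
    return change

print(change_due(60_00, 100_00))   # 4000 cents, i.e. $40 change
# change_due(60_00, 1000_00) raises instead of instructing $940 of change
```

Two lines of validation would have flagged the $940 payout before the clerk ever saw it.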
We don't need AI for the machines to rule over us.
Wrong mental framework (Score:2)
The problem with those fears is that they come from a completely unrealistic understanding of how Artificial Intelligence works. The myths that permeate western public understanding and popular depictions of robotics and AI are Frankenstein and Pinocchio. However, the Mechanical Turk and Disney's The Old Mill are much more accurate descriptions of what's going on in the workings of any current, apparently intelligent machine.
The "bottom-up" approach you talk about does exist, but as of today it only has the
If humans have free will (Score:4, Interesting)
Then so do subatomic particles. You don't need AI if that's all you want. If subatomic particles do not have free will, then neither do humans. This second option allows physics to be Turing Complete and is much more agreeable.
If computers develop sufficient power for intelligence to be an emergent phenomenon, they are sufficiently powerful to be linked by brain interface for the combination to also have intelligence as an emergent phenomenon. The old you would cease to exist, but that's just as true every time a neuron is generated or dies. "You" are a highly transient virtual phenomenon. A sense of continuity exists only because you have memories and yet no frame of reference outside your current self.
(It's why countries with inadequate mental health care have suspiciously low rates of diagnosis. Self-assessment is impossible as you, relative to you, will always fit your concept of normal.)
I'm much less concerned by strong AI than by weak AI. This is the sort used to gamble on the stock markets, analyse signal intelligence, etc. In other words, this is the sort that frequently gets things wrong and adjusts itself to make things worse. Weak AI is cheap, easy, incapable of sanity checking, incapable of detecting fallacies and incapable of distinguishing correlation and causation.
Weather forecasts are not particularly precise or accurate, but they've got a success rate that far outstrips that of Weak AI. This is because weather forecasts involve running hundreds of millions of scenarios that fit the known data across vast numbers of differing models, then looking for outcomes that are highly resistant to change, that will probably happen no matter what, and for what on average happens alongside them. These are then filtered further by human meteorologists (some solutions just aren't going to happen). It is an incredibly processed, analytical approach. The correctness is adequate, but nobody would bet the bank on high precision.
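The ensemble idea described above, running the same model many times under perturbed inputs and keeping only outcomes that survive the perturbation, can be sketched in a few lines (the function and the toy pressure-threshold model are illustrative assumptions, not real forecasting code):

```python
import random

def ensemble_forecast(model, initial, n_runs=10_000, noise=0.5):
    """Run the model over many perturbed initial conditions and report
    how often each outcome occurs; dominant outcomes are the robust ones."""
    counts = {}
    for _ in range(n_runs):
        outcome = model(initial + random.gauss(0.0, noise))
        counts[outcome] = counts.get(outcome, 0) + 1
    return {outcome: n / n_runs for outcome, n in counts.items()}

# Hypothetical toy model: "rain" whenever pressure dips below 1000 hPa
probs = ensemble_forecast(lambda p: "rain" if p < 1000.0 else "dry",
                          initial=1000.2)
```

Real forecasting also varies the model itself, not just the inputs; a single-model, single-dataset trading system skips both kinds of variation, which is the contrast the comment is drawing.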
The automated trading computers have a single model, a single set of data, no human filtering and no scrutiny. Because of the way derivatives trading works, they can gamble far more money than they actually have. In 2007, such computers were gambling an estimated ten times the net worth of the planet by borrowing against predicted future earnings of other bets, many of which themselves were paid for by borrowing against other predicted future earnings.
These are the machines that effectively run the globe and their typical accuracy level is around 30%. Better than many politicians, agreed, but not really adequate if you want a robust, fault-tolerant society. These machines have nearly obliterated global society on at least two occasions and, if given enough attempts, will eventually succeed.
These you should worry about.
The whole brain simulator? Not so much. Humans have advantages over computers, just as computers have advantages over humans. You'll see hybridization and/or format conversion, but you won't see the sci-fi horror of computers seeing people as pets (think that was an Asimov short story), threats counter to programming (Colossus, 2010's interpretation of 2001, or similar) or vermin to be exterminated (The Matrix's Agent Smith).
The modern human brain has less capacity than the Neanderthal brain, overall and in many of the senses in particular. You can physically enlarge parts of your brain, up to about 20%, through highly intensive learning, but there's only so much space and only so much inter-regional bandwidth. This means that no human can ever achieve their potential, only a small portion of it. Even with smart drugs. There are senses that have atrophied to the point that they can never be trained or developed beyond an incredibly primitive level. Even if that could be fixed with genetic engineering, there's still neither space nor bandwidth to support it.
Re: (Score:2)
Your statements have no basis, except for a quasi-religious fundamentalist conviction that Physicalism is the correct model for this universe and all observable in it. That is not science, that is pure belief and one that ignores rather strong indicators to the contrary. Now, I have no idea what form of dualism we actually have here, and it certainly is not a religious one, but when it comes to things like intelligence, consciousness, free will, etc. physics rather obviously does not cut it. There is no spa
Re: If humans have free will (Score:2)
See the Free Will Theorem and its proof, then find the error in that proof. Talk won't cut it: either your claim is correct and the proof is flawed, or the proof is correct and your argument is flawed.
I am a mathematical realist, not a physicalist, but accept that physical reality is all that exists at the classical and quantum levels. There isn't any need for anything else, there is nothing else that needs to be described. But let's say you reject that line. Makes no difference, the brain is Turing Complete an
Re: (Score:2)
Oh, feel free to define yourself away any time you like. To anybody with actually working intelligence, this is just one thing: Bullshit.
Still, you can find even stranger delusions in the religious space. And yes, your convictions are religious. Even the language shows it: "but accept that physical reality is all that exists at the classical and quantum levels." "Accept the one true God..." sounds a bit similar to that, doesn't it? As science gives you absolutely no basis for that "acceptance", it is a beli
Re: (Score:2)
Re: (Score:2)
Wrong mental framework (Score:2)
The problem with those fears is that they come from a completely unrealistic understanding of how Artificial Intelligence works. The myths that permeate western public understanding and popular depictions of robotics and AI are Frankenstein and Pinocchio. However, the Mechanical Turk and Disney's The Old Mill are much more accurate descriptions of what's going on in the workings of any current, apparently intelligent machine.
In the Mechanical Turk, all appearance of independent behavior is put there by a hu
Re: (Score:2)
Indeed, good summary of the human side of this baseless hysteria.
None of the AI research we have is aimed at the strong/true AI that might be a danger. We still do not know whether creating that type is even possible in this physical universe. AI research has had impressive results, but none of them produce anything comparable to what smart human beings can do. What AI can do is of a completely different type; in particular, it cannot do anything at all that requires the least shred of insight or understand
Re: (Score:2)
Quite true. I happen to have the (irrational?) belief that creating true AI is possible (or rather, I have no reason to believe that it's not), hence my user name. Yet I think that possibility can't be achieved with our current knowledge and technology. Major breakthroughs would have to happen both in hardware capabilities and software a
Re: (Score:2)
I have no issues with your stance. My take is different, but you are obviously rational about this. The next few decades will certainly be interesting in this regard. My intuition is that we will see only failure, and that at some point the whole AI community will again attempt a paradigm shift (they have done so before), because the breakthrough will not manifest itself. But that is my intuition, not hard fact.
It may turn out that artificially creating intelligence is possible after all and if that happens I will re-e
Economy? (Score:2)
Dystopian nihilist economics = nonsense (Score:2)
My biggest concern with AI is the same concern with automation: our economy is built on the assumption that labor is a scarce resource.
Labor is a scarce resource, but not remotely the only one. Minerals and raw materials are scarce. So is energy. So are time, space, land, and imagination. Labor is an important constraint but hardly the only one. Furthermore, even if you automate one activity, that doesn't make labor not-scarce in general; it just means that people start working on something else they didn't have time for previously. A few people suffer economically in the short run from these sorts of dislocations, but over time scales of
We do not even know that meaningful AI is possible (Score:2)
Seriously, while there are a lot of predictions from "researchers" hungry for grant money, the state-of-the-art is that it is completely unclear whether strong/true AI is possible at all in this universe. There are a few strong hints that it may not be. We also do not even have a theoretical model how it could be built and even if we had a theoretical model, we do not know whether that could be implemented and what performance we would get. We have _nothing_ at all. All we can do is create the appearance of
Re: (Score:2)
Could you provide a few of these supposed strong hints?
So if a machine can do these things, but has arrived at this state by learning with neural nets, or genetic algorithms such that we still can’t define exactly how its intelligence works, does that preclude sayin
Re: (Score:2)
So if a machine can do these things, but has arrived at this state by learning with neural nets, or genetic algorithms such that we still can’t define exactly how its intelligence works, does that preclude saying it is intelligent?
If there ever is a machine that can do the things that require, as it is usually put, "real insight" and "real understanding", then it is intelligent. Not even a hint of such capabilities can be created today. They can be faked to a degree, but that always falls down when the borders of the illusion are reached.
Incidentally, you do not know that your brain is creating intelligence and insight. For that you need to assume Physicalism is correct and that is a mere belief with no scientific basis. Don't get me w
Re: (Score:2)
We do know that all kinds of chemicals, surgery, and accidents that affect the brain also affect how people think. As for pejoratively labeling Physicalism a religion: I could easily subscribe to Dualism, but I would require an overwhelming amount of evidence supporting examples where Physicalism is insufficient. To the contrary, we have overwhelming evidence that the brain is causally connected to thought, so there is curre
Re: (Score:2)
Your explanation is deeply flawed. The mistake you are making is assuming Physicalism as the zero-state, when it is clearly the more restrictive model by an extreme amount and hence would actually need extraordinary proof in its favor. What you are doing is junk-science. And hence it is a pure belief without rational basis. In fact, it also has some rather strong aspects of a mental illness as it denies individual existence.
As to the effects of surgery, drugs, or even a simple blindfold: dualism does not say a per
Re: (Score:2)
Your explanation is deeply flawed. The mistake you are making is assuming Physicalism as the zero-state, when it is clearly the more restrictive model by an extreme amount and hence would actually need extraordinary proof in its favor. What you are doing is junk-science. And hence it is a pure belief without rational basis. In fact, it also has some rather strong aspects of a mental illness as it denies individual existence.
This is hilarious. Physical reality, by scientific definition, is the "zero state" - what normal scientists call physical reality. Imagining events, though explainable through physical processes alone (see the standard model of physics), to be in some kind of magic fairyland space "outside" of physical reality is not only unscientific it also has some rather strong aspects of a mental disease as it denies reality. Once you come down off your LSD, think of a repeatable test for this other plane of existen
Re: (Score:2)
No, what physics currently thinks is physical reality (different from actual physical reality) is decidedly not the zero state, and everything we know about it comes with proof, sometimes extraordinary proof, as for Quantum Mechanics (because it is so very non-intuitive). Your mistake here is that you confuse the model with the thing that is modeled. A typical novice's mistake. And of course, you cannot use the real thing as a stand-in for the model, because the model explains and allows prediction
Re: (Score:2)
Define "non-physical". If you can't find a set of properties that can tell the "physical" apart from the "non-physical", you're just trolling (I don't know whether you're trolling other slashdotters or just yourself). If your properties depend on assuming that reality is dualistic, you're doing circular reasoning.
I admit that I'm no expert on dualistic theories, but I've never come across a convincing set of such properties.
Re: (Score:2)
Non-physical I use here, rather obviously, for "not modeled by currently established physical theory". Your confusion may stem from the difference between the physical and physics. One is reality, the other is a partial model of it. The fundamental mistake physicalists make is to assume physics is the full, accurate and complete model of the physical (i.e. of reality). Physics makes no such claim at all. In fact, physicists are still searching for the GUT, so the physics research community is very aware that their mod
Re: (Score:2)
I like the comparison to intelligent design! Rational on the surface, but once you look at what is actually known and how they justify it, dissolves into irrational mysticism that is a complete perversion of Science.
It is really quite obvious: We are self-aware. Self-awareness is not something physical matter can do (and we are more sure of this than ever in human history), hence if Physicalism is right, self-awareness must be an illusion. But then _who_ has that illusion? Exactly! The whole idea that self-
Re: (Score:2)
Why would you assume that? Taking that assertion as an axiom is in fact admitting dualism as a first principle, but there's no rational reason to accept it as a given either - I certainly don't see anything obvious about it.
Quite the contrary, it seems perfectly consistent with what we know about consciousness thanks to recent brain s
Re: (Score:2)
You are completely missing that any kind of perception is only something an "observer" can do. (Physical matter cannot supply an "observer", as observers can and will collapse the wave-function.) Hence you are completely missing that in your model there is nobody that has the perception of self-awareness.
Or put differently: calling self-awareness a "perception" is correct, but it dodges the question by abstracting one step. Perception requires awareness (not necessarily self-awareness), and that is, again, not
Re: (Score:2)
Actually, Quantum Computing (if it ever works, and if the theory holds up when tested this way) gives you performance boosts, but it cannot solve non-computable problems either. For that, more is required, and we have no clue what that "more" could look like. We can observe its interface behavior in action whenever a smart human thinks, but we have no clue what is happening there. It is certainly not classical computing, or "digital" Quantum Computing.
The other problem is that consciousness
Re: (Score:2)
I guess I did ask for some examples, and this is one... However...
I am aware of Penrose’s wild speculations. It is likely quantum processes are involved in some structures deep within neurons and optimize when neurons should fire, similar to the way Chlorophyll uses quantum mechanics to absorb light. Note, the fact that Chlorophyll uses quantum principles does not keep us from developing Solar Cells, which by the way, harvest light 10-20x more efficiently than the best plants. The idea that the who
Re: (Score:2)
Well, as uninformed as a researcher that has followed AI research closely for 30 years can be.
Re: (Score:2)
Re: (Score:2)
No, there are not. There are 8 billion interfaces that can be observed and that cannot (at this time and maybe never) be created artificially. Interface observations do not allow any conclusion as to how and where exactly that behavior is controlled from. But stick to your religion. Just do not expect any respect for putting your head in the sand.
Re: (Score:2)
Re: (Score:2)
Sorry, sarcasm is very difficult to recognize on /.
And no, I am not. Just like many Neuro-"scientists", you mistake what an fMRI shows. There is actually no way to interpret the measurement from these and from psychological experiments reliably, as the base mechanism generating them is not understood. Think of it as making fine-grained heat measurements on a running PC. First, that gives you a massive loss of detail. You can still find the CPU (for example), but what it does is completely out of reach. And
That's what they want you to think. (Score:2)
Suspicious (Score:2)
AI isn't the problem, but computers still are... (Score:2)
AI, at least on Turing-complete architectures, cannot and will not happen. Roger Penrose pretty much put the final nail in the AI coffin [wikipedia.org] nearly thirty years ago when he (rightfully) pointed out that there are non-algorithmic aspects to cognition and consciousness (prereqs for intelligence in anybody's book) that can't even be simulated, let alone replicated, on a Turing machine. He based his argument on a novel but quite defensible interpretation of Gödel's Incompleteness Theorem. Look up the Halting Pro [wikipedia.org]
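For readers looking up that argument: the undecidability of the halting problem rests on a diagonalization that can be sketched in a few lines. This is the standard textbook illustration, not Penrose's argument itself; `naive_halts` is a deliberately wrong stand-in for the decider that cannot exist.

```python
def make_contrarian(halts):
    """Given any claimed halting decider, build a program it gets wrong."""
    def contrarian(prog):
        if halts(prog, prog):
            while True:       # decider says "halts", so loop forever
                pass
        return "halted"       # decider says "loops", so halt at once
    return contrarian

# Any concrete decider we supply is refuted on contrarian(contrarian).
def naive_halts(prog, arg):
    return False              # a (wrong) guess that nothing halts

contrarian = make_contrarian(naive_halts)
print(contrarian(contrarian))  # halts immediately, contradicting naive_halts
```

Whatever `halts` claims about `contrarian(contrarian)`, the construction does the opposite, so no total, always-correct decider can exist.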
Re: (Score:2)
Re: (Score:2, Insightful)
Re:don't prevent intelligence because of fear.. (Score:5, Interesting)
Being free-willed does not mean that one never makes so-called "decisions" that can be predicted from a knowledge of the state; it only means that one is *capable* of making decisions that cannot be. However, how is the behavior of a system where one is not so capable any different from one where the state itself is unknowable? You can apply a variation of the halting problem to establish with certainty that either the universe is not deterministic, or else it is impossible within its framework to contrive a test that incontrovertibly proves that it is.
e.g. If I could know what decisions you were making (a notion that is at least theoretically possible if the universe were truly deterministic), I could analyze them and predict the answer you would give to a particular question, even if I told you truthfully what that answer was going to be. However, with your so-called illusion of free will, you could use the information I gave you in the present about your alleged future action and deliberately contradict it, invalidating my prediction. That means my knowledge of the future state was incorrect, which leads inescapably to the conclusion that even if the universe is deterministic, it is impossible to know that it is.
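The contradiction described here is mechanical enough to demonstrate in code. This is a toy sketch: a boolean "decision" stands in for any announced prediction, and the agent simply does the opposite of whatever it is told.

```python
def contrarian_agent(announced):
    # On hearing a truthful announcement of its predicted choice,
    # the agent deliberately contradicts it.
    return not announced

def find_stable_prediction(agent):
    # Search for a prediction that remains correct even after
    # being announced to the agent.
    for guess in (True, False):
        if agent(guess) == guess:
            return guess
    return None  # no announced prediction survives being announced

print(find_stable_prediction(contrarian_agent))  # prints: None
```

Against a contrarian agent there is no fixed point: every announceable prediction falsifies itself, which is the point of the paragraph above. It does not show the universe is non-deterministic, only that a predictor who must announce its prediction cannot be reliably correct.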
And it is noteworthy that by outward present appearances, we appear to have free will with regards to decisions that we make.
So if, by all the standards that can ever be measured, you appear to be a "free-willed" person, then how is your behavior any different from what it would be if you actually *were* free-willed? And if your behavior is identical either way, what purpose does suggesting that you are not free-willed even serve?
Plus of course, one cannot advocate the notion that we are not free-willed without also suggesting that we abandon the notion of personal responsibility... but that's a philosophical debate for another time.
Re: (Score:2)
e.g. If I could know what decisions you were making (a notion that is at least theoretically possible if the universe were truly deterministic), I could analyze them and predict the answer you would give to a particular question, even if I told you truthfully what that answer was going to be. However, with your so-called illusion of free will, you could use the information I gave you in the present about your alleged future action and deliberately contradict it, invalidating my prediction. That means my knowledge of the future state was incorrect, which leads inescapably to the conclusion that even if the universe is deterministic, it is impossible to know that it is.
If you know the state and rules of the deterministic universe and use that knowledge to predict my decision, you must also in making that prediction predict your own actions, such as telling me your predictions, and my reaction to your actions. So I cannot contradict your actual prediction, unless you weren't really using the state and rules of the deterministic universe to make your prediction. (Of course I could contradict your announced prediction, if it were other than your actual prediction.)
It seems c
Re: (Score:2)
Re: (Score:3)
It's not free will that's the problem. And artificial intelligence is not intelligence; you conflate so much. Intelligence isn't the crux of moral choices; it's survival instinct and altruism. Do you have an altruism algorithm?
What of the war machines, the killing machines? Everyone making them will always claim that theirs is necessary because of course, they're in the right.
Medical apparatus-- who plays God here? My algorithm or yours? AI has dropped millions of tons of rockets into the drink. AI has misrouted milli
Re: (Score:3)
It's not free will that's the problem. And artificial intelligence is not intelligence; you conflate so much. Intelligence isn't the crux of moral choices; it's survival instinct and altruism. Do you have an altruism algorithm?
What of the war machines, the killing machines? Everyone making them will always claim that theirs is necessary because of course, they're in the right.
Medical apparatus-- who plays God here? My algorithm or yours? AI has dropped millions of tons of rockets into the drink. AI has misrouted millions and millions of mailed letters.
You trust this stuff far too much, and there is much more than "solely based on the aide is like preventing a child from existing...." to fear. Instead, coders flatter themselves that they can provide the agar, the basic ingredients of intelligence, sit back, and watch algorithms grow into something good for mankind. Horseshit.
Altruism isn't as universal in human cultures as you might think. Western, Christian-influenced cultures are totally overloaded with altruism, to the point that they are often dysfunctional because of it. Other cultures are much more pragmatic than altruistic.
Consider this; greed, selfishness and lust are essential for the survival of any organism. Nothing lives except at the expense of another living organism. Even cyanobacteria need to occupy space at the expense of other cyanobacteria.
Re: (Score:2)
Consider what separates us from the rest of the animal kingdom: civility, generosity to complete strangers, the capacity to hold in lustful desires (at least some of us), and the ability to recognize sustainability, while much of capitalist and corporate culture is strictly in it for the returns on investment.
Who do you want coding this? Trump? Sanders? Pick any politician or none at all. What other values are important? Under what circumstances and contexts will the values be altered, and for what innate ratio
Re: (Score:2)
Consider what separates us from the rest of the animal kingdom: civility, generosity to complete strangers, the capacity to hold in lustful desires (at least some of us), and the ability to recognize sustainability, while much of capitalist and corporate culture is strictly in it for the returns on investment.
Who do you want coding this? Trump? Sanders? Pick any politician or none at all. What other values are important? Under what circumstances and contexts will the values be altered, and for what innate rationale? Yes, the world can be dog eat dog, but is that the image, the logic you want in the AI that will be making decisions about YOU?
We know with certainty that humanity can be divided into those who feel love and guilt, and those who are narcissistic, even psychopathic. Which of those do you want doing surgery on you, or as an adversary on the battlefield? You make this sound easy, and it is not easy. Greed is not a value-- it is the distinct lack of a value-- charity.
You don't get around much. I've lived in many countries, on many continents and among many cultures. Your view seems limited to Western cultures.
If AI has any kind of 'survival instinct' it'll be a threat to us.
Re: (Score:2)
Um, I do have experience in other cultures, in the Orient-West and the ASEAN countries. We can agree that survival instinct is worrisome for AI. Survival instinct is strong among living things, but how AI reacts to stimuli is a direct function of its native programming, just as the reason you breathe unconsciously is part of your native programming in the limbic system.
It's my worry that bad AI logic makes uncontrollable actions, in the sense of not benefiting mankind/humans or the sustainability of the rest of t
Re: (Score:2)
Um, I do have experience in other cultures, in the Orient-West and the ASEAN countries. We can agree that survival instinct is worrisome for AI. Survival instinct is strong among living things, but how AI reacts to stimuli is a direct function of its native programming, just as the reason you breathe unconsciously is part of your native programming in the limbic system.
It's my worry that bad AI logic makes uncontrollable actions, in the sense of not benefiting mankind/humans or the sustainability of the rest of the planet when viewed on a higher scale.
Sustainability depends on the needs of the sustained. A machine civilisation might have very different needs for sustainability than ours, and its 'idea' of sustainable might well doom us.
Re: (Score:2)
This is the crux of the Heechee series, and other sci-fi tomes. When they can generate their own juice, look out!
Re: (Score:2)
It's my worry that bad AI logic makes uncontrollable actions, in the sense of not benefiting mankind/humans or the sustainability of the rest of the planet when viewed on a higher scale.
AI is nowhere (and probably never will be) near that level. It's pointless to even discuss that kind of thing because we don't even know what it'll look like, how it'll be programmed, or what context it'll be operating in. You should be more worried about someone writing some bad code that causes your self driving car to swerve into oncoming traffic in rare circumstances that weren't covered under any of the test cases.
Re: (Score:2)
We very much disagree. AI does interesting things these days, and the applications are numerous. Collectively, AI powers civilian, military, and scientific research in astounding ways. These applications now traverse disciplines. Some are in production today, others are in our immediate future, and admittedly, some are far off. But not that far off.
Re: (Score:2)
You seem to be unable to understand language. Let me help you:
"Melt-down" consists of "to melt" (which happened to the cores at Chernobyl, TMI, Fukushima and partially Windscale, that we know of) and "down", meaning to go closer to the core of the earth. Now, the active materials in some Fukushima reactor cores and in TMI have certainly gone "down" to a significant degree, as analysis of radiation intensity clearly shows. Chernobyl would probably better be called a "nuclear blow-up", because while the fissile mat
So, solvable then. (Score:2)
Yes, early researchers were wildly optimistic. I don't see how that makes them arrogant. Other disciplines had followed much quicker paths to success: say, the Wright Brothers to the first jet plane, or the Manhattan Project and the bomb. However, it didn't have to be that way. The Wright Brothers' early success could have been followed by decades of stumbling along, making only small progress for unforeseeable reasons. I guess you would have mocked those early aviators as poor arroga
Re: (Score:2)