BT Futurologist On Smart Yogurt and the $7 PC
WelshBint writes, "BT's futurologist, Ian Pearson, has been speaking to itwales.com. He has some scary predictions, including the real rise of the Terminator, smart yogurt, and the $7 PC." Ian Pearson is definitely a proponent of strong AI — along with, he estimates, 30%-40% of the AI community. He believes we will see the first computers as smart as people by 2015. As to smart yogurt — linkable electronics in bacteria such as E. coli — he figures that means the end of security. "So how do you manage security in that sort of a world? I would say that there will not be any security from 2025 onwards."
Yogurt is already smarter than me (Score:5, Funny)
Re:Yogurt is already smarter than me (Score:5, Funny)
And if you were a hot chick, we'd demand proof.
Re: (Score:2, Funny)
Re:Yogurt is already smarter than me (Score:5, Funny)
I used to suffer from the same problem. Try opening the container from the other side.
Re: (Score:3)
See, that's why I'm an optimist about technology. Human intelligence should be able to keep one step ahead of yogurt.
Re:Yogurt is already smarter than me (Score:5, Funny)
Re: (Score:3, Funny)
That's it, I'm sticking with ice cream.
Re: (Score:3, Funny)
I tried this, and it just made things worse. First, I had to get a knife to cut through the bottom side, then all the yogurt fell onto the floor.
Re: (Score:2)
Just breathe through your nose, honey.
Oh - wait - sorry, wrong problem...
- David Stein
Re:Yogurt is already smarter than me (Score:5, Funny)
Futurologists... (Score:5, Insightful)
*ducks*
It's a synonym for "Author Of Speculative Fiction" (Score:2)
Right. (Score:5, Interesting)
Get a grip, for God's sake.
Re: (Score:2)
Why would it need telephone operators? Isn't Manhattan Island one big prison [imdb.com]?
Re: (Score:3, Insightful)
Re:Right. (Score:4, Insightful)
That is the role of the "automatic" part in modern phones.
Smartitude: people vs computers (Score:5, Funny)
Computers as smart as "some" people im sure (Score:4, Insightful)
Re: (Score:2, Interesting)
At least one as smart as our President.
Re:Computers as smart as "some" people im sure (Score:5, Insightful)
When you have a computer that can beat you at chess without having a chess program installed, it's time to be concerned.
Re: (Score:2)
Now, when you can just tell it the rules of the game, and it just remembers previous games to become better, you'll have a shadow of AI.
Re:Computers as smart as "some" people im sure (Score:4, Insightful)
Hmm. That doesn't explain why the chess program I wrote in college kicked my own ass every time I played against it
Re: (Score:3, Informative)
Your computer didn't beat you at chess, a programmer did.
This is a common misconception. People say, "A computer only follows the rules that the programmer gave it, so it's the programmer's knowledge and skill that are used to play the game." But the programmer is NOT actually telling the computer how to play chess. The programmer only tells the computer how to THINK ABOUT playing chess. And by executing this thinking program, the computer designs its own chess-playing strategies.
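To make that concrete, here is a minimal sketch in Python, using a toy one-pile Nim game in place of chess for brevity (all names here are illustrative, not anyone's actual program). The programmer supplies only the rules and a generic minimax search, the "how to THINK ABOUT it" part; the search discovers the winning strategy on its own:

```python
# A generic minimax searcher: the "thinking" procedure the programmer writes.
# The computer derives its own strategy by applying it to the game's rules.
# Toy game (one-pile Nim): take 1-3 stones; whoever takes the last stone wins.

def moves(stones):
    """Legal moves from a position -- the rules, not the strategy."""
    return [n for n in (1, 2, 3) if n <= stones]

def best_move(stones):
    """Return (score, move): score is +1 if the player to move can force a win."""
    if stones == 0:
        return (-1, None)                    # opponent took the last stone: we lost
    best = (-2, None)
    for m in moves(stones):
        score = -best_move(stones - m)[0]    # opponent's best reply, negated
        if score > best[0]:
            best = (score, m)
    return best

print(best_move(5))   # (1, 1): take 1, leaving 4 -- a forced win
print(best_move(4))   # (-1, 1): every move loses against perfect play
```

Nowhere does the code say "leave your opponent a multiple of 4," yet that is exactly the strategy the search ends up playing. That is the sense in which the computer, not the programmer, designs the strategy.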
Re: (Score:2)
Then we'd elect it as president.
Re:Computers as smart as "some" people im sure (Score:5, Insightful)
Intelligence is like terrorism (or pornography), in that it's definable only with broad, nebulous, debatable borders. Chess is one kind of intelligence, and our current logic models are excellent here. Art is another kind of intelligence, and our current logic models are terrible here.
The problem with modern AI (and the flaw in Ian Pearson's predictions) is that we really don't understand many kinds and elements of intelligence. For instance:
Anyone who tells you differently is trying to sell you their book. ;)
- David Stein
Re:Computers as smart as "some" people im sure (Score:4, Insightful)
The other side of AI says that "my brain is magic, and I'm really smart and you can't possibly produce a robot as clever as me". I don't subscribe to that one - I think that's nonsense.
At minimum he's misrepresenting "the other side" of AI. As one professor (in the only college philosophy class I've taken, btw) recounted, in the '60s a universal human language translator was supposedly inevitable and right around the corner. The problem is that these predictions were being made by technologists, not linguists -- people who didn't understand the problem. And language, on the surface, is a simple problem: languages have rules, exceptions to those rules, and vocabulary that can be exhaustively enumerated -- a custom-fit problem for computers, right?
Turns out that machine translation from one language to another was a tad more complicated -- all due to a lack of understanding of linguistics. It's a problem for a linguist to solve, not a programmer or "AI Researcher".
We'll first understand how our minds work, and then we'll be able to create strong AI. A shrink can better tell us when this might happen than a technology futurist (and of course, there's plenty of good arguments that this will never happen).
IMHO, you're very right in pointing out that you run into basic problems once you start out trying to define what we mean by human intelligence; in fact, there's a very good argument to be made that when you start peeling away layers, a lot of what we understand as human intelligence is innately biological. As in, no strong AI in 100 years, no strong AI ever -- not created by human intelligence at any rate.
This doesn't seem to be a popular view of intelligence with technology-minded people: we seem to assume that the brain is the hardware and that our mind is the software -- so all you need is the right program running on your 386 and "poof" you have human intelligence.
That's how I felt myself, actually; the prof's arguments didn't make sense to me until, a few years after I got my C in his class, a discussion of AI finally made me understand what he was saying all those years ago.
Re: (Score:3, Interesting)
I don't think so. While — as an independent AI researcher — I would not absolutely rule out a "we programmed it" solution, I really don't think that's what we're looking at. No more than we were "programmed" to be intelligent, in any case.
What is needed is a hardware and probably somewhat software (at least as a serious secondary effort after pure software simulation uncovers what we need to do) syst
Re: (Score:3, Informative)
Oh, I wholeheartedly disagree.
Modern AI simulates creativity in, essentially, a two-step process:
Certainly many kinds of inventions are created that way. We have a term for that, and it's not "creative invention" or "engineering" - it's "serendipity."
There are at least two other ways in whic
Re: (Score:3, Insightful)
Re: (Score:3, Insightful)
I'm sure I'm not the first person to have thought of this, but what if we just simulated a biological human in software? Computers are already pretty good at simulating chemical reactions, physics, even as far as protein folding . . . why not learn to build an AI by taking it a step further and refine the simulation models until they can accurately simulate single cells up to macro-sized multicellular animals and eventually humans? On a powerful enough computer (no doubt well beyond what's feasible today)
Re: (Score:3, Interesting)
Note that robots have effectors, so that's not an insurmountable problem, merely a very different one (that's already being worked on). Note also how completely separated it is from intelligence.
Then there's motivation. Why should an AI want to do any parti
$7 PC: Wrong (Score:5, Insightful)
Well, no one except Nintendo.
Re: (Score:2)
Re: (Score:3, Insightful)
Actually, you wouldn't have. Everybody back then thought we were going to build computers that took up entire city blocks and would get up into the millions of computations per second range. The personal computer took them by almost complete surprise f
Re: (Score:2)
OK, so they might not see how far some things will go but overestimate others (moon bases?). But I would say that their accuracy is probably not as bad as you think (I'll take Nostradamus as 0-1% accurate). If you look at the original Star Trek then quite a few things which they had we are getting close t
Re: (Score:3, Insightful)
Up until recently, you could get a pretty functional PC up here in Canada for around $1000. Back in 1988 it was $2000, and now it's probably $600, but the principl
And flying cars and moonbases (Score:5, Insightful)
The problem is that they can only detect trends and can't really predict real things. So when you see a futurist going out on a limb and claiming that X is only 10 years away, they are betting that you will forget they ever made such a silly prediction 10 years from now. If they do manage to get something right, you can bet they'll be working overtime trying to get grants from RAND and MITRE for more futurism.
However, the reading of trends is a very important role of sociology. Only by accurately predicting what sorts of stresses and issues we will face in the near-term future can we sufficiently prepare ourselves for them. The RAND Corporation has a list of 50 books for thinking about the future. (http://www.rand.org/pardee/50books/) These offer insights into the past and present and into the minds of successful futurists.
The one thing you will notice about successful futurists is that they don't go overboard predicting killer electronic E. coli yogurts. Rather, they outline the likely changes in society and provide suggested remedies for foreseeable problems as well as suggested directions for societal growth.
The area of futurism is very interesting and a strong futurist school of thought is vital to our success as a society. Cranks who like to come up with doomsday scenarios do the entire field a disservice.
Re:And flying cars and moonbases (Score:4, Informative)
Some of these trends are predictably reliable, though. Moore's Law is by no means perfect, but it's extremely likely that computers will continue to grow in processing power at a steady, exponential rate, at least for the next few decades.
The problem is that some - including the typically brilliant Ray Kurzweil - believe that AI is limited by computational power. I don't believe that's the case. I believe that AI is limited by a woefully primitive understanding of several components of intelligence. It is impossible to produce artistic, emotive, sentient machines by applying today's AI models to tomorrow's supercomputers.
Reliable predictions:
- David Stein
Robot brains getting Master Degrees in 20 years? (Score:2)
We're getting a greater understanding of neuroscience, and starting to get some of these concepts built into the way that computers will work, and computers don't have to be a grey box with a whole stack of silicon chips in it - there's no reason why
Re:Robot brains getting Master Degrees in 20 years (Score:2)
You just ruined my day. Your line above gave me a vision of Robot Lawyers....
Re:Robot brains getting Master Degrees in 20 years (Score:5, Insightful)
Like I said... I don't doubt that eventually we'll develop a computer that can match the processing power of the human brain. But I doubt it'll be soon. It *might* be within my lifetime, but I'm not holding my breath on that one... The brain isn't magical; it's a trillion-core symmetric computer with staggering memory retention and bandwidth, and programming so complicated that we're nowhere near matching it. Oh, and that's not mentioning that the brain doesn't work in binary switches, either. It works in chemical switches, with about 50 possible states running in parallel.... Some day, we'll beat out the human brain with a computer. Humans are just too arrogant to believe that we can't, and so somebody will eventually do it. But it's not going to be tomorrow.
Smarter and Smaller. At least one's a good bet. (Score:2)
That's bolder than a lot of strong AI proponents. Traditionally, it's 20-30 years down the road.
As to smart yogurt -- linkable electronics in bacteria such as E. Coli -- he figures that means the end of security. "So how do you manage security in that sort of a world? I would say that there will not be any security from 2025 onwards."
Unless you've got equally effective opposing nanotech, which I suspect there will be some research in.
Re: (Score:3, Interesting)
The confusing thing about all of his "yogurt" predictions is that they are internally inconsistent. At first he discusses how electrically-active bacteria could be oriented in such a way as to design a computer. This is entirely reasonable and is, in fact, how animal nervous systems function. THEN he goes on to these ridiculous claims about bacteria hacking electronics after being released in air cond
Re: (Score:2)
Says something about businesses (Score:2)
So yeah, the deck is stacked unless the planet is hit with an asteroid the size of Manhattan. Well, that's something to look forward to, I guess.
Re: (Score:2)
Historically not true.
People want a safe environment for their children and to be left the hell alone.
Witch Doctors, Futurologists, and Cranks (Score:5, Interesting)
I predict that in 2015, this guy will still be making predictions. His track record will be no better than random chance would have produced. The time you have spent reading his predictions and even this response is time out of your life that you will never recover, and reading it will not put you to any better advantage than if you had not.
Re: (Score:3, Informative)
Androids for "Companionship"?? (Score:2)
Re: (Score:3, Insightful)
This is the KICKER (Score:3, Funny)
Maybe not, but I'll have the job debugging all the mistakes the androids will have in their code. They'll be outsourcing debug work to us humans.
Re: (Score:2)
Re: (Score:3, Interesting)
And the funny thing is that no one realizes how many times it's happened to different degrees.
"You'll just describe your problem to the computer"
Sure, current languages aren't exactly plain-English descriptions, but if described to someone writing assembly code or laying out punch cards years ago, they'd probably view them as darn close.
Languages like Prolog take the co
Man's a fool (Score:3, Insightful)
The other side of AI says that "my brain is magic, and I'm really smart and you can't possibly produce a robot as clever as me". I don't subscribe to that one - I think that's nonsense.
Tells me all I need to know about this guy's predictions.
He fails to understand that in the 40+ year history of AI research, no one has demonstrated even the inklings of a foundation upon which actual AI can be built.
There may be nothing special about the human mind, but whatever the case is, we certainly haven't figured it out yet. It's more likely that we'll have cold fusion by 2015 than AI.
Re: (Score:2)
Even Turing failed miserably, predicting machines would be passing his Turing test around 2000.
It doesn't sound like we've progressed that much from the time when he made his prediction;
machines keep getting faster, but as far as mathematics and theory are concerned, it doesn't seem we've come that far from where we were 50 years ago.
Re: (Score:2)
Yep (Score:2)
Re:Yep (Score:5, Insightful)
Re: (Score:3, Interesting)
Re: (Score:3, Insightful)
How does he misunderstand that? All he's saying is that there isn't any sort of magical power or Cartesian dualism in the brain which somehow creates an immaterial mind/soul separate from the physical world. He isn't claiming that AI researchers have actually figured things out yet.
And yes, I have spoken to people that think it's impossible to
No security (Score:2)
Re: (Score:2)
When do I get my flying car?!?! (Score:2)
Lollipop! (Score:3, Interesting)
Uh, I thought that explaining what you want to a computer is precisely what programming is all about. Isn't source code a program's best specification? What are programmers doing if not explaining what they want from the computer?
When someone says "I want a programming language in which I need only say what I wish done," give him a lollipop.
Re: (Score:2, Interesting)
I really hate this damn machine;
I wish that they would sell it.
It never does just what I want,
But only what I tell it.
Re: (Score:3, Insightful)
Clearly, between this, and the AI prediction, this guy is completely unaware of computing history.
Only a fool would try to predict the future with no knowledge of the past.
Dijkstra dismissed the idea long ago. But of course, I'm sure this no-name doofus knows better!
http://www.cs.utexas.edu/users/EWD/transcriptions/EWD06xx/EWD667.html [utexas.edu]
http://en.wikipedia.org/wiki/Natural_language_and_computation [wikipedia.org]
yes yes... (Score:2)
"Futurology" is bunk (Score:5, Insightful)
In 1955 Heinlein, in Revolt in 2100, had the protagonist heading to "the Republic of Hawaii", not able to foresee that four years later it would become a state.
Roddenberry had automatic doors, cell phones, and flat screen monitors 200 years in the future rather than 30 years later (now). His writers had McCoy give Kirk a pair of reading glasses in Star Trek IV, not foreseeing that twenty years later the multifocus IOD would be developed.
This guy says we'll have six hundred million androids in ten years. He doesn't understand computers, or that AI is just simulation. "I'm in the 30-40% camp that believes that there's really not anything magical about the human brain." But he doesn't see that it is analog, and that thoughts, memories, and emotions are chemical reactions, while digital computers are complex abacuses working exactly like an abacus (except using base 2 instead of base 10).
He talks of that Warwick guy - "Kevin isn't really the first human cyborg". Nope, he isn't. Vice President Cheney is a cyborg, as he has a device in his heart. I'm one, as I have a device in my left eye (the aforementioned IOD). People have artificial hips and knees. "Captain Cyborg" isn't really a real cyborg, he's a moron like the writer of TFA.
Nothing to see here - at least, nothing for anyone intelligent to see here.
Re: (Score:2)
What is a multifocus IOD??
Re: (Score:3, Informative)
http://en.wikipedia.org/wiki/Darpa_grand_challenge [wikipedia.org]
There was a competition of self-driving cars (or SUV's, mostly, and one big truck) put on by DARPA last year, and five of them managed to complete a 132 mile desert course. Next year's DARPA challenge is in an urban environment with the requirement of obeying traffic laws. The U.S. Army is attempting to use robots for a significant portion of its noncombatant groun
Re: (Score:3, Interesting)
But not writing fiction:
NISTEP [taoriver.net] used the delphi method [wikipedia.org] to great effect.
Some examples:
Self-Programming Computers, HA! (Score:4, Insightful)
Boy, have I heard this one before. The claim used to be that computer languages would become so simple that the profession of Programmer would disappear, because everyone would just be able to write their own programs. Sure hasn't happened yet.
Someone once famously said: Computers are useless, they can only give answers.
The problem here is, even if you had a computer like the one described here, you still need to be able to understand your problem well enough to cogently explain it to your computer. And that's where most people will fail. They don't understand their problems in the first place, and have no idea how to communicate the solutions they actually need.
Re: (Score:2)
That describes my day job. The clients never know what they really want, and why should they? I'm the solution guy. I give them what they don't know they want.
Research, a few (million) questions, some creativity and blam, presto, a system that does more-or-less what they didn't-know-they-wanted.
If I, with an IQ of a measl
Re: (Score:3, Insightful)
Already here (Score:2)
As for simulating real humans: we're no closer today than in 1960.
this android disagrees (Score:2)
I agree with his point that there is nothing magical about the brain, but I think he's off his rocker to say it will happen in 10-15 years. Perhaps he should beef up on some neuroscience papers before making such grand claims. While I think there should eventually be a link between the AI and neuroscience worlds, it real
In the future (Score:2)
My hope for the future is that someday everyone will create web pages with software that uses standard ASCII, so that I see quotes, dollar signs, pound signs, etc. instead of things being broken by things like "smart" quotes. Note that the above is how the last line renders with Firefox; my guess is it probably looks just fine if you are using Internet Explorer.
Um, this "futurologist" is a moron... (Score:2, Interesting)
Hmmm... Well, let's tackle the AI thing.
AI = Human Intelligence isn't going to happen. Ever. You might be able to get a machine that can take as many input data points as the human brain, and get it to execute as many data output points as the brain, but that's not intelligence. That's I/O and there's a big fat difference.
Security won't exist. Really? So if some asshat barges into my house I won't be able to pound his skull to a bloody pulp wit
Re:Um, this "futurologist" is a moron... (Score:4, Insightful)
Frankly, making broad statements like that sounds an awful lot like insisting 640kb of RAM ought to be enough for anyone, or that a computer will never be smaller than a gymnasium, or that man will never fly or travel to the moon.
Hmm.. maybe no security beyond 2017 won't matter. (Score:2)
HA! (only joking, don't throw me any flamebait)
Re:Hmm.. maybe no security beyond 2017 won't matte (Score:2)
$7 for a computer - outrageous (Score:5, Insightful)
item# 172008
http://www.officedepot.com/ddSKU.do?level=SK&id=172008&x=0&Ntt=organizer&y=0&uniqueSearchFlag=true&An=text [officedepot.com]
A lot depends on how you define a computer, but think about what this would have been like in 1970.
$15 PC = discount cell phone (Score:2)
What makes people say weird things? (Score:2)
It's interesting to try to think about what must happen to a person to make them so divorced from reality that they make such claims. I can understand someone who knows nothing about computers making such a claim. But this is by someone who is supposedly an expert in the field. Is this person deliberately lying to be provocative? (No, the guy responds by saying it's 'realistic'.) Do they have no idea what is going on in AI? (Surely no
Strong AI by 2015? (Score:2)
Technology advances very rapidly, but rarely in the directions
More than just "not in the direction" (Score:5, Insightful)
But solving something that's NP-complete is not "just a little more difficult" than writing a word processor or an OS. It's so much harder that we need a totally new theoretical framework. Faster processors aren't enough to get us there. And the theoretical breakthroughs come a whole lot less frequently than processor speed increases.
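For a feel of what "so much harder" means, consider the brute-force approach to subset sum, a classic NP-complete problem: every extra element doubles the number of subsets to check, so a faster processor barely moves the reachable problem size. A toy Python sketch (illustrative only; real solvers use cleverer search, but no known algorithm escapes the exponential worst case):

```python
from itertools import combinations

def subset_sum(nums, target):
    """Brute force: try every subset. Adding one element doubles the work."""
    for r in range(len(nums) + 1):
        for combo in combinations(nums, r):   # all subsets of size r
            if sum(combo) == target:
                return combo
    return None

print(subset_sum([3, 34, 4, 12, 5, 2], 9))    # (4, 5)
```

With 20 elements that is about a million subsets; with 60 it is roughly 10^18, and doubling CPU speed wins back exactly one element. That is why faster hardware alone does not crack these problems.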
Flying cars? You know, we could probably do that today. It's just a personal STOL aircraft, basically. We can solve the technological problems there. What we can't solve is the rest of it. Between the power requirements (cost) and the driver knowledge needed to operate it, the market size is too small to be worth the effort to create such a beast.
AI? We have the computers that could run the code (maybe). We don't know how to write the code. We probably won't know how next decade, either, or the decade after that.
Smart bacteria? We could perhaps create them. Making them spy on keypresses? Possible. Finding the data you want in the stream of data coming from a trillion (or quadrillion) bacteria? He seems not to have addressed that one.
Sending cans to other parts of the solar system? We've done that. Permanent colonies? It's a lot harder than just sending a bigger can with more stuff in it.
We have breakthroughs in one area (CPUs, for instance) and people assume that other, related problems must be "only a little harder" and therefore about to be solved. But problems differ enormously in difficulty; the level of breakthroughs that we have now is nowhere near what we need for certain problems.
A long way to go for robots (Score:2, Informative)
Unfortunately, the current state of robotics is, in terms of cost-effectiveness, about where computers were circa 1955. For example, Honda's "little android," the Asimo (at least according to Wikipedia) still costs about $1 million per
Smart Yogurt (Score:5, Funny)
Job market? (Score:2)
human-level AI requires subatomic brain model (Score:2)
So why not just have sex and make a real human?
Well, the advantage would (presumably) be that the simula
Strong AI is total fantasy. (Score:3, Insightful)
The problem is that people who want to build strong AI are trying to do in decades what took nature billions of years. Certainly, directly engineering something is usually faster than evolving it, but even orders-of-magnitude speedups won't yield strong AI systems any time soon.
Since the dawn of computing, people have been assuming that once computers got fast enough, the AI problems would just solve themselves. The problem is that we're talking about very hard problems. Things that are easy for us (walking, visually recognizing objects, etc.) are hard for computers. Things that are hard for us (math, data processing, etc.) are easy for computers. Why? To do things that are hard for us, humans have developed, over thousands of years, detailed and exacting formalisms. Math has axioms, a syntax, and a set of mechanical processes to carry out. Even complicated proofs involve an extraordinary amount of simple symbol-pushing that a computer could do easily. Computers are based on exactly those same formalisms, so it makes sense that it would be easy to program a computer to do those things. Computers are NOT, however, built anything like how the human brain works, and that's why AI researchers use neural nets and genetic algorithms for so many things.
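The contrast can be seen in miniature: a perceptron, the simplest neural-net unit, is never handed a formalism at all; it is shown examples and nudges its weights until a rule emerges. A small self-contained Python sketch (illustrative, not any researcher's actual system):

```python
# A single perceptron learning the OR function from examples.
# No rule is ever written down; the weights converge to one.
w = [0.0, 0.0, 0.0]                      # two input weights plus a bias

data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

def predict(x):
    s = w[0] * x[0] + w[1] * x[1] + w[2]
    return 1 if s > 0 else 0

for _ in range(10):                      # a few passes over the examples
    for x, target in data:
        err = target - predict(x)        # -1, 0, or +1
        w[0] += 0.1 * err * x[0]
        w[1] += 0.1 * err * x[1]
        w[2] += 0.1 * err                # bias update

print([predict(x) for x, _ in data])     # [0, 1, 1, 1]: OR, learned not coded
```

To make a computer do arithmetic you hand it the rules directly; here the rule is recovered from data. That is exactly why neural nets and genetic algorithms show up in the domains where no formalism exists.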
Long before we're plagued by computers thinking for themselves, demanding rights, and taking over the world, we'll simply continue to be plagued more and more by increasingly catastrophic bugs introduced into increasingly more complex applications. Far from having autonomy, our bug-ridden software does exactly what it was coded to do, right or wrong, and we'll suffer from it. And all along the way, the blame for the problems will fall squarely on the human engineers who made the mistakes in the first place.
Ok, how? (Score:3, Insightful)
I suppose I'm one of those 60%-70% of the "AI community" (or of the former AI community, since I left AI and went into algorithms, which was what I wanted to do anyway, when people started spouting this nonsense), but I'm skeptical of any claims that we will develop strong AI by a certain time period that do not propose a reasonable way of doing so.
Here's a hint: it's not about raw processing power! The challenge is still theoretical. We can't even implement AI on an oracle the way things are now, much less a real machine. Great, so you'll have a powerful enough CPU to simulate a brain. Any idea how you're going to write a program that simulates one, considering we don't know anywhere near the requisite level of detail of the brain's operation yet?
No, it isn't IT. We design all of the hardware in IT; we know under what circumstances signals are going to be sent. It's kind of hard to have that level of control when you're talking about the nervous system, especially with how little we understand about it now. The best analogy I can think of is attempting to prove a theorem in physics versus mathematics (the former is regarded as bad science; the scientific method is empirical). We make the rules in math, so we know when something agrees with those rules. New rules come from old ones, so everything remains intact. We don't make the rules of physics (or biology), and there's plenty we still don't know about those rules.
BT? (Score:3, Insightful)
Re: (Score:3, Insightful)
Re: (Score:2)
Re: (Score:2)
Of course, once you have the computing power, you still have to solve the software problem. How long that will take is anyone's guess. Having huge machines available for research (to simulate neural
Evolution of thought (Score:2)
The brain has evolved to manage chemical and electro-chemical reactions in a quasi-chaotic state. (That is, it looks like chaos to the untrained eye, much like my office.) Where the computer is based in mathematics, the brain is not, at least not directly.
Let me make a prediction. I like saying stupid-assed things as if I knew what I was talking about. Check my posting history if you don't believe me.
Here's my prediction:
If we ever stumble upon hard AI, it w
Re: (Score:2)
Re:The Color of America? (Score:4, Insightful)
You do realize how fluid the definition of "white" has been over the decades, right? A century ago, most of Europe itself wasn't considered "white," especially the southern and eastern bits. When you get right down to it, the only reason Italians, Poles, or even the Irish are considered "white" nowadays is because their emigrants have made a name for themselves in "white" countries like the United States.
Immigrant-friendly "white" countries have taken in plenty of disparate newcomers, far removed from the populations they tried to integrate into, and instead of these countries ceasing to be considered "white" (as many xenophobic contemporaries have always feared), the definition of "white" has simply expanded.
Re: (Score:2)
If it were, would it really be spending all its time strapped to your stinky wrist?