Effort to Create Virtual Brain Begins 454
bryan8m writes "An IBM supercomputer running on 22.8 teraflops of processing power will be involved in an effort to create the first computer simulation of the entire human brain. From the article: 'The hope is that the virtual brain will help shed light on some aspects of human cognition, such as perception, memory and perhaps even consciousness.' It should also help us understand brain malfunctions and 'observe the electrical code our brains use to represent the world.'"
Thoughts on virtual thoughts (Score:5, Insightful)
Seriously, they expect it to take a decade to complete. By 2015, we could probably get processors with that kind of power from the local computer store. Then everyone could have their own virtual brain...wait, are they going to GPL this?
So what happens if this thing develops a consciousness?
Re:Thoughts on virtual thoughts (Score:5, Funny)
We kill things with consciousness all the time.
Re:Thoughts on virtual thoughts (Score:5, Funny)
Re:Thoughts on virtual thoughts (Score:3, Funny)
Re:Thoughts on virtual thoughts (Score:4, Funny)
Re:Thoughts on virtual thoughts (Score:3, Funny)
Re:Thoughts on virtual thoughts (Score:5, Informative)
for those of you who didn't get that joke [eeggs.com].
Re:Thoughts on virtual thoughts (Score:3, Funny)
Had to be said... (Score:5, Funny)
Re:Thoughts on virtual thoughts (Score:2, Insightful)
TFA does mention mouse brain, and human only as the goal in 2015... plenty of time to increase the flops.
Re:Thoughts on virtual thoughts (Score:5, Funny)
Re:Thoughts on virtual thoughts (Score:3, Funny)
The factorial of 42 is a two-digit number?
What base are you using? Base 37483411234209726053065806?
(because in this base, it would be two digits: the more significant digit would have the value 37483411234209726053065805, and the less significant digit would have the value 33187259034871818286636170)
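For anyone who'd rather check the arithmetic than take it on faith, Python's arbitrary-precision integers make this a one-liner. The sketch below prints the two base-digits so they can be compared against the values claimed above (the base itself is the parent's number, taken at face value):

```python
import math

f = math.factorial(42)                 # 42!, computed exactly
base = 37483411234209726053065806      # the base proposed in the parent

# In any base b, the top "digit" and bottom "digit" of a two-digit
# number are just divmod(f, b).
hi, lo = divmod(f, base)
print(hi)
print(lo)

# If this prints True, 42! really is a two-digit number in this base.
print(hi < base)
```

By construction `hi * base + lo` reconstructs 42! exactly, so the only thing left to eyeball is whether the printed digits match the parent's.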
Re:Thoughts on virtual thoughts (Score:3, Funny)
SerpentMage's key insight, of course, was that we are only interested in a particular number - the factorial of the Great Answer. This will be the only use we will ever
Re:Thoughts on virtual thoughts (Score:3, Insightful)
But if a cpu in 2015 can simulate 100 billion neurons sending signals to each other a couple hundred times a second over 100 trillion morphing connections asynchronously
Re:Thoughts on virtual thoughts (Score:3, Insightful)
A neuron is *very* simple. Maybe just a sigmoid function over a sum. If the thing actually is doing 22.8 teraflops (unlikely; I'm guessing that's the theoretical peak for the machine) then that gives 228 instructions per neuron. That is in the right range for operation.
The connections aren't really 'morphing' either; they tend to mostly stabilize within the first few years of life. I can't remember the figure, but it's maybe on the order of 1,000 connections per neuron, so 228 floating point op
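The budget arithmetic in this subthread is easy to make explicit. A quick sketch, where every figure (22.8 TFLOPS, 100 billion neurons, ~1,000 synapses per neuron, "a couple hundred" updates per second) comes from the comments above, not from IBM:

```python
machine_flops = 22.8e12   # the machine's quoted (likely peak) rate
neurons       = 100e9     # ~100 billion neurons, per the thread
synapses      = 1000      # ~1,000 connections per neuron (rough figure)
rate_hz       = 200       # "a couple hundred" signals per second

# FLOPS available per neuron per second on this machine:
per_neuron_per_sec = machine_flops / neurons
print(per_neuron_per_sec)           # 228.0

# A sigmoid-over-a-sum costs roughly one multiply-add per synapse per
# update, so the demand per neuron per second is about:
needed = synapses * rate_hz         # 200,000 ops
print(needed / per_neuron_per_sec)  # shortfall of roughly 877x
```

In other words, 228 FLOPS covers about one update of one sigmoid neuron, but not a couple hundred updates per second over ~1,000 connections, which is the grandparent's point.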
Re:Thoughts on virtual thoughts (Score:5, Informative)
You're pretty correct on the wiring, although not at the level you wrote. The basic connectivity and structure is known, but each and every brain is wired from experience, not just birth.
It's worth trying, and we will learn a lot regardless. We just won't learn as much about the brain as one might think.
Re:Thoughts on virtual thoughts (Score:5, Informative)
Hold your horses! There is abundant evidence that single neurons can perform more complex operations than a mere 'sigmoid function'. That is a working approximation that can be useful from the point of view of simulations, but that is all.
Single neurons can potentially perform computations at the level of the passive cable equation; at the level of active membrane properties added to those passive cable equation solutions; and at the level of genetic instructions becoming activated in the nucleus and dendrites in response to activity. And finally, the plasticity or learning rules that neurons use are not only computationally very important but probably quite varied from brain region to brain region. Spike-timing-dependent plasticity, for example, allows the brain to pick out persistent correlations within highly noisy inputs. None of this is included in the impoverished neural-network viewpoint of 'sigmoids'.
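For readers unfamiliar with spike-timing-dependent plasticity: there is a standard textbook pair-based formulation, sketched below with illustrative constants. This is the generic rule, not anything specific to Markram's project:

```python
import math

# Pair-based STDP: the weight change for a synapse depends on the time
# difference between its presynaptic and postsynaptic spikes.
# Amplitudes and time constant are typical textbook-style values,
# chosen here purely for illustration.
A_PLUS, A_MINUS = 0.01, 0.012   # potentiation / depression amplitudes
TAU = 20.0                      # time constant in milliseconds

def stdp_dw(t_pre, t_post):
    """Weight change for one pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt > 0:    # pre fires before post: strengthen (causal pairing)
        return A_PLUS * math.exp(-dt / TAU)
    elif dt < 0:  # post fires before pre: weaken (anti-causal pairing)
        return -A_MINUS * math.exp(dt / TAU)
    return 0.0

# Pre leading post by 5 ms potentiates; the reverse order depresses.
print(stdp_dw(0.0, 5.0) > 0)    # True
print(stdp_dw(5.0, 0.0) < 0)    # True
```

Because only consistently causal pairings accumulate positive weight changes, a synapse driven by random, uncorrelated spikes drifts while a persistently correlated input gets reinforced, which is the correlation-picking behaviour described above.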
The real question is: why are they doing this? Markram is a top researcher and knows what he is doing. But I question the motivations of Big Blue. I wouldn't be surprised if they didn't give two hoots about the science but rather are only doing this so that they can get the kind of publicity that posts on Slashdot bring. Remember 'Deep Blue'? Let's hope they don't treat Markram like they did Kasparov.
Re:Thoughts on virtual thoughts (Score:4, Insightful)
Will they use some kind of skin grafting onto a chip to let it "feel" things using the nerves in it, instead of simply simulating it with pressure/temperature sensors?
And what of other stuff like taste and smell?
Re:Thoughts on virtual thoughts (Score:2)
I think you've been spending a little bit too much time in science fiction fantasy land.
Re:Thoughts on virtual thoughts (Score:2)
Are you processing what I'm processing??
err.. thinking.
Re:Thoughts on virtual thoughts (Score:3, Funny)
>
> Are you processing what I'm processing??
Seeing as how they're using slices of mouse brain, I believe the correct answer would be along the lines of...
"Umm, I think so, Brain, but a billion parallelized microprocessors and a human named CmdrTaco? What would the children look like?"
Re:Thoughts on virtual thoughts (Score:5, Informative)
From the article:
In other words, one day they hope to simulate a whole brain, but to begin with they'll be modelling the behaviour of a particular neural unit - with physical data derived from many, many slices of mouse brains.
In terms of deciphering the behaviour of relatively large numbers of neurons, it could be incredibly useful (and once the model is tuned would mean fewer messy, difficult and unpleasant experiments involving live animals, brain electrodes and whatnot) - but it's admittedly only a small first step toward modelling a whole brain of any species. Still, it's one of the necessary building blocks - and any moral issues are left as an exercise for the reader...
Re:Thoughts on virtual thoughts (Score:2)
Re:Thoughts on virtual thoughts (Score:5, Informative)
Some of the most successful early computers were analog computers, capable of performing advanced calculus problems rather quickly. Before digital computers became the mainstay of computing, analog computers were quite common. Analog computers use varying voltages and currents to represent variables, and various types of amplifiers to represent factors in differential equations, with the result being a final voltage or current that can be read out on a meter or graph. Analog computers were heavily used in process control situations, such as calculating the correct aiming of the big guns on board a battleship. Many variables had to be considered simultaneously, including the position of the ship, the position of the target, the type of ammunition, the wind and other weather conditions, the constant motion of the ship from the action of the sea, and myriad other variables. The analog computer would simultaneously combine all of these variables to generate a real-time result that would control the large servomechanisms that aimed the guns to assure that their ordnance would be delivered accurately to the target.
They were, however, a real bitch to sort out. So the computer world focused upon digital designs, which, it turned out, were a lot easier to do.
Re:Thoughts on virtual thoughts (Score:4, Informative)
A key factor is that analog computers are inherently lossy; components aren't precise enough to make a large analog computation as the imprecisions tend to add up...
And then there's the whole Turing concept of code as data. Analog computers were "programmed" by adding and subtracting components; software as bits is a lot more mutable. Even so, with the appropriate switching devices, an analog circuit that's programmable is theoretically possible.
But why bother when digital is so much more precise?
On the flip side, analog computers STILL see some life in minor subsystems everywhere. With proper design they happen to be quite handy for feedback-control applications...
Re:Thoughts on virtual thoughts (Score:3, Informative)
Because quantization and roundoff error play HELL with derivatives. Bigger, faster, cheaper digital computers had to be developed and better algorithms discovered before digital could take over the job. Once that had been done, digital's flexibility won out.
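The point about quantization wrecking derivatives is easy to demonstrate: differentiate a signal that has been rounded to a fixed resolution (three decimals here, standing in for an ADC) with a naive finite difference, and shrinking the step size makes the answer worse, not better:

```python
import math

def quantized_sin(x, decimals=3):
    # Stand-in for a digitized signal: sin(x) rounded to fixed resolution.
    return round(math.sin(x), decimals)

def fd_derivative(f, x, h):
    # Naive forward difference: divides the quantization error by h,
    # so small h amplifies the noise.
    return (f(x + h) - f(x)) / h

true = math.cos(0.5)  # exact derivative of sin at x = 0.5
for h in (1e-1, 1e-2, 1e-5):
    est = fd_derivative(quantized_sin, 0.5, h)
    print(h, abs(est - true))
```

An analog integrator never has to pick a step size at all, which is part of why analog machines held on so long for differential equations.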
Analog computer technology was an outgrowth of audio and radio, and developed quickly during and immediately after WW II. A couple dozen components would make the fundamental building block, which
Re:Thoughts on virtual thoughts (Score:5, Interesting)
Yes. That's what has me thinking. Not that I think we should stop, but it's going to be a disturbing moment when the techs running these things get to a point where they ask a simulation brain questions, get it to perform tasks, get it to react like a human does...
Re:Thoughts on virtual thoughts (Score:3, Funny)
Re:Thoughts on virtual thoughts (Score:5, Informative)
You are.
According to the Business Week article [businessweek.com], this thing will be simulating about 10,000 neurons. The human brain has about 100 billion neurons. This will be simulating a small section of cortex, not an entire brain. The goal seems to be to understand how cortical columns work, not to create a simulated mind. They actually will not even have enough "neurons" to match one human cortical column, but will probably still learn a lot about the circuitry....
Re:Thoughts on virtual thoughts (Score:3, Informative)
Again from the article:
Sounds like they'll use
Re:Thoughts on virtual thoughts (Score:2)
Interesting. IBM are not the first BTW (Score:2)
http://www.ad.com/ [ad.com]
They've been at it for several years, so it looks like IBM is a bit behind.
This research is probably not theory-driven. (Score:3, Insightful)
My guess is that the Business Week article linked in the parent comment is better than the New Scientist article at explaining the researcher's intentions. Here's a quote from the Business Week article: "The Blue Brain Project will search for novel insights into how humans think and remember."
If you've been around scientific research, it is not difficult to understand that this research has little chance of producing anything valuable.
There are several reasons:
1) The research is equivalent to tryin
Re:Thoughts on virtual thoughts (Score:4, Funny)
Give me a shovel and a dark night and I'll get you some real brains, second-hand. And at only 1/2 the cost.
Sincerely,
Igor
Re:Thoughts on virtual thoughts (Score:3, Insightful)
How would you tell? Seriously. It's not like you can just stick a ruler in and measure the length of the consciousness gland.
Re:Thoughts on virtual thoughts (Score:2)
Is a machine that does 100 teraflops, but which does multiplication by adding in a loop, better than a 50-teraflop machine which does it with a more intelligent algorithm?
I'm pretty sure that eventually we'll understand how the brain works, which will enable us to produce something that emulates its function, but in a much more efficient way. Just like we can make machines that are better at multiplication I'm sure that some day we'll make mac
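The multiplication analogy above can be made concrete: same answer, wildly different operation counts. A toy sketch (nothing to do with how any real ALU works):

```python
def mul_by_adding(a, b):
    """Multiply by adding in a loop: b additions of a (b >= 0)."""
    total, ops = 0, 0
    for _ in range(b):
        total += a
        ops += 1
    return total, ops

def mul_by_shifting(a, b):
    """Russian-peasant multiplication: one shift/add step per bit of b."""
    total, ops = 0, 0
    while b:
        if b & 1:
            total += a
        a <<= 1
        b >>= 1
        ops += 1
    return total, ops

print(mul_by_adding(1234, 5678)[1])    # 5678 operations
print(mul_by_shifting(1234, 5678)[1])  # 13 operations (5678 has 13 bits)
```

If the brain's "algorithms" turn out to be similarly compressible, raw FLOPS comparisons between brains and supercomputers tell us very little, which is the parent's point.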
Re:Thoughts on virtual thoughts (Score:2)
> I thought I was smarter than that.
A rough guess seems to come in at around 100 teraops or more.
In a paper by Hans Moravec [transhumanist.com], one guess is 10^14 instructions per second (Extrapolation of retina
equivalent computer operations.)
While another by Ralph Merkle [merkle.com], suggests 10^13 - 10^16 operations per second, based on power consumption,
and yet another by Robert McEachern [aeiveos.com] suggests 10^17 FLOPS (Floating Point Operation Per Second, more comparable to c
Re:Thoughts on virtual thoughts (Score:3, Insightful)
Re:Thoughts on virtual thoughts (Score:2)
Re:Thoughts on virtual thoughts (Score:5, Funny)
Neutrons are responsible for indifferent behaviour towards females. A recent study shows that slashdotters have enough neutrons emitted from their brains that they could be used as a substitute for Californium-252.
Electrons decide the level of excitement. That's why you feel charged after a couple of beers :)
Re:Thoughts on virtual thoughts (Score:2, Insightful)
In this light, one could almost consider a search engine's racks
Re:Thoughts on virtual thoughts (Score:3, Funny)
Re:Thoughts on virtual thoughts (Score:4, Interesting)
No sensationalism here. Move along.
Obligatory... (Score:3, Insightful)
Obligatory HAL quote (Score:5, Insightful)
2001 [imdb.com]
Longer article (Score:4, Informative)
http://www.businessweek.com/technology/content/ju
Structure and Function (Score:5, Interesting)
Our brains are made of mostly water, carbon, etc.... which form neurons. This is only important in the sense that we are what we are because these neurons are able to take a set structure, where neurons interconnect, and then have a specific function, where they fire.
There's nothing magical about these neurons. Let's say that you could replace these neurons with, say, ultra-small marbles that could take the same structure and perform the same function... It is logical to think that this marble-brain would be an actual brain, the same as any other. It would be a person.
So if they're simulating a brain virtually, but this virtual construct simulates the structure and function correctly, would this virtual brain be aware? Would it be a "person"? I personally, would say that it would. But then, is it moral to ever shut such a simulation off (murder)? Or create it in a virtual world without any other virtual brains to talk to (torture)? Or create it at all for the use of an experiment?
Re:Structure and Function (Score:3, Interesting)
But in regards to this simulation, it is not being built to do the things that a human brain does. That is, as far as I can tell from the article, it does not have any perceptual, motor, or cognitive functions, it is simply an isolated circuit designed to understand how assemblies of neurons work together.
A growing movement in cognitive neuroscience stresses an understanding of the mind as "embodied". That is, much of our cognition relies upon and draws from the p
Re:Structure and Function (Score:2)
Oversimplification which loses sight of that fact does nothing for your argument.
Life.. don't talk to me about life.. (Score:5, Funny)
"I think you ought to know that I'm feeling very depressed"
In other news (Score:5, Funny)
They decided on George W. Bush.
Let's just hope....
hmmm....
I for one welcome our new artificial dumb military overlord.
Re:In other news (Score:3, Interesting)
Re:In other news (Score:2)
Re:In other news (Score:2)
Re:In other news (Score:3, Funny)
In further developments, the allegedly dimwitted IBM computer 'test brain' has again outpolled the latest Democratic presidential hopeful, leaving the former "major" political party now in third place and scrambling for some good news. Leading mainstream media sources have suggested anonymously that somehow this computer has managed to run a global repressive conspiracy, convince congress to throw the country into a war for its personal enrichment, and personally engineered a massive McCarthy
Where is the content? (Score:2, Interesting)
It is su
Re:Where is the content? (Score:2, Interesting)
In order to simulate a mammalian cortical column, the weight and bias of each synapse needs to be determined (experimentally, or by simulation through trial and error) relative to the other synapses in that column (and there are probably tens of millions of synapses in a column consisting of 70,000 neurons).
This
What if the simulated brain is a person? (Score:2, Insightful)
Re:What if the simulated brain is a person? (Score:4, Insightful)
Luckily the situation is more convenient. Call it something like "suspend to disk": back up the whole state and you have the equivalent of hibernation. It can be "de-frozen" and brought back to life anytime.
Umm... (Score:5, Funny)
They work quite differently you know.
Some even speculate that one of those two kinds of brain might need even less than 22.8 Teraflops to simulate.
Re:Umm... (Score:4, Funny)
Re:Umm... (Score:2)
You're a bit diffuse about some things I'd like to see more about, like how a powered down brain couldn't be wrong and what the brains were "correct" about at all.
not there yet (Score:2, Informative)
Will come to nothing (Score:3, Interesting)
My prediction is that this project will achieve very little. I doubt they know as much as they think they do, but more importantly they won't be able to bootstrap this thing to be comparable to a real person.
Re:Will come to nothing (Score:2)
Teraflops vs parallelism. (Score:2)
On the other hand, with a good setup of several FPGA boards - where a small group of gates could work as a neuron, and there would be billions of them, all working in parallel (just like the brain does) - this could work. P
Re:Teraflops vs parallelism. (Score:2)
When we're looking at the question of how the brain works, we need these interpretation stages, because when we look at it as is (either by looking at a physical brain or a hardware model of same) there's just too much chaos to pick out the useful order.
Re:Teraflops vs parallelism. (Score:2)
Brain simulation? I doubt it (Score:3, Interesting)
While it is true that Moore's Law suggests we will soon have the processing power of the human brain, that doesn't mean we will soon have AI on our hands. If we built this computer and fed into it a "Hello World" program written in Pascal, it isn't going to suddenly become self-aware.
We only have one type of working brain, so it would make sense to replicate this in every way possible in order to create a simulated intelligence. However, this has a great deal of complexity that we have neither the biological knowledge to understand nor the technical knowledge to emulate. Literally millions of neurons are connected inside us, forming cortical maps and working at different levels of awareness, from the lower, barely perceptible levels (reflex actions) to the higher, seemingly conscious levels (deciding whether to order toast or a bagel for brunch).
Anyone who's interested in AI (or indeed the operation of the human brain) should read Steve Grand's book. It is highly enlightening, and very thought-provoking.
Uh oh..... (Score:2)
An AI Essay (Score:3, Interesting)
From Socrates to Expert Systems [berkeley.edu].
It argues that rule-based AI is a dead end. It also classifies levels of expertise.
It would seem like this non-rule-based IBM brain-simulation method could be one which possibly goes beyond the 'advanced beginner' stage that Professor Hubert Dreyfus argues rule-based systems are limited to.
Re:An AI Essay (Score:2)
*some* (Score:2)
"If the human mind was simple enough to understand, we'd be too simple to understand it." -- Emerson Pugh
Of course, back when he said that 720k really was all the memory you would ever need.
My how things do change. One step closer to a neural shunt every day.
Brain != Thinking (Score:3, Insightful)
As of this posting, there have been several "what if" posts about the project accidentally leading to the creation of artificial intelligence. Systems such as the fictitious Skynet will not rival the flexibility and depth of a single human mind until we fully understand the mind ourselves. Lisa Fittipaldi, an astonishingly talented painter, is able to create beautiful scenes on what was once a blank canvas. At the same time, Ms. Fittipaldi is unable to paint an accurate portrait - she is blind.
We can only recreate what we understand.
Wishful thinking (Score:5, Insightful)
As someone who spent many years as a neurophysiology researcher before becoming a programmer, I feel I may have a bit more insight than the average person. What this project boils down to is a simplistic model of the simplest unit of operation of one area of the brain (the neocortical column). Anyone who has followed research into areas such as epilepsy and memory will know of the massive gaps in our understanding of the relationship between the brain and the mind. So this "first computer simulation of the entire human brain" is not accurate, in the sense that they are not simulating the human brain, nor are they the first to try what they are attempting. The only difference here is that they have the very public backing of a major corporation that understands the benefit of good publicity.
This sort of research is fascinating and desperately needs to be done, but it does no one any favours when people attach tabloid-style headlines to it. The days when we wear Richard Morgan-style "stacks" are, unfortunately, still as far away as ever.
Re:Wishful thinking (Score:3, Insightful)
it constantly amazes me that people still assume that once a certain amount of computing 'power' is available, a computer could suddenly become sentient, as if someone just flicked a switch.
we don't even know what sentience and consciousness really mean ourselves
Consciousness (Score:2)
The question of morality of this replication of a brain (mouse, human, whatever - let's speak hypothetically, it's easier) boils down to the existence of a soul.
If you have a wiring model that responds to stimuli in the same way as the real brain being modelled would, then there's no way to distinguish between the two.
This is made more complicat
Re:Consciousness (Score:4, Insightful)
I've tried to keep the following part objective. It is not intended as a troll. Please read it objectively, and consider as part of a discussion over brain simulation and its repercussions rather than about religion. I believe what I believe, you believe what you believe.
I consider there to be no evidence - as such - for religion. Christians point to the bible, others to their own spiritual texts, but I'm quite cynical about the whole thing because there's no manifest evidence. But I don't go out and try to convince them that it's untrue, because I also don't have evidence to the contrary, and I'm also not fussed enough to feel an urge to bring people round to my way of thinking on that. However, as seen over and over (crusades, holy wars, jihads... the list goes on), the perceived insults against a religion are, often enough, responded to with force. US currency and (I think) the White House sigil bear the words "In God We Trust", even though the state is nominally unaffiliated with a religion; can you think of what would happen if it was motioned to be changed? Enough of the US population *believes* it enough that there would be outrage.
These same people believe that the creation of life, of soul, is for their God alone, and creation of new life by humans (other than the conventional way
Not the first (Score:4, Informative)
30 years too early, according to Moore's Law (Score:3, Interesting)
Based on this (incredibly rough and inaccurate) analysis, I would predict that this type of project will be successful around the year 2040.
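The analysis above is cut off, but the general shape of such an estimate is simple compound-growth arithmetic. Here is a sketch where every input is a guess: the 10^17 FLOPS target is McEachern's high-end figure quoted elsewhere in this discussion, and the two-year doubling period is the usual Moore's-law hand-wave:

```python
import math

current_flops  = 22.8e12   # the Blue Gene figure from the article (2005)
target_flops   = 1e17      # high-end brain estimate quoted in the thread
doubling_years = 2.0       # one common reading of Moore's law

# How many doublings separate today's machine from the target,
# and roughly what year that lands on:
doublings = math.log2(target_flops / current_flops)
years = doublings * doubling_years
print(round(doublings, 1))      # ~12.1 doublings
print(round(2005 + years))      # ~2029
```

Raw FLOPS is only a lower bound, of course; pick a larger target, or add overhead for simulating rather than merely matching the brain's throughput, and the date slides out toward the 2040 predicted above.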
Re:30 years too early, according to Moore's Law (Score:4, Interesting)
But, once you determine that information-processing behavior, one should in theory be able to simulate that without a detailed model of the underlying structure. I mean, if I know that impulses from X input synapses cause the voltage at the soma to raise/lower according to a certain time function, and that a certain voltage at the soma causes an action potential to be fired, which will trigger the neuron's own output synapses to fire Y milliseconds later, I should be able to simulate these properties without going to the pain of modelling the ion channels, capacitance, and resistance of every patch of membrane on the whole neuron's surface.
That should buy a few years' worth of Moore's law for your prediction. Consider yours an upper bound, and assume we can make shortcuts to bring it sooner than 2040.
I actually think the top supercomputers are within spitting distance of modelling a human brain - or at least smaller mammalian brains - now. The trouble is that, despite what TFA leads you to believe, far too little is known yet about the interconnections of those neurons. Even less is known about their learning functions. The state of the art in much of the brain is to stick a few electrodes in, hope you find a couple of neurons that are connected in some way, record for a while, and then do statistics on their firing patterns to estimate the strength and type of their pairwise connection. From that they hope to work backwards to deduce the connection patterns of whole clusters of neurons. It's slow, messy work.
The group in TFA uses thin slices of brain where they can more accurately observe which neurons are connected to which, and which neurons they are recording from. It's a useful technique, but since the connections in the brain are three-dimensional, taking thin slices fundamentally alters the structure, so it can't tell us everything.
Much of the brain is still a black box, effectively. It will still be a while before we can model an entire brain, regardless of CPU power available. My personal gut feeling is that the understanding of the neuronal network is far more the limiting factor at this point.
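The phenomenological point neuron described in the parent (input spikes nudge the soma voltage, a threshold crossing fires a spike) really is only a few lines. This is a generic leaky integrate-and-fire sketch with made-up constants, not Blue Brain's model:

```python
def simulate_lif(input_spikes, steps=100, dt=1.0,
                 tau=20.0, v_thresh=1.0, weight=0.15):
    """Leaky integrate-and-fire point neuron.

    input_spikes: list of time steps at which an input spike arrives.
    Returns the time steps at which the neuron fires.
    All constants are illustrative, not fitted to any real cell.
    """
    v, out = 0.0, []
    for t in range(steps):
        v += dt * (-v / tau)                  # passive leak toward rest
        v += weight * input_spikes.count(t)   # each input spike bumps v
        if v >= v_thresh:                     # threshold crossed: fire
            out.append(t)
            v = 0.0                           # reset after the spike
    return out

# Ten closely spaced input spikes integrate up and cross threshold once:
print(simulate_lif([1, 2, 3, 4, 5, 6, 7, 8, 9, 10]))   # [8]
```

This captures exactly the shortcut argued for above: the soma voltage and the firing rule are simulated directly, with none of the ion-channel, capacitance, or membrane-patch detail underneath.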
What if it works? (Score:3, Interesting)
And even if it's not as smart as a human, what then? What ethical guidelines are appropriate? When is it okay to destroy a thinking being, even if you created it yourself? And how complex must it be? Killing a beagle or a dolphin isn't murder, after all, but it's still considered wrong in many cases to do so.
Are AIs cute and cuddly and protected by humane-treatment laws, or scary and kill-on-sight, like spiders and snakes are for many people?
How smart does an AI have to be to have rights against termination?
We've been sort of doodling around with these thoughts for a long time, but it's getting to the point where we may actually need the answers.....
Re:What if it works? (Score:3, Insightful)
Re:What if it works? (Score:3, Interesting)
Re:What if it works? (Score:3, Insightful)
For Heaven's sake ... (Score:3, Funny)
... make sure you install a huge fire axe near the main power cord in case this thing decides it doesn't need us anymore!
is the brain a digital computer? (Score:3, Interesting)
John Searle
There is a well defined research question: "Are the computational procedures by which the brain processes information the same as the procedures by which computers process the same information?"
What I just imagined an opponent saying embodies one of the worst mistakes in cognitive science. The mistake is to suppose that in the sense in which computers are used to process information, brains also process information. To see that that is a mistake contrast what goes on in the computer with what goes on in the brain. In the case of the computer, an outside agent encodes some information in a form that can be processed by the circuitry of the computer. That is, he or she provides a syntactical realization of the information that the computer can implement in, for example, different voltage levels. The computer then goes through a series of electrical stages that the outside agent can interpret both syntactically and semantically even though, of course, the hardware has no intrinsic syntax or semantics: It is all in the eye of the beholder. And the physics does not matter provided only that you can get it to implement the algorithm. Finally, an output is produced in the form of physical phenomena which an observer can interpret as symbols with a syntax and a semantics.
But now contrast that with the brain. In the case of the brain, none of the relevant neurobiological processes are observer relative (though of course, like anything they can be described from an observer relative point of view) and the specificity of the neurophysiology matters desperately. To make this difference clear, let us go through an example. Suppose I see a car coming toward me. A standard computational model of vision will take in information about the visual array on my retina and eventually print out the sentence, "There is a car coming toward me". But that is not what happens in the actual biology. In the biology a concrete and specific series of electro-chemical reactions are set up by the assault of the photons on the photo receptor cells of my retina, and this entire process eventually results in a concrete visual experience. The biological reality is not that of a bunch of words or symbols being produced by the visual system, rather it is a matter of a concrete specific conscious visual event; this very visual experience. Now that concrete visual event is as specific and as concrete as a hurricane or the digestion of a meal. We can, with the computer, do an information processing model of that event or of its production, as we can do an information model of the weather, digestion or any other phenomenon, but the phenomena themselves are not thereby information processing systems.
In short, the sense of information processing that is used in cognitive science is at much too high a level of abstraction to capture the concrete biological reality of intrinsic intentionality. The "information" in the brain is always specific to some modality or other. It is specific to thought, or vision, or hearing, or touch, for example. The level of information processing which is described in the cognitive science computational models of cognition, on the other hand, is simply a matter of getting a set of symbols as output in response to a set of symbols as input.
We are blinded to this difference by the fact that the same sentence, "I see a car coming toward me", can be used to record both the visual intentionality and the output of the computational model of vision. But this should not obscure from us the fact that the visual experience is a concrete event and is produced in the brain by specific electro-chemical biological processes. To confuse these events and processes with formal symbol manipulation is to confuse the reality with the model. The upshot of this part of the discussion is that in the sense of "information" used in cognitive science it is simply false to say that the
Re:how about integer performance? (Score:2)
Re:brains for those who have none ... (Score:5, Funny)
Dual boot!
Re:brains for those who have none ... (Score:4, Informative)
Re:brains for those who have none ... (Score:3, Interesting)
This irks me, too. The hell that schizophrenics live in is far worse than the experience of a person who simply shifts between multiple personalities. Confusing the two does a disservice to those who suffer from this condition.
Schizophrenia literally means "shattered mind": a person whose cognitive processes are so discombobulated that they can't differentiate the real from the unreal. It's not being Josh one day and
Re:brains for those who have none ... (Score:3, Informative)
Re:brains for those who have none ... (Score:2)
Re:brains for those who have none ... (Score:3, Funny)
Re:brains for those who have none ... (Score:2)
Re:Here it comes... (Score:3, Funny)
In Soviet Russia, supercomputers welcome you!
I'll get me coat...
Mentifex (Score:5, Informative)
There's a fairly extensive FAQ on him here:
http://www.nothingisreal.com/mentifex_faq.html [nothingisreal.com]
Re:Mentifex (Score:5, Funny)
No one seems to have real contact with Murray, and his address was never really known. He also seems to have a little too much time on his hands, posting huge numbers of Usenet posts, etc. What if Murray did succeed a long time ago, and is now letting his virtual brain (which somehow thinks it is Murray) do all his spamming for him?
Of course, this theory falls short on many points...
Re:Mentifex (Score:2)
Re:Brain impairment (Score:2)
sentient: Having sense perception; conscious
sapient: Having great wisdom and discernment.
I don't think he means sapient. Sapience is really more about wisdom and insight; sentient is closer to having conscious experience, feeling (sentire, "to feel").