Linking Hardware To Wetware 195
Vikki_R. writes: "Wired has an article about grafting a microelectronic circuit directly onto a human brain cell. Researchers at the University of Texas at Austin have been working on developing an interface between semiconductors and neurons. Imagine being able to give your computer a piece of your mind ..." Update: 11/25 22:54 GMT by T : Here's an earlier post linking to a different article on the same research.
Re:No thats not it (Score:3, Insightful)
I appreciate your attempt to rein in wild speculators who think mind-reading is as easy as sticking a wire in the brain, but "never" is a long time. If Moore's Law continues and noninvasive scanning technologies keep improving at the rate they have over the last century, the technology for this might be available next century, or even this one. Or genetic engineering might make it possible to plant large numbers of noninjurious artificial probes.
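The "never is a long time" point is really an argument about compounding. As a back-of-the-envelope sketch (the two-year doubling period is an illustrative Moore's-Law-style assumption, not a measured fact about scanning technology):

```python
def doubling_growth(years: float, doubling_period: float = 2.0) -> float:
    """Multiplicative improvement after `years`, assuming one
    doubling every `doubling_period` years (hypothetical rate)."""
    return 2.0 ** (years / doubling_period)

# Sustained doubling compounds into enormous factors, which is why
# confident "never" claims about future capability are risky.
print(f"25 years:  {doubling_growth(25):,.0f}x")   # ~5,793x
print(f"100 years: {doubling_growth(100):,.0f}x")
```

Even if the real doubling period were twice as long, a century still buys many orders of magnitude.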
What's much harder than the collection is the data analysis. We still don't have much of a scientific grasp of what consciousness is or what thoughts are. What's more, the data so far from machines that try to let the handicapped write via brain waves indicate that the brain's state is highly variable from hour to hour: the same signal patterns don't recur even when the person believes they are having the same thought. I suspect we will only begin to make substantial progress in consciousness research once we build the necessary data-collection technologies, and even then it may take decades or more before the problem is solved to the extent that, say, the human genome is now.
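The non-stationarity problem described above can be seen in a toy simulation (hypothetical numbers, not real brain-wave data): a detector calibrated on one session's statistics misfires once the baseline drifts, even though the "thought" being signaled is nominally the same.

```python
import random

random.seed(0)

def session(baseline: float, n: int = 500) -> list[float]:
    """Simulate n noisy signal samples around a session-specific baseline."""
    return [random.gauss(baseline, 1.0) for _ in range(n)]

morning = session(baseline=0.0)   # calibration session
evening = session(baseline=1.5)   # same "thought", drifted baseline

# Calibrate a simple threshold detector on the morning session:
# flag any sample more than 2 sigma above the morning mean.
mu = sum(morning) / len(morning)
threshold = mu + 2.0

false_alarms = sum(1 for x in evening if x > threshold) / len(evening)
print(f"evening samples wrongly flagged: {false_alarms:.0%}")
```

With these assumed numbers roughly a third of the drifted session gets flagged, which is why brain-wave interfaces tend to need constant recalibration rather than a one-time training phase.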
On the other hand, simple sensors and effectors are certainly a much easier problem than global mind-reading or the direct absorption of information. When the first generally useful neural interfaces become available, they'll probably function a lot like a modern head-mounted computer with speech recognition: a virtual screen translucently overlaid on the visual field, a way to speak into the system without speaking aloud, and some way to point on the screen. This is probably not more than two or three decades away. The question is what it will be good for, since the same technology in external form, through wired glasses, a miniature microphone, a gesture-recognition wristwatch or ring, and a tiny personal computer, will be available without surgery or bioengineering. Privacy in public spaces is the only major advantage that comes to mind.
Tim
Methods of Learning (Score:2, Insightful)
Security issues aside, having a networked brain and the capacity to access an unlimited wealth of information is surely my wettest dream. However, getting from the point of attaching neurons to computer circuits to the point of downloading knowledge a la The Matrix ("Now I know how to fly a Huey, yahoo!") is a much harder problem.
The human brain 1) develops over many, many years, and throughout that time develops patterns unique to the individual's experiences; and 2) develops in relation to a body through which it interacts with the world. This is why so much CogSci research focuses on the issue of "embodiment". The paradigm of the brain as a disembodied controller is fading away in favor of an emphasis on the role of the physical (both body and environment) in what we typically regard as cognition. (See Being There by Andy Clark for an amazing read.)
Given this, knowledge, especially knowledge that manifests in physical behavior, must either be "installed" in a manner highly sensitive to the idiosyncrasies of the person, or acquired through a long period in which the body and brain are trained to work together on a problem. Therefore, I don't think it's all that plausible to instantaneously know how to fly a Huey, to drive a car, to type, etc. As someone else observed, we'll see this technology used in prosthetics far sooner than we'll have Matrix/Johnny Mnemonic style scenarios.