
Scientists Activate Neurons With Quantum Dots

A.L. Blais writes: "By using the molecular-recognition capabilities of living cells, scientists have made selective electrical contacts to neurons."
  • to get fired up about...
  • 'Connect the dots' that accomplishes something more useful than fridge art.
  • Color me thick, but I don't see the potential...

    Increase/restore memory?
    Fight neurologic disease?
    Organic computer components?
    Treat drug addiction?
    Troll for bozos in the Science section?

    • "Increase/restore memory?"

      Well... if you can hook up a cable from your computer, then you could download all the information you could find on the web into your mind. Imagine the leaps and bounds in science, and in just about everything in our society, if everybody could draw in every bit of information on anything... and then improve upon it...
      • if everybody could draw in every bit of information on anything... and then improve upon it...

        Not everybody. But I'll be more than happy to license this capability to you. I've left my card on your website.

      • by nusuth ( 520833 )
        Downloading and uploading information directly to the brain faces much bigger hurdles than simply connecting to neurons. Our understanding of the brain strongly suggests (but does not prove) that information is encoded in a unique way for each individual. Therefore any attempt at uploading data to a brain would have to start with a complete decryption of that brain's structure and information encoding. This must be done as the brain changes (albeit slowly), and without access to much of the information already encoded (how can you think about everything you know while someone is scanning you?). I would say that it is impossible. Motor and sensory areas may possibly be augmented, but not cognitive functions, short of hardware so powerful that its existence would render the goal of augmenting people's brains moot.
        • look at the size of the hurdle we just got over... Not too long ago this thought wasn't even considered sane, yet today it's reality. Who knows what we'll have accomplished 50 years from now? Personally, I'll be more than happy to wait that long to upload the encyclopedia to my mind (...first decipher my brain, backup motor/sensory sections... etc etc etc...) plus much much more...

          I think as humans we greatly underestimate the capability of our own mind...

          "The one who says it cannot be done should never interrupt the one who is doing it." : The Roman Rule
          • I think as humans we greatly underestimate the capability of our own mind...

            With huge amounts of data collected from different people, it will become known which parts of cognitive function share a common basis among individuals, and which parts are highly variable. For example, we know that the concept of "red" is perceptually grounded and a somewhat constant concept across individuals, so we can look for redness encoded in the brain, starting from the visual fields and going to deeper parts. But what about "grandmother"? Does its encoding share anything at all among individuals? The answer is probably not.

            If it turns out that the variables far outnumber the constants, symbolic uploading of information into the brain is impossible without a machine that can already do much more than an augmented brain. The first part of that if statement is an open question; the second part is a logical consequence. It may turn out that even though people can make super AI machines, they still prefer to do the thinking themselves. Then your dream may come true. I wouldn't bother with thinking if something else were doing it provably better than me, though.

            • I think as humans we greatly underestimate the capability of our own mind...

              I guess you didn't think too much about that quote before you replied to it... We haven't even interfaced with our own brains yet, and already you're speculating on their exact functions. You said yourself that the brain (even half a brain) can reconfigure itself to become normal once again, so why couldn't our brain adapt to take input from something we present to it? People who have gone blind have had their brains start to use entirely different lobes to take in input again. So since it adapted to retrieve and store data from our eyes in an entirely different way than it started out with, why couldn't it adapt to a set of wires and a simple stream of 1's and 0's?

              Your greatest flaw is comparing our brain to a large chunk of transistors... the human brain has much more capability than that...
              • I think as humans we greatly underestimate the capability of our own mind...

                Oh, I got it wrong. I thought you were talking about technology, saying something like "we can invent things that can overcome these problems, although we now have no idea how to attack them; we are clever."

                Your greatest flaw is comparing our brain to a large chunk of transistors... the human brain has much more capability than that...

                That is my job, in a sense. I'm a cognitive scientist (and a chemical engineer, so what?)

                • Sweet job, I'm a little envious (a friend of mine is pursuing a career in that field). I've always thought that comparing a set of static transistors with our own neurons, and assuming the "language" our mind uses to process everything is like our own method of 1's and 0's, is a good start for the basics... but it doesn't finish the job of fully understanding our mind. Take the projects in AI (which you should be familiar with)... people have been writing logic programs to make simple conversations with people (simple, yet horribly complex and long, I might add), yet the projects that mimic our own neuron connections look much more promising for the long run... but that's my own opinion...
                  • Not all cognitive scientists are stuck in the '60s-to-'80s paradigm of "brain == computer", "language == computation." Actually, save for the cognitive linguistics field, where the majority follows that idiot "I said so. QED" Chomsky, the computer metaphor is usually not taken seriously. Cognitive psychology has quite reliable findings that the human mind does not operate like a computer, executing algorithms on a data collection, at all.

                    OTOH, neural approaches (artificial models and neurocognition) are plagued by a lack of explanatory power. I believe that process-based explanations, built on experimental data from the cognitive psychology field rather than pure theory from the philosophy or cognitive linguistics fields, will prove to be the most useful findings in the future, both by explaining how natural intelligence works and by helping us build artificial ones.

                    Even though I like cognitive psychology more, my line of research is artificial life, because there are few options here for going the experimental route :(

                • I have a question for you... (not mocking or being a smart-ass or anything... I'm just curious). Since you are a cognitive scientist, do you know of any research that has dealt with directly mimicking the signals sent through the optic nerve? I imagine that you could eventually trace the impulses sent through there to figure out what means what (after some decoding)... then from that, couldn't you create your own impulses and pretty much make anybody "see" anything you want?
                  • Artificial eyes do exist, but I don't know where exactly they connect to. Actually, the primary visual cortex is quite well understood (pray for the souls of the hundreds of macaque monkeys that died for the deed, if you are so inclined) and is not too complicated. The eyes send signals quite like an unfocused camera with a moderate amount of noise, and the first stages of processing coincide quite nicely with artificial vision models for robots (contour detection, line detection, movement detection, etc. are somewhat modularized and follow basically the same route in both cases). Models for visual functions exist too, and an interesting property of them is that, given the task of seeing, artificial neural networks tend to organize themselves into modules resembling the real ones. So sending the visual sensation of a chair, at any primary level, would not be too hard tomorrow.
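Those early stages (contour and line detection) are, in artificial-vision models at least, essentially convolution filters. As a toy illustration, not anything from the article: the image and the Sobel-style kernel below are invented textbook values.

```python
# Toy sketch of an early visual-processing stage: a Sobel-style
# vertical-edge detector, analogous to the contour/line detection
# the comment above mentions. All values are illustrative.

def convolve2d(image, kernel):
    """Valid-mode 2D convolution over nested lists (no padding)."""
    kh, kw = len(kernel), len(kernel[0])
    ih, iw = len(image), len(image[0])
    out = []
    for y in range(ih - kh + 1):
        row = []
        for x in range(iw - kw + 1):
            acc = 0.0
            for dy in range(kh):
                for dx in range(kw):
                    acc += image[y + dy][x + dx] * kernel[dy][dx]
            row.append(acc)
        out.append(row)
    return out

# A vertical edge: dark left half, bright right half.
image = [[0, 0, 1, 1]] * 4

# Sobel kernel that responds to left-to-right contrast.
sobel_x = [[-1, 0, 1],
           [-2, 0, 2],
           [-1, 0, 1]]

edges = convolve2d(image, sobel_x)
print(edges)  # every window straddles the boundary: [[4.0, 4.0], [4.0, 4.0]]
```

Each output value is large only where the window straddles a dark/bright boundary, which is why banks of such filters at different orientations can modularize into contour detectors.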

                    But object recognition (which is a prerequisite for experiences related to vision) is another thing entirely. It is not that the eyes see a chair and, somewhere along the visual path, a module becomes aware of seeing a chair and sends the information to later processes: "I'm looking at a chair. My experience is seeing a chair." The information is represented in a distributed manner (that is, a population of neurons contributes to a representation by assuming different levels of activation for each item, but no single neuron defines an object or an attribute), which makes it very hard to link concepts to the activities of populations of neurons, and impossible to link them to the activities of single neurons. So an experience is not represented anywhere by itself. The transition from sensation to experience is gradual, too.

                    There are also encouraging findings. It seems it is possible to tell which of a small list of predetermined items a person is seeing/imagining by monitoring the activities of a relatively small number of neurons. Distinguishing members of a closed set is far from manipulating activities to feed whatever experience you like, but it is a step in the right direction.
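The "distinguishing members of a closed set" idea can be sketched as a nearest-template decoder: record each item's average population activity as a template, then classify a new recording by whichever template it is closest to. The neuron count and firing-rate numbers below are invented purely for illustration.

```python
# Hedged sketch of closed-set decoding from a handful of neurons.
# Templates hold the mean firing rate of 4 hypothetical neurons while
# a subject views each item; a new recording is matched to the nearest.

def nearest_template(recording, templates):
    """Return the item whose template is closest (squared distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(templates, key=lambda item: dist(recording, templates[item]))

# Invented mean firing rates per item (4 neurons each).
templates = {
    "chair": [8.0, 1.0, 2.0, 7.0],
    "face":  [1.0, 9.0, 6.0, 2.0],
    "house": [3.0, 2.0, 9.0, 9.0],
}

# A new, noisy recording, closest to the "chair" pattern.
recording = [7.5, 1.5, 2.5, 6.0]
print(nearest_template(recording, templates))  # chair
```

Note how this only discriminates among the three predefined items; it cannot say anything about a stimulus outside the closed set, which is exactly the limitation the comment points out.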

                    Please don't confuse feeding experiences with feeding information. In the first case, it is sufficient that the experience occurs; in the second, it must also make sense to the host and be preserved over time. The first is very difficult; the second is a few orders of magnitude more so. It is rather like the difference between copying a previously unknown ancient glyph onto paper and understanding what that glyph means.

        • Early in life, the brain is highly adaptable. Perhaps we could just decide on a simple protocol, then allow the brain to learn to use it? IIRC, you aren't born knowing exactly which parts of your brain hook up to your eyes, ears, nose, etc.
          • Early in life, the brain is highly adaptable. Perhaps we could just decide on a simple protocol, then allow the brain to learn to use it? IIRC, you aren't born knowing exactly which parts of your brain hook up to your eyes, ears, nose, etc.

            That is partly true; the brain is not a homogeneous lump of neurons even at birth. Cell differentiation is already done, and which parts get which neurons is determined by the genome almost exclusively. Sensory input to the brain does not go to random locations either: visual information goes to the back, auditory information to the middle, inner side, etc. But the brain is highly plastic; if in early life (a few weeks at best) the feeding locations are altered, or if they were wrong to begin with, the brain can adapt to the changes, usually without any visible consequences. Children born with a whole hemisphere of the brain missing can become normal individuals; that is the extent of the brain's amazing plasticity.

            If you consider it carefully, though, this works against external information feeding. It would be much easier if people had fixed locations for different functions: discover once, use for all. Since that is not the case, something must be done either to decipher individual brains or, as you suggest, to make different brains look alike. But a single interface would not solve the problem; the interfaces to the external interface would still be highly variable. To solve that you would have to standardize the whole brain, which means controlling precisely what information goes into the brain in early life, from which locations it goes in, and allowing no genetic differences in the brain-encoding genes. I'm not sure that is a good idea. I would rather have a foolish child than an identical child.
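The "decide on a simple protocol, then let it be learned" idea from upthread can at least be caricatured with the simplest possible learner. This is a hedged toy, not a brain model: a one-neuron perceptron is wired to an invented 3-line input protocol and learns the intended response purely from feedback.

```python
# Caricature of "fix a protocol, let the learner adapt to it".
# Protocol: 3 input lines; the intended response is "fire iff line 0 is set".
# The encoding, targets, and learning rate are all invented for this sketch.

data = [([0, 0, 0], 0), ([0, 1, 1], 0), ([1, 0, 1], 1), ([1, 1, 0], 1)]

w = [0.0, 0.0, 0.0]   # connection strengths, initially blank
b = 0.0               # bias (firing threshold)
rate = 0.1            # how strongly feedback adjusts the wiring

def fire(x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

for _ in range(20):                      # repeated exposure to the protocol
    for x, target in data:
        err = target - fire(x)           # feedback: right (0) or wrong (+/-1)
        w = [wi + rate * err * xi for wi, xi in zip(w, x)]
        b += rate * err

preds = [fire(x) for x, _ in data]
print(preds)  # the unit has adapted to the arbitrary wiring: [0, 0, 1, 1]
```

The point of the sketch is only that the meaning of each input line is arbitrary; the learner converges to whatever convention the feedback rewards, which is the optimistic reading of the "let the brain learn the protocol" proposal.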

            • Since that is not the case, something must be done either to decipher individual brains or, as you suggest, make different brains look alike. But a single interface would not solve the problem; the interfaces to the external interface would still be highly variable.

              The 'interface' of the huge bundle of neurons that makes up the optic nerve is not the same for each individual, yet almost all individuals can see. I expect this is similar for all of the interfaces which humans have (the development of motor control is very interesting in young humans: five-year-olds can't write well not because they don't know what shapes to draw, but because they don't have the fine control required for such delicate work).

              I am not a neurologist and my understanding of the brain is very superficial (how many neurologists read /. anyway? :P ), but I'd expect that a Meeting-The-Problem-Halfway(TM) approach would be successful: don't expect the brain to work with the interface, or vice versa; let at least one of them adapt. The adaptability of even adult brains shouldn't be underestimated (though it is perhaps not up to the level required): the good old 'walk around with vertically inverting glasses, and after a week or so your brain will "adapt" to see the world correctly' experiment has shown some significant adaptation.

              Certainly, neurological interfaces are one step closer, but for now I'm more interested in finding nerves which I can control, or learn to control, which aren't necessarily in my brain! That extra limb can come in handy :) (I can now hold the Big Mac with both hands, /and/ hold the carton underneath to catch the lettuce that falls out the other side!)

              Ian Woods
              • Nothing is stopping you from augmenting your brain with extra senses or extra limbs. Go ahead, you can do it now, with today's technology. You might not be able to learn to control/make sense of them (since you are old enough to write on slashdot) but then again, you might.

                I have been talking about impossibility (or let me sugar coat it, since the idea doesn't seem to be well received: extreme improbability) of symbolic knowledge feeding to/extraction from brain.

            • I don't think that's entirely true. For example, eyes always do the same thing: they detect light. That information travels along nerves and into the brain. The signals that travel along the optic nerve are basically the same for everyone, but that does not mean that every brain is identical. The same would (I think) be true of external senses. The data always enters the brain the same way, but once inside, it's up to the individual brain to assimilate the data. If that makes any sense :)
              • I don't think that's entirely true.

                Huh? The data always enters the brain the same way, but once inside, it's up to the individual brain to assimilate the data.

                Yes, but I don't remember objecting to that?

  • Well, at the very least, I can think of a couple of possibilities. We could build some sort of intra-brain prosthetic for people with brain damage. For example, if someone had a lesion which interfered with their ability to see, presumably, we could eventually "rewire" that part of the visual cortex back up.

    On the other hand, we could develop some kind of organic computing. Science fiction aside, though, I'm not sure what advantage an organic computer would give. But a lot of people seem to think this sort of thing would be useful, so what the hey...

    • For example, if someone had a lesion which interfered with their ability to see, presumably, we could eventually "rewire" that part of the visual cortex back up.

      An organic replica of the damaged region is probably easier and more compatible. Rewiring the brain does not have to be done with perfect accuracy; neurons are capable of learning new connections and destroying unused ones. Just provide an appropriate mix of neuron types, and connections to the outside, and the brain will heal itself. Stem cells extracted from adults are the way to go. If that proves unfeasible, other ways of introducing the correct type of fresh neurons should be considered before going non-organic.

      But a lot of people seem to think this sort of thing would be useful, so what the hey...

      If they are fast, who cares what they are made of? I can't think of a reason to prefer an organic computer to a non-organic one, except for the coolness rating, if they perform more or less the same. I'd rather halt than kill my computer.

    • We could build some sort of intra-brain prosthetic for people with brain damage.

      It would be easier to just repair the nerves. I imagine that the technology would be in its early stages, and would be quite bulky too. Nerve regeneration technology is, admittedly only in comparison, much more advanced.

      On the other hand, we could develop some kind of organic computing.

      You would have to build the devices completely from organics, or you would have too many problems with speed and translation between organic and synthetic components.
  • Maybe I don't understand this well enough to ask questions, but it seems that this could have huge potential for some things like skin cancer...

    It seems to me that an important point is that we've figured out how to connect to individual cells. To quote the article:

    • In this new biological application, attaching quantum dots directly to cells eliminates the need for external electrodes. [...] And since molecular recognition is used, it is a "smart" technology that can pick precisely which capability will be controlled on each neuron to which a quantum dot is attached.
    Which means that we might be able to connect (at least electronically) to individual cancer cells on the skin surface. Does that allow us to do things like disrupt the cell electrically, or possibly electrocute the cell?

    I admit it's more fun to think about controlling neurons and the brain, but I think the fact that we can be precise at the cellular level on a living being is much more interesting.
