Science

No Bones About It: People Recognize Objects By Visualizing Their 'Skeletons' (scientificamerican.com) 49

An anonymous reader shares a report from Scientific American: Humans effortlessly know that a tree is a tree and a dog is a dog no matter the size, color or angle at which they're viewed. In fact, identifying such visual elements is one of the earliest tasks children learn. But researchers have struggled to determine how the brain does this simple evaluation. As deep-learning systems have come to master this ability, scientists have started to ask whether computers analyze data -- and particularly images -- similarly to the human brain. "The way that the human mind, the human visual system, understands shape is a mystery that has baffled people for many generations, partly because it is so intuitive and yet it's very difficult to program," says Jacob Feldman, a psychology professor at Rutgers University.

A paper published in Scientific Reports in June comparing various object recognition models came to the conclusion that people do not evaluate an object like a computer processing pixels, but based on an imagined internal skeleton. In the study, researchers from Emory University, led by associate professor of psychology Stella Lourenco, wanted to know if people judged object similarity based on the objects' skeletons -- an invisible axis below the surface that runs through the middle of the object's shape. The scientists generated 150 unique three-dimensional shapes built around 30 different skeletons and asked participants to determine whether or not two of the objects were the same. Sure enough, the more similar the skeletons were, the more likely participants were to label the objects as the same. The researchers also compared how well other models, such as neural networks (artificial intelligence-based systems) and pixel-based evaluations of the objects, predicted people's decisions. While the other models matched performance on the task relatively well, the skeletal model always won.
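The study's comparison can be sketched informally. The toy code below is my own illustration, not the researchers' actual models: it contrasts a pixel-overlap measure with a skeleton-based measure on two silhouettes of the "same" bar drawn at different thicknesses, where the pixels disagree but the midline skeleton is identical.

```python
import numpy as np

def pixel_similarity(a, b):
    """Fraction of pixels on which two binary silhouettes agree."""
    return np.mean(a == b)

def skeleton_similarity(skel_a, skel_b):
    """Similarity of two skeletons given as arrays of 2-D joint coordinates:
    1 / (1 + mean joint-to-joint distance)."""
    d = np.linalg.norm(skel_a - skel_b, axis=1).mean()
    return 1.0 / (1.0 + d)

# Two silhouettes of the "same" object at different thicknesses:
# the pixel overlap differs, but the underlying skeleton is identical.
thin = np.zeros((9, 9), dtype=bool)
thick = np.zeros((9, 9), dtype=bool)
thin[4, 1:8] = True       # a 1-pixel-wide bar
thick[3:6, 1:8] = True    # the same bar, 3 pixels wide
bar_skeleton = np.array([[4, c] for c in range(1, 8)])  # shared midline

print(pixel_similarity(thin, thick))                    # < 1.0: pixels disagree
print(skeleton_similarity(bar_skeleton, bar_skeleton))  # 1.0: skeletons match
```

A skeletal model would call these two shapes "the same," while a pixel model would not; that is the qualitative distinction the study tested, though with far richer 3-D stimuli and real model fits.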
On the Rumsfeld Epistemological Scale, AI programmers trying to duplicate the functions of the human mind are still dealing with some high-level known-unknowns, and maybe even a few unknown-unknowns.
Comments Filter:
  • by mschaffer ( 97223 ) on Wednesday September 11, 2019 @10:46PM (#59184064)

Now they only need to prove that people use AI models that are designed to "function like the human mind".

  • This is News? (Score:5, Interesting)

    by Gamer_2k4 ( 1030634 ) on Wednesday September 11, 2019 @11:13PM (#59184096)
Art instructors have been teaching students to start with the skeleton for decades, if not centuries. We determine form by internal structure. So why are we just now learning that that's how people actually perceive the world?
    • Everything is new when it's on a computer. I thought we established this many years ago.
  • This one doesn't look like it has any skellington to speak of [fandom.com].

    • by tepples ( 727027 )

      I see seven recognizable "bones" in the illustrations of Zero on that page. One runs from the nose to the back of the head, another up the jaw, and another on each ear. There are also three on the sheet that makes up Zero's body: one down the back and one to each front corner.

It isn't really "bones" they look at, but imaginary lines drawn down the center of the shape. At least that's what I imagine; I didn't bother to look. It seems doubtful that this alone is how humans recognize objects. It is probably a combination of outline + "bones".

  • came to the conclusion that people do not evaluate an object like a computer processing pixels

If you do work or research based on the assumption that they do, you should probably stop. You are wasting someone's money by either being amateurish and/or ignorant, or a shill.

    While sometimes useful and effective, I doubt that the people inventing the clever image recognition algorithms believe that it works the same way as humans.

But they can possibly make them better by learning how humans do it. Artificial neural networks start out homogeneous; brains don't. By making them more like brains, we may be able to make them better. And even if we don't, at least we may learn something about brains.
You can't make neural networks like brains, because they are just computer programs which some idiot dubbed "neural networks" to fool people a long time ago. That is like saying you want to make the sky more like a cat. Artificial neural nets are nothing like biological neural networks.

  • Wireframe sounds like a better term than skeleton. Who are these ppl? Lulz lmao jk
    • Wireframes model the surface shape of the object. That's not what this article says our brains do.

      • Whoosh. I get that. But you don't actually have x-ray vision; your brain is constructing the underlying skeletal model out of visual (light) information reflected off the face you're seeing.

        This was a comment about how my wife *doesn't* have the mechanism mentioned in the article, and it causes us to disagree on the similarity of faces. She sees surface details only, and not well-organized. Her brain doesn't construct the landmarks and skeleton for her.

        • Wow, just gonna go to work now. I thought GP was a reply to my comment below this one.

          Guess I have no spatial relations either. My apologies, GP.

  • Prosopagnosia (Score:4, Interesting)

    by weilawei ( 897823 ) on Thursday September 12, 2019 @03:23AM (#59184438)

    My wife is face blind, and this leads to an interesting situation where I'll say two people look similar (because of underlying facial structure), and she'll say they look completely different.

This story makes sense in that light, because other tasks that require assessing the arrangement or orientation of things are not her strong point either.

    • Human brains have a fairly large area dedicated to recognizing faces. Some people have brain damage in that area, and can still see properly, and describe all the facial details, but lose the ability to recognize people, even close family members.

      That's probably not strictly related to this article.

      • by shmlco ( 594907 )

Not particularly, but it does speak to the fact that we humans have many large dedicated neural nets, each devoted to solving some specific type of recognition problem.

Interesting. That is apparently not uncommon. Is it only for human faces, or also for animals? And also for objects other than faces?

      I wondered about this a few days ago and perhaps I can ask you now: Does she have difficulty identifying objects if they are upside down?

      If you train a naive image recognition network with cats in only the normal direction, it will probably have no clue what an upside down picture of a cat depicts. Does that resonate with how your wife experiences her environment?

      Apologies for

  • That is interesting (Score:4, Interesting)

    by Sqreater ( 895148 ) on Thursday September 12, 2019 @03:45AM (#59184496)
About twenty years ago I lost the ability to recognize objects using one of my eyes. It lasted for about 12 hours and then recovered. Probably a temporary clot or what is called a "mini stroke." If I covered the good eye and looked out of the impacted eye I could see the object but just did not know what it was. If I covered the impacted eye and looked out of the normal eye I could identify objects normally. Whatever allows us to identify objects must exist separately in both hemispheres of the brain and in specific places.

I suggest also that recognizing an object and evaluating or judging an object are two different things. We make evaluations of acceptability and quality according to the "EPs" or "elements of perception" (mine) that append to the object. Why is that a "good" lamp and that one not? Why is that one type of apple and not another? Shape, size, color, smell.

Also, context allows a speedier determination of an object. An object can be more easily determined if it is among objects it is likely to be around. Sometimes it can be erroneously determined if it is among many unexpected objects. So an object can gain its recognition from objects around it. Wasn't there an elephant-in-the-room discombobulation of an AI recognition program recently? Recognition in the wild, in a constantly changing world, is an incredible problem that humans solve with ease. In humans it is far more than wire frame recognition. Amazing.
    • by asylumx ( 881307 )
You are incredibly lucky to have come through that with little or no permanent damage. While I'm glad you learned something from the experience, I'm sorry you had to go through that. Strokes (of any size) really do some crazy things, don't they?
Yes, they do apparently. No damage though. They can be a warning of a damaging stroke, though, and one should go immediately to the emergency room on having one. I did not know that at the time and was just interested in the phenomenon and what it said about the brain.
    • by shmlco ( 594907 )

      It is far more than wire-frame/skeletal recognition... and it's not. As mentioned above, we humans have many large dedicated neural nets each devoted to solving some specific type of recognition or cognition problem.

In fact, this "stick figure" approach to recognition might go a long way towards explaining the "ness" problem. What is chair-ness or table-ness or cat-ness? Why is it that you can look at hundreds of different types of tables or chairs or cats and instantly classify them as "table" or "chair" or "cat"?

I've often wondered why lions in the Serengeti don't seem to attack humans in cars. To me it seems they don't recognize them as animals. Four wheels do not equate, in their perception, to the four legs of prey, I think. Stick figure representation? A rhino, on the other hand, will attack a car, as I have seen in a YouTube video. Would it attack a rock? Probably not. What is the difference in perception? Movement?
    • by czert ( 3156611 )

      Whatever allows us to identify objects must exist separately in both hemispheres of the brain and in specific places.

I don't think that's a valid conclusion. Both hemispheres get images from both eyes: the left hemisphere gets the right half of your view (from both eyes), the right hemisphere gets the left half. Your experience might instead suggest that object identification takes place before the signal from each eye reaches the brain, at least in part.

      • Interesting. You mean in the retina and the optic nerve?
        • by czert ( 3156611 )
          Possibly. We already know there's a lot of processing going on on the retina and in the optic nerve, so I wouldn't be surprised if it's even more complex than we thought.
  • "people do not evaluate an object like a computer processing pixels, but based on an imagined internal skeleton"

Who would've thought? I think it's been clear for ages that the mind doesn't map from "pixels" to concepts directly (even apart from the fact that the brain/retina combo is totally not pixel based, save for the most physical aspect of light capture, which is the field of photosensitive cells spread all over the retina, kind of). It's been known that there's a deep, graph-like processing path (not mer

    • I think it's been clear for ages that the mind doesn't map from "pixels" to concepts directly

And it's also wrong to claim that an image recognition system on a computer does that.

  • What do people visualize to recognize skeletons? Checkmate.

Really? It feels like my brain is simply trying to find a match in memory, not make a wire frame out of the object.
  • So science is confirming the idea that the mind recognizes forms?
  • Cartoon dogs can be a looooong way from any real "skeleton" regardless of how we define that.

    I'm bored with AI research that claims that this week's pattern matching is revelatory. I'm just plain old bored with AI, in fact.

    • Cartoon dogs can be a looooong way from any real "skeleton" regardless of how we define that.

      How do you think the parts of the CGI dog are animated, other than by mapping the vertices of its surface mesh to one or more nearby bones on an armature?
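That armature-based animation can be sketched as linear blend skinning: each mesh vertex is a weighted blend of the vertex as transformed by every bone it is bound to. The 2-D two-bone rig below is a hypothetical toy of mine, not any particular animation package's API.

```python
import numpy as np

def skin_vertex(v, bones, weights):
    """Linear blend skinning: v' = sum_i w_i * (R_i @ v + t_i).
    bones: list of (R, t) pairs (2x2 rotation matrix, 2-vector translation)."""
    out = np.zeros(2)
    for (R, t), w in zip(bones, weights):
        out += w * (R @ v + t)
    return out

identity = (np.eye(2), np.zeros(2))
theta = np.pi / 2
rot90 = (np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]]), np.zeros(2))

v = np.array([1.0, 0.0])
# Bound entirely to the rotated bone: the vertex follows it exactly.
print(skin_vertex(v, [identity, rot90], [0.0, 1.0]))  # ~[0, 1]
# Bound half-and-half: the vertex lands between the two bone transforms.
print(skin_vertex(v, [identity, rot90], [0.5, 0.5]))  # ~[0.5, 0.5]
```

The half-and-half case is why a CGI dog's surface deforms smoothly at joints: skin vertices near an elbow are weighted between the two adjacent bones rather than snapping to either one.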

  • by twocows ( 1216842 ) on Thursday September 12, 2019 @08:23AM (#59185064)
    When I visualize an object in my mind, I visualize the way it looks from the outside: the orientation of its surfaces and how they're arranged. Rather than an internal skeleton, it's more like an external skeleton like a bug might have (even if the surface in question is not skeleton-like in any way). If I try to picture an internal skeleton for an object, it just feels off.
  • This is the most interesting article I've read in a long time on Slashdot.....but that is a very low bar.
  • "... people do not evaluate an object like a computer processing pixels, but based on an imagined internal skeleton." When they say "people", what they really mean is the 42 people who participated in the study. Maybe, in this case, it's true of almost all people, but sometimes it isn't. I specifically remember another psychology study which concluded that people process time with past on the left and future on the right [uni-tuebingen.de], based on experiments with 30 to 118 participants -- all college students. Such homogen

  • The researchers also compared how well other models, such as neural networks (artificial intelligence-based systems) and pixel-based evaluations of the objects, predicted people's decisions. While the other models matched performance on the task relatively well, the skeletal model always won.

    Wait, what? Are they saying their neural networks don't recognize images by extracting skeletal features? How do they know?

    Some of the oldest machine learning (not deep learning) algorithms use things like the structure tensor or gradient approximation methods to skeletonize images before classifying them. For deep networks it is not as easy to figure out what features are being extracted at each convolutional layer, but some form of edge detection is always used somewhere.

    A set of humans that happen to per
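The classical skeletonization mentioned above can be sketched with Zhang-Suen thinning, one of the standard pre-deep-learning algorithms: it iteratively peels boundary pixels from a binary image until only a one-pixel-wide skeleton remains. This toy implementation is my own illustration, not code from any of the models in the study.

```python
import numpy as np

def zhang_suen_thin(img):
    """Zhang-Suen thinning of a binary image (1 = foreground)."""
    img = img.astype(np.uint8).copy()

    def neighbours(y, x):
        # P2..P9: eight neighbours, clockwise starting from north.
        return [img[y-1, x], img[y-1, x+1], img[y, x+1], img[y+1, x+1],
                img[y+1, x], img[y+1, x-1], img[y, x-1], img[y-1, x-1]]

    changed = True
    while changed:
        changed = False
        for step in (0, 1):  # the two alternating sub-iterations
            to_delete = []
            for y in range(1, img.shape[0] - 1):
                for x in range(1, img.shape[1] - 1):
                    if not img[y, x]:
                        continue
                    P = neighbours(y, x)
                    B = sum(P)  # number of foreground neighbours
                    # A = number of 0->1 transitions in the circular sequence
                    A = sum(P[i] == 0 and P[(i + 1) % 8] == 1 for i in range(8))
                    if step == 0:
                        cond = (P[0] * P[2] * P[4] == 0 and
                                P[2] * P[4] * P[6] == 0)
                    else:
                        cond = (P[0] * P[2] * P[6] == 0 and
                                P[0] * P[4] * P[6] == 0)
                    if 2 <= B <= 6 and A == 1 and cond:
                        to_delete.append((y, x))
            for y, x in to_delete:
                img[y, x] = 0
                changed = True
    return img

# A thick horizontal bar thins down toward its one-pixel midline.
bar = np.zeros((7, 11), dtype=np.uint8)
bar[2:5, 1:10] = 1
skel = zhang_suen_thin(bar)
```

The point of the parent comment stands either way: whether a deep network implicitly extracts something like this skeleton at some convolutional layer is hard to verify from the outside.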
