
Gut Bacteria-Autism Link May Just Be Misinterpreted Data From a Confusing GUI (medium.com)

Remember that mouse study which concluded gut bacteria may contribute to autism symptoms?

Jon Brock, a cognitive scientist with 18 years of research experience on neurodevelopmental conditions, including autism, has written a Medium post summarizing new critiques of the research emerging online. (For example, from Professor Thomas Lumley, a statistical researcher who has concluded that the study's analysis "is wrong," and "arguably due in part to a poor GUI design.") Soon after publication, scientists began expressing concerns about the paper on social media. These were echoed in a blogpost by drug discovery chemist Derek Lowe and then in a series of comments on the PubPeer website. Looking more closely at the data, the results are a whole lot less compelling than the media coverage, the press releases, and even the paper itself suggest...

The differences between mice with autistic and non-autistic donors are subtle if they exist at all. And there are reasons to be skeptical about even these small effects. Mice are not tiny humans with tails. Autism is defined in terms of human behaviour. And so the claim that mice showed "autism-like" behaviour relies on an assumption that the mouse behaviours under investigation are in some sense equivalent to the behaviours that define autism in humans...

But even if we accept the premise that mouse behaviours are directly analogous to behaviours exhibited by autistic humans, the evidence is both weak and inconsistent. It's fair to say, I think, that the authors have presented the data in its most flattering light... Since posting this critique last week, further developments have cast more doubt on the conclusions of this study. The authors responded to criticisms on PubPeer. In doing so, they released the code for their analyses, which appear to show important discrepancies between how the analyses were described in the paper and how they were actually conducted.... Lumley suspects that the culprit is the confusing interface of the SPSS software the authors used for their analyses. There's no reason to see this as anything other than an honest mistake. But, as Lumley notes in his post, the episode shows the importance of researchers sharing their analysis code as well as their data.
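
Lumley's point about sharing analysis code is worth a concrete illustration. The Python sketch below is not the paper's actual analysis; the data and the choice of tests are invented. It shows how two tests that a GUI checkbox can silently toggle produce different p-values, while a script records unambiguously which one was actually run:

    # Illustration only (not the paper's analysis): two t-test variants that a
    # GUI checkbox might silently toggle, made explicit in code.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    group_a = rng.normal(loc=0.0, scale=1.0, size=12)  # hypothetical control-donor mice
    group_b = rng.normal(loc=0.5, scale=2.0, size=12)  # hypothetical ASD-donor mice

    # Student's t-test: assumes equal variances in the two groups.
    t_pooled, p_pooled = stats.ttest_ind(group_a, group_b, equal_var=True)
    # Welch's t-test: drops the equal-variance assumption.
    t_welch, p_welch = stats.ttest_ind(group_a, group_b, equal_var=False)

    print(f"pooled: p = {p_pooled:.4f}")
    print(f"Welch : p = {p_welch:.4f}")
    # The p-values differ; a methods section saying "a t-test was used" cannot
    # distinguish between them, but the code can.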

Comments:
  • and helicopter parents are not, as I type this, shopping Craigslist postings for premium poop they plan to jam up the butts of their precious offspring.

    If you're looking for a good pyramid scheme, it may also be too late to invest... /s

    • by e3m4n ( 947977 )

      The most brilliant article on Slashdot in the last decade had to be the one about the guy who started the Twitter account justsaysinmice ... all those 'correlations' that damn near everyone takes for gospel are based on a single unverified study on _mice_.

      • The problem is, there are things we are averse to subjecting humans to - shooting them up with random untested drugs, risky surgical procedures, etc. (Post-WW2 era anyway). Animal testing is the *only* way to initiate certain lines of inquiry. And for mountains of stuff, it is relevant and useful.

        Something as complex and seemingly human-specific as autism would not appear to be one of those cases. And here, we saw the peer-review process kick in almost immediately, within days. Exactly the way science should work.
        • by e3m4n ( 947977 )

          I’m far from anti-science. I’m just getting sick of these headlines that contradict themselves. I’m not talking specifically about animal testing of medication. I’m talking about the fact that they use statements like “eating worms might help you live longer”. Then they go cite a study where, out of two uncontrolled groups, the group that ate worms, on average, lived 43 seconds longer than the group that did not. I made up the worm-eating part for illustration.

  • by Livius ( 318358 ) on Saturday June 22, 2019 @07:57PM (#58806510)

    It's incredible that, after people worked out how to make user interfaces decades ago, there are still terrible user interfaces. And I don't mean just new styles, I mean user interfaces that are inefficient, inconsistent, needlessly complex, and error-prone.

    • What we need are more user interface designers, in the same way we need more people to take up coding and become programmers. After all, look at how flawless all the code out there is.

    • They have significantly decreased in usability in the last 5 years.

      Why label an icon or use colour or shading, when you can make EVERYTHING BRIGHT WHITE with unlabelled black and white icons!!!

      Sigh

    • It's incredible that, after people worked out how to make user interfaces decades ago, there are still terrible user interfaces. And I don't mean just new styles, I mean user interfaces that are inefficient, inconsistent, needlessly complex, and error-prone.

      That's what happens when user interfaces get dragged through decades of trying to maintain some level of backward compatibility.

      Without that backward compatibility the market for what's new is limited. If the backward compatibility is non-linear, and starts to look like a tree or web, then UI constructs can be duplicated or conflicting.

      • by Livius ( 318358 )

        It's the opposite of backwards compatibility. It's trying to be creative and original with respect to something that was already more or less optimal.

      • Most of the crappy UIs I can think of in the past 15 years were actually clean breaks with zero thought to backwards compatibility. Microsoft's ribbon, phone UIs... And the flat/borderless button shit they have rolled out more recently in Office.
    • You should see the software that academics cobble together. It's often so confusing that only the original research team can even tell what it's supposed to do. That's what happens when your code is written by a biology grad student who happened to have computer science as half of his double major in undergrad.

      • by Ormy ( 1430821 )
        I can attest to the truth of the above post. My Physics degree (York University in the UK) entailed one module of actual coding: one 5-hour session of Fortran per week for ten weeks. We also did some LabVIEW stuff here and there. I picked up the basics but I wouldn't say I ever approached 'competence' in coding; I know enough to make tweaks to other people's software for my own purposes (with their permission/under terms of license of course), that's about it. The majority of the students that were really gr
    • by gweihir ( 88907 )

      Actually, GUIs are not much of a problem if the people doing research with them are actually smart and understand what they are doing. These bad GUIs just make it more obvious that many "scientists" have no business doing science because they have no clue what they are doing. You can conclude the same from doing paper reviews (which I regularly do) and this is not a new thing. Probably because people that are boring and not inventive and hence will not "rock the boat" have a much easier time findi

    • I think they all start out wanting to build good interfaces; but most of them give up after they learn they can’t actually track the killers by making a GUI with Visual Basic.

    • "It's incredible that, after people worked out how to write novels centuries ago, there are still terrible novels."

      I don't know, maybe it's because it's nowhere near a mechanical process, and it requires a lot of skill and talent to find out what the final users will find good and enjoyable?

      • by Livius ( 318358 )

        "It's incredible that, after people worked out how to write novels centuries ago, there are still terrible novels."

        I suppose there is some software where the objective is for the user to "enjoy" the experience, but most is meant to be practical and efficient. A database front-end that is effectively data-entry plus validation rules does not need, and is compromised by, a user interface that expresses its designer's creativity.

        Most software is a tool for getting something done. Things like hammers and cutting tools haven't had their user interfaces change since the Stone Age.

        • "A database front-end that is effectively data-entry plus validation rules" is a terrible user interface for any application other than an input form for data dumps, which was my point. It is well known that attractive products are easier to use [nngroup.com] than functionally identical ugly ones, even if that blows your developer's rational mind, BTW.

          Is your assertion that "after people worked out how to make user interfaces decades" meant to imply that we know how to create CRUD, and therefore all such interfaces can b

          • by Livius ( 318358 )

            I don't see how any of that relates to user interfaces that are "inefficient, inconsistent, needlessly complex, and error-prone".

            I am referring to things like:
            - Words such as 'accept', 'continue', and 'save' being used inconsistently, so the user has to memorize the behaviour for every different window and dialogue box.
            - Dialogues asking yes/no questions and not offering 'yes' and 'no' as options.
            - Error messages describing two courses of action and then offering 'confirm' and 'cancel' as options without clearly s

    • by Trogre ( 513942 )

      Ah, you're familiar with Windows 10 I see.

  • by Roger W Moore ( 538166 ) on Saturday June 22, 2019 @08:04PM (#58806536) Journal
    This paper highlights the problem with medicine when it tries to use the tools of science. In science, noticing a correlation is merely the first step. The next thing you do is develop a hypothesis to explain that correlation and then perform an experiment to test that hypothesis. If it works then you might, at that point, have enough for a paper that will interest people.

    Medicine seems to concentrate entirely on looking for any correlation and then immediately publishing it, even if there is no hypothesis as to why that correlation exists, let alone an experimentally tested hypothesis. This is such a low bar that a high rate of false results is very likely even if the data are collected and analysed properly.

    I get that the systems medicine studies are vastly more complex and harder to understand in detail than those we deal with in physics (Higgs bosons don't decay differently based on their individual lifestyle choices!), but given the number of garbage studies like this which keep getting published and then debunked, isn't it time the medical field started to raise the standards required for publication before its credibility gets even more damaged?

    We already have dangerous movements like the anti-vaxxers who deny well established medical facts. The publicity around bad results like these just encourages more people to doubt what their doctor is telling them.
    • by Livius ( 318358 )

      Medicine in general seems reluctant to admit that they're really not good at being scientists. Very, very smart researchers with lots of data, with lots of good science behind them, but when they try to innovate they don't seem to get the scientific method as well as they think they do.

    • As my stats professor explained, the medical people require roughly a 5% significance level (p < 0.05) for any given statistical result. The PhD students require a positive result to graduate. Thus, if you want to graduate, run a study with at least 20 different questions/tests, and you will likely get your positive result.

      The details are a little more complex than that. In practice, you probably want more than 20 questions, and the statistical methods are usually good enough to detect completely random nonsense. Howev
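
      A quick simulation makes the arithmetic concrete (a Python sketch on made-up null data, not any real study): if 20 independent tests are run at p < 0.05 on pure noise, the chance of at least one "significant" result is 1 - 0.95^20, about 64%.

          # Sketch: false-positive rate when running 20 independent tests on
          # pure noise (no real effect anywhere), at the usual p < 0.05 cutoff.
          import numpy as np
          from scipy import stats

          rng = np.random.default_rng(42)
          n_experiments, n_tests, n_subjects = 10_000, 20, 30

          hits = 0
          for _ in range(n_experiments):
              # 20 two-group comparisons drawn from the SAME distribution.
              a = rng.normal(size=(n_tests, n_subjects))
              b = rng.normal(size=(n_tests, n_subjects))
              p = stats.ttest_ind(a, b, axis=1).pvalue
              hits += (p < 0.05).any()

          print(f"analytic : {1 - 0.95**20:.3f}")          # ~0.642
          print(f"simulated: {hits / n_experiments:.3f}")  # close to analytic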

      • by ceoyoyo ( 59147 )

        Physics traditionally requires 5 sigma (which is a silly metric because it only works if you assume a Gaussian distribution) to accept a particle *discovery*. Most of those discoveries have thousands, or tens of thousands of papers associated with them, each of which has a much lower bar.

        If you want to draw that comparison, what medicine lacks is a meta-discovery standard. Something like a rigorous, quantitative meta-analysis with reasonable N from multiple labs and p < 10^-6. That's a good idea.
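
        For scale, here is a small sketch converting between the two kinds of threshold, assuming the Gaussian null the parent post mentions (using scipy's standard normal survival function):

            # Convert between sigma thresholds and p-values under a Gaussian null.
            from scipy.stats import norm

            print(f"5 sigma, one-sided: p = {norm.sf(5):.2e}")   # ~2.87e-07
            print(f"p = 1e-6 in sigmas:  {norm.isf(1e-6):.2f}")  # ~4.75 sigma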

    • by ceoyoyo ( 59147 )

      It's important to publish observations promptly, so other people can look at them. Science should encourage relatively rapid publishing, along with an understanding among readers that just because something is published doesn't make it correct. Journalists especially should have enough professional ethics to resist the urge to take a study in mice based on an N of six and splash it all over the web without at least some very strong caveats.

      Physics is the go-to example of hard science, and they tend to do i

  • by Anonymous Coward

    Especially with the broadening the autism diagnosis has undergone over the years, to include mild symptoms that in some way suggest milder forms of the full-fledged condition. It's a symptom just like other symptoms such as coughing or insomnia. A treatment that helps in some scenarios but not all is not valueless, but of course it would be helpful to find a method for determining whether a technique would work for a patient without trying it first.

  • Gut Bacteria-Autism Link May Just Be Misinterpreted Data From a Confusing GUI

    And that's why people study GUI design, rather than just going with their gut - bacteria.

  • Science is a process, and taking the steps along the way as the final answer is premature. Newtonian science worked well enough to put a man on the moon, but it was a working hypothesis. Similarly, medicine is working well enough to vaccinate populations against harmful pathogens, but the tiny number of adverse effects shows that human biology is not completely understood. The anti-vaxxers think they have the answer. They do not have the answer any more than scientific medicine does, but they are currently completely wrong in drawing the conclusion that vaccination causes autism, because there is no evidence for that.

    Research like this seeks to address exactly those concerns, and it needs to be done. It also needs to reach completion before you can draw any conclusions from it. As this latest interpretation shows, the answer is still out of reach.

    The point is that some fields of science have got a lot further along the road towards the answer - climate change science, for example. Human medicine has only recently discovered the microbiome and the G-versus-E dynamic (genomics versus environment). One small study on mice does not equal the paradigm shift of moving the center of the universe from the Earth to the Sun. That model also worked well until we understood that there is no identifiable center of the universe.

    You need to work with what is currently known until you know better for sure.

  • But man made global warming link is still 100% solid, amirite? The "science" is settled?

    • by Anonymous Coward

      Yes - because there are lots of independent scientists and lots of independent papers supporting it.

      The bacteria-autism thing has a single paper supporting it. Nobody should really trust a single paper, it's when other scientists reproduce it and also publish their own papers that science as a whole trusts it.

    • But man made global warming link is still 100% solid, amirite?

      97% solid. Get your numbers right, man.

  • This kind of thing happens all the time in statistical analysis, and I've done this myself before I made it a policy to do all my data analysis with a script-based approach. Anything that is point-and-click is not reproducible and very prone to mistakes. Since most papers don't include the data and the analysis steps, it's impossible to know how people got their results. This is what I consider the dirty secret of science: it's based on a ton of trust that the people who did the analysis knew what they were doing.
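
    To make that concrete, here's a minimal sketch of what a script-based analysis looks like in Python (the file name, column names, and choice of test are hypothetical, purely for illustration): every step is written down, so anyone with the data can rerun it exactly and see which test was used.

        # Minimal reproducible analysis script (hypothetical data and test).
        import sys
        import numpy as np
        import pandas as pd
        import scipy
        from scipy import stats

        # Log library versions so the run can be reproduced later.
        print(f"python {sys.version.split()[0]}, numpy {np.__version__}, "
              f"pandas {pd.__version__}, scipy {scipy.__version__}")

        df = pd.read_csv("behaviour_scores.csv")  # hypothetical input file
        a = df.loc[df["donor"] == "control", "score"]
        b = df.loc[df["donor"] == "asd", "score"]

        # The exact test, its options, and the grouping are all on record here.
        stat, p = stats.mannwhitneyu(a, b, alternative="two-sided")
        print(f"Mann-Whitney U = {stat}, p = {p:.4g}, n = {len(a)}/{len(b)}")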
