John Carlisle, a consultant anesthetist at Torbay Hospital, used statistical tools to review thousands of clinical trials published in leading medical journals. While the vast majority of the 5,067 trials he reviewed appeared sound, 90 contained underlying patterns that were unlikely to arise by chance in a credible dataset. The Guardian reports:

The tool works by comparing baseline data, such as the height, sex, weight and blood pressure of trial participants, with the known distributions of these variables in a random sample of the population. If the baseline data differ significantly from expectation, this can be a sign of error or data tampering on the part of the researcher, since fabricated datasets are unlikely to show the right pattern of random variation. In the case of the Japanese scientist Yoshitaka Fujii, the detection of such anomalies triggered an investigation that concluded more than 100 of his papers had been entirely fabricated.

The latest study identified 90 trials with skewed baseline statistics, 43 of which contained measurements with roughly a one-in-a-quadrillion probability of occurring by chance. The review includes a full list of the trials in question, allowing Carlisle's methods to be checked but also potentially exposing the authors to criticism; previous large-scale studies of erroneous results have avoided singling out authors. Relevant journal editors were informed last month, and the editors of the six anesthesiology journals named in the study said they plan to approach the authors of the trials in question, raising the prospect of in-depth investigations in cases that cannot be explained.
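The intuition behind this kind of check can be sketched in a few lines of Python. This is an illustrative toy, not Carlisle's actual method or code: in an honestly randomised trial, p-values comparing baseline variables between the two arms should be roughly uniform on [0, 1], whereas crudely fabricated data tends to match "too well", piling p-values up near 1. A Kolmogorov-Smirnov test against the uniform distribution can flag that. All variable names and the simulated data here are hypothetical.

```python
# Toy illustration (assumed setup, not Carlisle's method): compare baseline
# p-values from an "honest" simulated trial and a crudely "fabricated" one
# against the uniform distribution expected under genuine randomisation.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_participants, n_vars = 200, 6  # hypothetical trial size / baseline variables

def baseline_pvalues(arm_a, arm_b):
    """Welch t-test p-value for each baseline variable (one per column)."""
    return np.array([
        stats.ttest_ind(arm_a[:, j], arm_b[:, j], equal_var=False).pvalue
        for j in range(arm_a.shape[1])
    ])

# Honest trial: both arms sampled independently from the same population,
# so baseline p-values should look roughly uniform.
honest_a = rng.normal(size=(n_participants, n_vars))
honest_b = rng.normal(size=(n_participants, n_vars))

# Crudely fabricated trial: arm B is arm A plus tiny jitter, so the
# baselines agree far more closely than chance allows (p-values near 1).
fake_a = rng.normal(size=(n_participants, n_vars))
fake_b = fake_a + rng.normal(scale=1e-3, size=fake_a.shape)

for label, (a, b) in [("honest", (honest_a, honest_b)),
                      ("fabricated", (fake_a, fake_b))]:
    p = baseline_pvalues(a, b)
    ks = stats.kstest(p, "uniform")  # one-sample KS test vs uniform(0, 1)
    print(f"{label:11s} baseline p-values {np.round(p, 3)} "
          f"KS-vs-uniform p = {ks.pvalue:.2g}")
```

A real screening tool would of course work per trial on reported summary statistics rather than raw simulated data, and would need far more care with multiple comparisons, but the fabricated arm's KS p-value collapses here for the same reason Carlisle's anomalies stood out: the data lack the random variation that honest sampling produces.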