Space Science

Space Telescope Data Reignites Debate Over How Fast Our Universe Is Expanding (science.org)

"A new front has opened in the longstanding debate over how fast the universe is expanding," writes Science magazine: For years astronomers have argued over a gulf between the expansion rate as measured from galaxies in the local universe and as calculated from studies of the cosmic microwave background (CMB), the afterglow of the Big Bang. The disparity was so large and persistent that some astronomers thought the standard theory of the universe might have to be tweaked. But over the past week, results from NASA's orbiting James Webb Space Telescope (JWST) suggest the problem may be more mundane: some systematic error in the strategies used to measure the distance to nearby galaxies.

"The evidence based on these data does not suggest the need for additional physics," says Wendy Freedman of the University of Chicago, who leads [the Carnegie-Chicago Hubble Program, or CCHP] that calculated the expansion rate from JWST data using three different galactic distance measurements and released the results on the arXiv preprint server. (The papers have not yet been peer reviewed.) The methods disagreed about the expansion rate, known as the Hubble constant, or H0, and two were close to the CMB prediction.

Specifically, the team used JWST to measure the distance to 10 local galaxies using three types of stars with predictable brightness: Cepheids, the brightest red giant stars, and carbon stars. Science notes that the last two methods "agreed to about 1%, but differed from the Cepheid-based distance by 2.5% to 4%." Combining all three methods, the team derived a value "just shy of 70 km/s per Mpc," according to the article — leading the University of Chicago's Freedman to say "There's something systematic in the measurements. Until we can establish unambiguously where the issue lies in the nearby universe, we can't be claiming that there's additional physics in the distant universe."

But the controversy continues, according to Adam Riess of Johns Hopkins University (leader of a team of Hubble Constant researchers known as SH0ES). Riess points out that other teams have used JWST to measure distances with all three methods separately and have come up with values closer to the original SH0ES result. He also questions why CCHP excluded data from telescopes other than JWST. "I don't see a compelling justification for excluding the data they do," he says.
Thanks to long-time Slashdot reader sciencehabit for sharing the article.
  • by mmell ( 832646 ) on Saturday August 17, 2024 @12:05PM (#64713756)

    The /. summary has me quite confused. The summary almost appears to be about a different document than the linked article, "Status Report on the Chicago-Carnegie Hubble Program (CCHP): Three Independent Astrophysical Determinations of the Hubble Constant Using the James Webb Space Telescope." From TFA's summary (shorter and more concise than the one given here):

    The infrared sensitivity and high resolution of the JWST is providing a powerful new means of measuring the distances to nearby galaxies, and thereby enabling new and independent determinations of H0.

    In this paper, we have measured the distances to 10 nearby galaxies using three independent astrophysical distance indicators: Cepheids, the TRGB and JAGB/carbon stars. SNe Ia have previously been well observed in all of these galaxies. For the JAGB distance scale, the data analysis from the raw data frames through to determination of H0 was carried out blind.

    An inter-comparison of the galaxy-to-galaxy distances results in agreement with a combined scatter in each case (i.e., Cepheid versus TRGB, TRGB versus JAGB, JAGB versus Cepheid) of less than 4%. This agreement represents a remarkable improvement from recent decades. In the case of the JAGB and TRGB distances, the results are even more striking, with agreement at a level of less than 1%.

    Maybe I'm out of the loop here - but are status reports routinely subjected to peer review? For that matter, this is a status report, yes? I get the impression they made measurements using JWST which correlate with previous measurements and theoretical models. Good science, routine science, nothing exciting to see here, and an excellent starting point from which to validate further work they may perform in this area.

    • It looks like the paper has been submitted for publication in The Astrophysical Journal. If accepted for publication it will have been peer reviewed. For now it's just a preprint on the arXiv.
    • I think the comment by Adam Riess, who did the Cepheid studies that got a different value ("I don't see a compelling justification for excluding the data they do"), shows off the great value of the new study. We have two precise estimates of the Hubble constant that disagree, with no error overlap between them. The reason for this is suspected to be systematic errors in one of the two estimates. Using data from one instrument -- our best instrument to measure HC in thr

  • by laughingskeptic ( 1004414 ) on Saturday August 17, 2024 @01:48PM (#64713906)
    They are calling out both the asserted Cepheid accuracies and their methods. This is not surprising.

    Computing distance from a 2 GHz microwave source here on Earth using only power is difficult ... and gets more difficult the farther you are from the emitter. At 10 km you have 0.2% errors (20 m). It is amazing that astronomers can get errors down to 4% at intergalactic scales for Cepheid brightness ... but as I understand it, they have been effectively claiming 2%. Double their error windows and there is no discrepancy.
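The inverse-square scaling behind brightness-based distances can be sketched numerically. This is a minimal illustration with made-up numbers (not values from the paper): because distance goes as the inverse square root of flux, a fractional error in the measured flux (or in the assumed intrinsic luminosity) maps to roughly half that fractional error in the inferred distance.

```python
import math

def distance_from_flux(luminosity: float, flux: float) -> float:
    """Inverse-square law: F = L / (4*pi*d^2), solved for d."""
    return math.sqrt(luminosity / (4.0 * math.pi * flux))

L = 1.0        # arbitrary luminosity units (illustrative)
d_true = 100.0
F_true = L / (4.0 * math.pi * d_true**2)

# An 8% overestimate of the flux gives roughly a 4% distance error,
# since d scales as F**(-1/2):
d_biased = distance_from_flux(L, F_true * 1.08)
pct_error = abs(d_biased - d_true) / d_true * 100
print(round(pct_error, 1))  # ~3.8 (% distance error)
```

The same halving works in reverse: to claim a 2% distance error, the combined flux-plus-calibration error budget has to be held to about 4%.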
    • Wish I understood all of this better, and thank you for the explanation! Wish I had an animation to help me visualize more of this (guess I should go looking).

      But I wonder why they claimed a more precise result than you'd estimated/claimed. We'd probably need to get WAY deeper here to explain it all though. And honestly 'off by half/double' feels easy to do, especially with things we can't verify easily.

      From "Office Space" - https://news.ycombinator.com/i... [ycombinator.com] (Added one more letter to make obvious the 4 l

      • It may be that they're not measuring anything in the radio spectrum; they're measuring changes in luminosity using visible light. That gives the star's pulsation period, which (via the period-luminosity relation) gives its intrinsic brightness; combined with the apparent magnitude, that gives the distance.
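That chain (period → intrinsic brightness → distance) can be sketched in a few lines. The Leavitt-law coefficients below are illustrative V-band values, not the calibration used in the papers under discussion:

```python
import math

def cepheid_distance_pc(period_days: float, apparent_mag: float) -> float:
    # Illustrative period-luminosity (Leavitt) relation:
    #   M = -2.43 * (log10 P - 1) - 4.05   (coefficients are assumptions)
    abs_mag = -2.43 * (math.log10(period_days) - 1.0) - 4.05
    # Distance modulus: m - M = 5 * log10(d / pc) - 5
    return 10.0 ** ((apparent_mag - abs_mag + 5.0) / 5.0)

# A 10-day Cepheid seen at apparent magnitude 10.95:
print(round(cepheid_distance_pc(10.0, 10.95)))  # 10000 (parsecs)
```

Note that any systematic offset in the assumed coefficients (or in reddening/crowding corrections) shifts every distance, and hence H0, which is exactly the kind of error the CCHP result points at.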
      • Double their error windows and there is no discrepancy.

        But I wonder why they claimed a more precise result than you'd estimated/claimed.

        That one is easy. Because it is what their measurements show.

        They aren't making stuff up - they are using real (and very careful and sophisticated) statistical methods that have been checked many times to calculate the uncertainty. Claiming anything other than the error they calculate from their data would not be science; it would be scientific error or malpractice.

        But if there is a systematic bias in the method then it is systematically shifted from the true value, whatever it is.

        Previous poster suggesting

        • Did you just say that scientists find excuses to throw out outliers to affirm their mood affiliation for one particular consensus model?

          What if nature, being complete, is inconsistent, and you just have to embrace the outliers?

      • by colinwb ( 827584 )
        "Wish I understood all of this better, and thank you for the explanation! Wish I had an animation to help me visualize more of this (guess I should go looking)."
        --I'm back on Slashdot after a long absence, hence I may be unaware of any changes in Slashdot memes. That said:
        Where's the car analogy for this?
  • Because differences in sensors and other systematic issues might influence the results. Using one source means you limit those confounding factors. Is it always better to do so? Probably not (more diverse data is often better). But when we're searching for something as subtle and hard to prove as this, I can understand trying to limit ALL the sources of error that you can.

    Sure additional sources of data could be included, and it would add more work to decode and define any differences. Great work for th

  • First, why do they mix units (km per second per megaparsec)?

    And why don't we simplify it?

    Velocity per distance
    = distance per time per distance
    the distance cancels out,
    so we are left with a "per time" --
    that is, a frequency, or 1/period;
    the period is roughly the time the universe would have taken to reach its present size, assuming a constant expansion rate (the Hubble time).

    • Because Hubble originally plotted red shift in km/s vs distance in Mpc? For measurements of nearby galaxies it seems more intuitive.
      • Correct. It's a relationship between velocity of an object relative to us, and the distance away from us.

        Another fun fact: Hubble's constant has simplified units of (distance/time) / distance = 1 / time. Its reciprocal has units of time and is an estimate of the age of the universe.
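That fun fact is a one-liner to check. A minimal sketch, using an illustrative H0 of 70 km/s/Mpc (near the CCHP value) and standard unit conversions:

```python
KM_PER_MPC = 3.0857e19      # kilometres in one megaparsec
SECONDS_PER_YEAR = 3.156e7  # one year in seconds (approx.)

def hubble_time_gyr(h0_km_s_mpc: float) -> float:
    """Reciprocal of H0, converted from km/s/Mpc to gigayears."""
    h0_per_second = h0_km_s_mpc / KM_PER_MPC  # units reduce to 1/s
    return (1.0 / h0_per_second) / SECONDS_PER_YEAR / 1e9

print(round(hubble_time_gyr(70.0), 1))  # ~14.0 Gyr
```

The Hubble time is only an order-of-magnitude estimate of the universe's age; the actual age depends on how the expansion rate has changed over cosmic history, but for the standard model it lands close to the same ~13.8 Gyr.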

  • by Petersko ( 564140 ) on Sunday August 18, 2024 @04:54AM (#64715190)

    Scientists are baffled / terrified / depressed. All of them, you know. The collective. Every scientist ever. All completely befuddled and weeping.

    And it must be true. I can't identify the channels easily, but they have a thumbnail with Neil Tyson, so it can't be wrong.
