
Evidence of Controversial Planet 9 Uncovered In Sky Surveys Taken 23 Years Apart (space.com)

Astronomers may have found the best candidate yet for the elusive Planet Nine: a mysterious object in infrared sky surveys taken 23 years apart that appears to be more massive than Neptune and about 700 times farther from the sun than Earth. Space.com reports: [A] team led by astronomer Terry Long Phan of the National Tsing Hua University in Taiwan has delved into the archives of two far-infrared all-sky surveys in search of Planet Nine -- and incredibly, they have found something that could possibly be Planet Nine. The Infrared Astronomy Satellite, IRAS, launched in 1983 and surveyed the universe for almost a year before being decommissioned. Then, in 2006, the Japanese Aerospace Exploration Agency (JAXA) launched AKARI, another infrared astronomy satellite that was active between 2006 and 2011. Phan's team were looking for objects that appeared in IRAS's database, then appeared to have moved by the time AKARI took a look. The amount of movement on the sky would be tiny -- about three arcminutes per year at a distance of approximately 700 astronomical units (AU). One arcminute is 1/60 of an angular degree.

But there's an extra motion that Phan's team had to account for. As the Earth orbits the sun, our view of the position of very distant objects changes slightly in an effect called parallax. It is the same phenomenon as when you hold your index finger up to your face, close one eye and look at your finger, and then switch eyes -- your finger appears to move as a result of you looking at it from a slightly different position. Planet Nine would appear to move on the sky because of parallax as Earth moves around the sun. On any particular day, it might seem to be in one position, then six months later when Earth is on the other side of the sun, it would shift to another position, perhaps by 10 to 15 arcminutes -- then, six months after that, it would seem to shift back to its original position. To remove the effects of parallax, Phan's team searched for Planet Nine on the same date every year in the AKARI data, because on any given date it would appear in the same place, with zero parallax shift, every year. They then also scrutinized each candidate object that their search threw up on an hourly basis. If a candidate is a fast-moving, nearby object, then its motion would be detectable from hour to hour, and could therefore be ruled out. This careful search led Phan's team to a single object, a tiny dot in the infrared data.
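The parallax shift quoted above is easy to sanity-check. A minimal sketch, where the ~700 AU distance comes from the article but the function name and the 2 AU baseline (opposite sides of Earth's orbit, six months apart) are illustrative assumptions:

```python
import math

ARCMIN_PER_RADIAN = math.degrees(1) * 60  # ~3437.75 arcminutes per radian

def parallax_shift_arcmin(distance_au, baseline_au=2.0):
    """Apparent angular shift of a distant object viewed from two points
    baseline_au apart (2 AU = opposite sides of Earth's orbit)."""
    return math.atan2(baseline_au, distance_au) * ARCMIN_PER_RADIAN

print(round(parallax_shift_arcmin(700), 1))  # -> 9.8
```

At roughly 700 AU the six-month shift works out to about 10 arcminutes, consistent with the 10 to 15 arcminutes described above; the exact value depends on the viewing geometry.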

It appears in one position in IRAS's 1983 image, though it was not in that position when AKARI looked. However, there is an object seen by AKARI in a position 47.4 arcminutes away that isn't there in the IRAS imagery, and it is within the range that Planet Nine could have traveled in the intervening time. In other words, this object has moved a little further along its orbit around the sun in the 23 or more years between IRAS and AKARI. The knowledge of its motion in that intervening time is not sufficient to be able to extrapolate the object's full orbit, therefore it's not yet possible to say for certain whether this is Planet Nine. First, astronomers need to recover it in more up-to-date imagery. [...] Based on the candidate object's brightness in the IRAS and AKARI images, Phan estimates that the object, if it really is Planet Nine, must be more massive than Neptune. This came as a surprise, because he and his team were searching for a super-Earth-size body. Previous surveys by NASA's Wide-field Infrared Survey Explorer (WISE) have ruled out any Jupiter-size planets out to 256,000 AU, and any Saturn-size planets out to 10,000 AU, but a smaller Neptune or Uranus-size world could still have gone undetected. Phan told Space.com that he had searched for his candidate in the WISE data, "but no convincing counterpart was found because it has moved since the 2006 position," and without knowing its orbit more accurately, we can't say where it has moved to.
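As a rough consistency check on those numbers, Kepler's third law (orbital period in years equals a^1.5 for a semi-major axis a in AU) gives the heliocentric angular rate. The circular orbit here is purely an illustrative assumption; the candidate's real orbit is unknown, and an eccentric orbit moves faster near perihelion:

```python
def circular_orbit_arcmin_per_year(distance_au):
    """Angular rate of a body on a circular orbit, seen from the Sun.
    Kepler's third law: period (years) = a**1.5 for a in AU."""
    period_years = distance_au ** 1.5
    return 360.0 * 60.0 / period_years  # full circle, in arcminutes, per period

observed = 47.4 / 23          # arcmin/yr between the IRAS and AKARI detections
predicted = circular_orbit_arcmin_per_year(700)
print(f"observed ~{observed:.1f} arcmin/yr, circular-orbit estimate ~{predicted:.1f} arcmin/yr")
```

The two figures agree to within about a factor of two, which is as much as this toy check can promise.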
"Once we know the position of the candidate, a longer exposure with the current large optical telescopes can detect it," Phan told Space.com. "However, the follow-up observations with optical telescopes still need to cover about three square degrees because Planet Nine would have moved from the position where AKARI detected it in 2006. This is doable with a camera that has a large field of view, such as the Dark Energy Camera, which has a field of view of three square degrees on the Blanco four-meter telescope [in Chile]."

  • by JamesTRexx ( 675890 ) on Saturday May 03, 2025 @04:01AM (#65349123) Journal

    If they'd let loose an AI on all the solar system data and it would uncover more (circumstantial) evidence for the planet, that would be a fitting task and actually interesting AI news for once.

    • Re: (Score:2, Informative)

      Are you talking about LLMs that can barely multiply even small matrices? They are language models — terrible at numerical analysis, especially for the vast data sets in this scenario. ML signal analysis and image processing, on the other hand, may be useful here.
      • No, he is talking about image analysis tools.
        Facepalm.

        Or how does your phone know a photo contains a face?

        Let's check the matrix multiplication, shall we?

        Can you explain how one would implement a matrix multiplication?

        What does the LLM say?

        Matrix Multiplication Implementation
        Matrix multiplication is a fundamental operation in linear algebra. Here's how to implement it:

        Basic Concept
        For matrices A (m * n) and B (n * p), the product C = A * B (m * p) is calculated as:

        Each element C[i,j] is the dot product of row i of A and column j of B
        C[i,j] = Sum(A[i,k] * B[k,j]) for k = 0 to n-1
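The definition quoted above is the standard triple loop; a minimal, unoptimized sketch in Python, added here for reference:

```python
def matmul(A, B):
    """Naive matrix product: C[i][j] = sum over k of A[i][k] * B[k][j],
    for A of shape (m, n) and B of shape (n, p)."""
    m, n, p = len(A), len(B), len(B[0])
    assert all(len(row) == n for row in A), "inner dimensions must match"
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(p)]
            for i in range(m)]

print(matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # -> [[19, 22], [43, 50]]
```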

        Seems that the LLM has no problem with it.

        The rest is computer jargon bla bla about space and time efficiency and optimizations, which you probably won't grasp ... so I leave them out here ... rofl.

        Wait, I ask it again:

        So, the algorithm is independent of the size of the matrix?

        And it answers:

        Matrix Multiplication and Matrix Size
        You're asking a good clarifying question. The general algorithm is the same regardless of matrix size, but there are important considerations: ...

        So I guess it can multiply matrices of any size for you. As long as their dimensions fit ...

        • I have decades of experience in real time and large-scale numerical analysis and simulation. I know what a matrix multiplication is. My point was that LLMs (themselves) cannot even perform a moderate matrix operation reliably. This is because they are language models, and have very poor performance on large numerical tasks. LLMs know how to describe the operations needed to perform large matrix multiplications, or land an aircraft, or bake muffins. This is because they have read descriptions of these acti
          • LLMs can reliably do matrix multiplication, as long as they have been trained to use sufficient context to do the steps, i.e., Chain of Thought.
            I just tried on a smaller local model, and it aced 10x10 with ease. Of course it took it 30k tokens to do it, but there's no reason it wouldn't reliably scale to the size of the context. A million-token context window should be able to do a decently sized matrix.

            It's highly unlikely that a non-CoT, zero-shot prompt would be able to do it, though.
            Each token is com
            • Sorry, this is ridiculous. Can we stop this please? LLMs are extremely poor at numerical operations. We clearly have different ideas of what constitutes a large matrix. There are very big reasons why the LLM architecture does not scale to a decently sized matrix. Even a small 1000x1000 matrix would be completely intractable. I chose matrix multiplication as an example, but the same applies to real numerical methods used in astronomy for data analysis, such as Fourier for frequency analysis, convolution, de
              • Also keep in mind that the 10x10 matrix multiply example you gave should take 1000 float fused multiply-adds (fma) operations. On the LLM it would have taken many orders of magnitude more, likely billions of fma and millions of times less efficient, assuming it even obtained the correct result (did it?). Ironically, the LLM is implemented USING matrix and tensor operations, but is very poor at DOING these operations at the token generation and inference level.
                • Also keep in mind that the 10x10 matrix multiply example you gave should take 1000 float fused multiply-adds (fma) operations.

                  Correct.

                  On the LLM it would have taken many orders of magnitude more, likely billions of fma and millions of times less efficient, assuming it even obtained the correct result (did it?).

                  What? lol. That's pure lunacy.
                  You think it can't do basic math iteratively?
                  And yes, it did come to the correct answer. Of course, like I said, it took about 30k tokens to do it, which is ridiculously inefficient, but the point was to prove that you were wrong, not to prove that it was an efficient matrix multiplier.

                  Ironically, the LLM is implemented USING matrix and tensor operations, but is very poor at DOING these operations at the token generation and inference level.

                  Poor? No, it's perfectly fine at doing it; it's just not efficient, for very obvious reasons.

                  • Maybe read the thread. It is ALL about efficiency and reliability: whether an LLM can do numeric processing on a scale needed for processing the astronomy imaging datasets discussed in the article. That is the point.

                    My original claim was this: “Are you talking about LLMs that can barely multiply even small matrices? They are language models — terrible at numerical analysis, especially for the vast data sets in this scenario.”

                    My follow up statement, to which you replied, was this:

                    • Yeah, this exactly, LLMs are word-optimized not raw-astrophysics-data-optimized models:
                      > LLMs are extremely inefficient at even moderate numerical processing
                    • Maybe read the thread. It is ALL about efficiency and reliability: whether an LLM can do numeric processing on a scale needed for processing the astronomy imaging datasets discussed in the article.

                      That's not how AI-assisted research works.
                      The relevance here of showing that they can reliably multiply matrices is to demonstrate that they understand the fundamentals.

                      You'd no sooner have an LLM manually compute a model than you'd have a human. The LLM would design it, including its training methodology, and look at the results.

                      My follow up statement, to which you replied, was this: “LLMs (themselves) cannot even perform a moderate matrix operation reliably. This is because they are language models, and have very poor performance on large numerical tasks.”

                      And this statement is absurdly false.
                      The reliability isn't the problem, the problem is you're treating the LLM like it's a first year college student. Why on earth would it com

                    • * incapable of
                    • I didn’t move goal posts. My original concern was about efficiency to the extent of the task being impossible.

                      Read my very first statement: “Are you talking about LLMs that can barely multiply even small matrices? They are language models — terrible at numerical analysis, especially for the vast data sets in this scenario. ”

                      Not sure what your inference skills are like, but my concern there is efficiency of LLMs themselves. Read the quote. They CANNOT perform large numerical cal

                    • Of course, even though LLMs are extraordinarily bad at numeric processing — inefficient in compute time and memory to the extent that even moderate tasks are intractable — they are good at understanding and explaining this limitation. https://chatgpt.com/share/6816... [chatgpt.com]
                    • You are moving the goalposts.
                      You said:

                      My point was that LLMs (themselves) cannot even perform a moderate matrix operation reliably.

                      Your point was reliability, and now you're trying to walk it back. Just fucking admit it, ffs.

                      That LLMs can barely do a 10x10 matrix multiply, and likely not a 100x100 matrix multiply, proves my point.

                      No, it doesn't. The fact that it can do it reliably flatly disproves your original point. Your new point was never contested by me. Trying to pretend like it is won't save your argument.

                      Yes- LLMs are not great number crunchers, as we have both said.
                      They are, however, perfectly capable of being reliable number crunchers, just like a person is.

                      There's a reason my response l

                    • Of course, even though LLMs are extraordinarily bad at numeric processing

                      No, they're not.

                      inefficient in compute time and memory to the extent that even moderate tasks are intractable

                      Yes on inefficient in compute time and memory. To the extent that even moderate tasks are intractable? Now that's just silly.

                      they are good at understanding and explaining this limitation. https://chatgpt.com/share/6816... [chatgpt.com]

                      It did no such thing. You asked it how inefficient it was for it to multiply a couple of matrices. It correctly answered. You keep beating this dead horse.
                      The only reason I brought it up, is because you claimed it couldn't be reliably done, which was wrong. It can't be efficiently done- which is perfectly true.

                      From the perspective of data analysis, if you're using i

                    • Not sure what your inference skills are like, but

                      He's a neckbeard. He may sometimes use technical words and make technical claims, but any argument with him boils down to he's right and you're wrong. No logical fallacy is too large or blatant in service of that goal. And he doesn't actually understand any of it, anyway; what he uses in place of thinking is not that dissimilar to the LLM. He's just spewing free associations that feel like they support him.

                      He's been "arguing" like this since he created his account a couple decades ago.

              • The point was to demonstrate that the size it can do is arbitrary, limited by its context.
                It's not like multiplying a matrix involves different operations depending on its size, which should be the first clue to you that it's not a difficult task for them.

                There are very big reasons why the LLM architecture does not scale to a decently sized matrix.

                No, there's 1 reason, and 1 reason alone. Because they use their context to do the math, and their context is limited.
                Look at it this way, it can multiply any two arbitrary matrices better than you can, without a tool to help.

                Even a small 1000x1000 matrix would be completely intractable.

                Indeed. It would be for you too.

                • This is really going around in circles. You really seem to have missed my point. Let's summarise, shall we?

                  Large-scale linear algebra via matrix and tensor operations is excellent for implementing LLMs. A large proportion of their implementation and compute cost is precisely that.

                  But LLMs (themselves) are terrible at performing large-scale matrix and tensor operations.

                  Using an LLM to do large-scale numeric processing for astronomy images or signal detection is utterly ridiculous. No toy example

                • > > Even a small 1000x1000 matrix would be completely intractable.
                  > Indeed. It would be for you too.

                  No, because a reasonable person attempting to compute a large matrix product would, approaching this from a first-principles perspective, use a math tool specialized in multiplying large matrices if that were the actual goal.

                  That is why an LLM is a poor tool for the task...it is a generalized language tool that can be forced to emulate the base functionality of its own tech stack somewhat poorly with horrible efficiency. It wouldn't even show up in the top contenders for the task of multiplying large matrices.
                  • No, because if attempting to compute a large matrix a reasonable person approaching this from first-principles perspectives would use a math tool specialized in multiplying large matrices if that were the actual goal.

                    So would an LLM, which if you had read further along the thread, you would have seen it did.

                    That is why an LLM is a poor tool for the task...it is a generalized language tool that can be forced to emulate the base functionality of its own tech stack somewhat poorly with horrible efficiency. It wouldn't even show up in the top contenders for the task of multiplying large matrices.

                    Of course it wouldn't... what kind of idiotic fucking point are you trying to make? Do you imagine that someone contested this?
                    The discussion was on reliability.

                    Dude, you're a fucking imbecile- get lost.

                    • So would an LLM

                      "would" is doing a lot of lifting.

                      "Doesn't" would be more accurate, though.

                      Just because you imagine it doesn't mean it is actually so. And when you argue against what is using only your imagination, it just makes you an idiot. It doesn't make you a visionary, or Future Man, or whatever.

                      Go and invent your LLM that is better at math than mathematicians using specialist tools, then you can talk. Until then, stfu, it doesn't exist.

                    • "would" is doing a lot of lifting.

                      No, it isn't.
                      Because it did.

                      "Doesn't" would be more accurate, though.

                      Again with your reading problem. You know- you could take classes for that. They're very good at bringing special needs kids up to speed, these days.

                      Just because you imagine it doesn't mean it is actually so. And when you argue against what is using only your imagination, it just makes you an idiot. It doesn't make you a visionary, or Future Man, or whatever.

                      What in the fuck are you talking about, you intellectually handicapped simpleton?
                      There's no imagination anywhere here. We're talking numbers, and observables. You're the shit-for-brains over here trying to inject with nothing but some dumbshit hallucinations.

                      Go and invent your LLM that is better at math than mathematicians using specialist tools, then you can talk. Until then, stfu, it doesn't exist.

                      Doesn't need to be better than a mathematician- just needs to be better th

          • I'll try that tomorrow.

            While you have a point, I doubt you are right.

            Anyway, the proposed AIs in question are image analysers and have nothing to do with LLMs, except that they also run on ANNs.

        • by msauve ( 701917 ) on Saturday May 03, 2025 @06:56AM (#65349289)
          Whoosh. Do you also think that a math book which explains matrix multiplication can do matrix multiplication?
          • Depends on the book, I guess :P

            Wait a moment, I ask the little LLM I asked previously ...

            No problem: it invents matrices, multiplies them, and gives a step-by-step analysis of its reasoning as it goes.

            I just don't know which LLM it was; I got served a random one.

            • Get the LLM to transform a 1000-element vector via a 1000x1000 matrix.

              That is a very small numerical processing task that requires only one million fused multiply-adds and would normally take only microseconds on teraflop-class hardware with the data in local memory. Negligible compared to the processing required for the astronomy analysis in the article.

              I’ll wait.
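The million-FMA figure above is easy to count explicitly. A deliberately naive pure-Python sketch (vastly slower than the microseconds quoted for dedicated hardware, which is the point):

```python
import random

def matvec(M, v):
    """y = M @ v: one multiply-add per (row, column) pair, so a
    1000x1000 matrix times a 1000-vector is exactly 1,000,000 of them."""
    return [sum(row[k] * v[k] for k in range(len(v))) for row in M]

n = 1000
M = [[random.random() for _ in range(n)] for _ in range(n)]
v = [random.random() for _ in range(n)]
y = matvec(M, v)
print(len(y), n * n)  # -> 1000 1000000
```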

    • Astrology would probably be more accurate.
    • Obviously.
      Now explain to the class what Nibiruuuuuuu is.

      Wait. Let me put on a huge tall hat first. Does this hat make me look like a God?
      • I had to listen to a nutter on this subject several times because I was living at a former commune-cum-sawmill where he still lived. He kept trying to tell me that it was passing close enough to Earth for its gravitation to affect our orbit, but it still somehow couldn't be detected.

        • First, it's important to note how this big tall hat makes my head look bigger than yours; I want to point that out. You know, like the Pope's hat, like the Water Buffalo Lodge hats from the Flintstones. Everybody does it. Why is that? Why does every religion have remarkable headgear? The Asians and Hindus really go to town with that stuff. Nonetheless, think about this. The tablets that Velikovsky studied are now not just the domain of a dusty university library. Somewhere along the line a diction
      • Can't, the lizard people won't let me. I only got that one post out before

  • Planet Nine? (Score:5, Insightful)

    by backslashdot ( 95548 ) on Saturday May 03, 2025 @04:18AM (#65349147)

    That is Pluto you ninnies. It was discovered in 1930.

    • Re: (Score:2, Flamebait)

      Pluto is a dwarf planet in the Kuiper belt, and has about 1/6 of the Moon's mass.
      I think there are five dwarf planets, and about seven candidates.
      • Re: (Score:3, Interesting)

        by sarren1901 ( 5415506 )

        Yep, and Pluto, on average, is only 40 AU from the Sun. This supposed Planet 9 is possibly 700 AU from the Sun. The heliopause is at about 120 AU and is basically considered the edge of our solar system.

        So calling something 700AU from the Sun a planet sounds suspicious at best. I'm likely showing my ignorance here but this still seems out there.

        Distance from the sun of each planet and Pluto https://phys.org/news/2014-04-... [phys.org]

        • by Slayer ( 6656 )

          So calling something 700AU from the Sun a planet sounds suspicious at best. I'm likely showing my ignorance here but this still seems out there.

          The main factors appear to be "Is the trajectory of this object governed mostly by the sun's gravity?" and "Is it going to stay on a more or less elliptic trajectory around the sun?", and these factors make a lot more sense than "How close is it to the sun?".

        • The distance between Centauri AB and Proxima is more than that. Maybe our sun has a brown dwarf companion?
      • Pluto is a planet. I didn't make the "IAU" my word definition body. And their definition doesn't actually make any kind of sense. Defining what one object is based on the characteristics of other objects is not something that passes any standard of rigor. And you're doing it again with your pluto/moon comparison nonsense.

        If it orbits a star and it is massive enough to collapse into a sphere under its own gravity, then it's a planet. If it doesn't go spherical, then we can talk about other names for it.

        Pluto

        • By your own definition, that makes Pluto at least #10.

          What's your excuse for Ceres other than they didn't tell you that it was a planet in elementary school?

      • This is the 2020s.

        Scientific facts are out; nostalgia is in.

        We can expect an executive order regarding the status of Pluto in the upcoming weeks.

        • by shanen ( 462549 )

          Mod parent Funny, but I already dibbed "Neo-Pluto".

      • For those of you confused by this debate, Pluto was reclassified from a planet to a dwarf planet in 2006, when the International Astronomical Union came up with three requirements for an object to be classified as a planet. Pluto did not meet the third criterion, but there has been debate among astronomers about that, with some arguing that geology should be taken into account. But as far as I know, the IAU still stands by the 2006 definition.
    • It was known pretty soon after 1930 that Pluto isn't massive enough to affect the orbit of Neptune the way that Planet 9 must.
  • Not planet 9 (Score:5, Informative)

    by simlox ( 6576120 ) on Saturday May 03, 2025 @04:31AM (#65349159)
    according to a Reddit comment https://www.reddit.com/r/space... [reddit.com]. Probably just noise.
    • by Anonymous Coward

      Yes, reddit, that most credible source of scientific knowledge.

    • Noise can be a planet too, as long as it clears its path. One theory is the universe is filled with black noise.

    • While it is absolutely NOT Planet 9 (the orbit, while not yet nailed down much, would obviously be well outside the required paths), it could be something else other than just noise.

      That could be just as potentially exciting and interesting as finding Planet 9.

  • https://en.wikipedia.org/wiki/... [wikipedia.org] full of zombie creating aliens. That's my expert opinion.
  • No, it's "Planet Nine." Earth is no longer a planet because we have not cleared the satellite debris field.
  • ... a manhole cover. Nothing to see here. Move along now.

  • Worst movie ever. So bad you have to see it.

    Oh wait, that was Plan 9...

  • .....and it was big enough for your mom.
