
Evidence of Controversial Planet 9 Uncovered In Sky Surveys Taken 23 Years Apart (space.com)
Astronomers may have found the best candidate yet for the elusive Planet Nine: a mysterious object in infrared sky surveys taken 23 years apart that appears to be more massive than Neptune and about 700 times farther from the sun than Earth. Space.com reports: [A] team led by astronomer Terry Long Phan of the National Tsing Hua University in Taiwan has delved into the archives of two far-infrared all-sky surveys in search of Planet Nine -- and incredibly, they have found something that could possibly be Planet Nine. The Infrared Astronomy Satellite, IRAS, launched in 1983 and surveyed the universe for almost a year before being decommissioned. Then, in 2006, the Japanese Aerospace Exploration Agency (JAXA) launched AKARI, another infrared astronomy satellite that was active between 2006 and 2011. Phan's team were looking for objects that appeared in IRAS's database, then appeared to have moved by the time AKARI took a look. The amount of movement on the sky would be tiny -- about three arcminutes per year at a distance of approximately 700 astronomical units (AU). One arcminute is 1/60 of an angular degree.
But there's an extra motion that Phan's team had to account for. As the Earth orbits the sun, our view of the position of very distant objects changes slightly in an effect called parallax. It is the same phenomenon as when you hold your index finger up to your face, close one eye and look at your finger, and then switch eyes -- your finger appears to move as a result of you looking at it from a slightly different position. Planet Nine would appear to move on the sky because of parallax as Earth moves around the sun. On any particular day, it might seem to be in one position, then six months later when Earth is on the other side of the sun, it would shift to another position, perhaps by 10 to 15 arcminutes -- then, six months after that, it would seem to shift back to its original position. To remove the effects of parallax, Phan's team searched for Planet Nine on the same date every year in the AKARI data, because on any given date it would appear in the same place, with zero parallax shift, every year. They then also scrutinized each candidate object that their search threw up on an hourly basis. If a candidate is a fast-moving, nearby object, then its motion would be detectable from hour to hour, and could therefore be ruled out. This careful search led Phan's team to a single object, a tiny dot in the infrared data.
It appears in one position in IRAS's 1983 image, though it was not in that position when AKARI looked. However, there is an object seen by AKARI in a position 47.4 arcminutes away that isn't there in the IRAS imagery, and it is within the range that Planet Nine could have traveled in the intervening time. In other words, this object has moved a little further along its orbit around the sun in the 23 or more years between IRAS and AKARI. The knowledge of its motion in that intervening time is not sufficient to be able to extrapolate the object's full orbit, therefore it's not yet possible to say for certain whether this is Planet Nine. First, astronomers need to recover it in more up-to-date imagery. [...] Based on the candidate object's brightness in the IRAS and AKARI images, Phan estimates that the object, if it really is Planet Nine, must be more massive than Neptune. This came as a surprise, because he and his team were searching for a super-Earth-size body. Previous surveys by NASA's Wide-field Infrared Survey Explorer (WISE) have ruled out any Jupiter-size planets out to 256,000 AU, and any Saturn-size planets out to 10,000 AU, but a smaller Neptune or Uranus-size world could still have gone undetected. Phan told Space.com that he had searched for his candidate in the WISE data, "but no convincing counterpart was found because it has moved since the 2006 position," and without knowing its orbit more accurately, we can't say where it has moved to. "Once we know the position of the candidate, a longer exposure with the current large optical telescopes can detect it," Phan told Space.com. "However, the follow-up observations with optical telescopes still need to cover about three square degrees because Planet Nine would have moved from the position where AKARI detected it in 2006. 
This is doable with a camera that has a large field of view, such as the Dark Energy Camera, which has a field of view of three square degrees on the Blanco four-meter telescope [in Chile]."
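The figures quoted in the summary can be sanity-checked with a few lines of arithmetic. This is only a rough sketch, assuming the ~700 AU distance and the 47.4-arcminute IRAS-to-AKARI separation given above:

```python
import math

d_au = 700.0  # assumed distance of the candidate, in astronomical units

# Parallax: over six months Earth's viewpoint shifts by 2 AU, so a body
# at d_au appears to shift by roughly atan(2 / d_au) on the sky.
parallax_arcmin = math.degrees(math.atan(2.0 / d_au)) * 60
print(f"six-month parallax swing: {parallax_arcmin:.1f} arcmin")  # ~9.8

# Apparent motion between the two surveys: 47.4 arcmin over the ~23 years
# separating the IRAS (1983) and AKARI (2006) detections.
rate_arcmin_per_yr = 47.4 / 23
print(f"implied motion: {rate_arcmin_per_yr:.1f} arcmin/yr")  # ~2.1
```

The ~10 arcminute parallax swing matches the "10 to 15 arcminutes" quoted above, and the implied ~2.1 arcmin/yr sits below the roughly three arcminutes per year expected at that distance, i.e. within the range Planet Nine could have traveled.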
Dibs on "Neo-Pluto" (Score:1)
Subject is the joke I was looking for. Nothing modded Funny in the "matured" discussion.
(Never looking for the raw brain fart from nothing nowhere.)
A fitting use for AI (Score:5, Insightful)
If they let loose an AI on all the solar system data and it uncovered more (circumstantial) evidence for the planet, that would be a fitting task and actually interesting AI news for once.
Re: (Score:2)
Re:A fitting use for AI (Score:5, Interesting)
Re: (Score:2, Informative)
Re: (Score:1)
No, he is talking about image analysis tools.
Facepalm.
Or how does your phone know a photo contains a face?
Let's check the matrix multiplication, shall we?
Can you explain how one would implement a matrix multiplication?
What does the LLM say?
Matrix Multiplication Implementation
Matrix multiplication is a fundamental operation in linear algebra. Here's how to implement it:
Basic Concept
For matrices A (m * n) and B (n * p), the product C = A * B (m * p) is calculated as:
Each element C[i,j] is the dot product of row i of A and column j of B
C[i,j] = Sum(A[i,k] * B[k,j]) for k = 0 to n-1
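For reference, the dot-product definition quoted above translates directly into the classic triple loop. A plain-Python sketch, no libraries assumed:

```python
def matmul(A, B):
    """Multiply an m*n matrix A by an n*p matrix B (lists of lists)."""
    m, n, p = len(A), len(B), len(B[0])
    assert all(len(row) == n for row in A), "inner dimensions must match"
    C = [[0] * p for _ in range(m)]
    for i in range(m):
        for j in range(p):
            # C[i][j] is the dot product of row i of A and column j of B
            C[i][j] = sum(A[i][k] * B[k][j] for k in range(n))
    return C

print(matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```

The loop body is the same for any m, n, p; only the iteration counts change with matrix size.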
Seems that the LLM has no problem with it.
The rest is computer jargon bla bla about space and time efficiency and optimizations, which you probably won't grasp ... so I leave them out here ... rofl.
Wait, I ask it again:
So, the algorithm is independent from the size of the matrix?
And it answers:
Matrix Multiplication and Matrix Size ...
You're asking a good clarifying question. The general algorithm is the same regardless of matrix size, but there are important considerations:
So I guess it can multiply matrices of any size for you. As long as their dimensions fit ...
Re: (Score:2)
Re: (Score:2)
I just tried on a smaller local model, and it aced 10x10 with ease. Of course it took it 30k tokens to do it, but there's no reason it wouldn't reliably scale to the size of the context. A million-token context window should be able to do a decently sized matrix.
It's highly unlikely that a non-CoT, zero-shot prompt would be able to do it, though.
Each token is com
Re: (Score:3)
Re: (Score:2)
An LLM is the wrong tool for this job, you cannot bullshit an approximation to an established algorithm.
And using a bullshit approximation layer to translate mathematical algorithm results to words is fraught with the risk of uncaught hallucinations, so adding an LLM would take you further away from the goal, rather than closer.
Re: A fitting use for AI (Score:4, Informative)
Re: (Score:2)
Pro tip: the LLM isn’t spotting differences in photographs. It is delegating out to external image processing algorithms to do these operations, which are implemented in native libraries and languages, similar to how the LLM itself is implemented.
Incorrect.
The LLM isn’t doing any image operations any more than your web browser or network router is doing image operations.
Laughably incorrect.
I can see I gave you too much credit before.
Re: (Score:3)
Instead, images are processed by a dedicated vision model—such as CLIP or Flamingo—which encodes them into embeddings. These models handle the compute-intensive operations using optimized native kernels. Once the image is converted into an embedding, it is passed to the LLM for reasoning and inference.
LLMs could not directly handle even modest image data—for example, the raw pixel values from a low-resolution image conta
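A toy illustration of the embedding step described above. This is purely illustrative: real vision encoders like CLIP use learned transformer weights, which are stood in for here by a random projection matrix.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_image(pixels, dim=8, patch=4):
    """Toy 'vision encoder': split an image into patches and project each
    patch to a dim-dimensional embedding. A real encoder (CLIP, Flamingo)
    uses learned weights; a random matrix stands in here."""
    h, w = pixels.shape
    patches = [pixels[i:i + patch, j:j + patch].ravel()
               for i in range(0, h, patch)
               for j in range(0, w, patch)]
    W = rng.normal(size=(patch * patch, dim))  # stand-in for learned weights
    return np.stack(patches) @ W               # one embedding per patch

image = rng.random((16, 16))  # fake low-resolution grayscale image
embeddings = encode_image(image)
print(embeddings.shape)  # (16, 8): 16 patches, each an 8-dim vector
# It is these compact vectors, not raw pixel arrays, that reach the LLM.
```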
Re: (Score:2)
Nope. As I said, the LLM itself does not perform image processing.
Instead, images are processed by a dedicated vision model—such as CLIP or Flamingo—which encodes them into embeddings.
Correct. Trying to separate those is absurd. The LLM is trained to understand (process) those embeddings. If it were not, it would not be able to make heads or tails of them. The embeddings are merely an encoding of the image, and it is therefore being processed by the LLM.
You said:
It is delegating out to external image processing algorithms to do these operations, which are implemented in native libraries and languages, similar to how the LLM itself is implemented.
A frontend CLIP model is not "implemented in native libraries and languages". It's simply a different encoder layer than the one that handles the embedding of tokens from the context.
You really are making a habit of trying to mo
Re: (Score:2)
In Clip or Flamingo, the per-pixel compute operations for image processing are performed in native libraries and languages, with the base operations (FMA) executing on hardware.
As I’ve repeatedly stated from the very beginning, LLMs cannot perform numeric processing at any practical scale (millions or billions of elements) due to prohibitive compute cost and memory use. They are many orders of magnitude slower at the toy examples they actually can do (eg a 16x16 matrix multiply).
This has been my po
Re: (Score:1)
Because no amount of statistically significant wordbarf will accurately represent a precise computation.
You're an idiot.
What do you think produces that "statistically significant wordbarf".
An LLM is the wrong tool for this job, you cannot bullshit an approximation to an established algorithm.
LLMs can execute an algorithm just fine.
You seem to be very confused about how they actually work.
While they are trained statistically, there are no statistics involved in their inference until the very end (token sampling)
And using a bullshit approximation layer to translate mathematical algorithm results to words is fraught with the risk of uncaught hallucinations, so adding an LLM would take you further away from the goal, rather than closer.
Approximation layer? lol.
It's a series of math equations done on a large set of vectors. I love that you seem to think that a set of ReLU parameters trained on several trillion tokens can't do math, lo
Re: (Score:2)
>While they are trained statistically, there are no statistics involved in their inference until the very end (token sampling)
And yet you also say
> It's a series of math equations done on a large set of vectors.
Which is it? Is it math, or are statistics uninvolved?
You say that distinguishing between the LLM and image operations is laughably incorrect, but then you say:
> LLMs can execute an algorithm just fine.
Which is it? Is the LLM distinct from things it executes as child pr
Re: (Score:2)
It's a series of math equations done on a large set of vectors. I love that you seem to think that a set of ReLU parameters trained on several trillion tokens can't do math, lol.
You’re conflating “being implemented using X” with “being able to perform X as a task”.
Yes, LLMs are built using large-scale linear algebra—billions of parameters, tensors, and matrix operations. But that doesn’t mean they can do large-scale linear algebra themselves. In fact, they perform quite poorly at precise numerical computation, even at relatively small scales.
A useful analogy: the human brain runs on incredibly complex biochemical and electrical proces
Re: (Score:2)
Which is it? Is it math, or are statistics uninvolved?
Oh, is that the game we're playing? All math involved in a statistical inference is statistics?
You say that distinguishing between the LLM and image operations is laughably incorrect, but then you say:
Which is it? Is the LLM distinct from things it executes as child processes, or are we lumping all tech that an LLM touches under the collective umbrella of "AI" and nothing gets to be something else?
As I said, you're an idiot.
There's no child process involved.
VLMs and LLM+CLIP do not use "child processes" to do "image operations."
As suspected, you're a fucking idiot.
Re: (Score:2)
You’re conflating “being implemented using X” with “being able to perform X as a task”.
No, I'm not.
I'm laughing at someone who says that "being implemented using X can't do X."
There is absolutely nothing that guarantees an LLM can do any particular kind of math.
There is, however, a state of ignorance that would lead someone to say that it can't.
Your understanding clearly doesn't go as far as "reading".
Re: (Score:2)
You’re conflating “being implemented using X” with “being able to perform X as a task”.
No, I'm not.
Yes. You are.
Re: (Score:2)
Let's go over the sentences that were apparently too difficult for you to read.
Because no amount of statistically significant wordbarf will accurately represent a precise computation.
I love that you seem to think that a set of ReLU parameters trained on several trillion tokens can't do math, lol.
Get it?
Are you also stupid enough to make the claim that an LLM can't do math?
Re: (Score:2)
As I’ve said from the very, very beginning:
Are you talking about LLMs that can barely multiply even small matrices? They are language models — terrible at numerical analysis, especially for the vast data sets in this scenario.
An LLM can barely do a 16x16 matrix multiply, or similar amount of processing (eg FFT of 256 element 1D array). A task that would be trivial in native implementation on teraflop-class hardware (now just a commodity GPU) — such as 1024x1024 2D FFT for image analysis — would be impossible to perform on an LLM due to the massive inefficiencies i
Re: (Score:2)
Re: (Score:2)
Also keep in mind that the 10x10 matrix multiply example you gave should take 1000 float fused multiply-adds (fma) operations.
Correct.
On the LLM it would have taken many orders of magnitude more, likely billions of fma and millions of times less efficient, assuming it even obtained the correct result (did it?).
What? lol. That's pure lunacy.
You think it can't do basic math iteratively?
And yes, it did come to the correct answer. Of course, like I said, it took about 30k tokens to do it, which is ridiculously inefficient, but the point was to prove that you were wrong, not to prove that it was an efficient matrix multiplier.
Ironically, the LLM is implemented USING matrix and tensor operations, but is very poor at DOING these operations at the token generation and inference level.
Poor? No, it's perfectly fine at doing it- it's just not efficient, for very obvious reasons.
Re: (Score:2)
My original claim was this: “Are you talking about LLMs that can barely multiply even small matrices? They are language models — terrible at numerical analysis, especially for the vast data sets in this scenario.”
My follow up statement, to which you replied, was this:
Re: (Score:2)
> LLMs are extremely inefficient at even moderate numerical processing
Re: (Score:2)
Maybe read the thread. It is ALL about efficiency and reliability: whether an LLM can do numeric processing on a scale needed for processing the astronomy imaging datasets discussed in the article.
That's not how AI-assisted research works.
The relevance here of showing that they can reliably multiply matrices is to demonstrate that they understand the fundamentals.
You'd no sooner have an LLM manually compute a model than you'd have a human. The LLM would design it, including its training methodology, and look at the results.
My follow up statement, to which you replied, was this: “LLMs (themselves) cannot even perform a moderate matrix operation reliably. This is because they are language models, and have very poor performance on large numerical tasks.”
And this statement is absurdly false.
The reliability isn't the problem, the problem is you're treating the LLM like it's a first year college student. Why on earth would it com
Re: (Score:2)
Re: (Score:2)
Read my very first statement: “Are you talking about LLMs that can barely multiply even small matrices? They are language models — terrible at numerical analysis, especially for the vast data sets in this scenario. ”
Not sure what your inference skills are like, but my concern there is efficiency of LLMs themselves. Read the quote. They CANNOT perform large numerical cal
Re: (Score:2)
Re: (Score:2)
You said:
My point was that LLMs (themselves) cannot even perform a moderate matrix operation reliably.
Your point was reliability, and now you're trying to walk it back. Just fucking admit it, ffs.
That LLMs can barely do a 10x10 matrix multiply, and likely not a 100x100 matrix multiply, proves my point.
No, it doesn't. The fact that it can do it reliably flatly disproves your original point. Your new point was never contested by me. Trying to pretend like it is won't save your argument.
Yes- LLMs are not great number crunchers, as we have both said.
They are, however, perfectly capable of being reliable number crunchers, just like a person is.
There's a reason my response l
Re: (Score:2)
Of course, even though LLMs are extraordinarily bad at numeric processing
No, they're not.
inefficient in compute time and memory to the extent that even moderate tasks are intractable
Yes on inefficient in compute time and memory. To the extent that even moderate tasks are intractable? Now that's just silly.
they are good at understanding and explaining this limitation. https://chatgpt.com/share/6816... [chatgpt.com]
It did no such thing. You asked it how inefficient it was for it to multiply a couple of matrices. It correctly answered. You keep beating this dead horse.
The only reason I brought it up, is because you claimed it couldn't be reliably done, which was wrong. It can't be efficiently done- which is perfectly true.
From the perspective of data analysis, if you're using i
Re: (Score:2)
Not sure what your inference skills are like, but
He's a neckbeard. He may sometimes use technical words and make technical claims, but any argument with him boils down to he's right and you're wrong. No logical fallacy is too large or blatant in service of that goal. And he doesn't actually understand any of it, anyway; what he uses in place of thinking is not that dissimilar to the LLM. He's just spewing free associations that feel like they support him.
He's been "arguing" like this since he created his account a couple decades ago.
Re: (Score:2)
Are you talking about LLMs that can barely multiply even small matrices? They are language models — terrible at numerical analysis, especially for the vast data sets in this scenario.
and
LLMs (themselves) cannot even perform a moderate matrix operation reliably. This is because they are language models, and have very poor performance on large numerical tasks.
LLMs are language models. They CANNOT do even very simple numeric processing tasks, eg say a 1024x1024 2D FFT. This is because the scaling factors in time and space make this task IMPOSSIBLE in the LLM. On the small operations they can do, they are thousands or millions of times slower than a native implementation. On larger tasks they FAIL due to compute
Re: (Score:2)
It's not like multiplying a matrix involves different operations depending on its size, which should be the first clue to you that it's not a difficult task for them.
There are very big reasons why the LLM architecture does not scale to a decently sized matrix.
No, there's 1 reason, and 1 reason alone. Because they use their context to do the math, and their context is limited.
Look at it this way, it can multiply any two arbitrary matrices better than you can, without a tool to help.
Even a small 1000x1000 matrix would be completely intractable.
Indeed. It would be for you t
Re: (Score:2)
Large-scale linear algebra via matrix and tensor operations is excellent for implementing LLMs. A large proportion of their implementation and compute cost is precisely that.
But LLMs (themselves) are terrible for performing large scale matrix and tensor operations.
Using an LLM to do large scale numeric processing for astronomy images or signal detection is utterly ridiculous. No toy example
Re: (Score:2)
> Indeed. It would be for you too.
No, because if attempting to compute a large matrix a reasonable person approaching this from first-principles perspectives would use a math tool specialized in multiplying large matrices if that were the actual goal.
That is why an LLM is a poor tool for the task...it is a generalized language tool that can be forced to emulate the base functionality of its own tech stack somewhat poorly with ho
Re: (Score:2)
No, because if attempting to compute a large matrix a reasonable person approaching this from first-principles perspectives would use a math tool specialized in multiplying large matrices if that were the actual goal.
So would an LLM, which if you had read further along the thread, you would have seen it did.
That is why an LLM is a poor tool for the task...it is a generalized language tool that can be forced to emulate the base functionality of its own tech stack somewhat poorly with horrible efficiency. It wouldn't even show up in the top contenders for the task of multiplying large matrices.
Of course it wouldn't... what kind of idiotic fucking point are you trying to make? Do you imagine that someone contested this?
The discussion was on reliability.
Dude, you're a fucking imbecile- get lost.
Re: (Score:2)
So would an LLM
"would" is doing a lot of lifting.
"Doesn't" would be more accurate, though.
Just because you imagine it doesn't mean it is actually so. And when you argue against what is using only your imagination, it just makes you an idiot. It doesn't make you a visionary, or Future Man, or whatever.
Go and invent your LLM that is better at math than mathematicians using specialist tools, then you can talk. Until then, stfu, it doesn't exist.
Re: (Score:2)
"would" is doing a lot of lifting.
No, it isn't.
Because it did.
"Doesn't" would be more accurate, though.
Again with your reading problem. You know- you could take classes for that. They're very good at bringing special needs kids up to speed, these days.
Just because you imagine it doesn't mean it is actually so. And when you argue against what is using only your imagination, it just makes you an idiot. It doesn't make you a visionary, or Future Man, or whatever.
What in the fuck are you talking about, you intellectually handicapped simpleton?
There's no imagination anywhere here. We're talking numbers, and observables. You're the shit-for-brains over here trying to inject with nothing but some dumbshit hallucinations.
Go and invent your LLM that is better at math than mathematicians using specialist tools, then you can talk. Until then, stfu, it doesn't exist.
Doesn't need to be better than a mathematician- just needs to be better th
Re: (Score:1)
I'll try that tomorrow.
While you have a point, I doubt you are right.
Anyway, the proposed AIs in question are image analysers and have nothing to do with LLMs, except that they also run on ANNs.
Re:A fitting use for AI (Score:5, Insightful)
Re: (Score:1)
Depends on the book, I guess :P
Wait a moment, I ask the little LLM I asked previously ...
No problem: it invents matrices, multiplies them, and gives a step-by-step analysis/reasoning of what it is doing.
I just don't know which LLM it was; I got served a random one.
Re: (Score:2)
That is a very small numerical processing task that requires only one million fused multiply-adds and would normally take only microseconds on teraflop-class hardware with the data in local memory. Negligible compared to the processing required for the astronomy analysis in the article.
I’ll wait.
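Back-of-envelope on the figures above, as a sketch: the naive multiply of two n x n matrices costs n**3 fused multiply-adds, so one million FMAs corresponds to a 100x100 multiply, and at an assumed 1e12 FMA/s ("teraflop-class") that is indeed microsecond territory.

```python
def naive_matmul_fmas(n):
    # Naive n x n matrix multiply: n*n output elements, each an n-term
    # dot product, i.e. n**3 fused multiply-adds in total.
    return n ** 3

TERAFLOPS = 1e12  # assumed sustained FMA rate of "teraflop-class" hardware

for n in (10, 100, 1000):
    fmas = naive_matmul_fmas(n)
    us = fmas / TERAFLOPS * 1e6
    print(f"{n}x{n}: {fmas:>13,} FMAs, ~{us:g} microseconds")
```

The 10x10 case is the 1,000-FMA figure cited earlier in the thread; even 1000x1000 stays in the millisecond range on such hardware.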
Re: (Score:2)
Re: (Score:1)
Not sure what you are waiting for.
We do not use LLMs to analyse photos.
And an LLM that can produce a cross product for vectors of length 3, does it just fine for any length of vectors.
Re: (Score:2)
We do not use LLMs to analyse photos.
That was my original point. I simply asked:
Are you talking about LLMs that can barely multiply even small matrices? They are language models — terrible at numerical analysis, especially for the vast data sets in this scenario.
and
LLMs (themselves) cannot even perform a moderate matrix operation reliably. This is because they are language models, and have very poor performance on large numerical tasks.
They CANNOT do the numeric processing required for the astronomy analysis in the article. You and DamnOregonian seem to be having an argument no one else is having.
And an LLM that can produce a cross product for vectors of length 3, does it just fine for any length of vectors.
Incorrect. LLMs are NOT just fine for any length of vectors: they FAIL on very small amounts of data (eg one million data elements) that would be trivial for conventional numeric processing, because the overhead makes these tasks prohibitive.
(I’ll leave as an aside that cross product
Re: (Score:2)
Re: (Score:3)
Nibiruuuuuuuuuu (Score:1)
n/t
Re: (Score:2)
Now explain to the class what Nibiruuuuuuu is.
Wait. Let me put on a huge tall hat first. Does this hat make me look like a God?
Re: (Score:2)
I had to listen to a nutter on this subject several times because I was living at a former commune-cum-sawmill where he still lived. He kept trying to tell me that it was passing close enough to Earth for its gravitation to affect our orbit, but it still somehow couldn't be detected.
Re: (Score:2)
Re: (Score:2)
Can't, the lizard people won't let me. I only got that one post out before
Planet Nine? (Score:5, Insightful)
That is Pluto you ninnies. It was discovered in 1930.
Re: (Score:2, Flamebait)
I think there are five dwarf planets, and about seven candidates.
Re: (Score:3, Interesting)
Yep, and Pluto, on average, is only 40 AU from the Sun. This supposed planet 9 is possibly 700 AU from the sun. The heliopause is 120 AU and basically considered the edge of our solar system.
So calling something 700 AU from the Sun a planet sounds suspicious at best. I'm likely showing my ignorance here but this still seems out there.
Distance from the sun of each planet and Pluto https://phys.org/news/2014-04-... [phys.org]
Re: (Score:2)
So calling something 700AU from the Sun a planet sounds suspicious at best. I'm likely showing my ignorance here but this still seems out there.
The main factors appear to be "Is the trajectory of this object governed mostly by the sun's gravity?" and "Is it going to stay on a more or less elliptic trajectory around the sun?", and these factors make a lot more sense than "How close is it to the sun?".
Re: (Score:2)
Re: Planet Nine? (Score:2)
Re: (Score:3)
Pluto is a planet. I didn't make the "IAU" my word definition body. And their definition doesn't actually make any kind of sense. Defining what one object is based on the characteristics of other objects is not something that passes any standard of rigor. And you're doing it again with your pluto/moon comparison nonsense.
It orbits a star, it is massive enough to collapse into a sphere under its own gravity, then it's a planet. If it doesn't go spherical, then we can talk about other names for it.
Pluto
Re: (Score:3)
By your own definition, that makes Pluto at least #10.
What's your excuse for Ceres other than they didn't tell you that it was a planet in elementary school?
Re: (Score:2)
This is the 2020s.
Scientific facts are out; nostalgia is in.
We can expect an executive order regarding the status of Pluto in the upcoming weeks.
Re: (Score:2)
Mod parent Funny, but I already dibbed "Neo-Pluto".
Re: (Score:2)
I would think Trump would go for renaming Jupiter, the largest planet. Probably call it "America" or some such.
The Earth will become "Planet Trump". Elon or Bezos get to rename Mars; depending on who gets there first.
Re: (Score:2)
Re: (Score:2)
Re: (Score:3)
Please name some of the “planets larger than Pluto” known before 1930 (aside from the 8 other solar system planets) .. Since there are thousands you should be able to name one or two. You know what, I’ll even accept one that is not a moon (you said planet) and confirmed larger than Pluto and within the Solar system.
Not planet 9 (Score:5, Informative)
Re: (Score:1)
Yes, reddit, that most credible source of scientific knowledge.
Re: (Score:2)
I get what you're saying, but that particular Redditor is renowned, look her up.
Re: (Score:2)
Noise can be a planet too, as long as it clears its path. One theory is the universe is filled with black noise.
Re: (Score:2)
While it is absolutely NOT Planet 9 (the orbit, while not yet nailed down much, would obviously be well outside the required paths), it could be something else other than just noise.
That could be just as potentially exciting and interesting as finding Planet 9.
Re: (Score:2)
Leave it alone (Score:1)
Re: (Score:2)
Re: (Score:2)
Number 9 number 9 number 9 (Score:1)
Ed Wood almost got it right! (Score:2)
It's just ... (Score:2)
Planet 9 From Outer Space (Score:2)
Worst movie ever. So bad you have to see it.
Oh wait, that was Plan 9...
Pluto has been a known entity since 18 FEB 1930 (Score:1)
.....and it was big enough for your mom.