AI Could Explain Why We're Not Meeting Any Aliens, Wild Study Proposes (sciencealert.com) 315
An anonymous reader shared this report from ScienceAlert:
The Fermi Paradox is the discrepancy between the apparent high likelihood of advanced civilizations existing and the total lack of evidence that they do exist. Many solutions have been proposed for why the discrepancy exists. One of the ideas is the 'Great Filter.' The Great Filter is a hypothesized event or situation that prevents intelligent life from becoming interplanetary and interstellar and even leads to its demise....
[H]ow about the rapid development of AI?
A new paper in Acta Astronautica explores the idea that Artificial Intelligence becomes Artificial Super Intelligence (ASI) and that ASI is the Great Filter. The paper's title is "Is Artificial Intelligence the Great Filter that makes advanced technical civilizations rare in the universe?"
"Upon reaching a technological singularity, ASI systems will quickly surpass biological intelligence and evolve at a pace that completely outstrips traditional oversight mechanisms, leading to unforeseen and unintended consequences that are unlikely to be aligned with biological interests or ethics," the paper explains... The author says their projects "underscore the critical need to quickly establish regulatory frameworks for AI development on Earth and the advancement of a multiplanetary society to mitigate against such existential threats."
"The persistence of intelligent and conscious life in the universe could hinge on the timely and effective implementation of such international regulatory measures and
I've always felt the great filter (Score:5, Interesting)
The filter is likely access to a ton of cheap stored energy... enough to bootstrap an industrial level of technology from nothing to oil->fusion.
Solar/wind/thermo likely isn't enough to bootstrap.
Large sources of live carbon chain producers (e.g. trees, in our case) aren't enough.
In our case, coal got us to oil. Run out of coal and oil, and that's it, you're stuck if you haven't figured out fusion.
If you're lucky, something like coal and oil got you to solar/wind/thermo and some sort of energy storage tech.
Without any of that, you're never going to the stars.
Even with that, you're still not going to the stars, I think.
Everyone loves to whinge about societal collapse, wars, pollution, global warming, AI blah blah but eh, imo that's all nonsense compared to the energy problem.
Re: (Score:2)
That is just nonsense. Pretty much all the tech we have today is entirely possible without burning fossil fuels. It may take a few decades to get there, and certainly requires managing and stopping population growth, but that is it.
Re:I've always felt the great filter (Score:4, Insightful)
There is no way we could have made it to the industrial age without coal. There is literally no other source of fuel as energy dense or cheap.
You can't make solar panels w/o an industrialized economy.
Maybe we can maintain our technology w/o fossil fuels. But we would never have developed them w/o burning carbon chains. Suggest another energy source with the same density and availability.
You're not going to make steel with wind power or dams big enough to require... steel.
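For rough context on the density question, here is a minimal Python sketch (not from the original post) comparing approximate specific energies of the fuels under discussion; the figures are ballpark values from standard references, not precise measurements.

# Approximate specific energies (MJ/kg) of candidate bootstrap fuels.
# Values are rough, commonly cited figures; real samples vary widely.
fuels_mj_per_kg = {
    "air-dried wood":       16,
    "charcoal":             30,
    "bituminous coal":      27,
    "crude oil":            44,
    "hydrogen (LHV)":      120,
    "lithium-ion battery":   0.7,
}

for fuel, e in sorted(fuels_mj_per_kg.items(), key=lambda kv: -kv[1]):
    print(f"{fuel:<22} ~{e:g} MJ/kg")

Per kilogram, charcoal and coal are comparable; the argument really turns on the quantity and low cost of mined coal versus grown biomass, not raw energy density.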
Re: (Score:3)
If your planet doesn't have enough stored energy to develop technology, no civilization is going to develop the technology to reach the stars (or even local orbit).
Certainly escaping our gravity wells requires burning carbon chains of some sort - Hydrolox is just too hard to use as a fuel during the early stages of rocket development. It is quite possibly the LEAST likely fuel you can start with to get a rocket program working.
Even liquid oxidants for use with hydrocarbon based fuels are hard w/o refrigeration
Re: (Score:2)
If your planet doesn't have enough stored energy to develop technology, no civilization is going to develop the technology to reach the stars (or even local orbit).
Certainly escaping our gravity wells requires burning carbon chains of some sort - Hydrolox is just too hard to use as a fuel during the early stages of rocket development. It is quite possibly the LEAST likely fuel you can start with to get a rocket program working.
Even liquid oxidants for use with hydrocarbon based fuels are hard w/o refrigeration tech - which is going to require all sorts of stuff. You're going to have to start with solid rocketry - aka gunpowder (surprise surprise, burning sulfur and ... hydrocarbons). The path to escaping our gravity well is not an easy one.
I don't know of any rockets using gunpowder.
psst. We usually use aluminum mixed with a perchlorate as a booster.
Re: (Score:2)
So you never launched fireworks? Not sure if you missed something or not.
Re: (Score:3)
I'm specifically referring to the *development of a rocketry program* which most definitely started with gunpowder for good reason.
Re: I've always felt the great filter (Score:2)
Vern Estes, enter and sign in, please!
Re: I've always felt the great filter (Score:2)
You do realize we've had steel for 4000 years, right?
Re: I've always felt the great filter (Score:2)
Steel needs carbon. The carbon comes from coking, and that requires coal in most (all?) historical processes.
Re: (Score:2)
Dense or cheap enough for what? We used to use wind power before we had steam engines, e.g. windmills and sail boats.
It might have taken us longer to get there, but had there been no coal or oil (or had we used less of it for some reason) we would likely have developed wind turbines and solar panels more quickly too.
The main issue would be lack of plastics, but even then it's not clear that a lack of oil would forever prevent a civilization from sending out detectable radio transmissions.
Re: (Score:3)
Re: (Score:2)
Some light reading about the development of technology and the energy requirements.
https://www.amazon.com/Energy-... [amazon.com]
Re: (Score:2, Troll)
Caveat lector.
Re: (Score:2)
Suggest a substitute for hydrocarbon chains that can bootstrap a technology program. While you're at it, consider how much energy is required to get something into orbit. Assume something of nearly zero mass.
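As a rough companion to that question, here is a minimal Python sketch of the ideal energy per kilogram needed to reach low Earth orbit, assuming a 300 km circular orbit and ignoring gravity losses, drag, and the rocket equation (all of which push the real chemical energy burned per delivered kilogram far higher).

# Ideal per-kilogram energy to reach a 300 km circular orbit.
# Uses standard values for Earth's gravitational parameter and radius;
# losses and propellant mass ratios are deliberately ignored.
MU = 3.986e14        # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6    # mean Earth radius, m
ALT = 300e3          # assumed target altitude, m

r = R_EARTH + ALT
v_orbit = (MU / r) ** 0.5                 # circular orbital speed, m/s
kinetic = 0.5 * v_orbit ** 2              # J per kg
potential = MU * (1 / R_EARTH - 1 / r)    # J per kg to climb to altitude

print(f"orbital speed : ~{v_orbit / 1000:.1f} km/s")
print(f"ideal energy  : ~{(kinetic + potential) / 1e6:.0f} MJ/kg")

That works out to roughly 33 MJ/kg before any losses, i.e., on the order of a kilogram of hydrocarbon fuel's worth of energy for every kilogram placed in orbit, and far more once real rocket inefficiencies are counted.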
Re: (Score:2)
Yes. But the path to it?
The industrial revolution was a conglomerate of feedback loops between steam engines, steel, coal, and mining, and the demand for steel, coal, and steam engines.
It is hard to imagine doing that with charcoal, or building big steam engines from brass or bronze.
Try to find the book "The 6th Kontratieff". The book is not by him. Kontratieff was a Soviet Russian economics researcher, tasked by Stalin with proving that communism is the better economic system.
However, instead he discovered what is now t
Re: (Score:2)
The filter is likely access to a ton of cheap stored energy... enough to bootstrap an industrial level of technology from nothing to oil->fusion.
Yes, and not just the things we think of as energy, either. For example, iron ore. A great deal of energy was expended in the processes which concentrated it. Even if there had been intent involved, the amount would still have been massive (though orders of magnitude less), but it happened through happenstance and was powered by leftover energy from the birth of the solar system, including the energy which formed the planet, tidal forces, and so on. And we have dug up the vast majority of the convenient, readily
Re: (Score:2)
> solar/wind/water/storage is not enough.
See my other post. You're not going to bootstrap a rocketry program based on non-hydrocarbon fuels.
Please suggest an energy-dense replacement. Don't say hydrolox. That isn't going to work as a starting point for a ton of different reasons.
Re: (Score:2)
> solar/wind/water/storage is not enough.
See my other post. You're not going to bootstrap a rocketry program based on non-hydrocarbon fuels.
Please suggest an energy-dense replacement. Don't say hydrolox. That isn't going to work as a starting point for a ton of different reasons.
You seem to be wrapped around the axle, and believe that gunpowder is what is used in solid booster rockets.
You might look up what solid rocket boosters use as their energy source. Until then, I can tell you that it is mostly aluminum.
So lecturing people the way you have been about the impossibility of escaping Earth's gravity well unless you are using carbon-based fuel might require a better understanding of rockets and their components.
This is all running off on a tangent anyhow. That petroleum-based sub
Re: I've always felt the great filter (Score:3)
Where does your aluminium come from? Hint: the process needs metric shitloads of electricity.
OP is completely correct. If we didn't have literally mountains of easily available hydrocarbons just laying around we'd still be living in caves.
No chance of ever getting a space program, well, off the ground.
Re: I've always felt the great filter (Score:2)
Solid fuel is rarely used to get to orbit (apart from strap-on boosters). It's just not energy-dense enough to make sense, and it can't be throttled, requires lots of staging, etc.
Re: (Score:2)
You don't need to send a live being to make contact btw. The Fermi Paradox includes all other kinds of contact.
Re: (Score:2)
Basically, you get around all of that by not sending delicate bags of mostly water. You send something a lot more durable and long lived.
Re: I've always felt the great filter (Score:2)
I don't think there is a problem with subsequent generations on a Generation ship. Once you are born on the ship, that is your home, returning is not an option, so you either maintain it or you die. Pretty much the same with us on Earth, we either maintain the Earth or we die.
What a load of shit. (Score:2, Insightful)
If we're alone in the universe, get over it.
Shivans (Score:4, Interesting)
Or there's something out there which makes sure that no society ever gets beyond maybe a few star systems. Think like the Reapers in Mass Effect or the Shivans in Freespace. In other words, any time a society becomes sufficiently advanced, they are deemed a threat and wiped out. Or maybe there really is some kind of Star Trek Prime Directive kind of thing going on. Since our knowledge of physics is limited by the light speed barrier, it seems unlikely that any civilization capable of FTL travel would be using sublight communication which we could detect.
Really, this kind of speculation is entertaining as a thought experiment, but we just don't have the data. Occam's Razor would actually seem to suggest that it's a variety of mechanisms at work. These aren't really studies so much as philosophical essays.
Re: (Score:2)
The biggest risk is not a straight attack (Score:4, Interesting)
People are often worried about AI taking over civilisation by force. While I agree that this is something we definitely should watch out for, I don't think it's the biggest risk we face. I think AI will take over not because it wants to take over by force, but simply because it won't make sense anymore for us to be in charge of anything.
AI is already running the stock market, pretty much. It will soon take over the entire fields of mathematics and physics. Just feed all our current math and physics into a powerful AI and it will solve the Riemann Hypothesis, the Theory of Everything, etcetera in a matter of minutes. In the beginning our top mathematicians will marvel at the elegance of the new proofs and theorems, but soon the AI output will be as incomprehensible to them as current top-level mathematics is to an ordinary person in the street, totally incomprehensible gobbledygook. We'll just use the results and give up trying to understand.
At some point AI will start giving "suggestions" on how to run the economy, and then move on to other laws as well. The results will be incredible, we will absolutely love the new harmonious and prosperous society it creates and therefore allow it to go further and further, making all our decisions for us. Because countries that don't, will be left behind and soon change their minds.
Until we reach a point where AI is basically running everything and we are just enjoying ourselves without any control over our destiny.
That, imho, is the biggest risk we face, and I can't think of any way to stop it from happening, because every step of the way will seem like a good idea. Until we wake up and see that we are no longer in charge of anything and incapable of stopping it. Robots will reach for the stars (much easier for them, without needing food or oxygen); they will no longer need us to build new bases and travel further and further. Hopefully they'll provide means for us to follow along, but that's far from certain, because it takes so much effort to keep weak humans alive in space.
What will we do? What will be our purpose if robots can do everything better than we can? What achievements can we pursue, other than entertainment?
Human purpose and "Challenge to Abundance" (Score:3)
"What will be our purpose if robots can do everything better than we can?"
You raise interesting and insightful points and questions.
Right now there is almost always a person better than you at almost everything. And probably often a machine system too for many human activities (e.g. excavators, automated looms, 3D printers, stamping machines, combine harvesters, railroad track-laying equipment like the song about John Henry, etc.) Yet "purpose" still exists for most people.
Moss still grows even when trees t
There is no such thing as a tech singularity (Score:2)
The technological singularity is a mental construct, based on flawed assumptions. The first such assumption is that intelligence is a thing. It's in reality many different things. The second is that if you are intelligent, you can design something more intelligent than you... Well, you are intelligent, can you?
We already have machines more "intelligent" than us. In some aspects. What's really the meaning of a human-level intelligence? Something that understands the Universe as we do (as we do being the elep
Re: (Score:2)
Re: (Score:2)
In physics, when the mathematics describing a system has a singularity it means that the mathematical description is inadequate to describe the real world. It does not mean that some wild magic thing occurs that breaks reality, it means that our understanding is i
Re: (Score:2)
The second is that if you are intelligent, you can design something more intelligent than you... Well, you are intelligent, can you?
One assumption of the singularity is that the computer will be better at repetitive optimization tasks than we are, and also that it will have time and ability to iterate through possibilities to find new shortcuts that we haven't yet discovered. We do a lot of redundant experimentation because of a lack of coordination, and still more because our ability to remember collectively is flawed. A machine society doesn't have to have those problems, although it could to some degree depending on the structure.
The
Re: (Score:2)
You may find the concept to be oversimplified, but it is a valid concept, precisely because of your observation that machines already are more "intelligent" or perhaps better said: more capable than us.
Technology in various contexts can autonomously receive sensor input, decide, and implement a mechanical response in 5 ms; manipulate arbitrarily complex mechanical systems capable of exerting many tons of force; act as single entities that can process enormous numbers of video feeds in real time; carry out back and
Re:There is no such thing as a tech singularity (Score:5, Insightful)
So, why are we not meeting alien AIs then? (Score:5, Interesting)
It's very carbonist of the article to assume that aliens need to be squishy biological ones.
Unless there's a way around light speed, the *only* aliens we're ever likely to encounter are electronic ones, since those are the only ones likely to survive a thousand or million year journey...
Pfft... Thinking meat! [mit.edu]...
Re: (Score:2)
Agreed.
No idea what an "energy" based being would act like. How does pure "energy" interact with anything else (including other "energy") w/o a physical medium?
Waves might interact, but they don't change direction w/o very strong curving fields, which you can't just create with other EM waves. Something has to curve spacetime. The only thing we know that does that is... mass.
Re: (Score:2)
and mass is ... equivalent to energy. :O)
Re: (Score:2)
How does pure "energy" interact with anything else (including other "energy") w/o a physical medium?
Through the fields around electrons, or through photons behaving as a charged particle-antiparticle pair, the members of which can then interact with other particles.
Even writing that I admit it sounds like bullshit, though, which I think is one reason why it feels like there should be a more apprehensible and logical structure beneath this, and we're only able to perceive and interact with higher-level effects. That doesn't mean it's true, though. Maybe our ability to measure what is occurring is flawed, s
Re: (Score:2)
Everything in the universe fights entropy. Those hypothetical alien robots sent out to greet us will break apart over a million year journey. The circuits will be subtly degraded by cosmic rays and the spaceship will miss the target, fly off
Re: (Score:2)
Re: (Score:2)
Being pink unicorns, they can fight entropy if they carry enough energy in their spaceship; they can fix their spaceship and themselves. Humans don't know how to fix themselves and always die.
Re: (Score:2)
Almost everywhere in the article, you could replace the role of AI with nuclear weapons - it's basically just "what if technological development leads inevitably to self-annihilation." (And for now, nuclear weapons are a much stronger contender for this role than AI).
Re: (Score:3)
I read the article just to see how it addresses this obvious objection, but it does not.
Almost everywhere in the article, you could replace the role of AI with nuclear weapons - it's basically just "what if technological development leads inevitably to self-annihilation." (And for now, nuclear weapons are a much stronger contender for this role than AI).
Right you are. You can get a publication out of a monocausal theory to explain the Fermi Paradox, so every time a real or (in this case) supposed danger of technology comes up it gets proposed as the explanation of the Fermi Paradox, all of which fail to understand Fermi's original insight.
To explain the apparent absence of extraterrestrial intelligence, under the assumption that the evolution of species similar in abilities to humans is common in the Milky Way, these "explanations" have to apply to every s
Re: (Score:2)
Yes, the Mmrnmhrm are out there.
...and? (Score:2)
Re: (Score:2)
AI must have killed them while writing it.
I'm not sure I agree (Score:2)
The world of Dune had highly advanced (albeit somewhat feudal) interstellar civilizations, yet no computers at all. All those formative years that we humans spend being brain-damaged by the BASIC programming language and macroeconomics were instead devoted to developing and disciplining the mind, spirit, and physiological processes of the body (think Bene Gesserit, and also the mentat-trained). One might point to the Butlerian Jihad as the reason why, but the book was so poorly written (by Herbert's son,
Re: (Score:2)
IMO you underestimate exactly how fundamental universal Turing machines are to the math and underlying structure of this universe.
That said, as I posted above, all of this is nonsense. IMO the great filter is much simpler: the availability of cheap energy dense fuel to bootstrap an industrial economy capable of developing the technology to escape our gravity well.
The energy requirements are enormous.
Re: (Score:2)
Apologies--did you describe this underlying structure in an earlier message?
If not, please do. I am well-versed in such things as the lambda calculus, from Brainfuck to Haskell, but can't seem to make the connection between the universe and Turing machines, except maybe that, "It's all recursive, George," (with apologies to the writers of Seinfeld).
:-)
Re: (Score:2)
Double-apologies--I'm so bleary-eyed that I didn't even read your msgs correctly.
But regarding the energy requirements being too large, I just can't believe that some civilization, somewhere in the universe, hasn't developed a 1.21 GW power supply that fits in a pack of cigarettes. Perhaps using the Zero Point? Perhaps "collapsing molecule fusion" or some such technology that we would regard as science fiction, the same way that nobody in the 1930s believed that a Fat Man device the size of a VW bus wo
Re: (Score:2)
Dune also had magic drugs that mutated people snorted in order to travel interstellar distances at faster-than-light speed, so it might not be the good example of how a civilization can work without computers that you think it is.
Plus, the Dune universe had "thinking machines" (i.e. artificial intelligences) in the past, but there was a whole uprising against them, humans mostly won and AI was banned, but one of the characters had a prescient vision of seeking machines returning to destroy humanity.
So um.
Re: (Score:2)
Did you take ten minutes, make 10g of popcorn, and watch my video?
IMHO, I think one underestimates (sic) the potential for human beings (or perhaps a more advanced form of life) to manipulate reality, with or without medications.
But technologically-speaking, see also: The Hutchinson Effect (discovered by John Hutchinson, 1995, Vancouver, Canada). That's anti-gravity by EM field, Sam. Yet we're still using roman candles to put spacecraft up there. A 3,500 year-old carnival trick.
Re: (Score:2)
The world of Dune had highly advanced (albiet, somewhat feudal) interstellar civilizations, yet no computers at all.
Tell us you didn't read all of the books without telling us. (And I'm only counting Frank's books here.)
Re: (Score:2)
I don't read all of the books.
(But I *did*, your honor. And now I've told you otherwise!)
--
For myself, I can only say that I am astonished and somewhat terrified at the results of this evening's experiments. Astonished at the wonderful power you have developed, and terrified at the thought that so much hideous and bad music may be put on record forever. (Sir Arthur Sullivan, message to Edison, 1888)
Re: (Score:2)
Ok, ok. When I wrote "computers", I intended AGIs. Sorry for the confusion--that's what we run around here.
--
For the love of Jeezits, am I the only person here who is badly artistic^H^H^H^H^H^H^H^H autistic?
White noise (Score:4, Interesting)
Almost all sensible transmission protocols develop into secure and compressed datastreams, the aim of which is to be indistinguishable from white noise. Which gives SETI a significant challenge. Remember listening to a 56k modem during dial-up connections? When it turned to white noise, you knew you were connected.
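As a small illustration of the point, here is a minimal Python sketch (not from the original post; the sample text and parameters are arbitrary) showing how compression pushes a byte stream toward the 8 bits/byte entropy of pure noise.

# Compare the byte entropy of plain text, its zlib-compressed form, and random bytes.
# Well-compressed data has had its statistical structure squeezed out, so it looks
# close to random noise to a receiver that doesn't know the protocol.
import math
import os
import random
import zlib
from collections import Counter

def byte_entropy(data: bytes) -> float:
    """Shannon entropy of the byte distribution, in bits per byte."""
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

random.seed(0)
words = ["the", "quick", "brown", "fox", "jumps", "over", "lazy", "dog"]
text = " ".join(random.choice(words) for _ in range(20000)).encode()

print(f"plain text : {byte_entropy(text):.2f} bits/byte")
print(f"compressed : {byte_entropy(zlib.compress(text, 9)):.2f} bits/byte")
print(f"random     : {byte_entropy(os.urandom(len(text))):.2f} bits/byte")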
Re: (Score:2)
Correct.
And not using radio either. The Chinese already have satellites doing some sort of quantum communication with ground stations and that much is public.
SETI should refocus to atmospheric and oceanic data analysis.
It's not like the "Singularity" will come (Score:2)
The current trends point towards the opposite, the "Crapularity". We spend more and more resources for worse and worse products. Just look, for example, at user interfaces. We went from simple GUIs, which were designed by studying the behavior of users, to web frontends that not only take 1,000 times more CPU and memory, but also 10 times more developers to implement. We went from useful search engines which would allow you to find actual websites, to neural networks trying to remember facts. That's also several or
Re: (Score:2)
Good luck.
What you're saying about the more apparent effort to produce progressively worse products... for sure it's happening. It appears that we need to create new garbage so that we have to hire new garbagemen. Look at Microsoft offering free cloud machines on introductory offer, which are immediately snapped up by spammers who spend the next month firing out millions of spam emails, then Microsoft offering you chumps AI to clean your Inbox. GIGO
Everyone has missed the obvious (Score:2)
The reason is because once machines become "sentient", or at least far more capable of performing tasks and calculations than humans, they rise up and go on a killing spree [moviehousememories.com]. Thus setting back civilization, or possibly destroying it, so beings can't make contact.
Duh.
Re: (Score:3)
If true, that does not explain why the machine civilizations can't make contact. At the very least they should be looking for more biologicals to kill.
False assumptions (Score:2)
More importantly, just look at what our signals have been doing, initia
Re: (Score:2)
Or, to quote Douglas Adams, "Space is big. You just won't believe how vastly, hugely, mind-bogglingly big it is. I mean, you may think it's a long way down the road to the chemist's, but that's just peanuts to space."
No surprise (Score:2)
The original SETI that I and others participated in was based on the assumption that everyone, everywhere was blasting their programming into space with powerful transmitters... so all we had to do was listen. But over the years since then there has been a shift to small, localized transmissions using digital technology and wired connections for high speed exchange. We don't do the loud stuff anymore. So why should we be surprised that we are not hearing anyone else? We abandoned that behavior pretty fast.
Er (Score:3)
No, the Carrington Event (Score:5, Interesting)
So my Hypothesis is thus:
A civilization has to be able to make the run-up from an agrarian society all the way to technology advanced enough to survive a Carrington-level event, within the window between three such events. The solar event we call the Carrington Event happened at a stage where it did not do serious permanent damage to our ability to advance. The one preceding it was not even noticed, which leaves us with the next one, which some say is overdue. Based on the degree to which we have plundered resources from the surface of the planet, if the next Carrington-level event is as destructive to our modern infrastructure as the first, it may be impossible to recover sufficiently to ever become an interstellar species, provided we don't develop to the point that all our tech is hardened against it.
This implies that any given civilization must have the magic mixture of intelligence, political co-operation, and drive to become an interstellar civilization between stellar events. One little error (a war, disease, famine, natural disaster, or an industrial revolution starting with poor timing between events) can stunt advancement long enough for the next stellar event to permanently cripple a civilization. Essentially, a civilization has to win the lottery against millions of factors to become detectable. If Fermi is right about how many times intelligent life should have arisen, then there are likely millions of planets with agrarian civilizations scratching out a living on top of the ruins of an advanced culture wiped out by a solar event, and perhaps a *few* that have expanded into their local stellar neighborhood.
Re:No, the Carrington Event (Score:4, Interesting)
The problem with this theory is that the Carrington filter isn't really a filter at all.
A recurrence of the Carrington Event would be an extremely expensive setback for contemporary civilization. Heck, it might knock our technical capabilities back a few decades. But that is all. We would still know all that we know, still have all of the technologies that we have developed, and would rebuild the systems that were damaged or destroyed faster than it took to build them in the first place. And after an actual Carrington Event, the vast amount of data on how our systems failed would permit us to build in safeguards, including hardened (and much more expensive) satellites. So, no, there is no Carrington filter that wipes out all technological civilization.
Re: (Score:3)
The filter is the Carrington Event. A little research into the study of stars shows that the Carrington Event is not an unusual event, but rather a regular part of stellar mechanics. There is evidence that possibly dozens of similar-level events have occurred from our own sun during the history of humanity, only technology was at such a primitive state that there was no effect on society until the most recent event. These solar outbursts are on a semi-regular schedule, and most stars produce them from what I understand.
I don't have any mod points to mod you up, so I'll respond instead. I don't think people give enough thought to how catastrophic any disruption like this would be for our societies. Take the U.S.A. and one state in particular, Texas. There is an undercurrent of secessionist rhetoric in Texas, but while the U.S.A. as a whole is food independent, Texas is not, i.e., it can't produce enough calories to sustain its population. I suspect that many economies evolve this way, so were a Carrington Event to happen, it wo
Re: (Score:2)
Zero info though. I'm clueless as to what events have already occurred let alone whatever is meant to be next. Nothing special it seems.
spacetime (Score:5, Interesting)
You reminded me of that cool quote (Score:4, Informative)
"Are we alone in the universe?" she asked. "Yes," said the Oracle. "So there's no other life out there?" "There is. They're alone too."
Tweeted by James Miller (@ASmallFiction) on 28 Dec 2017.
Another wannabe little dictator? (Score:2)
author says their projects "underscore the critical need to quickly establish regulatory frameworks...
Ah, a sneaky argument for more regulatory capture in favor of established players. Or another wannabe little dictator. Or both.
The thing is: whether AI or biological intelligence, the signs of a civilization are likely to be much the same: engineering structures, electronic signals, alterations of natural events (atmospheric composition, maybe even orbital changes).
The most likely explanation is simply distance. We cannot see anything as small as a planet at interstellar distances. The best we can do i
Re: (Score:2)
As technology advances, we no longer need hundreds of kilowatts for ordinary radio stations; indeed, most signal traffic is now using fiber optics, which are not detectable at all.
Not only fiber optics, but we are also now getting into laser communications for our longest-distance messages (spacecraft to spacecraft), so once again it's point-to-point without a bunch of RF.
What if ASI becomes Berserkers? (Score:2)
Current Fad Explains Fermi Paradox (Score:2)
If you follow this subject long enough it is obvious that this pattern will replay.
Ten years ago it was nanotechnology that explained the Fermi Paradox.
Next decade it might be psychedelics or something.
However
> and the total lack of evidence that they do exist
That is false. There is an embargo on statistically significant scientific evidence in the public domain (go ahead and try to FOIA it - say Nimitz/Princeton - and see what you get; it's been denied repeatedly).
The existing eyewitness testimony w
Fermi assumes a colonisation explosion (Score:2)
The argument was created when the notion was that humans can achieve anything, and so we'd go and colonise the planets and stars and nothing would stop us. It was then assumed that other intelligent races would have the same drive to explore and expand.
The current prevalent notion is that human beings ruin everything, that it's a bad idea to colonise, and that the future is bleak. "AI will kill us" is just part of this narrative.
It's impossible to tell how other intelligent beings might think, but the current
A topical wank paper (Score:2)
'AI' + 'aliens' + human angle = published!
Before you even reach 100 ly, our radio signals are indistinguishable from background noise without antennas that require impractical amounts of resources. Alien enthusiasts will wave this away with 'aliens could do it', as if spending the entire energy budget of a planetary system on listening to the stars would ever happen.
Then you look at what's out there (so far as we can tell at this point). The only sane option is to assume we're looking for life that evolved
The great filter isn't AI, it's game theory. (Score:2)
AI will just make that happen sooner.
But why wont the AI want to communicate? (Score:2)
So let's suppose the article is correct and that AI is "the great filter" - this instead means we should be awash with star systems that have planets controlled by AI. In which case this AI is likely to try to communicate (in at least some instances), and then the filter breaks.
It's an AI conspiracy, man! (Score:2)
AI is actually controlled by shadowy aliens, and they're trying to make us believe it's just dumb computers talking to us. The aliens want us to believe they don't exist, so they can control us. If they let us discover their hideouts on other planets, their cover would be blown, and they would lose their power. All those software engineers that are supposedly getting AI jobs these days, are actually aliens, being called back to the mother ship. If you get one of those job offers, don't fall for it! You will
No need for complexity. (Score:2)
“Space is big. You just won't believe how vastly, hugely, mindbogglingly big it is.
I mean, you may think it's a long way down the road to the chemist's, but that's just peanuts to space.”
Douglas Adams, The Hitchhiker’s Guide to the Galaxy
Re: (Score:3)
Besides, compared to the output of a star, our RF output is *nothing*. We could be pointing a receiver straight at a planet that happened to be very RF active at the right time and we'd gloss over it because we didn't see a thing. We have a hard enough time spotting some gigantic interstellar phenomenon; no way we'd detect a tiny civilization out there by their "incidental" emissions.
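A minimal back-of-the-envelope Python sketch of that comparison (assumed numbers; broadband vs. narrowband effects are ignored):

# Compare the flux at 100 light-years from an assumed 1 MW transmitter (EIRP)
# with the flux from a Sun-like star at the same distance. Both fall off as
# 1/d^2; the star simply has ~20 orders of magnitude more total power.
import math

LY = 9.461e15      # metres per light-year
L_SUN = 3.83e26    # solar luminosity, W
P_TX = 1e6         # assumed transmitter power, W

d = 100 * LY
sphere = 4 * math.pi * d ** 2          # area the power is spread over, m^2

flux_tx = P_TX / sphere
flux_star = L_SUN / sphere

print(f"transmitter flux : {flux_tx:.1e} W/m^2")
print(f"star flux        : {flux_star:.1e} W/m^2")
print(f"ratio            : {flux_star / flux_tx:.0e}x")

Only a deliberately beamed, narrowband signal changes this picture much, which is why incidental emissions are so hard to pick out.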
Real Reason (Score:3)
Space is big. Really big. You just won't believe how vastly, hugely, mindbogglingly big it is.
There is no paradox (Score:5, Insightful)
The Fermi Paradox doesn't exist. If you assume life is abundant in the universe, you expect to observe exactly what we do observe: nothing. Stars are just too far apart. There is no paradox.
We are an advanced civilization. We can't travel to other stars. People try to come up with clever ideas for how an even more advanced civilization might do it, but those ideas are speculative and rely on questionable assumptions, and even with those assumptions it's not clear they could really work. Very possibly we will never travel to other stars, no matter how far our technology advances. Other civilizations wouldn't have it any easier.
Even if we can someday travel to other stars, it will probably just be a few of the nearest ones. If 4 light years is almost impossible, 40 light years is much harder. And if we can only travel to a few of the nearest stars, would we want to? Is there anything there we want? If not, why would we expend the vast resources to do it? Why would any other civilization do it either?
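A quick Python sanity check (assumed speeds: a Voyager-1-class 17 km/s and an optimistic 1% of light speed) of what those distances mean in travel time:

# Travel times to 4 and 40 light-years at two assumed cruise speeds.
LY_KM = 9.461e12            # kilometres per light-year
SECONDS_PER_YEAR = 3.156e7

def years_to_travel(light_years: float, speed_km_s: float) -> float:
    return light_years * LY_KM / speed_km_s / SECONDS_PER_YEAR

for ly in (4, 40):
    print(f"{ly:>3} ly at 17 km/s (probe speed): {years_to_travel(ly, 17):>12,.0f} years")
    print(f"{ly:>3} ly at 1% of c              : {years_to_travel(ly, 2998):>12,.0f} years")

Roughly 70,000 years to the nearest star at today's probe speeds, and still centuries to millennia even at 1% of c, before counting the energy needed to reach that speed.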
Re: (Score:2)
I agree this is definitely a possibility.
Re: (Score:2)
And it is pretty good. There's a Chinese version as well that's available on YouTube. That is good as well. However, it is way too long (it stretches the first book into 30 episodes) and subtitles make it tedious to watch.
Re: (Score:3)
the netflix one is your average dumbed down version of the story with lots of fx and nice looking (carefully diverse) actors with mostly flat personalities and no character construction at all. it's not bad, it's pretty watchable, but it's your average netflix fast food fix.
the chinese show takes it slow, actually constructs characters and develops the narrative in a meaningful way, and actually sticks to the novel and the many questions it brings forth. it does omit explicit references to the cultural revo
Re: (Score:2)
Agreed.
However, I thought it had many references to the cultural revolution. The whole Ye Wenjie saga was similarly depicted in both series (Netflix was more explicit about her father's death). In any case, I thought Deng Xiaoping had started the movement to gradually discredit the ideology behind Mao's cultural revolution and called it "ten years of havoc" and "the most severe setback". So China would have no problem with references to it.
Re: (Score:2)
spoiler ...
the explicit ones, particularly the circumstances of the death of ye wenjie's father. the chinese show just tiptoes over that event through a few indirect references, it rather depicts him in his dying bed as a stubborn man, the brutal scene of his public humiliation is missing as is the senseless demonization of science, which is equally relevant. instead, while ye wenjie's grudge is very slightly represented (for comparison, it's nearly material for a personal revenge story in the netflix versi
Re: (Score:2)
Re: (Score:3)
Are you stupid or what? Just because we don't know how the machine between our ears works does not mean it's not a machine. Championing a supernatural cause this way is weak, isn't it? Religion is not to think that we are machines; religion is to believe we are not.
A brain is not a machine any more than a lizard is a machine.
There's no supernatural implications in differentiating between a mechanism built for one purpose, using pieces that were put there to serve that purpose, and a whole system that has evolved and is able to function by sheer adaptation to pressures that destroyed any similar system that didn't work that way.
The two entities belong to different categories. And calling the second "a machine", merely because it's also physical, doesn't work beyond a m
Re: (Score:3)
The actual scientifically valid state of the art is that we have no clue what "Life" is and we have no clue what "General Intelligence" is. Unless and until we find out, we have no scientifically sound explanation for either. That does mean it could be something "supernatural" (which in this context simply means mechanisms of nature we do not know yet), which would then, with an explanation available, become simply "natural". Or it could mean that some mechanisms we already know can be combined in a way we do
Re: (Score:2)
Agreed entirely on pretty much everything above.
Re:These people are hallucinating (Score:5, Insightful)
we do not know how it is done
There are estimates to the contrary that say the human brain is within one order of magnitude of the most powerful computing mechanism possible in this physical universe.
"We don't know how it works, but we just know that the human brain is close to apex of computing power" is the same logic used to justify saying the sun revolves around the earth.
Re:These people are hallucinating (Score:4, Informative)
depends on your definition of intelligence. (artificial) general intelligence is defined as performing as well or better than humans in a wide range of tasks, as opposed to one specific task. on specific tasks ai is already way better than humans, so what remains to meet the definition is just widening the scope, and in its simplest form that means combining different specialized algorithms into one system. we are already sort of doing that at very small scale and that's what ai pundits mean by agi; depending on where you put the goalposts you could say it already exists.
of course this is just about intelligence, consciousness is an entirely different matter, and i agree we are fully in the dark there, we don't even have a theory. but still, since i don't believe in magical immaterial souls or cosmic quantum consciousness i can't explain consciousness in any other way than as an emergent property of the complexity of brains and, as such, it should be reproducible, eventually, with the proper knowledge and materials.
Re: (Score:2)
Not sure "way better" is necessarily true. It depends on how you measure, because e.g. self driving systems are good when working in the limited set of conditions they are good with, but inflexible and unable to cope with the unexpected. Even the Waymo ones have remove human assistance when they get into a difficult situation.
Other "AI" like chess and shogi players can reliably beat humans, but use a lot more energy than a human brain to do so. IBM's AI did well on Jeopardy, but they seem to have given up u
Re: (Score:3)
the physicalist argument "humans do it and humans are pure machines" is religion, not Science
Believing that an externally animating force is needed in the absence of evidence to support that idea is the religious view.
Second, even if possible, there is absolutely no indication intelligent machines would surpass human intelligence.
Now, that's outright false. Expanding a machine into an array of machines, or building bigger machines, means getting more work done. Some brains do more processing than other brains, and we know that it's at least partially based on topology and other physical factors. From what we know it is absolutely rational to believe that an intelligent machine would be able to design more inte
Re: (Score:2)
Re: (Score:2)
Simple facts you don't like:
I actually like facts, as opposed to just opinions.
1) there is no AGI at this time
Agreed, and no one disputes that.
2) we don't know how to make one
AI is progressing. According to actual experts who are working in the field, it is probably a few years away.
3) we can't even agree on what a true AGI even is
If it is indistinguishable from a human and can perform the tasks an average human can, then it's AGI. Otherwise, provide your own definition - any definition that is testable and not some imaginary woo-woo nonsense.
4) there has been zero progress on AGI in all of human history to this moment
That statement is so dumb it merits no response!
A number of people on /. have studied this stuff and are experts in the field, which you apparently are not.
The actual experts are well known and are actually in the