To Boldly Go Nowhere, For Now 308
An anonymous reader writes "A recent Slate article makes the argument that manned space exploration is not useful and we should concentrate on robots. The article makes the claim that manned space exploration was never popular and that by diverting money to robotic space exploration we can get more bang for the buck. From the article: 'Most of the arguments in favor of manned space exploration boil down to the following: a) We need to explore space using people since keeping the entire human race on a single piece of rock is a bad strategy, and even if we send robots first, people would have to make the journey eventually; and b) humans can explore much better than robots. Both these arguments are very near-sighted—in large part because they assume that robots aren’t going to get any better. They also fail to recognize that technology may radically change humans in the next century or so.'"
only from a short sighted perspective (Score:5, Interesting)
Yes, it's true, sending humans in little cans around to the moon or low earth orbit is not directly valuable in any short to medium term way.
But it's valuable in ways that matter if you're not an MBA.
It gets a new generation of children enthused about math, science, and engineering.
It instills a sense of curiosity and a desire to explore in the next generation.
How do I know? Because I grew up watching the Apollo program, and probably would not have gone into a STEM field if not for that. It kept me dreaming when the schools failed to do so. This is true of friends my age too. We didn't become astronauts, but we DID watch one of the most amazing feats ever undertaken by humanity, and grew up with desires formed by that experience. Arguably, it influenced the entire US culture for a generation, and gave a "can do" attitude that seems almost extinct now.
It's worth it for that alone. If you get some nice spinoffs from it, hey, bonus!
It's not just about the data (Score:3, Interesting)
The engineering problem of sending a human to another planet is very different from that of sending a robot. And the resulting knowledge will be different too. Why not do both?
Re:It's not just about the data (Score:3, Interesting)
Because [...] the media will be full of stories for months after you kill a human crew in deep space, whereas a failed unmanned mission makes a brief story on page ten for a day.
So what you're saying is that all we have to do to get the media to focus for months on science and exploration instead of salivating over war and celebrities is to sacrifice a few astronauts?
Can I sign up for the program? I'd consider that a noble use of my life in and of itself.
Horse-Puckey! (Score:2, Interesting)
You people are making me start to take Col. Corso and Whitley Strieber seriously. What rational purpose is there to keep humanity earthbound? Yes, I know all the wonderful economies and the things that robotics can do, nothing against that, but why do the "scientists" pushing this anti-human crap, or letting themselves be used to push it, think that anyone will really care in other than a marginal way about space exploration if there is no prospect of ever going there? To be crassly blunt about it: how long do they think they will keep their ivory towers from which to make these pronouncements, if everyone else has much more pressing concerns, such as surviving, period, than questions such as, oh, whether there was ever running water on Mars? Where do they think their funding is going to come from? Who will fucking care whether they live, die, get to do science, or get sent packing to flip burgers or sell widgets, or whatever, like the rest of us?
What a mind job. Such arrogance. Let me guess, people who make a Chicago community organizer way out of his league look like Putin.
Humans will boldly die in space! (Score:4, Interesting)
Intense radiation from solar storms and extra-solar gamma rays, along with ordinary human frailties of health, will doom long extended space voyages in any near term.
Way in the future, extra-long multiple-lifespan voyages at super-high speeds will also be futile, because "space" is not empty space but chock-full of ions and molecules, which spacecraft will hit at these projected "hyper-velocities". The effect on metals and other surfaces is similar to what plasma cutting does on Earth today: see Wikipedia on "plasma cutting".
People think "space" is automatically 'cold'. That may be true in most places, but if you get up to high velocities and run into a stream of hot gas, you may find your spacecraft melts surprisingly fast. True, it is not likely, as sensors should let spacecraft avoid these areas, but we simply don't know. Our own Sun throws out these super-hot plasmas, so they are not uncommon.
Robotics seems to have great advantages the minute you leave immediate Earth orbit.
Slate is part of the "premier" liberal press (Score:2, Interesting)
This means, besides the fact it is the mouthpiece of the usurious billionaire ruling class, it abides by one singular ethic: hedonism.
If it does not maximize pleasure, it is evil. If it maximizes pleasure, it is good.
There is a reason the members of this class seem like robots, especially to the European Faustian soul - they possess no understanding of what differentiates men from animals. Their ethic is that of a dog that shivers when it is cold and wishes it were not so. They do not understand why men would go to the moon any more than they can understand why Europeans embarked on the creation of the modern world at the beginning of the Age of Discovery. They can conceptualize the works of great artists only in terms of brilliance that would get them into Harvard or some other type of artificial hierarchy. They do not feel deeply, they do not see further. They have nothing to say except that they want to be good.
Which brings us back to their pathetic and simplistic ethic.
It is all simple pacification.
Re:It's not just about the data (Score:5, Interesting)
American taxpayers love the military. At least most of those I know.
Let's clear something up here (Disclosure: I am a military veteran):
Americans love the military soldier, sailor, airman, coastie, and marine. They love the really bad-assed hardware (well, most guys do). They love the sense of self-testing, character-forging and adventure that often accompanies service. Hell, nothing was more exciting to the 22-year-old kid I was than to tweak and tune a multi-million-dollar aircraft capable of doing heavy damage to anything you care to point it at.
Now, that said: Americans (*especially* those who served in the military) most definitely do not love the chain-of-command, the privations, the suspension of rights required to serve, or the really fucked-up ways in which the aforementioned chain-of-command often expresses itself.
TL;DR? "Loving" the military is too simplistic. Try something more nuanced.
Here's an idea: you go and stand for Congress on a pledge of giving $500,000,000,000 to NASA to put an astronaut on Mars. Let's see how it goes.
It's a mere question of priority. I'm willing to wager that if a comet were projected to slam into the Earth in 5 years, Congress would quickly spend 100x that sum, just to put as many people on Mars (and the Moon, and orbital colonies, etc.) as they humanly could.
Re:We need to send more autonomous robots in space (Score:5, Interesting)
Were you being sarcastic? I have compiled thousands of pieces of code in the last 30 years. None of them have magically transformed into anything other than what I compiled. AI is not voodoo, magic, or anything else.
It's not magic. Neither is cognition. Your big ass-brain is highly inefficient; it's a poor standard to gauge others' sentience against. Did you know the machines are exploring Mars all by themselves now? Curiosity has a machine-learning navigation system, among other things.
It only takes a few cyberneticists being a bit disenchanted with humanity's forty years of failure to realize the spark of life must spread to the galaxy by other means... but I'm getting ahead of myself. It only takes one learning program, a supercomputer's worth of power, and a bit of time to create a learning machine system as complex as your mind.
Check out my little AI children. [vortexcortex.com] (up/down arrow to change sim speed). Click one and you can see the neurons firing. Aren't they cute? It takes about 300 lines of code (mostly boilerplate and environment sim) to create programs (plural) that can learn (there are 20 here, learning). It really only takes 4 neurons to get them to collect dots. However, I added a hidden layer and some extra input about their neighbors' energy status and location. Neurons left to right: [leftness of food], [forwardness of food], [other's energy - my energy], [leftness of other AI], [rightness of other AI]. There are enough neurons in the hidden layer to allow each input to be considered against all the other inputs. The outputs work like tank treads, or thrusters in space, sans inertia. Their "eye" neurons are like simple directional antennae: only two neurons are required to pick up full 360-degree direction AND distance, thanks to fall-off (the inverse-square law).
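To make that "eye" idea concrete, here's a minimal sketch of a two-neuron directional antenna. The function names and the exact response curve (a cosine lobe) are my own illustrative assumptions, not the actual sim's code; the point is just that one left-facing and one right-facing antenna, each with inverse-square fall-off, jointly encode both bearing and distance.

```python
import math

# Illustrative "eye" neuron: response is strongest toward the antenna's
# facing direction and falls off with the inverse square of distance,
# so a left/right pair encodes bearing AND range.
def antenna_response(dx, dy, facing):
    dist2 = dx * dx + dy * dy
    if dist2 == 0.0:
        return 1.0                                      # target on top of the agent
    bearing = math.atan2(dy, dx)
    lobe = (1.0 + math.cos(bearing - facing)) / 2.0     # cosine directional lobe
    return lobe / dist2                                 # inverse-square fall-off

# A left-facing and a right-facing antenna together cover all 360 degrees:
def eye_pair(dx, dy):
    return (antenna_response(dx, dy, math.pi / 2),      # "leftness" of target
            antenna_response(dx, dy, -math.pi / 2))     # "rightness" of target
```

A target directly to the left excites the left neuron and not the right one; doubling the distance quarters the signal, which is how distance sneaks into only two numbers.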
This environment applies natural selection to the brains. The only selection criterion is that those with more energy get chosen to breed more often. This produces various movement strategies in different runs of the sim: slow, fast, forward, backwards, spiraling, aiming just past the target then stopping and reversing into it. Different social behaviors too: bumping to share energy among a group of possibly like-minded individuals, or avoiding each other to save energy, sometimes switching between the strategies depending on a neighbor's energy level versus one's own... Their brains start blank, and in only a few generations movement emerges via selection. Steering toward dots comes next, then avoidance or collision. Usually after a hundred or so generations, social status becomes a factor to compete on.
Such variation from such minimal input. Intelligence is an emergent property of complexity, you see. Tailor the complexity so that the information is self-reflective and self-improving, and you get intelligence. Instincts are basic intelligence encoded in genes, expressed as brain structure (firmware); culture is your software, and it evolves much faster. Unfettered from a life cycle of years, natural selection can be very powerful; with a bit of guidance it could blow your mind...
So, just create a problem space and a goal, connect a few dozen neurons, and without any guided training a good solution can be arrived at, given a bit of time. This is how a machine learning system could come up with ideas and solutions. Consider the sim not as many smaller AIs but as one AI made of 320 neurons solving the problem of most efficiently collecting dots via a swarm of bodies.
Each brain is 32 neurons, with 8 bits' worth of strengths (weights) per neuron, so 256 bits in the genome (though note: I could make them evolve to move towards dots with only 32 bits in their heads). Machine intelligence is efficient. It can do far more with much less. The barrier for sentience is far lower than you think.
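The whole loop described above fits in a few dozen lines. Here's a minimal neuroevolution sketch in that spirit: genomes of signed 8-bit weights, a tiny feed-forward net with a hidden layer, and energy-proportional breeding. The layer sizes, tanh activation, crossover scheme, and every name here are my illustrative assumptions, not the actual vortexcortex.com code.

```python
import math
import random

# Sketch only: sizes and activation are assumptions, not the real sim's.
N_INPUTS = 5    # food-left, food-forward, energy-diff, other-left, other-right
N_HIDDEN = 5
N_OUTPUTS = 2   # left/right "tank tread" thrust
GENOME_LEN = N_INPUTS * N_HIDDEN + N_HIDDEN * N_OUTPUTS

def random_genome():
    # One signed 8-bit weight per connection ("8 bits' worth of strengths").
    return [random.randint(-128, 127) for _ in range(GENOME_LEN)]

def act(x):
    return math.tanh(x / 128.0)     # squash 8-bit weight sums into [-1, 1]

def think(genome, inputs):
    # Feed-forward pass: inputs -> hidden layer -> two thrust outputs.
    w_ih = genome[:N_INPUTS * N_HIDDEN]
    w_ho = genome[N_INPUTS * N_HIDDEN:]
    hidden = [act(sum(inputs[i] * w_ih[i * N_HIDDEN + h] for i in range(N_INPUTS)))
              for h in range(N_HIDDEN)]
    return [act(sum(hidden[h] * w_ho[h * N_OUTPUTS + o] for h in range(N_HIDDEN)))
            for o in range(N_OUTPUTS)]

def breed(a, b, mutation_rate=0.02):
    # Uniform crossover of the two parent genomes, plus point mutation.
    child = [random.choice(pair) for pair in zip(a, b)]
    for i in range(len(child)):
        if random.random() < mutation_rate:
            child[i] = random.randint(-128, 127)
    return child

def next_generation(population, energy):
    # Energy-proportional selection: more energy -> bred more often.
    weights = [max(e, 1e-9) for e in energy]    # keep weights strictly positive
    return [breed(*random.choices(population, weights=weights, k=2))
            for _ in range(len(population))]
```

Run an environment that awards energy for dots collected, call `next_generation` every few seconds of sim time, and blank brains start steering toward food within a handful of generations; no gradient descent, just selection pressure on the genome.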
Your brain is 100 billion neurons, but is VERY inefficient, and mostly not concerned