Science

Ask Slashdot: What Are the Most Dangerous Lines of Scientific Inquiry?

gbrumfiel writes "The battle over whether to publish research into mutant bird flu got editors over at Nature News thinking about other potentially dangerous lines of scientific inquiry. They came up with a non-definitive list of four technologies with the potential to do great good or great harm: Laser isotope enrichment: great for making medical isotopes or nuclear weapons. Brain scanning: can help locked-in patients to communicate or a police state to read minds. Geoengineering: could lessen the effects of climate change or undermine the political will to fight it. Genetic screening of embryos: could spot genetic disorders in the womb or lead to a brave new world of baby selection. What would Slashdotters add to the list?"
This discussion has been archived. No new comments can be posted.
  • by abigor ( 540274 ) on Thursday April 26, 2012 @08:07PM (#39814683)

    Where I live, certain ethnic minorities (taken together, they're actually a majority) are notorious for screening embryos for gender. Then they abort the females until a male is born first. It's become such an issue that it's now illegal to disclose an embryo's gender until the window for legal abortion has passed (I don't remember how many weeks/months that is).

    If you're white, though, the doctor will still tell you if you ask.

  • Re:In other words... (Score:5, Interesting)

    by Guppy ( 12314 ) on Thursday April 26, 2012 @09:01PM (#39815329)

    Speaking of sci-fi, the female lead, Mira, in the book "Evolution's Darling [kirkusreviews.com]" is an assassin who targets scientists judged by her AI overlords to be too close to making undesirable discoveries.

    For instance, her past targets included a researcher working on teleportation (which they calculated would lead to the collapse of civilization), and much of the story involves her mission to assassinate a rogue AI that has developed a method of making perfect copies of AI minds. All for the protection of society, of course.

  • by Beryllium Sphere(tm) ( 193358 ) on Thursday April 26, 2012 @09:02PM (#39815335) Journal

    There would be ethical and humanitarian applications for it, but mere death and pain would be hard pressed to compete with the potential damage of perfect propaganda. If some combination of psychology, hypnosis, drugs in the water, drugs in the drugs, or whatnot made it possible to get people to believe anything you said, that could be the end of all freedom forever.

  • by vistapwns ( 1103935 ) on Thursday April 26, 2012 @09:41PM (#39815729)
    Who needs a soul or magic? With nanobots and AI, someone could torture someone (or everyone), well, forever. What would people do if they knew that, and knew such technologies were coming soon? Perhaps this is why most people call the singularity a 'nerd rapture' and the like: there are very unpleasant possibilities inherent in a very technologically advanced universe, and it's better if nobody acknowledges they're coming, to keep people from panicking.
  • Re:Nothing... (Score:5, Interesting)

    by Prune ( 557140 ) on Thursday April 26, 2012 @11:10PM (#39816571)
    I have a question for you, which may or may not be from a devil's-advocate standpoint (frankly, I haven't made up my mind yet). It's based on two trivial observations: 1) science and engineering let you exert ever greater influence with ever less effort, and 2) destruction is generally easier than creation and restraint.

    Having spelled them out explicitly, I think you can see the obvious implication: technological progress gives ever smaller groups the ability to inflict ever greater death and destruction on ever larger areas and populations, with countermeasures and constraints lagging behind that ability (human history has followed this trend, from massacring competing tribes to being able to trigger a nuclear winter and kill most of the population). Taken to its logical limit, we are heading toward the point where an individual will be able to destroy all of humanity (the specific method, be it "grey goo", bioweapons, nuclear weapons, or a computer virus once we're all wired or have uploaded our minds into machines, is a detail that doesn't affect the argument). The fundamental asymmetry of destructive power versus reactive protection means that even if many attempts are thwarted, one is bound to succeed eventually.

    It seems to me that the ONLY way to deal with this is the most distasteful one: proactive countermeasures, i.e. constant, privacy- and anonymity-nullifying pervasive surveillance (whether by people or machines, it's all the same) that knows what everyone is doing at all times. I'm still waiting for a good counterargument, since I would LOVE it if there were a nicer alternative that would satisfy my warm feelings about freedom etc.
  • Tasp... (Score:5, Interesting)

    by AliasMarlowe ( 1042386 ) on Friday April 27, 2012 @03:11AM (#39817725) Journal

    Successfully making a tasp or droud [wikimedia.org] would probably lead to the end of humanity in a generation or so. At least the end of any non-stone-age parts.

  • Re:Nanotechnology (Score:2, Interesting)

    by Mindcontrolled ( 1388007 ) on Friday April 27, 2012 @04:04AM (#39817977)
    The gray goo scenario already happened anyway. Except that it is green. We now call it the biosphere.
