Science

Virtual Astronomy 74

DarkKnightRadick writes: "In this day and age, data sharing, data mining and distributed computing are terms most of us know well enough, but until recently those phrases were connected with such projects as DNET and, more recently, SETI@Home. Now we should all welcome the newcomer, Virtual Astronomy. With the framework being developed by three different groups (one in the UK, one in the US, and one in Australia), one would expect this to be a very competitive field, but alas, this is not the case. The three groups are working together so that they can have it all up and running in the projected 15 years that it will take to put all this data into an electronic format."
  • It was a lot of fun to participate in the Seti@home project and a worthy cause for spare CPU cycles. Sign me up, Scotty.

    Kevin
  • As a NASA employee, I have seen a pronounced shift in the focus of the organization, from data collection to data distillation and mining. As it turns out, we have decades of work ahead of us just processing the data we collected from space in the 1980s and 1990s alone. In fact, we have not yet developed the software needed to mine the information we want out of this data - the bottleneck is 95% manpower and 5% CPU cycles. And that is in spite of the budget shifting substantially toward processing and away from missions (which are expensive, misunderstood, and often goofed up).

    I fully expect that by 2010 or so, we probably will not be doing launches more than once every few years. Indeed, it is rumored that the recent space shuttle launch was intended only to intimidate third-world nations in the Middle East and make them realize our superior technology - not for any scientific purpose. I do not believe that that launch would have taken place in the absence of the 9/11 events. When I checked the calendar several months ago, it did not show any major launches until late 2002.

    ~wally
    • It's not just in astronomy, or even science, that data gathering is streaking ahead of data analysis.

      In business, VAST amounts of information are now routinely being gathered. In most cases nothing useful is being done with it. One of the things we do for clients is identify the data that can be usefully REPORTED, rather than gathered.

      That way you only gather data that you will be getting value from, reducing the gathering effort and cost dramatically.

      If all scientific data were treated similarly - i.e. gathered as needed - more resources could be put into the analysis as less is put into gathering. Data for its own sake is fine if you have unlimited resources. Most of us don't - not even NASA during wartime.
      • If all scientific data were treated similarly - i.e. gathered as needed - more resources could be put into the analysis as less is put into gathering. Data for its own sake is fine if you have unlimited resources. Most of us don't - not even NASA during wartime.

        But in astronomy at least, new missions (in the case of NASA that means spacecraft, but it applies to ground observatories as well) are mainly driven precisely by need. We've got all this great X-ray data from Chandra, and in many cases it leads to more questions that can't be answered by the data it can provide. Ergo, we build the next generation instrument.

        Sure, we end up with lots of data that has other uses, and even some data that is never analyzed for lack of time. But don't think for a minute that we could learn anything like what we know by sitting back and just analyzing old data. You want to check the Iron K-line complex? Sorry, no existing data has the resolution.

        Mind you, the project in question is A Good Idea. But I assure you that new missions are very competitive, and are approved only if the science to be gained is not achievable with current instruments.

    • by Detritus ( 11846 ) on Monday November 26, 2001 @06:01AM (#2612506) Homepage
      Please do not feed the trolls.

      If you want facts, as opposed to fiction, see the current NASA launch forecast [nasa.gov].

  • -sigh- (Score:2, Insightful)

    by kgarcia ( 93122 )
    Am I blind, or is there no information whatsoever on how I can be part of this? From what I could tell in the article, this is a project in which 3 labs are cooperating to make a lot of data available to researchers, but nowhere does it say that it is a cooperative effort at all similar to SETI - at least, not for 15 years. . .
  • So I have to say the concept and application of distributed computing has really opened the door for many research facilities and other projects to get off the ground and produce results. And applying distributed computing to the crunching and modelling of data that is just "sitting around", to discover new astronomical bodies or refine what has already been discovered, is a very good idea. But what does this provide the end-user? From the article, it sounds as if this is aimed at the average Internet user who has a net connection:

    ...instead with little more than a laptop computer, an Internet connection and a learned and persistent amateur...

    So my comment is really this: the examples used to "sell" distributed computing before were aimed at a much larger group of people. D.NET had RSA sponsoring the RC5/64 competition with a cash prize, and the bragging rights of how many gigakeys you pumped out in a day were worth it to some. Also, with the cure-for-cancer project, people can see something that will have a very positive impact on each and every one of us. But with this, where is the incentive? What will it matter to the end-user (who will be donating his/her spare CPU cycles) what the diameter of a space rock orbiting Pluto is? I can see the concept being very powerful if applied to this scenario, but I don't see it really catching on with people. Maybe those who have participated in Seti@Home can comment, since this seems to be a similar project?
    • You're completely missing the boat here. The "average Internet user" is not a "learned and persistent amateur." This is not a SETI@Home-like effort to get end-users to donate their spare CPU cycles. It is an effort to distribute data to interested researchers, including amateurs.
  • This story may have "star-reaching-implications", but their website has the same old banner ads...
  • The way I read the article, it doesn't seem to be dealing with the issue as a burner of spare CPU cycles (as with seti@home), but more as a big resource for people who actually have some expertise in the field of astronomy.
  • Competition (Score:4, Insightful)

    by pubjames ( 468013 ) on Monday November 26, 2001 @05:09AM (#2612429)
    With the framework being developed by three different groups (one in the UK, one in the US, and one in Australia), one would expect this to be a very competitive field, but alas, this is not the case. The three groups are working together so that they can have it all up and running in the projected 15 years that it will take to put all this data into an electronic format.

    Our village needs a new town hall. Because we're modern progressive thinkers, rather than build one, we've decided that we're going to divide the village into three teams, and then compete to see which team can build a town hall first. Each team will get a grant of taxpayers' money to build its hall. Obviously this will give us the best and most efficient result.
  • More information (Score:3, Informative)

    by Internet Ninja ( 20767 ) on Monday November 26, 2001 @05:33AM (#2612456) Homepage
    The paper and more can be found here [hawaii.edu]
  • Uh... (Score:5, Informative)

    by Graymalkin ( 13732 ) on Monday November 26, 2001 @05:53AM (#2612487)
    The goal of this project isn't to recreate SETI@Home but to give astronomers everywhere access to data collected by instruments in places where they aren't. We've got thousands of instruments gulping down data, but most of it never gets processed, just stored for later. As the article says, anybody can have access to massive amounts of raw data. A grad student in the UK can download data gathered by telescopes in Hawai'i and write their own program to process it, looking for whatever they want. A group of amateur astronomers could request a bunch of wide-field images and scavenge through them looking for comets or asteroids.
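
    (A rough illustration of the kind of processing such a group might script against downloaded images: the following Python sketch assumes the numpy and astropy packages and two hypothetical, already-registered FITS frames of the same field taken on different nights. It is only a toy difference-imaging pass, not anything the actual surveys use; it subtracts the frames and flags bright residuals as candidate moving objects.)

        # Toy moving-object search: difference two aligned FITS frames of the
        # same field and report pixels that brightened between exposures.
        # frame1.fits / frame2.fits are hypothetical example filenames.
        import numpy as np
        from astropy.io import fits

        def candidate_movers(path_night1, path_night2, n_sigma=5.0):
            first = fits.getdata(path_night1).astype(float)
            second = fits.getdata(path_night2).astype(float)
            diff = second - first                       # anything that moved or appeared
            threshold = n_sigma * np.std(diff)          # crude noise estimate
            ys, xs = np.nonzero(diff > threshold)
            return list(zip(xs.tolist(), ys.tolist()))  # pixel positions of candidates

        if __name__ == "__main__":
            for x, y in candidate_movers("frame1.fits", "frame2.fits"):
                print(f"candidate at pixel ({x}, {y})")
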
  • by pease1 ( 134187 ) <bbunge@ladyandtr ... m minus language> on Monday November 26, 2001 @09:53AM (#2612972)
    As an amateur astronomer who has contributed to various research papers with professionals over the years, I have a number of friends who have been mining the SOHO, IUE, HST, 2MASS and other astronomical data archives for a number of years. Most have made some discoveries, usually in the form of new objects, clusters or comets. It's time consuming, and sometimes a bit mind numbing, but very doable for anyone with a decent machine and net connection.

    It works even better if you run Linux, can get IRAF running and have a good display, especially if you want to fool around with the Hubble archives [stsci.edu]. Professional astronomers have been doing their research on unixes for 20-plus years. Tools are available for the asking, and most professionals and grad students are willing to help out an amateur who is serious. Linux brings the same desktop power to an amateur, but at a very low cost.

    Astronomy is one of the few hard sciences where an amateur can contribute serious work, from nothing more than a telescope and a webcam to digging into the very numerous digital archives that are available for free.

    And to add to that, there is a long, long tradition of amateurs and professional astronomers working together. For a great example, see the American Assoc. of Variable Star Observers [aavso.org].

  • by Dan Hon ( 17145 ) on Monday November 26, 2001 @10:23AM (#2613070) Homepage
    Most posters here don't seem to have grasped the fact that these projects aren't about letting the public access data in a Seti@home manner. That's not the aim at all. What they're trying to do is consolidate all the data that they do have available, and make that accessible to researchers. That way, you don't have to bid for expensive telescope time; you just make a requisition for the data, which gets squirted at you over the net.

    Want a particular portion of the sky at a particular wavelength? Just check the database for it. Simple as that. With the number of machine-controlled telescopes and new arrays being developed, all sucking in data, managing it, consolidating it and allowing people to access it in an easy way is a great move forward.
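
    (To make the "just check the database for it" step concrete, here is a minimal present-day sketch. It assumes the Python astroquery package, whose SkyView module wraps the NASA SkyView service linked elsewhere in this discussion; the target and survey names are arbitrary examples, not requirements. You ask for a cutout by position and survey, and FITS data comes back over the net without anyone touching a telescope.)

        # Minimal sketch: fetch an image cutout of a named sky position from
        # the NASA SkyView service via astroquery, then save it as FITS.
        # The position ("M51") and survey ("DSS") are illustrative choices.
        from astroquery.skyview import SkyView

        def fetch_cutout(position="M51", survey="DSS", outfile="cutout.fits"):
            images = SkyView.get_images(position=position, survey=[survey])
            hdulist = images[0]              # one HDUList per requested survey
            hdulist.writeto(outfile, overwrite=True)
            print(f"wrote {outfile} for {position} ({survey})")

        if __name__ == "__main__":
            fetch_cutout()
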
  • Astronomy is a field where non-professionals still make significant discoveries. Virtual astronomy will further facilitate this. Any high school student with enough patience and acumen could learn something significant.
  • We here at SETI@home collected hundreds and hundreds of 35 GB DLTs worth of data, all staring at the hydrogen line. So with relatively little effort at all, we can construct a rather robust map of hydrogen in the sky above Arecibo Observatory.

    Details at: http://setiathome.berkeley.edu/newsletters/newsletter10.html [berkeley.edu]

    - Matt Lebofsky - SETI@home
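
    (As a cartoon of what building such a map involves: power samples tagged with the sky coordinates the dish was pointing at are binned into a 2D grid, and the per-bin average becomes the map. The numbers, array shapes and declination range in this numpy sketch are invented, and the real SETI@home reduction is far more involved.)

        # Toy sky-map builder: average power samples into RA/Dec bins.
        # The sample data is randomly generated purely for illustration;
        # real drift-scan data would carry genuine pointings and powers.
        import numpy as np

        rng = np.random.default_rng(0)
        n = 100_000
        ra = rng.uniform(0.0, 360.0, n)        # degrees
        dec = rng.uniform(-2.0, 38.0, n)       # rough Arecibo declination range
        power = 1.0 + 0.1 * rng.standard_normal(n)

        # Sum of power and sample counts per bin; divide to get the mean map.
        bins, sky_range = (360, 40), ((0, 360), (-2, 38))
        power_sum, ra_edges, dec_edges = np.histogram2d(
            ra, dec, bins=bins, range=sky_range, weights=power)
        counts, _, _ = np.histogram2d(ra, dec, bins=bins, range=sky_range)
        sky_map = np.divide(power_sum, counts, out=np.zeros_like(power_sum),
                            where=counts > 0)

        print(sky_map.shape)   # (360, 40): mean power in each RA/Dec cell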

  • This initiative started when the Decadal Survey was released, about two years ago now. At the moment all the "publicly" available data is spread across multiple repositories: one major one in France, then the NASA Hubble/Chandra/everything-else repository, and then the individual surveys (2MASS, Sloan, NOAO Deep Wide, etc.), which are archived but can prove to be a pain to obtain. More than anything, whoever does all this archiving must have a bunch of hard drive space... for the NOAO Deep Wide Field we've got something like a terabyte's worth of data and we're still getting more... to heck with travel expenses when you have to pay for new 100 gig hard drives every other month...
  • Perhaps this is off-topic, but the link to distributed.net is broken in the story paragraph. The 'p' in 'http:' is missing.
  • The Canadian Astronomy Data Centre [hia.nrc.ca] (CADC) has been doing this since 1986.

    From their web pages:

    The CADC was established in 1986 as one of three world-wide distribution centres for data from the Hubble Space Telescope. The HST archive is made possible through a grant from the Canadian Space Agency.

    Most of the CADC software development is done in collaboration with the European Southern Observatory, located in Garching, near München, Germany, and the Space Telescope - European Coordinating Facility.

    The mandate of the CADC includes

    • operating and maintaining an archive of all the scientific data collected by the Hubble Space Telescope (HST).
    • operating and maintaining an archive of all the scientific data collected by the Canada France Hawaii Telescope (CFHT).
    • developing software tools for maximizing the scientific usefulness of astronomical archives.
    • promoting the concept of astronomical data archives in the community.
    • providing technical information and user support on using data from our archives.
  • I have the feeling that some may be confused about what virtual astronomy is, so perhaps I can try to clear it up a little.

    Much of astronomy now is done by individual astronomers going to telescopes and pointing at interesting objects in the sky. Each researcher has a list of several objects they'd like to observe, and on their night at the telescope they skip from object to object. This is good for the individual astronomer but, unfortunately, it wastes a lot of time, because much of the night is spent finding the object, moving the telescope, and so on. Because observing time is so precious, a new way has been developed to make it more efficient -- virtual observing.

    Virtual observing was made possible by great advances in database technology and hardware storage technology. It works by having a telescope (one which used to be used by astronomers for individual objects, for example) survey broad areas of sky, then storing that data on disk. Efficiency is increased because you essentially put the telescope in one position and let the sky move over it, instead of having to point from object to object. Also, setting the telescope up for one survey run is much better than setting it up for the two or three observers each night, who may have different requirements.

    Then, when an "observer" wants to look at an object, instead of asking the telescope to point there specially for him/her, he/she just goes to the database and retrieves the image. It's better for the astronomer too -- quicker, with no need to wait for a clear night or for your time to observe, and no need to fly out to the observatory. There are also lots of gains to be had in the science, because some experiments require large swaths of contiguous sky to analyze, instead of just individual objects. Much of the work demonstrating the expansion of the universe relies on having such data, and it has only become possible recently with the first virtual observing projects.

    The challenges are, as stated in previous posts, compiling all the data so that it can be accessed easily by observers around the world, plus storage and data processing. It's going to be an exciting time for astronomy, and I think our knowledge is going to increase rapidly!
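
    (To make the "go to the database and retrieve the image" step above concrete, here is a toy Python sketch. The table layout, column names and the SQLite backend are all assumptions for illustration, not anything the real archives use: survey tiles are indexed by the patch of sky they cover, and an "observation" becomes a query for the tiles containing a given RA/Dec.)

        # Toy archive lookup: find which stored survey tiles cover a requested
        # sky position. The schema and rows are invented for illustration.
        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("""CREATE TABLE tiles (
            filename TEXT, ra_min REAL, ra_max REAL, dec_min REAL, dec_max REAL)""")
        conn.executemany(
            "INSERT INTO tiles VALUES (?, ?, ?, ?, ?)",
            [("tile_001.fits", 10.0, 12.0, 40.0, 42.0),
             ("tile_002.fits", 12.0, 14.0, 40.0, 42.0)])

        def tiles_covering(ra, dec):
            rows = conn.execute(
                "SELECT filename FROM tiles WHERE ? BETWEEN ra_min AND ra_max "
                "AND ? BETWEEN dec_min AND dec_max", (ra, dec))
            return [row[0] for row in rows]

        print(tiles_covering(11.3, 41.0))   # -> ['tile_001.fits']
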
  • You can also learn a lot about astronomy with currently available databases. Tycho-2, for example, is huge. The most enjoyable software I've used so far is Starry Night on the Mac (and now the PC as well, I believe). On Linux I have starcat, skycalV5, and xephem (which is serious scientific software!).

    Xephem (a planetarium and analysis program for Linux) is very cool because it can either pull the sky from your LX200 telescope or, by replacing the telescope driver with a perl script, download part of the sky from an online database, after which you can do realtime image processing on it.

    It can also match stars in the sky to stars in the database (a toy version of that matching step is sketched after the links below). So far I have only been able to pull down large segments of the sky at once, but as soon as I can clear the disk space I'll be trying some other pieces of software to download smaller pieces of the sky. Starry Night also downloads DSS [stsci.edu] (Digital Sky Survey) images, I believe.

    NASA Skyview service [nasa.gov]
    Multimission Archive [stsci.edu]
    StarView [stsci.edu]
    Software [sfasu.edu] for different platforms (or check freshmeat.net)
    Serious scientific platforms/data [caltech.edu]
    Skyview (available at IPAC) is available as a Linux binary and installs quickly at 10 MB. It lets you do image analysis with text commands. I have not used it a lot myself.
    AstroWeb [nrao.edu]
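
    (As flagged above, here is a toy version of that catalogue-matching step. It has nothing to do with xephem's actual implementation; the coordinate lists are made up, and the flat-sky distance is only adequate for tiny separations within one field. Each detected star is paired with the nearest catalogue star, and pairs closer than a small angular tolerance count as matches.)

        # Toy cross-match: pair detected star positions (RA, Dec in degrees)
        # with the nearest catalogue star, keeping pairs within a tolerance.
        import math

        catalogue = [(10.684, 41.269), (10.800, 41.300), (11.000, 41.500)]
        detected = [(10.685, 41.268), (10.999, 41.501)]

        def angular_sep(a, b):
            # Flat-sky approximation, fine for small separations in one field.
            dra = (a[0] - b[0]) * math.cos(math.radians((a[1] + b[1]) / 2.0))
            ddec = a[1] - b[1]
            return math.hypot(dra, ddec)

        def cross_match(detections, cat, tol=0.002):
            matches = []
            for det in detections:
                best = min(cat, key=lambda c: angular_sep(det, c))
                if angular_sep(det, best) <= tol:
                    matches.append((det, best))
            return matches

        for det, cat_star in cross_match(detected, catalogue):
            print(f"detected {det} matches catalogue {cat_star}")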
