Landmark Calculation Clears the Way To Answering How Matter Is Formed

First time accepted submitter smazsyr writes "An international collaboration of scientists is reporting in landmark detail the decay process of a subatomic particle called a kaon – information that may help answer fundamental questions about how the universe began. The calculation in the study required 54 million processor hours on the IBM BlueGene/P supercomputer at Argonne National Laboratory, the equivalent of 281 days of computing with 8,000 processors. 'This calculation brings us closer to answering fundamental questions about how matter formed in the early universe and why we, and everything else we observe today, are made of matter and not anti-matter,' says a co-author of the paper."
This discussion has been archived. No new comments can be posted.


  • by Anonymous Coward on Monday May 28, 2012 @09:23PM (#40138847)

    Why does this matter?

    • by InspectorGadget1964 ( 2439148 ) on Monday May 28, 2012 @09:29PM (#40138877) Journal
      Well, if there is no matter then it certainly wouldn't matter, as you wouldn't matter because there would be no matter to make someone like you ;-)
      • Re: (Score:3, Funny)

        Heh. Well that's a bit optimistic. I mean there are no guarantees that there's enough matter out there to make someone like him. The best you can do is get them on a play-date!

      • Well, if there is no matter then it certainly wouldn't matter, as you wouldn't matter because there would be no matter to make someone like you ;-)

        But there would still be MyCleanPC spam :)

      • "More matter, with less art!"

    • by bmo ( 77928 ) on Monday May 28, 2012 @10:31PM (#40139159)

      As someone who has just re-watched James Burke's "Connections" I have an answer for you:

      Basic science *never* appears to have any immediate applications in the here-and-now. But someone, somewhere, is going to look at bits of it and say "ah, wait, I can use this over here" and either advance more basic science, or start applying it to technology, aka, applied science. But we don't know who, which, how, when, or why. In general, that is how all change happens. It is why we can't look into the future and see all the implications of what we create today. You don't know how someone is going to look at what you did and have an insight into something else because of it.

      If you think something is useless because you, personally, can't see its implications, the problem is not with the science, technology, or social concept (like the creation of the first stock market in the Netherlands, for example); the problem is with you and your myopia. Putting limits on what science gets done because immediate results are not readily apparent does nothing but hinder progress, and society (you and me and everyone else) loses out in the long run.

      James Clerk Maxwell's equations had *zero* immediate implications for society at the time, but here we are 150 years later with a society that would absolutely fall apart without them - no radio, no computers, no high tech at all.

      Anyone who says that basic science is too unfocused needs to sit down, be quiet, and let the adults talk.

      --
      BMO

      • by MightyMartian ( 840721 ) on Monday May 28, 2012 @11:05PM (#40139273) Journal

        In other words; basic research is absolutely critical to scientific advancement, and those that have to ask why are ignorant of how we got to where we are now.

      • by NoNeeeed ( 157503 ) <slash@@@paulleader...co...uk> on Tuesday May 29, 2012 @05:47AM (#40140635)

        Another great example is electricity.

        You won't find many today who would argue that electricity has no use. But go back to the very early days of electricity research (I'm talking about Volta and before) and you'd be hard pressed to find anyone who thought it had any practical use at all.

        That we have electricity as a practical form of energy is down to a bunch of people who researched it because it was interesting, and a mystery to be investigated, not because they thought there was some obvious practical application for it. Yes, engineers like Tesla, Marconi, et al, did lots of work to make it widespread and develop useful applications for it, but they wouldn't have been able to had the fundamental research not been carried out.

        • by IICV ( 652597 ) on Tuesday May 29, 2012 @08:54AM (#40141715)

          The phrase "a solution looking for a problem" was originally coined for the newly invented laser [wikipedia.org] - everyone could tell that it was wicked cool, but nobody could come up with a good use for it besides maybe pumping a ton of power into it and setting fire to something far away.

          • Re: (Score:2, Informative)

            by Anonymous Coward

            Heinrich Hertz, just after his famous experiment where he generated and received radio waves, was interviewed by some newspaper reporter on the practical uses for this new science. His response: "It's of no use whatsoever... this is just an experiment that proves Maestro Maxwell was right."

      • Re: (Score:2, Insightful)

        by Anonymous Coward

        I think the real question is why such inane details bubble up into a general tech forum like this one. I often wonder who the target reader is for such an article -- there are almost no details in the article, there is no breakthrough that will produce any immediately tangible effect, and the reader leaves as confused as they entered.

        I'm always suspicious when the facts of the article are so far off from their proposed implications. This one, for example: "we measured the decay of a particle" leads to "

        • I often wonder who the target reader is for such an article

          It's all the people with budgets at all the companies and organizations that got a PR mention in the article, that's who. Like most of the media today, these guys are just doing marketing, they are not involved in "informing" anyone about anything.

      • by na1led ( 1030470 )
        Because after all, this science is for the good of Mankind, even if Mankind is reckless with it, and destroys the planet.
      • We don't have to explain ourselves to this person who believes science should only be pursued for its applications. Basic science paves the road for a wonderful engineering potential, but that's not why we do it.

        We did it because SCIENCE IS FUCKING COOL.

        Newton and Einstein didn't discover what they discovered out of some search for profit; they weren't Thomas Edison. They thought this science thing was the coolest shit ever and were invigorated by the challenges it offered. Please, an appeal to all scie

  • 281 days? (Score:3, Informative)

    by Gr8Apes ( 679165 ) on Monday May 28, 2012 @09:23PM (#40138849)
    Blue Gene uses quad core PowerPCs, with 8192 cores on the Argonne system. That's a heck of a lot of days of maxing out your CPUs!
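The arithmetic behind the summary's figure is easy to check (a straight division of processor-hours by processor count, ignoring scheduling overhead):

```python
# Sanity check on the summary's figure: 54 million processor-hours
# spread across 8,000 processors, converted to wall-clock days.
processor_hours = 54_000_000
processors = 8_000
days = processor_hours / processors / 24  # hours -> days
print(round(days, 2))  # 281.25, matching the "281 days" in the summary
```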
  • by Anonymous Coward on Monday May 28, 2012 @09:23PM (#40138853)
    Help restore /. to its former nerd news glory, tag stories like this with realslash to tell the editors that we want our favorite site back.
    • by MobileTatsu-NJG ( 946591 ) on Monday May 28, 2012 @10:42PM (#40139193)

      Help restore /. to its former nerd news glory..

      Wait, when was this? I've been here since 99. From day one it was sensationalist stories about Microsoft, verbal fellatio for Linux and Mozilla, and people falling into a big dog-pile to make the first "this is not news!!!" comment.

      Either I missed a very very brief period in Slashdot's history or somebody's looking back with rose-colored glasses.

      • by Belial6 ( 794905 ) on Monday May 28, 2012 @10:48PM (#40139209)
        I don't know. One of the big things that originally hooked me was the tendency for people to 'run the numbers' when they had a disagreement with someone else. Now it seems that instead of putting numbers on the page it just degrades into accusations of people watching FOX news.
        • Got any number to back that up?
        • by SuperKendall ( 25149 ) on Tuesday May 29, 2012 @02:50AM (#40140007)

          One of the big things that originally hooked me was the tendency for people to 'run the numbers' when they had a disagreement with someone else.

          Slashdot has always been full of flamewars...

          The thing is, years ago it was hardly ever political flamewars. Flamewars about technical matters at least let people point to hard data, which kept the whole discussion somewhat tied to reality.

          With politics, all bets are off - because you are talking about people with wildly different views about what is good for other groups of people, and even if they agree on THAT you have differences in how to achieve an end-goal. It's all about Seldonesque behaviors of the masses and there's no "numbers" you can run that someone else cannot simply dismiss away with their own numbers.

          The reason for the spread of politics here is that inevitably, the spread of technology into the lives of every person means technology gets stuck in the tar baby of political motivation. Technology is simply part of the equation about how to change people in ways you deem most beneficial. So there's no going back to more reasoned discussion unless you want to remove technology from people's lives (some do, but I doubt the motive is to make Slashdot more readable).

          It's not like you can make any OTHER site like the "old Slashdot" and have it be any different, due as I said to the intertwining of technology with everyone and politics being everywhere. We all just have to learn how to include politics in technical discourse without getting too heated and off track...

          • I believe the reason that slashdot has become more politically polarized is that so has the rest of the polity. To be more precise, the rhetoric has become more polarized while the policies have become more similar, at least to the degree in which they represent the interests of the ruling class. Divide et impera. Also, legislation targeting computer technology has expanded as the influence of that technology has expanded. It took a relatively long time to go from the beginnings of the web to Wikileaks and

            • by swb ( 14022 )

              I think one reason the polity has become more polarized is that each 'side' in the debate demands new laws of increasing scope that are seen as punitive and diminishing of the rights of the 'other side'.

              As a simple example, the 'left' demands free health care paid for through higher taxes and greater controls. Their opposite, the 'right' sees this as a great power grab and a huge taxation cost.

              It works the other way -- the 'right' demands tax cuts which they will pay for via cuts of entitlement programs.

          • Slashdot has always been full of flamewars...

            vi has ALWAYS been better than emacs and you know it!

      • by tyrione ( 134248 )

        Help restore /. to its former nerd news glory..

        Wait, when was this? I've been here since 99. From day one it was sensationalist stories about Microsoft, verbal fellatio for Linux and Mozilla, and people falling into a big dog-pile to make the first "this is not news!!!" comment.

        Either I missed a very very brief period in Slashdot's history or somebody's looking back with rose-colored glasses.

        You'd be surprised how much /. decayed from its inception in 1997 to 1999.

  • by Anonymous Coward

    to the "that's just the way it is" condition. You can go back in time and talk about when the universe created. But then you have to determine the conditions, the rules the mechanics by which that universe was created. Then you have to ask how those rules, conditions and mechanics were created. If you can answer those steps, then you have iterated back just one more layer and will have to answer those questions all over again.

    Unless those conditions, rules and mechanics iterate forever, you are forced to a

    • Re: (Score:2, Funny)

      by Anonymous Coward

      Sir, have you considered that maybe the universe is just a simulation? And if that is the case, we might be able to hack the simulator. Really, in what type of universe is there an arbitrary speed for light? And who really believes quantum mechanics isn't some grad student's little experiment to see what would happen in a simulated universe with such a crazy system.

      Isn't it obvious? The only sane answer is to destroy the universe. We must crash the system so that this arrogant grad student fails out of scho

      • Sir, have you considered that maybe the universe is just a simulation? And if that is the case, we might be able to hack the simulator.

        Trust me, you don't want to do this. The last time I did it I ran into a nasty bug (grad student, remember? Bug free hardly likely) so, sorry for only three sexes now, even if I did get rid of Gharlane.

        I'm not doing that again until I'm sure my part of the universe is unpageable. Who knows what other horrors lurk in the untested recesses of the garbage collector?

        • Sir, have you considered that maybe the universe is just a simulation? And if that is the case, we might be able to hack the simulator.

          Trust me, you don't want to do this. The last time I did it I ran into a nasty bug (grad student, remember? Bug free hardly likely) so, sorry for only three sexes now, even if I did get rid of Gharlane.

          I'm not doing that again until I'm sure my part of the universe is unpageable. Who knows what other horrors lurk in the untested recesses of the garbage collector?

          Consider yourself lucky. My experiments led me to realizations that we all weren't even quite really human. In order to return to any semblance of a normal life I was forced to intentionally cause minor brain damage -- a few tiny and carefully placed lesions on my amygdala to prevent certain impulses from reaching my hypothalamus. This had the intended and desired effect of preventing me from being fully aware of the actual reality that neither I, nor any other human, is actually quite really human.

          FWIW, in r

    • Not necessarily. Our universe could merely be a manifestation of a mathematical structure, among an "ultimate ensemble" [wikipedia.org] of infinite and statistically equal mathematical structures (which manifest themselves as parallel universes with drastically different physics in each one). In other words, all rules and conditions arise out of pure statistics.

      • by Artifakt ( 700173 ) on Monday May 28, 2012 @10:24PM (#40139133)

        There conceivably could be an infinite number of "parallel" universes, but there's a real philosophical problem with that. So long as we use the real physicists' definitions and not something out of Stargate SG1, those parallels will always remain undetectable. SF writers tell stories about interacting with other universes - physicists define them in ways that show they can't be interacted with to be verified.

        An untestable idea isn't part of science. If it can't be disproven, it's philosophy or religion or something instead. An infinite number of untestable ideas is even worse. Philosophers get to whip out Occam's Razor at that point. If I claim that there is not only a God, but 7 different orders of angels totaling 144,000 beings working for him, those numbers are still simpler, in the sense Occam's Razor usually means, and so are to be preferred as a hypothesis. The same goes for a million gods with an average of four arms each and a bunch of hidden cyclic time periods totaling quintillions of years for them to do their work in, or any of those models with a reasonably sized bunch of gods, and maybe some giants, dwarfs, dark elves, ninja turtles, pizza delivery robots, a billion clones of an invisible pink unicorn who died for your sins, riding on a gigantic fiberglass replica of L. Ron Hubbard, and so on. Just about any other idea looks preferable to an idea that postulates an infinite number of unverifiable consequents.

        • Since the simplest of those options is "it just is that way", I'm afraid your army of invisible pink unicorns will have to return to the stable.

          We can only test our own universe, though if we can detect edge interactions where it appears to be being acted upon by something undetectable that *might* be evidence for parallel universes (or even evidence for gods if the data points that way). We are definitely working at the edge of what can be known when looking at that sort of thing, though, so I wouldn't exp

          • by Belial6 ( 794905 )
            We are way past 'what can be known' depending on the level of technology you are looking at. Or, maybe we are not even close to what can be known at a different level of technology. After all, it wasn't that long ago that the atom was the base particle and was not made up of smaller parts.
        • Re: (Score:3, Informative)

          by Anonymous Coward

          There conceivably could be an infinite number of "parallel" universes, but there's a real philosophical problem with that. So long as we use the real physicists' definitions and not something out of Stargate SG1, those parallels will always remain undetectable. SF writers tell stories about interacting with other universes - physicists define them in ways that show they can't be interacted with to be verified.

          An untestable idea isn't part of science. If it can't be disproven, it's philosophy or religion or something instead. An infinite number of untestable ideas is even worse. Philosophers get to whip out Occam's Razor at that point. If I claim that there is not only a God, but 7 different orders of angels totaling 144,000 beings working for him, those numbers are still simpler, in the sense Occam's Razor usually means, and so are to be preferred as a hypothesis. The same goes for a million gods with an average of four arms each and a bunch of hidden cyclic time periods totaling quintillions of years for them to do their work in, or any of those models with a reasonably sized bunch of gods, and maybe some giants, dwarfs, dark elves, ninja turtles, pizza delivery robots, a billion clones of an invisible pink unicorn who died for your sins, riding on a gigantic fiberglass replica of L. Ron Hubbard, and so on. Just about any other idea looks preferable to an idea that postulates an infinite number of unverifiable consequents.

          An untested idea isn't science?

          The scientific method is:
          State the problem. Are there multiple universes?
          Form a hypothesis. Yes there are other universes.
          Test your hypothesis using experimentation and observation. I examine black holes and the mathematics behind them. I also study the Cosmic Microwave Background that seems to have a cold spot in it (Source: Through the Wormhole with Morgan Freeman). The cold spot is potentially another universe's gravity pulling on our universe.
          The hypothesis can be Proved, D

          • by dkf ( 304284 )

            An untested idea isn't science?

            Science is all about a systematic way to study testable things and make predictions about them, so a definitely untestable idea isn't a scientific theory. It might be a hypothesis, or an interpretation, or any number of other things, but it is not a theory.

            An example of something that is not scientific at all is this: "The Flying Spaghetti Monster created everything instantaneously 10 minutes ago, including all evidence of things before and all your memories." Whether or not it is true, it is completely untestable and science will therefore say nothing about it.

            • by tyrione ( 134248 )

              An untested idea isn't science?

              Science is all about a systematic way to study testable things and make predictions about them, so a definitely untestable idea isn't a scientific theory. It might be a hypothesis, or an interpretation, or any number of other things, but it is not a theory.

              An example of something that is not scientific at all is this: "The Flying Spaghetti Monster created everything instantaneously 10 minutes ago, including all evidence of things before and all your memories." Whether or not it is true, it is completely untestable and science will therefore say nothing about it.

              Untested and untestable are two entirely different concepts. Untested implies we can test it and have to do so. Untestable implies we are presently incapable of testing the idea.

          • I also study the Cosmic Microwave Background that seems to have a cold spot in it (Source: Through the Wormhole with Morgan Freeman). The cold spot is potentially another universe's gravity pulling on our universe.

            I should point out that there is significant scientific debate on this point. Roger Penrose and colleagues claim that the rings in the CMBR could be evidence of multiverses, while others (e.g. Hajian [arxiv.org], Wehus, Moss, etc.) claim that such rings are found in completely random, simulated CMBR data as well.

          • by danhaas ( 891773 )

            The universe is the totality of everything that exists.

            If we can interact, through gravity or anything else, with another "universe", it just means the universe is bigger than we thought.

            If there are other universes, they must by definition not interact with our own and therefore be unverifiable. Multiple universes are mathematical constructs only.

            You can however say that what we, at this moment, judge to be "the Universe" is only a D-brane among many others. But the universe still is everything, regardles

        • There conceivably could be an infinite number of "parellel" universes, but there's a real philosophical problem with that. So long as we use the real physicists definitions and not something out of Stargate SG1, those parallels will always remain undetectable. SF writers tell stories about interacting with other universes - physicists define them in ways that show they can't be interacted with to be verified. An untestable idea isn't part of science.

          Quite true. And while there is a component of

        • by kebes ( 861706 )
          You make good points. However, I think you're somewhat mischaracterizing the modern theories that include parallel universes.

          So long as we use the real physicists definitions and not something out of Stargate SG1, those parallels will always remain undetectable. SF writers tell stories about interacting with other universes - physicists define them in ways that show they can't be interacted with to be verified.

          (emphasis added) Your implication is that physicists have invented parallel universes, adding the

          • by mbkennel ( 97636 )

            "Max Tegmark explains this nicely in a commentary (here [mit.edu] or here [arxiv.org]). Briefly: if unitary quantum mechanics is right (and all available data suggests that it is), then this implies that the other branches of the wavefunction are just as real as the one we experience. Hence, quantum mechanics predicts that these other branches exist."

            I like the axioms: wavefunctions exist and Schroedinger's equation is right for all time, but I think the effect of 'collapse' is a physical effect, inside QM,

    • No no we peeled back the God layer quite some time ago. We are well past it.

      • Re: (Score:2, Troll)

        by camperdave ( 969942 )
        Sorry, when was it conclusively proven that God doesn't exist? Last I heard, the only instruments we had for detecting the spiritual realm were some thetan e-meters.
  • by Anonymous Coward on Monday May 28, 2012 @09:31PM (#40138887)

    how is mater formed
    how universe get axpadned

  • From TFA:

    The next step in the research will be to determine the remaining unknown quantity that is important to understanding the difference between matter and anti-matter in kaon decay. This last quantity will either confirm the present theory or perhaps, if they are lucky, Blum says, point to a new understanding of physics.

    It appears that both theoretically and computationally there is still some work to be done.

    • From TFA:

      The next step in the research will be to determine the remaining unknown quantity that is important to understanding the difference between matter and anti-matter in kaon decay. This last quantity will either confirm the present theory or perhaps, if they are lucky, Blum says, point to a new understanding of physics.

      It appears that both theoretically and computationally there is still some work to be done.

      Theoretically we have some more work to do, but computationally we have enough resources now that we have access to the IBM BlueGene/Q machines at BNL [bnl.gov]. The theory side of determining the other unknown quantity is no more complicated than the calculation detailed in the article, but there are a few technical challenges: first we have to figure out how to simulate the associated decay with physical kinematics (energy-momentum conservation) and secondly how we precisely calculate interactions involving decays

  • They're not using Pentium III based parallel processing machines.... :-)

  • by Ukab the Great ( 87152 ) on Monday May 28, 2012 @09:36PM (#40138915)

    42. 42 kaons. Ha, ha, ha!

    • o/~ And sometimes when I'm alone I *bleep* myself! o/~

      I swear, that video completely destroyed The Count for me. ;)

    • 42. 42 kaons. Ha, ha, ha!

      I wonder if this is the first time Sesame Street and HHGTTG have ever been combined into one geek reference?

  • (see subject)
    • So, uhhhh... How?

      And the answer is... don't know yet. But we're one step closer to knowing!... maybe, if they did the calculations right, and got all the parameters right, and our theories about how the universe works at a very low level are reasonably accurate. Then, we might be a little bit closer to knowing!

      Possibly not, though, this could all be a blind end. But, that is how science works: it gets to something like the right answer, eventually.

      • So, uhhhh... How?

        And the answer is... don't know yet. But we're one step closer to knowing!... maybe, if they did the calculations right, and got all the parameters right, and our theories about how the universe works at a very low level are reasonably accurate. Then, we might be a little bit closer to knowing!

        Possibly not, though, this could all be a blind end. But, that is how science works: it gets to something like the right answer, eventually.

        Actually our calculation is designed specifically to help discover where our existing theories break down. We know that the amount of CP-violation (an essential condition for a matter/antimatter asymmetry) in the Standard Model of particle physics is not enough to explain the observed asymmetry; New Physics therefore must exist somewhere, we just have to find it. In order to do so, we need to know precisely what our current theories predict, so that we can look for deviations between experiment and theory;

  • by axlr8or ( 889713 ) on Monday May 28, 2012 @09:53PM (#40139021)
    Kirk was misquoted. It was "KKAAAOONNN!!!!!"
  • by DontLickJesus ( 1141027 ) on Monday May 28, 2012 @10:27PM (#40139145) Homepage Journal
    All I read about this event is that the computers mapped the decay. Not 1 piece of information about what they learned. In that light, I'll fill in the blanks with the pieces of Quantum Physics I understand.

    Kaons are quarks with "strangeness". This typically includes Up, Down, Charm, Strange, and Bottom. Top doesn't participate due to size and shortness of life. Kaons ( http://en.wikipedia.org/wiki/Kaon [wikipedia.org] ) decaying into Pions ( http://en.wikipedia.org/wiki/Pion [wikipedia.org] ) is a great demonstration of quarks participating in the Weak Force. This study combines our study of particle oscillation and weak decay, and digitally maps out that entire process rather than simply relying on theory. Granted, they weren't actually watching this happen, but the generated map gives Physicists what they need to compare against findings from places like the LHC.

    TL;DR? Basically, this group designed software and used a very fast computer to generate a result set from theoretical predictions which can be used to compare against various super-collider findings. Specifically, these result sets are regarding Kaon to Pion decay, a Weak force interaction.
    • by Roger W Moore ( 538166 ) on Monday May 28, 2012 @11:38PM (#40139381) Journal
      Since the blog entry contains no reference - and the one hint there is is wrong - here is the actual article reference: Phys. Rev. Lett. 108:141601 (2012) - which was published on 6th April, not 30th March as the article states!

      Now onto the physics, sorry but your summary is almost completely wrong. Kaons are mesons which are a bound state of a quark and anti-quark. In the case of neutral kaons this is a strange and anti-down (or vice versa for the anti-kaon IIRC). What is interesting about the kaon is that the neutral states can oscillate between kaon and anti-kaon through a weak interaction. What you end up with is a long-lived kaon (KL) and a short lived one (KS). The simplest way to demonstrate that this system differentiates between matter and anti-matter is to look at the long lived kaon decaying into muons (heavy cousins of the electron). The number of anti-muons will be about 0.1% different from the number of muons produced.

      However the decay to pions is far more closely studied because it can tell us far more information - in particular whether this symmetry breaking occurs in the decay mechanism (direct CP violation) or only in the weak mixing of a kaon to anti-kaon (indirect CP violation). The experiment I worked on as a grad student, NA48, observed this direct CP violation unambiguously for the first time, confirming the previous NA31 result. This ruled out more exotic types of CP violation from a new "superweak" interaction and, in broad terms, was consistent with the Standard Model.

      However this was not really confirmation of the Standard Model because the actual calculation of CP violation occurring in the SM is really hard to calculate: it involves quark/W boson loops which must have contributions from all three generations of quarks (specifically including the top quark!). These so-called penguin diagrams [wikipedia.org] (blame the name on John Ellis' dart playing skills!) are really hard to calculate - at least to the accuracy needed for CP violation in kaons. Kaons must decay through a weak interaction because only the weak interaction can change the strange quark into an up quark which is needed for pion decay. However there is also a strong component to the decay.

      Strong (QCD) processes are really hard to calculate because perturbation theory does not work for them (the interaction is far too strong). One approach to solve this is lattice QCD which literally simulates all the colour (QCD) fields on a 4D grid of space-time points. However this is really CPU-intensive so only small grids can be simulated. This is not too bad if you have a strong process because, being 'strong' it happens quickly in a small region. However the weak part of the decay occurs more slowly over a larger area. What the authors seem to have done is overcome this simulation problem of both weak and strong forces in the same decay which raises the prospect of accurate calculations of the CP violation in kaon decays which has never been possible before. For the technically minded this paper calculates the Isospin=2 decay amplitude (A_2) whose phase shift, relative to the isospin 0 amplitude (A_0) is what makes direct CP violation visible - it's a really interesting paper - at least if you have ever been involved in kaon physics!
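The quark bookkeeping in the post above can be sketched as a toy lookup (the quark assignments are the standard ones; the particle names and strangeness sign convention here are just illustrative, not anything from the paper):

```python
# Toy illustration of why kaon decay to pions needs the weak force:
# the strange quark must change flavour, and only the weak interaction
# does that. Antiquarks are marked with a trailing '~'.
quark_content = {
    "K0":  ("d", "s~"),   # neutral kaon: down + anti-strange
    "K0~": ("s", "d~"),   # anti-kaon: strange + anti-down
    "pi+": ("u", "d~"),   # charged pions
    "pi-": ("d", "u~"),
}

def strangeness(particle):
    # Convention here: an s quark contributes -1, an anti-s contributes +1.
    return sum(+1 if q == "s~" else -1 if q == "s" else 0
               for q in quark_content[particle])

# Strangeness changes from 1 to 0 in K0 -> pi+ pi-, so the decay cannot
# proceed through the strong or electromagnetic interactions alone.
print(strangeness("K0"), strangeness("pi+") + strangeness("pi-"))  # 1 0
```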
      • Re: (Score:3, Interesting)

        by Anonymous Coward

        Thank you for the explanation, it is far more informative than the blog post.

        I'm a little depressed with how little of that explanation I understood - I'm a 3rd year physics PhD student, writing a thesis on matter-antimatter interactions (specifically, low energy swarm theory with liquids), and even I only have a very loose grasp of what you're talking about. I suppose it says something for how specialized physics really is. 99.9% of people in the world would think that we're studying the same thing.

        • by Roger W Moore ( 538166 ) on Tuesday May 29, 2012 @08:01AM (#40141249) Journal
          Not that I want to make you more depressed, but the above post was at a level somewhat below what I'd expect final-year undergraduates to understand - at least the ones who have taken an undergrad particle course. The only exception is the A_0 and A_2 amplitudes, which are specialized kaon physics. If you are studying matter/antimatter interactions then you ought to know this stuff. There is a good undergrad book by Griffiths, "Introduction to Elementary Particles", which has a section on CP violation including the B meson sector. I'd also happily share my lecture slides on this but my university has not yet implemented public access to course material.
          • Given that I was only an undergrad who continued (on my own) studying nuclear and HEP and can understand this, I'm really shocked that a PhD candidate would not. You should have been exposed to this in your senior year as an undergrad, and I would imagine at least one course on nuclear/particle physics in graduate work, even if your concentration will be in some other area. Just about any non-picture-book on particle (and sometimes nuclear) physics will discuss neutral K mixing. Note that the states are often referred to as K1 and K2.

            • Note that the states are often referred to as K1 and K2.

              Not quite, K1 and K2 are different: the real, physical states are KL and KS. The K1 and K2 states are the pure CP eigenstates. If CP were a perfect symmetry of nature then you would be correct and the physical long-lived state would be the same as the CP=-1 state (which can be either the K1 or K2 state depending on your convention) and the short-lived state would be pure CP=+1. However, because CP is not a perfect symmetry, the long-lived kaon state is KL = N*(K1 + epsilon*K2) and KS = N*(K2 + epsilon*K1), where N = 1/sqrt(1 + |epsilon|^2) is a normalization factor.
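The mixing described above is easy to check numerically. A minimal sketch: treat K1 and K2 as orthonormal basis vectors, take N = 1/sqrt(1 + epsilon^2) so the states come out normalized, and use a real epsilon of roughly the measured magnitude (physically epsilon is complex, with |epsilon| ~ 2.2e-3):

```python
# Numerical check of the KL/KS construction in the (K1, K2) basis.
# epsilon is taken real here for simplicity; physically it is complex
# with |epsilon| ~ 2.2e-3.
import math

eps = 2.2e-3
N = 1.0 / math.sqrt(1.0 + eps**2)   # normalization factor

KL = (N * 1.0, N * eps)   # KL = N*(K1 + eps*K2), components (K1, K2)
KS = (N * eps, N * 1.0)   # KS = N*(K2 + eps*K1)

norm_KL = KL[0]**2 + KL[1]**2
norm_KS = KS[0]**2 + KS[1]**2
overlap = KL[0] * KS[0] + KL[1] * KS[1]   # <KS|KL>

print(f"|KL|^2 = {norm_KL:.12f}")
print(f"|KS|^2 = {norm_KS:.12f}")
print(f"<KS|KL> = {overlap:.6f}")   # nonzero: the physical states are not orthogonal
```

If CP were exact (epsilon = 0) the overlap would vanish and KL, KS would coincide with the pure CP eigenstates; the small nonzero overlap, 2*epsilon/(1 + epsilon^2), is exactly the indirect CP violation being discussed.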

      • However this was not really a confirmation of the Standard Model because the CP violation occurring in the SM is really hard to calculate: it involves quark/W boson loops which must have contributions from all three generations of quarks (specifically including the top quark!). These so-called penguin diagrams [wikipedia.org] (blame the name on John Ellis' dart-playing skills!) are really hard to calculate - at least to the accuracy needed for CP violation in kaons. Kaons must decay through a weak interaction because only the weak interaction can change the strange quark into an up quark, which is needed for the decay to pions. However there is also a strong component to the decay.

        Strong (QCD) processes are really hard to calculate because perturbation theory does not work for them (the interaction is far too strong). One approach to solve this is lattice QCD, which literally simulates all the colour (QCD) fields on a 4D grid of space-time points. However this is really CPU-intensive, so only small grids can be simulated. This is not too bad if you have a strong process because, being 'strong', it happens quickly in a small region. However the weak part of the decay occurs more slowly over a larger area. What the authors seem to have done is overcome this problem of simulating both weak and strong forces in the same decay, which raises the prospect of accurate calculations of the CP violation in kaon decays - something that has never been possible before. For the technically minded, this paper calculates the Isospin=2 decay amplitude (A_2) whose phase shift, relative to the isospin 0 amplitude (A_0), is what makes direct CP violation visible - it's a really interesting paper - at least if you have ever been involved in kaon physics!

      • by Brannoncyll ( 894648 ) on Tuesday May 29, 2012 @09:31AM (#40142079)

        Oops- apologies for the empty post!

        Disclaimer - I am an author on the paper.

        Your comment about the weak interaction occurring over large distances is not correct - the weak interaction scale is ~90 GeV, which is much much higher than the hadronic energy scale ~1 GeV. In lattice calculations, where the interaction scales are on the order of femtometres, the weak interactions can be simulated to very high accuracy (sub-1%) using simply a point-like vertex. Due to the separation of scales, the actual weak component of the calculation can be completely separated out and calculated using standard perturbative techniques - the hard part has always been the calculation of the strong interaction component. While perturbative calculations just take a few guys a couple of months to sort out the factors of 2, the lattice calculation takes many months to run on state-of-the-art supercomputers and combines techniques developed over 40 years of work.
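For readers wondering why a point-like vertex is good enough here: this is the standard effective-field-theory argument (a textbook sketch, not taken from the paper). At momentum transfers q^2 far below M_W^2, the W propagator collapses to a constant, turning W exchange into a four-quark contact interaction:

```latex
\frac{g^2}{q^2 - M_W^2}
\;\xrightarrow{\;|q^2| \ll M_W^2\;}\;
-\frac{g^2}{M_W^2}
\qquad\Longrightarrow\qquad
\mathcal{L}_{\mathrm{eff}} \sim \frac{G_F}{\sqrt{2}}\,
\left(\bar{u}\,\gamma^\mu(1-\gamma^5)\,s\right)
\left(\bar{d}\,\gamma_\mu(1-\gamma^5)\,u\right),
\qquad
\frac{G_F}{\sqrt{2}} = \frac{g^2}{8\,M_W^2}
```

This separation of scales is why the weak part can be handled perturbatively, leaving only the strong (hadronic) matrix element of the resulting operator for the lattice.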

  • Does this calculation account for the existence of
    Dark Matter
    Dark Energy
    The Dark Side of the Force
    The Dark Side of the Moon

  • How much is that in bitcoins?

    • How much is that in bitcoins?

      That depends on the current difficulty to compute a block, and various other factors, making it a poor unit of measure.

      So, let's see what it is in more familiar terms: pLC
      54 million processor hours @ ~371 million flops @ 32 bits per instruction, so....
      ... about 437.23 (printed) Libraries of Congress worth of data moved across the chips...
      ... but BlueGene/P is a cluster so its speed could have been different from the 2008 speed I used. Good enough for a ballpark estimate like this though.

  • by jbeaupre ( 752124 ) on Tuesday May 29, 2012 @02:44AM (#40139977)

    "Honey, what's the matter?"

    "You know!"

    No, I don't. But maybe a team of scientists using one of the most powerful computers on Earth can figure out what the heck is the matter with you.

  • why we, and everything else we observe today, are made of matter and not anti-matter

    Call me crazy, but I bet that if we and everything we observed were made of anti-matter, we would just call it "matter". :p

    Seriously, though, doesn't it have to be one or the other (since a mix will lead to annihilation)? I'm assuming the real question is why what we call "matter" managed to beat out anti-matter instead of a balance of both kinds being made at the beginning, which would then annihilate.

    DNRTFA.

    • It's not just that. The proportion of matter to photons is pretty respectable. If it were just a matter of a few atoms left over after all the antimatter was annihilated the universe would be much sparser and less interesting.
  • The calculation in the study required 54 million processor hours on the IBM BlueGene/P supercomputer at Argonne National Laboratory, the equivalent of 281 days of computing with 8,000 processors.

    And yet the entire article does not contain a single equation, much less a link to the paper. I am disappoint.

    • The calculation in the study required 54 million processor hours on the IBM BlueGene/P supercomputer at Argonne National Laboratory, the equivalent of 281 days of computing with 8,000 processors.

      And yet the entire article does not contain a single equation, much less a link to the paper. I am disappoint.

      Here is a link to our paper [inspirehep.net]. I'm sure you will be satisfied with the number of equations :)

  • If it takes 54 million processor hours to compute it, how long is it going to take for scientists to EXPLAIN it? And who is going to check the results?
  • "...54 million processor hours on the IBM BlueGene/P supercomputer at Argonne National Laboratory, the equivalent of 281 days of computing with 8,000 processors..."

    Please, please tell me that the answer was 42.

    Now how long will it take for us to compute the question?

  • Maybe they can compute wtf dark energy is.
  • While the authors (as they always do) consider this landmark, I was unable to find any comment on their letter or the preprint (apparently this [lanl.gov]) in the usual places. This could be in part because it is a) not 'real' and b) doesn't have the words 'Higgs' or 'superluminal neutrino' in the title.

  • We're already made of anti-matter. Except that, since we're here, and we're making the rules and naming things, we call it matter and all the opposite stuff is anti-matter.

    I thought that was obvious.
