
Exploiting Network Captures For Truer Randomness

First time accepted submitter ronaldm writes "As a composer who uses computers for anything and everything from engraving to live performance projects, it's periodically of some concern that computers do exactly what they're supposed to do — what they're told. Introducing imperfections into music to make it sound more 'natural' is nothing new, yet it still troubles me that picking up random data from /dev/random to do this is, well, cheating. It's not random. It bugs me. So, short of bringing in and using an atomic source, here's a way to embrace natural randomness — and bring your packet captures to life!"
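A minimal sketch of the "humanizing" idea the summary describes, assuming nothing from the linked article: pull bytes from the kernel entropy pool and map them to small per-note timing offsets. The function name and the +/-10 ms scale are made up for illustration (Python):

    import os

    def timing_offsets(n_notes, max_ms=10.0):
        # One byte per note; os.urandom() reads the same kernel pool that
        # backs /dev/urandom. Map each byte 0..255 to roughly -10..+10 ms.
        raw = os.urandom(n_notes)
        return [(b - 128) / 128.0 * max_ms for b in raw]

    print(timing_offsets(8))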
  • computers do exactly what they're supposed to do — what they're told.

    90% of my day job is a bunch of engineers standing around scratching our heads trying to brainstorm ways to figure out what the hell is going on with our system. We don't even know what it is doing, let alone being able to tell it what to do.

    • That's because you're talking to software (or even firmware) that has hidden/obfuscated routines (i.e. you don't have the source). If it's open all the way, you can track down exactly what is happening and even fix it, or it can expose a flaw in the input you have provided. Either way, it's much easier to solve the issue.
  • Random (Score:5, Insightful)

    by somersault ( 912633 ) on Saturday November 05, 2011 @05:51PM (#37961042) Homepage Journal

    The imperfections in music aren't perfectly random either, so what's the big deal?

    • Re:Random (Score:5, Insightful)

      by PopeRatzo ( 965947 ) * on Saturday November 05, 2011 @06:40PM (#37961478) Journal

      The imperfections in music aren't perfectly random either, so what's the big deal?

      Most insightful comment on this story. Period.

      Even if we could get perfect randomness in our art, it wouldn't really matter because the humans who see it or hear it will just try to impose some order on that randomness. It's what we do.

      Instead of randomness, what I seek to add to my sounds in the music I make is complexity. That's what makes for a rich sound.

      For example, if you look at the harmonics in a struck piano string or plucked guitar or bowed violin, they appear at predictable places. Now look at the harmonics in a free reed instrument, such as a chromatic harmonica. All sorts of weird places, strange ratios. It's what gives the chromatic such a distinctive, heart-rending timbre. Listen to the album Affinity by Bill Evans and Toots Thielemans and you can see why Evans decided to record his masterwork with a "trivial" instrument like the chromatic harp. It's basically a shaped noise generator with pitch.

      Similarly, listen to the digital sound used in "Sky Saw" on Brian Eno's Another Green World album. A simple waveform made extremely complex using god knows what filthy circuitry, and it feels like someone is sticking the motor from a pair of hair clippers up your butt (not that I would know what that feels like, since I would never, ever do such a thing since I turned 40).

      It can be easy or hard using pseudo-random algorithms in Max/MSP, but it's the complexity that makes the sound do its business. Except when it's simple, like a flute, which is basically a sine wave. Oh never mind. I hate thinking about this stuff. It's a waste of time and I left my days as a theorist behind me. I'll let the young guys like the lad in the article worry about how pure the randomness is in the sounds he uses. It'll keep him occupied until inspiration comes along.
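      To put the harmonics point above in concrete terms, here is a toy synthesis sketch contrasting near-integer partials with inharmonic ones. The reed ratios are invented for illustration, not measured from a real chromatic harmonica (Python with NumPy):

          import numpy as np

          sr, dur, f0 = 44100, 1.0, 220.0
          t = np.linspace(0.0, dur, int(sr * dur), endpoint=False)

          # Struck/plucked string: partials near integer multiples of f0.
          string_tone = sum(np.sin(2 * np.pi * f0 * k * t) / k for k in range(1, 8))

          # Free reed, very loosely: partials at non-integer ratios (made up here).
          reed_tone = sum(np.sin(2 * np.pi * f0 * r * t) / i
                          for i, r in enumerate([1.0, 2.1, 3.3, 4.7, 6.2], start=1))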

      • And there's nothing to substitute for the variations in timing that come from humans playing the instruments.

        Whether it's Sly and Robbie or George Shearing (sorry, new music fans - I really don't like most modern crap), it's that slight movement ahead of or behind the beat, and the control of it, that adds emotion and a certain thrill to a tune.

        On the harmonica - give me Larry Adler any day :-P

        • On the harmonica - give me Larry Adler any day :-P

          Of course, Adler is terrific. The album of Gershwin tunes he made with Sir George Martin producing is just spectacular (especially "But Not for Me" with vocal by Elvis Costello - yes, you read that right).

          But you have got to hear Hendrik Meurkens. He's a German (or maybe Belgian?) chromatic harmonica player who is wonderful. I have to be careful how much I listen to his recordings or I'm liable to toss my rather expensive Gregoire Maret chrom right out

        • by swalve ( 1980968 )
          In the hands of a musician, however, it is not random.
    • For crypto you indeed need *perfect* randomness, but for music, a pseudorandom generator should surely be enough?

      • I also don't see why pseudo-random wouldn't be enough for all art forms. The difference is so minuscule in that context, after all.
    • The imperfections in music aren't perfectly random either, so what's the big deal?

      http://dilbert.com/strips/comic/2001-10-25/ [dilbert.com]

  • by AuMatar ( 183847 ) on Saturday November 05, 2011 @05:52PM (#37961058)

    The vast majority of traffic is either HTML or email. Very structured data. It's sufficiently random to use for a video game or the like, but it's definitely not random from a cryptography point of view. So you're doing things the hard way with no discernible benefit. Total waste of time.

  • Comment removed (Score:5, Insightful)

    by account_deleted ( 4530225 ) on Saturday November 05, 2011 @05:55PM (#37961080)
    Comment removed based on user account deletion
    • Well, to be fair, he's taking the packet checksums, but in theory those could be predictable as well. They probably wouldn't be "ordered", however.
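      For reference, "taking the packet checksums" looks roughly like this with a capture library; scapy and the capture filename are assumptions here, not details from TFA (Python):

          from scapy.all import rdpcap, IP  # assumes the scapy package

          packets = rdpcap("capture.pcap")  # hypothetical capture file
          checksums = [p[IP].chksum for p in packets if IP in p]
          # A checksum is a deterministic function of mostly structured
          # packet contents, so this stream is far from uniform.
          print(checksums[:10])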

      • by gweihir ( 88907 )

        The packet checksums are about as non-random as you can get. There is no timing/jitter/... at all in these! This is really stupid. Even /dev/urandom is of far, far superior quality.

        I strongly advise the OP to actually try to understand the issue before posting such utterly clueless nonsense.

  • /dev/random (Score:3, Informative)

    by Anonymous Coward on Saturday November 05, 2011 @05:56PM (#37961096)

    This seems like a fairly lame variant of the environmental entropy gathering which *is* what /dev/random does...

    • Mod parent up (Score:4, Interesting)

      by impaledsunset ( 1337701 ) on Saturday November 05, 2011 @06:05PM (#37961186)

      /dev/random is already gathering environmental entropy from hardware sources, and (unless you're running it on a virtual machine) it should produce data with good entropy that's truly random and is not coming from a pseudo-RNG algorithm.

      Now, of course, if you XOR it with the network data you might increase entropy, but if it happens that /dev/random already uses that data, you're not gaining anything, and may in fact make things worse.

      But, please, if you think that /dev/random isn't providing data that's random enough, suggestions and patches would be welcome. Even if they don't get accepted in the mainline kernel, you can still distribute them.

      Another issue: I'd encrypt the data from the network source or XOR it with a pseudo RNG, because otherwise you might be leaking sensitive data through your "random" numbers.
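      A sketch of that masking step, with os.urandom standing in for the pseudo-RNG stream and the network bytes assumed to be collected already (Python):

          import os

          def mask_network_bytes(network_bytes):
              # XOR with an equal-length unpredictable mask so payload
              # fragments from the capture can't leak through the output.
              mask = os.urandom(len(network_bytes))
              return bytes(a ^ b for a, b in zip(network_bytes, mask))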

      • by Dahamma ( 304068 )

        Another issue: I'd encrypt the data from the network source or XOR it with a pseudo RNG, because otherwise you might be leaking sensitive data through your "random" numbers.

        I bet everyone was wondering why all of his music sounded like bad Internet porn videos lately...

    • by gweihir ( 88907 )

      Indeed. And of much, much lower quality. The OP is a clueless hack.

  • I recall reading that /dev/random will pull from any system modules that are capable of being noisy, like radios or network equipment. It would make sense, too. Also, network packets are not a very good source of entropy; atmospheric noise from a radio is. Network packets carry structured data, often in the form of English text.
    • by gweihir ( 88907 )

      It does. And a simple "man 4 random" will give you that information. It seems the OP could not even be bothered to do that before posting his clueless BS.

  • by nten ( 709128 ) on Saturday November 05, 2011 @05:59PM (#37961122)

    The lavarnd.org folks have all the source you need and a reference implementation that is literally a webcam stuffed in a dark can. When you can get such high-quality entropy for less than US $30, it seems like anything else must just be for fun. Some opaque tape over the camera on many laptops should work fine too.
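    The shape of that idea in a few lines, assuming OpenCV for the webcam; the real LavaRnd processing is considerably more involved than a single hash (Python):

        import hashlib
        import cv2  # assumes the opencv-python package

        cap = cv2.VideoCapture(0)  # lens taped over or stuffed in a dark can
        ok, frame = cap.read()
        cap.release()
        if ok:
            # The sensor's dark-frame noise is the entropy; hashing whitens it.
            print(hashlib.sha256(frame.tobytes()).hexdigest())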

  • by Dahamma ( 304068 ) on Saturday November 05, 2011 @06:01PM (#37961144)

    /dev/random on most OSes these days uses an entropy pool generated from a bunch of different sources - timing of keystrokes, mouse movements, disk seeking - and yes, network information. Then it uses cryptographic hashes on those.

    Your implementation basically uses one of those entropy sources, and then doesn't even hash it...
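    A toy version of the pool-plus-hash design described above; real kernel pools do far more bookkeeping (per-source entropy credit, blocking, reseeding), so treat this strictly as a sketch (Python):

        import hashlib
        import time

        class ToyPool:
            def __init__(self):
                self._state = b"\x00" * 32

            def mix(self, sample):
                # Hash-mixing means feeding in attacker-known bytes can
                # never reduce the entropy already in the pool.
                self._state = hashlib.sha256(self._state + sample).digest()

            def read(self, n):
                out = b""
                while len(out) < n:
                    self.mix(time.monotonic_ns().to_bytes(8, "little"))
                    out += hashlib.sha256(b"out" + self._state).digest()
                return out[:n]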

    • by gman003 ( 1693318 ) on Saturday November 05, 2011 @06:49PM (#37961528)

      In brief:

      "The generation of random numbers is too important to be left to chance."

      Anyone trying to create a new random number generator with the intent of producing more-random numbers, without an extensive and specialized education, is guaranteed to fail.

      • by gweihir ( 88907 )

        In addition, the problem is solved and there is absolutely no need for a "new random number generator". None at all. What people consistently get wrong is the use of the RNGs that are already there, which are implemented well and do work well. The RNGs are completely fine.

        What there is also a need for is for people to READ THE F****** DOCUMENTATION before putting complete and utter nonsense up as a "story" on /.!

    /dev/random on most OSes these days uses an entropy pool generated from a bunch of different sources - timing of keystrokes, mouse movements, disk seeking - and yes, network information. Then it uses cryptographic hashes on those.

      Your implementation basically uses one of those entropy sources, and then doesn't even hash it...

      As I remember, OpenBSD used network details to produce entropy, but later stopped, because it allowed a remote attacker to potentially poison the entropy source by carefully sending just the right packets at the right time. For Theo de Raadt, cryptographically secure randomness was only satisfactory when manipulating it required physical access to the machine.

      • by pthisis ( 27352 )

        As I remember, OpenBSD used network details to produce entropy, but later stopped, because it allowed a remote attacker to potentially poison the entropy source by carefully sending just the right packets at the right time. For Theo de Raadt, cryptographically secure randomness was only satisfactory when manipulating it required physical access to the machine.

        Something's wrong or lost in communication here. The entropy pool in a /dev/random implementation is designed so that even if an attacker

        • Something's wrong or lost in communication here. The entropy pool in a /dev/random implementation is designed so that even if an attacker can add a known source of numbers to it, it still doesn't decrease the real entropy in the pool. As long as my entropy estimates are correct, I could let you pick half the bits (or 99% of the bits) going into /dev/random's entropy pool and that still wouldn't help you guess the output.

          Yes, but a server most often has no keyboard or mouse involved. So the machines get the vast majority of their entropy from the network.

          And we're talking about Theo de Raadt here... it doesn't have to be a RATIONAL threat, it just has to be a theoretical one.

          Going to the source [openbsd.org]:

          The OpenBSD kernel uses the mouse interrupt timing, network data interrupt latency, inter-keypress timing and disk IO information to fill an entropy pool.

          So, they do use "network data interrupt latency", but not the time between sequential packets, or packet data, or anything that a remote attacker could control.

          • by pthisis ( 27352 )

            And we're talking about Theo de Raadt here... it doesn't have to be a RATIONAL threat, it just has to be a theoretical one.

            But that's the point: as long as you set its entropy count to zero, it's not even a theoretical threat. It could potentially improve randomness and can't possibly hurt. That's how entropy pools are designed.

            The OpenBSD kernel uses the mouse interrupt timing, network data interrupt latency, inter-keypress timing and disk IO information to fill an entropy pool.

            That makes more sense than i

        • by gweihir ( 88907 )

          There is nothing missing. You can only estimate entropy, and for that you need to make assumptions. What breaks if an attacker controls the network traffic is exactly those assumptions. With the boundary condition that an attacker can control all network traffic, the only valid assumption is an entropy content of exactly zero, so you can drop it from entropy gathering.

          • by pthisis ( 27352 )

            the only valid assumption is an entropy content of exactly zero, so you can drop it from entropy gathering.

            This is the part that's nonsensical. The usual course of action with something that's relatively high volume and probably contributes entropy, but is possibly under attacker control, is to lower the estimated entropy count to zero but continue mixing the source into the pool. The worst-case scenario is no gain (but no loss), but it's likely you get some gain, and it hedges against accidental overestima

            • by gweihir ( 88907 )

              But you cannot get entropy out of the pool if it is there yet estimated as zero! Where reading speed from /dev/random is concerned, this does exactly nothing. It also does exactly nothing for the amount of other entropy you have to gather. So, even though it is hard to understand, you can drop it with no adverse effects, and with a reduction in code complexity on the plus side.

              Entropy gathering is not a guessing game if the quality needs to be high. There is no "hedging" involved when this is done right. The estimate

      • by gweihir ( 88907 )

        /dev/random on most OSes these days uses an entropy pool generated from a bunch of different sources - timing of keystrokes, mouse movements, disk seeking - and yes, network information. Then it uses cryptographic hashes on those.

        Your implementation basically uses one of those entropy sources, and then doesn't even hash it...

        As I remember, OpenBSD used network details to produce entropy, but later stopped, because it allowed a remote attacker to potentially poison the entropy source by carefully sending just the right packets at the right time. For Theo de Raadt, cryptographically secure randomness was only satisfactory when manipulating it required physical access to the machine.

        And he has it exactly right. Even network timing is suspect. Network packet content is almost completely non-random and out as a source. And other sources are suspect as well. For example, keyboard input timing only has a 70ms resolution (I measured this on two different keyboards as scan delay), so it probably gives you less than 1 bit of entropy per key pressed. Mouse movements are better, as you can use the absolute positions, but they still need to be used very conservatively.

  • Confusion... (Score:4, Insightful)

    by Junta ( 36770 ) on Saturday November 05, 2011 @06:03PM (#37961176)

    /dev/random is about as random as you'll get. I presume your issue is that the pool gets exhausted for the given desire. /dev/urandom is your endless supply of 'good-enough' randomness for something like this. If your criticism is that it isn't really 'random', it's no less random than your pcap stream. Besides, given the application, 'true' randomness will not be distinguishable from good pseudo-randomness.

    If you wanted to be random and artistic, then maybe point a webcam at a fireplace or something as an entropy source.
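    For the 'good-enough' route the code really is a one-liner; os.urandom draws from the same pool as /dev/urandom and never blocks (Python):

        import os

        noise = os.urandom(4096)  # portable equivalent of reading /dev/urandom
        with open("/dev/urandom", "rb") as f:  # the Linux device itself
            more_noise = f.read(4096)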

    • Or just grab a computer with a VIA Nano CPU - they have a built-in true random-number generator based on thermal and/or electrical variances inside the processor. They claim "up to 1600 kilobits per second", so it should provide more than enough for music, provided you aren't adding bit-for-bit random noise in real time.

      • Thermal and electrical variances are linked, by virtue of the fact that resistance in the circuit will increase output heat. I'm not saying that the output of the (P)RNG is predictable in any meaningful way, just that it's not really a true random number generator, and it's more than likely indistinguishable from the output of /dev/random.

        If anything, that's not a bad thing though. It means they're both "random enough".
    • Re: (Score:3, Informative)

      by ronaldm ( 966544 )
      I'm going to reply to just the one poster, as explaining this to each /.'er would take rather a long time! :)

      First and foremost, Slashdot (as you know) unfortunately chooses the URL for your particular story. "Truer[sic] Randomness" is not in fact what I'm setting out to somehow magically solve (with my absolute non-background in cryptography etc.). As to why they chose to enter the title of the story as such - I don't know. A bit of sensationalism, perhaps? In any case, I'd originally titled this "Musical
      • Sorry that you feel like your corn flakes have been pissed in, but you can't go blaming this on the editor's bad choice of headlines. Your own submission says "[...] yet it still troubles me that picking up random data from /dev/random to do this is well, cheating. It's not random. It bugs me." Then you go on to describe a mechanism that's far, far less random than /dev/random or any halfway decent pseudo-random number generator.

        Your blog post doesn't actually try to say that the network captures are ra

  • Something is random if you don't have the information to predict it. Distinguishing between "false" and "true" randomness is pointless.

    • False: You don't have the information, but the information could, in principle, be obtained, even if that would require omniscience.
      True: So random that the information needed to predict it does not exist anywhere, even if you had a hypercomputer and knew the position of every particle in the universe down to the limits of uncertainty.
      • I believe you are missing the more important point (most of you in this thread). 'Random' is usually used as short for 'uniformly random', which would ultimately mean that any sequence of draws has an a priori probability equal to that of any other sequence of the same fixed length. Pseudo-random generators do not have this property for long strings, and most external events can't be guaranteed to have this property. The generators can be pretty good, however, something that is often measured by Spectral Tests [sbg.ac.at] (somewh
    • Distinguishing between "false" and "true" randomness is pointless.

      Not really, it's done all the time for many different purposes.

      Take, for example, how computer scientists define it: roughly, a sequence is random if it can't be compressed, that is, any (program+data) that generates it must be at least as large as the sequence itself. This distinguishes between "random" and "not having enough information to predict it": it doesn't matter if it looks random to YOU; if it could in principle be compressed, it's not random.

      That's not pointless hair splitting, it has real conse
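      A rough way to see that distinction in practice: zlib is nothing like a true Kolmogorov-complexity test, but it already separates OS randomness from structured traffic (Python):

          import os
          import zlib

          random_bytes = os.urandom(10000)
          structured = b"GET /index.html HTTP/1.1\r\n" * 400  # stand-in for typical traffic

          for label, data in [("random", random_bytes), ("structured", structured)]:
              ratio = len(zlib.compress(data, 9)) / len(data)
              print(label, round(ratio, 3))  # ~1.0 for random, tiny for structured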

      • No, sorry.

        The problem with your theory is that a truly random source can and will generate compressible data sometimes, or else it isn't a fucking truly random source.

        This is why great minds have coined phrases like "The generation of random numbers is too important to be left to chance" (Robert Coveyou).

        In practice people want certain constraints, such as a guarantee not to generate a sequence of 10000 zeros.

        "For your convenience we have generated a random PIN for you. Your randomly generated PIN is 0000. Please do
        • It's not my theory; maybe you've heard of a guy named Kolmogorov who lived in the last century? I bet the great mind of Robert Coveyou studied a lot of his theory :).

          But, more seriously, of course a random source will output compressible data sometimes. What happens is this: as you collect more output from a truly random source, the probability of it being compressible goes to zero very fast.

          But the point is that it *is* useful to distinguish between "false" and "true" randomness, otherwise it wouldn't be

          • It's not my theory; maybe you heard about a guy named Kolmogorov that lived in the last century?

            You think I'm a noob, doncha?

            What happens is this: as you collect more output from a truly random source, the probability of it being compressible goes to zero very fast.

            Kolmogorov invented new terminology because he knew that the terms 'random' and 'entropy' didn't fit with his work on describing sequences. That's why his theory is called 'Kolmogorov complexity'... complexity being short for 'algorithmic entropy' - not simply entropy, not simply randomness or stochasticity, but specifically algorithmic entropy, aka complexity.

            This may seem like splitting hairs to you, but it isn't, because this is a technical subject. There is a reason that new

            • "Random" and "entropy" are already used in computer science with the meanings you seem to not want them to have (like [wikipedia.org] this [wikipedia.org]). Maybe someone should complain to the president of computer science. (I'm sorry, I couldn't resist. This discussion is too silly.)

      • How compressible is:
        4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4?
        http://xkcd.com/221/ [xkcd.com]

        Interestingly, my original version of this comment (with lots more 4's) threw this error:

        Your comment violated the "postercomment" compression filter. Try less whitespace and/or less repetition.

        Yet another use of the methods that you mention!

  • "As a composer who uses computers for anything and everything from engraving..."

    What kind of "composer" does engraving, and why does he need a random number generator? And yeah, I read TFA, and it had nothing about applications.

  • http://www.random.org/faq/ [random.org]

    Q2.1: How can you be sure the numbers are really random?

    Oddly enough, it is theoretically impossible to prove that a random number generator is really random. Rather, you analyse an increasing amount of numbers produced by a given generator, and depending on the results, your confidence in the generator increases (or decreases, as the case may be). This is explained in more detail on my Statistical Analysis page, which also contains two studies of the numbers generated by RANDOM.ORG

  • Is it "cheating" to use a PRNG when it works totally fine and computerized music that uses it can't be distinguished from "natural" imperfect music? Or is it cheating to use a computer to make music at all, whether it's randomized or not?
  • That explains why my packets disappear when they have too many neighbors.

  • What kind of clown posts these things?!

    Oh... Ronald. I'm sorry dude.
  • Most of my network traffic involves downloading porn.

    Your music is going to come out sounding like a strip club.

  • It is impossible to prove that true randomness exists in the Universe.

    Let U be the universe that you believe in, and let R be the source of true randomness for that universe. Then the universe that you believe in is U(R).

    Let R' be one of the pseudo-random algorithms that is too computationally complex for you to detect. However computationally advanced you are, there will be an infinite number of these.

    It will be impossible for you to prove that the real universe is not one of the U(R'). Occam's razor is on

    • There is no randomness, it is you who must be random.

    • It's impossible to prove anything at all, aside from abstract mathematics and "I think therefore I am" and such. But we shouldn't let philosophy and arguments about human conventions get in the way of the fact that Occam's Razor and the acceptance of unprovable theories are actually incredibly useful.

  • by joe_cot ( 1011355 ) on Sunday November 06, 2011 @01:40AM (#37963486) Homepage
    As a number of commenters have pointed out, /dev/random is actually way more random than what this article suggests doing. If you want stuff that actually is more random, or need a lot more random data, here are some options.
    • Random.org [random.org] provides random data generated from atmospheric radio noise. You can get as much random data as you'd like. Gaming websites download their random data in 5MB chunks to use for card shuffles and dice rolls.
    • HotBits [fourmilab.ch] is a similar idea, but uses radioactive decay instead of radio noise.
    • If you want to do it in-house, you can do so with a smoke detector and a webcam [inventgeek.com]. This was submitted to Slashdot in 2006 [slashdot.org].
    • Finally, if you need a ton of random numbers, and they must be random, you can buy RNG hardware [idquantique.com]

    What do I do? If I don't really care whether it's random, I use the RNG from the programming language I'm using, or /dev/random. If I really, really care that it's random, I download a chunk of data off random.org and either use that for the numbers or use it to seed my RNG. For the most part, anything more than that is overkill.
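    That workflow, sketched; the random.org integer-API parameters below are an assumption from memory of their HTTP interface, so check the current docs before relying on them (Python):

        import random
        import urllib.request

        url = ("https://www.random.org/integers/"
               "?num=64&min=0&max=255&col=1&base=10&format=plain&rnd=new")
        with urllib.request.urlopen(url) as resp:
            true_random = bytes(int(line) for line in resp.read().split())

        random.seed(true_random)  # true-random seed, fast local PRNG afterwards
        print(random.random())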

  • The OP is clueless. /dev/random has full entropy and is random. /dev/urandom is the watered-down version, which still has some entropy in it.

    There is no need for "better randomness". There is need for people to find out what actually exists and is implemented and use it properly.
