
Mathematicians Catch a Pattern By Figuring Out How To Avoid It (quantamagazine.org) 26

We finally know how big a set of numbers can get before it has to contain a pattern known as a "polynomial progression." From a report: A new proof by Sarah Peluse of the University of Oxford establishes that one particularly important type of numerical sequence is, ultimately, unavoidable: It's guaranteed to show up in every single sufficiently large collection of numbers, regardless of how the numbers are chosen. "There's a sort of indestructibility to these patterns," said Terence Tao of the University of California, Los Angeles. Peluse's proof concerns sequences of numbers called "polynomial progressions." They are easy to generate -- you could create one yourself in short order -- and they touch on the interplay between addition and multiplication among the numbers. For several decades, mathematicians have known that when a collection, or set, of numbers is small (meaning it contains relatively few numbers), the set might not contain any polynomial progressions. They also knew that as a set grows it eventually crosses a threshold, after which it has so many numbers that one of these patterns has to be there, somewhere. It's like a bowl of alphabet soup -- the more letters you have, the more likely it is that the bowl will contain words. But prior to Peluse's work, mathematicians didn't know what that critical threshold was. Her proof provides an answer -- a precise formula for determining how big a set needs to be in order to guarantee that it contains certain polynomial progressions. Previously, mathematicians had only a vague understanding that polynomial progressions are embedded among the whole numbers (1, 2, 3 and so on). Now they know exactly how to find them.
  • by wierd_w ( 1375923 ) on Thursday November 28, 2019 @03:30PM (#59467506)

    Would this be useful for finding where a given file's full contents would appear in, say, a memory device driver (think /dev/urandom or /dev/zero) that just linearly spits out counted numbers to infinity from a starting position?

    The contents might be found SOONER than the point where the number itself is counted to! See, for instance, the output from 0 to 20:

    01234567891011121314151617181920

    The number 1234 is found in the first 5 bytes.
    101 is found at 11 bytes.

    etc

    Knowing the "first instance" in the output string of such a regular expanding function, of any arbitrary expression, would let you record it at its lowest possible entropy, right?
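    The "first instance" idea above can be sketched directly. This is a minimal brute-force illustration (the function name and limit are mine, not anything from the article or proof), using 0-based byte offsets rather than the 1-based counts in the comment:

```python
def first_occurrence(pattern: str, limit: int = 100_000) -> int:
    """Return the 0-based byte offset of `pattern` in the stream
    '012345678910111213...' built by concatenating counted numbers,
    or -1 if it is not found within `limit` generated numbers."""
    stream = ""
    for n in range(limit + 1):
        stream += str(n)
        idx = stream.find(pattern)
        if idx != -1:
            return idx
    return -1

print(first_occurrence("1234"))  # 1  -- inside "01234...", long before counting reaches 1234
print(first_occurrence("101"))   # 10 -- the "10" of ten followed by the first "1" of eleven
```

    Note that nothing here predicts the offset in advance; it still has to generate the stream and search, which is the gap the parent comment is asking about.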

    • It would indeed tell you how many numbers you must output from a generator to see a given pattern, assuming you meet the criteria in the proof. Outputting 1s and 0s randomly won't ever give you a 2, unless you map groups of bits to numbers. It would not tell you where the pattern is in the sequence.

      There is a series of results (of which Tao is one of the authors) on the relationship between group addition, multiplication, and entropy, which culminated in the BIW paper, a fundamental paper in extractor theory.

    • Like a compression algorithm using Tupper's self-referential formula?
    • Not useful for compression.

      It has been proven for a very long time that the optimal coding of a symbol from any alphabet uses log2(1/p) bits, where p is the symbol's probability.
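      That bound (the Shannon limit) is easy to compute for any concrete message. A small sketch, with names of my own choosing:

```python
import math
from collections import Counter

def optimal_bits(data: bytes) -> float:
    """Shannon bound: sum over symbols of count * log2(1/p),
    the fewest bits any lossless symbol code can use on average."""
    counts = Counter(data)
    n = len(data)
    # p = c / n for each symbol, so log2(1/p) = log2(n / c)
    return sum(c * math.log2(n / c) for c in counts.values())

msg = b"abracadabra"
print(round(optimal_bits(msg), 2))  # about 22.44 bits for the 11-byte message
```

      No scheme based on indexing into a fixed output stream can beat this bound on average, which is why the trick above can't work as general compression.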
    • No, because you don't need some sort of an infinite random source for this. An 8-bit byte only describes 256 values. Hence, you can describe all the values in 256 bytes.

      An index into a 256 byte array is still going to need to be 8 bits long, so having an index into a minimal, fixed size array is still going to result in an identical file size as what you started with (and if each byte is at its own array index, you'll just wind up with an identical file).

      And that is minimal. Adding extra complexity of a
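      The pigeonhole argument above can be made concrete with a toy "library" of all 256 byte values (a hypothetical table, just for illustration):

```python
# A lookup table containing every possible byte value once.
table = bytes(range(256))

data = b"hello"
# "Compressing" each byte to its index in the table...
indices = bytes(table.index(b) for b in data)
# ...reproduces the input exactly: each index needs a full 8 bits,
# so the "compressed" form is the same size as the original.
assert indices == data
print(len(indices), len(data))
```

      Any scheme that can address all 256 values needs at least 8 bits per index, so nothing is saved.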

    • As an esoteric exercise, perhaps, but not in any practical sense.

      Seeded pseudo-random number generators (such as your counter) have regular patterns and will not contain a useful set of data that can be used as a library. Unless the library is built from the original data, the storage requirements of all the library indexes and compressed data will almost always exceed the original data.

  • The universe contains a pattern of repeating truths...

    As "X" grows longer, the probability of "Y" approaches 1

    Godwin:

    • X = "an online discussion"
    • Y = "a comparison involving Nazis or Hitler"

    Peluse:

    • X = "a collection of numbers"
    • Y = "a polynomial progression"
