This week I am giving this year's Simons lectures at MIT together with David Donoho. (These lectures, incidentally, are endowed by Jim Simons, who was mentioned in some earlier discussion here.) While preparing these lectures, it occurred to me that I may as well post my lecture notes on this blog, since this medium is essentially just an asynchronous version of a traditional lecture series, and the hypertext capability is in some ways more convenient and informal than, say, LaTeX slides.

I am giving three lectures, each expounding on some aspects of the theme "the dichotomy between structure and randomness", which I also spoke about (and wrote about) for the ICM last August. This theme seems to pervade many of the areas of mathematics that I work in, and my lectures aim to explore how this theme manifests itself in several of these. In this, the first lecture, I describe the dichotomy as it appears in Fourier analysis and in number theory. (In the second, I discuss the dichotomy in ergodic theory and graph theory, while in the third, I discuss PDE.)

The dichotomy between structure and randomness seems to apply in circumstances in which one is considering a high-dimensional class of objects (e.g. sets of integers, functions on a space, dynamical systems, graphs, solutions to PDE, etc.). For sake of concreteness, let us focus today on sets of integers (later lectures will focus on other classes of objects). There are many types of objects in these classes; however, one can broadly classify them into three categories:

  • Structured objects - objects with a high degree of predictability and algebraic structure. A typical example are the odd integers A = {…,-3,-1,1,3,5,…}. Note that if some large number n is known to lie in A, this reveals a lot of information about whether n+1, n+2, etc. will also lie in A. Structured objects are best studied using the tools of algebra and geometry.
  • Pseudorandom objects - the opposite of structured; these are highly unpredictable and totally lack any algebraic structure. A good example is a randomly chosen set B of integers, in which each element n lies in B with an independent probability of 1/2. (One can imagine flipping a coin for each integer n, and defining B to be the set of n for which the coin flip resulted in heads.) Note that if some number n is known to lie in B, this conveys no information whatsoever about the relationship of n+1, n+2, etc. with respect to B. Pseudorandom objects are best studied using the tools of analysis and probability.
  • Hybrid sets - sets which exhibit some features of structure and some features of pseudorandomness. A good example is the primes P = {2,3,5,7,…}. The primes have some obvious structure in them: for instance, the prime numbers are all positive, they are all odd (with one exception), they are all adjacent to a multiple of six (with two exceptions), and their last digit is always 1, 3, 7, or 9 (with two exceptions). On the other hand, there is evidence that the primes, despite being a deterministic set, behave in a very pseudorandom or uniformly distributed manner. For instance, from the Siegel-Walfisz theorem we know that the last digit of large prime numbers is uniformly distributed in the set {1,3,7,9}; thus, if N is a large integer, the number of primes less than N ending in (say) 3, divided by the total number of primes less than N, is known to converge to 1/4 in the limit as N goes to infinity (see the quick numerical sketch just after this list). In order to study hybrid objects, one needs a large variety of tools: one needs tools such as algebra and geometry to understand the structured component, one needs tools such as analysis and probability to understand the pseudorandom component, and one needs tools such as decompositions, algorithms, and evolution equations to separate the structure from the pseudorandomness. (More on this in later talks.)
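
As a quick empirical illustration of this last-digit statistic, here is a short Python sketch of my own, not part of the lecture: it tallies the final digits of the primes below a million, assuming the sympy library for prime generation (any sieve would do).

    from collections import Counter
    from sympy import primerange  # assumed available; any prime sieve works

    N = 10**6
    # Every prime above 7 ends in 1, 3, 7 or 9; tally these last digits.
    counts = Counter(p % 10 for p in primerange(11, N))
    total = sum(counts.values())
    for d in (1, 3, 7, 9):
        print(d, counts[d] / total)  # each ratio should be close to 0.25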

A recurring question in many areas of analysis is the following: given a specific object (such as the prime numbers), can one determine precisely what the structured components are within the object, and how pseudorandom the remaining components of the object are? One reason for asking this question is that it helps one compute various statistics (averages, sums, integrals, correlations, norms, etc.) of the object being studied. For instance, one can ask for how many twin pairs {n, n+2}, with n between 1 and N, one can find within a given set. In the structured set A given above, the answer is roughly N/2. For the random set B given above, the answer is roughly N/4; thus one sees that while A and B have exactly the same density (namely, 1/2), their statistics are rather different due to the fact that one is structured and one is random. As for the prime numbers, nobody knows what the answer is (though it is conjectured to be roughly 1.32 \frac{N}{\log^2 N}), because we do not know enough yet about the pseudorandomness of the primes. On the other hand, the parity structure of the primes is enough to show that the number of adjacent pairs {n, n+1} in the primes is exactly one: {2,3}.
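
To see concretely how the statistics of A and B diverge, here is a small Python sketch of my own comparing the twin-pair counts of the two sets; the random set B is resampled on each run, so its count fluctuates around N/4.

    import random

    N = 10**6
    A = set(range(1, N + 1, 2))  # structured: the odd integers up to N
    B = {n for n in range(1, N + 1) if random.random() < 0.5}  # pseudorandom, density 1/2

    def twin_pairs(S):
        # count the pairs {n, n+2} with both elements in S
        return sum(1 for n in range(1, N - 1) if n in S and n + 2 in S)

    print(twin_pairs(A), N / 2)  # structured set: roughly N/2 pairs
    print(twin_pairs(B), N / 4)  # random set: roughly N/4 pairs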

The problem of determining exactly what the structured and pseudorandom components are of any given object is still largely intractable. However, what we have learnt in many cases is that we can at least show that an arbitrary object can be decomposed into some structured component and some pseudorandom component. Also there is often an orthogonality property (or dichotomy): if an object is orthogonal to (or has small correlation with) structured objects, then it is necessarily pseudorandom, and vice versa. Finally, we are sometimes lucky enough to be able to classify all the structured objects which are relevant for any given problem (e.g. computing a particular statistic). In such cases, one merely needs (in principle) to compute how the given object correlates with each member in one's list of structured objects in order to determine what the desired statistic is. This is often simpler (though still non-trivial) than computing the statistic directly.

To illustrate these general principles, let us focus now on a specific area in analytic number theory, namely that of finding additive patterns in the prime numbers {2, 3, 5, 7, …}. Despite centuries of progress on these problems, many questions are still unsolved, for instance:

  • Twin prime conjecture: there are infinitely many pairs n, n+2 which are both prime.
  • Sophie Germain conjecture: there are infinitely many pairs n, 2n+1 which are both prime.
  • Even Goldbach conjecture: every even number N greater than 2 is the sum of two primes.

On the other hand, we do have some positive results:

  • Vinogradov's theorem: every sufficiently large odd number N is the sum of three primes.
  • van der Corput's theorem: there are infinitely many arithmetic progressions n, n+r, n+2r of length three in the primes.
  • Green-Tao theorem: the primes contain arbitrarily long arithmetic progressions.

As a general rule, it appears that it is feasible (after non-trivial effort) to find patterns in the primes involving two or more degrees of freedom (as described by the parameters n, r in the above examples), but we still do not have the proper technology for finding patterns in the primes involving only one degree of freedom n. (This is of course an oversimplification; for instance, the pattern n, n+2, n+r, n+r+2 has two degrees of freedom, but finding infinitely many of these patterns in the primes is equivalent to the twin prime conjecture. If however one makes a suitable non-degeneracy assumption, one can make the above claim more precise.)

One useful tool for establishing some (but not all) of the above positive results is Fourier analysis (which in this context is also known as the Hardy-Littlewood circle method). Rather than give the textbook presentation of that method here, let us try to motivate why Fourier analysis is an essential feature of many of these problems from the perspective of the dichotomy between structure and randomness, and in particular viewing structure as an obstruction to computing statistics which needs to be understood before the statistic can be accurately computed.

To treat many of the above questions concerning the primes in a unified manner, let us consider the following general setting. We consider k affine-linear forms \psi_1, \ldots, \psi_k: {\Bbb Z}^r \to {\Bbb Z} on r integer unknowns, and ask

  • Question: do there exist infinitely many r-tuples \vec n = (n_1,\ldots,n_r) \in {\Bbb Z}_+^r of positive integers such that \psi_1(\vec n), \ldots, \psi_k(\vec n) are simultaneously prime?

For instance, the twin prime conjecture is the case when k=2, r=1, \psi_1(n) = n, and \psi_2(n) = n+2; van der Corput's theorem is the case when k=3, r=2, and \psi_j(n_1,n_2) = n_1 + j n_2 for j=0,1,2; and so forth.

Because of the obvious structures in the primes, the answer to the above question can be no. For instance, since all but one of the primes are odd, we know that there are not infinitely many patterns of the form n, n+1 in the primes, because it is not possible for n, n+1 to both be odd. More generally, given any prime q, we know that all but one of the primes are coprime to q. Hence, if it is not possible for \psi_1(\vec n), \ldots, \psi_k(\vec n) to all be coprime to q for infinitely many values of \vec n, then the answer to the question is no, and we say that there is an obstruction at q. For instance, the pattern n, n+1 has an obstruction at 2. The pattern n, n+2, n+4 has no obstruction at 2, but has an obstruction at 3, because it is not possible for n, n+2, n+4 to all be coprime to 3. And so forth.
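
Since the forms are affine-linear, checking for an obstruction at q only requires testing every residue class of the unknowns mod q. Here is a minimal Python sketch of this test, for forms in a single unknown n; the function name is my own invention.

    def has_obstruction_at(q, forms):
        # True if for every residue of n mod q at least one form is
        # divisible by q, so the forms can never all be coprime to q.
        return all(any(f(n) % q == 0 for f in forms) for n in range(q))

    print(has_obstruction_at(2, [lambda n: n, lambda n: n + 1]))  # True
    print(has_obstruction_at(2, [lambda n: n, lambda n: n + 2]))  # False
    print(has_obstruction_at(3, [lambda n: n, lambda n: n + 2, lambda n: n + 4]))  # True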

Another obstruction comes from the trivial observation that the primes are all positive. Hence, if it is not possible for \psi_1(\vec n), \ldots, \psi_k(\vec n) to all be positive for infinitely many values of \vec n, then we say that there is an obstruction at infinity, and the answer to the question is again no in this case. For instance, for any fixed N, the pattern n, N-n can only occur finitely often in the primes, because there are only finitely many n for which n, N-n are both positive.

It is conjectured that these local obstructions are the only obstructions to solvability of the above question. More precisely, we have

  • Dickson's conjecture: If there are no obstructions at any prime q, and there are no obstructions at infinity, then the answer to the above question is yes.

This conjecture would imply the twin prime and Sophie Germain conjectures, as well as the Green-Tao theorem; it also contains the Hardy-Littlewood prime tuples conjecture as a special case. There is a quantitative version of this conjecture which predicts a precise count as to how many solutions there are in a given range, and which would then also imply Vinogradov's theorem, as well as Goldbach's conjecture (for sufficiently large N); see this paper for further discussion. As one can imagine, this conjecture is still largely unsolved; however, there are many important special cases that have now been established, several of which via the Hardy-Littlewood circle method.

One can view Dickson's conjecture as an impossibility statement: that it is impossible to find any other obstructions to solvability for linear patterns in the primes than the obvious local obstructions at primes q and at infinity. (It is also a good example of a local-to-global principle: that local solvability implies global solvability.) Impossibility statements have always been very difficult to prove - one has to locate all possible obstructions to solvability, and eliminate each one of them in turn. In particular, one has to exclude various exotic conspiracies among the primes to behave in an unusually structured manner that somehow manages to always avoid all the patterns that one is seeking within the primes. How can one disprove a conspiracy?

To give an example of what such a conspiracy might look like, consider the twin prime conjecture, that of finding infinitely many pairs n, n+2 which are both prime. This pattern encounters no obstructions at primes q or at infinity, and so Dickson's conjecture predicts that there should be infinitely many such patterns. In particular, there are no obstructions at 3, because primes can equal 1 or 2 mod 3, and it is possible to find pairs n, n+2 which also have this property. But suppose that it transpired that all but finitely many of the primes ended up being 2 mod 3. (Note that if this were the case, the twin prime conjecture would fail, since n and n+2 cannot both be 2 mod 3.) From looking at tables of primes this seems to be unlikely, but it is not immediately obvious how to disprove it; it could well be that once one passes some enormous threshold, say 10^{100}, all the primes from that point onwards are 2 mod 3. Fortunately, we have Dirichlet's theorem, which guarantees infinitely many primes equal to a mod q whenever a, q are coprime, and so we can rule out this particular type of conspiracy. (This does strongly suggest, though, that knowledge of Dirichlet's theorem is a necessary but not sufficient condition in order to solve the twin prime conjecture.) But perhaps there are other conspiracies that one needs to rule out also?
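
As an empirical aside (my own check, again assuming sympy), one can watch Dirichlet's theorem at work here: the primes below a million split almost evenly between the two classes 1 mod 3 and 2 mod 3.

    from collections import Counter
    from sympy import primerange  # assumed available

    counts = Counter(p % 3 for p in primerange(5, 10**6))
    print(counts[1], counts[2])  # the two residue classes are roughly equal in size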

To look for other conspiracies that one needs to eliminate, let us rewrite the conspiracy "all but finitely many of the primes are 2 mod 3" in the more convoluted format

0.6 < \{ \frac{1}{3} p \} < 0.7 for all but finitely many primes p

where \{x\} is the fractional part of x. This type of conspiracy can now be generalised; for instance, consider the statement

0 < \{ \sqrt{2} p \} < 0.01 for all but finitely many primes p. (*)

Again, such a conspiracy seems very unlikely - one would expect these fractional parts to be uniformly distributed between 0 and 1, rather than concentrating in the interval [0, 0.01] - but it is hard to rule this conspiracy out a priori. And if this conspiracy (*) were in fact true, then the twin prime conjecture would be false, as can be quickly seen by considering the identity

\{ \sqrt{2} (n+2) \} - \{ \sqrt{2} n \} = 2 \sqrt{2} \hbox{ mod } 1,

which forbids the two fractional parts on the left-hand side from simultaneously falling in the interval [0, 0.01]. Thus, in order to solve the twin prime conjecture, one must rule out (*). Fortunately, it has been known since the work of Vinogradov that \{ \sqrt{2} p \} is in fact uniformly distributed in the interval [0,1], and more generally that \{ \alpha p \} is uniformly distributed in [0,1] whenever \alpha is irrational. Indeed, by Weyl's criterion, this is equivalent to the exponential sum estimate

\sum_{p<N} e^{2\pi i \alpha p} = o\big( \sum_{p < N} 1 \big),

and we now see the appearance of Fourier analysis in this subject.
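
For the curious, here is a small numerical illustration of this estimate (of course not a proof, and again assuming sympy for prime generation): the normalised Weyl sum over the primes below a million comes out much smaller than 1.

    import cmath, math
    from sympy import primerange  # assumed available

    N = 10**6
    alpha = math.sqrt(2)
    primes = list(primerange(2, N))
    # the exponential sum over primes, normalised by pi(N); equidistribution
    # of {alpha p} corresponds to this quantity tending to zero
    S = sum(cmath.exp(2j * math.pi * alpha * p) for p in primes)
    print(abs(S) / len(primes))  # much smaller than 1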

One can rather easily concoct an endless stream of further conspiracies, each of which could contradict the twin prime conjecture; this is one of the reasons why this conjecture is considered so difficult. Let us thus leave this conjecture for now and consider some two-parameter problems. Consider for instance the problem of finding infinitely many patterns of the form n, n+r, n+2r+2 (i.e. arithmetic progressions of length 3, but with the last element shifted by 2). Once again, the conspiracy (*), if true, would obstruct solvability for this pattern, due to the easily verified identity

\{ \sqrt{2} n \} - 2 \{ \sqrt{2} (n+r) \} + \{ \sqrt{2} (n+2r+2) \} = 2 \sqrt{2} \hbox{ mod } 1,

which is related to the fact that the function \sqrt{2} n has a vanishing second derivative. (Note that the same conspiracy does not obstruct solvability of an unmodified arithmetic progression n, n+r, n+2r. This is a special property of arithmetic progressions, which most other patterns do not have: namely, that arithmetic progressions tend to exist both in structured objects and in pseudorandom objects, or in hybrids of the two. This is why results about arithmetic progressions have tended to be easier to establish than those about more general patterns, as one does not need to know as much about the structured and random components of the set in which one is looking for progressions.)

More generally, we can see that if the primes correlate in some unusual way with a linear phase e^{2\pi i \alpha p}, then this is likely to bias or distort the number of patterns {n, n+r, n+2r+2} in a significant manner. However, thanks to Fourier analysis, we can show that these Fourier conspiracies are in fact the only obstructions to counting this type of pattern. Very roughly, one can sketch the reason for this as follows. Firstly, it is helpful to create a counting function for the primes, namely the von Mangoldt function \Lambda(n), defined as \log p whenever n is a power of a prime p, and 0 otherwise. This rather strange-looking function is actually rather natural, because of the identity

\sum_{d|n} \Lambda(d) = \log n

for all positive integers n; this identity is a restatement of the fundamental theorem of arithmetic, and in fact defines the von Mangoldt function uniquely. The problem of counting patterns {n, n+r, n+2r+2} is then roughly equivalent to the task of computing sums such as

\sum_n \sum_r \Lambda(n)\, \Lambda(n+r)\, \Lambda(n+2r+2) \qquad (**)

where I shall be intentionally vague as to what range the variables n, r will be summed over.
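
Before continuing, here is a quick Python check of the divisor-sum identity \sum_{d|n} \Lambda(d) = \log n for a sample n; factorint is from sympy, and the helper name mangoldt is my own.

    import math
    from sympy import factorint  # assumed available

    def mangoldt(n):
        # Lambda(n): log p if n is a power of a single prime p, else 0
        f = factorint(n)
        return math.log(next(iter(f))) if len(f) == 1 else 0.0

    n = 360
    lhs = sum(mangoldt(d) for d in range(1, n + 1) if n % d == 0)
    print(lhs, math.log(n))  # the two values agree up to rounding

We have the Fourier inversion formula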

\Lambda(n) = \int_0^1 e^{2\pi i n \theta}\, \hat \Lambda(\theta)\, d\theta

where

\hat \Lambda(\theta) = \sum_n \Lambda(n) e^{-2\pi i n \theta}

is a sum very similar in nature to the sums \sum_{p < N} e^{2\pi i \alpha p} mentioned earlier. Substituting this formula into (**), we eventually get an expression of the form

\int_0^1 \hat \Lambda(\theta)^2\, \hat \Lambda(-2\theta)\, e^{4\pi i \theta}\, d\theta.

Thus, if one gets good enough control on the Fourier coefficients \hat \Lambda(\theta), which can be viewed as a measure of how much the primes conspire with a linear phase oscillation with frequency \theta, then one can (in principle, at least) count the solutions to the pattern {n, n+r, n+2r+2} in the primes. This is the Hardy-Littlewood circle method in a nutshell, and this is for instance how van der Corput's theorem and Vinogradov's theorem were first proven.

I have glossed over the question of how one actually computes the Fourier coefficients \hat \Lambda(\theta). It turns out that there are two cases. In the major arc case, when \theta is rational or close to rational (with a reasonably small denominator), the problem turns out to be essentially equivalent to counting primes in arithmetic progressions, and so one uses tools related to Dirichlet's theorem (L-functions, the Siegel-Walfisz theorem, etc.). In the minor arc case, when \theta is far from rational, one instead uses identities such as

\Lambda(n) = \sum_{d|n} \mu(d) \log \frac{n}{d}

where \mu is the Möbius function, to split the Fourier coefficient as

\hat \Lambda(\theta) = \sum_d \sum_m \mu(d) \log(m)\, e^{-2\pi i \theta dm}

and then one uses the fact that \theta is far from rational to exhibit some significant oscillation in the phase e^{-2\pi i \theta dm}, which cannot be fully canceled out by the oscillation in the \mu(d) factor. (In practice, the above strategy does not work directly, and one has to work with various truncated or smoothed out versions of the above identities; this is technical and will not be discussed here.)
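
To make the major/minor arc distinction concrete, here is a rough numerical sketch (reusing the mangoldt helper from the earlier snippet, and again only an illustration under stated assumptions) of the truncated Fourier coefficient \sum_{n < N} \Lambda(n) e^{-2\pi i n \theta}: it is large at a rational \theta with small denominator, and much smaller at a generic irrational \theta.

    import cmath, math
    from sympy import factorint  # assumed available

    def mangoldt(n):
        f = factorint(n)
        return math.log(next(iter(f))) if len(f) == 1 else 0.0

    N = 10**5
    def hat_Lambda(theta):
        # truncated Fourier coefficient of the von Mangoldt function
        return sum(mangoldt(n) * cmath.exp(-2j * math.pi * n * theta)
                   for n in range(1, N))

    print(abs(hat_Lambda(0.0)))           # roughly N, by the prime number theorem
    print(abs(hat_Lambda(1 / 3)))         # major arc: comparable to N/2
    print(abs(hat_Lambda(math.sqrt(2))))  # minor arc: far smaller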

Now suppose we look at progressions of length 4: n, n+r, n+2r, n+3r. As with progressions of length 3, linear or Fourier conspiracies such as (*) will bias or distort the count of such progressions in the primes less than a given number N. But, in contrast to the length 3 case, where these are the only conspiracies that actually influence things, for length 4 progressions there are now quadratic conspiracies which can cause trouble. Consider for instance the conspiracy

0 < \{ \sqrt{2} p^2 \} < 0.01 for all but finitely many primes p. (***)

This conspiracy, which can exist even when all linear conspiracies are eliminated, will significantly bias the number of progressions of length 4, due to the identity

\{ \sqrt{2} n^2 \} - 3 \{ \sqrt{2} (n+r)^2 \} + 3 \{ \sqrt{2} (n+2r)^2 \} - \{ \sqrt{2} (n+3r)^2 \} = 0 \hbox{ mod } 1,

which is related to the fact that the function \sqrt{2} n^2 has a vanishing third derivative. In this case, the conspiracy works in one's favour, increasing the total number of progressions of length 4 beyond what one would have naively expected; as mentioned before, this is related to a remarkable indestructibility property of arithmetic progressions, which can be used to establish things like the Green-Tao theorem without having to deal directly with these obstructions. Thus, in order to count progressions of length 4 in the primes accurately (and not just to establish the qualitative result that there are infinitely many of them), one needs to eliminate conspiracies such as (***), which necessitates understanding exponential sums such as \sum_{p < N} e^{2\pi i \alpha p^2} for various rational or irrational numbers \alpha. What's worse, there are several further generalised quadratic conspiracies which can also bias this count, for instance the conspiracy

0 < \{ [\sqrt{2} p]\, \sqrt{3} p \} < 0.01 for all but finitely many primes p,

where [x] is the greatest integer function. The point here is that the function [\sqrt{2} x]\, \sqrt{3} x has a third divided difference which does not entirely vanish (as it would for the genuine quadratic \sqrt{2} x^2), but does vanish a significant portion of the time (because the greatest integer function obeys the linearity property [x+y] = [x] + [y] a significant fraction of the time), which leads ultimately to a non-trivial bias effect. Because of this, one is also faced with estimating exponential sums such as \sum_{p < N} e^{2\pi i [\sqrt{2} p] \sqrt{3} p}. It turns out that the correct way to phrase all of these obstructions is via the machinery of 2-step nilsequences; details can be found in these three papers of Ben Green and myself. As a consequence, we can in fact give a precise count of how many arithmetic progressions of primes of length 4 there are with all primes less than N; it turns out to be

\Big( \frac{3}{4} \prod_{p \geq 5} \big(1 - \frac{3p-1}{(p-1)^3}\big) + o(1) \Big) \frac{N^2}{\log^4 N} \approx 0.4764\, \frac{N^2}{\log^4 N}.
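
As a rough numerical sanity check of this prediction, here is a hedged Python sketch of my own (assuming sympy; the o(1) term decays slowly, so only order-of-magnitude agreement should be expected at small N).

    import math
    from sympy import primerange  # assumed available

    N = 3000
    primes = list(primerange(2, N))
    pset = set(primes)
    # count progressions p, p+r, p+2r, p+3r of primes, all below N
    count = sum(1 for p in primes
                  for r in range(1, (N - p) // 3 + 1)
                  if p + r in pset and p + 2 * r in pset and p + 3 * r in pset)
    print(count, 0.4764 * N**2 / math.log(N) ** 4)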

The method also works for other linear patterns of comparable complexity to progressions of length 4. We are currently working on the problem of longer progressions, in which cubic and higher order obstructions appear (which should be modeled by 3-step and higher nilsequences); some work related to this should appear here shortly.


