Abstract
We have previously proposed the idea of performing a card-drawing experiment whose outcome potentially decides whether the Large Hadron Collider (LHC) should be closed or not. The purpose is to test theoretical models, such as our own, that have an action with an imaginary part of a form similar to that of the real part. The imaginary part affects the initial conditions not only in the past but even from the future. It was speculated that, for accelerators producing large numbers of Higgs particles, such as the Superconducting Super Collider (SSC), the initial conditions must have been arranged so as not to allow these accelerators to work. If such effects existed, we could perhaps cause a very clear-cut "miracle" by letting the effect of a drawn card be the closure of the LHC. Here we shall, however, argue that the closure of an accelerator is hardly needed to demonstrate such an effect, and we seek to determine how one could perform a verification experiment for the proposed type of effect from the future in the statistically least disturbing and least harmful way.
We shall also discuss how to extract the maximum amount of information about such an effect or model in the unlikely case that a card preventing the running of the LHC or the Tevatron is drawn, by estimating the relative importance of high beam energy and high luminosity for the purpose of our effect.
CERN-PH-TH/2008-035
YITP-07-99
OIQP-07-20
Test of Effect from Future in Large Hadron Collider;
A Proposal
Holger B. Nielsen (on leave of absence to CERN, Geneva, from 1 Aug. 2007 to 31 March 2008)
The Niels Bohr Institute, University of Copenhagen,
Copenhagen, DK-2100, Denmark
and
Masao Ninomiya (also working at Okayama Institute for Quantum Physics, Kyoyama 1, Okayama 700-0015, Japan)
Yukawa Institute for Theoretical Physics,
Kyoto University, Kyoto 606-8502, Japan
PACS numbers: 12.90.+b, 14.80.Cp, 11.10.-z
Keywords: Backward causation, Initial condition model, LHC, Higgs particle
1 Introduction
Each time an accelerator is used to investigate a hitherto uninvestigated regime of, say, collision energy or luminosity, there is, a priori, a chance of finding new effects that could, in principle, mean the violation of a principle well established in lower-energy physics or in daily life. The present paper is one of a series of articles [1, 2, 3, 4, 5] discussing how one might use the LHC, and perhaps the Tevatron, to search for effects violating the following well-established principle: while the future is very much influenced by the past, the future does not influence the past. Perhaps we can state this principle, which we propose to test at the new LHC accelerator, more precisely as follows: While we find that there is a lot of structure from the past that persists today in its present state, simple structures existing in the future do not, at the level of pure physics, appear to prearrange the past so that they come about [6, 8, 7, 9, 10, 11, 12]. If there really were such prearrangements organizing simple things to exist in the future, we could say that we had a model for an initial state with a built-in arrangement for the future, which is what our model is. However, models or theories for the initial state, such as Hartle and Hawking's no-boundary model [13], are not normally of this type, but rather lead to a simple starting state, corresponding to the fact that we normally do not see things being arranged for the future in the fundamental laws, and thus find no backward causation [6]. However, we sometimes see that this type of prearrangement occurs, but we manage to explain it away. For example, we may see a lot of people gathering for a concert. At first it appears that we have a simple structure in the future, namely, many people sitting in a specific place, such as the concert hall, causing a prearrangement in the past.
Normally we do not accept the phenomenon of people gathering for a concert as an effect of some mysterious fundamental physical law seeking to collect the people at the concert hall, and thus arranging the motion of these people shortly before the concert to be directed towards the hall. In our previous model [1, 2, 3, 4, 7, 5], which even we do not claim to be relevant to the concert-hall example, such an explanation based on a fundamental physics model could have sounded plausible. In our model, we have a quantity S_I, which is the imaginary part of the action in the sense that it is substituted into a certain Feynman path integral, as is the real part of the action, except for a factor i. In fact, we let the action S be complex, and its imaginary part S_I, like the real part S_R, is an integral over time of a Lagrangian, L_I(t) and L_R(t) respectively. Thus S = S_R + i S_I with S_I = ∫ L_I(t) dt. Roughly speaking, the way that the world develops is to make S_I almost minimal (so that the probability weight e^{-2 S_I/ℏ} obtained from the Feynman path integral is as large as possible). Thus, a tempting "explanation" for the gathering of the people would be that many people gathering for a concert provides a considerable negative contribution to the imaginary part L_I of the Lagrangian during the concert, and thereby a negative contribution to the imaginary action S_I. Thus, the solutions to the equations of motion containing such a gathering before a concert would have an increased probability weight e^{-2 S_I/ℏ}, and we would have an explanation for the phenomenon of the people gathering for the concert. If we did not have an alternative (and, we think, better) explanation, then we might have to take such gatherings of people for concerts as evidence for our type of model with an effect from the future; we would, for example, conclude that the gathering of people occurred in order to minimize "an imaginary action" S_I.
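The probability weight just quoted follows from taking the absolute square of the Feynman path-integral weight; as a minimal sketch, in the notation S = S_R + i S_I of our model:

```latex
% Complex action S = S_R + i S_I, so the Feynman weight factorizes:
e^{\frac{i}{\hbar}S} = e^{\frac{i}{\hbar}S_R}\, e^{-\frac{1}{\hbar}S_I},
\qquad
\left|\, e^{\frac{i}{\hbar}S} \right|^{2} = e^{-\frac{2}{\hbar}S_I}.
```

Histories with smaller S_I are thus exponentially favored, which is why the world, so to speak, "tries" to minimize S_I.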
An alternative and better explanation, which does not require any fundamental physical influence from the future, is as follows: The participants in the concert and their behaviors are indeed, in the classical and naive approximation, completely determined by the initial state of the universe at the moment of the Big Bang, an initial state in which the concert was not planned. Later on, however, some organizers, possibly the musicians themselves, used their fantasy to model the future by means of calendars, etc., and they issued an announcement. We can use this announcement as the true explanation of the gathering of the listeners at the concert. The gathering at the concert was due to some practical knowledge of the equations of motion, allowing events to be organized on their basis alone, together with the fact that the initial conditions are, in some respects, very well organized (low entropy, sufficient food and gasoline resources). However, there was no effect from the future, only from the fantasies about the future implemented in memories, which are of course true physical objects, such as the biological memories of the announcement and so on.
There are even more difficult examples, such as the biological development of extremely useful organs, in which one must explain that events are not preorganized "by God", who may here be roughly identified with fundamental physical influences from the future. Has the development of legs, say, really got nothing to do with the fact that they can later be used for walking and running? Darwin and Wallace produced a convincing explanation for the development of legs without the need for any fundamental influence from the future on the past.
If, as would be said prior to Darwin’s time, it were God’s plan (analogous to the concert organizer’s plan) to make legs, this would come very close to the fundamental physics model, provided the following two assumptions were satisfied:

That this God is not limited, nor has His memory limited, by physical degrees of freedom, in contrast to the brains of the concert organizers.

That this God is all-knowing, which means that He has access to the future and does not need fantasy or simulation to create a model of it.
In earlier works [2, 3, 4] we attempted to find various reasons why this effect from the future might be suppressed. For instance, the effect is definitely suppressed for particles whose eigentimes are trivial in some sense. By Lorentz invariance, the contribution to the action, whether to the real part or the imaginary part, from the passage of a particle from one point to another must be proportional to the eigentime of the passage (i.e., the time the passage would take according to a standard clock located at the particle).
Two examples that dominate the physics of daily life ensure at least one source of strong suppression of the effect from the future: 1) Massless particles such as the photon always have zero eigentime; thus for photons the effect is strongly suppressed or killed. 2) For nonrelativistic particles, the eigentime is equal to the reference-frame time, and thus the eigentime is trivial unless the particle is produced and/or destroyed. If a particle such as an electron is conserved, its eigentime becomes trivial, and there is little chance of seeing our effect at lowest order with electrons.
Actually, since there is a factor of 1/ℏ in front of the action, one initially expects the effects of S_I to be so large that we need a large amount of suppression to prevent our model from being in immediate disagreement with experiment.
We shall discuss some suppression mechanisms in section 3. We shall also discuss reasons why the effects of Higgs particles are likely to be much greater than those of already observed particles. The number of Higgs particles is not conserved, so that even a nonrelativistic Higgs particle may contribute to S_I; we do, of course, expect somewhat relativistic Higgs particles to be produced, with velocities of the order of magnitude of that of light but typically not extremely relativistic. Thus neither of the two above-mentioned mechanisms gives a reason for the effect of a Higgs particle to be suppressed.
In contrast, as we shall see, there is a reason why, even if the whole effect of S_I is generally strongly suppressed, a counteracting enhancement could arise in the case of the Higgs particle.
Normally it would be reasonable to assume that the multipliers of the same field combination in the real and imaginary parts, say the coefficients of |φ_H|^2 in L_R within S_R and in L_I within S_I, should be of the same order of magnitude, so that the complex phases of the couplings should be of order unity.
There is, however, one case in which the hierarchy problem makes this assumption unlikely to be true. Because of the hierarchy problem, it is difficult to avoid considerable fine tuning of the Higgs mass, since its quadratically divergent contributions, with a cutoff at the Planck scale, would shift it considerably. We may only need this fine tuning for the coefficient of the mass-squared term in the real part of the Lagrangian density. Whether only the real part of the square of the mass is tuned, or whether the imaginary part is also tuned, may depend on which of the various models is used to attempt to solve the hierarchy problem, and on how such a model is implemented together with our model of the imaginary part of the action.
For instance, one of us has constructed a long argument, using a bound state of six top and six antitop quarks, that under the assumption of several degenerate vacua (the Multiple Point Principle, MPP) [5, 8, 16], which in turn follows [5] from the model in the present article, we obtain a very small Higgs mass with its order of magnitude agreeing with the weak scale. In this model, which "solves the hierarchy problem", it is clearly the real part of the square of the mass, i.e., Re m_H^2, that gets fine-tuned, and there would not be sufficiently many equations of MPP, stating that several vacua have essentially zero (effective) cosmological constants, to allow the fine tuning of more than just this real part Re m_H^2. In this model, the argument would thus be that the hierarchy problem would remain unsolved for the imaginary part of the coefficient of the mass-square term for the Higgs field φ_H, and it is unknown whether this imaginary part should be fine-tuned to be small. A priori, it may be difficult for any model to obtain a real part as small as the weak scale; thus it is highly possible that also in other models only the real part is tuned and not the imaginary part. In such cases, the imaginary part of the Higgs mass square could, a priori, remain untuned and be of the order of some fundamental scale, such as the Planck scale or a unified scale. This would mean that the imaginary part may be much larger than the real part,
|Im m_H^2| ≫ |Re m_H^2|,  (1)
and thus the assumption that all the ratios of the real to the imaginary parts of the various coefficients in the Lagrangian should be of order unity would not be expected to hold for the mass-square coefficient. Conversely, unless the hierarchy-problem solution is valid for both the real and the imaginary part, we could have
|Im m_H^2| / |Re m_H^2| ~ m_Pl^2 / m_weak^2.  (2)
This would mean that by estimating the effect of S_I in a way analogous to that for particles whose coupling coefficients have real and imaginary parts of the same order, or related by some general suppression factor, we could potentially underestimate the effects of the Higgs particle by a factor as large as the ratio in (2).
In the light of this estimate of the relative importance of the effect from the future (our effect) for the Higgs particle compared with other particles, it is an obvious conclusion that one should search for this type of effect whenever new Higgs-particle-producing machines, such as the LHC, the Tevatron, or the canceled SSC, are planned.
If the production and existence of a Higgs particle for a small amount of time gave a negative contribution to S_I, which would enhance the probability weight e^{-2 S_I/ℏ}, one could wonder why the universe is not filled with Higgs particles. This may only be a weak argument, but it suggests that presumably the contribution from an existing Higgs particle to S_I is positive, if it is at all important. Now if the production and "existence" of a Higgs particle indeed gave a positive contribution to S_I, whereby the probability for developments in a world containing large Higgs-particle-producing accelerators would be decreased by the effect of their contribution, then the production and existence of such Higgs particles in great amounts should somehow be avoided in the true history of the world. If an accelerator potentially existed that could generate a large number of Higgs particles, and if the parameters were such that it would indeed give a large positive contribution to S_I, then such a machine should practically never be realized!
We consider the following to be an interesting example and weak experimental evidence for our model: the great Higgs-particle-producing accelerator, the SSC [17], in spite of its tunnel being a quarter built, was canceled by Congress! Such a cancellation after a huge investment is in itself an unusual event that should not happen too often. We might take this event as experimental evidence for our model, in which an accelerator with the luminosity and beam energy of the SSC should not be built (because in our model, S_I would become too large, i.e., less negative, if such an accelerator were built) [17].
Since the performance of the LHC approaches that of the SSC, the LHC may also be in danger of being closed under mysterious circumstances. In an article [1] introductory to the present one, we proposed to demonstrate the mysterious effect of S_I in our model of potentially closing the LHC by carrying out a card-drawing game, or by using a random number generator.
Under the assumption that our model is indeed correct, demonstrating a strong effect on the LHC by a card-drawing game, such as its possible closure, would serve a couple of purposes:

Even though some unusual political or natural catastrophe causing the closure of the LHC would be strong evidence for the validity of a model of our type with an effect from the future, it would still be debatable whether the closure was due to some cause other than our effect. However, if a card-drawing game or a quantum random number generator were to cause the closure of the LHC in spite of closure having been assigned a small probability of the order of, say, 10^{-7}, then the closure would be very clear evidence for our model. In other words, if our model were true, we would obtain very clear evidence by using such a card-drawing game or random number generator.

A drawn card or a random number causing a restriction on the LHC could be much milder than a closure caused, due to the effect of our model, by other means. The latter could, in addition, result in the LHC machine being badly used, or cause other effects such as the total closure of CERN, a political crisis, or the loss of many human lives in the case of a natural catastrophe.
Thus, the cheapest way of closing the main part of the LHC may be to demonstrate the effect via the card-drawing game.
However, in spite of these benefits of performing the card-drawing experiment, it would be a terrible waste if a card really did enforce the closure of, or a restriction on, the LHC. A restriction should be assigned such a low probability under normal conditions that, if our model were nonsense, a card requiring a strong restriction would practically never be drawn; thus such a drawing would establish our type of theory solely on the basis of that "miraculous" event. Such a drawing would have the consolation that instead of finding supersymmetric partners or other novel phenomena at the LHC, one would have seen an influence from the future! That might indeed lead to even more spectacular new physics than one could otherwise hope for! Thus, the restriction of the LHC would not be so bad. Nevertheless, it is of high importance to statistically minimize the harm done by an experiment such as a card-drawing game. Also, one should allow several possible restrictions to be written on the cards that might be drawn, so that several possible effects from the future may occur. Then one might, in principle, learn about the detailed properties of this effect, such as the number of Higgs particles needed to obtain an effect, or whether luminosity or beam energy matters most for the effect.
It is the purpose of the present article to raise and discuss these questions of how to arrange a card-drawing experiment so as to obtain maximum information and benefit with minimal loss and statistically minimal restrictions.
In section 2, we formulate some of the goals one would have to consider in planning a card-drawing or random number experiment on restricting the LHC. In section 3, we give a simplified description of the optimal organization of the game and propose that the probability distribution in the game be assigned on the basis of the maximum allowed size of some quantity composed of luminosity, beam energy, number of Higgs particles, etc.
In section 4, we develop the theory of our imaginary action so as to obtain a method of estimating the mathematical form expected for the probability of a "miraculous" shutdown of the LHC.
In section 5, we discuss possible rules for the card-drawing game and the quantum random number generator. However, we think that more discussion and calculation may still be needed to develop the proposed example before actually drawing the cards.
Section 6 is devoted to further discussion and a conclusion.
2 The goals, or what to optimize
It is clear that the most important goal concerning the LHC is for it to operate in a way that delivers as many valuable and interesting results as possible in the search for one or more Higgs particles, strange bound states, and supersymmetric partners. By contrast, our model is extremely unlikely to be true, and thus the investigation of our model should only be allowed to disturb the other investigations very marginally. The problem is that, although the probability of disturbance by the investigation of our theory is arranged statistically to be very tiny, there is a risk that the drawing of a very unlucky card could impose a significant restriction on the LHC, and thereby cause a very major disturbance.
2.1 What to expect
Before estimating the optimal strategy with respect to the card-drawing game and its rules, we wish to obtain a crude statistical impression of what to expect.
The most likely situation is that our model is simply wrong, and thus it is very unlikely that anything should happen to the LHC unless it is caused by our card-drawing experiments. If, however, our model is correct in principle, we must accept the unlucky fate of the SSC [17] as experimental evidence and conclude that the amount of super-high-energy physics at the SSC, measured in some way by a combination of the luminosity and the beam energy, was seemingly sufficient to change the fate of the universe on a macroscopic scale. We do not at present know the parameters of our model, and even if we make order-of-magnitude guesses, there are several difficulties in estimating even the order of magnitude of the suppression mechanisms. For instance, there may be competition between the arrangement of events in our time to give a low S_I and similar arrangements at other times. Thus, even guessing the order of magnitude of the fundamental couplings will still not give a safe estimate of the order of magnitude of the strength of the effect in practice. We are therefore left with a crude method of prior estimation: we take the probability for the different "amounts of super-high-energy collisions" A (say, some combination of beam energy and luminosity) needed to macroscopically change the fate of the universe to have constant density in the logarithm of this measure A.
For simplicity, we take A to be, for example, the integrated luminosity for collisions with sufficiently high energy to produce Higgs particles, or simply the number of Higgs particles produced. Whether they are observed or not does not matter; it is the physically produced Higgs particles, and the time they exist, that matter.
In the case that our theory is correct, we know an upper limit for the amount of super-high-energy collisions needed to obtain an effect: the SSC was canceled, so its potential amount of super-high-energy collisions must have been above the amount needed. On the other hand, we also know that the Tevatron seemingly operates as expected, so its amount of super-high-energy collisions must be below the amount needed to cause fatal macroscopic changes.
In our prior estimation, we should thus calculate the probability that the amount A needed to cause fatal effects in a machine lies in the interval [A_1, A_2] as

P(A_1 ≤ A ≤ A_2) = ∫_{A_1}^{A_2} P(A) dA,  (3)

and we assume

P(A) = 1 / (A ln(A_SSC / A_Tevatron))  (4)

for A_Tevatron ≤ A ≤ A_SSC and P(A) = 0 outside this range.
Here we have denoted by A_SSC and A_Tevatron, respectively, the amounts of super-high-energy collisions, say the numbers of Higgs particles produced or the integrated luminosities, of the SSC and the Tevatron.
The SSC should have achieved a luminosity of 10^{33} cm^{-2} s^{-1} and a beam energy of 20 TeV in each beam, while the Tevatron has achieved values of about 10^{32} cm^{-2} s^{-1} and 1 TeV.
The LHC should achieve a luminosity of 10^{34} cm^{-2} s^{-1} and a beam energy of 7 TeV in each beam.
           Beam Energy   Luminosity
Tevatron   1 TeV         10^{32} cm^{-2} s^{-1}
LHC        7 TeV         10^{34} cm^{-2} s^{-1}
SSC        20 TeV        10^{33} cm^{-2} s^{-1}
With respect to luminosity, the LHC is expected to be even stronger than the SSC; thus, if we apply our criterion, one would expect the LHC to be prone to even greater bad luck than the SSC.
Let us, however, illustrate the idea by using the beam energy E per beam as the measure A. Then

P(E_1 ≤ E ≤ E_2) = ln(E_2 / E_1) / ln(E_SSC / E_Tevatron),  (5)

and

ln(E_SSC / E_Tevatron) = ln(20 TeV / 1 TeV) = ln 20 ≈ 3.0.  (6)

Hence,

P(E ≤ E_LHC) = ln(E_LHC / E_Tevatron) / ln(E_SSC / E_Tevatron).  (7)

Now E_LHC = 7 TeV. Thus, the probability that the critical E for closure is smaller than E_LHC is ln 7 / ln 20 ≈ 0.65. This means that if our theory were correct and the beam energy were the relevant quantity, then the LHC would be stopped somehow with a probability of about 65%.
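As a numerical cross-check of this estimate, the following sketch evaluates the log-uniform prior; the beam energies of 1, 7, and 20 TeV per beam are the Tevatron, LHC, and SSC values quoted above.

```python
import math

# Log-uniform prior for the critical beam energy E_c: constant density in
# log E between the Tevatron value (machine ran undisturbed) and the SSC
# value (machine was stopped).  Energies in TeV per beam.
E_TEV, E_SSC, E_LHC = 1.0, 20.0, 7.0

def p_critical_below(E, lo=E_TEV, hi=E_SSC):
    """P(E_c < E) for a prior with constant density in log E on [lo, hi]."""
    E = min(max(E, lo), hi)  # clamp to the support of the prior
    return math.log(E / lo) / math.log(hi / lo)

# Probability that the LHC exceeds the critical energy, Eq. (7): ln 7 / ln 20
p_lhc_stopped = p_critical_below(E_LHC)
print(f"P(LHC stopped) ~ {p_lhc_stopped:.2f}")  # ~0.65
```

The clamp simply encodes the assumption, as in Eq. (4), that the critical energy certainly lies between the Tevatron and SSC values.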
2.2 What would we like to know about our model, if it is correct?
There are several pieces of information one would like to obtain concerning our model:

1) One would like to know whether there is indeed an effect of the proposed type from the future at all.

2) One would like to obtain an estimate of how strong the effect is, i.e., one would like to estimate at least the order of magnitude of the amount A needed to disturb the fate of the universe macroscopically.

3) One would like to determine whether it is the beam energy or the luminosity that is more important for causing closure.

4) One would also like to determine which type of random numbers, quantum random numbers or more classically constructed ones, allows our effect to manipulate the past more easily. One could even speculate as to whether one could construct a mathematically defined random number that would make such manipulation by a physical effect (from the future) almost impossible.
In addition to all these wishes to get questions answered, one wants a card-drawing or random number experiment that causes minimal harm to the optimal use of the LHC machine.
2.3 How to evaluate cost?
Let us now discuss the above questions about our model.
To obtain a convincing answer to question 1), of whether there is indeed an effect as proposed, the probability of selecting a random number (a card, for instance) that leads to restrictions should be so small that one could practically ignore the possibility of a restriction occurring simply by chance. This suggests that one should let the a priori probability of a restriction, i.e., the number of card combinations corresponding to a restriction relative to the total number of combinations, be sufficiently small to correspond to obtaining by accident an experimental measurement five standard deviations away from the mean. That is to say, a crude number for the suggested probability of any restriction at all is of the order of 10^{-7}.
To obtain a good answer to question 2), on the order of magnitude of the strength of the effect, we must let the a priori probability of a drawn card giving a certain degree of restriction vary with that degree. Thus, a milder restriction is made, a priori, much more likely than a more severe restriction. We can then basically assume that we will not draw a restriction (card combination) appreciably stronger than that required to demonstrate our effect. Thus, we can assume that the restriction drawn will be of the order of magnitude of the maximum strength at which the machine is allowed to run before our effect stops it. If we arrange the probabilities in this way, we may claim that the restriction resulting from the drawn random number represents the strength of the effect. Mathematically, such an arrangement means that we choose the a priori probability P(x) for the allowed fraction x to be a power law:

P(x) = N x^a,  (8)

where

x = (allowed amount of A) / (maximum unrestricted amount), with 0 ≤ x ≤ 1,  (9)

and N is a normalization constant. Here a larger value of a should be chosen for a sharper measurement of the strength of the effect. If we only require a crude order of magnitude, we can simply take a of order unity, say a = 1 or 2.
Note that having a large a means that very severe restrictions become relatively very unlikely. Thus, a large a is optimal for ensuring minimal harm to the operation of the machine.
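To make this concrete, here is a small sketch assuming the normalized power-law prior P(x) = (a+1) x^a on [0, 1] (i.e., taking N = a+1); it samples restrictions by inverse-CDF and compares the Monte Carlo mean of the allowed fraction x with the analytic value (a+1)/(a+2):

```python
import random

def mean_allowed_fraction(a):
    # <x> = integral_0^1 x * (a+1) * x**a dx = (a+1)/(a+2)
    return (a + 1) / (a + 2)

def draw_restriction(a, rng):
    # Inverse-CDF sampling: CDF(x) = x**(a+1), so x = u**(1/(a+1))
    return rng.random() ** (1.0 / (a + 1))

rng = random.Random(0)
for a in (1, 2):
    xs = [draw_restriction(a, rng) for _ in range(100_000)]
    mc_mean = sum(xs) / len(xs)
    print(a, round(mc_mean, 2), round(mean_allowed_fraction(a), 2))
```

A larger a indeed pushes the drawn fraction x towards 1, i.e., towards milder restrictions, as stated above.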
We shall also assume that, since the Tevatron seems not to be disturbed, we do not have to include restrictions on the LHC more severe than those that would force it to operate like the Tevatron.
Concerning question 3), as to which features of the operation, e.g., the luminosity and the center-of-mass beam energy, are the most important for our effect, we answer this question by letting different random numbers (the drawings) result in different types of restrictions. That is to say, the different drawings represent different combinations of restrictions on the beam energy and the luminosity. Presumably it would be wise to make as many variations in the restriction patterns as possible, because the more combinations of the various parameters there are, the more information about our effect one can obtain. If one draws a combination of cards that causes a restriction, then one has immediately verified our type of model, or the existence of an effect from the future. In this case, any detail of the specific restriction combination obtained from the drawing is no longer random, but is an expression of the mysterious new effect just established by the same drawing. The more details one can thus arrange to be readable from the card combination drawn, the more information one will obtain about the effect, in the case of restrictions actually showing up in spite of having been a priori arranged to do so only with a probability at the five-standard-deviation level. Thus, to obtain as much profitable information as possible, there should be as many drawing combinations, with as many different detailed restrictions, as possible. One could easily impose restrictions only for a limited number of years, or one could restrict the number of Higgs particles produced according to some specific Monte Carlo program using, at that time, the best estimate of the mass of the Higgs particle. If one allows some irrelevant details to also result from the drawing, that is not a serious problem, since one will simply obtain a random answer concerning the irrelevant parameter.
It would be much worse if our theory were correct and one missed the chance of extracting an important parameter that could have been extracted from the drawing.
One should therefore also be careful when adjusting the relative a priori probabilities of the values of the parameters one hopes to extract, so that one really extracts interesting information relative to theoretical expectations and does not simply obtain a certain result because one has adjusted the a priori probability too much.
Concerning question 4), determining the type of random number that can be most easily manipulated by our effect, we should extract information (though unfortunately very little, we suspect) to answer the question by using several, or at least two, competing types of random numbers. One could, for instance, have one quantum mechanical random number generator and one card-drawing game. One could easily reduce the probability of a restriction in each of them by a factor of 2 so as to keep the total probability of obtaining a restriction at the initially prescribed five-standard-deviation level. Then one should have two (or more) sources of random numbers, e.g., a genuine card-drawing game and a quantum random number generator, each with a very high probability of no restriction on the running of the LHC (for that type of random number) and only a tiny probability of some restriction (as already discussed, with as many different ways of imposing a restriction as one can invent), namely the five-standard-deviation probability divided by the number of different types of random number, 2 in our example.
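The bookkeeping for splitting the total restriction probability between several sources of randomness can be sketched as follows, under the assumption of a total probability of about 10^{-7} (the crude five-standard-deviation figure suggested above) split evenly between k = 2 sources:

```python
# Split the total a priori restriction probability evenly between k
# independent sources of randomness, e.g. a card drawing and a quantum
# random number generator.
p_total = 1e-7   # five-standard-deviation level (order of magnitude)
k = 2            # number of independent random number sources
p_each = p_total / k

# Probability that at least one source imposes a restriction; for tiny
# p_each this is ~ k * p_each = p_total, so the overall level is kept.
p_any = 1 - (1 - p_each) ** k
print(p_each, p_any)
```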
After having drawn a restriction from one of the types of random numbers, one would at least know that this type of random number was accessible to manipulation by our effect. Such information could be of theoretical value, because one can imagine that various detailed models based on our type of effect from the future may give various predictions as to the type of random number through which the effect can express itself. If, say, a model only allowed the effect to act through classical features of the initial state of the universe, while quantum experiments gave fundamentally random or "fortuitous" [18] results that not even our effect could influence, then such a model would be falsified if the quantum random number generator led to restrictions on the LHC.
One could also imagine that more detailed calculations would determine whether the effect from the future had to manifest itself not too far back in time. In that case, one could perhaps invent a type of card game with cards that had been shuffled many years in advance, using only the first six cards in such a stack.
If it were the type of random number coming from the stack shuffled years in advance that allowed the effect, then any detailed theory in which the effects from the future reach only a short time interval back in time could be falsified.
2.4 Statistical cost estimate of experiment so far discussed
Let us now, as a first overview of how risky it would be to perform a random number experiment, consider the simple proposal above:
The most likely outcome of the experiment is that no restrictions are imposed, because we only propose restrictions with a probability of the order of 10^{-7}. Even in the case of drawing a restriction, one then considers the distribution of, say, the allowed beam-energy fraction x to be proportional to the a-th power of x, where we think of a as being 1 or 2, and where x denotes the fraction of the allowed beam energy relative to the maximum; in other words, we call the highest beam energy allowed according to the card-drawing game x times the maximum. Then the average reduction, in the (probability ~10^{-7}) case that we draw a reduction, relative to the maximum beam energy becomes

1 - ⟨x⟩ = 1 - ∫_0^1 x P(x) dx  (10)
        = 1 - ∫_0^1 (a+1) x^{a+1} dx  (11)
        = 1 - (a+1)/(a+2)  (12)
        = 1/(a+2).  (13)

For a = 2, we lose 1/(a+2) = 25% of the maximum beam energy due to the restriction.
With the cost of the LHC machine estimated at some billions of Swiss francs, the probability of a restriction being of the order of 10^{-7}, and the expected loss of beam being about 25%, the average cost of the card-game experiment is of the order of a hundred Swiss francs. However, there is, of course, a priori a risk. One should, however, not mind if the "bad luck of drawing a restriction card" occurs, because in reality it would be fantastically good luck: one would have discovered a fantastic and at first unbelievable effect from the future!
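The expected-cost arithmetic can be written out explicitly; the machine cost below is an illustrative round number of our choosing, and the other inputs are the order-of-magnitude values discussed above:

```python
# Crude expected cost of the card-drawing experiment.
machine_cost_chf = 4e9   # assumed LHC cost, a few billion Swiss francs
p_restriction = 1e-7     # a priori probability that any restriction is drawn
expected_loss = 0.25     # mean fraction of beam lost if drawn, 1/(a+2) for a = 2

expected_cost = machine_cost_chf * p_restriction * expected_loss
print(f"expected cost ~ {expected_cost:.0f} CHF")  # ~100 CHF
```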
2.5 Attempts to further reduce the harm
One can, of course, seek to further bias the rules of the game so as to assign the highest probabilities to the restrictions causing the least harm. For instance, one could allow a relatively high probability for restrictions of the type in which one is only allowed to operate the machine for a short time at its highest energy.
3 Competition determining the fate between different times
To determine how our effect from the future functions, let us consider our imaginary-Lagrangian model from a more theoretical viewpoint.
In the classical approximation of our model, we consider, as a first approximation, that the classical solution is determined by extremizing the real part S_R of the action alone, i.e.,

δS_R[path] = 0.  (14)

The reason for this is very simple. The real part determines the phase variation of the integrand in the Feynman path integral

∫ e^{i S[path]/ℏ} D(path) = ∫ e^{(i S_R[path] - S_I[path])/ℏ} D(path),  (15)

and thus it is only where e^{i S_R/ℏ} varies slowly, i.e., where δS_R = 0, that we do not have huge cancellation, because otherwise the rapid sign variation (phase rotation) cancels the contributions out.
We may illustrate this by the following drawing.