An alternative Kummer function based Zeta function theory is proposed to enable
(a) the verification of several Riemann Hypothesis (RH) criteria
(b) a true circle method for the analysis of binary number theory problems.
The Kummer function based Zeta function theory essentially replaces the exponential integral function Ei(x) by a corresponding integral Kummer function. It enables the validation of several RH criteria, especially the "Hilbert-Polya conjecture", the "Riemann error function asymptotics" criterion and the "Beurling" RH criterion. The latter provides the link to the fractional part function and its related periodical L(2) Hilbert space framework, (TiE).
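The Ei-Kummer connection can be made concrete through a classical identity: the exponential integral E1 is, up to an exponential factor, Tricomi's confluent hypergeometric function U(1,1,x), a Kummer-type function. A minimal numerical check of that identity (illustrative only, not the construction proposed here):

```python
# Check E1(x) = exp(-x) * U(1, 1, x) for x > 0, linking the exponential
# integral to a confluent hypergeometric (Kummer-type) function.
import numpy as np
from scipy.special import exp1, hyperu

x = np.linspace(0.5, 5.0, 10)
lhs = exp1(x)                           # exponential integral E1(x)
rhs = np.exp(-x) * hyperu(1.0, 1.0, x)  # Tricomi U representation
print(float(np.max(np.abs(lhs - rhs)))) # agreement near machine precision
```

The identity follows from the integral representation U(1,1,x) = ∫ exp(-x t)/(1+t) dt over (0, ∞), which after the substitution t → t-1 is exp(x) E1(x).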
Regarding the ternary Goldbach problem, Vinogradov applied the Hardy-Littlewood circle method (with its underlying domain, the "open unit disk") to derive his famous estimate, currently the best known but not sufficient. It is built from two estimate components based on a decomposition of the (Hardy-Littlewood) "near"-circle into two parts, the "major arcs" (also called "basic intervals") and the "minor arcs" (also called "supplementary intervals"). The "major arcs" estimate would be sufficient to prove the Goldbach conjecture; unfortunately, the "minor arcs" estimate is not. The latter is based purely on "Weyl sums" estimates, which take no problem-specific information into account. However, this estimate is optimal in the context of Weyl sums theory. In other words, the major/minor arcs decomposition is inappropriate for solving the ternary and the binary Goldbach conjectures.
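The finite, discrete analogue of the circle-method integral can be sketched in a few lines: the number of ordered representations of an even number as a sum of two primes is a Fourier coefficient of the squared prime exponential sum S(alpha)^2, and on a finite grid the integral over the circle becomes a discrete Fourier transform. This is only the orthogonality identity behind the method, not the major/minor arc analysis itself:

```python
# Toy circle-method identity: r(m) = #{(p, q) prime: p + q = m} is the
# m-th Fourier coefficient of S(alpha)^2, S(alpha) = sum_p e(p * alpha).
import numpy as np

N = 64
sieve = np.ones(N, dtype=bool)
sieve[:2] = False
for i in range(2, int(N**0.5) + 1):
    if sieve[i]:
        sieve[i*i::i] = False
f = sieve.astype(float)        # indicator of the primes below N

# S(alpha)^2 on a discrete circle of 2N points (zero padding avoids
# wrap-around); the inverse DFT reads off the coefficients r(m):
S = np.fft.fft(f, 2 * N)
r = np.rint(np.fft.ifft(S * S).real).astype(int)

print(r[20])  # ordered prime pairs with p + q = 20: (3,17),(7,13),(13,7),(17,3)
```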
The primary technical challenge regarding number theoretical problems is the fact that only the set of odd integers has Snirelman density ½, while the set of even integers has Snirelman density zero (because the integer 1 is not part of this set).
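The Snirelman (Schnirelmann) density d(A) = inf over n >= 1 of |A ∩ {1,...,n}|/n behind this distinction can be checked directly; a minimal sketch over a finite range:

```python
# Snirelmann density, computed over a finite range; for the odd/even sets
# the infimum is already attained within any even-length range.
def schnirelmann_density(in_set, n_max):
    """Approximate d(A) = inf_n |A ∩ {1..n}| / n for a membership predicate."""
    count = 0
    density = float("inf")
    for n in range(1, n_max + 1):
        if in_set(n):
            count += 1
        density = min(density, count / n)
    return density

odds  = schnirelmann_density(lambda n: n % 2 == 1, 1000)  # 1/2
evens = schnirelmann_density(lambda n: n % 2 == 0, 1000)  # 0, since 1 is missing
print(odds, evens)
```

The even set already fails at n = 1 (no element of the set is <= 1), which forces the infimum to zero regardless of how dense the set is further out.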
The additional challenge regarding binary number theoretical problems is the fact that the problem connects two sets of prime numbers occurring with different density (probability) during the counting process; regarding the Goldbach conjecture this concerns the fact that the number of primes in the interval (p, 2n-p) differs from the number of primes in the interval (1, p). Therefore, two different "counting methods" are required to count the numbers of primes in the intervals (1, p) and (p, 2n-p).
In order to overcome both technical challenges above, a true circle method in a Hilbert space framework with underlying domain "boundary of the unit circle" is proposed. The nonharmonic Fourier series theory in a distributional periodic Hilbert scale framework replaces the power series theory with its underlying domain, the "open unit disk".
The proposed nonharmonic Fourier series are built on the (non-integer) zeros of the considered Kummer function (which are non-real, with real parts > 1/2), replacing the role of the integers n in exp(inx) for harmonic Fourier series. They are accompanied by the zeros of the Digamma function (the Gaussian psi function). Together, both sequences are supposed to enable appropriate non-Z based lattices of functions with domain "negative real line & 'positive' critical line". This domain is supposed to replace the full critical line in the analysis of the Zeta function, in order to capture the full information of the set of zeros of the Zeta function (including the so-called trivial zeros), while omitting the redundant information that the zeros on the "negative" part of the critical line add to the critical zeros.
With respect to the analysis of the Goldbach conjecture, this is about replacing the concept of trigonometric (Weyl) sums in a power series framework by Riesz bases which are "close" (in a certain sense) to the trigonometric system exp(inx). The nonharmonic Fourier series concept of almost periodic functions is basically about the change from the integers n to an appropriate sequence a(n). Such a change also makes the difference between the Weyl method and the van der Corput method regarding exponential sums with domains (n, n+N), (GrS), (MoH). Selberg's proof of the large sieve inequality is based on the fact that the characteristic function of an interval (n, n+N) can be estimated by the Beurling entire function of exponential type 2*pi, applying its remarkable extremal property with respect to the sgn(x) function, (GrS).
The Riesz based nonharmonic Fourier theory enables the split of number theoretical functions into a sum of two functions dealing with odd and even integers separately, while both domains have Snirelman density ½. In case of an analysis of the Goldbach conjecture it also enables the definition of two different density functions, "counting" the numbers of primes in the intervals (1, p) resp. (p, 2n-p).
The trigonometric system exp(inx) is stable under sufficiently small perturbations, which leads to the Paley-Wiener criterion. Kadec's 1/4-theorem provides the "small perturbation" criterion, which is fulfilled for both sets of zeros, those of the considered Kummer function and those of the Digamma function. A striking generalization of Kadec's 1/4-theorem, (YoR) p. 36, relevant in the present context is Avdonin's theorem "1/4 in the mean", (YoR) p. 178.
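The hypothesis of Kadec's 1/4-theorem is a single number: if sup over n of |a(n) - n| stays strictly below 1/4, then {exp(i a(n) x)} is a Riesz basis of L2(-pi, pi). A minimal sketch of that check, with an illustrative perturbed sequence (not the Kummer/Digamma zero sequences themselves):

```python
# Check the sup-deviation hypothesis of Kadec's 1/4-theorem for a
# perturbed integer sequence a(n) = n + 0.2*sin(n) (illustrative choice).
import math

def kadec_margin(a):
    """Return sup_n |a(n) - n| over the supplied dict n -> a(n)."""
    return max(abs(an - n) for n, an in a.items())

a = {n: n + 0.2 * math.sin(n) for n in range(-50, 51)}
L = kadec_margin(a)
print(L < 0.25)  # True: the perturbed exponentials still form a Riesz basis
```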
The Fourier transform of the trigonometric system forms an orthogonal basis of the Paley-Wiener Hilbert space (PW space), providing a unique expansion of every function in the PW space with respect to the system of sinc(z-n) functions. Therefore, every PW function f can be recaptured from its values at the integers, which is achieved by the cardinal series representation of f, (YoR) p. 90.
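The cardinal series f(t) = sum over n of f(n) sinc(t - n) can be demonstrated numerically; here f(t) = sinc(t - 0.3) is a band-limited test function (np.sinc is sin(pi x)/(pi x)), and the series is truncated, so recovery is approximate:

```python
# Shannon/cardinal series: recover a Paley-Wiener function of bandwidth pi
# from its integer samples, truncating the series at |n| = 500.
import numpy as np

f = lambda t: np.sinc(t - 0.3)   # band-limited test function
n = np.arange(-500, 501)         # truncated integer sampling grid

def cardinal_series(t):
    """Reconstruction from integer samples via the sinc kernel system."""
    return np.sum(f(n) * np.sinc(t - n))

t0 = 0.25
err = abs(cardinal_series(t0) - f(t0))
print(err)  # small truncation error
```

Replacing the integers n by a sequence a(n), as in the following paragraph, replaces the sinc(t - n) kernels by the reproducing sinc(t - a(n)) system of the associated Riesz basis.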
When the integers n are replaced by a sequence a(n), the correspondingly transformed exponential system builds a related Riesz basis of the PW space with the reproducing sinc(z-a(n)) kernel function system.
For the link of the nonharmonic Fourier series theory with its underlying concepts of frames and Riesz bases to the wavelet theory and sampling theorems, which is part of the solution concept of part B, we refer to (ChO), (HoM), (ReH).
(B) A Hilbert scale based integrated gravity and quantum field model

After having passed the two milestones, Einstein's Special Relativity Theory, (BoD1), and Pauli's spin concept (accompanied by the spin statistics CPT theorem, (StR)), the General Relativity Theory (GRT) and the quantum theory of fields became two "dead end road" theories on the way towards a common gravity and quantum field theory. The physical waymarking labels directing into those dead end roads may be read as
dead end road label (1): "towards space-time regions with non-constant gravitational potentials governed by a globally constant speed of light", (UnA)
dead end road label (2): "towards the Yang-Mills mass gap".
The waymarker labels of the royal road towards a geometric gravity and quantum field theory may be
royal road label 1: towards mathematical concepts of "potential", "potential operator", and "potential barrier" as intrinsic elements of a geometric mathematical model beyond a metric space (*)
royal road label 2: towards a Hilbert space based hyperboloid manifold with hyperbolic and conical regions governed by a "half-odd-integer" & "half-even-integer" spin concept
royal road label 3: towards the Lorentz-invariant, CPT theorem supporting weak Maxwell equations model of "proton potentials" and "electron potentials" as intrinsic elements of a geometric mathematical model beyond a metric space
royal road label 4: towards "the understanding of physical units", (UnA) p. 78, modelled as "potential barrier" constants, (*), (**), (***), (****), (*****)
(*) Einstein quote, (UnA) p. 78: "The principle of the constancy of the speed of light can only be maintained by restricting to space-time regions with a constant gravitational potential."
(**) The Planck action constant may mark the "potential barrier" between the measurable action of an electron and the action of a proton, which "is acting" beyond the Planck action constant barrier.
(***) The "potential barrier" for the validity of the Mach principle determines the fine structure constant and the mass ratio of proton and electron: Dirac's large number hypothesis is about the fact that for a hydrogen atom with two masses, a proton and an electron, orbiting one another, the ratio of the corresponding electric and gravitational forces coincides with the ratio of the size of a proton and the size of the universe (as measured by Hubble), (UnA) p. 150. In the proposed geometric model the hydrogen atom mass is governed by the Mach principle, while the Mach principle is no longer valid for the electron mass, which is governed by the CPT spin statistics.
(****) The norm-square representation of the proposed "potential" definition indicates a representation of the fine structure constant in the form 256/137 ~ (pi*pi) - 8. In (GaB) there is an interesting approach (key words: "Margolus-Levitin theorem", "optimal packaged information in micro quantum world and macro universe") to "decrypt" the fine structure constant as the borderline multiplication factor between the range of the total information volume size (calculated from the quantum energy densities) of all quantum-electromagnetic effects in the universe (including those of real electrodynamic fields in a vacuum; Lamb shift) and the range of the total information volume size of all matter in the four dimensional universe (calculated from the matter density of the universe).
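The numerical coincidence quoted above, 256/137 ~ pi^2 - 8, can be checked directly; the two sides agree to about one part in a thousand:

```python
# Verify the approximate identity 256/137 ~ pi^2 - 8 quoted in the text.
import math

lhs = 256 / 137          # inverse-fine-structure-flavored ratio
rhs = math.pi**2 - 8     # proposed closed form
print(lhs, rhs, abs(lhs - rhs))  # difference of order 1e-3
```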
(*****) The vacuum is a homogeneous, dielectric medium in which no charge distributions and no external currents exist. It is governed by the dielectric and permeability constants, which together determine the speed of light; the fine structure constant can be interpreted as the ratio of the circulation speed of the electron of a hydrogen atom in its ground state to the speed of light. This puts the spotlight on the Maxwell equations and the still missing underlying laws governing the "currents" and "charges" of electromagnetic particles: "... The energetical factors are unknown, which determine the arrangement of electricity in bodies of a given size and charge", (EiA), p. 52.
The proposed Hilbert space based model ...
… overcomes the (mathematical model caused) Yang-Mills-Equations mass gap problem
… builds on the (mathematically) proven (physical) PCT theorem
... overcomes the main gap of Dirac's quantum theory of radiation, i.e. the small term representing the coupling energy of the atom and the radiation field becomes part of the H(1)-complementary (truly bosonic) sub-space of the overall energy Hilbert space H(1/2); the new concept replaces Dirac's H(-n/2-e) based point charge model by an H(-1/2) based quantum element model
... acknowledges the primacy of the micro quantum world over the macro (classical field) cosmology world, where the Mach principle governs the gravity of masses and masses govern the variable speed of light, (DeH)
... allows to revisit Einstein's thoughts on ETHER AND THE THEORY OF RELATIVITY in the context of the space-time theory and the kinematics of the special theory of relativity modelled on the Maxwell-Lorentz theory of the electromagnetic field
... acknowledges the Mach principle as a principle for selecting the appropriate classical cosmology field model out of the few currently physically relevant ones, (DeH): the classical cosmology field equation model to be selected may be modelled as the Ritz approximation equation (= orthogonal projection onto the coarse-grained (energy) Hilbert sub-space H(1) of the overall variational representation in the overall H(1/2) (energy) Hilbert space) of an extended Newton model, accompanied by the Dirichlet integral based inner product and the three dimensional unit sphere S(3) based on the field of quaternions, in sync with the Lorentz transformation
... acknowledges Bohm's property of a "particle" in case of quantum fluctuation, (BoD), chapter 4, section 9, (SmL).
The GRT describes a mechanical system characterized by the (inert = heavy) mass of the considered "particles" and their related interacting "gravity force"; the mathematical model framework is a real number based (purely) metric space. This is about the distance measurement of real number points, which are by definition without any "mass density" (as long as the field of real numbers is not extended to the field of hyper-real numbers, i.e. as long as the Archimedean axiom is still valid). For a mechanical system every real-valued function of the location and momentum (real number) coordinates represents an observable of the physical system. The underlying metric space concept (equipped with an (only) "exterior" product of differential forms, because a metric space has no geometric structure at all), accompanied by a globally non-linearly stable Minkowski space, (ChD), is replaced by a H(1/2) quantum energy Hilbert space concept, where the H(1/2) inner product is equivalent to a corresponding inner product of differential forms.
The Hilbert space framework also supports solutions to related challenges, e.g. the "first mover" question (inflation as a prerequisite) of the "Big Bang" theory; the symmetrical time arrow of the (hyperbolic) wave (and radiation) equation, governing the light speed and derived from the Maxwell equations by differentiation; the lack of a long-term stable, well-posed 3D-NSE; the absence of admissible standing (stationary) waves in the Maxwell equations; the mystery of the initial generation of an uplift force at aircraft wings in an ideal fluid environment, i.e. without fluid collisions with the wing surfaces; and a weak (H(-1/2) based variational) Landau equation based proof of the Landau damping phenomenon. Landau damping concerns the strong plasma turbulence phenomenon generating plasma heating.
The spectrum of a Hermitian, positive definite operator H with domain D(H) in a complex valued Hilbert space, whose inverse operator is compact, is discrete (measurable/stable). Without the latter property the concept of a continuous spectrum is required (unstable measurement results). The link back to part A is given by the Berry-Keating conjecture with its mathematical counterpart, the Hilbert-Polya conjecture. It is the suggestion that the E(n) of the non-trivial zeros ½+iE(n) of the Zeta function are all real because they are eigenvalues of some Hermitian operator H. Berry's extensions are that if H is regarded as the Hamiltonian of a quantum-mechanical system then, (BeM)
i) the operator H has classical limits
ii) the classical orbits are all chaotic (unstable)
iii) the classical orbits do not possess time-reversal symmetry.
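The spectral fact behind the Hilbert-Polya suggestion, that the eigenvalues of a Hermitian operator are real, can be illustrated in finite dimensions; a random Hermitian matrix stands in here for the conjectured operator H:

```python
# Eigenvalues of a Hermitian matrix are real, illustrated numerically.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6)) + 1j * rng.standard_normal((6, 6))
H = (A + A.conj().T) / 2          # Hermitian: H equals its conjugate transpose
eigs = np.linalg.eigvals(H)       # general solver, no symmetry assumed
print(np.max(np.abs(eigs.imag)))  # essentially zero: the spectrum is real
```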
While the full H(-1/2) based quantum-mechanical system behaves chaotically from an observer perspective (measured in the statistical Hilbert (sub-)space L(2)), the underlying complementary sub-systems allow a differentiation between those observable thermostatistical effects and the related, not statistically measurable mathematical model parameters, which are physically governed by physical Nature constants reflecting the borderline to those complementary sub-systems (with purely continuous spectrum).
Spontaneous symmetry breaking is a process by which a physical system in a symmetric state ends up in an asymmetric state. In quantum theory it describes systems where the equation of motion or the Lagrangian obeys symmetries, but the lowest-energy vacuum solutions do not exhibit the same symmetry. When the system goes to one of those vacuum solutions, the symmetry is broken for perturbations around that vacuum, even though the entire Lagrangian retains that symmetry. In our case this is about a "spontaneous self-adjointness breakdown" from the wavelet governed complementary (quantum potential energy, closed) sub-space of H(1/2) onto H(1). In line with part A, it might be appropriate to replace the Mexican hat (wavelet) function, which is basically the second derivative of the Gaussian function, by a related Kummer function based wavelet for a corresponding modelling.
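The Mexican hat wavelet referred to above is, up to sign and normalization, the second derivative of the Gaussian: psi(x) = (1 - x^2) exp(-x^2/2) = -g''(x). A quick numerical confirmation (the suggested Kummer-based variant is not constructed here):

```python
# Verify psi(x) = (1 - x^2) * exp(-x^2/2) equals minus the second
# derivative of the Gaussian g(x) = exp(-x^2/2), via finite differences.
import numpy as np

x = np.linspace(-5, 5, 2001)
g = np.exp(-x**2 / 2)                    # Gaussian profile
psi = (1 - x**2) * g                     # Mexican hat (closed form)
d2g = np.gradient(np.gradient(g, x), x)  # numerical 2nd derivative
print(np.max(np.abs(psi + d2g)))         # near zero: psi = -g''
```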
The proposed circle method above (part A) comes along with a split of the integer domain N (of number theoretical functions), bijectively mapped onto two sets of integers ("odd" and "even") on the boundary of the unit circle, both with Snirelman density ½. The two sets may be used to revisit the elementary "particle" state numbers n = 2n/2 (n = 1, 2, 3, ...), extended by n = 1/2, and Schrödinger's half-integer state numbers n + 1/2 = (2n+1)/2 (n = 0, 1, 2, ...) (corresponding to the levels of the Planck oscillator, see above resp. (ScE) pp. 20, 50), to re-organize the chaotic model behavior of the current thermodynamics based, purely kinematical quantum-mechanical systems.
Further referring to part A, we note that the zeta function on the critical line is an element of the sequences Hilbert space l(-1). The Shannon sampling operator is the linear interpolation operator mapping the standard sequences Hilbert space l(2) isomorphically onto the Paley-Wiener space with band width "pi" (i.e. a signal f represented as the inverse Fourier transform of an L(2) function g can be bijectively mapped to its related sequence f(k) and vice versa). We note that the generalized distributional (polynomial and exponential decay) Hilbert scales l(a) and l(t,a) allow the definition of corresponding generalized Paley-Wiener scales enabling a (wavelet basis based) convolution integral representation of the zeta function, fulfilling appropriate admissibility conditions for wave functions to support general quantum-mechanical principles. We note that the Fourier transform does not allow localization in the phase space. In order to overcome this handicap, D. Gabor introduced the concept of windowed Fourier transforms. From a group theoretical perspective the wavelet concept and the windowed Fourier transform are identical. With respect to the above Hilbert scale framework we note that the admissibility condition defining wavelets (e.g. (LoA)) puts the spotlight on the proposed H(1/2) (energy) Hilbert space.
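The admissibility condition mentioned above, C = integral of |psi_hat(omega)|^2/|omega| over R being finite, can be checked numerically for a standard wavelet. For the Mexican hat, whose Fourier transform is proportional to omega^2 exp(-omega^2/2), the normalization chosen below (an illustrative choice) gives C = 1 exactly:

```python
# Numerical check of the wavelet admissibility constant for the Mexican
# hat: psi_hat(omega) = omega^2 * exp(-omega^2/2), so the integrand is
# omega^3 * exp(-omega^2), integrable despite the 1/|omega| factor
# because psi_hat vanishes at omega = 0 (zero mean).
import numpy as np

omega = np.linspace(1e-6, 20.0, 20001)
psi_hat = omega**2 * np.exp(-omega**2 / 2)
y = psi_hat**2 / omega
C = 2.0 * np.sum((y[1:] + y[:-1]) / 2 * np.diff(omega))  # trapezoid; even integrand
print(C)  # finite; exactly 1 for this normalization
```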
From a physical perspective the wavelet transform may be looked upon as a "time-frequency analysis" with constant relative band width. From a "mathematization" perspective, "wavelet analysis may be considered as a mathematical microscope to look, by arbitrary (optics) functions over the real line R, on different length scales at the details at position b that are added if one goes from scale "a" to scale "a-da", with da>0 but infinitesimally small", (HoM) 1.2. Technically speaking, "the wavelet transform allows us to unfold a function over the one-dimensional space R into a function over the two-dimensional half-plane H of positions and details (where are which details generated?). ... Therefore, the parameter space H of the wavelet analysis may also be called the position-scale half-plane, since if g is localized around zero with width "Delta", then g(b,a) is localized around the position b with width a*Delta".
The related wavelet duality relationship provides an additional degree of freedom to apply wavelet analysis with appropriately (problem specifically) defined wavelets in a Hilbert scale framework, where the "microscope observations" of two wavelet (optics) functions g and h can be compared with each other by a "reproducing" ("duality") formula. The (only) price to be paid is the additional effort of re-building the reconstruction wavelet.
(C) Some "views of the world" from physicists and philosophers
Disclaimer: None of the papers on this homepage have been reviewed by other people; therefore there are surely typos, and certainly some errors as well. Nevertheless, the fun part should prevail, and if someone becomes famous in the end, it would be nice if a reference to this homepage could be found somewhere.