### Large extra dimensions

In particle physics, the ADD model, also known as the model with large extra dimensions, is an alternative scenario to explain the weakness of gravity relative to the other forces. This theory requires that the fields of the Standard Model are confined to a four-dimensional membrane, while gravity propagates in several additional spatial dimensions that are large compared to the Planck scale.[1]

The model was proposed by Nima Arkani-Hamed, Savas Dimopoulos, and Gia Dvali in 1998.[2][3]

Results from the Large Hadron Collider do not appear to support the model.[4][5]

## Proponents' views

Traditionally in theoretical physics the Planck scale is the highest energy scale, and all dimensionful parameters are measured in terms of it. There is a great hierarchy between the weak scale and the Planck scale, and explaining the ratio $G_N/G_F \approx 10^{-32}$ is the focus of much of beyond-the-Standard-Model physics. In models of large extra dimensions the fundamental scale is much lower than the Planck scale. This occurs because the power law of gravity changes: for example, when there are two extra dimensions of size $d$, gravity falls off as $1/r^4$ for objects with $r \ll d$ and as $1/r^2$ for objects with $r \gg d$. If we want the higher-dimensional Planck scale to equal the next accelerator energy (1 TeV), we should take $d$ to be approximately 1 mm. For larger numbers of extra dimensions, with the fundamental scale fixed at 1 TeV, the size of the extra dimensions shrinks, down to about a femtometre for six extra dimensions.
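The sizes quoted above can be reproduced with a rough numerical sketch, assuming the standard ADD relation $M_{Pl}^2 \sim M_*^{\,n+2} R^n$ for $n$ flat extra dimensions of common size $R$ and ignoring $O(1)$ geometric factors (so the six-dimensional case lands at tens of femtometres, within those factors of the femtometre figure above):

```python
# Rough size of n equal extra dimensions needed to bring the fundamental
# gravity scale M_star down to 1 TeV, using M_Pl^2 ~ M_star^(n+2) * R^n
# in natural units.  O(1) geometric factors are ignored: estimate only.

HBARC_GEV_M = 1.97e-16   # hbar*c in GeV*m, converts 1/GeV to metres
M_PL = 1.2e19            # four-dimensional Planck mass in GeV
M_STAR = 1e3             # assumed fundamental scale: 1 TeV in GeV

def extra_dimension_size(n, m_star=M_STAR):
    """Return the common size R (in metres) of n flat extra dimensions."""
    r_inv_gev = (M_PL**2 / m_star**(n + 2)) ** (1.0 / n)  # R in 1/GeV
    return r_inv_gev * HBARC_GEV_M

for n in (1, 2, 6):
    print(n, extra_dimension_size(n))
# n=1 gives a solar-system-scale R (excluded), n=2 the ~mm scale,
# n=6 roughly tens of femtometres.
```

The $n=1$ case is the standard reason one extra dimension is excluded: gravity would have been modified at astronomical distances.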

By reducing the fundamental scale to the weak scale, the fundamental theory of quantum gravity, such as string theory, might be accessible at colliders such as the Tevatron or the LHC.[6] There has been recent progress in generating large volumes in the context of string theory.[7] Having the fundamental scale accessible allows the production of black holes at the LHC,[8][9] though there are constraints on the viability of this possibility at the energies at the LHC.[10] There are other signatures of large extra dimensions at high energy colliders.[11][12][13][14][15]

Many of the mechanisms used to explain problems of the Standard Model invoke very high energies. In the years after the publication of ADD, much of the work of the beyond-the-Standard-Model physics community went into exploring how these problems could instead be solved with a low scale of quantum gravity. Almost immediately, there was an alternative explanation to the see-saw mechanism for the neutrino mass.[16][17] Using extra dimensions as a new source of small numbers allowed new mechanisms for understanding the masses and mixings of the neutrinos.[18][19]

Another serious problem with a low scale of quantum gravity is the possible existence of TeV-suppressed proton-decay, flavor-violating, and CP-violating operators, which would be phenomenologically disastrous. It was quickly realized that there are novel mechanisms for obtaining the small numbers necessary to explain these very rare processes.[20][21][22][23][24]

## Opponents' views

In the traditional view, the enormous gap in energy between the mass scales of ordinary particles and the Planck mass is reflected in the fact that virtual processes involving black holes or gravity are strongly suppressed. The suppression of these terms is the principle of renormalizability — in order to see an interaction at low energy, it must have the property that its coupling only changes logarithmically as a function of the Planck scale. Nonrenormalizable interactions are weak only to the extent that the Planck scale is large.

Virtual gravitational processes don't conserve anything except gauge charges, because black holes decay into anything with the same charge. So it is difficult to suppress interactions at the gravitational scale. One way to do so is by postulating new gauge symmetries. A different way to suppress these interactions in the context of extra-dimensional models is the "split fermion scenario" proposed by Arkani-Hamed and Schmaltz in their paper "Hierarchies without Symmetries from Extra Dimensions".[25] In this scenario the wavefunctions of particles bound to the brane have a finite width significantly smaller than the extra dimension, but their centers (e.g. of Gaussian wave packets) can be dislocated to different positions along the extra dimension, in what is known as a "fat brane". Integrating out the extra dimension(s) to obtain the effective couplings of higher-dimensional operators on the brane, the result is suppressed by the exponential of the square of the distance between the centers of the wavefunctions: a dislocation of only a few times the typical width of the wavefunction already generates a suppression by many orders of magnitude.
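The Gaussian suppression can be made concrete. For two normalized Gaussian profiles of common width $\sigma$ whose centers are separated by $r$, the overlap integral along the extra dimension is $e^{-r^2/4\sigma^2}$ (the exact prefactor depends on normalization conventions; the exponential of the squared separation is the point):

```python
import math

# Split-fermion ("fat brane") sketch: the effective 4D coupling of an
# operator involving two brane-localized fields is proportional to the
# overlap of their profiles along the extra dimension.  For Gaussian
# profiles of width sigma separated by r, the overlap is exp(-r^2/4sigma^2).

def overlap_suppression(r, sigma=1.0):
    """Overlap of two normalized Gaussian profiles whose centers differ by r."""
    return math.exp(-r**2 / (4.0 * sigma**2))

for r in (0.0, 3.0, 10.0):
    print(r, overlap_suppression(r))
# a dislocation of ten widths already suppresses the coupling by
# more than ten orders of magnitude
```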

In electromagnetism, the electron magnetic moment is described by perturbative processes derived from the QED Lagrangian:



$$\int \bar{\psi}\, \gamma^\mu \partial_\mu \psi + \tfrac{1}{4}F^{\mu\nu}F_{\mu\nu} + \bar{\psi}\, e\gamma^\mu A_\mu \psi$$

which is calculated and measured to one part in a trillion. But it is also possible to include a Pauli term in the Lagrangian:



$$A\, \bar{\psi}\, F^{\mu\nu} \sigma_{\mu\nu}\, \psi$$

and the magnetic moment would change by an amount proportional to $A$. The reason the magnetic moment is correctly calculated without this term is that the coefficient $A$ has the dimension of inverse mass, and the relevant mass scale is at most the Planck mass. With the usual Planck scale, the effect of $A$ would only show up at the 20th decimal place.

Since the electron magnetic moment is measured so accurately, and since the scale at which it is measured is the electron mass, a term of this kind would be visible even if the Planck scale were only about $10^9$ electron masses, which is roughly 1000 TeV. This is much higher than the Planck scale proposed in the ADD model.
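The argument is pure dimensional analysis and can be checked numerically. A minimal sketch, assuming (as the text does) that the Pauli term shifts the moment by roughly $m_e/M$ and that the moment is known to about one part in $10^{12}$:

```python
# Order-of-magnitude check of the Pauli-term argument: a dimension-five
# Pauli term suppressed by a scale M shifts the electron magnetic moment
# by roughly m_e / M.  With the moment measured to ~1 part in 1e12, the
# shift is visible whenever m_e/M exceeds that precision.

M_E_GEV = 0.511e-3      # electron mass in GeV
PRECISION = 1e-12       # rough experimental precision on the moment

def pauli_shift(scale_gev):
    """Fractional moment shift from a Pauli term suppressed by scale_gev."""
    return M_E_GEV / scale_gev

m_1000_tev = 1e6        # 1000 TeV in GeV, roughly 1e9 electron masses
print(pauli_shift(m_1000_tev))               # ~5e-10, well above 1e-12
print(pauli_shift(m_1000_tev) > PRECISION)
```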

QED is not the full theory, and in the Standard Model there are not many possible Pauli terms. A good rule of thumb is that a Pauli term is like a mass term: in order to generate it, the Higgs must enter. But in the ADD model, the Higgs vacuum expectation value is comparable to the Planck scale, so the Higgs field can contribute to any power without any suppression. One coupling that generates a Pauli term is the same as the electron mass term, except with an extra $Y^{\mu\nu}\sigma_{\mu\nu}$, where $Y$ is the U(1) gauge field. This operator is dimension six, contains one power of the Higgs expectation value, and is suppressed by two powers of the Planck mass. It should start contributing to the electron magnetic moment at the sixth decimal place, and a similar term should contribute to the muon magnetic moment at the third or fourth decimal place.
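The decimal places quoted follow from the dimensional estimate $\Delta a \sim v\, m_\ell / M_*^2$, with one power of the Higgs vev $v$ and two powers of the low gravity scale $M_*$. A sketch, assuming $v \sim M_* \sim 1$ TeV and setting all dimensionless couplings to one:

```python
# Size of the dimension-six Pauli operator's contribution to a lepton
# magnetic moment: one Higgs vev and two inverse powers of the low
# quantum-gravity scale give  delta_a ~ v * m_lepton / M_star^2.
# Assumed numbers: v ~ M_star ~ 1 TeV, couplings set to 1.

V_GEV = 1e3        # Higgs vev taken comparable to the low Planck scale
M_STAR_GEV = 1e3   # assumed quantum-gravity scale, 1 TeV

def dim6_moment_shift(m_lepton_gev):
    """Dimensional estimate of the moment shift for a lepton of given mass."""
    return V_GEV * m_lepton_gev / M_STAR_GEV**2

print(dim6_moment_shift(0.511e-3))  # electron: ~5e-7, the sixth decimal place
print(dim6_moment_shift(0.106))     # muon: ~1e-4, the fourth decimal place
```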

In the Standard Model, the neutrinos are massless only because the dimension-five operator $\bar{L}HHL$ does not appear. But neutrinos have a mass scale of approximately $10^{-2}$ eV, which is 14 orders of magnitude smaller than a Higgs expectation value of 1 TeV. This means that the term is suppressed by a mass $M$ such that



$$\frac{H^2}{M} = 0.01\ \text{eV}$$

Substituting $H \simeq 1$ TeV gives $M \simeq 10^{26}$ eV $\simeq 10^{17}$ GeV. So this is where the neutrino masses suggest new physics: close to the traditional GUT scale, a few orders of magnitude below the traditional Planck scale. The same term in a large-extra-dimensions model would give the neutrino a mass in the MeV-GeV range, comparable to the masses of the other particles.
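The arithmetic can be made explicit:

```python
# Suppression scale M implied by the dimension-five neutrino mass term:
# m_nu = H^2 / M  =>  M = H^2 / m_nu, using the numbers in the text.

H_EV = 1e12       # Higgs expectation value, 1 TeV expressed in eV
M_NU_EV = 1e-2    # neutrino mass scale in eV

M = H_EV**2 / M_NU_EV
print(M)           # ~1e26 eV
print(M / 1e9)     # ~1e17 GeV, near the traditional GUT scale
```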

In this view, models with large extra dimensions miscalculate the neutrino masses by inappropriately assuming that the mass is due to interactions with a hypothetical right-handed partner. The only reason to introduce a right-handed partner is to produce neutrino masses in a renormalizable GUT. If the Planck scale is small so that renormalizability is no longer an issue, there are many neutrino mass terms which don't require extra particles.

For example, at dimension six there is a Higgs-free term which couples the lepton doublets to the quark doublets, $\bar{L}L\bar{q}q$, which is a coupling to the strong-interaction quark condensate. Even with a relatively low-energy pion scale, this type of interaction could conceivably give the neutrino a mass of size $f_\pi^3/\text{TeV}^2$, which is only a factor of about $10^7$ less than the pion condensate itself at 200 MeV. This would be some 10 eV of mass, about a thousand times bigger than what is measured.
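This estimate is again simple dimensional analysis, using the pion scale and a 1 TeV gravity scale from the text:

```python
# Higgs-free dimension-six neutrino mass estimate from the quark
# condensate:  m_nu ~ f_pi^3 / M_star^2,  with f_pi ~ 200 MeV and
# M_star ~ 1 TeV, as in the text.

F_PI_GEV = 0.2     # pion scale in GeV
M_STAR_GEV = 1e3   # assumed low gravity scale, 1 TeV

m_nu_gev = F_PI_GEV**3 / M_STAR_GEV**2
m_nu_ev = m_nu_gev * 1e9
print(m_nu_ev)                      # ~8 eV, "some 10 eV" in the text
print(F_PI_GEV * 1e9 / m_nu_ev)     # ~2.5e7: the factor-of-1e7 gap
```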

This term also allows lepton-number-violating pion decays and proton decay. In fact, among all operators with dimension greater than four there are CP-, baryon-number-, and lepton-number-violating ones. The only way to suppress them is to deal with them term by term, which nobody has done.

The popularity, or at least prominence, of these models may have been enhanced because they allow the possibility of black hole production at LHC, which has attracted significant attention.

## Empirical tests

An analysis of results from the Large Hadron Collider in December 2010 severely constrains theories with large extra dimensions.[4][5]

In 2012, the Fermi-LAT collaboration published limits on the ADD model of large extra dimensions from astrophysical observations of neutron stars. If the unification scale is at a TeV, then for n < 4 the results imply that the compactification topology must be more complicated than a torus, i.e. than all large extra dimensions having the same size. For flat extra dimensions of the same size, the lower limits on the unification scale are consistent with n ≥ 4.[26] The details of the analysis are as follows: a sample of six gamma-ray-faint neutron-star sources not reported in the first Fermi gamma-ray source catalog was selected, based on age, surface magnetic field, distance, and galactic latitude. From 11 months of Fermi-LAT data, 95% CL upper limits on the size of extra dimensions R were obtained from each source, as well as 95% CL lower limits on the (n+4)-dimensional Planck scale M_D. In addition, the limits from all of the analyzed neutron stars were combined statistically using two likelihood-based methods. The results give more stringent limits on large extra dimensions than previously quoted from individual neutron-star sources in gamma rays, and, for n < 4, are more stringent than current collider limits from the LHC. Further details of the analysis are found in [27].