Dataset schema (one record per row):
text: large_string (lengths 252-2.37k)
length: uint32 (252-2.37k)
arxiv_id: large_string (lengths 9-16)
text_id: int64 (36.7k-21.8M)
year: int64 (1.99k-2.02k)
month: int64 (1-12)
day: int64 (1-31)
astro: bool (2 classes)
hep: bool (2 classes)
num_planck_labels: int64 (1-11)
planck_labels: large_string (66 values)
The highest photon energy detected for *Fermi* GRBs is of the order of 10 GeV. We first calculate the random Lorentz factor (LF) in the downstream co-moving frame, $\gamma_e$, of electrons radiating 10 GeV photons via synchrotron radiation, because these electrons have the largest Larmor radius and thus set the most stringent confinement requirement. The synchrotron frequency in the observer frame is $\nu_{syn} = e B_d \gamma_e^2 \Gamma / [2 \pi m_e c (1+z)]$, where $\Gamma$ is the bulk LF of the shocked fluid measured in the upstream rest frame (lab frame), $B_{d}$ is the downstream magnetic field (measured in the local rest frame), $z$ is the redshift, $m_e$ and $e$ are the electron's mass and charge, respectively, and $c$ is the speed of light (Rybicki & Lightman 1979). We normalize the synchrotron photon energy to 10 GeV, i.e., $\nu_{10} = h \nu_{syn}/(1.6 \times 10^{-2}\ \mathrm{erg})$, where $h$ is the Planck constant and 10 GeV corresponds to $1.6 \times 10^{-2}$ erg. Using the convention $Q_{x}=Q/10^{x}$ and solving the last expression for $\gamma_e$ yields
1,035
1003.5916
7,634,711
2,010
3
30
true
false
1
CONSTANT
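The inversion for $\gamma_e$ described in the passage above can be sketched numerically in CGS units. This is a minimal illustration; the fiducial values ($B_d = 1$ G, $\Gamma = 100$, $z = 1$) are assumptions for the example, not values from the excerpt:

```python
import math

# CGS constants
e = 4.803e-10    # electron charge [esu]
m_e = 9.109e-28  # electron mass [g]
c = 2.998e10     # speed of light [cm/s]
h = 6.626e-27    # Planck constant [erg s]

def gamma_e(E_ph_erg, B_d, Gamma, z):
    """Invert nu_syn = e * B_d * gamma^2 * Gamma / (2 pi m_e c (1+z))
    for the co-moving electron Lorentz factor, given the observed
    photon energy E_ph_erg (10 GeV = 1.6e-2 erg)."""
    nu = E_ph_erg / h
    return math.sqrt(2 * math.pi * m_e * c * (1 + z) * nu / (e * B_d * Gamma))

# Illustrative fiducials (assumed, not from the excerpt): B_d = 1 G, Gamma = 100, z = 1
print(f"gamma_e ~ {gamma_e(1.6e-2, 1.0, 100.0, 1.0):.2e}")
```

With these fiducials the required Lorentz factor comes out at the $10^8$ level, which is why such electrons dominate the confinement requirement.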
Here we address the question (first considered in [CIT]) of whether a background can be found that resolves the hierarchy problem while still satisfying the electroweak constraints. We start by rewriting the 5D version of (REF), i.e. with $\delta=\alpha_n=0$, as FORMULA This can be solved for $a(r)$, FORMULA where $K$ is a constant of integration. To determine whether a space resolves the hierarchy problem, we are interested in the warp factor, defined as $\Omega=a^{-1}(r_{\rm{ir}})$. But clearly under Neumann BC's (REF) is ill defined. So evaluating (REF) at the boundaries FORMULA and fixing the UV brane at $a(r_{\rm{uv}})=1$, we arrive at the expression FORMULA That is to say, the wave function of a gauge field propagating in a space with a warp factor $\Omega$ would satisfy this condition. Typically one would require $\Omega$ to be large, of order $\sim10^{15}$. In the RS model this is obtained predominantly with a small value of $f_n^{\prime\prime}(r_{\rm{uv}})$. However, an important point is that the required size of $\Omega$ is determined by the relative scaling of the fundamental Planck mass (REF). A space of large volume would suppress the fundamental Planck mass and not require as large a warp factor. Assume for the moment that $\Omega$ must be very large. We could, for example, get $F_n\sim 1$ if $f_n$ were close to constant for most of the space, but in order to generate a large warping, the wave function would have to either blow up (or be suppressed) in the UV (IR) while simultaneously the second derivative would have to be small (or large). Alternatively, the warping could be achieved through the $b$ term, but this would have to be done such that the 4D Planck mass was not enhanced. Of course metrics can always be written down that satisfy these conditions, but here we agree with [CIT] in saying that such metrics would be in danger of being contrived.
1,889
1004.1159
7,650,555
2,010
4
7
false
true
3
UNITS, UNITS, UNITS
The models that we investigate in this paper are derived from varying the action FORMULA with respect to the metric $\delta g_{\alpha\beta}$, where FORMULA is the Gauss-Bonnet invariant, $R$ is the Ricci scalar, $R_{\alpha\beta}$ is the Ricci tensor, $R_{\alpha\beta\gamma\delta}$ is the Riemann tensor, $L_m$ and $L_{rad}$ are the matter and radiation energy Lagrangians, respectively. We will work in units with reduced Planck mass $M_{pl}^2 = (8 \pi G_N)^{-1}=1$. The corresponding field equations read FORMULA where we use the definition $f_{G}\equiv\frac{\partial{f}}{\partial{G}}$.
587
1004.2459
7,665,175
2,010
4
14
true
false
1
UNITS
At the same time, however, there is an independent argument that suggests one should be cautious before counting on such a possibility. Inflation requires an expansion factor of at least $e^{60}$ in order to resolve all the cosmological puzzles that it was designed to resolve. One can ask what the total field excursion is during this period; the larger the scale of inflation, the larger the magnitude of the inflaton field at the end of inflation. One finds, in fact, again for single field inflation [CIT]: ${ \Delta \phi/ M_{pl}} \ge 1.06 \times {\left (r/0.01 \right)}^{1/2}$, where $M_{pl}$ is the Planck mass. Field excursions larger than the Planck scale lead to territory where Planck-scale effects may be significant, producing possibly large undetermined corrections to simple quantum field theoretic estimates. In fact, in many string theory models the value of the inflating field is associated with an excursion in an extra dimension, which is itself restricted to be of Planck length in size, so that it is impossible for the field to take values larger than the Planck scale, and one would expect, therefore, that $r < 0.01$. More recently it has been realized that string theory might allow for cases where $r$ can exceed 0.01 [CIT], so that distinguishing between small and large $r$ would help define the symmetry structure of the theory. Thus $r=0.01$ represents an important threshold.
1,402
1004.2504
7,666,325
2,010
4
14
true
true
5
UNITS, UNITS, UNITS, UNITS, UNITS
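The single-field bound quoted in the passage above is a one-liner to evaluate; at the threshold $r = 0.01$ the excursion is already Planckian:

```python
def delta_phi_over_mpl(r):
    """Lower bound on the inflaton excursion in Planck-mass units,
    from the relation quoted in the text: 1.06 * (r / 0.01)**0.5."""
    return 1.06 * (r / 0.01) ** 0.5

print(delta_phi_over_mpl(0.01))  # threshold case: bound is 1.06, super-Planckian
```

Larger tensor-to-scalar ratios only make the excursion bigger (e.g. $r = 0.04$ doubles it), which is the content of the threshold argument.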
On the basis of previous works, we have considered only the scalar and vector contributions, and by using their regular initial conditions, we constrain $B_{1 {\rm Mpc}} < 5.0$ nG and $n_B < - 0.12$ at the $95\%$ confidence level with the most up-to-date combination of CMB anisotropies. Planck will be able to constrain the spectrum of a stochastic background of PMF even further, at the level of $2.7$ nG.
398
1005.0148
7,700,952
2,010
5
2
true
false
1
MISSION
Consider Einstein gravity in $D$ space-time dimensions, in which the gravitational interaction is mediated by a $D$-dimensional massless particle of spin 2, the graviton $h_{\mu\nu}$. The corresponding $D$-dimensional Planck mass we shall denote by $M_D$. As we have discussed, the shortest observable distance in such a theory is given by the $D$-dimensional Planck length $L_D \equiv M_D^{-1}$, and this fact makes the theory self-complete in the deep UV. Let us now try to couple this theory to $N$ particle species. For simplicity we shall take the species to be light. As we shall see, this seemingly innocent deformation of the theory dramatically affects the gravitational dynamics. In fact, in the presence of light species, it is no longer consistent to assume that gravity is mediated by a single massless graviton; rather, new gravitational degrees of freedom *must be introduced necessarily* [CIT]. These degrees of freedom are necessary in order to UV-complete the theory at the new fundamental scale, $L_N$, which is *larger* than the $D$-dimensional Planck length $L_D$, FORMULA We shall refer to $L_N$ as the *species scale*.
1,134
1005.3497
7,740,757
2,010
5
19
false
true
3
UNITS, UNITS, UNITS
In the present case, the original rotational symmetry in 2+1 dimensional spacetime is reduced to a gauge symmetry in the two dimensional background. In the region near the horizon, the covariant current equation modified by the Abelian anomaly is given by FORMULA By solving this equation with the appropriate boundary condition that the covariant gauge currents vanish at the horizon, the flux of angular momentum from the horizon is given by FORMULA This is consistent with the flux derived from the Hawking distribution, given by the Planck distribution with chemical potentials for the angular momenta $m$ of the fields radiated from the black hole, where the distribution for fermions is given by FORMULA
697
1005.3615
7,742,250
2,010
5
20
false
true
1
LAW
In this paper we have performed a systematic analysis of the future constraints on several parameters achievable from CMB experiments. Aside from the $5$ parameters of the standard $\Lambda$-CDM model, we have considered new parameters mostly related to quantities which can be probed in a complementary way in the laboratory and/or with astrophysical measurements. In particular, we found that the Planck experiment will provide bounds on the sum of the neutrino masses $\Sigma m_{\nu}$ that could definitively confirm or rule out the Heidelberg-Moscow claim of a detection of an absolute neutrino mass scale. Planck+ACTPol could reach sufficient sensitivity for a robust detection of the neutrino mass for an inverted hierarchy, while CMBPol should also be able to detect it for a direct mass hierarchy. The comparison of Planck+ACTPol constraints on the baryon density, $N_{eff}$ and $Y_p$ with the complementary bounds from BBN will provide a fundamental test for the whole cosmological scenario. CMBPol could have a very important impact on understanding the epoch of neutrino decoupling. Moreover, the primordial helium abundance can be constrained with an accuracy equal to that of current astrophysical measurements but with much better control of systematics. Constraints on fundamental constants can be achieved at a level close to laboratory constraints. Such overlap between cosmology and other fields of physics and astronomy is one of the most interesting aspects of future CMB research.
1,494
1005.3808
7,744,194
2,010
5
20
true
false
3
MISSION, MISSION, MISSION
The evolution of the abundance of a thermal relic $X$ is governed by the Boltzmann equation FORMULA where $n_X$ is the number density of the dark matter particles, $n_X^{\text{eq}}$ is its value in equilibrium, $H$ is the Hubble parameter, and $\langle \sigma_{\text{an}}v_{\text{rel}}\rangle$ is the annihilation cross section multiplied by the relative velocity, averaged over the dark matter velocity distribution. Changing variables from $t \to x=m_X/T$ and $n_X \to Y=n_X/s$, where $s$ is the entropy density, the Boltzmann equation becomes FORMULA where $m_{\text{Pl}}= G_N^{-1/2} \simeq 1.2 \times 10^{19} \text{ GeV}$ is the Planck mass, and $g_{*s}$ and $g_*$ are the effective relativistic degrees of freedom for the entropy and energy density, respectively. In this section we will assume that the particles into which $X$ annihilates are in thermal equilibrium at the time of freeze-out. We will revisit this issue in Sec. [3.2], where we show that this requirement leads to significant constraints if the dominant annihilation is to the dark force carriers.
1,056
1005.4678
7,753,943
2,010
5
25
true
true
1
UNITS
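The freeze-out behaviour encoded in the dimensionless Boltzmann equation of the passage above can be illustrated numerically. This is a toy sketch under stated assumptions, not the paper's calculation: the constant `lam` lumps together the $m_{\rm Pl}\, m_X \langle\sigma v\rangle$ and $g_*$ prefactors, and the equilibrium normalization is schematic:

```python
import math

def relic_Y(lam, x_start=1.0, x_end=100.0, n=20000):
    """Toy freeze-out: integrate dY/dx = -(lam/x**2) * (Y**2 - Yeq**2)
    with a schematic non-relativistic equilibrium Yeq ~ x**1.5 * exp(-x).
    A semi-implicit update is used because the early-time evolution is stiff."""
    def yeq(x):
        return 0.145 * x ** 1.5 * math.exp(-x)
    dx = (x_end - x_start) / n
    x, Y = x_start, yeq(x_start)
    for _ in range(n):
        k = dx * lam / x ** 2
        Y = (Y + k * yeq(x) ** 2) / (1 + k * Y)  # implicit in Y, positivity-preserving
        x += dx
    return Y

# Stronger annihilation (larger lam) keeps X in equilibrium longer -> smaller relic
print(relic_Y(1e7), relic_Y(1e9))
```

The qualitative lesson matches the text: the final abundance is set by when the annihilation term can no longer keep up with the expansion, so a larger effective cross section gives a smaller relic $Y$.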
We see similar improvements if we parametrize growth using $\gamma$ as in Eq. (REF). Fig. REF shows that the constraints improve significantly if we consider a $\Lambda$CDM model rather than the more general wCDM model. Adding Planck data would make the measurements even stronger and break the degeneracy between $\gamma$ and $h$ (for our treatment of the Planck Fisher matrix see App. [9]). With *Euclid* and Planck measurements combined, $\gamma$ can be measured to a precision of 7% in wCDM and 4% in $\Lambda$CDM, while $h$ can be measured to a precision of 2% in wCDM and 1.5% in $\Lambda$CDM.
639
1006.0609
7,772,804
2,010
6
3
true
false
3
MISSION, MISSION, MISSION
The DETF Planck Fisher matrix is computed assuming GR, and to use it with our galaxy survey Fisher matrix we first have to generalize it to arbitrary $\gamma\neq 0.55$. To do this we use the fact that CMB experiments measure the amplitude of fluctuations at the last scattering surface -- $\sigma_8(z=1100)$ -- which is related to the amplitude of density fluctuations today -- $\sigma_8(z=0)$ -- through FORMULA where $G_z$ depends on $\gamma$ and other cosmological parameters through Eq. (REF). The Fisher matrix elements of $\sigma_{8,1100}$ and $\sigma_{8,0}$ are related by FORMULA and the Fisher matrix elements (and cross-correlation terms) on $\gamma$ are FORMULA We add a row and column corresponding to $\gamma$ to the DETF Planck Fisher matrix and fill it with elements computed from Eqns. (REF) and (REF). Since our fiducial cosmology has $\gamma=0.55$, the other matrix elements do not change. The resulting $9\times 9$ matrix is a Fisher matrix of the Planck survey for general $\gamma$. This procedure does not account for the fact that a different value of $\gamma$ would also result in a slightly different late-time integrated Sachs-Wolfe effect and would bias the estimate of $\sigma_8(0)$. We expect, however, this effect to be small as long as $\gamma$ is within a reasonable range ($\gamma \simeq 0.2 - 1.0$) of its fiducial GR value.
1,347
1006.0609
7,772,825
2,010
6
3
true
false
3
MISSION, MISSION, MISSION
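The row-and-column extension described in the passage above is an instance of Fisher-matrix reparametrization, $F' = J^{T} F J$. A generic sketch follows; the Jacobian entries below are placeholders for illustration, not the paper's actual derivatives:

```python
import numpy as np

def reparametrized_fisher(F_old, J):
    """Fisher matrix under a change of parameters: F_new = J^T F_old J,
    where J[i, j] = d(old parameter i) / d(new parameter j)."""
    return J.T @ F_old @ J

# Toy example: 2 old params (sigma8_1100, omega_m) mapped onto
# 3 new params (sigma8_0, omega_m, gamma); derivative values are placeholders.
F_old = np.array([[4.0, 0.5],
                  [0.5, 9.0]])
J = np.array([[0.8, 0.0, -0.3],
              [0.0, 1.0,  0.0]])
F_new = reparametrized_fisher(F_old, J)
print(F_new)
```

The new matrix stays symmetric, and a parameter that the old experiment did not constrain directly (here $\gamma$) picks up information only through its effect on the measured quantities, exactly as in the text's $\sigma_{8,1100}$ construction.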
From a logical viewpoint, if the gravitational force represents an emergent force rather than a fundamental one, the Newtonian constant $G$ no longer plays a fundamental role; instead, it should be considered a quantity deduced from more basic constants. Conventionally, the gravitational constant $G$ is one of the five "God-given" ingredients, *i.e.*, $c$, $G$, $\hbar$, $k_B$, and $1/4\pi \epsilon_0$ (where $\epsilon_0$ is the permittivity of free space), used to construct the Planck units [CIT]. In 1899, Planck took the above five constants as bases and elegantly simplified recurring algebraic expressions in physics. This nontrivial non-dimensionalization has conceptually profound significance for theoretical physics. There are a number of basic quantities in this unit system, such as the Planck length $l_{\rm P} \equiv \sqrt{G\hbar/c^3} \simeq 1.6 \times 10^{-35}$ m, the Planck time $t_{\rm P} \equiv \sqrt{G\hbar/c^5} \simeq 5.4 \times 10^{-44}$ s, the Planck energy $E_{\rm P} \equiv \sqrt{\hbar c^5/G} \simeq 2.0 \times 10^{9}$ J, and the Planck temperature $T_{\rm P} \equiv \sqrt{\hbar c^5/Gk_B^2} \simeq 1.4 \times 10^{32}$ K [CIT].
1,158
1006.3031
7,798,235
2,010
6
15
false
true
6
UNITS, PERSON, UNITS, UNITS, UNITS, UNITS
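The Planck-unit values quoted in the passage above follow directly from the listed constants; a quick numerical check (SI values rounded to four significant figures):

```python
import math

# SI values (CODATA, rounded)
G = 6.674e-11      # gravitational constant [m^3 kg^-1 s^-2]
hbar = 1.055e-34   # reduced Planck constant [J s]
c = 2.998e8        # speed of light [m/s]
k_B = 1.381e-23    # Boltzmann constant [J/K]

l_P = math.sqrt(G * hbar / c**3)             # Planck length  ~1.6e-35 m
t_P = math.sqrt(G * hbar / c**5)             # Planck time    ~5.4e-44 s
E_P = math.sqrt(hbar * c**5 / G)             # Planck energy  ~2.0e9 J
T_P = math.sqrt(hbar * c**5 / (G * k_B**2))  # Planck temp.   ~1.4e32 K
print(f"{l_P:.2e} m, {t_P:.2e} s, {E_P:.2e} J, {T_P:.2e} K")
```

Each combination is the unique product of powers of the base constants with the right dimensions, which is the non-dimensionalization the text describes.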
CONTEXT: Water vapour maser emission from evolved oxygen-rich stars remains poorly understood. Additional observations, including polarisation studies and simultaneous observation of different maser transitions, may ultimately lead to greater insight. AIMS: We aim to elucidate the nature and structure of the VY CMa water vapour masers, in part by observationally testing a theoretical prediction of the relative strengths of the 620.701 GHz and the 22.235 GHz maser components of ortho water vapour. METHODS: In its high-resolution mode (HRS), the Herschel Heterodyne Instrument for the Far Infrared (HIFI) offers a frequency resolution of 0.125 MHz, corresponding to a line-of-sight velocity of 0.06 km/s, which we employed to obtain the strength and linear polarisation of maser spikes in the spectrum of VY CMa at 620.701 GHz. Simultaneous ground-based observations of the 22.235 GHz maser with the Max-Planck-Institut f\"ur Radioastronomie 100-meter telescope at Effelsberg provided a ratio of 620.701 GHz to 22.235 GHz emission. RESULTS: We report the first astronomical detection of water vapour maser emission at 620.701 GHz. In VY CMa both the 620.701 and the 22.235 GHz polarisation are weak. At 620.701 GHz the maser peaks are superposed on what appears to be a broad emission component, jointly ejected asymmetrically from the star. We observed the 620.701 GHz emission at two epochs 21 days apart, both to measure the potential direction of linearly polarised maser components and to obtain a measure of the longevity of these components. Although we do not detect significant polarisation levels in the core of the line, they rise up to approximately 6% in its wings.
1,690
1007.0905
7,842,985
2,010
7
6
true
false
1
MPS
The perihelion shift of bound orbits of massive particles can be either negative or positive depending on the particle's energy, and it decreases with an increasing ratio of the symmetry breaking scale to the Planck mass. Moreover, it increases with an increasing Higgs to gauge boson mass ratio. The light deflection by a cosmic string increases with both the ratio of the symmetry breaking scale to the Planck mass and the ratio of the Higgs to the gauge boson mass. Since one of the possible detections of cosmic strings would be by light deflection (i.e. gravitational lensing), our results can be compared with possible observational data to make predictions about the symmetry breaking scale at which the cosmic string formed, as well as about the ratio of the corresponding Higgs and gauge boson masses of the underlying field theory.
828
1007.0863
7,843,522
2,010
7
6
false
true
2
UNITS, UNITS
At the Planck scale, $\mathrm{M}_{\mathrm{P}}=2.4\cdot 10^{18}$ GeV, gravitational effects are important. The corrections to the Higgs boson mass in the Standard Model would have to be fine-tuned to one part in $10^{16}$ to give a Higgs mass of the order of the electroweak scale if the Standard Model were to be valid up to this scale. As supersymmetric models contain two scalars for every fermion, the quadratic divergences cancel and only the logarithmic parts remain. The fermion masses, including radiative corrections, are logarithmically divergent. Including additionally soft supersymmetry breaking, the corrections to the Higgs boson mass qualitatively take on the following form ($\mathrm{m_{soft}}$ is the mass splitting between the fermion and the scalars) [CIT]: FORMULA If $\lambda\sim 1$ and $\Lambda$ is the Planck scale, then for a soft supersymmetry breaking mass of 1 TeV the correction to the Higgs boson mass will be about 500 GeV. Since the correction increases roughly linearly with $\mathrm{m_{soft}}$, the masses of at least some of the supersymmetric particles must be less than about 1 TeV.
1,108
1007.1321
7,848,567
2,010
7
8
false
true
2
UNITS, UNITS
First, let us show that the noncommutative spacetime (REF) introduces a new kind of duality between gauge theory and gravity [CIT]. In order to illuminate the issue in a broad context, let us return to the system (REF) of physical constants. We believe that all four interactions in Nature (the gravitational, electromagnetic, weak and strong forces) will be unified into a single force at the Planck scale (REF). So it may be more natural to treat gauge theory on an equal footing with gravity in the system (REF), which is missing the gauge-theory counterpart. For this reason, consider the quartet of physical constants obtained by adding a coupling constant $e$, which is the electric charge but will sometimes be denoted $g_{YM}$ to refer to a general gauge coupling constant. Using the symbol $L$ for length, $T$ for time, $M$ for mass, and writing $[X]$ for the dimensions of a physical quantity $X$, we have the following in $D$ dimensions: FORMULA A remarkable point of the system (REF) is that it specifies the following intrinsic scales independently of the dimension [CIT]: FORMULA From the four-dimensional case, where $e^2/\hbar c \approx 1/137$, we can see that the scales in (REF) are not so different from the Planck scales in (REF).
1,246
1007.1795
7,853,357
2,010
7
11
false
true
2
UNITS, UNITS
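The numerical claim $e^2/\hbar c \approx 1/137$ in the passage above is easy to verify in Gaussian (CGS) units, where the combination is dimensionless; a quick sketch:

```python
# Gaussian (CGS) values, rounded
e = 4.803e-10     # electron charge [esu]
hbar = 1.055e-27  # reduced Planck constant [erg s]
c = 2.998e10      # speed of light [cm/s]

alpha = e**2 / (hbar * c)  # fine-structure constant, dimensionless
print(1 / alpha)
```

The result is close to 137, which is why the gauge-coupling scales built from $\{e, \hbar, c, G\}$ differ from the Planck scales only by modest powers of $\alpha$.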
**Acknowledgements.** We wish to thank S. Solanki for the opportunity to work at the Max Planck Institute for Aeronomy, for making available the FTS observation data, and for useful comments; S. Ploner for his thorough support of the author during her stay at the Max Planck Institute for Aeronomy and for useful discussions; M. Schüssler for discussion of the results; and N. V. Kharchenko for her useful advice on statistical techniques. We also wish to thank the referee for useful critical comments. The research described in this publication is part of an international cooperation program; it was made possible in part by Grant No. CLG97501 from NATO and Grant No. 00084 from INTAS.
697
1007.3377
7,871,129
2,010
7
20
true
false
2
MPS, MPS
Seiberg's prescription tells us that if we want to find the DLCQ of M-theory compactified on a torus or K3 manifold, we should study the world-volume Lagrangian of D0-branes moving on that manifold. If the manifold has a size of order the 11D Planck scale, then it is very small in string units, and we should perform a T-duality transformation to find a description that is under greater control. For a torus of fewer than four dimensions, this gives us SYM theory compactified on the dual torus. These are all finite theories and the prescription is unambiguous. Many exact results, including some famous string dualities, can be derived from this prescription, and they agree with calculations or conjectures that one already had in supergravity or string theory. Other calculations, not protected by supersymmetry non-renormalization theorems, are only supposed to be correct when one takes the $N \rightarrow \infty$ limit, keeping only states whose light-cone energy scales like $\frac{1}{N}$.
980
1007.4001
7,878,450
2,010
7
22
false
true
1
UNITS
One simple example is the $f(R)$ theories of gravity with $f(0)=0$, which also have the Schwarzschild metric as a solution but with an effective gravitational constant scaled as $G_N^{-1}\to f'(0)G_N^{-1}$, so that the effective Planck length also gets renormalized, to $L_{eff}^{-2}=f'(0)L_P^{-2}$. The Wald entropy will now be $S_{Wald}=(1/4)(f'(0)A/L_P^2)$. (There is even a claim that the Wald entropy is always one quarter of the area when measured in units of the effective coupling constant $G_{eff}$; see [CIT].) If we now regularize the divergence in the entanglement entropy with the renormalized Planck length, then the Wald and entanglement entropies will match. To implement this idea rigorously, we need a regularization prescription *for the kernel* obtained by extending the ideas of [CIT] to a general theory of gravity. This question is under investigation.
853
1007.5066
7,892,412
2,010
7
28
false
true
2
UNITS, UNITS
Upcoming imaging surveys such as the Large Synoptic Survey Telescope will repeatedly scan large areas of sky and have the potential to yield million-supernova catalogs. Type Ia supernovae are excellent standard candles and will provide distance measures that suffice to detect mean pairwise velocities of their host galaxies. We show that when combining these distance measures with photometric redshifts for either the supernovae or their host galaxies, the mean pairwise velocities of the host galaxies will provide a dark energy probe which is competitive with other widely discussed methods. Adding information from this test to type Ia supernova photometric luminosity distances from the same experiment, plus the cosmic microwave background power spectrum from the Planck satellite, improves the Dark Energy Task Force Figure of Merit by a factor of 1.8. Pairwise velocity measurements require no additional observational effort beyond that required to perform the traditional supernova luminosity distance test, but may provide complementary constraints on dark energy parameters and the nature of gravity. Incorporating additional spectroscopic redshift follow-up observations could provide important dark energy constraints from pairwise velocities alone. Mean pairwise velocities are much less sensitive to systematic redshift errors than the luminosity distance test or weak lensing techniques, and also are only mildly affected by systematic evolution of supernova luminosity.
1,488
1008.2560
7,929,307
2,010
8
16
true
false
1
MISSION
Funding for the SDSS and SDSS-II has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, the U.S. Department of Energy, the National Aeronautics and Space Administration, the Japanese Monbukagakusho, the Max Planck Society, and the Higher Education Funding Council for England. The SDSS Web Site is http://www.sdss.org/.
382
1008.2832
7,932,797
2,010
8
17
true
false
1
MPS
In short, the proposal I am making is this: that spacetime foam strongly focuses geodesics at the Planck scale, leading to the BKL behavior predicted by the strongly coupled Wheeler-DeWitt equation. If this suggestion proves to be correct, it leads to a novel and interesting picture of the small-scale structure of spacetime. At each point, the dynamics picks out a "preferred" spatial direction, leading to approximately (1+1)-dimensional local physics. The preferred directions are presumably determined classically by initial conditions, but because of the chaotic behavior of BKL bounces, they are quickly randomized; in the quantum theory, they are picked out by an initial wave function, but again one expects evolution to scramble any initial choices. From point to point, these preferred directions vary continuously, but they oscillate rapidly [CIT]. Space at a fixed time is thus threaded by rapidly fluctuating lines, and spacetime by two-surfaces; the leading behavior of the physics is described by an approximate dimensional reduction to these surfaces.
1,068
1009.1136
7,974,231
2,010
9
6
false
true
1
UNITS
The high Galactic latitude sky at millimeter and submm wavelengths contains significant cosmological information about the early Universe (in terms of the cosmic microwave background) but also about the process of structure formation in the Universe, from the far infrared background produced by early galaxies and the Sunyaev-Zeldovich effect in clusters of galaxies. As the Planck mission will produce full-sky maps in this frequency range, deeper maps of selected low-foreground patches of the sky can provide complementary and important information. Here we analyze the performance of a balloon-borne survey covering a $10^\circ \times 10^\circ$ patch of the sky with a resolution of a few arcminutes and very high pixel sensitivity. We simulate the different components of the mm/submm sky (i.e., CMB anisotropies, SZ effect, radio and infrared sources, far infrared background, and interstellar dust) using current knowledge about each of them. We then combine them, adding detector noise, to produce detailed simulated observations in four observational bands ranging from 130 to 500 GHz. Finally, we analyze the simulated maps and estimate the performance of the instrument in extracting the relevant information about each of the components. We find that the CMB angular power spectrum is accurately recovered up to $\ell \sim 3000$. Using the Sunyaev-Zel'dovich effect, most of the galaxy clusters present in our input map are detected (60% efficiency overall). Our results also show that much stronger constraints can be placed on far infrared background models.
1,560
1009.2865
7,995,603
2,010
9
15
true
false
1
MISSION
The classical background arising from the saddle point contribution to generic amplitudes post selected on $\cal J^+$ appears to break down at an invariant Planck distance from the horizon where the dominant contribution to the Hawking radiation occurs. One might have objected that the transplanckian fluctuations of the weak energy-momentum tensor in the vicinity of the horizon are irrelevant as compensation is expected to occur when an incoming object feels transplanckian effects on *both* sides of the horizon [CIT]. But, independently of the fact that the present analysis does not require information from inside the horizon, the fluctuations are inscribed in the geometry itself as seen from Eq.(REF), leading to inconsistency. We take the inadequacy of the classical geometry to describe such huge fluctuations as an indication that generic post-selection destroys the event horizon and that, for such amplitudes, classical geometry must be abandoned in favor of a genuine quantum description, which unfortunately is at present not available operationally.
1,067
1009.6190
8,034,098
2,010
9
30
false
true
1
UNITS
We discuss the phenomenology of recently proposed holographic models of inflation, in which the very early universe is non-geometric and is described by a dual three-dimensional quantum field theory (QFT). We analyze models determined by a specific class of dual QFTs and show that they have the following universal properties: (i) they have a nearly scale invariant spectrum of small amplitude primordial fluctuations, (ii) the scalar spectral index runs as alpha_s = -(n_s-1), (iii) the three-point function of primordial scalar perturbations is of exactly the factorizable equilateral form with f_nl^eq=5/36. These properties hold irrespective of the details (e.g. field content, strength of interactions, etc.) of the dual QFT within the class of theories we analyze. The tensor-to-scalar ratio is determined by the field content of the dual QFT and does not satisfy the slow-roll consistency relations. Observations from the Planck satellite should be able to confirm or exclude these models.
1,002
1010.0244
8,035,095
2,010
10
1
true
true
1
MISSION
In the pre-big-bang scenario of string cosmology, the quantum fluctuations of the metric tensor tend to be amplified with a spectrum which grows rapidly with frequency. This has two important physical consequences. First, the rapidly growing spectrum of tensor metric perturbations leads to a relic background of pre-big-bang gravitons which peaks at high frequency [CIT]. This signature should be easily accessible to planned experiments for Earth-based detectors (e.g. LIGO, VIRGO) and gravitational antennas operating in space (e.g. LISA, BBO, DECIGO). At the low frequency scales relevant to the observed CMB anisotropy, the spectrum is in contrast strongly suppressed, and a possible contribution to the CMB polarization should be completely negligible [CIT]. Second, the non-standard production of CMB anisotropies in the pre-big-bang scenario induces a small non-Gaussianity in the CMB spectrum, which may be detectable in the near future: with the WMAP 8-year mission results one expects a 20% improvement on the bounds on non-Gaussianity, while ESA's Planck satellite can yield a factor of about 4.
1,113
1010.3420
8,071,492
2,010
10
17
false
true
1
MISSION
Tree diagrams are generally regarded as the leading contribution to scattering amplitudes in the limit where the Planck constant $\hbar \to 0$ [CIT]. This is also the limit where one expects the laws of classical physics to apply. The fact that there are no classical bound states would suggest that there can be no Born level bound states, either. However, the relation between the $\hbar \to 0$ limit and classical physics is not straightforward [CIT], as I shall next demonstrate for the harmonic oscillator [CIT].
517
1010.5431
8,093,803
2,010
10
26
false
true
1
CONSTANT
We first make projections for reconstruction utilizing expectations from both the Planck Surveyor and CMBPol for the case in which neither isocurvature perturbations nor non-Gaussianities are observed. We do not include the possibility of a precision measurement of $n_T$ at this time. This can be considered the worst-case scenario in which the degeneracy persists, and shall serve as a baseline against which to compare different observational outcomes. For Planck, we assume 68% CL detections of $r$ ($r \gtrsim 0.01$, $\Delta r \sim 0.03$) [CIT], $n_s$ ($\Delta n_s \sim 0.0038$), and $dn_s/d{\rm ln}k$ ($\Delta dn_s/d{\rm ln}k \sim 0.005$) [CIT]. For CMBPol, we assume 68% CL detections of $r$ ($r \gtrsim 10^{-4}$, $\Delta r \sim r/10$), $n_s$ ($\Delta n_s \sim 0.0016$), and $dn_s/d{\rm ln}k$ ($\Delta dn_s/d{\rm ln}k \sim 0.0036$) [CIT]. We will assume this base set of observations throughout the remainder of this analysis, unless otherwise indicated. The tensor spectral index will not be adequately constrained with these CMB missions and the modified consistency relation will not be useful for constraining curvaton models.
1,137
1011.0434
8,110,752
2,010
11
1
true
true
2
MISSION, MISSION
In the near future it will be possible to improve our BBN constraints on the lepton number of the universe. On one hand, thanks to the better sensitivity of new neutrino experiments, either long-baseline or reactor, we expect to have a very stringent bound on $\theta_{13}$ or eventually its measurement (see e.g. [CIT]). Such results would lead to a more restrictive BBN analysis of primordial asymmetries. On the other hand, in the next couple of years data on the anisotropies of the cosmic microwave background from the Planck satellite [CIT] will greatly reduce the allowed range of $N_{\rm eff}$, since the forecast sensitivity is of the order of $0.4$ at $2\sigma$ [CIT]. Indeed, suppose that Planck data confirm a value of $N_{\rm eff}> 3$. An excess of radiation up to $0.4-0.5$ could imply the presence of a significant degeneracy in the neutrino sector, but this would depend on the result of the $\theta_{13}$ measurement. A vanishing $\theta_{13}$ would allow such a possibility, but a $\theta_{13}$ measured in the next generation of experiments *would imply* the presence of extra degrees of freedom other than active neutrinos. In the case of a much larger result from Planck, namely $N_{\rm eff} > 4$, it would be impossible to explain such a result in terms of primordial neutrino asymmetries alone, and alternative cosmological scenarios with additional relativistic species, such as sterile neutrinos (see for instance [CIT]), would be strongly favored.
1,462
1011.0916
8,116,739
2,010
11
3
true
true
3
MISSION, MISSION, MISSION
The GZK cutoff for a proton component of the UHECR spectrum would, if demonstrated, imply for $\alpha$ (proton) an upper bound $\approx 10^{-6}$ if $a$ is the Planck length [CIT]. For quarks and gluons, this bound should probably be multiplied by $\approx N^2$, where $N$ is the number of effective constituents of the incoming protons. A $10^{20}$ eV iron nucleus would basically amount to a set of nucleons with energies $\simeq 2\times 10^{18}$ eV. At these energies, the nucleon mass terms still dominate over the QDRK deformations for $\alpha$ (nucleon) $<$ 1 and $a$ = Planck length. Furthermore, the validity of present algorithms to estimate UHECR energy is not really established. It therefore seems necessary: i) to clearly identify a UHECR component lighter than iron; ii) to better understand UHECR interaction with the atmosphere, as well as the internal structure of UHECR nucleons; iii) to further explore and study UHECR sources and acceleration.
957
1011.4889
8,160,561
2,010
11
22
true
true
2
UNITS, UNITS
As the typical quantum gravitational scale, *i.e.*, the Planck scale $E_\mathrm{Planck}=\sqrt{c\hbar/G}c^2$, is practically unattainable in conventional accelerator experiments, people have turned to searching for phenomenologically accessible effects of quantum gravity [CIT] at relatively low energy scales. One of them is the possibility of Lorentz invariance violation (LIV). The possibility that quantum gravity may leave a tiny imprint of LIV at relatively low energies was observed by many authors from various approaches to quantum gravity. These include string field theory, where the tachyon field may induce an instability of the naive Lorentz invariant vacuum and translate it into the potential of a tensor field. As a consequence, the tensor field acquires a vacuum expectation value [CIT] and breaks Lorentz invariance. Later, by incorporating various LIV coefficients into the standard model, the theory was developed into an effective field theory (EFT), called the standard model extension (SME) [CIT]. Other approaches include spin network calculations in loop gravity [CIT], deformed special relativity [CIT], the foamy structure of space-time [CIT], noncommutative field theory [CIT], emergent gravity [CIT], and the recently suggested Hořava-Lifshitz gravity [CIT]. All of them suggest that tiny LIV may be a signature of new physics.
1,342
1011.5074
8,163,382
2,010
11
23
false
true
2
UNITS, UNITS
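For a sense of scale, the Planck energy $E_\mathrm{Planck}=\sqrt{c\hbar/G}\,c^2=\sqrt{\hbar c^5/G}$ quoted above can be evaluated numerically with CODATA SI constants (a quick sketch, nothing paper-specific):

```python
import math

# SI constants (CODATA values).
hbar = 1.054571817e-34   # J s
c = 2.99792458e8         # m / s
G = 6.67430e-11          # m^3 kg^-1 s^-2

# E_Planck = sqrt(hbar c^5 / G), identical to sqrt(c*hbar/G) * c^2.
E_planck_J = math.sqrt(hbar * c**5 / G)
E_planck_GeV = E_planck_J / 1.602176634e-10  # 1 GeV in joules
print(f"E_Planck ~ {E_planck_GeV:.3g} GeV")  # ~1.22e19 GeV
```

The ~$10^{19}$ GeV result makes concrete why this scale is "practically unattainable" at accelerators.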
where $d$ is the distance, $\rho_{\rm g}$ the bulk density of the grain material, $F_\lambda$ the observed flux density at wavelength $\lambda$, $B$ the Planck function at $\lambda$ and grain temperature $T_{\rm g}$, and $Q_{\rm abs}$ the grain absorption coefficient. If $Q_{\rm abs} \propto \lambda^{-\alpha}$ for small dielectric absorbers, it can be shown that
364
1011.5988
8,174,183
2,010
11
27
true
false
1
LAW
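The temperature dependence in the dust-mass estimate above enters through the Planck function $B_\lambda(T_{\rm g})$. A minimal sketch of evaluating it, with illustrative (assumed) values $\lambda = 250\,\mu$m and $T_{\rm g} = 15$ K typical of cold dust:

```python
import math

# Physical constants (SI).
h = 6.62607015e-34   # J s
c = 2.99792458e8     # m / s
kB = 1.380649e-23    # J / K

def planck_lambda(lam, T):
    """Spectral radiance B_lambda(T) in W m^-3 sr^-1."""
    # expm1 keeps the Rayleigh-Jeans limit (h c / lam k T << 1) accurate.
    return (2 * h * c**2 / lam**5) / math.expm1(h * c / (lam * kB * T))

B = planck_lambda(250e-6, 15.0)  # 250 um, 15 K (illustrative values)
print(f"B_250um(15 K) = {B:.3e} W m^-3 sr^-1")
```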
We now make an interesting observation for $k=-1$ case. For a certain choice of parameters which lead to a type III singularity in the past, we find that there exists an additional classical branch at the small scale factors. The additional branch faces a big bang singularity in the past of classical evolution and is free from singularity in future evolution (as is shown by the dashed curves for the Hubble rate and the Ricci scalar in Fig.,REF). However the additional branch occurs in LQC when the scale factor is less than the Planck length. Since the length scale involved is below where we expect the effective dynamics in LQC to be valid, a more detailed analysis is needed, by including modifications pertaining to the inverse scale factor, in order to understand the physics emerging from LQC in this special case. Never the less, if we assume the validity of the effective Hamiltonian in this regime, then LQC resolves the past singularity in the additional branch (as depicted by the solid curves for the Hubble rate and the Ricci scalar in Fig.,REF).
1,064
1012.1307
8,199,698
2,010
12
6
true
false
1
UNITS
We thank Raúl Angulo and Carlton Baugh for providing us with the L-BASICC II simulations. ABA acknowledges the Ph.D fellowship of the International Max Planck Research School in the OPINAS group at MPE. This research was supported by the DFG cluster of excellence "Origin and Structure of the Universe". This publication contains observational data obtained at the ESO La Silla observatory. We wish to thank the ESO Team for their support.
439
1012.1322
8,200,207
2,010
12
6
true
false
1
MPS
Perhaps most important for us will be the form of the scalar potential. In units with $M_p = 1$, where $M_p$ is the reduced Planck mass, FORMULA the scalar potential takes the form: FORMULA $D_i \phi \equiv F_i$ is an order parameter for SUSY breaking: FORMULA From the form of the potential, eq. (REF), we see that if supersymmetry is unbroken, space time is Minkowski if $W=0$, and AdS if $W \ne 0$. If supersymmetry is broken, and the space-time is approximately flat space ($\langle V \rangle = 0$), then FORMULA
517
1012.2836
8,215,948
2,010
12
13
false
true
1
UNITS
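For reference, the textbook $N=1$ supergravity expressions that the FORMULA placeholders above presumably stand for (a sketch from standard supergravity with $M_p = 1$, not reconstructed from this particular paper):

```latex
V = e^{K}\left( K^{i\bar{\jmath}}\, D_i W \,\overline{D_{j} W} - 3\,\lvert W\rvert^{2} \right),
\qquad
D_i W = \partial_i W + (\partial_i K)\, W .
```

With $D_iW=0$ and $W\neq 0$ this gives $V=-3e^{K}|W|^2<0$ (AdS); broken SUSY with $\langle V\rangle=0$ requires $K^{i\bar\jmath}F_i\overline{F}_{\bar\jmath}=3|W|^2$, matching the statements in the text.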
Funding for the SDSS and SDSS-II has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, the U.S. Department of Energy, the National Aeronautics and Space Administration, the Japanese Monbukagakusho, the Max Planck Society, and the Higher Education Funding Council for England. The SDSS Web Site is http://www.sdss.org/. The SDSS is managed by the Astrophysical Research Consortium for the Participating Institutions. The Participating Institutions are the American Museum of Natural History, Astrophysical Institute Potsdam, University of Basel, University of Cambridge, Case Western Reserve University, University of Chicago, Drexel University, Fermilab, the Institute for Advanced Study, the Japan Participation Group, Johns Hopkins University, the Joint Institute for Nuclear Astrophysics, the Kavli Institute for Particle Astrophysics and Cosmology, the Korean Scientist Group, the Chinese Academy of Sciences (LAMOST), Los Alamos National Laboratory, the Max-Planck-Institute for Astronomy (MPIA), the Max-Planck-Institute for Astrophysics (MPA), New Mexico State University, Ohio State University, University of Pittsburgh, University of Portsmouth, Princeton University, the United States Naval Observatory, and the University of Washington.
1,311
1012.3445
8,223,741
2,010
12
15
true
false
3
MPS, MPS, MPS
Now let us turn to the primordial perturbation in this model. Using the slow-roll approximation, we obtain FORMULA We thus arrive at FORMULA and FORMULA According to WMAP observations, ${\cal P}_{{\cal R}}= 2.4\times 10^{-9}$ [CIT], and hence FORMULA The tensor-to-scalar ratio is given by FORMULA For ${\cal N}_{\rm COBE}=60$ this yields $r \simeq 0.14$, which is large enough to be detected by the forthcoming observation by PLANCK [CIT].
440
1012.4238
8,233,141
2,010
12
20
true
true
1
MISSION
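As a reference point (for the textbook quadratic potential, not necessarily the model of this excerpt), slow roll gives $n_s \simeq 1-2/{\cal N}$ and $r \simeq 8/{\cal N}$, which for ${\cal N}=60$ lands close to the $r\simeq 0.14$ quoted above:

```python
# Textbook quadratic inflation (V = m^2 phi^2 / 2), slow-roll estimates.
# These are standard results, used only as a sanity check against the text.
N = 60                 # e-folds at COBE scales
n_s = 1 - 2 / N        # spectral index
r = 8 / N              # tensor-to-scalar ratio
print(f"n_s ~ {n_s:.3f}, r ~ {r:.3f}")  # r ~ 0.133, comparable to 0.14
```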
Using this isothermal $\beta$-model, we can then calculate a map of the $y$ parameter on the sky along the line of sight by solving the integral in equation (5) analytically (Birkinshaw et al. 1999) FORMULA where $\beta>1/3$, $s$ is the projected distance from the centre of the cluster on the sky such that $r^2=s^2+l^2$ and $y_0$ is the central Comptonisation parameter FORMULA The integral of the $y$ parameter over the solid angle $\Omega$ subtended by the cluster is denoted by $Y_{\rm SZ}$, and is proportional to the volume integral of the gas pressure. It is thus a good estimate for the total thermal energy content of the cluster and its mass (see e.g. Bartlett & Silk 1994). Thus determining the normalisation and the slope of the $Y_{\rm SZ}-M$ relation has been the subject of studies of the SZ effect (da Silva et al. 2004; Nagai 2006; Kravtsov 2006; Plagge et al. 2010; Andersson et al. 2011; Arnaud et al. 2010; Planck Collaboration 2011d,e,f,g,h). In particular, Andersson et al. (2011) investigated the $Y_{\rm SZ}-Y_{\rm X}$ scaling relation within a sample of 15 clusters observed by the South Pole Telescope (SPT), Chandra and XMM-Newton and found a slope close to unity ($0.96 \pm 0.18$). Similar studies were carried out by the Planck Collaboration (Planck Collaboration 2011g) using a sample of 62 nearby ($z < 0.5$) clusters observed by both the Planck and XMM-Newton satellites. The results are consistent with predictions from X-ray studies (Arnaud et al. 2010) and the ones presented in Andersson et al. (2011). These studies at low redshifts, where data are available from both X-ray and SZ observations of galaxy clusters, are crucial to calibrate the $Y_{\rm SZ}-M$ relation; such a relation can then be scaled and used to determine masses of SZ-selected clusters at high redshifts in order to constrain cosmology.
1,838
1012.4996
8,244,209
2,010
12
22
true
false
4
MISSION, MISSION, MISSION, MISSION
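The analytic projection of the isothermal $\beta$-model referred to above has a standard closed form (e.g. Birkinshaw 1999); a hedged sketch, assuming that usual form:

```python
# Assumed standard result: projecting n_e ~ (1 + r^2/r_c^2)^(-3*beta/2)
# along the line of sight gives
#   y(s) = y0 * (1 + s^2/r_c^2)**((1 - 3*beta)/2),  valid for beta > 1/3.
def y_profile(s, y0, r_c, beta):
    assert beta > 1.0 / 3.0, "projection integral converges only for beta > 1/3"
    return y0 * (1 + (s / r_c) ** 2) ** ((1 - 3 * beta) / 2)

# Illustrative values: central y = 1e-4, core radius 0.3 Mpc, beta = 2/3.
y_center = y_profile(0.0, 1e-4, 0.3, 2 / 3)   # equals y0 at the centre
y_core = y_profile(0.3, 1e-4, 0.3, 2 / 3)     # y0 * 2**(-1/2) at s = r_c
print(y_center, y_core)
```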
The ESA's Planck satellite[^2] is surveying the sky in nine frequency bands (30, 44, 70 GHz for the Low Frequency Instrument, LFI, and 100, 143, 217, 353, 545, 857 GHz for the High Frequency Instrument, HFI, with the beam FWHM ranging from 33 to 5 arcmin, Planck Collaboration 2011c). At the HFI frequencies it will provide the first all-sky surveys ever, while at LFI frequencies its higher sensitivity and resolution will allow a significant improvement over WMAP (Gold et al. 2011, Planck Collaboration 2006 and 2011c, Leach et al. 2008, Massardi et al. 2009). Planck thus offers a unique opportunity to carry out an unbiased investigation of the spectral properties of radio sources in a poorly explored frequency range, partially inaccessible from the ground.
764
1101.0225
8,257,931
2,010
12
31
true
false
4
MISSION, MISSION, MISSION, MISSION
Since the Universe's expansion rate is $\dot{a}/a$, the condition for the establishment of LTE, stricter than (6.2.8), takes the form: FORMULA If the numbers of particles participating in a given reaction are conserved: FORMULA where for definiteness we set here and below: FORMULA ($t=1$ corresponds to the Planck time), with $n_1=n(1)$ the particle number density at that moment. According to (REF), this choice of normalization of the scale factor corresponds to choosing Planck length units at the Planck time. With this normalization, in the case of the barotropic equation of state (REF), we obtain from (REF): FORMULA and from (REF): FORMULA
627
1101.0364
8,258,671
2,011
1
1
false
true
3
UNITS, UNITS, UNITS
For a more conservative estimate of the probability of our observation of the tiny value of the cosmological constant, let us suppose that the universe is finite. A plausible (though still very highly uncertain) estimate of a finite size would be the size to which the universe would inflate during slow-roll inflation from an inflaton that starts near the Planck density. If the inflaton were a massive scalar field, observations of the fluctuations of the cosmic microwave background give $m \sim 1.5\times 10^{-6}$ [CIT]. Then if the inflation starts with a symmetric bounce on a round three-sphere at density $\rho_0 = 0.5 m^2 \phi_0^2$, the volume at the end of classical slow-roll inflation is approximately [CIT] $[0.09644/(m\rho_0^2)]\exp{(12\pi\rho_0/m^2)} \sim \exp{(17\times 10^{12}\rho_0)}$, which would be $\sim \exp{(17\times 10^{12})}$ if the initial density were the Planck density.
898
1101.1083
8,266,623
2,011
1
5
false
true
2
UNITS, UNITS
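The exponent quoted above can be checked directly: with $m \sim 1.5\times 10^{-6}$ (Planck units), $12\pi/m^2$ indeed comes out near $17\times 10^{12}$, so the volume scales as $\exp(17\times 10^{12}\rho_0)$:

```python
import math

# Inflaton mass in Planck units, as quoted in the text.
m = 1.5e-6

# Coefficient of rho_0 in the exponent exp(12*pi*rho_0 / m^2).
coeff = 12 * math.pi / m**2
print(f"12*pi/m^2 = {coeff:.3e}")  # ~1.7e13, i.e. 17e12 as in the text
```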
We now compare the filter performance in the *WMAP* channels, which are studied in OMCP (see their Figure 11), with the Planck channels in this work (Figure REF), with the use of the RASS cluster sample which is used in both analyses. In OMCP the MF was found to be more sensitive than the UF, but we find the opposite. The velocity detected at $95\%$ CL with the MF (UF) is reduced from $\sim5000$ km/s ($\sim10,000$ km/s) in the *WMAP* channels to $\sim1000$ km/s ($\sim1000$ km/s) in the Planck channels. Considering that the amplitudes of observed bulk flow velocities are a few hundred km/s at scales $r<300 {\rm Mpc h^{-1}}$ (Figure REF), the filters in the *WMAP* channels do not have the sensitivity to measure bulk flow velocities at these scales. The difference between the *WMAP* and the Planck filters is expected: the error due to instrument noise is largely reduced because the noise levels of Planck are much lower than *WMAP*'s; the wider frequency coverage in the SZ-sensitive regime also allows filters in the Planck channels to suppress the thermal SZ bias.
1,065
1101.1581
8,273,102
2,011
1
8
true
false
5
MISSION, MISSION, MISSION, MISSION, MISSION
In sources with still faster variability, the effect of the non-simultaneity of the Planck observations can produce even more bizarre effects. The ERCSC spectrum of the source J1159+2914, also known as the IDV blazar TON599, shows a strong zig-zag shape, dropping by a factor of two between 44 and 70 GHz, followed by another small bump (see Figure REF). This source is known to show very fast variability, and the comparison with Effelsberg data, and IRAM 86 GHz and 143 GHz data taken around the time of the Planck scans, suggests that it had a strong flare at a peak frequency of $\sim$ 50 GHz in the first days of June 2010. This flare must have started after 23 May (when the Effelsberg observations were made), and probably declined again on 7 June when the Planck 100 GHz point was taken. The ERCSC spectrum of the source is a superposition of a quite low state in December 2009 and the high, flaring state in June 2010, except for the 30 and 70 GHz measurements, which were made by Planck for the second time shortly after the last day of data used for the ERCSC, and are therefore not included in the average.
1,106
1101.1721
8,274,338
2,011
1
10
true
false
4
MISSION, MISSION, MISSION, MISSION
[^1]: Planck (http://www.rssd.esa.int/Planck) is a project of the European Space Agency (ESA) with instruments provided by two scientific consortia funded by ESA member states (in particular the lead countries France and Italy), with contributions from NASA (USA) and telescope reflectors provided by a collaboration between ESA and a scientific consortium led and funded by Denmark.
376
1101.2024
8,278,435
2,011
1
11
true
false
1
MISSION
Within the project Galactic Cold Cores we are carrying out Herschel photometric observations of cold interstellar clouds detected with the Planck satellite. The three fields observed as part of the Herschel science demonstration phase (SDP) provided the first glimpse into the nature of these sources. We examine the properties of the dust emission within the fields. We determine the dust sub-millimetre opacity, look for signs of spatial variations in the dust spectral index, and estimate how the apparent variations of the parameters could be affected by different sources of uncertainty. We use the Herschel observations where the zero point of the surface brightness scale is set with the help of the Planck satellite data. We derive the colour temperature and column density maps of the regions and determine the dust opacity by a comparison with extinction measurements. By simultaneously fitting the colour temperature and the dust spectral index values we look for spatial variations in the apparent dust properties. With a simple radiative transfer model we estimate to what extent these can be explained by line-of-sight temperature variations, without changes in the dust grain properties. The analysis of the dust emission reveals cold and dense clouds that coincide with the Planck sources and confirm those detections. The derived dust opacity varies in the range kappa(250um) ~ 0.05-0.2 cm^2/g, higher values being observed preferentially in regions of high column density. The average dust spectral index beta is ~ 1.9-2.2. There are indications that beta increases towards the coldest regions. The spectral index decreases strongly near internal heating sources but, according to radiative transfer models, this can be explained by the line-of-sight temperature variations without a change in the dust properties.
1,832
1101.3003
8,289,984
2,011
1
15
true
false
3
MISSION, MISSION, MISSION
The zero-level offsets in the Herschel maps were established by comparison with the IRAS and Planck data at comparable wavelengths, following the same procedure as described in [CIT]. We compared the Herschel-SPIRE and PACS data with the predictions of a model provided by the Planck collaboration (Planck Core-Team, private communication) and constrained on the Planck and IRIS data [CIT]. The model uses the all-sky dust temperature maps derived from the IRAS 100 $\mu$m and the two highest Planck frequencies to infer the average radiation field intensity for each pixel at the common Planck and IRAS resolution of 5'. The Dustem model [CIT] with the above value for the radiation field intensity was then used to predict the expected brightness in the Herschel-SPIRE and PACS bands, using the nearest available Planck or IRAS band for normalization and taking into account the appropriate color correction in the Herschel filters. The predicted brightness was correlated with the observed maps smoothed to the 5' resolution over the region observed with Herschel and the offsets were derived from the zero intercept of the correlation. We estimate the accuracy of the offset determination to be better than 5%.
1,228
1101.4654
8,308,918
2,011
1
24
true
false
7
MISSION, MISSION, MISSION, MISSION, MISSION, MISSION, MISSION
So far, we have assumed that the underlying cosmological parameters necessary to compute the redshift-space power spectrum are fully known a priori from CMB observations such as PLANCK. However, even a precision CMB measurement leaves some uncertainties in the cosmological parameters due to parameter degeneracies. This can be an important source of error in the theoretical template for the redshift-space power spectrum, and leads to a biased estimate of $D_A$, $H$, and $f$.
488
1101.4723
8,309,672
2,011
1
25
true
false
1
MISSION
Fixed lenses correlate the observed CMB with the gradient of the unlensed CMB and this property can be used to construct quadratic estimators for the lensing deflection field [CIT]. Once reconstructed, the deflection field can be used as another cosmological observable. The majority of the information on lensing comes from modes at the resolution limit of the survey so good angular resolution is essential even to reconstruct large-scale lenses. At the WMAP resolution and sensitivity, the reconstruction is very noisy but cross-correlation studies with galaxy surveys have led to a detection at the $3\sigma$ level [CIT]. A highly significant detection of the deflection power spectrum should be possible with PLANCK's temperature maps (see Fig. REF for forecasts) and detailed analysis of the PLANCK maps is underway. Detections are also expected soon from the ground (e.g., with SPT and ACT). However the statistical noise in temperature reconstructions (due to cosmic variance of the primary anisotropies) is such that the deflection power spectrum will only ever be measured to the cosmic variance limit for multipoles $l < 100$. To do better, and hence increase the lever arm to constrain dark-sector physics, one must use polarization data. PLANCK is too noisy for polarization to add much, but with COrE it improves the reconstruction greatly; see Figs. REF and REF. Most of the information comes from the $E$-$B$ correlations and the reconstructed deflection power should be cosmic-variance limited to $l \approx 500$, roughly a 25-fold increase in the number of reconstructed modes with $S/N > 1$. Significantly, **COrE can mine all of the information in the deflection power spectrum where linear theory is reliable**.
1,732
1102.2181
8,349,174
2,011
2
10
true
false
3
MISSION, MISSION, MISSION
COrE will also be able to test the adiabaticity of the primordial scalar perturbation by probing for the presence of isocurvature perturbations of various types. (See for example [CIT] and references therein.) This is an important point because although the simplest possibility is that the perturbations were completely adiabatic, a myriad of models have been proposed that include some degree of isocurvature perturbations as well and the only way to rule out these models is through better observations [CIT]. COrE will improve constraints on isocurvature perturbations to the total CMB power spectrum. Considering a generic cosmological model with the addition of CDM, neutrino density and neutrino velocity isocurvature modes, a Fisher forecast of COrE shows an improvement of these constraints by approximately a factor of two over PLANCK. The most improved constraints will be those on the contribution of the neutrino density and velocity isocurvatures, which will be more than double that of PLANCK [CIT].
1,014
1102.2181
8,349,199
2,011
2
10
true
false
2
MISSION, MISSION
As mentioned in section [1], another, more realistic, possibility is that the photospheric emission is broadened. We model this scenario with a multicolour blackbody, using the XSPEC model *diskpbb*. This model was developed to describe emission from an accretion disc, but its parameters can also be interpreted in the general case of multicolour blackbody emission. The free parameters of the model are the shape of the blackbody (described by the parameter $p$), the maximum blackbody temperature ($T_{\rm{max}}$) and the normalization. More specifically, the parameter $p$ describes the temperature profile of an accretion disc giving rise to multicolour blackbody emission. Since this is irrelevant in the case of GRBs, we instead introduce the parameter $q$, which relates to the fitted parameter $p$ as $q = 4 -2/p$. Using $q$ we can express the relationship between the flux and the temperature of the single blackbody components (which together make up the multicolour blackbody) as FORMULA where $F_{\mathrm {max}}$ is the flux of the Planck function at $T_\mathrm {{max}}$. As $q \rightarrow \infty$ we approach the case of a single blackbody. We note that the same parameter $q$ is used to describe the multicolour-blackbody model in [CIT]. We also use (REF) to define the average temperature of the multicolour blackbody FORMULA
1,342
1102.4739
8,375,239
2,011
2
23
true
false
1
LAW
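The reparametrisation between the fitted *diskpbb* shape parameter $p$ and the parameter $q = 4 - 2/p$ introduced above is a one-line conversion; a quick sketch (the value $p = 0.75$ is illustrative, not from the text):

```python
# Convert between diskpbb's temperature-profile index p and q = 4 - 2/p.
def p_to_q(p):
    return 4 - 2 / p

def q_to_p(q):
    # inverse of the relation above
    return 2 / (4 - q)

p = 0.75              # illustrative diskpbb value
q = p_to_q(p)
print(q)              # 4 - 2/0.75 = 1.333...
```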
The value of $k_t^*$ directly relates to the best-pivot multipole $\ell_t^*$ by Eq.(REF), and the value of $\ell_t^*$ is obtained by solving the equation in (REF). By using (REF), we obtain the value of $\ell_t^*$ as a function of the input (or true) value of the tensor-to-scalar ratio $r^*$ for EPIC-2m and EPIC-LC, which are plotted in Fig. REF (left panel). We find that, in both cases, the value of $\ell_t^*$ becomes larger with the increasing of $r^*$. For EPIC-2m, we have $\ell_t^*=43$ for $r^*=0.001$, and $\ell^*_t=137$ for $r^*=0.1$. For EPIC-LC, the value of $\ell_t^*$ is smaller than that of EPIC-2m, due to the larger noise level and the larger beam FWHM of EPIC-LC. When $r=0.001$, we have $\ell_t^*=26$, and when $r=0.1$, we have $\ell_t^*=87$. These reflect that gravitational waves in the frequency range $k\sim0.01$Mpc$^{-1}$ will be best constrained by the future CMBPol observations, unless the value of $r$ is extremely small. This is because the main contribution comes from the observation of the peak of $B$-polarization at $\ell\sim80$. We should remember that this is different from the Planck case, where $\ell_t^*\sim 10$, due to the main contribution of the reionization peak of $B$-polarization [CIT].
1,234
1102.4908
8,377,264
2,011
2
24
true
true
1
MISSION
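The correspondence between $k\sim 0.01\,$Mpc$^{-1}$ and the best-pivot multipoles quoted above can be checked with the rough flat-sky relation $\ell \sim k\,\chi_{\rm rec}$ (an assumed approximation; $\chi_{\rm rec}\sim 1.4\times 10^4$ Mpc is the approximate comoving distance to recombination):

```python
# Rough k <-> ell correspondence, ell ~ k * chi_rec (flat-sky approximation).
chi_rec = 1.4e4   # Mpc, approximate comoving distance to recombination
k = 0.01          # Mpc^-1, the scale highlighted in the text

ell = k * chi_rec
print(ell)        # ~140, bracketing the ell* ~ 87-137 values quoted above
```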
Quantum geometry effects modify the geometric, left side of Einstein's equations. In particular, the Friedmann equation becomes FORMULA To compare with the standard Friedmann equation $H^2 = (8\pi/3),\rho$, it is often convenient to use (REF) to write (REF) as FORMULA where $\rho_{\mathrm{crit}}= 3/8\pi\gamma^2\lambda^2 \approx 0.41 \rho_{\rm Pl}$. By inspection it is clear from Eqs (REF) - (REF) that, away from the Planck regime ---i.e., when $\lambda b \ll 1$, or, $\rho \ll \rho_{\mathrm{crit}}$--- we recover classical general relativity. However, modifications in the Planck regime are drastic. The main features of this new physics can be summarized as follows.
671
1103.2475
8,414,468
2,011
3
12
true
true
2
UNITS, UNITS
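The modified Friedmann equation described above can be sketched numerically (Planck units with $G=1$, using the $\rho_{\mathrm{crit}} \approx 0.41\,\rho_{\rm Pl}$ value quoted in the text; the exact prefactor depends on $\gamma$ and $\lambda$):

```python
import math

# LQC effective Friedmann equation: H^2 = (8*pi/3) * rho * (1 - rho/rho_crit).
rho_crit = 0.41   # in Planck densities, as quoted in the text

def hubble_sq(rho):
    return (8 * math.pi / 3) * rho * (1 - rho / rho_crit)

print(hubble_sq(rho_crit))  # 0.0: expansion halts at rho = rho_crit (the bounce)
# Far below the Planck regime the classical H^2 = (8*pi/3)*rho is recovered:
ratio = hubble_sq(1e-6) / ((8 * math.pi / 3) * 1e-6)
print(ratio)                # ~1
```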
It is important to remark that the limit $l \to 0$ is purely formal in the sense that quantum effects preclude it to be fully performed. The outcome of this limit, that is, the cone spacetime, must be thought of as the frozen geometrical structure behind the quantum spacetime fluctuations taking place at the Planck scale. A more realistic situation is to assume that the de Sitter pseudo-radius $l$ is of the order of the Planck length $l_P$, in which case the cosmological term would have the value FORMULA Notice that, due to the fact that spacetime becomes locally a high-$\Lambda$ de Sitter spacetime, it will naturally be endowed with a causal de Sitter horizon, and consequently with a holographic structure [CIT]. As the $\Lambda$ term decays and the universe expands, the proper conformal current is gradually transformed into energy-momentum current, giving rise to a FRW universe. Concomitantly, spacetime becomes transitive under a combination of translations and proper conformal transformations. This means that our usual (translational) notion of space and time begin to emerge, though at this stage spacetime is still preponderantly transitive under proper conformal transformation.
1,199
1103.3679
8,429,817
2,011
3
18
false
true
2
UNITS, UNITS
[^1]: Based on observations at the Paranal Observatory of the European Southern Observatory for programme number 081.D-0819. Based on observations at the La Silla Observatory of the European Southern Observatory for programme number 082.D-0649. Based on observations collected at the Centro Astronómico Hispano Alemán (CAHA) at Calar Alto, operated jointly by the Max-Planck-Institut für Astronomie and the Instituto de Astrofísica de Andalucía (CSIC). Based on observations with the William Herschel Telescope operated by the Isaac Newton Group at the Observatorio del Roque de los Muchachos of the Instituto de Astrofísica de Canarias on the island of La Palma, Spain.
670
1103.4045
8,433,255
2,011
3
21
true
false
1
MPS
- Planck/HFI has two channels at 545 and 857 GHz (550 and 350 $\rm \mu m$). By 2012, Planck will have mapped the full sky with $\theta_{\rm Planck} = 5^\prime$ resolution. Fig. REF marks the highest frequency Planck channels. In principle, the noise levels should allow detection of the CIB dipole. However, with only two frequencies and the broad channels with $\Delta\nu/\nu=0.33$ for the HFI instrument (Tauber 2004), convincing separation of the Galactic foreground will be difficult, a point already realized by Piat et al. (2002).
534
1104.0901
8,468,756
2,011
4
5
true
false
4
MISSION, MISSION, MISSION, MISSION
MEDR thanks the Argentinian Astronomical Society for its partial financial support to attend this meeting. We acknowledge support from the PICT 32342 (2005) and PICT 245-Max Planck (2006) of ANCyT (Argentina). Simulations were run in Fenix and HOPE clusters at IAFE and CeCAR cluster at University of Buenos Aires.
314
1104.2103
8,482,847
2,011
4
12
true
false
1
MISSION
Our results are also applicable to scenarios where quantized metric fluctuations are considered as part of an effective quantum field theory at an intermediate scale below or near the Planck scale. Here the underlying UV completion of gravity, which may use different degrees of freedom for gravity than the metric, or even leave the local QFT framework completely, will determine the initial conditions for the RG flow at this intermediate scale. Also from this more general viewpoint, we do not find any indications for gravity-stimulated chiral symmetry breaking. Still, our results can be used to decompose the accessible theory space into those branches where the chiral sector remains symmetric and other branches where the chiral sector becomes critical and typically generates heavy fermion masses. The distribution of these branches in theory space depends on the gravitational couplings. In particular, the universality properties of the shifted Gaußian matter fixed point can substantially vary. This analysis provides general constraints on the Planck scale behavior of any microscopic theory of quantum gravity: the existence and observation of light fermions potentially excludes those branches of theory space where the chiral sector is critical at the Planck scale.
1,281
1104.5366
8,521,082
2,011
4
28
false
true
3
UNITS, UNITS, UNITS
Although the inflationary scenario was initially introduced to explain the homogeneous and isotropic nature of the universe on large scales [CIT], it also gives extremely interesting results in the study of the inhomogeneities and anisotropies of the universe, which are a consequence of the vacuum fluctuations of the inflaton as well as the metric fluctuations. These fluctuations result in non-linear effects (parametrized by "$f_{NL},\tau_{NL}$") seeding the non-Gaussianity of the primordial curvature perturbation, which are expected to be observed by PLANCK, with non-linear parameter $f_{NL} \sim {\cal O}(1)$ [CIT]. Along with the non-linear parameter $f_{NL}$, the "tensor-to-scalar ratio" $r$ is also one of the key inflationary observables, measuring the anisotropy arising from the gravity-wave (tensor) perturbations; its signature is expected to be detectable by PLANCK if the tensor-to-scalar ratio $r \sim 10^{-2}$ [CIT]. As these parameters give a lot of information about the dynamics inside the universe, the theoretical prediction of large/finite (detectable) values of the non-linear parameters "$f_{NL},\tau_{NL}$" as well as of the "tensor-to-scalar ratio" $r$ has received a lot of attention in recent years [CIT].
1,268
1105.0365
8,528,684
2,011
5
2
false
true
2
MISSION, MISSION
Funding for the SDSS and SDSS-II has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, the U.S. Department of Energy, the National Aeronautics and Space Administration, the Japanese Monbukagakusho, the Max Planck Society, and the Higher Education Funding Council for England. The SDSS Web Site is `http://www.sdss.org/`. The SDSS is managed by the Astrophysical Research Consortium for the Participating Institutions. The Participating Institutions are the American Museum of Natural History, Astrophysical Institute Potsdam, University of Basel, University of Cambridge, Case Western Reserve University, University of Chicago, Drexel University, Fermilab, the Institute for Advanced Study, the Japan Participation Group, Johns Hopkins University, the Joint Institute for Nuclear Astrophysics, the Kavli Institute for Particle Astrophysics and Cosmology, the Korean Scientist Group, the Chinese Academy of Sciences (LAMOST), Los Alamos National Laboratory, the Max-Planck-Institute for Astronomy (MPIA), the Max-Planck-Institute for Astrophysics (MPA), New Mexico State University, Ohio State University, University of Pittsburgh, University of Portsmouth, Princeton University, the United States Naval Observatory, and the University of Washington.
1,313
1105.0674
8,532,511
2,011
5
3
true
false
3
MPS, MPS, MPS
The work is based on the public data of the sky surveys GALEX and SDSS. Funding for the Sloan Digital Sky Survey (SDSS) and SDSS-II has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, the U.S. Department of Energy, the National Aeronautics and Space Administration, the Japanese Monbukagakusho, and the Max Planck Society, and the Higher Education Funding Council for England. The SDSS Web site is http://www.sdss.org/. The SDSS is managed by the Astrophysical Research Consortium (ARC) for the Participating Institutions. The Participating Institutions are the American Museum of Natural History, Astrophysical Institute Potsdam, University of Basel, University of Cambridge, Case Western Reserve University, The University of Chicago, Drexel University, Fermilab, the Institute for Advanced Study, the Japan Participation Group, The Johns Hopkins University, the Joint Institute for Nuclear Astrophysics, the Kavli Institute for Particle Astrophysics and Cosmology, the Korean Scientist Group, the Chinese Academy of Sciences (LAMOST), Los Alamos National Laboratory, the Max-Planck-Institute for Astronomy (MPIA), the Max-Planck-Institute for Astrophysics (MPA), New Mexico State University, Ohio State University, University of Pittsburgh, University of Portsmouth, Princeton University, the United States Naval Observatory, and the University of Washington. During the data analysis we have used the Lyon-Meudon Extragalactic Database (HYPERLEDA) supplied by the LEDA team at the CRAL-Observatoire de Lyon (France) and the NASA/IPAC Extragalactic Database (NED) which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. Our study of the rings in lenticular galaxies is supported by the RFBR grant number 10-02-00062a.
1,889
1105.3147
8,557,767
2,011
5
16
true
false
3
MPS, MPS, MPS
The actual motivation for doing this comes from Supergravity and String theory [CIT]: String theory is considered a candidate for a UV completion of General Relativity, which in its present formulation requires extra dimensions and supersymmetry. Supergravity is considered the low-energy effective field theory limit of String theory. One may therefore call String theory a top--down approach. In this series of papers we take first steps towards a bottom--up approach, in that we try to canonically quantise the Supergravity theories by LQG methods. While String theory in its present form needs a background-dependent and perturbative quantum formulation, the LQG quantum formulation is by design background-independent and non-perturbative. On the other hand, quantum String theory is much richer above the low-energy field theory limit, containing an infinite tower of higher excitation modes of the string, which come into play only when approaching the Planck scale and which are necessary in order to find a theory which is finite at least order by order in perturbation theory.
1,099
1105.3710
8,566,347
2,011
5
18
false
true
1
UNITS
The SDSS is managed by the Astrophysical Research Consortium (ARC) for the Participating Institutions. The Participating Institutions are The University of Chicago, Fermilab, the Institute for Advanced Study, the Japan Participation Group, The Johns Hopkins University, the Korean Scientist Group, Los Alamos National Laboratory, the Max-Planck-Institute for Astronomy (MPIA), the Max-Planck-Institute for Astrophysics (MPA), New Mexico State University, University of Pittsburgh, Princeton University, the United States Naval Observatory, and the University of Washington.
569
1105.4469
8,575,372
2,011
5
23
true
false
2
MPS, MPS
The formation and evolution of protoplanetary systems, the breeding grounds of planet formation, is a complex dynamical problem that spans many orders of magnitude. To serve this purpose, we present a new hybrid algorithm that combines a Fokker-Planck approach with the advantages of a pure direct-summation N-body scheme: a very accurate integration of close encounters for the orbital evolution of the larger bodies is coupled to a statistical model envisaged to simulate the very large number of smaller planetesimals in the disc. Direct-summation techniques have historically been developed for the study of dense stellar systems such as open and globular clusters and, within some limits imposed by the number of stars, of galactic nuclei. Adapting direct-summation N-body techniques to planetary dynamics is not trivial and requires a number of modifications. These include the way close encounters are treated, the selection process for the "neighbour radius" of the particles, the extended Hermite scheme, used for the very first time in this work, as well as the implementation of a central potential, drag forces and the adjustment of the regularisation treatment. For the statistical description of the planetesimal disc we employ a Fokker-Planck approach. We include dynamical friction, high- and low-speed encounters, the role of distant encounters, as well as gas and collisional damping, and then generalise the model to inhomogeneous discs. We then describe the combination of the two techniques to address the whole problem of planetesimal dynamics in a realistic way, via a transition mass that selects how the evolution of each particle is integrated according to its mass.
1,710
1105.6094
8,591,620
2,011
5
30
true
false
2
FOKKER, FOKKER
In this article, we intend to confront quasi-exponential inflationary models with the WMAP seven-year data [CIT] using a phenomenological Hubble parameter following the *Hamilton Jacobi* formalism. The absence of time dependence in the Hubble parameter produces de Sitter inflation, which is very appealing from a theoretical point of view, but its acceptability is more or less limited by present-day observations. So a certain deviation from exact exponential inflation turns out to be a good move, so as to go along with the latest as well as forthcoming data. In general such models are called quasi-exponential inflation. Our primary intention here is to confront quasi-exponential inflation with the WMAP seven-year data using the publicly available code CAMB [CIT]. Nevertheless, as it will turn out, the analysis also predicts a detectable tensor-to-scalar ratio, so as to reflect significant features of quasi-exponential inflation in forthcoming PLANCK [CIT] data sets as well. Thus, PLANCK may directly verify (or put further constraints on) our analysis by detecting gravitational waves (or pushing their upper bound further down).
1,130
1105.6362
8,597,736
2,011
5
31
true
true
2
MISSION, MISSION
N. A. Kudryavtseva was supported for this research through a stipend from the International Max Planck Research School (IMPRS) for Radio and Infrared Astronomy at the Universities of Bonn and Cologne. We would like to thank Shane O'Sullivan for useful discussions. This publication has emanated from research conducted with the financial support of Science Foundation Ireland. The UMRAO has been supported from the series of grants from the NSF and NASA and from the University of Michigan.
490
1106.0069
8,599,367
2,011
6
1
true
false
1
MPS
The fundamental notion of noncommutative geometry is that the picture of spacetime as a manifold of points breaks down at distance scales of the order of the Planck length: spacetime events cannot be localized with an accuracy better than the Planck length [CIT], just as particles cannot be sharply localized in the quantum phase space. The points on the classical commutative manifold should then be replaced by states on a noncommutative algebra, and the point-like object is replaced by a smeared object [CIT], which cures the singularity problems at the terminal stage of black hole evaporation [CIT].
582
1106.1974
8,621,068
2,011
6
10
true
true
2
UNITS, UNITS
Supersymmetry breaking can be parameterized by a spurion field living on the UV brane, $\eta = \theta^2 F$ with $F\sim M_P^2$, along with a term leading to gaugino masses, $\int d^2\theta\, \frac{\eta}{M_P}\, \mathcal{W}^\alpha \mathcal{W}_\alpha\, \delta(y) + \mathrm{h.c.}$ This gives a gaugino mass $m_\lambda\sim F/M_P \sim M_P$. The gravitino will also receive a Planck-scale mass. So both the gaugino and the gravitino will decouple from the low-energy theory. SUSY can also be broken for squarks and sleptons with a UV term, $\int d^4\theta\, \frac{\eta^\dagger \eta}{M_P^2}\, S^\dagger S\, \delta(y)$, which leads to soft scalar masses that are also of order the Planck scale, decoupling squarks and sleptons from the low-energy theory as well. All potentially light superpartner zero modes (except for the Higgs) in the 5d theory are given a Planck-scale mass in this way, reducing the low-energy theory to the usual SM fields (aside from the Higgs, which will be discussed in a moment).
848
1106.2193
8,624,878
2,011
6
11
false
true
3
UNITS, UNITS, UNITS
To compare our models with the recent CMB observations, we perform the by-now common practice of the Markov Chain Monte Carlo sampling of the parameter space using the publicly available CosmoMC package [CIT]. The CosmoMC code in turn utilizes the Boltzmann code CAMB [CIT] to arrive at the CMB angular power spectrum from given primordial scalar and tensor spectra. We evaluate the inflationary scalar as well as tensor spectra using an accurate and efficient numerical code (as outlined in Subsec. [2.2]) and feed these primordial spectra into CAMB to obtain the corresponding CMB angular power spectra. We should stress here that we actually evolve *all the modes* that are required by CAMB from the sub to the super-Hubble scales to obtain the perturbation spectra, rather than evolve for a smaller set of modes and interpolate to arrive at the complete spectrum. This becomes imperative in the models of our interest which (as one would expect, and as we shall illustrate below) contain fine features in the scalar power spectrum. It should be pointed out here that, while the chaotic model leads to a tensor-to-scalar ratio of $0.16$, the monodromy model results in $r\simeq 0.06$. Though these tensor amplitudes are rather small to make any significant changes to the results, we have developed the code to evaluate the inflationary power spectra with future datasets (such as, say, Planck) in mind, and hence we nevertheless take the tensors into account exactly.
1,471
1106.2798
8,631,380
2,011
6
14
true
true
1
MISSION
Given $T_\mathrm{dust}$ of a galaxy, we estimate the dust mass with the following: FORMULA where $f_\lambda$ is the flux density, $D$ is the distance from the galaxy, $B_\lambda$ is the Planck function, which is $2ckT/\lambda^4$ in the Rayleigh-Jeans limit (and the additional $4\pi$ factor is due to integrating over steradians).
330
1106.4022
8,646,855
2,011
6
20
true
false
1
LAW
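The dust-mass relation in this record lends itself to a short numerical sketch. The Python fragment below is illustrative only (CGS units, hypothetical function names, not code from the paper): it evaluates the standard optically thin estimate $M_d = f_\lambda D^2/(\kappa\,B_\lambda(T))$, which additionally requires an assumed opacity $\kappa$ not given in the excerpt, alongside the quoted Rayleigh-Jeans form $B_\lambda \approx 2ckT/\lambda^4$.

```python
import math

# Physical constants in CGS units
h = 6.626e-27    # Planck constant [erg s]
c = 2.998e10     # speed of light [cm/s]
k_B = 1.381e-16  # Boltzmann constant [erg/K]

def planck_lambda(wavelength, T):
    """Full Planck function B_lambda(T) [erg s^-1 cm^-2 cm^-1 sr^-1]."""
    x = h * c / (wavelength * k_B * T)
    return (2.0 * h * c**2 / wavelength**5) / math.expm1(x)

def rayleigh_jeans(wavelength, T):
    """Rayleigh-Jeans limit 2 c k T / lambda^4, valid when h c / (lambda k T) << 1."""
    return 2.0 * c * k_B * T / wavelength**4

def dust_mass(flux, distance, wavelength, T, kappa):
    """Optically thin dust-mass estimate M_d = f_lambda D^2 / (kappa B_lambda(T))."""
    return flux * distance**2 / (kappa * planck_lambda(wavelength, T))
```

Comparing the two forms at a long wavelength (say 1 cm at 20 K) shows the Rayleigh-Jeans expression slightly overestimating the full Planck function, as expected.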
Take $E$ to be much larger than the size of the internal space $K$ (in the $2 \times 2$ model $K=24$ while in the Rubik's cube $K=4\times 10^{19}$), so that almost all internal states contain unsolved puzzles in almost all configurations. Initially, the outside observer will see at every time step approximately the same number of $q$-particles, $E/K$, with typical relative deviation of order $\sqrt{K/E}$. The niceness conditions hold firmly so long as these deviations remain small, that is so long as random walks do not (frequently) self-intersect. When self-intersections begin to kick in, the wavefunction of the outgoing radiation develops components marked by conspicuous absences of $q$-particles. At this stage, the nice semiclassical description of the state is still maintained in a coarse-grained sense: the average number of detected $q$-particles per $k$ steps is well-behaved for sufficiently large $k$. But when the random walk has covered almost all the internal space, leaving only a few unsolved puzzles scattered over distant regions of the internal space, then large uncertainties take over and the niceness conditions are gone. This is easily interpreted: at advanced stages of evaporation, black holes are nearly Planck-sized and not well-described by semiclassical physics.
1,300
1106.5229
8,662,157
2,011
6
26
false
true
1
UNITS
In standard RS models the warping generates the weak/Planck hierarchy and the UV scale is of order the Planck scale; the UV scale is thus associated with the need to cutoff the CFT to include 4D Einstein gravity. In the absence of a UV localized Einstein-Hilbert term, the 4D Planck mass is generated entirely by the dynamics of the (cut-off) CFT. That is, the usual expression for the Planck mass in RS: FORMULA is dual to an induced Planck mass that results from loops[^16] containing CFT modes [CIT]. From the perspective of the dual 4D theory there is no particular reason why the 4D Planck mass should be *entirely* induced by the CFT and, more generally, the 4D Planck mass could contain a "bare" contribution that arises either from integrating out heavy fields present in the UV completion or as a fundamental input in the theory. Irrespective of the details of its origin, a bare contribution to the Planck mass in the 4D theory is consistent with the symmetries of the theory and, in the standard effective-theory approach, is expected to be present. The inclusion of a bare contribution to the Planck mass in the dual 4D theory is dual to including a UV-localized Einstein-Hilbert term in the 5D theory. In this case the 4D Planck mass is: FORMULA In the LWS limit one has $M_*\ll M_{Pl}$ and $v_0\gg1$ is necessary to include 4D Einstein gravity, giving $M_{Pl}^2\sim (M_*^3/k) v_0$. In the dual formulation it is clear why it is consistent to include a large UV Einstein-Hilbert term, $M_{UV}\sim M_{Pl}$, despite the fact that the UV cutoff is much less than the Planck scale --- this is simply the standard low-energy effective theory approach for including gravity. The theory breaks down at energies $M_*\ll M_{Pl}$ and requires UV completion, but also contains a consistent description of gravity for distances $>1/M_*$.
1,838
1107.0755
8,685,590
2,011
7
4
false
true
11
UNITS, UNITS, UNITS, UNITS, UNITS, UNITS, UNITS, UNITS, UNITS, UNITS, UNITS
The question of Eddington bias was discussed by [CIT] and has also been suggested by [CIT] as a possible explanation of the wide radio profiles. In terms of the Planck sources, an Eddington bias of $\approx 0.02$ mK is required to explain our results. However our pre-selection of these sources as being point-like at Planck resolution and our rejection of both faint ($S<1.1$ Jy) and CMB-contaminated sources mean that it is difficult to see how Eddington bias could be affecting these results. In Fig. REF (d),(e),(f) we have also presented the source sample without the $S\geq1.1$ Jy flux cut. The consistency of the full source and brighter source samples indicates that Eddington bias is not significantly affecting these samples.
731
1107.2654
8,709,858
2,011
7
13
true
false
2
MISSION, MISSION
The experimental upper bound on the current value of the cosmological constant is extremely small. On the other hand, it is usually assumed that an *effective* cosmological constant describes the energy density of the vacuum $<\rho_{vac}>$. In fact, it is believed that the vacuum energy density $<\rho_{vac}>$ includes quantum field theory contributions to the effective cosmological constant FORMULA where $\Lambda$ is a small bare cosmological constant corresponding to the bare energy density FORMULA The calculations show that the quantum field theory contributions enormously affect the value of the effective cosmological constant, according to FORMULA for an electroweak cutoff, FORMULA for a QCD cutoff, FORMULA for a GUT cutoff, and FORMULA for a Planck cutoff. We know the Einstein equation is a classical equation which applies on scales larger than the Planck scale, so one may reasonably expect the following equation to be almost valid at the electroweak, QCD, GUT, and even Planck energy scales FORMULA However, observational considerations require the following Einstein equation FORMULA where the small bare cosmological constant is included instead of the huge effective one. The point is that what we observe today is in complete agreement with this latter equation, and not with the former one coupled with the effective cosmological constant. But where have the quantum field theory contributions $<\rho_{vac}>$ gone? This is the well-known cosmological constant problem [CIT].
1,493
1107.3307
8,717,955
2,011
7
17
false
true
3
UNITS, UNITS, LAW
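The enormity of these contributions can be made concrete with the standard dimensional estimates (a sketch using the usual $\langle\rho_{vac}\rangle \sim M_{\rm cutoff}^4$ counting; the numbers below are the textbook order-of-magnitude values, not taken from the excerpt):

```latex
\langle\rho_{vac}\rangle \sim M_{\rm cutoff}^{4} \;\Rightarrow\;
\begin{cases}
(\sim 10^{2}\,\mathrm{GeV})^{4} \sim 10^{8}\,\mathrm{GeV}^{4}, & \text{electroweak cutoff},\\[2pt]
(\sim 0.3\,\mathrm{GeV})^{4} \sim 10^{-2}\,\mathrm{GeV}^{4}, & \text{QCD cutoff},\\[2pt]
(\sim 10^{16}\,\mathrm{GeV})^{4} \sim 10^{64}\,\mathrm{GeV}^{4}, & \text{GUT cutoff},\\[2pt]
(\sim 10^{19}\,\mathrm{GeV})^{4} \sim 10^{76}\,\mathrm{GeV}^{4}, & \text{Planck cutoff},
\end{cases}
```

every one of which exceeds the observed value $\rho_{\Lambda} \sim 10^{-47}\,\mathrm{GeV}^{4}$ by tens of orders of magnitude.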
We can also visualize this interpolation between two vacua from a different perspective. By dimensional reduction, the $(p+q+2)$-dimensional theory with $q$-form flux becomes a $(p+2)$-dimensional theory with a scalar field. Following the convention in [CIT], we parametrize the radius of $S_q$ by the radion field $\phi$, FORMULA Here $M_D$ is the Planck mass in $D=p+q+2$ dimensions, and $M_{p+2}$ is the Planck mass in $(p+2)$ dimensions. They are related by the area of the unit $S_q$. FORMULA
494
1107.3533
8,721,013
2,011
7
18
false
true
2
UNITS, UNITS
In the appendix [5] it is shown that the new effects in our case are captured by just a single term, with an unknown coefficient $\gamma$ in the effective lagrangian which becomes FORMULA where $\kappa=1/M_P$. We thus obtain effectively a two-field system with field dependent kinetic term. The non-metric effects are suppressed by the Planck scale $M_P$. The most general lagrangian is considered in the Jordan and in the Einstein frame in the appendix [5].
458
1107.3739
8,724,441
2,011
7
19
true
false
1
UNITS
In this paper we will investigate the hybrid potential (REF) using a different choice of parameters from the original proposal. We find that this leads to a viable cosmological model, with perturbations that are compatible with current observations and a level of non-Gaussianity that would be observable with the Planck satellite.
329
1107.4739
8,736,327
2,011
7
24
true
true
1
MISSION
We update the constraints on possible features in the primordial inflationary density perturbation spectrum by using the latest data from the WMAP7 and ACT Cosmic Microwave Background experiments. The inclusion of new data significantly improves the constraints with respect to older work, especially on smaller angular scales. While we find no clear statistical evidence in the data for extensions of the simplest, featureless, inflationary model, models with a step provide a significantly better fit than standard featureless power-law spectra. We show that the possibility of a step in the inflationary potential like the one preferred by current data will soon be tested by the forthcoming temperature and polarization data from the Planck satellite mission.
764
1107.4992
8,737,308
2,011
7
25
true
true
1
MISSION
This presents an immediate problem: even for scales as high as the Planck mass, $M \sim 10^{19}$ GeV, the dynamically generated mass (REF) is unrealistically small. To obtain masses of order of the electron mass ($0.5$ MeV) one needs unnaturally high transplanckian mass scales $M$. This problem may be resolved by embedding the model [CIT] in microscopic brane theories, and applying an inverse Randall-Sundrum hierarchy. In such constructions, the seeds for Lorentz Violation (LV) may be associated with point-like brane defects puncturing the bulk space-time in which three-dimensional brane Universes, one of which represents our world, propagate. The interaction of string matter with these defects induces local Lorentz violations, as a consequence of the recoil of the defect. The resulting low-energy effective action describing the interaction of photons with these defects contains higher spatial-derivative terms of the LV form considered in [CIT]. The electron sector in such brane-inspired models has been argued to remain unmodified, as a result of the fact that only electrically neutral excitations can interact non-trivially with the space-time brane defects [CIT]. The advantage of embedding these models into such a microscopic framework, apart from the possibility of enhancing the generated fermion masses to realistic values by virtue of the afore-mentioned Randall-Sundrum enhancement, also lies in the fact that one can obtain a physical scale $M$, which also plays the rôle of the UV cut-off of the low-energy theory, and which is proportional to FORMULA where $M_s$ is the string scale, $g_s$ is the string coupling assumed weak, and $\sigma^2$ denotes the foam stochastic fluctuations, which is a free parameter in the model of [CIT].
1,753
1108.3983
8,799,381
2,011
8
19
false
true
1
UNITS
The preference of spinning dust models over other mechanisms is based on several arguments. First, AME is correlated with dust IR emission and this correlation is particularly tight for the mid-IR emission of small grains. Second, AME is weakly polarized as expected for PAHs because these grains are not supposed to be aligned with the interstellar magnetic field [CIT]. Third, the shape and the intensity of the AME can be reproduced with spinning dust spectra (e.g. Watson et al. 2005; Planck Collaboration et al. 2011b, and many other references). However, the spinning dust emission depends on the local physical conditions (gas ionisation state and radiation field) and on the size distribution of small grains. Recent observations of interstellar clouds point out dissimilar morphologies in the mid-IR and in the microwave range that may be explained by local variations of the environmental conditions [CIT]. In this work we study the spinning dust emission of interstellar clouds including a treatment of the gas state and radiative transfer. In this context we reexamine the relationship between the AME and the dust IR emission.
1,139
1108.4563
8,806,783
2,011
8
23
true
false
1
MISSION
The momentum of individual particles in this scheme is proportional to block size, as in Matrix Theory [CIT]. If we take the unit of momentum to be $1/R$, then the UV cutoff is of order $(\frac{M_P}{R})^{1/2}$, which agrees with the field-theoretic estimate. In QFT this very low cutoff is somewhat puzzling. We know we can make states of much larger momentum, but there does not seem to be a nice way to implement the no-black-hole constraint directly in QFT. By contrast, in the HST formalism, it is clear that we can have particles of higher momentum at the expense of having fewer of them, with less entropy. If we insist on the maximal angular localization consistent with the overall momentum, then the highest momentum we can get comes from having $o(1)$ particles described by $N^{3/4} \times N^{3/4}$ matrices. For $R$ of order our current horizon radius, this is the TeV scale. We can get fuzzier particles, with higher momentum, by considering states with each $N^{1/2} \times N^{1/2}$ block in *the same* angularly localized state. This gives momenta of order the Planck scale as the maximum. The smaller degree of angular localization for particles above the TeV scale is interesting, and might have experimental consequences. Note however that for $R$ of order our current horizon radius, $N^{1/2} \sim 10^{30}$, so the apparent lack of angular precision predicted by the theory is probably not measurable. All of this talk of our current horizon radius suggests that we should be moving on to the theory of dS space.
1,531
1109.2435
8,854,193
2,011
9
12
false
true
1
UNITS
We use factorisations of the local isometry groups arising in 3d gravity for Lorentzian and Euclidean signatures and any value of the cosmological constant to construct associated bicrossproduct quantum groups via semidualisation. In this way we obtain quantum doubles of the Lorentz and rotation groups in 3d, as well as kappa-Poincare algebras whose associated r-matrices have spacelike, timelike and lightlike deformation parameters. We confirm and elaborate the interpretation of semiduality proposed in [13] as the exchange of the cosmological length scale and the Planck mass in the context of 3d quantum gravity. In particular, semiduality gives a simple understanding of why the quantum double of the Lorentz group and the kappa-Poincare algebra with spacelike deformation parameter are both associated to 3d gravity with vanishing cosmological constant, while the kappa-Poincare algebra with a timelike deformation parameter can only be associated to 3d gravity if one takes the Planck mass to be imaginary.
1,016
1109.4086
8,872,707
2,011
9
19
false
true
2
UNITS, UNITS
Recently we proposed some "minimal models" of hidden sector dark matter (DM) [CIT] that have the desired properties, and we noted that a number of astrophysical considerations constrain the models very strongly. Essentially, if DM of mass $\sim 10$ GeV annihilates primarily into any channel other than muons, it is ruled out by constraints from dumping electromagnetic energy into the cosmic microwave background (CMB) [CIT], Fermi observations of dwarf satellite galaxies [CIT], or SuperK limits on neutrinos from DM annihilations in the sun [CIT]. These constraints will tighten as a result of forthcoming data from new experiments like Planck. Moreover the PAMELA constraint on cosmic ray antiprotons excludes 10 GeV DM with an annihilation cross section greater than 0.1 times the standard relic density value if the final state contains quarks that can hadronize [CIT]. This tension is demonstrated to be rather insensitive to different choices of cosmic ray propagation models and halo models for the $b\bar b$ channel in [CIT]. The only robust particle physics mechanism for evading this tension is if the annihilation is into a pair of bosons that are too light to decay into $p$-$\bar p$ (or do not couple to quarks).
1,228
1109.4639
8,880,626
2,011
9
21
true
true
1
MISSION
To conclude, observations highlight a tension between mini-superspace parametrizations of FLRW LQC ($\sigma>1$) and lattice-refinement parametrizations. The latter are the only ones compatible (at least in this model) with anomaly cancellation in inhomogeneous LQC and power-law inflationary solutions. Tight upper bounds can be obtained for inverse-volume quantum corrections; their improvement with future missions such as Planck will further constrain the parameter space of the theory and, hopefully, stimulate our understanding of the semi-classical limit of loop quantum cosmology.
587
1110.0291
8,910,091
2,011
10
3
true
true
1
MISSION
The coupling of MSSM fields to heavy triplets is an important ingredient in the study of proton stability through dimension-five effective operators that result from integrating out the heavy triplets. In this context our calculations are a prerequisite to constraining F-theory GUTs through dimension-five proton stability. We find that, analogously to minimal 4-dimensional GUTs, the coupling of the heaviest generation to the massive triplets is of the same order as the Yukawa coupling, and that the couplings of the lighter generations to the triplets are suppressed with respect to the heavier ones. The quantitative analysis depends on a few local parameters associated with the flux and geometry, which are in turn constrained by the measured values of $\alpha_{\rm GUT}$ and $M_{\rm Planck}$ and also by keeping the bottom quark Yukawa not too small, so that we remain within the $\tan\beta>1$ regime favoured by unification. The detailed relation between the local scales and the global ones is however very complicated and model-dependent, and given a particular compactification it is in general beyond the scope of the present techniques to obtain such a relation in a precise way. Once the relation between local and global scales is understood in a given model, the constraints on the local parameter space coming from the observed values of $M_{\rm Planck}$ and $\alpha_{\rm GUT}$ could be made precise within that model.
1,432
1110.2206
8,932,682
2,011
10
10
false
true
2
UNITS, UNITS
The Fokker-Planck method also has some serious limitations. Each stellar mass in the cluster requires its own distribution function in Equation REF. For more than a few distribution functions the Fokker-Planck method becomes numerically intractable, so the ability of the method to resolve a continuous spectrum of masses is limited. Since the Fokker-Planck method uses a distribution function, it does not automatically provide a star-by-star representation of a globular cluster. This means that it can be difficult to compare the results of a Fokker-Planck simulation with observations. It is also difficult to include stellar evolution in Fokker-Planck simulations because there are no individual stars to evolve. For the same reason Fokker-Planck methods cannot easily model strong few-body interactions [CIT], interactions that are essential for describing the evolution of the compact binary fraction in star clusters. Therefore Fokker-Planck methods are of limited use in studying binary populations in star clusters.
1,021
1110.4423
8,955,993
2,011
10
20
true
false
7
FOKKER, FOKKER, FOKKER, FOKKER, FOKKER, FOKKER, FOKKER
We fit the data to a model for the lensed primary CMB anisotropy. Parameter constraints are calculated using the publicly available CosmoMC[^1] package [CIT]. The CMB power spectrum for a given cosmology is calculated using the CAMB[^2] package [CIT]. We have modified both packages to include the EDE prescription outlined in §[2]. This model has eight free parameters: $\Omega_b h^2$, $\Omega_c h^2$, $A_s$, $n_s$, $\tau$, $\Omega_{\rm de}^0$, $w_0$ and $\Omega_e$. We do not consider additional cosmological parameters such as the neutrino energy density fraction or $N_{\rm eff}$, but note that [CIT] suggest that $\Omega_e$ is largely independent of these parameters. We also include in all parameter chains the three nuisance foreground terms described by K11; these nuisance terms negate the information from the amplitude (but not the peak structure) of the power spectrum at $\ell \gtrsim 2500$. We have made one change to the foreground terms: we have changed the template shape of the clustered component of the cosmic infrared background to a pure power law ($D_\ell \propto \ell^{0.8}$) to reflect recent constraints from SPT, ACT, Planck, and BLAST [CIT]. The Monte Carlo Markov chains are available for download at the SPT website.[^3]
1,260
1110.5328
8,967,349
2,011
10
24
true
false
1
MISSION
The *ROSAT* Project was supported by the Bundesministerium für Bildung und Forschung (BMBF/DLR) and the Max-Planck-Gesellschaft (MPG). Funding for the Sloan Digital Sky Survey (SDSS) has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Aeronautics and Space Administration, the National Science Foundation, the U.S. Department of Energy, the Japanese Monbukagakusho, and the Max Planck Society. The SDSS Web site is http://www.sdss.org/.
482
1110.5648
8,971,633
2,011
10
25
true
false
2
MPS, MPS
Even if Supersymmetric particles are found at the Large Hadron Collider (LHC), it will be difficult to prove that they constitute the bulk of the Dark Matter (DM) in the Universe using LHC data alone. We study the complementarity of LHC and DM indirect searches, working out explicitly the reconstruction of the DM properties for a specific benchmark model in the coannihilation region of a 24-parameter supersymmetric model. Combining mock high-luminosity LHC data with present-day null searches for gamma-rays from dwarf galaxies with the Fermi LAT, we show that current Fermi LAT limits already have the capability of ruling out a spurious Wino-like solution that would survive using LHC data only, thus leading to the correct identification of the cosmological solution. We also demonstrate that upcoming Planck constraints on the reionization history will have a similar constraining power, and discuss the impact of a possible detection of gamma-rays from DM annihilation in Draco with a CTA-like experiment. Our results indicate that indirect searches can be strongly complementary to the LHC in identifying the DM particles, even when astrophysical uncertainties are taken into account.
1,195
1111.2607
9,016,927
2,011
11
10
true
false
1
MISSION
Spectral broadening can be achieved in two ways. First, energy dissipation below the photosphere produces a population of non-thermal electrons, which emit radiation. Such dissipation can result from internal shocks, magnetic reconnection or collisional heating. Since the dissipation is assumed to take place in a region of high optical depth, multiple Compton scattering dominates the spectra, which can be broader than a Planck spectrum if the optical depth is not too high. This scenario has gained broad interest recently [CIT]. Second, broadening is caused by the contribution of off-axis emission [CIT]. While off-axis photons are sub-dominant as long as the inner engine is active, they broaden the Planck spectrum. They become dominant once the inner engine decays. Quantitative results depend on the jet geometry.
804
1111.3378
9,028,623
2,011
11
14
true
false
2
LAW, LAW
Tomographic weak lensing surveys can be cross-correlated with external data sets, including frequency-cleaned maps of secondaries from ongoing CMB surveys, e.g. the thermal Sunyaev-Zel'dovich (tSZ) maps or $y$-maps that will be available from CMB surveys such as Planck. The cross-correlation with tomographic information can help us understand the evolution with redshift of the cosmological pressure fluctuations responsible for the tSZ effect. The formalism presented here is perfectly suitable for such an analysis. Detailed results of such an analysis will be presented elsewhere. In addition to the weak lensing surveys, supernova pencil-beam surveys might also benefit from the results presented here.
693
1112.0495
9,077,522
2,011
12
2
true
false
1
MISSION
Hence we conclude that this null curvature singularity cannot be resolved by either $\alpha'$ or $g_s$ effects. It is not immediately clear whether test strings propagating on this background would become infinitely excited; unfortunately there does not seem to be any obvious way to transform the near-singularity region into plane wave coordinates and immediately apply the results of [CIT]. From the point of view of diverging tidal forces infinite string excitation would not be a surprising result, but on the other hand this singularity has the rather unusual feature of repelling both timelike and null geodesics, so it is also plausible that strings might simply be repelled away from the singularity without exciting many high-frequency modes. In any case, any stringy, or more generally Planck-scale, physics will be visible to distant observers, as noted before. The stability of these solutions remains an open question, although it seems likely that if one enforces the periodic identification of $x_3$ one should expect instabilities in at least the CTC region $r > r_{\star}$.
1,081
1112.0578
9,079,144
2,011
12
2
false
true
1
UNITS
The global IR-submm SEDs of the detected S0+S0a and elliptical galaxies with available data from 60 to 500\,$\mu$m are presented in Figures REF and REF, respectively. NGC4486 (M87) and NGC4374 are both bright radio sources and their SEDs are well fitted by a power law at the longer FIR wavelengths. For the former, we find no evidence for dust emission above the power-law synchrotron component (in agreement with Baes et al. 2010). In NGC4374, we do see a strong excess from dust emission in the FIR. We fit the SEDs from 100 to 500\,$\mu$m with a modified blackbody model, where FORMULA $M_d$ is the dust mass, $T_d$ is the dust temperature, $B(\nu,T_d)$ is the Planck function and $D$ is the distance to the galaxy. $\kappa_{\nu}$ is the dust absorption coefficient, described by a power law with dust emissivity index $\beta$, such that $\kappa_{\nu} \propto \nu^{\beta}$. For $\beta = 2$ (typical of Galactic interstellar dust grains), we use $\kappa_{350\,\mu\rm m} = 0.19\,\rm m^2\,kg^{-1}$ (Draine 2003). Although $\kappa$ is notoriously uncertain, virtually all of our analysis relies on $\kappa$ being constant between galaxies rather than on its absolute value.
1,167
1112.1408
9,087,547
2,011
12
6
true
false
1
LAW
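The modified-blackbody fit described in the row above can be sketched in a few lines. This is a minimal illustration, not the authors' fitting code: the flux-density form $S_\nu = M_d\,\kappa_\nu\,B(\nu,T_d)/D^2$ is the standard one implied by the quoted definitions, the function names are ours, and only the $\kappa_{350\,\mu\rm m}=0.19\,{\rm m^2\,kg^{-1}}$ normalization and $\beta=2$ come from the text.

```python
import numpy as np

# Physical constants (SI)
H = 6.62607015e-34   # Planck constant [J s]
C = 2.99792458e8     # speed of light [m/s]
K_B = 1.380649e-23   # Boltzmann constant [J/K]

def planck_bnu(nu, t_d):
    """Planck function B(nu, T_d) in W m^-2 Hz^-1 sr^-1."""
    return 2.0 * H * nu**3 / C**2 / np.expm1(H * nu / (K_B * t_d))

def modified_blackbody(nu, m_d, t_d, dist, beta=2.0,
                       kappa0=0.19, nu0=C / 350e-6):
    """Flux density S_nu = M_d * kappa_nu * B(nu, T_d) / D^2 (SI units),
    with kappa_nu = kappa0 * (nu / nu0)**beta, where kappa0 is the
    350-um value quoted in the text (Draine 2003)."""
    kappa = kappa0 * (nu / nu0) ** beta
    return m_d * kappa * planck_bnu(nu, t_d) / dist**2
```

In a real fit one would vary $M_d$ and $T_d$ (and possibly $\beta$) to minimize the residuals against the 100-500 $\mu$m photometry.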
However, although the consistency relations may be true under very generic assumptions in the *exact* limit in which one of the momenta goes to zero, $k_a\to 0$, it is possible that important corrections appear when restricted to the range of momenta accessible in our finite observable universe. It was shown in [CIT] that this is the case for $f_{NL}$ when the initial state for perturbations at the onset of inflation departs from the vacuum. There, it was shown that, for a large class of initial states, the $f_{NL}$ parameter acquires an extra factor of order $k_1/k_3\gg1$, as compared to the vacuum prediction. That enhancement factor can be as large as several hundred for the range of $k$'s accessible in forthcoming observations, which may cause the bispectrum in the squeezed momentum configuration to fall within the observational sensitivity of the Planck satellite [CIT].
889
1112.1581
9,089,785
2,011
12
7
true
true
1
MISSION
In this paper we discuss how to choose a prior and implement it in a computationally efficient method to reconstruct $w(z)$ non-parametrically from the observed data. As we will show, the method is very accurate (with relative error $\lesssim$ 10% for a range of dark energy models), computationally cheap and fast, straightforward to implement, and can be used to analyse any kind of cosmological data. After discussing general issues regarding the choice of prior correlation functions, we test our reconstruction accuracy using mock data simulated for Planck [CIT] and for a future space-based mission. We then conclude with a discussion of how to extend this method to other parameterizations and how to calculate more realistic priors.
731
1112.1693
9,090,686
2,011
12
7
true
false
1
MISSION
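The "prior correlation function" idea in the row above can be made concrete with a small sketch. This is an illustration under assumptions of ours, not the specific prior advocated in the text: we bin $w(z)$, build a Gaussian prior covariance from an exponential correlation function (our choice; the paper discusses how to pick this), and evaluate the quadratic prior penalty that would be added to the data likelihood.

```python
import numpy as np

def prior_covariance(z, sigma_w=1.0, z_c=0.3):
    """Prior covariance for binned w(z): C_ij = sigma_w^2 * f(|z_i - z_j|).
    The exponential correlation form and the scale z_c are illustrative
    assumptions, not the specific prior chosen in the paper."""
    dz = np.abs(z[:, None] - z[None, :])
    return sigma_w**2 * np.exp(-dz / z_c)

def prior_chi2(w, w_fid, cov):
    """Gaussian prior penalty chi^2 = (w - w_fid)^T C^{-1} (w - w_fid),
    added to the data chi^2 to regularize the reconstruction."""
    d = np.asarray(w) - np.asarray(w_fid)
    return float(d @ np.linalg.solve(cov, d))
```

A smooth, correlated prior like this penalizes rapid unphysical oscillations in the reconstructed $w(z)$ while leaving slow departures from the fiducial model cheap.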
Terrestrial particle accelerators collide particles at center-of-mass energies up to the TeV scale, which is significantly below the Planck scale corresponding to the quantum-gravity regime. Clearly, the physics over a very large energy range remains unexplored today. In this connection, an intriguing possibility is to make use of naturally occurring processes in the vicinity of astrophysical compact massive objects, where gravity is very strong.
485
1112.2525
9,100,520
2,011
12
12
true
true
1
UNITS
Further problems arise even if we assume that Stenger's argument is correct. Stenger has asked us to consider the universe at the Planck time, and in particular a region of the universe that is the size of the Planck length. Let's see what happens to this comoving volume as the universe expands. 13.7 billion years of (concordance model) expansion will blow up this Planck volume until it is roughly the size of a grain of sand. A single Planck volume in a maximum entropy state at the Planck time is a good start but hardly sufficient. To make our universe, we would need around $10^{90}$ such Planck volumes, all arranged to transition to a classical expanding phase within a temporal window 100,000 times shorter than the Planck time[^17]. This brings us to the most serious problem with Stenger's reply.
808
1112.4647
9,129,050
2,011
12
20
true
false
7
UNITS, UNITS, UNITS, UNITS, UNITS, UNITS, UNITS
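The "$10^{90}$ Planck volumes" estimate in the row above is easy to check by back-of-envelope arithmetic: if one Planck volume inflates to grain-of-sand size today, the number needed is just (observable-universe radius / grain radius) cubed. The specific sizes below are illustrative assumptions of ours, not values from the text.

```python
import math

# Back-of-envelope check of the "~10^90 Planck volumes" figure:
# one Planck volume inflated to grain-of-sand size versus the
# observable universe today. Both sizes are assumed round numbers.
R_UNIVERSE = 4.4e26   # comoving radius of the observable universe [m]
R_GRAIN = 2.5e-4      # grain-of-sand radius [m] (illustrative)

n_volumes = (R_UNIVERSE / R_GRAIN) ** 3
order = math.log10(n_volumes)
# 'order' comes out near 90, consistent with the text's estimate.
```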
We wish to thank the anonymous referee for her/his careful reading and comments, which substantially enhanced both the science and appearance of the paper. This publication is partially based on data acquired with the Atacama Pathfinder Experiment (APEX). APEX is a collaboration between the Max-Planck-Institut für Radioastronomie, the European Southern Observatory, and the Onsala Space Observatory. This work is based, in part, on observations made with the Spitzer Space Telescope, which is operated by the Jet Propulsion Laboratory, California Institute of Technology under a contract with NASA. This research has made use of the NASA/ IPAC Infrared Science Archive, which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. L. B. acknowledges support from CONICYT projects FONDAP 15010003 and Basal PFB-06. J. T. is supported by the International Max Planck Research School (IMPRS) for Astronomy and Cosmic Physics.
1,022
1201.4732
9,203,277
2,012
1
23
true
false
2
MPS, MPS
Once we have the phase space manifold as $\mathbb{CP}^{n_C-1}$ with the Fubini-Study form as the symplectic form, the geometric quantization is well known. The Hilbert space ${\cal H}_C$ is spanned by wavefunctions which are holomorphic polynomials of the $n_C$ projective coordinates $w_{n_1,n_2,n_3}$ of degree $N$ FORMULA or, equivalently, polynomials of the $n_C-1$ inhomogeneous coordinates $w_{n_1,n_2,n_3}$ of degree up to $N$ (if we take e.g. $w_{0,0,0}=1$ in the patch). It is important to note that $N$ enters the definition of the Hilbert space purely through setting the scale of $\omega$, which controls the effective Planck constant $1/N$, i.e. the area in phase space that a single quantum state occupies. As we increase $N$, the area occupied by a state decreases, and we get more states in ${\cal H}_C$.
803
1201.5588
9,214,028
2,012
1
26
false
true
1
CONSTANT
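The state counting in the row above can be made explicit. Assuming the standard stars-and-bars count for homogeneous holomorphic polynomials of degree $N$ in $n_C$ variables (our reading of the construction, not a formula quoted from the text), the dimension of ${\cal H}_C$ is $\binom{N+n_C-1}{n_C-1}$, which grows like $N^{n_C-1}/(n_C-1)!$, i.e. more states as the effective Planck constant $1/N$ shrinks.

```python
from math import comb

def hilbert_dim(n_c, N):
    """Number of homogeneous holomorphic polynomials of degree N in
    n_c projective coordinates: C(N + n_c - 1, n_c - 1).  This
    stars-and-bars count is our assumption for dim H_C."""
    return comb(N + n_c - 1, n_c - 1)

# For large N the dimension scales like N**(n_c - 1) / (n_c - 1)!,
# matching the semiclassical picture of one state per phase-space
# cell of area ~ 1/N on CP^(n_c - 1).
```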
As a result, for the light Higgs masses shown in the left panel of Figure REF, we see that the coupling remains perturbative up to the Planck scale. Furthermore, as one can see from the red curve in the right panel of Figure REF, the tree-level $s$-wave partial amplitude for the unitarity-constraining scattering of the state $(2W^+_\text{L}W^-_{\text{L}}+Z_{\text{L}} Z_{\text{L}})/\sqrt{6}$ satisfies the unitarity limit $\mathrm{Re}\,a_0<1/2$ for all center-of-mass energies all the way up to the Planck scale. The electroweak logarithms modify this amplitude by reducing it significantly at high energies; for example, at $\sqrt{s}=10^{14}\,\text{GeV}$ the suppression factor is of order $10^{-5}$. Thus, for phenomenologically interesting low Higgs masses, unitarity is preserved at all scales. The role of the electroweak logarithms is to suppress the amplitude (and cross section) at high energies compared to the tree-level result.
904
1202.0294
9,229,108
2,012
2
1
false
true
2
UNITS, UNITS
Thus Planck data will be an important testing ground for the reality of the DF. However, given that the signal has been detected by us at only $S/N \simeq 3.5$-$4$, we emphasize again that incorrectly applied methodologies can, as demonstrated in mathematical detail in this report, reduce the $S/N$ of the measurement to below statistically significant levels ($S/N\lesssim 2$), rendering the measurement impossible. As discussed in KAE, we proposed such a measurement to the Planck collaboration as early as 2005[^2] and again in 2009. We plan to perform the necessary verification as soon as the first Planck data are publicly released (projected in early 2013). The data needed for verification of the WMAP-based DF results have been posted by us publicly at <http://www.kashlinsky.info/bulkflows/data_public> and have indeed been verified by numerous colleagues. Should the signal not be found in the Planck analysis, that would imply a systematic difference (presumably at cluster locations) between the WMAP and Planck maps, which would require an explanation in terms of their respective data-processing pipelines. We use this space to provide several comments on specifically Planck-related issues and on its potential promise in resolving the issues described above in Sec. [1.1], for which the upcoming SCOUT catalog will be critical.
1,326
1202.0717
9,232,605
2,012
2
2
true
false
6
MISSION, MISSION, MISSION, MISSION, MISSION, MISSION