Dataset preview. Columns:

- text: large_string, lengths 252–2.37k
- length: uint32, 252–2.37k
- arxiv_id: large_string, lengths 9–16
- text_id: int64, 36.7k–21.8M
- year: int64, 1.99k–2.02k
- month: int64, 1–12
- day: int64, 1–31
- astro: bool, 2 classes
- hep: bool, 2 classes
- num_planck_labels: int64, 1–11
- planck_labels: large_string, 66 values

Each entry below is a text paragraph followed by its metadata (remaining columns):

| length | arxiv_id | text_id | year | month | day | astro | hep | num_planck_labels | planck_labels |
|---|---|---|---|---|---|---|---|---|---|
In the early Universe, the gravitational wave source will eventually diminish due to thermalisation and Hubble expansion. We find that even if the bump continues to grow for as long as a Hubble time, $H_*^{-1}$, the power spectrum from the oscillation phase will be subdominant to that of bubble collisions provided that the mass of the scalar field is much less than the Planck mass.
| 385 | 1802.05712 | 15337626 | 2018 | 2 | 15 | true | true | 1 | UNITS |
Combining Equations (REF) and (REF), it is easy to obtain FORMULA which is known as the consistency relation of DBI inflation. This differs from the standard consistency relation $r=-8n_t$ of canonical inflation. However, as explained previously, the Planck 2015 constraints on inflation that we consider in the present paper are applicable to all slow-roll inflationary models, independent of their consistency relations. Of course, if one considers the Planck bounds on $r$ and applies the consistency relation of the model, one can find constraints on $n_t$ that are obviously model-dependent.
| 619 | 1802.06075 | 15340085 | 2018 | 2 | 16 | true | true | 2 | MISSION, MISSION |
On the other hand, an important argument against extended supersymmetry is that it does not allow for chiral fermions as they are observed in Nature (neutrinos) (see Ref. [CIT] for details on this topic). This and other arguments of this type hold strictly only in the absence of gravity. In the context of supergravity, it is possible to overcome such difficulties. For example, in Kaluza-Klein supergravities [CIT], which are characterized by additional spatial dimensions in which the space is very highly curved (radii in the region of the Planck length), deviations from the phenomenology of flat space are particularly large, and many "no-go theorems" can be overcome.
| 674 | 1802.06602 | 15343306 | 2018 | 2 | 19 | false | true | 1 | UNITS |
The possibility of linking inflation and late cosmic accelerated expansion using the $\alpha$-attractor models has received increasing attention due to their physical motivation. In the early universe, $\alpha$-attractors provide an inflationary mechanism compatible with Planck satellite CMB observations and predictive for future gravitational wave CMB modes. Additionally $\alpha$-attractors can be written as quintessence models with a potential that connects a power law regime with a plateau or uplifted exponential, allowing a late cosmic accelerated expansion that can mimic behavior near a cosmological constant. In this paper we study a generalized dark energy $\alpha$-attractor model. We thoroughly investigate its phenomenology, including the role of all model parameters and the possibility of large-scale tachyonic instability clustering. We verify the relation that $1+w\sim 1/\alpha$ (while the gravitational wave power $r\sim\alpha$) so these models predict that a signature should appear in either the primordial B-modes or in late time deviation from a cosmological constant. We constrain the model parameters with current datasets, including the cosmic microwave background (Planck 2015 angular power spectrum, polarization and lensing), baryon acoustic oscillations (BOSS DR12) and supernovae (Pantheon compressed). Our results show that expansion histories close to a cosmological constant exist in large regions of the parameter space, not requiring a fine-tuning of the parameters or initial conditions.
| 1528 | 1803.00661 | 15378046 | 2018 | 3 | 1 | true | true | 2 | MISSION, MISSION |
2. We may compensate the change in $x_{\nu}$ with an equal (1 percent) reduction in the physical matter density $\omega_{\rm cb}$, thereby keeping $\omega_{\rm m}$ and $h$ constant and $D_*$ almost constant; such a shift in $\omega_{\rm cb}$ and $z_{\rm eq}$ would be tolerable at around the $1\sigma$ Planck precision. However, due to the differing sensitivities above, this would give a $\sim +0.25$ percent increase in $\theta_*$, which is still $\sim 4\times$ larger than the Planck precision and is therefore ruled out.
| 515 | 1803.02298 | 15393271 | 2018 | 3 | 6 | true | true | 2 | MISSION, MISSION |
Under the above assumptions, our magnetic field model is left with 8 free parameters out of the initial 12. We follow the methodology detailed in [CIT] to produce all-sky mock Stokes parameter maps from our 3D magnetic field model. In particular, we compute normalized Stokes parameters, $Q/I$ and $U/I$, which depend on the magnetic field orientation only, and then multiply them by the intensity map $I$. We use the python mpfit routine to derive the set of parameter values that best fit the observations. We take into account the noise in the Planck $Q$ and $U$ data, which we add in quadrature with a contribution from the turbulent magnetic field component, which is not accounted for in our model. The latter is estimated from the power spectra of the Planck dust polarization maps at 353 GHz using data simulations from [CIT]. In the region of the sky under study, the mean contribution from the statistical noise is $\sigma^{\rm noise}_{Q,U}=1.4\times10^{-3}$ MJy sr$^{-1}$ and that from the turbulent magnetic field is $\sigma^{\rm turb}_{Q,U}/I=0.055$.
| 1075 | 1803.05251 | 15415524 | 2018 | 3 | 14 | true | false | 2 | MISSION, MISSION |
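The quadrature combination described in the paragraph above can be sketched as follows. The two noise terms are the values quoted in the text; the intensity value used in the example call is a hypothetical placeholder.

```python
import numpy as np

# Values quoted in the text for the region under study:
sigma_noise = 1.4e-3        # statistical noise on Q, U [MJy/sr]
sigma_turb_over_I = 0.055   # turbulent-field contribution, relative to I

def total_sigma_qu(intensity):
    """Total Q/U uncertainty: statistical noise added in quadrature with the
    turbulent magnetic-field term, which scales with the intensity map I."""
    return np.sqrt(sigma_noise**2 + (sigma_turb_over_I * np.asarray(intensity))**2)

# For a hypothetical pixel with I = 0.1 MJy/sr:
print(total_sigma_qu(0.1))
```

The turbulent term dominates wherever $0.055\,I$ exceeds the statistical noise, i.e. for $I \gtrsim 0.025$ MJy sr$^{-1}$.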
The validation criteria described in Sect. [3] yield 62 SZ sources classified as 'non-detections' (ND) or as clusters weakly associated with the SZ signal detected by Planck. This means that no cluster counterparts were detected for 54% of the sample, where our deep imaging and spectroscopic data did not show any evidence of optical counterparts to the SZ sources. There are two possible ways to explain these non-identifications. The first, and most plausible, explanation is that there is no optical counterpart, owing to false SZ detections, high noise in the $Y_{500}$ Planck maps ([CIT]; see Fig. 4), or contamination in the SZ maps produced by radio emission of galactic dust clouds. A second explanation is that the cluster counterpart does exist but is at high redshift ($z>0.85$), hence making it very difficult to detect at visible wavelengths.
| 855 | 1803.05764 | 15418600 | 2018 | 3 | 15 | true | false | 2 | MISSION, MISSION |
Next, we constrain the I$\Lambda$CDM2+$\sum m_\nu$ model using the Planck TT, TE, EE + lowP + BAO + SNIa + RSD data, and we constrain the I$\Lambda$CDM2+$\sum m_\nu$+$N_{\rm eff}$ model using the Planck TT, TE, EE + lowP + BAO + SNIa + RSD + $H_0$ data. The detailed fitting results are given in Tables REF and REF, respectively. For the $Q=\beta H\rho_{\rm \Lambda}$ model, an exciting result is that negative values of $\beta$ are favored by current observations at more than the $1\sigma$ level, indicating that vacuum energy decays into cold dark matter. Further, we see that the values of $\beta$ are truncated when $\beta<-0.3$. This is because $\beta$ is anticorrelated with $\Omega_{\rm m}$, as shown in Figs. REF and REF. A larger $\Omega_{\rm m}$ leads to a smaller $\beta$, whereas a too-small value of $\beta$ (negative value) is not allowed by theory in current cosmology.
| 885 | 1803.06910 | 15426825 | 2018 | 3 | 19 | true | true | 2 | MISSION, MISSION |
The Pan-STARRS1 Surveys (PS1) and the PS1 public science archive have been made possible through contributions by the Institute for Astronomy, the University of Hawaii, the Pan-STARRS Project Office, the Max-Planck Society and its participating institutes, the Max Planck Institute for Astronomy, Heidelberg and the Max Planck Institute for Extraterrestrial Physics, Garching, The Johns Hopkins University, Durham University, the University of Edinburgh, the Queen's University Belfast, the Harvard-Smithsonian Center for Astrophysics, the Las Cumbres Observatory Global Telescope Network Incorporated, the National Central University of Taiwan, the Space Telescope Science Institute, the National Aeronautics and Space Administration under Grant No. NNX08AR22G issued through the Planetary Science Division of the NASA Science Mission Directorate, the National Science Foundation Grant No. AST-1238877, the University of Maryland, Eotvos Lorand University (ELTE), the Los Alamos National Laboratory, and the Gordon and Betty Moore Foundation.
| 1043 | 1803.07175 | 15429197 | 2018 | 3 | 19 | true | false | 3 | MPS, MPS, MPS |
Here E$l$ and M$l$ stand for the multipolarity of the transition, $c$ for the speed of light, $\hbar$ for the reduced Planck constant, $M_p$ for the mass of the proton, and $\alpha$ for the fine-structure constant. The nuclear radius $R=1.2\cdot A^{1/3}$ fm scales with the number of nucleons $A$ in the nucleus.
| 302 | 1803.08335 | 15438835 | 2018 | 3 | 22 | true | false | 1 | CONSTANT |
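As a tiny worked example of the empirical radius scaling quoted in the paragraph above:

```python
# Empirical nuclear-radius scaling from the text: R = 1.2 * A^(1/3) fm.
def nuclear_radius_fm(A):
    """Nuclear radius in femtometres for a nucleus with A nucleons."""
    return 1.2 * A ** (1.0 / 3.0)

print(nuclear_radius_fm(208))  # lead-208: ~7.1 fm
```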
Fig. REF shows the $68\%$ (light blue) and $95\%$ (dark blue) CL regions for the tensor-to-scalar ratio $r$ versus the scalar spectral index $n_s$ from Planck [CIT]. The predictions in the case of natural inflation with the inflaton coupled to $\mathcal{N}$ gauge fields are also shown, for 50 (red band) and 60 $e$-folds (black band). These regions were constructed by varying the parameters in the ranges $\xi=[2.5,10]$ and $\alpha=[80,400]$. For a given number of $e$-folds, the number $\cal N$ of gauge fields is fixed by the COBE normalization, and the scale $\Lambda$ of the potential by the equation of motion of the inflaton. Let us also note that in the minimally coupled case, both $r$ and $n_s$ are independent of the coupling $f$ of the potential, in the regime where $\xi$ is almost constant. It is clear from the figure that there are regions compatible with the $95\%$ CL Planck limits, for both the 50 and 60 $e$-fold cases.[^5]
| 951 | 1803.09743 | 15449141 | 2018 | 3 | 26 | true | true | 2 | MISSION, MISSION |
To obtain $\mathcal{B}$, the drag coefficient $\mathcal{R}$ has to be known. Kelvin wave dynamics have been addressed by [CIT], who, however, arrive at different results for the corresponding dissipation. In order to provide context for these papers and discuss the origin of the discrepancy, we use a simplified version of the argument of [CIT] to derive the expected scalings for $\mathcal{R}$. The equation of motion for forced vortex oscillations reads FORMULA where $\vec{\epsilon}$ is the displacement of a vortex aligned with the $z$-direction, $T$ the vortex tension and $\vec{f}$ the driving force per unit length. In the absence of forces, a plane wave ansatz shows that the vortex supports Kelvin waves with characteristic frequency [CIT] FORMULA Here, $k$ is the wave number along a vortex, $\hbar$ the reduced Planck constant and $\mu(k)$ an effective mass that varies slowly with $k$. This dispersion relation provides the tension associated with a specific mode, $T = \rho_{\rm s} \kappa\hbar/2\mu$.
| 1005 | 1804.02706 | 15489507 | 2018 | 4 | 8 | true | false | 1 | CONSTANT |
The measurements we consider to constrain the ionization fraction of the Universe are: *(a)* the value of the reionization optical depth from the Planck-CMB `SimLow` likelihood results [CIT]; *(b)* Gunn-Peterson optical depth at $z = 6.1$ from bright quasars [CIT]; and *(c)* Lyman-$\alpha$ emission in star-forming galaxies at $z \gtrsim 7$ [CIT] (see also Ref. [CIT]).
| 370 | 1804.03888 | 15499015 | 2018 | 4 | 11 | true | false | 1 | MISSION |
- *CMB distance posteriors from Planck 2015 measurements:* We use the acoustic scale, $l_{A}=301.787\pm0.089$, the shift parameter, $R=1.7492\pm0.0049$, and the decoupling redshift, $z_{*}=1089.99\pm0.29$ obtained for a flat $w$-cold dark matter model [CIT]. Although this method could lead to biased constraints when used in modified gravity models (see discussion in [CIT]), we choose these data as a first approach.
| 418 | 1804.05085 | 15508069 | 2018 | 4 | 13 | true | false | 1 | MISSION |
A similar but more complicated example is radiation at electron scattering on a family of aligned atomic strings in a crystal ("doughnut scattering"). Assuming the strings to be mutually collinear and randomly distributed with uniform density in the transverse plane (which may be justified by the dynamical chaos in the electron transverse motion), the kinetics of the electron multiple scattering on the strings may be described by a Fokker-Planck equation for the probability distribution $f(\phi,t)$ in the azimuthal angle $\phi$ of the velocity vector relative to the string direction: FORMULA Here $D$ is the angular diffusion rate, proportional to the string density and scattering strength. Solving Eq. (REF) with the initial condition $f(\phi,\vec{r}_{\perp},0)=\delta(\phi)\delta(\vec{r}_{\perp})$, we get FORMULA FORMULA The behavior of the spectrum obtained by plugging Eqs. (REF), (REF) into Eq. (REF) is shown in Fig. REF. It basically complies with the experimental data of [CIT].
| 989 | 1804.05878 | 15514850 | 2018 | 4 | 16 | false | true | 1 | FOKKER |
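The equation referenced in the paragraph above is elided (FORMULA); assuming it reduces to pure azimuthal diffusion, $\partial f/\partial t = D\,\partial^2 f/\partial\phi^2$, a sketch of the solution for the delta-function initial condition (ignoring the $\vec{r}_{\perp}$ dependence) is the heat kernel on a circle, written as a truncated Fourier series:

```python
import numpy as np

def f_azimuthal(phi, t, D, n_max=200):
    """Heat kernel on the circle: solution of df/dt = D d^2f/dphi^2 with
    f(phi, 0) = delta(phi).  This is an assumed reduction of the elided
    Fokker-Planck equation, for illustration only."""
    phi = np.atleast_1d(np.asarray(phi, dtype=float))
    n = np.arange(1, n_max + 1)
    series = np.exp(-D * n**2 * t)[:, None] * np.cos(np.outer(n, phi))
    return (1.0 + 2.0 * series.sum(axis=0)) / (2.0 * np.pi)

# Normalisation check over one period (left Riemann sum), should be ~1:
phi = np.linspace(-np.pi, np.pi, 2001)
f = f_azimuthal(phi, t=0.5, D=1.0)
print(np.sum(f[:-1]) * (phi[1] - phi[0]))
```

At late times ($Dt \gg 1$) the distribution relaxes to the uniform value $1/2\pi$, which is the "doughnut" limit of fully randomised azimuths.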
We would like to thank the referee for their constructive comments and suggestions that have helped to improve this paper. This paper is based on data acquired with the Atacama Pathfinder EXperiment (APEX). APEX is a collaboration between the Max Planck Institute for Radioastronomy, the European Southern Observatory, and the Onsala Space Observatory. This work was partly carried out within the Collaborative Research Centre 956, sub-project A6, funded by the Deutsche Forschungsgemeinschaft (DFG). This document was produced using the Overleaf web application, which can be found at www.overleaf.com. Won-Ju Kim was supported for this research through a stipend from the International Max Planck Research School (IMPRS) for Astronomy and Astrophysics at the Universities of Bonn and Cologne. The ATLASGAL project is a collaboration between the Max-Planck-Gesellschaft, the European Southern Observatory (ESO) and the Universidad de Chile. It includes projects E-181.C-0885, E-078.F-9040(A), M-079.C-9501(A), M-081.C-9501(A) plus Chilean data.
| 1047 | 1804.06999 | 15525449 | 2018 | 4 | 19 | true | false | 3 | MPS, MPS, MPS |
- **Pressure profiles:** A possible source of systematics on the reconstruction of SZ pressure profiles is the relativistic corrections to the SZ effect [CIT], which reduce the amplitude of the SZ increment in the high-frequency part of the CMB spectrum. Several recent works claimed a detection of the relativistic SZ corrections on stacked Planck data [CIT]. In particular, [CIT] noted that the relativistic corrections could lead to an underestimate of the integrated SZ signal up to 15% for the hottest clusters, which could thus affect our pressure profiles too. However, we note that the gas temperature decreases by a factor of $2-2.5$ from the core to the outskirts, such that the impact of SZ corrections should be limited to the central regions, where spectroscopic X-ray measurements are preferred because of their higher signal-to-noise ratio and resolution. For typical temperatures of $\sim5$ keV at $R_{500}$ and beyond the expected effect is less than 5% [CIT]. For more discussion on the impact of systematic uncertainties we refer to [CIT] +13.
| 1062 | 1805.00042 | 15564252 | 2018 | 4 | 30 | true | false | 1 | MISSION |
To get rid of the unphysical curves, we generate the samples under the additional condition $0.03<\tau<0.13$, which is around the $3\sigma$ width of the Planck constraints. Figure REF illustrates the cases of randomly sampled reionization history. Each curve connects the end points and randomly sampled knots, while the interpolation function is a piecewise cubic Hermite interpolating polynomial (PCHIP). This approach makes sure that $x_e(z)$ is bounded between 0 and 1. We consider two types of curves: (a) monotonically decreasing curves interpolated between the end points ($z=6.0$ and $z=30.0$) and five randomly sampled knots with PCHIP and (b) nonmonotonic curves interpolated between the end points and two randomly sampled knots with PCHIP. All curves are smoothed by a Gaussian function. Then, we project $x_e(z)$ onto the eigenvectors and get the coefficients FORMULA With Eq. (REF), the PCA reconstruction is easy and straightforward. $\Delta x_e(z)$ is defined as the difference between the PCA reconstruction and the true form of $x_e(z)$, which is sensitive to the reionization history.
| 1149 | 1805.02236 | 15580579 | 2018 | 5 | 6 | true | false | 1 | MISSION |
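The sampling-and-projection pipeline described above can be sketched as follows. The knot positions, the (here random, orthonormal) eigenvector basis, and the omission of the Gaussian smoothing step are illustrative assumptions, not the paper's actual choices:

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

rng = np.random.default_rng(0)
z = np.linspace(6.0, 30.0, 100)

# (a)-style curve: end points x_e(6)=1, x_e(30)=0 plus five monotone random
# knots, interpolated with PCHIP so that x_e stays bounded in [0, 1].
z_knots = np.sort(rng.uniform(6.0, 30.0, 5))
x_knots = np.sort(rng.uniform(0.0, 1.0, 5))[::-1]   # decreasing with z
xe = PchipInterpolator(np.r_[6.0, z_knots, 30.0],
                       np.r_[1.0, x_knots, 0.0])(z)

# Project onto an (illustrative) orthonormal eigenvector basis, reconstruct.
E, _ = np.linalg.qr(rng.standard_normal((100, 5)))   # columns = eigenvectors
m = E.T @ xe                                          # coefficients
xe_pca = E @ m                                        # PCA reconstruction
delta_xe = xe_pca - xe                                # reconstruction residual
print(np.abs(delta_xe).max())
```

PCHIP preserves the monotonicity of the knot values, which is exactly why it keeps $x_e(z)$ bounded for the monotone case (a).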
The union mask is applied during the stacking process: for a given supercluster, the masked pixels in the Planck $y$ map are not accumulated in the stacked image. As an example, one supercluster is shown before and after masking galaxy clusters in Fig. REF. Without the mask, bright signals from galaxy clusters were seen especially around the core, but they are well covered by the mask.
| 400 | 1805.04555 | 15602363 | 2018 | 5 | 11 | true | false | 1 | MISSION |
The Pan-STARRS1 Surveys (PS1) have been made possible through contributions of the Institute for Astronomy, the University of Hawaii, the Pan-STARRS Project Office, the Max-Planck Society and its participating institutes, the Max Planck Institute for Astronomy, Heidelberg and the Max Planck Institute for Extraterrestrial Physics, Garching, The Johns Hopkins University, Durham University, the University of Edinburgh, Queen's University Belfast, the Harvard-Smithsonian Center for Astrophysics, the Las Cumbres Observatory Global Telescope Network Incorporated, the National Central University of Taiwan, the Space Telescope Science Institute, the National Aeronautics and Space Administration under Grant No. NNX08AR22G issued through the Planetary Science Division of the NASA Science Mission Directorate, the National Science Foundation under Grant No. AST-1238877, the University of Maryland, and Eotvos Lorand University (ELTE).
| 935 | 1805.08139 | 15631776 | 2018 | 5 | 21 | true | false | 3 | MPS, MPS, MPS |
The superconformal models lead to distinct cosmological predictions despite the flexibility in the choice of potentials characteristic of the $\alpha$-attractors. The superconformal attractors customized to generate PBH in the low-mass region predict a relatively small $n_s$ value, and larger $r$ and $\alpha_s$ values, compared to the conventional inflationary $\alpha$-attractor models. The $n_s$ value is compatible with the Planck 2015 data at the 95% CL. The improved Planck 2018 bounds further constrain our models, and our predictions are placed on the borderline of what present data allow. This scenario can be tested by the next generation of CMB probes that aim at pinning down the scalar tilt value with per mil accuracy. Large values of $n_s$ would rule out models of this sort. Moreover, microlensing observational programs should be capable of searching for PBH in the mass range predicted here and of considerably constraining their abundance. The current status is that the PBH mass window (REF) is in accordance with recent searches for femtolensing effects caused by compact objects, even when the presence of an extra DM component composed of WIMP particles is considered [CIT].
| 1203 | 1805.09483 | 15641579 | 2018 | 5 | 24 | true | true | 2 | MISSION, MISSION |
Cosmological inflation predicts the existence of a stochastic background of tensor modes, produced by quantum fluctuations of the spin-2 degrees of freedom of the metric during the phase of inflationary expansion. CMB experiments constrain the amplitude of the primordial SGWB power spectrum at large CMB scales in terms of the tensor-to-scalar ratio $r$: the current upper bound from BICEP2/Keck and Planck is $r<0.07$ at 95% confidence level [CIT] (assuming the consistency relation $r=-8 n_{T}$), and future CMB polarization experiments [CIT] can lower this bound down to around $10^{-3}$ in the absence of a detection.
| 610 | 1806.02819 | 15686076 | 2018 | 6 | 7 | true | false | 1 | MISSION |
In our analysis, we varied the six base cosmological parameters, the three foreground amplitudes, and the mass bias. We computed the trispectrum at each step of the MCMC, as it depends on the cosmological parameters and the mass bias. Moreover, we used the information contained in the Planck SZ catalogue of clusters to impose an upper bound on the combined foregrounds. Indeed, the projection of the SZ fluxes from the Planck catalogues onto a sky map yields a lower bound for the tSZ power spectrum. Several authors have already revisited and extended the Planck analysis, including [CIT]; nevertheless, this is the first time that the analysis is carried out consistently, with all the relevant pieces together.
| 714 | 1806.04786 | 15700947 | 2018 | 6 | 12 | true | false | 3 | MISSION, MISSION, MISSION |
The flavor mixing matrix $U_{PMNS}$ depends on $x_f$, $m_3$, and $\phi$. The charged lepton masses are well determined, and the light neutrino masses are taken from Table (REF). Taking an $m_3$ value around $0.05$ eV, the parameters $x_f$ are varied in the range $0-1$ and $\phi$ in the range $0-2\pi$. The elements of the $U_{PMNS}$ matrix are obtained, and the three mixing angles and the Dirac CP phase are extracted as given in Eqs. (REF) and (REF). The values obtained are presented in the figures listed as follows. In Figure (REF) the sum of the three light neutrino masses meets the recent Planck bound $\sum m_i < 0.12$ eV [CIT]. The three mixing angles also fall in the $3\sigma$ range of the latest neutrino global analysis, as given in the recent NuFit.org data [CIT]. The values are shown in Figures (REF) and (REF). In Figure (REF) the $J_{CP}$ values are recorded. Using the $3\sigma$ ranges of the three mixing angles, the recent data imply $0.030\,\sin\delta \lesssim J_{CP} \lesssim 0.035\,\sin\delta$. Using the value of the CP violating phase from Table (REF), the bound can be given as $-0.0243 < J_{CP} < 0.0037$.
| 1091 | 1806.06229 | 15712274 | 2018 | 6 | 16 | false | true | 1 | MISSION |
If the stars are the photoionization source, the number of ionizing photons cm$^{-2}$ s$^{-1}$ produced by the hot source is $N = \int_{\nu_0} B_{\nu}/h\nu\, d\nu$, where $\nu_0 = 3.29\times 10^{15}$ s$^{-1}$ and $B_{\nu}$ is the Planck function. The flux from the star is related to $U$ and $n$ by $N (r/R)^2 = Unc$, where $r$ is the radius of the hot source (the stars), $R$ is the radius of the nebula (in terms of the distance from the stars), $n$ is the density of the nebula, and $c$ is the speed of light. Therefore, $\rm T_{*}$ and $U$ compensate each other, but only in a qualitative way, because $\rm T_{*}$ determines the frequency distribution of the primary flux, while $U$ represents the number of photons per number of electrons reaching the nebula. The choice of $\rm T_{*}$ and $U$ is obtained by fitting the line ratios.
| 843 | 1806.07578 | 15723033 | 2018 | 6 | 20 | true | false | 1 | LAW |
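The integral $N = \int_{\nu_0} B_{\nu}/h\nu\, d\nu$ in the paragraph above can be evaluated numerically. A hedged sketch in cgs units: the upper cutoff at $30\,\nu_0$ and the step count are arbitrary numerical choices, and any geometric factor (e.g. $\pi$ for flux versus intensity) is omitted, so the result is per steradian:

```python
import numpy as np

h = 6.626e-27    # Planck constant [erg s]
c = 2.998e10     # speed of light [cm/s]
kB = 1.381e-16   # Boltzmann constant [erg/K]
nu0 = 3.29e15    # ionizing threshold from the text [s^-1]

def ionizing_photon_rate(T_star, nu_max_factor=30.0, n=20000):
    """Photon rate N = integral of B_nu/(h nu) above nu0 for a blackbody
    at temperature T_star [K]; trapezoidal rule on a linear frequency grid."""
    nu = np.linspace(nu0, nu_max_factor * nu0, n)
    B_nu = (2.0 * h * nu**3 / c**2) / np.expm1(h * nu / (kB * T_star))
    integrand = B_nu / (h * nu)
    return float(np.sum((integrand[1:] + integrand[:-1]) * np.diff(nu)) / 2.0)

# A hotter source produces far more ionizing photons, illustrating why
# T_* and U can compensate each other only qualitatively:
print(ionizing_photon_rate(5.0e4) / ionizing_photon_rate(3.0e4))
```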
Our model assumes the best-fit cosmological parameters for the fiducial $\Lambda$CDM cosmology derived by the Planck spacecraft [CIT]. This cosmology assumes cold dark matter and a cosmological constant. The cosmology enters into the constraint on $\gamma$ as the critical lensing surface mass density is proportional to $D_{0,zl} D_{0,zs} / D_{zl,zs}$. The low redshift of the lens means that the angular diameter distance to the lens is only sensitive to the Hubble constant, $H_0$. Because the source is at much higher redshift than the lens, $D_{0,zs} / D_{zl,zs} \approx 1$ regardless of the cosmological parameters. Combining these two effects, our inference on the lensing mass is therefore inversely proportional to the assumed value of $H_0$. The value of the Hubble constant inferred from Planck [CIT] has an uncertainty of 1.3%, implying an uncertainty of 2.6% on $\gamma$. The Planck measurements are derived assuming $\gamma=1$; it is therefore possible that if $\gamma \neq 1$ the inferred value of $H_0$ may not be the same as that derived assuming $\Lambda$CDM. In this case our inference on $\gamma$ is inversely proportional to the change in $H_0$: FORMULA where $\gamma_\mathrm{True}$ is the real value of $\gamma$, $\gamma_\mathrm{inferred}$ is the value inferred assuming the $\Lambda$CDM value of $H_0$, $H_{0,\Lambda \mathrm{CDM}}$ [CIT], and ${H_{0,\mathrm{True}}}$ is the correct value of the Hubble constant.
| 1430 | 1806.08300 | 15729043 | 2018 | 6 | 21 | true | false | 3 | MISSION, MISSION, MISSION |
The previously proposed "Complexity=Volume" or CV-duality is probed and developed in several directions. We show that the apparent lack of universality for large and small black holes is removed if the volume is measured in units of the maximal time from the horizon to the "final slice" (times Planck area). This also works for spinning black holes. We make use of the conserved "volume current", associated with a foliation of spacetime by maximal volume surfaces, whose flux measures their volume. This flux picture suggests that there is a transfer of the complexity from the UV to the IR in holographic CFTs, which is reminiscent of thermalization behavior deduced using holography. It also naturally gives a second law for the complexity when applied at a black hole horizon. We further establish a result supporting the conjecture that a boundary foliation determines a bulk maximal foliation without gaps, establish a global inequality on maximal volumes that can be used to deduce the monotonicity of the complexification rate on a boost-invariant background, and probe CV duality in the settings of multiple quenches, spinning black holes, and Rindler-AdS.
| 1166 | 1807.02186 | 15772365 | 2018 | 7 | 5 | false | true | 1 | UNITS |
The recent exquisite measurements of the CMB temperature and polarization by Planck [CIT] significantly reduced the uncertainties in $\Lambda$CDM parameters. With this dramatic improvement in precision, it is perhaps not surprising that several $2$-$3\sigma$ level tensions have appeared between Planck and other datasets, as well as within the Planck data itself [CIT], when interpreted within the $\Lambda$CDM model. For instance, the locally measured value of the Hubble constant $H_0$ is off by $3.5\sigma$ from the Planck best fit [CIT]. The expansion rate at $z=2.34$, implied by the Baryon Oscillation Spectroscopic Survey (BOSS) baryonic acoustic oscillation (BAO) measurement from the Lyman-$\alpha$ forest [CIT], disagrees with the best-fit $\Lambda$CDM prediction at a $\sim2.7\sigma$ level. These tensions are not at a significance level sufficient to rule out $\Lambda$CDM -- they could simply be statistical fluctuations [CIT]. It is also possible that they are caused by unaccounted-for systematic effects in the measurements or the modelling of the data. However, it is worth noting that these tensions have persisted and grown stronger over the past three years, fuelling significant interest in possible extensions of $\Lambda$CDM, such as dynamical dark energy (DE) [CIT], interacting DE and dark matter [CIT], and other extensions of $\Lambda$CDM.
| 1363 | 1807.03772 | 15787569 | 2018 | 7 | 10 | true | false | 4 | MISSION, MISSION, MISSION, MISSION |
Multi-wavelength observations in the sub-mm regime provide information on the distribution of both the dust column density and the effective dust temperature in molecular clouds. In this study, we created high-resolution and high-dynamic-range maps of the Pipe nebula region and explored the value of dust-temperature measurements in particular towards the dense cores embedded in the cloud. The maps are based on data from the Herschel and Planck satellites, and calibrated with a near-infrared extinction map based on 2MASS observations. We have considered a sample of previously defined cores and found that the majority of core regions contain at least one local temperature minimum. Moreover, we observed an anti-correlation between column density and temperature. The slope of this anti-correlation is dependent on the region boundaries and can be used as a metric to distinguish dense from diffuse areas in the cloud if systematic effects are addressed appropriately. Employing dust-temperature data thus allows us to draw conclusions on the thermodynamically dominant processes in this sample of cores: external heating by the interstellar radiation field and shielding by the surrounding medium. In addition, we have taken a first step towards a physically motivated core definition by recognising that the column-density-temperature anti-correlation is sensitive to the core boundaries. Dust-temperature maps therefore clearly contain valuable information about the physical state of the observed medium.
| 1514 | 1807.04286 | 15789762 | 2018 | 7 | 11 | true | false | 1 | MISSION |
On the CMB side, we use the likelihood `fake_planck_realistic` [CIT] included in `MontePython` `v3.0`, taking into account temperature, polarisation and CMB lensing extraction. We adopt noise spectra roughly matching those expected from the full Planck results [^5]. For the purpose of forecasting sensitivities, it is easier to use a mock Planck likelihood rather than a real one, because we can then use the exact same fiducial model across all likelihoods.
| 459 | 1807.04672 | 15795696 | 2018 | 7 | 12 | true | true | 2 | MISSION, MISSION |
Very recently, the IceCube Collaboration has reported the observation of an ultra-high-energy neutrino from the direction of the blazar TXS 0506+056, and together with a number of other groups, most notably the MAGIC Collaboration, have reported [CIT] an enhanced level of activity in $\gamma$-ray and photon emission from this source, which is located at a distance $\sim 4 \times 10^9$ ly. As we discuss in this paper, the great distance of TXS 0506+056 and the high energy $\gtrsim 200$ TeV of the observed high-energy neutrino, in conjunction with the $\gamma$-ray observations, provides unique sensitivity to Lorentz violation in neutrino propagation, which almost rivals that to linear Lorentz violation in photon propagation [^1]. The sensitivity to linear Lorentz violation in neutrino propagation is to $M_1 \gtrsim 3 \times 10^{16}$ GeV, approaching the Planck energy scale that might be characteristic of the possible quantum-gravity effects that were the original motivation for [CIT].
| 997 | 1807.05155 | 15799765 | 2018 | 7 | 13 | true | true | 1 | UNITS |
Now, as advocated in [CIT], let us ascribe the present cosmic acceleration to a rolling quintessence scalar [CIT] rather than to a cosmological constant (the latter being inconsistent with equation (REF)). This in principle allows one to avoid $c\ll1$. Concretely, it was argued in [CIT] that current observational constraints on dark energy only require $c \lesssim 0.6$, consistent with the proposed swampland criterion, and that the least constrained model is of the form FORMULA Hence, $\lambda\lesssim 0.6$ and $|\nabla V_Q|_{\rm today}=\lambda V_Q(\phi_{\rm today})\sim 10^{-120}$ in Planck units. The property $|\nabla V_Q| \sim V_Q \sim 10^{-120}$ is a generic feature of quintessence models for the currently observed cosmic acceleration.
| 745 | 1807.06581 | 15811890 | 2018 | 7 | 17 | false | true | 1 | UNITS |
It is interesting to evaluate the recovery performance by comparing our results with the Planck 2015 temperature power spectrum [CIT]. The residuals with respect to the best-fit theoretical prediction and the associated 1-$\sigma$ uncertainties are about tens to hundreds of $\mu\rm{K}^2$ at multipoles $\ell\lesssim500$, and they decrease to several to tens of $\mu\rm{K}^2$ in the range $500\lesssim\ell\lesssim2500$. Both the residuals and the uncertainties in Planck are much greater than our results ($\sim1 \mu\rm{K}^2$). Foreground removal with ABS thus appears more robust and effective than the existing methods used by Planck. However, evaluating the recovery performance of ABS against the Planck results is complicated and difficult, because (1) the CMB signal is always fixed in our simulations and cosmic-variance-induced errors are not taken into account; (2) the primary beam is assumed to be unity for all simulated maps, with no boost in noise at the high-$\ell$ regime from beam deconvolution; and (3) the simulations cannot model the foregrounds, noise properties, and systematic effects of the real Planck data to sufficiently high accuracy. It is thus important to further test the ABS approach using the real Planck data. We leave this to future work.
| 1,279 |
1807.07016
| 15,816,294 | 2,018 | 7 | 18 | true | false | 6 |
MISSION, MISSION, MISSION, MISSION, MISSION, MISSION
|
We present cosmological parameter measurements from the Deep Lens Survey (DLS) using galaxy-mass and galaxy-galaxy power spectra in the multipole range $\ell=250\sim2000$. We measure galaxy-galaxy power spectra from two lens bins centered at $z\sim0.27$ and $0.54$ and galaxy-mass power spectra by cross-correlating the positions of galaxies in these two lens bins with galaxy shapes in two source bins centered at $z\sim0.64$ and $1.1$. We marginalize over a baryonic feedback process using a single-parameter representation and a sum of neutrino masses, as well as photometric redshift and shear calibration systematic uncertainties. For a flat $\Lambda$CDM cosmology, we determine $S_8\equiv\sigma_8\sqrt{\Omega_m/0.3}=0.810^{+0.039}_{-0.031}$, in good agreement with our previous DLS cosmic shear and the Planck Cosmic Microwave Background (CMB) measurements. Without the baryonic feedback marginalization, $S_8$ decreases by $\sim0.05$ because the dark matter-only power spectrum lacks the suppression at the highest $\ell$'s due to Active Galactic Nuclei (AGN) feedback. Together with the Planck CMB measurement, we constrain the baryonic feedback parameter to $A_{baryon}=1.07^{+0.31}_{-0.39}$, which suggests an interesting possibility that the actual AGN feedback might be stronger than the recipe used in the OWLS simulations. The interpretation is limited by the validity of the baryonic feedback simulation and the one-parameter representation of the effect.
| 1,470 |
1807.09195
| 15,831,588 | 2,018 | 7 | 24 | true | false | 2 |
MISSION, MISSION
|
Regarding the inflationary parameters that describe the theory analyzed here, we obtain an upper limit on the tensor-to-scalar ratio $r$, consistent with the $\Lambda$CDM+$r$ value. For this model and Planck TT + lowP we find $r<0.0941$ c.l. Looking at Figure REF, which shows the constraints at the $68\%$ and $95\%$ confidence levels in the $10^{12}V_0/M^4$ vs. $\log(b)$ plane, we can see that there exists a lower limit $b>5.6\times10^6\,\mathrm{GeV}$ and an upper limit $V_0<11.7\times 10^{-12}\,M_P^4$ for Planck TT + lowP.
| 507 |
1807.10833
| 15,850,274 | 2,018 | 7 | 27 | true | true | 2 |
MISSION, MISSION
|
In order to assess the UV stability of the model for a given parameter choice, we also perform the renormalization group evolution from $\Lambda_\text{\tiny{GW}}$ up to the Planck scale (see REF). A consistent scenario requires no appearance of Landau poles or absolute instabilities below this scale where quantum gravity effects become relevant. Furthermore, the potential from REF must not develop a flat direction at any scale $\Lambda^\prime$ larger than $\Lambda_\text{\tiny{GW}}$ because otherwise the scale symmetry breaking would have occurred already at $\Lambda^\prime$. The unwanted Gildener-Weinberg conditions which would induce such breaking are [CIT] FORMULA In our numerical implementation, we test these relations after each energy step in the renormalization group evolution.
| 794 |
1807.11490
| 15,855,745 | 2,018 | 7 | 30 | false | true | 1 |
UNITS
|
A stacking analysis of Planck data [CIT] sees $E>0$ along filaments (selected from intensity data), but claims no $B$-mode signal above the noise. We predict that a $B$-mode signal from the filaments should be present, since they have a finite length of a few degrees. Detecting $B$-modes from filaments will be easier with more signal-to-noise and may require a more careful filament analysis, rescaling and aligning the filament ends. While the detectability of the $B$-mode signature from stacked filaments in Planck data calls for a more careful assessment, we should see it in higher-fidelity data.
| 603 |
1807.11940
| 15,860,228 | 2,018 | 7 | 31 | true | false | 2 |
MISSION, MISSION
|
Z18 argue that two of their 15 measurements are outliers, if their 15 measurements were drawn from a Gaussian distribution. Discarding these two measurements, Z18 determine a weighted mean $\Omega_b h^2$ using the remaining 13 deuterium abundance measurements. They note that this value differs at $1.6\sigma$ from that determined using the Planck 2015 TT + lowP + lensing CMB anisotropy data [CIT].
| 400 |
1808.01490
| 15,873,938 | 2,018 | 8 | 4 | true | true | 1 |
MISSION
|
SDSS-IV is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS Collaboration including the Brazilian Participation Group, the Carnegie Institution for Science, Carnegie Mellon University, the Chilean Participation Group, the French Participation Group, Harvard-Smithsonian Center for Astrophysics, Instituto de Astrofísica de Canarias, The Johns Hopkins University, Kavli Institute for the Physics and Mathematics of the Universe (IPMU) / University of Tokyo, Lawrence Berkeley National Laboratory, Leibniz Institut für Astrophysik Potsdam (AIP), Max-Planck-Institut für Astronomie (MPIA Heidelberg), Max-Planck-Institut für Astrophysik (MPA Garching), Max-Planck-Institut für Extraterrestrische Physik (MPE), National Astronomical Observatories of China, New Mexico State University, New York University, University of Notre Dame, Observatário Nacional / MCTI, The Ohio State University, Pennsylvania State University, Shanghai Astronomical Observatory, United Kingdom Participation Group, Universidad Nacional Autónoma de México, University of Arizona, University of Colorado Boulder, University of Oxford, University of Portsmouth, University of Utah, University of Virginia, University of Washington, University of Wisconsin, Vanderbilt University, and Yale University.
| 1,317 |
1808.02840
| 15,884,746 | 2,018 | 8 | 8 | true | false | 3 |
MPS, MPS, MPS
|
(c) Marlene from Jena says: Due to the Hubble expansion, every point is in recession motion with respect to every other point. If a photon gets scattered into your direction, the scattering particle will necessarily move away from you, leading to a lower perceived energy and a larger wavelength. It's important to view it like that because a photon gas cannot change its state without interaction, due to the linearity of electrodynamics, and this argument shows that it's a kinematic effect: it's a similarity transform of the Planck spectrum.
| 546 |
1808.07551
| 15,924,098 | 2,018 | 8 | 22 | true | false | 1 |
LAW
|
As we can see (second column), Planck+BAO always provides strong evidence against HZ with respect to standard $\Lambda$CDM. When the BAO data are included, the evidence against HZ under $\Lambda$CDM grows by $\Delta \ln {\cal B} = 10.18$ for Planck TT and by $\Delta \ln {\cal B} = 6.9$ for Planck TTTEEE. When considering an HZ spectrum in a $\Lambda$CDM$+N_{\rm eff}$ extension, the evidence against it with respect to standard $\Lambda$CDM also grows by $\Delta \ln {\cal B} = 3.33$ for Planck TT and by $\Delta \ln {\cal B} = 1.98$ for Planck TTTEEE. While HZ was already ruled out from Planck TTTEEE data alone, the inclusion of the BAO datasets excludes HZ also in the case of Planck TT.
| 693 |
1808.09201
| 15,937,961 | 2,018 | 8 | 28 | true | false | 7 |
MISSION, MISSION, MISSION, MISSION, MISSION, MISSION, MISSION
|
Now integrating the *Langevin equation II* over a small time interval $\epsilon$ we get: FORMULA To deal with the product $\sqrt{D(n(\tau))}\, b(\tau)$ one can use various prescriptions, which finally lead to different forms of the *Fokker-Planck equation*. One possibility is to apply the *Stratonovich prescription*, using which we can compute the integral as [^25]: FORMULA This corresponds to the following form of the *Fokker-Planck equation*: FORMULA Further, one can write the *Fokker-Planck equation* as a *continuity equation*, given by the following expression: FORMULA where the *Fokker-Planck current* is defined as: FORMULA Additionally, it is important to note that the *Fokker-Planck equation* mimics a *Schrödinger equation*, provided the real time is replaced by imaginary time in the present context; such an analogy is usually used to describe the convergence to equilibrium. To establish this statement, let us start with the time-dependent *Schrödinger equation* for an electron moving in a one-dimensional conduction wire in the presence of an impurity potential $V(x)$, given by: FORMULA Now changing $t=-i\tau$, $x=n$, $\psi(x,\tau)=P(n;\tau)$ we get: FORMULA Further, taking $V=0$, one can identify the above equation as a diffusion equation with diffusion coefficient: FORMULA If we instead consider a non-vanishing contribution from the impurity potential, one can write: FORMULA Then, setting the *Fokker-Planck current* to zero for equilibrium [^26], we finally obtain: FORMULA from which we get the following Boltzmann probability distribution function for equilibrium: FORMULA where $P_0=P(n=0)$ is the normalization constant of the probability distribution.
| 1,803 |
1809.02732
| 15,974,706 | 2,018 | 9 | 8 | true | true | 6 |
FOKKER, FOKKER, FOKKER, FOKKER, FOKKER, FOKKER
|
For comparison with observations, we have analyzed the four foreground-cleaned CMB maps released by the Planck Collaboration in their second data release [CIT], namely, the `SMICA`, `NILC`, `SEVEM`, and Commander maps. These are high-resolution maps, with Healpix [CIT] resolution $N_{\mbox{\scriptsize side}}$ = 2048. We extracted the multipoles $2 \leq \ell \leq 20$, and rebuilt these maps using $N_{\mbox{\scriptsize side}}$ = 16.
| 436 |
1809.05924
| 15,998,862 | 2,018 | 9 | 16 | true | false | 1 |
MISSION
|
Matched filters (MFs) are elegant and widely used tools to detect and measure signals that resemble a known template in noisy data. However, they can perform poorly in the presence of contaminating sources of similar or smaller spatial scale than the desired signal, especially if signal and contaminants are spatially correlated. We introduce new multicomponent MF and matched multifilter (MMF) techniques that allow for optimal reduction of the contamination introduced by sources that can be approximated by templates. The application of these new filters is demonstrated by applying them to microwave and X-ray mock data of galaxy clusters with the aim of reducing contamination by point-like sources, which are well approximated by the instrument beam. Using microwave mock data, we show that our method allows for unbiased photometry of clusters with a central point source but requires sufficient spatial resolution to reach a competitive noise level after filtering. A comparison of various MF and MMF techniques is given by applying them to Planck multifrequency data of the Perseus galaxy cluster, whose brightest cluster galaxy hosts a powerful radio source known as Perseus A. We also give a brief outline how the constrained MF (CMF) introduced in this work can be used to reduce the number of point sources misidentified as clusters in X-ray surveys like the upcoming eROSITA all-sky survey. A python implementation of the filters is provided by the authors of this manuscript at \url{https://github.com/j-erler/pymf}.
| 1,532 |
1809.06446
| 15,999,185 | 2,018 | 9 | 17 | true | false | 1 |
MISSION
|
In the upcoming years galaxy surveys like *Euclid*[^1], the Large Synoptic Survey Telescope ([LSST]{.smallcaps})[^2], the Dark Energy Spectroscopic Instrument ([DESI]{.smallcaps})[^3] and the Square Kilometer Array ([SKA]{.smallcaps})[^4] will become operative. Indeed, some of these ambitious projects are already happening, see for example [DES]{.smallcaps}[^5] [CIT]. Thanks to these probes we will be able to study the evolution of the Universe through cosmic ages, using as observables galaxy clustering (baryon acoustic oscillations, BAO, and redshift-space distortions, RSD) and weak lensing, which will be measured with unprecedented accuracy. Such improvements will allow tighter constraints on the cosmological parameters and an assessment of possible deviations from the standard flat $\Lambda$-Cold Dark Matter ($\Lambda$CDM) paradigm. In particular, these new experiments will almost certainly be able to measure for the first time the total neutrino mass, $M_\nu$, which is known to suppress the growth of structures at small scales. A lower bound of $M_\nu = 0.056$ eV is obtained by particle physics experiments from neutrino oscillations (see e.g. [CIT]); on the other hand, cosmology so far has been able only to place upper limits [CIT] or a marginal preference [CIT] for a non-zero total neutrino mass. To date, the most stringent constraint comes from combining Planck [CIT] with BOSS Lyman-$\alpha$ forest data, providing $M_\nu < 0.12$ eV at 95% confidence level [CIT].
| 1,484 |
1809.06634
| 16,003,793 | 2,018 | 9 | 18 | true | false | 1 |
MISSION
|
We thank Chris Hayward, Kate Rowlands, Dries Van De Putte, Liza Sazonova, and Mike Fall for useful comments and discussions, as well as Maarten Baes and Peter Camps for making the [skirt]{.smallcaps} code public. VRG, JL and GS acknowledge support from the National Science Foundation (NSF) under Grant No. AST-1517559. The POGS catalogue was created with support from NSF grant AST-1412596. This work used the Extreme Science and Engineering Discovery Environment [XSEDE; [CIT]], which is supported by NSF grant ACI-1548562. The XSEDE allocation TG-AST160043 utilized the Comet and Data Oasis resources provided by the San Diego Supercomputer Center. The IllustrisTNG flagship simulations were run on the HazelHen Cray XC40 supercomputer at the High Performance Computing Center Stuttgart (HLRS) as part of project GCS-ILLU of the Gauss Centre for Supercomputing (GCS). Ancillary and test runs of the project were also run on the compute cluster operated by HITS, on the Stampede supercomputer at TACC/XSEDE (allocation AST140063), at the Hydra and Draco supercomputers at the Max Planck Computing and Data Facility, and on the MIT/Harvard computing facilities supported by FAS and MIT MKI. The original Illustris simulations were run on the Harvard Odyssey and CfA/ITC clusters, the Ranger and Stampede supercomputers at TACC/XSEDE, the Kraken supercomputer at ORNL/XSEDE, the CURIE supercomputer at CEA/France as part of PRACE project RA0844, and the SuperMUC computer at the Leibniz Computing Centre, Germany, as part of project pr85je. The Flatiron Institute is supported by the Simons Foundation.
| 1,602 |
1809.08239
| 16,016,680 | 2,018 | 9 | 21 | true | false | 1 |
MPS
|
In recent years interest has grown in the possibility of string solutions in de Sitter space, for at least a couple of practical reasons. One is the discovery that the expansion of the Universe is accelerating due to non-vanishing vacuum energy that is small relative to the energy scale of the Standard Model [CIT]. The other is the growing observational support for inflationary cosmology [CIT], according to which the Universe underwent an early epoch of near-exponential quasi-de Sitter expansion driven by vacuum energy that was large compared with the energy scale of the Standard Model, but still hierarchically smaller than the Planck scale. At the time of writing there is an ongoing controversy over whether string theory in fact admits consistent solutions in de Sitter space [CIT].
| 788 |
1809.10114
| 16,031,241 | 2,018 | 9 | 26 | false | true | 1 |
UNITS
|
The cycle of the projects presented here is concluded by a closer look into the properties of black holes within the asymptotic safety scenario. In particular, the observation is elucidated that correspondingly renormalisation group improved Schwarzschild black holes constitute a prototypical example of a Hayward geometry. The latter has been advocated as a model for non-singular black holes within quantum gravity phenomenology. Furthermore, the role of the cosmological constant in the renormalisation group improvement process is briefly discussed. It is emphasised that these non-singular black holes share many features of a so-called Planck star: their effective geometry naturally incorporates the one-loop corrections found in the effective field theory framework, their Kretschmann scalar is bounded, and the black hole singularity is replaced by a regular de Sitter patch.
| 885 |
1810.03132
| 16,065,631 | 2,018 | 10 | 7 | false | true | 1 |
STAR
|
We consider the scotogenic model, where the standard model (SM) is extended by a scalar doublet and three $Z_2$ odd SM-singlet fermions ($N_i$, $i=1,2,3$), all odd under an additional $Z_2$ symmetry, as a unifying framework for simultaneous explanation of inflation, dark matter, baryogenesis and neutrino mass. The inert doublet is coupled nonminimally to gravity and forms the inflaton. The lightest neutral particle of this doublet later becomes the dark matter candidate. Baryogenesis is achieved via leptogenesis by the decay of $N_1$ to SM leptons and the inert doublet particles. Neutrino masses are generated at the one-loop level. Explaining all these phenomena together in one model is very economical and gives us a new set of constraints on the model parameters. We calculate the inflationary parameters like the spectral index, tensor-to-scalar ratio and scalar power spectrum, and find them to be consistent with the Planck 2018 constraints. We also analyze reheating via the inert doublet decays/annihilations into relativistic SM particles. We find that the observed baryon asymmetry of the Universe can be obtained and the bound on the sum of light neutrino masses can be satisfied for the lightest $Z_2$ odd singlet fermion of mass around 10 TeV, dark matter in the mass range 1.25--1.60 TeV, and the lepton number violating quartic coupling between the SM Higgs and the inert doublet in the range of $6.5\times10^{-5}$ to $7.2\times 10^{-5}$.
| 1,453 |
1810.03645
| 16,066,719 | 2,018 | 10 | 8 | true | true | 1 |
MISSION
|
We also note that in the IHDM, although we have two complex scalar fields during inflation, only the inert doublet components contribute to the effective potential given by Eq. (REF). Thus, the isocurvature fluctuations typically present in multi-field inflation models are suppressed here. To be specific, the isocurvature fraction is predicted to be $\beta_{\rm iso}\sim {\cal O}(10^{-5})$ [CIT], which is consistent with the Planck constraints.
| 503 |
1810.03645
| 16,070,405 | 2,018 | 10 | 8 | true | true | 1 |
MISSION
|
With the tendency explained in Sec. REF C in mind, we impose observational constraints on the $\alpha$-attractor-type double inflation model based on the Planck result. For this, we take into account the constraints on $\mathcal{P}_{\mathcal{R}}$ and $n_s$ only, as that on $r$ does not give an additional constraint. As in the previous sections, if we assume that the initial velocities of the fields obey the slow-roll approximation, the predictions for $\mathcal{P}_{\mathcal{R}}$ and $n_s$ in this model depend on $M$, $\lambda$, and $\varphi^I _{\rm ini}$. Here, in order to constrain them, we first fix $M$ and specify the possible $\lambda$ for given $\varphi^I_{\rm ini}$. We first set $M=\sqrt{3} M_{\rm Pl}$ so that the analysis in the previous sections is included. Then we repeat similar procedures for different $M$ to see the $M$ dependence of the observational constraints, where we consider the cases with $M=M_{\rm Pl}$ as an example of smaller $M$ and $M=\sqrt{6} M_{\rm Pl}$ as that of larger $M$.
| 1,008 |
1810.06914
| 16,097,340 | 2,018 | 10 | 16 | true | true | 1 |
MISSION
|
Throughout this work we analyse the cosmological, hydrodynamical simulation RefL100N1504 (hereafter Ref), from the EAGLE simulation series. RefL100N1504 represents a cosmological volume of 100 comoving Mpc on a side that was run with a modified version of GADGET 3 ([CIT]), an $N$-Body Tree-PM smoothed particle hydrodynamics (SPH) code, which was modified to use an updated formulation of SPH, new time stepping and new subgrid physics (see [CIT], for a complete description). RefL100N1504 contains 1504$^{3}$ dark matter (as well as gas) particles, with initial gas and dark matter particle masses of $m_{\rm{g}}=1.8\times 10^{6}, {\rm M_{\odot}}$ and $m_{\rm{dm}}=9.7\times 10^{6}, {\rm M_{\odot}}$, respectively, and a Plummer-equivalent gravitational softening length of $\epsilon=0.7$ proper kpc at $z=0$. It assumes a $\Lambda$CDM cosmology derived from the *Planck-1* data ([CIT]), $\Omega_{\rm{m}}=1-\Omega_{\Lambda}=0.307$, $\Omega_{\rm{b}}=0.04825$, $h=0.6777$, $\sigma_{8}=0.8288$, $n_{s}=0.9611$, and a primordial mass fraction of hydrogen of $X=0.752$.
| 1,066 |
1810.07189
| 16,099,315 | 2,018 | 10 | 16 | true | false | 1 |
MISSION
|
First, we recover the well-known result that sub-TeV higgsinos and winos lie below the Planck region (with $\Omega h^2 \sim 0.12$ [CIT]), *i.e.* the predicted relic density is smaller by one or two orders of magnitude than the observed one. This is illustrated at the bottom of the left subfigure as a line-like accumulation of scenarios. In contrast, almost pure bino dark matter that does not annihilate into vector-like fermions can be either overabundant or underabundant, depending on whether or not co-annihilations and funnels are efficient in depleting DM. The blue parameter space points for which the Planck measurements are exactly met correspond to scenarios of bino dark matter either annihilating through a quasi on-shell $Z/h/H$ or $A$ boson, or co-annihilating with MSSM sparticles. Such configurations are also present in the MSSM. The novel feature appearing in the LND model is the red points, which correspond exactly to situations in which binos annihilate into vector-like leptons. While co-annihilations with the corresponding sfermions are also possible, they are not necessary to reproduce the Planck measurements.
| 1,141 |
1810.07224
| 16,100,446 | 2,018 | 10 | 16 | false | true | 3 |
MISSION, MISSION, MISSION
|
The CMB B-mode reionization and recombination bumps from primordial gravitational waves are affected modestly in No Slip Gravity relative to general relativity, despite the equivalence of the gravitational wave speed of propagation, due to the running of the Planck mass $\alpha_M$. We find a fractional change to the tensor-to-scalar power ratio of $\delta r/r\lesssim 0.2\,c_M$, where $c_M$ is the maximum value of $\alpha_M$. For a gravity model designed to provide current cosmic acceleration but restore to general relativity in the early universe, the main effect is on the reionization bump at low multipoles $\ell\lesssim10$.
| 628 |
1810.12337
| 16,141,138 | 2,018 | 10 | 29 | true | false | 1 |
UNITS
|
Polarized galactic foregrounds are dominated by dust and synchrotron emissions, which are still only poorly known with the best constraints due to Planck [CIT]. The limited knowledge of spectral emission densities (SEDs) of these signals leaves the possibility of a rather complex sky, even at high galactic latitudes. This view is further corroborated by several theoretical and observational works [CIT], which indicate that spectral indices, parametrizing synchrotron and dust SEDs, are typically to be expected to vary across the sky. If this is indeed the case, the component separation methods need to estimate these emissions in various parts, or regions, of the sky. This will be particularly pertinent in the analysis of (nearly) entire sky data, as expected from future satellites, where neglecting the SEDs variability could lead to a false detection of the tensor-to-scalar ratio $r\approx 0.005-0.01$ [CIT]. However, given the projected and limited sensitivity of the future CMB instruments, such a generalization of the foreground cleaning procedure will unavoidably affect the quality of the characterization of the SEDs, consequently, allowing for more of the leaked galactic foregrounds signal in the final, estimated, cleaned CMB map [CIT]. This could potentially undermine the feasibility of reaching the scientific target of $r\leq 0.001$ as defined for the future missions, even in the cases when the spatial dependence of the foregrounds spectral indices is known a priori. In this paper we generalize the approach of [CIT] and study foreground residuals arising in such more general applications, showing how to model residuals of a different origin and how to incorporate them in the cosmological likelihood on $r$.
| 1,739 |
1811.00479
| 16,153,276 | 2,018 | 11 | 1 | true | false | 1 |
MISSION
|
We present an improved measurement of the Hubble constant (H_0) using the 'inverse distance ladder' method, which adds the information from 207 Type Ia supernovae (SNe Ia) from the Dark Energy Survey (DES) at redshift 0.018 < z < 0.85 to existing distance measurements of 122 low redshift (z < 0.07) SNe Ia (Low-z) and measurements of Baryon Acoustic Oscillations (BAOs). Whereas traditional measurements of H_0 with SNe Ia use a distance ladder of parallax and Cepheid variable stars, the inverse distance ladder relies on absolute distance measurements from the BAOs to calibrate the intrinsic magnitude of the SNe Ia. We find H_0 = 67.8 +/- 1.3 km s^-1 Mpc^-1 (statistical and systematic uncertainties, 68% confidence). Our measurement makes minimal assumptions about the underlying cosmological model, and our analysis was blinded to reduce confirmation bias. We examine possible systematic uncertainties and all are below the statistical uncertainties. Our H_0 value is consistent with estimates derived from the Cosmic Microwave Background assuming a LCDM universe (Planck Collaboration et al. 2018).
| 1,104 |
1811.02376
| 16,166,207 | 2,018 | 11 | 6 | true | false | 1 |
MISSION
|
Nearly a decade ago, the South Pole Telescope (SPT) and the Atacama Cosmology Telescope (ACT) obtained sufficiently high mapping speeds to detect previously unknown clusters in wide-field surveys based on their thermal SZ effect signals [CIT]. Subsequent to these ground-based surveys, the Planck satellite surveyed the full sky in 9 photometric bands spanning the range 30--850 GHz, delivering a final catalogue of roughly 2000 SZ-selected clusters [CIT]. The Planck survey data have also been used to measure the SZ effect spectrum with the broadest frequency coverage to date (*e.g.*, [CIT]).
| 595 |
1811.02310
| 16,167,434 | 2,018 | 11 | 6 | true | false | 2 |
MISSION, MISSION
|
In this section we explore extra dimensions that are warped, i.e. their metric is non-factorizable. In 5 dimensions, this can be written generically as: FORMULA where $z$ is the conformal coordinate along the extra dimension and $a(z)$ is called the scale factor or warp factor. Warped extra dimensions were first proposed by Randall and Sundrum (RS). In a seminal paper [CIT], they showed how a metric of the form Eq. (REF) can arise as a solution to Einstein's equations on a 5D interval with a negative cosmological constant $\Lambda$, sandwiched between two branes of tensions $\pm\Lambda$. The resulting metric is called 5-dimensional Anti-de Sitter (AdS$_5$), in which the warp factor assumes the form: FORMULA For more details on how to get AdS$_5$ gravity solutions see [CIT]. As we will see in detail, the AdS$_5$ form of the metric has far-reaching implications for the Hierarchy problem, warping the SM cutoff down with respect to the Planck scale. In fact, we can now get the weak-Planck hierarchy from a Planck-size extra dimension. This was indeed a revolutionary step towards a solution to the Hierarchy problem.
| 1,140 |
1811.04279
| 16,185,140 | 2,018 | 11 | 10 | false | true | 3 |
UNITS, UNITS, UNITS
|
[^3]: Note that observables evaluated on the late-time solutions (as one typically does in all inflationary models) do not satisfy the constraints from the latest Planck results, as they are identical to the observables for single-field exponentials. These models, therefore, may be more relevant as dark energy candidates.
| 317 |
1811.06456
| 16,202,923 | 2,018 | 11 | 15 | true | true | 1 |
MISSION
|
We also show the SEDs for two luminous intermediate-type supergiants in the bottom panel. 10584-8.1 may have circumstellar dust although its spectrum did not show any stellar wind emission lines. Planck curve fits to their optical photometry are shown.
| 252 |
1811.06559
| 16,204,803 | 2,018 | 11 | 15 | true | false | 1 |
MISSION
|
It is worth noting that the strong Spin-2 conjecture (REF) does not explicitly involve $M_p$. This is unusual with respect to Swampland conjectures, and this property is due to the fact that the conjecture is unique in the sense that it is about gravity itself rather than a theory coupled to gravity. The Planck mass then appears implicitly rather than explicitly.
| 365 |
1811.07908
| 16,214,808 | 2,018 | 11 | 19 | false | true | 1 |
UNITS
|
PCC 11546 is an extremely cold (${\lesssim}$ 15 K) dust source identified in the southern limits of the LMC as part of the Planck Galactic Cold Cloud catalog [CIT]. It also exhibits strong CO (1-0) emission in the Planck integrated CO map and the MAGMA LMC CO survey [CIT]. There appears to be a lack of massive star formation within PCC 11546, and it contains lower density gas than clouds closer to the center of the LMC [CIT].
| 743 |
1811.07994
| 16,217,055 | 2,018 | 11 | 19 | true | false | 2 |
MISSION, MISSION
|
We consider a particular form of the general Horndeski Lagrangian [CIT] with two additional matter components, $\mathcal{L} _{\mathrm{m}}$ and $\mathcal{L} _{\mathrm{r}}$, in the form of barotropic perfect fluids with energy densities $\rho _{\mathrm{m}}$ and $\rho _{\mathrm{r}}$ that represent pressure-less dust and radiation, respectively. Then the Lagrangian can be written as FORMULA where $X = (\nabla \phi) ^{2}$, for a generic function $A(\phi,X)$ we adopt the notation $A_{,X}\equiv\partial A/\partial X$, and $A_{,\phi}\equiv\partial A/\partial \phi$, while $M _{\mathrm{P}}$ is the Planck mass.
| 606 |
1811.10885
| 16,239,021 | 2,018 | 11 | 27 | true | true | 1 |
UNITS
|
On the largest scales we plot the Planck measurements of the power spectrum [CIT]. The next relevant constraints on smaller scales come from $\mu$-distortions. Since the power spectrum cannot grow arbitrarily quickly, it is clear that the power spectrum cannot become large enough to generate PBHs on scales $k< 10^4 {\rm Mpc}^{-1}$, subject to the aforementioned assumptions, the most relevant being that the perturbations are Gaussian [CIT]. Hence there is no need to also show the $y$-distortion constraints which affect larger scales. The blue line is the upper bound on the amplitude for a monochromatic power spectrum, whilst the dashed purple line is the upper bound on the amplitude for a power spectrum with $k^4$ slope and immediate drop off. For a constraint on slightly smaller scales than spectral distortions, see [CIT].
| 834 |
1811.11158
| 16,240,837 | 2,018 | 11 | 27 | true | true | 1 |
MISSION
|
[^10]: In flat space (Poincaré algebra) the coupling has to be $g\, l_{Pl}^{\lambda_1+\lambda_2+\lambda_3-1}$ instead of just $g$ by dimensional analysis, where $l_{Pl}$ is the Planck length. In $AdS$ the light-front generators do not depend on the cosmological constant, and the vertices acquire the appropriate dimension thanks to the $z$-factors. Therefore, there is no need for dimensionful constants in the conformal algebra case. This is a very convenient feature of the light-front approach, since there are no dimensionful quantities in CFTs.
| 549 |
1811.12333
| 16,252,060 | 2,018 | 11 | 29 | false | true | 1 |
UNITS
|
Fig. REF further shows the constraints obtained from BAO and SNIa. Neither of the two is affected by $\sigma_8$ and $\sum m_\nu$.[^7] However, they both exhibit narrow parameter degeneracies that cut through the region of parameter space that is allowed by Planck. Therefore, the joint analyses of Planck+BAO and Planck+SNIa allow for constraints on $\nu w\mathrm{CDM}$ that are tighter than the ones from Planck+SPTcl (see Fig. REF).
| 433 |
1812.01679
| 16,271,806 | 2,018 | 12 | 4 | true | false | 4 |
MISSION, MISSION, MISSION, MISSION
|
The currently accepted theoretical value is $N_\nu^{\mathrm{eff}}=3.046$, after including the slight effect of neutrino reheating [CIT]. The favored value of $N_\nu^{\mathrm{eff}}$ can be found by fitting to CMB data. In 2013 the Planck collaboration found $N_\nu^{\mathrm{eff}}=3.36\pm0.34$ (CMB only) and $N_\nu^{\mathrm{eff}}= 3.62\pm0.25$ (CMB and $H_0$) [CIT]; moreover, the discrepancy between $H_\mathrm{CMB}$ and $H_0$ has increased [CIT]. This tension, and the possibility that leptogenesis in the early Universe resulted in neutrino asymmetry, motivates our study of the dependence of $N_\nu^{\mathrm{eff}}$ on $L$.
| 625 |
1812.05157
| 16,301,790 | 2,018 | 12 | 12 | true | true | 1 |
MISSION
|
To constrain the parameter space of this model, detailed simulations of structure formation are needed, which is beyond the scope of this letter. Here we adopt the following simplified treatment. We first use the package CAMB [CIT] to calculate the matter power spectrum for a 5.3 (3.5) keV warm DM model and convert it to the 1D matter power spectrum by integration over a $k$ plane. Then we similarly calculate the matter power spectrum for our model, but free streaming is only turned on when the temperature is below a temperature $T_{\rm fs}$; that is, the free-streaming velocity is simply set to zero when $T>T_{\rm fs}$. $T_{\rm fs}$ is defined as the temperature of the SM sector at which the scattering rate of the DM particles equals the Hubble expansion rate. In the NR limit the average cross section of the $\chi\chi$ and $\chi\bar\chi$ processes reads FORMULA Therefore, at temperature $T$ the collision rate can be estimated as FORMULA where $\eta_\gamma\approx 6\times 10^{-10}$ is the baryon-to-photon ratio, and FORMULA Equating $\Gamma_c$ to the Hubble expansion rate $H = 1.66\, g_\star^{1/2} T^2/m_{\rm pl}$, where $g_\star$ is the effective number of degrees of freedom and $m_{\rm pl}$ is the Planck mass, we get FORMULA
| 1,240 |
1812.05699
| 16,307,560 | 2,018 | 12 | 13 | true | true | 1 |
UNITS
|
The paper is organised as follows. In the next section we discuss why the tension between Planck and HST measurements of $H_0$ can be alleviated with a higher $N_{\mathrm{eff}}$ value. In section 3 we describe the tests to be performed, and in sections 4 and 5 we show the results of our joint analysis. In section 6 some conclusions are outlined.
| 347 |
1812.06064
| 16,309,963 | 2,018 | 12 | 14 | true | true | 1 |
MISSION
|
In our work, we explore the evolution of the DE EOS $\omega(z)$ by using model-independent parametrizations. The CPL parametrization and three kinds of binned parametrizations (namely "const $\Delta z$", "const $n\Delta z$" and "free $\Delta z$") are taken into account in this work. To perform cosmology fits, we adopt observational data including the SNIa observations from the JLA samples, the BAO observations from SDSS DR12, and the CMB observations from the Planck 2015 distance priors. In particular, for the SNIa data, we make use of three statistical techniques, i.e. MS, FS and IFS.
| 591 |
1812.10542
| 16,346,671 | 2,018 | 12 | 21 | true | false | 1 |
MISSION
|
The ComPRASS catalogue was obtained by running the blind joint X-ray--SZ detection algorithm summarized in Sect. [2.2] on the RASS and Planck all-sky maps described in Sect. [2.1]. It contains 2323 candidates, distributed in the sky as shown in Fig. REF. The sky is not covered homogeneously: there are more candidates in the regions where the RASS exposure time is higher and where the Planck noise is lower. This is expected, since in those regions both surveys are deeper.
| 475 |
1901.00873
| 16,370,502 | 2,019 | 1 | 3 | true | false | 2 |
MISSION, MISSION
|
The SDSS is one of the largest optical surveys available. It has produced deep images of one third of the sky in five optical bands: u, g, r, i, and z, and has performed spectroscopic measurements for more than three million astronomical objects. From these data, several value-added catalogues were generated, which provide a wealth of information about the objects thanks to the study of a large panel of spectral emission lines. We use here the MPA--JHU DR8 catalogue, from the Max Planck Institute for Astrophysics and the Johns Hopkins University [CIT]. It provides SFR and stellar masses for 1,843,200 galaxies with redshifts up to $z\sim0.3$ (Fig. REF). These data based on the SDSS DR8 release are publicly available[^6] together with all details about the catalogue and the computations and fits of the galaxy physical properties.
| 839 |
1901.01932
| 16,378,942 | 2,019 | 1 | 7 | true | false | 1 |
MPS
|
Our results, based on six UFDs whose star formation histories have been studied in detail, imply a stringent upper limit of $0.50\pm 0.086$ ($0.31\pm 0.04$) nG for an assumed average formation redshift of their host halo of $z_{\rm form}=10$ (20). This limit is better than previously derived limits based on other methods, which range between $1$ and $10$ nG [CIT], and improves on the upper limit of 0.6 nG achieved from CMB non-Gaussianity with Planck mission data [CIT].
| 469 |
1901.03341
| 16,390,888 | 2,019 | 1 | 10 | true | false | 1 |
MISSION
|
The functional renormalization flow REF(#eq:AM4){reference-type="eqref" reference="eq:AM4"} of the scalar mass term is quadratic. The same holds for the running squared Planck mass REF(#eq:93){reference-type="eqref" reference="eq:93"}. No quadratic running is seen in perturbative investigations using dimensional regularization. One should therefore discuss the physical meaning and status of the quadratic running. We will do this first for the simple case of the flowing scalar mass term. The generalization to the flowing Planck mass will be straightforward.
| 565 |
1901.04741
| 16,404,565 | 2,019 | 1 | 15 | true | true | 2 |
UNITS, UNITS
|
Equation REF(#equ:jkl){reference-type="eqref" reference="equ:jkl"} relates the luminosity distance $d_L(z)$ to the Hubble expansion rate $H(z)$ in the FLRW universe, in which the former may be extracted with GP reconstruction of the HIIGx Hubble diagram, while the latter may be found using cosmic chronometers. We employ two distinct values of the Hubble constant to turn the Hubble parameter $H(z)$ into a dimensionless quantity. These are the Planck value and that measured locally using the distance ladder: FORMULA For each of these quantities, we extract a purely geometric measurement of the curvature parameter $\Omega_{k}$ though, as noted earlier, our intention is clearly to probe which of these two disparate values of $H_0$ is more consistent with spatial flatness.
| 779 |
1901.06626
| 16,420,239 | 2,019 | 1 | 20 | true | false | 1 |
MISSION
|
The noise map of the PACT shown in Fig. REF is obtained from the half-dataset difference maps, applying the same linear combination (and weights) as the one used to compute the reconstructed PACT $y$-map. The PACT noise maps in Fig. REF are shown for the ACT equatorial footprint (top panel) and southern footprint (bottom panel). The rectangular regions represent selected deep regions defined in Sect. [2]. By construction of the PACT dataset from the combination of ACT and Planck frequency channels, the associated noise structures are scale dependent. This is exhibited in Fig. REF, bottom panel, where we note that the noise spatial distribution is correlated and inhomogeneous, given that large angular scales are dominated by Planck data and small angular scales by ACT data.
| 792 |
1902.00350
| 16,458,031 | 2,019 | 2 | 1 | true | false | 2 |
MISSION, MISSION
|
We present a uniform catalog of accurate distances to local molecular clouds informed by the Gaia DR2 data release. Our methodology builds on that of Schlafly et al. (2014). First, we infer the distance and extinction to stars along sightlines towards the clouds using optical and near-infrared photometry. When available, we incorporate knowledge of the stellar distances obtained from Gaia DR2 parallax measurements. We model these per-star distance-extinction estimates as being caused by a dust screen with a 2-D morphology derived from Planck at an unknown distance, which we then fit for using a nested sampling algorithm. We provide updated distances to the Schlafly et al. (2014) sightlines towards the Dame et al. (2001) and Magnani et al. (1985) clouds, finding good agreement with the earlier work. For a subset of 27 clouds, we construct interactive pixelated distance maps to further study detailed cloud structure, and find several clouds which display clear distance gradients and/or are comprised of multiple components. We use these maps to determine robust average distances to these clouds. The characteristic combined uncertainty on our distances is approximately 5-6%, though this can be higher for clouds at farther distances, due to the limitations of our single-cloud model.
| 1,298 |
1902.01425
| 16,461,009 | 2,019 | 2 | 4 | true | false | 1 |
MISSION
|
Note that most CMB-S4 experiments are ground-based, so they can probe a smaller fraction of the sky compared to Planck. Having a smaller fraction of the sky leads to increased uncertainties for the estimator. The current estimate is that the new experiments will cover 40% of the sky, significantly less than the 74% of Planck. The error bars will thus increase by a factor of 1.38 from the decrease in $f_\text{sky}$ alone. This may be reduced by combining Planck data for unobserved pixels in these experiments.
| 506 |
1902.01142
| 16,462,141 | 2,019 | 2 | 4 | true | false | 3 |
MISSION, MISSION, MISSION
|
The mock catalogues used in this paper were produced by the LasDamas project (<http://lss.phy.vanderbilt.edu/lasdamas/>); we thank NSF XSEDE for providing the computational resources for LasDamas. Some of the computational facilities used in this project were provided by the Vanderbilt Advanced Computing Center for Research and Education (ACCRE). This project has been supported by the National Science Foundation (NSF) through a Career Award (AST-1151650). Parts of this research were conducted by the Australian Research Council Centre of Excellence for All Sky Astrophysics in 3 Dimensions (ASTRO 3D), through project number CE170100013. This research has made use of NASA's Astrophysics Data System. This work made use of the IPython package [CIT], Scikit-learn [CIT], SciPy [CIT], matplotlib, a Python library for publication quality graphics [CIT], Astropy, a community-developed core Python package for Astronomy [CIT], and NumPy [CIT]. Funding for the SDSS and SDSS-II has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, the U.S. Department of Energy, the National Aeronautics and Space Administration, the Japanese Monbukagakusho, the Max Planck Society, and the Higher Education Funding Council for England. The SDSS Web Site is http://www.sdss.org/. The SDSS is managed by the Astrophysical Research Consortium for the Participating Institutions. 
The Participating Institutions are the American Museum of Natural History, Astrophysical Institute Potsdam, University of Basel, University of Cambridge, Case Western Reserve University, University of Chicago, Drexel University, Fermilab, the Institute for Advanced Study, the Japan Participation Group, Johns Hopkins University, the Joint Institute for Nuclear Astrophysics, the Kavli Institute for Particle Astrophysics and Cosmology, the Korean Scientist Group, the Chinese Academy of Sciences (LAMOST), Los Alamos National Laboratory, the Max-Planck-Institute for Astronomy (MPIA), the Max-Planck-Institute for Astrophysics (MPA), New Mexico State University, Ohio State University, University of Pittsburgh, University of Portsmouth, Princeton University, the United States Naval Observatory, and the University of Washington. These acknowledgements were compiled using the Astronomy Acknowledgement Generator.
| 2,341 |
1902.02680
| 16,474,659 | 2,019 | 2 | 7 | true | false | 3 |
MPS, MPS, MPS
|
In the phantom inflation case, if the slow-roll approximation is assumed for the scalar field, the evolution of the phantom scalar is governed by the following differential equation at leading order, FORMULA which can be solved and it yields the same solution as in Eq. (REF). Let us calculate the slow-roll indices for the $f(R)=R+\alpha R^n$ model, so after following the steps of the previous section, we obtain the slow-roll indices (REF), where in the case at hand, the parameters $J_1$, $J_2$ are, FORMULA Accordingly one can easily obtain the observational indices (REF) in closed form, which are too lengthy to be presented here. By appropriately adjusting the free parameters $f_1$, $\alpha$, $n$, $t_i$ and $m$, one can obtain a viable phenomenology, for example, by choosing $n=1.36602$, $f_1=10^{-40}$, $\alpha=6.751\times 10^{43}$, $t_i=10^{-20}$ and $m=1.4$, we obtain $n_s=0.966$ and $r=0.0613$, which are both compatible with the latest Planck [CIT] and BICEP2/Keck-Array [CIT] data. However, it is obvious that extreme fine tuning is needed in the model, nevertheless, a non-viable $f(R)$ gravity model becomes viable by the inclusion of an appropriate phantom higher order kinetic scalar field term in the gravitational action. In the next section we shall present a general technique for obtaining viable $k$-essence $f(R)$ gravity theories, in the slow-roll approximation.
| 1,392 |
1902.03669
| 16,481,018 | 2,019 | 2 | 10 | true | true | 1 |
MISSION
|
We provide a new interpretation for the Bayes factor combination used in the Dark Energy Survey (DES) first year analysis to quantify the tension between the DES and Planck datasets. The ratio quantifies a Bayesian confidence in our ability to combine the datasets. This interpretation is prior-dependent, with wider prior widths boosting the confidence. We therefore propose that if there are any reasonable priors which reduce the confidence to below unity, then we cannot assert that the datasets are compatible. Computing the evidence ratios for the DES first year analysis and Planck, given that narrower priors drop the confidence to below unity, we conclude that DES and Planck are, in a Bayesian sense, incompatible under LCDM. Additionally we compute ratios which confirm the consensus that measurements of the acoustic scale by the Baryon Oscillation Spectroscopic Survey (SDSS) are compatible with Planck, whilst direct measurements of the acceleration rate of the Universe by the SHOES collaboration are not. We propose a modification to the Bayes ratio which removes the prior dependency using Kullback-Leibler divergences, and using this statistical test find Planck in strong tension with SHOES, in moderate tension with DES, and in no tension with SDSS. We propose this statistic as the optimal way to compare datasets, ahead of the next DES data releases, as well as future surveys. Finally, as an element of these calculations, we introduce in a cosmological setting the Bayesian model dimensionality, which is a parameterisation-independent measure of the number of parameters that a given dataset constrains.
| 1,628 |
1902.04029
| 16,481,351 | 2,019 | 2 | 11 | true | false | 5 |
MISSION, MISSION, MISSION, MISSION, MISSION
|
Some progress has been made in recent years in our understanding of the cluster outskirts. However, the low density and temperature of the gas in these external regions make X-ray and SZ measurements highly challenging. It is beyond the scope of this work to discuss the goodness of the temperature profile measured by *Suzaku* or Planck observations. The main result reported here concerns the SM analysis of the *XMM-Newton* X-ray data that shows that a steep temperature profile may be present in the X-COP cluster outskirts instead of the flatter profile reported by the Planck survey. In addition, the two analyses have evidenced different ICM thermodynamic properties. The SZ temperature profile involves a modest presence of a nonthermal pressure component, at variance with numerical simulations, and an entropy profile that follows beyond $\sim (0.5-1)r_{500}$ the predicted power law increase with slope $1.1$. Conversely, the rapid decline of the temperature reported by the SM analysis of the X-ray data, in good agreement with the *Suzaku* observations, implies a more relevant level of the nonthermal pressure support consistent with the values derived by numerical simulations, and an entropy flattening beyond $r_{500}$.
| 1,236 |
1902.05420
| 16,494,108 | 2,019 | 2 | 13 | true | false | 2 |
MISSION, MISSION
|
In fact, one could argue that such a definition exists and is nothing but the Bayesian evidence considered in Sec. [4.4]. Technically, the Bayesian evidence is the integral of the likelihood over prior space, but its meaning can easily be grasped intuitively. Let us consider a model depending on, say, one free parameter. If, for all values of the parameter in the prior range, one obtains a good fit, then the Bayesian evidence is "good". This is for instance the case of the model in Fig. REF (left panel). Different points correspond to different values of the reheating temperature, but all points are within the $1\sigma$ Planck contour. On the contrary, if one needs to tune the value of the free parameter in order to have a good fit, then the Bayesian evidence will be "bad". This is the case for the model in Fig. REF (right panel). In order to have good compatibility with the data (i.e. points within the $1\sigma$ contour), one needs to tune the parameter $A_{_{\rm I}}$ (which controls the amplitude of the quantum corrections), and the Bayesian evidence is "bad". In other words, the wasted parameter space is penalized. Obviously, the smaller the range of $A_{_{\rm I}}$ leading to a good fit (compared to the prior), the smaller the evidence. We conclude that the evidence is a good, objective measure of fine-tuning. In this sense, the Starobinsky model is the best model because it is the least fine-tuned one.
| 1,427 |
1902.05286
| 16,495,224 | 2,019 | 2 | 14 | true | true | 1 |
MISSION
|
The input parameters of the $U(1)_X$-extended MSSM/SUGRA [CIT] are of the usual non-universal SUGRA model with additional parameters as below (all at the GUT scale) FORMULA where $m_0, A_0, m_1, m_2, m_3, \tan\beta$ and $\text{sgn}(\mu)$ are the soft parameters in the MSSM sector as defined earlier. The parameters $M_2$ and $M_{XY}$ are set to zero at the GUT scale. The input parameters must be such as to satisfy a number of experimental constraints. These include the constraint that the computed Higgs boson mass must be consistent with the Higgs boson mass measurements by the ATLAS and the CMS collaborations. Further, the relic density of dark matter given by the model must be consistent with that measured by the Planck experiment, and sparticle spectrum of the model be consistent with the lower experimental limits on sparticle masses. The consistency of the computed Higgs boson mass with the experimental determination of $m_{h^0}\sim 125$ GeV requires the loop correction to the Higgs boson mass be large which in turn implies that the size of weak scale supersymmetry lie in the several TeV region. Typically this leads to the average squark masses also lying in the TeV region. Such a situation is realized on the hyperbolic branch of radiative breaking of electroweak symmetry [CIT] (for related works see [CIT]). It turns out that there are at least two ways in which the squark masses may be large, i.e., either $m_0$ is large or $m_3$ is large lying in the several TeV region while $m_0$ can be relatively small. In the latter case renormalization group running would generate squark masses lying in the several TeV region while the slepton masses would be relatively much lighter [CIT]. In this analysis we follow the second possibility and choose $m_3$ in the several TeV region but $m_0$ relatively much smaller.
| 1,837 |
1902.05538
| 16,497,058 | 2,019 | 2 | 14 | false | true | 1 |
MISSION
|
We do not use the joint CMB--CIB lensing estimate released by Planck because the joint lensing reconstruction will gain a direct contribution from thermal dust emission due to the quasars themselves, or due to galaxies in their local environment (e.g., Stevens et al. 2010), and so will not give a pure indication of gravitational deflection at the location of the quasars. We refer the reader to the Planck 2018 lensing paper (VIII) for a thorough description of the construction of the Planck lensing products.
| 510 |
1902.06955
| 16,506,373 | 2,019 | 2 | 19 | true | false | 3 |
MISSION, MISSION, MISSION
|
In the minimal SUSY SU(5) GUT, the soft SUSY-breaking terms are given by FORMULA where $\widetilde{\psi}_i$ and $\widetilde{\phi}_i$ are the scalar components of $\Psi_i$ and $\Phi_i$, respectively, the $\widetilde{\lambda}^A$ are the SU(5) gauginos, and for the scalar components of the Higgs superfields we use the same symbols as for the corresponding superfields. In this work, we assume that these soft SUSY-breaking terms in the visible sector are induced at a scale $M_{\rm in} > M_{\rm GUT}$ through PGM [CIT]. We focus on the minimal PGM content for the moment, and discuss the case with the Planck-scale suppressed non-renormalizable operators in the subsequent subsection.
| 683 |
1902.09084
| 16,522,593 | 2,019 | 2 | 25 | false | true | 1 |
UNITS
|
We use for $P_0(k)$ the "wiggle-less" BBKS power spectrum [CIT] under the Planck 2015 cosmology (see table REF). Unknown ground truth cosmological parameters $\boldsymbol{\upomega}_\mathrm{gt}$ are drawn from the (marginalised, Gaussian) Planck priors: FORMULA The "wiggly" ground truth power spectrum $P_{\textrm{gt}}(k)$ is generated with the [CIT] (EH) fitting function, using these cosmological parameters. It is used to simulate observed data $\boldsymbol{\Phi}_\mathrm{O}$, with unknown nuisance parameters (phase realisation and instrumental noise). For later use, the fiducial "wiggly" power spectrum $P_\mathrm{fid}(k)$ is also generated with the EH prescription, using the Planck cosmology. The target parameters $(\boldsymbol{\uptheta})_s \equiv P(k_s)/P_0(k_s)$ are the values of the wiggle function at the $S=100$ support wavenumbers defined in section [3.1]. We denote by $\boldsymbol{\uptheta}_\mathrm{gt}$ and $\boldsymbol{\uptheta}_\mathrm{fid}$ the vectors with components $P_\mathrm{gt}(k_s)/P_0(k_s)$ and $P_\mathrm{fid}(k_s)/P_0(k_s)$, respectively.
| 1,054 |
1902.10149
| 16,530,462 | 2,019 | 2 | 26 | true | false | 3 |
MISSION, MISSION, MISSION
|
But, insofar as the question is well posed, other answers may be considered. The brick wall generates a deep potential well, which may actually hold particles for a much longer amount of time. If we assume thermal equilibrium to arise at the Hawking temperature, we see that many of the Hawking particles will stay trapped, since they do not carry enough kinetic energy to escape. We see an *atmosphere* of particles near the horizon. These particles might hang around for time scales much longer than $M_{\mathrm{BH}}$ in Planck units. Who is right?
| 547 |
1902.10469
| 16,532,577 | 2,019 | 2 | 27 | false | true | 1 |
UNITS
|
- The effective Planck mass, FORMULA can be absorbed in the densities and pressures of matter and the scalar and be effectively hidden in the equations of motion. But this is not the case for the Planck mass run rate, FORMULA which measures the variation in time of the Planck mass. This function is non-zero in models where in the action the scalar field couples directly to curvature, and it produces anisotropic stress in the gravitational potentials.
| 454 |
1902.10687
| 16,534,170 | 2,019 | 2 | 27 | true | true | 3 |
UNITS, UNITS, UNITS
|
A different way to understand the anomalous threshold is as follows. For the case of standard black holes with entropy $N$ we expect the threshold for an absorptive part to be $t_0 \sim O(1/N)$ in Planck units, i.e. absorption of one information bit. The existence of massless charged particles pushes down this threshold to the anomalous value $O(m_e^2)$, and therefore we could expect a lower *information bound* for the mass of the electron, $m_e \sim 1/N$ in Planck units, for the largest possible black hole. Thus, using a cosmological bound for the largest black hole, we could conclude that the lower bound on the mass of electrically charged fermions is given, in Planck units, by $\frac{1}{\sqrt{N_{H}}}$, with $N_{H}$ determined by the Hubble radius of the Universe as $\frac{R_{H}^2}{L_P^2}$.
| 816 |
1903.01311
| 16,548,064 | 2,019 | 3 | 4 | false | true | 3 |
UNITS, UNITS, UNITS
|
Weak lensing analysis of clusters included in both the Planck analysis and the Weighing the Giants survey (WtG; [CIT]) shows Planck cluster masses may indeed be underestimated by $\sim$ 42% for the most massive clusters ($> 10^{15} M_{\odot}$), while Planck masses appear to be more accurate for less massive clusters ($\sim 5 \times 10^{14} M_{\odot}$). Subsequent weak lensing analyses from various surveys (WtG, CCCP, LoCUSS, CLASH, CFHTLenS, RCSLenS, HSC-SSP) found a range of results, some consistent with WtG including bias increasing with mass [CIT], and others more consistent with the original Planck estimate of $\sim$ 20% bias [CIT]. Overall, the tension appears to be somewhat relieved, although not conclusively [CIT], especially after accounting for new Planck measurements of the reionization optical depth [CIT].
| 828 |
1903.02002
| 16,553,700 | 2,019 | 3 | 5 | true | false | 5 |
MISSION, MISSION, MISSION, MISSION, MISSION
|
At least two neutrino species are known to have non-negligible mass thanks to flavor oscillation experiments [CIT]. However, current observations are consistent with many neutrino mass models, and determining the absolute mass scale is an obvious goal in the field of particle physics. It is the target of terrestrial experiments such as the searches for neutrinoless double beta decay [CIT] or tritium beta decay experiments [CIT], but can also be measured through astronomical observations [CIT]. The best current constraints on the summed neutrino mass are $\sum m_\nu < 0.12\; {\rm eV}$ (95% confidence), combining low-redshift BAO data with the 2018 CMB data from Planck [CIT]. The normal hierarchy with one particle of negligible mass has $\sum m_\nu = 0.057\; {\rm eV}$, while the inverted hierarchy with one negligible mass neutrino has $\sum m_\nu = 0.097\; {\rm eV}$. Thus, for example, we need to measure the neutrino mass with an error of $\sigma =0.008\; {\rm eV}$ in order to rule out the inverted hierarchy at $5\sigma$ if neutrino masses are distributed in the normal hierarchy and $\sum m_\nu = 0.057\; {\rm eV}$. As we will see below, the MSE will be a vital component in enabling such a measurement, which is achievable when the MSE data is combined with other available cosmological data.
| 1,308 |
1903.03158
| 16,563,456 | 2,019 | 3 | 7 | true | false | 1 |
MISSION
|
- The Thomson optical depth of the IGM was measured by Planck to be $\tau = 0.058 \pm 0.012$, assuming an instantaneous EoR [CIT]. During each MCMC call, 21cmFAST can produce estimates of $\tau$ by interpolating the neutral hydrogen fraction across the desired redshifts. This allows comparison between the observed and simulated values of $\tau$.
| 345 |
1903.09064
| 16,610,418 | 2,019 | 3 | 21 | true | false | 1 |
MISSION
|
We forecast that $\mathrm{H}_0$ can be measured to sub-percent precision within $\Lambda$CDM cosmological models with time delays from systems discovered in the first year of the LSST survey. The total number of supernova time delay systems over the LSST survey is expected to be $\sim 100$. Our projected constraint on $\mathrm{H}_0$ as shown in Figure REF is comparable in precision to the leading current measurement from the combination of $\textit{Planck}$ and BOSS data [CIT], and it is almost ten times better than the current state-of-the-art constraints from quasar time delays [CIT]. In addition to constraining $\mathrm{H}_0$, time delays from lensed supernovae are sensitive to dark energy in a completely different way than cosmological probes based on distances and volumes [e.g., the CMB, the Type Ia SN distance-redshift relation, BAO, and galaxy clusters; [CIT]], making time-delay measurements highly complementary. Adding in lensed supernovae from the first year of LSST increases the Dark Energy Task Force [CIT] figure of merit by a factor of 3 over a Type Ia SN-based constraint alone---a major gain.
| 1,119 |
1903.09324
| 16,614,068 | 2,019 | 3 | 22 | true | false | 1 |
MISSION
|
The maximum value of the Kretschmann scalar REF(#eq:Ksol){reference-type="eqref" reference="eq:Ksol"} occurs at $T=0$ and is given by $K(0)= (3/2)\;b^{-4}$, which allows for the interpretation of the parameter $b$ from the metric *Ansatz* REF(#eq:mod-FLRW-ds2){reference-type="eqref" reference="eq:mod-FLRW-ds2"} as the minimum curvature length scale of the resulting spacetime manifold. The maximum value of the matter density also occurs at $T=0$ and, from REF(#eq:mod-Friedmann-equation-a){reference-type="eqref" reference="eq:mod-Friedmann-equation-a"} and REF(#eq:mod-Friedmann-T-odd-asol){reference-type="eqref" reference="eq:mod-Friedmann-T-odd-asol"}, is given by $\rho(0) = (3/4)\, E^{2}_\text{planck}\, b^{-2}$, in terms of the reduced Planck energy $E_\text{planck} \equiv \sqrt{1/(8\pi G_N)} \approx 2.44 \times 10^{18}\,\text{GeV}$. Similar results hold for a modified FLRW universe with nonrelativistic matter, as discussed in Appendix [5]. For completeness, we also present, in Appendix [6], a particular modified FLRW universe with a positive cosmological constant $\Lambda$.
| 1,087 |
1903.10450
| 16,621,050 | 2,019 | 3 | 25 | true | true | 3 |
UNITS, UNITS, UNITS
|
For a thin shell of zero thickness, the function $a(u, r)$ in the metric REF(#metric){reference-type="eqref" reference="metric"} is given by FORMULA where $\Theta(x)$ is the step function, equal to $0$ for $x < 0$ and $1$ for $x > 0$. The shell has an energy density proportional to a Dirac delta function, which diverges at $r = R_0(u)$. For the low-energy effective theory to be applicable, a shell should have a finite thickness much larger than the Planck length, and an energy density much smaller than the Planck scale. However, this notion of an ideal thin shell has been widely used in the literature in the context of low-energy effective theories.
| 655 |
1903.11499
| 16,631,403 | 2,019 | 3 | 27 | false | true | 2 |
UNITS, UNITS
|
In this paper, we will calculate $\Delta S$ in the limit that the minima become degenerate and interpret it as providing the leading semiclassical decay rate when the minima are slightly separated. We will also refer to the static separation as the "separation between the created particles" with the understanding that this is the limiting case. As discussed in the introduction, our instantons will create traversable wormholes so long as the acceleration is sufficiently small, proportional to a power of $\ell_{planck}$ (see REF(#eq:cMwh){reference-type="eqref" reference="eq:cMwh"} for the case of the wormholes of [CIT]).[^3] We expect $K$ for such cases to be proportional to some positive power of $\ell_{planck}$, and $\Delta S$ to diverge like an inverse power of $\ell_{planck}$, so the leading contribution to $\Gamma$ as $\ell_{planck} \rightarrow 0$ is indeed $e^{-2\Delta S}$.
| 895 |
1904.02187
| 16,656,856 | 2,019 | 4 | 3 | false | true | 4 |
UNITS, UNITS, UNITS, UNITS
|
As a cautionary note accompanying the results presented in this subsection, the highlighted signatures should not be over-interpreted. In reality, resonant Compton upscattering emission involves a convolution of various $r_{\rm max}$ toroidal surfaces [e.g., see [CIT]] and Lorentz factors for electrons populating the magnetosphere, both of which will blur the features identified here. Moreover, for some rotational phases the resonant spectrum is effectively suppressed because the local resonance condition samples soft photon energies $\varepsilon_s$ above the Planck mean of $3 kT$, and this can move the luminous part of the upscattering signal to well below the escape energy $\varepsilon^{sp}_{\rm esc}$. In addition, excursions from dipolar field morphology such as those predicted in twisted field models [e.g. [CIT]] will complicate the phase dependence of $\varepsilon_f^{\rm max}$ and $\varepsilon^{sp}_{\rm esc}$ substantially, introducing higher-order asymmetries. Notwithstanding, if the highlighted phase-resolved polarimetric characteristics are detected by future hard X-ray spectrometers and polarimeters, they would help confirm the action of photon splitting in Nature. In particular, signatures such as the ones highlighted here offer a path to constraining both the magnetar geometry parameters $\{ \alpha, \zeta \}$ and the particle Lorentz factor $\gamma_e$ distribution, and also which $r_{\rm max}$ bundles contribute to the total emission.
| 1,493 | 1904.03315 | 16,664,903 | 2,019 | 4 | 5 | true | false | 1 | UNITS |
Just as several towers of D1/D(-1)-instantons dramatically decrease their action when proceeding along the trajectory \eqref{limit}, the same is true for the tension of certain D-strings. In particular, bound states of D3/D1-branes wrapping holomorphic cycles are seen as 4d strings which, in the limit $\sigma \rightarrow\infty$, become tensionless.[^3] It is in fact instructive to analyse the behaviour of their (classical) tension along the quantum-corrected trajectory \eqref{newlimit}. The tensions of a D1-brane pointlike in $X$ and of a D3-brane wrapping a holomorphic 2-cycle of $X$ are given by FORMULA where we have used Eq. \eqref{chiscaling} to relate the string scale to the corrected Planck scale. In both cases, for $0< \epsilon \leq 1/2$ the exponent of $\rho$ is positive, and so these tensions decrease to zero as we reach the limit $\rho \rightarrow 0$.
| 997 | 1904.04848 | 16,675,507 | 2,019 | 4 | 9 | false | true | 1 | UNITS |
The classical Schwarzschild spacetime and the RG-improved case can be distinguished via the $r$-dependence in Eq. \eqref{eq:Meff}. Specifically, one can in principle extract the effective mass at two different distances from the black hole. For instance, one measurement can be extracted from the size of the shadow at distances $\sim M$. The second could use the Keplerian orbital periods of nearby stars at distances of $\gtrsim 10^3M$ (e.g. the pericentre distance of the star S2 in the galactic centre [CIT]). Weighing black holes using the orbital motion of nearby stars is still possible, even for spatially unresolved orbital motion, via spectral analysis of emission-line profiles of stars or gas in orbit around a supermassive black hole, as done for M87 [CIT]. For a classical Schwarzschild spacetime, these results should agree. In the RG-improved case, the effective mass extracted from the size of the shadow is smaller than the effective mass extracted at larger radii (for small $\gamma$, i.e. for effects tied to the Planck scale, only marginally so). This makes it evident that the modifications of the spacetime are not degenerate with a classical spherically symmetric spacetime.
| 1,236 | 1904.07133 | 16,692,353 | 2,019 | 4 | 15 | true | true | 1 | UNITS |