http://www.science.gov/topicpages/p/probing+cosmic+accelerators.html

#### Sample records for probing cosmic accelerators
1. Observational probes of cosmic acceleration
Weinberg, David H.; Mortonson, Michael J.; Eisenstein, Daniel J.; Hirata, Christopher; Riess, Adam G.; Rozo, Eduardo
2013-09-01
The accelerating expansion of the universe is the most surprising cosmological discovery in many decades, implying that the universe is dominated by some form of “dark energy” with exotic physical properties, or that Einstein’s theory of gravity breaks down on cosmological scales. The profound implications of cosmic acceleration have inspired ambitious efforts to understand its origin, with experiments that aim to measure the history of expansion and growth of structure with percent-level precision or higher. We review in detail the four most well-established methods for making such measurements: Type Ia supernovae, baryon acoustic oscillations (BAO), weak gravitational lensing, and the abundance of galaxy clusters. We pay particular attention to the systematic uncertainties in these techniques and to strategies for controlling them at the level needed to exploit “Stage IV” dark energy facilities such as BigBOSS, LSST, Euclid, and WFIRST. We briefly review a number of other approaches including redshift-space distortions, the Alcock-Paczynski effect, and direct measurements of the Hubble constant H₀. We present extensive forecasts for constraints on the dark energy equation of state and parameterized deviations from General Relativity, achievable with Stage III and Stage IV experimental programs that incorporate supernovae, BAO, weak lensing, and cosmic microwave background data. We also show the level of precision required for clusters or other methods to provide constraints competitive with those of these fiducial programs. We emphasize the value of a balanced program that employs several of the most powerful methods in combination, both to cross-check systematic uncertainties and to take advantage of complementary information. Surveys to probe cosmic acceleration produce data sets that support a wide range of scientific investigations, and they continue the longstanding astronomical tradition of mapping the universe in ever greater detail over ever…
2. Probing cosmic-ray acceleration and propagation with H₃⁺ observations
SciTech Connect
Indriolo, Nick; Fields, Brian D.; McCall, Benjamin J.
2015-01-22
As cosmic rays traverse the interstellar medium (ISM) they interact with the ambient gas in various ways. These include ionization of atoms and molecules, spallation of nuclei, excitation of nuclear states, and production of pions, among others. All of these interactions produce potential observables which may be used to trace the flux of cosmic rays. One such observable is the molecular ion H₃⁺, produced via the ionization of an H₂ molecule and its subsequent collision with another H₂, which can be identified by absorption lines in the 3.5-4 μm spectral region. We have detected H₃⁺ in several Galactic diffuse cloud sight lines and used the derived column densities to infer ζ₂, the cosmic-ray ionization rate of H₂. Ionization rates determined in this way vary from about 7×10⁻¹⁷ s⁻¹ to about 8×10⁻¹⁶ s⁻¹, and suggest the possibility of discrete sources producing high local fluxes of low-energy cosmic rays. Theoretical calculations of the ionization rate from postulated cosmic-ray spectra also support this possibility. Our recent observations of H₃⁺ near the supernova remnant IC 443 (a likely site of cosmic-ray acceleration) point to even higher ionization rates, on the order of 10⁻¹⁵ s⁻¹. Together, all of these results can further our understanding of the cosmic-ray spectrum both near the acceleration source and in the general Galactic ISM.
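As context for how ζ₂ is inferred from column densities (an editorial aside, not part of the record above): in diffuse clouds H₃⁺ is produced by cosmic-ray ionization of H₂ and destroyed mainly by dissociative recombination with electrons, so a steady-state balance ties the observed columns to the ionization rate. A schematic sketch, assuming steady state, recombination-dominated destruction, and uniform conditions along the sight line (k_e is the recombination rate coefficient, x_e the electron fraction, n_H the gas density):

```latex
% production: zeta_2 n(H2)  [H2 ionization, followed rapidly by H2+ + H2 -> H3+]
% destruction: k_e n(e-) n(H3+)  [dissociative recombination]
\zeta_2\, n(\mathrm{H_2}) = k_e\, n(e^-)\, n(\mathrm{H_3^+})
\quad\Longrightarrow\quad
\zeta_2 = k_e\, x_e\, n_{\mathrm{H}}\,\frac{N(\mathrm{H_3^+})}{N(\mathrm{H_2})}.
```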
3. Cosmic Plasma Wakefield Acceleration
Chen, Pisin; Tajima, Toshiki; Takahashi, Yoshiyuki
2002-10-01
A cosmic acceleration mechanism is introduced which is based on the wakefields excited by Alfven shocks in a relativistically flowing plasma. We show that there exists a threshold condition for transparency below which the accelerating particle is collision-free and suffers little energy loss in the plasma medium. The stochastic encounters of the random accelerating-decelerating phases result in a power-law energy spectrum: f(ε) ∝ 1/ε². As an example, we discuss the possible production of super-GZK ultra-high-energy cosmic rays (UHECR) in the atmosphere of gamma-ray bursts. The estimated event rate in our model agrees with that from UHECR observations. © 2002 American Institute of Physics
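The quoted f(ε) ∝ ε⁻² spectrum is characteristic of stochastic (Fermi-type) acceleration. As an editorial aside, a textbook back-of-the-envelope argument (not the derivation used in the record) shows how random accelerating-decelerating encounters generate a power law, assuming an average fractional energy gain ξ per encounter and a constant escape probability P_esc:

```latex
% After k encounters: E_k = E_0 (1+\xi)^k, with surviving fraction (1-P_{\rm esc})^k.
N(>E) \propto (1-P_{\rm esc})^{k},
\qquad k = \frac{\ln(E/E_0)}{\ln(1+\xi)}
\;\Longrightarrow\;
N(>E) \propto E^{-\gamma},
\qquad
\gamma = \frac{\ln\!\big(1/(1-P_{\rm esc})\big)}{\ln(1+\xi)},
```

so the differential spectrum is f(ε) ∝ ε^{-(γ+1)}, and γ ≈ 1 reproduces the ε⁻² form quoted above.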
4. Cosmic ray antiprotons from nearby cosmic accelerators
Joshi, Jagdish C.; Gupta, Nayantara
2015-05-01
The antiproton flux measured by the PAMELA experiment might have originated from Galactic sources of cosmic rays. These antiprotons are expected to be produced in interactions of cosmic-ray protons and nuclei with cold protons. Gamma rays are also produced in similar interactions inside some of these cosmic accelerators. We consider a few nearby supernova remnants observed by Fermi LAT, many of which are associated with molecular clouds. Gamma rays have been detected from these sources and most likely originate in the decay of neutral pions produced in hadronic interactions. The observed gamma-ray fluxes from these SNRs are used to determine their contributions to the observed diffuse cosmic-ray antiproton flux near the Earth.
5. Hot Spot Cosmic Accelerators
2002-11-01
…with a length of more than 3 million light-years, or no less than one-and-a-half times the distance from the Milky Way to the Andromeda galaxy, this structure is indeed gigantic. The regions where the jets collide with the intergalactic medium are known as "hot spots". Superposing the intensity contours of the radio emission from the southern "hot spot" on a near-infrared J-band (wavelength 1.25 µm) VLT ISAAC image ("b") shows three distinct emitting areas; they are even more clearly visible on the I-band (0.9 µm) FORS1 image ("c"). This emission is obviously associated with the shock front visible on the radio image. This is one of the first times it has been possible to obtain an optical/near-IR image of synchrotron emission from such an intergalactic shock and, thanks to the sensitivity and image sharpness of the VLT, the most detailed view of its kind so far. The central area (with the strongest emission) is where the plasma jet from the galaxy centre hits the intergalactic medium. The light from the two other "knots", some 10,000-15,000 light-years away from the central "hot spot", is also interpreted as synchrotron emission. However, in view of the large distance, the astronomers are convinced that it must be caused by electrons accelerated in secondary processes at those sites. The new images thus confirm that electrons are being continuously accelerated in these "knots" (hence called "cosmic accelerators") far from the galaxy and the main jets, in nearly empty space. The exact physical circumstances of this effect are not well known and will be the subject of further investigations. The present VLT images of the "hot spots" near 3C 445 may not have the same public appeal as some of the beautiful images that have been produced by the same instruments during the past years. But they are no less valuable; their unusual importance is of a different kind, as they herald the advent of fundamentally new insights into the mysteries of this class of remote and active…
6. Cosmic ray studies with an Interstellar Probe
NASA Technical Reports Server (NTRS)
Mewaldt, R. A.; Stone, E. C.
1990-01-01
Among the NASA mission concepts that have been suggested for the 21st century is an Interstellar Probe that might be accelerated to a velocity of about 10 to 20 AU/yr, allowing it to leave the heliosphere and ultimately reach a radial distance of about 500 to 1000 AU in about 50 years. Previous studies of such a mission, and its potential significance for cosmic-ray studies both within the heliosphere and beyond in interstellar space, are discussed.
7. Is cosmic acceleration slowing down?
SciTech Connect
Shafieloo, Arman; Sahni, Varun; Starobinsky, Alexei A.
2009-11-15
We investigate the course of cosmic expansion in its recent past using the Constitution SN Ia sample, along with baryon acoustic oscillations (BAO) and cosmic microwave background (CMB) data. Allowing the equation of state of dark energy (DE) to vary, we find that a coasting model of the universe (q₀ = 0) fits the data about as well as Lambda cold dark matter. This effect, which is most clearly seen using the recently introduced Om diagnostic, corresponds to an increase of Om and q at redshifts z ≲ 0.3. This suggests that cosmic acceleration may have already peaked and that we are currently witnessing its slowing down. The case for evolving DE strengthens if a subsample of the Constitution set consisting of SNLS+ESSENCE+CfA SN Ia data is analyzed in combination with BAO+CMB data. The effect we observe could correspond to DE decaying into dark matter (or something else).
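For reference (an editorial aside, not part of the record): the Om diagnostic of Sahni, Shafieloo, and Starobinsky is a simple function of the expansion history that is constant, and equal to Ωₘ, for flat ΛCDM, so any measured redshift dependence of Om signals a departure from a cosmological constant. With h(z) = H(z)/H₀:

```latex
Om(z) \equiv \frac{h^2(z) - 1}{(1+z)^3 - 1},
\qquad
h^2(z) = \Omega_m (1+z)^3 + 1 - \Omega_m
\;\Longrightarrow\;
Om(z) = \Omega_m \quad \text{(flat } \Lambda\text{CDM)}.
```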
8. The cosmic-ray pathlength distribution at low energy - A new probe of the source/acceleration regions
NASA Technical Reports Server (NTRS)
Guzik, T. G.; Wefel, J. P.
1984-01-01
Compiled measurements of secondary to primary ratios covering the charge range Z = 3-26 and the energy range 0.05 - 50 GeV/nucleon are analyzed in energy dependent galactic propagation plus solar modulation calculations. The cosmic ray pathlength distribution is shown to consist of two energy dependent components interpreted as representing confinement in the galaxy and confinement in the 'source' regions.
9. Matter creation and cosmic acceleration
Ramos, Rudnei O.; Vargas dos Santos, Marcelo; Waga, Ioav
2014-04-01
We investigate the creation of cold dark matter (CCDM) cosmology as an alternative to explain the cosmic acceleration. Particular attention is given to the evolution of density perturbations and constraints coming from recent observations. By assuming negligible effective sound speed we compare CCDM predictions with redshift-space-distortion based f(z)σ8(z) measurements. We identify a subtle issue associated with which contribution in the density contrast should be used in this test and then show that the CCDM results are the same as those obtained with ΛCDM. These results are then contrasted with the ones obtained at the background level. For the background tests we have used type Ia supernovae data (Union 2.1 compilation) in combination with baryonic acoustic oscillations and cosmic microwave background observations and also measurements of the Hubble parameter at different redshifts. As a consequence of the studies we have performed at both the background and perturbation levels, we explicitly show that CCDM is observationally degenerate with respect to ΛCDM (dark degeneracy). The need to overcome the lack of a fundamental microscopic basis for the CCDM is the major challenge for this kind of model.
10. When did cosmic acceleration start?
SciTech Connect
Melchiorri, Alessandro; Pagano, Luca; Pandolfi, Stefania
2007-08-15
A precise determination, and comparison, of the epoch of the onset of cosmic acceleration, at redshift z_acc, and of dark energy domination, at z_eq, provides an interesting measure with which to parametrize dark energy models. By combining several cosmological data sets, we place constraints on the redshift and age of cosmological acceleration. For a ΛCDM model, we find the constraint z_acc = 0.76 ± 0.10 at 95% C.L., occurring 6.7 ± 0.4 Gyr ago. Allowing a constant equation of state different from −1 changes the constraint to z_acc = 0.81 ± 0.12 (6.9 ± 0.5 Gyr ago), while dynamical models markedly increase the error on the constraint, z_acc = 0.81 ± 0.30 (6.8 ± 1.4 Gyr ago). Unified dark energy models such as silent quartessence yield z_acc = 0.8 ± 0.16 (6.8 ± 0.6 Gyr ago). Interestingly, we find that the best fit z_acc and z_eq are remarkably insensitive to both the cosmological data sets and the theoretical dark energy models considered.
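As a cross-check of the numbers above (an editorial aside, not part of the record): in flat ΛCDM the onset of acceleration follows directly from the Friedmann acceleration equation, and Ωₘ ≈ 0.3 gives z_acc ≈ 0.7, consistent with the quoted constraints:

```latex
\ddot a = 0:\quad
\Omega_m (1+z_{\rm acc})^3 = 2\,\Omega_\Lambda
\;\Longrightarrow\;
z_{\rm acc} = \left(\frac{2\Omega_\Lambda}{\Omega_m}\right)^{1/3} - 1
\approx 0.67
\quad (\Omega_m = 0.3,\ \Omega_\Lambda = 0.7).
```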
11. Ion acceleration to cosmic ray energies
NASA Technical Reports Server (NTRS)
Lee, Martin A.
1990-01-01
The acceleration and transport environment of the outer heliosphere is described schematically. Acceleration occurs where the divergence of the solar-wind flow is negative, that is at shocks, and where second-order Fermi acceleration is possible in the solar-wind turbulence. Acceleration at the solar-wind termination shock is presented by reviewing the spherically-symmetric calculation of Webb et al. (1985). Reacceleration of galactic cosmic rays at the termination shock is not expected to be important in modifying the cosmic ray spectrum, but acceleration of ions injected at the shock up to energies not greater than 300 MeV/charge is expected to occur and to create the anomalous cosmic ray component. Acceleration of energetic particles by solar wind turbulence is expected to play almost no role in the outer heliosphere. The one exception is the energization of interstellar pickup ions beyond the threshold for acceleration at the quasi-perpendicular termination shock.
12. Growth of Cosmic Structure: Probing Dark Energy Beyond Expansion
SciTech Connect
Huterer, Dragan; Kirkby, David; Bean, Rachel; Connolly, Andrew; Dawson, Kyle; Dodelson, Scott; Evrard, August; Jain, Bhuvnesh; Jarvis, Michael; Linder, Eric; Mandelbaum, Rachel; May, Morgan; Raccanelli, Alvise; Reid, Beth; Rozo, Eduardo; Schmidt, Fabian; Sehgal, Neelima; Slosar, Anze; Van Engelen, Alex; Wu, Hao-Yi; Zhao, Gongbo
2014-03-15
The quantity and quality of cosmic structure observations have greatly accelerated in recent years, and further leaps forward will be facilitated by imminent projects. These will enable us to map the evolution of dark and baryonic matter density fluctuations over cosmic history. The way that these fluctuations vary over space and time is sensitive to several pieces of fundamental physics: the primordial perturbations generated by GUT-scale physics; neutrino masses and interactions; the nature of dark matter and dark energy. We focus on the last of these here: the ways that combining probes of growth with those of the cosmic expansion such as distance-redshift relations will pin down the mechanism driving the acceleration of the Universe.
14. A Simplified Model for the Acceleration of Cosmic Ray Particles
ERIC Educational Resources Information Center
Gron, Oyvind
2010-01-01
Two important questions concerning cosmic rays are: Why are electrons in the cosmic rays less efficiently accelerated than nuclei? How are particles accelerated to great energies in ultra-high energy cosmic rays? In order to answer these questions we construct a simple model of the acceleration of a charged particle in the cosmic ray. It is not…
15. Muon acceleration in cosmic-ray sources
SciTech Connect
Klein, Spencer R.; Mikkelsen, Rune E.; Becker Tjus, Julia
2013-12-20
Many models of ultra-high energy cosmic-ray production involve acceleration in linear accelerators located in gamma-ray bursts, magnetars, or other sources. These transient sources have short lifetimes, which necessitate very high accelerating gradients, up to 10¹³ keV cm⁻¹. At gradients above 1.6 keV cm⁻¹, muons produced by hadronic interactions undergo significant acceleration before they decay. This muon acceleration hardens the neutrino energy spectrum and greatly increases the high-energy neutrino flux. Using the IceCube high-energy diffuse neutrino flux limits, we set two-dimensional limits on the source opacity and matter density, as a function of accelerating gradient. These limits put strong constraints on different models of particle acceleration, particularly those based on plasma wake-field acceleration, and limit models for sources like gamma-ray bursts and magnetars.
16. Does electromagnetic radiation accelerate galactic cosmic rays?
NASA Technical Reports Server (NTRS)
Eichler, D.
1977-01-01
The 'reactor' theories of Tsytovich and collaborators (1973) of cosmic-ray acceleration by electromagnetic radiation are examined in the context of galactic cosmic rays. It is shown that any isotropic synchrotron or Compton reactors with reasonable astrophysical parameters can yield particles with a maximum relativistic factor of only about 10,000. If they are to produce particles with higher relativistic factors, the losses due to inverse Compton scattering of the electromagnetic radiation in them outweigh the acceleration, and this violates the assumptions of the theory. This is a critical restriction in the context of galactic cosmic rays, which have a power-law spectrum extending up to a relativistic factor of 1 million.
17. Cosmic Ray Origin, Acceleration and Propagation
NASA Technical Reports Server (NTRS)
Baring, Matthew G.
2000-01-01
This paper summarizes highlights of the OG3.1, 3.2 and 3.3 sessions of the 26th International Cosmic Ray Conference in Salt Lake City, which were devoted to issues of origin/composition, acceleration and propagation.
18. Cosmic acceleration and Brans-Dicke theory
SciTech Connect
Sharif, M.; Waheed, S.
2012-10-15
We study the accelerated expansion of the universe by exploring the Brans-Dicke parameter in different eras. For this, we take the FRW universe model with a viscous fluid (without potential) and the Bianchi type-I universe model with a barotropic fluid (with and without a potential). We evaluate the deceleration parameter and the Brans-Dicke parameter to explore cosmic acceleration. It is concluded that accelerated expansion of the universe can also be achieved for higher values of the Brans-Dicke parameter in some cases.
19. Acceleration of cosmic rays in Tycho's SNR.
Morlino, G.; Caprioli, D.
We apply the non-linear diffusive shock acceleration theory in order to describe the properties of SN 1572 (G120.1+1.4, hereafter simply Tycho). By analyzing its multi-wavelength spectrum, we show how Tycho's forward shock (FS) is accelerating protons up to ˜500 TeV, channeling more than 10 per cent of its kinetic energy into cosmic rays. We find that the streaming instability induced by cosmic rays is consistent with all the observational evidence indicating very efficient magnetic field amplification (up to ˜300 μG), in particular the X-ray morphology of the remnant. We are able to explain the gamma-ray spectrum from the GeV up to the TeV band, recently measured by Fermi-LAT and VERITAS respectively, as due to pion decay produced in nuclear collisions by accelerated nuclei scattering against the background gas. We also show that emission due to the accelerated electrons does not play a relevant role in the observed gamma-ray spectrum.
20. Superdiffusion of cosmic rays: Implications for cosmic ray acceleration
SciTech Connect
Lazarian, A.; Yan, Huirong
2014-03-20
Diffusion of cosmic rays (CRs) is the key process for understanding their propagation and acceleration. We employ the description of spatial separation of magnetic field lines in magnetohydrodynamic turbulence in Lazarian and Vishniac to quantify the divergence of the magnetic field on scales less than the injection scale of turbulence, and show that this divergence induces superdiffusion of CRs in the direction perpendicular to the mean magnetic field. The perpendicular displacement squared increases not linearly with the distance x along the magnetic field, as it would for regular diffusion, but as x³ for freely streaming CRs. The dependence changes to x^{3/2} for CRs propagating diffusively along the magnetic field. In the latter case, we show that it is important to distinguish the perpendicular displacement with respect to the mean field from that with respect to the local magnetic field. We consider how superdiffusion changes the acceleration of CRs in shocks and show how it decreases the efficiency of CR acceleration in perpendicular shocks. We also demonstrate that in the case when the small-scale magnetic field is generated in the pre-shock region, efficient acceleration can take place for CRs streaming without collisions along the magnetic loops.
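Stated compactly (an editorial aside, not part of the record): with x the distance traveled along the mean magnetic field and Δy the displacement across it, the regimes described above are

```latex
\langle \Delta y^2 \rangle \propto
\begin{cases}
x^{3} & \text{(CRs free-streaming along diverging field lines)},\\
x^{3/2} & \text{(CRs diffusing along the field)},
\end{cases}
```

in contrast with ⟨Δy²⟩ ∝ x for ordinary perpendicular diffusion.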
1. New cosmic accelerating scenario without dark energy
Lima, J. A. S.; Basilakos, S.; Costa, F. E. M.
2012-11-01
We propose an alternative, nonsingular, cosmic scenario based on gravitationally induced particle production. The model is an attempt to evade the coincidence and cosmological constant problems of the standard model (ΛCDM) and also to connect the early and late time accelerating stages of the Universe. Our space-time emerges from a pure initial de Sitter stage, thereby providing a natural solution to the horizon problem. Subsequently, due to an instability provoked by the production of massless particles, the Universe evolves smoothly to the standard radiation dominated era, with the production of radiation ending as required by conformal invariance. Next, the radiation becomes subdominant, with the Universe entering the cold dark matter dominated era. Finally, the negative pressure associated with the creation of cold dark matter (CCDM model) particles accelerates the expansion and drives the Universe to a final de Sitter stage. The late time cosmic expansion history of the CCDM model is exactly that of the standard ΛCDM model; however, there is no dark energy. The model evolves between two limiting (early and late time) de Sitter regimes. All the stages are also discussed in terms of a scalar field description. This complete scenario is fully determined by two extreme energy densities, or equivalently, the associated de Sitter Hubble scales connected by ρ_I/ρ_f = (H_I/H_f)² ∼ 10¹²², a result that has no correlation with the cosmological constant problem. We also study the linear growth of matter perturbations at the final accelerating stage. It is found that the CCDM growth index can be written as a function of the Λ growth index, γ_Λ ≃ 6/11. In this framework, we also compare the observed growth rate of clustering with that predicted by the current CCDM model. Performing a χ² statistical test, we show that the CCDM model provides growth rates that match sufficiently well with the observed growth rate of structure.
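For readers unfamiliar with the growth index quoted above (an editorial aside, not from the record): the linear growth rate of matter perturbations is conventionally parameterized by an index γ, and γ ≈ 6/11 is the standard ΛCDM value, so the statement is that CCDM reproduces ΛCDM-like growth. The parameterization, with δ_m the linear matter density contrast and a the scale factor:

```latex
f(z) \equiv \frac{d\ln\delta_m}{d\ln a} \simeq \Omega_m(z)^{\gamma},
\qquad
\gamma_\Lambda \simeq \frac{6}{11} \approx 0.55
\quad \text{for } \Lambda\text{CDM}.
```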
2. New Kinematical Constraints on Cosmic Acceleration
SciTech Connect
Rapetti, David; Allen, Steve W.; Amin, Mustafa A.; Blandford, Roger (KIPAC, Menlo Park)
2007-05-25
We present and employ a new kinematical approach to "dark energy" studies. We construct models in terms of the dimensionless second and third derivatives of the scale factor a(t) with respect to cosmic time t, namely the present-day value of the deceleration parameter q₀ and the cosmic jerk parameter, j(t). An elegant feature of this parameterization is that all ΛCDM models have j(t) = 1 (constant), which facilitates simple tests for departures from the ΛCDM paradigm. Applying our model to redshift-independent distance measurements, from type Ia supernovae and X-ray cluster gas mass fraction measurements, we obtain clear statistical evidence for a late-time transition from a decelerating to an accelerating phase. For a flat model with constant jerk, j(t) = j, we measure q₀ = −0.81 ± 0.14 and j = 2.16 (+0.81, −0.75), results that are consistent with ΛCDM at about the 1σ confidence level. In comparison to dynamical analyses, the kinematical approach uses a different model set and employs a minimum of prior information, being independent of any particular gravity theory. The results obtained with this new approach therefore provide important additional information, and we argue that both kinematical and dynamical techniques should be employed in future dark energy studies, where possible.
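For reference (an editorial aside, not part of the record): the deceleration and jerk parameters are the dimensionless second and third time derivatives of the scale factor, and j = 1 for all flat ΛCDM models because the expansion rate contains only a constant term plus an a⁻³ term:

```latex
q(t) \equiv -\frac{\ddot a\, a}{\dot a^2},
\qquad
j(t) \equiv \frac{\dddot a\, a^2}{\dot a^3};
\qquad
h^2(a) = \Omega_m a^{-3} + \Omega_\Lambda
\;\Longrightarrow\; j = 1.
```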
3. PROBING THE UNIVERSE'S TILT WITH THE COSMIC INFRARED BACKGROUND DIPOLE
SciTech Connect
Fixsen, D. J.; Kashlinsky, A.
2011-06-10
Conventional interpretation of the observed cosmic microwave background (CMB) dipole is that all of it is produced by local peculiar motions. Alternative explanations requiring part of the dipole to be primordial have received support from measurements of large-scale bulk flows. A test of the two hypotheses is whether other cosmic dipoles, produced by structures that collapsed later than the last scattering, coincide with the CMB dipole. One such background is the cosmic infrared background (CIB), whose absolute spectrum was measured to ≈30% by the COBE satellite. Over the 100-500 μm wavelength range its spectral energy distribution can provide a probe of its alignment with the CMB. This is tested with the COBE FIRAS data set, which is available for such a measurement because of its low noise and frequency resolution, both important for Galaxy subtraction. Although the FIRAS instrument noise is in principle low enough to determine the CIB dipole, the Galactic foreground is sufficiently close spectrally to keep the CIB dipole hidden. A similar analysis is performed with DIRBE, which, because of the limited frequency coverage, provides a poorer data set. We discuss strategies for measuring the CIB dipole with future instruments to probe the tilt and apply them to the Planck, Herschel, and the proposed Pixie missions. We demonstrate that a future FIRAS-like instrument with instrument noise a factor of ≈10 lower than FIRAS would make a statistically significant measurement of the CIB dipole. We find that the Planck and Herschel data sets will not allow a robust CIB dipole measurement. The Pixie instrument promises a determination of the CIB dipole and its alignment with either the CMB dipole or the galaxy acceleration dipole vector.
4. Growth of cosmic structure: Probing dark energy beyond expansion
Huterer, Dragan; Kirkby, David; Bean, Rachel; Connolly, Andrew; Dawson, Kyle; Dodelson, Scott; Evrard, August; Jain, Bhuvnesh; Jarvis, Michael; Linder, Eric; Mandelbaum, Rachel; May, Morgan; Raccanelli, Alvise; Reid, Beth; Rozo, Eduardo; Schmidt, Fabian; Sehgal, Neelima; Slosar, Anže; van Engelen, Alex; Wu, Hao-Yi; Zhao, Gongbo
2015-03-01
The quantity and quality of cosmic structure observations have greatly accelerated in recent years, and further leaps forward will be facilitated by imminent projects. These will enable us to map the evolution of dark and baryonic matter density fluctuations over cosmic history. The way that these fluctuations vary over space and time is sensitive to several pieces of fundamental physics: the primordial perturbations generated by GUT-scale physics; neutrino masses and interactions; the nature of dark matter and dark energy. We focus on the last of these here: the ways that combining probes of growth with those of the cosmic expansion such as distance-redshift relations will pin down the mechanism driving the acceleration of the Universe. One way to explain the acceleration of the Universe is to invoke dark energy parameterized by an equation of state w. Distance measurements provide one set of constraints on w, but dark energy also affects how rapidly structure grows; the greater the acceleration, the more suppressed the growth of structure. Upcoming surveys are therefore designed to probe w with direct observations of the distance scale and the growth of structure, each complementing the other on systematic errors and constraints on dark energy. A consistent set of results will greatly increase the reliability of the final answer. Another possibility is that there is no dark energy, but that General Relativity does not describe the laws of physics accurately on large scales. While the properties of gravity have been measured with exquisite precision at stellar system scales and densities, within our solar system and by binary pulsar systems, its properties in different environments are poorly constrained. To fully understand whether General Relativity is the complete theory of gravity we must test gravity across a spectrum of scales and densities. Rapid developments in gravitational wave astronomy and numerical relativity are directed at testing gravity in the high…
5. Solar Cosmic Ray Acceleration and Propagation
Podgorny, I. M.; Podgorny, A. I.
2016-05-01
The GOES data for emission of flare protons with energies of 10-100 MeV are analyzed. Fluxes of ~10³² accelerated protons are produced at the decay of the current sheet. Proton acceleration in a flare occurs along a singular line of the current sheet by the Lorentz electric field, as in a pinch gas discharge. The duration of the proton flux measured at the Earth's orbit is 2-3 orders of magnitude longer than the duration of the flare. The high-energy proton flux from flares that appear on the western part of the solar disk arrives at the Earth within the time of flight. These particles propagate along magnetic lines of the Archimedes spiral connecting the flare with the Earth. Protons from flares on the eastern part of the solar disk begin to be registered with a delay of several hours. Such particles cannot reach the magnetic field line connecting the flare with the Earth directly; they reach the Earth by moving across the interplanetary magnetic field. The particles captured by the magnetic field in the solar wind are transported with the solar wind and by diffusion across the magnetic field. The patterns of solar cosmic-ray generation demonstrated in this paper are not always observed in small (~1 cm⁻² s⁻¹ sr⁻¹) proton events.
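As a rough cross-check of the quoted flight time (an editorial aside, not part of the record): a proton of tens to hundreds of MeV travels at an appreciable fraction of c, and the Parker (Archimedean) spiral path from the Sun to the Earth is somewhat longer than 1 AU, so prompt arrival within tens of minutes is expected for western flares. A minimal sketch, assuming a hypothetical spiral path length of 1.2 AU:

```python
import math

AU_M = 1.496e11   # astronomical unit [m]
C = 2.998e8       # speed of light [m/s]
MP_MEV = 938.272  # proton rest energy [MeV]

def flight_time_minutes(kinetic_mev: float, path_au: float = 1.2) -> float:
    """Time of flight for a proton of given kinetic energy along a
    field-aligned path of length path_au (1.2 AU is an assumed, typical
    Parker-spiral length for ordinary solar wind speeds)."""
    gamma = 1.0 + kinetic_mev / MP_MEV      # Lorentz factor
    beta = math.sqrt(1.0 - 1.0 / gamma**2)  # v/c
    return path_au * AU_M / (beta * C) / 60.0

for e_mev in (100, 500, 1000):
    print(f"{e_mev:5d} MeV proton: ~{flight_time_minutes(e_mev):.0f} min")
# ~23 min at 100 MeV, ~13 min at 500 MeV, ~11 min at 1 GeV:
# consistent with arrival within tens of minutes from western flares.
```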
6. Cosmic ray sources, acceleration and propagation
NASA Technical Reports Server (NTRS)
Ptuskin, V. S.
1986-01-01
A review is given of selected papers on the theory of cosmic ray (CR) propagation and acceleration. The high isotropy and comparatively large age of galactic CRs are explained by the effective interaction of relativistic particles with random and regular electromagnetic fields in the interstellar medium. The kinetic theory of CR propagation in the Galaxy is formulated similarly to the elaborate theory of CR propagation in the heliosphere. The substantial difference between these theories is explained by the necessity of taking into account, in some cases, collective effects due to the rather high density of relativistic particles. In particular, the kinetic CR streaming instability and the hydrodynamic Parker instability are studied. The interaction of relativistic particles with an ensemble of given weak random magnetic fields is calculated by perturbation theory. The theory of CR transfer is considered to be basically complete for this case. The main problem consists in poor information about the structure of the regular and the random galactic magnetic fields. An account is given of CR transfer in a turbulent medium.
7. Stellar black holes and the origin of cosmic acceleration
SciTech Connect
Prescod-Weinstein, Chanda; Afshordi, Niayesh; Balogh, Michael L.
2009-08-15
The discovery of cosmic acceleration has presented a unique challenge for cosmologists. As observational cosmology forges ahead, theorists have struggled to make sense of a standard model that requires extreme fine-tuning. This challenge is known as the cosmological constant problem. The theory of gravitational aether is an alternative to general relativity that does not suffer from this fine-tuning problem, as it decouples the quantum field theory vacuum from geometry, while remaining consistent with other tests of gravity. In this paper, we study static black hole solutions in this theory and show that it manifests a UV-IR coupling: aether couples the space-time metric close to the black hole horizon to the metric at infinity. We then show that using the trans-Planckian ansatz (as a quantum gravity effect) close to the black hole horizon leads to an accelerating cosmological solution far from the horizon. Interestingly, this acceleration matches current observations for stellar-mass black holes. Based on our current understanding of the black hole accretion history in the Universe, we then make a prediction for how the effective dark energy density should evolve with redshift, which can be tested with future dark energy probes.
8. Ionisation as indicator for cosmic ray acceleration
Schuppan, F.; Röken, C.; Fedrau, N.; Becker Tjus, J.
2014-06-01
Astrospheres and wind bubbles of massive stars are believed to be sources of cosmic rays with energies E ≲ 1 TeV. These particles are not directly detectable, but their impact on surrounding matter, in particular ionisation of atomic and molecular hydrogen, can lead to observable signatures. A correlation study of both gamma ray emission, induced by proton-proton interactions of cosmic ray protons with kinetic energies Ep ≥ 280 MeV with ambient hydrogen, and ionisation induced by cosmic ray protons of kinetic energies Ep < 280 MeV can be performed in order to study potential sources of (sub)TeV cosmic rays.
9. Particle acceleration in cosmic sites. Astrophysics issues in our understanding of cosmic rays
Diehl, R. L.
2009-11-01
Particles are accelerated in cosmic sites probably under conditions very different from those at terrestrial particle accelerator laboratories. Nevertheless, specific experiments which explore plasma conditions and stimulate particle acceleration carry significant potential to illuminate some aspects of the cosmic particle acceleration process. Here we summarize our understanding of cosmic particle acceleration, as derived from observations of the properties of cosmic ray particles, and through astronomical signatures caused by these near their sources or throughout their journey in interstellar space. We discuss the candidate-source object variety, and what has been learned about their particle-acceleration characteristics. We conclude identifying open issues as they are discussed among astrophysicists. The cosmic ray differential intensity spectrum across energies from 10¹⁰ eV to 10²¹ eV reveals a rather smooth power-law spectrum. Two kinks occur at the “knee” (≃10¹⁵ eV) and at the “ankle” (≃3×10¹⁸ eV). It is unclear if these kinks are related to boundaries between different dominating sources, or rather related to characteristics of cosmic-ray propagation. Currently we believe that galactic sources dominate up to 10¹⁷ eV or even above, and the extragalactic origin of cosmic rays at highest energies merges rather smoothly with galactic contributions throughout the 10¹⁵-10¹⁸ eV range. Pulsars and supernova remnants are among the prime candidates for galactic cosmic-ray production, while nuclei of active galaxies are considered best candidates to produce ultrahigh-energy cosmic rays of extragalactic origin. The acceleration processes are probably related to shocks formed when matter is ejected into surrounding space from energetic sources such as supernova explosions or matter accreting onto black holes. Details of shock acceleration are complex, as relativistic particles modify the structure of the shock, and simple approximations or perturbation…
10. Cosmic-ray acceleration at stellar wind terminal shocks
NASA Technical Reports Server (NTRS)
Webb, G. M.; Axford, W. I.; Forman, M. A.
1985-01-01
Steady-state spherically symmetric analytic solutions of the cosmic-ray transport equations, applicable to the problem of acceleration of cosmic rays at the terminal shock of a stellar wind, are studied. The spectra, gradients, and flow patterns of particles modulated and accelerated by the stellar wind and shock are investigated by means of monoenergetic-source solutions at finite radius, as well as solutions with monoenergetic and power-law galactic spectra. On the basis of the calculations given, early-type stars could supply a significant fraction of the 3×10⁴⁰ erg/s required by galactic cosmic rays.
11. Hidden Cosmic-Ray Accelerators as an Origin of TeV-PeV Cosmic Neutrinos.
PubMed
Murase, Kohta; Guetta, Dafne; Ahlers, Markus
2016-02-19
The latest IceCube data suggest that the all-flavor cosmic neutrino flux may be as large as 10^{-7} GeV cm^{-2} s^{-1} sr^{-1} around 30 TeV. We show that, if sources of the TeV-PeV neutrinos are transparent to γ rays with respect to two-photon annihilation, strong tensions with the isotropic diffuse γ-ray background measured by Fermi are unavoidable, independently of the production mechanism. We further show that, if the IceCube neutrinos have a photohadronic (pγ) origin, the sources are expected to be opaque to 1-100 GeV γ rays. With these general multimessenger arguments, we find that the latest data suggest a population of cosmic-ray accelerators hidden in GeV-TeV γ rays as a neutrino origin. Searches for x-ray and MeV γ-ray counterparts are encouraged, and TeV-PeV neutrinos themselves will serve as special probes of dense source environments. PMID:26943524
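The multimessenger argument above rests on two kinematic thresholds (an editorial aside, not part of the record): photohadronic neutrino production and two-photon annihilation involve target photons of comparable energies, so a source efficient at pγ production of TeV-PeV neutrinos tends to be opaque to GeV-TeV γ rays. Schematically, for head-on collisions:

```latex
% Delta-resonance photoproduction (source of the neutrinos):
E_p\,\varepsilon_\gamma \gtrsim \frac{(m_\Delta^2 - m_p^2)c^4}{4} \approx 0.16\ \mathrm{GeV}^2,
\qquad
% two-photon pair production (source of gamma-ray opacity):
E_\gamma\,\varepsilon_\gamma \gtrsim (m_e c^2)^2 \approx 0.26\ \mathrm{MeV}^2.
```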
13. Secondary antiprotons - A valuable cosmic-ray probe
NASA Technical Reports Server (NTRS)
Steigman, G.
1977-01-01
Even in the absence of antiprotons in the primary cosmic rays, a flux of secondary antiprotons will be produced in collisions between cosmic rays and interstellar gas. The predicted antiproton fraction increases with increasing cosmic-ray confinement, so that observations of antiprotons will provide a probe of models of cosmic-ray confinement. It is shown that the expected antiproton fraction (for energies of at least about 10 GeV) ranges between 0.00023 for the 'leaky box' model and 0.0018 for the 'closed box' model. In addition, attention is called to the fact that a detection of cosmic-ray antiprotons at or above a level of 0.0002 will provide a valuable lower limit to the antiproton lifetime.
14. Xenia: A Probe of Cosmic Chemical Evolution
NASA Technical Reports Server (NTRS)
Kouveliotou, Chryssa; Piro, L.
2008-01-01
Xenia is a concept study for a medium-size astrophysical cosmology mission addressing the Cosmic Origins key objective of NASA's Science Plan. The fundamental goal of this objective is to understand the formation and evolution of structures on various scales from the early Universe to the present time (stars, galaxies and the cosmic web). Xenia will use X- and γ-ray monitoring, wide-field X-ray imaging, and high-resolution spectroscopy to collect essential information from three major tracers of these cosmic structures: the Warm Hot Intergalactic Medium (WHIM), galaxy clusters, and gamma-ray bursts (GRBs). Our goal is to trace the chemo-dynamical history of the ubiquitous warm hot diffuse baryon component in the Universe residing in cosmic filaments and clusters of galaxies up to its formation epoch (at z = 0-2) and to map star formation and galaxy metal enrichment into the re-ionization era beyond z ∼ 6. The concept of Xenia (Greek for "hospitality") evolved in parallel with the Explorer of Diffuse Emission and GRB Explosions (EDGE), a mission proposed by a multinational collaboration to the ESA Cosmic Vision 2015. Xenia incorporates the European and Japanese collaborators into a U.S.-led mission that builds on the scientific objectives and technological readiness of EDGE.
15. Cosmic microwave background probes models of inflation
NASA Technical Reports Server (NTRS)
Davis, Richard L.; Hodges, Hardy M.; Smoot, George F.; Steinhardt, Paul J.; Turner, Michael S.
1992-01-01
Inflation creates both scalar (density) and tensor (gravity wave) metric perturbations. We find that the tensor-mode contribution to the cosmic microwave background anisotropy on large angular scales can only exceed that of the scalar mode in models where the spectrum of perturbations deviates significantly from scale invariance. If the tensor mode dominates at large angular scales, then the value of ΔT/T predicted on 1° scales is less than if the scalar mode dominates, and, for cold-dark-matter models, bias factors greater than 1 can be made consistent with Cosmic Background Explorer (COBE) DMR results.
16. Cosmic-ray shock acceleration in oblique MHD shocks
NASA Technical Reports Server (NTRS)
Webb, G. M.; Drury, L. OC.; Volk, H. J.
1986-01-01
A one-dimensional, steady-state hydrodynamical model of cosmic-ray acceleration at oblique MHD shocks is presented. Upstream of the shock the incoming thermal plasma is subject to the adverse pressure gradient of the accelerated particles and the J × B force, as well as the thermal gas pressure gradient. The efficiency of the acceleration of cosmic rays at the shock as a function of the upstream magnetic field obliquity and upstream plasma beta is investigated. Astrophysical applications of the results are briefly discussed.
17. Angular Anisotropies in the Cosmic Gamma-Ray Background as a Probe of Its Origin
Miniati, Francesco; Koushiappas, Savvas M.; Di Matteo, Tiziana
2007-09-01
Notwithstanding the advent of the Gamma-ray Large Area Space Telescope, theoretical models predict that a significant fraction of the cosmic γ-ray background (CGB), at a level of 20% of the currently measured value, will remain unresolved. The angular power spectrum of intensity fluctuations of the CGB contains information on its origin. We show that probing the latter on scales from a few tens of arcminutes to several degrees, together with complementary GLAST observations of γ-ray emission from galaxy clusters and the blazar luminosity function, can discriminate between a background that originates from unresolved blazars or cosmic rays accelerated at structure formation shocks.
18. A cocoon of freshly accelerated cosmic rays detected by Fermi in the Cygnus superbubble.
PubMed
Ackermann, M; Ajello, M; Allafort, A; Baldini, L; Ballet, J; Barbiellini, G; Bastieri, D; Belfiore, A; Bellazzini, R; Berenji, B; Blandford, R D; Bloom, E D; Bonamente, E; Borgland, A W; Bottacini, E; Brigida, M; Bruel, P; Buehler, R; Buson, S; Caliandro, G A; Cameron, R A; Caraveo, P A; Casandjian, J M; Cecchi, C; Chekhtman, A; Cheung, C C; Chiang, J; Ciprini, S; Claus, R; Cohen-Tanugi, J; de Angelis, A; de Palma, F; Dermer, C D; do Couto E Silva, E; Drell, P S; Dumora, D; Favuzzi, C; Fegan, S J; Focke, W B; Fortin, P; Fukazawa, Y; Fusco, P; Gargano, F; Germani, S; Giglietto, N; Giordano, F; Giroletti, M; Glanzman, T; Godfrey, G; Grenier, I A; Guillemot, L; Guiriec, S; Hadasch, D; Hanabata, Y; Harding, A K; Hayashida, M; Hayashi, K; Hays, E; Jóhannesson, G; Johnson, A S; Kamae, T; Katagiri, H; Kataoka, J; Kerr, M; Knödlseder, J; Kuss, M; Lande, J; Latronico, L; Lee, S-H; Longo, F; Loparco, F; Lott, B; Lovellette, M N; Lubrano, P; Martin, P; Mazziotta, M N; McEnery, J E; Mehault, J; Michelson, P F; Mitthumsiri, W; Mizuno, T; Monte, C; Monzani, M E; Morselli, A; Moskalenko, I V; Murgia, S; Naumann-Godo, M; Nolan, P L; Norris, J P; Nuss, E; Ohsugi, T; Okumura, A; Orlando, E; Ormes, J F; Ozaki, M; Paneque, D; Parent, D; Pesce-Rollins, M; Pierbattista, M; Piron, F; Pohl, M; Prokhorov, D; Rainò, S; Rando, R; Razzano, M; Reposeur, T; Ritz, S; Parkinson, P M Saz; Sgrò, C; Siskind, E J; Smith, P D; Spinelli, P; Strong, A W; Takahashi, H; Tanaka, T; Thayer, J G; Thayer, J B; Thompson, D J; Tibaldo, L; Torres, D F; Tosti, G; Tramacere, A; Troja, E; Uchiyama, Y; Vandenbroucke, J; Vasileiou, V; Vianello, G; Vitale, V; Waite, A P; Wang, P; Winer, B L; Wood, K S; Yang, Z; Zimmer, S; Bontemps, S
2011-11-25
The origin of Galactic cosmic rays is a century-long puzzle. Indirect evidence points to their acceleration by supernova shockwaves, but we know little of their escape from the shock and their evolution through the turbulent medium surrounding massive stars. Gamma rays can probe their spreading through the ambient gas and radiation fields. The Fermi Large Area Telescope (LAT) has observed the star-forming region of Cygnus X. The 1- to 100-gigaelectronvolt images reveal a 50-parsec-wide cocoon of freshly accelerated cosmic rays that flood the cavities carved by the stellar winds and ionization fronts from young stellar clusters. It provides an example to study the youth of cosmic rays in a superbubble environment before they merge into the older Galactic population. PMID:22116880
19. Strangelets accelerated by pulsars in galactic cosmic rays
SciTech Connect
Cheng, K. S.; Usov, V. V.
2006-12-15
It is shown that nuggets of strange quark matter may be extracted from the surface of pulsars and accelerated by strong electric fields to high energies if pulsars are strange stars with crusts composed of nuggets embedded in a uniform electron background. Such high-energy nuggets, usually called strangelets, would make an observable contribution to galactic cosmic rays and may be detected by the upcoming cosmic-ray experiment Alpha Magnetic Spectrometer AMS-02 on the International Space Station.
20. Intergalactic shock acceleration and the cosmic gamma-ray background
Miniati, Francesco
2002-11-01
We investigate numerically the contribution to the cosmic gamma-ray background from cosmic-ray ions and electrons accelerated at intergalactic shocks associated with cosmological structure formation. We show that the kinetic energy of accretion flows in the low-redshift intergalactic medium is thermalized primarily through moderately strong shocks, which allow for an efficient conversion of shock ram pressure into cosmic-ray pressure. Cosmic rays accelerated at these shocks produce a diffuse gamma-ray flux which is dominated by inverse Compton emission from electrons scattering off cosmic microwave background photons. Decay of neutral π mesons generated in p-p inelastic collisions of the ionic cosmic-ray component with the thermal gas contributes about 30 per cent of the computed emission. Based on experimental upper limits on the photon flux above 100 MeV from nearby clusters, we constrain the efficiency of conversion of shock ram pressure into relativistic CR electrons to ≲1 per cent. Thus, we find that cosmic rays of cosmological origin can generate an overall significant fraction, of order 20 per cent and no more than 30 per cent, of the measured gamma-ray background.
1. X-ray Observations of Cosmic Ray Acceleration
NASA Technical Reports Server (NTRS)
Petre, Robert
2012-01-01
Since the discovery of cosmic rays, detection of their sources has remained elusive. A major breakthrough has come through the identification of synchrotron X-rays from the shocks of supernova remnants through imaging and spectroscopic observations by the most recent generation of X-ray observatories. This radiation is most likely produced by electrons accelerated to relativistic energy, and thus has offered the first, albeit indirect, observational evidence that diffusive shock acceleration in supernova remnants produces cosmic rays to TeV energies, possibly as high as the "knee" in the cosmic ray spectrum. X-ray observations have provided information about the maximum energy to which these shocks accelerate electrons, as well as indirect evidence of proton acceleration. Shock morphologies measured in X-rays have indicated that a substantial fraction of the shock energy can be diverted into particle acceleration. This presentation will summarize what we have learned about cosmic ray acceleration from X-ray observations of supernova remnants over the past two decades.
2. Multiwavelength Signatures of Cosmic Ray Acceleration by Young Supernova Remnants
SciTech Connect
Vink, Jacco
2008-12-24
An overview is given of multiwavelength observations of young supernova remnants, with a focus on the observational signatures of efficient cosmic ray acceleration. Some of the effects that may be attributed to efficient cosmic ray acceleration are the radial magnetic fields in young supernova remnants, magnetic field amplification as determined with X-ray imaging spectroscopy, evidence for large post-shock compression factors, and low plasma temperatures, as measured with high-resolution optical/UV/X-ray spectroscopy. Special emphasis is given to spectroscopy of post-shock plasmas, which offers an opportunity to directly measure the post-shock temperature. In the presence of efficient cosmic ray acceleration the post-shock temperatures are expected to be lower than predicted by the standard equations for a strong shock. For a number of supernova remnants this seems indeed to be the case.
3. Constraint on electromagnetic acceleration of highest energy cosmic rays.
PubMed
Medvedev, Mikhail V
2003-04-01
The energetics of electromagnetic acceleration of ultrahigh-energy cosmic rays (UHECRs) is constrained both by confinement of a particle within an acceleration site and by radiative energy losses of the particle in the confining magnetic fields. We demonstrate that the detection of approximately 3×10²⁰ eV events is inconsistent with the hypothesis that compact cosmic accelerators with high magnetic fields can be the sources of UHECRs. This rules out the most popular candidates, namely spinning neutron stars and active galactic nuclei (AGNs). Galaxy clusters and, perhaps, AGN radio lobes and gamma-ray burst blast waves remain the only possible (although not very strong) candidates for UHECR acceleration sites. Our analysis places no limit on linear accelerators. With data from the future Auger experiment one should be able to answer whether a conventional theory works or some new physics is required to explain the origin of UHECRs. PMID:12786427
4. Particle acceleration, transport and turbulence in cosmic and heliospheric physics
NASA Technical Reports Server (NTRS)
Matthaeus, W.
1992-01-01
In this progress report, the long term goals, recent scientific progress, and organizational activities are described. The scientific focus of this annual report is in three areas: first, the physics of particle acceleration and transport, including heliospheric modulation and transport, shock acceleration and galactic propagation and reacceleration of cosmic rays; second, the development of theories of the interaction of turbulence and large scale plasma and magnetic field structures, as in winds and shocks; third, the elucidation of the nature of magnetohydrodynamic turbulence processes and the role such turbulence processes might play in heliospheric, galactic, cosmic ray physics, and other space physics applications.
5. Probing cosmic strings with satellite CMB measurements
SciTech Connect
Jeong, E.; Baccigalupi, Carlo; Smoot, G. F.
2010-09-01
We study the problem of searching for cosmic string signal patterns in present high-resolution, high-sensitivity observations of the Cosmic Microwave Background (CMB). This article discusses a technique capable of recognizing Kaiser-Stebbins effect signatures in total intensity anisotropy maps from isolated strings. We derive the statistical distributions of null detections from purely Gaussian fluctuations and the instrumental performance of the operating satellites, and show that the biggest factor producing confusion is the acoustic oscillation features on scales comparable to the size of the horizon at recombination. Simulations show that the distribution of null detections converges to a χ² distribution, with a detectability threshold at 99% confidence level corresponding to a string-induced step signal with an amplitude of about 100 μK, which corresponds to a limit of roughly Gμ ∼ 1.5 × 10⁻⁶. We implement simulations for deriving the statistics of spurious detections caused by extra-Galactic and Galactic foregrounds. For diffuse Galactic foregrounds, which represent the dominant source of contamination, we construct sky masks outlining the available region of the sky where the Galactic confusion is sub-dominant, specializing our analysis to the case represented by the frequency coverage and nominal sensitivity and resolution of the Planck experiment. As for other CMB measurements, the maximum available area, corresponding to 7%, is reached where the foreground emission is expected to be minimum, in the 70-100 GHz interval.
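The quoted correspondence between a ~100 μK step and Gμ ∼ 1.5 × 10⁻⁶ follows from the Kaiser-Stebbins formula (an editorial aside, not part of the record). For a string moving with velocity βc and Lorentz factor γ transverse to the line of sight, and taking γβ ≈ 1 as a representative value:

```latex
\frac{\delta T}{T} = 8\pi G\mu\,\gamma\beta
\;\Longrightarrow\;
G\mu \sim \frac{1}{8\pi\,\gamma\beta}\,\frac{100\ \mu\mathrm{K}}{2.725\ \mathrm{K}}
\approx 1.5\times 10^{-6}.
```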
6. Constraining the Cosmic-ray Acceleration Efficiency in the Supernova Remnant IC 443
Ritchey, Adam Michael; Federman, Steven R.; Jenkins, Edward B.; Caprioli, Damiano; Wallerstein, George
2015-08-01
Supernova remnants are widely believed to be the sources responsible for the acceleration of Galactic cosmic rays. Over the last several years, observations made with the Fermi Gamma-ray Space Telescope have confirmed that cosmic-ray nuclei are indeed accelerated in some supernova remnants, including IC 443, which is a prototype for supernova remnants interacting with molecular clouds. However, the details of the particle acceleration processes in middle-aged remnants are not fully understood, in part because the basic model parameters are not always well constrained. Here, we present preliminary results of a Hubble Space Telescope investigation into the physical conditions in diffuse molecular gas interacting with IC 443. We examine high-resolution FUV spectra of two stars, one that probes the interior region of the supernova remnant, and the other located just outside the visible edge of IC 443. With this arrangement, we are able to evaluate the densities and temperatures in neutral gas clumps positioned both ahead of and behind the supernova shock front. From these measurements, we obtain estimates for the post-shock temperature and the shock velocity in the interclump medium. We discuss the efficacy of these results for constraining both the age of IC 443 and the cosmic-ray acceleration efficiency. Finally, we report the first detection of boron in a supernova remnant, and discuss the usefulness of the B/O ratio in constraining the cosmic-ray content of the gas interacting with IC 443.
7. On cosmic acceleration without dark energy
SciTech Connect
Kolb, E. W.; Matarrese, S.; Riotto, A.
2005-06-01
We elaborate on the proposal that the observed acceleration of the Universe is the result of the backreaction of cosmological perturbations, rather than the effect of a negative-pressure dark energy fluid or a modification of general relativity. Through the effective Friedmann equations describing an inhomogeneous Universe after smoothing, we demonstrate that acceleration in our local Hubble patch is possible even if fluid elements do not individually undergo accelerated expansion. This invalidates the no-go theorem that there can be no acceleration in our local Hubble patch if the Universe only contains irrotational dust. We then study perturbatively the time behavior of general-relativistic cosmological perturbations, applying, where possible, the renormalization group to regularize the dynamics. We show that an instability occurs in the perturbative expansion involving sub-Hubble modes, which indicates that acceleration in our Hubble patch may originate from the backreaction of cosmological perturbations on observable scales.
8. Probing exotic physics with cosmic neutrinos
SciTech Connect
Hooper, Dan; /Fermilab
2005-10-01
Traditionally, collider experiments have been the primary tool used in searching for particle physics beyond the Standard Model. In this talk, I will discuss alternative approaches for exploring exotic physics scenarios using high energy and ultra-high energy cosmic neutrinos. Such neutrinos can be used to study interactions at energies higher, and over baselines longer, than those accessible to colliders. In this way, neutrino astronomy can provide a window into fundamental physics which is highly complementary to collider techniques. I will discuss the role of neutrino astronomy in fundamental physics, considering the use of such techniques in studying several specific scenarios including low scale gravity models, Standard Model electroweak instanton induced interactions, decaying neutrinos and quantum decoherence.
9. Characterizing the Sites of Hadronic Cosmic Ray Acceleration
Pihlstrom, Ylva; Mesler, R.; Sjouwerman, L.; Frail, D.; Claussen, M.
2012-01-01
It has been argued that supernova remnant (SNR) shocks are the acceleration sites for Galactic cosmic rays. While this has been established for electrons, solid evidence that hadrons constitute the bulk of the cosmic rays has been lacking. Models of hadronic cosmic-ray acceleration in SNRs predict a gamma-ray flux density that depends on parameters like the environment density and distance, yet few reliable estimates of those parameters exist. SNRs in which cosmic rays interact with molecular clouds are expected to be bright gamma-ray sources, and these sites can be traced using 1720 MHz OH masers. The masers provide density information and kinematic distance estimates. Only 10% of Galactic SNRs harbor OH masers, and we have therefore searched for a more frequently occurring tracer of SNR/cloud interaction. We have detected 36 GHz and 44 GHz methanol masers associated with a few SNRs. Here we report the results of a search for methanol masers in 21 SNRs, and in particular the details of our detections in Sgr A East. By combining observations and modeling of methanol masers in SNRs, we aim to better constrain the density of and distance to SNRs with TeV emission. The goal is to test the hadronic cosmic-ray models and to understand the mechanisms of particle acceleration in SNRs. This project is supported under NASA-Fermi grant NNX10A055G.
10. Acceleration and propagation of solar cosmic rays
Podgorny, I. M.; Podgorny, A. I.
2015-12-01
Analysis of the solar cosmic ray measurements on the Geostationary Operational Environmental Satellite (GOES) spacecraft indicated that the duration of large pulses of relativistic protons from solar flares is comparable to the solar wind propagation time from the Sun to the Earth. The front of the proton flux from flares on the western solar disk reaches the Earth with a flight time along the Archimedean spiral magnetic field line of 15-20 min. The proton flux from eastern flares is registered at the Earth's orbit 3-5 h after the flare onset. These particles apparently propagate across the IMF owing to diffusion.
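The quoted flight times can be compared with a simple ballistic estimate along the Parker (Archimedean) spiral. The Python sketch below uses illustrative values for the solar wind speed, solar rotation rate, and proton speed, not values taken from the paper:

import math

AU = 1.496e11          # m
c = 2.998e8            # m/s
v_sw = 4.0e5           # m/s, assumed solar wind speed
omega = 2.7e-6         # rad/s, solar rotation rate

a = v_sw / omega       # spiral scale length v_sw / omega
x = AU / a
# arc length of the equatorial spiral: integral of sqrt(1 + (r/a)^2) dr
L = 0.5 * a * (x * math.sqrt(1.0 + x * x) + math.asinh(x))

beta = 0.875           # v/c of a ~1 GeV proton (assumed)
print("length = %.2f AU, flight time ~ %.0f min" % (L / AU, L / (beta * c) / 60))

This gives a field-line length of about 1.15 AU and a ballistic time of roughly 11 min; pitch-angle scattering and lower proton energies lengthen this toward the observed 15-20 min.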
11. Probing Inflation via Cosmic Microwave Background Polarimetry
NASA Technical Reports Server (NTRS)
Chuss, David T.
2008-01-01
The Cosmic Microwave Background (CMB) has been a rich source of information about the early Universe. Detailed measurements of its spectrum and spatial distribution have helped solidify the Standard Model of Cosmology. However, many questions still remain. Standard Cosmology does not explain why the early Universe is geometrically flat, expanding, homogeneous across the horizon, and riddled with a small anisotropy that provides the seed for structure formation. Inflation has been proposed as a mechanism that naturally solves these problems. In addition to solving these problems, inflation is expected to produce a spectrum of gravitational waves that will create a particular polarization pattern on the CMB. Detection of this polarized signal is a key test of inflation and will give a direct measurement of the energy scale at which inflation takes place. This polarized signature of inflation is expected to be ~9 orders of magnitude below the 2.7 K monopole level of the CMB. This measurement will require good control of systematic errors, an array of many detectors having the requisite sensitivity, a reliable method for removing polarized foregrounds, and nearly complete sky coverage. Ultimately, this measurement is likely to require a space mission. To this end, technology and mission concept development are currently underway.
12. Cosmic Accelerators: Engines of the Extreme Universe
SciTech Connect
Funk, Stefan
2009-06-23
The universe is home to numerous exotic and beautiful phenomena, some of which can generate almost inconceivable amounts of energy. While the night sky appears calm, it is populated by colossal explosions, jets from supermassive black holes, rapidly rotating neutron stars, and shock waves of gas moving at supersonic speeds. These accelerators in the sky boost particles to energies far beyond those we can produce on earth. New types of telescopes, including the Fermi Gamma-ray Space Telescope orbiting in space, are now discovering a host of new and more powerful accelerators. Please come and see how these observations are revising our picture of the most energetic phenomena in the universe.
13. Cosmic parallax as a probe of late time anisotropic expansion
SciTech Connect
Quercellini, Claudia; Cabella, Paolo; Balbi, Amedeo; Amendola, Luca
2009-09-15
Cosmic parallax is the change of angular separation between a pair of sources at cosmological distances induced by an anisotropic expansion. An accurate astrometric experiment like Gaia could observe or put constraints on cosmic parallax. Examples of anisotropic cosmological models are Lemaitre-Tolman-Bondi void models for off-center observers (introduced to explain the observed acceleration without the need for dark energy) and Bianchi metrics. If dark energy has an anisotropic equation of state, as suggested recently, then a substantial anisotropy could arise at z ≲ 1 and escape the stringent constraints from the cosmic microwave background. In this paper we show that such models could be constrained by the Gaia satellite or by an upgraded future mission.
14. Constraining the efficiency of cosmic ray acceleration by cluster shocks
Vazza, F.; Brüggen, M.; Wittor, D.; Gheller, C.; Eckert, D.; Stubbe, M.
2016-06-01
We study the acceleration of cosmic rays by collisionless structure formation shocks with ENZO grid simulations. Data from the Fermi satellite enable the use of galaxy clusters as a testbed for particle acceleration models. Based on advanced cosmological simulations that include different prescriptions for gas and cosmic-ray physics, we use the predicted γ-ray emission to constrain the shock acceleration efficiency. We infer that the efficiency must be on average ≤ 10^-3 for cosmic shocks, particularly for the M ∼ 2-5 merger shocks that are mostly responsible for the thermalization of the intracluster medium (ICM). These results emerge both from non-radiative and radiative runs including feedback from active galactic nuclei, and from zoomed resimulations of a cluster resembling MACSJ1752.0+0440. The limit on the acceleration efficiency we report is lower than what has been assumed in the literature so far. Combined with the information from radio emission in clusters, it appears that a revision of the present understanding of shock acceleration in the ICM is unavoidable.
15. Magnetowave induced plasma wakefield acceleration for ultrahigh energy cosmic rays.
PubMed
Chang, Feng-Yin; Chen, Pisin; Lin, Guey-Lin; Noble, Robert; Sydora, Richard
2009-03-20
Magnetowave induced plasma wakefield acceleration (MPWA) in a relativistic astrophysical outflow has been proposed as a viable mechanism for the acceleration of cosmic particles to ultrahigh energies. Here we present simulation results that clearly demonstrate the viability of this mechanism for the first time. We invoke the high frequency and high speed whistler mode for the driving pulse. The plasma wakefield obtained in the simulations compares favorably with our newly developed relativistic theory of the MPWA. We show that, under appropriate conditions, the plasma wakefield maintains very high coherence and can sustain high-gradient acceleration over hundreds of plasma skin depths. Invoking active galactic nuclei as the site, we show that MPWA production of ultrahigh energy cosmic rays beyond ZeV (10^21 eV) is possible. PMID:19392185
16. An overview of cosmic ray research - Composition, acceleration and propagation
NASA Technical Reports Server (NTRS)
Wefel, John P.
1988-01-01
An overview of cosmic ray (CR) research and its relationship to other areas of high-energy astrophysics is presented. Research being conducted on the composition of cosmic rays (CRs) is examined, including the study of the solar system 'template' for CRs, CR abundances at earth, solar energetic particles, the CR elements beyond zinc, and the study of electrons, positrons, antinuclei, and of isotopic composition of CRs. Research on the CR energy spectrum and anisotropy is briefly reviewed. The study of acceleration processes, particle confinement, and propagation of CRs is addressed. Finally, the investigation of source abundances in CRs is discussed.
17. Holographic dark energy and late cosmic acceleration
Pavón, Diego
2007-06-01
It has been persuasively argued that the number of effective degrees of freedom of a macroscopic system is proportional to its area rather than to its volume. This entails interesting consequences for cosmology. Here we present a model based on this 'holographic principle' that accounts for the present stage of accelerated expansion of the Universe and significantly alleviates the coincidence problem also for non-spatially flat cosmologies. Likewise, we comment on a recently proposed late transition to a fresh decelerated phase.
18. Cosmic acceleration and the helicity-0 graviton
SciTech Connect
2011-05-15
We explore cosmology in the decoupling limit of a nonlinear covariant extension of Fierz-Pauli massive gravity obtained recently in arXiv:1007.0443. In this limit the theory is a scalar-tensor model of a unique form defined by symmetries. We find that it admits a self-accelerated solution, with the Hubble parameter set by the graviton mass. The negative pressure causing the acceleration is due to a condensate of the helicity-0 component of the massive graviton, and the background evolution, in the approximation used, is indistinguishable from the {Lambda}CDM model. Fluctuations about the self-accelerated background are stable for a certain range of parameters involved. Most surprisingly, the fluctuation of the helicity-0 field above its background decouples from an arbitrary source in the linearized theory. We also show how massive gravity can remarkably screen an arbitrarily large cosmological constant in the decoupling limit, while evading issues with ghosts. The obtained static solution is stable against small perturbations, suggesting that the degravitation of the vacuum energy is possible in the full theory. Interestingly, however, this mechanism postpones the Vainshtein effect to shorter distance scales. Hence, fifth force measurements severely constrain the value of the cosmological constant that can be neutralized, making this scheme phenomenologically not viable for solving the old cosmological constant problem. We briefly speculate on a possible way out of this issue.
19. Cosmic bullets as particle accelerators and radio sources
NASA Technical Reports Server (NTRS)
Jones, T. W.; Kang, Hyesung; Tregillis, I. L.
1994-01-01
We have simulated in two dimensions the dynamical evolution of dense gas clouds ('cosmic bullets') moving supersonically through a uniform low-density medium. The diffusive shock acceleration of relativistic protons (cosmic rays) and their dynamical feedback on the background flow are included via the two-fluid model for this process. The acceleration of relativistic electrons is approximated by a test-particle model, and a passive magnetic field is followed by a simple advection scheme. Strong bow shocks, with Mach numbers similar to that of a bullet's motion, are the most important particle accelerators in the flow, while tail shocks and shocks inside the bullets generally do not play significant roles in this regard. For our simulation parameters, ≳ 10% of the initial bullet kinetic energy is converted to a combination of internal energy of gas and cosmic-ray protons by the time the bullets begin to be disrupted. Characteristically, the cosmic rays gain several percent of the available kinetic energy. Bullet destruction, on timescales only a little larger than the ram-pressure bullet-crushing time, begins in response to Kelvin-Helmholtz and especially Rayleigh-Taylor instabilities along the forward bullet surface. For dense bullets this happens before the bullet is stopped by ram pressure. According to our simple model for synchrotron emission from relativistic electrons accelerated and transported within the flows, that emission increases rapidly as the bullet begins to fragment, when it is strongly dominated by field enhancement in sheared flows. Synchrotron emission from the acceleration region within the bow shock is, by contrast, much weaker.
20. Cosmic ray acceleration by spiral shocks in the galactic wind
Völk, H. J.; Zirakashvili, V. N.
2004-04-01
Cosmic ray acceleration by shocks associated with Slipping Interaction Regions (SIRs) in the Galactic Wind is considered. SIRs are similar to Solar Wind Corotating Interaction Regions. The spiral structure of our Galaxy results in a strong nonuniformity of the Galactic Wind flow and in SIR formation at distances of 50 to 100 kpc. SIRs are not corotating with the gas and magnetic field because the angular velocity of the spiral pattern differs from that of the Galactic rotation. It is shown that the collective reacceleration of cosmic ray particles with charge Ze in the resulting shock ensemble can explain the observable cosmic ray spectrum beyond the 'knee' up to energies of the order of 10^17 Z eV. For the reaccelerated particles the Galactic Wind termination shock acts as a reflecting boundary.
1. Cosmic Ray Acceleration in Force Free Fields
Colgate, Stirling; Li, Hui; Kronberg, Philipp
2002-11-01
Galactic, extragalactic, and cluster magnetic fields are in apparent pressure equilibrium with the in-fall pressure of matter from the external medium, the IGM, onto the galaxies and clusters, and from the voids onto the galaxy sheets (walls), implying fields of 5, 0.5, and 20 μG respectively. Equipartition, or minimum energy, implies β_CR = n_CR m_p c^2/(B^2/8π) ≈ 1. The total energy in field and CRs is then ≈ 10^55 erg Galactic and ≈ 4 × 10^60 erg per galaxy in the IGM, and less within clusters, e.g., radio lobes, synchrotron "glow" in the IGM (Kronberg), and the UHECR spectrum, Γ = -2.6. CRs escape from the Galaxy to the IGM, τ ≈ 10^7 yr, and similarly from the walls to the voids, τ ≈ 10^8 yr, less than the GZK cut-off time provided B_galaxy > B_IGM > B_voids. The free energy of black hole formation (the Los Alamos model) is just sufficient. The lack of shocks at the boundaries of over-pressured radio lobes and the need for high acceleration efficiency suggest eE_∥ ≈ eη_recon J_∥, i.e., acceleration by reconnection of these force-free fields.
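As a quick numerical check of the β_CR ≈ 1 equipartition statement, the magnetic energy density of a 5 μG Galactic field can be compared with the commonly quoted ~1 eV cm^-3 Galactic cosmic-ray energy density (the latter is a standard reference value, not a number from this abstract):

import math

B = 5e-6                          # gauss
u_B = B**2 / (8.0 * math.pi)      # magnetic energy density, erg/cm^3
erg_to_eV = 6.242e11
print("u_B ~ %.2f eV/cm^3" % (u_B * erg_to_eV))   # ~0.62 eV/cm^3, of order the CR value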
2. Cosmic acceleration without dark energy: background tests and thermodynamic analysis
SciTech Connect
Lima, J.A.S.; Graef, L.L.; Pavón, D.; Basilakos, Spyros
2014-10-01
A cosmic scenario with gravitationally induced particle creation is proposed. In this model the Universe evolves from an early to a late time de Sitter era, with the recent accelerating phase driven only by the negative creation pressure associated with the cold dark matter component. The model can be interpreted as an attempt to reduce the so-called cosmic sector (dark matter plus dark energy) and relate the two cosmic accelerating phases (early and late time de Sitter expansions). A detailed thermodynamic analysis including possible quantum corrections is also carried out. For a very wide range of the free parameters, it is found that the model presents the expected behavior of an ordinary macroscopic system in the sense that it approaches thermodynamic equilibrium in the long run (i.e., as it nears the second de Sitter phase). Moreover, an upper bound is found for the Gibbons–Hawking temperature of the primordial de Sitter phase. Finally, when confronted with the recent observational data, the current 'quasi'-de Sitter era, as predicted by the model, is seen to pass very comfortably the cosmic background tests.
3. Cosmic acceleration without dark energy: background tests and thermodynamic analysis
Lima, J. A. S.; Graef, L. L.; Pavón, D.; Basilakos, Spyros
2014-10-01
A cosmic scenario with gravitationally induced particle creation is proposed. In this model the Universe evolves from an early to a late time de Sitter era, with the recent accelerating phase driven only by the negative creation pressure associated with the cold dark matter component. The model can be interpreted as an attempt to reduce the so-called cosmic sector (dark matter plus dark energy) and relate the two cosmic accelerating phases (early and late time de Sitter expansions). A detailed thermodynamic analysis including possible quantum corrections is also carried out. For a very wide range of the free parameters, it is found that the model presents the expected behavior of an ordinary macroscopic system in the sense that it approaches thermodynamic equilibrium in the long run (i.e., as it nears the second de Sitter phase). Moreover, an upper bound is found for the Gibbons-Hawking temperature of the primordial de Sitter phase. Finally, when confronted with the recent observational data, the current 'quasi'-de Sitter era, as predicted by the model, is seen to pass very comfortably the cosmic background tests.
4. Connecting inflation with late cosmic acceleration by particle production
Nunes, Rafael C.
2016-04-01
A continuous process of particle creation is investigated as a possible connection between the inflationary stage and the late cosmic acceleration. In this model, the inflationary era occurs due to a continuous and fast process of creation of relativistic particles, and the recent accelerating phase is driven by nonrelativistic matter creation from the gravitational field acting on the quantum vacuum, which finally results in an effective equation of state (EoS) less than -1. The model thus explains recent results in favor of a phantom dynamics without requiring any modification of the theory of gravity. Finally, we confront the model with recent observational data on Type Ia supernovae, the history of the Hubble parameter, baryon acoustic oscillations (BAOs) and the cosmic microwave background (CMB).
5. Interstellar Mapping and Acceleration Probe (IMAP)
2016-04-01
Our piece of cosmic real-estate, the heliosphere, is the domain of all human existence - an astrophysical case-history of the successful evolution of life in a habitable system. By exploring our global heliosphere and its myriad interactions, we develop key physical knowledge of the interstellar interactions that influence exoplanetary habitability as well as the distant history and destiny of our solar system and world. IBEX was the first mission to explore the global heliosphere and, in concert with Voyager 1 and Voyager 2, is discovering a fundamentally new and uncharted physical domain of the outer heliosphere. In parallel, Cassini/INCA maps the global heliosphere at energies (~5-55 keV) above those measured by IBEX. The enigmatic IBEX ribbon and the INCA belt were unanticipated discoveries demonstrating that much of what we know or think we understand about the outer heliosphere needs to be revised. The next quantum leap enabled by IMAP will open new windows on the frontier of Heliophysics at a time when the space environment is rapidly evolving. IMAP, with 100 times the combined resolution and sensitivity of IBEX and INCA, will discover the substructure of the IBEX ribbon and will reveal global maps of our heliosphere in unprecedented resolution. The remarkable synergy between IMAP, Voyager 1 and Voyager 2 will remain for at least the next decade as Voyager 1 pushes further into the interstellar domain and Voyager 2 moves through the heliosheath. The "A" in IMAP refers to acceleration of energetic particles. With its combination of highly sensitive pickup and suprathermal ion sensors, IMAP will provide the species and spectral coverage as well as unprecedented temporal resolution to associate emerging suprathermal tails with interplanetary structures and discover the underlying physical acceleration processes. These key measurements will provide what has been a critical missing piece, the suprathermal seed particles, in our understanding of particle acceleration to high energies.
6. SPECTRUM OF GALACTIC COSMIC RAYS ACCELERATED IN SUPERNOVA REMNANTS
SciTech Connect
2010-07-20
The spectra of high-energy protons and nuclei accelerated by supernova remnant (SNR) shocks are calculated, taking into account magnetic field amplification and Alfvénic drift both upstream and downstream of the shock for different types of SNRs during their evolution. The maximum energy of accelerated particles may reach 5 × 10^18 eV for Fe ions in Type IIb SNRs. The calculated energy spectrum of cosmic rays after propagation through the Galaxy is in good agreement with the spectrum measured at the Earth.
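If the cutoff is rigidity-limited, as is standard for shock acceleration, the maximum energy scales with charge Z. A minimal sketch using the Fe value quoted above (the per-species scaling is our assumption, not an explicit result of the abstract):

E_max_Fe = 5e18     # eV, for Fe (Z = 26) in Type IIb SNRs, from the abstract
E_per_Z = E_max_Fe / 26.0
for name, Z in [("H", 1), ("He", 2), ("O", 8), ("Fe", 26)]:
    print("%-2s E_max ~ %.1e eV" % (name, Z * E_per_Z))   # H comes out near 2e17 eV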
7. Is Cosmic Acceleration Telling Us Something About Gravity?
ScienceCinema
Trodden, Mark [Syracuse University, Syracuse, New York, United States]
2009-09-01
Among the possible explanations for the observed acceleration of the universe, perhaps the boldest is the idea that new gravitational physics might be the culprit. In this colloquium I will discuss some of the challenges of constructing a sensible phenomenological extension of General Relativity, give examples of some candidate models of modified gravity and survey existing observational constraints on this approach. I will conclude by discussing how we might hope to distinguish between modifications of General Relativity and dark energy as competing hypotheses to explain cosmic acceleration.
8. Dark matter and cosmic acceleration from Wesson's IMT
Israelit, Mark
2009-12-01
In the present work a procedure is built up that allows one to obtain dark matter (DM) and cosmic acceleration in our 4D universe embedded in a 5D manifold. Both DM and the factor causing cosmic acceleration, as well as ordinary matter, are induced in the 4D space-time by a warped 5D bulk that is empty of matter. The procedure is carried out in the framework of the Weyl-Dirac version (Israelit, Found Phys 35:1725, 2005; Israelit, Found Phys 35:1769, 2005) of Paul Wesson's Induced Matter Theory (Wesson, Space-time matter, 1999), enriched by Rosen's approach (Found Phys 12:213, 1982). Considering chaotically oriented Weyl vector fields, which exist in microscopic cells, we obtain cold dark matter (CDM) consisting of weylons, massive bosons having spin 1. Assuming homogeneity and isotropy at large scales, we derive cosmological equations in which luminous matter, CDM and dark energy may be considered separately. Making use of present observational data in the given procedure, one can develop a model of the Universe with conventional matter, DM and cosmic acceleration, induced by the 5D bulk.
9. Cosmic ray drift, shock wave acceleration and the anomalous component of cosmic rays
NASA Technical Reports Server (NTRS)
Pesses, M. E.; Jokipii, J. R.; Eichler, D.
1981-01-01
A model of the anomalous component of the quiet-time cosmic ray flux is presented in which ex-interstellar neutral particles are accelerated continuously in the polar regions of the solar-wind termination shock, and then drift into the equatorial regions of the inner heliosphere. The observed solar-cycle variations, radial gradient, and apparent latitude gradient of the anomalous component are a natural consequence of this model.
10. Cosmic acceleration of Earth and the Moon by dark matter
NASA Technical Reports Server (NTRS)
Nordtvedt, Kenneth L.
1994-01-01
In order to test the hypothesis that the gravitational interaction between our Galaxy's dark matter and the ordinary matter in Earth and the Moon might not fulfill the equivalence principle (universality of free fall), we consider the pertinent perturbation of the lunar orbit -- a sidereal month period range oscillation resulting from a spatially fixed polarization of the orbit. Lunar laser ranging (LLR) data can measure this sidereal perturbation to an accuracy equal to or better than its existing measurement of the synodic month period range oscillation amplitude (+/- 3 cm), which has been used for testing whether Earth and the Moon accelerate at equal rates toward the Sun. Because of the slow precession rate of the Moon's perigee (8.9 yr period), the lunar orbit is particularly sensitive to a cosmic acceleration; the LLR fit of the orbit places an upper limit of 10^-13 cm s^-2 for any cosmic differential acceleration between Earth (Fe) and the Moon (silicates). This is 10^-5 of the total galactic acceleration of the solar system, of which, it has been suggested, a large portion is produced by dark matter.
11. Cosmic acceleration in a model of fourth order gravity
Banerjee, Shreya; Jayswal, Nilesh; Singh, Tejinder P.
2015-10-01
We investigate a fourth order model of gravity, having a free length parameter, and no cosmological constant or dark energy. We consider cosmological evolution of a flat Friedmann universe in this model for the case that the length parameter is of the order of the present Hubble radius. By making a suitable choice for the present value of the Hubble parameter, and the value of the third derivative of the scale factor (the jerk), we find that the model can explain cosmic acceleration to the same degree of accuracy as the standard concordance model. If the free length parameter is assumed to be time dependent, and of the order of the Hubble parameter of the corresponding epoch, the model can still explain cosmic acceleration, and provides a possible resolution of the cosmic coincidence problem. We work out the effective equation of state, and its time evolution, in our model. The fourth order correction terms are proportional to the metric, and hence mimic the cosmological constant. We also compare redshift drift in our model, with that in the standard model. The equation of state and the redshift drift serve to discriminate our model from the standard model.
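For reference, the deceleration and jerk parameters mentioned above are the standard dimensionless derivatives of the scale factor a(t),

q = -\frac{\ddot{a}\,a}{\dot{a}^2}, \qquad j = \frac{\dddot{a}\,a^2}{\dot{a}^3},

and flat ΛCDM has j = 1 at all epochs, which is why the jerk is a convenient quantity for discriminating such models from the standard one.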
12. Cosmic Shear as a Probe of Galaxy Formation Physics
Foreman, Simon; Becker, Matthew R.; Wechsler, Risa H.
2016-09-01
We evaluate the potential for current and future cosmic shear measurements from large galaxy surveys to constrain the impact of baryonic physics on the matter power spectrum. We do so using a model-independent parameterization that describes deviations of the matter power spectrum from the dark-matter-only case as a set of principal components that are localized in wavenumber and redshift. We perform forecasts for a variety of current and future datasets, and find that at least ˜90% of the constraining power of these datasets is contained in no more than nine principal components. The constraining power of different surveys can be quantified using a figure of merit defined relative to currently available surveys. With this metric, we find that the final Dark Energy Survey dataset (DES Y5) and the Hyper Suprime Cam Survey will be roughly an order of magnitude more powerful than existing data in constraining baryonic effects. Upcoming Stage IV surveys (LSST, Euclid, and WFIRST) will improve upon this by a further factor of a few. We show that this conclusion is robust to marginalization over several key systematics. The ultimate power of cosmic shear to constrain galaxy formation is dependent on understanding systematics in the shear measurements at small (sub-arcminute) scales. If these systematics can be sufficiently controlled, cosmic shear measurements from DES Y5 and other future surveys have the potential to provide a very clean probe of galaxy formation and to strongly constrain a wide range of predictions from modern hydrodynamical simulations.
13. Toward a Direct Measurement of the Cosmic Acceleration
Darling, Jeremiah K.
2013-01-01
We present precise redshift measurements and place model-free constraints on cosmic acceleration and the acceleration of the Solar System in a universal context. HI 21 cm absorption lines observed over multiple epochs can constrain the secular redshift drift or the proper acceleration, Δv/Δt_o, with high precision. A comparison of literature analog spectra to contemporary digital spectra shows significant acceleration almost certainly attributable to systematic instrumental or calibration errors. However, robust constraints have been obtained by using digital data from a single telescope, the Green Bank Telescope. An ensemble of 10 objects spanning z = 0.09-0.69 observed over 13.5 years shows Δz/Δt_o = (-1.8 ± 1.2) × 10^-8 yr^-1 or Δv/Δt_o = -4.0 ± 3.0 m s^-1 yr^-1. The best constraint from a single object, 3C286 at ⟨z⟩ = 0.692153275(85), is dz/dt_o = (1.5 ± 4.7) × 10^-8 yr^-1 or Δv/Δt_o = 2.7 ± 8.4 m s^-1 yr^-1. These measurements are three orders of magnitude larger than the expected acceleration in the concordance dark energy cosmology at z = 0.5, Δz/Δt_o = 2 × 10^-11 yr^-1 or Δv/Δt_o = 0.3 cm s^-1 yr^-1, but they demonstrate a lack of secular redshift drift in absorption line systems and the long-term frequency stability of modern radio telescopes. This measurement likewise constrains the barycentric proper acceleration in a cosmological reference frame (as opposed to the Galactic pulsar-defined reference frame), but currently lacks the precision of quasar proper motion observations. A comparison of rest-frame UV metal absorption lines to the HI 21 cm line places improved constraints on the cosmic variation of physical constants: Δ(α^2 g_p μ)/(α^2 g_p μ) = (-3.5 ± 1.4) × 10^-6 in the redshift range z = 0.24-2.04, consistent with no variation. We estimate that the cosmic acceleration could be directly measured with this technique in about 300 years using modern telescopes or in about 12 years using a Square Kilometer Array, provided that new systematic effects do not arise.
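The concordance-model expectation quoted above follows from the Sandage-Loeb relation dz/dt_o = (1+z)H_0 - H(z). A minimal Python sketch with illustrative flat ΛCDM parameters (H_0 = 70 km/s/Mpc, Ω_m = 0.3; not necessarily the values used in the paper):

import math

H0 = 70.0                # km/s/Mpc
Om = 0.3
PER_YR = 1.0227e-12      # 1 km/s/Mpc expressed in 1/yr

def H(z):
    # Hubble rate in flat LCDM
    return H0 * math.sqrt(Om * (1.0 + z)**3 + (1.0 - Om))

z = 0.5
zdot = ((1.0 + z) * H0 - H(z)) * PER_YR
print("dz/dt at z=0.5 ~ %.1e per yr" % zdot)   # ~1.4e-11 /yr, of order the quoted 1e-11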
14. ACCELERATION OF GALACTIC COSMIC RAYS IN THE INTERSTELLAR MEDIUM
SciTech Connect
Fisk, L. A.; Gloeckler, G.
2012-01-10
Challenges have arisen to diffusive shock acceleration as the primary means to accelerate galactic cosmic rays (GCRs) in the interstellar medium. Diffusive shock acceleration is also under challenge in the heliosphere, where at least the simple application of diffusive shock acceleration cannot account for observations. In the heliosphere, a new acceleration mechanism has been invented: a pump mechanism, driven by ambient turbulence, in which particles are pumped up in energy out of a low-energy core particle population through a series of adiabatic compressions and expansions. This mechanism can account for observations not only at shocks but also in quiet conditions in the solar wind and throughout the heliosheath. In this paper, the pump mechanism is applied to the acceleration of GCRs in the interstellar medium. With relatively straightforward assumptions about the magnetic field in the interstellar medium, and how GCRs propagate in this field, the pump mechanism yields (1) the overall shape of the GCR spectrum, a power law in particle kinetic energy, with a break at the so-called knee in the GCR spectrum to a slightly steeper power-law spectrum, and (2) the rigidity dependence of the H/He ratio observed by the PAMELA satellite instrument.
15. Nuclear Fusion Drives Present-Day Accelerated Cosmic Expansion
SciTech Connect
Ying, Leong
2010-09-30
The widely accepted model of our cosmos is that it began from a Big Bang event some 13.7 billion years ago from a single point source. From a twin universe perspective, the standard stellar model of nuclear fusion can account for the Dark Energy needed to explain the mechanism for our present-day accelerated expansion. The same theories can also be used to account for the rapid inflationary expansion at the earliest time of creation, and predict the future cosmic expansion rate.
16. TOWARD A DIRECT MEASUREMENT OF THE COSMIC ACCELERATION
SciTech Connect
Darling, Jeremy
2012-12-20
We present precise H I 21 cm absorption line redshifts observed in multiple epochs to directly constrain the secular redshift drift ż or the cosmic acceleration, Δv/Δt_∘. A comparison of literature analog spectra to contemporary digital spectra shows significant acceleration likely attributable to systematic instrumental errors. However, we obtain robust constraints using primarily Green Bank Telescope digital data. Ten objects spanning z = 0.09-0.69 observed over 13.5 years show ż = (-2.3 ± 0.8) × 10^-8 yr^-1 or Δv/Δt_∘ = -5.5 ± 2.2 m s^-1 yr^-1. The best constraint from a single object, 3C 286 at ⟨z⟩ = 0.692153275(85), is ż = (1.6 ± 4.7) × 10^-8 yr^-1 or Δv/Δt_∘ = 2.8 ± 8.4 m s^-1 yr^-1. These measurements are three orders of magnitude larger than the theoretically expected acceleration at z = 0.5, ż = 2 × 10^-11 yr^-1 or Δv/Δt_∘ = 0.3 cm s^-1 yr^-1, but they demonstrate the lack of peculiar acceleration in absorption line systems and the long-term frequency stability of modern radio telescopes. A comparison of UV metal absorption lines to the 21 cm line improves constraints on the cosmic variation of physical constants: Δ(α^2 g_p μ)/(α^2 g_p μ) = (-1.2 ± 1.4) × 10^-6 in the redshift range z = 0.24-2.04. The linear evolution over the last 10.4 Gyr is (-0.2 ± 2.7) × 10^-16 yr^-1, consistent with no variation. The cosmic acceleration could be directly measured in ~125 years using current telescopes or in ~5 years using a Square Kilometer Array, but systematic effects will arise at the 1 cm s^-1 yr^-1 level.
17. Acceleration of cosmic rays in supernova-remnants
NASA Technical Reports Server (NTRS)
Dorfi, E. A.; Drury, L. O.
1985-01-01
It is commonly accepted that supernova explosions are the dominant source of cosmic rays up to an energy of 10^14 eV/nucleon. Moreover, these high energy particles provide a major contribution to the energy density of the interstellar medium (ISM) and should therefore be included in calculations of interstellar dynamic phenomena. In the following, the first-order Fermi mechanism at shock waves is considered to be the main acceleration mechanism. The influence of this process is twofold: first, if the process is efficient (and in fact this is the case) it will modify the dynamics and evolution of a supernova remnant (SNR), and secondly, the existence of a significant high energy component changes the overall picture of the ISM. The complexity of the underlying physics has prevented detailed investigations of the full nonlinear self-consistent problem. For example, in the context of the energy balance of the ISM it has not been investigated how much energy of a SN explosion can be transferred to cosmic rays in a time-dependent self-consistent model. Nevertheless, much progress has been made on many aspects of the acceleration mechanism.
18. Cosmic rays as probes of atmospheric electric fields
Scholten, O.; Trinh, G. T. N.; Schellart, P.; Ebert, U.; Rutjes, C.; Nelles, A.; Buitink, S.; ter Veen, S.; Horandel, J.; Corstanje, A.; Rachen, J. P.; Thoudam, S.; Falcke, H.; Koehn, C. C.; van den Berg, A. A. M.; de Vries, K. K. D.; Rossetto, L.
2015-12-01
Energetic cosmic rays impinging on the atmosphere create a particle avalanche called an extensive air shower. In the leading plasma of this shower, electric currents are induced that generate radio waves, which have been detected with LOFAR, a large and dense array of simple antennas primarily developed for radio-astronomy observations. LOFAR has observed air showers under fair-weather conditions as well as under atmospheric conditions where thunderstorms occur. For air showers under fair-weather conditions, the intensity as well as the polarization of the radio emission can be understood rather accurately from present models. For air showers measured under thunderstorm conditions, we observe large differences in the intensity and polarization patterns from the fair-weather models. We show that the linear as well as the circular polarization of the radio waves carry clear information on the orientation of the electric fields at different heights in the thunderstorm clouds. We show, for the first time, that the circular polarization of the radio waves carries information on how the orientation of the fields changes with altitude, and that the thunderstorm electric fields can be reconstructed from the data measured at LOFAR. We have thus established the measurement of radio emission from extensive air showers induced by cosmic rays as a new tool to probe the atmospheric electric fields present in thunderclouds in a non-intrusive way.
19. Nuclear Effects of Supernova-Accelerated Cosmic Rays on Early Solar System Planetary Bodies
Meyer, B. S.; The, L.-S.; Johnson, J.
2008-03-01
The solar system apparently formed in the neighborhood of massive stars. Supernova explosions of these stars accelerate cosmic rays to hundreds of TeV. These cosmic rays could accelerate the beta decay of certain radioactive species in meteorite parent bodies.
20. "Espresso" Acceleration of Ultra-high-energy Cosmic Rays
Caprioli, Damiano
2015-10-01
We propose that ultra-high-energy (UHE) cosmic rays (CRs) above 10^18 eV are produced in relativistic jets of powerful active galactic nuclei via an original mechanism, which we dub “espresso” acceleration: “seed” galactic CRs with energies ≲ 10^17 eV that penetrate the jet sideways receive a “one-shot” boost of a factor of ∼Γ^2 in energy, where Γ is the Lorentz factor of the relativistic flow. For typical jet parameters, a few percent of the CRs in the host galaxy can undergo this process, and powerful blazars with Γ ≳ 30 may accelerate UHECRs up to more than 10^20 eV. The chemical composition of espresso-accelerated UHECRs is determined by that at the Galactic CR knee and is expected to be proton-dominated at 10^18 eV and increasingly heavy at higher energies, in agreement with recent observations made at the Pierre Auger Observatory.
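The headline numbers follow directly from the one-shot boost E_out ~ Γ^2 E_seed; a trivial check using the abstract's own values:

gamma = 30.0       # jet Lorentz factor quoted above
E_seed = 1e17      # eV, seed galactic CR energy near the knee-to-ankle range
print("E_out ~ %.1e eV" % (gamma**2 * E_seed))   # ~9e19 eV, i.e. ~10^20 eV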
1. A Comprehensive Investigation on the Slowing Down of Cosmic Acceleration
Wang, Shuang; Hu, Yazhou; Li, Miao; Li, Nan
2016-04-01
Shafieloo et al. first proposed the possibility that the current cosmic acceleration (CA) is slowing down. However, this is rather counterintuitive because a slowing down CA cannot be accommodated in most mainstream cosmological models. In this work, by exploring the evolutionary trajectories of the dark energy equation of state w(z) and deceleration parameter q(z), we present a comprehensive investigation on the slowing down of CA from both the theoretical and the observational sides. For the theoretical side, we study the impact of different w(z) using six parametrization models, and then we discuss the effects of spatial curvature. For the observational side, we investigate the effects of different type Ia supernovae (SNe Ia), baryon acoustic oscillation (BAO), and cosmic microwave background (CMB) data. We find that (1) the evolution of CA is insensitive to the specific form of w(z); in contrast, a non-flat universe favors a slowing down CA more than a flat universe. (2) SNLS3 SNe Ia data sets favor a slowing down CA at a 1σ confidence level, while JLA SNe Ia samples prefer an eternal CA; in contrast, the effects of different BAO data are negligible. (3) Compared with CMB distance prior data, full CMB data favor a slowing down CA more. (4) Due to the low significance, the slowing down of CA is still a theoretical possibility that cannot be confirmed by the current observations.
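For orientation, the deceleration parameter whose trajectory is being reconstructed here is

q(z) \equiv -\frac{\ddot{a}}{aH^2} = -1 + (1+z)\frac{H'(z)}{H(z)},

so a slowing down of the cosmic acceleration corresponds to q(z) passing through a minimum at low redshift and increasing again toward z = 0.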
2. Ringlike inelastic events in cosmic rays and accelerators
NASA Technical Reports Server (NTRS)
Dremin, I. M.; Orlov, A. M.; Tretyakova, M. I.
1985-01-01
In cosmic rays and in accelerators, single inelastic processes have been observed with densely produced (azimuthally isotropic) groups of particles exhibiting spikes in the pseudorapidity plot of an individual event (i.e., ringlike events). Theoretically, the existence of such processes was predicted as a consequence of Cerenkov gluon radiation or, more generally, of deconfinement radiation. To date, some tens of such events have been accumulated at 400 GeV and at 150 TeV. Analysis of ringlike events in proton-nucleon interactions at 400 GeV/c shows that they exhibit a striking irregularity in the positions of the pseudorapidity spike centers, which tend to lie mostly at 55, 90 and 125 deg in the c.m.s. This implies rather small deconfinement lengths, of the order of a few fermi.
3. Cosmic slowing down of acceleration for several dark energy parametrizations
SciTech Connect
Magaña, Juan; Cárdenas, Víctor H.; Motta, Verónica E-mail: [email protected]
2014-10-01
We further investigate the scenario of a slowing down of the acceleration of the universe for five parametrizations of the equation of state of dark energy, using four sets of Type Ia supernovae data. In a maximal probability analysis we also use the baryon acoustic oscillation and cosmic microwave background observations. We find that the low-redshift transition of the deceleration parameter appears, independently of the parametrization, when using supernovae data alone, except for the Union 2.1 sample. This feature disappears once we combine the Type Ia supernovae data with high-redshift data. We conclude that the rapid variation of the deceleration parameter is independent of the parametrization. We also find more evidence for a tension among the supernovae samples, as well as between the low- and high-redshift data.
4. Interstellar Mapping and Acceleration Probe (IMAP) - Its Time Has Come!
Schwadron, N.; Kasper, J. C.; Mewaldt, R. A.; Moebius, E.; Opher, M.; Spence, H. E.; Zurbuchen, T.
2014-12-01
Our piece of cosmic real-estate, the heliosphere, is the domain of all human existence -- an astrophysical case-history of the successful evolution of life in a habitable system. By exploring our global heliosphere and its myriad interactions, we develop key physical knowledge of the interstellar interactions that influence exoplanetary habitability as well as the distant history and destiny of our solar system and world. IBEX was the first mission to explore the global heliosphere and, in concert with Voyager 1 and Voyager 2, is discovering a fundamentally new and uncharted physical domain of the outer heliosphere. The enigmatic IBEX ribbon is an unanticipated discovery demonstrating that much of what we know or think we understand about the outer heliosphere needs to be revised. The next quantum leap enabled by IMAP will open new windows on the frontier of Heliophysics at a time when the space environment is rapidly evolving. IMAP, with 100 times the combined resolution and sensitivity of IBEX, will discover the substructure of the IBEX ribbon and will reveal global maps of our heliosphere in unprecedented resolution. The remarkable synergy between IMAP, Voyager 1 and Voyager 2 will remain for at least the next decade as Voyager 1 pushes further into the interstellar domain and Voyager 2 moves through the heliosheath. Voyager 2 moves outward in the vicinity of the IBEX ribbon, and its plasma measurements will create singular opportunities for discovery in the context of IMAP's global measurements. IMAP, like ACE before it, will be a keystone of the Heliophysics System Observatory by providing comprehensive cosmic ray, energetic particle, pickup ion, suprathermal ion, neutral atom, solar wind, solar wind heavy ion, and magnetic field observations to diagnose the changing space environment and understand the fundamental origins of particle acceleration. Thus, IMAP is a mission whose time has come. IMAP is the highest ranked next Solar Terrestrial Probe in the Decadal Survey.
5. Estimating field scale root zone soil moisture using the cosmic-ray neutron probe
Peterson, A. M.; Helgason, W. D.; Ireson, A. M.
2015-12-01
Many practical hydrological, meteorological and agricultural management problems require estimates of soil moisture with an areal footprint equivalent to "field scale", integrated over the entire root zone. The cosmic-ray neutron probe is a promising instrument to provide field scale areal coverage, but these observations are shallow and require depth scaling in order to be considered representative of the entire root zone. A study to identify appropriate depth-scaling techniques was conducted at a grazing pasture site in central Saskatchewan, Canada over a two-year period. Area-averaged soil moisture was assessed using a cosmic-ray neutron probe. Root zone soil moisture was measured at 21 locations within the 500 m × 500 m area, using a down-hole neutron probe. The cosmic-ray neutron probe was found to provide accurate estimates of field scale surface soil moisture, but accounted for less than 40 % of the seasonal change in root zone storage due to its shallow measurement depth. The root zone estimation methods evaluated were: (1) the coupling of the cosmic-ray neutron probe with a time stable neutron probe monitoring location, (2) coupling the cosmic-ray neutron probe with a representative landscape unit monitoring approach, and (3) convolution of the cosmic-ray neutron probe measurements with the exponential filter. The time stability method provided the best estimate of root zone soil moisture (RMSE = 0.004 cm^3 cm^-3), followed by the exponential filter (RMSE = 0.012 cm^3 cm^-3). The landscape unit approach, which required no calibration, had a negative bias but estimated the cumulative change in storage reasonably. The feasibility of applying these methods to field sites without existing instrumentation is discussed. It is concluded that the exponential filter method has the most potential for estimating root zone soil moisture from cosmic-ray neutron probe data.
6. Estimating field-scale root zone soil moisture using the cosmic-ray neutron probe
Peterson, Amber M.; Helgason, Warren D.; Ireson, Andrew M.
2016-04-01
Many practical hydrological, meteorological, and agricultural management problems require estimates of soil moisture with an areal footprint equivalent to field scale, integrated over the entire root zone. The cosmic-ray neutron probe is a promising instrument to provide field-scale areal coverage, but these observations are shallow and require depth-scaling in order to be considered representative of the entire root zone. A study to identify appropriate depth-scaling techniques was conducted at a grazing pasture site in central Saskatchewan, Canada over a 2-year period. Area-averaged soil moisture was assessed using a cosmic-ray neutron probe. Root zone soil moisture was measured at 21 locations within the 500 m × 500 m study area, using a down-hole neutron probe. The cosmic-ray neutron probe was found to provide accurate estimates of field-scale surface soil moisture, but measurements represented less than 40 % of the seasonal change in root zone storage due to its shallow measurement depth. The root zone estimation methods evaluated were: (a) the coupling of the cosmic-ray neutron probe with a time-stable neutron probe monitoring location, (b) coupling the cosmic-ray neutron probe with a representative landscape unit monitoring approach, and (c) convolution of the cosmic-ray neutron probe measurements with the exponential filter. The time stability method provided the best estimate of root zone soil moisture (RMSE = 0.005 cm^3 cm^-3), followed by the exponential filter (RMSE = 0.014 cm^3 cm^-3). The landscape unit approach, which required no calibration, had a negative bias but estimated the cumulative change in storage reasonably. The feasibility of applying these methods to field sites without existing instrumentation is discussed. Based upon its observed performance and its minimal data requirements, it is concluded that the exponential filter method has the most potential for estimating root zone soil moisture from cosmic-ray neutron probe data.
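The exponential filter referred to in both records is commonly implemented as the recursive Soil Water Index, SWI_n = SWI_{n-1} + K_n (ms_n - SWI_{n-1}), with gain K_n = K_{n-1}/(K_{n-1} + e^{-Δt/T}). A minimal Python sketch; the characteristic time T = 20 days is a placeholder to be calibrated per site, not a value from the study:

import math

def exp_filter(times, sm_surface, T=20.0):
    # times in days; sm_surface: surface volumetric water content series
    swi = [sm_surface[0]]          # initialize SWI at the first observation
    K = 1.0                        # initial gain
    for i in range(1, len(times)):
        dt = times[i] - times[i - 1]
        K = K / (K + math.exp(-dt / T))
        swi.append(swi[-1] + K * (sm_surface[i] - swi[-1]))
    return swi

The output series is then rescaled (for example, by linear regression against root zone observations) to produce root zone estimates of the kind compared above.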
7. Effects of cosmic acceleration on black hole thermodynamics
Mandal, Abhijit
2016-07-01
Direct local impacts of cosmic acceleration upon a black hole are matters of interest. Babichev et al. showed earlier that the Friedmann equations, which govern the fluid component that dominates the other constituents of the universe and forces it to undergo the present-day accelerating phase (that is, to violate the strong energy condition and later the weak energy condition), imply that accretion of such an exotic fluid will essentially shrink the mass of the central black hole. But this is a global impact. The local changes in the space-time geometry next to the black hole can be analysed from a modified metric governing the space-time surrounding a black hole. A charged de Sitter black hole solution surrounded by a quintessence field is chosen for this purpose. Different thermodynamic parameters are analysed for different values of the quintessence equation of state parameter, ω_q. Specific jumps in the nature of the thermodynamic space close to the quintessence or phantom barrier are noted and physically interpreted as far as possible. The nature of the phase transitions, and the situations in which these transitions take place, are also explored. It is determined that before quintessence starts to work (ω_q = -0.33 > -1/3) it was preferable to have a small unstable black hole followed by a large stable one, but in the quintessence regime (-1/3 > ω_q > -1) black holes are destined to be unstable large ones preceded by stable/unstable small/intermediate mass black holes.
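For concreteness, such analyses typically start from a Kiselev-type lapse function for a charged de Sitter black hole surrounded by quintessence (our sketch of the standard form; the normalization constant c_q and sign conventions vary between papers):

f(r) = 1 - \frac{2M}{r} + \frac{Q^2}{r^2} - \frac{\Lambda r^2}{3} - \frac{c_q}{r^{3\omega_q + 1}},

where the quintessence term grows with r for ω_q < -1/3 and mimics the cosmological-constant term in the limit ω_q → -1.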
8. A class of effective field theory models of cosmic acceleration
Bloomfield, Jolyon K.; Flanagan, Éanna É.
2012-10-01
We explore a class of effective field theory models of cosmic acceleration involving a metric and a single scalar field. These models can be obtained by starting with a set of ultralight pseudo-Nambu-Goldstone bosons whose couplings to matter satisfy the weak equivalence principle, assuming that one boson is lighter than all the others, and integrating out the heavier fields. The result is a quintessence model with matter coupling, together with a series of correction terms in the action in a covariant derivative expansion, with specific scalings for the coefficients. After eliminating higher derivative terms and exploiting the field redefinition freedom, we show that the resulting theory contains nine independent free functions of the scalar field when truncated at four derivatives. This is in contrast to the four free functions found in similar theories of single-field inflation, where matter is not present. We discuss several different representations of the theory that can be obtained using the field redefinition freedom. For perturbations to the quintessence field today on subhorizon length scales larger than the Compton wavelength of the heavy fields, the theory is weakly coupled and natural in the sense of 't Hooft. The theory admits a regime where the perturbations become modestly nonlinear, but very strong nonlinearities lie outside its domain of validity.
9. The acceleration rate of cosmic rays at cosmic ray modified shocks
Saito, Tatsuhiko; Hoshino, Masahiro; Amano, Takanobu
It is still a controversial matter whether the production efficiency of cosmic rays (CRs) is relatively efficient or inefficient (e.g. Helder et al. 2009; Hughes et al. 2000; Fukui 2013). In the upstream region of SNR shocks (the interstellar medium), the energy density of CRs is a substantial fraction of that of the thermal plasma (e.g. Ferriere 2001). In such a situation, CRs can exert a back-reaction on the shock and modify the global shock structure. These shocks are called cosmic ray modified shocks (CRMSs). In CRMSs, as a result of the nonlinear feedback, there are almost always up to three steady-state solutions for given upstream parameters, characterized by their CR production efficiencies (efficient, intermediate and inefficient branches). We evaluate qualitatively the efficiency of CR production in SNR shocks by considering the stability of CRMSs, under the effects of (i) magnetic fields and (ii) injection, which play significant roles in the efficiency of acceleration. Adopting the two-fluid model (Drury & Voelk, 1981), we investigate the stability of CRMSs by means of time-dependent numerical simulations. As a result, we show explicitly the bi-stable nature of these multiple solutions, i.e., the efficient and inefficient branches are stable while the intermediate branch is unstable, and the intermediate branch transits to the inefficient one. This feature is independent of the effects of (i) shock angles and (ii) injection. Furthermore, we investigate the evolution from a hydrodynamic shock to a CRMS in a self-consistent manner. From the results, we suggest qualitatively that CR production at SNR shocks may correspond to the least efficient branch.
10. Cosmic-Ray Accelerators in Milky Way studied with the Fermi Gamma-ray Space Telescope
SciTech Connect
Kamae, Tuneyoshi; /SLAC /KIPAC, Menlo Park
2012-05-04
High-energy gamma-ray astrophysics now sits at a confluence of particle physics, plasma physics and traditional astrophysics. The Fermi Gamma-ray Space Telescope (FGST) and upgraded Imaging Atmospheric Cherenkov Telescopes (IACTs) have been invigorating this interdisciplinary area of research. Among many new developments, I focus on two types of cosmic accelerators in the Milky Way galaxy (pulsars and their wind nebulae, and supernova remnants) and explain discoveries related to cosmic-ray acceleration.
11. Probing gravitation, dark energy, and acceleration
SciTech Connect
Linder, Eric V.
2004-02-20
The acceleration of the expansion of the universe arises from unknown physical processes involving either new fields in high energy physics or modifications of gravitation theory. It is crucial for our understanding to characterize the properties of the dark energy or gravity through cosmological observations, and to compare and distinguish between them. In fact, close consistencies exist between a dark energy equation of state function w(z) and changes to the framework of the Friedmann cosmological equations, as well as direct spacetime geometry quantities involving the acceleration, such as 'geometric dark energy' from the Ricci scalar. We investigate these interrelationships, including the case of superacceleration or phantom energy, where the fate of the universe may be more gentle than the Big Rip.
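One widely used benchmark for such comparisons is the two-parameter equation of state closely associated with this author's work (quoted here as background; the discussion above considers general w(z)):

w(a) = w_0 + w_a (1 - a) = w_0 + w_a \frac{z}{1+z}, \qquad \rho_{de}(a) = \rho_{de,0}\, a^{-3(1 + w_0 + w_a)}\, e^{-3 w_a (1 - a)},

which reduces to a cosmological constant for w_0 = -1, w_a = 0 and captures a broad range of scalar-field and modified-gravity expansion histories.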
12. Probing Cosmic Gas Accretion with RESOLVE and ECO
Kannappan, Sheila; Eckert, Kathleen D.; Stark, David; Lagos, Claudia; Nasipak, Zachary; Moffett, Amanda J.; Baker, Ashley; Berlind, Andreas A.; Hoversten, Erik A.; Norris, Mark A.; RESOLVE Team
2016-01-01
We review results bearing on the existence, controlling factors, and mechanisms of cosmic gas accretion in the RESOLVE and ECO surveys. Volume-limited analysis of RESOLVE's complete census of HI-to-stellar mass ratios and star formation histories for ~1500 galaxies points to the necessity of an "open box" model of galaxy fueling, with the most gas-dominated galaxies doubling their stellar masses on ~Gyr timescales in a regime of rapid accretion. Transitions in gas richness and disk-building activity for isolated or central galaxies with halo masses near ~10^11.5 Msun and ~10^12 Msun plausibly correspond to the endpoints of a theoretically predicted transition in halo gas temperature that slows accretion across this range. The same mass range is associated with the initial grouping of isolated galaxies into common halos, where "isolated" is defined relative to the survey baryonic mass limits of ≳ 10^9 Msun. Above 10^11.5 Msun, patterns in central vs. satellite gas richness as a function of group halo mass suggest that galaxy refueling is valved off from the inside out as the halo grows, with total quenching beyond the virial radius for halo masses ≳ 10^13-13.5 Msun. Within the transition range from ~10^11.5-10^12 Msun, theoretical models predict >3 dex dispersion in ratios of uncooled halo gas to cold gas in galaxies (or more generally gas and stars). In RESOLVE and ECO, the baryonic mass function of galaxies in this transitional halo mass range displays signs of stripping or destruction of satellites, leading us to investigate a possible connection with halo gas heating using central galaxy color and group dynamics to probe group evolutionary state. Finally, we take a first look at how internal variations in metallicity, dynamics, and star formation constrain accretion mechanisms such as cold streams, induced extraplanar gas cooling, isotropic halo gas cooling, and gas-rich merging in different mass and environment regimes. The RESOLVE and ECO surveys have been
13. Late time cosmic acceleration from natural infrared cutoff
2016-09-01
In this paper, inspired by the ultraviolet deformation of the Friedmann-Lemaître-Robertson-Walker geometry in loop quantum cosmology, we formulate an infrared-modified cosmological model. We obtain the associated deformed Friedmann and Raychaudhuri equations and show that late time cosmic acceleration can be addressed by the infrared corrections. As a particular example, we apply the setup to the case of a matter dominated universe. This model has the same number of parameters as ΛCDM, but a dynamical dark energy component emerges in the matter dominated era at late times. According to our model, as the universe expands, the energy density of the cold dark matter dilutes, and when the Hubble parameter approaches its minimum, the infrared effects dominate such that the effective equation of state parameter smoothly changes from weff = 0 to weff = -2. Interestingly and nontrivially, the unstable de Sitter phase with weff = -1 corresponds to Ωm = Ωd = 0.5, and the universe crosses the phantom divide from the quintessence phase with weff > -1 and Ωm > Ωd to the phantom phase with weff < -1 and Ωm < Ωd, which shows that the model is observationally viable. The results show that the universe finally ends up in a big rip singularity in a finite time proportional to the inverse of the minimum of the Hubble parameter. Moreover, we consider the dynamical stability of the model and show that the universe starts from the matter dominated era at the past attractor with weff = 0 and ends up at a future attractor at the big rip with weff = -2.
14. Probing electron acceleration and x-ray emission in laser-plasma accelerators
SciTech Connect
Thaury, C.; Ta Phuoc, K.; Corde, S.; Brijesh, P.; Lambert, G.; Malka, V.; Mangles, S. P. D.; Bloom, M. S.; Kneip, S.
2013-06-15
While laser-plasma accelerators have demonstrated a strong potential in the acceleration of electrons up to giga-electronvolt energies, few experimental tools for studying the acceleration physics have been developed. In this paper, we demonstrate a method for probing the acceleration process. A second laser beam, propagating perpendicular to the main beam, is focused on the gas jet a few nanoseconds before the main beam creates the accelerating plasma wave. This second beam is intense enough to ionize the gas and form a density depletion, which locally inhibits the acceleration. The position of the density depletion is scanned along the interaction length to probe the electron injection and acceleration, and the betatron X-ray emission. To illustrate the potential of the method, the variation of the injection position with the plasma density is studied.
15. Probing electron acceleration and x-ray emission in laser-plasma accelerators
Thaury, C.; Ta Phuoc, K.; Corde, S.; Brijesh, P.; Lambert, G.; Mangles, S. P. D.; Bloom, M. S.; Kneip, S.; Malka, V.
2013-06-01
While laser-plasma accelerators have demonstrated a strong potential in the acceleration of electrons up to giga-electronvolt energies, few experimental tools for studying the acceleration physics have been developed. In this paper, we demonstrate a method for probing the acceleration process. A second laser beam, propagating perpendicular to the main beam, is focused on the gas jet a few nanoseconds before the main beam creates the accelerating plasma wave. This second beam is intense enough to ionize the gas and form a density depletion, which locally inhibits the acceleration. The position of the density depletion is scanned along the interaction length to probe the electron injection and acceleration, and the betatron X-ray emission. To illustrate the potential of the method, the variation of the injection position with the plasma density is studied.
16. SUPERNOVA REMNANT KES 17: AN EFFICIENT COSMIC RAY ACCELERATOR INSIDE A MOLECULAR CLOUD
SciTech Connect
Gelfand, Joseph D.; Castro, Daniel; Slane, Patrick O.; Temim, Tea; Hughes, John P.; Rakowski, Cara
2013-11-10
The supernova remnant Kes 17 (SNR G304.6+0.1) is one of a small but growing number of remnants detected across the electromagnetic spectrum. In this paper, we analyze recent radio, X-ray, and γ-ray observations of this object, determining that efficient cosmic ray acceleration is required to explain its broadband non-thermal spectrum. These observations also suggest that Kes 17 is expanding inside a molecular cloud, though our determination of its age depends on whether thermal conduction or clump evaporation is primarily responsible for its center-filled thermal X-ray morphology. Evidence for efficient cosmic ray acceleration in Kes 17 supports recent theoretical work concluding that the strong magnetic field, turbulence, and clumpy nature of molecular clouds enhance cosmic ray production in supernova remnants. While additional observations are needed to confirm this interpretation, further study of Kes 17 is important for understanding how cosmic rays are accelerated in supernova remnants.
17. Cosmic-Ray Anisotropy as a Probe of Interstellar Turbulence
Giacinti, Gwenael; Kirk, John
2016-07-01
IceTop and IceCube have observed a mysterious cold spot in the angular distribution of high energy (≳ 100 TeV) cosmic rays, thereby placing interesting constraints on their transport properties. In this paper we examine these constraints by comparing the observations with the predictions of pitch-angle diffusion in various kinds of turbulence. In the case of incompressible Alfvénic turbulence with a Goldreich-Sridhar power-spectrum, we show that pseudo-Alfvén modes produce a signature that is compatible with the observations, although they fail to provide enough scattering to confine cosmic rays in the galaxy. We confirm that adding fast magnetosonic modes can alleviate this problem, and further show that for physically relevant values of the turbulence parameters, this model can still match the observations. Finally, we study the imprint on the cosmic-ray anisotropy of anisotropic damping of the fast modes.
18. Cascaded Gamma Rays as a Probe of Cosmic Rays
Murase, Kohta
2014-06-01
Very-high-energy (VHE) and ultra-high-energy (UHE) gamma rays from extragalactic sources experience electromagnetic cascades during their propagation in intergalactic space. Recent gamma-ray data on TeV blazars and the diffuse gamma-ray background may have hints of the cascade emission, which is especially interesting if it comes from UHE cosmic rays. I show that cosmic-ray-induced cascades can be discriminated from gamma-ray-induced cascades with detailed gamma-ray spectra. I also discuss the roles of structured magnetic fields, which suppress inverse-Compton pair halos/echoes but lead to guaranteed signals: synchrotron pair halos/echoes.
19. Pointlike gamma ray sources as signatures of distant accelerators of ultrahigh energy cosmic rays.
PubMed
Gabici, Stefano; Aharonian, Felix A
2005-12-16
We discuss the possibility of observing distant accelerators of ultrahigh energy cosmic rays in synchrotron gamma rays. Protons propagating away from their acceleration sites produce extremely energetic electrons during photopion interactions with cosmic microwave background photons. If the accelerator is embedded in a magnetized region, these electrons will emit high energy synchrotron radiation. The resulting synchrotron source is expected to be pointlike, steady, and detectable in the GeV-TeV energy range if the magnetic field is at the nanoGauss level. PMID:16384444
20. The COBE cosmic 3 K anisotropy experiment: A gravity wave and cosmic string probe
NASA Technical Reports Server (NTRS)
Bennett, Charles L.; Smoot, George F.
1989-01-01
Among the experiments to be carried into orbit next year by the COBE satellite are differential microwave radiometers. They will make sensitive all-sky maps of the temperature of the cosmic microwave background radiation at three frequencies, giving dipole, quadrupole, and higher order multipole measurements of the background radiation. The experiment will either detect, or place significant constraints on, the existence of cosmic strings and long wavelength gravity waves.
1. High energy neutrinos from astrophysical accelerators of cosmic ray nuclei
Anchordoqui, Luis A.; Hooper, Dan; Sarkar, Subir; Taylor, Andrew M.
2008-02-01
Ongoing experimental efforts to detect cosmic sources of high energy neutrinos are guided by the expectation that astrophysical accelerators of cosmic ray protons would also generate neutrinos through interactions with ambient matter and/or photons. However, there will be a reduction in the predicted neutrino flux if cosmic ray sources accelerate not only protons but also significant numbers of heavier nuclei, as is indicated by recent air shower data. We consider plausible extragalactic sources such as active galactic nuclei, gamma ray bursts and starburst galaxies and demand consistency with the observed cosmic ray composition and energy spectrum at Earth after allowing for propagation through intergalactic radiation fields. This allows us to calculate the expected neutrino fluxes from the sources, normalized to the observed cosmic ray spectrum. We find that the likely signals are still within reach of next generation neutrino telescopes such as IceCube.
2. The cosmic microwave background - A probe of particle physics
NASA Technical Reports Server (NTRS)
Silk, Joseph
1990-01-01
The current status of spectral distortions and angular anisotropies in the cosmic microwave background is reviewed, with emphasis on the role played by weakly interacting particle dark matter. Theoretical predictions and recent observational results are described, and prospects for future progress are summarized.
3. Stochastic Acceleration of Galactic Cosmic Rays by Compressible Plasma Fluctuations in Supernova Shells
Zhang, Ming
2015-10-01
A theory of two-stage acceleration of Galactic cosmic rays in supernova remnants is proposed. The first stage is accomplished by the supernova shock front, where a power-law spectrum is established up to a certain cutoff energy. It is followed by stochastic acceleration with compressible waves/turbulence in the downstream medium. With a broad W(k) ∝ k^-2 spectrum for the compressible plasma fluctuations, the rate of stochastic acceleration is constant over a wide range of particle momentum. In this case, the stochastic acceleration process extends the power-law spectrum cutoff energy of Galactic cosmic rays to the knee without changing the spectral slope. This situation holds as long as the rate of stochastic acceleration is faster than 1/5 of the adiabatic cooling rate. A steeper spectrum of compressible plasma fluctuations that concentrates its power in long wavelengths will accelerate cosmic rays to the knee with a small bump before the cutoff in the cosmic-ray energy spectrum. This theory does not require a strong amplification of the magnetic field in the upstream interstellar medium in order to accelerate cosmic rays to the knee energy.
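The "constant acceleration rate" claim can be phrased compactly in the usual momentum-diffusion language (our gloss on the abstract, not the paper's notation):

```latex
% Stochastic (second-order Fermi) acceleration is characterized by a
% momentum-diffusion coefficient D_pp, with acceleration timescale
% tau_acc ~ p^2/D_pp. A momentum-independent rate therefore requires
% D_pp ~ p^2, which a broad W(k) ~ k^-2 spectrum of compressible
% fluctuations supplies over the resonant range of wavenumbers.
\tau_{\rm acc}(p) \sim \frac{p^2}{D_{pp}(p)} = \mathrm{const}
\quad\Longleftrightarrow\quad
D_{pp}(p) \propto p^2 .
```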
4. An absence of neutrinos associated with cosmic-ray acceleration in γ-ray bursts.
PubMed
2012-04-19
Very energetic astrophysical events are required to accelerate cosmic rays to above 10^18 electronvolts. GRBs (γ-ray bursts) have been proposed as possible candidate sources. In the GRB 'fireball' model, cosmic-ray acceleration should be accompanied by neutrinos produced in the decay of charged pions created in interactions between the high-energy cosmic-ray protons and γ-rays. Previous searches for such neutrinos found none, but the constraints were weak because the sensitivity was at best approximately equal to the predicted flux. Here we report an upper limit on the flux of energetic neutrinos associated with GRBs that is at least a factor of 3.7 below the predictions. This implies either that GRBs are not the only sources of cosmic rays with energies exceeding 10^18 electronvolts or that the efficiency of neutrino production is much lower than has been predicted. PMID:22517161
5. Laboratory laser acceleration and high energy astrophysics: γ-ray bursts and cosmic rays
SciTech Connect
Tajima, T.; Takahashi, Y.
1998-08-20
Recent experimental progress in laser acceleration of charged particles (electrons) and its associated processes has shown that intense electromagnetic pulses can promptly accelerate charged particles to high energies and that their energy spectrum is quite hard. On the other hand, some high energy astrophysical phenomena, such as extremely high energy cosmic rays and energetic components of γ-ray bursts, call for new physical mechanisms for promptly accelerating particles to high energies. The authors suggest that the basic physics involved in laser acceleration experiments sheds light on some of the underlying mechanisms and on the energy spectral characteristics of the promptly accelerated particles in these high energy astrophysical phenomena.
6. Super-TIGER: A Balloon-Borne Instrument to Probe Galactic Cosmic Ray Origins
Rauch, Brian
2012-07-01
Super-TIGER (Super Trans-Iron Galactic Element Recorder) is a balloon-borne instrument under construction for a long-duration flight from Antarctica in 2012. It is designed to measure the relative abundances of the ultra-heavy (UH) Galactic cosmic rays (GCR) with individual-element resolution from zinc (Z = 30) to molybdenum (Z = 42) and make exploratory measurements through barium (Z = 56), as well as the energy spectra of the GCR from neon (Z = 10) to copper (Z = 29) between 0.8 and 10 GeV/nucleon. The UH measurements will test the OB association origin model of the GCR, as well as the model of preferential acceleration of refractory elements. The GCR spectrum measurements will probe for microquasars or other sources that could superpose spectral features. Super-TIGER is a ~4× larger evolution of the preceding TIGER instrument, and comprises two independent modules with a total area of 5.4 m^2. A combination of plastic scintillation detectors, acrylic and silica-aerogel Cherenkov detectors, and scintillating fiber hodoscopes is used to resolve particle charge, kinetic energy per nucleon, and trajectory. Refinements in the Super-TIGER design over TIGER, including reduced material in the beam, give it a collecting power that is ~6.4× larger. This paper will report on the instrument development status, the expected flight performance, and the scientific impact of the anticipated Super-TIGER GCR measurements. This research was supported by NASA under Grant NNX09AC17G
7. Calibration of a catchment scale cosmic-ray probe network: A comparison of three parameterization methods
Baatz, R.; Bogena, H. R.; Hendricks Franssen, H.-J.; Huisman, J. A.; Qu, W.; Montzka, C.; Vereecken, H.
2014-08-01
The objective of this work was to assess the accuracy of soil water content determination from neutron flux measured by cosmic-ray probes under humid climate conditions. Ten cosmic-ray probes were set up in the Rur catchment located in western Germany, and calibrated by gravimetric soil sampling campaigns. Aboveground biomass was estimated at the sites to investigate the role of vegetation cover on the neutron flux and the calibration procedure. Three parameterization methods were used to generate site-specific neutron flux vs. soil water content calibration curves: (i) the N0-method, (ii) the hydrogen molar fraction method (hmf-method), and (iii) the COSMIC-method. At five locations, calibration measurements were repeated to evaluate site-specific calibration parameters obtained in two different sampling campaigns. At two locations, soil water content determined by cosmic-ray probes was evaluated with horizontally and vertically weighted soil water content measurements of two distributed in situ soil water content sensor networks. All three methods were successfully calibrated to determine field scale soil water content continuously at the ten sites. The hmf-method and the COSMIC-method had more similar calibration curves than the N0-method. The three methods performed similarly well in the validation, and errors were within the uncertainty of neutron flux measurements despite observed differences in the calibration curves and variable model complexity. In addition, we found that the obtained calibration parameters N_COSMIC, N_0, and N_S showed a strong correlation with aboveground biomass.
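For readers unfamiliar with the N0-method, it is built on the shape-defining calibration function of Desilets et al. (2010). A minimal sketch in Python, assuming the standard shape parameters and a hypothetical site N0 and bulk density (the site-specific values in this study would come from the gravimetric campaigns):

```python
# Minimal sketch of the N0-method (Desilets et al. 2010):
#   theta_grav(N) = a0 / (N/N0 - a1) - a2
# a0, a1, a2 are the standard shape-defining parameters; N0 is the
# site-specific count rate over dry soil from a calibration campaign.
A0, A1, A2 = 0.0808, 0.372, 0.115

def soil_water_content(n_corrected, n0, bulk_density=1.4):
    """Volumetric soil water content (m^3/m^3) from a corrected
    neutron count rate and a calibrated N0 (same units as n_corrected);
    bulk_density (g/cm^3) converts gravimetric to volumetric content."""
    gravimetric = A0 / (n_corrected / n0 - A1) - A2
    return bulk_density * gravimetric

# Hypothetical example: N0 = 1100 cph, measured N = 800 cph.
print(soil_water_content(800.0, 1100.0))  # ~0.16 m^3/m^3
```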
8. Visual phenomena induced by cosmic rays and accelerated particles
NASA Technical Reports Server (NTRS)
Tobias, C. A.; Budinger, T. F.; Leith, J. T.; Mamoon, A.; Chapman, P. K.
1972-01-01
Experiments conducted at cyclotrons, together with observations by Apollo astronauts, suggest with little doubt that cosmic nuclei interacting with the visual apparatus cause the phenomenon of light flashes seen on translunar and transearth coast over the past four Apollo missions. Other experiments with high and low energy neutrons and a helium ion beam suggest that slow protons and helium ions with a stopping power greater than 10^8 eV/(g cm^-2) can cause the phenomenon in the dark adapted eye. It was demonstrated that charged particles induced by neutrons and helium ions can stimulate the visual apparatus. Some approaches to understanding the long term mission effects of galactic cosmic nuclei interacting with man and his nervous system are outlined.
9. Integration of cosmic-ray neutron probes into production agriculture: Lessons from the Platte River cosmic-ray neutron probe monitoring network
Avery, W. A.; Finkenbiner, C. E.; Franz, T. E.; Nguy-Robertson, A. L.; Munoz-Arriola, F.; Suyker, A.; Arkebauer, T. J.
2015-12-01
Projected increases in global population will put enormous pressure on fresh water resources in the coming decades. Approximately 70 percent of human water use is allocated to agriculture with 40 percent of global food production originating from irrigated lands. Growing demand for food will only worsen the strain placed on many irrigated agricultural systems resulting in an unsustainable reliance on groundwater. This work presents an overview of the Platte River Cosmic-ray Neutron Probe Monitoring Network, which consists of 10 fixed probes and 3 mobile probes located across the Platte River Basin. The network was installed in 2014 and is part of the larger US COSMOS (70+ probes) and global COSMOS networks (200+ probes). Here we will present an overview of the network, comparison of fixed neutron probe results across the basin, spatial mapping results of the mobile sensors at various sites and spatial scales, and lessons learned by working with various producers and water stakeholder groups. With the continued development of this technique, its incorporation for soil moisture management in large producer operations has the potential to increase irrigation water use efficiency in the Platte River Basin and beyond.
10. Los Alamos, Toshiba probing Fukushima with cosmic rays
ScienceCinema
Morris, Christopher
2014-06-25
Los Alamos National Laboratory has announced an impending partnership with Toshiba Corporation to use a Los Alamos technique called muon tomography to safely peer inside the cores of the Fukushima Daiichi reactors and create high-resolution images of the damaged nuclear material inside without ever breaching the cores themselves. The initiative could reduce the time required to clean up the disabled complex by at least a decade and greatly reduce radiation exposure to personnel working at the plant. Muon radiography (also called cosmic-ray radiography) uses secondary particles generated when cosmic rays collide with upper regions of Earth's atmosphere to create images of the objects that the particles, called muons, penetrate. The process is analogous to an X-ray image, except muons are produced naturally and do not damage the materials they contact. Muon radiography has been used before in imaginative applications such as mapping the interior of the Great Pyramid at Giza, but Los Alamos's muon tomography technique represents a vast improvement over earlier technology.
11. Los Alamos, Toshiba probing Fukushima with cosmic rays
SciTech Connect
Morris, Christopher
2014-06-16
Los Alamos National Laboratory has announced an impending partnership with Toshiba Corporation to use a Los Alamos technique called muon tomography to safely peer inside the cores of the Fukushima Daiichi reactors and create high-resolution images of the damaged nuclear material inside without ever breaching the cores themselves. The initiative could reduce the time required to clean up the disabled complex by at least a decade and greatly reduce radiation exposure to personnel working at the plant. Muon radiography (also called cosmic-ray radiography) uses secondary particles generated when cosmic rays collide with upper regions of Earth's atmosphere to create images of the objects that the particles, called muons, penetrate. The process is analogous to an X-ray image, except muons are produced naturally and do not damage the materials they contact. Muon radiography has been used before in imaginative applications such as mapping the interior of the Great Pyramid at Giza, but Los Alamos's muon tomography technique represents a vast improvement over earlier technology.
12. A MODEL OF ACCELERATION OF ANOMALOUS COSMIC RAYS BY RECONNECTION IN THE HELIOSHEATH
SciTech Connect
Lazarian, A.; Opher, M.
2009-09-20
We discuss a model of cosmic ray acceleration that accounts for the observations of anomalous cosmic rays (ACRs) by Voyager 1 and 2. The model appeals to fast magnetic reconnection rather than shocks as the driver of acceleration. The ultimate source of energy is associated with magnetic field reversals that occur in the heliosheath. It is expected that the magnetic field reversals will occur throughout the heliosheath, but especially near the heliopause where the flows slow down and diverge with respect to the interstellar wind and also in the boundary sector in the heliospheric current sheet. While the first-order Fermi acceleration theory within reconnection layers is in its infancy, the predictions do not contradict the available data on ACR spectra measured by the spacecraft. We argue that the Voyager data are one of the first pieces of evidence favoring the acceleration within regions of fast magnetic reconnection, which we believe to be a widely spread astrophysical process.
13. Generation of mesoscale magnetic fields and the dynamics of Cosmic Ray acceleration
Diamond, P. H.; Malkov, M. A.
The problem of the origin of cosmic rays is discussed in connection with their acceleration in supernova remnant shocks. The diffusive shock acceleration mechanism is reviewed and its potential to accelerate particles to the maximum energy of (presumably) galactic cosmic rays (10^18 eV) is considered. It is argued that to reach such energies, a strong magnetic field at scales larger than the particle gyroradius must be created as a result of the acceleration process itself. One specific mechanism suggested here is based on the generation of Alfvén waves at the gyroradius scale with a subsequent transfer to longer scales via interaction with strong acoustic turbulence in the shock precursor. The acoustic turbulence, in turn, may be generated by the Drury instability or by parametric instability of the Alfvén waves. The generation mechanism is a modulational instability of CR-generated Alfvén wave packets, induced by scattering off acoustic fluctuations in the shock precursor.
14. Photon damping in cosmic-ray acceleration in active galactic nuclei
SciTech Connect
Colgate, S.A.
1983-04-07
The usual assumption of the acceleration of ultra high energy cosmic rays, ≥ 10^18 eV, in quasars, Seyfert galaxies and other active galactic nuclei is challenged on the basis of the photon interactions with the accelerated nucleons. This is similar to the effect of the black body radiation on particles > 10^20 eV over times of the age of the universe, except that the photon spectrum is harder and the energy density greater by ≈ 10^15. Hence, a single traversal, radial or circumferential, of radiation whose energy density is no greater than the emitted flux will damp an ultra high energy nucleon. It is therefore unlikely that any reasonable configuration of acceleration can avoid disastrous photon energy loss. A different site for ultra high energy cosmic ray acceleration must be found.
15. An empirical vegetation correction for soil water content quantification using cosmic ray probes
Baatz, R.; Bogena, H. R.; Hendricks Franssen, H.-J.; Huisman, J. A.; Montzka, C.; Vereecken, H.
2015-04-01
Cosmic ray probes are an emerging technology to continuously monitor soil water content at a scale significant to land surface processes. However, the application of this method is hampered by its susceptibility to the presence of aboveground biomass. Here we present a simple empirical framework to account for the moderation of fast neutrons by aboveground biomass in the calibration. The method extends the N0-calibration function and was developed using an extensive data set from a network of 10 cosmic ray probes located in the Rur catchment, Germany. The results suggest a 0.9% reduction in fast neutron intensity per 1 kg of dry aboveground biomass per m^2, or per 2 kg of biomass water equivalent per m^2. We successfully tested the novel vegetation correction using temporary cosmic ray probe measurements along a strong gradient in biomass due to deforestation, and using the COSMIC and hmf methods as independent soil water content retrieval algorithms. The extended N0-calibration function was able to explain 95% of the overall variability in fast neutron intensity.
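One simple way to read the reported sensitivity (a 0.9% reduction in fast neutron intensity per kg m^-2 of dry aboveground biomass) is as a rescaling of the measured counts before the N0 calibration is applied. The sketch below is illustrative; the exact extended N0-function of the paper may differ in form:

```python
# Illustrative vegetation correction: undo the ~0.9% reduction in
# fast-neutron intensity per kg/m^2 of dry aboveground biomass (AGB)
# before feeding the counts into the N0 calibration function.
def correct_for_biomass(n_measured, agb_kg_m2, reduction_per_kg=0.009):
    """Estimate the bare-soil neutron count rate from a measurement
    made under vegetation with the given dry AGB (kg/m^2)."""
    return n_measured / (1.0 - reduction_per_kg * agb_kg_m2)

# Hypothetical example: 20 kg/m^2 of dry biomass moderates the signal
# by ~18%, so 700 cph measured implies ~854 cph over bare soil.
print(correct_for_biomass(700.0, 20.0))
```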
16. Particle acceleration in cosmic plasmas – paradigm change?
SciTech Connect
Lytikov, Maxim; Guo, Fan
2015-07-21
The presentation begins by considering the requirements on the acceleration mechanism. It is found that at least some particles in high-energy sources are accelerated by magnetic reconnection (and not by shocks). The two paradigms can be distinguished by the hardness of the spectra: shocks typically produce spectra with p > 2 (relativistic shocks have p ~ 2.2); non-linear shocks and drift acceleration may give p < 2, e.g. p = 1.5; B-field dissipation can give p = 1. The collapse of a stressed magnetic X-point in force-free plasma and the collapse of a system of magnetic islands are then taken up, including island mergers (forced reconnection). Spectra as functions of sigma are shown, and Lorentz factors of γ ~ 10^9 are addressed. It is concluded that reconnection in magnetically-dominated plasma can proceed explosively, is an efficient means of particle acceleration, and is an important (perhaps dominant for some phenomena) mechanism of particle acceleration in high energy sources.
17. Probing Turbulence and Acceleration at Relativistic Shocks in Blazar Jets
Baring, Matthew G.; Boettcher, Markus; Summerlin, Errol J.
2016-04-01
Acceleration at relativistic shocks is likely to be important in various astrophysical jet sources, including blazars and other radio-loud active galaxies. An important recent development for blazar science is the ability of Fermi-LAT data to pin down the power-law index of the high energy portion of emission in these sources, and therefore also the index of the underlying non-thermal particle population. This paper highlights how multiwavelength spectra including X-ray band and Fermi data can be used to probe diffusive acceleration in relativistic, oblique, MHD shocks in blazar jets. The spectral index of the non-thermal particle distributions resulting from Monte Carlo simulations of shock acceleration, and the fraction of thermal particles accelerated to non-thermal energies, depend sensitively on the particles' mean free path scale, and also on the mean magnetic field obliquity to the shock normal. We investigate the radiative synchrotron/Compton signatures of thermal and non-thermal particle distributions generated from the acceleration simulations. Important constraints on the frequency of particle scattering and the level of field turbulence are identified for the jet sources Mrk 501, AO 0235+164 and BL Lacertae. Results suggest the interpretation that turbulence levels decline with remoteness from jet shocks, with a significant role for non-gyroresonant diffusion.
18. X-Ray Probes of Cosmic Star-Formation History
NASA Technical Reports Server (NTRS)
Ghosh, Pranab; White, Nicholas E.
2001-01-01
In a previous paper we pointed out that the X-ray luminosity L_x of a galaxy is driven by the evolution of its X-ray binary population and that the profile of L_x with redshift can both serve as a diagnostic probe of the Star Formation Rate (SFR) profile and constrain evolutionary models for X-ray binaries. We update our previous work using a suite of more recently developed SFR profiles that span the currently plausible range. The first Chandra deep imaging results on L_x evolution are beginning to probe the SFR profile of bright spirals, and the early results are consistent with predictions based on current SFR models. Using these new SFR profiles, the resolution of the "birthrate problem" of low-mass X-ray binaries (LMXBs) and recycled, millisecond pulsars in terms of an evolving global SFR is more complete. We also discuss the possible impact of the variations in the SFR profile of individual galaxies.
19. Cosmic microwave background anisotropy from nonlinear structures in accelerating universes
SciTech Connect
2008-09-15
We study the cosmic microwave background (CMB) anisotropy due to spherically symmetric nonlinear structures in flat universes with dust and a cosmological constant. By modeling a time-evolving spherical compensated void/lump by Lemaitre-Tolman-Bondi spacetimes, we numerically solve the null geodesic equations with the Einstein equations. We find that a nonlinear void redshifts the CMB photons that pass through it regardless of the distance to it. In contrast, a nonlinear lump blueshifts (or redshifts) the CMB photons if it is located near (or sufficiently far from) us. The present analysis comprehensively covers previous works based on a thin-shell approximation and a linear/second-order perturbation method, and the effects of shell thickness and full nonlinearity. Our results indicate that, if quasilinear and large (≳ 100 Mpc) voids/lumps exist, they could be observed as cold or hot spots with temperature variance ≳ 10^-5 K in the CMB sky.
20. Magnetowave Induced Plasma Wakefield Acceleration for Ultra High Energy Cosmic Rays
SciTech Connect
Chang, Feng-Yin; Chen, Pisin; Lin, Guey-Lin; Noble, Robert; Sydora, Richard
2009-10-17
Magnetowave induced plasma wakefield acceleration (MPWA) in a relativistic astrophysical outflow has been proposed as a viable mechanism for the acceleration of cosmic particles to ultrahigh energies. Here we present simulation results that clearly demonstrate the viability of this mechanism for the first time. We invoke the high frequency and high speed whistler mode for the driving pulse. The plasma wakefield obtained in the simulations compares favorably with our newly developed relativistic theory of the MPWA. We show that, under appropriate conditions, the plasma wakefield maintains very high coherence and can sustain high-gradient acceleration over hundreds of plasma skin depths. Invoking active galactic nuclei as the site, we show that MPWA production of ultrahigh energy cosmic rays beyond ZeV (10^21 eV) is possible.
1. Spontaneous excitation of a uniformly accelerated atom in the cosmic string spacetime
Zhou, Wenting; Yu, Hongwei
2016-04-01
We study, in the cosmic string spacetime, the average rate of change of energy for an atom coupled to massless scalar fields and uniformly accelerated in a direction parallel to the string in vacuum. We find that both the noninertial motion and the nontrivial global spacetime topology affect the atomic transition rates, so an accelerated atom (an Unruh detector) does feel the string contrary to claims in the literature. We demonstrate that the equivalence between the effect of uniform acceleration and that of thermal radiation on the transition rates of the atom, which is valid in the Minkowski spacetime, holds only on the string.
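The thermal equivalence at issue here is quantified by the Unruh temperature, which in flat Minkowski spacetime relates a uniform proper acceleration a to an effective bath temperature:

```latex
% Unruh temperature for a uniformly accelerated detector in Minkowski
% spacetime. The abstract's point is that this acceleration-thermality
% equivalence survives only for an atom located on the string itself;
% off the string the conical topology modifies the transition rates.
T_U = \frac{\hbar\, a}{2\pi c\, k_B}
```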
2. Cosmic-ray acceleration during the impact of shocks on dense clouds
NASA Technical Reports Server (NTRS)
Jones, T. W.; Kang, Hyesung
1993-01-01
In order to elucidate the properties of diffusive shock acceleration in nonuniform environments, an extensive set of simulations was carried out of the dynamical interactions between plane nonradiative shocks and dense gas clouds initially in static equilibrium with their environments. These time-dependent calculations are based on the two-fluid model for diffusive cosmic ray transport, and include the dynamically active energetic proton component of the cosmic rays as well as passive electron and magnetic field components. Except when the incident shock is itself already dominated by cosmic ray pressure, it is found that the presence of the cloud adds little to the net acceleration efficiency of the original shock and can, in fact, slightly reduce the net amount of energy transferred to cosmic rays after a given time. It is found that, in 2D cloud simulations, the always-weak bow shock and the shock inside the cloud are less important to acceleration during the interaction than the tail shock.
3. Clusters of Galaxies as a Probe of the Cosmic Density
Richstone, Douglas
1994-05-01
We focus on the influence of cosmological model on the process of formation of clusters of galaxies. Richstone, Loeb and Turner (1992 ApJ 393, 477) have shown that under the assumptions of hierarchical formation and a Gaussian random field of perturbations, the rate at which matter is currently being added to the most massive virialized structures is a strong function of Omega_0 , and suggested that the observed frequency of substructure in clusters might be a probe of Omega . Evrard, Mohr, Fabricant and Geller (1993 ApJ Letters 419, L9) have shown that it is possible to compare SPH simulations of clusters to X-ray images of clusters using a test measuring the skewness of the image, to explore this effect. We report on calculations done in collaboration with Crone and Evrard, which explore the cosmological dependence of the cluster density profile and various tests of substructure in N-body simulations.
4. Global universe anisotropy probed by the alignment of structures in the cosmic microwave background.
PubMed
Wiaux, Y; Vielva, P; Martínez-González, E; Vandergheynst, P
2006-04-21
We question the global universe isotropy by probing the alignment of local structures in the cosmic microwave background (CMB) radiation. The original method proposed relies on a steerable wavelet decomposition of the CMB signal on the sphere. The analysis of the first-year Wilkinson Microwave Anisotropy Probe data identifies a mean preferred plane with a normal direction close to the CMB dipole axis, and a mean preferred direction in this plane, very close to the ecliptic poles axis. Previous statistical anisotropy results are thereby synthesized, but further analyses are still required to establish their origin. PMID:16712146
5. Black holes are neither particle accelerators nor dark matter probes.
PubMed
McWilliams, Sean T
2013-01-01
It has been suggested that maximally spinning black holes can serve as particle accelerators, reaching arbitrarily high center-of-mass energies. Despite several objections regarding the practical achievability of such high energies, and demonstrations past and present that such large energies could never reach a distant observer, interest in this problem has remained substantial. We show that, unfortunately, a maximally spinning black hole can never serve as a probe of high energy collisions, even in principle and despite the correctness of the original diverging energy calculation. Black holes can indeed facilitate dark matter annihilation, but the most energetic photons can carry little more than the rest energy of the dark matter particles to a distant observer, and those photons are actually generated relatively far from the black hole where relativistic effects are negligible. Therefore, any strong gravitational potential could probe dark matter equally well, and an appeal to black holes for facilitating such collisions is unnecessary. PMID:23383773
6. Fab 5: noncanonical kinetic gravity, self tuning, and cosmic acceleration
SciTech Connect
Appleby, Stephen A.; Linder, Eric V.; Felice, Antonio De
2012-10-01
We investigate circumstances under which one can generalize Horndeski's most general scalar-tensor theory of gravity. Specifically we demonstrate that a nonlinear combination of purely kinetic gravity terms can give rise to an accelerating universe without the addition of extra propagating degrees of freedom on cosmological backgrounds, and exhibit self tuning to bring a large cosmological constant under control. This nonlinear approach leads to new properties that may be instructive for exploring the behaviors of gravity.
7. Cosmic ray acceleration at perpendicular shocks in supernova remnants
SciTech Connect
Ferrand, Gilles; Danos, Rebecca J.; Shalchi, Andreas; Safi-Harb, Samar; Edmon, Paul; Mendygral, Peter
2014-09-10
Supernova remnants (SNRs) are believed to accelerate particles up to high energies through the mechanism of diffusive shock acceleration (DSA). Except for direct plasma simulations, all modeling efforts must rely on a given form of the diffusion coefficient, a key parameter that embodies the interactions of energetic charged particles with magnetic turbulence. The so-called Bohm limit is commonly employed. In this paper, we revisit the question of acceleration at perpendicular shocks by employing a realistic model of perpendicular diffusion. Our coefficient reduces to a power law in momentum for low momenta (of index α), but becomes independent of the particle momentum at high momenta (reaching a constant value κ_∞ above some characteristic momentum p_c). We first provide simple analytical expressions for the maximum momentum that can be reached at a given time with this coefficient. Then we perform time-dependent numerical simulations to investigate the shape of the particle distribution that can be obtained when the particle pressure back-reacts on the flow. We observe that for a given index α and injection level, the shock modifications are similar for different possible values of p_c, whereas the particle spectra differ markedly. Of particular interest, low values of p_c tend to remove the concavity once thought to be typical of non-linear DSA, and result in steep spectra, as required by recent high-energy observations of Galactic SNRs.
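A simple interpolation with the stated limits (a power law of index α at low momenta, flattening to κ_∞ above p_c), offered as an illustrative form rather than the exact coefficient used in the paper:

```latex
% One smooth form with the asymptotics described in the abstract:
%   kappa(p) ~ p^alpha for p << p_c,  kappa(p) -> kappa_inf for p >> p_c.
% The standard DSA estimate t_acc ~ kappa/u_sh^2 (up to factors of
% order unity) then shows why the maximum momentum saturates once
% kappa stops growing with p.
\kappa(p) = \kappa_\infty\,
\frac{(p/p_c)^{\alpha}}{1 + (p/p_c)^{\alpha}},
\qquad
t_{\rm acc}(p) \sim \frac{\kappa(p)}{u_{\rm sh}^{2}} .
```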
8. New accelerators for femtosecond beam pump-and-probe analysis
Uesaka, Mitsuru; Sakumi, Akira; Hosokai, Tomonao; Kinoshita, Kenichi; Yamaoka, Nobuaki; Zhidkov, Alexei; Ohkubo, Takeru; Ueda, Toru; Muroya, Yusa; Katsumura, Yosuke; Iijima, Hokuto; Tomizawa, Hiromitsu; Kumagai, Noritaka
2005-12-01
Femtosecond electron beams are a novel tool for pump-probe analysis of condensed matter. Progress in developing femtosecond electron beams with the use of both conventional accelerators and laser-plasma accelerators is discussed. In conventional accelerators, the critical issue is timing jitter and drift of the linac-laser synchronization system. Sophisticated electronic devices have been developed to reduce the jitter to 330 fs (rms); precise control of the temperature at several parts of the accelerator lessens the drift to 1 ps (rms). We also report on a full-optical X-ray and e-beam system based on the laser-plasma cathode using a 12 TW, 50 fs laser, which enables 40 MeV (at maximum), 40 fs (calculated), 100 pC, quasi-monochromatic single electron bunches. Since the synchronization is done by a passive optical beam-splitter, this system intrinsically has no jitter and drift. It could achieve tens-of-femtoseconds time-resolved analysis in the near future.
9. Cosmic gamma-ray propagation as a probe for intergalactic media and interactions
Huan, Hao
2012-05-01
Very-high-energy (VHE) gamma rays beyond 100 GeV, coming from galactic and extragalactic sources, reflect the most energetic non-thermal processes in the universe. The emission of these photons indicates the acceleration of charged particles to very high energies or the existence of exotic particles that annihilate or decay to photons. Observations of VHE gamma rays probing this highest energy window of electromagnetic waves thus can reveal the underlying acceleration processes or new astrophysical particles. The fluxes tend to be power-law spectra and this poses a difficulty for direct observation due to the low flux at the high-energy end and to the limited effective area of space-borne instruments. Ground-based VHE gamma-ray observatories therefore take advantage of the earth atmosphere as a calorimeter and observe the gamma rays indirectly via the electromagnetic cascade shower particles they produce. The shower particles are detected either directly or via the Cherenkov radiation they emit while propagating through the air. The current-generation telescopes adopting this ground-based methodology have confirmed several source categories and are starting to answer various physical and astronomical questions, e.g., the origin of cosmic rays, the nature of dark matter, the black hole accretion processes, etc. Together with multi-wavelength observations covering the full electromagnetic spectrum and astrophysical observatories of other particles (cosmic rays, neutrinos, etc.) VHE gamma-ray astronomy contributes as an indispensable part of the recently emerging field of multi-messenger particle astrophysics. When emitted by extragalactic sources, the VHE gamma rays undergo various interactions in the intergalactic medium as they propagate toward the earth. There is a guaranteed interaction, where the VHE gamma-ray photons are absorbed by the extragalactic background light (EBL), an isotropic background of optical-to-infrared photons coming from starlight or dust re
10. The Evolution of the Acceleration Mechanisms of Cosmic Rays and Relativistic Electrons in Radio Galaxies
Tsvyk, N.
We estimate the efficiency of different acceleration mechanisms for electron and proton cosmic rays (CRs) in radio galaxies, using an evolution model for jet gaps and shock fronts with turbulence. Diffusive shock acceleration of the CRs is shown to be the most efficient mechanism in FR II radio galaxies (RGs). At the same time, a break-pinch mechanism (acting briefly at the moment of a jet gap) and a stochastic turbulent mechanism (acting throughout the lifetime of the RG) play a significant part in accelerating the CRs, contributing 10-50% of the total acceleration efficiency. We predict which properties of the radio emission spectra allow the type of electron-CR acceleration mechanism operating in an RG to be identified.
11. Method for direct measurement of cosmic acceleration by 21-cm absorption systems.
PubMed
Yu, Hao-Ran; Zhang, Tong-Jie; Pen, Ue-Li
2014-07-25
So far there is only indirect evidence that the Universe is undergoing an accelerated expansion. The evidence for cosmic acceleration is based on the observation of different objects at different distances and requires invoking the Copernican cosmological principle and Einstein's equations of motion. We examine the direct observability using recession velocity drifts (Sandage-Loeb effect) of 21-cm hydrogen absorption systems in upcoming radio surveys. This measures the change in velocity of the same objects separated by a time interval and is a model-independent measure of acceleration. We forecast that for a CHIME-like survey with a decade time span, we can detect the acceleration of a ΛCDM universe with 5σ confidence. This acceleration test requires modest data analysis and storage changes from the normal processing and cannot be recovered retroactively. PMID:25105607
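The velocity drift exploited here has a standard closed form; for a comoving source at redshift z observed over a time interval Δt,

```latex
% Redshift drift (Sandage-Loeb effect): \dot{z} = (1+z)H_0 - H(z),
% which translates into a spectroscopic velocity shift measurable,
% e.g., in 21-cm absorption lines.
\Delta v = \frac{c\,\Delta z}{1+z}
         = c\,H_0\,\Delta t\left[1 - \frac{H(z)}{(1+z)\,H_0}\right].
```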
12. A Global Probe of Cosmic Magnetic Fields to High Redshifts
Kronberg, P. P.; Bernet, M. L.; Miniati, F.; Lilly, S. J.; Short, M. B.; Higdon, D. M.
2008-03-01
Faraday rotation (rotation measure [RM]) probes of magnetic fields in the universe are sensitive to cosmological and evolutionary effects as z increases beyond ~1 because of the scalings of electron density and magnetic fields, and the growth in the number of expected intersections with galaxy-scale intervenors, dN/dz. In this new global analysis of an unprecedentedly large sample of RMs of high-latitude quasars extending out to z ~ 3.7, we find that the distribution of RM broadens with redshift in the 20-80 rad m^-2 range, despite the (1+z)^-2 wavelength dilution expected in the observed Faraday rotation. Our results indicate that the universe becomes increasingly "Faraday-opaque" to sources beyond z ~ 2; that is, as z increases, progressively fewer sources are found with a "small" RM in the observer's frame. This is in contrast to sources at z ≲ 1. They suggest that the environments of galaxies were significantly magnetized at high redshifts, with magnetic field strengths that were at least as strong within a few Gyr of the big bang as at the current epoch. We separately investigate a simple unevolving toy model in which the RM is produced by Mg II absorber systems, and find that it can approximately reproduce the observed trend with redshift. An additional possibility is that the intrinsic RM associated with the radio sources was much higher in the past, and we show that this is not a trivial consequence of the higher radio luminosities of the high-redshift sources.
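The two scalings that drive this analysis, the line-of-sight RM integral and the (1+z)^-2 dilution of a rest-frame rotation measure, are standard:

```latex
% Rest-frame rotation measure of an intervenor and its observed value
% (n_e in cm^-3, B_parallel in microgauss, dl in parsecs):
\mathrm{RM}_{\rm rest} = 0.81 \int n_e\, B_{\parallel}\, dl
\;\;\mathrm{rad\,m^{-2}},
\qquad
\mathrm{RM}_{\rm obs} = \frac{\mathrm{RM}_{\rm rest}}{(1+z)^{2}} .
```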
13. Radio emission and nonlinear diffusive shock acceleration of cosmic rays in the supernova SN 1993J
Tatischeff, V.
2009-05-01
Aims: The extensive observations of the supernova SN 1993J at radio wavelengths make this object a unique target for the study of particle acceleration in a supernova shock. Methods: To describe the radio synchrotron emission we use a model that couples a semianalytic description of nonlinear diffusive shock acceleration with self-similar solutions for the hydrodynamics of the supernova expansion. The synchrotron emission, which is assumed to be produced by relativistic electrons propagating in the postshock plasma, is worked out from radiative transfer calculations that include the process of synchrotron self-absorption. The model is applied to explain the morphology of the radio emission deduced from high-resolution VLBI imaging observations and the measured time evolution of the total flux density at six frequencies. Results: Both the light curves and the morphology of the radio emission indicate that the magnetic field was strongly amplified in the blast wave region shortly after the explosion, possibly via the nonresonant regime of the cosmic-ray streaming instability operating in the shock precursor. The amplified magnetic field immediately upstream from the subshock is determined to be B_u ≈ 50 (t/1 day)^-1 G. The turbulent magnetic field was not damped behind the shock but carried along by the plasma flow in the downstream region. Cosmic-ray protons were efficiently produced by diffusive shock acceleration at the blast wave. We find that during the first 8.5 years after the explosion, about 19% of the total energy processed by the forward shock was converted to cosmic-ray energy. However, the shock remained weakly modified by the cosmic-ray pressure. The high magnetic field amplification implies that protons were rapidly accelerated to energies well above 10^15 eV. The results obtained for this supernova support the scenario that massive stars exploding into their former stellar wind are a major source of Galactic cosmic rays of energies above 10^15 eV.
14. A cocoon of freshly accelerated cosmic rays detected by Fermi in the Cygnus superbubble
Grenier, Isabelle A.; Tibaldo, Luigi; Fermi-LAT Collaboration
2013-02-01
Conspicuous stellar clusters, with high densities of massive stars, powerful stellar winds, and intense UV flux, have formed over the past few million years in the large molecular clouds of the Cygnus X region, 1.4 kpc away from the Sun. By capturing the gamma-ray signal of young cosmic rays spreading in the interstellar medium surrounding the clusters, the Fermi Large Area Telescope (LAT) has confirmed the long-standing hypothesis that massive-star forming regions host cosmic-ray factories. The 50-pc wide cocoon of energetic particles appears to fill the interstellar cavities carved by the stellar activity. The cocoon provides a first test case to study the impact of wind-powered turbulence on the early phases of cosmic-ray diffusion (between the sources and the Galaxy at large) and to study the acceleration potential of this type of superbubble environment for in-situ cosmic-ray production or to energize Galactic cosmic rays passing by.
15. Late decaying dark matter, bulk viscosity, and the cosmic acceleration
SciTech Connect
Mathews, G. J.; Kolda, C.; Lan, N. Q.
2008-08-15
We discuss a cosmology in which cold dark matter begins to decay into relativistic particles at a recent epoch (z < 1). We show that the large entropy production and associated bulk viscosity from such decays lead to an accelerating cosmology, as required by observations. We investigate the effects of decaying cold dark matter in a Λ = 0, flat, initially matter dominated cosmology. We show that this model satisfies the cosmological constraint from the redshift-distance relation for type Ia supernovae. The age in such models is also consistent with the constraints from the oldest stars and globular clusters. Possible candidates for this late decaying dark matter are suggested, along with additional observational tests of this cosmological paradigm.
16. Acceleration of cosmic rays by turbulence during reconnection events
Drake, Jim
2007-05-01
A Fermi-like model for energetic electron production during magnetic reconnection is described that converts a substantial fraction of released magnetic energy into energetic electrons [1]. Magnetic reconnection with a guide field leads to the growth and dynamics of multiple magnetic islands rather than a single large x-line. Electrons trapped within islands gain energy as they reflect from the ends of contracting magnetic islands. The resulting rate of energy gain dominates that from parallel electric fields. The pressure from energetic electrons rises rapidly until the rate of electron energy gain balances the rate of magnetic energy release, establishing for the first time a link between the energy gain of electrons and the released magnetic energy. The energetic particle pressure therefore throttles the rate of reconnection. A transport equation for the distribution of energetic particles, including their feedback on island contraction, is obtained by averaging over the particle interaction with many islands. The steady state solutions in reconnection geometry result from convective losses balancing the Fermi drive. At high energy, the distribution functions take the form of a power law whose spectral index depends only on the initial electron β, with lower (higher) β producing harder (softer) spectra. The spectral index matches that seen in recent Wind spacecraft observations in the Earth's magnetotail. Harder spectra are predicted for the low β conditions of the solar corona or other astrophysical systems. Ions can be similarly accelerated if they are above an energy threshold. 1. J. F. Drake, M. Swisdak, H. Che and M. Shay, Nature 443, 553, 2006.
17. Power requirements for cosmic ray propagation models involving re-acceleration and a comment on second-order Fermi acceleration theory
Thornbury, Andrew; Drury, Luke O'C.
2014-08-01
We derive an analytic expression for the power transferred from interstellar turbulence to the Galactic cosmic rays in propagation models which include re-acceleration. This is used to estimate the power required in such models and the relative importance of the primary acceleration as against re-acceleration. The analysis provides a formal mathematical justification for Fermi's heuristic account of second-order acceleration in his classic 1949 paper.
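The momentum-diffusion coefficient entering such power estimates is commonly parameterized through the spatial diffusion coefficient and the Alfvén speed (a standard quasilinear relation, not necessarily the authors' exact normalization):

```latex
% Diffusive re-acceleration couples spatial diffusion kappa(p) to
% momentum diffusion D_pp via the Alfven speed V_A, up to an O(1)
% factor that depends on the turbulence spectral index:
D_{pp}\,\kappa \simeq \frac{p^{2} V_A^{2}}{9},
\qquad
\tau_{\rm acc} \sim \frac{p^{2}}{D_{pp}} \simeq \frac{9\,\kappa(p)}{V_A^{2}} .
```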
18. VERITAS observations of supernova remnants for studies of cosmic ray acceleration
Park, Nahee
Supernova remnants (SNRs) have been suggested as the main sites for the acceleration of cosmic rays (CRs) with energies up to the knee region (~10^15 eV). Gamma-ray emission from SNRs can provide a unique window to observe cosmic ray acceleration and to test existing acceleration models in these objects. The Very Energetic Radiation Imaging Telescope Array System (VERITAS) is an array of atmospheric Cherenkov telescopes that measures gamma rays with energies higher than 100 GeV. Located in Arizona, USA, VERITAS has observed several SNRs in the northern hemisphere since the beginning of operations in 2007. These include two young SNRs of different types (Cassiopeia A and Tycho), as well as middle- to old-aged remnants with nearby target material such as molecular clouds. Gamma-ray data from different types of SNRs in different evolutionary stages are important for studying SNRs as CR accelerators. Here we present a summary of VERITAS results on Galactic SNRs including Tycho, and discuss what these observations have taught us.
19. Stochastic acceleration of solar cosmic rays in an expanding coronal magnetic bottle
SciTech Connect
Mullan, D.J.
1980-04-01
Several key features of the coronal propagation of solar cosmic rays have previously been explained by a "magnetic bottle" model proposed by Schatten and Mullan. The major apparent difficulty with that model is that expansion of the closed bottle might have a severe cooling effect on the cosmic rays trapped inside. In the present paper, we examine this difficulty by applying the equation for stochastic acceleration to an expanding bottle. Following our earlier suggestion, the scattering centers are taken to be small-scale magnetic inhomogeneities which are present in the corona prior to the flare, and which are set into turbulent motion when a flare-induced shock passes by. We identify the inhomogeneities with the collapsing magnetic neutral sheets discussed by Levine in the context of normal coronal heating. We find that the acceleration efficiencies can indeed be high enough to offset expansive cooling: within the time intervals that are typically available for closed bottle evolution (1000-3000 s), protons can be accelerated from 1 keV to 100 MeV and more. Our results indicate that the flux of particles which are accelerated to (say) 100 MeV is very sensitive to the shock speed if this speed is less than about 10^3 km s^-1.
20. Probing the gravitational wave signature from cosmic phase transitions at different scales
SciTech Connect
Krauss, Lawrence M.; Dent, James; Jones-Smith, Katherine; Mathur, Harsh
2010-08-15
We present a new signature by which one could potentially discriminate between a spectrum of gravitational radiation generated by a self-ordering scalar field vs that of inflation, specifically a comparison of the magnitude of a flat spectrum at frequencies probed by future direct detection experiments to the magnitude of a possible polarization signal in the cosmic microwave background radiation. In the process we clarify several issues related to the proper calculation of such modes, focusing on the effect of post-horizon-crossing evolution.
1. Cosmic far-ultraviolet background radiation - Probe of a dense hot intergalactic medium
NASA Technical Reports Server (NTRS)
Sherman, R. D.; Silk, J.
1979-01-01
Line and continuum radiation fluxes have been computed for a wide range of enriched intergalactic medium (IGM) models. Observations of the diffuse extragalactic light at optical and far-ultraviolet wavelengths are found to provide a potentially important probe of a dense hot intergalactic medium. If the diffuse X-ray background is produced by this gas, the models constrain the cosmological density parameter (Omega) to be less than 0.4. The associated Compton distortions of the cosmic blackbody background radiation and the optical depths to distant quasars at X-ray wavelengths are also evaluated.
2. Overview of the SuperNova/Acceleration Probe (SNAP)
SciTech Connect
2002-07-29
The SuperNova/Acceleration Probe (SNAP) is a space-based experiment to measure the expansion history of the Universe and study both its dark energy and the dark matter. The experiment is motivated by the startling discovery that the expansion of the Universe is accelerating. A 0.7 square-degree imager comprised of 36 large-format fully-depleted n-type CCDs sharing a focal plane with 36 HgCdTe detectors forms the heart of SNAP, allowing discovery and lightcurve measurements simultaneously for many supernovae. The imager and a high-efficiency low-resolution integral field spectrograph are coupled to a 2-m three-mirror anastigmat wide-field telescope, which will be placed in a high-earth orbit. The SNAP mission can obtain high-signal-to-noise calibrated light-curves and spectra for over 2000 Type Ia supernovae at redshifts between z = 0.1 and 1.7. The resulting data set can not only determine the amount of dark energy with high precision, but also test the nature of the dark energy by examining its equation of state. In particular, dark energy due to a cosmological constant can be differentiated from alternatives such as "quintessence" by measuring the dark energy's equation of state to an accuracy of ±0.05, and by studying its time dependence.
3. Acceleration of High Energy Cosmic Rays in the Nonlinear Shock Precursor
Derzhinsky, F.; Diamond, P. H.; Malkov, M. A.
2006-10-01
The problem of understanding acceleration of very energetic cosmic rays to energies above the 'knee' in the spectrum at 10^15-10^16 eV remains one of the great challenges in modern physics. Recently, we have proposed a new approach to understanding high energy acceleration, based on exploiting scattering of cosmic rays by inhomogeneities in the compressive nonlinear shock precursor, rather than by scattering across the main shock, as is conventionally assumed. We extend that theory by proposing a mechanism for the generation of mesoscale magnetic fields (kr_g < 1, where r_g is the cosmic ray gyroradius). The mechanism is the decay or modulational instability of resonantly generated Alfven waves scattering off ambient density perturbations in the precursor. Such perturbations can be produced by the Drury instability. This mechanism leads to the generation of longer-wavelength Alfven waves, thus enabling the confinement of higher energy particles. A simplified version of the theory, cast in the form of a Fokker-Planck equation for the Alfven population, will also be presented. This process also limits field generation on r_g scales.
4. Collisionless Shocks in a Partially Ionized Medium. III. Efficient Cosmic Ray Acceleration
SciTech Connect
Morlino, G.; Blasi, P.; Bandiera, R.; Amato, E.; Caprioli, D.
2013-05-10
In this paper, we present the first formulation of the theory of nonlinear particle acceleration in collisionless shocks in the presence of neutral hydrogen in the acceleration region. The dynamical reaction of the accelerated particles, the magnetic field amplification, and the magnetic dynamical effects on the shock are also included. The main new aspect of this study, however, consists of accounting for charge exchange and ionization of neutral hydrogen, which profoundly change the structure of the shock, as discussed in our previous work. This important dynamical effect of neutrals is mainly associated with the so-called neutral return flux, namely the return of hot neutrals from the downstream region to upstream, where they deposit energy and momentum through charge exchange and ionization. We also present the self-consistent calculation of Balmer line emission from the shock region and discuss how to use measurements of the anomalous width of the different components of the Balmer line to infer cosmic ray acceleration efficiency in supernova remnants showing Balmer emission: the broad Balmer line, which is due to charge exchange of hydrogen atoms with hot ions downstream of the shock, is shown to become narrower as a result of the energy drainage into cosmic rays, while the narrow Balmer line, due to charge exchange in the cosmic-ray-induced precursor, is shown to become broader. In addition to these two well-known components, the neutral return flux leads to the formation of a third component with an intermediate width: this too contains information on ongoing processes at the shock.
5. Probing the inner structure of blast furnaces by cosmic-ray muon radiography
Nagamine, K.; Tanaka, H. K. M.; Nakamura, S. N.; Ishida, K.; Hashimoto, M.; Shinotake, A.; Naito, M.; Hatanaka, A.
Using the near-horizontal cosmic-ray muon radiography system originally developed for probing the inner structure of volcanic mountains, a measurement was conducted to probe the inner structure of an iron-making blast furnace and its time-dependent changes. The thickness of the brick used for both the base plate and the side wall was determined to ±5 cm in 45 days, crucial information for predicting the lifetime of the furnace. The local density of the iron-rich part was also determined to ±0.2 g/cm^2 in 45 days; both the static structure and the time-dependent behavior of the iron-rich part of the furnace can thus be monitored during operation.
6. The difference PDF of 21-cm fluctuations: a powerful statistical tool for probing cosmic reionization
Barkana, Rennan; Loeb, Abraham
2008-03-01
A new generation of radio telescopes is currently being built with the goal of tracing the cosmic distribution of atomic hydrogen at redshifts 6-15 through its 21-cm line. The observations will probe the large-scale brightness fluctuations sourced by ionization fluctuations during cosmic reionization. Since detailed maps will be difficult to extract due to noise and foreground emission, efforts have focused on a statistical detection of the 21-cm fluctuations. During cosmic reionization, these fluctuations are highly non-Gaussian, and thus more information can be extracted than just the one-dimensional function that is usually considered, i.e. the correlation function. We calculate a two-dimensional function that, if measured observationally, would allow a more thorough investigation of the properties of the underlying ionizing sources. This function is the probability distribution function (PDF) of the difference in the 21-cm brightness temperature between two points, as a function of the separation between the points. While the standard correlation function is determined by a complicated mixture of contributions from density and ionization fluctuations, we show that the difference PDF holds the key to separately measuring the statistical properties of the ionized regions.
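To make the statistic concrete, here is a minimal sketch of how a difference PDF could be tabulated from a gridded brightness-temperature box (our illustration on a toy Gaussian field; a real analysis would use a reionization simulation, where the non-Gaussian shape appears):

import numpy as np

rng = np.random.default_rng(0)
# Toy 21-cm brightness-temperature box in mK (placeholder for a
# reionization simulation or an observed map).
box = rng.normal(0.0, 5.0, size=(64, 64, 64))

def difference_pdf(field, sep, bins=41):
    """PDF of T(x) - T(x + sep) for one separation, taken along one
    axis of a periodic box (sep in grid units)."""
    diff = field - np.roll(field, sep, axis=2)
    return np.histogram(diff.ravel(), bins=bins, density=True)

# The difference PDF is a function of two variables: the temperature
# difference and the pair separation.
for sep in (1, 4, 16):
    pdf, edges = difference_pdf(box, sep)
    print(sep, pdf.max())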
7. Particle acceleration and turbulence in cosmic ray shocks: possible pathways beyond the Bohm limit
Malkov, M. A.; Diamond, P. H.
2007-08-01
Diffusive shock acceleration is discussed in terms of its potential to accelerate cosmic rays (CR) to 10^18 eV (beyond the "knee," as observations suggest) and in terms of the related observational signatures (spectral features). One idea to reach this energy is to resonantly generate a turbulent magnetic field via accelerated particles much in excess of the background field. We identify difficulties with this scenario and suggest two separate mechanisms that can work in concert with one another, leading to a significant acceleration enhancement. The first mechanism is based on a nonlinear modification of the flow ahead of the shock supported by particles already accelerated to some specific (knee) momentum. The particles gain energy by bouncing off converging magnetic irregularities frozen into the flow in the shock precursor, and not so much by re-crossing the shock itself. The acceleration rate is determined by the gradient of the flow velocity and turns out to be formally independent of the particle mean free path. The velocity gradient is set by the knee-particles. The acceleration rate of particles above the knee does not decrease with energy, unlike in the linear acceleration regime. The knee (spectrum steepening) forms because particles above it are effectively confined to the shock only if they are within limited domains in momentum space, while other particles fall into "loss-islands", similar to the "loss-cone" of magnetic traps. This also maintains the steep velocity gradient and high acceleration rate. The second mechanism is based on the generation of Alfven waves at the gyroradius scale at the background field level, with a subsequent transfer to longer scales via interaction with strong acoustic turbulence in the shock precursor. The acoustic turbulence, in turn, may be generated by the Drury instability or by parametric instability of the Alfven waves.
8. Origin of cosmic rays. II. The cosmic-ray distribution and the spiral structure of NGC 3310. III. Particle acceleration by global spiral shocks
SciTech Connect
Duric, N.
1986-05-01
The optical and radio continuum properties of the spiral arms of NGC 3310 are analyzed and intercompared. The likely presence of a strong density wave in NGC 3310 is demonstrated, and a number of observational results constraining the relationship between synchrotron emission, emission line radiation, and starlight are developed. The role of supernova remnants in the production of relativistic particles is investigated and found to be inconsistent with the constraints. The generation of cosmic rays by global spiral shocks via a Fermi-type shock acceleration process is shown to agree with all the major constraints, suggesting that the rotation of the galaxy powers the acceleration of particles to cosmic ray energies. A medium with temperature of 10,000 K, partially to fully ionized, is shown to support the diffusive shock acceleration mechanism. 57 references.
9. Cosmic Rays Across the Universe
Gould Zweibel, Ellen
2016-01-01
Cosmic rays play an important role in the dynamics, energetics, and chemistry of gas inside and outside galaxies. It has long been recognized that gamma ray astronomy is a powerful probe of cosmic ray acceleration and propagation, and that gamma ray data, combined with other observations of cosmic rays and of the host medium and with modeling, can provide an integrated picture of cosmic rays and their environments. I will discuss the plasma physics underlying this picture, where it has been successful, and where issues remain.
10. Relativistic cosmic ray spectra in the full non-linear theory of shock acceleration
NASA Technical Reports Server (NTRS)
Eichler, D.; Ellison, D. C.
1985-01-01
The non-linear theory of shock acceleration was generalized to include wave dynamics. In the limit of rapid wave damping, it is found that a finite wave velocity tempers the acceleration of high Mach number shocks and limits the maximum compression ratio even when energy loss is important. For a given spectrum, the efficiency of relativistic particle production is essentially independent of v_ph. For the three families shown, the percentage of kinetic energy flux going into relativistic particles is (1) 72%, (2) 44%, and (3) 26% (this includes the energy loss at the upper energy cutoff). Even small v_ph, typical of the HISM, produce quasi-universal spectra that depend only weakly on the acoustic Mach number. These spectra should be close enough to E^-2 to satisfy cosmic ray source requirements.
11. Acceleration and propagation of high Z cosmic rays in a pulsar environment
NASA Technical Reports Server (NTRS)
Balasubrahmanyan, V. K.; Ormes, J. F.; Ryan, M. J.
1971-01-01
The survival of high Z nuclei in the X-ray photon field of a pulsar is investigated. For heavy nuclei with energies greater than or equal to 100 GeV/nucleon, 100 keV X-ray photons have sufficient energy to cause photodisintegration, with cross sections of approximately 10^-25 cm^2. Using the observed properties of the Crab pulsar, extrapolation back to epochs when the pulsar was more active indicates that the photon field is sufficiently dense to prevent the acceleration of heavy nuclei within the velocity-of-light cylinder. On this model, the upper limit on the energy of the escaping nuclei varies with time. The models for cosmic ray acceleration in supernova explosions or by pulsars will be related to experimental observations.
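A one-line check of why these numbers work (our note, standard relativistic kinematics): a nucleus at 100 GeV/nucleon has Lorentz factor \gamma \approx 100, so a head-on 100 keV photon appears in the nucleus rest frame with energy

\epsilon' \simeq 2\gamma\,\epsilon \approx 2 \times 100 \times 100\ {\rm keV} = 20\ {\rm MeV},

squarely in the giant-dipole-resonance region where photodisintegration cross sections peak, consistent with the ~10^-25 cm^2 quoted above.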
12. The solar wind structures associated with cosmic ray decreases and particle acceleration in 1978-1982
NASA Technical Reports Server (NTRS)
Cane, H. V.; Richardson, I. G.; Vonrosenvinge, T. T.
1992-01-01
The time histories of particles in the energy range 1 MeV to 1 GeV at the times of all cosmic ray decreases greater than 3 percent in the years 1978 to 1982 are studied. Essentially all 59 of the decreases commenced at or before the passages of interplanetary shocks, the majority of which accelerated energetic particles. We use the intensity-time profiles of the energetic particles to separate the cosmic ray decreases into four classes which we subsequently associate with four types of solar wind structures. Decreases in class 1 (15 events) and class 2 (26 events) can be associated with shocks which are driven by energetic coronal mass ejections. For class 1 events the ejecta is detected at 1 AU whereas this is not the case for class 2 events. The shock must therefore play a dominant role in producing the depression of cosmic rays in class 2 events. In all class 1 and 2 events (which comprise 69 percent of the total) the departure time of the ejection from the Sun (and hence the location) can be determined from the rapid onset of energetic particles several days before the shock passage at Earth. The class 1 events originate from within 50 deg of central meridian. Class 3 events (10 decreases) can be attributed to less energetic ejections which are directed towards the Earth. In these events the ejecta is more important than the shock in causing a depression in the cosmic ray intensity. The remaining events (14 percent of the total) can be attributed to corotating streams which have ejecta material embedded in them.
13. Cosmic-ray acceleration in supernova remnants: non-linear theory revised
SciTech Connect
Caprioli, Damiano
2012-07-01
A rapidly growing body of evidence, mostly coming from the recent gamma-ray observations of Galactic supernova remnants (SNRs), is seriously challenging our understanding of how particles are accelerated at fast shocks. The cosmic-ray (CR) spectra required to account for the observed phenomenology are in fact as steep as E^-2.2 to E^-2.4, i.e., steeper than the test-particle prediction of first-order Fermi acceleration, and significantly steeper than what is expected in a more refined non-linear theory of diffusive shock acceleration. By accounting for the dynamical back-reaction of the non-thermal particles, such a theory in fact predicts that the more efficient the particle acceleration, the flatter the CR spectrum. In this work we put forward a self-consistent scenario in which accounting for the magnetic field amplification induced by CR streaming produces the conditions for reversing such a trend, allowing, at the same time, for rather steep spectra and CR acceleration efficiencies (about 20%) consistent with the hypothesis that SNRs are the sources of Galactic CRs. In particular, we quantitatively work out the details of instantaneous and cumulative CR spectra during the evolution of a typical SNR, also stressing the implications of the observed levels of magnetization for both the expected maximum energy and the predicted CR acceleration efficiency. The latter naturally turns out to saturate around 10-30%, almost independently of the fraction of particles injected into the acceleration process, as long as this fraction is larger than about 10^-4.
14. Current and Prospective Constraints on Cosmic Acceleration using X-ray Galaxy Clusters and Supernovae
Rapetti, David A.; Allen, S. W.; Amin, M. A.; Blandford, R. D.
2006-09-01
We employ both a standard dynamical approach and a new kinematical approach to constrain cosmic acceleration using the three best available sets of redshift-independent distance measurements, from type Ia supernovae and X-ray cluster gas mass fraction measurements. The standard "dynamical" analysis employs the Friedmann equations and models dark energy as a fluid with an equation of state parameter, w. From a purely kinematical point of view, however, we can also construct models in terms of the dimensionless second and third derivatives of the scale factor a(t) with respect to cosmic time t, namely the present-day value of the deceleration parameter q_0 and the cosmic jerk parameter, j(t). A convenient feature of this parameterization is that all ΛCDM models have j(t) = 1 (constant), which facilitates simple tests for departures from the ΛCDM paradigm. We obtain clear statistical evidence for a late-time transition from a decelerating to an accelerating phase. For a flat model with constant jerk j(t) = j, we measure q_0 = -0.81 ± 0.14 and j = 2.16 (+0.81/-0.75). For a dynamical model with constant w we measure Omega_m = 0.306 (+0.042/-0.040) and w = -1.15 (+0.14/-0.18). Both kinematical and dynamical results are consistent with ΛCDM at the 1σ level. In comparison to dynamical analyses, the kinematical approach uses a different model set and employs a minimum of prior information, being independent of any particular gravity theory. We argue that both kinematical and dynamical techniques should be employed in future dark energy studies, where possible. Finally, we discuss the potential for future experiments including Constellation-X, which will constrain dark energy with comparable accuracy and in a beautifully complementary manner to the best other techniques available circa 2018.
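For reference, the kinematical parameters used above are defined directly from the scale factor a(t) (standard definitions, not specific to this paper):

q(t) \equiv -\frac{\ddot{a}\,a}{\dot{a}^{2}}, \qquad j(t) \equiv \frac{\dddot{a}}{a\,H^{3}}, \qquad H \equiv \frac{\dot{a}}{a},

so the statement that all ΛCDM models have j(t) = 1 makes a constant-jerk fit a simple null test of the paradigm.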
15. Supernova Acceleration Probe: Studying Dark Energy with Type Ia Supernovae
SciTech Connect
Albert, J.; Aldering, G.; Allam, S.; Althouse, W.; Amanullah, R.; Annis, J.; Astier, P.; Aumeunier, M.; Bailey, S.; Baltay, C.; Barrelet, E.; Basa, S.; Bebek, C.; Bergstom, L.; Bernstein, G.; Bester, M.; Besuner, B.; Bigelow, B.; Blandford, R.; Bohlin, R.; Bonissent, A.; et al.
2005-08-08
The Supernova Acceleration Probe (SNAP) will use Type Ia supernovae (SNe Ia) as distance indicators to measure the effect of dark energy on the expansion history of the Universe. (SNAP's weak-lensing program is described in a separate White Paper.) The experiment exploits supernova distance measurements up to their fundamental systematic limit; strict requirements on the monitoring of each supernova's properties lead to the need for a space-based mission. Results from pre-SNAP experiments, which characterize fundamental SN Ia properties, will be used to optimize the SNAP observing strategy to yield data which minimize both systematic and statistical uncertainties. With early R&D funding, we have achieved technological readiness and the collaboration is poised to begin construction. Pre-JDEM AO R&D support will further reduce technical and cost risk. Specific details on the SNAP mission can be found in Aldering et al. (2004, 2005). The primary goal of the SNAP supernova program is to provide a dataset which gives tight constraints on the parameters that characterize the dark energy, e.g. w_0 and w_a, where w(a) = w_0 + w_a(1 - a). SNAP data can also be used to directly test and discriminate among specific dark energy models. We will do so by building the Hubble diagram of high-redshift supernovae, the same methodology used in the original discovery of the acceleration of the expansion of the Universe that established the existence of dark energy (Perlmutter et al. 1998; Garnavich et al. 1998; Riess et al. 1998; Perlmutter et al. 1999). The SNAP SN Ia program focuses on minimizing the systematic floor of the supernova method through the use of characterized supernovae that can be sorted into subsets based on subtle signatures of heterogeneity. Subsets may be defined based on host-galaxy morphology, spectral-feature strength and velocity, early-time behavior, inter alia. Independent cosmological analysis of each subset of "like" supernovae can be
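To illustrate how the (w_0, w_a) parameterization enters the supernova Hubble diagram, here is a minimal numerical sketch (our illustration, not SNAP analysis code; the cosmological parameter values are assumptions):

import numpy as np
from scipy.integrate import quad

C_KM_S = 299792.458  # speed of light, km/s

def E(z, om=0.3, w0=-1.0, wa=0.0):
    """Dimensionless Hubble rate for a flat universe with CPL dark
    energy, w(a) = w0 + wa*(1 - a)."""
    de = (1 + z) ** (3 * (1 + w0 + wa)) * np.exp(-3 * wa * z / (1 + z))
    return np.sqrt(om * (1 + z) ** 3 + (1 - om) * de)

def lum_dist(z, h0=70.0, **kw):
    """Luminosity distance in Mpc (flat geometry)."""
    dc, _ = quad(lambda zp: 1.0 / E(zp, **kw), 0.0, z)
    return (1 + z) * C_KM_S / h0 * dc

def mu(z, **kw):
    """Distance modulus mu = 5 log10(d_L / 10 pc)."""
    return 5 * np.log10(lum_dist(z, **kw) * 1e6 / 10)

# Offset at z = 1 between a cosmological constant and a mild CPL model:
print(mu(1.0) - mu(1.0, w0=-0.95, wa=0.3))  # a few hundredths of a mag

Offsets of a few hundredths of a magnitude between such models illustrate the systematics floor the SNAP program is designed to reach.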
16. Probing Atmospheric Electric Fields through Radio Emission from Cosmic-Ray-Induced Air Showers
Scholten, Olaf; Trinh, Gia; Buitink, Stijn; Corstanje, Arthur; Ebert, Ute; Enriquez, Emilio; Falcke, Heino; Hoerandel, Joerg; Nelles, Anna; Schellart, Pim; Rachen, Joerg; Rutjes, Casper; ter Veen, Sander; Rossetto, Laura; Thoudam, Satyendra
2016-04-01
Energetic cosmic rays impinging on the atmosphere create a particle avalanche called an extensive air shower. In the leading plasma of this shower electric currents are induced that generate coherent radio wave emission that has been detected with LOFAR, a large and dense array of simple radio antennas primarily developed for radio-astronomy observations. Our measurements are performed in the 30-80 MHz frequency band. For fair weather conditions the observations are in excellent agreement with model calculations. However, for air showers measured under thunderstorm conditions we observe large differences in the intensity and polarization patterns from the predictions of fair weather models. We will show that the linear as well as the circular polarization of the radio waves carry clear information on the magnitude and orientation of the electric fields at different heights in the thunderstorm clouds. We will show that from the measured data at LOFAR the thunderstorm electric fields can be reconstructed. We thus have established the measurement of radio emission from extensive air showers induced by cosmic rays as a new tool to probe the atmospheric electric fields present in thunderclouds in a non-intrusive way. In part this presentation is based on the work: P. Schellart et al., Phys. Rev. Lett. 114, 165001 (2015).
17. Evidence for Particle Acceleration to the Knee of the Cosmic Ray Spectrum in Tycho's Supernova Remnant
SciTech Connect
Eriksen, Kristoffer A.; Hughes, John P.; Badenes, Carles; Fesen, Robert; Ghavamian, Parviz; Moffett, David; Plucinksy, Paul P.; Slane, Patrick; Rakowski, Cara E.; Reynoso, Estela M.
2011-02-20
Supernova remnants (SNRs) have long been assumed to be the source of cosmic rays (CRs) up to the 'knee' of the CR spectrum at 10^15 eV, accelerating particles to relativistic energies in their blast waves by the process of diffusive shock acceleration (DSA). Since CR nuclei do not radiate efficiently, their presence must be inferred indirectly. Previous theoretical calculations and X-ray observations show that CR acceleration significantly modifies the structure of the SNR and greatly amplifies the interstellar magnetic field. We present new, deep X-ray observations of the remnant of Tycho's supernova (SN 1572, henceforth Tycho), which reveal a previously unknown, strikingly ordered pattern of non-thermal high-emissivity stripes in the projected interior of the remnant, with spacing that corresponds to the gyroradii of 10^14-10^15 eV protons. Spectroscopy of the stripes shows the plasma to be highly turbulent on the (smaller) scale of the Larmor radii of TeV energy electrons. Models of the shock amplification of magnetic fields produce structure on the scale of the gyroradius of the highest energy CRs present, but they do not predict the highly ordered pattern we observe. We interpret the stripes as evidence for acceleration of particles to near the knee of the CR spectrum in regions of enhanced magnetic turbulence, while the observed highly ordered pattern of these features provides a new challenge to models of DSA.
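A back-of-the-envelope check of the stripe-spacing argument (our illustration; the amplified field value is an assumption of the kind inferred for young remnants):

import numpy as np

E_EV = 1e15        # proton energy near the knee, eV
B_GAUSS = 30e-6    # assumed amplified magnetic field, gauss
PC_CM = 3.086e18   # parsec in cm

# Ultrarelativistic gyroradius in Gaussian units: r_g = E / (Z e B),
# with E in erg and e = 4.803e-10 esu (Z = 1 for protons).
E_erg = E_EV * 1.602e-12
r_g_cm = E_erg / (4.803e-10 * B_GAUSS)
print(r_g_cm / PC_CM, "pc")  # ~0.04 pc; scales linearly with E/B

so knee-energy protons in a tens-of-microgauss field have gyroradii of a few hundredths of a parsec, of order the gyroradii of 10^14-10^15 eV protons quoted above.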
18. Entropy at the Outskirts of Galaxy Clusters as Implications for Cosmological Cosmic-Ray Acceleration
SciTech Connect
Fujita, Yutaka; Ohira, Yutaka; Yamazaki, Ryo
2013-04-10
Recently, gas entropy at the outskirts of galaxy clusters has attracted much attention. We propose that the entropy profiles could be used to study cosmic-ray (CR) acceleration around the clusters. If the CRs are effectively accelerated at the formation of clusters, the kinetic energy of infalling gas is consumed by the acceleration and the gas entropy should decrease. As a result, the entropy profiles become flat at the outskirts. If the acceleration is not efficient, the entropy should continue to increase outward. By comparing model predictions with X-ray observations with Suzaku, which show flat entropy profiles, we find that the CRs have carried ≲7% of the kinetic energy of the gas away from the clusters. Moreover, the CR pressure at the outskirts can be ≲40% of the total pressure. On the other hand, if the entropy profiles are not flat at the outskirts, as indicated by combined Planck and ROSAT observations, the carried energy and the CR pressure should be much smaller than the above estimations.
19. Simulation of Cosmic Ray Acceleration, Propagation And Interaction in SNR Environment
SciTech Connect
Lee, S. H.; Kamae, T.; Ellison, D. C.
2007-10-15
Recent studies of young supernova remnants (SNRs) with Chandra, XMM, Suzaku and HESS have revealed complex morphologies and spectral features of the emission sites. The critical question of the relative importance of the two competing gamma-ray emission mechanisms in SNRs (inverse-Compton scattering by high-energy electrons and pion production by energetic protons) may be resolved by GLAST-LAT. To keep pace with the improved observations, we are developing a 3D model of particle acceleration, diffusion, and interaction in a SNR where broad-band emission from radio to multi-TeV energies, produced by shock-accelerated electrons and ions, can be simulated for a given topology of shock fronts, magnetic field, and ISM densities. The 3D model takes as input the particle spectra predicted by a hydrodynamic simulation of SNR evolution where nonlinear diffusive shock acceleration is coupled to the remnant dynamics. We will present preliminary models of the Galactic Ridge SNR RX J1713.7-3946 for selected choices of SNR parameters, magnetic field topology, and ISM density distributions. When constrained by broad-band observations, our models should predict the extent of coupling between spectral shape and morphology and provide direct information on the acceleration efficiency of cosmic-ray electrons and ions in SNRs.
2. Probing the effective number of neutrino species with the cosmic microwave background
SciTech Connect
Ichikawa, Kazuhide; Sekiguchi, Toyokazu; Takahashi, Tomo
2008-10-15
We discuss how much we can probe the effective number of neutrino species N_ν with the cosmic microwave background alone. Using the data of the WMAP, ACBAR, CBI, and BOOMERANG experiments, we obtain a constraint on the effective number of neutrino species as 0.96
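For context (a textbook relation, not specific to this paper), N_ν parameterizes the relativistic energy density after electron-positron annihilation:

\rho_{\rm rad} = \rho_\gamma \left[ 1 + \frac{7}{8} \left( \frac{4}{11} \right)^{4/3} N_\nu \right],

with N_ν ≈ 3.046 expected for the three standard-model species once non-instantaneous neutrino decoupling is taken into account.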
3. Calibration and correction procedures for cosmic-ray neutron soil moisture probes located across Australia
Hawdon, Aaron; McJannet, David; Wallace, Jim
2014-06-01
The cosmic-ray probe (CRP) provides continuous estimates of soil moisture over an area of ~30 ha by counting fast neutrons produced from cosmic rays, which are predominantly moderated by water molecules in the soil. This paper describes the setup, measurement correction procedures, and field calibration of CRPs at nine locations across Australia with contrasting soil type, climate, and land cover. These probes form the inaugural Australian CRP network, which is known as CosmOz. CRP measurements require neutron count rates to be corrected for the effects of atmospheric pressure, water vapor pressure changes, and variations in incoming neutron intensity. We assess the magnitude and importance of these corrections and present standardized approaches for network-wide analysis. In particular, we present a new approach to correct for incoming neutron intensity variations and test its performance against existing procedures used in other studies. Our field calibration results indicate that a generalized calibration function for relating neutron counts to soil moisture is suitable for all soil types, with the possible exception of very sandy soils with low water content. Using multiple calibration data sets, we demonstrate that the generalized calibration function only applies after accounting for persistent sources of hydrogen in the soil profile. Finally, we demonstrate that by following standardized correction procedures and scaling neutron counting rates of all CRPs to a single reference location, differences in calibrations between sites are related to site biomass. This observation provides a means for estimating biomass at a given location or for deriving coefficients for the calibration function in the absence of field calibration data.
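A minimal sketch of the correction-and-calibration chain described above (our illustration; the shape coefficients are the widely used values from Desilets et al. 2010, while N0, the reference pressure, the attenuation length, and the bulk density are site-dependent assumptions fixed by field calibration):

import numpy as np

# Widely used shape coefficients for the N-to-theta relation
A0, A1, A2 = 0.0808, 0.372, 0.115

def pressure_correction(n_raw, p_hpa, p_ref=1013.25, att_length=130.0):
    """Scale raw neutron counts to a reference barometric pressure.
    att_length is the mass attenuation length in g/cm^2; pressure in
    hPa is converted to g/cm^2 via 1 hPa ~ 1.02 g/cm^2."""
    return n_raw * np.exp((p_hpa - p_ref) * 1.0197 / att_length)

def soil_moisture(n_corr, n0, rho_bulk=1.4):
    """Volumetric soil water content from corrected counts; n0 is the
    counting rate over dry soil obtained from field calibration."""
    theta_grav = A0 / (n_corr / n0 - A1) - A2   # gravimetric, g/g
    return theta_grav * rho_bulk                # volumetric, cm^3/cm^3

# Example with made-up numbers:
n = pressure_correction(n_raw=2200.0, p_hpa=990.0)
print(soil_moisture(n, n0=3000.0))

Corrections for water vapor and incoming neutron intensity (the focus of the paper) multiply the counts in the same way before the calibration function is applied.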
4. Ongoing cosmic ray acceleration in the supernova remnant W51C revealed with the MAGIC telescopes
Krause, J.; Reichardt, I.; Carmona, E.; Gozzini, S. R.; Jankowski, F.; MAGIC Collaboration
2012-12-01
The supernova remnant (SNR) W51C interacts with the molecular clouds of the star-forming region W51B, making the W51 complex one of the most promising targets to study cosmic ray acceleration. Gamma-ray emission from this region was discovered by Fermi/LAT and H.E.S.S., although its location was compatible with the SNR shell, the molecular cloud (MC) and a pulsar wind nebula (PWN) candidate. The modeling of the spectral energy distribution presented by the Fermi/LAT collaboration suggests a hadronic emission mechanism. Furthermore, indications of an enhanced flux of low energy cosmic rays in the interaction region between the SNR and the MC have been reported, based on ionization measurements in the mm regime. MAGIC conducted deep observations of W51, yielding a detection of extended emission with more than 11 standard deviations. We extend the spectrum from the highest Fermi/LAT energies to ~5 TeV and find that it follows a single power law with an index of 2.58 ± 0.07_stat ± 0.22_syst. We restrict the main part of the emission region to the zone where the SNR interacts with the molecular clouds. We also find a tail extending towards the PWN candidate CXO J192318.5+140305, possibly contributing up to 20% of the total flux. The broad-band spectral energy distribution can be explained with a hadronic model that implies proton acceleration at least up to 50 TeV. This result, together with the morphology of the source, suggests that we observe ongoing acceleration of ions in the interaction zone between the SNR and the cloud.
5. Origin of high energy Galactic cosmic rays
NASA Technical Reports Server (NTRS)
Gaisser, T. K.
1990-01-01
The flux of cosmic ray antiprotons and the chemical composition in the region of the 'knee' of the cosmic ray energy spectrum are discussed. The importance of a direct determination of the energy spectrum of each major component of cosmic radiation through the knee region is stressed, and the necessary kinds of experiments are described. It is emphasized that antiprotons are a unique probe of acceleration and propagation of energetic particles in the galaxy because of the high threshold for their production.
6. Dodging the cosmic curvature to probe the constancy of the speed of light
Cai, Rong-Gen; Guo, Zong-Kuan; Yang, Tao
2016-08-01
We develop a new model-independent method to probe the constancy of the speed of light c. In our method, the degeneracy between the cosmic curvature and the speed of light can be eliminated, which makes the test more natural and general. Combining the independent observations of the Hubble parameter H(z) and the luminosity distance d_L(z), we use the model-independent smoothing technique of Gaussian processes to reconstruct them and then detect variation of the speed of light. We find no signal of deviation from the present value of the speed of light c_0. Moreover, to demonstrate the improvement in probing the constancy of the speed of light from future experiments, we produce a series of simulated data. The Dark Energy Survey will be able to detect Δc/c_0 ~ 1% at ~1.5σ confidence level and Δc/c_0 ~ 2% at ~3σ confidence level. If the errors are reduced to one-tenth of the expected DES ones, it can detect a Δc/c_0 ~ 0.1% variation at ~2σ confidence level.
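A schematic of the reconstruction step (our sketch, not the authors' code; the data values are placeholders in the style of cosmic-chronometer H(z) measurements): Gaussian-process regression returns a smooth H(z) with uncertainties, which can be combined with a d_L(z) reconstruction to test whether c varies.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Placeholder H(z) data: redshift, H in km/s/Mpc, 1-sigma error
z = np.array([0.07, 0.2, 0.4, 0.6, 0.9, 1.3, 1.75])
H = np.array([69.0, 72.9, 95.0, 87.9, 117.0, 168.0, 202.0])
sH = np.array([19.6, 29.6, 17.0, 6.1, 23.0, 17.0, 40.0])

kernel = ConstantKernel(100.0) * RBF(length_scale=1.0)
gp = GaussianProcessRegressor(kernel=kernel, alpha=sH**2,
                              normalize_y=True)
gp.fit(z.reshape(-1, 1), H)

# Smooth, model-independent reconstruction of H(z) with uncertainty
z_grid = np.linspace(0.0, 1.8, 50).reshape(-1, 1)
H_rec, H_err = gp.predict(z_grid, return_std=True)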
7. Is the acceleration of anomalous cosmic rays affected by the geometry of the termination shock?
SciTech Connect
Senanayake, U. K.; Florinski, V.
2013-12-01
Historically, anomalous cosmic rays (ACRs) were thought to be accelerated at the solar-wind termination shock (TS) by the diffusive shock acceleration process. When Voyager 1 crossed the TS in 2004, the measured ACR spectra did not match the theoretical prediction of a continuous power law, and the source of the high-energy ACRs was not observed. When Voyager 2 crossed the TS in 2007, it produced similar results. Several possible explanations have since appeared in the literature, but we follow the suggestion that ACRs are still accelerated at the shock, only away from the Voyager crossing points. To investigate this hypothesis closer, we study ACR acceleration using a three-dimensional, non-spherical model of the heliosphere that is axisymmetric with respect to the interstellar flow direction. We then compare the results with those obtained for a spherical TS. A semi-analytic model of the plasma and magnetic field backgrounds is developed to permit an investigation over a wide range of parameters under controlled conditions. The model is applied to helium ACRs, whose phase-space trajectories are stochastically integrated backward in time until a pre-specified, low-energy boundary, taken to be 0.5 MeV nucleon^-1 (the so-called injection energy), is reached. Our results show that ACR acceleration is quite efficient on the heliotail-facing part of the TS. For small values of the perpendicular diffusion coefficient, our model yields a positive intensity gradient between the TS and about midway through the heliosheath, in agreement with the Voyager observations.
8. On ultra-high energy cosmic ray acceleration at the termination shock of young pulsar winds
Lemoine, Martin; Kotera, Kumiko; Pétri, Jérôme
2015-07-01
Pulsar wind nebulae (PWNe) are outstanding accelerators in Nature, in the sense that they accelerate electrons up to the radiation reaction limit. Motivated by this observation, this paper examines the possibility that young pulsar wind nebulae can accelerate ions to ultra-high energies at the termination shock of the pulsar wind. We consider here powerful PWNe, fed by pulsars born with ~millisecond periods. Assuming that such pulsars exist, at least during a few years after the birth of the neutron star, and that they inject ions into the wind, we find that protons could be accelerated up to energies of the order of the Greisen-Zatsepin-Kuzmin cut-off, for a fiducial rotation period P ~ 1 ms and a pulsar magnetic field B_star ~ 10^13 G, implying a fiducial wind luminosity L_p ~ 10^45 erg/s and a spin-down time t_sd ~ 3×10^7 s. The main limiting factor is set by synchrotron losses in the nebula and by the size of the termination shock; ions with Z ≥ 1 may therefore be accelerated to even higher energies. We derive an associated neutrino flux produced by interactions in the source region. For a proton-dominated composition, our maximum flux lies slightly below the 5-year sensitivity of IceCube-86 and above the 3-year sensitivity of the projected Askaryan Radio Array. It might thus become detectable in the next decade, depending on the exact level of contribution of these millisecond pulsar wind nebulae to the ultra-high energy cosmic ray flux.
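A quick check of the quoted numbers using the general luminosity bound for magnetized outflows, E_max ~ Ze (L/c)^(1/2) in Gaussian units (a standard order-of-magnitude constraint; our illustration, not the paper's full calculation):

import numpy as np

L_WIND = 1e45        # fiducial wind luminosity from above, erg/s
C = 2.998e10         # speed of light, cm/s
E_ESU = 4.803e-10    # elementary charge, esu
ERG_TO_EV = 6.242e11

Z = 1  # protons
E_max_erg = Z * E_ESU * np.sqrt(L_WIND / C)
print(E_max_erg * ERG_TO_EV, "eV")  # ~5e19 eV, of order the GZK cut-off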
9. Shock waves and cosmic ray acceleration in the outskirts of galaxy clusters
SciTech Connect
Hong, Sungwook E.; Ryu, Dongsu; Kang, Hyesung; Cen, Renyue
2014-04-20
The outskirts of galaxy clusters are continuously disturbed by mergers and gas infall along filaments, which in turn induce turbulent flow motions and shock waves. We examine the properties of shocks that form within r_200 in sample galaxy clusters from structure formation simulations. While most of these shocks are weak and inefficient accelerators of cosmic rays (CRs), there are a number of strong, energetic shocks which can produce large amounts of CR protons via diffusive shock acceleration. We show that the energetic shocks reside mostly in the outskirts and a substantial fraction of them are induced by infall of the warm-hot intergalactic medium from filaments. As a result, the radial profile of the CR pressure in the intracluster medium is expected to be broad, dropping off more slowly than that of the gas pressure, and might be even temporarily inverted, peaking in the outskirts. The volume-integrated momentum spectrum of CR protons inside r_200 has a power-law slope of 4.25-4.5, indicating that the average Mach number of the shocks responsible for most CR production is in the range M_CR ≈ 3-4. We suggest that some radio relics with relatively flat radio spectra could be explained by primary electrons accelerated by energetic infall shocks with M_s ≳ 3 induced in the cluster outskirts.
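The link between the quoted slope and Mach number is the test-particle result of diffusive shock acceleration: the momentum power-law index is q = 3r/(r - 1), with compression ratio r = 4M^2/(M^2 + 3) for a gamma = 5/3 gas. A quick check (our illustration):

def dsa_slope(mach):
    """Test-particle DSA momentum slope f(p) ~ p^-q for a gas with
    adiabatic index 5/3 and sonic Mach number mach."""
    r = 4.0 * mach**2 / (mach**2 + 3.0)   # shock compression ratio
    return 3.0 * r / (r - 1.0)

for m in (3.0, 4.0, 10.0):
    print(m, round(dsa_slope(m), 2))
# M = 3 -> q = 4.5 and M = 4 -> q ~ 4.27, reproducing the 4.25-4.5 range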
10. Dynamics of rising magnetized cavities and ultrahigh energy cosmic ray acceleration in clusters of galaxies
Gourgouliatos, Konstantinos N.; Lyutikov, Maxim
2012-02-01
We study the expansion of low-density cavities produced by active galactic nucleus jets in clusters of galaxies. The long-term stability of these cavities requires the presence of linked magnetic fields. We find solutions describing the self-similar expansion of structures containing large-scale electromagnetic fields. Unlike the force-free spheromak-like configurations, these solutions have no surface currents and, thus, are less susceptible to resistive decay. The cavities are internally confined by external pressure, with zero gradient at the surface. If the adiabatic index of the plasma within the cavity is Γ > 4/3, the expansion ultimately leads to the formation of large-scale current sheets. The resulting dissipation of the magnetic field can only partially offset the adiabatic and radiative losses of radio-emitting electrons. We demonstrate that if the formation of large-scale current sheets is accompanied by explosive reconnection of the magnetic field, the resulting reconnection layer can accelerate cosmic rays to ultrahigh energies. We speculate that the enhanced flux of ultrahigh energy cosmic rays towards Centaurus A originates at the cavities due to magnetic reconnection.
11. High energy neutrinos from primary cosmic rays accelerated in the cores of active galaxies
NASA Technical Reports Server (NTRS)
Stecker, F. W.; Done, C.; Salamon, M. H.; Sommers, P.
1991-01-01
The spectra and high-energy neutrino fluxes are calculated from photomeson production in active galactic nuclei (AGN) such as quasars and Seyfert galaxies, using recent UV and X-ray observations to define the photon fields and an accretion-disk shock-acceleration model for producing ultrahigh-energy cosmic rays in the AGN. Collectively, AGN should produce the dominant isotropic neutrino background between 10^4 and 10^10 GeV. Measurement of this background could be critical in determining the energy-generation mechanism, evolution, and distribution of AGN. High-energy background spectra and spectra from bright AGN such as NGC 4151 and 3C 273 are predicted which should be observable with present detectors. High-energy AGN neutrinos should produce a sphere of stellar disruption around their cores which could explain their observed broad-line emission regions.
12. A test of the nature of cosmic acceleration using galaxy redshift distortions.
PubMed
Guzzo, L; Pierleoni, M; Meneux, B; Branchini, E; Le Fèvre, O; Marinoni, C; Garilli, B; Blaizot, J; De Lucia, G; Pollo, A; McCracken, H J; Bottini, D; Le Brun, V; Maccagni, D; Picat, J P; Scaramella, R; Scodeggio, M; Tresse, L; Vettolani, G; Zanichelli, A; Adami, C; Arnouts, S; Bardelli, S; Bolzonella, M; Bongiorno, A; Cappi, A; Charlot, S; Ciliegi, P; Contini, T; Cucciati, O; de la Torre, S; Dolag, K; Foucaud, S; Franzetti, P; Gavignaud, I; Ilbert, O; Iovino, A; Lamareille, F; Marano, B; Mazure, A; Memeo, P; Merighi, R; Moscardini, L; Paltani, S; Pellò, R; Perez-Montero, E; Pozzetti, L; Radovich, M; Vergani, D; Zamorani, G; Zucca, E
2008-01-31
Observations of distant supernovae indicate that the Universe is now in a phase of accelerated expansion, the physical cause of which is a mystery. Formally, this requires the inclusion of a term acting as a negative pressure in the equations of cosmic expansion, accounting for about 75 per cent of the total energy density in the Universe. The simplest option for this 'dark energy' corresponds to a 'cosmological constant', perhaps related to the quantum vacuum energy. Physically viable alternatives invoke either the presence of a scalar field with an evolving equation of state, or extensions of general relativity involving higher-order curvature terms or extra dimensions. Although they produce similar expansion rates, different models predict measurable differences in the growth rate of large-scale structure with cosmic time. A fingerprint of this growth is provided by coherent galaxy motions, which introduce a radial anisotropy in the clustering pattern reconstructed by galaxy redshift surveys. Here we report a measurement of this effect at a redshift of 0.8. Using a new survey of more than 10,000 faint galaxies, we measure the anisotropy parameter β = 0.70 ± 0.26, which corresponds to a growth rate of structure at that time of f = 0.91 ± 0.36. This is consistent with the standard cosmological-constant model with low matter density and flat geometry, although the error bars are still too large to distinguish among alternative origins for the accelerated expansion. The correct origin could be determined with a further factor-of-ten increase in the sampled volume at similar redshift. PMID:18235494
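For comparison (our illustration of a standard approximation, not part of the paper): in general relativity the growth rate is well approximated by f(z) ≈ Ω_m(z)^0.55, which at z = 0.8 for a low-matter-density flat model gives a value consistent with the measurement above.

import numpy as np

def growth_rate(z, om0=0.25):
    """GR growth-rate approximation f ~ Omega_m(z)^0.55, flat LCDM."""
    e2 = om0 * (1 + z) ** 3 + (1 - om0)    # H(z)^2 / H0^2
    om_z = om0 * (1 + z) ** 3 / e2
    return om_z ** 0.55

print(growth_rate(0.8))  # ~0.80, within the measured 0.91 +/- 0.36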
13. Cosmic Ray Neutron Probe Soil Water Measurements over Complex Terrain in Austria
Vreugdenhil, Mariette; Weltin, Georg; Kheng Heng, Lee; Wahbi, Ammar; Oismueller, Markus; Dercon, Gerd
2014-05-01
The importance of surface soil water (the rooting zone) has become evident with climate change affecting rainfall patterns and crop production. The use of the Cosmic Ray Neutron Probe (CRNP) for measuring surface soil water has become increasingly popular. The advantage of the CRNP is that it is a non-invasive technique for measuring soil water content at an area-wide scale, in contrast to more conventional techniques, which measure mainly at the field scale (point level). The CRNP integrates over a circular area of ca. 600 meters in diameter, to a depth of 70 cm, giving an average value for soil water content. Cosmic radiation interacting with the Earth's atmosphere continuously generates neutrons. At the Earth's surface, these neutrons interact with surface water and are slowed down. At sub-micrometer geometries, these neutrons affect semiconductor devices, so they can be counted, slow and fast ones separately. From the difference in numbers between fast and slow neutrons, soil water content is calculated. As a first for Austria, a CRNP (CRS 1000/B model) consisting of two neutron counters (one tuned for slow, the other for fast neutrons), a data logger and an Iridium modem was installed at the Petzenkirchen research station of the Doctoral Programme for Water Resource Systems (TU Vienna), at latitude 48.14° and longitude 15.17°, 100 km west of Vienna, in late autumn 2013. The research station is located in an undulating agricultural landscape, characterized by heavy Cambisols and Planosols, with winter wheat and barley as the main crops in winter, and maize and sunflower in summer. In addition, an in-situ soil moisture network consisting of 32 stations of Time Domain Transmissivity (TDT) sensors measuring soil water at 4 depths (0.05, 0.10, 0.20 and 0.50 m) over an area of 64 ha has been established. This TDT network is currently being used to validate the use of the innovative CRNP technique. First results will be shown at the EGU 2014.
14. The Super-TIGER Instrument to Probe Galactic Cosmic Ray Origins
NASA Technical Reports Server (NTRS)
Mitchell, John W.; Binns, W. R.; Bose, R, G.; Braun, D. L.; Christian, E. R.; Daniels, W. M; DeNolfo, G. A.; Dowkontt, P. F.; Hahne, D. J.; Hams, T.; Israel, M. H.; Klemic, J.; Labrador, A. W.; Link, J. T.; Mewaldt, R. A.; Moore, P. R.; Murphy, R. P.; Olevitch, M. A.; Rauch, B. F.; SanSebastian, F.; Sasaki, M.; Simburger, G. E.; Stone, E. C.; Waddington, C. J.
2011-01-01
Super-TIGER (Super Trans-Iron Galactic Element Recorder) is under construction for the first of two planned Antarctic long-duration balloon flights in December 2012. This new instrument will measure the abundances of ultra-heavy elements (30Zn and heavier), with individual element resolution, to provide sensitive tests of the emerging model of cosmic-ray origins in OB associations and models of the mechanism for selection of nuclei for acceleration. Super-TIGER builds on the techniques of TIGER, which produced the first well-resolved measurements of elemental abundances of the elements 31Ga, 32Ge, and 34Se. Plastic scintillators together with acrylic and silica-aerogel Cherenkov detectors measure particle charge. Scintillating-fiber hodoscopes track particle trajectories. Super-TIGER has an active area of 5.4 m^2, divided into two independent modules. With reduced material thickness to decrease interactions, its effective geometry factor is approximately 6.4 times larger than TIGER's, allowing it to measure elements up to 42Mo with high statistical precision, and to make exploratory measurements up to 56Ba. Super-TIGER will also accurately determine the energy spectra of the more abundant elements from 10Ne to 28Ni between 0.8 and 10 GeV/nucleon to test the hypothesis that microquasars or other sources could superpose spectral features. We will discuss the implications of Super-TIGER measurements for the study of cosmic-ray origins and will present the measurement technique, design, status, and expected performance, including numbers of events and resolution. Details of the hodoscopes, scintillators, and Cherenkov detectors will be given in other presentations at this conference.
15. Model experiment of cosmic ray acceleration due to an incoherent wakefield induced by an intense laser pulse
SciTech Connect
Kuramitsu, Y.; Sakawa, Y.; Takeda, K.; Tampo, M.; Takabe, H.; Nakanii, N.; Kondo, K.; Tsuji, K.; Kimura, K.; Fukumochi, S.; Kashihara, M.; Tanimoto, T.; Nakamura, H.; Ishikura, T.; Kodama, R.; Mima, K.; Tanaka, K. A.; Mori, Y.; Miura, E.; Kitagawa, Y.
2011-01-15
The first report on a model experiment of cosmic ray acceleration using intense laser pulses is presented. Large-amplitude light waves are considered to be excited in the upstream regions of relativistic astrophysical shocks, where wakefield acceleration of cosmic rays can take place. By substituting an intense laser pulse for the large-amplitude light waves, such shock environments were modeled in a laboratory plasma. A plasma tube, created by imploding a hollow polystyrene cylinder, was irradiated by an intense laser pulse. Nonthermal electrons were generated by the wakefield acceleration, and the energy distribution functions of the electrons have a power-law component with an index of ~2. The maximum attainable energy of the electrons in the experiment is discussed with a simple analytic model. In the incoherent wakefield, the maximum energy can be much larger than that in the coherent field due to momentum-space diffusion, or energy diffusion, of the electrons.
16. Probing 'Parent Universe' in Loop Quantum Cosmology with B-mode Polarization in Cosmic Microwave Background
Lucky Chang, Wen-Hsuan; Proty Wu, Jiun-Huei
2016-06-01
We aim to use observations of B-mode polarization in the Cosmic Microwave Background (CMB) to probe the 'parent universe' in the context of Loop Quantum Cosmology (LQC). In particular, we investigate the possibility for gravitational waves (GWs), such as those from stellar binary systems in the parent universe, to survive the big bounce and thus still be observable today. Our study is based on the background dynamics with the zeroth-order holonomy correction using the Arnowitt-Deser-Misner (ADM) formalism. We propose a new framework in which transfer functions are invoked to bring the GWs in the parent universe through the big bounce, inflation, and big bang to reach today. This transparent and intuitive formalism allows us to accurately discuss the influence of the GWs from the parent universe on the B-mode polarization in the CMB today under backgrounds of different LQC parameters. These features can soon be tested by the forthcoming CMB observations, and we note that LQC backgrounds with symmetric bouncing scenarios are ruled out by the latest observational results from the Planck and BICEP2/Keck experiments.
17. Anisotropies in the cosmic neutrino background after Wilkinson Microwave Anisotropy Probe five-year data
SciTech Connect
De Bernardis, Francesco; Pagano, Luca; Melchiorri, Alessandro; Serra, Paolo; Cooray, Asantha
2008-06-15
We search for the presence of cosmological neutrino background (CNB) anisotropies in recent Wilkinson Microwave Anisotropy Probe (WMAP) five-year data using their signature imprinted on modifications to the cosmic microwave background (CMB) anisotropy power spectrum. By parameterizing the neutrino background anisotropies with the speed viscosity parameter c_vis, we find that the WMAP five-year data alone provide only a weak indication for CNB anisotropies, with c_vis^2 > 0.06 at the 95% confidence level. When we combine CMB anisotropy data with measurements of galaxy clustering, the SN-Ia Hubble diagram, and other cosmological information, the detection increases to c_vis^2 > 0.16 at the same 95% confidence level. Future data from Planck, combined with a weak lensing survey such as the one expected with DUNE from space, will be able to measure the CNB anisotropy parameter to about 10% accuracy. We discuss the degeneracy between neutrino background anisotropies and other cosmological parameters such as the number of effective neutrino species and the dark energy equation of state.
18. Radius of influence for a cosmic-ray soil moisture probe: theory and Monte Carlo simulations
SciTech Connect
Desilets, Darin
2011-02-01
The lateral footprint of a cosmic-ray soil moisture probe was determined using diffusion theory and neutron transport simulations. The footprint is radial and can be described by a single parameter, an e-folding length that is closely related to the slowing-down length in air. In our work the slowing-down length is defined as the crow-flight distance traveled by a neutron from nuclear emission as a fast neutron to detection at a lower energy threshold defined by the detector. Here the footprint is defined as the area encompassed by two e-fold distances, i.e. the area from which 86% of the recorded neutrons originate. The slowing-down length is approximately 150 m at sea level for neutrons detected over a wide range of energies, from 10^0 to 10^5 eV. Both theory and simulations indicate that the slowing-down length is inversely proportional to air density and linearly proportional to the height of the sensor above the ground for heights up to 100 m. Simulations suggest that the radius of influence for neutrons >1 eV is only slightly influenced by soil moisture content, and depends weakly on the energy sensitivity of the neutron detector. Good agreement between the theoretical slowing-down length in air and the simulated slowing-down length near the air/ground interface supports the conclusion that the footprint is determined mainly by the neutron scattering properties of air.
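The 86% figure is simply the two-e-folding fraction of an exponential weighting (our note): if the cumulative fraction of recorded neutrons originating within a horizontal distance r grows as 1 - e^{-r/L}, then

\int_0^{2L} \frac{1}{L}\, e^{-r/L}\, dr = 1 - e^{-2} \approx 0.86,

so with L ≈ 150 m at sea level the footprint radius is roughly 300 m, shrinking as air density increases.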
19. Studying Star and Planet Formation with the Submillimeter Probe of the Evolution of Cosmic Structure
NASA Technical Reports Server (NTRS)
Rinehart, Stephen A.
2005-01-01
The Submillimeter Probe of the Evolution of Cosmic Structure (SPECS) is a far-infrared/submillimeter (40-640 micrometers) spaceborne interferometry concept, studied through the NASA Vision Missions program. SPECS is envisioned as a 1-km baseline Michelson interferometer with two 4-meter collecting mirrors. To maximize science return, SPECS will have three operational modes: a photometric imaging mode, an intermediate spectral resolution mode (R ≈ 1000-3000), and a high spectral resolution mode (R ≈ 3×10^5). The first two of these modes will provide information on all sources within a 1 arcminute field-of-view (FOV), while the third will include sources in a small (≈5 arcsec) FOV. With this design, SPECS will have angular resolution comparable to the Hubble Space Telescope (50 mas) and sensitivity more than two orders of magnitude better than Spitzer (5σ in 10 ks of ≈3×10^7 Jy Hz). We present here some of the results of the recently-completed Vision Mission Study for SPECS, and discuss the application of this mission to future studies of star and planet formation.
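The quoted angular resolution follows from the usual interferometric diffraction limit θ ≈ λ/B (our back-of-the-envelope check; the mid-band wavelength is chosen for illustration):

RAD_TO_MAS = 2.06265e8  # radians to milliarcseconds

wavelength_m = 250e-6   # assumed mid-band wavelength, 250 micrometers
baseline_m = 1000.0     # SPECS maximum baseline, 1 km

theta_mas = wavelength_m / baseline_m * RAD_TO_MAS
print(theta_mas)  # ~52 mas, matching the quoted HST-like resolution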
20. GRB time profiles as cosmic probes: Is time dilation extrinsic or intrinsic?
SciTech Connect
Norris, J. P.; Nemiroff, R. J.
1998-05-16
Recent detections of gamma-ray burst (GRB) counterparts confirm their great distances and consequent potential use as cosmic probes. However, GRB diversity may thwart this idea. The scatter of intrinsic GRB properties could easily mask the extrinsic effect of cosmological time dilation. Current investigations examine the question of temporal self-similarity: Extrinsic time dilation must be manifest with the same factor on all GRB timescales as a function of distance. Here we show that time-reversal-independent analysis of GRB time profiles using a peak alignment methodology reveals average profiles with approximately equal rise and decay timescales, per peak-flux group, except for the dimmest group. This departure from self-similarity is consistent with a selection effect: At sufficiently low peak fluxes, the BATSE causal trigger misses dim, slowly rising bursts. Interestingly, for GRB970508, the redshift-corrected time-dilation factor inferred for its peak flux is consistent with the measured redshift, z=0.835, if bright bursts lie at redshifts of a few tenths.
1. Probing The Cosmic History of Light With High-Energy Gamma Rays
Hartmann, Dieter
2016-01-01
The Cosmic Microwave Background (CMB) holds answers to many questions of modern cosmology. The origin of the CMB lies in the early universe, and when it was released during the recombination phase the conditions were not yet right for new sources of light. But the first generation of stars, born in a mostly neutral universe, quickly re-ionized their surrounding baryonic environments, and dust was produced which allowed reprocessing of some starlight into the infrared spectral region. Black holes and other compact objects were born, and the emissions from their accretion processes and relativistic jetted outflows contributed new light. Today, we observe this evolving radiation field as the Extragalactic Background Light (EBL), ranging from the radio to the gamma-ray band. The evolution of the diffuse electromagnetic energy content of the universe is the focus of this special session, and I will discuss its importance within the context of modern cosmology. I will emphasize the role of gamma-ray astronomy, which probes the EBL and the CMB through the opacity created by photon-photon pair production.
2. Probing the neutrino mass hierarchy with cosmic microwave background weak lensing
Hall, Alex C.; Challinor, Anthony
2012-09-01
We forecast constraints on cosmological parameters with primary cosmic microwave background (CMB) anisotropy information and weak lensing reconstruction with a future post-Planck CMB experiment, the Cosmic Origins Explorer (COrE), using oscillation data on the neutrino mass splittings as prior information. Our Markov chain Monte Carlo (MCMC) simulations in flat models with a non-evolving equation of state of dark energy w give typical 68 per cent upper bounds on the total neutrino mass of 0.136 and 0.098 eV for the inverted and normal hierarchies, respectively, assuming the total summed mass is close to the minimum allowed by the oscillation data for the respective hierarchies (0.10 and 0.06 eV). Including geometric information from future baryon acoustic oscillation measurements with the complete Baryon Oscillation Spectroscopic Survey, Type Ia supernovae distance moduli from Wide-Field Infrared Survey Telescope (WFIRST) and a realistic prior on the Hubble constant, these upper limits shrink to 0.118 and 0.080 eV for the inverted and normal hierarchies, respectively. Addition of these distance priors also yields per cent-level constraints on w. We find tension between our MCMC results and the results of a Fisher matrix analysis, most likely due to a strong geometric degeneracy between the total neutrino mass, the Hubble constant and w in the unlensed CMB power spectra. If the minimal-mass, normal hierarchy were realized in nature, the inverted hierarchy should be disfavoured by the full data combination at typically greater than the 2σ level. For the minimal-mass inverted hierarchy, we compute the Bayes factor between the two hierarchies for various combinations of our forecast data sets, and find that the future cosmological probes considered here should be able to provide 'strong' evidence (odds ratio 12:1) for the inverted hierarchy. Finally, we consider potential biases of the other cosmological parameters from assuming the wrong hierarchy and find that all
3. A new limit on the time between the nucleosynthesis and the acceleration of cosmic rays in supernova remnants using the Co/Ni ratio
NASA Technical Reports Server (NTRS)
Webber, W. R.; Gupta, M.
1990-01-01
Using new cross section measurements for the fragmentation of Ni into Co, data on the Co/Ni ratio in cosmic rays from the HEAO C spacecraft have been reinterpreted in terms of the time between nucleosynthesis and the acceleration of cosmic rays, delta t. The observed Co/Ni ratio is now consistent with interstellar fragmentation only, leading to a small or zero source abundance. In terms of the decay of e-process nucleosynthesis nuclides into Co after a supernova explosion, this permits an estimate of delta t = 4 to 30,000 yr for the time between nucleosynthesis and the acceleration of cosmic rays, if supernovae are the direct progenitors of cosmic rays. These age limits are used in conjunction with models of the expansion of supernova remnants (SNRs) to estimate that cosmic rays are accelerated when the radius of these remnants is between 0.1 and 25 pc.
4. A 6% measurement of the Hubble parameter at z~0.45: direct evidence of the epoch of cosmic re-acceleration
Moresco, Michele; Pozzetti, Lucia; Cimatti, Andrea; Jimenez, Raul; Maraston, Claudia; Verde, Licia; Thomas, Daniel; Citro, Annalisa; Tojeiro, Rita; Wilkinson, David
2016-05-01
Deriving the expansion history of the Universe is a major goal of modern cosmology. To date, the most accurate measurements have been obtained with Type Ia Supernovae (SNe) and Baryon Acoustic Oscillations (BAO), providing evidence for the existence of a transition epoch at which the expansion rate changes from decelerated to accelerated. However, these results have been obtained within the framework of specific cosmological models that must be implicitly or explicitly assumed in the measurement. It is therefore crucial to obtain measurements of the accelerated expansion of the Universe independently of assumptions on cosmological models. Here we exploit the unprecedented statistics provided by the Baryon Oscillation Spectroscopic Survey (BOSS, [1-3]) Data Release 9 to provide new constraints on the Hubble parameter H(z) using the cosmic chronometers approach. We extract a sample of more than 130,000 of the most massive and passively evolving galaxies, obtaining five new cosmology-independent H(z) measurements in the redshift range 0.3 < z < 0.5, with an accuracy of ~11-16%, incorporating both statistical and systematic errors. Once combined, these measurements yield a 6%-accuracy constraint of H(z = 0.4293) = 91.8 ± 5.3 km/s/Mpc. The new data are crucial to provide the first cosmology-independent determination of the transition redshift at high statistical significance, measuring zt = 0.4 ± 0.1, and to significantly disfavor the null hypothesis of no transition between decelerated and accelerated expansion at the 99.9% confidence level. This analysis highlights the wide potential of the cosmic chronometers approach: it permits the derivation of constraints on the expansion history of the Universe that are competitive with standard probes, and, most importantly, because the estimates are independent of the cosmological model, it can constrain cosmologies beyond--and including--the ΛCDM model.
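The cosmic chronometers method above rests on the relation H(z) = -[1/(1+z)] dz/dt, with dt measured as the differential age of passively evolving galaxies at nearby redshifts. A minimal sketch, with placeholder numbers chosen only to land near the H(z) value quoted above (not the actual BOSS measurements):

```python
# Minimal sketch of the cosmic-chronometers estimator H(z) = -1/(1+z) dz/dt.
# The dz, dt, and z values below are illustrative placeholders.

KM_PER_MPC = 3.0857e19   # km in one megaparsec
SEC_PER_GYR = 3.156e16   # seconds in one gigayear

def hubble_from_chronometers(z_mid, dz, dt_gyr):
    """H(z) in km/s/Mpc from a redshift step dz spanning an age step dt."""
    dzdt = dz / (dt_gyr * SEC_PER_GYR)   # 1/s (dz < 0 as cosmic time increases)
    H_per_sec = -dzdt / (1.0 + z_mid)    # 1/s
    return H_per_sec * KM_PER_MPC        # km/s/Mpc

# Two galaxy samples 0.35 Gyr apart in age, separated by dz = -0.047 around
# z ~ 0.43 (placeholders chosen to give ~92 km/s/Mpc, near the value above):
print(hubble_from_chronometers(0.43, -0.047, 0.35))
```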
5. Diffusive Cosmic-ray Acceleration at Relativistic Shock Waves with Magnetostatic Turbulence
Schlickeiser, R.
2015-08-01
The analytical theory of diffusive cosmic-ray acceleration at parallel stationary shock waves with magnetostatic turbulence is generalized to arbitrary shock speeds V{sub s} = β{sub 1}c, including, in particular, relativistic speeds. This is achieved by applying the diffusion approximation to the relevant Fokker-Planck particle transport equation formulated in the mixed comoving coordinate system. In this coordinate system, the particle's momentum coordinates p and μ = p{sub ∥}/p are taken in the rest frame of the streaming plasma, whereas the time and space coordinates are taken in the observer's system. For magnetostatic slab turbulence, the diffusion-convection transport equation for the isotropic (in the rest frame of the streaming plasma) part of the particle's phase space density is derived. For a step-wise shock velocity profile, the steady-state diffusion-convection transport equation is solved. For a symmetric pitch-angle scattering Fokker-Planck coefficient, D{sub μμ}(−μ) = D{sub μμ}(μ), the steady-state solution is independent of the microphysical scattering details. For nonrelativistic mono-momentum particle injection at the shock, the differential number density of accelerated particles is a Lorentzian-type distribution function, which at large momenta approaches a power-law distribution function N(p ≥ p{sub c}) ∝ p{sup −ξ} with the spectral index ξ(β{sub 1}) = 1 + 3/[(Γ{sub 1}√(r{sup 2} − β{sub 1}{sup 2}) − 1)(1 + 3β{sub 1}{sup 2})], where r is the shock compression ratio and Γ{sub 1} the Lorentz factor of the upstream flow. For nonrelativistic (β{sub 1} ≪ 1) shock speeds, this spectral index agrees with the known result ξ(β{sub 1} ≪ 1) ≃ (r + 2)/(r − 1), whereas for ultrarelativistic (Γ{sub 1} ≫ 1) shock speeds the spectral index value is close to unity.
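The quoted limits of the reconstructed spectral index can be checked numerically. A minimal sketch, assuming the bracket grouping as written above:

```python
import math

# Evaluate the spectral index xi(beta_1) as reconstructed above; r is the
# shock compression ratio, Gamma_1 the Lorentz factor of the upstream flow.

def spectral_index(beta1, r):
    gamma1 = 1.0 / math.sqrt(1.0 - beta1**2)
    return 1.0 + 3.0 / ((gamma1 * math.sqrt(r**2 - beta1**2) - 1.0)
                        * (1.0 + 3.0 * beta1**2))

# Nonrelativistic limit: xi -> (r + 2)/(r - 1) = 2 for a strong shock (r = 4).
print(spectral_index(1e-3, 4.0))   # ~2.0
print(spectral_index(0.5, 4.0))    # mildly relativistic: ~1.48
print(spectral_index(0.999, 4.0))  # ultrarelativistic limit: -> 1
```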
6. Issues for Simulation of Galactic Cosmic Ray Exposures for Radiobiological Research at Ground-Based Accelerators.
PubMed
Kim, Myung-Hee Y; Rusek, Adam; Cucinotta, Francis A
2015-01-01
For radiobiology research on the health risks of galactic cosmic rays (GCR) ground-based accelerators have been used with mono-energetic beams of single high charge, Z and energy, E (HZE) particles. In this paper, we consider the pros and cons of a GCR reference field at a particle accelerator. At the NASA Space Radiation Laboratory (NSRL), we have proposed a GCR simulator, which implements a new rapid switching mode and higher energy beam extraction to 1.5 GeV/u, in order to integrate multiple ions into a single simulation within hours or longer for chronic exposures. After considering the GCR environment and energy limitations of NSRL, we performed extensive simulation studies using the stochastic transport code, GERMcode (GCR Event Risk Model) to define a GCR reference field using 9 HZE particle beam-energy combinations each with a unique absorber thickness to provide fragmentation and 10 or more energies of proton and (4)He beams. The reference field is shown to well represent the charge dependence of GCR dose in several energy bins behind shielding compared to a simulated GCR environment. However, a more significant challenge for space radiobiology research is to consider chronic GCR exposure of up to 3 years in relation to simulations with animal models of human risks. We discuss issues in approaches to map important biological time scales in experimental models using ground-based simulation, with extended exposure of up to a few weeks using chronic or fractionation exposures. A kinetics model of HZE particle hit probabilities suggests that experimental simulations of several weeks will be needed to avoid high fluence rate artifacts, which places limitations on the experiments to be performed. Ultimately risk estimates are limited by theoretical understanding, and focus on improving knowledge of mechanisms and development of experimental models to improve this understanding should remain the highest priority for space radiobiology research. PMID:26090339
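The hit-probability argument above can be illustrated with a minimal Poisson sketch; the fluence rate and nucleus area below are placeholders, not NSRL beam or GCR values:

```python
import math

# Minimal Poisson sketch of HZE hit kinetics: the expected number of nuclear
# traversals is fluence_rate * area * time, and the chance of at least one
# hit is 1 - exp(-mean). All numbers are illustrative placeholders.

def p_at_least_one_hit(fluence_per_um2_day, nucleus_area_um2, days):
    mean_hits = fluence_per_um2_day * nucleus_area_um2 * days
    return 1.0 - math.exp(-mean_hits)

# A ~100 um^2 nucleus under a low chronic HZE fluence rate over three weeks:
print(p_at_least_one_hit(1e-3, 100.0, 21.0))    # ~0.88
# Compressing the same total fluence into one day gives the same hit
# probability but a 21x higher fluence *rate* -- the artifact at issue:
print(p_at_least_one_hit(21e-3, 100.0, 1.0))    # ~0.88, at high dose rate
```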
8. ASSESSING THE FEASIBILITY OF COSMIC-RAY ACCELERATION BY MAGNETIC TURBULENCE AT THE GALACTIC CENTER
SciTech Connect
Fatuzzo, M.; Melia, F. E-mail: [email protected]
2012-05-01
The presence of relativistic particles at the center of our Galaxy is evidenced by the diffuse TeV emission detected from the inner ~2° of the Galaxy. Although it is not yet entirely clear whether the origin of the TeV photons is due to hadronic or leptonic interactions, the tight correlation of the intensity distribution with the distribution of molecular gas along the Galactic ridge strongly points to a pionic-decay process involving relativistic protons. In previous work, we concluded that point-source candidates, such as the supermassive black hole Sagittarius A* (identified with the High-Energy Stereoscopic System (HESS) source J1745-290) or the pulsar wind nebulae dispersed along the Galactic plane, could not account for the observed diffuse TeV emission from this region. Motivated by this result, we consider here the feasibility that the cosmic rays populating the Galactic center region are accelerated in situ by magnetic turbulence. Our results indicate that even in a highly conductive environment, this mechanism is efficient enough to energize protons within the intercloud medium to the ≳TeV energies required to produce the HESS emission.
9. Aromatic units from the macromolecular material in meteorites: Molecular probes of cosmic environments
Sephton, Mark A.
2013-04-01
Ancient meteorites contain several percent of organic matter that represents a chronicle of chemical evolution in the early solar system. Aromatic hydrocarbon units make up the majority of meteorite organic matter, but reading their record of organic evolution is not straightforward and their formation mechanisms have remained elusive. Most aromatic units reside in a macromolecular material, and new perceptions of its structure have been provided by a novel on-line hydrogenation approach. When applied to the Orgueil (CI1) and Murchison (CM2) meteorites, the technique releases a range of aromatic hydrocarbons along with some oxygen-, sulphur- and nitrogen-containing aromatic units. When on-line hydrogenation is compared to conventional pyrolysis, more high molecular weight units and a wider range of liberated entities are evident. Comparisons of results from Orgueil and Murchison reveal variations that are most likely related to differing levels of parent body alteration. The enhancement of straight-chain hydrocarbons (n-alkanes) in the hydrogenation products implies a source of these common contaminants from straight-chain carboxylic acid (n-alkanoic acid) precursors, perhaps from bacterial contributions on Earth. The on-line hydrogenation data also highlight a long-standing but unexplained observation related to the relative preference for specific isomers in methyl-substituted benzenes (meta-, ortho- and para-xylenes). The new hydrogenation approach appears to release and transform macromolecular material meta-structures (benzenes with substituents separated by single carbon atoms) into their free hydrocarbon counterparts. Their release characteristics suggest that the meta-structures are bound by oxygen linkages. The meta-structures may be molecular probes of specific ancient cosmic environments. Parent body processing may have performed a similar function as hydrogenation to produce the most common meta configuration for free substituted benzenes. Notably, this
10. Narrowband Gyrosynchrotron Bursts: Probing Electron Acceleration in Solar Flares
Fleishman, Gregory D.; Nita, Gelu M.; Kontar, Eduard P.; Gary, Dale E.
2016-07-01
Recently, in a few case studies we demonstrated that gyrosynchrotron microwave emission can be detected directly from the acceleration region when the trapped electron component is insignificant. For the statistical study reported here, we have identified events with steep (narrowband) microwave spectra that do not show a significant trapped component and, at the same time, show evidence of source uniformity, which simplifies the data analysis greatly. Initially, we identified a subset of more than 20 radio bursts with such narrow spectra, having low- and high-frequency spectral indices larger than three in absolute value. A steep low-frequency spectrum implies that the emission is nonthermal (for optically thick thermal emission, the spectral index cannot be steeper than two), and the source is reasonably dense and uniform. A steep high-frequency spectrum implies that no significant electron trapping occurs, otherwise a progressive spectral flattening would be observed. Roughly half of these radio bursts have RHESSI data, which allow for detailed, joint diagnostics of the source parameters and evolution. Based on an analysis of radio-to-X-ray spatial relationships, timing, and spectral fits, we conclude that the microwave emission in these narrowband bursts originates directly from the acceleration regions, which have a relatively strong magnetic field, high density, and low temperature. In contrast, the thermal X-ray emission comes from a distinct loop with a smaller magnetic field, lower density, but higher temperature. Therefore, these flares likely occurred due to interaction between two (or more) magnetic loops.
11. Cosmology of a Friedmann-Lemaître-Robertson-Walker 3-brane, late-time cosmic acceleration, and the cosmic coincidence.
PubMed
Doolin, Ciaran; Neupane, Ishwaree P
2013-04-01
A late epoch cosmic acceleration may be naturally entangled with cosmic coincidence--the observation that at the onset of acceleration the vacuum energy density fraction nearly coincides with the matter density fraction. In this Letter we show that this is indeed the case with the cosmology of a Friedmann-Lemaître-Robertson-Walker (FLRW) 3-brane in a five-dimensional anti-de Sitter spacetime. We derive the four-dimensional effective action on a FLRW 3-brane, from which we obtain a mass-reduction formula, namely, M{sub P}{sup 2} = ρ{sub b}/|Λ{sub 5}|, where M{sub P} is the effective (normalized) Planck mass, Λ{sub 5} is the five-dimensional cosmological constant, and ρ{sub b} is the sum of the 3-brane tension V and the matter density ρ. Although the range of variation in ρ{sub b} is strongly constrained, the big bang nucleosynthesis bound on the time variation of the effective Newton constant G{sub N} = (8πM{sub P}{sup 2}){sup −1} is satisfied when the ratio V/ρ ≳ O(10{sup 2}) on cosmological scales. The same bound leads to an effective equation of state close to −1 at late epochs in accordance with astrophysical and cosmological observations. PMID:25166976
12. 2D electron density profile measurement in tokamak by laser-accelerated ion-beam probe
SciTech Connect
Chen, Y. H.; Yang, X. Y.; Lin, C. E-mail: [email protected]; Wang, X. G.; Xiao, C. J. E-mail: [email protected]; Wang, L.; Xu, M.
2014-11-15
A new concept for the Heavy Ion Beam Probe (HIBP) diagnostic has been proposed, the key of which is to replace the electrostatic accelerator of the traditional HIBP with a laser-driven ion accelerator. Owing to the large energy spread of the ions, the laser-accelerated HIBP can measure the two-dimensional (2D) electron density profile of a tokamak plasma. In a preliminary simulation, a 2D density profile was reconstructed with a spatial resolution of about 2 cm and with an error below 15% in the core region. Diagnostics of 2D density fluctuations are also discussed.
13. Influence of total biomass and rainfall interception on soil moisture measurements using cosmic-ray neutron probes
Fuchs, Hannah; Reemt Bogena, Heye; Huisman, Johan Alexander; Hendricks-Franssen, Harrie-Jan; Vereecken, Harry
2016-04-01
Cosmic-ray neutron probes are an emerging technology to continuously monitor soil water content at a scale significant to land surface processes. This method relies on the negative correlation between near-surface fast neutron counts and soil moisture content since hydrogen atoms in the soil, which are mainly present as water, moderate the secondary neutrons on the way back to the surface. Any application of this method needs to consider the sensitivity of the neutron counts to additional sources of hydrogen (e.g. above- and below-ground biomass, humidity of the lower atmosphere, lattice water of the soil minerals, organic matter and water in the litter layer, intercepted water in the canopy, and soil organic matter). In this study, we analyzed the effects of changing above- and below-ground biomass and intercepted water in the canopy on the cosmic-ray neutron counts and calibration parameters. For this, the arable field test site Selhausen, which is part of the TERENO and ICOS observation networks, was cropped with winter wheat and additionally instrumented with cosmic-ray neutron probes and a wireless sensor network with 108 soil moisture sensors. In order to increase the sensitivity of the cosmic-ray neutron measurements, we used seven neutron detectors simultaneously. In addition, we measured rainfall interception in the wheat canopy at several locations in the field. In order to track the changes in above and below-ground biomass of the winter wheat, roots and plants were sampled approximately every four weeks and LAI was measured weekly during the growing season. Weekly biomass changes were derived by relating LAI to total biomass. As expected, we found an increasing discrepancy between cosmic-ray-derived and in-situ measured soil moisture during the growing season and a sharp decrease in discrepancy after the harvest. In order to quantify the effect of hydrogen stored in the vegetation on fast neutron intensity, we derived a daily and weekly time series of
14. Accuracy of the cosmic-ray soil water content probe in humid forest ecosystems: The worst case scenario
Bogena, H. R.; Huisman, J. A.; Baatz, R.; Hendricks Franssen, H.-J.; Vereecken, H.
2013-09-01
Soil water content is one of the key state variables in the soil-vegetation-atmosphere continuum due to its important role in the exchange of water and energy at the soil surface. A new promising method to measure integral soil water content at the field or small catchment scale is the cosmic-ray probe (CRP). Recent studies of CRP measurements have mainly presented results from test sites located in very dry areas and from agricultural fields with sandy soils. In this study, distributed continuous soil water content measurements from a wireless sensor network (SoilNet) were used to investigate the accuracy of CRP measurements for soil water content determination in a humid forest ecosystem. Such ecosystems are less favorable for CRP applications due to the presence of a litter layer. In addition, lattice water and carbohydrates of soil organic matter and belowground biomass reduce the effective sensor depth and thus were accounted for in the calibration of the CRP. The hydrogen located in the biomass decreased the level of neutron count rates and thus also decreased the sensitivity of the cosmic-ray probe, which in turn resulted in an increase of the measurement uncertainty. This uncertainty was compensated by using longer integration times (e.g., 24 h). For the Wüstebach forest site, the cosmic-ray probe enabled the assessment of integral daily soil water content dynamics with a RMSE of about 0.03 cm3/cm3 without explicitly considering the litter layer. By including simulated water contents of the litter layer in the calibration, a better accuracy could be achieved.
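For context on how neutron counts become soil moisture in the studies above, here is a minimal sketch of the widely used calibration function of Desilets et al. (2010). The shape coefficients are the standard published values, while N0 and the bulk density are hypothetical site parameters; hydrogen pools such as lattice water, organic matter, and litter (discussed above) enter through corrections to the count rate and to N0:

```python
# Sketch of the Desilets et al. (2010) neutron-to-moisture calibration, the
# kind of relation calibrated in the studies above. N0 (count rate over dry
# soil) and the bulk density are hypothetical site-specific values.

A0, A1, A2 = 0.0808, 0.372, 0.115   # standard shape coefficients

def vwc_from_neutrons(N, N0, bulk_density=1.4):
    """Volumetric water content (cm^3/cm^3) from a corrected count rate N."""
    theta_grav = A0 / (N / N0 - A1) - A2   # gravimetric water content (g/g)
    return theta_grav * bulk_density

# Example with a hypothetical dry-soil rate N0 = 1000 counts/h:
print(round(vwc_from_neutrons(700.0, 1000.0), 3))   # ~0.184 cm^3/cm^3
```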
15. ANALYTIC SOLUTION FOR SELF-REGULATED COLLECTIVE ESCAPE OF COSMIC RAYS FROM THEIR ACCELERATION SITES
SciTech Connect
Malkov, M. A.; Diamond, P. H.; Sagdeev, R. Z.; Aharonian, F. A.; Moskalenko, I. V. E-mail: [email protected]
2013-05-01
Supernova remnants (SNRs), as the major contributors to the galactic cosmic rays (CRs), are believed to maintain an average CR spectrum by diffusive shock acceleration regardless of the way they release CRs into the interstellar medium (ISM). However, the interaction of the CRs with nearby gas clouds crucially depends on the release mechanism. We call into question two aspects of a popular paradigm of the CR injection into the ISM, according to which they passively and isotropically diffuse in the prescribed magnetic fluctuations as test particles. First, we treat the escaping CRs and the Alfven waves excited by them on an equal footing. Second, we adopt field-aligned CR escape outside the source, where the waves become weak. An exact analytic self-similar solution for a CR "cloud" released by a dimmed accelerator strongly deviates from the test-particle result. The normalized CR partial pressure may be approximated as P(p,z,t) = 2[|z|{sup 5/3} + z{sub dif}{sup 5/3}(p,t)]{sup −3/5} exp[−z{sup 2}/4D{sub ISM}(p)t], where p is the momentum of the CR particle and z is directed along the field. The core of the cloud expands as z{sub dif} ∝ √(D{sub NL}(p)t) and decays in time as P ∝ 2z{sub dif}{sup −1}(t). The diffusion coefficient D{sub NL} is strongly suppressed compared to its background ISM value D{sub ISM}: D{sub NL} ~ D{sub ISM} exp(−Π) ≪ D{sub ISM} for sufficiently high field-line-integrated CR partial pressure Π. When Π ≫ 1, the CRs drive Alfven waves efficiently enough to build a transport barrier (a P ≈ 2/|z| "pedestal") that strongly reduces the leakage. The solution has a spectral break at p = p{sub br}, where p{sub br} satisfies the equation D{sub NL}(p{sub br}) ≈ z{sup 2}/t.
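A short numerical sketch of the reconstructed self-similar profile; all parameter values (D_ISM, Π, t) are illustrative placeholders, not values from the paper:

```python
import numpy as np

# Evaluate the self-similar partial-pressure profile reconstructed above,
# P(p,z,t) = 2 [|z|^(5/3) + z_dif^(5/3)]^(-3/5) exp(-z^2 / 4 D_ISM t),
# with z_dif = sqrt(D_NL t) and D_NL = D_ISM exp(-Pi). Placeholder values.

D_ISM = 1.0e28      # cm^2/s, background ISM diffusion coefficient (placeholder)
PI = 5.0            # field-line-integrated CR partial pressure parameter
T = 3.0e10          # s (~1 kyr)

D_NL = D_ISM * np.exp(-PI)   # self-confinement suppresses diffusion
z_dif = np.sqrt(D_NL * T)    # core size of the CR cloud

z = np.linspace(-10 * z_dif, 10 * z_dif, 5)
P = 2.0 * (np.abs(z)**(5/3) + z_dif**(5/3))**(-3/5) * np.exp(-z**2 / (4 * D_ISM * T))
print(P / P.max())   # flat core out to ~z_dif, power-law decay beyond
```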
16. Cosmic-ray antiprotons as a probe of a photino-dominated universe
NASA Technical Reports Server (NTRS)
Silk, J.; Srednicki, M.
1984-01-01
Observational tests of the hypothesis that the universe is flat and dominated by dark matter in the form of massive photinos include the production of significant fluxes of cosmic rays and gamma rays in our galactic halo. Specification of the cosmological photino density and the masses of scalar quarks and leptons determines the present annihilation rate. The predicted number of low-energy cosmic-ray antiprotons is comparable to the observed flux.
17. Non Parametric Determination of Acceleration Characteristics in Supernova Shocks Based on Spectra of Cosmic Rays and Remnant Radiation
Petrosian, Vahe
2016-07-01
We have developed an inversion method for the determination of the characteristics of the acceleration mechanism directly and non-parametrically from observations, in contrast to the usual forward fitting of parametric model variables to observations. This is done in the framework of the so-called leaky-box model of acceleration, valid for isotropic momentum distributions and for volume-integrated characteristics in a finite acceleration site. We consider both acceleration by shocks and stochastic acceleration, where turbulence plays the primary role in determining the acceleration, scattering, and escape rates. Assuming a knowledge of the background plasma, the model has essentially two unknown parameters, namely the momentum and pitch-angle scattering diffusion coefficients, which can be evaluated given two independent spectral observations. These coefficients are obtained directly from the spectrum of radiation from the supernova remnants (SNRs), which gives the spectrum of accelerated particles, and the observed spectrum of cosmic rays (CRs), which is related to the spectrum of particles escaping the SNRs. The results obtained from the application of this method will be presented.
18. Rayleigh-Taylor instabilities in Type Ia supernova remnants undergoing cosmic ray particle acceleration - low adiabatic index solutions
Wang, Chih-Yueh
2011-07-01
This study investigates the evolution of Rayleigh-Taylor (R-T) instabilities in Type Ia supernova remnants that are associated with a low adiabatic index γ, where γ < 5/3, which reflects the expected change in the supernova shock structure as a result of cosmic ray particle acceleration. Extreme cases, such as the case with the maximum compression ratio that corresponds to γ= 1.1, are examined. As γ decreases, the shock compression ratio rises, and an increasingly narrow intershock region with a more pronounced initial mixture of R-T unstable gas is produced. Consequently, the remnant outline may be perturbed by small-amplitude, small-wavelength bumps. However, as the instability decays over time, the extent of convective mixing in terms of the ratio of the radius of the R-T fingers to the blast wave does not strongly depend on the value of γ for γ≥ 1.2. As a result of the age of the remnant, the unstable gas cannot extend sufficiently far to form metal-enriched filaments of ejecta material close to the periphery of Tycho's supernova remnant. The consistency of the dynamic properties of Tycho's remnant with the adiabatic model γ= 5/3 reveals that the injection of cosmic rays is too weak to alter the shock structure. Even with very efficient acceleration of cosmic rays at the shock, significantly enhanced mixing is not expected in Type Ia supernova remnants.
19. Studies into the nature of cosmic acceleration: Dark energy or a modification to gravity on cosmological scales
Dossett, Jason Nicholas
Since its discovery more than a decade ago, the problem of cosmic acceleration has become one of the largest in cosmology and physics as a whole. An unknown dark energy component of the universe is often invoked to explain this observation. Mathematically, this works because inserting a cosmic fluid with a negative equation of state into Einstein's equations provides an accelerated expansion. There are, however, alternative explanations for the observed cosmic acceleration. Perhaps the most promising of the alternatives is that, on the very largest cosmological scales, general relativity needs to be extended or a new, modified gravity theory must be used. Indeed, many modified gravity models are not only able to replicate the observed accelerated expansion without dark energy, but are also more compatible with a unified theory of physics. Thus it is the goal of this dissertation to develop and study robust tests that will be able to distinguish between these alternative theories of gravity and the need for a dark energy component of the universe. We will study multiple approaches using the growth history of large-scale structure in the universe as a way to accomplish this task. These approaches include studying what is known as the growth index parameter, which characterizes the logarithmic growth rate of structure in the universe--that is, the rate of formation of clusters and superclusters of galaxies over the entire age of the universe. We will explore the effectiveness of this parameter to distinguish between general relativity and modifications to gravity physics given realistic expectations of results from future experiments. Next, we will explore the modified growth formalism wherein deviations from the growth expected in general relativity are parameterized via changes to the growth equations, i.e. the perturbed Einstein's equations. We will also explore the impact of spatial curvature on these tests. Finally, we will study how dark energy
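The growth-index test described above is conventionally phrased as f(z) ≈ Ω_m(z)^γ, with γ ≈ 0.55 in general relativity and, for example, γ ≈ 0.68 in flat DGP braneworld gravity (standard literature values; the fiducial cosmology below is illustrative):

```python
# The growth-index parameterization f(z) ~ Omega_m(z)^gamma: a measured
# gamma discriminates between dark energy within GR (gamma ~ 0.55) and,
# e.g., flat DGP braneworld gravity (gamma ~ 0.68). Om0 = 0.3 is illustrative.

def omega_m(z, om0=0.3):
    """Matter density fraction in flat LCDM."""
    e2 = om0 * (1 + z)**3 + (1 - om0)
    return om0 * (1 + z)**3 / e2

def growth_rate(z, gamma, om0=0.3):
    """Logarithmic growth rate f = dlnD/dlna under the gamma parameterization."""
    return omega_m(z, om0)**gamma

for z in (0.0, 0.5, 1.0):
    print(z, round(growth_rate(z, 0.55), 3), round(growth_rate(z, 0.68), 3))
# The two predictions differ by ~10% at z = 0, shrinking at higher z where
# Omega_m(z) -> 1 -- which is why precise low-z growth data matter most.
```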
20. Probing new physics with underground accelerators and radioactive sources
Izaguirre, Eder; Krnjaic, Gordan; Pospelov, Maxim
2015-01-01
New light, weakly coupled particles can be efficiently produced at existing and future high-intensity accelerators and radioactive sources in deep underground laboratories. Once produced, these particles can scatter or decay in large neutrino detectors (e.g. Super-K and Borexino) housed in the same facilities. We discuss the production of weakly coupled scalars ϕ via nuclear de-excitation of an excited element into the ground state in two viable concrete reactions: the decay of the 0+ excited state of 16O populated via a (p, α) reaction on fluorine, and radioactive 144Ce decay, where the scalar is produced in the de-excitation of 144Nd*, which occurs along the decay chain. Subsequent scattering on electrons, e(ϕ, γ)e, yields a mono-energetic signal that is observable in neutrino detectors. We show that this proposed experimental setup can cover new territory for masses 250 keV ≤ mϕ ≤ 2me and couplings to protons and electrons, 10^-11 ≤ gegp ≤ 10^-7. This parameter space is motivated by explanations of the 'proton charge radius puzzle'; thus this strategy adds a viable new physics component to the neutrino and nuclear astrophysics programs at underground facilities. For the LUNA-type setup, we show that such light particles can be efficiently produced by populating the first excited 6.05 MeV 0+ state of 16O in (p, α) reactions on fluorine. For the SOX-type setup we find similarly powerful sensitivity from the 144Ce-144Pr (anti-νe) radioactive source, which can produce a scalar with 2.19 or 1.49 MeV energy from the 144Nd* de-excitation that occurs along the decay chain. The subsequent detection of a mono-energetic release of 6.05, 2.19, or 1.49 MeV in a Borexino-type detector will be free from substantial environmental backgrounds. The strategy proposed in this Letter is capable of advancing the sensitivity to such states by many orders of magnitude, completely covering the parameter space relevant for the rp puzzle.
1. ON THE e{sup +}e{sup -} EXCESSES AND THE KNEE OF THE COSMIC RAY SPECTRA-HINTS OF COSMIC RAY ACCELERATION IN YOUNG SUPERNOVA REMNANTS
SciTech Connect
Hu Hongbo; Yuan Qiang; Wang Bo; Fan Chao; Zhang Jianli; Bi Xiaojun
2009-08-01
Supernova remnants (SNRs) have long been regarded as sources of the Galactic cosmic rays (CRs) up to petaelectronvolts, but convincing evidence is still lacking. In this work we explore the common origin of the subtle features of the CR spectra, such as the knee of CR spectra and the excesses of electron/positron fluxes recently observed by ATIC, H.E.S.S., Fermi-LAT, and PAMELA. Numerical calculation shows that those features of CR spectra can be well reproduced in a scenario with e{sup +}e{sup -} pair production by interactions between high-energy CRs and background photons in an environment similar to the young SNR. The success of such a coherent explanation serves in turn as evidence that at least a portion of CRs might be accelerated in young SNRs.
2. Cosmic Rays: "A Thin Rain of Charged Particles."
ERIC Educational Resources Information Center
Friedlander, Michael
1990-01-01
Discussed are balloons and electroscopes, understanding cosmic rays, cosmic ray paths, isotopes and cosmic-ray travel, sources of cosmic rays, and accelerating cosmic rays. Some of the history of the discovery and study of cosmic rays is presented. (CW)
3. Spatiotemporal characterization of soil moisture fields in agricultural areas using cosmic-ray neutron probes and data fusion
Franz, Trenton; Wang, Tiejun
2015-04-01
Approximately 40% of global food production comes from irrigated agriculture. With the increasing demand for food, even greater pressures will be placed on water resources within these systems. In this work we aimed to characterize the spatial and temporal patterns of soil moisture at the field scale (~500 m) using the newly developed cosmic-ray neutron rover near Waco, NE, USA. Here we mapped the soil moisture of 144 quarter-section fields (a mix of maize, soybean, and natural areas) each week during the 2014 growing season (May to September). The 12 by 12 km study domain also contained three stationary cosmic-ray neutron probes for independent validation of the rover surveys. Basic statistical analysis of the domain indicated a strong relationship between the mean and variance of soil moisture at several averaging scales. The relationships between the mean and higher-order moments were not significant. Scaling analysis indicated strong power-law behavior between the variance of soil moisture and averaging area, with the slope of the power-law function depending only minimally on mean soil moisture. In addition, we combined the data from the three stationary cosmic-ray neutron probes and the mobile surveys using linear regression to derive a daily soil moisture product at 1, 3, and 12 km spatial resolutions for the entire growing season. The statistical relationships derived from the rover dataset offer a novel set of observations that will be useful for: 1) calibrating and validating land surface models; 2) calibrating and validating crop models; 3) estimating soil moisture covariances for statistical downscaling of remote sensing products such as SMOS and SMAP; and 4) providing daily center-pivot-scale mean soil moisture data for optimal irrigation timing and volume amounts.
4. TWO-STEP ACCELERATION MODEL OF COSMIC RAYS AT MIDDLE-AGED SUPERNOVA REMNANTS: UNIVERSALITY IN SECONDARY SHOCKS
SciTech Connect
Inoue, Tsuyoshi; Yamazaki, Ryo; Inutsuka, Shu-ichiro
2010-11-01
Recent gamma-ray observations of middle-aged supernova remnants revealed a mysterious broken power-law spectrum. Using three-dimensional magnetohydrodynamic simulations, we show that the interaction between a supernova blast wave and interstellar clouds formed by thermal instability generates multiple reflected shocks. The typical Mach numbers of the reflected shocks are shown to be M ≈ 2, depending on the density contrast between the diffuse intercloud gas and the clouds. These secondary shocks can further energize cosmic-ray particles originally accelerated at the blast-wave shock. This 'two-step' acceleration scenario reproduces the observed gamma-ray spectrum and predicts the high-energy spectral index ranging approximately from 3 to 4.
5. (Re-)Constraining the Cosmic-Ray Acceleration Efficiency and Magnetic Field Strength in the Northeast Rims of RCW 86
Yamaguchi, Hiroya
2014-09-01
Accurate determination of an SNR's shock velocity and magnetic field is essential to reveal the mechanism of cosmic-ray acceleration. A previous velocity measurement with Chandra for the northeast rim of the SNR RCW 86 revealed that a substantial fraction of the postshock pressure is produced by the accelerated particles. However, there is disagreement with an Hα-measured velocity, and large uncertainty in the X-ray measurement itself, since the observation dates of the two Chandra datasets that were used for the proper motion measurement were not well separated in time. We thus propose an additional observation of this region to measure the expansion velocity accurately. We will also constrain the magnetic field by searching for short-time variability in the synchrotron X-ray flux.
6. Coronal and interplanetary propagation, interplanetary acceleration, cosmic-ray observations by deep space network and anomalous component
NASA Technical Reports Server (NTRS)
Ng, C. K.
1986-01-01
The purpose is to provide an overview of the contributions presented in sessions SH3, SH1.5, SH4.6 and SH4.7 of the 19th International Cosmic Ray Conference. These contributed papers indicate that steady progress continues to be made in both the observational and the theoretical aspects of the transport and acceleration of energetic charged particles in the heliosphere. Studies of solar and interplanetary particles have placed emphasis on particle directional distributions in relation to pitch-angle scattering and magnetic focusing, on the rigidity and spatial dependence of the mean free path, and on new propagation regimes in the inner and outer heliosphere. Coronal propagation appears in need of correlative multi-spacecraft studies in association with detailed observation of the flare process and coronal magnetic structures. Interplanetary acceleration has now gone into a consolidation phase, with theories being worked out in detail and checked against observation.
8. Probing Dark Energy via Weak Gravitational Lensing with the Supernova Acceleration Probe (SNAP)
SciTech Connect
Albert, J.; Aldering, G.; Allam, S.; Althouse, W.; Amanullah, R.; Annis, J.; Astier, P.; Aumeunier, M.; Bailey, S.; Baltay, C.; Barrelet, E.; Basa, S.; Bebek, C.; Bergstom, L.; Bernstein, G.; Bester, M.; Besuner, B.; Bigelow, B.; Blandford, R.; Bohlin, R.; Bonissent, A.; /Caltech /LBL, Berkeley /Fermilab /SLAC /Stockholm U. /Paris, IN2P3 /Marseille, CPPM /Marseille, Lab. Astrophys. /Yale U. /Pennsylvania U. /UC, Berkeley /Michigan U. /Baltimore, Space Telescope Sci. /Indiana U. /Caltech, JPL /Australian Natl. U., Canberra /American Astron. Society /Chicago U. /Cambridge U. /Saclay /Lyon, IPN
2005-08-08
SNAP is a candidate for the Joint Dark Energy Mission (JDEM) that seeks to place constraints on the dark energy using two distinct methods. The first, Type Ia SNe, is discussed in a separate white paper. The second method is weak gravitational lensing, which relies on the coherent distortions in the shapes of background galaxies by foreground mass structures. The excellent spatial resolution and photometric accuracy afforded by a 2-meter space-based observatory are crucial for achieving the high surface density of resolved galaxies, the tight control of systematic errors in the telescope's Point Spread Function (PSF), and the exquisite redshift accuracy and depth required by this project. These are achieved by the elimination of atmospheric distortion and much of the thermal and gravity loads on the telescope. The SN and WL methods for probing dark energy are highly complementary, and the error contours from the two methods are largely orthogonal. The nominal SNAP weak lensing survey covers 1000 square degrees per year of operation in six optical and three near-infrared (NIR) filters spanning the range 350 nm to 1.7 µm. This survey will reach a depth of 26.6 AB magnitude in each of the nine filters and allow for approximately 100 resolved galaxies per square arcminute, ~3 times that available from the best ground-based surveys. Photometric redshifts will be measured with a statistical accuracy that enables scientific applications even for the faint, high-redshift end of the sample. Ongoing work aims to meet the requirements on systematics in galaxy shape measurement, photometric redshift biases, and theoretical predictions.
9. Probe of the solar magnetic field using the "cosmic-ray shadow" of the sun.
PubMed
Amenomori, M; Bi, X J; Chen, D; Chen, T L; Chen, W Y; Cui, S W; Danzengluobu; Ding, L K; Feng, C F; Feng, Zhaoyang; Feng, Z Y; Gou, Q B; Guo, Y Q; Hakamada, K; He, H H; He, Z T; Hibino, K; Hotta, N; Hu, Haibing; Hu, H B; Huang, J; Jia, H Y; Jiang, L; Kajino, F; Kasahara, K; Katayose, Y; Kato, C; Kawata, K; Labaciren; Le, G M; Li, A F; Li, H J; Li, W J; Liu, C; Liu, J S; Liu, M Y; Lu, H; Meng, X R; Mizutani, K; Munakata, K; Nanjo, H; Nishizawa, M; Ohnishi, M; Ohta, I; Onuma, H; Ozawa, S; Qian, X L; Qu, X B; Saito, T; Saito, T Y; Sakata, M; Sako, T K; Shao, J; Shibata, M; Shiomi, A; Shirai, T; Sugimoto, H; Takita, M; Tan, Y H; Tateyama, N; Torii, S; Tsuchiya, H; Udo, S; Wang, H; Wu, H R; Xue, L; Yamamoto, Y; Yang, Z; Yasue, S; Yuan, A F; Yuda, T; Zhai, L M; Zhang, H M; Zhang, J L; Zhang, X Y; Zhang, Y; Zhang, Yi; Zhang, Ying; Zhaxisangzhu; Zhou, X X
2013-07-01
We report on a clear solar-cycle variation of the Sun’s shadow in the 10 TeV cosmic-ray flux observed by the Tibet air shower array during a full solar cycle from 1996 to 2009. In order to clarify the physical implications of the observed solar cycle variation, we develop numerical simulations of the Sun’s shadow, using the potential field source surface model and the current sheet source surface (CSSS) model for the coronal magnetic field. We find that the intensity deficit in the simulated Sun’s shadow is very sensitive to the coronal magnetic field structure, and the observed variation of the Sun’s shadow is better reproduced by the CSSS model. This is the first successful attempt to evaluate the coronal magnetic field models by using the Sun’s shadow observed in the TeV cosmic-ray flux. PMID:24027782
10. Cosmic Mach Number: a sensitive probe for the growth of structure
SciTech Connect
Ma, Yin-Zhe; Ostriker, Jeremiah P.; Zhao, Gong-Bo E-mail: [email protected]
2012-06-01
We investigate the potential power of the Cosmic Mach Number (CMN), which is the ratio between the mean velocity and the velocity dispersion of galaxies as a function of cosmic scales, to constrain cosmologies. We first measure the CMN from 4 catalogs of galaxy peculiar velocity surveys at low redshift (z ∈ [0.002, 0.03]), and use them to contrast cosmological models. Overall, current data is consistent with the WMAP7 ΛCDM model. We find that the CMN is highly sensitive to the growth of structure on scales k ∈ [0.01, 0.1] h/Mpc in Fourier space. Therefore, modified gravity models, and models with massive neutrinos, in which the structure growth generically deviates from that of the ΛCDM model in a scale-dependent way, can be well differentiated from the ΛCDM model by using future CMN data.
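For illustration, the CMN estimator itself is simple; the sketch below applies it to synthetic peculiar velocities standing in for a real survey window:

```python
import numpy as np

# Sketch of the Cosmic Mach Number estimator used above: within a window of
# scale R, M(R) = |bulk flow| / (velocity dispersion about the bulk flow).
# Random placeholder velocities stand in for a real peculiar-velocity survey.

rng = np.random.default_rng(0)
velocities = rng.normal(loc=(300.0, 0.0, 0.0), scale=250.0, size=(500, 3))  # km/s

def cosmic_mach_number(v):
    bulk = v.mean(axis=0)                                 # mean (bulk) velocity
    sigma = np.sqrt(((v - bulk)**2).sum(axis=1).mean())   # 3D dispersion
    return np.linalg.norm(bulk) / sigma

print(round(cosmic_mach_number(velocities), 2))   # ~300/(250*sqrt(3)) ~ 0.7
```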
11. Combined analysis of soil moisture measurements from roving and fixed cosmic ray neutron probes for multiscale real-time monitoring
Franz, Trenton E.; Wang, Tiejun; Avery, William; Finkenbiner, Catherine; Brocca, Luca
2015-05-01
Soil moisture partly controls land-atmosphere mass and energy exchanges and ecohydrological processes in natural and agricultural systems. Thus, many models and remote sensing products continue to improve their spatiotemporal resolution of soil moisture, with some land surface models reaching 1 km resolution. However, the reliability and accuracy of both modeled and remotely sensed soil moisture require comparison with ground measurements at the appropriate spatiotemporal scales. One promising technique is the cosmic ray neutron probe. Here we further assess the suitability of this technique for real-time monitoring across a large area by combining data from three fixed probes and roving surveys over a 12 km × 12 km area in eastern Nebraska. Regression analyses indicated linear relationships between the fixed probe averages and roving estimates of soil moisture for each grid cell, allowing us to derive an 8 h product at spatial resolutions of 1, 3, and 12 km, with root-mean-square error of 3%, 1.8%, and 0.9%.
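A minimal sketch of the per-grid-cell regression fusion described above, with synthetic arrays standing in for the fixed-probe and rover data:

```python
import numpy as np

# For each grid cell, fit a linear map from the fixed-probe network average
# (continuous in time) to the rover's soil moisture in that cell (sampled
# on survey days), then apply the fit to the continuous record to get a
# gridded time series. All arrays below are synthetic placeholders.

n_surveys, n_cells = 18, 144
rng = np.random.default_rng(1)
fixed_avg = rng.uniform(0.10, 0.35, n_surveys)            # survey-day network mean
rover = fixed_avg[:, None] * rng.uniform(0.8, 1.2, n_cells) \
        + rng.normal(0, 0.01, (n_surveys, n_cells))       # per-cell rover moisture

# Per-cell least-squares fit: rover ~ a * fixed_avg + b
coeffs = np.array([np.polyfit(fixed_avg, rover[:, c], 1) for c in range(n_cells)])

fixed_continuous = rng.uniform(0.10, 0.35, 365)           # daily network mean
gridded = coeffs[:, 0][None, :] * fixed_continuous[:, None] + coeffs[:, 1][None, :]
print(gridded.shape)   # (365, 144): daily soil moisture per grid cell
```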
12. Diffuse cosmic gamma-ray background as a probe of cosmological gravitino regeneration and decay
SciTech Connect
Olive, K.A.; Silk, J.
1985-11-18
We predict the presence of a spectral feature in the isotropic cosmic gamma-ray background associated with gravitino decays at high redshifts. With a gravitino abundance that falls in the relatively narrow range expected for thermally regenerated gravitinos following an inflationary epoch in the very early universe, gravitinos of mass several gigaelectronvolts are found to yield an appreciable flux of 1--10-MeV diffuse gamma rays.
13. Probing Atmospheric Electric Fields in Thunderstorms through Radio Emission from Cosmic-Ray-Induced Air Showers.
PubMed
Schellart, P; Trinh, T N G; Buitink, S; Corstanje, A; Enriquez, J E; Falcke, H; Hörandel, J R; Nelles, A; Rachen, J P; Rossetto, L; Scholten, O; Ter Veen, S; Thoudam, S; Ebert, U; Koehn, C; Rutjes, C; Alexov, A; Anderson, J M; Avruch, I M; Bentum, M J; Bernardi, G; Best, P; Bonafede, A; Breitling, F; Broderick, J W; Brüggen, M; Butcher, H R; Ciardi, B; de Geus, E; de Vos, M; Duscha, S; Eislöffel, J; Fallows, R A; Frieswijk, W; Garrett, M A; Grießmeier, J; Gunst, A W; Heald, G; Hessels, J W T; Hoeft, M; Holties, H A; Juette, E; Kondratiev, V I; Kuniyoshi, M; Kuper, G; Mann, G; McFadden, R; McKay-Bukowski, D; McKean, J P; Mevius, M; Moldon, J; Norden, M J; Orru, E; Paas, H; Pandey-Pommier, M; Pizzo, R; Polatidis, A G; Reich, W; Röttgering, H; Scaife, A M M; Schwarz, D J; Serylak, M; Smirnov, O; Steinmetz, M; Swinbank, J; Tagger, M; Tasse, C; Toribio, M C; van Weeren, R J; Vermeulen, R; Vocks, C; Wise, M W; Wucknitz, O; Zarka, P
2015-04-24
We present measurements of radio emission from cosmic ray air showers that took place during thunderstorms. The intensity and polarization patterns of these air showers are radically different from those measured during fair-weather conditions. With the use of a simple two-layer model for the atmospheric electric field, these patterns can be well reproduced by state-of-the-art simulation codes. This in turn provides a novel way to study atmospheric electric fields. PMID:25955053
14. Cosmic ray decreases and particle acceleration in 1978-1982 and the associated solar wind structures
NASA Technical Reports Server (NTRS)
Cane, H. V.; Richardson, I. G.; Von Rosenvinge, T. T.
1993-01-01
Results of a study of the time histories of particles in the energy range 1 MeV to 1 GeV at the times of greater than 3-percent cosmic ray decreases in the years 1978-1982 are presented. The intensity-time profiles of the particles are used to separate the cosmic ray decreases into four classes, which are subsequently associated with three types of solar wind structures. Decreases in class 1 (15 events) and class 2 (26 events) are associated with shocks driven by energetic coronal mass ejections. For class 1 events, the ejecta are detected at 1 AU, whereas this is not usually the case for class 2 events. The shock must therefore play a dominant role in producing the cosmic ray depression in class 2 events. It is argued that since energetic particles (from MeV to GeV energies) seen at Earth may respond to solar wind structures which are not detected at Earth, consideration of particle observations over a wide range of energies is necessary for a full understanding of cosmic ray decreases.
15. A New Mechanism of Magnetic Field Generation in Supernova Shock Waves and its Implication for Cosmic Ray Acceleration
Diamond, Patrick
2005-10-01
SNR shocks are the most probable source of galactic cosmic rays. We discuss the diffusive acceleration mechanism in terms of its potential to accelerate CRs to 10^18 eV, as observations imply. One possibility, currently discussed in the literature, is to resonantly generate a turbulent magnetic field via accelerated particles in excess of the background field. We indicate some difficulties of this scenario and suggest a different possibility, which is based on the generation of Alfven waves at the gyroradius scale at the background field level, with a subsequent transfer to longer scales via interaction with strong acoustic turbulence in the shock precursor. The acoustic turbulence in turn, may be generated by Drury instability or by parametric instability of the Alfven (A) waves. The essential idea is an A-->A+S decay instability process, where one of the interacting scatterers (i.e. the sound, or S-waves) are driven by the Drury instability process. This rapidly generates longer wavelength Alfven waves, which in turn resonate with high energy CRs thus binding them to the shock and enabling their further acceleration.
16. PROBING THE EPOCH OF PRE-REIONIZATION BY CROSS-CORRELATING COSMIC MICROWAVE AND INFRARED BACKGROUND ANISOTROPIES
SciTech Connect
Atrio-Barandela, F.; Kashlinsky, A. E-mail: [email protected]
2014-12-20
The epoch of first star formation and the state of the intergalactic medium (IGM) at that time are not directly observable with current telescopes. The radiation from those early sources is now part of the cosmic infrared background (CIB) and, as these sources ionize the gas around them, the IGM plasma would produce faint temperature anisotropies in the cosmic microwave background (CMB) via the thermal Sunyaev-Zeldovich (TSZ) effect. While these TSZ anisotropies are too faint to be detected, we show that the cross-correlation of maps of source-subtracted CIB fluctuations from Euclid, with suitably constructed microwave maps at different frequencies, can probe the physical state of the gas during reionization and test/constrain models of the early CIB sources. We identify the frequency-combined, CMB-subtracted microwave maps from space- and ground-based instruments to show that they can be cross-correlated with the forthcoming all-sky Euclid CIB maps to detect the cross-power at scales ∼5'-60' with signal-to-noise ratios (S/Ns) of up to S/N ∼ 4-8 depending on the contribution to the Thomson optical depth during those pre-reionization epochs (Δτ ≅ 0.05) and the temperature of the IGM (up to ∼10{sup 4} K). Such a measurement would offer a new window to explore the emergence and physical properties of these first light sources.
17. DIFFUSE EMISSION MEASUREMENT WITH THE SPECTROMETER ON INTEGRAL AS AN INDIRECT PROBE OF COSMIC-RAY ELECTRONS AND POSITRONS
SciTech Connect
Bouchet, Laurent; Jourdain, Elisabeth; Roques, Jean-Pierre; Strong, Andrew W.; Porter, Troy A.; Moskalenko, Igor V.
2011-09-20
Significant advances have been made in the understanding of the diffuse Galactic hard X-ray continuum emission using data from the INTEGRAL observatory. The diffuse hard power-law component seen with the SPectrometer on INTEGRAL (SPI) has been identified with inverse-Compton emission from relativistic (GeV) electrons on the cosmic microwave background and Galactic interstellar radiation field. In the present analysis, SPI data from 2003 to 2009, with a total exposure time of {approx}10{sup 8} s, are used to derive the Galactic ridge hard X-ray spatial distribution and spectrum between 20 keV and 2.4 MeV. Both are consistent with predictions from the GALPROP code. The good agreement between measured and predicted emission from keV to GeV energies suggests that the correct production mechanisms have been identified. We discuss the potential of the SPI data to provide an indirect probe of the interstellar cosmic-ray electron distribution, in particular for energies below a few GeV.
18. Direct Acceleration of Pickup Ions at The Solar Wind Termination Shock: The Production of Anomalous Cosmic Rays
NASA Technical Reports Server (NTRS)
Ellison, Donald C.; Jones, Frank C.; Baring, Matthew G.
1998-01-01
We have modeled the injection and acceleration of pickup ions at the solar wind termination shock and investigated the parameters needed to produce the observed Anomalous Cosmic Ray (ACR) fluxes. A non-linear Monte Carlo technique was employed, which in effect solves the Boltzmann equation and is not restricted to near-isotropic particle distribution functions. This technique models the injection of thermal and pickup ions, the acceleration of these ions, and the determination of the shock structure under the influence of the accelerated ions. The essential effects of injection are treated in a mostly self-consistent manner, including effects from shock obliquity, cross- field diffusion, and pitch-angle scattering. Using recent determinations of pickup ion densities, we are able to match the absolute flux of hydrogen in the ACRs by assuming that pickup ion scattering mean free paths, at the termination shock, are much less than an AU and that modestly strong cross-field diffusion occurs. Simultaneously, we match the flux ratios He(+)/H(+) or O(+)/H(+) to within a factor approx. 5. If the conditions of strong scattering apply, no pre-termination-shock injection phase is required and the injection and acceleration of pickup ions at the termination shock is totally analogous to the injection and acceleration of ions at highly oblique interplanetary shocks recently observed by the Ulysses spacecraft. The fact that ACR fluxes can be modeled with standard shock assumptions suggests that the much-discussed "injection problem" for highly oblique shocks stems from incomplete (either mathematical or computer) modeling of these shocks rather than from any actual difficulty shocks may have in injecting and accelerating thermal or quasi-thermal particles.
19. Probing the Light Speed Anisotropy with Respect to the Cosmic Microwave Background Radiation Dipole
Gurzadyan, V. G.; Bocquet, J.-P.; Kashin, A.; Margarian, A.; Bartalini, O.; Bellini, V.; Castoldi, M.; D'Angelo, A.; Didelez, J.-P.; di Salvo, R.; Fantini, A.; Gervino, G.; Ghio, F.; Girolami, B.; Giusa, A.; Guidal, M.; Hourany, E.; Knyazyan, S.; Kouznetsov, V.; Kunne, R.; Lapik, A.; Levi Sandri, P.; Lleres, A.; Mehrabyan, S.; Moricciani, D.; Nedorezov, V.; Perrin, C.; Rebreyend, D.; Russo, G.; Rudnev, N.; Schaerf, C.; Sperduto, M.-L.; Sutera, M.-C.; Turinge, A.
We have studied the angular fluctuations in the speed of light with respect to the apex of the dipole of the Cosmic Microwave Background (CMB) radiation using experimental data obtained with the GRAAL facility, located at the European Synchrotron Radiation Facility (ESRF) in Grenoble. The measurements were based on the stability of the Compton edge of laser photons scattered on the 6 GeV monochromatic electron beam. The results enable one to obtain a conservative constraint on the anisotropy of the light speed variations, Δc(θ)/c < 3×10⁻¹², i.e. with higher precision than from previous experiments.
20. A new probe of the magnetic field power spectrum in cosmic web filaments
Hales, Christopher A.; Greiner, Maksim; Ensslin, Torsten A.
2015-08-01
Establishing the properties of magnetic fields on scales larger than galaxy clusters is critical for resolving the unknown origin and evolution of galactic and cluster magnetism. More generally, observations of magnetic fields on cosmic scales are needed for assessing the impacts of magnetism on cosmology, particle physics, and structure formation over the full history of the Universe. However, firm observational evidence for magnetic fields in large scale structure remains elusive. In an effort to address this problem, we have developed a novel statistical method to infer the magnetic field power spectrum in cosmic web filaments using observations of the two-point correlation of Faraday rotation measures from a dense grid of extragalactic radio sources. Here we describe our approach, which embeds and extends the pioneering work of Kolatt (1998) within the context of Information Field Theory (a statistical theory for Bayesian inference on spatially distributed signals; Enßlin et al. 2009). We describe prospects for observation, for example with forthcoming data from the ultra-deep JVLA CHILES Con Pol survey and future surveys with the SKA.
1. Modeling Focused Acceleration of Cosmic-Ray Particles by Stochastic Methods
Armstrong, C. K.; Litvinenko, Yuri E.; Craig, I. J. D.
2012-10-01
Schlickeiser & Shalchi suggested that a first-order Fermi mechanism of focused particle acceleration could be important in several astrophysical applications. In order to investigate focused acceleration, we express the Fokker-Planck equation as an equivalent system of stochastic differential equations. We simplify the system for a set of physically motivated parameters, extend the analytical theory, and determine the evolving particle distribution numerically. While our numerical results agree with the focused acceleration rate of Schlickeiser & Shalchi for a weakly anisotropic particle distribution, we establish significant limitations of the analytical approach. Momentum diffusion is found to be more significant than focused acceleration at early times. Most critically, the particle distribution rapidly becomes anisotropic, leading to a much slower momentum gain rate. We discuss the consequences of our results for the role of focused acceleration in astrophysics.
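The translation from Fokker-Planck equation to stochastic differential equations that the authors use can be sketched compactly. For isotropic momentum diffusion of the form ∂f/∂t = p⁻² ∂/∂p[p² D(p) ∂f/∂p], the equivalent Itô equation is dp = A(p) dt + √(2D(p)) dW with drift A(p) = dD/dp + 2D/p. The following minimal Python sketch is not the authors' code: the diffusion coefficient D(p) = D₀p² and all numerical parameters are illustrative assumptions, and only the momentum-diffusion part is integrated, with the Euler-Maruyama scheme.

```python
import numpy as np

# Euler-Maruyama integration of dp = A(p) dt + sqrt(2 D(p)) dW, the Ito SDE
# equivalent to df/dt = p^-2 d/dp [p^2 D(p) df/dp]. With the illustrative
# choice D(p) = D0 * p^2, the drift is A(p) = dD/dp + 2 D/p = 4 * D0 * p.
D0, dt, nsteps = 1.0e-3, 1.0e-2, 5000
rng = np.random.default_rng(0)
p = np.ones(20_000)                       # monoenergetic injection at p = 1

for _ in range(nsteps):
    dW = rng.normal(0.0, np.sqrt(dt), p.size)
    p += 4.0 * D0 * p * dt + np.sqrt(2.0 * D0) * p * dW
    p = np.abs(p)                         # reflecting boundary at p = 0

# A histogram of log(p) approximates the isotropic part of the evolving
# distribution; the anisotropy effects central to the paper are not modeled.
hist, edges = np.histogram(np.log(p), bins=60)
```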
2. Yang-Mills Gravity in Flat Space-Time II:. Gravitational Radiations and Lee-Yang Force for Accelerated Cosmic Expansion
Hsu, Jong-Ping
Within Yang-Mills gravity with translation group T(4) in flat space-time, the invariant action involving quadratic translation gauge-curvature leads to quadrupole radiations, which are shown to be consistent with experiments. The radiation power turns out to be the same as that in Einstein's gravity to the second-order approximation. We also discuss an interesting physical reason for the accelerated cosmic expansion based on the long-range Lee-Yang force of Ub(1) gauge field associated with the established conservation law of baryon number. We show that the Lee-Yang force can be related to a linear potential ∝ r, provided the gauge field satisfies a fourth-order differential equation in flat space-time. Furthermore, we consider an experimental test of the Lee-Yang force related to the accelerated cosmic expansion. The necessity of generalizing Lorentz transformations for accelerated frames of reference and accelerated Wu-Doppler effects are briefly discussed.
3. Generation of the cosmic rays flux variations due to surfatron acceleration of charges by electromagnetic waves in space plasma
2016-07-01
Analysis of experimental data on cosmic-ray (CR) spectra has revealed variability on time scales of a few years, in particular CR variations observed in the E/Z range from TeV to 10000 TeV, where E is the particle energy and Z its charge number. Consequently, the source of these variations must lie no more than about 1 parsec from the Sun, in the closest local interstellar clouds. As a mechanism for producing such variations, we consider the surfatron acceleration of CR particles by an electromagnetic wave in a relatively quiet space plasma. On the basis of the developed model, numerical calculations were performed for the capture dynamics of particles (electrons, protons, helium and iron nuclei) in the effective potential well of the wave, followed by growth of their energy by 3-6 orders of magnitude. We studied the optimal conditions for surfatron acceleration of charged particles in space plasma, the rate of energy growth of trapped particles, the dynamics of the wave phase along the captured particle trajectory, and the temporal dynamics of the components of the particle momentum and velocity. The capture of even a small fraction of particles with energies of about a TeV or less, followed by their surfatron acceleration to energies of about 10000 TeV, would lead to a significant increase in the CR flux at such high energies. The CR flux variations are thus conditioned by changes in space weather parameters.
4. Data acquisition, storage and control architecture for the SuperNova Acceleration Probe
SciTech Connect
Prosser, Alan; Cardoso, Guilherme; Chramowicz, John; Marriner, John; Rivera, Ryan; Turqueti, Marcos; /Fermilab
2007-05-01
The SuperNova Acceleration Probe (SNAP) instrument is being designed to collect image and spectroscopic data for the study of dark energy in the universe. In this paper, we describe a distributed architecture for the data acquisition system which interfaces to visible light and infrared imaging detectors. The architecture includes the use of NAND flash memory for the storage of exposures in a file system. Also described is an FPGA-based lossless data compression algorithm with a configurable pre-scaler based on a novel square root data compression method to improve compression performance. The required interactions of the distributed elements with an instrument control unit will be described as well.
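The square-root pre-scaling idea can be illustrated in a few lines. Because the shot noise on a photon count N scales as √N, quantizing a scaled √N keeps the quantization step below the noise while shrinking the dynamic range ahead of the lossless coder. The sketch below is a generic square-root companding example under that assumption, not the actual SNAP FPGA algorithm; the configurable gain stands in for the abstract's pre-scaler setting.

```python
import numpy as np

# Generic square-root companding sketch: code = round(gain * sqrt(N)).
# The reconstruction error is ~sqrt(N)/gain counts, i.e. below the
# Poisson noise sqrt(N) for gain > 1, so little information is lost.

def sqrt_prescale(counts, gain=2.0):
    """Map raw counts to reduced-range integer codes."""
    return np.round(gain * np.sqrt(counts.astype(np.float64))).astype(np.uint16)

def sqrt_unscale(codes, gain=2.0):
    """Approximate inverse; exact only up to the quantization step."""
    return (codes.astype(np.float64) / gain) ** 2

raw = np.random.default_rng(1).poisson(5000, size=(64, 64))
rec = sqrt_unscale(sqrt_prescale(raw))
print(np.abs(rec - raw).max())   # ~35 counts, vs ~71 counts of shot noise
```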
5. Multi-probing of the auroral acceleration region by Cluster (Invited)
Marklund, G. T.; Sadeghi, S.; Karlsson, R.; Lindqvist, P.; Nilsson, H.; Pickett, J.; Fazakerley, A. N.; Forsyth, C.; Masson, A.
2010-12-01
Multi-probe in situ measurements in the auroral acceleration region became a reality in November 2008, when the orbit of the European Space Agency Cluster satellites was lowered to cover this region, typically located between 5000 and 12000 km altitude above the polar atmosphere. Results are presented from Cluster crossings of this region, at different altitudes and with time separations of a few minutes between the spacecraft. The unique observations allow us to address the spatial and temporal properties of this region, such as the morphology and stability in space and time of the associated quasi-static electric potential structures. The formation of such acceleration structures is a fundamental and ubiquitous space plasma process, taking place not only around Earth, but around many other solar system planets, such as Mars, Jupiter, and Saturn.
6. Small-scale cosmic microwave background anisotropies as probe of the geometry of the universe
NASA Technical Reports Server (NTRS)
Kamionkowski, Marc; Spergel, David N.; Sugiyama, Naoshi
1994-01-01
We perform detailed calculations of cosmic microwave background (CMB) anisotropies in a cold dark matter (CDM)-dominated open universe with primordial adiabatic density perturbations for a variety of reionization histories. The CMB anisotropies depend primarily on the geometry of the universe, which in a matter-dominated universe is determined by Omega and the optical depth to the surface of last scattering. In particular, the location of the primary Doppler peak depends primarily on Omega and is fairly insensitive to the other unknown parameters, such as Omega_b, h, Lambda, and the shape of the power spectrum. Therefore, if the primordial density perturbations are adiabatic, measurements of CMB anisotropies on small scales may be used to determine Omega.
7. Late time cosmic acceleration from vacuum Brans-Dicke theory in 5D
Ponce de Leon, J.
2010-05-01
We show that the scalar-vacuum Brans-Dicke equations in 5D are equivalent to Brans-Dicke theory in 4D with a self-interacting potential and an effective matter field. The cosmological implication, in the context of FRW models, is that the observed accelerated expansion of the universe comes naturally from the condition that the scalar field is not a ghost, i.e. ω > -3/2. We find an effective matter-dominated 4D universe which shows accelerated expansion if -3/2 < ω < -1. We study the question of whether accelerated expansion can be made compatible with large values of ω, within the framework of a 5D scalar-vacuum Brans-Dicke theory with variable, instead of constant, parameter ω. In this framework, and based on a general class of solutions of the field equations, we demonstrate that accelerated expansion is incompatible with large values of ω.
8. Mass entrainment and turbulence-driven acceleration of ultra-high energy cosmic rays in Centaurus A
Wykes, Sarka; Croston, Judith H.; Hardcastle, Martin J.; Eilek, Jean A.; Biermann, Peter L.; Achterberg, Abraham; Bray, Justin D.; Lazarian, Alex; Haverkorn, Marijke; Protheroe, Ray J.; Bromberg, Omer
2013-10-01
Observations of the FR I radio galaxy Centaurus A in radio, X-ray, and gamma-ray bands provide evidence for lepton acceleration up to several TeV and clues about hadron acceleration to tens of EeV. Synthesising the available observational constraints on the physical conditions and particle content in the jets, inner lobes and giant lobes of Centaurus A, we aim to evaluate its feasibility as an ultra-high-energy cosmic-ray source. We apply several methods of determining jet power and affirm the consistency of various power estimates of ~1 × 10⁴³ erg s⁻¹. Employing scaling relations based on previous results for 3C 31, we estimate particle number densities in the jets, encompassing available radio through X-ray observations. Our model is compatible with the jets ingesting ~3 × 10²¹ g s⁻¹ of matter via external entrainment from hot gas and ~7 × 10²² g s⁻¹ via internal entrainment from jet-contained stars. This leads to an imbalance between the internal lobe pressure available from radiating particles and magnetic field, and our derived external pressure. Based on knowledge of the external environments of other FR I sources, we estimate the thermal pressure in the giant lobes as 1.5 × 10⁻¹² dyn cm⁻², from which we deduce a lower limit to the temperature of ~1.6 × 10⁸ K. Using dynamical and buoyancy arguments, we infer ~440-645 Myr and ~560 Myr as the sound-crossing and buoyancy ages of the giant lobes respectively, inconsistent with their spectral ages. We re-investigate the feasibility of particle acceleration via stochastic processes in the lobes, placing new constraints on the energetics and on turbulent input to the lobes. The same "very hot" temperatures that allow self-consistency between the entrainment calculations and the missing pressure also allow stochastic UHECR acceleration models to work.
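As a rough consistency check of the quoted lobe numbers, assuming only the ideal-gas law P = n k_B T (the paper's actual density estimate is not reproduced here), the thermal pressure and the lower limit on the temperature together imply an upper limit on the thermal particle density:

```latex
n \;=\; \frac{P}{k_B T}
  \;\lesssim\; \frac{1.5\times10^{-12}\ \mathrm{dyn\,cm^{-2}}}
  {\left(1.38\times10^{-16}\ \mathrm{erg\,K^{-1}}\right)\left(1.6\times10^{8}\ \mathrm{K}\right)}
  \;\approx\; 7\times10^{-5}\ \mathrm{cm^{-3}},
```

of the order expected for giant radio-galaxy lobes.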
9. Onion-shell model of cosmic ray acceleration in supernova remnants
NASA Technical Reports Server (NTRS)
Bogdan, T. J.; Volk, H. J.
1983-01-01
A method is devised to approximate the spatially averaged momentum distribution function for the accelerated particles at the end of the active lifetime of a supernova remnant. The analysis is confined to the test particle approximation and adiabatic losses are oversimplified, but unsteady shock motion, evolving shock strength, and non-uniform gas flow effects on the accelerated particle spectrum are included. Monoenergetic protons are injected at the shock front. It is found that the dominant effect on the resultant accelerated particle spectrum is a changing spectral index with shock strength. High energy particles are produced in early phases, and the resultant distribution function is a slowly varying power law over several orders of magnitude, independent of the specific details of the supernova remnant.
10. Kinetic studies of wave-particle interactions in cosmic-ray acceleration
Pohl, Martin; Niemiec, Jacek; Stroman, Thomas; Bret, Antoine; Roeken, Christian
Shock acceleration relies on the presence of magnetic-field fluctuations that can scatter relativistic charged particles in both the upstream and downstream regions of the shock. We report on kinetic particle-in-cell simulations of the non-linear evolution of magnetic turbulence that arises upstream of the shock as well as at the shock itself. We will in particular address the relation between modes seen in the simulations and waves expected on the grounds of a linear instability analysis, the efficiency of small-scale turbulence in scattering relativistic particles, and the influence of accelerated particles on the formation of the shock itself.
11. ANISOTROPY AS A PROBE OF THE GALACTIC COSMIC-RAY PROPAGATION AND HALO MAGNETIC FIELD
SciTech Connect
Qu, Xiao-bo; Zhang, Yi; Liu, Cheng; Hu, Hong-bo; Xue, Liang
2012-05-01
The anisotropy of cosmic rays (CRs) in the solar vicinity is generally attributed to CR streaming due to the discrete distribution of CR sources or local magnetic field modulation. Recently, the two-dimensional large-scale CR anisotropy has been measured by many experiments in the TeV-PeV energy range in both hemispheres. The tail-in excess along the tangential direction of the local spiral arm and the loss cone deficit pointing to the north Galactic pole direction agree with what has been obtained at tens to hundreds of GeV. The persistence of the two large-scale anisotropy structures over such a wide energy range suggests that the anisotropy might be due to global streaming of the Galactic CRs (GCRs). This work tries to extend the observed CR anisotropy picture from the solar system to the whole galaxy. In such a case, we find an interesting new signature of the GCR propagation: a loop of GCR streaming. We further calculate the magnetic field induced by the overall GCR streaming, and find a qualitative consistency with the observed structure of the halo magnetic field.
12. Probing the cosmic distance duality with strong gravitational lensing and supernovae Ia data
Holanda, R. F. L.; Busti, V. C.; Alcaniz, J. S.
2016-02-01
We propose and perform a new test of the cosmic distance-duality relation (CDDR), D_L(z)/[D_A(z)(1 + z)²] = 1, where D_A is the angular diameter distance and D_L is the luminosity distance to a given source at redshift z, using strong gravitational lensing (SGL) and type Ia Supernovae (SNe Ia) data. We show that the ratios D = D_A12/D_A2 and D* = D_L12/D_L2, where the subscripts 1 and 2 correspond, respectively, to redshifts z1 and z2, are linked by D/D* = (1 + z1)² if the CDDR is valid. We allow departures from the CDDR by defining two functions for η(z1), which equals unity when the CDDR is valid. We find that the combination of SGL and SNe Ia data favours no violation of the CDDR at the 1σ confidence level (η(z) ≃ 1), in complete agreement with other tests and reinforcing the theoretical pillars of the CDDR.
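The identity quoted above follows in two lines from the reciprocity relation, applied both to the observer-source pair (0, z₂) and to the lens-source pair (z₁, z₂), whose relative redshift is (1+z₂)/(1+z₁). A compact sketch of the derivation, assuming the CDDR holds for every emitter-observer pair:

```latex
D_{L2} = (1+z_2)^2 D_{A2}, \qquad
D_{L12} = \left(\frac{1+z_2}{1+z_1}\right)^{\!2} D_{A12}
\;\;\Longrightarrow\;\;
\frac{D}{D^{*}} = \frac{D_{A12}/D_{A2}}{D_{L12}/D_{L2}}
 = \frac{(1+z_1)^2}{(1+z_2)^2}\,(1+z_2)^2 = (1+z_1)^2 .
```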
13. Cosmic ray confinement and transport models for probing their putative sources
Malkov, M. A.
2015-09-01
Recent efforts in cosmic ray (CR) confinement and transport theory are discussed. Three problems are addressed as being crucial for understanding the present day observations and their possible telltale signs of the CR origin. The first problem concerns CR behavior right after their release from a source, such as a supernova remnant. At this phase, the CRs are confined near the source by self-emitted Alfven waves. The second is the problem of diffusive propagation of CRs through the turbulent interstellar medium. This is a seemingly straightforward and long-resolved problem, but it remains controversial and reveals paradoxes. A resolution based on the Chapman-Enskog asymptotic CR transport analysis, which also includes magnetic focusing, is suggested. The third problem concerns puzzling sharp (∼10°) anisotropies in the CR arrival directions that might carry important clues about CR transport between the source and the observer. The overarching goal is to improve our understanding of all aspects of CR escape from their sources and ensuing propagation through the Galaxy, to the level at which their sources can be identified observationally.
14. Ultrahigh energy cosmic ray probes of large scale structure and magnetic fields
Sigl, Günter; Miniati, Francesco; Enßlin, Torsten A.
2004-08-01
We study signatures of a structured universe in the multipole moments, auto-correlation function, and cluster statistics of ultrahigh energy cosmic rays above 10¹⁹ eV. We compare scenarios where the sources are distributed homogeneously or according to the baryon density distribution obtained from a cosmological large scale structure simulation. The influence of extragalactic magnetic fields is studied by comparing the case of negligible fields with fields expected to be produced along large scale shocks with a maximal strength consistent with observations. We confirm that strongly magnetized observers would predict considerable anisotropy on large scales, which is already in conflict with current data. In the best fit scenario only the sources are strongly magnetized, although deflection can still be considerable, of order 20° up to 10²⁰ eV, and a pronounced GZK cutoff is predicted. We then discuss signatures for future large scale full-sky detectors such as the Pierre Auger and EUSO projects. Auto-correlations are sensitive to the source density only if magnetic fields do not significantly affect propagation. In contrast, for a weakly magnetized observer, degree scale auto-correlations below a certain level indicate magnetized discrete sources. It may be difficult even for next generation experiments to distinguish between structured and unstructured source distributions.
15. Three-dimensional electron density along the WSA and MSNA latitudes probed by FORMOSAT-3/COSMIC
Chang, F. Y.; Liu, J. Y.; Chang, L. C.; Lin, C. H.; Chen, C. H.
2015-09-01
In this paper, we employ electron density profiles derived by the GPS radio occultation experiment aboard the FORMOSAT-3/COSMIC (F3/C) satellites to examine the electron density on geographic latitudes of 40° to 80° in the Southern hemisphere and 30° to 60° in the Northern hemisphere at various global fixed local times from February 2009 to January 2010. The results reveal that an eastward shift of a single-peak plasma density feature occurs along the Weddell Sea Anomaly (WSA) latitudes, while a double-peak plasma density feature appears along the northern Mid-latitude Summer Nighttime Anomaly (MSNA) latitudes. A cross-comparison between three-dimensional F3/C electron density and HWM93 simulation confirms that the magnetic meridional effect and vertical effect caused by neutral winds exhibit the eastward shifts. Furthermore, we find that the eastward shift of the peaks when viewed as a function of local time suggests that they could be interpreted as being comprised of different tidal components with distinct zonal phase velocities in local time.
16. Magnetic field amplification in nonlinear diffusive shock acceleration including resonant and non-resonant cosmic-ray driven instabilities
SciTech Connect
Bykov, Andrei M.; Osipov, Sergei M.; Ellison, Donald C.; Vladimirov, Andrey E. E-mail: [email protected] E-mail: [email protected]
2014-07-10
We present a nonlinear Monte Carlo model of efficient diffusive shock acceleration where the magnetic turbulence responsible for particle diffusion is calculated self-consistently from the resonant cosmic-ray (CR) streaming instability, together with non-resonant short- and long-wavelength CR-current-driven instabilities. We include the backpressure from CRs interacting with the strongly amplified magnetic turbulence which decelerates and heats the super-Alfvénic flow in the extended shock precursor. Uniquely, in our plane-parallel, steady-state, multi-scale model, the full range of particles, from thermal (∼eV) injected at the viscous subshock to the escape of the highest energy CRs (∼PeV) from the shock precursor, are calculated consistently with the shock structure, precursor heating, magnetic field amplification, and scattering center drift relative to the background plasma. In addition, we show how the cascade of turbulence to shorter wavelengths influences the total shock compression, the downstream proton temperature, the magnetic fluctuation spectra, and accelerated particle spectra. A parameter survey is included where we vary shock parameters, the mode of magnetic turbulence generation, and turbulence cascading. From our survey results, we obtain scaling relations for the maximum particle momentum and amplified magnetic field as functions of shock speed, ambient density, and shock size.
17. Complete cosmic scenario from inflation to late time acceleration: Nonequilibrium thermodynamics in the context of particle creation
Chakraborty, Subenoy; Saha, Subhajit
2014-12-01
The paper deals with the mechanism of particle creation in the framework of irreversible thermodynamics. The second order nonequilibrium thermodynamical prescription of Israel and Stewart has been presented with particle creation rate, treated as the dissipative effect. In the background of a flat Friedmann-Robertson-Walker (FRW) model, we assume the nonequilibrium thermodynamical process to be isentropic so that the entropy per particle does not change and consequently the dissipative pressure can be expressed linearly in terms of the particle creation rate. Here the dissipative pressure behaves as a dynamical variable having a nonlinear inhomogeneous evolution equation and the entropy flow vector satisfies the second law of thermodynamics. Further, using the Friedmann equations and by proper choice of the particle creation rate as a function of the Hubble parameter, it is possible to show (separately) a transition from the inflationary phase to the radiation era and also from the matter dominated era to late time acceleration. Also, in analogy to analytic continuation, it is possible to show a continuous cosmic evolution from inflation to late time acceleration by adjusting the parameters. It is found that in the de Sitter phase, the comoving entropy increases exponentially with time, keeping entropy per particle unchanged. Subsequently, the above cosmological scenarios have been described from a field theoretic point of view by introducing a scalar field having self-interacting potential. Finally, we make an attempt to show the cosmological phenomenon of particle creation as Hawking radiation, particularly during the inflationary era.
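For reference, the linear relation alluded to above is, in the standard notation of the particle-creation literature (quoted here as the usual isentropic-creation result, not a new derivation),

```latex
\Pi \;=\; -\,\frac{\Gamma}{3H}\,(\rho + p),
```

where Π is the dissipative (creation) pressure, Γ the particle creation rate, H the Hubble parameter, and ρ and p the energy density and equilibrium pressure; choosing Γ as a function of H then closes the Friedmann equations, as described in the abstract.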
18. Cosmic accelerated expansion and the entropy-corrected holographic dark energy
2011-06-01
By considering the logarithmic correction to the energy density, we study the behavior of the Hubble parameter in the holographic dark energy model. We assume that the universe is dominated by interacting dark energy and matter, and we study the accelerated expansion of the universe, which may occur either in the early universe or at late times.
19. 21 cm line bispectrum as a method to probe cosmic dawn and epoch of reionization
Shimabukuro, Hayato; Yoshiura, Shintaro; Takahashi, Keitaro; Yokoyama, Shuichiro; Ichiki, Kiyotomo
2016-05-01
The redshifted 21 cm signal is a promising tool for investigating the state of the intergalactic medium (IGM) during the cosmic dawn (CD) and epoch of reionization (EoR). In our previous work, we studied the variance and skewness of the 21 cm fluctuations to give a clear interpretation of the 21 cm power spectrum and found that skewness is a good indicator of the epoch when X-ray heating becomes effective. Thus, the non-Gaussian features of the spatial distribution of the 21 cm signal are expected to be useful for investigating the astrophysical effects in the CD and EoR. In this paper, in order to investigate such non-Gaussian features in more detail, we focus on the bispectrum of the 21 cm signal. The 21 cm brightness temperature bispectrum is expected to be produced by non-Gaussianity due to various astrophysical effects such as the Wouthuysen-Field effect, X-ray heating, and reionization. We study various properties of the 21 cm bispectrum, such as its scale dependence, shape dependence, and redshift evolution, as well as the contribution of each component to the 21 cm bispectrum. We find that each component has a characteristic scale-dependent feature. In particular, the bulk of the 21 cm bispectrum at z = 20 comes from matter fluctuations, while in other epochs it is mainly determined by the spin and/or neutral fraction fluctuations. We therefore expect that future experiments could obtain more detailed information on the IGM in the CD and EoR by using the 21 cm bispectrum in combination with the power spectrum and skewness.
20. Probing the Dark Flow Signal in WMAP 9-Year and Planck Cosmic Microwave Background Maps
Atrio-Barandela, F.; Kashlinsky, A.; Ebeling, H.; Fixsen, D. J.; Kocevski, D.
2015-09-01
The “dark flow” dipole is a statistically significant dipole found at the position of galaxy clusters in filtered maps of Cosmic Microwave Background (CMB) temperature anisotropies. The dipole measured in WMAP 3-, 5-, and 7-year data releases was (1) mutually consistent, (2) roughly aligned with the all-sky CMB dipole, and (3) correlated with clusters’ X-ray luminosities. We analyzed WMAP 9-year and Planck first-year data releases using a catalog of 980 clusters outside of the Kp0 mask to test our earlier findings. The dipoles measured on these new data sets are fully compatible with our earlier estimates, are similar in amplitude and direction to our previous results, and are in disagreement with the results of an earlier study by the Planck Collaboration. Furthermore, in the Planck data sets dipoles are found to be independent of frequency, ruling out the thermal Sunyaev-Zeldovich effect as the source of the effect. In the data of both WMAP and Planck we find a clear correlation between the dipole measured at the cluster location in filtered maps and the average anisotropy on the original maps, further proving that the dipole is associated with clusters. The dipole signal is dominated by the most massive clusters, with a statistical significance that is better than 99%, slightly larger than in WMAP. Since both data sets differ in foreground contributions, instrumental noise, and other systematics, the agreement between the WMAP and Planck dipoles argues against them being due to systematic effects in either of the experiments.
1. Lyman-tomography of Cosmic Infrared Background Fluctuations with Euclid: Probing Emissions and Baryonic Acoustic Oscillations at z ≳ 10
Kashlinsky, A.; Arendt, R. G.; Atrio-Barandela, F.; Helgason, K.
2015-11-01
The Euclid space mission, designed to probe the evolution of the Dark Energy (DE), will map a large area of the sky in three adjacent near-IR filters, Y, J, and H. This coverage will also enable mapping source-subtracted cosmic infrared background (CIB) fluctuations with unprecedented accuracy on sub-degree angular scales. Here, we propose methodology, using the Lyman-break tomography applied to the Euclid-based CIB maps, to accurately isolate the history of CIB emissions as a function of redshift over 10 ≲ z ≲ 20 and to identify the baryonic acoustic oscillations (BAOs) at those epochs. To identify the BAO signature, we would assemble individual CIB maps over conservatively large contiguous areas of ≳400 deg². The method can isolate the CIB spatial spectrum at each z to sub-percent statistical accuracy. We illustrate this with a specific model of CIB production at high z normalized to reproduce the measured Spitzer-based CIB fluctuation. We show that even if the latter contains only a small component from high-z sources, the amplitude of that component can be accurately isolated with the methodology proposed here and the BAO signatures at z ≳ 10 are recovered well from the CIB fluctuation spatial spectrum. Probing the BAO at those redshifts will be an important test of the underlying cosmological paradigm and would narrow the overall uncertainties on the evolution of cosmological parameters, including the DE. Similar methodology is applicable to the planned WFIRST mission, where we show that a possible fourth near-IR channel at ≥2 μm would be beneficial.
2. CHILES Con Pol: Probing galaxy evolution, the dark Universe, and cosmic magnetism with a deep 1000 hour Jansky VLA survey
Hales, Christopher A.; Chiles Con Pol Collaboration
2014-04-01
We recently started a 1000 hour campaign to observe 0.2 square degrees of the COSMOS field in full polarization continuum at 1.4 GHz with the Jansky VLA, as part of a joint program with the spectral line COSMOS HI Large Extragalactic Survey (CHILES). When complete, we expect our CHILES Continuum Polarization (CHILES Con Pol) survey to reach an unprecedented SKA-era sensitivity of 0.7 μJy per 4 arcsecond FWHM beam. Here we present the key goals of CHILES Con Pol, which are to (i) produce a source catalog of legacy value to the astronomical community, (ii) measure differential source counts in total intensity, linear polarization, and circular polarization in order to constrain the redshift and luminosity distributions of source populations, (iii) perform a novel weak lensing study using radio polarization as an indicator of intrinsic alignment to better study dark energy and dark matter, and (iv) probe the unknown origin of cosmic magnetism by measuring the strength and structure of intergalactic magnetic fields in the filaments of large scale structure. The CHILES Con Pol source catalog will be a useful resource for upcoming wide-field surveys by acting as a training set for machine learning algorithms, which can then be used to identify and classify radio sources in regions lacking deep multiwavelength coverage.
3. RED SUPERGIANT STARS AS COSMIC ABUNDANCE PROBES: NLTE EFFECTS IN J-BAND IRON AND TITANIUM LINES
SciTech Connect
Bergemann, Maria; Kudritzki, Rolf-Peter; Lind, Karin; Plez, Bertrand; Davies, Ben; Gazak, Zach E-mail: [email protected] E-mail: [email protected] E-mail: [email protected]
2012-06-01
Detailed non-LTE (NLTE) calculations for red supergiant (RSG) stars are presented to investigate the influence of NLTE on the formation of atomic iron and titanium lines in the J band. With their enormous brightness in the J band, RSG stars are ideal probes of cosmic abundances. Recent LTE studies have found that metallicities accurate to 0.15 dex can be determined from medium-resolution spectroscopy of individual RSGs in galaxies as distant as 10 Mpc. The NLTE results obtained in this investigation support these findings. NLTE abundance corrections for iron are smaller than 0.05 dex for effective temperatures between 3400 K and 4200 K and 0.1 dex at 4400 K. For titanium the NLTE abundance corrections vary smoothly between -0.4 dex and +0.2 dex as a function of effective temperature. For both elements, the corrections also depend on stellar gravity and metallicity. The physical reasons behind the NLTE corrections and the consequences for extragalactic J-band abundance studies are discussed.
4. Unveiling the Origin of Cosmic Rays
Olinto, Angela V.
2015-04-01
The origin of cosmic rays, relativistic particles with energies ranging from below a GeV to hundreds of EeV, is a century-old mystery. Extremely energetic phenomena occurring over a wide range of scales, from the Solar System to distant galaxies, are needed to explain the non-thermal particle spectrum that covers over 12 orders of magnitude. Space missions are the most effective platforms to study the origin and history of these cosmic particles. Current missions probe particle acceleration and propagation in the Solar System and in our Galaxy. This year ISS-CREAM and CALET join AMS in establishing the International Space Station as the most active site for studying the origin of Galactic cosmic rays. These missions will study astrophysical cosmic ray accelerators as well as other possible sources of energetic particles such as dark matter annihilation or decay. In the future, the ISS may also be the site for studying extremely high-energy extragalactic cosmic rays with JEM-EUSO. We review recent results in the quest for unveiling the sources of energetic particles with balloons and space payloads and report on activities of the Cosmic ray Science Interest Group (CosmicSIG) under the Physics of the Cosmos Program Analysis Group (PhysPAG).
5. PREFACE: Technology development for a cosmic microwave background probe of inflation
Hanany, S.; Irwin, K.
2009-07-01
In late 2007 NASA called for proposals to fund Astrophysics Strategic Mission Concept Studies. The goal was to generate concept studies for key future missions, which would be forwarded to the Astro2010 astrophysics decadal review committee for prioritization. Under the guidance and orchestration of the Primordial Polarization Program Definition Team, a NASA committee chartered to coordinate the activities of the cosmic microwave background (CMB) community, a CMB proposal aiming to represent the consensus of the entire community was submitted. A CMBPol Mission Concept Study grant was awarded in early 2008. Under the grant we reviewed the entire activities of the CMB community and proposed a path for the next decade. We also assessed the case and recommended a path for a future CMB polarization satellite. The grant funded three community-wide workshops that were held over the summer of 2008. The goal of the first workshop, held at Fermilab, was to discuss the theoretical foundation of inflation and its signature on the CMB, as well as the theoretical aspects of other polarimetric signatures observable at millimeter wavelengths. Volume 1141 of the American Institute of Physics conference proceedings summarizes the results of this workshop. The second workshop, held at Annapolis, Maryland, centered on expected systematic effects in polarimetric experiments and their potential mitigation. The third workshop, held at the NIST facility at Boulder, Colorado, focused on the technology requirements necessary to make incisive CMB polarization measurements and what was needed to advance the technology to the readiness level required for a start of a space-borne mission. The electronic proceedings presented here are the result of this third workshop. In preparing for the workshop the organizers assigned topical-editors for each technology topic. Each of them solicited white paper contributions from experts in their respective areas. The white papers were distributed to all
6. The Role of Cosmic-Ray Pressure in Accelerating Galactic Outflows
Simpson, Christine M.; Pakmor, Rüdiger; Marinacci, Federico; Pfrommer, Christoph; Springel, Volker; Glover, Simon C. O.; Clark, Paul C.; Smith, Rowan J.
2016-08-01
We study the formation of galactic outflows from supernova (SN) explosions with the moving-mesh code AREPO in a stratified column of gas with a surface density similar to the Milky Way disk at the solar circle. We compare different simulation models for SN placement and energy feedback, including cosmic rays (CRs), and find that models that place SNe in dense gas and account for CR diffusion are able to drive outflows with similar mass loading as obtained from a random placement of SNe with no CRs. Despite this similarity, CR-driven outflows differ in several other key properties including their overall clumpiness and velocity. Moreover, the forces driving these outflows originate in different sources of pressure, with the CR diffusion model relying on non-thermal pressure gradients to create an outflow driven by internal pressure and the random-placement model depending on kinetic pressure gradients to propel a ballistic outflow. CRs therefore appear to be non-negligible physics in the formation of outflows from the interstellar medium.
7. BLAZAR HALOS AS PROBE FOR EXTRAGALACTIC MAGNETIC FIELDS AND MAXIMAL ACCELERATION ENERGY
SciTech Connect
Dolag, K.; Kachelriess, M.; Ostapchenko, S.; Tomas, R.
2009-09-20
High-energy photons from blazars interact within tens of kpc with the extragalactic photon background, initiating electromagnetic pair cascades. The charged component of such cascades is deflected by extragalactic magnetic fields (EGMFs), leading to halos even around initially point-like sources. We calculate the intensity profile of the resulting secondary high-energy photons for different assumptions on the initial source spectrum and the strength of the EGMF, employing also fields found earlier in a constrained simulation of structure formation including magnetohydrodynamics processes. We find that the observation of halos around blazars like Mrk 180 probes an interesting range of EGMF strengths and acceleration models: in particular, blazar halos test if the photon energy spectrum at the source extends beyond {approx}100 TeV and how anisotropic this high-energy component is emitted.
8. Motion of a Probe Ball in the Fluid under Centrifugal Acceleration
Nyrkova, I. A.; Semenov, A. N.; Khokhlov, A. R.; Linliu, K.; Chu, B.
1997-11-01
The viscosity of a fluid can be measured by observing the motion of a probe sphere (or ball) in a centrifuge tube filled with this fluid. The hydrodynamic behavior of the probe ball moving in the centrifuge tube has been solved theoretically. We have obtained the universal relationship (for balls of a given material and size in a given tube) between the terminal ball velocity, the fluid viscosity, and the centrifuge acceleration, using a single adjustable parameter: the rotational friction coefficient between the ball and the tube. The rotation of the centrifuge tube in the horizontal plane induces an inertia force which is counterbalanced by the friction force acting on the ball. As a result, the ball moves along the tube with some characteristic speed, which is a measure of the viscosity of the fluid. This speed was calculated in the lubrication approximation. The gravitational acceleration causes the ball to move very close to the bottom of the centrifuge tube. In this situation, the gravity is balanced by a “levitation” force introduced and calculated in the present paper. The origin of this force is the formation of the “bubble” behind and below the moving ball. The theoretical prediction for the terminal velocity of the ball moving very near the bottom of the horizontal centrifuge tube is tested using a specially designed centrifuge, two types of balls, and a wide set of viscosity standards. Excellent agreement between theory and experiment suggests that we have developed a new approach to measure high viscosities of fluids at low shear rates, which might be especially useful for the investigation of polymer melts.
9. Testing flatness of the universe with probes of cosmic distances and growth
Mortonson, Michael J.
2009-06-01
When using distance measurements to probe spatial curvature, the geometric degeneracy between curvature and dark energy in the distance-redshift relation typically requires either making strong assumptions about the dark energy evolution or sacrificing precision in a more model-independent approach. Measurements of the redshift evolution of the linear growth of perturbations can break the geometric degeneracy, providing curvature constraints that are both precise and model-independent. Future supernova, CMB, and cluster data have the potential to measure the curvature with an accuracy of σ(Ω_K) = 0.002, without specifying a particular dark energy phenomenology. In combination with distance measurements, the evolution of the growth function at low redshifts provides the strongest curvature constraint if the high-redshift universe is well approximated as being purely matter dominated. However, in the presence of early dark energy or massive neutrinos, the precision in curvature is reduced due to additional degeneracies, and precise normalization of the growth function relative to recombination is important for obtaining accurate constraints. Curvature limits from distances and growth compare favorably to other approaches to curvature estimation proposed in the literature, providing either greater accuracy or greater freedom from dark energy modeling assumptions, and are complementary due to the use of independent data sets. Model-independent estimates of curvature are critical for both testing inflation and obtaining unbiased constraints on dark energy parameters.
10. Cosmic Microwave Background Fluctuations from the Kinetic Sunyaev-Zeldovich Effect as a Cosmological Probe
Park, Hyunbae; Shapiro, P.; Komatsu, E.
2012-01-01
We present a calculation of the kinetic Sunyaev-Zel'dovich (kSZ) effect on the Cosmic Microwave Background fluctuations. We focus on the scales at multipole moments ℓ = 3000-10000 that are currently being probed by the South Pole Telescope (SPT) and the Atacama Cosmology Telescope. For the post-reionization contribution to the total signal, we use third-order perturbation theory (3PT) to model the non-linearity of the post-reionization epoch. We evaluate a non-linear expression for the momentum power spectrum in Ma and Fry (2002) with the 3PT density and velocity power spectra, and use the resulting 3PT momentum power spectrum to calculate the kSZ signal. We show that 3PT is a reasonable approximation by comparing our result with previous work by Zhang, Pen and Trac (2004). For the reionization contribution, we use our N-body radiative transfer simulations to take into account the patchy ionization of the intergalactic medium during the reionization epoch. Using the ionized fraction field in the simulations, we calculate the momentum field of the ionized gas and correct for the missing power in the finite-size simulation boxes. Finally, we show the kSZ calculation for simulations with different reionization scenarios. With contributions from each epoch, we predict the total kSZ signal for different reionization histories and place constraints on reionization scenarios using an upper bound on the signal from a recent SPT measurement.
12. Dark Energy Models and Cosmic Acceleration with Anisotropic Universe in f(T) Gravity
Sharif, M.; Sehrish, Azeem
2014-04-01
This paper is devoted to studying the accelerated expansion of the universe in the context of f(T) theory of gravity. For this purpose, we construct different f(T) models and investigate their cosmological behavior through the equation of state parameter by using holographic, new agegraphic, and their power-law entropy-corrected dark energy models. We discuss the graphical behavior of this parameter versus redshift for particular values of the constant parameters in the Bianchi type I universe model. It is shown that the universe lies in different forms of dark energy, namely quintessence, phantom, and quintom, corresponding to the chosen scale factors, which depend upon the constant parameters of the models.
13. A no-go for no-go theorems prohibiting cosmic acceleration in extra dimensional models
SciTech Connect
Koster, Rik; Postma, Marieke E-mail: [email protected]
2011-12-01
A four-dimensional effective theory that arises as the low-energy limit of some extra-dimensional model is constrained by the higher dimensional Einstein equations. Steinhardt and Wesley use this to show that accelerated expansion in our four large dimensions can only be transient in a large class of Kaluza-Klein models that satisfy the (higher dimensional) null energy condition [1]. We point out that these no-go theorems are based on a rather ad hoc assumption on the metric, without which no strong statements can be made.
14. Seven-year Wilkinson Microwave Anisotropy Probe (WMAP) Observations: Are There Cosmic Microwave Background Anomalies?
Bennett, C. L.; Hill, R. S.; Hinshaw, G.; Larson, D.; Smith, K. M.; Dunkley, J.; Gold, B.; Halpern, M.; Jarosik, N.; Kogut, A.; Komatsu, E.; Limon, M.; Meyer, S. S.; Nolta, M. R.; Odegard, N.; Page, L.; Spergel, D. N.; Tucker, G. S.; Weiland, J. L.; Wollack, E.; Wright, E. L.
2011-02-01
A simple six-parameter ΛCDM model provides a successful fit to WMAP data. This holds both when the WMAP data are analyzed alone or in combination with other cosmological data. Even so, it is appropriate to examine the data carefully to search for hints of deviations from the now standard model of cosmology, which includes inflation, dark energy, dark matter, baryons, and neutrinos. The cosmological community has subjected the WMAP data to extensive and varied analyses. While there is widespread agreement as to the overall success of the six-parameter ΛCDM model, various "anomalies" have been reported relative to that model. In this paper we examine potential anomalies and present analyses and assessments of their significance. In most cases we find that claimed anomalies depend on posterior selection of some aspect or subset of the data. Compared with sky simulations based on the best-fit model, one can select for low probability features of the WMAP data. Low probability features are expected, but it is not usually straightforward to determine whether any particular low probability feature is the result of the a posteriori selection or non-standard cosmology. Hypothesis testing could, of course, always reveal an alternative model that is statistically favored, but there is currently no model that is more compelling. We find that two cold spots in the map are statistically consistent with random cosmic microwave background (CMB) fluctuations. We also find that the amplitude of the quadrupole is well within the expected 95% confidence range and therefore is not anomalously low. We find no significant anomaly with a lack of large angular scale CMB power for the best-fit ΛCDM model. We examine in detail the properties of the power spectrum data with respect to the ΛCDM model and find no significant anomalies. The quadrupole and octupole components of the CMB sky are remarkably aligned, but we find that this is not due to any single map feature; it results from the
15. Using Betatron Emissions from Laser Wakefield Accelerated Electrons to Probe Ultra-fast Warm Dense Matter
Kotick, Jordan; Schumaker, Will; Condamine, Florian; Albert, Felicie; Barbrel, Benjamin; Galtier, Eric; Granados, Eduardo; Ravasio, Alessandra; Glenzer, Siegfried
2015-11-01
Laser wakefield acceleration (LWFA) has been shown to produce short X-ray pulses from betatron oscillations of electrons within the plasma wake. These betatron X-ray pulses have a broad, synchrotron-like energy spectrum and a duration on the order of the driving laser pulse, thereby enabling probing of ultrafast interactions. Using the 1 J, 40 fs short-pulse laser at the Matter in Extreme Conditions experimental station at LCLS, we have implemented LWFA to generate and subsequently characterize betatron X-rays. Notch filtering and single photon counting techniques were used to measure the betatron X-ray spectrum, while the spatial profile was measured using X-ray CCDs and image plates. We used an ellipsoidal mirror to focus the soft betatron X-rays for pump-probe studies on various targets in conjunction with LCLS X-ray and optical laser pulses. This experimental platform provides the conditions necessary to do a detailed study of warm-dense matter dynamics on the ultrafast time-scale.
16. Calibration of a non-invasive cosmic-ray probe for wide area snow water equivalent measurement
Sigouin, Mark J. P.; Si, Bing C.
2016-06-01
Measuring snow water equivalent (SWE) is important for many hydrological purposes such as modelling and flood forecasting. Measurements of SWE are also crucial for agricultural production in areas where snowmelt runoff dominates spring soil water recharge. Typical methods for measuring SWE include point measurements (snow tubes) and large-scale measurements (remote sensing). We explored the potential of using the cosmic-ray soil moisture probe (CRP) to measure average SWE at a spatial scale between those provided by snow tubes and remote sensing. The CRP measures above-ground moderated neutron intensity within a radius of approximately 300 m. Using snow tubes, surveys were performed over two winters (2013/2014 and 2014/2015) in an area surrounding a CRP in an agricultural field in Saskatoon, Saskatchewan, Canada. The raw moderated neutron intensity counts were corrected for atmospheric pressure, water vapour, and temporal variability of incoming cosmic-ray flux. The mean SWE from manually measured snow surveys was adjusted for differences in soil water storage before snowfall between both winters because the CRP reading appeared to be affected by soil water below the snowpack. The SWE from the snow surveys was negatively correlated with the CRP-measured moderated neutron intensity, giving Pearson correlation coefficients of -0.90 (2013/2014) and -0.87 (2014/2015). A linear regression performed on the manually measured SWE and moderated neutron intensity counts for 2013/2014 yielded an r² of 0.81. Linear regression lines from the 2013/2014 and 2014/2015 manually measured SWE and moderated neutron counts were similar; thus differences in antecedent soil water storage did not appear to affect the slope of the SWE vs. neutron relationship. The regression equation obtained from 2013/2014 was used to model SWE using the moderated neutron intensity data for 2014/2015. The CRP-estimated SWE for 2014/2015 was similar to that of the snow survey, with a root
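The calibration-then-prediction step described above amounts to a simple linear regression. A minimal sketch follows; the numbers are placeholders rather than the published survey data, and the negative slope reflects the anticorrelation reported above.

```python
import numpy as np

# Fit SWE against corrected moderated neutron counts for winter 1, then
# invert the relation to estimate SWE from counts in winter 2.
neutrons_2014 = np.array([1450, 1380, 1320, 1290, 1240, 1200])  # corrected counts
swe_2014 = np.array([10.0, 25.0, 38.0, 47.0, 60.0, 72.0])       # snow-tube SWE, mm

slope, intercept = np.polyfit(neutrons_2014, swe_2014, 1)
r = np.corrcoef(neutrons_2014, swe_2014)[0, 1]
print(f"SWE = {slope:.3f} * N + {intercept:.1f}   (r^2 = {r**2:.2f})")

neutrons_2015 = np.array([1420, 1350, 1280, 1230])
swe_2015_est = slope * neutrons_2015 + intercept                # modeled SWE, mm
```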
17. Acceleration of solar cosmic rays in a flare current sheet and their propagation in interplanetary space
Podgorny, A. I.; Podgorny, I. M.
2015-09-01
Analyses of GOES spacecraft data show that the prompt component of high-energy protons arrives at the Earth after a time corresponding to their generation in flares in the western part of the solar disk, while the delayed component is detected several hours later. All protons in flares are accelerated by a single mechanism. The particles of the prompt component propagate along magnetic field lines of the Archimedean spiral connecting the flare with the Earth. The prompt component generated by flares in the eastern part of the solar disk is not observed at the Earth, since particles accelerated by these flares do not intersect magnetic-field lines connecting the flare with the Earth. Such particles instead arrive at the Earth by moving across the interplanetary magnetic field: they are trapped by the field and transported by the solar wind, since the interplanetary magnetic field is frozen into the wind plasma, and they also diffuse across the field. The duration of the delay reaches several days.
18. Using Supra-Arcade Downflows as Probes of Particle Acceleration in Solar Flares
NASA Technical Reports Server (NTRS)
Savage, Sabrina
2012-01-01
Extracting information from coronal features above flares has become more reliable with the availability of increasingly higher spatial- and temporal-resolution data in recent decades. We are now able to sufficiently probe the region high above long-duration flaring active regions where reconnection is expected to be continually occurring. Flows in the supra-arcade region, first observed with Yohkoh/SXT, have been theorized to be associated with newly-reconnected outflowing loops. High resolution data appears to confirm these assertions. Assuming that these flows are indeed reconnection outflows, then the detection of those directed toward the solar surface (i.e. downflowing) should be associated with particle acceleration between the current sheet and the loop footpoints rooted in the chromosphere. RHESSI observations of highly energetic particles with respect to downflow detections could potentially constrain electron acceleration models. We provide measurements of these supra-arcade downflows (SADs) in relation to reconnection model parameters and present preliminary findings comparing the downflow timings with high-energy RHESSI lightcurves.
19. Using Supra-Arcade Downflows as Probes of Electron Acceleration During Solar Flares
NASA Technical Reports Server (NTRS)
Savage, Sabrina L.
2011-01-01
Extracting information from coronal features above flares has become more reliable with the availability of increasingly higher spatial and temporal-resolution data in recent decades. We are now able to sufficiently probe the region high above long-duration flaring active regions where reconnection is expected to be continually occurring. Flows in the supra-arcade region, first observed with Yohkoh/SXT, have been theorized to be associated with newly-reconnected outflowing loops. High resolution data appears to confirm these assertions. Assuming that these flows are indeed reconnection outflows, then the detection of those directed toward the solar surface (i.e. downflowing) should be associated with particle acceleration between the current sheet and the loop footpoints rooted in the chromosphere. RHESSI observations of highly energetic particles with respect to downflow detections could potentially constrain electron acceleration models. I will discuss measurements of these supra-arcade downflows (SADs) in relation to reconnection model parameters and present preliminary findings comparing the downflow timings with high-energy RHESSI lightcurves.
20. Using Dark Matter Haloes to Learn about Cosmic Acceleration: A New Proposal for a Universal Mass Function
NASA Technical Reports Server (NTRS)
Prescod-Weinstein, Chanda; Afshordi, Niayesh
2011-01-01
Structure formation provides a strong test of any cosmic acceleration model because a successful dark energy model must not inhibit or overpredict the development of observed large-scale structures. Traditional approaches to studies of structure formation in the presence of dark energy or modified gravity implement a modified Press-Schechter formalism, which relates the linear overdensities to the abundance of dark matter haloes at the same time. We critically examine the universality of the Press-Schechter formalism for different cosmologies, and show that the halo abundance is best correlated with the spherical linear overdensity at 94% of the collapse (or observation) time. We then extend this argument to ellipsoidal collapse (which decreases the fractional time of best correlation for small haloes), and show that our results agree with deviations from the modified Press-Schechter formalism seen in simulated mass functions. This provides a novel universal prescription to measure linear density evolution, based on current and future observations of the cluster (or dark matter) halo mass function. In particular, even observations of cluster abundance in a single epoch will constrain the entire history of the linear growth of cosmological perturbations.
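For reference, the Press-Schechter mass function that such modified formalisms start from is, in its standard form (δ_c is the linearly extrapolated collapse threshold and σ(M, z) the rms linear fluctuation smoothed on mass scale M):

```latex
\frac{dn}{dM}
  = \sqrt{\frac{2}{\pi}}\,\frac{\bar\rho}{M^{2}}
    \left|\frac{d\ln\sigma}{d\ln M}\right|
    \frac{\delta_c}{\sigma(M,z)}
    \exp\!\left[-\frac{\delta_c^{2}}{2\,\sigma^{2}(M,z)}\right].
```

Universality is the statement that dn/dM depends on cosmology only through ν = δ_c/σ; the proposal above amounts to correlating the abundance with the linear overdensity evaluated at 94% of the collapse (or observation) time rather than at collapse itself.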
1. K-essence model from the mechanical approach point of view: coupled scalar field and the late cosmic acceleration
Bouhmadi-López, Mariam; Sravan Kumar, K.; Marto, João; Morais, João; Zhuk, Alexander
2016-07-01
In this paper, we consider the Universe at the late stage of its evolution and deep inside the cell of uniformity. At these scales, we can consider the Universe to be filled with dust-like matter in the form of discretely distributed galaxies, a K-essence scalar field playing the role of dark energy, and radiation as matter sources. We investigate such a Universe in the mechanical approach. This means that the peculiar velocities of the inhomogeneities (in the form of galaxies) as well as the fluctuations of the other perfect fluids are non-relativistic. Such fluids are designated as coupled because they are concentrated around the inhomogeneities. In the present paper, we investigate the conditions under which the K-essence scalar field with the most general form of its action can become coupled. We investigate at the background level three particular examples of K-essence models: (i) the pure kinetic K-essence field, (ii) a K-essence with a constant speed of sound, and (iii) the K-essence model with the Lagrangian L = bX + cX^2 − V(φ). We demonstrate that if the K-essence is coupled, all these K-essence models take the form of multicomponent perfect fluids where one of the components is the cosmological constant. Therefore, they can provide the late-time cosmic acceleration and be simultaneously compatible with the mechanical approach.
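For readers unfamiliar with K-essence, the standard definitions behind these statements are (textbook relations for a Lagrangian L(X, φ), not specific to this paper):

\[
X = \tfrac{1}{2}\dot\phi^2, \qquad p = \mathcal{L}(X,\phi), \qquad
\rho = 2X\frac{\partial\mathcal{L}}{\partial X} - \mathcal{L}, \qquad
c_s^2 = \frac{\partial p/\partial X}{\partial \rho/\partial X}.
\]

For example (iii), L = bX + cX^2 − V(φ), these give ρ = bX + 3cX^2 + V(φ), which is what allows the model to be recast as a multicomponent perfect fluid.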
2. Probing the Cosmic Gamma-Ray Burst Rate with Trigger Simulations of the Swift Burst Alert Telescope
NASA Technical Reports Server (NTRS)
Lien, Amy; Sakamoto, Takanori; Gehrels, Neil; Palmer, David M.; Barthelmy, Scott D.; Graziani, Carlo; Cannizzo, John K.
2013-01-01
The gamma-ray burst (GRB) rate is essential for revealing the connection between GRBs, supernovae and stellar evolution. Additionally, the GRB rate at high redshift provides a strong probe of star formation history in the early universe. While hundreds of GRBs are observed by Swift, it remains difficult to determine the intrinsic GRB rate due to the complex trigger algorithm of Swift. Current studies of the GRB rate usually approximate the Swift trigger algorithm by a single detection threshold. However, unlike previously flown GRB instruments, Swift has over 500 trigger criteria based on photon count rate and an additional image threshold for localization. To investigate possible systematic biases and explore the intrinsic GRB properties, we develop a program that is capable of simulating all the rate trigger criteria and mimicking the image threshold. Our simulations show that adopting the complex trigger algorithm of Swift increases the detection rate of dim bursts. As a result, our simulations suggest that bursts need to be dimmer than previously expected to avoid over-producing the number of detections and to match with Swift observations. Moreover, our results indicate that these dim bursts are more likely to be high redshift events than low-luminosity GRBs. This would imply an even higher cosmic GRB rate at large redshifts than previous expectations based on star-formation rate measurements, unless other factors, such as the luminosity evolution, are taken into account. The GRB rate from our best result gives a total number of 4568 (+825/−1429) GRBs per year that are beamed toward us in the whole universe.
3. Probing the cosmic gamma-ray burst rate with trigger simulations of the Swift Burst Alert Telescope
SciTech Connect
Lien, Amy; Cannizzo, John K.; Sakamoto, Takanori; Gehrels, Neil; Barthelmy, Scott D.; Palmer, David M.; Graziani, Carlo
2014-03-01
The gamma-ray burst (GRB) rate is essential for revealing the connection between GRBs, supernovae, and stellar evolution. Additionally, the GRB rate at high redshift provides a strong probe of star formation history in the early universe. While hundreds of GRBs are observed by Swift, it remains difficult to determine the intrinsic GRB rate due to the complex trigger algorithm of Swift. Current studies of the GRB rate usually approximate the Swift trigger algorithm by a single detection threshold. However, unlike the previously flown GRB instruments, Swift has over 500 trigger criteria based on photon count rate and an additional image threshold for localization. To investigate possible systematic biases and explore the intrinsic GRB properties, we develop a program that is capable of simulating all the rate trigger criteria and mimicking the image threshold. Our simulations show that adopting the complex trigger algorithm of Swift increases the detection rate of dim bursts. As a result, our simulations suggest that bursts need to be dimmer than previously expected to avoid overproducing the number of detections and to match with Swift observations. Moreover, our results indicate that these dim bursts are more likely to be high redshift events than low-luminosity GRBs. This would imply an even higher cosmic GRB rate at large redshifts than previous expectations based on star formation rate measurements, unless other factors, such as the luminosity evolution, are taken into account. The GRB rate from our best result gives a total number of 4568 (+825/−1429) GRBs per year that are beamed toward us in the whole universe.
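To make the rate-trigger idea in the two abstracts above concrete, here is a minimal toy sketch of a count-rate trigger criterion (illustrative only, not the authors' simulator: the background level, window durations, and the 6.5σ-style threshold are assumed values):

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy light curve: 64 ms bins, Poisson background plus a dim burst pulse.
dt = 0.064                      # s per bin
t = np.arange(0, 100, dt)
bkg_rate = 8000.0               # counts/s, illustrative background level
burst = 1500.0 * np.exp(-0.5 * ((t - 50.0) / 2.0) ** 2)  # dim Gaussian pulse
counts = rng.poisson((bkg_rate + burst) * dt)

def rate_trigger(counts, dt, window_s, nsigma=6.5):
    """Toy rate trigger: compare counts in a sliding foreground window
    with the expectation from the immediately preceding background window."""
    n = int(round(window_s / dt))
    trig_times = []
    for i in range(2 * n, len(counts) - n):
        bg = counts[i - 2 * n:i - n].sum() * 1.0   # background estimate
        fg = counts[i:i + n].sum()
        if (fg - bg) / np.sqrt(max(bg, 1.0)) > nsigma:
            trig_times.append(i * dt)
    return trig_times

# Swift evaluates hundreds of such criteria; here just a few window sizes.
for w in (0.064, 0.32, 1.024, 8.192):
    hits = rate_trigger(counts, dt, w)
    print(f"window {w:6.3f} s: " +
          (f"trigger at {hits[0]:.1f} s" if hits else "no trigger"))
```

Swift/BAT evaluates hundreds of such criteria with different foreground/background windows and energy bands, followed by an imaging step; the toy illustrates why long windows can recover dim, long bursts that short windows miss.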
4. EVOLUTION OF THE COSMIC MICROWAVE BACKGROUND POWER SPECTRUM ACROSS WILKINSON MICROWAVE ANISOTROPY PROBE DATA RELEASES: A NONPARAMETRIC ANALYSIS
SciTech Connect
2012-02-01
Using a nonparametric function estimation methodology, we present a comparative analysis of the Wilkinson Microwave Anisotropy Probe (WMAP) 1-, 3-, 5-, and 7-year data releases for the cosmic microwave background (CMB) angular power spectrum with respect to the following key questions. (1) How well is the power spectrum determined by the data alone? (2) How well is the ΛCDM model supported by a model-independent, data-driven analysis? (3) What are the realistic uncertainties on peak/dip locations and heights? Our results show that the height of the power spectrum is well determined by data alone for multipole l ≲ 546 (1-year), 667 (3-year), 804 (5-year), and 842 (7-year data). We show that parametric fits based on the ΛCDM model are remarkably close to our nonparametric fits in l-regions where data are sufficiently precise. In contrast, the power spectrum for an HΛCDM model is progressively pushed away from our nonparametric fit as data quality improves with successive data realizations, suggesting incompatibility of this particular cosmological model with the WMAP data sets. We present uncertainties on peak/dip locations and heights at the 95% (2σ) level of confidence and show how these uncertainties translate into hyperbolic 'bands' on the acoustic scale (l_A) and peak shift (φ_m) parameters. Based on the confidence set for the 7-year data, we argue that the low-l upturn in the CMB power spectrum cannot be ruled out at any confidence level in excess of about 10% (≈0.12σ). Additional outcomes of this work are a numerical formulation for minimization of a noise-weighted risk function subject to monotonicity constraints, a prescription for obtaining nonparametric fits that are closer to cosmological expectations on smoothness, and a method for sampling cosmologically meaningful power spectrum variations from the confidence set of a nonparametric fit.
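As an illustration of the closing sentence's "minimization of a noise-weighted risk function subject to monotonicity constraints", here is a minimal sketch using weighted isotonic regression on a single rising segment of a peak (an illustrative stand-in, not the authors' numerical formulation; the band powers, noise model, and ell range are made up):

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(0)

# Toy band powers on the rising side of an acoustic peak, with
# heteroscedastic noise; weights are inverse variances.
ell = np.arange(50, 220, 10)
truth = 5000.0 * np.sin(np.pi * ell / 440.0) ** 2   # rising segment
sigma = 200.0 + 2.0 * ell                           # noise level per band
data = truth + rng.normal(0.0, sigma)
weights = 1.0 / sigma**2

# Weighted isotonic fit: minimizes sum_i w_i (f_i - d_i)^2
# subject to f being non-decreasing in ell.
fit = IsotonicRegression(increasing=True).fit_transform(
    ell, data, sample_weight=weights)

for l, d, f in zip(ell[::4], data[::4], fit[::4]):
    print(f"ell = {l:3d}: band power {d:8.1f} -> monotone fit {f:8.1f}")
```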
5. Real space tests of the statistical isotropy and Gaussianity of the Wilkinson Microwave Anisotropy Probe cosmic microwave background data
SciTech Connect
Lew, Bartosz
2008-08-15
We introduce and analyze a method for testing statistical isotropy and Gaussianity and apply it to the Wilkinson Microwave Anisotropy Probe (WMAP) cosmic microwave background (CMB) foreground-reduced temperature maps. We also test cross-channel difference maps to constrain levels of residual foreground contamination and systematic uncertainties. We divide the sky into regions of varying size and shape and measure the first four moments of the one-point distribution within these regions, and using their simulated spatial distributions we test the statistical isotropy and Gaussianity hypotheses. By randomly varying orientations of these regions, we sample the underlying CMB field in a new manner that offers a richer exploration of the data content and avoids possible biasing due to a single choice of sky division. In our analysis we account for all two-point correlations between different regions and also show the impact on the results when these correlations are neglected. The statistical significance is assessed via comparison with realistic Monte Carlo simulations. We find the three-year WMAP maps to agree well with the isotropic, Gaussian random field simulations as probed by regions corresponding to angular scales ranging from 6° to 30° at 68% confidence level (CL). We report a strong, anomalous (99.8% CL) dipole 'excess' in the V band of the three-year WMAP data and also in the V band of the WMAP five-year data (99.3% CL). Using our statistics, we notice a large-scale hemispherical power asymmetry, and find that it is not highly statistically significant in the WMAP three-year data (≲97% CL) at scales l ≤ 40. The significance is even smaller if multipoles up to l = 1024 are considered (~90% CL). We give constraints on the amplitude of the previously proposed CMB dipole modulation field parameter. We find some hints of foreground contamination in the form of a locally strong, anomalous kurtosis excess in
6. Late cosmic acceleration in a vector-Gauss-Bonnet gravity model
Oliveros, A.; Solis, Enzo L.; Acero, Mario A.
2016-12-01
In this work, we study a general vector-tensor model of dark energy (DE) with a Gauss-Bonnet term coupled to a vector field and without explicit potential terms. Considering a spatially flat Friedmann-Robertson-Walker (FRW) universe and a vector field without spatial components, the cosmological evolution is analyzed from the field equations of this model for two sets of parameters. In this context, we show that it is possible to obtain an accelerated expansion phase of the universe, since the equation-of-state parameter w satisfies the restriction −1 < w < −1/3 (for suitable values of the model parameters). Further, analytical expressions for the Hubble parameter H, the equation-of-state parameter w, and the invariant scalar ϕ are obtained. We also find that the square of the speed of sound is negative for all values of redshift; therefore, the model presented here shows a sign of instability under small perturbations. We finally perform an analysis using H(z) observational data and find that for the free parameter ξ in the interval (−23.9, −3.46) × 10^−5, at 99.73% C.L. (and fixing η = −1 and ω = 1/4), the model provides a good fit to the data.
7. Constraining the cosmic deceleration-acceleration transition with type Ia supernova, BAO/CMB and H(z) data
Vargas dos Santos, M.; Reis, R. R. R.; Waga, I.
2016-02-01
We revisit the kink-like parametrization of the deceleration parameter q(z) [1], which considers a transition, at redshift z_t, from cosmic deceleration to acceleration. In this parametrization the initial (z ≫ z_t) value of the q-parameter is q_i, its final value (at z = −1) is q_f, and the duration of the transition is parametrized by τ. Assuming a flat space geometry, we obtain constraints on the free parameters of the model using recent data from type Ia supernovae (SN Ia), baryon acoustic oscillations (BAO), the cosmic microwave background (CMB), and the Hubble parameter H(z). The use of H(z) data introduces an explicit dependence of the combined likelihood on the present value of the Hubble parameter H0, allowing us to explore the influence of different priors when marginalizing over this parameter. We also study the importance of the CMB information in the results by considering data from WMAP7 and WMAP9 (Wilkinson Microwave Anisotropy Probe, 7 and 9 years) and Planck 2015. We show that the contours and best fit do not depend much on which CMB data are used and that the new BAO data considered are responsible for most of the improvement in the results. Assuming a flat space geometry, q_i = 1/2, and expressing the present value of the deceleration parameter q_0 as a function of the other three free parameters, we obtain z_t = 0.67 (+0.10/−0.08), τ = 0.26 (+0.14/−0.10), and q_0 = −0.48 (+0.11/−0.13) at 68% confidence level, with a uniform prior over H0. If in addition we fix q_f = −1, as in flat ΛCDM, DGP, and Chaplygin quartessence, which are special models described by our parametrization, we get z_t = 0.66 (+0.03/−0.04), τ = 0.33 (+0.04/−0.04), and q_0 = −0.54 (+0.05/−0.07), in excellent agreement with flat ΛCDM, for which τ = 1/3. For flat wCDM, another dark energy model described by our parametrization, we obtain the constraint on the equation-of-state parameter −1.22 < w < −0.78 at more than 99% confidence level.
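For readers without access to Ref. [1], the kink-like parametrization takes the form below (quoted as we understand the Ishida et al. construction; one can check that q → q_i for z ≫ z_t, q(z_t) = 0, and q(z = −1) = q_f, and that with (q_i, q_f, τ) = (1/2, −1, 1/3) it reproduces flat ΛCDM exactly):

\[
q(z) = q_i + \frac{q_f - q_i}{1 - \dfrac{q_f}{q_i}\left(\dfrac{1+z}{1+z_t}\right)^{1/\tau}}.
\]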
8. Cosmology with cosmic shear observations: a review.
PubMed
Kilbinger, Martin
2015-07-01
Cosmic shear is the distortion of images of distant galaxies due to weak gravitational lensing by the large-scale structure in the Universe. Such images are coherently deformed by the tidal field of matter inhomogeneities along the line of sight. By measuring galaxy shape correlations, we can study the properties and evolution of structure on large scales as well as the geometry of the Universe. Thus, cosmic shear has become a powerful probe into the nature of dark matter and the origin of the current accelerated expansion of the Universe. Over recent years, cosmic shear has evolved into a reliable and robust cosmological probe, providing measurements of the expansion history of the Universe and the growth of its structure. We review here the principles of weak gravitational lensing and show how cosmic shear is interpreted in a cosmological context. Then we give an overview of weak-lensing measurements, and present the main observational cosmic-shear results since it was discovered 15 years ago, as well as the implications for cosmology. We conclude with an outlook on the various future surveys and missions, for which cosmic shear is one of the main science drivers, and discuss promising new weak-lensing techniques for future cosmological observations. PMID:26181770
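As background for "measuring galaxy shape correlations", the standard cosmic-shear two-point statistics are (textbook definitions; γ_t and γ_× are the tangential and cross ellipticity components for galaxy pairs separated by angle θ, P_κ the convergence power spectrum, and J_0, J_4 Bessel functions):

\[
\xi_\pm(\theta) = \langle\gamma_t\gamma_t\rangle(\theta) \pm \langle\gamma_\times\gamma_\times\rangle(\theta)
= \frac{1}{2\pi}\int_0^\infty d\ell\,\ell\,P_\kappa(\ell)\,J_{0/4}(\ell\theta).
\]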
9. First-order Fermi acceleration in the two-stream limit. [for cosmic rays at relativistic and non-relativistic shocks
NASA Technical Reports Server (NTRS)
Bogdan, T. J.; Webb, G. M.
1987-01-01
A study of the first-order Fermi mechanism for accelerating cosmic rays at relativistic and nonrelativistic shocks is carried out by using the two-stream approximation. Exact steady-state analytic solutions illustrating the shock acceleration process in the test-particle limit, in which monoenergetic (relativistic) seed particles enter the shock through an upstream free-escape boundary, are obtained. The momentum spectrum of the shock-accelerated particles consists of a series of Dirac delta distributions corresponding to particles that have undergone an integral number of acceleration cycles. Since particles in the model have a finite fixed escape probability from the shock and the particle momenta p are equally spaced in log p, the envelope of the delta-function series is a power law in momentum. The solutions are used to discuss time-dependent aspects of the shock acceleration process in terms of the finite cycle time, escape probability, and momentum change per cycle that can be deduced from the steady-state model. The length scale over which the accelerated particles extend upstream of the shock is shown to depend upon the particle energy, with higher energy particles extending further upstream. This effect is shown to be intimately related to the kinematic threshold requirement that the particle speed exceed the fluid speed in order for particles to swim upstream of the shock and participate in the shock acceleration process.
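The power-law envelope described here is the classic first-order Fermi result (a standard test-particle argument, e.g. Bell 1978, not specific to the two-stream calculation): if each acceleration cycle multiplies momentum by 1 + δ and a particle escapes per cycle with probability P_esc, then after n cycles p_n = p_0 (1+δ)^n with survival probability (1 − P_esc)^n, so

\[
N(>p) \propto p^{-q}, \qquad q = \frac{-\ln(1 - P_{\rm esc})}{\ln(1 + \delta)},
\]

which for a strong nonrelativistic shock gives the familiar N(>p) ∝ p^{-1}, i.e., dN/dp ∝ p^{-2}.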
10. Probing the climatological impact of a cosmic ray-cloud connection through low-frequency radio observations
Magee, Nathan; Kavic, Michael
2012-01-01
It has been proposed that cosmic ray events could have a causal relationship with cloud formation rates. Given the weak constraints on the role that cloud formation plays in climate forcing, it is essential to understand the role such a relationship could have in shaping the Earth's climate. This issue has been previously investigated in the context of the long-term effect of cosmic ray events on climate. However, in order to establish whether or not such a relationship exists, measurements of short-timescale solar events, individual cosmic ray events, and spatially correlated cloud parameters could be of great significance. Here we propose such a comparison using observations from a pair of radio telescope arrays, the Long Wavelength Array (LWA) and the Eight-meter-wavelength Transient Array (ETA). These low-frequency radio arrays have a unique ability to simultaneously conduct solar, ionospheric, and cosmic-ray observations and are thus ideal for such a comparison. We outline plans for a comparison using data from these instruments, satellite images of cloud formation, as well as expected cloud formation rates from numerical models. We present some preliminary results illustrating the efficacy of this type of comparison and discuss future plans to carry out this program.
11. Cosmic jets
NASA Technical Reports Server (NTRS)
Rees, M. J.
1986-01-01
The evidence that active galactic nuclei produce collimated plasma jets is summarised. The strongest radio galaxies are probably energised by relativistic plasma jets generated by spinning black holes interacting with magnetic fields attached to infalling matter. Such objects can produce electron-positron (e+e−) plasma, and may be relevant to the acceleration of the highest-energy cosmic ray primaries. Small-scale counterparts of the jet phenomenon within our own galaxy are briefly reviewed.
12. Probing the Cosmic X-Ray and MeV Gamma-Ray Background Radiation through the Anisotropy
SciTech Connect
Inoue, Yoshiyuki; Murase, Kohta; Madejski, Grzegorz M.; Uchiyama, Yasunobu
2013-09-24
While the cosmic soft X-ray background is very likely to originate from individual Seyfert galaxies, the origin of the cosmic hard X-ray and MeV gamma-ray background is not fully understood. It is expected that Seyferts, including the Compton-thick population, may explain the cosmic hard X-ray background. In the MeV energy range, Seyferts having non-thermal electrons in coronae above accretion disks, or MeV blazars, may explain the background radiation. We propose that future measurements of the angular power spectra of anisotropy of the cosmic X-ray and MeV gamma-ray backgrounds will be key to deciphering these backgrounds and the evolution of active galactic nuclei (AGNs). As AGNs trace the cosmic large-scale structure, spatial clustering of AGNs exists. We show that e-ROSITA will clearly detect the correlation signal of unresolved Seyferts in the 0.5-2 keV and 2-10 keV bands and will be able to measure the bias parameter of AGNs in both bands. Once future hard X-ray all-sky satellites achieve a sensitivity better than 10^−12 erg cm^−2 s^−1 at 10-30 keV or 30-50 keV (beyond the sensitivities of current hard X-ray all-sky monitors), angular power spectra will allow us to independently investigate the fraction of Compton-thick AGNs among all Seyferts. We also find that the expected angular power spectra of Seyferts and blazars in the MeV range differ by about an order of magnitude, where the Poisson term, so-called shot noise, is dominant. Current and future MeV instruments will clearly disentangle the origin of the MeV gamma-ray background through the angular power spectrum.
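The Poisson ("shot noise") term that dominates the MeV-range spectra here has the standard form for unresolved sources (a textbook expression; S is source flux, S_max the detection threshold, dN/dS the source counts per steradian):

\[
C_\ell^{\rm P} = \int_0^{S_{\max}} S^2\,\frac{dN}{dS}\,dS,
\]

which is independent of ℓ, while the clustering (correlation) term traces the large-scale structure.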
13. Supernova / Acceleration Probe: a Satellite Experiment to Study the Nature of the Dark Energy
SciTech Connect
Aldering, G.; Althouse, W.; Amanullah, R.; Annis, J.; Astier, P.; Baltay, C.; Barrelet, E.; Basa, S.; Bebek, C.; Bergstrom, L.; Bernstein, G.; Bester, M.; Bigelow, B.; Blandford, R.; Bohlin, R.; Bonissent, A.; Bower, C.; Brown, M.; Campbell, M.; Carithers, W.; Commins, E.; /LBL, Berkeley /SLAC /Stockholm U. /Fermilab /Paris U., VI-VII /Yale U. /Pennsylvania U. /UC, Berkeley /Michigan U. /Baltimore, Space Telescope Sci. /Marseille, CPPM /Indiana U. /American Astron. Society /Caltech /Case Western Reserve U. /Cambridge U. /Saclay /Lyon, IPN
2005-08-15
The Supernova/Acceleration Probe (SNAP) is a proposed space-based experiment designed to study the dark energy and alternative explanations of the acceleration of the Universe's expansion by performing a series of complementary systematics-controlled astrophysical measurements. We here describe a self-consistent reference mission design that can accomplish this goal with the two leading measurement approaches being the Type Ia supernova Hubble diagram and a wide-area weak gravitational lensing survey. This design has been optimized to first order and is now under study for further modification and optimization. A 2-m three-mirror anastigmat wide-field telescope feeds a focal plane consisting of a 0.7 square-degree imager tiled with equal areas of optical CCDs and near infrared sensors, and a high-efficiency low-resolution integral field spectrograph. The instrumentation suite provides simultaneous discovery and light-curve measurements of supernovae and then can target individual objects for detailed spectral characterization. The SNAP mission will discover thousands of Type Ia supernovae out to z = 3 and will obtain high-signal-to-noise calibrated light-curves and spectra for a subset of > 2000 supernovae at redshifts between z = 0.1 and 1.7 in a northern field and in a southern field. A wide-field survey covering one thousand square degrees in both northern and southern fields resolves ~100 galaxies per square arcminute, or a total of more than 300 million galaxies. With the PSF stability afforded by a space observatory, SNAP will provide precise and accurate measurements of gravitational lensing. The high-quality data available in space, combined with the large sample of supernovae, will enable stringent control of systematic uncertainties. The resulting data set will be used to determine the energy density of dark energy and parameters that describe its dynamical behavior. The data also provide a direct test of theoretical models for the dark energy.
14. Supernova/Acceleration Probe: A Satellite Experiment to Study the Nature of the Dark Energy
SciTech Connect
Aldering, G.; Althouse, W.; Amanullah, R.; Annis, J.; Astier, P.; Baltay, C.; Barrelet, E.; Basa, E.; Bebek, C.; Bergstrom, L.; Bernstein, G.; Bester, M.; Bigelow, C.; Blandford, R.; Bohlin, R.; Bonissent, A.; Bower, C.; Brown, M.; Campbell, M.; Carithers, W.; Commins, E.; Craig, W.; Day, C.; DeJongh, F.; Deustua, S.; Diehl, T.; Dodelson, S.; Ealet, A.; Ellis, R.; Emmet, W.; Fouchez, D.; Frieman, J.; Fruchter, A.; Gerdes, D.; Gladney, L.; Goldhaber, G.; Goobar, A.; Groom, D.; Heetderks, H.; Hoff, M.; Holland, S.; Huffer, M.; Hui, L.; Huterer, D.; Jain, B.; Jelinsky, P.; Karcher, A.; Kent, S.; Kahn, S.; Kim, A.; Kolbe, W.; Krieger, B.; Kushner, G.; Kuznetsova, N.; Lafever, R.; Lamoureux, J.; Lampton, M.; Le Fevre, O.; Levi, M.; Limon, P.; Lin, H.; Linder, E.; Loken, S.; Lorenzon, W.; Malina, R.; Marriner, J.; Marshall, P.; Massey, R.; Mazure, A.; McKay, T.; McKee, S.; Miquel, R.; Morgan, N.; Mortsell, E.; Mostek, N.; Mufson, S.; Musser, J.; Nugent, P.; Oluseyi, H.; Pain, R.; Palaio, N.; Pankow, D.; Peoples, J.; Perlmutter, S.; Prieto, E.; Rabinowitz, D.; Refregier, A.; Rhodes, J.; Roe, N.; Rusin, D.; Scarpine, V.; Schubnell, M.; Sholl, M.; Samdja, G.; Smith, R.M.; Smoot, G.; Snyder, J.; Spadafora, A.; Stebbine, A.; Stoughton, C.; Szymkowiak, A.; Tarle, G.; Taylor, K.; Tilquin, A.; Tomasch, A.; Tucker, D.; Vincent, D.; von der Lippe, H.; Walder, J-P.; Wang, G.; Wester, W.
2004-05-12
The Supernova/Acceleration Probe (SNAP) is a proposed space-based experiment designed to study the dark energy and alternative explanations of the acceleration of the Universe's expansion by performing a series of complementary systematics-controlled astrophysical measurements. We here describe a self-consistent reference mission design that can accomplish this goal with the two leading measurement approaches being the Type Ia supernova Hubble diagram and a wide-area weak gravitational lensing survey. This design has been optimized to first order and is now under study for further modification and optimization. A 2-m three-mirror anastigmat wide-field telescope feeds a focal plane consisting of a 0.7 square-degree imager tiled with equal areas of optical CCDs and near infrared sensors, and a high-efficiency low-resolution integral field spectrograph. The instrumentation suite provides simultaneous discovery and light-curve measurements of supernovae and then can target individual objects for detailed spectral characterization. The SNAP mission will discover thousands of Type Ia supernovae out to z = 3 and will obtain high-signal-to-noise calibrated light-curves and spectra for a subset of > 2000 supernovae at redshifts between z = 0.1 and 1.7 in a northern field and in a southern field. A wide-field survey covering one thousand square degrees in both northern and southern fields resolves ~100 galaxies per square arcminute, or a total of more than 300 million galaxies. With the PSF stability afforded by a space observatory, SNAP will provide precise and accurate measurements of gravitational lensing. The high-quality data available in space, combined with the large sample of supernovae, will enable stringent control of systematic uncertainties. The resulting data set will be used to determine the energy density of dark energy and parameters that describe its dynamical behavior. The data also provide a direct test of theoretical models for the dark energy.
15. Accelerator Measurements of Magnetically Induced Radio Emission from Particle Cascades with Applications to Cosmic-Ray Air Showers
Belov, K.; Mulrey, K.; Romero-Wolf, A.; Wissel, S. A.; Zilles, A.; Bechtol, K.; Borch, K.; Chen, P.; Clem, J.; Gorham, P. W.; Hast, C.; Huege, T.; Hyneman, R.; Jobe, K.; Kuwatani, K.; Lam, J.; Liu, T. C.; Nam, J.; Naudet, C.; Nichol, R. J.; Rauch, B. F.; Rotter, B.; Saltzberg, D.; Schoorlemmer, H.; Seckel, D.; Strutt, B.; Vieregg, A. G.; Williams, C.; T-510 Collaboration
2016-04-01
For 50 years, cosmic-ray air showers have been detected by their radio emission. We present the first laboratory measurements that validate electrodynamics simulations used in air shower modeling. An experiment at SLAC provides a beam test of radio-frequency (rf) radiation from charged particle cascades in the presence of a magnetic field, a model system of a cosmic-ray air shower. This experiment provides a suite of controlled laboratory measurements to compare to particle-level simulations of rf emission, which are relied upon in ultrahigh-energy cosmic-ray air shower detection. We compare simulations to data for intensity, linearity with magnetic field, angular distribution, polarization, and spectral content. In particular, we confirm modern predictions that the magnetically induced emission in a dielectric forms a cone that peaks at the Cherenkov angle and show that the simulations reproduce the data within systematic uncertainties.
16. Accelerator Measurements of Magnetically Induced Radio Emission from Particle Cascades with Applications to Cosmic-Ray Air Showers.
PubMed
Belov, K; Mulrey, K; Romero-Wolf, A; Wissel, S A; Zilles, A; Bechtol, K; Borch, K; Chen, P; Clem, J; Gorham, P W; Hast, C; Huege, T; Hyneman, R; Jobe, K; Kuwatani, K; Lam, J; Liu, T C; Nam, J; Naudet, C; Nichol, R J; Rauch, B F; Rotter, B; Saltzberg, D; Schoorlemmer, H; Seckel, D; Strutt, B; Vieregg, A G; Williams, C
2016-04-01
For 50 years, cosmic-ray air showers have been detected by their radio emission. We present the first laboratory measurements that validate electrodynamics simulations used in air shower modeling. An experiment at SLAC provides a beam test of radio-frequency (rf) radiation from charged particle cascades in the presence of a magnetic field, a model system of a cosmic-ray air shower. This experiment provides a suite of controlled laboratory measurements to compare to particle-level simulations of rf emission, which are relied upon in ultrahigh-energy cosmic-ray air shower detection. We compare simulations to data for intensity, linearity with magnetic field, angular distribution, polarization, and spectral content. In particular, we confirm modern predictions that the magnetically induced emission in a dielectric forms a cone that peaks at the Cherenkov angle and show that the simulations reproduce the data within systematic uncertainties. PMID:27104694
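For reference, the Cherenkov angle mentioned in these two abstracts is the standard kinematic quantity (a textbook relation, not a number from the experiment): for a particle moving at speed βc through a dielectric of refractive index n,

\[
\cos\theta_C = \frac{1}{n\beta},
\]

so the coherent, magnetically induced radio emission from the cascade peaks on a cone of half-angle θ_C about the shower axis.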
17. Getting around cosmic variance
SciTech Connect
Kamionkowski, M.; Loeb, A.
1997-10-01
Cosmic microwave background (CMB) anisotropies probe the primordial density field at the edge of the observable Universe. There is a limiting precision ("cosmic variance") with which anisotropies can determine the amplitude of primordial mass fluctuations. This arises because the surface of last scatter (SLS) probes only a finite two-dimensional slice of the Universe. Probing other SLSs observed from different locations in the Universe would reduce the cosmic variance. In particular, the polarization of CMB photons scattered by the electron gas in a cluster of galaxies provides a measurement of the CMB quadrupole moment seen by the cluster. Therefore, CMB polarization measurements toward many clusters would probe the anisotropy on a variety of SLSs within the observable Universe, and hence reduce the cosmic-variance uncertainty. © 1997 The American Physical Society
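For concreteness, the limiting precision being discussed is, for a Gaussian field measured over the full sky (standard result):

\[
\left(\frac{\Delta C_\ell}{C_\ell}\right)_{\rm cv} = \sqrt{\frac{2}{2\ell+1}},
\]

so the quadrupole (ℓ = 2) can never be determined to better than ~63% from our single last-scattering surface; cluster polarization measurements effectively add independent surfaces and beat this bound.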
18. Requirements on Atmospheric Entry of Small Probes for Several Planets: Venus, Saturn, Neptune and Uranus in Preparation for the Future ESA Cosmic Vision Missions
Tomuta, D.; Rebuffat, D.; Larranaga, J.; Erd, C.; Bavdaz, M.; Falkner, P.
2011-02-01
In preparation for the ESA Cosmic Vision new call for medium-class missions, a set of entry probes for inner and outer planets has been preliminarily investigated by ESA using its Concurrent Design Facility. These entry-probe missions are hypothetically assumed to launch in 2020-2035. A preliminary design of the probes arrived at a mass of about 300 kg. In the following, the study focuses on the entry conditions for each of the planets Venus, Saturn, Neptune, and Uranus, with the aim of defining the conditions for the Entry and Descent System (EDS) and its required technologies. For the Venus case, two scenarios were considered: one where the entry probe is released during a typical gravity assist by a large interplanetary mission, and another featuring a stand-alone mission targeted to Venus. During entry into the Venus atmosphere (mainly composed of CO2 (96.5%) and N2 (3.5%)), the probes are subjected to maximum heat fluxes of 60 MW/m², which is highly demanding in both scenarios. For the outer-planet missions, only flyby scenarios with a targeted release of the probe were considered. The entry probes for the outer planets are subjected to heat fluxes above 100 MW/m², which is even more challenging for the Thermal Protection System (TPS) and therefore requires the use of special high-temperature protection technology to prevent destruction during entry. ESA efforts for future missions are directed towards the development of a European Light Ablative Material (ELAM), though used in the PEP study only for the back cover of the entry module. The TPS, as well as both radiative and convective heat fluxes, need simulations and verification by means of ground-facility experiments. Based on the lessons learned from previous mission studies (missions to near-Earth objects, cf. Marco Polo, Deimos sample return), an atmospheric Mars sample return is now under study. For sample return missions, on return to Earth, a passive re-entry capsule delivering the sample
19. Overview of North Ecliptic Pole Deep Multi-wavelength Survey as a Probe of the Cosmic Noon Era
Matsuhara, Hideo; Oi, Nagisa
2015-08-01
An overview of the North Ecliptic Pole deep (0.5 deg², NEP-Deep) multi-wavelength survey, covering from X-ray to radio wavelengths, is presented. The NEP-Deep provides us with several thousand 15 μm or 18 μm selected galaxies, the largest sample ever made at these wavelengths. A continuous filter coverage in the mid-infrared (7, 9, 11, 15, 18, and 24 μm) is unique and vital to diagnose the contributions from starbursts and AGNs in galaxies out to z = 2. The goal of the project is to resolve the nature of the cosmic star formation history at the cosmic noon era (e.g., z = 1-2), and to find a clue to understanding its decline from z = 1 to the present universe by utilizing the unique power of the multiwavelength survey. To achieve this goal we use a few diagnostic physical parameters unique to the NEP dataset: specific star-formation rate, dust attenuation, and obscured AGN fraction, etc. It is also noteworthy that the NEP is a legacy field thanks to its high visibility by space observatories such as eROSITA, Euclid, JWST, and SPICA. SPICA, the next-generation large cooled space telescope, is extremely powerful for studying the rise and fall of the cosmic star-formation history in the universe.
20. A critical shock Mach number for particle acceleration in the absence of pre-existing cosmic rays: M=√5
SciTech Connect
Vink, Jacco
2014-01-10
It is shown that, under some generic assumptions, shocks cannot accelerate particles unless the overall shock Mach number exceeds a critical value M > √5. The reason is that for M ≤ √5 the work done to compress the flow in a particle precursor requires more enthalpy flux than the system can sustain. This lower limit applies to situations without significant magnetic-field pressure. In case the magnetic-field pressure dominates the pressure in the unshocked medium, i.e., for low plasma beta, the additional stiffness that the magnetic field imparts to the flow makes it even more difficult to fulfill the energetic requirements for the formation of a shock with an accelerated-particle precursor and the associated compression of the upstream plasma. We illustrate the effects of magnetic fields for the extreme situation of a purely perpendicular magnetic field configuration with plasma beta β = 0, which gives a minimum Mach number of M = 5/2. The situation becomes more complex if we incorporate the effects of pre-existing cosmic rays, indicating that the additional degree of freedom allows for less strict Mach-number limits on acceleration. We discuss the implications of this result for low Mach number shock acceleration as found in solar system shocks, and shocks in clusters of galaxies.
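For orientation, the numbers quoted in this abstract can be checked against the standard gas-dynamic Rankine-Hugoniot jump conditions (a minimal sketch of textbook relations for γ = 5/3; this is background to, not a reproduction of, the paper's enthalpy-flux argument):

```python
import numpy as np

gamma = 5.0 / 3.0

def compression(M):
    """Rankine-Hugoniot density compression ratio for sonic Mach number M."""
    return (gamma + 1.0) * M**2 / ((gamma - 1.0) * M**2 + 2.0)

def downstream_mach(M):
    """Post-shock sonic Mach number from the standard jump conditions."""
    M2sq = (2.0 + (gamma - 1.0) * M**2) / (2.0 * gamma * M**2 - (gamma - 1.0))
    return np.sqrt(M2sq)

for M in (np.sqrt(5.0), 2.5, 4.0):
    print(f"M = {M:5.3f}: r = {compression(M):5.3f}, "
          f"M_down = {downstream_mach(M):5.3f}")
```

At M = √5 these relations give a compression ratio of exactly 2.5 and a downstream Mach number of 1/√3.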
1. On the level of the cosmic ray sea flux
SciTech Connect
Casanova, S.; Aharonian, F. A.; Gabici, S.; Torii, K.; Fukui, Y.; Onishi, T.; Yamamoto, H.; Kawamura, A.
2009-04-08
The study of Galactic diffuse γ radiation combined with the knowledge of the distribution of the molecular hydrogen in the Galaxy offers a unique tool to probe the cosmic ray flux in the Galaxy. A methodology to study the level of the cosmic ray 'sea' and to unveil target-accelerator systems in the Galaxy, which makes use of the data from the high-resolution survey of the Galactic molecular clouds performed with the NANTEN telescope and of the data from γ-ray instruments, has been developed. Some predictions concerning the level of the cosmic ray 'sea' and the γ-ray emission close to cosmic ray sources for instruments such as Fermi and the Cherenkov Telescope Array are presented.
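The scaling that underlies this methodology is simple (an order-of-magnitude relation commonly used in this literature, not a number from the talk): for a "passive" cloud of mass M_cloud at distance d, the hadronic γ-ray flux scales as

\[
F_\gamma \propto k_{\rm CR}\,\frac{M_{\rm cloud}}{d^2},
\]

where k_CR is the local cosmic-ray density relative to the Galactic "sea"; the NANTEN survey supplies M_cloud and d, so a γ-ray measurement of the cloud yields k_CR.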
2. Evaluation of a Wake Vortex Upset Model Based on Simultaneous Measurements of Wake Velocities and Probe-Aircraft Accelerations
NASA Technical Reports Server (NTRS)
Short, B. J.; Jacobsen, R. A.
1979-01-01
Simultaneous measurements were made of the upset responses experienced and the wake velocities encountered by an instrumented Learjet probe aircraft behind a Boeing 747 vortex-generating aircraft. The vortex-induced angular accelerations experienced could be predicted within 30% by a mathematical upset response model when the characteristics of the wake were well represented by the vortex model. The vortex model used in the present study adequately represented the wake flow field when the vortices dissipated symmetrically and only one vortex pair existed in the wake.
3. Genesis and propagation of cosmic rays
SciTech Connect
Shapiro, M.M.; Wefel, J.P.
1988-01-01
This book presents a panorama of contemporary state-of-the-art knowledge on the origin of cosmic rays and how they propagate through space. Twenty-eight articles cover such topics as objects which generate cosmic rays, processes which accelerate particles to cosmic ray energies, the interaction of cosmic rays with their environment, elementary particles in cosmic rays, how to detect cosmic rays and future experiments to measure highly energetic particles.
4. Data processing for a cosmic ray experiment onboard the solar probes Helios 1 and 2: Experiment 6
NASA Technical Reports Server (NTRS)
Mueller-Mellin, R.; Green, G.; Iwers, B.; Kunow, H.; Wibberenz, G.; Fuckner, J.; Hempe, H.; Witte, M.
1982-01-01
The data processing system for the Helios experiment 6, measuring energetic charged particles of solar, planetary, and galactic origin in the inner solar system, is described. The aim of this experiment is to extend knowledge of the origin and propagation of cosmic rays. The different programs for data reduction, analysis, presentation, and scientific evaluation are described, as well as the hardware and software of the data processing equipment. A chronological presentation of the data processing operation is given. The procedures and methods for data analysis which were developed can be used, with minor modifications, for the analysis of other space research experiments.
5. Cosmic Evolution of X-ray Binary Populations: Probes of Changing Chemistry and Aging Stellar Populations in the Universe
Lehmer, Bret; Basu-Zych, Antara; Mineo, Stefano; Brandt, W. Niel; Eufrasio, Rafael T.; Fragos, Tassos; Hornschemeier, Ann E.; Luo, Bin; Xue, Yongquan; Bauer, Franz E.; Gilfanov, Marat; Kalogera, Vassiliki; Ranalli, Piero; Schneider, Donald P.; Shemmer, Ohad; Tozzi, Paolo; Trump, Jonathan; Vignali, Cristian; Wang, JunXian; Yukita, Mihoko; Zezas, Andreas
2016-01-01
The 2-10 keV emission from normal galaxies is dominated by X-ray binary (XRB) populations. The formation of XRBs is sensitive to galaxy properties like stellar age and metallicity, properties that have evolved significantly in the broader galaxy population throughout cosmic history. The 6 Ms Chandra Deep Field-South (CDF-S) allows us to study how XRB emission has evolved over a significant fraction of cosmic history (since z ~ 4), without significant contamination from AGN. Using constraints from the CDF-S, I will show that the X-ray emission from normal galaxies at z = 0-7 depends not only on star-formation rate (SFR), but also on stellar mass (M) and redshift. Our analysis shows that the low-mass X-ray binary emission scales with stellar mass and evolves as LX(LMXB)/M ~ (1+z)^3, and high-mass X-ray binaries scale with SFR and evolve as LX(HMXB)/SFR ~ (1+z), consistent with predictions from population synthesis models, which attribute the increase in the LMXB and HMXB scaling relations with redshift to declining host-galaxy stellar ages and metallicities, respectively. These findings have important implications for the X-ray emission from young, low-metallicity galaxies at high redshift, which are likely to be more X-ray luminous per unit SFR and to play a significant role in the heating of the intergalactic medium.
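The evolution summarized here can be written compactly as a two-component scaling relation (the parametrization as described in the abstract; the z = 0 normalizations α_0 and β_0 are fit quantities not quoted here):

\[
L_X^{\rm gal}(z) \simeq \alpha_0\,(1+z)^3\,M_\star + \beta_0\,(1+z)\,{\rm SFR},
\]

with the first term tracking the low-mass X-ray binaries and the second the high-mass X-ray binaries.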
6. Noninvasive Laser Probing of Ultrashort Single Electron Bunches for Accelerator and Light Source Development
SciTech Connect
Bolton, P.R.; /SLAC
2007-06-11
Companion development of ultrafast electron-beam diagnostics capable of noninvasively resolving single-bunch detail is essential for the development of high-energy, high-brightness accelerator facilities and associated beam-based light source applications. Existing conventional accelerators can exhibit timing jitter down to the 100 femtosecond level, which exceeds their single-bunch duration capability. At the other extreme, in relatively jitterless environments, laser-plasma wakefield accelerators (LWFA) can generate single electron bunches of duration estimated to be of order 10 femtoseconds, making this setting a valuable testbed for the development of broadband electron bunch diagnostics. Characteristics of electro-optic schemes and laser-induced reflectance are discussed with emphasis on temporal resolution.
7. INTEGRAL IGR J18135-1751 = HESS J1813-178: A New Cosmic High-Energy Accelerator from keV to TeV Energies
Ubertini, P.; Bassani, L.; Malizia, A.; Bazzano, A.; Bird, A. J.; Dean, A. J.; De Rosa, A.; Lebrun, F.; Moran, L.; Renaud, M.; Stephen, J. B.; Terrier, R.; Walter, R.
2005-08-01
We report the discovery of a soft gamma-ray source, namely IGR J18135-1751, detected with IBIS, the Imager on Board the INTEGRAL Satellite. The source is persistent and has a 20-100 keV luminosity of ~5.7 × 10^34 erg s^−1 (assuming a distance of 4 kpc). This source is coincident with one of the eight unidentified objects recently reported by the HESS collaboration as part of the first TeV survey of the inner part of the Galaxy. Two of these new sources found along the Galactic plane, HESS J1813-178 and HESS J1614-518, have no obvious lower energy counterparts, a fact that motivated the suggestion that they might be dark cosmic ray accelerators. HESS J1813-178 has a strongly absorbed X-ray counterpart, the ASCA source AGPS 273.4-17.8, showing a power-law spectrum with photon index ~1.8 and a total (Galactic plus intrinsic) absorption corresponding to N_H ~ 5 × 10^22 cm^−2. We hypothesize that the source is a pulsar wind nebula embedded in its supernova remnant. The lack of X-ray or gamma-ray variability, the radio morphology, and the ASCA spectrum are all compatible with this interpretation. In any case we rule out the hypothesis that HESS J1813-178 belongs to a new class of TeV objects or that it is a cosmic "dark particle" accelerator. Based on observations with INTEGRAL, an ESA project with instruments and science data center funded by ESA member states (especially the PI countries: Denmark, France, Germany, Italy, Switzerland, Spain), the Czech Republic, and Poland and with the participation of Russia and the US.
8. Acceleration of ions and electrons to near-cosmic ray energies in a perpendicular shock: The January 6, 1978 event
NASA Technical Reports Server (NTRS)
Krimigis, S. M.; Sarris, E. T.
1985-01-01
Acceleration of energetic ions to ~200 MeV and electrons to ~2 MeV was detected by the Low Energy Charged Particle (LECP) instrument on Voyager 2 in association with a quasi-perpendicular shock of θ_Bn ≈ 87.5° at 1.9 AU. The measurements, obtained at a time resolution of ~1.2 s, reveal structure of the energetic particle intensity enhancements down to a scale of the order of the particle gyroradius, and suggest that acceleration takes place within a gyrodiameter of the shock. The observations are consistent with the predictions of the shock drift acceleration (SDA) mechanism. The absence of any fluctuations in the magnetic field during the shock passage suggests that turbulence is not essential to the shock acceleration process in the interplanetary medium.
9. Magnetic and Pressure Probes on the HyperV Contoured Coaxial Plasma Accelerator
Messer, S.; Case, A.; Brockington, S.; Bomgardner, R.; Witherspoon, F. D.; Elton, R.
2008-11-01
Magnetic and pressure data from several contoured-gap coaxial railguns are presented. These plasma guns use an injected plasma annulus and shaped inner and outer electrodes to mitigate the blow-by instability. Passive magnetic probes and photodiodes search for evidence of the blow-by instability and azimuthal asymmetries. Stagnation pressure and velocity are compared for different size guns and for different driving voltages and currents.
10. Evading the pulsar constraints on the cosmic string tension in supergravity inflation
SciTech Connect
Kamada, Kohei; Miyamoto, Yuhei; Yokoyama, Jun'ichi E-mail: [email protected]
2012-10-01
The cosmic string is a useful probe of the early Universe and may give us a clue to physics at high energy scales which particle accelerators cannot reach. Although the most promising tool to observe it is the cosmic microwave background (CMB), the constraint from gravitational waves is becoming so stringent that detecting its signatures in CMB may be impossible. In this paper, we construct a scenario that contains cosmic strings observable in the cosmic microwave background while evading the constraint imposed by the recent pulsar timing data. We argue that cosmic strings with relatively large tension are allowed by diluting loops contributing to the relevant frequency range of the gravitational wave background. We also present a particle physics model to realize such dilution in the context of chaotic inflation in supergravity, where the phase transition occurs during inflation due to the time-dependence of the Hubble induced mass.
11. Probing the limits to muscle-powered accelerations: lessons from jumping bullfrogs.
PubMed
Roberts, Thomas J; Marsh, Richard L
2003-08-01
The function of many muscles during natural movements is to accelerate a mass. We used a simple model containing the essential elements of this functional system to investigate which musculoskeletal features are important for increasing the mechanical work done in a muscle-powered acceleration. The muscle model consisted of a muscle-like actuator with frog hindlimb muscle properties, operating across a lever to accelerate a load. We tested this model in configurations with and without a series elastic element and with and without a variable mechanical advantage. When total muscle shortening was held constant at 30%, the model produced the most work when the muscle operated with a series elastic element and an effective mechanical advantage that increased throughout the contraction (31 J kg⁻¹ muscle vs. 26.6 J kg⁻¹ muscle for the non-compliant, constant mechanical advantage configuration). We also compared the model output with the dynamics of jumping bullfrogs, measured by high-speed video analysis, and the length changes of the plantaris muscle, measured by sonomicrometry. This comparison revealed that the length, force and power trajectory of the body of jumping frogs could be accurately replicated by a model of a fully active muscle operating against an inertial load, but only if the model muscle included a series elastic element. Sonomicrometer measurements of the plantaris muscle revealed an unusual, biphasic pattern of shortening, with high muscle velocities early and late in the contraction, separated by a period of slow contraction. The model muscle produced this pattern of shortening only when an elastic element was included. These results demonstrate that an elastic element can increase the work output in a muscle-powered acceleration. Elastic elements uncouple muscle fiber shortening velocity from body movement to allow the muscle fibers to operate at slower shortening velocities and higher force outputs. A variable muscle mechanical advantage
12. Accelerated expansion of the Universe as the most powerful source of the energy release in cosmic objects
Harutyunian, H. A.
2014-12-01
The available data on expansion effects at shorter scales are considered. It is noted that the prevailing opinion that short-scale physical systems such as the solar system or galaxies are gravitationally bound is not provable, but follows from a priori accepted ideas of their formation by condensation. On the contrary, much observational data favors the existence of Hubble expansion on all scales. Estimates are made of the gravitational energy accumulated in cosmic objects owing to the physical work done by dark energy. These estimates show that a cluster of galaxies could be formed from a pre-cluster via matter ejection during the Hubble time.
13. Variations of the relative abundances of He, (C,N,O) and Fe-group nuclei in solar cosmic rays and their relationship to solar particle acceleration
NASA Technical Reports Server (NTRS)
Bertsch, D. L.; Biswas, S.; Fichtel, C. E.; Pellerin, C. J.; Reames, D. V.
1973-01-01
Measurements of the flux of helium nuclei in the 24 January 1971 event and of helium and (C,N,O) nuclei in the 1 September 1971 event are combined with previous measurements to obtain the relative abundances of helium, (C,N,O), and Fe-group nuclei in these events. These data are then summarized together with previously reported results to show that, even when the same detector system using a dE/dx plus range technique is used, differences in the He/(C,N,O) value in the same energy/nucleon interval are observed in solar cosmic ray events. Further, when the He/(C,N,O) value is lower the He/(Fe-group nuclei) value is also systematically lower in these large events. When solar particle acceleration theory is analyzed, it is seen that the results suggest that, for large events, Coulomb energy loss probably does not play a major role in determining solar particle composition at higher energies (10 MeV). The variations in multicharged nuclei composition are more likely due to partial ionization during the acceleration phase.
14. Bulk Comptonization of the Cosmic Microwave Background by Extragalactic Jets as a Probe of their Matter Content
NASA Technical Reports Server (NTRS)
Georganopoulos, Markos; Kazanas, Demosthenes; Perlman, Eric; Stecker, Floyd W.
2004-01-01
We propose a method for estimating the composition, i.e., the relative amounts of leptons and protons, of extragalactic jets which exhibit Chandra-detected knots in their kpc-scale jets. The method relies on measuring, or setting upper limits on, the component of the Cosmic Microwave Background (CMB) radiation that is bulk-Comptonized by the cold electrons in the relativistically flowing jet. These measurements, along with modeling of the broadband knot emission that constrains the bulk Lorentz factor Γ of the jets, can yield estimates of the jet power carried by protons and leptons. We provide an explicit calculation of the spectrum of the bulk-Comptonized (BC) CMB component and apply these results to PKS 0637-752 and 3C 273, two superluminal quasars with Chandra-detected large-scale jets. What makes these sources particularly suited for such a procedure is the absence of significant non-thermal jet emission in the 'bridge', the region between the core and the first bright jet knot, which guarantees that most of the electrons are cold there, leaving the BC scattered CMB radiation as the only significant source of photons in this region. At λ = 3.6-8.0 μm, the most likely band in which to observe the BC scattered CMB emission, the Spitzer angular resolution (~1"-3") is considerably smaller than the 'bridges' of these jets (~10"), making it possible to both measure and resolve this emission.
15. Protostars: Forges of cosmic rays?
Padovani, M.; Marcowith, A.; Hennebelle, P.; Ferrière, K.
2016-05-01
Context. Galactic cosmic rays are particles presumably accelerated in supernova remnant shocks that propagate in the interstellar medium up to the densest parts of molecular clouds, losing energy and their ionisation efficiency because of the presence of magnetic fields and collisions with molecular hydrogen. Recent observations hint at high levels of ionisation and at the presence of synchrotron emission in protostellar systems, which leads to an apparent contradiction. Aims: We want to explain the origin of these cosmic rays accelerated within young protostars as suggested by observations. Methods: Our modelling consists of a set of conditions that has to be satisfied in order to have an efficient cosmic-ray acceleration through diffusive shock acceleration. We analyse three main acceleration sites (shocks in accretion flows, along the jets, and on protostellar surfaces), then we follow the propagation of these particles through the protostellar system up to the hot spot region. Results: We find that jet shocks can be strong accelerators of cosmic-ray protons, which can be boosted up to relativistic energies. Other promising acceleration sites are protostellar surfaces, where shocks caused by impacting material during the collapse phase are strong enough to accelerate cosmic-ray protons. In contrast, accretion flow shocks are too weak to efficiently accelerate cosmic rays. Though cosmic-ray electrons are weakly accelerated, they can gain a strong boost to relativistic energies through re-acceleration in successive shocks. Conclusions: We suggest a mechanism able to accelerate both cosmic-ray protons and electrons through the diffusive shock acceleration mechanism, which can be used to explain the high ionisation rate and the synchrotron emission observed towards protostellar sources. The existence of an internal source of energetic particles can have a strong and unforeseen impact on the ionisation of the protostellar disc, on the star and planet formation
16. SOFIA-EXES: Probing the Thermal Structure of M Supergiant Wind Acceleration Zones
Harper, Graham M.; O'Gorman, Eamon; Guinan, Edward F.; EXES Instrument Team, EXES Science Team
2016-01-01
There is no standard model for mass loss from cool evolved stars, particularly for non-pulsating giants and supergiants. For the early-M supergiants, radiation pressure, convective ejections, magnetic fields, and Alfven waves have all been put forward as potential mass-loss mechanisms. A potential discriminator between these ideas is the thermal structure resulting from the heating-cooling balance in the acceleration zone, the most important region in which to study mass-loss physics. We present mid-IR [Fe II] emission line profiles of Betelgeuse and Antares obtained with NASA-DLR SOFIA-EXES and NASA IRTF-TEXES as part of a GO program (Harper: Cycle 2-0004) and EXES instrument commissioning observations. The intra-term transitions sample a range of excitation conditions, T_exc = 540 K, 3400 K, and 11,700 K, i.e., from the warm chromospheric plasma, which also emits in the cm-radio and ultraviolet, to the cold inner circumstellar envelope. The spectrally resolved profiles, when combined with VLA cm-radio observations, provide new constraints on the temperature and flow velocity in the outflow-accelerating region. The semi-empirical energy balance can be used to test theoretical predictions of wind heating.
17. PROBING THE INFLATON: SMALL-SCALE POWER SPECTRUM CONSTRAINTS FROM MEASUREMENTS OF THE COSMIC MICROWAVE BACKGROUND ENERGY SPECTRUM
SciTech Connect
Chluba, Jens; Erickcek, Adrienne L.; Ben-Dayan, Ido
2012-10-20
In the early universe, energy stored in small-scale density perturbations is quickly dissipated by Silk damping, a process that inevitably generates μ- and y-type spectral distortions of the cosmic microwave background (CMB). These spectral distortions depend on the shape and amplitude of the primordial power spectrum at wavenumbers k ≲ 10^4 Mpc^−1. Here, we study constraints on the primordial power spectrum derived from COBE/FIRAS and forecasted for PIXIE. We show that measurements of μ and y impose strong bounds on the integrated small-scale power, and we demonstrate how to compute these constraints using k-space window functions that account for the effects of thermalization and dissipation physics. We show that COBE/FIRAS places a robust upper limit on the amplitude of the small-scale power spectrum. This limit is about three orders of magnitude stronger than the one derived from primordial black holes in the same scale range. Furthermore, this limit could be improved by another three orders of magnitude with PIXIE, potentially opening up a new window to early universe physics. To illustrate the power of these constraints, we consider several generic models for the small-scale power spectrum predicted by different inflation scenarios, including running-mass inflation models and inflation scenarios with episodes of particle production. PIXIE could place very tight constraints on these scenarios, potentially even ruling out running-mass inflation models if no distortion is detected. We also show that inflation models with sub-Planckian field excursion that generate detectable tensor perturbations should simultaneously produce a large CMB spectral distortion, a link that could potentially be established with PIXIE.
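For orientation, the distortion amplitudes are set by the fractional energy release into the CMB during the respective epochs (standard approximations in the spectral-distortion literature, with ΔQ the injected energy density):

\[
\mu \approx 1.4\,\frac{\Delta Q}{\rho_\gamma}\bigg|_{\mu\text{-era}},
\qquad
y \approx \frac{1}{4}\,\frac{\Delta Q}{\rho_\gamma}\bigg|_{y\text{-era}};
\]

for Silk damping the injected energy is the power stored in the dissipating small-scale modes, which is how the k-space window functions referred to above arise.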
18. A celestial gamma-ray foreground due to the albedo of small solar system bodies and a remote probe of the interstellar cosmic ray spectrum
SciTech Connect
Moskalenko, Igor V.; Porter, Troy A.; Digel, Seth W.; Michelson, Peter F.; Ormes, Jonathan F.
2007-12-17
We calculate the γ-ray albedo flux from cosmic-ray (CR) interactions with the solid rock and ice in Main Belt asteroids and Kuiper Belt objects (KBOs) using the Moon as a template. We show that the γ-ray albedo for the Main Belt and Kuiper Belt strongly depends on the small-body mass spectrum of each system and may be detectable by the forthcoming Gamma Ray Large Area Space Telescope (GLAST). The orbits of the Main Belt asteroids and KBOs are distributed near the ecliptic, which passes through the Galactic center and high Galactic latitudes. If detected, the γ-ray emission by the Main Belt and Kuiper Belt has to be taken into account when analyzing weak γ-ray sources close to the ecliptic, especially near the Galactic center and for signals at high Galactic latitudes, such as the extragalactic γ-ray emission. Additionally, it can be used to probe the spectrum of CR nuclei at close-to-interstellar conditions, and the mass spectrum of small bodies in the Main Belt and Kuiper Belt. The asteroid albedo spectrum also exhibits a 511 keV line due to secondary positrons annihilating in the rock. This may be an important and previously unrecognized celestial foreground for the INTErnational Gamma-Ray Astrophysics Laboratory (INTEGRAL) observations of the Galactic 511 keV line emission including the direction of the Galactic center.
19. GAMMA-RAY BURST HOST GALAXY SURVEYS AT REDSHIFT z ≳ 4: PROBES OF STAR FORMATION RATE AND COSMIC REIONIZATION
SciTech Connect
Trenti, Michele; Perna, Rosalba; Levesque, Emily M.; Shull, J. Michael; Stocke, John T.
2012-04-20
Measuring the star formation rate (SFR) at high redshift is crucial for understanding cosmic reionization and galaxy formation. Two common complementary approaches are Lyman break galaxy (LBG) surveys for large samples and gamma-ray burst (GRB) observations for sensitivity to SFR in small galaxies. The z ≳ 4 GRB-inferred SFR is higher than the LBG rate, but this difference is difficult to understand, as both methods rely on several modeling assumptions. Using a physically motivated galaxy luminosity function model, with star formation in dark matter halos with virial temperature T_vir ≳ 2 × 10^4 K (M_DM ≳ 2 × 10^8 M_Sun), we show that GRB- and LBG-derived SFRs are consistent if GRBs extend to faint galaxies (M_AB ≲ -11). To test star formation below the detection limit L_lim ≈ 0.05 L*_{z=3} of LBG surveys, we propose to measure the fraction f_det(L > L_lim, z) of GRB hosts with L > L_lim. This fraction quantifies the missing star formation fraction in LBG surveys, constraining the mass-suppression scale for galaxy formation, with weak dependence on modeling assumptions. Because f_det(L > L_lim, z) corresponds to the ratio of SFRs derived from LBG and GRB surveys, if these estimators are unbiased, measuring f_det(L > L_lim, z) also constrains the redshift evolution of the GRB production rate per unit mass of star formation. Our analysis predicts significant success for GRB host detections at z ≈ 5 with f_det(L > L_lim, z) ≈ 0.4, but rarer detections at z > 6. By analyzing the upper limits on host galaxy luminosities of six z > 5 GRBs from literature data, we infer that galaxies with M_AB > -15 were present at z > 5 at 95% confidence, demonstrating the key role played by very faint galaxies during reionization.
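The role of f_det can be illustrated with a simple luminosity-weighted Schechter-function calculation: the fraction of the total UV luminosity density (a proxy for the SFR, and hence for the fraction of GRB hosts detectable above L_lim) coming from galaxies brighter than L_lim. This is a minimal sketch, not the paper's halo-based model; the faint-end slope and the L_lim/L* ratio are assumed, illustrative values.

    # Luminosity-weighted fraction above L_lim for a Schechter function
    # phi(L) ~ (L/L*)^alpha exp(-L/L*). With t = L/L*,
    # \int_x^inf t^(alpha+1) e^(-t) dt = Gamma(alpha+2, x), so the regularized
    # upper incomplete gamma gives the fraction directly (needs alpha + 2 > 0).
    from scipy.special import gammaincc  # regularized upper incomplete gamma

    alpha = -1.7    # faint-end slope (assumed)
    x_lim = 0.05    # L_lim / L* (matches the survey depth quoted above)

    f_det = gammaincc(alpha + 2.0, x_lim)
    print(f"f_det(L > {x_lim:.2f} L*) ~ {f_det:.2f}")   # ~0.5 for these inputs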
20. Femtosecond probing around the K-edge of a laser heated plasma using X-rays from betatron oscillations in a laser wakefield accelerator
Behm, Keegan; Zhao, Tony; Maksimchuk, Anatoly; Yanovsky, Victor; Nees, John; Mangles, Stuart; Krushelnick, Karl; Thomas, Alexander; Center for Ultrafast Optical Science Team; Plasmas Group Team
2015-11-01
Presented here are data from a two-beam pump-probe experiment. We used synchrotron-like X-rays created by betatron oscillations to probe a thin metal foil that is pumped by the secondary laser beam. The Hercules Ti:Sapph laser facility was operated with a pulse duration of 34 fs and a power of 65 TW, split to drive a laser wakefield accelerator and heat the secondary target. We observed opacity changes around the K-edge of thin foils as they were heated by an ultrafast pump laser. To understand how the opacity is changing with heating and expansion of the plasma, the delay between the two laser paths was adjusted on a fs and ps time scale. Experimental data for polyvinylidene chloride (PVDC) and aluminum show variations in opacity around the Cl and Al K-edges with changes in the probe delay. The transmitted synchrotron-like spectrum was measured using single photon counting on an X-ray CCD camera and was available on a shot-by-shot basis. The success of this work demonstrates a practical application for X-rays produced from betatron oscillations in a wakefield accelerator. The compact size of these "table-top" accelerators and the ultrashort nature of the generated X-ray pulses allows pump-probe experiments that can probe events that occur on the femtosecond time scale.
1. Probing the Cosmic Gamma-Ray Burst Rate with Trigger Simulations of the Swift Burst Alert Telescope
Lien, Amy Y.; Sakamoto, Takanori; Gehrels, Neil; Palmer, David; Barthelmy, Scott Douglas; Graziani, Carlo; Cannizzo, John K.
2014-08-01
The gamma-ray burst (GRB) rate is essential for revealing the connection between GRBs, supernovae and stellar evolution. Additionally, the long GRB rate at high redshift provides a strong probe of star formation history in the early universe. While hundreds of GRBs are observed by Swift, it remains difficult to determine the intrinsic GRB rate due to Swift's complex trigger algorithm. Current studies usually approximate the Swift trigger algorithm by a single detection threshold. However, unlike previously flown GRB instruments, Swift has over 500 trigger criteria based on the photon count rate and an additional image threshold for localization. To investigate possible systematic biases and explore the intrinsic GRB properties, we developed a program that is capable of simulating all the rate trigger criteria and mimicking the image trigger threshold. We use this program to search for the intrinsic GRB rate. Our simulations show that adopting Swift's complex trigger algorithm increases the detection rate of dim bursts. Therefore, GRBs need to be intrinsically dimmer than previously expected to avoid over-producing the number of detections and to match the Swift observations. As a result, we find that either the GRB rate is much higher at large redshift than previously expected, or the luminosity evolution is non-negligible.
2. Separable projection integrals for higher-order correlators of the cosmic microwave sky: Acceleration by factors exceeding 100
Briggs, J. P.; Pennycook, S. J.; Fergusson, J. R.; Jäykkä, J.; Shellard, E. P. S.
2016-04-01
We present a case study describing efforts to optimise and modernise "Modal", the simulation and analysis pipeline used by the Planck satellite experiment for constraining general non-Gaussian models of the early universe via the bispectrum (or three-point correlator) of the cosmic microwave background radiation. We focus on one particular element of the code: the projection of bispectra from the end of inflation to the spherical shell at decoupling, which defines the CMB we observe today. This code involves a three-dimensional inner product between two functions, one of which requires an integral, on a non-rectangular domain containing a sparse grid. We show that by employing separable methods this calculation can be reduced to a one-dimensional summation plus two integrations, reducing the overall dimensionality from four to three. The introduction of separable functions also solves the issue of the non-rectangular sparse grid. This separable method can become unstable in certain scenarios and so the slower non-separable integral must be calculated instead. We present a discussion of the optimisation of both approaches. We demonstrate significant speed-ups of ≈100×, arising from a combination of algorithmic improvements and architecture-aware optimisations targeted at improving thread and vectorisation behaviour. The resulting MPI/OpenMP hybrid code is capable of executing on clusters containing processors and/or coprocessors, with strong-scaling efficiency of 98.6% on up to 16 nodes. We find that a single coprocessor outperforms two processor sockets by a factor of 1.3× and that running the same code across a combination of both microarchitectures improves performance-per-node by a factor of 3.38×. By making bispectrum calculations competitive with those for the power spectrum (or two-point correlator) we are now able to consider joint analysis for cosmological science exploitation of new data.
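The separable reduction at the heart of this approach works because an inner product of sums of products of 1D functions factorizes into products of 1D integrals, so the dimensionality of the quadrature collapses. A minimal sketch follows; the modes and coefficients are invented for illustration, and the real pipeline works with bispectrum modes on a sparse, non-rectangular domain rather than this toy cube.

    # Separable reduction of a 3D inner product <f, g> = \int f g dx dy dz.
    # If f = sum_i a_i m_i(x) m_i(y) m_i(z) and g = sum_j b_j m_j(x) m_j(y) m_j(z),
    # then <f, g> = sum_ij a_i b_j M_ij^3 with M_ij = \int m_i m_j dx:
    # three 1D integrals instead of one 3D integral.
    import numpy as np

    x = np.linspace(0.0, 1.0, 101)
    modes = [np.ones_like(x), x, x**2]   # toy 1D modes (invented)
    a = np.array([1.0, 0.5, 0.2])        # coefficients of f (invented)
    b = np.array([0.7, -0.3, 0.1])       # coefficients of g (invented)

    # Pairwise 1D overlap integrals M_ij
    M = np.array([[np.trapz(mi * mj, x) for mj in modes] for mi in modes])

    # Separable evaluation
    inner_sep = np.einsum("i,j,ij->", a, b, M**3)

    # Brute-force 3D check on the full grid
    def field(c):
        return sum(ci * np.einsum("i,j,k->ijk", m, m, m) for ci, m in zip(c, modes))
    prod = field(a) * field(b)
    inner_3d = np.trapz(np.trapz(np.trapz(prod, x), x), x)
    print(f"separable: {inner_sep:.6f}  brute-force 3D: {inner_3d:.6f}")

The two numbers agree to quadrature accuracy, which is the point: the separable form costs a handful of 1D integrals rather than a full 3D sum.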
3. PROBING THE SOLAR WIND ACCELERATION REGION WITH THE SUN-GRAZING COMET C/2002 S2
SciTech Connect
Giordano, S.; Raymond, J. C.; Lamy, P.; Uzzo, M.; Dobrzycka, D.
2015-01-01
Comet C/2002 S2, a member of the Kreutz family of sungrazing comets, was discovered in white-light images of the Large Angle and Spectrometric Coronagraph Experiment coronagraph on the Solar and Heliospheric Observatory (SOHO) on 2002 September 18 and observed in H I Lyα emission by the SOHO Ultraviolet Coronagraph Spectrometer (UVCS) instrument at four different heights as it approached the Sun. The H I Lyα line profiles detected by UVCS are analyzed to determine the spectral parameters: line intensity, width, and Doppler shift with respect to the coronal background. Two-dimensional comet images of these parameters are reconstructed at the different heights. A novel aspect of the observations of this sungrazing comet is that, whereas the emission from most of the tail is blueshifted, that along one edge of the tail is redshifted. We attribute these shifts to a combination of solar wind speed and interaction with the magnetic field. In order to use the comet to probe the density, temperature, and speed of the corona and solar wind through which it passes, as well as to determine the outgassing rate of the comet, we develop a Monte Carlo simulation of the H I Lyα emission of a comet moving through a coronal plasma. From the outgassing rate, we estimate a nucleus diameter of about 9 m. This rate steadily increases as the comet approaches the Sun, while the optical brightness decreases by more than a factor of 10 and suddenly recovers. This indicates that the optical brightness is determined by the lifetimes of the grains, sodium atoms, and molecules produced by the comet.
4. Future evolution and finite-time singularities in F(R) gravity unifying inflation and cosmic acceleration
SciTech Connect
Nojiri, Shin'ichi; Odintsov, Sergei D.
2008-08-15
We study the future evolution of the quintessence/phantom-dominated epoch in modified F(R) gravity which unifies early-time inflation with late-time acceleration and which is consistent with observational tests. Using the reconstruction technique it is demonstrated that there are models where any known (big rip, type II, III, or IV) singularity may classically occur. On the other hand, in the Einstein frame (scalar-tensor description) only a type IV singularity occurs. Near the singularity the classical description breaks down, and it is demonstrated that quantum effects act against the singularity and may prevent its appearance. A realistic F(R) gravity model which is free of future singularities is proposed. We point out that additional modification of any F(R) gravity by terms relevant in the early universe is possible, in such a way that the future singularity does not occur even classically.
5. Analysis of non-Gaussian cosmic microwave background maps based on the N-pdf. Application to Wilkinson Microwave Anisotropy Probe data
Vielva, P.; Sanz, J. L.
2009-08-01
We present a new method based on the N-point probability distribution function (N-pdf) to study non-Gaussianity in cosmic microwave background maps. Likelihood and Bayesian estimation are applied to a local non-linear perturbed model up to third order, characterized by a linear term which is described by a Gaussian N-pdf, and second- and third-order terms which are proportional to the square and the cube of the linear one. We also explore a set of model selection techniques (the Akaike and the Bayesian information criteria, the minimum description length, the Bayesian evidence and the generalized likelihood ratio test) and their application to decide whether a given data set is better described by the proposed local non-Gaussian model, rather than by the standard Gaussian temperature distribution. As an application, we consider the analysis of the Wilkinson Microwave Anisotropy Probe 5-year data at a resolution of ~2°. At this angular scale (the Sachs-Wolfe regime), the non-Gaussian description proposed in this work reduces (under certain conditions) to an approximate local form of the weak non-linear coupling inflationary model previously addressed in the literature. For this particular case, we obtain an estimation for the non-linear coupling parameter of -94 < fNL < 154 at the 95 per cent confidence level. Equally, the model selection criteria also indicate that the Gaussian hypothesis is favoured against the particular local non-Gaussian model proposed in this work. This result is in agreement with previous findings obtained for equivalent non-Gaussian models and with different non-Gaussian estimators. However, our estimator based on the N-pdf is more efficient than previous estimators and, therefore, provides tighter constraints on the coupling parameter at degree scales.
6. A new method of measuring the poloidal magnetic and radial electric fields in a tokamak using a laser-accelerated ion-beam trace probe.
PubMed
Yang, X Y; Chen, Y H; Lin, C; Wang, L; Xu, M; Wang, X G; Xiao, C J
2014-11-01
Both the poloidal magnetic field (Bp) and the radial electric field (Er) are significant in magnetic confinement devices. In this paper, a new method, named the laser-accelerated ion-beam trace probe (LITP), is proposed to diagnose both Bp and Er at the same time. The method is based on laser-accelerated ion beams, which have three useful properties: a large energy spread, short pulse lengths, and multiple charge states. LITP can provide 1D profiles or 2D images of both Bp and Er. In this paper, we present the basic principle and some preliminary theoretical results. PMID:25430336
7. EFFICIENT COSMIC RAY ACCELERATION, HYDRODYNAMICS, AND SELF-CONSISTENT THERMAL X-RAY EMISSION APPLIED TO SUPERNOVA REMNANT RX J1713.7-3946
SciTech Connect
Ellison, Donald C.; Patnaude, Daniel J.; Slane, Patrick; Raymond, John
2010-03-20
We model the broadband emission from supernova remnant (SNR) RX J1713.7-3946 including, for the first time, a consistent calculation of thermal X-ray emission together with non-thermal emission in a nonlinear diffusive shock acceleration model. Our model tracks the evolution of the SNR including the plasma ionization state between the forward shock and the contact discontinuity. We use a plasma emissivity code to predict the thermal X-ray emission spectrum assuming the initially cold electrons are heated either by Coulomb collisions with the shock-heated protons (the slowest possible heating), or come into instant equilibration with the protons. For either electron heating model, electrons rapidly reach ≳ 10^7 K and the X-ray line emission near 1 keV is more than 10 times as luminous as the underlying thermal bremsstrahlung continuum. Since recent Suzaku observations show no detectable line emission, this places strong constraints on the unshocked ambient medium density and on the relativistic electron-to-proton ratio. For the uniform circumstellar medium (CSM) models that we consider, the low densities and high relativistic electron-to-proton ratios required to match the Suzaku X-ray observations definitively rule out pion decay as the emission process producing GeV-TeV photons. We show that leptonic models, where inverse-Compton scattering against the cosmic background radiation dominates the GeV-TeV emission, produce better fits to the broadband thermal and non-thermal observations in a uniform CSM.
8. Cosmic Rays above the 2ND Knee from Clusters of Galaxies
Murase, Kohta; Inoue, Susumu; Asano, Katsuaki
In clusters of galaxies, accretion and merger shocks, as well as intracluster active galactic nuclei, are potential accelerators of high-energy protons. We discuss the possibility that protons from cluster shocks make a significant contribution to the observed cosmic rays in the energy range between the second knee at ~10^17.5 eV and the ankle at ~10^18.5 eV. The accompanying neutrino and gamma-ray signals could be detectable by upcoming telescopes such as IceCube/KM3NeT and CTA, providing a test of this scenario as well as a probe of cosmic-ray confinement properties in clusters.
9. The Origin of Cosmic Rays
ScienceCinema
Blasi, Pasquale [INAF/Arcetri-Italy and Fermilab, Italy
2010-01-08
Cosmic rays reach the Earth from space with energies of up to more than 10^20 eV, carrying information on the most powerful particle accelerators that Nature has been able to assemble. Understanding where and how cosmic rays originate has required almost one century of investigations, and, although the last word is not written yet, recent observations and theory now seem to fit together to provide us with a global picture of the origin of cosmic rays of unprecedented clarity. Here we will describe what we have learned from recent observations of astrophysical sources (such as supernova remnants and active galaxies) and illustrate what these observations tell us about the physics of particle acceleration and transport. We will also discuss the "end" of the Galactic cosmic ray spectrum, which bridges our attention towards the so-called ultra-high-energy cosmic rays (UHECRs). At ~10^20 eV the gyration scale of cosmic rays in cosmic magnetic fields becomes large enough to allow us to point back to their sources, thereby allowing us to perform "cosmic ray astronomy", as confirmed by the recent results obtained with the Pierre Auger Observatory. We will discuss the implications of these observations for the understanding of UHECRs, as well as some questions which will likely remain unanswered and will be the target of the next generation of cosmic ray experiments.
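The "pointing back" argument can be made quantitative with the Larmor (gyration) radius, r_L = E/(Z e B c) for an ultra-relativistic particle. A short sketch; the Galactic field strength used here is a representative assumed value.

    # Larmor radius of a cosmic-ray nucleus in a magnetic field. In convenient
    # units, r_L ~ 1.08 kpc * (E / 10^18 eV) / (Z * B / 1 microgauss).
    E_eV = 1e20   # particle energy [eV]
    Z    = 1      # charge number (proton)
    B_uG = 3.0    # Galactic field strength [microgauss] (representative assumption)

    r_L_kpc = 1.08 * (E_eV / 1e18) / (Z * B_uG)
    print(f"r_L ~ {r_L_kpc:.0f} kpc")   # ~36 kpc: larger than the Galactic disk,
                                        # so trajectories are only weakly bent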
10. Cosmic Dawn
Zaldarriaga, Matias
The following sections are included: * Rapporteur Talk by R. Ellis: Massive Black Holes: Evidence, Demographics and Cosmic Evolution * Rapporteur Talk by S. Furlanetto: The Cosmic Dawn: Theoretical Models and the Future
11. Insights into the Galactic Cosmic-ray Source from the TIGER Experiment
NASA Technical Reports Server (NTRS)
Link, Jason T.; Barbier, L. M.; Binns, W. R.; Christian, E. R.; Cummings, J. R.; Geier, S.; Israel, M. H.; Lodders, K.; Mewaldt, R. A.; Mitchell, J. W.; deNolfo, G. A.; Rauch, B. F.; Schindler, S. M.; Scott, L. M.; Streitmatter, R. E.; Stone, E. C.; Waddington, C. J.; Wiedenbeck, M. E.
2009-01-01
We report results from 50 days of data accumulated in two Antarctic flights of the Trans-Iron Galactic Element Recorder (TIGER). With a detector system composed of scintillators, Cherenkov detectors, and scintillating optical fibers, TIGER has a geometrical acceptance of 1.7 sq m sr and a charge resolution of 0.23 charge units at iron. TIGER has obtained abundance measurements of some of the rare galactic cosmic rays heavier than iron, including Zn, Ga, Ge, Se, and Sr, as well as the more abundant lighter elements (down to Si). The heavy elements have long been recognized as important probes of the nature of the galactic cosmic-ray source and accelerator. After accounting for fragmentation of cosmic-ray nuclei as they propagate through the Galaxy and the atmosphere above the detector system, the TIGER source abundances are consistent with a source that is a mixture of about 20% ejecta from massive stars and 80% interstellar medium with solar system composition. This result supports a model of cosmic-ray origin in OB associations previously inferred from ACE-CRIS data on more abundant lighter elements. These TIGER data also support a cosmic-ray acceleration model in which elements present in interstellar grains are preferentially accelerated compared with those found in interstellar gas.
12. TOWARD UNDERSTANDING THE COSMIC-RAY ACCELERATION AT YOUNG SUPERNOVA REMNANTS INTERACTING WITH INTERSTELLAR CLOUDS: POSSIBLE APPLICATIONS TO RX J1713.7-3946
SciTech Connect
Inoue, Tsuyoshi; Yamazaki, Ryo; Inutsuka, Shu-ichiro; Fukui, Yasuo
2012-01-01
Using three-dimensional magnetohydrodynamic simulations, we investigate general properties of a blast wave shock interacting with interstellar clouds. The pre-shock cloudy medium is generated as a natural consequence of the thermal instability that simulates realistic clumpy interstellar clouds and their diffuse surroundings. The shock wave that sweeps the cloudy medium generates a turbulent shell through the vorticity generation induced by shock-cloud interactions. In the turbulent shell, the magnetic field is amplified as a result of turbulent dynamo action. The energy density of the amplified magnetic field can locally grow comparable to the thermal energy density, particularly at the transition layers between clouds and the diffuse surrounding. In the case of a young supernova remnant (SNR) with a shock velocity ≳ 10^3 km s^-1, the corresponding strength of the magnetic field is approximately 1 mG. The propagation speed of the shock wave is significantly stalled in the clouds because of the high density, while the shock maintains a high velocity in the diffuse surrounding. In addition, when the shock wave hits the clouds, reflection shock waves are generated that propagate back into the shocked shell. From these simulation results, many observational characteristics of the young SNR RX J1713.7-3946 that is suggested to be interacting with molecular clouds can be explained as follows. The reflection shocks can accelerate particles in the turbulent downstream region where the magnetic field strength reaches 1 mG, which causes short-time variability of synchrotron X-rays. Since the shock velocity is stalled locally in the clouds, the temperature in the shocked cloud is suppressed far below 1 keV. Thus, thermal X-ray line emission would be faint even if the SNR is interacting with molecular clouds. We also find that the photon index of the π^0-decay gamma rays generated by cosmic-ray protons can be 1.5 (corresponding energy flux νF_ν ∝ ν^0.5).
14. Supernova and cosmic rays
NASA Technical Reports Server (NTRS)
Wefel, J. P.
1981-01-01
A general overview of supernova astronomy is presented, followed by a discussion of the relationship between SNe and galactic cosmic rays. Pre-supernova evolution is traced to core collapse, explosion, and mass ejection. The two types of SN light curves are discussed in terms of their causes, and the different nucleosynthetic processes inside SNe are reviewed. Physical events in SN remnants are discussed. The three main connections between cosmic rays and SNe (the energy requirement, the acceleration mechanism, and the detailed composition of the cosmic rays) are detailed.
15. Galactic cosmic rays
Blasi, Pasquale
2015-12-01
The multi-faceted nature of the origin of cosmic rays is such that some of the problems currently encountered in describing the available data are due to oversimplified models of CR acceleration and transport, while others are due to a lack of knowledge of the physical processes at work in certain conditions. On the other hand, the phenomenology of cosmic rays, as revealed by better observations, is becoming so rich that it makes sense to try to distinguish between the problems that derive from overly simple views of Nature and those that challenge the very foundations of the existing paradigms. Here I will briefly discuss some of these issues.
16. The Isotopic Composition of Cosmic-Ray Iron and Nickel
NASA Technical Reports Server (NTRS)
Wiedenbeck, M.; Binns, W.; Christian, E.; Cummings, A.; George, J.; Hink, P.; Klarmann, J.; Leske, R.; Lijowski, M.; Mewaldt, R.; Stone, E.; Rosenvinge, T. von
2000-01-01
Observations from the Cosmic Ray Isotope Spectrometer (CRIS) on ACE have been used to derive constraints on the locations, physical conditions, and time scales for cosmic-ray acceleration and transport.
17. Nineteenth International Cosmic Ray Conference. OG Sessions, Volume 3
NASA Technical Reports Server (NTRS)
Jones, F. C. (Compiler)
1985-01-01
Papers submitted for presentation at the 19th International Cosmic Ray Conference are compiled. This volume addresses cosmic ray sources and acceleration, interstellar propagation and nuclear interactions, and detection techniques and instrumentation.
18. Laser-pump/X-ray-probe experiments with electrons ejected from a Cu(111) target: space-charge acceleration.
PubMed
Schiwietz, G; Kühn, D; Föhlisch, A; Holldack, K; Kachel, T; Pontius, N
2016-09-01
A comprehensive investigation of the emission characteristics for electrons induced by X-rays of a few hundred eV at grazing-incidence angles on an atomically clean Cu(111) sample during laser excitation is presented. Electron energy spectra due to intense infrared laser irradiation are investigated at the BESSY II slicing facility. Furthermore, the influence of the corresponding high degree of target excitation (high peak current of photoemission) on the properties of Auger and photoelectrons liberated by a probe X-ray beam is investigated in time-resolved pump and probe measurements. Strong electron energy shifts have been found and assigned to space-charge acceleration. The variation of the shift with laser power and electron energy is investigated and discussed on the basis of experimental as well as new theoretical results. PMID:27577771
19. Cosmic-Rays and Gamma Ray Bursts
Meli, A.
2013-07-01
Cosmic rays are subatomic particles with energies ranging from a few eV to hundreds of TeV. These particles follow a power-law spectrum, and it seems that most of them originate from galactic and extragalactic astrophysical sources. Shock acceleration in super-Alfvénic astrophysical plasmas is believed to be the main mechanism responsible for the production of non-thermal cosmic rays. The acceleration of very-high-energy cosmic rays, with its consequent gamma-ray radiation and neutrino production in the shocks of the relativistic jets of gamma-ray bursts, is an especially favourable theme of study. I will discuss the cosmic-ray shock acceleration mechanism, particularly focusing on simulation studies of cosmic-ray acceleration occurring in the relativistic shocks of GRB jets.
20. Advanced cosmic-ray composition experiment for the space station (ACCESS)
SciTech Connect
Israel, Martin H.; Streitmatter, Robert E.; Swordy, Simon P.
1999-01-22
ACCESS is a large electronic cosmic-ray detector, designed for one of the zenith-pointing external attach points on the International Space Station. ACCESS addresses the fundamental astrophysical question: how do cosmic rays gain their enormous energies? It does this by combining two kinds of measurements. By determining the energy spectra of individual elements with atomic number (Z) in the interval 1 ≤ Z ≤ 28 up to an energy of 10^15 eV, ACCESS will probe a region of the spectra where theories of supernova acceleration predict changes in the cosmic-ray element composition. By measuring individual element abundances at more moderate energies for every element in the entire periodic table, ACCESS will distinguish between competing theories of how the cosmic-ray nuclei are initially injected into the accelerator that gives them their high energies. ACCESS will identify the atomic number of incident cosmic-ray nuclei using silicon solid-state detectors, Cherenkov detectors, and scintillators. It will measure the energy of heavy nuclei (Z ≥ 4) with transition radiation detectors, and the energy of light nuclei (Z ≤ 8) with an ionization calorimeter.
1. Contribution from individual nearby sources to the spectrum of high-energy cosmic-ray electrons
Sedrati, R.; Attallah, R.
2014-04-01
In the last few years, very important data on high-energy cosmic-ray electrons and positrons from high-precision space-borne and ground-based experiments have attracted a great deal of interest. These particles represent a unique probe for studying local cosmic-ray accelerators because they lose energy very rapidly. These energy losses reduce the lifetime so drastically that high-energy cosmic-ray electrons can reach the Earth only from rather local astrophysical sources. This work aims at calculating, by means of Monte Carlo simulation, the contribution from some known nearby astrophysical sources to the cosmic-ray electron/positron spectra at high energy (≥ 10 GeV). The background to the electron energy spectrum from distant sources is determined with the help of the GALPROP code. The obtained numerical results are compared with a set of experimental data.
2. Li-7 and Be-7 de-excitation lines - Probes for accelerated particle transport models in solar flares
Murphy, R. J.; Hua, X.-M.; Kozlovsky, B.; Ramaty, R.
1990-03-01
The photon energy spectrum of a spectral feature composed of the 429 and 478 keV gamma-ray lines from Be-7 and Li-7 (produced by interactions of flare-accelerated alpha particles with ambient He in the solar atmosphere) depends on the angular distribution of the interacting accelerated particles. This spectrum is calculated for limb and disk-centered flares using a loop model for the transport of the ions. The resulting spectra are compared with data from the April 27, 1981 limb flare obtained with the gamma-ray spectrometer on SMM, providing convincing evidence for the existence of the (Li-7)-(Be-7) feature in this flare. By comparing the fluence of this feature with that of the 511 keV line, it is shown that the accelerated alpha particle abundance or the ambient He abundance, or both, must be enhanced.
3. Cosmic impacts, cosmic catastrophes. I
NASA Technical Reports Server (NTRS)
Chapman, Clark R.; Morrison, David
1989-01-01
The discovery of cosmic impacts and their effects on the earth's surface is discussed. The manner in which an object impacts the earth is described. The formation of cryptovolcanic structures by craters is examined. Examples of collisions of cosmic debris with the earth, in particular the Tunguska explosion of 1908 and the Meteor Crater in Arizona, are provided.
4. Herschel Survey of Galactic OH+, H2O+, and H3O+: Probing the Molecular Hydrogen Fraction and Cosmic-Ray Ionization Rate
Indriolo, Nick; Neufeld, D. A.; Gerin, M.; Schilke, P.; Benz, A. O.; Winkel, B.; Menten, K. M.; Chambers, E. T.; Black, John H.; Bruderer, S.; Falgarone, E.; Godard, B.; Goicoechea, J. R.; Gupta, H.; Lis, D. C.; Ossenkopf, V.; Persson, C. M.; Sonnentrucker, P.; van der Tak, F. F. S.; van Dishoeck, E. F.; Wolfire, Mark G.; Wyrowski, F.
2015-02-01
In diffuse interstellar clouds the chemistry that leads to the formation of the oxygen-bearing ions OH+, H2O+, and H3O+ begins with the ionization of atomic hydrogen by cosmic rays, and continues through subsequent hydrogen abstraction reactions involving H2. Given these reaction pathways, the observed abundances of these molecules are useful in constraining both the total cosmic-ray ionization rate of atomic hydrogen (ζ_H) and the molecular hydrogen fraction (f_H2). We present observations targeting transitions of OH+, H2O+, and H3O+ made with the Herschel Space Observatory along 20 Galactic sight lines toward bright submillimeter continuum sources. Both OH+ and H2O+ are detected in absorption in multiple velocity components along every sight line, but H3O+ is only detected along 7 sight lines. From the molecular abundances we compute f_H2 in multiple distinct components along each line of sight, and find a Gaussian distribution with mean and standard deviation 0.042 ± 0.018. This confirms previous findings that OH+ and H2O+ primarily reside in gas with low H2 fractions. We also infer ζ_H throughout our sample, and find a lognormal distribution with mean log(ζ_H) = -15.75 (ζ_H = 1.78 × 10^-16 s^-1) and standard deviation 0.29 for gas within the Galactic disk, but outside of the Galactic center. This is in good agreement with the mean and distribution of cosmic-ray ionization rates previously inferred from H3+ observations. Ionization rates in the Galactic center tend to be 10-100 times larger than found in the Galactic disk, also in accord with prior studies. Herschel is an ESA space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA.
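As a quick arithmetic check on the quoted lognormal statistics, using only the numbers given in the abstract:

    # Consistency check of the lognormal summary statistics for zeta_H:
    # mean log10(zeta_H) = -15.75, scatter = 0.29 dex.
    mean_log, sigma_dex = -15.75, 0.29
    zeta_mean = 10 ** mean_log
    print(f"central rate: {zeta_mean:.2e} s^-1")         # 1.78e-16, as quoted
    lo, hi = 10 ** (mean_log - sigma_dex), 10 ** (mean_log + sigma_dex)
    print(f"1-sigma range: {lo:.1e} -- {hi:.1e} s^-1")   # ~9e-17 to ~3.5e-16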
5. A cosmic microwave background feature consistent with a cosmic texture.
PubMed
Cruz, M; Turok, N; Vielva, P; Martínez-González, E; Hobson, M
2007-12-01
The Cosmic Microwave Background provides our most ancient image of the universe and our best tool for studying its early evolution. Theories of high-energy physics predict the formation of various types of topological defects in the very early universe, including cosmic texture, which would generate hot and cold spots in the Cosmic Microwave Background. We show through a Bayesian statistical analysis that the most prominent 5°-radius cold spot observed in all-sky images, which is otherwise hard to explain, is compatible with having been caused by a texture. From this model, we constrain the fundamental symmetry-breaking energy scale to be φ₀ ≈ 8.7 × 10^15 GeV. If confirmed, this detection of a cosmic defect will probe physics at energies exceeding any conceivable terrestrial experiment. PMID:17962521
6. Space science: Cosmic rays beyond the knees
Taylor, Andrew M.
2016-03-01
The development of a radio technique for detecting cosmic rays casts fresh light on the origins of some of these accelerated particles, and suggests that they might have travelled much farther than was previously thought. See Letter p.70
7. A Portable Classroom Cosmic Ray Detector
Matis, Howard
2012-03-01
Normally, one has to work at an accelerator to demonstrate the principles of particle physics. We have developed a portable cosmic ray detector, the Berkeley Lab Detector, that can bring high-energy physics experimentation into the classroom. The detector, which is powered by either batteries or AC power, consists of two scintillator paddles with a printed circuit board. The printed circuit board takes the analog signals from the paddles, compares them, and determines whether the pulses arrived at the same time. It has a visual display and a computer output. The output is compatible with probes commonly found in high schools and colleges. A bright high school student can assemble it. Teachers and students have used a working detector on six of the world's continents. These activities have included cross-country trips, science projects, and classroom demonstrations. A complete description can be found at the web site: cosmic.lbl.gov. Besides basic particle physics, the detector can be used to teach statistics and to give students the opportunity to decide how much data to take. In this presentation, we will demonstrate the detector and describe some of the projects that teachers and students have completed with it.
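The coincidence logic the board implements in hardware is straightforward to mimic in software, which is itself a useful classroom exercise. A minimal sketch; the timestamps and resolving window are invented for illustration.

    # Software mimic of the two-paddle coincidence logic: count hits whose
    # arrival times in the two paddles agree to within a resolving window.
    def count_coincidences(hits_a, hits_b, window_ns=40.0):
        """Count hits in paddle A with a time-matched hit in paddle B."""
        hits_b = sorted(hits_b)
        n, j = 0, 0
        for t in sorted(hits_a):
            # advance j past B hits that are too early to match t
            while j < len(hits_b) and hits_b[j] < t - window_ns:
                j += 1
            if j < len(hits_b) and abs(hits_b[j] - t) <= window_ns:
                n += 1
        return n

    paddle_a = [120.0, 5030.0, 9100.0, 15500.0]    # ns, invented
    paddle_b = [125.0, 7200.0, 9135.0]             # ns, invented
    print(count_coincidences(paddle_a, paddle_b))  # -> 2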
8. Cosmic superstrings.
PubMed
2008-08-28
Cosmic superstrings are expected to be formed at the end of brane inflation, within the context of brane-world cosmological models inspired from string theory. By studying the properties of cosmic superstring networks and comparing their phenomenological consequences against observational data, we aim to pin down the successful and natural inflationary model and get an insight into the stringy description of our Universe. PMID:18534932
9. Design of a hard X-ray beamline and end-station for pump and probe experiments at Pohang Accelerator Laboratory X-ray Free Electron Laser facility
Park, Jaeku; Eom, Intae; Kang, Tai-Hee; Rah, Seungyu; Nam, Ki Hyun; Park, Jaehyun; Kim, Sangsoo; Kwon, Soonam; Park, Sang Han; Kim, Kyung Sook; Hyun, Hyojung; Kim, Seung Nam; Lee, Eun Hee; Shin, Hocheol; Kim, Seonghan; Kim, Myong-jin; Shin, Hyun-Joon; Ahn, Docheon; Lim, Jun; Yu, Chung-Jong; Song, Changyong; Kim, Hyunjung; Noh, Do Young; Kang, Heung Sik; Kim, Bongsoo; Kim, Kwang-Woo; Ko, In Soo; Cho, Moo-Hyun; Kim, Sunam
2016-02-01
The Pohang Accelerator Laboratory X-ray Free Electron Laser project, a new worldwide-user facility to deliver ultrashort, laser-like x-ray photon pulses, will begin user operation in 2017 after one year of commissioning. Initially, it will provide two beamlines for hard and soft x-rays, respectively, and two experimental end-stations for the hard x-ray beamline will be constructed by the end of 2015. This article introduces one of the two hard x-ray end-stations, which is for hard x-ray pump-probe experiments, and primarily outlines the overall design of this end-station and its critical components. The content of this article will provide useful guidelines for the planning of experiments conducted at the new facility.
10. Cosmic Rays Astrophysics: The Discipline, Its Scope, and Its Applications
NASA Technical Reports Server (NTRS)
Barghouty, A. F.
2009-01-01
This slide presentation gives an overview of the discipline of cosmic-ray astrophysics. It includes information on recent assertions about cosmic rays and exposure levels, and a short history, with specific information on the origin, acceleration, transport, and modulation of cosmic rays.
11. Gamma-Ray and Hard X-Ray Emission from Pulsar-aided Supernovae as a Probe of Particle Acceleration in Embryonic Pulsar Wind Nebulae
Murase, Kohta; Kashiyama, Kazumi; Kiuchi, Kenta; Bartos, Imre
2015-05-01
It has been suggested that some classes of luminous supernovae (SNe) and gamma-ray bursts (GRBs) are driven by newborn magnetars. Fast-rotating proto-neutron stars have also been of interest as potential sources of gravitational waves (GWs). We show that for a range of rotation periods and magnetic fields, hard X-rays and GeV gamma rays provide us with a promising probe of pulsar-aided SNe. It is observationally known that young pulsar wind nebulae (PWNe) in the Milky Way are very efficient lepton accelerators. We argue that, if embryonic PWNe satisfy similar conditions at early stages of SNe (in ˜1-10 months after the explosion), external inverse-Compton emission via upscatterings of SN photons is naturally expected in the GeV range as well as broadband synchrotron emission. To fully take into account the Klein-Nishina effect and two-photon annihilation process that are important at early times, we perform detailed calculations including electromagnetic cascades. Our results suggest that hard X-ray telescopes such as NuSTAR can observe such early PWN emission by follow-up observations in months to years. GeV gamma-rays may also be detected by Fermi for nearby SNe, which serve as counterparts of these GW sources. Detecting the signals will give us an interesting probe of particle acceleration at early times of PWNe, as well as clues to driving mechanisms of luminous SNe and GRBs. Since the Bethe-Heitler cross section is lower than the Thomson cross section, gamma rays would allow us to study subphotospheric dissipation. We encourage searches for high-energy emission from nearby SNe, especially SNe Ibc including super-luminous objects.
12. Numerical Cosmic-Ray Hydrodynamics
Miniati, F.
2009-04-01
We present a numerical method for integrating the equations describing a system made of a fluid and cosmic rays. We work out the modified characteristic equations that include the CR dynamical effects in smooth flows. We model the energy exchange between cosmic rays and the fluid, due to diffusive processes in configuration and momentum space, with a flux-conserving method. For a specified shock acceleration efficiency as a function of the upstream conditions and shock Mach number, we modify the Riemann solver to take into account the cosmic-ray mediation at shocks without resolving the cosmic-ray induced substructure. A self-consistent time-dependent shock solution is obtained by using our modified solver with Glimm's method. Godunov's method is applied in smooth parts of the flow.
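For background, in the simplest diffusionless two-fluid picture the modification of the characteristics in smooth flow amounts to replacing the gas sound speed with an effective sound speed that includes the cosmic-ray pressure. This is a standard two-fluid relation, quoted here as context rather than as this paper's specific formulation:

    a_{\mathrm{eff}}^{2} = \frac{\gamma_g\, p_g + \gamma_c\, p_c}{\rho},
    \qquad
    \frac{dx}{dt} = u \pm a_{\mathrm{eff}},

with γ_g = 5/3 for the thermal gas and γ_c ≈ 4/3 for the relativistic cosmic-ray fluid, so a dynamically important CR pressure stiffens the fast characteristics relative to the gas-only case.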
13. Compressive Acceleration of Solar Energetic Particles within Coronal Mass Ejections: Observations and Theory Relevant to the Solar Probe Plus and Solar Orbiter Missions
Roelof, E. C.
2015-12-01
observational technique by which (divV) may be extracted directly from coronograph white-light movies of out-going CMEs, thus offering observational closure of the new theory for SEP acceleration/injection that should be relevant to the Solar Probe Plus and Solar Orbiter missions.
14. High-energy cosmic ray interactions
SciTech Connect
Engel, Ralph; Orellana, Mariana; Reynoso, Matias M.; Vila, Gabriela S.
2009-04-30
Research into hadronic interactions and research into high-energy cosmic rays are closely related. On the one hand, owing to the indirect observation of cosmic rays through air showers, an understanding of hadronic multiparticle production is needed for deriving the flux and composition of cosmic rays at high energy. On the other hand, the highest-energy particles from the universe allow us to study the characteristics of hadronic interactions at energies far beyond the reach of terrestrial accelerators. This is the summary of three introductory lectures on our current understanding of hadronic interactions of cosmic rays.
15. Cosmic Rays at Earth
Grieder, P. K. F.
In 1912 Victor Franz Hess made the revolutionary discovery that ionizing radiation is incident upon the Earth from outer space. He showed with ground-based and balloon-borne detectors that the intensity of the radiation did not change significantly between day and night. Consequently, the Sun could not be regarded as the source of this radiation, and the question of its origin remained unanswered. Today, almost one hundred years later, the question of the origin of the cosmic radiation still remains a mystery. Hess's discovery has given an enormous impetus to large areas of science, in particular to physics, and has played a major role in the formation of our current understanding of universal evolution. For example, the development of new fields of research such as elementary particle physics, modern astrophysics and cosmology are direct consequences of this discovery. Over the years the field of cosmic ray research has evolved in various directions. Firstly, there is the field of particle physics, initiated by the discovery of many so-called elementary particles in the cosmic radiation; there is a strong trend in the accelerator physics community to re-enter the field of cosmic ray physics, now under the name of astroparticle physics. Secondly, an important branch of cosmic ray physics that has evolved rapidly in conjunction with space exploration concerns the low-energy portion of the cosmic ray spectrum. Thirdly, the branch of research concerned with the origin, acceleration and propagation of the cosmic radiation represents a great challenge for astrophysics, astronomy and cosmology. Very popular fields of research have evolved rapidly, such as high-energy gamma-ray and neutrino astronomy. In addition, high-energy neutrino astronomy may soon initiate, as a likely spin-off, neutrino tomography of the Earth and thus open a unique new branch of geophysical research of the interior of the Earth. Finally, of considerable interest are the biological effects of cosmic radiation.
16. Probing the Peak Epoch of Cosmic Star Formation (1
Alavi, Anahita; Siana, Brian D.; Richard, Johan; Rafelski, Marc; Jauzac, Mathilde; Limousin, Marceau; Stark, Daniel; Teplitz, Harry I.
2016-01-01
Obtaining a complete census of cosmic star formation requires an understanding of faint star-forming galaxies that are far below the detection limits of current surveys. To search for these faint galaxies, we use the power of strong gravitational lensing by foreground galaxy clusters to boost the detection limits of HST to much fainter luminosities. Using the WFC3/UVIS on board HST, we obtain deep UV images of 4 lensing clusters with existing deep optical and near-infrared data (three from the Frontier Fields survey). Building multiband photometric catalogs and applying a photometric redshift selection, we uncover a large population of dwarf galaxies (M_UV fainter than -18.5) that contribute a significant fraction of the total cosmic star formation at these redshifts. We use this unique sample to investigate further the various properties of dwarf galaxies, as they are claimed to deviate from the trends seen for more massive galaxies. Recent hydrodynamical simulations and observations of local dwarfs show that these galaxies have episodic bursts of star formation on short time scales (< 10 Myr). Comparing a sample of low-mass galaxies from simulations with bursty star formation histories (SFHs) to our comprehensive measurements of the observed UV continuum slopes, we find that bursty SFHs cause a large intrinsic scatter in UV colors (β) at M_UV > -16. As this scatter can also be due to dust extinction, we distinguish the two effects by measuring the dust attenuation using Balmer decrement (Hα/Hβ) ratios from our MOSFIRE/Keck spectroscopy.
17. High energy physics in cosmic rays
SciTech Connect
Jones, Lawrence W.
2013-02-07
In the first half-century of cosmic ray physics, the primary research focus was on elementary particles; the positron, pi-mesons, mu-mesons, and hyperons were discovered in cosmic rays. Much of this research was carried out at mountain elevations: Pic du Midi in the Pyrenees, Mt. Chacaltaya in Bolivia, and Mt. Evans/Echo Lake in Colorado, among other sites. In the 1960s, claims of the observation of free quarks, satellite measurements of a significant rise in p-p cross sections, and the delay in initiating accelerator construction programs for energies above 100 GeV motivated the Michigan-Wisconsin group to undertake a serious cosmic ray program at Echo Lake. Subsequently, with the succession of higher-energy accelerators and colliders at CERN and Fermilab, cosmic ray research has increasingly focused on cosmology and astrophysics, although some groups continue to study cosmic ray particle interactions in emulsion chambers.
18. Cosmic strings
NASA Technical Reports Server (NTRS)
Bennett, David P.
1988-01-01
Cosmic strings are linear topological defects which are predicted by some grand unified theories to form during a spontaneous symmetry breaking phase transition in the early universe. They are the basis for the only theories of galaxy formation, aside from quantum fluctuations from inflation, that are based on fundamental physics. In contrast to inflation, they can also be observed directly through gravitational lensing and their characteristic microwave background anisotropy. It was recently discovered that the details of cosmic string evolution are very different from the so-called standard model that was assumed in most of the string-induced galaxy formation calculations. Therefore, the details of galaxy formation in the cosmic string models are currently very uncertain.
20. Plasma characterization of the superconducting proton linear accelerator plasma generator using a 2 MHz compensated Langmuir probe
SciTech Connect
Schmitzer, C.; Kronberger, M.; Lettry, J.; Sanchez-Arias, J.; Stoeri, H.
2012-02-15
The CERN study for a superconducting proton Linac (SPL) investigates the design of a pulsed 5 GeV Linac operating at 50 Hz. As a first step towards a future SPL H^- volume ion source, a plasma generator capable of operating at Linac4 or nominal SPL settings has been developed and operated at a dedicated test stand. The hydrogen plasma is heated by an inductively coupled RF discharge; electrons and ions are confined by a magnetic multipole cusp field similar to that of the currently commissioned Linac4 H^- ion source. Time-resolved measurements of the plasma potential, temperature, and electron energy distribution function obtained by means of an RF-compensated Langmuir probe along the axis of the plasma generator are presented. The influence of the main tuning parameters, such as RF power and frequency and the timing scheme, is discussed with the aim of correlating them with optimum H^- ion beam parameters measured on an ion source test stand. The effects of the hydrogen injection settings which allow operation at a 50 Hz repetition rate are discussed.
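The electron temperature in such Langmuir-probe measurements is commonly extracted from the exponential (retarding-field) part of the I-V characteristic, where ln I_e rises linearly with bias voltage with slope 1/T_e (in eV). A minimal sketch with synthetic data; all numbers are invented for illustration and this is not the authors' analysis code.

    # Fit T_e from the exponential region of a Langmuir probe I-V trace:
    # I_e ~ exp(e(V - V_p)/(k_B T_e)), so the slope of ln I_e vs V is 1/T_e [eV].
    import numpy as np

    Te_true = 4.0                             # electron temperature [eV] (invented)
    V = np.linspace(-20.0, -5.0, 40)          # bias below plasma potential [V]
    I_e = 1e-3 * np.exp(V / Te_true)          # ideal electron current [A]
    I_e *= 1.0 + 0.02 * np.random.default_rng(0).standard_normal(V.size)  # noise

    slope, _ = np.polyfit(V, np.log(I_e), 1)  # slope = 1 / T_e [eV^-1]
    print(f"fitted T_e ~ {1.0 / slope:.2f} eV")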
2. The Kinetic Sunyaev-Zel'dovich Effect as a Probe of the Physics of Cosmic Reionization: The Effect of Self-regulated Reionization
Park, H.; Shapiro, P. R.; Komatsu, E.; Iliev, I. T.; Ahn, K.; Mellema, G.
2013-10-01
We calculate the angular power spectrum of the Cosmic Microwave Background (CMB) temperature fluctuations induced by the kinetic Sunyaev-Zel'dovich (kSZ) effect from the epoch of reionization (EOR). We use detailed N-body simulations with radiative transfer to follow the inhomogeneous reionization of the intergalactic medium (IGM). For the first time we take into account the "self-regulation" of reionization: star formation in low-mass atomic-cooling halos (LMACHs; 10^8 M_solar ≲ M ≲ 10^9 M_solar) is suppressed in ionized regions, so that high-mass atomic-cooling halos (HMACHs; M ≳ 10^9 M_solar) dominate. While the inclusion of self-regulation affects the amplitude of the kSZ power spectrum only modestly (~10%), it can change the duration of reionization by a factor of more than two.
3. Low-Energy Cosmic Rays
Wiedenbeck, M. E.; ACE/CRIS Collaboration
2002-12-01
Cosmic rays with energies below about 10 GeV/nucleon have been measured with high precision as a result of experiments on the HEAO, Ulysses, and ACE spacecraft. The observations provide energy spectra, elemental abundances, and isotopic composition for elements up through Z=30. They include both stable and radioactive nuclides that are synthesized in stars or are produced by nuclear fragmentation during diffusion through the interstellar medium at high energies. From these data one obtains a rather detailed picture of the origin of low-energy cosmic rays. For refractory species, the cosmic-ray source composition closely resembles that of the Sun, suggesting that cosmic rays are accelerated from a well-mixed sample of interstellar matter. A chemical fractionation process has depleted the abundances of volatile elements relative to refractories. Using various radioactive clock isotopes it has been shown that particle acceleration occurs at least 10^5 years after supernova nucleosynthesis and that the accelerated particles diffuse in the Galaxy for approximately 15 Myr after acceleration. Energy spectra and secondary-to-primary ratios are reasonably well accounted for by models in which particles gain the bulk of their energy in a single encounter with a strong shock. Among the large number of species that have been measured, 22Ne stands out as the only nuclide with an abundance that is clearly much different from solar. To test models proposed to account for this anomaly, the data are being analyzed for predicted smaller effects on the abundances of other nuclides. In addition to providing a detailed understanding of the origin and acceleration of low-energy cosmic rays, these data are providing constraints on the chemical evolution of interstellar matter. This work was supported by NASA at Caltech (under grant NAG5-6912), JPL, NASA/GSFC, and Washington U.
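The radioactive-clock argument can be illustrated with a leaky-box estimate: relative to a stable secondary, a radioactive nuclide such as 10Be survives in proportion to how its (time-dilated) decay lifetime compares with the confinement time. A toy calculation with representative, assumed numbers, not the detailed analysis used by the CRIS team:

    # Leaky-box surviving fraction of a radioactive secondary relative to a
    # stable one: f = 1 / (1 + tau_esc / (gamma * tau_d)), from steady-state
    # balance of production, escape (tau_esc), and decay (gamma * tau_d).
    import math

    t_half_Myr = 1.39                    # 10Be half-life [Myr]
    tau_d = t_half_Myr / math.log(2.0)   # mean decay lifetime [Myr]
    tau_esc = 15.0                       # Galactic confinement time [Myr] (from above)
    gamma = 1.5                          # mild Lorentz dilation (assumed)

    f = 1.0 / (1.0 + tau_esc / (gamma * tau_d))
    print(f"surviving fraction of 10Be ~ {f:.2f}")   # ~0.17 for these inputs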
4. Models of Cosmic-Ray Origin
Shapiro, M. M.
2001-08-01
Two models of cosmic-ray genesis are compared: (a) the author's red-dwarf hypothesis, requiring the injection of seed particles from coronal mass ejections (CMEs) prior to shock acceleration, and (b) the direct acceleration of thermal ions and of grains in the ISM, proposed by Meyer, Drury and Ellison. Both models agree that shocks in the expanding envelopes of supernova remnants are principally responsible for acceleration to cosmic-ray energies. Both are designed to overcome the mismatch between the source composition of the Galactic cosmic rays (GCR) and the composition of the thermal ISM gas. Model (a) utilizes the prolific emissions of energetic particles from active dMe and dKe stars via their CMEs as the agents of seed-particle injection into the ISM. The composition of these seed particles is governed by the FIP (first-ionization potential) selection mechanism that operates for both Galactic cosmic rays and solar energetic particles. Hence it is consistent with the cosmic-ray source composition. Model (b) relies on the sputtering and acceleration of grains in the ISM (along with acceleration of thermal ions) to provide the known source composition. This model considers the FIP ordering of GCR abundances as purely coincidental, and it attributes the relative source abundances to selection according to volatility. Recent cosmic-ray observations in favor of each model are cited.
5. Hubble space telescope/cosmic origins spectrograph observations of the quasar Q0302–003: Probing the He II reionization epoch and QSO proximity effects
SciTech Connect
Syphers, David; Shull, J. Michael
2014-03-20
Q0302–003 (z = 3.2860 ± 0.0005) was the first quasar discovered that showed a He II Gunn-Peterson trough, a sign of incomplete helium reionization at z ≳ 2.9. We present its Hubble Space Telescope/Cosmic Origins Spectrograph far-UV medium-resolution spectrum, which resolves many spectral features for the first time, allowing study of the quasar itself, the intergalactic medium, and quasar proximity effects. Q0302–003 has a harder intrinsic extreme-UV spectral index than previously claimed, as determined from both a direct fit to the spectrum (yielding α_ν ≈ −0.8) and the helium-to-hydrogen ion ratio in the quasar's line-of-sight proximity zone. Intergalactic absorption along this sightline shows that the helium Gunn-Peterson trough is largely black in the range 2.87 < z < 3.20, apart from ionization due to local sources, indicating that helium reionization has not completed at these redshifts. However, we tentatively report a detection of nonzero flux in the high-redshift trough when looking at low-density regions, but zero flux in higher-density regions. This constrains the He II fraction to be about 1% in the low-density intergalactic medium (IGM) and possibly a factor of a few higher in the IGM as a whole, suggesting helium reionization has progressed substantially by z ∼ 3.1. The Gunn-Peterson trough recovers to a He II Lyα forest at z < 2.87. We confirm a transmission feature due to the ionization zone around a z = 3.05 quasar just off the sightline, and resolve the feature for the first time. We discover a similar such feature possibly caused by a luminous z = 3.23 quasar further from the sightline, which suggests that this quasar has been luminous for >34 Myr.
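For context, the extreme sensitivity of such troughs to small ionized-species fractions follows from the Gunn-Peterson optical depth. In standard notation (quoted as background, not from the paper),

    τ_GP(z) = (π e^2 / m_e c) f λ_α n_ion(z) / H(z),

where f and λ_α are the oscillator strength and rest wavelength of the transition (303.78 Å for He II Lyα) and n_ion is the proper density of the absorbing ions. At z ~ 3 even an He II fraction of order 10^−2 at mean density yields τ ≫ 1, which is why detecting any nonzero trough flux pins the He II fraction at the percent level.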
6. Cosmic Balloons
ERIC Educational Resources Information Center
El Abed, Mohamed
2014-01-01
A team of French high-school students sent a weather balloon into the upper atmosphere to recreate Viktor Hess's historical experiment that demonstrated the existence of ionizing radiation from the sky--later called cosmic radiation. This discovery earned him the Nobel Prize for Physics in 1936.
7. Cosmic balloons
El Abed, Mohamed
2014-11-01
A team of French high-school students sent a weather balloon into the upper atmosphere to recreate Viktor Hess’s historical experiment that demonstrated the existence of ionizing radiation from the sky—later called cosmic radiation. This discovery earned him the Nobel Prize for Physics in 1936.
8. FAIR - Cosmic Matter in the Laboratory
Stöcker, Horst; Stöhlker, Thomas; Sturm, Christian
2015-06-01
9. Aligned interactions in cosmic rays
Kempa, J.
2015-12-01
The first clean Centauro was found in cosmic rays many years ago at the Mt Chacaltaya experiment. Since that time, many people have tried to find this type of interaction, both in cosmic rays and at accelerators, but no one had found another clean case. That finally changed in the last exposure of emulsion at Mt Chacaltaya, where a second clean Centauro was found. The experimental data for both the Centauros and STRANA are presented and discussed in this paper. We also present our comments on the intriguing question of the existence of a type of nuclear interaction at high energy with alignment.
10. Aligned interactions in cosmic rays
SciTech Connect
Kempa, J.
2015-12-15
The first clean Centauro was found in cosmic rays many years ago at the Mt Chacaltaya experiment. Since that time, many people have tried to find this type of interaction, both in cosmic rays and at accelerators, but no one had found another clean case. That finally changed in the last exposure of emulsion at Mt Chacaltaya, where a second clean Centauro was found. The experimental data for both the Centauros and STRANA are presented and discussed in this paper. We also present our comments on the intriguing question of the existence of a type of nuclear interaction at high energy with alignment.
11. THE KINETIC SUNYAEV-ZEL'DOVICH EFFECT AS A PROBE OF THE PHYSICS OF COSMIC REIONIZATION: THE EFFECT OF SELF-REGULATED REIONIZATION
SciTech Connect
Park, Hyunbae; Shapiro, Paul R.; Komatsu, Eiichiro; Iliev, Ilian T.; Ahn, Kyungjin; Mellema, Garrelt
2013-06-01
We calculate the angular power spectrum of the cosmic microwave background temperature fluctuations induced by the kinetic Sunyaev-Zel'dovich (kSZ) effect from the epoch of reionization (EOR). We use detailed N-body+radiative-transfer simulations to follow inhomogeneous reionization of the intergalactic medium. For the first time, we take into account the "self-regulation" of reionization: star formation in low-mass dwarf galaxies (10^8 M_⊙ ≲ M ≲ 10^9 M_⊙) or minihalos (10^5 M_⊙ ≲ M ≲ 10^8 M_⊙) is suppressed if these halos form in the regions that were already ionized or Lyman-Werner dissociated. Some previous work suggested that the amplitude of the kSZ power spectrum from the EOR can be described by a two-parameter family: the epoch of half-ionization and the duration of reionization. However, we argue that this picture applies only to simple forms of the reionization history which are roughly symmetric about the half-ionization epoch. In self-regulated reionization, the universe begins to be ionized early, maintains a low level of ionization for an extended period, and then finishes reionization as soon as high-mass atomically cooling halos dominate. While inclusion of self-regulation affects the amplitude of the kSZ power spectrum only modestly (~10%), it can change the duration of reionization by a factor of more than two. We conclude that the simple two-parameter family does not capture the effect of a physical, yet complex, reionization history caused by self-regulation. When added to the post-reionization kSZ contribution, our prediction for the total kSZ power spectrum is below the current upper bound from the South Pole Telescope. Therefore, the current upper bound on the kSZ effect from the EOR is consistent with our understanding of the physics of reionization.
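The kSZ observable computed in this work is, schematically, the line-of-sight integral (written here in standard notation as background, not transcribed from the paper):

    ΔT/T(n̂) = −(σ_T/c) ∫ dl e^{−τ(l)} n_e(l) v·n̂,

where σ_T is the Thomson cross section, τ the optical depth to the scatterer, n_e the free-electron density, and v the peculiar velocity. The angular power spectrum then derives from the transverse modes of the ionized momentum field q = x_e(1+δ)v, whose patchiness during reionization is what the simulations follow.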
12. Exploding Stars and the Accelerating Universe
Kirshner, Robert P.
2012-01-01
Supernovae are exceptionally interesting astronomical objects: they punctuate the end of stellar evolution, create the heavy elements, and blast the interstellar gas with energetic shock waves. By studying supernovae, we can learn how these important aspects of cosmic evolution take place. Over the decades, we have learned that some supernovae are produced by gravitational collapse, and others by thermonuclear explosions. By understanding what supernovae are, or at least how they behave, we have harnessed supernova explosions for the problem of measuring cosmic distances, with some astonishing results. Carefully calibrated supernovae provide the best extragalactic distance indicators to probe the distances to galaxies and to measure the Hubble constant. Even more interesting is the evidence from supernovae that cosmic expansion has been speeding up over the last 5 billion years. We attribute this acceleration to a mysterious dark energy whose effects are clear, but whose nature is obscure. Combining the cosmic expansion history traced by supernovae with clues from galaxy clustering and cosmic geometry from the microwave background has produced today's standard, but peculiar, picture of a universe that is mostly dark energy, braked (with diminishing effect) by dark matter, and illuminated by a pinch of luminous baryons. In this talk, I will show how the attempt to understand supernovae, facilitated by ever-improving instruments, has led to the ability to measure the properties of dark energy. Looking ahead, the properties of supernovae as measured at infrared wavelengths seem to hold the best promise for more precise and accurate distances to help us understand the puzzle of dark energy. My own contribution to this work has been carried out in joyful collaboration with many excellent students, postdocs, and colleagues and with generous support from the places I have worked, the National Science Foundation, and from NASA.
13. Inflation and late-time cosmic acceleration in non-minimal Maxwell-F(R) gravity and the generation of large-scale magnetic fields
SciTech Connect
Bamba, Kazuharu; Odintsov, Sergei D E-mail: [email protected]
2008-04-15
We study inflation and late-time acceleration in the expansion of the universe in non-minimal electromagnetism, in which the electromagnetic field couples to the scalar curvature function. It is shown that power-law inflation can be realized due to the non-minimal gravitational coupling of the electromagnetic field, and that large-scale magnetic fields can be generated due to the breaking of the conformal invariance of the electromagnetic field through its non-minimal gravitational coupling. Furthermore, it is demonstrated that both inflation and the late-time acceleration of the universe can be realized in a modified Maxwell-F(R) gravity which is consistent with solar-system tests and cosmological bounds and free of instabilities. At small curvature typical for the current universe the standard Maxwell theory is recovered. We also consider the classically equivalent form of non-minimal Maxwell-F(R) gravity, and propose the origin of the non-minimal gravitational coupling function based on renormalization-group considerations.
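The class of actions studied in such models can be written schematically as follows (a sketch of the general non-minimal Maxwell-F(R) form under the usual conventions, not the paper's exact Lagrangian):

    S = ∫ d^4x √(−g) [ F(R)/(2κ^2) − (1/4) I(R) F_{μν} F^{μν} ],

where κ^2 = 8πG. A non-constant coupling I(R) breaks the conformal invariance of the Maxwell term, which is what permits primordial magnetic field generation, while requiring I(R) → 1 at small curvature recovers standard electromagnetism in the present-day universe.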
14. Acceleration in astrophysics
SciTech Connect
Colgate, S.A.
1993-12-31
The origin of cosmic rays and applicable laboratory experiments are discussed. Some of the problems of shock acceleration for the production of cosmic rays are discussed in the context of astrophysical conditions. These are: the presumed unique explanation of the power-law spectrum is shown instead to be a universal property of all lossy accelerators; the extraordinary isotropy of cosmic rays and the limited diffusion distances implied by supernova-induced shock acceleration require a more frequent and space-filling source than supernovae; the near-perfect adiabaticity of strong hydromagnetic turbulence necessary for reflecting the accelerated particles each doubling in energy (roughly 10^5 to 10^6 scatterings with negligible energy loss) seems most unlikely; the evidence for acceleration due to quasi-parallel heliosphere shocks is weak, there is little evidence for the expected strong hydromagnetic turbulence, and instead only a small number of particles accelerate after only a few shock traversals; the acceleration of electrons in the same collisionless shock that accelerates ions is difficult to reconcile with the theoretical picture of strong hydromagnetic turbulence that reflects the ions. The hydromagnetic turbulence will appear adiabatic to the electrons at their much higher Larmor frequency, and so the electrons should not be scattered incoherently as they must be for acceleration. Therefore the electrons must be accelerated by a different mechanism. This is unsatisfactory, because wherever electrons are accelerated, these sites, observed in radio emission, may accelerate ions more favorably. The acceleration is coherent provided the reconnection is coherent, in which case the total flux, as for example of collimated radio sources, predicts singly charged accelerated energies much greater than observed.
15. Cosmic Topology
Luminet, Jean-Pierre
2015-08-01
Cosmic Topology is the name given to the study of the overall shape of the universe, which involves both global topological features and more local geometrical properties such as curvature. Whether space is finite or infinite, simply-connected or multi-connected like a torus, smaller or greater than the portion of the universe that we can directly observe, are questions that refer to topology rather than curvature. A striking feature of some relativistic, multi-connected "small" universe models is to create multiple images of faraway cosmic sources. While the most recent cosmological data fit the simplest model of a zero-curvature, infinite space model, they are also consistent with compact topologies of the three homogeneous and isotropic geometries of constant curvature, such as, for instance, the spherical Poincaré Dodecahedral Space, the flat hypertorus or the hyperbolic Picard horn. After a "dark age" period, the field of Cosmic Topology has recently become one of the major concerns in cosmology, not only for theorists but also for observational astronomers, leaving open a number of unsolved issues.
16. Results from the Wilkinson Microwave Anisotropy Probe
NASA Technical Reports Server (NTRS)
Bennett, Charles L.; Komatsu, Eiichiro
2015-01-01
The Wilkinson Microwave Anisotropy Probe (WMAP) mapped the distribution of temperature and polarization over the entire sky in five microwave frequency bands. These full-sky maps were used to obtain measurements of temperature and polarization anisotropy of the cosmic microwave background with unprecedented accuracy and precision. The analysis of two-point correlation functions of temperature and polarization data gives determinations of the fundamental cosmological parameters such as the age and composition of the universe, as well as the key parameters describing the physics of inflation, which is further constrained by three-point correlation functions. WMAP observations alone reduced the allowed volume of the six-parameter flat ΛCDM (Lambda cold dark matter) cosmological model by a factor of >68,000 compared with pre-WMAP measurements. The WMAP observations (sometimes in combination with other astrophysical probes) convincingly show the existence of non-baryonic dark matter, the cosmic neutrino background, flatness of spatial geometry of the universe, a deviation from a scale-invariant spectrum of initial scalar fluctuations, and that the current universe is undergoing an accelerated expansion. The WMAP observations provide the strongest ever support for inflation; namely, the structures we see in the universe originate from quantum fluctuations generated during inflation.
17. Cosmic impacts, cosmic catastrophes. II
SciTech Connect
Chapman, C.R.; Morrison, D. NASA, Ames Research Center, Moffett Field, CA )
1990-02-01
The role of extraterrestrial impacts in shaping the earth's history is discussed, arguing that cosmic impacts represent just one example of a general shift in thinking that has made the idea of catastrophes respectable in science. The origins of this view are presented and current catastrophic theory is discussed in the context of modern debate on the geological formation of the earth. Various conflicting theories are reviewed and prominent participants in the ongoing scientific controversy concerning catastrophism are introduced.
18. Cosmic impacts, cosmic catastrophes. II
NASA Technical Reports Server (NTRS)
Chapman, Clark R.; Morrison, David
1990-01-01
The role of extraterrestrial impacts in shaping the earth's history is discussed, arguing that cosmic impacts represent just one example of a general shift in thinking that has made the idea of catastrophes respectable in science. The origins of this view are presented and current catastrophic theory is discussed in the context of modern debate on the geological formation of the earth. Various conflicting theories are reviewed and prominent participants in the ongoing scientific controversy concerning catastrophism are introduced.
19. Cosmic Complexity
NASA Technical Reports Server (NTRS)
Mather, John C.
2012-01-01
What explains the extraordinary complexity of the observed universe, on all scales from quarks to the accelerating universe? My favorite explanation (which I certainly did not invent) is that the fundamental laws of physics produce natural instability, energy flows, and chaos. Some call the result the Life Force, some note that the Earth is a living system itself (Gaia, a "tough bitch" according to Margulis), and some conclude that the observed complexity requires a supernatural explanation (of which we have many). But my dad was a statistician (of dairy cows) and he told me about cells and genes and evolution and chance when I was very small. So a scientist must look for the explanation of how nature's laws and statistics brought us into conscious existence. And how is it that seemingly improbable events are actually happening all the time? Well, the physicists have countless examples of natural instability, in which energy is released to power change from simplicity to complexity. One of the most common to see is that cooling water vapor below the freezing point produces snowflakes, no two alike, and all complex and beautiful. We see it often so we are not amazed. But physicists have observed so many kinds of these changes from one structure to another (we call them phase transitions) that the Nobel Prize in 1992 could be awarded for understanding the mathematics of their common features. Now for a few examples of how the laws of nature produce the instabilities that lead to our own existence. First, the Big Bang (what an insufficient name!) apparently came from an instability, in which the "false vacuum" eventually decayed into the ordinary vacuum we have today, plus the most fundamental particles we know, the quarks and leptons. So the universe as a whole started with an instability. Then, a great expansion and cooling happened, and the loose quarks, finding themselves unstable too, bound themselves together into today's less elementary particles like protons and
20. Cosmic Complexity
NASA Technical Reports Server (NTRS)
Mather, John C.
2012-01-01
What explains the extraordinary complexity of the observed universe, on all scales from quarks to the accelerating universe? My favorite explanation (which I certainly did not invent) is that the fundamental laws of physics produce natural instability, energy flows, and chaos. Some call the result the Life Force, some note that the Earth is a living system itself (Gaia, a "tough bitch" according to Margulis), and some conclude that the observed complexity requires a supernatural explanation (of which we have many). But my dad was a statistician (of dairy cows) and he told me about cells and genes and evolution and chance when I was very small. So a scientist must look for the explanation of how nature's laws and statistics brought us into conscious existence. And how is it that seemingly improbable events are actually happening all the time? Well, the physicists have countless examples of natural instability, in which energy is released to power change from simplicity to complexity. One of the most common to see is that cooling water vapor below the freezing point produces snowflakes, no two alike, and all complex and beautiful. We see it often so we are not amazed. But physicists have observed so many kinds of these changes from one structure to another (we call them phase transitions) that the Nobel Prize in 1992 could be awarded for understanding the mathematics of their common features. Now for a few examples of how the laws of nature produce the instabilities that lead to our own existence. First, the Big Bang (what an insufficient name!) apparently came from an instability, in which the "false vacuum" eventually decayed into the ordinary vacuum we have today, plus the most fundamental particles we know, the quarks and leptons. So the universe as a whole started with an instability. Then, a great expansion and cooling happened, and the loose quarks, finding themselves unstable too, bound themselves together into today's less elementary particles like protons and
1. Elemental abundances of ultraheavy cosmic rays
NASA Technical Reports Server (NTRS)
1984-01-01
The elemental composition of the cosmic-ray source is different from that which has been generally taken as the composition of the solar system. No general enrichment of products of either r-process or s-process nucleosynthesis accounts for the differences over the entire range of ultraheavy (Z ≥ 30) elements; specific determination of nucleosynthetic contributions to the differences depends upon an understanding of the nature of any acceleration fractionation. Comparison between the cosmic-ray source abundances and the abundances of C1 and C2 chondritic meteorites suggests that differences between the cosmic-ray source and the standard (C1) solar system may not be due to acceleration fractionation of the cosmic rays, but rather to a fractionation of the C1 abundances with respect to the interstellar abundances.
2. Review of Gravity Probe B
NASA Technical Reports Server (NTRS)
1995-01-01
In response to a request by the NASA Administrator, the National Research Council (NRC) has conducted an accelerated scientific review of NASA's Gravity Probe B (GP-B) mission. The review was carried out by the Task Group on Gravity Probe B, under the auspices of the NRC's Space Studies Board and Board on Physics and Astronomy. The specific charge to the task group was to review the GP-B mission with respect to the following terms of reference: (1) scientific importance - including a current assessment of the value of the project in the context of recent progress in gravitational physics and relevant technology; (2) technical feasibility - the technical approach will be evaluated for likelihood of success, both in terms of achievement of flight mission objectives and in terms of scientific conclusiveness of the various possible outcomes for the measurements to be made; and (3) competitive value - if possible, GP-B science will be assessed qualitatively against the objectives and accomplishments of one or more fundamental physics projects of similar cost (e.g., the Cosmic Background Explorer, COBE).
3. Cosmic ray transport in astrophysical plasmas
Schlickeiser, R.
2015-09-01
Since the development of satellite space technology about 50 years ago, the solar heliosphere has been explored almost routinely by several spacecraft carrying detectors for measuring the properties of the interplanetary medium, including energetic charged particles (cosmic rays), solar wind particle densities, and electromagnetic fields. In 2012, the Voyager 1 spacecraft even left what could be described as the heliospheric modulation region, as indicated by the sudden disappearance of low-energy heliospheric cosmic ray particles. With the available in-situ measurements of interplanetary turbulent electromagnetic fields and of the momentum spectra of different cosmic ray species in different interplanetary environments, the heliosphere is the best cosmic laboratory to test our understanding of the transport and acceleration of cosmic rays in space plasmas. I review both the historical development and the current state of various cosmic ray transport equations. Similarities and differences to transport theories for terrestrial fusion plasmas are highlighted. Any progress in cosmic ray transport requires a detailed understanding of the electromagnetic turbulence that is responsible for the scattering and acceleration of these particles.
4. Cosmic ray transport in astrophysical plasmas
SciTech Connect
Schlickeiser, R.
2015-09-15
Since the development of satellite space technology about 50 years ago, the solar heliosphere has been explored almost routinely by several spacecraft carrying detectors for measuring the properties of the interplanetary medium, including energetic charged particles (cosmic rays), solar wind particle densities, and electromagnetic fields. In 2012, the Voyager 1 spacecraft even left what could be described as the heliospheric modulation region, as indicated by the sudden disappearance of low-energy heliospheric cosmic ray particles. With the available in-situ measurements of interplanetary turbulent electromagnetic fields and of the momentum spectra of different cosmic ray species in different interplanetary environments, the heliosphere is the best cosmic laboratory to test our understanding of the transport and acceleration of cosmic rays in space plasmas. I review both the historical development and the current state of various cosmic ray transport equations. Similarities and differences to transport theories for terrestrial fusion plasmas are highlighted. Any progress in cosmic ray transport requires a detailed understanding of the electromagnetic turbulence that is responsible for the scattering and acceleration of these particles.
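The workhorse among these transport equations is the Parker equation for the nearly isotropic phase-space density f(x, p, t); in a standard form (quoted here as background, not from this review),

    ∂f/∂t = ∇·(κ·∇f) − V·∇f + (1/3)(∇·V) p ∂f/∂p + Q,

with the terms describing, in order, spatial diffusion with tensor κ, convection with the wind velocity V, adiabatic energy change where the flow diverges, and injection by sources Q. The solar-wind modulation mentioned above corresponds to solving this equation in an outflowing V with κ set by the measured interplanetary turbulence.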
5. Cosmic strings and superconducting cosmic strings
NASA Technical Reports Server (NTRS)
Copeland, Edmund
1988-01-01
The possible consequences of forming cosmic strings and superconducting cosmic strings in the early universe are discussed. Lecture 1 describes the group-theoretic and field-theoretic reasons why cosmic strings can form in spontaneously broken gauge theories. Lecture 2 discusses the accretion of matter onto string loops, emphasizing the scenario with a cold dark matter dominated universe. In lecture 3 superconducting cosmic strings are discussed, as is a mechanism which leads to the formation of structure from such strings.
6. Dark before light: testing the cosmic expansion history through the cosmic microwave background
SciTech Connect
Linder, Eric V.; Smith, Tristan L. E-mail: [email protected]
2011-04-01
The cosmic expansion history proceeds in broad terms from a radiation dominated epoch to matter domination to an accelerated, dark energy dominated epoch. We investigate whether intermittent periods of acceleration (from a canonical, minimally coupled scalar field) are possible in the early universe — between Big Bang nucleosynthesis (BBN) and recombination and beyond. We establish that the standard picture is remarkably robust: anisotropies in the cosmic microwave background consistent with ΛCDM will exclude any extra period of accelerated expansion between 1 ≲ z ≲ 10^5 (corresponding to 5 × 10^−4 eV ≲ T ≲ 25 eV).
7. Testing Cosmic Inflation
NASA Technical Reports Server (NTRS)
Chuss, David
2010-01-01
The Cosmic Microwave Background (CMB) has provided a wealth of information about the history and physics of the early Universe. Much progress has been made on uncovering the emerging Standard Model of Cosmology by such experiments as COBE and WMAP, and ESA's Planck Surveyor will likely increase our knowledge even more. Despite the success of this model, mysteries remain. Currently understood physics does not offer a compelling explanation for the homogeneity, flatness, and the origin of structure in the Universe. Cosmic Inflation, a brief epoch of exponential expansion, has been posited to explain these observations. If inflation is a reality, it is expected to produce a background spectrum of gravitational waves that will leave a small polarized imprint on the CMB. Discovery of this signal would give the first direct evidence for inflation and provide a window into physics at scales beyond those accessible to terrestrial particle accelerators. I will briefly review aspects of the Standard Model of Cosmology and discuss our current efforts to design and deploy experiments to measure the polarization of the CMB with the precision required to test inflation.
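The claim that such a signal probes physics beyond terrestrial accelerators can be made concrete with a standard single-field slow-roll relation (quoted as background, not from the talk): the tensor-to-scalar ratio r fixes the energy scale of inflation via

    V^{1/4} ≈ 1.06 × 10^16 GeV × (r/0.01)^{1/4},

so any detectable primordial B-mode amplitude corresponds to energies roughly twelve orders of magnitude beyond those reached at the LHC.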
8. Ultrahigh Energy Cosmic Rays: Old Physics or New Physics?
NASA Technical Reports Server (NTRS)
Stecker, F. W.
2004-01-01
We consider the advantages of and the problems associated with hypotheses to explain the origin of ultrahigh energy cosmic rays (UHECR: E greater than 10 EeV) and the "trans-GZK" cosmic rays (TGZK: E greater than 100 EeV) both through "old physics" (acceleration in cosmic sources) and "new physics" (new particles, topological defects, fat neutrino cross sections, Lorentz invariance violation).
9. UHECR ESCAPE MECHANISMS FOR PROTONS AND NEUTRONS FROM GAMMA-RAY BURSTS, AND THE COSMIC-RAY-NEUTRINO CONNECTION
SciTech Connect
Baerwald, Philipp; Bustamante, Mauricio; Winter, Walter E-mail: [email protected]
2013-05-10
The paradigm that gamma-ray burst fireballs are the sources of the ultra-high energy cosmic rays (UHECRs) is being probed by neutrino observations. Very stringent bounds can be obtained from the cosmic-ray (proton)-neutrino connection, assuming that the UHECRs escape as neutrons. In this study, we identify three different regimes as a function of the fireball parameters: the standard "one neutrino per cosmic ray" case, the optically thick (to neutron escape) case, and the case where leakage of protons from the boundaries of the shells (direct escape) dominates. In the optically thick regime, the photomeson production is very efficient, and more neutrinos will be emitted per cosmic ray than in the standard case, whereas in the direct escape-dominated regime, more cosmic rays than neutrinos will be emitted. We demonstrate that, for efficient proton acceleration, which is required to describe the observed UHECR spectrum, the standard case only applies to a very narrow region of the fireball parameter space. We illustrate with several observed examples that conclusions on the cosmic-ray-neutrino connection will depend on the actual burst parameters. We also show that the definition of the pion production efficiency currently used by the IceCube collaboration underestimates the neutrino production in the optically thick case. Finally, we point out that the direct escape component leads to a spectral break in the cosmic-ray spectrum emitted from a single source. The resulting "two-component model" can be used to even more strongly pronounce the spectral features of the observed UHECR spectrum than the dip model.
10. Delayed recombination and cosmic parameters
Galli, Silvia; Bean, Rachel; Melchiorri, Alessandro; Silk, Joseph
2008-09-01
Current cosmological constraints from cosmic microwave background anisotropies are typically derived assuming a standard recombination scheme; however, additional resonance and ionizing radiation sources can delay recombination, altering the cosmic ionization history and the cosmological inferences drawn from the cosmic microwave background data. We show that for recent observations of the cosmic microwave background anisotropy, from the Wilkinson Microwave Anisotropy Probe satellite mission (WMAP) 5-year survey and from the Arcminute Cosmology Bolometer Array Receiver experiment, additional resonance radiation is nearly degenerate with variations in the spectral index, n_s, and has a marked effect on uncertainties in constraints on the Hubble constant, age of the universe, curvature and the upper bound on the neutrino mass. When a modified recombination scheme is considered, the redshift of recombination is constrained to z_* = 1078 ± 11, with uncertainties in the measurement weaker by 1 order of magnitude than those obtained under the assumption of standard recombination, while constraints on the shift parameter are shifted by 1σ to R = 1.734 ± 0.028. From the WMAP5 data we obtain the following constraints on the resonance and ionization source parameters: ε_α < 0.39 and ε_i < 0.058 at 95% c.l. Although delayed recombination limits the precision of parameter estimation from the WMAP satellite, we demonstrate that this should not be the case for future, smaller angular scale measurements, such as those by the Planck satellite mission.
11. Delayed recombination and cosmic parameters
SciTech Connect
Galli, Silvia; Melchiorri, Alessandro; Bean, Rachel; Silk, Joseph
2008-09-15
Current cosmological constraints from cosmic microwave background anisotropies are typically derived assuming a standard recombination scheme; however, additional resonance and ionizing radiation sources can delay recombination, altering the cosmic ionization history and the cosmological inferences drawn from the cosmic microwave background data. We show that for recent observations of the cosmic microwave background anisotropy, from the Wilkinson Microwave Anisotropy Probe satellite mission (WMAP) 5-year survey and from the Arcminute Cosmology Bolometer Array Receiver experiment, additional resonance radiation is nearly degenerate with variations in the spectral index, n_s, and has a marked effect on uncertainties in constraints on the Hubble constant, age of the universe, curvature and the upper bound on the neutrino mass. When a modified recombination scheme is considered, the redshift of recombination is constrained to z_* = 1078 ± 11, with uncertainties in the measurement weaker by 1 order of magnitude than those obtained under the assumption of standard recombination, while constraints on the shift parameter are shifted by 1σ to R = 1.734 ± 0.028. From the WMAP5 data we obtain the following constraints on the resonance and ionization source parameters: ε_α < 0.39 and ε_i < 0.058 at 95% c.l. Although delayed recombination limits the precision of parameter estimation from the WMAP satellite, we demonstrate that this should not be the case for future, smaller angular scale measurements, such as those by the Planck satellite mission.
12. From cosmic ray source to the Galactic pool
Schure, K. M.; Bell, A. R.
2014-01-01
The Galactic cosmic ray spectrum is a remarkably straight power law. Our current understanding is that the dominant sources that accelerate cosmic rays up to the knee (3 × 10^15 eV) or perhaps even the ankle (3 × 10^18 eV) are young Galactic supernova remnants. In theory, however, there are various reasons why the spectrum may be different for different sources, and may not even be a power law if non-linear shock acceleration applies during the most efficient stages of acceleration. We show how the spectrum at the accelerator translates to the spectrum that makes up the escaping cosmic rays that replenish the Galactic pool of cosmic rays. We assume that cosmic ray confinement, and thus escape, is linked to the level of magnetic field amplification, and that the magnetic field is amplified by streaming cosmic rays according to the non-resonant hybrid or resonant instability. When a fixed fraction of the energy is transferred to cosmic rays, it turns out that a source spectrum that is flatter than E^−2 will result in an E^−2 escape spectrum, whereas a steeper source spectrum will result in an escape spectrum with equal steepening. This alleviates some of the concern that may arise from expected flat or concave cosmic ray spectra associated with non-linear shock modification.
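The flat-source result quoted above follows from a simple energy-budget argument (a sketch under the authors' stated assumption of a fixed energy fraction in cosmic rays): if at any time particles escape only near the instantaneous maximum energy E_max(t), and each e-folding of E_max carries away a comparable share of the shock energy, then

    E^2 N_esc(E) ≈ const  ⇒  N_esc(E) ∝ E^−2,

regardless of how flat the spectrum inside the accelerator is. For a source steeper than E^−2 the energy budget is dominated by low energies instead, and the escaping spectrum simply inherits the source index.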
13. Origin of the high energy cosmic neutrino background.
PubMed
2014-11-01
The diffuse background of very high energy extraterrestrial neutrinos recently discovered with IceCube is compatible with the sum of that expected from cosmic ray interactions in the Galactic interstellar medium and that expected from hadronic interactions, near the sources and in the intergalactic medium, of the cosmic rays accelerated by the jets that produce gamma ray bursts. PMID:25415894
14. Nineteenth International Cosmic Ray Conference. SH Sessions, Volume 4
NASA Technical Reports Server (NTRS)
Jones, F. C. (Compiler)
1985-01-01
Papers submitted for presentation at the 19th International Cosmic Ray Conference are compiled. This volume covers solar and heliospheric phenomena, specifically: particle acceleration; cosmic ray composition, spectra, and anisotropy; propagation of solar and interplanetary energetic particles; solar-cycle modulation; and propagation of galactic particles in the heliosphere.
15. Cosmic Dawn with WFIRST
Central objectives: WFIRST-AFTA has tremendous potential for studying the epoch of "Cosmic Dawn," the period encompassing the formation of the first galaxies and quasars, and their impact on the surrounding universe through cosmological reionization. Our goal is to ensure that this potential is realized through the middle stages of mission planning, culminating in designs for both WFIRST and its core surveys that meet the core objectives in dark energy and exoplanet science, while maximizing the complementary Cosmic Dawn science. Methods: We will consider a combined approach to studying Cosmic Dawn using a judicious mixture of guest investigator data analysis of the primary WFIRST surveys, and a specifically designed Guest Observer program to complement those surveys. The Guest Observer program will serve primarily to obtain deep field observations, with particular attention to the capabilities of WFIRST for spectroscopic deep fields using the WFI grism. We will bring to bear our years of experience with slitless spectroscopy on the Hubble Space Telescope, along with an expectation of JWST slitless grism spectroscopy. We will use this experience to examine the implications of WFIRST's grism resolution and wavelength coverage for deep field observations, and if appropriate, to suggest potential modifications of these parameters to optimize the science return on WFIRST. We have assembled a team of experts specializing in (1) Lyman break galaxies at redshifts higher than 7, (2) quasars at high redshifts, (3) Lyman-alpha galaxies as probes of reionization, (4) theoretical simulations of high-redshift galaxies, (5) simulations of grism observations, (6) post-processing analysis to find emission line galaxies and high redshift galaxies, and (7) JWST observations and calibrations. With this team we intend to do end-to-end simulations starting with halo populations and expected spectra of high redshift galaxies and finally extracting what we can learn about (a) reionization
16. THE TEMPERATURE OF THE COSMIC MICROWAVE BACKGROUND
SciTech Connect
Fixsen, D. J.
2009-12-20
The Far InfraRed Absolute Spectrophotometer data are independently recalibrated using the Wilkinson Microwave Anisotropy Probe data to obtain a cosmic microwave background (CMB) temperature of 2.7260 ± 0.0013 K. Measurements of the temperature of the CMB are reviewed. The determination from the measurements in the literature is a CMB temperature of 2.72548 ± 0.00057 K.
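Combining independent determinations like these is an inverse-variance weighting exercise; as a minimal sketch (the formula is standard, and the two values above merely illustrate the kind of inputs that enter it):

    T̄ = Σ_i (T_i/σ_i^2) / Σ_i (1/σ_i^2),   σ̄ = [ Σ_i (1/σ_i^2) ]^{−1/2},

which is how a set of measurements with individual uncertainties σ_i collapses into a single literature value with a smaller error bar.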
17. On the origin of high-energy cosmic neutrinos
SciTech Connect
Murase, Kohta
2015-07-15
Recently, the IceCube collaboration made a big announcement of the first discovery of high-energy cosmic neutrinos. Their origin is a new interesting mystery in astroparticle physics, but the present data may give us hints of connection to cosmic-ray and/or gamma-ray sources. We will look over possible scenarios for the cosmic neutrino signal, and emphasize the importance of multimessenger approaches in order to identify the PeV neutrino sources and get crucial clues to the cosmic-ray origin. We also discuss some possibilities to study neutrino properties and probe new physics.
18. BICEP's acceleration
SciTech Connect
Contaldi, Carlo R.
2014-10-01
The recent Bicep2 [1] detection of what is claimed to be primordial B-modes opens up the possibility of constraining not only the energy scale of inflation but also the detailed acceleration history that occurred during inflation. In turn this can be used to determine the shape of the inflaton potential V(φ) for the first time — if a single, scalar inflaton is assumed to be driving the acceleration. We carry out a Monte Carlo exploration of inflationary trajectories given the current data. Using this method we obtain a posterior distribution of possible acceleration profiles ε(N) as a function of e-fold N and derived posterior distributions of the primordial power spectrum P(k) and potential V(φ). We find that the Bicep2 result, in combination with Planck measurements of total intensity Cosmic Microwave Background (CMB) anisotropies, induces a significant feature in the scalar primordial spectrum at scales k ∼ 10^−3 Mpc^−1. This is in agreement with a previous detection of a suppression in the scalar power [2].
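The dictionary that links the acceleration profile ε(N) to the observables is the usual slow-roll one (standard first-order relations, assumed here rather than taken from the paper):

    P_ζ(k) ≈ H^2 / (8π^2 ε M_Pl^2),   r = 16ε,   n_s − 1 ≈ −2ε − d ln ε/dN,

so a measured tensor amplitude fixes ε at the scales leaving the horizon during inflation, and the shape of ε(N) can then be integrated to reconstruct V(φ).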
19. Characterising CCDs with cosmic rays
SciTech Connect
Fisher-Levine, M.; Nomerotski, A.
2015-08-06
The properties of cosmic ray muons make them a useful probe for measuring the properties of thick, fully depleted CCD sensors. The known energy deposition per unit length allows measurement of the gain of the sensor's amplifiers, whilst the straightness of the tracks allows for a crude assessment of the static lateral electric fields at the sensor's edges. The small volume in which the muons deposit their energy allows measurement of the contribution to the PSF from the diffusion of charge as it drifts across the sensor. In this work we present a validation of the cosmic ray gain measurement technique by comparing with radioisotope gain measurements, and calculate the charge diffusion coefficient for prototype LSST sensors.
20. Characterising CCDs with cosmic rays
DOE PAGESBeta
Fisher-Levine, M.; Nomerotski, A.
2015-08-06
The properties of cosmic ray muons make them a useful probe for measuring the properties of thick, fully depleted CCD sensors. The known energy deposition per unit length allows measurement of the gain of the sensor's amplifiers, whilst the straightness of the tracks allows for a crude assessment of the static lateral electric fields at the sensor's edges. The small volume in which the muons deposit their energy allows measurement of the contribution to the PSF from the diffusion of charge as it drifts across the sensor. In this work we present a validation of the cosmic ray gain measurement technique by comparing with radioisotope gain measurements, and calculate the charge diffusion coefficient for prototype LSST sensors.
1. Antiprotons in the Cosmic Rays
Nutter, Scott
1999-10-01
The HEAT (High Energy Antimatter Telescope) collaboration flew a balloon-borne instrument in May 1999 to measure the relative abundance of antiprotons and protons in the cosmic rays to kinetic energies of 30 GeV. The instrument uses a multiple energy loss technique to measure the Lorentz factor of through-going cosmic rays, a magnet spectrometer to measure momentum, and several scintillation counters to determine particle charge and direction (up or down in the atmosphere). The antiproton/proton abundance ratio as a function of energy is a probe of the propagation environment of protons through the galaxy. Existing measurements indicate a higher than expected value at both high and low energies. A confirming measurement could indicate peculiar antiproton sources, such as WIMPs or supersymmetric dark matter candidates. A description of the instrument, details of the flight and instrument performance, and status of the data analysis will be given.
2. Characterising CCDs with cosmic rays
Fisher-Levine, M.; Nomerotski, A.
2015-08-01
The properties of cosmic ray muons make them a useful probe for measuring the properties of thick, fully depleted CCD sensors. The known energy deposition per unit length allows measurement of the gain of the sensor's amplifiers, whilst the straightness of the tracks allows for a crude assessment of the static lateral electric fields at the sensor's edges. Furthermore, the small volume in which the muons deposit their energy allows measurement of the contribution to the PSF from the diffusion of charge as it drifts across the sensor. In this work we present a validation of the cosmic ray gain measurement technique by comparing with radioisotope gain measurements, and calculate the charge diffusion coefficient for prototype LSST sensors.
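As a rough illustration of the gain measurement these records describe, the sketch below converts a summed muon track signal into a gain estimate. It assumes the textbook most-probable yield of about 80 electron-hole pairs per micron of silicon for a minimum-ionizing particle, and a 100 μm thick sensor with 10 μm pixels; these numbers, the function name, and the example track are illustrative assumptions, not values from the paper.

    import numpy as np

    PAIRS_PER_MICRON = 80.0   # assumed most-probable MIP yield in silicon (e-h pairs/um)
    THICKNESS_UM = 100.0      # assumed sensor thickness
    PIXEL_UM = 10.0           # assumed pixel pitch

    def gain_from_muon_track(adu_sum, track_length_pix):
        """Estimate gain (electrons/ADU) from one through-going muon track.

        adu_sum          -- total ADU summed over the track's pixels
        track_length_pix -- projected track length on the sensor, in pixels
        """
        # Path length through the silicon: projected length and thickness in quadrature.
        path_um = np.hypot(track_length_pix * PIXEL_UM, THICKNESS_UM)
        expected_electrons = PAIRS_PER_MICRON * path_um
        return expected_electrons / adu_sum

    # Hypothetical track: 30 pixels long, 40000 ADU summed signal.
    print(gain_from_muon_track(40000.0, 30.0))  # -> ~0.63 e-/ADU for these inputs

In practice one would fit the distribution of such estimates over many tracks rather than rely on single-track values, since the energy deposition fluctuates from muon to muon.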
3. Fun Times with Cosmic Rays
NASA Technical Reports Server (NTRS)
Wanjek, Christopher
2003-01-01
Who would have thought cosmic rays could be so hip? Although discovered 90 years ago on death-defying manned balloon flights (hip even by twenty-first-century extreme-sport standards), cosmic rays quickly lost popularity as way-cool telescopes were finding way-too-cool phenomena across the electromagnetic spectrum. Yet cosmic rays are back in vogue, boasting their own set of superlatives. Scientists are tracking them down with new resolve from the Arctic to Antarctica and even on the high western plains of Argentina. Theorists, too, now see cosmic rays as harbingers of funky physics. Cosmic rays are atomic and subatomic particles - the fastest moving bits of matter in the universe and the only sample of matter we have from outside the solar system (with the exception of interstellar dust grains). Lower-energy cosmic rays come from the Sun. Mid-energy particles come from stellar explosions - either spewed directly from the star like shrapnel, or perhaps accelerated to nearly the speed of light by shock waves. The highest-energy cosmic rays, whose unequivocal existence remains one of astronomy's greatest mysteries, clock in at a staggering 10^19 to 10^22 electron volts. This is the energy carried in a baseball pitch; seeing as how there are as many atomic particles in a baseball as there are baseballs in the Moon, that's one powerful toss. No simple stellar explosion could produce them. At a recent conference in Albuquerque, scientists presented the first observational evidence of a possible origin for the highest-energy variety. A team led by Elihu Boldt at NASA's Goddard Space Flight Center found that five of these very rare cosmic rays (there are only a few dozen confirmed events) come from the direction of four 'retired' quasar host galaxies just above the arm of the Big Dipper, all visible with backyard telescopes: NGC 3610, NGC 3613, NGC 4589, and NGC 5322. These galaxies are billions of years past their glory days as the brightest beacons in the universe
4. Cosmic jets
SciTech Connect
Blandford, R.D.; Begelman, M.C.; Rees, M.J.
1982-05-01
Observations with radio telescopes have revealed that the center of many galaxies is a place of violent activity. This activity is often manifested in the production of cosmic jets. Each jet is a narrow stream of plasma that appears to squirt out of the center of a galaxy, emitting radio waves as it does so. New techniques in radio astronomy have shown how common jets are in the universe. These jets take on many different forms. The discovery of radio jets has helped in the understanding of the double structure of the majority of extragalactic radio sources. The morphology of some jets and explanations of how jets are fueled are discussed. There are many difficulties plaguing the investigation of jets. Some of these difficulties are (1) it is not known how much power the jets are radiating, (2) it is hard to tell whether a jet delineated by radio emission is identical to the region where ionized gas is flowing, and (3) what makes them. (SC)
5. Cosmic vacuum energy decay and creation of cosmic matter.
PubMed
Fahr, Hans-Jörg; Heyl, Michael
2007-09-01
In the more recent literature on cosmological evolution of the universe, the cosmic vacuum energy has become a non-renounceable ingredient. The cosmological constant Lambda, first invented by Einstein, but later also rejected by him, presently experiences an astonishing revival. Interestingly enough, it acts just as a constant vacuum energy density would. Namely, it has an accelerating action on cosmic dynamics, without which, as it appears, presently obtained cosmological data cannot be reconciled with theory. As we are going to show in this review, however, the concept of a constant vacuum energy density is unsatisfactory for very basic reasons, because it would claim a physical reality that acts upon spacetime and matter dynamics without itself being acted upon by spacetime or matter. PMID:17457553
6. Relativistic transport theory for cosmic-rays
NASA Technical Reports Server (NTRS)
Webb, G. M.
1985-01-01
Various aspects of the transport of cosmic rays in a relativistically moving magnetized plasma supporting a spectrum of hydromagnetic waves that scatter the cosmic rays are presented. A local Lorentz frame moving with the waves or turbulence scattering the cosmic rays is used to specify the individual particle momentum. The comoving frame is in general a noninertial frame in which the observer's volume element is expanding and shearing; geometric energy change terms appear in the cosmic-ray transport equation which consist of the relativistic generalization of the adiabatic deceleration term and a further term involving the acceleration vector of the scatterers. A relativistic version of the pitch angle evolution equation, including the effects of adiabatic focussing, pitch angle scattering, and energy changes, is presented.
7. REVIEWS OF TOPICAL PROBLEMS: Cosmic vacuum
Chernin, Artur D.
2001-11-01
Recent observational studies of distant supernovae have suggested the existence of cosmic vacuum whose energy density exceeds the total density of all the other energy components in the Universe. The vacuum produces the field of antigravity that causes the cosmological expansion to accelerate. It is this accelerated expansion that has been discovered in the observations. The discovery of cosmic vacuum radically changes our current understanding of the present state of the Universe. It also poses new challenges to both cosmology and fundamental physics. Why is the density of vacuum what it is? Why do the densities of the cosmic energy components differ in exact value but agree in order of magnitude? On the other hand, the discovery made at large cosmological distances of hundreds and thousands of Mpc provides new insights into the dynamics of the nearby Universe, the motions of galaxies in the local volume of 10 - 20 Mpc where the cosmological expansion was originally discovered.
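The "antigravity" of the vacuum referred to here is just the Friedmann acceleration equation applied to the vacuum equation of state (a standard result, stated for context):

    ä/a = −(4πG/3)(ρ + 3p/c^2);   with p_v = −ρ_v c^2,   ä/a = +(8πG/3) ρ_v > 0,

so any component with sufficiently negative pressure drives the accelerated expansion described above.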
8. Galactic Cosmic Rays and the Light Elements
Parizot, Etienne
2001-10-01
The study of the light elements abundances in low metallicity stars offers a unique way to learn about the past content of our Galaxy in energetic particles (EPs). This study teaches us that either the light elements are not produced by cosmic rays interactions in the interstellar medium (ISM), as has been thought for 30 years, or the cosmic rays are not what one usually thinks they are, namely standard interstellar material accelerated by the shock waves generated by supernova explosions. In any case, we have to revise our understanding of the EPs in the Galaxy. Relying on the observational evidence about Li, Be and B Galactic evolution as well as about the distribution of massive stars, we show that most of the EPs responsible for the production of light elements must be accelerated inside superbubbles, as is probably the case for the standard Galactic cosmic rays as well.
9. Galactic Cosmic Rays: From Earth to Sources
NASA Technical Reports Server (NTRS)
Brandt, Theresa J.
2012-01-01
For nearly 100 years we have known that cosmic rays come from outer space, yet proof of their origin, as well as a comprehensive understanding of their acceleration, remains elusive. Direct detection of high energy (up to 10^15 eV) charged nuclei with experiments such as the balloon-borne, antarctic Trans-Iron Galactic Element Recorder (TIGER) has provided insight into these mysteries through measurements of cosmic ray abundances. The abundance of these rare elements with respect to certain intrinsic properties suggests that cosmic rays include a component of massive star ejecta. Supernovae and their remnants (SNe & SNRs), often occurring at the end of a massive star's life or in an environment including massive star material, are among the most likely candidate sources accelerating galactic cosmic ray nuclei up to the requisite high energies. The Fermi Gamma-ray Space Telescope Large Area Telescope (Fermi LAT) has improved our understanding of such sources by widening the window of observable energies and thus into potential sources' energetic processes. In combination with multiwavelength observations, we are now better able to constrain particle populations (often hadron-dominated at GeV energies) and environmental conditions, such as the magnetic field strength. The SNR CTB 37A is one such source which could contribute to the observed galactic cosmic rays. By assembling populations of SNRs, we will be able to more definitively determine their contribution to the observed galactic cosmic rays, as well as better understand SNRs themselves. Such multimessenger studies will thus illuminate the long-standing cosmic ray mysteries, shedding light on potential sources, acceleration mechanisms, and cosmic ray propagation.
10. A model for the proton spectrum and cosmic ray anisotropy
NASA Technical Reports Server (NTRS)
Xu, C.
1985-01-01
The problem of the origin of the cosmic rays is still unsolved. As a theory, it should explain the supply of particles and energy, the mechanism of acceleration and propagation, as well as some important features obtained directly from cosmic ray experiments, such as the power spectrum and the knee. There are two kinds of models for interpreting the knee of the cosmic ray spectrum. One is the leaky box model. Another model suggests that the cut-off rigidity of the main sources causes the knee. The present paper studies the spectrum and the anisotropy of cosmic rays in an isotropic diffusion model with explosive discrete sources in an infinite galaxy.
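In the leaky-box picture referred to here, the equilibrium spectrum follows from balancing injection against escape (standard textbook relation, not specific to this paper):

    N(E) ≈ Q(E) τ_esc(E);   Q ∝ E^−α, τ_esc ∝ E^−δ  ⇒  N(E) ∝ E^−(α+δ),

so a knee can be produced either by a break or cutoff in the source term Q(E) (the rigidity-cutoff model above) or by a change in the energy dependence of the escape time τ_esc.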
11. Testing Gravity using Cosmic Voids
Falck, Bridget
2016-01-01
Though general relativity is well-tested on small (Solar System) scales, the late-time acceleration of the Universe provides strong motivation to test GR on cosmological scales. The difference between the small and large scale behavior of gravity is determined by the screening mechanism in modified gravity theories. Dark matter halos are often screened in these models, especially in models with Vainshtein screening, motivating a search for signatures of modified gravity in cosmic voids. We explore density, force, and velocity profiles of voids found in N-body simulations, using both dark matter particles and dark matter halos to identify the voids. The prospect of testing gravity using cosmic voids may be limited by the sparsity of halos as tracers of the density field.
12. Cosmic Ray research in Armenia
Chilingarian, A.; Mirzoyan, R.; Zazyan, M.
2009-11-01
Cosmic ray research on Mt. Aragats began in 1934 with the measurements of East-West anisotropy by the group from the Leningrad Physics-Technical Institute and Norair Kocharian from Yerevan State University. Stimulated by the results of their experiments, in 1942 the brothers Artem and Abraham Alikhanyan organized a scientific expedition to Aragats. Since that time physicists have been studying cosmic ray fluxes on Mt. Aragats with various particle detectors: mass spectrometers, calorimeters, transition radiation detectors, and huge particle detector arrays detecting protons and nuclei accelerated in the most violent explosions in the Galaxy. The latest activities at Mt. Aragats include Space Weather research with networks of particle detectors located in Armenia and abroad, and detectors of the Space Education center in Yerevan.
13. PROBING DYNAMICS OF ELECTRON ACCELERATION WITH RADIO AND X-RAY SPECTROSCOPY, IMAGING, AND TIMING IN THE 2002 APRIL 11 SOLAR FLARE
SciTech Connect
Fleishman, Gregory D.; Nita, Gelu M.; Gary, Dale E.; Kontar, Eduard P.
2013-05-10
Based on detailed analysis of radio and X-ray observations of a flare on 2002 April 11 augmented by realistic three-dimensional modeling, we have identified a radio emission component produced directly at the flare acceleration region. This acceleration region radio component has distinctly different (1) spectrum, (2) light curves, (3) spatial location, and, thus, (4) physical parameters from those of the separately identified trapped or precipitating electron components. To derive the evolution of physical parameters of the radio sources we apply forward fitting of the radio spectrum time sequence with the gyrosynchrotron source function with five to six free parameters. At the stage when the contribution from the acceleration region dominates the radio spectrum, the X-ray- and radio-derived electron energy spectral indices agree well with each other. During this time the maximum energy of the accelerated electron spectrum displays a monotonic increase with time from ~300 keV to ~2 MeV over roughly one minute duration, indicative of an acceleration process in the form of growth of the power-law tail; the fast electron residence time in the acceleration region is about 2-4 s, which is much longer than the time of flight and so requires a strong diffusion mode there to inhibit free-streaming propagation. The acceleration region has a relatively strong magnetic field, B ~ 120 G, and a low thermal density, n_e ≲ 2 × 10^9 cm^−3. These acceleration region properties are consistent with a stochastic acceleration mechanism.
14. Diffusive Shock Acceleration
Baring, Matthew
2003-04-01
The process of diffusive acceleration of charged particles in shocked plasmas is widely invoked in astrophysics to account for the ubiquitous presence of signatures of non-thermal relativistic electrons and ions in the universe. This statistical energization mechanism, manifested in turbulent media, was first posited by Enrico Fermi in 1949 to explain the observed cosmic ray population, which exhibits an almost power-law distribution in rigidity. The absence of a momentum scale is a key characteristic of diffusive shock acceleration, and astrophysical systems generally only impose scales at the injection (low energy) and loss (high energy) ends of the particle spectrum. The existence of structure in the cosmic ray spectrum (the "knee") at around 3000 TeV has promoted contentions that there are at least two origins for cosmic rays, a galactic one supplying those up to the knee, and perhaps an extragalactic one that can explain even the ultra-high energy cosmic rays (UHECRs) seen at 1-300 EeV. Accounting for the UHECRs with familiar astrophysical sites of acceleration has historically proven difficult due to the need to assume high magnetic fields in order to reduce the shortest diffusive acceleration timescale, the ion gyroperiod, to meaningful values. Yet active galaxies and gamma-ray bursts remain strong and interesting candidate sources for UHECRs, turning the theoretical focus to relativistic shocks. This review summarizes properties of diffusive shock acceleration that are salient to the issue of UHECR generation. These include spectral indices, anisotropies, acceleration efficiencies and timescales, as functions of the shock speed and mean field orientation, and also the degree of field turbulence. Astrophysical sites for UHECR production are also critiqued.
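The "absence of a momentum scale" noted above shows up directly in the test-particle result for diffusive shock acceleration (standard relations, given here as background): a shock with compression ratio r yields a phase-space distribution

    f(p) ∝ p^−q,  q = 3r/(r − 1),  i.e. N(E) ∝ E^−(q−2) for relativistic particles,

so a strong non-relativistic shock (r = 4) gives the canonical N(E) ∝ E^−2. The UHECR timescale problem is likewise visible in the acceleration time t_acc ∼ κ/u_sh^2, which even at the Bohm limit κ ∼ r_g c/3 grows with the gyroradius, which is why large magnetic fields or relativistic shock speeds are invoked for the highest energies.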
15. THE COSMIC ORIGINS SPECTROGRAPH
SciTech Connect
Green, James C.; Michael Shull, J.; Snow, Theodore P.; Stocke, John; Froning, Cynthia S.; Osterman, Steve; Beland, Stephane; Burgh, Eric B.; Danforth, Charles; France, Kevin; Ebbets, Dennis; Heap, Sara H.; Leitherer, Claus; Sembach, Kenneth; Linsky, Jeffrey L.; Savage, Blair D.; Siegmund, Oswald H. W.; Spencer, John; Alan Stern, S.; Welsh, Barry; and others
2012-01-01
The Cosmic Origins Spectrograph (COS) is a moderate-resolution spectrograph with unprecedented sensitivity that was installed into the Hubble Space Telescope (HST) in 2009 May, during HST Servicing Mission 4 (STS-125). We present the design philosophy and summarize the key characteristics of the instrument that will be of interest to potential observers. For faint targets, with flux F{sub λ} ≈ 1.0 × 10{sup -14} erg cm{sup -2} s{sup -1} Å{sup -1}, COS can achieve comparable signal to noise (when compared to Space Telescope Imaging Spectrograph echelle modes) in 1%-2% of the observing time. This has led to a significant increase in the total data volume and data quality available to the community. For example, in the first 20 months of science operation (2009 September-2011 June) the cumulative redshift pathlength of extragalactic sight lines sampled by COS is nine times that sampled at moderate resolution in 19 previous years of Hubble observations. COS programs have observed 214 distinct lines of sight suitable for study of the intergalactic medium as of 2011 June. COS has measured, for the first time with high reliability, broad Lyα absorbers and Ne VIII in the intergalactic medium, and observed the He II reionization epoch along multiple sightlines. COS has detected the first CO emission and absorption in the UV spectra of low-mass circumstellar disks at the epoch of giant planet formation, and detected multiple ionization states of metals in extra-solar planetary atmospheres. In the coming years, COS will continue its census of intergalactic gas, probe galactic and cosmic structure, and explore physics in our solar system and Galaxy.
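The quoted "1%-2% of the observing time" is the arithmetic of photon-limited spectroscopy: S/N ~ sqrt(rate × t), so matching a reference S/N requires exposure time inversely proportional to the count-rate advantage. The ~50-100× effective throughput gain below is inferred from the quoted figure for this sketch, not an official COS number.

```python
# Photon-limited exposure scaling: S/N ~ sqrt(rate * t), so at fixed S/N
# t_COS / t_ref = rate_ref / rate_COS = 1 / throughput_gain.
def time_fraction(throughput_gain):
    """Fraction of the reference exposure time needed at fixed S/N."""
    return 1.0 / throughput_gain

for gain in (50, 100):  # assumed effective throughput advantage
    print(f"gain {gain:3d}x -> {100 * time_fraction(gain):.0f}% of the time")
```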
16. The Cosmic Origins Spectrograph
NASA Technical Reports Server (NTRS)
Green, James C.; Froning, Cynthia S.; Osterman, Steve; Ebbets, Dennis; Heap, Sara H.; Leitherer, Claus; Linsky, Jeffrey L.; Savage, Blair D.; Sembach, Kenneth; Shull, J. Michael; Siegmund, Oswald H. W.; Snow, Theodore P.; Spencer, John; Stern, S. Alan; Stocke, John; Welsh, Barry; Beland, Stephane; Burgh, Eric B.; Danforth, Charles; France, Kevin; Keeney, Brian; McPhate, Jason; Penton, Steven V; Andrews, John; Morse, Jon
2010-01-01
The Cosmic Origins Spectrograph (COS) is a moderate-resolution spectrograph with unprecedented sensitivity that was installed into the Hubble Space Telescope (HST) in May 2009, during HST Servicing Mission 4 (STS-125). We present the design philosophy and summarize the key characteristics of the instrument that will be of interest to potential observers. For faint targets, with flux F_λ ≈ 1.0 × 10^-14 erg/s/cm^2/Å, COS can achieve comparable signal to noise (when compared to STIS echelle modes) in 1-2% of the observing time. This has led to a significant increase in the total data volume and data quality available to the community. For example, in the first 20 months of science operation (September 2009 - June 2011) the cumulative redshift pathlength of extragalactic sight lines sampled by COS is 9 times that sampled at moderate resolution in 19 previous years of Hubble observations. COS programs have observed 214 distinct lines of sight suitable for study of the intergalactic medium as of June 2011. COS has measured, for the first time with high reliability, broad Lyα absorbers and Ne VIII in the intergalactic medium, and observed the He II reionization epoch along multiple sightlines. COS has detected the first CO emission and absorption in the UV spectra of low-mass circumstellar disks at the epoch of giant planet formation, and detected multiple ionization states of metals in extra-solar planetary atmospheres. In the coming years, COS will continue its census of intergalactic gas, probe galactic and cosmic structure, and explore physics in our solar system and Galaxy.
17. Cosmic ray nuclei from extragalactic and galactic pulsars
Fang, Ke
2013-02-01
In an extragalactic newly-born pulsar, nuclei stripped off the stellar surface can be accelerated to extreme energies and leave the source through the dense supernova surroundings. The escaped ultrahigh energy cosmic rays can explain both the observed UHE energy spectrum and the atmospheric depth measurements. In addition, assuming that Galactic pulsars accelerate cosmic rays with the same injection composition, very high energy cosmic rays from local pulsars can match the flux measurements from above the knee to the ankle and, at the same time, agree with the detected composition trend.
18. The cosmic-ray shock structure problem for relativistic shocks
NASA Technical Reports Server (NTRS)
Webb, G. M.
1985-01-01
The time asymptotic behaviour of a relativistic (parallel) shock wave significantly modified by the diffusive acceleration of cosmic rays is investigated by means of relativistic hydrodynamical equations for both the cosmic rays and the thermal gas. The form of the shock structure equation and the dispersion relation for both long and short wavelength waves in the system are obtained. The dependence of the shock acceleration efficiency on the upstream fluid speed, the long wavelength Mach number, and the ratio N = P{sub co}/(P{sub co} + P{sub go}) (P{sub co} and P{sub go} are the upstream cosmic-ray and thermal gas pressures, respectively) is studied.
19. Cosmic rays and hadronic interactions
Lipari, Paolo
2015-08-01
The study of cosmic rays, and more generally of the "high energy universe", is at the moment a vibrant field that, thanks to observations by several innovative detectors of relativistic charged particles, gamma rays, and neutrinos, continues to generate surprising and exciting results. The progress in the field is rapid but many fundamental problems remain open. There is an intimate relation between the study of the high energy universe and the study of the properties of hadronic interactions. High energy cosmic rays can only be studied by detecting the showers they generate in the atmosphere, and the interpretation of the data requires an accurate modeling of the collisions between hadrons. The study of cosmic rays inside their sources and in the Galaxy also requires a precise description of hadronic interactions. A program of experimental studies at the LHC and at lower energy, designed to address the most pressing problems, could significantly reduce the existing uncertainties and is very desirable. Such an experimental program would also have a strong intrinsic scientific interest, allowing the broadening and deepening of our understanding of Quantum Chromo Dynamics in the non-perturbative regime, the least understood sector of the Standard Model of particle physics. It should also be noted that the cosmic ray spectrum extends to particles with energy E ~ 10^20 eV, or a nucleon-nucleon c.m. energy √s ≃ 430 TeV, 30 times higher than the current LHC energy. Cosmic ray experiments therefore offer the possibility to perform studies on the properties of hadronic interactions that are impossible at accelerators.
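The quoted equivalent center-of-mass energy follows from fixed-target kinematics, √s ≈ √(2 E_lab m_N c²) for E_lab ≫ m_N c². A quick check of the numbers in the abstract:

```python
# Fixed-target nucleon-nucleon c.m. energy: s ~ 2 * E_lab * m_N c^2.
import math

m_n = 0.938e9                   # nucleon rest energy in eV
for e_lab in (1e17, 1e20):      # cosmic-ray lab-frame energies in eV
    sqrt_s = math.sqrt(2.0 * e_lab * m_n)
    print(f"E = {e_lab:.0e} eV  ->  sqrt(s) = {sqrt_s / 1e12:6.1f} TeV")
# E ~ 1e17 eV already matches the LHC (~14 TeV), while E ~ 1e20 eV gives
# sqrt(s) ~ 430 TeV, the ~30x factor quoted above.
```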
20. Cosmic Rays in the Heliosphere
Potgieter, M. S.
The International Heliophysical Year (IHY) has the purpose of promoting research on the Sun-Heliosphere system outward to the local interstellar medium - the new frontier. This includes fostering international scientific cooperation in the study of heliophysical phenomena now and in the future. Part of this process is to communicate research done on the heliosphere, especially to the scientific community in Africa. A short review is given of the numerical modeling of the heliosphere, and of the modulation of cosmic rays and how these particles are used to probe the heliosphere to understand its basic features. Projects of both a theoretical and numerical nature are proposed for the IHY.
1. ROLE OF LINE-OF-SIGHT COSMIC-RAY INTERACTIONS IN FORMING THE SPECTRA OF DISTANT BLAZARS IN TeV GAMMA RAYS AND HIGH-ENERGY NEUTRINOS
SciTech Connect
Essey, Warren; Kusenko, Alexander; Kalashev, Oleg; Beacom, John F.
2011-04-10
Active galactic nuclei (AGNs) can produce both gamma rays and cosmic rays. The observed high-energy gamma-ray signals from distant blazars may be dominated by secondary gamma rays produced along the line of sight by the interactions of cosmic-ray protons with background photons. This explains the surprisingly low attenuation observed for distant blazars, because the production of secondary gamma rays occurs, on average, much closer to Earth than the distance to the source. Thus, the observed spectrum in the TeV range does not depend on the intrinsic gamma-ray spectrum, while it depends on the output of the source in cosmic rays. We apply this hypothesis to a number of sources and, in every case, we obtain an excellent fit, strengthening the interpretation of the observed spectra as being due to secondary gamma rays. We explore the ramifications of this interpretation for limits on the extragalactic background light and for the production of cosmic rays in AGNs. We also make predictions for the neutrino signals, which can help probe the acceleration of cosmic rays in AGNs.
2. COSMIC program documentation experience
NASA Technical Reports Server (NTRS)
Kalar, M. C.
1970-01-01
A brief history of COSMIC as it relates to the handling of program documentation is summarized; the items that are essential for computer program documentation are also discussed. The COSMIC documentation and program standards handbook is appended.
3. A close-up of the sun. [solar probe mission planning conference
NASA Technical Reports Server (NTRS)
Neugebauer, M. (Editor); Davies, R. W. (Editor)
1978-01-01
NASA's long-range plan for the study of solar-terrestrial relations includes a Solar Probe Mission in which a spacecraft is put into an eccentric orbit with perihelion near 4 solar radii (0.02 AU). The scientific experiments which might be done with such a mission are discussed. Topics include the distribution of mass within the Sun, solar angular momentum, the fine structure of the solar surface and corona, the acceleration of the solar wind and energetic particles, and the evolution of interplanetary dust. The mission could also contribute to high-accuracy tests of general relativity and the search for cosmic gravitational radiation.
4. Cosmic Interactions
2008-01-01
An image based on data taken with ESO's Very Large Telescope reveals a triplet of galaxies intertwined in a cosmic dance (ESO PR Photo 02/08: NGC 7173, 7174, and 7176). The three galaxies, catalogued as NGC 7173 (top), 7174 (bottom right) and 7176 (bottom left), are located 106 million light-years away towards the constellation of Piscis Austrinus (the 'Southern Fish'). NGC 7173 and 7176 are elliptical galaxies, while NGC 7174 is a spiral galaxy with quite disturbed dust lanes and a long, twisted tail. This seems to indicate that the two bottom galaxies - whose combined shape bears some resemblance to that of a sleeping baby - are currently interacting, with NGC 7176 providing fresh material to NGC 7174. Matter present in great quantity around the triplet's members also points to the fact that NGC 7176 and NGC 7173 have interacted in the past. Astronomers have suggested that the three galaxies will finally merge into a giant 'island universe', tens to hundreds of times as massive as our own Milky Way. The triplet is part of a so-called 'Compact Group', as compiled by Canadian astronomer Paul Hickson in the early 1980s. The group, which is the 90th entry in the catalogue and is therefore known as HCG 90, actually contains four major members. One of them - NGC 7192 - lies above the trio, outside of this image, and is another peculiar spiral galaxy. Compact groups are small, relatively isolated systems of typically four to ten galaxies in close proximity to one another. Another striking example is Robert's Quartet. Compact groups are excellent laboratories for the study of galaxy interactions and their effects, in particular the formation of stars. As the striking image reveals, there are many other galaxies in the field. Some are distant ones, while others seem to be part of the family. Studies made with other telescopes have indeed revealed that the HCG 90 group contains 16 members.
5. DIFFUSIVE SHOCK ACCELERATION SIMULATIONS OF RADIO RELICS
SciTech Connect
Kang, Hyesung; Ryu, Dongsu; Jones, T. W.
2012-09-01
Recent radio observations have identified a class of structures, so-called radio relics, in clusters of galaxies. The radio emission from these sources is interpreted as synchrotron radiation from GeV electrons gyrating in μG-level magnetic fields. Radio relics, located mostly in the outskirts of clusters, seem to be associated with shock waves, especially those developed during mergers. In fact, they seem to be good structures for identifying and probing such shocks in intracluster media (ICMs), provided we understand the electron acceleration and re-acceleration at those shocks. In this paper, we describe time-dependent simulations for diffusive shock acceleration at weak shocks that are expected to be found in ICMs. Freshly injected as well as pre-existing populations of cosmic-ray (CR) electrons are considered, and energy losses via synchrotron and inverse Compton are included. We then compare the synchrotron flux and spectral distributions estimated from the simulations with those in two well-observed radio relics in CIZA J2242.8+5301 and ZwCl0008.8+5215. Considering that CR electron injection is expected to be rather inefficient at weak shocks with Mach number M ≲ a few, the existence of radio relics could indicate a pre-existing population of low-energy CR electrons in ICMs. The implication of our results for the merger shock scenario of radio relics is discussed.
6. Cosmic rays from cosmic strings with condensates
SciTech Connect
Vachaspati, Tanmay
2010-02-15
We revisit the production of cosmic rays by cusps on cosmic strings. If a scalar field ('Higgs') has a linear interaction with the string world sheet, such as would occur if there is a bosonic condensate on the string, cusps on string loops emit narrow beams of very high energy Higgses which then decay to give a flux of ultrahigh energy cosmic rays. The ultrahigh energy flux and the gamma to proton ratio agree with observations if the string scale is ~10{sup 13} GeV. The diffuse gamma ray and proton fluxes are well below current bounds. Strings that are lighter and have linear interactions with scalars produce an excess of direct and diffuse cosmic rays and are ruled out by observations, while heavier strings (~10{sup 15} GeV) are constrained by their gravitational signatures. This leaves a narrow window of parameter space for the existence of cosmic strings with bosonic condensates.
7. Cosmic Rays and Their Radiative Processes in Numerical Cosmology
NASA Technical Reports Server (NTRS)
Ryu, Dongsu; Miniati, Francesco; Jones, Tom W.; Kang, Hyesung
2000-01-01
A cosmological hydrodynamic code is described, which includes a routine to compute cosmic ray acceleration and transport in a simplified way. The routine was designed to explicitly follow diffusive acceleration at shocks, as well as second-order Fermi acceleration and adiabatic losses in smooth flows. Synchrotron cooling of the electron population can also be followed. The updated code is intended to be used to study the properties of nonthermal synchrotron emission and inverse Compton scattering from electron cosmic rays in clusters of galaxies, in addition to the properties of thermal bremsstrahlung emission from hot gas. The results of a test simulation using a grid of 128^3 cells are presented, where cosmic rays and magnetic field have been treated passively and synchrotron cooling of cosmic ray electrons has not been included.
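The synchrotron (and inverse-Compton) losses such a routine follows obey dE/dt = −bE², giving a simple cooling time t_cool = 1/(bE). A minimal numerical sketch with illustrative intracluster values (the field strength and Lorentz factors are assumptions, not parameters from the paper):

```python
# Synchrotron + inverse-Compton cooling of a CR electron: dE/dt = -b * E^2,
# so t_cool = 1 / (b * E).  All values are illustrative (cgs units).
import math

b_field = 1.0e-6                           # magnetic field, G (~1 uG, assumed)
u_b = b_field**2 / (8.0 * math.pi)         # magnetic energy density, erg/cm^3
u_cmb = 4.2e-13                            # CMB energy density today, erg/cm^3
sigma_t, m_e_c2, c = 6.652e-25, 8.187e-7, 3.0e10

b_coef = (4.0 / 3.0) * sigma_t * c * (u_b + u_cmb) / m_e_c2**2
for gamma in (1e3, 1e4):                   # electron Lorentz factors (assumed)
    t_cool_yr = 1.0 / (b_coef * gamma * m_e_c2) / 3.15e7
    print(f"gamma = {gamma:.0e}:  t_cool ~ {t_cool_yr:.1e} yr")
# gamma ~ 1e4 electrons cool in ~2e8 yr, which is why synchrotron aging
# matters over cluster formation timescales.
```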
8. JUPITER AS A GIANT COSMIC RAY DETECTOR
SciTech Connect
Rimmer, P. B.; Stark, C. R.; Helling, Ch.
2014-06-01
We explore the feasibility of using the atmosphere of Jupiter to detect ultra-high-energy cosmic rays (UHECRs). The large surface area of Jupiter allows us to probe cosmic rays of higher energies than previously accessible. Cosmic ray extensive air showers in Jupiter's atmosphere could in principle be detected by the Large Area Telescope (LAT) on the Fermi observatory. In order to be observed, these air showers would need to be oriented toward the Earth, and would need to occur sufficiently high in the atmosphere that the gamma rays can penetrate. We demonstrate that, under these assumptions, Jupiter provides an effective cosmic ray "detector" area of 3.3 × 10{sup 7} km{sup 2}. We predict that Fermi-LAT should be able to detect events of energy >10{sup 21} eV with fluence 10{sup -7} erg cm{sup -2} at a rate of about one per month. The observed number of air showers may provide an indirect measure of the flux of cosmic rays ≳ 10{sup 20} eV. Extensive air showers also produce a synchrotron signature that may be measurable by the Atacama Large Millimeter/submillimeter Array (ALMA). Simultaneous observations of Jupiter with ALMA and Fermi-LAT could be used to provide broad constraints on the energies of the initiating cosmic rays.
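The quoted rate corresponds to a definite integral-flux sensitivity through rate = J(>E) × A_eff × Ω_eff. Inverting with the numbers from the abstract (the effective solid angle is a rough assumption for this sketch):

```python
# Implied integral-flux sensitivity of Jupiter as a UHECR "detector":
# rate = J(>E) * A_eff * Omega_eff.
a_eff = 3.3e7        # km^2, effective area quoted above
omega = 1.0          # sr, assumed effective solid angle (illustrative)
rate = 12.0          # events per year (~one per month, as predicted)

j_implied = rate / (a_eff * omega)
print(f"implied flux above ~1e21 eV: {j_implied:.1e} km^-2 sr^-1 yr^-1")
# A ground array of ~3e3 km^2 would need ~1e4 years for the same exposure,
# which is the sense in which Jupiter probes otherwise inaccessible energies.
```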
9. Monopole annihilation and highest energy cosmic rays
SciTech Connect
Bhattacharjee, P. (Indian Institute of Astrophysics, Sarjapur Road, Koramangala, Bangalore 560 034); Sigl, G. (NASA/Fermilab Astrophysics Center, Fermi National Accelerator Laboratory, Batavia, Illinois 60510-0500)
1995-04-15
Cosmic rays with energies exceeding 10{sup 20} eV have been detected. The origin of these highest energy cosmic rays remains unknown. Established astrophysical acceleration mechanisms encounter severe difficulties in accelerating particles to these energies. Alternative scenarios where these particles are created by the decay of cosmic topological defects have been suggested in the literature. In this paper we study the possibility of producing the highest energy cosmic rays through a process that involves the formation of metastable magnetic monopole-antimonopole bound states and their subsequent collapse. The annihilation of the heavy monopole-antimonopole pairs constituting the monopolonia can produce energetic nucleons, gamma rays, and neutrinos whose expected flux we estimate and discuss in relation to experimental data so far available. The monopoles we consider are the ones that could be produced in the early Universe during a phase transition at the grand unification energy scale. We find that observable cosmic ray fluxes can be produced with monopole abundances compatible with present bounds.
10. Neutrino mass without cosmic variance
LoVerde, Marilena
2016-05-01
Measuring the absolute scale of the neutrino masses is one of the most exciting opportunities available with near-term cosmological data sets. Two quantities that are sensitive to neutrino mass, scale-dependent halo bias b(k) and the linear growth parameter f(k) inferred from redshift-space distortions, can be measured without cosmic variance. Unlike the amplitude of the matter power spectrum, which always has a finite error, the error on b(k) and f(k) continues to decrease as the number density of tracers increases. This paper presents forecasts for statistics of galaxy and lensing fields that are sensitive to neutrino mass via b(k) and f(k). The constraints on neutrino mass from the auto- and cross-power spectra of spectroscopic and photometric galaxy samples are weakened by scale-dependent bias unless a very high density of tracers is available. In the high-density limit, using multiple tracers allows cosmic variance to be beaten, and the forecasted errors on neutrino mass shrink dramatically. In practice, beating the cosmic-variance errors on neutrino mass with b(k) will be a challenge, but this signal is nevertheless a new probe of neutrino effects on structure formation that is interesting in its own right.
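The scaling behind "without cosmic variance" can be made explicit: the power spectrum amplitude carries a sample-variance floor of √(2/N_modes), while the ratio of two tracers of the same density field is limited only by shot noise. A schematic comparison under simple Gaussian assumptions (all numbers illustrative, not the paper's forecasts):

```python
# Schematic per-k-bin error scalings for the multi-tracer technique:
#   P(k) amplitude:  sigma_P/P ~ sqrt(2/N_modes) * (1 + 1/(n*P))  [cv floor]
#   tracer ratio:    sigma_r/r ~ sqrt(2/(n*P)) / sqrt(N_modes)    [no floor]
import math

n_modes = 1.0e5                 # Fourier modes in the bin (survey-dependent)
p_gal = 1.0e4                   # tracer power at this k [h^-3 Mpc^3], assumed
for n in (1e-4, 1e-3, 1e-2):    # tracer number density [h^3 Mpc^-3]
    n_p = n * p_gal             # the usual "nP" figure of merit
    err_amp = math.sqrt(2.0 / n_modes) * (1.0 + 1.0 / n_p)
    err_ratio = math.sqrt(2.0 / (n_p * n_modes))
    print(f"nP = {n_p:6.1f}: sigma_P/P = {err_amp:.4f}, "
          f"sigma_ratio/r = {err_ratio:.4f}")
# As nP grows the ratio error keeps shrinking while the amplitude error
# saturates -- which is why b(k) and f(k) can evade cosmic variance.
```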
11. Cosmic ray produced isotopes in terrestrial systems.
Lal, D.
1998-12-01
Continuing improvements in the sensitivity of measurement of cosmic ray produced isotopes in environmental samples have progressively broadened the scope of their applications to characterise and quantify a wide variety of processes in Earth and planetary sciences. In this article, the author concentrates on the new developments in the field of nuclear geophysics, based on isotopic changes produced by cosmic rays in terrestrial systems. This field, which is best described as cosmic ray geophysics, has its roots in the discovery of cosmogenic 14C on the Earth by Willard Libby in 1948, and grew rapidly at first, but slowed down during the '60s and '70s. In the '80s, there was a renaissance in cosmic ray produced isotope studies, thanks mainly to the development of the accelerator mass spectrometry technique capable of measuring minute amounts of radioactivity in terrestrial samples. This technological advance has considerably enhanced the applications of cosmic ray produced isotopes, and today one finds them being used to address diverse problems in Earth and planetary sciences. The author discusses the present scope of the field of cosmic ray geophysics with an emphasis on geomorphology. It is stressed that this is the decade in which this field, which has been studied passionately by geographers, geomorphologists and geochemists for more than five decades, has at its service nuclear methods to introduce numeric time controls in the range of centuries to millions of years.
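The "numeric time controls" rest on a simple buildup law: for a freshly exposed, non-eroding surface with production rate P and decay constant λ, the inventory is N(t) = (P/λ)(1 − e^{−λt}), which inverts directly to an exposure age. A minimal worked example with an illustrative 10Be-like production rate:

```python
# Cosmogenic nuclide buildup and exposure dating (no erosion or burial):
# N(t) = (P / lam) * (1 - exp(-lam * t)).
import math

p_rate = 4.0                      # atoms g^-1 yr^-1 (illustrative 10Be rate)
half_life = 1.387e6               # yr, 10Be half-life
lam = math.log(2.0) / half_life

def exposure_age(n_meas):
    """Invert the buildup law for a measured concentration (atoms/g)."""
    return -math.log(1.0 - n_meas * lam / p_rate) / lam

for t_true in (1e3, 1e5, 1e6):    # true exposure ages, yr
    n = (p_rate / lam) * (1.0 - math.exp(-lam * t_true))
    print(f"t = {t_true:9.0f} yr -> N = {n:.3e} atoms/g -> "
          f"recovered age = {exposure_age(n):9.0f} yr")
```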
12. Cosmic ray interactions in starbursting galaxies
Yoast-Hull, Tova M.
High quality gamma-ray and radio observations of nearby galaxies offer an unprecedented opportunity to quantitatively study the properties of their cosmic ray populations. Accounting for various interactions and energy losses, I developed a multi-component, single-zone model of the cosmic ray populations in the central molecular zones of star-forming galaxies. Using observational knowledge of the interstellar medium and star formation, I successfully predicted the radio, gamma-ray, and neutrino spectra for nearby starbursts. Using chi-squared tests to compare the models with observational radio and gamma-ray data, I placed constraints on magnetic field strengths, cosmic ray energy densities, and galactic wind (advection) speeds. The initial models were applied to and tested on the prototypical starburst galaxy M82. To further test the model and to explore the differences in environment between starbursts and active galactic nuclei, I studied NGC 253 and NGC 1068, both nearby giant spiral galaxies which have been detected in gamma-rays. Additionally, I demonstrated that the excess GeV energy gamma-ray emission in the Galactic Center is likely not diffuse emission from an additional population of cosmic rays accelerated in supernova remnants. Lastly, I investigated cosmic ray populations in the starburst nuclei of Arp 220, a nearby ultraluminous infrared galaxy which displays a high-intensity mode of star formation more common in young galaxies, and I showed that the nuclei are efficient cosmic-ray proton calorimeters.
13. The origin of galactic cosmic rays
Blasi, Pasquale
2013-11-01
One century ago Viktor Hess carried out several balloon flights that led him to conclude that the penetrating radiation responsible for the discharge of electroscopes was of extraterrestrial origin. One century from the discovery of this phenomenon seems to be a good time to stop and think about what we have understood about Cosmic Rays. The aim of this review is to illustrate the ideas that have been and are being explored in order to account for the observable quantities related to cosmic rays and to summarize the numerous new pieces of observation that are becoming available. In fact, despite the possible impression that development in this field is somewhat slow, the rate of new discoveries in the last decade or so has been impressive, and mainly driven by beautiful pieces of observation. At the same time scientists in this field have been able to propose new, fascinating ways to investigate particle acceleration inside the sources, making use of multifrequency observations that range from the radio, to the optical, to X-rays and gamma rays. These ideas can now be confronted with data. I will mostly focus on supernova remnants as the most plausible sources of Galactic cosmic rays, and I will review the main aspects of the modern theory of diffusive particle acceleration at supernova remnant shocks, with special attention to the dynamical reaction of accelerated particles on the shock and the phenomenon of magnetic field amplification at the shock. Cosmic-ray escape from the sources is discussed as a necessary step to determine the spectrum of cosmic rays at the Earth. The discussion of these theoretical ideas will always proceed parallel to an account of the data being collected, especially in X-ray and gamma-ray astronomy. At the end of this review I will also discuss the phenomenon of cosmic-ray acceleration at shocks propagating in partially ionized media and the implications of this phenomenon in terms of the width of the Balmer line emission.
14. Early developments: Particle physics aspects of cosmic rays
Grupen, Claus
2014-01-01
Cosmic rays are the birthplace of elementary particle physics. The 1936 Nobel prize was shared between Victor Hess and Carl Anderson. Anderson discovered the positron in a cloud chamber. The positron had been predicted by Dirac several years earlier. In subsequent cloud chamber investigations Anderson and Neddermeyer saw the muon, which for some time was considered to be a candidate for the Yukawa particle responsible for nuclear binding. Measurements with nuclear emulsions by Lattes, Powell, Occhialini and Muirhead clarified the situation by the discovery of the charged pions in cosmic rays. The cloud chamber continued to be a powerful instrument in cosmic ray studies. Rochester and Butler found V's, which turned out to be short-lived neutral kaons decaying into a pair of charged pions. Λ's, Σ's, and Ξ's were also found in cosmic rays. But after that, accelerators and storage rings took over. The unexpected renaissance of cosmic rays started with the search for solar neutrinos and the observation of the supernova 1987A. Cosmic ray neutrino results were best explained by the assumption of neutrino oscillations, opening a view beyond the standard model of elementary particles. After 100 years of cosmic ray research we are again at the beginning of a new era, and cosmic rays may contribute to solving the many open questions, like dark matter and dark energy, by providing energies well beyond those of accelerators.
15. The cosmic multi-messenger background field
Hartmann, Dieter
2016-04-01
The cosmic star formation history associated with baryon flows within the large scale structure of the expanding Universe has many important consequences, such as cosmic chemical and galaxy evolution. Stars and accreting compact objects subsequently produce light, from the radio band to the highest photon energies, and dust within galaxies reprocesses a significant fraction of this light into the IR region. The Universe creates a radiation background that adds to the relic field from the big bang, the CMB. In addition, cosmic rays are created on various scales and interact with this diffuse radiation field, and neutrinos are added as well. A multi-messenger field is created whose evolution with redshift contains a tremendous amount of cosmological information. We discuss several aspects of this story, emphasizing the background in the HE regime and the neutrino sector, and discuss the use of gamma-ray sources as probes.
16. Consistency relation for cosmic magnetic fields
Jain, Rajeev Kumar; Sloth, Martin S.
2012-12-01
If cosmic magnetic fields are indeed produced during inflation, they are likely to be correlated with the scalar metric perturbations that are responsible for the cosmic microwave background anisotropies and large scale structure. Within an archetypical model of inflationary magnetogenesis, we show that there exists a new simple consistency relation for the non-Gaussian cross correlation function of the scalar metric perturbation with two powers of the magnetic field in the squeezed limit where the momentum of the metric perturbation vanishes. We emphasize that such a consistency relation turns out to be extremely useful to test some recent calculations in the literature. Apart from primordial non-Gaussianity induced by the curvature perturbations, such a cross correlation might provide a new observational probe of inflation and can in principle reveal the primordial nature of cosmic magnetic fields.
17. CORONAS-F observation of HXR and gamma-ray emissions from the solar flare X10 on 29 October 2003 as a probe of accelerated proton spectrum
Kurt, V. G.; Yushkov, B. Yu.; Kudela, K.; Galkin, V. I.; Kashapova, L. K.
2015-04-01
HXR and gamma-ray emissions in the 0.04-150 MeV energy range associated with the solar flare on 29 October 2003 (X10/3B) were observed at 20:38-20:58 UT by the SONG instrument aboard the CORONAS-F mission. We restored consecutive flare gamma-emission spectra from SONG and RHESSI data and found a good agreement of these spectra in the 0.1-10 MeV energy range. Two phases were identified which showed major changes in the spectral shape of the flare emission: 20:38:00-20:44:20 UT and 20:44:20-20:58:00 UT. During the second phase the efficiency of proton acceleration increased considerably relative to the efficiency of acceleration of high energy electrons. The pion-decay component of the flare gamma-emission was detected with statistical significance only during the second phase, from 20:47:40 UT. The power-law spectral index of the accelerated protons was estimated from the ratio between the intensities of the pion-decay and gamma-line components. The hardest spectrum (power-law index S=3.7) occurred at 20:48-20:51 UT, when the intensity of the pion-decay emission was maximal. Our subdivision of the flare into two phases is consistent with sharp changes in the structure of the flare found by Ji et al. (2008) and Liu et al. (2009). This flare was accompanied by GLE 66. The time profile of the pion-decay gamma-emission was compared with the GLE onset time. It was shown that both the protons interacting at the Sun and the particles responsible for the GLE onset could belong to the same population of accelerated particles.
18. Revealing the Acceleration and Propagation of SEPs with the Unprecedented and Coordinated Near-Sun Observations from Solar Probe and Solar Orbiter
Schwadron, N. A.; Christian, E. R.; Gorby, M. J.; McComas, D. J.
2013-05-01
Solar Energetic Particles (SEPs) are likely accelerated at the Sun and in the interplanetary medium through a host of complex physical processes involving magnetic reconnection, shock acceleration, and stochastic acceleration through wave-particle interactions. The complex timing of SEPs, SEP composition with enhancements of heavy ions and He3, and the broad longitudinal distributions of SEPs indicate that no single physical mechanism can explain all properties of SEPs. This poses an enormous challenge to heliophysics: unraveling the complex interplay between the physical processes that give rise to SEP events. Given the significant hazards posed by SEPs, it is essential that we develop an appropriate physical understanding that accounts for the interplay between the processes controlling these events. Understanding the timing of SEPs, their sources, and their spatial distribution will require remarkable coordination between in situ and remote observations by SPP and Solar Orbiter. Here, we provide an overview of the key scientific questions, the planning of observations, and the potential utilization of ground-based assets that will optimize the data returned from joint observations by both missions.
19. Constraints on cosmic superstrings from Kaluza-Klein emission.
PubMed
Dufaux, Jean-François
2012-07-01
Cosmic superstrings interact generically with a tower of light and/or strongly coupled Kaluza-Klein (KK) modes associated with the geometry of the internal space. We study the production of KK particles by cosmic superstring loops, and show that it is constrained by big bang nucleosynthesis. We study the resulting constraints in the parameter space of the underlying string theory model and highlight their complementarity with the regions that can be probed by current and upcoming gravitational wave experiments. PMID:23031097
20. A Detector for Cosmic Microwave Background Polarimetry
NASA Technical Reports Server (NTRS)
Wollack, E.; Cao, N.; Chuss, D.; Hsieh, W.-T.; Moseley, S. Harvey; Stevenson, T.; U-yen, K.
2008-01-01
We present preliminary design and development work on polarized detectors intended to enable Cosmic Microwave Background polarization measurements that will probe the first moments of the universe. The ultimate measurement will be challenging, requiring background-limited detectors and good control of systematic errors. Toward this end, we are integrating the beam control of HE-11 feedhorns with the sensitivity of transition-edge sensors. The coupling between these two devices is achieved via waveguide probe antennas and superconducting microstrip lines. This implementation allows band-pass filters to be incorporated on the detector chip. We believe that a large collection of single-mode polarized detectors will eventually be required for the reliable detection of the weak polarized signature that is expected to result from gravitational waves produced by cosmic inflation. This focal plane prototype is an important step along the path to this detection, resulting in a capability that will enable various future high performance instrument concepts.
1. Cosmic (Super)String Constraints from 21 cm Radiation
SciTech Connect
Khatri, Rishi; Wandelt, Benjamin D.
2008-03-07
We calculate the contribution of cosmic strings arising from a phase transition in the early Universe, or cosmic superstrings arising from brane inflation, to the cosmic 21 cm power spectrum at redshifts z ≥ 30. Future experiments can exploit this effect to constrain the cosmic string tension Gμ and probe virtually the entire brane inflation model space allowed by current observations. Although current experiments with a collecting area of ~1 km{sup 2} will not provide any useful constraints, future experiments with a collecting area of 10{sup 4}-10{sup 6} km{sup 2} covering the cleanest 10% of the sky can, in principle, constrain cosmic strings with tension Gμ ≳ 10{sup -10}-10{sup -12} (superstring/phase transition mass scale >10{sup 13} GeV).
2. Cosmic (Super)String Constraints from 21 cm Radiation.
PubMed
Khatri, Rishi; Wandelt, Benjamin D
2008-03-01
We calculate the contribution of cosmic strings arising from a phase transition in the early Universe, or cosmic superstrings arising from brane inflation, to the cosmic 21 cm power spectrum at redshifts z ≥ 30. Future experiments can exploit this effect to constrain the cosmic string tension Gμ and probe virtually the entire brane inflation model space allowed by current observations. Although current experiments with a collecting area of approximately 1 km^2 will not provide any useful constraints, future experiments with a collecting area of 10^4-10^6 km^2 covering the cleanest 10% of the sky can, in principle, constrain cosmic strings with tension Gμ ≳ 10^-10-10^-12 (superstring/phase transition mass scale >10^13 GeV). PMID:18352691
3. Anomalous isotopic composition of cosmic rays
SciTech Connect
Woosley, S.E.; Weaver, T.A.
1980-06-20
Recent measurements of nonsolar isotopic patterns for the elements neon and (perhaps) magnesium in cosmic rays are interpreted within current models of stellar nucleosynthesis. One possible explanation is that the stars currently responsible for cosmic-ray synthesis in the Galaxy are typically super-metal-rich by a factor of two to three. Other possibilities include the selective acceleration of certain zones or masses of supernovas or the enhancement of {sup 22}Ne in the interstellar medium by mass loss from red giant stars and planetary nebulas. Measurements of critical isotopic ratios are suggested to aid in distinguishing among the various possibilities. Some of these explanations place significant constraints on the fraction of cosmic ray nuclei that must be fresh supernova debris and the masses of the supernovas involved.
4. The structure of cosmic ray shocks
Axford, W. I.; Leer, E.; McKenzie, J. F.
1982-07-01
The acceleration of cosmic rays by steady shock waves has been discussed in brief reports by Leer et al. (1976) and Axford et al. (1977). This paper presents a more extended version of this work. The energy transfer and the structure of the shock wave are discussed in detail, and it is shown that even for moderately strong shock waves most of the upstream energy flux in the background gas is transferred to the cosmic rays. This also holds when the upstream cosmic ray pressure is very small. For an intermediate Mach-number regime the overall shock structure is shown to consist of a smooth transition followed by a gas shock (cf. Drury and Voelk, 1980).
5. Microphysics of Cosmic Ray Driven Plasma Instabilities
Bykov, A. M.; Brandenburg, A.; Malkov, M. A.; Osipov, S. M.
2013-10-01
Energetic nonthermal particles (cosmic rays, CRs) are accelerated in supernova remnants, relativistic jets and other astrophysical objects. The CR energy density is typically comparable with that of the thermal components and magnetic fields. In this review we discuss mechanisms of magnetic field amplification due to instabilities induced by CRs. We derive CR kinetic and magnetohydrodynamic equations that govern cosmic plasma systems comprising the thermal background plasma, comic rays and fluctuating magnetic fields to study CR-driven instabilities. Both resonant and non-resonant instabilities are reviewed, including the Bell short-wavelength instability, and the firehose instability. Special attention is paid to the longwavelength instabilities driven by the CR current and pressure gradient. The helicity production by the CR current-driven instabilities is discussed in connection with the dynamo mechanisms of cosmic magnetic field amplification.
7. The challenge of turbulent acceleration of relativistic particles in the intra-cluster medium
Brunetti, Gianfranco
2016-01-01
Acceleration of cosmic-ray electrons (CRe) in the intra-cluster medium (ICM) is probed by radio observations that detect diffuse, megaparsec-scale, synchrotron sources in a fraction of galaxy clusters. Giant radio halos are the most spectacular manifestations of non-thermal activity in the ICM and are currently explained assuming that turbulence, driven during massive cluster–cluster mergers, reaccelerates CRe at several giga-electron volts. This scenario implies a hierarchy of complex mechanisms in the ICM that drain energy from large scales into electromagnetic fluctuations in the plasma and collisionless mechanisms of particle acceleration at much smaller scales. In this paper we focus on the physics of acceleration by compressible turbulence. The spectrum and damping mechanisms of the electromagnetic fluctuations, and the mean free path (mfp) of CRe, are the most relevant ingredients that determine the efficiency of acceleration. These ingredients in the ICM are, however, poorly known, and we show that calculations of turbulent acceleration are also sensitive to these uncertainties. On the other hand this fact implies that the non-thermal properties of galaxy clusters probe the complex microphysics and the weakly collisional nature of the ICM.
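For compressible turbulence the efficiency question reduces largely to the systematic acceleration time τ_acc ≈ p²/(4 D_pp), and the quasilinear relation D_pp D_xx ≈ p² v_t²/9 with D_xx = λc/3 ties it to the CRe mean free path λ. A schematic scaling with illustrative ICM numbers (not values derived in the paper):

```python
# Schematic second-order Fermi reacceleration time:
# tau_acc = p^2 / (4 * D_pp) = 3 * mfp * c / (4 * v_t^2),
# from D_pp * D_xx ~ p^2 v_t^2 / 9 and D_xx = mfp * c / 3.
c = 3.0e10                              # cm/s
v_turb = 5.0e7                          # cm/s (~500 km/s, assumed)
kpc = 3.086e21                          # cm
for mfp_kpc in (0.1, 1.0, 10.0):        # effective CRe mean free path (assumed)
    tau_acc = 3.0 * mfp_kpc * kpc * c / (4.0 * v_turb**2)
    print(f"mfp = {mfp_kpc:5.1f} kpc -> tau_acc ~ {tau_acc / 3.15e7:.1e} yr")
# The order-of-magnitude swings with the poorly known mean free path are the
# microphysical uncertainty emphasized above.
```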
8. Probe assembly
SciTech Connect
Avera, C.J.
1981-01-06
A hand-held probe assembly, suitable for monitoring a radioactive fibrinogen tracer, is disclosed comprising a substantially cylindrically shaped probe handle having an open end. The probe handle is adapted to be interconnected with electrical circuitry for monitoring radioactivity that is sensed or detected by the probe assembly. Mounted within the probe handle is a probe body assembly that includes a cylindrically shaped probe body inserted through the open end of the probe handle. The probe body includes a photomultiplier tube that is electrically connected with a male connector positioned at the rearward end of the probe body. Mounted at the opposite end of the probe body is a probe head which supports an optical coupler therewithin. The probe head is interconnected with a probe cap which supports a detecting crystal. The probe body assembly, which consists of the probe body, the probe head, and the probe cap, is supported within the probe handle by means of a pair of compressible o-rings which permit the probe assembly to be freely rotatable, preferably through 360°, within the probe handle and removable therefrom without requiring any disassembly.
9. Constraining sources of ultra high energy cosmic rays using high energy observations with the Fermi satellite
SciTech Connect
Pe'er, Asaf; Loeb, Abraham
2012-03-01
We analyze the conditions that enable acceleration of particles to ultra-high energies, ~10{sup 20} eV (UHECRs). We show that broad band photon data recently provided by the WMAP, ISOCAM, Swift and Fermi satellites yield constraints on the ability of active galactic nuclei (AGN) to produce UHECRs. The high energy (MeV-GeV) photons are produced by Compton scattering of the emitted low energy photons and the cosmic microwave background or extra-galactic background light. The ratio of the luminosities at high and low photon energies can therefore be used as a probe of the physical conditions in the acceleration site. We find that existing data exclude the core regions of nearby radio-loud AGN as possible acceleration sites of UHECR protons. However, we show that giant radio lobes are not excluded. We apply our method to Cen A, and show that acceleration of protons to ~10{sup 20} eV can only occur at distances ≳100 kpc from the core.
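The underlying confinement requirement is the Hillas criterion: the gyroradius must fit inside the source, E_max ~ Z e B R β, or E_max[eV] ≈ 300 Z B[G] R[cm] β. A quick check with lobe-like and hotspot-like parameters (values assumed for illustration):

```python
# Hillas confinement criterion: E_max ~ Z * e * B * R * beta, i.e.
# E_max [eV] ~ 3e2 * Z * B[G] * R[cm] * beta.  Parameter values are assumed.
kpc = 3.086e21   # cm

def e_max_ev(z, b_gauss, r_cm, beta=1.0):
    return 300.0 * z * b_gauss * r_cm * beta

print(f"lobe    (1 uG, 100 kpc): {e_max_ev(1, 1e-6, 100 * kpc):.1e} eV")
print(f"hotspot (100 uG, 1 kpc): {e_max_ev(1, 1e-4, 1 * kpc):.1e} eV")
# Both reach ~1e20 eV for protons, consistent with acceleration surviving at
# >~100 kpc from the core, where the radiative constraints above are weak.
```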
10. Cosmic Superstrings Revisited
SciTech Connect
Polchinski, Joseph
2004-12-10
It is possible that superstrings, as well as other one-dimensional branes, could have been produced in the early universe and then expanded to cosmic size today. I discuss the conditions under which this will occur, and the signatures of these strings. Such cosmic superstrings could be the brightest objects visible in gravitational wave astronomy, and might be distinguishable from gauge theory cosmic strings by their network properties.
11. Cosmic ray gradients in the outer heliosphere
NASA Technical Reports Server (NTRS)
Fillius, W.; Wake, B.; Ip, W.-H.; Axford, I.
1983-01-01
Launched in 1972 and 1973 respectively, the Pioneer 10 and 11 spacecraft are now probing the outer heliosphere on their final escape from the sun. The data in this paper extend for almost an entire solar cycle from launch to early 1983, when Pioneer 10 was at a heliocentric distance of 29 AU and Pioneer 11, 13 AU. The UCSD instruments on board were used to study the gradient, and to look at the time and spatial variations of the cosmic ray intensities.
12. The cosmic neutrino background
NASA Technical Reports Server (NTRS)
Dar, Arnon
1991-01-01
The cosmic neutrino background is expected to consist of relic neutrinos from the big bang, of neutrinos produced during nuclear burning in stars, of neutrinos released by gravitational stellar collapse, and of neutrinos produced by cosmic ray interactions with matter and radiation in the interstellar and intergalactic medium. Formation of baryonic dark matter in the early universe, matter-antimatter annihilation in a baryonic symmetric universe, and dark matter annihilation could have also contributed significantly to the cosmic neutrino background. The purpose of this paper is to review the properties of these cosmic neutrino backgrounds, the indirect evidence for their existence, and the prospects for their detection.
13. The Cosmic Background Explorer.
ERIC Educational Resources Information Center
Gulkis, Samuel; And Others
1990-01-01
Outlines the Cosmic Background Explorer (COBE) mission to measure celestial radiation. Describes the instruments used and experiments involving differential microwave radiometers, and a far infrared absolute spectrophotometer. (YP)
14. The COsmic-ray Soil Moisture Interaction Code (COSMIC) for use in data assimilation
Shuttleworth, J.; Rosolem, R.; Zreda, M.; Franz, T.
2013-08-01
Soil moisture status in land surface models (LSMs) can be updated by assimilating cosmic-ray neutron intensity measured in air above the surface. This requires a fast and accurate model to calculate the neutron intensity from the profiles of soil moisture modeled by the LSM. The existing Monte Carlo N-Particle eXtended (MCNPX) model is sufficiently accurate but too slow to be practical in the context of data assimilation. Consequently an alternative and efficient model is needed which can be calibrated accurately to reproduce the calculations made by MCNPX and used to substitute for MCNPX during data assimilation. This paper describes the construction and calibration of such a model, COsmic-ray Soil Moisture Interaction Code (COSMIC), which is simple, physically based and analytic, and which, because it runs at least 50 000 times faster than MCNPX, is appropriate in data assimilation applications. The model includes simple descriptions of (a) degradation of the incoming high-energy neutron flux with soil depth, (b) creation of fast neutrons at each depth in the soil, and (c) scattering of the resulting fast neutrons before they reach the soil surface, all of which processes may have parameterized dependency on the chemistry and moisture content of the soil. The site-to-site variability in the parameters used in COSMIC is explored for 42 sample sites in the COsmic-ray Soil Moisture Observing System (COSMOS), and the comparative performance of COSMIC relative to MCNPX when applied to represent interactions between cosmic-ray neutrons and moist soil is explored. At an example site in Arizona, fast-neutron counts calculated by COSMIC from the average soil moisture profile given by an independent network of point measurements in the COSMOS probe footprint are similar to the fast-neutron intensity measured by the COSMOS probe. It was demonstrated that, when used within a data assimilation framework to assimilate COSMOS probe counts into the Noah land surface model at the
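The three ingredients listed above translate into a compact forward operator: attenuate the incoming high-energy flux down through the soil, generate fast neutrons in each layer, and attenuate them back up to the sensor. The sketch below mimics only that structure; the attenuation lengths and the calibrated, chemistry-dependent parameterization of the real COSMIC code are replaced by illustrative values.

```python
# Minimal COSMIC-style forward operator (structure only, not the calibrated
# model): layered soil, downward attenuation, per-layer production, upward
# escape.  theta is volumetric soil moisture.
import numpy as np

def neutron_counts(theta, dz_m=0.01, depth_m=3.0,
                   rho=1.4, l_high=161.0, l_fast=129.0):
    z = np.arange(0.0, depth_m, dz_m)
    theta = np.broadcast_to(theta, z.shape)
    layer_total = (rho + theta) * dz_m * 100.0   # soil + water mass, g/cm^2
    layer_soil = rho * dz_m * 100.0              # dry-soil mass per layer
    depth_mass = np.cumsum(layer_total) - 0.5 * layer_total
    down = np.exp(-depth_mass / l_high)          # incoming flux at each layer
    up = np.exp(-depth_mass / l_fast)            # upward escape probability
    return float(np.sum(down * up * layer_soil))

for sm in (0.05, 0.20, 0.40):
    print(f"theta = {sm:.2f} -> relative counts = {neutron_counts(sm):6.2f}")
# Counts fall as moisture rises: added water mass attenuates both the incoming
# flux and the escaping fast neutrons, which is the assimilated signal.
```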
15. Primary cosmic ray positrons and galactic annihilation radiation
NASA Technical Reports Server (NTRS)
Lingenfelter, R. E.; Ramaty, R.
1980-01-01
The observation (Leventhal et al., 1978) of positron annihilation radiation at 0.511 MeV from the direction of the Galactic Center is reexamined, suggesting the possibility of a primary positron component of the cosmic rays. The observed 0.511 MeV emission requires a positron production rate nearly two orders of magnitude greater than the production rate of secondary cosmic ray positrons from pion decay produced in cosmic ray interactions. Possible sources of positrons are reviewed, with both supernovae and pulsars appearing to be the more likely candidates. If only about 1% of these positrons were accelerated along with the cosmic ray nucleons and electrons to energies not less than 100 MeV, these primary positrons would be comparable in intensity to the secondary positrons resulting from pion decay. Some observational evidence for the existence of primary positrons in the cosmic rays is also discussed.
16. Distinguishing between void models and dark energy with cosmic parallax and redshift drift
SciTech Connect
Quartin, Miguel; Amendola, Luca
2010-02-15
Two recently proposed techniques, involving the measurement of the cosmic parallax and redshift drift, provide novel ways of directly probing (over a time span of several years) the background metric of the universe and therefore shed light on the dark-energy conundrum. The former makes use of upcoming high-precision astrometry measurements to either observe or put tight constraints on cosmological anisotropy for off-center observers, while the latter employs high-precision spectroscopy to give an independent test of the present acceleration of the universe. In this paper, we show that both methods can break the degeneracy between Lemaitre-Tolman-Bondi void models and more traditional dark-energy theories. Using the near-future observational missions Gaia and CODEX we show that this distinction might be made with high confidence levels in the course of a decade.
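The redshift drift side of this comparison has a compact closed form in any FRW background: dz/dt0 = (1+z)H0 − H(z). Evaluating it for flat ΛCDM (parameters assumed for illustration) shows the few-cm/s-per-decade velocity shifts that CODEX-class spectroscopy must resolve:

```python
# Redshift drift (Sandage-Loeb signal): dz/dt0 = (1 + z) * H0 - H(z),
# shown as an apparent velocity shift over a 10-year baseline.
import math

h0 = 70.0 * 1.0e3 / 3.086e22     # H0 = 70 km/s/Mpc in s^-1 (assumed)
om = 0.3                         # flat LCDM matter density (assumed)

def hubble(z):
    return h0 * math.sqrt(om * (1 + z) ** 3 + (1 - om))

c = 2.998e8                      # m/s
baseline = 10 * 3.156e7          # 10 yr in seconds
for z in (1.0, 2.0, 4.0):
    dzdt = (1 + z) * h0 - hubble(z)
    dv = c * dzdt * baseline / (1 + z)
    print(f"z = {z}: dv ~ {100 * dv:+.2f} cm/s per decade")
# In void models without dark energy the drift is expected to stay negative
# at low z, which is the discriminating signature exploited in the paper.
```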
17. Maria Montessori's Cosmic Vision, Cosmic Plan, and Cosmic Education
ERIC Educational Resources Information Center
Grazzini, Camillo
2013-01-01
This classic position of the breadth of Cosmic Education begins with a way of seeing the human's interaction with the world, continues on to the grandeur in scale of time and space of that vision, then brings the interdependency of life where each growing human becomes a participating adult. Mr. Grazzini confronts the laws of human nature in…
18. Report of the cosmic and heliospheric panel
NASA Technical Reports Server (NTRS)
Mewaldt, Richard A.; Mason, Glenn M.; Barnes, Aaron; Binns, W. Robert; Burlaga, Leonard F.; Cherry, Michael L.; Holzer, Thomas E.; Jokipii, J. R.; Jones, Vernon; Ling, James C.
1991-01-01
The Cosmic and Heliospheric Branch proposes a bold new program for the years 1995 to 2010 that is centered on the following two themes: (1) the global heliosphere and interstellar space; and (2) cosmic particle acceleration and the evolution of matter. Within these major themes are more specific goals that have been studied and continue to be examined for a better understanding of their processes. These include: origin, structure, and evolution of the solar wind; interaction of the heliosphere, the solar wind, and the interstellar medium; fundamental microscopic and macroscopic plasma processes; acceleration and transport of energetic particles; and the origin and evolution of matter. Finally, the report summarizes a wide variety of proposed small and large space missions.
19. Cosmic expansion in extended quasidilaton massive gravity
Kahniashvili, Tina; Kar, Arjun; Lavrelashvili, George; Agarwal, Nishant; Heisenberg, Lavinia; Kosowsky, Arthur
2015-02-01
Quasidilaton massive gravity offers a physically well-defined gravitational theory with nonzero graviton mass. We present the full set of dynamical equations governing the expansion history of the Universe, valid during radiation domination, matter domination, and a late-time self-accelerating epoch related to the graviton mass. The existence of self-consistent solutions constrains the amplitude of the quasidilaton field and the graviton mass, as well as other model parameters. We point out that the effective mass of gravitational waves can be significantly larger than the graviton mass, opening the possibility that a single theory can explain both the late-time acceleration of cosmic expansion and modifications of structure growth leading to the suppression of large-angle correlations observed in the cosmic microwave background.
20. Accelerators for Intensity Frontier Research
SciTech Connect
Derwent, Paul; /Fermilab
2012-05-11
In 2008, the Particle Physics Project Prioritization Panel identified three frontiers for research in high energy physics, the Energy Frontier, the Intensity Frontier, and the Cosmic Frontier. In this paper, I will describe how Fermilab is configuring and upgrading the accelerator complex, prior to the development of Project X, in support of the Intensity Frontier.
1. Particle Acceleration in Relativistic Outflows
NASA Technical Reports Server (NTRS)
Bykov, Andrei; Gehrels, Neil; Krawczynski, Henric; Lemoine, Martin; Pelletier, Guy; Pohl, Martin
2012-01-01
In this review we confront the current theoretical understanding of particle acceleration at relativistic outflows with recent observational results on various source classes thought to involve such outflows, e.g. gamma-ray bursts, active galactic nuclei, and pulsar wind nebulae. We highlight the possible contributions of these sources to ultra-high-energy cosmic rays.
2. Cosmic Concordance and Quintessence
Wang, Limin; Caldwell, R. R.; Ostriker, J. P.; Steinhardt, Paul J.
2000-02-01
We present a comprehensive study of the observational constraints on spatially flat cosmological models containing a mixture of matter and quintessence-a time-varying, spatially inhomogeneous component of the energy density of the universe with negative pressure. Our study also includes the limiting case of a cosmological constant. We classify the observational constraints by redshift: low-redshift constraints include the Hubble parameter, baryon fraction, cluster abundance, the age of the universe, bulk velocity and the shape of the mass power spectrum; intermediate-redshift constraints are due to probes of the redshift-luminosity distance based on Type Ia supernovae, gravitational lensing, the Lyα forest, and the evolution of large-scale structure; high-redshift constraints are based on measurements of the cosmic microwave background temperature anisotropy. Mindful of systematic errors, we adopt a conservative approach in applying these observational constraints. We determine that the range of quintessence models in which the ratio of the matter density to the critical density is 0.2<~Ωm<~0.5, and the effective, density-averaged equation of state is -1<=w<~-0.2, is consistent with the most reliable, current low-redshift and microwave background observations at the 2 σ level. Factoring in the constraint due to the recent measurements of Type Ia supernovae, the range for the equation of state is reduced to -1<=w<~-0.4, where this range represents models consistent with each observational constraint at the 2 σ level or better (concordance analysis). A combined maximum likelihood analysis suggests a smaller range, -1<=w<~-0.6. We find that the best-fit and best-motivated quintessence models lie near Ωm~0.33, h~0.65, and spectral index ns=1, with an effective equation of state w~-0.65 for 'tracker' quintessence and w=-1 for 'creeper' quintessence.
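The supernova constraints enter through the luminosity distance: for flat models with matter plus a constant-w component, d_L(z) = (1+z) c ∫0^z dz'/H(z'). A small numerical comparison showing how w shifts distances (cosmological parameters chosen to match the best-fit values quoted above, but still purely illustrative):

```python
# Luminosity distance in flat (matter + constant-w) models:
# d_L = (1 + z) * c * integral_0^z dz' / H(z').
import numpy as np
from scipy.integrate import quad

c_km = 2.998e5        # km/s
h0 = 65.0             # km/s/Mpc (h ~ 0.65, as in the text)
om = 0.33             # matter density (as in the text)

def d_lum(z, w):
    integrand = lambda zp: 1.0 / (h0 * np.sqrt(
        om * (1 + zp) ** 3 + (1 - om) * (1 + zp) ** (3 * (1 + w))))
    dc, _ = quad(integrand, 0.0, z)
    return (1 + z) * c_km * dc          # Mpc

for w in (-1.0, -0.65, -0.4):
    print(f"w = {w:+.2f}: d_L(z = 0.5) = {d_lum(0.5, w):7.1f} Mpc")
# Less negative w means weaker late-time acceleration, smaller distances and
# brighter supernovae at fixed z -- the lever arm behind the w bounds above.
```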
3. The low-energy interstellar spectrum of galactic electrons and implications for their re-acceleration at the heliospheric termination shock
Prinsloo, Phillip; Toit Strauss, Du; Potgieter, Marius
2016-07-01
Since the diffusive shock acceleration process of particles at any given energy is dependent on the shape of their distribution at lower energies, it becomes essential to specify the interstellar spectrum for electrons below 1 MeV to study the re-acceleration of these particles at the heliospheric termination shock. Informed by the results of both radio data surveys and galactic propagation modelling, a number of illustrative scenarios are considered for this very low-energy local interstellar spectrum. Using a cosmic-ray transport model and assuming rigidity-independent diffusion at the considered energies, the contribution of re-accelerated electrons to intensity levels is probed for each of the aforementioned scenarios. The magnitudes of the resultant intensity increases are concluded to be highly dependent on the spectral shape specified for interstellar spectra at these very low energies, with the softer distributions predictably yielding greater re-acceleration effects.
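The sensitivity to the low-energy shape enters through the standard test-particle solution for a shock acting on a pre-existing spectrum f0(p): f(p) = q p^{-q} ∫0^p f0(p') p'^{q-1} dp', so the transmitted intensity at any momentum integrates over everything below it. A minimal numerical illustration with two assumed interstellar low-energy shapes (purely schematic; not the spectra considered in the paper):

```python
# Test-particle reacceleration of a pre-existing spectrum at a shock with
# index q = 3r/(r-1):  f(p) = q * p**-q * integral_0^p f0(p') * p'**(q-1) dp'.
import numpy as np

q = 4.0                                  # strong shock (r = 4)
p = np.geomspace(1e-3, 1.0, 200)         # momentum, arbitrary units

def reaccelerated(f0):
    integrand = f0(p) * p ** (q - 1.0)
    cum = np.concatenate(([0.0], np.cumsum(
        0.5 * (integrand[1:] + integrand[:-1]) * np.diff(p))))
    return q * p ** -q * cum

soft = lambda pp: pp ** -4.5             # soft low-energy shape (assumed)
hard = lambda pp: pp ** -3.0             # hard low-energy shape (assumed)
for name, f0 in (("soft", soft), ("hard", hard)):
    print(f"{name} input: reaccelerated f(p=1) = {reaccelerated(f0)[-1]:.2e}")
# When f0 is steeper than p^-q the integral is dominated by the lowest
# momenta, so the answer hinges on the assumed very low-energy spectrum.
```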
4. Heliospheric Energetic Particles and Galactic Cosmic Ray Modulation
Malandraki, Olga
2015-08-01
The paper presents an overview of the SH ‘Solar and Heliospheric cosmic rays’ session of the 24th European Cosmic Ray Symposium (ECRS), Kiel, Germany, 2014. It covers the topics of Solar Energetic Particle (SEP) origin, acceleration and transport at the Sun and in the interplanetary medium, also from the aspect of multi-spacecraft observations, as well as the Galactic Cosmic Ray (GCR) short- and long-term variations and the Jovian electron variations in the heliosphere. Relevant instruments and methods presented are also covered by this review. The paper is written from a personal perspective, emphasizing those results that the author found most interesting.
5. Global modulation of cosmic rays in the heliosphere
Potgieter, Marius
2016-07-01
It is possible, now for the first time, to describe the total, global modulation of cosmic rays in the heliosphere using Voyager observations from the Earth to the heliopause and from the PAMELA space mission at the Earth, in comparison with comprehensive numerical models. The very local interstellar spectra for several cosmic ray species have become much better known so that together with knowledge of where the heliopause is located, comprehensive modelling has taken a huge step forward. New and exciting observations, with ample challenges to theoretical and modelling approaches to the acceleration, transport and modulation of cosmic rays in the heliosphere will be reviewed in this presentation.
6. Time Dependence of the Electron and Positron Components of the Cosmic Radiation Measured by the PAMELA Experiment between July 2006 and December 2015
Adriani, O.; Barbarino, G. C.; Bazilevskaya, G. A.; Bellotti, R.; Boezio, M.; Bogomolov, E. A.; Bongi, M.; Bonvicini, V.; Bottai, S.; Bruno, A.; Cafagna, F.; Campana, D.; Carlson, P.; Casolino, M.; Castellini, G.; De Santis, C.; Di Felice, V.; Galper, A. M.; Karelin, A. V.; Koldashov, S. V.; Koldobskiy, S. A.; Krutkov, S. Y.; Kvashnin, A. N.; Leonov, A.; Malakhov, V.; Marcelli, L.; Martucci, M.; Mayorov, A. G.; Menn, W.; Mergé, M.; Mikhailov, V. V.; Mocchiutti, E.; Monaco, A.; Mori, N.; Munini, R.; Osteria, G.; Panico, B.; Papini, P.; Pearce, M.; Picozza, P.; Ricci, M.; Ricciarini, S. B.; Simon, M.; Sparvoli, R.; Spillantini, P.; Stozhkov, Y. I.; Vacchi, A.; Vannuccini, E.; Vasilyev, G. I.; Voronov, S. A.; Yurkin, Y. T.; Zampa, G.; Zampa, N.; Potgieter, M. S.; Vos, E. E.
2016-06-01
Cosmic-ray electrons and positrons are a unique probe of the propagation of cosmic rays as well as of the nature and distribution of particle sources in our Galaxy. Recent measurements of these particles are challenging our basic understanding of the mechanisms of production, acceleration, and propagation of cosmic rays. Particularly striking are the differences between the low energy results collected by the space-borne PAMELA and AMS-02 experiments and older measurements pointing to sign-charge dependence of the solar modulation of cosmic-ray spectra. The PAMELA experiment has been measuring the time variation of the positron and electron intensity at Earth from July 2006 to December 2015 covering the period for the minimum of solar cycle 23 (2006-2009) until the middle of the maximum of solar cycle 24, through the polarity reversal of the heliospheric magnetic field which took place between 2013 and 2014. The positron to electron ratio measured in this time period clearly shows a sign-charge dependence of the solar modulation introduced by particle drifts. These results provide the first clear and continuous observation of how drift effects on solar modulation have unfolded with time from solar minimum to solar maximum and their dependence on the particle rigidity and the cyclic polarity of the solar magnetic field.
7. Time Dependence of the Electron and Positron Components of the Cosmic Radiation Measured by the PAMELA Experiment between July 2006 and December 2015.
PubMed
Adriani, O; Barbarino, G C; Bazilevskaya, G A; Bellotti, R; Boezio, M; Bogomolov, E A; Bongi, M; Bonvicini, V; Bottai, S; Bruno, A; Cafagna, F; Campana, D; Carlson, P; Casolino, M; Castellini, G; De Santis, C; Di Felice, V; Galper, A M; Karelin, A V; Koldashov, S V; Koldobskiy, S A; Krutkov, S Y; Kvashnin, A N; Leonov, A; Malakhov, V; Marcelli, L; Martucci, M; Mayorov, A G; Menn, W; Mergé, M; Mikhailov, V V; Mocchiutti, E; Monaco, A; Mori, N; Munini, R; Osteria, G; Panico, B; Papini, P; Pearce, M; Picozza, P; Ricci, M; Ricciarini, S B; Simon, M; Sparvoli, R; Spillantini, P; Stozhkov, Y I; Vacchi, A; Vannuccini, E; Vasilyev, G I; Voronov, S A; Yurkin, Y T; Zampa, G; Zampa, N; Potgieter, M S; Vos, E E
2016-06-17
Cosmic-ray electrons and positrons are a unique probe of the propagation of cosmic rays as well as of the nature and distribution of particle sources in our Galaxy. Recent measurements of these particles are challenging our basic understanding of the mechanisms of production, acceleration, and propagation of cosmic rays. Particularly striking are the differences between the low energy results collected by the space-borne PAMELA and AMS-02 experiments and older measurements pointing to sign-charge dependence of the solar modulation of cosmic-ray spectra. The PAMELA experiment has been measuring the time variation of the positron and electron intensity at Earth from July 2006 to December 2015 covering the period for the minimum of solar cycle 23 (2006-2009) until the middle of the maximum of solar cycle 24, through the polarity reversal of the heliospheric magnetic field which took place between 2013 and 2014. The positron to electron ratio measured in this time period clearly shows a sign-charge dependence of the solar modulation introduced by particle drifts. These results provide the first clear and continuous observation of how drift effects on solar modulation have unfolded with time from solar minimum to solar maximum and their dependence on the particle rigidity and the cyclic polarity of the solar magnetic field. PMID:27367381
8. Cosmic ray isotopes
NASA Technical Reports Server (NTRS)
Stone, E. C.
1973-01-01
The isotopic composition of cosmic rays is studied in order to develop the relationship between cosmic rays and stellar processes. Cross section and model calculations are reported on isotopes of H, He, Be, Al and Fe. Satellite instrument measuring techniques separate only the isotopes of the lighter elements.
9. Interactions of cosmic superstrings
SciTech Connect
Jackson, Mark G.; /Fermilab
2007-06-01
We develop methods by which cosmic superstring interactions can be studied in detail. These include the reconnection probability and emission of radiation such as gravitons or small string loops. Loop corrections to these are discussed, as well as relationships to (p; q)-strings. These tools should allow a phenomenological study of string models in anticipation of upcoming experiments sensitive to cosmic string radiation.
10. Deepening Cosmic Education
ERIC Educational Resources Information Center
Leonard, Gerard
2013-01-01
This article is a special blend of research, theory, and practice, with clear insight into the origins of Cosmic Education and cosmic task, while recalling memories of student explorations in botany, in particular, episodes from Mr. Leonard's teaching. Mr. Leonard speaks of a storytelling curriculum that eloquently puts perspective into dimensions…
11. A Magnified Glance into the Dark Sector: Probing Cosmological Models with Strong Lensing in A1689
Magaña, Juan; Motta, V.; Cárdenas, Víctor H.; Verdugo, T.; Jullo, Eric
2015-11-01
In this paper we constrain four alternative models to the late cosmic acceleration in the universe: Chevallier–Polarski–Linder (CPL), interacting dark energy (IDE), Ricci holographic dark energy (HDE), and modified polytropic Cardassian (MPC). Strong lensing (SL) images of background galaxies produced by the galaxy cluster Abell 1689 are used to test these models. To perform this analysis we modify the LENSTOOL lens modeling code. The value added by this probe is compared with other complementary probes: Type Ia supernovae (SN Ia), baryon acoustic oscillations (BAO), and cosmic microwave background (CMB). We found that the CPL constraints obtained for the SL data are consistent with those estimated using the other probes. The IDE constraints are consistent with the complementary bounds only if large errors in the SL measurements are considered. The Ricci HDE and MPC constraints are weak, but they are similar to the BAO, SN Ia, and CMB estimations. We also compute the figure of merit as a tool to quantify the goodness of fit of the data. Our results suggest that the SL method provides statistically significant constraints on the CPL parameters but is weak for those of the other models. Finally, we show that the use of the SL measurements in galaxy clusters is a promising and powerful technique to constrain cosmological models. The advantage of this method is that cosmological parameters are estimated by modeling the SL features for each underlying cosmology. These estimations could be further improved by SL constraints coming from other galaxy clusters.
12. Light from cosmic strings
SciTech Connect
Steer, Daniele A.; Vachaspati, Tanmay
2011-02-15
The time-dependent metric of a cosmic string leads to an effective interaction between the string and photons (the "gravitational Aharonov-Bohm" effect) and causes cosmic strings to emit light. We evaluate the radiation of pairs of photons from cosmic strings and find that the emission from cusps, kinks and kink-kink collisions occurs with a flat spectrum at all frequencies up to the string scale. Further, cusps emit a beam of photons, kinks emit along a curve, and the emission at a kink-kink collision is in all directions. The emission of light from cosmic strings could provide an important new observational signature of cosmic strings that is within reach of current experiments for a range of string tensions.
13. Our Cosmic Insignificance
PubMed Central
Kahane, Guy
2014-01-01
The universe that surrounds us is vast, and we are so very small. When we reflect on the vastness of the universe, our humdrum cosmic location, and the inevitable future demise of humanity, our lives can seem utterly insignificant. Many philosophers assume that such worries about our significance reflect a banal metaethical confusion. They dismiss the very idea of cosmic significance. This, I argue, is a mistake. Worries about cosmic insignificance do not express metaethical worries about objectivity or nihilism, and we can make good sense of the idea of cosmic significance and its absence. It is also possible to explain why the vastness of the universe can make us feel insignificant. This impression does turn out to be mistaken, but not for the reasons typically assumed. In fact, we might be of immense cosmic significance—though we cannot, at this point, tell whether this is the case. PMID:25729095
14. Cosmic Inhomogeneities and Averaged Cosmological Dynamics
Paranjape, Aseem; Singh, T. P.
2008-10-01
If general relativity (GR) describes the expansion of the Universe, the observed cosmic acceleration implies the existence of a “dark energy.” However, while the Universe is on average homogeneous on large scales, it is inhomogeneous on smaller scales. While GR governs the dynamics of the inhomogeneous Universe, the averaged homogeneous Universe obeys modified Einstein equations. Can such modifications alone explain the acceleration? For a simple generic model with realistic initial conditions, we show the answer to be “no.” Averaging effects negligibly influence the cosmological dynamics.
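The "modified Einstein equations" obeyed by the averaged Universe are commonly written in the Buchert form; a sketch for irrotational dust, in our notation rather than the paper's:

```latex
% Averaged Raychaudhuri equation over a spatial domain D. The
% kinematical backreaction Q_D collects the inhomogeneity terms
% (fluctuations of the expansion rate theta and the shear sigma)
% that could, in principle, mimic acceleration.
3\,\frac{\ddot{a}_D}{a_D} = -4\pi G\,\langle\rho\rangle_D + Q_D,
\qquad
Q_D = \frac{2}{3}\left(\langle\theta^2\rangle_D - \langle\theta\rangle_D^2\right)
      - 2\,\langle\sigma^2\rangle_D
```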
15. Unveiling the Synchrotron Cosmic Web: Pilot Study
Brown, Shea; Rudnick, Lawrence; Pfrommer, Christoph; Jones, Thomas
2010-04-01
The overall goal of this project is to challenge our current theoretical understanding of the relativistic particle populations in the inter-galactic medium (IGM) through deep 1.4 GHz observations of 13 massive, high-redshift clusters of galaxies. Designed to complement/extend the GMRT radio halo survey (Venturi et al. 2007), these observations will attempt to detect the peaks of the purported synchrotron cosmic-web, and place serious limits on models of CR acceleration and magnetic field amplification during large-scale structure formation. The primary goals of this survey are: 1) Confirm the bi-modal nature of the radio halo population, which favors turbulent re-acceleration of cosmic-ray electrons (CRe) during cluster mergers as the source of the diffuse radio emission; 2) Directly test hadronic secondary models which predict the presence of cosmic-ray protons (CRp) in the cores of massive X-ray clusters; 3) Search in polarization for shock structures, a potential source of CR acceleration in the IGM.
16. Unveiling the Synchrotron Cosmic Web: Pilot Study
Brown, Shea; Rudnick, Lawrence; Pfrommer, Christoph; Jones, Thomas
2011-10-01
The overall goal of this project is to challenge our current theoretical understanding of the relativistic particle populations in the inter-galactic medium (IGM) through deep 1.4 GHz observations of 13 massive, high-redshift clusters of galaxies. Designed to complement/extend the GMRT radio halo survey (Venturi et al. 2007), these observations will attempt to detect the peaks of the purported synchrotron cosmic-web, and place serious limits on models of CR acceleration and magnetic field amplification during large-scale structure formation. The primary goals of this survey are: 1) Confirm the bi-modal nature of the radio halo population, which favors turbulent re-acceleration of cosmic-ray electrons (CRe) during cluster mergers as the source of the diffuse radio emission; 2) Directly test hadronic secondary models which predict the presence of cosmic-ray protons (CRp) in the cores of massive X-ray clusters; 3) Search in polarization for shock structures, a potential source of CR acceleration in the IGM.
17. Anomalous Transport of High Energy Cosmic Rays in Galactic Superbubbles
NASA Technical Reports Server (NTRS)
Barghouty, Nasser F.
2014-01-01
High-energy cosmic rays may exhibit anomalous transport as they traverse and are accelerated by a collection of supernovae explosions in a galactic superbubble. Signatures of this anomalous transport can show up in the particles' evolution and their spectra. In a continuous-time random-walk (CTRW) model assuming standard diffusive shock acceleration theory (DSA) for each shock encounter, and where the superbubble (an OB star association) is idealized as a heterogeneous region of particle sources and sinks, acceleration and transport in the superbubble can be shown to be sub-diffusive. While the sub-diffusive transport can be attributed to the stochastic nature of the acceleration time according to DSA theory, the spectral break appears to be an artifact of transport in a finite medium. These CTRW simulations point to a new and intriguing phenomenon associated with the statistical nature of collective acceleration of high energy cosmic rays in galactic superbubbles.
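To illustrate how a region of sources and sinks with heavy-tailed waiting times yields subdiffusion, here is a minimal toy continuous-time random walk; it is not the paper's model, and every parameter below is arbitrary:

```python
import numpy as np

# Toy CTRW: Pareto-distributed waiting times with index alpha < 1
# (infinite mean waiting time) between scattering events yield
# subdiffusive spreading, <x^2(T)> ~ T^alpha, rather than the
# normal-diffusion result <x^2> ~ T.
rng = np.random.default_rng(42)
n_walkers, n_steps, alpha = 5000, 400, 0.6

waits = rng.pareto(alpha, (n_walkers, n_steps)) + 1.0  # heavy-tailed
steps = rng.choice([-1.0, 1.0], size=(n_walkers, n_steps))
t = np.cumsum(waits, axis=1)   # event times for each walker
x = np.cumsum(steps, axis=1)   # positions after each event

for T in (1e2, 1e3, 1e4):
    n_done = (t <= T).sum(axis=1)          # events completed by time T
    idx = np.clip(n_done - 1, 0, None)
    msd = np.mean(x[np.arange(n_walkers), idx] ** 2)
    print(f"T={T:8.0f}  <x^2>={msd:8.1f}")  # grows slower than T
```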
18. Using Cosmic Microwave Background Lensing to Constrain the Multiplicative Bias of Cosmic Shear
Vallinotto, Alberto
2012-11-01
Weak gravitational lensing is one of the key probes of cosmology. Cosmic shear surveys aimed at measuring the distribution of matter in the universe are currently being carried out (Pan-STARRS) or planned for the coming decade (DES, LSST, EUCLID, WFIRST). Crucial to the success of these surveys is the control of systematics. In this work, a new method to constrain one such family of systematics, known as multiplicative bias, is proposed. This method exploits the cross-correlation between weak-lensing measurements from galaxy surveys and the ones obtained from high-resolution cosmic microwave background experiments. This cross-correlation is shown to have the power to break the degeneracy between the normalization of the matter power spectrum and the multiplicative bias of cosmic shear and to be able to constrain the latter to a few percent.
19. USING COSMIC MICROWAVE BACKGROUND LENSING TO CONSTRAIN THE MULTIPLICATIVE BIAS OF COSMIC SHEAR
SciTech Connect
Vallinotto, Alberto
2012-11-01
Weak gravitational lensing is one of the key probes of cosmology. Cosmic shear surveys aimed at measuring the distribution of matter in the universe are currently being carried out (Pan-STARRS) or planned for the coming decade (DES, LSST, EUCLID, WFIRST). Crucial to the success of these surveys is the control of systematics. In this work, a new method to constrain one such family of systematics, known as multiplicative bias, is proposed. This method exploits the cross-correlation between weak-lensing measurements from galaxy surveys and the ones obtained from high-resolution cosmic microwave background experiments. This cross-correlation is shown to have the power to break the degeneracy between the normalization of the matter power spectrum and the multiplicative bias of cosmic shear and to be able to constrain the latter to a few percent.
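Schematically, the degeneracy breaking described in the two records above works because the multiplicative bias m enters the auto- and cross-spectra with different powers; a sketch in our notation, not the paper's estimator:

```latex
% A biased shear estimate \hat\gamma = (1+m)\gamma enters quadratic
% statistics as follows; \kappa_{CMB} is the (unbiased) CMB lensing
% convergence. Comparing the two observables isolates (1+m) once the
% theory ratio C^{\gamma\gamma}/C^{\gamma\kappa} is fixed, which
% depends only weakly on the power-spectrum normalization.
C_\ell^{\hat\gamma\hat\gamma} = (1+m)^2\, C_\ell^{\gamma\gamma},
\qquad
C_\ell^{\hat\gamma\kappa_{\rm CMB}} = (1+m)\, C_\ell^{\gamma\kappa_{\rm CMB}}
```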
20. Cosmic Rays in the Heliosphere: Requirements for Future Observations
Mewaldt, R. A.
2013-06-01
Since the publication of Cosmic Rays in the Heliosphere in 1998 there has been great progress in understanding how and why cosmic rays vary in space and time. This paper discusses measurements that are needed to continue advances in relating cosmic ray variations to changes in solar and interplanetary activity and variations in the local interstellar environment. Cosmic ray acceleration and transport is an important discipline in space physics and astrophysics, but it also plays a critical role in defining the radiation environment for humans and hardware in space, and is critical to efforts to unravel the history of solar activity. Cosmic rays are measured directly by balloon-borne and space instruments, and indirectly by ground-based neutron, muon and neutrino detectors, and by measurements of cosmogenic isotopes in ice cores, tree-rings, sediments, and meteorites. The topics covered here include: what we can learn from the deep 2008-2009 solar minimum, when cosmic rays reached the highest intensities of the space era; the implications of 10Be and 14C isotope archives for past and future solar activity; the effects of variations in the size of the heliosphere; opportunities provided by the Voyagers for discovering the origin of anomalous cosmic rays and measuring cosmic-ray spectra in interstellar space; and future space missions that can continue the exciting exploration of the heliosphere that has occurred over the past 50 years.
1. HERSCHEL SURVEY OF GALACTIC OH{sup +}, H{sub 2}O{sup +}, AND H{sub 3}O{sup +}: PROBING THE MOLECULAR HYDROGEN FRACTION AND COSMIC-RAY IONIZATION RATE
SciTech Connect
Indriolo, Nick; Neufeld, D. A.; Gerin, M.; Falgarone, E.; Schilke, P.; Chambers, E. T.; Ossenkopf, V.; Benz, A. O.; Winkel, B.; Menten, K. M.; Black, John H.; Persson, C. M.; Bruderer, S.; Van Dishoeck, E. F.; Godard, B.; Lis, D. C.; Goicoechea, J. R.; Gupta, H.; Sonnentrucker, P.; Van der Tak, F. F. S.; and others
2015-02-10
In diffuse interstellar clouds the chemistry that leads to the formation of the oxygen-bearing ions OH{sup +}, H{sub 2}O{sup +}, and H{sub 3}O{sup +} begins with the ionization of atomic hydrogen by cosmic rays, and continues through subsequent hydrogen abstraction reactions involving H{sub 2}. Given these reaction pathways, the observed abundances of these molecules are useful in constraining both the total cosmic-ray ionization rate of atomic hydrogen (ζ{sub H}) and molecular hydrogen fraction (f{sub H{sub 2}}). We present observations targeting transitions of OH{sup +}, H{sub 2}O{sup +}, and H{sub 3}O{sup +} made with the Herschel Space Observatory along 20 Galactic sight lines toward bright submillimeter continuum sources. Both OH{sup +} and H{sub 2}O{sup +} are detected in absorption in multiple velocity components along every sight line, but H{sub 3}O{sup +} is only detected along 7 sight lines. From the molecular abundances we compute f{sub H{sub 2}} in multiple distinct components along each line of sight, and find a Gaussian distribution with mean and standard deviation 0.042 ± 0.018. This confirms previous findings that OH{sup +} and H{sub 2}O{sup +} primarily reside in gas with low H{sub 2} fractions. We also infer ζ{sub H} throughout our sample, and find a lognormal distribution with mean log (ζ{sub H}) = –15.75 (ζ{sub H} = 1.78 × 10{sup –16} s{sup –1}) and standard deviation 0.29 for gas within the Galactic disk, but outside of the Galactic center. This is in good agreement with the mean and distribution of cosmic-ray ionization rates previously inferred from H{sub 3}{sup +} observations. Ionization rates in the Galactic center tend to be 10-100 times larger than found in the Galactic disk, also in accord with prior studies.
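To make the quoted numbers concrete, a small sketch (not the survey's analysis pipeline) of the molecular hydrogen fraction definition and the 68% interval implied by the abstract's lognormal fit:

```python
import numpy as np

def f_H2(n_H, n_H2):
    """Fraction of H nuclei bound in H2: 2 n(H2) / (n(H) + 2 n(H2))."""
    return 2.0 * n_H2 / (n_H + 2.0 * n_H2)

# Lognormal cosmic-ray ionization rate quoted for the Galactic disk:
mean_log_zeta, sigma_log = -15.75, 0.29        # log10(zeta_H / s^-1)
zeta_mean = 10.0 ** mean_log_zeta              # ~1.8e-16 s^-1
lo = 10.0 ** (mean_log_zeta - sigma_log)
hi = 10.0 ** (mean_log_zeta + sigma_log)
print(f"zeta_H ~ {zeta_mean:.2e} s^-1 (68% interval {lo:.1e}-{hi:.1e})")
print(f"f_H2 example: {f_H2(n_H=100.0, n_H2=2.2):.3f}")  # ~0.042, the sample mean
```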
2. RELICS of the Cosmic Dawn
Bradac, Marusa; Coe, Dan; Bradley, Larry; Huang, Kuang-Han; Ryan, Russell; Dawson, Will; Zitrin, Adi; Hoag, Austin; Jones, Christine; Czakon, Nicole; Sharon, Keren; Trenti, Michele; Stark, Daniel; Bouwens, Rychard
2015-10-01
When did galaxies start forming stars? What is the role of distant galaxies in galaxy formation models and epoch of reionization? Recent observations indicate at least two critical puzzles in these studies. First galaxies might have started forming stars earlier than previously thought (<400Myr after the Big Bang). Furthermore, it is still unclear what is their star formation history and whether these galaxies can reionize the Universe. Accurate knowledge of stellar masses, ages, and star formation rates at this epoch requires measuring both rest-frame UV and optical light, which only Spitzer and HST can probe at z>7-11 for a large enough sample of typical galaxies. To address this cosmic puzzle, we propose Spitzer imaging of the fields behind 41 powerful cosmic telescopes selected using Planck data from the RELICS program (Reionization Lensing Cluster Survey; 190 HST orbits). This proposal will be a valuable Legacy complement to the existing IRAC deep surveys, and it will open up a new parameter space by probing the ordinary yet magnified population with much improved sample variance. The program will allow us to detect early galaxies with Spitzer and directly study stellar properties of a large number, ~20 galaxies (10 at z~7, 7 at z~8, 3 at z~9, and 1 at z~10). Spitzer data will much improve photometric redshifts of the earliest galaxies and will be crucial to ascertain the nature of any z>~10 candidate galaxies uncovered in the HST data. Spitzer also allows for an efficient selection of likely line emitters (as demonstrated by our recent spectroscopic confirmation of the most distant galaxy to date at z=8.68). Finally this proposal will establish the presence (or absence) of an unusually early established stellar population, as was recently observed in MACS1149JD at z~9. If confirmed in a larger sample, this result will require a paradigm shift in our understanding of the earliest star formation.
3. Source composition of cosmic rays
SciTech Connect
Silberberg, R.; Tsao, C.H. ); Shapiro, M.M. )
1990-03-20
A theory is developed that yields great improvement in deriving the cosmic-ray source abundances for energies below 10{sup 12} eV/u. In addition, based on the acceleration theory of Voelk and Biermann and on nucleosynthesis processes in pre-supernova stars, a theory is presented for the source composition at 10{sup 12}--10{sup 15} eV/u. The strong shock wave of a young supernova remnant accelerates the wind particles of the pre-supernova red and blue supergiant stars and Wolf-Rayet (WR) stars to energies up to 10{sup 15} eV/u. These winds contain the nucleosynthesis products of the CNO cycle and of He-burning. The shocks also accelerate flare particles in interstellar space. The composition below 10{sup 12} eV/u differs from that of the general stellar photosphere by: (1) Suppression of elements with a large FIP ({gt}10 eV) by a factor of 4; (2) The depletion of light nuclei (Z{le}10); (3) A large contribution of WC stars to {sup 12}C, {sup 16}O and {sup 22}Ne, with renormalization of the initial (Z{gt}2)/(Z{le}2) abundances of Prantzos et al., based on general elemental abundances.
4. Linear Accelerators
Sidorin, Anatoly
2010-01-01
In linear accelerators the particles are accelerated by either electrostatic fields or oscillating Radio Frequency (RF) fields. Accordingly, linear accelerators are divided into three large groups: electrostatic, induction and RF accelerators. An overview of the different types of accelerators is given. Stability of longitudinal and transverse motion in RF linear accelerators is briefly discussed. The methods of beam focusing in linacs are described.
5. Linear Accelerators
SciTech Connect
Sidorin, Anatoly
2010-01-05
In linear accelerators the particles are accelerated by either electrostatic fields or oscillating Radio Frequency (RF) fields. Accordingly, linear accelerators are divided into three large groups: electrostatic, induction and RF accelerators. An overview of the different types of accelerators is given. Stability of longitudinal and transverse motion in RF linear accelerators is briefly discussed. The methods of beam focusing in linacs are described.
6. The Origin of Cosmic Rays: What can GLAST Say?
NASA Technical Reports Server (NTRS)
Ormes, Jonathan F.; Digel, Seith; Moskalenko, Igor V.; Moiseev, Alexander; Williamson, Roger
2000-01-01
Gamma rays in the band from 30 MeV to 300 GeV, used in combination with direct measurements and with data from radio and X-ray bands, provide a powerful tool for studying the origin of Galactic cosmic rays. The Gamma-ray Large Area Space Telescope (GLAST), with its fine 10-20 arcmin angular resolution, will be able to map the sites of acceleration of cosmic rays and their interactions with interstellar matter. It will provide information that is necessary to study the acceleration of energetic particles in supernova shocks, their transport in the interstellar medium, and their penetration into molecular clouds.
7. Interpretation of Voyager 1 data on low energy cosmic rays in galactic wind model
Ptuskin, V. S.; Seo, E. S.; Zirakashvili, V. N.
2015-08-01
The local interstellar energy spectra of galactic cosmic rays down to a few MeV/nucleon were directly measured by the experiment on board the Voyager 1 spacecraft. We suggest an interpretation of these data based on our models of cosmic ray acceleration in supernova remnants and of diffusion in a galactic wind, where the diffusion coefficient is determined by the cosmic ray streaming instability. The dependence of the wind velocity on distance above the Galactic disk is determined.
8. The History of Cosmic Ray Studies after Hess
Grupen, Claus
2013-06-01
The discovery of cosmic rays by Victor Hess was confirmed with balloon flights at higher altitudes by Kolhörster. Soon the interest turned into questions about the nature of cosmic rays: gamma rays or particles? Subsequent investigations have established cosmic rays as the birthplace of elementary particle physics. The 1936 Nobel prize was shared between Victor Hess and Carl Anderson. Anderson discovered the positron in a cloud chamber. The positron was predicted by Dirac several years earlier. Many new results came now from studies with cloud chambers and nuclear emulsions. Anderson and Neddermeyer saw the muon, which for some time was considered to be a candidate for the Yukawa particle responsible for nuclear binding. Lattes, Powell, Occhialini and Muirhead clarified the situation by the discovery of the charged pions in cosmic rays. Rochester and Butler found V's, which turned out to be short-lived neutral kaons decaying into a pair of charged pions. Λ's, Σ's and Ξ's were found in cosmic rays using nuclear emulsions. After that period, accelerators and storage rings took over. The unexpected renaissance of cosmic rays started with the search for solar neutrinos and the observation of the supernova 1987A and other accelerators in the sky. With the observation of neutrino oscillations one began to look beyond the standard model of elementary particles. After 100 years of cosmic ray research we are again at the beginning of a new era, and cosmic rays may contribute to solve the many open questions, like dark matter and dark energy, by providing energies well beyond those of earth-bound accelerators.
9. Globular Clusters as a Test for Gravity in the Weak Acceleration Regime
Scarpa, Riccardo; Marconi, Gianni; Gilmozzi, Roberto
2006-03-01
Non-baryonic Dark Matter (DM) appears in galaxies and other cosmic structures when and only when the acceleration of gravity, as computed considering only baryons, goes below a well defined value a0 = 1.2 × 10^-8 cm s^-2. This fact is extremely important and suggestive of the possibility of a breakdown of Newton's law of gravity (or inertia) below a0. It is therefore important to verify whether Newton's law of gravity holds in this regime of accelerations. In order to do this, one has to study the dynamics of objects that do not contain significant amounts of DM and therefore should follow Newton's prediction for arbitrarily small accelerations. Globular clusters are believed, even by strong supporters of DM, to contain negligible amounts of DM and are therefore ideal for testing Newtonian dynamics in the low acceleration limit. Here, we discuss the status of an ongoing program aimed at this test. Compared to other studies of globular clusters, the novelty is that we trace the velocity dispersion profile of globular clusters far enough from the center to probe gravitational accelerations well below a0. In all three clusters studied so far the velocity dispersion is found to remain constant at large radii rather than follow the Keplerian falloff. On average, the flattening occurs at the radius where the cluster internal acceleration of gravity is (1.8 ± 0.4) × 10^-8 cm s^-2, fully consistent with MOND predictions.
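A back-of-envelope version of the test described above, ours rather than the authors' analysis, with a hypothetical cluster mass:

```python
import numpy as np

G_cgs = 6.674e-8        # cm^3 g^-1 s^-2
a0 = 1.2e-8             # cm s^-2, the acceleration scale from the abstract
Msun, pc = 1.989e33, 3.086e18

def g_internal(M_sun, r_pc):
    """Newtonian internal acceleration at radius r for enclosed mass M."""
    return G_cgs * M_sun * Msun / (r_pc * pc) ** 2

# Radius at which a (hypothetical) 3e5 Msun cluster crosses a0:
M = 3e5
r_cross = np.sqrt(G_cgs * M * Msun / a0) / pc
print(f"g = a0 at r ~ {r_cross:.0f} pc; g(5 pc) = {g_internal(M, 5):.1e} cm s^-2")
```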
10. Eleventh European Cosmic Ray Symposium
1988-08-01
The biennial Symposium includes all aspects of cosmic ray research. The scientific program was organized under three main headings: cosmic rays in the heliosphere, cosmic rays in the interstellar and extragalactic space, and properties of high-energy interactions as studied by cosmic rays. Selected short communications out of 114 contributed papers were indexed separately for the INIS database.
11. The accelerating universe
Blandford, Roger
2013-02-01
From keV electrons in the aurorae to Ultra High Energy Cosmic Rays in unidentified "Zevatrons", the cosmos shows a perverse, yet pervasive, proclivity to select a tiny minority of particles and boost them to high energy. The mechanisms involved can be traced back to the ideas of Faraday, Fermi and Alfvén though we are learning that the details are idiosyncratic to the many environments that we have explored. Much can be learned from comparing and contrasting particle acceleration in laboratory, interplanetary, interstellar and intergalactic locations. As it celebrates its centenary, cosmic ray physics, has assumed a new importance in solving one of the greatest problems consuming its illustrious scion - elementary particle physics - namely the nature of dark matter.
12. Cosmic-ray astrochemistry.
PubMed
Indriolo, Nick; McCall, Benjamin J
2013-10-01
Gas-phase chemistry in the interstellar medium is driven by fast ion-molecule reactions. This, of course, demands a mechanism for ionization, and cosmic rays are the ideal candidate as they can operate throughout the majority of both diffuse and dense interstellar clouds. Aside from driving interstellar chemistry via ionization, cosmic rays also interact with the interstellar medium in ways that heat the ambient gas, produce gamma rays, and produce light element isotopes. In this paper we review the observables generated by cosmic-ray interactions with the interstellar medium, focusing primarily on the relevance to astrochemistry. PMID:23812538
13. Supermassive cosmic string compactifications
SciTech Connect
Blanco-Pillado, Jose J.; Reina, Borja; Sousa, Kepa; Urrestilla, Jon
2014-06-01
The space-time dimensions transverse to a static straight cosmic string with a sufficiently large tension (supermassive cosmic strings) are compact and typically have a singularity at a finite distance from the core. In this paper, we discuss how the presence of multiple supermassive cosmic strings in the 4d Abelian-Higgs model can induce the spontaneous compactification of the transverse space and explicitly construct solutions where the gravitational background becomes regular everywhere. We discuss the embedding of this model in N = 1 supergravity and show that some of these solutions are half-BPS, in the sense that they leave unbroken half of the supersymmetries of the model.
14. Testing the Role of Cosmic Ray Reacceleration in the Galaxy
Connell, J. J.; Simpson, J. A.
1999-05-01
Cosmic rays constitute a super-thermal gas of charged particles magnetically confined within the Galaxy. While propagating through the interstellar medium (ISM), cosmic ray nuclei undergo nuclear spallation reactions, producing both stable (e.g., Be and B) and unstable secondary nuclei. Consistent cosmic ray confinement times of ~20 Myr have been reported from measurements of the radioactive secondary isotopes ^10Be, ^26Al, ^36Cl and ^54Mn using data from the High Energy Telescope (HET) on the Ulysses spacecraft. It is generally accepted that Galactic cosmic rays of energy less than ~10^14 eV are accelerated by supernova shocks in the ISM. Reacceleration of existing cosmic rays in the ISM is implicit in interstellar shock acceleration models, but whether reacceleration plays a significant role in cosmic ray production and interstellar propagation is largely unknown. The abundances of secondary electron-capture isotopes provide a crucial test of cosmic ray reacceleration. Electron capture is suppressed during interstellar propagation because cosmic ray nuclei are essentially stripped of their electrons. If, however, cosmic rays experience significant reacceleration, nuclei will have spent time at lower energies where electron pick-up, and hence electron capture, is more likely than at higher energies. Thus, electron-capture secondary isotopes would be less abundant (and their daughters more abundant) than otherwise predicted. The abundance ratio of ^49V to ^51V is a particularly sensitive test of this effect. The latest Ulysses HET data is used to address this problem. This research was supported in part by NASA/JPL Contract 955432 and NASA Grant NAG5-5179.
15. Foundations of observing dark energy dynamics with the Wilkinson Microwave Anisotropy Probe
SciTech Connect
Corasaniti, P.S.; Kunz, M.; Parkinson, D.; Copeland, E.J.; Bassett, B.A.
2004-10-15
Detecting dark energy dynamics is the main quest of current dark energy research. Addressing the issue demands a fully consistent analysis of cosmic microwave background, large-scale structure and SN-Ia data with multiparameter freedom valid for all redshifts. Here we undertake a ten-parameter analysis of general dark energy confronted with the first-year Wilkinson Microwave Anisotropy Probe, 2dF galaxy survey and latest SN-Ia data. Despite the huge freedom in dark energy dynamics there are no new degeneracies with standard cosmic parameters apart from a mild degeneracy between reionization and the redshift of acceleration, both of which effectively suppress small scale power. Breaking this degeneracy will help significantly in detecting dynamics, if it exists. Our best-fit model to the data has significant late-time evolution at z<1.5. Phantom models are also considered and we find that the best-fit crosses w=-1 which, if confirmed, would be a clear signal for radically new physics. Treatment of such rapidly varying models requires careful integration of the dark energy density, usually not implemented in standard codes, the neglect of which leads to crucial errors of up to 5%. Nevertheless cosmic variance means that standard Λ cold dark matter models are still a very good fit to the data and evidence for dynamics is currently very weak. Independent tests of reionization or the epoch of acceleration (e.g., integrated Sachs-Wolfe-large scale structure correlations) or reduction of cosmic variance at large scales (e.g., cluster polarization at high redshift) may prove key in the hunt for dynamics.
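The "careful integration of the dark energy density" that the authors flag can be sketched as follows; the w(z) used here is a hypothetical rapidly varying model, not the paper's best fit:

```python
import numpy as np
from scipy.integrate import quad

def rho_de_ratio(z, w_of_z):
    """rho_DE(z)/rho_DE(0) = exp(3 * int_0^z [1 + w(z')] / (1 + z') dz')."""
    integrand = lambda zp: (1.0 + w_of_z(zp)) / (1.0 + zp)
    integral, _ = quad(integrand, 0.0, z)
    return np.exp(3.0 * integral)

# Hypothetical model crossing w = -1; approximating it by a constant
# effective w is the kind of shortcut that produces percent-level errors.
w_cross = lambda z: -1.2 + 0.8 * z / (1.0 + z)
for z in (0.5, 1.0, 1.5):
    print(z, rho_de_ratio(z, w_cross))
```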
16. B-modes from cosmic strings
SciTech Connect
Pogosian, Levon; Wyman, Mark
2008-04-15
Detecting the parity-odd, or B-mode, polarization pattern in the cosmic microwave background radiation due to primordial gravity waves is considered to be the final observational key to confirming the inflationary paradigm. The search for viable models of inflation from particle physics and string theory has (re)discovered another source for B-modes: cosmic strings. Strings naturally generate as much vector-mode perturbation as they do scalar, producing B-mode polarization with a spectrum distinct from that expected from inflation itself. In a large set of models, B-modes arising from cosmic strings are more prominent than those expected from primordial gravity waves. In light of this, we study the physical underpinnings of string-sourced B-modes and the model dependence of the amplitude and shape of the C{sub l}{sup BB} power spectrum. Observational detection of a string-sourced B-mode spectrum would be a direct probe of post-inflationary physics near the grand unified theory (GUT) scale. Conversely, nondetection would put an upper limit on a possible cosmic string tension of G{mu} < or approx. 10{sup -7} within the next three years.
17. Distance Probes of Dark Energy
DOE PAGES Beta
Kim, A. G.; Padmanabhan, N.; Aldering, G.; Allen, S. W.; Baltay, C.; Cahn, R. N.; D'Andrea, C. B.; Dalal, N.; Dawson, K. S.; Denney, K. D.; et al
2015-03-15
We present the results from the Distances subgroup of the Cosmic Frontier Community Planning Study (Snowmass 2013). This document summarizes the current state of the field as well as future prospects and challenges. In addition to the established probes using Type Ia supernovae and baryon acoustic oscillations, we also consider prospective methods based on clusters, active galactic nuclei, gravitational wave sirens and strong lensing time delays.
18. Cosmic x ray physics
NASA Technical Reports Server (NTRS)
Mccammon, Dan; Cox, D. P.; Kraushaar, W. L.; Sanders, W. T.
1990-01-01
The annual progress report on Cosmic X Ray Physics is presented. Topics studied include: the soft x ray background, proportional counter and filter calibrations, the new sounding rocket payload: X Ray Calorimeter, and theoretical studies.
19. Cosmic x ray physics
NASA Technical Reports Server (NTRS)
Mccammon, Dan; Cox, D. P.; Kraushaar, W. L.; Sanders, W. T.
1991-01-01
The annual progress report on Cosmic X Ray Physics for the period 1 Jan. to 31 Dec. 1990 is presented. Topics studied include: soft x ray background, new sounding rocket payload: x ray calorimeter, and theoretical studies.
20. A COSMIC VARIANCE COOKBOOK
SciTech Connect
Moster, Benjamin P.; Rix, Hans-Walter; Somerville, Rachel S.; Newman, Jeffrey A.
2011-04-20
Deep pencil beam surveys (<1 deg{sup 2}) are of fundamental importance for studying the high-redshift universe. However, inferences about galaxy population properties (e.g., the abundance of objects) are in practice limited by 'cosmic variance'. This is the uncertainty in observational estimates of the number density of galaxies arising from the underlying large-scale density fluctuations. This source of uncertainty can be significant, especially for surveys which cover only small areas and for massive high-redshift galaxies. Cosmic variance for a given galaxy population can be determined using predictions from cold dark matter theory and the galaxy bias. In this paper, we provide tools for experiment design and interpretation. For a given survey geometry, we present the cosmic variance of dark matter as a function of mean redshift z-bar and redshift bin size {Delta}z. Using a halo occupation model to predict galaxy clustering, we derive the galaxy bias as a function of mean redshift for galaxy samples of a given stellar mass range. In the linear regime, the cosmic variance of these galaxy samples is the product of the galaxy bias and the dark matter cosmic variance. We present a simple recipe using a fitting function to compute cosmic variance as a function of the angular dimensions of the field, z-bar , {Delta}z, and stellar mass m{sub *}. We also provide tabulated values and a software tool. The accuracy of the resulting cosmic variance estimates ({delta}{sigma}{sub v}/{sigma}{sub v}) is shown to be better than 20%. We find that for GOODS at z-bar =2 and with {Delta}z = 0.5, the relative cosmic variance of galaxies with m{sub *}>10{sup 11} M{sub sun} is {approx}38%, while it is {approx}27% for GEMS and {approx}12% for COSMOS. For galaxies of m{sub *} {approx} 10{sup 10} M{sub sun}, the relative cosmic variance is {approx}19% for GOODS, {approx}13% for GEMS, and {approx}6% for COSMOS. This implies that cosmic variance is a significant source of uncertainty at z-bar =2 for small fields and massive galaxies, while for larger fields and intermediate-mass galaxies, cosmic variance is less of a concern.
1. A Cosmic Variance Cookbook
Moster, Benjamin P.; Somerville, Rachel S.; Newman, Jeffrey A.; Rix, Hans-Walter
2011-04-01
Deep pencil beam surveys (<1 deg2) are of fundamental importance for studying the high-redshift universe. However, inferences about galaxy population properties (e.g., the abundance of objects) are in practice limited by "cosmic variance." This is the uncertainty in observational estimates of the number density of galaxies arising from the underlying large-scale density fluctuations. This source of uncertainty can be significant, especially for surveys which cover only small areas and for massive high-redshift galaxies. Cosmic variance for a given galaxy population can be determined using predictions from cold dark matter theory and the galaxy bias. In this paper, we provide tools for experiment design and interpretation. For a given survey geometry, we present the cosmic variance of dark matter as a function of mean redshift \bar{z} and redshift bin size Δz. Using a halo occupation model to predict galaxy clustering, we derive the galaxy bias as a function of mean redshift for galaxy samples of a given stellar mass range. In the linear regime, the cosmic variance of these galaxy samples is the product of the galaxy bias and the dark matter cosmic variance. We present a simple recipe using a fitting function to compute cosmic variance as a function of the angular dimensions of the field, \bar{z}, Δz, and stellar mass m *. We also provide tabulated values and a software tool. The accuracy of the resulting cosmic variance estimates (δσ v /σ v ) is shown to be better than 20%. We find that for GOODS at \bar{z}=2 and with Δz = 0.5, the relative cosmic variance of galaxies with m *>1011 M sun is ~38%, while it is ~27% for GEMS and ~12% for COSMOS. For galaxies of m * ~ 1010 M sun, the relative cosmic variance is ~19% for GOODS, ~13% for GEMS, and ~6% for COSMOS. This implies that cosmic variance is a significant source of uncertainty at \bar{z}=2 for small fields and massive galaxies, while for larger fields and intermediate mass galaxies, cosmic variance is less of a concern.
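In the linear regime, the final step of the recipe above is a single multiplication; the numbers below are illustrative placeholders, not values from the paper's tables or fitting function:

```python
def sigma_v_galaxy(bias, sigma_v_dm):
    """Linear-regime cosmic variance of a galaxy sample:
    galaxy bias times the dark matter cosmic variance for the field."""
    return bias * sigma_v_dm

sigma_v_dm = 0.10                 # hypothetical dark matter variance for a field
for bias in (1.5, 2.5, 3.8):      # bias grows with stellar mass and redshift
    print(f"b={bias}: sigma_v = {sigma_v_galaxy(bias, sigma_v_dm):.2f}")
```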
2. The origin of cosmic rays and TeV gamma-ray astronomy
Maier, Gernot
2013-06-01
Cosmic rays are accelerated to high energies in Galactic and extragalactic objects like supernova remnants (SNR) and active galactic nuclei (AGN). How these accelerators work, and how efficiently they accelerate different types of particles to energies of 10^15 eV or beyond, is still unknown 100 years after the discovery of cosmic rays by Victor Hess. Gamma rays trace cosmic rays at their site of acceleration and give crucial information on the nature and inner workings of these extreme objects. Gamma rays can be used to find the sources of cosmic rays and to determine their type, age and dynamics. We review in these proceedings the observational techniques and recent findings on gamma-ray emission from supernova remnants.
3. The Cosmic Labyrinth
Atkinson, M.
2011-06-01
This paper discusses the intertwined relationship between the terrestrial and celestial using the labyrinth as a metaphor referencing sources from art, gardens and Australian Indigenous culture. Including the Morning Star with the labyrinthine mortuary ritual in Arnhem Land, the cosmic plan garden at Auschwitz and Marea Atkinson's art project undertaken at the Villa Garzoni garden in Italy to create The Cosmic Labyrinth installation exhibited at Palazzo Franchetti, Venice, during the sixth conference on the Inspiration of Astronomical Phenomena.
4. Cosmic Ray Dosimetry
Si Belkhir, F.; Attallah, R.
2010-10-01
Radiation levels at aircraft cruising altitudes are twenty times higher than at sea level. Thus, on average, a typical airline pilot receives a larger annual radiation dose than someone working in the nuclear industry. The main source of this radiation is galactic cosmic radiation: high-energy particles generated by exploding stars within our own galaxy. In this work we study cosmic-ray dosimetry at various aviation altitudes using the PARMA model.
5. COSMIC monthly progress report
NASA Technical Reports Server (NTRS)
1994-01-01
Activities of the Computer Software Management and Information Center (COSMIC) are summarized for the month of January 1994. Tables showing the current inventory of programs available from COSMIC are presented and program processing and evaluation activities are discussed. Marketing and customer service activities in this period are presented as is the progress report of NASTRAN maintenance and support. Tables of disseminations and budget summary conclude the report.
6. HD/H{sub 2} AS A PROBE OF THE ROLES OF GAS, DUST, LIGHT, METALLICITY, AND COSMIC RAYS IN PROMOTING THE GROWTH OF MOLECULAR HYDROGEN IN THE DIFFUSE INTERSTELLAR MEDIUM
SciTech Connect
Liszt, H. S.
2015-01-20
We modeled recent observations of UV absorption of HD and H{sub 2} in the Milky Way and toward damped/subdamped Lyα systems at z = 0.18 and z >1.7. N(HD)/N(H{sub 2}) ratios reflect the separate self-shieldings of HD and H{sub 2} and the coupling introduced by deuteration chemistry. Locally, observations are explained by diffuse molecular gas with 16 cm{sup –3} ≲ n(H) ≲ 128 cm{sup –3} if the cosmic-ray ionization rate per H nucleus ζ {sub H} =2 × 10{sup –16} s{sup –1}, as inferred from H{sub 3} {sup +} and OH{sup +}. The dominant influence on N(HD)/N(H{sub 2}) is the cosmic-ray ionization rate with a much weaker downward dependence on n(H) at solar metallicity, but dust extinction can drive N(HD) higher as with N(H{sub 2}). At z > 1.7, N(HD) is comparable to the Galaxy but with 10 times smaller N(H{sub 2}) and somewhat smaller N(H{sub 2})/N(H I). Comparison of our Galaxy with the Magellanic Clouds shows that smaller H{sub 2}/H is expected at subsolar metallicity, and we show by modeling that HD/H{sub 2} increases with density at low metallicity, opposite to the Milky Way. Observations of HD would be explained with higher n(H) at low metallicity, but high-z systems have high HD/H{sub 2} at metallicity 0.04 ≲ Z ≲ 2 solar. In parallel, we trace dust extinction and self-shielding effects. The abrupt H{sub 2} transition to H{sub 2}/H ≈ 1%-10% occurs mostly from self-shielding, although it is assisted by extinction for n(H) ≲ 16 cm{sup –3}. Interior H{sub 2} fractions are substantially increased by dust extinction below ≲ 32 cm{sup –3}. At smaller n(H), ζ {sub H}, small increases in H{sub 2} triggered by dust extinction can trigger abrupt increases in N(HD)
7. Cosmic Ray Observation for Nuclear Astrophysics: CORONA Program
2003-04-01
The Cosmic Ray Observation for Nuclear Astrophysics (CORONA) program is a large-scale spacecraft or space station approach to measuring the nuclear composition of relativistic cosmic rays 10 ≦ Z ≦ 92 and of low-energy isotopes 1 ≦ Z ≦ 58 in space. A large-area Spectrometer for Ultraheavy Nuclear Composition (SUNC) and a Large Isotope Telescope Array (LITA) are proposed in this program. The CORONA program focuses on the composition of elements beyond the iron-peak nuclei (Z > 60) and the isotopic composition of ultraheavy particles (Z > 30) in galactic cosmic rays as well as solar and interplanetary particles. The observation of nuclear composition covers a wide range of scientific themes including studies of nucleosynthesis at cosmic ray sources, the chemical evolution of galactic material, the characteristic time of cosmic rays, and the heating and acceleration mechanisms of cosmic ray particles. Observation of solar particle events will also clarify the physical processes of transient solar events emitting a wide range of radio, X-ray/gamma-ray, plasma and energetic particle radiation, and the particle acceleration mechanism driven by CMEs.
8. High Energy Cosmic Rays and Neutrinos from Newborn Pulsars
Fang, Ke; Kotera, Kumiko; Olinto, Angela
2013-04-01
Newborn pulsars offer favorable sites for cosmic ray acceleration and interaction. Particles could be stripped off the star surface and accelerated in the pulsar wind up to PeV-100 EeV energies, depending on the pulsar's birth period and magnetic field strength. Once accelerated, the cosmic rays interact with the surrounding supernova ejecta until they escape the source. By assuming a normal distribution of pulsar birth periods centered at 300 ms, we find that the combined contribution of extragalactic pulsars produces ultrahigh energy cosmic rays that agree with both the observed energy spectrum and the composition trend reported by the Auger Observatory. Meanwhile, we point out that their Galactic counterparts naturally give rise to a cosmic ray flux peaked at very high energies (VHE, between 10^16 and 10^18 eV), which can bridge the gap between predictions of cosmic rays produced by supernova remnants and the observed spectrum and composition just below the ankle. Young pulsars in the universe would also contribute to a diffuse neutrino background due to photomeson interactions, whose detectability and typical neutrino energy are discussed. Lastly, we predict the neutrino emission level for the future birth of a nearby pulsar.
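An order-of-magnitude version of the acceleration ceiling invoked above; the formula is the standard unipolar-inductor potential, with order-unity prefactors that vary between treatments, and the stellar parameters are illustrative:

```python
import numpy as np

c = 3e10  # cm/s

def E_max_eV(Z, B_gauss, P_s, R_cm=1e6):
    """E_max ~ Z e Phi with Phi ~ B Omega^2 R^3 / (2 c^2) (Gaussian units);
    normalization factors of order unity differ between treatments."""
    Omega = 2.0 * np.pi / P_s
    phi_statvolt = B_gauss * Omega ** 2 * R_cm ** 3 / (2.0 * c ** 2)
    return Z * phi_statvolt * 300.0   # 1 statvolt ~ 300 V, so e*Phi in eV

# A millisecond-period, 1e13 G star reaches ~7e19 eV for protons,
# and Z times that for heavier ions:
print(f"{E_max_eV(Z=1, B_gauss=1e13, P_s=1e-3):.1e} eV")
```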
9. CHEMICAL COMPOSITION AND MAXIMUM ENERGY OF GALACTIC COSMIC RAYS
SciTech Connect
Shibata, M.; Katayose, Y.; Huang, J.; Chen, D.
2010-06-20
A model of the cosmic-ray energy spectrum is proposed that assumes various acceleration limits at multiple sources. The model describes the broken power-law energy spectrum of cosmic rays by superposition of multiple sources; a diffusive shock acceleration mechanism plays an essential role. The maximum energy of galactic cosmic rays is discussed based on a comparison of experimental data with calculations done using the proposed model. The model can describe the energy spectrum at very high energies of up to several times 10{sup 18} eV, but the observed highest-energy cosmic rays deviate from the model predictions, indicating a different origin, such as an extragalactic source. This model describes the steepening of the power index at the so-called knee. However, it was found that additional assumptions are needed to explain the sharpness of the knee. Two possible explanations for the structure of the knee are discussed in terms of nearby source(s) and the hard energy spectrum suggested by nonlinear effects of cosmic-ray acceleration mechanisms.
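A toy rendering of the superposition idea, ours rather than the authors' parameterization: summing power laws with a spread of rigidity-dependent cutoffs steepens the total spectrum near the knee:

```python
import numpy as np

def flux(E, Z=1, gamma=2.7, E_cut_list=(1e15, 3e15, 1e16)):
    """Sum of power-law source spectra, each cut off at Z * E_cut
    (a rigidity-dependent acceleration limit), in arbitrary units."""
    return sum(E ** (-gamma) * np.exp(-E / (Z * Ec)) for Ec in E_cut_list)

E = np.logspace(14.0, 17.0, 7)
# Local spectral slope steepens through the cutoff region (the "knee"):
slopes = -np.diff(np.log(flux(E))) / np.diff(np.log(E))
print(np.round(slopes, 2))
```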
10. Can Accelerators Accelerate Learning?
Santos, A. C. F.; Fonseca, P.; Coelho, L. F. S.
2009-03-01
The 'Young Talented' education program developed by the Brazilian State Funding Agency (FAPERJ) [1] makes it possible for high-schools students from public high schools to perform activities in scientific laboratories. In the Atomic and Molecular Physics Laboratory at Federal University of Rio de Janeiro (UFRJ), the students are confronted with modern research tools like the 1.7 MV ion accelerator. Being a user-friendly machine, the accelerator is easily manageable by the students, who can perform simple hands-on activities, stimulating interest in physics, and getting the students close to modern laboratory techniques.
11. PARTICLE ACCELERATOR
DOEpatents
Teng, L.C.
1960-01-19
A combination of two accelerators, a cyclotron and a ring-shaped accelerator which has a portion disposed tangentially to the cyclotron, is described. Means are provided to transfer particles from the cyclotron to the ring accelerator including a magnetic deflector within the cyclotron, a magnetic shield between the ring accelerator and the cyclotron, and a magnetic inflector within the ring accelerator.
12. The HEAT Cosmic Ray Antiproton Experiment
Nutter, Scott
1998-10-01
The HEAT (High Energy Antimatter Telescope) collaboration is constructing a balloon-borne instrument to measure the relative abundance of antiprotons and protons in the cosmic rays to kinetic energies of 30 GeV. The instrument uses a multiple energy loss technique to measure the Lorentz factor of through-going cosmic rays, a magnet spectrometer to measure momentum, and several scintillation counters to determine particle charge and direction (up or down in the atmosphere). The antiproton to proton abundance ratio as a function of energy is a probe of the propagation environment of protons through the galaxy. Existing measurements indicate a higher than expected value at both high and low energies. A confirming measurement could indicate peculiar antiproton sources, such as WIMPs or supersymmetric dark matter candidates.
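The measurement principle can be sketched in natural units (ours, not the HEAT analysis code): combining the spectrometer momentum with the Lorentz factor from the multiple energy-loss technique yields the particle mass:

```python
import numpy as np

def mass_GeV(p_GeV, gamma):
    """m = p / (beta * gamma) in natural units (c = 1)."""
    return p_GeV / np.sqrt(gamma ** 2 - 1.0)

# A 3 GeV/c track with gamma ~ 3.4 reconstructs to ~0.93 GeV,
# i.e., an (anti)proton rather than a lighter particle:
print(f"{mass_GeV(3.0, 3.37):.2f} GeV")
```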
13. Detection prospects of the cosmic neutrino background
Li, Yu-Feng
2015-04-01
The existence of the cosmic neutrino background (CνB) is a fundamental prediction of the standard Big Bang cosmology. Although current cosmological probes provide indirect observational evidence, the direct detection of the CνB in a laboratory experiment is a great challenge to the present experimental techniques. We discuss the future prospects for the direct detection of the CνB, with the emphasis on the method of captures on beta-decaying nuclei and the PTOLEMY project. Other possibilities using the electron-capture (EC) decaying nuclei, the annihilation of extremely high-energy cosmic neutrinos (EHECνs) at the Z-resonance, and the atomic de-excitation method are also discussed in this review (talk given at the International Conference on Massive Neutrinos, Singapore, 9-13 February 2015).
14. Detection Prospects of the Cosmic Neutrino Background
Li, Yu-Feng
The existence of the cosmic neutrino background (CνB) is a fundamental prediction of the standard Big Bang cosmology. Although current cosmological probes provide indirect observational evidence, the direct detection of the CνB in a laboratory experiment is a great challenge to the present experimental techniques. We discuss the future prospects for the direct detection of the CνB, with the emphasis on the method of captures on beta-decaying nuclei and the PTOLEMY project. Other possibilities using the electron-capture (EC) decaying nuclei, the annihilation of extremely high-energy cosmic neutrinos (EHECνs) at the Z-resonance, and the atomic de-excitation method are also discussed in this review.
15. Super-alfvenic propagation of cosmic rays: The role of streaming modes
NASA Technical Reports Server (NTRS)
Morrison, P. J.; Scott, J. S.; Holman, G. D.; Ionson, J. A.
1980-01-01
Numerous cosmic ray propagation and acceleration problems require knowledge of the propagation speed of relativistic particles through an ambient plasma. Previous calculations indicated that self-generated turbulence scatters relativistic particles and reduces their bulk streaming velocity to the Alfven speed. This result was incorporated into all currently prominent theories of cosmic ray acceleration and propagation. It is demonstrated that super-Alfvenic propagation is indeed possible for a wide range of physical parameters. This fact dramatically affects the predictions of these models.
16. Ultra-high energy probes of classicalization
SciTech Connect
Dvali, Gia; Gomez, Cesar
2012-07-01
Classicalizing theories are characterized by a rapid growth of the scattering cross section. This growth makes such theories interesting probes for ultra-high-energy experiments even at relatively low luminosity, such as cosmic rays or plasma wakefield accelerators. The microscopic reason behind this growth is the production of N-particle states, classicalons, that represent self-sustained lumps of soft bosons. For spin-2 theories this is the quantum portrait of what in the classical limit are known as black holes. We emphasize the importance of this quantum picture, which liberates us from the artifacts of the classical geometric limit and allows us to scan a much wider landscape of experimentally interesting quantum theories. We identify a phenomenologically viable class of spin-2 theories for which the growth of the classicalon production cross section can be efficient enough to compete with the QCD cross section already at 100 TeV energy, signaling production of quantum black holes with graviton occupation number N ∼ 10{sup 4}.
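The quoted occupation number follows from a simple scaling, sketched here with M_* denoting the scale of the classicalizing spin-2 theory (our notation; the identification M_* ~ TeV is an assumption consistent with, but not spelled out in, the abstract):

```latex
% Geometric cross section set by the classicalon radius r_*(s), with
% N counting the soft quanta in the lump. For a lowered gravitational
% scale M_* ~ 1 TeV, sqrt(s) = 100 TeV gives N ~ s/M_*^2 ~ 10^4,
% matching the occupation number quoted above.
\sigma(s) \simeq \pi\, r_*^2(s), \qquad N \sim \frac{s}{M_*^2}
```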
17. Probing Dark Energy with Constellation-X
SciTech Connect
Rapetti, David; Allen, Steven W.; /KIPAC, Menlo Park
2006-09-08
Constellation-X (Con-X) will carry out two powerful and independent sets of tests of dark energy based on X-ray observations of galaxy clusters, providing comparable accuracy to other leading dark energy probes. The first group of tests will measure the absolute distances to clusters, primarily using measurements of the X-ray gas mass fraction in the largest, dynamically relaxed clusters, but with additional constraining power provided by follow-up observations of the Sunyaev-Zel'dovich (SZ) effect. As with supernovae studies, such data determine the transformation between redshift and true distance, d(z), allowing cosmic acceleration to be measured directly. The second, independent group of tests will use the exquisite spectroscopic capabilities of Con-X to determine scaling relations between X-ray observables and mass. Together with forthcoming X-ray and SZ cluster surveys, these data will help to constrain the growth of structure, which is also a strong function of cosmological parameters.
18. Nonlinear Transport of Cosmic Rays in Turbulent Magnetic Field
Yan, H.; Xu, S.
2014-09-01
Recent advances in both MHD turbulence theory and cosmic ray observations call for revisions in the paradigm of cosmic ray transport. We use models of magnetohydrodynamic turbulence that were tested in numerical simulations, in which turbulence is injected at large scale and cascades to small scales. We present the nonlinear results for cosmic ray transport, in particular the cross-field transport of CRs. We demonstrate that the concept of cosmic ray subdiffusion in general does not apply, and that the perpendicular motion is well described by normal diffusion with an M_A^4 dependence. Moreover, on scales less than the injection scale of turbulence, CR transport becomes superdiffusive. Quantitative predictions for both the normal diffusion on large scales and the superdiffusion on small scales are confirmed with recent numerical simulations. Implications for shock acceleration are briefly discussed.
19. Plans for Extreme Energy Cosmic Ray Observations from Space
NASA Technical Reports Server (NTRS)
2004-01-01
Cosmic rays have been detected at energies beyond 10^20 eV, where the Universe is predicted to become opaque to protons. The acceleration of cosmic rays to such extreme energies in known astrophysical objects has also proven difficult to understand, leading to many suggestions that new physics may be required to explain their existence. This has prompted the construction of new experiments designed to detect cosmic rays with fluxes below 1 particle/km^2/century and follow their spectrum to even higher energies. To detect large numbers of these particles, the next generation of these experiments must be performed on space-based platforms that look down on very large detection volumes in the Earth's atmosphere. The talk will review the experimental and theoretical investigations of extreme energy cosmic rays and discuss the present and planned experiments to extend measurements beyond 10^21 eV.
20. Cosmic rays and the birth of particle physics
https://hal-insu.archives-ouvertes.fr/insu-02937876
# Density Fluctuations in the Solar Wind Based on Type III Radio Bursts Observed by Parker Solar Probe
Abstract : Radio waves are strongly scattered in the solar wind, so that their apparent sources seem to be considerably larger and shifted than the actual ones. Since the scattering depends on the spectrum of density turbulence, a better understanding of the radio wave propagation provides indirect information on the relative density fluctuations, ε = ⟨δn⟩/⟨n⟩, at the effective turbulence scale length. Here, we analyzed 30 type III bursts detected by Parker Solar Probe (PSP). For the first time, we retrieved type III burst decay times, τ_d, between 1 and 10 MHz thanks to the unparalleled temporal resolution of PSP. We observed a significant deviation in the power-law slope for frequencies above 1 MHz when compared to previous measurements below 1 MHz by the twin-spacecraft Solar TErrestrial RElations Observatory (STEREO) mission. We note that the altitudes of radio bursts generated at 1 MHz roughly coincide with the expected location of the Alfvén point, where the solar wind becomes super-Alfvénic. By comparing PSP observations and Monte Carlo simulations, we predict the relative density fluctuations, ε, at the effective turbulence scale length at radial distances between 2.5 and 14 R☉ to range from 0.22 to 0.09. Finally, we calculated the relative density fluctuations, ε, measured in situ by PSP at a radial distance from the Sun of 35.7 R☉ during perihelion #1 and perihelion #2 to be 0.07 and 0.06, respectively. This is in very good agreement with previous STEREO predictions (ε = 0.06-0.07) obtained by remote measurements of radio sources generated at this radial distance.
### Citation
Vratislav Krupar, Adam Szabo, Milan Maksimovic, Oksana Kruparova, Eduard Kontar, et al.. Density Fluctuations in the Solar Wind Based on Type III Radio Bursts Observed by Parker Solar Probe. Astrophysical Journal Supplement, American Astronomical Society, 2020, Early Results from Parker Solar Probe: Ushering a New Frontier in Space Exploration, 246 (2), pp.57. ⟨10.3847/1538-4365/ab65bd⟩. ⟨insu-02937876⟩
https://www.physicsforums.com/threads/annoyingly-simple-problem-rational-functions-and-limits-at-infinity.305122/
# Annoyingly simple problem - rational functions and limits at infinity
1. Apr 5, 2009
### damian6961
Hi all
This is my first post so please be gentle with me!
Limit of this rational function as x approaches infinity?
f(x) = (x^3 - 2x)/(2x^2 - 10)
I was under the impression that if the degree of the polynomial in the numerator exceeds that of the denominator then there can be no horizontal asymptote. Is this correct?
I've used l'Hôpital's rule and found the limit to be 3x/2. I've been told the limit as x tends to infinity is x/2. Which is the correct solution and why? This has been driving me crazy!!
Damian
2. Apr 5, 2009
### CRGreathouse
It's x/2. Just divide everything by 2x^2.
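One way to make that suggestion explicit: dividing the numerator and the denominator by $$x^2$$ gives
$$f(x)=\frac{x^{3}-2x}{2x^{2}-10}=\frac{x-\frac{2}{x}}{2-\frac{10}{x^{2}}}\sim\frac{x}{2}\quad\text{as } x\to\infty,$$
since the numerator behaves like $$x$$ while the denominator tends to $$2$$.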
3. Apr 5, 2009
### elect_eng
Or, just retain the highest order term in the numerator and also in the denominator.
This leaves a ratio of $${{x}\over{2}}$$
4. Apr 7, 2009
The limit is $$\infty$$. You can find that by taking the above suggestions and taking x/2 as x goes to infinity, but x/2 itself is not the limit.
5. Apr 7, 2009
### elect_eng
You misunderstand what I'm saying. The function approaches a "limiting" (asymptotic) form of x/2 in the limit at x goes to infinity. It doesn't matter if the value of the function goes to infinity as x goes to infinity. Electrical engineers use this trick all the time to find out the high frequency response of a system. We always have frequency response as a ratio of polynomials in frequency. We just keep the highest order terms in the numerator and in the denominator to find the high frequency response.
6. Apr 8, 2009
I'm not misunderstanding anything. The question didn't ask about the asymptotic behaviour of the function as x goes to infinity; it only asked about the limit. I was giving a clear answer to the question.
7. Apr 8, 2009
### elect_eng
Well, I read the question differently. It says "what is the limit of the rational function?". This implies finding the limiting functional form, which is borne out by the answer. He tried to solve the problem as if he needed the numerical limit, which is why he is confused.
If you are saying that the wording of the question is vague and that the answer of infinity could be acceptable as an answer, I won't argue about that.
8. Apr 8, 2009
I didn't find the problem statement unclear or ambiguous at all. Oh well.
9. Apr 8, 2009
### AUMathTutor
The limit of the function as x approaches infinity is undefined (it tends towards infinity).
f(x) - x/2 goes to zero in the limit of large x, implying that the function f(x) ~ x/2 asymptotically.
10. Apr 9, 2009
### elect_eng
Yes, I see your point. Math requires precise statements. I shouldn't read things into it.
11. Apr 9, 2009
### CRGreathouse
I always use extended reals for limits, so I would say
$$\lim_{x\to\infty}f(x)=+\infty.$$
If you really want to get technical, what the original poster was asking for is the first term of the asymptotic expansion of f(x) about infinity:
$$f(x)=\frac x2+\frac{3}{2x}+\frac{15}{2x^3}+\frac{75}{2x^5}+\cdots$$
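For anyone who wants to verify this mechanically, here is a minimal sketch using SymPy (the `series` call with an expansion point at infinity is standard SymPy; output formatting may differ between versions):

```python
import sympy as sp

x = sp.symbols('x')
f = (x**3 - 2*x) / (2*x**2 - 10)

# Asymptotic expansion about x = oo; the leading terms should be
# x/2 + 3/(2*x) + 15/(2*x**3) + ...
print(sp.series(f, x, sp.oo, n=6))
```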
http://clay6.com/qa/50028/x-2-y-2-4x-8y-45-0-find-the-centre-and-radius-of-the-circles-
# $x^2+y^2-4x-8y-45=0$ find the centre and radius of the circle.
$\begin {array} {1 1} (A)\;(2,4) \: and \: \sqrt{65} & \quad (B)\;(-2,4) \: and \: \sqrt{65} \\ (C)\;(-2,-4) \: and \: \sqrt{65} & \quad (D)\;(2,-4) \: and \: \sqrt{65} \end {array}$
Toolbox:
• Equation of a circle with centre (h, k) and radius r is given as : $(x-h)^2+(y-k)^2=r^2$
Equation of the circle is $x^2+y^2-4x-8y-45=0$
This can be written as
$(x^2-4x)+(y^2-8y)=45$
(i.e) $[(x-2)^2-4] +[(y-4)^2-16]=45$
(i.e) $(x-2)^2+(y-4)^2=65$
This is of the form
$(x-h)^2+(y-k)^2=r^2$
Comparing both the equations we get,
h=2 and k = 4 and r = $\sqrt{65}$ .
Thus the centre of the given circle is (2,4) and the radius is $\sqrt{65}$
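As a quick symbolic check (a sketch using SymPy; nothing here is specific to the solution above beyond the completed-square form it derived):

```python
import sympy as sp

x, y = sp.symbols('x y')
original = x**2 + y**2 - 4*x - 8*y - 45
completed = (x - 2)**2 + (y - 4)**2 - 65  # centre (2, 4), radius sqrt(65)

# The difference expands to 0, confirming the completed-square form.
print(sp.expand(completed - original))
```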
http://mathhelpforum.com/algebra/172946-line-circle-intersection.html
# Thread: Line / Circle intersection
1. ## Line / Circle intersection
Alright, here's the problem:
Water is flowing from a major broken water main at the intersection of two streets. The resulting puddle of water is circular and the radius r of the puddle is given by the equation r = 5t feet, where t represents time in seconds elapsed since the main broke.
a) When the main broke, a runner was located 6 miles from the intersection. The runner continues toward the intersection at the constant speed of 17 feet per second. When will the runner's feet get wet?
b) Suppose, instead, that when the main broke, the runner was 6 miles east, and 5000 feet north of the intersection. The runner runs due west at 17 feet per second. When will the runner's feet get wet?
Here's what I've done:
Circle: $x^{2}+y^{2}=25t^{2}$
Runner: $d_{\text{from intersection}}=6\,\text{mi}\cdot\frac{5280\,\text{ft}}{\text{mi}}-\frac{17\,\text{ft}}{\text{sec}}\cdot t\;\text{sec}=31680-17t$
let y=0, then: $(31680-17t)^{2}=25t^{2}$
$t=1440\;sec=24\;minutes$
So part a) is simple. Part b), however, is giving me grief. Leaving the origin where it is, I'm able to see that the portion of the spill covering the street on which the runner is moving (5000 ft north) is encroaching on the runner according to $\sqrt{(5t)^{2}-5000^{2}}$. It appears to me that I can equate that with the runner's motion towards the spill and solve for time, but when I do I end up with a NASTY quadratic that seems to fall apart when I try to solve it. The back of my textbook demurely states "Wet in 25.4154041 minutes," which sounds reasonable, but my path there right now is much less direct than the runner's.
Advice? Or direction pointing? Thank you so much!
2. Originally Posted by actorRunning
Alright, here's the problem:
Water is flowing from a major broken water main at the intersection of two streets. The resulting puddle of water is circular and the radius r of the puddle is given by the equation r = 5t feet, where t represents time in seconds elapsed since the main broke.
a) When the main broke, a runner was located 6 miles from the intersection. The runner continues toward the intersection at the constant speed of 17 feet per second. When will the runner's feet get wet?
b) Suppose, instead, that when the main broke, the runner was 6 miles east, and 5000 feet north of the intersection. The runner runs due west at 17 feet per second. When will the runner's feet get wet?
Here's what I've done:
Circle: $x^{2}+y^{2}=25t^{2}$
Runner: $d_{\text{from intersection}}=6\,\text{mi}\cdot\frac{5280\,\text{ft}}{\text{mi}}-\frac{17\,\text{ft}}{\text{sec}}\cdot t\;\text{sec}=31680-17t$
let y=0, then: $(31680-17t)^{2}=25t^{2}$
$t=1440\;sec=24\;minutes$
So part a) is simple. Part b), however, is giving me grief. Leaving the origin where it is, I'm able to see that the portion of the spill covering the street on which the runner is moving (5000 ft north) is encroaching on the runner according to $\sqrt{(5t)^{2}-5000^{2}}$. It appears to me that I can equate that with the runner's motion towards the spill and solve for time, but when I do I end up with a NASTY quadratic that seems to fall apart when I try to solve it. The back of my textbook demurely states "Wet in 25.4154041 minutes," which sounds reasonable, but my path there right now is much less direct than the runner's.
Advice? Or direction pointing? Thank you so much!
For the second part you have for the runner:
$x_r=31680-17t$
and
$y_r=5000$
When this meets the expanding pool you have:
$x_r^2+y_r^2=25t^2$
So we are looking for the smallest root of:
$(31680-17t)^2+5000^2=25t^2$
which simplifies to:
$264\,{t}^{2}-1077120\,t+1028622400=0$
and the smallest root of this is $1524.92$ s or about $25.4$ min
CB
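A quick numerical check of the smallest root (a sketch using only Python's standard library; the coefficients are taken from the working above):

```python
import math

# 264 t^2 - 1077120 t + 1028622400 = 0
a, b, c = 264, -1077120, 1028622400
disc = math.sqrt(b**2 - 4*a*c)
t_small = (-b - disc) / (2*a)
print(t_small, t_small / 60)  # ~1524.92 s, i.e. ~25.4154 min
```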
http://mathhelpforum.com/algebra/6153-proof-induction.html
# Thread: proof by induction
1. ## proof by induction
Show that (a^m)*(a^n) = a^(n+m), knowing that a^0 = 1 and a^n = a^(n-1)a. I know it's not a hard proof, but I just find myself using the property I'm trying to prove to solve the proof, which is definitely wrong. I just need someone else's input, if you know what I mean.
2. Originally Posted by action259
Show that (a^m)*(a^n) = a^(n+m), knowing that a^0 = 1 and a^n = a^(n-1)a. I know it's not a hard proof, but I just find myself using the property I'm trying to prove to solve the proof, which is definitely wrong. I just need someone else's input, if you know what I mean.
We know that
1. a^n = a^(n-1) * a ==> a^(n + 1) = a^n * a
2. a^0 = 1
as givens.
So n = 0 case:
a^m * a^0 = a^m * 1, by 2
= a^m = a^(m + 0)
Now assume that a^m * a^n = a^(m + n) for some n. We need to show that a^m * a^(n + 1) = a^(m + n + 1).
So
a^m * a^(n + 1) = a^m * a^n * a, by 1
= a^(m + n) * a, by hypothesis
Now, m + n is just some integer x.
= a^x * a = a^(x + 1), by 1
= a^(m + n + 1)
Thus a^m * a^n = a^(m + n) for n = 0 and thus for all integer n >= 0.
-Dan
Frankly proof of this by induction is a bit of overkill as far as I'm concerned. We can prove it to be true just by applying the definition of a^n.
3. Originally Posted by action259
show that (a^m)*(a^n)=a^(n+m) knowing that a^0=1 and a^n=a^(n-1)a. I know it's not a hard proof but i just find myself using the property i'm trying to prove to solve the proof which is deffinatly wrong, just need someone elses input if you know what i mean
For any m we note that,
(a^m)(a^0)=a^m=a^(m+0)
Now suppose that for some k,
(a^m)(a^k)=a^(m+k)
Multiply both sides by "a":
(a^m)(a^k)(a)=a^(m+k)(a)
Thus,
a^m(a^{k+1})=a^{m+k+1}
Q.E.D.
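Not a proof, of course, but the identity is easy to spot-check mechanically from the recursive definition used in this thread (a sketch in Python):

```python
def power(a, n):
    """a^n defined recursively: a^0 = 1, a^n = a^(n-1) * a."""
    return 1 if n == 0 else power(a, n - 1) * a

# Check (a^m)*(a^n) == a^(m+n) on a grid of small cases.
for a in (2, 3, 5):
    for m in range(6):
        for n in range(6):
            assert power(a, m) * power(a, n) == power(a, m + n)
print("identity holds on all tested cases")
```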
http://math.stackexchange.com/questions/100495/showing-that-int-01-fracx-1-lnx-mathrm-dt-ln2
# Showing that $\int_{0}^{1} \frac{x-1}{\ln(x)} \mathrm dx=\ln2$
I would like to show that
$$\int_{0}^{1} \frac{x-1}{\ln(x)} \mathrm dx=\ln2$$
What annoys me is that $x-1$ is the numerator so the geometric power series is useless.
Any idea?
Inverse question: how to compute $\int_0^1 \frac{\ln x}{x-1} d x$? – sdcvvc Jun 28 '13 at 13:32
This is a classic example of differentiating inside the integral sign.
In particular, let $$J(\alpha)=\int_0^1\frac{x^\alpha-1}{\log(x)}\;dx$$. Then one has that $$\frac{\partial}{\partial\alpha}J(\alpha)=\int_0^1\frac{\partial}{\partial\alpha}\frac{x^\alpha-1}{\log(x)}\;dx=\int_0^1x^\alpha\;dx=\frac{1}{\alpha+1}$$ and so we know that $\displaystyle J(\alpha)=\log(\alpha+1)+C$. Noting that $J(0)=0$ tells us that $C=0$ and so $J(\alpha)=\log(\alpha+1)$.
Thanks you very much! – Chon Jan 19 '12 at 18:06
(+1) nice answer. – Mhenni Benghorbal Jan 8 '13 at 7:29
$\displaystyle \int_{0}^{1}\frac{x-1}{\log{x}}\;{dx} = \int_{0}^{1}\int_{0}^{1}x^{t}\;{dt}\;{dx} =\int_{0}^{1}\int_{0}^{1}x^{t}\;{dx}\;{dt} = \int_{0}^{1}\frac{1}{1+t}\;{dt} = \log(2).$
+1, I really like this method! – Daniel Littlewood Dec 17 '12 at 19:45
(+1) nice technique. – Mhenni Benghorbal Jan 8 '13 at 7:30
Making the substitution $u=\ln x$, we get $$I=\int_{-\infty}^0\frac{e^u-1}u e^udu=-\int_0^{+\infty}\frac{e^{-2s}-e^{-s}}sds=\ln\frac 21=\ln 2,$$ since we recognize a Frullani integral type.
It is the first I have heard about Frullani integral. nice answer (+1). – Mhenni Benghorbal Jan 8 '13 at 7:32
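The value is also easy to confirm numerically; a sketch using SciPy's standard `quad` routine (the endpoint singularities of the integrand are removable, and the quadrature rule does not evaluate the endpoints themselves):

```python
import numpy as np
from scipy.integrate import quad

value, err = quad(lambda x: (x - 1) / np.log(x), 0, 1)
print(value, np.log(2))  # both ~0.6931471805599453
```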
https://www.gerad.ca/fr/papers/G-2009-27
Groupe d’études et de recherche en analyse des décisions
# Bounds and Conjectures for the Signless Laplacian Index of Graphs
## Pierre Hansen and Claire Lucas
Using the AutoGraphiX system, we obtain conjectures of the form $l(n) \leq q_1 \oplus i(G)\leq u(n)$ where $q_1$ denotes the signless Laplacian index of the graph G, $\oplus$ is one of the four operations $+,-,\times,/$, i(G) is another invariant chosen among minimum, average and maximum degree, average distance, diameter, radius, girth, proximity, remoteness, vertex, edge and algebraic connectivities, independence number, domination number, clique number, chromatic number, matching number and the Randić index, and l(n) and u(n) are best possible lower and upper bounds expressed as functions of the order n of G. Algebraic conjectures are obtained in 120 cases out of 152 and structural conjectures in 12 of the remaining cases. These conjectures are known, immediate or proved in this paper, except for 18 of them, which remain open.
, 24 pages
This paper was revised in December 2009
https://eng.libretexts.org/Bookshelves/Civil_Engineering/Book%3A_Fluid_Mechanics_(Bar-Meir)/11%3A_Compressible_Flow_One_Dimensional/11.5_Normal_Shock/11.5.1%3A_Solution_of_the_Governing_Equations/11.5.1.1%3A_The_Star_Conditions
# 11.5.1.1: The Star Conditions
The speed of sound at the critical condition can also be a good reference velocity. This critical speed of sound is given by
$c^{*} = \sqrt{k\,R\,T^{*}} \label{shock:eq:starSpeedSound}$
In the same manner, an additional Mach number can be defined as
$M^{*} = \dfrac{U }{ c^{*}} \label{shock:eq:starMach}$
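A standard companion relation (quoted here without derivation; $k$ is the ratio of specific heats, as in the formula above) connects $M^{*}$ to the ordinary Mach number $M$:

$M^{*\,2} = \dfrac{(k+1)\, M^{2}}{2 + (k-1)\, M^{2}}$

so that $M^{*}$ approaches the finite value $\sqrt{(k+1)/(k-1)}$ as $M \to \infty$.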
https://undergroundmathematics.org/sequences
## What are some interesting sequences of numbers?
### Key questions
1. What is the difference between a sequence and a series?
2. What properties does an arithmetic progression have?
3. What properties does a geometric progression have?
4. How can we evaluate sums such as $\sum_{k=1}^n k$ and $\sum_{k=1}^n k^2$? (The standard closed forms are recalled just after this list.)
5. When and how can we sum an infinite series?
6. What does it mean for a sequence or series to converge or diverge?
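For reference, the closed forms behind key question 4 are

$\sum_{k=1}^n k = \frac{n(n+1)}{2} \qquad \text{and} \qquad \sum_{k=1}^n k^2 = \frac{n(n+1)(2n+1)}{6}.$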
#### Introducing...
| Resource type | Title |
| --- | --- |
| Rich example | Change one thing |
| Building blocks | Sort it out |
| Food for thought | Square spirals |
#### Developing...
| Resource type | Title |
| --- | --- |
| Rich example | Bouncing to nothing |
| Package of problems | Common terms |
| Many ways problem | Can you find... series edition |
| Food for thought | Connect three? |
| Investigation | Same or different? |
| Go and think about it... | A puzzling pentagon |
| Bigger picture | Achilles and the tortoise |
#### Review questions
| Title | Ref |
| --- | --- |
| Can we find the sum of the integers from $2k$ to $4k$ inclusive? | R5849 |
| Can we show $L < \int_1^r \frac{1}{x} \:dx < R$? | R5602 |
| Can we sum $1^2-2^2+3^2-4^2+\dotsb+ (2n-1)^2-(2n)^2$? | R9819 |
| Can we sum from $1000$ to $2000$ excluding multiples of 5? | R7424 |
| Can we sum the first $2n$ terms of $1,1,2,\frac{1}{2},4,\frac{1}{4},8,\frac{1}{8},\ldots$? | R6257 |
| Can we sum the first $2n$ terms of $1^2-3^2+5^2-7^2+\cdots$? | R5416 |
| Find an expression for the sum of $r^2$ | R6143 |
| Given $S_n = S_{3n}$, can we express $a$ in terms of $n$? | R9613 |
| Given the sequence $(x_n)$, what is $\sum_{k=0}^\infty 1/x_k$? | R8248 |
| How are the $p$th, $q$th and $r$th terms of an AP related? | R6137 |
| How are the sums of $n$, $2n$ and $3n$ terms connected here? | R8493 |
| How are these recursively defined sequences related? | R6555 |
| How do we add the odd-numbered terms of a geometric series? | R8617 |
| If $27$, $x$, $y$ are in GP, with sum $21$, what are $x$ and $y$? | R7258 |
| If $S_n = 6 - 2^{n+1}/3^{n-1}$, can we show that we have a GP? | R8163 |
| If $u_n = a + bn + c2^n$, what's the sum of the first $n$ terms? | R6153 |
| If $x_1 = (x_0 + 2)/(x_0 + 1)$, can we show $\sqrt{2}$ lies between $x_0$ and $x_1$? | R8404 |
| If $x_2 = 1/(1 - x_1), x_3 = 1/(1 - x_2)$, can we show $x_1x_2x_3+1=0$? | R5995 |
| If AP terms $a_4,a_8, a_{16}$ are in GP, are $a_3, a_6, a_{12}$ in GP too? | R5345 |
| If a GP has $S_n=(3^n - 2^n)/2^{n-5}$, what's its common ratio? | R6608 |
| If an AP has $S_{10}=3S_5$, what's the ratio of $u_{10}$ to $u_5$? | R8907 |
| Should I hire a TV or borrow to buy it? | R9375 |
| What can we say if $a, x_1, x_2, x_3, x_4, x_5, b$ are in arithmetic progression? | R6405 |
| What's the sum of the $r^{th}$ bracket in $(1), (2,3), (4,5,6),\cdots$? | R7240 |
| When does $1 + 2/3 + (2/3)^2 + \dotsb$ first exceed $0.9999S_\infty$? | R9185 |
| When does $1-2+3-4+5-\cdots +(-1)^{n+1}n$ reach $100$? | R8998 |
| When does the sum of this series equal $60$? | R8121 |
| When does the sum of this series first exceed $2999/4000$? | R7487 |
http://electromaniacs.com/content/view/197/9/
# Bode plot
A Bode plot, named for Hendrik Wade Bode, is usually a combination of a Bode magnitude plot and Bode phase plot:
A Bode magnitude plot is a graph of log magnitude against log frequency often used in signal processing to show the transfer function or frequency response of an LTI system.
It makes multiplication of magnitudes a simple matter of adding distances on the graph, since
$\log(a \cdot b) = \log(a) + \log(b)\,$
The Bode plot describes the output response of a frequency-dependent system for a normalised input. The magnitude axis of the Bode plot is often converted directly to decibels.
A Bode phase plot is a graph of phase against log frequency, usually used in conjunction with the magnitude plot, to evaluate how much a frequency will be phase-shifted. For example a signal described by: Asin(ωt) may be attenuated but also phase-shifted. If the system attenuates it by a factor x and phase shifts it by −Φ the signal out of the system will be (A/x) sin(ωt − Φ). The phase shift Φ is generally a function of frequency.
The magnitude and phase Bode plots can seldom be changed independently of each other — if you change the amplitude response of the system you will most likely change the phase characteristics as well and vice versa. For minimum-phase systems the phase and amplitude characteristics can be obtained from each other with the use of the Hilbert Transform.
If the transfer function is a rational function, then the Bode plot can be approximated with straight lines. These asymptotic approximations are called straight line Bode plots or uncorrected Bode plots and are useful because they can be drawn by hand following a few simple rules. Simple plots can even be predicted without drawing them.
The approximation can be taken further by correcting the value at each cutoff frequency. The plot is then called a corrected Bode plot.
## Rules for hand-made Bode plot
The main idea about Bode plots is that one can think of the log of a function in the form:
$f(x) = A \prod (x + c_n)^{a_n}$
as a sum of the logs of its poles and zeros:
$\log(f(x)) = \log(A) + \sum a_n \log(x + c_n)$
This idea is used explicitly in the method for drawing phase diagrams. The method for drawing amplitude plots implicitly uses this idea, but since the log of the amplitude of each pole or zero always starts at zero and only has one asymptote change (the straight lines), the method can be simplified.
### Straight-line amplitude plot
Amplitude in decibels is usually computed as $20\log_{10}(X)$. Given a transfer function in the form
$H(s) = A \prod \frac{(s + x_n)^{a_n}}{(s + y_n)^{b_n}}$
where s = jω, xn and yn are constants, and H is the transfer function:
• at every value of s where ω = xn (a zero), increase the slope of the line by $20 \cdot a_n\ \mathrm{dB}$ per decade.
• at every value of s where ω = yn (a pole), decrease the slope of the line by $20 \cdot b_n\ \mathrm{dB}$ per decade.
• The initial value of the graph depends on your boundaries. The initial point is found by putting the initial angular frequency ω into the function and finding |H(jω)|.
• The initial slope of the function at the initial value depends on the number and order of zeros and poles that are at values below the initial value, and are found using the first two rules.
To handle irreducible 2nd order polynomials, $ax^2 + bx + c \$ can, in many cases, be approximated as $(\sqrt{a}x + \sqrt{c})^2$.
Note that zeros and poles happen when ω is equal to a certain xn or yn. This is because the function in question is the magnitude of H(jω), and since it is a complex function, $|H(j\omega)| = \sqrt{H \cdot H^* }$. Thus at any place where there is a zero or pole involving the term (s + xn), the magnitude of that term is $\sqrt{(x_n + j\omega) \cdot (x_n - j\omega)}= \sqrt{x_n^2+\omega^2}$.
### Corrected amplitude plot
To correct a straight-line amplitude plot:
• at every zero, put a point $3 \cdot a_n\ \mathrm{dB}$ above the line,
• at every pole, put a point $3 \cdot b_n\ \mathrm{dB}$ below the line,
• draw a smooth line through those points using the straight lines as asymptotes (lines which the curve approaches).
Note that this correction method does not incorporate how to handle complex values of xn or yn. In the case of an irreducible polynomial, the best way to correct the plot is to actually calculate the magnitude of the transfer function at the pole or zero corresponding to the irreducible polynomial, and put that dot over or under the line at that pole or zero.
### Straight-line phase plot
Given a transfer function in the same form as above:
$H(s) = A \prod \frac{(s + x_n)^{a_n}}{(s + y_n)^{b_n}}$
the idea is to draw separate plots for each pole and zero, then add them up. The actual phase curve is given by $- \mathbf{arctan}\bigg(\frac{\mathbf{im}[H(s)]}{\mathbf{re}[H(s)]}\bigg)$
To draw the phase plot, for each pole and zero:
• if A is positive, start line (with zero slope) at 0 degrees,
• if A is negative, start line (with zero slope) at 180 degrees,
• for a zero, slope the line up at $45 \cdot a_n$ degrees per decade when $\omega = \frac{x_n}{10}$,
• for a pole, slope the line down at $45 \cdot b_n$ degrees per decade when $\omega = \frac{y_n}{10}$,
• flatten the slope again when the phase has changed by $90 \cdot a_n$ degrees (for a zero) or $90 \cdot b_n$ degrees (for a pole),
• After plotting one line for each pole or zero, add the lines together.
## Example
A lowpass RC filter, for instance has the following frequency response:
$H(f) = \frac{1}{1+j2\pi f R C}$
The cutoff frequency point fc (in hertz) is at the frequency
$f_\mathrm{c} = {1 \over {2\pi RC}}$.
The line approximation of the Bode plot consists of two lines:
• for frequencies below fc it is a horizontal line at 0 dB,
• for frequencies above fc it is a line with a slope of −20 dB per decade.
These two lines meet at the cutoff frequency. From the plot it can be seen that for frequencies well below the cutoff frequency the circuit has an attenuation of 0 dB, the filter does not change the amplitude. Frequencies above the cutoff frequency are attenuated - the higher the frequency, the higher the attenuation.
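The example above is easy to reproduce; here is a minimal sketch using NumPy and Matplotlib (the component values are illustrative, not from the text):

```python
import numpy as np
import matplotlib.pyplot as plt

R, C = 1e3, 1e-6                      # illustrative values: 1 kOhm, 1 uF
fc = 1 / (2 * np.pi * R * C)          # cutoff frequency, ~159 Hz

f = np.logspace(0, 5, 500)            # 1 Hz to 100 kHz
H = 1 / (1 + 1j * 2 * np.pi * f * R * C)

# Exact magnitude response in dB, plus the two-line approximation.
mag_db = 20 * np.log10(np.abs(H))
approx = np.where(f < fc, 0.0, -20 * np.log10(f / fc))

plt.semilogx(f, mag_db, label='exact')
plt.semilogx(f, approx, '--', label='straight-line approximation')
plt.axvline(fc, color='gray', linewidth=0.5)
plt.xlabel('frequency [Hz]'); plt.ylabel('|H| [dB]'); plt.legend()
plt.show()
```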
http://mathoverflow.net/questions/91877/compound-distribution-in-bayesian-sense-vs-compound-distribution-as-random-sum
# compound distribution in Bayesian sense vs. compound distribution as random sum?
I'm trying to sort out two different uses of the term "compound distribution" and figure out the relationship.
The Wikipedia article on compound distribution -- which I wrote -- defines a compound distribution as an infinite mixture, i.e. if $p(x|a)$ is a distribution of type F, and $p(a|b)$ is a distribution of type G, then $p(x|b) = \int_a p(x|a) p(a|b) da$ is a compound distribution that results from compounding F with G. This is the distribution of prior and posterior predictive distributions in Bayesian statistics.
However, the term "compound distribution" has another meaning as a random sum, i.e. a sum of i.i.d. variables where the number of variables is random.
What's the relation between the two? And am I using "compound distribution" correctly for the first definition?
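For concreteness, here is a small simulation sketch of both usages (NumPy only; the Gamma-Poisson choice is just a convenient example of the first sense, since that mixture is the negative binomial):

```python
import numpy as np

rng = np.random.default_rng(0)

# Sense 1 (infinite mixture): a ~ G = Gamma(shape, scale), then x | a ~ F = Poisson(a).
shape, scale = 3.0, 2.0
a = rng.gamma(shape, scale, size=50_000)
x = rng.poisson(a)
print(x.mean(), shape * scale)        # mixture mean equals E[a]

# Sense 2 (random sum): N ~ Poisson, S = X_1 + ... + X_N with iid X_i.
N = rng.poisson(4.0, size=50_000)
S = np.array([rng.exponential(1.0, n).sum() for n in N])
print(S.mean(), 4.0 * 1.0)            # Wald's identity: E[S] = E[N] E[X]
```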
http://math.stackexchange.com/questions/208944/prove-the-following-identity-without-using-induction-sum-limits-i-0n-ai/208948
# Prove the following identity without using induction: $\sum\limits_{i=0}^n a^i = \frac{1-a^{n+1}}{1-a}$ for $a\neq1$
I'm struggling with proving $$\sum^n_{i=0}a^i = \frac{1-a^{n+1}}{1-a}$$ for $a\neq1$, not sure where to start.
Do you mean $a^{n+1}$? – draks ... Oct 7 '12 at 22:25
The question cannot be answered without a precise definition of a "proof without induction". For example, is a proof that invokes a lemma that uses induction considered to be a proof without induction? – Bill Dubuque Oct 7 '12 at 22:41
No matter how you organize your proof, you will use induction, maybe implicitly. Here is an attempt to show a proof that seems not to use induction. Let $$S=1+a+a^2+\ldots+a^n$$ then $$a S=a(1+a+a^2+\ldots+a^n)=a+a^2+a^3+\ldots+a^{n+1}$$ and $$(1-a)S=S-aS=(1+a+a^2+\ldots+a^n)-(a+a^2+a^3+\ldots+a^{n+1})=1-a^{n+1}$$ Since $(1-a)S=1-a^{n+1}$, then dividing by $1-a$ we get $$S=\frac{1-a^{n+1}}{1-a}$$
Where is the induction ? – Belgi Oct 7 '12 at 22:31
In the cancelation. – Norbert Oct 7 '12 at 22:32
I don't think this one use induction. – Patrick Li Oct 7 '12 at 22:38
Great answer, thanks very much! – mirai Oct 7 '12 at 22:44
Every time you manipulate an unknown number of summands you use induction, explicitly or implicitly – Norbert Oct 7 '12 at 22:44
Let $f(0)=0$ and for $k \gt 0$ let $f(k)=\dfrac{1-a^{k+1}}{1-a}$. Note that for $k\ge 2$ $$f(k)-f(k-1)=\frac{1-a^{k+1}}{1-a}-\frac{1-a^k}{1-a}=\frac{a^k(1-a)}{1-a}=a^k,$$ while $f(1)-f(0)=\frac{1-a^2}{1-a}=1+a$, which supplies the first two terms. It follows that $$1+a+a^2+\cdots +a^n=(f(1)-f(0))+(f(2)-f(1))+(f(3)-f(2))+\cdots +(f(n)-f(n-1)).$$ Open the parentheses on the right, and observe the mass cancellation (telescoping).
Remark: Any of the usual sum results can be mechanically transformed in an analogous way into a telescoping argument.
As for avoiding induction, one cannot, and the above proof does not. Telescoping that involves $5$ terms, or $500$ terms, does not require induction. Telescoping with "$n+1$" terms does require induction.
Almost as in the other answers, consider $P=(1-X)(\sum_{i=0}^nX^i)$. From the way it is formed, $P$ is a polynomial in $X$ of degree $1+n$ (did I use induction yet?). Now let $i\in\{1,\ldots,n\}$ and consider the coefficient $c_i$ of $X^i$ in $P$. From the definition of polynomial multiplication we have $c_i=1\times1_i-1\times 1_{i-1}=0$, where the subscripts on the $1$'s are just to indicate the degree of the term of $\sum_{i=0}^nX^i$ that they were obtained as coefficient of (these coefficients are all $1$ up to degree $n$ inclusive). Also the constant term of $P$ is $1\times 1=1$, and the leading term of $P$ is $-X\cdot X^n=-X^{n+1}$. So $P=1-X^{n+1}$, as there cannot be any other terms. Now substitute $X:=a$ to obtain $(1-a)(\sum_{i=0}^na^i)=1-a^{n+1}$ and finally divide by $1-a\neq0$.
It is a bit tiresome, but I think you will find it hard to point out where I used induction. And I didn't even write ellipses once!
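The cancellation can also be watched happening symbolically; a sketch with SymPy for one fixed degree (the pattern is identical for every $n$):

```python
import sympy as sp

x = sp.symbols('x')
n = 7
P = sp.expand((1 - x) * sum(x**i for i in range(n + 1)))
print(P)  # 1 - x**8, i.e. 1 - x**(n+1)
```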
https://www.siyavula.com/read/science/grade-11/types-of-reactions/13-types-of-reactions-01
# Chapter 13: Types of reactions
In this chapter learners will explore acid-base reactions and redox reactions. Redox reactions were briefly introduced in grade 10. The concepts of acids, bases, reduction, oxidation and oxidation numbers are all introduced here. The following list provides a summary of the topics covered in this chapter.
• Acids and bases.
This chapter begins by revising all the concepts done on acids and bases up to this point. Learners are reminded what an acid and a base are (in particular the Bronsted-Lowry definition) and how the definition and concept have changed over time. Although the most recent definition of an acid and a base is the Lewis definition, this is not covered at school level, and the Bronsted-Lowry definition serves as a good working model for the acids and bases that learners encounter at school.
The concept of a polyprotic acid is introduced although it is not in CAPS. This is done to help learners understand how to handle acids such as sulfuric acid in reactions. You should try to use polyprotic acids sparingly in your examples.
• Conjugate acids and bases and amphoteric (amphiprotic) substances.
The concept of conjugate acids and bases requires learners to think about reactions going in reverse. By writing the equation in reverse, learners can see how the acid becomes a base. This base is said to be the conjugate base of the acid since it is conjugated (linked) to the acid.
• Acid-hydroxide, acid-oxide and acid-carbonate reactions.
Three different types of bases are examined in detail to see how they react with acids. Several examples of each type are given and the general equation for the reaction is also given.
• Oxidation numbers for compounds.
This topic is placed after redox reactions in CAPS but must be taught before redox reactions and so is placed before redox reactions in this book. This topic provides the tools needed to understand redox reactions.
• Balancing redox reactions.
In grade 10 learners learnt how to balance chemical equations by inspection. In this topic they will learn how to balance redox reactions which often cannot be balanced by inspection. The simpler examples can be balanced by inspection and this can be used as a comparison for the two techniques. Learners need to be able to break a reaction up into two parts and follow different chemical species through an equation. This skill starts with conjugate acids and bases and carries over into this topic.
Coloured text has been used as a tool to highlight different parts of reactions. Ensure that learners understand that the coloured text does not mean there is anything special about that part of the reaction, this is simply a teaching tool to help them identify the important parts of the reaction.
It is also important to note that this chapter is split across term 3 and term 4. Acids and bases should be completed in term 3 and redox reactions are done in term 4.
All around you there are chemical reactions taking place. Green plants are photosynthesising, car engines are relying on the reaction between petrol and air and your body is performing many complex reactions. In this chapter we will look at two common types of reactions that can occur in the world around you and in the chemistry laboratory. These two types of reactions are acid-base reactions and redox reactions.
## Household acids and bases
Look around your home and school and find examples of acids and bases. Remember that foods can also be acidic or basic.
Make a list of all the items you find. Why do you think they are acids or bases?
Some common acids and bases, and their chemical formulae, are shown in Table 13.1.
| Acid | Formula | Base | Formula |
| --- | --- | --- | --- |
| Hydrochloric acid | $$\text{HCl}$$ | Sodium hydroxide | $$\text{NaOH}$$ |
| Sulfuric acid | $$\text{H}_{2}\text{SO}_{4}$$ | Potassium hydroxide | $$\text{KOH}$$ |
| Sulfurous acid | $$\text{H}_{2}\text{SO}_{3}$$ | Sodium carbonate | $$\text{Na}_{2}\text{CO}_{3}$$ |
| Acetic (ethanoic) acid | $$\text{CH}_{3}\text{COOH}$$ | Calcium hydroxide | $$\text{Ca}(\text{OH})_{2}$$ |
| Carbonic acid | $$\text{H}_{2}\text{CO}_{3}$$ | Magnesium hydroxide | $$\text{Mg}(\text{OH})_{2}$$ |
| Nitric acid | $$\text{HNO}_{3}$$ | Ammonia | $$\text{NH}_{3}$$ |
| Phosphoric acid | $$\text{H}_{3}\text{PO}_{4}$$ | Sodium bicarbonate | $$\text{NaHCO}_{3}$$ |
Table 13.1: Some common acids and bases and their chemical formulae.
Most acids share certain characteristics, and most bases also share similar characteristics. It is important to be able to have a definition for acids and bases so that they can be correctly identified in reactions.
### Defining acids and bases (ESBQW)
One of the first things that was noted about acids is that they have a sour taste. Bases were noted to have a soapy feel and a bitter taste. However you cannot go around tasting and feeling unknown substances since they may be harmful. Also when chemists started to write down chemical reactions more practical definitions were needed.
A number of definitions for acids and bases have developed over the years. One of the earliest was the Arrhenius definition. Arrhenius (1887) noticed that water dissociates (splits up) into hydronium $$(\text{H}_{3}\text{O}^{+})$$ and hydroxide $$(\text{OH}^{-})$$ ions according to the following equation:
$$2\text{H}_{2}\text{O (l)} \rightarrow \text{H}_{3}\text{O}^{+}\text{(aq)} + \text{OH}^{-}\text{(aq)}$$
For more information on dissociation, refer to Grade 10 (chapter 18: reactions in aqueous solution).
Arrhenius described an acid as a compound that increases the concentration of $$\text{H}_{3}\text{O}^{+}$$ ions in solution and a base as a compound that increases the concentration of $$\text{OH}^{-}$$ ions in solution.
Look at the following examples showing the dissociation of hydrochloric acid and sodium hydroxide (a base) respectively:
1. $$\text{HCl (aq)} + \text{H}_{2}\text{O}\text{(l)} \rightarrow \text{H}_{3}\text{O}^{+}\text{(aq)} + \text{Cl}^{-}\text{(aq)}$$
Hydrochloric acid in water increases the concentration of $$\text{H}_{3}\text{O}^{+}$$ ions and is therefore an acid.
2. $$\text{NaOH (s)} \stackrel{\text{H}_{2}\text{O}}{\longrightarrow} \text{Na}^{+}\text{(aq)} + \text{OH}^{-}\text{(aq)}$$
Sodium hydroxide in water increases the concentration of $$\text{OH}^{-}$$ ions and is therefore a base.
Note that we write $$\stackrel{\text{H}_{2}\text{O}}{\longrightarrow}$$ to indicate that water is needed for the dissociation.
However, this definition could only be used for acids and bases in water. Since there are many reactions which do not occur in water it was important to come up with a much broader definition for acids and bases.
In 1923, Lowry and Bronsted took the work of Arrhenius further to develop a broader definition for acids and bases. The Bronsted-Lowry model defines acids and bases in terms of their ability to donate or accept protons.
Acids
A Bronsted-Lowry acid is a substance that gives away protons (hydrogen cations $$\text{H}^{+}$$), and is therefore called a proton donor.
Bases
A Bronsted-Lowry base is a substance that takes up protons (hydrogen cations $$\text{H}^{+}$$), and is therefore called a proton acceptor.
Below are some examples:
1. $$\text{HCl (aq)} + \text{NH}_{3}\text{(aq)} \rightarrow \text{NH}_{4}^{+}\text{(aq)} + \text{Cl}^{-}\text{(aq)}$$
We highlight the chlorine and the nitrogen so that we can follow what happens to these two elements as they react. We do not highlight the hydrogen atoms, since it is the movement of these atoms between species that we want to observe. This colour coding is simply to help you identify the parts of the reaction and does not represent any specific property of these elements.
$$\text{H}{\color{red}{\text{Cl}}} \text{ (aq)} + {\color{blue}{\text{N}}}\text{H}_{3}\text{(aq)} \rightarrow {\color{blue}{\text{N}}}\text{H}_{4}^{+}\text{(aq)} + {\color{red}{\text{Cl}}}^{-}\text{(aq)}$$
In order to decide which substance is a proton donor and which is a proton acceptor, we need to look at what happens to each reactant. The reaction can be broken down as follows:
$$\text{H}{\color{red}{\text{Cl}}}\text{ (aq)} \rightarrow {\color{red}{\text{Cl}}}^{-}\text{(aq)}$$ and
$${\color{blue}{\text{N}}}\text{H}_{3}\text{(aq)} \rightarrow {\color{blue}{\text{N}}}\text{H}_{4}^{+}\text{(aq)}$$
From these reactions, it is clear that $$\text{HCl}$$ is a proton donor and is therefore an acid, and that $$\text{NH}_{3}$$ is a proton acceptor and is therefore a base.
2. $$\text{CH}_{3}\text{COOH (aq)} + \text{H}_{2}\text{O (l)} \rightarrow \text{H}_{3}\text{O}^{+}\text{(aq)} + \text{CH}_{3}\text{COO}^{-}\text{(aq)}$$
Again we highlight the parts of the reactants that we want to follow in this reaction:
$${\color{red}{\text{CH}_{3}\text{COO}}}\text{H (aq)} + \text{H}_{2}{\color{blue}{\text{O}}}\text{ (l)} \rightarrow \text{H}_{3}{\color{blue}{\text{O}}}^{+}\text{(aq)} + {\color{red}{\text{CH}_{3}\text{COO}}}^{-}\text{(aq)}$$
The reaction can be broken down as follows:
$${\color{red}{\text{CH}_{3}\text{COO}}}\text{H (aq)} \rightarrow {\color{red}{\text{CH}_{3}\text{COO}}}^{-}\text{(aq)}$$ and
$$\text{H}_{2}{\color{blue}{\text{O}}}\text{ (l)} \rightarrow \text{H}_{3}{\color{blue}{\text{O}}}^{+}\text{(aq)}$$
In this reaction, $$\text{CH}_{3}\text{COOH}$$ (acetic acid or vinegar) is a proton donor and is therefore the acid. In this case, water acts as a base because it accepts a proton to form $$\text{H}_{3}\text{O}^{+}$$.
3. $$\text{NH}_{3}\text{(aq)} + \text{H}_{2}\text{O (l)} \rightarrow \text{NH}_{4}^{+}\text{(aq)} + \text{OH}^{-}\text{(aq)}$$
Again we highlight the parts of the reactants that we want to follow in this reaction:
$${\color{blue}{\text{N}}}\text{H}_{3}\text{(aq)} + \text{H}_{2}{\color{red}{\text{O}}}\text{ (l)} \rightarrow {\color{blue}{\text{N}}}\text{H}_{4}^{+}\text{(aq)} + {\color{red}{\text{O}}}\text{H}^{-}\text{(aq)}$$
The reaction can be broken down as follows:
$$\text{H}_{2}{\color{red}{\text{O}}}\text{ (l)} \rightarrow {\color{red}{\text{O}}}\text{H}^{-}\text{(aq)}$$ and
$${\color{blue}{\text{N}}}\text{H}_{3}\text{(aq)} \rightarrow {\color{blue}{\text{N}}}\text{H}_{4}^{+}\text{(aq)}$$
Water donates a proton and is therefore an acid in this reaction. Ammonia accepts the proton and is therefore the base.
Notice in these examples how we looked at the common elements to break the reaction into two parts. So in the first example we followed what happened to chlorine to see if it was part of the acid or the base. And we also followed nitrogen to see if it was part of the acid or the base. You should also notice how in the reaction for the acid there is one less hydrogen on the right hand side and in the reaction for the base there is an extra hydrogen on the right hand side.
#### Amphoteric substances
In examples $$\text{2}$$ and $$\text{3}$$ above we notice an interesting thing about water. In example $$\text{2}$$ we find that water acts as a base (it accepts a proton). In example $$\text{3}$$ however we see that water acts as an acid (it donates a proton)!
Depending on what water is reacting with, it can react either as a base or as an acid. Water is said to be amphoteric. Water is not unique in this respect; several other substances are also amphoteric.
Amphoteric
An amphoteric substance is one that can react as either an acid or base.
When we look just at Bronsted-Lowry acids and bases we can also talk about amphiprotic substances, which are a special type of amphoteric substance.
Amphiprotic
An amphiprotic substance is one that can react as either a proton donor (Bronsted-Lowry acid) or as a proton acceptor (Bronsted-Lowry base). Examples of amphiprotic substances include water, hydrogen carbonate ion ($$\text{HCO}_{3}^{-}$$) and hydrogen sulfate ion ($$\text{HSO}_{4}^{-}$$).
Note: You may also see the term ampholyte used to mean a substance that can act as both an acid and a base. This term is no longer in general use in chemistry.
#### Polyprotic acids
A polyprotic (many protons) acid is an acid that has more than one proton that it can donate. For example sulfuric acid can donate one proton to form the hydrogen sulfate ion:
$\text{H}_{2}\text{SO}_{4}\text{(aq)} + \text{OH}^{-}\text{(aq)} \rightarrow \text{HSO}_{4}^{-}\text{(aq)} + \text{H}_{2}\text{O (l)}$
Or it can donate two protons to form the sulfate ion:
$\text{H}_{2}\text{SO}_{4}\text{(aq)} + 2\text{OH}^{-}\text{(aq)} \rightarrow \text{SO}_{4}^{2-}\text{(aq)} + 2\text{H}_{2}\text{O (l)}$
In this chapter we will mostly consider monoprotic acids (acids with only one proton to donate). If you do see a polyprotic acid in a reaction then write the resulting reaction equation with the acid donating all its protons.
Some examples of polyprotic acids are: $$\text{H}_{2}\text{SO}_{4}$$, $$\text{H}_{2}\text{SO}_{3}$$, $$\text{H}_{2}\text{CO}_{3}$$ and $$\text{H}_{3}\text{PO}_{4}$$.
## Acids and bases
Exercise 13.1
Identify the Bronsted-Lowry acid and the Bronsted-Lowry base in the following reactions:
$$\text{HNO}_{3}\text{(aq)} + \text{NH}_{3}\text{(aq)} \rightarrow \text{NO}_{3}^{-}\text{ (aq)} + \text{NH}_{4}^{+} \text{ (aq)}$$
We break the reaction into two parts:
$$\text{HNO}_{3}\text{ (aq)} \rightarrow \text{NO}_{3}^{-}\text{(aq)}$$ and
$$\text{NH}_{3}\text{(aq)} \rightarrow \text{NH}_{4}^{+}\text{(aq)}$$
From this we see that the Bronsted-Lowry acid is $$\text{HNO}_{3}$$ and the Bronsted-Lowry base is $$\text{NH}_{3}$$.
$$\text{HBr (aq)} + \text{KOH (aq)} \rightarrow \text{KBr (aq)} + \text{H}_{2}\text{O (l)}$$
We break the reaction into two parts:
$$\text{HBr (aq)} \rightarrow \text{KBr (aq)}$$ and
$$\text{KOH (aq)} \rightarrow \text{H}_{2}\text{O (l)}$$
From this we see that the Bronsted-Lowry acid is $$\text{HBr}$$ and the Bronsted-Lowry base is $$\text{KOH}$$.
Write a reaction equation to show $$\text{HCO}_{3}^{-}$$ acting as an acid.
$$\text{HCO}_{3}^{-}\text{ (aq)} \rightarrow \text{CO}_{3}^{2-}\text{(aq)} + \text{H}^{+}\text{(aq)}$$
Write a reaction equation to show $$\text{HCO}_{3}^{-}$$ acting as a base.
$$\text{HCO}_{3}^{-}\text{(aq)} + \text{H}^{+}\text{(aq)} \rightarrow \text{H}_{2}\text{CO}_{3}\text{(aq)}$$
Compounds such as $$\text{HCO}_{3}^{-}$$ are $$\ldots$$
Amphoteric (more specifically, amphiprotic).
### Conjugate acid-base pairs (ESBQX)
Look at the reaction between hydrochloric acid and ammonia to form ammonium and chloride ions (again we have highlighted the different parts of the equation):
$${\color{red}{\text{HCl}}}\text{ (aq)} + {\color{blue}{\text{NH}_{3}}}\text{(aq)} \rightarrow {\color{blue}{\text{NH}_{4}^{+}}}\text{(aq)} + {\color{red}{\text{Cl}^{-}}}\text{(aq)}$$
We look at what happens to each of the reactants in the reaction:
$$\text{HCl (aq)} \rightarrow \text{Cl}^{-}\text{(aq)}$$ and
$$\text{NH}_{3}\text{(aq)} \rightarrow \text{NH}_{4}^{+}\text{(aq)}$$
We see that $$\text{HCl}$$ acts as the acid and $$\text{NH}_{3}$$ acts as the base.
But what if we actually had the following reaction:
$${\color{blue}{\text{NH}_{4}^{+}}}\text{(aq)} + {\color{red}{\text{Cl}^{-}}}\text{(aq)} \rightarrow {\color{red}{\text{HCl}}}\text{ (aq)} + {\color{blue}{\text{NH}_{3}}}\text{(aq)}$$
This is the same reaction as the first one, but the products are now the reactants.
Now if we look at what happens to each of the reactants we see the following:
$$\text{NH}_{4}^{+}\text{(aq)} \rightarrow \text{NH}_{3}\text{(aq)}$$ and
$$\text{Cl}^{-}\text{(aq)} \rightarrow \text{HCl (aq)}$$
We see that $$\text{NH}_{4}^{+}$$ acts as the acid and $$\text{Cl}^{-}$$ acts as the base.
Up to now you have looked at reactions as starting with the reactants and going to the products. For acids and bases we also need to consider what happens if we swap the reactants and the products around. This will help you understand conjugate acid-base pairs.
When $$\text{HCl}$$ (the acid) loses a proton it forms $$\text{Cl}^{-}$$ (the base), and when $$\text{Cl}^{-}$$ (the base) gains a proton it forms $$\text{HCl}$$ (the acid). We call these two species a conjugate acid-base pair. Similarly $$\text{NH}_{3}$$ and $$\text{NH}_{4}^{+}$$ form a conjugate acid-base pair.
The word conjugate means coupled or connected.
We can represent this as: $$\text{HCl} \rightleftharpoons \text{H}^{+} + \text{Cl}^{-}$$ (conjugate acid on the left, conjugate base on the right), and similarly $$\text{NH}_{4}^{+} \rightleftharpoons \text{H}^{+} + \text{NH}_{3}$$.
## Conjugate acid-base pairs
Using the common acids and bases in Table 13.1, pick an acid and a base from the list. Write a chemical equation for the reaction of these two compounds.
Now identify the conjugate acid-base pairs in your chosen reaction. Compare your results to those of your classmates.
## Acids and bases
Exercise 13.2
In each of the following reactions, label the conjugate acid-base pairs.
$$\text{H}_{2}\text{SO}_{4}\text{(aq)} + \text{H}_{2}\text{O (l)} \rightarrow \text{H}_{3}\text{O}^{+}\text{(aq)} + \text{HSO}_{4}^{-}\text{(aq)}$$
$$\text{NH}_{4}^{+}\text{(aq)} + \text{F}^{-}\text{(aq)} \rightarrow \text{HF}\text{(aq)} + \text{NH}_{3}\text{(aq)}$$
$$\text{H}_{2}\text{O (l)} + \text{CH}_{3}\text{COO}^{-}\text{(aq)} \rightarrow \text{CH}_{3}\text{COOH (aq)} + \text{OH}^{-}\text{(aq)}$$
$$\text{H}_{2}\text{SO}_{4}\text{(aq)} + \text{Cl}^{-}\text{(aq)} \rightarrow \text{HCl (aq)} + \text{HSO}_{4}^{-}\text{(aq)}$$
Given the following reaction:
$\text{H}_{2}\text{O (l)} + \text{NH}_{3}\text{(aq)} \rightarrow \text{NH}_{4}^{+}\text{(aq)} + \text{OH}^{-}\text{(aq)}$
Write down which reactant is the base and which is the acid.
We break the reaction into two parts:
$$\text{H}_{2}\text{O (l)} \rightarrow \text{OH}^{-}\text{(aq)}$$ and
$$\text{NH}_{3}\text{(aq)} \rightarrow \text{NH}_{4}^{+}\text{(aq)}$$
From this we see that the Bronsted-Lowry acid is $$\text{H}_{2}\text{O}$$ and the Bronsted-Lowry base is $$\text{NH}_{3}$$.
Label the conjugate acid-base pairs.

The pairs are $$\text{H}_{2}\text{O}/\text{OH}^{-}$$ (conjugate acid/conjugate base) and $$\text{NH}_{4}^{+}/\text{NH}_{3}$$ (conjugate acid/conjugate base).
In your own words explain what is meant by the term conjugate acid-base pair.
A conjugate acid-base pair is a reactant and product pair that is transformed into each other through the loss or gain of a proton. So for example an acid loses a proton to form a base. The acid and the resulting base are said to be a conjugate acid-base pair.
http://link.springer.com/article/10.1023%2FA%3A1016784627561?LI=true

Volume 32, Issue 1-4, pp 5-33
# On Measuring Uncertainty and Uncertainty-Based Information: Recent Developments
## Abstract
It is shown in this paper how the emergence of fuzzy set theory and the theory of monotone measures considerably expanded the framework for formalizing uncertainty and suggested many new types of uncertainty theories. The paper focuses on issues regarding the measurement of the amount of relevant uncertainty (predictive, prescriptive, diagnostic, etc.) in nondeterministic systems formalized in terms of the various uncertainty theories. It is explained how information produced by an action can be measured by the reduction of uncertainty produced by the action. Results regarding measures of uncertainty (and uncertainty-based information) in possibility theory, Dempster–Shafer theory, and the various theories of imprecise probabilities are surveyed. The significance of these results in developing sound methodological principles of uncertainty and uncertainty-based information is discussed.
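The central idea, that information produced by an action is measured by the reduction of uncertainty it produces, is easiest to see in the classical probabilistic special case, where the uncertainty measure is Shannon entropy. The toy Python sketch below is an illustration only; the paper itself concerns generalized, non-probabilistic uncertainty theories, and the distributions here are invented.

```python
import math

def shannon_entropy(p):
    """Shannon entropy (in bits) of a discrete probability distribution."""
    return -sum(q * math.log2(q) for q in p if q > 0)

# Prior: four equally likely states of some system (entropy = 2 bits).
prior = [0.25, 0.25, 0.25, 0.25]
# Posterior after an observation: the distribution has sharpened.
posterior = [0.7, 0.2, 0.1, 0.0]

# Information produced by the observation = reduction of uncertainty.
info = shannon_entropy(prior) - shannon_entropy(posterior)
print(f"information gained: {info:.3f} bits")  # about 0.843 bits
```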
http://math.stackexchange.com/questions/42029/the-matrix-associated-to-a-linear-function

# The matrix associated to a linear function
Let $f_k \colon \mathbb{R}^3 \rightarrow \mathbb{R}^3$ be a linear map defined by $$f_k \begin{pmatrix} 1 \\ 2 \\ k \end{pmatrix}= \begin{pmatrix} 2+k \\ 3 \\ 0 \end{pmatrix}, \quad f_k \begin{pmatrix} 2 \\ k+1 \\ -1 \end{pmatrix}= \begin{pmatrix} 1 \\ 1 \\ -2 \end{pmatrix}, \quad f_k \begin{pmatrix} -3 \\ 1 \\ 5 \end{pmatrix}= \begin{pmatrix} 1 \\ k \\ 2 \end{pmatrix}.$$ For $k=-1$, how can I find the matrix associated to $f_k$ with respect to the standard basis in the domain and the standard basis in the codomain?
Added by Arturo: To put this question in context, see this previous question.
@Katy: Please don't post simply by quoting the problem in the imperative. It reads as if you were giving orders. Could you perhaps put it in a quote box, and then add some words as to what you have managed to accomplish or why/where you are confused? In your most recent question it wasn't until after another answer was posted that you indicated in comments that you already knew the material contained in the answer and was looking for something else; that belongs in the question, because it guides potential repliers as to what you already know, and what it is you want to hear. – Arturo Magidin May 29 '11 at 22:16
Given what has been answered in the previous question, this should now be very easy: the "basis in the domain" is the basis you get by plugging in $k$ to the three vectors listed in the definition (the ones you apply $f_k$ to); you know what the answers are; you should be able to write down the matrix. What is confusing you? – Arturo Magidin May 29 '11 at 22:18
excuse me. you are right – Katy23 May 29 '11 at 22:19
can you tell me the first steps of solution? – Katy23 May 29 '11 at 22:22
@Yuval: It's not really a duplicate, as in this case we simply have a specific linear transformation and we are asked for the matrix that represents it (relative to some bases), while the other question asked something different about a family of functions of which this is one member. It might more accurately be said that this question is "too localized" in the absence of the previous one, or that the only reason for looking at this is in light of the previous question, rather than to say it is a duplicate of the previous one. – Arturo Magidin May 30 '11 at 1:50
Plug in $k=-1$. That gives you a basis for $\mathbb{R}^3$, as discussed in your previous question, namely $$\left(\begin{array}{r}1\\2\\-1\end{array}\right),\qquad\left(\begin{array}{r}2\\0\\-1\end{array}\right),\qquad\left(\begin{array}{r}-3\\1\\5\end{array}\right).$$ That's the "basis in the domain".
You know what the image of the basis vectors under $f_{-1}$ is: you are told what they are.
So you have a basis $\beta$, and the value of the linear transformation at the vectors of $\beta$. How do you find the matrix of $f_{-1}$ from $\mathbb{R}^3$ with basis $\beta$ to $\mathbb{R}^3$ with the standard basis? The first column is the image of the first vector of $\beta$ written in terms of the standard basis. The second column of the matrix is...
Edit. I see now that the question asks for the matrix of $f_{-1}$ relative to the standard bases, which means you need to find $f_{-1}(e_1)$, $f_{-1}(e_2)$, and $f_{-1}(e_3)$. How to do that?
Well, since $(1,2,-1)$, $(2,0,-1)$, and $(-3,1,5)$ are a basis, we can write $e_1$, $e_2$, and $e_3$ as linear combinations of them. For example, $$e_1 = -\frac{1}{15}\left(\begin{array}{r}1\\2\\-1\end{array}\right) + \frac{11}{15}\left(\begin{array}{r}2\\0\\-1\end{array}\right) + \frac{2}{15}\left(\begin{array}{r}-3\\1\\5\end{array}\right)$$ so that means that $$f_{-1}(e_1) = -\frac{1}{15}f_{-1}\left(\begin{array}{r}1\\2\\-1\end{array}\right) + \frac{11}{15}f_{-1}\left(\begin{array}{r}2\\0\\-1\end{array}\right)+ \frac{2}{15}f_{-1}\left(\begin{array}{r}-3\\1\\5\end{array}\right).$$ Continue this way to get the matrix.
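For readers who want to check the arithmetic, here is a short NumPy sketch (my addition, not part of the original answer) that builds the matrix of $f_{-1}$ in the standard bases directly from the relation $AB = V$, where the columns of $B$ are the basis vectors and the columns of $V$ are their prescribed images.

```python
import numpy as np

# Columns of B: the basis beta obtained by plugging k = -1 into the three
# vectors on which f_k is defined.
B = np.array([[ 1,  2, -3],
              [ 2,  0,  1],
              [-1, -1,  5]], dtype=float)

# Columns of V: the prescribed images of those basis vectors under f_{-1}.
V = np.array([[ 1,  1,  1],
              [ 3,  1, -1],
              [ 0, -2,  2]], dtype=float)

# The matrix A in the standard bases satisfies A @ B = V, hence:
A = V @ np.linalg.inv(B)

print(np.round(A, 4))            # matrix of f_{-1} w.r.t. the standard bases
print(A @ np.array([1.0, 0, 0])) # f_{-1}(e1); matches the hand computation
```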
The first column of the matrix is the image through $f_{-1}$ of $e_1=(1,0,0)$. How can I evaluate this? – Katy23 May 30 '11 at 9:20
Let $B=(e_1=(1,0,0),e_2=(0,1,0),e_3=(0,0,1))$; I must find $[f_{-1}]_B^B$. Or not :)? – Katy23 May 30 '11 at 9:22
@Katy23: I see that now. You need to write each of $e_1$, $e_2$, and $e_3$ in terms of $(1,2,-1)$, $(2,0,-1)$, and $(-3,1,5)$ (which is possible, since the latter 3 are a basis), and then use linearity of $f_{-1}$ to get the values. – Arturo Magidin May 30 '11 at 12:41
https://en.wikipedia.org/wiki/Statistical_potential

# Statistical potential
In protein structure prediction, a statistical potential or knowledge-based potential is an energy function derived from an analysis of known protein structures in the Protein Data Bank.
Many methods exist to obtain such potentials; two notable methods are the quasi-chemical approximation (due to Miyazawa and Jernigan[1]) and the potential of mean force (due to Sippl [2]). Although the obtained energies are often considered as approximations of the free energy, this physical interpretation is incorrect.[3][4] Nonetheless, they have been applied with a limited success in many cases [5] because they frequently correlate with actual (physical) free energy differences.
## Assigning an energy
Possible features to which an energy can be assigned include torsion angles (such as the ${\displaystyle \phi ,\psi }$ angles of the Ramachandran plot), solvent exposure or hydrogen bond geometry. The classic application of such potentials is however pairwise amino acid contacts or distances. For pairwise amino acid contacts, a statistical potential is formulated as an interaction matrix that assigns a weight or energy value to each possible pair of standard amino acids. The energy of a particular structural model is then the combined energy of all pairwise contacts (defined as two amino acids within a certain distance of each other) in the structure. The energies are determined using statistics on amino acid contacts in a database of known protein structures (obtained from the Protein Data Bank).
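As a concrete illustration of the scoring scheme just described, here is a minimal Python sketch (not from the article; the interaction values and the 6.5 Å contact cutoff are made-up placeholders rather than any published parameterization) that sums pairwise contact energies over a toy structure.

```python
import itertools
import numpy as np

# Toy "interaction matrix": one energy per unordered pair of residue types.
# Real work uses a 20x20 table (e.g. Miyazawa-Jernigan); these values are
# placeholders for illustration only.
contact_energy = {
    frozenset(["A", "L"]): -0.5,
    frozenset(["A"]): -0.1,   # the A-A pair
    frozenset(["L"]): -0.8,   # the L-L pair
}

def total_energy(sequence, coords, cutoff=6.5):
    """Sum pairwise contact energies over residue pairs within the cutoff."""
    energy = 0.0
    for i, j in itertools.combinations(range(len(sequence)), 2):
        if np.linalg.norm(coords[i] - coords[j]) < cutoff:
            energy += contact_energy.get(frozenset([sequence[i], sequence[j]]), 0.0)
    return energy

coords = np.array([[0.0, 0, 0], [4.0, 0, 0], [6.0, 0, 0], [40.0, 0, 0]])
print(total_energy(["A", "L", "A", "L"], coords))  # -0.5 + -0.5 + -0.1 = -1.1
```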
## Sippl's potential of mean force
### Overview
Many textbooks present the potentials of mean force (PMFs) as proposed by Sippl [2] as a simple consequence of the Boltzmann distribution, as applied to pairwise distances between amino acids. This is incorrect, but a useful start to introduce the construction of the potential in practice. The Boltzmann distribution applied to a specific pair of amino acids, is given by:
${\displaystyle P\left(r\right)={\frac {1}{Z}}e^{-{\frac {F\left(r\right)}{kT}}}}$
where ${\displaystyle r}$ is the distance, ${\displaystyle k}$ is the Boltzmann constant, ${\displaystyle T}$ is the temperature and ${\displaystyle Z}$ is the partition function, with
${\displaystyle Z=\int e^{-{\frac {F(r)}{kT}}}dr}$
The quantity ${\displaystyle F(r)}$ is the free energy assigned to the pairwise system. Simple rearrangement results in the inverse Boltzmann formula, which expresses the free energy ${\displaystyle F(r)}$ as a function of ${\displaystyle P(r)}$:
${\displaystyle F\left(r\right)=-kT\ln P\left(r\right)-kT\ln Z}$
To construct a PMF, one then introduces a so-called reference state with a corresponding distribution ${\displaystyle Q_{R}}$ and partition function ${\displaystyle Z_{R}}$, and calculates the following free energy difference:
${\displaystyle \Delta F\left(r\right)=-kT\ln {\frac {P\left(r\right)}{Q_{R}\left(r\right)}}-kT\ln {\frac {Z}{Z_{R}}}}$
The reference state typically results from a hypothetical system in which the specific interactions between the amino acids are absent. The second term involving ${\displaystyle Z}$ and ${\displaystyle Z_{R}}$ can be ignored, as it is a constant.
In practice, ${\displaystyle P(r)}$ is estimated from the database of known protein structures, while ${\displaystyle Q_{R}(r)}$ typically results from calculations or simulations. For example, ${\displaystyle P(r)}$ could be the conditional probability of finding the ${\displaystyle C\beta }$ atoms of a valine and a serine at a given distance ${\displaystyle r}$ from each other, giving rise to the free energy difference ${\displaystyle \Delta F}$. The total free energy difference of a protein, ${\displaystyle \Delta F_{\textrm {T}}}$, is then claimed to be the sum of all the pairwise free energies:
${\displaystyle \Delta F_{\textrm {T}}=\sum _{i<j}\Delta F\left(r_{ij}\mid a_{i},a_{j}\right)}$

where the sum runs over all amino acid pairs ${\displaystyle a_{i},a_{j}}$ (with ${\displaystyle i<j}$) and ${\displaystyle r_{ij}}$ is their corresponding distance. It should be noted that in many studies ${\displaystyle Q_{R}}$ does not depend on the amino acid sequence.[6]
Intuitively, it is clear that a low value for ${\displaystyle \Delta F_{\textrm {T}}}$ indicates that the set of distances in a structure is more likely in proteins than in the reference state. However, the physical meaning of these PMFs have been widely disputed since their introduction.[3][4][7][8] The main issues are the interpretation of this "potential" as a true, physically valid potential of mean force, the nature of the reference state and its optimal formulation, and the validity of generalizations beyond pairwise distances.
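In code, the construction above amounts to histogramming observed and reference distances and taking the negative log-ratio. The Python sketch below is schematic and uses synthetic data; the distributions, bin choices and units are invented for the example, not taken from any published potential.

```python
import numpy as np

kT = 1.0  # work in units of kT

def pmf(observed, reference, bins):
    """Inverse Boltzmann estimate Delta F(r) = -kT * ln(P(r) / Q_R(r)),
    with P from 'observed' distances and Q_R from the reference state."""
    p, edges = np.histogram(observed, bins=bins, density=True)
    q, _ = np.histogram(reference, bins=edges, density=True)
    with np.errstate(divide="ignore", invalid="ignore"):
        f = -kT * np.log(p / q)   # bins with p or q == 0 give inf/nan
    return edges, f

# Synthetic stand-ins: "database" distances cluster near 5 Angstrom, while
# the reference state is broad and featureless.
rng = np.random.default_rng(0)
observed = rng.normal(5.0, 0.8, 10_000)
reference = rng.uniform(2.0, 12.0, 10_000)

edges, f = pmf(observed, reference, bins=np.linspace(2.0, 12.0, 41))
print(f.round(2))  # a well near r = 5, rising walls on both sides
```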
## Justification
### Analogy with liquid systems
The first, qualitative justification of PMFs is due to Sippl, and based on an analogy with the statistical physics of liquids.[9] For liquids,[10] the potential of mean force is related to the radial distribution function ${\displaystyle g(r)}$, which is given by:
${\displaystyle g(r)={\frac {P(r)}{Q_{R}(r)}}}$
where ${\displaystyle P(r)}$ and ${\displaystyle Q_{R}(r)}$ are the respective probabilities of finding two particles at a distance ${\displaystyle r}$ from each other in the liquid and in the reference state. For liquids, the reference state is clearly defined; it corresponds to the ideal gas, consisting of non-interacting particles. The two-particle potential of mean force ${\displaystyle W(r)}$ is related to ${\displaystyle g(r)}$ by:
${\displaystyle W(r)=-kT\log g(r)=-kT\log {\frac {P(r)}{Q_{R}(r)}}}$
According to the reversible work theorem, the two-particle potential of mean force ${\displaystyle W(r)}$ is the reversible work required to bring two particles in the liquid from infinite separation to a distance ${\displaystyle r}$ from each other.[10]
Sippl justified the use of PMFs - a few years after he introduced them for use in protein structure prediction [9] - by appealing to the analogy with the reversible work theorem for liquids. For liquids, ${\displaystyle g(r)}$ can be experimentally measured using small angle X-ray scattering; for proteins, ${\displaystyle P(r)}$ is obtained from the set of known protein structures, as explained in the previous section. However, as Ben-Naim writes in a publication on the subject:[4]
[...] the quantities, referred to as 'statistical potentials,' 'structure based potentials,' or 'pair potentials of mean force,' as derived from the protein data bank, are neither 'potentials' nor 'potentials of mean force,' in the ordinary sense as used in the literature on liquids and solutions.
Another issue is that the analogy does not specify a suitable reference state for proteins.
### Analogy with likelihood
Baker and co-workers [11] justified PMFs from a Bayesian point of view and used these insights in the construction of the coarse grained ROSETTA energy function. According to Bayesian probability calculus, the conditional probability ${\displaystyle P(X\mid A)}$ of a structure ${\displaystyle X}$, given the amino acid sequence ${\displaystyle A}$, can be written as:
${\displaystyle P\left(X\mid A\right)={\frac {P\left(A\mid X\right)P\left(X\right)}{P\left(A\right)}}\propto P\left(A\mid X\right)P\left(X\right)}$
${\displaystyle P(X\mid A)}$ is proportional to the product of the likelihood ${\displaystyle P\left(A\mid X\right)}$ times the prior ${\displaystyle P\left(X\right)}$. By assuming that the likelihood can be approximated as a product of pairwise probabilities, and applying Bayes' theorem, the likelihood can be written as:
${\displaystyle P\left(A\mid X\right)\approx \prod _{i<j}{\frac {P\left(r_{ij}\mid a_{i},a_{j}\right)}{P\left(r_{ij}\right)}}}$

where the product runs over all amino acid pairs ${\displaystyle a_{i},a_{j}}$ (with ${\displaystyle i<j}$), and ${\displaystyle r_{ij}}$ is the distance between amino acids ${\displaystyle i}$ and ${\displaystyle j}$. Obviously, the negative of the logarithm of the expression has the same functional form as the classic pairwise distance PMFs, with the denominator playing the role of the reference state. This explanation has two shortcomings: it is purely qualitative, and relies on the unfounded assumption that the likelihood can be expressed as a product of pairwise probabilities.
### Reference ratio explanation
Figure: The reference ratio method. ${\displaystyle Q(X)}$ is a probability distribution that describes the structure of proteins on a local length scale (right). Typically, ${\displaystyle Q(X)}$ is embodied in a fragment library, but other possibilities are an energy function or a graphical model. In order to obtain a complete description of protein structure, one also needs a probability distribution ${\displaystyle P(Y)}$ that describes nonlocal aspects, such as hydrogen bonding. ${\displaystyle P(Y)}$ is typically obtained from a set of solved protein structures from the Protein data bank (PDB, left). In order to combine ${\displaystyle Q(X)}$ with ${\displaystyle P(Y)}$ in a meaningful way, one needs the reference ratio expression (bottom), which takes the signal in ${\displaystyle Q(X)}$ with respect to ${\displaystyle Y}$ into account.
Expressions that resemble PMFs naturally result from the application of probability theory to solve a fundamental problem that arises in protein structure prediction: how to improve an imperfect probability distribution ${\displaystyle Q(X)}$ over a first variable ${\displaystyle X}$ using a probability distribution ${\displaystyle P(Y)}$ over a second variable ${\displaystyle Y}$, with ${\displaystyle Y=f(X)}$.[5] Typically, ${\displaystyle X}$ and ${\displaystyle Y}$ are fine and coarse grained variables, respectively. For example, ${\displaystyle Q(X)}$ could concern the local structure of the protein, while ${\displaystyle P(Y)}$ could concern the pairwise distances between the amino acids. In that case, ${\displaystyle X}$ could for example be a vector of dihedral angles that specifies all atom positions (assuming ideal bond lengths and angles). In order to combine the two distributions, such that the local structure will be distributed according to ${\displaystyle Q(X)}$, while the pairwise distances will be distributed according to ${\displaystyle P(Y)}$, the following expression is needed:
${\displaystyle P(X,Y)={\frac {P(Y)}{Q(Y)}}Q(X)}$
where ${\displaystyle Q(Y)}$ is the distribution over ${\displaystyle Y}$ implied by ${\displaystyle Q(X)}$. The ratio in the expression corresponds to the PMF. Typically, ${\displaystyle Q(X)}$ is brought in by sampling (typically from a fragment library), and not explicitly evaluated; the ratio, which in contrast is explicitly evaluated, corresponds to Sippl's potential of mean force. This explanation is quantitive, and allows the generalization of PMFs from pairwise distances to arbitrary coarse grained variables. It also provides a rigorous definition of the reference state, which is implied by ${\displaystyle Q(X)}$. Conventional applications of pairwise distance PMFs usually lack two necessary features to make them fully rigorous: the use of a proper probability distribution over pairwise distances in proteins, and the recognition that the reference state is rigorously defined by ${\displaystyle Q(X)}$.
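To make the reference ratio concrete: if one can sample from Q(X), the ratio P(Y)/Q(Y) acts as an importance weight on each sample. The Python sketch below illustrates this on an invented one-dimensional example; all densities and the coarse-graining map are toy choices, not the protein-structure quantities of the article.

```python
import numpy as np

rng = np.random.default_rng(1)

# Q(X): proposal over a fine-grained variable X (here, a pair of "angles").
X = rng.normal(0.0, 1.0, size=(100_000, 2))

# Y = f(X): a coarse-grained variable, here an invented "distance".
Y = np.abs(X[:, 0] - X[:, 1])

def p_Y(y):   # target distribution of the coarse variable (unnormalised)
    return np.exp(-((y - 2.0) ** 2))

def q_Y(y):   # density of |X1 - X2| under Q; X1 - X2 ~ N(0, sqrt(2)^2)
    s = np.sqrt(2.0)
    return 2.0 * np.exp(-y**2 / (2 * s**2)) / (s * np.sqrt(2 * np.pi))

# The reference ratio P(Y)/Q(Y) acts as an importance weight per sample.
w = p_Y(Y) / q_Y(Y)
w /= w.sum()

print("mean of Y under Q:          ", Y.mean())        # about 1.13
print("mean of Y after reweighting:", np.sum(w * Y))   # pulled toward 2.0
```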
## Applications
Statistical potentials are used as energy functions in the assessment of an ensemble of structural models produced by homology modeling or protein threading - predictions for the tertiary structure assumed by a particular amino acid sequence made on the basis of comparisons to one or more homologous proteins with known structure. Many differently parameterized statistical potentials have been shown to successfully identify the native state structure from an ensemble of "decoy" or non-native structures.[12][13][14][15][16][17] Statistical potentials are not only used for protein structure prediction, but also for modelling the protein folding pathway.[18][19]
## References
1. ^ Miyazawa S, Jernigan R (1985). "Estimation of effective interresidue contact energies from protein crystal structures: quasi-chemical approximation". Macromolecules. 18: 534–552. doi:10.1021/ma00145a039.
2. ^ a b Sippl MJ (1990). "Calculation of conformational ensembles from potentials of mean force. An approach to the knowledge-based prediction of local structures in globular proteins". J Mol Biol. 213: 859–883. doi:10.1016/s0022-2836(05)80269-4.
3. ^ a b Thomas PD, Dill KA (1996). "Statistical potentials extracted from protein structures: how accurate are they?". J Mol Biol. 257: 457–469. doi:10.1006/jmbi.1996.0175.
4. ^ a b c Ben-Naim A (1997). "Statistical potentials extracted from protein structures: Are these meaningful potentials?". J Chem Phys. 107: 3698–3706. doi:10.1063/1.474725.
5. ^ a b Hamelryck T, Borg M, Paluszewski M, et al. (2010). Flower DR, ed. "Potentials of mean force for protein structure prediction vindicated, formalized and generalized". PLoS ONE. 5 (11): e13714. doi:10.1371/journal.pone.0013714. PMC 2978081. PMID 21103041.
6. ^ Rooman M, Wodak S (1995). "Are database-derived potentials valid for scoring both forward and inverted protein folding?". Protein Eng. 8: 849–858. doi:10.1093/protein/8.9.849.
7. ^ Koppensteiner WA, Sippl MJ (1998). "Knowledge-based potentials–back to the roots". Biochemistry Mosc. 63: 247–252.
8. ^ Shortle D (2003). "Propensities, probabilities, and the Boltzmann hypothesis". Protein Sci. 12: 1298–1302. doi:10.1110/ps.0306903. PMC 2323900.
9. ^ a b Sippl MJ, Ortner M, Jaritz M, Lackner P, Flockner H (1996). "Helmholtz free energies of atom pair interactions in proteins". Fold Des. 1: 289–98. doi:10.1016/s1359-0278(96)00042-9.
10. ^ a b Chandler D (1987) Introduction to Modern Statistical Mechanics. New York: Oxford University Press, USA.
11. ^ Simons KT, Kooperberg C, Huang E, Baker D (1997). "Assembly of protein tertiary structures from fragments with similar local sequences using simulated annealing and Bayesian scoring functions". J Mol Biol. 268: 209–225. doi:10.1006/jmbi.1997.0959.
12. ^ Miyazawa S., Jernigan RL. (1996). "Residue–Residue Potentials with a Favorable Contact Pair Term and an Unfavorable High Packing Density Term, for Simulation and Threading". J Mol Biol. 256: 623–644. doi:10.1006/jmbi.1996.0114.
13. ^ Tobi D, Elber R (2000). "Distance Dependent, Pair Potential for Protein Folding: Results from Linear Optimization". Proteins. 41: 40–46. doi:10.1002/1097-0134(20001001)41:1<40::aid-prot70>3.3.co;2-l.
14. ^ Shen MY, Sali A (2006). "Statistical potential for assessment and prediction of protein structures". Protein Sci. 15: 2507–2524. doi:10.1110/ps.062416606. PMC 2242414.
15. ^ Narang P, Bhushan K, Bose S, Jayaram B (2006). "Protein structure evaluation using an all-atom energy based empirical scoring function". J Biomol Struct Dyn. 23 (4): 385–406. doi:10.1080/07391102.2006.10531234.
16. ^ Sippl MJ (1993). "Recognition of Errors in Three-Dimensional Structures of Proteins". Proteins. 17: 355–62. doi:10.1002/prot.340170404. PMID 8108378.
17. ^ Bryant SH, Lawrence CE (1993). "An empirical energy function for threading protein sequence through the folding motif". Proteins. 16 (1): 92–112. doi:10.1002/prot.340160110. PMID 8497488.
18. ^ Kmiecik S and Kolinski A (2007). "Characterization of protein-folding pathways by reduced-space modeling". Proc. Natl. Acad. Sci. U.S.A. 104 (30): 12330–12335. doi:10.1073/pnas.0702265104. PMC 1941469. PMID 17636132.
19. ^ Adhikari AN, Freed KF and Sosnick TR (2012). "De novo prediction of protein folding pathways and structure using the principle of sequential stabilization". Proc. Natl. Acad. Sci. U.S.A. 109 (43): 17442–17447. doi:10.1073/pnas.1209000109. PMC 3491489. PMID 23045636.
https://www.physicsforums.com/threads/initial-velocity-of-soccer-ball-and-length-of-time-in-the-air-projectile-motion.596786/

# Homework Help: Initial velocity of soccer ball and length of time in the air - projectile motion
1. Apr 15, 2012
### dani123
1. The problem statement, all variables and given/known data
A soccer ball is kicked at 38.0° with respect to the horizontal and travels 64.0m before striking the ground.
a) what is its initial velocity?
b) how long was it in the air?
2. Relevant equations
dv = (1/2)at^2
dh = Vh*Δt
Kinematic equation: d = Vi*t + (1/2)at^2
dh = -V^2*sin(2θ)/g
sinθ = opp/hyp
Δt = -2V*sinθ/g
3. The attempt at a solution
a) dh=cos(38)*64m= 50.4m
dv=sin(38)*64m=39.4m
Find t=√(39.4m/(0.5*9.8))=7.96s
plug this value into the d = Vi*t + (1/2)at^2 equation and solve for Vi if d=64m
Vi=3.14m/s
b) V=dh/Δt= 50.4m/7.96s=6.33m/s
Then solve for Δt=-2Vsinθ / g= 0.80s
I would like for someone to just double check my answers and that I used the appropriate equations and that the number of significant figures are being respected! Thanks so much in advance!
2. Apr 15, 2012
### BruceW
I don't think you've done this one correctly. When the question says it travels 64m before striking the ground, I think it means this is the horizontal displacement.
Edit: as a hint dh=Vh*Δt is the equation for the horizontal displacement, where you can use the horizontal component of initial velocity for Vh. And d = Vi*t + (1/2)at^2 is the equation for vertical displacement, where you can use the vertical component of initial velocity for Vi. You will need to make use of both these equations simultaneously to solve the problem.
Last edited: Apr 15, 2012
3. Apr 16, 2012
### dani123
Thank you for your time but Im still so lost here.
4. Apr 18, 2012
### BruceW
Maybe start with the equations that you need to use. The horizontal and vertical motion are separate, so there are 2 equations. Have you been through this kind of question in class?
5. Apr 22, 2012
### dani123
Well this a correspondance course that I taking through the department of education so Im kind of teaching myself, which is making these assignments a bit more challenging but thankfully I have this forum to kind of guide me in the right direction.
I tried to redo this problem after consulting my textbook again and came up with the following:
For part a) I used dh = -V^2*sin(2θ)/g, which led me to V = 25.42 m/s as my answer for this section.
For part b) Δt=Δx/Vx, and from this equation I got 2.52 seconds.
Are these answers correct? Thank you so much for your time and help.
6. Apr 22, 2012
### BruceW
You've got part a) correct. But I get a different answer for part b), what did you use for Δx and Vx ?
7. Apr 23, 2012
### dani123
I used Δx=64m and Vx=25.42m/s
8. Apr 23, 2012
### BruceW
Vx is the component of velocity in the horizontal direction, not the total initial velocity. So you need to do a little trigonometry to get Vx from V. (as a check, Vx should come out as less than V, so it will be less than 25.42m/s).
9. Apr 26, 2012
### dani123
Ok so I did a little trigonometry and got the velocity in the horizontal direction to be equal to 20.03m/s... So with that, I plugged it back into my delta t equation to get 3.195 or 3.20 seconds. Does this seem correct? Thanks!
10. Apr 26, 2012
### BruceW
Yep, that is all correct. Nice work! One other thing: I think you should only use 2 significant figures in the answer, since you used 2 sig. fig. in the inputs of the equation (i.e. g=9.8 which is 2 sig. fig.)
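For anyone following along, the thread's final numbers are easy to reproduce. A minimal Python check (my addition, not from the thread), using the range formula for part a) and the horizontal motion for part b):

```python
import math

g, R, theta = 9.8, 64.0, math.radians(38.0)

# Part a): range formula R = V**2 * sin(2*theta) / g, solved for V.
V = math.sqrt(R * g / math.sin(2 * theta))
print(f"V  = {V:.2f} m/s")                   # 25.42 m/s

# Part b): time of flight from the horizontal motion, t = R / Vx.
Vx = V * math.cos(theta)
t = R / Vx
print(f"Vx = {Vx:.2f} m/s, t = {t:.2f} s")   # 20.03 m/s, 3.19 s

# Cross-check against the vertical motion: t = 2 * V * sin(theta) / g.
print(f"t  = {2 * V * math.sin(theta) / g:.2f} s (vertical check)")
```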
https://www.physicsforums.com/threads/operations-reaserch-network.276445/

# Operations Research Network
1. Dec 2, 2008
### janani66
Please help me to solve this problem. This was a question given in our end semester examination.
The management of a certain garment factory is considering the construction of a new washing plant building. The project consists of 7 major activities. The information pertaining to the project is given below.
| Activity | Immediate Predecessor | Duration |
| --- | --- | --- |
| A | none | 2 |
| B | none | 6 |
| C | none | 4 |
| D | A | 3 |
| E | C | 5 |
| F | A | 4 |
| G | B, D, E | 2 |
i) Draw the network diagram.
ii) Find the critical path.
iii) What is the expected time required to complete the project?
iv) Variances of the time duration for each activity of the project are given below. Find the probability that the project will be completed on or before 10 weeks.
| Activity | Variance |
| --- | --- |
| A | 0.25 |
| B | 0.11 |
| C | 0.25 |
| D | 0.25 |
| E | 0.25 |
| F | 0.11 |
| G | 0.11 |
I did the whole question on paper, but I'm not very sure about the network diagram. The time of the critical path was 11, and that is the answer for (iii).
Then for the last part I got 0.36 as the probability. As I'm new to this forum I don't know how to get the symbols to write my answer here. Please can anybody do this question here? Then I can compare with my answer.
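No one replied in the thread, but parts (ii) to (iv) are mechanical enough to check by machine. Below is a short Python sketch (my addition; it uses the usual PERT-style normal approximation along the critical path). It confirms the critical path C-E-G with expected time 11, but puts the probability of finishing within 10 weeks near 0.10 rather than 0.36.

```python
import math

# (duration, immediate predecessors) for each activity.
acts = {"A": (2, []), "B": (6, []), "C": (4, []),
        "D": (3, ["A"]), "E": (5, ["C"]), "F": (4, ["A"]),
        "G": (2, ["B", "D", "E"])}
var = {"A": 0.25, "B": 0.11, "C": 0.25, "D": 0.25,
       "E": 0.25, "F": 0.11, "G": 0.11}

# Longest path to the end of each activity (listed in topological order).
finish, path = {}, {}
for a in ["A", "B", "C", "D", "E", "F", "G"]:
    d, preds = acts[a]
    best = max(preds, key=lambda p: finish[p], default=None)
    finish[a] = d + (finish[best] if best else 0)
    path[a] = (path[best] if best else []) + [a]

end = max(finish, key=finish.get)
critical, T = path[end], finish[end]
print("critical path:", "-".join(critical), "| expected time:", T)  # C-E-G, 11

# iv) Normal approximation: duration ~ N(T, sum of variances on the CP).
sigma = math.sqrt(sum(var[a] for a in critical))       # sqrt(0.61), about 0.78
z = (10 - T) / sigma                                   # about -1.28
prob = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
print(f"P(completion <= 10 weeks) = {prob:.2f}")       # about 0.10
```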
https://www.physicsmodels.in/course/info.php?id=48

### Chapter 47: Relativity concept
The story of the universe and how we see it is in this chapter. We have made representative animations and videos to explain, in the simplest way possible, two of the most profound theories of science, discovered by Dr. Albert Einstein. We are quite confident you will find our explanation convincing, logical and easy to grasp.
Following topics are covered:
1) Frames of Reference: a) Inertial, b) Non-Inertial, and The Special Theory of Relativity
2) Coriolis Force: a fictitious force under Newton's Laws
3) Time Dilation: moving clocks due to Relative Velocity only
4) Time Dilation: due to Gravitational Effects
5) Mass-Energy equivalence, Einstein's famous equation
6) A body's Rest Mass and Mass while moving
7) The Mass of Light
8) Energy and Momentum
9) Twins Paradox: why one twin will look younger than the other
10) Solved problems
https://www.mi.fu-berlin.de/en/math/groups/arithmetic_geometry/teaching/Anaytic-Methods-in-Number-Theory.html
# Analytic Methods in Number Theory
## Seminar at FU Berlin, Winter Term 2015-2016
### Introduction
In this seminar we are going to discuss the applications of analytic methods to number theory. The first aim of the seminar is to prove Dirichlet's theorem on arithmetic progressions, which states that given any two positive coprime integers \(a\), \(m\), the set of numbers \(\{a, a+m, a+2m, a+3m,\cdots \}\) contains infinitely many prime numbers. The proof resorts to Dirichlet series, the Riemann zeta function and Dirichlet \(L\)-functions, which are very basic tools in the study of analytic number theory. Then we are going to study modular forms. Modular forms are holomorphic functions on the upper half plane satisfying certain conditions with respect to some group actions. They are another very important tool in number theory.
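As a quick numerical illustration of the theorem (an aside, not part of the seminar program): the sketch below counts primes in each residue class coprime to \(m=10\). Dirichlet's theorem guarantees every such class contains infinitely many primes, and in fact the classes share the primes roughly equally.

```python
from math import gcd
from sympy import isprime  # any primality test will do

def primes_in_class(a, m, bound):
    """Count primes p <= bound with p congruent to a (mod m), gcd(a, m) = 1."""
    assert gcd(a, m) == 1
    return sum(1 for p in range(a, bound + 1, m) if isprime(p))

# The phi(10) = 4 residue classes mod 10 share the primes roughly equally:
for a in (1, 3, 7, 9):
    print(a, primes_in_class(a, 10, 100_000))
```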
### Prerequisites
The prerequisites for this seminar are rather few. A certain familiarity with undergraduate-level real and complex analysis is enough.
### Program
You can find the seminar program here.
If you are interested in giving a talk, please send an email to me at [email protected].
| Date | Title | Speaker |
| --- | --- | --- |
| 14/10/2015 | Dirichlet's Theorem on Arithmetic Progressions | Lei Zhang |
| 21/10/2015 | Group Characters (I) | Gretar Amazeen |
| 28/10/2015 | Group Characters (II) | Gretar Amazeen |
| 04/11/2015 | Dirichlet Series | Yumeng Li |
| 11/11/2015 | The Zeta Function | Lei Zhang |
| 18/11/2015 | The \(L\)-Functions | Lei Zhang |
| 25/11/2015 | The Dirichlet Theorem | Lei Zhang |
| 02/12/2015 | Modular Groups | Gretar Amazeen |
| 09/12/2015 | Modular Functions | Lei Zhang |
| 16/12/2015 | The Zeros and Poles of a Modular Function | Yumeng Li |
| 06/01/2016 | The Space of Modular Forms and Modular Invariant | Lei Zhang |
| 13/01/2016 | Series Expansions | Hao Yun |
| 20/01/2016 | Hecke Operators (I) | Hao Yun |
| 27/01/2016 | Hecke Operators (II) | Hao Yun |
| 03/02/2016 | Theta functions (I) | Gretar Amazeen |
| 10/02/2016 | Theta functions (II) | Gretar Amazeen |
### Other Information
Place: SR 130/A3 Seminarraum (Hinterhaus) (Arnimallee 3-5)
Date: Wednesday 16:00-18:00
First Appointment: 14.10.2015
Seminar Language: English
http://mathoverflow.net/feeds/user/6214 | User paul monsky - MathOverflow most recent 30 from http://mathoverflow.net 2013-05-24T09:33:12Z http://mathoverflow.net/feeds/user/6214 http://www.creativecommons.org/licenses/by-nc/2.5/rdf http://mathoverflow.net/questions/131653/does-this-space-of-mod-2-modular-forms-admit-a-z-8-degree-decomposition Does this space of mod 2 modular forms admit a (Z/8)* degree decomposition? paul Monsky 2013-05-23T21:28:29Z 2013-05-23T21:28:29Z <p>Fix an odd N>0. Let M consist of all odd elements of Z/2[[x]] that are the mod 2 reductions of elements of Z[[x]] arising as the Fourier expansions of modular forms for (Gamma_0)(N); it's easy to see that M is closed under addition. For i in {1,3,5,7} let M_i be the subspace of M consisting of those g in which all the exponents that appear are congruent to i mod 8.</p> <p>QUESTION: Do M_1, M_3, M_5 and M_7 span M?</p> <p>Remark 1: When N=1 the answer is yes. For in this case M is spanned by the odd powers of x+x^9+x^25+x^49+..., and each such power lies in some M_i.</p> <p>Remark 2: When N is prime, I think this will follow from an affirmative answer to another question I've posted on MO--Level p characteristic 2 modular forms and thetas (#121506). But I haven't tried to write things out, and this approach seems awfully complicated. </p> http://mathoverflow.net/questions/127777/a-subring-of-the-serre-swinnerton-dyer-ring-of-level-n-modular-power-series A subring of the Serre Swinnerton -Dyer ring of level N modular power series paul Monsky 2013-04-17T02:05:57Z 2013-05-10T01:10:51Z <p>Suppose ell is prime and (N,ell)=1. Consider those power series over Z that are expansions at infinity of modular forms for gamma_0 (N) of weight a multiple of ell-1. I'll say that an element of (Z/ell)[[x]] is in M#(N) (or more briefly in M#) if it is the mod ell reduction of such a power series.</p> <p>When ell is 2 or 3 there is no weight restriction, and so M# is just the Serre Swinnerton-Dyer ring, M, of question 93059. When ell>3 we can use the fact that the expansion of E_(ell-1) is 1 mod ell to see that M# is closed under addition, and is a subring of M.</p> <p>I'd like to know the structure of M#(N), or more geometrically what the affine curve C(N) over Z/ell attached to M#(N) looks like. My guess is that there's a simple answer in terms of the characteristic ell non-singular projective modular curve X_0 (N) constructed by Igusa.</p> <p>Explicitly let J be the mod ell reduction of the "Laurent expansion" (1/x)+744+... of j(z). Let K(1) be the extension field of Z/ell generated by J(x), and let K(N) be the field of Laurent series generated over Z/ell by the J(x^d) with d dividing N. If my understanding is correct, X_0 (1) is the projective j-line over Z/ell, there is a branched covering phi: X_0(N)-->X_0(1) defined over Z/ell, and the function fields of X_0(1) and X_0(N) identify with K(1) and K(N).</p> <p>Now let P be the monic separable element of Z/ell[x] whose roots are the supersingular j-values. Then it can be shown that M(1) is the subring of (Z/ell)(J) generated by the (J^k)/P(J) with k< deg P. In other words, C(1) is the "ordinary part" of X_0(1); it is the projective j-line with the supersingular j omitted.</p> <p>QUESTION: Is it true that C(N) identifies with the inverse image of C(1) under phi: X_0(N)-->X_0(1)? 
Alternatively (assuming my description of the function field of X_0 (N) is correct), is it true that M#(N) is the integral closure of M#(1) in the extension field generated by the J(x^d) with d dividing N?</p> <p>EDIT: I've corrected some typos, where I wrote X(N) when I meant X_0(N). Kevin Buzzard, in response to an inquiry, has kindly told me that the answer to my question is yes, and indicated a proof.</p> http://mathoverflow.net/questions/128036/solutions-to-binomn5-2-binomm5/128059#128059 Answer by paul Monsky for Solutions to $\binom{n}{5} = 2 \binom{m}{5}$ paul Monsky 2013-04-19T09:19:31Z 2013-04-21T14:41:09Z <p>This isn't a complete answer, but the problem "reduces" to finding the finitely many rational points on a certain genus 2 hyperelliptic curve. This is often possible by a technique involving a reduction to finding the rational points on a finite set of rank 0 elliptic curves--see for example "Towers of 2-covers of hyperelliptic curves" by Bruin and Flynn in Trans. Amer. Math. Soc. 357 (2005) #11 4329-4347.</p> <p>In your case, the curve is u^2= 9*t^6+16*t^5-200*t^3+256*t+144. There are the following 16 rational points: (t,u) or (t,-u)= (0,12),(1,15),(2,12),(4,204),(-1,9),(-2,36),(-4,180) and (7/4,411/64). If these are the only rational points then the only non-trivial solution to your equation is n=10,m=9. To see this suppose that n(n-1)(n-2)(n-3)(n-4)=2m(m-1)(m-2)(m-3)(m-4). Let y=(n-2)^2 and x=(m-2)^2. Squaring both sides we find that y*(y-1)(y-1)(y-4)(y-4)=4x*(x-1)(x-1)(x-4)(x-4). Suppose y isn't 0. Then 4x/y is t^2 for some rational t with (y-1)(y-4)=t(x-1)(x-4). We replace y by 4x/t^2 in this equation and find that (t^5-16)x^2-(5t^5-20t^2)x+(4t^5-4t^4)=0. So this last quadratic polynomial in x has a rational root and its discriminant is a square. This gives the hyperelliptic curve above. Note that the case n=10, m=9 of your problem corresponds to the point (7/4,411/64) on this curve.</p> <p>EDIT: More generally one can look for rational m and n with [n]_5= 2*([m]_5). If (t,u) is a rational point on the hyperelliptic curve with t non-zero, set x=(5t^5-20*t^2+t^2*u)/(2*t^5- 32) and y=4x/t^2. Then if x is a square in Q, one gets such an m and n with m=2+(a square root of x) and n=2+(a square root of y). Joro's points lead in this way to the solutions (n,m)=(10,9),(10/3,5/3) and (78/23,36/23), the last one being rather unexpected. (And as Francois notes, each (n,m) gives a second solution (4-n,4-m)). Perhaps these solutions and the trivial solutions with m and n in {0,1,2,3,4} are the only rational solutions.</p> http://mathoverflow.net/questions/125877/computing-certain-class-numbers-modulo-4/125904#125904 Answer by paul Monsky for Computing certain class numbers modulo 4 paul Monsky 2013-03-29T11:27:33Z 2013-03-29T12:17:46Z <p>The way Gauss did things was in terms of SL(2,Z) equivalence classes of (primitive) binary quadratic forms over Z. So lets consider such definite forms, axx+bxy+cyy with b^2-4ac=-N. For simplicity suppose N is squarefree and odd. Gauss defines a composition on the classes, making the set of classes into a finite group. For each prime dividing N he defines a genus character from this group to Z/2. The product of these characters is the trivial map, and the joint kernel of them all consists of the squares, a subgroup that he calls the principal genus. In the case N=pq, the character attached to p maps a form to (M/p) where M is any integer prime to p represented by the form. 
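The 16 listed points are easy to verify by machine. A small sketch in exact rational arithmetic (an editorial addition, not part of the original answer):

```python
from fractions import Fraction as F

def rhs(t):
    return 9*t**6 + 16*t**5 - 200*t**3 + 256*t + 144

# The eight listed (t, u) with u >= 0; the points (t, -u) also lie on the curve.
points = [(F(0), F(12)), (F(1), F(15)), (F(2), F(12)), (F(4), F(204)),
          (F(-1), F(9)), (F(-2), F(36)), (F(-4), F(180)), (F(7, 4), F(411, 64))]

assert all(u * u == rhs(t) for t, u in points)
print("all 16 rational points lie on u^2 = 9t^6+16t^5-200t^3+256t+144")
```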
So the squares form, in your case, a subgroup of index 2, and there is a unique non-trivial class of order 2 in the group. Gauss calls the classes of order 2 the "ambiguous classes". They're easily written down in general and are represented by "reduced" forms axx+bxy+cyy with b=0 or with a=b (or c). So in your case the non-trivial ambiguous class is represented by pxx+qyy. The genus character attached to p maps this class to (q/p). So the class is a square if and only if (q/p)=1, giving the result you want. The theory also works for even N not necessarily square-free, though there are 2 genus characters attached to the prime 2 when 32 divides N, and it works for indefinite forms as well. </p> <p>Oops--I should have said that the non-trivial ambiguous class is represented by qxx+qxy+(1/4)(p+q)yy.</p> http://mathoverflow.net/questions/125007/does-there-exist-a-half-integer-weight-theta-function-which-is-is-equivalent-to-1/125128#125128 Answer by paul Monsky for Does there exist a half-integer weight theta function which is is equivalent to 1 modulo 4? paul Monsky 2013-03-21T05:33:01Z 2013-03-25T00:48:33Z <p>This is nothing like a complete answer(EDIT-now it seems to be--see below), but it may suggest a fruitful attack, based on the comment (see below for a version incorporated into this answer) that I made earlier. Your question is related to my question 124243 on this site about a certain ring M consisting of the mod 2 reductions of elements of Z[[x]] that are the Fourier expansions of modular forms for gamma_0 (N). Suppose first that N is an odd prime p.</p> <p>Theorem(?) The element g=x+x^4+x^9+x^16+... of Z/2[[x]] does not lie in M.</p> <p>Proof(?) In the notation of my question, and the other question of mine that it links to, let h=g+g^2+r. Then h(1+h)=(g+g^4)+(r+r^2)=f1+(f1+fp)=fp.(Again, see below). Now I conjecture in my question that in the field of fractions of M, fp has a zero of order p, a zero of order 1 and p+1 poles counted with multiplicity. If this is true then both h and h+1 have (p+1)/2 poles, counted with multiplicity. Furthermore one of them must have a zero of order p, giving a contradiction.</p> <p>Now I'm reasonably sure that this conjecture is correct, is known to the experts, and follows from results proved by Igusa in the 1950's. Unfortunately no expert has yet responded to my question. I also think that similar techniques will show that g is not the mod 2 reduction of any modular form for gamma_0 (N) when N is odd. But the theory, if I understand correctly, is much harder when the level is divisible by the characteristic, and if a proof is to be given based on the idea of my comment this case has to be considered.</p> <p>Edited comment--Suppose that F is a positive definite quadratic form over Z in s variables where s is odd. Here's an idea for showing that the power series theta_F in Z[[x]] attached to F (I'll defy convention and write x instead of q) cannot be congruent to 1 mod 4. Choose S so that s*S=8m-1 and let G be a direct sum of S copies of F and one of the one variable form z^2. Then if theta_F is 1 mod 4, theta G is 1+2x+2x^4+2x^9+2x^16+... mod 4. Now (see Schoeneberg's paper from the 1930's for example) there is an N such that theta_G is the "expansion at infinity" of a weight 4m modular form, phi, for gamma_0 (N). Then (1/2)(phi-(E_4)^m) is a weight 4m modular form for gamma_0 (N) whose expansion at infinity lies in Z[[x]], and has mod 2 reduction equal to g=x+x^4+x^9+x^16+... 
There are perhaps reasons to believe this can't happen--at any rate it appears to be an approachable problem of a mainstream variety.</p> <p>One more remark. In my proof? given above I use the fact that the mod 2 reduction f1 of the expansion at infinity of the cusp form delta is x+x^9+x^25+... This lovely if well-known fact comes from<br> the infinite product formula for delta and Jacobi's triple product identity.</p> <p>EDIT---MARCH 24 I now believe I have an argument showing that g is not the mod 2 reduction of a modular form for any gamma_0 (N), completing the answer to your question. I'll dispense with any ideas of Igusa, and instead use Hecke operators and classical results of Gauss on ternary quadratic forms. The idea is this. For each odd prime q there is a "Hecke operator" T_q on Z/2[[x]]. If h is the reduction of a modular form for gamma_0 (N) then the T_q(h) span a finite dimensional subspace of Z/2[[x]]. Now let h=g^11. I'll use this last result to show that h cannot be the reduction of a modular form.</p> <p>To this end, fix K and primes p_1,...p_K each of which is 5 mod 8. Then choose primes q_1,...q_K each of which is 7 mod 8 so that the Legendre symbol (q_i/p_j) is -1 if i=j and is 1 otherwise. Now the coefficient of x^p_j in T_(q_i)(h) is the coefficient of x^(p_j)*(q_i) in h. By the last paragraph of and the comments following my answer to MO question 284642-why are there usually an even number of representations as a sum of 11 squares--(an answer based on Disquisitiones Arithmeticae)--, this coefficient is 1 if i=j and 0 otherwise. So the T_(q_i) (h) are linearly independent. Since K may be chosen arbitrarily large, the criterion of the last paragraph gives the result.</p> http://mathoverflow.net/questions/124243/are-these-empirical-discoveries-about-the-serre-swinnerton-dyer-ring-of-prime-lev Are these empirical discoveries about the Serre Swinnerton-Dyer ring of prime level modular power series actual theorems? paul Monsky 2013-03-11T16:39:13Z 2013-03-19T12:35:04Z <p>In <a href="http://mathoverflow.net/questions/93059" rel="nofollow">this question</a> Joel Bellaiche constructed an algebra, M, of modular forms for gamma_0 (N) in finite characteristic (which he called p, but I'll call ell) and asked to know its structure. Matt Emerton gave a "comment-answer" that showed, among other things, that M is integrally closed.</p> <p>I've made some explicit calculations (particularly when ell=2 and 3; see <a href="http://mathoverflow.net/questions/121506" rel="nofollow">my question</a> for some of these when ell=2). The empirical "discoveries" unveiled by these seem likely to be theorems. I'll present a couple of these here, saving the remaining ones for edits. I'll assume that the level N is a prime, p. M contains f1 and fp; the mod ell reductions of the Fourier expansions of the cusp forms delta(z) and delta(pz).</p> <p>Discovery 1---M is integral over Z/ell[f1,fp]</p> <p>Discovery 2---When ell=2 or 3, then M is the integral closure of Z/ell[f1,fp] in its field of fractions.</p> <p>Are these really true? I'll also hazard the following guess when l>3. The mod ell reductions of the Eisenstein series E_4 and E_6 generate an extension of Z/ell(f1,fp) of degree (l-1)/2, and M is the integral closure of Z/ell[f1,fp] in this extension.</p> <p>EDIT: To describe further observations I'll introduce some notation. 
If k is even and non-negative, M[k] will be the Z/ell subspace of M consisting of the mod ell reductions of modular power series in Z[[x]] corresponding to weight k forms for gamma_0 (p). C will be a non-singular projective curve over Z/ell with function field the field of fractions of M, and D will be the divisor of poles of the element fp of M[12].</p> <p>For example, when ell=2 and p=11, then in the notation of my question referenced above, M[k] has dimension k, while M[2] is spanned by 1 and t, and M[4] is spanned by 1,t,t^2 and r. We have the relation r^2+r=t^3+t, and C is the curve r^2+r=t^3+t with the point O at infinity adjoined. f11=r^3+r^4+t^3 has zeros of orders 11 and 1 at (t,r)=(0,0) and (0,1) and D=12(O). Furthermore M[12] is spanned by 1,t,t^2,t^3,t^4,t^5,t^6,r,t*r,t^2*r,t^3*r and t^4*r and is the complete linear series attached to the divisor D.</p> <p>Here's what I think is true in general. Suppose ell=2. Then:</p> <p>1---fp has one zero of order p, and one of order 1 on C.</p> <p>2a--When p is 11 mod 12, fp has (p+1)/12 poles of order 12.</p> <p>2b--When p is 5 mod 12, fp has 1 pole of order 6 and (p-5)/12 of order 12.</p> <p>2c--When p is 7 mod 12, fp has 2 poles of order 4 and (p-7)/12 of order 12.</p> <p>2d--When p is 13 mod 12, fp has 1 pole of order 6, 2 of order 4 and (p-13)/12 of order 12.</p> <p>3---For each k there is a divisor D[k], easily describable in terms of D and k, such that M[k] is the complete linear series attached to D[k]; in particular D[12m]=m(D).</p> <p>I think that entirely similar results hold when ell=3. My belief is that all of this is known, but I'd appreciate proofs and/or references.</p> http://mathoverflow.net/questions/121506/level-p-characteristic-2-modular-forms-and-thetas Level p characteristic 2 modular forms and thetas paul Monsky 2013-02-11T17:17:04Z 2013-03-04T21:29:24Z <p>BACKGROUND</p> <p>Let p be an odd prime. An element of Z/2[[x]] is "modular of level p" if it is the mod 2 reduction of a g in Z[[x]] with g the Fourier expansion of a modular form for gamma_0(p). In various cases when f is modular of level p then f(x^p) can be expressed as a polynomial of a special shape in "characteristic 2 theta series".</p> <p>I've developed a connection between modular function theory and theta series. Namely let [i] in Z/2[[x]] be the sum of the x^(n^2), where n runs over the integers that are congruent to i mod p. Then the field of fractions of the ring generated over the algebraic closure of Z/2 by these theta series identifies with Igusa's field of modular functions for gamma(p). But computer calculation suggests further connections, involving modular forms for gamma_0(p). Here are some examples:</p> <p>1. If f is x+x^9+x^25+x^49+..., the mod 2 reduction of the expansion of delta, then f(x^p) lies in the ring B generated over Z/2 by the [i]. Indeed if we let B' be the subring of B consisting of those elements that are power series in x^p, and are fixed by the automorphisms of B taking [i] to [ni] when n is in (Z/p)*, then f(x^p) is in B'. (Proofs are in my MO questions and answers.)</p> <p>2. If the exponents appearing in the element r of Z/2[[x]] are just the products of the non-zero squares by 1,2,p and 2p, then r is the mod 2 reduction of the expansion of a weight 4 Eisenstein series, and once again r(x^p) is in B'. (Again, proofs are in my MO questions and answers.)</p> <p>3. If p=7 and s is the modular power series of shape x^2+...
coming from a weight 4 modular form, then s(x^7)=[1][2][3], and so is in B'.</p> <p>4. If p=11 and t=x+... comes from the weight 2 cusp form, then t(x^11) is the sum of [1][1][3]+[2][2][5]+[4][4][1]+[3][3][2]+[5][5][4] and [1][1][2][4]+[2][2][4][3]+[4][4][3][5]+[3][3][5][1]+[5][5][1][2], and so lies in B'. (In the compact notation introduced below these two sums are C(1,1,3) and C(1,1,2,4); in particular the last term of the first sum is [5][5][15]=[5][5][4].)</p> <p>QUESTION</p> <p>I can show that when p is 3,5,7 or 11, then f is modular of level p if and only if f(x^p) lies in B'. To what extent does this generalize to larger p?</p> <p>EDIT: It seems likely that the answer to my question is always yes--f is in the ring A of level p characteristic 2 modular power series if and only if f(x^p) is in the subring, B', of the ring B generated by the "theta series". In this edit I'll present further conjectures, some in part known, that would imply this. In a later edit I'll give explicit descriptions of A and B' for p<20. Fix an odd prime p.</p> <p>THE RING A</p> <p>A consists of the mod 2 reductions of all elements of Z[[x]] that are Fourier expansions of modular forms, of arbitrary weight, for gamma_0 (p). A is closed under multiplication. Using the fact that the expansions of the normalized Eisenstein series of weights 4 and 6 lie in 1+2*Z[[x]] we see that A is closed under addition as well, and is a ring. f1=x+x^9+x^25+..., and fp=f1(x^p) are, as I remarked, in A, and so Z/2[f1,fp] is a subring of A.</p> <p>Conjecture 1: A is the integral closure of Z/2[f1,fp] in its field of fractions. (There are 3 separate questions here. Is A integrally closed? Is A integral over the subring? And is the field of fractions of A equal to Z/2(f1,fp)? I suspect that the first, at least, of these is known. My investigations for p<20 support the conjecture.)</p> <p>THE RING B'</p> <p>Let L be the quotient of (Z/p)* by {1,-1}, and P be a polynomial ring over Z/2 in the variables x_i, with i running over L. There is a gradation of P by Z/p, with the "degree" of x_i being i^2. Also, L acts on P by permutation of variables with m taking x_i to x_mi, and the effect of m is to multiply degrees by m^2. Let P' be the subring of P consisting of L-stable elements of "degree" 0. For example when p=13, (x_1)(x_5)+(x_2)(x_3)+(x_4)(x_6) is in P', while when p=7 the same is true of (x_2)(x_1)(x_1)(x_1)+(x_3)(x_2)(x_2)(x_2)+(x_1)(x_3)(x_3)(x_3). Now if i is prime to p, let [i] in Z/2[[x]] be the sum of the x^(n^2) where n runs over the integers congruent to i mod p. [i] only depends on the image of i in L.</p> <p>We define B' to be the image of P' under the ring homomorphism P-->B taking x_i to [i]. There's a simple compact notation for elements of B' that I'll use when presenting my results. For example the image [1][5]+[2][3]+[4][6] of the p=13 element of the last paragraph will be called C(1,5) (or C(2,3) or C(4,6)), while that of the p=7 element is called C(2,1,1,1) (or C(3,2,2,2) or C(1,3,3,3)). Now the answers I've given to other questions on this site show that f1(x^p) and fp(x^p) lie in B'. Evidently my conjecture would follow from Conjecture 1 combined with:</p> <p>Conjecture 2: B' is the integral closure of the ring Z/2[f1(x^p),fp(x^p)] in its field of fractions.</p> <p>I find myself on firmer ground here--I think my MO answers come close to establishing Conjecture 2, though there may be some separability questions when p is 1 mod 4.
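<p>(Aside: identities like the one in example 4 can be machine-checked to any finite order, since--as is classical--t is the mod 2 reduction of the expansion of eta(z)^2*eta(11z)^2. A Python sketch; the truncation degree N is my own arbitrary choice, and C(...) implements the compact notation just introduced:)</p>
<pre><code>N = 1200                       # truncation degree (my choice)
l = 11

def mul(a, b):                 # product of two series mod 2, truncated at N
    c = [0] * N
    sa = [i for i, v in enumerate(a) if v]
    sb = [j for j, v in enumerate(b) if v]
    for i in sa:
        for j in sb:
            if i + j >= N:
                break
            c[i + j] ^= 1
    return c

def theta(i):                  # [i] = sum of x^(n^2) over integers n = +-i mod l
    i %= l
    t = [0] * N
    n = 1
    while n * n < N:
        if n % l in (i, l - i):
            t[n * n] = 1
        n += 1
    return t

def C(*rs):                    # C(r1,...,rk): sum over r=1..(l-1)/2 of [r*r1]...[r*rk]
    tot = [0] * N
    for r in range(1, (l + 1) // 2):
        term = [0] * N
        term[0] = 1
        for s in rs:
            term = mul(term, theta(r * s))
        tot = [u ^ v for u, v in zip(tot, term)]
    return tot

# t = weight 2 newform for gamma_0(11): x * prod (1-x^n)^2 (1-x^(11n))^2, mod 2
t = [0] * N
t[1] = 1
for n in range(1, N):
    for m in (n, n, 11 * n, 11 * n):   # multiply by (1 + x^m) mod 2
        for k in range(N - 1, m - 1, -1):
            t[k] ^= t[k - m]

lhs = [0] * N                  # t(x^11)
for n in range(N):
    if 11 * n < N:
        lhs[11 * n] = t[n]

rhs = [u ^ v for u, v in zip(C(1, 1, 3), C(1, 1, 2, 4))]
print(lhs == rhs)              # expect True
</code></pre>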
The key to showing that B' is integrally closed should lie in the facts I've established about the curve attached to the ring B and the action of PSL_2(Z/p) on this curve.</p> <p>EDIT---EXPLICIT FORMULAS</p> <p>For each odd prime p<20 I'll write down:</p> <p>(a). Explicit generators of A, the polynomial relations they satisfy, and a description of the affine curve attached to A. f1 and fp will be as in the last edit. r in A will be the reduction of the expansion x+... of that weight 4 Eisenstein series for gamma_0 (p) having a zero at infinity. Classical results show that r+r^2=f1+fp. I'll take r as one of the generators of A, and express fp (and consequently f1=r^2+r+fp) as polynomials in my generators.</p> <p>(b). Let R=r(x^p). More generally I'll use this lower case --> upper case convention in passing from an element of Z/2[[x]] to its image when x is replaced by x^p. For each generator g of A as given above, I'll give a formula for G in terms of thetas, using the notation of the last edit. When I have a particularly nice formula for fp(x^p) I'll give it as well.</p> <p>I start with the genus 0 cases.</p> <p>p=3</p> <p>(a) A=Z/2[r]. The curve is the affine line. f3=r^3+r^4</p> <p>(b) R=C(1,1,1)</p> <p>p=5</p> <p>(a) A=Z/2[r]. The curve is the affine line. f5=r^5+r^6</p> <p>(b) R=C(1,2)</p> <p>p=7</p> <p>(a) Let u be the reduction of the expansion x^2+... of that weight 4 form with an order 2 zero at infinity. (This expansion lies in Z[[x]].) Then A=Z/2[r,u], and r^2+ru+u^2+u=0. Since the affine curve y^2+xy+x^2+x=0 has, at infinity, two points conjugate over Z/2, our curve is obtained by removing such a pair of points from the projective line. f7=r^3+r^4+ru.</p> <p>(b) R=C(1,1,1,2)+C(1,2,3) U=C(1,2,3)</p> <p>p=13</p> <p>(a) Let u be the reduction of the rational weight 4 newform. Then A=Z/2[r,u] and u(1+r+r^2)=r+r^2. So A is generated by r and 1/(1+r+r^2) and the curve is the affine line with the points with x^2+x=1 removed. f13=(r^13)(1+r)(1+u)^4.</p> <p>(b) R=C(2,3) U=R+C(1,2,3,5)</p> <p>I next turn to genus 1:</p> <p>p=11</p> <p>(a) Let t be the reduction of the weight 2 newform. Then A=Z/2[r,t], and r^2+r=t^3+t. So the curve is an elliptic curve with the origin removed. f11=r^4+r^3+t^3.</p> <p>(b) R=C(1,1,3) T=R+C(1,1,2,4) and f11(x^11) is T*C(1,2,3,4,5)^2.</p> <p>p=17</p> <p>(a) The Fourier expansions x-3*x^2+... and x+9*x^2+... of the rational weight 4 newform and the weight 4 Eisenstein series vanishing at infinity are congruent mod 4. Let u be the reduction of (1/4)*(their difference). Then A=Z/2[r,u] and (1+r)(u^2+u)=r^2. Let x=1+r and y=ux. Then y^2+xy=x^3+x. So the curve is the affine curve y^2+xy=x^3+x with the point (0,0) removed. f17=(r^2+r)*u^8.</p> <p>(b) R=C(1,4) I have only a horrible formula for U: U=(1+R)(C(1,2,3,4,5,6,7,8)+(R+R^3)(C(1,2,4,8)+C(1,3,4,5))+C(1,3,4,5)+R^2+R^5). But f17(x^17)=C(1,2,4,8)C(1,2,3,4,5,6,7,8).</p> <p>p=19</p> <p>(a) Let t and u be the reductions of the rational newforms of weights 2 and 4. Then u=t/(1+t). Set v=r+u. Then A=Z/2[r,t,u]=Z/2[v,t,1/(1+t)]. And v^2+v=t^3. So the curve is the cubic curve y^2+y=x^3+x with the points with x=1 removed. f19=t*(v^6)*(1+u^4).</p> <p>(b) R=C(1,3,3) V=C(1,2,4,6) T=R+C(2,3,3,4) U=R+V</p> <p>Caveat: My MO results show that my alleged generators of B' given above really generate B'. But for p=11,13,17 and 19 I haven't checked that the alleged generators of A really generate A.
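<p>(The (b) formulas, at least, are easy to machine-check. Here is a minimal Python sketch for the p=3 row, solving r+r^2=f1+f3 recursively and testing both R=C(1,1,1)=[1]^3 and f3=r^3+r^4; the truncation degree N is my own choice:)</p>
<pre><code>N = 1500                       # truncation degree (my choice)

def f1_at(step):               # f1(x^step), where f1 = x + x^9 + x^25 + ...
    s = [0] * N
    n = 1
    while step * n * n < N:
        s[step * n * n] = 1
        n += 2
    return s

f1, f3 = f1_at(1), f1_at(3)
s = [u ^ v for u, v in zip(f1, f3)]

# solve r + r^2 = f1 + f3 with r(0)=0; in char 2 the x^n coeff of r^2 is r_{n/2}
r = [0] * N
for n in range(1, N):
    r[n] = s[n] ^ (r[n // 2] if n % 2 == 0 else 0)

def mul(a, b):                 # product of two series mod 2, truncated at N
    c = [0] * N
    for i, ai in enumerate(a):
        if ai:
            for j in range(N - i):
                if b[j]:
                    c[i + j] ^= 1
    return c

th = [0] * N                   # [1] for p=3: x^(n^2) over n not divisible by 3
n = 1
while n * n < N:
    if n % 3:
        th[n * n] = 1
    n += 1

R = [0] * N                    # R = r(x^3)
for n in range(N):
    if 3 * n < N:
        R[3 * n] = r[n]

print(R == mul(mul(th, th), th))                       # R = C(1,1,1) = [1]^3 ?
r3 = mul(mul(r, r), r)
print(f3 == [u ^ v for u, v in zip(r3, mul(r3, r))])   # f3 = r^3 + r^4 ?
</code></pre>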
But that checking can no doubt be done with Sage.</p> http://mathoverflow.net/questions/117904/elementary-examples-of-the-weil-conjectures/118011#118011 Answer by paul Monsky for Elementary examples of the Weil conjectures paul Monsky 2013-01-04T01:29:56Z 2013-01-04T01:29:56Z <p>It's possible to give a semester course in number theory, free from overt algebraic geometry, that handles the zeta functions of curves over finite fields, proving the functional equation and Weil's RH for them. (And also does Mordell-Weil for elliptic curves over number fields.) I taught such a course to second year grad students some 20 or 30 years ago.</p> <p>One gets around the geometry of curves in 19th century Dedekind fashion, working with their function fields--finite extensions of k(t)--and valuations, just as one does with number fields. Riemann-Roch is done as in Chevalley's book using "repartitions". Rationality and the functional equation follow directly from Riemann-Roch. The RH is done by Bombieri's elegant technique--first one uses Riemann-Roch to get a good upper bound for the number of rational points, then one combines this upper bound with the functional equation to get a good lower bound. (Ayanta thinks the proof is miraculous but uninformative; this may be true of Stepanov's original version, but I find Bombieri's argument to be natural.)</p> <p>Of course there are defects to this approach. Algebraic geometry is far more enlightening. But it can be learned later, in whatever form the student finds appropriate. And there are also advantages. To do Weil-style algebraic geometry one would have to worry about "fields of definition". And Grothendieck's version (which continues to intimidate me) would only appeal to the rare student at this level. That such beautiful mathematics can be presented in such an accessible fashion seems to me a boon.</p> http://mathoverflow.net/questions/117558/are-there-heronian-triangles-that-can-be-decomposed-into-three-smaller-ones/117790#117790 Answer by paul Monsky for Are there Heronian triangles that can be decomposed into three smaller ones? paul Monsky 2013-01-01T17:30:09Z 2013-01-01T19:54:57Z <p>There should be many examples where the interior vertex lies on the perpendicular bisector of one of the sides. It's best to redefine a Heronian triangle to be one with rational sides and rational area. I'll call such a T "standard" if its vertices are at (-2,0), (2,0) and (r,s) where r and s are rational. Every Heronian triangle is similar to a standard one.</p> <p>Now let T be standard and P be (0,(x)-(1/x)) where x is rational. Then P has rational distance from 2 vertices of T. The condition that it have rational distance from the third is that there exists a rational y such that (xx-sx-1)^2+(rx)^2=y^2. If the elliptic curve one gets in this way has positive rank then there will be a dense set of points (0,(x)-(1/x)), all lying on the perpendicular bisector of the base of T, each giving the desired decomposition of T.</p> <p>I worked out the case r=-2, s=3. Unfortunately the curve one gets is one of conductor 15 with 8 rational torsion points and rank 0. But there must be lots of choices of r and s where the rank is positive.</p> <p>EDIT: Here's another construction which should give many examples where the interior point lies on an altitude. Consider a Heronian triangle with the base extending from (0,0) to (a+b,0), and the foot of the altitude to the base at (a,0). Let P be (a,x) where x is rational.
Then P is at a rational distance from one vertex, and is at a rational distance from the other two when there are rational u and v with xx+aa=uu and xx+bb=vv. These equations again define an elliptic curve and one will get a dense set of points (a,x) on the altitude, each giving a desired decomposition, when the curve has positive rank.</p> <p>The interesting question then seems to be the existence of a point that lies neither on an altitude nor on the perpendicular bisector of a side, and that yields the desired decomposition.</p> http://mathoverflow.net/questions/108171/the-mod-3-reduction-of-some-powers-of-delta The mod 3 reduction of some powers of delta paul Monsky 2012-09-26T15:22:38Z 2012-12-13T04:36:20Z <p>Let f in Z/3[[x]] be the mod 3 reduction of the Fourier expansion of the normalized weight 12 cusp form delta for the full modular group. The exponents appearing in f are all 1 mod 3. Fix k>0 and prime to 3. If j is 1 or 2, let S(j) consist of all primes p for which the coefficient of x^p in f^k is j. It's well known that if k=1, S(1) is empty and S(2) consists of all p that are 1 mod 3.</p> <p>I find experimentally that similar results hold when k is 2,4,5 or 10. Indeed it seems:</p> <ol> <li><p>When k=2, S(1) is all p that are 2 mod 9, S(2) all p that are 5 mod 9</p></li> <li><p>When k=4, S(1) is all p that are 4 or 7 mod 9, S(2) is empty</p></li> <li><p>When k=5, S(1) is all p that are 5, 7 or 20 mod 27, S(2) all p that are 8, 11, or 23 mod 27</p></li> <li><p>When k=10, S(1) is all p that are 13 or 25 mod 27, S(2) all p that are 16 or 22 mod 27.</p></li> </ol> <p>I have little doubt that these results hold. But are they known, and is there a reference?</p> <p>EDIT: For the case of reduction mod 2 rather than mod 3, see my recent question, "Does this theorem of Hasse....?". But it seems less likely that techniques from the theory of binary and ternary quadratic forms can yield a proof of the above "results".</p> <p>FURTHER EDIT: Here's a sketch of a proof of the first result. The space of Fourier series of weight 2 modular forms for gamma_0 (9) has a basis of Eisenstein elements F,G, and H lying in Z[[x^3]], xZ[[x^3]], and x^2 Z[[x^3]] respectively. In Z[[x]], F is congruent to 1 mod 12x^3. Furthermore the coefficient of x^n in G is sigma_1(n) when n is 1 mod 3, while the coefficient of x^n in H is (1/3)(sigma_1(n)) when n is 2 mod 3.</p> <p>Let C=x-8x^4+20x^7+... be the Fourier expansion of the weight 4 form (eta(3z))^8 for gamma_0(9). A comparison of the coefficients of x^n for small n gives the identities C=FG-27H^2, and G^2=FH. So mod 3, C^2=G^2=H, and the coefficient of x^p in C^2, when p is a prime congruent to 2 mod 3, is, modulo 3, equal to (1/3)(sigma_1(p))=(p+1)/3. Now the cube of C^2 is the square of f(x^3), where f is the Fourier expansion of delta. It follows that mod 3, the coefficient of x^p in f^2 is (p+1)/3 when p is a prime congruent to 2 mod 3. This is precisely 1. above.</p> http://mathoverflow.net/questions/106267/does-this-variant-of-a-theorem-of-hasse-really-due-to-gauss-have-an-elementary Does this variant of a theorem of Hasse (really due to Gauss) have an "elementary" proof? paul Monsky 2012-09-03T19:27:05Z 2012-10-11T23:04:35Z <p>BACKGROUND</p> <p>Here are 3 theorems of varying difficulty. Let $M$ be the $Z/2$ subspace of $Z/2[[x]]$ spanned by the $f^k$, with $k>0$ and odd, and $f=x+x^9+x^{25}+x^{49}+\cdots$. For $g$ in $M$, let $S(g)$ consist of the primes, $p$, for which the coefficient of $x^p$ in $g$ is 1.
Note that each $p$ in $S(f^k)$ is congruent to $k$ mod 8.</p> <p>T1.----- If $k=3 {\rm\ or\ } 5$, $S(f^k)$ consists of the $p$ that are $k$ mod 8</p> <p>T2.----- $S(f^7)$ consists of the $p$ that are 7 mod 16</p> <p>T3.----- If $k=19 {\rm\ or\ } 21$, then $S(f^k)$ consists of the $p$ that are $k$ or $k+8$ mod 32.</p> <p>To prove T1 when $k=3$, we write $f^k$ as $f*f^2$ and use the fact that if $p$ is 3 mod 8, then $p$ is uniquely the sum of a square and twice a square. When $k=5$ we argue similarly using Fermat's two square theorem.</p> <p>As I indicated in a comment on a recent MO question of Joel Bellaiche, "<a href="http://mathoverflow.net/questions/100701/primes-and-x22y24z2" rel="nofollow">Primes and x^2+2y^2+4z^2</a>", T2 follows from a result of Hasse on the class number of $Q(\sqrt{-2p})$, using Gauss' theorem that the number of representations of $2p$ as a sum of 3 squares is 12*(this class number). Hasse's proof is an application of the Gauss theory of genera and ambiguous forms.</p> <p>T3 is thornier. Because $f$ is the mod 2 reduction of (the Fourier expansion of) the normalized weight 12 cusp form for the full modular group, each $g$ is the mod 2 reduction of a modular form of integral weight. A profound result of Deligne, relating Hecke eigenforms to Galois representations, then shows that $S(g)$ is a "Frobenian set". Nicolas, Serre and Bellaiche, continuing in this vein, developed a theory of level 1 modular forms in characteristic 2 that led to more precise results. Their investigations motivated me to try to determine $S(f^k)$ empirically for small $k$, and I was led to conjecture T3. Joel then applied his methods to give a proof. But this is very hard, and so I ask:</p> <p>QUESTION</p> <p>Does there exist an "elementary proof" of T3, using the theory of binary quadratic forms, along the lines of the Hasse-Gauss argument?</p> <p>EDIT: Motivated by my recent simple proof of T2 (see my answer to the question of Joel cited above), I've found arguments that ought to reduce the proof of T3 to Sage calculations. The point is that forms of weight 2 are easier to deal with than forms of weight 3/2, so one should work with quadratic forms in 4 variables rather than in 3, even when the genera that arise have more than 1 class in them. Here's the idea of my argument for $f^{21}$.</p> <p>Let p be a prime that is 5 mod 8. Writing $f^{21}$ as $(f)(f^2)(f^2)(f^{16})$ we find that if $R$ is (1/16)*(the number of representations of $p$ by $G_1=x^2+2y^2+2z^2+16t^2$ with $x,y,z$ and $t$ all odd), then $p$ is in $S(f^{21})$ if and only if $R$ is odd. Now since $p$ is 5 mod 8, in any representation of $p$ by $G_1$, $x,y$ and $z$ must be odd. So if we set $G_2=x^2+2y^2+2z^2+64t^2$ then $R=(N_1-N_2)/16$, where $N_1$ and $N_2$ are the numbers of representations of $p$ by $G_1$ and $G_2$ respectively. Now write $p$ as $a^2+4b^2$ with $a$ and $b$ congruent to 1 mod 4. Computer calculations indicate:</p> <p>Conjecture 1. $N_1=p+1+2a$</p> <p>Conjecture 2. $N_2=((p+1)/2)+a+4b$</p> <p>If these conjectures hold then $R=(p+1+2a-8b)/32$. The numerator here is $4(b-1)^2+(a+3)(a-1)$, which mod 64 is $(a+3)(a-1)$. So $R$ has the same parity as $(a+3)(a-1)/32$ and is odd just when $a$ is $5$ or $9$ mod $16$. Now mod $32$, $p=a^2+4$. So $R$ is odd just when $p$ is $21$ or $29$ mod $32$, and so the conjectures imply Joel's result for $S(f^{21})$.</p> <p>How does one attack the conjectures? The theta series attached to $G_1$ and $G_2$ are modular forms for $\Gamma_0 (64)$ and $\Gamma_0 (256)$ respectively.
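<p>(Aside: Conjectures 1 and 2 are cheap to test by brute force. A Python sketch--the list of test primes and the normalization search are my own choices:)</p>
<pre><code>from math import isqrt

def reps(p, coeffs):
    """Number of integer tuples (x1,...,xk) with sum of c_i*x_i^2 equal to p."""
    if not coeffs:
        return 1 if p == 0 else 0
    c, rest = coeffs[0], coeffs[1:]
    total = 0
    for x in range(-isqrt(p // c), isqrt(p // c) + 1):
        total += reps(p - c * x * x, rest)
    return total

for p in (5, 13, 29, 37, 53, 61):          # primes that are 5 mod 8
    # normalize p = a^2 + 4b^2 with a and b both congruent to 1 mod 4
    a = next(s for s in range(-isqrt(p), isqrt(p) + 1)
             if s % 4 == 1 and 4 * isqrt((p - s * s) // 4) ** 2 == p - s * s)
    b0 = isqrt((p - a * a) // 4)
    b = b0 if b0 % 4 == 1 else -b0
    N1 = reps(p, (1, 2, 2, 16))             # representations by G_1
    N2 = reps(p, (1, 2, 2, 64))             # representations by G_2
    print(p, N1 == p + 1 + 2 * a, N2 == (p + 1) // 2 + a + 4 * b)
</code></pre>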
If the conjectures are to hold, it seems that each of the theta series attached to $G_1$ and $G_2$ should be a linear combination of Eisenstein series and cusp forms attached to Grossencharaktere for $\mathbb Q(i)$. It should be possible, using Sage, to get an explicit formulation of this, and prove the conjectures.</p> <p>My proposed treatment of $S(f^{19})$ is entirely similar. Suppose $p$ is $3$ mod $8$. Writing $f^{19}$ as $(f)(f)(f)(f^{16})$ and arguing as above we find that if we take $H_1$ and $H_2$ to be $x^2+y^2+z^2+16t^2$ and $x^2+y^2+z^2+64t^2$ respectively, and let $N_1$ and $N_2$ be the number of representations of $p$ by $H_1$ and $H_2$, then $p$ is in $S(f^{19})$ just when $R=(N_1-N_2)/16$ is odd. Now Jacobi's 4 square theorem (see the argument in my answer to Joel's question) shows that $N_1$ is $2(p+1)$. Write $p$ as $a^2+2b^2$ with $a=1$ or $3$ mod 8. The computer suggests:</p> <p>Conjecture 3. $N_2=p+1+4a$</p> <p>So if the conjecture holds, $R=(p+1-4a)/16$, and one sees easily that this is odd just when $p$ is 19 or 27 mod 32. Once again the theta series attached to $H_2$ is a modular form for $\Gamma_0 (256)$. The conjecture indicates that it should be a linear combination of Eisenstein series and cusp forms attached to Grossencharaktere for $\mathbb Q(\sqrt{-2})$; all this should admit a proof using Sage.</p> http://mathoverflow.net/questions/100701/primes-and-x22y24z2/109275#109275 Answer by paul Monsky for Primes and $x^2+2y^2+4z^2$ paul Monsky 2012-10-10T06:00:46Z 2012-10-10T06:00:46Z <p>Here's a simpler argument. We may assume p is 7 mod 8. Let N be the number of triples of squares (r,s,u) with r+2s+4u=p. We will show that N is odd if p is 7 mod 16 and even if p is 15 mod 16. Let M be (1/64)(the number of representations of p by xx+yy+zz+tt). Jacobi's 4 square theorem (which has elementary proofs using quaternions for example) shows that M=(p+1)/8. So it suffices to show that M and N have the same parity. Now if p=xx+yy+zz+tt, then just one of x,y,z,t is even. So M=(1/16)(the number of representations of p by xx+yy+zz+4tt). In other words, M is the number of ordered quadruples of squares (r,s,t,u) with r+s+t+4u=p. Now the involution (r,s,t,u)-->(r,t,s,u) on this set of quadruples has the N fixed points (r,s,s,u), giving the result.</p> http://mathoverflow.net/questions/106859/beautiful-theorems-with-short-proof/106885#106885 Answer by paul Monsky for Beautiful theorems with short proof paul Monsky 2012-09-11T05:23:13Z 2012-09-11T05:23:13Z <p>The proof (via the pigeon-hole principle--continued fractions would need too much preparation) that when D>0 is not a square then the "Pellian equation" xx-Dyy=1 has a non-trivial solution.</p> http://mathoverflow.net/questions/106859/beautiful-theorems-with-short-proof/106866#106866 Answer by paul Monsky for Beautiful theorems with short proof paul Monsky 2012-09-11T01:02:58Z 2012-09-11T01:02:58Z <p>Fermat's proof, by infinite descent, that there is no Pythagorean right triangle whose area is a square might qualify.</p> http://mathoverflow.net/questions/61348/simple-groups-with-the-same-cardinality-as-psl-2z-p Simple groups with the same cardinality as PSL_2(Z/p) paul Monsky 2011-04-11T23:44:23Z 2012-08-23T07:39:15Z <p>In an undergrad honors algebra course it's sometimes shown that when $p$ is prime and $>3$ then $PSL_2(Z/p)$ is simple of order $p(p-1)(p+1)/2$.
But that this is the "only" simple group having that order is seldom or never (except when $p=5$) proved.</p> <p>I've worked out a fairly simple proof, using the Burnside transfer theorem, of this last result, but it's perhaps a little too intricate to present in class.</p> <p>QUESTION: Are there proofs of this result, on-line or in texts, that are appropriate for an undergrad honors algebra class? (If not, I might post an argument on arXiv.)</p> <p>EDIT: Since no simple available proof has yet been found, I'll sketch the argument that I culled from classification arguments for Zassenhaus groups.</p> <p>Suppose G is simple of order p(p-1)(p+1)/2. First one shows that G has p+1 p-Sylows. Let S be the union of Z/p and {infinity}. An easy study of the conjugation action of G on the p-Sylows allows one to identify G with a doubly transitive group of (even) permutations of S, containing the p-cycle z-->z+1. Then the subgroup of G fixing both 0 and infinity is cyclic, generated by z-->cz for some c. Once this is done, the key is in showing:</p> <p>A.--- The subgroup of elements that either fix 0 and infinity or interchange them is dihedral.</p> <p>Once A is shown it's not hard to show that z-->-1/z is in G, thereby identifying G with a fractional linear group. The proof of A is a counting argument when p=1 mod 4. But when p=3 mod 4 the situation is more delicate, and one uses Burnside transfer.</p> http://mathoverflow.net/questions/100265/not-especially-famous-long-open-problems-which-anyone-can-understand/101112#101112 Answer by paul Monsky for Not especially famous, long-open problems which anyone can understand paul Monsky 2012-07-02T02:12:16Z 2012-07-02T02:12:16Z <p>A few decades ago Sherman Stein asked whether a trapezoid whose parallel sides are in the ratio 1:root 2 can be dissected into triangles, all of the same area. This remains open--it's a mystery which trapezoids admit such dissections.</p> http://mathoverflow.net/questions/100265/not-especially-famous-long-open-problems-which-anyone-can-understand/100448#100448 Answer by paul Monsky for Not especially famous, long-open problems which anyone can understand paul Monsky 2012-06-23T11:51:16Z 2012-06-23T11:51:16Z <p>Here's another Birch Swinnerton-Dyer related problem. Sylvester conjectured that every prime that is 4, 7 or 8 mod 9 is a sum of two rational cubes. Elkies (unpublished?) settled the first two cases. As far as I know, the third is still open.</p> http://mathoverflow.net/questions/68247/existence-of-certain-identities-involving-characteristic-2-thetas Existence of certain identities involving characteristic 2 "thetas" paul Monsky 2011-06-19T23:11:15Z 2012-01-25T20:10:37Z <p>Let l=2m+1 be prime. In my previous MO question, "What are the polynomial relations between these characteristic 2 thetas?", I defined a subring of Z/2[[x]] as follows:</p> <p>The subring, S, is generated by [1],...,[m] where [i] is the sum of the x^(n^2), n running over all integers congruent to i mod l.</p> <p>QUESTION...... Let F=x+x^9+x^25+x^49+..., G=F(x^l), and H=G(x^l). Are G and H in S?</p> <p>The answer is yes when l=3,5 or 7. When l=7, if we set a=[1], b=[2] and c=[3], we have the curious identities H=(abc)^3*(abc+ba^3+cb^3+ac^3), and G=(abc)^2+a^7+b^7+c^7+H.</p> <p>Remark 1... Kevin Buzzard explained to me that one can decide whether an explicitly given identity such as the ones we've displayed holds by using the theory of characteristic 2 modular forms and computer calculation. But how does one produce these putative identities?</p> <p>Remark 2...
For all l one can show in an elementary way that H is in the field of fractions of S. In fact if a=[i], b=[2i] and c=[4i], then H is the quotient of a^8(a^8+b^2) by b^4+c. Furthermore for l at most 13, H is in S. (One shows that the quotient lies in S, by combining the "quintic relations" of my MO question cited earlier with Groebner basis computer calculations.)</p> <p>I'll sketch an argument giving the l=7 identities. Let C be the curve in affine 3-space defined by the ideal of quintic relations. C has 3 linear branches at the origin and 3 linear branches at each of the seven points (r,r^4,r^9) with r^7=1. Passing to projective 3-space we find that (the Zariski closure of) C has 14 simple points at infinity. The formula for H as a quotient shows that H has zeros of order 49 at the branches at the origin, simple zeros at the branches at the other singular points, and poles of order 12 at infinity. This leads to the identity for H. To get the identity for G one notes that (GH)+(GH)^2+(G+H)^8=0--see my MO question, "What's known about the reduction...?" It follows from this that if G is in the field of fractions of S then G+H has zeros of order 7 at the branches at the origin, of order 3 at the branches at the other singular points, and poles of order 6 at infinity. This suggests that G+H=(abc)^2+a^7+b^7+c^7. To verify this we set J=(abc)^2+a^7+b^7+c^7+H, and use Groebner basis computer calculations to show that JH+(JH)^2+(J+H)^8=0; it then follows that J=G.</p> <p>EDIT: I think I can now show that when l=11, G is NOT in the field of fractions of S, even though H is in S. I'll make this an answer once I'm surer of it.</p> <p>EDIT #2: My supposed counterexample when l=11 is incorrect; G like H is in S. I had the wrong modular equation of degree 11 relating G and H. Once I found the correct equation, in Cayley's article, I was able to argue as in the case l=7.</p> <p>FINAL(?) EDIT: As I've shown in my answer, G and H are indeed always in S. And I've produced a simple conjectural explicit formula for G+H that holds for l<1500. Whether there is anything comparably simple for H isn't clear. At any rate here are formulas for H when l<24. I write C(a,b,c) for the sum of the [ra]*[rb]*[rc] where r runs from 1 through (l-1)/2; more generally (a,b,c) can be replaced by any multi-set. P is the product of the [r] where r runs from 1 through (l-1)/2. The identity when l=17 is striking.</p> <p>l=3.......... [1]^9+[1]^12</p> <p>l=5.......... P^5+P^6</p> <p>l=7.......... (P^3)(P+C(1,1,1,2))</p> <p>l=11.........(P^2)(C(1,1,3)+C(1,1,2,4))</p> <p>l=13.........P*(P+[1][2][3][5]+[1][4][5][6]+[2][3][4][6]+C(1,1,2,2,2,5))</p> <p>l=17.........P*([1][2][4][8]+[3][5][6][7])</p> <p>l=19.........P*([2][3][5]+[4][6][9]+[1][7][8]+C(3,3,2,4))</p> <p>l=23.........P*(C(1,2,3,3)+C(1,2,4,5)+C(1,4,4,6)+C(1,2,2,5,9))</p> http://mathoverflow.net/questions/52781/whats-known-about-the-mod-2-reduction-of-the-level-l-jacobi-modular-equation What's known about the mod 2 reduction of the level l Jacobi modular equation? paul Monsky 2011-01-21T18:08:30Z 2011-12-27T03:55:08Z <p>Motivation:</p> <p>Let $\ell$ be an odd prime. Let $A$ in ${\mathbb Z}/2[[x]]$ be $x+x^9+x^{25}+x^{49}+...$, and $B=A(x^\ell)$. One can use the level $\ell$ Jacobi modular equation to get a polynomial relation between $A$ and $B$ over ${\mathbb Z}/2$. I'm curious as to what is known about this relation. To be precise, let $\Omega_\ell$ in ${\mathbb Z}[u,v]$ be the modular equation in $u-v$ form; see page 126 of Borwein and Borwein, "Pi and the AGM".
Write this polynomial as a sum of monomials $2^{c_{i,j}} d_{i,j} (u^i) (v^j)$ with the $d_{i,j}$ odd. Let $f \in {\mathbb Z}/2[X,Y]$ be the sum of the $(X^i)(Y^j)$, the sum extending over the pairs $(i,j)$ for which $(c_{i,j})+(1/2)(i+j)$ takes its minimal value. (It appears that this minimal value is $\ell+1$.)</p> <p>It's not hard to see that $f(A,B)=0$. And the theory of the modular equation shows that $f$ is symmetric in $X$ and $Y$. Question---What more is known about $f$?</p> <p>Examples--(See pages 127-132 of Borwein and Borwein which allow one to calculate $f$ for $\ell<29$):</p> <ul> <li>$\ell=3$: $f=XY+(X+Y)^4$</li> <li>$\ell=5$: $f=XY+(X+Y)^6$</li> <li>$\ell=7$: $f=XY+(XY)^2+(X+Y)^8$.</li> </ul> <p>EDIT: A few simple remarks. The $\ell+1$ at the end of the first paragraph above should have been $(1/2)(\ell+1)$; see my comment below. Also problem 6a on page 135 of Borwein and Borwein says that in our notation, $c_{1,1}=(\ell-1)/2$. So $XY$, $X^{\ell+1}$ and $Y^{\ell+1}$ all appear in $f$. Finally the "octicity" result of page 134 problem 3 puts a restriction on the monomials appearing in $f$.</p> <p>EDIT 2: The revised comments below form an edit. (When I tried to put them up as such, a bug intervened.) In them I define the modular functions u and v in terms of Jacobi's thetas, and indicate why one can derive relations between A and B over Z/2 from relations between u and v over Z. I also show that the relation f(X,Y) derived from $\Omega_\ell$ is irreducible.</p> http://mathoverflow.net/questions/30501/variations-on-a-theme-of-obryant-cooper-and-eichhorn-concerning-power-series-ov Variations on a theme of O'Bryant, Cooper and Eichhorn concerning power series over $\mathbb Z/2\mathbb Z$ paul Monsky 2010-07-04T11:06:32Z 2011-12-27T03:53:16Z <p>Define 2 power series over the field $\mathbb Z/2\mathbb Z$ by $f=1+x+x^3+x^6+\dots$, the exponents being the triangular numbers, and $g=1+x+x^4+x^9+\dots$, the exponents being the squares. Write $f/g$ as $c_0+c_1x+c_2x^2+\dots$ with each $c_n$ in $\mathbb Z/2\mathbb Z$.</p> <p><strong>Question.</strong> Is it true that when $n$ is even then $c_n$ is 1 precisely when $n$ is in the set of even triangular numbers $\lbrace 0,6,10,28,36,\dots\rbrace$? <a href="http://mathoverflow.net/users/935/kevin-obryant" rel="nofollow">Kevin O'Bryant</a> has verified that this holds when $n$ is 512 or less.</p> <p><strong>Remark.</strong> If one writes $1/g$ as $b_0+b_1x+b_2x^2+\dots$, then $n\mapsto b_n$ is the characteristic function $\bmod 2$ of the set $B$ studied by O'Bryant, Cooper and Eichhorn (see <a href="http://mathoverflow.net/questions/26839/" rel="nofollow">this question</a> and <a href="http://mathoverflow.net/questions/28462/" rel="nofollow">this one</a> of O'Bryant on MO); they show that when $n$ is even then $b_n$ is 1 precisely when $n$ is twice a square. A positive answer to my question would give a nice characterization of those elements of $B$ that are congruent to $7 \bmod 16$.</p> <p>(I've used the modular forms tag because of the formal similarity of $f$ and $g$ to Jacobi theta functions, and the motivation of O'Bryant, Cooper and Eichhorn in looking at $B$.)</p> http://mathoverflow.net/questions/33711/if-p-is-a-prime-congruent-to-9-mod-16-can-4-divide-the-class-number-of-qp1-4 If p is a prime congruent to 9 mod 16, can 4 divide the class number of Q(p^(1/4))? paul Monsky 2010-07-28T21:01:07Z 2011-12-27T03:52:51Z <p>When $p$ is a prime $\equiv9\bmod16$, the class number, $h$, of $\mathbb Q(p^{1/4})$ is known to be even.
In</p> <p><a href="http://dx.doi.org/10.1515/crll.1980.314.40" rel="nofollow">[Charles J. Parry, A genus theory for quartic fields. <em>Crelle's Journal</em> <strong>314</strong> (1980), 40--71]</a></p> <p>it is shown that $h/2$ is odd when 2 is not a fourth power in $\mathbb Z/p\mathbb Z$. Does this still hold when 2 is a fourth power?</p> <p>Some years ago I gave an (unpublished) proof that this is true provided the elliptic curve $y^2=x^3-px$ has positive rank, and in particular that it is true on the B. Sw.-D. hypothesis. It's known that the above curve has positive rank for primes $\equiv5$ or $7\bmod16$, but to my knowledge $p\equiv9\bmod16$ remains untouched. But perhaps there's an elliptic-curve free approach to my question?</p> http://mathoverflow.net/questions/80944/vanishing-constant-term-in-powers-of-a-laurent-polynomial/84278#84278 Answer by paul Monsky for Vanishing constant term in powers of a Laurent polynomial paul Monsky 2011-12-25T19:02:19Z 2011-12-25T19:02:19Z <p>I'll give an algebraic argument, which is I think essentially the same as Duistermaat's, but substitutes partial fractions and some valuation theory for complex analysis. K will be algebraically closed of characteristic 0; f will be in K[x,1/x]. We suppose f is not in K[x] or K[1/x]. Let r and -s be the largest and smallest exponent of x appearing in f; r and s are >0.</p> <p>Lemma:---Let M be a finite extension of the field of fractions of K[[t]]. Extend the obvious valuation on the field of fractions to M. Then if a is in M with f(a)=t, ord f'(a) must be < 1. (To see this note that ord a is 0. Then a has a Newton-Puiseux expansion a0+(a1)(t^p)+..., with a0 and a1 non-zero, and p a positive rational. The derivation D=d/dt extends to M, and 1=D(f(a)) is the product of f'(a) by (a1)(p)*(t^(p-1))+... So ord f'(a)=1-p which is < 1.)</p> <p>Theorem 1:---Let S be a subset of the algebraic closure of K(z). Suppose that for each a in S, f(a)=1/z. Then the sum over S of the 1/(af'(a)) cannot be z. (To see this let t=1/z. Then K(z)=K(t) which imbeds in the field of fractions of K[[t]], and we may view S as a subset of a finite extension, M, of this field. By the lemma, each 1/(af'(a)) has ord > -1. But z=1/t has ord equal to -1.)</p> <p>Now let c_n be the constant term in f^n, and W=1+... be the element sigma (c_n)(z^n) of K[[z]]. Combinatorialists know that a partial fraction argument shows that W is algebraic over L=K(z). Carrying out the partial fraction argument explicitly one finds:</p> <p>Theorem 2:---There are elements a, lying in a finite extension of L, with each f(a) equal to 1/z, such that zW is the sum of the 1/(af'(a)). (So by Theorem 1, W is not 1, and c_1, c_2,... cannot all be 0.)</p> <p>I'll sketch a proof of Theorem 2. Let U be the element (x^s)(1-zf) of L[x]. If a is a root of U, then f(a)=1/z. So by the lemma f'(a) is not 0. Then U'(a) is not 0, and U is separable. It follows that 1/(1-fz)=(x^s)/U is a sum of c/(x-a) where a runs over the roots of f(x)=1/z. It's easy to see that the c corresponding to a given a is -1/(zf'(a)). I now refer to Lemmas 3.6 and 3.7 of my article "Generating functions attached to some infinite matrices"---see arXiv 0906.1836 or Elec. J. of Comb., v. 18 (1), 2011:</p> <p>If we take U1=x^s and U2=U=(x^s)(1-zf) we are in the situation of Lemma 3.7. W is "the coefficient of x^0 in the element (U1)/(U2)". In the language of the lemma, W is the sum of the l_0(c/(x-a)). The proof of Lemma 3.6 shows that l_0(1/(x-a)) is either 0 or -1/a. So z l_0(c/(x-a)) is either 0 or 1/(af'(a)).
This completes the proof.</p> http://mathoverflow.net/questions/39635/density-stability-questions-for-those-who-like-computer-calculation Density stability; questions for those who like computer calculation paul Monsky 2010-09-22T16:17:41Z 2011-12-22T03:34:45Z <p>BACKGROUND: The question, which has its roots in <a href="http://mathoverflow.net/questions/26839/how-thick-is-the-reciprocal-of-the-squares" rel="nofollow">a question asked on MO</a> by O'Bryant, concerns the relative density of certain subsets, $B$, of ${\mathbb N}$ in congruence classes modulo a power of 2. Let $I$ be such a congruence class. I'll say that $B$ is "stable in $I$" if there is a $c$ such that $B$ has relative density $c$ in $J$ whenever $J$ is a congruence class contained in $I$ whose modulus is also a power of 2.</p> <p>Suppose $B$ consists of all $n$ such that the coefficient of $x^n$ in the reciprocal of the element $g=1+x+x^4+x^9+x^{16}+\dots$ of ${\mathbb Z}/2[[x]]$ is 1. Cooper et al. showed that $B$ has density 0 in 12 of the mod 16 congruence classes. I extended the result to 3 of the 4 remaining classes. But calculations by O'Bryant suggest that in the class 15 mod 16, $B$ is stable with relative density $1/2$. For a detailed account see my note <a href="http://arxiv.org/abs/1009.3985" rel="nofollow">Disquisitiones Arithmeticae and online sequence A108345</a>.</p> <p>These QUESTIONS pertain to sets introduced by Cooper et al.:</p> <ol> <li><p>Replace the exponents $0, 1, 4, 9, \dots$ in $g$ by the numbers $3n^2-2n$, $n \in {\mathbb Z}$, to get a new $B$. This $B$ has density 0 in 7 of the classes mod 8. Does the computer suggest that it is stable with relative density 1/2 in the class 0 mod 8?</p></li> <li><p>Suppose the exponents are $5n^2-4n$, $n \in {\mathbb Z}$. Does the computer suggest that there's a $q$ such that the new $B$ you get is stable in each mod $q$ congruence class? And if so, what do the relative densities appear to be? (The density is provably 0 in some mod 8 classes.)</p></li> <li><p>Answer the same question as 2. when the exponents are the $5n^2-2n$, $n \in {\mathbb Z}$.</p></li> </ol> <p>EDIT: I'll give a modified and generalized version of the question (and an expansion of my answer) using notation and ideas from my MO question on characteristic 2 thetas. Let $L$ be the field of formal Laurent series in $x$ over ${\bf Z}/2$. If $f$ (not zero) is in $L$, $B(f)$ consists of all $n$ for which the coefficient of $x^n$ in $1/f$ is 1. Now fix $l=2m+1$, $m>0$, and for $i$ in $\lbrace1,...,m\rbrace$ let $[i]$ be the element of ${\bf Z}/2[[x]]$ defined in the "thetas" question.</p> <p>Question: For $q$ a power of 2, what does the computer suggest about the relative density of $B([i])$ in the various mod $q$ congruence classes? (Since all elements of $B([i])$ are congruent to $-(i^2)$ mod $l$, these relative densities are at most $1/l$.)</p> <p>Example: When $l=3$, it can be shown that $B([i])$ has density 0 in all congruence classes mod 8, with the possible exception of 7. And the computer (perhaps) indicates that in the 7 mod 8 class (or any class contained therein) the relative density is 1/6.</p> <p>My "answer" generalizes the first sentence of the example. I made no computer calculations--indeed the computer evidence is at first sight contrary to my results because of the slow approach to zero. Let $L(q)$ contained in $L$ be the field of formal Laurent series in $x^q$. Then $L$ is the direct sum of the $(x^k)L(q)$, $k$ in $\lbrace0,...,q-1\rbrace$.
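<p>(Digression, regarding question 1 above: the series inversion and density counts it asks for take only a few lines of Python. A sketch, with the truncation bound my own choice; as noted above, the approach to the limiting densities is slow, so the printed figures are only suggestive:)</p>
<pre><code>N = 1 << 15                    # truncation degree (my choice)

# g = sum of x^(3n^2-2n) over all integers n, coefficients mod 2
g = [0] * N
g[0] = 1                       # the n = 0 term
n = 1
while 3 * n * n - 2 * n < N:
    for e in (3 * n * n - 2 * n, 3 * n * n + 2 * n):   # n and -n
        if e < N:
            g[e] ^= 1
    n += 1

# invert g mod 2: b[0] = 1 and, for m >= 1, b[m] = XOR of b[m-k] over the support of g
support = [k for k in range(1, N) if g[k]]
b = [0] * N
b[0] = 1
for m in range(1, N):
    s = 0
    for k in support:
        if k > m:
            break
        s ^= b[m - k]
    b[m] = s

for cls in range(8):           # empirical relative density of B in each class mod 8
    hits = sum(b[i] for i in range(cls, N, 8))
    print(cls, hits / len(range(cls, N, 8)))
</code></pre>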
Let $p_{(q,k)}$ be the obvious projection map $L\to(x^k)L(q)$. Let $S$ contained in ${\bf Z}/2[[x]]$ be the smallest ring that contains all the $[i]$ and is stable under the $p_{(q,k)}$ for all $q$ and $k$. It can be shown that every element of $S$ is the mod 2 reduction of the Fourier series of an integral weight modular form for a congruence group. A theorem of Serre then shows that if $\sum((c_n)(x^n))$ is in $S$ then the set of $n$ for which $c_n$ is 1 has density 0.</p> <p>As a corollary one finds: Let $p$ be a $p_{(q,k)}$. If $p(1/[i])$ is in $S$ then $B([i])$ has density 0 in the class $k$ mod $q$.</p> <p>By making use of the quintic relations from my theta question I can show that the hypothesis of the theorem holds in various cases. In particular suppose $i$ is prime to $l$. When $l=5$, $B=B([i])$ has density 0 in each mod 32 class except perhaps the 5 classes $n=7$ mod 8 and $n=28$ mod 32. When $l=7$, $B$ has density 0 in each mod 32 class except perhaps the 7 classes 7 mod 8, 14 mod 16 and 28 mod 32. When $l=9$, $B$ has density 0 in each mod 64 class except perhaps the 19 classes 1 and 7 mod 8, 28 mod 32, and 48 mod 64.</p> <p>In the various classes qualified by "except perhaps" in the above paragraph (and the subclasses contained therein) it seems plausible that the relative densities are $1/(2l)$. But this may be wishful thinking. I hope that someone will make further calculations.</p> <p>FURTHER EDIT: Here's a more explicit and more speculative version of my question. Let n_j be the negative exponents appearing in the Laurent series 1/[i], 1/[2i], 1/[4i], 1/[8i],..., and q_j be the largest power of 2 dividing n_j.</p> <p>QUESTION: Does computer evidence support the following speculations?</p> <p>(1) The relative density of B([i]) in each congruence class n_j mod 8q_j, and in all congruence classes modulo a power of 2 contained therein, is 1/(2l).</p> <p>(2) Outside of these congruence classes B([i]) has density 0.</p> <p>For example when l=9 and i=1 the n_j are -16, -7, -4 and -1, and the classes in (1) are 1 mod 8, -1 mod 8, -4 mod 32 and -16 mod 128. The technique I indicated in my earlier edit shows that (2) holds in this case, so one gets 128-37 classes mod 128 where B has density 0. The technique also shows that (2) holds when l=3,5 or 7. This isn't much evidence, and there's far less for (1). But as these are the simplest answers one might hope for, I'd be interested in any calculations concerning them.</p> http://mathoverflow.net/questions/39635/density-stability-questions-for-those-who-like-computer-calculation/84069#84069 Answer by paul Monsky for Density stability; questions for those who like computer calculation paul Monsky 2011-12-22T03:34:45Z 2011-12-22T03:34:45Z <p>For prime l I've now proved (a corrected version of) the speculation (2) made in the further edit to my question. (See Theorem I below.) The proof avoids the extensive computer verifications made in arXiv NT 1107.4137. So let K be an algebraic closure of Z/2. Call an element g=a_0+a_1(x)+a_2(x^2)+... of K[[x]] "sparse" if the n with a_n non-zero form a set of density 0.</p> <p>Lemma 1----Suppose g is in the subring R of K[[x]] generated by K and the [i]. Then g is sparse. (This follows from the fact that the elements of R are the mod 2 reductions of Fourier expansions of modular forms of integral weight, and the theorem of Serre mentioned in my previous answer.)</p> <p>We have shown in another question that the above ring R is the co-ordinate ring of an affine curve C.
Let m_0 be the maximal ideal of R generated by [1],...,[l-1], and p_0 be the point of C corresponding to m_0. We showed in addition (using the fact that l is prime) that m_0 is the only maximal ideal of R containing any of [1],...,[l-1], and that there are (l-1)/2 linear branches at p_0 with distinct branch tangents.</p> <p>Lemma 2---Suppose g=a_0+a_1(x)+... is in K[[x]], that g is the quotient of an element of R by a product of powers of [i], and that g has positive ord at every branch of C centered at p_0. Then g is sparse. (For g is in the localization of R at every maximal ideal other than m_0. Furthermore if n is large g^n is in the localization of R at m_0. So for n large, g^n is in R, and is sparse by Lemma 1. Take n to be a large power of 2 to get the result.)</p> <p>---------Now let U=U_2 be the operator K[[x]]-->K[[x]] taking sum(a_n)(x^n) to sum(a_2n)(x^n). Note that U([i][i]g)=[i]U(g).</p> <p>Lemma 3---The subring of Z/2[[x]] generated by the [i] is stable under U. (It suffices to show that U takes a product of terms, [ ], to an element of this ring. We argue by induction on the number of terms in the product. We may assume that the first 2 terms are [2i] and [2j]. Then [2i][2j] is the sum of [2i][j]^4, [2j][i]^4 and ([i+j][i-j])^2. Multiplying by the remaining terms in the product, applying U, and using induction we get the result.)</p> <p>Now let L be the field of Laurent series in x over Z/2, and L(q) be the field of Laurent series in x^q, where q is a power of 2. We write p_(q,k) for the obvious projection map L-->(x^k)L(q). Note that for g in Z/2[[x]], U(g) is the square root of p_(2,0)(g). So if g is in the ring of Lemma 3, then p_(q,0)(g) is a qth power in that ring.</p> <p>Theorem I---Let q be a power of 2, and suppose that k is in {0,1,2,3,4,5,6,7}. Suppose that g_i=p_(8q,kq)(1/[i]) is in Z/2[[x]] for all i--that is to say that no negative exponents appear in any g_i. Then each g_i is sparse. (In other words each B([i]) has density 0 in each congruence class kq mod 8q.)</p> <p>To prove this note that p_(q,0)(1/[i]) is the quotient of p_(q,0)([i]^(8q-1)) by [i]^8q. The paragraph before Theorem I shows that this is the quotient of v^q by [i]^8q for some v in R. Applying p_(8q,kq) we find that g_i=(1/[i]^8q)(w^q), where w=p_(8,k)(v). Since p_(8,k) stabilizes R, g_i is the quotient of an element of R by a power of [i]. The exponent restriction tells us that if g is a g_j, then g has positive ord at each branch of C centered at p_0, and we invoke Lemma 2.</p> <p>Example: Suppose l=13, q=16 and k=3. The negative exponents appearing in the 1/[i] are -36, -23, -10, -25, -16, -9, -4, and -1. Since none of these is congruent to 48 mod 128, all the g_i are in Z/2[[x]]. It follows from Theorem I that each B([i]) has density 0 in the congruence class 48 mod 128, a result that had eluded me.</p> http://mathoverflow.net/questions/68247/existence-of-certain-identities-involving-characteristic-2-thetas/68573#68573 Answer by paul Monsky for Existence of certain identities involving characteristic 2 "thetas" paul Monsky 2011-06-23T01:14:29Z 2011-11-23T16:18:17Z <p>In the first version of this answer I gave a (necessarily incorrect) proof of the false statement that when the prime, l, is 11, then G is not in the field generated over Z/2 by the [j].
In the second version I found my error, and gave a computer-aided proof that for this l, G is in the ring generated over Z/2 by the [j].</p> <p>In this completely rewritten answer I state the following conjecture and explain why it holds when l is congruent to 1 mod 4 or to 3 mod 8.</p> <p>CONJECTURE: Let l be an odd prime. Then there is a C in the ring generated by the [j] such that C^2+C=G+H. In particular, G, like H, is in the field generated by the [j], and if H is in the ring generated by the [j], the same is true of G.</p> <p>Proof sketch when l=1 mod 4 or l=3 mod 8.------When l=1 mod 4, take r with r^2=-1 mod l. Then [j][rj] only depends on the coset of {1,r,-1,-r} in (Z/l)* that contains j. Take C to be the sum of the [j][rj] where j runs over a set of representatives of the cosets. For example when l=13, C=[1][5]+[2][3]+[4][6]. It's an exercise in the arithmetic of Z[i] to show that C^2+C=G+H. When l=3 mod 8, take r with r^2=-2 mod l, and let C be the sum of the [rj][j][j] where j runs over representatives of the cosets of {1,-1} in the multiplicative group of Z/l. Now the result is proven using the arithmetic of Z[Root(-2)].</p> <p>Remark: When l=7 mod 8, I may present evidence for the truth of the conjecture in a separate question. But now it seems that ternary rather than binary quadratic forms enter the picture.</p> <p>EDIT (11/23/11)</p> <p>I believe I can now prove the above conjecture. But since my proof uses the fact that G is a polynomial (over the algebraic closure, K, of Z/2) in my theta series, it doesn't supersede my other (self-accepted) answer.</p> <p>Here's the idea. Let q=x^l, and E be the elliptic curve Y^2+XY=X^3+(q+q^9+q^25+...) defined over the field of fractions of Z/2[[q]]. The j-invariant of E is 1/(q+q^9+q^25+...) "=" (E_4)^3/(Delta). Using this fact one shows that E is the characteristic 2 Tate curve. The study I've performed of the field, L, generated over K by the theta-series shows that L is the field generated over K by the x co-ordinates of the l-division points of E. (In the proof of this I use the fact that G and H are in L.) But one can write these x co-ordinates explicitly as power series, using a characteristic 2 analogue of the Weierstrass P-function (see Roquette's book). It turns out that there are (l-1)/2 of these division points for which, when their x co-ordinates are summed, one gets a power series C with C^2+C=G+H. So C is in L. Once this is known it's straightforward to see that C is a polynomial in the theta-series. But why the remarkable empirical formulas for C in terms of the theta-series hold when l is 7 mod 8 remains a mystery.</p> http://mathoverflow.net/questions/43925/what-are-the-polynomial-relations-between-these-characteristic-2-thetas What are the polynomial relations between these characteristic 2 "thetas"? paul Monsky 2010-10-28T01:47:32Z 2011-10-16T08:44:49Z <p>Suppose $\ell=2m+1$, $m>0$. Define $[i]$ in $\mathbb{Z}/2\mathbb{Z}[[x]]$ to be $$\sum_{n\equiv i\bmod \ell} x^{n^2}.$$ Note that $[0]=1$, and that $[i]=[j]$ whenever $\ell$ divides $i+j$ or $i-j$.</p> <p>Now let $u_1,...,u_m$ be indeterminates over $\mathbb{Z}/2\mathbb{Z}$, and $f$ be the homomorphism $\mathbb{Z}/2\mathbb{Z}[u_1,...,u_m]\to \mathbb{Z}/2\mathbb{Z}[[x]]$ taking $u_i$ to $[i]$.
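<p>(Aside: everything in this question can be experimented with cheaply. For instance the $\ell=5$ curve relation quoted in the Examples below--that $x^5+y^5+xy+(xy)^2$ vanishes at $(x,y)=([1],[2])$--can be checked to any finite order. A Python sketch, with the truncation degree my own choice:)</p>
<pre><code>N = 2000                       # truncation degree (my choice)
l = 5

def theta(i):                  # [i] = sum of x^(n^2) over integers n = +-i mod l
    t = [0] * N
    n = 1
    while n * n < N:
        if n % l in (i % l, (l - i) % l):
            t[n * n] = 1
        n += 1
    return t

def mul(a, b):                 # product of two series mod 2, truncated at N
    c = [0] * N
    sa = [i for i, v in enumerate(a) if v]
    sb = [j for j, v in enumerate(b) if v]
    for i in sa:
        for j in sb:
            if i + j >= N:
                break
            c[i + j] ^= 1
    return c

a, b = theta(1), theta(2)
ab = mul(a, b)
a5 = mul(a, mul(a, mul(a, mul(a, a))))
b5 = mul(b, mul(b, mul(b, mul(b, b))))
ab2 = mul(ab, ab)
total = [p ^ q ^ r ^ s for p, q, r, s in zip(a5, b5, ab, ab2)]
print(all(v == 0 for v in total))   # expect True: the relation holds mod x^N
</code></pre>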
Using the theory of modular forms I think I can show that the kernel, $P$, of $f$ is a dimension 1 prime ideal.</p> <p>Question 1: What is the genus of (a non-singular projective model) of the curve corresponding to $P$?</p> <p>Examples: When $\ell=5$ the curve one desingularizes is $x^5+y^5+xy+(xy)^2=0$, and the genus is 0.</p> <p>When $\ell=7$, the curve has the following affine plane model of degree 14: $\sum x^iy^j=0$ where $(i,j)$ runs over the 10 pairs $(14,0)$, $(12,1)$, $(10,2)$, $(7,7)$, $(6,4)$, $(5,8)$, $(5,1)$, $(4,5)$, $(1,10)$ and $(0,14)$. (Perhaps someone with access to Singular or time on their hands can work out the genus?).</p> <p>When $\ell=9$ the curve has an affine plane model of degree 27; this time one gets the 20 pairs $(27,0)$, $(24,3)$, $(21,6)$, $(20,1)$, $(15,3)$, $(13,2)$, $(12,15)$, $(12,6)$, $(11,10)$, $(11,1)$, $(9,18)$, $(9,9)$, $(7,17)$, $(6,21)$, $(5,16)$, $(5,7)$, $(4,20)$, $(4,11)$, $(1,23)$ and $(0,27)$.</p> <p>One has the following curious but easily proved relations between the various $[i]$. Let $a$,$b$,$c$,$d$,$e$,$f$ be $[i]$,$[j]$,$[2i]$,$[2j]$,$[i+j]$,$[i-j]$. Then $d(a^4)+c(b^4)+cd+(ef)^2=0$. Each such identity gives rise to a "quintic relation" lying in $P$. (I used these relations to get the curves in the above examples). Let $J$ be the ideal contained in $P$ that is generated by these quintic relations.</p> <p>Rather vague Question 2: What can be said about $J$? For example: Are all the minimal primes of $J$ of dimension 1? If so, what are the associated primes other than $P$? Is $J$ a radical ideal?</p> <p>Examples: When $\ell=5$, $J=P$, and I believe the same holds when $\ell=7$. But when $\ell=9$ one needs to add the element $a(b^2)+b(c^2)+c(a^2)+d+(d^2)+(d^3)$, where $a$,$b$,$c$,$d$ are $u_1$,$u_2$,$u_4$,$u_3$ to $J$ in order to get $P$. Let $K$ be the ideal $(a+ad,b+bd,c+cd,ab+c^2,ac+b^2,bc+a^2)$. Then $K$ is the intersection of three dimension 1 primes, and I believe that $J$ is the intersection of $P$ and $K$.</p> <p>@sleepless--I hope you like this orthography better.</p> <p>EDIT: Here are answers to question 1 when l=9 and l=11. (As I explained in a comment the genus is 3 when l=7. It now appears that it's 10 when l=9 and 26 when l=11). Remarkably when l=3,5,7,9, or 11 the genus is the same as the genus of the compactification of the quotient of the upper half-plane by the principal congruence group, Gamma(l). I doubt that this is a coincidence, and am interested in what experts in the theory of characteristic p modular forms have to say. </p> <p>Suppose first l=9. Extend the constant field from Z/2 to its algebraic closure,K. Let C in affine 4-space be the zero-locus of P, and L/K be the function field of C. P is generated by the "quintic relations" together with ab^2+bc^2+ca^2+d+d^2+d^3, where a,b,c,d are the coordinate functions u1,u2,u4 and u3. It follows that P is stabilized by the linear automorphisms (a,b,d,c)-->(b,c,d,a) and (a,b,d,c)-->(ua,ub,d,uc) with u^3=1. These automorphisms generate an order 9 group, G, which acts on L; let L_0 be the fixed field. It can be shown that L_0 is generated over K by abc and d and that (abc)^3=d^7+d^8+d^9. So L_0/K has genus 1. We now use Riemann-Hurwitz to calculate the genus, g, of L/K. (Since G has odd order, L/L_0 is tamely ramified).</p> <p>The quintic relations all vanish on the line a=b=c=0. It follows that C has 3 points on this line; they are (0,0,d,0) with d+d^2+d^3=0. 
Each of these points is an ordinary triple point, and G permutes the branches at each of these points in a size 3 orbit. All the other orbits of G acting on the places of the function field L/K (including the places at infinity) are of size 9. Riemann-Hurwitz now tells us that 2g-2=9(2-2)+(9-3)+(9-3)+(9-3), so that g=10.</p> <p>When l=11, one can argue in like manner. Now P is generated by the quintic relations, and the similar group G, acting on L/K, has order 55. I think one can again show that the genus of L_0/K is 1; this is the one thing I haven't checked completely. Now C sits in affine 5-space, the origin is an ordinary singular point of multiplicity 5, and G permutes the branches at the origin in a size 5 orbit. All other orbits of G acting on the places of L/K are of size 55 and Riemann-Hurwitz tells us that 2g-2=55(2-2)+(55-5), so that g=26.</p> http://mathoverflow.net/questions/43925/what-are-the-polynomial-relations-between-these-characteristic-2-thetas/78258#78258 Answer by paul Monsky for What are the polynomial relations between these characteristic 2 "thetas"? paul Monsky 2011-10-16T08:44:49Z 2011-10-16T08:44:49Z <p>Felipe Voloch referred me to two 1959 papers of Igusa in v. 81 of Amer. J. of Math., pages 453-475 and 561-577. Results from these papers and techniques I've developed on MO give an answer to my question when l is prime.</p> <p>As I suggested in the edit to the question, the genus is (l-3)(l-5)(l+2)/24. More is true. In the first of the above-cited papers Igusa constructs, for each prime p and each N prime to p, a "field of modular functions of level N", finite and Galois over k(j) where k is the algebraic closure of Z/p. When N is a prime, l, he shows that this field has Galois group PSL_2(Z/l) over k(j) and is the splitting field of the "invariant transformation equation" Phi(X,j). In Lemma 2 of the second paper cited above he shows that this symmetric 2 variable Phi is the mod p reduction of the classical modular equation. I use these results to show that the field of my question, generated over the algebraic closure, k, of Z/2 by the theta series, identifies with Igusa's field of modular functions of level l and characteristic 2.</p> <p>A key observation (the key observation according to Kevin Buzzard--it allows me to pass from modular functions that appear to be of even level to ones evidently of level l) is that Phi(1/G,1/F)=0 where F=x+x^9+x^25+... and G=F(x^l). To see this recall Jacobi's identity ((1-q)(1-q^2)(1-q^3)...)^3=1-3q+5q^3-7q^6+9q^10-... where the exponents are the triangular numbers. Raising to the power 8, multiplying by q, and reducing mod 2 shows that the mod 2 reduction of the Fourier expansion of Delta(z) is F(q). Since j(z) is the quotient of (E_4(z))^3 by Delta(z), the mod 2 reduction of the Fourier expansion of j(z) is 1/F(q), while that of j(lz) is 1/G(q). This together with the result from Igusa's second paper gives the observation.</p> <p>It now suffices to show that my field is the field generated over the algebraic closure, k, of Z/2 by G together with the l+1 conjugates of F over k(G). In various answers to other MO questions I've sketched a proof that my field admits PSL_2(Z/l) as an automorphism group, that it contains all of the above elements, and that the elements of PSL_2 all fix G. I now look at G sitting inside the fixed field of PSL_2, and show that it has exactly one zero (counted with multiplicity) in that field. So the fixed field is precisely k(G).
It follows that my field is generated over k(G) by F and its k(G)-conjugates, concluding the proof.

http://mathoverflow.net/questions/78077/function-fields-of-characteristic-p-modular-curves-and-mod-p-reductions-of-the-c
Question by paul Monsky: "Function fields of characteristic p modular curves, and mod p reductions of the classical modular equation" (2011-10-13)

Let l and p be distinct primes, l>2. There are "characteristic p modular curves" X_0(l) and X(l), defined over an algebraic closure, K, of Z/p, solving moduli problems for elliptic curves with some additional level-l structure. Each of these curves has the same genus as the corresponding characteristic 0 object; in particular the genus of X(l) is (l-3)(l-5)(l+2)/24.

There is also an irreducible symmetric f in Z[x,y] with f(j(lz),j(z))=0, where j is the elliptic modular function. This is the "classical modular equation". Let f* be the mod p reduction of f. I'm looking for a proof that certain well-known relations between f and the function fields of the characteristic zero X_0(l) and X(l) persist when f is replaced by f*, and X_0 and X are replaced by their characteristic p counterparts. I'd like an argument showing:

1) f* is irreducible in K[x,y]

2) The Galois group of f* over K(y) identifies with PSL_2(Z/l).

3) The function field (over K) of the curve defined by f* identifies with the function field of the characteristic p X_0.

4) If L is the splitting field (over K(y)) of f*, then L identifies with the function field (over K) of the characteristic p X.

Remarks:

a) I would guess that 1)---4) somehow follow from the existence of moduli schemes defined over Z[1/l]. But can someone provide a reference and details?

b) A weaker form of 4), whose statement doesn't involve the theory of modular forms in characteristic p, is that the genus of L/K equals the genus of the classical X(l). As an old dog who has trouble with new tricks, I'd be happiest with a classical proof of this result.

c) I'm mostly interested in the case p=2, where I can prove 1) and 2). This is all related to an MO question of mine about the genus of a curve coming from the theory of characteristic 2 theta functions.

http://mathoverflow.net/questions/74480/more-questions-involving-characteristic-2-theta-series-identities
Question by paul Monsky: "More questions involving characteristic 2 theta series identities" (2011-09-04, edited 2011-09-29)

In my answer to my earlier question, "Existence of certain identities involving characteristic 2 thetas", I established some curious identities when the thetas have prime "level" congruent to 1 mod 4 or to 3 mod 8. This question concerns the case when the level is 7 mod 8.

I reprise notation from earlier questions. l is an odd prime and [j] is the sum of the x^(n^2), where n runs over the integers congruent to j mod l; we view the "theta series" [j] as elements of Z/2[[x]]. F is the power series x+x^9+x^25+x^49+x^81+..., G=F(x^l) and H=G(x^l). My identities involve G, H and the various [j].

There is evidently a unique C in Z/2[[x]], having constant term 0, with C^2+C=G+H. I showed that when l is 1 mod 4 or 3 mod 8 (or when l=7), then C can be written explicitly as a polynomial in the [j]. Here is what the computer suggests when l=7 mod 8 and is < 50. First some notation. If (r,s,t) is a triple of integers, we define C(r,s,t) to be the sum of the power series [rj][sj][tj] where j runs from 1 to (l-1)/2. Define C(r,s,t,u) similarly.
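For readers who want to experiment with such identities numerically, here is a minimal Haskell sketch of the truncated mod 2 series arithmetic (an illustrative addition; the function names are ours, not notation from the posts). A series in Z/2[[x]] is represented by the set of exponents with coefficient 1, truncated below a degree bound:

```haskell
import qualified Data.Set as Set
import Data.Set (Set)

-- A truncated element of Z/2[[x]]: the set of exponents with coefficient 1.
type Series = Set Int

-- Addition in Z/2[[x]] is symmetric difference of exponent sets.
plus :: Series -> Series -> Series
plus a b = (a `Set.union` b) `Set.difference` (a `Set.intersection` b)

-- Multiplication mod 2, discarding terms of degree >= bound.
times :: Int -> Series -> Series -> Series
times bound a b =
  foldr plus Set.empty
    [ Set.singleton (i + j) | i <- Set.toList a, j <- Set.toList b, i + j < bound ]

-- The theta series [j]: the sum of x^(n^2) over integers n ≡ j (mod l).
theta :: Int -> Int -> Int -> Series
theta bound l j =
  foldr plus Set.empty
    [ Set.singleton (n * n)
    | n <- [-m .. m], n * n < bound, n `mod` l == j `mod` l ]
  where m = ceiling (sqrt (fromIntegral bound :: Double))
```

With these, a product like [rj][sj][tj] is two applications of `times`, and C(r,s,t) is the `plus`-fold of such products over j = 1, ..., (l-1)/2.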
(When l is 3 mod 8, I showed that C is C(1,1,t) where t^2 is congruent to -2 mod l.)

(1) When l=7, I can show that C=C(1,1,1,2)+C(1,2,3)

(2) When l=23 I think that C=C(3,3,1,2)+C(1,3,6)

(3) When l=31 I think that C=C(3,3,2,3)+C(2,3,7) (In my original post I wrote C(2,5,8), but C(2,3,7)=C(2,5,8))

(4) When l=47 I think that C=C(3,3,2,5)+C(2,3,9)

(Note that the sum of the squares of 3, 3, 2 and 5 is 47, etc.)

QUESTION 1: Can one establish the truth of (2), (3) and (4)? Kevin Buzzard explained to me that it's enough to show that the power series expansions agree up to a certain exponent, but I'm not sure what that exponent is, and I doubt that I have the computer power.

QUESTION 2: Are there identities like those above for l>50? And if so, what are these identities explicitly?

EDIT: Let V be the space spanned by the C(r1,r2,r3,r4) with r1=r2 and l dividing the sum of the squares of r1, r2, r3 and r4, together with the C(s1,s2,s3) with l dividing the sum of the squares of s1, s2 and s3. When l=7 mod 16 I can use Jacobi's 4-square theorem to show that C is in V. It's then possible to prove identities like those of (2) above by exploiting the geometry of Spec R, where R is the subring of Z/2[[x]] generated by the theta series [j].

One can show that an element of V has at most l(l-1)(l+1)/6 poles, counted with multiplicity, on the obvious projective completion of this curve. So if it has a zero of large enough order at the origin, it vanishes. I applied this technique for various l congruent to 7 mod 16; the results boggled my mind. It's only necessary to use 2 terms in the power series expansion of each theta series. When l=23, I got (2) above.

When l=71, I found that C=C(3,3,2,7)+C(5,6,9)

When l=103, I got 5 different expressions for C! Explicitly:

a) C(3,3,6,7)+C(2,9,11)

b) C(7,7,1,2)+C(5,9,10)

c) C(5,5,2,7)+C(1,3,14)

d) C(3,3,2,9)+C(6,7,11)

e) C(1,1,1,10)+C(1,6,13)

It seems possible to me that in general, for l=7 mod 8, one gets h/4 formulae of this sort, where h is the class-number of Q(Root(-2l)). I've discussed the case l=31 in the comment to ARupinski. When l=47, I can show that C(3,3,2,5)+C(2,3,9)=C(1,1,3,6)+C(3,6,7). So if (4) above holds, there's a second formula for C in this case, just as in the case l=31. But I can't prove that C is in V when l=15 mod 16.

UPDATE: Suppose l is 7 mod 8; consider the vectors W in Z^3 with (W,W)=2l. There is a group of order 48 operating on the set of such W by permutation and sign change of co-ordinates; the group operates without fixed points. So if there are 12h such W there are h/4 orbits under the group action.

Ira Gessel's calculations, carried out for l<1500, indicate that there is an involution, O-->O', on the set of orbits, which has the following property. Let O be any of the (h/4) orbits and (r1,r2,r3) be a representative of O with r1 even (so that r2 and r3 are odd). Then if (s1,s2,s3) is a representative of O', we have the explicit identity C((r1)/2,(r1)/2,(r2+r3)/2,(r2-r3)/2)+C(s1,s2,s3)=C.

But to know what these conjectured (but true beyond possibility of doubt) equations are for l>1500, we need to describe the involution. Franz Lemmermeyer suggested that the involution comes from an involution on a set of equivalence classes of quadratic forms of discriminant -8l.
This is surely the case; I'll explain what the involution on the forms is, and how to transfer it to the orbits.

Consider positive quadratic forms rx^2+2sxy+ty^2 with s^2-rt=-2l. Gauss showed that these fall in exactly h equivalence classes under the action of SL_2(Z), where 12h is the number of W with (W,W)=2l; we'll be interested in GL_2 equivalence, however. Since rt=2l+s^2, we find that mod 16, rt is 2, 7, 14 or 15. This can be used to show that one of the following possibilities must occur:

a) Every non-zero n represented by the form is the product of an integer that is 1 or 7 mod 8 by a power of 2.

b) Every non-zero n represented by the form is the product of an integer that is 3 or 5 mod 8 by a power of 2.

In the first case we say that the form is in the principal genus, while in the second that it is in the non-principal genus. There are (h/4) GL_2 classes in the non-principal genus. Furthermore there is an involution on this set of classes taking the class of rx^2+2sxy+2ty^2 to the class of 2rx^2+2sxy+ty^2. I'll call this involution "composition with 2x^2+ly^2".

I now describe a map from the set of (h/4) orbits to the set of (h/4) classes. The map can be shown to be onto, and so is bijective. When we transfer composition with 2x^2+ly^2 to the set of orbits, we get our desired involution; one which is in complete accord with Gessel's calculations. Suppose (W,W)=2l. Let W# consist of all elements of Z^3 orthogonal to W. We attach to W the class of the form (xU+yV,xU+yV), where U and V are a basis of W#. This class is evidently independent of the choice of basis; one can show that it consists of forms of discriminant -8l and lies in the non-principal genus. This gives the desired map from orbits to classes of forms; as I've indicated, it is bijective.

EXAMPLE: Take l=1567, and W=(3,25,50), so that (W,W)=2l. Let O be the orbit of W. I'll calculate O', and write down the conjectured equations coming from O and O'. A basis for W# consists of U=(0,2,-1) and V=(25,1,-2). Then (U,U)=5, (U,V)=4, (V,V)=630, and a form attached to O is 5x^2+8xy+630y^2. Composition with 2x^2+1567y^2 takes this to 10x^2+8xy+315y^2. So we seek U' and V' with (U',U')=10, (U',V')=4, and (V',V')=315. Take U'=(3,1,0). A little experimenting, writing 315 as a sum of 3 squares, shows that we should take V'=(5,-11,13). Then W', which is orthogonal to U' and V', can be taken to be their vector product (13,-39,-38). So O' is the orbit of (13,38,39). And one of our predicted expressions for C is C(25,25,11,14)+C(13,38,39), while another is C(19,19,13,26)+C(3,25,50).

http://mathoverflow.net/questions/68247/existence-of-certain-identities-involving-characteristic-2-thetas/75368#75368
Answer by paul Monsky to "Existence of certain identities involving characteristic 2 'thetas'" (2011-09-14, edited 2011-09-24)

I suppose it's bad form to answer one's own MO question, but I now have an almost complete solution to this one. I can prove:

1. H is always in the ring S generated by the [j].

2. The same holds for G except perhaps when l=15 mod 16. (In "More questions involving characteristic 2 theta series identities" I provide some experimental evidence when l=15 mod 16.)

To prove 1, note that I gave a formula in my question expressing H as a quotient of elements of S. Now I have made a study of the variety V consisting of the zeros of the polynomial relations between the various [j].
V is a curve; when l>3 it has exactly l+1 singular points, each of which is an ordinary multiple point of multiplicity (l-1)/2. Using my formula for H, I can show that it has ord at least 0 at every non-singular point of V, and ord > 0 at every branch centered at every singular point. So it lies in all the local rings of S.

EDIT: NOT SO--the condition of being in the local ring at a singular point is more stringent. For a correct argument see the FINAL EDIT below.

To prove 2, let C be the sum of the x^(ln) where n runs over all (non-zero) integers of the form (square) or 2(square) or l(square) or 2l(square). Note that C^2+C is G+H. So in view of 1, it suffices to show that C is in S. In my previous answer I indicated why this is true when l=1 mod 4 or l=3 mod 8, writing C explicitly as a polynomial in the [j]. I will edit this answer shortly to handle the more difficult case l=7 mod 16.

EDIT: Suppose now l=7 mod 16. Here's a proof that C lies in S. Let T, contained in a product of 4 copies of Z/l, consist of all (r1,r2,r3,r4) other than (0,0,0,0) with (r1^2)+(r2^2)+(r3^2)+(r4^2)=0. There is a group of order (24)(16)=384 acting on T by permutation of co-ordinates and sign changes of co-ordinates. Using the fact that l=7 mod 8, we find that every orbit has size 384 or 192 or 64. Call an orbit "small" if it has size 192 or 64. We shall show that C is a sum of terms attached to the small orbits. To each small orbit attach the power series [r1][r2][r3][r4] where (r1,r2,r3,r4) is an orbit representative; this is independent of the representative. I'll show that C is the sum of these contributions. Clearly every exponent appearing in the sum of these power series is divisible by l. It remains to show that x^ln appears in the sum if and only if n is the product of a non-zero square by 1, 2, l or 2l.

Now the coefficient of x^ln in [r1][r2][r3][r4] is the mod 2 reduction of M, where M is the number of integer 4-tuples (a,b,c,d) satisfying:

(1) (a^2)+(b^2)+(c^2)+(d^2)=ln

(2) (a,b,c,d) reduces to (r1,r2,r3,r4) mod l

Modulo 2, M is (1/64)(the number of (a,b,c,d) satisfying (1) and reducing to an element in the orbit of (r1,r2,r3,r4)). So the sum of the M, modulo 2, is the number of (a,b,c,d) satisfying (1) and reducing to a point in some small orbit. Also the number of (a,b,c,d) satisfying (1) and reducing to a point in an orbit of size 384 is clearly a multiple of 384. So the coefficient of x^ln in our sum is the mod 2 reduction of (1/64)(the number of (a,b,c,d) that satisfy (1) and do not reduce to (0,0,0,0)).

Let R(n) be the number of representations of n as a sum of 4 squares. We have just shown that the coefficient of x^ln in our sum is the mod 2 reduction of (1/64)(R(ln)-R(n/l)). It remains to show that (1/64)(R(ln)-R(n/l)) is odd precisely when n is the product of a square by 1, 2, l or 2l. Jacobi proved that R(n) is 8(the sum of the divisors of n) when n is odd, and 24(the sum of the odd divisors) when n is even. So (1/8)(R(ln)-R(n/l)) is a product of local factors, one from each prime. The factor attached to 2 is 1 or 3. That attached to l is l^t(1+l) where t=(ord_l)(n). Since l=7 mod 16, this is 8(an odd number). Finally, that attached to an odd prime p other than l is the sum of the divisors of p^s where s is (ord_p)(n). This factor is odd just when s is even, and the result follows.

A couple of remarks. When l=15 mod 16 the same argument shows that the sum we've constructed is not C, but 0.
Also an orbit is small precisely when it has a representative with r1=r2 or a representative with r4=0.

FINAL EDIT: I now have an answer I'm prepared to accept, unless some spoilsport finds a flaw; it shows that G, H (and F) all lie in the subring S of Z/2[[x]] generated by the [j], irrespective of l. Unlike the approach taken in the last edit, which exhibited G+H explicitly as a polynomial in the [j] (except when l is 15 mod 16), this one doesn't seem to give nice explicit formulas. I'll be using results from other MO questions of mine, and some further results in manuscript. Let K be an algebraic closure of Z/2, and S' be the subring of K[[x]] generated over K by the [j]. It's enough to show that G, H and F lie in S'.

First I show that they're all in the field of fractions, L, of S'. In another MO post I wrote H as a quotient of 2 elements of S. To handle F I use the following:

(1) For l>3, Spec(S') is a curve with l+1 singular points, among them the maximal ideal m generated by [1],...,[l-1]. These are ordinary singular points of multiplicity (l-1)/2.

(2) There is a group of automorphisms of S'/K isomorphic to PSL_2(Z/l). These automorphisms stabilize the space spanned by [0],...,[l-1] and act transitively on the (l+1)(l-1)/2 valuation rings in L/K containing the local rings at the singular points. The group is generated by the maps [j]-->[rj], r prime to l, [j]-->a^(j^2) [j] where a is an l'th root of unity in L, and a sort of characteristic 2 "Fourier transform".

Now the maps [j]-->[rj] and [j]-->a^(j^2) [j] generate a subgroup B of PSL_2 of order l(l-1)/2, and my "quotient formula for H" shows that B fixes H. So the orbit of H under PSL_2 has size at most l+1. A rather formal calculation with the "Fourier transform" shows that the orbit consists of H and the F(ax) where a^l=1. I claim that each of these elements lies in the local ring of m on S'. For H this is easy; H has ord l^2 at each valuation ring containing m. Taking E to be the sum of [1],...,[(l-1)/2] we find that E+E^4=F+H. So F is in this local ring as well, and the result follows easily for each F(ax). The fact that PSL_2 acts transitively on the singular points now shows that H and the F(ax) lie in the local ring at every singular point. Also the quotient formula for H shows that H has ord 0 at every non-singular point, and the same then holds for the F(ax). Thus H and the F(ax) are in S'; this corrects the argument I gave earlier.

I now turn to G. There is a degree l+1 2-variable symmetric polynomial P over Z/2 with P(F,G)=0. Furthermore P(z,G) is monic of degree l+1, and has H and the F(ax) as roots. Also the constant term of P(z,G) is G^(l+1), while the coefficient of z is G + higher degree terms. Since the product of H and the F(ax), as well as the l'th symmetric function of H and the F(ax), are in S', both G^(l+1) and G+... are in S'. Now over K these 2 elements generate a field between K(G^(l+1)) and K(G); since G+... is in this field, it is all of K(G), and G is in L. Also G^(l+1), as the product of H and the F(ax), is fixed by PSL_2. Since every homomorphism from PSL_2 to the (l+1)th roots of unity is trivial, G is fixed by PSL_2.

At the valuation rings lying over m, G has ord l. So G is in the local ring of m, and consequently in the local ring at every singular point. Furthermore, like H and the F(ax), G has ord 0 at the non-singular points. So it is in S'.
(Note also that, like H and the F(ax), G has poles of order 12 at every valuation ring in L/K that doesn't contain S'.)

http://mathoverflow.net/questions/125877/computing-certain-class-numbers-modulo-4
Comment by paul Monsky (2013-03-29): @Sarah--In the answer to your earlier question I could have used the seventh power of g = x+x^4+x^9+x^16+... rather than the eleventh. Then it would turn out that the coefficient of x^pq in the expansion is odd or even according as the class number of Q(root(-2pq)) is 4 mod 8 or 0 mod 8. And the Gauss theory of forms axx+bxy+cyy with b^2-4ac=-8pq would show that the class number is 4 mod 8 when (q/p)=-1 and 0 mod 8 when (q/p)=1. So my argument would show that g^7 isn't the reduction of the expansion at infinity of any modular form for any gamma_0.

http://mathoverflow.net/questions/125007/does-there-exist-a-half-integer-weight-theta-function-which-is-is-equivalent-to-1/125128#125128
Comment by paul Monsky (2013-03-27): Alternatively, if p=5 mod 8 and q=7 mod 8 then the number of ways of writing pq as s_1+2s_2+8s_3 with the s_i squares is the same as the number of ways of writing pq as s_1+2s_2+8s_3 with the s_i non-zero squares. So the coefficients of x^pq in g^11 and in (1+g)^11 are the same.

http://mathoverflow.net/questions/125007/does-there-exist-a-half-integer-weight-theta-function-which-is-is-equivalent-to-1/125128#125128
Comment by paul Monsky (2013-03-27): @Sarah. That's right. Another way to say it--instead of using (1/2)*(phi-(E_4)^m) in my edited comment, use (1/2)*(phi+(E_4)^m).

http://mathoverflow.net/questions/125007/does-there-exist-a-half-integer-weight-theta-function-which-is-is-equivalent-to-1/125128#125128
Comment by paul Monsky (2013-03-25): Here's the argument that if u is the mod 2 reduction of an element of Z[[x]] that is the expansion at infinity of some modular form phi of weight w for gamma_0(N), then the space spanned by the image of u under the formal Hecke operators "T_q", q prime, has finite dimension. For fixed N and w the Z-module of such elements of Z[[x]] has finite rank and is stable under the Hecke T_q where (N,q)=1. So the image of this module under mod 2 reduction contains u and has finite Z/2 dimension. The T_q with (2N,q)=1 reduce to "T_q". These "T_q" stabilize the image. And only finitely many q divide N.

http://mathoverflow.net/questions/125007/does-there-exist-a-half-integer-weight-theta-function-which-is-is-equivalent-to-1/125128#125128
Comment by paul Monsky (2013-03-22): @Will-My proof? is more or less a version of this idea, as I think my conjectures when the level is odd should follow from results of Igusa on the modular curve. But I have no feeling for what happens when the level is even. (I took Katz and Mazur out of the library last year, but it went unread--it's not for amateurs.)

http://mathoverflow.net/questions/125007/does-there-exist-a-half-integer-weight-theta-function-which-is-is-equivalent-to-1
Comment by paul Monsky (2013-03-19): Maybe the following approach might be helpful. (I'm guessing that the answer to your question is no.)
If there is such a theta, raising it to an appropriate odd power, multiplying what you get by 1+2(x+x^4+x^9+...), subtracting off an Eisenstein series and dividing by 2 would give a modular form of integral weight whose mod 2 reduction, g, is x+x^4+x^9+.... Then a theorem of Serre would imply that for any k almost all the coefficients of g^k are 0. I wonder what the evidence is for or against this claim about g.

http://mathoverflow.net/questions/124243/are-these-empirical-discoveries-about-the-serre-swinnerton-dyer-ring-of-prime-lev
Comment by paul Monsky (2013-03-11): @Alberto: Thanks. I followed your suggestion.

http://mathoverflow.net/questions/121506/level-p-characteristic-2-modular-forms-and-thetas
Comment by paul Monsky (2013-03-05): There's a typo in my treatment of p=19. I should have written y^2+y=x^3, not y^2+y=x^3+x.

http://mathoverflow.net/questions/118512/n22m4-2p2e4-hasnt-solution-in-integers
Comment by paul Monsky (2013-01-13): Since the OP hasn't accepted Franz Lemmermeyer's simple answer on stackexchange, I'll give an expanded version of it here. It's enough to show that 2K^2=(M^4)-(p^2)(e^4) has no integer solutions with (M,e)=1. Suppose on the contrary there's a solution. If M or e is even then the right-hand side is odd, a contradiction. So M and e are odd, and (M^2)+p(e^2) is 6 mod 8. But since the gcd of the positive integers (M^2)+p(e^2) and (M^2)-p(e^2) divides 2p, while the product is twice a square, (M^2)+p(e^2) is the product of an odd square by 2 or 2p and is 2 mod 8.

http://mathoverflow.net/questions/118512/n22m4-2p2e4-hasnt-solution-in-integers
Comment by paul Monsky (2013-01-12): This question has satisfactory answers on stackexchange--it could be closed as no longer relevant.

http://mathoverflow.net/questions/118117/can-every-curve-be-written-as-fxgy/118148#118148
Comment by paul Monsky (2013-01-06): Suppose you restrict attention to curves defined over Q? Is the result still true, and can you exhibit an example? What about the Klein quartic for example? I believe its Jacobian is a factor of the Jacobian of the Fermat curve of degree 7, but it's not clear to me whether it is a model of some f(x)=g(y).

http://mathoverflow.net/questions/117904/elementary-examples-of-the-weil-conjectures/117909#117909
Comment by paul Monsky (2013-01-03): But the most elementary proof of RH for curves is that of Bombieri, which uses nothing more than RR for curves, and avoids any higher dimensional algebraic geometry.

http://mathoverflow.net/questions/108171/the-mod-3-reduction-of-some-powers-of-delta
Comment by paul Monsky (2012-12-14): Ramanujan determined the mod 27 reduction of the Fourier expansion of delta, and perhaps others have worked on my particular powers of delta.

http://mathoverflow.net/questions/108171/the-mod-3-reduction-of-some-powers-of-delta
Comment by paul Monsky (2012-12-14): @Will--I agree. Let g in Z/2[[x]] be the characteristic 2 analogue of f. Joel Bellaiche proved a conjecture of Nicolas and Serre, and using this found just which linear combinations of the g^k with k odd corresponded to abelian Galois representations. (In particular the g^k with k=3,5,7,19 and 21 are "abelian"--I write a little about this on other MO questions.)
I've experimentally confirmed a characteristic 3 analogue of Joel's result on linear combinations, but have no proofs. However, my question about f^k where k=2, 4, 5 or 10 perhaps admits a more elementary answer--(to be continued)

http://mathoverflow.net/questions/106859/beautiful-theorems-with-short-proof
Comment by paul Monsky (2012-09-11): This one is probably on your list--Schur's proof of the Euler pentagonal number theorem by comparing partitions of n into an even number of distinct parts and partitions into an odd number of distinct parts.
https://physics.stackexchange.com/questions/497152/why-are-there-two-photons-in-pair-production-feynman-diagram?noredirect=1 | # Why are there two photons in pair production Feynman diagram?
Given [figure not reproduced: Feynman diagrams, with panel a) showing pair production]:
I wonder why there are two photons entering in a) pair production? Isn't it one photon resulting in $e^+e^-$?
https://en.wikibooks.org/wiki/A-level_Physics_(Advancing_Physics)/Force | Magnetic fields exert a force on a charge when the charge is moving. If the charge is stationary, no force is exerted. This force is given by:
$\overrightarrow{F} = q(\overrightarrow{v} \times \overrightarrow{B})$,
where q is the charge on the point charge, v is its velocity and B is the magnetic field strength. This involves a vector cross product, which you don't need to know about for A-level. However, you do need to know a simplified version of this. The magnitude of this force F is given by:
$F = Bqv \sin\theta$,
where θ is the angle between the direction of motion of the point charge and the direction of the magnetic field. If the velocity and the magnetic field are in the same direction, then θ = 0, so sin θ = 0 and F = 0. If the velocity and the magnetic field are perpendicular to each other, θ = ½π, so sin θ = 1. This means that, in the special case where velocity is perpendicular to the magnetic field:
$F = Bqv$
If q is negative (for example, for an electron), the force is in the opposite direction.
## Current
A current is just a flow of moving electrons, and so a magnetic field will exert a force on a wire with a current flowing through it. The case you need to know about is when the magnetic field is perpendicular to the wire. In this case, the magnitude of the force on the wire is given by:
$F = BIL$,
where I is current, and L is the length of the wire.
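The two formulae above are easy to check numerically. Here is a minimal Haskell sketch (an illustrative addition; the function names are our own, and SI units are assumed throughout):

```haskell
-- Force on a moving point charge: F = B q v sin(theta).
forceOnCharge :: Double -> Double -> Double -> Double -> Double
forceOnCharge b q v theta = b * q * v * sin theta

-- Force on a current-carrying wire perpendicular to the field: F = B I L.
forceOnWire :: Double -> Double -> Double -> Double
forceOnWire b i l = b * i * l

main :: IO ()
main = do
  -- e.g. a proton (q = 1.6e-19 C) at 1.0e6 m/s, perpendicular to a 0.5 T field
  print (forceOnCharge 0.5 1.6e-19 1.0e6 (pi / 2))  -- 8.0e-14 N
  -- e.g. a 0.3 m wire carrying 2 A, perpendicular to the same field
  print (forceOnWire 0.5 2 0.3)                     -- 0.3 N
```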
## Direction
Fleming's left-hand rule
The direction of the force on either a point charge or on a wire can be worked out using Fleming's left-hand rule, as shown in the diagram on the right. The direction of the thumb is that of the force (or thrust), the direction of the first finger is that of the magnetic field, and the direction of the second finger is that of the current (or the motion of the point charge).
The point and fletchings of an arrow.
On a 2D diagram, the direction of a magnetic field is represented by one of two symbols, which resemble the point and fletchings of an arrow pointing in the direction of the magnetic field. The symbol $\bigodot$ means that the field is pointing towards you (just as the arrow would be, if you were looking at the point). The symbol $\bigotimes$ means that the field is pointing away from you (just as the arrow would be, if you were looking at the fletching).
## Questions
1. What force is exerted by a 1 T magnetic field on an electron (of charge $-1.6 \times 10^{-19}$ C) moving at 5% of the speed of light ($3 \times 10^8\,\mathrm{m\,s^{-1}}$)?
2. What force is exerted by a 5 mT magnetic field on a 20 cm wire with resistance 1 μΩ attached to a 9 V battery?
3. The following diagram shows a positive charge moving through a magnetic field. Draw an arrow representing the direction of the force on the charge.
4. The following diagram shows a wire in a magnetic field. Draw an arrow representing the direction of the force on the wire.
Worked Solutions
https://grant.jurgensen.dev/2022/01/08/CurryHoward.html | ## Background
Early in the 20th century, mathematics underwent a sort of existential crisis concerning its fundamental capabilities. Prompted by questions of soundness and decidability, a sudden interest arose in formalizing mathematics. At the time, many were optimistic that such formalizations would reveal a sound, complete, and decidable theory for all mathematics. Of course, as we know now, such aspirations were quickly extinguished. Gödel, Church, and Turing all demonstrated fundamental limits of formal systems.
The dream of a perfect formal mathematics died out, but adequate formalisms arose to quell the existential crisis. The larger mathematics community returned to informal contexts, often with the implicit assumption that one’s work could be translated into Zermelo-Fraenkel set theory, or some other sound formalism.
Meanwhile, work on formalisms continued. Out of the ashes of the smoldering ambition for a perfect mathematics arose type theory. From Russell's Principia Mathematica, to Church's simply typed λ-calculus, all the way to Martin-Löf and beyond, type theory emerged as a family of powerful formalisms, modeling mathematics and programming alike.
It is in this context that Haskell Curry and William Alvin Howard independently converged upon a beautiful idea, both stunningly simple and profound, known now as the “Curry-Howard correspondence”.
Before jumping in, I should make proper attribution to a couple of sources which heavily inspired this presentation:
## Types
To the working software engineer, “types” can mean many different things. To some, it may describe little else than the bit-length of the corresponding data. To many functional programmers, types form the very basis for how they understand code. Among this crowd, you may hear the mantra “thinking in types” repeated, and for good reason. In such languages, types really are foundational to the interpretation of programs.
Before we proceed, we should narrow our conception of types. We will very much adopt this latter notion of types, which do not correspond to a value's underlying hardware representation, but instead logically constrain the value, giving meaning to well-typed terms, and disqualifying the ill-typed.
We should also note that when we refer to a type theory, we don’t mean just the typing rules, but also the rewrite/reduction rules which defines computation. In complex theories, these two domains can be difficult to pull apart.
Let’s introduce some basic types and related concepts, which should be more or less familiar to anyone who’s worked with Haskell or Standard ML (SML), both of which have intentionally built off of traditional type theory concepts and notation.
First, we assume some collection of type primitives. For example, a string type, a natural number type, etc.
Type judgments come in the form $\Gamma \vdash x : A$, which is read “$x$ has type $A$ under context $\Gamma$”. If the context is empty, we may omit it, writing instead $\vdash x : A$. For instance, we may rightly assert $\vdash 3 : \mathbb{N}$, where $\mathbb{N}$ is the type of the natural numbers.
Contexts are necessary to type open terms. For instance, what would the type of $x + x$ be? This expression is “open”, since it contains an unbound variable, $x$. We can only type the expression if the type of $x$ is known. For instance, we may say $x : \mathbb{N} \vdash x + x : \mathbb{N}$.
In addition to our primitive types, we have a handful of type combinators. First is the binary product, which is the type of pairs. We write the product of $A$ and $B$ as $A \times B$. In SML, this would be A * B. In Haskell, it would confusingly be written (A, B). The type rules of products are represented by the following inference rules:
$\cfrac{\Gamma \vdash a: A \qquad \Gamma \vdash b: B}{\Gamma \vdash (a, b) : A \times B}[\times\text{-I}] \qquad \cfrac{\Gamma \vdash x : A \times B}{\Gamma \vdash \pi_1\ x : A}[\times\text{-E}_1] \qquad \cfrac{\Gamma \vdash x : A \times B}{\Gamma \vdash \pi_2\ x : B}[\times\text{-E}_2]$
Recall that the judgments on the top side of an inference rule specifies the preconditions, and the judgment on bottom specifies the conclusion. The name of each rule is listed on the right-hand side, in square brackets. Rules which end with “I” describe introduction rules, i.e. rules which describes how a construct is built. The rules ending in “E” are elimination rules, and describe how a construct is taken apart.
In the two elimination rules, $\pi_1$ is the first projection, and $\pi_2$ is the second. In most programming languages, these would be called fst and snd. The computational behavior of the projections may well be guessed, but for total clarity we provide the formal rules here. We use $x \leadsto y$ to mean “$x$ computes to $y$”.
$\cfrac{}{\pi_1\ (a, b) \leadsto a}[\pi_1\text{-red}] \qquad \cfrac{}{\pi_2\ (a, b) \leadsto b}[\pi_2\text{-red}]$
Next is the sum type. Conceptually, a sum $A + B$ describes values corresponding to either a value in $A$ or a value in $B$. In Haskell, this would be written as Either A B. In SML, we might use the type (A, B) result for this purpose.
$\cfrac{\Gamma \vdash a: A}{\Gamma \vdash inl\ a : A + B}[+\text{-I}_1] \qquad \cfrac{\Gamma \vdash b: B}{\Gamma \vdash inr\ b : A + B}[+\text{-I}_2]\\[20pt] \cfrac{\Gamma \vdash f : A \rightarrow C \qquad \Gamma \vdash g : B \rightarrow C \qquad \Gamma \vdash x : A + B}{\Gamma \vdash case\ f\ g\ x : C}[+\text{-E}]$
Note the types $A \rightarrow C$ and $B \rightarrow C$ in the elimination rule. These are function types, which will be discussed shortly.
The computations rules are again straightforward, but we include them for reference.
$\cfrac{}{case\ f\ g\ (inl\ a) \leadsto f\ a}[case\text{-red}_1] \qquad \cfrac{}{case\ f\ g\ (inr\ b) \leadsto g\ b}[case\text{-red}_2]$
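If you'd like to see these rules executed, here is a minimal Haskell rendering (a sketch; it uses the built-in pair and Either types mentioned above, and caseSum is just the Prelude's either under another name):

```haskell
-- Products: fst and snd play the roles of the projections π₁ and π₂.
swap :: (a, b) -> (b, a)
swap p = (snd p, fst p)

-- Sums: Left and Right play the roles of inl and inr,
-- and caseSum is the eliminator from the +-E rule.
caseSum :: (a -> c) -> (b -> c) -> Either a b -> c
caseSum f _ (Left a)  = f a   -- case-red₁
caseSum _ g (Right b) = g b   -- case-red₂
```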
Finally, we have the function type. We use the common syntax for functions borrowed from the λ-calculus. In this syntax, a function has the shape $\lambda x. y$. Here, $x$ is the name of the function argument, and $y$ is the body of the function, where $x$ may occur as a free (i.e. unbound) variable. In ordinary mathematics, we might write $f(x) = y$. This is equivalent to the lambda function, except it is a declaration/binding instead of an anonymous expression.
A function type is written $A \rightarrow B$, meaning it takes a value of type $A$, and evaluates to an expression of type $B$. Function application is written without any parentheses, but with a space. I.e., $f\ x$ is the application of function $f$ to expression $x$.
Formally:
$\cfrac{\Gamma, x : A \vdash y : B}{\Gamma \vdash \lambda x. y : A \rightarrow B}[\rightarrow\text{-I}] \qquad \cfrac{\Gamma \vdash f : A \rightarrow B \qquad \Gamma \vdash x : A}{\Gamma \vdash f\ x : B}[\rightarrow\text{-E}]$
In the introduction rule, the comma operator represents an extension of the type context. That is, $\Gamma, x : A$ represents the environment $\Gamma$ with the additional entry $x: A$ (overshadowing previous types of $x$ if they exist). This is the first rule which has modified the context, and none so far have used it. If you need convincing that our context is necessary, consider the follow rules.
$\cfrac{}{\Gamma, x: A \vdash x : A}[\text{var}] \qquad \cfrac{\Gamma, x: A, y: B \vdash z : C \qquad x \neq y}{\Gamma, y: B, x: A \vdash z : C}[\text{exchange}]$
The “var” rule is the one which consumes the context. “exchange” simply allows us to reorder independent variables in the context as needed. These rules aren't important for our purposes. I included them only for completeness.
Returning to functions, note we only allow a single argument. If you wish to define a function with multiple arguments, you may take a product instead. For instance, consider the following function:
$(\lambda x. (\pi_1\ x) + (\pi_2\ x)) : (\mathbb{N} \times \mathbb{N}) \rightarrow \mathbb{N}$
This is functional, but not ideal. Instead, we tend to write curried functions. Rather than taking a pair as an argument, we take the first argument, and return a function taking the second argument and returning the output:
$(\lambda x. \lambda y. x + y) : \mathbb{N} \rightarrow (\mathbb{N} \rightarrow \mathbb{N})$
This representation is much easier to work with. To make currying easier, we take the notational liberties of allowing $\lambda$s to bind multiple arguments, and we understand $\rightarrow$ to implicitly associate rightward. Thus, we could rewrite the above example as so:
$(\lambda x\ y. x + y) : \mathbb{N} \rightarrow \mathbb{N} \rightarrow \mathbb{N}$
Similarly, we associate function application leftward. So the expression $(\lambda x\ y. x + y)\ 1\ 2$ is equivalent to the more explicit $((\lambda x\ y. x + y)\ 1)\ 2$.
We conclude with the computational behavior of functions:
$\cfrac{}{(\lambda x. y)\ z \leadsto y[x \mapsto z]}[\rightarrow\text{-red}]$
In this reduction rule, we use $y[x \mapsto z]$ to represent the substitution of free variables $x$ with term $z$ in expression $y$.
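The same conventions carry over directly to Haskell; a short sketch:

```haskell
-- -> associates rightward: Int -> Int -> Int means Int -> (Int -> Int).
add :: Int -> Int -> Int
add = \x y -> x + y

-- Application associates leftward: add 1 2 is (add 1) 2,
-- and beta-reduction substitutes the arguments into the body.
three :: Int
three = add 1 2
```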
## Propositions
Let’s pivot to review logical propositions. Perhaps you learned about these in your undergraduate Computer Science studies. If not, no worries, I’ll be defining everything here, just as I did with our types.
Propositional judgments will take the form $\Gamma \vdash A$. Here, $A$ is a proposition, and $\Gamma$ is a list of assumptions. When nothing is assumed we write $\vdash A$. Just as we did for types, we will assume there to be some set of primitive/atomic propositions. We then define some important propositional connectives. Recall that a proposition is conceptually just a statement which is true or false.
The first connective is logical conjunction. Let $A$ and $B$ be propositions. Then the conjunction $A \wedge B$ is also a proposition. This may be read “and”, as in “both $A$ and $B$ are true”. You may have seen this operator with different notation, such as $A * B$ in digital circuit design, or A && B in a variety of programming languages.
Just as we expressed typing rules with inference rules, we may use inference rules to describe valid logical derivations. For instance, conjunction is characterized by the following rules:
$\cfrac{\Gamma \vdash A \qquad \Gamma \vdash B}{\Gamma \vdash A \wedge B}[\wedge\text{-I}] \qquad \cfrac{\Gamma \vdash A \wedge B}{\Gamma \vdash A}[\wedge\text{-E}_1] \qquad \cfrac{\Gamma \vdash A \wedge B}{\Gamma \vdash B}[\wedge\text{-E}_2]$
Similarly, $A \vee B$ is logical disjunction, and corresponds to “or”, as in “either $A$ or $B$ is true”. You may have seen the alternative notation $A + B$ or A || B.
$\cfrac{\Gamma \vdash A}{\Gamma \vdash A \vee B}[\vee\text{-I}_1] \qquad \cfrac{\Gamma \vdash B}{\Gamma \vdash A \vee B}[\vee\text{-I}_2] \qquad \cfrac{\Gamma \vdash A \vee B \qquad \Gamma \vdash A \Rightarrow C \qquad \Gamma \vdash B \Rightarrow C}{\Gamma \vdash C}[\vee\text{-E}]$
There are a couple of important takeaways from these rules. First, note that $\Rightarrow$, used in the elimination rule, represents logical implication, which we will get to shortly. Second, note that this definition of disjunction is not as powerful as it could be. In order to prove $A \vee B$, we must be able to prove $A$ or $B$. What if we wanted to prove $X \vee \neg X$ for some unknown $X$, where $\neg X$ is the logical negation (complement) of $X$? This is valid in classical logic, since the two arms of the disjunction are complementary: exactly one of them must be true. Yet, we have no way to prove it here, since we don't have any idea which arm of the disjunction holds. This principle, known as the law of the excluded middle, is what separates classical logics from intuitionistic/constructive logics. Since our logic does not support the principle, it is intuitionistic.
Finally, as promised, we have logical implication, written $A \Rightarrow B$. This is interpreted “$A$ implies $B$”, or “if $A$, then $B$”. Note that if $A$ does not hold, then the implication says nothing of $B$. Therefore, when $A$ is false, $A \Rightarrow B$ is trivially true. Alternative notation includes $A \supset B$.
$\cfrac{\Gamma, A \vdash B}{\Gamma \vdash A \Rightarrow B}[\Rightarrow\text{-I}] \qquad \cfrac{\Gamma \vdash A \Rightarrow B \qquad \Gamma \vdash A}{\Gamma \vdash B}[\Rightarrow\text{-E}]$
Now that we have added to the assumptions, we need to add some rules for how assumptions may be used:
$\cfrac{}{\Gamma, A \vdash A}[\text{assumption}] \qquad \cfrac{\Gamma, A, B \vdash C}{\Gamma, B, A \vdash C}[\text{exchange}]$
## Propositions as Types
We have arrived at last at the big reveal. Perhaps you’ve already guessed at a connection between types and propositions based on the similarity of their inference rules. There is in fact a tremendously deep connection between the two: any given type may be interpreted as proposition, and any proposition as a type!
This is what is known as the Curry-Howard correspondence (or the Curry-Howard isomorphism, or simply by its slogan “propositions as types”). In the definitions above, the product type corresponds to conjunction, the sum type corresponds to disjunction, and the function type corresponds to implication.
What do we mean though when we say a type and proposition “correspond”, or that a statement may be “interpreted” as either a type or proposition? We do not mean that one has to squint to see some hazy connection. The relationship is much stronger than that. Rather, we are proposing that we need not make any distinction between propositions and types which “correspond”.
For instance, consider the product $A \times B$. Under the Curry-Howard correspondence, we say that this product is quite literally the same as the conjunction $A ∧ B$. In fact let’s abandon all of our proposition-specific notation in favor of the type notations. Now, when we wish to represent a logical conjunction, we will simply use the type notation $A \times B$.
This all begs the question, if we interpret a type as a proposition, then how do we interpret a term of said type? This is perhaps the most exciting part of all: a term may be considered a proof of said proposition! Then computation (term reduction) corresponds to proof simplification.
Using the product type again as an example, let $(a, b) : A \times B$. This is simple to understand as a typed term. Rather than considering this to be a pair of values, why not consider it a pair of proofs? If $a$ is a proof of $A$, and $b$ a proof of $B$, shouldn't the pair of proofs $(a, b)$ constitute a proof of the conjunction $A \times B$?
If you don’t believe me, take another look at the introduction rules for $\times$ and $\wedge$, this time with further emphasis:
$\cfrac{\Gamma \vdash \color{red}{a:} A \qquad \Gamma \vdash \color{red}{b:} B}{\Gamma \vdash \color{red}{(a, b) :} A \color{red}\times B}[\times\text{-I}] \qquad \cfrac{\Gamma \vdash A \qquad \Gamma \vdash B}{\Gamma \vdash A \color{red}\wedge B}[\wedge\text{-I}]$
Rather than highlighting the rules’ commonalities, it was simpler for me to highlight their differences. Aside from the symbol, the only difference is that the type rules keep track of the term (or proof?) in question.
This is not a one-off; I did not choose the only two inference rules with such commonality. Go back to any of the previous inference rules, and see how the type and logic rules compare.
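Here is the correspondence made concrete in Haskell (a sketch, with names of our own choosing; each function below is simultaneously a program and a proof of the proposition its type expresses):

```haskell
-- Conjunction commutes: a proof of A ∧ B yields a proof of B ∧ A.
andComm :: (a, b) -> (b, a)
andComm (x, y) = (y, x)

-- Disjunction commutes: a proof of A ∨ B yields a proof of B ∨ A.
orComm :: Either a b -> Either b a
orComm (Left x)  = Right x
orComm (Right y) = Left y

-- Modus ponens is just function application.
modusPonens :: (a -> b) -> a -> b
modusPonens f x = f x
```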
Of particular interest are the rules for disjunction. Recall that I defined disjunction in the intuitionistic fashion, even though classical reasoning is stronger and ubiquitous. If I desired, I could have added an inference rule for LEM:
$\cfrac{}{A \vee \neg A}[\text{LEM}]$
But then, what would the corresponding rule be for the sum type? We haven’t formally defined negation yet, so it may be difficult to explore this question yourself, in which case, trust me when I say there is not necessarily a value of this type given some arbitrary type $A$.
The final correspondence thus far is that between the function type $A \rightarrow B$ and the implication $A \Rightarrow B$. Intuitively, a function which transforms a proof of $A$ into a proof of $B$ ought to be considered a proof of the implication $A \Rightarrow B$.
## Expanding our Type Theory / Logic
We have learned that all of our propositions thus far may be interpreted as types (and vice versa). The natural impulse is to keep going! Let’s consider more propositions, and their reinterpretations/definitions as types.
First up, how might we define $\top$ (pronounced “top” or “true”), the proposition representing truth? Logically, we would have the following inference rule:
$\cfrac{}{\Gamma \vdash \top}[\top\text{-I'}]$
Note that we have an introduction rule, but no elimination rule. This is because no information goes into the derivation of $\top$, so nothing can be extracted from it either.
How would we expand this definition to a type? All we need to do is name the term implicit in our previous inference rule:
$\cfrac{}{\Gamma \vdash tt : \top}[\top\text{-I}]$
We are left with a type of just one element, named $tt$. We could have added more elements to the type, but why would we want more than one proof for $\top$? All that matters is that it is inhabited.
You may recall from various languages a distinguished type of just one element: the so called “unit” type, sometimes represented $()$. The unit type and $\top$ are isomorphic under the Curry-Howard correspondence.
Next, we define $\top$’s dual, $\bot$ (pronounced “bottom” or “false”), the proposition representing falsity.
$\cfrac{\Gamma \vdash \bot}{\Gamma \vdash A}[\bot\text{-E'}]$
While $\top$ had only an introduction rule, $\bot$ has only an elimination rule. Note that the conclusion of the elimination rule is arbitrary. If we somehow managed to prove $\bot$ (which should only be possible in contradictory contexts), then we may conclude any proposition. This principle has the Latin name ex falso quodlibet, translating to "from falsehood, anything (follows)". It is also known as the principle of explosion.
To translate the inference rule into a typing rule, we must name the eliminator:
$\cfrac{\Gamma \vdash x : \bot}{\Gamma \vdash exfalso\ A\ x : A}[\bot\text{-E}]$
Note that we have not given the type $\bot$ any inhabitants (it has no introduction rule). Indeed, by the Curry-Howard correspondence, $\bot$ corresponds to the empty type, i.e. the type with no terms. This might seem a strange definition, in that its type interpretation does not seem immediately useful, and you likely have never come across such a type in the wild. While the uninhabited type is not particularly common in programs, it does appear from time to time. For instance, see the Void type in Haskell. Its usefulness may be clearer if you consider that the empty type acts as an identity element (up to isomorphism) on the sum type.
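In Haskell terms (a sketch using the base library's Data.Void; the names truth and exFalso are ours):

```haskell
import Data.Void (Void, absurd)

-- ⊤ corresponds to the unit type: trivially inhabited, carrying no information.
truth :: ()
truth = ()

-- ⊥ corresponds to Void: uninhabited.
-- absurd is ex falso quodlibet: from Void, conclude anything.
exFalso :: Void -> a
exFalso = absurd
```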
Next up, how can we define the negation operator we referenced earlier? This one is trickier. Our motivating intuition is that $A$ and $\neg A$ should be mutually exclusive. That is, if both were true, a contradiction (a proof of $\bot$) should follow. Therefore, we may define negation as $\neg A \equiv A \rightarrow \bot$.
While this is a reasonable definition, we can observe in it further limitations of constructive logic. In classical logic, one may prove the proposition $(\neg\neg A) \rightarrow A$ for arbitrary $A$. We call this “double-negation elimination”, and we can not prove it in constructive logic. To see why we can’t prove it, consider the proposition with the negations “unfolded” (replaced by their definition): $((A \rightarrow \bot) \rightarrow \bot) \rightarrow A$. Recall that a proof of an implication is a function. To prove this property, we’d need to write a function returning a proof of $A$ from its argument, but the type of its argument $(A \rightarrow \bot) \rightarrow \bot$ is completely unhelpful in that regard.
However, if we were to add LEM, this double-negation elimination would be derivable. I invite the enthusiastic reader to try to write the proof term themselves. Let $lem\ A : A + \neg A$. Everything else you need has already been defined.
For the less enthusiastic reader (or to check your work), here is the proof term:
$(\lambda f.\ case\ (\lambda a. a)\ (\lambda n.\ exfalso\ A\ (f\ n))\ (lem\ A)) : (\neg \neg A) \rightarrow A$
We conduct the proof by case analysis on $lem\ A : A + \neg A$. If $A$ holds, we have reached our conclusion, and simply return the proof term. If instead $\neg A$ holds, recall that the argument $f$ has type $\neg \neg A$, or after one unfolding, $\neg A \rightarrow \bot$. We apply $f$ to our proof term of type $\neg A$ to get $\bot$, which then feeds into $exfalso$ to produce a proof of our goal, $A$.
Don’t fret if you found this example confusing. I don’t expect the reader to be able to be able to write such terms, or even follow them perfectly at this point. I just want the reader to begin to engage in the writing and interpretation of concrete proof terms.
## Practical Concerns Combining Proofs and Programs
What is our end goal here? Is it to interpret all propositions as types? Or all types as propositions? By and large, the answer is neither. Most applications of the Curry-Howard correspondence lead to a formal language which supports notions of both programs and proofs, as well as arbitrary intermixing of the two.
If we are going to support both programs and proofs with our type theory, we need to be careful that nothing we add invalidates either interpretation. For instance, if we were to add LEM to the entire type theory, this would invalidate the computational interpretation, since the LEM proposition has no corresponding terms! Conversely, we need to ensure that our typing rules support a sound logic.
Most existing programming languages are unsuitable for the propositional interpretation because their type system is unsound when interpreted as a logic. The most common culprit for this is non-termination. For instance, consider the term let loop = (λx. loop x) in loop (). Such a term would be well-typed in languages like Haskell and SML. The nefarious detail is that the term may be given an arbitrary type! That is, the above term is a universal inhabitant. It can satisfy any type. From a "propositions as types" perspective, this is disastrous. The last thing we want is for every proposition to be trivially provable by some universal proof term! Our logic would lose all its meaning!
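You can witness this in GHC directly (a sketch; it typechecks, though evaluating it diverges):

```haskell
import Data.Void (Void)

-- A universal inhabitant: loop typechecks at every type...
loop :: a
loop = loop

-- ...so, read as a proof, it even "proves" falsehood.
bogus :: Void
bogus = loop
```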
In order to preserve this interpretation of programs as proofs, we must ensure that all programs terminate. In practice, this means placing restrictions on recursion to ensure that we are making real progress. One such restriction is to only accept recursive calls which are invoked on structurally smaller arguments (or “smaller” with respect to some well-founded relation). From a programming perspective, this requirement is annoying but tolerable in most cases.
If you look closely, we have avoided this issue by avoiding recursion altogether. The usual Y combinator is ill-typed in our theory. If our theory was anything other than pedagogical, we would want to add some means of well-founded recursion.
Similar soundness issues arise from other universal inhabitants such as “undefined” or “null” values and exceptions. (In fact, theorem provers like Coq actually do allow one to define terms with an “undefined”-like mechanism. Such terms are considered axioms. Crucially, one can always see which axioms are used by any particular proof. Sound but non-constructive propositions are often introduced as axioms, such as LEM).
Finally, we note that, while the Curry-Howard correspondence allows us to use types and propositions interchangeably, it is often preferable in practice to distinguish between them, even when they are fundamentally built atop the same mechanisms.
For instance, the Coq theorem prover distinguishes regular types from propositions by assigning the former the kind Set, and the latter the kind Prop. This provides a number of benefits. For one, Coq specifications are often translated into programs of another language, such as Haskell or OCaml. In the extraction process, one can totally erase any term of kind Prop. Since Props aren’t considered computationally significant, we do not care about their actual terms. We only care that they type-check (that the proofs are valid). Crucially, Coq disallows a computational term to depend on the value of a Prop; it may only depend on its type. This makes the aforementioned extraction possible, and preserves our intuitive understanding that a Prop is only meaningful insofar as it is either inhabited or unihabited.
Furthermore, we can add classical axioms such as LEM, but restrict them to the Prop kind. Since the computational terms cannot inspect proof terms, it will not affect our computations that the LEM axiom does not have a corresponding term.
When we wish to make explicit a distinction between type and proposition, we may return to the original propositional notation rather than using the type-equivalent notation.
## Dependent Type Theory
As interesting as our type theory is, you’d find it difficult to express propositions of any complexity. Our current type theory models a constructive propositional logic. We’d like to expand our theory to instead model a constructive first-order logic. And who knows, maybe some type ideas will fall out of the propositional definitions!
First-order logic is characterized by its inclusion of two quantifiers. First is the universal quantifier, written $\forall$ and pronounced “for all”. For instance, the proposition $\forall x: \mathbb{N}. even\ (2 * x)$ would be read “for any $x$, $2*x$ is even”.
The second quantifier is known as the existential quantifier, represented by $\exists$, and pronounced “exists”. For instance, we may define the $even$ predicate from the previous example as $\lambda y: \mathbb{N}. \exists x: \mathbb{N}. 2 * x = y$.¹ We would read $even\ y$ as “there exists some $x$ such that $2 * x$ is $y$”.
I won’t include the formal rules here, nor the rules of their type equivalents. The theory is quickly complicated by the intermixing of terms and types. In particular, we would now need to differentiate different kinds of types. A notion of convertibility would also be needed. For those interested in such a formalism, I’d suggest looking at section 1.2, “Dependent Types”, in Advanced Topics in Types and Programming Languages.
Let’s instead consider the type interpretations informally. If we were to introduce these quantifiers to our theory, what would their type interpretations be? Incredibly, they have extremely natural interpretations. Let’s begin with the universal quantifier. In type theory, rather than writing $\forall$, we write $\Pi$ (uppercase pi). The $\Pi$-type generalizes function types. Let $P : A \rightarrow Type$, where $Type$ is the type of simple types.² Then the type $\Pi a: A. P\ a$ represents a function which takes an argument $a: A$, and evaluates to a term of type $P\ a$. Note that the specific term which is applied to the function is visible to the return type. One could say that the return type $P\ a$ is dependent on the input value $a: A$. The ability of a type to depend on a value is what gives dependent type theory its name.
As previously mentioned, the $\Pi$-type generalizes function types. This means that the non-dependent function type $A \rightarrow B$ may be viewed simply as shorthand for $\Pi x: A. B$, where $x$ does not occur in $B$.
Does the $\Pi$-type really represent universal quantification? Let’s revisit our previous example, which we’ll rewrite $\Pi x. even\ (2 * x)$. A proof of said proposition would then be a function which takes some natural $x$, and returns a proof of $even\ (2 * x)$. This seems to perfectly capture the notion of universal quantification, does it not?
The $\Pi$-type is not useful only for propositions; it also makes types and programs vastly more expressive. Consider the classic example of a vector type, which is simply the list type indexed by length. So a term of type $vec\ A\ n$ is a term of type $list\ A$, where the list has length $n$. What would the type of the $cons$ operation on vectors be? It would be $cons: \Pi A\ n. A \rightarrow vec\ A\ n \rightarrow vec\ A\ (n+1)$. Two important notes for this definition: first, we capture multiple arguments with the $\Pi$-binder the same way we would capture multiple arguments with the $\lambda$-binder; second, rather than informally asserting that $A$ is some arbitrary pre-defined type, we explicitly introduce it with the $\Pi$-binder.
Now let’s tackle the existential quantifier. In type theory, we use the symbol $\Sigma$ rather than $\exists$. Just as the $\Pi$-type generalized function types, the $\Sigma$-type generalizes product types. Therefore, we may consider the pair $(x, y)$ a term of type $\Sigma x: A. P\ x$ if $x: A$, and $y: P\ x$. Interpreted propositionally, a proof of $\Sigma x: A. P\ x$ is a pair where the first element is the witness to the existential, and the second element is a proof that the witness satisfies the predicate. For instance, what would a term of type $even\ y$, which unfolds to $\Sigma x. 2 * x = y$, look like? It would be a pair where the first element is some $x: \mathbb{N}$, and the second element is a proof of $2 * x = y$.
Let’s put all of these pieces together with an example. Try to understand the meaning of the following term:
$iso \equiv \lambda A\ B. \Sigma (f: A \rightarrow B) (g: B \rightarrow A). (\Pi b. f\ (g\ b) = b) \times (\Pi a. g\ (f\ a) = a)$
The provocatively named $iso$ has two interpretations. If one interprets $iso\ A\ B$ propositionally, then it is a proof that the types $A$ and $B$ are isomorphic. If one instead interprets it computationally, then $iso\ A\ B$ is an isomorphism, where one can project out the primary isomorphism, the inverse isomorphism, and the isomorphism proof.
Let’s look at another example:
$\left(\lambda A\ B\ C\ (f: A \rightarrow B) (g: B \rightarrow C) (x: A). g\ (f\ x)\right)\\ :\ \Pi A\ B\ C. (A \rightarrow B) \rightarrow (B \rightarrow C) \rightarrow A \rightarrow C$
Once again, there are two interpretations. If we interpret the arrows as function types, this is function composition (although its function arguments are backwards compared to the usual presentation). Interpreting the arrows instead as implications, this is a proof of the transitivity of implications.
## Wrapping up
At this point, we’ve reached a theory of considerable complexity. To really grok the ideas presented here, I’d suggest working with them interactively. Theorem proving languages like Coq implement a language similar to, but more powerful than, our type theory here. To get started, I’d highly recommend the Software Foundations series.
Before I finish, I have just one closing thought. The examples included throughout this post are deliberately chosen to highlight the dual interpretations of types and proofs. In practice, most types will not make for very meaningful propositions, and most propositions will not make for meaningful types. For instance, the type $\mathbb{N}$ is totally uninteresting as a proposition. Similarly, our proposition $\Pi x. even\ (2 * x)$ is not a useful type. This does not weaken our correspondence, as there is still tremendous value in this unified approach to types and propositions.
## Bonus
There is another insightful correspondence of types, which you may have picked up on based on our names and notation. “Sum” and “product” types are quite suggestive names aren’t they? In what sense do these types reflect arithmetic sums and products? Can we extend the correspondence to other types?
These are questions which are worth thinking about yourself for a moment. What you’ll find is that the correspondence arises from thinking of types in terms of their number of unique inhabitants.
For instance, the bottom type $\bot$/Void has no proofs/inhabitants, and is thus associated with the number $0$. Similarly, $\top$/unit has one inhabitant, and is therefore identified with the number $1$. Some authors will even use these numbers rather than the notation/names we have been using when it is clear from context they are discussing types.
Are these numbers complete descriptions of their respective types? Yes and no. Consider the “type” $2$. The most obvious concrete type that comes to mind is the boolean type. But we could just as easily come up with another type that has two inhabitants, say, $foo$ and $bar$. These types would not conventionally be considered literally equal. But they are isomorphic in the sense that there exists a bijection between the two. It is therefore perfectly reasonable to consider them in some sense semantically equivalent.
Moving on to the type operators, the type sum $A + B$ behaves exactly as its name suggests. If $A$ has $|A|$ elements, and $B$ has $|B|$ elements, then $A + B$ has $|A| + |B|$ elements (I trust the reader to determine by context when I use $+$ for a sum type and when I use it for an arithmetic sum).
The same relationship holds between the type product and the arithmetic product.
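A quick way to convince yourself of both counting rules is to enumerate small finite “types” directly. Here is a minimal Python sketch (an illustration only: lists stand in for types, and tags play the role of the sum type’s constructors):

```python
from itertools import product

A = [0, 1]             # a two-element "type"
B = ["x", "y", "z"]    # a three-element "type"

# Sum type: a tagged union, so inhabitants remember which side they came from.
a_plus_b = [("inl", a) for a in A] + [("inr", b) for b in B]
assert len(a_plus_b) == len(A) + len(B)      # |A + B| = |A| + |B|

# Product type: ordinary pairs.
a_times_b = list(product(A, B))
assert len(a_times_b) == len(A) * len(B)     # |A x B| = |A| * |B|
```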
As with any correspondence, our first thought is often whether this is a meaningful correspondence, or just a fun observation. I would argue it is certainly meaningful and a useful way to think of these types.
For one, we can often “lift” arithmetic properties into our understanding of types. Take the distributivity of products over sums: $\forall x\ y\ z \in \mathbb{N}. x * (y + z) = (x * y) + (x * z)$. Does this have a corresponding proposition over types? It certainly does! We have $\Pi\ X\ Y\ Z: Type. X * (Y + Z) \cong (X * Y) + (X * Z)$.
While it is easiest to think of these properties over finite types with sizes characterized by the natural numbers, we are by no means limited to such types. To generalize our understanding to potentially infinite types, we borrow the set-theoretic notion of cardinality. We can characterize the cardinality of a type by the existence of isomorphisms. Crucially, properties such as “distributivity” from the previous example apply just as well to infinite types as they do to finite types.
Let’s keep going with our mapping of type operators to numerical operators. How about the (non-dependent) function type $A \rightarrow B$. This type operator corresponds to exponentiation, and is occasionally written $B^A$ accordingly. Earlier, we said that a type is associated with its number of unique inhabitants. Uniqueness necessarily requires a notion of equality (or at least, inequality). For the purpose of this correspondence, we consider two functions to be equal if and only if they are extensionally equal. That is, for functions $f$ and $g$, we have $f = g$ if and only if $\Pi x. f\ x = g\ x$. This is a perspective of functional equality which treats functions as black boxes. All notion of internal structure is eschewed, and a given function is instead identified totally by its value at each point of input.
There is an interesting property of nested exponentials for naturals and other simple numbers: $\forall a\ b\ c. (c^b)^a = c^{a * b}$. Lifting this property to our type domain, we have $\Pi a\ b\ c: Type. (a \rightarrow b \rightarrow c) \cong (a \times b \rightarrow c)$. Not only is this true, it captures a familiar concept to functional programmers. This embodies currying/un-currying!
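For finite types, this cardinality claim can even be checked by brute-force counting in a few lines of Python (a sketch, with sizes standing in for the types themselves):

```python
def num_functions(dom_size, cod_size):
    # There are |B| ** |A| functions from a set of size |A| to one of size |B|.
    return cod_size ** dom_size

a, b, c = 2, 3, 4   # sizes of three small finite types

curried   = num_functions(a, num_functions(b, c))  # |a -> (b -> c)|
uncurried = num_functions(a * b, c)                # |(a x b) -> c|
assert curried == uncurried == c ** (a * b)        # (c^b)^a = c^(a*b)
```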
Finally, we come to the dependent types. These can be quite confusing under the arithmetic correspondence because one has the natural inclination to identify each dependent type constructor with its non-dependent equivalent, which would actually send us down the wrong path! We shall see why soon.
First we consider the $\Sigma$-type. Of course, the sigma notation is reminiscent of classical summations. This has incredible potential to confuse. After all, aren’t $\Sigma$-types the dependent generalization of product types? This confusion is quite natural, but upon closer examination, we see it is in fact perfectly appropriate to view $\Sigma$-types as a sort of summation, while at the same time generalizing a binary product.
To prime our intuition, let’s narrow our focus for a moment to a finite type $X$, of size $n \in \mathbb{N}$. Let $f : X \rightarrow Type$ be an arbitrary type family on $X$. Then I claim there exists the following isomorphism:
$\Sigma x. f\ x \cong f\ x_1 + f\ x_2 + \ldots + f\ x_n$
In the above equation, $x_1, x_2, \ldots, x_n$ represent the $n$ distinct inhabitants of $X$.
To see that such an isomorphism does indeed exist, it is important to remember that elements of the sum type $A + B$ remember whether they inhabit the left type $A$ or the right type $B$. So for a term of type $f\ x_1 + f\ x_2 + \ldots + f\ x_n$, we can easily recover which $x : X$ the term corresponds to.
This presentation does not extend to summations over infinite types, simply because the right-hand side would then be an infinite sequence of binary sums. Still, the interpretation of $\Sigma$-types as summations is perfectly consistent even over infinite types. To see why, we may consider the cardinality of a type $\Sigma x. f\ x$:
$|\Sigma x: X. f\ x| = \Sigma x: X. |f\ x|$
Here, the sigma on the left-hand side constructs a $\Sigma$-type, whereas the sigma on the right-hand side is an arithmetic summation of cardinal numbers, ranging over the inhabitants $x$ of $X$.
How did we get this equation? Recall that the type $\Sigma x: X. f\ x$ describes a pair, where the first element has type $X$, and which we call $x$, and the second element has type $f\ x$. To count the number of inhabitants of this $\Sigma$-type, we simply note that each pair has some initial element $x$, and from this $x$, we have $|f\ x|$ distinct elements to occupy the second element of our pair, thus leading to the summation we see above.
Now we see that $\Sigma$-types do in fact strongly correspond to summations. So how is it that they are a generalization of binary products? This can be explained purely arithmetically. We note that a binary product is in some sense a degenerate summation. For instance, let $a, b \in \mathbb{N}$. Then $a * b = \Sigma x \in [1,a]. b$, where $x$ of course does not occur in $b$ since $b$ is not an open term but a natural number. This is just the interpretation of multiplication as repeated addition! The summation is “degenerate” in the sense that the bound element $x$ does not occur free in $b$. Similarly, in the world of types, if $A, B: Type$, then $A \times B \cong \Sigma x: A. B$ (again, $B$ is closed, $x$ does not occur in $B$).
A very similar story can be told of $\Pi$-types. Namely, we have $|\Pi x: X. f\ x| = \Pi x: X. |f\ x|$. Just as degenerate summations corresponded to binary products, we can see that degenerate products correspond to exponentiation: $\forall a b, b^a = \Pi x \in [1,a]. b$, or in terms of types, $\forall A B: Type. A \rightarrow B \cong \Pi x: A. B$.
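Both cardinality equations are easy to check by enumeration for a small finite type family. In the sketch below (an illustration only, with lists standing in for types), a dependent pair is a tagged value and a dependent function is a choice of one value per index:

```python
from itertools import product

X = ["a", "b", "c"]
f = {"a": [0], "b": [0, 1], "c": [0, 1, 2]}   # a type family over X

# Sigma: dependent pairs (x, v) with v drawn from f(x).
sigma = [(x, v) for x in X for v in f[x]]
assert len(sigma) == sum(len(f[x]) for x in X)   # |Sigma x. f x| = sum over x

# Pi: dependent functions, i.e. one value from f(x) for every x.
pi = list(product(*(f[x] for x in X)))
total = 1
for x in X:
    total *= len(f[x])
assert len(pi) == total                          # |Pi x. f x| = product over x
```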
# Footnotes
1. We have of course elided over a formal definition of equality. Equality is a regular proposition, frequently referred to as the identity type in type theory literature. The details are unimportant here. Suffice it to say that equality may follow immediately if terms converge under computation (e.g. $2 + 1$ and $3$), or may be proven less straightforwardly, say by induction (e.g. $m + n$ and $n + m$).
2. $Type$ cannot be the type of all types, as this walks into a paradox. In particular, it must not be the case that $Type : Type$. This circularity gives way to Russell’s paradox. This is why I say $Type$ is only the type of “simple” types.
Here, we don’t assign any judgment to $Type$. It has no type (nor similar designations like “kind” or “sort”). Some type theories avoid this problem by introducing a countable infinitude of type “universes”, each associated with a natural number. Then universe $i$ has type universe $i + 1$. Each such universe can therefore have a type while avoiding circularity.
https://bt.gateoverflow.in/828/gate-bt-2022-question-2 | If the eigenvalues of a $2 \times 2$ matrix $P$ are $4$ and $2,$ then the eigenvalues of the matrix $P^{-1}$ are
1. $0, 0$
2. $0.0625, 0.25$
3. $0.25, 0.5$
4. $2, 4$
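For reference: if $Pv = \lambda v$ with $P$ invertible, then $P^{-1}v = \lambda^{-1}v$, so the eigenvalues of $P^{-1}$ are the reciprocals $1/4 = 0.25$ and $1/2 = 0.5$, i.e. option 3. A quick numerical check, using one (hypothetical) matrix that happens to have these eigenvalues:

```python
import numpy as np

P = np.array([[4.0, 0.0],
              [0.0, 2.0]])   # any matrix with eigenvalues 4 and 2 works here
print(sorted(np.linalg.eigvals(np.linalg.inv(P))))   # [0.25, 0.5]
```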
https://proofwiki.org/wiki/Combination_Theorem_for_Continuous_Mappings/Topological_Group/Multiple_Rule | Combination Theorem for Continuous Mappings/Topological Group/Multiple Rule
Theorem
Let $\struct{S, \tau_{_S}}$ be a topological space.
Let $\struct{G, *, \tau_{_G}}$ be a topological group.
Let $\lambda \in G$.
Let $f : \struct{S, \tau_{_S}} \to \struct{G, \tau_{_G}}$ be a continuous mapping.
Let $\lambda * f : S \to G$ be the mapping defined by:
$\forall x \in S: \map {\paren{\lambda * f}} x = \lambda * \map f x$
Let $f * \lambda : S \to G$ be the mapping defined by:
$\forall x \in S: \map {\paren{f * \lambda}} x = \map f x * \lambda$
Then:
$\lambda * f : \struct{S, \tau_{_S}} \to \struct{G, \tau_{_G}}$ is a continuous mapping
$f * \lambda : \struct{S, \tau_{_S}} \to \struct{G, \tau_{_G}}$ is a continuous mapping.
Proof
By definition, a topological group is a topological semigroup.
Hence $\struct{G, *, \tau_{_G}}$ is a topological semigroup.
From Multiple Rule for Continuous Mappings to Topological Semigroup, $\lambda * f, f * \lambda : \struct{S, \tau_{_S}} \to \struct{G, \tau_{_G}}$ are continuous mappings.
$\blacksquare$
https://www.physicsforums.com/threads/physics-virgin-need-help-on-freely-falling-objects.5963/ | # Physics virgin need help on freely falling objects!
1. Sep 17, 2003
### noodle21
You drop a ball from a window on an upper floor of a building. it strikes the ground with velocity v. You now repeat the drop, but have a friend down on the street who throws another ball upward at velocity v. Your friend throws the ball upward at exactly the same time that you drop yours from the window. At some location, the balls pass each other. Is this location at the halfway point between window and ground, above this point, or below this point?
2. Sep 17, 2003
### krab
1. You are in the wrong forum. This one is Quantum Mechanics. There is a forum for help with college physics.
2. Anyway, your friend's ball starts upward with a velocity v, but your ball starts with velocity = 0. So whose ball gets to the halfway point first? Think about it.
3. Sep 17, 2003
### HallsofIvy
Well, apparently it got moved! In your second comment, you seem to be implying that since the ball thrown up starts with speed v while the ball dropped starts with speed 0, the ball thrown will "move faster". Don't you think acceleration (and deceleration) will have something to do with it?
Suppose the window is $h$ meters above the ground. Both balls have an acceleration of $-g$. The ball dropped will have speed (negative so downward) of $v_1(t) = -gt$ and the ball thrown upward will have speed (positive so upward) $v_2(t) = v - gt$.

The height, at time $t$, of the dropped ball is $h_1(t) = -\frac{g}{2}t^2 + h$, while the height, at time $t$, of the thrown ball is $h_2(t) = vt - \frac{g}{2}t^2$.

However, $v$ is not just some arbitrary speed. It is the speed the dropped ball has when it hits the ground: the dropped ball hits the ground when $h_1(t) = -\frac{g}{2}t^2 + h = 0$, or when $t = \sqrt{2h/g}$. At that time, $v = -g\sqrt{2h/g} = -\sqrt{2hg}$.

Of course, the speed of the ball thrown up is $\sqrt{2hg}$.

They will pass when they both have the same height at the same time:

$$-\frac{g}{2}t^2 + h = \sqrt{2hg}\,t - \frac{g}{2}t^2.$$

By golly, the accelerations cancel out! This reduces to $\sqrt{2hg}\,t = h$, so $t = h/\sqrt{2hg} = \sqrt{h/(2g)}$.

At that time, $h_1 = -\frac{g}{2}\cdot\frac{h}{2g} + h = -\frac{h}{4} + h = \frac{3}{4}h$.

Of course, $h_2 = \sqrt{2hg}\sqrt{h/(2g)} - \frac{g}{2}\cdot\frac{h}{2g} = h - \frac{h}{4} = \frac{3}{4}h$.
Well, I'll be! krab was right! The ball thrown upward, because it had the greater initial speed (and we DIDN'T have to take acceleration into account- it canceled out) goes farther. The two balls pass 3/4 of the way up to the window.
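(A quick numerical sanity check of the algebra above, using a made-up window height; both ratios come out to 0.75, i.e. the balls pass 3/4 of the way up:)

```python
import math

g, h = 9.81, 20.0                # g in m/s^2; h is a hypothetical window height in m
v = math.sqrt(2 * g * h)         # speed of the dropped ball at the ground
t = math.sqrt(h / (2 * g))       # time at which the balls pass
h1 = h - 0.5 * g * t**2          # dropped ball's height at time t
h2 = v * t - 0.5 * g * t**2      # thrown ball's height at time t
print(h1 / h, h2 / h)            # 0.75 0.75
```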
https://www.illustrativemathematics.org/content-standards/tasks/72 | # F-BF Summer Intern
Alignments to Content Standards: F-BF.A.1
You have been hired for a summer internship at a marine life aquarium. Part of your job is diluting brine for the saltwater fish tanks. The brine is composed of water and sea salt, and the salt concentration is 15.8% by mass, meaning that in any amount of brine the mass of salt is 15.8% of the total mass.
1. The supervisor asks you to add fresh water to one liter of the brine using a half-liter measuring cup. Let $S(x)$ be the salt concentration of the resulting mixture when you add $x$ half-liters of water. Write an expression for $S(x)$. [Assume that one liter of water has mass 1 kg.]
2. Describe how the graph of $S$ is related to the graph of $y = 1/x$.
3. Sketch the graph of $S$.
4. How much fresh water should you add to get a mixture which is 4% sea salt, approximately the salt concentration of the ocean?
## Solutions
Solution: Summer Intern
1. Because the brine is 15.8% sea salt, the initial salt concentration is $$\frac{\mbox{0.158 kilograms of salt}}{\mbox{1 kilogram of brine}}.$$ Adding $x$ half-liters of water is the same as adding $x$ half-kilograms of mass. This increases the total mass of the mixture by the $x$ half-kilograms, to $0.5x+1$, while leaving the mass of salt unchanged. So an expression for $S(x)$ is $$\frac{0.158}{0.5x+1}$$
2. We write the expression for $S(x)$ in a form that shows its relation with $1/x$: $$\frac{0.158}{0.5x+1} = \frac{0.158}{0.5(x+2)} = \frac{0.316}{ x+2} = 0.316 \frac{1}{x+2}.$$ Thus $S(x)$ is obtained from $1/x$ by first replacing $x$ with $x+2$ and then multiplying the whole by $0.316$. So the graph of $S$ is the graph of $$y={1\over x}$$ horizontally translated by $2$ units to the left, and vertically dilated by $0.316$ units.
3. The sketch of $S$ for part (c) appeared here as a figure and is not reproduced.

4. Since a solution which is 4% sea salt has a salt concentration of $0.04$, we must find $x$ satisfying $S(x) = 0.04$. So, we solve the equation $\displaystyle{0.316\over x+2}=0.04$ for $x$: \begin{eqnarray*} {0.316\over x+2}&=&0.04\\ 0.316&=&0.04 (x+2)\\ \frac{0.316}{0.04}&=&x+2\\ 7.9&=&x+2\\ 5.9&=&x. \end{eqnarray*} So to each liter of 15.8% brine we would have to add about 6 half-liters of fresh water (3 liters) in order to get a 4% sea salt solution.
Solution: A tabular approach to part (d)
Using a calculator or spreadsheet we construct a table of values of $S(x)$ from the expression for $S(x)$ found in (b):
| Number of half-liters | Salt concentration |
| --- | --- |
| 0 | 0.158 |
| 1 | 0.105 |
| 2 | 0.079 |
| 3 | 0.0632 |
| 4 | 0.0527 |
| 5 | 0.045 |
| 6 | 0.0395 |
| 7 | 0.035 |
The table shows that the salt concentration is very close to 4% when 6 half-liters (3 liters) of fresh water are added to the brine.
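The table and the solution to part (d) can be reproduced with a few lines of Python (a sketch using the expression for $S(x)$ found above):

```python
def S(x):
    """Salt concentration after adding x half-liters of fresh water."""
    return 0.316 / (x + 2)

for x in range(8):
    print(x, round(S(x), 4))

# Solving S(x) = 0.04 directly:
print(0.316 / 0.04 - 2)   # 5.9 half-liters, i.e. about 3 liters
```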
Solution: A graphical approach to part (d)
The solution to part (d) can be estimated from the graph found in (c) by drawing a horizontal line with vertical coordinate 0.04 and seeing what the value of $x$ is where it intersects the graph of $S$. (The graph itself appeared as a figure and is not reproduced here.)
https://www.tex4tum.de/kalman-filter.html | Kalman Filter
Definition
The Kalman Filter is a linear quadratic estimator used to estimate the state of a system by combining sensor measurements with a (physical) process model of the system. It is optimal if the measurement noise and the process noise are white and Gaussian.
Working Principle
The Kalman Filter averages a prediction of a system’s state with a new measurement using a weighted average. The purpose of the weights is that estimated values with smaller uncertainty are “trusted” more. The result of the weighted average is a new state estimate that lies between the predicted and measured state, and has a smaller estimated uncertainty than either alone.
The Kalman filter works in two steps: predict and update.
Defining the Model
• system’s process model
• control inputs to that process
• multiple sequential measurements (e.g. from sensors)
State Space
The dynamic model for the physical process is
$\begin{array}{ll} \boldsymbol x_{n+1} & = \boldsymbol{G}_n \boldsymbol x_{n} + \boldsymbol{B} \boldsymbol u_n + \boldsymbol w_n \\ \boldsymbol y_{n} & = \boldsymbol{H}_{n} \boldsymbol x_{n} + \boldsymbol v_{n} \end{array}$
with the $k$ states $\boldsymbol x$, transition matrix $\boldsymbol{G}$, gaussian process noise $\boldsymbol w_n$, input $\boldsymbol u$, $l$ measurements $\boldsymbol y$, measurement model $\boldsymbol{H}$, gaussian measurement noise $\boldsymbol v_n$, time point $n$.
The measurement matrix $\boldsymbol{H}$ defines how the observations correspond to the state. If the state variables can be directly observed then $\boldsymbol{H} = \boldsymbol{1}$.
If there are no known inputs to the process, then $\boldsymbol{B} \boldsymbol u_n = 0$ and this term can be removed.
Noise
The overall uncertainty of the estimation is expressed with the covariance matrix $\boldsymbol{P}_n$. This matrix is influenced by the process noise $\boldsymbol w_n$ and the measurement noise $\boldsymbol v_n$. During prediction, the process noise increases the uncertainty, whereas combining it with the measurement during the update phase decreases the uncertainty.
$\begin{array}{ll} \boldsymbol w_n & \sim \mathcal{N}(\boldsymbol{0}, \boldsymbol{Q}_n) \\ \boldsymbol v_n & \sim \mathcal{N}(\boldsymbol{0}, \boldsymbol{R}_n) \end{array}$
If values of $\boldsymbol{Q}$ are larger than values of $\boldsymbol{R}$, the filter trusts more the process, less the measurements.
Calculations
1. Step: Prediction
1.1 calculate the new (a priori) state estimate based on the old state and the dynamic model (e.g. physical laws)
$\hat {\boldsymbol x}_{n|n-1} = \boldsymbol{G}_n \hat{\boldsymbol x}_{n-1|n-1}$
1.2 calculate a new process covariance (how certain is the model?)
$\boldsymbol{P}_{\boldsymbol x_{n|n-1}} = \boldsymbol{G}_n \boldsymbol{P}_{\boldsymbol x_{n-1|n-1}} \boldsymbol{G}_n^\top + \boldsymbol{Q}_n$
2. Step: Update
2.1 calculate intermediate values (optional):
• Innovation: $\Delta \boldsymbol y_n = \boldsymbol y_n - \hat{\boldsymbol y}_{n|n-1} =\boldsymbol y_n - \boldsymbol{H}_{n} \hat{\boldsymbol x}_{n|n-1}$
which are the real measurements minus predicted measurements
• Innovation Covariance: $\boldsymbol{S} = \boldsymbol{H}_{n} \boldsymbol{P}_{\boldsymbol x_{n|n-1}} \boldsymbol{H}_{n}^\top + \boldsymbol{R}_n$
2.2 calculate optimal Kalman-gain:
$\boldsymbol{K}_n = \frac{\boldsymbol{P}_{\boldsymbol x_{n|n-1}} \boldsymbol{H}_{n}^\top}{\boldsymbol{H}_{n} \boldsymbol{P}_{\boldsymbol x_{n|n-1}} \boldsymbol{H}_{n}^\top + \boldsymbol{R}_n} = \boldsymbol{P}_{\boldsymbol x_{n|n-1}} \boldsymbol{H}_{n}^\top {\boldsymbol{S}}^{-1}$
2.3 calculate the updated (a posteriori) state estimate using $l$ measurements:
$\hat{\boldsymbol x}_{n|n} = \hat{\boldsymbol x}_{n|n-1} + \boldsymbol{K}_n \Delta \boldsymbol y_n$ which is the estimation of $\boldsymbol x_n$ based on $\Delta \boldsymbol y_n$. In the scalar case $K \in [0.0, 1.0]$, where 0.0 means the filter fully trusts the prediction and 1.0 means the filter fully trusts the measurement.
2.4 update process covariance:
$\boldsymbol{P}_{\boldsymbol x_{n|n}} = \boldsymbol{P}_{\boldsymbol x_{n|n-1}} - \boldsymbol{K}_n \boldsymbol{H}_{n} \boldsymbol{P}_{\boldsymbol x_{n|n-1}} = \left(\boldsymbol{1} - \boldsymbol{K}_n \boldsymbol{H}_{n}\right) \boldsymbol{P}_{\boldsymbol x_{n|n-1}}$
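To make the two-step cycle concrete, here is a minimal numpy sketch of one predict–update iteration, assuming no control input ($\boldsymbol{B}\boldsymbol u_n = 0$) and time-invariant matrices. It mirrors the equations above and is an illustration, not a reference implementation:

```python
import numpy as np

def kalman_step(x, P, y, G, H, Q, R):
    # --- Predict ---
    x_pred = G @ x                       # a priori state estimate
    P_pred = G @ P @ G.T + Q             # a priori covariance

    # --- Update ---
    dy = y - H @ x_pred                  # innovation
    S = H @ P_pred @ H.T + R             # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x_pred + K @ dy              # a posteriori state estimate
    P_new = (np.eye(P.shape[0]) - K @ H) @ P_pred  # a posteriori covariance
    return x_new, P_new

# 1-D toy usage: estimating a constant scalar from noisy measurements.
G = H = np.eye(1)
Q, R = np.eye(1) * 1e-5, np.eye(1) * 0.1
x, P = np.zeros(1), np.eye(1)
for y in [1.2, 0.9, 1.1, 1.0]:
    x, P = kalman_step(x, P, np.array([y]), G, H, Q, R)
```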
Extended Kalman Filter
The Extended Kalman Filter (EKF) uses non-linear dynamic models.
$\begin{array}{ll} \boldsymbol x_{n} & = g(\boldsymbol x_{n-1}, \boldsymbol u_n) + \boldsymbol w_n \\[0.5em] \boldsymbol y_{n} & = h(\boldsymbol x_{n-1}) + \boldsymbol v_{n} \end{array}$ where $g()$ and $h()$ are non-linear functions. For covariance the Jacobi-Matrix of the model is used:
$\boldsymbol{G} = \frac{\partial g}{\partial x} \Big\vert_{\hat{\boldsymbol x}_{n-1}, \boldsymbol u_n} \qquad \boldsymbol{H} = \frac{\partial h}{\partial x} \Big\vert_{\hat{\boldsymbol x}_{n}}$
Sensor Fusion
If $i$ sensors measure the same state $x_j$, this can be expressed in the measurement matrix $\boldsymbol{H}$. The column $j$ will have $i$ rows with entries.
Example
Kalman Filter for Gyroscope and Accelerometer:
state $\boldsymbol x$ are the orientation angles roll and pitch and the bias angle
https://www.physicsforums.com/threads/proving-vector-space-associativity.216506/ | # Proving vector space, associativity
1. Feb 19, 2008
### karnten07
1. The problem statement, all variables and given/known data
I'm doing a problem where I'm trying to show that an abelian group with a scalar multiplication is a vector space. I'm trying to show associativity right now and just have a question:
im trying to show that exp(b.c.lnx) = b.exp(c.lnx)
But im not very sure of my logs and exp's laws, not sure that they are even equal. Any pointers guys?
2. Relevant equations
3. The attempt at a solution
2. Feb 19, 2008
### foxjwill
They wouldn't be equal.
Since $$e^x$$ and $$\ln x$$ are inverses of each other, $$e^{\ln x} = x$$. Therefore, your expressions simplify to $$x^{bc}$$ and $$bx^c$$ respectively, which are not equal in general.
Also, a simple counter-example shows the same result: Taking $$x=3, b=2, c=1$$ we have $$3^{1\cdot 2}=2\cdot 3^1$$ which is obviously not true.
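(A quick numeric check of that counterexample, evaluated through the original exp/log forms:)

```python
import math

x, b, c = 3.0, 2.0, 1.0
print(math.exp(b * c * math.log(x)))   # x**(b*c) = 9.0
print(b * math.exp(c * math.log(x)))   # b * x**c = 6.0
```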
http://www.tetsuccesskey.com/2015/08/chapter-7-equilibrium.html | # Chapter 7 - Equilibrium
## CBSE NCERT Class XI (11th) | Chemistry
Chapter 7 - Equilibrium
Q1 :
A liquid is in equilibrium with its vapour in a sealed container at a fixed temperature. The volume of the container is suddenly increased.
a) What is the initial effect of the change on vapour pressure?
b) How do rates of evaporation and condensation change initially?
c) What happens when equilibrium is restored finally and what will be the final vapour pressure?
(a) If the volume of the container is suddenly increased, then the vapour pressure would decrease initially. This is because the amount of vapour remains the same, but the volume increases suddenly. As a result, the same amount of vapour is distributed in a larger volume.
(b) Since the temperature is constant, the rate of evaporation also remains constant. When the volume of the container is increased, the density of the vapour phase decreases. As a result, the rate of collisions of the vapour particles also decreases. Hence, the rate of condensation decreases initially.
(c) When equilibrium is restored finally, the rate of evaporation becomes equal to the rate of condensation. In this case, only the volume changes while the temperature remains constant. The vapour pressure depends on temperature and not on volume. Hence, the final vapour pressure will be equal to the original vapour pressure of the system.
Q2 :
What is Kc for the following equilibrium when the equilibrium concentration of each substance is: [SO2] = 0.60 M, [O2] = 0.82 M and [SO3] = 1.90 M?

2SO2 (g) + O2 (g) ⇌ 2SO3 (g)
The equilibrium constant (Kc) for the given reaction is:

Kc = [SO3]² / ([SO2]² [O2]) = (1.90)² / ((0.60)² × 0.82) ≈ 12.2

Hence, Kc for the equilibrium is 12.2 L mol⁻¹.
Q3 :
At a certain temperature and total pressure of 10⁵ Pa, iodine vapour contains 40% by volume of I atoms:

I2 (g) ⇌ 2I (g)

Calculate Kp for the equilibrium.

Partial pressure of I atoms, pI = 0.40 × 10⁵ Pa

Partial pressure of I2 molecules, pI2 = 0.60 × 10⁵ Pa

Now, for the given reaction,

Kp = (pI)² / pI2 = (0.40 × 10⁵)² / (0.60 × 10⁵) ≈ 2.67 × 10⁴ Pa
Q4 :
Write the expression for the equilibrium constant, Kc for each of the following
reactions:
(i)
(ii)
(iii)
(iv)
(v)
Q5 :
Find out the value of Kc for each of the following equilibria from the value of Kp:
The relation between Kp and Kc is given as:
Kp = Kc (RT)^Δn

(a) Here,

Δn = 3 − 2 = 1

R = 0.0831 bar L mol⁻¹ K⁻¹

T = 500 K

Kp = 1.8 × 10⁻²

Now,

Kc = Kp / (RT) = (1.8 × 10⁻²) / (0.0831 × 500) ≈ 4.33 × 10⁻⁴
(b) Here,
Δn = 2 − 1 = 1

R = 0.0831 bar L mol⁻¹ K⁻¹

T = 1073 K

Kp = 167

Now,

Kc = Kp / (RT) = 167 / (0.0831 × 1073) ≈ 1.87
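The same conversion is easy to script; a small Python sketch of the relation Kc = Kp / (RT)^Δn used above (with R in bar L mol⁻¹ K⁻¹, matching this solution):

```python
R = 0.0831   # bar L mol^-1 K^-1

def kc_from_kp(kp, T, dn):
    """Kc = Kp / (RT)**dn."""
    return kp / (R * T) ** dn

print(kc_from_kp(1.8e-2, 500, 1))   # case (a): ~4.33e-04
print(kc_from_kp(167, 1073, 1))     # case (b): ~1.87
```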
Q6 :
For the following equilibrium,
Both the forward and reverse reactions in the equilibrium are elementary bimolecular reactions. What is Kc, for the reverse reaction?
It is given that for the forward reaction is
Then, for the reverse reaction will be,
Q7 :
Explain why pure liquids and solids can be ignored while writing the equilibrium constant expression?
For a pure substance (both solids and liquids),
Now, the molecular mass and density (at a particular temperature) of a pure substance is always fixed and is accounted for in the equilibrium constant. Therefore, the values of pure substances are not mentioned in the equilibrium constant expression.
Q8 :
Reaction between N2 and O2 takes place as follows:
If a mixture of 0.482 mol of N2 and 0.933 mol of O2 is placed in a 10 L reaction vessel and allowed to form N2O at a temperature for which Kc = 2.0 × 10–37, determine the composition of equilibrium mixture.
Let the concentration of N2O at equilibrium be x.
The given reaction is:
Therefore, at equilibrium, in the 10 L vessel:
The value of the equilibrium constant, Kc = 2.0 × 10⁻³⁷, is very small. Therefore, the amount of N2 and O2 reacted is also very small. Thus, x can be neglected from the expressions of molar concentrations of N2 and O2.
Then,
Now,
Q9 :
Nitric oxide reacts with Br2 and gives nitrosyl bromide as per reaction given below:
When 0.087 mol of NO and 0.0437 mol of Br2 are mixed in a closed container at constant temperature, 0.0518 mol of NOBr is obtained at equilibrium. Calculate equilibrium amount of NO and Br2.
The given reaction is:
Now, 2 mol of NOBr are formed from 2 mol of NO. Therefore, 0.0518 mol of NOBr are formed from 0.0518 mol of NO.
Again, 2 mol of NOBr are formed from 1 mol of Br.
Therefore, 0.0518 mol of NOBr are formed from mol of Br, or
0.0259 mol of NO.
The amount of NO and Br present initially is as follows:
[NO] = 0.087 mol [Br2] = 0.0437 mol
Therefore, the amount of NO present at equilibrium is:
[NO] = 0.087 – 0.0518
= 0.0352 mol
And, the amount of Br present at equilibrium is:
[Br2] = 0.0437 – 0.0259
= 0.0178 mol
Q10 :
At 450 K, Kp= 2.0 × 1010/bar for the given reaction at equilibrium.
What is Kc at this temperature?
For the given reaction,
Δn = 2 – 3 = – 1
T = 450 K
R = 0.0831 bar L bar K–1 mol–1
Kp = 2.0 × 10¹⁰ bar⁻¹
We know that,
Q11 :
A sample of HI(g) is placed in flask at a pressure of 0.2 atm. At equilibrium the partial pressure of HI(g) is 0.04 atm. What is Kp for the given equilibrium?
The initial concentration of HI is 0.2 atm. At equilibrium, it has a partial pressure of 0.04 atm. Therefore, a decrease in the pressure of HI is 0.2 – 0.04 = 0.16. The given reaction is:
Therefore,
Hence, the value of Kp for the given equilibrium is 4.0.
Q12 :
A mixture of 1.57 mol of N2, 1.92 mol of H2 and 8.13 mol of NH3 is introduced into a 20 L reaction vessel at 500 K. At this temperature, the equilibrium constant, Kc for the reaction
Is the reaction mixture at equilibrium? If not, what is the direction of the net reaction?
The given reaction is:
Now, reaction quotient Qc is:
Since, the reaction mixture is not at equilibrium.
Again, . Hence, the reaction will proceed in the reverse direction.
Q13 :
The equilibrium constant expression for a gas reaction is,
Write the balanced chemical equation corresponding to this expression.
The balanced chemical equation corresponding to the given expression can be written as:
Q14 :
One mole of H2O and one mole of CO are taken in 10 L vessel and heated to
725 K. At equilibrium 40% of water (by mass) reacts with CO according to the equation,
Calculate the equilibrium constant for the reaction.
The given reaction is:
Therefore, the equilibrium constant for the reaction,
Q15 :
At 700 K, equilibrium constant for the reaction
is 54.8. If 0.5 molL–1 of HI(g) is present at equilibrium at 700 K, what are the concentration of H2(g)and I2(g) assuming that we initially started with HI(g) and allowed it to reach equilibrium at 700 K?
It is given that equilibrium constantfor the reaction
is 54.8.
Therefore, at equilibrium, the equilibrium constant for the reaction
will be .
Let the concentrations of hydrogen and iodine at equilibrium be x molL–1
.
Hence, at equilibrium,
Q16 :
What is the equilibrium concentration of each of the substances in the equilibrium when the initial concentration of ICl was 0.78 M?
2 ICl(g) ⇌ I2(g) + Cl2(g) ; KC = 0.14
The given reaction is:
2 ICl(g) ⇌ I2(g) + Cl2(g)
Initial conc. 0.78 M 0 0
At equilibrium (0.78 - 2x) M x M x M
Hence, at equilibrium,

Kc = x² / (0.78 − 2x)² = 0.14, so x / (0.78 − 2x) = 0.374, giving x = 0.167.

[I2] = [Cl2] = 0.167 M and [ICl] = 0.78 − 2(0.167) = 0.446 M
Q17 :
Kp = 0.04 atm at 899 K for the equilibrium shown below. What is the equilibrium concentration of C2H6 when it is placed in a flask at 4.0 atm pressure and allowed to come to equilibrium?
Let p be the pressure exerted by ethene and hydrogen gas (each) at equilibrium.
Now, according to the reaction,
We can write,
Hence, at equilibrium,
Q18 :
Ethyl acetate is formed by the reaction between ethanol and acetic acid and the equilibrium is represented as:
(i) Write the concentration ratio (reaction quotient), Qc, for this reaction (note: water is not in excess and is not a solvent in this reaction)
(ii) At 293 K, if one starts with 1.00 mol of acetic acid and 0.18 mol of ethanol, there is 0.171 mol of ethyl acetate in the final equilibrium mixture. Calculate the equilibrium constant.
(iii) Starting with 0.5 mol of ethanol and 1.0 mol of acetic acid and maintaining it at 293 K, 0.214 mol of ethyl acetate is found after sometime. Has equilibrium been reached?
(i) Reaction quotient,
(ii) Let the volume of the reaction mixture be V. Also, here we will consider that water is a solvent and is present in excess.
The given reaction is:
Therefore, equilibrium constant for the given reaction is:
(iii) Let the volume of the reaction mixture be V.
Therefore, the reaction quotient is,
Since, equilibrium has not been reached.
Q19 :
A sample of pure PCl5 was introduced into an evacuated vessel at 473 K. After equilibrium was attained, concentration of PCl5 was found to be 0.5 × 10–1 mol L–1. If value of Kc is 8.3 × 10–3, what are the concentrations of PCl3 and Cl2 at equilibrium?
Let the concentrations of both PCl3 and Cl2 at equilibrium be x molL–1. The given reaction is:
Now we can write the expression for equilibrium as:
Therefore, at equilibrium,
Q20 :
One of the reactions that takes place in producing steel from iron ore is the reduction of iron (II) oxide by carbon monoxide to give iron metal and CO2.
FeO (s) + CO (g) ⇌ Fe (s) + CO2 (g); Kp = 0.265 at 1050 K.
What are the equilibrium partial pressures of CO and CO2 at 1050 K if the initial partial pressures are: pCO = 1.4 atm and = 0.80 atm?
For the given reaction,
Since, the reaction will proceed in the backward direction.
Therefore, we can say that the pressure of CO will increase while the pressure of CO2 will decrease.
Now, let the increase in pressure of CO = decrease in pressure of CO2 be p.
Then, we can write,
Therefore, equilibrium partial of
And, equilibrium partial pressure of
Q21 :
Equilibrium constant, Kc for the reaction
at 500 K is 0.061.
At a particular time, the analysis shows that composition of the reaction mixture is 3.0 mol L–1 N2, 2.0 mol L–1 H2 and 0.5 mol L–1 NH3. Is the reaction at equilibrium? If not in which direction does the reaction tend to proceed to reach equilibrium?
The given reaction is:
Now, we know that,
Since, the reaction is not at equilibrium.
Since , the reaction will proceed in the forward direction to reach equilibrium.
Q22 :
Bromine monochloride, BrCl decomposes into bromine and chlorine and reaches the equilibrium:

2BrCl (g) ⇌ Br2 (g) + Cl2 (g)

for which Kc = 32 at 500 K. If initially pure BrCl is present at a concentration of 3.3 × 10⁻³ mol L⁻¹, what is its molar concentration in the mixture at equilibrium?
Let the amount of bromine and chlorine formed at equilibrium be x. The given reaction is:
Now, we can write,
Therefore, at equilibrium, x ≈ 1.5 × 10⁻³ and [BrCl] = 3.3 × 10⁻³ − 2(1.5 × 10⁻³) ≈ 3 × 10⁻⁴ mol L⁻¹.
Q23 :
At 1127 K and 1 atm pressure, a gaseous mixture of CO and CO2 in equilibrium with solid carbon has 90.55% CO by mass
Calculate Kc for this reaction at the above temperature.
Let the total mass of the gaseous mixture be 100 g.
Mass of CO = 90.55 g
And, mass of CO2 = (100 – 90.55) = 9.45 g
Now, number of moles of CO,
Number of moles of CO 2
Partial pressure of CO,
Partial pressure of CO2,
For the given reaction,
Δn = 2 – 1 = 1
We know that,
Q24 :
Calculate a) ΔG°and b) the equilibrium constant for the formation of NO2 from NO and O2at 298 K
where ΔfG° (NO2) = 52.0 kJ/mol
ΔfG° (NO) = 87.0 kJ/mol
ΔfG° (O2) = 0 kJ/mol
(a) For the given reaction,
ΔG° = ΔG°( Products) – ΔG°( Reactants)
ΔG° = 52.0 – {87.0 + 0}
= – 35.0 kJ mol–1
(b) We know that,
ΔG° = −RT ln Kc = −2.303 RT log Kc
Hence, the equilibrium constant for the given reaction Kc is 1.36 × 106
Q25 :
Does the number of moles of reaction products increase, decrease or remain same when each of the following equilibria is subjected to a decrease in pressure by increasing the volume?
(a)
(b)
(c)
(a) The number of moles of reaction products will increase. According to Le Chatelier's principle, if pressure is decreased, then the equilibrium shifts in the direction in which the number of moles of gases is more. In the given reaction, the number of moles of gaseous products is more than that of gaseous reactants. Thus, the reaction will proceed in the forward direction. As a result, the number of moles of reaction products will increase.
(b) The number of moles of reaction products will decrease.
(c) The number of moles of reaction products remains the same.
Q26 :
Which of the following reactions will get affected by increasing the pressure?
Also, mention whether change will cause the reaction to go into forward or backward direction.
(i)
(ii)
(iii)
(iv)
(v)
(vi)
The reactions given in (i), (iii), (iv), (v), and (vi) will get affected by increasing the pressure.
The reaction given in (iv) will proceed in the forward direction because the number of moles of gaseous reactants is more than that of gaseous products.
The reactions given in (i), (iii), (v), and (vi) will shift in the backward direction because the number of moles of gaseous reactants is less than that of gaseous products.
Q27 :
The equilibrium constant for the following reaction is 1.6 × 10⁵ at 1024 K.

H2 (g) + Br2 (g) ⇌ 2HBr (g)

Find the equilibrium pressure of all gases if 10.0 bar of HBr is introduced into a sealed container at 1024 K.

Given, Kp = 1.6 × 10⁵ for the reaction H2 (g) + Br2 (g) ⇌ 2HBr (g).

Therefore, for the reverse reaction 2HBr (g) ⇌ H2 (g) + Br2 (g), the equilibrium constant will be K′p = 1/Kp = 1/(1.6 × 10⁵) = 6.25 × 10⁻⁶.

Now, let p be the pressure of both H2 and Br2 at equilibrium, so that pHBr = 10.0 − 2p.

Now, we can write p² / (10.0 − 2p)² = 6.25 × 10⁻⁶, so p / (10.0 − 2p) = 2.5 × 10⁻³, giving p ≈ 2.5 × 10⁻².

Therefore, at equilibrium, pH2 = pBr2 ≈ 2.5 × 10⁻² bar and pHBr ≈ 9.95 bar.
Q28 :
Dihydrogen gas is obtained from natural gas by partial oxidation with steam as per following endothermic reaction:
(a) Write as expression for Kp for the above reaction.
(b) How will the values of Kp and composition of equilibrium mixture be affected by
(i) Increasing the pressure
(ii) Increasing the temperature
(iii) Using a catalyst?
(a) For the given reaction,
(b) (i) According to Le Chatelier's principle, the equilibrium will shift in the backward direction.
(ii) According to Le Chatelier's principle, as the reaction is endothermic, the equilibrium will shift in the forward direction.
(iii) The equilibrium of the reaction is not affected by the presence of a catalyst. A catalyst only increases the rate of a reaction. Thus, equilibrium will be attained quickly.
Q29 :
Describe the effect of:
c) Removal of CO
d) Removal of CH3OH
on the equilibrium of the reaction:
(a) According to Le Chatelier's principle, on addition of H2, the equilibrium of the given reaction will shift in the forward direction.
(b) On addition of CH3OH, the equilibrium will shift in the backward direction.
(c) On removing CO, the equilibrium will shift in the backward direction.
(d) On removing CH3OH, the equilibrium will shift in the forward direction.
Q30 :
At 473 K, equilibrium constant Kc for decomposition of phosphorus pentachloride, PCl5 is 8.3 × 10⁻³. If decomposition is depicted as,

PCl5 (g) ⇌ PCl3 (g) + Cl2 (g), ΔrH° = 124.0 kJ mol⁻¹
a) Write an expression for Kc for the reaction.
b) What is the value of Kc for the reverse reaction at the same temperature?
c) What would be the effect on Kc if (i) more PCl5 is added (ii) pressure is increased? (iii) The temperature is increased?
(a) Kc = [PCl3][Cl2] / [PCl5]
(b) Value of Kc for the reverse reaction at the same temperature is: K′c = 1/Kc = 1/(8.3 × 10⁻³) ≈ 120.5
(c) (i) Kc would remain the same because in this case, the temperature remains the same.
(ii) Kc is constant at constant temperature. Thus, in this case, Kc would not change.
(iii) In an endothermic reaction, the value of Kc increases with an increase in temperature. Since the given reaction in an endothermic reaction, the value of Kc will increase if the temperature is increased.
Q31 :
Dihydrogen gas used in Haber's process is produced by reacting methane from natural gas with high temperature steam. The first stage of two stage reaction involves the formation of CO and H2. In second stage, CO formed in first stage is reacted with more steam in water gas shift reaction,
If a reaction vessel at 400°C is charged with an equimolar mixture of CO and steam such that 4.0 bar, what will be the partial pressure of H2 at equilibrium? Kp= 10.1 at 400°C
Let the partial pressure of both carbon dioxide and hydrogen gas be p. The given reaction is:
It is
Now,
Hence, at equilibrium, the partial pressure of H2 will be 3.04 bar.
Q32 :
Predict which of the following reaction will have appreciable concentration of reactants and products:
a)
b)
c)
If the value of Kc lies between 10-3 and 103, a reaction has appreciable concentration of reactants and products. Thus, the reaction given in (c) will have appreciable concentration of reactants and products.
Q33 :
The value of Kc for the reaction
3O2 (g) ⇌ 2O3 (g)
is 2.0 ×10–50 at 25°C. If the equilibrium concentration of O2 in air at 25°C is 1.6 ×10–2, what is the concentration of O3?
The given reaction is: 3O2 (g) ⇌ 2O3 (g)

Then, we have,

Kc = [O3]² / [O2]³, so [O3]² = Kc × [O2]³ = (2.0 × 10⁻⁵⁰) × (1.6 × 10⁻²)³ = 8.192 × 10⁻⁵⁶

Hence, the concentration of O3 is [O3] = 2.86 × 10⁻²⁸ M.
Q34 :
The reaction, CO(g) + 3H2(g) ⇌ CH4(g) + H2O(g), is at equilibrium at 1300 K in a 1 L flask. The flask also contains 0.30 mol of CO, 0.10 mol of H2, 0.02 mol of H2O and an unknown amount of CH4. Determine the concentration of CH4 in the mixture. The equilibrium constant, Kc for the reaction at the given temperature is 3.90.
Let the concentration of methane at equilibrium be x.
It is given that Kc= 3.90.
Therefore,
Hence, the concentration of CH4 at equilibrium is 5.85 × 10–2 M.
Q35 :
What is meant by the conjugate acid-base pair? Find the conjugate acid/base for the following species:
A conjugate acid-base pair is a pair that differs only by one proton.
The conjugate acid-base for the given species is mentioned in the table below.
| Species | Conjugate acid-base |
| --- | --- |
| HNO2 | NO2⁻ (base) |
| CN⁻ | HCN (acid) |
| HClO4 | ClO4⁻ (base) |
| F⁻ | HF (acid) |
| OH⁻ | H2O (acid) / O²⁻ (base) |
| S²⁻ | HS⁻ (acid) |
Q36 :
Which of the following are Lewis acids? H2O, BF3, H+, and NH4+
Lewis acids are those species which can accept a pair of electrons. For example, BF3, H+, and NH4+ are Lewis acids.
Q37 :
What will be the conjugate bases for the Brönsted acids: HF, H2SO4 and HCO3⁻?
The table below lists the conjugate bases for the given Bronsted acids.
| Bronsted acid | Conjugate base |
| --- | --- |
| HF | F⁻ |
| H2SO4 | HSO4⁻ |
| HCO3⁻ | CO3²⁻ |
Q38 :
Write the conjugate acids for the following Brönsted bases: NH2-, NH3 and HCOO-.
The table below lists the conjugate acids for the given Bronsted bases.
| Bronsted base | Conjugate acid |
| --- | --- |
| NH2⁻ | NH3 |
| NH3 | NH4⁺ |
| HCOO⁻ | HCOOH |
Q39 :
The species H2O, HCO3⁻, HSO4⁻, and NH3 can act both as Brönsted acids and bases. For each case give the corresponding conjugate acid and base.
The table below lists the conjugate acids and conjugate bases for the given species.
| Species | Conjugate acid | Conjugate base |
| --- | --- | --- |
| H2O | H3O⁺ | OH⁻ |
| HCO3⁻ | H2CO3 | CO3²⁻ |
| HSO4⁻ | H2SO4 | SO4²⁻ |
| NH3 | NH4⁺ | NH2⁻ |
Q40 :
Classify the following species into Lewis acids and Lewis bases and show how these act as Lewis acid/base: (a) OH- (b) F- (c) H+ (d) BCl3.
(a) OH- is a Lewis base since it can donate its lone pair of electrons.
(b) F- is a Lewis base since it can donate a pair of electrons.
(c) H+ is a Lewis acid since it can accept a pair of electrons.
(d) BCl3 is a Lewis acid since it can accept a pair of electrons.
Q41 :
The concentration of hydrogen ion in a sample of soft drink is 3.8 x 10-3 M. what is its pH?
Given,
pH value of soft drink = −log[H⁺] = −log(3.8 × 10⁻³) = 2.42
Q42 :
The pH of a sample of vinegar is 3.76. Calculate the concentration of hydrogen ion in it.
Given,
pH = 3.76
It is known that,
Hence, the concentration of hydrogen ion in the given sample of vinegar is 1.74 × 10–4 M.
Q43 :
The ionization constant of HF, HCOOH and HCN at 298K are 6.8 x 10-4, 1.8 x 10-4 and 4.8 x 10-9respectively. Calculate the ionization constants of the corresponding conjugate base.
It is known that, Kb = Kw / Ka (with Kw = 1 × 10⁻¹⁴ at 298 K).

Given, Ka of HF = 6.8 × 10⁻⁴

Hence, Kb of its conjugate base F⁻ = (1 × 10⁻¹⁴) / (6.8 × 10⁻⁴) = 1.5 × 10⁻¹¹

Given, Ka of HCOOH = 1.8 × 10⁻⁴

Hence, Kb of its conjugate base HCOO⁻ = (1 × 10⁻¹⁴) / (1.8 × 10⁻⁴) = 5.6 × 10⁻¹¹

Given, Ka of HCN = 4.8 × 10⁻⁹

Hence, Kb of its conjugate base CN⁻ = (1 × 10⁻¹⁴) / (4.8 × 10⁻⁹) = 2.08 × 10⁻⁶
Q44 :
The ionization constant of phenol is 1.0 x 10-10. What is the concentration of phenolate ion in 0.05 M solution of phenol? What will be its degree of ionization if the solution is also 0.01M in sodium phenolate?
Ionization of phenol:
Now, let ∠be the degree of ionization of phenol in the presence of 0.01 M C6H5ONa.
Also,
Q45 :
The first ionization constant of H2S is 9.1 x 10-8. Calculate the concentration of HS- ion in its 0.1 M solution. How will this concentration be affected if the solution is 0.1 M in HCl also? If the second dissociation constant of H2S is 1.2 x 10-13, calculate the concentration of S2- under both conditions.
(i) To calculate the concentration of HS– ion:
Case I (in the absence of HCl):
Let the concentration of HS– be x M.
Case II (in the presence of HCl):
In the presence of 0.1 M of HCl, let be y M.
(ii) To calculate the concentration of:
Case I (in the absence of 0.1 M HCl):
(From first ionization, case I)
Let
Also, (From first ionization, case I)
Case II (in the presence of 0.1 M HCl):
Again, let the concentration of HS– be X' M.
(From first ionization, case II)
(From HCl, case II)
Q46 :
The ionization constant of acetic acid is 1.74 x 10-5. Calculate the degree of dissociation of acetic acid in its 0.05 M solution. Calculate the concentration of acetate ion in the solution and its pH.
Method 1
Since Ka >> Kw, :
Method 2
Degree of dissociation,
c = 0.05 M
Ka = 1.74 × 10–5
Thus, concentration of CH3COO– = c.α
Hence, the concentration of acetate ion in the solution is 0.00093 M and its pH is 3.03.
Q47 :
It has been found that the pH of a 0.01M solution of an organic acid is 4.15. Calculate the concentration of the anion, the ionization constant of the acid and its pKa.
Let the organic acid be HA.
Concentration of HA = 0.01 M
pH = 4.15
Now,
Then,
Q48 :
Assuming complete dissociation, calculate the pH of the following solutions:
(a) 0.003 M HCl (b) 0.005 M NaOH (c) 0.002 M HBr (d) 0.002 M KOH
(i) 0.003MHCl:
Since HCl is completely ionized,
Now,
Hence, the pH of the solution is 2.52.
(ii) 0.005MNaOH:
Hence, the pH of the solution is 11.70.
(iii) 0.002 HBr:
Hence, the pH of the solution is 2.69.
(iv) 0.002 M KOH:
Hence, the pH of the solution is 11.31.
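All four computations follow one pattern; here is a small Python sketch, valid for fully dissociated (strong) acids and bases at 298 K:

```python
import math

def ph_strong_acid(c):
    return -math.log10(c)        # pH = -log[H+]

def ph_strong_base(c):
    return 14 + math.log10(c)    # pH = 14 - pOH at 298 K

print(ph_strong_acid(0.003))    # (i)   ~2.52
print(ph_strong_base(0.005))    # (ii)  ~11.70
print(ph_strong_acid(0.002))    # (iii) ~2.70
print(ph_strong_base(0.002))    # (iv)  ~11.30
```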
Q49 :
Calculate the pH of the following solutions:
a) 2 g of TlOH dissolved in water to give 2 litre of solution.
b) 0.3 g of Ca(OH)2 dissolved in water to give 500 mL of solution.
c) 0.3 g of NaOH dissolved in water to give 200 mL of solution.
d) 1mL of 13.6 M HCl is diluted with water to give 1 litre of solution.
(a) For 2g of TlOH dissolved in water to give 2 L of solution:
(b) For 0.3 g of Ca(OH)2 dissolved in water to give 500 mL of solution:
(c) For 0.3 g of NaOH dissolved in water to give 200 mL of solution:
(d) For 1mL of 13.6 M HCl diluted with water to give 1 L of solution:
13.6 × 1 mL = M2 × 1000 mL
(Before dilution) (After dilution)
13.6 × 10–3 = M2 × 1L
M2 = 1.36 × 10–2
[H+] = 1.36 × 10–2
pH = – log (1.36 × 10–2)
= (– 0.1335 + 2)
= 1.866 ∼ 1.87
Q50 :
The degree of ionization of a 0.1M bromoacetic acid solution is 0.132. Calculate the pH of the solution and the pKa of bromoacetic acid.
Degree of ionization, α = 0.132
Concentration, c = 0.1 M
Thus, the concentration of H3O+ = c.α
= 0.1 × 0.132
= 0.0132
Now, pH = – log (0.0132) = 1.88.
Ka = cα²/(1 – α) = (0.1 × 0.132²)/(1 – 0.132) = 2.01 × 10–3, so pKa = – log (2.01 × 10–3) = 2.70.
Q51 :
The pH of 0.005M codeine (C18H21NO3) solution is 9.95. Calculate its ionization constant and pKb.
c = 0.005
pH = 9.95
pOH = 4.05
[OH–] = antilog (–4.05) = 8.91 × 10–5 M
Kb = [OH–]²/c = (8.91 × 10–5)²/0.005 = 1.59 × 10–6, and pKb = – log (1.59 × 10–6) = 5.8.
Q52 :
What is the pH of 0.001 M aniline solution? The ionization constant of aniline can be taken from Table 7.7. Calculate the degree of ionization of aniline in the solution. Also calculate the ionization constant of the conjugate acid of aniline.
Kb = 4.27 × 10–10
c = 0.001M
[OH–] = √(Kb × c) = √(4.27 × 10–10 × 0.001) = 6.53 × 10–7 M, so pOH = 6.19 and pH = 14 – 6.19 = 7.81.
α = [OH–]/c = 6.53 × 10–7/0.001 = 6.53 × 10–4.
Thus, the ionization constant of the conjugate acid of aniline is 2.34 × 10–5
Q53 :
Calculate the degree of ionization of 0.05M acetic acid if its pKa value is 4.74.
How is the degree of dissociation affected when its solution also contains (a) 0.01 M (b) 0.1 M in HCl?
Ka = antilog (–pKa) = antilog (–4.74) = 1.82 × 10–5, so in plain 0.05 M acetic acid α = √(Ka/c) = √(1.82 × 10–5/0.05) = 1.91 × 10–2.
When HCl is added to the solution, the concentration of H+ ions will increase. Therefore, the equilibrium will shift in the backward direction, i.e., dissociation of acetic acid will decrease.
Case I: When 0.01 M HCl is taken.
Let x be the amount of acetic acid dissociated after the addition of HCl.
As the dissociation of a very small amount of acetic acid will take place, the values 0.05 – x and 0.01 + x can be taken as 0.05 and 0.01 respectively. Then,
Ka = (0.01 × x)/0.05 = 1.82 × 10–5, so x = 9.1 × 10–5 M and α = x/0.05 = 1.82 × 10–3.
Case II: When 0.1 M HCl is taken.
Let the amount of acetic acid dissociated in this case be X. As we have done in the first case, with [H+] ≈ 0.1 M the concentrations of the various species give:
Ka = (0.1 × X)/0.05 = 1.82 × 10–5, so X = 9.1 × 10–6 M and α = X/0.05 = 1.82 × 10–4.
Q54 :
The ionization constant of dimethylamine is 5.4 x 10-4. Calculate its degree of ionization in its 0.02 M solution. What percentage of dimethylamine is ionized if the solution is also 0.1 M in NaOH?
For 0.02 M dimethylamine alone, α = √(Kb/c) = √(5.4 × 10–4/0.02) = 0.164.
Now, if 0.1 M of NaOH is added to the solution, then NaOH (being a strong base) undergoes complete ionization, so [OH–] ≈ 0.1 M.
And, Kb = ([(CH3)2NH2+] × 0.1)/0.02, so the fraction ionized is α′ = Kb/[OH–] = (5.4 × 10–4)/0.1 = 5.4 × 10–3.
It means that in the presence of 0.1 M NaOH, 0.54% of dimethylamine will get dissociated.
Q55 :
Calculate the hydrogen ion concentration in the following biological fluids whose pH are given below:
(a) Human muscle-fluid, 6.83
(b) Human stomach fluid, 1.2
(c) Human blood, 7.38
(d) Human saliva, 6.4.
(a) Human muscle fluid 6.83:
pH = 6.83
pH = - log [H+]
∴6.83 = - log [H+]
[H+] =1.48 x 10-7 M
(b) Human stomach fluid, 1.2:
pH =1.2
1.2 = - log [H+]
∴ [H+] = 0.063 M
(c) Human blood, 7.38:
pH = 7.38 = - log [H+]
∴ [H+] = 4.17 x 10-8 M
(d) Human saliva, 6.4:
pH = 6.4
6.4 = - log [H+]
[H+] = 3.98 x 10-7 M
Q56 :
The pH of milk, black coffee, tomato juice, lemon juice and egg white are 6.8, 5.0, 4.2, 2.2 and 7.8 respectively. Calculate corresponding hydrogen ion concentration in each.
The hydrogen ion concentration in the given substances can be calculated by using the given relation:
pH = –log [H+]
(i) pH of milk = 6.8
Since, pH = –log [H+]
6.8 = –log [H+]
log [H+] = –6.8
[H+] = antilog (–6.8) = 1.58 × 10–7 M
(ii) pH of black coffee = 5.0
Since, pH = –log [H+]
5.0 = –log [H+]
log [H+] = –5.0
[H+] = antilog (–5.0) = 1.0 × 10–5 M
(iii) pH of tomato juice = 4.2
Since, pH = –log [H+]
4.2 = –log [H+]
log [H+] = –4.2
[H+] = antilog (–4.2) = 6.31 × 10–5 M
(iv) pH of lemon juice = 2.2
Since, pH = –log [H+]
2.2 = –log [H+]
log [H+] = –2.2
[H+] = antilog (–2.2) = 6.31 × 10–3 M
(v) pH of egg white = 7.8
Since, pH = –log [H+]
7.8 = –log [H+]
log [H+] = –7.8
[H+] = antilog (–7.8) = 1.58 × 10–8 M
Q57 :
0.561 g of KOH is dissolved in water to give 200 mL of solution at 298 K. Calculate the concentrations of potassium, hydrogen and hydroxyl ions. What is its pH?
Moles of KOH = 0.561/56.11 = 0.01 mol, so [KOH] = 0.01/0.2 = 0.05 M.
KOH ionizes completely, so [K+] = [OH–] = 0.05 M.
[H+] = Kw/[OH–] = (1 × 10–14)/0.05 = 2 × 10–13 M, and pH = – log (2 × 10–13) = 12.70.
Q58 :
The solubility of Sr(OH)2 at 298 K is 19.23 g/L of solution. Calculate the concentrations of strontium and hydroxyl ions and the pH of the solution.
Solubility of Sr(OH)2 = 19.23 g/L
Then, concentration of Sr(OH)2 = 19.23/121.63 = 0.1581 M.
[Sr2+] = 0.1581 M and [OH–] = 2 × 0.1581 = 0.3162 M.
pOH = – log (0.3162) = 0.50, so pH = 14 – 0.50 = 13.50.
Q59 :
The ionization constant of propanoic acid is 1.32 x 10-5. Calculate the degree of ionization of the acid in its 0.05M solution and also its pH. What will be its degree of ionization if the solution is 0.01M in HCl also?
Let the degree of ionization of propanoic acid be α.
Then, representing propanoic acid as HA, we have:
α = √(Ka/c) = √(1.32 × 10–5/0.05) = 1.63 × 10–2
[H+] = cα = 0.05 × 1.63 × 10–2 = 8.15 × 10–4 M, so pH = – log (8.15 × 10–4) = 3.09.
In the presence of 0.01 M HCl, let α′ be the degree of ionization. With [H+] ≈ 0.01 M,
Ka = (0.01 × 0.05α′)/0.05 = 0.01α′, so α′ = Ka/0.01 = 1.32 × 10–3.
Q60 :
The pH of 0.1M solution of cyanic acid (HCNO) is 2.34. Calculate the ionization constant of the acid and its degree of ionization in the solution.
c = 0.1 M
pH = 2.34, so [H+] = antilog (–2.34) = 4.57 × 10–3 M = cα.
Hence α = (4.57 × 10–3)/0.1 = 4.57 × 10–2, and Ka = cα² = 0.1 × (4.57 × 10–2)² = 2.09 × 10–4.
Q61 :
The ionization constant of nitrous acid is 4.5 x 10-4. Calculate the pH of 0.04 M sodium nitrite solution and also its degree of hydrolysis.
NaNO2 is the salt of a strong base (NaOH) and a weak acid (HNO2).
Now, if x moles of the salt undergo hydrolysis (NO2– + H2O ⇌ HNO2 + OH–), then [NO2–] ≈ 0.04 M and [HNO2] = [OH–] = x.
Kh = Kw/Ka = (1 × 10–14)/(4.5 × 10–4) = 2.22 × 10–11 = x²/0.04, so x = [OH–] ≈ 9.3 × 10–7 M, pOH = 6.03 and pH = 7.97.
Therefore, degree of hydrolysis = x/0.04
= 2.325 × 10–5
Q62 :
A 0.02 M solution of pyridinium hydrochloride has pH = 3.44. Calculate the ionization constant of pyridine
pH = 3.44
We know that,
pH = – log [H+], so [H+] = antilog (–3.44) = 3.63 × 10–4 M.
For the pyridinium ion, Ka = [H+]²/c = (3.63 × 10–4)²/0.02 = 6.6 × 10–9.
Hence, Kb of pyridine = Kw/Ka = (1 × 10–14)/(6.6 × 10–9) = 1.5 × 10–6.
Q63 :
Predict if the solutions of the following salts are neutral, acidic or basic:
NaCl, KBr, NaCN, NH4NO3, NaNO2 and KF
(i) NaCl: a salt of a strong acid (HCl) and a strong base (NaOH).
Therefore, it is a neutral solution.
(ii) KBr: a salt of a strong acid (HBr) and a strong base (KOH).
Therefore, it is a neutral solution.
(iii) NaCN: a salt of a weak acid (HCN) and a strong base (NaOH).
Therefore, it is a basic solution.
(iv) NH4NO3: a salt of a strong acid (HNO3) and a weak base (NH4OH).
Therefore, it is an acidic solution.
(v) NaNO2: a salt of a weak acid (HNO2) and a strong base (NaOH).
Therefore, it is a basic solution.
(vi) KF: a salt of a weak acid (HF) and a strong base (KOH).
Therefore, it is a basic solution.
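Where a quantitative answer is wanted, the standard hydrolysis formulas apply (general textbook relations, not part of the original answer; c is the salt concentration):

```latex
\text{salt of weak acid + strong base:}\quad \mathrm{pH} = 7 + \tfrac{1}{2}\,\mathrm{p}K_a + \tfrac{1}{2}\log c
\text{salt of strong acid + weak base:}\quad \mathrm{pH} = 7 - \tfrac{1}{2}\,\mathrm{p}K_b - \tfrac{1}{2}\log c
```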
Q64 :
The ionization constant of chloroacetic acid is 1.35 x 10-3. What will be the pH of 0.1M acid and its 0.1M sodium salt solution?
It is given that Ka for ClCH2COOH is 1.35 × 10–3.
For the 0.1 M acid: [H+] = √(Ka × c) = √(1.35 × 10–3 × 0.1) = 1.16 × 10–2 M, so pH = 1.94.
ClCH2COONa is the salt of a weak acid, i.e., ClCH2COOH, and a strong base, i.e., NaOH.
For the 0.1 M salt: Kh = Kw/Ka = 7.4 × 10–12, [OH–] = √(Kh × 0.1) = 8.6 × 10–7 M, pOH = 6.07, and pH = 14 – 6.07 = 7.94.
Q65 :
Ionic product of water at 310 K is 2.7 x 10-14. What is the pH of neutral water at this temperature?
Ionic product, Kw = [H+][OH–] = 2.7 × 10–14. For neutral water, [H+] = [OH–] = √(2.7 × 10–14) = 1.64 × 10–7 M, so pH = – log (1.64 × 10–7) = 6.78.
Hence, the pH of neutral water is 6.78.
Q66 :
Calculate the pH of the resultant mixtures:
a) 10 mL of 0.2M Ca(OH)2 + 25 mL of 0.1M HCl
b) 10 mL of 0.01M H2SO4 + 10 mL of 0.01M Ca(OH)2
c) 10 mL of 0.1M H2SO4 + 10 mL of 0.1M KOH
(a) Moles of OH– = 2 × (10 × 0.2)/1000 = 4 × 10–3 mol; moles of H+ = (25 × 0.1)/1000 = 2.5 × 10–3 mol.
Thus, excess of OH– = 0.0015 mol in 35 mL of mixture, so [OH–] = 0.0015/0.035 = 4.29 × 10–2 M.
pOH = 1.37 and pH = 14 – 1.37 = 12.63.
(b)
Since there is neither an excess of H+ nor of OH–, the solution is neutral. Hence, pH = 7.
(c) Moles of H+ = 2 × (10 × 0.1)/1000 = 2 × 10–3 mol; moles of OH– = (10 × 0.1)/1000 = 1 × 10–3 mol.
Excess of H+ = 0.001 mol in 20 mL of mixture, so [H+] = 0.001/0.020 = 0.05 M, and pH = – log (0.05)
= 1.30.
Q67 :
Determine the solubilities of silver chromate, barium chromate, ferric hydroxide, lead chloride and mercurous iodide at 298K from their solubility product constants given in Table 7.9 (page 221). Determine also the molarities of individual ions.
(1) Silver chromate: Ag2CrO4 ⇌ 2Ag+ + CrO42–
Let the solubility of Ag2CrO4 be s. Then Ksp = (2s)²(s) = 4s³ = 1.1 × 10–12, so s = 0.65 × 10–4 M.
Molarity of Ag+ = 2s = 2 × 0.65 × 10–4 = 1.30 × 10–4 M
Molarity of CrO42– = s = 0.65 × 10–4 M
(2) Barium chromate: BaCrO4 ⇌ Ba2+ + CrO42–
Let s be the solubility of BaCrO4.
Thus, [Ba2+] = s and [CrO42–] = s, so Ksp = s² = 1.2 × 10–10 and s = 1.1 × 10–5 M.
Molarity of Ba2+ = molarity of CrO42– = 1.1 × 10–5 M
(3) Ferric hydroxide: Fe(OH)3 ⇌ Fe3+ + 3OH–
Let s be the solubility of Fe(OH)3. Then Ksp = s(3s)³ = 27s⁴ = 1.0 × 10–38, so s = 1.39 × 10–10 M.
Molarity of Fe3+ = s = 1.39 × 10–10 M
Molarity of OH– = 3s = 4.17 × 10–10 M
(4) Lead chloride: PbCl2 ⇌ Pb2+ + 2Cl–
Let s be the solubility of PbCl2. Then Ksp = s(2s)² = 4s³ = 1.6 × 10–5, so s = 1.59 × 10–2 M.
Molarity of Pb2+ = s = 1.59 × 10–2 M
Molarity of chloride = 2s = 3.18 × 10–2 M
(5) Mercurous iodide: Hg2I2 ⇌ Hg22+ + 2I–
Let s be the solubility of Hg2I2. Then Ksp = s(2s)² = 4s³ = 4.5 × 10–29, so s = 2.24 × 10–10 M.
Molarity of Hg22+ = s = 2.24 × 10–10 M and molarity of I– = 2s = 4.48 × 10–10 M
Q68 :
The solubility product constant of Ag2CrO4 and AgBr are 1.1 x 10-12 and 5.0 x 10-13respectively. Calculate the ratio of the molarities of their saturated solutions.
Let s be the solubility of Ag2CrO4. Then Ksp = 4s³ = 1.1 × 10–12, so s = 6.5 × 10–5 mol/L.
Let s′ be the solubility of AgBr. Then Ksp = s′² = 5.0 × 10–13, so s′ = 7.07 × 10–7 mol/L.
Therefore, the ratio of the molarities of their saturated solutions is s/s′ = (6.5 × 10–5)/(7.07 × 10–7) = 91.9.
Q69 :
Equal volumes of 0.002 M solutions of sodium iodate and cupric chlorate are mixed together. Will it lead to precipitation of copper iodate? (For cupric iodate Ksp = 7.4 x 10-8).
When equal volumes of sodium iodate and cupric chlorate solutions are mixed together, then the molar concentrations of both solutions are reduced to half i.e., 0.001 M.
Then, [IO3–] = [Cu2+] = 0.001 M = 10–3 M.
Now, the solubility equilibrium for copper iodate can be written as: Cu(IO3)2 ⇌ Cu2+ + 2IO3–
Ionic product of copper iodate: [Cu2+][IO3–]² = (10–3)(10–3)² = 1 × 10–9.
Since the ionic product (1 × 10–9) is less than Ksp (7.4 × 10–8), precipitation will not occur.
Q70 :
The ionization constant of benzoic acid is 6.46 x 10-5 and Ksp for silver benzoate is 2.5 x 10-13. How many times is silver benzoate more soluble in a buffer of pH 3.19 compared to its solubility in pure water?
Since pH = 3.19, [H3O+] = antilog (–3.19) = 6.46 × 10–4 M.
Let the solubility of C6H5COOAg be x mol/L.
Then, [C6H5COOH]/[C6H5COO–] = [H3O+]/Ka = (6.46 × 10–4)/(6.46 × 10–5) = 10, so of the dissolved benzoate only x/11 remains as C6H5COO–.
Ksp = [Ag+][C6H5COO–] = x × x/11 = 2.5 × 10–13, so x² = 2.75 × 10–12 and x = 1.66 × 10–6 mol/L.
Thus, the solubility of silver benzoate in a pH 3.19 solution is 1.66 × 10–6 mol/L.
Now, let the solubility of C6H5COOAg in pure water be x′ mol/L. Then Ksp = x′² = 2.5 × 10–13, so x′ = 5 × 10–7 mol/L, and x/x′ = (1.66 × 10–6)/(5 × 10–7) = 3.32.
Hence, C6H5COOAg is approximately 3.317 times more soluble in a low pH solution.
Q71 :
What is the maximum concentration of equimolar solutions of ferrous sulphate and sodium sulphide so that when mixed in equal volumes, there is no precipitation of iron sulphide? (For iron sulphide, Ksp = 6.3 x 10-18).
Let the maximum concentration of each solution be x mol/L. After mixing equal volumes, the concentration of each solution is reduced to half, i.e., x/2.
Then [Fe2+][S2–] = (x/2)(x/2) = x²/4, and precipitation is avoided as long as x²/4 ≤ Ksp = 6.3 × 10–18, i.e., x ≤ 2√(6.3 × 10–18) = 5.02 × 10–9 M.
If the concentrations of both solutions are equal to or less than 5.02 × 10–9 M, then there will be no precipitation of iron sulphide.
Q72 :
What is the minimum volume of water required to dissolve 1g of calcium sulphate at 298 K? (For calcium sulphate, Ksp is 9.1 x 10-6).
Let the solubility of CaSO4 be s. Then Ksp = s² = 9.1 × 10–6, so s = 3.02 × 10–3 mol/L.
Molecular mass of CaSO4 = 136 g/mol
Solubility of CaSO4 in gram/L = 3.02 × 10–3 × 136
= 0.41 g/L
This means that we need 1L of water to dissolve 0.41g of CaSO4
Therefore, to dissolve 1 g of CaSO4 we require 1/0.41 = 2.44 L of water.
Q73 :
The concentration of sulphide ion in 0.1 M HCl solution saturated with hydrogen sulphide is 1.0 × 10–19 M. If 10 mL of this is added to 5 mL of 0.04 M solution of the following: FeSO4, MnCl2, ZnCl2 and CdCl2, in which of these solutions will precipitation take place?
https://freakonometrics.hypotheses.org/date/2011/03/17 | # Circular or spherical data, and density estimation
A few years ago, while I was working on kernel based density estimation on compact support distributions (like copulas), I went through a series of papers on circular distributions. By that time, I thought it was something for mathematicians working on weird spaces… but during the past weeks, I saw several potential applications of those estimators.
• circular data density estimation
Consider the density f of an angle ω, i.e. a function f ≥ 0 such that
∫ f(ω) dω = 1 over [0, 2π]
with a circular relationship, i.e. f(ω) = f(ω + 2π). It can be seen as an invariance by rotation.
von Mises proposed a parametric model in 1918 (see here or there), assuming that
f(ω) = exp(κ cos(ω − μ)) / (2π I₀(κ))
where I₀(κ) is the modified Bessel function of order 0,
(which simply gives a normalization constant). There are two parameters here, κ (some concentration parameter) and μ (a direction).
From a series of observed angles ω₁, …, ωₙ, the maximum likelihood estimator for κ is the solution of
A(κ̂) = R̄
where
A(κ) = I₁(κ)/I₀(κ)
and
R̄ = n⁻¹ √((Σᵢ cos ωᵢ)² + (Σᵢ sin ωᵢ)²)
and where I₀ and I₁ are the modified Bessel functions of order 0 and 1. Well, that estimator is biased, but it is possible to improve it (see here or there). This can be done easily in R (actually Jeff Gill – here – used that package in several applications). But I am not a big fan of that technique….
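For completeness, here is a minimal sketch of such a fit using the circular package (the post does not name its package explicitly, and the simulated data and parameter values below are mine, not the author's):

```r
library(circular)
set.seed(1)
# simulate 200 angles from a von Mises with direction mu = pi/4 and concentration kappa = 3
omega <- rvonmises(200, mu = circular(pi/4), kappa = 3)
fit <- mle.vonmises(omega, bias = TRUE)  # bias = TRUE applies the bias correction mentioned above
fit$mu     # estimated direction
fit$kappa  # estimated concentration
```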
• density estimation for hours on simulated data
A nice application can be the estimation of the daily density of temporal events (e.g. phone calls as we'll see later on, or email arrival times). Let Xᵢ be the time (in hours) of the i-th observation (the i-th phone call received). Then set
Ωᵢ = 2πXᵢ/24
The time is now seen as an angle. It is possible to consider the equivalent of a histogram,
```set.seed(1)
library(circular)
X=rbeta(100,shape1=2,shape2=4)*24
Omega=2*pi*X/24
Omegat=2*pi*trunc(X)/24
Ht=circular(Omegat)  # angles of the truncated hours (definition assumed; this line was missing from the scraped code)
plot(Ht, stack=FALSE, shrink=1.3, cex=1.03)
points(Ht, rotation = "clock", zero =c(rad(90)),
col = "1", cex=1.03, stack=TRUE )
rose.diag(Ht-pi/2,bins=24,shrink=0.33,xlim=c(-2,2),ylim=c(-2,2),
axes=FALSE,prop=1.5)```
or a kernel based estimation of the density (the gray line on the right).
```circ.dens = density(Ht+3*pi/2,bw=20)
plot(Ht, stack=TRUE, shrink=.35, cex=0, sep=0.0,
axes=FALSE,tol=.8,zero=c(0),bins=24,
xlim=c(-2,2),ylim=c(-2,2), ticks=TRUE, tcl=.075)
lines(circ.dens, col="darkgrey", lwd=3)
text(0,0.8,"24", cex=2); text(0,-0.8,"12",cex=2);
text(0.8,0,"6",cex=2); text(-0.8,0,"18",cex=2)```
The code looks rather simple. But I am not very comfortable using codes that I do not completely understand. So I did my own. The first step was to get a graph similar to the one we have on the right, except that I prefer my own kernel based estimator. The idea is that instead of estimating the density on [0, 24], we estimate it on the replicated sample {Xᵢ − 24, Xᵢ, Xᵢ + 24}. Then we multiply by 3 to get the density only on [0, 24]. For the bandwidth, I took the same as the one that we would have taken on the original sample.
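Packaged as a helper, that replicate-and-rescale idea reads as follows (my sketch, not the author's code; the original plotting code follows below):

```r
# kernel density for hour-of-day data on [0, 24), handling the wrap-around at midnight
circ_density <- function(x, n = 500) {
  xl <- c(x - 24, x, x + 24)       # replicate the sample on both sides
  bw <- density(x)$bw              # bandwidth chosen on the original sample
  d  <- density(xl, bw = bw, n = 3 * n)
  keep <- d$x >= 0 & d$x < 24
  list(x = d$x[keep], y = 3 * d$y[keep])   # times 3: the mass was spread over 3 copies
}
```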
The code is simply the following
```U=seq(0,1,by=1/250)
O=U*2*pi
U12=seq(0,1,by=1/24)
O12=U12*2*pi
X=rbeta(100,shape1=2,shape2=4)*24
OM=2*pi*X/24
XL=c(X-24,X,X+24)
d=density(X)
d=density(XL,bw=d$bw,n=1500)
I=which((d$x>=6)&(d$x<=30))
Od=d$x[I]/24*2*pi-pi/2
Dd=d$y[I]/max(d$y)+1
plot(cos(O),-sin(O),xlim=c(-2,2),ylim=c(-2,2),
type="l",axes=FALSE,xlab="",ylab="")
for(i in pi/12*(0:12)){
abline(a=0,b=tan(i),lty=1,col="light yellow")}
segments(.9*cos(O12),.9*sin(O12),1.1*cos(O12),1.1*sin(O12))
lines(Dd*cos(Od),-Dd*sin(Od),col="red",lwd=1.5)
text(.7,0,"6"); text(-.7,0,"18")
text(0,-.7,"12"); text(0,.7,"24")
R=1/24/max(d$y)/3+1
lines(R*cos(O),R*sin(O),lty=2)```
Note that it is possible to stress more (visually) on hours having few phone calls, or a lot (compared with a homogeneous Poisson process), e.g.
```plot(cos(O),-sin(O),xlim=c(-2,2),ylim=c(-2,2),
type="l",axes=FALSE,xlab="",ylab="")
for(i in pi/12*(0:12)){
abline(a=0,b=tan(i),lty=1,col="light yellow")}
segments(2*cos(O12),2*sin(O12),1.1*cos(O12),1.1*sin(O12), col="light grey")
segments(.9*cos(O12),.9*sin(O12),1.1*cos(O12),1.1*sin(O12))
text(.7,0,"6")
text(-.7,0,"18")
text(0,-.7,"12")
text(0,.7,"24")
R=1/24/max(d$y)/3+1
lines(R*cos(O),R*sin(O),lty=2)
AX=R*cos(Od);AY=-R*sin(Od)
BX=Dd*cos(Od);BY=-Dd*sin(Od)
COUL=rep("blue",length(AX))
COUL[R<Dd]="red"
CM=cm.colors(200)
a=trunc(100*Dd/R)
COUL=CM[a]
segments(AX,AY,BX,BY,col=COUL,lwd=2)
lines(Dd*cos(Od),-Dd*sin(Od),lwd=2)```
We get here those two graphs,
To be honest, I do not really like that representation – even if it looks nice. If we compare that circular representation to a more classical one (from 0:00 till 23:59 one the graph on the left, below), I do have a problem to interpret the areas in blue and pink.
density of wind direction
On the left, we compare two densities, so the area in pink is the same as the area in blue. But here, it is no longer the case: the area in pink is always larger than the one in blue. So it might help to see when we have a difference, but there is a scaling issue that we cannot discuss further… But let us see if we can use that estimation technique on several problems.
A standard application when studying angles is wind direction. For instance, in Montréal, it is possible to find hourly observations, starting in 1974 (we just need a R robot to pick up the information, but I’ll tell more about that in another post, someday). Here, we have directly an angle. So we can use a code rather similar to the one used above to estimate the distribution of wind direction in Montréal.
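With hourly directions stored in degrees (say a vector deg — hypothetical here; the scraping robot is described in another post), the circular package handles this case directly:

```r
library(circular)
# deg: hypothetical vector of hourly wind directions, in degrees
wind <- circular(deg, units = "degrees", template = "geographics")
plot(density.circular(wind, bw = 40))   # bw is the concentration of the von Mises kernel
rose.diag(wind, bins = 16, prop = 1.5)
```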
density of 911 phone calls
Note that our estimate is consistent with several graphs that can be found on meteorological websites (e.g. the one above on the right, that was found here).
In a recent post (here) I wanted to check about the “midnight crime” myth, using hours of 911 phone calls in Montréal.
That was for all phone calls. But if we look more specifically, for burglaries, we have the distribution on the left, and for conflicts the one on the right
while for gun shots, we have the distribution on the left, and for "troubles" (basically people making too much noise at parties) or "noise" the one on the right. We do clearly observe that gun shots occur a bit before midnight. See also here for another study, but this time in NYC (thanks @PAC for the link).
• density of earth temperatures, or earthquakes
Of course it is also possible to work in higher dimension. Before, we went from densities on ℝ to densities on the unit circle S¹. But similarly, it is possible to go from ℝ² to the unit sphere S². A nice application being global climate studies,
The idea being that points on the left above are extremely close to the ones on the right. An application can be e.g. earthquake occurrence. Data can be found here.
```library(ks)
# EQ: the earthquake catalogue (a data frame with Longitude/Latitude columns), read from the link above
X=cbind(EQ$Longitude,EQ$Latitude)
Hpi1 = Hpi(x = X)
DX=kde(x = X, H = Hpi1)
library(maps)
map("world")
points(X,cex=.2,col="blue")
Y=rbind(cbind(X[,1],X[,2]),cbind(X[,1]+360,X[,2]),
cbind(X[,1]-360,X[,2]),cbind(X[,1],X[,2]+180),
cbind(X[,1]+360,X[,2]+180),cbind(X[,1]-360,X[,2]+180),
cbind(X[,1],X[,2]-180),cbind(X[,1]+360,X[,2]-180),
cbind(X[,1]-360,X[,2]-180))
DY=kde(x = Y, H = Hpi1)
plot(DY,add=TRUE,col="purple")```
Without any correction, we get the red level curves. The pink ones integrate the correction.
https://www.physicsforums.com/threads/find-coefficient-of-kinetic-friction-between-a-bullet-and-a-pendulum.651385/ | # Find coefficient of kinetic friction between a bullet and a pendulum
1. Nov 11, 2012
### jreelawg
1. The problem statement, all variables and given/known data
A bullet collides and embeds itself into a block hanging from a rope. The block and embedded bullet swing out to an angle of 10 degrees.
If the bullet plows 2 cm into the block before stopping, what is the coefficient of kinetic friction between the block and the bullet?
bullet has mass of .1 kg
block on pendulum of length 1 m has mass of 5 kg
2. Relevant equations
momentum conservation, energy conservation, newtons laws
3. The attempt at a solution
I found the initial velocity of the bullet, 273.386 m/s^2, using momentum and energy conservation.
Thought maybe I could use, vf^2 = vi^2 + 2ax, which I guess tells me that the bullet after collision had an acceleration in the x direction of -1,868,500 m/s^2 with respect to the frame of reference of the pendulum+embedded bullet.
I thought maybe I would then use F=ma.
2. Nov 11, 2012
### Eango
I think you're on the right track.
Wouldn't it just be from where you left off:
Ffriction = ma
Ff = ma
UN = ma
Umg = ma
U = ma/mg
U = a/g
Also make sure to take the 10 degrees into consideration and I think you're good then.
3. Nov 11, 2012
### haruspex
Wrong units - should be m/s. And I get rather less than 273m/s. Please show your working to this point.
4. Nov 11, 2012
### jreelawg
I accidentally used .01 instead of .1 for mass of bullet in my calculation.
Here is what I have. I'm especially unsure about the validity of the last part.
v1 = velocity immediately after collision
v0 = initial velocity of bullet
mp = mass of pendulum = 5 kg
mb = mass of bullet = 0.1 kg
y = delta Y = 1 - cos(10 degrees)
momentum conservation:
mb*V0 = (mb + mp)*v1
v0 = (mb + mp)(v1)/mb
energy conservation of swing:
1/2(mb + mp)*v1^2 = (mp + mb)*yg
v1 = (2yg)^(1/2)
=> v0 = (mb + mp) (2yg)^(1/2) / mb
=((.1 + 5) (2*(1-cos(10))*9.8)^(1/2) )/0.1
= 27.8297 m/s
Acceleration of bullet after collision:
0 = 27.8297^2 + 2*a*(0.02)
a = -(27.8297^2) / 0.04
= -19326.3 m/s^2
Force:
Fnet on bullet = Fk = -9.8 * 0.1 * Uk = -19326.3 * 0.1
Uk = 1972.07
My thoughts:
I don't think the last part is right. I would think that the main "normal" force affecting the friction would be the block squeezing the bullet as it embeds itself into the block. The coefficient of kinetic friction I would think would be a constant determined by the two materials. But this problem doesn't lend any information about either the force which would squeeze the bullet or specific materials.
Last edited: Nov 11, 2012
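(A quick consistency check of the kinematics in the post above, taking the stopping distance Δx = 0.02 m:)

```latex
v_f^2 = v_0^2 + 2a\,\Delta x = 0 \;\Rightarrow\; a = -\frac{v_0^2}{2\,\Delta x} = -\frac{(27.83)^2}{0.04} \approx -1.94\times 10^{4}\ \mathrm{m/s^2}
```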
5. Nov 11, 2012
### jreelawg
Could this be solved using the work kinetic energy theorem?
6. Nov 11, 2012
### jreelawg
So kinetic energy is lost to thermal energy.
1/2(mb)V0^2 - 1/2(mb + mp)V1^2 = Eth ?
= 37.9634 J = fk (.02) = n * Uk
uk = (37.9634/.02)/n = 1898.17/n
So is n really just the (mb)g?
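(Checking the energy bookkeeping in this post, with v0 = 27.83 m/s and v1 = √(2g(1 − cos 10°)) ≈ 0.546 m/s from post #4:)

```latex
E_{th} = \tfrac{1}{2}m_b v_0^2 - \tfrac{1}{2}(m_b+m_p)v_1^2 = \tfrac{1}{2}(0.1)(27.83)^2 - \tfrac{1}{2}(5.1)(0.546)^2 \approx 38.73 - 0.76 \approx 37.96\ \mathrm{J}
```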
7. Nov 11, 2012
### haruspex
Agreed. (Though this is making the simplifying assumption that the bullet comes to rest so quickly within the block that most of the swing is with the bullet and block moving as one. Tricky question otherwise.)
Again, I agree. You can calculate the retardant force on the bullet, but that still doesn't tell you the coefficient of friction.
8. Nov 12, 2012
### jreelawg
Ok. Thanks haruspex. I guess I'll put this problem to rest for now.
http://scienceblogs.com/pontiff/2009/10/06/nobel-prize-in-physics-for-fib/ | # Nobel Prize in Physics for Fiber Optics, CCDs
The Nobel Prize in Physics for 2009 has been announced and goes to Charles K. Kao for “for groundbreaking achievements concerning the transmission of light in fibers for optical communication” and to Willard S. Boyle and George E. Smith for the “for the invention of an imaging semiconductor circuit – the CCD sensor.”
I’m crazy busy so don’t have time to comment on the physics of these awards at the moment, but the thing that struck me about this selection will probably strike a few others and can be summarized in two words: Bell labs. Boyle and Smith are retired from Bell labs which is also where they invented the CCD. And today…. Well today Bell labs does not do any basic physics research. Instead its current owner, Alcatel-Lucent has Bell labs focused on “more immediately marketable areas such as networking, high-speed electronics, wireless, nanotechnology and software.” In other words, you can pretty much bet that when you plot Bell labs nobel prizes verses time you will see an amazing bubble, leading to a complete collapse.
Oh, and by my count that makes two McGill grads with Nobel prizes this year so far (Boyle in physics, Szostak in medicine.)
1. #1 John Sidles
October 6, 2009
Another lesson of these Nobel prizes is that quantum limits *can be approached*
Both CCDs and optical fibers approach these limits quite closely; light fibers in terms of transparency (limited by atomic-scale fluctuations in SiO2 density), and CCDs in terms of quantum efficiency (limited by the no-cloning theorem … you can’t detect the same photon twice!).
In fact, the no-cloning theorem also impacts fiber optics, in the sense that no (phase-independent) power amplifier can exhibit a noise figure less than 10 log_10(2) = 3dB. This fundamental quantum limit (which commercial fiber-optic amplifiers *already* approach within 0.1 dB) is due to the ubiquitous Carlton Caves, and furthermore, it is interesting (as reviewed in Section 3.2.8 of our Practical Recipes article in NJP), that Caves’ limit is equivalent to all of the other (there are many of them) standard quantum limits on sensing and/or amplification and/or feedback control.
Among all the standard quantum limits, Caves’ is IMHO uniquely distinguished by being nondimensional *and* in being the easiest to remember with the factors of two correct.
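(For reference, the usual statement of Caves' bound for a phase-insensitive linear amplifier of gain G — a standard result, quoted here rather than taken from the thread:)

```latex
A \;\ge\; \frac{|1 - G^{-1}|}{2} \qquad\Longrightarrow\qquad \mathrm{NF} \to 10\log_{10} 2 \approx 3.01\ \mathrm{dB}\quad (G \gg 1)
```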
The question naturally arises, what *other* technologies are pressing against fundamental quantum limits (and informatic limits, these being fundamentally the same limits) … to which the answer (or question) obviously is … uuuhhhhhh … *ALL* of them?
And so the *real* question this Nobel Award suggest is: “Which technologies will be the *next* to approach the quantum limits?”
Because we can prudently bet that vigorous industries will emerge, based upon them.
2. #2 Ian Durham
October 6, 2009
It’s interesting that the corporate R&D facilities that used to be at the cutting edge and have since devolved into facilities designed to improve the short-term bottom line, are all older facilities. The ground-breaking corporate R&D stuff these days seems to be coming from companies founded in the past forty years (IBM being a notable exception).
3. #3 Pieter Kok
October 6, 2009
John, I sincerely hope that quantum limits (at least the standard ones) can be beaten!
4. #4 John Sidles
October 6, 2009
I agree 100% with Dave *and* with Ian … “It was the best of times, it was the worst of times.”
The challenge for young folks, perhaps, is to foresee which modern enterprises are entering the former times rather than the latter times.
Obviously this is mighty tough to foresee …
5. #5 John Sidles
October 6, 2009
Pieter Kok: John, I sincerely hope that quantum limits (at least the standard ones) can be beaten!
Not to disappoint you, Pieter, but IMHO the quantum limits *cannot* be beaten (unless QM is physically wrong, which I very much doubt it is) … any more than the laws of thermodynamics can be beaten.
On the other hand, the scientific literature often is imprecise as to what "the quantum limits" actually are; the result is that it is deplorably common that articles adopt a lax or imprecise definition … and then shoot it down.
Hence my appreciation for Caves’ 3 dB quantum limit, which is precisely defined, and rigorously derived, from quantum principles that are exceedingly general.
Do you want to break Caves’ limit? It’s easy. Just relax the constraint “phase-independent” … hey, you’re done … submit the article to PRL!
But this approach to beating quantum limits, while mathematically and physically valid, is not very useful in engineering practice, for two reasons (1) as the noise figure of one amplification quadrature is adjusted to exceed Caves’ limit, the noise figure other quadrature necessarily degrades. (2) In observational applications, in general one *doesn’t* know the phase of the signal, and so we are just as likely to degrade the SNR as to improve it.
These considerations go far in helping us to understand why enterprises like gravity wave detection have tiptoed up to Caves’-type standard quantum limits, but have not exceeded them.
To put a crisp point on it, how long will it be before advances in QIT lead to phase-independent optical power amplifiers having a noise figure better than 3 dB?
The answer (IMHO) is simply: Never.
Alternatively: When exact cloning of quantum states becomes possible too.
Alternatively: When the second law of thermodynamics is proven to be wrong too.
Alternatively: When the *first* law of thermodynamics is proven to be wrong too.
The bottom line (IMHO) is simply this: articles on the theme that “the standard quantum limit has been beaten” need to be read in the same spirit of careful scientific inquiry as articles on the theme “the second law of thermodynamics has been beaten.”
The authors may well have accomplished something very clever, but nonetheless such claims deserve *very* careful scrutiny as to *precisely* what has been demonstrated.
As with claims that “the second law of thermodynamics has been beaten”, investigation will generally discover (in Thoreau’s memorable phrase) that there is a “trout in the milk.”
This curmudgeonly attitude does *not* mean that I am any kind of pessimist regarding quantum mechanical math, science, or engineering. To the contrary, IMHO most practical technologies (both hardware *and* simulation) are presently so far from approaching the fundamental limits that are imposed by QM/QIT, that opportunities for creative research are effectively unbounded.
6. #6 Pieter Kok
October 6, 2009
That’s a lot of humble opinions!
The standard quantum limit is often expressed as a Cramer-Rao bound d\phi > 1/\sqrt{N}, where N is the number of independent trials. There is a huge experimental effort underway to actually beat this limit.
Even if you cannot improve the precision across the full domain of a phase, very often you are only interested in small phases anyway, in which case you want your highest sensitivity around \phi = 0. And strictly theoretically, NOON states (|N,0> + |0,N>) retain their phase sensitivity over the entire phase domain (although any noise will screw all this up).
So beating the SQL is (IMHO) not at all in the same class as perpetua mobilia of the second kind.
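(The two scalings at issue, both standard in quantum metrology — the shot-noise/standard quantum limit versus the Heisenberg limit for N resources:)

```latex
\delta\phi_{\mathrm{SQL}} = \frac{1}{\sqrt{N}}, \qquad \delta\phi_{\mathrm{Heisenberg}} = \frac{1}{N}
```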
7. #7 Patrick Hayden
October 6, 2009
Oh, and by my count that makes two McGill grads with Nobel prizes this year so far (Boyle in physics, Szostak in medicine.)
A w00t for our illustrious alums. McGill’s press release says that Szostak started his undergrad degree in cell biology at the age of fifteen. Apparently prodigies aren’t limited to mathematics, physics and music after all.
8. #8 John Sidles
October 6, 2009
We aren’t really disagreeing, Peter … because as the noise figure for the measured phase improve, doesn’t the noise figure for the measured 〈N〉 degrade? So that that Caves’ caveat applies?
It’s really more a question of culture IMHO.
Engineers want quantum limits that can’t be broken … unless the physicists are wrong about the fundamental laws of physics. And such limits undoubtedly exist (Caves’ amplifier noise limits are among them).
Physicists want quantum limits that *can* be broken … provided that physicists devise clever schemes that evade conventional engineering wisdom (the phase measurements that you describe belong to this class).
The result is that both sides independently define “quantum limits” so as to obtain exactly their desired result … with the result that (upon careful analysis) there is *no* mathematical or physical incompatibility between these two styles of research.
Plenty of surprises can result, however. An instructive example (from gravity wave detection) is to calculate a “standard quantum noise limit” for free test masses, and then show that this limit can be broken by interferometric methods for phase-independent detection.
Hokey schmokes!!! Apparently Caves’ limit has been broken!
But we can calm ourselves by looking for the “trout in the milk”. And that trout is not easy to spot on paper … it is that the light beam itself (from the interferometer) serves as a spring … so that the test mass is *not* free, but rather is spring-suspended.
This “trout” is not an academic quibble — the light beam itself in advLIGO will be more rigid than an equivalent (two-kilometer) diamond bar.
The bottom line is that in practical devices like LIGO, Caves-type limits *cannot* be broken, no matter what exotic quantum light states are injected into the device.
It is perfectly feasible, however, to inject squeezed states that provide stiffer optical springs for lower optical powers, which (potentially, it's not easy to generate these states) have beneficial effects on both the signal-to-noise ratio *and* the infamous Sidles-Sigg parametric instabilities.
For me, the best way to derive rigorous Caves'-type bounds for linear measurement schemes is to write down the most general path integral (arXiv:quant-ph/0211108v3) and deduce limits that follow solely from the fact that QM *has* a path integral representation … it was this path-integral work that led to discovery of the Sidles-Sigg instabilities (although the published analysis of that instability is by a shorter, more classical route).
Our recent NJP Practical Recipes article is a nonlinear generalization of this early path-integral exercise. Still, it was this path-integral exercise that warmed my appreciation of the strength of Caves-style quantum limits.
The preceding has been (obviously) an ultra-orthodox and ultra-conservative view of how to formulate non-“Cavesian” quantum limits that *can* be broken (essentially by engineering cleverness) … as contrasted with “Cavesian” quantum limits (that are breakable only if quantum physics is broken too).
At the end of the day, IMHO everyone is right. It is “merely” Cavesian versus non-Cavesian expectations and narratives that differ among disciplines, not the fundamental math and physics.
9. #9 Pieter Kok
October 6, 2009
Glad we agree, John. I’ll check out your NJP; it sounds interesting.
10. #10 John Sidles
October 6, 2009
Pieter, these consideration are at the forefront of my thinking because (as Dave knows) our UW QSE Group is writing-up a successor to that NJP article, which advances the quasi-heretical notion that simulating quantum spin systems belongs to the same complexity class as simulating classical spin systems.
For this to be true, there have to be some “trouts in the milk”.
One “trout” is that the spins must be in (possibly weak) thermal contact with a reservoir … another “trout” is that no ancilla bits can be introduced in the course of the simulation.
The consequence (of course) is that Shor’s algorithm *cannot* be simulated with classical resources. On the other hand, there are a tremendous number of real-world spin systems that *can* be efficiently simulated.
For all you Seattle folks, we’ve decided to give a series of five seminars on this “concentration-and-pullback” symplectic framework, titled: Concentration conjectures and pullback frameworks for classical and quantum spin simulations
The series starts this coming Friday, at 2:00 pm, in Room K450 of the Health Sciences complex. This is the same room, and the hour before, the Baker Group’s synthetic biology seminar … which *also* focuses on large-scale concentration-and-pullback symplectic simulations.
And yes, ME students can audit these lectures for one hour of credit … at present there is a glorious total of *one* student signed-up!
https://www.arxiv-vanity.com/papers/1112.3930/ | # Directed Flow of Identified Particles in Au + Au Collisions at √sNN=200 GeV at RHIC
###### Abstract
STAR’s measurements of directed flow (v1) around midrapidity for π±, K±, K0S, p and p̄ in Au + Au collisions at √sNN = 200 GeV are presented. A negative v1(y) slope is observed for most of the produced particles (π±, K±, K0S and p̄). In 5-30% central collisions a sizable difference is present between the v1(y) slope of protons and antiprotons, with the former being consistent with zero within errors. The v1 excitation function is presented. Comparisons to model calculations (RQMD, UrQMD, AMPT, QGSM with parton recombination, and a hydrodynamics model with a tilted source) are made. For those models which have calculations of v1 for both pions and protons, none of them can describe v1 for pions and protons simultaneously. The hydrodynamics model with a tilted source as currently implemented cannot explain the centrality dependence of the difference between the v1(y) slopes of protons and antiprotons.
###### pacs:
25.75.Ld
STAR Collaboration
The BNL Relativistic Heavy Ion Collider (RHIC) was built to study a new form of matter known as the Quark Gluon Plasma (QGP) whitePapers, which existed in the universe shortly after the Big-Bang. At RHIC, two nuclei are collided at near light-speed, and the collision produces thousands of particles due to the significant energy deposited. The collective motion of the produced particles can be characterized methodPaper by Fourier coefficients,
v_n = ⟨cos n(ϕ − ψ)⟩ (1)
where n denotes the harmonic, and ϕ and ψ denote the azimuthal angle of an outgoing particle and the reaction plane, respectively. The reaction plane is defined by the collision axis and the line connecting the centers of the two nuclei. Thus far, five of these coefficients have been measured and found to be non-zero at RHIC VoloshinPoskanzerSnellings. They are directed flow v_1, elliptic flow v_2, triangular flow v_3, the 4th-order harmonic flow v_4 and the 6th-order harmonic flow v_6. This paper will focus on the directed flow, the first Fourier coefficient.
Directed flow v_1 describes the sideward motion of produced particles in ultra-relativistic nuclear collisions. It is believed to be generated during the nuclear passage time before thermalization happens, thus it carries early information from the collision Schnedermann; Kahana; Barrette; NA44. The shape of directed flow at midrapidity may be modified by the collective expansion and reveal a signature of a possible phase transition from normal nuclear matter to a QGP antiFlow; thirdFlow; Stocker. It is argued that directed flow, as an odd function of rapidity y, may exhibit a small slope (flatness) at midrapidity due to a strong expansion of the fireball being tilted away from the collision axis. Such tilted expansion gives rise to anti-flow antiFlow or a third-flow thirdFlow component (not the third flow harmonic). The anti-flow (or the third-flow component) is perpendicular to the source surface, and is in the opposite direction to the bouncing-off motion of nucleons. If the tilted expansion is strong enough, it can even overcome the bouncing-off motion and result in a negative v_1 slope at midrapidity, potentially producing a wiggle-like structure in v_1(y). Note that although calculations antiFlow; thirdFlow for both anti-flow and the third-flow component are made for collisions at SPS energies, where the first order phase transition to a QGP is believed to be the most relevant Stocker, the direct cause of the negative slope is the strong, tilted expansion, which is also important at RHIC’s top energies. Indeed, hydrodynamic calculations Bozek for Au + Au collisions at √sNN = 200 GeV with a tilted source as the initial condition can give a similar negative slope as that found in data. A wiggle structure is also seen in the Relativistic Quantum Molecular Dynamics (RQMD) model wiggleRQMD, and it is attributed to baryon stopping together with a positive space-momentum correlation. In this picture, no phase transition is needed, and pions and nucleons flow in opposite directions. To distinguish between baryon stopping and anti-flow, it is desirable to measure v_1 for identified particles and compare the sign of their slopes at midrapidity. In particular, the observation of a centrality dependence of proton v_1 may reveal the character of a possible first order phase transition Stocker. It is expected that in very peripheral collisions the bouncing-off motion dominates over the entire rapidity range, and protons at midrapidity flow in the same direction as spectators. In mid-central collisions, if there is a phase transition, the proton v_1 slope at midrapidity may change sign and become negative. Eventually the slope diminishes in central collisions due to the symmetry of the collisions.
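(For a symmetric collision system, directed flow is constrained by the rapidity-odd symmetry; in the notation of Eq. (1) with n = 1:)

```latex
v_1(y) = \langle \cos(\phi - \psi) \rangle, \qquad v_1(-y) = -v_1(y)
```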
At low energies, the E895 collaboration has shown that K0S has a negative v_1 slope around midrapidity e895KShort, while Λ and protons have positive slopes e895Lambda. This is explained by a repulsive kaon-nucleon potential and an attractive Λ-nucleon potential. The NA49 collaboration na49 has measured v_1 for pions and protons, and a negative pion slope is observed with the standard event plane method. The three-particle correlation method v1Cumu, which is believed to be less sensitive to non-flow effects, gives a negative slope too, but with a larger statistical error. The non-flow effects are correlations among particles that are not related to the reaction plane, including the quantum Hanbury Brown-Twiss correlation HBT, resonance decays decay and so on. At top RHIC energies, v_1 has been studied mostly for charged particles by both the STAR and the PHOBOS collaborations starV1V4; phobosV1; star62GeV; starV1PRL. It is found that v_1 in the forward region follows the limiting fragmentation hypothesis Limit, and v_1 as a function of pseudorapidity (η) depends only on the incident energy, but not on the size of the colliding system at a given centrality. Such system size independence of v_1 can be explained by the hydrodynamic calculation with a tilted initial condition Bozek. The systematic study of v_1 for identified particles at RHIC did not begin until recently because it is more challenging for two reasons: 1) v_1 for some identified particles (for example, protons) is much smaller than that of all charged particles, 2) more statistics are needed to determine v_1 for identified particles other than pions.
54 million events from Au + Au collisions at √sNN = 200 GeV have been used in this study, all taken by a minimum-bias trigger with the STAR detector during RHIC’s seventh run in year 2007. The main trigger detector used is the Vertex Position Detector (VPD) VPD. The centrality definition of an event was based on the number of charged tracks in the Time Projection Chamber (TPC) TPC with track quality cuts: |η| < 0.5, a Distance of Closest Approach (DCA) to the vertex less than 3 cm, and 10 or more fit points. In the analysis, events are required to have the vertex z within 30 cm from the center of the TPC, and additional weight is assigned to each event in the analysis accounting for the non-uniform VPD trigger efficiency in the vertex z direction for different centrality classes. The event plane angle is determined from the sideward deflection of spectator neutrons measured by STAR’s Shower Maximum Detector inside the Zero Degree Calorimeters (ZDC-SMDs). Such sideward deflection of spectator neutrons is expected to happen in the reaction plane rather than the participant plane, since the ZDC-SMDs are located close to beam rapidity. Being 6 units in η away from midrapidity, ZDC-SMDs also allow a measurement of v_1 with minimal contribution from non-flow correlations. The description of measuring v_1 using the ZDC-SMDs event plane can be found in star62GeV. Particle Identification (PID) of charged particles is achieved by measuring ionization energy loss (dE/dx) inside STAR’s TPC, together with the measurement of the momentum (p) via TPC tracking. Track quality cuts are the same as used in starFlowPRC. In addition, the transverse momentum for protons is required to be larger than 400 MeV/c, and the DCA is required to be less than 1 cm in order to avoid including background protons which are from knockout/nuclear interactions of pions with inner detector material. The same cuts are applied to antiprotons as well to ensure a fair comparison with protons. The high end of the pT cut is 1 GeV/c, where protons and pions have the same energy loss in the TPC and thus become indistinguishable. For pions and kaons, the pT range is 0.15 - 0.75 GeV/c and 0.2 - 0.6 GeV/c, respectively. K0S are topologically reconstructed from their charged daughter tracks inside the TPC KsReconstruction.
Results presented in the following figures contain only statistical errors. Results for pions, protons and antiprotons are not corrected for the feeddown from weak decay particles. The major systematic error in determining the slope of for identified particles is from the particle misidentification, which was evaluated by varying the dE/dx cut. Another systematic error comes from the non-uniform acceptance, as is obtained by integrating over the acceptance which itself depends on the rapidity. This effect is non-negligible for protons and antiprotons at large rapidity. It is estimated by taking the difference between slopes fitted with points integrated with acceptance at midrapidity and at large rapidity. In addition, some of the observed protons have originated from interactions between the produced particles and the detector material, and such effect has also been taken into consideration. The total systematic uncertainty is obtained by adding uncertainties mentioned above in quadrature. There are also common systematic errors that should be applied to all particles: the uncertainty due to the first order event plane determination, which was estimated to be % (relative error) star62GeV , and the uncertainty due to centrality selection, which was estimated to be % (relative error) by comparing our charged slope to that from the RHIC run in 2004. Other systematic errors have been evaluated to be negligible.
In Fig. 1, v_1 of π±, K±, K0S, p and p̄ are presented for centrality 10-70%. Following convention, the sign of spectator v_1 in the forward region is chosen to be positive, to which the measured sign of v_1 for particles of interest is only relative. Fitting with a linear function, the slopes are for the protons, for the antiprotons, for the pions, for the kaons and for the K0S. The relative common systematic error for all particles is not listed here. The slopes for the produced particle types (π±, K±, K0S and p̄) are mostly found to be negative at mid-rapidity, which is consistent with the anti-flow picture. In particular, kaons are less sensitive to shadowing effects due to the small kaon-nucleon cross section, yet they show a negative slope. This is again consistent with the anti-flow picture. Interestingly, v_1 for protons exhibits a clearly flatter shape than that for antiprotons. While mass may contribute to the difference in slope between pions and protons/antiprotons, it cannot explain the difference in slope observed for antiprotons and protons. Indeed, the observed v_1 for protons is a convolution of the directed flow of produced protons with that of transported protons (from the original projectile and target nuclei), so the flatness of the inclusive proton v_1 around midrapidity could be explained by the negative flow of produced protons being compensated by the positive flow of protons transported from spectator rapidity, a feature expected in the anti-flow picture.
In Fig. 2, pion and proton v_1 are plotted together with five model calculations, namely, RQMD wiggleRQMD, UrQMD UrQMD, AMPT AMPT, QGSM with parton recombination QGSM, and slopes from an ideal hydrodynamic calculation with a tilted source Bozek. The model calculations are performed in the same acceptance and centrality as the data. The RQMD and AMPT model calculations predict the wrong sign and wrong magnitude of pion v_1, respectively, while the RQMD and the UrQMD predict the wrong magnitude of proton v_1. For models other than QGSM, which has the calculation only for pions, none of them can describe v_1 for pions and protons simultaneously.
In Fig. 3, the slope of v_1 at midrapidity is presented as a function of centrality for protons, antiprotons, and charged pions. In general, the magnitude of the slope converges to zero as expected for most central collisions. Proton and antiproton slopes are more or less consistent in the 30-80% centrality range but diverge in 5-30% centrality. In addition, two observations are noteworthy: i) the hydrodynamic model with a tilted source (which is a characteristic of anti-flow) as currently implemented does not predict the difference in v_1 between particle species PiotrPrivate. ii) If the difference between v_1 of protons and antiprotons is caused by anti-flow alone, then such a difference is expected to be accompanied by strongly negative slopes. In data, the large difference between proton and antiproton slopes is seen in the 5-30% centrality range, while strongly negative slopes are found for protons, antiprotons and charged pions in a different centrality range (30-80%). Both observations suggest that mechanisms additional to those assumed in Bozek; PiotrPrivate are needed to explain the centrality dependence of the difference between the slopes of protons and antiprotons.
The excitation function of the proton v_1 slope (dv_1/dy at midrapidity) is presented in Fig 4. Values for dv_1/dy are extracted via a polynomial fit, with the rapidity scaled so that the spectator (beam) rapidity is normalized at 1. The proton slope decreases rapidly with increasing energy, reaching zero around GeV. Its sign changes to negative, as shown by the data point at GeV, measured by the NA49 experiment na49. A similar trend has been observed at low energies with a slightly different quantity E877Slope; E895PRL. The energy dependence of the slope for protons is driven by two factors: i) the increase in the number of produced protons over transported protons with increasing energy, and ii) the v_1 of both produced and transported protons at different energies. The negative slope for protons around midrapidity at SPS energies cannot be explained by transport model calculations like UrQMD Zhu2006 and AMPT AMPT, but is predicted by hydro calculations antiFlow; thirdFlow. The present data indicate that the proton slope remains close to zero at √sNN = 200 GeV, as observed at GeV and GeV heavy ion collisions. Our measurement offers a unique check of the validity of a tilted expansion at RHIC’s top energy.
In summary, STAR’s measurements of the directed flow of pions, kaons, protons, and antiprotons for Au + Au collisions at √sNN = 200 GeV are presented. In the range of 10-70% central collisions, slopes of pions, kaons (K± and K0S), and antiprotons are found to be mostly negative at mid-rapidity. In 5-30% central collisions a sizable difference is present between the slopes of protons and antiprotons, with the former being consistent with zero within errors. Comparison to models (RQMD, UrQMD, AMPT, QGSM with parton recombination, and hydrodynamics with a tilted source) is made. Putting aside the QGSM model, which has the calculation only for pions, none of the other models explored can describe v_1 for pions and protons simultaneously. Mechanisms additional to those assumed in the hydrodynamic model with a tilted source Bozek; PiotrPrivate are needed to explain the centrality dependence of the difference between the slopes of protons and antiprotons. Our measurement indicates that the proton v_1 slope remains close to zero for Au + Au collisions at √sNN = 200 GeV. These new measurements of the particle species and centrality dependence of v_1 provide a check for the validity of a tilted expansion at RHIC’s top energy.
###### Acknowledgements.
We thank the RHIC Operations Group and RCF at BNL, the NERSC Center at LBNL and the Open Science Grid consortium for providing resources and support. This work was supported in part by the Offices of NP and HEP within the U.S. DOE Office of Science, the U.S. NSF, the Sloan Foundation, the DFG cluster of excellence ‘Origin and Structure of the Universe’of Germany, CNRS/IN2P3, FAPESP CNPq of Brazil, Ministry of Ed. and Sci. of the Russian Federation, NNSFC, CAS, MoST, and MoE of China, GA and MSMT of the Czech Republic, FOM and NWO of the Netherlands, DAE, DST, and CSIR of India, Polish Ministry of Sci. and Higher Ed., Korea Research Foundation, Ministry of Sci., Ed. and Sports of the Rep. Of Croatia, and RosAtom of Russia.
## References
• (1) BRAHMS, PHENIX, PHOBOS, and STAR Collaboration, Nucl. Phys. A 757 Issues 1-2 (2005).
• (2) S. Voloshin, A. Poskanzer and R. Snellings, Volume 23, In Relativistic Heavy Ion Physics, Published by Springer-Verlag. Edited by R. Stock. DOI: 10.1007/978-3-642-01539-7. arXiv:0809.2949
• (3) A. M. Poskanzer and S. A. Voloshin, Phys. Rev. C 58, 1671 (1998).
• (4) E. Schnedermann and U. Heinz, Phys. Rev. Lett. 69, 2908 (1992).
• (5) D. E. Kahana, D. Keane, Y. Pang, T. Schlagel and S. Wang, Phys. Rev. Lett. 74, 4404 (1995).
• (6) J. Barrette et al. (E877 Collaboration), Phys. Rev. Lett. 73, 2532 (1994).
• (7) I. G. Bearden et al. (NA44 Collaboration), Phys. Rev. Lett. 78, 2080 (1997).
• (8) J. Brachmann et al., Phys. Rev. C 61, 024909 (2000).
• (9) L. P. Csernai and D. Röhrich, Phys. Lett. B 458, 454 (1999).
• (10) H. Stöcker, Nucl. Phys. A 750, 121 (2005).
• (11) P. Bożek and I. Wyskiel, Phys. Rev. C 81, 054902 (2010).
• (12) R. J. M. Snellings, H. Sorge, S. A. Voloshin, F. Q. Wang and N. Xu, Phys. Rev. Lett 84, 2803 (2000).
• (13) P. Chung et al. (E895 Collaboration), Phys. Rev. Lett. 85, 940 (2000).
• (14) P. Chung et al. (E895 Collaboration), Phys. Rev. Lett. 86, 2533 (2001).
• (15) C. Alt et al. (NA49 Collaboration), Phys. Rev. C 68, 034903 (2003).
• (16) N. Borghini, P. M. Dinh and J.-Y. Ollitrault, Phys. Rev. C 66, 014905 (2002).
• (17) P. M. Dinh, N. Borghini and J.-Y. Ollitrault, Phys. Lett. B 477, 51 (2000).
• (18) N. Borghini, P. M. Dinh and J.-Y. Ollitrault, Phys. Rev. C 62, 034902 (2000).
• (19) J. Adams et al. (STAR Collaboration), Phys. Rev. Lett. 92, 062301 (2004).
• (20) B. B. Back et al. (PHOBOS Collaboration), Phys. Rev. Lett 97, 012301 (2006).
• (21) J. Adams et al. (STAR Collaboration), Phys. Rev. C 73, 034903 (2006).
• (22) B. I. Abelev et al. (STAR Collaboration), Phys. Rev. Lett. 101, 252301 (2008).
• (23) J. Benecke et al., Phys. Rev. 188, 2159 (1969).
• (24) W. J. Llope et al., Nucl. Instrum. Methods A 522, 252 (2004).
• (25) M. Anderson et al., Nucl. Instrum. Method. A 499, 659 (2003).
• (26) J. Adams et al. (STAR Collaboration), Phys. Rev. C 72, 014904 (2005).
• (27) C. Adler et al. (STAR Collaboration), Phys. Rev. Lett. 89, 132301 (2002); J. Adams et al. (STAR Collaboration), Phys. Rev. Lett. 92, 052302 (2004).
• (28) M. Bleicher and H. Stöcker, Phys. Lett. B 526, 309 (2002).
• (29) J. Y. Chen et al., Phys. Rev. C 81, 014904 (2010).
• (30) J. Bleibel, G. Burau, A. Faessler and C. Fuchs, Phys. Rev. C 76, 024912 (2007).
• (31) P. Bożek, private communication, 2010.
• (32) H. Liu et al. (E895 Collaboration), Phys. Rev. Lett. 84, 5488 (2000).
• (33) J. Barrette et al. (E877 Collaboration), Phys. Rev. C 56, 3254 (1997); Phys. Rev. C 55, 1420 (1997).
• (34) H. Petersen, Q. Li, X. Zhu and M. Bleicher, Phys. Rev. C 74, 064908 (2006).
https://planetmath.org/fundamentaltheoremoftranscendence | # fundamental theorem of transcendence
The tongue-in-cheek name given to the fact that if $n$ is a nonzero integer, then $|n|\geq 1$. This trick is used in many transcendental number theory proofs. In fact, the hardest step of many problems is showing that a particular integer is not zero.
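A schematic of how the trick is typically deployed (the auxiliary quantity and the two bounds are placeholders, not a specific proof):

```latex
% Skeleton of a transcendence argument built on |n| >= 1 (schematic).
Suppose, for contradiction, that the relation to be refuted holds.
From it, construct an auxiliary quantity $N$ and prove two things:
\begin{enumerate}
  \item $N$ is a \emph{nonzero integer} (usually the hardest step),
  \item $|N| < 1$ (an analytic estimate).
\end{enumerate}
Since $N$ is a nonzero integer, $|N| \ge 1$, contradicting (2);
hence the assumed relation is impossible.
```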
https://mathoverflow.net/questions/61794/the-diophantine-eq-x4-y4-1-z2 | # The diophantine eq. $x^4 +y^4 +1=z^2$
This question is an exact duplicate of the question
Does the equation $x^4+y^4+1=z^2$ have a non-trivial solution?
posted by Tito Piezas III on math.stackexchange.com.
The background of this question is this: Fermat proved that the equation, $$x^4 +y^4=z^2$$
has no solution in the positive integers. If we consider the near-miss, $$x^4 +y^4-1=z^2$$
then this has plenty (in fact, an infinity, as it can be solved by a Pell equation). But J. Cullen, by exhaustive search, found that the other near-miss, $$x^4 +y^4 +1=z^2$$
has none with $0 < x,y < 10^6$ .
Does the third equation really have none at all, or are the solutions just enormous?
• The question was also posted in January at math.stackexchange.com/questions/16887/… – Tapio Rajala Apr 15 '11 at 9:25
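A brute-force check in the spirit of Cullen's search can be sketched as follows (an illustrative script, not Cullen's actual code; the search limit is kept small so it runs quickly):

```python
# Illustrative exhaustive search for x^4 + y^4 + 1 = z^2 (not Cullen's code).
# Uses exact integer square roots, so there are no floating-point issues.
from math import isqrt

LIMIT = 2000  # Cullen searched 0 < x, y < 10**6; keep this small here

for x in range(1, LIMIT):
    x4 = x ** 4
    for y in range(x, LIMIT):  # y >= x by symmetry
        n = x4 + y ** 4 + 1
        z = isqrt(n)
        if z * z == n:
            print("solution:", x, y, z)
```

Raising LIMIT toward 10**6 reproduces the search range quoted in the question, at a substantial runtime cost.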
http://mathhelpforum.com/algebra/97118-please-help-solve-simplify.html | and:
If:
x(x-2)=24, then either x=24, or x-24=0
2. Originally Posted by original501
this is a well known factorization of the sum of two cubes ... what are you supposed to do with it?
and:
If:
x(x-2)=24, then either x=24, or x-24=0
no ...
$x(x-2) = 24$
$x^2 - 2x - 24 = 0$
$(x - 6)(x + 4) = 0$
$x = 6$ , $x = -4$
...
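For completeness, the two roots can be checked mechanically (a minimal sympy one-off, not part of the original thread):

```python
# Verify the roots of x(x - 2) = 24 found above.
from sympy import symbols, solve

x = symbols("x")
print(solve(x * (x - 2) - 24, x))  # [-4, 6]
```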
https://www.nature.com/articles/s42005-019-0111-2?error=cookies_not_supported&code=a9bfda10-be1a-4842-8391-811a0d923b8c | ## Introduction
One of the recent milestone discoveries in condensed matter physics is the experimental realization of graphene1,2. Graphene is the ultimate two-dimensional (2D) material that consists of a single layer of carbon atoms with sp2 hybridization arranged in a honeycomb structure. One of the most remarkable properties of graphene is its electronic structure, which has a Dirac cone-like spectrum. The very high mobility of relativistic massless Dirac Fermions in the Dirac cone holds the promise for future technological applications. This accelerated the research on graphene and alternative graphene-like 2D materials. In this direction, a new set of materials has been predicted to be a direct analogy to graphene, for instance, if all carbon atoms in graphene are replaced by Si, Ge, or Sn, which results in silicene, germanene, and stanene, respectively3,4. Since all of these group-IV elements have a similar outer shell electronic configuration but different spin-orbit coupling strengths, they are expected to exhibit the typical characteristic properties of graphene with the addition of spin-dependent phenomena. Out of them, however, only graphene is found to be stable in the freestanding form, whereas all others must be grown on a substrate due to the more favorable sp3 hybridization. In reality, the formation of a stable graphene-like 2D structure on a given surface is subject to a delicate balance between various adsorbate-adsorbate and adsorbate-substrate interactions, and thus the associated characteristic properties vary widely. Nevertheless, due to the stronger spin-orbit coupling, these novel graphene-like 2D materials can exhibit additional new and intriguing properties, as for instance the quantum spin Hall (QSH) effect5. Unfortunately, however, available materials with such properties are until now based on complex alloys or multi-component materials with different sublattices, which are challenging to fabricate in terms of large-scale production.
In this regard, and with respect to the potentially easier to fabricate elemental materials stanene, germanene, and silicene, stanene seems the most promising graphene-like 2D material, as it is predicted to be not only a QSH insulator with nontrivial topological properties5,6,7, but also shows topological superconductivity8, enhanced thermoelectricity9, and valleytronics10. A first realization of a stanene-like overlayer has been reported recently11,12, with the topological insulator Bi2Te3 as the supporting substrate. In the search for alternative substrate materials, recent theoretical works predicted many possibilities6,11,13,14,15,16, and one among them is the Au(111) surface17. At the same time, Sn and Au are receiving renewed interest due to the discovery of an elemental topological insulator phase in α-Sn18,19,20, and the recent reinterpretation of the Shockley surface states on the Au(111) surface as a topologically derived surface state21. These aspects motivated us to investigate the formation of 2D Sn superstructures on an Au(111) crystal.
In this article, we provide experimental evidence for the formation of a highly intriguing 2D hexagonal Sn-dimer phase in the manifold of Sn/Au(111) surface reconstructions which exhibits electronic states with similarities to some of the expected properties of a graphene-like 2D material. This particular Sn/Au superstructure reveals a stretched hexagonal lattice with two Sn atoms per unit cell arranged in a dimer-like structure as deduced from our low energy electron diffraction (LEED) and scanning tunneling microscopy (STM) experiments. Interestingly, our spin- and angle-resolved photoemission spectroscopy (ARPES) studies show that the occupied band structure of this superstructure exhibits a linearly dispersing band around the $${\bar{\mathrm \Gamma }}$$ point of the surface Brillouin zone up to the Fermi-level with a positive slope for negative momenta and a negative slope for positive momenta. Both branches of this band reveal a spin-polarization similar in magnitude but opposite in sign, i.e., a spin-texture comparable to the one of a three-dimensional topological insulator. These intriguing properties are attributed to the lateral interaction between the Sn atoms in the dimer structure in conjunction with the hybridization between Sn and the Au orbitals at the Sn-Au interface. We thus argue that interface hybridization is not always detrimental, but can be actively used to design 2D interface layers with yet largely unexplored spin-dependent electronic properties.
## Results
### Lateral order of the Sn/Au(111) interface
We start with the growth behavior and the structure identification of the fabricated Sn superstructures on Au(111). Below 1 monolayer (ML) of Sn coverage, Sn shows a variety of superstructures on the Au(111) surface depending on the Sn coverage and the post-deposition annealing temperature. Those superstructures are summarized in the structural phase diagram in Fig. 1. The pristine Au(111) surface shows the well-known (22$$\times \sqrt 3$$) herringbone reconstruction (Fig. 1a). At 0.3 ML of Sn, the herringbone reconstruction vanishes and is replaced by a $$\left( {\sqrt 3 \times \sqrt 3 } \right)$$R30° superstructure, named the $$\sqrt 3$$ phase (Fig. 1b). Around 0.6 ML of Sn, a new superstructure appears (Fig. 1c)22, named the X phase (see Supplementary Figure 1). For higher Sn coverage (up to about 2 ML), the LEED pattern remains identical.
We now focus on the effect of post-deposition annealing on the X-phase obtained for 0.6 ML of Sn on Au(111). This phase exhibits an irreversible structural phase transition that is caused by a thermally driven desorption of Sn adatoms, which depends on the post-deposition annealing temperature, see Fig. 1c–f and Supplementary Movie 1. At room temperature, the X phase is stable, while annealing to about 450 K induces a gradual structural phase transition from the X phase to an intermediate phase. A typical LEED pattern of this phase corresponding to a double-honeycomb-like structure is shown in Fig. 1d. When the temperature is carefully increased further to 455 K, a stretched hexagon structure appears, named the $$\sqrt 7$$ phase (discussed below). Finally, when the temperature is raised beyond 470 K, a $$\sqrt 3$$ superstructure appears23. All these superstructures are stable at room temperature after post-deposition annealing.
Out of the manifold of superstructures observed for Sn/Au(111) shown in the structural phase diagram (Fig. 1), we find that the $$\sqrt 7$$ phase exhibits intriguing electronic properties far beyond the expectations of a conventional surface reconstruction. We will hence focus on this phase in the following discussion. Figure 1e shows the LEED pattern of the $$\sqrt 7$$ phase superimposed with the positions of simulated diffraction spots (colored dots, each color indicates the set of diffraction spots of one rotational domain, for additional information see Supplementary Figure 2). The best agreement between experiment and simulation was achieved for the structure with the superstructure matrix $$\left[ {\begin{array}{*{20}{c}} 2 & { - 1} \\ 3 & 1 \end{array}} \right]$$ (the $$\sqrt 7$$ phase, for short) with the 2D space group c2mm. Hence, the periodic arrangement of the Sn atoms, i.e., the size and shape of the superstructure unit cell, can be classified as a stretched hexagon (see Supplementary Figure 3) as will be discussed in the following.
In order to determine the position of the Sn atoms within the stretched hexagon lattice, we carried out STM measurements on the $$\sqrt 7$$ phase. The large scale STM image (300 nm × 300 nm) in Fig. 2a shows the morphology of the Sn overlayer film. We find smooth and island-free terraces that are separated by atomic steps with a uniform height of 2.3 ± 0.2 Å (Fig. 2b), which is comparable to the step height of Au(111). Hence, the Sn atoms form a homogeneous and atomically flat single layer of Sn on the Au(111) surface, i.e., the overlayer film is uniform over several hundreds of nanometers, even spanning over multiple terraces and step edges. In particular, the homogeneous growth over step edges underlines the possibility to grow this structure on even larger scales. The detailed atomic arrangement within the unit cell of the superstructure lattice can be obtained from the high-resolution STM image shown in Fig. 2c. The corresponding unit cell of the $$\sqrt 7$$ superstructure is marked in pink. In addition, we highlight the stretched hexagon that results from this unit cell in yellow. The atoms forming the unit cell are drawn as spheres on top of the STM data. We find five atoms per unit cell, two with bright contrast forming a dimer-like structure along the <110> direction of the Au(111) surface, and three atomic sites with dark contrast. We assign these bright atoms to Sn, in agreement with previous DFT calculations that predicted the formation of isolated dimers of Sn atoms on Au(111) as a stable configuration for this interface17. The dark spots (green circles in Fig. 2c) mark the position of the Au atoms underneath the Sn overlayer. The Sn atoms forming the dimer-like structure are separated by 2.8 Å along the <110> direction, which corresponds to the spacing also observed for Sn gas phase dimers (2.8 Å)17. In addition, this value is precisely the surface lattice spacing between two equivalent Au(111) adsorption sites and hence indicates that both dimer atoms occupy identical adsorption sites on the Au surface. This is further supported by a line-scan profile along the <110> direction (Fig. 2d) that additionally reveals a negligible buckling of the dimer adatoms. Based on our structural analysis, we can thus conclude that the $$\sqrt 7$$ superstructure shows a stretched hexagon lattice structure with a dimer-like arrangement of two atoms per primitive cell. Based on our experimental findings, we propose a structural model for the $$\sqrt 7$$ phase which is shown in Figs 2e, f. The adsorption sites of both Sn atoms could not be determined experimentally. However, in the light of the DFT results by Nigam et al.17, we propose that both Sn atoms adsorb on identical face centered cubic (fcc) hollow sites. In addition, our structural model is also fully in line with our LEED analysis which is confirmed by the Fast Fourier transform image shown in Fig. 2g (also see Supplementary Figure 2).
### Electronic band structure
We now turn to the question of how far this hexagonal dimer Sn structure on Au(111) shows similarities and differences to expected electronic features of graphene-like 2D materials. Figure 3a shows the ARPES intensity map measured along the $${\bar{\mathrm \Gamma }}$$-$${\bar{\mathrm K}}$$ high-symmetry direction of the Au(111) surface Brillouin zone. Note that the high surface symmetry (p3m1) of the Au(111) surface results in three rotational domains of the $$\sqrt 7$$ superstructure as shown in Supplementary Figure 2 and Supplementary Figure 4. Consequently, the experimental ARPES data recorded along the high-symmetry directions of the Au substrate contain contributions from all three domains. A zoomed-in view of the ARPES intensity map around the $${\bar{\mathrm \Gamma }}$$ point shown in Fig. 3b indicates the presence of a linearly dispersing band (marked by green lines) near the Fermi level with positive slope for negative momenta and negative slope for positive momenta, labeled as B1 (see Supplementary Figure 5 for a waterfall-plot of the same data). The crossing point of this band B1 is located at the Fermi level (ED = 0 ± 40 meV), as obtained by extrapolating the linear fits to the data shown in Supplementary Figure 6. From the linear dispersion of this band B1, we can determine the Fermi velocity vF of the electrons by using E = $$\hbar kv_{\mathrm{F}}$$. We obtain vF ≈ 1 × 10^6 m/s, the same value as for graphene24,25. This unexpectedly high Fermi velocity is even higher than the value predicted for a freestanding Sn layer (i.e., a freestanding stanene layer) by theory (4.4 × 10^5 m/s)5. In order to experimentally clarify the nature of this band, we carried out additional measurements. First, photon-energy dependent ARPES data recorded with 21.2 eV, 16.9 eV, and 5.9 eV yield identical dispersions for the band B1 (see Supplementary Figure 6), which confirms the 2D nature of the band. Second, the momentum-resolved constant binding energy maps shown in Fig. 3c–e indicate that the linearly dispersing band B1 exhibits a hexagon shaped emission pattern for large binding energies of 0.8 eV from the crossing point, i.e., the band B1 adopts the symmetry of the Au(111) substrate. When approaching the Fermi energy, the pattern transforms to a predominantly three-fold symmetric emission pattern. Third, we find that the band B1 possesses the same spectral intensity in whole momentum space (kx, ky directions), pointing to a conical-shaped band structure. Fourth, we carried out angle-resolved two-photon photoemission spectroscopy26,27 to access the unoccupied part of the conical band B1, see Fig. 4a. In the unoccupied part of the spectrum, we indeed find bands that possibly extend band B1. However, the dispersion of these bands deviates from the linear behavior as seen in the occupied part of the spectrum. Instead, we find a nearly parabolic dispersion with the band bottom around the Fermi-level within our experimental uncertainty, see Fig. 4b. Although a different dispersion of the same band below and above the crossing point seems to be surprising at first glance, comparable deviations have already been reported for various 2D materials and topological insulators in literature6,28,29,30,31.
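As an aside, the arithmetic behind a Fermi-velocity estimate of this kind is a one-line linear fit; the sketch below uses made-up (k, E) pairs in place of the measured B1 dispersion:

```python
# Fermi velocity from a linear band fit, E = hbar * k * v_F.
# The (k, E) pairs are made-up placeholders for the fitted B1 dispersion.
import numpy as np

hbar = 1.054571817e-34  # J s
eV = 1.602176634e-19    # J

k = np.array([0.02, 0.04, 0.06, 0.08]) * 1e10  # 0.02..0.08 1/Angstrom, in m^-1
E = np.array([0.13, 0.26, 0.40, 0.53]) * eV    # binding energies, in J

slope = np.polyfit(k, E, 1)[0]  # J m
v_F = slope / hbar
print(f"v_F = {v_F:.2e} m/s")   # ~1e6 m/s for these placeholder numbers
```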
It should be noted, however, that further investigations are required to unambiguously reveal the nature of the connection between the occupied band B1 and unoccupied band B1’, i.e., the present results do not show conclusively if B1 and B1’ reveal a crossing point at EF or not.
In addition to the central linearly dispersing band (B1), we find a sideband (labeled as B2 in Fig. 3b) that also disperses linearly. The band B2 is nearly parallel to B1 around the $${\bar{\mathrm \Gamma }}$$ point with uniform energy and momentum separation of ΔE ~0.6 eV and $$\Delta k_\parallel \sim$$ 0.12 Å^−1, respectively. However, in contrast to B1, B2 exhibits substantial intensity variations for different kx, ky directions, see Supplementary Figure 7. Also, an analysis of the unoccupied regime of the band structure neither reveals an extension of B2 nor a hole-like parabolic band close to EF connecting the occupied branches of B1 and B2. These observations are clear indications that B1 and B2 are not conventional (partially occupied) hole-like Rashba-split bands of the Sn/Au interface as for instance observed for the prototypical Rashba systems BiAg2 and BiCu232,33,34. Instead, we conclude that B2 has a different, most likely bulk origin (see Supplementary Note 1 and Supplementary Figure 4).
To further unravel the nature of the interesting surface band B1 with linear dispersion below the Fermi-level, we have performed spin-resolved ARPES measurements. The data were acquired at different emission angles which are marked in Fig. 5a (yellow lines). The recorded spin polarization corresponds to an in-plane vertical spin component (Py) that is parallel to the y-axis as shown in Figs 5b, c. The individual spin resolved photoemission spectra are shown in Figs 5d, e for both ±k directions. The experimentally obtained spin-polarization is shown in Fig. 5f. The spin polarization was calculated using $$P = \frac{1}{S}\,\frac{I_{\uparrow} - I_{\downarrow}}{I_{\uparrow} + I_{\downarrow}}$$, where $$I_{\uparrow,\downarrow}$$ are the ARPES intensities of two distinct spin polarization channels and S is the Sherman factor. The data shown in Fig. 5d–h are obtained by $$N_{\uparrow,\downarrow} = I_t (1 \pm P)$$, where $$I_t = I_{\uparrow} + I_{\downarrow}$$. The first point we note is that the band B1 is indeed spin polarized with significant magnitude. As seen from Fig. 5f, the relative sign of the spin polarization is opposite to one another with respect to the ±k momentum direction, while the magnitude is almost identical. In other words, band B1 shows negative spin-polarization for positive momenta (+k) and positive spin-polarization for negative momenta (−k). The sideband B2 also shows a distinct spin-polarization with very small magnitude but identical sign as the band B1, see Fig. 5i. As presented in Supplementary Figure 8, the lack of visible dispersion of the bands B1 and B2 in the spin-resolved ARPES data is a direct consequence of the lower angular resolution of the spin-resolved ARPES experiment. However, when the spin-polarization curves of all four energy distribution curve (EDC) cuts are overlaid as shown in Supplementary Figure 8, we see that the spin-polarization curves clearly resemble the dispersion of the band B1.
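The spin-polarization bookkeeping quoted here is straightforward to reproduce; in this sketch the intensities are placeholder counts, while S = 0.29 is the Sherman factor stated in the Methods:

```python
# P = (1/S) * (I_up - I_down) / (I_up + I_down);  N_up/down = I_t * (1 +/- P).
# Intensities are placeholder counts; S = 0.29 as stated in the Methods.
import numpy as np

S = 0.29
I_up = np.array([120.0, 140.0, 165.0])
I_down = np.array([100.0, 125.0, 160.0])

I_t = I_up + I_down
P = (I_up - I_down) / I_t / S
N_up, N_down = I_t * (1 + P), I_t * (1 - P)
print(P, N_up, N_down)
```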
We note that based on the presented data, we can only focus on the relative sign of the spin polarization of both bands B1 and B2. A more in-depth discussion of the absolute magnitude and sign of the spin-polarization of both bands would be too speculative due to the unknown contribution of final-state effects to the experimentally determined spin-resolved photoemission data. Nevertheless, spin textures in non-magnetic materials are typically induced by either Rashba-type spin-orbit splitting26,35, as for instance in Au(111)36,37, or by a topological insulator phase, such as Bi2Se338 or α-Sn19,20. In case of a conventional Rashba splitting, however, the bands typically exhibit two parabolas with relatively opposite sign of spin polarization with respect to ±k|| direction that are separated in momentum space by a constant k0. This means that in the simplest case, one would expect four bands in total with alternating sign of their spin polarization, for instance up-down-up-down. Although this is in clear contrast to our experimental findings, it is not a conclusive proof of the absence of Rashba-type surface states at the Sn/Au interface.
Most remarkable, the spin polarization of band B1 resembles the spin-texture of a typical topological insulator phase, such as Bi2Se3. This seems to be surprising at first since the metallic substrate Au does not show a real band gap around EF, but only a projected band gap. However, anti-symmetric spin polarizations of linear (Dirac-like) electronic states have already been reported for metallic surfaces and have been discussed in a very similar manner as topological insulators39,40.
## Discussion
Our extensive experimental study reveals interesting similarities between the Sn $$\sqrt 7$$-superstructure of the Sn/Au(111) interface and elemental 2D graphene-like materials. For instance, the formation of a (stretched) honeycomb structure with a two-atomic basis is a common structural fingerprint for the formation of a 2D graphene-like structure and has also been observed for the first experimental realization of the 2D Sn analog (stanene) on the topological insulator Bi2Te311. Similarly, the electronic band structure reveals certain signatures that are expected for a 2D Dirac-material with strong spin-orbit coupling or a 3D topological insulator. The $$\sqrt 7$$-phase shows a linearly dispersing band below the Fermi level with extremely high Fermi velocity and an alternating spin polarization. We attribute these intriguing findings to a strong lateral (chemical) bonding between the nearest-neighbor Sn atoms within the Sn superstructure. The existence of such a strong in-plane interaction is also reflected in the structural properties of the Sn overlayer, in particular in the inhomogeneous distribution of the Sn atoms in the unit cell caused by the clustering of Sn atoms into Sn dimers.
However, there are also several features of the $$\sqrt 7$$-phase that are decidedly different in comparison to the expectations for a 2D stanene sheet on a surface. First, the most surprising structural observation is the planar geometry of the $$\sqrt 7$$ superstructure on Au(111), which is in clear contrast to the predicted vertical zig-zag structure of freestanding stanene5 or the vertical buckling of the Sn atoms on Bi2Te311. Second, the dispersion of the linear band B1 changes significantly when crossing the Fermi energy from the occupied to the unoccupied part of the band structure. In addition, B1 appears at the $${\bar{\mathrm \Gamma }}$$ point rather than at the $${\bar{\mathrm K}}$$ point as expected for typical graphene analogs. We hence suspect that both observations are mediated by the interaction across the Sn-Au interface. In fact, it has been theoretically predicted that a certain interaction between Sn and the substrate is intrinsically required to stabilize the Sn superstructure on a metal surface17. The planar structure of the $$\sqrt 7$$ phase might hence be stabilized by the saturation of the Sn π-orbitals by bonding to the Au substrate. This scenario closely corresponds to the chemical functionalization of stanene by halogens as discussed in Xu et al.5. Similarly, a strong hybridization across the Sn/Au interface naturally causes a strong alteration of the electronic properties of the Sn layer. For example, and similarly to the recently proposed chemical functionalization of stanene5, the hybridization can lead to a gap opening of Dirac bands at the $${\bar{\mathrm K}}$$ point, and the simultaneous appearance of a Dirac-like feature at the $${\bar{\mathrm \Gamma }}$$ point, as found here.
Altogether, our experimental findings strongly suggest that the extraordinary properties of the $$\sqrt 7$$ phase, i.e., the stretched honeycomb-like structure, the linearly dispersing band below the Fermi level with extremely high Fermi velocity and an alternating spin polarization, are determined by a delicate balance between the interaction of the Sn overlayer with the Au substrate as well as by the intrinsic properties and interatomic interactions of the atomically thin Sn film.
In conclusion, our experimental observation lays the foundation to fabricate and investigate a new 2D allotrope of Sn on the Au(111) surface with exotic spin-dependent electronic properties. The band structure of this superstructure is dominated by a linearly dispersing band centered at the $${\bar{\mathrm \Gamma }}$$ point with an exceptionally high Fermi velocity of vF ≈ 1 × 10^6 m/s and the spin-texture of a three-dimensional topological insulator. These exceptional properties of the 2D Sn/Au interface are the result of a hybridization of electronic states of the Sn orbitals with bands of the Au(111) surface as well as of a direct chemical interaction between neighboring Sn atoms in the overlayer leading to the formation of a stretched honeycomb structure of Sn dimers.
We propose that the introduced Sn/Au interface system is a highly interesting case study that proves the tunability of atomically thin 2D materials by adsorption on metallic surfaces41, in line with recent progress in the exploitation of interfacial effects with carbon-based overlayers in the field of molecular spintronics42. To take advantage of the exceptional electronic properties of this novel 2D allotrope of Sn on Au(111) in a device structure, we envision severely reducing the thickness of the Au substrate down to a few or even a single monolayer. In this way, quantum-confinement26,27,43,44,45 can be used to tune and reduce the available Au electronic states at the Fermi-surface, which might lead to a scenario where the dominant charge and spin transport is mediated by electronic states of the Sn-based 2D superstructure.
## Methods
### Spin- and angle-resolved photoemission measurements
The experiments were conducted in multiple ultra-high vacuum chambers with base pressure better than 1 × 10^−10 mbar. The angle-resolved photoemission spectroscopy (ARPES) measurement was carried out using a monochromatized non-polarized ultraviolet light source (21.22 eV) and a 2D hemispherical electron energy analyzer. The spin-resolved photoemission measurements were performed using a combination of a 2D hemispherical electron energy analyzer for energy-filtering and a spin detector based on very low energy electron diffraction (VLEED) from an oxygen-passivated epitaxial Fe film on W(100). The spin polarization was calculated by using the calculated Sherman factor of the used VLEED detector (S = 0.29). The energy and angular resolution were set to better than 30 meV and 0.3°, respectively. In order to construct the k||x–k||y ARPES map, the data were acquired by changing the azimuth orientation of the sample from 0° to 180° in steps of 1°, while keeping the polar angle constant. Thereafter, the constant energy k||x–k||y map was extracted from the 3D data cube by selecting a particular binding energy slice of the data cube and averaging the intensity over an energy window of ±25 meV around the central binding energy. The total energy (angular) resolutions for ARPES and spin-resolved ARPES measurements are better than 100 meV (0.3°) and 250 meV (±1.5°), respectively. All experiments were conducted at room temperature.
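A minimal sketch of the constant-energy-map extraction described above (the data cube and axes are stand-ins for the measured 3D data set):

```python
# Constant-energy cut from a 3D ARPES cube I(E, kx, ky), averaged over
# +/-25 meV around the chosen binding energy. The cube is placeholder data.
import numpy as np

E_axis = np.linspace(-2.0, 0.1, 211)          # binding energy [eV]
cube = np.random.rand(E_axis.size, 128, 128)  # stand-in for measured data

def constant_energy_map(cube, E_axis, E0, half_window=0.025):
    sel = np.abs(E_axis - E0) <= half_window
    return cube[sel].mean(axis=0)

kmap = constant_energy_map(cube, E_axis, E0=-0.8)
print(kmap.shape)  # (128, 128)
```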
### Angle-resolved two photon photoemission measurement
The angle-resolved two photon photoemission (AR-2PPE) experiment was carried out at room temperature using a photon energy of 4.8 eV generated from a frequency-quadrupled fiber laser system. The measurement was carried out with 6.5 V sample bias, so that electrons with very low kinetic energy of <1 eV can still be measured. The distortions of the photoemission distribution introduced by the sample bias were corrected with the model described in Hengsberger et al.46.
### Scanning tunneling microscopy experiments
All scanning tunneling microscopy (STM) data were acquired at room temperature using the constant current mode. The STM images presented in this work were processed with the Nanotec Electronica WSxM47 and the Gwyddion software48.
### Sample preparation
The Au(111) sample was cleaned by cycles of Ar-ion sputtering, followed by sample annealing at around 800 K. The cleanliness of the surface was confirmed by the presence of a sharp (22$$\times \sqrt 3$$) low energy electron diffraction (LEED) pattern of the herringbone reconstruction and the clearly resolved spin-split surface state in angle-resolved photoemission spectroscopy (ARPES). High purity Sn (99.9999%) was evaporated from a homemade water-cooled Knudsen type thermal evaporator, using a pyrolytic boron nitride (PBN) crucible at about 1150 K. During the deposition, the chamber pressure always stayed below 2–5 × 10^−10 mbar. The Sn coverage was calibrated by defining that 0.3 monolayer (ML) of Sn is necessary to form a $$\left( {\sqrt 3 \times \sqrt 3 } \right)$$R30° structure of Sn/Au(111). The Sn coverage for subsequent depositions was interpolated with the assumption of a coverage-independent sticking coefficient for Sn on Au(111). All Sn-depositions and measurements were carried out at room temperature.
### Code availability
All relevant code used for data analysis is available upon reasonable request from the authors.
https://quant.stackexchange.com/questions/36937/in-curve-building-how-to-calculate-interest-rate-discount-factor-for-period-b | # In curve building: How to calculate interest rate (discount factor) for period before first known effective date
I am building a curve using par swap rates. For example, I have the following two semi-annual swaps as input:
| Duration | Start       | End         | Rate  |
|----------|-------------|-------------|-------|
| 1 year   | 14-Nov-2011 | 14-Nov-2012 | 0.58% |
| 2 year   | 14-Nov-2011 | 14-Nov-2013 | 0.60% |
and I want to build a curve for 10-Nov-2011. I don't know how to calculate the discount factor for 14-Nov-2011, since I don't know how to choose a rate for the period from 10-Nov-2011 until 14-Nov-2011.
Does anyone know how to find the discount factor for 14-Nov-2011?
Additional info: For the previous input, the curve looks like this:
| Date      | Discount factor        |
|-----------|------------------------|
| 10-Nov-11 | 1                      |
| 14-Nov-11 | 0.999935743789455 ???  |
| 14-May-12 | 0.997012282219702      |
| 14-Nov-12 | 0.99406543047691       |
| 15-Nov-12 | 0.993981821851122      |
| 15-May-13 | 0.990959091324625      |
| 15-Nov-13 | 0.987828512874748      |
generated with parameters:
• Accrual method: Actual/360.
• Interpolation to use during bootstrapping: Linear from spot rates.
• Swap bootstrapping method: Linear spot rates.
In my calculation, if we choose the rate 0.58%, then the discount factor for 14-Nov-11 would be: $$\textrm{discount factor} = \frac{1}{1+rate\times accrual} = \frac{1}{1+\frac{0.58}{100} \times \frac{4}{360}} = 0.99993555970837$$
which is not the correct value.
Additionally, when I try to reproduce the rate which is used to build the curve I already have, I get: $$rate = \frac{1-\textrm{discount factor}}{\textrm{discount factor} \times accrual} = \frac{1-0.999935743789455}{0.999935743789455 \times \frac{4}{360}}= 0.57834\%$$
but it is unclear to me how I can get this rate from the input data.
Your valuation date is $t=$ Thu 10-Nov-11. The swaps start on the spot date which is $t + 2$ business days = Mon 14-Nov-11. The usual approach is to extrapolate between $t$ and the first curve pillar, in a manner consistent with the interpolation method that you are using for representing your discount curve. For instance if you use linear interpolation of zero coupon rates then you might want to use linear extrapolation of zero coupon rates. Alternatively some systems use flat extrapolation, it won't make much of a difference for the short end of the curve.
Note that to get a richer curve you probably want to add short term instruments, such as weekly and monthly maturity swaps for the OIS discount curve, or FRAs or futures for the Libor projection curves.
A quick note on bootstrapping: when bootstrapping you are making some interpolation assumptions (because you need discount factors for the swaps' semi-annual cash flows).
The common approach is:
1. sort your N instruments by increasing maturity.
2. transform the instruments' maturities into N time pillars.
3. choose a curve interpolation/extrapolation method, so that you can view you curve as depending on N parameters (e.g. N zero coupon rates if you choose to interpolate zero coupon rates).
4. View your bootstrapping problem as finding N parameters to match N prices.
This looks like an N dimensional problem, but as long as your interpolation is such that the curve up to maturity T does not depend on pillars > T (i.e., linear interpolation, which is local, is fine, but splines, which are global, are not), then the N dimensional problem reduces to a sequence of N one dimensional problems which are easily solved.
The advantage of this approach is that you can use any mix of instruments into your bootstrapping.
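A minimal sketch of this sequential one-dimensional bootstrap, with deliberately simplified instruments (annual fixed legs, continuous compounding, linear interpolation of zero rates); this is illustrative, not production pricing code:

```python
# Sequential 1D bootstrap sketch: one zero rate per pillar, solved in
# maturity order with a scalar root finder. Pricing is heavily simplified
# (annual fixed legs, continuous compounding, no day-count conventions).
import numpy as np
from scipy.optimize import brentq

pillars = [1.0, 2.0]          # pillar maturities in years
par_rates = [0.0058, 0.0060]  # par swap rates for those maturities
zeros = []                    # zero rates solved so far

def df(t, zs):
    # linear interpolation of zero rates over the solved pillars,
    # with numpy's flat extrapolation at the ends
    z = np.interp(t, pillars[: len(zs)], zs)
    return np.exp(-z * t)

for T, s in zip(pillars, par_rates):
    def mismatch(z):
        trial = zeros + [z]
        pay_times = np.arange(1.0, T + 0.5)        # annual payment dates
        annuity = sum(df(t, trial) for t in pay_times)
        return s * annuity - (1.0 - df(T, trial))  # par swap condition
    zeros.append(brentq(mismatch, -0.1, 0.5))

print(list(zip(pillars, zeros)))
```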
• Tnx for answer. If I understand you correctly, you said that I should take 0.58% as annual rate, and then interpolate this rate (e.g. linearly) for the period of (1year-2days)? So calculation should be (1-2/360)*0.58=0.5767 which is not the correct one. – Dejan Nov 17 '17 at 15:50
• "The usual approach is to extrapolate between t and the first curve pillar, in a manner consistent with the interpolation method that you are using for representing your discount curve." Can you describe this part better, maybe using an example? – Dejan Nov 17 '17 at 15:53
• It is a bit more involved than that. I have edited my post to describe how bootstrapping can be done in a very generic fashion. – Antoine Conze Nov 17 '17 at 16:08
• With this procedure I can find correctly all curve values (except for 14-Nov-11). I use linear interpolation for zero coupon rates and make interpolation between known rates. It works well in the middle of the curve. But for the first period I do not know what rates I should interpolate to get the rate for the period 10-Nov-2011 -> 14-Nov-2011. – Dejan Nov 17 '17 at 16:46
• Hello @AntoineConze, I have a similar issue where I'm trying to replicate a Murex curve but I get different results. Can you please check this post: quant.stackexchange.com/questions/50358/… I can't figure out where I made a mistake. – Gogo78 Jan 5 at 16:22
If we want to find the rate before the first known swap (or cash, future, etc.), we need to do the following:
1. Sort inputs by termination dates and choose the rate from the first one. In my case it is 0.58%. So, let us denote $\textrm{firstIntervalRate}=0.58\%$.
2. Rate for the period from the valuation date until the first start (effective) date should be calculated using the following formula \begin{align} r &= \left ( (1+\textrm{firstIntervalRate})^{accrual}-1\right ) \times \frac{1}{accrual}\\ &=\left ( \left (1+ \frac{0.58}{100}\right )^{\frac{4}{360}}-1 \right ) \times \frac{360}{4}= 0.57834\%.\end{align}
Derivation:
If we have $N$ payments with the given annual rate (in my case it is $\textrm{firstIntervalRate}$), then the rate for each period $r_p$ would satisfy
$$\textrm{firstIntervalRate} = \prod_{n=1}^{N}(1+r_p)-1= (1+r_p)^N -1,$$ so the rate for the period of $1/N$ of one year would be $$r_p=(1+\textrm{firstIntervalRate})^{1/N}-1.$$ If we want a rate for an arbitrary number of days, we need to replace $1/N$ in the previous formula with the actual $accrual$ factor. Now, we have the rate $r_p$ for some number of days (in my case it is 4 days) and the next step is to annualize it by multiplying by the number of payments per year (or, in the general case, by $1/accrual$).
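A quick numerical check of these formulas (plain Python, reproducing the numbers from the question):

```python
# Reproduce the answer's numbers for the 4-day (Act/360) stub.
first_interval_rate = 0.0058  # 0.58% from the 1-year swap
accrual = 4.0 / 360.0

r = ((1.0 + first_interval_rate) ** accrual - 1.0) / accrual
df_14nov = 1.0 / (1.0 + r * accrual)

print(f"{r:.5%}")          # 0.57834%
print(f"{df_14nov:.15f}")  # ~0.999935743789455, the value from the curve
```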
https://dsp.stackexchange.com/questions/45645/what-is-the-difference-equation-or-system-function-of-this-system | # What is the difference equation or system function of this system?
I am having trouble figuring out what the difference equation or the system function of this system is. Here "R" represents the unit delay. The fact that the delay is not part of the feedback loop is confusing me.
Any suggestions will be much appreciated.
The signal after the summer is $e[n] = x[n]+y[n]$. In the feed-forward line a delay of 3 units is applied. Thus we have $$y[n] = \mathcal{R}(\mathcal{R}(\mathcal{R} (e[n]))) = x[n-3]+ y[n-3].$$ The equivalent transfer function is: $$H(z) = \frac{Y(z)}{X(z)} = \frac{z^{-3}}{1-z^{-3}}$$
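A quick numerical cross-check of this answer (scipy's lfilter, with the b and a coefficients read off from H(z)):

```python
# Check y[n] = x[n-3] + y[n-3] against H(z) = z^-3 / (1 - z^-3).
import numpy as np
from scipy.signal import lfilter

b = [0, 0, 0, 1]   # numerator:   z^-3
a = [1, 0, 0, -1]  # denominator: 1 - z^-3
x = np.zeros(12)
x[0] = 1.0         # unit impulse

print(lfilter(b, a, x))  # impulse response: ones at n = 3, 6, 9
```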
http://etheses.bham.ac.uk/1612/ | eTheses Repository
# Z + $$\gamma$$ differential cross section measurements and the digital timing calibration of the level-1 calorimeter trigger cluster processor system in ATLAS
Lilley, Joseph Neil (2011)
Ph.D. thesis, University of Birmingham.
## Abstract
This thesis investigates the reconstruction of $$Z(\to ee)\gamma$$ events with the ATLAS detector at the LHC. The capabilities of the detector are explored for the initial run scenario with a proton-proton centre of mass collision energy of $$\sqrt{s} = 7$$ TeV, and an integrated luminosity of $$\mathcal{L} = 1$$ fb$$^{-1}$$. Monte Carlo simulations are used to predict the expected precision of a differential cross-section measurement for initial state radiation $$Z + \gamma$$ events, both with respect to the transverse momentum of the photon, $$p_T(\gamma)$$, and the three body $$ee\gamma$$ invariant mass. A bin-by-bin correction is used to account for the signal selection efficiency and purity, and to correct the measured (simulated) distribution back to the theoretical prediction. The main backgrounds are found to be from the final state radiation $$Z + \gamma$$ process, and from jets faking photons in $$Z \to ee$$ events. The possible QCD multijet background is studied using a fake-rate method, and found to be negligible for the particle identification cuts used in the analysis. The main systematic uncertainties on the differential cross-section measurements are explored with Monte Carlo simulations, and found to be of a similar scale to the statistical errors for the chosen distribution binning. The three body $$ee\gamma$$ invariant mass distribution was then used as the basis of an exclusion study on new particles decaying to the $$Z(ee)\gamma$$ final state. Under the assumption that the measured data agrees with the Standard Model prediction, exclusion limits were placed at 95% confidence level on the cross-section times branching ratio for a new scalar (modelled by the SM Higgs process) or vector (based on a low-scale technicolor process) particle hypothesis, for particles in the mass range 200 to 900 GeV. Limits of the order $$\mathcal{O}(0.01)$$ - $$\mathcal{O}(0.1)$$ pb on the cross section times branching ratios are predicted, which would improve on the equivalent limits previously calculated by the DØ experiment at the Tevatron collider, albeit in a different $$\sqrt{s}$$ region, where cross-sections will generally be higher for new massive particles. In addition to the $$Z\gamma$$ measurements, a digital timing calibration procedure was developed for the Cluster Processor (CP) subsystem of the level-1 calorimeter trigger. This work was essential to providing a repeatable and robust mechanism for timing in the digital processing in the CP system, a necessary ingredient for a robust and reliable trigger system; a pre-requisite of any physics analysis. This calibration procedure is described here.
http://hearmylullaby.deviantart.com/art/Uchiha-Izuna-Before-the-storm-400842021 | # Uchiha Izuna - Before the stormby hearmylullaby
❝ I am here. I am here to protect you, brother. Even if it will cost my life. I am here. ❞
● as Uchiha Izuna (Naruto / Naruto Shippuden)
● version. Flashback
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
MORE PHOTOS
● instagram. instagram.com/missnana.cosplay
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
- Reposting or using any of my photography is NOT allowed.
Comment (Sep 18, 2013): amazing *-*
Comment (Sep 18, 2013): Perfect <3 :3
### Details
Submitted on: September 17, 2013
Image Size: 710 KB
Resolution: 712×1068
### Camera Data
Make: Canon
Model: Canon EOS 1100D
Shutter Speed: 1/12 second
Aperture: F/5.6
Focal Length: 55 mm
ISO Speed: 100
Date Taken: Sep 13, 2013, 5:38:19 PM
Software: Adobe Photoshop CS5 Windows
Sensor Size: 3 mm
https://dml.cz/handle/10338.dmlcz/107682 | # Article
Keywords:
inner product space; norm derivative $\rho ^{\prime }_{\pm }$; heights
Summary:
Generalizing a property of isosceles trapezoids in the real plane to real normed spaces, a couple of characterizations of inner product spaces (i.p.s.) are obtained.
References:
[1] Alsina C., Guijarro P., Tomás M. S.: On heights in real normed spaces and characterizations of inner product structures. Jour. Int. Math. & Comp. Sci. Vol. 6, N. 2(1993), 151-159. MR 1239743 | Zbl 0816.46017
[2] Alsina C., Guijarro P., Tomás M. S.: A characterization of inner product spaces based on a property of height’s transform. Archiv der Mathematik Vol. 61(1993), 560-566. MR 1254068
[3] Alsina C., Cruells P., Tomás M. S.: Isosceles Trapezoids, Norms and Inner Products. Archiv der Mathematik (1999) MR 1671283 | Zbl 0928.46010
[4] Amir D.: Characterizations of inner product spaces. Birkhäuser Verlag (1986). MR 0897527 | Zbl 0617.46030
[5] Suzuki F.: A Certain Property of an Isosceles Trapezoid and its Application to Chain Circle Problems. Mathematics Magazine Vol. 8, N. 2, pp 136-145 (1995). MR 1333815 | Zbl 0877.51017
http://boris-belousov.net/2016/12/12/tensor-powers/ | # Tensor powers at the service of humanity
There is a standard convention in mathematics to denote the set of functions from set $X$ to set $Y$ by $Y^X$. This looks a lot like $\mathbb{R}^n$, with the only difference that $n$ is a natural number whereas $X$ is a set. The motivation of this post is to treat $n$ as a set to uncover some parallels between functions and vector spaces.
## $\mathbb{R}^n$ as a set of functions
Let $[n]$ denote a set of $n$ elements. One can think of it as the set $\{ 1, 2, \dots , n \}$. Be careful though, because $[n]$ is not a subset of $\mathbb{N}$—its elements are not natural numbers but some arbitrary objects, which we simply enumerated for convenience. For example, $[2] \cap [3] = \emptyset$ and $[2] \cup [3] = [5]$.
The key observation allowing us to unify notation is that we can view the vector space $\mathbb{R}^n$ as the set of functions from $[n]$ to $\mathbb{R}$,

$$\mathbb{R}^n \cong \mathbb{R}^{[n]} = \left\{ [n] \rightarrow \mathbb{R} \right\}.$$
Indeed, if $x \in \mathbb{R}^n$, you can query $x$ for its $k$-th coordinate, $x(k)$. That means $x$ can be viewed as a function $x : [n] \rightarrow \mathbb{R}$. The set of all such functions $\mathbb{R}^{[n]}$ is a vector space isomorphic to $\mathbb{R}^n$.
Let’s see this idea in action on several examples.
## Direct product
Consider the direct product of vector spaces $\mathbb{R}^{[2]} \times \mathbb{R}^{[3]}$. It is easy to see that

$$\mathbb{R}^{[2]} \times \mathbb{R}^{[3]} \cong \mathbb{R}^{[2] \cup [3]} = \mathbb{R}^{[5]}.$$
The exponential notation is very suggestive, as you may have noticed. It allows one to manipulate vector spaces using intuition from natural numbers.
## Tensor product
Consider the tensor product of vector spaces $\mathbb{R}^{[2]} \otimes \mathbb{R}^{[3]}$. It is easy to see that

$$\mathbb{R}^{[2]} \otimes \mathbb{R}^{[3]} \cong \mathbb{R}^{[2] \times [3]} = \mathbb{R}^{[6]}.$$

In the case of the tensor product, dimensionalities multiply. Interesting! We know that exponents multiply when one takes a power of a power (i.e., $(a^b)^c = a^{bc}$). Can we understand the tensor product in this way? It turns out we can. Check it out yourself.
$$\begin{equation*} \mathbb{R}^{[2]} \otimes \mathbb{R}^{[3]} \cong \left( \mathbb{R}^{[2]} \right)^{[3]} = \left\{ [3] \rightarrow \left\{ [2] \rightarrow \mathbb{R} \right\} \right\}. \end{equation*}$$
Note that parentheses are important, because $(2^3)^4 = 2^{12}$ whereas $2^{(3^4)} = 2^{81}$.
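The dimension bookkeeping can also be checked numerically; a small numpy sketch (concatenation standing in for the direct product, the Kronecker product for the tensor product):

```python
# Direct product adds dimensions; tensor product multiplies them.
import numpy as np

u = np.random.rand(2)  # element of R^[2]
v = np.random.rand(3)  # element of R^[3]

print(np.concatenate([u, v]).shape)  # (5,)  ~ R^[5]
print(np.kron(u, v).shape)           # (6,)  ~ R^[6]
```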
http://fgarciasanchez.es/thesisfelipe/node48.html | 4.7 Thermal switching mechanism in Fe particles
Thermal effects can also contribute to reduce the switching fields. The dynamic dependence of the coercivity is usually analyzed using the Sharrock law [Sharrock 90], which is derived from the Arrhenius-Néel law. The application of the Sharrock law requires the energy barrier dependence on the applied field. In this section we evaluate that dependence and study the thermal switching mechanism. We have calculated the energy barrier values using the Lagrange multiplier method discussed in Section 2.3.6, but in this case we have used the constraint $\langle m_x \rangle = \mathrm{const}$, where $\langle m_x \rangle$ is the average magnetization $x$ component. This constraint has the advantage of having the form of an applied field in the X direction, allowing one to obtain the energy barrier for an arbitrary applied field from the zero-field energy barrier value [Lu 07].
The saddle points as well as the energy barrier vary with the applied field. Therefore, in this section we will refer to zero field saddle points as, simply, saddle points. As in the field switching, the thermal switching mechanisms are also different for different particle widths. For the large aspect ratio particles the thermal switching proceeds through a domain wall, as in the saddle-point configuration shown in Fig. 4.16. The nucleation of the domain wall starts from the structures that are created by the magnetostatics in the particle ends (see Fig. 4.6). In the small aspect ratio particles the saddle-point configuration consists of two domains, which point in one of the local directions of the biaxial anisotropy, as shown in Fig. 4.17. The domain wall of the saddle point is not located at the center of the particle: a wall stabilized at the center of the particle is a shallow minimum of the energy rather than a saddle point. This minimum is not present if the particle has any imperfection. As in the hysteresis loops, the change of behavior occurs for a width ca. and is related to the length of the structures that minimize the magnetic charges. The zero field energy barrier as a function of the particle width is plotted in Fig. 4.18(b). The domain wall mechanism yields a linear dependence on the cross section. In our case there is a change in the slope due to the different effective anisotropy, this slope being larger for thinner particles due to their large shape anisotropy. Finally, for small aspect ratio the effective anisotropy does not depend on the particle width, yielding a linear dependence of the energy barrier.
Figure 4.18: Energy barriers: (a) as a function of the applied field for different particle sizes and (b) for zero applied field as a function of the particle width.
Figure: (a) Temperature dependence of the coercivity for the bcc-Fe ribbons of a given cross section (measured at the Instituto de Magnetismo Aplicado (IMA) by A. Martínez). (b) Comparison of experimental and simulation results for the temperature dependence of the coercivity, normalized to a common reference value.
In Fig. 4.18(a) the energy barrier values are shown as a function of the applied field value and the particle width. Several authors [Skomski 06] have found the applied field dependence of the energy barrier value to be:
$E_B(H) = E_0\left(1 - \frac{H}{H_0}\right)^{\alpha}$ (4.5)
where $E_0$ is the zero-field energy barrier value, $H_0$ the zero-temperature switching field, and $\alpha$ the scaling factor. As shown in Fig. 4.19, in the case of domain wall thermal switching our simulations fit this form well with a single scaling factor, which also appears in other systems such as the Stoner-Wohlfarth particle.
From the scaling of the energy barrier value and the Arrhenius-Néel law, M.P. Sharrock [Sharrock 90] obtained the following expression for the thermal dependence of the coercivity, due to thermal relaxation during the hysteresis process:
$H_c(T) = H_0\left[1 - \left(\frac{k_B T}{E_0}\ln\frac{f_0\,\tau}{\ln 2}\right)^{1/\alpha}\right]$ (4.6)
where $f_0$ is the attempt frequency and $\tau$ the time scale of the measurement. From Fig. 4.18(a), thermal activation on the timescale of a magnetometer measurement will only be appreciable in the case of large aspect ratio particles. From all these results we can calculate the thermal dependence of the coercivity by applying the Sharrock law for a fixed measurement time and attempt frequency. The results for one particle cross section are shown in Fig. 4.20. The dependence obtained from the simulations does not agree with the experiment, but it indicates the existence of appreciable thermal activation in the samples. However, in order to obtain the real dependence of the coercivity, the thermal dependence of the magnetization has to be taken into account in the simulation. This could be obtained from the experimental dependence of the magnetization shown in Fig. 4.2. Such a simulation using temperature-dependent parameters will be the subject of future work.
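A minimal numeric sketch of Eqs. (4.5)-(4.6) in Python; every parameter value below (barrier height, switching field, exponent, attempt frequency, measurement time) is an illustrative assumption, not a thesis value:

```python
import numpy as np

kB = 1.380649e-23            # J/K
E0 = 50 * kB * 300           # assumed zero-field barrier, ~50 k_B T at 300 K
H0 = 1.0                     # assumed zero-temperature switching field (arb. units)
alpha = 2.0                  # assumed barrier scaling exponent
f0, tau = 1e9, 100.0         # assumed attempt frequency (Hz) and measurement time (s)

def barrier(H):
    """Eq. (4.5): applied-field dependence of the energy barrier."""
    return E0 * (1.0 - H / H0) ** alpha

def coercivity(T):
    """Eq. (4.6), Sharrock's law; valid while the bracketed term stays below 1."""
    return H0 * (1.0 - (kB * T / E0 * np.log(f0 * tau / np.log(2))) ** (1.0 / alpha))

for T in (5, 77, 150, 300):
    print(f"T = {T:3d} K  Hc = {coercivity(T):.3f}")
```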
http://www.oalib.com/relative/3204703
Physics , 1998, DOI: 10.1016/S0920-5632(98)00510-6 Abstract: We discuss the decay of axion walls bounded by strings and present numerical simulations of the decay process. In these simulations, the decay happens immediately, in a time scale of order the light travel time, and the average energy of the radiated axions is $<\omega_a > \simeq 7 m_a$ for $v_a/m_a\simeq 500$. $<\omega_a>$ is found to increase approximately linearly with $\ln(v_a/m_a)$. Extrapolation of this behaviour yields $<\omega_a> \simeq 60 m_a$ in axion models of interest.
Physics , 2012, DOI: 10.1103/PhysRevD.86.089902 Abstract: We analyze the spectrum of axions radiated from collapse of domain walls, which have received less attention in the literature. The evolution of topological defects related to the axion models is investigated by performing field-theoretic lattice simulations. We simulate the whole process of evolution of the defects, including the formation of global strings, the formation of domain walls and the annihilation of the defects due to the tension of walls. The spectrum of radiated axions has a peak at the low frequency, which implies that axions produced by the collapse of domain walls are not highly relativistic. We revisit the relic abundance of cold dark matter axions and find that the contribution from the decay of defects can be comparable with the contribution from strings. This result leads to a more severe upper bound on the axion decay constant.
Raghavan Rangarajan Physics , 1994, DOI: 10.1016/0550-3213(95)00411-K Abstract: We calculate the dilution of the baryon-to-photon ratio by the decay of superstring axions. We find that the dilution is of the order of $10^7$. We review several models of baryogenesis and show that most of them can not tolerate such a large dilution. In particular, only one current model of electroweak baryogenesis possibly survives. The Affleck-Dine mechanism in SUSY GUTs is very robust and the dilution by axions could contribute to the dilution required in these models. Baryogenesis scenarios involving topological defects and black hole evaporation are also capable of producing a sufficiently large baryon asymmetry.
Gouranga C Nayak Physics , 2009, DOI: 10.1007/JHEP01(2011)039 Abstract: We study Higgs production and decay from TeV scale string balls in $pp$ collisions at $\sqrt{s}$ = 14 TeV at LHC. We present the results of total cross section of diphotons, invariant mass distribution of the diphotons and $p_T$ distribution of the diphotons and $ZZ$ pairs from Higgs from string balls at LHC. We find that the invariant mass distribution of diphotons from Higgs from string balls is not very sensitive to the increase in diphoton invariant mass. We find that for string mass scale $M_s$=2.5 TeV, which is the lower limit of the string mass scale set by the recent CMS collaboration at LHC, the $\frac{d\sigma}{dp_T}$ of high $p_T$ ($\ge$ 450 GeV) diphotons and $ZZ$ pairs produced from Higgs from string balls is larger than that from standard model Higgs. Hence in the absence of black hole production at LHC an enhancement of high $p_T$ diphotons and $ZZ$ pairs at LHC can be useful signatures for string theory at LHC. Since the matrix element for Higgs production in parton-parton collisions via string regge excitations is not available we compute $\frac{d\sigma}{dp_T}$ of photon production from string regge excitations and make a comparison with that from string balls at LHC. We find that for string mass scale $M_s$ = 2.5 TeV the $\frac{d\sigma_{photon}}{dp_T}$ from string regge excitations is larger than that from string balls and black holes at LHC.
Physics , 2009, DOI: 10.1103/PhysRevD.81.123530 Abstract: String theory suggests the simultaneous presence of many ultralight axions possibly populating each decade of mass down to the Hubble scale 10^-33eV. Conversely the presence of such a plenitude of axions (an "axiverse") would be evidence for string theory, since it arises due to the topological complexity of the extra-dimensional manifold and is ad hoc in a theory with just the four familiar dimensions. We investigate how upcoming astrophysical experiments will explore the existence of such axions over a vast mass range from 10^-33eV to 10^-10eV. Axions with masses between 10^-33eV to 10^-28eV cause a rotation of the CMB polarization that is constant throughout the sky. The predicted rotation angle is of order \alpha~1/137. Axions in the mass range 10^-28eV to 10^-18eV give rise to multiple steps in the matter power spectrum, that will be probed by upcoming galaxy surveys. Axions in the mass range 10^-22eV to 10^-10eV affect the dynamics and gravitational wave emission of rapidly rotating astrophysical black holes through the Penrose superradiance process. When the axion Compton wavelength is of order of the black hole size, the axions develop "superradiant" atomic bound states around the black hole "nucleus". Their occupation number grows exponentially by extracting rotational energy from the ergosphere, culminating in a rotating Bose-Einstein axion condensate emitting gravitational waves. This mechanism creates mass gaps in the spectrum of rapidly rotating black holes that diagnose the presence of axions. The rapidly rotating black hole in the X-ray binary LMC X-1 implies an upper limit on the decay constant of the QCD axion f_a<2*10^17GeV, much below the Planck mass. This reach can be improved down to the grand unification scale f_a<2*10^16GeV, by observing smaller stellar mass black holes.
Physics , 1996, DOI: 10.1016/S0550-3213(97)00413-6 Abstract: String theory possesses numerous axion candidates. The recent realization that the compactification radius in string theory might be large means that these states can solve the strong CP problem. This still leaves the question of the cosmological bound on the axion mass. Here we explore two schemes for accommodating such light axions in cosmology. In the first, we note that in string theory the universe is likely to be dominated early on by the coherent oscillations of some moduli. The usual moduli problem assumes that these fields have masses comparable to the gravitino. We argue that string moduli are likely to be substantially more massive, eliminating this problem. In such cosmologies the axion bound is significantly weakened. Plausible mechanisms for generating the baryon number density are described. In the second, we point out that in string theory, the axion potentials might be much larger at early times than at present. In string theory, if CP violation is described by a small parameter, the axion may sit sufficiently close to its true minimum to invalidate the bounds.
Physics , 1998, DOI: 10.1103/PhysRevD.59.023505 Abstract: We discuss the appearance at the QCD phase transition, and the subsequent decay, of axion walls bounded by strings in N=1 axion models. We argue on intuitive grounds that the main decay mechanism is into barely relativistic axions. We present numerical simulations of the decay process. In these simulations, the decay happens immediately, in a time scale of order the light travel time, and the average energy of the radiated axions is $<\omega_a > \simeq 7 m_a$ for $v_a/m_a \simeq 500$. $<\omega_a>$ is found to increase approximately linearly with $\ln(v_a/m_a)$. Extrapolation of this behaviour yields $<\omega_a> \sim 60 m_a$ in axion models of interest. We find that the contribution to the cosmological energy density of axions from wall decay is of the same order of magnitude as that from vacuum realignment, with however large uncertainties. The velocity dispersion of axions from wall decay is found to be larger, by a factor $10^3$ or so, than that of axions from vacuum realignment and string decay. We discuss the implications of this for the formation and evolution of axion miniclusters and for the direct detection of axion dark matter on Earth. Finally we discuss the cosmology of axion models with $N>1$ in which the domain wall problem is solved by introducing a small U$_{PQ}$(1) breaking interaction. We find that in this case the walls decay into gravitational waves.
Physics , 2003, DOI: 10.1088/1475-7516/2003/06/001 Abstract: The decay constant of the QCD axion is required by observation to be small compared to the Planck scale. In theories of "natural inflation," and certain proposed anthropic solutions of the cosmological constant problem, it would be interesting to obtain a large decay constant for axion-like fields from microscopic physics. String theory is the only context in which one can sensibly address this question. Here we survey a number of periodic fields in string theory in a variety of string vacua. In some examples, the decay constant can be parametrically larger than the Planck scale but the effective action then contains appreciable harmonics of order $f_A/M_p$. As a result, these fields are no better inflaton candidates than Planck scale axions.
Physics , 1998, DOI: 10.1016/S0370-2693(98)01206-4 Abstract: Axions are produced during a period of dilaton-driven inflation by amplification of quantum fluctuations. We show that for some range of string cosmology parameters and some range of axion masses, primordial axions may constitute a large fraction of the present energy density in the universe in the form of cold dark matter. Due to the periodic nature of the axion potential energy density fluctuations are strongly suppressed. The spectrum of primordial axions is not thermal, allowing a small fraction of the axions to remain relativistic until quite late.
Physics , 2014, DOI: 10.1088/1475-7516/2014/08/012 Abstract: We present an explicit embedding of axionic N-flation in type IIB string compactifications where most of the Kahler moduli are stabilised by perturbative effects, and so are hierarchically heavier than the corresponding N >> 1 axions whose collective dynamics drives inflation. This is achieved in the framework of the LARGE Volume Scenario for moduli stabilisation. Our set-up can be used to realise a model of either inflation or quintessence, just by varying the volume of the internal space which controls the scale of the axionic potential. Both cases predict a very high scale of supersymmetry breaking. A viable reheating of the Standard Model degrees of freedom can be achieved after the end of inflation due to the perturbative decay of the N light axions which drive inflation.
http://math.stackexchange.com/questions/235926/what-is-a-support-function-sup-z-in-k-langle-z-x-rangle | # What is a support function: $\sup_{z \in K} \langle z, x \rangle$?
I want to ask: what is a support function, intuitively? It is defined as:
$$\sup_{z \in K} \langle z, x \rangle$$ where $K$ is a nonempty set. In this formulation, $\langle \cdot, \cdot \rangle$ is the inner product.
As a function of $x$, what does it mean? Why can it be useful, for instance? Thanks.
If you take $K$ to be convex, the support function is, in some sense, a tool for a dual representation of the set as the intersection of half-spaces.
Let's assume that we're in $\mathbb R^n$ for simplicity. A hyperplane can be characterized by a direction $\boldsymbol x\in\mathbb R^n$ and a scalar $b\in\mathbb R$; let's write $H=(\boldsymbol x;b)$ for one such hyperplane. The set of points $\boldsymbol z\in\mathbb R^n$ on the hyperplane $H$ is then given by $$\langle \boldsymbol z,\boldsymbol x\rangle \quad = \quad b.$$ The set of points $\boldsymbol z$ lying on one side of the hyperplane $H$ can thus always be written as $\langle \boldsymbol z, \boldsymbol x\rangle\le b$ (modulo a change of sign). So considering $$\sup_{\boldsymbol z\in K} \langle \boldsymbol z,\boldsymbol x\rangle$$ amounts to finding the $b(\boldsymbol x)$ for the direction $\boldsymbol x$ such that the set $K$ lies on one side of the hyperplane $(\boldsymbol x,b(\boldsymbol x))$ or, equivalently, such that all $\boldsymbol z\in K$ satisfy $\langle \boldsymbol z, \boldsymbol x\rangle \le b(\boldsymbol x)$.
Then $K$ can be understood as the intersection of all the half-spaces thus defined.
It can maybe be useful to look at a basic example: consider the region $K=[0,1]\times [0,1]$. Then let's consider the $x$-direction with the vector $\boldsymbol v=(1,0)^t$; we get $$h_K(\boldsymbol v) = \max_{\boldsymbol w\in K} \langle\boldsymbol w,\boldsymbol v \rangle = \max_{w_1\in[0,1]} w_1 = 1$$ and the hyperplane $(\boldsymbol v,1)$ (i.e., the vertical line $x=1$) is indeed such that $K$ lies strictly on one side of it. Doing the same thing for the direction $-\boldsymbol v$ and the perpendicular directions will give us the four sides of the region. This is a bit of a trivial example but hopefully it can help somewhat with the intuition.
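A quick numeric version of this example (a sketch: the brute-force grid below only approximates the sup, and the code is mine, not from the answer):

```python
import numpy as np

# Support function of K = [0,1] x [0,1], approximated by brute force over a grid.
grid = np.linspace(0.0, 1.0, 201)
K = np.array([(x, y) for x in grid for y in grid])

def h_K(v):
    return np.max(K @ v)

for v in ([1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [1.0, 1.0]):
    print(v, h_K(np.array(v)))
# [1,0] -> 1 (line x=1), [-1,0] -> 0 (line x=0), [0,1] -> 1, [1,1] -> 2 (corner)
```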
Thanks! Do you have any knowledge about how norms can be represented as support functions? So, let us take $\|x\|_p$. It can be written as $\|x\|_p = \sup_{z \in B_q} \langle z, x \rangle$ where $B_q$ is the unit ball of the $\ell_q$ norm and the relationship between $p$ and $q$ is $1/p + 1/q = 1$. I wonder how this representation is possible? A proof maybe. Thanks... – oeda Nov 24 '12 at 17:54
you might want to ask another question for that to get a more complete answer. You can think of lp norms in Rn and check for yourself: a good tool for intuition might be to consider the lp-balls ($\{x\in\mathbb R^n \mid \|x\|_p\le 1\}$) and see that for 0<p<1 the balls are not convex. You might also want to check "Legendre-Fenchel convex conjugates" with, for example, $(\ell^1)^*\sim (\ell^\infty)$ (using sloppy notations) – tibL Nov 25 '12 at 12:21
Thanks again. I will ask another question about dual representations of $\ell_p$ norms. – oeda Nov 25 '12 at 12:56
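For later readers, a numeric sanity check of the identity discussed in the comments, $\|x\|_p=\sup_{z\in B_q}\langle z,x\rangle$ with $1/p+1/q=1$; the maximizer used below comes from the equality case of Hölder's inequality (a sketch, not part of the original thread):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(5)
p = 3.0
q = p / (p - 1.0)                      # Hölder conjugate, 1/p + 1/q = 1
# Hölder's equality case gives the maximizing z over the l_q unit ball:
z = np.sign(x) * np.abs(x) ** (p - 1) / np.linalg.norm(x, p) ** (p - 1)
print(np.linalg.norm(z, q))            # ~1.0: z sits on the boundary of B_q
print(z @ x, np.linalg.norm(x, p))     # the inner product attains ||x||_p
```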
https://chaoxuprime.com/posts/2018-12-18-subset-sum-through-balancing.html | # Subset sum through balancing
This is a note on Pisinger's balancing algorithm for subset sum [1]. In the subset sum problem the input is a set S\subset \N and a target t; letting \mathcal{S} be the set of all subset sums of S, we are interested in checking if t\in \mathcal{S}.
We define a variation, the balanced subset sum problem: we are given a vector v of integers (not necessarily positive), we let M=\|v\|_\infty, and we are interested in finding a subset that sums to t\in [M].
Theorem 1
Each subset sum problem on n elements can be reduced to a balanced subset sum problem on n elements in O(n) time.
Proof
Consider the input to the subset sum problem S and t. Greedily find a subset of elements S', such that adding any other element will exceed t. Let \|S'\|_1=t'. Now, we negate all the elements in S', and ask for balanced subset sum with input set -S' \cup (S\setminus S') and target number t-t'.
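A small Python sketch of this reduction (the function name and the one-pass packing order are mine):

```python
def to_balanced(S, t):
    """Theorem 1 reduction: greedily pack elements while the running sum stays
    under t, then negate the packed ones. Any skipped element would overshoot t,
    since the running sum only grows. Returns the balanced instance (v, t - t')."""
    packed, rest, tp = [], [], 0
    for s in S:
        if tp + s <= t:
            packed.append(s)
            tp += s
        else:
            rest.append(s)
    return [-s for s in packed] + rest, t - tp
```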
We partition S into A = [-M..0]\cap S and B=S\setminus A. We also define A_i = \set{a_1,\ldots,a_i} and B_i=\set{b_1,\ldots,b_i}.
A set is balanced by the following recursive definition. Let S be a set.
• S=\emptyset is balanced.
• \|S\|_1> t, then S\cup \set{a} is balanced, where a\in A.
• \|S\|_1\leq t, then S\cup \set{b} is balanced, where b\in B.
Consider a set T, such that (i,j,k)\in T if and only if k is the sum of some balanced subset of A_i\cup B_j. Certainly, we are interested in whether (|A|,|B|,t) is in T. However, the state space is already O(n^2M), which is no better than the standard dynamic programming algorithm.
There is a nice dominance relation. If (i,j,k)\in T, then for (i',j')\geq (i,j), we have (i',j',k)\in T. We can ask for each k, what are all the minimal (i,j) pairs where (i,j,k)\in T. Such value will be g(j,k). Formally, g(j,k) = \min \set{i | (i,j,k)\in T}, one can see that g(j,k) \geq g(j+1,k). Also, we know the solution corresponding to g(j,k) must contain a_{g(j,k)} as an element.
One can get a recurrence relation for g as below.
\displaystyle g(j,k)= \min \begin{cases} g(j-1,k)\\ g(j-1,k-b_j) & \text{if }k-b_j\leq t\\ i & \text{if }k-a_i > t \text{ and } i>g(j,k-a_i) \end{cases}
Fix k and j, and let i be as small as possible such that there are A_i'\subset A_i and B_j'\subset B_j with A_i'\cup B_j' balanced and summing to k. Note that a_i\in A_i'.
We obtained A_i'\cup B_j' by inserting an element of B or A into another balanced set. If the inserted element is in B but is not b_j, then we know i=g(j-1,k). If it is b_j, then i=g(j-1,k-b_j). If the last inserted element is a_i, then g(j,k)=i; note that in this case g(j,k-a_i)<i. A direct dynamic programming implementation seems to imply an O(n^2M) time algorithm, since there does not seem to be a quick way to obtain i.
On the other hand, if we think bottom up instead of top down, we can obtain a better result. Below is the algorithm.
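A Python sketch of the bottom-up balancing DP, following the pseudocode in [1]: it uses Pisinger's max-index table `s` over positive weights (playing the role of the table D below, with the greedy prefix of Theorem 1 built in) rather than the min-index g(j,k) above, and all names are mine.

```python
def balsub(w, t):
    """Balanced subset-sum decision in O(n*M) time: is there a subset of w
    summing to t? Sketch of Pisinger's 'balsub' [1]; positive integer weights."""
    n, M = len(w), max(w)
    # Greedy break: pack a prefix whose sum wbar stays <= t (Theorem 1's S').
    wbar, b = 0, n + 1
    for j in range(1, n + 1):
        if wbar + w[j - 1] > t:
            b = j
            break
        wbar += w[j - 1]
    if b == n + 1:                 # every item fits under t
        return wbar == t
    lo, hi = t - M + 1, t + M
    # s[mu] = largest index i < b such that some balanced solution with sum mu
    # may still remove packed items of index < i; 0 means mu is unreachable.
    s = {mu: 0 for mu in range(lo, t + 1)}
    s.update({mu: 1 for mu in range(t + 1, hi + 1)})
    s[wbar] = b
    for j in range(b, n + 1):      # try inserting each item b..n
        prev = dict(s)
        for mu in range(lo, t + 1):            # insert item j on deficient sums
            nu = mu + w[j - 1]
            if nu <= hi and s[nu] < prev[mu]:
                s[nu] = prev[mu]
        for mu in range(hi, t, -1):            # remove packed items on excessive sums
            for i in range(s[mu] - 1, prev[mu] - 1, -1):
                nu = mu - w[i - 1]
                if nu >= lo and s[nu] < i:
                    s[nu] = i
    return s[t] > 0

assert balsub([3, 2], 3) and not balsub([3, 2], 4) and balsub([3, 2], 5)
```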
The value D[j,k] eventually equals g(j,k). Note we can argue the running time is O(nM), since for each fixed k the final for loop runs at most n times in total. It is frustrating that the DP algorithm cannot be inferred directly from the recurrence relation. Indeed, we mainly obtained it through the fact that we can prune the search space if we work bottom up, which is unclear from the recurrence relation.
# References
[1] D. Pisinger, Linear time algorithms for knapsack problems with bounded weights, Journal of Algorithms. 33 (1999) 1–14 10.1006/jagm.1999.1034.
Posted by Chao Xu on 2018-12-18.
Tags: algorithms, subset sum.
https://www.physicsforums.com/threads/outer-lebesgue-measure-limits-proof.740279/ | # Outer Lebesgue Measure limit's proof
1. Feb 25, 2014
### SqueeSpleen
If $\{ E_{k} \}_{k \in \mathbb{N}}$ is an increasing sequence of subsets of $R^{p}$, then:
$| \displaystyle \bigcup_{k=1}^{\infty} E_{k} |_{e} = \lim_{k \to \infty} |E_{k}|_{e}$
I proved:
$| \displaystyle \bigcup_{k=1}^{\infty} E_{k} |_{e} \geq \lim_{k \to \infty} |E_{k}|_{e}$
But I don't know how to prove the other inequality.
$E_{n} = \displaystyle \bigcup_{k=1}^{n} E_{k} \subseteq \displaystyle \bigcup_{k=1}^{\infty} E_{k}\ \forall n \in \mathbb{N}$
So, by monotonicity of the outer Lebesgue measure we have:
$| E_{n} |_{e} = | \displaystyle \bigcup_{k=1}^{n} E_{k} |_{e} \leq | \displaystyle \bigcup_{k=1}^{\infty} E_{k} |_{e}\ \forall n \in \mathbb{N}$
So the limit also has to be less than or equal.
My problem is that I have no idea how to link the limit of the sets with the limit of the measures in the other direction. I could decompose:
$\displaystyle \bigcup_{k=1}^{\infty} E_{k} = \displaystyle \bigcup_{k=1}^{n} E_{k} \cup \displaystyle \bigcup_{k=n+1}^{\infty} E_{k} = \displaystyle \bigcup_{k=1}^{n} E_{k} \cup \displaystyle (\bigcup_{k=n+1}^{\infty} E_{k} - \bigcup_{k=1}^{n} E_{k})$
but the outer measure is only subadditive, so any such decomposition only gives an upper bound greater than the measure of my previous set, and I have no guarantee it is close to the measure of the other set (a Vitali-type set could basically double the outer measure I'm trying to bound).
If they're measurable it's very easy, but in general I don't know how to prove it.
Last edited: Feb 25, 2014
2. Feb 25, 2014
### gopher_p
Well the first thing I would do is split the problem into two cases; one where $|\bigcup_{k=1}^{\infty} E_{k} |_{e}$ is finite, and one where it is not.
For the harder (finite) case, I think you might just need to roll up your sleeves, dust off the old $\epsilon$s and $\delta$s (or I guess in this case $\epsilon$s and $n$s), and play around in the muck for a bit. Maybe draw a picture/cartoon of a "nice-looking" finite case, come up with an heuristic argument you think works there, and then see if you can give that argument some rigor that makes it work in the general case.
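For completeness, here is a sketch of the standard measurable-hull argument for the remaining inequality (my addition, using the standard fact that every subset of $\mathbb{R}^p$ has a measurable, e.g. $G_\delta$, hull of equal outer measure): for each $k$ choose a measurable $H_k \supseteq E_k$ with $|H_k| = |E_k|_e$ and set $V_k = \bigcap_{j \geq k} H_j$. Since the $E_k$ increase, $E_k \subseteq E_j \subseteq H_j$ for every $j \geq k$, so $E_k \subseteq V_k$; the $V_k$ are measurable, increasing, and satisfy $|V_k| \leq |H_k| = |E_k|_e$. Continuity from below for Lebesgue measure then gives $| \bigcup_{k=1}^{\infty} E_{k} |_e \leq | \bigcup_{k=1}^{\infty} V_{k} | = \lim_{k \to \infty} |V_k| \leq \lim_{k \to \infty} |E_k|_e$, and this argument covers the infinite case as well.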
http://mathhelpforum.com/calculus/5220-vector-functions.html | 1. ## Vector functions
Hi I have some vector function problems I'd like some help for, but I didn't see a forum for them so I'll start by posting one problem in case people don't know them.
I don't know all the english math terms so pardon me if the problem may seem like nonsense
I have a vector function, f(t)= (t^3-t, 3t+1).
I need to find all the cross points with the y-axis
All the points where the tangent is vertical
and all the tangents that are parallel with the line y=x
Thanks in advance and feel free to correct my math terms, be they wrong.
2. Originally Posted by Braas
Hi I have some vector function problems I'd like some help for, but I didn't see a forum for them so I'll start by posting one problem in case people don't know them.
I don't know all the english math terms so pardon me if the problem may seem like nonsense
I have a vector function, f(t)= (t^3-t, 3t+1).
I need to find all the cross points with the y-axis
All the points where the tangent is vertical
and all the tangents that are parallel with the line y=x
Thanks in advance and feel free to correct my math terms, be they wrong.
Write:
$x=t^3-t$ and $y=3t+1$.
1. Then the points where this crosses the y axis correspond to the roots of
$x=t^3-t=0$.
2. The points where the tangent is vertical correspond to the solutions of:
$\frac{dx}{dy}=\frac{dx}{dt}\frac{dt}{dy}=0$
3. The points where the tangent is horizontal correspond to the solutions of:
$\frac{dy}{dx}=\frac{dy}{dt}\frac{dt}{dx}=0$.
RonL
3. The points where the tangent is parallel with the line $y = x$ correspond to the solutions of:
$\frac{dy}{dx}=\frac{dy}{dt}\frac{dt}{dx} = 1$
4. Originally Posted by Glaysher
The points where the tangent is parallel with the line $y = x$ correspond to the solutions of:
$\frac{dy}{dx}=\frac{dy}{dt}\frac{dt}{dx} = 1$
$\frac{dx}{dy}=\frac{dx}{dt}\frac{dt}{dy} = 1$
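For completeness, all three conditions above can be checked with a short SymPy script (a sketch, not from the thread):

```python
import sympy as sp

t = sp.symbols('t', real=True)
x, y = t**3 - t, 3*t + 1

# 1. Crossings of the y-axis: x(t) = 0.
ts = sp.solve(sp.Eq(x, 0), t)
print([(0, y.subs(t, s)) for s in ts])          # (0, -2), (0, 1), (0, 4)

# 2. Vertical tangents: dx/dt = 0 (dy/dt = 3 never vanishes).
print(sp.solve(sp.Eq(sp.diff(x, t), 0), t))     # t = -1/sqrt(3), 1/sqrt(3)

# 3. Tangent parallel to y = x: dy/dx = (dy/dt)/(dx/dt) = 1.
print(sp.solve(sp.Eq(sp.diff(y, t) / sp.diff(x, t), 1), t))  # t = -2/sqrt(3), 2/sqrt(3)
```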
https://www.chemicalforums.com/index.php?topic=58989.msg211327
### Topic: Why does the equilibrium constant expression involve coefficients
#### sodium.dioxid
##### Why does the equilibrium constant expression involve coefficients
« on: May 14, 2012, 06:28:07 PM »
Suppose we have 2NO2 ⇌ N2O4, with
Keq = [N2O4]/[NO2]²
But the bottom term is squared. How can there be a constant ratio for two terms with different powers? (It seems that the lower term would overshoot the upper term.) Why isn't it just Keq = [product]/[reactant] without the square?
« Last Edit: May 14, 2012, 06:57:44 PM by sodium.dioxid »
#### sodium.dioxid
##### Re: Why does the equilibrium constant expression involve coefficients
« Reply #1 on: May 14, 2012, 08:23:22 PM »
Suppose we have 2NO2 ⇌ N2O4, with
Keq = [N2O4]/[NO2]²
But the bottom term is squared. How can there be a constant ratio for two terms with different powers? (It seems that the lower term would overshoot the upper term.) Why isn't it just Keq = [product]/[reactant] without the square?
By the way, is this supposed to be intuitive? Or is it derived? If the latter, I will leave it alone.
#### fledarmus
##### Re: Why does the equilibrium constant expression involve coefficients
« Reply #2 on: May 14, 2012, 08:48:59 PM »
How can there be a constant ratio for two terms with different powers?
If I understand your question correctly, the answer lies in the fact that the values for the two terms are not independent. Whatever concentration of starting material you begin the reaction with absolutely fixes the final equilibrium concentration of all the components in the reaction in accordance with the equilibrium constant for the reaction.
#### sodium.dioxid
##### Re: Why does the equilibrium constant expression involve coefficients
« Reply #3 on: May 14, 2012, 09:04:25 PM »
How can there be a constant ratio for two terms with different powers?
If I understand your question correctly, the answer lies in the fact that the values for the two terms are not independent. Whatever concentration of starting material you begin the reaction with absolutely fixes the final equilibrium concentration of all the components in the reaction in accordance with the equilibrium constant for the reaction.
I understand what you are saying. You are saying that the reaction acts in such a way as to achieve concentrations that maintain the Keq. This is a mysterious conclusion. The next logical question would be: why does it behave this way? How can this be physically described?
#### fledarmus
##### Re: Why does the equilibrium constant expression involve coefficients
« Reply #4 on: May 14, 2012, 09:23:54 PM »
That is where the derived part comes in. An equilibrium constant is actually a ratio of two reaction rate constants; the rate constant of the forward reaction and the rate constant of the reverse reaction. We say that a reaction is in equilibrium when the rate of the forward reaction is equal to the rate of the reverse reaction. For every two molecules of NO2 that react to form one molecule of N2O4, somewhere there is an N2O4 molecule falling apart to form two NO2 molecules.
So let's suppose that it is much faster for NO2 to react than for N2O4 to decompose - the forward rate constant is much higher than the reverse rate constant, and the Keq is a large number. For the forward reaction and the reverse reaction to occur at the same rate, you have to slow down the forward rate by making the NO2 molecules few and far between, and speed up the reverse rate by increasing the number of N2O4 molecules. At equilibrium, the concentration of product molecules is much higher than the concentration of starting material molecules.
If you try to increase the concentration of starting materials at this point by adding more NO2, you will speed up the forward rate even more and form more product, decreasing the amount of NO2 again until a new equilibrium is established, still in accord with the same Keq.
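A small numeric illustration of this argument (the rate constants below are made up): integrating the elementary rate equations for 2NO2 ⇌ N2O4 shows the concentrations settling exactly where [N2O4]/[NO2]² equals kf/kr.

```python
kf, kr = 5.0, 2.0            # assumed forward/reverse rate constants (arb. units)
no2, n2o4 = 1.0, 0.0         # start from pure NO2
dt = 1e-4
for _ in range(200_000):
    net = kf * no2**2 - kr * n2o4   # net forward rate of 2 NO2 -> N2O4
    no2  -= 2 * net * dt            # two NO2 consumed per forward event
    n2o4 +=     net * dt
print(n2o4 / no2**2, kf / kr)       # both ~2.5 at equilibrium
```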
#### Jorriss
##### Re: Why does the equilibrium constant expression involve coefficients
« Reply #5 on: May 14, 2012, 10:08:51 PM »
You can also derive equilibrium constants without appealing to kinetics from thermodynamics. A derivation of the equilibrium constant for ideal gases is very approachable if you take a look at Levine.
#### sodium.dioxid
##### Re: Why does the equilibrium constant expression involve coefficients
« Reply #6 on: May 14, 2012, 11:36:18 PM »
Ok, one more question. For any equilibrium reaction, starting with only reactants, are these statements true:
In the forward direction, rate of reaction decreases at a decreasing rate.
In the reverse direction, rate of reaction increases at a decreasing rate.
#### Jorriss
##### Re: Why does the equilibrium constant expression involve coefficients
« Reply #7 on: May 15, 2012, 01:17:10 AM »
Ok, one more question. For any equilibrium reaction, starting with only reactants, are these statements true:
In the forward direction, rate of reaction decreases at a decreasing rate.
In the reverse direction, rate of reaction increases at a decreasing rate.
You've taken calculus, think about this in terms of calculus. You want to know if the rate of the reaction is monotonically decreasing. How do you tell if a function is monotonically decreasing (increasing)?
#### sodium.dioxid
##### Re: Why does the equilibrium constant expression involve coefficients
« Reply #8 on: May 15, 2012, 09:25:23 AM »
Ok, one more question. For any equilibrium reaction, starting with only reactants, are these statements true:
In the forward direction, rate of reaction decreases at a decreasing rate.
In the reverse direction, rate of reaction increases at a decreasing rate.
You've taken calculus, think about this in terms of calculus. You want to know if the rate of the reaction is monotonically decreasing. How do you tell if a function is monotonically decreasing (increasing)?
Got it. For this particular problem, rate is a function of concentration squared in the forward direction. Taking the derivative of this shows a linearly increasing deceleration. So that's a check for the first one. The reverse direction is a problem. While the function is linear, the concentrations do not build up linearly. The surge comes in the beginning and fades down with increasing concentration (because the gap between the rates are closing). Nevertheless, while the gap is closing, the concentration of product is increasing, just not as fast since the difference is becoming smaller. Thus, That's a check for the second one also.
#### Jorriss
##### Re: Why does the equilibrium constant expression involve coefficients
« Reply #9 on: May 15, 2012, 11:40:32 AM »
Got it. For this particular problem, rate is a function of concentration squared in the forward direction. Taking the derivative of this shows a linearly increasing deceleration. So that's a check for the first one. The reverse direction is a problem. While the function is linear, the concentrations do not build up linearly.
Not quite, don't forget implicit differentiation. You differentiate with respect to time, so you have to use implicit differentiation on A: y(t)=x(t)^2 => dy/dt = 2x (dx/dt)
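A one-line check of this with SymPy (my addition):

```python
import sympy as sp

t = sp.symbols('t')
x = sp.Function('x')(t)
print(sp.diff(x**2, t))   # 2*x(t)*Derivative(x(t), t), i.e. dy/dt = 2x dx/dt
```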
#### sodium.dioxid
##### Re: Why does the equilibrium constant expression involve coefficients
« Reply #10 on: May 15, 2012, 06:23:25 PM »
Got it. For this particular problem, rate is a function of concentration squared in the forward direction. Taking the derivative of this shows a linearly increasing deceleration. So that's a check for the first one. The reverse direction is a problem. While the function is linear, the concentrations do not build up linearly.
Not quite, don't forget implicit diff. You differentiate with respect to time so you have to use implicit differentiation on A. y(t)=x(t)^2 => dy/dt = 2x (dx/dt)
Oops. That makes sense.
And I completely understand now the relationship between equilibrium constant and rate law constants.
But the book just gave me another surprise. In my own words, it says that the powers come from the coefficients in the composite reaction, not the powers in the rate law. Why is that?
Is it because equilibrium reactions don't occur in a composite fashion? In other words, is the composite reaction itself an elementary reaction (for equilibrium reactions)?
« Last Edit: May 15, 2012, 07:05:19 PM by sodium.dioxid »
#### sodium.dioxid
##### Re: Why does the equilibrium constant expression involve coefficients
« Reply #11 on: May 15, 2012, 10:20:54 PM »
Please don't mind my last question. I got myself another textbook, which explained this particular topic more clearly.
http://resonaances.blogspot.com.au/2007/02/ | ## Thursday, 22 February 2007
### Flavor and Strong Dynamics
This week i cannot report on any of the regular CERN seminars (meaning, i understood nothing or didn't even dare to walk in). Salvation came from the phenomenology journal club which hosted a short, informal talk by Roberto Contino. Roberto was talking about partial compositeness, reviewing a partly forgotten work on fermion masses in technicolor.
The common lore about technicolor is that it faces two serious problems. One is the difficulty to comply with the electroweak precision tests, in particular with the notorious S parameter. The other is the flavour problem: it is tough to generate the observed fermion mass pattern without producing excessive flavour-changing neutral currents.
Typically, technicolor models generate the fermion masses as follows. First, at some high scale $\Lambda_0$, one introduces a four-fermion operator $\Lambda_0^{-2}ff\chi\chi$ that marries two standard model fermions and two technifermions. The technifermions condense, breaking the electroweak symmetry and giving mass to the W and Z bosons. When this happens, thanks to four-fermion operators like the one above we also get mass terms for the standard model fermions. Parametrically, the fermion masses are given by
$m_{f}\sim\Lambda_{TC} \left ( \frac{ \Lambda_{TC} }{ \Lambda_{0} }\right )^{d-1}$
where $\Lambda_{TC}$ is the technicolor scale of order TeV and d is the dimension of the technifermion bilinear. The classical dimension is of course d=3 but in a strongly interacting theory renormalization effects may lead to a different, anomalous dimension. The problem is that in calculable setups d >= 2. This leads to an unpleasant tension. On one hand, to obtain the large top quark mass we need $\Lambda_0$ to be rather close to $\Lambda_{TC}$. On the other hand we would like $\Lambda_0$ to be as high as possible because in generic technicolor models we also generate unwelcome operators with 4 standard model fermions: $\Lambda_0^{-2}ffff$. If $\Lambda_0$ is too low, this leads to excessive flavor violation that is inconsistent with, for example, the kaon mixing experiments.
Though these problems could be overcome by laborious model-building, there exists in fact a simple solution proposed long ago by David B. Kaplan. All the problems mentioned above just disappear without a trace when the standard model fermions couple linearly to technicolor operators: $\lambda\,f{\cal O}$. The fermion masses are now set by the anomalous dimension $\gamma$ of the coupling $\lambda$. If $\gamma$ is positive, the coupling gets suppressed at low energies and one gets
$m_{f}\sim\Lambda_{TC}\left(\frac{\Lambda_{TC}}{\Lambda_{0} }\right )^{2\gamma}$
which is appropriate for light fermions of the first and second generation. When $\gamma$ is negative, the coupling goes to an IR fixed point and one finds
$m_{f}\sim\Lambda_{TC}$
which is appropriate for the top quark. In this scenario, $\Lambda_0$ can be even as high as the Planck scale. No flavor problem! It turns out that technicolor has "only" one serious problem, that with the electroweak precision tests.
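To get a feeling for the numbers, here is a quick back-of-envelope evaluation; the scales and fermion masses below are my illustrative inputs, not values from the talk:

```python
import math

L_TC = 1e3    # technicolor scale, GeV (assumed ~1 TeV)
L_0  = 1e19   # flavour scale, GeV (taken near the Planck scale)
# anomalous dimension needed for m_f ~ L_TC * (L_TC / L_0)**(2*gamma)
for name, m in [("electron", 5.1e-4), ("muon", 0.106), ("bottom", 4.2)]:
    gamma = math.log(m / L_TC) / (2 * math.log(L_TC / L_0))
    print(f"{name:8s} gamma ~ {gamma:.2f}")
# modest positive anomalous dimensions suffice even for a Planck-scale L_0
```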
One practical consequence of this scenario is that the standard model fermions mix with composite states from the technicolor sector, hence the name partial compositeness. This could show up as deviations of the fermionic interactions from the standard model predictions. For example, if the b quark has sizable admixture of composite states, the Z->bb branching ratio will be modified. In fact, the mixing with the technicolor sector is expected to be strongest for heavy fermions. Thus, the top quark should be mostly composite. Recall that top couplings have been very poorly measured so far...
Another nice thing about this mechanism is that it can be trivially implemented in 5D holographic models. This is the connection to Roberto's present research. But that's a longer story. If you want to know more about it, i recommend to start with a short review article by Roberto himself.
PS. As you may have noticed, this post contains a number of equations. Equations are an efficient tool to reduce the number of readers. For that, my eternal gratitude to the author of the script which enables LaTeX embeddings :-)
But the bounding boxes around the equations should not be there. Heeeelp!
## Saturday, 17 February 2007
### David Kaplan on SUSY, in general
David had a busy week at CERN. Besides making a documentary thriller about the LHC and performing at the theory seminar, he also gave a series of four lectures entitled Introduction to Supersymmetry. Somehow unexpectedly, behind this title hides a pretty basic introduction to supersymmetry. A particle theorist working beyond the standard model would not learn anything new from these lectures. For the remaining part of the population they offer a nice account of the current theoretical and experimental status of supersymmetry. David finds, i believe, a good balance between enthusiasm and skepticism.
The first lecture contains a review of the standard model and motivations for going beyond. The second is about constructing susy lagrangians, the one of the MSSM in particular. By the end of this lecture most of the experimentalists have vanished from the audience and David could move to more advanced subjects. The remaining two lectures introduce various models of supersymmetry breaking. A lot of time is devoted to discussing possible experimental signals, with more than usual emphasis on non-standard scenarios. The last lecture, in fact, has quite an overlap with the theory seminar.
The video recordings and the transparencies can be found here.
PS. I can't find a photo on the internet, except the one i already used in the previous post. Does anybody have a funny photo of David, e.g. standing on his head or parachute jumping?
## Wednesday, 14 February 2007
### David Kaplan on non-standard SUSY
Rumours of its death have been greatly exaggerated. Today, instead of interesting people, we had an interesting talk in our Wednesday TH seminar. David E. Kaplan is interesting in an interesting way. The good thing about it is that I don't need to devise any bons mots for this post: i just copy&paste the ones he has said. I wish my task were always so easy ;-)
The reason for turning attention to non-standard susy scenarios is the fine-tuning problem of electroweak symmetry breaking. Yes, the very fine-tuning that used to be the main motivation for supersymmetry now has a supersymmetric avatar that plagues all simple susy models. By the way, David is an author of the most adequate definition of fine-tuning. It goes like this: a model is fine-tuned if a plot of the allowed parameter space makes you wanna puke.
As an illustration, today he presented this plot:
It shows the allowed parameter space in mSUGRA - the most popular of the MSSM scenarios and a common reference for setting experimental limits on susy particles. The allowed space is the narrow green band which looks like another line from a distance. Indeed, after seeing this one you will never put mSUGRA in your mouth again. The situation is slightly better in the general, unconstrained MSSM. But not much better. It is tough to reduce the fine-tuning below one part in ten. This is bad enough to justify serious theoretical and experimental studies of various extensions of the MSSM.
There are several directions one can pursue to reduce the fine-tuning. All have one thing in common. Since, by definition, the MSSM is minimal, the other scenarios get complicated. David chose the direction that can be summarized by: the higgs was at LEP but we were dumb enough to have missed it. The fine-tuning problem in supersymmetric models is fueled mostly by the 115 GeV limit on the higgs boson mass. But this limit stands for a higgs with the standard-model-like interactions. If the higgs decays are modified then this limit might become less stringent. With the higgs mass of order the Z mass the electroweak fine-tuning problem can be avoided.
With this in mind, David removes one of the MSSM assumptions: the R-parity conservation. R-parity was designed to prevent excessive proton decay, but if only some of the R-parity violating couplings are switched on we can get away with it. David adds the baryon number violating UDD terms into the MSSM superpotential. This opens a possibility of the higgs decaying into two neutralinos, each of which subsequently decays to 3 quarks (without R-parity, the lightest neutralino is no longer stable). A higgs decaying to six jets was not properly studied at LEP, and it could be as light as 80 GeV.
In the presence of R-parity violation susy phenomenology is dramatically modified. There is no stable LSP (lightest supersymmetric particle), hence no missing energy signatures. The bulk of David's talk was devoted to strategies of discovering the higgs and susy particles in this weird scenario. The funny thing is that the main role would fall to LHCb (usually considered a poor relative of ATLAS and CMS), as its design may allow to see displaced vertices associated with the neutralino decays.
Should we all believe in the R-parity violating MSSM then? Not necessarily. David's model is not perfectly natural itself, as hiding the higgs requires a non-generic choice of parameters. However, his talk made clear that our theoretical bias has badly influenced experimental searches for new physics (for example, non-standard higgs decays haven't been given enough attention). What's worse, the theoretical bias has promoted scenarios that nowadays seem implausible. After 30 years of physics beyond the standard model we realized we have no idea what should we expect at the LHC (unless it is just the standard model). So the clever thing to do now is to investigate as broad spectrum of new physics signals as possible. David's famous last words were: we should think what can we use a 14 TeV machine for, beyond killing the neighbour's dog.
The transparencies are not available, as usual (and Carthage must be destroyed). The paper of David and company is here.
Update: This post hit the charts because it contains the word puke. It's not me, it's David, i'm just reporting ;-) Pity, he didn't say anything with f..., i would certainly get even more hits.
## Sunday, 11 February 2007
### Alain Connes' Standard Model
Last Thursday Alain Connes gave a talk at CERN TH. Alain is a famous mathematician with important contributions in the areas of operator algebras and non-commutative geometry. He has gathered quite a collection of prestigious awards, including the Fields Medal and the Crafoord Prize. What could bring such a figure to a particle theory seminar? He was sharing his views on the elementary interactions in a talk entitled The Standard Model coupled with gravity, the spectral model and its predictions.
Alain's approach to particle physics is orthogonal to that of the particle physics community. Whereas we try to figure out what sort of new physics could be responsible for the weird structures of the Standard Model, he treats those very structures as an indication of the underlying geometry of space-time. This is certainly original, which has its positives and negatives. On one hand, I find it reassuring that people out there are exploring different ways; in the end, it is conceivable that the standard approach will prove terribly wrong. On the other hand, Alain's language can hardly be understood here at CERN. No, he wasn't speaking French ;-) but I quickly got lost in the forest of KO-theory, metric dimensions and spectral triples. I'm not able to review or evaluate any technical details of his work but I would like to share a few remarks anyway.
His program consists in identifying a structure of space-time that could give rise to the Standard Model + gravity. He finds the answer is the product of an ordinary spin manifold by a finite noncommutative discrete space of KO-dimension 6 modulo 8 and of metric dimension 0, whatever that means. The discrete space is responsible for the spectrum, symmetries and the interactions of the Standard Model. Most of the Standard Model parameters correspond to the freedom of parametrizing the internal geometry. There are however three constraints:
1. The gauge couplings should be unified at some scale. The unification is rather weird, as there are no exotic gauge bosons, hence no proton decay.
2. There is a relation between the sum of the fermion masses squared and the W mass. In practice, this is a constraint on the top mass, which is roughly obeyed in nature.
3. Finally, there is a prediction for the higgs quartic coupling, which implies the higgs boson mass of order 170 GeV.
Is it possible that his approach will provide new insights into the Standard Model and beyond? Not likely. As far as I understood, the fine structure of space-time has no implications that could be observed at the LHC or in other experiments in the foreseeable future. Next, the Standard Model is not a unique system that allows for such a geometrical embedding. Before the neutrino masses were discovered, Alain himself had pursued a different scenario leading to massless neutrinos. In fact, non-unification of the gauge couplings within the Standard Model suggests that there should be more low-energy structures asking for a different space-time geometry. According to Alain, supersymmetry could find a place in this game, too. Thus, his program can hardly constrain the options for the LHC. Even the 170 GeV Higgs mass is sensitive to the assumptions he makes, e.g. to the value of the unification scale. In conclusion, his approach seems more a mathematical re-interpretation of QFT structures than a self-standing physical theory.
In spite of these objections, I really enjoyed the talk. I think it is due to Alain's manner of speaking: a soft voice full of wonder at the mathematical beauty he perceives in his models. One could think he speaks of autumn trees or little birds in a nest, not about scary non-commutative geometry :-) This sort of enthusiasm is rare these days.
The transparencies are not available, as usual. For brave souls, the technical details can be found in the recent paper of Alain and collaborators.
## Wednesday, 7 February 2007
### Interesting People
There's nothing interesting going on in particle theory these days. Somebody realized, finally. Therefore our weekly Wednesday theory seminars have been postponed until better days. Instead, a new Wednesday seminar series has been launched, that goes under the name Interesting People. Today we had a talk entitled ... In the coming weeks we expect the invisible Mr. T. Walters, the bicycle choir and a man with a tape recorder up his nose (to be confirmed). If you can set bricks to sleep, give a cat influenza or have any other remarkable skills, don't hesitate to contact the organizers of the Wednesday seminar.
https://math.stackexchange.com/questions/2616503/matrix-pseudoinverse-with-additional-term?noredirect=1
# matrix pseudoinverse with additional term
I would like to solve:
$M =ABC$ for $B$ where $M$, $A$, and $C$ are known rectangular matrices of compatible dimensions.
e.g. $(1000\times 200) = (1000\times 30)\cdot B\cdot (14000\times 200)$
I am familiar with the process for solving for $B$ when there is no term $C$.
Thank you.
There are two options. First if $A$ is injective and $C$ is surjective, then $A$ has a left inverse and $C$ has a right inverse, which can be computed as discussed here. Then, if $A^+A=I$ and $CC^+=I$, we have $B=A^+MC^+$.
Otherwise, $B\mapsto ABC$ is a linear map, so the equation $ABC = M$ can be solved using standard techniques for solving linear equations. Namely, let $M$ be $n\times m$, $A$ be $n\times p$, $B$ be $p\times q$ and $C$ be $q\times m$. Then write $B\mapsto ABC$ as a matrix by using the matrices $E_{ij}$, where $[E_{ij}]_{k\ell}=\delta_{ik}\delta_{j\ell}$ and $1\le i\le p$, $1\le j\le q$, as a basis for $M_{p\times q}$ (the matrices of size $p\times q$) and the analogous basis $F_{ij}$ of $M_{n\times m}$. Then $$[AE_{ij}C]_{k\ell}=\sum_{r=1}^p \sum_{s=1}^q a_{kr}[E_{ij}]_{rs}c_{s\ell} = \sum_{r=1}^p\sum_{s=1}^q a_{kr}\delta_{ri}\delta_{sj}c_{s\ell} = a_{ki}c_{j\ell}.$$
Now you can use this to write the linear map $B\mapsto ABC$ as a matrix, and solve it using standard matrix techniques like row reduction or something.
Edit: example of the second option using $2\times 2$ matrices. Let $$A=\begin{pmatrix} 1 & 2 \\ 3 & 6 \end{pmatrix} \qquad\text{and}\qquad C=\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}.$$ Now if $$B=\begin{pmatrix} a & b \\ c & d \end{pmatrix},$$ then $$ABC = \begin{pmatrix} 0 & a +2c \\ 0 & 3a + 6c \end{pmatrix}.$$ Rewriting $B$ as a column vector, so it becomes $\begin{pmatrix} a \\ b \\ c \\ d \end{pmatrix}$, we see that the matrix for $B\mapsto ABC$ is $$\begin{pmatrix} 0 & 0 & 0 & 0 \\ 1 & 0 & 2 & 0 \\ 0 & 0 & 0 & 0 \\ 3 & 0 & 6 & 0 \end{pmatrix}.$$ Writing $M$ as $\begin{pmatrix} a' & b' \\ c' & d' \end{pmatrix}$, or as a column vector, $\begin{pmatrix} a' \\ b' \\ c' \\ d' \end{pmatrix}$, we can apply row reduction to the augmented matrix (which I can't draw with a vertical bar because it doesn't appear to be a standard latex command): $$\begin{pmatrix} 0 & 0 & 0 & 0 & a' \\ 1 & 0 & 2 & 0 & b' \\ 0 & 0 & 0 & 0 & c' \\ 3 & 0 & 6 & 0 & d' \end{pmatrix}.$$ Row reducing, we get $$\begin{pmatrix} 1 & 0 & 2 & 0 & b' \\ 0 & 0 & 0 & 0 & a' \\ 0 & 0 & 0 & 0 & c' \\ 0 & 0 & 0 & 0 & d'-3b' \end{pmatrix}.$$ Hence $M=ABC$ has a solution if and only if $a'=0$, $c'=0$, $d'=3b'$. In that case $b$ and $d$ do not appear in $ABC$ at all, so they are free, and the solutions (parametrized by $t,s,u$) are given by $$B\in \left\{\begin{pmatrix} b'-2t & s \\ t & u \end{pmatrix}:t,s,u\in\Bbb{R}\right\}.$$
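For readers who want to check the example numerically, here is a short sketch of the same construction in Python/NumPy (my own illustration, not part of the original answer; the test matrix `M` is made up so that it satisfies the consistency conditions $a'=c'=0$, $d'=3b'$):

```python
import numpy as np

A = np.array([[1., 2.], [3., 6.]])
C = np.array([[0., 1.], [0., 0.]])
M = np.array([[0., 5.], [0., 15.]])   # a' = c' = 0 and d' = 3b', so solvable

# Build the matrix of the linear map vec(B) -> vec(ABC) using the basis E_ij.
n, p = A.shape          # A is n x p
q, m = C.shape          # C is q x m
L = np.zeros((n * m, p * q))
for i in range(p):
    for j in range(q):
        E = np.zeros((p, q))
        E[i, j] = 1.0
        L[:, i * q + j] = (A @ E @ C).reshape(-1)   # column = vec(A E_ij C)

vecB, *_ = np.linalg.lstsq(L, M.reshape(-1), rcond=None)
B = vecB.reshape(p, q)
print(np.allclose(A @ B @ C, M))   # True
```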
• Your first answer gives great results. Thank you. The second answer is difficult for me to penetrate. I would very much appreciate an example with small matrices (or a link to one). – Neuromancer Jan 23 '18 at 3:24
@jgon implicitly uses the Kronecker product in order to solve the considered equation. When the matrices $A,C$ are large, this method has a great complexity and, consequently, should be avoided.
Is there any risk in transforming $AXB=C$ into $(B^{T} \otimes A)\operatorname{vec}(X)=\operatorname{vec}(C)$ in order to solve for $X$?
See the cited reference for an effective method when the matrices $A,C$ are square. In particular, ref ii) Theorem 7.1 gives a uniqueness theorem for the solution of $AXD+EXB=C$.
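For completeness, a minimal sketch of the vec/Kronecker approach in Python/NumPy (my own addition, using the column-stacking vec, i.e. Fortran order; as the comment above warns, this is expensive for large matrices, since the Kronecker product has $nm \times pq$ entries):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))
B = rng.standard_normal((5, 6))
X_true = rng.standard_normal((3, 5))
C = A @ X_true @ B

K = np.kron(B.T, A)                    # vec(AXB) = (B^T kron A) vec(X)
vecX, *_ = np.linalg.lstsq(K, C.reshape(-1, order="F"), rcond=None)
X = vecX.reshape(3, 5, order="F")
print(np.allclose(A @ X @ B, C))       # True
```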
http://mathoverflow.net/questions/36483/sum-of-two-unitary-matrix-is-equal-to-every-matrix-closed
## Sum of two unitary matrix is equal to every matrix? [closed]
Let $R=M_{n}(Z_{2})$. Can we write every matrix of $R$ as a sum of two matrices of $GL_{n}(Z_{2})$?
The body of the question is unclear and doesn't correspond to the title. – Victor Protsak Aug 23 2010 at 19:26
However, if a question like this appears unclear, I would rather ask the questioner to clarify it. In the present case, the only unclear point seems to me the use of "unitary" in the title, in place of "invertible". – Pietro Majer Aug 24 2010 at 19:13
## closed as not a real question by Victor Protsak, Andrew Stacey, Akhil Mathew, Yemon Choi, Qiaochu Yuan Aug 23 2010 at 21:20
It appears you are asking whether $R$ is a 2-good ring. The answer is yes. You may find the paper "2-good rings" by Peter Vamos to be useful in giving you some background information.
The answer is "yes" if all matrices are the sum of at most 2 units. The answer is "almost" if every matrix is required to be the sum of exactly two units. (Hint: the monkey in this wrench is pretty small.) Gerhard "Ask Me About System Design" Paseman, 2010.08.24 – Gerhard Paseman Aug 24 2010 at 11:54 I failed to remark that I am, of course, assuming that $n>1$, otherwise the result is not true. I also want to mention that a colleague of mine Thomas Dorsey (along with two coauthors) has a preprint on this topic which is quite good. You might look for that paper to appear soon. – Pace Nielsen Aug 24 2010 at 17:28 Thanks for the heads up on the preprint. If you see the paper before I do, please mention it here. I shall do same if I see the paper before a new comment here. Gerhard "Ask Me About System Design" Paseman, 2010.08.24 – Gerhard Paseman Aug 24 2010 at 17:58 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9119169116020203, "perplexity": 448.83170464676596}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699675907/warc/CC-MAIN-20130516102115-00048-ip-10-60-113-184.ec2.internal.warc.gz"} |
https://www.vedantu.com/question-answer/find-whether-the-following-series-is-convergent-class-10-maths-cbse-5efaf4ef289d3d30dff28a89
QUESTION
Find whether the following series is convergent or divergent: $1 + a + \dfrac{{a(a + 1)}}{{1.2}} + \dfrac{{a(a + 1)(a + 2)}}{{1.2.3}} + .....$
Hint: The binomial expansion of ${(1 - x)^{ - n}}$ is given as ${(1 - x)^{ - n}} = 1 + nx + \dfrac{{n(n + 1)}}{{2!}}{x^2} + \dfrac{{n(n + 1)(n + 2)}}{{3!}}{x^3} + .....$. Use this formula to find the value of the given sum of infinite series.
The binomial theorem or the binomial expansion is a result of expanding the powers of binomials or the sum of two terms. The coefficients of the terms in the expansion are called the binomial coefficients.
The binomial expansion of ${(1 - x)^{ - n}}$ is given as follows:
${(1 - x)^{ - n}} = 1 + nx + \dfrac{{n(n + 1)}}{{2!}}{x^2} + \dfrac{{n(n + 1)(n + 2)}}{{3!}}{x^3} + .....$
It can also be proved using the Taylor series.
Let $f(x) = {(1 - x)^{ - n}}$, then, we have:
$f(0) = 1$
The first derivative of f is given as follows:
$f'(x) = n{(1 - x)^{ - n - 1}}$
The value of this function at x = 0 is given as follows:
$f'(0) = n$
The second derivative of f is given as follows:
$f''(x) = n(n + 1){(1 - x)^{ - n - 2}}$
The value of this function at x = 0 is given as follows:
$f''(0) = n(n + 1)$
The Taylor series expansion of a function at x = 0 is given as follows:
$f(x) = f(0) + \dfrac{{f'(0)}}{{1!}}x + \dfrac{{f''(0)}}{{2!}}{x^2} + ...$
Then, the value of ${(1 - x)^{ - n}}$ is given as:
${(1 - x)^{ - n}} = 1 + nx + \dfrac{{n(n + 1)}}{{2!}}{x^2} + \dfrac{{n(n + 1)(n + 2)}}{{3!}}{x^3} + .....$
We substitute the value of n as a, then, we have:
${(1 - x)^{ - a}} = 1 + ax + \dfrac{{a(a + 1)}}{{2!}}{x^2} + \dfrac{{a(a + 1)(a + 2)}}{{3!}}{x^3} + .....{\text{ }}...........{\text{(1)}}$
The expression given in the question is the same as the expression in equation (1) for the value of x as 1.
The left-hand side of the equation for x = 1 becomes infinity for all positive values of a.
Hence, the given expression diverges for a > 0.
For a = 0, every term after the first vanishes, so the value of the expression is 1. Hence, the given expression converges for the value of a = 0.
In conclusion, the given expression is divergent for the values of a > 0.
Note: For $a \geq 1$ the numerator of each term is at least as large as the denominator, so the terms do not tend to zero and the series diverges. For $0 < a < 1$ the terms do decrease, but only like ${k^{a - 1}}$, which is too slowly for the sum to converge.
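A quick numeric illustration (my own addition, not part of the solution): for $a = 0.5$ the partial sums keep growing, just slowly.

```python
a = 0.5
term, total = 1.0, 1.0
for k in range(1, 1000001):
    term *= (a + k - 1) / k        # t_k = a(a+1)...(a+k-1)/k!
    total += term
    if k in (10, 1000, 1000000):
        print(k, total)            # the partial sums grow without bound
```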
https://blog.b-ark.ca/2021/09/16/we-have-sunroom.html
We finally got to the point where Suncoast could come out and complete the first phase of the sunroom. Just screened right now, but once the windows are done we'll be set for fall!
http://www.sixthform.info/maths/?m=200410
## Proof and logic
Sunday 24 October 2004 at 3:15 pm | In Articles | 3 Comments
Mathematical proof is one of the topics that students find very difficult. Many of them assume what they are trying to prove, end up with a true statement and then think they have proved the result. Studying truth tables, particularly the implication operator, may well help. See Logical Operations and Truth Tables
On a mathematics forum a student wanted to use its facilities (provided by LatexRender of course 😎 ) to help a friend show how to prove
This is the original ‘proof’ they gave. Although they have now changed it I get the impression that I failed to convince them of the faulty logic; how would you explain what is wrong?
For this problem you need to know the addition formulas:
Using these formulas in the problem we can turn it into:
Then we use the fact that and
This changes it to:
Here we cancelled out the terms that equal zero, and adding together what we have, we end up with:
## Some textbooks misuse infinity
Monday 18 October 2004 at 3:43 pm | In Articles | Post Comment
It’s happened again! used in a textbook (unnamed to protect the guilty) as if it were a real number instead of an idea. In a discussion of the formula for the acute angle between two lines
the following appears:
Putting gives an angle , confirming the condition for the lines to be perpendicular
This is of course complete nonsense. As I've said before, $\infty$ doesn't exist as a number, and $\tan^{-1}$ is only defined on $\mathbb{R}$, ie for $-\infty < x < \infty$
The textbook was written by the examiners (which is one reason why we use it); this worries me even more.
I suppose this is better than one well-known textbook back in the eighties which solved the equation by putting then ‘showing’ or . This seems to show that all linear equations are quadratics in disguise; or cubics, quartics – who knows where this nonsense leads 😕
## Amazing Formula
Monday 11 October 2004 at 9:47 pm | In Articles | 3 Comments
There are many interesting formulae in mathematics;
$\displaystyle\sum_{n=1}^{\infty}\frac{1}{n^{2}} = \frac{\pi^{2}}{6}$
must be one of the most amazing of all.
The first reaction is where did that come from? You can find 14 different proofs of this in a paper on Robin Chapman’s Home Page [look for Evaluating zeta(2)]
Given this result can you prove another amazing result?
If you pick two positive integers at random, the probability of them having no common divisor is $\dfrac{6}{\pi^{2}}$
$\pi$ gets everywhere! See Wikipedia for more.
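A quick Monte Carlo check of the coprimality result (my own sketch; the range and trial count are arbitrary): the observed proportion should be close to $6/\pi^2 \approx 0.6079$.

```python
import math, random

random.seed(1)
trials = 200000
hits = sum(math.gcd(random.randrange(1, 10**6),
                    random.randrange(1, 10**6)) == 1
           for _ in range(trials))
print(hits / trials, 6 / math.pi**2)   # the two numbers should be close
```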
## Primes
Sunday 10 October 2004 at 6:42 pm | In Articles | Post Comment
Euclid’s proof that there are an infinite number of primes is a classic and as such appears as the first proof in Proofs from The Book.
Equally well-known is the formula (known as The Prime Number Theorem) which tells you that the number of primes less than $x$ is given by $\pi(x) \sim \dfrac{x}{\ln x}$, which means that the larger the value of $x$, the closer (in a well-defined mathematical sense) $\pi(x)$ is to $\dfrac{x}{\ln x}$. This is quite hard to prove.
An easier, but non-trivial result, is Bertrand's postulate which says that there is always a prime between $n$ and $2n$.
The fact that there are arbitrarily large gaps between successive primes is not difficult to prove. Suppose we want to find a gap between successive primes which is at least of size $N$. Then we look at the numbers $(N+1)!+2,\ (N+1)!+3,\ \dots,\ (N+1)!+(N+1)$
Then each of these numbers is not prime. Why? Look at $(N+1)!+k$ where $2 \le k \le N+1$. Then $k$ divides both $(N+1)!$ and $k$ and so divides $(N+1)!+k$. Clearly $(N+1)!+k > k$ so this shows $(N+1)!+k$ is not prime.
So we have a series of $N$ numbers all of which are not prime; thus the gap between a prime less than $(N+1)!+2$ and a prime more than $(N+1)!+(N+1)$ is at least $N$.
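Here is the construction in action for $N = 5$ (my own illustration): the five consecutive numbers $6!+2,\dots,6!+6$ are all composite.

```python
def is_prime(m):
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

N = 5
f = 1
for i in range(2, N + 2):
    f *= i                           # f = (N+1)! = 720
for k in range(2, N + 2):
    print(f + k, is_prime(f + k))    # all False, since k divides f + k
```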
## Error
Saturday 2 October 2004 at 10:04 pm | In Articles | 2 Comments
Six months ago in an article on the LambertW function I wrote:
Thus is defined by and it is then clear that, for example,
There’s a serious error in there which also completely invalidates it is then clear that …. Going back to the article the mistake leapt out at me – is it obvious to you? It’s strange how you read what you want to read rather than what is actually there 😕 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9121349453926086, "perplexity": 498.27015669950015}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125936969.10/warc/CC-MAIN-20180419130550-20180419150550-00583.warc.gz"} |
http://mathhelpforum.com/algebra/99427-polynomial-print.html
# polynomial
• Aug 27th 2009, 08:35 AM
thereddevils
polynomial
Given that $3+\sqrt{2}$ and $2-i$ are roots of the equation f(x)=0. If the degree of f is 4 and f(1)=8, find the polynomial f(x).
• Aug 27th 2009, 09:46 AM
running-gag
Hi
I suppose that f(x) is a polynomial with integer coefficients
Therefore if 2-i is a solution then 2+i is a solution
And if $3 - \sqrt{2}$ is a solution then $3 + \sqrt{2}$ is a solution
Finally $f(x) = \alpha (x-(2-i))(x-(2+i))(x-(3 - \sqrt{2}))(x-(3 + \sqrt{2}))$
Expand and determine $\alpha$ using the value of f(1)
• Aug 27th 2009, 10:18 AM
thereddevils
Quote:
Originally Posted by running-gag
Hi
I suppose that f(x) is a polynomial with integer coefficients
Therefore if 2-i is a solution then 2+i is a solution
And if $3 - \sqrt{2}$ is a solution then $3 + \sqrt{2}$ is a solution
Finally $f(x) = \alpha (x-(2-i))(x-(2+i))(x-(3 - \sqrt{2}))(x-(3 + \sqrt{2}))$
Expand and determine $\alpha$ using the value of f(1)
Thanks, but I don't understand how you can just reverse the operation sign and call it another root. Is it because it's a polynomial of degree 4?
And where does the '$\alpha$' come from?
Thanks again ..
• Aug 27th 2009, 12:21 PM
QM deFuturo
Quote:
Originally Posted by thereddevils
Thanks, but I don't understand how you can just reverse the operation sign and call it another root. Is it because it's a polynomial of degree 4?
And where does the '$\alpha$' come from?
Thanks again ..
I can't quote a "rule" for Real roots, but I do know that if any roots of an equation are complex, they *always* come in pairs of conjugates. So if a +bi is a root, a - bi *has* to be a root as well.
There's probably something similar for Real roots, but I do not know any formal properties in this area.
• Aug 28th 2009, 06:59 AM
stapel
When have you ever (and only) gotten solutions of these forms, with radicals or imaginaries (or both)? From the "plus-minus" part of the Quadratic Formula!
Working backwards here from the zeroes, you know that, if one zero is "(something) plus (a square root)", then another zero must be "(that same thing) minus (that same square root)". Using the same reasoning with the complex root, you obtain the fourth zero.
From these, and the fact that, if x = a is a root, then x - a is a factor, you can find all four factors, and multiply them out to find the original polynomial. (Wink)
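Putting the thread's recipe together, here is a quick symbolic check (my own addition, assuming SymPy is available):

```python
from sympy import symbols, I, sqrt, expand, solve

x, alpha = symbols('x alpha')
f = alpha * (x - (2 - I)) * (x - (2 + I)) * (x - (3 - sqrt(2))) * (x - (3 + sqrt(2)))
a = solve(expand(f.subs(x, 1)) - 8, alpha)[0]
print(a)                            # 2
print(expand(f.subs(alpha, a)))     # 2*x**4 - 20*x**3 + 72*x**2 - 116*x + 70
```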
https://math.stackexchange.com/questions/1297904/finding-a-lyapunov-function-for-a-given-system
# Finding a Lyapunov function for a given system
I need to find a Lyapunov function for $(0,0)$ in the system:
\begin{cases} x' = -2x^4 + y \\ y' = -2x - 2y^6 \end{cases}
A graph built using this tool showed that there should be stability but not asymptotic stability. I'd like to prove that fact by means of a Lyapunov function, but cannot find the appropriate one among the most common ones such as $Ax^{2n}+By^{2m}$, $Ax^2+By^2+Cxy$, etc.
Please give me some hint about how to proceed with the search for a suitable function, or even the desired function itself if you know an easy way of finding it in this case.
• Simulated phase diagrams do not seem conclusive enough to decide whether (0,0) is stable. Why do you think it is? – Did May 25 '15 at 12:33
• @Did As I was told by my groupmates, the situation when trajectories don't go away from 0 to the infinity leads to stability. And this is exactly the case for my system: s9.postimg.org/ywsij9kbj/image.png – mik May 25 '15 at 13:07
• Sorry but, from the phase diagrams, I cannot determine whether trajectories cycle or spiral outwardly or spiral inwardly (and I would be cautious about approximation errors in simulations, if I were you). – Did May 25 '15 at 14:27
• If your guess (stable but not asymptotically stable) is correct, then the equation has periodic trajectories. Consequently, a Lyapunov function $V$ would have to be constant on those trajectories, which means that $V=c$ is an implicit equation of a trajectory. The upshot is that in such a case, finding a Lyapunov function is the same as finding the trajectories explicitly -- a task that is unlikely to have a closed form answer. – user147263 May 25 '15 at 21:17
• – Did Jun 21 '15 at 19:00
As explained in the comments, even though simulated phase diagrams seem to exhibit cycling trajectories near the origin, they are not conclusive enough to decide whether the origin is stable or not, that is, whether trajectories cycle or spiral outwardly or spiral inwardly. Caution is advised about approximation errors in simulations.
`streamplot{{-2x^4+y,-2x-2y^6},{x,-1,1},{y,-1,1}}`
## Dynamical system
\begin{align} \dot{x} &= -2x^{4} + y \\ \dot{y} &= -2x - 2y^{6} \end{align}
There are two critical points: the origin $(0,0)$ and $$\left\{-\frac{1}{2^{6/23}},2 \left(\frac{1}{2^{6/23}}\right)^4\right\} =\left\{ -\frac{1}{2^{6/23}}, \frac{1}{2^{1/23}} \right\} \approx \{-0.834585,0.970313\}$$
## Phase portrait
The phase portrait below displays the nullcline $\dot{x}=0$ with a red, dashed line and the nullcline $\dot{y}=0$ in purple.
## Stability
The polar transformations $$x = r \cos \theta, \qquad y = r \sin \theta$$ lead to $$r^{2} = x^{2} + y^{2}$$ which implies $$\dot{r} = \frac{x\dot{x}+y\dot{y}}{\sqrt{x^2+y^2}} = \frac{-2 x^5-x y-2 y^7}{\sqrt{x^2+y^2}},$$ plotted below.
The problem is that the sign of $\dot{r}$ changes twice in every neighborhood of the origin.
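A direct numerical integration makes the same point (my own sketch, assuming SciPy; the initial condition is arbitrary): the radius changes so slowly near the origin that plots alone cannot settle stability.

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, u):
    x, y = u
    return [-2 * x**4 + y, -2 * x - 2 * y**6]

sol = solve_ivp(rhs, (0, 200), [0.1, 0.0], rtol=1e-10, atol=1e-12)
r = np.hypot(sol.y[0], sol.y[1])
print(r[0], r[-1])    # compare the initial and final distance from the origin
```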
https://zbmath.org/?q=an:0995.37025
Uniform estimates on the number of collisions in semi-dispersing billiards. (English) Zbl 0995.37025
This is a remarkable paper – it solves a long-standing and celebrated open problem in the theory of billiard dynamical systems and mechanics. The authors prove that in a gas of $$N$$ hard balls in the open space the number of possible collisions is uniformly bounded (until now, the problem had only been solved for $$N=3$$). The authors give an explicit upper bound for the number of collisions between $$N$$ hard balls of arbitrary masses. They also solve a more general billiard problem: for multidimensional semidispersing billiards (i.e. with walls concave inward) the number of collisions near any “nondegenerate” corner point is uniformly bounded. A simple new criterion of nondegeneracy of a corner point is found.
The authors give an elementary and very elegant solution of the above problems. In addition, they generalized the result (and the proof) to billiards on Riemannian manifolds with bounded sectional curvature, where the particle moves along geodesics between elastic collisions with walls. This involves the theory of Aleksandrov spaces.
##### MSC:
37D50 Hyperbolic systems with singularities (billiards, etc.) (MSC2010)
82B05 Classical equilibrium statistical mechanics (general)
70F10 $$n$$-body problems
70F35 Collision of rigid or pseudo-rigid bodies
http://mathhelpforum.com/geometry/135488-how-calculate-circle-s-radius-when-tangent-2-intersecting-lines.html
# Thread: How to calculate a circle's radius when tangent to 2 intersecting lines
1. ## How to calculate a circle's radius when tangent to 2 intersecting lines
In the diagram below, circle c is tangent to lines p, q, and r. Lines q and s are perpendicular to each other. Angle a is 20 degrees (the subtended angle is 40 degrees). What is the radius of circle c, and how did you calculate it? Thanks.
Involute
2. If I understand the diagram correctly, this is true.
$\displaystyle \sin(20^\circ)=\frac{r}{1-r}$.
3. Looks right to me. Thanks!
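For completeness, solving $\sin(20^\circ)=\frac{r}{1-r}$ for $r$ numerically (my own addition):

```python
import math

s = math.sin(math.radians(20))
r = s / (1 + s)        # from sin(20) = r/(1-r)
print(r)               # about 0.2549
```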
https://math.stackexchange.com/questions/634310/show-that-for-any-sets-a-b-and-c-a-delta-b-subset-a-delta-c-cup-b-delta-c/634357
Show that for any sets $A,B$ and $C$, $A\Delta B\subset A\Delta C\cup B\Delta C$.
The problem statement is in the title.
I'm proving a problem in class and it's necessary for me to show the above containment. I've drawn some Venn diagrams to make sure the containment actually makes sense and it does, yet I'm having trouble proving this rigorously.
I know that to begin, we'd let $x\in A\Delta B$ and eventually show that $x\in A\Delta C\cup B\Delta C$. Since $x\in A\Delta B$, $x\in A\setminus B$ or $x\in B\setminus A$. Yet, I don't know how to show $x$ would end up in $C$.
Also, the proof of this requires proving an "or" statement, that is, showing that $x\in A\Delta C$ or $x\in B\Delta C$. Would supposing $x\notin B\Delta C$ or $x\notin A\Delta C$ help to prove this?
Thanks for any help or feedback!
• I am most certain that I answered this question before. – Asaf Karagila Jan 11 '14 at 7:22
Cases?
Take $\;x\in A\Delta B=(A\cup B)\setminus (A\cap B)\;$ , and suppose $\;x\in A\;$ (and thus $\;x\notin B\;$, of course).
Case 1: If $\;x\in C\;$ then:
$$(i)\;\;x\in A\cap C\implies x\notin B\cap C\implies x\in B\Delta C$$
$$(ii)\;\;x\notin A\cap C\implies \ldots$$
Continue on.
• Thanks for the response. Would your $(ii)$ written be Case 2 for $x\notin C$? – Shant Danielian Jan 11 '14 at 3:01
• I'm not sure what you tried to ask @ShantDanielian, but my (ii) above is the second case of $\;x\in C\;$ . – DonAntonio Jan 11 '14 at 3:32
• What I'm confused about is if we're supposing that $x\in A$ and the case we're in is also $x\in C$, how can $x$ not be in $A\cap C$? – Shant Danielian Jan 11 '14 at 7:50
• ...and so case (ii) is not relevant, @ShantDanielian...:) Now, case 2: $\;x\notin C\;$...what can you say about it? – DonAntonio Jan 11 '14 at 10:10
• Ah, guess I should have concluded that. What I got for the second part was if $x\notin C$ then $x\notin A\cap C$ and $x\notin B\cap C$, which means that $x\in A\cup C$ and so $x\in A\Delta C$. – Shant Danielian Jan 11 '14 at 20:37
Whenever you want to prove an "or" statement, assume one of the "or"'s is false. In this case, try assuming that $x\notin B\Delta C$, and then prove that $x\in A\Delta C$.
Here's why this works:
Let $A$ and $B$ be some statements. Consider the statement "(not $A$) implies $B$". This is false exactly when $B$ is false and $A$ is false. In other words, it has the same truth table as "$A$ or $B$". Thus, proving "$A$ or $B$" is the same as proving "(not $A$) implies $B$".
Another way to approach this: notice $A\Delta B = (A\cup B)-(A\cap B)$
HINT: Let $x\in A\Delta B$
$\implies x\in A\cup B \land x\notin A\cap B$
$\implies x\in A\cup B \land x\in (A\cap B)^C$
$\implies x\in A\cup B\cup C \land x\in (A\cap B)^C \cup C^C$
Remember $x\in A \implies x\in A\cup B$ for whatever set $B$
• How is $x\in A\cup B$ and not in $A\cup B$? Did you mean $A\cap B$? – Shant Danielian Jan 11 '14 at 3:13
• Sorry, typo. Editing – AndreGSalazar Jan 11 '14 at 3:14
• @AndrewGSM, by de Morgan Laws, we have that $$x\notin A\cap B\iff x\in (A\cap B)^c=A^c\cup B^c$$ and not what you wrote. – DonAntonio Jan 11 '14 at 3:37
• Your implication in the line before the last one doesn't look justified. Would you mind? – DonAntonio Jan 11 '14 at 3:45
• Yes, edited again. The first time Shant told me I only edited the first thing I saw wrong, didn't read the whole answer again. Pardon me, I think it is now correct – AndreGSalazar Jan 11 '14 at 3:47
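A brute-force check of the containment over random subsets (my own addition, not part of the answers above; Python's `^` is symmetric difference):

```python
import random

random.seed(0)
U = range(10)
for _ in range(1000):
    A = {x for x in U if random.random() < 0.5}
    B = {x for x in U if random.random() < 0.5}
    C = {x for x in U if random.random() < 0.5}
    assert (A ^ B) <= (A ^ C) | (B ^ C)   # symmetric difference containment
print("containment held in all trials")
```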
https://www.physicsforums.com/threads/particles-collisions-at-speeds-greater-than-c-surprised.173946/
# Particles collisions at speeds greater than c :surprised
1. Jun 14, 2007
### rfwebster
I recently read that it was possible to accelerate an electron, using a synchrotron, to 3/4 c!
Does this imply that if two electron beams were to be accelerated in this way, the resultant velocity of a collision would be 3/2 c?
If so, what would this be like? I.e. would there be lots of mass created from the energy of the beam, or would there be a geet massive explosion?
2. Jun 14, 2007
### Meir Achuz
No. Special Relativity shows that the relative velocity between two particles can never exceed c. The law of addition of parallel velocities in SR is
$$u_{rel}=\frac{u_1+u_2}{1+u_1u_2/c^2}$$.
3. Jun 14, 2007
### Staff: Mentor
No. You must compute the relative velocity relativistically:
Relativistic addition of velocities:
$$V_{a/c} = \frac{V_{a/b} + V_{b/c}}{1 + (V_{a/b} V_{b/c})/c^2}$$
Which gives you a relative speed of exactly 24/25 c.
(Oops... Meir beat me to it!)
4. Jun 14, 2007
### jambaugh
This question should go in the relativity section.
The answer is no. Relativistic velocities don't add that way in SR.
I know it is counter intuitive but remember when you say "velocity of a collision" you are asking how fast one particle appears in the other's frame of reference. Think of the velocity as a direction in space-time, i.e. the speed is the slope of a line rather than a vector. In Euclidean space it will be the angles which add. In Minkowski space-time it will be pseudo-angles which add.
$$v_1 = \tanh(\phi_1),\quad v_2 = \tanh(\phi_2),\quad v_{12}=\tanh(\phi_1+\phi_2)$$
(velocities expressed as multiples of c.)
So use your calculator to figure:
$$v_{12} = \tanh\left( \tanh^{-1}(v_1) + \tanh^{-1}(v_2) \right)$$
Try this with various velocities and you'll get some idea of how velocity addition behaves. In particular your example of v1=v2=3/4c gives:
$$\tanh^{-1}(0.75)=0.97295507452765665255267637172159$$
$$\tanh(2\times 0.97295507452765665255267637172159)=0.96$$
so their relativistic speed, each w.r.t. the other, is 96% of c.
However if you want to consider momentum and energy instead of velocity then Yes their momenta will add up (you can add them in our frame the center of mass frame) and you can get arbitrarily large collision energies in principle. To get really interesting effects you use a beam of electrons hitting a beam of positrons so that their rest energies also go into the mix (and so the net charge, lepton number, and other conserved quantities cancel).
See: http://en.wikipedia.org/wiki/Large_Electron_Positron
Regards,
James Baugh
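Both formulas in this thread give the same answer; a quick numeric check (my own addition, speeds in units of c):

```python
import math

def compose(u1, u2):
    return (u1 + u2) / (1 + u1 * u2)          # relativistic velocity addition

def compose_rapidity(u1, u2):
    return math.tanh(math.atanh(u1) + math.atanh(u2))

print(compose(0.75, 0.75))                    # 0.96
print(compose_rapidity(0.75, 0.75))           # 0.96
```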
5. Jun 14, 2007
### rfwebster
Thanks all, I think I need to read up a lot on relativity in all forms
6. Jun 14, 2007
### MeJennifer
It becomes interesting when we add velocities that are not parallel. In such cases the velocity addition formula is neither associative nor commutative.
Velocity addition together with Thomas precession form a group, sometimes called a gyrogroup.
7. Jun 14, 2007
### robphy
It's probably best to drop the word "addition" and instead use "composition".
(For relativistic pedagogical purposes, it might be good to somehow begin this refinement at the introductory level.... long before the topic of special relativity is discussed.)
8. Jun 14, 2007
### neutrino
Just about a couple of hours ago, I was reading this very example in an SR textbook.
Last edited: Jun 14, 2007
https://de.maplesoft.com/support/help/maple/view.aspx?path=numtheory(deprecated)/fermat&L=G
numtheory(deprecated)/fermat - Maple Help
numtheory(deprecated)
fermat
nth Fermat number
Calling Sequence
fermat(n)
fermat(n, w)
Parameters
n - (optional) non-negative integer
w - (optional) unassigned variable
Description
• Important: The numtheory package has been deprecated. Use the superseding command NumberTheory[IthFermat] instead.
• The nth Fermat number is ${2}^{{2}^{n}}+1$.
• fermat(n) returns the nth Fermat number, for $n<22$.
• For any non-negative integer n and unassigned variable w, the function call fermat(n, w) assigns to w the information which is known (at the time of writing this function) about the Fermat number fermat(n). This information consists of: the primality character of fermat(n) (prime, composite, or unknown), and, if it is composite, any known prime factors.
• Every factor of a Fermat number fermat(n) has the form ${2}^{n+2}k+1,2\le k$.
• If fermat is invoked with no arguments, it returns a list of all Fermat numbers whose primality status is known as of the time when this function was written.
• The command with(numtheory,fermat) allows the use of the abbreviated form of this command.
Examples
Important: The numtheory package has been deprecated. Use the superseding command NumberTheory[IthFermat] instead.
> $\mathrm{with}\left(\mathrm{numtheory}\right):$
> $\mathrm{fermat}\left(n\right)$
${{2}}^{{{2}}^{{n}}}{+}{1}$ (1)
> $\mathrm{fermat}\left(0\right)$
${3}$ (2)
> $\mathrm{fermat}\left(3\right)$
${257}$ (3)
> $\mathrm{fermat}\left(4,'w'\right)$
${65537}$ (4)
> $w$
${\mathrm{it is prime}}$ (5)
> $\mathrm{fermat}\left(6,'w'\right)$
${18446744073709551617}$ (6)
> $w$
${\mathrm{it is completely factored}}{,}\left({\left({2}\right)}^{{8}}{}{\left({3}\right)}^{{2}}{}\left({7}\right){}\left({17}\right){+}{1}\right){}\left({\left({2}\right)}^{{8}}{}\left({5}\right){}\left({47}\right){}\left({373}\right){}\left({2998279}\right){+}{1}\right)$ (7)
> $\mathrm{length}\left(\mathrm{fermat}\left(20\right)\right)$
${315653}$ (8)
> $\mathrm{fermat}\left(30,'w'\right)$
${\mathrm{object too big}}$ (9)
> $w$
${\mathrm{it has these prime factors}}{,}{\left({2}\right)}^{{33}}{}\left({127589}\right){+}{1}{,}{\left({2}\right)}^{{32}}{}\left({149041}\right){+}{1}$ (10)
> $\mathrm{fermat}\left(9448,'w'\right)$
${\mathrm{object too big}}$ (11)
> $w$
${\mathrm{it has this prime factor}}{,}\left({19}\right){}{\left({2}\right)}^{{9450}}{+}{1}$ (12)
> $\mathrm{fermat}\left(10000\right)$
${\mathrm{character unknown}}$ (13)
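As a cross-check of the factor-form property described above, here is a small sketch in Python rather than Maple (my own addition), using the two known prime factors of fermat(5):

```python
n = 5
F = 2**(2**n) + 1                  # fermat(5) = 4294967297
for p in (641, 6700417):           # its known prime factors
    assert F % p == 0
    k, r = divmod(p - 1, 2**(n + 2))
    print(p, "= 2^%d * %d + 1" % (n + 2, k), "remainder", r)   # remainder 0
```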
http://mathhelpforum.com/statistics/180504-bernoulli-trials.html
# Math Help - Bernoulli trials.
1. ## Bernoulli trials.
I am a surgeon. If the rate of a particular complication for a particular operation is 1%, what are the chances I will encounter this particular complication if I perform this particular operation on 13 patients?
Many thanks,
Sweetcaroline6
2. You must be very careful with this sort of thing.
What are the chances? 1?
How many can you expect to see? 13 * 0.01 = 0.13 - but don't let this persuade you to think it is zero.
The value 1% is created by observing a large population, probably several thousand cases. You simply must not apply this to a single individual.
A specific patient's history, condition, response, etc. is far more important than the 1%.
Even if 1% is exacty correct. Do the operation 100 times and tell them all that there will be no complication. Are you really okay with the one individual, the one family, that is now gravely disappointed in your knowledge or ability or that now needs to talk to your lawyer?
In my view, you need to look back on the patient, not forward, before you discuss risks. Any given patient may already be in the group that will have the complication; you just don't know it. The risk should be discussed very carefully with EACH patient so that 99 of them can be surprised.
Technically, p(0) = 0.99^13 = 0.8775 -- Probability that you will not see a case with the complication
p(1) = 13(0.99^12)*0.01 = 0.1152 -- Probability that you will see one such case.
p(2) = 78(0.99^11)*(0.01^2) = 0.0070 -- Probability that you will see TWO. Did you know this was not zero?
p(3 or more) = 1-p(2)-p(1)-p(0) = 0.0003 -- Another perhaps surprising non-zero probability.
3. Do you know how to calculate different kinds of probability?
If you do, then let's consider that you're successful in every operation. What's the probability for that happening?
4. Originally Posted by sweetcaroline6
I am a surgeon. If the rate of a particular complication for a particular operation is 1%, what are the chances I will encounter this particular complication if I perform this particular operation on 13 patients?
Many thanks,
Sweetcaroline6
You're a surgeon ....? Right. And I'm Batman.
X ~ Binomial(p = 0.01, n = 13).
Calculate Pr(X > 0) = 1 - Pr(X = 0).
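The numbers quoted in this thread, computed directly (my own addition; binomial with n = 13, p = 0.01):

```python
from math import comb

n, p = 13, 0.01
pr = [comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]
print(round(pr[0], 4), round(pr[1], 4), round(pr[2], 4))   # 0.8775 0.1152 0.007
print(round(1 - pr[0], 4))                                 # Pr(X > 0) = 0.1225
```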
http://math.stackexchange.com/questions/27465/symmetrical-transitional-and-antisymmetrical-question
# Symmetrical, Transitional and Antisymmetrical question!
If $R \subseteq A \times A$, is it true that $R$ is symmetrical, since if $xRy$ then $yRx$?
I have written that this is also antisymmetrical: if both $x\leq y$ and $y\leq x$ then $x = y$.
How do I then make $R \subseteq A \times A$ transitive?
In English, it's called "transitive". – Arturo Magidin Mar 16 '11 at 21:38
You are either confused, or we are running against a language barrier (both are certainly possible).
It is not true that simply by virtue of being a subset of $A\times A$, a subset $R$ will be "symmetrical."
Rather: we define $R$ to be "symmetrical" if and only if for every $x$ and $y$ in $A$, if $(x,y)\in R$ (that is, if $xRy$), then $(y,x)\in R$ (that is, $yRx$).
An example of a symmetric relation on $\mathbb{R}$ is $$R = \{ (a,b)\in\mathbb{R}\times\mathbb{R} \mid |a|=|b|\}$$ since, if $aRb$, then $|a|=|b|$, hence $|b|=|a|$, hence $bRa$.
An example of a non-symmetric relation on $\mathbb{R}$ is $$S = \{(a,b)\in\mathbb{R}\times\mathbb{R} \mid a\geq 0\}.$$ It is not symmetric, because $(1,-1)\in S$, but $(-1,1)\notin S$.
We define $R$ to be "antisymmetric" if and only if for every $x$ and $y$ in $A$, if $xRy$ and $yRx$ are both true, then $x=y$.
(You have the implication reversed).
In the example $R$ above, $R$ is not antisymmetric, because $(1,-1)\in R$, $(-1,1)\in R$, but $-1\neq 1$.
However, an example of an antisymmetric relation on $\mathbb{R}$ is: $$T = \{(a,b)\in\mathbb{R}\times\mathbb{R}\mid a\leq b\}.$$ This is antisymmetric because if $(a,b)\in T$ and $(b,a)\in T$, that means that $a\leq b$ and $b\leq a$, so then we conclude that $a=b$.
Finally, a relation $R\subseteq A\times A$ is said to be "transitive" if and only if for every $a,b,c\in A$, if $aRb$ and $bRc$ both hold, then $aRc$ holds.
All three of $R$, $S$, and $T$ above are transitive. To see an example of a relation on $\mathbb{R}$ that is not transitive, let $$U = \{(a,b)\in\mathbb{R}\times\mathbb{R}\mid a\neq b\}.$$ Then $0U1$ holds, and $1U0$ holds, but $0U0$ does not hold. So $U$ is not transitive.
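For small finite relations you can simply test the three properties by brute force; a sketch (my own addition, not part of the answer):

```python
def is_symmetric(R):
    return all((y, x) in R for (x, y) in R)

def is_antisymmetric(R):
    return all(x == y for (x, y) in R if (y, x) in R)

def is_transitive(R):
    return all((x, w) in R for (x, y) in R for (z, w) in R if y == z)

# The relation U above, restricted to the finite set {0, 1, 2}:
U = {(a, b) for a in range(3) for b in range(3) if a != b}
print(is_symmetric(U), is_antisymmetric(U), is_transitive(U))   # True False False
```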
• I think it was both language barrier, and I was confused by symmetric and antisymmetric meaning. It now makes more sense. It's the small things like this that make the rest of the path. Thank you. – user8322 Mar 16 '11 at 21:54
https://www.physicsforums.com/threads/could-this-solution-be-accepted.55262/
# Homework Help: Could this solution be accepted?
1. Dec 4, 2004
### twoflower
Hi, assume this sum:
$$\sum_{n = 1}^{\infty} \frac{ \sqrt{n^2 + n} - \sqrt[3]{n^3 + n}}{ \sqrt{n^3}}$$
So I chose this divergent sum:
$$b_{n} = \frac{1}{n}$$
to compare with my original sum. Then
$$\lim_{n \rightarrow \infty} \frac{a_{n}}{b_{n}} = \lim_{n \rightarrow \infty} \frac{ \sqrt{n^4 + n^3} - \sqrt[3]{n^6 + n^4}}{ \sqrt{n^3}} = \lim_{n \rightarrow \infty} \sqrt { \frac{ n^4 + n^3}{n^3}} - \lim_{n \rightarrow \infty} \sqrt[3]{ \frac{n^6 + n^4}{n^9}} = \infty - 0 = \infty$$
Thus my original sum diverges (is "larger" than the divergent sum 1/n).
Would this solution be accepted in a test?
Thank you.
2. Dec 4, 2004
### WaR
I'm not entirely sure what you did in the first step of your limit, but
$$\lim_{n \rightarrow \infty} \frac{a_{n}}{b_{n}}$$ using your suggested $$a_{n}$$ and $$b_{n}$$ is equal to $$0$$ not $$\infty$$.
Your Limit Comparison Test would be inconclusive.
3. Dec 4, 2004
### twoflower
Whooops I see it. Hm that's bad, I thought I had it :-(
4. Dec 4, 2004
### arildno
Twoflower:
In order to handle this sum, you should first rewrite the expression for the terms as follows:
$$a_{n}=\frac{\sqrt{1+\frac{1}{n}}-\sqrt[3]{1+\frac{1}{n^{2}}}}{\sqrt{n}}$$
Your next step should be a clever use of Taylor series approximations to the two radicand expressions in your numerator, which both are of the following form (provided n is large):
$$(1+\epsilon)^{a},\epsilon<<1$$
You should be able to show that the Taylor series representations for fractional "a"'s are ULTIMATELY an alternating series with terms of decreasing magnitude.
Use that fact to bound your numerator; you should end up with the conclusion that your series is convergent.
good luck!
5. Dec 4, 2004
### shmoe
You should be able to do a direct comparison test. The annoying thing is the roots in the numerator. Try bounding the polynomial $$n^2+n$$ by a polynomial that's a perfect square so you can remove this root. Aim for a perfect cube for the other one.
Remember the difference in sign, so if you are trying to bound the entire expression from above, bound $$n^2+n$$ from above and $$n^3+n$$ from below (vice versa if you want a lower bound-I'll leave it to you to determine whether you want an upper or a lower bound). Also remember that this bound only needs to hold for large n.
6. Dec 4, 2004
### twoflower
Thank you for the solution arildno, unfortunately I don't know Taylor series yet. Maybe I should learn them in advance when I see the amount of sums solvable with them...
7. Dec 4, 2004
### arildno
Then you should use shmoe's suggestion.
8. Dec 4, 2004
### twoflower
Thank you shmoe, however I don't fully understand the second part of your answer. Yes, I can multiply it with such expressions, so that I get this:
$$\sum_{n=1}^{ \infty} \frac{ n + 1 }{ \sqrt{ n^3 + n^2 } } - \frac{ n - 1 }{ \sqrt{ n^3 } \left( \sqrt[3]{ 1 - \frac{ 3n^2 - 2n + 1 }{ n^3 + n } } \right) }$$
Well, it seems yet more complicated to me...
9. Dec 4, 2004
### arildno
In order to bound the terms along shmoe's lines, do as follows:
1) We first prove:
The 3rd root expression in the numerator is strictly less than the square root expression:
$$n(\sqrt{1+\frac{1}{n}}-\sqrt[3]{1+\frac{1}{n^{2}}})=n(\sqrt{1+\frac{1}{n}}-\sqrt{1+\frac{1}{n^{2}}}\frac{\sqrt[3]{1+\frac{1}{n^{2}}}}{\sqrt{1+\frac{1}{n^{2}}}})=n(\sqrt{1+\frac{1}{n}}-\frac{\sqrt{1+\frac{1}{n^{2}}}}{(1+\frac{1}{n^{2}})^{\frac{1}{6}}})\geq{n}(\sqrt{1+\frac{1}{n}}-\sqrt{1+\frac{1}{n^{2}}})\geq{0}$$
2) We bound therefore the first term by:
$$\sqrt{n^{2}+n}\leq\sqrt{(n+1)^{2}}=n+1$$
3) We bound the negative term as follows:
$$-\sqrt[3]{n^{3}+n}\leq -\sqrt[3]{n^{3}}=-n$$
4)
Hence,
$$a_{n}\leq\frac{(n+1)+(-n)}{\sqrt{n^{3}}}=\frac{1}{n^{\frac{3}{2}}}$$
10. Dec 4, 2004
### twoflower
Wow, now I'm clear about that. Your suggestions are always very "brain-freshing" for me, arildno :) Thank you.
11. Dec 4, 2004
### twoflower
Yeah and one more question: why do we have to prove that the 3rd root expression in the numerator is strictly less than the square root expression? Wouldn't the two following bounds be sufficient?
12. Dec 4, 2004
### shmoe
You need to show the terms are positive as well. Just knowing they are less than $$1/n^{3/2}$$ doesn't stop them from being very negative.
You can use arildno's method, or you can show a weaker $$\sqrt[3]{n^3 + n}-\sqrt{n^2 + n}<1$$ using bounds similar to our earlier ones. This will actually show $$|\sqrt{n^2 + n} - \sqrt[3]{n^3 + n}|<1$$ which will show your sum is absolutely convergent (which you'd know anyway if you had proven it was positive).
13. Dec 4, 2004
### twoflower
Ok I have it now, thank you shmoe!
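A numeric sanity check of arildno's bound (my own addition, not part of the thread): every term is positive and at most $1/n^{3/2}$, and the partial sums settle down.

```python
s = 0.0
for n in range(1, 200001):
    a = ((n**2 + n)**0.5 - (n**3 + n)**(1 / 3)) / n**1.5
    assert 0.0 <= a <= n**-1.5      # positivity and arildno's upper bound
    s += a
print(s)   # roughly 0.81, and it barely changes if the cutoff is raised
```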
https://www.physicsforums.com/members/wellorderedset.487240/recent-content | Recent content by WellOrderedSet
1. Chronological Order to Study Mathematics
Another text that many people gush over is G. H. Hardy's A Course of Pure Mathematics. I've never read it, but at least a couple of editions of it are in the public domain. (Check archive.org.)
2. Chronological Order to Study Mathematics
While Courant's book isn't as rigorous as Spivak's, it's certainly more rigorous than any of the more popular undergraduate texts. Stewart, for instance, is merely bearable: most of the proofs are there, dutifully interlarded, but they're generally useless with regard to the end-of-chapter...
3. Chronological Order to Study Mathematics
Differential and Integral Calculus Vol I. -- Richard Courant
Differential and Integral Calculus Vol II. -- Richard Courant
Calculus with Analytic Geometry, 2nd Ed. -- George F. Simmons
Calculus and Analytic Geometry, 4th Ed. -- George B. Thomas
Linear Algebra -- Jim Hefferon
Linear Algebra...
https://questions.examside.com/past-years/gate/gate-ee/engineering-mathematics/calculus | GATE EE / Engineering Mathematics / Calculus: Previous Years Questions
## Marks 1
Let $$f(x) = \int\limits_0^x {{e^t}(t - 1)(t - 2)dt}$$. Then f(x) decreases in the interval.
Let R be a region in the first quadrant of the xy plane enclosed by a closed curve C considered in counter-clockwise direction. Which of the following...
Let $$y^2 - 2y + 1 = x$$ and $$\sqrt x + y = 5$$. The value of $$x + \sqrt y$$ equals ________. (Give the answer up to three decima...
Consider a function $$f(x,y,z)$$ given by $$f(x,y,z) = (x^2 + y^2 - 2z^2)(y^2 + z^2)...$$
Let $$x$$ and $$y$$ be integers satisfying the equations $$2x^2 + y^2 = 34$$ and $$x + 2y = 11$$. The value of $$(x+y)$$ is _________....
Let $$I = c\iint_R xy^2\,dx\,dy$$, where $$R$$ is the region shown in the figure and $$c = 6 \times 10^{-4}$$. The value of ...
The maximum value attained by the function $$f(x)=x(x-1)(x-2)$$ in the interval $$[1,2]$$ is _________.
A particle, starting from origin at $$t=0$$ s, is traveling along the $$x$$-axis with velocity $$v = \frac{\pi}{2}\cos\left(\frac{\pi}{2}t\right)...$$
Minimum of the real valued function $$f\left( x \right) = {\left( {x - 1} \right)^{2/3}}$$ occurs at $$x$$ equal to
Let $$f\left( x \right) = x{e^{ - x}}.$$ The maximum value of the function in the interval $$\left( {0,\infty } \right)$$ is
A function $$y = 5{x^2} + 10x\,\,$$ is defined over an open interval $$x=(1,2).$$ At least at one point in this interval, $${{dy} \over {dx}}$$ is exa...
Roots of the algebraic equation $${x^3} + {x^2} + x + 1 = 0$$ are
The function $$f\left( x \right) = 2x - {x^2} + 3\,\,$$ has
At $$t=0,$$ the function $$f\left( t \right) = {{\sin t} \over t}\,\,$$ has
Consider the function $$f\left( x \right) = {\left( {{x^2} - 4} \right)^2}$$ where $$x$$ is a real number. Then the function has
For the function $$f\left( x \right) = {x^2}{e^{ - x}},$$ the maximum occurs when $$x$$ is equal to
If $$S = \int\limits_1^\infty {{x^{ - 3}}dx}$$ then $$S$$ has the value
The area enclosed between the parabola $$y = {x^2}$$ and the straight line $$y=x$$ is _______.
$$\lim_{\theta \to 0} \frac{\sin m\theta}{\theta}$$, where $$m$$ is an integer, is one of the following:
$$\lim_{x \to \infty} x\sin\frac{1}{x} =$$ _______
If $$f(0)=2$$ and $$f'(x) = \frac{1}{5 - x^2}$$, then the lower and upper bounds of $$f(1)$$ estimated by the mean value theorem are...
The integration of $$\int \log x\,dx$$ has the value
The volume generated by revolving the area bounded by the parabola $$y^2 = 8x$$ and the line $$x=2$$ about the $$y$$-axis is

## Marks 2

Let $$g(x) = \begin{cases} -x & x \le 1 \\ x+1 & x \ge 1 \end{cases}$$ and $$f(x)...$$
A function $$f(x)$$ is defined as $$f(x) = \begin{cases} e^x, & x < 1 \\ \ln x + ax^2 + bx, & x \ge 1 \end{cases}$$...
The value of the integral $$2\int_{-\infty}^{\infty} \frac{\sin 2\pi t}{\pi t}\,dt$$ is equal to
Let $$S = \sum_{n=0}^{\infty} n\alpha^n$$ where $$|\alpha| < 1$$. The value of $$\alpha$$ in the range...
The volume enclosed by the surface $$f(x,y) = e^x$$ over the triangle bounded by the lines $$x=y$$; $$x=0$$; $$y=1$$ in the $$xy$$ pl...
To evaluate the double integral $$\int_0^8 \left(\int_{y/2}^{(y/2)+1} \frac{2x - y}{2}\,dx\right)...$$
The minimum value of the function $$f\left( x \right) = {x^3} - 3{x^2} - 24x + 100$$ in the interval $$\left[ { - 3,3} \right]$$ is
The maximum value of $$f\left( x \right) = {x^3} - 9{x^2} + 24x + 5$$ in the interval $$\left[ {1,6} \right]$$ is
The value of the quantity $$P$$, where $$P = \int_0^1 xe^x\,dx$$, is
If $$f(x, y)$$ is a continuous function defined over $$(x,y) \in [0,1] \times [0,1]$$. Given two cons...
The integral $$\frac{1}{2\pi}\int_0^{2\pi} \sin(t - \tau)\cos\tau\,d\tau$$ equals
The expression $$V = \int\limits_0^H {\pi {R^2}{{\left( {1 - {h \over H}} \right)}^2}dh\,\,\,}$$ for the volume of a cone is equal to _________.
http://mathhelpforum.com/advanced-algebra/159812-do-they-form-vector-space.html | # Math Help - Do they form a vector space?
1. ## Do they form a vector space?
Do n-by-n traceless matrices form a vector space over the field C of complex numbers? Why or why not? If yes, show a basis and give the dimension of the vector space!
Over the field R of real numbers, I know that they form a vector space:
Tr(A+B) = Tr(A) + Tr(B)
Tr(c*A) = c*Tr(A)
The 0 matrix is in the vector space
The matrix -A is in
But I don't know what a basis over the real numbers is.
And I don't even know if they form a vector space over the field C of complex numbers. I suppose they do, but I am not sure; I can't prove it.
2. The only difference between "over the field R" and "over the field C" is that, in the second, the scalar, "c", you multiply by could be a complex number. Is Tr(c*A) = c*Tr(A) when c is any complex number?
3. Originally Posted by HallsofIvy
The only difference between "over the field R" and "over the field C" is that, in the second, the scalar, "c", you multiply by could be a complex number. Is Tr(c*A) = c*Tr(A) when c is any complex number?
Yes, I think it is true that Tr(c*A) = c*Tr(A) when c is any complex number. Am I right? And if so, does it mean that the basis and the dimension of the vector space are the same over the field R and over the field C?
Thanks for helping me!
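(On the open basis question: a minimal sketch of my own, not from the thread, that builds the standard basis of traceless n-by-n matrices, namely the off-diagonal matrix units E_ij for i != j together with E_ii - E_nn for i < n, and checks that the dimension over C is n^2 - 1.)

```python
import numpy as np

def traceless_basis(n):
    """Basis of n-by-n traceless matrices: E_ij (i != j) and E_ii - E_nn (i < n)."""
    basis = []
    for i in range(n):
        for j in range(n):
            if i != j:
                E = np.zeros((n, n), dtype=complex)
                E[i, j] = 1
                basis.append(E)
    for i in range(n - 1):
        E = np.zeros((n, n), dtype=complex)
        E[i, i], E[n - 1, n - 1] = 1, -1
        basis.append(E)
    return basis

B = traceless_basis(3)
print(len(B))                                        # 8 == 3**2 - 1
print(all(abs(np.trace(E)) < 1e-12 for E in B))      # every basis element is traceless
c, A = 2 + 3j, B[0] + 1j * B[-1]                     # a complex scalar and combination
print(np.isclose(np.trace(c * A), c * np.trace(A)))  # Tr(c*A) = c*Tr(A) holds over C
```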
http://mathhelpforum.com/geometry/204937-volume.html | # Math Help - Volume
1. ## Volume
A cone is attached to a hemisphere of radius 4 cm. If the total height of the object is 10 cm, find its volume.
2. ## Re: Volume
Let's calculate the volume of the cone and the volume of the hemisphere separately and then add them. I will be accurate to two decimal places.
Hemisphere: V = (2/3)*pi*r^3 = 128*pi/3 = 134.04
If the height of the entire object is 10, and the radius of the hemisphere is 4, then the height of the cone is 10 - 4 = 6.
Cone: V = (1/3)*pi*r^2*h = 32*pi = 100.53
Vhemisphere + Vcone = 134.04 + 100.53 = 234.57
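(For an exact, rounding-free version of the same computation: a quick sketch assuming the stated radius of 4 cm and total height of 10 cm.)

```python
from sympy import pi, Rational

r, h = 4, 6                                # hemisphere radius; cone height = 10 - 4
V_hemisphere = Rational(2, 3) * pi * r**3  # (2/3)*pi*r^3 = 128*pi/3
V_cone = Rational(1, 3) * pi * r**2 * h    # (1/3)*pi*r^2*h = 32*pi
print(V_hemisphere + V_cone)               # 224*pi/3
print(float(V_hemisphere + V_cone))        # 234.5722...
```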
3. ## Re: Volume
Hello, Farisco!
A cone is attached to a hemisphere of radius 4 cm.
If the total height of the object is 10 cm, find its volume.
Code:
- * * *
: * : *
: * : *
: * :4 *
: :
: * - - - - * - - - - *
: \ 4 : 4 /
10 \ : /
: \ : /
: \ :6 /
: \ : /
: \ : /
: \ : /
: \ : /
: \:/
- *
We have a hemisphere with radius 4.
We have cone with radius 4 and height 6.
You should be able to find the total volume
. . without a calculator and without rounded-off decimals.
A sphere has volume: . $V \:=\:\tfrac{4}{3}\pi r^3$, where $r$ is the radius.
A half-sphere with radius 4 has volume: . $V \:=\:\tfrac{1}{2} \times \tfrac{4}{3}\pi(4^3) \:=\:\frac{128\pi}{3}$
A circular cone has volume: . $V \:=\:\tfrac{\pi}{3}r^2h\;\;(r = \text{radius, }\:h = \text{height})$
A cone with $r = 4,\,h=6$ has volume: . $V \;=\;\tfrac{\pi}{3}(4^2)(6) \:=\:32\pi$
The total volume is: . $\frac{128\pi}{3} + 32\pi \;=\;\boxed{\frac{224\pi}{3}\text{ cm}^3}$
If you want a decimal, now is the time to crank it out:
. . . . $\boxed{2}\,\boxed{2}\,\boxed{4}\;\;\boxed{\times} \;\; \boxed{\pi}\;\;\boxed{\div}\;\;\boxed{3}\;\;\boxed {=}$
and we get: . $\boxed{234.5722515}$
https://proofwiki.org/wiki/Closed_Real_Interval_is_not_Open_Set | # Closed Real Interval is not Open Set
## Theorem
Let $\R$ be the real number line considered as a Euclidean space.
Let $\closedint a b \subset \R$ be a closed interval of $\R$.
Then $\closedint a b$ is not an open set of $\R$.
## Proof
From Closed Real Interval is Neighborhood Except at Endpoints, no open $\epsilon$-ball about $a$ or about $b$ lies entirely in $\closedint a b$.
The result follows by definition of open set.
$\blacksquare$
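For concreteness, an explicit witness (the case of $b$ is symmetric): for every $\epsilon > 0$ the point $a - \dfrac \epsilon 2$ lies in the open $\epsilon$-ball about $a$, yet $a - \dfrac \epsilon 2 < a$ gives $a - \dfrac \epsilon 2 \notin \closedint a b$. Hence no open $\epsilon$-ball about $a$ is contained in $\closedint a b$.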
http://fingerwaverecords.com/nf0cz/steradian-to-degree-302b9e | The steradian (symbol: sr), or square radian, is the SI unit of solid angle. It is used in three-dimensional geometry and is analogous to the radian, which quantifies planar angles: whereas an angle in radians, projected onto a circle, gives a length on the circumference, a solid angle in steradians, projected onto a sphere, gives an area on the surface. The name is derived from the Greek στερεός (stereos, "solid") + radian.
A steradian is the solid angle subtended at the center of a unit sphere by a unit area on its surface. Both the numerator and the denominator of the defining ratio have dimension length squared (L^2/L^2 = 1), so the steradian is dimensionless; the symbol "sr" is kept only to indicate that the quantity is a solid angle. Since the surface area of a sphere is 4πr^2, a full sphere subtends 4π sr ≈ 12.57 sr at its center, and by the same argument 4π sr is the maximum solid angle that can be subtended at any point; one steradian is thus about 1/(4π), or roughly 8%, of a sphere. The steradian was, together with the radian, an SI supplementary unit from 1960, but that category was removed from the International System in 1995, and the steradian is now an SI derived unit.
One steradian equals (180/π)^2 ≈ 3282.81 square degrees. The square degree, °², is a much smaller non-SI unit of solid angle: 1°² = (π/180)^2 sr ≈ 0.0003046174 sr. For plane angles, 1° = π/180 rad ≈ 0.01745329252 rad and 1 rad = 180°/π ≈ 57.295779513°; to convert degrees to radians, multiply by π/180 (for example, 120° = 120π/180 = 2π/3 rad). In astronomy, one hour (hr) of Right Ascension corresponds to 15°, since 24 hr make up a full circle.
The solid angle of a right circular cone with half-angle θ at the apex is Ω = 2π(1 - cos θ); equivalently, if θ denotes the full apex angle, Ω = 2π(1 - cos(θ/2)). Examples: the Moon seen from Earth has an apparent diameter of about 0.5°, corresponding to a solid angle of roughly 6×10^-5 sr; the foveal zone of the eye subtends about θ = 3° (0.002 sr), while the gaze of a human eye covers roughly 0.5 sr. Millisteradians (msr) and microsteradians (μsr) are occasionally used to describe light and particle beams.
Radiant intensity (how brightly something shines) can be measured in watts per steradian (W/sr). Example: at a distance of 2 m, one steradian cuts through 2×2 = 4 m^2 of the sphere; a flat 0.05 m × 0.05 m sensor occupies 0.05 × 0.05 = 0.0025 m^2, approximately its share of that sphere, so a reading of 0.1 W on the sensor corresponds to about 0.1 W × (4 m^2 / 0.0025 m^2) = 160 W/sr.
A lighting-design example from the page (truncated in the original): a fixture with a 10° beam angle must provide a required illuminance of E2 = 1390 lux at a surface 3 m away; the task is to calculate the fixture's lumen output and the diameter of the illuminated area from these data.
https://physics.aps.org/synopsis-for/10.1103/PhysRevLett.107.250504 | # Synopsis: Defeating Bedlam
Combining two noisy information channels can yield noiseless information transmission.
How much information can be transmitted over a noisy channel? Noise plagues nearly every channel, and thus error-correction must be used to maximize the transmission rate; by adding more and more channels the error can be driven to zero. However, some channels are so noisy that, like a blocked drainpipe, nothing useful can get through. Extending the analogy, it would seem crazy to think that combining two such blocked pipes would let the water flow, yet in the case of quantum mechanics, theorists have shown that the phenomenon of entanglement can clear out the information stoppage.
Writing in Physical Review Letters, Jianxin Chen, at the University of Guelph and the University of Waterloo, Canada, and colleagues show theoretically that pairs of “noise-blocked” quantum channels can, in combination, transmit classical information with zero error. This is a form of “superadditivity,” an effect in which the upper limit of information capacity of two combined channels can exceed the summed capacities of the individual channels. The authors show that not only can noise be entirely defeated for transmitting classical information over quantum channels, but that the result also holds for quantum information, which is more fragile and beset by decoherence. – David Voss
https://resonaances.blogspot.com/2018/06/can-miniboone-be-right.html | Tuesday, 5 June 2018
Can MiniBooNE be right?
The experimental situation in neutrino physics is confusing. On the one hand, a host of neutrino experiments has established a consistent picture where the neutrino mass eigenstates are mixtures of the 3 Standard Model neutrino flavors νe, νμ, ντ. The measured mass differences between the eigenstates are Δm12^2 ≈ 7.5*10^-5 eV^2 and Δm13^2 ≈ 2.5*10^-3 eV^2, suggesting that all Standard Model neutrinos have masses below 0.1 eV. That is well in line with cosmological observations which find that the radiation budget of the early universe is consistent with the existence of exactly 3 neutrinos with the sum of the masses less than 0.2 eV. On the other hand, several rogue experiments refuse to conform to the standard 3-flavor picture. The most severe anomaly is the appearance of electron neutrinos in a muon neutrino beam observed by the LSND and MiniBooNE experiments.
This story begins in the previous century with the LSND experiment in Los Alamos, which claimed to observe νμνe antineutrino oscillations with 3.8σ significance. This result was considered controversial from the very beginning due to limitations of the experimental set-up. Moreover, it was inconsistent with the standard 3-flavor picture which, given the masses and mixing angles measured by other experiments, predicted that νμνe oscillation should be unobservable in short-baseline (L ≼ km) experiments. The MiniBooNE experiment in Fermilab was conceived to conclusively prove or disprove the LSND anomaly. To this end, a beam of mostly muon neutrinos or antineutrinos with energies E~1 GeV is sent to a detector at the distance L~500 meters away. In general, neutrinos can change their flavor with the probability oscillating as P ~ sin^2(Δm^2 L/4E). If the LSND excess is really due to neutrino oscillations, one expects to observe electron neutrino appearance in the MiniBooNE detector given that L/E is similar in the two experiments. Originally, MiniBooNE was hoping to see a smoking gun in the form of an electron neutrino excess oscillating as a function of L/E, that is peaking at intermediate energies and then decreasing towards lower energies (possibly with several wiggles). That didn't happen. Instead, MiniBooNE finds an excess increasing towards low energies with a similar shape as the backgrounds. Thus the confusion lingers on: the LSND anomaly has neither been killed nor robustly confirmed.
In spite of these doubts, the LSND and MiniBooNE anomalies continue to arouse interest. This is understandable: as the results do not fit the 3-flavor framework, if confirmed they would prove the existence of new physics beyond the Standard Model. The simplest fix would be to introduce a sterile neutrino νs with the mass in the eV ballpark, in which case MiniBooNE would be observing the νμνsνe oscillation chain. With the recent MiniBooNE update the evidence for the electron neutrino appearance increased to 4.8σ, which has stirred some commotion on Twitter and in the blogosphere. However, I find the excitement a bit misplaced. The anomaly is not really new: similar results showing a 3.8σ excess of νe-like events were already published in 2012. The increase of the significance is hardly relevant: at this point we know anyway that the excess is not a statistical fluke, while a systematic effect due to underestimated backgrounds would also lead to a growing anomaly. If anything, there are now fewer reasons than in 2012 to believe in the sterile neutrino origin of the MiniBooNE anomaly, as I will argue in the following.
What has changed since 2012? First, there are new constraints on νe appearance from the OPERA experiment (yes, this OPERA) who did not see any excess νe in the CERN-to-Gran-Sasso νμ beam. This excludes a large chunk of the relevant parameter space corresponding to large mixing angles between the active and sterile neutrinos. From this point of view, the MiniBooNE update actually adds more stress on the sterile neutrino interpretation by slightly shifting the preferred region towards larger mixing angles... Nevertheless, a not-too-horrible fit to all appearance experiments can still be achieved in the region with Δm^2~0.5 eV^2 and the mixing angle sin^2(2θ) of order 0.01.
Next, the cosmological constraints have become more stringent. The CMB observations by the Planck satellite do not leave room for an additional neutrino species in the early universe. But for the parameters preferred by LSND and MiniBooNE, the sterile neutrino would be abundantly produced in the hot primordial plasma, thus violating the Planck constraints. To avoid it, theorists need to deploy a battery of tricks (for example, large sterile-neutrino self-interactions), which makes realistic models rather baroque.
But the killer punch is delivered by disappearance analyses. Benjamin Franklin famously said that only two things in this world were certain: death and probability conservation. Thus whenever an electron neutrino appears in a νμ beam, a muon neutrino must disappear. However, the latter process is severely constrained by long-baseline neutrino experiments, and recently the limits have been further strengthened thanks to the MINOS and IceCube collaborations. A recent combination of the existing disappearance results is available in this paper. In the 3+1 flavor scheme, the probability of a muon neutrino transforming into an electron one in a short-baseline experiment is
P(νμ→νe) ≈ 4|Ue4|^2|Uμ4|^2 sin^2(Δm^2 L/4E),
where U is the 4x4 neutrino mixing matrix. The Uμ4 matrix element also controls the νμ survival probability,
P(νμ→νμ) ≈ 1 - 4|Uμ4|^2(1-|Uμ4|^2) sin^2(Δm^2 L/4E).
The νμ disappearance data from MINOS and IceCube imply |Uμ4|≼0.1, while |Ue4|≼0.25 from solar neutrino observations. All in all, the disappearance results imply that the effective mixing angle sin^2(2θ) controlling the νμνsνe oscillation must be much smaller than 0.01 required to fit the MiniBooNE anomaly. The disagreement between the appearance and disappearance data had already existed before, but was actually made worse by the MiniBooNE update.
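(To put rough numbers on this tension: a minimal sketch, my own illustration, using the standard 3+1 short-baseline formula with the ballpark values quoted above rather than fitted parameters; Δm^2 in eV^2, L in km, E in GeV.)

```python
import math

def P_mue(Ue4, Umu4, dm2, L_km, E_GeV):
    """3+1 appearance probability: 4|Ue4|^2|Umu4|^2 sin^2(1.27 dm^2 L/E)."""
    return 4 * Ue4**2 * Umu4**2 * math.sin(1.27 * dm2 * L_km / E_GeV) ** 2

# MiniBooNE-like setup: L ~ 0.5 km, E ~ 1 GeV, dm^2 ~ 0.5 eV^2,
# with |Umu4| <~ 0.1 and |Ue4| <~ 0.25 from the disappearance limits
print(P_mue(0.25, 0.1, 0.5, 0.5, 1.0))  # ~2e-4: a tiny appearance probability
print(4 * 0.25**2 * 0.1**2)             # effective sin^2(2theta) = 0.0025 << 0.01
```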
So the hypothesis of a 4th sterile neutrino does not stand scrutiny as an explanation of the MiniBooNE anomaly. It does not mean that there is no other possible explanation (more sterile neutrinos? non-standard interactions? neutrino decays?). However, any realistic model will have to delve deep into the crazy side in order to satisfy the constraints from other neutrino experiments, flavor physics, and cosmology. Fortunately, the current confusing situation should not last forever. The MiniBooNE photon background from π0 decays may be clarified by the ongoing MicroBooNE experiment. On the timescale of a few years the controversy should be closed by the SBN program in Fermilab, which will add one near and one far detector to the MicroBooNE beamline. Until then... years of painful experience have taught us to assign a high prior to the Standard Model hypothesis. Currently, by far the most plausible explanation of the existing data is an experimental error on the part of the MiniBooNE collaboration.
Luboš Motl said...
Your explanation must be an error by both LSND and MiniBooNE, right? How does it happen that they made errors with the compatible consequences - in two rather different experiments? What kind of an error is it? I also think it should be an error - these are my real questions, not rhetorical ones.
Unknown said...
Yes, both are probably wrong. The similarity between the two experiments is the lack of a near detector that would allow one to get a robust measurement of the background. Moreover, the two collaborations have a significant overlap (which may or may not be relevant for the error propagation). If I knew what the error is, our trouble would be over; certainly, the experimentalists involved are more competent than me in this respect. Sometimes π0 decays to photons are pointed out as the most difficult and suspicious background.
StevieB said...
I'm American, and I never knew Benjamin Franklin said that; I always learn something on this blog. Thanks, Jester!
andrew said...
The Daya Bay and Reno collaborations have both suggested strongly that the composition of the reactor fuel (which changes over time) is a likely source of the reactor anomalies observed at LSND and MiniBooNE, something that the recent preprint from MiniBooNE does not analyze rigorously or figure into its systematic error budget in a meaningful way.
A preprint from Daya Bay in 2017 (https://arxiv.org/abs/1704.01082) notes in its abstract that:
"The Daya Bay experiment has observed correlations between reactor core fuel evolution and changes in the reactor antineutrino flux and energy spectrum. Four antineutrino detectors in two experimental halls were used to identify 2.2 million inverse beta decays (IBDs) over 1230 days spanning multiple fuel cycles for each of six 2.9 GWth reactor cores at the Daya Bay and Ling Ao nuclear power plants. Using detector data spanning effective 239Pu fission fractions, F239, from 0.25 to 0.35, Daya Bay measures an average IBD yield, σ¯f, of (5.90±0.13)×10−43 cm2/fission and a fuel-dependent variation in the IBD yield, dσf/dF239, of (−1.86±0.18)×10−43 cm2/fission. This observation rejects the hypothesis of a constant antineutrino flux as a function of the 239Pu fission fraction at 10 standard deviations. The variation in IBD yield was found to be energy-dependent, rejecting the hypothesis of a constant antineutrino energy spectrum at 5.1 standard deviations. While measurements of the evolution in the IBD spectrum show general agreement with predictions from recent reactor models, the measured evolution in total IBD yield disagrees with recent predictions at 3.1σ. This discrepancy indicates that an overall deficit in measured flux with respect to predictions does not result from equal fractional deficits from the primary fission isotopes 235U, 239Pu, 238U, and 241Pu. Based on measured IBD yield variations, yields of (6.17±0.17) and (4.27±0.26)×10−43 cm2/fission have been determined for the two dominant fission parent isotopes 235U and 239Pu. A 7.8% discrepancy between the observed and predicted 235U yield suggests that this isotope may be the primary contributor to the reactor antineutrino anomaly."
The Reno Collaboration also points to fuel discrepancies as the source of the reactor anomaly. It's abstract to a June 2, 2018 preprint (https://arxiv.org/abs/1806.00574) states:
"We report a fuel-dependent reactor antineutrino yield using six 2.8\,GWth reactors in the Hanbit nuclear power plant complex, Yonggwang, Korea. This analysis uses an event sample acquired through inverse beta decay (IBD) interactions in identically designed near and far detectors for 1807.9 live days from August 2011 to February 2018. Based on multiple fuel cycles, we observe a fuel dependent variation in the IBD yield of (6.03±0.21)×10−43~cm2/fission for 235U and (4.17±0.29)×10−43~cm2/fission for 239Pu while a total average IBD yield per fission (y⎯⎯⎯f) is (5.79±0.11)×10−43~cm2/fission. The hypothesis of no fuel dependent IBD yield or identical spectra of fuel isotopes is ruled out at more than 6\,σ. The measured IBD yield per 235U fission shows the largest deficit relative to a reactor model prediction. Reevaluation of the 235U IBD yield per fission may mostly solve reactor antineutrino anomaly. We also report a hint of correlation between the 5\,MeV excess in observed IBD spectrum and the reactor fuel isotope fraction of 235U."
Unknown said...
You're confusing the MiniBooNE and reactor anomalies. These are different things.
Avelino said...
Great post, Jester! With so many news about the discovery of sterile neutrinos a more realistic opinion was very much needed.
Nicophil said...
The other lines of evidence have been weakening. For example, theorists have for years noted that nuclear reactors appear to produce about 6% fewer electron antineutrinos than standard theory predicts (suggesting that some of them were perhaps oscillating into sterile neutrinos). However, in April 2017, physicists with the Daya Bay Reactor Neutrino Experiment near Shenzhen, China, reported that the entire deficit could be explained if theorists had simply overestimated the number of antineutrinos produced by one component of the complex fuel, uranium-235. Ha ha !
Chris said...
StevieB, I like Jester's paraphrasing, but the actual quote is "In this world nothing can be said to be certain, except death and taxes."
Pedro said...
Just to add on the discussion, an underestimated π0 background may be responsible for MiniBooNE low energy excess, but LSND is completely different. Their signal is inverse beta decay, that is anti-nue plus proton to e+ plus neutron. The experimental signature consists of prompt 511 keV photons from e+e- annihilation, and delayed 2.2 MeV photon from neutron capture. So π0's cannot explain LSND...
milkshake said...
I wouldn't be so quick to dismiss the LSND/MiniBooNE anomaly as an error just because there is a poor fit with results from other experiments: the energy range is substantially different from solar/cosmic-ray-generated neutrinos. And unlike the astronomy sources and reactor-produced antineutrinos, they have precise control over the production rates of their muon neutrinos and the direction of the beam.
If it does not fit the Planck-measured microwave background, well it is too bad but don't forget that the data from microwave background is not something taken straight from the detector display - there is a long chain of analysis dependent on the chosen cosmology model, which does contain adjustable parameters "to make things come out right" and now this piece of data ruins everyone's favorite setting of those knobs...
Maybe there is a source of systematic error that affects LSND/MiniBooNE - or maybe all the other experiments too, just to a different degree, and it wasn't noticed. But that in itself can be interesting. With regards to "the most likely scenario is just plain vanilla SM" - neutrino oscillations are already an extension of the SM; initially neutrinos were thought to be massless. Maybe the current model is simplistic: what if there is more than one right-handed massive neutrino...
John Duffield said...
Good post, Mad Hatter. IMHO there's an awful lot of inference when it comes to neutrinos, and not a lot of evidence. For example a lot of people take neutrino mass for granted, but I'm still waiting for somebody to show me a neutrino moving slower than light.
Unknown said...
That small window from LSND at around 1eV would remain if one just ignored the MiniBooNE result.
Unknown said...
I remain astonished that muon neutrino fluxes are so well-determined at high energy that a 1% loss of flux can be ruled out at the several-sigma level, as that should be the criterion for contradicting LSND and MiniBooNE. I am also astonished that particle physicists continue to take astrophysical constraints so seriously, as they depend on being certain in detail about 15 billion years of physics history, which has undergone a number of significant revisions since the last millennium. (That's a reaction to the blog's historical reference style.) Consistency is great, but can anyone rule out intermediate epochs of dark-energy-style acceleration of the expansion rate? The conclusions (oft revised despite Mike Turner's assurances) depend on knowing all of the possible variations from the current best-fit model. One must always be skeptical, but I recall Felix Boehm's long-held certainty that neutrinos were massless and could not oscillate. Besides, if any dark matter is spinorial in nature, we are assured that sterile neutrinos do exist -- and likely couple somehow.
Ara Ioannisian said...
arXiv:1909.08571
A Standard Model explanation for the excess of electron-like events in MiniBooNE
ABSTRACT:
We study the dependence of neutral current (NC) neutrino-induced π0/photon production (νμ + A → νμ + 1π0/γ + X) on the atomic number of the target nucleus, A, at 4-momentum transfers relevant to the MiniBooNE experiment: the Δ resonance mass region. Our conclusion is based on experimental data for photon-nucleus interactions from the A2 collaboration at the Mainz MAMI accelerator. We work in the approximation that decays of the Δ resonance are unaffected by its production channel, via photon or Z boson. 1π0 + X production scales as A^(2/3), the surface area of the nucleus. Meanwhile the photons created in Δ decays will leave the nucleus, and that cross section will be proportional to the atomic number of the nucleus. Thus the ratio of photon production to π0 production is proportional to A^(1/3). For carbon (^12C) this factor is ≈ 2.3. MiniBooNE normalises the rate of photon production to the measured π0 production rate. The reduced neutral pion production rate would yield at least twice as many photons as previously expected, thus significantly lowering the number of unexplained electron-like events.
http://mathhelpforum.com/differential-equations/122679-problem-ode-modeling.html | Math Help - A problem on ODE modeling
1. A problem on ODE modeling
I'm a beginner in ODEs and had a problem with this ODE modeling question:
The population of mosquitoes in a certain area increases at a rate proportional to the current population and, in the absence of other factors, the population doubles each week. There are 200,000 mosquitoes in the area initially, and predators (birds, bats, and so forth) eat 20,000 mosquitoes/day. Determine the population of mosquitoes in the area at any time.
It seems to be simple but I still cannot figure out the equation for this question.
The solution is dP/dt = (ln 2)*P - 140000.
The '140000' indicates the equation actually measures time in weeks, so it is 20,000*7 = 140000.
Then how about the 'ln 2'? It says that without any external factors the population doubles each week, so where does this 'ln 2' come from? Why not just '2'...
Thank you so much for your valuable help.
2. Originally Posted by rexhegemony
Without predation the population satisfies:
$\frac{dP}{dt}=kP$
which has solution $P(t)=P_0 e^{kt}$, as the population doubles in 1 week we have from this:
$e^k=2$
or:
$k=\ln(2)$
CB
3. oh very clear!
thank you so much!
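For completeness, here is the rest of the problem, which the thread leaves off (this closing note is an addition, not part of the original exchange). With predation included, the model is the linear ODE

$\frac{dP}{dt}=(\ln 2)P-140000, \qquad P(0)=200000,$

with $t$ in weeks. Its equilibrium is $P^*=\frac{140000}{\ln 2}\approx 201977$, and the general solution is

$P(t)=\frac{140000}{\ln 2}+\left(200000-\frac{140000}{\ln 2}\right)e^{(\ln 2)t}=\frac{140000}{\ln 2}+\left(200000-\frac{140000}{\ln 2}\right)2^{t}.$

Since $200000 < 140000/\ln 2$, the coefficient of $2^t$ is negative, so in this model the predators drive the population to zero (after roughly $\log_2(201977/1977)\approx 6.7$ weeks).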
https://gmatclub.com/forum/cole-drove-from-home-to-work-at-an-average-speed-of-75-kmh-he-then-205596.html
# Cole drove from home to work at an average speed of 75 kmh. He then
skylimit (Intern), 15 Sep 2015, 12:44:
Cole drove from home to work at an average speed of 75 kmh. He then returned home at an average speed of 105 kmh. If the round trip took a total of 2 hours, how many minutes did it take Cole to drive to work?
A) 66
B) 70
C) 72
D) 75
E) 78
Source: GMAT Prep Now - http://www.gmatprepnow.com/module/gmat- ... /video/914
skylimit, 15 Sep 2015, 12:52:
I tried solving the question by starting with this:
time to work + time home = 2
d/75 + d/105 = 2
105d/7875 + 75d/7875 = 15750
180d/7875 = 15750
180d = 7875*15750
I used my calculator for the rest and got d = 689062.5 which is obviously too big to work with the question.
So I stopped.
What went wrong?
sunnysid (Intern), 15 Sep 2015, 13:55:
Let the distance one way be x
Time from home to work = x/75
Time from work to home = x/105
Total time = 2 hrs
(x/75) + (x/105)= 2
Solving for x, we get x = 175/2
Time from home to work in minutes= (175/2)*60/75 = 70 minutes
Ans= B
sunnysid, 15 Sep 2015, 13:57:
The error lies in your 2nd step- 105d/7875 + 75d/7875 = 15750
It should be 105d/7875 + 75d/7875 = 2
A better approach would be to take the common factors out. This way the calculation becomes much simpler. In this case taking common factor "15" out we get:
(1/15)*[(d/5)+(d/7)]= 2 --> (d/5)+(d/7)=30; which is much easier to solve manually. Remember, if you think you need a calculator on any GMAT question then probably your approach is not correct. There is always a way around.
skylimit, 15 Sep 2015, 14:12:
Shoot! Nice catch. Thanks
zurvy (Intern), 16 Sep 2015, 00:36:
Hi All,
Here is my first post...
I prefer to use the ratio approach when it is possible, in this case it is.
The ratios between the speeds Work:Home = 75:105 = 5:7
Given that the distance in both directions is the same, the ratios of times becomes Work:Home = 7:5 (inverse)
The total time is given 2 hours ==> , so: 2/12 *7 = 14/12 hours ==> Multiple by 5 to get minutes = 70/60 ==> Answer is 70 minutes.
skylimit, 16 Sep 2015, 08:07:
zurvy wrote:
The total time is given 2 hours ==> , so: 2/12 *7 = 14/12 hours ==> Multiple by 5 to get minutes = 70/60 ==> Answer is 70 minutes.
The inverse method is interesting and it seems pretty fast too.
I just don't understand the very last part above.
Why did you multiply 2/12 by 7 and then multiply by 5 to get minutes?
zurvy, 16 Sep 2015, 22:34:
skylimit wrote:
Why did you multiply 2/12 by 7 and then multiply by 5 to get minutes?
Hi there,
The ratio of times is, as explained, 5:7 over the entire trip, and it took a total of 2 hours ==> Imagine it like this: you have to split the 2 hours in a ratio of 5:7 ==> The total number of parts would be in this case 5+7 = 12 ==> so the total time has to be divided first by 12, and then multiplied by 7 for the work-bound leg (or by 5 for the return leg).
If I told you to split 24 apples among two persons in a ratio of 5:7, you would first divide 24/12 (12 = 7+5) and then multiply by 5 and 7 respectively to get the answer:
24/12 * 5 = 10 for one person, 24/12 * 7 = 14 for the other, making a total of 24 apples.
Regarding the multiplication by 5, you need to convert 14/12 to a fraction with denominator 60 (so you get the minutes):
(5*14)/(5*12) = 70/60
Or you could note that 14/12 = 7/6 hour = 1 1/6 hour = 1 hour and 10 minutes = 70/60 hour (it's just playing with fractions, and with time, having a denominator of 60 always gives you the number of minutes at the top).
skylimit, 17 Sep 2015, 09:06:
Thanks zurvy!
I didn't realize you multiplied by 5 to get a denominator of 60.
Nice solution
Director, 17 Sep 2015, 16:00:
let t=time to work
time to home (derived from inverse kmh ratio)=75t/105➡5t/7
t+5t/7=2 hours total time
t=7/6 hours➡70 minutes
Intern, 18 Sep 2015, 21:25:
First round distance travelled (say) = d
Speed = 75 k/h
Time taken, T1 = d/75 hr
Second round distance traveled = d (same distance)
Speed = 105 k/h
Time taken, T2 = d/105 hr
Total time taken = 2 hrs
Therefore , 2 = d/75 + d/105
LCM of 75 and 105 = 525
2= d/75 + d/105
=> 2 = 7d/525 + 5d/525
=> d = 525 / 6 Km
Therefore, T1= d/75
=> T1 = 525 / (6 x 75)
=> T1 = (7 x 60) / 6 -- in minutes
=> T1 = 70 minutes.
Intern, 30 Jun 2016, 10:53:
Round-trip average speed: (2*75*105)/(75+105) = 525/6 km/h
Round-trip time = 2 hrs (given)
So round-trip distance = (525/6)*2 = 175 km
Single-trip distance = 175/2 km... Time taken to cover this distance at 75 km/h = (175/2)/75 hrs = 70 mins.
santro789 (Manager), 03 Oct 2017, 00:55:
Time = distance /speed
So Lets consider D =100
Therefore , 100/75(60) +100/105(50)= 120 mins
Is the right way of thinking ?
Can someone please explain this with logic not formulas
Manager, 03 Oct 2017, 04:22:
It is incorrect to assume the value of distance D, as we have a constraint on time ( t1+t2 = 2 hours ).
We know the avg speed for first half is 75kmph => D = 75 * T1
We also know the avg speed for second half is 105kmph => D = 105*T2
Since the distance is same we can equate the above equations 75*T1 = 105*T2 => 5T1=7T2.
Now we know the total time taken is 2 hours => T1+T2 = 2
On solving we will get T1 = 7/6 hours which is 70 minutes.
Scott Woodbury-Stewart, Founder &amp; CEO of Target Test Prep, 05 Oct 2017, 09:12:
We use the formula distance = rate x time, or equivalently, time = distance/rate. We can let the distance either way = d; thus, Cole’s time from home to work is d/75 and his time from work to home is d/105. The total travel time is given as 2 hours, so we have:
d/75 + d/105 = 2
Multiplying the entire equation by 525, we have:
7d + 5d = 1050
12d = 1050
d = 1050/12 = 525/6 = 175/2 km
Thus, it took him (175/2)/75 = 175/150 = 7/6 hours = 7/6 x 60 = 70 minutes.
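A quick symbolic check of this algebra, sketched in Python with sympy (the snippet is an illustration, not part of the thread):

from sympy import symbols, solve

d = symbols('d', positive=True)
one_way = solve(d/75 + d/105 - 2, d)[0]   # one-way distance: 175/2 km
print(one_way, one_way/75 * 60)           # prints 175/2 70, i.e. 70 minutes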
VP, 05 Oct 2017, 18:03:
I avoid division if I can in RTD problems. I solved for time with multiplication. It's fast.
Total time = 2 hours
Let time from home to work = x
So time from work to home = (2-x)
Rate, home to work = 75 kmh
Rate, work to home = 105 kmh
$$r*t=D$$
D, home to work: (75*x)
D, work to home: (105)*(2-x)
$$D$$ is equal both ways, hence
$$75x = (105)(2-x)$$
$$75x = 210 - 105x$$
$$180x = 210$$
$$x = \frac{210}{180} = \frac{21}{18} =\frac{7}{6}hrs$$
Any fraction of an hour can be multiplied by 60 to obtain minutes.
$$x =\frac{7}{6}hrs * 60$$ = 70 minutes
http://mathhelpforum.com/statistics/216652-can-you-check-over-my-work-please-normal-distribution-z-score-word-problem.html | # Math Help - Can you Check Over MY Work Please - Normal Distribution (Z-SCORE) Word Problem
1. ## Can you Check Over MY Work Please - Normal Distribution (Z-SCORE) Word Problem
The lifespan of lightbulbs in a photographic machine is normally distributed with a mean of 210 hours and a standard deviation of 50 hours.
1) Determine the z-score of a light bulb with a lifespan of exactly 124hours.
z = (x - mean)/standard deviation
z = (124 - 210)/50
z = -1.72
Z-Score = -1.72
2) What is the probability that a randomly chosen light bulb would have a lifespan of less than 180 hours?
z = (x - mean)/standard deviation
z= (180-210)/50
z= -0.60
p(x<180) = p(z<-0.60) = 0.2258
Probability= 0.2258 / 22.58%
3) What is the probability that a randomly chosen light bulb would have a lifespan of between 200 and 250 hours?
z = (x - mean)/standard deviation FOR BOTH 200 and 250:
For 200:
z = (200 - 210)/50
z = -0.2
For 250:
z = (250 - 210)/50
z = 0.8
p(200 &lt; x &lt; 250) = p(-0.2 &lt; z &lt; 0.8) = p(z &lt; 0.8) - p(z &lt; -0.2) = 0.7881 - 0.4207 = 0.3674 **Not sure if those italic/underlined numbers are correct on this line**
Probability = 0.3674 / 36.74%
2. ## Re: Can you Check Over MY Work Please - Normal Distribution (Z-SCORE) Word Problem
Hey tdotodot.
1) is correct
2) is not. Using R we get
> pnorm(-0.6,0,1)
[1] 0.2742531
3) Looks good. Again using R:
pnorm(0.8,0,1) - pnorm(-0.2,0,1)
[1] 0.3674043
3. ## Re: Can you Check Over MY Work Please - Normal Distribution (Z-SCORE) Word Problem
Thank you, really appreciate your help. Could you tell me how you got 0.2742531 for question #2? And what is "R"?
I am using the Z-Score chart, row 0.6 / column 0.00 shows the number 0.2258 and column 0.01 shows .2291. Other than that, looks good.
4. ## Re: Can you Check Over MY Work Please - Normal Distribution (Z-SCORE) Word Problem
pnorm(z,0,1) calculates the probability P(Z < z) where Z ~ N(0,1). I used this command to verify your answer.
R is an open source statistical package that is becoming one of the most popular platforms used in statistics.
The R Project for Statistical Computing
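As a side note, the 0.2258 in question is the table's area between 0 and z, i.e. P(0 &lt; Z &lt; 0.6) ≈ 0.2257, so the cumulative probability is 0.5 - 0.2257 ≈ 0.2743, which matches pnorm. For anyone who prefers Python to R, here is the same arithmetic with scipy.stats (a sketch added here, not from the thread; norm.cdf plays the role of pnorm):

from scipy.stats import norm

mean, sd = 210, 50

print((124 - mean) / sd)                # 1) z-score: -1.72
print(norm.cdf((180 - mean) / sd))      # 2) P(X < 180) = 0.2742531...
print(norm.cdf(0.8) - norm.cdf(-0.2))   # 3) P(200 < X < 250) = 0.3674043...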
https://www.physicsforums.com/threads/discrete-math-posets.98210/ | # Homework Help: Discrete Math Posets
1. Nov 3, 2005
### socratesg
I have a question from hw. The question is stated: "Show that if the poset (S,R) is a lattice then the dual poset (S,R^{-1}) is also a lattice."
I know by a theorem in Rosen that the dual of a poset is also a poset, but how can I prove that it is also a lattice? What definition am I missing? Any help would be greatly appreciated.
2. Nov 3, 2005
### Hurkyl
Staff Emeritus
By using the fact (S, R) is a lattice, presumably.
3. Nov 3, 2005
### AKG
I'm seeing all this stuff for the first time, so don't mind the detail. (S,R) is a lattice iff for each pair of elements {a,b} in S, there is an element of S denoted sup_R{a,b} satisfying:
1) a R sup_R{a,b}, b R sup_R{a,b}
2) if s in S satisfies a R s and b R s, then sup_R{a,b} R s
and an element of S denoted inf_R{a,b} satisfying:
1) inf_R{a,b} R a, inf_R{a,b} R b
2) if s in S satisfies s R a and s R b, then s R inf_R{a,b}
I assume that R^{-1} is defined by a R^{-1} b iff b R a. If so, then we want to show that for every pair {a,b} contained in S, there is a supremum and infimum with respect to R^{-1}. Let's go back to the supremum in (S,R), and rewrite the conditions. So:
1) a R sup_R{a,b}, b R sup_R{a,b}
2) if s in S satisfies a R s and b R s, then sup_R{a,b} R s
becomes
1) sup_R{a,b} R^{-1} a, sup_R{a,b} R^{-1} b
2) if s in S satisfies s R^{-1} a and s R^{-1} b, then s R^{-1} sup_R{a,b}
So sup_R{a,b} satisfies precisely the conditions required for it to be an infimum of {a,b} with respect to R^{-1}. So if (S,R) is a lattice, then for every pair {a,b}, there is a supremum s and infimum i with respect to R. (S,R^{-1}) is a lattice because for each {a,b} there is a supremum with respect to R^{-1}, that being the infimum with respect to R, namely i, and there is an infimum with respect to R^{-1}, that being the supremum with respect to R, namely s.
A good way to think of this is to think of S being the power set of some set X, and R being inclusion $\subseteq$. Then sup{a,b} = $a \cup b$ and inf{a,b} = $a \cap b$. R^{-1} would be containment, $\supseteq$, and clearly you can see why when we switch the relation around, the intersection plays the role of the union, and the union plays the role of the intersection.
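As a concrete check of the argument above, here is a small brute-force verification on that power set example (this snippet is an illustration added here, not part of the thread): for every pair, the supremum under inclusion is the union, while the supremum under the inverse relation is the intersection, i.e. the infimum under inclusion.

from itertools import chain, combinations

X = [1, 2, 3]
S = [frozenset(c) for c in chain.from_iterable(combinations(X, k) for k in range(len(X) + 1))]

R    = lambda a, b: a <= b    # R is subset inclusion
Rinv = lambda a, b: b <= a    # R^{-1} is containment

def sup(a, b, rel):
    # least upper bound of {a, b} with respect to rel
    uppers = [s for s in S if rel(a, s) and rel(b, s)]
    return next(u for u in uppers if all(rel(u, s) for s in uppers))

for a in S:
    for b in S:
        assert sup(a, b, R) == a | b       # sup under R is union
        assert sup(a, b, Rinv) == a & b    # sup under R^{-1} is inf under R: intersection
print("dual lattice check passed")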
http://www.cs.ubc.ca/~poole/aibook/2e/html/ArtInt2e.Ch10.S3.SS4.html | # 10.3.4 Structure Learning
Suppose a learning agent has complete data and no hidden variables, but is not given the structure of the belief network. This is the setting for structure learning of belief networks.
There are two main approaches to structure learning:
• The first is to use the definition of a belief network in terms of conditional independence. Given a total ordering of variables, the parents of a variable $X$ are defined to be a subset of the predecessors of $X$ in the total ordering that render the other predecessors independent of $X$. Using the definition directly has two main challenges: the first is to determine the best total ordering; the second is to find a way to measure independence. It is difficult to determine conditional independence when there is limited data.
• The second method is to have a score for networks, for example, using the MAP model, which takes into account fit to the data and model complexity. Given such a measure, it is feasible to search for the structure that minimizes this error.
This section presents the second method, often called a search and score method.
Assume that the data is a set $E$ of examples, where each example has a value for each variable. The aim of the search and score method is to choose a model $m$ that maximizes
$P(m\mid E)\propto P(E\mid m)*P(m).$
The likelihood, $P(E\mid m)$, is the product of the probability of each example. Using the product decomposition, the product of each example given the model is the product of the probability of each variable given its parents in the model. Thus,
$\displaystyle P(E\mid m)*P(m)$ $\displaystyle\mbox{}=\left(\prod_{e\in E}P(e\mid m)\right)*P(m)$ $\displaystyle\mbox{}=\left(\prod_{e\in E}\prod_{X_{i}}P_{m}^{e}(X_{i}\mid par(% X_{i},m))\right)*P(m)$
where $par(X_{i},m)$ denotes the parents of $X_{i}$ in the model $m$, and $P_{m}^{e}(\cdot)$ denotes the probability of example $e$ as specified in the model $m$.
This is maximized when its logarithm is maximized. When taking logarithms, products become sums:
$\displaystyle\log P(E\mid m)+\log P(m)=\left(\sum_{e\in E}\sum_{X_{i}}\log P_{% m}^{e}(X_{i}\mid par(X_{i},m))\right)+\log P(m).$
To make this approach feasible, assume that the prior probability of the model decomposes into components for each variable. That is, we assume the probability of the model decomposes into a product of probabilities of local models for each variable. Let $m(X_{i})$ be the local model for variable $X_{i}$.
Thus, we want to maximize
$\displaystyle \left(\sum_{e\in E}\sum_{X_{i}}\log P_{m}^{e}(X_{i}\mid par(X_{i},m))\right)+\sum_{X_{i}}\log P(m(X_{i}))$
$\displaystyle =\sum_{X_{i}}\left(\sum_{e\in E}\log P_{m}^{e}(X_{i}\mid par(X_{i},m))\right)+\sum_{X_{i}}\log P(m(X_{i}))$
$\displaystyle =\sum_{X_{i}}\left(\sum_{e\in E}\log P_{m}^{e}(X_{i}\mid par(X_{i},m))+\log P(m(X_{i}))\right).$
Each variable could be optimized separately, except for the requirement that a belief network is acyclic. However, if you had a total ordering of the variables, there is an independent supervised learning problem to predict the probability of each variable given the predecessors in the total ordering. To approximate $\log P(m(X_{i}))$, the BIC score is suitable. To find a good total ordering of the variables, a learning agent could search over total orderings, using search techniques such as local search or branch-and-bound search.
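To make the search-and-score idea concrete, here is a small sketch (not from the book; all names and conventions are assumptions) for fully observed binary data: each variable's local model is scored with a BIC-style penalty, and, given a total ordering, the best parent set is chosen from the variable's predecessors.

import numpy as np
from itertools import combinations
from collections import defaultdict

def bic_local_score(data, child, parents):
    # log-likelihood of P(child | parents) minus (log N / 2) * number of parameters;
    # with binary variables there is one free parameter per parent configuration
    N = data.shape[0]
    groups = defaultdict(list)
    for row in data:
        groups[tuple(row[list(parents)])].append(row[child])
    loglik = 0.0
    for xs in groups.values():
        for k in (sum(xs), len(xs) - sum(xs)):
            if k > 0:
                loglik += k * np.log(k / len(xs))
    return loglik - 0.5 * (2 ** len(parents)) * np.log(N)

def best_parents(data, child, predecessors, max_parents=2):
    # an independent supervised problem for each variable, given the ordering
    candidates = [()] + [ps for k in range(1, max_parents + 1)
                         for ps in combinations(predecessors, k)]
    return max(candidates, key=lambda ps: bic_local_score(data, child, ps))

rng = np.random.default_rng(0)
data = rng.integers(0, 2, size=(500, 3))
data[:, 2] = data[:, 0] ^ data[:, 1]    # X2 is a function of X0 and X1
for i in range(3):
    print(i, best_parents(data, i, tuple(range(i))))   # expect X2 to pick (0, 1)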
https://jeremykun.com/category/set-theory/ | # Methods of Proof — Diagonalization
A while back we featured a post about why learning mathematics can be hard for programmers, and I claimed a major issue was not understanding the basic methods of proof (the lingua franca between intuition and rigorous mathematics). I boiled these down to the “basic four,” direct implication, contrapositive, contradiction, and induction. But in mathematics there is an ever growing supply of proof methods. There are books written about the “probabilistic method,” and I recently went to a lecture where the “linear algebra method” was displayed. There has been recent talk of a “quantum method” for proving theorems unrelated to quantum mechanics, and many more.
So in continuing our series of methods of proof, we’ll move up to some of the more advanced methods of proof. And in keeping with the spirit of the series, we’ll spend most of our time discussing the structural form of the proofs. This time, diagonalization.
## Diagonalization
Perhaps one of the most famous methods of proof after the basic four is proof by diagonalization. Why do they call it diagonalization? Because the idea behind diagonalization is to write out a table that describes how a collection of objects behaves, and then to manipulate the “diagonal” of that table to get a new object that you can prove isn’t in the table.
The simplest and most famous example of this is the proof that there is no bijection between the natural numbers and the real numbers. We defined injections, and surjections and bijections, in two earlier posts in this series, but for new readers a bijection is just a one-to-one mapping between two collections of things. For example, one can construct a bijection between all positive integers and all even positive integers by mapping $n$ to $2n$. If there is a bijection between two (perhaps infinite) sets, then we say they have the same size or cardinality. And so to say there is no bijection between the natural numbers and the real numbers is to say that one of these two sets (the real numbers) is somehow “larger” than the other, despite both being infinite in size. It’s deep, it used to be very controversial, and it made the method of diagonalization famous. Let’s see how it works.
Theorem: There is no bijection from the natural numbers $\mathbb{N}$ to the real numbers $\mathbb{R}$.
Proof. Suppose to the contrary (i.e., we’re about to do proof by contradiction) that there is a bijection $f: \mathbb{N} \to \mathbb{R}$. That is, you give me a positive integer $k$ and I will spit out $f(k)$, with the property that different $k$ give different $f(k)$, and every real number is hit by some natural number $k$ (this is just what it means to be a one-to-one mapping).
First let me just do some setup. I claim that all we need to do is show that there is no bijection between $\mathbb{N}$ and the real numbers between 0 and 1. In particular, I claim there is a bijection from $(0,1)$ to all real numbers, so if there is a bijection from $\mathbb{N} \to (0,1)$ then we could combine the two bijections. To show there is a bijection from $(0,1) \to \mathbb{R}$, I can first make a bijection from the open interval $(0,1)$ to the interval $(-\infty, 0) \cup (1, \infty)$ by mapping $x$ to $1/x$. With a little bit of extra work (read, messy details) you can extend this to all real numbers. Here’s a sketch: make a bijection from $(0,1)$ to $(0,2)$ by doubling; then make a bijection from $(0,2)$ to all real numbers by using the $(0,1)$ part to get $(-\infty, 0) \cup (1, \infty)$, and use the $[1,2)$ part to get $[0,1]$ by subtracting 1 (almost! To be super rigorous you also have to argue that the missing number 1 doesn’t change the cardinality, or else write down a more complicated bijection; still, the idea should be clear).
Okay, setup is done. We just have to show there is no bijection between $(0,1)$ and the natural numbers.
The reason I did all that setup is so that I can use the fact that every real number in $(0,1)$ has an infinite binary decimal expansion whose only nonzero digits are after the decimal point. And so I’ll write down the expansion of $f(1)$ as a row in a table (an infinite row), and below it I’ll write down the expansion of $f(2)$, below that $f(3)$, and so on, and the decimal points will line up. The table looks like this.
The $d$‘s above are either 0 or 1. I need to be a bit more detailed in my table, so I’ll index the digits of $f(1)$ by $b_{1,1}, b_{1,2}, b_{1,3}, \dots$, the digits of $f(2)$ by $b_{2,1}, b_{2,2}, b_{2,3}, \dots$, and so on. This makes the table look like this:

f(1) = 0 . b_{1,1} b_{1,2} b_{1,3} b_{1,4} ...
f(2) = 0 . b_{2,1} b_{2,2} b_{2,3} b_{2,4} ...
f(3) = 0 . b_{3,1} b_{3,2} b_{3,3} b_{3,4} ...
...

It’s a bit harder to read, but trust me the notation is helpful.
Now by the assumption that $f$ is a bijection, I’m assuming that every real number shows up as a number in this table, and no real number shows up twice. So if I could construct a number that I can prove is not in the table, I will arrive at a contradiction: the table couldn’t have had all real numbers to begin with! And that will prove there is no bijection between the natural numbers and the real numbers.
Here’s how I’ll come up with such a number $N$ (this is the diagonalization part). It starts with 0., and it’s first digit after the decimal is $1-b_{1,1}$. That is, we flip the bit $b_{1,1}$ to get the first digit of $N$. The second digit is $1-b_{2,2}$, the third is $1-b_{3,3}$, and so on. In general, digit $i$ is $1-b_{i,i}$.
Now we show that $N$ isn’t in the table. If it were, then it would have to be $N = f(m)$ for some $m$, i.e. be the $m$-th row in the table. Moreover, by the way we built the table, the $m$-th digit of $N$ would be $b_{m,m}$. But we defined $N$ so that it’s $m$-th digit was actually $1-b_{m,m}$. This is very embarrassing for $N$ (it’s a contradiction!). So $N$ isn’t in the table.
$\square$
It’s the kind of proof that blows your mind the first time you see it, because it says that there is more than one kind of infinity. Not something you think about every day, right?
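The diagonal trick is easy to play with on a computer. A program can only hold finitely many rows, so this little sketch (an illustration added here, not from the original post) just diagonalizes against a finite table of bits, but the mechanics are exactly those used above:

table = [
    [0, 1, 1, 0],
    [1, 1, 0, 1],
    [0, 0, 0, 1],
    [1, 0, 1, 0],
]

# Flip the i-th bit of the i-th row; the result differs from every row somewhere.
diagonal_flip = [1 - table[i][i] for i in range(len(table))]
assert all(diagonal_flip != row for row in table)
print(diagonal_flip)   # [1, 0, 1, 1]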
## The Halting Problem
The second example we’ll show of a proof by diagonalization is the Halting Theorem, proved originally by Alan Turing, which says that there are some problems that computers can’t solve, even if given unbounded space and time to perform their computations. The formal mathematical model is called a Turing machine, but for simplicity you can think of “Turing machines” and “algorithms described in words” as the same thing. Or if you want it can be “programs written in programming language X.” So we’ll use the three words “Turing machine,” “algorithm,” and “program” interchangeably.
The proof works by actually defining a problem and proving it can’t be solved. The problem is called the halting problem, and it is the problem of deciding: given a program $P$ and an input $x$ to that program, will $P$ ever stop running when given $x$ as input? What I mean by “decide” is that any program that claims to solve the halting problem is itself required to halt for every possible input with the correct answer. A “halting problem solver” can’t loop infinitely!
So first we’ll give the standard proof that the halting problem can’t be solved, and then we’ll inspect the form of the proof more closely to see why it’s considered a diagonalization argument.
Theorem: The halting program cannot be solved by Turing machines.
Proof. Suppose to the contrary that $T$ is a program that solves the halting problem. We’ll use $T$ as a black box to come up with a new program I’ll call meta-$T$, defined in pseudo-python as follows.
def metaT(P):
    run T on (P,P)
    if T says that P halts:
        loop infinitely
    else:
        halt and output "success!"
In words, meta-$T$ accepts as input the source code of a program $P$, and then uses $T$ to tell if $P$ halts (when given its own source code as input). Based on the result, it behaves the opposite of $P$; if $P$ halts then meta-$T$ loops infinitely and vice versa. It’s a little meta, right?
Now let’s do something crazy: let’s run meta-$T$ on itself! That is, run
metaT(metaT)
So meta. The question is what is the output of this call? The meta-$T$ program uses $T$ to determine whether meta-$T$ halts when given itself as input. So let’s say that the answer to this question is “yes, it does halt.” Then by the definition of meta-$T$, the program proceeds to loop forever. But this is a problem, because it means that metaT(metaT) (which is the original thing we ran) actually does not halt, contradicting $T$‘s answer! Likewise, if $T$ says that metaT(metaT) should loop infinitely, that will cause meta-$T$ to halt, a contradiction. So $T$ cannot be correct, and the halting problem can’t be solved.
$\square$
This theorem is deep because it says that you can’t possibly write a program which can always detect bugs in other programs. Infinite loops are just one special kind of bug.
But let’s take a closer look and see why this is a proof by diagonalization. The first thing we need to convince ourselves is that the set of all programs is countable (that is, there is a bijection from $\mathbb{N}$ to the set of all programs). This shouldn’t be so hard to see: you can list all programs in lexicographic order, since the set of all strings is countable, and then throw out any that are not syntactically valid programs. Likewise, the set of all inputs, really just all strings, is countable.
The second thing we need to convince ourselves of is that a problem corresponds to an infinite binary string. To do this, we’ll restrict our attention to problems with yes/no answers, that is where the goal of the program is to output a single bit corresponding to yes or no for a given input. Then if we list all possible inputs in increasing lexicographic order, a problem can be represented by the infinite list of bits that are the correct outputs to each input.
For example, if the problem is to determine whether a given binary input string corresponds to an even number, the representation might look like this:
010101010101010101...
Of course this all depends on the details of how one encodes inputs, but the point is that if you wanted to you could nail all this down precisely. More importantly for us we can represent the halting problem as an infinite table of bits. If the columns of the table are all programs (in lex order), and the rows of the table correspond to inputs (in lex order), then the table would have at entry $(x,P)$ a 1 if $P(x)$ halts and a 0 otherwise.
Here $b_{i,j}$ is 1 if $P_j(x_i)$ halts and 0 otherwise. The table encodes the answers to the halting problem for all possible inputs.
Now we assume for contradiction sake that some program solves the halting problem, i.e. that every entry of the table is computable. Now we’ll construct the answers output by meta-$T$ by flipping each bit of the diagonal of the table. The point is that meta-$T$ corresponds to some row of the table, because there is some input string that is interpreted as the source code of meta-$T$. Then we argue that the entry of the table for $(\textup{meta-}T, \textup{meta-}T)$ contradicts its definition, and we’re done!
So these are two of the most high-profile uses of the method of diagonalization. It’s a great tool for your proving repertoire.
Until next time!
# The Many Faces of Set Cover
A while back Peter Norvig posted a wonderful pair of articles about regex golf. The idea behind regex golf is to come up with the shortest possible regular expression that matches one given list of strings, but not the other.
“Regex Golf,” by Randall Munroe.
In the first article, Norvig runs a basic algorithm to recreate and improve the results from the comic, and in the second he beefs it up with some improved search heuristics. My favorite part about this topic is that regex golf can be phrased in terms of a problem called set cover. I noticed this when reading the comic, and was delighted to see Norvig use that as the basis of his algorithm.
The set cover problem shows up in other places, too. If you have a database of items labeled by users, and you want to find the smallest set of labels to display that covers every item in the database, you’re doing set cover. I hear there are applications in biochemistry and biology but haven’t seen them myself.
If you know what a set is (just think of the “set” or “hash set” type from your favorite programming language), then set cover has a simple definition.
Definition (The Set Cover Problem): You are given a finite set $U$ called a “universe” and sets $S_1, \dots, S_n$ each of which is a subset of $U$. You choose some of the $S_i$ to ensure that every $x \in U$ is in one of your chosen sets, and you want to minimize the number of $S_i$ you picked.
It’s called a “cover” because the sets you pick “cover” every element of $U$. Let’s do a simple example. Let $U = \{ 1,2,3,4,5 \}$ and
$\displaystyle S_1 = \{ 1,3,4 \}, S_2 = \{ 2,3,5 \}, S_3 = \{ 1,4,5 \}, S_4 = \{ 2,4 \}$
Then the smallest possible number of sets you can pick is 2, and you can achieve this by picking both $S_1, S_2$ or both $S_2, S_3$. The connection to regex golf is that you pick $U$ to be the set of strings you want to match, and you pick a set of regexes that match some of the strings in $U$ but none of the strings you want to avoid matching (I’ll call them $V$). If $w$ is such a regex, then you can form the set $S_w$ of strings that $w$ matches. Then if you find a small set cover with the strings $w_1, \dots, w_t$, then you can “or” them together to get a single regex $w_1 \mid w_2 \mid \dots \mid w_t$ that matches all of $U$ but none of $V$.
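To make the translation concrete, here is a tiny sketch (the word lists and candidate patterns are invented for illustration):

import re

matches = ["foo", "food", "bar"]          # U: strings the final regex must match
avoid = ["baz", "bot"]                    # V: strings it must not match
candidates = ["foo", "ba", "r$", "^f"]    # candidate regexes w

# Keep only candidates matching nothing in V; each survivor w gives the set S_w.
sets = {w: frozenset(s for s in matches if re.search(w, s))
        for w in candidates
        if not any(re.search(w, s) for s in avoid)}
print(sets)   # a cover of 'matches' by these sets yields the golf regex w1|w2|...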
Set cover is what’s called NP-hard, and one implication is that we shouldn’t hope to find an efficient algorithm that will always give you the shortest regex for every regex golf problem. But despite this, there are approximation algorithms for set cover. What I mean by this is that there is a regex-golf algorithm $A$ that outputs a subset of the regexes matching all of $U$, and the number of regexes it outputs is such-and-such close to the minimum possible number. We’ll make “such-and-such” more formal later in the post.
What made me sad was that Norvig didn’t go any deeper than saying, “We can try to approximate set cover, and the greedy algorithm is pretty good.” It’s true, but the ideas are richer than that! Set cover is a simple example to showcase interesting techniques from theoretical computer science. And perhaps ironically, in Norvig’s second post a header promised the article would discuss the theory of set cover, but I didn’t see any of what I think of as theory. Instead he partially analyzes the structure of the regex golf instances he cares about. This is useful, but not really theoretical in any way unless he can say something universal about those instances.
I don’t mean to bash Norvig. His articles were great! And in-depth theory was way beyond scope. So this post is just my opportunity to fill in some theory gaps. We’ll do three things:
1. Show formally that set cover is NP-hard.
2. Prove the approximation guarantee of the greedy algorithm.
3. Show another (very different) approximation algorithm based on linear programming.
Along the way I’ll argue that by knowing (or at least seeing) the details of these proofs, one can get a better sense of what features to look for in the set cover instance you’re trying to solve. We’ll also see how set cover depicts the broader themes of theoretical computer science.
## NP-hardness
The first thing we should do is show that set cover is NP-hard. Intuitively what this means is that we can take some hard problem $P$ and encode instances of $P$ inside set cover problems. This idea is called a reduction, because solving problem $P$ will “reduce” to solving set cover, and the method we use to encode instance of $P$ as set cover problems will have a small amount of overhead. This is one way to say that set cover is “at least as hard as” $P$.
The hard problem we’ll reduce to set cover is called 3-satisfiability (3-SAT). In 3-SAT, the input is a formula whose variables are either true or false, and the formula is expressed as an AND of a bunch of clauses, each of which is an OR of three variables (or their negations). This is called 3-CNF form. A simple example:
$\displaystyle (x \vee y \vee \neg z) \wedge (\neg x \vee w \vee y) \wedge (z \vee x \vee \neg w)$
The goal of the algorithm is to decide whether there is an assignment to the variables which makes the formula true. 3-SAT is one of the most fundamental problems we believe to be hard and, roughly speaking, by reducing it to set cover we include set cover in a class called NP-complete, and if any one of these problems can be solved efficiently, then they all can (this is the famous P versus NP problem, and an efficient algorithm would imply P equals NP).
So a reduction would consist of the following: you give me a formula $\varphi$ in 3-CNF form, and I have to produce (in a way that depends on $\varphi$!) a universe $U$ and a choice of subsets $S_i \subset U$ in such a way that
$\varphi$ has a true assignment of variables if and only if the corresponding set cover problem has a cover using $k$ sets.
In other words, I’m going to design a function $f$ from 3-SAT instances to set cover instances, such that $x$ is satisfiable if and only if $f(x)$ has a set cover with $k$ sets.
Why do I say it only for $k$ sets? Well, if you can always answer this question then I claim you can find the minimum size of a set cover needed by doing a binary search for the smallest value of $k$. So finding the minimum size of a set cover reduces to the problem of telling if there’s a set cover of size $k$.
Now let’s do the reduction from 3-SAT to set cover.
If you give me $\varphi = C_1 \wedge C_2 \wedge \dots \wedge C_m$ where each $C_i$ is a clause and the variables are denoted $x_1, \dots, x_n$, then I will choose as my universe $U$ to be the set of all the clauses and indices of the variables (these are all just formal symbols). i.e.
$\displaystyle U = \{ C_1, C_2, \dots, C_m, 1, 2, \dots, n \}$
The first part of $U$ will ensure I make all the clauses true, and the last part will ensure I don’t pick a variable to be both true and false at the same time.
To show how this works I have to pick my subsets. For each variable $x_i$, I’ll make two sets, one called $S_{x_i}$ and one called $S_{\neg x_i}$. They will both contain $i$ in addition to the clauses which they make true when the corresponding literal is true (by literal I just mean the variable or its negation). For example, if $C_j$ uses the literal $\neg x_7$, then $S_{\neg x_7}$ will contain $C_j$ but $S_{x_7}$ will not. Finally, I’ll set $k = n$, the number of variables.
Now to prove this reduction works I have to prove two things: if my starting formula has a satisfying assignment I have to show the set cover problem has a cover of size $k$. Indeed, take the sets $S_{y}$ for all literals $y$ that are set to true in a satisfying assignment. There can be at most $n$ true literals since half are true and half are false, so there will be at most $n$ sets, and these sets clearly cover all of $U$: every clause has to be satisfied by some true literal (or else the formula isn’t true), and each index $i$ is covered by the set for whichever literal of $x_i$ is true.
The reverse direction is similar: if I have a set cover of size $n$, I need to use it to come up with a satisfying truth assignment for the original formula. But indeed, the sets that get chosen can’t include both a $S_{x_i}$ and its negation set $S_{\neg x_i}$, because there are $n$ of the elements $\{1, 2, \dots, n \} \subset U$, and each $i$ is only in the two $S_{x_i}, S_{\neg x_i}$. Just by counting if I cover all the indices $i$, I already account for $n$ sets! And finally, since I have covered all the clauses, the literals corresponding to the sets I chose give exactly a satisfying assignment.
Whew! So set cover is NP-hard because I encoded this logic problem 3-SAT within its rules. If we think 3-SAT is hard (and we do) then set cover must also be hard. So if we can’t hope to solve it exactly we should try to approximate the best solution.
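The reduction is short enough to write out as a program. In this sketch the encoding conventions are assumptions (a literal is a nonzero integer, with -3 meaning the negation of variable 3), and the universe mixes clause symbols with variable indices exactly as in the proof:

def sat_to_set_cover(clauses, num_vars):
    # universe: one symbol per clause plus one symbol per variable index
    universe = {("clause", j) for j in range(len(clauses))} | set(range(1, num_vars + 1))
    # for each literal y, the set S_y holds the variable's index and
    # every clause that y makes true
    sets = {literal: {abs(literal)} | {("clause", j)
                                       for j, clause in enumerate(clauses)
                                       if literal in clause}
            for i in range(1, num_vars + 1) for literal in (i, -i)}
    return universe, sets   # satisfiable iff some num_vars of these sets cover the universe

# (x1 OR x2 OR NOT x3) AND (NOT x1 OR x2 OR x3)
U, S = sat_to_set_cover([(1, 2, -3), (-1, 2, 3)], 3)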
## The greedy approach
The method that Norvig uses in attacking the meta-regex golf problem is the greedy algorithm. The greedy algorithm is exactly what you’d expect: you maintain a list $L$ of the subsets you’ve picked so far, and at each step you pick the set $S_i$ that maximizes the number of new elements of $U$ that aren’t already covered by the sets in $L$. In python pseudocode:
def greedySetCover(universe, sets):
    chosenSets = []                     # a list, since Python sets aren't hashable members of a set
    leftToCover = universe.copy()
    unchosenSets = list(sets)

    covered = lambda s: leftToCover & s

    while leftToCover:
        if not unchosenSets:
            raise Exception("No set cover possible")

        nextSet = max(unchosenSets, key=lambda s: len(covered(s)))
        unchosenSets.remove(nextSet)
        chosenSets.append(nextSet)      # remember the chosen set
        leftToCover -= nextSet

    return chosenSets
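Running it on the example from the beginning of the post (a quick sanity check, not from the original article):

U = {1, 2, 3, 4, 5}
subsets = [{1, 3, 4}, {2, 3, 5}, {1, 4, 5}, {2, 4}]
print(greedySetCover(U, subsets))   # [{1, 3, 4}, {2, 3, 5}]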
This is what theory has to say about the greedy algorithm:
Theorem: If it is possible to cover $U$ by the sets in $F = \{ S_1, \dots, S_n \}$, then the greedy algorithm always produces a cover that at worst has size $O(\log(n)) \textup{OPT}$, where $\textup{OPT}$ is the size of the smallest cover. Moreover, this is asymptotically the best any algorithm can do.
One simple fact we need from calculus is that the following sum is asymptotically the same as $\log(n)$:
$\displaystyle H(n) = 1 + \frac{1}{2} + \frac{1}{3} + \dots + \frac{1}{n} = \log(n) + O(1)$
Proof. [adapted from Wan] Let’s say the greedy algorithm picks sets $T_1, T_2, \dots, T_k$ in that order. We’ll set up a little value system for the elements of $U$. Specifically, the value of each $T_i$ is 1, and in step $i$ we evenly distribute this unit value across all newly covered elements of $T_i$. So for $T_1$ each covered element gets value $1/|T_1|$, and if $T_2$ covers four new elements, each gets a value of 1/4. One can think of this “value” as a price, or energy, or unit mass, or whatever. It’s just an accounting system (albeit a clever one) we use to make some inequalities clear later.
In general call the value $v_x$ of element $x \in U$ the value assigned to $x$ at the step where it’s first covered. In particular, the number of sets chosen by the greedy algorithm $k$ is just $\sum_{x \in U} v_x$. We’re just bunching back together the unit value we distributed for each step of the algorithm.
Now we want to compare the sets chosen by greedy to the optimal choice. Call a smallest set cover $C_{\textup{OPT}}$. Let’s stare at the following inequality.
$\displaystyle \sum_{x \in U} v_x \leq \sum_{S \in C_{\textup{OPT}}} \sum_{x \in S} v_x$
It’s true because each $x$ counts for a $v_x$ at most once in the left hand side, and in the right hand side the sets in $C_{\textup{OPT}}$ must hit each $x$ at least once but may hit some $x$ more than once. Also remember the left hand side is equal to $k$.
Now we want to show that the inner sum on the right hand side, $\sum_{x \in S} v_x$, is at most $H(|S|)$. This will in fact prove the entire theorem: because each set $S_i$ has size at most $n$, the inequality above will turn into
$\displaystyle k \leq |C_{\textup{OPT}}| H(|S|) \leq |C_{\textup{OPT}}| H(n)$
And so $k \leq \textup{OPT} \cdot O(\log(n))$, which is the statement of the theorem.
So we want to show that $\sum_{x \in S} v_x \leq H(|S|)$. For each $j$ define $\delta_j(S)$ to be the number of elements in $S$ not covered in $T_1 \cup \dots \cup T_j$. Notice that $\delta_{j-1}(S) - \delta_{j}(S)$ is the number of elements of $S$ that are covered for the first time in step $j$. If we call $t_S$ the smallest integer $j$ for which $\delta_j(S) = 0$, then counting up the differences up to step $t_S$ gives
$\sum_{x \in S} v_x = \sum_{i=1}^{t_S} (\delta_{i-1}(S) - \delta_i(S)) \cdot \frac{1}{|T_i - (T_1 \cup \dots \cup T_{i-1})|}$
The rightmost term is just the cost assigned to the relevant elements at step $i$. Moreover, because $T_i$ covers more new elements than $S$ (by definition of the greedy algorithm), the fraction above is at most $1/\delta_{i-1}(S)$. The end is near. For brevity I’ll drop the $(S)$ from $\delta_j(S)$.
\displaystyle \begin{aligned} \sum_{x \in S} v_x & \leq \sum_{i=1}^{t_S} (\delta_{i-1} - \delta_i) \frac{1}{\delta_{i-1}} \\ & \leq \sum_{i=1}^{t_S} (\frac{1}{1 + \delta_i} + \frac{1}{2+\delta_i} \dots + \frac{1}{\delta_{i-1}}) \\ & = \sum_{i=1}^{t_S} H(\delta_{i-1}) - H(\delta_i) \\ &= H(\delta_0) - H(\delta_{t_S}) = H(|S|) \end{aligned}
And that proves the claim.
$\square$
I have three postscripts to this proof:
1. This is basically the exact worst-case approximation that the greedy algorithm achieves. In fact, Petr Slavik proved in 1996 that the greedy gives you a set of size exactly $(\log n - \log \log n + O(1)) \textup{OPT}$ in the worst case.
2. This is also the best approximation that any set cover algorithm can achieve, provided that P is not NP. This result was basically known in 1994, but it wasn’t until 2013 and the use of some very sophisticated tools that the best possible bound was found with the smallest assumptions.
3. In the proof we used that $|S| \leq n$ to bound things, but if we knew that our sets $S_i$ (i.e. subsets matched by a regex) had sizes bounded by, say, $B$, the same proof would show that the approximation factor is $\log(B)$ instead of $\log n$. However, in order for that to be useful you need $B$ to be a constant, or at least to grow more slowly than any polynomial in $n$, since e.g. $\log(n^{0.1}) = 0.1 \log n$. In fact, taking a second look at Norvig’s meta regex golf problem, some of his instances had this property! Which means the greedy algorithm gives a much better approximation ratio for certain meta regex golf problems than it does for the worst case general problem. This is one instance where knowing the proof of a theorem helps us understand how to specialize it to our interests.
Norvig’s frequency table for president meta-regex golf. The left side counts the size of each set (defined by a regex)
## The linear programming approach
So we just said that you can’t possibly do better than the greedy algorithm for approximating set cover. There must be nothing left to say, job well done, right? Wrong! Our second analysis, based on linear programming, shows that instances with special features can have better approximation results.
In particular, if we’re guaranteed that each element $x \in U$ occurs in at most $B$ of the sets $S_i$, then the linear programming approach will give a $B$-approximation, i.e. a cover whose size is at worst larger than OPT by a multiplicative factor of $B$. In the case that $B$ is constant, we can beat our earlier greedy algorithm.
The technique is now a classic one in optimization, called LP-relaxation (LP stands for linear programming). The idea is simple. Most optimization problems can be written as integer linear programs, that is, you have $n$ variables $x_1, \dots, x_n \in \{ 0, 1 \}$ and you want to maximize (or minimize) a linear function of the $x_i$ subject to some linear constraints. The thing you’re trying to optimize is called the objective. While in general solving integer linear programs is NP-hard, we can relax the “integer” requirement to $0 \leq x_i \leq 1$, or something similar. The resulting linear program, called the relaxed program, can be solved efficiently using the simplex algorithm or another more complicated method.
The output of solving the relaxed program is an assignment of real numbers for the $x_i$ that optimizes the objective function. A key fact is that the solution to the relaxed linear program will be at least as good as the solution to the original integer program, because the optimal solution to the integer program is a valid candidate for the optimal solution to the linear program. Then the idea is that if we use some clever scheme to round the $x_i$ to integers, we can measure how much this degrades the objective and prove that it doesn’t degrade too much when compared to the optimum of the relaxed program, which means it doesn’t degrade too much when compared to the optimum of the integer program as well.
If this sounds wishy washy and vague don’t worry, we’re about to make it super concrete for set cover.
We’ll make a binary variable $x_i$ for each set $S_i$ in the input, and $x_i = 1$ if and only if we include it in our proposed cover. Then the objective function we want to minimize is $\sum_{i=1}^n x_i$. If we call our elements $U = \{ e_1, \dots, e_m \}$, then we need to write down a linear constraint that says each element $e_j$ is hit by at least one set in the proposed cover. These constraints have to depend on the sets $S_i$, but that’s not a problem. One good constraint for element $e_j$ is
$\displaystyle \sum_{i : e_j \in S_i} x_i \geq 1$
In words, the only way that an $e_j$ will not be covered is if all the sets containing it have their $x_i = 0$. And we need one of these constraints for each $j$. Putting it together, the integer linear program is
The integer program for set cover.
Once we understand this formulation of set cover, the relaxation is trivial. We just replace the integrality constraint $x_i \in \{ 0, 1 \}$ with the inequalities $0 \leq x_i \leq 1$.
For a given candidate assignment $x$ to the $x_i$, call $Z(x)$ the objective value (in this case $\sum_i x_i$). Now we can be more concrete about the guarantees of this relaxation method. Let $\textup{OPT}_{\textup{IP}}$ be the optimal value of the integer program and $x_{\textup{IP}}$ a corresponding assignment to $x_i$ achieving the optimum. Likewise let $\textup{OPT}_{\textup{LP}}, x_{\textup{LP}}$ be the optimal things for the linear relaxation. We will prove:
Theorem: There is a deterministic algorithm that rounds $x_{\textup{LP}}$ to integer values $x$ so that the objective value $Z(x) \leq B \textup{OPT}_{\textup{IP}}$, where $B$ is the maximum number of sets that any element $e_j$ occurs in. So this gives a $B$-approximation of set cover.
Proof. Let $B$ be as described in the theorem, and call $y = x_{\textup{LP}}$ to make the indexing notation easier. The rounding algorithm is to set $x_i = 1$ if $y_i \geq 1/B$ and zero otherwise.
To prove the theorem we need to show two things hold about this new candidate solution $x$:
1. The choice of all $S_i$ for which $x_i = 1$ covers every element.
2. The number of sets chosen (i.e. $Z(x)$) is at most $B$ times more than $\textup{OPT}_{\textup{LP}}$.
Since $\textup{OPT}_{\textup{LP}} \leq \textup{OPT}_{\textup{IP}}$, proving number 2 gives $Z(x) \leq B \textup{OPT}_{\textup{LP}} \leq B \textup{OPT}_{\textup{IP}}$, which is the theorem.
So let’s prove 1. Fix any $j$ and we’ll show that element $e_j$ is covered by some set in the rounded solution. Call $B_j$ the number of times element $e_j$ occurs in the input sets. By definition $B_j \leq B$, so $1/B_j \geq 1/B$. Recall $y$ was the optimal solution to the relaxed linear program, and so it must be the case that the linear constraint for each $e_j$ is satisfied: $\sum_{i : e_j \in S_i} y_i \geq 1$. There are $B_j$ terms and they sum to at least 1, so not all terms can be smaller than $1/B_j$ (otherwise they’d sum to something less than 1). In other words, some variable $y_i$ in the sum is at least $1/B_j \geq 1/B$, and so the corresponding $x_i$ is set to 1 in the rounded solution, corresponding to a set $S_i$ that contains $e_j$. This finishes the proof of 1.
Now let’s prove 2. For each $i$ with $x_i = 1$, the corresponding variable satisfies $y_i \geq 1/B$, and in particular $1 \leq B y_i$. Now we can simply bound the sum.
\displaystyle \begin{aligned} Z(x) = \sum_i x_i &\leq \sum_i x_i (B y_i) \\ &\leq B \sum_{i} y_i \\ &= B \cdot \textup{OPT}_{\textup{LP}} \end{aligned}
The second inequality is true because some of the $x_i$ are zero, but we can ignore them when we upper bound and just include all the $y_i$. This proves part 2 and the theorem.
$\square$
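To make the relax-and-round scheme concrete, here is a minimal Python sketch using scipy's LP solver (my own illustration; `linprog` and its arguments are standard scipy, but the helper name and the floating-point tolerance are my choices):

    import numpy as np
    from scipy.optimize import linprog

    def lp_round_set_cover(universe, sets):
        # Solve the relaxed LP, then round y_i >= 1/B up to 1.
        elements = list(universe)
        # A[j][i] = 1 iff element j is in set i.
        A = np.array([[1.0 if e in s else 0.0 for s in sets] for e in elements])
        B = int(A.sum(axis=1).max())  # max number of sets any element occurs in
        # Minimize sum_i x_i subject to A x >= 1 (written as -A x <= -1), 0 <= x <= 1.
        result = linprog(c=np.ones(len(sets)), A_ub=-A, b_ub=-np.ones(len(elements)),
                         bounds=[(0.0, 1.0)] * len(sets), method="highs")
        y = result.x
        # The 1e-9 slack guards against floating-point error in the LP solution.
        return [i for i, y_i in enumerate(y) if y_i >= 1.0 / B - 1e-9]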
A few postscripts to this proof:

1. The proof works equally well when the sets are weighted, i.e. your cost for picking a set is not 1 for every set but depends on some arbitrarily given constants $w_i \geq 0$.
2. We gave a deterministic algorithm rounding $y$ to $x$, but one can get the same result (with high probability) using a randomized algorithm. The idea is to flip a coin with bias $y_i$ roughly $\log(n)$ times and set $x_i = 1$ if and only if the coin lands heads at least once. The guarantee is no better than what we proved, but for some other problems randomness can help you get approximations where we don’t know of any deterministic algorithms to get the same guarantees. I can’t think of any off the top of my head, but I’m pretty sure they’re out there.
3. For step 1 we showed that at least one term in the inequality for $e_j$ would be rounded up to 1, and this guaranteed we covered all the elements. A natural question is: why not also round up at most one term of each of these inequalities? It might be that in the worst case you don’t get a better guarantee, but it would be a quick extra heuristic you could use to post-process a rounded solution.
4. Solving linear programs is slow. There are faster methods based on so-called “primal-dual” methods that use information about the dual of the linear program to construct a solution to the problem. Goemans and Williamson have a nice self-contained chapter on their website about this with a ton of applications.
Williamson and Shmoys have a large textbook called The Design of Approximation Algorithms. One problem is that this field is like a big heap of unrelated techniques, so it’s not like the book will build up some neat theoretical foundation that works for every problem. Rather, it’s messy and there are lots of details, but there are definitely diamonds in the rough, such as the problem of (and algorithms for) coloring 3-colorable graphs with “approximately 3” colors, and the infamous unique games conjecture.
I wrote a post a while back giving conditions which, if a problem satisfies those conditions, the greedy algorithm will give a constant-factor approximation. This is much better than the worst case $\log(n)$-approximation we saw in this post. Moreover, I also wrote a post about matroids, which is a characterization of problems where the greedy algorithm is actually optimal.
Set cover is one of the main tools that IBM’s AntiVirus software uses to detect viruses. Similarly to the regex golf problem, they find a set of strings that occur in the source code of some viruses but not (usually) in good programs. Then they look for a small set of strings that covers all the viruses, and their virus scan just has to search binaries for those strings. Hopefully the size of your set cover is really small compared to the number of viruses you want to protect against. I can’t find a reference that details this, but that is understandable because it is proprietary software.
Until next time!
# Finding the majority element of a stream
Problem: Given a massive data stream of $n$ values in $\{ 1, 2, \dots, m \}$ and the guarantee that one value occurs more than $n/2$ times in the stream, determine exactly which value does so.
Solution: (in Python)
    def majority(stream):
        held = next(stream)   # current candidate for the majority element
        counter = 1           # unpaired copies of the candidate seen so far
        for item in stream:
            if item == held:
                counter += 1  # one more copy of the candidate
            elif counter == 0:
                held = item   # old candidate is fully paired off; start fresh
                counter = 1
            else:
                counter -= 1  # implicitly pair `item` with a copy of the candidate
        return held
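For example (a hypothetical check of my own, not from the original post):

    stream = iter([3, 1, 3, 2, 3, 3, 1, 3])
    print(majority(stream))  # 3, which occurs 5 times out of 8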
Discussion: Let’s prove correctness. Say that $s$ is the unknown value that occurs more than $n/2$ times. The idea of the algorithm is that if you could pair up elements of your stream so that distinct values are paired up, and then you “kill” these pairs, then $s$ will always survive. The way this algorithm pairs up the values is by holding onto the most recent value that has no pair (implicitly, by keeping a count of how many copies of that value you’ve seen). Then when you come across a new element, you decrement the counter and implicitly account for one new pair.
Let’s analyze the complexity of the algorithm. Clearly the algorithm only uses a single pass through the data. Next, if the stream has size $n$, then this algorithm uses $O(\log(n) + \log(m))$ space. Indeed, if the stream entirely consists of a single value (say, a stream of all 1’s) then the counter will be $n$ at the end, which takes $\log(n)$ bits to store. On the other hand, if there are $m$ possible values then storing the largest requires $\log(m)$ bits.
Finally, the guarantee that one value occurs more than $n/2$ times is necessary. If that is not the case, the algorithm could output anything (including the most infrequent element!). Moreover, without this guarantee every algorithm that solves the problem must use at least $\Omega(n)$ space in the worst case. In particular, say that $m=n$, the first $n/2$ items are all distinct, and the last $n/2$ items are all the same one, the majority value $s$. If you do not know $s$ in advance, then you must remember which symbols occurred in the first half of the stream, because any of them could be $s$. So the guarantee allows us to bypass that barrier.
This algorithm can be generalized to detect $k$ items with frequency above some threshold $n/(k+1)$ using space $O(k \log n)$. The idea is to keep $k$ counters instead of one, adding new elements when any counter is zero. When you see an element not being tracked by your $k$ counters (which are all positive), you decrement all the counters by 1. This is like a $k$-to-one matching rather than a pairing.
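That generalization is often called the Misra-Gries algorithm; here is a minimal sketch (my own implementation, not from the original post):

    def misra_gries(stream, k):
        # Track up to k candidates; any value with frequency > n/(k+1) survives.
        counters = {}
        for item in stream:
            if item in counters:
                counters[item] += 1
            elif len(counters) < k:
                counters[item] = 1
            else:
                # "Kill" a (k+1)-tuple of distinct values: decrement everything.
                for key in list(counters):
                    counters[key] -= 1
                    if counters[key] == 0:
                        del counters[key]
        return counters

As with the single-counter version, a second pass over the stream is needed to check which surviving candidates actually exceed the threshold.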
# When Greedy Algorithms are Perfect: the Matroid
Greedy algorithms are by far one of the easiest and most well-understood algorithmic techniques. There is a wealth of variations, but at its core the greedy algorithm optimizes something using the natural rule, “pick what looks best” at any step. So a greedy routing algorithm would say to a routing problem: “You want to visit all these locations with minimum travel time? Let’s start by going to the closest one. And from there to the next closest one. And so on.”
Because greedy algorithms are so simple, researchers have naturally made a big effort to understand their performance. Under what conditions will they actually solve the problem we’re trying to solve, or at least get close? In a previous post we gave some easy-to-state conditions under which greedy gives a good approximation, but the obvious question remains: can we characterize when greedy algorithms give an optimal solution to a problem?
The answer is yes, and the framework that enables us to do this is called a matroid. That is, if we can phrase the problem we’re trying to solve as a matroid, then the greedy algorithm is guaranteed to be optimal. Let’s start with an example when greedy is provably optimal: the minimum spanning tree problem. Throughout the article we’ll assume the reader is familiar with the very basics of linear algebra and graph theory (though we’ll remind ourselves what a minimum spanning tree is shortly). For a refresher, this blog has primers on both subjects. But first, some history.
## History
Matroids were first introduced by Hassler Whitney in 1935, and independently discovered a little later by B.L. van der Waerden (a big name in combinatorics). They were both interested in devising a general description of “independence,” the properties of which are strikingly similar when specified in linear algebra and graph theory. Since then the study of matroids has blossomed into a large and beautiful theory, one part of which is the characterization of the greedy algorithm: greedy is optimal on a problem if and only if the problem can be represented as a matroid. Mathematicians have also characterized which matroids can be modeled as spanning trees of graphs (we will see this momentarily). As such, matroids have become a standard topic in the theory and practice of algorithms.
## Minimum Spanning Trees
It is often natural in an undirected graph $G = (V,E)$ to find a connected subset of edges that touch every vertex. As an example, if you’re working on a power network you might want to identify a “backbone” of the network so that you can use the backbone to cheaply travel from any node to any other node. Similarly, in a routing network (like the internet) it costs a lot of money to lay down cable, so it’s in the interest of internet service providers to design analogous backbones into their infrastructure.
A minimal subset of edges in a backbone like this is guaranteed to form a tree. This is simply because if you have a cycle in your subgraph, then removing any edge on that cycle doesn’t break connectivity: you can still get from any vertex to any other (and trees are the maximal subgraphs without cycles). As such, these “backbones” are called spanning trees. “Span” here means that you can get from any vertex to any other vertex; it suggests the connection to linear algebra that we’ll describe later, and it’s a simple property of a tree that there is a unique path between any two vertices in the tree.
An example of a spanning tree
When your edges $e \in E$ have nonnegative weights $w_e \in \mathbb{R}^{\geq 0}$, we can further ask to find a minimum cost spanning tree. The cost of a spanning tree $T$ is just the sum of its edges, and it’s important enough of a definition to offset.
Definition: A minimum spanning tree $T$ of a weighted graph $G$ (with weights $w_e \geq 0$ for $e \in E$) is a spanning tree which minimizes the quantity
$w(T) = \sum_{e \in T} w_e$
There are a lot of algorithms to find minimal spanning trees, but one that will lead us to matroids is Kruskal’s algorithm. It’s quite simple. We’ll maintain a forest $F$ in $G$, which is just a subgraph consisting of a bunch of trees that may or may not be connected. At the beginning $F$ is just all the vertices with no edges. And then at each step we add to $F$ the edge $e$ whose weight is smallest and also does not introduce any cycles into $F$. If the input graph $G$ is connected then this will always produce a minimal spanning tree.
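Here is a short Python sketch of Kruskal's algorithm (my own, using a simple union-find to detect cycles):

    def kruskal(vertices, weighted_edges):
        # weighted_edges is a list of (weight, u, v) triples.
        parent = {v: v for v in vertices}

        def find(v):
            # Walk up to the root of v's component, halving paths as we go.
            while parent[v] != v:
                parent[v] = parent[parent[v]]
                v = parent[v]
            return v

        tree = []
        for w, u, v in sorted(weighted_edges):
            root_u, root_v = find(u), find(v)
            if root_u != root_v:  # adding (u, v) creates no cycle
                parent[root_u] = root_v
                tree.append((u, v))
        return tree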
Theorem: Kruskal’s algorithm produces a minimal spanning tree of a connected graph.
Proof. Call $F_t$ the forest produced at step $t$ of the algorithm. Then $F_0$ is the set of all vertices of $G$ and $F_{n-1}$ is the final forest output by Kruskal’s (as a quick exercise, prove all spanning trees on $n$ vertices have $n-1$ edges, so we will stop after $n-1$ rounds). It’s clear that $F_{n-1}$ is a tree because the algorithm guarantees no $F_i$ will have a cycle. And any tree with $n-1$ edges is necessarily a spanning tree, because if some vertex were left out then there would be $n-1$ edges on a subgraph of $n-1$ vertices, necessarily causing a cycle somewhere in that subgraph.
Now we’ll prove that $F_{n-1}$ has minimal cost. We’ll prove this in a similar manner to the general proof for matroids. Indeed, say you had a tree $T$ whose cost is strictly less than that of $F_{n-1}$ (we can also suppose that $T$ is minimal, but this is not necessary). Pick the minimal weight edge $e \in T$ that is not in $F_{n-1}$. Adding $e$ to $F_{n-1}$ introduces a unique cycle $C$ in $F_{n-1}$. This cycle has some strange properties. First, $e$ has the highest cost of any edge on $C$. For otherwise, Kruskal’s algorithm would have chosen it before the heavier weight edges. Second, there is another edge in $C$ that’s not in $T$ (because $T$ was a tree it can’t have the entire cycle). Call such an edge $e'$. Now we can remove $e'$ from $F_{n-1}$ and add $e$. This can only increase the total cost of $F_{n-1}$, but this transformation produces a tree with one more edge in common with $T$ than before. This contradicts that $T$ had strictly lower weight than $F_{n-1}$, because repeating the process we described would eventually transform $F_{n-1}$ into $T$ exactly, while only increasing the total cost.
$\square$
Just to recap, we defined sets of edges to be “good” if they did not contain a cycle, and a spanning tree is a maximal set of edges with this property. In this scenario, the greedy algorithm performed optimally at finding a spanning tree with minimal total cost.
## Columns of Matrices
Now let’s consider a different kind of problem. Say I give you a matrix like this one:
$\displaystyle A = \begin{pmatrix} 2 & 0 & 1 & -1 & 0 \\ 0 & -4 & 0 & 1 & 0 \\ 0 & 0 & 1 & 0 & 7 \end{pmatrix}$
In the standard interpretation of linear algebra, this matrix represents a linear function $f$ from one vector space $V$ to another $W$, with the basis $(v_1, \dots, v_5)$ of $V$ being represented by columns and the basis $(w_1, w_2, w_3)$ of $W$ being represented by the rows. Column $j$ tells you how to write $f(v_j)$ as a linear combination of the $w_i$, and in so doing uniquely defines $f$.
Now one thing we want to calculate is the rank of this matrix. That is, what is the dimension of the image of $V$ under $f$? By linear algebraic arguments we know that this is equivalent to asking “how many linearly independent columns of $A$ can we find”? An interesting consequence is that if you have two sets of columns that are both linearly independent and maximally so (adding any other column to either set would necessarily introduce a dependence in that set), then these two sets have the same size. This is part of why the rank of a matrix is well-defined.
If we were to give the columns of $A$ costs, then we could ask about finding the minimal-cost maximally-independent column set. It sounds like a mouthful, but it’s exactly the same idea as with spanning trees: we want a set of vectors that spans the whole column space of $A$, but contains no “cycles” (linearly dependent combinations), and we want the cheapest such set.
So we have two kinds of “independence systems” that seem to be related. One interesting question we can ask is whether these kinds of independence systems are “the same” in a reasonable way. Hardcore readers of this blog may see the connection quite quickly. For any graph $G = (V,E)$, there is a natural linear map from $E$ to $V$, so that a linear dependence among the columns (edges) corresponds to a cycle in $G$. This map is called the incidence matrix by combinatorialists and the first boundary map by topologists.
The map is easy to construct: for each edge $e = (v_i,v_j)$ you add a column with a 1 in the $j$-th row and a $-1$ in the $i$-th row. Then taking a sum of edges gives you zero if and only if the edges form a cycle. So we can think of a set of edges as “independent” if they don’t contain a cycle. It’s a little bit less general than independence over $\mathbb{R}$, but you can make it exactly the same kind of independence if you change your field from real numbers to $\mathbb{Z}/2\mathbb{Z}$. We won’t do this because it will detract from our end goal (to analyze greedy algorithms in realistic settings), but for further reading this survey of Oxley assumes that perspective.
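As a quick illustration (my own example, not from the original post), here is the incidence matrix of a triangle on three vertices; its three columns sum to zero, witnessing the cycle:

    import numpy as np

    edges = [(0, 1), (1, 2), (2, 0)]      # a 3-cycle
    A = np.zeros((3, len(edges)))         # rows are vertices, columns are edges
    for col, (i, j) in enumerate(edges):
        A[i, col] = -1
        A[j, col] = 1
    print(A.sum(axis=1))                  # [0. 0. 0.]: the columns are dependent
    print(np.linalg.matrix_rank(A))       # 2: any two of the three edges are independent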
So with the recognition of how similar these notions of independence are, we are ready to define matroids.
## The Matroid
So far we’ve seen two kinds of independence: “sets of edges with no cycles” (also called forests) and “sets of linearly independent vectors.” Both of these share two trivial properties: there are always nonempty independent sets, and every subset of an independent set is independent. We will call any family of subsets with this property an independence system.
Definition: Let $X$ be a finite set. An independence system over $X$ is a family $\mathscr{I}$ of subsets of $X$ with the following two properties.
1. $\mathscr{I}$ is nonempty.
2. If $I \in \mathscr{I}$, then so is every subset of $I$.
This is too general to characterize greedy algorithms, so we need one more property shared by our examples. There are a few things we could ask for, but here’s one nice property that turns out to be enough.
Definition: A matroid $M = (X, \mathscr{I})$ is a set $X$ and an independence system $\mathscr{I}$ over $X$ with the following property:
If $A, B$ are in $\mathscr{I}$ with $|A| = |B| + 1$, then there is an element $a \in A \setminus B$ such that $B \cup \{ a \} \in \mathscr{I}$.
In other words, this property says if I have an independent set that is not maximally independent, I can grow the set by adding some suitably-chosen element from a larger independent set. We’ll call this the extension property. For a warmup exercise, let’s prove that the extension property is equivalent to the following (assuming the other properties of a matroid):
For every subset $Y \subset X$, all maximal independent sets contained in $Y$ have equal size.
Proof. For one direction, if you have two maximal sets $A, B \subset Y \subset X$ that are not the same size (say $A$ is bigger), then you can take any subset of $A$ whose size is exactly $|B| + 1$, and use the extension property to make $B$ larger, a contradiction. For the other direction, say that I know all maximal independent sets of any $Y \subset X$ have the same size, and you give me $A, B \subset X$ with $|A| = |B| + 1$. I need to find an $a \in A \setminus B$ that I can add to $B$ and keep it independent. What I do is take the subset $Y = A \cup B$. Now the sizes of $A, B$ don’t change, but $B$ can’t be maximal inside $Y$ because it’s smaller than $A$ ($A$ might not be maximal either, but it’s still independent). And the only way to extend $B$ is by adding something from $A$, as desired.
$\square$
So we can use the extension property and the cardinality property interchangeably when talking about matroids. Continuing to connect matroid language to linear algebra and graph theory, the maximal independent sets of a matroid are called bases, the size of any basis is the rank of the matroid, and the minimal dependent sets are called circuits. In fact, you can characterize matroids in terms of the properties of their circuits, which are dual to the properties of bases (and hence all independent sets) in a very concrete sense.
But while you could spend all day characterizing the many kinds of matroids and comatroids out there, we are still faced with the task of seeing how the greedy algorithm performs on a matroid. That is, suppose that your matroid $M = (X, \mathscr{I})$ has a nonnegative real number $w(x)$ associated with each $x \in X$. And suppose we had a black-box function to determine if a given set $S \subset X$ is independent. Then the greedy algorithm maintains a set $B$, and at every step adds a minimum weight element that maintains the independence of $B$. If we measure the cost of a subset by the sum of the weights of its elements, then the question is whether the greedy algorithm finds a minimum weight basis of the matroid.
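In code, the greedy algorithm with a black-box independence oracle is just a few lines (a sketch of my own):

    def matroid_greedy(X, weight, is_independent):
        # Build a minimum-weight maximal independent set, greedily.
        B = set()
        for x in sorted(X, key=weight):  # cheapest elements first
            if is_independent(B | {x}):
                B.add(x)
        return B

For the graphic matroid, `is_independent` would check that a set of edges is acyclic, and this specializes exactly to Kruskal's algorithm.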
The answer is even better than yes. In fact, the answer is that the greedy algorithm performs perfectly if and only if the problem is a matroid! More rigorously,
Theorem: Suppose that $M = (X, \mathscr{I})$ is an independence system, and that we have a black-box algorithm to determine whether a given set is independent. Define the greedy algorithm to iteratively add the cheapest element of $X$ that maintains independence. Then the greedy algorithm produces a maximally independent set $S$ of minimal cost for every nonnegative cost function on $X$, if and only if $M$ is a matroid.
It’s clear that the algorithm will produce a set that is maximally independent. The only question is whether what it produces has minimum weight among all maximally independent sets. We’ll break the theorem into the two directions of the “if and only if”:
Part 1: If $M$ is a matroid, then greedy works perfectly no matter the cost function.
Part 2: If greedy works perfectly for every cost function, then $M$ is a matroid.
Proof of Part 1.
Call the cost function $w : X \to \mathbb{R}^{\geq 0}$, and suppose that the greedy algorithm picks elements $B = \{ x_1, x_2, \dots, x_r \}$ (in that order). It’s easy to see that $w(x_1) \leq w(x_2) \leq \dots \leq w(x_r)$. Now if you give me any list of $r$ independent elements $y_1, y_2, \dots, y_r \in X$ that has $w(y_1) \leq \dots \leq w(y_r)$, I claim that $w(x_i) \leq w(y_i)$ for all $i$. This proves what we want, because if there were a basis of size $r$ with smaller weight, sorting its elements by weight would give a list contradicting this claim.
To prove the claim, suppose to the contrary that it were false, and for some $k$ we have $w(x_k) > w(y_k)$. Moreover, pick the smallest $k$ for which this is true. Note $k > 1$, and so we can look at the special sets $S = \{ x_1, \dots, x_{k-1} \}$ and $T = \{ y_1, \dots, y_k \}$. Now $|T| = |S|+1$, so by the matroid property there is some $j$ between $1$ and $k$ so that $S \cup \{ y_j \}$ is an independent set (and $y_j$ is not in $S$). But then $w(y_j) \leq w(y_k) < w(x_k)$, and so the greedy algorithm would have picked $y_j$ before it picks $x_k$ (and the strict inequality means they’re different elements). This contradicts how the greedy algorithm runs, and hence proves the claim.
Proof of Part 2.
We’ll prove this contrapositively as follows. Suppose we have our independence system and it doesn’t satisfy the last matroid condition. Then we’ll construct a special weight function that causes the greedy algorithm to fail. So let $A,B$ be independent sets with $|A| = |B| + 1$, but for every $a \in A \setminus B$ adding $a$ to $B$ never gives you an independent set.
Now what we’ll do is define our weight function so that the greedy algorithm picks the elements we want in the order we want (roughly). In particular, we’ll assign all elements of $A \cap B$ a tiny weight we’ll call $w_1$. For elements of $B - A$ we’ll use $w_2$, and for $A - B$ we’ll use $w_3$, with $w_4$ for everything else. In a more compact notation:

$\displaystyle w(x) = \begin{cases} w_1 & \textup{if } x \in A \cap B \\ w_2 & \textup{if } x \in B - A \\ w_3 & \textup{if } x \in A - B \\ w_4 & \textup{otherwise} \end{cases}$
We need two things for this weight function to screw up the greedy algorithm. The first is that $w_1 < w_2 < w_3 < w_4$, so that greedy picks the elements in the order we want. Note that this means it’ll first pick all of $A \cap B$, and then all of $B - A$, and by assumption it won’t be able to pick anything from $A - B$, but since $B$ is assumed to be non-maximal, we have to pick at least one element from $X - (A \cup B)$ and pay $w_4$ for it.
So the second thing we want is that the cost of doing greedy is worse than picking any maximally independent set that contains $A$ (and we know that there has to be some maximal independent set containing $A$). In other words, if we call $m$ the size of a maximally independent set, we want
$\displaystyle |A \cap B| w_1 + |B-A|w_2 + (m - |B|)w_4 > |A \cap B|w_1 + |A-B|w_3 + (m-|A|)w_4$
This can be rearranged (using the fact that $|A| = |B|+1$) to
$\displaystyle w_4 > |A-B|w_3 - |B-A|w_2$
The point here is that the greedy picks too many elements of weight $w_4$, since if we were to start by taking all of $A$ (instead of all of $B$), then we could get by with one fewer. That might not be optimal, but it’s better than greedy and that’s enough for the proof.
So we just need to make $w_4$ large enough to make this inequality hold, while still maintaining $w_2 < w_3$. There are probably many ways to do this, and here’s one. Pick some $0 < \varepsilon < 1$, and set
It’s trivial that $w_1 < w_2$ and $w_3 < w_4$. For the rest we need some observations. First, the fact that $|A-B| = |B-A| + 1$ implies that $w_2 < w_3$. Second, both $A-B$ and $B-A$ are nonempty: the first because $|A| = |B| + 1$, and the second because otherwise $B \subset A$, and then the second property of independence systems would contradict our assumption that augmenting $B$ with elements of $A$ breaks independence. Using this, we can divide by these quantities to get
$\displaystyle w_4 = 2 > 1 = \frac{|A-B|(1 + \varepsilon)}{|A-B|} - \frac{|B-A|\varepsilon}{|B-A|}$
This proves the claim and finishes the proof.
$\square$
As a side note, we proved everything here with respect to minimizing the sum of the weights, but one can prove an identical theorem for maximization. The only part that’s really different is picking the clever weight function in part 2. In fact, you can convert between the two by defining a new weight function that subtracts the old weights from some fixed number $N$ that is larger than any of the original weights. So these two problems really are the same thing.
This is pretty amazing! So if you can prove your problem is a matroid then you have an awesome algorithm automatically. And if you run the greedy algorithm for fun and it seems like it works all the time, then that may be hinting that your problem is a matroid. This is one of the best situations one could possibly hope for.
But as usual, there are a few caveats to consider. They are both related to efficiency. The first is the black box algorithm for determining if a set is independent. In a problem like minimum spanning tree or finding independent columns of a matrix, there are polynomial time algorithms for determining independence. These two can both be done, for example, with Gaussian elimination. But there’s nothing to stop our favorite matroid from requiring an exponential amount of time to check if a set is independent. This makes greedy all but useless, since we need to check for independence many times in every round.
Another, perhaps subtler, issue is that the size of the ground set $X$ might be exponentially larger than the rank of the matroid. In other words, at every step our greedy algorithm needs to find a new element to add to the set it’s building up. But there could be such a huge ocean of candidates, all but a few of which break independence. In practice an algorithm might be working with $X$ implicitly, so we could still hope to solve the problem if we had enough knowledge to speed up the search for a new element.
There are still other concerns. For example, a naive approach to implementing greedy takes quadratic time, since you may have to look through every element of $X$ to find the minimum-cost guy to add. What if you just have to have faster runtime than $O(n^2)$? You can still be interested in finding more efficient algorithms that still perform perfectly, and to the best of my knowledge there’s nothing that says that greedy is the only exact algorithm for your favorite matroid. And then there are models where you don’t have direct/random access to the input, and lots of other ways that you can improve on greedy. But those stories are for another time.
Until then!
http://math.stackexchange.com/questions/15866/arc-arc-intersection-arcs-specified-by-endpoints-and-height | # arc-arc intersection, arcs specified by endpoints and height
I need to compute the intersection(s) between two circular arcs. Each arc is specified by its endpoints and their height. The height is the perpendicular distance from the chord connecting the endpoints to the middle of the arc. I use this representation because it is numerically robust for very slightly bent arcs, as well as straight line segments, for which the height is zero. In these cases, representing an arc using the center of its circle could lead to the center being far far away from the endpoints of the arc, and hence, numerically unstable.
My question at the highest level is how I would go about computing the intersection points, given that the centers of the circles of the arcs cannot necessarily be computed robustly. At a lower level, I am wondering if there is a parameterization of an arc using only the information I have stated above (which does not include the circle center). Of course, keep in mind numerical robustness is my principal concern here; otherwise I would just do the naive thing and compute the circle center for all non-linear arcs and hope for the best.
Edit: Formula for computing center of circle of arcs:
Suppose the chord length is $2t$ and the height is $h$. The distance from the chord to the circle center is $c$, so that $r=h+c$. Since $r^2 = t^2 + c^2$, it follows that $c=(t^2-h^2)/(2h)$, which breaks down when $h$ is very small. Computing the location of the circle center is then some simple vector arithmetic using the chord vector and its perpendicular.
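For concreteness, here is the naive center computation in Python (my own sketch, just to show where the division by $h$ enters):

    import numpy as np

    def arc_center(p0, p1, h):
        # Naive circle center for an arc with endpoints p0, p1 and signed height h.
        # Blows up as h -> 0, which is exactly the robustness problem.
        p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
        mid = (p0 + p1) / 2
        chord = p1 - p0
        t = np.linalg.norm(chord) / 2
        n = np.array([-chord[1], chord[0]]) / (2 * t)  # unit normal to the chord
        c = (t * t - h * h) / (2 * h)                  # distance from chord to center
        return mid - c * n                             # center lies opposite the bulge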
Cool, so how are you drawing your arcs (i.e., what parametric equations are you using to draw arcs)? – Guess who it is. Dec 30 '10 at 1:30
Haven't thought about it much, but if you have line-arc intersection available, I believe you can (robustly) reduce arc-arc to line-arc using inversion. btw, two endpoints + height determines two arcs. – Aryabhata Dec 30 '10 at 4:34
@Moron: the height is a signed number, and taken in a definite orientation depending on the orientation of the endpoints. In short, the height uniquely defines a single arc. – Victor Liu Dec 30 '10 at 4:41
Yeah, just nitpicking (distance usually is unsigned). I was pretty sure you had something to uniquely identify. Just pointed that to reduce possible ambiguity people might have come across... – Aryabhata Dec 30 '10 at 4:56
This is an interesting conundrum, and what I will suggest is just one possible approach, by no means a definitive answer. Perhaps what you could do is compute a Bézier curve segment approximating each of your arcs, and then compute the intersection between Bézier segments. To compute the Bézier segment for an arc, you need the tangent vector at one endpoint (the tangent at the other endpoint is obtained by symmetry). So this approach reduces to finding a tangent angle at one endpoint.
Let the angle subtended by an arc at the circle center be $\theta$. If the arc's chord length is $c$ and its height $h$, then, if I've calculated correctly, $$\frac{\theta}{2} = \cos^{-1} \left( \frac{c^2 - 4h^2}{c^2+4h^2} \right) \;.$$ This avoids calculating the circle radius $r = (c^2 + 4h^2)/(8h)$. Now from $\theta$ you could compute the needed tangent angle, and from there a Bézier segment.
I have not analyzed this to see if it is indeed robust.
If Victor doesn't need an exact solution anyway, I suppose crossing Béziers ought to do (though if you look at it, it's a bit funny that you're solving a problem of intersecting quadratics by intersecting cubics... :) ). +1 anyway! – Guess who it is. Dec 30 '10 at 1:15
@J.M.: Yes, you are right, this approach is a bit counterintuitive, and I am uncertain if it worthwhile. And you are right that if he wants an exact solution, it cannot suffice. However, nearly every circle we see (e.g., in Adobe Illustrator/Photoshop/Flash/etc.) is a Bézier approximation to a true circle. – Joseph O'Rourke Dec 30 '10 at 1:20
This is an interesting solution, and I particularly like the formula for the tangent angle. I think that a rational quadratic B-spline can represent circular arcs exactly, but I haven't looked into how hard it is to compute intersections with them. – Victor Liu Dec 30 '10 at 1:29
@Victor: They do, but I'm getting the feeling that the numerical instability you were trying so hard to avoid for arcs of tiny curvature would find a way to manifest itself when you use rational B-splines... anyway, you have to experiment. – Guess who it is. Dec 30 '10 at 1:43
@Rahul: Create a circle in Adobe Illustrator, and note that it is composed of four arcs with tangents at each endpoint. These tangents can be stretched. Circles in Illustrator are four Bézier segments. – Joseph O'Rourke Dec 30 '10 at 14:47
Have you considered finding the intersections using an implicit form for the circles, $$\frac{x^2}{r^2} + \frac{y^2}{r^2} + ax + by + c = 0?$$ This representation doesn't have any coefficients that diverge as the circle approaches a straight line. To find intersections, you'll have to solve a quadratic equation whose leading coefficient could be zero or arbitrarily close to it, but the alternative form of the quadratic formula should be able to deal with that robustly.
You'll then have to do some jiggery-pokery to figure out whether the intersection points lie within the arcs. If the arc's bending angle is smaller than $\pi$, a projection onto the line joining the endpoints will suffice.
(Disclaimer: While all of this feels like it should work, I haven't analyzed it in any detail. Also, there could still be a problem when the circle is close to a line and you want the longer arc. But I can't imagine that's a case that would turn up in any practical application.)
Update: For a concrete example, here is the equation for a circular arc passing through the three points $(0,0)$, $(0.5, h)$, and $(1,0)$: $$\kappa^2 x^2 + \kappa^2 y^2 - \kappa^2 x - 2\eta y = 0,$$ where \begin{align}\kappa &= \frac{8h}{4h^2 + 1}, \\ \eta &= \frac{8h(4h^2-1)}{(4h^2+1)^2}.\end{align} As you can see, the coefficients remain bounded as $h \to 0$.
Update 2: Wait, that equation becomes trivial if $h = 0$, which is bad. We really want something like $x^2/r + y^2/r + ax + by + c,$ i.e. multiply the previous expression through by $r$. Then for the same example, our equation becomes $$\kappa x^2 + \kappa y^2 - \kappa x - 2\eta' y = 0,$$ where $\eta' = (4h^2-1)/(4h^2+1)$. Here are some explicit values.
$h = 1/2$: $$2 x^2 + 2 y^2 - 2 x = 0,$$ $h = 0.01$: $$0.07997 x^2 + 0.07997 y^2 - 0.07997 x + 1.998 y = 0,$$ $h = 0$: $$2 y = 0.$$
By the way, in this format, the linear terms will always be simply $-2(x_0/r)x$ and $-2(y_0/r)y$, where the center of the circle is at $(x_0,y_0)$. As the center goes to infinity but the endpoints remain fixed, these coefficients remain bounded and nonzero (i.e. not both zero).
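To make the bounded-coefficient claim concrete, here is a small Python check of the example above (my own code, using the formulas from the updates):

    def implicit_coeffs(h):
        # q*(x^2 + y^2) + cx*x + cy*y = 0 for the arc through (0,0), (0.5,h), (1,0).
        q = 8 * h / (4 * h**2 + 1)
        cy = -2 * (4 * h**2 - 1) / (4 * h**2 + 1)
        return q, -q, cy  # coefficients of (x^2 + y^2), x, and y

    for h in (0.5, 0.01, 0.0):
        print(h, implicit_coeffs(h))
    # 0.5  -> (2.0, -2.0, 0.0)                 i.e. 2x^2 + 2y^2 - 2x = 0
    # 0.01 -> (0.0799..., -0.0799..., 1.998...)
    # 0.0  -> (0.0, 0.0, 2.0)                  i.e. 2y = 0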
I gathered from his problem that he's only considering small arcs and not big arcs... anyway, it theoretically works, but I'm not that confident that the ill condition he was trying so hard to avoid would not manifest in the coefficients of the quadratic. – Guess who it is. Dec 30 '10 at 4:07
@J.M.: I'm pretty sure it wouldn't. I'll see if I can work out the coefficients explicitly and add them to my answer when I have time. – Rahul Dec 30 '10 at 4:12
@Rahul: I'll have to work out the coefficients to see if this is a stable route. Taking O'Rourke's advice, I am looking into Bezier intersection (tom.cs.byu.edu/~tom/papers/C3CIC.pdf) and using implicitization, I feel like it might reduce to your solution. Also, you can disregard the case where the arc is almost a complete circle since I would simply perform a circle-arc or circle-circle test in that case. See also my comment above about the uniqueness of the arc specification. – Victor Liu Dec 30 '10 at 4:47
@Victor: I added an equation for the coefficients in a concrete example. – Rahul Dec 30 '10 at 6:21
@J.M.: Please see my edit. Unfortunately only one person can be notified per comment... – Rahul Dec 30 '10 at 6:21
http://link.springer.com/article/10.1007/s00365-005-0614-9 | Volume 24, Issue 1, pp 91-112
# Jackson-Type Inequality for Doubling Weights on the Sphere
## Abstract
In the one-dimensional case, Jackson's inequality and its converse for weighted algebraic polynomial approximation, as well as many important weighted polynomial inequalities, such as Bernstein, Marcinkiewicz, Nikolskii, Schur, Remez, etc., have been proved recently by Giuseppe Mastroianni and Vilmos Totik under minimal assumptions on the weights. In most cases this minimal assumption is the doubling condition. In this paper, we establish Jackson's theorem and its Stechkin-type converse for spherical polynomial approximation with respect to doubling weights on the unit sphere.
https://engineering.stackexchange.com/questions/31878/how-does-this-agv-rotate-without-rotating-the-payload | # How does this AGV rotate without rotating the payload?
How does an AGV carrying a payload of huge height compared to its own size turn without turning the payload? Even if the body is attached to a central bearing, and the bearing is very smooth, how are these bots able to rotate so precisely?
https://www.physicsforums.com/threads/rigid-body-dynamics.123761/ | # Rigid body dynamics
1. Jun 14, 2006
### Vasco_F
Hi,
I'm developing a video game, in which I'm making a character with rigid-body physics (sometimes called "ragdoll" physics). The way I've made it is probably not completely realistic, because I only use velocity vectors to calculate the position of each joint of the "ragdoll", based on an initial velocity vector applied to a joint. If you want to check it out, you can download it here.
The way I do this is illustated in this image here.
Anyway, the problem I have now is how to calculate the velocity vector of joints that make an angle that is restricted (an angle that shouldn't get any bigger, for example). Please see this simplified diagram that illustrates my problem: Diagram
In the diagram, how should I calculate vectors v1 and v2? Note that in the diagram, the entire body should be rigid, because of the angle restriction.
I would truly appreciate any help on this...
2. Jun 14, 2006
### Hootenanny
Staff Emeritus
If I understand the problem correctly, the indicated angle should remain constant? Then the solution is simply
$$\vec{v} = \vec{v_{1}} = \vec{v_{2}}$$
3. Jun 14, 2006
### Vasco_F
That's what I thought at first, but that's not correct, because the whole body should rotate until the rotation stabilizes when the body is in kind of a horizontal position (I don't know how to explain it better, but if you don't understand I'll draw a diagram). Imagine you have something shaped like a "V" on a table and you drag one end.
4. Jun 15, 2006
### pervect
Staff Emeritus
The motion of a rigid body can be specified by giving:
1) the motion of a specific point (pick a point, say for instance point 1, and use it as a reference).
2) the angular velocity of rotation (done by specifying an axis of rotation and an angular velocity, i.e. a vector $\vec{\omega}$).
The formula for the velocity $v_i$ of any point with coordinates $r_i$ will be:
$$v_i = v_{ref} + \vec{\omega} \times (r_i - r_{ref})$$
$v_{ref}$ is the velocity of the reference point
$r_{ref}$ are the coordinates of the reference point
$r_i - r_{ref}$ is the difference in coordinates between the reference point and the arbitrary point $r_i$ which has the velocity $v_i$.
Hopefully you are familiar with the vector cross product; if not, try reading
http://en.wikipedia.org/wiki/Cross_product
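A quick numerical illustration of that formula (my own sketch, not from the thread):

    import numpy as np

    # Velocity of a point on a rigid body: v_i = v_ref + omega x (r_i - r_ref).
    def point_velocity(v_ref, omega, r_ref, r_i):
        return np.asarray(v_ref) + np.cross(omega, np.asarray(r_i) - np.asarray(r_ref))

    # Example: reference point moving at 1 m/s in x, body spinning at 2 rad/s about z.
    v = point_velocity(v_ref=[1.0, 0.0, 0.0], omega=[0.0, 0.0, 2.0],
                       r_ref=[0.0, 0.0, 0.0], r_i=[0.0, 1.0, 0.0])
    print(v)  # [-1. 0. 0.]: the rotation contributes (0,0,2) x (0,1,0) = (-2,0,0)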
5. Jun 15, 2006
### Vasco_F
Thank you!
I just didn't understand one thing: how do I know what the axis of rotation and the angular velocity are?
http://publications.csail.mit.edu/lcs/specpub.php?id=1804 | LCS Publication Details
Publication Title: On Field Constraint Analysis
Publication Author: Wies, Thomas
Additional Authors: Viktor Kuncak, Patrick Lam, Andreas Podelski, Martin Rinard
LCS Document Number: MIT-LCS-TR-1010
Publication Date: 11-3-2005
LCS Group: Computer Architecture
Abstract: We introduce field constraint analysis, a new technique for verifying data structure invariants. A field constraint for a field is a formula specifying a set of objects to which the field can point. Field constraints enable the application of decidable logics to data structures which were originally beyond the scope of these logics, by verifying the backbone of the data structure and then verifying constraints on fields that cross-cut the backbone in arbitrary ways. Previously, such cross-cutting fields could only be verified when they were uniquely determined by the backbone, which significantly limited the range of analyzable data structures. Our field constraint analysis permits non-deterministic field constraints on cross-cutting fields, which allows us to verify invariants of data structures such as skip lists. Non-deterministic field constraints also enable the verification of invariants between data structures, yielding an expressive generalization of static type declarations. The generality of our field constraints requires new techniques, which are orthogonal to the traditional use of structure simulation. We present one such technique and prove its soundness. We have implemented this technique as part of a symbolic shape analysis deployed in the context of the Hob system for verifying data structure consistency. Using this implementation we were able to verify data structures that were previously beyond the reach of similar techniques.
To obtain this publication: MIT-LCS-TR-1010.pdf (pdf format, 23 pages) or MIT-LCS-TR-1010.ps (ps format, 23 pages). To purchase a printed copy of this publication please contact MIT Document Services.
https://brilliant.org/problems/lets-do-velocities/ | # Hinge Velocities
A uniform rod of length $1\text{ m}$ is held horizontally, attached by a hinge at one end to the roof. After some time it is allowed to fall; find the angular speed (in $\text{rad/s}$) of the rod when it becomes vertical.
Take the acceleration due to gravity to be $10\text{ m/s}^2$.
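A quick sanity check by energy conservation (my own addition, not part of the original problem page): the rod's center of mass falls $L/2$, and the moment of inertia about the hinge is $mL^2/3$, so

$$mg\frac{L}{2} = \frac{1}{2} \cdot \frac{mL^2}{3}\,\omega^2 \quad\Rightarrow\quad \omega = \sqrt{\frac{3g}{L}} = \sqrt{30} \approx 5.48\ \text{rad/s}.$$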
https://ftp.aimsciences.org/journal/1531-3492/2010/13/3 | # American Institute of Mathematical Sciences
ISSN: 1531-3492
eISSN: 1553-524X
## Discrete & Continuous Dynamical Systems - B
May 2010, Volume 13, Issue 3
2010, 13(3): 517-535 doi: 10.3934/dcdsb.2010.13.517
Abstract:
In this article we establish the global well-posedness of a recent model proposed by Noguera, Fritz, Clément and Baronnet for simultaneously describing the process of nucleation, growth and ageing of particles in thermodynamically closed and initially supersaturated systems. This model, which applies to precipitation in solution, vapor condensation and crystallization from a simple melt, can be seen as a highly nonlinear age-dependent population problem involving a delayed birth process and a hysteresis damage operator.
2010, 13(3): 537-557 doi: 10.3934/dcdsb.2010.13.537
Abstract:
Some models in population dynamics with intra-specific competition lead to integro-differential equations where the integral term corresponds to nonlocal consumption of resources [8][9]. The principal difference between such equations and the traditional reaction-diffusion equation is that solutions homogeneous in space can lose their stability, resulting in the emergence of spatial or spatio-temporal structures [4]. We study the existence and global bifurcations of such structures. In the case of unbounded domains, transitions between stationary solutions can be observed, resulting in the propagation of generalized travelling waves (GTWs). GTWs are introduced in [18] for reaction-diffusion systems as global-in-time propagating solutions. In this work their existence and properties are studied for the integro-differential equation. As for the reaction-diffusion equation in the monostable case, we prove the existence of generalized travelling waves for all values of the speed greater than or equal to the minimal one. We illustrate these results by numerical simulations in one and two space dimensions and observe a variety of structures of GTWs.
2010, 13(3): 559-575 doi: 10.3934/dcdsb.2010.13.559
Abstract:
Recently, we derived a lattice model for a single species with stage structure in a two-dimensional patchy environment with an infinite number of patches connected locally by diffusion and globally by delayed interaction (IMA J. Appl. Math., 73 (2008), 592-618). The important feature of the model is that it reflects the joint effect of the diffusion dynamics, the nonlocal delayed effect, and the direction of propagation. In this paper we study the asymptotic stability of traveling wavefronts of this model when the immature population is not mobile. Under the assumption that the birth function satisfies the monostable condition, we prove that the traveling wavefront is exponentially stable by means of the weighted energy method, when the initial perturbation around the wave is suitably small in a weighted norm. The exponential convergence rate is also obtained.
2010, 13(3): 577-591 doi: 10.3934/dcdsb.2010.13.577
Abstract:
The main objective of this article is to study the dynamics of the three-dimensional Boussinesq equations with the periodic boundary condition. We prove that when the Rayleigh number $R$ crosses the first critical Rayleigh number $R_c$, the Rayleigh-Bénard problem bifurcates from the basic state to a global attractor $\Sigma$, which is homeomorphic to $S^3$.
2010, 13(3): 593-608 doi: 10.3934/dcdsb.2010.13.593
Abstract:
We are concerned with the Cauchy problem for a viscous shallow water system with a third-order surface-tension term. The global existence and uniqueness of the solution in a space of Besov type are shown for initial data close to a constant equilibrium state away from the vacuum, using Friedrichs regularization and compactness arguments.
2010, 13(3): 609-622 doi: 10.3934/dcdsb.2010.13.609
Abstract:
In this paper, we consider the second-order nonlinear dynamic equation
$(p(t)y^{\Delta}(t))^{\Delta}+f(t, y^{\sigma})g(p(t)y^{\Delta})=0,$
on a time scale $\mathbb{T}$. Our goal is to establish necessary and sufficient conditions for the existence of certain types of solutions of this dynamic equation. We apply results from the theory of lower and upper solutions for related dynamic equations and use several results from calculus.
2010, 13(3): 623-631 doi: 10.3934/dcdsb.2010.13.623
Abstract:
The paper is devoted to four kinds of fifth-order nonlinear wave equations including the Caudrey-Dodd-Gibbon equation, Kupershmidt equation, Kaup-Kupershmidt equation and Sawada-Kotera equation. The exact soliton solution and quasi-periodic solutions are found by using Cosgrove's work and the method of dynamical systems. The geometrical explanations of these solutions are also discussed. To guarantee the existence of the above solutions, the parameter conditions are determined.
2010, 13(3): 633-646 doi: 10.3934/dcdsb.2010.13.633
Abstract:
In this paper, we investigate the traveling wave solutions of $K(m, n)$ equation $u_t+a(u^m)_{x}+(u^n)_{x x x}=0$ by using the bifurcation method and numerical simulation approach of dynamical systems. We obtain some new results as follows: (i) For $K(2, 2)$ equation, we extend the expressions of the smooth periodic wave solutions and obtain a new solution, the periodic-cusp wave solution. Further, we demonstrate that the periodic-cusp wave solution may become the peakon wave solution. (ii) For $K(3, 2)$ equation, we extend the expression of the elliptic smooth periodic wave solution and obtain a new solution, the elliptic periodic-blow-up solution. From the limit forms of the two solutions, we get other three types of new solutions, the smooth solitary wave solutions, the hyperbolic 1-blow-up solutions and the trigonometric periodic-blow-up solutions. (iii) For $K(4, 2)$ equation, we construct two new solutions, the 1-blow-up and 2-blow-up solutions.
2010, 13(3): 647-664 doi: 10.3934/dcdsb.2010.13.647
Abstract:
This paper is concerned with time-periodic solutions of Hamilton-Jacobi equations in which the Hamiltonian is increasing with respect to the unknown variable. When the uniqueness of the periodic solution is not guaranteed, we define a notion of extremal solution and propose two different ways to attain it, together with the corresponding numerical simulations. In the course of the analysis, the ODE case, where we show that things are rather explicit, is also visited.
2010, 13(3): 665-684 doi: 10.3934/dcdsb.2010.13.665
Abstract:
In this article, a fully discrete finite element method is considered for the viscoelastic fluid motion equations arising in the two-dimensional Oldroyd model. A finite element method is proposed for the spatial discretization and the time discretization is based on the backward Euler scheme. Moreover, the stability and optimal error estimates in the $L^2$- and $H^1$-norms for the velocity and $L^2$-norm for the pressure are derived for all time $t>0.$ Finally, some numerical experiments are shown to verify the theoretical predictions.
2010, 13(3): 685-708 doi: 10.3934/dcdsb.2010.13.685
Abstract:
In this paper, we introduce an efficient Legendre-Gauss collocation method for solving nonlinear delay differential equations with variable delay. We analyze the convergence of the single-step and multi-domain versions of the proposed method, and show that the scheme enjoys high order accuracy and can be implemented in a stable and efficient manner. We also make numerical comparison with other methods.
2010, 13(3): 709-728 doi: 10.3934/dcdsb.2010.13.709
Abstract:
This paper is concerned with monotone traveling wave solutions of reaction-diffusion systems with spatio-temporal delay. Our approach is to use a new monotone iteration scheme based on a lower solution in the set of the profiles. The smoothness of upper and lower solutions is not required in this paper. We will apply our results to Nicholson's blowflies systems with non-monotone birth functions and show that the systems admit traveling wave solutions connecting two spatially homogeneous equilibria and the wave shape is monotone. Due to the biological realism, the positivity of the monotone traveling wave solutions can be directly obtained by the construction of suitable upper-lower solutions.
https://keypoint.ng/measures-of-central-tendency/ | MEASURES OF CENTRAL TENDENCY
A measure of central tendency is a single value that summarises the mass of data presented in a distribution; it is used mostly when describing a certain property of a population or sample.
It is a means of determining the most typical value of the property in a given data set.
The three types of Measures of Central Tendency are elucidated below:
MEAN:
The arithmetic mean is the value which each item in a distribution would get if the sum total of all the items in the distribution were equally shared among the items. The mathematical formula for calculating the mean is
X̄ = ∑X ÷ N
where:
X̄ stands for the arithmetic mean
∑X stands for the total sum of the items
N stands for the number of items.
Example: Given these numbers – 2, 4, 6, 8, 10 – as the values of items of a distribution, calculate the mean.
Solution: Now, to get the mean of the above given numbers, we sum up the total of the items and then divide by the number of the items.
i.e. (2+4+6+8+10) = 30
30 ÷ 5 = 6.
Advantages of the mean:
1. It serves as an instrument of comparison.
2. It is the most commonly used and reliable measure of central tendency.
3. It has a stable value.
4. It is the most suitable for further statistical analysis.
Disadvantages of the mean:
1. The results could be distorted.
2. It is very difficult for it to be located by mere inspection.
3. The results can be influenced by unrepresentative values.
4. It is difficult to compute when the data are many.
MEDIAN
The median is the value of the middle item of a given distribution. In selecting the median, it is pertinent to arrange the items either in ascending or descending order. Where an odd number of items exists, the middle number is the median. For instance, given these numbers 1, 2, 3, 4, 5, the middle number is taken, so the median is 3. But for an even-number distribution, the average of the two middle numbers makes up the median. E.g. 1, 3, 4, 5, 6, 7: the median is (4+5) ÷ 2 = 9 ÷ 2 = 4.5.
Advantages of the median:
1. It can be graphically determined.
2. It is easy to calculate.
3. It gives a balanced value of a data set.
4. It is easily understood and can be used for qualitative data.
Disadvantages of the median:
1. The formula is sometimes misleading and may not yield a correct result.
2. It cannot be calculated as exactly as the mean.
3. Calculation of the median may require re-arrangement of the data before the result can be obtained.
4. It is not suitable for further statistical measures.
MODE
The mode is the value or number that occurs most frequently in a distribution; that is, it is the most common number. E.g. 6, 4, 5, 6, 7, 8.
From the above values, the most frequently occurring number or value is 6. Therefore, the mode of the distribution given is 6.
NOTE: For grouped data, the formula for the mode is:
Mode = L1 + (D1 ÷ (D1 + D2)) × C
Where L1 = Lower class boundary of the modal class
D1 = Excess of modal frequency over frequency of next lower class
D2 = Excess of modal frequency over frequency of next higher class
C = Size of the modal class interval
Advantages of the mode:
1. It is the most popular value.
2. Extreme values have no effect on it.
3. It is simple and easy to compute.
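To make the three measures concrete, here is a short Python sketch (added for illustration; it is not part of the original article) computing each one for the example lists above:

```python
from collections import Counter

def mean(xs):
    # sum of the items divided by the number of items
    return sum(xs) / len(xs)

def median(xs):
    # sort, then take the middle item (odd n) or average the two middle items (even n)
    s = sorted(xs)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 == 1 else (s[mid - 1] + s[mid]) / 2

def mode(xs):
    # the most frequently occurring value
    return Counter(xs).most_common(1)[0][0]

print(mean([2, 4, 6, 8, 10]))      # 6.0
print(median([1, 3, 4, 5, 6, 7]))  # 4.5
print(mode([6, 4, 5, 6, 7, 8]))    # 6
```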
https://math.stackexchange.com/questions/920004/if-p2-is-divisible-by-3-why-is-p-also-divisible-by-3 | # If $p^2$ is divisible by 3, why is $p$ also divisible by 3? [duplicate]
I came across this in proving that $\sqrt{3}$ is irrational.
• $p^2$ has two factors, $p$ and $p$, therefore a $3$ must come from $p$ or $p$. Hey... there's a $3$ in $p$, therefore $3|p$ – Dane Bouchie Sep 5 '14 at 2:37
• $q$ is a prime if and only if $q\mid ab$ implies $q\mid a$ or $q\mid b$. – Frudrururu Sep 5 '14 at 2:43
Take the contrapositive statement: Prove that if $p$ is not divisible by $3$, then $p^2$ is not divisible by $3$.
Try proof by contrapositive:
Assume that for some $p$, $3 \nmid p$; then the only two cases are $p \equiv 1\pmod{3}$ and $p \equiv 2\pmod{3}$. Calculate $p^2 \bmod 3$ and get the conclusion.
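Completing that calculation (a one-line check, added for clarity):
$$1^2 \equiv 1 \pmod 3, \qquad 2^2 = 4 \equiv 1 \pmod 3,$$
so in either case $p^2 \equiv 1 \pmod 3$; hence $3 \nmid p$ implies $3 \nmid p^2$, which is the contrapositive of the claim.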
Even if you don't know (don't remember) any number theory, you can always write $p=3m+k$ where $m$ is an integer and $k\in\{0,1,2\}$. Then, $$p^2=(3m+k)^2=9m^2+6mk+k^2.$$ In order for this to be divisible by $3$, $k^2$ has to be divisible by $3$. You can manually check that only $k=0$ works.
I would take an approach by looking at the prime factorization of $p^2$. Since it's a square, the powers of its prime factorization must all be even numbers. Since it's divisible by $3$, it has a $3$ raised to some nonzero even number.
It follows from this that $p$ must also have $3$ in its prime factorization.
This is a particular instance of Euclid's lemma. It is an easy consequence of prime factorization, but without assuming prime factorization it is slightly less easy.
https://math.stackexchange.com/questions/2072651/if-monad-is-just-a-monoid-object-in-the-category-of-endofunctors | # If Monad is just a Monoid Object in the Category of Endofunctors…
If a Monad is just a Monoid Object in the Category of Endofunctors, where a Monoid is just a construct (product/pair) of a Set and a binary operator, having identity, closure, and associativity;
and a Comonad is just a Comonoid Object in the Category of Endofunctors, i.e. a Monad with all arrows reversed.
According to this diagram,
Does this mean that there also exists a Magmad, a Magma Object in the category of Endofunctors; a Semigroupad, a Semigroup Object in the category of Endofunctors; and a Groupad, a Group Object in the category of Endofunctors; as well as a Comagmad, a Cosemigroupad, and a Cogroupad?
I'm curious because I tried googling these terms but the only instances of Groupad seem to be misspellings of Grouped, so I am curious whether it actually makes sense, and why there is all the hype about Monads in functional programming without any respect for Magmads, Semigroupads, and Groupads.
If there is a Monad which also has an inverse, does that make it a Groupad? And would any data structure that has a binary operation that creates an element of the same data structure be reasonable to call a Magmad? And if it isn't quite a Monad since it lacks identity, would it be reasonable to call it a Semigroupad?
Be careful, when people say that a monad is just a monoid in the category of endofunctors over a given category they mean that a monad is a monoid object in the monoidal category of the endofunctors.
As you can see following the link above a monoid object is formed by an object $c$ of the monoidal category considered, with a binary operation (i.e. a morphism $c\otimes c \to c$) and a unit (i.e. a morphism $I \to c$, where $I$ is the unit of the monoidal structure over the category) satisfying the diagrammatic versions of the axioms of a monoid (associativity and unit axioms).
Following this idea you could easily generalize the construction for structures like magmas and semigroups: you could define a magma object in a monoidal category to be a pair formed by an object $c$ and a morphism $c \otimes c \to c$, and a semigroup object to be a magma object satisfying the diagrammatic version of associativity. Applying this magma/semigroup object-construction to the monoidal category of endofunctors I believe you could get the construction you were looking for.
I honestly do not know whether such concepts would be useful. A great deal of the interest in monads arises from constructions like categories of algebras and the Kleisli category associated to a monad (from what I gather this category is what really matters in computer science, because lots of computations can be modelled as morphisms in Kleisli categories for suitable monads defined over $\mathbf{Set}$). These constructions require the whole structure of the monad, so I doubt that you could get something as useful working with a semigroupad or magmad (following your notation).
Hope this helps.
p.s. As an aside note I point out the fact that you cannot define a group object in an arbitrary monoidal category: this is due to a problem with the inverse axiom, which requires that your monoidal category has mappings of the form $c \to c \otimes c$, usually called duplicators. This in particular implies that you should not be able to define a groupad in any trivial sense (at least none that I can think of).
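To make the distinction concrete on the programming side (a hypothetical Python sketch, not from the answer): Kleisli-style composition for an option-like type needs only bind and its associativity — the "semigroupad" part in the question's terminology — while the unit `pure` is what upgrades the structure to a genuine monad (monoid object):

```python
import math
from typing import Callable, Optional, TypeVar

A, B, C = TypeVar("A"), TypeVar("B"), TypeVar("C")

def bind(x: Optional[A], f: Callable[[A], Optional[B]]) -> Optional[B]:
    # the "multiplication" part: feed a possibly-failing value into f
    return None if x is None else f(x)

def kleisli(f: Callable[[A], Optional[B]],
            g: Callable[[B], Optional[C]]) -> Callable[[A], Optional[C]]:
    # Kleisli composition; its associativity needs only bind
    return lambda a: bind(f(a), g)

def pure(a: A) -> Optional[A]:
    # the unit; adding it (with the unit laws) is what makes Optional a monad
    return a

safe_recip = lambda x: None if x == 0 else 1.0 / x
log_of = lambda y: None if y <= 0 else math.log(y)

h = kleisli(safe_recip, log_of)
print(h(2.0))  # log(0.5) ~ -0.693
print(h(0.0))  # None: the failure short-circuits through bind
```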
• Aren't "Free Monads" that preserve the append operation capable of satisfying the inverse axiom? I watched youtube.com/watch?v=U0lK0hnbc4U and it seems to indicate that by keeping Monads free of interpretation, it is possible to satisfy the duplicators you speak of. Or am I misunderstanding you? – Dmitry Dec 26 '16 at 18:03
• What is the append operation? What does "free of interpretation" mean? – Kevin Carlson Dec 26 '16 at 21:08
• @Dmitry duplicators are not something to satisfy, they are not properties, they are part of a structure your category needs in order to define group objects. About the free monads: what do you mean by Free Monads preserve append operations? – Giorgio Mossa Dec 27 '16 at 21:55
• @GiorgioMossa can you please expand on This in particular implies that you should not be able to define a groupad in any trivial sense (at least none that I can think of? Is it a matter of pragmatically defining it, or is it a problematic construct in theory as well? – Dmitry Dec 27 '16 at 23:23
• @Dmitry having a duplicator in the category of endofunctors would require having a natural transformation $F \to F^2$ for each endofunctor $F$. Such a requirement by itself is very unlikely to be met for every category $\mathbf C$. – Giorgio Mossa Dec 28 '16 at 19:46
http://new.physicsforarchitects.com/bypassing-lenzs-rule | # Bypassing Lenz’s Rule
Here is another example of an interesting physics topic, which is not included in the book due to its limited relevance to architecture. (Based on an article in "The Physics Teacher" by the author.)
## A left hand rule for Faraday’s Law
In electromagnetism, we have to deal with relationships between vectors in three dimensions. We quite often resort to hand rules that help us visualize the relationships between such vectors in various situations.
### Traditional hand rules
One right hand rule describes the relationship between the directions of the velocity of a charged particle, the direction of the magnetic field (in which that particle moves), and the resulting force that acts on that particle (Lorentz’s Force).
Another right hand rule describes the relationship between the direction of a current and the magnetic field that it generates (Ampere’s Law).
## A new hand rule
### A Left Hand Rule for Faraday’s Law
Faraday’s Law deals with the relationship between the change in magnetic flux and the induced electromotive force. Magnetic flux combines two vectors: the magnetic field and a normal to a surface enclosed by a loop. The induced electromotive force involves an electric field and the directions of elements of the loop.
In order to find the relationships between the directions of all those vectors, one usually applies Lenz’s Law: “The direction of the induced emf is such that the current that it drives creates a magnetic field that opposes the change that has caused the induced emf”.
It is now possible to apply a new left hand rule and bypass the need to apply Lenz’s Law.
• Align the curved fingers of the left hand with the loop (yellow line).
• The stretched thumb points to the direction of the normal to the surface enclosed by the loop (brown).
• The magnetic field is shown by the red arrow.
• Find the change in flux using this normal (n) and magnetic field (B).
• If the change in flux is positive, the curved fingers show the direction of the induced electromotive force (yellow arrows).
• If the change in flux is negative, the direction of the induced electromotive force is opposite to the direction of the curved fingers (opposite to the yellow arrows).
#### Example
A circular conductor (blue) is placed in a magnetic field that points towards us and decreases with time. In order to find the induced electromotive force in the loop we set the left hand as shown in the figure. The magnetic field (B) happens to point in this case in the thumb’s direction, which is the direction of the normal (n). The angle between B and n is zero and the cosine is one. Therefore, both the initial and final fluxes are positive. Since the magnetic field is decreasing with time, the change in flux is negative. According to the new left hand rule, the direction of the induced electric field is against the curved fingers, as shown by the curved arrows (counter clockwise).
Comment: We could have set the left hand so that the thumb points in the opposite direction (into the page). In that case, both initial and final fluxes would be negative (cos 180°=-1) and their difference would be positive. Therefore the emf would point in the direction of the curved fingers, which is again as shown by the curved arrows, because now the fingers are curved opposite to those in the figure.
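The sign logic of the rule can be captured in a few lines of code (a hypothetical sketch, not from the article; the field values and loop area are made up):

```python
import numpy as np

def emf_direction(B_initial, B_final, n_hat, area):
    """Sign of the induced emf relative to the left hand's curled fingers.

    n_hat is the unit normal chosen along the left thumb; the curled
    fingers fix the positive traversal direction of the loop. This only
    encodes the article's sign rule, not a full Faraday solver.
    """
    flux_change = area * (np.dot(B_final, n_hat) - np.dot(B_initial, n_hat))
    if flux_change > 0:
        return "along the curled fingers"
    elif flux_change < 0:
        return "opposite to the curled fingers"
    return "no induced emf"

# The article's example: B toward the viewer (+z) and decreasing with time.
print(emf_direction(np.array([0, 0, 2.0]),   # initial B (T), hypothetical
                    np.array([0, 0, 1.0]),   # final B, still +z but weaker
                    np.array([0, 0, 1.0]),   # thumb/normal chosen along +z
                    area=0.5))               # loop area (m^2), hypothetical
# -> "opposite to the curled fingers", i.e. counterclockwise, as in the text
```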
## The justification for the new left hand rule
(You don’t have to know this in order to use the new left hand rule…)
http://mathoverflow.net/questions/106510/lift-of-a-morphism-between-geometric-quotients?sort=votes | # Lift of a morphism between geometric quotients
Let $S$ be a scheme.
Definition. Let $X$ be an $S$-scheme and $G$ a smooth affine group $S$-scheme acting on $X.$ An $S$-scheme $Y$ is a geometric quotient of $X$ by $G$ if there exists a morphism $\pi_X\colon X\rightarrow Y$ such that
1. $\pi_X$ is $G$-invariant,
2. the geometric fibers of $\pi_X$ are orbits,
3. $\pi_X$ is universally submersive, i.e., $U\subset Y$ is open iff $\pi_X^{-1}(U)\subset X$ is open, and this property is preserved by base change,
4. $(\pi_X)_*(\mathcal{O}_X)^G=\mathcal{O}_Y.$
Let $X$ and $X'$ be $S$-schemes with a $G$-action, where $G$ is the same introduced before. Assume that there exist geometric quotients $\pi_X\colon X\rightarrow Y$ and $\pi_{X'}\colon X'\rightarrow Y'.$
Question. Let $g\colon Y\rightarrow Y'$ be a morphism. Is there a $G$-equivariant morphism $f\colon X\rightarrow X'$ such that $\pi_{X'}\circ f = g\circ \pi_{X}$?
It is not true : for instance, $X$ and $X'$ might be two non-isomorphic $G$-torsors on the same $S$-scheme $T$.
Here is a precise example. Consider $S=T=Spec(\mathbb{R})$ and $G=\mathbb{Z}/2\mathbb{Z}$, let $X=Spec(\mathbb{C})$ and $X'=Spec(\mathbb{R})\cup Spec(\mathbb{R})$ viewed as $S$-schemes, and let $G$ act on $X$ and $X'$ respectively by complex conjugation and by exchanging the connected components. Both actions admit $Spec(\mathbb{R})$ as a geometric quotient. But an isomorphism between these quotients obviously doesn't lift to an equivariant morphism between $X$ and $X'$.
Here is another example : take $S=Spec(\mathbb{C})$, $G=\mathbb{G}_m$, $T=\mathbb{P}^1_{\mathbb{C}}$. Let $X$ (resp. $X'$) be the total space of the line bundle $\mathcal{O}$ (resp. $\mathcal{O}(1)$) on $T$ minus the zero-section, with the natural $G$-action. Both actions admit $T$ as a geometric quotient, but there is no equivariant morphism between $X$ and $X'$ lifting identity.
Thanks for your answer. Is it not possible to impose some constraints on $S$ or $X, X'$ to obtain a positive answer? For example, what happens if $S=Spec(k)$ with $k$ an algebraically closed field of characteristic zero? – Francesco Sep 6 '12 at 15:16
My edit should answer your question (negatively). If $G$ acts with trivial stabilizers, this question really is a question about torsors, and you could obtain results in concrete situations using the classification of torsors by $H^1$. – Olivier Benoist Sep 6 '12 at 15:30
https://www.physicsforums.com/threads/quantum-problem.225814/ | # Quantum problem
1. Homework Statement
I am to calculate the number of states in a 3Dcubic potential well with impenetrable walls that have energy less than or equal to E
2. Homework Equations
$$E_n=\frac{\hbar^2\pi^2}{2ma^2}\left({n_x}^2+{n_y}^2+{n_z}^2\right)$$
3. The Attempt at a Solution
We may denote $${n_x}^2+{n_y}^2+{n_z}^2=n^2$$
and express $n$ in terms of $E_n$.
Then, we can evaluate the integral $\int_{E'=0}^{E'=E} n(E')\,dE'$.
pam
E and n are discrete. No integral, just add up combinations of nx, ny, nz to give n.
E and n are discrete. No integral, just add up combinations of nx, ny, nz to give n.
I see. But should I sum them???
Avodyne
The answer is the number of triples $(n_x,n_y,n_z)$ of positive integers such that $n_x^2+n_y^2+n_z^2$ is less than a certain constant times $E$.
Counting these exactly is difficult. But there is a simple way to do it approximately.
Consider $(n_x,n_y,n_z)$ as coordinates of a point in a three-dimensional space. Any point that is inside a one-eighth sphere of radius (constant)$\sqrt{E}$ counts, and any point outside does not count. Now consider that, on average, there is one point per unit volume of this one-eighth sphere ...
Hmmm...
Then it is easier to calculate the volume of the sphere (since there is one state per unit volume) instead of calculating the no. of states.
the squared maximum radius is $$n^2=\frac{2mL^2E_n}{\hbar^2\pi^2}$$
The co-ordinates $n_x, n_y, n_z$ that result in a greater radius also involve greater energy. Hence, they are excluded.
So, the required number of states is $$\frac{1}{8}\cdot\frac{4}{3}\pi n^3=\frac{\pi n^3}{6},$$ taking only the octant where all three quantum numbers are positive.
what was wrong with my approach?
My approach was to calculate directly the number of states. The direct value of N gives the number of states with the specified energy E...but it does not include the states with lower values of energy.Therefore I tried to put the problem into an integral...
The integration should not be valid here because n(E) is not a continuous variable.
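As a numeric sanity check of the counting argument in this thread (an added sketch; the values of $n^2$ below are arbitrary illustrations), the exact lattice count approaches the one-eighth-sphere estimate $\pi n^3/6$ as the energy grows:

```python
import math

def count_states_exact(n2_max):
    """Exact count of triples (nx, ny, nz), all >= 1, with
    nx^2 + ny^2 + nz^2 <= n2_max, where n2_max = 2*m*L**2*E/(hbar*pi)**2."""
    n_max = int(math.sqrt(n2_max))
    return sum(1
               for nx in range(1, n_max + 1)
               for ny in range(1, n_max + 1)
               for nz in range(1, n_max + 1)
               if nx*nx + ny*ny + nz*nz <= n2_max)

def count_states_octant(n2_max):
    """One-eighth-sphere approximation: N ~ (1/8)(4/3)*pi*n^3 = pi*n^3/6."""
    return math.pi * n2_max**1.5 / 6.0

for n2 in (100, 1000, 10000):  # hypothetical values of n^2
    print(n2, count_states_exact(n2), round(count_states_octant(n2)))
```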
http://mathhelpforum.com/statistics/211259-poisson-distribution-help.html | 1. ## Poisson Distribution Help!
Here is the question:
Arrivals at Starbucks in the concourse can be modelled by a Poisson distribution with a mean rate of 5 per minute, starting 10 minutes before class starts. What is the probability that 10% of students arrive in the last minute?
HELP
2. ## Re: Poisson Distribution Help!
Hey bobbyb.
What do you mean by 10% arriving? This is an ambiguous statement since you already have a PDF. You will need to look at P(X = 10).
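For the reply's reading — X = number of arrivals in the final minute, so λ = 5 for a one-minute window and we want P(X = 10) — a quick check (an added sketch, not from the thread):

```python
import math

def poisson_pmf(k, lam):
    """P(X = k) for X ~ Poisson(lam)."""
    return math.exp(-lam) * lam**k / math.factorial(k)

# rate 5 per minute over a one-minute window => lam = 5
print(poisson_pmf(10, 5.0))  # ~0.0181
```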
https://www.physicsforums.com/threads/time-dependent-perturbation-theory-klein-gordon-equation.648268/ | # Time Dependent Perturbation Theory - Klein Gordon Equation
1. Oct 30, 2012
### Sekonda
Hey,
I'm struggling to understand a number of things to do with this derivation of the scattering amplitude using time dependent perturbation theory for spinless particles.
We assume we have some perturbation 'V' such that:
$$\left(\frac{\partial^2}{\partial t^2}-\nabla^2+m^2\right)\psi = \delta V\,\psi$$
We also assume plane wave solutions of the wavefunction such that the input wavefunction is:
$$\psi _{in}=\psi _{i}(x)e^{-iE_{i}t}$$
This is a single eigenwavefunction of the wavefunction psi. This input interacts and we get an output wavefunction, which can be expanded like so:
$$\psi _{out}=\sum_{n}a_{n}(t)\psi _{n}(x)e^{-iE_{n}t}$$
We substitute this output wavefunction into the perturbed Klein-Gordon equation above and attain (by assuming the second derivative of a(t) with respect to time is small):
$$\frac{\partial^2 }{\partial t^2}\sum_{n}a_{n}(t)\psi _{n}(x)e^{-iE_{n}t}=\delta V\sum_{n}a_{n}(t)\psi _{n}e^{-iE_{n}t}$$
Upon assuming the second derivative of a(t) is small we obtain the simplified equation:
$$-2i\sum_{n}E_{n}\dot{a}_{n}(t)\psi _{n}(x)e^{-iE_{n}t}=\delta V\sum_{n}a_{n}(t)\psi _{n}e^{-iE_{n}t}$$
Though I'm not exactly sure why all these terms cancel... Nonetheless, to specify a value within the sum we use the orthogonality of wavefunctions - we want to attain the 'final' wavefunction and amplitude (denoted by subscript 'f') and so we multiply both sides of the above equation by:
$$\int_{-\infty }^{\infty }d^{3}x\psi_{f}^{*}$$
We then attain this equation upon use of orthogonality:
$$-2iE_{f}\dot{a}_{f}e^{-iE_{f}t}=\int_{-\infty }^{\infty }d^{3}x\psi _{f}^{*}\delta V\sum_{n}a_{n}(t)\psi _{n}e^{-iE_{n}t}$$
We then simplify by saying at t=0, all a(t)=0 apart from the initial a(0)=1 (so essentially we have one eigenwavefunction coming in) - this holds true for small 't'. The equation then becomes:
$$-2iE_{f}\dot{a}_{f}e^{-iE_{f}t}=\int_{-\infty }^{\infty }d^{3}x\psi _{f}^{*}\delta V \psi _{i}e^{-iE_{i}t}$$
to
$$\dot{a}_{f}(t)=\frac{i}{2E_{f}}\int_{-\infty }^{\infty }d^{3}x\psi _{f}^{*}\delta V \psi _{i}$$
and finally attaining solution:
$$a_{f}(t)=\frac{i}{2E_{f}}\int_{-\infty }^{\infty }d^{4}x\psi _{f}^{*}\delta V \psi _{i}$$
(the d4x including the time differential)
Now I'm unsure of a number of things including the output wavefunction form - I think it's just a sum of wavefunctions related to the input but the input is just a single wavefunction?
I'm unsure why terms cancel under the assumption that the second derivative of 'a' with respect to time is small, though I will try doing the differentiation now and see if I can do it.
Basically, I'd be grateful if someone could check that this derivation follows through and if someone could explain why the assumptions have been made that'd be great.
Thanks guys,
SK
2. Oct 31, 2012
### Sekonda
The jump from the 4th to the 5th equation is confusing me: upon applying the time derivative and Laplacian operator we obtain a number of expressions that disappear, but I'm not sure why. Can someone explain why these terms cancel or equal zero?
Thanks
3. Nov 1, 2012
### andrien
They just disappear because, apart from the $a_n$, the $\psi_n$ satisfy the homogeneous part, which is equal to zero in the absence of any potential.
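A short worked expansion of this cancellation (a sketch added for clarity, assuming plane-wave modes with $E_n^2 = p_n^2 + m^2$, so that $(-\nabla^2+m^2)\psi_n = E_n^2\psi_n$):
$$\frac{\partial^2}{\partial t^2}\left[a_n\psi_n e^{-iE_nt}\right]=\left(\ddot{a}_n-2iE_n\dot{a}_n-E_n^2a_n\right)\psi_n e^{-iE_nt},$$
so the $-E_n^2a_n$ piece cancels against $(-\nabla^2+m^2)$ acting on $\psi_n$, and dropping the small $\ddot{a}_n$ leaves exactly the $-2iE_n\dot{a}_n$ equation above.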
4. Nov 5, 2012
### Sekonda
So am I correct in thinking that the perturbation is instantaneous, and so the outgoing wavefunction can be treated as a free particle and therefore solves the free Klein-Gordon equation?
Thanks
5. Nov 6, 2012
### andrien
Of course — that is the lowest-order approximation, to treat the outgoing wavefunction as a free particle, which is what is done in the general theory. But I do think that the perturbation must be treated as adiabatic in character.
6. Nov 6, 2012
### Sekonda
Thanks, that's essentially what my professor said today - I was incorrect in describing the perturbation as instantaneous.
Cheers!
7. Jul 28, 2013
### plasmon
Dear all, I wish to know the validity of the assumption that the second-order derivative of a(t) is neglected. Kindly clarify the issue.
https://www.physicsforums.com/threads/translational-vs-rotational-kinetic-energy.280226/ | # Translational vs. Rotational Kinetic Energy
1. Dec 17, 2008
### kash25
Hi,
Suppose I am trying to find the work done in bringing a resting cylinder to an angular speed of 8 rad/s.
Why is it INCORRECT to find the corresponding tangential velocity at a point on the outer surface of the cylinder (using angular speed * radius = tangential speed) and use the translational (0.5mv^2) work-kinetic energy theorem?
Why MUST we use the rotational version with I and angular speed?
Thank you.
2. Dec 17, 2008
Staff Emeritus
Because there is rotational kinetic energy as well.
3. Dec 17, 2008
### Staff: Mentor
Realize that the tangential velocity depends on the distance from the axis--the cylinder does not have a uniform tangential velocity. But if you're willing to add up the translational KE of each piece (dm) of the cylinder, that's just fine. (You'll get the same answer.)
KE = Σ½dm v² = Σ½dm r²ω² = ½(Σdm r²)ω² = ½Iω²
It's just much easier.
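A quick numeric check of this sum (an added sketch with made-up numbers for M, R and ω):

```python
import numpy as np

# Summing 1/2 dm v^2 over thin shells of a uniform solid cylinder
# reproduces 1/2 I w^2 (hypothetical values: M = 2 kg, R = 0.5 m, w = 8 rad/s).
M, R, omega = 2.0, 0.5, 8.0
N = 100_000
dr = R / N
r = (np.arange(N) + 0.5) * dr        # midpoint radii of thin cylindrical shells
dm = (2.0 * M * r / R**2) * dr       # mass of each shell for a uniform cylinder
ke_sum = np.sum(0.5 * dm * (omega * r)**2)   # sum of translational KE of pieces
I = 0.5 * M * R**2                   # moment of inertia of a solid cylinder
print(ke_sum, 0.5 * I * omega**2)    # both ~ 8.0 J
```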