http://physics.stackexchange.com/questions/27441/pauli-villars-pv-regularisation-breaks-supersymmetry-how-to-see-that

# Pauli-Villars (PV) regularisation breaks supersymmetry. How to see that?
Does the PV regulator break SUSY?
Take for instance the 1-loop (top/stop loops) correction to the Higgs squared-mass parameter in the MSSM, and you'll get something like,
$$\delta m^2_{h_u} = - \frac{3 Y_u^2}{4 \pi^2}\, m_{\tilde{t}}^2 \,\ln\left(\frac{\Lambda_{PV}^2}{m_{\tilde{t}}^2}\right)$$
where $\Lambda_{PV}$ is the PV regulator/cutoff, and $m_{\tilde{t}}$ is the stop (top squark) mass.
In my mind, the calculation is performed before electroweak symmetry breaking (EWSB) (i.e., the top is massless), but at the same time it assumes softly broken SUSY (i.e., the stop is massive), so we don't get a perfect cancellation. But I heard someone saying that there's no perfect cancellation because the PV regulator breaks SUSY!
I don't see where the PV breaking SUSY argument fits. Can anyone enlighten me?
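For scale, here is a quick numerical evaluation of the quoted correction; the parameter values (order-one top Yukawa, a 1 TeV stop, a GUT-scale cutoff) are illustrative assumptions, not numbers from the thread:

```python
import numpy as np

# Illustrative inputs (assumptions): Y_u ~ 1, m_stop = 1 TeV,
# Lambda_PV = 1e16 GeV (a GUT-scale cutoff). All masses in GeV.
Y_u, m_stop, Lambda_PV = 1.0, 1.0e3, 1.0e16

# delta m^2_{h_u} = -3 Y_u^2/(4 pi^2) * m_stop^2 * ln(Lambda_PV^2/m_stop^2)
dm2 = -3.0 * Y_u**2 / (4.0 * np.pi**2) * m_stop**2 * np.log(Lambda_PV**2 / m_stop**2)
print("delta m^2_{h_u} ~ %.2e GeV^2 ~ -(%.1f TeV)^2" % (dm2, np.sqrt(-dm2) / 1e3))
# ~ -4.6e6 GeV^2, i.e. a correction of order (2 TeV)^2: only logarithmically
# sensitive to the cutoff, as expected for softly broken SUSY.
```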
The question is unclear. If your question is about the PV regulator, why did you bring up that reference? It has nothing to do with PV, and it is not even mentioned once in the paper. – Zohar Ko Nov 26 '11 at 0:37
Because I took the equation from there. Umm, Ok let's say: Does the Pauli-Villars regularisation break supersymmetry? And how to see that? – stupidity Nov 26 '11 at 0:46
I still don't see why you copied a random equation from a random paper and proclaimed that their cutoff is a PV cutoff. So I still don't understand the question. It would be very helpful if you rephrase it. Also note that there is no problem writing a ghost kinetic term in SUSY, so the bottom component of the chiral multiplet is a ghost while the fermion is fine. So the right question is not whether SUSY allows ghosts (it does) but whether this is enough to cancel the divergences. – Zohar Ko Nov 26 '11 at 0:53
I did a similar calculation, as an exercise. I used PV and I got a similar result. But then someone was saying something about the PV regulator breaking SUSY and I was lost. :p Also, I didn't understand your answer. – stupidity Nov 26 '11 at 0:55
Perhaps the statement you are trying to reproduce has to do with dimensional regularization instead? Just a shot in the dark... – user566 Nov 26 '11 at 5:35
## 1 Answer
I think that I am confused by the question and even more confused by the last remark. When you use a PV regulator, you necessarily encounter ghosts. When you add to the propagator of a physical field another part that looks like a propagator with a minus sign and a mass $\Lambda_{UV}$, you pretend that there is a heavy ghost "particle" in the theory (the wrong sign of the kinetic term in the Lagrangian translates into the sign flip in front of the propagator). Then if you ask whether this ghost can be supersymmetrized, the answer is "yes". So in this sense the PV regulator does not violate SUSY. (As previously mentioned, though, the question of whether this is enough to cancel all the divergences remains.)
Welcome Andrey, according to the picture, you have grown a lot of hair since the last time I saw you. – Zohar Ko Nov 26 '11 at 13:17
Ok, thanks for your answer. The reason for the confusion is that I'm confused myself. I was following the PV scheme without knowing the physics behind it :/ But I think I'm getting a feeling of what the PV regulator does. Thanks again to you and to Zohar Ko whom I annoyed :p – stupidity Nov 26 '11 at 18:19
I was not annoyed at any point, just tried to make the question clearer for other commentators who might wanna take a shot at it. – Zohar Ko Nov 26 '11 at 18:46
Thanks! So can I ask then, is the PV regulator enough to cancel divergences? If not, then why? – stupidity Nov 28 '11 at 0:50
It depends on what you are doing. If you regularize a Yukawa coupling, it is enough to introduce one PV regulator for your scalar to cancel the divergence. But there are more subtle examples: e.g., you need several sets of PV fermions to consistently cancel the divergences in the QED photon self-energy (the PV method becomes cumbersome and therefore Peskin switches to dim. reg.; you can see how it is done with PV regulators in Bjorken and Drell). I am not aware of any explicit example of a renormalizable theory where the PV regularization fails, regardless of how you introduce the regulators. – Andrey Katz Nov 28 '11 at 3:35
http://www.scholarpedia.org/article/Yarkovsky_and_YORP_effects

# Yarkovsky and YORP effects
David Vokrouhlicky and William F. Bottke (2012), Scholarpedia, 7(5):10599.
The Yarkovsky effect describes a small but significant force that affects the orbital motion of meteoroids and asteroids smaller than 30-40 kilometers in diameter. It is caused by sunlight; when these bodies heat up in the Sun, they eventually re-radiate the energy away as heat, which in turn creates a tiny thrust. This recoil acceleration is much weaker than solar and planetary gravitational forces, but it can produce substantial orbital changes over timescales ranging from millions to billions of years. The same physical phenomenon also creates a thermal torque that, complemented by a torque produced by scattered sunlight, can modify the rotation rates and obliquities of small bodies as well. This rotational variant has been dubbed the Yarkovsky-O'Keefe-Radzievskii-Paddack (YORP) effect. During the past decade or so, the Yarkovsky and YORP effects have been used to explore and potentially resolve a number of unsolved mysteries in planetary science dealing with small bodies.
# Historical notes
Interesting problems in science usually have a long history. It is rare, though, that they have a prehistory or perhaps even a mythology of sorts. Yet this is the case for the Yarkovsky effect. I.O. Yarkovsky, a Russian civil engineer born into a family of Polish descent, noted in a privately-published pamphlet (Yarkovsky 1901 and Figure 1) that heating a prograde-rotating planet should produce a transverse acceleration in its motion. While the context of Yarkovsky's work was mistaken and he was only roughly able to estimate the magnitude of the effect, he succeeded in planting the germ of an idea that later blossomed into a full-fledged theory of how the orbits of small objects revolving about the Sun are modified by the absorption and reemission of sunlight.
Figure 1: Title page of I.O. Yarkovsky's pamphlet, privately published in Bryansk in 1901. Here, for the first time, the concept of his effect appeared, though in a context that is now obsolete (i.e., he assumed that ether existed between solar system bodies).
The legend of Yarkovsky would likely have been lost, along with interest in his pamphlet, if it had not been for the pioneering Estonian astronomer E.J. Öpik. Öpik had read the original document, and he re-introduced its ideas into the modern literature decades after Yarkovsky's death (Öpik 1951). More recently, interest in Yarkovsky's lost pamphlet prompted Dutch amateur astronomer G. Beekman to search for it in Russian archives, where he found it only a few years ago (Beekman 2006 and Figure 1). Thanks to his efforts, it is now available for everyone to read for the first time in over 100 years. With this said, though, some memory of Yarkovsky's idea had been independently kept in the Soviet Union, as shown by the work of V.V. Radzievskii and others (e.g., Radzievskii 1952).
Meteorite transport was the primary application that led Öpik to re-introduce the Yarkovsky effect, with the idea being that meter-sized and smaller stones could be delivered to Earth from the main asteroid belt located between the orbits of Mars and Jupiter. Astronomers in the 1970s and 1980s also considered these effects, but it is fair to say that the topic hibernated at the outskirts of planetary science during this period. This is because other important components, such as how the dynamics of small bodies are affected by mean-motion and secular resonances, were not known in detail until the 1980s and were not easy to treat numerically until the 1990s. Even worse, modeling the transport of small objects from the asteroid belt to the Earth by thermal forces alone depended on parameters and/or rotation states for small bodies that were almost completely unknown. In the minds of many dynamicists of the day, the Yarkovsky effect was a small and somewhat unwanted complication on a difficult problem, such that it was easier to ignore it for the time being.
Still, some quantitative progress was made during this time. In the late 1970s and 1980s, new insights on the role of radiative (thermal) forces and torques came from the somewhat unexpected field of space geodesy. First, an analysis of tiny orbital residuals in the geodynamical satellite LAGEOS led D.P. Rubincam to develop applications of thermal forces that could characterize the motion of this spacecraft (e.g., Rubincam 1987). Second, data describing the rotation of balloon-type spacecraft since the late 1960s revealed that radiation torques provided a significant brake on their rotation rates. Lessons from both led D.P. Rubincam to revive applications of radiation forces and torques in planetary science (e.g., Rubincam 1995, 2000). In fact, this revival can be appreciated even more by noting that the applications from space geodesy allowed Rubincam to recognize two different variants of the Yarkovsky effect: diurnal and seasonal, of which the latter had not been previously known in planetary science.
The modern boom of thermal force and torque applications in planetary sciences started in the late 1990s largely through the theoretical and numerical modeling work of D.P. Rubincam and P. Farinella. Their work inspired other researchers to investigate additional applications, many of which reached far beyond the original field of meteorite delivery. The issues now affected by thermal forces run the gamut from the fine details of the impact hazard onto the Earth, through formation and dispersal of binary asteroids, up to understanding the orbital structure of asteroid families.
The Yarkovsky and YORP effects are sometimes found to work in a complicated synergy to provide unexpected results. Some examples include the production of asteroid spin vectors with similar magnitudes and orientations (e.g., certain observed members of the Koronis asteroid family; Slivan 2002). The Yarkovsky effect is also becoming a routine part of orbit determination of well-tracked, small near-Earth asteroids. The precise measurement of orbital displacement produced by the Yarkovsky effect over time can even allow ground-based observers to deduce asteroid physical properties like bulk density (Chesley et al. 2003). In-depth review papers on the Yarkovsky and YORP effects can be found in Bottke et al. (2002, 2006).
# Basic concepts: Radiation force and torque
In basic terms, the Yarkovsky effect is defined as a thermal radiation force that causes objects to undergo semimajor axis drift as a function of their size, orbit, and material properties. The YORP effect is a thermal torque that can increase or decrease a body’s spin rate and can modify its spin axis orientation. The name YORP effect comes from an abbreviation of the names Yarkovsky-O’Keefe-Radzievskii-Paddack, a listing of those scientists who worked on or inspired the idea that radiation-driven accelerations could modify the rotation rates of small planetary bodies (see Rubincam 2000).
The theory is as follows. Sunlight impinging on the surface of a body in space is removed from the incident beam, with the energy reprocessed in several ways: (i) light is directly scattered in the optical band, (ii) light is absorbed and conducted as heat to the deeper layers of the body, and (iii) heat is re-emitted as infrared radiation in the thermal band. The emission is in the infrared because the effective temperatures in the relevant range of heliocentric distances, a fraction of an AU to several AU, are a few hundred kelvin. Thus, the Yarkovsky and YORP effects can be considered mechanical forces and torques produced by these radiative processes.
The Yarkovsky and YORP effects are formulated by calculating the net radiation energy and momentum exchange produced by all parts of a small body. In practice, this means computing these components on a small (infinitesimal) surface element on an arbitrary-shaped body. The total effect is then the surface integral over all elements. As far as the physical assumptions are concerned, the spectral dependence is simplified into two representative bands, "optical" and "thermal", assuming these components characterize the solar spectrum emissivity and the body’s emissivity.
Note that the relative motion of the body with respect to the Sun is neglected for all but the smallest bodies. The relevant "aberration-type" correction (the Poynting-Robertson effect) is mainly important for dynamics of sub-mm particles in interplanetary (or circumplanetary) space.
## Description of surface processes
Energy budget. -- Energy conservation, needed to solve for the surface temperature $$T\ ,$$ enters as a surface boundary condition in the mathematical description of the Yarkovsky and YORP effects. At the most general level it can be given as
$\tag{1} \epsilon(\mu_0) \, \sigma \, T^4 + K \, \mathbf n_{\perp} \cdot \nabla T = (1-A_h(\mu_0)) \, \mu_0 \, F\; ,$
where $$\mathbf n_{\perp}$$ is the outward-oriented normal vector to the chosen surface element $$d{\mathbf S} = {\mathbf n}_{\perp}\, dS\ ,$$ $$\mu_0$$ is the cosine of local zenith distance of the Sun ($$\mu_0 = \mathbf n_{\perp} \cdot \mathbf n_0 \ ,$$ if $$\mathbf n_0$$ is the unit vector pointing toward the Sun), $$\nabla T$$ is the gradient of the surface temperature, $$K$$ is the surface thermal conductivity, $$\sigma$$ is the Stefan-Boltzmann constant and $$F$$ is the sunlight flux. The two functions $$\epsilon (\mu_0)$$ (hemispherical emissivity) and $$A_h(\mu_0)$$ (hemispherical albedo) can be written as:
$\tag{2} A_h(\mu_0)= \frac{1}{\mu_0} \int_{\Omega_+} d \Omega \, \mu \; r_{\mathrm{sca}}(\mu, \mu_0, \phi)\; ,$
and
$\tag{3} \epsilon(\mu_0) = \frac{1}{\pi} \int_{\Omega_+} d \Omega \, \mu \; r_{\mathrm{th}}(\mu, \mu_0, \phi)\; ,$
with $$r_{\mathrm{sca}}(\mu, \mu_0, \phi)$$ and $$r_{\mathrm{th}}(\mu, \mu_0, \phi)$$ being the bi-directional scattering function in optical and bi-directional emissivity function in thermal wavelengths. The underlying assumption, as in the related formalism used for asteroid photometry and radiometry observations, is that the scattered and thermally emitted radiation fields are composed of contributions from many microscopic scatterers and emitters (grain size and smaller). In that case the bi-directional functions depend on spherical angles of the hemisphere $$\Omega_+$$ exterior to the surface facet, namely cosine of colatitude $$\mu$$ and longitude $$\phi$$ measured from $$\mathbf m =( \mathbf n_0 - \mu_0\, \mathbf n_{\perp} ) / \sqrt{1 - \mu^2_0}$$ in the horizon plane of $$d {\mathbf S}\ .$$
Note that $$r_{\mathrm{sca}}$$ and $$r_{\mathrm{th}}$$ do not include the effects of macroscopic surface irregularities (craters, boulders and larger-scale features), which need to be separately included in the computations. In most of the literature on the Yarkovsky and YORP effects, $$r_{\mathrm{sca}}$$ and $$r_{\mathrm{th}}$$ have been approximated by Lambert’s law, namely $$r_{\mathrm{sca}}=\mu_0 A/\pi$$ with a constant characteristic albedo $$A_h(\mu_0) = A$$ and $$r_{\mathrm{th}} = \epsilon$$ with a constant characteristic emissivity $$\epsilon (\mu_0) = \epsilon\ .$$ While the Yarkovsky effect description requires finite (non-zero) thermal conductivity $$K\ ,$$ the simplest approximation of the YORP effect (often referred to as the Rubincam approximation) assumes $$K = 0\ .$$ Then the energy budget equation (1) simplifies to $$\epsilon\,\sigma T^4 = (1-A) \mu_0 F$$ and can be used to express the temperature $$T$$ without the need to solve the heat diffusion problem.
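As a concrete illustration, here is a minimal sketch of this Rubincam-approximation temperature; the albedo, emissivity, and 1 AU flux values below are assumptions chosen for illustration:

```python
import numpy as np

SIGMA = 5.670e-8  # Stefan-Boltzmann constant [W m^-2 K^-4]

def rubincam_temperature(mu0, F, A=0.1, eps=0.9):
    """Surface temperature in the Rubincam approximation (K = 0), from
    eps*sigma*T^4 = (1 - A)*mu0*F; albedo/emissivity values are assumed."""
    return ((1.0 - A) * np.maximum(mu0, 0.0) * F / (eps * SIGMA)) ** 0.25

# example: subsolar point (mu0 = 1) at 1 AU, F = 1361 W/m^2
print(rubincam_temperature(1.0, 1361.0))  # ~ 394 K
```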
Linear momentum budget. -- Solar photons hitting the body produce scattered and thermally emitted radiation that carries away linear momentum. This causes the objects to move according to Newton’s third law of action-reaction. The dynamical consequences (both on translation and rotation) of the incident photons are small, but those related to the scattered and thermally emitted components can be large. The infinitesimal force $$d {\mathbf f}_{\mathrm{sca}}$$ due to the scattered part is given by
$\tag{4} d \mathbf f_{\mathrm{sca}} = -\frac{F}{c}\, (K^{\mathrm{sca}}_1\, \mathbf n_{\perp} + K^{\mathrm{sca}}_2\, \mathbf m ) \, dS\; ,$
where $$c$$ is the light velocity and
$\tag{5} K^{\mathrm{sca}}_1(\mu_0) = \int_{\Omega_+} d\Omega \, \mu^2 \; r_{\mathrm{sca}}(\mu,\mu_0,\phi)\; ,$
$\tag{6} K^{\mathrm{sca}}_2(\mu_0) = \int_{\Omega_+} d\Omega \, \mu \sqrt{1-\mu^2} \, \cos \phi \; r_{\mathrm{sca}}(\mu,\mu_0,\phi) \; .$
Similarly, the thermal part reads
$\tag{7} d \mathbf f_{\mathrm{th}} = - \frac{\sigma T^4}{c}\, ( K^{\mathrm{th}}_1\, \mathbf n_{\perp} + K^{\mathrm{th}}_2\, \mathbf m ) \, dS \; ,$
with
$\tag{8} K^{\mathrm{th}}_1(\mu_0) = \frac{1}{\pi} \int_{\Omega_+ } d \Omega \, \mu^2 \; r_{\mathrm{th}}(\mu,\mu_0,\phi)\; ,$
$\tag{9} K^{\mathrm{th}}_2(\mu_0) = \frac{1}{\pi} \int_{\Omega_+ } d \Omega \, \mu \sqrt{ 1 - \mu^2 } \, \cos \phi \; r_{\mathrm{th}}(\mu, \mu_0, \phi)\; .$
In the simplest situation, using Lambert’s law to model scattering and thermal emission, one has $$K^{\mathrm{sca}}_1(\mu_0)= \frac{2}{3}\, \mu_0 A$$ and $$K^{\mathrm{sca}}_2(\mu_0)= 0\ ,$$ and similarly $$K^{\mathrm{th}}_1(\mu_0)= \frac{2}{3}\, \epsilon$$ and $$K^{\mathrm{th}}_2(\mu_0)= 0\ .$$ The thermal force, for instance, is then given by
$\tag{10} d \mathbf f_{\mathrm{th}} = - \frac{2}{3} \frac{\epsilon \sigma T^4}{c}\, \mathbf n_{\perp} \, dS \; .$
The YORP effect computation may also avoid the need to solve for the surface temperature $$T$$ if one uses Rubincam’s approximation.
Total force and torque. -- While this theory has been used to describe the radiative effects on an infinitesimal surface element $$d {\mathbf S}\ ,$$ additional mathematical labor is needed to estimate the effect for the entire body. In principle this is simply a surface integral
$\tag{11} \mathbf f_{\mathrm{sca}} = \int_{ S } d \mathbf f_{\mathrm{sca}} , \qquad \mathbf f_{\mathrm{th}} = \int_{ S' } d \mathbf f_{\mathrm{th}}$
for the radiation force due to the scattered and thermally emitted radiation fields, and
$\tag{12} \mathbf T_{\mathrm{sca}} = \int_{ S } \mathbf r \times d \mathbf f_{\mathrm{sca}} , \qquad \mathbf T_{\mathrm{th}} = \int_{ S' } \mathbf r \times d \mathbf f_{\mathrm{th}}$
for the corresponding radiation torques. Here $${\mathbf r}$$ is the position vector of $$d {\mathbf S}$$ with respect to an appropriately chosen reference center in the body, while the integration domains correspond to the entire surface for the thermal effects ($$S'$$) and the surface part illuminated by the Sun for the effects due to the scattered radiation ($$S$$).
A detailed description of $$S$$ can be difficult to compute with accuracy, partly because it depends on the mutual geometric configuration of the Sun as well as the asteroid’s spin axis and prime-meridian directions, but also because it needs to account for the possibility that irregularly-shaped objects spinning in sunlight may have regions capable of shadowing adjacent regions. The latter phenomenon, referred to here as mutual occlusion, is important for the prediction of the YORP effect, and it creates additional complications that need to be attacked using detailed and computationally-expensive numerical approaches.
The thermal force $${\mathbf f}_{\mathrm{th}}$$ is the basis of the Yarkovsky effect, while the combined effect of the torques $${\mathbf T}_{\mathrm{sca}}$$ and $${\mathbf T}_{\mathrm{th}}$$ is the basis of the YORP torque. The integrals in (11) and (12) can be evaluated using a wide variety of analytical and numerical methods as one proceeds from simpler to more general shape models. The analytical methods have certain advantages, namely one gets a complete understanding of how the results depend on various parameters. On the other hand, they require the use of simple shape models (e.g., a sphere or a spheroid), which limits their effectiveness when dealing with real asteroid shapes. Numerical methods are often better suited to compute the Yarkovsky and YORP effects for a body of an arbitrary shape. To date, these kinds of results have been most effective in helping us understand how surface irregularities affect a body’s evolution. The most common numerical method used to compute how the Yarkovsky and YORP effects act on irregularly-shaped bodies is to represent their surface with a large number of small triangular facets. In some extreme cases, computations have been carried out for asteroid models made up of millions of facets.
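The following sketch illustrates such a facet-based evaluation in the simplest setting discussed above: Lambert scattering and emission, Rubincam's approximation ($$K=0$$, Eq. (10)), and mutual occlusions ignored. The array layout and all inputs are assumptions for illustration, not a production shape-model format:

```python
import numpy as np

C_LIGHT = 299792458.0  # speed of light [m/s]

def radiative_force_torque(verts, faces, n_sun, F):
    """Net radiation force [N] and torque [N m] on a triangulated body in
    Rubincam's approximation (K = 0), with Lambert scattering/emission and
    mutual occlusions ignored; incident-beam momentum is neglected, as in
    the text.  In this limit the scattered part carries (2/3)*A*mu0*F/c and
    the thermal part (2/3)*(1-A)*mu0*F/c (since eps*sigma*T^4 = (1-A)*mu0*F),
    so the albedo cancels and each facet contributes -(2/3)*(mu0*F/c)*n*dS.
    verts: (Nv,3) coordinates [m]; faces: (Nf,3) vertex indices, assumed
    ordered counter-clockwise seen from outside; n_sun: unit vector toward
    the Sun; F: solar flux [W/m^2]."""
    f_tot, T_tot = np.zeros(3), np.zeros(3)
    for i, j, k in faces:
        a, b, c = verts[i], verts[j], verts[k]
        cr = np.cross(b - a, c - a)
        dS = 0.5 * np.linalg.norm(cr)       # facet area
        n = cr / (2.0 * dS)                 # outward unit normal
        mu0 = max(np.dot(n, n_sun), 0.0)    # cosine of solar zenith angle
        df = -(2.0 / 3.0) * (mu0 * F / C_LIGHT) * n * dS   # Eq. (10)
        f_tot += df
        r = (a + b + c) / 3.0               # facet centroid
        T_tot += np.cross(r, df)            # torque contribution, Eq. (12)
    return f_tot, T_tot
```

For a sphere the torque sums to zero by symmetry; it is the facet-scale asymmetry of a real shape that produces a net YORP torque.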
# Basic concepts: Temperature solution
Determination of the Yarkovsky force and the YORP torque on a rotating body requires computation of its surface temperature $$T\ .$$ This can be calculated in the frame co-moving with the body from heat conduction theory, whose basic relation, often called the Fourier heat equation, is written as:
$\tag{13} \rho \, C_p \frac{ \partial T }{ \partial t } = \nabla \cdot (K \, \nabla T)\; .$
Here $$\rho$$ is the bulk density and $$C_p$$ is the specific heat capacity at constant pressure. This is basically an energy conservation relation, and it states that the energy change in a volume element, given by the time derivative of the temperature, reflects the heat flux $$-K \nabla T$$ through its boundary. Motion of the continuum as well as other energy sources and sinks (e.g., sublimation or phase transitions) are neglected in this approach. In general, the physical constants $$\rho\ ,$$ $$C_p$$ and $$K$$ depend both on (i) the temperature $$T\ ,$$ and (ii) the position of the volume element in the body. When temperature changes are small enough, such that $$\rho\ ,$$ $$C_p$$ and $$K$$ can be reliably approximated by the values at the body's mean temperature, and they do not change in a given region, Eq. (13) simplifies to
$\tag{14} \frac{ \partial T }{ \partial t } - \kappa \nabla^2 T = 0\; ,$
where
$\tag{15} \kappa = \frac{ K }{ \rho \, C_p }\; .$
In many applications it is useful to introduce scaling of the space and time variables such that the physical constants drop out of the heat conduction equation (14). For instance, if the temperature is periodic with a fundamental frequency $$\nu\ ,$$ such as the rotation frequency expressing diurnal cycles of $$T\ ,$$ $$t$$ may be suitably replaced with a non-dimensional quantity $$\zeta = \exp(\iota \nu t )\ .$$ This rescaling naturally involves introduction of the scale length $$l_s = \sqrt{ \kappa/\nu }$$ such that (14) takes the form
$\tag{16} \iota \, \zeta \frac{ \partial T }{ \partial \zeta } - \nabla'^2 T = 0\; ,$
where the scaled nabla operator reads $$\nabla' = l_s \nabla\ .$$ As an aside, it turns out that $$l_s$$ is the penetration depth of a thermal wave with frequency $$\nu$$ beneath the surface of the body. Scaling the temperature is not a fundamental property of the heat conduction equation but is motivated by the linearization of the boundary condition (1).
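A short numerical illustration of $$l_s\ ;$$ the regolith-like thermal parameters and the 6-hour rotation period are assumptions:

```python
import numpy as np

def penetration_depth(K, rho, Cp, nu):
    """Thermal penetration depth l_s = sqrt(kappa/nu), kappa = K/(rho*Cp)."""
    return np.sqrt(K / (rho * Cp * nu))

K, rho, Cp = 0.01, 1500.0, 680.0           # W/m/K, kg/m^3, J/kg/K (assumed)
nu_diurnal = 2 * np.pi / (6 * 3600.0)      # 6-h rotation period (assumed)
nu_seasonal = 2 * np.pi / 3.156e7          # 1-yr revolution period
print(penetration_depth(K, rho, Cp, nu_diurnal))   # ~ 6 mm
print(penetration_depth(K, rho, Cp, nu_seasonal))  # ~ 0.2 m
```

For these values the seasonal penetration depth comes out roughly 40 times larger than the diurnal one, as expected from the $$1/\sqrt{\nu}$$ scaling.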
The heat diffusion equation has to be complemented with the appropriate number of boundary constraints in the space and time domains. Solutions of the Yarkovsky force and YORP torque have so far always employed the assumption of periodicity, either diurnal (where the fundamental frequency is due to rotation of the body) or seasonal (where the fundamental frequency is the mean motion about the Sun). This assumption removes the time boundary. The space boundary is specified by the surface of the body, with the energy condition (1) providing the fundamental constraint for $$T\ .$$
Additional constraints come from requirement of regularity of $$T$$ over the entire volume of the body. Note that either form of the heat conduction equation given above, Eqs. (14) and (16), is linear and thus the general solution can be given as a superposition of fundamental modes, of which some typically diverge in the volume and must be excluded.
When the body is not homogeneous, but consists of, say, discrete layers of different thermal properties, additional constraints have to be imposed on their boundaries. While the general solution of the heat diffusion problem can formally be given as a superposition of its fundamental modes, there are two obstacles that prevent one from calculating an explicit solution for $$T$$ on the body’s surface.
First, the fundamental modes are algebraically manageable only for sufficiently simple geometries such as the semi-space (plane-parallel), spherical or cylindrical cases. No formulation is available for a body with a highly irregular shape, which is unfortunate given that such shapes are typical of small objects in the Solar system. Second, even in the case of simple geometry, the quartic emission term in the boundary condition (1) violates the linearity of the problem.
The first issue is essentially insurmountable and, if the irregular shape is really needed (as in the case of the YORP effect), it needs to be attacked by numerical methods. The second issue is usually faced by the linearization method, namely a split of $$T$$ into a mean value $$T_0$$ and an increment $$\Delta T$$ $$(T = T_0 + \Delta T)$$ such that $$|\Delta T| \ll T_0$$ is assumed. Then $$T^4\simeq T_0^4 + 4\,T_0^3 \Delta T$$ in (1).
Analytical solutions for the surface temperature have been obtained using the linearized approximation for bodies having simple shapes. These are very useful for basic characterization of the Yarkovsky and/or YORP effects and have been implemented in a number of numerical integration packages (such as OrbFit or SWIFT). The last few years have also seen a number of semi-numerical or numerical methods developed with applications to particular situations.
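For concreteness, here is a minimal semi-numerical sketch of the diurnal problem: an explicit finite-difference integration of Eq. (14) beneath a single surface facet, with the nonlinear boundary condition (1) applied at the surface node. All material and insolation values are illustrative assumptions, and no attempt is made at the accuracy of the published thermophysical codes:

```python
import numpy as np

SIGMA = 5.670e-8                       # Stefan-Boltzmann constant
K, rho, Cp = 0.01, 1500.0, 680.0       # regolith-like values (assumed)
A, eps, F = 0.1, 0.9, 1361.0           # albedo, emissivity, flux at 1 AU
nu = 2 * np.pi / (6 * 3600.0)          # diurnal frequency, 6-h period
kappa = K / (rho * Cp)
ls = np.sqrt(kappa / nu)               # diurnal penetration depth

nz = 200
dz = 10.0 * ls / nz                    # grid reaches 10 penetration depths
dt = 0.2 * dz**2 / kappa               # satisfies explicit stability limit
T = np.full(nz, 280.0)                 # initial temperature guess [K]

t = 0.0
while t < 5 * (2 * np.pi / nu):        # integrate over 5 rotations
    mu0 = max(np.cos(nu * t), 0.0)     # day/night insolation cycle
    # interior nodes: dT/dt = kappa d2T/dz2, Eq. (14)
    T[1:-1] += dt * kappa * (T[2:] - 2 * T[1:-1] + T[:-2]) / dz**2
    # surface node: energy balance (1) drives a thin slab of thickness dz
    net = (1 - A) * mu0 * F - eps * SIGMA * T[0]**4 + K * (T[1] - T[0]) / dz
    T[0] += dt * net / (rho * Cp * dz)
    T[-1] = T[-2]                      # insulated deep boundary
    t += dt

print("surface temperature at local noon after 5 rotations: %.0f K" % T[0])
```

The finite thermal inertia keeps the noon temperature below the instantaneous-equilibrium (Rubincam) value and delays the temperature maximum past noon, which is exactly the asymmetry exploited by the diurnal Yarkovsky effect.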
# Basic concepts: Orbital and rotational effects
## Orbital consequences of the Yarkovsky effect
The dynamical effects of the radiation forces described above in the optical wavelengths are typically small for meter-sized and larger bodies. They even vanish for circular orbits. We therefore restrict our discussion to the orbital effects of the thermal forces (i.e., $${\mathbf f}_{\mathrm{th}}$$ in (11)). These thermal forces are much smaller than attraction of the center (Sun or planet), allowing one to use the framework of perturbation theory to describe their orbital consequences. Focusing on the most important of them, namely how they change the semimajor axis $$a$$, one can write: $\tag{17} \frac{da}{dt} = \frac{2}{mn^2 a} \,(\mathbf{f} \cdot \mathbf{v}) = \frac{2}{mn} (\mathbf{f} \cdot \mathbf{e}_\tau + \mathcal{O}(e) )\; ,$
where $$m$$ and $$M$$ are the masses of the body and of the center, $$n$$ is the mean motion $$\left(a^3 n^2 = G\left(M+m\right)\right)$$ and $${\mathbf e}_\tau$$ is a unit vector in the osculating orbital plane and transverse to the direction to the center. For small perturbations, the first-order effects are obtained by inserting the unperturbed (Keplerian) motions into the right hand side of (17). The long-term (accumulated) orbital effects are further characterized by averaging over the mean longitude in orbit. This removes the short-period effects from the analysis, which are unimportant for most situations, even for the detection issue discussed below.
A distinct feature of the Yarkovsky effect is its ability to modify the semimajor axis of the body. An analytical estimate of $$da/dt$$ for a spherical body not only provides reasonable accuracy but also yields insights into how numerous parameters such as size, thermal inertia, heliocentric distance and/or obliquity affect the Yarkovsky drift rate. Assuming rotation about the principal axis of the inertia tensor, it is natural to split the effects of the thermal force $${\mathbf f}_{\mathrm{th}}$$ into (i) a component aligned with the spin axis direction (independent of the rotation cycle), and (ii) two components in the equatorial plane of the body (dependent on the rotation cycle). When these values are input into perturbation theory equations, the spin-aligned component produces a secular effect (i.e., one that is not averaged out over time):
Figure 2: Schematic representation of the two variants of the Yarkovsky effect: (i) the diurnal component (left), and (ii) the seasonal component (right). A circular orbit and optimum values of the obliquity are assumed for simplicity, $$\gamma = 0^\circ$$ on the left figure, and $$\gamma = 90^\circ$$ on the right figure. Sunlight always heats the body on the nearside (noon), but due to finite thermal inertia the maximum surface temperature, and thus the maximum recoil effect of the thermal radiation, is displaced from the solar direction. In the diurnal variant (left), the body's rotation forces the maximum emissivity to be skewed toward the afternoon side of the body, so the recoil force is always directed along the gold arrows. A net positive along-track force systematically accelerates the body, which thus spirals outward, away from the Sun. The effect would have the opposite sign/sense if the body had retrograde rotation (i.e., obliquity $$\gamma = 180^\circ$$). In the seasonal variant (right), thermal relaxation occurs on a timescale comparable to the revolution period of the body about the center. The seasonal force -- again shown with the gold arrows -- is directed along the spin axis and is due to the north/south temperature difference on the body. One way to think about this is that the hottest part of summer in the Northern Hemisphere is more likely to be in July-August rather than in June. The net, orbit-averaged, along-track force is always negative, and the seasonal variant of the Yarkovsky effect makes the orbit migrate consistently towards the center. For extreme values of the obliquity, $$\gamma = 0^\circ$$ or $$\gamma = 180^\circ\ ,$$ the seasonal component is zero because of the symmetry between the north and south hemispheres.
$\tag{18} \left(\frac{da}{dt}\right)_{\mathrm{seasonal}}= \frac{4}{9\,n}\,(1-A)\,\Phi\, F_n(R', \Theta )\, \sin^2 \gamma\; ,$
while the equatorial components provide a secular effect
$\tag{19} \left(\frac{da}{dt}\right)_{\mathrm{diurnal}} = - \frac{8}{9\,n}\,(1-A)\,\Phi\, F_\omega(R', \Theta )\, \cos \gamma\; .$
In the linear approximation the total effect $$(da/dt)_{\mathrm{tot}}$$ is a superposition of (18) and (19); $$A$$ is an effective albedo value close to the Bond albedo, $$\Phi = \pi R^2 F/(mc)$$ is the characteristic radiation pressure factor, $$R$$ is the radius of the body, and $$\gamma$$ is the obliquity of the spin axis (i.e., the angle between the spin axis direction and the normal to the osculating orbital plane about the center). The $$F_\nu$$-functions are described below. The dependence on the different fundamental frequencies $$\nu$$ in the problem, the mean motion frequency $$n$$ in (18) and the rotational frequency $$\omega$$ in (19), gives rise to the terms seasonal and diurnal Yarkovsky effects, respectively.
Dependence on the obliquity. -- Examining the dependence on $$\gamma$$ in (18) and (19), one has: (i) $$(da/dt)_{\mathrm{diurnal}}\varpropto \cos\gamma$$ in the diurnal case, and (ii) $$(da/dt)_{\mathrm{seasonal}}\propto \sin^2 \gamma$$ in the seasonal case. As a consequence, the diurnal component of the Yarkovsky effect can produce both outward secular migration of the orbit (i.e., $$da/dt$$ positive because the $$F_\omega$$ function is negative) for prograde-rotating bodies with $$\gamma < 90^\circ$$ and inward secular migration of the orbit for retrograde-rotating bodies with $$\gamma > 90^\circ\ .$$ The maximum drift occurs when the spin axis is perpendicular to the orbital plane, $$\gamma = 0^\circ$$ or $$\gamma = 180^\circ\ ,$$ and vanishes when the spin axis is in the orbital plane, $$\gamma = 90^\circ\ .$$ In addition, because of the dependence on $$\cos\gamma\ ,$$ a population evolving in semimajor axis with isotropically distributed spin axes will have an average semimajor axis change of zero. This implies the net, long-term effect is the population diffusing into smaller and larger semimajor axis values while keeping the same mean value. On the other hand, the seasonal variant of the Yarkovsky effect always causes orbital decay towards the center (i.e., $$da/dt$$ negative). It is maximum when the spin axis is in the orbital plane and null when the spin axis is perpendicular to it. Figure 2 provides a clear intuitive basis for these conclusions.
Figure 3: Top panel: mean drift $$\Delta a$$ due to the Yarkovsky forces over a constant time interval of 1 My for a sample of asteroids at 2-3 AU heliocentric distance. The bodies have been given random orientation of spin axes in space and different values of the surface thermal conductivity K = 0.001 W/m/K, 0.01 W/m/K, 0.1 W/m/K and 1 W/m/K (see the labels and colors of different curves). The mean change of the semimajor axis, as determined by the initial and final values, has been evaluated for bodies of different size $$D\ ,$$ from meters to ten kilometers (abscissa). $$\Delta a$$ generally decreases toward larger objects, roughly following the $$\varpropto 1/D$$ rule. Bottom panel: mean drift $$\Delta a$$ due to the Yarkovsky forces over a variable time interval equal to the estimated collisional lifetime of a main-belt asteroid with size $$D\ .$$ The longer lifetime of larger bodies mostly compensates for the intrinsic decrease of $$\Delta a$$ seen in the top panel.
Dependence on the size. -- The magnitude of the semimajor axis secular change is given by the $$F_\nu$$-functions, which read (the frequency $$\nu$$ stands either for $$\omega$$ or $$n$$)
$\tag{20} F_\nu(R',\Theta) = - \frac{\kappa_1(R')\,\Theta_\nu}{1 + \kappa_2(R')\,\Theta_\nu + \kappa_3(R')\, \Theta^2_\nu}\; .$
The frequency dependence appears explicitly in the appropriate thermal parameter $$\Theta_\nu = \Gamma \sqrt{\nu}/(\epsilon\sigma T^3_{\star} )$$ and implicitly in the scaling of the size of the body $$R' = R/l_s\ .$$ Recall that the penetration depth $$l_s = \sqrt{ K /(\rho C_p \nu) } = K /(\Gamma \sqrt{\nu} )$$ of the thermal wave depends on the frequency and thus differs for the diurnal and seasonal waves. Because the rotation frequency $$\omega$$ is typically much larger than the mean motion $$n\ ,$$ the penetration depth of the seasonal wave is larger than that of the diurnal wave.
To determine the limiting cases for small/large bodies, one needs to scale our equations using the appropriate penetration depth $$l_s$$ or use the non-dimensional parameter $$R'\ .$$ In the limit of large bodies for which $$R'\gg 1$$ (bodies larger than a few meters across for relevant thermal parameters and heliocentric distances), the $$\kappa$$-functions in (20) become constant: $$\kappa_1\rightarrow 1/2\ ,$$ $$\kappa_2\rightarrow 1\ ,$$ and $$\kappa_3\rightarrow 1/2$$ (for $$R'\rightarrow \infty$$). So the size dependence drops out of the $$F$$-factor in (20) and is only included in the radiation pressure factor $$\Phi = \pi R^2 F/(mc)\ .$$ For a spherical body, $$m \varpropto R^3$$ and thus finally $$da/dt\varpropto 1/R.$$ As a rule of thumb the Yarkovsky effect is optimum for bodies that have $$R' \sim 1\ ,$$ and above this size the effect is inversely proportional to size.
The longest characteristic timescale available for accumulation of orbital effects is of the order $$\sim 1$$ Gy. This implies the Yarkovsky forces are essentially negligible for asteroids larger than $$\sim 30$$ kilometers. In the opposite limit of very small bodies, where $$R'\ll 1\ ,$$ one obtains $$\kappa_1\varpropto R'\ ,$$ $$\kappa_2\varpropto 1/R'$$ and $$\kappa_3\varpropto 1/R'^2$$ when $$R'\rightarrow 0$$ in (20). As a result, for finite $$\Theta_\nu$$ we have $$F_\nu\varpropto R^3/\Theta$$ in this limit. Combined with the $$R$$-dependence in $$\Phi\ ,$$ we finally obtain $$da/dt\varpropto R^2$$ in the limit of very small objects. This formal result supports our intuition that the Yarkovsky effect should vanish for very small particles. This is because efficient thermal conduction throughout the volume of the body makes the temperature very close to the equilibrium value everywhere. This also damps the amplitude of the recoil effect due to thermally-emitted photons. In practice, the role of the Yarkovsky forces below millimeter size becomes negligible. For smaller grains the Poynting-Robertson effect takes over, forcing particles to migrate inward toward the Sun.
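As a worked example, the diurnal drift rate of Eqs. (19)-(20) can be evaluated in the large-body limit ($$\kappa_1 = \kappa_3 = 1/2\ ,$$ $$\kappa_2 = 1$$) for a spherical body on a circular orbit. The thermal and rotational parameter values below are illustrative assumptions:

```python
import numpy as np

SIGMA, C_LIGHT, AU = 5.670e-8, 299792458.0, 1.496e11
YEAR, MY = 3.156e7, 3.156e13            # seconds per year / per My
F_1AU = 1361.0                          # solar flux at 1 AU [W/m^2]

def diurnal_dadt(R, a_au, rho=2500.0, K=0.01, Cp=680.0,
                 P_rot=6 * 3600.0, A=0.1, eps=0.9, gamma_deg=0.0):
    """Diurnal Yarkovsky drift da/dt [AU/My] for a large (R' >> 1)
    spherical body on a circular orbit, Eqs. (19)-(20); all default
    parameter values are illustrative assumptions."""
    F = F_1AU / a_au**2
    T_star = ((1 - A) * F / (eps * SIGMA))**0.25       # subsolar temperature
    Gamma = np.sqrt(rho * Cp * K)                      # thermal inertia
    omega = 2 * np.pi / P_rot
    Theta = Gamma * np.sqrt(omega) / (eps * SIGMA * T_star**3)
    F_w = -0.5 * Theta / (1 + Theta + 0.5 * Theta**2)  # Eq. (20), R' -> inf
    m = rho * (4.0 / 3.0) * np.pi * R**3
    Phi = np.pi * R**2 * F / (m * C_LIGHT)
    n = 2 * np.pi / (YEAR * a_au**1.5)                 # mean motion [rad/s]
    dadt = -(8.0 / (9.0 * n)) * (1 - A) * Phi * F_w * np.cos(np.radians(gamma_deg))
    return dadt * MY / AU                              # m/s -> AU/My

print(diurnal_dadt(R=500.0, a_au=1.0))   # ~ +4e-4 AU/My (outward drift)
```

For a $$D = 1$$ km prograde rotator at 1 AU this gives a few $$10^{-4}$$ AU/My, the same order as the measured rates listed in Figure 6 below.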
Figure 3 shows the characteristic drift rates in semimajor axes for bodies of different sizes and surface thermal conductivities. The bottom part of the figure, where the drift rate is multiplied by a characteristic collisional lifetime of a body in the main asteroid belt, indicates that meteorite precursors can move by a characteristic value of $$\sim 0.1$$ AU. This implies that meteoroids can move from buffer zones 0.1 AU wide or so to dynamical resonances in the main asteroid belt, where their eccentricities can be pumped up to planet-crossing orbits.
In the case of multi-kilometer size asteroids, the spread of $$\Delta a$$ is smaller, with a characteristic value of $$\sim 0.04$$ AU. This is a reasonable representation of how far asteroid families extend in semimajor axis away from their central or mean values. As a result, over a billion years or so, small asteroids in many families have migrated far enough to significantly change their initial spread of fragments in $$a\ .$$
Dependence on the heliocentric distance. -- In the case of the diurnal effect, typically characterized by the large-$$\Theta_\omega$$ and large-$$R'$$ regime, one has $$da/dt\varpropto \Phi/(n\Theta_\omega)\ .$$ Because the subsolar temperature $$T_\star$$ decreases with the heliocentric distance $$d$$ as $$T_\star\varpropto 1/d^{\,1/2}\ ,$$ we have $$\Theta_\omega \varpropto d^{\,3/2}$$ and thus $$n\Theta_\omega$$ is roughly independent of the heliocentric distance. Then the dependence of $$da/dt$$ on $$d$$ simply derives from the flux decrease $$\Phi\varpropto 1/d^{\,2}$$ and thus also $$da/dt\varpropto 1/d^{\,2}\ .$$ A slightly more complicated situation may occur for the seasonal variant of the Yarkovsky effect. This is because both $$l_n$$ and $$\Theta_n$$ increase proportionally to $$d^{\,3/4}$$ for near-circular orbits. This dependence in the $$F_n$$-function combines with the $$\varpropto 1/d^{\,2}$$ decrease of the solar flux in $$\Phi\ ,$$ producing an overall shallower decrease of the averaged $$da/dt$$ due to the seasonal effect with $$d.$$
## Rotational consequences of the YORP effect
With rare exceptions, the problem of combining rotational dynamics with the radiation torques (12) on a small body in space has been analyzed using simplifying assumptions, namely the assumption of rotation about the principal axis of the inertia tensor. Denoting by $$C$$ its largest eigenvalue and by $${\mathbf s}$$ the corresponding eigenvector, the rotational angular momentum vector can be written simply as $${\mathbf L} = C \omega\, {\mathbf s}\ .$$ Its time derivative is equal to the acting torques $${\mathbf T}\ .$$ The Euler equation can then be split into a pair of equations
$\tag{21} \frac{d\omega}{dt} = \frac{{\mathbf T}\cdot {\mathbf s}}{C}$
and
$\tag{22} \frac{d{\mathbf s}}{dt} = \frac{{\mathbf T}-{\mathbf s}\left({\mathbf s}\cdot {\mathbf T}\right)}{C\omega}\; ,$
to individually track the evolution of the rotation frequency $$\omega$$ and the spin vector $${\mathbf s}\ .$$ Conventionally $${\mathbf s}$$ is parametrized using the obliquity $$\gamma\ ,$$ given by $$\cos\gamma={\mathbf s}\cdot{\mathbf N}\ ,$$ where $${\mathbf N}$$ is a unit vector along the orbital angular momentum, and the precession angle $$\psi\ .$$ The YORP effects on $$\psi$$ are less important, mainly because the dominant secular effects from gravitational solar torques always have an uncertainty larger than the YORP contribution. This allows us to focus on $$\gamma$$, where
$\tag{23} \frac{d(\cos\gamma)}{dt} = \frac{{\mathbf T}\cdot {\mathbf e}}{C \omega}$
holds with an auxiliary vector $${\mathbf e}={\mathbf N}-{\mathbf s}\left({\mathbf s}\cdot{\mathbf N}\right)\ .$$
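These projections are straightforward to implement; the sketch below assumes the torque vector has already been obtained from an integration such as Eq. (12), and the numbers in the example are hypothetical:

```python
import numpy as np

def spin_state_rates(T, s, N, C, omega):
    """Split a (time-averaged) torque vector T [N m] into the rotation-rate
    and obliquity rates of Eqs. (21) and (23). s: spin-axis unit vector;
    N: orbit-normal unit vector; C: largest principal moment of inertia
    [kg m^2]; omega: rotation rate [rad/s]."""
    domega_dt = np.dot(T, s) / C                  # Eq. (21)
    e = N - s * np.dot(s, N)                      # auxiliary vector of Eq. (23)
    dcos_gamma_dt = np.dot(T, e) / (C * omega)    # Eq. (23)
    return domega_dt, dcos_gamma_dt

# hypothetical example: spin axis tilted ~17 deg from the orbit normal
s = np.array([0.0, 0.0, 1.0])
N = np.array([0.0, np.sin(0.3), np.cos(0.3)])
T = np.array([1.0e3, 0.0, 2.0e3])                 # assumed torque [N m]
print(spin_state_rates(T, s, N, C=5.0e20, omega=3.0e-4))
```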
To obtain long-term (secular) effects, one uses a first order perturbation theory with the unperturbed uniform rotation solution inserted into the right-hand sides of (21) and (23) and then averages over both the rotation and revolution cycles. Analytical results for the YORP effect are more difficult to achieve than for the Yarkovsky effect, so numerical models are used to glean insights into what happens to a body. Both approaches are needed, however, to understand the basic properties of the YORP effect and to adequately compare model results with observations. Similar to the Yarkovsky effect case, a number of different approaches have been developed along these lines over the last decade.
Secular rates in $$\omega$$ and $$\gamma$$ are given as (zero surface conductivity and circular orbit assumed)
$\tag{24} \frac{d\omega}{dt} = \frac{\Lambda}{C}\,\sum_{p\geq 1}\,{\mathcal A}_p\, P_{2p}\left(\cos\gamma\right)\; ,$
and
$\tag{25} \frac{d\gamma}{dt} = \frac{\Lambda}{C\omega}\,\sum_{p\geq 1}\,{\mathcal B}_p\, P^1_{2p}\left(\cos\gamma\right)\; ,$
where $$\Lambda = 2(1-A)F R^3/(3c)$$ (with the same notation as above for the Yarkovsky effect), $$({\mathcal A}_p,{\mathcal B}_p)$$ are complicated functions that depend on the surface irregularities found on the body, $$P_{2p}\left(\cos\gamma\right)$$ are Legendre polynomials and $$P^1_{2p}\left(\cos\gamma\right)$$ are associated Legendre functions of the first order. The series in (24) and (25) may converge very slowly, indicating they are dependent on very high-order terms (large $$p$$). In practice, this means a YORP solution may depend on increasingly fine structures found on the body's surface (e.g., craters, boulders, etc.). This trend has been confirmed by numerical tests and represents one of the most fundamental obstacles to accurate YORP prediction. Moreover, the analytical estimates above do not take into account the effects of mutual occlusions of the surface facets, which can be very important in numerical runs.
Figure 4: Estimated mean YORP effect on the $$\sim 30$$ m diameter near-Earth asteroid 1998 KY26: (i) secular change of the rotation rate $$d\omega/dt$$ (left panel), and (ii) the obliquity $$d\gamma/dt$$ (right panel). The results have been computed numerically using the radar-derived shape and for different values of the obliquity at the abscissa. The YORP effect's ability to change rotation rate (left) does not depend on the surface thermal conductivity $$K$$ in the plane-parallel numerical model used, while the effect in obliquity (right) is $$K$$-dependent (results for several values of $$K$$ are shown by different curves; numerical values in W/m/K). Both numerically derived curves, $$d\omega/dt$$ and $$d\gamma/dt\ ,$$ show qualitative features explained by the analytic model, Eqs. (24) and (25).
Dependence on the obliquity. -- The secular changes $$d\omega/dt$$ and $$d\gamma/dt$$ in (24) and (25) show a different parity when $$\gamma$$ is transformed to $$180^\circ-\gamma\ :$$ (i) $$d\omega/dt$$ is an even, while (ii) $$d\gamma/dt$$ is an odd function of $$\gamma\ .$$ When the quadrupole term ($$p=1$$) dominates, $$d\gamma/dt$$ vanishes for $$\gamma=0^\circ\ ,$$ $$90^\circ$$ and $$180^\circ$$ and its value is either positive or negative over the entire range of prograde-rotating objects (and takes the opposite sign for the retrograde-rotating objects). If YORP affected only the obliquity, the latter would thus secularly evolve towards asymptotic values with the spin vector $${\mathbf s}$$ either perpendicular to the orbital plane or in the orbital plane. In the same way, $$d\omega/dt$$ vanishes at the nodes of the second-degree Legendre polynomial, i.e., $$\gamma\sim 55^\circ$$ and $$\gamma\sim 125^\circ\ .$$ Near the asymptotic states of the obliquity, $$d\omega/dt$$ takes its maximum value, which can be either positive (long-term acceleration of the rotation rate) or negative (long-term deceleration of the rotation rate). An example of the obliquity dependence of $$d\omega/dt$$ and $$d\gamma/dt$$ due to YORP for the small near-Earth asteroid 1998 KY26 is shown in Figure 4.
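The zero crossings just quoted follow directly from the $$p=1$$ terms of (24) and (25); a short numerical check (the sign convention of the associated Legendre function is a bookkeeping choice and does not affect the node positions):

```python
import numpy as np

# Quadrupole (p = 1) obliquity dependence of the YORP rates, Eqs. (24)-(25):
# d(omega)/dt follows P_2(cos gamma), d(gamma)/dt follows P^1_2(cos gamma).
gamma = np.linspace(0.0, 180.0, 181)
x = np.cos(np.radians(gamma))
P2 = 0.5 * (3.0 * x**2 - 1.0)            # Legendre polynomial P_2
P2_1 = 3.0 * x * np.sqrt(1.0 - x**2)     # P^1_2 (Condon-Shortley sign dropped)

# d(omega)/dt changes sign at the nodes of P_2:
print(gamma[np.where(np.diff(np.sign(P2)))[0]])   # ~ [54., 125.] deg
# d(gamma)/dt vanishes at the asymptotic obliquity states:
print(gamma[np.abs(P2_1) < 1e-12])                # [0., 90., 180.] deg
```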
The gross picture from the quadrupole contribution in (24) and (25) can be modified if high-order terms are important. Additionally, when the effects of finite surface thermal conductivity are included, the results for (i) the rotation rate effect $$d\omega/dt$$ are not changed, but (ii) those for the obliquity effect $$d\gamma/dt$$ are modified.
Dependence on the size. -- Noting the proper dependence of $$\Lambda$$ and $$C$$ on the characteristic radius $$R$$ (i.e., $$\Lambda\propto R^3$$ and $$C\propto R^5$$), Eqs. (24) and (25) indicate that both $$d\omega/dt$$ and $$d\gamma/dt$$ are $$\propto 1/R^2\ .$$ Thus, the YORP effect is strongly dependent on the size of the body. Tests indicate that $$\sim (10-20)$$ km size asteroids at a few AU heliocentric distance need a Gy timescale to modify their rotation state by YORP, as derived from the scaling $$\omega/(d\omega/dt).$$ YORP is essentially an unimportant effect for larger objects.
A less clear and more interesting situation is how the YORP effect modifies the spin vector of small bodies. For example, if a small object is spinning up, it could presumably continue to spin up until it disrupts. The problem, however, is that the cosmic ray exposure lifetimes of stony meteoroids in space are typically many tens of millions of years, many orders of magnitude longer than the rotational bursting timescale obtained from the YORP equations. Observations therefore imply that YORP must become less effective and shut down at some level for small bodies. An improved analytical model, with non-radial thermal flux components included, still predicts a divergence in $$d\omega/dt$$ and $$d\gamma/dt$$ as $$\propto 1/R\ ,$$ but some physical assumptions are violated as we pass certain thresholds. This intriguing problem has not yet been studied in detail, and the solution to how and why YORP shuts down is currently unknown.
# Direct detection of Yarkovsky and YORP effects
Accurate ground-based observations have now allowed astronomers to directly detect both the Yarkovsky and YORP effects on asteroids. This was an important validation both of the theory presented above and of its underlying concepts. It is also useful because most applications of the Yarkovsky and YORP effects involve statistical studies of small-body populations in the Solar system rather than detailed descriptions of the dynamics of individual objects. Moreover, further observations during the next decade or so will allow many more direct detections of both the Yarkovsky and YORP effects. The Yarkovsky effect will also likely become an unavoidable part of the orbit determination for small near-Earth asteroids and an important element in any analysis of their potential impact hazard to Earth.
## Detection of Yarkovsky effect
Figure 5: Orbital solution of near-Earth asteroid (6489) Golevka from astrometric data before May 2003 projected into the plane of radar observables: (i) range ("roughly distance to the radar antenna") at the abscissa, and (ii) range-rate ("roughly relative radial velocity of the asteroid with respect to the radar antenna") on the ordinate. The origin referred to here is the center of the nominal solution that only includes gravitational perturbations. The blue ellipse represents a 90 % confidence level in the orbit due to uncertainties in astrometric observations as well as small body and planetary masses. The center of the red ellipse is the predicted solution with the Yarkovsky forces included; note the range offset of $$\sim 17$$ km and the range rate offset of $$\sim 5$$ mm/s. The actual Arecibo observations from May 24, 26 and 27 of the year 2003 are shown by the diamond (the measurement uncertainty in range is smaller than the symbol). The observations fall within the uncertainty region of the orbital solution containing the Yarkovsky forces (red ellipse; 90 % confidence level).
Detecting the Yarkovsky effect among real asteroids is not straightforward, partly because it is not the only force capable of modifying the motion of small bodies in the Solar System, but also because its strength is weak and observations have inherently finite accuracy. In practice, one needs (i) very precise astrometric observations, with superior measurements provided by planetary radar (when available), (ii) observations spanning at least a decade or so, and (iii) a target asteroid small enough that the effects are measurable over the time interval (in practice, smaller than a few kilometers in size). This means plausible Yarkovsky candidate targets must be winnowed out using a list of criteria: favorable orbital geometry, close approaches to the Earth where radar data can be acquired, and the availability of other observations, such as photometry providing information about the rotation period or spin axis orientation of the near-Earth asteroid. The first asteroid to meet all these criteria was (6489) Golevka (Figure 5 and Chesley et al. 2003).
The possibility of decorrelating orbital perturbations produced by the Yarkovsky effect from other effects derives from the Yarkovsky effect's ability to secularly change the semimajor axis $$a$$ of an asteroid's heliocentric orbit. Kepler's third law directly translates a nonzero average $$da/dt$$ value onto a nonzero secular change $$dn/dt$$ of the orbital mean motion $$n\ ,$$ producing a quadratic advance $$\Delta \lambda$$ in the longitude in orbit $$\lambda\ .$$ An order of magnitude estimate, neglecting eccentricity corrections, provides $$\Delta \lambda \sim \int \delta n\,dt \sim \int (dn/dt)\,t\,dt \sim \frac{3n}{4a}\,(da/dt)\,(\Delta T)^2$$ in time $$\Delta T\ .$$ The same effect can also be expressed as a transverse displacement along the orbit of the order $$\sim a\,\Delta \lambda$$ which again propagates quadratically in time. It is this $$\propto (\Delta T)^2$$ progression in either of the two quantities that makes the Yarkovsky effect distinct from other perturbations.
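Plugging Golevka-like numbers into this estimate reproduces the offset seen in Figure 5; the drift rate used below is the order of the value published by Chesley et al. (2003), and the 12-year baseline is approximate, both assumptions for illustration:

```python
import numpy as np

AU = 1.496e11                    # m
a = 2.5                          # semimajor axis [AU] (approximate, Golevka-like)
dadt = 6.4e-4 / 1.0e6            # |da/dt| [AU/yr], order of the published value
n = 2 * np.pi / a**1.5           # mean motion [rad/yr]
dT = 12.0                        # years of astrometry before May 2003 (approx.)

# transverse displacement  s ~ a * dlambda = (3/4) n (da/dt) (dT)^2
s = 0.75 * n * dadt * dT**2 * AU / 1e3
print("along-track offset after %g yr: ~%.0f km" % (dT, s))  # ~ 16 km
```

This is the size of the $$\sim 17$$ km range offset shown in Figure 5, and it illustrates why the quadratic growth with $$\Delta T$$ is what makes the effect detectable.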
At present, ground-based astrometric observations of the sky-plane position in optical wavelengths are only accurate to $$0.1-0.5$$ arcseconds at best, and a factor of a few worse in regular astrometric survey observations. This is not much better than the expected Yarkovsky displacement over a decade or so, unless the body approaches the Earth at a very close distance. The accuracy of radar astrometry is much better; the best observations have an uncertainty of only a few tens of meters. This can be several orders of magnitude better than the Yarkovsky displacement, and thus radar astrometry is potentially much better at detecting weak orbital perturbations such as the Yarkovsky forces than optical observations. Its only drawback is that, at present, the number of near-Earth asteroids that have come close enough to Earth to be observed by radar is limited: about 450 or so, compared to more than 8500 near-Earth asteroids observed in optical wavebands.
Figure 6: Asteroids for which Yarkovsky forces have been estimated using orbit determination methods. The second column gives the $$(da/dt)$$ value with formal uncertainty, the third column gives absolute magnitude as determined from astrometric observations (typically uncertain to $$\sim (0.3-0.5)$$ magnitude), the fourth column gives obliquity $$\gamma$$ if determined from photometric observations and/or radar observations, and the fifth column gives mean heliocentric distance $$\bar{r}\ .$$
Figure 6 provides a list of objects where the Yarkovsky effect has been detected using orbit determination methods (S.R. Chesley and D. Cotto-Figueroa, personal communication 2009). The uncertainty in $$(da/dt)$$ for these bodies is fractionally less than one half of the derived value.
The role of Yarkovsky forces in impact hazard computations. -- For near-Earth asteroids that could potentially hit the Earth in the near future, the Yarkovsky forces must be part of any orbit determination procedure. Their importance looms even larger if one needs a high-accuracy ephemeris on longer timescales, say tens to hundreds of years. A special class of problems, in this respect, relates to how asteroid impact hazards for Earth are calculated; the orbital uncertainties produced by the Yarkovsky effect mean that we must observe potentially hazardous asteroids for long time periods before we can completely rule an impact out. So far, the role of Yarkovsky forces in evaluating the impact probability has been studied for three asteroids, but more cases should be expected soon: (i) (29075) 1950 DA, (ii) (99942) Apophis, and (iii) (101955) 1999 RQ36. The Yarkovsky effect has been recognized as the most important element in their orbital uncertainty and thus their impact probability with Earth.
Future outlook. -- Future radar and ground-based astrometric observations will continue to provide possibilities for further detections of the Yarkovsky effect using orbital determination of the near-Earth asteroids. This is because, by definition, their addition to the available database will meet both of the necessary requirements: (i) they extend the timebase $$\Delta T\ ,$$ and (ii) very likely they will be even more accurate than the available astrometric observations to-date. The outlook is especially optimistic if the powerful next-generation sky surveys, such as PanSTARRS or LSST, begin to detect large numbers of near-Earth asteroids. With their wide field cameras and observing schedule covering the high-latitude regions, they will not only regularly observe the currently known population of objects, but will also discover many more small asteroids. Moreover, their large CCD cameras will allow astrometry with roughly an order of magnitude better accuracy than is currently available.
Additionally, the space-based astrometric project Gaia promises to further advance our abilities to detect the Yarkovsky effect. With an internal accuracy of $$\sim 1$$ milliarcseconds for moving objects (even better for targets brighter than $$\sim 13$$ magnitude in V band) and 5 years of operation, it will provide a superb astrometric database for a myriad of near-Earth asteroids; its location near the L2 Lagrange point of the Sun-Earth system will allow much more complete observations of the asteroid population than is possible from the Earth's surface.
## Detection of YORP effect
Figure 7: Additional sidereal phase $$\Delta \phi$$ required to link photometric observations of a small near-Earth asteroid (54509) YORP during its yearly apparitions from July 2001 till August 2005. The nominal model assumes an asteroid shape determined by the reconstruction of radar echoes, pole position at ($$180^\circ\ ,$$ $$-85^\circ$$) ecliptic longitude and latitude and a constant rotation rate $$42582.41\pm 0.02$$ deg/day. This nominal model, however, is unable to match the rotation phase observed at subsequent apparitions (symbols). The difference is well fit by a quadratic function $$\Delta\phi$$ with $$(d\omega/dt)=(350\pm 35)\times 10^{-8}$$ rad/day$$^2\ .$$ This value fits the theoretically predicted YORP value for this object reasonably well.
Given sufficient astronomical observations, one can determine both the sidereal rotation period and the orientation of the rotation pole of a small body in the Solar System. In principle, the solution to both components may reveal secular effects due to the YORP torques. However, YORP torques are extremely weak. Fortunately, it is relatively straightforward to get an accurate determination of the rotation period of an asteroid, thereby allowing direct determination of the YORP effect over sufficiently long time intervals. Still, the YORP effect in $$d\omega/dt$$ is a strong function of size, even more so than the Yarkovsky forces, such that the possibility of detecting YORP is limited to small near-Earth asteroids.
The YORP effect may be detected using two techniques (or their combination): (i) optical photometry, and (ii) radar. Both provide instantaneous information about the orientation of the body with respect to the detector (and the Sun in the first case). A time series of this relative configuration may be, after appropriate geometric transformations are taken into account, transformed into a time dependence of the rotation phase $$\phi$$ of the body in inertial space ("sidereal rotation"). If the body rotated about the principal axis of the inertia tensor with a constant frequency $$\omega\ ,$$ $$\phi$$ would be a linear function of time, i.e. $$\phi=\omega\,t + C\ .$$ The YORP effect contributes to $$d\omega/dt$$ a non-zero average (secular term) and periodic terms. The latter are, however, typically too small to be detected, so the basic perturbation produced by the YORP effect is a quadratic advance in the sidereal phase of rotation $$\Delta \phi= \frac{1}{2}\,(d\omega/dt)\,(\Delta T)^2$$ over time $$\Delta T$$ (similar to the Yarkovsky effect in the longitude in orbit discussed above).
Figure 8: Asteroids for which YORP torques have been detected using observations of the sidereal rotation phase. The second column gives $$(d\omega/dt)$$ value with formal uncertainty, the third column gives absolute magnitude as determined from astrometric observations (typically uncertain to $$\sim (0.3-0.5)$$ magnitude), the fourth column gives rotation period in hours, the fifth column gives obliquity $$\gamma\ ,$$ and the sixth column gives mean heliocentric distance $$\bar{r}\ .$$ For the last two targets, observational limits constrain the available theory which in turn predicts the larger value.
The first target for which YORP torques were directly detected is the small coorbital asteroid (54509) YORP, formerly 2000 PH5, which also has the largest value of $$(d\omega/dt)$$ detected so far (Figure 8). In four years (2001-2005) of accurate radar and lightcurve observations, the additional phase advance was $$\Delta \phi\sim 225^\circ$$ (Figure 7; see also Lowry et al. 2007 and Taylor et al. 2007). Larger asteroids have significantly smaller $$(d\omega/dt)\ ,$$ so longer intervals of time $$(\Delta T)$$ between the first and the last observations are required. For instance, in the case of (1620) Geographos, with $$(d\omega/dt)\simeq 1.2\times 10^{-8}$$ rad/day$$^2\ ,$$ the time interval of 39 years between the first observations in 1969 and the last observations in 2008 provided $$\Delta \phi\sim 70^\circ\ .$$ To appreciate the accuracy of these observations, one may also express them in terms of the detected annual change of the rotation period, which amounts to only $$1.25$$ ms/y for (54509) YORP and $$2.7$$ ms/y for (1620) Geographos. Figure 8 summarizes some useful information about those asteroids where the YORP effect has been determined.
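As a quick cross-check of these numbers, the quadratic phase-advance formula of the previous section can be evaluated directly. The Python sketch below is illustrative only: the rotation periods it uses ($$\sim 12.17$$ min for (54509) YORP and $$\sim 5.22$$ hr for (1620) Geographos) are assumptions taken from the published literature, since the table of Figure 8 is not reproduced here.

```python
import math

def phase_advance_deg(domega_dt, dT_days):
    """Quadratic sidereal-phase advance, (1/2)(domega/dt)(dT)^2, in degrees."""
    return math.degrees(0.5 * domega_dt * dT_days**2)

def period_drift_ms_per_year(domega_dt, period_hours):
    """Annual rotation-period change implied by domega/dt, in ms/yr.
    Since P = 2*pi/omega, |dP/dt| = 2*pi*(domega/dt)/omega**2."""
    omega = 2.0 * math.pi / (period_hours / 24.0)   # spin rate in rad/day
    dP_dt = 2.0 * math.pi * domega_dt / omega**2    # day per day
    return dP_dt * 365.25 * 86400.0 * 1e3           # ms per year

# (54509) YORP: domega/dt = 350e-8 rad/day^2, ~4.1 yr of observations
print(phase_advance_deg(350e-8, 4.1 * 365.25))      # ~225 deg, as in Figure 7
print(period_drift_ms_per_year(350e-8, 12.17 / 60)) # ~1.25 ms/yr

# (1620) Geographos: domega/dt = 1.2e-8 rad/day^2, 39 yr baseline
print(phase_advance_deg(1.2e-8, 39 * 365.25))       # ~70 deg
print(period_drift_ms_per_year(1.2e-8, 5.22))       # ~2.9 ms/yr, close to the quoted 2.7
```

Both quoted phase advances and the ms/y period drifts are recovered to within the rounding of the input values.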
# Planetary Applications: Yarkovsky effect
Meteorite and NEA transport. -- Since the time of E. Chladni, scientists have recognized that meteorites originate in outer space. Which asteroids, or groups of asteroids, were their sources, and how they were transported from those source regions to Earth, were longstanding problems in solar system dynamics. Research into the Yarkovsky effect has helped us to understand the transport problem, with important implications for determining precisely where in the main asteroid belt meteorites come from.
The Yarkovsky effect, with its ability to secularly change the semimajor axes of meteoroids (the precursors of meteorites, which are believed to be fragments of larger asteroids located in the main belt between the orbits of Mars and Jupiter), was originally proposed to be the main element driving meteorites to the Earth (see Öpik 1951). However, direct transport from the main belt, say as a small body slowly spiraling inward toward the Sun by the Yarkovsky effect, required very long timescales and unrealistic values of the thermal parameters and/or rotation rates for meter-size bodies. Moreover, AM/PM fall statistics and, in rare cases (like the Pribram meteorite), measured pre-atmospheric trajectories indicated that many meteorites had orbits with semimajor axes still close to the main belt values.
Advances in our understanding of asteroid dynamics, in particular our knowledge of secular and mean-motion resonances, allowed us to recognize in the late 1970s and early 1980s that resonances are transport routes that move main belt objects onto planet-crossing orbits. The Yarkovsky effect would presumably not be necessary for meteorite transport if collisional breakups of parent asteroids directly injected their precursors into nearby resonances. The problem with this scenario, however, is that the transport timescales of meteorite precursors via the resonances were too short to match the cosmic-ray exposure ages of the most common ordinary chondrite classes, many of which are dominated by ages of ten to nearly a hundred My.
When neither of the two hypotheses worked alone, astronomers concluded that a constructive synergy of both might be the correct answer to the meteorite transfer problem. In this model, meteoroids or their immediate precursor objects are collisionally born in the inner and/or central parts of the main belt, from where they are then transported to resonances by the Yarkovsky effect. En route, some of the precursors may undergo fragmentation, which can produce new swarms of daughter meteoroids that eventually reach planet-crossing orbits. With this extended model, one can explain the distribution of the cosmic-ray exposure ages of stony meteorites as a combination of several timescales: the time it takes a meteoroid to travel to a resonance, the time it takes that resonance to deliver the meteoroid to an Earth-crossing orbit, and the time it takes the meteoroid on a planet-crossing orbit to hit Earth (e.g., Vokrouhlický and Farinella 2000).
Farinella and Vokrouhlický (1999) noted that larger, kilometer-sized asteroids in the near-Earth population can also be resupplied via the Yarkovsky effect, with thermal drift slowly feeding the resonances. Because their Yarkovsky drift in semimajor axis is smaller than for meteoroids (Figure 3), and their collisional lifetimes are longer, the delivery mechanism can involve both strong resonances and weaker but more numerous high-order resonances, which criss-cross the asteroid belt. Many of these tiny resonances are found in the inner main belt. Numerical models show they allow multi-kilometer objects to escape over a timescale of hundreds of My to Gy. After evolving out of the main belt region, these objects first become Mars crossers before being further transported by planetary encounters to the near-Earth population.
Spreading of asteroid families. -- Asteroid families are clusters of fragments produced when two asteroids slam into one another at hypervelocities. They are traditionally found in the space of proper orbital elements, with only the youngest ones recognizable in the space of osculating orbital elements. It has long been recognized, however, that the orbital structure of the major families does not match the spread predicted by the expected ejection velocity fields. Advances in our understanding of asteroid dynamics in the 1990s provided two possibilities to solve this problem: (i) weak mean-motion and secular resonances modify the values of proper eccentricity $$e$$ and inclination $$i$$ among asteroids, and (ii) the Yarkovsky effect modifies the value of proper semimajor axis $$a\ .$$ Both turn out to be important in explaining observations.
Interestingly, because the dominant variant of the Yarkovsky effect is the diurnal component, family members undergo both positive and negative changes in $$a$$ (depending on the obliquity values of the individual asteroids). During this evolution, the orbits may become trapped in some of the weaker resonances, thereby inducing evolution of the proper eccentricity or inclination. If they evolve far enough to reach one of the stronger mean motion resonances, they can be pushed out of the main belt and eliminated from the family. The latter case produces many of the observed sharp truncations seen among certain families.
By analyzing the peculiar structure of the Koronis family, Bottke et al. (2001) provided the first clear example of how family structures could be affected by resonant and Yarkovsky dynamical effects. A number of additional examples have been published since that time. Moreover, by using these dynamical effects like a clock, it is possible to estimate the ages of asteroid families. This information has been used in studies of space weathering as well as the characterization of epochs of elevated dust or asteroid accretion on the Earth.
Age constraints for older families involve matching their structure in proper orbital element space. More detailed results can be obtained for young families with ages less than $$\sim 10$$ My. In this case, direct backward integration of the orbits of all family members can be used to monitor the best possible convergence of the secular angles, namely the longitudes of node and pericenter. This analysis also requires thermal forces to be included in the dynamical model.
# Planetary applications: YORP effect
Asteroids in spin-orbit resonances. -- In an attempt to generalize Cassini's second and third laws, G. Colombo developed a mathematical model in the 1960s that describes the evolution of the spin axis of a body rotating about a principal axis of its inertia tensor. Colombo included two fundamental elements in his approach: (i) gravitational torques due to a massive center (e.g., the Sun), and (ii) regular precession of the orbital plane of the body caused by exterior perturbers (e.g., planets). Because (i) produces a regular precession of the spin axis, a secular spin-orbit resonance (with a stable fixed point called Cassini state 2) may occur between its frequency and the frequency with which the orbital plane rotates in inertial space. Such a resonance may occur only for a certain range of obliquity and rotation period values, and thus there is only a small probability that the spin state of any given asteroid is located in the Cassini state 2 associated with one of the frequencies by which its orbital plane precesses in space.
With this as background, the discovery of five prograde-rotating Koronis member asteroids with similar spin vectors (i.e., spin axes nearly parallel in inertial space and similar rotation periods) was an enormous surprise to asteroid experts. Additionally, the sample of retrograde-rotating asteroids in the same observation campaign showed these objects had anomalously large obliquities ($$\geq 154^\circ$$) and either very short or very long rotation periods (Slivan 2002). This puzzling situation, however, was solved using a model where gravitational spin dynamics was augmented by introducing the long-term effects of YORP torques (Vokrouhlický et al. 2003). The YORP effect was shown to bring, on a $$\sim (2-3)$$ Gy timescale, prograde states close to Cassini state 2 associated with the prominent $$s_6$$ frequency in the orbital precession, thus providing a natural explanation for their alignment in inertial space. No such trapping zone exists for retrograde-rotating bodies, which evolve toward extreme values in both their obliquities and rotation periods.
The possibility exists for asteroid spin states to be trapped in similar spin-orbit resonant states for bodies residing on low-inclination orbits, especially in the central and outer parts of the main asteroid belt.
Distribution of spin rate and pole orientation of small asteroids. -- The distribution of rotation frequencies of large asteroids in the main belt matches a Maxwellian function quite well, with a mean rotation period of $$\sim (8-12)$$ hr, depending on the size of the bin used. However, data for asteroids smaller than $$\sim 20$$ km show significant deviations from this law, with many asteroids having either very slow or very fast rotation rates. Recently, a significant quantity of rotation-frequency data has become available from a sample of small main belt asteroids (note that data are also available for small asteroids on planet-crossing orbits, but their interpretation may be complicated by the effects of planetary close approaches). After eliminating possible binary systems, solitary kilometer-size asteroids in the main asteroid belt were shown to have a roughly uniform distribution of rotation frequencies. The only statistically significant deviation was an excess of slow rotators (rotation frequencies less than about one cycle per day).
These results are well explained with a simple model of relaxed YORP evolution. In this view, asteroid spin rates are driven by the YORP effect toward extreme (large or small) values on a characteristic (YORP-) timescale that depends on size. Asteroids evolving toward a state of rapid rotation shed mass and rotational angular momentum, yielding more slowly spinning objects. Those spinning down from YORP lose so much rotational angular momentum that they can enter a tumbling phase. These objects may later emerge from this state naturally, with a new spin vector, or may gain rotational angular momentum from subcatastrophic impacts. After a few such cycles the spin rates settle to an approximately uniform distribution and the memory of their initial values is erased.
In a similar fashion, the spin poles of large and small asteroids also differ from one another. The distribution of pole orientation of large asteroids in the main belt is roughly isotropic, with only a moderate excess of prograde rotating bodies. On the other hand, rotation poles of small asteroids (sizes $$\leq 30$$ km) are strongly concentrated toward ecliptic north and south poles. This result can again be explained with the above model of YORP evolution, with YORP torques driving obliquities toward extreme values.
Structure of asteroid families. -- While the structure of asteroid families in proper element space can be modified by long-term diffusion in semimajor axis due to the Yarkovsky effect, the process may be affected by the YORP effect in an intriguing way. This is because the strength of the Yarkovsky effect depends on the obliquity value, which is itself modified by the YORP effect. Numerical simulations have determined that in the majority of cases the YORP effect should asymptotically tilt the spin axis toward extreme obliquity values, $$0^\circ$$ or $$180^\circ\ ,$$ for which the diurnal variant of the Yarkovsky effect is maximized (Eq. (19)). In this way, radial migration due to the thermal forces is accelerated for small members (a few kilometers in size) of families. As a result these small asteroids may be preferentially brought to extreme values of semimajor axis. This pattern is observed in several families and can be used to constrain their ages.
Origin of binary asteroids and asteroidal pairs. -- Binary and multiple systems of asteroids are of great value to planetary science because studies of their mutual motion can provide much more complete information about their components than observations of single objects. For that reason considerable attention has been devoted to these systems in recent times.
The origin of binary and multiple systems may, in many cases, also be related to YORP. Restricting our attention to small binary systems in the main belt and among near-Earth objects, which may make up 15 to 20 % of their respective populations, it is clear a robust formation mechanism is needed to explain observations. While several processes can lead to the formation of binary and multiple systems, such as asteroid collisions or tidal disruption events during planetary encounters, important clues are provided by the available observations: (i) the primary (larger) component of the binary nearly always rotates very fast (rotation periods between 2 and 4 hr), and (ii) the estimated total angular momentum of the system is very close to the rotational angular momentum of a critically rotating parent body. Conditions (i) and (ii) hold for the majority of small systems both among near-Earth and main belt asteroids. The most reasonable scenario for their formation is therefore based on the idea of rotational fission driven by the YORP effect. YORP secularly accelerates an asteroid's rotation rate, which in turn drives it toward the fission limit. The characteristic timescale needed to bring a "typically" rotating kilometer-size asteroid in the inner main belt to the fission limit is $$\sim (20-50)$$ My, shorter than its collisional lifetime. While the YORP effect is likely the driving mechanism, details of the fission mechanics are not known and are the subject of current research.
Binary systems are closely related to asteroidal pairs, namely couples of asteroids on very similar orbits. A significant correlation between the rotation rate of the primary (larger) component in the pair and the mass ratio of the two components strongly suggests asteroid pairs were also formed by rotational fission of a parent object (Pravec et al. 2010). Details of the fission process, such as the elongation of the parent asteroid, the relative mass of the material shed from the primary, and the inherent stability of the two objects in orbit, help explain why a binary system is formed in one case and an unbound asteroid pair in the other.
# Further topics
While covering many of the important applications of the Yarkovsky and YORP effect, this page neglects discussion of several topics of potential importance for the future. They include:
• Binary YORP (BYORP) -- This idea was introduced by Ćuk and Burns (2005) and has since gained considerable attention. The model assumes that the smaller (kilometer-size) component in a binary asteroid may evolve quickly due to the YORP effect once it is tidally locked in the 1/1 spin-orbit resonance. This configuration may efficiently transfer rotational angular momentum into orbital angular momentum, provided tidal effects are able to maintain the synchronous state with a low libration amplitude, and thus drive the orbital evolution of the binary system into new states.
• Thermal forces for dynamics of ring particles -- Similar to artificial spacecraft, meter-size boulders in circumplanetary rings can undergo secular orbital decay or expansion due to thermal forces by planetary or solar heating. In this case, however, the complication is that ring particles are not isolated but reside in compact systems where collisions are frequent. This makes it difficult to analytically estimate the mutual effects of bodies in such systems.
# References
• Beekman, G. 2006, I.O. Yarkovsky and the discovery of "his" effect, J. Hist. Astron. 37, 71–86.
• Bottke, W.F., Vokrouhlický, D., Rubincam, D.P. and Brož, M. 2002, Dynamical evolution of asteroids and meteoroids using the Yarkovsky effect, in Asteroids III, ed. W.F. Bottke et al. (University of Arizona Press, Tucson), 395–408.
• Bottke, W.F., Vokrouhlický, D., Rubincam, D.P. and Nesvorný, D. 2006, The Yarkovsky and YORP effects: Implications for asteroid dynamics, Ann. Rev. Earth Planet. Sci. 34, 157–191.
• Chesley, S.R., Ostro, S.J., Vokrouhlický, D., Čapek, D., Giorgini, J.D., Nolan, M.C., Margot, J-L., Hine, A.A., Benner, L.A.M. and Chamberlin, A.B. 2003, Direct detection of the Yarkovsky effect by radar ranging to asteroid 6489 Golevka, Science 302, 1739–1742.
• Ćuk, M. and Burns, J.A. 2005, Effects of thermal radiation on the dynamics of binary NEAs, Icarus 176, 418–431.
• Farinella, P. and Vokrouhlický, D. 1999, Semimajor axis mobility of asteroidal fragments, Science 283, 1507–1510.
• Lowry, S.C., Fitzsimmons, A., Pravec, P., Vokrouhlický, D., Taylor, P.A., Margot, J.L., Galád, A., Irwin, M., Irwin, J. and Kusnirák, P. 2007, Direct detection of the asteroidal YORP effect, Science 316, 272–274.
• Öpik, E.J. 1951, Collision probabilities with the planets and the distribution of interplanetary matter, Proc. Roy. Irish Acad. 54, 165–199.
• Paddack, S.J. 1969, Rotational bursting of small celestial bodies: Effects of radiation pressure, J. Geophys. Res. 74, 4379–4381.
• Pravec, P., et al. 2010, Asteroid pairs formed by rotational fission, Nature 466, 1085–1088.
• Radzievskii, V.V. 1952, The influence of anisotropically emitted sunlight on orbital motion of asteroids and meteorites, Astron. Zh. 29, 162–170 (in Russian).
• Rubincam, D.P. 1987, LAGEOS orbit decay due to infrared radiation from Earth, J. Geophys. Res. 92, 1287–1294.
• Rubincam, D.P. 1995, Asteroid orbit evolution due to thermal drag, J. Geophys. Res. 100, 1585–1594.
• Rubincam, D.P. 2000, Radiative spin-up and spin-down of small asteroids, Icarus 148, 2–11.
• Slivan, S.M. 2002, Spin vector alignment of Koronis family asteroids, Nature 419, 49–51.
• Taylor, P.A., Margot, J.L., Vokrouhlický, D., Scheeres, D.J., Pravec, P., Lowry, S.C., Fitzsimmons, A., Nolan, M.C., Ostro, S.J., Benner, L.A.M., Giorgini, J.D. and Magri, C. 2007, Spin rate of asteroid (54509) 2000 PH5 increasing due to the YORP effect, Science 316, 274–277.
• Vokrouhlický, D. and Farinella, P. 2000, Efficient delivery of meteorites to the Earth from a wide range of asteroid parent bodies, Nature 405, 606–608.
• Vokrouhlický, D., Nesvorný, D. and Bottke, W.F. 2003, The vector alignments of asteroid spins by thermal torques, Nature 425, 147–151.
• Yarkovsky, I.O. 1901, The density of luminiferous ether and the resistance it offers to motion (in Russian), Bryansk (available in the Appendix of M. Brož's thesis).
# See also
Poynting-Robertson effect
Celestial mechanics
http://wiki.stat.ucla.edu/socr/index.php?title=Probability_and_statistics_EBook&oldid=10188 | # Probability and statistics EBook
This is a General Statistics Curriculum E-Book, which includes Advanced-Placement (AP) materials.
This is an Internet-based probability and statistics E-Book. This EBook, and the materials, tools and demonstrations presented within it, may be very useful for an advanced-placement (AP) statistics educational curriculum. The E-Book was initially developed by the UCLA Statistics Online Computational Resource (SOCR); however, all statistics instructors, researchers and educators are encouraged to contribute to this effort and improve the content of these learning materials.
There are four novel features of this specific Statistics EBook: it is community-built, completely open-access (in terms of use and contributions), blends concepts with technology, and is multi-lingual.
This section describes the means of traversing, searching, discovering and utilizing the SOCR Statistics EBook resources in formal curricula or informal learning settings. There are problems included for each section.
## Chapter I: Introduction to Statistics
Although natural phenomena in real life are unpredictable, the designs of experiments are bound to generate data that varies because of intrinsic (internal to the system) or extrinsic (due to the ambient environment) effects. How many natural processes or phenomena in real life can we describe that have an exact mathematical closed-form description and are completely deterministic? How do we model the rest of the processes that are unpredictable and have random characteristics?
Statistics is the science of variation, randomness and chance. As such, statistics is different from other sciences, where the processes being studied obey exact deterministic mathematical laws. Statistics provides quantitative inference represented as long-time probability values, confidence or prediction intervals, odds, chances, etc., which may ultimately be subjected to varying interpretations. The phrase Uses and Abuses of Statistics refers to the notion that in some cases statistical results may be used as evidence to seemingly opposite theses. However, most of the time, common principles of logic allow us to disambiguate the obtained statistical inference.
Design of experiments is the blueprint for planning a study or experiment, performing the data collection protocol and controlling the study parameters for accuracy and consistency. Data, or information, is typically collected in regard to a specific process or phenomenon being studied to investigate the effects of some controlled variables (independent variables or predictors) on other observed measurements (responses or dependent variables). Both types of variables are associated with specific observational units (living beings, components, objects, materials, etc.)
All methods for data analysis, understanding or visualizing are based on models that often have compact analytical representations (e.g., formulas, symbolic equations, etc.) Models are used to study processes theoretically. Empirical validations of the utility of models are achieved by inputting data and executing tests of the models. This validation step may be done manually, by computing the model prediction or model inference from recorded measurements. This process may be possible by hand, but only for small numbers of observations (<10). In practice, we write (or use existent) algorithms and computer programs that automate these calculations for better efficiency, accuracy and consistency in applying models to larger datasets.
## Chapter II: Describing, Exploring, and Comparing Data
There are two important concepts in any data analysis - Population and Sample. Each of these may generate data of two major types - Quantitative or Qualitative measurements.
There are two important ways to describe a data set (sample from a population) - Graphs or Tables.
There are many different ways to display and graphically visualize data. These graphical techniques facilitate the understanding of the dataset and enable the selection of an appropriate statistical methodology for the analysis of the data.
There are three main features of populations (or sample data) that are always critical in understanding and interpreting their distributions - Center, Spread and Shape. The main measures of centrality are Mean, Median and Mode(s).
There are many measures of (population or sample) spread, e.g., the range, the variance, the standard deviation, mean absolute deviation, etc. These are used to assess the dispersion or variation in the population.
The shape of a distribution can usually be determined by looking at a histogram of a (representative) sample from that population; Frequency Plots, Dot Plots or Stem and Leaf Displays may be helpful.
Variables can be summarized using statistics - functions of data samples.
Graphical visualization and interrogation of data are critical components of any reliable method for statistical modeling, analysis and interpretation of data.
## Chapter III: Probability
Probability is important in many studies and disciplines because measurements, observations and findings are often influenced by variation. In addition, probability theory provides the theoretical groundwork for statistical inference.
Some fundamental concepts of probability theory include random events, sampling, types of probabilities, event manipulations and axioms of probability.
There are many important rules for computing probabilities of composite events. These include conditional probability, statistical independence, multiplication and addition rules, the law of total probability and the Bayesian rule.
Many experimental settings require probability computations for complex events. Such calculations may be carried out exactly, using theoretical models, or approximately, using estimation or simulations.
There are many useful counting principles (including permutations and combinations) to compute the number of ways that certain arrangements of objects can be formed. This allows counting-based estimation of complex events' probabilities.
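As a minimal illustration of these counting principles (a generic example, not part of the SOCR materials; Python 3.8+ is assumed for math.perm and math.comb):

```python
import math

# Permutations P(n, k) = n!/(n-k)! and combinations C(n, k) = n!/(k!(n-k)!)
print(math.perm(10, 3))   # 720 ordered arrangements of 3 objects chosen from 10
print(math.comb(52, 5))   # 2598960 possible five-card poker hands

# Counting-based probability: chance that at least two of 23 people share a birthday
p_all_distinct = math.perm(365, 23) / 365**23
print(1 - p_all_distinct)  # ~0.507, the classical birthday-problem answer
```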
## Chapter IV: Probability Distributions
There are two basic types of processes that we observe in nature - Discrete and Continuous. We begin by discussing several important discrete random processes, emphasizing the different distributions, expectations, variances and applications. In the next chapter, we will discuss their continuous counterparts. The complete list of all SOCR Distributions is available here.
To simplify the calculations of probabilities, we will define the concept of a random variable which will allow us to study uniformly various processes with the same mathematical and computational techniques.
The expectation and the variance for any discrete random variable or process are important measures of Centrality and Dispersion. This section also presents the definitions of some common population- or sample-based moments.
The Bernoulli and Binomial processes provide the simplest models for discrete random experiments.
Multinomial processes extend the Binomial experiments for the situation of multiple possible outcomes.
The Geometric, Hypergeometric, Negative Binomial, and Negative Multinomial distributions provide computational models for calculating probabilities for a large number of experiments and random variables. This section presents the theoretical foundations and the applications of each of these discrete distributions.
The Poisson distribution models many different discrete processes where the probability of the observed phenomenon is constant in time or space. The Poisson distribution may also be used as an approximation to the Binomial distribution.
## Chapter V: Normal Probability Distribution
The Normal Distribution is perhaps the most important model for studying quantitative phenomena in the natural and behavioral sciences - this is due to the Central Limit Theorem. Many numerical measurements (e.g., weight, time, etc.) can be well approximated by the normal distribution.
The Standard Normal Distribution is the simplest version (zero-mean, unit-standard-deviation) of the (General) Normal Distribution. Yet, it is perhaps the most frequently used version because many tables and computational resources are explicitly available for calculating probabilities.
In practice, the mechanisms underlying natural phenomena may be unknown, yet the use of the normal model can be theoretically justified in many situations to compute critical and probability values for various processes.
In addition to being able to compute probability (p) values, we often need to estimate the critical values of the Normal Distribution for a given p-value.
## Chapter VI: Relations Between Distributions
In this chapter, we will explore the relationships between different distributions. This knowledge will help us to compute difficult probabilities using reasonable approximations and identify appropriate probability models, graphical and statistical analysis tools for data interpretation. The complete list of all SOCR Distributions is available here and the Distributome applet provides an interactive graphical interface for exploring the relations between different distributions.
The exploration of the relations between different distributions begins with the study of the sampling distribution of the sample average. This will demonstrate the universally important role of the normal distribution.
Consider an event whose probability of being observed in each experiment is p. If we repeat the same experiment over and over, the ratio of the observed frequency of that event to the total number of repetitions converges towards p as the number of experiments increases. Why is that, and why is this important?
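A short simulation makes the convergence concrete. The sketch below is a generic illustration (with an arbitrarily chosen p = 0.3), not one of the SOCR applets:

```python
import numpy as np

rng = np.random.default_rng(0)
p = 0.3                                    # true probability of the event
hits = rng.random(100_000) < p             # repeated independent experiments
running_freq = np.cumsum(hits) / np.arange(1, hits.size + 1)

for n in (10, 100, 1_000, 10_000, 100_000):
    print(n, running_freq[n - 1])          # relative frequency settles toward p = 0.3
```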
Normal Distribution provides a valuable approximation to Binomial when the sample sizes are large and the probability of successes and failures is not close to zero.
Poisson provides an approximation to Binomial Distribution when the sample sizes are large and the probability of successes or failures is close to zero.
Binomial Distribution is much simpler to compute, compared to Hypergeometric, and can be used as an approximation when the population sizes are large (relative to the sample size) and the probability of successes is not close to zero.
The Poisson can be approximated fairly well by Normal Distribution when λ is large.
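All four approximation statements above are easy to probe numerically. The sketch below (arbitrary parameter values, scipy.stats assumed to be available) prints exact probabilities next to their approximations:

```python
from scipy import stats

# Normal approximation to Binomial: n large, p not extreme (continuity-corrected)
n, p = 400, 0.4
print(stats.binom.cdf(175, n, p),
      stats.norm.cdf(175.5, loc=n * p, scale=(n * p * (1 - p)) ** 0.5))

# Poisson approximation to Binomial: n large, p small
n, p = 2000, 0.001
print(stats.binom.pmf(3, n, p), stats.poisson.pmf(3, mu=n * p))

# Binomial approximation to Hypergeometric: population much larger than the sample
M, K, N = 100_000, 40_000, 50          # population size, successes in it, sample size
print(stats.hypergeom.pmf(20, M, K, N), stats.binom.pmf(20, N, K / M))

# Normal approximation to Poisson: lambda large
lam = 900
print(stats.poisson.cdf(930, lam),
      stats.norm.cdf(930.5, loc=lam, scale=lam ** 0.5))
```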
## Chapter VII: Point and Interval Estimates
Estimation of population parameters is critical in many applications. Estimation is most frequently carried out in terms of point estimates or interval (range) estimates for the population parameters of interest.
There are many ways to obtain point (value) estimates of various population parameters of interest, using observed data from the specific process we study. The method of moments and the maximum likelihood estimation are among the most popular ones frequently used in practice.
This section discusses how to find point and interval estimates when the sample-sizes are large.
Next, we discuss point and interval estimates when the sample-sizes are small. Naturally, the point estimates are less precise and the interval estimates produce wider intervals, compared to the case of large-samples.
The Student's T-Distribution arises in the problem of estimating the mean of a normally distributed population when the sample size is small and the population variance is unknown.
The Normal Distribution is an appropriate model for proportions when the sample size is large enough. In this section, we demonstrate how to obtain point and interval estimates for a population proportion.
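For concreteness, here is a minimal sketch of the two kinds of interval estimate just discussed, using made-up toy data (not a SOCR dataset):

```python
import numpy as np
from scipy import stats

# Small-sample CI for a mean: x_bar +/- t_{alpha/2, n-1} * s / sqrt(n)
x = np.array([9.2, 10.1, 9.8, 10.4, 9.5, 10.0, 9.9, 10.3])
n, xbar, s = x.size, x.mean(), x.std(ddof=1)
t_crit = stats.t.ppf(0.975, df=n - 1)
print(xbar - t_crit * s / n ** 0.5, xbar + t_crit * s / n ** 0.5)

# Large-sample CI for a proportion: p_hat +/- z_{alpha/2} * sqrt(p_hat(1-p_hat)/n)
successes, n = 340, 500
p_hat = successes / n
half = stats.norm.ppf(0.975) * (p_hat * (1 - p_hat) / n) ** 0.5
print(p_hat - half, p_hat + half)
```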
In many processes and experiments, controlling the amount of variance is of critical importance. Thus the ability to assess variation, using point and interval estimates, facilitates our ability to make inference, revise manufacturing protocols, improve clinical trials, etc.
This activity demonstrates the usage and functionality of SOCR General Confidence Interval Applet. This applet is complementary to the SOCR Simple Confidence Interval Applet and its corresponding activity.
## Chapter VIII: Hypothesis Testing
Hypothesis Testing is a statistical technique for decision making regarding populations or processes based on experimental data. It quantitatively answers the possibility that chance alone might be responsible for the observed discrepancy between a theoretical model and the empirical observations.
In this section, we define the core terminology necessary to discuss Hypothesis Testing (Null and Alternative Hypotheses, Type I and II errors, Sensitivity, Specificity, Statistical Power, etc.)
As we already saw how to construct point and interval estimates for the population mean in the large sample case, we now show how to do hypothesis testing in the same situation.
We continue with the discussion on inference for the population mean for small samples.
When the sample size is large, the sampling distribution of the sample proportion $\hat{p}$ is approximately Normal, by the CLT. This helps us formulate hypothesis testing protocols and compute the appropriate statistics and p-values to assess significance.
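A minimal sketch of such a test with made-up counts (the Normal approximation is the working assumption here, as stated above):

```python
from scipy import stats

# H0: p = 0.5 vs H1: p != 0.5, given 560 successes in 1000 trials
successes, n, p0 = 560, 1000, 0.5
p_hat = successes / n
z = (p_hat - p0) / (p0 * (1 - p0) / n) ** 0.5  # ~N(0,1) under H0 by the CLT
p_value = 2 * stats.norm.sf(abs(z))            # two-sided p-value
print(z, p_value)                              # z ~ 3.79, p ~ 0.00015: reject H0
```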
The significance testing for the variation or the standard deviation of a process, a natural phenomenon or an experiment is of paramount importance in many fields. This chapter provides the details for formulating testable hypotheses, computation, and inference on assessing variation.
## Chapter IX: Inferences From Two Samples
In this chapter, we continue our pursuit and study of significance testing in the case of having two populations. This expands the possible applications of one-sample hypothesis testing we saw in the previous chapter.
We need to clearly identify whether samples we compare are Dependent or Independent in all study designs. In this section, we discuss one specific dependent-samples case - Paired Samples.
Independent Samples designs refer to experiments or observations where all measurements are individually independent from each other within their groups and the groups are independent. In this section, we discuss inference based on independent samples.
In this section, we compare variances (or standard deviations) of two populations using randomly sampled data.
This section presents the significance testing and inference on equality of proportions from two independent populations.
## Chapter X: Correlation and Regression
Many scientific applications involve the analysis of relationships between two or more variables involved in a process of interest. We begin with the simplest of all situations, where Bivariate Data (X and Y) are measured for a process and we are interested in determining the association, relation or an appropriate model for these observations (e.g., fitting a straight line to the pairs of (X,Y) data).
The Correlation between X and Y represents the first bivariate model of association which may be used to make predictions.
We are now ready to discuss the modeling of linear relations between two variables using Regression Analysis. This section demonstrates this methodology for the SOCR California Earthquake dataset.
In this section, we discuss point and interval estimates about the slope of linear models.
Now, we are interested in determining linear regressions and multilinear models of the relationships between one dependent variable Y and many independent variables $X_i$.
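As a minimal sketch of fitting such a multilinear model (on synthetic data, not a SOCR dataset), ordinary least squares reduces to one call to numpy:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
X = rng.normal(size=(n, 2))                  # two independent variables X1, X2
y = 1.5 + 2.0 * X[:, 0] - 0.7 * X[:, 1] + rng.normal(scale=0.5, size=n)

A = np.column_stack([np.ones(n), X])         # design matrix with intercept column
beta, *_ = np.linalg.lstsq(A, y, rcond=None) # ordinary least-squares fit
print(beta)                                  # close to the generating [1.5, 2.0, -0.7]
```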
## Chapter XI: Analysis of Variance (ANOVA)
We now expand our inference methods to study and compare k independent samples. In this case, we will be decomposing the entire variation in the data into independent components.
Now we focus on decomposing the variance of a dataset into (independent/orthogonal) components when we have two (grouping) factors. This procedure is called Two-Way Analysis of Variance.
## Chapter XII: Non-Parametric Inference
To be valid, many statistical methods impose (parametric) requirements about the format, parameters and distributions of the data to be analyzed. For instance, the Independent T-Test requires the distributions of the two samples to be Normal. Non-Parametric (distribution-free) statistical methods impose no such requirements and are often useful in practice, although they are generally less powerful than their parametric counterparts.
The Sign Test and the Wilcoxon Signed Rank Test are the simplest non-parametric tests which are also alternatives to the One-Sample and Paired T-Test. These tests are applicable for paired designs where the data is not required to be normally distributed.
The Wilcoxon-Mann-Whitney (WMW) Test (also known as Mann-Whitney U Test, Mann-Whitney-Wilcoxon Test, or Wilcoxon rank-sum Test) is a non-parametric test for assessing whether two samples come from the same distribution.
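For illustration, the WMW test is available directly in scipy.stats; the two samples below are made-up toy data, not from the EBook:

```python
from scipy import stats

# H0: the two independent samples come from the same distribution
a = [1.83, 0.50, 1.62, 2.48, 1.68, 1.88, 1.55, 3.06, 1.30]
b = [0.88, 0.65, 0.60, 2.05, 1.06, 1.29, 1.06, 3.14, 1.29]
u_stat, p_value = stats.mannwhitneyu(a, b, alternative="two-sided")
print(u_stat, p_value)   # large p-value: no evidence the distributions differ
```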
Depending upon whether the samples are dependent or independent, we use different statistical tests.
We now extend the multi-sample inference which we discussed in the ANOVA section, to the situation where the ANOVA assumptions are invalid.
There are several tests for variance equality in k samples. These tests are commonly known as tests for Homogeneity of Variances.
## Chapter XIII: Multinomial Experiments and Contingency Tables
The Chi-Square Test is used to test if a data sample comes from a population with specific characteristics.
The Chi-Square Test may also be used to test for independence (or association) between two variables.
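A minimal sketch of the independence test on a made-up 2x3 contingency table (scipy.stats assumed available):

```python
import numpy as np
from scipy import stats

# Observed counts: rows = two groups, columns = three response categories
observed = np.array([[30, 45, 25],
                     [35, 30, 35]])
chi2, p_value, dof, expected = stats.chi2_contingency(observed)
print(chi2, p_value, dof)
print(expected)          # counts expected if rows and columns were independent
```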
## Chapter XIV:Bayesian Statistics
This section will establish the groundwork for Bayesian Statistics. Probability, Random Variables, Means, Variances, and the Bayes’ Theorem will all be discussed.
In this section, we will provide the basic framework for Bayesian statistical inference. Generally, we take some prior beliefs about some hypothesis and then modify these prior beliefs, based on some data that we collect, in order to arrive at posterior beliefs. Another way to think about Bayesian Inference is that we are using new evidence or observations to update some probability that a hypothesis is true.
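As a concrete, simplified illustration of this updating (the Beta-Binomial model is a standard conjugate example; the prior and data values below are arbitrary choices, not taken from the EBook):

```python
from scipy import stats

# Prior belief about an unknown success probability: Beta(a=2, b=2)
a, b = 2, 2
# New evidence: 7 successes and 3 failures turn the prior into the posterior
successes, failures = 7, 3
posterior = stats.beta(a + successes, b + failures)  # Beta(9, 5)
print(posterior.mean())                              # posterior mean ~ 0.643
print(posterior.interval(0.95))                      # central 95% credible interval
```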
This section explains the binomial, Poisson, and uniform distributions in terms of Bayesian Inference.
This section will discuss both the classical and the Bayesian approaches to hypothesis testing.
This section discusses two sample problems, with variances unknown, both equal and unequal. The Behrens-Fisher controversy is also discussed.
Hierarchical linear models are statistical models of parameters that vary at more than a single level. These models are seen as generalizations of linear models and may extend to non-linear models. Any underlying correlations in the particular model must be represented in the analysis for correct inference to be drawn.
Topics covered will include Monte Carlo Methods, Markov Chains, the EM Algorithm, and the Gibbs Sampler.
http://mathoverflow.net/questions/74943?sort=oldest | ## Constructible sheaves and dg-modules
Let $M$ be a smooth manifold, $A_M$ the de Rham algebra of $M$, $D_{A_M}$ the derived category of the category of differential graded (dg) $A_M$-modules and `$D^+_c(M)$` the bounded below constructible derived category of sheaves of real vector spaces on $M$. The category `$D^+_c(M)$` knows a lot about the topology of $M$; for example, it allows one to compute the real cohomology of $M$ together with all Massey products. However, unsurprisingly, this category is also quite complicated. Informally, the question I'd like to ask is: can one describe at least some pieces of `$D^+_c(M)$` in terms of dg-modules (which is something much more manageable)?
In "Equivariant sheaves and functors", 12.3, J Bernstein and V. Lunts construct two mutually inverse equivalences $\gamma_M:\langle \mathbb{R}_M\rangle\to \langle A_M\rangle$ and $\mathcal{L}_M: \langle A_M\rangle\to \langle \mathbb{R}_M\rangle$. Here $\langle\cdot\rangle$ stands for the full triangulated subcategory generated by $\cdot$, $\mathbb{R}_M$ is the constant sheaf on $M$ and $\langle \mathbb{R}_M\rangle$ and $\langle A_M\rangle$ are subcategories of `$D^+_c(M)$` and $D_{A_M}$ respectively.
The functors are defined as follows: $\gamma_M$ takes a sheaf, multiplies it by the de Rham complex and then takes the global sections; $\mathcal{L}_M$ takes a module, replaces it with a $\mathcal{K}$-projective resolution and multiplies the result by the de Rham complex. (A complex of dg $A_M$-modules is $\mathcal{K}$-projective, if $Hom$ from it to an acyclic complex is acyclic.)
Notice that $\gamma_M$ is in fact defined on the whole of `$D^+(M)$`. I would like to ask: is there a subcategory $D$ of `$D^+_c(M)$` larger than the one generated by the constant sheaf such that $\gamma_M$ restricted to $D$ is fully faithful? In particular, if $i:N\subset M$ is a submanifold and $M,N,M\setminus N$ are all simply connected, what happens if we take `$D=\langle\mathbb{R}_M, i_*\mathbb{R}_N\rangle$`?
Here is a related result. Suppose we fix a stratification of $M$ with all strata and their closures simply-connected. Consider the subcategory `$D\subset D^+_c(M)$` formed by complexes which are constructible with respect to the chosen stratification and let `$I^*$` be an injective resolution of the direct sum of the constant sheaves on the strata. Then, due to a result by B. Keller (Deriving dg-algebras, Ann ENS, 1994, no 1, 63-102) by taking `$C^*\to Hom(C^*,I^*)$` we get a fully faithful functor from $D$ to `$D_{End(I^*)}$` where $End(I^*)$ is the (global) endomorphism algebra of `$I^*$`. However, this is not exactly what I'm looking for since the endomorphism algebra is still quite difficult to describe explicitly in the example I'm interested in.
I am looking through the tags. This is the only use of 'dg-modules'. By contrast 'dg-algebras' is somewhat frequent. Due to my ignorance I am not sure how big the difference actually is. Naively, it'd seem reasonable to me to have the similar but more prominent tag 'dg-algebras' instaed of this unique usage even if it'd be somewhat less precise. But I might well be wrong. Could you please let me know your opinion related to this and perhaps even retag. (If you are fine with a change but do not want to reactivate I would eventually suggest a tag-merge). – quid Feb 11 at 12:15
quid -- I agree that dg-algebras would be a more appropriate tag. – algori Feb 22 at 13:48
Thank you for the reply and the retagging! – quid Feb 23 at 17:23
## 1 Answer
This is not a helpful answer to your main question, but merely a negative answer to your "In particular..what happens.." question. But, the general idea may be helpful in figuring out what more precise things (weaker than Keller's result) would be reasonable to ask for.
$\newcommand{\RR}{\mathbb{R}}\newcommand{\RHom}{\mathrm{RHom}}\newcommand{\pt}{\mathrm{pt}}\DeclareMathOperator{\deg}{deg}\newcommand{\RGamma}{R\Gamma}\renewcommand{\mod}{\text{-mod}}\newcommand{\C}[1]{C^\bullet(#1)}$
Since I'll lapse into this notation anyway, let me make it explicit: Identify $\gamma_M$ with the functor $$\RGamma(M, -)\colon D \to \C{M}\mod$$ where by $\mod$ I'm implicitly working in a dg-setting.
At some point below, I'll assume that $N$ is compact oriented of dimension $n$. (This is not strictly necessary, but allows me to avoid extra notation.)
Claim: Suppose that $M$ is simply connected, $\dim M \geq 2$, and that $i \colon N \hookrightarrow M$ is not the identity. Then, the functor $\gamma_M \colon D= \langle \RR_M, i_* \RR_N \rangle \to D_{A_M}$ is not fully faithful.
## Fuzzy Remark:
Before sketching an argument, here's a "philosophical" remark about why things will go wrong:
The category $D_c(M)$ feels the topology (or maybe even geometry) of $M$. In particular, it has a Proper Base-change Theorem saying something like $q^* p_! = (p')_! (q')^*$ where the maps take part in a fiber-square of (actual) topological spaces. The category $D_{A_M} = \C{M}\mod$ feels only the homotopy theory of $M$. You should expect a Base-Change Theorem in this context, but now with a fiber-square of homotopy types -- more correctly, a homotopy fiber-square.
A simpler sort of 'no go' result that this heuristic implies: Suppose you had wanted to include two sub-manifolds $i_k\colon N_k \hookrightarrow M$, $k=1,2$. The $\C{M}\mod$ images would be unable to tell them apart if the $i_k$ were homotopic -- e.g., the inclusions of any two points -- while the constructible theory would certainly care whether the two points were the same or not.
## Sketch of Claim:
To see this, note that $$\RHom_{D_c(M)}(i_* \RR_N, i_* \RR_N) = \RHom_{D_c(N)}(i^* i_* \RR_N, \RR_N) = \RHom_{D_c(N)}(\RR_N, \RR_N) = \C{N}$$ while
Sub-Claim: Letting $\stackrel{h}\times_M$ denote the homotopy fiber product, $$\RHom_{A_M}(\gamma_M(i_* \RR_N), \gamma_M(i_* \RR_N)) =\RHom_{\C{M}}\left(\C{N}, \C{N}\right) \approx C_{\bullet}\left(N \stackrel{h}\times_M N\right)[-n]$$
Assuming the sub-claim: to conclude it suffices to produce homology classes on $\Omega M$ in arbitrarily positive degrees, whose images under the composite $$H(\Omega M) \to H_*(N \stackrel{h}\times_M N) \to H_*(\Omega (M/N))$$ are non-zero. I think the following should do this upon filling in the details: Equip $M$ with a base-point in $N$, take some non-zero element of $\pi_i M$, with $i \geq 2$, that remains non-zero in $\pi_i (M/N)$. Use it to produce an $(i-1)$-homology class on $\Omega M$, and then take its Pontrjagin products.
Sketch of sub-claim: Underlying the Eilenberg-Moore spectral sequence is the statement that, letting $\boxtimes$ denote derived co-tensor of co-modules over a co-algebra, $$C_\bullet(N) \boxtimes_{C_\bullet(M)} C_\bullet(N) \approx C_\bullet(N \stackrel{h}\times_M N)$$ Poincare duality gives an equivalence $\C{N} \approx C_\bullet(N)[-n]$ of $\C{M}$-modules (or $C_\bullet(M)$-comodules). It remains to identify $$\RHom_{\C{M}}(\C{N}, \C{N}) \approx C_\bullet(N) \boxtimes_{C_\bullet(M)} \C{N}$$ by term-wise identifying the co-simplicial cobar constructions on both sides.
Example: Note that if $N = \pt$ the sub-claim is a familiar statement in Koszul duality: That for $M$ simply-connected $\RHom_{\C{M}}(\RR,\RR) \approx C_\bullet(\Omega M)$. In certain cases, e.g. $M = S^{2k+1}$, you can just see it. As an aside: $C_\bullet(\Omega M)\mod$ knows about all locally-constant things, not just local systems finitely-buildable from the trivial one. (But will run into the same issues if you try to include submanifolds without explicitly adding in extra generators for the strata.)
Remark: Though $\gamma_M$ is not fully-faithful here, it does get the maps into/out of $\RR_M$ right. Logic as above shows that $$\RHom_D(\RR_M, i_* \RR_N) = \C{N} = \RHom_{\C{M}}(\C{M}, \C{N})$$ and then Verdier Duality (for $D_c(M)$) + something like Grothendieck Duality (for $\C{M}$) give the other direction as well.
Thanks, Anatoly. I think you are right and my initial guess was way too optimistic. – algori Sep 10 2011 at 21:51
http://mathoverflow.net/revisions/14486/list | ## Return to Question
Post Made Community Wiki by Harry Gindi
8 oops
## Speculation and background
Let $\mathcal{C}:=CRing^{op}_{Zariski}$, the affine Zariski site. Consider the category of sheaves, $Sh(\mathcal{C})$.
According to nLab, schemes are those sheaves that "have a cover by Zariski-open immersions of affine schemes in the category of presheaves over Aff."
In SGA 4.1.ii.5 Grothendieck defines a further topology on $Sh(\mathcal{C})$ using "familles couvrantes", which are families of morphisms `$\{U_i \to X\}$` such that the induced map $\coprod U_i \to X$ is an epimorphism. Further, he gives another definition. A family of morphisms `$\{U_i \to X\}$` is called "bicouvrante" if it is a "famille couvrante" and the map $\coprod U_i \times_X \coprod U_i \to \coprod U_i$ is an epimorphism. [Note: This is given for a general category of sheaves on a site, not sheaves on our affine Zariski site.]
Speculation: I assume that the nLab definition means that we have a (bi)covering family of open immersions of representables, but as it stands, we do not have a sufficiently good definition of an open immersion, or equivalently, open subfunctor.
It seems like the notion of a bicovering family is very important, because this is precisely the condition we require on algebraic spaces (if we replace our covering morphisms with etale surjective morphisms in a smart way and require that our cover be comprised of representables).
## Questions
What does "open immersion" mean precisely in categorical langauge? How do we define a scheme precisely in our language of sheaves and grothendieck topologies? Preferably, this answer should not depend on our base site. The notion of an open immersion should be a notion that we have in any category of sheaves on any site.
Eisenbud and Harris fail to answer this question for the following reason: they rely on classical scheme theory for their definition of an open subfunctor (same thing as an open immersion). If we wish to construct our theory of schemes with no logical prerequisites, this is circular.
Once we have this definition, do we require our covering family of open immersions to be a "covering family" or a "bicovering family"?
Further, how can we exhibit, in precise functor of points language, the definition of an algebraic space?
This last question should be a natural consequence of the previous questions provided they are answered in sufficient generality.
7 deleted 1201 characters in body; edited title; added 228 characters in body; added 58 characters in body
# Precise definition of a scheme (Key question: How to define an open subfunctor without resorting to classical scheme theory)

## Speculation and background

Speculation: I assume that the nLab definition means that we have a (bi)covering family of open immersions of representables, but as it stands, we do not have a sufficiently good definition of an open immersion, or equivalently, open subfunctor.

It seems like the notion of a bicovering family is very important, because this is precisely the condition we require on algebraic spaces (if we replace our covering morphisms with etale surjective morphisms in a smart way and require that our cover be comprised of representables).

## Questions

What does "open immersion" mean precisely in categorical language? How do we define a scheme precisely in our language of sheaves and Grothendieck topologies? Preferably, this answer should not depend on our base site. The notion of an open immersion should be a notion that we have in any category of sheaves on any site.

Eisenbud and Harris fail to answer this question for the following reason: they rely on classical scheme theory for their definition of an open subfunctor (same thing as an open immersion). If we wish to construct our theory of schemes with no logical prerequisites, this is circular.

Once we have this definition, do we require our covering family of open immersions to be a "covering family" or a "bicovering family"?

Further, how can we exhibit, in precise functor of points language, the definition of an algebraic space?

This last question should be a natural consequence of the previous questions provided they are answered in sufficient generality.
6 deleted 1 characters in body
Let $\mathcal{C}:=CRing^{op}_{Zariski}$, the affine Zariski site. Consider the category of sheaves, $Sh(\mathcal{C})$.
According to nLab, schemes are those sheaves that "have a cover by Zariski-open immersions of affine schemes in the category of presheaves over Aff."
In SGA 4.1.ii.5 Grothendieck defines a further topology on $Sh(\mathcal{C})$ using "familles couvrantes", which are families of morphisms `$\{U_i \to X\}$` such that the induced map $\coprod U_i \to X$ is an epimorphism. Further, he gives another definition. A family of morphisms `$\{U_i \to X\}$` is called "bicouvrante" if it is a "famille couvrante" and the map $\coprod U_i \times_X \coprod U_i \to \coprod U_i$ is an epimorphism. [Note: This is given for a general category of sheaves on a site, not sheaves on our affine Zariski site.]
In the usual definition of a scheme, it seems like we require the sheaves to have a "famille couvrante" given by representables, and we require that a separated scheme have a "famille bicouvrante" by representables (it might be the case here that we actually require "familles bicouvrantes" regardless, and for separated schemes we require that we have a "famille bicouvrante" over $Spec(\mathbb{Z})$). However, this still doesn't seem totally right, because we don't just want morphisms from affines, we specifically want "open immersions". I suspect that we further want our "familles (bi)couvrantes" to be families of monomorphisms of representables, but I'm not sure of this.
Then the questions:
What does "open immersion" mean precisely in categorical langauge (with respect to our familles (bi)couvrantes)? How do we define a scheme precisely in our language of sheaves and grothendieck topologies?
Is the category of Schemes itself a topos on the category of sheaves equipped with a topology defined by "familles (bi)couvrantes" with appropriate restrictions (affine monomorphisms maybe)? [Yes, I know that usually the category of sheaves on a site "extends" the category, but this topology is not at all subcanonical (with respect to $Sh(\mathcal{C})$).]
Further, how can we exhibit, in precise functor of points language, the definition of an algebraic space? (Wikipedia notes that we can just as well require that the equivalence relation is given by affines rather than general schemes. I don't see how this can be true unless we require that algebraic spaces have an equivalence relation given by separated schemes, but please, enlighten me.)
My guess on this last one: It seems like taking sheaves on the site $CRing^{op}_{\acute{e}tale}$, applying the same criteria we used for schemes, and further requiring that all covering families have the property that they are "familles bicouvrantes" over $Spec(\mathbb{Z})$ is enough. (This is based on the assumption from Wikipedia.)
I hope I did enough research and haven't made too many mistakes.
Edit: It appears that having affine diagonal is weaker than being separated (oh well?). Then this leads to a further question: If we require that algebraic stacks have affine diagonal, do we still have a respectable theory of algebraic stacks?
Edit2: According to a footnote in SGA, being "couvrant" and "bicouvrant" are stable under base change, so the requirements over $Spec(\mathbb{Z})$ are unnecessary.
http://mathhelpforum.com/differential-equations/169176-problem-integrating-factor.html | Thread:
1. Problem with Integrating Factor
I'm trying to solve the differential equation $\frac{dy}{dx} + (2-3x^2)x^{-3}y = 1$. I've integrated $(2-3x^2)x^{-3}$ and taken the exponential of this to give an integrating factor of $x^{-3}e^{x^2}$. However, when I differentiate the integrating factor multiplied by $y$, I don't end up with the left-hand side of the equation. I must have gone wrong somewhere in my working; however, I've checked it several times!
2. Originally Posted by StaryNight
I'm trying to solve the differential equation $\frac{dy}{dx} + (2-3x^2)x^{-3}y = 1$. I've integrated $(2-3x^2)x^{-3}$ and taken the exponential of this to give an integrating factor of $x^{-3}e^{x^2}$. However, when I differentiate the integrating factor multiplied by $y$, I don't end up with the left-hand side of the equation. I must have gone wrong somewhere in my working; however, I've checked it several times!
I think you'll find that in the integrating factor it's $e^{-\frac{1}{x^2}}$ not $e^{x^2}$. Note: $-\frac{1}{x^2} \neq x^2$ .... it seems you're confusing $-\frac{1}{x^2}$ with $\frac{1}{x^{-2}}$ ....
3. Originally Posted by StaryNight
I'm trying to solve the differential equation $\frac{dy}{dx} + (2-3x^2)x^{-3}y = 1$. I've integrated $(2-3x^2)x^{-3}$ and taken the exponential of this to give an integrating factor of $x^{-3}e^{x^2}$. However, when I differentiate the integrating factor multiplied by $y$, I don't end up with the left-hand side of the equation. I must have gone wrong somewhere in my working; however, I've checked it several times!
$\displaystyle \int \frac{2-3x^2}{x^3}\,dx=-\frac{1}{x^2}-3\ln x+C$
Integrating Factor = $\displaystyle e^{-\frac{1}{x^2}-3\ln x}=e^{-\frac{1}{x^2}}e^{-3\ln x}=x^{-3}e^{-\frac{1}{x^2}}$
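For completeness (this step is not in the thread, but follows directly from the factor just found): the equation becomes $\displaystyle \frac{d}{dx}\left(x^{-3}e^{-\frac{1}{x^2}}y\right)=x^{-3}e^{-\frac{1}{x^2}}$, and the substitution $u=-\frac{1}{x^2}$, $du=\frac{2}{x^3}\,dx$ integrates the right-hand side:

$\displaystyle x^{-3}e^{-\frac{1}{x^2}}\,y=\frac{1}{2}e^{-\frac{1}{x^2}}+C \qquad\Rightarrow\qquad y=\frac{x^3}{2}+Cx^3e^{\frac{1}{x^2}}$

Differentiating this $y$ and substituting it back into the original equation confirms the solution.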
4. We can rewrite the equation in the form:
$Pdx+Qdy=0\;,\quad P=(1-3x^2)y\;,\;Q=x^3$
and:
$\dfrac{1}{Q}\left(\dfrac{{\partial P}}{{\partial y}}-\dfrac{{\partial Q}}{{\partial x}}\right)=\ldots=\dfrac{1}{x^3}-\dfrac{6}{x}=F(x)$
so, an integrating factor is:
$\mu(x)=e^{\int F(x)\;dx}$
Fernando Revilla
Edited: Sorry, I misread the OP's equation.
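Even with the misread $P$ and $Q$, the recipe in the post above is internally consistent. A quick sympy sketch of my own (not part of the thread) verifying that this $\mu$ makes the form exact:

```python
import sympy as sp

x, y = sp.symbols('x y')
P = (1 - 3*x**2)*y   # coefficients as (mis)read in the post above
Q = x**3

F = sp.simplify((sp.diff(P, y) - sp.diff(Q, x))/Q)   # 1/x**3 - 6/x
mu = sp.exp(sp.integrate(F, x))                      # integrating factor mu(x)

# mu*P dx + mu*Q dy is exact iff d(mu*P)/dy == d(mu*Q)/dx:
print(sp.simplify(sp.diff(mu*P, y) - sp.diff(mu*Q, x)))   # 0
```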
http://www.physicsforums.com/showthread.php?s=eb5f721862ebf984eca45b3331b57b52&p=4173662 | Physics Forums
## Symmetry and conservation law
This is my first post, so hello everybody. I don't have a university background and English is not my native language, so please forgive me if what I'm writing is hard to understand sometimes. I'll do my best to be clear.
I've always loved physics in general, but recently came to the conclusion that in order to really understand it I have to get into the details and learn the formalism. So, I started with classical mechanics (where else could I start?). I'm making my way through the calculus of variations and the Lagrangian formulation, which has been a great experience for me so far. However, when trying to get my head around the conservation laws and how they arise from symmetries, I got stuck at one point.
Let me focus on conservation of (linear) momentum, which is the consequence of the homogeneity of space. This is how I understand it. Our real-life experience is that no part of the physical space is special, meaning that the laws of physics (for isolated system) are everywhere the same. We can move the system one meter to the right, start it with the same boundary conditions (positions and velocities) and it will perform the same motion. This makes sense.
What follows is the tricky part. All papers deriving the conservation law from homogeneity of space say more or less the following:
"Because space is homogenous the (infinitesimal) translation of coordinates will not change the Lagrangian."
The problem is that I can't convince myself this is necessarily the case! To me the only conclusion from the homogeneity of space is that:
"The new Lagrangian (being the result of translation) is such, that the equations of motion (expressed in new coordinates) are the same as the original ones"
In other words, the translated system trajectory is the same as that of the system in its original location.
These two statements don't seem to be equivalent to me. Of course, if the Lagrangian is unchanged the trajectory won't change either. However, the opposite is not true. It seems to me that I can easily come up with a different Lagrangian which results in the same equations of motion. For example, one can add a term which is the time derivative of a function (of positions and velocities) to the Lagrangian:
$\mathcal{L}^{'} = \mathcal{L} + \frac{d}{dt} F(q,\dot{q})$
The change in the action over the trajectory due to such a Lagrangian change is:
$\delta{S} = \delta \int_{t1}^{t2} \mathcal{L}dt = \int_{t1}^{t2} \frac{d}{dt} F(q,\dot{q}) dt = F(q,\dot{q}) \bigg|_{t1}^{t2}$
This is constant and only depends on the endpoints (and not the trajectory itself). So, the trajectory with such a new Lagrangian will be the same as the original (because it minimizes the action - adding a constant does not change the condition for stationarity).
So, the question is: isn't the conclusion (arising from the homogeneity of space) that the Lagrangian does not change under translation too strong? All we observe in nature is that the motion of the translated system is the same. However, we can get the same motion with a different Lagrangian! Where am I making a mistake?
Recognitions: Science Advisor

You lost the $\delta$:
$$\delta S' = \delta S + \delta F(q,\dot{q})\bigg|_{t_1}^{t_2}$$
And $\delta F$ isn't necessarily trivial under variations of the trajectory. At the very least, $\dot{q}(t_2)$ is not fixed by the boundary conditions.

But anyway, the statement "the Lagrangian is invariant under translation" means that if you shift the Lagrangian itself and the solution, the shifted solution is a solution to the shifted Lagrangian. In other words, yes, the equations of motion remain the same. This is all made far more explicit if you look at where it comes from, namely Noether's theorem.

Edit: Scratch that. I should be more careful when I answer questions this late. You don't shift the Lagrangian, just the variables. For example, if you have two masses on a spring, the potential energy depends only on the difference of positions. So if you shift both coordinate variables by the same amount, the total Lagrangian remains exactly the same. In contrast, if you have an $mgy$ term for gravitational potential, shifting $y$ results in a change in the Lagrangian. But that makes sense: vertical momentum is not conserved, because we are ignoring the momentum transfer to the Earth when we write the potential this way.
Thanks for your answer, K^2! Now that I'm looking at my post again I realize I didn't formulate it properly. Speaking about the time derivative of the function $F$ which, when added to the Lagrangian, does not change the equations of motion, I should have written $F(q,t)$ instead of $F(q,\dot{q})$. In that case $\frac{d}{dt}F(q,t)$ will vanish when varying the Lagrangian (because the time instants and the positions of both ends of the trajectory are fixed), so the endpoint velocity does not matter. Each Lagrangian constructed this way is equivalent (it results in the same equations of motion and trajectory). When deriving the conservation laws, why do we assume that translation does not change the Lagrangian at all, then? I intuitively feel that this might have something to do with the freedom of choosing the inertial frame, but my intuition is not trained yet, so it's just a guess.
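This point is easy to machine-check. A small sympy sketch of my own (not from the thread), using a concrete $F(q,t)$ chosen arbitrarily: adding $\frac{d}{dt}F(q,t)$ to a Lagrangian leaves the Euler-Lagrange equation untouched.

```python
import sympy as sp

t, m = sp.symbols('t m', positive=True)
q = sp.Function('q')(t)
qd = q.diff(t)

F = q**2*sp.sin(t)            # a concrete F(q, t), picked arbitrarily
L1 = m*qd**2/2                # free particle
L2 = L1 + F.diff(t)           # same Lagrangian plus a total time derivative

def euler_lagrange(L):
    return sp.simplify(L.diff(qd).diff(t) - L.diff(q))

print(euler_lagrange(L1))                                    # m*q''
print(sp.simplify(euler_lagrange(L2) - euler_lagrange(L1)))  # 0: same equation of motion
```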
Recognitions: Science Advisor
Like I said in the edit, it's the translation of coordinates that doesn't change the Lagrangian. I suppose a concrete example would be better. Say I have two masses, $m_1$ and $m_2$, at locations $x_1$ and $x_2$, connected by a harmonic potential.
$$L(x_1,x_2,\dot{x}_1,\dot{x}_2) = \frac{1}{2}(m_1 \dot{x}_1^2+m_2 \dot{x}_2^2) - \frac{k}{2}(x_2 - x_1)^2$$
Now, let's say I introduced new coordinates, $x'_1=x_1+c$, $x'_2=x_2+c$.
$$L(x'_1,x'_2,\dot{x}'_1,\dot{x}'_2) = \frac{1}{2}(m_1 \dot{x}'^2_1+m_2 \dot{x}'^2_2) - \frac{k}{2}(x'_2 - x'_1)^2 = \frac{1}{2}(m_1 \dot{x}_1^2+m_2 \dot{x}_2^2) - \frac{k}{2}(x_2 + c - x_1 - c)^2 = L(x_1,x_2,\dot{x}_1,\dot{x}_2)$$
So the Lagrangian is exactly the same after the coordinates got shifted. Not just the total action, but the actual value of the Lagrangian at every point.
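A sympy sketch of my own (not from the thread) making the associated conservation law explicit for this example: summing the two Euler-Lagrange expressions cancels the spring terms, so the equations of motion force $\frac{d}{dt}(m_1\dot{x}_1+m_2\dot{x}_2)=0$.

```python
import sympy as sp

t, m1, m2, k = sp.symbols('t m1 m2 k', positive=True)
x1 = sp.Function('x1')(t)
x2 = sp.Function('x2')(t)

L = (m1*x1.diff(t)**2 + m2*x2.diff(t)**2)/2 - k*(x2 - x1)**2/2

def el(L, x):  # Euler-Lagrange expression d/dt(dL/dxdot) - dL/dx for coordinate x
    return L.diff(x.diff(t)).diff(t) - L.diff(x)

# The interaction terms cancel in the sum, leaving d/dt of the total momentum:
print(sp.simplify(el(L, x1) + el(L, x2)))   # m1*x1'' + m2*x2'' -> 0 on solutions
```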
I fully agree with your example. The Lagrangian is valid; it describes an isolated system (no explicit time dependence - the potential only depends on the distance between the system components and not on the absolute positions). In such a case it's obvious that translation will not change the Lagrangian.

Your example made me aware of one important thing, though (which, in fact, addresses my doubts)! Even though we have a certain freedom in constructing the Lagrangian (we can always add the total time derivative of a function of positions and time to the Lagrangian and end up with the same trajectory), we DO WANT to construct the Lagrangian for an isolated system in its simplest form, i.e. explicitly independent of time and absolute positions, which is probably equivalent to saying we construct it in an inertial frame of reference.

To show an example of the systems I was thinking about, consider: $\mathcal{L} = \frac{1}{2}m \dot{x}^2 + \dot{x}x$

It can be shown that even though the translation changes the Lagrangian, it does not alter the equations of motion - the original trajectory after translation is still valid (minimizes the action), because the change in the action due to translation is (by $\delta S$ I mean how much the action differs between the translated and the original system for the same trajectory, assuming the translation is small): $\delta \mathcal{S} = \int_{t1}^{t2} \delta \mathcal{L} dt = \int_{t1}^{t2} \bigg( \frac{\partial \mathcal{L}}{\partial x} \delta x + \frac{\partial \mathcal{L}}{\partial \dot{x}} \delta \dot{x} \bigg) dt$

Now, since $\delta \dot{x}$ is $0$, the second term in the integral is zero too, so we can write: $\delta \mathcal{S} = \delta x \int_{t1}^{t2} \frac{\partial \mathcal{L}}{\partial x} dt = \delta x \int_{t1}^{t2} \dot{x} dt = \delta x \int_{t1}^{t2} \frac{dx}{dt} dt = \delta x \bigg(x \bigg|_{t1}^{t2} \bigg)$

This is constant and will vanish when varying the trajectory. So, the Lagrangian has a different form after translation, yet it yields the same equations of motion. However, as I just realized, it does not describe an isolated system (at least not in an inertial frame), due to the explicit dependence on position in the second term. It might be equivalent to some isolated system expressed in a non-inertial frame (I'm not sure). Anyway - it doesn't matter. You helped me understand this. Now I think I know why we say that translation does not change the Lagrangian when deriving conservation laws: it's because the original Lagrangian does not depend explicitly on positions and time. Thanks!
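The claim in the post above checks out symbolically; a hedged sympy sketch of mine (the Lagrangian is the one from that post):

```python
import sympy as sp

t, m, c = sp.symbols('t m c')
x = sp.Function('x')(t)
xd = x.diff(t)

L  = m*xd**2/2 + xd*x          # the Lagrangian from the post above
Lc = m*xd**2/2 + xd*(x + c)    # same system with the coordinate shifted by c

el = lambda L: sp.simplify(L.diff(xd).diff(t) - L.diff(x))
print(el(L), el(Lc))           # both m*x'': the shift changes L but not the motion
print(sp.simplify(Lc - L))     # c*Derivative(x, t) = d/dt(c*x), a total derivative
```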
If the Lagrangian does not depend on time explicitly, then energy is conserved. If it does not depend on position, momentum is conserved. These things arise directly from the Euler-Lagrange equation.
Yes, exactly. I was just surprised to see that there can be Lagrangians which depend explicitly on position or time, yet still yield the same equations of motion after translation - like the one in my example. Now I wonder if it's because they describe isolated systems (and thus 'conservative' in an inertial frame) in a non-inertial frame of reference? This is all new to me, so sometimes I might be doing things backwards... Thanks for your patience in pointing me in the right direction!
Recognitions: Science Advisor

Actually, it's slightly more complicated. You didn't just add a potential. You've modified the kinetic term.
$$p = \frac{\partial L}{\partial \dot{x}} = m\dot{x} + x$$
So the momentum of your particle is position-dependent. But let's take a look at the total energy.
$$H = \dot{x}p - L = \frac{1}{2}m\dot{x}^2$$
It looks like the Hamiltonian of a free particle. Of course, we need it in terms of momentum, so that's going to get messy.
$$H = \frac{p^2}{2m} - \frac{xp}{m} + \frac{x^2}{2m}$$
So it's a particle whose kinetic energy is a function of $p$ and $x$ in a potential. In other words, you did not simply create a different Lagrangian with the same trajectory. You came up with a completely new set of physics where, under a particular potential, you end up with the same equations of motion as a free particle under normal physics. There is no surprise, then, that in general the conservation laws are going to be very different with these new physics.
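A sympy sketch of mine (not from the thread) running this Legendre transform symbolically; note the result collapses to $(p-x)^2/2m$, i.e. a free-particle Hamiltonian in the shifted momentum:

```python
import sympy as sp

m = sp.symbols('m', positive=True)
x, xd, p = sp.symbols('x xdot p')

L = m*xd**2/2 + xd*x                 # the modified Lagrangian above
pe = sp.diff(L, xd)                  # p = m*xdot + x
v = sp.solve(sp.Eq(p, pe), xd)[0]    # xdot = (p - x)/m

H = sp.simplify(p*v - L.subs(xd, v))
print(sp.factor(H))                  # (p - x)**2/(2*m)
```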
This really starts to make some sense to me. Thank you! I greatly appreciate your help and the time you spent on it. Indeed, I not only added a potential (which could possibly be identified with some fictitious force acting on the system due to a non-inertial frame), but also modified the kinetic term! No chance this can describe an isolated system in a non-inertial frame.
Looks like I need to resurrect the thread that seemed to be closed!

In my first post I asked whether demanding that the Lagrangian does not change under a transformation that is expected to leave the laws of physics unchanged is the correct way of thinking. The reason for my question was that there is a certain 'redundancy' in the Lagrangian - in other words, the Lagrangian describing a physical system undergoing a certain motion is not unique. My original argument was: all we know empirically is that such a transformation leaves the motion of the system unchanged. So, we should rather demand that it's the equations of motion that don't change, rather than the Lagrangian itself (because there is more than one Lagrangian resulting in the same motion). I got some great answers that really helped me understand the topic.

However, now I got confused again! There is a transform that leaves the motion unchanged that seems to motivate my doubts. It's the well-known Galilean transform. It leaves the time coordinate unchanged, but changes the spatial coordinate (in a way that depends on the relative velocity between the frames). Applying such a transform clearly changes the Lagrangian (and the action), even if it was originally properly expressed in an inertial frame for an isolated system (i.e. not explicitly dependent on time and absolute positions). What we get is a different Lagrangian, which nevertheless results in the same equations of motion (because the added term is the total time derivative of a function, which vanishes when varying the action).

Now, how would you comment on this? It looks to me like my initial argument was justified and valid! We have a transform that does not change the laws of physics, yet it changes the Lagrangian. It's only the equations of motion that don't change here.

EDIT: Of course, the opposite way still works - if a certain transformation does not change the Lagrangian, the motion won't change either. Also, we can expect the conservation of a certain quantity to be associated with such a transformation. But it seems that there are some transformations that change the Lagrangian, yet still result in the same equations of motion. How can we know (or demand) that a certain transformation that leaves the laws of physics unchanged does not change the Lagrangian? The only evidence we get from real-world observations and measurements is the motion. I still can't get my head around it. Or is it the following:
- if a transform does not change the Lagrangian (or should I say "action"), then we can associate a conserved quantity with it;
- if a transform changes the Lagrangian but results in the same motion, we cannot derive a new conservation law from it.
Recognitions: Science Advisor

The classical Lagrangian is not invariant under a Galilean transformation. It is not an underlying symmetry of classical mechanics. There are no associated conservation laws. As a result, the solution is not actually identical. The solution of the Galilean-transformed Lagrangian simply happens to be similarly transformed. Another example would be a transformation to a rotating coordinate system. Again, I can transform the solution to match, but the Lagrangian itself will be wildly different and include centrifugal and Coriolis terms. And again, there is no associated conserved quantity, because there is no symmetry.

In contrast, consider a Lagrangian from relativistic field theory (classical or quantum). Such a Lagrangian is invariant under a Lorentz transformation of coordinates. If you consider the group of all available linear coordinate transformations, including shifts, rotations, and Lorentz boosts, you do get an associated conserved quantity. That quantity is the stress-energy tensor, which implies energy and momentum conservation in a more frame-invariant manner.

Think of it this way. If you transformed coordinates in the Lagrangian and similarly in the solution, how can it possibly not match? The point of invariance is that you do not have to adjust both. Transforming coordinates in the Lagrangian gives you back exactly the same Lagrangian, and you do not have to transform the solution.
Quote by K^2 The classical Lagrangian is not invariant under a Galilean transformation. It is not an underlying symmetry of classical mechanics. There are no associated conservation laws. As a result, the solution is not actually identical. The solution of the Galilean-transformed Lagrangian simply happens to be similarly transformed. Another example would be a transformation to a rotating coordinate system. Again, I can transform the solution to match, but the Lagrangian itself will be wildly different and include centrifugal and Coriolis terms. And again, there is no associated conserved quantity, because there is no symmetry.
That's what I thought, more or less. So it's only the real symmetries of the Lagrangian that can be associated with conserved quantities.
In contrast, consider a Lagrangian from relativistic field theory (classical or quantum). Such a Lagrangian is invariant under a Lorentz transformation of coordinates. If you consider the group of all available linear coordinate transformations, including shifts, rotations, and Lorentz boosts, you do get an associated conserved quantity. That quantity is the stress-energy tensor, which implies energy and momentum conservation in a more frame-invariant manner.
Unfortunately I don't know much about the Lagrangian formulation of relativistic field theory. I hope to know more before I die :)
Anyway, I know what the Lagrangian for relativistic mechanics looks like. As it's expressed in terms of proper time, it is automatically invariant under Lorentz transformations. So, for every kind of Lorentz transformation (translation, rotation, boost) there must be an associated conserved quantity. True?
[EDIT] I meant the Lagrangian describing a free relativistic particle here...
Think of it this way. If you transformed coordinates in the Lagrangian and similarly in the solution, how can it possibly not match? The point of invariance is that you do not have to adjust both. Transforming coordinates in the Lagrangian gives you back exactly the same Lagrangian, and you do not have to transform the solution.
I think I know where you're coming from. We're not interested in describing the same system in different coordinates - that is just a different description. What really yields new laws is symmetry - the coordinate transforms that leave our original solutions unchanged.
I really need to think it over well to gain some confidence. And the best way to think about things is to take a bath - that's exactly what I'm gonna do now!
Recognitions: Science Advisor
Quote by HubertP So, for every kind of Lorentz transformation (translation, rotation, boost) there must be an associated conserved quantity. True?
Sort of. Because of how these conserved quantities transform under coordinate transformations (energy and momentum are both frame-dependent), it ends up being much more straightforward to simply lump them all together into a single conserved quantity. Kind of like if you take translation -> momentum and time -> energy, then you can look at general invariance under translation in space-time and get conservation of four-momentum. Well, if you look at translation in space-time, rotation in space, and boosts (a boost is basically a hyperbolic rotation mixing space and time), you get conservation of the stress-energy tensor. That is effectively a relativistic generalization of conservation of energy, momentum, and angular momentum, all lumped into a single quantity.
You could probably define a conserved quantity just for Lorentz boosts, for example, but I doubt it would be something particularly meaningful. It really makes more sense to treat all these coordinate transformations together.
The rest of it, that's pretty much how I understand it. So if there is more to it, I can't really help you.
http://physics.stackexchange.com/questions/51302/hamiltonian-and-non-conservative-force | Hamiltonian and non conservative force
I have to find the Hamiltonian of a charged particle in a uniform magnetic field; the vector potential is $\vec {A}= \frac{B}{2} (-y, x, 0)$.
I know that $$H=\sum_i p_i \dot q_i -L$$ where $p_i$ is the conjugate momentum, $\dot q_i$ is the velocity and $L$ is the Lagrangian.
The result that I should obtain is $$H=\frac{1}{2} m(\dot x^2+ \dot y^2)= \frac {1}{2m}(p_x^2+ p_y^2)+ \frac{1}{2}m\omega^2 (x^2+y^2)+ \omega (p_x y- p_y x),$$ where $\omega= \frac{eB}{2mc}$.
I obtain this result only if I don't include the magnetic potential in the Hamiltonian, and if I substitute the velocities in terms of the conjugate momenta only at the last step.
Considering that the magnetic field isn't a field of conservative forces, I ask you:
if I have a system in a non-conservative field, is it correct not to include the potential of the non-conservative force when writing the Hamiltonian? Are my steps correct? Thank you.
-
2 Answers
1) We start by writing the Lagrangian
$$L=\frac{1}{2}mv^2+q\vec{v}\cdot\vec{A}=\frac{1}{2}m(v_x^2+v_y^2+v_z^2)+q\frac{B}{2}\left(-yv_x+xv_y \right)$$
2) We find the momenta
$$p_{x}=\frac{\partial L}{\partial v_{x}}=mv_{x}-\frac{qBy}{2}$$
$$p_{y}=\frac{\partial L}{\partial v_{y}}=mv_{y}+\frac{qBx}{2}$$
$$p_{z}=\frac{\partial L}{\partial v_{z}}=mv_{z}$$
3) We express the Hamiltonian, performing a Legendre transformation between velocities and momenta as usual, and solving for the velocities as functions of the coordinates and momenta
$$v_{x}=\frac{1}{m}\left(p_x+\frac{qBy}{2}\right)$$
$$v_{y}=\frac{1}{m}\left( p_y-\frac{qBx}{2} \right)$$
$$v_{z}=\frac{p_{z}}{m}$$
So
$$H=\sum_{i} p_{i}v_{i}-L\bigg|_{v=v(p,q)}=\frac{p_{x}}{m}\left(p_{x}+\frac{qBy}{2}\right)+\frac{p_{y}}{m}\left(p_{y}-\frac{qBx}{2} \right)+\frac{p_z^2}{m}-\frac{1}{2m}\left[\left(p_{x}+\frac{qBy}{2}\right)^2+\left(p_{y}-\frac{qBx}{2} \right)^2+p_z^2\right]-q\frac{B}{2}\left[-\frac{y}{m}\left(p_{x}+\frac{qBy}{2}\right)+\frac{x}{m}\left(p_{y}-\frac{qBx}{2} \right) \right]$$
Expanding the squares
$$H=\frac{p_x^2}{m}+\frac{p_xqBy}{2m}+\frac{p_y^2}{m}-\frac{p_yqBx}{2m}+\frac{p_z^2}{m} -\frac{1}{2m}\left(p_x^2+\frac{q^2B^2y^2}{4}+p_xqBy+p_y^2+\frac{q^2B^2x^2}{4}-p_yqBx+p_z^2\right)-\frac{qB}{2m}\left( -yp_x -\frac{qBy^2}{2}+xp_y-\frac{qBx^2}{2}\right)$$
Taking common factors
$$H=\frac{p_x^2}{2m}+\frac{p_y^2}{2m}+\frac{p^2_{z}}{2m}+p_{x}\left( \frac{qBy}{2m}-\frac{qBy}{2m}+\frac{qBy}{2m}\right)+p_y\left( -\frac{qBx}{2m}+\frac{qBx}{2m}-\frac{qBx}{2m}\right) -\frac{q^2B^2y^2}{8m}-\frac{q^2B^2x^2}{8m}+\frac{q^2B^2y^2}{4m}+\frac{q^2B^2x^2}{4m}$$
Finally
$$H=\frac{p_x^2}{2m}+\frac{p_y^2}{2m}+\frac{p^2_{z}}{2m}+\frac{q^2B^2}{8m}(x^2+y^2)+\frac{qB}{2m}(p_xy-p_yx)$$
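A sympy sketch of my own (not part of the answer) that runs the same Legendre transform symbolically and confirms the coefficients above:

```python
import sympy as sp

m, q, B = sp.symbols('m q B', positive=True)
x, y, z = sp.symbols('x y z')
vx, vy, vz = sp.symbols('v_x v_y v_z')
px, py, pz = sp.symbols('p_x p_y p_z')

L = m*(vx**2 + vy**2 + vz**2)/2 + q*B/2*(-y*vx + x*vy)

# Invert p_i = dL/dv_i for the velocities:
vels = sp.solve([sp.Eq(px, sp.diff(L, vx)),
                 sp.Eq(py, sp.diff(L, vy)),
                 sp.Eq(pz, sp.diff(L, vz))], [vx, vy, vz])

H = sp.expand(sp.simplify((px*vx + py*vy + pz*vz - L).subs(vels)))
print(H)   # p_x^2/2m + p_y^2/2m + p_z^2/2m + q^2*B^2*(x^2+y^2)/(8m) + q*B*(p_x*y - p_y*x)/(2m)
```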
In the textbook they probably decided to drop the $p_z$ term because, given that $A_z=0$, the Lagrangian does not depend on $z$, so $p_z$ is a constant and the motion along $z$ is free.
-
1
thanks for your answer! :) – sunrise Jan 17 at 9:25
can I follow these steps in every case, whether I have a velocity-dependent potential or a non-generalized potential? – sunrise Jan 17 at 9:28
For "generalized potential" I mean a "velocity-dependent potential" and for "not generalized potential" a potential dependent only on coordinates.. – sunrise Jan 17 at 13:34
Then the answer is yes, this is the very general prescription to translate a problem of Lagrangian mechanics into a problem of Hamiltonian mechanics. 1) Lagrangian 2) Hamiltonian $\sum v_ip_i -L$ (as a function of coordinates and momenta, not velocities!) 3) Hamilton's equations 4) Profit! – Nivalth Jan 17 at 13:42
The potential of a charged particle in an electromagnetic field is:
$$U(r,v,t)=q\phi -q\mathbf{v}\cdot A$$
Here $\phi$ is the electric potential, $\mathbf{v}$ the velocity of the particle, $q$ the charge of the particle, and $A$ the vector potential of the magnetic field. Make sure you haven't made any mistakes calculating the Lagrange equations (when you take the derivative $d/dt$, remember $\mathbf{v}$ is a function of time).
The Lagrangian will be:
$$L=\frac{1}{2}mv^2-q\phi +q\mathbf{v}\cdot A$$
Again, make sure you haven't made arithmetic mistakes.
-
thank you, but the particle is in a magnetic field and not in an electromagnetic field... – sunrise Jan 15 at 18:33
@sunrise Then the only thing you have to do is remove the electric field term of the potential: $q\phi$, and there you go. – MyUserIsThis Jan 15 at 18:39
sure! but my problem is about the Hamiltonian not about Lagrangian.. :( – sunrise Jan 15 at 18:52
$H=m\dot x ^2+m\dot y^2-\frac{1}{2}m\dot x^2-\frac{1}{2}m\dot y^2-qvA\sin\alpha$, where $\alpha$ is the angle between $v$ and $A$; get that angle and operate. – MyUserIsThis Jan 15 at 19:04
the book tells me that the result is $H=\frac{1}{2}m(\dot x^2+ \dot y^2)$ and I have no information about any angle.. :( – sunrise Jan 15 at 19:47
http://math.stackexchange.com/questions/206527/is-c-0-infty-dense-in-lp/206533 | # Is $C_0^\infty$ dense in $L^p$?
I have a question concerning the Lebesgue spaces: Is $C_0^\infty$ dense in $L^p$ ?
And if yes, why?
Thanks!
-
Over what set? What is the measure you are working with? – Siminore Oct 3 '12 at 11:40
## 3 Answers
Yes. First of all, it is enough to see that any function in $L^p$ with compact support can be approximated in the $L^p$ norm by $C_0^\infty$ functions.
Take $\phi\in C_0^\infty$, $\phi\ge0$ and $\int\phi(x)\,dx=1$, and define for $\epsilon>0$ $\phi_\epsilon(x)=\epsilon^{-n}\phi(x/\epsilon)$ (with $n$ the dimension). If $f\in L^p$ has compact support, then $\phi_\epsilon\ast f$ has compact support, is of class $C^\infty$, and $\phi_\epsilon\ast f$ converges to $f$ in $L^p$ as $\epsilon\to0$.
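A quick numerical illustration of this convergence in one dimension (my own sketch, not a proof): mollify the indicator function of $[0,1]$ and watch the $L^2$ error shrink as $\epsilon\to0$.

```python
import numpy as np

xs = np.linspace(-2, 3, 20001)
dx = xs[1] - xs[0]
f = ((xs >= 0) & (xs <= 1)).astype(float)       # f = indicator of [0, 1]

def mollifier(eps):
    s = xs[np.abs(xs) < eps]                    # support strictly inside (-eps, eps)
    phi = np.exp(-1.0/(1.0 - (s/eps)**2))       # the standard smooth bump
    return phi/(phi.sum()*dx)                   # normalize so the integral is 1

for eps in (0.5, 0.1, 0.02):
    conv = np.convolve(f, mollifier(eps), mode='same')*dx
    print(eps, np.sqrt(np.sum((conv - f)**2)*dx))   # L^2 error decreases with eps
```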
-
Thank you. And in the case of non-compact support? – AlexisZorbas Oct 3 '12 at 11:52
@AlexisZorbas A function in $C^\infty$ need not be in $L^p$. – Siminore Oct 3 '12 at 12:05
@AlexisZorbas given a function $f$ in $L^p(\mathbb{R}^k)$, you can use the dominated convergence theorem to show that $f_M = f \cdot \chi_{[-M,M]}$ converges to $f$ in $L^p$, so you can do a standard argument by approximating first by a compactly supported function and then approximating the compactly supported function by a $C_0^\infty$ function. – Chris Janjigian Oct 3 '12 at 12:34
Theorem. Let $\mathcal{L}^k$ be the Lebesgue measure on $\mathbb{R}^k$, $k \geq 1$. For $1 \leq p < \infty$, the set $C_0(\mathbb{R}^k)$ is dense in $L^p(\mathbb{R}^k,\mathcal{L}^k)$.
The proof is based on Lusin's theorem. A more general statement holds true in a locally compact Hausdorff space, provided the measure satisfies some properties.
For all this, you can read W. Rudin, Real and complex analysis, Chapter 3, section Approximation by continuous functions.
-
Take a look at chapter 4 of this book (Brezis - Functional Analysis, Sobolev Spaces and Partial Differential Equations).
-
http://mathhelpforum.com/calculus/176198-convergence-divergence.html | # Thread:
1. ## Convergence/divergence
Can someone help me with the following problem...I'm stuck!
Find whether the following series converges/diverges using one of the following tests: Nth Term Test for Divergence, Integral Test, Alternating Series Test, Absolute Convergence, Geometric Series, P-Series, Comparison Test, Limit Comparison Test, Ratio Test, & Root Test
$\displaystyle\Sigma(-1)^n*\frac{3n}{4n-1}$
I tried the A.S.T. but I can't get past how to figure out whether $a_{n+1} < a_n$. I found out that it doesn't converge absolutely using absolute convergence & the nth-term test. I tried the ratio test but I get a limit of 1, which tells me nothing about the convergence. Can someone explain what would be the best test to use, because I cannot figure it out. Thanks
2. Originally Posted by dbakeg00
Can someone help me with the following problem...I'm stuck!
Find whether the following series converges/diverges using one of the following tests: Nth Term Test for Divergence, Integral Test, Alternating Series Test, Absolute Convergence, Geometric Series, P-Series, Comparison Test, Limit Comparison Test, Ratio Test, & Root Test
$\displaystyle\Sigma(-1)^n*\frac{3n}{4n-1}$
I tried the A.S.T. but I can't get past how to figure out whether $a_{n+1} < a_n$. I found out that it doesn't converge absolutely using absolute convergence & the nth-term test. I tried the ratio test but I get a limit of 1, which tells me nothing about the convergence. Can someone explain what would be the best test to use, because I cannot figure it out. Thanks
You have already answered your own question. By the basic divergence test, since $\displaystyle \lim_{n \to \infty}a_n \ne 0$, the series diverges.
3. Originally Posted by TheEmptySet
You have already answered your own question. By the basic divergence test, since $\displaystyle \lim_{n \to \infty}a_n \ne 0$, the series diverges.
Well, I took the absolute value as part of testing for absolute convergence, then took the limit, which tells me that it doesn't converge absolutely. Can't it still converge conditionally? How do I take the limit of the original series with that $(-1)^n$ as part of the original series?
4. Originally Posted by dbakeg00
Well, I took the absolute value as part of testing for absolute convergence, then took the limit, which tells me that it doesn't converge absolutely. Can't it still converge conditionally? How do I take the limit of the original series with that $(-1)^n$ as part of the original series?
Yes it can, but the basic divergence test states that
if $\displaystyle \lim_{n \to \infty}a_n \ne 0$
or $\displaystyle \lim_{n \to \infty}a_n$ does not exist, then the series diverges!
$\displaystyle \lim_{n \to \infty}\frac{(-1)^n3n}{4n-1}$ this limit does not exist!
So you are done!
5. Originally Posted by dbakeg00
Well, I took the absolute value as part of testing for absolute convergence, then took the limit, which tells me that it doesn't converge absolutely. Can't it still converge conditionally? How do I take the limit of the original series with that $(-1)^n$ as part of the original series?
NO! If $\displaystyle\lim _{n \to \infty } \left| {a_n } \right| \ne 0$, it diverges, period.
6. Thanks for the help gentlemen!
7. Originally Posted by Plato
NO! If $\displaystyle\lim _{n \to \infty } \left| {a_n } \right| \ne 0$, it diverges, period.
I'm kind of confused about the Nth Term Test for Divergence. You say that it says $\displaystyle\lim _{n \to \infty } \left| {a_n } \right| \ne 0$
But all the books I have on Calculus say this: $\displaystyle\lim _{n \to \infty } {a_n } \ne 0$
Where are you finding the definition of that test with the absolute value bars included?
8. Originally Posted by dbakeg00
I'm kind of confused about the Nth Term Test for Divergence. You say that it says $\displaystyle\lim _{n \to \infty } \left| {a_n } \right| \ne 0$
But all the books I have on Calculus say this: $\displaystyle\lim _{n \to \infty } {a_n } \ne 0$
Where are you finding the definition of that test with the absolute value bars included?
If we know that $\displaystyle\lim _{n \to \infty } \left| {a_n } \right| \ne 0$, does that imply that $\displaystyle\lim _{n \to \infty } {a_n } \ne 0~?$
9. Originally Posted by Plato
If we know that $\displaystyle\lim _{n \to \infty } \left| {a_n } \right| \ne 0$, does that imply that $\displaystyle\lim _{n \to \infty } {a_n } \ne 0~?$
I don't know if that implies what you say or not. What about a case such as $\displaystyle\Sigma(-1)^n*\frac{1}{\sqrt{n}}$ ?
If you take the limit of the absolute value of a_n, then you get a limit = 0, so it would tell you nothing really other than that this series has the potential to converge. If you were to take the limit of a_n (without the absolute value), the limit doesn't exist, so you could conclude that the original series diverges. Now, talking about the same series, if you use the Alternating Series Test, it tells you that this series converges. You can then test for absolute convergence and find out that it converges conditionally. So I'm confused, because if I just took the limit (of a_n above) straight up then I would say that it didn't exist and therefore the series diverges. However, if I did the alternating series test and then tested for absolute convergence, I would find that this series converges conditionally. I'm not sure if I'm misunderstanding something here or what?
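A numerical illustration of the difference between the two series (my own sketch, not from the thread):

```python
import numpy as np

n = np.arange(1, 2001)
a = (-1.0)**n * 3*n/(4*n - 1)   # terms approach +-3/4, never 0
b = (-1.0)**n / np.sqrt(n)      # |terms| decrease to 0: the AST applies

print(np.cumsum(a)[-4:])   # partial sums keep jumping by about 3/4 forever
print(np.cumsum(b)[-4:])   # partial sums settle near -0.605
```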
10. Originally Posted by dbakeg00
I don't know if that implies what you say or not. What about a case such as $\displaystyle\Sigma(-1)^n*\frac{1}{\sqrt{n}}$ ?
If you take the limit of the absolute value of a_n, then you get a limit = 0, so it would tell you nothing really other than that this series has the potential to converge.
That is exactly correct; it tells you nothing.
But that is not what I said. I said nothing about converging to zero.
I said not converging to zero.
The sentence "If P then Q." is equivalent to "If not Q then not P."
Thus "if $(a_n)\to 0$ then $|a_n|\to 0$" is equivalent to "If $|a_n|\not\to 0$ then $(a_n)\not\to 0$."
That is the first test for divergence.
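As a quick numerical illustration of the divergence test for the series in this thread (a sketch in Python, added for concreteness): the terms tend to $\pm\frac{3}{4}$ rather than $0$, so consecutive partial sums keep jumping by roughly $\frac{3}{4}$ instead of settling down.

```python
from fractions import Fraction

# Partial sums of sum_{n>=1} (-1)^n * 3n/(4n-1)
s = Fraction(0)
for n in range(1, 11):
    s += Fraction((-1)**n * 3 * n, 4 * n - 1)  # nth term, kept exact
    print(n, float(s))  # successive sums oscillate by about 3/4
```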
http://unapologetic.wordpress.com/2009/07/10/symmetric-antisymmetric-and-hermitian-forms/

# The Unapologetic Mathematician
## Symmetric, Antisymmetric, and Hermitian Forms
The simplest structure we can look for in our bilinear forms is that they be symmetric, antisymmetric, or (if we’re working over the complex numbers) Hermitian. A symmetric form gives the same answer, an antisymmetric form negates the answer, and a Hermitian form conjugates the answer when we swap inputs. Thus if we have a symmetric form given by the linear operator $S$, an antisymmetric form given by the operator $A$, and a Hermitian form given by $H$, we can write
$\displaystyle\begin{aligned}\langle w\rvert S\lvert v\rangle&=\langle v\rvert S\lvert w\rangle\\\langle w\rvert A\lvert v\rangle&=-\langle v\rvert A\lvert w\rangle\\\langle w\rvert H\lvert v\rangle&=\overline{\langle v\rvert H\lvert w\rangle}\end{aligned}$
Each of these conditions can immediately be translated into a condition on the corresponding linear operator. We'll flip over each of the terms on the left, using the symmetry of the inner product and the adjoint property. In the third line, though, we'll use the conjugate-symmetry of the complex inner product.
$\displaystyle\begin{aligned}\langle v\rvert S^*\lvert w\rangle&=\langle v\rvert S\lvert w\rangle\\\langle v\rvert A^*\lvert w\rangle&=-\langle v\rvert A\lvert w\rangle\\\overline{\langle v\rvert H^*\lvert w\rangle}&=\overline{\langle v\rvert H\lvert w\rangle}\end{aligned}$
We can conjugate both sides of the last line to simplify it. Similarly, we can use linearity in the second line to rewrite
$\displaystyle\begin{aligned}\langle v\rvert S^*\lvert w\rangle&=\langle v\rvert S\lvert w\rangle\\\langle v\rvert A^*\lvert w\rangle&=\langle v\rvert-A\lvert w\rangle\\\langle v\rvert H^*\lvert w\rangle&=\langle v\rvert H\lvert w\rangle\end{aligned}$
Now in each line we have one operator on the left and another operator on the right, and these operators give rise to the same forms. I say that this means the operators themselves must be the same. To show this, consider the general case
$\displaystyle\langle v\rvert B_1\lvert w\rangle=\langle v\rvert B_2\lvert w\rangle$
Pulling both forms to one side and using linearity we find
$\displaystyle\langle v\rvert B_1-B_2\lvert w\rangle=0$
Now, if the difference $B_1-B_2$ is not the zero transformation, then there is some $w_0$ so that $B_1(w_0)-B_2(w_0)=v_0\neq0$. Then we can consider
$\displaystyle\langle v_0\rvert B_1-B_2\lvert w_0\rangle=\langle v_0\vert v_0\rangle=\lVert v_0\rVert^2\neq0$
And so we must have $B_1=B_2$.
In particular, this shows that if we have a symmetric form, it's described by a self-adjoint transformation $S^*=S$. Hermitian forms are also described by self-adjoint transformations $H^*=H$. And antisymmetric forms are described by "skew-adjoint" transformations $A^*=-A$.
So what’s the difference between a symmetric and a Hermitian form? It’s all in the fact that a symmetric form is based on a vector space with a symmetric inner product, while a Hermitian form is based on a complex vector space with a conjugate-symmetric inner product. The different properties of the two inner products account for the different ways that adjoint transformations behave.
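As a quick numerical check of these identities (a sketch using NumPy; the random matrices and vectors are illustrative, not part of the original post):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (M + M.conj().T) / 2     # Hermitian: H* = H
A = (M.real - M.real.T) / 2  # real antisymmetric: A^T = -A

v = rng.normal(size=4) + 1j * rng.normal(size=4)
w = rng.normal(size=4) + 1j * rng.normal(size=4)

def form(a, B, b):
    """<a| B |b>; np.vdot conjugates its first argument."""
    return np.vdot(a, B @ b)

print(np.isclose(form(w, H, v), np.conj(form(v, H, w))))              # Hermitian form
print(np.isclose(form(w.real, A, v.real), -form(v.real, A, w.real)))  # antisymmetric form
```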
Posted by John Armstrong | Algebra, Linear Algebra
## Comments
1. How much of this can be generalized to vector spaces over a field other than the complex numbers? For instance, if conjugation is replaced by some field automorphism, does the spectral theorem still hold for Hermitian forms?
Comment by | July 11, 2009 | Reply
2. Well, I don’t know offhand. But it’s worth looking into, as I approach the spectral theorem.
One thing I’ll say is that it seems you not only need an automorphism, but one that relates to some sensible norm in a similar way. I’m not really an expert on these things, but it might be that you need to consider modules over a C* algebra to get the appropriate generalization.
Comment by | July 11, 2009 | Reply
3. But does the spectral theorem for finite-dimensional vector spaces even need a norm? At least, doesn’t the version for symmetric bilinear forms work over arbitrary fields (say of characteristic $\neq 2$)?
Comment by | July 11, 2009 | Reply
4. Interesting question: the relevant statement is, I think, “Sylvester’s law of inertia”. The wikipedia page: http://en.wikipedia.org/wiki/Symmetric_bilinear_form#Signature_and_Sylvester.27s_law_of_inertia
seems to suggest that this doesn’t hold over an arbitrary field (it says “Ordered Field”, but clearly it also works for the complex numbers).
The obvious problem is that, working in $F_p$ say, there is no notion of "positive" or "negative". But it's not hard to see that the number of zeroes down the diagonal must be basis independent: this is just the dimension of the "radical".
The beauty about what John is doing (over the reals, and especially the complexes) is that it all generalises beautifully to the infinite-dimensional setting: here you really do need a norm, as you need to work with complete spaces.
Comment by | July 13, 2009 | Reply
5. Yes, once I get around to more functional analysis, I’ll be able to import a lot of this stuff wholesale. And, again, I’ll admit to not being an expert on finite characteristic.
Comment by | July 13, 2009 | Reply
http://mathoverflow.net/questions/99969/cosheaf-homology-and-a-theorem-of-beilinson-in-a-paper-on-mixed-tate-motives/119381

Cosheaf homology and a theorem of Beilinson (in a paper on Mixed Tate Motives)
I'm trying to understand the proof of Theorem 4.1 in the paper Multiple Polylogarithms and Mixed Tate Motives by AB Goncharov (http://arxiv.org/pdf/math/0103059v4.pdf). In it, the author uses cosheaf homology.
As far as I can tell, the global sections functor for cosheaves is right exact, so homology should be given by the left derived functors of the global sections functor. Similarly, the higher direct image functors should be the left derived functors of the standard direct image. The Grothendieck spectral sequence should be a homological spectral sequence.
I have four questions:
1. Is this correct?
2. I understand that (see e.g. Hartshorne, Chapter III, Proposition 8.1) the cosheaf sending an open set $U$ to $H_q(p^{-1}(U))$ should be the $q$th direct image of the constant sheaf $\mathbb Z$. However, when on p.46 we are defining the cosheaf $\mathcal{R}_c$ and we are taking relative homology, what cosheaf are we taking the direct image of?
3. Where does the exact sequence (110) come from? Do we always get an exact sequence from the joining of two complexes?
4. Between (109) and (110), I assume that $R_i$ means left derived functor, since, as I mentioned, higher direct images of cosheaves are left, not right, derived functors. But what on Earth does he mean by the higher direct image of a subvariety (or complex of subvarieties)?
I'm guessing that the $q$th direct image of the complex of varieties should be interpreted as the cosheaf corresponding to the homology relative to the union of the varieties associated to that complex? Assuming that's the case, I'm a little unsure how to deal with the higher direct images of the truncation (maybe it corresponds to the hyperhomology of the complex on the inverse image of $U$? Then the special case of the whole complex makes sense since the hyperhomology of that complex is the relative homology. But if that's so, I don't know what to make of the hyperhomology of the truncation...)
-
Wild guess for (4): Given a subvariety, you may push the constant cosheaf on that subvariety forward along the inclusion map, and take the derived functors. – S. Carnahan♦ Jun 21 at 13:50
The proof was rewritten in the framework of sheaf cohomology in Deligne and Goncharov's "Groupe fondamentaux motiviques de Tate mixtes". This is french but should still be easier to understand. – YBL Jul 25 at 19:02
You can also rewrite the proof quite easily if you remember that (1) Goncharov's objective is to compute $H_n(P^n_{x,y}X,\partial P^n_{x,y} X)$ ($P^n_{x,y}X=\{x\}\times X^n \times \{y\}$ the cosimplicial path space) (2) In the formalism of 4 operations $$H_n(X,Z;A) = {}^{\tau} H^{-n}(a_{X!}\, j_{*}\, j^{*}\, a_X^{!} A)$$ where $\tau$ is the t-structure, $a_X:X\to \mathrm{pt}$ the structural morphism, and $j:X\setminus Z \hookrightarrow X$ is the open immersion. (3) The proof in Deligne-Goncharov is Poincaré dual in the sense that they compute $$H^n(X \bmod Z;A) = {}^{\tau} H^n( a_{X *}\, j_{!}\, j^{!}\, a_X^{*} A )$$ – YBL Jul 25 at 19:09
Thanks, I had seen that proof, but I found it to be unnecessarily non-elementary. – David Corwin Jul 26 at 1:03
1 Answer
Cosheaves are indeed mysterious gadgets. On the one hand, cosheaves are everywhere, but on the other hand, someone used to thinking sheaf-theoretically may have some problems. I am very close to finishing an exposition on cosheaves, but need another week or so to put it on the arxiv. Bredon's book on sheaf theory has the most complete reference on cosheaves, so you might look there if you like.
As you may know, pre-cosheaves are just covariant functors $\hat{F}:\mathrm{Open}(X)\to\mathcal{D}$ where $\mathcal{D}$ is some "data category" like Vect, Ab, or what have you. Cosheaves send covers (closed under intersection) to colimits and different covers of the same open set get sent to isomorphic colimits. The Mayer-Vietoris axiom is a good way of thinking about cosheaves and since homology commutes with direct limits, one can see that $H_0(-,k)$ is always a cosheaf. In particular, $H_0(-,\mathcal{L})$ is a cosheaf whenever $\mathcal{L}$ is a local system.
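In symbols, for a cover of $U_1\cup U_2$ by two open sets, the cosheaf (Mayer-Vietoris) condition says that the sequence $$\hat{F}(U_1\cap U_2)\longrightarrow \hat{F}(U_1)\oplus\hat{F}(U_2)\longrightarrow \hat{F}(U_1\cup U_2)\longrightarrow 0$$ is exact, where the first map sends $x\mapsto (x,-x)$ via the extension maps; this is the pushout (colimit) formulation dualizing the sheaf equalizer. (A standard formulation, stated here for concreteness rather than quoted from Bredon.)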
As you observed, since cosheaves are fundamentally colimit-y, they have left-derived functors rather than right-derived ones. Thus the answer to (1) is yes.
In regards to (2), one must be careful. I believe the answer is yes, but allow me to pontificate on the problem.
Filtered limits and finite colimits do not commute in most categories like Ab, Vect, or Set. This has serious ramifications through the theory of cosheaves.
For example, it is not necessarily true that a sequence of cosheaves is exact iff it is exact on costalks. Here costalks are defined using (filtered) inverse limits rather than direct ones.
Another very serious consequence is that Grothendieck's sheafification procedure cannot be dualized to give cosheafification. Thus the usual phrase
"let blah by the cosheaf associated to the pre-cosheaf blah"
is not necessarily well-founded because it is unclear how to cosheafify! People have solved this problem in the past by working with pro-objects (which corrects for this "filtered limits not commuting with finite colimits" asymmetry) and then they use Grothendieck's construction. However, for abstract categorical reasons one can check that cosheafification does exist for data categories like Vect (i have worked out a proof and haven't found in the literature anyone who claims to have proved this), we just don't have an explicit construction. That said, the usual description of the left-derived functor of the push-forward should still hold.
On the other hand, if one works in the constructible setting, one can get the statements you would like. In particular, it is true that cosheaves constructible with respect to a cell structure are derived equivalent to sheaves constructible with respect to the same cell structure. I discovered independently my own proof, only to find that at least two other people have proved this before. However, in my opinion, the equivalence is the "correct" form of Verdier duality. A larger and updated exposition should be available soon.
-
http://mathoverflow.net/revisions/90191/list

# Laplace equation on manifolds with boundary
In Aubin's book, on page 104, Theorem 4.7 states: Let $(M,g)$ be a compact $C^{\infty}$ Riemannian manifold. There exists a weak solution $\varphi \in H_{1}$ of $\Delta \varphi = f$ if and only if $\int f \, d\mathrm{vol} = 0$. The solution is unique up to a constant. If $f \in C^{r + \alpha}$ ($r \geq 0$ an integer or $r=+\infty$, $0 < \alpha < 1$), then $\varphi \in C^{r+2+\alpha}$. In this theorem the manifolds do not have boundary.
My question: are there similar results for Riemannian manifolds with boundary?

Hope for answers.

william
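For context, a standard remark (a sketch added here, not part of the original question): on a compact manifold with boundary, the natural analogue is the Neumann problem $\Delta \varphi = f$ in $M$ with $\partial\varphi/\partial\nu = 0$ on $\partial M$. The divergence theorem makes the boundary term vanish, so $$\int_M f \, d\mathrm{vol} = \int_{\partial M} \frac{\partial\varphi}{\partial\nu}\, dS = 0$$ is again the compatibility condition (up to the sign convention for $\Delta$); it is also sufficient, the solution is unique up to a constant, and the Schauder regularity statement carries over when $\partial M$ is sufficiently smooth.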
http://mathoverflow.net/questions/tagged/co.combinatorics

## Tagged Questions
0answers
99 views
### One element not belong to at least half of the subsets [closed]
Let $F$ be a collection of subsets of a finite set that is closed under intersection (meaning that the empty set is an element of $F$). Is it true that there must exist one element …
1answer
97 views
### General and translational Birkhoff lattices. Equational classes.
By lattice I'll mean Birkhoff lattice. The two classical equational classes of lattices are modular lattices and distributive lattices. The old problem used to b …
6answers
1k views
### Show that this ratio of factorials is always an integer
show the formula always gives an integer $$\frac{(2m)!(2n)!}{m!n!(m+n)!}$$ I don't remember where I read this problem, but it said this can be proved using a simple counting argu …
0answers
275 views
### Reference/quote request: “All of combinatorics is the representation theory of $S_n$”
I think I remember reading somewhere a glib (or is it deep?) quote, perhaps due to Rota?, which was something like "All of combinatorics is essentially [or can be reduced to?] …
0answers
92 views
### An interesting version of the problem “balls into bins”
Consider n people, each of whom has k identical balls. Each person chooses k different bins from m bins, constrained by the condition that no two people choose exactly the same k …
1answer
221 views
### Is there a name for this graph?
I'm trying to find out whether the following graph has a name: Let $W$ be an $n$-dimensional vector space over $GF(q)$. The vertices of the graph are all the subspaces of $W$. Two …
0answers
43 views
### Approximate closed-form solution for a recurrence
Find an (approximate) closed-form solution for $S(m, b)$. S(m,b)=\sum_{i=0}^{\lfloor (e-1)/2\rfloor}{e \choose i}S(m-1, b-i) \quad + \sum_{i=\lfloor (e-1)/2\rfloor+1}^{\ …
1answer
325 views
### Enumerating/counting paths of a given length on a 2D lattice
All, I'm wondering if anyone can point me to a reference on how to address the following problem. In my thesis work on lattice QCD many years ago I had to enumerate all possible …
0answers
83 views
### A detail in the proof of the Motzkin-Straus theorem
The Motzkin-Straus theorem says that the global optimum of the quadratic program $$\max f(x)=\frac{1}{2} x^{t}Ax,\qquad \mbox{ subject to }\sum x_{i}=1 \mbox{ and } x_{i}\geq 0,$$ …
1answer
91 views
### The distribution of cycle length in random derangement
It is known that for a fixed x $\in {0,1,...,N-1}$, the length of the cycle of x in a random permutation in $S_N$ distributes uniformly in ${1, . . . ,N}$. My question is regardin …
4answers
1k views
### Verifying the correctness of a Sudoku solution
A Sudoku is solved correctly, if all columns, all rows and all 9 subsquares are filled with the numbers 1 to 9 without repetition. Hence, in order to verify if a (correct) solution …
2answers
80 views
### Incidence matrices of generalized quadrangles
Is there somewhere a database of incidence matrices of generalized quadrangles that one can download?
2answers
173 views
### Discrete disjoint covering of integer lattices
Is there a covering of $\mathbb{Z}^n$ by disjoint translates of the basis-and-origin minimal integer $n$-simplex? By haphazard I have such coverings for $\mathbb{Z}$, \$\mathbb{Z}^2 …
0answers
212 views
### Example of a group with unsolvable word problem
Today I noticed that the last relator in the 27-relator presentation of a group with unsolvable word problem given in Donald J. Collins: A simple presentation of a group with u …
0answers
127 views
### Checking whether an element is in all inclusion-wise maximal common independent sets of two matroids
Given two matroids $M$ and $M'$ over the same universe $E$, and some element $x \in E$, I am interested in the importance of $x$ for the intersection (the common independent sets) …
http://mathhelpforum.com/pre-calculus/82756-turning-points.html

# Thread:
1. ## Turning points
I don't know if this goes in here, but this is a question from my pre-calc class that I could not solve:
Find a polynomial whose turning points are at $-1$, $\frac{3 + \sqrt{7}}{4}$, and $\frac{3 - \sqrt{7}}{4}$
Any help will be appreciated.
2. Originally Posted by mathprob
I don't know if this goes in here, but this is a question from my pre-calc class that I could not solve:
Find a polynomial whose turning points are at $-1$, $\frac{3 + \sqrt{7}}{4}$, and $\frac{3 - \sqrt{7}}{4}$
Any help will be appreciated.
Turning Points occur when the derivative is 0.
Using this information, you can find the derivative.
$\frac{dy}{dx} = (x + 1)\left[x - \left(\frac{3 + \sqrt{7}}{4}\right)\right]\left[x - \left(\frac{3 - \sqrt{7}}{4}\right)\right]$
(set it equal to 0, you should see that the turning points are correct).
Expand, and then you can integrate to find the polynomial.
$\frac{dy}{dx} = x^3 - \frac{1}{2}x^2 - x + \frac{1}{2}$
$y = \int{x^3 - \frac{1}{2}x^2 - x + \frac{1}{2}\,dx}$
$y = \frac{1}{4}x^4 - \frac{1}{6}x^3 - \frac{1}{2}x^2 + \frac{1}{2}x + C$.
You can choose any value of C you like... 0 is the easiest.
So a polynomial that has the turning points you mentioned is
$y = \frac{1}{4}x^4 - \frac{1}{6}x^3 - \frac{1}{2}x^2 + \frac{1}{2}x$.
3. Originally Posted by Prove It
Turning Points occur when the derivative is 0.
Using this information, you can find the derivative.
$\frac{dy}{dx} = (x + 1)\left[x - \left(\frac{3 + \sqrt{7}}{4}\right)\right]\left[x - \left(\frac{3 - \sqrt{7}}{4}\right)\right]$
(set it equal to 0, you should see that the turning points are correct).
Expand, and then you can integrate to find the polynomial.
$\frac{dy}{dx} = x^3 - \frac{1}{2}x^2 - x + \frac{1}{2}$
$y = \int{x^3 - \frac{1}{2}x^2 - x + \frac{1}{2}\,dx}$
$y = \frac{1}{4}x^4 - \frac{1}{6}x^3 - \frac{1}{2}x^2 + \frac{1}{2}x + C$.
You can choose any value of C you like... 0 is the easiest.
So a polynomial that has the turning points you mentioned is
$y = \frac{1}{4}x^4 - \frac{1}{6}x^3 - \frac{1}{2}x^2 + \frac{1}{2}x$.
when i multiply
$\frac{dy}{dx} = (x + 1)\left[x - \left(\frac{3 + \sqrt{7}}{4}\right)\right]\left[x - \left(\frac{3 - \sqrt{7}}{4}\right)\right]$
I get $8x^3 - 4x^2 - 11x + 1$
4. Originally Posted by mathprob
when i multiply
$\frac{dy}{dx} = (x + 1)\left[x - \left(\frac{3 + \sqrt{7}}{4}\right)\right]\left[x - \left(\frac{3 - \sqrt{7}}{4}\right)\right]$
I get $8x^3 - 4x^2 - 11x + 1$
After multiplying by 8, of course .....
Yes, I get the same result. Prove It made some small mistakes. But they don't affect the method he has shown you and you should be able to get the final answer without trouble.
5. Originally Posted by mr fantastic
After multiplying by 8, of course .....
Yes, I get the same result. Prove It made some small mistakes. But they don't affect the method he has shown you and you should be able to get the final answer without trouble.
Well, we never learned integrals; that's why I could not use his method, even though I know integrals.
I got $12x^4 - 8x^3 - 33x^2 + 6x = 0$
I need confirmation though; I'll get some needed points for this.
6. Originally Posted by mathprob
Well, we never learned integrals; that's why I could not use his method, even though I know integrals.
I got $12x^4 - 8x^3 - 33x^2 + 6x = 0$
I need confirmation though; I'll get some needed points for this.
Why are you doing questions that require integration if you never learned it?
Your answer is wrong. Read this: Integrals of Polynomials
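For the record, one consistent version of the computation (a sketch; multiplying the derivative by any nonzero constant leaves its roots, and hence the turning points, unchanged):
$$(x+1)\left[x - \frac{3+\sqrt{7}}{4}\right]\left[x - \frac{3-\sqrt{7}}{4}\right] = x^3 - \frac{1}{2}x^2 - \frac{11}{8}x + \frac{1}{8} = \frac{1}{8}\left(8x^3 - 4x^2 - 11x + 1\right),$$
so one may take $y' = 8x^3 - 4x^2 - 11x + 1$ and integrate to get
$$y = 2x^4 - \frac{4}{3}x^3 - \frac{11}{2}x^2 + x + C.$$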
http://mathhelpforum.com/calculus/141970-using-geometry-calculate-volume-solid-under-z-sqrt-64-x-2-y-2-a.html

# Thread:
1. ## Using geometry, calculate the volume of the solid under $z = \sqrt{64 - x^{2} - y^{2}}$
Using geometry, calculate the volume of the solid under $z = \sqrt{64 - x^{2} - y^{2}}$ and over the circular disk $x^{2} + y^{2} \le 64$.
2. $x^2+y^2+z^2 = 64$ is the equation of a sphere with radius 8...........
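For completeness, a sketch of the intended geometric answer (assuming the disk is $x^{2}+y^{2}\le 64$): the graph $z=\sqrt{64-x^{2}-y^{2}}$ is the upper hemisphere of that sphere, so the solid is half a ball of radius $8$ and
$$V = \frac{1}{2}\cdot\frac{4}{3}\pi\cdot 8^{3} = \frac{1024\pi}{3}.$$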
http://physics.stackexchange.com/questions/33767/classical-limit-of-schrodinger-equation

# Classical Limit of Schrodinger Equation
There is a well-known argument that if we write the wavefunction as $\psi = A \exp(iS/\hbar)$, where $A$ and $S$ are real, and substitute this into the Schrodinger equation and take the limit $\hbar \to 0$, then we will see that $S$ satisfies the Hamilton-Jacobi equation (for example see http://physics.bu.edu/~rebbi/hamilton_jacobi.pdf).
I understand this, however I feel that I don't understand the claim that this shows that quantum mechanics reduces to classical mechanics in the $\hbar \to 0$ limit. I am confused because I would think that in order to show that QM reduces to CM we would need to show that as $\hbar \to 0$, $|\psi(x,t)|^2$ becomes very narrow and that its center moves in a classical trajectory, i.e. $|\psi(x,t)|^2=\delta(x-x_\text{classical}(t))$. And it seems that the above argument does not at all show this. In fact, my understanding is that all that matters for the physical measurement of position is $|\psi|^2$ (since this gives the probability distribution) and hence the phase factor $\exp(iS/\hbar)$ seems to not matter at all.
Moreover, some books (see pg 282 of http://www.scribd.com/doc/37204824/Introduction-to-Quantum-Mechanics-Schrodinger-Equation-and-Path-Integral-Tqw-darksiderg#download or pgs 50-52 of Landau and Lifshitz) give a further argument to the one mentioned above. They further say that if $\psi = A \exp(iS/\hbar)$, then $|\psi|^2 = A^2$ satisfies the classical continuity equation for a fluid with a corresponding velocity $\nabla S/m$, which in the $\hbar \to 0$ limit is equal to the classical velocity.
This argument makes more sense to me. However, I still have some questions about this. (1) I know that there are stationary states whose modulus squared does not evolve in time, which seems to contradict this interpretation of a fluid flowing with velocity v. (2) The fluid interpretation seems to perhaps suggest to me that the wavefunction reduces in the classical limit more to a wave than to a particle. (3) This doesn't show that the wavefunction is narrow.
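(For reference, a sketch of how the substitution works out for a single particle in a potential $V$, which is the computation the argument above alludes to: inserting $\psi = A e^{iS/\hbar}$ into $i\hbar\,\partial_t\psi = -\frac{\hbar^2}{2m}\nabla^2\psi + V\psi$ and separating real and imaginary parts gives
$$\frac{\partial S}{\partial t} + \frac{(\nabla S)^2}{2m} + V = \frac{\hbar^2}{2m}\frac{\nabla^2 A}{A}, \qquad \frac{\partial A^2}{\partial t} + \nabla\cdot\!\left(A^2\,\frac{\nabla S}{m}\right) = 0,$$
so formally dropping the $O(\hbar^2)$ term leaves the Hamilton-Jacobi equation together with the continuity equation for the density $A^2$.)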
-
dab, consider learning $\LaTeX$, see meta.math.stackexchange.com/questions/1773/… and en.wikipedia.org/wiki/… – Yrogirg Aug 9 '12 at 5:16
## 4 Answers
The subtlety is that an arbitrary wavefunction doesn't reduce to a point of the classical phase space in the limit $\hbar \to 0$ (thinking about phase space makes more sense since in the classical limit one should have definite coordinates and momenta).
So one could ask, which wavefunctions do. And the answer is that the classical limit is built on the so-called coherent states -- the states that minimize the uncertainty relation (though I don't know any mathematical theorem proving that it's always true in the general case, but in all known examples it is indeed so). States close to the coherent ones can be thought of as some "quantum fuzz", corresponding to the quasiclassical corrections of higher orders in $\hbar$.
Example of this for the harmonic oscillator can be found in Landau Lifshits.
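Concretely, for the harmonic oscillator the coherent state keeps a Gaussian profile of fixed width whose center follows the classical trajectory (a sketch of that Landau-Lifshitz result, with notation chosen here):
$$|\psi(x,t)|^2 = \frac{1}{\sqrt{2\pi}\,\sigma}\exp\!\left(-\frac{(x - a\cos\omega t)^2}{2\sigma^2}\right), \qquad \sigma = \sqrt{\frac{\hbar}{2m\omega}},$$
so as $\hbar \to 0$ the density tends to $\delta(x - a\cos\omega t)$, which is exactly the narrow classical packet the question asks for.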
Regarding the fluid argument. About your remark (1): the $|\psi|^2$ for the stationary state is indeed stationary, but it still satisfies the continuity equation since the current is zero for such states. Your remarks (2) and (3) are quite right because, as I already said, the classical limit can't be sensibly taken for arbitrary states, it is built from coherent states.
And also I must admit that the given fluid argument indeed doesn't provide any classical-limit manifestation. It's just an illustration that "everything behaves reasonably well" to convince readers that everything is OK and to presumably drive their attention away from the hard and subtle point -- it often happens in $\it{physics}$ books, probably unintentionally :). The problem of a nice classical limit description is actually an open one (though often underestimated), leading to rather deep questions, like the systematic way to obtain the symplectic geometry from the classical limit. In my opinion it is also connected to the problem of quantum reduction (known also as the "wave function collapse").
-
Your question is studied and answered in my recent paper
U.Klein, What is the limit $\hbar \to 0$ of quantum theory?, Am.J.Phys. vol.80, 1009 (2012).
Preprint version is available here.
-
Can you please provide a link to the paper? – Ferdinando Randisi Nov 6 '12 at 21:07
@FerdinandoRandisi The author provided a link to arXiv:1201.0150 in another post, after losing contact with the account that posted this. – dmckee♦ Nov 9 '12 at 17:11
It would also help if you provided a brief answer to the question instead of just linking to a technical paper. This will help others understand things better. – mythealias Nov 10 '12 at 18:51
The main result of the paper "What is the limit ℏ→0 of quantum theory?" is that the classical limit of quantum theory is not classical mechanics but a classical statistical theory. My "technical paper" has been written with the idea in mind to contribute to the understanding (the interpretation) of quantum theory. The final conclusion of the paper presents - in my opinion- a strong argument in favor of the statistical interpretation of quantum theory. Motivation and conclusions are discussed in more detail in sections I and VIII of the paper. I will be happy to answer specific questions but please note that almost all answers I am able to give may be found on my website "http://statintquant.net" (I am just updating this, will be finished soon).
-
I've merged you unregistered accounts again. – dmckee♦ Nov 13 '12 at 13:34
Good discussion. As per the comment of U. Klein above, the $\hbar \rightarrow 0$ limit of quantum theory is not what people might think. It is natural, and intuitive, as explained above, to assume that the classical limit is a property of a certain class of states.
As it happens, that view is incorrect. You can actually obtain an exact recovery of Hamiltonian Classical Point Mechanics for any value of $\hbar$ using a different wave-equation: $$i\hbar\frac{d}{dt}|\psi\rangle = [H(\langle q\rangle,\langle p \rangle)\hat{1} + H_q(\langle q\rangle,\langle p \rangle)(\hat{q}-\langle q\rangle) + H_p(\langle q\rangle,\langle p \rangle)(\hat{p}-\langle p\rangle)]|\psi\rangle$$ This wave-equation propagates any state along classical trajectories and is unique (i.e. you can derive it and show it is unique). The nonlinearity is essential and comes from the presence of the expectation values in the parameters multiplying each of the three operator terms.
When one becomes familiar with this result then it becomes clear why there is so much confusion.
People thought that the classical limit should be contained within quantum theory.
The simple truth is that it is not. The limit does not exist within the theory.
It involves a very specific equation which is outside linear quantum mechanics. However, it is simple, since this equation produces the expected Ehrenfest relations: $$\frac{d \langle q \rangle}{dt} =+H_p(\langle q\rangle,\langle p \rangle)$$ and $$\frac{d \langle p \rangle}{dt} =-H_q(\langle q\rangle,\langle p \rangle)$$ which is the classical limit of wave-packets following classical paths.
The relevant equation was first derived in K.R.W. Jones (1991), "The Classical Schroedinger Equation" UM-P-91/45 (CSE) http://arxiv.org/abs/1212.6786. A simpler version was published in 1992: K.R.W. Jones (1992), "Classical mechanics as an example of generalized quantum mechanics" Phys. Rev. D45, R2590-R2594. http://link.aps.org/doi/10.1103/PhysRevD.45.R2590. The original pre-print containing the derivation and proof of uniqueness was posted on arXiv a few days ago (see link above).
The fact that the equation is nonlinear may explain why the literature missed it for so long. It is simple, but it is not trivial, nor is it obvious until you know it.
The derivation of the CSE involves group theory and an unusual nonlinear representation of the Heisenberg--Weyl group. The mathematical system of nonlinear quantum theory involved is that discovered first by Weinberg and (independently) by Jones.
A general prescription of how to take a classical limit using a dimensionless parameter $\lambda \rightarrow 0$ is given in the paper: K.R.W. Jones (1993), "A general method for deforming quantum dynamics into classical dynamics while keeping $\hbar$ fixed" Phys. Rev. A48, 822-825.
There you will find the general argument of how to do a classical limit consistently so that the phase space, trajectories and all other properties are recovered.
It is surprising that the physics community has not gotten to grips with this yet. However, the question is an excellent one and subtle indeed since the mathematics involved did not exist prior to 1989. The connection between this area and the Feynman path integral is particularly interesting.
-
Sorry, added the actual equation and explained it more fully in self-contained fashion. – Kingsley Jones Jan 5 at 20:52
http://simple.wikipedia.org/wiki/Computer_Algebra_System

# Computer Algebra System
A computer algebra system (CAS) is a large computer program that helps people with mathematics and algebra. It changes and moves around (manipulates) mathematical equations and expressions containing numbers and symbols called variables. Variables can stand for known or unknown values that can be solved for, or can be replaced with any value. It always keeps the formula exactly the same mathematically as the original equation or expression, unless it is being transformed by a transformation algorithm.
A CAS might be used for integration or differentiation (and other calculus transformations), simplification of an expression (making it smaller and/or simpler), optimizing and finding the minimum and maximum values of results or variables, etc. Most have the ability to plot functions and expressions for visualization, which can be helpful and educational too.
## Symbolic capabilities
Computer algebra systems range from special-purpose programs, focusing on only a few types of symbolic math, to very large, general-purpose programs that do almost everything (for example, the free CAS Maxima,[1] which is the oldest CAS still under development). The results output by a good computer algebra system are often exact, simple, and generalized to work in all possible cases. Computer programs do have bugs, so important results should always be verified for correctness.
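As a small illustration of these symbolic capabilities (a sketch using SymPy, one free Python CAS; the particular expressions are just examples):

```python
import sympy as sp

x = sp.symbols('x')

# Simplification: the expression is rewritten but stays
# mathematically identical.
print(sp.simplify(sp.sin(x)**2 + sp.cos(x)**2))   # 1

# Calculus transformations: exact symbolic results.
g = x * sp.exp(-x**2)
print(sp.diff(g, x))                    # derivative of g
print(sp.integrate(g, (x, 0, sp.oo)))   # 1/2, exact
```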
## Numeric capabilities
Modern computer algebra systems often include extensive numeric capabilities for convenience and which fit together with its symbolic abilities. Numeric domains supported often include real, complex, interval, rational, and algebraic numbers.
Usually floating point arithmetic is available to use if desired, because such arithmetic is done very quickly by most computer hardware. The downside of floating point arithmetic is that it is not always exact, which makes it unsuitable when guaranteed precision is required: double precision floats carry only about 15–16 significant decimal digits, and round-off error can grow with each calculation, so sometimes even fewer digits are accurate.[2] Rational number arithmetic is exact if all numbers are rational. Interval arithmetic can be used to easily calculate the total possible error of an inexact arithmetic system. Complex number arithmetic is generally supported by allowing the imaginary unit ($i$) in expressions and following all of its algebraic rules.
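A tiny sketch of the contrast in plain Python (the specific numbers are only illustrative):

```python
from fractions import Fraction

# Floating point: fast hardware arithmetic, but not exact.
print(0.1 + 0.2 == 0.3)   # False, binary round-off
print(sum([0.1] * 10))    # 0.9999999999999999, error accumulates

# Rational arithmetic: exact whenever all inputs are rational.
print(Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10))  # True
```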
http://mathoverflow.net/revisions/67303/list
It is easy to see that whenever a space has an unconditional basis then the space of diagonal operators of the basis is equivalent to $\ell_\infty$. If $c_0$ embeds in $K(X,Y)$ then $K(X,Y)$ is not complemented in $B(X,Y)$. One reference for this is: M. FEDER. On subspaces of spaces with an unconditional basis and spaces of operators. Illinois J. Math. 34 (1980), 196-205.
It is also a direct consequence of a result from a Studia paper of Tong and Wilken from 1971. Here they prove that if $Y$ has an unconditional basis then $K(X,Y)$ is uncomplemented in $B(X,Y)$ (assuming the spaces are not equal).
As far as I know the Argyros-Haydon space is the first example of a space for which it is known that $K(X)$ is complemented in $B(X)$.
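To spell out the first remark (a sketch added for concreteness): if $(e_n)$ is an unconditional basis of $X$, then each bounded scalar sequence $a=(a_n)\in\ell_\infty$ defines a diagonal operator $$T_a\Big(\sum_n x_n e_n\Big) = \sum_n a_n x_n e_n, \qquad \|T_a\| \le C\,\|a\|_\infty,$$ with $C$ the unconditional constant of the basis, and $a \mapsto T_a$ is an isomorphic embedding of $\ell_\infty$ onto the algebra of diagonal operators.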
http://physics.stackexchange.com/questions/22896/why-is-compressible-flow-near-the-choke-point-so-efficient

# Why is compressible flow near the choke point so efficient?
Imagine a steady state, one-dimensional, compressible flow in a horizontal pipe of constant cross sectional area. This flow can be isothermal, adiabatic (Fanno), or diabatic (Rayleigh). As an example, the relevant macroscopic energy balance for Fanno flow is: $$\Delta h=-\Delta (KE)$$
In other words, Fanno flow converts enthalpy into kinetic energy. Below are Fanno lines for a real gas (steam) for different flow rates at the same pipe ID and inlet conditions.
The lines clearly indicate the "efficiency" with which enthalpy is converted to kinetic energy. At low velocity (left upper portion of lines), almost no enthalpy is converted while the entropy generation is large (the line is nearly horizontal). Near Mach 1 (the rightmost maximum entropy point for each curve) the enthalpy conversion is high and entropy generation is nearly zero (the line is nearly vertical). This happens whether you are originally subsonic (the upper branch) or supersonic (the lower branch). Another way of saying no entropy generation occurs is that the viscous losses (friction) are zero. This would imply that the molecules no longer impart (as much?) force upon each other near the choke point. This seems absurd.
In isothermal flow, you also find that as you near the choke point (interestingly, below Mach 1!), the flow approaches frictionless.
As I understand it, these examples are physically realizable flow regimes (if obviously very short in length scale). Why is compressible flow so efficient near the choke point? Is something at the molecular scale really going on or is it simply a poor mathematical model at these velocities?
UPDATE: Obviously, in the supersonic regime (lower branch), the reverse process takes place, i.e. kinetic energy is converted into enthalpy. Still, my question holds since this conversion is nearly perfect as you approach Mach 1.
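For reference, the location of the entropy maximum can be derived from three standard one-line relations (a sketch, independent of the friction law): constant stagnation enthalpy gives $dh = -V\,dV$; constant mass flux $\rho V = G$ gives $d\rho/\rho = -dV/V$; and the Gibbs relation reads $T\,ds = dh - dp/\rho$. Setting $ds = 0$ at the entropy extremum,
$$dp = \rho\,dh = -\rho V\,dV = V^{2}\,d\rho \quad\Longrightarrow\quad \left(\frac{\partial p}{\partial \rho}\right)_{s} = V^{2} = c^{2},$$
so the maximum-entropy point is exactly sonic, whatever the magnitude of the friction.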
-
My first impulse is that this is an effect of the slowing down of pressure transmission in the direction of motion. Pressure waves travel at the speed of sound, so that the back propagating pressure profile will move arbitrarily slowly near mach one, and this might be making the process close to perfectly adiabatic. – Ron Maimon Mar 28 '12 at 7:53
@RonMaimon but if that is part of the answer then why does the same flattening of dS/dH occur for supersonic flow decelerating to Mach 1? – kleingordon Mar 29 '12 at 4:07
In either case, the pressure waves running backwards/forwards are asymptoting to zero velocity because the flow is the same as the speed of sound. This might mean that the process becomes more perfectly adiabatic at this point, because slow. I am not sure. This is just an impulse, I didn't sit down and think about it. – Ron Maimon Mar 29 '12 at 4:54
I still don't have a full answer, but something to keep in mind is that the flow never becomes "frictionless" - even near the choke point the fluid is still experiencing the Fanning friction with the wall of the pipe - the solutions you plot assume that the Fanning friction coefficient is constant for the whole flow. So, perhaps a way to refine the question is to ask why the frictional heating doesn't raise the entropy near the choke point. I do think Ron's observation is ultimately responsible, although an elaboration would be nice. – kleingordon Mar 29 '12 at 8:06
@kleingordon I agree that the flow always experiences friction since you can only asymptotically approach isentropic flow. I am interested in why the relative effect of friction decreases as velocities approach Mach 1 from either side. Also, the Fanno flow model doesn't have to assume any particular friction factor nor that it's constant across the flow. These curves can be developed strictly from the mass and energy balance w/o reference to the momentum balance (4 equations in 4 unknowns). You only need the momentum balance when you are interested in the length traversed by the flow. – Jason Waldrop Mar 29 '12 at 14:09
## 1 Answer
Instead of thinking about the efficiency of the acceleration, think about the flow itself; the flow is in its maximum entropy state at the throat. Any additional dissipative force would shift the choking location, but it would still occur at Mach 1.
-
I'm glad you posted an answer to keep the activity alive on this one. He can chime in to correct me if necessary, but my feeling is that @JasonWaldrop is looking for intuition for why the maximum entropy state occurs at Mach 1 (for the adiabatic case), from a microscopic point of view. – kleingordon May 4 '12 at 9:17
Okay, so why is Mach 1 the maximum entropy state. I think it's that the thermal boundary layer is fully developed at the throat in adiabatic flow; that is, while there is still momentum transfer, there is no heat transfer (thermal equilibrium == maximum entropy). – Ghillie Dhu May 7 '12 at 16:18
To clarify slightly: I am interested in why the conversion of enthalpy into kinetic energy becomes more efficient (my definition of efficiency as: less relative entropy production per unit production of kinetic energy) as you approach the choke point (Mach 1 for Fanno flow). Maybe another way of saying it that reinforces @kleingordon post is "What is occurring at the microscopic level in terms of reducing production of disorder when the flow is near the choke point?" – Jason Waldrop May 7 '12 at 16:49
Just thinking out loud (sts): At the microscopic level, entropy is defined by Boltzmann's equation $S = k * \ln{\Omega}$. Boltzmann's constant ($k$) is, well, constant; for $S$ to increase, $\Omega$ must increase. I've only worked with microstates for stationary gas, so I don't know offhand why the number of microstates consistent with the macrostate is maximum at Mach 1. – Ghillie Dhu May 9 '12 at 16:03
http://math.stackexchange.com/questions/198160/subgroups-of-mathbb-z-times-mathbb-z-n-mathbb-z?answertab=oldest

# Subgroups of $\mathbb Z \times(\mathbb Z/n\mathbb Z)$
So I am dealing with a problem from Dummit (specifically 2.1.7) and am having some issues. The part of the problem in question is:
Prove the set of elements of the direct product $\mathbb{Z} \times (\mathbb{Z} / n \mathbb{Z})$ of infinite order together with the identity is not a subgroup of $\mathbb{Z} \times (\mathbb{Z} / n \mathbb{Z})$.
I am assuming this question is referring to addition as the binary operation for both sets. I know the proof basically involves showing that the product is not closed; however, do not all the elements of the integers have infinite order (except the identity)? In that case, the elements of infinite order together with the identity would form all of $\mathbb{Z} \times (\mathbb{Z} / n \mathbb{Z})$ would it not?
Thanks
-
The answer to your last question: The element $(0,a)\in \mathbb{Z}\times\mathbb{Z}/n\mathbb{Z}$ has finite order, but is not the identity unless $a$ is. – Jason DeVito Sep 17 '12 at 19:44
Well this is embarrassing, I cannot believe I overlooked such a simple thing. Thanks guys. – KF Gauss Sep 18 '12 at 2:11
@KFGauss Just FYI, if one of the answers settles your question satisfactorily you are encouraged to "accept" the answer. To do this, click on the grey check-mark to the left of it, which will then turn green. This rewards the user who answered your question and shows other users that the question has been answered. – Alex Becker Sep 19 '12 at 21:49
## 2 Answers
Consider the elements $(1,[ 1])$ and $(-1, [ 0])$; they each have infinite order but their sum has finite order and is not the identity.
-
You need $n>2$ for this to work. – Brian M. Scott Sep 17 '12 at 19:46
I see that now, thank you. I have changed it. – Tarnation Sep 17 '12 at 19:51
HINT: $\langle 1,1\rangle$ and $\langle -1,0\rangle$ both have infinite order.
-
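To spell out the hint (a remark added for completeness): $(1,[1]) + (-1,[0]) = (0,[1])$, and $n\cdot(0,[1]) = (0,[0])$, so the sum has finite order $n$ while differing from the identity whenever $n \ge 2$; hence the set is not closed under the operation.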
http://www.physicsforums.com/library.php?do=view_item&itemid=30

Physics Forums
photoelectric effect
Definition/Summary
When a metal surface is irradiated, it ejects electrons whose kinetic energy can be measured. This electron emission only happens when the irradiating light is above a certain angular frequency $\omega_{0}$. This frequency threshold is found to be independent of the intensity of the radiation. The kinetic energy of the electrons is found to be linearly related to the frequency of light after the threshold, with $$T = \hbar(\omega - \omega_{0})$$
Scientists
Einstein
Breakdown
Physics > Quantum >> Applications
Extended explanation
In most elementary courses, the frequency of light is used instead of the angular frequency. The two are related by: $$\omega = 2\pi \nu$$

The photoelectric effect was important in realising the true nature of light. Classically light is a wave and as such the energy from it should arrive at a uniform rate. In this picture there would be no reason for the electron energy to depend on the frequency and be independent of intensity. This led Einstein to postulate that light was carried in little packets of energy called photons.

The amount of energy needed to overcome the binding energy of the electron in the metal is called the work function ($\phi$). The work function is obviously the same amount of energy as carried by a photon of threshold frequency; $\phi = h\nu_0$. Thus the maximum energy an electron can gain from a photon of energy $E = h\nu$, is: $$T = h\nu - \phi = h (\nu - \nu_0)$$
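As a worked example of the last formula (a sketch in Python; the sodium work function of about $2.28\ \mathrm{eV}$ and the $400\ \mathrm{nm}$ wavelength are assumed illustrative values, not taken from this entry):

```python
h = 6.626e-34    # Planck constant, J*s
c = 2.998e8      # speed of light, m/s
eV = 1.602e-19   # joules per electronvolt

wavelength = 400e-9   # assumed: 400 nm light
phi = 2.28            # assumed: work function of sodium, in eV

photon_energy = h * c / wavelength / eV   # E = h*nu, in eV
T_max = photon_energy - phi               # T = h*nu - phi
print(photon_energy, T_max)               # about 3.10 eV and 0.82 eV
```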
Commentary
develish16 @ 10:48 AM Mar23-09
Why do I have the feeling that the person who suggested this topic copied the whole content from a high school book without thinking about it for a while? Look, when it comes to the whole issue of the photoelectric effect it is all correct, but when you say that this theory gave the true aspect of light, here you made a mistake. Actually, light is a very weird enigma that could solve many issues in physics if we came up with the right definition of it and its true aspect. When you say that the photons (i.e. the corpuscular aspect of light) came instead of the wave form, this is wrong!!! Till now we only know that each described aspect can be applied within a certain limit (wave form for explaining diffraction and corpuscular for explaining the photoelectric effect). They are both complementary. But thanks anyway for bringing up this topic. It's one of the few that I like the most.
dx @ 09:24 AM May10-08
second sentence, electron spelled wrong.
http://mathoverflow.net/questions/87238/morse-kelley-set-theory-consistency-strength/100699 | ## Morse-Kelley set theory consistency strength
I've come across several references to MK (Morse-Kelley set theory), which includes the idea of a proper class and a limitation of size, and includes the axiom schema of comprehension across class variables (so for any $\phi(x,\overline y)$ with $x$ restricted to sets, there is a class $X=\{x : \phi(x,\overline y)\}$).
I have seen various statements about MK and how it proves the consistency of various things, including $Con(ZF)$, $Con(ZFC)$, $Con(NBG)$, and in fact, for any $T\subset MK$ finitely axiomatized, it proves $Con(T)$.
However, and quite frustratingly, I don't see any references to back up these claims, except occasionally links to other places where the claim was made, but not proven (or even proof-sketched). I would really appreciate a reference where I can see a proof of these claims, or (if it's easier) a quick sketch of why it should be true.
It's not obvious to me at all why quantifying across proper classes should allow this sort of thing, since all relevant sets (sets of proofs, or sets of statements, or whatever) should be contained in some subset of $\omega$, so should be able to be constructed in $ZF$.
-
## 4 Answers
Let me give an easier (sketch of an) answer to the part of the question about proving Con(ZFC) in MK. Unlike Emil's answer, the following does not cover the case of arbitrary finitely axiomatized subtheories of MK.

Intuitively, there's an "obvious" argument for the consistency of ZFC: All its axioms are true when the variables are interpreted as ranging over arbitrary sets. (The universe is a model of ZFC, except that it isn't a set.) And anything deducible from true axioms is true, so you can't deduce contradictions from ZFC. The trouble with this argument is that it relies on a notion of "truth in the universe" that can't be defined in ZFC.

What goes wrong if you try to define, in the language of ZFC, this notion of truth (or satisfaction) in the universe? Just as in the definition of truth in a (set-sized) model, you'd proceed by induction on formulas, and there's no problem with atomic formulas and propositional connectives. Quantifiers, though, give the following problem: The truth value of $\exists x\ \phi(x)$ depends on the truth values of all the instances $\phi(a)$, and there are a proper class of these.

In showing that definitions by recursion actually define functions, one has to reformulate the recursion in terms of partial functions that give enough evidence for particular values of the function being defined. (For example, the usual definition of the factorial can be made into an explicit definition by saying $n!=z$ iff there is a sequence $s$ of length $n$ with $s_1=1$ and $s_k=ks_{k-1}$ for $2\leq k\leq n$ and $s_n=z$.) If you use the same method to make the definition of "truth in the universe" explicit, you find that the "evidence" (analogous to $s$ for the factorial) needs to be a proper class. So ZFC can't handle that (and it's a good thing it can't, because otherwise it would prove its own consistency).

But MK can; it's designed to deal nicely with quantification of proper classes. So in MK, one can define what it means for a formula to be true in the ZFC universe. Then one can prove that all the ZFC axioms are true in this sense and truth is preserved by logical deduction (here one uses induction over the number of steps in the deduction). Therefore deduction from ZFC axioms can never lead to contradictions.
-
I believe that all the details of this approach to proving Con(ZFC) in MK are in Mostowski's book "Constructible Sets With Applications". – Andreas Blass Feb 1 2012 at 19:47
Not quite, one also needs to prove that contradictions are not true in the ZFC universe (which can easily be done). – Ricky Demer Feb 1 2012 at 21:09
@Ricky: You're right. The non-truth of contradictions is, fortunately, covered by the "no problem with ... propositional connectives" in my answer. Once you define what truth means for conjunctions and for negations, it is immediate that $\phi\land\neg\phi$ is never true. – Andreas Blass Feb 1 2012 at 23:43
This is an instance of a much more general result. (See Visser for an overview of various related principles.) A theory is called sequential if it supports encoding of sequences of its objects with some basic properties. As a part of the definition (which I omit here as it is technical and not particularly relevant; it can be found in Pudlák, and see Visser for more discussion), a sequential theory has some designated natural numbers (which serve as lengths of sequences) defined by a predicate $N(x)$. Usual theories of sets or classes are sequential, with $N(x)$ being $x\in\omega$.
Theorem: For any sequential theory $T$, the following are equivalent:
1. $T$ proves full induction: the schema $$\forall\bar y\,[\varphi(0,\bar y)\land\forall x\,(N(x)\land\varphi(x,\bar y)\to\varphi(x+1,\bar y))\to\forall x\,(N(x)\to\varphi(x,\bar y))]$$ for all formulas $\varphi$.
2. $T$ is uniformly essentially reflexive: for every formula $\varphi(x)$ and a finite subtheory $S\subseteq T$, $T$ proves $N(x)\land\Pr_S(\left\ulcorner\varphi(\dot x)\right\urcorner)\to\varphi(x)$, where $\Pr_S$ denotes the provability predicate for $S$, and $\dot x$ the numeral for $x$.
MK proves full induction, since it has induction for subsets of $\omega$, and the full comprehension schema guarantees that any property of natural numbers defined by a formula actually defines a subset of $\omega$. (Notice that this fails for NBG: due to the restrictions on its comprehension schema, NBG in general cannot prove induction for formulas with class quantifiers.) Thus, MK is uniformly essentially reflexive. In particular, if we take $0\ne0$ (with no occurrence of $x$) for $\varphi$, we see that MK proves $\neg\Pr_S(\left\ulcorner0\ne0\right\urcorner)$, i.e., $\mathrm{Con}_S$, for each of its finite subtheories $S$, such as $S=\mathrm{NBG}$.
The main idea of the proof of $1\to2$ (which goes back to Montague) is that using sequence encoding, one can give partial truth definitions (i.e., truth definitions for any finite set of formulas including their substitution instances). Reasoning within the theory, if $S$ proves $\varphi$, then by the cut-elimination theorem, it has a sequent proof where each formula is a subformula of something in $S$ or $\varphi$. Using a partial truth definition for this finite set of formulas, one proves by induction on the length of proof that all sequents in the proof are true, hence also $\varphi$ holds.
-
Apologies, but I'm sure I'm missing something here. This is saying that $T$ proves that if its subtheory $S$ proves $\phi(x)$ (for $x$ natural) then $\phi(x)$ is an actual consequence of $T$. Right? I'm not sure what this gets us. I'm also not sure why ZFC (or NBG) wouldn't get full induction as well, or if they do, I'm still not sure why MK would be special. – Richard Rast Feb 1 2012 at 16:52
No, it is saying much more: $T$ proves the implication that provability of $\varphi$ in $S$ implies $\varphi$, no matter whether $\varphi$ is actually provable or not. For example, if you take for $\varphi$ a contradiction, then this implication is exactly $\mathrm{Con}_S$. As for induction, ZFC does prove full induction and it is, duly, essentially reflexive. NBG, on the other hand, only proves induction for formulas without class quantifiers. In order to get full induction in an NBG-like theory, you need comprehension for all formulas of the form $x\in\omega\land\varphi(x)$. – Emil Jeřábek Feb 1 2012 at 17:02
It might sound confusing that ZFC proves full induction, and its extension NBG does not. The point is that there are more formulas in NBG than in ZFC, hence the induction schema is also stronger. – Emil Jeřábek Feb 1 2012 at 17:08
So, if I take $\phi$ to be a contradiction, then there aren't any free variables in $\phi$, and the schema instance states that $T$ proves that $Pr_S(\phi)\rightarrow \phi$; then since $T$ proves $\lnot\phi$, we have $T$ proving $\lnot Pr_S(\phi)$ by contrapositive, which is $T$ proving $Con(S)$. And since NBG is a finitely axiomatizable subtheory of MK, we have MK proving $Con(NBG)$, which then proves $Con(ZFC)$ and $Con(ZF)$, since they are subtheories. Is that the argument? (Again, apologies; I'm new to this branch of logic and it keeps turning me around) – Richard Rast Feb 1 2012 at 17:36
Yes, exactly. I’m sorry if I was too terse, unfortunately I often take for granted things that people with different backgrounds are not familiar with. – Emil Jeřábek Feb 1 2012 at 18:02
ZF can describe the set of formulas that are not provable in ZF, but, unless it's inconsistent, it can't prove that that set is non-empty. Mostowski proved that MKM can prove this set is non-empty:
Mostowski, Andrzej, "Some impredicative definitions in the axiomatic set-theory", Fundamenta Mathematicae 37 (1950), 111-124.
-
I have recently constructed a proof of the relative consistency of ZFC with MK using an internal model. For this proof to work, MK is axiomatized over a free predicate logic with descriptions. The idea of the proof is to define within MK new "primitive" notions, such as a relativized membership relation given by

$(x \in' A) \iff (x \in A \text{ and } A \in U)$

The next step is to prove in MK all of the axioms of ZFC, but stated in the new notation.
I would be happy to send you a pdf of the proof.
[email protected]
-
http://mathhelpforum.com/pre-calculus/84134-negative-exponent.html | # Thread:
1. ## negative exponent
$-0.0583 = x^{-2}$
Solve for x.
What do you have to do to -0.0583 to find x?
2. Just remember that $x^{-a} = \frac {1}{x^a}$.
So you need to solve $-0.0583 = \frac {1}{x^2}$. Which should be pretty easy.
Hope that helps!
EDIT: Just noticed there's a wrinkle here, since the answer involves the square root of a negative number. So the actual answer is imaginary (i.e. a complex number). Still, that aside, that's how negative exponents basically work.
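For what it's worth, here is a quick numeric check in Python; `cmath.sqrt` returns the principal square root, and the negative of the printed root is of course a solution too.

```python
import cmath

# -0.0583 = x**(-2)  means  x**2 = 1 / (-0.0583)
rhs = 1 / (-0.0583)      # about -17.153
x = cmath.sqrt(rhs)      # principal root, purely imaginary
print(x)                 # approximately 4.1416j
print(x ** -2)           # recovers -0.0583, up to rounding
```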
http://math.stackexchange.com/questions/278902/proving-convergence-of-sequence-with-induction | # Proving convergence of sequence with induction
I have a sequence defined by $a_{1}=\sqrt{a}$ and $a_{n}=\sqrt{1+a_{n-1}}$, and I need to prove that it has an upper bound and therefore is convergent. So I have assumed that the sequence has a limit, and by squaring I got that the limit is $\frac{1+\sqrt{5}}{2}$, but only $\mathbf{if}$ it converges.
What methods are there for proving convergence? I am trying to show that $a_{n}<\frac{1+\sqrt{5}}{2}$ by induction, but could use some help since I have never done an induction proof before.
Progress:
Step 1 (Basis): Check that it holds for the lowest possible integer. Since $a_{0}$ is not defined, the lowest possible value is $2$.
$a_{2}=\sqrt{1+a_{1}}=\sqrt{1+\sqrt{a}}=\sqrt{1+\sqrt{\frac{1+\sqrt{5}}{2}}}< \frac{1+\sqrt{5}}{2}$.
Step 2: Assume it holds for $k\in \mathbb{N}$, $k\geq 3$. If we can prove that it then holds for $n=k+1$, we are done, and it holds for all $k$.
This is where I am stuck: $a_{k+1}=\sqrt{1+a_{k}}$. I don't know how to proceed because I don't know where I am supposed to end up.
-
What is $a$? – P.. Jan 15 at 6:01
## 4 Answers
Hint: Use the Monotone Convergence Theorem: MCT. That is, show that $a_n$ is monotone increasing, and bounded above. Increasing should be evident...
Find any upper bound: $3$, maybe; there's no need to find the least upper bound if you only need to prove convergence.
If you can do that, then by the Monotone Convergence Theorem, you can conclude the sequence converges.
Then you are done, if you only need to prove convergence.
THEN, if you want to find the value to which the sequence converges, assume $L=\lim_{n\to\infty}a_n$. Then $L$ must satisfy $L=\sqrt{1+L}$, and solve for $L$.
-
+1 for the nice insights, yet...The sequence is monotone increasing from some definite index n and on, depending heavily on what $\,a>0\,$ is. Before that index, the sequence is decreasing, and there can be lots of indexes with that characteristic. In fact, I think it is possible to prove for any $\,k\in\Bbb N\,$ there exists $\,a(k)=a>0\,$ big enough so that $\,\{a_n\}_{n\leq k}\,$ is decreasing. – DonAntonio Jan 15 at 3:17
So the problem is to show that is bounded above by $\dfrac{1+\sqrt{5}}{2}$.
Why don't you prove something easier, that is bounded above by $3$?
-
Hint: Show that it is monotone and bounded. This guarantees convergence; then suppose $L=\lim_{n\to\infty}a_n$. Then $L$ must satisfy $$L=\sqrt{1+L}.$$ Solve for $L$.
-
Let $f(x) = \sqrt{1+x}-x$. We find that $f'(x) = \frac 1 {2\sqrt{1+x}} -1 <0$. This means that $f$ is a strictly decreasing function. Set $\phi = \frac{1+\sqrt 5}{2}$.
We know that $f(\phi)=0$. We must then have $f(x)>0$ if $x<\phi$ and $f(x)<0$ if $x>\phi$. So $a_{n+1}>a_n$ if $a_n< \phi$ and $a_{n+1} < a_n$ if $a_n > \phi$.
I claim that if $a_1<\phi$ then $a_n < \phi$ for all $n$. This is proven by induction. Assume that $a_k < \phi$. Then $a_{k+1}^2 = a_k +1 < 1+\frac{1+\sqrt 5}{2}= \frac{6+2\sqrt{5}}{4} =\phi^2$. So $a_{k+1} < \phi$ and by induction we get that $a_n < \phi$ for all $n$.
If $a_1<\phi$ we thus get a bounded increasing sequence, and all bounded increasing sequences converge. The case $a_1>\phi$ is left as an exercise.
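If it helps to see the convergence numerically, here is a minimal Python sketch; the starting values are arbitrary choices on either side of $\phi$.

```python
import math

phi = (1 + math.sqrt(5)) / 2     # fixed point of x = sqrt(1 + x)

for a1 in (0.5, 1.0, 3.0):       # starting values below and above phi
    a = a1
    for _ in range(40):
        a = math.sqrt(1 + a)     # a_n = sqrt(1 + a_{n-1})
    print(f"a_1 = {a1}: a_40 = {a:.12f}")

print(f"phi         = {phi:.12f}")
```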
-
http://math.stackexchange.com/questions/63465/is-there-a-closed-form-for-the-nth-integral-of-a-polynomial | # Is there a closed form for the $n$th integral of a polynomial?
Say I have some polynomial $p(x)$ and want to express its $n$th integral, is there a closed form for this?
-
It doesn't have a unique $n^{th}$ integral; this is only well-defined up to a polynomial of degree $n-1$. – Qiaochu Yuan Sep 11 '11 at 3:45
I don't see your point. Take for instance $p(x) = x$, so its degree is 1. We have the 2nd integral as $\frac{x^{3}}{6}$, however... Edit: that is, plus a constant – Pedro Sep 11 '11 at 3:49
The $n^{\rm th}$ integral of $x^r$ is $$\frac{r!}{(r+n)!} x^{r+n} \ \ (+ \ \text{arbitrary poly of degree } n-1).$$ You can then use linearity to add the integrals of individual terms. – Srivatsan Sep 11 '11 at 3:50
On the other hand, $$\underbrace{\int_0^x\int_0^{t_{n-1}}\cdots\int_0^{t_1}}_{n} t^k\;\mathrm dt\cdots\mathrm dt_{n-2}\mathrm dt_{n-1}=\frac1{(n-1)!}\int_0^x t^k (x-t)^{n-1}\mathrm dt=\frac{k!}{(n+k)!}x^{n+k}$$ – J. M. Sep 11 '11 at 3:53
Oh, now I see. I forgot to integrate the constant along with it – Pedro Sep 11 '11 at 3:53
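One can sanity-check Srivatsan's formula with a computer algebra system. Here is a small SymPy sketch; the choice $r=2$, $n=3$ is arbitrary, and all integration constants are taken to be zero, matching the convention above.

```python
import sympy as sp

x = sp.symbols('x')
r, n = 2, 3                               # arbitrary example values

expr = x**r
for _ in range(n):                        # integrate n times, constants set to 0
    expr = sp.integrate(expr, x)

formula = sp.factorial(r) / sp.factorial(r + n) * x**(r + n)
print(expr)                               # x**5/60
print(sp.simplify(expr - formula) == 0)   # True
```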
http://mathoverflow.net/questions/33095?sort=oldest | ## How/where are semi-log resolutions used?
In the paper by János Kollár there is Problem 19 (page 8). It asks for one more kind of strict resolution: a resolution that leaves untouched the semi-simple-normal-crossings singularities of pairs.
My question is: How/where is that kind of resolution used/needed?
Quick definitions:
Pair: $(X,D)$ with $X$ algebraic variety and $D$ a Weil divisor on it.
Semi-simple-normal-crossings: A point in $X$ where $X$ is (locally) a union of coordinate hyperplanes and $D$ is given by intersecting $X$ with some of the other coordinate hyperplanes not contained in $X$.
-
By the way, the name is "János Kollár". – Sándor Kovács Oct 18 2010 at 4:55
Fixed. Thank you. – Franklin Oct 19 2010 at 6:05
## 2 Answers
Perhaps a little more explanation would be this:
One of the first things we learn in algebraic geometry is normalization and we are told that it is "harmless" to assume that something is normal since the normalization exists and canonical and all that jazz. This is fine as long as one studies a stand alone object, but once they come in a family, it is no longer true.
Consider a family of curves. Recall that for curves normal=smooth, so if not all members of the family are smooth, which is the likely scenario, then they are also not all normal. Now the problem is, there is no way to normalize the family members so that they stay in a family. For instance, a family of smooth cubic plane curves degenerates to a singular one, but the normalization of the singular cubic has genus $0$, while the cubic curves have genus $1$, so they can't be members of the same family. Also, if you try to resolve the singularities by blowing up you'll see that you can resolve all singularities to be normal crossings, but you cannot do better and you also add new irreducible components to the singular fibers. This leads one to do semi-stable reduction, which is actually another story, so I won't get into that.
Anyway, for curves, we can actually make do with handling only smooth and simple normal crossing points. In higher dimensions if one tries to do the same, then there are other singularities that one must allow and these are the semi-log canonical (a.k.a. slc) singularities Zsolt mentioned.
OK, so maybe the above convinces you that if you want to do moduli theory and you want to study compact moduli spaces, that is, you actually would like to understand degenerations as well and not just the nice part, then you have to deal with non-normal, in particular with slc singularities. (Actually, "s-something" is usually the non-normal version of "something", accordingly slc is the non-normal version of lc).
Well, now how do you define a non-normal version of a singularity that is otherwise defined via some properties of exceptional divisors (or saying it in a more enlightened way: exceptional set)? You cannot take a full-fledged resolution of singularities, because it will resolve the non-normality of the singularity as well. This would not be a huge problem from the point of view of making it simple, but it sabotages the entire operation. The issue is that if you look at the definition of lc (and klt, dlt, etc) more closely, then it becomes clear that it kind of needs that the resolution used in the definition is an isomorphism in codimension $1$ on the target, that is, the singular guy. It is also important that the exceptional set is a divisor. These will fail for non-normal but $S_2$ singularities, for instance for slc but not lc singularities.
So, you need a partial resolution that resolves the singularities to something that is close to being smooth but has the above properties. The "close to being smooth" is called "semi-smooth": these are double normal crossings and pinch points, exactly the singularities that cannot be made better by only changing something in codimension $2$. (This last statement is left to the reader. If you have difficulty with it, ask.)
OK, I better wrap this up. So the point of a semi-resolution is that it has those properties that make it possible to define discrepancies but it does not go "too far". However, it produces varieties with very mild singularities that are almost as good as smooth, at least from the point of view of this definition.
-
Dear Sandor, I read it, and enjoyed it! – Emerton Oct 18 2010 at 4:41
Hi Matt, thanks! :) – Sándor Kovács Oct 18 2010 at 4:51
I will definitely read it and Emerton read it. I have a further question. You mentioned the importance of resolutions that are iso in codimension 1. This seems to be stronger than what Kolla'r is asking in problem 19. The question is: Is this correct? For example suppose the pair $(X,D)$ is $X:=(x_1x_2=0)$ and then $D:=(x_1=x_2=0)$. This pair is not semi-snc precisely along the support of $D$. Then to resolve it we need to blow-up $(x_1=x_2=0)$ and we don't get isomorphism in codimension 1. Is this correct? – Franklin Oct 19 2010 at 6:22
Franklin: I think Kollár does not say it, but assumes it. He does say it earlier: look at the bottom of page 5. Also, even though he does not explicitly say it, there are implicit assumptions in his theorems that rule out your example: See the definition of semi-snc in (10) and the assumptions on the restriction of $D$ in both (16) and (17). You are right, though, that in (19) he does not require it, but that is slightly different from needing it for the statement. Also, what I was saying is that you need this for the definition of slc, not that you would definitely need it for the... – Sándor Kovács Oct 19 2010 at 7:12
...definition of a semi-log resolution. Anyway, in general, the sensible assumption to make about a pair is that none of the components of $D$ is contained in the singular locus of $X$. I suppose you could also weaken the requirement of the semi-log resolution from asking that it is an isomorphism in codimension $1$ (on $X$!) to that it is an isomorphism at general points of $D$. If you want to define slc, you need to be able to define the strict transform of $D$ and for that you need a piece of every component of it on the locus where $f$ is an isomorphism. – Sándor Kovács Oct 19 2010 at 7:16
It is used in the definition of semi-log canonical singularities (e.g. see Section 4 of "Kollár, J.; Shepherd-Barron, N. I. Threefolds and deformations of surface singularities. Invent. Math. 91 (1988), no. 2, 299--338.")
-
Thank you very much. – Franklin Jul 24 2010 at 2:15
http://mathhelpforum.com/calculus/22801-finding-exact-area-definite-integral.html | # Thread:
1. ## Finding exact area with the Definite Integral
Okay, I was busy trying to figure these things out and then I got stuck.
Now, using Riemann-sum notation to calculate the area under the graph of $x^2 + 1$, we get the following:
$R_{6} = \sum_{i = 1}^{6} [ f(x_{i}) \cdot ( \frac{3-0}{6})]$
(I calculated $R_{6} = 14.375$)
They use rectangles under the graph to approximate the area. (6 rectangles and the x-axis value ranges from 0 to 3.)
Then the book goes further and says:
For any number of rectangles under the graph(Still ranging from 0 to 3 on the x-axis):
$R_{n} = \sum_{i = 1}^{n} [ f(x_{i}) \cdot ( \frac{3-0}{n})]$
At the end we arrive at the following:
$R_{n} = \frac{27}{n^3} \sum_{i = 1}^{n} i^2 + \frac{3}{n} \sum_{i = 1}^{n} 1$
Up to there I understand perfectly.
Then from that we get
$R_{n} = \frac{27}{n^3} \left( \frac{ n(n+1)(2n+1) }{6} \right) + \frac{3}{n} (n)$
EDIT: posted by accident. (Would appreciate it if a mod would delete this)
2. Originally Posted by janvdl
Okay, I was busy trying to figure these things out and then I got stuck. [...] At the end we arrive at the following:

$R_{n} = \frac{27}{n^3} \sum_{i = 1}^{n} i^2 + \frac{3}{n} \sum_{i = 1}^{n} 1$

Up to there I understand perfectly. Then from that we get

$R_{n} = \frac{27}{n^3} \left( \frac{ n(n+1)(2n+1) }{6} \right) + \frac{3}{n} (n)$
What is your question? How they changed the sums to those formulas? Those are identities. See the identities section here.
It should not be hard to find proofs of these identities floating around the internet if you're interested.
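To see the numbers, here is a small Python sketch of the right-endpoint sum for this thread's function and interval; the exact area is $\int_0^3 (x^2+1)\,dx = 12$, which is also the limit of the closed form above as $n \to \infty$.

```python
def R(n, a=0.0, b=3.0):
    """Right-endpoint Riemann sum for f(x) = x**2 + 1 on [a, b]."""
    dx = (b - a) / n
    return sum(((a + i * dx) ** 2 + 1) * dx for i in range(1, n + 1))

print(R(6))             # 14.375, matching the hand computation
for n in (10, 100, 10000):
    print(n, R(n))      # approaches the exact area, 12
```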
http://www.physicsforums.com/showthread.php?p=3888245 | Physics Forums
## Bianchi's entropy result--what to ask, what to learn from it
Quote by fzero ... Since the BH is not a pure state, the correct way to do the computation is to compute the energy from (9) in an ensemble. This will reintroduce the factors of $\mu^*$ and $\gamma$ that were found in the polymer paper.
Why so? I see no reason that combining states of the form (9) to make a mixed state would need to introduce a $\gamma$. Please explain.
In case anyone is new to the discussion, the "polymer" paper just referred to is from a year and a half ago:

http://arxiv.org/abs/1011.5628
Black Hole Entropy, Loop Gravity, and Polymer Physics
Eugenio Bianchi
(Submitted on 25 Nov 2010)
Loop Gravity provides a microscopic derivation of Black Hole entropy. In this paper, I show that the microstates counted admit a semiclassical description in terms of shapes of a tessellated horizon. The counting of microstates and the computation of the entropy can be done via a mapping to an equivalent statistical mechanical problem: the counting of conformations of a closed polymer chain. This correspondence suggests a number of intriguing relations between the thermodynamics of Black Holes and the physics of polymers.
13 pages, 2 figures

The main paper we are discussing is the one Bianchi just posted this week. For convenience, since we just turned a page, I will give the link and abstract again:

http://arxiv.org/abs/1204.5122
Entropy of Non-Extremal Black Holes from Loop Gravity
Eugenio Bianchi
(Submitted on 23 Apr 2012)
We compute the entropy of non-extremal black holes using the quantum dynamics of Loop Gravity. The horizon entropy is finite, scales linearly with the area A, and reproduces the Bekenstein-Hawking expression S = A/4 with the one-fourth coefficient for all values of the Immirzi parameter. The near-horizon geometry of a non-extremal black hole - as seen by a stationary observer - is described by a Rindler horizon. We introduce the notion of a quantum Rindler horizon in the framework of Loop Gravity. The system is described by a quantum surface and the dynamics is generated by the boost Hamiltonian of Lorentzian Spinfoams. We show that the expectation value of the boost Hamiltonian reproduces the local horizon energy of Frodden, Ghosh and Perez. We study the coupling of the geometry of the quantum horizon to a two-level system and show that it thermalizes to the local Unruh temperature. The derived values of the energy and the temperature allow one to compute the thermodynamic entropy of the quantum horizon. The relation with the Spinfoam partition function is discussed.
6 pages, 1 figure
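Just to have a sense of the magnitudes in S = A/4, here is a back-of-the-envelope Python sketch for a one-solar-mass black hole. These are standard textbook formulas in ordinary units, nothing specific to the loop calculation.

```python
import math

G    = 6.674e-11   # m^3 kg^-1 s^-2
hbar = 1.055e-34   # J s
c    = 2.998e8     # m/s
M    = 1.989e30    # kg, one solar mass

r_s  = 2 * G * M / c**2        # Schwarzschild radius, ~2.95 km
A    = 4 * math.pi * r_s**2    # horizon area, m^2
l_P2 = G * hbar / c**3         # Planck length squared, m^2

print(f"S/k_B = A/(4 l_P^2) = {A / (4 * l_P2):.2e}")   # ~1e77
```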
Quote by marcus Why so? I see no reason that combining states of the form (9) to make a mixed state would need to introduce a $\gamma$. Please explain.
Each pure state can be thought of as a set of occupation numbers associated with which facets we use to tessellate the surface. These are the $N_i$ in the polymer paper, but I will use the notation of the new paper and call them $N_f$. The area of a given tessellation is
$$A = \sum_f 8\pi G\hbar \gamma N_f j_f$$
and we have a constraint that
$$\sum_f N_f = N.$$
Furthermore, we have to require that our mixed state matches the data of the black hole. Whatever the appropriate distribution is, this can be written as an expectation value
$$\langle A \rangle_{\mathrm{ens.}} = A_H$$
where we're summing over the distribution of $N_f$. I put the subscript on the ket to note that this isn't just the expectation value in the pure state.
It is logical in this program to use Bianchi's polymer distribution and demand that the BH state maximizes the entropy. This will result in the same steepest descent condition as in (16) in the polymer paper. The computation of the energy should follow similar steps as those following that equation, leading to the factors I'm referring to.
Basically, if both papers are correct (and they already have many important methods in common), the final answers for the entropy have to agree, because the mixed state will have occupation numbers for the faces used and so must satisfy the same constraint (16) as in the polymer paper. It is not enough to just pick a pure state and demand that
$$A_H = \sum_f 8\pi G\hbar \gamma j_f .$$
This state alone is not a black hole. This is the step that allowed Bianchi to hide the factor of $\gamma$.
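To make the γ-dependence of area-ensemble counting concrete, here is a toy sketch (my own illustration, not taken from either paper). Assume every facet carries spin $j = 1/2$ with $2j+1 = 2$ states, and use the simplified flux spectrum $A = 8\pi\gamma\,l_P^2 \sum_f N_f j_f$ quoted above, ignoring the usual $\sqrt{j(j+1)}$ factor. The naive Boltzmann entropy then scales like $1/\gamma$, so $S = A/4$ holds only at one special value of $\gamma$.

```python
import math

def naive_entropy(A, gamma, j=0.5):
    """Toy area-ensemble count: N facets of spin j, each with 2j+1 states,
    and area A = 8*pi*gamma*N*j in Planck units (simplified spectrum)."""
    N = A / (8 * math.pi * gamma * j)    # number of facets for this area
    return N * math.log(2 * j + 1)       # Boltzmann entropy S = N ln(2j+1)

A = 1.0e4                                # some large area, Planck units
for gamma in (0.1, math.log(2) / math.pi, 1.0):
    ratio = naive_entropy(A, gamma) / (A / 4)
    print(f"gamma = {gamma:.4f}: S / (A/4) = {ratio:.4f}")
```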
Quote by fzero The polymer microstate calculation had an explicit dependence on the Immirzi parameter. The only reason the present calculation does not have this dependence is because Bianchi uses a single pure state to do the calculation.
The independence wrt the Immirzi parameter $\gamma$ is not something new introduced by Eugenio Bianchi, but rather a general fact in LQG black holes. The $\gamma$-dependence was present in the old treatment of LQG black holes, indeed. But in 2009 Engle, Noui and Perez presented a new treatment (based on $SU(2)$ instead of $U(1)$ - let me note here that the original proposal of Rovelli in 1996 was to use $SU(2)$, and the shift to $U(1)$ appeared in the paper by Krasnov and others) so that the entropy is correctly obtained without fixing the Immirzi parameter.
References:
1. Black hole entropy and SU(2) Chern-Simons theory.
2. Black hole entropy from the SU(2)-invariant formulation of Type I isolated horizons
3. Static isolated horizons: SU(2) invariant phase space, quantization, and black hole entropy
4. Radiation from quantum weakly dynamical horizons in LQG.
Fzero, thanks for your careful detailed response to my question! It is very helpful to see spelled out why you found the paper flawed, and the conclusion (in the Loop context) that entropy is independent of the Immirzi parameter to be invalid. Everybody benefits from this kind of careful study (although I disagree with you.)
Quote by fzero The polymer microstate calculation had an explicit dependence on the Immirzi parameter. The only reason the present calculation does not have this dependence is because Bianchi uses a single pure state to do the calculation...
Quote by francesca The independence wrt the Immirzi parameter $\gamma$ is not something new introduced by Eugenio Bianchi, but rather a general fact in LQG black holes. [...]
I'm beginning to get a better sense of the historical development. The key reference seems to be #3. The first two lead up to it, but they don't seem to explicitly break free from dependence on the Immirzi parameter. They lay the groundwork, if I am not mistaken. I'll quote the abstract of your reference #3. The November 2010 paper of Perez and Pranzetti.
http://inspirehep.net/record/877359?ln=en
http://arxiv.org/abs/1011.2961
Static isolated horizons: SU(2) invariant phase space, quantization, and black hole entropy
Alejandro Perez, Daniele Pranzetti
(Submitted on 12 Nov 2010)
We study the classical field theoretical formulation of static generic isolated horizons in a manifestly SU(2) invariant formulation. We show that the usual classical description requires revision in the non-static case due to the breaking of diffeomorphism invariance at the horizon leading to the non conservation of the usual pre-symplectic structure. We argue how this difficulty could be avoided by a simple enlargement of the field content at the horizon that restores diffeomorphism invariance. Restricting our attention to static isolated horizons we study the effective theories describing the boundary degrees of freedom. A quantization of the horizon degrees of freedom is proposed. By defining a statistical mechanical ensemble where only the area A of the horizon is fixed macroscopically-states with fluctuations away from spherical symmetry are allowed-we show that it is possible to obtain agreement with the Hawking's area law---S = A/4 (in Planck Units)---without fixing the Immirzi parameter to any particular value: consistency with the area law only imposes a relationship between the Immirzi parameter and the level of the Chern-Simons theory involved in the effective description of the horizon degrees of freedom.
26 pages, published in Entropy 13 (2011) 744-777
Quote by marcus http://arxiv.org/abs/1011.2961 Static isolated horizons: SU(2) invariant phase space, quantization, and black hole entropy (Perez, Pranzetti) [...]
So what is the relationship between the Immirzi parameter and the level of the Chern-Simons theory in Bianchi's new calculation?
Quote by atyy So what is the relationship between the Immizi parameter and the level of the Chern-Simons theory in Bianchi's new calculation?
Why should there be any at all? I don't see in Bianchi's paper any reference to the 2010 Perez Pranzetti paper. What you ask sounds to me like a good research topic. There might or might not be some interesting connection. I don't think one can determine that simply based on the research papers already available. I could be wrong of course. Maybe Francesca will correct me, and answer your question. She has her own research to do though.
This breaking free from dependence of entropy on the Immirzi looks to me like a gradual historical process that has been happening by various routes on different fronts. I think of it as a kind of blind tectonic shift. Perhaps the earliest sign being Jacobson's 2007 paper.
Wait. Bianchi's reference [3] cites (in addition to papers by Rovelli 1996 and by Ashtekar et al 1998) the 2010 ENP paper Engle Noui Perez. That was the first one Francesca listed. So there is an indirect reference to Chern Simons level. Maybe we can glimpse some connection by looking at the ENP paper.
This is fascinating. I thought the 2009 ENP paper still inextricably involved Immirzi dependence, but I may have missed something. Bianchi cites it and it was the first one on Francesca's list. I need to take a closer look.

http://arxiv.org/abs/0905.3168
Black hole entropy and SU(2) Chern-Simons theory
Jonathan Engle, Karim Noui, Alejandro Perez
(Submitted on 19 May 2009)
Black holes in equilibrium can be defined locally in terms of the so-called isolated horizon boundary condition given on a null surface representing the event horizon. We show that this boundary condition can be treated in a manifestly SU(2) invariant manner. Upon quantization, state counting is expressed in terms of the dimension of Chern-Simons Hilbert spaces on a sphere with marked points. Moreover, the counting can be mapped to counting the number of SU(2) intertwiners compatible with the spins that label the defects. The resulting BH entropy is proportional to aH with logarithmic corrections ΔS = -3/2 log aH. Our treatment from first principles completely settles previous controversies concerning the counting of states.
4 pages, published in Physical Review Letters 2010
Quote by francesca The independence wrt the Immirzi parameter $\gamma$ is not something new introduced by Eugenio Bianchi, but rather a general fact in LQG black holes. The $\gamma$-dependence was present in the old treatment of LQG black holes, indeed. But in 2009 Engle, Noui and Perez presented a new treatment (based on $SU(2)$ instead of $U(1)$ - let me notice here that the original proposal of Rovelli in 1996 was to use $SU(2)$ and the shift to $U(1)$ appeared in the paper by Krasnov and others) so that the entropy is correctly achieved without fixing the Immirzi parameter. References: 1. Black hole entropy and SU(2) Chern-Simons theory. 2. Black hole entropy from the SU(2)-invariant formulation of Type I isolated horizons 3. Static isolated horizons: SU(2) invariant phase space, quantization, and black hole entropy 4. Radiation from quantum weakly dynamical horizons in LQG.
I am not claiming that the dependence on $\gamma$ is new or that there weren't earlier papers that claimed that they could avoid it. The simple fact is that Bianchi's polymer result had this dependence. His new result does not. I have explained the reason for the discrepancy, and it has nothing to do with any gauge fixing. In the earlier paper he uses the proper mixed state for the black hole, while in the new paper he uses a pure state. The new paper does not compute the entropy of a black hole.
If you disagree, please explain which Bianchi paper is wrong and why.
Quote by fzero If you disagree, please explain which Bianchi paper is wrong and why.
I don't disagree with you :-)
because the papers by Bianchi are both right,
but the two calculations are done using different ensembles.
This is a tricky point that could have been overlooked. All the previous calculations used the area ensemble, namely one counts how many spin states there are for a given area. You are right to say that a $γ$-dependence is unavoidable there. This is also written in the paper (even if in a very compact manner; it would be nice to have a more extended comment on this issue):
Quote by arXiv:1204.5122 The result obtained directly addresses some of the difficulties found in the original Loop Gravity derivation of Black-Hole entropy where the area-ensemble is used [3] and the Immirzi parameter shows up as an ambiguity in the expression of the entropy [20]. Introducing the notion of horizon energy in the quantum theory, we find that the entropy of large black holes is independent from the Immirzi parameter. Quantum gravity corrections to the entropy and the temperature of small black holes are expected to depend on the Immirzi parameter.
So a central point in the paper is the introduction of the energy ensemble, where the energy of the black hole is fixed. This choice is guided by the physical intuition that the energy is the key object being interested in the heat exchanges between the black hole and its neighborhood. This is a thermodynamical reasoning. Of course one can also look at the statistics of the energy ensemble: this is not what has been done in this paper, but I think that people are already working on this for a follow-up paper.
Quote by fzero ...while in the new paper he uses a pure state. The new paper does not compute the entropy of a black hole. If you disagree, please explain which Bianchi paper is wrong and why.
Hi Fzero, it's fun having you take such an interest in Bianchi's new entropy paper. Perhaps I should wait for F. to reply since you were addressing her, but she may have more urgent things to do. So I'll tell you my hunch.
I think probably all or most theory papers by creative people are in some respect wrong. They open up and develop new paths. The important papers are never the final word, they shine a light ahead into the dark.
My hunch is that the new Bianchi paper (which I think is basically a draft) probably has places where the reasoning could be improved or clarified. I also think that his conclusion is probably right and will stand! That's just a guess but it seems to be the way a lot of recent Loop BH work is going. Quite a lot of the younger-generation people are beginning to see reasons why BH entropy is independent from Immirzi. I'm just now realizing how many, and how many of them are still postdoc or have recently taken their first faculty appointment (e.g. Engle, Noui, Durka, Pranzetti, Bianchi..). It has the makings of a little revolution--we'll have to see how it goes.
I don't think I need to argue with you. You have decided to disbelieve the result because the argument is based on considering a pure quantum state. I think it's fine for you to say this whenever the occasion arises. I do not think the reasoning actually rests on that single-state basis, but that's MY perception, not yours.
As I see it, he's really considering a PROCESS which adds or subtracts a little facet of area and bit of energy from each one of a huge swarm of pure states.
In the case of each pure state he verifies that ∂A/4 = ∂E/T
So "by superposition" he reasons that for the whole swarm it is always true that ∂A/4 = ∂E/T. So, in effect, QED.
But I think it's fine for you to remain unalterably opposed to Bianchi's paper and to firmly declare things like "The new paper does not compute the entropy of a black hole." I don't especially want you to agree with me. And I could be wrong! I'm basically going to wait and see until the next paper on this, by Bianchi and Wolfgang Wieland, comes out. It's in prep. And the last thing I want to do is argue with you. I won't know what I really think about this until I see the followup paper(s).
OOPS! I didn't realize F. had already replied! So this is superfluous, but I think I nevertheless won't erase it.
Hi Francesca, I didn't think you would reply, so wanted to pay Fzero the courtesy of saying something in response to his interesting post.
Quote by francesca So a central point in the paper is the introduction of the energy ensemble, where the energy of the black hole is fixed. [...]
There are several problems here.
First, there is no "ensemble" in the latest paper. As you mention, no statistics are addressed, but left to future work. So, as I've been saying, the state being considered is not that of a black hole (pure state vs mixed state). The role of the area ensemble in the polymer paper was not just to count microstates, but was a crucial part of selecting the correct black hole state.
As you say, what is left is a thermodynamic calculation. What is being treated quantum mechanically is the change in energy $\delta E$. This is the same semiclassical reasoning as Hawking, the quantum computation of the energy $E$ is not done, but $\delta E$ is properly accounted for.
Finally, it's already clear how the "energy ensemble" works. The states that Bianchi uses satisfy $\vec{K} = \gamma \vec{L}$ as well as $|\vec{L}| = |L_z|$. This is something that PhysicsMonkey was asking about a couple of days ago. Therefore the energy is directly proportional to the area. If we were to count microstates subject to the energy constraint, we'd find essentially the same result as he did in the polymer paper. It looks like the only change amounts to a rescaling of the Lagrange multiplier $\mu$ by $\gamma$.
Quote by marcus As I see it, he's really considering a PROCESS which adds or subtracts a little facet of area and bit of energy from each one of a huge swarm of pure states. In the case of each pure state he verifies that ∂A/4 = ∂E/T So "by superposition" he reasons that for the whole swarm it is always true that ∂A/4 = ∂E/T. So, in effect, QED.
I think I agree with this, but I've argued on a couple of occasions that this is a semiclassical computation. It is not the fully quantum mechanical treatment that I thought was being advertised. It's also not clear whether we have learned much since Hawking was already able to do this calculation and didn't find a dependence on the Immirzi parameter either!
But I think it's fine for you to remain unalterably opposed to Bianchi's paper and to firmly declare things like "The new paper does not compute the entropy of a black hole." [...]
I could amend my statement to refer to the fully quantum computation of a BH entropy. I don't have any major objections against the computation when viewed in the spirit of the original semiclassical computations.
I've already explained how the statistical treatment should work. There is no reason to expect that the result from the polymer paper is going to change since the area contraint is equivalent to the energy constraint for the subspace of states that Bianchi is using.
Quote by francesca The independence wrt the Immirzi parameter $\gamma$ is not something new introduced by Eugenio Bianchi, but rather a general fact in LQG black holes. [...]
I don't think they are independent of the Immirzi parameter. Basically, the SU(2) introduces one more parameter k, the level of the Chern-Simons theory. So you have two parameters, and if you fix one, say the Immirzi, you have another to adjust to match the semiclassical calculation of Hawking.
Quote by atyy I don't think they are independent of the Immirzi parameter. Basically, the SU(2) introduces one more parameter k, the level of the Chern-Simons theory. So you have two parameters, and if you fix one, say the Immirzi, you have another to adjust to match the semiclassical calculation of Hawking.
Also, the level must be an integer, so only discrete values of the Immirzi parameter are allowed in those models. This is at odds with the arguments that the Immirzi parameter might be thought of as a running coupling. So on the one hand, if the BH calculations are to be trusted, at least some of the techniques/conclusions of the asymptotic safety programs are not.
For convenience, here's the link to Bianchi's paper: http://arxiv.org/abs/1204.5122

In his conclusions section on page 5, Bianchi cites a 2003 paper of Jacobson and Parentani which we also might want to keep handy:

http://arxiv.org/abs/gr-qc/0302099
Horizon Entropy
Ted Jacobson, Renaud Parentani
(Submitted on 25 Feb 2003)
Although the laws of thermodynamics are well established for black hole horizons, much less has been said in the literature to support the extension of these laws to more general settings such as an asymptotic de Sitter horizon or a Rindler horizon (the event horizon of an asymptotic uniformly accelerated observer). In the present paper we review the results that have been previously established and argue that the laws of black hole thermodynamics, as well as their underlying statistical mechanical content, extend quite generally to what we call here "causal horizons". The root of this generalization is the local notion of horizon entropy density.
21 pages, one figure, to appear in a special issue of Foundations of Physics in honor of Jacob Bekenstein

Conceptually, Bianchi's paper seems in part to derive from this J&P paper. A Rindler horizon is a type of causal horizon. Bianchi makes central use of the ideas of a quantum Rindler horizon and entropy density. His derivation of the entropy density, to first order, comes tantalizingly close to a tautology. He shows that for all pure states of the quantum Rindler horizon it is identically true that ∂A/4 = ∂E/T.

The argument that this extends by linearity to superpositions (to mixed states of the quantum Rindler horizon, and large assemblies thereof) is not made explicitly. But a relevant observation is made immediately after equation (20) on page 4: "Notice that the entropy density is independent of the acceleration a, or equivalently from the distance from the horizon." This opens the way to our concluding that ∂A/4 = ∂E/T applies as well to mixed states and collections thereof. Thus any process that increases the BH energy slightly (such as a small object like an ice-cream cone or ukulele falling into the hole) will make the two quantities change in tandem, so that Rindler horizon entropy and area will remain in the same ratio S = A/4.
I mentioned that I'm beginning to see this paper in the context of a small revolution in Loop gravity. A number of young researchers are posting Loop BH papers which break from the earlier work (1990s) and often find the entropy independent of Immirzi to first order. (Besides Bianchi, some names are Ghosh, Perez, Engle, Noui, Pranzetti, Durka. And there are groundbreaking Loop BH papers by Modesto, Premont-Schwarz, Hossenfelder. I'm probably forgetting some.)

So part of understanding Bianchi's paper, for me, is catching up on the context of other recent Loop BH papers. Here is one that came out earlier this month. You can see there is significant conceptual overlap.

http://arxiv.org/abs/1204.0702
Radiation from quantum weakly dynamical horizons in LQG
Daniele Pranzetti
(Submitted on 3 Apr 2012)
Using the recent thermodynamical study of isolated horizons by Ghosh and Perez, we provide a statistical mechanical analysis of isolated horizons near equilibrium in the grand canonical ensemble. By matching the description of the dynamical phase in terms of weakly dynamical horizons with this local statistical framework, we introduce a notion of temperature in terms of the local surface gravity. This provides further support to the recovering of the semiclassical area law just by means of thermodynamical considerations. Moreover, it allows us to study the radiation process generated by the LQG dynamics near the horizon, providing a quantum gravity description of the horizon evaporation. For large black holes, the spectrum we derive presents a discrete structure which could be potentially observable and might be preserved even after the inclusion of all the relevant transition lines.
Comments: 9 pages, 2 figures
http://mathhelpforum.com/differential-geometry/203003-re-armstrong-s-basic-topology-pg-70-definition-not-clear.html
Thread:
1. Re: Armstrong's Basic Topology. Pg 70. definition not clear.
On pg 70 in Armstrong's Basic Topology book, the author writes:
" We introduce the disjoint union $X+Y$ of spaces $X,Y,$ and the function $j:X+Y \rightarrow X \cup Y$ which when restricted to either $X$ or $Y$ is just the inclusion in $X \cup Y$. This function is important for our purposes because:
a) it is continuous
b) the composition $(f \cup g)j: X+Y \rightarrow Z$ is continuous if and only if both $f$ and $g$ are continuous. "
The author does not mention anything about disjoint union before this. I also checked the index at the back of the book and did not find it. I found one definition of disjoint union on wiki. If I go by that definition then $X$ is not even a subset of $X+Y$ so I don't know what the author is trying to say when he writes '... the function $j:X+Y \rightarrow X \cup Y$ which when restricted to either $X$ or $Y$ ...'
Does anybody have a clue what the intended meaning of the disjoint union is here, and what this function $j$ might be?
2. Re: Armstrong's Basic Topology. Pg 70. definition not clear.
there are different ways of defining the disjoint union. one way involves "tagging" X and Y that is:
X+Y = (Xx{1}) U (Yx{2}) (any singleton sets that are distinct could be used for the "tags").
if X,Y are already disjoint sets, then the disjoint union is simply XUY. but if X∩Y ≠ Ø, "tagging" the sets ensures that for an element a in X∩Y, we get two elements in X+Y:
namely (a,1) (from X) and (a,2) (from Y).
while, strictly speaking, X is not a subset of X+Y, we have the bijection:
x ↔ (x,1), so as SETS they are isomorphic.
the mapping j is the obvious one:
j(x) = x, if x is in X
j(y) = y, if y is in Y
perhaps an example will help:
let X = Y = R, the real number line. then R+R is two "separate" copies of the line (unlike say, RxR, which is a plane, or (Rx{0}) U ({0}xR), which is just the two coordinate axes). since a point in this space lies either on one line or the other, the image j(x) is just "whatever real number of the real line x belongs to is".
we might represent a point on the first line as (r,1), and a point on the second line as (s,2); if it happens that r = s, j maps both these points to r in RUR = R.
another, more abstract, way to characterize the disjoint union is as follows:
let X,Y be two topological spaces. then X+Y is a space with two continuous embeddings:
j1:X→X+Y
j2:Y→X+Y
such that if we have two continuous maps f:X→Z and g:Y→Z, there is a UNIQUE continuous map h:X+Y→Z with
hj1 = f
hj2 = g
this map h is often written f+g, each "component" keeps track of whether any z in the image of h came from X, or came from Y.
the point is that Xx{*}, where {*} is any singleton space, is naturally homeomorphic to X:
f(x,*) = x and
g(x) = (x,*) are clearly both continuous (from a given topology on X, and the product topology on Xx{*}) (note that there is only ONE topology on {*}, consisting of:
T = {Ø,{*}}. on a singleton space, ALL topologies (the indiscrete and the discrete, or whatever) are exactly the same, so in the product topology we have just two kinds of open sets:
UxØ = Ø, and Ux{*}, where U is open in X).
the idea is this:
suppose X = {Bob,Ted,Alice} and Y = {Bob,Carol,Ted}. let's form X' = {BOB,TED,ALICE} (which is really just the same set as X, we just capitalized everything).
then X+Y = {Bob,BOB,Ted,TED,ALICE,Carol}, and j is given by:
j(Bob) = Bob
j(BOB) = Bob
j(Ted) = Ted
j(TED) = Ted
j(ALICE) = Alice
j(Carol) = Carol
j "preserves" the elements of X or Y when they are unique, and "identifies" them if they lie in the intersection of X and Y.
3. Re: Armstrong's Basic Topology. Pg 70. definition not clear.
Originally Posted by Deveno
there are different ways of defining the disjoint union. one way involves "tagging" X and Y [...]
Thank you so much Deveno for your invaluable help on this. I'd be stuck forever on that page if it weren't for you. It must have consumed a lot of your time to type so much. Thanks again.
4. Re: Armstrong's Basic Topology. Pg 70. definition not clear.
I used Armstrong in the 1980s (the book is still around here somewhere - I think overall it's a wonderful undergrad intro to topology), and I still remember distinctly being troubled by this ("Where are they floating, these two disjoint sets?") and my prof's reply. He gave me virtually the exact answer Deveno just gave you. (Basically he said "If you're worried about putting them somewhere, just think of them as Xx{0} and Yx{1} in Zx[0,1], where Z is some monster set containing both X and Y.")
5. Re: Armstrong's Basic Topology. Pg 70. definition not clear.
Originally Posted by abhishekkgp
On pg 70 in Armstrong's Basic Topology book, the author writes: [...]
I am stuck again. I am trying to show that (b) holds in the text quoted above.
In the above, $X,Y$ were subsets of some topological space and $X,Y,X \cup Y$ all have the induced topologies. The topology on $X+Y$ is the disjoint union topology, which I have taken from Disjoint union (topology) - Wikipedia, the free encyclopedia. $f:X \rightarrow Z$ and $g:Y \rightarrow Z$ are two functions.
Let $U$ be open in $Z$.
(only if part)
Assume $f, g$ are continuous.
$((f \cup g)j)^{-1}(U)$
$=j^{-1}(f^{-1}(U) \cup g^{-1}(U))$
$=((f^{-1}(U) \cup g^{-1}(U)) \cap X) \times \{ 1 \} \, \cup \, ((f^{-1}(U) \cup g^{-1}(U)) \cap Y) \times \{ 2 \}$.
I need to show that $(f^{-1}(U) \cup g^{-1}(U)) \cap X$ is open in $X$ (similarly for $Y$), which I am unable to do. Please help.
6. Re: Armstrong's Basic Topology. Pg 70. definition not clear.
I also tried to do this step by step through the inverse functions, and keeping the various sets straight is a chore. I think it's easier to establish the main set relationship in one shot (my Claim#1 below), and then tackle the continuity claims of the proposition (my Claim#2 below).
I'll write * for composition, and j-1 for "j inverse", and same for all functions. I hope it's clear. Say X and Y are subspaces of W. Then X+Y is a subspace of Wx[0,1] defined by X+Y = Xx{0} Union Yx{1}. (Also, note that if X intersect Y is not empty (in W), then this f-union-g function is only well defined if f = g on that intersection.)
Have the following *sets* and *functions* (we're ignoring continuity for now): f:X->Z, g:Y->Z, j: X+Y -> X-Union-Y, and f-union-g:X-Union-Y -> Z, and the composition f-union-g * j: X+Y->Z.
Claim 1: Let U be any subSET of Z. Then (f-union-g * j)-1(U) = ( ( (f-1(U))x{0} ) union ( (g-1(U))x{1} ) ) in X+Y.
Proof: The standard way of showing that each is contained in the other (Let (s,t) be in (f-union-g * j)-1(U) in X+Y. Then... ). Doing it this way bypasses the headaches about what happens on X intersect Y inside X-Union-Y. It's completely typical and straightforward this way.
Claim 2: (f-union-g * j) is continuous iff both f and g are.
Proof: It drops straight out from Claim 1.
=>) ASSUME (f-union-g * j) is continuous. Let U open in Z. Then (f-union-g * j)-1(U) is open in X+Y, so ( (f-1(U))x{0} ) union ( (g-1(U))x{1} ) ) is open in X+Y,
... so f-1(U) open in X and g-1(U) open in Y.
<=) ASSUME f and g are continuous. Let U open in Z. Then f-1(U) open in X and g-1(U) open in Y,
... so (f-union-g * j)-1(U) = ( (f-1(U))x{0} ) union ( (g-1(U))x{1} ) ) is open in X+Y.
7. Re: Armstrong's Basic Topology. Pg 70. definition not clear.
Originally Posted by johnsomeone
Claim 1: Let U be any subSET of Z. Then (f-union-g * j)-1(U) = ( ( (f-1(U))x{0} ) union ( (g-1(U))x{1} ) ) in X+Y.
Thank you so much this did it.
8. Re: Armstrong's Basic Topology. Pg 70. definition not clear.
your eyes probably glazed over when i gave the "abstract" characterization of X+Y. mine did, when i first saw it. i was like: err, wut?
but it is actually helpful. recall that i said if we have ANY two (continuous) maps f:X→Z and g:Y→Z then we have a UNIQUE continuous map:
h:X+Y→Z with hj1 = f, and hj2 = g
(the ji are "embeddings" of X and Y in X+Y:
j1(x) = (x,1)
j2(y) = (y,2), ok?).
so it suffices to verify that:
[(fUg)j]j1 = f and
[(fUg)j]j2 = g
but this is clear since if x is only in X, then:
[(fUg)j]j1(x) = (fUg)j(x,1) = (fUg)(x) = f(x)UØ = f(x)
and if y is only in Y:
[(fUg)j]j2(y) = (fUg)j(y,2) = (fUg)(y) = ØUg(y) = g(y)
and if z is in X∩Y (where f and g agree):
[(fUg)j]j1(z) = (fUg)j(z,1) = (fUg)(z) = f(z)Ug(z) = f(z) (since f(z) = g(z) for all z in X∩Y)
[(fUg)j]j2(z) = (fUg)j(z,2) = (fUg)(z) = f(z)Ug(z) = g(z) (see above).
then the construction of X+Y guarantees that (fUg)j is continuous (it's just the map f+g). on the other hand, if f+g is continuous, then clearly f and g must be, since they are compositions of continuous maps.
in fact, one of the least troublesome ways to DEFINE the disjoint union topology is to take it to be the FINEST topology such that the inclusions:
j1:X→X+Y
j2:Y→X+Y
are continuous. this automatically gives us "just the open sets we need" (and makes for a lot less "pre-image chasing").
***********
a little bit of overview (why you are doing this): suppose we have two squares (which, if we want to be completely formal about, we can regard as two copies of IxI, the unit square in R2).
as two separate squares, we have (IxI)+(IxI). now let's say we want to paste these two squares together, to get a 2x1 rectangle. so we take a homeomorph of the 2nd square, say:
[1,2]x[0,1] = JxI, which is still (for all intents and purposes), (IxI)+(IxI) ≅ (IxI)+(JxI).
but now we have some "overlap" (IxI)∩(JxI) = {1}x[0,1] (the common edge of our two rectangles). let's say we have an open set U in our rectangle S = (IxI)U(JxI), that crosses the (vertical) line x = 1. since we want the embeddings to be continuous, what we wind up with in the disjoint union topology, is "the part of U that lies in (IxI)" + "the part of U that lies in (JxI)". for example if U is a circle of radius 1/4 at the point (1,1/2), we have a half-circle in IxI and another half-circle in JxI, which are open sets in the subspace (relative) topologies of IxI and JxI.
roughly speaking, XUY is the quotient space (X+Y)/~ where ~ identifies the two parts of X∩Y in X+Y. in general, unions of sets may not behave well in topology, because there's no guarantee that the topology on X and the topology on Y are "compatible". considering X+Y takes care of this, the x's do their thing in the X part, and the y's do their thing in the Y part. of course, to finally get to XUY, we need to consider a quotient space (or identification space). these don't behave so well (quotient maps generally don't preserve connectedness/disconnectedness, for example), but at least we've "isolated" the problem to "the tricky part".
http://math.stackexchange.com/questions/29128/why-determinant-of-a-2-by-2-matrix-is-the-area-of-a-parallelogram

# Why determinant of a 2 by 2 matrix is the area of a parallelogram?
Let $A=\begin{bmatrix}a & b\\ c & d\end{bmatrix}$ be a two by two matrix where the first row of $A$ is $a, b$ and the second row of $A$ is $c, d$. How could we show that $ad-bc$ is the area of a parallelogram with vertex $(0, 0),\ (a, b),\ (c, d),\ (a+b, c+d)$? Are the areas of the following parallelograms the same?
$(1)$ parallelogram with vertex $(0, 0),\ (a, b),\ (c, d),\ (a+c, b+d)$
$(2)$ parallelogram with vertex $(0, 0),\ (a, c),\ (b, d),\ (a+b, c+d)$
$(3)$ parallelogram with vertex $(0, 0),\ (a, b),\ (c, d),\ (a+d, b+c)$
$(4)$ parallelogram with vertex $(0, 0),\ (a, c),\ (b, d),\ (a+d, b+c)$
Thank you very much.
Note: Pick employed this and his area theorem to give a beautiful geometric proof of the Bezout linear representation of the GCD. – Gone Mar 26 '11 at 15:45
See if your question is answered by the discussion here: math.stackexchange.com/questions/668/… . A short answer is that this should be taken (properly modified to take orientation into account) as the definition of the determinant. – Qiaochu Yuan Mar 26 '11 at 16:46
## 6 Answers
The oriented area $A(u,v)$ of the parallelogram spanned by vectors $u,v$ is bilinear (eg. $A(u+v,w)=A(u,w)+A(v,w)$ can be seen by adding and removing a triangle) and skew-symmetric. Hence $A(ae_1+be_2,ce_1+de_2)=(ad-bc)A(e_1,e_2)=ad-bc$. (the same works for oriented volumes in any dimension)
Spend a little time with this figure due to Solomon W. Golomb and enlightenment is not far off: [Golomb's figure omitted in this extract]
For the matrix $\left[\begin{array}{cc} a & c \\ b & d \\ \end{array}\right]$ let $$A = \left[\begin{array}{c} a \\ b \\ \end{array}\right] \;\text{and}\; B = \left[\begin{array}{c} c \\ d \\ \end{array}\right]$$
as shown in the following figure.
Then the height of the parallelogram is
$$\text{height} = |B|\sin\alpha = |B|\cos\beta.$$
If we rotate $A$ by 90 degrees in the CCW direction as follows:
$$R_{90º}A = \left[\begin{array}{cc} 0 &-1 \\ 1 &0 \\ \end{array}\right] \left[\begin{array}{c} a \\ b \\ \end{array}\right] = \left[\begin{array}{c} -b \\ a \\ \end{array}\right],$$
maintaining the magnitude of the base as
$$\text{base} = |A| = |R_{90º}A|,$$
then it is clear that the area of the parallelogram is therefore
$$\text{base}\times\text{height}=(|A|)(|B|\sin\alpha) = |R_{90º}A|\;|B|\cos\beta = (R_{90º}A)\cdot B = \left[\begin{array}{c} -b \\ a \\ \end{array}\right] \cdot \left[\begin{array}{c} c \\ d \\ \end{array}\right] = ad-bc.$$ Q.E.D.
+1. While it's clear that $|A|\sin\alpha=|R_{90^\circ}A|\cos\beta$, it may be visibly clearER in your diagram that $|\text{altitude of parallelogram}|=|B|\sin\alpha=|B|\cos\beta$ ... which ---since $|A|=|R_{90^\circ}A|$--- works just as well in your argument. (Drawing the altitude from the tip of $B$ onto $A$ would help drive this home.) – Blue Mar 13 '12 at 3:29
@Don, good point. I will make the edit tonight. – Tpofofn Mar 13 '12 at 10:27
Also, if the coordinates of any shape are transformed by a matrix, the area will be changed by a scale factor equal to the determinant.
Since the determinant is the scale factor when the unit square is transformed to a parallelogram, it will be the scale factor when any parallelogram with the origin as a vertex is transformed to any other parallelogram because the inverse matrix will transform a parallelogram back into a square and has reciprocal determinant. If there is no inverse, the determinant is 0 and the transformed shape has no area.
Any triangle with the origin as a vertex can be drawn as half of a parallelgram including the origin. Any triangle not including the origin is the area of a triangle containing the origin minus two triangles inside not containing the origin. The area of any shape can be split into triangles, although an infinite number will be required if it has curved sides.
Since the area changes by the determinant of a linear map between two regions.
This is more of a restatement of the question than an explanation. – Qiaochu Yuan Mar 26 '11 at 16:44
@Qiaochu: in defense, what is an 'explanation'? It could be a formal proof, a picture, a restatement in other terms, an informal proof, anything that psychologically gives us trust. Granted if you understood the above restatement, you'd presumably already understand how the determinant and area correspond. – Mitch Mar 26 '11 at 16:57
If you compute the cross product of (a,b,0) and (c,d,0), then you get (in the third coordinate) ad-bc. This is, up to the sign, the area of the parallelogram.
BTW I think that (3) and (4) are not parallelograms, are they?
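As a quick numerical illustration of this cross-product remark (a numpy sketch added here, not part of the original answer):

```python
import numpy as np

a, b, c, d = 2.0, 1.0, 0.5, 3.0
z = np.cross([a, b, 0.0], [c, d, 0.0])[2]  # third coordinate of the cross product
print(z, a * d - b * c)                    # both 5.5; |ad - bc| is the area
```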
Thank you. (3) and (4) are not parallelograms. – user Mar 26 '11 at 16:56
http://stats.stackexchange.com/questions/20011/can-the-mic-algorithm-for-detecting-non-linear-correlations-be-explained-intuiti

# Can the MIC algorithm for detecting non-linear correlations be explained intuitively?
Recently, I read two articles. The first is about the history of correlation, and the second is about a new method called the Maximal Information Coefficient (MIC). I need your help in understanding the MIC method for estimating non-linear correlations between variables.
Moreover, instructions for its use in R can be found on the author's website (under Downloads).
I hope this will be a good platform to discuss and understand this method. My interest is to discuss the intuition behind this method and how it can be extended, as the authors said:
"...we need extensions of MIC(X,Y) to MIC(X,Y|Z). We will want to know how much data are needed to get stable estimates of MIC, how susceptible it is to outliers, what three- or higher-dimensional relationships it will miss, and more. MIC is a great step forward, but there are many more steps to take."
The question is interesting one, but I think it is not answerable. Can you please make it more specific? – mpiktas Dec 20 '11 at 8:00
The discussion will be hindered by the fact that the article in Science is not open access. – Itamar Dec 20 '11 at 9:16
In short, MIC is an excavation of the old idea of "plot-all-scatterplots-and-pick-those-with-biggest-white-area", so it mainly produces false positives, has an unreal complexity of $O(M^2)$ (which the authors hide behind a test-only-some-randomly-selected-pairs heuristic) and by design misses all three- and more-variable interactions. – mbq♦ Dec 20 '11 at 12:12
## 3 Answers
Is it not telling that this was published in a non-statistical journal whose statistical peer review we are unsure of? This problem was solved by Hoeffding in 1948 (Annals of Mathematical Statistics 19:546) who developed a straightforward algorithm requiring no binning nor multiple steps. Hoeffding's work was not even referenced in the Science article. This has been in the R `hoeffd` function in the `Hmisc` package for many years. Here's an example (type `example(hoeffd)` in R):
```
# Hoeffding's test can detect even one-to-many dependency
set.seed(1)
x <- seq(-10, 10, length=200)
y <- x * sign(runif(200, -1, 1))
plot(x, y)   # an X
hoeffd(x, y) # also accepts a numeric matrix

D
     x    y
x 1.00 0.06
y 0.06 1.00

n= 200

P
  x  y
x    0   # P-value is very small
y 0
```
`hoeffd` uses a fairly efficient Fortran implementation of Hoeffding's method. The basic idea of his test is to consider the difference between joint ranks of X and Y and the product of the marginal rank of X and the marginal rank of Y, suitably scaled.
# Update
I have since been corresponding with the authors (who are very nice by the way, and are open to other ideas and are continuing to research their methods). They originally had the Hoeffding reference in their manuscript but cut it (with regrets, now) for lack of space. While Hoeffding's $D$ test seems to perform well for detecting dependence in their examples, it does not provide an index that meets their criteria of ordering degrees of dependence the way the human eye is able to.
In an upcoming release of the R `Hmisc` package I've added two additional outputs related to $D$, namely the mean and max $|F(x,y) - G(x)H(y)|$ which are useful measures of dependence. However these measures, like $D$, do not have the property that the creators of MIC were seeking.
Nice find. Might be worth a short note to Science comparing Hoeffding's performance with theirs. It is a pity that many good studies (in many fields) from the 50's were forgotten over the years. – Itamar Dec 24 '11 at 20:26
The MIC method is based on Mutual information (MI), which quantifies the dependence between the joint distribution of X and Y and what the joint distribution would be if X and Y were independent (See,e.g., the Wikipedia entry). Mathematically, MI is defined as $$MI=H(X)+H(Y)-H(X,Y)$$ where $$H(X)=-\sum_i p(z_i)\log p(z_i)$$ is the entropy of a single variable and $$H(X,Y)=-\sum_{i,j} p(x_i,y_j)\log p(x_i,y_j)$$ is the joint entropy of two variables.
The authors' main idea is to discretize the data onto many different two-dimensional grids and calculate normalized scores that represents the mutual information of the two variables on each grid. The scores are normalized to ensure a fair comparison between different grids and vary between 0 (uncorrelated) and 1 (high correlations).
MIC is defined as the highest score obtained and is an indication of how strongly the two variables are correlated. In fact, the authors claim that for noiseless functional relationships MIC values are comparable to the coefficient of determination ($R^2$).
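To make the ingredients concrete, here is a rough sketch (my own Python/numpy illustration, not the authors' implementation) of the mutual information of a binned joint distribution on one fixed grid; MIC then normalizes such scores and maximizes over many grids:

```python
import numpy as np

def mutual_information(x, y, bins=8):
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()                               # joint distribution p(x, y)
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)      # marginals
    nz = pxy > 0                                   # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])).sum())

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 2000)
print(mutual_information(x, x ** 2))  # clearly positive: nonlinear dependence
```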
I found two good articles explaining the idea of MIC more clearly, in particular this one; here is the second.
As I understood from these reads, you can zoom in to different complexities and scales of relationships between two variables by exploring different combinations of grids; these grids are used to split the two-dimensional space into cells. By choosing the grid that holds the most information on how the cells partition the space, you are choosing the MIC.
I would like to ask @mbq if he could expand on what he called "plot-all-scatterplots-and-pick-those-with-biggest-white-area" and the unreal complexity of $O(M^2)$.
I worry about any statistical method that uses binning. – Frank Harrell Dec 24 '11 at 13:49
http://en.wikipedia.org/wiki/Quasi-Monte_Carlo_method

# Quasi-Monte Carlo method
[Figure: 256 points from a pseudorandom number source, a Halton sequence, and a Sobol sequence (red = 1,…,10; blue = 11,…,100; green = 101,…,256). Points from the Sobol sequence are more evenly distributed.]
In numerical analysis, the quasi-Monte Carlo method is a method for numerical integration and solving some other problems using low-discrepancy sequences (also called quasi-random sequences or sub-random sequences). This is in contrast to the regular Monte Carlo method or Monte Carlo integration, which are based on sequences of pseudorandom numbers.
Monte Carlo and quasi-Monte Carlo methods are stated in a similar way. The problem is to approximate the integral of a function f as the average of the function evaluated at a set of points $x_1, \ldots, x_N$:
$\int_{[0,1]^s} f(u)\,{\rm d}u \approx \frac{1}{N}\,\sum_{i=1}^N f(x_i).$
Since we are integrating over the s-dimensional unit cube, each $x_i$ is a vector of s elements. The difference between quasi-Monte Carlo and Monte Carlo is the way the $x_i$ are chosen. Quasi-Monte Carlo uses a low-discrepancy sequence such as the Halton sequence, the Sobol sequence, or the Faure sequence, whereas Monte Carlo uses a pseudorandom sequence. The advantage of using low-discrepancy sequences is a faster rate of convergence. Quasi-Monte Carlo has a rate of convergence close to $O(1/N)$, whereas the rate for the Monte Carlo method is $O(N^{-0.5})$.[1]
The Quasi-Monte Carlo method recently became popular in the area of mathematical finance or computational finance.[1] In these areas, high-dimensional numerical integrals, where the integral should be evaluated within a threshold ε, occur frequently. Hence, the Monte Carlo method and the quasi-Monte Carlo method are beneficial in these situations.
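A minimal comparison (an illustrative sketch in plain Python, not part of the article) of plain Monte Carlo against quasi-Monte Carlo with a hand-rolled two-dimensional Halton sequence, integrating f(u,v) = uv over the unit square (exact value 1/4):

```python
import random

def radical_inverse(n, base):
    """Van der Corput radical inverse of n in the given base."""
    inv, f = 0.0, 1.0 / base
    while n > 0:
        inv += (n % base) * f
        n //= base
        f /= base
    return inv

def halton_2d(n_points):  # bases 2 and 3 give a 2-D Halton sequence
    return [(radical_inverse(i, 2), radical_inverse(i, 3))
            for i in range(1, n_points + 1)]

f = lambda u, v: u * v
N = 4096
random.seed(1)
mc = sum(f(random.random(), random.random()) for _ in range(N)) / N
qmc = sum(f(u, v) for u, v in halton_2d(N)) / N
print(abs(mc - 0.25), abs(qmc - 0.25))  # the QMC error is typically far smaller
```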
## Approximation error bounds of quasi-Monte Carlo
The approximation error of the quasi-Monte Carlo method is bounded by a term proportional to the discrepancy of the set $x_1, \ldots, x_N$. Specifically, the Koksma-Hlawka inequality states that the error
$\epsilon = | \int_{[0,1]^s} f(u)\,{\rm d}u - \frac{1}{N}\,\sum_{i=1}^N f(x_i) |$
is bounded by
$|\epsilon| \leq V(f) D_N$,
where V(f) is the Hardy-Krause variation of the function f (see Morokoff and Caflisch (1995) [2] for the detailed definitions). $D_N$ is the discrepancy of the set $(x_1,\ldots,x_N)$ and is defined as
$D_N = \sup_{Q \subset [0,1]^s} \left| \frac{\mbox{number of points in } Q}{N} - \mathrm{vol}(Q)\right|$,
where Q is a rectangular solid in [0,1]s with sides parallel to the coordinate axes.[2] The inequality $|\epsilon| \leq V(f) D_N$ can be used to show that the error of the approximation by the quasi-Monte Carlo method is $O\left(\frac{(\log N)^s}{N}\right)$, whereas the Monte Carlo method has a probabilistic error of $O\left(\frac{1}{\sqrt{N}}\right)$. Though we can only state the upper bound of the approximation error, the convergence rate of quasi-Monte Carlo method in practice is usually much faster than its theoretical bound.[1] Hence, in general, the accuracy of the quasi-Monte Carlo method increases faster than that of the Monte Carlo method.
## Monte Carlo and quasi-Monte Carlo for multidimensional integrations
For one-dimensional integration, quadrature methods such as the trapezoidal rule, Simpson's rule, or Newton–Cotes formulas are known to be efficient if the function is smooth. These approaches can also be used for multidimensional integrations by repeating the one-dimensional integrals over multiple dimensions. Cubature is one of the well-known packages using quadrature methods that work well for low-dimensional integration. However, the number of function evaluations grows exponentially as s, the number of dimensions, increases. Hence, a method that can overcome this curse of dimensionality should be used for multidimensional integrations. The standard Monte Carlo method is frequently used when the quadrature methods are difficult or expensive to implement.[2] Monte Carlo and quasi-Monte Carlo methods are accurate and fast when the dimension is high, up to 300 or higher.[3]
Morokoff and Caflisch [2] studied the performance of Monte Carlo and quasi-Monte Carlo methods for integration. In the paper, Halton, Sobol, and Faure sequences for quasi-Monte Carlo are compared with the standard Monte Carlo method using pseudorandom sequences. They found that the Halton sequence performs best for dimensions up to around 6; the Sobol sequence performs best for higher dimensions; and the Faure sequence, while outperformed by the other two, still performs better than a pseudorandom sequence.
However, Morokoff and Caflisch [2] gave examples where the advantage of the quasi-Monte Carlo is less than expected theoretically. Still, in the examples studied by Morokoff and Caflisch, the quasi-Monte Carlo method did yield a more accurate result than the Monte Carlo method with the same number of points. Morokoff and Caflisch remark that the advantage of the quasi-Monte Carlo method is greater if the integrand is smooth, and the number of dimensions s of the integral is small.
## Drawbacks of quasi-Monte Carlo
Lemieux mentioned the following drawbacks of quasi-Monte Carlo:[4]
• In order for $O\left(\frac{(\log N)^s}{N}\right)$ to be smaller than $O\left(\frac{1}{\sqrt{N}}\right)$, $s$ needs to be small and $N$ needs to be large.
• For many functions arising in practice, $V(f) = \infty$.
• We only know an upper bound on the error (i.e., $\epsilon \le V(f) D_N$), and it is difficult to compute $D_N^*$ and $V(f)$.
In order to overcome these difficulties, we can use a randomized quasi-Monte Carlo method.
## Randomization of quasi-Monte Carlo
Since low-discrepancy sequences are not random but deterministic, the quasi-Monte Carlo method can be seen as a deterministic algorithm or derandomized algorithm. In this case, we only have the bound (e.g., $\epsilon \le V(f) D_N$) for the error, and the error is hard to estimate. In order to recover our ability to analyze and estimate the variance, we can randomize the method (see randomization for the general idea). The resulting method is called the randomized quasi-Monte Carlo method and can also be viewed as a variance reduction technique for the standard Monte Carlo method.[5] Among several methods, the simplest transformation procedure is random shifting. Let $\{x_1,\ldots,x_N\}$ be the point set from the low discrepancy sequence. We sample an s-dimensional random vector U and mix it with $\{x_1,\ldots,x_N\}$. In detail, for each $x_j$, create
$y_{j} = x_{j} + U \pmod 1$
and use the sequence $(y_{j})$ instead of $(x_{j})$. If we have R replications for Monte Carlo, we sample an s-dimensional random vector U for each replication. The drawback of randomization is the sacrifice of computation speed. Since we now use a pseudorandom number generator, the method is slower. Still, randomization is useful since the variance and the computation speed are slightly better than those of standard Monte Carlo, as shown by the experimental results in Tuffin (2008).[6]
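As a sketch of this random-shift procedure (an illustration added here, not from the article; it assumes a list of equal-length point tuples, such as the Halton points from the earlier sketch):

```python
import random

def random_shift(points, seed=0):
    """Shift every point by one random vector U, coordinate-wise mod 1."""
    rng = random.Random(seed)
    U = [rng.random() for _ in points[0]]  # one s-dimensional U per replication
    return [tuple((x + u) % 1.0 for x, u in zip(p, U)) for p in points]
```

Drawing a fresh U for each replication gives independent estimates whose spread can be used to estimate the error, as described above.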
## Software
MATLAB provides functions to generate numbers from low-discrepancy sequences such as the Halton sequence or the Sobol sequence (see MATLAB: Generating Quasi-Random Numbers).
## References
1. Søren Asmussen and Peter W. Glynn, Stochastic Simulation: Algorithms and Analysis, Springer, 2007, 476 pages
2. William J. Morokoff and Russel E. Caflisch, Quasi-Monte Carlo integration, J. Comput. Phys. 122 (1995), no. 2, 218–230
3. Rudolf Schürer, A comparison between (quasi-)Monte Carlo and cubature rule based methods for solving high-dimensional integration problems, Mathematics and Computers in Simulation, Volume 62, Issues 3–6, 3 March 2003, 509–517
4. Christiane Lemieux, Monte Carlo and Quasi-Monte Carlo Sampling, Springer, 2009, ISBN 978-1441926760
5. Moshe Dror, Pierre L’Ecuyer and Ferenc Szidarovszky, Modeling Uncertainty: An Examination of Stochastic Theory, Methods, and Applications, Springer 2002, pp. 419-474
6. Bruno Tuffin, Randomization of Quasi-Monte Carlo Methods for Error Estimation: Survey and Normal Approximation, Monte Carlo Methods and Applications mcma. Volume 10, Issue 3-4, Pages 617–628, ISSN (Online) 1569-3961, ISSN (Print) 0929-9629, DOI: 10.1515/mcma.2004.10.3-4.617, May 2008
http://mathhelpforum.com/pre-calculus/72382-solved-combining-functions.html

# Thread:
1. ## [SOLVED] Combining functions
Given the graph of a function with a few clear ordered pairs, call it f(x), can one use those given points on the graph to sketch (f+f)(x)? Add y values? Add x values? Both?
2. Originally Posted by EyesForEars
Given the graph of a function with a few clear ordered pairs. Call it f(x). Can one use those given points on the graph, to sketch (f+f)(x)? Add y values? Add x values? Both?
double the y-values (the second coordinate in each pair). do you see why this works?
Honestly no. I wish I could. The coordinates are (-2,2.5), (-1,2), (0,1.5), (1,1), (3,-1). So when you say double, do you mean multiply? So that last coordinate would be (3,-2)? Also, I don't know if it matters, but just looking at it I can tell you it's piecewise. It's linear for the first four points, then it turns curvilinear from (1,1) to (3,-1).
4. Originally Posted by EyesForEars
Honestly no. I wish I could. The coordinates are (-2,2.5), (-1,2), (0,1.5), (1,1), (3,-1). So when you say double, do you mean multiply?
yes, double means multiply by two. do you realize that (f + f)(x) = f(x) + f(x) = 2f(x)...or in other words, 2 times the y-value.
So that last coordinate would be (3,-2)?
yes
Also, I don't know if it matters, but just looking at it I can tell you it's piecewise. It's linear for the first four points, then it turns curvilinear from (1,1) to (3,-1)
no, we cannot tell from this. we just have discrete points here. descriptions like "piece-wise" do not apply. if these points are just a few points given for a non-discrete function, then we cannot tell whether the function is piece-wise or not here.
Ah! I think I get it! Tell me if I'm on the right track. $y=ax+b$. So, if you're going to add $ax+b$ to itself you would get $2ax+2b$. Hence, $2f(x)$, or $2y$?
6. Originally Posted by EyesForEars
Ah! I think I get it! Tell me if I'm on the right track. $y=ax+b$. So, if you're going to add $ax+b$ to itself you would get $2ax+2b$. Hence, $2f(x)$, or $2y$?
Exactly! Operations on functions may look bad, but they're really pretty simple. The "(f + f)(x)" means nothing more than "f(x) + f(x)", or "2f(x)". So if f(x) = mx + b, then (f + f)(x) = 2f(x) = 2(mx + b) = 2mx + 2b.
Good work!
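A tiny check of the doubling rule at the thread's sample points (plain Python, added here for illustration):

```python
points = [(-2, 2.5), (-1, 2), (0, 1.5), (1, 1), (3, -1)]
print([(x, 2 * y) for x, y in points])
# [(-2, 5.0), (-1, 4), (0, 3.0), (1, 2), (3, -2)]
```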
http://unapologetic.wordpress.com/2007/04/10/endomorphism-rings/?like=1&source=post_flair&_wpnonce=5d5a17dad5

# The Unapologetic Mathematician
## Endomorphism rings
Today I want to set out an incredibly important example of a ring. This example (and variations) come up over and over and over again throughout mathematics.
Let’s start with an abelian group $G$. Now consider all the linear functions from $G$ back to itself. Remember that “linear function” is just another term for “abelian group homomorphism” — it’s a function that preserves the addition — and that we call such homomorphisms from a group to itself “endomorphisms”.
As for any group, this set has the structure of a monoid. We can compose linear functions by, well, composing them. First do one, then do the other. We define the operation by $\left[f\circ g\right](x)=f(g(x))$ and verify that the composition is again a linear function:
$\left[f\circ g\right](x+y)=f(g(x+y))=f(g(x)+g(y))=$
$f(g(x))+f(g(y))=[f\circ g](x)+[f\circ g](y)$
This composition is associative, and the function that sends every element of $G$ to itself is an identity, so we do have a monoid.
Less obvious, though, is the fact that we can add such functions. Just add the values! Define $\left[f+g\right](x)=f(x)+g(x)$. We check that this is another endomorphism:
$\left[f+g\right](x+y)=f(x+y)+g(x+y)=f(x)+f(y)+g(x)+g(y)=$
$f(x)+g(x)+f(y)+g(y)=[f+g](x)+[f+g](y)$
Now this addition is associative. Further, the function $0$ sending every element of $G$ to the element $0$ of $G$ is an additive identity, and the function $\left[-f\right](x)=-f(x)$ is an additive inverse. The collection of endomorphisms with this addition becomes an abelian group.
So we have two structures: an abelian group and a monoid. Do they play well together? Indeed!
$\left[(f_1+g_1)\circ(f_2+g_2)\right](x)=\left[f_1+g_1\right](\left[f_2+g_2\right](x))=$
$f_1(f_2(x)+g_2(x))+g_1(f_2(x)+g_2(x))=$
$f_1(f_2(x))+f_1(g_2(x))+g_1(f_2(x))+g_1(g_2(x))=$
$\left[f_1\circ f_2\right](x)+\left[f_1\circ g_2\right](x)+\left[g_1\circ f_2\right](x)+\left[g_1\circ g_2\right](x)=$
$\left[f_1\circ f_2+f_1\circ g_2+g_1\circ f_2+g_1\circ g_2\right](x)$
showing that composition distributes over addition.
So the endomorphisms of an abelian group $G$ form a ring with unit. We call this ring ${\rm End}(G)$, and like I said it will come up everywhere, so it’s worth internalizing.
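As a concrete check (a Python sketch added here, not part of the post), one can take $G=\mathbb{Z}/6\mathbb{Z}$, whose endomorphisms are exactly the multiplication maps, and verify the distributivity computation above pointwise:

```python
n = 6

def endo(k):       # the endomorphism x -> k·x (mod n)
    return lambda x: (k * x) % n

def add(f, g):     # [f + g](x) = f(x) + g(x)
    return lambda x: (f(x) + g(x)) % n

def compose(f, g): # [f ∘ g](x) = f(g(x))
    return lambda x: f(g(x))

f1, g1, f2, g2 = endo(2), endo(3), endo(4), endo(5)
lhs = compose(add(f1, g1), add(f2, g2))
rhs = add(add(compose(f1, f2), compose(f1, g2)),
          add(compose(g1, f2), compose(g1, g2)))
assert all(lhs(x) == rhs(x) for x in range(n))  # composition distributes over addition
```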
Posted by John Armstrong | Ring theory
http://physics.stackexchange.com/questions/51285/qubit-initial-state

# Qubit initial state
Suppose that a qubit in an initial state that we don't know was measured, and the result was 1. Is it possible to know the initial state of the qubit from the measured result? And if the result had been 0?
## 1 Answer
No. The only thing you know is that $\langle \psi | 1\rangle\neq 0$.
I have updated the question. – João Reis Jan 15 at 14:42
Sorry, but there's no physical difference between 0 and 1, they are just labels. So the answer is the same. You would know that $\langle\psi|0\rangle\neq0$. This is the interesting thing about quantum mechanics. A measurement usually can't tell you everything about the state. If you have any doubt I can explain you some details. – Bzazz Jan 15 at 14:46
I was just thinking about that, that the only thing we would know is that $\langle \psi | 0\rangle\neq 0$ too. Thank you! – João Reis Jan 15 at 14:50
You're welcome. – Bzazz Jan 15 at 14:51
If you've got a prior distribution for the qubit's initial state, doesn't the measurement give you additional information that you can use to update your prior using Bayes? So although you don't know everything about the initial state, you know a little more about it than you did before. – EnergyNumbers Jan 15 at 15:01
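A toy version of the Bayesian update suggested in that last comment (a numpy sketch added here, not from the thread): put a discrete prior over a few candidate states and update it with the Born-rule likelihood of outcome 1.

```python
import numpy as np

thetas = np.linspace(0.0, np.pi, 5)   # states cos(θ/2)|0> + sin(θ/2)|1>
prior = np.full(len(thetas), 1 / len(thetas))
likelihood = np.sin(thetas / 2) ** 2  # P(result 1 | θ), by the Born rule
posterior = prior * likelihood
posterior /= posterior.sum()
print(posterior)  # θ = 0 (where <ψ|1> = 0) is ruled out; mass shifts toward |1>
```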
http://physics.stackexchange.com/questions/tagged/yang-mills+lattice-model

Tagged Questions
SU(2) critical point and volume dependence
I am doing multi-dimensional plots of $\beta_j$ for SU(2) for infinite volume to understand the flow behavior and I was wondering, before I go too much further, if anyone knew off the top of their ...
What evidence do we have for S-duality in N=4 Super-Yang-Mills?
Do we have anything resembling a proof*? Or is it just a collection of "coincidences"? Also, do we have evidence from lattice gauge theory computations? *Of course I'm not talking about a proof in ...
http://mathoverflow.net/revisions/87731/list

## Return to Answer
My answer is in agreement with Grothendieck that topological spaces may be seen as inadequate for many geometric, and in particular, homotopical purposes. Round about 1970, I spent 9 years trying to generalise the fundamental groupoid of a topological space to dimension 2, using a notion of double groupoid to reflect the idea of "algebraic inverse to subdivision" and in the hope of proving a 2-dimensional van Kampen type theorem. In discussion with Philip Higgins in 1974 we agreed that:
1) Whitehead's theorem on free crossed modules, that $\pi_2(X \cup {e^2_\lambda},X,x)$ was a free crossed $\pi_1(X,x)$-module, was an instance of a 2-dimensional universal property in homotopy theory.
2) If our proposed theories were to be any good, then Whitehead's theorem should be a corollary.
However we observed that Whitehead's theorem was about relative homotopy groups. So we tried to define a homotopy double groupoid of a pair of pointed spaces, mapping a square into $X$ in which the edges go to $A$ and the vertices to the base point, and taking homotopy classes of such maps. This worked like a dream, and we were able to formulate and prove our theorem, published after some delays (and in the teeth of opposition!) in 1978.
We could then see how to generalise this to filtered spaces, but the proofs needed new ideas, and were published in 1981; this and subsequent work has evolved into the book "Nonabelian algebraic topology", published last August.
Contact with Loday who had defined a special kind of $(n+1)$-fold groupoid for an $n$-cube of spaces led to a more powerful van Kampen Theorem, with a totally different type of proof, published jointly in 1987. This allows for calculations of some homotopy $n$-types, and has as a Corollary an $n$-ad connectivity theorem, with a calculation of the critical (nonabelian!) $n$-ad homotopy group, as has been made more explicit by Ellis and Steiner, using the notion of a crossed $n$-cube of groups.
Thus we could get useful strict homotopy multiple groupoids for kinds of structured spaces, allowing calculations not previously possible.
In this way, Grothendieck's view is verified that as spaces with some kind of structure arise naturally in geometric situations, there should be advantages if the algebraic methods take proper cognisance of this structure from the start. That is, one should consider the data which define the space of interest.
http://mathhelpforum.com/calculus/156097-solve-y.html

# Thread:
1. ## Solve for y...
I'm doing differential equations and got stuck on this part.
How do I solve for y if I have the equation:
-siny = x^2 + C
Thanks
2. Originally Posted by jzellt
I'm doing differential equations and got stuck on this part.
How do I solve for y if I have the equation:
-siny = x^2 + C
Thanks
$-\sin y = x^2 + C$
$\sin y = C-x^2$
$y = \sin^{-1}(C-x^2)$
3. Originally Posted by pickslides
$y = \sin^{-1}(C-x^2)$
Shouldn't it be minus c as well?
Here's what I got:
$-\sin y = x^2 + C$
$\sin y = -C-x^2$
$y = \sin^{-1}(-C-x^2)$
OR
$y = -\sin^{-1}(C+x^2)$
4. Originally Posted by Educated
Shouldn't it be minus c as well?
It's a constant $\displaystyle c\in \mathbb{R}$; why should it be negative?
5. Maybe this is why I'm still in high school.
Just ignore me then... I haven't learnt those things yet.
EDIT:
Wait...
$\displaystyle c\in \mathbb{R}$
c is an element of the real numbers...
Isn't -c a real number? Why isn't -c allowed?
6. Originally Posted by Educated
c is an element of the real numbers...
Therefore it can be positive or negative.
7. Since C is an arbitrary real number, it doesn't matter whether C is positive or negative, and it doesn't matter whether we call it "C" or "-C". The same thing happens often with exponentials. If you have a solution to an equation of the form $e^{x+C}$ where C is an arbitrary constant, you can write that as $e^{x+C}= e^C e^x$ or simply as $C e^x$. Strictly speaking, we should use a different symbol, say C', with the explanation that $C'= e^C$, but typically, knowing that they are both just arbitrary numbers, that is not done.
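A quick numerical check of these equivalent forms (plain Python, added here for illustration; C = -1.2 and x = 0.5 are arbitrary sample values):

```python
import math

C, x = -1.2, 0.5
y = math.asin(-C - x ** 2)       # Educated's form, with the explicit minus sign
print(-math.sin(y), x ** 2 + C)  # both -0.95: the original equation holds
# Renaming the constant (C' = -C) gives pickslides' form asin(C' - x**2).
```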
http://programmingpraxis.com/2011/01/11/two-integrals/

# Programming Praxis
A collection of etudes, updated weekly, for the education and enjoyment of the savvy programmer
## Two Integrals
### January 11, 2011
The exponential integral appears frequently in the study of physics, and the related logarithmic integral appears both in physics and in number theory. With the Euler-Mascheroni constant γ = 0.5772156649015328606065, formulas for computing the exponential and logarithmic integral are:
$\mathrm{Ei}\ (x) = -\int_{-x}^{\infty}\frac{e^{-t}\ d\ t}{t} = \gamma + \mathrm{ln}\ x + \sum_{k=1}^{\infty}\frac{x^k}{k \cdot k!}$
$\mathrm{Li}\ (x) = \mathrm{Ei}\ (\mathrm{ln}\ x) = \int_{0}^{x}\frac{d\ t}{\mathrm{ln}\ t} = \gamma + \mathrm{ln}\ \mathrm{ln}\ x + \sum_{k=1}^{\infty}\frac{(\mathrm{ln}\ x)^k}{k \cdot k!}$
Since there is a singularity at Li(1) = −∞, the logarithmic integral is often given in an offset form with the integral starting at 2 instead of 0; the two forms of the logarithmic integral are related by Li_offset(x) = Li(x) − Li(2) = Li(x) − 1.04516378011749278. It is this form that we are most interested in, because the offset logarithmic integral is a good approximation of the prime counting function π(x), which computes the number of primes less than or equal to x:
| $x$            | $10^6$ | $10^{21}$            |
|----------------|--------|----------------------|
| Li_offset($x$) | 78627  | 21127269486616126182 |
| $\pi(x)$       | 78498  | 21127269486018731928 |
If you read the mathematical literature, you should be aware that there is some notational confusion about the two forms of the logarithmic integral: some authors use Li for the logarithmic integral and li for its offset variant, other authors turn that convention around, and still other authors use either notation in either (or both!) contexts. The good news is that in most cases it doesn’t matter which variant you choose.
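As an aside (not part of the exercise), the constants and table values above are easy to cross-check numerically: mpmath ships a logarithmic integral, and its offset=True keyword computes li(x) − li(2):

```
from mpmath import li

print(li(2))                   # 1.04516378011749..., the offset constant
print(li(10**6))               # 78627.549..., cf. 78627 in the table
print(li(10**6, offset=True))  # 78626.504...; which variant is called
                               # "offset" depends on the convention above
```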
Your task is to write functions that compute the exponential integral and the two forms of the logarithmic integral. When you are finished, you are welcome to read or run a suggested solution, or to post your own solution or discuss the exercise in the comments below.
### 5 Responses to “Two Integrals”
1. January 11, 2011 at 9:48 AM
[...] today’s Programming Praxis exercise, our task is to write functions to calculate the exponential and [...]
2. Remco Niemeijer said
January 11, 2011 at 9:48 AM
My Haskell solution (see http://bonsaicode.wordpress.com/2011/01/11/programming-praxis-two-integrals/ for a version with comments):
```
ei :: Double -> Double
ei x = 0.5772156649015328606065 + log x +
sum (takeWhile (> 1e-17) [x**k / k / product [1..k] | k <- [1..]])
li :: Double -> Double
li = ei . log
liOffset :: Double -> Double
liOffset x = li x - li 2
```
3. Graham said
January 11, 2011 at 3:39 PM
My Python solution. This blog and my free time studies are drawing me more and
more towards Scheme and Haskell, but since there are two great solutions in
those languages already I felt I should offer a solution in a different
language. I’ve moved towards the newer “format” instead of the older printf
style string formatting.
```
#!/usr/bin/env python
from __future__ import division
import math

GAMMA = 0.5772156649015328606065   # Euler-Mascheroni constant

def exp_int(x):
    # Ei(x) = GAMMA + ln(x) + sum over k >= 1 of x^k / (k * k!)
    s = GAMMA + math.log(x)
    term, k, f = x, 1, 1           # invariants: f = k!, term = x^k / (k * k!)
    while term > 1e-17:
        s += term
        k += 1
        f *= k
        term = pow(x, k) / (k * f)
    return s

def log_int(x):
    return exp_int(math.log(x))    # Li(x) = Ei(ln x)

def offset_log_int(x):
    return log_int(x) - 1.04516378011749278   # subtract Li(2)

if __name__ == "__main__":
    print "Li_offset(1e6) = {0:d}".format(int(round(offset_log_int(1e6))))
    print "Li_offset(1e21) = {0:d}".format(int(round(offset_log_int(1e21))))

# Output:
# Li_offset(1e6) = 78627
# Li_offset(1e21) = 21127269486616088576
```
4. bubo said
January 14, 2011 at 6:29 AM
this is the best blog on the whole internet. thank you!
5. GameSeven said
January 19, 2011 at 6:36 AM
Why is 1e-17 chosen as the stopping point in all of your code?
http://stats.stackexchange.com/questions/16074/how-to-analyse-this-data-obtained-from-a-simple-physics-experiment-on-attractive/16079

# How to analyse this data obtained from a simple physics experiment on attractive forces?
I did a simple physics experiment that measures the attractive force a plate experiences towards the other plate as a function of the applied voltage and distance between the plates. Now I have to know whether the gathered data confirms the relation $F \propto V^2/d^2$. Suppose the following is what I gathered.
```
voltage\distance 8.000 10.000 12.000 14.000 16.000
4.0 3.3 2.0 1.7 1.2 0.8
6.0 8.1 4.8 3.8 2.5 1.9
8.0 13.4 9.2 6.0 4.4 3.5
10.0 22.9 14.2 9.4 7.0 5.6
12.0 32.7 20.1 13.5 10.5 7.9
```
The uncertainties are $\pm 0.05$ for the voltage, $\pm 0.005$ for the distance, and $\pm 0.1$ for the force.
As far as I know, there are a few ways to analyze the data with R.
1. `lm(log(force) ~ log(voltage) + log(distance))`
2. `lm(I(force * distance^2) ~ -1 + I(voltage^2))`
3. `aov(I(force * voltage^-2 * distance^2) ~ voltage * distance)`
Which one should I use?
## 1 Answer
If you think that the errors of measurement (that you referred to as "uncertainties") are on the relative scale, then analysis in logs (model 1 on your list) will be the only one that will likely handle heteroskedasticity (non-constant variance, one of the assumptions of the textbook linear regression and ANOVA models). Other than that, the most important issue is to perform the analysis of residuals. That is, you would want to check for any remaining patterns that may give you an indication that your $V^2/d^2$ model is not performing well. With model 1, you can also test for the coefficients to be equal to 2 and -2, respectively, as another indication of the model performance. A test of the functional form should also be easily available from model 3: any deviation from the null intercept-only model is bad. On these accounts, model 2 is probably the weakest one, in the sense of allowing you rather little diagnostic capabilities per se. I would personally run several different forms, to see if they all agree that your data support your physical model.
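For what it's worth, here is a minimal sketch of model 1 fitted in Python rather than R, using the data from the question (statsmodels' OLS plays the role of lm; the two t-tests check the exponents 2 and -2 predicted by $F \propto V^2/d^2$):

```
import numpy as np
import statsmodels.api as sm

V = np.array([4.0, 6.0, 8.0, 10.0, 12.0])
d = np.array([8.0, 10.0, 12.0, 14.0, 16.0])
F = np.array([[ 3.3,  2.0,  1.7,  1.2,  0.8],
              [ 8.1,  4.8,  3.8,  2.5,  1.9],
              [13.4,  9.2,  6.0,  4.4,  3.5],
              [22.9, 14.2,  9.4,  7.0,  5.6],
              [32.7, 20.1, 13.5, 10.5,  7.9]])

VV, dd = np.meshgrid(V, d, indexing="ij")  # rows: voltage, cols: distance
X = sm.add_constant(np.column_stack([np.log(VV.ravel()),
                                     np.log(dd.ravel())]))
fit = sm.OLS(np.log(F.ravel()), X).fit()

print(fit.params)                    # expect roughly [const, 2, -2]
print(fit.t_test(([0, 1, 0], [2])))  # H0: exponent of voltage is 2
print(fit.t_test(([0, 0, 1], [-2]))) # H0: exponent of distance is -2
```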
http://math.stackexchange.com/questions/6495/a-differential-equation

# A differential equation
Think of $t$ and $r$ as two independent variables.
• Suppose $E$ be a function of $r$ and $V~$ be a function of $(t,r)$ such that both go to $0$ at $r=0$.
• There exists a positive function $M(r)$ such that $M(0)=0$ and $V(t,r) = -\dfrac{M(r)}{R(t,r)}$ where $R$ is another positive function such that $R(0,r)=r$.
• Let $p(r) = \dfrac{E(r)}{V(0,r)}$ be a function regular at $r=0$ such that $p(0) \in (-\infty,1)$.
• Also define a function $a$ of $r$ such that, $a(r) = \dfrac{M(r)}{\dfrac{4}{3}\pi r^3}$. Then $a$ is also a positive definite function with a well-defined value at $r=0$.
• Define $\alpha = a(0)$
Now look at this differential equation,
$$\frac{\dot{R}^2}{2} + V(t,r) = E(r)$$
Apparently this differential equation has a solution of the form,
$$\frac{t}{t_0} = \sqrt{\frac{\alpha}{a(r)}}\frac{F(p(r))}{F(p(0))} \left [1 - \left ( \dfrac{R(t,r)}{r} \right)^{\dfrac{3}{2}}~\cdot~\dfrac{F\left(~~ \dfrac{p(r)R(t,r)}{r} \right) }{F(p(r))} \right ]$$
where $t_0 = \sqrt {\dfrac{3}{8\pi \alpha}} F(p(0))$
and the function $F$ is defined over the interval $(-\infty,1)$ as,
$$F(x) = \left\{ \begin{array}{cc} -\frac{\sqrt{1-x}}{x} - \frac{1}{(-x)^{3/2}} \tanh^{-1} \left[ \sqrt{\frac{x}{x-1}} \right] & x<0 \\ \frac{2}{3} & x=0 \\ \frac{1}{x^{3/2}}\tan^{-1} \left[ \sqrt{\frac{x}{1-x}} \right] - \frac{\sqrt{1-x}}{x} & 0<x<1 \end{array} \right.$$
How does one get the above solution?
What do you mean by a "positive definite function"? And how does that reconcile with the notion that $M(0) = 0$? – Willie Wong♦ Oct 11 '10 at 22:45
@Willie Thanks for pointing out the typo. I have corrected it. Any help with solving this differential equation? – Anirbit Oct 13 '10 at 6:26
I've fixed your latex: the problem was that you needed to \-escape a \{ and all \\, and that the < sign has to be written in some cases using an HTML escape (for silly reasons! Of course, in comments the rules are different...) – Mariano Suárez-Alvarez♦ Oct 13 '10 at 20:26
What is $\dot{R}$, is it $\partial R/\partial t$? Also, there seems to be a lot of interdependence among the definitions; can you tell us what is given for a particular instance of the problem, and what is to be determined? Better yet, if you could provide the original source or motivation for this problem, it might make things clearer. – Rahul Narain Oct 13 '10 at 20:45
@Mariano Thanks for correcting the LaTeXing. @Rahul Yes, $\dot{R}$ is the partial derivative of $R$ with respect to $t$. I didn't understand the second part of your query. I think I have completely defined all the quantities in question. Do you see any ambiguities? This is taken from a paper and I am giving you the reference if that helps: "Strength of naked singularities in Tolman-Bondi spacetimes" by R.P.A.C. Newman in Class. Quantum Grav. 3 (1986) 527-539. The third page of the paper has this. Will be happy to get back any help. – Anirbit Oct 14 '10 at 4:33
## 1 Answer
I'm sort of just guessing here (partly based on the solution already found). By explicitly plugging in $V(t,r) = - \frac{M(r)}{R(t,r)}$, you arrive at the ordinary differential equation (for each fixed $r$) for $R$ as
$$\dot{R}^2 - \frac{2M}{R} = 2 E$$
where $M$ and $E$ are constants in time. Now, re-scale the original equation by $t = \lambda s$ and $R = \mu \rho$. Then $\partial_t R = \frac{\mu}{\lambda} \partial_s \rho$. Then you can solve $(\frac{\mu}{\lambda})^2 = \frac{2M}{\mu} = 2E$ to reduce the equation to
$$\dot{\rho}^2 - \frac{1}{\rho} = 1$$
(the weights $\mu$ and $\lambda$ will, roughly speaking, give you the weights $p(r)$ and $a(r)$ in your question). Now note that this scaling degenerates if $E = 0$. In the case that $E = 0$, the equation can be solved by quadrature:
$$\dot{x}^2 = x^{-1} \Rightarrow \sqrt{x} dx = dt \Rightarrow x^{3/2} \sim t$$
This gives the solution to the homogeneous case. In the inhomogeneous case, you take that as a sort of integrating factor: assume that $\rho^{3/2} f(\rho) \sim t$, this implies that
$$\dot{\rho} \left( \rho^{3/2} f(\rho) \right)' = 1$$
we plug this into the equation, which we first re-arrange as
$$\dot{\rho} = \sqrt{\frac{1+\rho}{\rho}}$$
and we conclude that
$$\sqrt{\frac{\rho}{1+\rho}} = \frac{d}{d\rho}( \rho^{3/2} f(\rho))$$
and you solve this by directly integrating it. I think this $f$ you find should be exactly the $F$ you wrote down above (I didn't check it myself). Note that due to the singular weight $\rho$ which degenerates as $\rho \to 0$, you will have to separately integrate in the regime where $\rho > 0$ and $\rho < 0$. The existence and uniqueness of solution is guaranteed by the theory of Fuchsian ODEs.
Thanks a lot for this help. Are you defining $f$ as, $f(\rho) = \frac{kt}{\rho ^ \frac{3}{2}}$ ? (for some constant $k$) Since there are no other replies I am giving away the bounty to you. (Though I am yet to understand your solution completely) – Anirbit Oct 19 '10 at 6:18
Actually I think I defined it with $k = 1$. The constant $k$ can be absorbed into the (implicit) definition of $\rho$, by rescaling $\rho$; so it is not too important. The reason I wrote $\sim t$ instead of $=t$ is just because there can be a constant term: $f(\rho) = (kt + C) / \rho^{3/2}$, which doesn't change the derivative. (In fact, I think in the final answer, to match the boundary conditions there may need to be a $C$.) – Willie Wong♦ Oct 19 '10 at 10:06
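For completeness, the quadrature in the last step can be done in closed form. This is my own filling-in of the omitted integration (checkable by differentiation), not part of the original answer:

$$\rho^{3/2} f(\rho) = \int \sqrt{\frac{\rho}{1+\rho}}\, d\rho = \sqrt{\rho(1+\rho)} - \sinh^{-1}\sqrt{\rho} + C$$

Up to the signs and weights absorbed by the rescaling (which is what separates the $x<0$ and $0<x<1$ branches), this reproduces the inverse-hyperbolic/inverse-trigonometric structure of the function $F$ in the question.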
http://math.stackexchange.com/questions/87956/unionnew-intersection-of-any-number-of-open-sets-is-also-open/87959

# Union (new: intersection) of any number of open sets is also open
I've just begun reading Spivak's Calculus on Manifolds and attempted to prove this simple result.
-I've updated my proof-
My proof is as follows (it was posted as images in the original question and is not reproduced here).
My proof for the intersection case still looks kinda dubious though.
@Deven Ware, the notation $N_{\epsilon}(x)$ looks very useful to me, but I haven't seen it anywhere. Which branch of math is it found in, and where can I learn more about it?
Just let take $x\in\bigcup_{i\in I}G_i$ where $G_i$ open. Then $x\in G_i$ for some $i$ and hence $\exists N_{\epsilon}(x) \subset G_i \subset \bigcup_{i\in I}G_i$. A more interesting question to consider is that a finite intersection of open sets is also open. (just notice that if $x\in\bigcap_{i\in I}G_i$ ($I$ finite) then $\exists N_{\epsilon_i}(x)\subset G_i$ for each $i$ and so take $k = \min{\epsilon_i}$ then $N_{k}(x) \subset \bigcap_{i\in I}G_i$.) – Deven Ware Dec 3 '11 at 7:24
Ah, I'm sorry if the notation was a bit confusing but it means "Neighborhood of radius $\epsilon$" or "ball of radius $\epsilon$" it is from analysis (I learned it from baby Rudin) – Deven Ware Dec 3 '11 at 19:05
## 2 Answers
Your proof doesn't seem to be quite correct. Note that as your definition you have that $U \subseteq \mathbb{R}^n$ is open iff for each $x \in U$ there is an open rectangle $A = (a_1,b_1) \times \cdots \times (a_n,b_n)$ containing $x$ such that $A \subseteq U$. This means that for every point $x$ of $U$ you have to find such an open rectangle, and the choice of rectangle may depend on the choice of $x$. Thus, "picking $A$ to work for $U$ and $B$ to work for $V$" doesn't quite make sense. What you need to do is first pick the $x$ from the set you wish to show is open, and then show that there is an open rectangle that would work for this particular $x$.
The "trick" is to note that if $\{ U_i : i \in I \}$ is any family of sets, then $x \in \bigcup_{i \in I} U_i$ iff there is an $i \in I$ such that $x \in U_i$, and also note that if $A \subseteq U_i$ for some $i$, then $A \subseteq \bigcup_{i \in I} U_i$.
I think this should lead you in the right direction.
For the infinite case, if the family is uncountable, then we cannot use mathematical induction. Here is the proof for the general case (whether it is finite or infinite, countable or uncountable): Let $\{U_i\}_{i\in I}$ be a family of open sets. Here $I$ can be finite, infinite, countable, or uncountable. We want to prove that $\cup_{i\in I}U_i$ is open.
To prove this, let $x\in\cup_{i\in I}U_i$. Therefore, $x\in U_i$ for some $i\in I$. Since $U_i$ is open by assumption, there exists an open rectangle $(a_1,b_1)\times(a_2,b_2)\times\cdots\times (a_n,b_n)$ such that $$x\in (a_1,b_1)\times(a_2,b_2)\times\cdots\times (a_n,b_n)\subset U_i.$$ This implies that $$x\in (a_1,b_1)\times(a_2,b_2)\times\cdots\times (a_n,b_n)\subset U_i\subset\cup_{i\in I}U_i.$$ Since $x$ is arbitrary, we have proved that $\cup_{i\in I}U_i$ is open.
http://physics.stackexchange.com/questions/13404/capacitance-of-two-cocentric-spheres-contradicting-results

# Capacitance of two concentric spheres, contradicting results
Suppose we are given two conducting, concentric spheres of radius $a_1$ and $a_2$ respectively, the inner sphere with charge $q$ and the outer sphere with charge $-q$.
I can calculate the capacitance of this system by calculating the potential difference $U$ between the plates and then use the definition $C = q / U$. This is easy enough and indeed gives me the right result
$$C = \dfrac{4\pi \epsilon_0}{a_1^{-1} - a_2^{-1}}$$
But now I tried to caluclate $C$ by finding out the energy stored in the system in two different ways: First we have the expression $$W_{el} = \frac12 \frac {q^2}C$$ for the energy stored in any capacitor. Now I wanted to compare this to the energy stored in the electric field (since that's where the energy is, right?!) $$E(r) = \begin{cases}\frac1{4\pi \epsilon_0} \frac q{r^2} & a_1 < r < a_2 \\ 0 & \text{otherwise}\end{cases}$$ to derive the above formula for $C$ again. The energy density is $w_{el} = \frac {\epsilon_0} 2 E^2$, therefore the energy stored in the electric field is \begin{eqnarray*} W_{el} &=& \int w_{el} \;\mathrm{d}V \\ &=& 4\pi \int_{a_1}^{a_2} \frac{\epsilon_0}2 \left(\frac1{4\pi \epsilon_0} \frac q{r^2}\right)^2 r \; \mathrm dr \\ &=& \frac1{4\pi \epsilon_0}\frac{q^2}{2} \int_{a_1}^{a_2} \frac{\mathrm d r}{r^3} \\ &=& \frac1{4\pi \epsilon_0}\frac{q^2}{2} \frac14\left(\frac 1{a_1^4} - \frac1{a_2^4} \right) \end{eqnarray*}
Comparing the two expressions gives
$$C = \dfrac{16\pi \epsilon_0}{a_1^{-4} - a_2^{-4}}$$
So my question is: Why is this wrong? Where is the energy in this capacitor stored, if not in the electric field (as it doesn't seem to be - unless I have made a mistake in deriving the result somewhere...)?
Edit: I also noticed, that the second result can be rewritten as
$$C = \dfrac{4\pi \epsilon_0}{a_1^{-1} - a_2^{-1}}\frac 4{a_1^{-3} + a_1^{-2}a_2^{-1} + a_1^{-1}a_2^{-2} + a_2^{-3}}$$
but I don't know whether this has any significance.
Thanks for reading and any help will be greatly appreciated! :)
The volume integral is $dV = 4\pi r^2 dr$. You missed a power of $r$. I think that should fix it. – user1631 Aug 11 '11 at 20:18
Also, check your integration formula. – user1631 Aug 11 '11 at 20:21
@user1631: Thanks. =) That fixed it. My brain seems to be in - as we say in German - Energiesparmodus ($\approx$ energy saver mode). – Sam Aug 11 '11 at 20:36
## 1 Answer
Two things stand out to my eye:
1) In your volume integral, when you switched to spherical coordinates, you put in a $4 \pi r \ dr$ where there should be a $4 \pi r^2 dr$.
2) Your evaluation of the 1-D integral is not quite right.
I think these two are enough to explain the discrepancy.
Ah, lol. Now I see it, too... Thanks! Correcting these silly mistakes gives the right answer, yay! – Sam Aug 11 '11 at 20:17
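For the record, here is a sketch of the corrected computation with both fixes applied ($dV = 4\pi r^2\, dr$ and the corrected antiderivative of $r^{-2}$); this is my own working, following the comments above:

$$W_{el} = 4\pi \int_{a_1}^{a_2} \frac{\epsilon_0}{2} \left(\frac{1}{4\pi \epsilon_0} \frac{q}{r^2}\right)^2 r^2 \, \mathrm{d}r = \frac{q^2}{8\pi\epsilon_0}\left(\frac{1}{a_1} - \frac{1}{a_2}\right)$$

Comparing with $W_{el} = \frac{1}{2}\frac{q^2}{C}$ indeed recovers $C = \dfrac{4\pi \epsilon_0}{a_1^{-1} - a_2^{-1}}$.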
http://mathhelpforum.com/advanced-math-topics/19705-sequences.html

# Thread:
1. ## Sequences
1a) Define $(x_n)$ as being the sequence:
$x_1 = 3$
$x_{n+1} = \frac{1}{2}\cdot (x_n + \frac{3}{x_n})$
Prove $x_n$ converges and find the limit.
b) Let $b > 1$ and define $(x_n)$ as being the sequence:
$x_1 = b$
$x_{n+1} = \frac{1}{2}\cdot (x_n + \frac{b}{x_n})$
Prove $x_n$ converges and find the limit.
c) Suppose you're on an island with only a solar-powered very basic calculator. Use the result from part b to approximate $\sqrt{17}$
2. 1) a
We'll prove that the sequence is bounded below by $\sqrt{3}$.
$x_1=3>\sqrt{3}$
$\displaystyle x_{n+1}=\frac{x_n+\frac{3}{x_n}}{2}\geq\sqrt{x_n\cdot\frac{3}{x_n}}=\sqrt{3}$ (I used the AM-GM inequality).
Now, we'll prove that the sequence is decreasing.
$x_2=2\Rightarrow x_1>x_2$.
Suppose that $x_{n-1}>x_n$.
Then, $x_n-x_{n+1}=x_n-\frac{x_n^2+3}{2x_n}=\frac{x_n^2-3}{2x_n}>0\Rightarrow x_n>x_{n+1}$ (I used the fact that the sequence is bounded).
So, the sequence is convergent. Let $\lim_{n\to\infty}x_n=x$.
Applying the limit in the recurrency relation we have
$\displaystyle x=\frac{1}{2}\left(x+\frac{3}{x}\right)\Rightarrow x^2=3\Rightarrow x=\sqrt{3}$
3. Hello, alikation0!
I recognized the function . . . and "eyeballed" the answers.
b) Let $b > 1$ and define $(x_n)$ as being the sequence:
$x_1 = b$
$x_{n+1} = \frac{1}{2}\cdot (x_n + \frac{b}{x_n})$
Prove $x_n$ converges and find the limit.
c) Suppose you're on an island with only a solar-powered very basic calculator.
Use the result from part b to approximate $\sqrt{17}$
I'm old enough to remember those "very basic calculators".
They were the size of a TI-30 (but an inch thick), weighed a pound,
. . used a 9-volt battery (used up quickly by those red LEDs)
. . cost about \$100 and did only basic arithmetic.
Back then, we learned many tricks for approximating more complex answers.
. . And one of them was square roots.
Suppose we want $\sqrt{N}$.
Make a first approximation, $a_1.$
If we're very very lucky, $a_1$ is exact.
. . Then the two factors of $N$ are equal: . $a_1 \,=\,\frac{N}{a_1}$
Most likely, the two factors are not equal.
. . Obviously, one is too small and the other too large.
Then a better approximation is the average of these two factors.
So we will calculate: . $a_2 \:=\:\frac{a_1 + \frac{N}{a_1}}{2} \;=\;\frac{1}{2}\left(a_1 + \frac{N}{a_1}\right)$
Hence, $a_2$ is a better approximation of $\sqrt{N}.$
Then: . $a_3 \:=\:\frac{1}{2}\left(a_2 + \frac{N}{a_2}\right)$ is an even better approximation.
And that is the source of that recursive function: . $a_{n+1} \;=\;\frac{1}{2}\left(a_n + \frac{N}{a_n}\right)$
~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~
Actually, I used this form: . $a_{n+1} \;=\;\frac{a_n^2 + N}{2a_n}$
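To illustrate part (c) concretely, here is a minimal Python sketch of the recursion $a_{n+1} = \frac{1}{2}\left(a_n + \frac{N}{a_n}\right)$ with $N = 17$, starting from $x_1 = b = 17$ as in part (b) (my own addition, not from the thread):

```
# Heron's iteration for sqrt(17); each step roughly doubles the correct digits.
N, x = 17.0, 17.0
for _ in range(8):
    x = (x + N / x) / 2
    print(x)
# the last lines settle at 4.123105625617661, i.e. sqrt(17)
```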
http://mathoverflow.net/questions/3003?sort=votes

## In what sense are fields an algebraic theory?
Since there is no "free field generated by a set", it would seem that
1) there is no monad on Set whose algebras are exactly the fields
and
2) there is no Lawvere theory whose models in Set are exactly the fields
(Are 1) and 2) correct?)
Fields don't form a variety of algebras in the sense of universal algebra since the field axioms can't be written as identities (since the axiom for multiplicative inverses has the restriction that the element be non-zero).
I guess fields are an algebraic theory in a more general universal algebra sense of being defined by operations on a single set with a set of first order axioms.
Is there any better sense in which they are algebraic or are fields just not really algebraic in nature?
## 5 Answers
1 and 2 are correct, for a simple reason. If C is a category satisfying either 1 or 2 then C has a terminal object. But there is no terminal object in the category of fields (and ring homomorphisms), because there are no maps between fields of different characteristic.
For the same reason, the category of fields is not an essentially algebraic theory (mentioned in Andrew's answer). An essentially algebraic theory can be defined as, simply, a small category with finite limits. A model or algebra for an essentially algebraic theory T is a finite-limit-preserving functor T --> Set. (Of course, you can consider models in other finite limit categories too.) And the category of models always has a terminal object.
This embodies the idea that Andrew was describing, of a theory where some operations are only partially defined, but (and this is crucial!) the domain of definition is itself defined by equations. You can see some rough connection between finite limits and this intuitive idea if you consider pullbacks in Set. A pullback in Set is, after all, the set of pairs satisfying some equation.
I don't know in what sense the theory of fields is algebraic. It's partly because of its failure to be algebraic in any of the usual senses that one often chooses to work with commutative rings rather than fields, in algebraic geometry and in topos theory, for instance.
Thanks for the confirmation! I doubted that fields would be even essentially algebraic but wasn't sure that there wasn't a sneaky way to do it. – Andrew Stacey Oct 29 2009 at 7:59
Combining your answer with Andrew Stacey's, might this be a reason to think that there would be an interesting notion of algebraic geometry over meadows? – Steven Gubkin Feb 3 2010 at 17:58
Fields are not algebraic. An algebraic theory, for example, has free objects: there are free rings, free groups, free monoids. The free functor is left adjoint to the forgetful functor to sets (okay, I'm talking about models in Set). There are, though, no free fields.
One can extend one's idea of an "algebraic theory" to an "essentially algebraic theory", in which partially defined operations are allowed (it's not clear to me that fields satisfy this, since you need to specify the domain in terms of other operations, whereas it seems that one can only specify the domain of the inverse as the complement of such a subset). Or (maybe, but I doubt it) one could define a field as a $\mathbb{Z}_2$-graded algebraic theory where 0 is in degree 0 and everything else is in degree 1. Here, a grading should be regarded simply as a labelling system.
Alternatively, one can talk about meadows. Meadows are algebraic theories which are modified versions of fields. Instead of multiplicative inverses, there is a unary operation $\iota\colon M \to M$ which satisfies the identity $x\,\iota(x)\,x = x$. Defining $\iota(x) = x^{-1}$ for non-zero $x$, and $\iota(0) = 0$, turns any field into a meadow. The relationship between meadows and fields is quite strong.
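As a toy illustration of the meadow identity (my own check, not from the references):

```
# Verify x * iota(x) * x == x over the rationals, with iota(0) = 0.
from fractions import Fraction

def iota(x):
    return 1 / x if x != 0 else x

for x in [Fraction(0), Fraction(2, 3), Fraction(-5), Fraction(7, 11)]:
    assert x * iota(x) * x == x
```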
An arXiv search throws up 68 references (at time of writing; for some reason google doesn't turn up anything particularly relevant, even when combined with the word "field"). One prominent name is that of Jan Bergstra.
I didn't know about meadows, thanks for telling me about them! – Omar Antolín-Camarena Oct 28 2009 at 16:00
Is there a way to talk about a "field object" in a category? – Dinakar Muthiah Oct 29 2009 at 0:08
There's probably some way to make sense of that ("Anyone saying something is impossible is more than likely to be interrupted by some idiot doing it."), but I think it would be so contrived that none of the intuition from group objects or ring objects would carry over. The problem is that the inverse is defined on the complement of a subset and that's not obvious how to generalise to an arbitrary category. – Andrew Stacey Oct 29 2009 at 8:02
As previous answers have said, fields are not algebraic. They are also not essentially algebraic, because categories of models of essentially algebraic theories have an initial object, and the category of fields instead has one minimal object per characteristic -- the prime fields $\mathbb{Q}$ and $\mathbb{F}_p$ for each prime $p$. (There is no map from a field of one characteristic to a field of a different characteristic, so there can't be a single initial object.)
Fields are models of a theory which is essentially algebraic plus allows specification of disjunctions. In the language of sketches, fields are the models of a "finite sum sketch." This was proved in Diers' thesis and is spelled out in the paper "The formal description of data types using sketches" by Charles Wells and Michael Barr, in volume 298 of the Springer Lecture Notes in Computer Science, 1988. For a general overview of sketches, see "Sketches: Outline with References" at http://www.cwru.edu/artsci/math/wells/pub/pdf/sketch.pdf .
ADDED 17 November 2009. The category of models of a finite-sum theory is not as nice as the models of an algebraic theory or even an essentially algebraic theory. Generally, the more different kinds of things you can specify in a sketch, the more awkward the category of models is. The category of fields is pretty awkward!
It does have filtered colimits and is closed under ultraproducts. Field theorists have made considerable use of the closure under ultraproducts.
Is it possible that the category of fields of fixed characteristic $p$ is essentially algebraic? – Charles Rezk Oct 29 2009 at 3:01
Categories of models of essentially algebraic theories are categories of models of finite limit sketches, and thus locally presentable, which the category of fields of a given characterstic is not (for instance it lacks a terminal object or coproducts). – Reid Barton Oct 29 2009 at 3:14
It also follows from this answer that the category of fields is accessible, which can be regarded as a sort of "algebraicity". – Mike Shulman Oct 29 2009 at 3:37
Yes, I've always thought that locally presentable and accesible categories (see Adamek and Rosicky's book of the same name) have really bad PR--sounds like the most boring and technical thing in the world but actually the theory is quite beautiful and illuminating. – Reid Barton Oct 29 2009 at 3:45
A small remark that might be helpful as well is motivated by the Birkhoff theorem - in most usual senses, algebras for the given algebraic theory are closed under products, which fields are not.
Algebraic theories are those defined in universal algebra and formalized by F. W. Lawvere in categorical language. But this is only the first, elementary level of what is called "categorical logic": in suitably rich categories it is possible to formalize the language and terms used in stronger logics, for example $\exists$, $\Rightarrow$, etc. To define a field structure (beyond the machinery used for rings) you need to use the existential quantifier $\exists$, or negation; something similar holds for defining the notion of a "local commutative ring". See P. Johnstone, "Topos Theory", or (much better) "Sketches of an Elephant: A Topos Theory Compendium 2".
http://math.stackexchange.com/questions/tagged/multivariable-calculus+integral

# Tagged Questions
0answers
31 views
### A little help integrating this torus?
Let $\mathbf{F}\colon \mathbb{R}^3 \rightarrow \mathbb{R}^3$ be given by $$\mathbf{F}(x,y,z)=(x,y,z).$$ Evaluate $$\iint\limits_S \mathbf{F}\cdot dS$$ where $S$ is the surface of the torus ...
1answer
79 views
### How to integrate $\cos\left(\sqrt{x^2 + y^2}\right)$
Could you help me solve this? $$\iint_{M}\!\cos\left(\sqrt{x^2+y^2}\right)\,dxdy;$$ $M: \frac{\pi^2}{4}\leq x^2+y^2\leq 4\pi^2$ I know that the region would look like this and I need to solve it as ...
1answer
25 views
### How to determine a function of 2 variables from its derivative?
Please, even the slightest advice would help! If I have a function $V$ of 2 variables $x_1$ and $x_2$, and its derivative $\frac{dV}{dt} = \frac{dV}{dx_1}\frac{dx_1}{dt} + \dots$
1answer
60 views
### Evalute this integral using Green's Thereom
Let C be the boundary of the half-annulus $$1\leq(x^2+y^2)\leq4$$ where $$x\le0$$ in the xy plane, traversed in the positive direction. Evaluate: $\displaystyle \int_{C}(7\cosh^3(7x)-2y^3) \dots$
3answers
104 views
### How to solve this integral for a hyperbolic bowl?
$$\iint_{S} z\, dS$$ where S is the surface given by $$z^2=1+x^2+y^2$$ and $1 \le z \le \sqrt{5}$ (hyperbolic bowl)
0answers
36 views
### separating a variable from integral
In the following integral, I would like to separate $\alpha$ from the rest of the equation. Can we solve the following equation for $\alpha$? $\int_{0}^{a} \int_{0}^{2\pi} \dots$
2answers
46 views
### Computing $\iiint_\mathbb{R^3} e^{-x^2-y^2-z^2}dxdydz$ using substitution
Consider this integral: $$\iiint_{\mathbb{R}^3} e^{-x^2-y^2-z^2}\,dx\,dy\,dz$$ How would you compute it? I already solved this problem this way: $\iiint_{\mathbb{R}^3} e^{-x^2-y^2-z^2}\,dx\,dy\,dz = \cdots$ ...
1answer
54 views
### Theorem or just a change of varibles?
I have a formula in my text: $$\iint_{S} F \cdot n\, dA = \iint_{w} F(G(u,v)) \cdot (dG_{u}\times dG_{v})\, du\, dv$$ I am really lazy and hate remembering formulas; to me this looks like a ...
1answer
81 views
### How to calculate this integral?
Define $$F=(x^2+y-4,3xy,2xz+z^2)$$ Compute the integral of Curl F over the surface $x^2+y^2+z^2=16, z\geq 0$
1answer
77 views
### Multivariable integral
What is the result of the following integral? $$2 \cdot \int_0^{\infty} \frac{1}{\sqrt{2\pi s}}e^{-\frac{b^2}{2s}} \int_{1-s}^{\infty}\frac{b}{\sqrt{2 \pi u^3}}e^{-\frac{b^2}{2u}} du db$$ where $0 \dots$
2answers
37 views
### Is there a need for another integration technique?
I'm being asked to calculate $$I\triangleq\int_0^1\int_{e^{\large x}}^e{xe^y\over(\ln y)^2}\,dy\,dx\quad.$$ I got stuck on the indefinite inner one, $$J\triangleq\int{e^ydy\over(\ln y)^2}\quad.$$ At ...
0answers
29 views
### The tightest bound on an integral
Consider a polynomial $p(x)$ where $p(x)>0$ for $x\in(0,1)$ and $p(0)=0$. Let $s(x)$ be an increasing analytic function such that $s(0)=0$ and $s(1)=1$. I am interested to bound the following ...
1answer
26 views
### Finding volume under surface and above a region
I'm asked to find $\iint_{U}(x+y)^2\, dA$ where U is a region bounded by the lines x = -1, x = 1, y = -1 ... and by the curves $x=y^2$, $y=1+x^2$. Plot: http://d.pr/WYSg I started out by ...
1answer
37 views
### Finding the centre of mass? What axis does the centre of mass lie on?
Let the mass density $\mu$ be given by $$\mu(x,y,z)= \frac{x^2}{a^2} + \frac{y^2}{b^2} + \frac{z^2}{c^2} \leq1$$ what axis would the centre of mass lie on?
3answers
35 views
### Confusing Triple Integral
I'm having trouble with this integral. The integral is $\int_0^9\int_{\sqrt z}^3\int_0^y z\cos(y^6)\,dx\,dy\,dz$. We aren't given any more information and I'm a bit stuck as to where to start. I don't ...
2answers
53 views
### Triple integral problem involving a sphere
Let $R = \{(x,y,z)\in \textbf{R}^3 :x^2+y^2+z^2\le\pi^2\}$ How do I integrate this triple integral $$\int\int\int_R \cos x\, dxdydz,$$ where $R$ is a sphere of radius $\pi$? I have trouble ...
1answer
43 views
### Integration in $\mathbb{R}^n$ region
If it's all parameterized I can usually solve it, but I have a problem with integration over vague regions; usually I don't know the right procedure to solve them. The problem I need to solve is: given ...
2answers
82 views
### Find integral of a polar function $h(r,\theta)$ over a circle
I am studying for my math final and our prof gave us a review but without any solutions or hints. I don't really understand this problem so if anyone could help me out here I would appreciate it. ...
0answers
72 views
### Multivariable weird function
I have to prove two statements, but this function is so weird (and hard to work with)... I just can't figure out how to solve this. The given function is $\varphi:\mathbb{R}^N\rightarrow\mathbb{R}$ ...
1answer
56 views
### Moment of inertia of a circle
A wire has the shape of the circle $x^2+y^2=a^2$. Determine the moment of inertia about a diameter if the density at $(x,y)$ is $|x|+|y|$ Thank you
1answer
59 views
### Work and Line Integral
A two-dimensional force field is given by the equation $$f(x,y)=cxy\textbf{i}+x^6y^2\textbf{j}$$, where $c$ is a positive constant. This force acts on a particle which must move from $(0,0)$ to the ...
1answer
37 views
### Triple Integral Spherical Coordinates
So I have to compute the triple integral of this: $\int\int\int \frac{1}{1+x^2+y^2+z^2}$ and it says the equation of the sphere is $x^2 + y^2 + z^2 = z$ which is just an elongated sphere running ...
1answer
69 views
### tricky surface integral
I am studying for my final and my prof gave us review questions but with no answers so I am lost with this question. If anyone can help I would really appreciate it. Question: Find the area of the ...
0answers
36 views
### Intuitive understanding of integral of vector valued functions
Today in class we were introducing complex line integrals. And that got me thinking, I don't know of a good interpretation for integrals of functions from $\mathbb{R}$ to $\mathbb{R}^2$ or ...
2answers
74 views
### Volume integral over a bounded region
Class is over now and I am studying for my final and I have a problem with this question on our review sheet. If anyone can help I would appreciate it. Question: Find the volume of the region in ...
1answer
31 views
### Line integral of $F = r \times k$ on hemisphere
Exam revision - Verify Stokes theorem directly by explicit calculation of the surface and line integrals for the hemisphere $r=c$, with $z \geq 0$, where $F = r \times k$ and $k$ is the unit vector ...
2answers
38 views
### Double Integral of piece wise function?
Let $I=[0,1]\times[0,1]$ and let $$f(x,y)= \begin{cases} 0, & \text{if } (x,y)=(0,0)\\ \frac{x^2-y^2}{(x^2+y^2)^2}, & \text{if } (x,y)\neq(0,0) \end{cases}$$ Need to show that ...
1answer
60 views
### Polar Coordinates: Dividing by the variable “r.”
Evaluate the iterated integral by converting to polar coordinates: $\large \int^2_0 \int^{\sqrt{2x-x^2}}_0 xy~dy~dx$ I successfully completed most of the problem; however, I had difficulty ...
1answer
27 views
### volume evaluated by triple integral
Let $\Omega:=\{(x,y,z)|x^2+y^2=1, 0\leq z \leq 2\}$, fix an $\alpha \in (-\frac{\pi}{2},\frac{\pi}{2})$ and, given the transformation $T(x,y,z):=(x,y+z\tan \alpha,z)$, find the volume of $T(\Omega)$. ...
1answer
48 views
### How to integrate a vector function in spherical coordinates?
How to integrate a vector function in spherical coordinates? In my specific case, it's an electric field on the axis of a charged ring (see image below); the integral is pretty easy, but I don't ...
2answers
42 views
### Determining the Moment of inertia
Let $a,b,c$ be positive real numbers such that $c<a$. Suppose given is a thin plate $R$ in the plane bounded by $$\frac{x}{a}+\frac{y}{b}=1, \frac{x}{c}+\frac{y}{b}=1, y=0$$ and such that the ...
2answers
38 views
### Double integral of polar coordinates?
Compute $\int_C (8-\sqrt{x^2 +y^2}) ~ds$ where $C$ is the circle $x^2 + y^2 =4$. Answer: $24\pi$ How is the answer $24\pi$? I converted the integral into a double integral of polar coordinates ...
2answers
68 views
### Integral from $0$ to Infinity of $e^{-3x^2}$? [duplicate]
How do you calculate the integral from $0$ to Infinity of $e^{-3x^2}$? I am supposed to use a double integral. Can someone please explain? Thanks in advance.
0answers
97 views
### Algorithm to calculate multiple integral.
One of the major difficulties for students in advanced calculus (including myself when a student) is to obtain the limits of the repeated integrals used to calculate a volume integral in $R^n$, i.e. transform ...
3answers
68 views
### Really Confused on a surface area integral can't seem to finish the integral off.
Basically the question asks to compute $\int \int_{S} ( x^{2}+y^{2}) dA$ where S is the portion of the sphere $x^{2} + y^{2}+ z^{2}= 4$ and $z \in [1,2]$. We start with a change of variables $x=x\ldots$
3answers
52 views
### Find the volume of the region contained above $z=1$ and below $x^{2}+y^{2}+z^{2}=4$
Why doesn't this work? Find the volume of the region contained above $z=1$ and below $x^{2}+y^{2}+z^{2}=4$ going to cylindrical this should be easy. $z=(4-r^{2})^{\frac {1}{2}}$ and $z=1$ ...
1answer
55 views
### Sketch of the ordinate set of $f$
Let $f$ be defined on $[0,1] \times [0,1]$ as follows: $f(x,y)= \begin{cases} x+y \mbox{ if } x^2 \leq y \leq 2x^2 \\ 0 \mbox{ otherwise} \end{cases}$ I want to make a sketch of the ordinate set of ...
2answers
148 views
### Double integral application
I need to determine $$\int_{0}^{1} \int_{-\sqrt{x}}^{\sqrt{x}}\frac{1}{1-y}dydx$$ I integrate in terms of the y component and obtained: $$\int_{0}^{1}\ln(\frac{1+\sqrt{x}}{1-\sqrt{x}})dx$$ Can ...
1answer
58 views
### Property of double integrals
Let $f,g : A \rightarrow \mathbb{R}$ be integrable functions on a closed rectangle $A \subset \mathbb{R}^n$. Show that $f+g$ is integrable and $\int_{A}f+g= \int_A f+ \int_A g$ Thank you
2answers
113 views
### Finding surface area of a cone
I will describe the problem then show what I tried to solve it. I need to find the area of the cone defined as follows: $$z^2=a^2(x^2+y^2)$$ $$0\leq z\leq bx+c$$ where $a,b,c>0$ and $b<a$. ...
1answer
40 views
### (Calculus 4) Compute the line integral with respect to s along the curve C.
I'm having a lot of trouble with this problem, and I suspect my mistake is somewhere in the setup. Here is the problem: $$\int_C \frac{1}{1+x} ds$$ $C\colon \mathbf{r}(t) = t\,\mathbf{i} + \frac{2}{3}t^{3/2}\,\mathbf{j},\quad 0 \le t \dots$
1answer
84 views
### Show $g(\mathbf{x}) \leq h(\mathbf{x})$ implies $\int g(\mathbf{x})\mathrm{d}\mathbf{x} \leq \int h(\mathbf{x})\mathrm{d}\mathbf{x}$
Suppose I have $g$ and $h$ from $\mathbb{R}^p\to\mathbb{R}$ such that for all $\mathbf{x}$, $g(\mathbf{x}) \leq h(\mathbf{x})$. I want to prove that the integral over all $\mathbb{R}^p$ of $g$ is less ...
3answers
57 views
### Calculate volume in a 3D sort of space using cartesian coordinates
Find the volume bounded by the cylinder $x^2 + y^2 = 1$, the planes $x=0, z=0, z=y$ and lies in the first octant. (where x, y, and z are all positive)
3answers
74 views
### Change of variables in two dimensions
This is from Munkres' Analysis on Manifolds, Section 17, Question 4. (a) Show that $$\int_\Bbb {R^2} e^{-(x^2+y^2)} = \left[ \int_\Bbb R e^{-x^2}\right]^2,$$ provided the first of these ...
0answers
20 views
### Riemann integrable then J-integrable
Let $E\subset\mathbb{R}^n$ be a closed Jordan domain and $f:E\rightarrow\mathbb{R}$ a bounded function. We adopt the convention that $f$ is extended to $\mathbb{R}^n\setminus E$ by $0$. Let $\jmath$ ...
0answers
35 views
### Riemann integral is zero for certain sets
The question is: Let $\pi=\left \{ x\in\mathbb{R}^n\;|\;x=(x_1,...,x_{n-1}, 0) \right \}$. Prove that if $E\subset\pi$ is a closed Jordan domain, and $f:E\rightarrow\mathbb{R}$ is Riemann integrable, ...
1answer
27 views
### help taking line integral over a vector field
I have a problem in which I'm given a force field $\vec{F}(x,y,z)=x\hat{i}+y\hat{j}+ 3\hat{k}$ and a path $\vec{r}(t)=4\cos(t)\hat{i}+4\sin(t)\hat{j}+3t\hat{k}$ over the interval $0\le t\le 2\pi$. I ...
0answers
39 views
### Line and surface integrals $R^{3}$
So I actually missed the class where this material was covered, so please bear with me if my understanding is not so good. One of the problems in my textbook is as follows. Prove the following ...
2answers
36 views
### Surface Integral Q
I've been revising this area and I've completely forgotten what I'm doing and my notes are sketchy. Evaluate $\int r \cdot dS$ over the surface of the sphere, radius a, centred at the origin. ...
0answers
87 views
### Tough integration with change of variables and switch to polar coordinates
I was given this question in class and I was just wondering if I am on the right track… Evaluate: $$I=\iint\left(1-\frac{x^2}{a^2} -\frac{y^2}{b^2} \right)^{3/2} dxdy$$ over the region enclosed by ...
http://sciencehouse.wordpress.com/2009/03/27/the-gigabit-machine/

Scientific Clearing House
Carson C. Chow
The gigabit machine
There are plenty of amazing things about the brain but one of the most mind boggling is that it can be coded by a genome of only 3 billion base pairs or 6 gigabits. If we consider a brain with about $10^{11}$ neurons each receiving something like $10^4$ inputs (i.e. synapses), then we’re talking about something on the order of $10^{15}$ parameters to set. Hence the genome does not carry enough information to set every connection. I had an amusing senior moment on the train ride from Edinburgh to London after the Mathematical Neuroscience workshop with Peter Latham, where I took the log of $10^{15}$ and ended up with about 50 bits of information to specify the brain. Peter had a good laugh at my expense and we’ve now dubbed the double log of the entropy as the number of “super bits”. My confusion stemmed from the fact that I had only accounted for the amount of information needed to identify a neuron and that amounts to about 50 to 100 bits of information. For example, if each neuron could secrete any combination of 50 different neurotrophic factors and expressed any combination of 50 different receptors then each neuron would have enough information to connect to any other specific neuron. However, that would not say which combination of genes are expressed in a given neuron, which is what is necessary to specify all the connections. Thus, one would only need about 100 bits to be able to connect up a random brain but would need $10^{15}$ bits to specify each connection.
So, the brain has about 50 super bits or $10^{15}$ real bits of potential information and yet we start with about $10^9$ bits. Hence, if we believe that the connections in the brain do matter then almost all of them must be set by the inputs the brain receives throughout its life. If we take something like $10^6$ bits per second of information entering our brains (which is probably an overestimate as we will see) then that gives something on the order of $10^{13}$ bits of information per year. Multiply that by a human lifespan and we can fill the brain. However, this $10^6$ bits of information per second just counts the number of spikes entering from the sensory systems. The amount of actual information used is probably lower and the amount of information that we acquire through “active” learning such as reading, lessons, advice and so forth is even smaller. However you want to slice it, it seems like most of the connections in the brain are set by things that are independent of genetics and controlled environmental variables such as schooling and parental guidance.
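The arithmetic in these estimates is simple enough to check in a few lines (my own back-of-envelope script, not part of the original post):

```
import math

print(3e9 * 2)          # 6e9   -- bits in a 3 Gbp genome ("6 gigabits")
print(1e11 * 1e4)       # 1e15  -- connections: neurons times synapses each
print(math.log2(1e15))  # ~49.8 -- the "about 50 bits" from the train story
print(1e6 * 3.15e7)     # ~3.2e13 -- bits/year at 10^6 bits/s of input
```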
This means that either most of the connections don’t matter or that there are very high correlations between them. My guess is that it is some of both. There are crucially important connections and correlations and there are a lot of connections that are random. However, this randomness could be important to the functioning of the brain. It could form the initial conditions for learning or provide the initial guess for solving a problem. This also puts constraints on the genetic contribution to brain function given that genes probably mostly influence the machinery of the brain like myelination, synaptic time scales, and ion channels rather than connectivity. Hence, any disease related to connectivity could be influenced by many possible genes with overlapping effects. For example, a variation in threshold for inducing plasticity of synapses, a population imbalance between excitatory and inhibitory cells, or a difference in density of ion channels could all lead to an effective differently connected brain.
This entry was posted on March 27, 2009 at 13:03 and is filed under Neuroscience.
2 Responses to “The gigabit machine”
1. One bit can change your life « Scientific Clearing House Says:
February 23, 2011 at 11:20
[...] second you’re taking in perhaps a million bits of information. See here for the estimate. Most of those bits don’t affect your brain or your life at all. The [...]
2. Information content of the brain revisited « Scientific Clearing House Says:
March 14, 2012 at 12:40
[...] post – The gigabit machine, was reposted on the web aggregator site reddit.com recently. Aside from increasing traffic to my [...]
http://physics.stackexchange.com/questions/26864/what-is-the-use-of-a-universal-not-gate/26865

# What is the use of a Universal-NOT gate?
The universal-NOT gate in quantum computing is an operation which maps every point on the Bloch sphere to its antipodal point (see Buzek et al, Phys. Rev. A 60, R2626–R2629). In general, a single qubit quantum state, $|\phi\rangle = \alpha |0\rangle + \beta | 1 \rangle$ will be mapped to $\beta^* |0\rangle - \alpha^*| 1 \rangle$. This operation is not unitary (in fact it is anti-unitary) and so is not something that can be implemented deterministically on a quantum computer.
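For concreteness, the antipodal action is easy to verify numerically. The sketch below (my own illustration, not from the cited papers) applies the map $\alpha |0\rangle + \beta | 1 \rangle \mapsto \beta^* |0\rangle - \alpha^*| 1 \rangle$ to random pure states and checks that the Bloch vector flips sign:

```
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def bloch(psi):
    # Bloch vector (<sigma_x>, <sigma_y>, <sigma_z>) of a pure state.
    return np.real([psi.conj() @ s @ psi for s in (sx, sy, sz)])

rng = np.random.default_rng(1)
for _ in range(5):
    psi = rng.normal(size=2) + 1j * rng.normal(size=2)
    psi /= np.linalg.norm(psi)
    phi = np.array([psi[1].conj(), -psi[0].conj()])  # universal-NOT of psi
    assert np.allclose(bloch(phi), -bloch(psi))      # antipodal Bloch point
```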
Optimal approximations to such gates drew quite a lot of interest about 10 years ago (see for example this Nature paper which presents an experimental realization of an optimal approximation).
What has been puzzling me, and what I cannot find in any of the introductions to these papers, is why one would ever want such a gate. Is it actually useful for anything? Moreover, why would one want an approximation, when there are other representations of $SU(2)$ for which there is a unitary operator which anti-commutes with all of the generators?
This question may seem vague, but I believe it has a concrete answer. There presumable is one or more strong reasons why we might want such an operator, and I am simply not seeing them (or finding them). If anyone could enlighten me, it would be much appreciated.
## 3 Answers
I can imagine a number of reasons why one may want to realize such a gate.
The first is that the universal-NOT exists in classical theory (it is just flipping). This is similar to the case of cloning, that is possible in classical theory but not in quantum theory. So you can look at the study of an approximate universal-NOT as something similar to the study of an approximate cloner (actually, it is easy to argue that if cloning is possible, then universal-NOT is possible: just clone to identify the state, and then rotate it).
The second reason it that the universal-NOT is related to time reversal, and if we want to simulate the latter, we may want to have the former.
The third reason is that the universal-NOT is related to transposition, and as such could be used to test for the presence of entanglement when applied to part of a larger system (partial transposition test).
You can find more recent results and hopefully some more motivation in http://arxiv.org/abs/1104.3039
The first answer you give is exactly the justification they give in the paper, but it's not a reason why you would actually want to be able to do it. As regards the second, you don't actually need a universal NOT for that, you simply need an operator that anti-commutes with the Hamiltonian, and there are potentially far better time reversal techniques (i.e. WaHuHa etc.). The partial transpose test does seem to be one reason to want it though. – Joe Fitzsimons Sep 29 '11 at 7:21
[Edited as my original answer misunderstood the question]
The immediate application I can see is in dynamical decoupling. The pulse sequences needed for that are a modified form of the not operation, projecting the state to a point opposite a given symmetry plane on the Bloch sphere. At the moment, the problem there is that the sequences that have been found correct decoherence along one axis on the Bloch sphere. A universal-NOT would be able to generate a universal dynamical decoupler. In essense, for any system and any type of decoherence, we could "run time backwards" and re-extract a coherent system.
(It is maybe interesting to think that, given the links with the no-cloning theorem, there may well be a connection between no cloning and the appearance of a decoherence arrow of time.)
The universal-NOT isn't just a quantum bit flip operator (the Pauli $\sigma_X$ gate fills this role). It maps every state to its antipodal point on the Bloch sphere, and hence anti-commutes with all Pauli operators. I'm not clear on why you bring up feed-forward corrections in MBQC, as these all correspond to Pauli operators, not universal-NOTs. – Joe Fitzsimons Oct 5 '11 at 12:51
I see. How does it work when a responder has mistaken the question - should I delete my response? Also, I have an answer to the actual question, do I put that here or start a new answer? – Clare Oct 5 '11 at 13:34
You can simply edit this answer if you want. – Joe Fitzsimons Oct 5 '11 at 15:36
Yes, dynamical decoupling is one reason you may want a universal-NOT. However, given that these don't exist, you are likely far better off with WaHuHa pulses and refinements thereof, than using approximate universal-NOTs. After all, you only need the effective change to add up to zero, and having more than two types of evolution involved doesn't change that end result (see my comment on Marco's answer above). +1 anyway though, as it is indeed a reason for wanting a universal-NOT. – Joe Fitzsimons Oct 6 '11 at 5:52
In terms of implementation, it may indeed be easier as a first step to tailor the decoupler to the specific system. The idea though would be to use a U-NOT as a universal decoupler, not dependent on any of the specifics of the state in question or the algorithm (or the error model). It would be a very powerful piece of kit. Practically, though, I wouldn't spend time trying to develop one just yet. – Clare Oct 6 '11 at 6:16
I suppose the reason for the U-NOT gate is clearer in the wider framework of research on universal quantum machines conducted by V. Buzek et al. So the U-NOT comes in good company with the question of universal quantum cloning (also impossible to do exactly, so the question is about the best approximation) and other elementary operations. An introduction to the U-NOT may be found here: http://arxiv.org/abs/quant-ph/9901053 (it seems to be just an online version of the second reference in the Nature paper cited above).
The article you cite is already mentioned in the question. It's the preprint of the Phys Rev A article. – Pieter Oct 8 '11 at 12:00
Likely yes. I see. It is "the second reference in Nature" I mentioned instead. – Alex 'qubeat' Oct 10 '11 at 9:29
http://mathhelpforum.com/statistics/210701-help-null-alternative-hypothesis-z-test-proportions.html | # Thread:
1. ## Help with null/alternative hypothesis in Z-test of proportions
I have a clinical trial where all the patients suffered from a stroke.
I need to assess the efficacy of the drug treatment (drug/placebo) for the entire sample by using the z-test in Excel.
I have the values of whether the drug treatment was a success or a failure for each patient.
My question is: what are the null and alternative hypotheses? Because I have no idea what to write for the z-test of proportions.
I also did a chi-square test; its null hypothesis was that the drug treatment and its result are independent, and its alternative hypothesis was that the drug treatment and its result are dependent.
Maybe it can help!
THANKS A LOT!!!!
2. ## Re: Help with null/alternative hypothesis in Z-test of proportions
Hey eladgro.
If you have two sets of data (placebo and drug) then you may want to use a t-test or an equivalent t-test in non-parametric form.
You can then test if there is a statistically significant difference between the means of the distributions and use that to help you come to a conclusion based on evidence.
If you need to look at individual cases then another test would be worth considering.
What exactly are you trying to figure out specifically? (In other words, what is the specific nature of your question and inquiry? What do you hope to answer in full?)
3. ## Re: Help with null/alternative hypothesis in Z-test of proportions
Typically, in a test like that the "null hypothesis" would be "there is no advantage over the placebo" and the alternative would be "there is an advantage".
4. ## Re: Help with null/alternative hypothesis in Z-test of proportions
Hi eladgro!
Suppose $\pi_1$ is the proportion of the success rate for the drug.
And suppose $\pi_2$ is the proportion of the success rate for the placebo.
Then your null hypothesis is: $\pi_1 = \pi_2$.
And your alternative hypothesis is: $\pi_1 \ne \pi_2$.
Unless you're only interested in whether the drug helps. In that case your alternative hypothesis is $\pi_1 > \pi_2$.
See here for the formulas that you are supposed to use: Statistics: Test for Comparing Two Proportions
Or here for a more in depth description: Statistical hypothesis testing - Wikipedia, the free encyclopedia
You probably have them in your notes as well.
This is indeed a z-test and not a t-test.
Basically you should find the same result as when you did the $\chi^2$ test.
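If you want to check the arithmetic outside Excel, here is a minimal Python sketch of the pooled two-proportion z-test; the function name and the counts in the example are made-up placeholders, not data from this thread:

```python
import math

def two_proportion_z_test(x1, n1, x2, n2):
    # pooled z-test of H0: pi1 = pi2 against the two-sided alternative
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    phi = 0.5 * (1 + math.erf(abs(z) / math.sqrt(2)))   # standard normal CDF at |z|
    return z, 2 * (1 - phi)                             # z statistic and p-value

# hypothetical counts: 40/60 successes on the drug, 28/60 on placebo
print(two_proportion_z_test(40, 60, 28, 60))
```

Squaring the resulting $z$ gives the Pearson $\chi^2$ statistic of the corresponding 2x2 table, which is why the two tests agree.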
5. ## Re: Help with null/alternative hypothesis in Z-test of proportions
Thank you!!!!
http://mathoverflow.net/questions/91648?sort=newest | Simple and general relation between continuant polynomials
A continued fraction $[a_0,a_1,...,a_n]$ may be expressed as a quotient of two polynomials in $(a_0,a_1,...,a_n)$, called continuants (see http://en.wikipedia.org/wiki/Continuant_%28mathematics%29 ):
$[a_0,a_1,...,a_n] = K(a_0,a_1,...,a_n)/K(a_1,...,a_n)$
For example $K(a_0,a_1,a_2,a_3) = a_{0} a_{1} a_{2} a_{3} + a_{0} a_{1} + a_{0} a_{3} + a_{2} a_{3} + 1$
Continuants have many interesting recurrence relations, some of which you may find in the book "Concrete Mathematics" by Graham, Knuth, and Patashnik. Continuants appear, for example, in Hopcroft's minimization algorithm.
I found an interesting relation:
$K( a_0,a_1,...,a_k,a_{(k+1)},...,a_n ) =$
$K(a_0,a_1,...,a_k,1,1,a_{(k+1)},...a_n) - K(a_0,a_1,...,a_k,1,a_{(k+1)},...a_n)$
that is, between the variables $a_k$ and $a_{k+1}$ you put two 1's in the first term and one 1 in the second... You may consider this a generalisation of the Fibonacci recurrence, because if you put all $a_i = 1$ you obtain Fibonacci numbers.
For example:

$K(a_0,a_1,a_2) = (a_0 a_1+1)a_2+a_0$

$K(a_0,a_1,1,1,a_2) = (2 a_0 a_1+ a_0+2)a_2+a_0 a_1+a_0+1$

$K(a_0,a_1,1,a_2) = (a_2+1)(a_0 a_1+1)+a_0 a_2$
And it is true that:
$K(a_0,a_1,a_2) = K(a_0,a_1,1,1,a_2) - K(a_0,a_1,1,a_2)$
I am curious - is this a known fact? Or something new?
How to prove it: I found a finite closed expression for the continuant polynomial $K(a_0,a_1,...,a_n)$ in the form:
(1) $x = [a_{0},a_{1},a_{2},...a_{n}] =\frac{K(a_0,a_1,...,a_n)}{K(a_1,...,a_n)}= - \frac{1}{2} \frac{Tr \left( M(S- L) \right) }{Tr \left( M(S-T) \right) }$
where $M$ is a 2x2 continuant matrix and $S,T,L$ and $I$ (the unit matrix) are constant matrices which form a basis in the $SL(2,\mathbb{Z})$ space; they are defined in a Sage worksheet accessible for download here: http://fiksacie.wordpress.com/2012/03/17/ulamki-lancuchowe-nonstandard-matrix-representation-of-continued-fractions-cz-5/ There are also several essays about this on my blog, in Polish. The proof is a simple linear algebra exercise:
start from the monoid formed by the generators $S,T$ (which represent, in a nonstandard way, moves on the Stern-Brocot tree; the standard way uses the R-ight and L-eft matrices $R,L$), then construct the ring over the rationals with generators $I,S,T,L$, and the formula above appears. $M$ is defined as follows (it is the continuant matrix representation for a given continued fraction $x = [a_{0},a_{1},a_{2},...,a_{n}]$):
(2) $M = S \prod_{i=0}^{n} S(ST)^{a_{i}}$.
$S$ is such that $S^2 =I$ and $T$ is such that $T^2 = I+T$, so
$I = T^2 -T = S(ST)S(ST) - S(ST)$
Then you may insert it in any place between $S(ST)^{a_{i}}$ and $S(ST)^{a_{i+1}}$, which gives the expression above, and many more if you consider $T^p$ for $p \in \mathbb{Z}$, etc.
It is worth noting that in the general ring $\mathbb{Q}[I,S,T,L]$ the matrix $M$ has a decomposition of the form $M=aI+bS+cT+dL$ with rational $a,b,c,d$, and that continued fractions form an algebraic curve whose equation is given by the bilinear form $\left| w^TBw \right| = 1$, where $w=[a,b,c,d]$ is a column vector with integer $a,b,c,d$ (there are additional inequalities for these coefficients, because $M$ has to have positive entries).
Please note that (1) and (2) allow similarity transformations, so you may choose any basis in the $SL(2,\mathbb{Z})$ space, and the whole construction may easily be generalized to matrices bigger than 2x2. I have many questions to ask here, some of which are:
(A) are there irreducible representations of this construction in $SL(D,\mathbb{Z})$ spaces where $D>2$?
(B) are there representations by a set of complex numbers $\{z_1,...,z_4\}$ which obey the same multiplication table as the $I,S,T,L$ matrices? How would one express (1) in that case?
(C) probably there is a possibility to construct a purely algebraic representation in which continued fractions are generalised to values of an operator generalising the right-hand side of (1). How would one do it?
I am not a professional mathematician, so I am unsure whether this is worth publishing, where to do it, etc.
Does this not follow from the simple recursion for the continuant? – Gerry Myerson Mar 19 2012 at 21:50
I do not know. Does it? Could you show that? I developed my own way to generate it, from simple linear algebra... let me explain in the post. – kakaz Mar 20 2012 at 8:27
Here is a simplified and compact version of the Sage worksheet: sagenb.com/home/pub/4603 – kakaz Mar 23 2012 at 12:33
1 Answer
As Gerry remarked, it follows from the recursion of the continuants. By the recurrence of the continuants you have
$${K_{k + 3}}({a_0}, \cdots ,{a_k},1,1) = {K_{k + 2}}({a_0}, \cdots ,{a_k},1) + {K_{k + 1}}({a_0}, \cdots ,{a_k}).$$
Therefore
$${K_{k + 4}}({a_0}, \cdots ,{a_k},1,1,{a_{k + 1}}) = {a_{k + 1}}{K_{k + 3}}({a_0}, \cdots ,{a_k},1,1) + {K_{k + 2}}({a_0}, \cdots ,{a_k},1),$$
$${K_{k + 3}}({a_0}, \cdots ,{a_k},1,{a_{k + 1}}) = {a_{k + 1}}{K_{k + 2}}({a_0}, \cdots ,{a_k},1) + {K_{k + 1}}({a_0}, \cdots ,{a_k}),$$
$${K_{k + 2}}({a_0}, \cdots ,{a_k},{a_{k + 1}}) = {a_{k + 1}}{K_{k + 1}}({a_0}, \cdots ,{a_k}) + {K_k}({a_0}, \cdots ,{a_{k - 1}}).$$
This implies
$${K_{k + 4}}({a_0}, \cdots ,{a_k},1,1,{a_{k + 1}}) = {K_{k + 3}}({a_0}, \cdots ,{a_k},1,{a_{k + 1}}) + {K_{k + 2}}({a_0}, \cdots ,{a_k},{a_{k + 1}}).$$
By induction your result follows.
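If it helps, the identity is easy to sanity-check symbolically with the recurrence $K_n = a_n K_{n-1} + K_{n-2}$ used above; here is a small sympy sketch (the function name is mine):

```python
import sympy as sp

def continuant(args):
    # K(a_0,...,a_n) via K_n = a_n*K_{n-1} + K_{n-2}, seeded with K() = 1
    k, k_prev = sp.Integer(1), sp.Integer(0)
    for x in args:
        k, k_prev = x * k + k_prev, k
    return sp.expand(k)

a = sp.symbols('a0 a1 a2 a3')
lhs = continuant(a)
rhs = continuant(a[:2] + (1, 1) + a[2:]) - continuant(a[:2] + (1,) + a[2:])
print(sp.expand(lhs - rhs))   # 0: inserting "1,1" minus inserting "1" recovers K
```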
Great! Thank You. So my result is somehow trivial... – kakaz Mar 20 2012 at 9:37
http://unapologetic.wordpress.com/2007/04/07/the-first-isomorphism-theorem-for-rings/?like=1&source=post_flair&_wpnonce=3fdfac1636 | # The Unapologetic Mathematician
## The First Isomorphism Theorem (for rings)
Just like we had for groups, there is an isomorphism theorem for rings. In fact, the demonstration goes much the same as it did there.
Any subring $S$ of a ring $R$ comes equipped with an inclusion homomorphism $\iota_{(R,S)}:S\rightarrow R$. Any quotient of a ring $R$ by an ideal $I$ comes with a projection homomorphism $\pi_{(R,I)}:R\rightarrow R/I$. For both of these, just construct the inclusion or projection homomorphism for the underlying abelian groups and check that it preserves the multiplication. Just like before, the inclusion is a monomorphism, the projection is an epimorphism, and the kernel of the projection — the set of elements of $R$ that get sent to zero — is the ideal $I$.
Now given any ring homomorphism $f:R\rightarrow R'$, the kernel is an ideal. Indeed, if $f(x)=0$ and $f(x')=0$ then
$f(x+x')=f(x)+f(x')=0+0=0$
$f(rx)=f(r)f(x)=f(r)0=0$
$f(xr)=f(x)f(r)=0f(r)=0$
So the set of elements that get sent to zero is closed under addition and under left and right multiplication by any element of the ring.
Also, the image of $f$ is a subring of $R'$. Given elements $f(r)$ and $f(r')$ in the image of $f$ we have $f(r)f(r')=f(rr')$ and $f(r)+f(r')=f(r+r')$, so sums and products of elements of the image are again in the image.
Now we want to take $f$ and make an isomorphism $\bar{f}:R/{\rm Ker}(f)\rightarrow{\rm Im}(f)$. For any coset in $R/{\rm Ker}(f)$, pick a representative element $r$ of $R$ and define $\bar{f}(r+{\rm Ker}(f))=f(r)$. Clearly this lands in ${\rm Im}(f)$, but does it really define a homomorphism from $R/{\rm Ker}(f)$? Indeed it does, because any other representative of the same coset looks like $r+x$ for some element $x$ of the kernel. Then
$\bar{f}(r+x+{\rm Ker}(f))=f(r+x)=f(r)+f(x)=f(r)+0=f(r)$
so we get the same answer — the value of $\bar{f}$ doesn’t depend on the choice of representative we make.
Is $\bar{f}$ a monomorphism? Yes, because if $\bar{f}(r+{\rm Ker}(f))=0$ then $f(r)=0$, so $r$ is a representative of ${\rm Ker}(f)$, which takes the place of ${}0$ in $R/{\rm Ker}(f)$. Is it an epimorphism? Yes, because every element of ${\rm Im}(f)$ comes from some element $r$ of $R$, so we can hit it by taking $\bar{f}(r+{\rm Ker}(f))$.
Putting it all together we can factor any homomorphism $f$ into the composition of an epimorphism $\pi_{(R,{\rm Ker}(f))}$, an isomorphism $\bar{f}$, and a monomorphism $\iota_{(R',{\rm Im}(f))}$. All homomorphisms of rings work this way: factor out some kernel, then send the quotient isomorphically to some subring of the target ring. Again, all the interesting stuff really happens in the first step. Studying homomorphisms from a given ring really comes down to studying the possible ideals of a given ring. In particular, if a ring has no ideals but the whole ring itself and the ideal consisting only of ${}0$ we call it “simple”. Every homomorphic image of a simple ring is either zero or the whole ring itself.
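As a concrete illustration (my own example, not part of the original post): take $f:\mathbb{Z}\rightarrow\mathbb{Z}_2\times\mathbb{Z}_3$ defined by $f(k)=(k\bmod 2,k\bmod 3)$. The kernel is $6\mathbb{Z}$ and, by the Chinese remainder theorem, $f$ is onto, so the factorization above reads

$\mathbb{Z}\xrightarrow{\pi_{(\mathbb{Z},6\mathbb{Z})}}\mathbb{Z}/6\mathbb{Z}\xrightarrow{\bar{f}}\mathbb{Z}_2\times\mathbb{Z}_3\xrightarrow{\iota}\mathbb{Z}_2\times\mathbb{Z}_3$

and the isomorphism $\bar{f}$ in the middle is exactly the familiar $\mathbb{Z}/6\mathbb{Z}\cong\mathbb{Z}_2\times\mathbb{Z}_3$.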
As I said above, this really looks a lot like what we did for groups, and there’s actually a very good reason why that I want to put off a while longer. Essentially, the fact that we have an isomorphism theorem like this doesn’t depend on the “groupiness” or the “ringiness” of the objects we’re studying, but on deeper structure shared by both groups and rings — or rather shared by group and ring homomorphisms.
Posted by John Armstrong | Ring theory
http://mathoverflow.net/questions/35301?sort=votes | ## P/poly algorithm for polynomial identity testing
By the Schwartz–Zippel lemma, "Is this arithmetic formula identically zero?" is in coRP $\subseteq$ BPP $\subset$ P/poly, with the second inclusion by Adleman's theorem. By basically following the proof, but using the improved error bound that comes from the original algorithm only having one-sided error, one gets an algorithm that computes suitable advice. (equivalently, a suitable circuit)
Is there any known P/poly algorithm for this problem with advice that can be computed faster?
## 1 Answer
The Schwartz-Zippel lemma is very fast, only one evaluation of the formula at one random point. There's nothing better known that minimizes time and error as well as Schwartz-Zippel. But Schwartz-Zippel requires a lot of randomness in each repetition: a fresh new point of n elements.
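For readers who want the baseline test concrete, here is a minimal sketch of the one-sided Schwartz–Zippel algorithm (toy code of my own, not the derandomization the question asks about; the function name and the example polynomial are made up for illustration):

```python
import random

def probably_zero(poly, n_vars, trials=20, p=2**61 - 1):
    # Evaluate at uniform points mod a large prime p.  A nonzero polynomial of
    # total degree d vanishes at a random point with probability <= d/p
    # (Schwartz-Zippel), so the error is one-sided and shrinks exponentially.
    for _ in range(trials):
        point = [random.randrange(p) for _ in range(n_vars)]
        if poly(*point) % p != 0:
            return False      # a nonzero value certifies the polynomial is nonzero
    return True               # zero with high probability

# (x + y)^2 - (x^2 + 2xy + y^2) is identically zero:
print(probably_zero(lambda x, y: (x + y)**2 - (x*x + 2*x*y + y*y), n_vars=2))
```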
Have you tried some of the polynomial identity tests with better tradeoffs between randomness and error? Their running time (and the running time dependence on the error) is a bit worse than Schwartz-Zippel, but the number of random bits needed is much less than Schwartz-Zippel. So in the application of Adleman's theorem, the sizes of the witnesses you need to hard-code in the non-uniform circuit will shrink, but the time dependence on error increases, potentially making the number of necessary witnesses increase. Given these complex tradeoffs, I'm not sure which of them would work best for obtaining small circuits.
For a quick overview of these alternative identity tests and their tradeoffs, see the table on p.3 in Agrawal and Biswas: http://www.cse.iitk.ac.in/users/manindra/algebra/identity.pdf
Another way to reduce error with few random bits is to use any deterministic construction of expander graphs. – Tsuyoshi Ito Aug 12 2010 at 15:29
Yes, thanks! However that may be overkill here, as that works in a black-box way for any randomized algorithm. Polynomial identity testing has very special structure which the above algorithms exploit. – Ryan Williams Aug 13 2010 at 4:41
http://mathoverflow.net/questions/38771?sort=newest | ## Dual Schroeder-Bernstein theorem
This question was motivated by the comments to http://mathoverflow.net/questions/38754/dual-of-zorns-lemma
Let's denote by the Dual Schroeder-Bernstein theorem (DSB) the statement
For any sets $A$ and $B$, if there are surjections from $A$ onto $B$ and from $B$ onto $A$, then there is a bijection between them.
In set theory without choice, assume that the Dual Schroeder-Bernstein theorem holds. Does it follow that choice must hold as well?
I strongly suspect this is open, though I would be glad to be proven wrong in this regard. In all models of ZF without choice that I have examined, DSB fails. This really does not say much, as there are plenty of models I have not looked at. In any case, I don't see how to even formulate an approach to show the consistency of DSB without AC.
The only reference I know for this is Bernhard Banaschewski, Gregory H. Moore, The dual Cantor-Bernstein theorem and the partition principle, Notre Dame J. Formal Logic 31 (3), (1990), 375–381. In this paper it is shown that a strengthening of DSB does imply AC, namely, that whenever there are surjections $f:A\to B$ and $g:B\to A$, then there is a bijection $h:A\to B$ contained in $f\cup g^{-1}$. (Note that the usual Schroeder-Bernstein theorem holds -without needing choice- in this fashion.)
The partition principle is the statement that whenever there is a surjection from $A$ onto $B$, then there is an injection from $B$ into $A$. As far as I know, it is open whether this implies choice, or whether DSB implies the partition principle. Clearly, the reverse implications hold.
If you are interested in natural examples of failures of DSB in some of the usual models, Benjamin Miller wrote a nice note on this, available at his page.
Added Sep. 21. [Edited Aug. 14, 2012] It may be worthwhile to point out what is known, beyond the Banaschewski-Moore result mentioned above.
Assume DSB, and suppose $x$ is equipotent with $x\sqcup x$. Then, if there is a surjection from $x$ onto a set $y$, we also have an injection from $y$ into $x$. (So we have a weak version of the partition principle.) This idemmultiple hypothesis that $x\sqcup x$ is equipotent to $x$, for all infinite sets $x$, is strictly weaker than choice, as shown in Gershon Sageev, An independence result concerning the axiom of choice, Ann. Math. Logic 8 (1975), 1–184, MR0366668 (51 #2915).
Also, as indicated in Arturo Magidin's answer (and the links in the comments), H. Rubin proved that DSB implies that any infinite set contains a countable subset.
About DSB not being provable without (some amount of) choice: Without choice, it is consistent that there is a quotient of the reals that is strictly larger than the reals. This shows that DSB is not provable without choice: R and its quotient both are surjective images of each other, but there is no bijection between them. This holds, for example, in Solovay's model where all sets of reals are Lebesgue measurable. The note by Miller has more sophisticated versions of this example and interesting variants. – Andres Caicedo Sep 15 2010 at 15:15
Vaguely related, another choice principle related to DSB (and stronger than it as well): mathoverflow.net/questions/81204/… – Asaf Karagila Aug 14 at 15:40
## 1 Answer
This is only a partial answer because I'm having trouble reconstructing something I think I figured out seven years ago...
It would seem the Dual Cantor-Bernstein implies Countable Choice. In a post in sci.math in March 2003 discussing the dual of Cantor-Bernstein, Herman Rubin essentially points out that if the dual of Cantor-Bernstein holds, then every infinite set has a denumerable subset; this is equivalent, I believe, to Countable Choice.
Let $U$ be an infinite set. Let $A$ be the set of all $n$-tuples of elements of $U$ with $n\gt 0$ and even, and let $B$ be the set of all $n$-tuples of $U$ with $n$ odd. There are surjections from $A$ onto $B$ (delete the first element of the tuple) and from $B$ onto $A$ (for the $1$-tuples, map to a fixed element of $A$; for the rest, delete the first element of the tuple). If we assume the dual of Cantor-Bernstein holds, then there exists a one-to-one function from $f\colon B\to A$ (in fact, a bijection). Rubin writes that "a 1-1 mapping from $B$ to $A$ quickly gives a countable subset of $U$", but right now I'm not quite seeing it...
Thanks for the reference to the sci.math discussion! I believe this is correct, I'll have to think about it once I find some free time (haha). I have in my notes that DSB implies that there are no infinite Dedekind finite sets, which is what you wrote, but as I recall, my argument was different. – Andres Caicedo Sep 15 2010 at 16:15
Hmm... I'm not sure I see this argument. It looks to me that Rubin is using something like "a countable union of finite sets contains a countable subset", but I don't think this is true without some use of choice. The weaker "a countable union of finite sets is countable" is certainly not true in general without choice. But perhaps he had a different argument in mind... – Andres Caicedo Sep 16 2010 at 6:48
Like I said, I seem to remember working this out at the time, but it just escapes me at the moment. I will post and ask if he can flesh out the argument (he's still active in sci.math/sci.logic). – Arturo Magidin Sep 16 2010 at 14:33
I got a reply from Don Coppersmith (nothing from Rubin yet). His first proposal is at groups.google.com/group/sci.math/msg/…, with a follow-up at groups.google.com/group/sci.math/msg/… on whether you can prove directly that the resulting set is infinite. – Arturo Magidin Sep 16 2010 at 21:42
Hi Arturo. Thanks! Yes, this is actually a "standard trick": You force some kind of ordering that suffices for the purpose at hand by considering finite tuples rather than finite sets (as I stubbornly kept thinking when I read your outline, even though you wrote "tuple"). I believe the trick originated with Specker. (That Dedekind finite=finite most definitely does not give countable choice, though.) – Andres Caicedo Sep 22 2010 at 3:48
http://unapologetic.wordpress.com/2009/02/11/repeated-eigenvalues/?like=1&source=post_flair&_wpnonce=18b5244a97 | # The Unapologetic Mathematician
## Repeated Eigenvalues
So we’ve got a linear transformation $T$ on a vector space $V$ of finite dimension $d$. We take its characteristic polynomial to find the eigenvalues. If all of its roots are distinct (and there are $d$ of them, as there must be if we’re working over an algebraically closed field) then we can pick a basis of eigenvectors and get a diagonal matrix for $T$.
But what if a root is repeated? Say the characteristic polynomial has a factor of $(\lambda-1)^2$. We might think to pick two linearly independent eigenvectors with eigenvalue ${1}$, but unfortunately that doesn’t always work. Consider the transformation given by the matrix
$\displaystyle\begin{pmatrix}1&1\\{0}&1\end{pmatrix}$
This matrix is upper-triangular, and so we can just read off its eigenvalues from the diagonal: two copies of the eigenvalue ${1}$. We can easily calculate its action on a vector
$\displaystyle\begin{pmatrix}1&1\\{0}&1\end{pmatrix}\begin{pmatrix}x\\y\end{pmatrix}=\begin{pmatrix}x+y\\y\end{pmatrix}$
So the eigenvector equation says that $x=x+y$ and $y=y$. The second is trivial, and the first tells us that $y=0$. But we can only pick one linearly independent vector of this form. So we can’t find a basis of eigenvectors, and we can’t diagonalize this matrix. We’re going to need another approach.
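A quick numerical sketch of this failure (my own illustration, assuming numpy):

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])

# eigenvalue 1 with algebraic multiplicity 2 ...
vals, vecs = np.linalg.eig(A)
print(vals)                                  # [1. 1.]

# ... but ker(A - I) is only one-dimensional: A - I has rank 1,
# so there is no second linearly independent eigenvector.
print(np.linalg.matrix_rank(A - np.eye(2)))  # 1
```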
Posted by John Armstrong | Algebra, Linear Algebra
http://mathoverflow.net/questions/88001?sort=newest | ## Relative generic flatness.
It is known that any morphism is flat at an open set of points. I'd like to know if there is a relative version of this fact.
Let $f: X \rightarrow Y$ and $g:Y \rightarrow S$ be morphisms of varieties, and let $h=g \circ f$ be their composition. Suppose that $h$ and $g$ are flat. Is it true that the set $\{x\in X| f|_{h ^{-1}(h(x))} : h ^{-1}(h(x)) \to g ^{-1}(h(x)) \text{ is flat at } x \}$ is open?
## 1 Answer
With these assumptions, the set in question is the set of points in $X$ where $f$ is flat; hence it is open. This is "flatness by fibers", EGA IV (11.3.10) (applied with $\mathcal{F}=\mathcal{O}_X$).
I know that a map $f:X \to Y$ is flat in an open subset $U \subset X$. I just do not understand why it implies what I need. Do you claim that $\{x\in X\mid f|_{h ^{-1}(h(x))} : h ^{-1}(h(x)) \to g ^{-1}(h(x)) \text{ is flat at } x \} =$ $= \{ x \in X\mid f \text{ is flat at } x \}$? Why? – Rami Feb 9 2012 at 21:34
I think I got it. EGA IV (11.3.10) says that $\{x\in X\mid f|_{h ^{-1}(h(x))} : h ^{-1}(h(x)) \to g ^{-1}(h(x)) \text{ is flat at } x \} = \{ x \in X\mid f \text{ is flat at } x \}$, and EGA IV (11.3.1) says that the latter is open. Please correct me if I'm wrong. Thank you very much. – Rami Feb 10 2012 at 2:45
That's right. The "open" bit is also stated at the end of (11.3.10). – Laurent Moret-Bailly Feb 10 2012 at 8:39 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 14, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9579607248306274, "perplexity_flag": "head"} |
http://alanrendall.wordpress.com/2011/05/ | # Hydrobates
A mathematician thinks aloud
## Archive for May, 2011
### Is half of what is in immunology textbooks wrong?
May 31, 2011
Yesterday I heard a talk by Rolf Zinkernagel who won a Nobel prize together with Peter Doherty in 1996 for the discovery of the MHC restriction in the T cell recognition of antigens. What this means is that a T cell which is monitoring the body for harmful substances does not recognise the antigen (the substance itself) alone but a combination of the antigen with a major histocompatibility complex (MHC) molecule. The MHC molecules were originally discovered through their role in tissue transplantation. In order that a transplant not be rejected the MHC molecules of donor and host should be sufficiently similar. The key work which earned the prize was done in Australia and it was interesting to observe that Zinkernagel (who is originally from Switzerland) speaks English with quite a noticeable Australian accent. The talk was clearly meant to be provocative and the speaker did emphasize on more than one occasion that he was exaggerating. My knowledge of the subject is not good enough to allow me to make a comprehensive judgement of his claims but it was clear that he was presenting a picture far from the usual consensus.
One of the statements in the talk is the one which occurs in the title of this post, namely that fifty per cent of what is in immunology textbooks is wrong. This was complemented with the statements that what is difficult is to know which fifty per cent is wrong and that more than fifty per cent is based on fashion and something else which I do not remember. On a more humble note the speaker said that a similar statement might apply to what was in his lecture. This is not going to cause me to lose confidence in immunology textbooks but it is reasonable to adopt the recommendation not to accept what is in the books uncritically.
Zinkernagel said that from an evolutionary point of view our immune system is designed to work well until we are 25 at the most, which was all that was required to guarantee offspring for our ancestors. It is not clear to me that having some more children could not have resulted in a competitive advantage. It may, however, be that before the invention of agriculture it was impossible to feed many children. He emphasized the important role of the period before and shortly after birth in the development of the immune system. In the first months children are protected by antibodies coming from the mother and the claim was that this has a determining influence on the way the immune system works. At the same time he was critical of the standard picture of acquired immunity, where the response to a second infection is faster and stronger than that to the first infection and this is the key to immunity. He also suggested that after an individual has been infected with a pathogen such as measles the antigen remains in the body in some form and continues to stimulate the immune system. This seemed very speculative to me. The lecture contained many comments relating to diverse aspects of immunity (the speaker emphasized the importance of making the distinction between immunity and immunology). One suggestion I found interesting was that by doing intensive research on the relatively benign HIV-2 it might be possible to obtain essential insights for understanding the problem of HIV-1 better.
The lecture hall was very full and this showed that there was a lot of interest in hearing this speaker. On the other hand the questions after the talk were very few, and there did not seem to be a real intellectual engagement between the speaker and the members of the audience asking the questions. Perhaps Zinkernagel has moved so far from the accepted view on many themes in immunology that communication with colleagues has become difficult. Whatever else can be said about this lecture I think it did have the positive feature of stirring up the ideas of the audience and this cannot be a bad thing in science.
Posted in immunology | 4 Comments »
### The quantal theory of immunity
May 24, 2011
I recently read two papers of Kendall Smith on his ‘quantal theory of immunity’ (Cell Research 16, 11 and Medical Immunology 3:3). To start with, it should be noted that the word ‘quantal’ here has nothing to do with quantum theory although Schrödinger’s cat does make a brief cameo appearance in the first paper. The term is used to denote all-or-nothing reactions. In other words as the input to a system is increased continuously there is a threshold where the output changes from a low almost constant level to another much higher almost constant level. The input could be the concentration of a substance surrounding a cell while the output could be the amount of a certain protein produced by the cell.
Smith starts by discussing the ‘clonal selection theory’ of MacFarlane Burnet. In this context it is natural to pose the broad question of the possibility of discovering general theories or laws in biology. These concepts are familiar to us from physics and the question is to what extent they can be adapted to biology. I do not pretend to have a general answer to this. A theory or law in this sense is an idea in science which can be used to explain not just a few phenomena but which gives a pattern which can contribute to explanations in a wide field. An obvious example which comes to mind in biology is natural selection. Burnet’s theory seems to provide an example of this kind in immunology. It says that the ability of the immune system to recognize antigens is localized in certain white blood cells (later identified as the lymphocytes), each of which is specific for one antigen. The number of antigens which can be recognized is enormous and correspondingly there are only very few cells recognizing a given antigen. The large numbers of cells needed to combat a given pathogen are reached by ‘clonal expansion’ – one cell undergoes a population expansion by cell division. The next question is how the immune system manages to cause this proliferation in cases where it is beneficial to the individual and to avoid it in cases where it would be harmful. The quantal theory of immunity is a proposed theory to provide an explanation for this.
Smith’s discussion concentrates on T cells, saying that the case of B cells is similar. The proliferation of T cells is associated to their production of the cytokine IL-2. Thus it is important to understand the causes and effects of the latter. Another important theme which needs to be discussed at this point is the the relation between the behaviour of individual cells and that of cell populations. In an experiment we might measure a property of a population, for instance the total production of IL-2, but it is clear that what is being measured in that case is just an average of what is happening in individual cells. Experiments which focus on the properties of individual cells have revealed that there is a high intrinsic variability in cell populations (and in populations of the organisms whose cells are being studied). It has been suggested that this can provide an evolutionary advantage in some cases.
Let us return to IL-2 of which, as I understand, Smith is more or less the father. The reaction to IL-2 in an individual T cell is quantal. The same is true for the production of IL-2 in response to stimulation of the T cell receptor (together with the co-receptor CD28). The global picture presented in Smith’s papers (seen at low resolution) is the following. The fate of T cells (activation, proliferation, anergy, apoptosis) is determined by the number of T cell receptors stimulated. On the axis representing this number there are several bands representing the different outcomes. Variants of this scheme are presented for T cells in the thymus and in the periphery. Underlying this mechanism there is another similar one where the controlling quantity is the number of occupied IL-2 receptors.
Posted in immunology | 1 Comment »
### Cooperativity and the Hill equation
May 1, 2011
In biology it often happens that a molecule of one substance has several binding sites for another molecule (call it the ligand). This is known as cooperative binding. A classical example is haemoglobin, which has four binding sites for oxygen. If the rate of binding of the ligand at one of the sites is affected (positively or negatively) by the fact that some of the other sites are occupied then this is known as allostery. Suppose for example that an enzyme has two binding sites for a ligand. Using a description of this process of Michaelis-Menten type for the two successive binding events leads to an equation where the right hand side is a quotient of two quadratic polynomials. A derivation of this can be found in Murray’s book ‘Mathematical Biology’. Assuming a reaction where both molecules combine with the enzyme simultaneously gives a different quotient of quadratic polynomials which is particularly simple, namely $\frac {Au^2}{u^2+B}$. This is not likely to be a genuine reaction mechanism but might arise in some way by telescoping two reactions to get a simpler model. It may be interpreted as what is called complete cooperativity: when one molecule of the ligand binds the probability of binding at the other site becomes much higher and so may be supposed to occur essentially immediately.
A generalization of this idea for $n$ substrate molecules is given by the Hill equation $\dot u=f(u)=\frac{Au^n}{u^n+B}$. It was introduced by the physiologist Archibald Hill in a paper on haemoglobin in 1910. (Hill studied mathematics before he turned to physiology. He received the 1922 Nobel Prize for Physiology or Medicine for his work on the production of heat in muscles.) There he first considers what kind of equations might result if several molecules of oxygen bind to one molecule of haemoglobin. He then suggests considering the equation which now bears his name independently of detailed reaction mechanisms. This is a common procedure in modern biology. The Hill equation is used primarily as a phenomenological ansatz. The idea of cooperativity is in the background but the link is not very direct. In particular, it can happen that the Hill equation is used with non-integer values of $n$. An important qualitative feature of the nonlinearity in the Hill equation is that it is sigmoid for $n>1$. In other words the first derivative increases for small values of the argument before decreasing for larger values. This is a feature which may be observed in experimental data. When it is seen it is possible to try to fit it to a description by the Hill equation using a Hill plot. This means fitting the linear relation $\log(\frac{f}{A-f})=n\log u-\log B$. In particular the slope of the graph gives the Hill coefficient $n$.
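Here is a small sketch of recovering a Hill coefficient this way (entirely synthetic data of my own; it also assumes the plateau $A$ is known, which in practice must be estimated as well):

```python
import numpy as np

A, B, n_true = 1.0, 2.0, 2.5
rng = np.random.default_rng(1)

u = np.linspace(0.2, 4.0, 25)
f = A * u**n_true / (u**n_true + B) * rng.normal(1.0, 0.01, size=u.size)

# Hill plot: log(f/(A-f)) = n*log(u) - log(B) should be a straight line
slope, intercept = np.polyfit(np.log(u), np.log(f / (A - f)), 1)
print(slope, np.exp(-intercept))   # ~2.5 and ~2.0: the Hill coefficient n and B
```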
This equation has no relation to the linear ordinary differential equation called Hill’s equation which is named after the astronomer and mathematician George William Hill.
Posted in dynamical systems, mathematical biology | 1 Comment »
http://mathhelpforum.com/pre-calculus/102214-problem-limits.html | # Thread:
1. ## Problem on Limits
Can anyone give guidance and point me in the right direction for this problem? I've tried a couple approaches that have lead to dead ends and am not sure what to do. Thanks.
$\dfrac{\sqrt{10x+9}-\sqrt{79}}{x-7}$
Limit as $x \to 7$
2. Originally Posted by JP22
Can anyone give guidance and point me in the right direction for this problem? I've tried a couple approaches that have lead to dead ends and am not sure what to do. Thanks.
$\dfrac{\sqrt{10x+9}-\sqrt{79}}{x-7}$
Limit as $x \to 7$
start by multiplying the numerator and denominator by $\sqrt{10x+9} + \sqrt{79}$
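For completeness, carrying the hint through (my working, not part of the original reply):

$\lim\limits_{x\to 7}\dfrac{\sqrt{10x+9}-\sqrt{79}}{x-7}=\lim\limits_{x\to 7}\dfrac{10(x-7)}{(x-7)\left(\sqrt{10x+9}+\sqrt{79}\right)}=\dfrac{10}{2\sqrt{79}}=\dfrac{5}{\sqrt{79}}$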
3. I think I figured it out. Thanks.
http://mathoverflow.net/revisions/41334/list | Return to Question
3 Corrected well-ordered to well-founded
Symmetric Proof that Product is Well-Founded
This is a fairly minor, technical question, but I'll toss it out in case someone has a good idea on it.
Suppose $(X,<_X)$ and $(Y,<_Y)$ are well-founded orderings (not necessarily linearly ordered, though I don't think it matters). Consider the ordering ${<}$ on $X\times Y$ given by $(x',y') < (x,y)$ if $x'\leq x$ and $y'\leq y$, and either $x' < x$ or $y' < y$. Note that this is not the lexicographic ordering; indeed, it's symmetric.
Obviously $X\times Y$ is well-founded. Suppose I want to prove this carefully (by which I really mean "in the formal theory $ID_1$"); more precisely, let's take $X$ to be a set with two properties: $$Cl_X:\forall x((\forall x'<_X x. x'\in X)\rightarrow x\in X)$$ and $$Ind_X: \forall Z[\forall x((\forall x'<_X x. x'\in Z)\rightarrow x\in Z)\rightarrow X\subseteq Z]$$ and similarly for $Y$. (These just characterize that $X$ is its own well-founded part.) I want to prove that for all $(x,y)\in X\times Y$, $(x,y)$ is in the well-founded part of $X\times Y$ under ${<}$; call the well-founded part of $X\times Y$ $Acc(X\times Y)$.
I know one way to prove this: for each $x\in X$, define `$Y_x=\{y\in Y\mid (x,y)\in Acc(X\times Y)\}$`. Let $X'$ be the set of $x\in X$ such that $Y\subseteq Y_x$. Then it would be good enough to show that $X'$ satisfies the closure property, so I can apply $Ind_X$. To do this, in turn, I show that, if $Y\subseteq Y_{x'}$ for all $x'<_X x$ then $Y_x$ satisfies the closure property, so I can apply $Ind_Y$.
Of course, that means I know a second way: I could swap $X$ and $Y$ in the above proof. Moreover, when one works through the details, it's clear that I'm really proving that the lexicographic ordering is well-founded, and using the fact that ${<}$ is a subrelation of the lexicographic ordering.
Which brings me to my question: is there a proof that $Acc(X\times Y)=X\times Y$ which is symmetric?
2 fixed latex
This is a fairly minor, technical question, but I'll toss it out in case someone has a good idea on it.
Suppose $(X,<_X)$ and $(Y,<_Y)$ are well-orderings (not necessarily linearly ordered, though I don't think it matters). Consider the ordering ${<}$ on $X\times Y$ given by $(x',y') < (x,y)$ if $x'\leq x$ and $y'\leq y$, and either $x' < x$ or $y' < y$. Note that this is not the lexicographic ordering; indeed, it's symmetric.
Obviously $X\times Y$ is well-ordered. Suppose I want to prove this carefully (by which I really mean "in the formal theory $ID_1$"); more precisely, let's take $X$ to be a set with two properties: $$Cl_X:\forall x(\forall x'<_X x. x'\in X)\rightarrow x\in X$$ and $$Ind_X: \forall Z[\forall x(\forall x'<_X x. x'\in Z)\rightarrow x\in Z)\rightarrow X\subseteq Z]$$ and similarly for $Y$. (These just characterize that $X$ is its own well-founded part.) I want to prove that for all $(x,y)\in X\times Y$, $(x,y)$ are in the well-founded part of $X\times Y$ under ${<}$; call the well-founded part of $X\times Y$ $Acc(X\times Y)$.
I know one way to prove this: for each $x\in X$, define `$Y_x=\{y\in Y\mid (x,y)\in Acc(X\times Y)\}$`. Let $X'$ be the set of $x\in X$ such that $Y\subseteq Y_x$. Then it would be good enough to show that $X'$ satisfies the closure property, so I can apply $Ind_X$. To do this, in turn, I show that, if $Y\subseteq Y_{x'}$ for all $x'<_X x$ then $Y_x$ satisfies the closure property, so I can apply $Ind_Y$.
Of course, that means I know a second way: I could swap $X$ and $Y$ in the above proof. Moreover, when one works through the details, it's clear that I'm really proving that the lexicographic ordering is well-founded, and using the fact that ${<}$ is a subrelation of the lexicographic ordering.
Which brings me to my question: is there a proof that $Acc(X\times Y)=X\times Y$ which is symmetric?
1
Symmetric Proof that Product is Well-Ordered
This is a fairly minor, technical question, but I'll toss it out in case someone has a good idea on it.
Suppose $(X,<_X)$ and $(Y,<_Y)$ are well-orderings (not necessarily linearly ordered, though I don't think it matters). Consider the ordering ${<}$ on $X\times Y$ given by $(x',y') < (x,y)$ if $x'\leq x$ and $y'\leq y$, and either $x' < x$ or $y' < y$. Note that this is not the lexicographic ordering; indeed, it's symmetric.
Obviously $X\times Y$ is well-ordered. Suppose I want to prove this carefully (by which I really mean "in the formal theory $ID_1$"); more precisely, let's take $X$ to be a set with two properties: $$Cl_X:\forall x(\forall x'<_X x. x'\in X)\rightarrow x\in X$$ and $$Ind_X: \forall Z[\forall x(\forall x'<_X x. x'\in Z)\rightarrow x\in Z)\rightarrow X\subseteq Z]$$ and similarly for $Y$. (These just characterize that $X$ is its own well-founded part.) I want to prove that for all $(x,y)\in X\times Y$, $(x,y)$ are in the well-founded part of $X\times Y$ under ${<}$; call the well-founded part of $X\times Y$ $Acc(X\times Y)$.
I know one way to prove this: for each $x\in X$, define `$Y_x=\{y\in Y\mid (x,y)\in Acc(X\times Y)\}$`. Let $X'$ be the set of $x\in X$ such that $Y\subseteq Y_x$. Then it would be good enough to show that $X'$ satisfies the closure property, so I can apply $Ind_X$. To do this, in turn, I show that, if $Y\subseteq Y_{x'}$ for all $x'<_X x$ then $Y_x$ satisfies the closure property, so I can apply $Ind_Y$.
Of course, that means I know a second way: I could swap $X$ and $Y$ in the above proof. Moreover, when one works through the details, it's clear that I'm really proving that the lexicographic ordering is well-founded, and using the fact that ${<}$ is a subrelation of the lexicographic ordering.
Which brings me to my question: is there a proof that $Acc(X\times Y)=X\times Y$ which is symmetric?
http://math.stackexchange.com/questions/150732/asymptotic-expansion-of-x-n-x-n-frac1-tanx-n | # Asymptotic expansion of $x_{n}$, $x_{n}=\frac{1}{\tan(x_{n})}$
I would like to find a two-term or a three-term asymptotic expansion of $x_{n}$, the unique solution of $$x_{n}=\frac{1}{\tan(x_{n})}$$ on the interval $]n\pi,n\pi+\pi[$.
We have: $$x_{n}=n\pi+\arctan(\frac{1}{x_{n}})$$
So $$x_{n} \sim_{n\rightarrow \infty} n\pi$$
What is the method to find the next terms of the asymptotic expansion?
$\arctan(z)=z+o(z)$ when $z\to0$ hence $x_n=n\pi+1/(n\pi)+o(1/n)$. – Did May 28 '12 at 11:51
Thanks! I should have thought more before asking... – Chon May 28 '12 at 14:12
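A quick numerical check of this expansion (my own sketch; it assumes mpmath is available):

```python
from mpmath import mp, tan, pi, findroot

mp.dps = 30
for n in range(1, 6):
    # x_n solves tan(x) = 1/x in (n*pi, n*pi + pi); start near the predicted root
    x = findroot(lambda t: tan(t) - 1 / t, n * pi + 1 / (n * pi))
    print(n, x - n * pi, 1 / (n * pi))   # the two columns agree to O(1/n^3)
```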
http://mathoverflow.net/revisions/30714/list | ## Return to Answer
2 added 67 characters in body
$\mathbb{Q}/\mathbb{Z}$ is a pretty terrible abelian group, or a rather hard one; there may be better injective resolutions to work with. It would certainly be easier to do the projective resolution: use $0 \to \mathbb{Z} \to \mathbb{Z} \to \mathbb{Z}/n \to 0$. This will surely be easier to work through than the one involving $\mathbb{Q}/\mathbb{Z}$. Then compute the appropriate tensor product or hom group.
I started learning this stuff on more interesting modules as Schmidt suggests. For example, modules over the group ring of some cyclic group, or maybe an exterior algebra on two generators (if you make the generators of different gradings, in particular 1 and 3). This happens to be the category of modules you need to understand in order to compute complex connective k-theory!
this should help get you going. These computations are very fun!
1
Q/Z is a pretty terrible abelian group, or a rather hard one, there may be better injective resolutions to work with. It would certainly be easier to do the projective resolution, use $0 \to Z \to Z \to Z/n \to 0$. this will surely be easier to work through than the one involving Q/Z. Then compute the appropriate tensor product or hom group.
I started learning this stuff on more interesting modules as Schmidt suggests. For example, modules over the group ring of some cyclic group, or maybe an exterior algebra on two generators (if you make the generators of different gradings, in particular 1 and 3). This happens to be the category of modules you need to understand in order to compute complex connective k-theory!
this should help get you going. These computations are very fun! | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9213507771492004, "perplexity_flag": "head"} |
http://math.stackexchange.com/questions/27662/extend-an-alternative-definition-of-limits-to-one-sided-limits/27679 | # extend an alternative definition of limits to one-sided limits
The following theorem is an alternative definition of limits. In this theorem, you don't need to know the value of $\lim \limits_{x \rightarrow c} {f(x)}$ in order to prove the limit exists.
Let $I \subseteq \mathbb{R}$ be an open interval, let $c \in I$, and let $f: I\setminus\{c\} \rightarrow \mathbb{R}$ be a function. Then $\lim \limits_{x \rightarrow c} {f(x)}$ exists if for each $\epsilon > 0$, there is some $\delta > 0$ such that $x,y \in I\setminus\{c\}$ and $\vert x-c \vert < \delta$ and $\vert y-c \vert < \delta$ implies $\vert f(x)-f(y) \vert < \epsilon$.
I have extended this to a theorem for one-sided limits: $\lim \limits_{x \rightarrow c+} {f(x)}$ exists if for each $\epsilon > 0$, there is some $\delta > 0$ such that $c<x<c+\delta$ and $c<y<c+\delta$ imply $\vert f(x)-f(y) \vert < \epsilon$.
My question is: how to prove this extension to one-sided limits? I'm not familiar with Cauchy sequences, so I would prefer $\epsilon$-$\delta$ proof.
-
It's "Cauchy sequences", not "Cauchy's sequence". It's a class of sequences, not a particular one that caught Cauchy's fancy... – Arturo Magidin Mar 17 '11 at 20:31
@Arturo: corrected. Thanks! :) – Lindsay Duran Mar 17 '11 at 20:33
Do you know how to prove your first theorem? If so, I can't see why the one-sided case presents a problem. – TonyK Mar 17 '11 at 20:49
@Tony: I gave the key part of a proof, but I used Cauchy sequences; if the OP doesn't know what they are, that's a bit of a problem... – Arturo Magidin Mar 17 '11 at 20:56
## 3 Answers
You have essentially the same difficulty as in the previous question: figuring out a "target" number $L$ for the limit.
(By the way: if you don't know what Cauchy sequences are, you should have said something in that previous problem!)
You can proceed in a similar manner. First, you want to find a potential "target." For $\epsilon_n = \frac{1}{n}$, you know there is a $\delta_n$ (which you may assume is less than or equal to $\frac{1}{n}$ and less than $\delta_{n-1}$) such that for all $x$ and $y$, if $c\lt x\lt c+\delta_n$ and $c\lt y \lt c+\delta_n$, then $|f(x)-f(y)|\lt \frac{1}{n}$.
Note that this means that the values of $f$ on $(c,c+\delta_n)$ are bounded: for any $x\in (c,c+\delta_n)$, you know that $|f(x) - f(c+\delta_n/2)|\lt \frac{1}{n}$, so $|f(x)| \lt \frac{1}{n}+|f(c+\delta_n/2)|$.
In particular, there is a greatest lower bound $a_n$ to $f(c,c+\delta_n)$. Since $(c,c+\delta_{n+1})\subseteq (c,c+\delta_n)$, we have that $a_n\leq a_{n+1}$. So $a_1,a_2,\ldots$ is an increasing sequence. The sequence is bounded above, so it converges to some $L$.
The obvious "target" for the limit is $L$. So, let $\epsilon\gt 0$, and you want to show that there is a $\delta\gt 0$ such that for all $x$, if $c\lt x\lt c+\delta$, then $|f(x)-L|\lt \epsilon$.
Since $L$ is the least upper bound of $a_1,a_2,\ldots$, there exists $N$ such that $L-\frac{\epsilon}{2} \lt a_n \leq L$ for all $n\geq N$. And there exists an $M$ such that $\frac{1}{M}\lt \frac{\epsilon}{4}$. Let $K=\max\{M,N\}$, and consider $\delta=\delta_K$.
We know that since $a_K$ is the greatest lower bound of $f(c,c+\delta_K)$, there exists $y\in (c,c+\delta_K)$ such that $a_K\leq f(y)\lt a_K+\frac{\epsilon}{4}$.
Now suppose that $c\lt x \lt c+\delta_K$. Then \begin{align*} |f(x)-L| &\leq |f(x)-f(y)|+|f(y)-a_K| + |a_K-L| &&\mbox{(triangle inequality)}\\ &\lt \frac{1}{K} + |f(y)-a_K| + |a_K-L| &&\mbox{(choice of $\delta_K$)}\\ &\leq \frac{1}{K} + \frac{\epsilon}{4} + |a_K-L| &&\mbox{(choice of $y$)}\\ &\leq \frac{1}{K} + \frac{\epsilon}{4} + \frac{\epsilon}{2} &&\mbox{(since $K\geq N$)}\\ &\lt \frac{\epsilon}{4} + \frac{\epsilon}{4} + \frac{\epsilon}{2} &&\mbox{(since $K\geq M$)}\\ &=\epsilon. \end{align*}
(With this in mind, you can try doing the two-sided version of the problem without invoking Cauchy sequences; the idea is very similar, and you don't need to consider both least upper bounds and greatest lower bounds for $f(c-\delta_n,c+\delta_n)$, just one of the two and use their limit as a "target". Once you know what to aim for, hitting it becomes much easier).
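(As a numerical aside, here is a sketch of the criterion in action for a function with a genuine right-hand limit at $c=0$; the sample function and the grid are arbitrary choices. The oscillation of $f$ over $(c,c+\delta)$ shrinks with $\delta$, exactly as the criterion demands.)

```
import numpy as np

def f(x):
    return x * np.sin(1.0 / x)   # right-hand limit 0 at c = 0

for delta in [1.0, 0.1, 0.01, 0.001]:
    xs = np.linspace(delta * 1e-6, delta, 100_000)   # grid inside (0, delta)
    osc = f(xs).max() - f(xs).min()                  # approximates sup |f(x) - f(y)|
    print(delta, osc)                                # oscillation -> 0
```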
-
thanks for this! now I figured out ways to prove both problems. :) And sorry about not pointing out "Cauchy Sequence" thing... – Lindsay Duran Mar 17 '11 at 22:06
Answer: For each positive $\epsilon$, there is some positive $\delta$ such that $\vert f(x)-f(y) \vert \le \epsilon$ for every $x$ and $y$ in $I$ such that $c<x\le c+\delta$ and $c<y\le c+\delta$.
About the question added later on in your post: you could try to mimick the proof given in the two sided case.
-
Assume that $\lim_{x \to c^+} f(x) = g$ exists. Then for any $\varepsilon > 0$ there exists $\delta > 0$ such that if $0 < x - c < \delta$ then $|f(x) - g| < \varepsilon$ and if $0 < y - c < \delta$ then $|f(y) - g| < \varepsilon$. Hence $$|f(x) - f(y)| \leq |f(x) - g| + |f(y) - g| < 2\varepsilon$$ for $0 < x - c < \delta$ and $0 < y - c < \delta$.
On the other hand, assume that your definition holds. Then there exists a strictly decreasing sequence $(c_n)_{n \geq 1}$ such that $c_n \to c$. For a given $\varepsilon > 0$ take the $\delta > 0$ from your condition; since $c_n \to c$, there exists a positive integer $N$ such that $c < c_n < c + \delta$ for all $n > N$, and hence $|f(c_m)-f(c_n)| < \varepsilon$ for $m,n > N$.
It means that $(f(c_n))$ is Cauchy sequence and converges to, say, $g$.
(If you are not familiar with Cauchy sequences, it is easy to prove that the sequence $(a_n)$ such that for $\varepsilon > 0$ there exists $N > 0$ such that $|a_m - a_n| < \varepsilon$ for $m,n > N$ converges. To prove it, notice first that $(a_n)$ is bounded. Then, by Bolzano-Weierstrass theorem there is a subsequence $(a_{n_k})$ which converges to some $a$. Then, by Cauchy condition and triangle inequality $a_n \to a$.)
Assume now that there exists a sequence $(c'_n)_{n \geq 1}$ such that $f(c'_n) \to g' \neq g$. Let $C_n$ equal $c_n$ if $n$ is even, and $c'_n$ if $n$ is odd. Then $(f(C_n))$ diverges, since it has subsequences tending to the distinct limits $g$ and $g'$; but on the other hand we have $|f(C_m) - f(C_n)| < \varepsilon$ for sufficiently large $m,n$, which means that $(f(C_n))$ is Cauchy, hence convergent. Contradiction.
- | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 111, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.930211067199707, "perplexity_flag": "head"} |
http://mathhelpforum.com/algebra/26681-how-do-i-find-sides-triangle.html | # Thread:
1. ## How do i find the sides of this triangle?
Here's the question:
On the banks of a river, surveyors marked locations A, B, and C. The measure of angle ACB=70 degrees and the measure of angle ABC=65 degrees. Which expression shows the relationship between the lengths of the sides of this triangle?
That means I need to find the sides of the triangle, right? How would I do this?
2. Originally Posted by eh501
Here's the question:
On the banks of a river, surveyors marked locations A, B, and C. The measure of angle ACB=70 degrees and the measure of angle ABC=65 degrees. Which expression shows the relationship between the lengths of the sides of this triangle?
That means I need to find the sides of the triangle, right? How would I do this?
Which expressions indeed??
It would help if you included them.
I'm guessing you'll need the sine rule
$\frac{a}{\sin A}=\frac{b}{\sin B}=\frac{c}{\sin C}$
3. 1) AB < BC < AC
2) BC < AB < AC
3) BC < AC < AB
4) AC < AB < BC
4. In any triangle, the largest side will be opposite the largest angle and the smallest side will be opposite the smallest angle. Can you see the answer now? | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9097980856895447, "perplexity_flag": "middle"} |
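For anyone who wants to check numerically: the third angle is 180 - 70 - 65 = 45 degrees, and a quick sine-rule computation confirms the ordering (a sketch; the common ratio of the sine rule is set to 1):

```
import math

A, B, C = 45.0, 65.0, 70.0           # angles at A, B, C in degrees

# Sine rule: each side is proportional to the sine of the opposite angle.
BC = math.sin(math.radians(A))       # side opposite A
AC = math.sin(math.radians(B))       # side opposite B
AB = math.sin(math.radians(C))       # side opposite C
print(BC, AC, AB)                    # 0.707... < 0.906... < 0.939..., i.e. option 3
```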
http://mathoverflow.net/questions/40075/chain-complexes-and-linear-infinity-categories | Chain Complexes and Linear Infinity-Categories
A statement I heard recently is that "chain complexes are the same thing as strict linear $\infty$-categories". Can someone explain how to see this?
-
The cochain complex of a space $X$ is an $E_\infty$-algebra, i.e., product of cochains (cup product) are defined although it is only commutative up to homotopy, thereby turning it into an $E_\infty$-algebra. And I presume that an $E_\infty$-category (which is linear in your case because the hom sets are chain complexes over $\mathbf{Z}$) with one object is an $E_\infty$-algebra. I'm sure other users here would be able to elaborate more. – Somnath Basu Sep 27 2010 at 5:36
3 Answers
The result you want is contained in
(BROWN, R. and HIGGINS, P.J.), `Cubical abelian groups with connections are equivalent to chain complexes', Homology, Homotopy and Applications, 5(1) (2003) 49-52.
which also deals with strict globular $\omega$-categories internal to abelian groups.
-
You say 'how to see this', and for that I suggest that you first look at low dimensional cases. You also have to be a bit careful in the definition of chain complex as the ones in question have to be trivial in negative degree. You also, as hinted at by others, have to say what you mean by 'linear'.
(I will not assume that you know much on this, and excuse me if you do and I am saying obvious things.)
The first step is to see what $\infty$-cat structure a 'very short' chain complex gives, that is, to take a chain complex with stuff only in dimension 0 and 1 and to stare at $C_1\oplus C_0$ and $C_0$ and to interpret them as the arrows and objects of a category with source given by the projection, target by the source plus the boundary $t(c_1,c_0) = c_0 + \partial c_1$. Composition is given by addition in $C_1$. Now just check the details to get a category. (In fact it is a 2-category with the additive structure on the groups giving the other composition.)
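To see the mechanics, here is a toy sketch in Python; the choice $C_1 = C_0 = \mathbb{Z}$ with boundary $\partial(c) = 2c$ is an arbitrary assumption for illustration:

```
# Arrows are pairs (c1, c0) in C_1 (+) C_0; objects are elements of C_0.
def d(c1):
    return 2 * c1                    # an arbitrary boundary map C_1 -> C_0

def source(arrow):
    c1, c0 = arrow
    return c0

def target(arrow):
    c1, c0 = arrow
    return c0 + d(c1)                # t(c1, c0) = c0 + boundary(c1)

def compose(g, f):                   # g after f; composition is addition in C_1
    assert source(g) == target(f)
    return (f[0] + g[0], f[1])

f, g = (3, 1), (5, 7)                # f: 1 -> 7, g: 7 -> 17
print(target(compose(g, f)))         # 17, matching target(g)
```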
The next step is to try to see what a chain complex of length 2 (i.e. allowing $C_2$ to be non-zero) gives. Clearly you need to add an additional factor and to get some 2-cells. The formula is quite geometric so should be findable by fiddling around. Now you can hope to 'see' how to go further. (The actual construction is outlined in several different places, so once you think you know what is happening in low dimensions look for it in the literature. I have usually found it worth while working this sort of thing out myself first, knowing that the intuition is the really hard thing to grasp, whilst the technique of proof may be clear once the intuition is in place.)
If you want a linearised enriched version, now start working with the category of chain complexes. The chain maps etc, between chain complexes form a chain complex, which I presume you know well as it is 'classical', so now you can start applying the steps that are suggested above to that cochain complex. The objects will be the maps the 1-cells homotopies between them, the 2-cells homotopies between homotopies and so on.
You then have 'linear' $\infty$-categories as objects and the homs between them are also such objects.
Once all that seems routine then looking at the simplicial side and the Dold-Kan equivalence in the light of that is very well worth doing. Proofs of these things are reasonably easy to manufacture, once the intuitions are grasped, and that is what I am assuming you are asking for.
(Edit: if you want a full answer, look at Ronnie Brown and Phil Higgins' paper that he mentions in his reply. But I still think that trying to see what has to be done is very important.)
-
(Edit: I was being a bit free and easy below. Dold-Kan is rigorous for $\infty$-groupoids = Kan complexes, but I'm only guessing when I say I can sweep details aka weak complicial sets under the carpet. I'll leave the answer as is, in case anyone else can fill in details)
If we take 'strict linear $\infty$-categories to mean '$\infty$-categories internal to $Ab$', and we take '$\infty$-category' to mean simplicial set (I'm sweeping some details under the carpet, but really you'd want to consider (weak) complicial sets to make this rigorous), then this is nothing but the Dold-Kan correspondence: simplicial abelian groups are the same thing as non-negatively graded chain complexes. Note that the simplicial set underlying a simplicial abelian group is automatically a Kan complex, so really this reduces to talking about $\infty$-groupoids.
If you want $\infty$-categories enriched in something linear, rather than internal categories, then you'd want some sort of weak enrichment in simplicial abelian groups. It sound like you might want some sort of $E_\infty$-category, but this is certainly not equivalent to chain complexes.
- | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 23, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9509351253509521, "perplexity_flag": "head"} |
http://en.wikipedia.org/wiki/Limit_of_a_sequence | Limit of a sequence
| n | n sin(1/n) |
| --- | --- |
| 1 | 0.841471 |
| 2 | 0.958851 |
| ... | ... |
| 10 | 0.998334 |
| ... | ... |
| 100 | 0.999983 |
As the positive integer n becomes larger and larger, the value n sin(1/n) becomes arbitrarily close to 1. We say that "the limit of the sequence n sin(1/n) equals 1."
In mathematics, the limit of a sequence is the value that the terms of a sequence "tend to".[1] If such a limit exists, the sequence is called convergent. A sequence which does not converge is said to be divergent.[2] The limit of a sequence is said to be the fundamental notion on which the whole of analysis ultimately rests.[1]
Limits can be defined in any metric or topological space, but are usually first encountered in the real numbers.
Real numbers
The plot of a convergent sequence {an} is shown in blue. Visually we can see that the sequence is converging to the limit 0 as n increases.
Formal Definition
We call $x$ the limit of the sequence $(x_n)$ if the following condition holds:
• For each real number $\epsilon > 0$, there exists a natural number $N$ such that, for every $n > N$, we have $|x_n - x| < \epsilon$.
In other words, for every measure of closeness $\epsilon$, the sequence's terms are eventually that close to the limit. The sequence $(x_n)$ is said to converge to or tend to the limit $x$, written $x_n \to x$ or $\lim_{n \to \infty} x_n = x$.
If a sequence converges to some limit, then it is convergent; otherwise it is divergent.
Examples
• If $x_n = c$ for some constant c, then $x_n \to c$. Proof: choose $N = 1$. We have that, for every $n > N$, $|x_n - c| = 0 < \epsilon$.
• If $x_n = 1/n$, then $x_n \to 0$. Proof: choose $N = \left\lfloor\frac{1}{\epsilon}\right\rfloor$ (the floor function). We have that, for every $n > N$, $|x_n - 0| \le x_{N+1} = \frac{1}{\lfloor1/\epsilon\rfloor + 1} < \epsilon$. (A numerical sketch of this choice of $N$ appears just after this list.)
• If $x_n = 1/n$ when $n$ is even, and $x_n = 1/n^2$ when $n$ is odd, then $x_n \to 0$. (The fact that $x_{n+1} > x_n$ whenever $n$ is odd is irrelevant.)
• Given any real number, one may easily construct a sequence that converges to that number by taking decimal approximations. For example, the sequence $0.3, 0.33, 0.333, 0.3333, ...$ converges to $1/3$. Note that the decimal representation $0.3333...$ is the limit of the previous sequence, defined by
$0.3333...\triangleq\lim_{n\to \infty} \sum_{i=1}^n \frac{3}{10^i}$.
• Finding $c$ might sometimes be non-intuitive, like $\lim_{n\to\infty} \left( 1 + \frac{1}{n} \right)^n$, the number e. In these cases, one common approach is to find upper and lower bounds for the limit of the sequence (e.g., proving that $2.71< e <2.72$).
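A small Python sketch of the $\epsilon$–$N$ bookkeeping in the second example above (the tolerance and the finite range checked are arbitrary choices):

```
import math

eps = 1e-3
N = math.floor(1 / eps)                              # the N from the example
ok = all(abs(1 / n - 0) < eps for n in range(N + 1, N + 10_000))
print(N, ok)                                         # 1000 True
```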
Properties
Limits of sequences behave well with respect to the usual arithmetic operations. If $a_n \to a$ and $b_n \to b$, then $a_n+b_n \to a+b$, $a_nb_n \to ab$ and, if neither b nor any $b_n$ is zero, $a_n/b_n \to a/b$.
For any continuous function f, if $x_n \to x$ then $f(x_n) \to f(x)$. In fact, any real-valued function f is continuous if and only if it preserves the limits of sequences (though this is not necessarily true when using more general notions of continuity).
Some other important properties of limits of real sequences include the following.
• The limit of a sequence is unique.
• $\lim_{n\to\infty} (a_n \pm b_n) = \lim_{n\to\infty} a_n \pm \lim_{n\to\infty} b_n$
• $\lim_{n\to\infty} c a_n = c \lim_{n\to\infty} a_n$
• $\lim_{n\to\infty} (a_n b_n) = (\lim_{n\to\infty} a_n)( \lim_{n\to\infty} b_n)$
• $\lim_{n\to\infty} \frac{a_n} {b_n} = \frac{ \lim_{n\to\infty} a_n}{ \lim_{n\to\infty} b_n}$ provided $\lim_{n\to\infty} b_n \ne 0$
• $\lim_{n\to\infty} a_n^p = \left[ \lim_{n\to\infty} a_n \right]^p$
• If $a_n \leq b_n$ for all $n$ greater than some $N$, then $\lim_{n\to\infty} a_n \leq \lim_{n\to\infty} b_n$
• (Squeeze Theorem) If $a_n \leq c_n \leq b_n$ for all $n > N$, and $\lim_{n\to\infty} a_n = \lim_{n\to\infty} b_n = L$, then $\lim_{n\to\infty} c_n = L$.
• If a sequence is bounded and monotonic then it is convergent.
• A sequence is convergent if and only if every subsequence is convergent.
These properties are extensively used to prove limits without the need to directly use the cumbersome formal definition. Once proven that $1/n \to 0$ it becomes easy to show that $\frac{a}{b+c/n} \to \frac{a}{b}$, ($b \ne 0$), using the properties above.
Infinite limits
The terminology and notation of convergence is also used to describe sequences whose terms become very large. A sequence $(x_n)$ is said to tend to infinity, written $x_n \to \infty$ or $\lim_{n\to\infty}x_n = \infty$ if, for every K, there is an N such that, for every $n \geq N$, $x_n > K$; that is, the sequence terms are eventually larger than any fixed K. Similarly, $x_n \to -\infty$ if, for every K, there is an N such that, for every $n \geq N$, $x_n < K$.
Metric spaces
Definition
A point x of the metric space (X, d) is the limit of the sequence (xn) if, for all ε > 0, there is an N such that, for every $n \geq N$, $d(x_n, x) < \epsilon$. This coincides with the definition given for real numbers when $X = \mathbb{R}$ and $d(x, y) = |x-y|$.
Properties
For any continuous function f, if $x_n \to x$ then $f(x_n) \to f(x)$. In fact, a function f is continuous if and only if it preserves the limits of sequences.
Limits of sequences are unique when they exist, as distinct points are separated by some positive distance, so for $\epsilon$ less than half this distance, sequence terms cannot be within a distance $\epsilon$ of both points.
Topological spaces
Definition
A point x of the topological space (X, τ) is the limit of the sequence (xn) if, for every neighbourhood U of x, there is an N such that, for every $n \geq N$, $x_n \in U$. This coincides with the definition given for metric spaces if (X,d) is a metric space and $\tau$ is the topology generated by d.
The limit of a sequence of points $\left(x_n:n\in \mathbb{N}\right)\;$ in a topological space T is a special case of the limit of a function: the domain is $\mathbb{N}$ in the space $\mathbb{N} \cup \lbrace +\infty \rbrace$ with the induced topology of the affinely extended real number system, the range is T, and the function argument n tends to +∞, which in this space is a limit point of $\mathbb{N}$.
Properties
If X is a Hausdorff space then limits of sequences are unique where they exist. Note that this need not be the case in general; in particular, if two points x and y are topologically indistinguishable, any sequence that converges to x must converge to y and vice-versa.
Cauchy sequences
Main article: Cauchy sequence
The plot of a Cauchy sequence (xn), shown in blue, as xn versus n. Visually, we see that the sequence appears to be converging to a limit point as the terms in the sequence become closer together as n increases. In the real numbers every Cauchy sequence converges to some limit.
A Cauchy sequence is a sequence whose terms become arbitrarily close together as n gets very large. The notion of a Cauchy sequence is important in the study of sequences in metric spaces, and, in particular, in real analysis. One particularly important result in real analysis is the Cauchy characterization of convergence for sequences:
A sequence is convergent if and only if it is Cauchy.
Definition in hyperreal numbers
The definition of the limit using the hyperreal numbers formalizes the intuition that for a "very large" value of the index, the corresponding term is "very close" to the limit. More precisely, a real sequence $(x_n)$ tends to L if for every infinite hypernatural H, the term xH is infinitely close to L, i.e., the difference xH - L is infinitesimal. Equivalently, L is the standard part of xH
$L = {\rm st}(x_H)\,$.
Thus, the limit can be defined by the formula
$\lim_{n \to H} x_n= {\rm st}(x_H),$
where the limit exists if and only if the righthand side is independent of the choice of an infinite H.
History
The Greek philosopher Zeno of Elea is famous for formulating paradoxes that involve limiting processes.
Leucippus, Democritus, Antiphon, Eudoxus and Archimedes developed the method of exhaustion, which uses an infinite sequence of approximations to determine an area or a volume. Archimedes succeeded in summing what is now called a geometric series.
Newton dealt with series in his works on Analysis with infinite series (written in 1669, circulated in manuscript, published in 1711), Method of fluxions and infinite series (written in 1671, published in English translation in 1736, Latin original published much later) and Tractatus de Quadratura Curvarum (written in 1693, published in 1704 as an Appendix to his Opticks). In the latter work, Newton considers the binomial expansion of (x+o)n which he then linearizes by taking limits (letting o→0).
In the 18th century, mathematicians like Euler succeeded in summing some divergent series by stopping at the right moment; they did not much care whether a limit existed, as long as it could be calculated. At the end of the century, Lagrange in his Théorie des fonctions analytiques (1797) opined that the lack of rigour precluded further development in calculus. Gauss in his etude of hypergeometric series (1813) for the first time rigorously investigated under which conditions a series converged to a limit.
The modern definition of a limit (for any ε there exists an index N so that ...) was given by Bernhard Bolzano (Der binomische Lehrsatz, Prague 1816, little noticed at the time) and by Weierstrass in the 1870s.
See also
• Limit of a function
• Limit of a net — A net is a topological generalization of a sequence.
• Modes of convergence
Notes
1. ^ a b Courant (1961), p. 29.
2. Courant (1961), p. 39.
References
• Courant, Richard (1961). "Differential and Integral Calculus Volume I", Blackie & Son, Ltd., Glasgow.
• Frank Morley and James Harkness A treatise on the theory of functions (New York: Macmillan, 1893) | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 86, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9012814164161682, "perplexity_flag": "head"} |
http://mathhelpforum.com/algebra/180782-inequality-proof.html | # Thread:
1. ## Inequality proof.
I need to mathematically prove that:
(a + b)/(c + d) lies between (a / c) and (b / d).
a,b,c,d are all real and >0
Any ideas?
(also, if necessary you can assume that b>a and d>c but preferably not)
2. We don't need to assume $b\geq a$ (or the other way), but we do have to assume either $ad \leq bc$ or the opposite*. It's obviously one of them (unless it's both) so there's no danger in doing this; also notice that the roles of a and d can be switched with those of b and c respectively in the middle of the inequality, so what we're doing is all kosher.
So the question is: Prove $\frac{a}{c} \leq \frac{a+b}{c+d} \leq \frac{b}{d}$
Start with the assumption ad < bc (When I write < I mean less than or equal to, but I don't want to format this entire answer).
First, add ac to both sides:
ad+ac < bc+ac
a(d+c) < c(a+b)
a/c < (a+b)/(c+d). Voila, the left inequality
Then go back to the assumption but add bd to both sides:
ad+bd < bc+bd
d(a+b) < b(c+d)
(a+b)/(c+d) < b/d. And there's the right one.
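Here is a quick randomized sanity check of the statement (a sketch; the sampling range for a, b, c, d is arbitrary):

```
import random

for _ in range(100_000):
    a, b, c, d = (random.uniform(0.01, 100.0) for _ in range(4))
    lo, hi = sorted([a / c, b / d])
    assert lo <= (a + b) / (c + d) <= hi   # the mediant sits between the two
print("(a+b)/(c+d) always lay between a/c and b/d")
```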
[ Note: when I did my scratchwork, I did it by assuming what I wanted to prove and tried to get something I knew to be true; I didn't just pull "+bd" out of nowhere. But that's not actually a proof - you're not allowed to assume what you want to be true and then use it to show 0=0, the logic can fail ]
* This may seem like a bizarre thing to assume, but if I wrote it like this: "We must assume either a/c < b/d OR b/d < a/c," now it makes more sense, doesn't it? I chose the non-fraction form because it was easier for the computation in the proof, also it's easier to see how the a/b // d/c pairs can be switched around. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9534457325935364, "perplexity_flag": "middle"} |
http://mathhelpforum.com/advanced-algebra/156643-subgroup-question.html | # Thread:
1. ## subgroup question
Q: Find all possible finite subgroups of the non negative rational numbers under multiplication.
I can only think of the singleton set containing the identity element which would be of order 1. All other sets would be of infinite order. I feel there is more to this though.
thanks
2. Originally Posted by Danneedshelp
Q: Find all possible finite subgroups of the non negative rational numbers under multiplication.
I can only think of the singleton set containing the identity element which would be of order 1. All other sets would be of infinite order. I feel there is more to this though.
thanks
The non-negative rationals under multiplication are not a group...the POSITIVE rationals are, though.
As for your solution it is correct: if $0<q\in\mathbb{Q}$, then $\{q^n\}_{n=1}^{\infty}$ is an infinite set...
Tonio
3. Originally Posted by tonio
The non-negative rationals under multiplication are not a group...the POSITIVE rationals are, though.
As for your solution it is correct: if $0<q\in\mathbb{Q}$, then $\{q^n\}_{n=1}^{\infty}$ is an infinite set...
Tonio
Is it supposed to be $1\ne q\in\mathbb{Q}$ instead of $0<q\in\mathbb{Q}$?
Edit: Meh I was thinking of $\,\mathbb{Q}$ as the positive rationals, should have rewritten the set or instead restricted $q\not\in\{-1,0,1\}$.
4. Originally Posted by tonio
The non-negative rationals under multiplication are not a group...the POSITIVE rationals are, though.
As for your solution it is correct: if $0<q\in\mathbb{Q}$, then $\{q^n\}_{n=1}^{\infty}$ is an infinite set...
Tonio
Oh, my bad. I don't know why I wrote that.
Should it be for all $q\in{\mathbb{Q}}-\{0\}$ and $q\neq e$?
Thanks for the help
5. Originally Posted by Danneedshelp
Oh, my bad. I don't know why I wrote that.
Should it be for all $q\in{\mathbb{Q}}-\{0\}$ and $q\neq e$?
Thanks for the help
Yes, but it is also true for the positive rationals only (without 1) | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 11, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9484294056892395, "perplexity_flag": "middle"} |
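A tiny illustration of the argument in exact rational arithmetic (the choice q = 2/3 is arbitrary): the powers of any positive rational other than 1 are pairwise distinct, so no nontrivial subgroup can be finite.

```
from fractions import Fraction

q = Fraction(2, 3)                       # any positive rational != 1
powers = {q ** n for n in range(1, 50)}
print(len(powers))                       # 49: all the powers are distinct
```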
http://mathematica.stackexchange.com/questions/40/is-it-possible-to-invoke-the-oeis-from-mathematica?answertab=active | # Is it possible to invoke the OEIS from Mathematica?
I had always wondered if there might be a way to write a function, which I'll call `OEISData[]`, that more or less works as a curated data function for The On-Line Encyclopedia of Integer Sequences.
I would imagine that the usage might be a little something like this:
````OEISData["A004001"][9]
5
OEISData["A003418"][Range[8, 15]]
840, 2520, 2520, 27720, 27720, 360360, 360360, 360360
OEISData["A005849", "Keywords"]
{"hard", "nonn", "nice", "more"}
````
An API or something to retrieve data from the OEIS site might be needed for an implementation of this function. Is a function like this possible, with what Mathematica is currently capable of?
-
– Simon Jan 17 '12 at 23:06
...I wasn't specifically asking for a Wolfram Alpha solution... – J. M.♦ Jan 21 '12 at 6:03
– Simon Jan 24 '12 at 1:37
## 3 Answers
There is a Mathematica package exactly for this at the OEIS wiki.
Somewhat related: there's also a package for formatting data into the OEIS format.
WolframAlpha also has some of this information, though I'm not sure how to get the $n^{\mathrm{th}}$ term of the sequence.
````In[1]:= WolframAlpha["A004001", {{"TermsPod:IntegerSequence", 1}, "ComputableData"}]
Out[1]= {1, 1, 2, 2, 3, 4, 4, 4, 5, 6, 7, 7, 8, 8, 8, 8, 9, 10, 11,
12, 12, 13, 14, 14, 15}
````
-
I was aware of the second one by Eric Weisstein, but not aware of the first. Thanks! – J. M.♦ Jan 18 '12 at 1:00
I liked Szabolcs' answer but would like to remind about free-form input here. We get so much information using it for very little typing, plus we get the result in M.'s native format. For those who do not know this yet: at the beginning of a new input line press the equal sign "=" twice to get the orange spikey, and then type in free form. In this case you see the result below. This is NOT a web browser but an M. notebook. Of course you can get the same on the W|A website, but additionally here you can get the data. For example, go to the "Sequence terms" pod and click "more" to get a few more terms. Then press the little plus sign in the top right corner and choose "computable data" from the menu. This pastes into the M. notebook what you see here at the lower part of the image. And this also partially answers Szabolcs' question about more terms ;-) This is also a good way to learn the tricks of the WolframAlpha[] function.
-
A disadvantage of this is that it only works over the internet, not locally. – celtschk Jan 18 '12 at 11:39
@celtschk: Sure, but I was assuming that one needed to be connected to the Internet anyway to access stuff from the OEIS... – J. M.♦ Jan 18 '12 at 11:43
Ah, right, how could I miss that. Unless you are running on the OEIS hosting site, of course. :-) – celtschk Jan 18 '12 at 11:53
A bit of a hack, could do with some polishing, but the basic idea will work:
````OEISData[str_] :=
StringSplit[#, ","] & /@
Select[StringSplit[Import["http://oeis.org/search?q=" <> str]],
   StringMatchQ[#, __ ~~ ","] &]; (* keep the whitespace-separated tokens ending in a comma, i.e. the runs of sequence terms, then split them on commas *)
OEISData["A004001"][[9]]
````
Edit: Come to think of it, if you just want the numbers, it could be even easier to just import from http://oeis.org/A004001/list
-
Nice piece of work – niklasfi Jan 17 '12 at 23:22
lang-mma | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.872684121131897, "perplexity_flag": "middle"} |
http://mathhelpforum.com/geometry/105216-finding-measure-angles.html | # Thread:
1. ## Finding the measure of angles...
For an acute-angled triangle, all its angles have integral degree measures. The smallest angle has a measure 1/5th the measure of the largest. Find all the angle measures of the triangle.
2. Hello anshulbshah
Originally Posted by anshulbshah
For an acute-angled triangle, all its angles have integral degree measures. The smallest angle has a measure 1/5th the measure of the largest. Find all the angle measures of the triangle.
You can't solve problems like this by simple algebra - you have to use a little bit of 'trial and error'.
First, let's suppose the smallest angle is $x^o$. Then the largest angle is $5x$. Since the triangle is acute-angled, $5x < 90\Rightarrow 5x \le 85$, since $x$ must be an integer, and $5x$ therefore a multiple of $5$. This gives us that $0<x\le 17$.
Next suppose that the third (middle-sized) angle of the triangle is $y^o$. Using the angle-sum of a triangle, $y = 180 - 6x$. The smallest possible value of $y$ is when $x$ is as large as possible; i.e. $x = 17\Rightarrow y = 180 - 102 = 78$, and at this value the largest angle is $5x=85^o$.
If we make $x$ a bit smaller, say $x = 16$, then $y = 180 - 96 = 84$. But in this case $5x = 80$, which is smaller than $y$ - contradicting the assumption that $5x$ is the largest angle. And if $x \le 15$, then $y = 180 - 6x \ge 90$, so the triangle is not acute-angled.
So the only solution is $17^o,78^o,85^o$.
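Here, for completeness, is a brute-force check over all integer angle triples (a sketch):

```
solutions = [(a, b, c)
             for a in range(1, 90)           # acute: every angle < 90
             for b in range(a, 90)
             for c in range(b, 90)
             if a + b + c == 180 and c == 5 * a]
print(solutions)                             # [(17, 78, 85)]
```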
Grandad | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 19, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9282146096229553, "perplexity_flag": "head"} |
http://tcsmath.wordpress.com/2009/06/ | # tcs math – some mathematics of theoretical computer science
## June 19, 2009
### Lecture P1. From Integrality Gaps to Dictatorship Tests
Filed under: CSE 599S — Tags: discrete harmonic analysis, hardness of approximation, invariance principle — James Lee @ 2:25 pm
Here are Prasad Raghavendra‘s notes on one of two guest lectures he gave for CSE 599S. Prasad will be a faculty member at Georgia Tech after he finishes his postdoc at MSR New England.
In yet another excursion into applications of discrete harmonic analysis, we will now see an application of the invariance principle to hardness of approximation. We will complement this discussion with the proof of the invariance principle in the next lecture.
1. Invariance Principle
In its simplest form, the central limit theorem asserts the following: "As ${n}$ increases, the sum of ${n}$ independent Bernoulli random variables (${\pm 1}$ random variables) has approximately the same distribution as the sum of ${n}$ independent normal (Gaussian with mean ${0}$ and variance ${1}$) random variables."
Alternatively, as ${n}$ increases, the value of the polynomial ${F(\mathbf x) = \frac{1}{\sqrt{n}} (x^{(1)} + x^{(2)} + \ldots + x^{(n)})}$ has approximately the same distribution whether the random variables ${x^{(i)}}$ are i.i.d. Bernoulli random variables or i.i.d. normal random variables. More generally, the distribution of ${p(\mathbf x)}$ is approximately the same as long as the random variables ${x^{(i)}}$ are independent with mean ${0}$, variance ${1}$ and satisfy certain mild regularity assumptions.
A phenomenon of this nature where the distribution of a function of random variables, depends solely on a small number of their moments is referred to as invariance.
A natural approach to generalize of the above stated central limit theorem, is to replace the “sum” (${\frac{1}{\sqrt{n}} (x^{(1)} + x^{(2)} + \ldots x^{(n)})}$) by other multivariate polynomials. As we will see in the next lecture, the invariance principle indeed holds for low degree multilinear polynomials that are not influenced heavily by any single coordinate. Formally, define the influence of a coordinate on a polynomial as follows:
Definition 1 For a multilinear polynomial ${F(\mathbf{x}) = \sum_{\sigma} \hat{F}_{\sigma} \prod_{i \in \sigma} x^{(i)}}$ define the influence of the ${i^{th}}$ coordinate as follows:
${ \mathrm{Inf}_{i}(F) = \mathop{\mathbb E}_{x} [\mathrm{Var}_{x^{(i)}}[F]] = \sum_{\sigma \ni i} \hat{F}^2_\sigma}$
Here ${\mathrm{Var}_{x^{(i)}}[F]}$ denotes the variance of ${F(x)}$ over random choice of ${x_i}$.
The invariance principle for low degree polynomials was first shown by Rotar in 1979. More recently, invariance principles for low degree polynomials were shown in different settings in the work of Mossel-O'Donnell-Oleszkiewicz and Chatterjee. The former of the two works also showed the Majority is Stablest conjecture, and has been influential in introducing the powerful tool of invariance to hardness of approximation.
Here we state a special case of the invariance principle tailored to the application at hand. To this end, let us first define a rounding function ${f_{[-1,1]}}$ as follows:
$\displaystyle f_{[-1,1]}(x) = \begin{cases} -1 &\text{ if } x < -1 \\ x &\text{ if } -1\leqslant x\leqslant 1 \\ 1 &\text{ if } x > 1 \end{cases}$
Theorem 2 (Invariance Principle [Mossel-ODonnell-Oleszkiewicz]) Let ${\mathbf{z} = \{z_1, z_2\}}$ and ${\mathbf{G} = \{g_1,g_2\}}$ be sets of Bernoulli and Gaussian random variables respectively. Furthermore, let
$\displaystyle \mathop{\mathbb E}[z_i] = \mathop{\mathbb E}[g_i] = 0 \qquad \qquad \mathop{\mathbb E}[z_i^2] = \mathop{\mathbb E}[g_i^2] = 1 \qquad \qquad \forall i \in [2]$
and ${\mathop{\mathbb E}[z_1 z_2] = \mathop{\mathbb E}[g_1 g_2]}$. Let ${\mathbf{z}^R, \mathbf{G}^R}$ denote ${R}$ independent copies of the random variables ${\mathbf{z}}$ and ${\mathbf{G}}$.
Let ${F}$ be a multilinear polynomial given by ${F(\mathbf{x}) = \sum_{\sigma} \hat{F}_{\sigma} \prod_{i \in \sigma} x^{(i)}}$, and let ${H(\mathbf{x})=T_{1-\epsilon} F(\mathbf{x}) = \sum_{\sigma}(1-\epsilon)^{|\sigma|} \hat{F}_{\sigma} \prod_{i\in \sigma} x^{(i)}}$. If ${\mathrm{Inf}_\ell(H) \leqslant \tau}$ for all ${\ell \in [R]}$, then the following statements hold:
1. For every function ${\Psi : \mathbb{R}^2 \rightarrow \mathbb{R}}$ which is thrice differentiable with all its partial derivatives up to order ${3}$ bounded uniformly by ${C_0}$,
$\displaystyle \Big|\mathop{\mathbb E}\Big[\Psi(H(\mathbf{z}_1^{R}), H(\mathbf{z}_2^{R}))\Big] - \mathop{\mathbb E}\Big[\Psi(H(\mathbf{g}_1^R), H(\mathbf{g}_2^R))\Big] \Big| \leqslant \tau^{O(\epsilon)}$
2. Define the function ${\xi : {\mathbb R}^2 \rightarrow {\mathbb R}}$ as ${\xi(\mathbf{x}) = \sum_{i\in [2]} (x_i - f_{[-1,1]}(x_i))^2}$ Then, we have ${ \Big|\mathop{\mathbb E}[\xi(H(\mathbf{z}_1^{n}), H(\mathbf{z}_2^{n}))] - \mathop{\mathbb E}[\xi(H(\mathbf{g}_1^{n}),H(\mathbf{g}_2^{n}))] \Big| \leqslant \tau^{O(\epsilon)} }$
2. Dictatorship Testing
A boolean function ${\mathcal{F}(x_1,x_2,\ldots,x_n)}$ on ${n}$ bits is a dictator or a long code if ${\mathcal{F}(x) = x_i}$ for some ${i}$. Given the truth table of a function ${\mathcal{F}}$, a dictatorship test is a randomized procedure that queries a few locations (say ${2}$) in the truth table of ${\mathcal{F}}$, and tests a predicate ${P}$ on the values it queried. If the queried values satisfy the predicate ${P}$, the test outputs ACCEPT else it outputs REJECT. The goal of the test is to determine if ${\mathcal{F}}$ is a dictator or is far from being a dictator. The main parameters of interest in a dictatorship test are :
• Completeness${(c)}$ Every dictator function ${\mathcal{F}(x) = x_i}$ is accepted with probability at least ${c}$.
• Soundness${(s)}$ Any function ${\mathcal{F}}$ which is far from every dictator is accepted with probability at most ${s}$.
• Number of queries made, and the predicate ${P}$ used by the test.
2.1. Motivation
The motivation for the problem of dictatorship testing arises from hardness of approximation and PCP constructions. To show that an optimization problem ${\Lambda}$ is ${\mathbf{NP}}$-hard to approximate, one constructs a reduction from a well-known ${\mathbf{NP}}$-hard problem such as Label Cover to ${\Lambda}$. Given an instance ${\Im}$ of the Label Cover problem, a hardness reduction produces an instance ${\Im'}$ of the problem ${\Lambda}$. The instance ${\Im'}$ has a large optimum value if and only if ${\Im}$ has a high optimum. Dictatorship tests serve as “gadgets” that encode solutions to the Label Cover, as solutions to the problem ${\Lambda}$. In fact, constructing an appropriate dictatorship test almost always translates in to a corresponding hardness result based on the Unique Games Conjecture.
Dictatorship tests or long code tests as they are also referred to, were originally conceived purely from the insight of error correcting codes. Let us suppose we are to encode a message that could take one of ${R}$ different values ${\{m_1,\ldots,m_{R}\}}$. The long code encoding of the message ${m_\ell}$ is a bit string of length ${2^{R}}$, consisting of the truth table of the function ${\mathcal{F}(x_1,\ldots,x_R) = x_\ell}$. This encoding is maximally redundant in that any binary encoding with more than ${2^{R}}$ bits would contain ${2}$ bits that are identical for all $R$ messages. Intuitively, greater the redundancy in the encoding, the easier it is to perform the reduction.
While long code tests/dictatorship tests were originally conceived from a coding theoretic perspective, somewhat surprisingly these objects are intimately connected to semidefinite programs. This connection between semidefinite programs and dictatorship tests is the subject of today’s lecture. In particular, we will see a black-box reduction from SDP integrality gaps for Max Cut to dictatorship tests that can be used to show hardness of approximating Max Cut.
2.2. The case of Max Cut
The nature of dictatorship test needed for a hardness reduction varies with the specific problem one is trying to show is hard. To keep things concrete and simple, we will restrict our attention to the Max Cut problem.
A dictatorship test ${\mathsf{DICT}}$ for the Max Cut problem consists of a graph on the set of vertices ${\{\pm 1\}^{R}}$. By convention, the graph ${\mathsf{DICT}}$ is a weighted graph where the edge weights form a probability distribution (sum up to ${1}$). We will write ${(\mathbf{z},\mathbf{z}') \in \mathsf{DICT}}$ to denote an edge sampled from the graph ${\mathsf{DICT}}$ (here ${\mathbf{z},\mathbf{z}' \in \{\pm 1\}^{R}}$).
A cut of the ${\mathsf{DICT}}$ graph can be thought of as a boolean function ${\mathcal{F} : \{\pm 1\}^R \rightarrow \{\pm 1\}}$. For a boolean function ${\mathcal{F} : \{\pm 1\}^R \rightarrow \{\pm 1\}}$, let ${\mathsf{DICT}(\mathcal{F})}$ denote the value of the cut. The value of a cut ${\mathcal{F}}$ is given by
$\displaystyle \mathsf{DICT}(\mathcal{F}) = \frac{1}{2}\mathop{\mathbb E}_{ (\mathbf{z}, \mathbf{z}')} \Big[ 1 - \mathcal{F}(\mathbf{z}) \mathcal{F}(\mathbf{z}') \Big]$
It is useful to define ${\mathsf{DICT}(\mathcal{F})}$, for non-boolean functions ${\mathcal{F}: \{\pm 1\}^R \rightarrow [-1,1]}$ that take values in the interval ${[-1,1]}$. To this end, we will interpret a value ${\mathcal{F}(\mathbf{z}) \in [-1,1]}$ as a random variable that takes ${\{\pm 1\}}$ values. Specifically, we think of a number ${a \in [-1,1]}$ as the following random variable
$a = -1$ with probability $\frac{1-a}{2}$, and $a = +1$ with probability $\frac{1+a}{2}$.
With this interpretation, the natural definition of ${\mathsf{DICT}(\mathcal{F})}$ is as follows:
$\displaystyle \mathsf{DICT}(\mathcal{F}) = \frac{1}{2}\mathop{\mathbb E}_{ (\mathbf{z}, \mathbf{z}') \in \mathsf{DICT}} \Big[ 1 - \mathcal{F}(\mathbf{z}) \mathcal{F}(\mathbf{z}') \Big] {\,.}$
Indeed, the above expression is equal to the expected value of the cut obtained by randomly rounding the values of the function ${\mathcal{F} : \{\pm 1\}^{R} \rightarrow [-1,1]}$ to ${\{\pm 1\}}$ as described above.
A Dictator Cut
The dictator cuts are given by the functions ${\mathcal{F}(\mathbf{z}) = z_\ell}$ for some ${\ell \in [R]}$. The ${\mathsf{Completeness}}$ of the test ${\mathsf{DICT}}$ is the minimum value of a dictator cut, i.e.,
$\displaystyle \mathsf{Completeness}(\mathsf{DICT}) = \min_{\ell \in [R]} \mathsf{DICT}(z_{\ell})$
The soundness of the dictatorship test is the value of cuts that are far from every dictator. The notion of being far from every dictator is formalized using influences as follows:
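To make this concrete, here is a small simulation sketch. I use the noisy-antipodal graph, which I am assuming as the test distribution here: pick ${\mathbf{z}}$ uniformly in ${\{\pm 1\}^R}$ and set ${z'_i = -z_i}$ with probability ${1-\epsilon}$, ${z'_i = z_i}$ otherwise, independently. The values of ${R}$, ${\epsilon}$ and the trial count are arbitrary. Dictators cut roughly a ${1-\epsilon}$ fraction of the edges, while majority scores visibly less:

```
import numpy as np

rng = np.random.default_rng(0)
R, eps, trials = 101, 0.1, 50_000

z = rng.choice([-1, 1], size=(trials, R))
flip = rng.random((trials, R)) < 1 - eps      # z'_i = -z_i w.p. 1 - eps
zp = np.where(flip, -z, z)

cuts = {"dictator": lambda x: x[:, 0],
        "majority": lambda x: np.sign(x.sum(axis=1))}   # R odd, so never 0

for name, F in cuts.items():
    value = 0.5 * (1 - F(z) * F(zp)).mean()
    print(name, value)                        # ~0.90 vs ~0.79
```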
Cut Far From Every Dictator
(more…)
Theme: Shocking Blue Green. Blog at WordPress.com. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 107, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9026095867156982, "perplexity_flag": "head"} |
http://math.stackexchange.com/questions/33453/complex-analysis-differentiablity-continuity-analyticity | # Complex analysis: differentiablity/continuity/analyticity
Suppose I have a real-valued function f defined on a complex open connected set D. Am I right in saying that: f is analytic on D if and only if f is n times continuously differentiable on D?
Or does this only apply if f is complex valued?
thanks
-
What do you mean by $n$? – wildildildlife Apr 17 '11 at 11:54
Sorry, n is just some natural number > 0 – A.A Apr 17 '11 at 12:39
1
What do you mean by analytic? What do you mean by differentiable? There is a big difference between considering complex differentiability versus real differentiability; the former is much more rigid than the latter. (I seem to remember recently a thread about this, but I can't find it at the moment) – Willie Wong♦ Apr 17 '11 at 12:39
Hmm. Well, I have no control over D; it is either a real open interval or a complex open connected set. I require f(c), ..., f^(n)(c) to exist for certain values of c which are inside D and for the derivatives to be continuous inside D. So instead of saying if D was real that f should be n-times continuously differentiable on D, and saying if D was complex that f should be analytic on D, I thought I'd just say f should be continuously differentiable on D. So I guess by analytic, cts. and differentiable? For differentiabilty, I just need it to exist.. I am not too sure about the differences :| – A.A Apr 17 '11 at 13:54
Complex differentiable functions are much different from real differentiable functions. Whether $D$ is an open interval in the real numbers or it is an open subset of $\mathbb{C}$ makes a huge difference. On the interval, the only notion of differentiability that makes sense is the real one, and it is much much weaker than analyticity. In fact even by having infinitely many derivatives you cannot guarantee analyticity. On a domain in the complex plane, if you consider it as a domain in $\mathbb{R}^2$, you have the – Willie Wong♦ Apr 17 '11 at 14:51
## 1 Answer
Let $D$ be a region in $\mathbb{C}$. Then $f : D \to \mathbb{R}$ is analytic on $D$ if and only if $f$ is constant. This is easily verified if you consider the Cauchy-Riemann equation.
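Spelled out: write $f = u + iv$ with $v \equiv 0$ on $D$. The Cauchy-Riemann equations then force $$u_x = v_y = 0, \qquad u_y = -v_x = 0,$$ so both partial derivatives of $u$ vanish and $u$ is constant on the connected set $D$.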
If we consider a complex-valued function $f : D \to \mathbb{C}$ instead, we obtain the theorem that epitomizes the true nature of complex analysis: $f$ is analytic if and only if $f$ is complex differentiable on $D$. No further assumption on the differentiability of $f$, including the $C^1$ condition, is needed. Notice that this drastically contrasts with the situation in the real case, where $C^0 (\Omega) \supsetneq C^1 (\Omega) \supsetneq C^2 (\Omega) \supsetneq \cdots \supsetneq C^{\infty}(\Omega) \supsetneq C^{\omega}(\Omega)$.
-
Your first statement is not true (strictly speaking). What you meant to write is "$f:D\to\mathbb{C}$ is a complex analytic function that happens to be purely real if and only if..." If you consider real analytic functions instead, there are plenty of examples. – Willie Wong♦ Apr 17 '11 at 12:28
@Willie Wong - You're right, if we consider $f : D \subset \mathbb{R}^2 \to \mathbb{R}$. A polynomial function with two variables will be a good counterexample. But this interpretation seems unnatural to me in this case, since A.A is considering 'analyticity on a subset of $\mathbb{C}$'. – sos440 Apr 17 '11 at 13:05
Thank you all for the help. – A.A Apr 17 '11 at 13:51 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 19, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9427577257156372, "perplexity_flag": "head"} |
http://math.stackexchange.com/questions/139584/whats-the-most-straight-forward-way-to-prove-walrass-law?answertab=active | # What's the most straight-forward way to prove Walras's Law?
Walras' Law states that $\sum_i p_i E_i(p) = 0$ for all $p$. We define $E_i(p) = x_i(p) - q_i(p) - R_i$. What are the next steps that I should take?
-
I don't quite know what your notation is. The proof starts by asserting LNS preferences and claiming Walras' law: $\forall p,w$ and $x \in x(p,w)$, $p\cdot x=w$. The proof is almost always handled by contradiction. You can see most any micro textbook for the full proof. A good start would be to define your assumptions (LNS?) and the various functions you've specified (you'd have to do that for a proper proof anyway.) – Jason B Nov 14 '11 at 4:30
@Jason B - what's LNS? – Beatrice Nov 15 '11 at 17:59
Local non-satiation. It's the claim that, for any point $x$ and any number $\epsilon>0$, there exists a $x'$ in the $\epsilon$-neighbourhood of $x$ such that $x'$ is strictly preferred to $x$. – Zermelo Nov 15 '11 at 18:59
thanks @user68! – Beatrice Nov 15 '11 at 21:56
@Patience the most straightforward proof of Walras' Law requires one to assume LNS preferences and little more (it is implicit in Zermelo's answer). – Jason B Nov 16 '11 at 3:46
## 1 Answer
Let $i$ denote an agent; $j$ denote the good.
Walras' law: $p \cdot e(p) = 0$ for all $p$.
For each agent, local non-satiation implies the budget constraint holds with equality: $\sum_{j} p_j x_{ij}=\sum_{j} p_j w_{ij}$, where $w_{ij}$ is $i$'s endowment of good $j$ and $x_{ij}$ is $i$'s consumption of good $j$.
In other words, $\sum_{j} p_{j} e_{ij}=0$, where $e_{ij}=x_{ij}-w_{ij}$.
Now just add over all agents $i$. You get $\sum_{j}p_j e_j=0$, where $e_j=\sum_i e_{ij}$ for each $j$. This is Walras' Law. Note that this applies to ALL $p$ - regardless of whether it's the equilibrium price.
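As a numerical sanity check, here is a sketch of a toy exchange economy with Cobb-Douglas consumers (these satisfy LNS and spend all wealth); every parameter below is an arbitrary choice, and the price vector is deliberately not an equilibrium price:

```
import numpy as np

rng = np.random.default_rng(0)
agents, goods = 5, 3

w = rng.uniform(0.5, 2.0, size=(agents, goods))       # endowments w_ij
alpha = rng.dirichlet(np.ones(goods), size=agents)    # Cobb-Douglas weights
p = rng.uniform(0.5, 2.0, size=goods)                 # any price vector

wealth = w @ p
x = alpha * wealth[:, None] / p                       # demand x_ij = a_ij * wealth_i / p_j
e = (x - w).sum(axis=0)                               # aggregate excess demand e_j
print(p @ e)                                          # ~0 up to floating-point error
```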
Use the dollar sign ($) for LaTeX: writing `$p_j x_{ij}$` produces $p_j x_{ij}$. – BlackJack Nov 15 '11 at 0:33
by the way, for your specific question (where there is production too) replace $w_j$ for every good $j$ by $w_j+R_j$ – Zermelo Nov 15 '11 at 5:27
http://mathoverflow.net/questions/11044?sort=newest | ## What is the probability that 4 points determine a hemisphere ?
Given 4 points (not all on the same plane), what is the probability that a hemisphere exists that passes through all four of them?
-
Related thread: mathoverflow.net/questions/2014/… – Yoo Jan 7 2010 at 18:30
I may be being slow here (and the fact that no one else is asking suggests that I am), but... in order for the question to make sense, don't you need to specify what probability distribution you're using? I.e. when you say "at random", what do you mean? I guess you mean the points to be in R^3, so you need a probability distribution on (R^3)^4 = R^12. Even if the points are chosen independently and according to the same distribution, don't you need to give us at least a distribution on R^3? – Tom Leinster Jan 7 2010 at 23:03
Hmm, the answers of Michael and Dmitri (and the paper of Wendell that Michael links to) seem to assume that the points all lie on S^2, presumably uniformly distributed. They seem to have magically guessed what you meant. – Tom Leinster Jan 7 2010 at 23:06
Tom, if you decide indeed that all the points are on the fixed sphere (as we have done), then it is clear from my answer, that you can take any measure on S^n, invariant under the central symmetry. You just need to ask that the measure has no atoms. This need not be a measure induced by the standard metric. – Dmitri Jan 8 2010 at 0:47
a bit of my thought process: "hemisphere" implied that there was a sphere, and I generally default to a uniform distribution when possible. – Michael Lugo Jan 8 2010 at 2:01
## 3 Answers
See J. G. Wendell, "A problem in geometric probability", Math. Scand. 11 (1962) 109-111. Available online here. The probability that $N$ random points lie in some hemisphere of the unit sphere in $n$-space is
$$p_{n,N} = 2^{-N+1} \sum_{k=0}^{n-1} {N-1 \choose k}$$
and in particular you want
$$p_{3,4} = 2^{-3} \sum_{k=0}^2 {3 \choose k} = {7 \over 8}$$.
A second solution: A solution from The Annals of Mathematics, 2 (1886) 133-143 (available from jstor), specific to the (3,4) case, is as follows. First take three points at random, A, B, C; they are all in the same hemisphere and form a spherical triangle. Find the antipodal points to those three, A', B', C'. Now either the fourth point is in the same hemisphere as the first three or it is in the triangle A'B'C'. The average area of this triangle is one-eighth the surface of the sphere.
This gets the right answer, but I'm not sure how I feel about it; why is the average area one-eighth the surface of the sphere? One can guess this from the fact that three great circles divide a sphere into eight spherical triangles, but that's hardly a proof. Generally this solution seems to assume more facility with spherical geometry than is common nowadays.
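As a sanity check on Wendell's formula (an editorial sketch, not part of either solution; it assumes the four points are independent and uniform on $S^2$), one can estimate $p_{3,4}$ by Monte Carlo. The test below uses the fact that four generic points on the sphere fail to lie in a common closed hemisphere exactly when the origin is interior to their convex hull.

```python
# Monte Carlo sketch checking Wendell's p_{3,4} = 7/8.
import numpy as np
rng = np.random.default_rng(0)

def in_common_hemisphere(pts):
    # For 4 generic points on S^2: no common closed hemisphere <=> the origin
    # is interior to their convex hull. Solve 0 = sum l_i x_i, sum l_i = 1,
    # and test whether all l_i > 0.
    A = np.vstack([pts.T, np.ones(4)])              # 4x4 system
    lam = np.linalg.solve(A, np.array([0.0, 0.0, 0.0, 1.0]))
    return not np.all(lam > 0)

trials = 200_000
hits = 0
for _ in range(trials):
    pts = rng.normal(size=(4, 3))
    pts /= np.linalg.norm(pts, axis=1, keepdims=True)   # uniform on the sphere
    hits += in_common_hemisphere(pts)
print(hits / trials, 7 / 8)
```

With a few hundred thousand trials the estimate settles near 0.875, matching $7/8$.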
-
Nice... am still trying to understand the solution. I expected that the answer would be simpler than that! To think this was asked as a question at the interview for an app-developer post. – sanz Jan 7 2010 at 17:03
I seem to recall seeing a simpler solution somewhere in the specific case you asked about. – Michael Lugo Jan 7 2010 at 17:05
If my mental picture is right, the fourth point will be inside triangle A'B'C' one eighth of the time because we may replace any subset of {A, B, C} with their antipodes, and the eight triangles formed in this way cover the sphere. – Reid Barton Jan 7 2010 at 17:46
Kind of nice to combine the two solutions for the following puzzle: Choose $n$ points randomly. Suppose they belong to the same hemisphere. What is the (conditional) expected area of their spherical convex hull? – t3suji Jan 7 2010 at 18:05
Computing expected areas of convex hulls is tricky enough that I don't think I'd call that a "puzzle". – Michael Lugo Jan 7 2010 at 18:06
The probability is 7/8ths.
Consider throwing 3 darts at a sphere. On average the darts will land with one on each of the ends of the Cartesian coordinate axes, i.e. (0,0,1), (0,1,0), (1,0,0). Or: on average the surface area of the spherical triangle made using the three darts as vertices will be 1/8th of the sphere's surface area.
This is easy to verify. The position where a dart lands can be described using 2 coordinates and the equation for a sphere in Cartesian coordinates. Randomly choose an x value between -1 and 1, randomly choose a y value between -1 and 1, and the equation for the sphere will give you the z value.
On average x will be 0, y will be 0 and z will be 1.
Do this using the others and you can see why the triangle will on average have a surface area of 1/8th of the sphere's surface area.
Now consider placing a great circle through any 2 points. There are 3 ways this can be done. On average these 3 great circles will make up the x, y and z planes. Or, better: on average these three great circles will lie on orthogonal planes.
So there are 8 octants to choose from; 7 of these octants can be included in the same hemisphere as the 3 points by choosing different great circles. So only if the 4th dart lands in the 8th octant do we not have a great circle that can be used to split the sphere into 2 hemispheres encompassing all 4 darts.
The 8th octant will be the antipodal spherical triangle of the average positions of the darts: that is, draw lines through the darts and through the centre of the sphere, and make another spherical triangle using the intersections of the aforementioned lines with the opposite side of the sphere.
Think about it: that is the only octant that cannot be encompassed using any of the three great circles.
And again: throw 3 darts; 3 great circles can be drawn, and on average the planes these three great circles lie on will be orthogonal. The intersection of these three planes creates 8 octants. There is a spherical triangle drawn around the three darts. If and only if the fourth dart lands in the antipodal spherical triangle of the first three darts will it not be in the same hemisphere.
-
We choose n points on $S^{n-2}$ and want to show that the probability for them to be in one half-sphere is $1-2^{1-n}$. A simple way to solve this question is to notice that up to a linear transformation there exists a unique collection of $n$ generic lines in $R^{n-1}$ through 0. This reduces the problem to a combinatorial one. End of solution.
Here are details. Namely, instead of choosing points on the whole sphere, it is sufficient to choose these points among the $2n$ points of intersection of the sphere with generic lines $L_1,\ldots,L_n$. We just need to choose one point on each line. We call these $2n$ points $P_1, -P_1,\ldots,P_n, -P_n$.
Lemma. For generic $L_i$ there will be only two choices of $n$ points $\pm P_i$ such that the obtained simplex is not contained in a half-sphere.
Proof for n=4. It is sufficient to check this statement for the vertices of the regular cube. Indeed, for 4 generic lines in $R^3$ there is a linear transformation that takes these lines to the axes of the cube.
"Proof" for any n. For n generic lines in $R^{n-1}$ it is alway possible to send them to the lines generated by vectors $1,0,...,0$,... $0,0,...,1$ and $1,1,...,1$. It is sufficient to check the lemma for 2n points repersenting intersections of these lines with $S^{n-2}$.
From this lemma we get the answer. Number of all choices of $n$ points is $2^n$, two choices are bad, so we get $(2^n-2)2^{-n}$.
-
http://mathoverflow.net/questions/96760?sort=newest | Schemes associated to vector spaces
Let $k$ be a field. Let $F$ be a covariant functor from the category of $k$-algebras to the category of sets. Assume that the opposite functor $F^{op}$ on the category of affine $k$-schemes is a sheaf. (This last assumption might not be necessary.)
Now, suppose that $F(R)$ is a finitely generated $R$-module for all $k$-algebras $R$ and that
$F(R) = F(k)\otimes_k R$.
It seems that in this case $F$ is "representable" by the vector space $$F(k).$$
I don't know what this means and I'm probably missing some very standard construction here. It should mean that $F^{op}$ is representable by a finite type $k$-scheme.
I know one can associate to a finite-dimensional vector space $E$ over $k$ the affine variety $\mathrm{Spec} \ \mathrm{Sym}(E).$ Is it clear that in this case the functor $F$ is representable by the affine variety $\mathrm{Spec} \ \mathrm{Sym}(E)$?
-
1 Answer
Since $F(k) \cong k^n$ for some $n$, we have $F(R) \cong R^n$, naturally in $R$. But $R^n \cong (\mathbb{A}^n)(R)$, so that $F$ is just the scheme $\mathbb{A}^n$. This is also isomorphic to $\mathrm{Spec}(\mathrm{Sym}(F(k)))$.
EDIT: More generally and coordinate-free: When $S$ is some base scheme and $\mathcal{E}$ is some quasi-coherent module over $S$, then $\mathbb{V}(\mathcal{E}^*) := \mathrm{Spec}(\mathrm{Sym}(\mathcal{E}))$ represents the contravariant functor which sends an $S$-scheme $p : T \to S$ to the set of $\mathcal{O}_S$-homomorphisms $\mathcal{E} \to p_* \mathcal{O}_T$. When $S=\mathrm{Spec}(k)$ is affine, this is the set of $k$-module homomorphisms $\Gamma(S,\mathcal{E}) \to \Gamma(T,\mathcal{O}_T)$. For example, $\mathbb{V}(\mathcal{O}_S^n) = \mathbb{A}^n_S$.
Now if $F$ is as in your question, we see that $F$ represents the same functor as $\mathbb{V}(F(k))$.
-
3
I think normally we take Spec(Sym(E*)) (E* means E dual). That convention will make morphisms from $\mathrm{Spec}\,k$ to this scheme the same as elements of E, if E is a vector space. – 36min May 12 2012 at 16:02
Right, I've fixed it. – Martin Brandenburg May 12 2012 at 17:45
http://mathoverflow.net/questions/84074/undecidable-sentences-of-first-order-arithmetic-whose-truth-values-are-unknown | ## undecidable sentences of first-order arithmetic whose truth values are unknown
Gödel's undecidable sentences in first-order arithmetic were guaranteed to be true, by construction. But are there examples of specific sentences known to be undecidable in first-order arithmetic whose truth values aren't known? I'm thinking, by contrast, of the situation in set theory: CH is undecidable in ZFC, but its truth value is, in some sense, unknown.
Paris and Harrington showed the strengthened finite Ramsey theorem is true (in the sense of provable in second-order arithmetic) but undecidable in first-order arithmetic. I'm asking for "natural" examples in this general vein -- but whose truth values haven't yet been settled.
EDIT. Let me clarify my interest in the question, which is more philosophical than mathematical. I asked it on the basis of the following passage in Peter Koellner's paper "On the Question of Absolute Undecidability":
The above statements of analysis [i.e. all projective sets of reals are Lebesgue measurable] and set theory [i.e. CH] differ from the early arithmetical instances of incompleteness in that their independence does not imply their truth. Moreover, it is not immediately clear whether they are settled at any level of the hierarchy. They are much more serious cases of independence.
What I'm asking is whether there are "much more serious cases" of independence even in first-order arithmetic -- and not in the trivial case of full-on ZFC, like V=L, etc. By a sentence with "unknown truth value," I just mean a sentence that hasn't been proved in a theory stronger than first-order arithmetic. (For example, Paris and Harrington proved the strengthened finite Ramsey theorem in second-order arithmetic.)
-
I would be interested in something which is not of this sort. Anyway, all of Harvey's examples are true in the standard model. (Under appropriate background assumptions on large cardinals.) – Andres Caicedo Dec 22 2011 at 6:52
I do not know about a specific sentence, but it is worth noticing that it is easy to build a pair of sentences such that at least one of them is of the kind you want. How? Let us consider $\psi$ an undecidable sentence which is true (e.g., $Con(ZFC)$) and let us consider $\varphi$ a sentence whose truth value is unknown (e.g., the Riemann Hypothesis). Then, either the sentence $\psi \land \varphi$ or the sentence $\psi \land \neg \varphi$ is of the kind you are interested in. – boumol Dec 22 2011 at 9:39
@Carl: I think the best scenario would be an arithmetic sentence $\phi$ such that both $PA+\phi$ and $PA+\lnot\phi$ are equiconsistent with $PA$, but for which we do not know whether $\phi$ or $\lnot\phi$ holds in ${\mathbb N}$. I doubt any "natural" such examples are known. – Andres Caicedo Dec 22 2011 at 15:58
François, perhaps this is what Matt means: Say that $X$ is known at time $t$ if and only if someone has, at some time $t'\le t$, exhibited a proof of $X$ from ZFC + some standard large cardinal axiom. Then he seeks an explicit example of arithmetical statement $X$ such that, taking $t$ to be the year 2011, $X$ is not known, but "PA does not prove $X$" is known and "PA does not prove $\neg X$" is known. This seems roughly in line with Koellner's notion of absolute independence, and doesn't trivialize just because arithmetical statements are all either true or false. – Timothy Chow Dec 22 2011 at 17:10
It might also help Matt if someone explains why we run into silly examples if we draw a bright line at (say) ZFC specifically, or if we allow "unnatural" statements. – Timothy Chow Dec 22 2011 at 17:14
## 6 Answers
Update. I've improved the argument to use only the consistency of $T$. (2/7/12): I corrected some over-statements previously made about Robinson's Q.
I claim that for every statement $\varphi$, there is a variant way to express it, $\psi$, which is equivalent to the original statement $\varphi$, but which is formally independent of any particular desired consistent theory $T$.
In particular, if $\varphi$ is your favorite natural open question, whose truth value is unknown, then there is an equivalent formulation of that question which exhibits formal independence in the way you had requested. In this sense, every open question is equivalent to an assertion with the property you have requested. I take this to reveal certain difficult subtleties with your project.
Theorem. Suppose that $\varphi$ is any sentence and $T$ is any consistent theory containing weak arithmetic. Then there is another sentence $\psi$ such that
• $\text{PA}+\text{Con}(T)$ proves that $\varphi$ and $\psi$ are equivalent.
• $T$ does not prove $\psi$.
• $T$ does not prove $\neg\psi$.
Proof. Let $R$ be the Rosser sentence for $T$, the self-referential assertion that for any proof of $R$ in $T$, there is a smaller proof of $\neg R$. The Gödel-Rosser theorem establishes that if $T$ is consistent, then $T$ proves neither $R$ nor $\neg R$. Formalizing the first part of this argument shows that $\text{PA}+\text{Con}(T)$ proves that $R$ is not provable in $T$ and hence that $R$ is vacuously true. Formalizing the second part of this argument shows that $\text{Con}(T)$ implies $\text{Con}(T+R)$, and hence by the incompleteness theorem applied to $T+R$, we deduce that $T+R$ does not prove $\text{Con}(T)$. Thus, $T+R$ is a strictly intermediate theory between $T$ and $T+\text{Con}(T)$.
Now, let $\psi$ be the assertion $R\to (\text{Con}(T)\wedge \varphi)$. Since $\text{PA}+\text{Con}(T)$ proves $R$, it is easy to see by elementary logic that $\text{PA}+\text{Con}(T)$ proves that $\varphi$ and $\psi$ are equivalent.
The statement $\psi$, however, is not provable in $T$, since if it were, then $T+R$ would prove $\text{Con}(T)$, which it does not by our observations above.
Conversely, $\psi$ is not refutable in $T$, since any such refutation would mean that $T$ proves that the hypothesis of $\psi$ is true and the conclusion false; in particular, it would require $T$ to prove the Rosser sentence $R$, which it does not by the Gödel-Rosser theorem. QED
Note that any instance of non-provability from $T$ will require the consistency of $T$, and so one cannot provide a solution to the problem without assuming the theory is consistent.
The observation of the theorem has arisen in some of the philosophical literature you may have in mind, based on what you said in the question. For example, the claim of the theorem is mentioned in Haim Gaifman's new paper "On ontology and realism in mathematics," which we read in my course last semester on the philosophy of set theory; see the discussion on page 24 of Gaifman's paper and specifically footnote 35, where he credits a fixed-point argument to Torkel Franzen, and an independent construction to Harvey Friedman.
My original argument (see edit history) used the sentence $\text{Con}(T)\to(\text{Con}^2(T)\wedge\varphi)$, where $\text{Con}^2(T)$ is the assertion $\text{Con}(T+\text{Con}(T))$, and worked under the assumption that $\text{Con}^2(T)$ is true, relying on the fact that $T+\text{Con}(T)$ is strictly between $T$ and this stronger theory. The current argument uses the essentially similarly idea that $T+R$ is strictly between $T$ and $T+\text{Con}(T)$, thereby reducing the consistency assumption.
-
There is nothing special about $\mathrm{Con}(T)$. If $\gamma$ is any sentence unprovable in $T$, and $\phi$ any sentence, then there is a $\psi$ such that $\vdash\gamma\to(\phi\leftrightarrow\psi)$, $T\nvdash\psi$, and $T\nvdash\neg\psi$. Proof: By the Gödel–Rosser theorem, there is an $\alpha$ independent of $T+\neg\gamma$. Put $\psi=(\gamma\land\phi)\lor(\neg\gamma\land\alpha)$. – Emil Jeřábek Apr 18 2012 at 15:39
Yes, I agree with that. – Joel David Hamkins Apr 18 2012 at 16:35
The argument presumes that $T$ is computably axiomatizable, so that the Rosser sentence is available. This should be understood as a hypothesis in the theorem. (Vote up this comment if I should edit to say this explicitly.) – Joel David Hamkins Nov 8 at 2:42
What you want is called [first-order] Arithmetical Splitting. I have spoken and written a lot about it in the last few years. Send me a message and I can show you some drafts about the current state of this most important topic.
Yes, we should not be able to form any preference, unlike the case of PH, where it is clear that PH is better than $\neg$PH.
-
Hi Andrey! I didn't ask the question, but naturally I am very interested in this topic. (Tried to email you but the email address I have for you has been discontinued.) Would you please email me some of those drafts you mentioned? – Andres Caicedo Jan 31 2012 at 2:53
I think there is a little misunderstanding.
Paris and Harrington showed the strengthened finite Ramsey theorem is true but unprovable in first-order arithmetic; I don't know if there's a proof that extends the result to full-on undecidability rather than just unprovability.
Indeed, the Wikipedia page is only talking about unprovability, but the negation of the strengthened finite Ramsey theorem is also unprovable in Peano arithmetic for "trivial" reasons: if you can prove this negation, then second order arithmetic can also prove this negation (because second order arithmetic is stronger than Peano arithmetic), so this would mean that second order arithmetic is inconsistent (because second order arithmetic proves the strengthened finite Ramsey theorem).
So if you take for granted that second order arithmetic is consistent, then your example is actually undecidable in Peano arithmetic. And there are other examples like Goodstein's theorem.
-
Yes, I realize now that undecidability of the Ramsey theorem is trivial, once we have unprovability. But this is not an answer to the question: the Ramsey theorem -- like Goodstein's theorem -- is, in fact, true in the standard model of arithmetic. I'm interested in undecidable cases where the truth value hasn't been settled. – Matt Lord Dec 22 2011 at 16:51
Once again, I don't know what you mean by "truth value". An ultrafinitist (someone who does not believe in infinity) will probably tell you that the truth value of Goodstein's theorem has not been settled at all, and that it's most likely false because experimentally you will never reach 0. On the other hand, someone like Woodin will probably tell you that the truth value of CH is known: it's false because there are strong arguments against it and few arguments for it. And most set theorists will tell you that large cardinal axioms are true because nobody wants them to be false. – Guillaume Brunerie Dec 22 2011 at 17:15
(of course this only applies to platonists, people who believe that there is a real mathematical universe somewhere, where every question has an answer) – Guillaume Brunerie Dec 22 2011 at 17:21
Guillaume, these are not the sort of examples the question is after. If it is an issue, work in a metatheory extending ZFC with enough large cardinals. – Andres Caicedo Dec 22 2011 at 17:44
@Andres, I know, I was just clarifying the unprovable/undecidable point. – Guillaume Brunerie Dec 22 2011 at 18:39
I'd say Con(ZF) isn't known to be undecidable--it's only undecidable if it's true. If it's false, that fact is $\Sigma^0_1$ and therefore provable. We want a sentence that's $\Sigma^0_2$ or higher.
This might be an almost-example: http://www.cs.uchicago.edu/~simon/RES/collatz.pdf
It proves that a generalization of the 3n+1 conjecture is $\Pi_2$-complete. But it's not quite what is asked, since it's about a family of problems rather than a single sentence.
-
Regarding the objection of your first paragraph, if Con(ZF) is false, then there are no independent sentences of any complexity. – Joel David Hamkins Jan 5 2012 at 13:18
If you'll settle for an important open question that's independent from some weak fragments of PA, the P vs NP problem is an example.
http://www.scottaaronson.com/papers/pnp.pdf discusses this a little bit.
Or if you'll take a completely artificial problem that's more strongly independent, just generate a very long random proposition in PA's language, by rolling dice. You won't know its truth value, and from Chaitin's theorem it will almost certainly be independent of any reasonable axiom system.
-
P = NP is independent of those weak fragments only if a certain cryptographic hypothesis holds, so this is not really an example. I don't think the second example works either since Matt Lord wants a specific example of a statement that is known to be undecidable. – Timothy Chow Dec 29 2011 at 15:04
If we omit the qualification "natural" from the question, then, of course, the most obvious examples are the arithmetized versions of metamathematical sentences expressing the (absolute) consistency of ZFC, or of any axiom system of set theory far weaker than ZFC within which arithmetic can be developed. Indeed, e.g. Con(ZF) is a sentence of arithmetic that we obtain by Gödel-numbering from the metamathematical sentence "there is no proof of 0=1 from ZF." And if ZF is consistent, then Con(ZF) is undecidable in Peano arithmetic with unknown truth value. Indeed, Con(ZF) implies Con(PA), and an arithmetical proof of $\lnot$Con(ZF) would yield a direct proof of the inconsistency of ZF.
As far as the "naturalness" condition is concerned, it seems that there will be no easy way to find "natural" sentences of this kind. Indeed, some natural candidates such as the Goldbach conjecture are excluded, since they would have to be true if they turned out to be undecidable. More precisely, any $\Pi_1$ sentence $S$ of arithmetic is true whenever $S$ is undecidable. Indeed, if $S$ is false, then its negation is a true $\Sigma_1$ sentence, and Peano arithmetic proves any true $\Sigma_1$ sentence.
-
http://mathhelpforum.com/calculus/73368-finding-velocity-function.html
1. ## Finding the velocity function
I thought acceleration was the change in velocity, in other words the derivative of the velocity function, and thus I am trying to solve the attached problem with this in mind, but it's not working. Any helpful hints?
Attached problem: the acceleration is $a(t) = t + 8$ with $v(0) = 3$; find $v(t)$ and the total distance traveled for $0 \le t \le 10$.
2. Originally Posted by fattydq
I thought acceleration was the change in velocity or in other words the derivative of the velocity function, and thus am trying to solve the problem I've attached keeping this in mind but it's not working. Any helpful hints?
you forgot your constant of integration when you integrated to get v(t) ... determine it from the given initial condition.
3. A good way is to integrate, knowing:
$v(t) = d'(t) \Rightarrow \int v(t)\,dt = d(t) + C$
4. Two or more functions can have the same derivative. In our case, any function whose derivative is $t+8$ must be of the form
$v(t)=\frac{1}{2}t^2+8t+C.$
All we have to do now is figure out what $C$ must be. We are told that $v(0)=3$, so
$v(0)=\frac{1}{2}0^2+8\cdot0+C=C=3.$
Now that we know $C=3$, we can rewrite our formula for $v(t)$:
$v(t)=\frac{1}{2}t^2+8t+3.$
You were close, though.
5. ahh gotcha, but then how would I do the final part of the problem, calculating the total distance?
6. I can't even figure out how total distance can be contrived from what's given...
7. Just as acceleration is the derivative of velocity, so velocity is the derivative of distance. Just as velocity is the integral of acceleration, so distance is the integral of velocity.
Now that you know the velocity function is $\frac{1}{2}t^2+ 8t+ 3$, integrate that. Because you are asked for the distance covered during the time interval from 0 to 10, use d(0) = 0 to find C, the constant of integration, and then find d(10). Equivalently, evaluate d(10) - d(0) so that the constant cancels. That last is the same as finding the definite integral $\int_0^{10} (\frac{1}{2}t^2+ 8t+ 3)dt$
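A quick symbolic check of the steps in this thread (a sketch assuming SymPy is available; the data $a(t)=t+8$, $v(0)=3$ and the interval $[0,10]$ are as given above):

```python
import sympy as sp

t, C = sp.symbols('t C')
a = t + 8                                       # given: a(t) = t + 8
v = sp.integrate(a, t) + C                      # v(t) = t**2/2 + 8*t + C
Cval = sp.solve(sp.Eq(v.subs(t, 0), 3), C)[0]   # initial condition v(0) = 3
v = v.subs(C, Cval)
print(v)                                        # t**2/2 + 8*t + 3
# v(t) > 0 on [0, 10], so total distance equals displacement here:
print(sp.integrate(v, (t, 0, 10)))              # 1790/3
```

Since $v(t)>0$ throughout $[0,10]$, the total distance equals the displacement, $1790/3 \approx 596.67$.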
8. Thanks buddy
9. Originally Posted by HallsofIvy
Just as acceleration is the derivative of velocity, so velocity is the derivative of distance. Just as velocity is the integral of acceleration, so distance is the integral of velocity.
to get the terminology straight ...
position (a vector) is the antiderivative of velocity (also a vector). distance is a scalar quantity.
a definite integral of velocity over an interval of time yields the displacement, or change in position over that interval of time.
distance traveled over an interval of time is the definite integral of speed (a scalar equal to the absolute value of velocity) over that time interval.
http://mathoverflow.net/questions/86679?sort=newest | ## Why the notation $\mathcal{O}(\mathcal{L})$ for line bundles $\mathcal{L}$
Let $X$ be a complex manifold. If $D$ is a divisor on $X$, then $\mathcal{O}(D)$ denotes the (up to isomorphism) line bundle $\mathcal{L}$ with a meromorphic section $m$ whose divisor is $(m)=D$.
But if $\mathcal{L}$ is a line bundle on $X$, there is also the notation $\mathcal{O}(\mathcal{L})$. Can anyone tell me what does it mean? Thank you very much.
-
It means the sheaf of sections of the line bundle $\mathcal{L}$. I guess the notation $\mathcal{O}(\mathcal{L})$ is just to remind you that the sheaf is an $\mathcal{O}$-module. Usually people don't distinguish much between line bundles and invertible sheaves on a complex manifold, which probably explains why this notation isn't as wellknown as $\mathcal{O}(D)$. – J.C. Ottem Jan 26 2012 at 1:22
## 1 Answer
Slightly expanding the comment of J.C. Ottem, I think $\mathcal O_X(D)$ should not be interpreted as a line bundle but rather as a sheaf of sections.
So, summing up:
• When you consider a (holomorphic) line bundle $L\to X$ you should think of it as a complex manifold together with a holomorphic surjective map to $X$, locally trivial, whose fibers are complex vector spaces of dimension one (and the local trivializations are compatible with the vector space structure).
• When you consider a (Weil or Cartier: I am assuming $X$ to be smooth so that the two concepts coincide) divisor $D$, you should look at it just as a formal integral combination of codimension one irreducible subvarieties.
• When you consider $\mathcal O_X(L)$, you should look at it as the sheaf of holomorphic sections of $L\to X$.
• When you consider $\mathcal O_X(D)$, if $D=\sum_j a_j D_j$ where $a_j\in\mathbb Z$ and the $D_j$ are prime divisors, you should look at it as the sheaf of meromorphic functions on $X$ which have zeros of order at least $-a_i$ along $D_i$ if $a_i\le 0$ and poles of order at most $a_k$ along $D_k$ if $a_k\ge 0$.
Of course, these four concepts are strongly related.
For instance, given a divisor $D$ one can form an associated holomorphic line bundle, let's say $L_D$ and then consider its sheaf of holomorphic sections $\mathcal O_X(L_D)$ which is naturally isomorphic to $\mathcal O_X(D)$.
On the other hand, given a holomorphic line bundle $L\to X$ with $X$ projective, it always admits a meromorphic section $\sigma$. Let $D_\sigma$ be its associated divisor. Then $L_{D_\sigma}\simeq L$ as holomorphic line bundles.
-
http://mathoverflow.net/questions/29949/what-is-the-shortest-program-for-which-halting-is-unknown/29953 | ## What is the shortest program for which halting is unknown?
In short, my question is:
What is the shortest computer program for which it is not known whether or not the program halts?
Of course, this depends on the description language; I also have the following vague question:
To what extent does this depend on the description language?
Here's my motivation, which I am sure is known but I think is a particularly striking possibility for an application to mathematics:
Let $P(n)$ be a statement about the natural numbers such that there exists a Turing machine $T$ which can decide whether $P(n)$ is true or false. (That is, this Turing machine halts on every natural number $n$, printing "True" if $P(n)$ is true and "False" otherwise.) Then the smallest $n$ such that $P(n)$ is false has low Kolmogorov complexity, as it will be printed by a program that tests $P(1)$, then $P(2)$, and so on until it reaches $n$ with $P(n)$ false, and prints this $n$. Thus the Kolmogorov complexity of the smallest counterexample to $P$ is bounded above by $|T|+c$ for some (effective) constant $c$.
Let $L$ be the length of the shortest computer program for which the halting problem is not known. Then if $|T|+c < L$, we may prove the statement $\forall n, P(n)$ simply by executing all halting programs of length less than or equal to $|T|+c$, and running $T$ on their output. If $T$ outputs "True" for these finitely many numbers, then $P$ is true.
Of course, the Halting problem places limits on the power of this method.
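For concreteness, the searcher program from the argument above looks like the sketch below. The predicate `P` is a stand-in chosen purely for illustration (a Goldbach-style check, decidable by finite search); nothing in the argument depends on this choice.

```python
# Sketch of the counterexample searcher (the predicate is an illustrative choice).
from sympy import isprime

def P(n):
    # "2n + 4 is a sum of two primes" -- a decidable property of n
    m = 2 * n + 4
    return any(isprime(k) and isprime(m - k) for k in range(2, m // 2 + 1))

def least_counterexample():
    n = 1
    while True:              # halts iff a counterexample exists; its output
        if not P(n):         # then has Kolmogorov complexity at most |T| + c
            return n
        n += 1
```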
Essentially, this question boils down to: What is the most succinctly stateable open conjecture?
EDIT: By the way, an amazing implication of the argument I give is that to prove any theorem about the natural numbers, it suffices to prove it for finitely many values (those with low Kolmogorov complexity). However, because of the Halting problem it is impossible to know which values! If anyone knows a reference for this sort of thing I would also appreciate that.
-
My money is on tag systems: en.wikipedia.org/wiki/… – Steve Huntsman Jun 29 2010 at 18:37
Note here that the "length of the program" includes the input---so this is a little less clear. – Daniel Litt Jun 29 2010 at 18:46
Maybe one can also find some short λ-terms having neither a known normal form, nor a proof of non-existence of it; unfortunately, I can’t find anything on the topic. – Antonio E. Porreca Jun 29 2010 at 19:19
"By the way, an amazing implication of the argument I give is that to prove any theorem about the natural numbers, it suffices to prove it for finitely many values..." A proof would require you prove P(n) for those finitely many n and to prove that those values are sufficient. This would prove the halting or non-halting of the TM that tests P(1), P(2)... You could turn this around to solve the halting problem. Given a TM M, define P(n) = false iff M halts with $n$ on its tape. Proving P(n)=true for all n proves that M does not halt. Finding an n so that P(n)=false proves that M halts – Travis Service Jun 29 2010 at 22:01
@Travis: Which is why I wrote "However, because of the Halting problem it is impossible to know which values! If anyone knows a reference for this sort of thing I would also appreciate that." – Daniel Litt Jun 29 2010 at 22:32
## 9 Answers
There is a 5-state, 2-symbol Turing machine for which it is not known whether it halts. See http://en.wikipedia.org/wiki/Busy_beaver.
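For readers who want to experiment, here is a minimal simulator sketch (editorial, not from the answer). The transition table is the standard 2-state, 2-symbol busy beaver, chosen only because it visibly halts; the machines whose halting status is open have 5 states.

```python
# Minimal Turing-machine simulator (sketch); 'H' is the halting state.
from collections import defaultdict

# (state, symbol) -> (write, move, next_state): the 2-state busy beaver
table = {('A', 0): (1, +1, 'B'), ('A', 1): (1, -1, 'B'),
         ('B', 0): (1, -1, 'A'), ('B', 1): (1, +1, 'H')}

tape, pos, state, steps = defaultdict(int), 0, 'A', 0
while state != 'H':
    write, move, state = table[(state, tape[pos])]
    tape[pos] = write
    pos += move
    steps += 1
print(steps, sum(tape.values()))   # 6 steps, 4 ones for this machine
```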
-
This does seem like a good contender. Is the halting status of every 4-state, 2-symbol machine known? – Daniel Litt Jun 29 2010 at 18:57
Daniel: yes, since S(n) is known for n = 4. – Antonio E. Porreca Jun 29 2010 at 19:04
Ah, fair enough. That answers original question to my satisfaction, though if someone wants to address the "descriptive language" question and the reference request, that would also be great. – Daniel Litt Jun 29 2010 at 19:05
You may find my blog post "Are small sentences of Peano arithmetic decidable?" relevant. In summary, John Langford and I investigated short sentences of Peano arithmetic. We enumerated them all (actually, our laptops did) and eliminated those that could be recognized as decidable. It quickly turned out that Diophantine equations were difficult to recognize as decidable. Among those we found two that gave a professional number theorist something to munch on (all quantifiers range over $\mathbb{N}$): $$\exists a, b, c . \; a^2 - 2 = (a + b) b c$$ and $$\exists a, b, c . \; a^2 + a - 1 = (a + b) b c.$$
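A brute-force search for a witness to the first sentence takes only a few lines (an editorial sketch; the search bound is an arbitrary assumption, and since $b=0$ or $c=0$ forces the right-hand side to be $0$, it suffices to try $b\ge 1$):

```python
# Search for a, b, c in N with a^2 - 2 = (a + b)*b*c (sketch; bound is arbitrary).
def witness(limit=5000):
    for a in range(limit):
        target = a * a - 2            # use a*a + a - 1 for the second sentence
        b = 1
        while b * (a + b) <= target:
            if target % (b * (a + b)) == 0:
                return a, b, target // (b * (a + b))
            b += 1
    return None

print(witness())   # None so far -- which is exactly why the sentence is open
```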
The shortest unsolved sentence I am aware of is ($S$ is the successor function) $$\forall a \exists b \forall c, d . \; (a+b)(a+b) \neq S(S( (S(S c)) \cdot (S(S d)))).$$ It says (more or less) that there are infinitely many primes of the form $x^2 - 2$.
What I found most surprising was that mixed quantifiers were easier than just straight universal or existential statements. It seems that with very few symbols to spare for the matrix, you cannot produce an interesting mixed-quantifier sentence.
-
+1 I was wondering if someone would answer along these lines! Based on your blog post, I infer that you have not actually enumerated and solved all the statements shorter than the one you give; is that the case? – Daniel Litt Jul 3 2010 at 7:15
This was some time ago. If I remember correctly we enumerated all universal statements up to 15 symbols, where we took inequality as a basic symbol (and did not use negation). Approximately 3000 of those could not be recognized as decidable by our heuristics. Practically all of them were Diophantine equations (actually their negations), except possibly for a couple of systems of equations. We also investigated mixed-quantifier sentences and I think we got up to something like 10 and never found an interesting one. – Andrej Bauer Jul 3 2010 at 14:17
It's quite possible that the Collatz conjecture provides an answer. Take the function which sends an even $n$ to $n/2$ and an odd $n$ to $3n+1$, and apply it repeatedly. The conjecture is that this process will eventually reach the number 1, regardless of which positive integer is chosen initially.
Not sure how many lines of code this would be. Probably 2 in Haskell.
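For uniformity with the other sketches on this page, here is a Python version rather than a Haskell one; the call halts on input $n$ exactly when the trajectory of $n$ reaches 1, which is what the conjecture asserts for every positive integer.

```python
def collatz(n):
    # halts iff the Collatz trajectory of n reaches 1
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
    return 1

collatz(27)   # returns after 111 iterations
```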
-
It's possible; this isn't a big list question though--I'm not looking for guesses. I'm actually wondering if someone has enumerated all short programs and tried to prove that they halt, with an aim of pursuing the application I describe in the question. – Daniel Litt Jun 29 2010 at 18:53
@Daniel Litt: regarding that comment, you should look at work such as cs.auckland.ac.nz/~cristian/Calude361_370.pdf . In that paper, the authors compute the first 64 bits of an $\Omega$-number explicitly. Of course these bits are utterly dependent on the choice of a universal prefix-free machine. – Carl Mummert Jun 29 2010 at 19:48
Thanks! Looks good. – Daniel Litt Jun 29 2010 at 19:49
I guess you mean to make the function recursive? f(n) = f(n/2), f(n) = f(3n+1), etc. Also you need the base case. Otherwise this function is easily computable. :-) – Wilson Jun 29 2010 at 21:16
Also, there's the question of how long the bit representation is of the smallest n not known to halt. – Wilson Jun 29 2010 at 21:17
The answer depends utterly and completely on the description language. Rogers called a description language "acceptable" if there is a pair of effective methods, one to convert standard programs to programs in the language D, and one to convert programs in D to standard programs. (More precisely, these description languages are just numberings of the set of partial computable functions.) Now for any program whatsoever, there is an acceptable description language which assigns that program to index 0. So any program you like can be the shortest example, if you pick the right acceptable numbering.
This is the same kind of reason that, for any string of length 2 or more, there is a universal prefix-free machine that makes the string Kolmogorov random, and another universal prefix-free machine that makes the string not Kolmogorov random.
-
This is true, but somehow a very naive answer--if we include the length $p$ of a "translator" program into our analysis of the dependence of length on description language, the dependence is a little less arbitrary. That is, any program in language 1 of length $x$ gives a program of length $p+x$ in language $2$. – Daniel Litt Jun 29 2010 at 19:24
To do that, you still must start with a "preferred" description language, and the complexity will depend on which one you think of as preferred. Perhaps the one I prefer assigns some function to index 0, while your preferred one assigns that function only much larger indices. – Carl Mummert Jun 29 2010 at 19:43
Ah sorry I was unclear. I'm not suggesting a definition of complexity that is independent of language--just that the length of "translators" provides a bound on the difference between the two complexities. Essentially, my second question was asking whether there is any less trivial relationship (and aiming towards applications, whether we can pick a language in a tricky way to make the method for proof I suggest at all reasonable). – Daniel Litt Jun 29 2010 at 19:47
I think you will find the paper I posted above interesting. – Carl Mummert Jun 29 2010 at 19:49
Definitely, it looks fantastic. Wish I could accept that comment as an answer as well. – Daniel Litt Jun 29 2010 at 19:51
Readers might be interested to know that congruential iterations similar to the Collatz 3n+1 problem were discovered "in the wild" while trying to understand the behavior of various busy beaver candidate machines. Since John Conway has shown that deciding termination of certain classes of such congruential iterations is undecidable, it may be that the boundary of undecidability already lies in one of these open busy beaver congruential iteration problems. For much further discussion and references see my old post on sci.math, 13 Feb 1996, halting is weak? http://groups.google.com/group/comp.lang.scheme/msg/b8c43aee2bc12241
-
Regarding to prove any theorem about the natural numbers, it suffices to prove it for finitely many values ... impossible to know which values [where the context explains that the theorem is a universally quantified statement $\forall n\,P(n)$]: this is trivial, and one value suffices. Namely, define $n_0$ as follows: if the statement is false, let $n_0$ be the smallest counterexample, otherwise let $n_0:=0$. Then the statement holds if and only if $P(n_0)$.
-
This is obviously not the intent of my claim; if you can show me a Turing machine which, given $P$, prints a correct (numerical) value of $n_0$, however, I'll be pretty impressed. Of course, such a machine would solve the halting problem. – Daniel Litt Mar 10 2011 at 16:32
Emil's statement looks to me entirely parallel to the claim in the question. Emil says to check `$n_0$` but gives you no clue how to find it. The claim in the question says to check all halting programs up to a certain length but gives you no clue how to determine which programs those are. – Andreas Blass Mar 10 2011 at 16:44
Well this is a bit of a quibble, but my claim is that given a correct value, one can produce a proof--on the other hand, Emil's "value" does not allow one to produce a proof. Instead, it produces something silly like "If there is an odd perfect number, let $x$ be that number; otherwise, let $x$ be a proof that there is no odd perfect number." In any case, I think we all understand each other and pretty much agree that the proof technique I "suggest" is essentially useless. – Daniel Litt Mar 10 2011 at 17:27
@Daniel, If you prove the statement for $n_0$ then by the definition of $n_0$ it would follow for all. – Kaveh Sep 21 at 21:03
This doesn't qualify as "shortest" but is my favorite example of why humans can't solve the halting problem:
for all odd numbers $n = 1,3,5,...$
if $n$ is perfect, halt.
This program halts if and only if there is an odd perfect number. Of course, the query "is $n$ perfect" is not terribly short (but can be computed by adding one more for loop).
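A runnable version of the pseudocode (an editorial sketch; the naive divisor sum is the "one more for loop" the answer mentions, and the program is expected to run forever, since no odd perfect number is known):

```python
from itertools import count

def is_perfect(n):
    # naive test: n equals the sum of its proper divisors
    return sum(d for d in range(1, n) if n % d == 0) == n

for n in count(1, 2):        # odd numbers 1, 3, 5, ...
    if is_perfect(n):
        break                # halts iff an odd perfect number exists
```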
-
This question is apparently closely related to Wolfram's research program of determining whether "small" cellular automata (CAs) [1] are Turing complete. If a CA is proved Turing complete, then by mapping to Turing's halting problem there exists an input for which termination of the CA cannot be proven. But determining whether a CA is Turing complete can itself be very difficult, and there are several so-far-indeterminate cases. A case where it succeeded, but with a very complex proof, is [2]; some further details of the dynamics are in [4]. See also [5] for a writeup of an ambitious, somewhat recent "major attack" on the busy beaver problem that superseded many prior results. There is also a related long tradition of research on finding small-state universal TMs [3,6], probably dating to the ~1960s, including results by Marvin Minsky. Regarding the Collatz conjecture candidate and a boundary with "nearby" problems similar to Conway-type iterations, see also [7].
[1] Elementary cellular automata, wikipedia
[2] Rule 110, wikipedia
[3] tcs.se, whats the simplest noncontroversial 2 state universal TM
[4] tcs.se initial conditions for rule 110
[5] New-Millenium Attack on the Busy Beaver Problem by Ross et al
[6] The complexity of small universal Turing machines: a survey Woods & Neary
[7] tcs.se, whats the nearest problem to the Collatz conjecture thats been successfully resolved?
-
The following is based on Waring's Problem:
For all $n$: $\lfloor (3/2)^n \rfloor + (3^n \bmod 2^n) < 2^n$. Kubina has tested this up to 471,600,000.
x = 9; for( y = 4 ; x/y + x%y < y ; y *= 2 ) x *= 3;
Assume that x and y are int's with unlimited size.
Whereas small Turing machines have been exhaustively analysed and, as Richard notes, there is a 5-state 2-symbol TM whose halting is unknown, I have not seen a similar analysis for other models of computation (Register Machines, LOOP programs, C programs). So I propose the above C program (containing 32 symbols) as the shortest whose halting is unknown.
In the program x contains powers of 3 and y contains corresponding powers of 2 (x*=3 multiplies x by 3).
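Since C integers are in fact bounded (see the comments below), here is the same iteration as an editorial sketch in Python, whose integers are genuinely unbounded:

```python
x, y = 9, 4                   # x = 3**n and y = 2**n, starting at n = 2
while x // y + x % y < y:     # floor((3/2)**n) + (3**n mod 2**n) < 2**n
    x, y = x * 3, y * 2       # advance n by 1
print(x, y)                   # reached only if a counterexample exists
```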
-
If this is an answer to the question, "What is the shortest program for which halting is unknown?" you'll have to flesh it out. If it's not an answer to that question, it doesn't belong here. – Gerry Myerson Oct 4 at 5:47
Integers in C are always bounded (but the bound depends on the implementation) so this program doesn't work as intended. – François G. Dorais♦ Oct 4 at 13:47
The program seems to be written in a nonstandard version of C in which ints are unbounded. It is reasonable to consider this variant language as a model of computation, and it is reasonable to ask about short programs in this language, but it seems unreasonable to claim that your answer satisfies the constraints of working in C. – S. Carnahan♦ Oct 5 at 2:37
http://www.chemeurope.com/en/encyclopedia/Refractive_index.html
# Refractive index
The refractive index (or index of refraction) of a medium is a measure of how much the speed of light (or of other waves, such as sound waves) is reduced inside the medium. For example, typical glass has a refractive index of 1.5, which means that light travels at 1/1.5 = 0.67 times its speed in air or vacuum. Two common properties of glass and other transparent materials are directly related to their refractive index. First, light rays change direction when they cross the interface from air to the material, an effect that is used in lenses and glasses. Second, light reflects partially from surfaces that have a refractive index different from that of their surroundings.
## Definition
The refractive index n of a medium is defined as the ratio of the phase velocity c of a wave phenomenon such as light or sound in a reference medium to the phase velocity vp in the medium itself:
$n = \frac{c}{v_{\mathrm {p}}}$
It is most commonly used in the context of light with vacuum as a reference medium, although historically other reference media (e.g. air at a standardized pressure and temperature) have been common. It is usually given the symbol n. In the case of light, it equals
$n=\sqrt{\epsilon_r\mu_r}$,
where εr is the material's relative permittivity, and μr is its relative permeability. For most materials, μr is very close to 1 at optical frequencies, therefore n is approximately $\sqrt{\epsilon_r}$. Contrary to a widespread misconception, n may be less than 1, for example for x-rays [1]. This has practical technical applications, such as effective mirrors for x-rays based on total external reflection.
The phase velocity is defined as the rate at which the crests of the waveform propagate; that is, the rate at which the phase of the waveform is moving. The group velocity is the rate that the envelope of the waveform is propagating; that is, the rate of variation of the amplitude of the waveform. Provided the waveform is not distorted significantly during propagation, it is the group velocity that represents the rate that information (and energy) may be transmitted by the wave, for example the velocity at which a beam of light travels down an optical fiber.
## The speed of light
The speed of all electromagnetic radiation in vacuum is the same, approximately $3\times 10^8$ meters per second, and is denoted by c. Therefore, if v is the phase velocity of radiation of a specific frequency in a specific material, the refractive index is given by
$n =\frac{c}{v}$.
This number is typically greater than one: the higher the index of the material, the more the light is slowed down. However, at certain frequencies (e.g. near absorption resonances, and for X-rays), n will actually be smaller than one. This does not contradict the theory of relativity, which holds that no information-carrying signal can ever propagate faster than c, because the phase velocity is not the same as the group velocity or the signal velocity.
Sometimes, a "group velocity refractive index", usually called the group index is defined:
$n_g=\frac{c}{v_g}$
where vg is the group velocity. This value should not be confused with n, which is always defined with respect to the phase velocity. The group index can be written in terms of the wavelength dependence of the refractive index as
$n_g = n - \lambda\frac{dn}{d\lambda},$
where λ is the wavelength in vacuum. At the microscale, an electromagnetic wave's phase velocity is slowed in a material because the electric field creates a disturbance in the charges of each atom (primarily the electrons) proportional to the permittivity of the medium. The charges will, in general, oscillate slightly out of phase with respect to the driving electric field. The charges thus radiate their own electromagnetic wave that is at the same frequency but with a phase delay. The macroscopic sum of all such contributions in the material is a wave with the same frequency but shorter wavelength than the original, leading to a slowing of the wave's phase velocity. Most of the radiation from oscillating material charges will modify the incoming wave, changing its velocity. However, some net energy will be radiated in other directions (see scattering).
If the refractive indices of two materials are known for a given frequency, then one can compute the angle by which radiation of that frequency will be refracted as it moves from the first into the second material from Snell's law.
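As a small illustration of that computation (a sketch; the index values are the representative air $\approx 1.0$ and glass $\approx 1.5$ from the introduction):

```python
# Snell's law sketch: n1*sin(theta1) = n2*sin(theta2).
import math

def refracted_angle(n1, n2, theta1_deg):
    s = n1 * math.sin(math.radians(theta1_deg)) / n2
    if abs(s) > 1:
        return None            # total internal reflection: no refracted ray
    return math.degrees(math.asin(s))

print(refracted_angle(1.0, 1.5, 30.0))   # ~19.47 deg, bent toward the normal
print(refracted_angle(1.5, 1.0, 60.0))   # None: beyond the critical angle
```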
If in a given region the refractive index n or ng is found to differ from unity (whether homogeneously, or isotropically, or not), then that region is distinct from vacuum in the above sense, in that it lacks Poincaré symmetry.
## Negative Refractive Index
Recent research has also demonstrated the existence of negative refractive index which can occur if the real parts of both εr and μr are simultaneously negative, although such is a sufficient but not necessary condition. Not thought to occur naturally, this can be achieved with so-called metamaterials and offers the possibility of perfect lenses and other exotic phenomena such as a reversal of Snell's law. [1] [2]
## Dispersion and absorption
In real materials, the polarization does not respond instantaneously to an applied field. This causes dielectric loss, which can be expressed by a permittivity that is both complex and frequency dependent. Real materials are not perfect insulators either, i.e. they have non-zero direct current conductivity. Taking both aspects into consideration, we can define a complex index of refraction:
$\tilde{n}=n-i\kappa$
Here, n is the refractive index indicating the phase velocity as above, while κ is called the extinction coefficient, which indicates the amount of absorption loss when the electromagnetic wave propagates through the material. Both n and κ are dependent on the frequency (wavelength).
The effect that n varies with frequency (except in vacuum, where all frequencies travel at the same speed, c) is known as dispersion, and it is what causes a prism to divide white light into its constituent spectral colors, explains rainbows, and is the cause of chromatic aberration in lenses. In regions of the spectrum where the material does not absorb, the real part of the refractive index tends to increase with frequency. Near absorption peaks, the curve of the refractive index is a complex form given by the Kramers-Kronig relations, and can decrease with frequency.
Since the refractive index of a material varies with the frequency (and thus wavelength) of light, it is usual to specify the corresponding vacuum wavelength at which the refractive index is measured. Typically, this is done at various well-defined spectral emission lines; for example, nD is the refractive index at the Fraunhofer "D" line, the centre of the yellow sodium doublet emission at 589.29 nm wavelength.
The Sellmeier equation is an empirical formula that works well in describing dispersion, and Sellmeier coefficients are often quoted instead of the refractive index in tables. For some representative refractive indices at different wavelengths, see list of indices of refraction.
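For instance, a Sellmeier fit can be evaluated numerically, and the group index of the previous section recovered from it. The sketch below uses the commonly quoted coefficients for Schott BK7 glass; treat them as illustrative and consult a datasheet for real design work:

```python
import math

# Sellmeier form: n^2(lam) = 1 + sum_i B_i*lam^2/(lam^2 - C_i), lam in um.
B = (1.03961212, 0.231792344, 1.01046945)
C = (0.00600069867, 0.0200179144, 103.560653)

def n(lam_um):
    lam2 = lam_um ** 2
    return math.sqrt(1.0 + sum(b * lam2 / (lam2 - c) for b, c in zip(B, C)))

def group_index(lam_um, h=1e-4):
    # n_g = n - lam*dn/dlam, with the derivative taken by central difference.
    dn = (n(lam_um + h) - n(lam_um - h)) / (2.0 * h)
    return n(lam_um) - lam_um * dn

print(n(0.5876))            # ~1.5168 near the sodium D line
print(group_index(0.5876))  # larger than n here, since dn/dlam < 0
```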
As shown above, dielectric loss and non-zero DC conductivity in materials cause absorption. Good dielectric materials such as glass have extremely low DC conductivity, and at low frequencies the dielectric loss is also negligible, resulting in almost no absorption (κ ≈ 0). However, at higher frequencies (such as visible light), dielectric loss may increase absorption significantly, reducing the material's transparency to these frequencies.
The real and imaginary parts of the complex refractive index are related through use of the Kramers-Kronig relations. For example, one can determine a material's full complex refractive index as a function of wavelength from an absorption spectrum of the material.
## Anisotropy
The refractive index of certain media may be different depending on the polarization and direction of propagation of the light through the medium. This is known as birefringence or anisotropy and is described by the field of crystal optics. In the most general case, the dielectric constant is a rank-2 tensor (a 3 by 3 matrix), which cannot simply be described by refractive indices except for polarizations along principal axes.
In magneto-optic (gyro-magnetic) and optically active materials, the principal axes are complex (corresponding to elliptical polarizations), and the dielectric tensor is complex-Hermitian (for lossless media); such materials break time-reversal symmetry and are used e.g. to construct Faraday isolators.
## Nonlinearity
The strong electric field of high intensity light (such as output of a laser) may cause a medium's refractive index to vary as the light passes through it, giving rise to nonlinear optics. If the index varies quadratically with the field (linearly with the intensity), it is called the optical Kerr effect and causes phenomena such as self-focusing and self phase modulation. If the index varies linearly with the field (which is only possible in materials that do not possess inversion symmetry), it is known as the Pockels effect.
## Inhomogeneity
If the refractive index of a medium is not constant, but varies gradually with position, the material is known as a gradient-index medium and is described by gradient index optics. Light travelling through such a medium can be bent or focussed, and this effect can be exploited to produce lenses, some optical fibers and other devices. Some common mirages are caused by a spatially-varying refractive index of air.
## Refractive index and density
In general, the refractive index of a glass increases with its density. However, there does not exist an overall linear relation between the refractive index and the density for all silicate and borosilicate glasses. A relatively high refractive index and low density can be obtained with glasses containing light metal oxides such as Li2O and MgO, while the opposite trend is observed with glasses containing PbO and BaO.
## Momentum Paradox
In 1908 Hermann Minkowski[3] calculated the momentum p of a refracted ray as follows, where E is the energy of the photon, c is the speed of light in vacuo and n is the refractive index of the medium:
$p=\frac{nE}{c}$
In 1909 Max Abraham[4] proposed
$p=\frac{E}{nc}$
Rudolf Peierls raises this paradox in his "More Surprises in Theoretical Physics", Princeton (1991). Ulf Leonhardt, Chair in Theoretical Physics at the University of St Andrews, has discussed[5] the question, including experiments that could resolve it.
## Applications
The refractive index of a material is the most important property of any optical system that uses refraction. It is used to calculate the focusing power of lenses, and the dispersive power of prisms.
Since refractive index is a fundamental physical property of a substance, it is often used to identify a particular substance, confirm its purity, or measure its concentration. Refractive index is used to measure solids (glasses and gemstones), liquids, and gases. Most commonly it is used to measure the concentration of a solute in an aqueous solution. A refractometer is the instrument used to measure refractive index. For a solution of sugar, the refractive index can be used to determine the sugar content (see Brix).
## See also
• List of refractive indices
• Optical properties of water and ice
• Sellmeier equation
• Total internal reflection
• Negative refractive index or Metamaterial
• Index-matching material
• Birefringence
• Calculation of glass properties, incl. refractive index
## References
1. ^ Tanya M. Sansosti, Compound Refractive Lenses for X-Rays. 2002
2. ^ Glassproperties.com
3. ^ Nachr. Ges. Wiss. Göttingen, Math.-Phys. Kl., 53 (1908).
4. ^ Rend. Circ. Matem. Palermo 28, 1 (1909).
5. ^ Nature, vol. 444, 14 December 2006, pp. 823–824.
http://math.stackexchange.com/tags/string-theory/info

Tag info
About string-theory
String theory is a research framework in particle physics that attempts to reconcile quantum mechanics and general relativity. It is a contender for a theory of everything, a self-contained mathematical model that describes all fundamental forces and forms of matter. String theory posits that the elementary particles (i.e., electrons and quarks) within an atom are not $0$-dimensional objects, but rather $1$-dimensional oscillating lines ("strings").
http://math.stackexchange.com/questions/168786/how-to-approach-integrals-as-a-function/168835

# How to approach integrals as a function?
I'm trying to solve the following question involving integrals, and can't quite get what am I supposed to do:
$$f(x) = \int_{2x}^{x^2}\root 3\of{\cos z}~dz$$ $$f'(x) =\ ?$$
How should I approach such integral functions? Am I just over-complicating a simple thing?
-
Are you familiar with the Fundamental Theorem of Calculus and the chain rule for differentiation? – Tim Duff Jul 9 '12 at 20:35
Yes, I am; I tried messing around with this integral, using the Fundamental Theorem of Calculus, but wasn't quite sure I was on the right path. I've ended up with a weird expression that I'm not sure is the correct/final answer. I've tried gathering information from wolframalpha, but it doesn't seem to handle such functions/integrals. Could you direct me to the right way - what should I end with? – Dvir Azulay Jul 9 '12 at 20:40
@Sam: That looks interesting; Could you explain how it can be used here? – Dvir Azulay Jul 9 '12 at 20:41
You could first split the integral into two integrals: $\int_{2x}^{x^2} \root3\of{\cos z}\,dz =\int_{2x}^0\root3\of{\cos z}\,dz+\int_0^{x^2}\root3\of{\cos z}\,dz=-\int_0^{2x}\root3\of{\cos z}\,dz+\int_0^{x^2}\root3\of{\cos z}\,dz$, and then use Tim's hint in his comment. – David Mitra Jul 9 '12 at 20:52
## 5 Answers
For this problem, you will ultimately use a version of the Fundamental Theorem of Calculus: If $f$ is continuous, then the function $F$ defined by $F(x)=\int_a^x f(z)\,dz$ is differentiable and $F'(x)=f(x)$.
So for instance, for $F(x)=\int_0^x\root3\of{\cos z}\,dz$, we have $F'(x)=\root3\of{\cos x}$.
One can combine this with the chain rule, when it applies, to differentiate a function whose rule is of the form $F(x)=\int_a^{g(x)} f(z)\,dz$. Here, we recognize that $F$ is a composition of the form $F=G\circ g$ with $G(x)=\int_a^x f(z)\,dz$. The derivative is $F'(x)=\bigl[ G(g(x))\bigr]'=G'(g(x))\cdot g'(x)=f(g(x))\cdot g'(x)$.
For example, for $F(x)=\int_0^{x^2}\root3\of{\cos z}\,dz$, we have $F'(x)=\root3\of{\cos x^2}\cdot(x^2)'=2x\root3\of{\cos x^2}$.
Now to tackle your problem proper and take advantage of these rules, we just "split the integral": $$\tag{1} \int_{2x}^{x^2}\root3\of{\cos z}\,dz= \int_{2x}^{0}\root3\of{\cos z}\,dz+ \int_{0}^{x^2}\root3\of{\cos z}\,dz.$$ But wait! We can only use the aforementioned differentiation rules for functions defined by an integral when it's the upper limit of integration that is the variable. The first integral in the right hand side of $(1)$ does not satisfy this. Things are easily remedied, though; write the right hand side of $(1)$ as: $$-\int_{0}^{2x}\root3\of{\cos z}\,dz+ \int_{0}^{x^2}\root3\of{\cos z}\,dz;$$ and now things are set up to use our rule (of course, you'll also use the rule $[cf+g]'=cf'+g'$).
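(As a numerical sanity check, assuming numpy and scipy are available, one can compare a finite-difference derivative of the integral against the expression the rule produces; this does give away the final answer, so look away if you want to finish the exercise first.)

```python
import numpy as np
from scipy.integrate import quad

def F(x):
    # np.cbrt takes the real cube root, so negative values of cos are fine
    val, _ = quad(lambda z: np.cbrt(np.cos(z)), 2 * x, x ** 2)
    return val

x, h = 0.7, 1e-5
numeric = (F(x + h) - F(x - h)) / (2 * h)
exact = 2 * x * np.cbrt(np.cos(x ** 2)) - 2 * np.cbrt(np.cos(2 * x))
print(numeric, exact)  # both ~ 0.235
```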
-
Such a well written answer. Wish I could up-vote it a few more times; Thank you so much – Dvir Azulay Jul 9 '12 at 23:49
@dvir, I did it for you. – Tpofofn Jul 10 '12 at 2:38
Not to give it away completely: using the Fundamental Theorem of Calculus, $f(x) = C(x^2)-C(2x)$, where $C(x)$ is the anti-derivative of the integrand. Now, use the Chain rule to compute $f'(x)$, which will depend only on $C'$, the integrand itself, evaluated at the limits of integration.
-
Really thanks for your answer! – Dvir Azulay Jul 9 '12 at 23:49
\begin{eqnarray} f'(x)&=&(x^2)'(\sqrt[3]{\cos z})|_{z=x^2}-(2x)'(\sqrt[3]{\cos z})|_{z=2x}\cr &=&2x\sqrt[3]{\cos x^2}-2\sqrt[3]{\cos 2x} \end{eqnarray}
-
This doesn't seem correct. – Dom Jul 9 '12 at 21:49
Then show us what you think is correct! – Mercy Jul 9 '12 at 21:55
Shouldn't the answer be $2x\root 3\of{\cos x^2} - 2\root 3\of{\cos 2x}$ ? – Dom Jul 9 '12 at 22:03
Yes, you are right! – Mercy Jul 9 '12 at 22:13
Generally, to differentiate an integral of the form:
$$\int_{g_1(x)}^{g_2(x)}f(z)dz$$
we use Leibniz rule. First assume that $F(x)$ is the anti-derivative of $f(x)$. That is $F'(x) = f(x)$. Then it follows that,
$$\int_{g_1(x)}^{g_2(x)}f(z)dz = F(z)|_{z=g_2(x)} - F(z)|_{z=g_1(x)} = F(g_2(x)) - F(g_1(x))$$
Now if we differentiate this result using the chain rule we get:
$$\frac{d}{dx}\left(F(g_2(x)) - F(g_1(x))\right) = f(g_2(x))g_2'(x) - f(g_1(x))g_1'(x).$$
Note that it is not necessary to find the anti-derivative $F()$.
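SymPy applies exactly this rule when differentiating an unevaluated integral, which gives a quick way to check the computation (a minimal sketch, assuming sympy is installed):

```python
import sympy as sp

x, z = sp.symbols('x z')
f = sp.Integral(sp.cbrt(sp.cos(z)), (z, 2 * x, x ** 2))

# Leibniz rule: g(b(x))*b'(x) - g(a(x))*a'(x)
print(sp.diff(f, x))  # -> 2*x*cos(x**2)**(1/3) - 2*cos(2*x)**(1/3)
```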
-
One can use the Leibniz rule for differentiation of integrals, which states that if \begin{align} f(x) = \int_{a(x)}^{b(x)} g(y) \ dy, \end{align} then \begin{align} f^{\prime}(x) = g(b(x)) b^{\prime}(x) - g(a(x)) a^{\prime}(x). \end{align} Thus, for your problem $a^{\prime}(x) = 2$ and $b^{\prime}(x) = 2x$ and, therefore, \begin{align} f^{\prime}(x) = \frac{d}{dx}\int_{2x}^{x^2} \sqrt[3]{\cos z}\, dz = \sqrt[3]{\cos (x^2)} (2 x) - \sqrt[3]{\cos (2x)} (2). \end{align}
-
The downvote seems a bit harsh. – user02138 Jul 16 '12 at 1:14
http://mathoverflow.net/questions/27428/does-the-axiom-of-choice-or-any-other-optional-set-theory-axiom-have-real-wor/29192

## Does the Axiom of Choice (or any other "optional" set theory axiom) have real-world consequences? [closed]
Or another way to put it: Could the axiom of choice, or any other set-theoretic axiom/formulation which we normally think of as undecidable, be somehow empirically testable? If you have a particular scheme for testing it, that's great, but even the existence or non-existence of a proof regarding potential testability is wonderful.
How about something a little simpler: can we even test the Peano axioms? Are there experiments that can empirically verify theorems not provable by them?
This is a slightly fuzzy question, so to clarify what I mean, consider this: the parallel postulate produces good, useful geometry, yet beyond its inapplicability to the sphere, there's evidence to suggest that the universe is actually hyperbolic - this can be considered an experimental evidence "against" the parallel postulate in our universe.
Edit: Thanks to all the people who answered - I understand the concerns of those who don't like this questions, and I appreciate all those who answered a more modest interpretation that I should, in retrospect, have stated. That is, "Is the axiom of choice agreeable with testable theories of mathematical physics, is it completely and forever irrelevant, or is it conceivably relevant but in a way not yet known," to which I got several compelling answers indicating the former.
-
I remember an American Mathematical Monthly article where the authors showed that you can use the axiom of choice to predict the future (well, in a rather non-constructive way)... :-) – Andrea Ferretti Jun 8 2010 at 10:45
Well, for sure quantum mechanics assumes those Hilbert spaces where wavefunctions live have a basis. I don't know if assuming that those particular Hilbert spaces have a basis depends on the axiom of choice. Certainly the statement that every vector space has a basis does. – Marc Alcobé García Jun 8 2010 at 20:11
Does Banach-Tarski count as evidence that AC is "false"? Does the existence of non-measurable sets, which implies the existence of coins with the property that even if you toss them a gazillion times the ratio (number of heads)/(total tosses) does not show any signs of converging to anything, count as evidence that it's "false"? Is Goodstein's theorem an experiment which empirically verifies something not provable in PA? These are really just vague comments on what seems to me to be a vague question. – Kevin Buzzard Jun 8 2010 at 21:13
@Kevin: Since Banach-Tarski does not apply to matter made up of atoms, why would it say anything about falsity of some mathematics? Littlewood has an essay saying that trying to make probability theory fit anything in the real world is problematic. – Gerald Edgar Jun 9 2010 at 1:00
## 11 Answers
The 1996 paper The Axiom of Choice in Quantum Theory by Brunner, Svozil and Baaz contains the following provocative statement in the first paragraph: "Hence the very notion of a self-adjoint operator as an observable of quantum theory may become meaningless without the axiom of choice." The authors arrive at this conclusion by "constructing peculiar Hilbert spaces from counterexamples to the axiom of choice."
-
Before answering your question about the axiom of choice, let me take another set-theoretic axiom: "There exists an inaccessible cardinal." This axiom implies that ZFC is consistent. One could argue that this has the following "real-world consequence": If you set a computer running to look for a contradiction in ZFC, it will never find one. More generally, large cardinal axioms imply that smaller cardinal axioms are consistent with ZFC. Thus running your computer to search for contradictions in large cardinal axioms is a way to "test" larger cardinal axioms.
If you buy that, then one way to "test" an axiom is to look at what $\Pi^0_1$ sentences (i.e., sentences of the form "for all integers $n$, $P(n)$ holds" where $P(n)$ is some statement about the number $n$ that can be checked by a terminating computer program) the axiom implies. Coming back now to the axiom of choice, there is unfortunately no way to test it in this manner, because it is a theorem that any $\Pi^0_1$ sentence (in fact, any first-order sentence of arithmetic) is a theorem of ZFC if and only if it is a theorem of ZF. The same goes for statements like the continuum hypothesis: any arithmetical theorem of ZFC + CH is already a theorem of ZF.
One might still wonder whether there is some other way to test mathematical statements using physical experiments. It seems unlikely to me, mainly because as finite creatures we can make only finitely many physical observations, so I think that the only mathematical statements that we will be able to reject definitively on the basis of physical experiments will be finitary ones, and first-order arithmetic should be able to express any finitary mathematical statement. It's true that, as some others have mentioned, some physicists have used the axiom of choice to construct physical theories, but if one of these theories were to be contradicted by experiment, we would probably just say that this disproves the physical theory, not the axiom of choice itself.
-
I think this question is the wrong way around (but this is also a philosophical issue).
The question is not if a theory has real world consequences but if the theory fits the parts of reality that is in the focus of the researcher.
Mathematics is about deducing lemmas and theorems from axioms so there are no consequences for the real world. All the lemmas and theorems are already there when you choose the axioms (sometimes you don't find them within some hundred years - this is also about computability). The question is if these theorems describe the real world and give correct forecasts when you insert real world variables into them.
If you choose too few axioms you won't get a very rich theory; if you choose too many, you will get something with which you could prove both a statement and its negation. So it is really a tightrope walk to find the "right" axioms - also trial and error over the centuries.
After that you model reality with your lemmas/theorems and fit it to reality. If it works - fine, if not, try something else. I think it all boils down to that.
-
Isn't this basically what I just said? This is not rhetorical: I'm a little sick and feverish so my answer may not make as much sense as I think it does ;). – jeremy Jun 8 2010 at 8:05
Well, to be quite honest: Me too - perhaps the same virus? Could be we are both delusional right now and when we'll read our answers a few days from now won't understand them any more ;-) But back to the point: I think the difference to your answer is that mine is more general, yours is more concrete and you say that there are no better or worse choices but I think there are more appropriate ones (they fit better and are better building blocks). But generally I agree with your answer (and just upvoted it :-) – vonjd Jun 8 2010 at 8:14
Asking whether a mathematical axiom can be empirically tested makes little or no sense, as it would be like asking the same about the rules of chess. Excuse me for trivializing a little. If we use natural numbers with Peano's axioms to count grains of rice, we may be satisfied, for they give a quite satisfying model. One could also object that after all there are finitely many grains of rice in the world, though. But this says nothing about the consistency of Peano axioms; it only shows the range of applicability of a certain physical interpretation of them.
-
I think this is going too far--one can definitely test if a particular system is well-described by a particular set of axioms. One could even imagine there could be a system where you can "exactly" test them. This certainly counts as "empirically testing" but my contention is that there is not an well-defined/objective (i.e., 'experiment independent') notion of this. – jeremy Jun 8 2010 at 8:37
This is a slightly fuzzy question, so to clarify what I mean, consider this: the parallel postulate produces good, useful geometry, yet beyond its inapplicability to the sphere, there's evidence to suggest that the universe is actually hyperbolic - this can be considered an experimental evidence "against" the parallel postulate in our universe.
But that doesn't mean the same thing as your question is asking. The generalization of Euclidean geometry is not just hyperbolic or spherical geometry, but differential geometry. And a lot of the power of general relativity (or any Yang-Mills theory) comes from its general differential geometric structure (in other words, that it's a principal bundle with certain gauge group, etc), not the specific "geometry." This is analogous to thinking of differential equations v.s. initial conditions and specific solutions.
From the point of view of theoretical physicists, in a sense the answer is the same. If it is sensible to have math with and without the axiom of choice, one could reasonably expect that there are physical situations that can be described with and without the axiom of choice.
In other words, it may not be reasonable to say that AC is "empirically testable," as some systems may be described by "X+AC" and some others may be described by "X". Analogously, some systems are hyperbolic, e.g., special relativity's geometry, and others are not, e.g., generic Yang-Mills, string theory, etc--this does not mean that "geometry" is testable, it simply means that specific systems have specific descriptions. Not that any one formulation is in any way "better" than others from an experimental standpoint.
So I do not believe it makes sense to ask if the "universe" satisfies AC (or any other property) in this way.
-
Unfortunately, I can't remember either the details or an appropriate reference, but I once read of an amusing proposal made by a physicist to exploit ideas in relativity, which, if the geometry of the universe was as it might possibly be, would allow you to do infinitely many computational steps in finite time (though presumably it wasn't finite time from the point of view of the computer). If it worked, then one could test number theoretic statements by simply running through all the integers. (This is in non-serious answer to your question about the Peano axioms.)
Well, I wrote that, but I now see that there's a problem because the input size would tend to infinity, so the computer would need infinite memory as well. I imagine the physicist concerned had thought about that but can't remember enough to be sure.
-
Some of my quantum computation friends have mentioned these things, and they all rely on closed timelike curves so that any computation that can be done in finite time can be done in bounded time--including zero time or a negative amount of time! If you're clever, you can get around some of the infinite memory problems by having the computer communicate with past states of itself in a clever manner, IIRC. Then, after explaining this, they laugh and tell me how they're paid to study this... – jeremy Jun 8 2010 at 7:42
Philip Welch has an excellent mathematical summary of these ideas at maths.bris.ac.uk/~mapdw/chapter-fin.pdf (and several other papers on his web page). The idea of building these strange spacetimes to physically realize infinitary computation stretches back at least to Hogarth 1992, with a lot of work since then, and there is currently resurgent active work on the purely mathematical aspects of infinitary computability, such as that arising in Blum-Shub-Smale machines and infinite time Turing machines, with which I have been involved. – Joel David Hamkins Jun 8 2010 at 12:54
You can find many such proposals by googling for "hypercomputation." But even if we take such proposals seriously, there is a problem. Say we construct a physical theory that tells us how to build a hypercomputer to solve the halting problem. We build the hypercomputer and ask it, "Would a Turing machine programmed to find a contradiction in ZFC halt?" Say the hypercomputer replies, "No." Have we "proved" that ZFC is consistent? I don't think so. We can't rule out the possibility that ZFC is inconsistent but that there is something wrong with our physical theory about the hypercomputer. – Timothy Chow Jun 9 2010 at 2:16
Here is a completely serious paper about computation and closed timelike curves, by two top quantum computer scientists: arxiv.org/PS_cache/arxiv/pdf/0808/0808.2669v1.pdf The gist of it is that if closed timelike curves exist, then quantum computers are no more powerful than classical computers (although both become superpowerful, they become equally superpowerful). – Ryan O'Donnell Jun 9 2010 at 2:22
Stan Wagon mentions the following paper in his book: B. W. Augenstein, Hadron physics and transfinite set theory, International Journal of Theoretical Physics 23 (1984).
Peter Komjath
-
The paper can be read here (possibly requires institutional affiliation): springerlink.com/content/unr7753m10567w2h/… – John Stillwell Jun 8 2010 at 4:56
I'm not sure how to add comments to other responses (maybe I don't have enough reputation, or maybe I'm just inept). I think the paper that gowers is referring to is on Malament-Hogarth spacetime.
http://en.wikipedia.org/wiki/Malament%E2%80%93Hogarth_spacetime
-
At the risk of making further feverish, potentially wrong, comments, I believe that when I discussed this particular situation with my QC friend, we decided that this does not actually solve the halting problem because (at the least) long-time signals will be arbitrarily red shifted, so you'd need to detect arbitrarily low energy photons, which you can't because eventually their wavelength will be longer than the universe. I believe there were other problems as well but I do not remember them offhand... – jeremy Jun 8 2010 at 11:44
You need 50 reputation to add comments. – Ian Morris Jun 8 2010 at 12:04
I vaguely recall some (humorous?) exchange about whether it is possible that a bridge would fall down because the calculations in its design had used the Lebesgue integral instead of the Riemann integral... Does anyone know where this was?
-
According to Rota's Discrete Thoughts, page 2, F.P. Ramsey asked Wittgenstein: "Suppose a contradiction were to be found in the axioms of set theory. Do you seriously believe that a bridge would fall down?" I think I've heard the Riemann v. Lebesgue variant too, but I can't find the source. – John Stillwell Jun 9 2010 at 2:33
The Riemann vs. Lebesgue issue is a story about Hamming. He said that if the structural integrity of a particular airplane turned on the distinction between Riemann and Lebesgue integrals, he wouldn't fly in it. For an exact quote and citation, see the list of quotes on the wikipedia page en.wikipedia.org/wiki/Richard_Hamming. – KConrad Jun 9 2010 at 3:27
The following paper "On Non-measurable sets and Invariant Tori' uses the axiom choice to solve a problem in classical mechanics and discusses the application of the axiom of choice to physics.
-
Answering your question specifically concerning real-world consequences of AC, it is worth noting that the answer is strongly dependent on whether the universe is discrete or continuous. Although quantum mechanics and high energy physics hint at a fully discrete universe, this is not at all settled. For example, it is not known whether or not space-time is discrete. If the universe is discrete, and therefore either finite or countably infinite (depending on whether or not the size of the universe is finite, which is also not known), the full AC is no longer applicable (a choice function for finite collections of sets can be proven within ZF). In this case AC would seem exceedingly unlikely to have real-world consequences.
-
Quantum mechanics and high energy physics absolutely do not hint at a "fully discrete universe." There is absolutely nothing 'discrete' about quantum mechanics. This is a frequent misunderstanding and is discussed in many introductory undergraduate quantum mechanics books. No serious physicists believe that spacetime is in any way literally "discrete." – jeremy Jun 23 2010 at 5:40
I have heard of these ideas; they aren't new, they've been around since at least the '60s. Things like discrete causal networks need to have uncountably many networks to hope to make sense. Most of the actual implementations of it either COMPLETELY fail to reproduce relativity in any limit, or just end up being perverse rewritings of SR or GR's geometry in terms of, e.g., its topology plus mysterious relations. This line of research was pretty much entirely abandoned by mainstream physics by the early to mid '80s. – jeremy Jun 23 2010 at 6:10
Also, those ideas had absolutely nothing to do with any "discreteness" of quantum mechanics. They have to do with the fact that any individual observer actually sees a countable number of events, and spacetime is modeled on the structure of events (more-or-less lightcones in SR, more subtle in GR). So, the actual history of an observer is modeled on a finite lattice of events. But it turns out that it's not important what you DO observer, but the space of possibilities of what you COULD observe, which is uncountable. So you're just left with describing the points on a spacetime manifold. – jeremy Jun 23 2010 at 6:13
Eh? Spacetime being discrete would absolutely slaughter the structure of string theory, not to mention GR and QFTs. For specific examples, see what happened with the development of lattice GR (particularly early on) and the special things needed to do in order to make lattice field theory work. (Mainly, what has to be done is account for the fake-discreteness by really knowing everything is discrete.) – jeremy Jun 23 2010 at 6:29
(cont) It's well-known that spacetime (specifically) can't be discrete, and that other things (generically) are not expected to be discrete without very good reasons. There appears to be no "fundamental" discreteness in physics. – jeremy Jun 23 2010 at 6:29
http://mathhelpforum.com/calculus/20454-word-prob.html

# Thread:
1. ## Word prob.
A particle in a magnetic field travels in a straight line. The force (in Newtons) acting upon it is given by the function
$F(s) = \sqrt{s} - s^2$, where s = f(t), the position of the particle in meters, changes with time t. At what rate is the force changing with respect to time at a time T for which the particle is at position s = 1 and has velocity 4 m/s.
Please explain this prob and how to do it!
2. Originally Posted by kwivo
A particle in a magnetic field travels in a straight line. The force (in Newtons) acting upon it is given by the function
F(s) = ps−s2, where s = f(t), the position of the particle in meters, changes with time t. At what rate is the force changing with respect to time at a time T for which the particle is at position s = 1 and has velocity 4 m/s.
Please explain this prob and how to do it!
Is that
F(s) = p*s -s^2 ?
s = f(t), okay, but what is p?
3. Sorry, I fixed it.
4. Originally Posted by kwivo
A particle in a magnetic field travels in a straight line. The force (in Newtons) acting upon it is given by the function
$F(s) = \sqrt{s} - s^2$, where s = f(t), the position of the particle in meters, changes with time t. At what rate is the force changing with respect to time at a time T for which the particle is at position s = 1 and has velocity 4 m/s.
Please explain this prob and how to do it!
$F(s) = \sqrt{s} - s^2$
where s is a function of t.
What is
$\frac{dF}{dt}$?
Looks like a chain rule problem to me.
For example, the first term:
$\frac{d}{dt}\sqrt{s} = \frac{1}{2\sqrt{s}} \cdot \frac{ds}{dt}$
You do the rest.
-Dan
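For readers checking their work, here is a small SymPy sketch of the chain-rule computation above (a hedged example, assuming sympy is available; it confirms that the force is changing at -6 newtons per second):

```python
import sympy as sp

t = sp.symbols('t')
s = sp.Function('s')(t)      # position as an unknown function of time
F = sp.sqrt(s) - s ** 2      # F(s) = sqrt(s) - s^2

dFdt = sp.diff(F, t)         # chain rule: (1/(2*sqrt(s)) - 2*s) * ds/dt
# Substitute ds/dt = 4 first, then s = 1 (the order matters):
print(dFdt.subs(sp.Derivative(s, t), 4).subs(s, 1))  # -6
```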
http://mathoverflow.net/questions/65405?sort=newest

## Induced pretopologies on sSet
Recall that the geometric realisation functor $| - |: sSet \to Top$ preserves products (choosing $Top = k Space$ or similar). Thus any given singleton Grothendieck pretopology on $Top$ gives rise to a singleton pretopology on $sSet$. I have in mind three, no four, examples:
1. open covers (that is, $\coprod U_i \to X$ for `$\{U_i\}$` an open cover of $X$)
2. numerable open covers (ditto)
3. open surjections
4. maps admitting local sections (=locally split maps)
The pretopology $J^*$ on $sSet$ (which is a wide subcategory) arises as the pullback along $| - |$ of the given pretopology $J$ on $Top$. Now it occurs to one that there might be some sort of characterisation of maps of simplicial sets that correspond to, for example, open surjections. Since geometric realisation of maps in $sSet$ gives rise to cellular maps (if we retain for argument's sake the cellular structure on the spaces involved), one could think of this as a characterisation of cellular maps which have the required property. But it seems to me that cellularity is sort of orthogonal (not the technical meaning!) to openness, in that it seems hard to think of a map of simplicial sets which is an open cover upon geometric realisation. Thus 1. and 2. don't seem very promising.
However, if one has a twisted cartesian product (see, e.g. May's book on simplicial sets), which is the 'same thing' as a fibre bundle in the world of simplicial sets, then geometric realisation gives you a fibre bundle (even a principal G-bundle), hence open surjection. But this is obviously too strong to give all of $J^*$ for either $J=$ open surjections or $J=$ locally split maps. So my question is:
Let $J$ be either the class of open surjections or the class of locally split maps in $Top$. Is a characterisation of those maps in $J^*$ likely? Possible? Already in the literature?
One could then compare this to the canonical topology on $sSet$, which consists of the epimorphisms. One would hope that $J^*$ is subcanonical, but this looks like it should follow from the definition...
-
## 1 Answer
This not a complete answer but too long for a comment:
First a quick remark: 1.) and 4.) generate the same topology.
Now let geometric realization be denoted by $G$. Consider the geometric morphism `$$\bar G=\left(G^*,G_*\right):Psh\left(Top\right) \to Psh\left(sSet\right),$$` and the geometric morphism $$J=\left(a,i\right):Sh_J\left(Top\right) \to Psh\left(Top\right)$$ corresponding to the Grothendieck topology $J$ (I have used $J$ to denote the geometric embedding since there is a one-to-one correspondence between geometric embeddings and Grothendieck topologies).
The topos of sheaves on $sSet$ with respect to what you call $J^*,$ is the pullback topos $$Sh_{J^*}\left(sSet\right):=Psh\left(sSet\right) \times_{Psh\left(Top\right)} Sh_J\left(Top\right).$$
Let $S:Top \to sSet$ denote the singular nerve functor. Note that $G^*$ is the left-Kan extension along Yoneda of the functor $$y_G:Top \to Psh\left(sSet\right),$$
$$y_G\left(T\right):X \mapsto Hom_{Top}\left(G\left(X\right),T\right) \cong Hom_{sSet}\left(X,S\left(T\right)\right),$$
i.e. for topological space $T,$ `$$G^*\left(y\left(T\right)\right)=y\left(S\left(T\right)\right),$$` where $y$ denotes the Yoneda embedding.
Since all of the Grothendieck topologies you mentioned are extensive, it follows that the $J^*$ sieves are the ones generated by those of the form
`$$S\left(\underset{j} \coprod V_j\right) \to S\left(V\right).$$`
I do not think it follows "by definition" that $J^*$ is subcanonical, but maybe I am missing something.
-
P.S. David, I tried to send you a math email, but it bounced back. You can find mine on my website (listed on my profile here). Can you drop me a line with your email? Thx. – David Carchedi May 19 2011 at 15:17
Thanks, David, but I really am interested in the pretopology, not the topology. Hmm, I think geometric realisation preserves regular epimorphisms (since it preserves colimits and finite limits), but of course it doesn't a priori reflect them. Perhaps I was a bit hasty in thinking that it was straightforward to claim so. – David Roberts May 20 2011 at 2:15
http://mathoverflow.net/questions/101014/fundamental-problems-whose-solution-seems-completely-out-of-reach/101075

## Fundamental problems whose solution seems completely out of reach [closed]
In many areas of mathematics there are fundamental problems that are embarrassingly natural or simple to state, but whose solutions seem so out of reach that they are barely mentioned in the literature even though most practitioners know about them. I'm specifically looking for open problems of the sort that when one first hears of them, the first reaction is to say: that's not known ??!! As examples, I'll mention three problems in geometry that I think fall in this category and I hope that people will pitch in either more problems of this type, or direct me to the literature where these problems are studied.
The first two problems are "holy grails" of systolic geometry---the study of inequalities involving the volume of a Riemannian manifold and the length of its shortest periodic geodesic---, the third problem is one of the Busemann-Petty problems and, to my mind, one of the prettiest open problems in affine convex geometry.
Systolic geometry of simply-connected manifolds. Does there exist a constant $C > 0$ so that for every Riemannian metric $g$ on the three-sphere, the volume of $(S^3,g)$ is bounded below by the cube of the length of its shortest periodic geodesic times the constant $C$?
Comments.
• For the two-sphere this is a theorem of Croke.
• Another basic test for studying this problem is $S^1 \times S^2$. In this case the fundamental group is non-trivial, but in some sense it is small (i.e., the manifold is not essential in the sense of Gromov).
• There is a very timid hint to this problem in Gromov's Filling Riemannian manifolds.
Sharp systolic inequality for real projective space. If a Riemannian metric in projective three-space has the same volume as the canonical metric, but is not isometric to it, does it carry a (non-contractible) periodic geodesic of length smaller than $\pi$?
Comments.
• For the real projective plane this is Pu's theorem.
• In his Panoramic view of Riemannian geometry, Berger hesitates in conjecturing that this is the case (he says it is not clear that this is the right way to bet).
• In a recent preprint with Florent Balacheff, I studied a parametric version of this problem. The results suggest that the formulation above is the right way to bet.
Isoperimetry of metric balls. For what three-dimensional normed spaces are metric balls solutions of the isoperimetric inequality?
Comments.
• In two dimensions this problem was studied by Radon. There are plenty of norms on the plane for which metric discs are solutions of the isoperimetric problem. For example, the normed plane for which the disc is a regular hexagon.
• This is one of the Busemann-Petty problems.
• The volume and area are defined using the Hausdorff $2$ and $3$-dimensional measure.
• I have not seen any partial solution, even of the most modest kind, to this problem.
• Busemann and Petty gave a beautiful elementary interpretation of this problem:
Take a convex body symmetric about the origin and a plane supporting it at some point $x$. Translate the plane to the origin, intersect it with the body, and consider the solid cone formed by this central section and the point $x$. The conjecture is that if the volume of all cones formed in this way is always the same, then the body is an ellipsoid.
Additional problem: I had forgotten another beautiful problem from the paper of Busemann and Petty: Problems on convex bodies, Mathematica Scandinavica 4: 88–94.
Minimality of flats in normed spaces. Given a closed $k$-dimensional polyhedron in an $n$-dimensional normed space with $n > k$, is it true that the area (taken as $k$-dimensional Hausdorff measure) of any facet does not exceed the sum of the areas of the remaining facets?
Comments.
• When $n = k + 1$ this is a celebrated theorem of Busemann, which convex geometers are more likely to recognize in the following form: the intersection body of a centrally symmetric convex body is convex. A nice proof and a deep extension of this theorem was given by G. Berck in Convexity of Lp-intersection bodies, Adv. Math. 222 (2009), 920-936.
• When $k = 2$ this has "just" been proved by D. Burago and S. Ivanov: http://front.math.ucdavis.edu/1204.1543
• It is not true that totally geodesic submanifolds of a Finsler space (or a length metric space) are minimal for the Hausdorff measure. Berck and I gave a counter-example in What is wrong with the Hausdorff measure in Finsler spaces, Advances in Mathematics, vol. 204, no. 2, pp. 647-663, 2006.
-
are you asking for problems in any area or just geometry? – Michael Jun 30 at 19:44
@Michael: In any area. I'm asking for the sort of problem that when one hears it, the first instinct is to say "that's not known ??!!". – alvarezpaiva Jun 30 at 19:50
As this question has no correct answer, it should be community wiki. – HW Jun 30 at 20:00
The big problems always seem out of reach, until they are solved. – Angelo Jun 30 at 22:01
@François: well, we really don't know whether they ARE completely out of reach ... Mathematicians do not always look for solutions to problems in the place where the problems are, but in the place where other solutions are. A bit like searching for your lost car keys next to the lamp-post instead of where you lost them, because there is more light around the lamp-post ... I don't know how to put this in a title! – alvarezpaiva Jul 1 at 8:45
## 30 Answers
Is every algebraic curve in $\mathbb P^3$ the set-theoretic intersection of two algebraic surfaces? Not known!
-
This is a nice example! – François Brunault Jul 1 at 8:14
Great! Exactly what I was looking for. – alvarezpaiva Jul 1 at 8:45
Can you give a reference where this problem is (partially) analyzed? – Martin Brandenburg Jul 2 at 10:20
@Martin: [Here](thesis.bilkent.edu.tr/0002115.pdf) is a thesis devoted to the theme of complete intersections and which is a nice synthesis of the subject. The very last sentence of that document states the open problem I mention! – Georges Elencwajg Jul 2 at 12:23
Is this known in the complex case? – Steven Gubkin Jul 2 at 18:33
A proof of this conjecture of Erdős would certainly turn heads, raise eyebrows, and garner the attention of the Fields Medal committee.
• If $\sum_{a \in A} \frac 1a$ diverges and $A\subseteq {\mathbb N}_{>0}$, then $A$ contains a 3-term arithmetic progression.
Probably "diverges" can be replaced with "is bigger than 4".
-
This comes up in Waring's Problem, but it is so freakishly simple that it has taken on a life of its own. Let $\{ x \} = x \mod 1 = x-\lfloor x \rfloor$ be the fractional part of $x$.
• Say anything about the sequence $\{ (3/2)^n \}.$
Computations support the thought that the sequence should be uniformly distributed in $[0,1]$, as for almost all $x$ the sequence $\{x^n\}$ is u.d. But with $x=3/2$, there is no value known to be a limit point, nor any value known not to be a limit point; it's unknown if there are two limit points, unknown if the sequence is infinitely often in $[0,1/2)$, or infinitely often not in $[0,1/2)$. Really, nothing is known.
As a final comment on this problem, the golden ratio is special. With $x=\phi=(1+\sqrt 5)/2$, for every $\epsilon>0$ there are only finitely many $n$ with $$\epsilon< \{\phi^n \} < 1-\epsilon.$$
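The phenomenon is easy to poke at numerically, provided one works with exact rationals; floats lose the fractional part entirely after a few dozen steps, since $(3/2)^n$ needs about $0.585\,n$ bits. A sketch (the golden-ratio contrast uses the standard fact that $\phi^n+(1-\phi)^n$ is a Lucas number, hence an integer):

```python
from fractions import Fraction

# Exact fractional parts of (3/2)^n: they look equidistributed,
# but no finite computation can settle anything here.
x = Fraction(3, 2)
for n in range(1, 16):
    print(n, float(x - int(x)))
    x *= Fraction(3, 2)

# Contrast with the golden ratio phi: phi^n + (1-phi)^n is an integer
# and |1-phi| < 1, so {phi^n} is eventually within epsilon of 0 or 1.
```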
-
I'd done some special visualizations of this problem, but due to lack of background in my NT-experience I've not been able to proceed in a meaningful way then. But I find it still an interesting approach to display such a property as tried in that link as part of my earlier Collatz-discussion: go.helms-net.de/math/collatz/aboutloop/… The golden ratio, for instance, shows the significant behaviour you mention with surprising regularity. – Gottfried Helms Jul 1 at 15:17
Actually, it is known that the sequence $\{ (3/2)^n \}$ has infinitely many limit points. More generally, Vijayaraghavan proved in the 40s ("On the fractional parts of the powers of a number. II.") that if $\theta>1$ is algebraic and is not Pisot, then $\{ \theta^n \}$ has infinitely many limit points. – Pablo Shmerkin Jul 2 at 1:16
Sorry about the bad formatting... I meant the sequence of fractional parts of $(3/2)^n$ and $\theta^n$. – Pablo Shmerkin Jul 2 at 1:18
@Pablo: I didn't know that! I'll read up and fix my answer. – Kevin O'Bryant Jul 2 at 3:07
@Kevin: I learned this from a draft of Yann Bugeaud's soon to be published book "Distribution modulo one and Diophantine approximation". There it is mentioned that the proof that there are infinitely many points was found independently by Vijayaraghavan, by Pisot ("La repartition modulo 1 et les nombres algebriques") and by Rédei ("Zu einem Approximationssatz von Koksma"). All of these are from around 1940 and it seems there has been no progress since. – Pablo Shmerkin Jul 2 at 8:29
It is still not known whether the problem of determining whether a linear integer recurrence (of which the Fibonacci recurrence $F_n = F_{n-1}+F_{n-2}$, $F_1=F_0=1$ is the most well known) contains a zero is decidable or not. Even the case of recurrences of depth 6 is currently open. (I discussed this problem at http://terrytao.wordpress.com/2007/05/25/open-question-effective-skolem-mahler-lech-theorem/ .) We do have the famous Skolem-Mahler-Lech theorem that gives a simple criterion as to when the number of zeroes is finite, but nobody knows how to get from that to deciding when there is a zero at all. (This is perhaps the simplest example of a large family of results in number theory in which one has an ineffective finiteness theorem for the number of solutions to a certain number-theoretic problem (in this case, an exponential Diophantine problem), but no way to determine if a solution exists at all. Other famous examples include Faltings' theorem and Siegel's theorem.)
EDIT: See also this survey of Halava-Harju-Hirvensalo-Karhumäki from 2005 on this problem: http://tucs.fi/publications/view/?id=tHaHaHiKa05a
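To make the gap concrete: in general all one can do is search, and a fruitless search proves nothing. A minimal sketch (the two example recurrences are illustrative; in the second one the characteristic roots are $2e^{\pm i\pi/3}$, so $u_n = 2^n\cos(n\pi/3)$ never vanishes, but the search cannot tell us that):

```python
def first_zero(coeffs, init, limit=200):
    """First index n with u_n = 0, where
    u_n = coeffs[0]*u_{n-1} + ... + coeffs[d-1]*u_{n-d}; None if not found."""
    seq = list(init)
    for n, v in enumerate(seq):
        if v == 0:
            return n
    for n in range(len(seq), limit):
        nxt = sum(c * u for c, u in zip(coeffs, reversed(seq)))
        if nxt == 0:
            return n
        seq = seq[1:] + [nxt]
    return None

print(first_zero([1, 1], [2, -1]))  # 3: the sequence runs 2, -1, 1, 0, ...
print(first_zero([2, -4], [1, 1]))  # None -- which by itself proves nothing
```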
-
Here are two from ergodic theory:
• (The problem of smooth realizations) Let $X$ be a Lebesgue space with measure $\mu$, and let $T:X\to X$ be a transformation preserving the measure $\mu$. If the entropy $h_\mu(T)$ is finite, is $(X,T,\mu)$ always measurably isomorphic to a smooth system $(M,f,v)$, where $M$ is a compact manifold, $f$ is a diffeomorphism of $M$ and $v$ is a smooth volume?
• (Furstenberg's $\times 2 \times 3$ problem) Does there exist a Borel probability measure $\mu$ on the unit circle $\mathbb{R}/\mathbb{Z}$, which is neither discrete nor Haar measure, and which is invariant under both $x\to 2x \bmod 1$ and $x\to 3x\bmod 1$?
For the first problem, as far as I know there has been no significant progress.
For Furstenberg's conjecture, Furstenberg himself solved the analogous question for sets (the answer is negative), and Rudolph proved that the answer is negative under an extra positive entropy assumption. While there has been a huge amount of progress in the positive entropy case since, the zero entropy case remains intractable despite the simplicity of the statement.
-
Artin's Conjecture: There are infinitely many primes $p$ for which 2 is a primitive root, i.e., 2 generates the multiplicative group of ${\mathbb Z}/p{\mathbb Z}$.
The conjecture is actually a bit more general, but we should at least be able to say what happens with 2! The OEIS lists the first several such primes.
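Gathering data is straightforward, even though nobody can prove the list goes on forever (a self-contained sketch; the primality test is deliberately naive):

```python
def is_primitive_root(a, p):
    # a generates (Z/pZ)^* iff a^((p-1)/q) != 1 (mod p) for every prime q | p-1
    m, q, prime_factors = p - 1, 2, []
    while q * q <= m:
        if m % q == 0:
            prime_factors.append(q)
            while m % q == 0:
                m //= q
        q += 1
    if m > 1:
        prime_factors.append(m)
    return all(pow(a, (p - 1) // q, p) != 1 for q in prime_factors)

odd_primes = [p for p in range(3, 200, 2)
              if all(p % d for d in range(3, int(p ** 0.5) + 1, 2))]
print([p for p in odd_primes if is_primitive_root(2, p)])
# [3, 5, 11, 13, 19, 29, 37, 53, 59, 61, 67, 83, 101, 107, 131, 139, ...]
```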
-
Thanks Kevin. These are nice problems! – alvarezpaiva Jul 1 at 15:25
Dear Kevin, As you probably know, Hooley proved this contingent on a GRH. Thus it reduces to a more-well known open problem! It could be worth adding this to your answer. Regards, Matthew – Emerton Jul 2 at 12:03
Typo: "on a GRH" should just read "on GRH". (More precisely, he needs the RH for the Dedekind zeta-functions of a certain infinite collection of number fields.) – Emerton Jul 2 at 12:04
Follows from, but doesn't imply. This is (I suspect) much easier than GRH, especially just asking about $2$. – Kevin O'Bryant Jul 2 at 16:52
Dear Kevin, From the point of algebraic number theory (or more specifically, my understanding of Hooley's proof!), $2$ doesn't look so different from other numbers. But maybe from other points of view that's less true. Also, I agree that this seems easier (or, at least, much more specialized) than GRH, but on the other hand, Hooley's proof is quite natural, so viewing it through the GRH lens doesn't seem unreasonable. (E.g. the same method applies to prove many variants --- again dependent on GRH.) Regards, Matthew – Emerton Jul 2 at 22:46
Can we exactly calculate Ramsey numbers? Erdős once famously remarked:
"Suppose aliens invade the earth and threaten to obliterate it in a year's time unless human beings can find the Ramsey number for red five and blue five. We could marshal the world's best minds and fastest computers, and within a year we could probably calculate the value. If the aliens demanded the Ramsey number for red six and blue six, however, we would have no choice but to launch a preemptive attack."
-
There seems to be something wrong with the link. – alvarezpaiva Jul 2 at 12:04
@alvarezpaiva: Thanks! Corrected – Alex R. Jul 2 at 12:46
Normality of numbers. Is Pi normal?
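All one can do is inspect finitely many digits. A sketch assuming the mpmath library is available; the counts come out roughly uniform, which is consistent with normality in base 10 but is, of course, no evidence at all:

```python
from collections import Counter
from mpmath import mp, nstr

mp.dps = 10_005                    # work with ~10,000 digits of precision
digits = nstr(+mp.pi, 10_001)[2:]  # drop the leading "3."
print(sorted(Counter(digits).items()))
# each of the digits 0..9 appears close to 1,000 times
```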
-
en.wikipedia.org/wiki/Normal_number – Martin Brandenburg Jul 2 at 10:22
Is there an algebraic irrational number in the Cantor set?
More generally: Are algebraic irrational numbers normal in all bases?
-
The Bunyakovsky conjecture (or Bouniakowsky conjecture) stated in 1857 by the Russian mathematician Viktor Bunyakovsky, claims that
• an irreducible polynomial of degree two or higher with integer coefficients and positive leading coefficient either generates, for natural arguments, an infinite set of values whose greatest common divisor (gcd) exceeds unity, or it generates infinitely many prime numbers.
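The smallest open instance is already famous on its own: it is not known whether $n^2+1$ is prime infinitely often (one of Landau's problems). Data is easy to come by, as the sketch below shows (naive primality test, illustrative bounds):

```python
def is_prime(m):
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

hits = [n for n in range(1, 200) if is_prime(n * n + 1)]
print(len(hits), hits[:10])  # plenty of examples, but no proof of infinitude
```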
-
Often referred to as "Schinzel's hypothesis" nowadays, though Schinzel is somewhat more general than this. – JSE Jul 2 at 6:04
@JSE: Yes I truly agree. – Chandrasekhar Jul 2 at 6:43
Does $H^\infty(D)$, the Banach space of all bounded holomorphic functions on the unit disc, have Grothendieck's approximation property?
-
Here is an old question by Borel: is there any a priori growth restriction on entire functions $f(z)$ satisfying polynomial differential equations $P(z,f(z),f'(z),\dots,f^{(n)}(z))=0$ where $P$ is a polynomial with complex coefficients in $n+2$ variables?
-
@fedja: $P$ seems to be polynomial in $n+1$ variables, isn't it? – Martin Jul 2 at 13:30
@Martin: No, we have $z$ and the 0th-nth derivative of $f(z)$. – temp Jul 3 at 1:36
Are there infinitely many regular primes? We know there are infinitely many irregular ones, and that their percentage should be much smaller than the regular ones, still it is unproven that the latter are infinite.
Let me recall that a prime $p$ is irregular if it divides the class number of $\mathbb{Q}(\zeta_p)$, the cyclotomic field.
Similarly, we cannot prove that there are infinitely many real quadratic fields of class number $1$.
-
An obvious problem in algebraic topology would be the computation of the homotopy groups of spheres.
-
I'd like the statement of the problem a bit more if it were more concrete: sort of, what's $\pi_5(S^2)$? Actually, what are the smallest $k$ and $n$ for which $\pi_k(S^n)$ is not known? – alvarezpaiva Jul 2 at 18:25
What does "compute" here mean precisely? Do these answers not count: mathoverflow.net/questions/31004 ? – Michael Jul 2 at 19:01
Dear Michael, In this context, "compute" means something like "give a closed form expression for". Regards, – Emerton Jul 2 at 22:39
Dear Emerton, why is there hope that such closed form expression exists for homotopy groups of spheres? – Michael Jul 2 at 23:54
Well, in contrast, it is known by work of Jie Wu that the homotopy groups of $S^2$ are given by the centres of a sequence of combinatorially described groups (math.nus.edu.sg/~matwujie/newnewpis_3.pdf from 2001). I don't know how far we are now from having a combinatorial or algorithmic description of these centres though. – David Roberts Jul 3 at 1:01
Every finite abelian group is (isomorphic to) the class group of the ring of algebraic integers of some number field.
Some comments:
For Dedekind domains this is well-known (even for any abelian group); it is due to Claborn, and Pete L. Clark has an alternate proof/a refinement.
Also a 'geometric analog' is known (Perret, 1999).
And every finite abelian group is at least a subgroup of a class group (even for a cyclotomic field).
It can also be shown that, for a fixed prime $p$, every finite abelian $p$-group is the $p$-Sylow of the class group of the ring of algebraic integers of some number field (by Yahagi, Tokyo J. of Math 1978) and that every finite $p$-group is the Galois group of the maximal unramified $p$-extension of a number field (Ozaki, Inventiones 2011); note that this Galois group coincides with the class group if one adds the condition that it be abelian, by Class Field Theory.
ps. Not sure this passes all (or any) of the criteria; I'll let you decide :)
ps2. Searching for a reference, I found this math.SE question on exactly this http://math.stackexchange.com/questions/10949/finite-abelian-groups-as-class-groups
-
I don't think that this problem qualifies. On the other hand, the question is quite vague ... – Martin Brandenburg Jul 1 at 7:36
1
Also, every finite abelian group is an $S$-class group of a number field. – Timo Keller Jul 1 at 8:05
@Martin Brandenburg: in particular, since there are several criteria to be met, could you elaborate on which one(s) is/are not met, in your opinion? – quid Jul 1 at 8:33
Schinzel-Sierpinski Conjecture
Melvyn Nathanson, in his book Elementary Methods in Number Theory (Chapter 8: Prime Numbers) states the following:
• A conjecture of Schinzel and Sierpinski asserts that every positive rational number $x$ can be represented as a quotient of shifted primes, i.e., that $x=\frac{p+1}{q+1}$ for primes $p$ and $q$. It is known that the set of shifted primes generates a subgroup of the multiplicative group of rational numbers of index at most $3$.
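A brute-force search makes the conjecture concrete for any given rational. A minimal sketch, assuming sympy for primality testing; the function and its name are just for illustration:

```python
from fractions import Fraction
from sympy import isprime

def shifted_prime_quotient(x, limit=10**6):
    """Search for primes p, q with x = (p+1)/(q+1)."""
    x = Fraction(x)
    for q in range(2, limit):
        if not isprime(q):
            continue
        p = x * (q + 1) - 1
        if p.denominator == 1 and p > 1 and isprime(int(p)):
            return int(p), q
    return None  # no witness found below the limit

print(shifted_prime_quotient(Fraction(3, 5)))  # (11, 19): 12/20 == 3/5
```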
-
From the Overview of the Royal Danish Sciences Institution's work and its members' work in the year 1882.
In the notes from a meeting on March 9th 1877, after discussing papers by Legendre, J. W. L. Glaisher, and Meissel, Oppermann stated:
At the same occasion, I made people aware of the not yet proven conjecture, that when $n$ is a whole number $>1$, at least one prime number lies between $n(n-1)$ and $n^2$ and also between $n^2$ and $n(n+1)$.
A solution to Oppermann's Conjecture leads to simple solutions to Legendre's, Brocard's, and Andrica's Conjectures.
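The conjecture is easy to test numerically for small $n$. A quick sketch assuming sympy's primality test (verifying small cases proves nothing in general, of course):

```python
from sympy import isprime

def oppermann_holds(n):
    """Is there a prime strictly between n(n-1) and n^2, and between n^2 and n(n+1)?"""
    left = any(isprime(m) for m in range(n * (n - 1) + 1, n * n))
    right = any(isprime(m) for m in range(n * n + 1, n * (n + 1)))
    return left and right

assert all(oppermann_holds(n) for n in range(2, 500))
```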
-
That's a neat one! – alvarezpaiva Jul 2 at 12:02
1
Or, to ask a stronger conjecture, is it true that for any positive real $\epsilon$, there is a natural number $N$ such that if $N < n$ then there is a prime between $n$ and $n + n^\epsilon$. – Zsbán Ambrus Jul 2 at 16:40
Sendov's Conjecture
For a polynomial $$f(z) = (z-r_{1}) \cdot (z-r_{2}) \cdots (z-r_{n}) \quad \text{for} \ \ \ \ n \geq 2$$ with all roots $r_{1}, ..., r_{n}$ inside the closed unit disk $|z| \leq 1$, each of the $n$ roots is at a distance no more than $1$ from at least one critical point of $f$.
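The conjecture is also easy to probe numerically. A sketch using numpy (floating-point root finding, so this is only a sanity check on random instances, not evidence of anything):

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(1000):
    n = int(rng.integers(2, 9))
    roots = rng.uniform(-1, 1, n) + 1j * rng.uniform(-1, 1, n)
    roots /= np.maximum(1, np.abs(roots))   # push every root into the closed unit disk
    f = np.poly(roots)                      # coefficients of prod (z - r_i)
    crit = np.roots(np.polyder(f))          # critical points of f
    assert all(np.abs(r - crit).min() <= 1 + 1e-8 for r in roots)
```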
-
Chromatic Number of the Plane (Hadwiger-Nelson Problem): What is the minimum number of colors required to color the plane so that no two points which are unit distance apart are the same color? Let $\chi$ denote this number. The current bounds on $\chi$ are
$$4 \leq \chi \leq 7.$$
-
I'm not sure what your threshold for "barely mentioned in the literature" is, since some of the highly-voted answers seem rather well known to me, but here's one that is certainly fundamental, seemingly out of reach, and perhaps not so well known except to complexity theorists.
Describe explicitly a Boolean function whose minimum circuit size is superlinear.
A simple counting argument shows that almost all Boolean functions require exponentially large circuits to express. However, giving explicit examples is another matter. Here, "explicit" is a bit vague, but let's say for example that it means that the truth table can be computed in time polynomial in the size of the truth table. Thus NP-complete Boolean functions count as "explicit," and proving superpolynomial circuit lower bounds for them would separate P from NP, but even if we weaken the requirement to a superlinear lower bound on any explicit function, nobody seems to have any clue.
-
The Eilenberg-Ganea conjecture. Recall that the cohomological dimension $\text{cd}(G)$ of a discrete group $G$ is the maximal $n$ such that there exists a $G$-module $M$ with $H^n(G;M) \neq 0$. The geometric dimension $\text{gd}(G)$ of $G$ is the smallest $n$ such that $G$ has a $K(G,1)$ which is an $n$-dimensional CW complex. It is elementary that $\text{cd}(G) \leq \text{gd}(G)$. Moreover, if $\text{cd}(G) \neq 2$, then it is classical that $\text{cd}(G) = \text{gd}(G)$. The Eilenberg-Ganea conjecture says that this also holds if $\text{cd}(G)=2$. It is known, by the way, that if $\text{cd}(G)=2$ then $2 \leq \text{gd}(G) \leq 3$.
The only progress that I know of concerning this is a deep theorem of Bestvina and Brady that says that the Eilenberg-Ganea conjecture and the Whitehead asphericity conjecture cannot both be true.
-
The Whitehead asphericity conjecture. Let $X$ be a $2$-dimensional aspherical simplicial complex and let $Y \subset X$ be a connected subcomplex. The conjecture then is that $Y$ is aspherical.
Very little is known about this, but a deep theorem of Bestvina and Brady says that the Eilenberg-Ganea conjecture and the Whitehead asphericity conjecture cannot both be true.
-
Is this just in two dimensions, or are you giving us the simplest unknown case? – alvarezpaiva Jul 3 at 5:12
@alvarezpaiva : It's something special about dimension $2$. It's easy to come up with counterexamples in higher dimensions; for instance, you can triangulate $\mathbb{R}^3$ so that it contains $S^2$ as a subcomplex. – Andy Putman Jul 3 at 16:02
The complex is finite? – Alexander Chervov Jul 10 at 4:59
@Alexander Chervov : No. – Andy Putman Jul 14 at 5:51
Gelfand's problem: can you find a closed, proper unital subalgebra A of C[0,1] such that the natural map from [0,1] to the character space of A is bijective?
(See for instance these notes of Feinstein.)
-
Is every complemented subspace of $C[0,1]$ isomorphic to $C(K)$ for some compact metric space $K$?
Is every infinite dimensional complemented subspace of $L_1[0,1]$ isomorphic either to $L_1[0,1]$ or to $\ell_1$?
-
Here is a variation of Georges Elencwajg's question, due to Gennady Lyubeznik. Is every closed point (of arbitrary degree over $\mathbb{Q}$) in $\mathbb{P}^2_{\mathbb{Q}}$ set-theoretically the intersection of two curves?
-
Is every finite lattice a congruence lattice of a finite (universal) algebra?
Astonishingly, by Pálfy and Pudlák, this question is equivalent to a question in group theory: is every finite lattice isomorphic to an interval of the subgroup lattice of a finite group?
-
Let $G$ be a finite group. We define $r(G)$ to be the smallest number of relations possible in a presentation of $G$ with the minimal number of generators. If $G$ is a $p$-group, we can also consider "pro-$p$ presentations" of $G$ (using the free objects in the category of pro-$p$ groups); we write $r_p(G)$ for the smallest number of relations possible in a pro-$p$ presentation with the minimal number of generators.
Does $r(G) = r_p(G)$?
-
P vs NP
According to Leonid Levin (via Scott Aaronson), Richard Feynman could not be convinced that this was actually an open problem.
-
3
Thanks JeffE, but this is a well-known and documented problem with a million dollar check attached to it. I'm looking for problems that are almost embarrassingly natural and that are not often mentioned in the literature. – alvarezpaiva Jul 1 at 8:50
What about the Goldbach conjecture, asking whether every even natural number is the sum of two primes?
Another quite famous problem is Collatz' conjecture (also known as $3n+1$ problem), see http://en.wikipedia.org/wiki/Collatz_conjecture: consider the algorithm taking $n\in\mathbb{N}$ and sending it to $n/2$ if $n$ is even, and to $3n+1$ if $n$ is odd, iteratively. The question is whether the algorithm always ends up producing the loop $1\mapsto 3\cdot 1+1=4\mapsto 2\mapsto 1\mapsto 4\dots$ regardless of the initial input $n$.
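For concreteness, the iteration is a couple of lines of Python; checking finitely many inputs decides nothing, but it shows how simply the problem is stated:

```python
def collatz_steps(n):
    """Number of steps for n to reach 1 under the 3n+1 map (assuming it does)."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

print(max(collatz_steps(n) for n in range(1, 10**5)))  # finishes quickly in practice
```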
-
This is a well-known problem (by everybody), with a bit of literature, including a novel, behind it. I'm looking for the sort of problem that experts in some field would know, that may even constitute some sort of holy grail in the field, but that has not been very publicized in the literature because it is so hard that there are few if any partial solutions. – alvarezpaiva Jul 2 at 5:14
Ok, I am sorry, at first glance I did not notice that you were asking for problems which were barely known. I apologize for the bad answer. I edited it, adding another problem: quite well-known in elementary number theory, but maybe not so well-known to everybody. – Filippo Alberto Edoardo Jul 2 at 9:18
Can one transform a particular closed knotted piece of rope into another one?
-
Do you mean the study of knot types when the length and thickness of the knot must be kept constant during the isotopy? Any reference for this problem or at least for a hint of it? – alvarezpaiva Jul 1 at 19:20
I mean a classification of knots under isotopy. I just formulated it in a physical setting to show its simple nature. The physical-type problem is also unsolved. There is some development here, including algorithms, depending on the thickness of a knot. – Andrew Jul 1 at 20:26
2
It's still not clear to me what question you are asking. – Douglas Zare Jul 2 at 5:50
2
I'm confused. Haken's algorithm determines whether two knot-complements are homeomorphic and hence, by the Gordon--Luecke Theorem, whether the two knots are isotopic (modulo orientation, admittedly). – HW Jul 2 at 10:13
2
I still don't know which problem is being suggested, but I think that with all of the progress on knot theory and geometrization, it would be very strange to say, "The solution seems completely out of reach." – Douglas Zare Jul 3 at 0:49
http://mathoverflow.net/questions/50186?sort=votes

## If the Riemann Hypothesis fails, must it fail infinitely often?
That is, must there be either no non-trivial zeros off the critical line, or infinitely many?
I'm sure that no one believes otherwise, but I've never seen a theorem in the literature addressing this. Folklore perhaps?
-
4
I would bet five hundred dollars that this is wide open, and won't be resolved until RH itself is resolved. – David Hansen Dec 23 2010 at 0:09
1
I think it is true that certain L-functions can have only finitely many zeroes on the critical line, although it is well-understood where the zeroes come from. One may construct L-functions associated to the Laplacian on an arithmetic manifold, and the zeroes come from small eigenvalues of the Laplace operator. I'm not sure if there is any proposed analogy though in the case of the Riemann-zeta function. – Agol Dec 23 2010 at 1:05
Agol, do you mean "off the critical line"? – David Hansen Dec 23 2010 at 2:33
1
@David: Yes, I meant off the critical line - they are actually real zeroes. – Agol Dec 23 2010 at 3:28
3
What we need is a proof that not-RH => finitely many, and another proof that not-RH => infinitely many. :-) – Mitchell Porter Dec 24 2010 at 0:34
## 4 Answers
To elaborate on my comment, I really do think this won't be resolved until RH itself is resolved, or at least until new tools are brought onto the scene, because it's so far from what we know about the zeros by current technology. All the theorems we have about zeros of the zeta function - e.g., the zero-free region, the zero-density theorems, Montgomery's conditional results on pair correlation, Selberg's theorems on $S(t)$ - are either incredibly weak compared to what we expect to be true (Iwaniec often refers to zero-density theorems as "estimates for the cardinality of the empty set"), or are theorems "in the large", that is to say they deal in whole masses of zeros as opposed to individual zeros. Conventional harmonic analysis is simply unable to grasp individual zeros, for reasons of the uncertainty principle.
The closest reference I know to your question is a paper of Bombieri, "Remarks on Weil's quadratic functional in the theory of prime numbers", where he shows that if the RH is false for only a finite number of zeros, then very odd things follow...
Edit: Thinking about this question a bit more has led me to a conjecture that seems quite a bit weaker than anything approaching RH and which essentially implies the impossibility of finitely many failures.
-
This does not answer your question, but recall that the explicit formula reads
$$\psi(x) = x - \sum_{\rho} \frac{x^{\rho}}{\rho} - \log(2 \pi) - \log(1 - x^{-2})/2.$$
On RH, this implies that $\psi(x) = x + O(x^{1/2} \log^2 x)$. If RH were false with infinitely many exceptions, you'd get a bigger error term which would potentially behave erratically and "randomly". But if RH were false with finitely many exceptions, then you would get
$$\psi(x) = x - \sum_{\rho} \frac{x^{\rho}}{\rho} + O(x^{1/2} \log^2 x),$$
with the sum over the exceptions -- a distinct secondary term and therefore weird oscillatory behavior in the frequency of the primes. Although this has not been ruled out to my knowledge (for example you see terms for Siegel zeroes of Dirichlet $L$-functions all over the literature), it does seem unlikely.
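One can experiment with the truncated explicit formula numerically. A rough sketch assuming mpmath, whose `zetazero(k)` returns the $k$-th nontrivial zero $1/2 + it_k$ (the truncated sum converges slowly and oscillates, so this is only illustrative):

```python
from mpmath import mp, zetazero, log, pi

mp.dps = 15

def psi(x):
    """Chebyshev's psi(x) = sum of Lambda(n) over n <= x, by brute force."""
    total = mp.mpf(0)
    for n in range(2, int(x) + 1):
        m, p = n, None
        for q in range(2, int(n ** 0.5) + 1):
            if m % q == 0:
                p = q
                break
        if p is None:
            total += log(n)              # n is prime
        else:
            while m % p == 0:
                m //= p
            if m == 1:
                total += log(p)          # n is a prime power p^k
    return total

def psi_explicit(x, nzeros=100):
    """x minus the sum over the first nzeros conjugate pairs of zeros, plus lower-order terms."""
    x = mp.mpf(x)
    s = mp.mpf(0)
    for k in range(1, nzeros + 1):
        rho = zetazero(k)                # k-th zero 1/2 + i t_k
        s += (x ** rho / rho).real       # its conjugate contributes the same real part
    return x - 2 * s - log(2 * pi) - log(1 - x ** -2) / 2

print(psi(100.5), psi_explicit(100.5))   # already reasonably close
```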
-
Very nice insight! – timur Dec 23 2010 at 16:47
To quote from "Problems of the Millennium: the Riemann Hypothesis" by Bombieri:
It is known that hypothetical exceptions to the Riemann hypothesis must be rare if we move away from the line $\Re(s) = \frac{1}{2}$.
Let $N(\alpha, T)$ be the number of zeros of $\zeta(s)$ in the rectangle $\alpha \leq \Re(s) \leq 2$, $0 \leq \Im(s) \leq T$. The prototype result goes back to Bohr and Landau in 1914, namely $N(\alpha, T) = O(T)$ for any fixed $\alpha$ with $\frac{1}{2} < \alpha < 1$. A significant improvement of the result of Bohr and Landau was obtained by Carlson in 1920, obtaining the density theorem $N(\alpha, T) = O(T^{4\alpha(1−\alpha)+\epsilon})$ for any fixed $\epsilon > 0$. The fact that the exponent here is strictly less than $1$ is important for arithmetic applications, for example in the study of primes in short intervals. The exponent in Carlson’s theorem has gone through several successive refinements for various ranges of $\alpha$, in particular in the range $\frac{3}{4} < \alpha < 1$. Curiously enough, the best exponent known up to date in the range $\frac{1}{2} < \alpha \leq \frac{3}{4}$ remains Ingham’s exponent $3(1 − \alpha)/(2 − \alpha)$, obtained in 1940.
-
So, this doesn't actually answer the question, right? – Gerry Myerson Dec 22 2010 at 22:19
Sure. It might well be an open and very difficult question. I would love to see an expert answer clarifying the issue a bit more. – Andrey Rekalo Dec 22 2010 at 22:29
On a somewhat related note I would like to call attention to the following papers:
A. Booker, Poles of Artin $L$-functions and the strong Artin conjecture, Annals of Math. 158 (2003), 1089-1098.
Here the author proves that for 2-dimensional Galois representations $\rho$ the Artin conjecture implies the strong Artin conjecture by showing that if some character twist $L(s,\rho\otimes\chi)$ has a pole then $L(s,\rho)$ has infinitely many poles (hence in fact all twists have infinitely many poles).
P. Sarnak, A. Zaharescu, Some remarks on Landau-Siegel zeros, Duke Math. J. 111 (2002), 495-507.
Here the authors prove that if all zeros of all quadratic Dirichlet $L$-functions are on the critical line or on the real axis, then the possible real zeros are much farther from $s=1$ than we can prove at present without any hypothesis.
-
http://math.albany.edu/~hammond/mmlmisc/bordermx2t.html

# Another BorderMatrix Example
Make the whole thing a $4×4$ ordinary table using table rules with inline math in the cells except for the first row and first column:
|   | a | b | c |
|---|---|---|---|
| x | $1$ | $2$ | $3$ |
| y |   | $22$ | $33$ |
| z |   |   | $333$ |
http://mathoverflow.net/revisions/35090/list

# Practical use of probability amplification for randomized algorithms
Normally a 2-sided error randomized algorithm will have some constant error $\varepsilon < 1/2$. We know that we can replace the error term by any inverse polynomial, and the inverse polynomial can be replaced by an inverse exponential. Say that we have an algorithm $A$ with $\varepsilon_A=1/p(n)$ for some polynomial $p$ that runs in $T(n)$ steps; by repeating the algorithm $O(\log \frac{1}{\varepsilon})$ times we obtain an algorithm $B$ with success probability close to 1, but with a logarithmic overhead.
My question is:
(1) If the error decreases polynomially faster, for practical purposes, do we still need to repeat the algorithm several times? Because if we do so we get a logarithmic term (which is not desired), but leaving it as it is, the algorithm will still have a success probability close to 1 for sufficiently large $n$.
(2) What about an exponentially faster decreasing error? Here it seems that we don't need to repeat the algorithm at all.
The same questions apply for 1-sided and 0-sided errors.
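For reference, the standard amplification is a few lines. A sketch, where `A` stands for any two-sided-error decision procedure with error `eps < 1/2` (the names are mine, not from the question); the repetition count comes from a Chernoff–Hoeffding bound:

```python
from math import ceil, log

def amplify(A, x, eps, target):
    """Majority vote over enough runs of A(x) that the error is <= target.
    Hoeffding gives error <= exp(-2 t (1/2 - eps)^2), which is <= target
    once t >= log(1/target) / (2 (1/2 - eps)^2)."""
    t = ceil(log(1 / target) / (2 * (0.5 - eps) ** 2))
    votes = sum(A(x) for _ in range(t))
    return votes > t / 2
```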
http://cs.stackexchange.com/questions/8970/how-to-prove-a-problem-is-np-complete

# How to prove a problem is NP-complete?
Consider the following problem: Given two graphs $G_1 = (V_1, E_1)$ and $G_2 = (V_2, E_2)$ and some non-negative integer $k \in \mathbb{N}$, is it possible to delete at most $k$ vertices from $G_1$ to obtain $G_1'$ such that $G_1' \cong G_2$, i.e. the resulting graph is isomorphic to $G_2$.
I have to show that this problem is NP-complete.
Can somebody help me with this problem? It is school homework and I don't know how to solve it.
-
What have you tried? Do you know how one usually proves that a problem is NP-complete? – Yuval Filmus Jan 16 at 13:08
I have to find another NP-complete problem and reduce it to this one, but I cannot think of a way to transform this problem into another problem. – Charlie Jan 16 at 13:11
3
Good thing you don't have to: you have to transform instances of the other problem into ones of this problem. – Raphael♦ Jan 16 at 15:48
2 Answers
Here's a hint: Consider Vertex Cover, which is NP-complete. If $G = (V,E)$ has a vertex cover of size $k$, then you can remove $k$ vertices from $G$ to obtain $G'$ such that $G'$ is an edgeless graph on $n-k$ vertices.
Edit: The correct reduction (by using Vertex Cover) shows the problem to be NP-hard. Given the correct witnesses, you can easily show that it is in NP.
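Spelled out, the reduction behind the hint is tiny. A sketch assuming networkx (my own code, not from the answer). Note that deleting at most $k$ vertices from $G_1$ to reach an $(n-k)$-vertex graph forces exactly $k$ deletions, and a cover of size $j < k$ can always be padded with $k-j$ extra vertices, so the equivalence goes both ways:

```python
import networkx as nx

def vc_to_deletion_instance(G, k):
    """Map a Vertex Cover instance (G, k) to a deletion-isomorphism instance.
    G has a vertex cover of size <= k  iff  one can delete at most k vertices
    from G1 so that the result is isomorphic to G2."""
    G1 = G.copy()
    G2 = nx.empty_graph(G.number_of_nodes() - k)   # edgeless graph on n - k vertices
    return G1, G2, k
```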
-
Your problem is one version of the famous subgraph isomorphism problem, which is NP-complete. It is a computational task in which two graphs G and H are given as inputs, and one must determine whether G contains a subgraph that is isomorphic to H. There are many solutions on the web for this problem.
-
1
I do not think that there are many versions of subgraph isomorphism problem. The subgraph isomorphism problem is exactly the one you described: given graphs G_1 and G_2, decide whether G_1 contains a subgraph that is isomorphic to G_2. And this is different from the problem stated in the question. In the problem stated in the question, the task is to decide whether G_1 contains an induced subgraph that is isomorphic to G_2. – Tsuyoshi Ito Jan 29 at 3:48
http://mathoverflow.net/questions/79927/which-n-maximize-gn-frac-sigmann-log-log-n

## Which $n$ maximize $G(n)=\frac{\sigma(n)}{n \log \log n}$?
By Robin's theorem
$$G(n)=\frac{\sigma(n)}{n \log \log n}$$
is bounded by $e^\gamma \approx 1.78107241799$ for $n>5040$, assuming the Riemann hypothesis.
For $n=\mathrm{lcm}(1,2,\dots,k)$, $G(n)$ appears to be generally increasing as $k$ increases, reaching $\approx 1.781063$ for $n=\mathrm{lcm}(1,2,\dots,10^8)$, which is relatively close to the bound.

Empirically, a slightly better choice appears to be $n=\mathrm{lcm}(1,2,\dots,k) \prod_{\text{prime } p,\ p<\log k} p$ -- again, $G(n)$ generally increases as $k$ increases.

Using pari/gp reals with moderate precision so far agrees with the integer results and is significantly faster. With reals I got the (possibly incorrect) value $\approx 1.781072152947513062$.
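For reproducibility, here is one way to evaluate $G(\mathrm{lcm}(1,\dots,k))$ without building the huge integer, working from the factorization $\mathrm{lcm}(1,\dots,k)=\prod_{p\le k}p^{\lfloor \log_p k\rfloor}$. A sketch assuming sympy's `primerange`; plain floats, so only moderate precision:

```python
from math import log
from sympy import primerange

def G_lcm(k):
    ratio = 1.0          # sigma(n)/n, multiplicative over prime powers
    logn = 0.0           # log n
    for p in primerange(2, k + 1):
        a, q = 0, p
        while q <= k:    # a = largest exponent with p^a <= k
            a += 1
            q *= p
        ratio *= (p ** (a + 1) - 1) / (p ** a * (p - 1))
        logn += a * log(p)
    return ratio / log(logn)

print(G_lcm(10**5))      # creeps up toward e^gamma ~ 1.7810724...
```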
Which $n$ maximize $G(n)=\frac{\sigma(n)}{n \log \log n}$?
-
That's not what Robin's theorem says, and in any case the opening paragraph is nonsensical as $a$ is not defined. Possibly, what you are trying to ask is this: define $G(n)$, and then set $a(0)=3$ and $a(k)$ to be the least integer $x$ with $x>a(k-1),G(x)>G(a(k-1))$. Give other descriptions of the sequence $a(k)$. Please clarify! – Kevin O'Bryant Nov 3 2011 at 14:19
@Kevin thank you, will try to clarify. I mean $a(n)$ is some sequence, and I gave an example with $a(n)=\mathrm{lcm}(1..n)$. What's wrong with Robin's theorem? $G(n)$ must be $<e^\gamma$. – joro Nov 3 2011 at 14:33
...assuming RH. – joro Nov 3 2011 at 14:34
Thank you again Kevin, tried to clarify. a(n) was indeed confusing. – joro Nov 3 2011 at 15:11
3
Anyway, $\lim_{k\to\infty}G(\mathrm{lcm}(1,\dots,k))=e^\gamma$ follows from Mertens’ theorems. – Emil Jeřábek Nov 3 2011 at 16:11
## 3 Answers
First, from a more detailed theorem of Robin we have an unconditional result (1984) that says that your ratio of interest is, for $n \geq 13,$ smaller than $$e^\gamma + \frac{0.64821364942...}{(\log \log n)^2},$$ with the constant in the numerator giving equality for $n=12$, from which it follows that your supremum is achieved for some $n,$ perhaps 5040 itself, but almost certainly with $n \leq 5040.$ Note that $\log \log n$ first exceeds 1 for $n \geq 16$ so we should probably include that as a hypothesis.
Second, it is a virtual certainty that the maximum will be achieved by a colossally abundant number, see COLOSSUS
There is a recipe for these: given some $\delta > 0,$ the prime factorization of the (largest, if more than one) number $n$ that maximizes $$\frac{ \sigma(n)}{n^{1 + \delta}}$$ is given by an explicit formula for each prime's exponent involving the floor function. I will see if I can find that; meanwhile, just look at the sequence A004490 in OEIS. The recipe should be written out in Alaoglu and Erdos (1944). Indeed, Ramanujan included these numbers in his original article on highly composite numbers, but that section was not included in the publication owing to shortages of paper at the time (1915). The full manuscript was published in the Ramanujan Journal in 1997.
EDIT: I was able to download Alaoglu and Erdos, given some $\delta > 0,$ the correct exponent for some prime $p$ is $$\left\lfloor \frac{\log (p^{1 + \delta} - 1) - \log(p^\delta - 1)}{\log p} \right\rfloor \; - \; 1.$$
This is Theorem 10 on page 455. For a fixed $\delta,$ the exponents either stay the same or decrease for increasing $p,$ and eventually the exponent 0 is reached, so there is your complete number. For a fixed $p,$ the exponent either stays the same or increases with decreasing $\delta.$
I'm not seeing any lists that show $\delta$ and the result, so here, if I call $f(\delta)$ the corresponding colossally abundant number for $\delta,$ I calculate $$f(1) = 1, \; f(1/2) = 2, \; f(1/4) = 6, \; f(1/6) = 12, \; f(1/10) = 60, \; f(1/12) = 120,$$ then $$f(1/14) = 360, \; f(1/17) = 2520, \; f(1/25) = 5040, \; f(1/31) = 55440, \; f(1/39) = 720720,$$ and so on as $\delta$ decreases.
If you want the first (largest) $\delta$ for which a favorite prime $p$ gets assigned exponent $k,$ let $$\delta = \frac{\log(p^{k+1} - 1) - \log(p^{k+1} - p)}{\log p}$$
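The recipe turns directly into code. A sketch assuming sympy's `nextprime` (floating-point logarithms, so very small $\delta$ may need extra precision):

```python
from math import floor, log
from sympy import nextprime

def colossally_abundant(delta):
    """Build f(delta) from the Alaoglu-Erdos exponent formula quoted above."""
    n, p = 1, 2
    while True:
        a = floor((log(p ** (1 + delta) - 1) - log(p ** delta - 1)) / log(p)) - 1
        if a <= 0:            # exponents are non-increasing in p, so we can stop
            break
        n *= p ** a
        p = nextprime(p)
    return n

for d in [1, 1/2, 1/4, 1/6, 1/10, 1/12, 1/14, 1/17, 1/25]:
    print(d, colossally_abundant(d))   # reproduces 1, 2, 6, 12, 60, 120, 360, 2520, 5040
```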
-
Thank you...... – joro Nov 4 2011 at 5:51
2
I don't believe your argument in the first paragraph is correct (or maybe I don't understand it). Just because there's an upper bound for the ratio that's decreasing to $e^\gamma$ doesn't mean that the supremum is achieved for some $n$. For example, $1+1/n$ is an upper bound for the sequence $\{1-1/n\}$, but the supremum is not achieved. – Greg Martin Nov 4 2011 at 8:06
I may have coded CA numbers wrong, but lcm gives better results for the ranges I can test. – joro Nov 4 2011 at 15:55
If joro is allowing numbers below 5040, as Emil points out, there are several numbers with large $G$ such as $G(60) \approx 1.986369.$ As Emil points out, this means the maximum is achieved, for $n=3$ if we permit such small $n.$ I can add that, if RH is wrong, there exists a CA number $n$ larger than 5040 with $G(n) > e^\gamma.$ This does not mean that CA numbers give the "best" easily-calculated sequence to get large values of $G(n)$ when $n > 5040.$ But $G(n)$ can be calculated for, essentially, arbitrarily large CA numbers. – Will Jagy Nov 4 2011 at 20:09
Better implementation of CA gave much better results. – joro Nov 7 2011 at 12:39
The question is still unclear to me, therefore I will outline what the options are.
The function $G(n)$ (well-defined for integers $n>1$, and positive for $n>3$) has the following properties:
• Grönwall’s theorem: $$\limsup_{n\to\infty}G(n)=e^\gamma.$$ For a concrete infinite sequence $\{n_k\}$ for which $G(n_k)$ tends to $e^\gamma$, one may take $n_k=\mathrm{lcm}(1,\dots,k)$.
• There are a bunch of numbers $n\le5040$ for which $G(n)>e^\gamma$.
• Robin’s theorem: If the Riemann hypothesis holds, then $G(n)< e^\gamma$ for all $n>5040$. If the Riemann hypothesis fails, there are infinitely many $n$ such that $G(n)>e^\gamma$ (and even $G(n)>e^\gamma+c/(\log n)^\beta$ for some constants $c,\beta>0$).
• Another Robin’s theorem: $G(n)< e^\gamma+0.6482/(\log\log n)^2$ for all $n>1$ (unconditionally).
Therefore, there are the following possibilities:
1. Joro wants to find $n>1$ where $G(n)$ is maximal.
The answer is $\mathbf{n=3}$, which gives $G(3)\approx14.177183749182$. All other $G(n)$ are smaller by the unconditional Robin’s theorem above.
2. Joro wants to find $n>5040$ where $G(n)$ is maximal.
Case A: Riemann hypothesis holds.
The supremum of the sequence is $e^\gamma$, but all its elements are strictly smaller. Thus, there is no maximum.
Case B: Riemann hypothesis fails.
Let $n_0>5040$ be such that $G(n_0)>e^\gamma$. Then $G(n)< G(n_0)$ for all but finitely many $n$ by Grönwall’s theorem, hence the sequence does have a maximum, which is greater than $e^\gamma$. The point $n$ where it is achieved must be quite large, and cannot be exhibited explicitly at present, since this would amount to disproving the Riemann hypothesis. There may be more than one $n$ achieving the maximal $G(n)$, but in any case there are only finitely many such $n$.
3. Joro wants to exhibit an infinite sequence of $n$ on which $G(n)$ tends to $e^\gamma$.
(He already said he does not, but I think it’s actually a more natural question than the other two readings above.) There is no unique answer, one possibility is to take the numbers $n=\mathrm{lcm}(1,\dots,k)$ as above.
4. Joro wants something else,
in which case he should state the question more clearly.
-
1
I believe that Joro wants an infinite sequence of n, such that G(n) tends to $e^{\gamma}$ 'as quickly as possible' – Woett Nov 4 2011 at 15:59
What is “as quickly as possible”? I don’t see how this is a well-defined problem. – Emil Jeřábek Nov 4 2011 at 19:08
Grönwall's theorem (http://mathworld.wolfram.com/GronwallsTheorem.html) and the Riemann hypothesis suggest no $n>5040$ will satisfy $G(n)=e^{\gamma}$, although $\limsup G(n)=e^{\gamma}$.
-
1
You have just restated Robin's theorem, which the OP linked to in the question. – David Loeffler Nov 3 2011 at 16:47
http://en.wikipedia.org/wiki/Beer-Lambert_law

# Beer–Lambert law
Figure: An example of the Beer–Lambert law: green laser light in a solution of Rhodamine 6B. The beam intensity becomes weaker as it passes through the solution.
In optics, the Beer–Lambert law, also known as Beer's law or the Lambert–Beer law or the Beer–Lambert–Bouguer law (named after August Beer, Johann Heinrich Lambert, and Pierre Bouguer) relates the absorption of light to the properties of the material through which the light is travelling.
## Equations
The law states that there is a logarithmic dependence between the transmission (or transmissivity), T, of light through a substance and the product of the absorption coefficient of the substance, α, and the distance the light travels through the material (i.e., the path length), ℓ. The absorption coefficient can, in turn, be written as a product of either a molar absorptivity (extinction coefficient) of the absorber, ε, and the molar concentration c of absorbing species in the material, or an absorption cross section, σ, and the (number) density N' of absorbers.
For liquids, these relations are usually written as:
$T = {I\over I_{0}} = 10^{-\alpha\, \ell} = 10^{-\varepsilon\ell c}$
whereas for gases, and in particular among physicists and for spectroscopy and spectrophotometry, they are normally written
$T = {I\over I_{0}} = e^{-\alpha'\, \ell} = e^{-\sigma \ell N}$
where $I_0$ and $I$ are the intensity (or power) of the incident light and the transmitted light, respectively; σ is cross section of light absorption by a single particle and N is the density (number per unit volume) of absorbing particles.
The base 10 and base e conventions must not be confused because they give different values for the absorption coefficient: $\alpha\neq\alpha'$. However, it is easy to convert one to the other, using
$\alpha' = \alpha \ln(10)\approx 2.303\alpha. \,$
The transmission (or transmissivity) is expressed in terms of an absorbance which, for liquids, is defined as
$A = -\log_{10} \left( \frac{I}{I_0} \right)$
whereas, for gases, it is usually defined as
$A' = -\ln \left( \frac{I}{I_0} \right).$
This implies that the absorbance becomes linear with the concentration (or number density of absorbers) according to
$A = \varepsilon \ell c = \alpha\ell \,$
and
$A' = \sigma \ell N = \alpha' \ell \,$
for the two cases, respectively.
Thus, if the path length and the molar absorptivity (or the absorption cross section) are known and the absorbance is measured, the concentration of the substance (or the number density of absorbers) can be deduced.
Although several of the expressions above often are used as Beer–Lambert law, the name should strictly speaking only be associated with the latter two. The reason is that historically, the Lambert law states that absorption is proportional to the light path length, whereas the Beer law states that absorption is proportional to the concentration of absorbing species in the material.[1]
If the concentration is expressed as a mole fraction, i.e., a dimensionless fraction, the molar absorptivity (ε) takes the same dimension as the absorption coefficient, i.e., reciprocal length (e.g., m$^{-1}$). However, if the concentration is expressed in moles per unit volume, the molar absorptivity (ε) is used in L·mol$^{-1}$·cm$^{-1}$, or sometimes in converted SI units of m$^2$·mol$^{-1}$.
The absorption coefficient α' is one of many ways to describe the absorption of electromagnetic waves. For the others, and their interrelationships, see the article: Mathematical descriptions of opacity. For example, α' can be expressed in terms of the imaginary part of the refractive index, κ, and the wavelength of the light (in free space), λ0, according to
$\alpha' = \frac{4 \pi \kappa}{\lambda_{0}}.$
In molecular absorption spectrometry, the absorption cross section σ is expressed in terms of a linestrength, S, and an (area-normalized) lineshape function, Φ. The frequency scale in molecular spectroscopy is often in cm$^{-1}$, wherefore the lineshape function is expressed in units of 1/cm$^{-1}$, which can look funny but is strictly correct. Since N is given as a number density in units of 1/cm$^3$, the linestrength is often given in units of cm$^2$cm$^{-1}$/molecule. A typical linestrength in one of the vibrational overtone bands of smaller molecules, e.g., around 1.5 μm in CO or CO$_2$, is around 10$^{-23}$ cm$^2$cm$^{-1}$, although it can be larger for species with strong transitions, e.g., C$_2$H$_2$. The linestrengths of various transitions can be found in large databases, e.g., HITRAN. The lineshape function often takes a value around a few 1/cm$^{-1}$, up to around 10/cm$^{-1}$ under low pressure conditions, when the transition is Doppler broadened, and below this under atmospheric pressure conditions, when the transition is collision broadened. It has also become commonplace to express the linestrength in units of cm$^{-2}$/atm since then the concentration is given in terms of a pressure in units of atm. A typical linestrength is then often in the order of 10$^{-3}$ cm$^{-2}$/atm. Under these conditions, the detectability of a given technique is often quoted in terms of ppm·m.
The fact that there are two commensurate definitions of absorbance (in base 10 or e) implies that the absorbance and the absorption coefficient for the cases with gases, A' and α', are ln 10 (approximately 2.3) times as large as the corresponding values for liquids, i.e., A and α, respectively. Therefore, care must be taken when interpreting data that the correct form of the law is used.
The law tends to break down at very high concentrations, especially if the material is highly scattering. If the light is especially intense, nonlinear optical processes can also cause variances. The main reason, however, is the following. At high concentrations, the molecules are closer to each other and begin to interact with each other. This interaction will change several properties of the molecule, and thus will change the molar absorptivity. If the absorptivity is different at higher concentrations than at lower ones, then the plot of the absorbance will not be linear, as is suggested by the equation, so the law can only be used when all the concentrations involved are low enough that the absorptivity is the same for all of them.
## Derivation
Classically, the Beer–Lambert law was first devised independently: Lambert's law stated that absorbance is directly proportional to the thickness of the sample, and Beer's law stated that absorbance is proportional to the concentration of the sample. The modern derivation of the Beer–Lambert law combines the two laws and correlates the absorbance to both the concentration and the thickness (path length) of the sample.[2]
In concept, the derivation of the Beer–Lambert law is straightforward. Divide the absorbing sample into thin slices that are perpendicular to the beam of light. The light that emerges from a slice is slightly less intense than the light that entered because some of the photons have run into molecules in the sample and did not make it to the other side. For most cases where measurements of absorption are needed, a vast majority of the light entering the slice leaves without being absorbed. Because the physical description of the problem is in terms of differences—intensity before and after light passes through the slice—we can easily write an ordinary differential equation model for absorption. The change in intensity $dI$ across a slice is proportional both to the intensity $I$ of the light entering the slice and to the thickness of the slice $dz$, which scales the amount of absorption (a thin slice does not absorb much light but a thick slice absorbs a lot). In symbols, $dI = \beta I dz$, or $dI/dz = \beta I$. This conceptual overview uses $\beta$ to describe how much light is absorbed. All we can say about the value of this constant is that it will be different for each material. Also, its values should be constrained between −1 and 0. The following paragraphs cover the meaning of this constant and the whole derivation in much greater detail.
Assume that particles may be described as having an absorption cross section (i.e., area), σ, perpendicular to the path of light through a solution, such that a photon of light is absorbed if it strikes the particle, and is transmitted if it does not.
Define z as an axis parallel to the direction that photons of light are moving, and A and dz as the area and thickness (along the z axis) of a 3-dimensional slab of space through which light is passing. We assume that dz is sufficiently small that one particle in the slab cannot obscure another particle in the slab when viewed along the z direction. The concentration of particles in the slab is represented by N.
It follows that the fraction of photons absorbed when passing through this slab is equal to the total opaque area of the particles in the slab, σAN dz, divided by the area of the slab A, which yields σN dz. Expressing the number of photons absorbed by the slab as dIz, and the total number of photons incident on the slab as Iz, the number of photons absorbed by the slab is given by
$dI_z = - \sigma N\,I_z\,dz .$
Note that because there are fewer photons which pass through the slab than are incident on it, dIz is actually negative (It is proportional in magnitude to the number of photons absorbed).
The solution to this simple differential equation is obtained by integrating both sides to obtain Iz as a function of z
$\ln(I_z) = - \sigma N z + C . \,$
The difference of intensity for a slab of real thickness ℓ is I0 at z = 0, and Il at z = ℓ. Using the previous equation, the difference in intensity can be written as,
$\ln(I_l) - \ln(I_0) = (- \sigma \ell N + C) - ( - \sigma 0 N + C) = - \sigma \ell N \,$
rearranging and exponentiating yields,
$\ T = \frac{I_l}{I_0} = e ^ {- \sigma \ell N} = e ^ {- \alpha'\ell} .$
This implies that
$A' = - \ln\left( \frac{I_l}{I_0} \right) = \alpha' \ell = \sigma\ell N \,$
and
$A = - \log_{10}\left( \frac{I_l}{I_0} \right) = \frac{\alpha'\ell}{2.303} = \alpha \ell = \varepsilon \ell c. \,$
The derivation assumes that every absorbing particle behaves independently with respect to the light and is not affected by other particles. Error is introduced when particles are lying along the same optical path such that some particles are in the shadow of others. This occurs in highly concentrated solutions. In practice, when large absorption values are measured, dilution is required to achieve accurate results. Measurements of absorption in the range 0.1 to 1 are less affected by shadowing than other sources of random error. In this range, the ODE model developed above is a good approximation; measurements of absorption in this range are linearly related to concentration. At higher absorbances, concentrations will be underestimated due to this shadow effect unless one employs a more sophisticated model that describes the non-linear relationship between absorption and concentration.
## Deviations from Beer–Lambert Law
Under certain conditions Beer–Lambert law fails to maintain a linear relationship between absorbance and concentration of analyte.[3] These deviations are classified into three categories:
1. Real Deviations – These are fundamental deviations due to the limitations of the law itself.
2. Chemical Deviations – These are deviations observed due to specific chemical species of the sample which is being analyzed.
3. Instrument Deviations – These are deviations which occur due to how the absorbance measurements are made.
## Prerequisites
There are at least six conditions that need to be fulfilled in order for Beer’s law to be valid. These are:
1. The absorbers must act independently of each other;
2. The absorbing medium must be homogeneous in the interaction volume
3. The absorbing medium must not scatter the radiation – no turbidity;
4. The incident radiation must consist of parallel rays, each traversing the same length in the absorbing medium;
5. The incident radiation should preferably be monochromatic, or have at least a width that is narrower than that of the absorbing transition; and
6. The incident flux must not influence the atoms or molecules; it should only act as a non-invasive probe of the species under study. In particular, this implies that the light should not cause optical saturation or optical pumping, since such effects will deplete the lower level and possibly give rise to stimulated emission.
If any of these conditions are not fulfilled, there will be deviations from Beer’s law.
## Chemical analysis
Beer's law can be applied to the analysis of a mixture by spectrophotometry, without the need for extensive pre-processing of the sample. An example is the determination of bilirubin in blood plasma samples. The spectrum of pure bilirubin is known, so the molar absorption coefficient is known. Measurements are made at one wavelength that is nearly unique for bilirubin and at a second wavelength in order to correct for possible interferences. The concentration is given by $c = A_{\text{corrected}} / \varepsilon$.
For a more complicated example, consider a mixture in solution containing two components at concentrations c1 and c2. The absorbance at any wavelength, λ is, for unit path length, given by
$A(\lambda)=c_1\ \varepsilon_1(\lambda)+c_2\ \varepsilon_2(\lambda).$
Therefore, measurements at two wavelengths yield two equations in two unknowns and will suffice to determine the concentrations c1 and c2, as long as the molar absorbances of the two components, ε1 and ε2, are known at both wavelengths. This system of two equations can be solved using Cramer's rule. In practice it is better to use linear least squares to determine the two concentrations from measurements made at more than two wavelengths. Mixtures containing more than two components can be analysed in the same way, using a minimum of n wavelengths for a mixture containing n components.
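A numerical illustration of the two-wavelength case, with made-up molar absorptivities and absorbances (unit path length), assuming numpy:

```python
import numpy as np

# Rows: wavelengths; columns: components.  All values are hypothetical.
E = np.array([[250.0,  30.0],     # eps_1, eps_2 at wavelength 1 (L mol^-1 cm^-1)
              [ 40.0, 180.0]])    # eps_1, eps_2 at wavelength 2
A = np.array([0.90, 0.70])        # measured absorbances, path length 1 cm

c = np.linalg.solve(E, A)         # concentrations c_1, c_2 in mol/L
print(c)

# With measurements at more than two wavelengths, use least squares instead:
# c, *_ = np.linalg.lstsq(E, A, rcond=None)
```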
The law is used widely in infra-red spectroscopy and near-infrared spectroscopy for analysis of polymer degradation and oxidation (also in biological tissue). The carbonyl group absorption at about 6 micrometres can be detected quite easily, and degree of oxidation of the polymer calculated.
## Beer–Lambert law in the atmosphere
This law is also applied to describe the attenuation of solar or stellar radiation as it travels through the atmosphere. In this case, there is scattering of radiation as well as absorption. The Beer–Lambert law for the atmosphere is usually written
$I = I_0\,\exp(-m(\tau_a+\tau_g+\tau_{\rm NO_2}+\tau_w+\tau_{\rm O_3}+\tau_r)),$
where each $\tau_{x}$ is the optical depth whose subscript identifies the source of the absorption or scattering it describes:
• $a$ refers to aerosols (that absorb and scatter)
• $g$ are uniformly mixed gases (mainly carbon dioxide ($\mathrm{CO}_2$) and molecular oxygen ($\mathrm{O}_2$) which only absorb)
• $\mathrm{NO}_2$ is nitrogen dioxide, mainly due to urban pollution (absorption only)
• $w$ is water vapour absorption
• $\mathrm{O}_3$ is ozone (absorption only)
• $r$ is Rayleigh scattering from molecular oxygen ($\mathrm{O}_2$) and nitrogen ($\mathrm{N}_2$) (responsible for the blue color of the sky).
$m$ is the optical mass or airmass factor, a term approximately equal (for small and moderate values of $\theta$) to $1/\cos(\theta)$, where $\theta$ is the observed object's zenith angle (the angle measured from the direction perpendicular to the Earth's surface at the observation site).
This equation can be used to retrieve $\tau_{a}$, the aerosol optical thickness, which is necessary for the correction of satellite images and also important in accounting for the role of aerosols in climate.
When the path taken by the light is through the atmosphere, the density of the absorbing gas is not constant, so the original equation must be modified as follows:
$T = {I_{1}\over I_{0}} = e^{-\int\alpha'\, dz} = e^{-\sigma\int N dz}$
where z is the distance along the path through the atmosphere, all other symbols are as defined above.[4] This is taken into account in each $\tau_{x}$ in the atmospheric equation above.
## History
The law was discovered by Pierre Bouguer before 1729.[5] It is often attributed to Johann Heinrich Lambert, who cited Bouguer's Essai d'Optique sur la Gradation de la Lumiere (Claude Jombert, Paris, 1729) — and even quoted from it — in his Photometria in 1760.[6] Much later, August Beer extended the exponential absorption law in 1852 to include the concentration of solutions in the absorption coefficient.[7]
## References
1. J. D. J. Ingle and S. R. Crouch, Spectrochemical Analysis, Prentice Hall, New Jersey (1988)
2. J.H. Lambert, Photometria sive de mensura et gradibus luminis, colorum et umbrae [Photometry, or, On the measure and gradations of light, colors, and shade] (Augsburg ("Augusta Vindelicorum"), Germany: Eberhardt Klett, 1760). See especially p. 391.
3. Beer (1852) "Bestimmung der Absorption des rothen Lichts in farbigen Flüssigkeiten" (Determination of the absorption of red light in colored liquids), Annalen der Physik und Chemie, vol. 86, pp. 78–88.
http://math.stackexchange.com/questions/275115/why-fourier-transformation-use-complex-number

# Why does the Fourier transform use complex numbers?
I know that the Fourier transform is as follows: $$\hat{f}(\xi)= \int_{-\infty}^{\infty}\exp(-\mathrm ix\xi)f(x)\mathrm{d}x$$ but I couldn't understand why we should use the complex number $i$ in the integration. Does that mean that if I have a real-valued function, after the Fourier transform I get a complex-valued function? I know that $\hat{f}(\xi)$ stands for the amplitude of each frequency. But how should I understand the amplitude when it is a complex number?
-
I'm not sure if there's any good geometric intuition behind the Fourier transform of a complex function, but if you write $f(x) = g(x) + i h(x)$, with $g,h$ real, then the $\hat{f} = \hat{g} + i \hat{h}$, and I guess the Fourier transform of a real-valued function is easier to visualize. – Christopher A. Wong Jan 10 at 12:31
1
@ChristopherA.Wong. I think maple is asking about the Fourier transform of a real function, but wonders why the operation itself involves the complex number $i$. – Thomas E. Jan 10 at 12:35
maybe $\exp(-iz)=\cos(z) - i\sin(z)$ helps... – draks ... Jan 10 at 12:43
## 1 Answer
You need to ask yourself why we use Fourier transforms. We want to transfer the signal from the space or time domain to another domain - the frequency domain. In this domain, the signal has two "properties" - magnitude and phase. If we want to get only the signal's "power" in a specific frequency bin, we indeed need only to take the absolute value of the Fourier transform, which is real. But the Fourier transform gives us the phase of each frequency as well. While the importance of the first (the magnitude) is immediate, the phase is sometimes just as important. For example, for images, most of the information is contained in the phase and NOT in the amplitude. Also, frequency responses (Fourier transforms) are used in digital and analog filters, and the phase plays a major role here as well, especially for audio filters where a linear phase is required: this is what enables an audio filter to process all frequencies and output them without a different delay for each frequency (which would distort the sound - imagine a filter that makes your bass sound come a little before your treble...).
So I hope I convinced you the phase is important as well as the magnitude. And in order to get these two properties, we need something other than just real numbers, we need something with magnitude and phase. Something like a complex number.
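A small numerical illustration, assuming numpy: the transform of a real cosine is complex, and both the amplitude and the phase offset are recovered from it.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 1000, endpoint=False)
f = np.cos(2 * np.pi * 5 * t + 0.8)        # 5 Hz, amplitude 1, phase 0.8 rad

F = np.fft.rfft(f)                          # complex spectrum of a real signal
freqs = np.fft.rfftfreq(t.size, d=t[1] - t[0])

k = np.argmax(np.abs(F))
print(freqs[k])                             # ~5.0  (frequency bin)
print(2 * np.abs(F[k]) / t.size)            # ~1.0  (magnitude)
print(np.angle(F[k]))                       # ~0.8  (phase; lost if we kept only reals)
```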
thank you. It really helps – maple Jan 10 at 13:00
http://mathoverflow.net/questions/84846/joint-distribution-of-sum-of-independent-normals | Joint distribution of sum of independent normals [closed]
Suppose we have three independent normally distributed random variables $$X_0 \sim \mathcal{N}(\mu_0, \sigma_0^2),$$ $$X_1 \sim \mathcal{N}(\mu_1, \sigma_1^2),$$ $$X_2 \sim \mathcal{N}(\mu_2, \sigma_2^2).$$
Now, define two new random variables $Y_0 = X_0+X_1$ and $Y_1 = X_1+X_2$.
Let $\vec{Y} = [Y_0 \;\;\; Y_1]^T$
What can we say about the distribution of $\vec{Y}$? Obviously, $Y_0$ and $Y_1$ are not independent. If they were independent, then $\vec{Y}$ would be a multivariate normal variable. Any ideas?
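Update: as pointed out in the comments, $\vec{Y}$ is a linear image of the Gaussian vector $(X_0, X_1, X_2)$, hence bivariate normal, with $\operatorname{Cov}(Y_0, Y_1) = \operatorname{Var}(X_1) = \sigma_1^2$. A quick simulation sketch (parameters chosen arbitrarily for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
mu = np.array([1.0, -2.0, 0.5])      # arbitrary illustrative means
sigma = np.array([1.0, 2.0, 0.5])    # arbitrary illustrative std devs

X = rng.normal(mu, sigma, size=(200_000, 3))
Y = np.column_stack([X[:, 0] + X[:, 1], X[:, 1] + X[:, 2]])

# Theory: Cov(Y) = [[s0^2 + s1^2, s1^2], [s1^2, s1^2 + s2^2]].
cov_theory = np.array([[sigma[0]**2 + sigma[1]**2, sigma[1]**2],
                       [sigma[1]**2, sigma[1]**2 + sigma[2]**2]])
print(np.round(np.cov(Y.T), 2))      # matches cov_theory up to sampling noise
print(cov_theory)
```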
If you get no interest here, try stats.stackexchange.com - and if you do, flag this question for moderator attention to be closed, so we don't duplicate questions. – David Roberts Jan 4 2012 at 0:48
Alternatively, you could try math.stackexchange.com - the question looks like it would belong better there. Clearly both $Y_0$ and $Y_1$ are Gaussian so it seems that the vector $(Y_0,Y_1)$ will be Gaussian with some non-trivial covariance matrix. – Yemon Choi Jan 4 2012 at 0:53
The fact that $Y_0$ and $Y_1$ both are Gaussian does not imply that $\vec{Y}$ is Gaussian. See en.wikipedia.org/wiki/… – eakbas Jan 4 2012 at 1:11
@eakbas: ah yes, I was too hasty with what I said. However, isn't the image of a Gaussian vector under a linear transformation also Gaussian? see en.wikipedia.org/wiki/… – Yemon Choi Jan 4 2012 at 2:35
The question has now been posted at stats.stackexchange.com/questions/20565/… where other people seem to agree with me regarding multivariate Gaussian distributions – Yemon Choi Jan 4 2012 at 3:13
http://physics.stackexchange.com/questions/14418/can-one-canonical-conjugate-variable-be-considered-to-be-the-frequency-of-the | # Can one canonical conjugate variable be considered to be the “frequency” of the other one? (which could be a “wavelength”)?
So, from http://en.wikipedia.org/wiki/Conjugate_variables#Derivatives_of_action, we have...
• The energy of a particle at a certain event is the negative of the derivative of the action along a trajectory of that particle ending at that event with respect to the time of the event.
• The linear momentum of a particle is the derivative of its action with respect to its position.
• The angular momentum of a particle is the derivative of its action with respect to its angle (angular position).
• The electric potential (φ, voltage) at an event is the negative of the derivative of the action of the electromagnetic field with respect to the density of (free) electric charge at that event.
• The magnetic potential (A) at an event is the derivative of the action of the electromagnetic field with respect to the density of (free) electric current at that event.
• The electric field (E) at an event is the derivative of the action of the electromagnetic field with respect to the electric polarization density at that event.
• The magnetic induction (B) at an event is the derivative of the action of the electromagnetic field with respect to the magnetization at that event.
• The Newtonian gravitational potential at an event is the negative of the derivative of the action of the Newtonian gravitation field with respect to the mass density at that event.
We know that one canonical conjugate variable can be Fourier transformed into its dual. So all properties of Fourier transforms apply.
It also appears that "momentum" can be considered to be the "frequency" of position. Does this analogy also apply to each and every one of these cases above?
In that the energy of a particle is the "frequency" of its trajectory? Or that the electric potential is the "frequency" of the density of free electric charge?
Or that the electric field is the "frequency" of the electric polarization density? And that the magnetic induction is the "frequency" of the magnetization?
## 1 Answer
The answer for all the cases mentioned in the question is positive: There is a Fourier transform which intertwines the conjugate variables, but this is not true in general.
To be specific I'll elaborate the particle (in one dimension) case, but this elaboration can be generalized for the other cases as well.
In the particle case, we know how to construct a canonical transformation that exchanges the position and the momentum.
$F(q, Q) = qQ$
The momenta are given by:
$p = \frac{\partial F}{\partial q}=Q$
$P = -\frac{\partial F}{\partial Q}=-q$
The time independent Schrödinger equation in the first set of variables corresponding to a Hamiltonian $H(p,q)$ is:
$H(i \hbar\frac{\partial}{\partial q}, q) \psi(q) = E \psi(q)$
while in the second set of variables, it is:
$H(Q, -i \hbar\frac{\partial}{\partial Q}) \phi(Q) = E \phi(Q)$
Now, it is easy to verify that the Fourier transform:
$\phi(Q) = \int exp(\frac{i}{\hbar} F(q, Q)) \psi(q) dq$
intertwines between the energy eigenfunctions of the two Hamiltonians, i.e., if $\psi(q)$ is an eigenfunction of the first Hamiltonian with an energy $E$, then $\phi(Q)$ will be an eigenfunction of the second Hamiltonian with the same energy.
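As a sanity check, one can verify this intertwining numerically. Here is a minimal sketch of my own, with $\hbar = 1$, for the harmonic oscillator $H = \frac{1}{2}(p^2 + q^2)$, whose eigenfunctions are the Hermite functions; since this $H$ is symmetric under the exchange of $q$ and $p$, the kernel should map each eigenfunction to a multiple of itself:

```python
import math
import numpy as np
from numpy.polynomial.hermite import hermval

q = np.linspace(-10, 10, 2001)
dq = q[1] - q[0]

def psi(n, x):
    """n-th Hermite function: eigenfunction of H = (p^2 + x^2)/2."""
    c = np.zeros(n + 1); c[n] = 1.0
    norm = 1.0 / math.sqrt(2.0**n * math.factorial(n) * math.sqrt(math.pi))
    return norm * hermval(x, c) * np.exp(-x**2 / 2)

# Discretized kernel exp(i q Q / hbar), hbar = 1, with a 1/sqrt(2 pi)
# factor so the transform is unitary on the grid.
K = np.exp(1j * np.outer(q, q)) * dq / math.sqrt(2 * math.pi)

for n in range(4):
    phi = K @ psi(n, q)                    # transformed eigenfunction phi(Q)
    err = np.max(np.abs(phi - (1j)**n * psi(n, q)))
    print(n, err)                          # tiny: phi = i^n psi_n, same energy
```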
Now, is the above description valid for any two classically conjugate variables and any canonical transformation? The answer is no. In general we will have
$\phi(Q) = \int exp(\frac{i}{\hbar} F_{quant}(q, Q)) \psi(q) dq$,
where $F_{quant}(q, Q)$ is a "Quantum canonical transformation", which can be expanded in powers of $\hbar$, such that the leading term is the "classical" canonical transformation:
$F_{quant}(q, Q) = F(q, Q) + \hbar F_1(q, Q) + \hbar^2 F_2(q, Q) + \cdots$
This equation essentially constitutes the basis of deformation quantization.
http://mathoverflow.net/questions/33942/do-the-base-3-digits-of-2n-avoid-the-digit-2-infinitely-often-what-is-the-s | ## Do the base 3 digits of $2^n$ avoid the digit 2 infinitely often — what is the status of this problem?
I believe this question is due to Erdős and Graham, and I think it is still open: does the base 3 expansion of $2^n$ avoid the digit 2 for infinitely many $n$?
If we concatenate the digits of $2^i$, $i \geq 0$, we produce the number $0.110100100010000...$. This number is not simply normal in base 2, so it is not normal. Is it simply normal in base 3? I think even that result would not imply that for sufficiently large $n$, 2 doesn't appear in the base 3 expansion of $2^n$.
The number 20 here is not special:
$2^{20} = 1222021101011_3, \;\;\;\; 2^{21} = 10221112202022_3, \;\;\; 2^{22} = 21220002111121_3$
Statistically, we seem to be flipping a fair 3-sided coin, and statistical analysis for larger $n$ bears this out (in the past, I did a p-test on the digits, but don't have the data available here). If we actually produced these digits by flipping this 3-sided coin, for fixed $n$ we would have probability about $$(2/3)^{n\ln2/\ln3}$$ of having no 2s in the base-3 digit expansion.
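A brute-force check is easy to run; here is a minimal sketch, which finds only the well-known small hits:

```python
def no_two_base3(n: int) -> bool:
    """True if the base-3 expansion of n avoids the digit 2."""
    while n:
        if n % 3 == 2:
            return False
        n //= 3
    return True

print([k for k in range(1, 2000) if no_two_base3(2**k)])   # [2, 8]
```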
What is the state of the art for this problem? Is there a good number-theoretic reason why this problem should be very difficult (e.g. an analogy with other supposed-hard problems)? Are there related problems that have been solved?
So I guess you mean cases like $2^8 = 100111_3$? If I computed correctly. – Helge Jul 30 2010 at 22:00
Yes, but as the number of digits increases, these become increasingly unlikely. Under the assumption, of course, that the base 3 digits of 2^n are random, which they are not. – Eric Tressler Jul 30 2010 at 22:56
## 3 Answers
As of a few months ago, the status of the problem was: still unsolved. See the slides Jeff Lagarias put up from a talk he gave in September 2009: http://www.math.lsa.umich.edu/~lagarias/talks-files/fields-horz5.pdf
An older reference is http://www.americanscientist.org/issues/id.3268,y.0,no.,content.true,page.2,css.print/issue.aspx (Brian Hayes, Third Base, American Scientist) which says the problem was still open in late 2001; also that Ilan Vardi searched up to $2^{6973568802}$ without finding any 2-less powers of 2 (other than $2^2$ and $2^8$).
This sounds comprehensive. Lagarias's slides in particular are very interesting. Thanks for the references. – Eric Tressler Jul 31 2010 at 16:24
... and $\sum (2/3)^{n\log 2/\log 3} < \infty$, so (if things were random) we would expect only finitely many such occurrences by the easy direction of Borel-Cantelli.
But of course proving this is anything like random is far too hard for today's tools, I think.
Something way easier that one can prove is that your sequence is disjunctive, which is much weaker than normality. This is true since for every string of digits $k$ you can find a power of two whose base 3 expansion starts with $k$.
I don't know much about normality except that there are many conjectures (for example that every irrational algebraic number is normal) and no techniques to answer such questions except in trivial cases. I believe the current state is similar for simply normal numbers.
Another question that is similar in spirit and that was answered recently is Stolarsky's conjecture, which says $$\liminf_{n\to \infty}\frac{s_q(n^k)}{s_q(n)}=0$$ where $s_q(\cdot)$ is the sum of digits in base $q$. Intuitively it is hard to come up with examples where $s_3(n^k) < s_3(n)$, let alone to show that the $\liminf$ is zero. However this question is much weaker than the one you ask, since the sum of digits puts very few constraints on the digits themselves.
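For a small experiment with the digit-sum function (a sketch; base $3$ and exponent $2$ are arbitrary choices):

```python
def s(q: int, n: int) -> int:
    """Sum of the digits of n in base q."""
    total = 0
    while n:
        total += n % q
        n //= q
    return total

best = min(range(2, 200_000), key=lambda n: s(3, n**2) / s(3, n))
print(best, s(3, best**2) / s(3, best))
# Stolarsky's statement concerns the liminf over all n; the extremal n
# are sparse, so a small scan like this only hints at the behavior.
```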
Thank you for the reference. I will read the proof of Stolarsky's conjecture (assuming it was answered in the positive). – Eric Tressler Jul 31 2010 at 16:17
http://mathoverflow.net/questions/82270/reference-request-gluing-manifolds-along-pieces-of-boundary | ## Reference request: gluing manifolds along pieces of boundary
I've been asked for a reference for the following construction and since I didn't know one, I thought I'd ask here if anyone did.
Consider two smooth manifolds with boundary of the same dimension, $M$ and $N$. Suppose that we have a submanifold $S$ of the boundaries $\partial M$ and $\partial N$ of codimension $0$ ($S$ may have boundary as well). Then glue $M$ and $N$ together along $S$, smoothing out the corners (corresponding to the boundary of $S$) if necessary. To do this properly, one would have to add a collar to each of $M$ and $N$ which is "broken" at the boundary of $S$ (thus making manifolds-with-corners) and glue those.
Is there a standard reference for this that works through the details?
mathoverflow.net/questions/67809/… asks the same thing, I think. It's a good question - Kosinski does it to some extent, and the Jim Davis reference which Igor Rivin gave also gives some details, but I don't know somewhere that all the details, including things like the isotopy extension theorem, properly appear in a unified framework. – Daniel Moskovich Nov 30 2011 at 14:08
Daniel, I hadn't seen that one (I did check the suggestions, but that didn't come up). I'll need to take a look in Kosinski's book to see if that covers this case because as written, the questions are not duplicates: this situation has the added wrinkle of not gluing along the whole boundary (or a component of the boundary). (That said, if Kosinski does have it then I have no issue with close-as-duplicate.) – Andrew Stacey Nov 30 2011 at 14:30
My goto book for such things is, as Daniel mentioned, Kosinski. Look at chapter 6 section 5 of Kosinski. – Kelly Davis Nov 30 2011 at 16:40
Although this isn't specifically in Kosinski, it is a small variant of the ideas there. Given a non-closed co-dimension 0 submanifold of the boundary, there are special collars so that VI.5 still works exactly as stated. – Ryan Budney Nov 30 2011 at 17:48
Have you looked at Douady's articles in Seminaire Cartan vol 14 . – Mohan Ramachandran Dec 4 2011 at 18:55
## 3 Answers
This was a bit too long for a comment, so I am posting it as an answer. You are sort of asking two things:
1. How to turn your manifolds M and S into an appropriate manifold with corners together with an appropriate notion of collar?
2. How to then glue these to obtain a new manifold?
To do (1) you'll need some assumptions on S, probably including compactness. In many cases though it might be clear that you can choose such collars. In that case you might be interested in Theorem 3.5 from my 2009 dissertation (arXiv:1112.1000, page 140). There I show that even if the collars are not specified, the glued manifold is still unique up to (non-canonical) diffeomorphism fixing S and restricting to the identity outside a neighborhood of S. In fact the construction shows that there is a canonical contractible family of these diffeomorphisms (and so there is a canonical isotopy class of diffeomorphisms).
I used this to build one version of the 2-category of cobordisms, where you need to glue along parts of the boundary in the manner you describe, but where you also don't want to mod out by diffeomorphisms too early.
When S is a component of the boundary, you can find this result here:
James R. Munkres, Elementary differential topology, Lectures given at Massachusetts Institute of Technology, Fall, vol. 1961, Princeton University Press, Princeton, N.J., 1966.
I basically adapted this proof to cover the case of gluing manifolds along a portion of the boundary.
That looks very useful, I'll pass those along. – Andrew Stacey Dec 8 2011 at 20:19
This seems to be discussed here (particularly page 6).
That seems to be the desired result, but it also seems to be lacking a few details! Such as proof or reference to proof. Nonetheless, the statement is more detailed than I have so far so it's definitely progress. – Andrew Stacey Nov 30 2011 at 14:33
Well, you COULD just write to Jim Davis, and ask for a reference (and then share it with us...) – Igor Rivin Nov 30 2011 at 14:37
Igor: I did, and have. – Andrew Stacey Dec 8 2011 at 13:14
You are a wise man! – Igor Rivin Dec 8 2011 at 14:35
I did email Jim Davis and I have permission to post his reply (I'll summarise it). He taught a course which needed this result (which the notes that Igor Rivin links to are from). Being unable to find the precise statement (or proof) in the literature, he proved it himself. He has ambitions to flesh out those notes into something fuller.
Since Kosinski's book was mentioned in the comments, it is perhaps worth pointing out that it contains the statement (p14):
Complications arise when more than one handle has been attached. When this happens some proofs have to rely strongly on the technique known as vigorous hand waving.
so for a precise statement/proof, it would appear that Kosinski's book has to be discounted. (I should be honest and say that I haven't checked Kosinski's book myself; the original request was from a colleague and he has checked the book and is not happy with what is in there. The above quote was highlighted by Jim Davis.)
Thanks! This quote is precisely addressed here: mathoverflow.net/questions/70248/… It relates to the proof of the Morse Lemmas. Palais's proof was further simplified by Fukui. BTW, Kosinski does the "one handle attachment case" extremely nicely, and must be the best reference out there for this. But I don't think that this relates to your question, which is much more basic. I think it's a separate issue. – Daniel Moskovich Dec 8 2011 at 14:30
@Andrew: I think that quote is in reference to the "smoothing the corners" approach to viewing handle attachments as attaching $D^n \times D^m$ along $(\partial D^n)\times D^m$. But the point of the quote is that if you think of it in a slightly different way, as Kosinski does, you don't have to deal with smoothing the corners to begin with. – Ryan Budney Dec 8 2011 at 14:38
(Could the anonymous down-voter please explain their reasoning.) – Andrew Stacey Dec 9 2011 at 8:03
I am the masked down-voter. My reasons are as follows: First, your statement is hearsay, which would not be so bad if it were correct. Second, in the first edition of Kosinski, there is no such statement on page 14. Third, in the second edition of Kosinski, there is also no such statement on page 14. Fourth, the first and second editions of Kosinski do have such a statement on page 142. However, this statement is related to how "other books" treat attaching handles. It is not a statement on how Kosinski's book treats attaching handles. (See Ryan's remark above.) – Kelly Davis Dec 9 2011 at 11:55
Kelly, thank you for that comment. That is so much more useful to me than a drive-by down-voting. Your first reason is somewhat dubious, given the comments on Igor's answer and the setting (that I'm asking for a reference for someone else who doesn't use MO so almost everything to do with this question is "hearsay"). Your second reason, however, is completely valid and is something that everyone reading what I wrote should also read - which is why you should have commented in the first place! – Andrew Stacey Dec 9 2011 at 13:05
http://aquantumoftheory.wordpress.com/ | # A Quantum Of Theory
Exploring new paths in quantum physics
### Quantum theory – A view from the inside – Part VI
The last post gave a possible answer to a very interesting question, namely “Why does the wave function collapse and where does the randomness come from?”. But we have also left out a few important details.
We have only seen a quantum measurement process for a single qubit, so what about more complex systems? There is a very elegant way to generalize the qubit measurement to systems of arbitrary size, and that is iterated bisection. Imagine the qubit process not acting on a two state system, but rather on two subspaces of a system. The result of one measurement process would then be a collapse to one subspace. This process can be iterated, and surprisingly the splitting point for the subspaces and the number and order of the individual processes do not have any influence on the outcome, as long as there are enough processes happening to fully reduce the state to one-dimensional outcomes.
Next, what about stability of the measurement? If you measure twice in a row, will the results be identical? The answer is yes, and in general the Born rule will hold again if the outcome of a measurement is measured once more. This is not a trivial result, and in fact it comes with an unexpected consequence concerning the observer’s memory of events. See my paper linked further down for the details.
The concept of an observer, does it not make all the observations very subjective? Is there an objective reality? Well, the observations we derived are subjective, but they are subjective in a weak way. That means all other observers that look at the same event get the same result in general, because they're subject to the same constraints. So there is a universal subjective reality, if you will.
So, is this not new physics, just an overlooked aspect of unitary quantum theory? It is new physics, as it uncovers some previously unknown consequences of observing unitary evolution and gives new mechanisms, but it is also established physics, as it does not postulate anything new. It is even testable experimentally, because the process of measurement is described in detail. In addition, the same mechanism applied to a different scattering process can be shown to also produce two other projective outcomes that do not follow the Born rule but only work on single qubits. Those can also be verified experimentally.
Does this solve the quantum measurement problem? I think it does, but I am obviously biased. I believe it successfully follows the idea of replacing the requirement of an interpretation with an actual theory. Of course it remains to be seen if the arguments contain any factual errors.
Where can I read up on all the details?
So, finally, here is a link to my paper: http://arxiv.org/abs/1205.0293
Why is this only on arxiv and not in some peer reviewed journal? I am currently trying to get this published, but it is not easy to come up with something entirely new in a field that has been worked on for so long. The skepticism is great and many researchers follow their own school of thought without considering alternatives. This is why I chose to put it on arxiv for now and get feedback about the paper from other physicists first. This blog is to attract people to the idea and offer a gentle introduction to the derivations. I would like to know if it is well enough written to be understandable and if you consider the arguments sound. I will keep revising the paper based on the feedback I get and hopefully finally have it published with a peer reviewed journal.
This is where I need you. You can actively help me spread this idea, or contribute by giving me your feedback of any kind. You can comment here on the blog or send an email to the address on the paper.
Thank you for reading this far and please let me know what you think!
### QT – A view from the inside Part V – The Born rule?
Now that we have established that external interactions can cause an observed jump in the state evolution, we are going to construct a case where this actually happens and we can study the properties of this transition.
Generally, the system containing the observer and everything he interacts with is very large. We cannot discuss all the details of such a system, therefore we will use a minimal version consisting of just the necessary elements. And probably to your surprise, the observer is not contained in this simplified system. We do not actually need him, because we already know how he would reconstruct the system's dynamics.
Even our entire universe will be very simple. It just consists of a single qubit and the photon field. The photons are assumed to be inaccessible locally, because they come in or leave at the speed of light. So it is just the qubit that we regard as the local system a hypothetical observer would reconstruct.
The process we are going to analyze is a simple scattering and absorption process on the qubit, which we may imagine to be realized as some two level system with two energy eigenstates $\left|0\right\rangle$ and $\left|1\right\rangle$. The incoming single photon $\left|\rightsquigarrow\right\rangle$ carries a two-dimensional polarization state spanned by $\left|\circlearrowleft\right\rangle$ and $\left|\circlearrowright\right\rangle$. Before the qubit and the photon interact, their states are independent and represented as a product state.
The actual dynamics of the process is not really important to us, the output state of the scattering event is already perfectly sufficient for the demonstration. Hence we can simply define a unitary map between the input and output states. Unitarity is guaranteed by mapping an orthonormal basis of the input space to an orthonormal basis of the output space. We have a 4-dimensional input space, 2 qubit dimensions times 2 photon polarization dimensions, so we also get a 4-dimensional output space. And the whole process can be written in the form of 4 mapping rules.
The first rule is very simple, as the photon just passes through the qubit:
$\left|\rightsquigarrow\circlearrowleft\right\rangle \left|0\right\rangle \mapsto \left|0\right\rangle \left|\rightsquigarrow\circlearrowleft\right\rangle$
The same thing happens if we flip both the polarization and the qubit:
$\left|\rightsquigarrow\circlearrowright\right\rangle \left|1\right\rangle \mapsto \left|1\right\rangle \left|\rightsquigarrow\circlearrowright\right\rangle$
But in the two other cases we get a nontrivial interaction, resulting in a superposition of a scattering and an absorption or stimulated emission outcome:
$\left|\rightsquigarrow\circlearrowright\right\rangle \left|0\right\rangle \mapsto \left( \left|0\right\rangle\left|\leftrightsquigarrow\circlearrowright\right\rangle + \left|1\right\rangle\left|\cdot\right\rangle \right)\frac{1}{\sqrt{2}}$
$\left|\rightsquigarrow\circlearrowleft\right\rangle \left|1\right\rangle \mapsto \left( \left|1\right\rangle\left|\leftrightsquigarrow\circlearrowleft\right\rangle + \left|0\right\rangle\left|\rightsquigarrow\circlearrowleft,\rightsquigarrow\circlearrowleft\right\rangle \right)\frac{1}{\sqrt{2}}$
Here, $\left|\leftrightsquigarrow\right\rangle$ is the omnidirectionally scattered photon, $\left|\cdot\right\rangle$ the vacuum after the photon has been absorbed, and $\left|\rightsquigarrow,\rightsquigarrow\right\rangle$ denotes the two photon state after stimulated emission.
If we apply the process to an input state $\left( \alpha \left|\rightsquigarrow\circlearrowleft\right\rangle + \beta \left|\rightsquigarrow\circlearrowright\right\rangle \right) \left( a \left|0\right\rangle + b \left|1\right\rangle \right)$ and then take the trace over the inaccessible photon states, we get the state operator of the output state:
$\rho = \left( \left|\alpha\right|^2 \left|a\right|^2 + \frac{1}{2}\left( \left|\alpha\right|^2 \left|b\right|^2 + \left|\beta\right|^2 \left|a\right|^2 \right) \right) \left| 0 \right\rangle \left\langle 0 \right|$
$+ \left( \left|\beta\right|^2 \left|b\right|^2 + \frac{1}{2}\left( \left|\alpha\right|^2 \left|b\right|^2 + \left|\beta\right|^2 \left|a\right|^2 \right) \right) \left| 1 \right\rangle \left\langle 1 \right|$
Conveniently, this result is already decomposed into its eigenbasis. If we apply what we have learned about the locally observed state, we have to find the eigenvector with the greatest eigenvalue of this state operator. The result can easily be read off the representation, and we get $\left|0\right\rangle$ if $\left|\alpha\right|^2 \left|a\right|^2 > \left|\beta\right|^2 \left|b\right|^2$ and $\left|1\right\rangle$ otherwise. But the observer at the qubit does not know the state of the incoming photon, so $\alpha$ and $\beta$ are unknown to him and will introduce random information into the result. Since the observer does not know better, he has to assume that all photon polarizations are equally likely, which means the assumed random distribution of the photon state must be invariant under a unitary transform of the photon state. With this assumption, we can calculate the probability of the observed process outcome to be $\left|0\right\rangle$ or $\left|1\right\rangle$.
I cannot show the calculation here, but please see my next post for a reference to a paper on arxiv where it is actually performed. Of course, I will still give you the result. The probability of seeing an outcome $\left|0\right\rangle$ is precisely $p_0 = \frac{\left|a\right|^2}{\left|a\right|^2+\left|b\right|^2}$, and the complementary event of observing $\left|1\right\rangle$ is $p_1 = \frac{\left|b\right|^2}{\left|a\right|^2+\left|b\right|^2}$.
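If you want to see the result without the full calculation, here is a quick Monte Carlo sketch (with arbitrary fixed real amplitudes $a, b$). It only uses the comparison rule derived above, $|\alpha|^2 |a|^2 > |\beta|^2 |b|^2$, applied to Haar-random photon polarizations:

```python
import numpy as np

rng = np.random.default_rng(1)
a, b = 0.6, 0.8                  # arbitrary qubit amplitudes, |a|^2 + |b|^2 = 1
N = 200_000

# Haar-random polarization states (alpha, beta): normalized complex Gaussians.
z = rng.normal(size=(N, 2)) + 1j * rng.normal(size=(N, 2))
z /= np.linalg.norm(z, axis=1, keepdims=True)
alpha2, beta2 = np.abs(z[:, 0])**2, np.abs(z[:, 1])**2

# The observed outcome is the top eigenvector of the (diagonal) reduced
# state: |0> exactly when |alpha|^2 |a|^2 > |beta|^2 |b|^2.
p0 = np.mean(alpha2 * a**2 > beta2 * b**2)
print(p0, a**2 / (a**2 + b**2))  # ~0.36 vs 0.36: the Born rule weight
```

The key fact behind this is that for Haar-random $(\alpha, \beta)$ the weight $|\alpha|^2$ is uniformly distributed on $[0,1]$, which is exactly what turns the eigenvalue comparison into the Born weights.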
You have certainly noticed that our result is precisely the probability postulated by the Born rule! This means we have not only derived that the observer will see a discontinuous jump between pure states, but also that the statistics of that jump exactly match the established statistics of performing a quantum measurement, at least for a simple interaction with an incoming photon. We have therefore deduced the full measurement postulate of the Copenhagen interpretation just by assuming the position of an internal observer, without any additional requirements, and also without having to introduce arbitrarily many quantum worlds.
My next post will contain a few final thoughts and a pointer to more technical and rigorous derivations of all results presented here and more.
### Quantum Theory – A view from the inside Part IV
Last time, we have derived how an observer inside a quantum system can only reconstruct a crude approximation to the real quantum state of this system. Specifically, he will observe the system to evolve like the eigenvector belonging to the greatest eigenvalue of the objective state operator describing his subsystem. To get this result however, we had to assume that the subsystem the observer interacts with is isolated and the evolution therefore unitary.
In order to understand how a non-unitary evolution affects the state reconstruction, we have to understand how unitary and non-unitary transformations modify the eigenstructure of a hermitian operator. This structure is given by real eigenvalues and orthogonal eigensubspaces that span the whole space. Both unitary and non-unitary transformations have to preserve this general structure. An additional constraint comes from the time-continuity of the evolution, which translates to a continuous evolution of the eigenvalues and eigensubspaces.
Consider a hermitian operator $A$ and its eigenvector $\left|v\right\rangle$ with the eigenvalue $v$. It then follows that $UAU^\dagger$ has an eigenvector $U\left|v \right\rangle$ with the eigenvalue $v$, where $U$ is a unitary operator. Or in other words, unitary evolution does not change the eigenvalues, but it rotates the eigensubspaces while preserving their orthogonality.
Now for non-unitary evolution the only possible way to generalize that behavior is to unlock the eigenvalues. This implies that eigensubspaces can fuse when distinct eigenvalues evolve into being equal. We also have the reverse process where they split up instead. But most of the time we expect to see a simple intersection of two eigenvalues, with a fused eigensubspace only existing at a single point in time. This really summarizes all that can possibly happen. The subspaces can also only be rotated, because the hermitian constraint enforces orthogonality at all times. Non-unitary evolution is really not that complicated if presented like this!
The implications for the state reconstruction performed by our virtual observer are relatively simple. As long as the non-unitary evolution does not lead to an eigenvalue intersection, the order of the eigenvalue-sorted subspaces will stay the same. We have seen in the last post that the observer can not determine the actual eigenvalues anyway, so nothing changes for him. He cannot distinguish such a non-unitary evolution from a unitary one. If we do allow for eigenvalue intersections, then it depends on which eigenvalues are affected. If the intersection does not involve the greatest eigenvalue then the observer will still not be able to detect it, because the corresponding subspaces are not reconstructable.
The only case when the intersection has any influence on the observed evolution is when the greatest eigenvalue intersects with another eigenvalue. Then the eigensubspace associated with the greatest eigenvalue switches instantly. The virtual observer would perceive this event as a discontinuous evolution of the system state.
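Here is a small numerical illustration of such a crossing (a sketch with an arbitrarily chosen fixed eigenbasis): the operator changes perfectly smoothly, yet the top eigenvector, and with it the reconstructed state, jumps.

```python
import numpy as np

theta = 0.3                          # arbitrary fixed eigenbasis
u0 = np.array([np.cos(theta), np.sin(theta)])
u1 = np.array([-np.sin(theta), np.cos(theta)])

for t in np.linspace(0.0, 1.0, 11):
    # Non-unitary evolution: the eigenvalues drift and cross at t = 0.45.
    rho = (0.7 - 0.5 * t) * np.outer(u0, u0) \
        + (0.25 + 0.5 * t) * np.outer(u1, u1)
    w, V = np.linalg.eigh(rho)
    top = V[:, np.argmax(w)]         # the observer's reconstructed state
    print(f"t = {t:.1f}:  |top| = {np.round(np.abs(top), 3)}")
# Before the crossing the top eigenvector is u0; after it, u1.
```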
Have you realized what we have just derived? We have seen that an observer who is part of a quantum system perceives the evolution of this system, which includes himself and his nearby environment, as the unitary evolution of a pure state. But when the system interacts with some part of the universe that is not included in his description, the pure quantum state can suddenly jump to a completely different pure state. And that can happen in a way that is entirely not unitary.
This description is very close to the postulates of the Copenhagen interpretation. The only missing aspect is a statement about the probability and results of the discontinuous state evolution. We will deal with that in the next post!
### Quantum Theory – A view from the inside Part III
Our virtual observer sits in a simulated quantum universe whose global state evolves unitarily. The interactions in the simulated universe are local, allowing parts of the system to be isolated by spatial separation. And this observer is very smart. He has somehow figured out that systems that are isolated reasonably well evolve in a specific, predictable way. It is not important how exactly he did that; let us just assume that it is the case.
Mathematically, he would formulate a law that we would write as
$\left|\psi(t)\right\rangle = U(t,t_0)\left|\psi(t_0)\right\rangle$
where $\left|\psi(t)\right\rangle$ is the state of the system at the time $t$ and $U(t,t_0)$ is the unitary operator that evolves the state from the time $t_0$ to the time $t$.
As he develops the theory he also becomes aware of entanglement and that the state of subsystems cannot be generally described by a Hilbert space ray. He will find that there is a more general class of state descriptions, in the form of state operators (I will avoid the term “density operator” for reasons that should become obvious later) which also evolve unitarily for isolated systems, but with a slightly different law:
$\rho(t) = U(t,t_0) \rho(t_0) U^\dagger (t,t_0)$
The usual quantum states can be embedded naturally in this new state space by mapping them to their projectors $\left|\psi\right\rangle\mapsto\rho=\left|\psi\right\rangle \left\langle\psi\right|$ and the evolution is preserved. As it turns out, the state operator is always a non-negative hermitian operator with finite non-zero trace. This does not require a statistical interpretation of the operator, but cannot be easily seen without going much deeper. For this, I have to refer to the scientific article that I will cite in one of the next posts. Right here it would only distract from the following argument.
Assume that the observer and everything that interacts with him directly (during a certain time span) can be regarded as one system that is approximately isolated. This separates the Universe into the part of the universe that directly contributes to the experience of the observer and the environment unknown to him. The observer could try to reconstruct the state of his observable subsystem and would choose the most general state representation he knows, a state operator. Remember that all his knowledge can only be based on the history of states of the universe. But since all he knows about the universe is contained in this subsystem, all he can possibly know is a result of the sequence of states of this subsystem. In this situation, is the state operator that the virtual observer finds uniquely determined?
This is the same situation as for the cellular automaton, where changing the cell contents did not make a difference for the structure contained in the sequence of its states. Only this time the allowed transformations are different. Generally, the transformation must be reversible so that the contained information in the representations remains the same.
Let us consider a simple example. If $\rho(t)$ is a reconstructed state, we define $\bar{\rho}(t):= \rho^2(t)$. The time evolution law of $\bar{\rho}(t)$ follows as
$\bar{\rho}(t) = U(t,t_0) \rho(t_0) U^\dagger (t,t_0) U(t,t_0) \rho(t_0) U^\dagger (t,t_0)$
which simplifies with $U^\dagger U = 1$ to $\bar{\rho}(t)=U(t,t_0)\bar{\rho}(t_0)U^\dagger (t,t_0)$. That means $\bar{\rho}$ has the same time evolution law as $\rho$. Because state operators are hermitian and non-negative, squaring is a bijection. As a result, both $\rho$ and $\bar{\rho}$ are valid representations of the state history. Interestingly, neither of the two descriptions is any simpler or preferable. Of course, squaring appears like an extra operation that makes it seem like a worse option, but that is only something we did as a construction example; the observer would have estimated $\rho$ or $\bar{\rho}$ directly, and to him both seem equally correct.
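A short numerical confirmation (random $4 \times 4$ example): squaring commutes with the unitary evolution law, and $\rho$ and $\bar{\rho} = \rho^2$ share the same eigenvectors in the same eigenvalue order.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
rho = A @ A.conj().T                 # random non-negative hermitian operator
U, _ = np.linalg.qr(rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4)))

evolved = U @ rho @ U.conj().T
print(np.allclose(U @ (rho @ rho) @ U.conj().T, evolved @ evolved))  # True

w, V = np.linalg.eigh(rho)           # eigh sorts eigenvalues ascending
w2, V2 = np.linalg.eigh(rho @ rho)
print(np.allclose(np.abs(V.conj().T @ V2), np.eye(4), atol=1e-7))
# True: same ordered eigenvectors up to phase; only the eigenvalues changed.
```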
Clearly, squaring is not the only way to construct a new valid representation. It is easily verified that the same argument applies to any positive integer power of the state operator, they all generate a bijection between non-negative hermitian operators and the unitary evolution is preserved. We can also use linear combinations of positive integer powers, but then we have to be careful with preserving the bijectivity. It turns out that the class of interesting bijections is defined as the analytic continuation of monotonically increasing analytic functions $f:\mathbb{R}^+_0\to\mathbb{R}^+_0$ with $f(0)=0$. In other words, take any such function and apply its power series to a state operator in order to retrieve a new state operator, that is also a valid description of the observer’s state history.
Any choice of a single state operator from this infinite family of possible states would be purely arbitrary. That also means our scientifically working virtual observer cannot deduce any such state operator representation from his state history. He will have to come up with an alternative state representation that eliminates the redundancy encoded in the state operators. This representation must describe something which all equivalent state operators have in common. And it should also come in a familiar form, allowing us to use the same formulation for the dynamics.
The hermitian non-negative operators we used to represent states have real non-negative eigenvalues, and the associated eigensubspaces are mutually orthogonal. From the way they are constructed, degeneracy would be purely accidental and we may assume that all eigensubspaces that are not the nullspace are in fact one-dimensional. This will simplify our discussion. As a result, we can describe each single state operator with a list of eigenvalues and the associated eigenvectors. The transformations between valid representations as defined above act as maps on the eigenvalues only, by applying the analytic monotonic functions to them. The monotonicity of these functions preserves the order of the eigenvalues. That means once sorted they will stay sorted even after the transformation. However, the actual eigenvalues do change.
This implies two things: The eigenvalues do not contain information about the reconstructed state, because they change with the representation. The order of the eigenvalues does contain information, because it is shared by all state representations. That gives us a new less redundant state representation: A list of eigenvectors, sorted by their eigenvalues. The eigenvalues are not listed however.
But is this really a convenient representation that the observer would pick? Even if the observed part of the universe is not very large, the number of eigenvectors with non-zero eigenvalue would be enormous. How could he keep track of them all? How does he even find out how many there are exactly?
The answer is that he cannot count the subspaces, because the unitary dynamics he uses to extract information from his environment is not influenced by that number. But he knows that there is at least one subspace. In order to construct a one-subspace representation we have to send all eigenvalues to zero, except for one. We cannot do that with a bijection, but we can approximate it arbitrarily well using the bijective transformations. Taking the step to non-bijectivity is the price we have to pay, because we cannot count the subspaces. The reconstruction is unique nonetheless, as the monotonicity of the transformation functions forces us to preserve the largest eigenvalue as the only nonzero eigenvalue. For the reasons mentioned earlier, we can assume the associated eigensubspace to be one-dimensional.
We have just deduced that the observer will reconstruct the state of the system containing himself and the environment he interacts with as the eigenvector of the objective state operator corresponding to the greatest eigenvalue. This eigenvector evolves unitarily and we get the usual time dependence of a state vector:
$\left|\rho_\mathrm{max}(t)\right\rangle=U(t,t_0)\left|\rho_\mathrm{max}(t_0)\right\rangle$
Next time we will discuss the consequences of dropping the isolation requirement for the system and the implied non-unitary evolution.
### Quantum Theory – A view from the inside Part II
Let us make a few formal assumptions about the virtual universe and the observer to make the following exploration easier:
1. The virtual universe has a concept of space and locality. All interactions only act locally and propagate with some kind of speed limit.
2. Time is imposed from the outside as part of the simulation.
3. The observer is a mechanism that is contained within a finite region of space for all time.
What does the observer know about his universe in the best case? And maybe even more importantly, what does he not know?
First, everything he can possibly know must be encoded somehow in the sequence of states that the universe passes through, because that’s all there is.
Then, he cannot know about certain symmetries of the formulation of the simulation. For example, if the virtual universe is a cellular automaton with an integer number in each cell, then surely the observer cannot decide which numbers are in the cells. At best he can figure out only their relationship. For example, he could add 5 to all numbers and still get a cellular automaton law that generates the exact sequence of states but with an offset of 5 in each cell. However, there might be some choice of numbers that makes the laws of his observed physics very simple or beautiful. And even if the original simulation used entirely different numbers, the observer would surely prefer the more elegant representation.
Another example of an unobservable symmetry is the state space of quantum theory. If we simulate the unitary evolution of a state vector, then it is impossible to decide, from any mechanism that is itself simulated, what the magnitude or phase of the state vector is. That is because quantum theory is linear and the state evolution commutes with scalar multiplication. Also, there is no most elegant choice for a complex factor, and the observer would decide that all work equally well. Remember, we are not assuming any kind of statistical interpretation of the state vector, just unitary dynamics, so normalization is not an obvious elegant choice. But we may assume that he wants his state descriptions to be unique, and so he identifies all vectors that are related by a nonzero complex factor. The resulting projective space containing the rays of the original state space is then his new state space.
As simple as it may seem, this is an important point. In fact, we will find a symmetry that will allow us to make a statement about reconstructing the quantum state of the universe similar to this one, with surprising consequences.
### Quantum Theory – A view from the inside Part I
The history of science has taught us many things, among them that asking new questions often leads to new insight. Often, these new questions had not been asked before because they seemed to be too philosophical, unanswerable or even mostly unscientific. Here, I would like to confront you with a question that, at first glance, might seem to fit into these categories. Nevertheless, I will show that discussing this question, specifically applied to quantum theory, leads to deep insight.
In the computer age we have grown very familiar with the concept of simulation. We can simulate practically anything we have understood physically, and we do that for very complicated and large systems like climate models of our planet. Of course, we are using approximations to reality so that our computers can handle the complexity. This, however, is a limitation that we can easily imagine not to exist. The concept of simulation remains the same, even if performed on a hypothetical machine without any practical restrictions.
We could think of any consistent set of mathematical rules and simulate it on a computer. In some sense, we would be creating our own universes with the rules that we make up. Some of these simulations might be just complex enough to allow for an internal observer to evolve, an individual that would have an inside view of our simulation. And if we had the means of communicating with him, we could ask him what he is observing.
We will possibly never get to the scientific sophistication that would allow this sort of real experiment, so what is the point of proposing it? The universe of our hypothetical observer is purely mathematical, a list of rules and an initial state, not more. The reality perceived by him must emerge in some way from the mathematical rules. Surely some aspects of his observation will be highly subjective, like the perception of color, taste or anything that just developed by chance without any profound direct connection to material reality as perceived by him. But other aspects of his observation will not be so subjective, but shared by all other hypothetical observers in the same simulated universe.
So, the question I would like to ask is "How does reality as shared by all possible observers emerge from the mathematical rules that describe the universe these observers inhabit?". Maybe I have already convinced you that the question is not so esoteric after all. But quite certainly not that it is even remotely possible to answer it. How would one distinguish objective features from subjective ones? And would we not have to know about all the emergent structures of the simulated universe first, like atoms and molecules or even brains?
I do share the above concerns, but I can also offer a way to circumvent them entirely. Let us assume that our virtual observer is not just any observer, but in fact a physicist who tries to formulate his own mathematical theory of his perceived reality. If he is a good scientist, his theories will only include those aspects of his observation shared by all other observers, and if he is successful, his final theory of all things he can observe will be a perfect mathematical description of the objective emergent reality in the virtual universe. This is an extremely helpful assumption, because it allows us to actually talk about mathematical structure instead of a fuzzy and partly psychological concept. With this we can reformulate the fundamental question as "What mathematical model does a virtual observer use to describe his perceived reality?". This formulation sounds much more reasonable and there is some hope that we may find a way to mathematically deduce the emergence of this internal view from the mathematical structure of the universe we simulate.
### Does quantum theory have to be interpreted?
Witnessing the ongoing discussion about how quantum theory should be interpreted, and the strong opinions and sometimes even dogmatic arguments, I decided to write a series of blog posts that will try to discuss the issue of interpretation as objectively as I possibly can. I will not specifically try to compare the different mainstream interpretations with each other, but rather explore the requirement of an interpretation at all and the possibility of answering the same fundamental questions using strong scientific rigor instead.
A scientific theory is usually defined as consisting of a mathematical apparatus that allows one to perform calculations of a predictive nature, and a layer of interpretational glue that connects the resulting numbers with measurements that we can actually perform. The separation of measurement and prediction works very well for all classical theories, where observer and experiment can be regarded as entirely separate entities. Quantum theory, however, makes a clean cut between the observer and the observed experiment impossible, because after the experiment the two subsystems are interwoven in a very fundamental and complicated way, even if spatially separated. The nonlocal entanglement of the quantum state space does not allow us to use the approximation of objectivity anymore.
Understanding this problem, there are two main approaches to dealing with it. The older one insists on the classical separation and is willing to live with the necessary consequences. The Copenhagen interpretation introduces the Heisenberg cut between the quantum and classical domains to recover the notion of an objective observer who can make classical statements about the measurement outcome. And with that cut we also get the interpretational glue back that relates mathematics with measurement results. This happens in the form of the well known measurement postulate, which includes the Born rule describing the statistical outcome of a measurement.
The approach has several drawbacks, however. Firstly, the location of the Heisenberg cut is more or less arbitrary as long as the observer and the system are well distinguishable, but placing it becomes impossible as soon as this is not the case anymore. Often this does not pose a problem, but it is still a shortcoming, as it keeps us from understanding certain realizable situations. Secondly, the Copenhagen and related interpretations leave us entirely in the dark as to what precisely happens during a measurement. Still, the Copenhagen interpretation is fundamentally scientific, as it focuses on measurements and predictions only, and does not take into account what is not observable.
The other main approach to the problem of observation takes the alternative route. Instead of introducing a cut, everything is taken into account. Experiment and measurement device become one system, which is itself a part of the largest system, the universe. It is then only logical to assume the time evolution of undisturbed quantum systems as formulated in the Copenhagen interpretation, the Schrödinger equation, as the evolution law for the universe. Within this approach, all predictions and results must emerge only from the properties of the evolving system, as there is no external observer who can measure anything, and no classical measurement device either. The time evolution would also be fully deterministic, and the randomness of the measurement outcome could also be regarded as an emergent property.
So when Hugh Everett III came up with his many worlds or relative state interpretation, he really did not want to create an interpretation in the sense of the Copenhagen interpretation, namely as a layer of translation between math and measurement. Rather, he wanted to create a scientific theory of emergence, where all results are derived as inherent properties of the system itself. And he was willing to accept all the consequences it brought, because the approach was rigorously scientific and only the logical consequence of avoiding the artificial Heisenberg cut.
Unfortunately, not everything worked out as well as this approach promised. Of course, the most famous consequence is the existence of arbitrarily many worlds containing observers that have seen every possible experimental outcome. While this is philosophically hard to accept for some, it surely is an acceptable consequence only if the other results work out correctly. And these results ought to be the precise statements of the measurement postulate of the Copenhagen interpretation, because those are experimentally verified.
However, while the many worlds theory gives a reasonably good explanation for the state collapse, it fails to give the right statistics. There has been some criticism regarding the collapse too, but more importantly it is generally agreed that the Born rule does not come out of the relative state theory unless extra postulates are added. Decoherence theory, which incorporates the environment to move coherence away from the experiment, or more recent attempts to use psychologically founded counting mechanisms for calculating the relative outcome probabilities, have not been convincing enough to consider the issues of the theory solved. And adding postulates of course spoils the initial idea of having an actual theory of emergence.
So where does this leave us? We have a practical approach that works most of the time, but hides some possibly important features and mechanisms from us. And we have a holistic approach that stands on a beautiful theoretical idea, but fails to deliver the right results and comes with some curious side effects.
The question that I will explore in the following articles is what Everett's approach has to do with the relationship between simulation and reality, and whether something that he and others have potentially overlooked could lead to a new theory with better results. And I promise, I'll have a few surprises for you!
http://mathoverflow.net/questions/55691?sort=newest | ## For what reductive groups $G$ over $K$ are the inner forms classified by $H^1(K, G^{ad})$?
Suppose $G$ is a connected reductive algebraic group over an arbitrary field $K$; let $Z$ be the center of $G$. The inner automorphisms of $G$ are given by $\operatorname{Inn}(G) = G / Z = G^{\operatorname{ad}}$. Set $\operatorname{Out}(G)$ to be the quotient $\operatorname{Aut}(G) / G^{\operatorname{ad}}$. The forms of $G$ are parameterized by $\operatorname{H}^1(K, \operatorname{Aut}(G))$, and the inner forms are those in the image of
$$\operatorname{H}^1(K,G^{\operatorname{ad}}) \rightarrow \operatorname{H}^1(K,\operatorname{Aut}(G)).$$ So we can recast the classification of inner forms of $G$ as:
What conditions can we put on $G$ to guarantee that the map $$\operatorname{H}^0(K,\operatorname{Aut}(G)) \rightarrow \operatorname{H}^0(K,\operatorname{Out}(G))$$ is surjective?
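(Editorial note, not part of the original question: the recast comes from the exact sequence of algebraic groups
$$1 \to G^{\operatorname{ad}} \to \operatorname{Aut}(G) \to \operatorname{Out}(G) \to 1,$$
which induces an exact sequence of pointed sets in Galois cohomology,
$$\operatorname{Aut}(G)(K) \to \operatorname{Out}(G)(K) \xrightarrow{\ \delta\ } \operatorname{H}^1(K, G^{\operatorname{ad}}) \to \operatorname{H}^1(K, \operatorname{Aut}(G)).$$
Surjectivity of the first map is equivalent to the connecting map $\delta$ being trivial, i.e. to the kernel of $\operatorname{H}^1(K, G^{\operatorname{ad}}) \to \operatorname{H}^1(K, \operatorname{Aut}(G))$ consisting of the base point alone; the analogous statement for the other fibers follows by the usual twisting argument, cf. Serre, Galois Cohomology, I.5.)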
I'm primarily interested in the connected reductive case here, but I would be curious about the more general case as well.
On a related note, I've frequently seen the claim that for quasisimple $G$, the group $\operatorname{Out}(G)$ is given by the automorphism group of the Dynkin diagram of $G$. This holds for some reductive $G$ (such as $\operatorname{GL}_n$) but not for others (most nontrivial tori have a nontrivial outer automorphism group and a trivial Dynkin diagram).
How can we extend the description of $\operatorname{Out}(G)$ from the case of quasi-simple $G$ to connected reductive $G$?
## 2 Answers
I want to address your first question, regarding what conditions you can put on $G$ to guarantee that the map $Aut(G)(K) \rightarrow Out(G)(K)$ is surjective. I am only going to talk about the case where $G$ is semisimple. If $G$ is quasi-split, then the map has a section, so it is surjective, as you can see from the references given by fherzig or Victor.
But if $G$ is not quasi-split, there can be obstructions coming from both the Tits index and Tits algebras. One example to consider is $G = Spin(6,2)$ (of type $D_4$) over the real numbers. In that case $Out(G)(K)$ is the symmetric group on 3 letters, but the image of your map has order 2, corresponding to the outer automorphism given by a hyperplane reflection.
If $G$ is absolutely simple, you can hope that the Tits algebras provide the only obstruction to the surjectivity of your map. But this seems to be open. For more details on these obstructions and some positive results (that in some cases the Tits algebras are the only obstruction), see section 2 of my recent paper "Outer automorphisms...".
Split reductive groups are classified by a combinatorial structure called a root datum. If your $G$ is split, the outer automorphism group is the constant group scheme of automorphisms of this structure preserving the simple roots. If $G$ is semisimple simply connected or adjoint, it is indeed just the automorphism group of the Dynkin diagram, but in the other semisimple cases it can be a proper subgroup of the latter; for general reductive groups it is finitely generated but not always finite. If $G$ is not split, you can get a twisted form of this constant scheme. SGA 3 Exp. XXIV may help.
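(Editorial gloss, not part of the answer: a root datum is a quadruple
$$\Psi = (X, \Phi, X^\vee, \Phi^\vee),$$
where $X$ and $X^\vee$ are finite free $\mathbb{Z}$-modules in duality, and $\Phi \subset X$, $\Phi^\vee \subset X^\vee$ are finite sets of roots and coroots with a bijection $\alpha \mapsto \alpha^\vee$ such that $\langle \alpha, \alpha^\vee \rangle = 2$ and the reflections $s_\alpha(x) = x - \langle x, \alpha^\vee \rangle \alpha$ and $s_{\alpha^\vee}$ preserve $\Phi$ and $\Phi^\vee$ respectively. For $\operatorname{GL}_n$ one has $X = X^\vee = \mathbb{Z}^n$ with $\Phi = \{e_i - e_j : i \ne j\}$.)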
By the way, $GL_n$ has more outer automorphisms, the so-called central ones: $g \mapsto \det(g)^m g$ for any integer $m$. – Victor Petrov Feb 21 2011 at 20:03
These maps aren't automorphisms if $m \ne 0$ and $1+mn \ne -1$: they aren't injective on the centre. In the other cases they are conjugate to $g \mapsto {}^t g^{-1}$. – fherzig Feb 21 2011 at 21:13
It might be helpful to note that any pinning (i.e. choice of a non-trivial element in each simple root subgroup) gives a canonical splitting of the sequence $1 \to Inn(G) \to Aut(G) \to Aut (based root datum) \to 1$. See e.g. Prop. 2.13 in Springer's notes in Corvallis (over alg. closed fields): ams.org/publications/online-books/pspum331-index (first link). This is important in the construction of L-groups (see Borel's notes in Corvallis, volume 2). – fherzig Feb 21 2011 at 21:29
http://catalog.flatworldknowledge.com/bookhub/reader/4309?e=averill_1.0-ch96appE | # General Chemistry: Principles, Patterns, and Applications, v. 1.0
by Bruce Averill and Patricia Eldredge
# Chapter 29 Appendix E: Standard Reduction Potentials at 25°C
| Half-Reaction | E° (V) |
|---|---|
| Ac³⁺ + 3e⁻ → Ac | −2.20 |
| Ag⁺ + e⁻ → Ag | 0.7996 |
| AgBr + e⁻ → Ag + Br⁻ | 0.07133 |
| AgCl + e⁻ → Ag + Cl⁻ | 0.22233 |
| Ag₂CrO₄ + 2e⁻ → 2Ag + CrO₄²⁻ | 0.4470 |
| AgI + e⁻ → Ag + I⁻ | −0.15224 |
| Ag₂S + 2e⁻ → 2Ag + S²⁻ | −0.691 |
| Ag₂S + 2H⁺ + 2e⁻ → 2Ag + H₂S | −0.0366 |
| AgSCN + e⁻ → Ag + SCN⁻ | 0.08951 |
| Al³⁺ + 3e⁻ → Al | −1.662 |
| Al(OH)₄⁻ + 3e⁻ → Al + 4OH⁻ | −2.328 |
| Am³⁺ + 3e⁻ → Am | −2.048 |
| As + 3H⁺ + 3e⁻ → AsH₃ | −0.608 |
| H₃AsO₄ + 2H⁺ + 2e⁻ → HAsO₂ + 2H₂O | 0.560 |
| Au⁺ + e⁻ → Au | 1.692 |
| Au³⁺ + 3e⁻ → Au | 1.498 |
| H₃BO₃ + 3H⁺ + 3e⁻ → B + 3H₂O | −0.8698 |
| Ba²⁺ + 2e⁻ → Ba | −2.912 |
| Be²⁺ + 2e⁻ → Be | −1.847 |
| Bi³⁺ + 3e⁻ → Bi | 0.308 |
| BiO⁺ + 2H⁺ + 3e⁻ → Bi + H₂O | 0.320 |
| Br₂(aq) + 2e⁻ → 2Br⁻ | 1.0873 |
| Br₂(l) + 2e⁻ → 2Br⁻ | 1.066 |
| BrO₃⁻ + 6H⁺ + 5e⁻ → ½Br₂ + 3H₂O | 1.482 |
| BrO₃⁻ + 6H⁺ + 6e⁻ → Br⁻ + 3H₂O | 1.423 |
| CO₂ + 2H⁺ + 2e⁻ → HCO₂H | −0.199 |
| Ca²⁺ + 2e⁻ → Ca | −2.868 |
| Ca(OH)₂ + 2e⁻ → Ca + 2OH⁻ | −3.02 |
| Cd²⁺ + 2e⁻ → Cd | −0.4030 |
| CdSO₄ + 2e⁻ → Cd + SO₄²⁻ | −0.246 |
| Cd(OH)₄²⁻ + 2e⁻ → Cd + 4OH⁻ | −0.658 |
| Ce³⁺ + 3e⁻ → Ce | −2.336 |
| Ce⁴⁺ + e⁻ → Ce³⁺ | 1.72 |
| Cl₂(g) + 2e⁻ → 2Cl⁻ | 1.35827 |
| HClO + H⁺ + e⁻ → ½Cl₂ + H₂O | 1.611 |
| HClO + H⁺ + 2e⁻ → Cl⁻ + H₂O | 1.482 |
| ClO⁻ + H₂O + 2e⁻ → Cl⁻ + 2OH⁻ | 0.81 |
| ClO₃⁻ + 6H⁺ + 5e⁻ → ½Cl₂ + 3H₂O | 1.47 |
| ClO₃⁻ + 6H⁺ + 6e⁻ → Cl⁻ + 3H₂O | 1.451 |
| ClO₄⁻ + 8H⁺ + 7e⁻ → ½Cl₂ + 4H₂O | 1.39 |
| ClO₄⁻ + 8H⁺ + 8e⁻ → Cl⁻ + 4H₂O | 1.389 |
| Co²⁺ + 2e⁻ → Co | −0.28 |
| Co³⁺ + e⁻ → Co²⁺ | 1.92 |
| Cr²⁺ + 2e⁻ → Cr | −0.913 |
| Cr³⁺ + e⁻ → Cr²⁺ | −0.407 |
| Cr³⁺ + 3e⁻ → Cr | −0.744 |
| Cr₂O₇²⁻ + 14H⁺ + 6e⁻ → 2Cr³⁺ + 7H₂O | 1.232 |
| CrO₄²⁻ + 4H₂O + 3e⁻ → Cr(OH)₃ + 5OH⁻ | −0.13 |
| Cs⁺ + e⁻ → Cs | −3.026 |
| Cu⁺ + e⁻ → Cu | 0.521 |
| Cu²⁺ + e⁻ → Cu⁺ | 0.153 |
| Cu²⁺ + 2e⁻ → Cu | 0.3419 |
| CuI₂⁻ + e⁻ → Cu + 2I⁻ | 0.00 |
| Cu₂O + H₂O + 2e⁻ → 2Cu + 2OH⁻ | −0.360 |
| Dy³⁺ + 3e⁻ → Dy | −2.295 |
| Er³⁺ + 3e⁻ → Er | −2.331 |
| Es³⁺ + 3e⁻ → Es | −1.91 |
| Eu²⁺ + 2e⁻ → Eu | −2.812 |
| Eu³⁺ + 3e⁻ → Eu | −1.991 |
| F₂ + 2e⁻ → 2F⁻ | 2.866 |
| Fe²⁺ + 2e⁻ → Fe | −0.447 |
| Fe³⁺ + 3e⁻ → Fe | −0.037 |
| Fe³⁺ + e⁻ → Fe²⁺ | 0.771 |
| [Fe(CN)₆]³⁻ + e⁻ → [Fe(CN)₆]⁴⁻ | 0.358 |
| Fe(OH)₃ + e⁻ → Fe(OH)₂ + OH⁻ | −0.56 |
| Fm³⁺ + 3e⁻ → Fm | −1.89 |
| Fm²⁺ + 2e⁻ → Fm | −2.30 |
| Ga³⁺ + 3e⁻ → Ga | −0.549 |
| Gd³⁺ + 3e⁻ → Gd | −2.279 |
| Ge²⁺ + 2e⁻ → Ge | 0.24 |
| Ge⁴⁺ + 4e⁻ → Ge | 0.124 |
| 2H⁺ + 2e⁻ → H₂ | 0.00000 |
| H₂ + 2e⁻ → 2H⁻ | −2.23 |
| 2H₂O + 2e⁻ → H₂ + 2OH⁻ | −0.8277 |
| H₂O₂ + 2H⁺ + 2e⁻ → 2H₂O | 1.776 |
| Hf⁴⁺ + 4e⁻ → Hf | −1.55 |
| Hg²⁺ + 2e⁻ → Hg | 0.851 |
| 2Hg²⁺ + 2e⁻ → Hg₂²⁺ | 0.920 |
| Hg₂Cl₂ + 2e⁻ → 2Hg + 2Cl⁻ | 0.26808 |
| Ho²⁺ + 2e⁻ → Ho | −2.1 |
| Ho³⁺ + 3e⁻ → Ho | −2.33 |
| I₂ + 2e⁻ → 2I⁻ | 0.5355 |
| I₃⁻ + 2e⁻ → 3I⁻ | 0.536 |
| 2IO₃⁻ + 12H⁺ + 10e⁻ → I₂ + 6H₂O | 1.195 |
| IO₃⁻ + 6H⁺ + 6e⁻ → I⁻ + 3H₂O | 1.085 |
| In⁺ + e⁻ → In | −0.14 |
| In³⁺ + 2e⁻ → In⁺ | −0.443 |
| In³⁺ + 3e⁻ → In | −0.3382 |
| Ir³⁺ + 3e⁻ → Ir | 1.156 |
| K⁺ + e⁻ → K | −2.931 |
| La³⁺ + 3e⁻ → La | −2.379 |
| Li⁺ + e⁻ → Li | −3.0401 |
| Lr³⁺ + 3e⁻ → Lr | −1.96 |
| Lu³⁺ + 3e⁻ → Lu | −2.28 |
| Md³⁺ + 3e⁻ → Md | −1.65 |
| Md²⁺ + 2e⁻ → Md | −2.40 |
| Mg²⁺ + 2e⁻ → Mg | −2.372 |
| Mn²⁺ + 2e⁻ → Mn | −1.185 |
| MnO₂ + 4H⁺ + 2e⁻ → Mn²⁺ + 2H₂O | 1.224 |
| MnO₄⁻ + 8H⁺ + 5e⁻ → Mn²⁺ + 4H₂O | 1.507 |
| MnO₄⁻ + 2H₂O + 3e⁻ → MnO₂ + 4OH⁻ | 0.595 |
| Mo³⁺ + 3e⁻ → Mo | −0.200 |
| N₂ + 2H₂O + 6H⁺ + 6e⁻ → 2NH₄OH | 0.092 |
| HNO₂ + H⁺ + e⁻ → NO + H₂O | 0.983 |
| NO₃⁻ + 4H⁺ + 3e⁻ → NO + 2H₂O | 0.957 |
| Na⁺ + e⁻ → Na | −2.71 |
| Nb³⁺ + 3e⁻ → Nb | −1.099 |
| Nd³⁺ + 3e⁻ → Nd | −2.323 |
| Ni²⁺ + 2e⁻ → Ni | −0.257 |
| No³⁺ + 3e⁻ → No | −1.20 |
| No²⁺ + 2e⁻ → No | −2.50 |
| Np³⁺ + 3e⁻ → Np | −1.856 |
| O₂ + 2H⁺ + 2e⁻ → H₂O₂ | 0.695 |
| O₂ + 4H⁺ + 4e⁻ → 2H₂O | 1.229 |
| O₂ + 2H₂O + 2e⁻ → H₂O₂ + 2OH⁻ | −0.146 |
| O₃ + 2H⁺ + 2e⁻ → O₂ + H₂O | 2.076 |
| OsO₄ + 8H⁺ + 8e⁻ → Os + 4H₂O | 0.838 |
| P + 3H₂O + 3e⁻ → PH₃(g) + 3OH⁻ | −0.87 |
| PO₄³⁻ + 2H₂O + 2e⁻ → HPO₃²⁻ + 3OH⁻ | −1.05 |
| Pa³⁺ + 3e⁻ → Pa | −1.34 |
| Pa⁴⁺ + 4e⁻ → Pa | −1.49 |
| Pb²⁺ + 2e⁻ → Pb | −0.1262 |
| PbO + H₂O + 2e⁻ → Pb + 2OH⁻ | −0.580 |
| PbO₂ + SO₄²⁻ + 4H⁺ + 2e⁻ → PbSO₄ + 2H₂O | 1.6913 |
| PbSO₄ + 2e⁻ → Pb + SO₄²⁻ | −0.3588 |
| Pd²⁺ + 2e⁻ → Pd | 0.951 |
| Pm³⁺ + 3e⁻ → Pm | −2.30 |
| Po⁴⁺ + 4e⁻ → Po | 0.76 |
| Pr³⁺ + 3e⁻ → Pr | −2.353 |
| Pt²⁺ + 2e⁻ → Pt | 1.18 |
| [PtCl₄]²⁻ + 2e⁻ → Pt + 4Cl⁻ | 0.755 |
| Pu³⁺ + 3e⁻ → Pu | −2.031 |
| Ra²⁺ + 2e⁻ → Ra | −2.8 |
| Rb⁺ + e⁻ → Rb | −2.98 |
| Re³⁺ + 3e⁻ → Re | 0.300 |
| Rh³⁺ + 3e⁻ → Rh | 0.758 |
| Ru³⁺ + e⁻ → Ru²⁺ | 0.2487 |
| S + 2e⁻ → S²⁻ | −0.47627 |
| S + 2H⁺ + 2e⁻ → H₂S(aq) | 0.142 |
| 2S + 2e⁻ → S₂²⁻ | −0.42836 |
| H₂SO₃ + 4H⁺ + 4e⁻ → S + 3H₂O | 0.449 |
| SO₄²⁻ + H₂O + 2e⁻ → SO₃²⁻ + 2OH⁻ | −0.93 |
| Sb + 3H⁺ + 3e⁻ → SbH₃ | −0.510 |
| Sc³⁺ + 3e⁻ → Sc | −2.077 |
| Se + 2e⁻ → Se²⁻ | −0.924 |
| Se + 2H⁺ + 2e⁻ → H₂Se | −0.082 |
| SiF₆²⁻ + 4e⁻ → Si + 6F⁻ | −1.24 |
| Sm³⁺ + 3e⁻ → Sm | −2.304 |
| Sn²⁺ + 2e⁻ → Sn | −0.1375 |
| Sn⁴⁺ + 2e⁻ → Sn²⁺ | 0.151 |
| Sr²⁺ + 2e⁻ → Sr | −2.899 |
| Ta³⁺ + 3e⁻ → Ta | −0.6 |
| TcO₄⁻ + 4H⁺ + 3e⁻ → TcO₂ + 2H₂O | 0.782 |
| TcO₄⁻ + 8H⁺ + 7e⁻ → Tc + 4H₂O | 0.472 |
| Tb³⁺ + 3e⁻ → Tb | −2.28 |
| Te + 2e⁻ → Te²⁻ | −1.143 |
| Te⁴⁺ + 4e⁻ → Te | 0.568 |
| Th⁴⁺ + 4e⁻ → Th | −1.899 |
| Ti²⁺ + 2e⁻ → Ti | −1.630 |
| Tl⁺ + e⁻ → Tl | −0.336 |
| Tl³⁺ + 2e⁻ → Tl⁺ | 1.252 |
| Tl³⁺ + 3e⁻ → Tl | 0.741 |
| Tm³⁺ + 3e⁻ → Tm | −2.319 |
| U³⁺ + 3e⁻ → U | −1.798 |
| VO₂⁺ + 2H⁺ + e⁻ → VO²⁺ + H₂O | 0.991 |
| V₂O₅ + 6H⁺ + 2e⁻ → 2VO²⁺ + 3H₂O | 0.957 |
| W₂O₅ + 2H⁺ + 2e⁻ → 2WO₂ + H₂O | −0.031 |
| XeO₃ + 6H⁺ + 6e⁻ → Xe + 3H₂O | 2.10 |
| Y³⁺ + 3e⁻ → Y | −2.372 |
| Yb³⁺ + 3e⁻ → Yb | −2.19 |
| Zn²⁺ + 2e⁻ → Zn | −0.7618 |
| Zn(OH)₄²⁻ + 2e⁻ → Zn + 4OH⁻ | −1.199 |
| Zn(OH)₂ + 2e⁻ → Zn + 2OH⁻ | −1.249 |
| ZrO₂ + 4H⁺ + 4e⁻ → Zr + 2H₂O | −1.553 |
| Zr⁴⁺ + 4e⁻ → Zr | −1.45 |
Source of data: CRC Handbook of Chemistry and Physics, 84th Edition (2004).
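As a worked example of how to use the table (an editorial illustration; the two potentials are taken from the entries above): for the Daniell cell, zinc is oxidized at the anode and copper(II) is reduced at the cathode, so the standard cell potential is

$$E^\circ_{\text{cell}} = E^\circ_{\text{cathode}} - E^\circ_{\text{anode}} = 0.3419\ \text{V} - (-0.7618\ \text{V}) = 1.1037\ \text{V}.$$

Note that $E^\circ$ is an intensive property: multiplying a half-reaction by a constant to balance electrons does not change its $E^\circ$.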
http://physics.stackexchange.com/questions/32427/how-convincing-is-the-evidence-for-dark-matter-annihilation-at-130-gev-in-the-ga?answertab=votes | # How convincing is the evidence for dark matter annihilation at 130 GeV in the galactic center from the Fermi Satellite data?
I listened to Christoph Weniger present his results at SLAC today. His paper is here: http://arxiv.org/abs/1204.2797, and a different analysis is here: http://arxiv.org/abs/1205.1045. The data seems convincing to me! Is this result consistent with theoretical expectations for DM candidates? In particular, is the reported annihilation cross section into photons consistent with the estimated cross sections for the various WIMP dark matter candidate particles (such as LSP dark matter candidates)? Are there any other reasonable astrophysical mechanisms that would produce this 130 GeV photon line?
The summary for the talk claims: Using 43 months of public gamma-ray data from the Fermi Large Area Telescope, we find in regions close to the galactic center at energies of 130 GeV a 4.6 sigma excess that is not inconsistent with a gamma-ray line from dark matter annihilation. When taking into account the look-elsewhere effect, the significance of the observed signature is 3.3 sigma. If interpreted in terms of dark matter particles annihilating into a photon pair, the observations imply a partial annihilation cross-section of about $10^{-27}\ \mathrm{cm^3\, s^{-1}}$ and a dark matter mass around 130 GeV.
You kinda answer your own question. One observation at better than 3 sigma. OK, so that's good for a first report (and very exciting), but nothing is nailed down. – dmckee♦ Jul 20 '12 at 3:10
I'm a little concerned that "How convincing is this breaking news" is a bit of a subjective discussion rather than a real question, but I'll let it go a bit to see if we get non-subjective answers and to allow input from interested users. – dmckee♦ Jul 20 '12 at 3:11
@dmckee The one particular piece of concrete information I don't know is whether the reported cross section is reasonable when compared to proposed DM candidates. I was hoping someone here could nail down that one concrete fact. If this reported cross section is many orders of magnitude too large, I would question its validity, whereas if it is a reasonable cross section then I would find this evidence more convincing. By the way, this is not brand new information - it has been reported since April/May 2012. – FrankH Jul 20 '12 at 3:18
Well, that would certainly be a nice piece of concrete, non-subjective, non-discussion. – dmckee♦ Jul 20 '12 at 3:20
## 2 Answers
Another very fresh paper, presented at Dark Attack yesterday by Hektor et al.,
http://arxiv.org/abs/1207.4466
also claims that the signal is there – not only in the center of the Milky Way but also in galaxy clusters, at the same 130 GeV energy. This 3+ sigma evidence from clusters is arguably very independent. All these hints and several additional papers of the sort look very intriguing.
There is negative news, too. Fermi hasn't confirmed the "discovery status" of the line yet. Puzzles appear in detailed theoretical investigations, too. Cohen et al.
http://arxiv.org/abs/1207.0800
claim that they have excluded the neutralino – the most widely believed identity of the WIMP – as the source, because the neutralino would lead to additional traces in the data from processes involving other Standard Model particles, and these traces seem to be absent. The WIMP could of course be a different particle from the supersymmetric neutralino.
Another paper also disfavors the neutralino, because the signal is claimed to require much higher cross sections than SUSY models predict:
http://arxiv.org/abs/1207.4434
But one must be careful and realize that the status of the "5 sigma discovery" here isn't analogous to the Higgs because in the case of the Higgs, the "canonical" null hypothesis without the Higgs is well-defined and well-tested. In this case, the 130-GeV-line-free hypothesis is much more murky. There may still exist astrophysical processes that tend to produce rather sharp peaks around 130 GeV even though there are no particle species of this mass. I think and hope it is unlikely but it hasn't really been excluded.
Everyone who studies these things in detail may want to look at the list (or contents) of all papers referring to Weniger's original observation – it's currently 33 papers:
http://inspirehep.net/search?ln=en&p=refersto%3Arecid%3A1110710
I'm reluctant to say much about this right now, right here, but: the cross section is a reasonable one for a loop annihilation to $\gamma\gamma$. From the model-building point of view the thing to worry about is that many DM models that predict such a loop would also predict much more frequent tree-level annihilation processes that are ruled out by the lack of continuum gamma rays from the same spot. From the astrophysical point of view the puzzle is why the gamma rays seem to come not quite from the galactic center but from a couple hundred parsecs away. And then there may be some other puzzling features in the data, in terms of similar lines being seen when looking in places where you don't expect to see dark matter.
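A hedged back-of-envelope to quantify that tension (the numbers here are an editorial addition, not the answerer's): the canonical thermal-relic annihilation cross section is $\langle\sigma v\rangle \approx 3\times 10^{-26}\ \mathrm{cm^3\, s^{-1}}$, so the reported $\gamma\gamma$ partial cross section corresponds to a branching-like ratio of

$$\frac{\langle\sigma v\rangle_{\gamma\gamma}}{\langle\sigma v\rangle_{\text{thermal}}} \approx \frac{10^{-27}}{3\times 10^{-26}} \approx 0.03,$$

whereas a one-loop $\gamma\gamma$ channel would naively be suppressed by roughly $\alpha^2 \sim 10^{-4}$ relative to the tree-level total. This mismatch is one way to see why a generic WIMP explaining the line would tend to predict a large accompanying continuum of gamma rays, which is not observed.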