url (stringlengths 17–172) | text (stringlengths 44–1.14M) | metadata (stringlengths 820–832)
---|---|---|
http://mathoverflow.net/questions/28703/rearrangement-of-dirichlet-series-and-continuity-at-the-abscissa-of-convergence | ## rearrangement of Dirichlet series and continuity at the abscissa of convergence
Let $D(s):=\sum_{n\geq 1}a_n e^{-\lambda_n s}$ be a general Dirichlet series of type $(\lambda_n) _{n\geq 1}$ with finite abscissae of convergence and absolute convergence, respectively $\sigma_c$ and $\sigma_a$. Assume also $\sigma_c < \sigma_a$. Pick a fixed $s_0\in\mathbb{C}$ with $\sigma_c <\Re(s_0)<\sigma_a$. Then $\sum_{n\geq 1}a_n e^{-\lambda_n s_0}$ is conditionally convergent. Now, according to a theorem of Steinitz (the generalization of the Riemann series theorem to finite-dimensional vector spaces over the reals)* there is an affine subspace $A\subset\mathbb{C}$ such that $$\forall w\in A\ \exists \tau\in S(\mathbb{N}):\ \sum_{n\geq 1}a_{\tau(n)} e^{-\lambda_{\tau(n)} s_0}=w,$$ where $S(\mathbb{N})$ denotes the set of all bijections of the natural numbers onto themselves ("rearrangements"). We discuss the case where $A$ is a non-trivial affine subspace (it consists of more than just one point), so we can pick $w\in A$ with $D(s_0)\neq w$ and a corresponding $\tau\in S(\mathbb{N})$. Now, let us consider the Dirichlet series defined by $$D_{\tau}(s):=\sum_{n\geq 1}a_{\tau(n)} e^{-\lambda_{\tau(n)} s}.$$ The Dirichlet series $D_{\tau}$ converges at the point $s_0$ and hence converges locally uniformly in the open half-plane ${\Re(z)> \Re(s_0)}$, thus being analytic there. However, $D$ and $D_{\tau}$ agree on the open half-plane ${\Re(z)>\sigma_a}$ since absolute convergence is invariant under rearrangements, hence they also agree on the open half-plane ${\Re(z)>\Re(s_0)}$ by analyticity. Since $D(s_0)\neq w=D_{\tau}(s_0)$, it follows that $D_{\tau}$ cannot be continuous at $s_0$, although it converges there. Provided my argument so far has no flaws, the only remaining possibility is that $\Re(s_0)$ is the abscissa of convergence of $D_{\tau}$. Hence my question:
(Q) Given that a Dirichlet series converges on its abscissa of convergence, are there any known conditions for it to also be continuous there? How about holomorphic? (Or am I missing something here?)
Note that the exact behavior of Dirichlet series on the abscissa of convergence is in general an open problem.
Thanks in advance!
*see for instance the end of http://en.wikipedia.org/wiki/Riemann_series_theorem as well as http://de.wikipedia.org/wiki/Steinitzscher_Umordnungssatz (only in German)
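A standard concrete example of this setup, for definiteness: the alternating series $$\eta(s)=\sum_{n\geq 1}(-1)^{n-1}n^{-s}=\sum_{n\geq 1}a_n e^{-\lambda_n s},\qquad a_n=(-1)^{n-1},\quad \lambda_n=\log n,$$ has $\sigma_c=0$ and $\sigma_a=1$, so any $s_0$ with $0<\Re(s_0)<1$ yields a conditionally convergent series to which Steinitz's theorem applies.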
-
As pointed out by fedja on the AoPS forums, the flaw of the above argumentation is in fact that rearranging the general Dirichlet series breaks the condition $\lambda_n \uparrow \infty$, hence all the standard statements on general Dirichlet series are not anymore applicable in this form. – ex falso quodlibet Jun 29 2010 at 23:38 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 26, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.907006561756134, "perplexity_flag": "head"} |
http://math.stackexchange.com/questions/125420/in-what-ways-is-an-inner-product-different-from-a-dot-product?answertab=votes | # In what ways is an inner product different from a dot product?
I assume I am correct when I say this, but these two things aren't exactly the same thing, are they (the dot product is a type of inner product, right)? In what ways are they different? I looked it up but all I could find was some mention of the Gram–Schmidt process. How does the Gram–Schmidt orthonormalization process come about here?
Note: I was about to enter the tags for this question and I typed in "inner" and up came a suggestion "inner-product-spaces" and it said
An inner product space is a vector space equipped with an inner product. The inner product is a generalisation of the "dot" product.
How so? What kind of "generalization" is meant? (I know this isn't an altogether well-suited question to put up here, but I hope for an answer :)
Any Help is much appreciated!
Thanks in Advance!
-
Generalization means that we observe the nice properties of a dot product (multilinear, etc) and define a new object - inner product - which is defined to be something having these nice properties. This means that a dot product is an inner product, but not every inner product is a dot product. – Asaf Karagila Mar 28 '12 at 12:10
Since you mention the Gram-Schmidt process, it works for any countable, linearly independent collection of vectors in an inner product space. – M Turgeon Mar 28 '12 at 12:36
"dot product is a type of inner product right?": exactly! There is the dot product on $\mathbb{R}^n$, which is a special kind of inner product. – nik Mar 28 '12 at 12:52
For finite dimensional spaces, any inner product can be regarded as a dot product with respect to some basis. The proof uses the Gram-Schmidt process to construct the basis. – Grumpy Parsnip Mar 28 '12 at 12:59
## 2 Answers
This answer is an elaboration on one of the comments above.
Assume that we are working with finite dimensional spaces. Furthermore, for simplicity assume that the underlying field $\mathbb{F}$ is the set of real numbers $\mathbb{R}$. An inner product on a vector space $V$ is a function $(.,.):V\times V\to \mathbb{F}$ that obeys the following properties for all $u,v,w \in V$ and all $\lambda \in \mathbb{F}$:
1. $(u,v)=(v,u)$
2. $(\lambda v,u)=\lambda(v,u)$
3. $(v+w,u)=(v,u)+(w,u)$
4. $(u,u)\geq 0$ and $(u,u)=0$ iff $u=0$
Now if you are familiar with the Gram Schmidt process, any finite dimensional vector space equipped with an inner product (often called a Euclidean space) has an orthonormal basis.
The proof of this involves two non-trivial steps. The first step is to prove that any finite dimensional space has a basis. The second step is to show that this basis can be turned into an orthonormal basis. For a detailed explanation of how to do this I advise you to look into practically any book on linear algebra.
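As a concrete sketch of the second step, here is a minimal Gram–Schmidt routine in Python/NumPy; the inner product is passed in as a function, and `np.dot` (the dot product) is just the default choice, so the same recursion works for any inner product:

```python
import numpy as np

def gram_schmidt(vectors, inner=np.dot):
    """Orthonormalize a linearly independent list of vectors
    with respect to the given inner product."""
    basis = []
    for v in vectors:
        w = np.array(v, dtype=float)
        for e in basis:
            w = w - inner(w, e) * e      # remove the component along e
        w = w / np.sqrt(inner(w, w))     # normalize
        basis.append(w)
    return basis

# Example in R^2 with the ordinary dot product:
e1, e2 = gram_schmidt([[3.0, 1.0], [2.0, 2.0]])
assert abs(np.dot(e1, e2)) < 1e-12      # orthogonal
assert abs(np.dot(e1, e1) - 1) < 1e-12  # unit length
```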
If a vector space has an orthonormal basis one can show that any inner product on that space is the dot product in the orthonormal basis.
Proof: Let $u,v\in V$. Since $V$ is a Euclidean space it has an orthonormal basis $\{e_1,\ldots,e_n\}$, hence we have $u=x_1e_1+\ldots+x_ne_n$ and $v=y_1e_1+\ldots+y_ne_n$ for some scalars $x_1,\ldots,x_n$ and $y_1,\ldots,y_n$ in the underlying field. But then $$(u,v)=(x_1e_1+\ldots+x_ne_n,\,y_1e_1+\ldots+y_ne_n)=\sum_{i=1}^n x_1y_i(e_1,e_i)+\ldots+\sum_{i=1}^n x_ny_i(e_n,e_i)=x_1y_1+\ldots+x_ny_n,$$
where we used several times that $\{e_1,\ldots,e_n\}$ is an orthonormal basis and that $(.,.)$ is an inner product. More precisely, we used properties 2 and 3 of the inner product.
-
The dot product on $\mathbb{R}^n$ is equivalent to matrix multiplication of a row and a column vector. The inner product generalizes this because it is defined more generally. For example, consider the vector space $V$ of continuous, square integrable complex-valued functions on $(-\pi,\pi)$. There is a nifty basis for this space, consisting of the functions $f_k(t)=e^{ikt}$. Turns out a nice inner product on the space is the integral of the product of two functions over the domain. The inner product of a function $f(t)\in V$ with a basis vector $f_k$ (or its complex conjugate, actually) then gives us the magnitude $c_k$ of the projection onto the subspace $\mathbb{C}f_k$, a.k.a. the Fourier coefficient $c_k=\int_{-\pi}^{\pi}f(t)\,e^{-ikt}\,dt$, and the function can be represented as the sum of its projections onto each such basis vector: $f=\sum_{k=0}^\infty c_k f_k$. This is called Fourier analysis. You can take any finite real interval $T$ as your domain, or you can take the whole real line, or higher dimensional cartesian products of these (tori, etc). There is a duality between the domains of the original function space and that of the Fourier transform: $T\cong\mathbb{S}^1\leftrightarrow\mathbb{Z}$, $\mathbb{R}\leftrightarrow\mathbb{R}$; in other words, Fourier transforms of periodic functions are Fourier series, while Fourier transforms of functions on the real line are again functions on the real line.
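A small numerical sanity check of the orthogonality underlying this (this sketch assumes the normalized inner product $(f,g)=\frac{1}{2\pi}\int_{-\pi}^{\pi}f(t)\overline{g(t)}\,dt$, a slightly different normalization from the coefficient formula above):

```python
import numpy as np

# Check numerically that f_k(t) = exp(i*k*t) form an orthonormal family
# under (f, g) = (1/(2*pi)) * integral_{-pi}^{pi} f(t) * conj(g(t)) dt.
t = np.linspace(-np.pi, np.pi, 20001)

def inner(f_vals, g_vals):
    return np.trapz(f_vals * np.conj(g_vals), t) / (2 * np.pi)

for j in range(-2, 3):
    for k in range(-2, 3):
        val = inner(np.exp(1j * j * t), np.exp(1j * k * t))
        expected = 1.0 if j == k else 0.0
        assert abs(val - expected) < 1e-6
```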
What does that have to do with the question at hand? I guess you were going to define the dot product on $L^2(\mathbb{S}^1)$ etc., but as it is, your answer is barely one. Also the $f_k$ are an Hilbert basis or (more ambiguously) an orthonormal basis; not every element is in their span, but their span is dense in the space. – nik Mar 28 '12 at 12:49 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 36, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9343975186347961, "perplexity_flag": "head"} |
http://mathoverflow.net/questions/84148?sort=oldest | ## polynomial matrices and their spectra
Hello, all!
I have a non-singular square polynomial matrix over $\mathbf{F} _q[x]$, $$\underset{l \times l}{G(x)} = \left( \begin{matrix} g _{0,0}(x) & g _{0,1}(x) & \ldots & g _{0,l-1}(x) \\ \vdots & \vdots & \vdots & \vdots \\ g _{l-1,0}(x) & g _{l-1,1}(x) & \ldots & g _{l-1,l-1}(x) \end{matrix} \right).$$ I call the roots of the equation $\det G(x) = 0$ the eigenvalues of $G(x)$; they may lie in some extension $\mathbf{F} _{q^r}$ of the finite field $\mathbf{F} _q$. I call a solution $\underset{l \times 1}{v _{i,j}}$ of the system of equations $G(\lambda _i) v _{i,j} = 0$ an eigenvector corresponding to the eigenvalue $\lambda _i$. So $v _{i,j}$ is the $j$-th eigenvector corresponding to the eigenvalue $\lambda _i$.
I suppose that the eigenvalues of $G(x)$ have equal algebraic and geometric multiplicities.
My problem is to prove that if some $l \times 1$ vector of polynomials $r(x)$ satisfies $\underset{l \times 1}{r(\lambda _i)}^T \underset{1 \times l}{v _{i,j}} = 0$ $\forall i, j$, then it must belong to the space spanned by the rows of $G(x) = (\underset{1 \times l}{g_0(x)}, \ldots, \underset{1 \times l}{g_{l-1}(x)})$: that is, $r(x) = \sum_{t = 0}^{l-1} b_t(x) \cdot g_t(x)^T$ for some $b_t(x) \in \mathbf{F}_q[x]$. How can it be proved? What technique can be used for that?
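To fix the notation, here is a tiny worked instance of these definitions (a $2\times 2$ toy case over $\mathbf{F}_2$, added only for illustration): take $$G(x)=\begin{pmatrix} x & 1\\ 0 & x+1\end{pmatrix},\qquad \det G(x)=x(x+1),$$ so the eigenvalues are $\lambda_1=0$ and $\lambda_2=1$, with eigenvectors $v_1=(1,0)^T$ (since $G(0)v_1=0$) and $v_2=(1,1)^T$ (since $G(1)v_2=0$). The vector $r(x)=(x^2+x,\,0)^T$ satisfies $r(0)^T v_1=0$ and $r(1)^T v_2=0$, and indeed it lies in the row space: $r(x)=(x+1)\cdot(x,1)+1\cdot(0,x+1)$ over $\mathbf{F}_2$.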
Thank you!
-
This seems a reasonable question, but I can't understand your notation. Is $j$ the coordinate index of your vector $(v_i)$? What is the shape of the product $r(\lambda_i)v_{i,j}$? Could you write down explicitly the sizes of the involved vectors? What is the "space of rows"? – Federico Poloni Dec 23 2011 at 12:14
Thank you! I put appropriate fixes to text of problem. – spk Dec 23 2011 at 12:21
Ok, now it is better. Two more points: I assume the condition is $r(\lambda_i)^T v_{i,j}=0$, with the transpose in this position. Second point: how do your definitions of eigenvalue/eigenvector extend in the case that there are generalized eigenvalues and the matrix is not diagonalizable? Think for instance $G(x)=xI-A$, where $A=\begin{bmatrix}0 & 1\\\\ 0 & 0\end{bmatrix}$. Do you have one or two eigenvectors? This might be crucial. – Federico Poloni Dec 23 2011 at 13:12
I'm sorry for big delay. Ok, for simplicity I can assume that matrix $G(x)$ has equal algebraic and geometric multiplicities for its eigenvalues. – spk Jan 18 2012 at 10:24
@spk Eigenvalues are the roots of det(y-M)=0. Your definition of eigenvalue is somewhat strange to me unless G(x) = x-M... – Alexander Chervov Jan 18 2012 at 10:24
## 1 Answer
Unless I misunderstand the question, this seems to be false for $1\times 1$ matrices. In this case you are asking whether every polynomial $p(x)$ which vanishes whenever $q(x)$ vanishes is a constant multiple of $q(x).$ This is obviously false. Is there some condition missing?
-
As I understood the question, in the $1\times 1$ case $v_{i,j}=1$ for every $i,j$, so the question becomes "is every polynomial $r(x)$ such that $r(x) \cdot 1=0$ a constant multiple of $g(x)$?", which is trivially true. – Federico Poloni Dec 23 2011 at 20:56
@Federico: My understanding is that $r(x)$ has to be $0$ ONLY at the roots of $g(x)$ (the OP only has equalities at the eigenvalues). – Igor Rivin Dec 23 2011 at 21:34
@Igor: you are perfectly right, sorry. – Federico Poloni Dec 23 2011 at 23:56
@Igor: I'm sorry for the big delay. Thank you for this point. Yes, one should consider polynomial coefficients for the linear combination of rows of $G(x)$, not constants as it was earlier. – spk Jan 18 2012 at 10:32
Igor's point still holds. With the new version, you are asking whether every $p(x)$ which vanishes whenever $q(x)$ vanishes is a polynomial multiple of $q(x)$. This is still false, take e.g. $p(x)=x$, $q(x)=x^2$. Can you please double-check everything so that it works at least in the $1\times 1$ case? – Federico Poloni Jan 18 2012 at 11:36
show 1 more comment | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 46, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9089961051940918, "perplexity_flag": "head"} |
http://physics.stackexchange.com/questions/8062/about-the-complex-nature-of-the-wave-function | # About the complex nature of the wave function?
1.
Why is the wave function complex? I've collected some layman explanations but they are incomplete and unsatisfactory. However in the book by Merzbacher in the initial few pages he provides an explanation that I need some help with: that the de Broglie wavelength and the wavelength of an elastic wave do not show similar properties under a Galilean transformation. He basically says that both are equivalent under a gauge transform and also, separately, by Lorentz transforms. This, accompanied with the observation that $\psi$ is not observable, so there is no "reason for it being real". Can someone give me an intuitive prelude to what a gauge transform is and why it gives the same result as a Lorentz transformation in a non-relativistic setting? And eventually how in this "grand scheme" the complex nature of the wave function becomes evident.. in a way that a dummy like me can understand.
2.
A wavefunction can be thought of as a scalar field (it has a scalar value at every point $(r,t)$, given by $\psi:\mathbb{R^3}\times \mathbb{R}\rightarrow \mathbb{C}$) and also as a ray in Hilbert space (a vector). How are these two perspectives the same? (This is possibly something elementary that I am missing out on, or I am getting confused by definitions and terminology; if that is the case I am desperate for help ;)
3.
One way I have thought about the above question is that the wave function can be equivalently written as $\psi:\mathbb{R^3}\times \mathbb{R}\rightarrow \mathbb{R}^2$, i.e., since a wave function is complex, the Schrodinger equation could in principle be written equivalently as coupled differential equations in two real functions which satisfy the Cauchy-Riemann conditions. I.e., if $$\psi(x,t) = u(x,t) + i v(x,t)$$ and $u_x=v_t$ ; $u_t = -v_x$ then we get $$\hbar \partial_t u = -\frac{\hbar^2}{2m} \partial_x^2v + V v$$ $$\hbar \partial_t v = \frac{\hbar^2}{2m} \partial_x^2u - V u$$ (..in 1-D) If this is correct, what are the interpretations of $u,v$.. and why isn't it useful? (I am assuming that physical problems always have an analytic $\psi(r,t)$.)
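(For the record, the coupled system quoted above does follow directly from substituting $\psi=u+iv$ into $i\hbar\,\partial_t\psi=-\frac{\hbar^2}{2m}\partial_x^2\psi+V\psi$ and separating real and imaginary parts: $$i\hbar(\partial_t u+i\partial_t v)=-\frac{\hbar^2}{2m}(\partial_x^2 u+i\partial_x^2 v)+V(u+iv),$$ whose imaginary part gives $\hbar\partial_t u=-\frac{\hbar^2}{2m}\partial_x^2 v+Vv$ and whose real part gives $\hbar\partial_t v=\frac{\hbar^2}{2m}\partial_x^2 u-Vu$; no Cauchy–Riemann-type condition is needed for this step.)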
-
removed the images. – yayu Apr 5 '11 at 6:45
1
Hi Yayu. I've always found interesting a paper by Leon Cohen, "Rules of Probability in Quantum Mechanics", Foundations of Physics 18, 983(1988), which approaches this question somewhat sideways, through characteristic functions. Cohen comes from a signal processing background, where Fourier transforms are very often a natural thing to do. Fourier transforms and complex numbers are of course pretty much joined at the hip. – Peter Morgan Apr 5 '11 at 18:08
## 9 Answers
More physically than a lot of the other answers here (a lot of which amount to "the formalism of quantum mechanics has complex numbers, so quantum mechanics should have complex numbers"), you can account for the complex nature of the wave function by writing it as $\Psi (x) = |\Psi (x)|e^{i \phi (x)}$, where $e^{i \phi (x)}$ is a complex phase factor. It turns out that this phase factor is not directly measurable, but has many measurable consequences, such as the double slit experiment and the Aharonov-Bohm effect.
Why are complex numbers essential for explaining these things? Because you need a representation that both doesn't induce time and space dependencies in the magnitude of $|\Psi (x)|^{2}$ (like multiplying by real phases would), AND that DOES allow for interference effects like those cited above. The most natural way of doing this is to multiply the wave amplitude by a complex phase.
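A minimal worked example of that interference point: for two paths of equal magnitude the detection probability is $$\left|e^{i\phi_1}+e^{i\phi_2}\right|^2=2+2\cos(\phi_1-\phi_2),$$ which sweeps continuously between $0$ and $4$ as the relative phase varies; if the only allowed "phases" were the real factors $\pm 1$, the right-hand side could take only the values $0$ or $4$, and the smooth fringe pattern would be lost.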
-
Is there any wave or vibration which cannot/has-to-be described with complex number formalism? – Georg Oct 13 '11 at 10:03
Alternative discussion by Scott Aaronson: http://www.scottaaronson.com/democritus/lec9.html
1. From the probability interpretation postulate, we conclude that the time evolution operator $\hat{U}(t)$ must be unitary in order to keep the total probability equal to 1 at all times. Note that the wavefunction is not necessarily complex yet.
2. From the website: "Why did God go with the complex numbers and not the real numbers? Answer: Well, if you want every unitary operation to have a square root, then you have to go to the complex numbers... " $\hat{U}(t)$ must be complex if we still want a continuous transformation. This implies a complex wavefunction.
Hence the operator should be: $\hat{U}(t) = e^{i\hat{K}t}$ for hermitian $\hat{K}$ in order to preserve the norm of the wavefunction.
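The simplest illustration of the "square root" point is the $1\times 1$ case: the real unitary (orthogonal) matrix $U=(-1)$ has no real orthogonal square root, but over $\mathbb{C}$ we have $U=e^{i\pi}=(e^{i\pi/2})^2$, and the continuous family $\hat U(t)=e^{i\pi t}$ connects the identity to $U$ while staying unitary; this is exactly the form $e^{i\hat K t}$ above with $\hat K=\pi$.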
-
Personally I prefer Jerry Schirmer's answer because it requires fewer postulates and instead uses experimental facts directly. =) – pcr Apr 8 '11 at 4:56
Among other things, the OP reprinted a page of a textbook, asking what "it is all about". I think it is impossible to answer this kind of question because what the OP's problem is all about is totally undetermined, and the people who offer their answers could be writing their own textbooks, with no results.
The wave function in quantum mechanics has to be complex because the operators satisfy things like $$[x,p] = xp-px = i\hbar.$$ It's the commutator defining the uncertainty principle. Because the left hand side is anti-Hermitian, $$(xp-px)^\dagger = p^\dagger x^\dagger - x^\dagger p^\dagger = (px-xp) = -(xp-px),$$ it follows that if it is a $c$-number, its eigenvalues have to be pure imaginary. It follows that either $x$ or $p$ or both have to have some non-real matrix elements.
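(To see this concretely in the position representation, where $\hat x\psi=x\psi$ and $\hat p\psi=-i\hbar\,\partial_x\psi$: $$(\hat x\hat p-\hat p\hat x)\psi=x(-i\hbar\,\partial_x\psi)+i\hbar\,\partial_x(x\psi)=i\hbar\,\psi,$$ which recovers $[x,p]=i\hbar$ as used above.)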
Also, Schrödinger's equation $$i\hbar\,\, {\rm d/d}t |\psi\rangle = H |\psi\rangle$$ has a factor of $i$ in it. The equivalent $i$ appears in Heisenberg's equations for the operators and in the $\exp(iS/\hbar)$ integrand of Feynman's path integral. So the amplitudes inevitably have to come out as complex numbers. That's also related to the fact that eigenstates of energy and momenta etc. have the dependence on space or time etc. $$\exp(Et/i\hbar)$$ which is complex. A cosine wouldn't be enough because a cosine is an even function (and the sine is an odd function) so it couldn't distinguish the sign of the energy. Of course, the appearance of $i$ in the phase is related to the commutator at the beginning of this answer. See also
http://motls.blogspot.com/2010/08/why-complex-numbers-are-fundamental-in.html
Why complex numbers are fundamental in physics
Concerning the second question, in physics jargon, we choose to emphasize that a wave function is not a scalar field. A wave function is not an observable at all while a field is. Classically, the fields evolve deterministically and can be measured by one measurement - but the wave function cannot be measured. Quantum fields are operators - but the wave function is not. Moreover, the mathematical similarity of a wave function to a scalar field in 3+1 dimensions only holds for the description of one spinless particle, not for more complicated systems.
Concerning the last question, it is not useful to decompose complex numbers into real and imaginary parts exactly because "a complex number" is one number and not two numbers. In particular, if we multiply a wave function by a complex phase $\exp(i\phi)$, which is only possible if we allow the wave functions to be complex and we use the multiplication of complex numbers, physics doesn't change at all. It's the whole point of complex numbers that we deal with them as with a single entity.
-
thanks for answering. I have one question, not knowing about Feynman path integrals yet, I take it that what you are saying is the same thing as: if we make the transformation $\psi(r,t) = e^{i\frac{S(r,t)}{\hbar}}$ then the Schrodinger equation reduces to the classical Hamilton-Jacobi equations (if terms containing $i$ and $\hbar$ were negligible)? – yayu Apr 5 '11 at 5:35
Dear yayu, thanks for your question. First, the appearance of $\exp(iS/\hbar)$ in Feynman's approach is not a transformation of variables: the exponential is an integrand that appears in an integral used to calculate any transition amplitude. Second, $\psi$ is complex and $S$ is real, so $\psi=\exp(iS/\hbar)$ cannot be a "change of variables". You may write $\psi=\sqrt{\rho}\exp(i S/\hbar)$, in which case Schrödinger's equation may be (unnaturally) rewritten as two real equations, a continuity equation for $\rho$ and the Hamilton-Jacobi equation for $S$ with some extra quantum corrections. – Luboš Motl Apr 5 '11 at 5:39
I edited my question removing the reprints and trying to state my problem without them.. it will take some time to think about some points you made in the answer already, though. – yayu Apr 5 '11 at 6:00
This question has been asked since Dirac's time.
In fact Dirac's answer is available for \$100 from JSTOR, in a paper by Dirac from, I think, 1935(?).
A recent answer from James Wheeler is that the zero-signature Killing metric of a new, real-valued, 8-dimensional gauging of the conformal group accounts for the complex character of quantum mechanics.
Reference: Why Quantum Mechanics is Complex, James T. Wheeler, arXiv:hep-th/9708088
-
If the wave function were real, performing a Fourier transform in time would lead to pairs of positive-negative energy eigenstates. Negative energies with no lower bound are incompatible with stability. So, complex wave functions are needed for stability.
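(Spelled out: if $\psi(t)=\int dE\,g(E)\,e^{-iEt/\hbar}$ is real, then $\psi=\psi^*$ forces $g(E)=g(-E)^*$, so every positive-energy component is accompanied by a negative-energy component of equal weight, which is the pairing referred to above.)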
No, the wave function is not a field. It only looks like it for a single particle, but for N particles, it is a function in 3N+1 dimensional configuration space.
-
From the Heisenberg Uncertainty Principle, if we know a great deal about the momentum of a particle we can know very little about its position. This suggests that our mathematics should have a quantum state that corresponds to a plane wave $\psi(x)$ with a precisely known momentum but entirely unknown position.
A natural definition for the probability of finding the particle at the position $x$ is $|\psi(x)|^2$. This definition makes sense for both a real wave function and an imaginary wave function.
For a plane wave to have no position information is to imply that $|\psi(x)|$ does not depend on position and so is constant. Therefore we must have $\psi$ complex; otherwise there would be no way to store the information "what is the momentum of the particle".
So in my view, the complex nature of wave functions arises from the interaction between the necessity for (1) a probability interpretation, (2) the Heisenberg uncertainty principle, and (3) plane waves.
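A one-line check of this: the plane wave $\psi_p(x)=e^{ipx/\hbar}$ satisfies $-i\hbar\,\partial_x\psi_p=p\,\psi_p$ (definite momentum) while $|\psi_p(x)|^2=1$ for every $x$ (no position information); a real substitute such as $\cos(px/\hbar)$ has a position-dependent modulus and is an equal mixture of the momenta $+p$ and $-p$, so it cannot store the momentum information in the way described above.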
-
Please clear some doubts for me. 1. The probability interpretation: I think it followed since the wavefunction was complex and physical meaning could only be attributed to a real value. If we make a construction $\psi^*\psi$ then we arrive at the continuity equation from the Schrodinger equation and the interpretation can now be made that the quantity $\rho=\psi^*\psi$ is the probability density. Starting from an interpretation like $\rho=\psi^*\psi$, I do not see any way to work backwards and convincingly argue that the amplitude $\psi$ must be complex. – yayu Apr 6 '11 at 18:09
the uncertainty relations follow from the identification of the free particle as a plane wave. I am guessing your answer points in the right direction, I am working on (2) as suggested in Lubos' answer as well and trying to get why $\psi$ is complex valued as a consequence, however I fail to see how anything except (2) is relevant for showing it conclusively. – yayu Apr 6 '11 at 18:16
@yayu: see my post--there are two essential experimental facts: 1) phase is not directly measurable; 2) interference effects happen in a broad range of quantum materials. It's hard to reconcile these things without using complex numbers. – Jerry Schirmer Apr 7 '11 at 3:39
EDIT add:
My Answer is GA-centric, and after the comments I felt the need to say some words about the beauty of Geometric Algebra:
On the 2nd page of the Oersted Medal Lecture (link below):
(3) GA Reduces “grad, div, curl and all that” to a single vector derivative that, among other things, combines the standard set of four Maxwell equations into a single equation and provides new methods to solve it.
Geometric Algebra (GA) encompasses in a single framework all of this:
Synthetic Geometry, Coordinate Geometry, Complex Variables, Quaternions, Vector Analysis, Matrix Algebra, Spinors, Tensors, Differential forms. It is one language for all physics.
Probably Schrödinger, Dirac, Pauli, etc ... would have used GA if it existed at the time.
To the Question: WHY is the wave function complex? This Answer is not helpful: because the wave function is complex (or has an $i$ in it). We have to try something different, not written in your book.
In the abstracts I bolded the evidence that the papers are about the WHYs. If someone begs a fish I'll try to give a fishing rod.
I'm an old IT analyst who would be unemployed if I had not evolved. Physics is evolving too.
end EDIT
Recently I've found Geometric Algebra, Grassmann, Clifford, and David Hestenes.
I will not detail here the subject of the OP because each one of us needs to follow paths, find new ideas and take time to read. I will only provide some paths with part of the abstracts:
Overview of Geometric Algebra in Physics
Oersted Medal Lecture 2002: Reforming the Mathematical Language of Physics (a good start)
In this lecture Hestenes is arguing for a reform of the way in which mathematics is taught to physicists. He asserts that using Geometric Algebra will make it easier to understand the fundamentals of physics, because the mathematical language will be clearer and more uniform.
Hunting for Snarks in Quantum Mechanics
Abstract. A long-standing debate over the interpretation of quantum mechanics has centered on the meaning of Schroedinger’s wave function ψ for an electron. Broadly speaking, there are two major opposing schools. On the one side, the Copenhagen school (led by Bohr, Heisenberg and Pauli) holds that ψ provides a complete description of a single electron state; hence the probability interpretation of ψψ* expresses an irreducible uncertainty in electron behavior that is intrinsic in nature. On the other side, the realist school (led by Einstein, de Broglie, Bohm and Jaynes) holds that ψ represents a statistical ensemble of possible electron states; hence it is an incomplete description of a single electron state. I contend that the debaters have overlooked crucial facts about the electron revealed by Dirac theory. In particular, analysis of electron zitterbewegung (first noticed by Schroedinger) opens a window to particle substructure in quantum mechanics that explains the physical significance of the complex phase factor in ψ. This led to a testable model for particle substructure with surprising support by recent experimental evidence. If the explanation is upheld by further research, it will resolve the debate in favor of the realist school. I give details. The perils of research on the foundations of quantum mechanics have been foreseen by Lewis Carroll in The Hunting of the Snark!
THE KINEMATIC ORIGIN OF COMPLEX WAVE FUNCTION
Abstract. A reformulation of the Dirac theory reveals that $i\hbar$ has a geometric meaning relating it to electron spin. This provides the basis for a coherent physical interpretation of the Dirac and Schrödinger theories wherein the complex phase factor $\exp(-i\varphi/\hbar)$ in the wave function describes electron zitterbewegung, a localized, circular motion generating the electron spin and magnetic moment. Zitterbewegung interactions also generate resonances which may explain quantization, diffraction, and the Pauli principle.
Universal Geometric Calculus a course, and follow:
III. Implications for Quantum Mechanics
The Kinematic Origin of Complex Wave Functions
Clifford Algebra and the Interpretation of Quantum Mechanics
The Zitterbewegung Interpretation of Quantum Mechanics
Quantum Mechanics from Self-Interaction
Zitterbewegung in Radiative Processes
On Decoupling Probability from Kinematics in Quantum Mechanics
Zitterbewegung Modeling
Space-Time Structure of Weak and Electromagnetic Interactions
to keep more references together:
Geometric Algebra and its Application to Mathematical Physics (Chris Doran's thesis)
(what lead me to this amazing path was a paper by Joy Christian 'Disproof of Bell Theorem')
'Bon voyage', 'good journey', 'boa viagem'
-
Why the Down votes? – Helder Velez Apr 5 '11 at 15:03
@Helder The downvotes are not from me, but I think your Answer doesn't much address the Question, so I think they are justifiable just on that count. More significantly, citing Hestenes is problematic unless you are very specific about what you are taking from him, in which case you could as easily cite someone else who does not make such inflated claims. Too many of Hestenes' claims are not justifiable enough, and all of them have to be read critically to find what is interesting, which is time-consuming. Keep your wits about you as you follow the Hestenes path. – Peter Morgan Apr 5 '11 at 18:26
@Helder; I have a great deal of respect for Dr. Hestenes' work, send me an email if you want to talk about it. His work directly reads on the complex nature of QM. I'll +1 your answer when I get my votes back (I always use them up). – Carl Brannen Apr 6 '11 at 1:57
@Helder Velez I am one of your downvoters as I saw it as a very broad answer with lots of references and abstracts reproduced which have little to do with the specific context in which I tried to frame my question. Also, I am not interested in the interpretational aspect of Quantum Mechanics at all, at my stage. – yayu Apr 6 '11 at 4:52
@Carl Brannen Do you upvote an answer just because it cites the work of someone you respect, despite the fact that it might be of little relevance to the question? – yayu Apr 6 '11 at 4:57
This year-old question popped up unexpectedly when I signed in, and it's an interesting one. So I guess it's OK just to add an intuition-level "addendum answer" to the excellent and far more complete responses provided long ago.
Your kernel question seems to be this: "Why is the wave function complex?"
My intentionally informal answer is this:
Because by experimental observation, the quantum behavior of a particle far more closely resembles that of a rotating rope (e.g. a skip rope) than it does a rope that only moves up and down.
If each point in a rope marks out a circle as it moves, then a very natural and economical way to represent each point along the length of the rope is as a complex magnitude. You certainly don't have to do it that way, of course. In fact, using polar coordinates would probably be a bit more straightforward.
However, the nifty thing about complex numbers is that they provide a simple and computationally efficient way to represent just such a polar coordinate system. You can get into the gory mathematical details of why, but suffice it to say that when early physicists started using complex numbers for just that purpose, their benefits continued even as the problems became far more complex. In quantum mechanics, their benefits became so overwhelming that complex numbers started being accepted pretty much as the "reality" of how to represent such mathematics.
That conceptual merging of complex quantities with actual physics can throw off your intuitions a bit. For example, if you look at moving skip rope there is no distinction between the "real" and "imaginary" axes in the actual rotations of each point in the rope. The same is true for quantum representations: It's the phase and amplitude that counts, with other distinctions between the axes of the phase plane being a result of how you use those phases within more complicated mathematical constructions.
So, if quantum wave functions behaved only like ropes moving up and down along a single axis, we'd use real functions to represent them. But they don't. Since they instead are more like those skip ropes, it's a lot easier to represent each point along the rope with two values, one "real" and one "imaginary" (and neither in real XYZ space) for its value.
Finally, why do I claim that a single quantum particle has a wave function that resembles that of a skip rope in motion? The classic example is the particle-in-a-box problem, where a single particle bounces back-and-forth between the two X axis ends of the box. Such a particle forms one, two, three, or more regions (or anti-nodes) in which the particle is more likely to be found.
If you borrow Y and Z (perpendicular to the length of the box) to represent the real and imaginary amplitudes of the particle wave function at each point along X, it's interesting to see what you get. It looks exactly like a skip-rope in action, one in which the regions where the electron is most likely to be found correspond one-for-one to the one, two, three, or more loops of the moving skip rope. (Fancy skip-ropers know all about higher numbers of loops.)
The analogy doesn't stop there. The volume enclosed by all the loops, normalized to 1, tells you exactly what the odds are on finding the electron along any one section along the box in the X axis. Tunneling is represented by the electron appearing on both sides of the unmoving nodes of the rope, those nodes being regions where there is no chance of finding the electron. The continuity of the rope from point to point captures a rough approximation of the differential equations that assign high energy costs to sharp bends in the rope. The absolute rotation speed of the rope represents the total mass-energy of the electron, or at least can be used that way.
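In formulas (for a box of width $L$, using the standard textbook solution): the $n$-th state is $$\psi_n(x,t)=\sqrt{\tfrac{2}{L}}\,\sin\!\left(\tfrac{n\pi x}{L}\right)e^{-iE_nt/\hbar},\qquad E_n=\frac{n^2\pi^2\hbar^2}{2mL^2},$$ so at each fixed $x$ the pair $(\mathrm{Re}\,\psi_n,\mathrm{Im}\,\psi_n)$ traces a circle of radius $\sqrt{2/L}\,\bigl|\sin(n\pi x/L)\bigr|$ rotating at the rate $E_n/\hbar$: exactly the skip rope with $n$ loops, and $|\psi_n|^2$ has the $n$ antinodes mentioned above.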
Finally, and a bit more complicated, you can break those simple loops down into other wave components by using the Fourier transform. Any simple loop can also be viewed as two helical waves (like whipping a hose around to free it) going in opposite directions. These two components represent the idea that a single-loop wave function actually includes helical representations of the same electron going in opposite directions, at the same time. "At the same time" is highly characteristic of quantum functions in general, since such functions always contain multiple "versions" of the location and motions of the single particle that they represent. That is really what a wave function is, in fact: a summation of the simple waves that represent every likely location and momentum situation that the particle could be in.
Full quantum mechanics is far more complex than that, of course. You must work in three spatial dimensions, for one thing, and you have to deal with composite probabilities of many particles interacting. That drives you into the use of more abstract concepts such as Hilbert spaces.
But with regards to the question of "why complex instead of real?", the simple example of the similarity of quantum functions to rotating ropes still holds: All of these more complicated cases are complex because, at their heart, every point within them behaves as though it is rotating in an abstract space, in a way that keeps it synchronized with points in immediately neighboring points in space.
-
From the physical point of view, the wave function needs to be complex in order to explain the double-slit experiment, as is well explained in The Feynman Lectures on Physics, Vol. III. I suggest you review chapters 1 & 3, where it is explained how $\psi$ has to be considered of a probabilistic nature, according to the pattern of interference, because "something" has to behave like a wave at the time of crossing through "each one" of the slits. Furthermore, Bohm proclaims that the path of the particle (electron, photon, etc.) can be considered classical, so as a consequence you may view it as following the rules already known at the macro scale... in that sense, you can see the next reference or this one to consider the covariance of the laws of mechanics.
- | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 44, "mathjax_display_tex": 7, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9316213130950928, "perplexity_flag": "middle"} |
http://physics.stackexchange.com/questions/tagged/brst+hilbert-space | # Tagged Questions
2 answers
145 views
### Kugo and Ojima's Canonical Formulation of Yang-Mills using BRST
I am trying to study the canonical formulation of Yang-Mills theories so that I have direct access to the $n$-particle of the theory (i.e. the Hilbert Space). To that end, I am following Kugo and ... | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8790740966796875, "perplexity_flag": "middle"} |
http://mathoverflow.net/questions/5243?sort=oldest | ## Why is it a good idea to study a ring by studying its modules?
This is related to another question of mine. Suppose you met someone who was well-acquainted with the basic properties of rings, but who had never heard of a module. You tell him that modules generalize ideals and quotients, but he remains unimpressed. How do you convince him that studying modules of a ring is a good way to understand that ring? (In other words, why does one have to work "external" to the ring?) Your answer should also explain why it is a good idea to study a group by studying its representations.
-
To explain the usefulness of modules, claiming it helps to understand the ring does not strike me as the right initial rationale. For example, is the main use of real vector spaces to give you insights into the real numbers? Nope. Similarly, I think a better motivation for caring about modules is to point out that they're the setting for linear algebra over a ring. Then illustrate why the module-over-a-ring viewpoint is useful in examples tailored to the other person's background (e.g., canonical forms for linear operators via modules over F[x], analogous to f.g. abelian groups). – KConrad Mar 25 2012 at 4:04
## 14 Answers
In short, I'd tell your friend: "If you believe a ring can be understood geometrically as functions on its spectrum, then modules help you by providing more functions with which to measure and characterize its spectrum."
Elements of a module over a ring $R$ are like generalized functions on $Spec(R)$. We can talk about the support of a module element, or its vanishing set. More concretely, think of how global sections of a line bundle can act as functions you can use to define a map into projective space.
When you glue together a module on open sets of a spectrum or a scheme, you get to glue using maps which are module isomorphisms, which are more flexible than the ring isomorphisms required to glue together a scheme. Borrowing intuition from smooth manifold land, the twist in the Moebius band (as a line bundle on the circle) is formed by gluing a copy of the reals to itself via multiplication by $-1$, a module map, not a ring map. This allows us to think of functions like $\cos(\theta/2)$ as being globally defined: as a map to the Moebius band.
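(A quick check of the $\cos(\theta/2)$ remark: under $\theta\mapsto\theta+2\pi$ one has $\cos\bigl(\tfrac{\theta+2\pi}{2}\bigr)=-\cos(\theta/2)$, so this function is not single-valued on the circle, but it is exactly compatible with the sign-flip gluing of the Möbius band, and therefore defines a global section of that line bundle.)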
In the same vein, when you have a representation $V$ of a group $G$, each element $v\in V$ gives you a nice evaluation map from $G$ into $V$, so lurking everywhere we've got these morphisms from our object of interest into a known object, which are nicely related to each other via the group laws. A fortiori, this certainly doesn't capture the full utility of group representations, but a priori I think it's a decent justification.
-
The "categorical" approach: Understand an object not via intrinsic properties (rather ignore them, they are not part of the categorical data) but via the morphisms.
By choosing a category to work in, we choose the class of properties that are interesting. If we choose to work in the category of groups and group homomorphisms, then two groups with different underlying sets but having a group isomorphism between will not be distinguished.
A group representation is a group morphism G -> End(M) and this way we get properties about the group itself.
A module M over a ring R is nothing else than a morphism R -> End(M), and so we get information about R from this morphism (or, better: from the collection of all these morphisms).
Then, you could continue asking "why should we do this and not study the ring/group intrinsically"?
My answer is: what is a group, to you? For me it is a set satisfying the group axioms - and two groups are the same if there is a group isomorphism between them.
The "intrinsic" study makes sense only if you want to distinguish different groups/rings that are isomorphic. But then you could also choose a different category, with less isomorphisms :-)
-
Sure, but my point is why distinguish sources of the form End(M)? – Qiaochu Yuan Nov 12 2009 at 20:16
In other words, your answer does not tell me why representations of groups are more interesting than group actions. – Qiaochu Yuan Nov 12 2009 at 20:17
Yes, group actions are interesting in their own right. Representations are just the linear ones, thus easier to understand. The same holds for End(M) above. You could say to the unimpressed mathematician "you don't think linear algebra is useful?". – Konrad Voelkel Nov 12 2009 at 20:20
Oh wait, maybe you're asking "how to convince somebody to use linear algebra". Then my answer would be: Don't try to. If they're happy with difficult math, let them do it. – Konrad Voelkel Nov 12 2009 at 20:21
@Konrad: I don't think you have addressed Qiaochu's question. I guess what he's looking for is why the study of all such homomorphisms R -> End(M) (in the modules case, let's say) should give information about R, i.e. the study of these homomorphisms as a whole. Your answer seems to be focusing on each such homomorphism, thus not quite to the point. – Ho Chung Siu Dec 5 2009 at 4:48
I think the main reason is the flexibility of working in the category of $R$-modules rather than just with the ring $R$. For instance suppose we stick to rings - we have some ways of building new rings like localization and taking factor rings and limited ways of "building new things" - basically linear combinations of elements and maybe taking limits if $R$ is complete with respect to some topology.
At the level of module categories we still get all of this, torsion theories deal with localization (and make it clear that this is really an "internal" concept), instead of a quotient map we get useful adjoints, and we can still add and compose endomorphisms of $R$. But we also have lots of other structure to work with. We have all limits and colimits, possibly a tensor product, injective modules (which can have a lovely structure theory), duality, etc... So not only can we build a lot more objects but one can prove that just the existence of certain objects gives us a lot of information about the ring.
In a sense (which one can make precise) the category of $R$-modules is the same as $R$ (which is the same as $D^{\mathrm{perf}}(R)$ with its tensor product for $R$ commutative with unit) so the distinction between a ring and its modules shouldn't really exist!
I thought I would add the following quote which sprang to mind when I read the question:
"Grothendieck would later describe each sheaf on a space T as a “meter stick” measuring T."
taken from McLarty's article The Rising Sea: Grothendieck on simplicity and generality I (which can be found here).
In the commutative with unit case this can be interpreted literally, however, I still think it is relevant (suitably adjusted) in the case of noncommutative rings.
-
I always thought one should regard issues like ring theory/module theory or theory of (abstract) groups/ representation theory of groups in an analogous manner to theory of abstract manifolds/embeddings of manifolds. So you can disentangle "mixed" notions and work out the concepts more clearly. It's not like embeddings of manifolds were "more interesting" than the theory of manifolds - on the contrary, the gist is distinguishing both.
-
If the ring $R$ is ${\mathbb R}$, then the modules are vector spaces. Vector spaces are more interesting and more useful. Modules over an arbitrary ring, then, are analogous to vector spaces. Moreover, the places where the analogies break down form the interesting parts of the theory.
For groups and representations, I think the question is different. A group can be thought of as a set of symmetries --- the original meaning. Symmetries of what? Well, for example a particular vector space, or rather, for a family of vector spaces. The relations among those spaces tells you about the group.
-
This relates to an earlier question you asked:
http://mathoverflow.net/questions/1827/what-representative-examples-of-modules-should-i-keep-in-mind
My answer is the same: look at the frontispieces of Miles Reid's book Undergraduate Commutative Algebra, which visualize this central fact: modules are to rings what bundles are to varieties.
-
Would now be a good time to admit that I also have no intuition for why bundles are useful? – Qiaochu Yuan Nov 13 2009 at 0:31
A better time was when the original question was asked, but now is a good time too. One important feature of bundles/sheaves (depending on the category) is that you can cook up a lot of numerical invariants. E.g. if you want to prove that a scheme is of general type, one standard technique is to represent the canonical sheaf as effective plus ample; many times this reduces to a lot of computations in the Chow ring, which is much simpler than finding many sections of the pluri-canonical family. – David Lehavi Nov 13 2009 at 6:05
Cuz otherwise it'd be like getting to know a bicycle without riding it.
-
I want to answer your question twice: first with a "top-down" approach and second with a "bottom-up" approach. Let me limit myself to the first answer here and see how I do.
I claim the following analogy:
abstract groups : group actions on sets :: abstract rings : linear actions of rings on abelian groups (= modules)
I will take it as mostly self-evident that it is desirable to study groups acting on sets. If you are not doing this -- but thinking of groups only as sets with a certain law of composition -- then you are thinking about groups "in the wrong way". You are missing out not only on powerful tools for studying abstract groups (e.g. the Monster was constructed as the automorphism group of a certain algebra), but also, even more importantly, on why groups are important and interesting to mathematics: they come up as automorphisms of things, not (or rather, rarely) abstractly.
The way to think about a group action is that you have a set S, and it has an automorphism group, Sym(S), the group of all bijections from S to itself. Then an action of G on S is simply a homomorphism of groups G -> Sym(S). More generally, if x is any object in a category C, then it has an automorphism group Aut(x), and one can think of a homomorphism from an abstract group G to Aut(x) as a group action on x.
Now in place of a set, we take an abelian group M. This has more structure -- apart from a group Aut(M) of Z-linear automorphsims, it also has an endomorphism ring End(M): the ring of Z-linear maps from M to itself. Note that End(M) is in general noncommutative, so this construction is more general than any "ring of functions" construction in (commutative!) algebraic geometry.
So given a ring R, the analogy is completed by considering ring homomorphisms R -> End(M). As for rings, this provides a bridge between the abstract notion of a ring and the "real world" notion of endomorphisms of an abelian group. Moreover, just as the notion of a symmetry group of a set generalizes to that of an automorphism group of an object in a category, similarly, any object x in an abelian category C has an endomorphism ring, and hence one can consider "ring actions" of an abstract ring R on x, via homomorphisms R -> End(x).
In the first part of the analogy, a distinguished role is played by the category of [left, or right] G-sets for a particular group G. In particular, this provides a way for every group G to be the precise automorphism group of some object in a category: it is the automorphism group of itself. It is of course not true that any abstract group is equal to the full symmetry group Sym(S) of a set S, so this is an important construction.
In the second part of the analogy, a distinguished role is played by the abelian category of [left, or right] R-modules. In particular, End_{R-Mod}(R) = R, so every ring is the precise endomorphism ring of some object in an abelian category. (I am pretty sure that not every ring is isomorphic to the full endomorphism ring of an abelian group, although this is less obvious than the other case. It might make a good question in its own right...)
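(For the right-module version this is a one-line check: a right-$R$-linear map $f:R\to R$ satisfies $f(r)=f(1\cdot r)=f(1)\,r$, so $f\mapsto f(1)$ identifies $\mathrm{End}_{R\text{-Mod}}(R)$ with $R$ acting by left multiplication, and composition matches ring multiplication; for left modules the same argument gives the opposite ring $R^{\mathrm{op}}$, which is the usual fine print here.)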
This is I think the right "general" answer to the question "Why is studying modules of a ring a good way to understand that ring?" A different kind of answer would give instances in commutative algebra when theorems about rings are proved using modules. I'll try that at some future point.
-
Thanks Pete! Could you recommend a book about ring/modules/group-theory with this view as its driving force? I think I am lacking this kind of insight. – Jose Brox Nov 16 2009 at 22:17
I like this answer. Thanks! – Qiaochu Yuan Nov 16 2009 at 22:27
The introduction to section 12.A in Isaacs Algebra: A Graduate Course talks a little about this view. – David Dynerman Mar 28 2011 at 19:29
+1. Beautifully written. – Dejan Govc Mar 24 2012 at 23:36
First, the category of commutative rings is the opposite of the category of commutative affine schemes (whether in the classical algebraic-geometric sense or in the presheaf-viewpoint sense).
Second, consider an affine scheme (Spec(R), O), where R is a commutative ring. Qcoh(Spec R, O) is R-mod (the category of R-modules). You should know that we can reconstruct the affine scheme (Spec R, O) from this module category up to isomorphism. So studying the ring and studying the module category over it are the same thing. (Actually this is the Grothendieck viewpoint: to do geometry, we just need the category of sheaves on the "would-be space"; this viewpoint is supported by the Gabriel-Rosenberg reconstruction theorem.)
But module theory is just linear algebra. It is much easier to deal with than ring theory. In fact, if we study quasi-compact and quasi-separated schemes, then according to the Barr-Beck theorem (in particular, Grothendieck flat descent), all of the algebraic geometry turns into linear algebra (for affine schemes we deal with modules over a monad, for non-affine schemes we deal with comodules over a comonad, and for semiseparated schemes (all algebraic varieties) we even have a simpler form of the comonad). But I have to remind you: I assume you are talking about unital rings.
For non-unital rings, we can still consider R-mod as the category of associative actions on some vector space. This is just the category of quasi-coherent sheaves on a quasi-affine scheme (which is called a cone). In fact, this is also not very difficult to deal with. A good reference for the general framework is "Des catégories abéliennes" or, more recently, O. Gabber's work on almost ring theory.
For your question about studying the representation theory of G, the reason is the same. We can use the Tannaka formalism to reconstruct a locally compact group from its representations. Moreover, if G is an algebraic group, then all the algebraic geometry on the group scheme can be turned into the study of Rep(G), which is a symmetric monoidal category, with linear algebra.
-
This is a bit late, but here is one example I like:
Theorem. A localization of a regular local ring at a prime ideal is still regular.
One way to prove this is to deduce it from
Theorem. Let $R$ be a local ring. Then the following are equivalent:
1. $R$ is regular
2. Every $R$-module has a finite length projective resolution
3. The residue field has a finite length projective resolution.
(To use it, let $P$ be the prime ideal. Since $R$ is regular, $R/P$ has a finite length projective resolution. Now localize--this is exact, so we get a finite length $R_P$-projective resolution of $(R/P)_P$, which is the residue field of $R_P$)
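(A concrete instance of condition 3: for the regular local ring $R=k[x,y]_{(x,y)}$ the Koszul complex on $x,y$, $$0\to R\xrightarrow{\ \binom{-y}{\;x}\ }R^2\xrightarrow{\ (x\ \ y)\ }R\to k\to 0,$$ is a finite free resolution of the residue field $k$, of length $2=\dim R$.)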
This stuff is in Chapter 19 of Eisenbud's Commutative Algebra.
It's not clear to me how one would try to prove the first theorem from the definitions of regular.
Edit: fixed some mistakes
-
One way to convince the guy would be to make him list interesting questions he can ask about rings, and show him which can be solved by looking only at the category of modules.
Somewhere in the middle of that conversation, one should make the point that it may very well happen that two different rings have equivalent categories of modules, so in answering any such question we can change the ring.
-
In homage to Serge Lang, I might suggest that your friend pick up any book on Morita theory and solve all the exercises. Less facetiously, I might point out that many interesting ring-theoretic properties can be characterized, sometimes unexpectedly, by properties of the (right) module category over the ring. For instance, consider what it means for all right $R$-modules to be injective, or what it means for all right $R$-modules to be projective, or what it means for all right $R$-modules to be flat, or what it means for all direct sums of injective right $R$-modules to be injective, or what it means for all direct products of projective right $R$-modules to be projective, or what it means for all flat right $R$-modules to be projective, or ...
-
@Greg: +1. Indeed my own commutative algebra notes (as well as the class currently being taught out of them) take this perspective a lot. For instance, just today I proved that for a commutative ring $R$, all modules are projective iff $R$ is a finite direct product of fields. To my taste, a lot of the standard texts don't emphasize these sort of "inverse problems" as much as they should (e.g. where is this fact in Atiyah-Macdonald or Matsumura? perhaps some exercise, but I don't know where). – Pete L. Clark Mar 28 2011 at 23:59
This is what is usually called 'the homological study' of ring theory – Jose Brox Mar 29 2011 at 3:00
This is closely related to Pete Clark's answer, but stated in a slightly different way that I personally find helpful. I think it's not too hard to convince people that when studying an abstract object, it helps a great deal to be able to "write it down concretely," i.e., to consider a representation of the object. It's not an accident that the word "representation" is used for a homomorphism of a group into the group of matrices over a field; matrices over a familiar field like $\mathbb{C}$ are, intuitively, "more concrete" than an arbitrary abstract group and hence can be thought of as providing a concrete representation of the group. When you write something down concretely, often some structure will emerge that is not immediately evident from the original abstract definition. For example, the character table is an important invariant of a finite group. I don't know how you would come up with this invariant without considering representations of the group.
Similarly, modules are representations of rings. I think the Artin-Wedderburn theorem is a good illustration of the usefulness of considering representations of rings. Even if your interest is only in rings themselves, it's clearly an important result that you can classify all (Artinian) semisimple rings as products of matrix rings over division rings. If you possess the concept that a ring can be represented as a matrix ring, then it is not too shocking (and may even seem natural) that something like the Artin-Wedderburn theorem should be true, and moreover you can even see that to prove it you should somehow construct the matrix rings by having the original ring act on something. Without the concept of a representation (or equivalently a module), I don't know how you would proceed; it seems like a difficult and clumsy task at best.
-
I guess there are dozens of points of view. Here are the ones that I found most useful (for group representations):
• Studying the linear representations of a group is a way to linearize the problem. You thus get all the powerful tools of linear algebra (diagonalization, triangularization...). The simplest (yet striking) example I know of that principle: you can show that every finite commutative group is a direct product of cyclic groups (essentially by decomposing its regular representation).
• Studying the complex representations of a group $G$ is equivalent to studying modules over $\mathbb{C}[G]$ (you can adapt the algebra depending on the setting). So you may view it as analysis on $G$. A well-known example: roughly, the theory of Fourier series gives you a decomposition of $L^2(\mathbb{R}/\mathbb{Z})$ as a sum of irreducible representations of $\mathbb{R}/\mathbb{Z}$.
-
http://mathoverflow.net/questions/49700/inverse-length-3-arithmetic-progression-problem-for-sets-with-positive-upper-dens | ## Inverse Length 3 Arithmetic Progression Problem for sets with positive upper density
It is a famous theorem of Roth (for $k=3$), which Szemerédi famously generalized to all $k$, that if a set of natural numbers has positive upper density then it contains arithmetic progressions of length $k$. The famous Green-Tao Theorem generalized this property to the primes. My question is, is there any progress on the 'inverse' problem?
First Question: Suppose $A \subset \mathbb{N}$ has positive upper density. Does it follow that, with at most finitely many exceptions, every element $a \in A$ is in an arithmetic progression of length at least $3$ consisting of elements of $A$? That is, with at most finitely many exceptions, is it true that for each $a \in A$ there exists $k > 0$ such that $a, a+k, a+2k \in A$?
Edit: This question has been answered; see the two constructions below. However, a second question may be asked.
A related problem (which I believe to be harder) is the same question for the primes, which has been asked here before:
http://mathoverflow.net/questions/34197/are-all-primes-in-a-pap-3
Another related problem is found here (the constructions given all have density less than 1/2. Is it possible to find counterexamples with large density?)
http://mathoverflow.net/questions/49977/do-there-exist-sets-of-integers-with-arbitrarily-large-upper-density-which-contai
-
## 2 Answers
Or just take all powers of $3$ together with all numbers that are congruent to $1$ modulo $3$.
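For readers who want to see this set concretely, here is a small Python sketch that builds a truncated version of it (the bound N and the helper name are arbitrary, chosen just for this check) and verifies by brute force that no power of $3$ below the bound is a term of a $3$-term progression inside the set.

```python
# A = {powers of 3} union {n : n = 1 (mod 3)}, truncated at an arbitrary bound N.
N = 3000
A = {n for n in range(1, N) if n % 3 == 1}
powers = []
p = 3
while p < N:
    powers.append(p)
    A.add(p)
    p *= 3

def in_3ap(x, S, bound):
    """Is x a term (first, middle or last) of a 3-term AP contained in S?"""
    for d in range(1, bound):
        if (x + d in S and x + 2 * d in S) or \
           (x - d in S and x + d in S) or \
           (x - 2 * d in S and x - d in S):
            return True
    return False

print([q for q in powers if in_3ap(q, A, N)])  # expected output: []
```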
-
It took me a bit to understand this since I thought you meant the set of sums of powers of 3 with numbers 1 mod 3. – Harry Altman Dec 17 2010 at 7:30
Much better than my construction! This also shows that the exceptional set in the question can be as large as a set without $3$-APs (if $B$ has no $3$-APs, let $A=3B \cup (3\mathbb{Z}+1)$. Then no element in $3B$ is part of a $3$-AP in $A$). – Pablo Shmerkin Dec 17 2010 at 11:32
This is false. We construct $A$ inductively, so that the following holds:
• $A$ contains all powers of two greater than or equal to $4$ and no other even numbers.
• The number of odd numbers in $A$ between $2^j$ and $2^{j+1}$ is $2^{j-2}$.
• No power of two is in a 3-AP contained in $A$.
We start by specifying that $4\in A, 5\in A, 6\notin A,7\notin A$. Suppose $A\cap\{1,\ldots, 2^m-1\}$ has been defined so that the above properties hold. We next define $A\cap\{ 2^m,\ldots, 2^{m+1}-1\}$ as follows: $2^m\in A$. There are $1+2+\ldots+2^{m-3}<2^{m-2}$ odd numbers smaller than $2^m$ in $A$; let $O_m$ be the set of all of them. We choose $2^{m-2}$ odd numbers in $$\{2^m,\ldots, 2^{m+1}\} \backslash (2^{m+1}-O_m)$$ and add them to $A$. We can do this since $|O_m|< 2^{m-2}$.
The first two properties are clear from the construction. To check the last (the one we care about), note that $2^m$ can't be the first/last term of a $3$-AP in $A$, since then the last/first term would also be even, hence another power of $2$, and then the middle one would be even, and a power of $2$ as well. But $2^m$ can't be the middle term of a $3$-AP either: for the same reason as before, the other two terms must be odd. Let $(a,2^m,c)$ be the AP. Then $a\in O_m$ by definition, but this implies $c-2^m=2^m-a$, or $c\in 2^{m+1}-O_m$, a case which was excluded in the construction.
Clearly $A$ has density $1/4$ so this completes the proof.
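To see one way the inductive step can actually be carried out, here is a small Python sketch. It greedily takes the smallest admissible odd numbers in each block (this is just one of many admissible choices, and nothing in the proof depends on it) and then checks by brute force that, up to the arbitrary truncation bound $2^M$, no power of two lies in a $3$-AP inside the resulting set.

```python
# One concrete instance of the construction, truncated at 2**M (M is arbitrary).
M = 12
A = {4, 5}            # base case: 4, 5 in A; 6, 7 not in A
odds = {5}            # odd elements of A below the current block (the sets O_m)

for m in range(3, M):
    A.add(2 ** m)                                   # the power of two in this block
    forbidden = {2 ** (m + 1) - o for o in odds}    # the set 2^(m+1) - O_m must be avoided
    chosen, n = [], 2 ** m + 1
    while len(chosen) < 2 ** (m - 2):               # place 2^(m-2) odd numbers in the block
        if n not in forbidden:
            chosen.append(n)
        n += 2
    A.update(chosen)
    odds.update(chosen)

def in_3ap(x, S, bound):
    """Is x a term (first, middle or last) of a 3-term AP contained in S?"""
    for d in range(1, bound):
        if (x + d in S and x + 2 * d in S) or \
           (x - d in S and x + d in S) or \
           (x - 2 * d in S and x - d in S):
            return True
    return False

print([2 ** j for j in range(2, M) if in_3ap(2 ** j, A, 2 ** M)])  # expected: []
```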
If $A$ has positive upper density, one can still ask what is the largest possible size of the set $B$ of all elements of $A$ which are not in any $3$-AP contained in $A$. Clearly $B$ has density $0$ by Roth's Theorem (and we get better bounds from the quantitative bounds in Roth's Theorem). Is it possible to do better?
-
http://amathew.wordpress.com/2010/11/14/the-koszul-complex-ii/ | # Climbing Mount Bourbaki
Thoughts on mathematics
November 14, 2010
## The Koszul complex II
Posted by Akhil Mathew under algebra, commutative algebra, MaBloWriMo | Tags: differential graded algebras, Koszul complex, regular sequences |
1 Comment
I’ve not been a very good MaBloWriMo participant this time around. Nonetheless, coursework does tend to sap the time and energy I have for blogging. I have been independently looking as of late at the formal function theorem in algebraic geometry, which can be phrased loosely by saying that the higher direct images under a proper morphism of schemes commute with formal completions. This is proved in Hartshorne for projective morphisms by first verifying it for the standard line bundles and then using a (subtle) exactness argument, but EGA III.4 presents an argument for general proper morphisms. The result is quite powerful, with applications for instance to Zariski’s main theorem (or at least a weak version thereof), and I would like to say a few words about it at some point, at least after I have a fuller understanding of it than I do now. So I confess to having been distracted by algebraic geometry.
For today, I shall continue with the story on the Koszul complex, and barely begin the connection between Koszul homology and regular sequences. Last time, we were trying to prove:
Proposition 24 Let ${\lambda: L \rightarrow R, \lambda': L' \rightarrow R}$ be linear functionals. Then the Koszul complex ${K_*(\lambda \oplus \lambda')}$ is the tensor product ${K_*(\lambda) \otimes K_*(\lambda')}$ as differential graded algebras.
So in other words, not only is the algebra structure preserved by taking the tensor product, but when you think of them as chain complexes, ${K_*(\lambda \oplus \lambda') \simeq K_*(\lambda) \otimes K_*(\lambda')}$. This is a condition on the differentials. Here ${\lambda \oplus \lambda'}$ is the functional ${L \oplus L' \stackrel{\lambda \oplus \lambda'}{\rightarrow} R \oplus R \rightarrow R}$ where the last map is addition.
So for instance this implies that ${K_*(\mathbf{f}) \otimes K_*(\mathbf{f}') \simeq K_*(\mathbf{f}, \mathbf{f}')}$ for two tuples ${\mathbf{f} = (f_1, \dots, f_i), \mathbf{f}' = (f'_1, \dots, f'_j)}$. This implies that in the case we care about most, catenation of lists of elements corresponds to the tensor product.
Before starting the proof, let us talk about differential graded algebras. This is not really necessary, but the Koszul complex is a special case of a differential graded algebra.
Definition 25 A differential graded algebra is a graded unital associative algebra ${A}$ together with a derivation ${d: A \rightarrow A}$ of degree one (i.e. increasing the degree by one). This derivation is required to satisfy a graded version of the usual Leibnitz rule: ${d(ab) = (da)b + (-1)^{\mathrm{deg} a} a (db) }$. Moreover, ${A}$ is required to be a complex: ${d^2=0}$. So the derivation is a differential.
So the basic example to keep in mind here is the case of the Koszul complex. This is an algebra (it’s the exterior algebra). The derivation ${d}$ was immediately checked to be a differential. There is apparently a category-theoretic interpretation of DGAs, but I have not studied this.
Proof: As already stated, the graded algebra structures on ${K_*(\lambda \oplus \lambda')}$ and ${K_*(\lambda) \otimes K_*(\lambda')}$ are the same. This is, I suppose, a piece of linear algebra about exterior products (the exterior algebra of a direct sum is the graded tensor product of the exterior algebras), and I won’t prove it here. The point is that the differentials coincide. The differential on ${K_*(\lambda \oplus \lambda')}$ is given by extending the homomorphism ${L \oplus L' \stackrel{\lambda \oplus \lambda'}{\rightarrow} R}$ to a derivation. This extension is unique.
Now I claim that the tensor product of two differential graded algebras, with the product differential, is a DGA itself. This says that the tensor product of the differentials is itself not only a differential, but a derivation on the tensor product. This is what we want, because then the product differential on ${K_*(\lambda) \otimes K_*(\lambda')}$ is a derivation, and since the differential induced by ${\lambda \oplus \lambda'}$ is one too, the two must coincide, as they coincide in degree one. This is a routine computation, which is not well suited to blogging. So one should check that if ${(A, d_A), (B, d_B)}$ are DGAs, then ${(A \otimes B, d_{A \otimes B})}$, where ${A \otimes B}$ has the graded algebra structure and ${d_{A \otimes B}}$ is the product differential, is indeed a DGA.
0.7. Koszul homology and regular sequences
In general, the Koszul complex is not exact. The degree to which its homology vanishes does, however, say something. It tells you the length of regular sequences, or alternatively the depth.
First, we can compute the Koszul homology at the end. Let ${f_1, \dots, f_r \in R}$ for ${R}$ a commutative ring, and let ${M}$ be an ${R}$-module. Then ${K_1(\mathbf{f}) = R^r}$ and ${K_0(\mathbf{f}) = R}$ from the definitions. The differential ${K_1(\mathbf{f}) \rightarrow K_0(\mathbf{f})}$ is simply ${(a_i) \rightarrow \sum a_i f_i}$. In particular, we see that the homology of the Koszul complex in dimension zero is ${R/(f_1, \dots, f_r) R}$. More generally, this argument and the right-exactness of the tensor product show that:
Proposition 26 We have $\displaystyle H_0(\mathbf{f}, M) = M/(f_1, \dots, f_r) M.$
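To have a concrete case in mind, take the simplest nontrivial example, with ${M = R}$: let ${R = k[x,y]}$ for a field ${k}$ and ${\mathbf{f} = (x,y)}$. With the usual sign conventions the Koszul complex is
$\displaystyle 0 \rightarrow R \rightarrow R^2 \rightarrow R \rightarrow 0,$
where the first map is ${a \mapsto (-ya, xa)}$ and the second is ${(a,b) \mapsto xa + yb}$. One checks directly that the composite is zero, that ${H_0 = R/(x,y) \simeq k}$ as the proposition predicts, and that the homology in degrees one and two vanishes (an instance of the next result, since ${x, y}$ is a regular sequence on ${R}$).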
So in general, the zeroth Koszul homology ${H_0}$ will be nonzero. But the higher ones vanish for regular sequences. We are aiming for:
Proposition 27 Let ${f_1, \dots, f_r}$ be an ${M}$-regular sequence. Then ${H_s(\mathbf{f}, M) = 0}$ for ${s \neq 0}$.
Proof: The argument, as expected, will be inductive. The first step is the core of the idea, though. The Koszul complex for ${\mathbf{f}}$ consisting of one element is
$\displaystyle 0 \rightarrow M \stackrel{f}{\rightarrow} M \rightarrow 0.$
It is clear that the homology of this complex detects the nonzerodivisorness of ${f}$ on ${M}$. In general, the result is an inductivization of the above observation. When ${r=1}$, the above proposition is true. Let us assume it true for ${r-1}$, and we prove it for ${r}$. So let ${f_1, \dots, f_r}$ be an ${M}$-regular sequence, and let ${\mathbf{f}' = (f_1, \dots, f_{r-1})}$. We know that the homology of the complex
$\displaystyle K(\mathbf{f}', M)$
vanishes in dimension ${\neq 0}$, and is ${M/(\mathbf{f}'M)}$ for dimension zero. This is “close” to what we want as ${K(\mathbf{f}', M)}$ and ${K(\mathbf{f}, M)}$ are “similar,” but we need a way of going between them. So far, we know that
$\displaystyle K(\mathbf{f}, M) = K(\mathbf{f}',M) \otimes K(f_r, M),$
and that ${f_r}$ is a nonzerodivisor on ${H_0(K(\mathbf{f}', M))}$. That way is provided by:
Lemma 28 Let ${\left\{C_n\right\}_{n \geq 0}}$ be a chain complex of ${R}$-modules such that ${C_*}$ is exact in positive dimension. Suppose ${y \in R}$ is a nonzerodivisor on ${H_0(C)}$. Then ${C_* \otimes K(y, R)}$ is acyclic in positive dimension.
Proof: Because I’m in the mood to use a sledgehammer, let’s deduce this from a spectral sequence. We know that there is a double complex ${\left\{C_p \otimes K_q(y, R)\right\}_{p,q \geq 0}}$.
There are two spectral sequences that converge to the same thing. For the first, we take the horizontal homology, and then the vertical homology of the horizontal homology. Since ${K(y,R)}$ is just ${R}$ in degrees zero and one (and zero elsewhere), each row is a copy of ${C_*}$, so the horizontal homology vanishes except in dimension zero, where it is ${H_0(C)}$, located at ${(0,0)}$ and ${(0,1)}$. The vertical differential is multiplication by ${y}$. When we take the next page in this spectral sequence, the fact that ${y}$ is ${H_0(C)}$-regular implies that it is ${H_0(C)/y H_0(C)}$ at the origin and nothing elsewhere. In particular, the ${E_2}$ page of this spectral sequence is concentrated at the origin. This spectral sequence converges to the total homology of the double complex. That total homology is ${H_*(C \otimes K(y,R))}$.
But the spectral sequence obviously collapses at ${E_2}$, so ${E_\infty = E_2}$. From the thus calculated ${E_\infty}$ page of the spectral sequence, we find that there is nothing on the nonzero diagonals, and consequently, since the sequence converges to ${H_*(C \otimes K(y,R))}$, we see that ${C \otimes K(y,R)}$ is acyclic in positive dimensions.
Now, with the lemma established, the result is clear. I should note that the lemma can be proved slightly less conceptually but more elementarily (without spectral sequences) if one writes some exact sequences.
This result is very far from the best we can do. The Koszul homology may very well be zero without the initial sequence being a regular sequence. The more natural result, which can be proved using more sophisticated refinements of the above reasoning, is that Koszul homology ${H_*(\mathbf{f}, M)}$ detects the length of a maximal ${M}$-sequence in the ideal ${(\mathbf{f}) \subset R}$. I want to get to this result, but first there are some interesting things one can do with what’s been proved alone in algebraic geometry.
http://mathoverflow.net/revisions/100491/list | ## Return to Question
Revision 2 (edited)
We consider a Banach space $X$ and its dual $X^*$.
Let $Q\colon X^\ast \to X^\ast$ be an idempotent operator. Question: Can we find an idempotent operator $P\colon X^\ast \to X^\ast$ which is weak${}^\ast$-to-weak${}^\ast$ continuous and with range isomorphic to the range of $Q$ and $\mbox{im}P\subseteq \mbox{im}Q$? In fact, I am mostly interested in the case $\mbox{im }Q\cong \ell_p$ for $p\in [1,\infty)$.
Certainly, $P$ would have to be an adjoint to some idempotent on $X$. My feeling is that in general this is not the case but perhaps it might be true for some well-behaved class of Banach spaces $X$ like Banach lattices? Or Banach lattices without a complemented copy of $\ell_1$?
Revision 1 (original)
# Weak*-closed and complemented subspaces of dual Banach spaces
We consider a Banach space $X$ and its dual $X^*$. Let $Q\colon X^\ast \to X^\ast$ be an idempotent operator. Can we find an idempotent operator $P\colon X^\ast \to X^\ast$ which is weak${}^\ast$ continuous and with $\mbox{im}P\subseteq \mbox{im}Q$? Certainly, $P$ would have to be an adjoint to some idempotent on $X$. My feeling is that in general this is not the case but perhaps it might be true for some well-behaved class of Banach spaces $X$ like Banach lattices? Or Banach lattices without a complemented copy of $\ell_1$?
http://amathew.wordpress.com/tag/characteristic-classes/ | # Climbing Mount Bourbaki
Thoughts on mathematics
March 5, 2012
## The cohomology of projective hypersurfaces
Posted by Akhil Mathew under algebraic geometry, topology | Tags: characteristic classes, cohomology, hypersurfaces |
[6] Comments
Consider a smooth surface ${M \subset \mathop{\mathbb P}^3(\mathbb{C})}$ of degree ${d}$. We are interested in determining its cohomology.
1. A fibration argument
A key observation is that all such ${M}$’s are diffeomorphic. (When ${\mathop{\mathbb P}^3}$ is replaced by ${\mathop{\mathbb P}^2}$, this is just the observation that the genus of a plane curve is determined by its degree.) In fact, consider the space ${V}$ of all homogeneous polynomials of degree ${d}$, so that ${\mathop{\mathbb P}(V)}$ parametrizes all hypersurfaces of degree ${d}$. There is a universal hypersurface ${H \subset \mathop{\mathbb P}^3 \times \mathop{\mathbb P}(V)}$ consisting of pairs ${(p, M)}$ where ${p}$ is a point lying on the hypersurface ${M}$. This admits a map
$\displaystyle \pi: H \rightarrow \mathop{\mathbb P}(V)$
which is (at least intuitively) a fiber bundle over the locus of smooth hypersurfaces. Consequently, if ${U \subset \mathop{\mathbb P}(V)}$ corresponds to smooth hypersurfaces, we get an honest fiber bundle
$\displaystyle \pi^{-1}(U) \rightarrow U .$
But ${U}$ is connected, since we have thrown away a complex codimension ${\geq 1}$ subset to get ${U}$ from ${\mathop{\mathbb P}(V)}$; this means that the fibers are all diffeomorphic.
This argument fails when one considers only the real points of a variety, because a codimension one subset of a real variety may disconnect the variety. (more…)
December 28, 2011
## Bott periodicity and integrality theorems
Posted by Akhil Mathew under topology | Tags: almost complex manifolds, Bott periodicity, characteristic classes, Chern character |
[2] Comments
Today I would like to blog about a result of Atiyah from the 1950s, from his paper “Bott periodicity and the parallelizability of the spheres.” Namely:
Theorem 1 (Atiyah) On a nine-fold suspension ${Y = \Sigma^9 X}$ of a finite complex, the Stiefel-Whitney classes of any real vector bundle vanish.
In particular, this means that any real vector bundle on a sphere $S^n, n \geq 9$ cannot be distinguished using Stiefel-Whitney classes from the trivial bundle. The argument relies on the Bott periodicity theorem and some calculations with Stiefel-Whitney classes. There is also an analog for the Chern classes of complex vector bundles on spheres; they don’t necessarily vanish but are highly divisible.
These sorts of integrality theorems often have surprising geometric consequences. In this post, I’ll discuss the classical problem of when spheres admit almost-complex structures, a problem one can solve using the second of the integrality theorems mentioned above. Atiyah was originally motivated by the question of parallelizability of the spheres. (more…)
December 27, 2011
## Wu’s theorem on the Stiefel-Whitney classes of a manifold
Posted by Akhil Mathew under topology | Tags: characteristic classes, cobordism, Gysin map, Stiefel-Whitney classes |
Today I would like to take a break from the index theorem, and blog about a result of Wu, that the Stiefel-Whitney classes of a compact manifold (i.e. those of the tangent bundle) are homotopy invariant. It is not even a priori obvious that the Stiefel-Whitney classes are homeomorphism invariant; note that “homeomorphic” is a strictly weaker relation than “diffeomorphic” for compact manifolds, a result first due to Milnor. But in fact the argument shows even that the Stiefel-Whitney classes (of the tangent bundle) can be worked out solely in terms of the structure of the cohomology ring as a module over the Steenrod algebra.
Here is the idea. When $A \subset M$ is a closed submanifold of a manifold, there is a lower shriek (Gysin) homomorphism from the cohomology of $A$ to that of $M$; this is Poincaré dual to the restriction map in the other direction. We will see that the “fundamental class” of $A$ (that is, the image of 1 under this lower shriek map) corresponds to the mod 2 Euler (or top Stiefel-Whitney) class of the normal bundle. In the case of $M \subset M \times M$, the corresponding normal bundle is just the tangent bundle of $M$. But by other means we’ll be able to work out the Gysin map easily. Once we have this, the Steenrod operations determine the rest of the Stiefel-Whitney classes.
(more…)
October 18, 2011
## Thom’s construction of the Stiefel-Whitney classes
Posted by Akhil Mathew under topology | Tags: characteristic classes, Steenrod algebra, Stiefel-Whitney classes, Thom isomorphism, Thom space |
I’ve been trying to fix the (many) gaps in my knowledge of classical algebraic topology as of late, and will probably do a few posts in the near future on vector bundles, K-theory, and characteristic classes.
Let ${B}$ be a base space, and let ${p: E \rightarrow B}$ be a real vector bundle. There are numerous constructions for the characteristic classes of ${E}$. Recall that these are elements in the cohomology ring ${H^*(B; R)}$ (for ${R}$ some ring) that measure, in some sense, the twisting or nontriviality of the bundle ${E}$.
Over a smooth manifold ${B}$, with ${E}$ a smooth vector bundle, a construction can be made in de Rham cohomology. Namely, one chooses a connection ${\nabla}$ on ${E}$, computes the curvature tensor of ${E}$ (which is an ${\hom(E,E)}$-valued 2-form ${\Theta}$ on ${B}$), and then applies a suitable invariant polynomial (a polynomial function from matrices to scalars) to the curvature ${\Theta}$. One can show that this gives closed forms, whose de Rham cohomology class does not depend on the choice of connection. This is the subject of Chern-Weil theory, and it applies more generally to principal ${G}$-bundles on a manifold for ${G}$ a Lie group.
But there is something that this approach misses: torsion. By working with de Rham cohomology (or equivalently, cohomology with ${\mathbb{R}}$-coefficients), the very interesting torsion phenomena that algebraic topologists care about are lost. For the purposes of this post, we’re interested in cohomology classes where the ground ring is ${R = \mathbb{Z}/2}$, and so de Rham cohomology is out. However, in return, we have cohomology operations. We can use them instead. (more…)
August 21, 2011
## Chern-Weil II
Posted by Akhil Mathew under differential geometry, topology | Tags: characteristic classes, Chern-Weil theory |
I’d like to finish the series I started a while back on Chern-Weil theory (and then get back to exponential sums).
So, in the discussion of the Cartan formalism a few days back, we showed that given a vector bundle $E$ with a connection on a smooth manifold, we can associate with it a curvature form, which is an $\hom(E, E)$-valued 2-form; this is a generalization of the Riemann curvature tensor (as some computations that I don’t feel like posting here will show). In the case of a line bundle, we saw that since $\hom(E, E)$ was canonically trivialized, we could interpret the curvature form as a plain old 2-form, and in fact it turned out to be a representative — in de Rham cohomology — of the first Chern class of the line bundle. Now we want to see what to do for a vector bundle, where there are going to be a whole bunch of Chern classes.
For a general vector bundle, the curvature ${\Theta}$ (of a connection) will not in itself be a form, but rather a differential form with coefficients in ${\hom(E, E)}$, which is generally not a trivial bundle. In order to get a differential form from this, we shall have to apply an invariant polynomial. In this post, I’ll describe the proof that one indeed gets well-defined characteristic classes (that are actually independent of the connection), and that they coincide with the usually defined topological Chern classes. (more…)
August 8, 2011
## Chern-Weil for complex line bundles
Posted by Akhil Mathew under differential geometry | Tags: characteristic classes, Chern classes, Chern-Weil theory, de Rham cohomology, line bundles |
[6] Comments
So, now with the preliminaries on connections and curvature established, and the Chern classes summarized, it’s time to see how they connect with one another. Namely, we want to say that, given a complex vector bundle, we can compute the Chern classes in de Rham cohomology by picking a connection — any connection — on it, computing the curvature, and then applying various polynomials.
We shall start by warming up with a special case, of a line bundle, where the algebra needed is easier. Let ${M}$ be a smooth manifold, ${L \rightarrow M}$ a complex line bundle. Let ${\nabla}$ be a connection on ${L}$, and let ${\Theta}$ be the curvature.
Thus, ${\Theta}$ is a global section of ${\mathcal{A}^2 \otimes \hom(L, L)}$; but since ${L}$ is a line bundle, this bundle is canonically identified with ${\mathcal{A}^2}$. (Recall the notation that $\mathcal{A}^k$ is the bundle (or sheaf) of smooth $k$-forms on the manifold $M$.)
Proposition 1 (Chern-Weil for line bundles) ${\Theta}$ is a closed form, and the image in ${ H^2(M; \mathbb{C})}$ is ${2\pi i}$ times the first Chern class of the line bundle ${L}$. (more…)
August 5, 2011
## Chern classes
Posted by Akhil Mathew under topology | Tags: characteristic classes, Chern classes, splitting principle |
[3] Comments
So, I’m in a tutorial this summer, planning to write my final paper on the Kodaira embedding theorem, and I’ve been finding my total ignorance of complex algebraic geometry to be something of a problem. One of my goals next year is, coincidentally, to acquire a solid understanding of most of the topics in Griffiths-Harris. To start with, I’d like to spend a few posts on Chern-Weil theory. This gives an analytic method of computing the Chern classes of a complex vector bundle, and more generally a framework for the characteristic classes of a principal bundle over a Lie group. In fact, it tells you what the cohomology of the classifying space of a Lie group is (it’s a certain algebra of invariant polynomials on the Lie algebra), from which — by Yoneda’s lemma — you can associate cohomology classes to a principal bundle on any space.
Today, I’d like to review what Chern classes are like.
1. Introduction
To start with, we will need to describe what the Chern classes really are. These are going to be natural maps
$\displaystyle \mathrm{Vect}_{\mathbb{C}}(X) \rightarrow H^*(X; \mathbb{Z}),$
from the complex vector bundles on a space ${X}$ to the cohomology ring. In other words, to each vector bundle ${E \rightarrow X}$, we will have an element ${c(E) \in H^*(X; \mathbb{Z})}$. In order for this to be natural, we are going to want that, for any map ${f: Y \rightarrow X}$ of topological spaces,
$\displaystyle c(f^*E) = f^* c(E) \in H^*(Y; \mathbb{Z}).$
In other words, we are going to want the map ${\mathrm{Vect}_{\mathbb{C}}(X) \rightarrow H^*(X; \mathbb{Z})}$ to be functorial in ${X}$, when both are considered as contravariant functors in ${X}$. It turns out that each functor ${\mathrm{Vect}_{n, \mathbb{C}}}$ (of ${n}$-dimensional complex vector bundles) and ${H^k(X; \mathbb{Z})}$ is representable on the appropriate homotopy category. (more…)
December 17, 2010
## The Stiefel-Whitney classes of projective space
Posted by Akhil Mathew under topology | Tags: characteristic classes, projective space, real division algebras, Stiefel-Whitney classes |
Last time we gave the axiomatic description of the Stiefel-Whitney classes. Today, following Milnor-Stasheff, we want to look at what happens in the particular case of real projective space ${\mathbb{RP}^n}$. In particular, we want to compute the Stiefel-Whitney classes of the tangent bundle ${T(\mathbb{RP}^n)}$. The cohomology ring of ${\mathbb{RP}^n}$ with ${\mathbb{Z}/2}$-coefficients is very nice: it’s ${\mathbb{Z}/2[t]/(t^{n+1})}$. We’d like to find what ${w(T(\mathbb{RP}^n)) \in \mathbb{Z}/2[t]/(t^{n+1})}$ is.
On ${\mathbb{RP}^n}$, we have a tautological line bundle ${\mathcal{L}}$ such that the fiber over ${x \in \mathbb{RP}^n}$ is the set of vectors that lie in the line represented by ${x}$. Let’s start by figuring out the Stiefel-Whitney classes of this. I claim that
$\displaystyle w(\mathcal{L}) = 1+t \in H^*(\mathbb{RP}^n, \mathbb{Z}/2).$
The reason is that, if ${\mathbb{RP}^1 \hookrightarrow \mathbb{RP}^n}$ is a linear embedding, then ${\mathcal{L}}$ pulls back to the tautological line bundle ${\mathcal{L}_1}$ on ${\mathbb{RP}^1}$. In particular, by the axioms, we know that ${w(\mathcal{L}_1) \neq 1}$, and in particular ${\mathcal{L}_1}$ has nonzero ${w_1}$. This means that ${w_1(\mathcal{L}) \neq 0}$ by naturality. As a result, ${w_1(\mathcal{L})}$ is forced to be ${t}$, and there can be nothing in other dimensions since we are working with a 1-dimensional bundle. The claim is thus proved. (more…)
December 16, 2010
## Stiefel-Whitney classes
Posted by Akhil Mathew under topology | Tags: characteristic classes, Grassmannians, Stiefel-Whitney classes |
1 Comment
The first basic example of characteristic classes are the Stiefel-Whitney classes. Given a (real) ${n}$-dimensional vector bundle ${p: E \rightarrow B}$, the Stiefel-Whitney classes take values in the cohomology ring ${H^*(B, \mathbb{Z}/2)}$. They can be used to show that most projective spaces are not parallelizable.
So how do we get them? One way, as discussed last time, is to compute the ${\mathbb{Z}/2}$ cohomology of the infinite Grassmannian. This is possible by using an explicit cell decomposition into Schubert varieties. On the other hand, it seems more elegant to give the axiomatic formulation. That is, following Milnor-Stasheff, we’re just going to list a bunch of properties that we want the Stiefel-Whitney classes to have.
Let ${p: E \rightarrow B }$ be a bundle. The Stiefel-Whitney classes are characteristic classes ${w_i(E) \in H^i(B, \mathbb{Z}/2)}$ that satisfy the following properties.
First, ${w_i(E) = 0}$ when ${i > \dim E}$. When you compute the cohomology of ${\mathrm{Gr}_n(\mathbb{R}^{\infty})}$, the result is in fact a polynomial ring with ${n}$ generators. Consequently, we should only have ${n}$ characteristic classes of an ${n}$-dimensional vector bundle. In addition, we require that ${w_0 \equiv 1}$ always.
Second, like any characteristic class, the ${w_i}$ are natural: they commute with pulling back. If ${E \rightarrow B}$ is a bundle, ${f: B' \rightarrow B}$ is a map, then ${w_i(f^*E) = f^* w_i(E)}$. Without this, they would not be very interesting. (more…)
December 15, 2010
## Vector bundles, Grassmannians, and characteristic classes
Posted by Akhil Mathew under topology | Tags: characteristic classes, Grassmannians, universal n-bundle, Yoneda lemma |
[3] Comments
I’ve been reading about spectra and stable homotopy theory lately, but don’t feel ready to start talking about them here. Instead, I shall say a few words on characteristic classes. The present post will be quite general and preparatory — the more difficult matter is to actually construct such characteristic classes. Our goal is to see that characteristic classes essentially boil down to computing the cohomology of the infinite Grassmannian.
A lot of problems in mathematics involve the existence of sections to vector bundles. For instance, there is the old question of when the sphere is parallelizable. A quick Euler characteristic argument shows that even-dimensional spheres can’t be—then there would be an everywhere nonzero vector field, whose infinitesimal flows would be homotopic to the identity (and consequently having nonzero Lefschetz number by the even-dimensionality) while having no fixed points. In fact, much more is known. Using the group or group-like structures on ${S^1, S^3, S^7}$ (coming from the complex numbers, quaternions, and octonions), it is easy to see that these manifolds are parallelizable. But in fact no other sphere is.
A characteristic class is a means of assigning some invariant to a vector bundle. Ideally, it should be trivial on trivial bundles, so the characteristic class can be thought of as an “obstruction” to finding large numbers of linearly independent sections.
More formally, let ${p: E \rightarrow B}$ be a vector bundle. A characteristic class assigns to this bundle (of some fixed dimension, say ${n}$) an element of the cohomology ring ${H^*(B)}$ (with coefficients in some ring). To be interesting, the characteristic class has to be natural. That is, if ${f: B' \rightarrow B}$ is a map, then the characteristic class of the pull-back bundle ${f^*E \rightarrow B'}$ should be the pull-back of the characteristic class of ${E \rightarrow B}$. (more…)
http://physics.stackexchange.com/questions/33768/change-in-appearance-of-liquid-drop-due-to-gravity/33777 | # Change in appearance of liquid drop due to gravity
A liquid drop is spherical in shape due to surface tension. But why does it appear as a vertical line when in free fall under gravity (e.g., a falling raindrop during rain)? Is there a specific length for the line, or does it vary with the size of the drops?
-
Your original question was better--- they appear long because they are moving fast, and so streak in your eye. – Ron Maimon Aug 9 '12 at 8:32
## 1 Answer
A drop that is free falling in vacuum is spherical. This is because free falling in a gravitational field is the same thing as being at rest with no gravitational field present: the gravitational field and the acceleration cancel each other out.
Rain drops falling to the earth can have various shapes depending on their size, although I am not aware that they can become elongated (can you provide a source?). These shapes are due to the air flowing past them, in a rather intuitive way: roughly, air flow causes the bottom to become flatter.
Edit: Regarding the appearance of raindrops (as opposed to their physical shape), consider taking a photograph of falling rain. The camera integrates the incoming light over the exposure time $t$, during which the drop travels a distance of $v t$, where $v$ is the velocity. If we are close to the ground this will be the terminal velocity, which is about $2\ \mathrm{m/s}$. If we use an exposure time of $t=1/60\ \mathrm{s}$ (say we are using a flash), the drop will trace a line of length $\sim 3\ \mathrm{cm}$. The apparent length of the line on the photograph then also depends on the distance to the drop, etc.
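For what it's worth, the same estimate for a few exposure times (keeping the rough $2\ \mathrm{m/s}$ figure quoted above) is a couple of lines of Python:

```python
v = 2.0                               # m/s, the rough terminal velocity quoted above
for t in (1/60, 1/250, 1/1000):       # s, a few common exposure times
    print(f"t = 1/{round(1/t)} s -> streak length ~ {100 * v * t:.2f} cm")
```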
-
They don't become flat, they become elongated, OP is asking why. – Ron Maimon Aug 9 '12 at 3:53
Their shape depends on their size. Small drops are spherical, larger ones look sort of like pancakes (this is what I mean by 'flat', perhaps this was not clear), yet larger ones look like parachutes. I don't know at which size (if any) they become elongated, but the shape is due to the flow of air and not gravity. I will try to clarify... – Guy Gur-Ari Aug 9 '12 at 4:38
@CrazyBuddy then that's the question about optical illusions – Yrogirg Aug 9 '12 at 7:53
@CrazyBuddy What confuses me at least is that you call them 'horizontal' lines -- I never saw a raindrop that looks like that. That is why I thought you meant this flattening. Do you mean vertical lines? – Guy Gur-Ari Aug 9 '12 at 13:29
http://math.stackexchange.com/questions/187618/finding-combined-time-to-repair-two-machines-where-time-is-exponentially-distrib?answertab=votes | # Finding combined time to repair two machines where time is exponentially distributed
I am trying to solve the following problem.
The time $T$ required to repair a machine is an exponentially distributed random variable with mean 10 hours.
a) What is the probability that a repair takes at least 15 hours given that its duration exceeds 12 hours? b) What is the probability that the combined time to repair two machines is at least 20 hours?
Solution Attempt
Since the mean is given to be 10 hours, we have $\lambda = \dfrac {1}{10}$, and the probability distribution of the time is given as $e^{-\lambda t} = e^{-\dfrac {1}{10} t}$
a) $P(T>15 |T>12) = P(0$ repairs in $(12, 15]) = e^{-\dfrac {1}{10} 3}$
b) Let $T_1$ be the r.v. representing the time to repair the first machine and $T_2$ the r.v. representing the time to repair the second machine. So we seek to evaluate $P(T_1 + T_2 > 20)$. We know both of these times should be independent, as the exponential distribution is memoryless, but I am not sure how to proceed from here.
Any help would be much appreciated.
-
I think you have a problem. You are assuming that if machine 1 is repaired first then the time to repair machine 2 starts when machine 1 is repaired. This is not the case, but the memoryless property does tell you that given machine one is repaired at time T1, the remaining time to repair machine 2 still has the exponential distribution with rate 1/10. – Michael Chernick Aug 27 '12 at 20:43
## 2 Answers
$$\mathrm P(T_1+T_2\gt20)=\mathrm P(T_1\gt20)+\int_0^{20}\mathrm P(T_2\gt20-t)\cdot\lambda\mathrm e^{-\lambda t}\cdot\mathrm dt$$ $$\mathrm P(T_1+T_2\gt20)=\mathrm e^{-20\lambda}+\int_0^{20}\mathrm e^{-\lambda (20-t)}\cdot\lambda\mathrm e^{-\lambda t}\cdot\mathrm dt=\ ...$$
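As a quick check on the final number (however one finishes the computation), recall that the sum of two independent exponentials of rate $\lambda$ has a Gamma distribution with shape parameter $2$, whose tail is $(1+\lambda x)\mathrm e^{-\lambda x}$. The short Python sketch below compares this with a simulation; the sample size is arbitrary.

```python
import math, random

lam, x = 1 / 10, 20.0    # rate and threshold from the problem

# Closed form: T1 + T2 ~ Gamma(shape 2, rate lam), so P(T1 + T2 > x) = (1 + lam*x) * exp(-lam*x).
exact = (1 + lam * x) * math.exp(-lam * x)

trials = 200_000         # arbitrary sample size
hits = sum(random.expovariate(lam) + random.expovariate(lam) > x for _ in range(trials))

print(exact)             # about 0.406
print(hits / trials)     # should agree to a couple of decimal places
```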
-
While the numerical answer you get for part (a) is correct, I think that your work indicates some misinterpretation of the concepts. $T$ is the time required to complete a repair, and its complementary cumulative distribution function is $\exp(-t/10)$, that is, $$P\{T > t\} = 1 - F_T(t) = e^{-t/10}.$$ The question in part (a) asks for a conditional probability $P\{T > 15 \mid T > 12\}$ which is by definition given by $$P\{T > 15 \mid T > 12\} = \frac{P(\{T > 15 \}\cap\{T > 12\})}{P\{T > 12\}} = \frac{P\{T > 15 \}}{P\{T > 12\}} = \frac{e^{-15/10}}{e^{-12/10}}= e^{-3/10}$$ which is the same answer as you obtained, but it is not the probability of $0$ repairs in $(12,15]$. The repairing began at time $t =0$ and the question asks: if the repair is still ongoing at time $t = 12$, what is the conditional probability that it is still ongoing at $t = 15$, and thus completes at some time $T$ larger than $15$. Your use of the phrase
$$0 ~ \text{repairs in} ~(12,15]$$
almost makes it sound like the repairs are a Poisson arrivals process.
-
http://mathoverflow.net/questions/108792/irreducibility-of-fundamental-weyl-modules | ## Irreducibility of fundamental Weyl modules
It is known that for a simple algebraic group over an algebraically closed field of positive characteristic (which I assume to be *good* for the group), the Weyl modules corresponding to the fundamental weights are irreducible if the group is either $SL_n$ or $SO_n$ (Wong 71). For the symplectic group, this is not true for an arbitrary fundamental weight. Foulle provides certain conditions on the weights under which the corresponding Weyl modules of the symplectic group are irreducible. Does one know anything about the irreducibility of the fundamental Weyl modules for the exceptional groups? I believe it's true for $G_2$ by results of Premet, Humphreys and others. Is it true also for other exceptional groups?
-
@P-Samuel: You should accept Sasha's answer, since it is likely to be the only complete one. – Jim Humphreys Oct 6 at 20:55
## 3 Answers
It is true in types $G_2$, $F_4$ and $E_6$, but in types $E_7$ and $E_8$ one has to exclude $p=7$, $p=13$ and $p=19$ as well (I think $p=19$ occurs only in type $E_8$). More on this can be found in Jantzen's paper "First cohomology groups for classical Lie algebras" published in Progress in Math., vol. 95, 1991 (the best way to find this paper is to google the title). For related results, see also the paper by Gilkey and Seitz titled "Some representations of exceptional Lie algebras" and published in Geom. Dedicata, 25, (1988), 407-416.
That the Weyl module $V(\varpi_6)$ for $G=E_7(K)$ is reducible in characteristic $7$ is quite easy to see directly: the minuscule module $V(\varpi_7)$ has a nondegenerate $G$-invariant symplectic form and by using Weyl's formula one observes that $V(\varpi_6)$ can be obtained from the second fundamental Weyl module for $Sp_{56}(K)$ by restriction to $G$. The latter Weyl module is reducible in characteristic $7$ as $7$ divides $56$.
-
Thanks for pointing out this paper by Jantzen, which I completely overlooked when putting together my survey; I will add it to the revisions list. In his table on page 299, the prime 19 indeed occurs just for $E_8$ and $\varpi_3$. (All the good primes which create problems in exceptional types are below the respective Coxeter numbers 6, 12, 12, 18, 30. But in that range there are not yet any coherent conjectures like Lusztig's for larger primes.) – Jim Humphreys Oct 4 at 14:12
Thank you. This was indeed very helpful. – P-Samuel Oct 5 at 7:41
[EDIT] My original incomplete answer is below, but Sasha Premet has provided the complete reference, the table of 4.6 in Jantzen's 1991 paper. Jantzen recalls that he computed various cases in types $E_\ell$ not covered by the computer calculations of his Oregon colleagues Gilkey and Seitz, using his amazing Sum Formula. This formula applies for all primes but is usually quite daunting to compute. Probably his non-computer results are completely correct, but skeptics should feel free to duplicate them by whatever means. It's definitely challenging to get any conceptual insight for these small primes. (Revisions to my lecture notes are posted here.)
This question probably hasn't been fully answered for the exceptional types $E_6, E_7, E_8$. In general, the linkage principle (developed fully in Jantzen's book Representations of Algebraic Groups) ensures that for $p$ "large enough", each fundamental weight $\varpi$ lies in the closure of the lowest $p$-alcove for the affine Weyl group (of Langlands dual type) and thus the Weyl module $V(\varpi)$ is indeed simple. Here "large enough" depends on the Coxeter number and the coefficients of the highest short root, so the problem reduces to a computational one for a definite range of primes. The bad primes (possibly 2, 3, 5 for exceptional types) cause special problems, but apart from those the fundamental modules for types $G_2, F_4$ are well-behaved.
In Chapter 4 of my 2006 survey Modular Representations of Finite Groups of Lie Type (LMS Lecture Notes Series 326), I tried to cover all results known to me, with complete references. For example, Table 1 on page 37 summarizes results computed by Gilkey-Seitz for $F_4$ which confirm that fundamental Weyl modules are simple when $p>3$. Symplectic groups have been thoroughly examined by Premet-Suprunenko, Foulle, McNinch. And so on.
The fundamental modules for types $E_\ell$ probably haven't all been studied well enough to settle all cases. However, the few fundamental weights which are minuscule pose no problem since the weights of the corresponding Weyl modules form a single orbit under the Weyl group and thus the modules remain simple for all $p$. But otherwise the linkage principle just gives a bound on how much computation is needed. Probably nothing unusual happens for good primes, but that's not clearly documented yet.
One other theoretical remark: Lusztig's conjecture should handle all primes greater than the Coxeter number in a uniform way, but it has not yet been proved for this optimal lower bound. Meanwhile direct computation using existing algorithms (which go back to early work by Wong and Burgoyne) seems to be the only alternative.
-
You might also find the data on Frank Luebeck's website useful:
http://www.math.rwth-aachen.de/~Frank.Luebeck/chev/WMSmall/index.html
-
http://math.stackexchange.com/questions/197981/assigning-a-finite-value-to-a-divergent-integral | # Assigning a finite value to a divergent integral
Can someone help me to regularize the following divergent integral?
$$\int_0^{1/2}\, \frac{d x}{x^{3/2} (1-x)^{3/2}}$$
Guys, thank you very much for your answers. Thus, if I have understood your procedure, the regularized result of this divergent integral (let's do a trivial case) $$\int_0^\infty{dx} = \lim_{\Lambda\rightarrow \infty} \int_0^\Lambda{dx}=\lim_{\Lambda\rightarrow \infty}\Lambda - 0 \equiv 0$$ is zero, because one simply removes the divergence and the game is over, right?
Well, I would like to have your opinion about this other regularization I have thought of $$\int_0^\infty{dx} = \lim_{m\rightarrow\infty} \int_0^m{dx} = \lim_{m\rightarrow\infty} 1+\sum_{n=1}^{m-1} 1 = \lim_{m\rightarrow\infty} 1+\sum_{n=1}^{m-1} {1\over n^0} = 1+\zeta(0)=1-{1\over 2}={1\over 2}$$ where I have used the well-known value $\zeta(0)=-1/2$ of the Riemann $\zeta$-function. I was wondering what can be the physical interpretation of such a (naive, I admit) regularization...but maybe there is none and I am just a crazy physicist :)
-
What do you mean? – Gerry Myerson Sep 17 '12 at 12:38
This integral formally diverges at the lower end. However, I know that there are ways to regularize the integral and assign it a finite value (by throwing away a formally infinite quantity, of course). – Andrea Sep 17 '12 at 12:52
Well, if that's all you want, pick your favorite number between 0 and 1/2 (not including 0), and throw away the integral from zero to your number, and report the finite value you get by integrating from your number to 1/2. If you want more than this, then you have to know what you want, what conditions your regularization has to satisfy. – Gerry Myerson Sep 17 '12 at 13:05
The first question, and @oen's answer, arose in J. Hadamard's "partie finie" idea, which had a similarly disturbingly ad-hoc appearance, but which he used self-consistently to some advantage. At the same time, I'd agree with Gerry M. that one definitely needs some sense of the "function" of the regularization, since, without constraints, a moderately clever person can arrange to get any answer they want, etc. – paul garrett Oct 4 '12 at 16:22
## 2 Answers
Here's a fairly natural way to assign the integral a value.
Introduce a cutoff, $$\begin{eqnarray*} I_\epsilon &=& \int_\epsilon^{1/2} \frac{dx}{x^{3/2}(1-x)^{3/2}} \\ &=& \frac{2(1-2\epsilon)}{\sqrt{\epsilon(1-\epsilon)}} \\ &=& \frac{2}{\sqrt{\epsilon}}-3\sqrt{\epsilon} + O(\epsilon^{3/2}). \end{eqnarray*}$$ The simplest regularization is to remove only the divergence, in which case we find the regularized integral is zero: $$I_{\mathrm{reg}} = \lim_{\epsilon\to0}\left(I_\epsilon-\frac{2}{\sqrt{\epsilon}}\right) = 0.$$
Another approach involves analytic continuation of the incomplete beta function, with the same result.
Addendum: Consider the integral representation of the incomplete beta function, $$B_z(a,b) = \int_0^z dx\, x^{a-1}(1-x)^{b-1}.$$ This representation requires $\mathrm{Re}(a) > 0$. (We assume $0<z<1$.) In terms of the hypergeometric function, $$B_z(a,b) = \frac{z^a}{a} {}_2 F_1(a,1-b;a+1;z).$$ We take the right hand side to define the integral, wherever it makes sense to do so. Thus, the integral is $$\begin{eqnarray*} B_{1/2}\left(-\frac{1}{2},-\frac{1}{2}\right) &=& -2\sqrt{2} \ {}_2F_1\left(-\frac{1}{2},\frac{3}{2};\frac{1}{2};\frac{1}{2}\right) \\ &=& 0. \end{eqnarray*}$$ The last equality is not trivial. In fact $${}_2F_1\left(-\frac{1}{2},\frac{3}{2};\frac{1}{2};z\right) = \frac{1-2z}{\sqrt{1-z}}.$$
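Since the last identity is the only step that is not mechanical, a quick numerical check may be reassuring. The Python sketch below (assuming the mpmath library is available) tests the hypergeometric identity at a few points and also re-checks, from the closed form $I_\epsilon = 2(1-2\epsilon)/\sqrt{\epsilon(1-\epsilon)}$ given above, that removing $2/\sqrt{\epsilon}$ leaves something that tends to zero.

```python
from mpmath import mp, mpf, hyp2f1, sqrt

mp.dps = 30

# 2F1(-1/2, 3/2; 1/2; z) - (1 - 2z)/sqrt(1 - z) should vanish.
for z in (mpf(1)/10, mpf(1)/3, mpf(1)/2):
    print(hyp2f1(mpf(-1)/2, mpf(3)/2, mpf(1)/2, z) - (1 - 2*z)/sqrt(1 - z))

# I_eps - 2/sqrt(eps) should tend to 0 (it behaves like -3*sqrt(eps)).
for eps in (mpf(10)**-2, mpf(10)**-4, mpf(10)**-6):
    I_eps = 2*(1 - 2*eps)/sqrt(eps*(1 - eps))
    print(I_eps - 2/sqrt(eps))
```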
-
Could you give a brief description of how the analytic continuation of the incomplete beta function is carried out? I'm very interested. – Antonio Vargas Sep 19 '12 at 1:15
@AntonioVargas: Sure, Antonio. I've added something above. – oen Sep 19 '12 at 2:09
Cheers, thanks :) – Antonio Vargas Sep 19 '12 at 2:23
$$x = \sin^2(\theta) \implies I = \int_0^{\pi/4} \dfrac{2 \sin(\theta) \cos(\theta)}{\sin^3(\theta) \cos^3(\theta)} d\theta$$
\begin{align} I & = 8 \int_0^{\pi/4} \dfrac{d\theta}{4\sin^2(\theta) \cos^2(\theta)} = 8 \int_0^{\pi/4} \dfrac{d \theta}{\sin^2(2 \theta)}\\ & = 8 \int_0^{\pi/4} \text{cosec}^2(2 \theta) d \theta = 4 \int_{0}^{\pi/2} \text{cosec}^2(\phi) d \phi\\ & = -4 \left. \cot(\phi) \right \vert_{0}^{\pi/2} = \lim_{t \to 0} 4 \cot(t) \\ & = 4 \left( \dfrac1t - \dfrac{t}3 - \dfrac{t^3}{45} - \dfrac{2t^5}{945} + \mathcal{O} \left(t^7 \right) \right)\end{align} Removing the pole at $t=0$, and substituting $t=0$ in the rest, we get $I = 0$.
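For what it is worth, the Laurent expansion of $\cot$ used here is easy to confirm with SymPy (illustration only):

```
import sympy as sp

t = sp.symbols('t')
print(sp.series(sp.cot(t), t, 0, 7))    # 1/t - t/3 - t**3/45 - 2*t**5/945 + O(t**7)
print(sp.limit(sp.cot(t) - 1/t, t, 0))  # 0: removing the pole leaves I = 4*0 = 0
```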
-
http://mathoverflow.net/questions/89556/easy-and-hard-problems-in-mathematics/89576 | ## Easy and Hard problems in Mathematics [closed]
Modified question:
I would like to know, for pedagogical purposes, some examples of problems in mathematics which do not involve difficult techniques to solve, but which a change of context turns into monstrously, unimaginably difficult problems.
By changing the context I mean changing one class of objects in the problem to a related class of objects: for example, from directed graphs to undirected graphs, or from the Zygmund class to the log-Lipschitz class; changing a 'less-than problem' to a 'greater-than problem'; or going from a 2-case problem to a 3-case problem. There are plenty of such examples in theoretical computer science and computational complexity theory; I need some examples in mathematics. Lots of examples fall into this category, but I am looking only for extreme examples like the ones stated below. Since this question is asked for a pedagogical purpose, it would be interesting if there is a story behind the problem.
Examples of problems:
• Linear Programming to integer linear programming
• 2-coloring to 3-coloring
• Eulerian graph to Hamiltonian graph
• Undirected graph case to directed graph case in Shannon's switching game
• 2-SAT to 3-SAT
One thought which motivated me to pose this question is: what if the Königsberg problem had been formulated as a vertex problem? Would Leonhard Euler have been inspired to create graph theory? No doubt, history speaks differently, as the Königsberg problem is stated in terms of edges. Not only did Euler solve this problem, he created a branch of mathematics out of it! And I am not sure what turn of events would have taken place had the problem been posed in terms of vertices.
IMHO, look-alike easy problems and hard problems coexist, but it is the easy problems which saved mathematicians' day and the hard ones which gave them an incentive to work harder.
Some pointers for the hardness of a problem: problems which need sophisticated tools, techniques which diverge from the routine ones, or radical thinking and bold ideas to solve them, like the Poincaré conjecture; or problems for which adequate tools to attempt them do not yet exist, like P = NP.
I would appreciate any answers in this direction. Thank you in advance.
-
I'm guessing the words "small change" are ill-defined enough to get this question closed. That being said, when you prove theorems, you have some assumptions on the objects of study, and in a huge number of cases removing an assumption makes the theorem not just more difficult, but plain wrong. But it's at a fundamental level. An example of this is Fermat's Last Theorem (I went with the most obvious example possible): if you take R or C rather than Z, solutions exist and the proof is fairly straightforward. – B. Bischof Feb 26 2012 at 5:49
I just voted to close as "not a real question" but in hindsight I think "subjective and argumentative" is nearer the mark. A lot of these so-called small changes are in fact BIG changes, in my opinion. – Yemon Choi Feb 26 2012 at 7:33
If the change from "wanting real solutions" to "wanting integer solutions" seems small, then one may need stronger lenses. – Yemon Choi Feb 26 2012 at 10:27
I take it you are hoping to get the question reopened. Perhaps you should start a thread on the meta site where there could be a discussion of how the question might be made more acceptable on MO. I'm not sure that a change in the title suffices. – Gerry Myerson Feb 28 2012 at 22:22
Meta discussion - meta.mathoverflow.net/discussion/1314/… – François G. Dorais♦ Feb 29 2012 at 2:53
## 5 Answers
Finding the $\ell_p$ operator norm of a matrix is easy for $p=1,2,\infty$, but it is NP-hard for any other $p$ (see this discussion, for example).
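As a concrete illustration of the easy cases (an example, not from the linked discussion), the induced norms for $p=1,2,\infty$ are single library calls in NumPy, while no comparably cheap routine exists for other $p$:

```
import numpy as np

A = np.array([[1.0, -2.0],
              [3.0,  4.0]])

print(np.linalg.norm(A, 1))       # induced 1-norm: maximum absolute column sum
print(np.linalg.norm(A, 2))       # induced 2-norm: largest singular value
print(np.linalg.norm(A, np.inf))  # induced infinity-norm: maximum absolute row sum
```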
-
Changing $L^2$ convergence to almost everywhere convergence (of the Fourier series of an $L^2$ function) takes (essentially) the rather easy Riesz-Fischer theorem into the very hard Carleson's theorem.
-
This is very common, for example the mean ergodic theorem (Von-Neumann) vs. the pointwise ergodic theorem (Birkhoff). – Asaf Feb 26 2012 at 7:22
Always good to see love for Carleson's theorem, but I would debate that this is a "small change" ... – Yemon Choi Feb 26 2012 at 9:59
Changing the question from "integral quadratic form represents zero" to "real quadratic form almost represents zero", changes the classical theorems (which eventually sum up to the Hasse-Minkowski theorem) to the Oppenheim conjecture, proved by Margulis in the late 80s, by completely different methods (instead of algebraic number theory, one uses Homogeneous Flows).
-
Finding real solutions to an elliptic curve compared to finding integer solutions.
-
Not a small change, in my opinion – Yemon Choi Feb 26 2012 at 10:30
Let C be a finite collection of finite sets. Further let C be closed under union. Is it always the case that there is an element x which is in at least half the members of C?
The answer to this question is no. If, however, the union of C is required to be nonempty, then the answer is unknown to me, and I suspect to many others. (Look up union closed sets conjecture.)
http://mathhelpforum.com/statistics/82073-sample-spaces-events-print.html | # Sample Spaces and Events
• April 3rd 2009, 12:13 AM
youvy
Sample Spaces and Events
43. From a group of 20 men and 16 women, 2 people are chosen at random. Find the probability of:
a) 1 man and 1 woman being chosen
b) 2 men being chosen
c) 2 women being chosen
Thank you:)
• April 3rd 2009, 12:35 AM
toop
a. 1 man being chosen = 20/36 and 1 woman being chosen is 16/36
The probability of both would be the product of the two probabilities I believe.
b. The probability of the first man being chosen is 20/36 and the probability of the second is 19/35 since you are now one man down. Multiply the two probabilities.
c. The probability of the first woman being chosen is 16/36 and the second is 15/35. Multiply the two like above.
• April 3rd 2009, 03:19 AM
mr fantastic
Quote:
Originally Posted by toop
a. 1 man being chosen = 20/36 and 1 woman being chosen is 16/36
The probability of both would be the product of the two probabilities I believe.
b. The probability of the first man being chosen is 20/36 and the probability of the second is 19/35 since you are now one man down. Multiply the two probabilities.
c. The probability of the first woman being chosen is 16/36 and the second is 15/35. Multiply the two like above.
I don't think your solution to (a) is correct.
The correct answer is given by $\frac{{20 \choose 1} \cdot {16 \choose 1}}{{36 \choose 2}} = \frac{20 \cdot 16 \cdot 2}{36 \cdot 35} \, ....$
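A quick numerical check of these counts (an illustration using Python's built-in binomial coefficient):

```
from math import comb

men, women, total = 20, 16, 36
pairs = comb(total, 2)

p_mixed = comb(men, 1) * comb(women, 1) / pairs   # (a) one man and one woman
p_two_men = comb(men, 2) / pairs                  # (b) two men
p_two_women = comb(women, 2) / pairs              # (c) two women

print(p_mixed, p_two_men, p_two_women)
print(p_mixed + p_two_men + p_two_women)          # the three cases sum to 1
```

Note that (b) and (c) agree with the sequential argument above, e.g. 20/36 · 19/35 = C(20,2)/C(36,2); it is only (a) that needs the correction.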
http://math.stackexchange.com/questions/100999/fx-0-for-x-notin-mathbbq-and-fx-1-q-for-x-p-q-prove-f-is-in | $f(x)= 0$ for $x \notin\mathbb{Q}$, and $f(x)=1/q$ for $x=p/q$. Prove: $f$ is integrable
$f(x)= 0$ if $x \notin \mathbb{Q}$; otherwise $f(x)=1/q$ for $x=p/q$ such that $p$ and $q$ share no common divisor. I'd love your help proving that $f$ is integrable and that $\int_{0}^{1}f=0$.
I showed that the lower Darboux sum is $0$, so I basically need to show that for every epsilon we can find a partition such that the upper Darboux sum is smaller than the given epsilon.
The upper Darboux sum is $\bar{S}=\sum_{1}^{n}f(x_i) \Delta x_i$ over the points $x_i$ of the partition; I tried to replace $f(x_i)$ with $1/q_i$ and check whether this sum can be made arbitrarily small.
Thanks a lot.
-
HINT: Did you try to calculate the upper Darboux sum corresponding to the uniform partition into $n$ equal parts? Attempt this and tell us where you are stuck. – Srivatsan Jan 21 '12 at 14:27
2 Answers
The OP was stuck with using the definition of the Darboux integral; to proceed we need some nontrivial estimates about the function. (I have tried to maintain notational consistency with Didier Piau's answer.)
Consider the uniform partition of the unit interval into $N$ parts. Fix some $n$ (whose value will be decided shortly), and define $X_n$ to be the set of points $x$ such that $f(x) \geqslant \frac{1}{n}$. Then $|X_n| \leqslant n^2$ (why?), and so at most $n^2$ of the $N$ subintervals contain a point from $X_n$.
For each of the subintervals containing a point from $X_n$, we upper bound the function by $1$; for the remaining subintervals we upper bound it by $\frac{1}{n}$. Therefore the Darboux sum is at most $$\left( n^2 + (N - n^2) \cdot \frac{1}{n} \right) \cdot \frac{1}{N} \leqslant \frac{n^2}{N} + \frac{1}{n}.$$ Now picking $n = N^{1/3}$, the Darboux sum is at most $O ( N^{-1/3} )$, which approaches $0$ as $N \to \infty$.
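To see the bound in action, here is a small numerical experiment (an illustration, not needed for the proof): it computes the exact upper Darboux sum on the uniform partition into $N$ parts, using the fact that the supremum of $f$ on a subinterval is $1/q$ for the smallest denominator $q$ of any rational in that subinterval.

```
from fractions import Fraction
from math import ceil, floor

def sup_f(a, b):
    # sup of f on [a, b] equals 1/q for the smallest q such that some p/q lies in [a, b]
    q = 1
    while True:
        if floor(b * q) >= ceil(a * q):   # there is an integer p with a <= p/q <= b
            return Fraction(1, q)
        q += 1

def upper_darboux(N):
    h = Fraction(1, N)
    return sum(sup_f(k * h, (k + 1) * h) * h for k in range(N))

for N in (10, 100, 1000):
    print(N, float(upper_darboux(N)))   # tends to 0 as N grows
```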
-
Using Darboux sums is allright, of course, but here is a shortcut.
For every positive integer $n$, let $X_n$ denote the set of points $x$ such that $f(x)\geqslant1/n$. Then $X_n$ is finite and $f\leqslant f_n$, where $f_n=1/n+(1-1/n)\mathbf 1_{X_n}$. Since every lower or upper Darboux sum of $f_n$ adapted to $f_n$ is exactly $1/n$, letting $n\to\infty$ concludes the proof.
-
"Since every lower or upper Darboux sum of $f_n$ adapted to $f_n$" -- Can you clarify what this means? – Srivatsan Jan 21 '12 at 14:33
What is $\mathbf 1_{X_n}$? – Jozef Jan 21 '12 at 14:34
– Did Jan 21 '12 at 14:35
@Srivatsan: Darboux sums are based on subdivisions. A subdivision is adapted to a step function if it contains every discontinuity point of the function. – Did Jan 21 '12 at 15:02
Thanks for the clarification. And quite nice approach, by the way! (+1) – Srivatsan Jan 21 '12 at 15:06
http://math.stackexchange.com/questions/189693/every-vector-space-has-a-basis-proof-using-zorns-lemma-in-linear-algebra | # Every vector space has a basis proof using zorn's lemma in linear algebra
Every vector space has a basis. How do I prove this statement using Zorn's lemma?
-
– Bill Cook Sep 1 '12 at 16:35
A basis is a maximal linearly independent set under the $\subseteq$-ordering. – Michael Greinecker Sep 1 '12 at 16:50
I downvoted. I think the asker should put some more effort. I will gladly reverse my downvote if the question gets edited accordingly. – Giuseppe Negro Sep 1 '12 at 17:14
What have you tried? If this is homework, tag it as such. – lhf Sep 1 '12 at 17:45
## 1 Answer
The poset involved is the set of all linearly independent subsets, ordered by inclusion, i.e. $A\le B$ iff $A\subseteq B$. The problem is to show that there is a maximal such linearly independent subset. Zorn's lemma says this is true if every chain of linearly independent subsets has an upper bound. So the question is where to find such an upper bound. I claim the union of the members of the chain will serve. If a set of linearly independent subsets is linearly ordered by inclusion, then their union is also linearly independent. To show this, one must recall that a set is linearly independent precisely if every finite linear relation among them is trivial. I.e. linear independence is a property that has "finite character".
At this point, I will leave the details as an exercise.
Later note: In view of Brian M. Scott's comment below: If a linearly independent set fails to span the space, then there's some vector that is not a linear combination of the members of that set. Add it to the linearly independent set, getting a bigger linearly independent set. It follows that every linearly independent set that fails to span the space is not maximal. Hence every maximal one spans the space.
-
Showing that there is a maximal linear independent set is only half of if: one must still show that a maximal lin. indep. set spans the space. – Brian M. Scott Sep 1 '12 at 17:39
@BrianM.Scott : I've added something on that above. – Michael Hardy Sep 1 '12 at 18:05
Looks good now. +1 – Brian M. Scott Sep 1 '12 at 18:07
I said "every finite linear relation among them is trivial". One could also say "Every finite linear dependence among them is zero". A linear dependence among vectors $\vec{v}_1,\ldots,\vec{v}_n$ is a tuple of scalars $c_1,\ldots,c_n$ such that $\sum_k c_k \vec{v}_k=\vec{0}$. One can show that two linear dependences are really the same dependence if one of them is a nonzero scalar multiple of the other; hence the set of linear dependences is a projective space. – Michael Hardy Sep 1 '12 at 18:28
@BrianM.Scott : It seems to me that that "half" of the problem is much less than half: it's quite simple by comparison to the other "half". – Michael Hardy Sep 1 '12 at 22:13
http://mathlesstraveled.com/2010/01/30/irrationality-of-pi-curiouser-and-curiouser/ | Explorations in mathematical beauty
## Irrationality of pi: curiouser and curiouser
Posted on January 30, 2010 by
I’ve been remiss in posting here lately, which I will attribute to Christmas and New Year travelling and general craziness, and then starting a new semester craziness… but things have settled down a bit, so here we go again!
Since it’s been a while since my last post in this series, here’s a quick recap: I’m presenting a proof by Ivan Niven that $\pi$ is irrational, that is, that it cannot be represented as the ratio of two integers (and hence its decimal expansion goes on forever without repeating). My first post just gave some background and an outline of the general argument. In my second post, we began by assuming that $\pi$ is rational, and defined the function
$\displaystyle f(x) = \frac{x^n(a - bx)^n}{n!}$
(really, a family of functions, one for each value of $n$) where $a$ and $b$ are the “numerator” and “denominator” of $\pi$. We then showed that $f(0) = f(\pi) = 0$, and in fact that $f(x)$ is symmetric, with $f(\pi - x) = f(x)$. In my third post, we showed that all the derivatives of $f(x)$ take on integer values when evaluated at both 0 and $\pi$. We’re about halfway there! Today we’ll continue by defining a new function $F(x)$ in terms of $f(x)$, and show some of its properties. Recall too our overall plan: we’re going to wind up with an integral which is strictly greater than 0, strictly less than 1, and also an integer! Since this is clearly nonsense (there are no integers between 0 and 1) we will conclude that our initial assumption—that $\pi$ is rational—was bogus, and that $\pi$ must be irrational after all.
So without further ado, here’s our new function $F(x)$. Actually, this too is technically a family of functions $F_n(x)$, one for each $n$; but again, everything we prove about it will be true no matter what $n$ is.
$\displaystyle F(x) = f(x) - f^{(2)}(x) + f^{(4)}(x) - \dots + (-1)^n f^{(2n)}(x).$
In words, $F(x)$ is the alternating sum of all the even derivatives of $f(x)$. (I say “all” because, as noted in my last post, any derivative of $f(x)$ higher than $2n$ is zero.) Using Sigma notation, we can also write this more concisely as
$\displaystyle F(x) = \sum_{i = 0}^n (-1)^i f^{(2i)}(x).$
There are a few things to note. First, think what happens when we evaluate $F(0)$: since all the derivatives of $f(x)$ take on integer values at 0, and $F(x)$ is just a sum of a bunch of derivatives of $f(x)$, $F(0)$ must be an integer too. Of course, the same thing goes for $F(\pi)$.
Next, consider
$F^{\prime\prime}(x) + F(x).$
Since the derivative of a sum is the sum of the derivatives, we can compute $F^{\prime\prime}(x)$ as
$F^{\prime\prime}(x) = f^{(2)}(x) - f^{(4)}(x) + \dots + (-1)^{n-1}f^{(2n)}(x).$
That is, $f(x)$ turns into $f^{(2)}(x)$, $-f^{(2)}(x)$ turns into $-f^{(4)}(x)$, and so on. “But wait a minute,” you say. “Shouldn’t the $(-1)^n f^{(2n)}(x)$ at the end of $F(x)$ turn into $(-1)^n f^{(2n+2)}(x)$ in $F^{\prime\prime}(x)$?” In fact, it does—but as noted before, $f^{(2n+2)}(x)$ is zero, so that term just goes away. Now we note that every term of $F(x)$ has a corresponding term in $F^{\prime\prime}(x)$ of the opposite sign, except $f(x)$, which has no corresponding term. So when we add $F(x)$ and $F^{\prime\prime}(x)$, everything cancels except $f(x)$:
$F^{\prime\prime}(x) + F(x) = f(x).$
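As a quick sanity check (just an illustration, not part of Niven's argument), SymPy will happily verify the identity for a particular small $n$, with $a$ and $b$ kept symbolic:

```
import sympy as sp

x, a, b = sp.symbols('x a b')
n = 3   # any fixed n gives the same picture

f = x**n * (a - b*x)**n / sp.factorial(n)
F = sum((-1)**i * sp.diff(f, x, 2*i) for i in range(n + 1))

print(sp.expand(sp.diff(F, x, 2) + F - f))   # 0: the identity F'' + F = f
print(sp.diff(f, x, 2*n + 1))                # 0: derivatives beyond order 2n vanish
print([sp.expand(sp.diff(f, x, k).subs(x, 0)) for k in range(2*n + 1)])
# each value at 0 is a polynomial in a and b with integer coefficients
```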
Astute readers will note a funny resemblance between the definition of $F(x)$ and the Taylor series for $\cos(x)$… and indeed, next time we’ll start making some connections with our old trigonometric friends, $\sin$ and $\cos$.
This entry was posted in famous numbers, proof and tagged derivatives, irrationality, Ivan Niven, pi, proof. Bookmark the permalink.
### 10 Responses to Irrationality of pi: curiouser and curiouser
1. Dave says:
The long awaited 4th post! I’m still with you. However, I’ll have to do some research on the Taylor series for cos(x) between now and the next post. It’s been a long time since I’ve seen any Taylor series (though I know it’s back there in the cobwebs somewhere).
2. Brent says:
Well, the similarity is that the Taylor series for cosine is also an alternating sum of “even things” (x^(2k)/(2k)!, to be precise).
3. Brent says:
Also, knowing the Taylor series for cos(x) will NOT be a requirement for understanding the next post! I just threw that in there at the end as an interesting aside.
4. Dave says:
I see. It still might be worth the time to go back and look at Talyor series (just for the sake of jogging my memory). That’s one of the things I like most about your site. It forces me to think about things that I haven’t studied rigorously in almost a decade. And many times I come out of the discussion with a better understanding than I had the first time around.
Also, it’s worth mentioning, AGAIN, that after seeing the F(x) function and the relationship, F”(x) + F(x) = f(x), I have the uncontrollable urge to climb inside Niven’s mind to find out how he came up with the F(x) function in the first place.
5. Brent says:
Yes, you’re right, it’s probably still worth it. And I’m glad my blog spurs you to learn (or re-learn) things!
As for how Niven came up with F(x), I suspect it’s just f(x) that he came up with, and then used F(x) just as a convenient notational tool so he didn’t have to write out lots of long equations with sums of derivatives of f (I think this will make more sense once you see the next post—basically F(x) just helps to conveniently compute the integral of f(x) sin(x)). But I would also really like to know how he came up with f(x)! Hopefully in a wrap-up post I can do some research and try to get a bit of insight that I can share.
6. Sue VanHattum says:
I’m with you on this chunk, but when my brain is clearer, I’ll need to go over the first parts of the argument again to see the big picture more clearly.
Once we’re done with the details, I’d like to try to imagine/understand Niven’s thought process. We haven’t yet used the fact of what pi is, have we?
7. Brent says:
Sue: you’re absolutely right, we haven’t actually used any properties of pi yet. That will come in the next post, when we bring some trig functions into the picture.
8. Jack says:
I’m really enjoying this series =) I hope you do more like them.
9. Brent says:
Jack: thanks, glad you’re enjoying it! I enjoy doing this sort of expositional series so I imagine I will probably do more… I just have to come up with other topics that would work well. Suggestions welcome!
10. Pingback: Irrationality of pi: the impossible integral « The Math Less Traveled
http://physics.stackexchange.com/questions/20343/when-to-use-heat-diffusivity-eqn-and-when-to-use-fouriers-law-to-find-temperatu/20357 | # When to use Heat Diffusivity eqn and when to use Fourier's law to find temperature distribution?
Let's say that there is a circular conical section that has diameter $D=.25x$ without any heat generation and I need to find the temperature distribution.
Originally I thought I could use the heat diffusivity equation at steady state to find the temperature distribution. The differential equation would be:
$$\frac{d}{dx}(k\frac{dT}{dx})=0$$
I am looking at the solution to the example in the book and they use Fourier's Law $$q_{x}=-kA\frac{dT}{dx}$$ and their result is $T(x)=T_{1}-\frac{4q}{\pi a^{2}k}(\frac{1}{x_{1}}-\frac{1}{x_{2}})$
Why do they use one as opposed to the other? Will the two methods produce the same result?
The reason I ask is because they also provide a derivation for the temperature distribution of a plane wall with no heat generation and they use the heat diffusivity equation
-
## 1 Answer
Both methods give you the same result: the heat diffusion equation can be derived from Fourier's law.
For a one-dimensional problem, you can do that easily yourself by taking an infinitesimal part of the rod of size $dx$ and realizing that $q_x$ at either side of this part should be the same (since there is no heat source or sink in this piece). Taking the limit $dx \rightarrow 0$ gives you the diffusion equation. If your problem becomes more complex, the diffusion equation will be easier to solve.
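If it helps, here is a small SymPy sketch (an illustrative check, with the cone's $D = 0.25x$ plugged in) showing that the conservation-of-energy route reproduces the $1/x$ dependence that Fourier's law gives:

```
import sympy as sp

x, k = sp.symbols('x k', positive=True)
T = sp.Function('T')

# steady conduction in the cone with no generation: d/dx( k * A(x) * dT/dx ) = 0,
# where the cross-sectional area is A(x) = pi * D**2 / 4 with D = 0.25*x
A = sp.pi * (sp.Rational(1, 4) * x)**2 / 4
ode = sp.Eq(sp.diff(k * A * sp.diff(T(x), x), x), 0)

print(sp.dsolve(ode, T(x)))   # T(x) = C1 + C2/x, the same 1/x form Fourier's law produces
```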
-
http://math.stackexchange.com/questions/194272/how-to-sample-from-a-product-of-sums-distribution?answertab=votes | # How to sample from a product-of-sums distribution?
$A$ is an $M \times N$ matrix whose entries are positive. $x$ is an $N$-dimensional binary vector (i.e. consisting of $0$s and $1$s), and the number of $1$s in $x$ is constant. Let $y = Ax$. The distribution of $x$ is given by
$$p(x) \propto \prod_{i=1}^M y_i$$
where $y_i$ is the $i^{\rm th}$ element of $y$.
How can one fairly and efficiently sample from this distribution?
I'm planning to do Gibbs sampling but its computational complexity is high (drawing a random $x$ is $\mathcal{O}(MN)$). I'm looking for a more efficient sampling method or tricks to make Gibbs faster.
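For illustration only (a hypothetical sketch with a made-up $A$, not the Gibbs sampler referred to above): because the number of ones is fixed, a Metropolis-Hastings chain that proposes swapping one active index with one inactive index keeps $y = Ax$ updatable in $\mathcal{O}(M)$ per step rather than $\mathcal{O}(MN)$.

```
import numpy as np

def sample(A, k, n_steps=10_000, rng=None):
    """Metropolis-Hastings targeting p(x) proportional to prod_i (A x)_i,
    over binary x with exactly k ones (swap proposal is symmetric)."""
    rng = rng or np.random.default_rng(0)
    M, N = A.shape
    ones = list(rng.choice(N, size=k, replace=False))
    zeros = [j for j in range(N) if j not in ones]
    y = A[:, ones].sum(axis=1)                      # current y = A x
    for _ in range(n_steps):
        i, j = rng.integers(len(ones)), rng.integers(len(zeros))
        y_new = y - A[:, ones[i]] + A[:, zeros[j]]  # O(M) incremental update
        ratio = np.exp(np.log(y_new).sum() - np.log(y).sum())
        if rng.random() < ratio:                    # accept with min(1, p(x')/p(x))
            ones[i], zeros[j] = zeros[j], ones[i]
            y = y_new
    x = np.zeros(N, dtype=int)
    x[ones] = 1
    return x

A = np.random.default_rng(1).uniform(0.1, 1.0, size=(5, 12))
print(sample(A, k=4))
```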
-
Please let me ask you. You have $M$ dimensional $y$ and you are interested in the multivariate density of first $N$ multiplications, $A$ is a deterministic matrix and $x$ is a random vector. Are my observations correct? thanks. – Seyhmus Güngören Sep 16 '12 at 10:36
Thanks for your comment, I noticed and corrected an error. The product should be from 1 to M. – emrea Sep 17 '12 at 1:31
http://divisbyzero.com/2012/06/20/platos-approximation-of-pi/?like=1&_wpnonce=1e4d043b68 | # Division by Zero
A blog about math, puzzles, teaching, and academic technology
Posted by: Dave Richeson | June 20, 2012
## Plato’s approximation of pi?
Today I came across an assertion that Plato used ${\sqrt{2}+\sqrt{3}}$ as an approximation of ${\pi}$. Indeed, it is not a bad approximation: ${3.14626\ldots}$ (although it is not within Archimedes’s bounds: ${223/71<\pi<22/7}$).
Not only had I not seen this approximation before, I had not heard of any value of ${\pi}$ attributed to Plato.
I investigated a little further and discovered that there is no direct evidence that Plato knew of this approximation. It was pure speculation by the famous philosopher of science Karl Popper! Here’s what Popper has to say (this is in his notes to Chapter 6 of The Open Society and its Enemies, Vol. 1, pp. 252–253).
It is a curious fact that ${\sqrt{2}+\sqrt{3}}$ very nearly approximates ${\pi}$… The excess is less than ${0.0047}$, i.e. less than ${1 \frac{1}{2}}$ pro mille of ${\pi}$, and we have reason to believe that no better upper boundary for ${\pi}$ had been proved to exist. A kind of explanation of this curious fact is that it follows from the fact that the arithmetical mean of the areas of the circumscribed hexagon and the inscribed octagon is a good approximation of the area of the circle. Now it appears, on the one hand, that Bryson operated with the means of circumscribed and inscribed polygons,… and we know, on the other hand (from the Greater Hippias), that Plato was interested in the adding of irrationals, so that he must have added ${\sqrt{2}+\sqrt{3}}$. There are thus two ways by which Plato may have found out the approximate equation ${\sqrt{2}+\sqrt{3}\approx \pi}$, and the second of these ways seems almost inescapable. It seems a plausible hypothesis that Plato knew of this equation, but was unable to prove whether or not it was a strict equality or only an approximation.
Popper then spends a couple of paragraphs tying this into an earlier discussion of Plato’s Timaeus. This is the work in which Plato discusses the four elements (earth, air, fire, and water) and associates them with four of the regular polyhedra (cube, octahedron, tetrahedron, and icosahedron, respectively). The connection between Timaeus and ${\pi}$ is the relation between the values ${\sqrt{2}}$ and ${\sqrt{3}}$, the 45-45-90 and 30-60-90 triangles which can be used to make the faces of the polyhedra, and the approximations to the area of a circle using these triangles.
Popper ends with the reminder/disclaimer:
I must again emphasize that no direct evidence is known to me to show that this was in Plato’s mind; but if we consider the indirect evidence here marshalled, then the hypothesis does perhaps not seem too far-fetched.
Note: if we take the unit circle and construct a circumscribed hexagon and an inscribed octagon, then the area of the hexagon is ${2\sqrt{3}}$ and the area of the octagon is ${2\sqrt{2}}$. So, it is true that the average of these areas is ${\sqrt{2}+\sqrt{3}}$.
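A quick numerical check of both observations (illustration only):

```
from math import sqrt, pi

approx = sqrt(2) + sqrt(3)
print(approx, approx - pi)                 # 3.14626..., excess below 0.0047

# unit circle: circumscribed hexagon and inscribed octagon
hexagon_area = 2 * sqrt(3)
octagon_area = 2 * sqrt(2)
print((hexagon_area + octagon_area) / 2)   # their mean is exactly sqrt(2) + sqrt(3)
```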
Posted in Math | Tags: approximation, Karl Popper, Math, pi, Plato, Timaeus
## Responses
1. Borzacchini, Luigi. Incommensurability, music and continuum: a cognitive approach. (English). Arch. Hist. Exact Sci. 61, No. 3, 273-302 (2007).
Borzacchini argues that music ratios provides the “secret of the sect” for the irrational continuum. Plato’s Timeaus is based on Pythagorean philosophy but using irrational numbers as music tuning. This is tied to Archytas and Eudoxus.
Karl Popper stated that Kepler’s Pythagorean philosophy was the precursor of Schrodinger and Poppler then stated that harmonic resonance governs science and reality.
I wonder if this is tied to Egypt using 9/8 for solving the area of a circle with 9/8 also as the major 2nd music interval from Egypt’s sacred unit fraction of 2/3. The 2/3 has to be the subharmonic or inversion of 3/2, the Perfect Fifth music interval, which was squared to 9/4 and then halved to 9/8. So then 9/8 cubed was the solution for the square root of two as the tritone music interval. This is also discussed by musicologist Ernest McClain’s book the Pythagorean Plato.
By: drew hempel on July 30, 2012
at 8:37 pm
• Thanks for sharing this article. I look forward to reading it. (I don’t know too much about music theory, but I’m interested in learning more.)
By: Dave Richeson on July 31, 2012
at 1:52 pm
2. [...] Richeson shares his readings about Plato’s approximation of posted at Division by [...]
By: Mathematics and Multimedia Blog Carnival 23 on August 18, 2012
at 11:21 am
3. 355/113=3.141592………. is a best approximation of pi using integers less than 1000.
By: vijayan on September 14, 2012
at 10:55 pm
http://en.wikipedia.org/wiki/Two's_complement | # Two's complement
Two's complement is a mathematical operation on binary numbers, as well as a binary signed number representation based on this operation.
The two's complement of an N-bit number is defined as the complement with respect to 2^N, in other words the result of subtracting the number from 2^N. This is also equivalent to taking the ones' complement and then adding one, since the sum of a number and its ones' complement is all 1 bits. The two's complement of a number behaves like the negative of the original number in most arithmetic, and positive and negative numbers can coexist in a natural way.
In two's-complement representation, negative numbers are represented by the two's complement of their absolute value;[1] in general, negation (reversing the sign) is performed by taking the two's complement. This system is the most common method of representing signed integers on computers.[2] An N-bit two's-complement numeral system can represent every integer in the range −2^(N−1) to +(2^(N−1) − 1), while ones' complement can only represent integers in the range −(2^(N−1) − 1) to +(2^(N−1) − 1).
The two's-complement system has the advantage that the fundamental arithmetic operations of addition, subtraction, and multiplication are identical to those for unsigned binary numbers (as long as the inputs are represented in the same number of bits and any overflow beyond those bits is discarded from the result). This property makes the system both simpler to implement and capable of easily handling higher precision arithmetic. Also, zero has only a single representation, obviating the subtleties associated with negative zero, which exists in ones'-complement systems.
8-bit two's-complement integers

| Bits      | Unsigned value | Two's-complement value |
|-----------|----------------|------------------------|
| 0000 0000 | 0   | 0    |
| 0000 0001 | 1   | 1    |
| 0000 0010 | 2   | 2    |
| 0111 1110 | 126 | 126  |
| 0111 1111 | 127 | 127  |
| 1000 0000 | 128 | −128 |
| 1000 0001 | 129 | −127 |
| 1000 0010 | 130 | −126 |
| 1111 1110 | 254 | −2   |
| 1111 1111 | 255 | −1   |
## Potential ambiguities of terminology
One should be cautious when using the term two's complement, as it can mean either a number format or a mathematical operator. For example, 0111 represents decimal 7 in two's-complement notation, but the two's complement of 7 in a 4-bit register is actually the "1001" bit string (the same bit string that represents 9 = 2^4 − 7 in unsigned arithmetic), which is the two's-complement representation of −7. The statement "convert x to two's complement" may be ambiguous, since it could describe either the process of representing x in two's-complement notation without changing its value, or the calculation of the two's complement, which is the arithmetic negative of x if two's complement representation is used.
## Converting from two's complement representation
A two's-complement number system encodes positive and negative numbers in a binary number representation. The weight of each bit is a power of two, except for the most significant bit, whose weight is the negative of the corresponding power of two.
The value w of an N-bit integer $a_{N-1} a_{N-2} \dots a_0$ is given by the following formula:
$w=-a_{N-1} 2^{N-1} + \sum_{i=0}^{N-2} a_i 2^i$
The most significant bit determines the sign of the number and is sometimes called the sign bit. Unlike in sign-and-magnitude representation, the sign bit also has the weight −(2^(N−1)) shown above. Using N bits, all integers from −2^(N−1) to 2^(N−1) − 1 can be represented.
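For illustration, the formula translates directly into code (a sketch, not part of the article):

```
def from_twos_complement(bits):
    """Interpret bits [a_(N-1), ..., a_1, a_0] (most significant first) as a two's-complement integer."""
    n = len(bits)
    low = sum(b * 2**i for i, b in enumerate(reversed(bits[1:])))
    return -bits[0] * 2**(n - 1) + low

print(from_twos_complement([0, 0, 0, 0, 0, 1, 0, 1]))  # 0000 0101 ->    5
print(from_twos_complement([1, 1, 1, 1, 1, 0, 1, 1]))  # 1111 1011 ->   -5
print(from_twos_complement([1, 0, 0, 0, 0, 0, 0, 0]))  # 1000 0000 -> -128
```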
## Converting to two's complement representation
In two's-complement notation, a non-negative number is represented by its ordinary binary representation; in this case, the most significant bit is 0. However, the range of numbers represented is not the same as with unsigned binary numbers. For example, an 8-bit unsigned number can represent the values 0 to 255 (11111111). By contrast, a two's-complement 8-bit number can only represent non-negative integers from 0 to 127 (01111111), because the rest of the bit combinations, those with the most significant bit equal to '1', represent the negative integers −1 to −128.
The two's complement operation is the additive inverse operation, so negative numbers are represented by the two's complement of the absolute value.
### From the ones' complement
To get the two's complement of a binary number, the bits are inverted, or "flipped", by using the bitwise NOT operation; the value of 1 is then added to the resulting value, ignoring the overflow which occurs when taking the two's complement of 0.
For example, using 1 byte (= 2 nibbles = 8 bits), the decimal number 5 is represented by
0000 0101₂
The most significant bit is 0, so the pattern represents a non-negative value. To convert to −5 in two's-complement notation, the bits are inverted; 0 becomes 1, and 1 becomes 0:
1111 1010
At this point, the numeral is the ones' complement of the decimal value 5. To obtain the two's complement, 1 is added to the result, giving:
1111 1011
The result is a signed binary number representing the decimal value −5 in two's-complement form. The most significant bit is 1, so the value represented is negative.
The two's complement of a negative number is the corresponding positive value. For example, inverting the bits of −5 (above) gives:
0000 0100
And adding one gives the final value:
0000 0101
The two's complement of zero is zero: inverting gives all ones, and adding one changes the ones back to zeros (since the overflow is ignored). Furthermore, the two's complement of the most negative number representable (e.g. a one as the most-significant bit and all other bits zero) is itself. Hence, there appears to be an 'extra' negative number.
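The rule is short enough to state as code (an illustrative sketch, not from the article); note how masking off the overflow makes the two's complement of 0 come out as 0 and leaves the most negative pattern fixed:

```
def twos_complement(pattern, n_bits=8):
    """Two's complement of an unsigned n-bit pattern: flip the bits, add one, drop overflow."""
    mask = (1 << n_bits) - 1
    return ((pattern ^ mask) + 1) & mask

print(format(twos_complement(0b00000101), '08b'))  # 11111011, the pattern for -5
print(format(twos_complement(0b11111011), '08b'))  # 00000101, back to +5
print(format(twos_complement(0b00000000), '08b'))  # 00000000, overflow ignored
print(format(twos_complement(0b10000000), '08b'))  # 10000000, -128 is its own complement
```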
### Subtraction from 2^N
The sum of a number and its ones' complement is an N-bit word with all 1 bits, which is 2^N − 1. Then adding a number to its two's complement results in the N lowest bits set to 0 and the carry bit 1, where the latter has the weight 2^N. Hence, in the unsigned binary arithmetic the value of the two's-complement negative number x* of a positive x satisfies the equality x* = 2^N − x.[3]
For example, to find the 4-bit representation of −5 (subscripts denote the base of the representation):
x = 5₁₀ therefore x = 0101₂
Hence, with N = 4:
x* = 2^N − x = 2^4 − 5₁₀ = 10000₂ − 0101₂ = 1011₂
The calculation can be done entirely in base 10, converting to base 2 at the end:
x* = 2^N − x = 2^4 − 5₁₀ = 11₁₀ = 1011₂
### Working from LSB towards MSB
A shortcut to manually convert a binary number into its two's complement is to start at the least significant bit (LSB), and copy all the zeros (working from LSB toward the most significant bit) until the first 1 is reached; then copy that 1, and flip all the remaining bits. This shortcut allows a person to convert a number to its two's complement without first forming its ones' complement. For example: the two's complement of "0011 1100" is "1100 0100", where the underlined digits were unchanged by the copying operation (while the rest of the digits were flipped).
In computer circuitry, this method is no faster than the "complement and add one" method; both methods require working sequentially from right to left, propagating logic changes. The method of complementing and adding one can be sped up by a standard carry look-ahead adder circuit; the LSB towards MSB method can be sped up by a similar logic transformation.
## Sign extension
Main article: Sign extension
| Decimal | 7-bit notation | 8-bit notation |
|---------|----------------|----------------|
| −42 | 1010110 | 1101 0110 |
| 42  | 0101010 | 0010 1010 |

Sign-bit repetition in 7- and 8-bit integers using two's complement
When turning a two's-complement number with a certain number of bits into one with more bits (e.g., when copying from a 1-byte variable to a 2-byte variable), the most-significant bit must be repeated in all the extra bits, while the lower bits are copied unchanged.
Some processors have instructions to do this in a single instruction. On other processors a conditional must be used followed with code to set the relevant bits or bytes.
Similarly, when a two's-complement number is shifted to the right, the most-significant bit, which contains magnitude and the sign information, must be maintained. However when shifted to the left, a 0 is shifted in. These rules preserve the common semantics that left shifts multiply the number by two and right shifts divide the number by two.
Both shifting and doubling the precision are important for some multiplication algorithms. Note that unlike addition and subtraction, precision extension and right shifting are done differently for signed vs unsigned numbers.
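A sketch of both operations on raw bit patterns (illustrative code, not from the article; the helper names are made up):

```
def sign_extend(pattern, from_bits, to_bits):
    """Widen an unsigned from_bits pattern to to_bits, repeating the sign bit."""
    if (pattern >> (from_bits - 1)) & 1:                            # negative: fill with ones
        pattern |= ((1 << (to_bits - from_bits)) - 1) << from_bits
    return pattern

def arithmetic_shift_right(pattern, n_bits, shift=1):
    """Shift right while maintaining the most-significant (sign) bit."""
    sign = (pattern >> (n_bits - 1)) & 1
    pattern >>= shift
    if sign:
        pattern |= ((1 << shift) - 1) << (n_bits - shift)
    return pattern

print(format(sign_extend(0b1010110, 7, 8), '08b'))           # 11010110: -42 stays -42
print(format(sign_extend(0b0101010, 7, 8), '08b'))           # 00101010:  42 stays  42
print(format(arithmetic_shift_right(0b11010110, 8), '08b'))  # 11101011: -42 -> -21
```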
## The most negative number
With only one exception, starting with any number in two's-complement representation, if all the bits are flipped and 1 added, the two's-complement representation of the negative of that number is obtained. Positive 12 becomes negative 12, positive 5 becomes negative 5, zero becomes zero (with the overflow ignored), etc.
| −128        | 1000 0000 |
|-------------|-----------|
| invert bits | 0111 1111 |
| add one     | 1000 0000 |

The two's complement of −128 results in the same 8-bit binary number.
The two's complement of the minimum number in the range will not have the desired effect of negating the number. For example, the two's complement of −128 in an 8-bit system results in the same binary number. This is because a positive value of 128 cannot be represented with an 8-bit signed binary numeral. Note that this is detected as an overflow condition since there was a carry into but not out of the most-significant bit. This can lead to unexpected bugs in that an unchecked implementation of absolute value could return a negative number in the case of the minimum negative. The abs family of integer functions in C typically has this behaviour. This is also true for Java.[4] In this case it is for the developer to decide if there will be a check for the minimum negative value before the call of the function.
The most negative number in two's complement is sometimes called "the weird number," because it is the only exception.[5][6]
Although the number is an exception, it is a valid number in regular two's complement systems. All arithmetic operations work with it both as an operand and (unless there was an overflow) a result.
## Why it works
Given the set of all possible N-bit values, we can assign the lower (by binary value) half to be the integers from 0 to 2^(N−1) − 1 inclusive and the upper half to be −2^(N−1) to −1 inclusive. The upper half can be used to represent negative integers from −2^(N−1) to −1 because, under addition modulo 2^N, they behave the same way as those negative integers. That is to say, because i + j mod 2^N = i + (j + 2^N) mod 2^N, any value in the set { j + k 2^N | k is an integer } can be used in place of j.
For example, with eight bits, the unsigned bytes are 0 to 255. Subtracting 256 from the top half (128 to 255) yields the signed bytes −128 to −1.
The relationship to two's complement is realised by noting that 256 = 255 + 1, and (255 − x) is the ones' complement of x.
| Decimal | Two's complement |
|---------|------------------|
| 127  | 0111 1111 |
| 64   | 0100 0000 |
| 1    | 0000 0001 |
| 0    | 0000 0000 |
| −1   | 1111 1111 |
| −64  | 1100 0000 |
| −127 | 1000 0001 |
| −128 | 1000 0000 |

Some special numbers to note
### Example
−95 modulo 256 is equivalent to 161 since
−95 + 256
= −95 + 255 + 1
= 255 − 95 + 1
= 160 + 1
= 161
``` 1111 1111 255
− 0101 1111 − 95
=========== =====
1010 0000 (ones' complement) 160
+ 1 + 1
=========== =====
1010 0001 (two's complement) 161
```
| Two's complement | Decimal |
|------------------|---------|
| 0111 | 7  |
| 0110 | 6  |
| 0101 | 5  |
| 0100 | 4  |
| 0011 | 3  |
| 0010 | 2  |
| 0001 | 1  |
| 0000 | 0  |
| 1111 | −1 |
| 1110 | −2 |
| 1101 | −3 |
| 1100 | −4 |
| 1011 | −5 |
| 1010 | −6 |
| 1001 | −7 |
| 1000 | −8 |

Two's complement using a 4-bit integer
Fundamentally, the system represents negative integers by counting backward and wrapping around. The boundary between positive and negative numbers is arbitrary, but the de facto rule is that all negative numbers have a left-most bit (most significant bit) of one. Therefore, the most positive 4-bit number is 0111 (7) and the most negative is 1000 (−8). Because of the use of the left-most bit as the sign bit, the absolute value of the most negative number (|−8| = 8) is too large to represent. For example, an 8-bit number can only represent every integer from −128 to 127 (2^(8−1) = 128) inclusive. Negating a two's complement number is simple: invert all the bits and add one to the result. For example, negating 1111, we get 0000 + 1 = 0001, that is, 1. Therefore, 1111 must represent −1.
The system is useful in simplifying the implementation of arithmetic on computer hardware. Adding 0011 (3) to 1111 (−1) at first seems to give the incorrect answer of 10010. However, the hardware can simply ignore the left-most bit to give the correct answer of 0010 (2). Overflow checks still must exist to catch operations such as summing 0100 and 0100.
The system therefore allows addition of negative operands without a subtraction circuit and a circuit that detects the sign of a number. Moreover, that addition circuit can also perform subtraction by taking the two's complement of a number (see below), which only requires an additional cycle or its own adder circuit. To perform this, the circuit merely pretends an extra left-most bit of 1 exists.
## Arithmetic operations
### Addition
Adding two's-complement numbers requires no special processing if the operands have opposite signs: the sign of the result is determined automatically. For example, adding 15 and −5:
``` 11111 111 (carry)
0000 1111 (15)
+ 1111 1011 (−5)
==================
0000 1010 (10)
```
This process depends upon restricting to 8 bits of precision; a carry to the (nonexistent) 9th most significant bit is ignored, resulting in the arithmetically correct result of 1010.
The last two bits of the carry row (reading right-to-left) contain vital information: whether the calculation resulted in an arithmetic overflow, a number too large for the binary system to represent (in this case greater than 8 bits). An overflow condition exists when these last two bits are different from one another. As mentioned above, the sign of the number is encoded in the MSB of the result.
In other terms, if the left two carry bits (the ones on the far left of the top row in these examples) are both 1s or both 0s, the result is valid; if the left two carry bits are "1 0" or "0 1", a sign overflow has occurred. Conveniently, an XOR operation on these two bits can quickly determine if an overflow condition exists. As an example, consider the signed 4-bit addition of 7 and 3:
``` 0111 (carry)
0111 (7)
+ 0011 (3)
=============
1010 (−6) invalid!
```
In this case, the far left two (MSB) carry bits are "01", which means there was a two's-complement addition overflow. That is, 1010₂ = 10₁₀ is outside the permitted range of −8 to 7.
In general, any two N-bit numbers may be added without overflow, by first sign-extending both of them to N + 1 bits, and then adding as above. The N + 1 bits result is large enough to represent any possible sum (N = 5 two's complement can represent values in the range −16 to 15) so overflow will never occur. It is then possible, if desired, to 'truncate' the result back to N bits while preserving the value if and only if the discarded bit is a proper sign extension of the retained result bits. This provides another method of detecting overflow—which is equivalent to the method of comparing the carry bits—but which may be easier to implement in some situations, because it does not require access to the internals of the addition.
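The criterion "overflow occurred exactly when the carry into the sign bit differs from the carry out of it" can be checked mechanically (an illustrative sketch; the helper name is made up):

```
def add(a, b, n_bits=8):
    """Add two unsigned n-bit patterns; return (n-bit result, overflow flag)."""
    mask = (1 << n_bits) - 1
    a, b = a & mask, b & mask
    raw = a + b
    result = raw & mask
    carry_out = raw >> n_bits
    carry_in = ((a ^ b ^ result) >> (n_bits - 1)) & 1   # carry into the sign position
    return result, bool(carry_in ^ carry_out)

print(add(0b00001111, 0b11111011))   # 15 + (-5): (10, False), pattern 0000 1010, no overflow
print(add(0b0111, 0b0011, 4))        # 7 + 3 in 4 bits: (10, True), overflow as in the example
```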
### Subtraction
Computers usually use the method of complements to implement subtraction. Using complements for subtraction is closely related to using complements for representing negative numbers, since the combination allows all signs of operands and results; direct subtraction works with two's-complement numbers as well. Like addition, the advantage of using two's complement is the elimination of examining the signs of the operands to determine if addition or subtraction is needed. For example, subtracting −5 from 15 is really adding 5 to 15, but this is hidden by the two's-complement representation:
``` 11110 000 (borrow)
0000 1111 (15)
− 1111 1011 (−5)
===========
0001 0100 (20)
```
Overflow is detected the same way as for addition, by examining the two leftmost (most significant) bits of the borrows; overflow has occurred if they are different.
Another example is a subtraction operation where the result is negative: 15 − 35 = −20:
``` 11100 000 (borrow)
0000 1111 (15)
− 0010 0011 (35)
===========
1110 1100 (−20)
```
As for addition, overflow in subtraction may be avoided (or detected after the operation) by first sign-extending both inputs by an extra bit.
### Multiplication
The product of two N-bit numbers requires 2N bits to contain all possible values. If the precision of the two, two's complement operands is doubled before the multiplication, direct multiplication (discarding any excess bits beyond that precision) will provide the correct result. For example, take 6 × −5 = −30. First, the precision is extended from 4 bits to 8. Then the numbers are multiplied, discarding the bits beyond 8 (shown by 'x'):
``` 00000110 (6)
* 11111011 (−5)
============
110
1100
00000
110000
1100000
11000000
x10000000
xx00000000
============
xx11100010
```
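The "double the precision, multiply, truncate" recipe from the start of this section can also be expressed directly (an illustrative sketch, not from the article). Python's arbitrary-precision integers stand in for the wider registers; the masking plays the role of discarding the excess bits.

```
def to_unsigned(value, n_bits):
    # a negative Python int masked to n_bits gives its sign-extended two's-complement pattern
    return value & ((1 << n_bits) - 1)

def to_signed(pattern, n_bits):
    return pattern - (1 << n_bits) if pattern >> (n_bits - 1) else pattern

def multiply(a, b, n_bits=4):
    wide = 2 * n_bits    # the product of two N-bit numbers needs 2N bits
    product = to_unsigned(to_unsigned(a, wide) * to_unsigned(b, wide), wide)
    return to_signed(product, wide)

print(multiply(6, -5))    # -30, matching the worked example
print(multiply(-8, -8))   # 64, which needs all 8 product bits
```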
This is very inefficient; by doubling the precision ahead of time, all additions must be double-precision and at least twice as many partial products are needed than for the more efficient algorithms actually implemented in computers. Some multiplication algorithms are designed for two's complement, notably Booth's multiplication algorithm. Methods for multiplying sign-magnitude numbers don't work with two's-complement numbers without adaptation. There isn't usually a problem when the multiplicand (the one being repeatedly added to form the product) is negative; the issue is setting the initial bits of the product correctly when the multiplier is negative. Two methods for adapting algorithms to handle two's-complement numbers are common:
• First check to see if the multiplier is negative. If so, negate (i.e., take the two's complement of) both operands before multiplying. The multiplier will then be positive so the algorithm will work. Because both operands are negated, the result will still have the correct sign.
• Subtract the partial product resulting from the MSB (pseudo sign bit) instead of adding it like the other partial products. This method requires the multiplicand's sign bit to be extended by one position, being preserved during the shift right actions.[7]
As an example of the second method, take the common add-and-shift algorithm for multiplication. Instead of shifting partial products to the left as is done with pencil and paper, the accumulated product is shifted right, into a second register that will eventually hold the least significant half of the product. Since the least significant bits are not changed once they are calculated, the additions can be single precision, accumulating in the register that will eventually hold the most significant half of the product. In the following example, again multiplying 6 by −5, the two registers and the extended sign bit are separated by "|":
``` 0 0110 (6) (multiplicand with extended sign bit)
× 1011 (−5) (multiplier)
=|====|====
0|0110|0000 (first partial product (rightmost bit is 1))
0|0011|0000 (shift right, preserving extended sign bit)
0|1001|0000 (add second partial product (next bit is 1))
0|0100|1000 (shift right, preserving extended sign bit)
0|0100|1000 (add third partial product: 0 so no change)
0|0010|0100 (shift right, preserving extended sign bit)
1|1100|0100 (subtract last partial product since it's from sign bit)
1|1110|0010 (shift right, preserving extended sign bit)
|1110|0010 (discard extended sign bit, giving the final answer, −30)
```
### Comparison (ordering)
Comparison is often implemented with a dummy subtraction, where the flags in the computer's status register are checked, but the main result is ignored. The zero flag indicates if two values compared equal. If the exclusive-or of the sign and overflow flags is 1, the subtraction result was less than zero, otherwise the result was zero or greater. These checks are often implemented in computers in conditional branch instructions.
Unsigned binary numbers can be ordered by a simple lexicographic ordering, where the bit value 0 is defined as less than the bit value 1. For two's complement values, the meaning of the most significant bit is reversed (i.e. 1 is less than 0).
The following algorithm (for an n-bit two's complement architecture) sets the result register R to −1 if A < B, to +1 if A > B, and to 0 if A and B are equal:
```
Reversed comparison of sign bit:
if A(n-1) == 0 and B(n-1) == 1 then
    R := +1
    break
else if A(n-1) == 1 and B(n-1) == 0 then
    R := -1
    break
end

Comparison of remaining bits:
for i = n-2...0 do
    if A(i) == 0 and B(i) == 1 then
        R := -1
        break
    else if A(i) == 1 and B(i) == 0 then
        R := +1
        break
    end
end

R := 0
```
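The same comparison, transcribed into Python over unsigned n-bit patterns (a transcription of the pseudocode above, for readers who want to run it):

```
def compare(a, b, n_bits):
    """Return -1 if a < b, +1 if a > b, 0 if equal, in two's-complement order."""
    sign_a = (a >> (n_bits - 1)) & 1
    sign_b = (b >> (n_bits - 1)) & 1
    if sign_a != sign_b:                      # reversed comparison of the sign bit
        return +1 if sign_a == 0 else -1
    for i in range(n_bits - 2, -1, -1):       # remaining bits, most significant first
        bit_a, bit_b = (a >> i) & 1, (b >> i) & 1
        if bit_a != bit_b:
            return +1 if bit_a else -1
    return 0

print(compare(0b0111, 0b1000, 4))   # +1:  7 > -8
print(compare(0b1111, 0b0000, 4))   # -1: -1 <  0
print(compare(0b0101, 0b0101, 4))   #  0: equal
```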
## Two's complement and universal algebra
In a classic HAKMEM published by the MIT AI Lab in 1972, Bill Gosper noted that whether or not a machine's internal representation was two's-complement could be determined by summing the successive powers of two. In a flight of fancy, he noted that the result of doing this algebraically indicated that "algebra is run on a machine (the universe) which is two's-complement."[8]
Gosper's end conclusion is not necessarily meant to be taken seriously, and it is akin to a mathematical joke. The critical step is "...110 = ...111 − 1", i.e., "2X = X − 1", and thus X = ...111 = −1. This presupposes a method by which an infinite string of 1s is considered a number, which requires an extension of the finite place-value concepts in elementary arithmetic. It is meaningful either as part of a two's-complement notation for all integers, as a typical 2-adic number, or even as one of the generalized sums defined for the divergent series of real numbers 1 + 2 + 4 + 8 + ···.[9] Digital arithmetic circuits, idealized to operate with infinite (extending to positive powers of 2) bit strings, produce 2-adic addition and multiplication compatible with two's complement representation.[10] Continuity of binary arithmetical and bitwise operations in 2-adic metric also has some use in cryptography.[11]
## See also
• Division algorithm, including restoring and non-restoring division in two's-complement representations
• Offset binary
• p-adic number
## References
1. David J. Lilja and Sachin S. Sapatnekar, Designing Digital Computer Systems with Verilog, Cambridge University Press, 2005 online
2. E.g. "Signed integers are two's complement binary values that can be used to represent both positive and negative integer values.", Section 4.2.1 in Intel 64 and IA-32 Architectures Software Developer's Manual, Volume 1: Basic Architecture, November 2006
3. For x = 0 we have 2^N − 0 = 2^N, which is equivalent to 0* = 0 modulo 2^N (i.e. after restricting to the N least significant bits).
6. Wakerly, John F. (2000). Digital Design Principles & Practices (3rd ed.). Prentice Hall. p. 47. ISBN 0-13-769191-2.
7. For the summation of 1 + 2 + 4 + 8 + ··· without recourse to the 2-adic metric, see Hardy, G.H. (1949). Divergent Series. Clarendon Press. LCC QA295 .H29 1967. (pp. 7–10)
8. Vuillemin, Jean (1993). On circuits and numbers. Paris: Digital Equipment Corp. p. 19. Retrieved 2012-01-24. , Chapter 7, especially 7.3 for multiplication.
9. Anashin, Vladimir; Bogdanov, Andrey; Kizhvatov, Ilya (2007). "ABC Stream Cipher". Russian State University for the Humanities. Retrieved 24 January 2012.
## Further reading
• Koren, Israel (2002). Computer Arithmetic Algorithms. A.K. Peters. ISBN 1-56881-160-8.
• Flores, Ivan (1963). The Logic of Computer Arithmetic. Prentice-Hall.
http://mathoverflow.net/questions/111361/can-second-order-arithmetic-make-aleph-1l-countable | ## Can second order arithmetic make $\aleph_1^L$ countable?
Simpson's book Subsystems of Second Order Arithmetic shows $Z_2$ can interpret some fragments of ZF strong enough to give good theories of constructible sets and formalize statements like "there is a countable ordinal $\gamma$ such that $\gamma=\aleph_1^L$". Forcing in ZF shows this statement is independent of ZF and so certainly independent of $Z_2$. But can the independence be proved in some set theory interpretable in $Z_2$?
I ask because I expect it can.
But a positive answer would mean $Z_2$ implies consistency of a fragment of ZF with global well-ordering and existence of $\aleph_1$, obviously without power set. I don't know if that is possible.
-
Can't the independence results, as purely formal statements of arithmetic $\text{Con}(ZF)\to\text{Con}(ZFC+\psi)$, be formalized in PA or much weaker systems? We don't need ZF to prove that ZF can formalize forcing. – Joel David Hamkins Nov 3 at 12:03
Yes. Something weaker than PA (like EFA) suffices to show $\text{Con}(ZF)\rightarrow \text{Con}(Z_2+\psi)$ where $\psi$ is existence of a countable but not constructibly countable ordinal. I am asking whether $\text{Con}(Z_2)\rightarrow \text{Con}(Z_2+\psi)$. I would also be curious to know how far the forcing argument used in ZF can be interpreted in $Z_2$. – Colin McLarty Nov 3 at 19:18
I'm still thinking about this (in the little spare time that I have these days) so I don't have an answer. Nevertheless, the proof of CH mod $V = L$ is very robust so I don't see how to get past that obstruction right now. Could you say more why you "expect it can" be done? – François G. Dorais♦ Nov 5 at 2:47
I'd expect that $Z_2$ plus "there is a countable ordinal that is $\aleph_1^L$" would prove the consistency of $Z_2$, by means of the countable model consisting of the constructible sets of natural numbers. If that's right, then of course, by the second incompleteness theorem, that theory could not be proved (in $Z_2$) to be consistent relative to $Z_2$. This might be sensitive to exactly how you formalize constructibility. My expectation might have a better chance if you use Gödel operations rather than definability. – Andreas Blass Nov 5 at 6:04
Thanks to all. Very helpful comments giving me clearer expectations. In terms of my current project (proof theory of ZF with restricted power set) this has warned me away from a false route, and anyway I've found a simpler path that avoids this issue. So I could accept something like these comments as a sufficient answer. – Colin McLarty Nov 7 at 15:23
http://www.physicsforums.com/showthread.php?s=e78b8211e767063120e87157c1043ed7&p=4270421 | Physics Forums
## How does movement of an electric charge create a magnetic field?
I've been researching this for hours and yet can't seem to get an understanding.
So, let's take an electron fixed in space. At any given time it has an electrostatic field around it which decreases with distance and uses photons as a force carrier. It has an elementary charge e- and will attract protons/repel electrons.
Now let's say you have 2 electrons within each other's electrostatic fields; however, we are "holding" them so that although they are experiencing repulsive forces, they are not moving.
Now let's say I take one electron and start moving it.
This means that relative to each other, both are moving.
This means that immediately, both start to invoke a magnetic field on each other, correct?
Question 1: But what does this even mean? The only difference I see is that electron 1 will still be applying an electrostatic force, but the strength of that force will vary from stronger to weaker as I move the electron closer or farther from the other electron.
Question 2: I don't even understand the functional difference of a magnetic field. This is the magnetic field that can attract certain metals, and can repel or attract ends of a magnet, right? Does it do anything else?
Question 3: But isn't that just the same as having a negative or positive net charge in a substance based on a local abundance of protons or electrons? And an object wouldn't have to be moving to exert a net electrostatic force and repel or attract something. So how on earth does MOTION of electrons create a magnetic field?
Question 4: For emphasis of 3: What's so special/necessary about the "motion" aspect in creating magnetic fields?
Question 5: Where on earth does this magnetic field come from? I don't understand what in the system changes such that you get this fundamentally new (yet interconnected) force. The force carrier is still the photon, and the electrons still have an elementary charge, but now the location of that charge point is changing, so the only outcome I can see is that an affected body would be increasingly or decreasingly subjected to the strength of that charge point as it moves over time... I have no idea where the magnetic aspect comes in.
If someone could clarify my misunderstandings and gaps in knowledge it would really make my day!
Welcome to PF.
This means that immediately, both start to invoke a magnetic field on each other, correct?
No, it does not work that way. When you shake the first charge, the standard view is that the magnetic field is generated at the charge and starts spreading out into space with the speed of light in every direction. Technically, the field is said to be "retarded", since the field at some distance will be a function of the past motion of the charge.
Only the first charge has a non-zero magnetic field; the second charge, being at rest, keeps its electrostatic field.
The magnetic field is present in the sense that moving charges would experience a magnetic force. However, as the magnetic force acts only on moving charges, there is no action of the magnetic field on the second charge. This means that in the situation as you described it, the magnetic field does not exhibit itself.
However, as the first charge was shaken, besides the magnetic field the electric field also experiences a change, and this will affect the force acting on the second charge.
Where does the magnetic field come from? Based on theory and experience, we picture the electromagnetic fields everywhere and connected to the particles. It is difficult to find a deeper explanation for their existence.
Do not try to understand this in terms of some light particles - although the idea is attractive at first, you will find that it is very hard to make it work for every aspect of the EM field. It is much better, in your position as a beginner, to try to understand electromagnetic phenomena with the classical theory of electromagnetism.
Thanks for the answers! I understand a little bit better now, although I still have many things I'm curious about.
Quote by Jano L. The magnetic field is present in the sense that moving charges would experience magnetic force.
So what exactly does it mean to "experience this magnetic force"? I'm not sure what these forces actually do to the charges that pass through them. Does the magnetic field create some kind of attraction or repulsion? How does this attraction or repulsion differ from the electrostatic one? I read something about 2 electrons travelling side by side: ultimately their electrostatic forces will repel each other, but the faster they go, the more the magnetic fields they generate will create a higher degree of attraction between the two? I don't understand the mechanism of how this attraction is created via movement, and what exactly it attracts or repels.
However, as the magnetic force acts only on moving charges, there is no action of the magnetic field on the second charge. This means that in the situation as you described it, the magnetic field does not exhibit itself.
Ohhh, ok, so not only does the particle have to move to generate a magnetic field, but another particle has to be "moving" through the field to be affected by it.
But in this case how is movement defined? If you have 2 electrons, and one is moving east @ 100 feet per second, and the second is moving east @ 50 feet/sec, they would both exert a magnetic field on each other, correct? But isn't movement relative? Couldn't one say that relative to electron 1, electron 2 is standing still, and electron 1 is going 50 fps?
Sorry for all the questions, my mind just naturally seeks out the root answers and explanations to things...
To experience magnetic force means that the particle is under the action of a force that is perpendicular to its velocity. The exact formula for the force is

$$\mathbf F_m = q\mathbf v \times \mathbf B$$

where ##q,\mathbf v## are the charge and velocity vector of the particle and ##\mathbf B## the magnetic field vector.
Here is how you can see the effect of magnetic force on electrons with your own eyes: take a magnet and approach an old TV or CRT monitor screen. Normally, the picture is formed by myriads of electrons flying from the hot cathode and falling on the screen. When you put the magnet on the screen you will see that the colors get distorted by the magnet. The explanation is that the motion of the electrons and the magnetization of the screen are affected by the magnetic field of the magnet, and hence the electrons do not fall onto the screen as intended for a good picture.
In a cyclotron, electrons move in a horizontal circle thanks to a constant vertical magnetic field; the magnetic force constantly curves the trajectory of the particle into a circle.
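For readers who like to see numbers, here is a small numerical illustration of the formula above using NumPy; the charge is the electron's, but the velocity and field values are made up purely for the example. The dot product at the end shows the defining feature: the magnetic force is always perpendicular to the velocity.

```python
import numpy as np

q = -1.602e-19                    # electron charge, in coulombs
v = np.array([2.0e6, 0.0, 0.0])   # velocity in m/s (illustrative value)
B = np.array([0.0, 0.0, 0.05])    # magnetic field in teslas (illustrative value)

F = q * np.cross(v, B)            # magnetic (Lorentz) force, F = q v x B
print(F)                          # for these values the force points along +y
print(np.dot(F, v))               # essentially 0: F is perpendicular to v
```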
But isnt movement relative? Couldnt one say that relative to electron 1, electron 2 is standing still, and electron 1 is going 50 fps?
Yes, it depends on the frame of reference. So do the magnetic and electric fields.
If you had two charges moving along with the same velocities, from your point of view, they would be surrounded by magnetic fields, but from their point of view, there is just electrostatic field and no magnetic field.
Electric and magnetic field are just two parts of description of interaction between bodies. "How much" of the interaction is described by electric force and how much by magnetic force depends on the frame of reference.
[I took too long to write this, and Jano slipped in before me.]
Quote by mpatryluk But isn't movement relative?
Yes indeed! And so is the distinction between electric and magnetic fields.
Consider two stationary charges, one above the other from your point of view as you stand in front of them, and separated by some distance. Each charge produces (only) an electric field, which exerts an electric force on the other charge. If the charges are free to move, they move towards or away from each other depending on whether they are "like" or "unlike" charges.
Now imagine running past the two charges. From your new point of view, the charges are both in motion. In addition to the electric fields and forces, each charge now produces a magnetic field which exerts a magnetic force on the other charge.
Clearly the net effect of the electric and magnetic forces in the second case must be the "same" as the effect of the electric forces in the first case, in terms of the motion of the charges, after taking into account the difference in your own motion. After all, the charges don't "know" whether you're standing still or running.
We say nowadays that electric and magnetic fields and forces are merely different aspects of a single unified electromagnetic field and electromagnetic force. If we know the electric and magnetic fields in one reference frame (e.g. the one you use when you're standing still), and the relative velocity of another reference frame (e.g. the one you use when you're running), we ought to be able to calculate the fields in the second frame. That is, we ought to be able to transform the fields from one frame to the other.
The question of how to do this in a way that is consistent with how the laws of mechanics transform between reference frames, was a major theoretical puzzle in the late 1800s, which led ultimately to Einstein's Special Theory of Relativity.
Great, everything is starting to make a bit more sense now. I think what threw me off the most was that the electrostatic field is so simple, with such a standard explanation as being the direct result of charge exuded from an elementary particle, so I kind of assumed I should be able to grasp the magnetic component in the same way. However it seems that the explanation for "why magnetic field" has underpinnings in quantum electrodynamics. I was expecting some direct cause and effect explanation, like: when an electron is moving, x, y and z happens with the photon and the electrostatic waves, and BOOM, there's your reason for magnetic fields. I see now that it's much more complicated.
Quote by Jano L. To experience magnetic force means that the particle is under action of force that is perpendicular to its velocity. The exact formula for the force is $$\mathbf F_m = q\mathbf v \times \mathbf B$$ where ##q,\mathbf v## are the charge and velocity vector of the particle and ##\mathbf B## the magnetic field vector.
At least the math for effects of magnetic fields is fairly straightforward :). And the reasons/explanations for the "perpendicular to velocity" part also have quantum underpinnings, right?
Also how do I know what is affected by magnetic fields? Magnetic fields still only affect things on the basis of them having a positive or negative charge, right? They just affect them in a different way because the physics of particles being in motion changes the mechanics? And permanent magnets are tied into, and affected by, those same principles?
Then there must be some similarities on the quantum level between magnetic fields generated by motion of a charge and stationary magnetic fields of permanent magnets?
Quote by Jano L. Yes, it depends on the frame of reference. So does the magnetic and electric field. If you had two charges moving along with the same velocities, from your point of view, they would be surrounded by magnetic fields, but from their point of view, there is just electrostatic field and no magnetic field.
So then does that mean the outcome of the observation is relativistic too? From their frame of reference would they only be affected by electrostatic fields, whereas from my frame of reference I would see it unfold as though they had been under the influence of a magnetic field?
Electric and magnetic field are just two parts of description of interaction between bodies. "How much" of the interaction is described by electric force and how much by magnetic force depends on the frame of reference.
So unless I'm mistaken, it's the same set of rules (in the sense that it's a matter of attractive and repulsive forces), but the mechanics and manifestations of particles when in motion unfold differently from when at rest? And the explanations for why are mostly Q.E.D based?
Quote by jtbell [I took too long to write this, and Jano slipped in before me.] Yes indeed! And so is the distinction between electric and magnetic fields. Consider two stationary charges, one above the other from your point of view as you stand in front of them, and separated by some distance. Each charge produces (only) an electric field, which exerts an electric force on the other charge. If the charges are free to move, they move towards or away from each other depending on whether they are "like" or "unlike" charges. Now imagine running past the two charges. From your new point of view, the charges are both in motion. In addition to the electric fields and forces, each charge now produces a magnetic field which exerts a magnetic force on the other charge. Clearly the net effect of the electric and magnetic forces in the second case must be the "same" as the effect of the electric forces in the first case, in terms of the motion of the charges, after taking into account the the difference in your own motion. After all, the charges don't "know" whether you're standing still or running. We say nowadays that electric and magnetic fields and forces are merely different aspects of a single unified electromagnetic field and electromagnetic force. If we know the electric and magnetic fields in one reference frame (e.g. the one you use when you're standing still), and the relative velocity of another reference frame (e.g. the one you use when you're running), we ought to be able to calculate the fields in the second frame. That is, we ought to be able to transform the fields from one frame to the other. The question of how to do this in a way that is consistent with how the laws of mechanics transform between reference frames, was a major theoretical puzzle in the late 1800s, which led ultimately to Einstein's Special Theory of Relativity.
I had no idea that electromagnetics had so many relativistic components! So, correct me if I'm wrong, but let's say you're running towards the charges and observing an electric field:
There is one true underlying reality in regards to their actual state. To arrive at that state, various observers at various speeds will view differently rendered versions of the same event, witnessing varying proportions of electric and magnetic fields in use (depending on their speeds?), which all reconcile the event with it's true mathematical reality, where the specific nature of the "reconciliation" is based on their frame of reference?
Quote by mpatryluk There is one true underlying reality in regards to their actual state.
No. There is no "true underlying reality". That's precisely what this effect demonstrates. According to ANY observer, moving in ANY way, the particles will behave exactly the same and will either move apart if they are like charges or towards each other if they are opposite charges.
Quote by Drakkith No. There is no "true underlying reality". That's precisely what this effect demonstrates. According to ANY observer, moving in ANY way, the particles will behave exactly the same and will either move apart if they are like charges or towards each other if they are opposite charges.
Ok, but if they were moving relative to me, I would observe a magnetic field. Yet relative to each other, they would be stationary and observing an electrostatic field. So no matter what vector of movement I was on, I would view them do the same thing? So I would perceive it as a magnetic field, but the interactions I observed between the two would be that of a static field?
http://mathoverflow.net/questions/72364?sort=newest | Finite, Étale Morphism Of Varieties
I have a, probably very simple, question: My intuition tells me that the following statement should be true, but I couldn't find it anywhere and I wanted to make sure I am not missing something.
Let $\pi:Y\to X$ be a finite, étale morphism of nonsingular varieties over some algebraically closed field $\Bbbk$. Is it true that every point $P\in X$ has an affine neighbourhood $U$ such that $\pi^{-1}(U)$ consists of $\deg(\pi)$ irreducible components, each of which is isomorphic to $U$ via $\pi$?
Of course, if it is true, I would also be happy if you could provide a proof, in literature or otherwise.
-
Perhaps next time you should take some time to consider simple examples first (see Mahon's answer). – Martin Brandenburg Aug 8 2011 at 19:34
I wish there were downvotes for comments right now. – Mattia Talpo Aug 8 2011 at 19:59
2 Answers
No, it's not true. Consider the map $x\mapsto x^2$ as a map from $X=\mathbb A^1\setminus\{0\}$ to itself. The "problem" is that Zariski neighborhoods are too big. Any open subset of $X$ has exactly one irreducible component (in general, an open subset cannot have more components than the ambient space), so there is no hope to get the preimage of an open set to have two components.
However, if you refine your topology to allow "etale open neighborhoods" (i.e. you allow pullback by etale maps $U\to X$, not just open immersions $U\to X$), then the answer is yes. Perhaps the easiest way to prove that is to pull back by $\pi$ itself, after which you can "peel off" the diagonal component of $Y\times_X Y$. Now you have a finite etale map $(Y\times_X Y -\Delta)\to Y$ which has degree one less. Repeat until the degree is $1$, at which point you have an etale map $U\to \cdots \to Y\to X$ so that $Y\times_X U$ is $\deg(\pi)$ copies of $U$.
-
Another way to see the last fact is to note that finite etale covers of a strictly henselian ring are split, together with the fact that the strict henselianization at a point is the "limit" of etale neighborhoods (and a "noetherian descent" argument to argue that the splitting must descend to some etale neighborhood after all). – Akhil Mathew Aug 8 2011 at 16:26
It's worth pointing out that it's a general philosophy that things that you expect to be true because of your experience with the analytic topology are usually true in the etale topology. – Anton Geraschenko♦ Aug 8 2011 at 16:38
@Anton Geraschenko: How does one see that a finite etale morphism of degree one is an isomorphism? I think you use this fact in your answer. – ZhuangXiaobo Jan 2 at 2:30
@ZhuangXiaobo: Let's work locally, so that the map is $Spec(S)\to Spec(R)$. By definition, "degree $n$" means that the corresponding ring homomorphism $R\to S$ makes $S$ into a free module of rank $n$ over $R$. If $n=1$, the map is bijective, so an isomorphism of rings. – Anton Geraschenko♦ Jan 2 at 14:58
This is almost never true. For example if $X,Y$ are irreducible, then for any (affine) Zariski open $U$ of $X$, the inverse image is open in $Y$ and hence irreducible.
-
http://www.territorioscuola.com/wikipedia/en.wikipedia.php?title=Half-life
Half-life - Wikipedia, the free encyclopedia
# Half-life
| Number of half-lives elapsed | Fraction remaining | Percentage remaining |
|---|---|---|
| 0 | 1/1 | 100 |
| 1 | 1/2 | 50 |
| 2 | 1/4 | 25 |
| 3 | 1/8 | 12.5 |
| 4 | 1/16 | 6.25 |
| 5 | 1/32 | 3.125 |
| 6 | 1/64 | 1.563 |
| 7 | 1/128 | 0.781 |
| ... | ... | ... |
| n | 1/2^n | 100/(2^n) |
Half-life (t½) is the time required for a quantity to fall to half its value as measured at the beginning of the time period. In physics, it is typically used to describe a property of radioactive decay, but may be used to describe any quantity which follows an exponential decay.
The original term, dating to Ernest Rutherford's discovery of the principle in 1907, was "half-life period", which was shortened to "half-life" in the early 1950s.[1]
Half-life is used to describe a quantity undergoing exponential decay, and is constant over the lifetime of the decaying quantity. It is a characteristic unit for the exponential decay equation. The term "half-life" may generically be used to refer to any period of time in which a quantity falls by half, even if the decay is not exponential. For a general introduction and description of exponential decay, see exponential decay. For a general introduction and description of non-exponential decay, see rate law.
The converse of half-life is doubling time.
The table on the right shows the reduction of a quantity in terms of the number of half-lives elapsed.
## Probabilistic nature of half-life
Simulation of many identical atoms undergoing radioactive decay, starting with either 4 atoms per box (left) or 400 (right). The number at the top is how many half-lives have elapsed. Note the law of large numbers: With more atoms, the overall decay is more regular and more predictable.
A half-life usually describes the decay of discrete entities, such as radioactive atoms, which have unstable nuclei. In that case, it does not work to use the definition "half-life is the time required for exactly half of the entities to decay". For example, if there is just one radioactive atom with a half-life of one second, there will not be "one-half of an atom" left after one second. There will be either zero atoms left or one atom left, depending on whether or not that atom happened to decay.
Instead, the half-life is defined in terms of probability. It is the time when the expected value of the number of entities that have decayed is equal to half the original number. For example, one can start with a single radioactive atom, wait its half-life, and then check whether or not it has decayed. Perhaps it did, but perhaps it did not. But if this experiment is repeated again and again, it will be seen that - on average - it decays within the half-life 50% of the time.
In some experiments (such as the synthesis of a superheavy element), there is in fact only one radioactive atom produced at a time, with its lifetime individually measured. In this case, statistical analysis is required to infer the half-life. In other cases, a very large number of identical radioactive atoms decay in the measured time range. In this case, the law of large numbers ensures that the number of atoms that actually decay is approximately equal to the number of atoms that are expected to decay. In other words, with a large enough number of decaying atoms, the probabilistic aspects of the process could be neglected.
There are various simple exercises that demonstrate probabilistic decay, for example involving flipping coins or running a statistical computer program.[2][3][4] For example, the image on the right is a simulation of many identical atoms undergoing radioactive decay. Note that after one half-life there are not exactly one-half of the atoms remaining, only approximately, because of the random variation in the process. However, with more atoms (right boxes), the overall decay is smoother and less random-looking than with fewer atoms (left boxes), in accordance with the law of large numbers.
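The "statistical computer program" mentioned above can be only a few lines long. The following Python sketch (with arbitrarily chosen sample sizes) gives each atom an independent 50% chance of surviving one half-life and shows the surviving fraction settling near one half as the number of atoms grows.

```python
import random

def surviving_fraction(n_atoms, n_half_lives=1):
    """Fraction of atoms left after n_half_lives, each atom decaying independently."""
    p_survive = 0.5 ** n_half_lives
    survivors = sum(random.random() < p_survive for _ in range(n_atoms))
    return survivors / n_atoms

for n in (4, 400, 40000):
    print(n, surviving_fraction(n))  # approaches 0.5 as n grows (law of large numbers)
```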
## Formulas for half-life in exponential decay
Main article: Exponential decay
An exponential decay process can be described by any of the following three equivalent formulas:
$N(t) = N_0 \left(\frac {1}{2}\right)^{t/t_{1/2}}$
$N(t) = N_0 e^{-t/\tau} \,$
$N(t) = N_0 e^{-\lambda t} \,$
where
• N0 is the initial quantity of the substance that will decay (this quantity may be measured in grams, moles, number of atoms, etc.),
• N(t) is the quantity that still remains and has not yet decayed after a time t,
• t1/2 is the half-life of the decaying quantity,
• τ is a positive number called the mean lifetime of the decaying quantity,
• λ is a positive number called the decay constant of the decaying quantity.
The three parameters $t_{1/2}$, $\tau$, and λ are all directly related in the following way:
$t_{1/2} = \frac{\ln (2)}{\lambda} = \tau \ln(2)$
where ln(2) is the natural logarithm of 2 (approximately 0.693).
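In code these relations are one-liners. The short Python sketch below converts between the three parameters and evaluates N(t); the half-life value is the carbon-14 figure used as an example later in the article.

```python
import math

t_half = 5730.0                      # half-life in years (carbon-14, for illustration)
tau = t_half / math.log(2)           # mean lifetime
lam = math.log(2) / t_half           # decay constant
print(tau, lam)

def remaining(N0, t):
    """Quantity left after time t; equivalent to N0 * exp(-lam * t)."""
    return N0 * 0.5 ** (t / t_half)

print(remaining(1.0, t_half))        # 0.5
print(remaining(1.0, 2 * t_half))    # 0.25
```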
Click "show" to see a detailed derivation of the relationship between half-life, decay time, and decay constant.
$N(t) = N_0 \left(\frac {1}{2}\right)^{t/t_{1/2}}$
$N(t) = N_0 e^{-t/\tau}$
$N(t) = N_0 e^{-\lambda t}$
We want to find a relationship between $t_{1/2}$, $\tau$, and λ, such that these three equations describe exactly the same exponential decay process. Comparing the equations, we find the following condition:
$\left(\frac {1}{2}\right)^{t/t_{1/2}} = e^{-t/\tau} = e^{-\lambda t}$
Next, we'll take the natural logarithm of each of these quantities.
$\ln\left(\left(\frac {1}{2}\right)^{t/t_{1/2}}\right) = \ln(e^{-t/\tau}) = \ln(e^{-\lambda t})$
Using the properties of logarithms, this simplifies to the following:
$(t/t_{1/2})\ln \left(\frac {1}{2}\right) = (-t/\tau)\ln(e) = (-\lambda t)\ln(e)$
Since the natural logarithm of e is 1, we get:
$(t/t_{1/2})\ln \left(\frac {1}{2}\right) = -t/\tau = -\lambda t$
Canceling the factor of t and plugging in $\ln\left(\frac {1}{2}\right)=-\ln 2$, the eventual result is:
$t_{1/2} = \tau \ln 2 = \frac{\ln 2}{\lambda}.$
By plugging in and manipulating these relationships, we get all of the following equivalent descriptions of exponential decay, in terms of the half-life:
$N(t) = N_0 \left(\frac {1}{2}\right)^{t/t_{1/2}} = N_0 2^{-t/t_{1/2}} = N_0 e^{-t\ln(2)/t_{1/2}}$
$t_{1/2} = t/\log_2(N_0/N(t)) = t/(\log_2(N_0)-\log_2(N(t))) = (\log_{2^t}(N_0/N(t)))^{-1} = t\ln(2)/\ln(N_0/N(t))$
Regardless of how it's written, we can plug into the formula to get
• $N(0)=N_0$ as expected (this is the definition of "initial quantity")
• $N(t_{1/2})=\left(\frac {1}{2}\right)N_0$ as expected (this is the definition of half-life)
• $\lim_{t\to \infty} N(t) = 0$, i.e. amount approaches zero as t approaches infinity as expected (the longer we wait, the less remains).
### Decay by two or more processes
Some quantities decay by two exponential-decay processes simultaneously. In this case, the actual half-life T1/2 can be related to the half-lives t1 and t2 that the quantity would have if each of the decay processes acted in isolation:
$\frac{1}{T_{1/2}} = \frac{1}{t_1} + \frac{1}{t_2}$
For three or more processes, the analogous formula is:
$\frac{1}{T_{1/2}} = \frac{1}{t_1} + \frac{1}{t_2} + \frac{1}{t_3} + \cdots$
For a proof of these formulas, see Decay by two or more processes.
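As a quick illustration (not taken from the article), the combined half-life is just the reciprocal of a sum of reciprocals:

```python
def combined_half_life(*half_lives):
    """Half-life of a quantity decaying by several independent exponential processes."""
    return 1.0 / sum(1.0 / t for t in half_lives)

print(combined_half_life(2.0, 6.0))  # 1.5 -- shorter than either process alone
```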
### Examples
Main article: Exponential decay--Applications and examples
There is a half-life describing any exponential-decay process. For example:
• The current flowing through an RC circuit or RL circuit decays with a half-life of $RC\ln(2)$ or $\ln(2)L/R$, respectively. For this example, the term half time might be used instead of "half life", but they mean the same thing.
• In a first-order chemical reaction, the half-life of the reactant is $\ln(2)/\lambda$, where λ is the reaction rate constant.
• In radioactive decay, the half-life is the length of time after which there is a 50% chance that an atom will have undergone nuclear decay. It varies depending on the atom type and isotope, and is usually determined experimentally. See List of nuclides.
• In chemical kinetics, the half-life of a species is the time it takes for the concentration of the substance to fall to half of its initial value.
## Half-life in non-exponential decay
Main article: Rate equation
The decay of many physical quantities is not exponential—for example, the evaporation of water from a puddle, or (often) the chemical reaction of a molecule. In such cases, the half-life is defined the same way as before: as the time elapsed before half of the original quantity has decayed. However, unlike in an exponential decay, the half-life depends on the initial quantity, and the prospective half-life will change over time as the quantity decays.
As an example, the radioactive decay of carbon-14 is exponential with a half-life of 5730 years. A quantity of carbon-14 will decay to half of its original amount (on average) after 5730 years, regardless of how big or small the original quantity was. After another 5730 years, one-quarter of the original will remain. On the other hand, the time it will take a puddle to half-evaporate depends on how deep the puddle is. Perhaps a puddle of a certain size will evaporate down to half its original volume in one day. But on the second day, there is no reason to expect that one-quarter of the puddle will remain; in fact, it will probably be much less than that. This is an example where the half-life reduces as time goes on. (In other non-exponential decays, it can increase instead.)
The decay of a mixture of two or more materials which each decay exponentially, but with different half-lives, is not exponential. Mathematically, the sum of two exponential functions is not a single exponential function. A common example of such a situation is the waste of nuclear power stations, which is a mix of substances with vastly different half-lives. Consider a sample containing a rapidly decaying element A, with a half-life of 1 second, and a slowly decaying element B, with a half-life of one year. After a few seconds, almost all atoms of the element A have decayed after repeated halving of the initial total number of atoms; but very few of the atoms of element B will have decayed yet as only a tiny fraction of a half-life has elapsed. Thus, the mixture taken as a whole does not decay by halves.
## Half-life in biology and pharmacology
Main article: Biological half-life
A biological half-life or elimination half-life is the time it takes for a substance (drug, radioactive nuclide, or other) to lose one-half of its pharmacologic, physiologic, or radiological activity. In a medical context, the half-life may also describe the time that it takes for the concentration in blood plasma of a substance to reach one-half of its steady-state value (the "plasma half-life").
The relationship between the biological and plasma half-lives of a substance can be complex, due to factors including accumulation in tissues, active metabolites, and receptor interactions.[5]
While a radioactive isotope decays almost perfectly according to so-called "first order kinetics" where the rate constant is a fixed number, the elimination of a substance from a living organism usually follows more complex chemical kinetics.
For example, the biological half-life of water in a human being is about 7 to 14 days, though this can be altered by his/her behavior. The biological half-life of cesium in human beings is between one and four months. This can be shortened by feeding the person Prussian blue, which acts as a solid ion exchanger that absorbs the cesium while releasing potassium ions in their place.
## References
1. John Ayto, "20th Century Words" (1989), Cambridge University Press.
2. "MADSCI.org". Retrieved 2012-04-25.
3. "Exploratorium.edu". Retrieved 2012-04-25.
4. "Astro.GLU.edu". Retrieved 2012-04-25.
5. Lin VW; Cardenas DD (2003). Spinal cord medicine. Demos Medical Publishing, LLC. p. 251. ISBN 1-888799-61-7.
http://mathoverflow.net/questions/37498/sequences-of-evenly-distributed-points-in-a-product-of-intervals/37509 | ## Sequences of evenly-distributed points in a product of intervals
Let φ be the golden ratio, (1+√5)/2. Taking the fractional parts of its integer multiples, we obtain a sequence of values in (0,1) which are in some sense "evenly distributed" in a way which is due to the continued fraction form of φ, making the constant "as difficult as possible" to approximate using rational values (otherwise, the values in the sequence would cluster around multiples of such rational approximations). If one takes the first n values, especially if n is a Fibonacci number, they will be very evenly spaced; in fact, if n is a Fibonacci number, then the difference between two consecutive values (after ordering) is always one of two adjacent powers of φ, in correspondence with the fact that the Fibonacci numbers themselves are roughly of the form φ^k/√5.
Is there any related (or otherwise?) sequence of values in (0,1)^d, where d > 1, which are similarly "evenly distributed"?
Edit: I've been a bit unclear about the way in which φ is "special", so I'll try to elucidate. My motivation was that, as drvitek says, φ has no "better-than-expected" rational convergents. So when nφ (mod 1) is plotted against n, not only is the entire set of residues uniformly distributed on (0,1) but also "locally" we have a roughly-uniform distribution on (0,1) × N. This property marks φ out as "special" compared with most irrational numbers. I'm afraid I'm not sure how to phrase it more precisely than that.
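The spacing claim in the first paragraph is easy to check numerically. The Python sketch below (my own illustration, not part of the question) sorts the fractional parts of φ, 2φ, …, nφ for a few Fibonacci cut-offs and lists the distinct gaps between consecutive values; only two gap lengths appear, and they are consecutive powers of 1/φ.

```python
import math

phi = (1 + math.sqrt(5)) / 2

def distinct_gaps(n):
    """Distinct gaps between consecutive sorted fractional parts of k*phi, k = 1..n."""
    pts = sorted((k * phi) % 1 for k in range(1, n + 1))
    gaps = {round(b - a, 9) for a, b in zip(pts, pts[1:])}
    return sorted(gaps)

for fib in (8, 13, 21, 34):
    print(fib, distinct_gaps(fib))  # two gap lengths, roughly phi**-k and phi**-(k+1)
```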
-
Reminds me of Weyl's Equidistribution theorem. en.wikipedia.org/wiki/Equidistribution_theorem – Tony Huynh Sep 2 2010 at 14:02
Yes, but this property is stronger than the sequence being uniformly distributed. – Robin Saunders Sep 2 2010 at 14:19
Do you mean a result like front.math.ucdavis.edu/0906.0045 ? – Helge Sep 2 2010 at 15:34
Robin, your question is unclear. I think the "stronger property" you attribute to multiples of $\phi$ has to do with uniformity of spacing between adjacent points (after ordering), but something like this happens for any irrational (look for The Three Gap Theorem). And you're interested in higher dimensions, but how do you propose to order a bunch of points up there? The usual way to measure how even a distribution is is via discrepancy, and there is a lot of work on low discrepancy sequences in high dimension, and the Kuiper-Niederreiter book will get you started. – Gerry Myerson Sep 2 2010 at 23:13
That's not what I meant, but, yes, those are meaningful questions. I think the Niederreiter paper looks at dispersion for $n\theta$ and finds it minimized for the golden mean. I don't know if he looks at dispersion for u. d. sequences, and I don't know if he looks at the minmax problem. But you now have several papers you can look at to see what results and what ideas you can try to apply to your questions. The Schilling paper is on Schilling's website. – Gerry Myerson Sep 8 2010 at 5:52
## 3 Answers
It should be possible to do the same with a carefully chosen tuple of rationally independent numbers $(\varphi_1,\ldots,\varphi_d)$, no? But the precise equidistribution you want is not very clear to me.
Note that the sequence that is conjectured to be the most evenly distributed on $(0,1)$ is the dyadic one : $1/2, 1/4, 3/4, 1/8, 5/8, 3/8, 7/8,\ldots$, see Kuipers & Niederreiter Uniform distribution of sequences (which might discuss the higher-dimensional problem as well).
-
Yes, that book certainly does discuss the higher-dimensional problem, in detail, and even though it was published over 35 years ago it's still a good place to start learning about these things. But the original question is unclear to me.... – Gerry Myerson Sep 2 2010 at 22:58
One way to interpret this result is that it comes from the periodicity of the continued fraction expansion of $\phi = 1 + \frac{1}{1+\frac{1}{\cdots}}$ in the sense that it has no "better-than-expected" rational convergents, whereas for example with $\pi = (3;7,15,1,292,\cdots)$ we may stop at the 292 to get a good approximation (355/113 I believe).
So one may look at numbers of the form $x_n = (n;n,n,n,\cdots)$, which satisfy $x_n^2 -nx_n - 1 = 0$, or $$x_n = \frac{n+\sqrt{n^2+4}}{2}.$$ So a few good sequences may be for example $\left\{nx_2\right\}$ where $x_2 = 1+\sqrt{2}$, the so-called "silver ratio", or the same for $x_3 = (3+\sqrt{13})/2.$
EDIT: These are in some cases pretty good approximations; one way to measure the "well-distribution" of such a sequence is to take the fractional parts of $m x$ for $m = 1, \cdots, M$, sort them, compute the maximum difference between consecutive terms, and multiply this by $M$ to get some number in the range $[1,M)$. This can be accomplished in one line in Mathematica as follows:
```
WellDistribution[x_, M_] :=
  Max[Differences[Sort[Table[N[FractionalPart[x*m]], {m, 1, M}]]]]*M;
```
Some interesting things happen with this when we vary $n$; perhaps I'll make a new post out of it.
-
You are computing something very closely related to the "discrepancy" of the sequence. You'll find much information on discrepancy in the Kuiper-Niederreiter book and elsewhere. – Gerry Myerson Sep 3 2010 at 0:02
How evenly a sequence is distributed is often measured by its $\it discrepancy$. Let $u(1),u(2),\dots$ be a sequence of numbers in $[0,1)$. We define the discrepancy $D(n)$ of the first $n$ terms of the sequence by $nD(n)=\sup\vert A(a;n)-na\vert$, where $A(a;n)$ counts the number of terms with $k\le n$ and $u(k)\lt a$, and the supremum is over all $a$ with $0\lt a\le1$. Technically, what I've just defined is the $\it star-discrepancy$, but the distinction need not detain us here.
Sequences are known with $nD(n)=O(\log n)$. This is best possible, in the sense that there is an absolute constant $c$ such that for every sequence we have $nD(n)\gt c\log n$ for infinitely many $n$.
Now for higher dimensions. Let $\bf x$ be a point in $I=[0,1]^d$. Let $B({\bf x})$ be the box (that is, parallelipiped aligned with the coordinate axes) with diagonally opposite corners at the origin and $\bf x$. Let $V({\bf x})$ be the volume of this box (so it's just the product of the components of $\bf x$). Given a sequence ${\bf u}(1),{\bf u}(2),\dots$ of points in $[0,1)^d$, define the discrepancy $D(n)$ of the first $n$ terms of the sequence by $nD(n)=\sup\vert A({\bf x};n)-nV({\bf x})\vert$, where $A({\bf x};n)$ counts the number of terms with $k\le n$ and ${\bf u}(k)$ in $B({\bf x})$, and the supremum is over all $\bf x$ in $I$. Various and sundry results are known about upper and lower bounds for $nD(n)$. As mentioned elsewhere, the Kuipers (which I have incorrectly given as Kuiper in some of the comments) and Niederreiter book is a good place to start. The website http://www-rocq.inria.fr/mathfi/Premia/free-version/doc/premia-doc/pdf_html/mc_quasi_doc/index.html discusses some low discrepancy sequences.
-
http://physics.stackexchange.com/questions/26798/calculations-of-apparent-magnitude?answertab=votes | # Calculations of apparent magnitude
I was attempting to do some calculations of apparent magnitude to help solidify my understanding of the topic, but have been running into some confusion.
According to Wikipedia, the apparent magnitude can be given as:
$m_x = -2.5\log_{10}(F_x/F^0_x)$
where $F_x$ is the observed flux and $F^0_x$ is a reference flux (in other words, this equation provides the difference of apparent magnitude between two observed values). Also, this is assuming that the same wavelength band is used in both flux measurements.
Flux, in turn, can be calculated as:
$F = \frac{L}{A}$
where $L$ is the star's luminosity and $A$ is the area over which that luminosity is spread. Since stars act as point sources, this can be simplified to:
$F = \frac{L}{4\pi r^2}$
where $r$ is the distance to the star.
Since, historically, Vega has been used as the reference zero-point (having an apparent magnitude around 0.03), I tried doing a simple calculation to find out the apparent magnitude of Fomalhaut using the values for luminosity and distance given in Wikipedia for both of them.
First, the flux of Vega:
$F_{Vega} = \frac{37\,L_\odot}{4\pi (25.3\,ly)^2}$
$F_{Vega} = 4.5999\times 10^{-3}\,L_\odot/ly^2$
Next, the flux of Fomalhaut:
$F_{Fomalhaut} = \frac{17.66\,L_\odot}{4\pi (25\,ly)^2}$
$F_{Fomalhaut} = 2.2485\times 10^{-3}\,L_\odot/ly^2$
Now, to calculate the apparent magnitude:
$m_{Fomalhaut} = -2.5\log_{10}(\frac{2.2485\times 10^{-3}\,L_\odot/ly^2}{4.5999\times 10^{-3}\,L_\odot/ly^2})$
$m_{Fomalhaut} = 0.7777$
Huh?? Fomalhaut's apparent magnitude is supposed to be 1.16. Even correcting for Vega's offset of 0.03, we still come up with 0.8077. Why are the calculations failing? I don't think I've made a mistake in the mathematics. Am I using the wrong values?
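For what it is worth, here is a short Python script that just reproduces the arithmetic above (same input values, so it inherits whatever issue the answers point out):

```python
import math

def flux(L, r):
    """Flux from luminosity L spread over a sphere of radius r (units cancel in the ratio)."""
    return L / (4 * math.pi * r ** 2)

F_vega = flux(37.0, 25.3)         # luminosity in L_sun, distance in light-years
F_fomalhaut = flux(17.66, 25.0)

m_fomalhaut = -2.5 * math.log10(F_fomalhaut / F_vega)
print(m_fomalhaut)                # about 0.78, not the catalogued 1.16
```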
-
## 3 Answers
Thanks for asking this question. It is something we all assume to be obviously trivial and often skip. Your question made me think and I wasn't sure whether the values for luminosities listed in Wikipedia were in the optical range, or the bolometric luminosity i.e. the luminosity over all wavelengths.
A little bit of googling led me to this page, where this question seems to have been discussed well and also resolved.
-
I think the real problem will be in the errors in measurements. Recall that the error in a function of $n$ variables, $f(x_1, x_2, ..., x_n)$ with associated errors for each variable $\sigma_1$, $\sigma_2$, ... , $\sigma_n$ is given by $$\sigma_{f}=\sqrt{\sum_i^n \left(\sigma_i\frac{df(x_1, x_2, ..., x_n)}{dx_i}\right)^2}$$
In your case, the function is $F(L, r)=\frac{L}{4\pi r^2}$, so the error propagation formula is
$$\sigma_{F}=\sqrt{\left(\frac{\sigma_{L}}{4 \pi r^2}\right)^2 + \left(\frac{-2L\sigma_{r}}{4 \pi r^3}\right)^2}$$
For Vega, $\sigma_{L}=3L_{\odot}$, $\sigma_r=0.1LY$. That gives $$\sigma_{F_{Vega}}=3.77\times10^{-4}$$
For Fomalhaut, it was a bit trickier to track down since wikipedia doesn't give the error in luminosity, but in the article it's given: $\sigma_{L}=0.82L_{\odot}$, $\sigma_r=0.1LY$. That gives $$\sigma_{F_{Fomalhaut}}=1.05\times10^{-4}$$
Using the standard error propagation formula again, the error associated with the apparent magnitude is
$$\sigma_{m}=\sqrt{\left(\frac{-2.5\sigma_{F_{Fomalhaut}}}{F_{Fomalhaut}}\right)^2 + \left(\frac{2.5\sigma_{F_{Vega}}}{F_{Vega}}\right)^2}$$
If you plug in all the values used above, I get $$\sigma_{m}=0.237$$ which means that your estimate is off by only $1.5\sigma$. That's pretty good, considering that these luminosities are probably bolometric rather than the visual band alone, yet your apparent magnitude is only in the V-band which makes it only a fraction of all the light from the star.
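If it helps, the flux part of this propagation is easy to script. The sketch below (variable names mine) applies the general formula with dF/dL and dF/dr to the sigma values quoted above, and gives numbers close to those in this answer.

```python
import math

def flux_sigma(L, r, sigma_L, sigma_r):
    """Propagate sigma_L and sigma_r through F = L / (4 pi r^2)."""
    dF_dL = 1.0 / (4 * math.pi * r ** 2)
    dF_dr = -2.0 * L / (4 * math.pi * r ** 3)
    return math.hypot(sigma_L * dF_dL, sigma_r * dF_dr)

print(flux_sigma(37.0, 25.3, 3.0, 0.1))    # Vega
print(flux_sigma(17.66, 25.0, 0.82, 0.1))  # Fomalhaut
```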
-
Good point on the bolometric luminosities - the B-V index of Vega is indeed quite low (listed as 0.00 on Wikipedia, actually). – voithos Jun 13 '11 at 17:21
The calculations look correct. I didn't check your input values, but I think that the error arises because you're overestimating the accuracy of the luminosity and distance values for Vega and Fomalhaut. Both of those things are notoriously hard to measure in astronomy and sometimes require the specification of caveats. Apparent magnitude in some waveband is relatively easy to measure since it's just a relative brightness measurement. The luminosities that you've quoted may or may not be bolometric (the sum of all wavebands) and there's also the issue of whether or not the luminosities have been corrected for extinction (aka reddening) along the line of sight.
-
http://matthewkahle.wordpress.com/2010/11/05/ | # Packing tetrahedra
Last spring I saw a great colloquium talk on packing regular tetrahedra in space by Jeffrey Lagarias. He pointed out that in some sense the problem goes back to Aristotle, who apparently claimed that they tile space. Since Aristotle was thought to be infallible, this was repeated throughout the ages until someone (maybe Minkowski?) noticed that they actually don’t.
John Conway and Sal Torquato considered various quantitative questions about packing, tiling, and covering, and in particular asked about the densest packing of tetrahedra in space. They optimized over a very special kind of periodic packing, and in the densest packing they found, the tetrahedra take up about 72% of space.
Compare this to the densest packing of spheres in space, which take up about 74%. If Conway and Torquato’s example was actually the densest packing of tetrahedra, it would be a counterexample to Ulam’s conjecture that the sphere is the worst case scenario for packing.
But a series of papers improving the bound followed, and as of early 2010 the record is held by Chen, Engel, and Glotzer with a packing fraction of 85.63%.
I want to advertise two attractive open problems related to this.
(1) Good upper bounds on tetrahedron packing.
At the time of the colloquium talk I saw several months ago, it seemed that despite a whole host of papers improving the lower bound on tetrahedron packing, there was no upper bound in the literature. Since then Gravel, Elser, and Kallus posted a paper on the arXiv which gives an upper bound. This is very cool, but the upper bound on density they give is something like $1- 2.6 \times 10^{-25}$, so there is still a lot of room for improvement.
(2) Packing tetrahedra in a sphere.
As far as I know, even the following problem is open. Let’s make our lives easier by discretizing the problem and we simply ask how many tetrahedra we can pack in a sphere. Okay, let’s make it even easier: the edge length of each of the tetrahedra is the same as the radius of the sphere. Even easier: every one of the tetrahedra has to have one corner at the center of the sphere. Now how many tetrahedra can you pack in the sphere?
It is fairly clear that you can get 20 tetrahedra in the sphere, since the edge length of the icosahedron is just slightly longer than the radius of its circumscribed sphere. By comparing the volume of the regular tetrahedron to the volume of the sphere, we get a trivial upper bound of 35 tetrahedra. But by comparing surface area instead, we get an upper bound of 22 tetrahedra.
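Here is a quick back-of-the-envelope check of those two bounds (my own sketch, not from the original post); the second bound is computed as a solid-angle comparison, using the Van Oosterom-Strackee formula for the vertex solid angle of a regular tetrahedron.

```python
import math

R = 1.0
sphere_volume = 4 / 3 * math.pi * R**3
tet_volume = R**3 / (6 * math.sqrt(2))            # regular tetrahedron with edge R
print(math.floor(sphere_volume / tet_volume))     # 35 -- the volume bound

# Each tetrahedron with a corner at the centre uses up at least its vertex solid
# angle; three unit vectors along the edges meeting there are pairwise at 60 degrees.
a = (1.0, 0.0, 0.0)
b = (0.5, math.sqrt(3) / 2, 0.0)
c = (0.5, math.sqrt(3) / 6, math.sqrt(2.0 / 3.0))
triple = (a[0] * (b[1] * c[2] - b[2] * c[1])
          - a[1] * (b[0] * c[2] - b[2] * c[0])
          + a[2] * (b[0] * c[1] - b[1] * c[0]))
dots = 1 + 0.5 + 0.5 + 0.5                        # 1 + a.b + b.c + c.a
solid_angle = 2 * math.atan2(abs(triple), dots)   # ~0.551 steradians
print(math.floor(4 * math.pi / solid_angle))      # 22 -- the surface-area bound
```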
There is apparently a folklore conjecture that 20 tetrahedra is the right answer, so proving this comes down to ruling out 21 or 22. To rule out 21 seems like a nonlinear optimization problem in some 63-dimensional space.
I’d guess that this is within the realm of computation if someone made some clever reductions. Oleg Musin settled the question of the kissing number in 4-dimensional space in 2003. To rule out kissing number of 25 is essentially optimizing some function over a 75-dimensional space. This sounds a little bit daunting, but it is apparently much easier than Thomas Hales’s proof of the Kepler conjecture. (For a nice survey of this work, see this article by Pfender and Ziegler.)
http://www.physicsforums.com/showthread.php?t=311164&page=3

Physics Forums
## why doesn't the electron fall into the nucleus!?
Quote by feynmann Feynman never say that his explanation is "semiclassical fairytales", I believe he intended to explain it in terms of "quantum mechanics", otherwise, He would say so. You are probably the first person to say that it's "semiclassical fairytales". I guess what you are saying is since the position of the electron is probabilistic, so they do not have potential energy and kinetic energy. I don't think that's true. The Schrodinger Equation itself is based on potential and kinetic energy. Yukawa invoked the uncertainty principle to predict that the pions are the carriers of strong force and that was confirmed by experiment. He got Nobel prize for this work. I guess his work is also "semiclassical fairytales" to you.
He didn't explain this in QM terms, since the properties are found by solving the Schrödinger equation, not by applying the HUP in this way. The HUP is just a statistical statement about observables.
Yes, but one cannot say that when the electron is closer to the nucleus it has more KE, etc., since such statements about the electron are meaningless in QM.
No, Yukawa did not use the uncertainty principle; he did advanced quantum field theoretical calculations, which I actually went through this weekend...
Quote by feynmann The argument that the ground state should be interpreted as a superposition of the electron being in all possible position and the position of the electron is probabilistic fails to explain why hydrogen atoms have definite size. It's size is about the Bohr's radius.
i) You should not in one sentence disprove QM and in another praise Feynman and Yukawa.
ii) The atom does NOT have a definite size; its "size" is a statistical statement, like the root mean square radius and mean radius etc., only "mean", no "definite".
Can you stop with this circus? It is clear to me that you have never studied QM from a textbook used in college...
Quote by thoughtgaze yes I know this question must have been asked before right? tried looking for it on here but I couldn't find it. the only answer I have gotten to this was just that particles are quantized... yes yes, fine... but WHY? for instance, why does an electron and a positron collide so readily? but an electron and a proton don't? someone explain please and thanks. P.S. I have heard an argument that deals with the uncertainty relation but it doesn't make any distinction between a proton and a positron... i don't think... any HELP!?
why doesnt the earth fall into the sun?
Quote by granpa why doesnt the earth fall into the sun?
not a perfect analogy, since a radially accelerating (non-quantum) CHARGE will emit radiation and hence lose energy and decrease its radial distance.
Take it back to basics and look at what an electron actually is. The electron has a rest mass and can display a relativistic mass increase. This fact alone appeals to intuition and leads us to imagine the electron as being a particle. The particle model may be over-simplistic but it still has a tendency to persist in the mind, even, I suspect, occasionally in the minds of some of the most able practitioners of QM. The particle model has its successes but also has its limitations, as evidenced by the discussion here. Let's stick with the particle model for a moment and pick up on granpa's "non perfect" but nevertheless relevant analogy, "why doesn't the earth fall into the sun?", and let us now apply this to the electron and ask again "why does the electron not fall into the nucleus". Let the nucleus and the electron be at a position of momentary rest and about to approach under the influence of the Coulomb force. Using just the particle model we might predict that the electron and nucleus eventually collide and make actual contact. Good, but this is not what we observe, the reason being that the particle model breaks down and we have to resort to the more powerful wave model and QM. QM, QED and the like may not appeal to intuition but nevertheless they work. Whether or not there will come a greater reconciliation between particle and wave remains to be seen.
Quote by malawi_glenn Yes, but one can not say that when the electron is closer to the nucleus, it has more KE, etc, since such statements of the electron is meaningless in QM.
I don't have a problem with saying that. The kinetic-energy operator then, operating on the electronic wave function, increases as $$r \rightarrow 0$$. If it didn't, the wavefunction would diverge, since the potential diverges there.
Now, unlike a classical particle, knowing the kinetic energy at any point in space requires knowing its kinetic energy at every point in space. And the electron doesn't have a definite position, and so on and so forth.
But as long as you recognize that you're not dealing with a classical particles and so on, I don't see a problem. I doubt most chemical physicists would take issue with a statement like "electrons move faster near the nucleus", although they'd no doubt prefer the more relativistically-correct "have higher momentum" to "move faster".
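To make that statement concrete, here is a small sketch of my own (not from the thread) for the hydrogen 1s state in atomic units: the "local" kinetic term grows like 1/r near the nucleus and cancels the Coulomb divergence, leaving a constant local energy.

```python
import sympy as sp

r = sp.symbols('r', positive=True)
psi = sp.exp(-r)                                        # unnormalised 1s orbital, a0 = 1
laplacian = sp.diff(r**2 * sp.diff(psi, r), r) / r**2   # radial part of the Laplacian
local_kinetic = sp.simplify(-sp.Rational(1, 2) * laplacian / psi)
local_energy = sp.simplify(local_kinetic - 1 / r)       # add the Coulomb potential -1/r

print(local_kinetic)   # equivalent to 1/r - 1/2: unbounded as r -> 0
print(local_energy)    # -1/2, the exact ground-state energy in atomic units
```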
The interpretation given by Bohr is that the electron must follow an orbit with an integer number of periods; that is, only periodic orbits are allowed.
Quote by Halcyon-on The interpretation given by Bohr is that the electron must follow orbit with an integer number of periods, that is only periodic orbits are allowed.
No, according to Bohr-Sommerfeld theory (a.k.a. "Old Quantum Theory"), the integral of P dQ over a periodic orbit, where P is the conjugate momentum to Q, is a multiple of Planck's constant. So, if you take Q to be the angle, then the conjugate momentum is the angular momentum. Then, since for a circular orbit the angular momentum is conserved, we have:
L * 2pi = n h, which gives
L = n h/(2pi) = n hbar.
If you then consider a classical orbit and then only allow angular momenta that satisfy the above quantization rule, you find the equations for the Bohr orbits.
Since this is already too complicated to explain in dumbed-down high school physics classes, what they do is use the de Broglie matter wave hypothesis (which came after Bohr-Sommerfeld theory) and then derive the result obtained by Bohr (who first simply postulated that L is quantized and later, with Sommerfeld, came up with Old Quantum Theory).
But then it is incorrect to say that this is the derivation presented by Bohr. So, I guess that Warren Siegel was too mild in his criticism of introductory physics textbooks, as these textbooks apparently don't even get the history correct.
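As a numerical aside (my own sketch, not part of the original post), plugging L = n*hbar into the classical circular-orbit condition m v^2/r = e^2/(4 pi eps0 r^2) reproduces the familiar Bohr radius and ground-state energy:

```python
import math

hbar = 1.054571817e-34     # J s
m_e  = 9.1093837015e-31    # kg
e    = 1.602176634e-19     # C
eps0 = 8.8541878128e-12    # F/m

a0 = 4 * math.pi * eps0 * hbar**2 / (m_e * e**2)            # r_1 from L = hbar
E1 = -m_e * e**4 / (2 * (4 * math.pi * eps0)**2 * hbar**2)  # ground-state energy

print(f"Bohr radius a0 = {a0:.3e} m")            # ~5.29e-11 m
print(f"ground-state energy = {E1 / e:.2f} eV")  # ~ -13.6 eV
```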
Good to know! I have many other items to add to Siegel's list! Concerning the atomic orbits, the Bohr-Sommerfeld condition can be formulated in energy-time rather than in momentum-space; in fact the waves are classical and on-shell. The integral along an orbit of duration t simplifies because the energy is constant. The resulting condition is E t = n h, and in this way you avoid the assumption of circular orbits. Moreover, from de Broglie E = h v (v = 1/T is the frequency and T the period of the n-th orbital) you get v t = n -> t = n T, and finally you see that the duration of the orbit is such that it contains an integer number of periods. So the fundamental idea came from de Broglie, but there is an additional periodicity condition. Here n is the principal quantum number. The quantization of the angular momenta is a consequence of the periodicity coming from the spherical symmetry of the problem (theta = theta + 2 pi), whereas the spin can be interpreted as antiperiodicity of the electron wave. In this semi-classical way many other non-relativistic quantum problems can be solved. (examples http://people.ccmr.cornell.edu/~much...feld/notes.pdf )
Quote by Halcyon-on In this semi-classical way many other non-relativistic quantum problems can be solved.
Well, semiclassical models have their uses, but they fail in general pretty miserably at describing atoms and molecules, with the sole exception of the Bohr model, really.
Perhaps you'd be interested in reading about some recent toyings with a semiclassical colinear helium model. They 'cheat' in a few ways (as does the Bohr model), but it's surprisingly good given the model.
But in general, semiclassical attempts at atoms/molecules are a dead end. E.g. the Thomas-Fermi model cannot form stable molecules. I doubt any semiclassical theory can, given the importance of exchange energy in chemical bonding.
Quote by malawi_glenn i) you should not in one sentence disprove QM and in another one praise feynman and yukawa ii) The atoms has NOT definite size, its "size" is a statistical statement. Like the root mean radius square and mean radius etc, only "mean" no "definite". can you stop with this circus? It is clear to me that you have never studied QM from a textbook used in college...
No one is saying in one sentence disprove QM and in another one praise feynman and yukawa. What was disproved is "Copenhagen Interpretation", not Quantum mechanics.
The probability of wavefunction is "Copenhagen Interpretation", Not "Quantum Mechanics" itself. There is No probability of wavefunction in Bohm's version of Quantum Mechanics
It's clear your knowledge of quantum mechanics is full of nonsense, since you don't even know the difference between quantum mechanics and "Copenhagen Interpretation", what a "Science Advisor". Would you stop calling yourself "Science Advisor"? I certainly don't need your "advice".
Quote by feynmann The probability of wavefunction is "Copenhagen Interpretation", Not "Quantum Mechanics" itself. There is No probability of wavefunction in Bohm's version of Quantum Mechanics
Wrong. $$|\psi(\mathbf{x},t)|^2$$ is a particle's spatial probability density in quantum mechanics. Regardless of the interpretation.
It's clear your knowledge of quantum mechanics is no better than high-school level, since you don't even know the difference between quantum mechanics and "Copenhagen Interpretation". What a Science Advisor
Maybe you should actually study the basis of the Bohm interpretation of which you speak before making such claims, in particular since I already explained this once.
The Copenhagen interpretation amounts to the claim that that probability is truly random, i.e. indeterministic, rather than a result of incomplete knowledge of a deterministic system, i.e. hidden variables, such as in the Bohm interpretation.
If you have some issue with how a deterministic system can be described in a probabilistic/statistical way, then your misunderstanding is with statistical mechanics, not QM.
Quote by alxm Wrong. $$|\psi(\mathbf{x},t)|^2$$ is a particle's spatial probability density in quantum mechanics. Regardless of the interpretation.
But when using the probability wave as the answer to the question of the TS, it seems to me that one needs the Copenhagen interpretation of this wave, namely that the particle really is 'spread out' in its wave (in superposition).
When one imagines the electron being like a fly circling around in a room, one can calculate a probability wave for the fly being somewhere, but there is a great chance it will finally be caught by the sticky strip in the middle.
This is similar to the question: "Why doesn't the Earth fall into the Sun?". In both, the answer is that the revolving object has enough inertia to prevent it. Also, there is a confusion over quantum physics. While we hold Heisenberg's uncertainty principle to be true, this does not mean we do not know what they do in there. Think of the uncertainty principle as a veil over a fitting room: we know what goes on inside the fitting room, but we cannot see it without violating a law (physics or US).
Quantum physics replaced Bohr-Sommerfeld theory in the 1920s. Since then many physicists such as Pauli, Heisenberg, Dirac... have given up explaining the motion of an electron concretely, because by equating the angular momentum of the spinning sphere of the electron to 1/2 h-bar, the sphere's speed comes out at about one hundred times the speed of light. The orbital angular momentum of the electron in the ground state of the hydrogen atom is zero, so the Coulomb potential is infinitely negative when the electron is close to the nucleus.
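A rough version of that estimate (my own sketch, not from the post), treating the electron as a classical solid sphere whose radius is the classical electron radius:

```python
import math

hbar = 1.054571817e-34     # J s
m_e  = 9.1093837015e-31    # kg
r_e  = 2.8179403262e-15    # m, classical electron radius
c    = 2.99792458e8        # m/s

# L = I*omega = (2/5) m r^2 * (v/r) = hbar/2  =>  v = 5*hbar / (4*m*r)
v_eq = 5 * hbar / (4 * m_e * r_e)
print(v_eq / c)   # ~170; the exact factor depends on the assumed mass distribution
```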
Quote by Modman This is similar to the question: "Why doesn't the Earth fall into the Sun?". In both, the answer is that the revolving object has enough inertia to prevent it. Also, there is a confusion over quantum physics. While we hold Heisenberg's uncertainty principle to be true, this does not mean we do not know what they do in there. Think of the uncertainty principle as a veil over a fitting room: we know what goes on inside the fitting room, but we cannot see it without violating a law (physics or US).
I don't know why people keep using this reasoning. Clearly the earth/sun analogy does not work at this level ELSE IT WOULD DEFINITELY FALL INTO THE NUCLEUS!!!!
Quote by feynmann No one is saying in one sentence disprove QM and in another one praise feynman and yukawa. What was disproved is "Copenhagen Interpretation", not Quantum mechanics. The probability of wavefunction is "Copenhagen Interpretation", Not "Quantum Mechanics" itself. There is No probability of wavefunction in Bohm's version of Quantum Mechanics It's clear your knowledge of quantum mechanics is full of nonsense, since you don't even know the difference between quantum mechanics and "Copenhagen Interpretation", what a "Science Advisor". Would you stop calling yourself "Science Advisor"? I certainly don't need your "advice".
I perfectly well know the difference; it is customary to default to the "Copenhagen interpretation", since that is the standard in physics today. As we had in another thread, "only amateur worries about different interpretations". So QM and the Copenhagen interpretation of QM are interchangeable to many (almost all) physicists.
So let us go back to the atom again: it has no definite size, and this is an experimental fact as well. If you claim that the atom has a definite size, you should have it backed up with articles. The claim was "hydrogen atoms has definite size. It's size is about the Bohr's radius", now prove it. Also, you used the word "superposition" for a continuous variable, position; I don't believe that makes sense in real analysis...
So let us go back to Yukawa again. You said that Yukawa used the Heisenberg uncertainty relation to show that the strong force is mediated by pions. That is an insult to Yukawa; we actually went through Yukawa's theory in my quantum field class recently, and there is no Heisenberg principle used whatsoever, just pure and nice Quantum Field Theory. Many popular science books use Heisenberg to explain a lot, even Yukawa's theory, so one will, as you just proved, get the impression that it is used. But now you have encountered a Science Advisor who knows the things we are discussing in detail, since he has worked with these things, not just read them on Wikipedia.
One also uses the Heisenberg principle in discussions about Feynman diagrams and virtual particles, but these "explanations" are never used in REAL textbooks. In REAL textbooks, one presents the REAL arguments. The reason why the Heisenberg principle is so applicable to explanations of quantum phenomena is that it is really easy to use; it is really "the ultimate probabilistic" entity. It is almost like the good ol' "God of the gaps": whenever one couldn't find an explanation due to lack of knowledge, one "blamed" God. Today, people who do not know quantum mechanics use the Heisenberg uncertainty principle for their explanations...
http://simple.wikipedia.org/wiki/Ramanujan_prime

# Ramanujan prime
In mathematics, a Ramanujan prime is a prime number that satisfies a result proven by Srinivasa Ramanujan. It relates to the prime counting function.
## Origins and definition
In 1919, Ramanujan published a new proof of Bertrand's postulate (which had already been proven by Pafnuty Chebyshev).
Ramanujan's result at the end of the paper was:
$\pi(x) - \pi(x/2) \ge 1, 2, 3, 4, 5, \ldots$ for all $x \ge 2, 11, 17, 29, 41, \ldots$ respectively (sequence A104272 in OEIS)
where $\pi(x)$ is the prime counting function. The prime counting function is the number of primes less than or equal to x.
The numbers 2, 11, 17, 29, 41 are the first few Ramanujan primes. In other words:
Ramanujan primes are the integers $R_n$ that are the smallest to satisfy the condition
$\pi(x) - \pi(x/2) \ge n$ for all $x \ge R_n$
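A brute-force sketch of this definition (my own, not from the article): since $\pi(x) - \pi(x/2)$ is constant between consecutive integers, it is enough to check $\pi(k) - \pi(\lfloor k/2 \rfloor)$ at integers $k$.

```python
N = 1000  # generous bound for the first few Ramanujan primes
sieve = [True] * (N + 1)
sieve[0] = sieve[1] = False
for p in range(2, int(N ** 0.5) + 1):
    if sieve[p]:
        for q in range(p * p, N + 1, p):
            sieve[q] = False

pi = [0] * (N + 1)                     # pi[k] = number of primes <= k
for k in range(2, N + 1):
    pi[k] = pi[k - 1] + (1 if sieve[k] else 0)

def ramanujan_prime(n):
    # smallest R such that pi(k) - pi(k//2) >= n for every k >= R (within the bound)
    last_failure = max(k for k in range(1, N + 1) if pi[k] - pi[k // 2] < n)
    return last_failure + 1

print([ramanujan_prime(n) for n in range(1, 6)])   # [2, 11, 17, 29, 41]
```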
http://mathhelpforum.com/calculus/175577-implicit-differentiation-02-a.html

# Thread:
1. ## implicit differentiation...02
problem: $1+x = sin(xy^2)$
solution attempt: differentiating both sides, I have
$1 = -cosx \cdot\frac{d}{dx}[xy^2]$
$1 = -cosx(y^2+2xyy')$
dividing by $-cosx$ on both sides, I have
$-\frac{1}{cosx} =y^2+2xyy'$
dividing by $2xy$ on both sides, I have
$-\frac{2xy}{cosx} = \frac{y^2}{2xy} + y'$
subtracting $\frac{y^2}{2xy}$ from both sides, I have
$y' = -\frac{2xy}{cosx}-\frac{y^2}{2xy}$
then,
$y' = -\frac{y^2(4x^2 - cosx)}{2xycosx}$
2. You differentiated the right side wrong.
$\frac{d}{dx}sin(xy^2)=cos(xy^2)\frac{d}{dx}(xy^2)= (y^2+2xyy')cos(xy^2)$
Remember, when you use the chain rule, you have to keep whatever is in the parentheses.
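For completeness, here is my own continuation of the corrected derivative (not part of the original exchange), solved for $y'$:

$1 = \cos(xy^2)\,(y^2+2xyy')$

$y^2 + 2xyy' = \sec(xy^2)$

$y' = \dfrac{\sec(xy^2) - y^2}{2xy}$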
http://math.stackexchange.com/questions/187774/real-numbers-expressing-in-terms-of-series/187790

# Real Numbers expressing in terms of series
From the literature, I have found the following: Any real number A (say) can be expressed as
$A = a_1 + (1/a_1) + (1/a_2) + (1/a_3) +\ldots$ Where $a_1\ge2$ and the recurrence relation $a_{i+1}\ge a_i(a_i - 1) + 1$ for $i \ge 1$.
I could not understand this statement for the following reason:
I fixed $a_1 = 2$; then $A = 2 + 1/2 + (1/a_2) + (1/a_3) + \dots$ ---------(i)
We can find $a_2$ by the recurrence relation: $a_2 \ge a_1(a_1 -1) + 1 = 2(2-1) + 1 = 3$
i.e., $a_2\ge 3$ and $a_3\ge4$ and so on… Now, by our (i), $A = 2 + 1/2 + 1/3 + 1/4 + \dots$
How is any real number A equal to (i)?
I could not understand this statement.
Also, the same A can be expressible in other two ways:
$A = a_0 + (1/a_1) + (1/a_1)(1/a_2) + (1/a_1)(1/a_2)(1/a_3)+ \ldots$ where $a_1\ge2$ and the recurrence relation $a_{i+1}\ge a_i$ holds for $i \ge1$.
Also, $$A = a_0 + (1/a_1) + (1/(a_1-1)) (1/a_1) (1/a_2) \\+ (1/(a_1-1)) (1/(a_2-1)) (1/a_1)(1/a_2)(1/a_3) +\ldots$$
Briefly, explain whether I am wrong in my example or not. Also, how can one deduce the other expressions for A from the first one? I am awaiting your explanations and proofs.
Edit: Is it applicable for any GIVEN real number, instead of any real number? If yes, how should one proceed to complete the proof? Can we deduce the other representations from one representation?
Could you tell us where "in the literature" you found this? It might help us if we could see where this comes from. – John Wordsworth Aug 28 '12 at 6:31
If we are to take this at face value, real numbers $\le2$ don't exist. Curious. – Harald Hanche-Olsen Aug 28 '12 at 6:52
Yes, but which book does it come from? Who is the author? – John Wordsworth Aug 28 '12 at 7:21
As you have it there is no way of expressing zero (or any negative number, or indeed any real number less than 2.5) ... if you had $a_0$ the situation would be rather different, so you see it does matter what the statement is, and no-one will be able to answer unless you clarify the various matters noted in the comments. – Mark Bennet Aug 28 '12 at 7:39
@MarkBennet As a matter of fact, it is not so difficult to guess what would be a correct formulation of the first statement (and to prove it)... But I fully agree that the OP should first clarify (and maybe, o foolish hope, answer OldJohn's question about the source). – Did Aug 28 '12 at 7:56
## 1 Answer
I suspect that you are saying that real $A$ has a single representation as $$A = a_0 + (1/a_1) + (1/a_2) + (1/a_3) +\cdots$$ for integer $a_i$ where $a_1\ge2$ and the recurrence relation $a_{i+1}\ge a_i(a_i - 1) + 1$ for $i \ge 1$.
This is (at least for irrational $A$) often described as the Egyptian fraction representation: for example OEIS A001466 gives $$\pi = 3 + \frac{1}{8} + \frac{1}{61} + \frac{1}{5020} + \frac{1}{128541455}+\cdots .$$
To show it exists and is unique, consider the partial sum $A_i$ up to the $(1/a_i)$ term, where $a_0 = \lceil{A}\rceil - 1$ and $a_{i+1}=\left\lfloor{\frac{1}{A-A_i}}\right\rfloor +1$.
A proof will use the following points:
• $A_i \lt A \le A_{i-1} + \frac{1}{a_i - 1}$
• $\frac{1}{n-1}-\frac{1}{n} = \frac{1}{n(n-1)}$
• if $b_{j+1}= b_j(b_j - 1) + 1$ then $\sum_{j=k}^{\infty} \frac{1}{b_j} = \frac{1}{b_k -1}$
Your second expression can be handled in a similar way with suitable adjustments: for example the third bullet point becomes $\sum_{j=1}^{\infty} \left(\frac{1}{b_k}\right)^j = \frac{1}{b_k -1}$.
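A small sketch of the greedy recursion just described (my own, not from the answer), run with exact rational arithmetic on a 20-digit approximation of $\pi$:

```python
from fractions import Fraction
from math import ceil, floor

def egyptian_terms(A, n_terms):
    a0 = ceil(A) - 1
    terms = [a0]
    partial = Fraction(a0)
    for _ in range(n_terms):
        a_next = floor(1 / (A - partial)) + 1   # a_{i+1} = floor(1/(A - A_i)) + 1
        terms.append(a_next)
        partial += Fraction(1, a_next)
    return terms

pi_approx = Fraction(314159265358979323846, 10**20)   # pi to 20 decimal places
print(egyptian_terms(pi_approx, 4))   # [3, 8, 61, 5020, 128541455]
```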
HENRY Sir, you have added some good flavor to my post. Can you prove the my initial statement? I confused. please do not give hints. Prove the fist statement and then I will try for other two statements of A. – vidyaojal Aug 28 '12 at 8:51
Henry Sir, can you make proof of (i) by using your bullets? – vidyaojal Aug 28 '12 at 10:03
Sir, kindly look at the proof of first part. Becoz, I applied your bullets. But those bullets are not enough to complete the proofs. Please look. I am waiting for your solution. – vidyaojal Aug 28 '12 at 10:26
@vidyaojal You will understand better why this leads to a proof if you experiment with various values of $A$ and use $a_0 = \lceil{A}\rceil - 1$ and $a_{i+1}=\left\lfloor{\frac{1}{A-A_i}}\right\rfloor +1$, keeping track of $A-A_i$ and $\frac{1}{A-A_i}$ – Henry Aug 28 '12 at 12:50
http://www.physicsforums.com/showthread.php?p=754980

Physics Forums
## A proof of RH using quantum physics...:)
Why is my method not acceptable? From the integral equation above you could construct the potential V(x) by solving a non-linear equation for numerical values V(x_j), j=1,2,3,...,n, or for example we could use an existence theorem for integral equations so we can deduce that the potential exists... wouldn't that be acceptable? We have the system of equations for the potential in the form:
$$\sum_{j}K(n,V(x_{j}))=g(n)$$ now set n=1,2,3,...,k so we have the roots E_1, E_2, E_3, ..., E_k up to a finite number k (we know the roots of the Riemann function up to 10^{12} or even more).
I have a proof of RH; the only "inconsistency" according to you is that I have not proved that the potential exists... but here I am giving an example of how to obtain its numerical value, and with those numerical values we could "modelize" our potential...
The other option is to use the existence theorem of integral equations for the kernel K(n,V(x)) and show that this integral exists... a question of rigour: if you prove that you can calculate numerical values of the potential, wouldn't you have proved this potential exists?
Your method isn't acceptable because it doesn't prove anything. I can tell that by the way you haven't proven anything; bit of a giveaway. All I see is a load of: if I do this, then I might be able to do something, that might up to an error term do something else, but I can't prove any of it. That isn't a proof. Given the zeroes, I'm sure you can construct some differential operator as required; however, even if that were the case then you've not shown how one may determine that it is hermitian. To do that one would need to make V(x) a function of zeta. Zeta has not appeared at any point in your calculations. Since you are not assuming that the non-trivial zeroes lie on the critical line, you are somehow creating a hermitian operator from these values. Since zeta has not been mentioned at all in your working, the same logic applies to any complex numbers, and indeed one can conclude, according to your logic, that all complex numbers are in fact real. You see, I don't need to show that something cannot be done (since it might be doable) to show that a method given is nonsense, as yours is as best we can tell. Example: let S be the set of zeroes of the function sin(iz); create a potential according to our method, and from this I conclude that all zeroes of this must satisfy z=z*, i.e. are real, since the operator so created is hermitian. However, I know where the zeroes are, thank you, and they are certainly not at real numbers, they are at purely imaginary numbers. So, why doesn't your method apply here? I see no reason for it not to.
Credit to matt grime for his patient arguing with a wall!
And I further see no reason why your method picks out exactly the non-trivial zeroes and nothing else. But then the dependence on zeta has never been explained. So, if you want to make this method work: take g(z), some function from C to C. Explain how we may take a restricted set of its zeroes, call them T, and from T create a differential operator $$L=\partial^2_x + V_g$$ such that the spectrum is exactly T. Now specify the conditions on g and T that allow us to conclude L is hermitian, and hence that T must be a set of real numbers. You have not in any decipherable way done any of those things. I am perfectly willing to believe that it can be done, but I have not seen you do it in a way that convinces me at all.
Let there be perturbation theory (at first order in the energies)... $$E_{n}-E^{0}_{n}=\delta{E(n)}=<\phi|V|\phi>$$ (1) where E_{n} are the "energies", the roots of the Riemann Z(1/2+is), and E^0_{n} are the energies of the Hamiltonian H0=P^{2}/2m. Then (1) provides an integral equation for V; we could find a resolvent kernel R for this so: $$V(x)=\int_{-\infty}^{\infty}dnR(n,x)\delta{E(n)}$$ with the resolvent kernel: $$R=\sum_{m=0}^{\infty}b_{m}(K-cI)^{m}$$ which is the Taylor expansion of the operator K^{-1}, where K^{m} is the m-th iterated kernel, I is the identity operator and c is a real constant, so the series converges for ||K||<c. $$\delta{E(n)}=K[V(x)]$$ c is a constant so the series converges; the radius of convergence is given by the norm of the operator K, ||K||<c. We have proved that V exists at first order in perturbation theory... with K the kernel given by $$K(x,n)=|\phi(x,n)|^{2}$$ where phi are the eigenfunctions of the Hamiltonian H0 depending on n. I have applied the Neumann series to the operator to obtain the resolvent kernel...
I'll let Matt continue to point out the circularity of the argument being presented here and tackle it from a technical viewpoint, but will add one other little piece of information to the discussion, just to demonstrate the futility of this in a completely different way. Supposing this were a valid proof, then credit for it should go to other people who discovered it long ago. The connection between certain quantum systems and the RH is even well-known enough to be discussed at length in some of the recent popular books on the Riemann Hypothesis. So supposing this proof were correct, then it has already been demonstrated by numerous other people. But seeing as it's not a valid proof (which all these other people have recognized), there's not much to worry about. None of those people are going to claim you "stole their proof".
If they discovered the proof long ago... why did they not publish it? In a post above I have a proof that the potential exists, and I have even calculated it to first order in perturbation theory depending on $$\delta{E(n)}$$, and the E_{n} and E^0_{n} are known. Even more, if we call Z the inverse of the function $$\zeta(s)$$ then we could write: $$E_{n}=i(1/2-Z(0))$$ (I have only inverted the function, and I have used simple integral equation theory to prove the existence of the potential)... The problem with Matt (as happens with most math teachers) is that they have assumed certain conceptions in math, and if you are outside of these, you are nothing. I don't know what argument matt grime will now have to say my maths are wrong, but this is only an "approach to the potential" (it would be correct only to first order in perturbation theory; the whole series of values of energy is perhaps even divergent), and the WKB is also an approach, to say that we can choose some functions that are in the L^{2}(R) function space... $$\psi=Asen(S(x)/\hbar)$$ for example. Another question, as we are dealing with rigour and other things... can any one of you brilliant, smart, intelligent mathematicians prove the existence of infinitesimals? I'll put it even more easily: write (with a numerical value) an infinitesimal...
Hyperreal numbers have infinitesimals in them: http://mathforum.org/dr.math/faq/ana...yperreals.html http://en.wikipedia.org/wiki/Hyperreal_number But what has that got to do with this? Anyway, eljose, I am unsure whether the mathematics in your post makes any sense or not, it could well do, but I can clearly see it does not show that all non-trivial zeros s of $\zeta (s)$ satisfy $\Re (s) = 1/2$. And with all due respect, my 12 year old brother could see sentences in your proof that logically made no sense.
Hyperreal numbers include infinitesimals... but could you write "an" infinitesimal? I have shown that the potential (and calculated it too) for $$\zeta(1/2+is)$$ is real (the proof is that for any existing E_{n}, E*_{n}=E_{k} is also an energy; from this we deduce, using the expected value of the Hamiltonian, that V=V*, so V is real; as you can see this only happens with a=1/2, in the other cases there are complex energies of the form E*_{n}+(2a-1)i, but complex energies would come from a complex potential, so $$\zeta(a+is)=0$$ cannot have any real root except for a=1/2, which has all its roots real as the potential is real). Then I have also calculated (given an integral expression for) the potential up to first order in perturbation theory. I am checking my grammar and spelling to make it as clear as possible, zurtex; I am not offended.
Oh Jesus Christ and Holy Maria!!! What a thread, what a thread????!!!!! I can't spare my time for this post so I just refer to the last post on V.S.'s "proof" of Fermat.
Can I jump in just to point out that you can't prove a theorem in MATHEMATICS by using PHYSICS? The truth of a mathematics statement does not depend upon whether the physics statements are true or not.
Quote by eljose if they discovered the proof long ago...why did they not publish it?
Precisely for the reasons I said above: It's not a proof! People have made the observation you think you are making even more concretely but they all acknowledge that it is not a proof of anything. So what we are saying is that even if you turn this into a coherent statement, it is something that others have already said.
Quote by eljose ...in a post above i have proof that the potential exist
You don't need to prove that there is a potential that relates to the properties of the zeta function. That has already been done. It's just that it doesn't have anything to do with a proof.
Quote by eljose and even calculated it to first order in perturbation theory depending on $$\delta{E(n)}$$ and the E_{n} and E^0_{n} are known even more if we call Z the inverse of the function $$\zeta(s)$$ then we could write: $$E_{n}=i(1/2-Z(0))$$ (i have oly inverted the function, and i have used simple integral equation theory to prove the existence of the potential....
As soon as you say you have calculated something to first order, or used the WKB approximation, or anything like that we immediately KNOW that those statements can't be part of a proof.
People can already calculate all the zeros of the zeta function. But it will take an infinite amount of time, just as your approximate processes might be able to do the same calculation if you do it properly, but it still has nothing to do with a proof because an infinite process will never be considered a proof seeing as you can't complete it.
Quote by eljose The problem with Matt (as happen with most of math teachers) is that they have assumed certain conceptions in math and if you are out of these,you are nothing,
Why exactly are you interested in trying to show anything to mathematicians if you reject mathematics? Mathematics defines its own processes very carefully so you can either accept those processes and do mathematics, or you can reject those processes and accept that you are not doing mathematics. (In that case, though, why should mathematicians listen to you when you incorrectly claim to be doing mathematics?)
Quote by eljose i don,t know what argument will now matt grime have to say my maths are wrong,but is only an "approach to the potential" (is would be only correct to first order in perturbation theory, the whole serie of values of energy is perhaps even divergent) and the WKB is also an approach,to say that we can choose some functions that are on L^{2}(R) function space....$$\psi=Asen(S(x)/\hbar)$$ for example. another question as we are dealing with rigour an other things...can anyone of you brilliant,smart intelligent mathematician to prove the existence of infinitesimals,i,ll put even more easier,write (with numerical value) an infinitesimal...
I addressed the problem of using approximations above. As for the issue of infinitesimals, which is completely off-topic for this discussion, infinitesimals are not something whose existence should be proven - they are something that is defined. And if you knew that you would also know that it is impossible to write a numerical value for one because, by definition, they do not admit numerical expansions.
People have been pretty patient with you and tried to help you understand why what you are doing is not proof but you seem to refuse to even learn what "proof" means in mathematics and, instead, complain that mathematicians won't listen to you. Why should anybody listen to you tell them how to do their work when you have demonstrated that you don't understand what their work is?
Given that people have been trying to help you, don't you think it is pretty rude of you to refuse to listen or learn?
-To HallsofIvy: this problem can be seen in a mathematical way as the Hilbert-Polya conjecture (a self-adjoint operator with a real potential whose eigenvalues are the roots of the function $$\zeta(1/2+is)$$); that is, to prove that there exists a real potential and a self-adjoint operator with a given V so that all its eigenvalues are the roots of the zeta function evaluated at 1/2+is. Then I prove this is self-adjoint (as the potential is real, and a Hamiltonian with a real potential has all its eigenvalues real); for the cases a+is with a different from 1/2 there are complex roots, so the potential cannot be real... all the steps I made are justified mathematically; I have not introduced any physical or empirical proof. (You can view H as a self-adjoint operator, and WKB solutions as the solutions to an equation of the form ey''+f(x)y=0 with e a small parameter, e<<1; as you can see, all of this is math.)
-To symplectic manifold: if this proof were made by a smart and snobbish university teacher, I'm sure that you would accept it...
-To David: I have proved that the potential exists and have given an expression for it in the form: $$V(x)=\int_{-\infty}^{\infty}dnR(n,x)\delta{E(n)}$$ (1) (although of course this is only an approximation; you can calculate it in finite time by integrating up to a finite N, so you only need to calculate a finite (but big) number of roots of Z(1/2+is)). As you can see the potential exists and can be calculated; also for a=1/2 the potential is real. I have calculated all the necessary steps to prove RH, but if they don't want to give you the fame and the prize because you are not famous they will invent any excuse... I am sure that if my steps (as I have said before) were made by a famous mathematician from a famous university the RH would have been proved long ago... at least Gauss, Euler and others were given an opportunity to publish their ideas... my teachers don't want to know anything about me or give me the chance to prove RH...
From the mathematical point of view my proof of RH is similar to this: a differential operator $$H=aD^{2}+V(x)$$ where we must choose V so the eigenvalues of this operator are precisely the roots of $$\zeta(1/2+is)=0$$. For the potential V, knowing some of the energies E_{n}, we could obtain V in an integral form as expressed by (1). For a small a<<1 (in our case m>>h) the WKB solutions are the solutions of a differential equation $$ey''+f(x)y=0$$... I have not made any physical assumptions at all.
Let K be the set of all solutions that satisfy $\zeta (s) = 0$. Let there exist some p such that: $$\frac{\partial^2 p}{\partial i^2} a^3 - \Gamma (i^2) = 0$$ Now I can prove that there exists a solution $i = f_k (p)$, therefore i exists. If i exists there must be a limit to g(p) as p approaches infinity, and therefore p exists. Thus the roots of: $$\int_0^\infty \frac{g(s)}{p \Gamma(s)} ds = \sin i$$ are synonyms of the Zeta function's roots and all have roots of $\Re (p) = 1/2$. Therefore RH is proven and no one can come up with a counter-example.
You were warned.
Thread Closed
http://nrich.maths.org/6949
# Which Numbers? (2)
##### Stage: 2 Challenge Level:
This problem is similar to Which Numbers? (1), but slightly more tricky. You may like to try that one first.
I am thinking of three sets of numbers less than $101$. They are the blue set, the red set and the black set.
Can you find all the numbers in each set from these clues?
These numbers are some of the blue set: $26, 39, 65, 91$, but there are others too.
These numbers are some of the red set: $12, 18, 30, 42, 66, 78, 84$, but there are others too.
These numbers are some of the black set: $14, 17, 33, 38, 51, 57, 74, 79, 94, 99$, but there are others too.
There are sixteen numbers altogether in the red set, seven numbers in the blue set and fifty numbers in the black set.
These numbers are some that are in just one of the sets: $6,10, 15, 24, 33, 48, 56, 65, 75, 93$, but there are others too.
These numbers are some that are in two of the sets: $12, 13, 36, 54, 72, 96$, but there are others too.
The only number in all three of the sets is $78$.
These numbers are some that are not in any of the sets: $5, 8, 22, 27, 44, 49, 63, 68, 82, 86, 100$, but there are others too.
There are twelve numbers that are in two of the sets.
There are $41$ numbers that are not in any of the sets.
You can download a sheet of all this information that can be cut up into cards.
Can you find the rest of the numbers in the three sets?
Can you give a name to the sets you have found?
http://crypto.stackexchange.com/questions/2730/rsa-dsa-wouldnt-it-make-sense-to-sign-using-decoding-the-data-hash/2731

# RSA/DSA: Wouldn't it make sense to sign using decoding the data hash?
Why is encoding using the private key used for signing? Wouldn't it make sense to keep the premise that private is for decoding and public is for encoding? I.e. create a hash, treat it as the result of the encryption, and decrypt it, the decrypted value being the "signature". Then when someone wants to validate the signature, he would encrypt the signature and he would get the hash.
Am I missing something? Why is this not used?
Sorry, it is not really clear what you mean. What you describe is (with some complications left away) how an RSA signature works. What actually is your question? – Paŭlo Ebermann♦ May 28 '12 at 18:42
It is? Great :) I assumed it works somehow differently, that it uses a different way to "encode" than m^d (mod n)... – Tomáš Fejfar May 28 '12 at 18:55
These are the "complications left away". To do it securely you'll have to use some kind of padding, and the padding schemes used for signing are something else than the padding schemes used for encryption (and obviously it is not the decryption unpadding). But the actual keyed operation is the same for signing, validating, encrypting and decrypting. – Paŭlo Ebermann♦ May 28 '12 at 19:00
## 1 Answer
Your question appears to be "why do we use the terminology 'encoding' when talking about what we do as a part of the signature operation". Well, we don't (at least, I don't, and I don't remember hearing that terminology from someone else).
As for RSA, well, the terminology you use is moderately irrelevant (as long as you do the cryptographic operations correctly, it doesn't matter what you call them); on the other hand, encoding would appear to imply that you are taking a 'signal' and putting it into a different representation, while decoding would appear to imply that you're reversing that operation and putting the signal back into its original state. While I suppose you could call the signature operation an "encoding" one, calling it a "decoding" one would appear to me to be an abuse of the terminology.
As for DSA, well, your description of "encrypt the signature and he would get the hash" is not how DSA verification works; instead, the verifier inserts the signature, values from the public key and the hash into a formula, and if both sides of the formula are the same value, then the signature verifies. There's nothing that can be usefully described as "decoding" here.
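To illustrate the point made in the comments above, that the keyed RSA operation itself is the same modular exponentiation whichever exponent you apply, here is a toy sketch of my own (textbook/raw RSA with tiny numbers, no padding, completely insecure):

```python
p, q = 61, 53
n = p * q                         # 3233
phi = (p - 1) * (q - 1)           # 3120
e = 17                            # public exponent
d = pow(e, -1, phi)               # private exponent (2753), Python 3.8+

h = 1234                          # stand-in for a message hash, h < n
signature = pow(h, d, n)          # "signing": h^d mod n
recovered = pow(signature, e, n)  # "verifying": signature^e mod n
print(recovered == h)             # True
```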
Thanks for your answer. You are correct - that it's mostly that I don't like terminology. Note that the private exponent is called `d` (as in decrypting) in most resources and the public is called `e` (as in encrypting). That puzzled me. – Tomáš Fejfar May 28 '12 at 19:00
@TomášFejfar: I believe that the common $d$ and $e$ convention dates back to the early days of RSA (probably the original SciAm article). While it's too historically entrenched to modify, you shouldn't think that it indicates what modern thinking of RSA is. – poncho May 28 '12 at 19:08
In PKCS#1 (the RSA standard), the term "encoding" clearly indicates all the various steps (including padding) that turn the message into an integer suitable for the RSA algorithm. I would not say that the terminology is irrelevant, at least if we give a value to the ability to be understood by the rest of the world... – SquareRootOfTwentyThree May 28 '12 at 19:29
@SquareRootOfTwentyThree: I forgot about that. On the other hand, I would note that PKCS#1 uses that for the operations other than the raw RSA operation, while the OP was talking about the raw RSA operation itself. – poncho May 28 '12 at 19:51
http://physics.stackexchange.com/questions/15498/could-hydrogen-liberated-from-water-provide-lifting-energy-which-exceeds-the-ene

# Could hydrogen liberated from water provide lifting energy which exceeds the energy it took to liberate it from water
I was thinking about hydrogen balloons, like the large ones used as weather balloons which sometimes go up to 100,000 ft (approx 30 km). Then I was wondering: how much potential energy has the balloon, together with the weight it carries, gained in getting up to 100,000 ft? It seems the object would have a lot of potential energy at that height. If the object was rolled down a ramp from that height, it would generate a lot of energy going down a 30 km high ramp. Then how much energy was used to produce the hydrogen that made it lift in the first place?
So my question is, could there be any situation where the potential energy the balloon and its cargo gained exceed the energy it took to make the hydrogen in the first place? Then, if so, how could a cycle be set up where the lifting energy of the hydrogen is used to liberate more hydrogen and produce energy.
Here is another idea: what if the balloon started at the bottom of the ocean, with an electrolysis device separating hydrogen and oxygen from the water down there? A balloon collects the hydrogen and oxygen and pulls upwards. The balloon is attached to a string which it pulls up, turning a pulley (wheel) at the bottom as it goes up. Could the rotation of the wheel gain more energy than the cost to extract the hydrogen? I guess the weight of the string would be a factor to consider as well.
My thought is that all of this is very unlikely, as it seems like a perpetual motion device, as the hydrogen and oxygen could be re-combined and it would fall back downwards as water and the cycle would be repeated. The question would be, where does the energy come from? it has to come from somewhere, so this seems very unlikely. I cannot think of where the energy comes from.
But can anyone work out the calculations even for a very basic calculation?
Ever heard about Nernst's law? In that law you find what you have to pay for the pressure of the products, through a higher voltage for electrolysis. With regard to your question on "deep sea" electrolysis: look up the solubility of gases in water. – Georg Oct 8 '11 at 12:15
## 4 Answers
I haven't done the calculations, but I doubt that this scheme would generate net energy. As was pointed out, electrolysis uses a lot of energy. However, after the H2 and O2 rises up the water column, you could get some of the energy back with a fuel cell that would convert the hydrogen and oxygen back to water and supply additional electric power, but inefficiencies at each stage would have to be made up from the energy gained by the rising gases in the column of water.
To show that there has to be a net overall loss of energy, consider the following modification to your problem. Imagine a long vertical pipe filled with water. Now instead of electrolysis, let's say you instead use an air pump to inflate a balloon near the bottom of the water-filled pipe. This takes work, which goes into lifting the column of water up such that the surface of the water rises enough to allow for the volume of the balloon. Once the balloon has risen through the water column the water level will go back to its former position. That fall in water level is the source of the energy that the rising balloon could generate. So there is no free lunch or perpetual motion machine here; inefficiencies at each stage will ensure that there is a net loss of energy.
By the way, I do not know this for a fact, but by this thought experiment, I would predict that electrolysis of water under high pressure would take more energy than under lower pressure. I say this because the electrolysis is effectively inflating a balloon against the pressure of the water - which will thus take more energy at high pressure.
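That prediction can be checked roughly with the Nernst equation (my own numbers, not from the answer): the reversible electrolysis voltage rises with the pressure of the evolved gases.

```python
import math

R, T, F = 8.314, 298.15, 96485.0
E0 = 1.229                  # V, standard reversible voltage for splitting water
P = 100.0                   # bar, assumed pressure of the evolved H2 and O2

# H2O(l) -> H2(g) + 1/2 O2(g), two electrons transferred per H2
E = E0 + (R * T / (2 * F)) * math.log(P * math.sqrt(P))
print(f"{E:.3f} V at {P:.0f} bar vs {E0:.3f} V at 1 bar")   # ~1.32 V vs 1.23 V
```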
@yotam, note that my answer which was posted before your answer did take into account the pressure factor for electrolysis, even though I did not know it was called Nernsts Law. So it was not accurate to say that the answers stated so far has neglected to take into account the pressure factor. That is exactly what my thought experiment of inflating a balloon with a pump was about. – FrankH Oct 10 '11 at 6:40
You are correct, they absolutely would constitute perpetual motion devices. A wide variety of schemes like this can be constructed, which all falter on the basis that they violate the first law of thermodynamics - that is, the total energy of a closed system (which both proposals are, though it may not be immediately apparent) is conserved.
Regarding water electrolysis, the massive amount of energy you would be putting into the reaction $2\mathrm{H_{2}O} \rightarrow 2\mathrm{H_{2}} + \mathrm{O_{2}}$ would find very little representation in the buoyancy of the hydrogen balloon. A mixture of $\mathrm{H}_{2}$ and $\mathrm{O}_{2}$ is quite thermodynamically unstable with respect to $\mathrm{H_{2}O}$, which is the reason why $\mathrm{H}_{2}$ combusts so vigorously. However, this thermodynamic instability that you have invested so much energy in contributes nothing to the lifting force of the balloon, which is simply an unrelated consequence of the gas being much less dense than the surrounding water (or, for a balloon in air, of hydrogen's low molecular mass).
-
Without going into calculations, one can look at analogues.
There exists the analogue of hydroelectric power. Essentially, power from the sun transfers water from a lower gravitational potential to a higher one, in the mountains, and the return of the water to the oceans releases gravitational energy; the sun acts directly by evaporation from the ocean surface and indirectly through wind, temperature differences, and atmospheric circulation in general.
Thus, what is necessary for your model to work with a positive energy result is power from outside the system. Solar panels providing the electrolysis current would do the job, and no perpetual motion is involved; the power again comes from the sun to change the gravitational potential energy of some mass.
It is not worth it economically, imo: why not use the electricity directly?
Edit
Anybody who has observed the sea bottom has seen bubbles rise from seaweed and corals and speed to the surface. There is kinetic energy in the buoyancy, so the question has merit.
Finding the energy balance sheet for a process that uses solar energy for electrolysis at some depth, builds up a hydrogen balloon at that depth (and why not a second oxygen balloon?), releases it and attempts to recover energy from the kinetic energy of the buoyant balloon requires calculation. In principle the electrolytic energy can be recovered at the surface. The kinetic energy of the bubble is the energy of the water falling to fill the space left by the rising balloon.
The question of energy conservation boils down to "where does the buoyant kinetic energy come from?" Somebody has done the work: gravity, because the falling mass is larger than the rising mass.
-
The answers stated so far have neglected to take the pressure factor into account (see @Georg's comment). When you try to do any chemical process, you need to pay the energy to overcome the entropy of the system and not only the potential; this is why we consider the free energy of something and not the potential energy.
If you don't consider that, you could replace water with a theoretical chemical, let's call it amazingume, which has properties similar to water except that it is easy to electrolyse. You still won't get the desired free energy, since you'll have to pay the energy of bringing the amazingume's elements up to atmospheric pressure.
-
A different case is helium, which I believe is mined from under the ground. If you put this in a balloon and float up, you have then gained some energy. If you then release the helium, it will stay up there. – Phil Oct 9 '11 at 8:35
I didn't finish explaining - the helium originally gained energy when the planet was formed, by the gravity of the earth, as it was buried with other minerals/rocks. The energy gained in the balloon filled with helium comes from the energy it took to originally bring it into this gravity well. Although helium is similar in some ways to the hydrogen/water idea I presented, the difference is that the energy gained came from somewhere known. – Phil Oct 9 '11 at 8:37
I'd recommend to stop the discussion here. Phil seems to be one of those guys who jump to ever stranger "reasons". – Georg Oct 9 '11 at 11:23
@Phil: But what you are saying here is already done. Look for `geothermal` energy. – Yotam Oct 9 '11 at 14:11
No, Phil's assumption on the origin of subterranean helium is wrong: "when the planet was formed by the gravity of the earth as it was buried with other minerals/rocks." – Georg Oct 9 '11 at 15:26
## Questions about Shimura curves
1: Suppose $A_3$ is the moduli space of abelian varieties of dimension 3. Is the union of all one-dimensional Shimura subvarieties in $A_3$ connected?
2: Given a Shimura curve (with an explicit construction), can we decide which CM points can occur on it?
-
1
Could you be more precise about what you mean by a "Shimura curve" inside $\mathcal{A}_g$? E.g., do you just mean a one-dimensional Shimura subvariety, or do you possibly really want $g = 2$ and are just talking about Shimura curves associated to quaternion algebras over $\mathbb{Q}$? – Pete L. Clark Jul 29 2010 at 14:36
(Assuming you mean the latter, I can answer this question, but I probably won't get to it for a little while because I have a three hour lecture to give in less than three hours. There are other regulars here who know the answer; if not, I'll come back later today...) – Pete L. Clark Jul 29 2010 at 14:38
# Tagged Questions: special-functions + riemann-zeta
0answers
88 views
### Identity involving $\zeta(3)$
This is a follow-up of this question and my partial answer to it. I've found that the proof of the 2nd identity reduces to showing that ...
2answers
239 views
### Clausen and Riemann zeta function
This is an exercise from the American Monthly Problems from last year. I would like prove two formulas: (1) \$\int_0^{2\pi}\int_0^{2\pi}\log(3+2\cos(x)+2\cos(y)+2\cos(x-y)) dxdy=8\pi ...
0answers
35 views
### The minimum of a function
Could anyone possibly give me any help with finding the minimum of this function? I believe the result to be $2\pi |n|$ from page 619 of this paper by W. G. C. Boyd. \begin{equation} ...
1answer
77 views
### How does one calculate the amount of time required for computation?
For example, to compute the zeroes of the Riemann zeta function using the Euler-Maclaurin summation method one has to do O(T) work. The Euler-Maclaurin summation method for zeta is given by \$ ...
2answers
191 views
### How to find integral of $\int_0^\infty \frac{\ln ^2z} {1+z^2}\mathrm{d}z$?
How do I find the value of $$\int_{0}^{\infty} \frac{(\ln z)^2}{1+z^2}\mathrm{d}z$$ without using contour integration, - using the usual special functions, e.g., zeta/gamma/beta/etc. Thank you,
0answers
267 views
### An Expression for $\log\zeta(ns)$ derived from the Limit of the truncated Prime $\zeta$ Function
I think, here, I found P_\color{red}x(\color{blue}s)=\sum_{p<\color{red}x} \frac{1}{p^{\color{blue}s}} =\sum_{\color{green}n=1}^{\infty}\frac{ \mu (\color{green}n)}{\color{green}n} ...
1answer
92 views
### Riemann's Zeta function [duplicate]
Possible Duplicate: Riemann Zeta Function and Analytic Continuation Calculating the Zeroes of the Riemann-Zeta function It is stated that Riemann's Zeta function has zeros at negative ...
1answer
162 views
### How does $\zeta(1 - s)$ become $(-1/s + \cdots)$?
On the Wiki page on $1 + 1 + \cdots$, how does $\zeta(1 - s)$ become $(-\frac{1}{s} + \cdots)$? A detailed explanation would be appreciated.
2answers
179 views
### Detailed proof of $\zeta(s)-1/(s-1)$ extends holomorphically to $\Re(s)>0$
I'm trying to understand the proof of PNT by Don Zagier. But his proof is too simplified so I can't understand it. I got stumped at step II: $\zeta(s)-1/(s-1)$ extends holomorphically to ...
3answers
395 views
### Derivatives of the Riemann zeta function at $s=0$
It's a curious fact that for $n>0$, $\zeta^{(n)}(0)\approx -n!$. Apostol gave a table for $\frac{\zeta^{(n)}(0)}{n!}$, among other results on $\zeta^{(n)}(0)$ . the sequence : \delta_{n}=\left | ...
0answers
75 views
### fastest way to evaluate $\arg\zeta\left(\frac{1}{2}+i\text{t}\right)$ [duplicate]
Possible Duplicate: evaluation of $\operatorname{Arg}\zeta (1/2+is)$ ?? If we consider \arg\zeta\left(\frac{1}{2} + i\text{t}\right) = \text{Im ...
1answer
182 views
### Improper integral about exp appeared in Titchmarsh's book on the zeta function
May I ask how to do the following integration? $$\int_0^\infty \frac{e^{-(\pi n^{2}/x) -(\pi t^2 x)}}{\sqrt{x}} dx$$ where $t>0$, $n$ a positive integer. This came up on page 32 (image) of ...
0answers
64 views
### Is this formula for $\sum_{n} (n^{2}+z^{2})^{-s}$ correct?
I would like to know if this formula is true: $$\sum_{n=1}^{\infty}\frac{1}{(z^{2}+n^{2})^s}=\frac{1}{\Gamma(s)} \sum_{n=0}^{\infty}\Gamma(s+n)\zeta(2s+2n)\frac{ (-z^2)^n}{n!}.$$ I have used the ...
2answers
92 views
### Riemann zeta sums and harmonic numbers
Given the nth harmonic number of order s, $$H_n(s) =\sum_{m=1}^n \frac{1}{m^s}$$ It can be empirically observed that, for $s > 2$, then, \sum_{n=1}^\infty\Big[\zeta(s)-H_n(s)\Big] = ...
1answer
208 views
### On the zeta sum $\sum_{n=1}^\infty[\zeta(5n)-1]$ and others
For p = 2, we have, $\begin{align}&\sum_{n=1}^\infty[\zeta(pn)-1] = \frac{3}{4}\end{align}$ It seems there is a general form for odd p. For example, for p = 5, define $z_5 = e^{\pi i/5}$. Then, ...
1answer
214 views
### What is the binomial sum $\sum_{n=1}^\infty \frac{1}{n^5\,\binom {2n}n}$ in terms of zeta functions?
We have the following evaluations: \begin{aligned} &\sum_{n=1}^\infty \frac{1}{n\,\binom {2n}n} = \frac{\pi}{3\sqrt{3}}\\ &\sum_{n=1}^\infty \frac{1}{n^2\,\binom {2n}n} = ...
1answer
146 views
### Limit of the function $\zeta(x)/\zeta(x+1)$ as $x \to \infty$
I am looking for a simple proof that $\zeta(\alpha)/\zeta(\alpha+1) \to 1$ as $\alpha \to \infty$ (where $\zeta(\alpha)$ denotes the Riemann zeta function, \$\zeta(\alpha) = \sum \limits_{n\geq 1} ...
2answers
990 views
### Proving an “amazing” claim regarding $\zeta( 3)$ and Apéry's proof
I recently printed a paper that asks to prove the "amazing" claim that for all $a_1,a_2,\dots$ $$\sum_{k=1}^\infty\frac{a_1a_2\cdots a_{k-1}}{(x+a_1)\cdots(x+a_k)}=\frac{1}{x}$$ and thus (probably) ...
2answers
226 views
### Evaluating $\zeta(0)$ using the functional equation of Riemann-Zeta function.
$$\zeta(it)=2^{it}\pi^{it-1}\sin(i\pi t/2)\Gamma(1-it)\zeta(1-it).$$ Everything on the RHS is never zero. Does that mean the LHS has no zeros, since $\sin(s)$ has a simple zero at $s=0$ while $\zeta(1−s)$ ...
0answers
147 views
### How is the Riemann-Siegel formula applied?
What is the application of the Riemann-Siegel formula: $$\zeta(s) = \sum_{n=1}^N\frac{1}{n^s} + \gamma(1-s)\sum_{n=1}^M\frac{1}{n^{1-s}} + R(s) ,$$ where \$ \displaystyle\gamma(s) = ...
1answer
149 views
### Zeta function identity
How does one prove the zeta function identity $$\sum_{s=2}^{\infty}\left(1-\sum_{n=1}^{\infty}\frac{1}{n^s}\right)=-1 \;?$$
1answer
195 views
### Verifying identities for Riemann zeta function
I ran across these two problems, while reading a text on number theory. The problem states: "Verify the following identities". What does "verify" mean in this context, and what strategies can I employ ...
3answers
279 views
### Limit of Zeta function
I'm looking for a reference for (or an elementary proof of) $$\lim_{s \rightarrow 1} \left( \zeta(s) - \frac{1}{s-1} \right) = \gamma$$ Thanks for your help.
1answer
452 views
### Upper bound on differences of consecutive zeta zeros
The average gap $\delta_n=|\gamma_{n+1}-\gamma_n|$ between consecutive zeros $(\beta_n+\gamma_n i,\beta_{n+1}+\gamma_{n+1}i)$ of Riemann's zeta function is $\frac{2\pi}{\log\gamma_n}.$ There are many ...
2answers
162 views
### Approximate Riemann zeta function
Given the function $Z(s,N)= \sum \limits_{n=1}^{N}n^{-s}$. In the limit $N \to \infty$ the function $Z(s,N) \to \zeta (s)$ Riemann Zeta function. My question is: Is there a Functional equation for ...
0answers
174 views
### New generalization of Riemann Zeta?
I am interested in the following generalization of the Riemann Zeta function: $$\zeta_M(s,c) = \sum_{n=1}^\infty \left(\frac{n^2}{c^2} + \frac{c^2}{n^2}\right)^{-s}$$ This is most closely related ...
2answers
599 views
### Logarithmic derivative of Riemann Zeta function
Given the logarithmic derivative of the zeta function $\dfrac{\zeta^\prime (s)}{\zeta(s)}$ how does it behave near $s=1$? I mean if for $s=1$ the Laurent series for the logarithmic derivative becomes ...
1answer
315 views
### elliptic generalizations of Euler's trick
So Euler employed the following identity $$\sin(z) = z \prod_{n=1}^{\infty} \left[1-\left(\frac{z}{n\pi}\right)^{2}\right]$$ to evaluate $\zeta(2n)$, for $n\in\mathbb{N}$ I'm curious if there's been ...
3answers
1k views
### Analytic continuation- Easy explanation?
Today, as I was flipping through my copy of Higher Algebra by Barnard and Child, I came across a theorem which said, The series $$1+\frac{1}{2^p} +\frac{1}{3^p}+...$$ diverges for $p\leq 1$ and ...
2answers
356 views
### Integrating $\frac{x^k }{1+\cosh(x)}$
In the course of solving a certain problem, I've had to evaluate integrals of the form: $$\int_0^\infty \frac{x^k}{1+\cosh(x)} \mathrm{d}x$$ for several values of k. I've noticed that that, for k a ...
3answers
305 views
### Erroneous numerical approximations of $\zeta\left(\frac{1}{2}\right)$?
By definition of the Riemann Zeta Function, $$\zeta\left(\frac{1}{2}\right) = \sum_{n=1}^\infty \frac{1}{\sqrt{n}}.$$ Since $\forall n \geq 1 : \frac{1}{\sqrt{n}} \geq \frac{1}{n}$, we have that for ...
2answers
347 views
### Is it problem of Mathematica or my own?
The following is a plot comparing Exp[Derivative[1,0][Zeta][0,x]+1/2Log[2 Pi]] and Gamma[x]: In theory the blue and the red ...
4answers
1k views
### Proving a known zero of the Riemann Zeta has real part exactly 1/2
Much effort has been expended on a famous unsolved problem about the Riemann Zeta function $\zeta(s)$. Not surprisingly, it's called the Riemann hypothesis, which asserts: \zeta(s) = 0 ...
4answers
473 views
### Are there addition formulas for the Riemann Zeta function?
In particular for two real numbers $a$ and $b$, I'd like to know if there are formulas for $\zeta (a+b)$ and $\zeta (a-b)$ as a function of $\zeta (a)$ and $\zeta (b)$. The closest I could find ...
2answers
1k views
### How to evaluate Riemann Zeta function
How do I evaluate this function for given $s$? $$\zeta(s) = \sum_{n=1}^\infty \frac1{n^s} = \frac{1}{1^s} + \frac{1}{2^s} + \frac{1}{3^s} + \cdots$$
2answers
505 views
### Question Relating Gamma Function to Riemann Zeta function evaluated at integers
I was just reading a paper of Ramanujan entitled " On question 330 of Professor Sanjana" when i got stuck up with a Proposition which i am unable to answer. The proposition is if \$ \displaystyle ... | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 68, "mathjax_display_tex": 15, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8890104293823242, "perplexity_flag": "middle"} |
What’s the name of this flavor of n-category?
I'm looking for the name of a certain n-category definition. (Someone explained it to me a couple of years ago. I remember the definition, but not the name. Without the name it's difficult to search for a citation. I want the citation in order to explain something we're not doing in a paper.)
For background, consider the Moore loop space $\Omega_r$ of loops of length $r$ (that is, parameterized by the interval $[0,r]$). We have a strictly associative composition $\Omega_r\times \Omega_s\to \Omega_{r+s}$. The main idea of an "xxxx" n-category is to imitate this idea in higher dimensions. The $k$-morphisms are parameterised by $k$-dimensional rectangles with sides of lengths $r_1,\ldots,r_k$. Gluing rectangles together gives $k$ different strictly associative ways to compose $k$-morphisms.
Question: What is "xxxx" above?
Bonus question: What's the best (or any) citation for this idea?
EDIT: It turns out the definition I was trying to remember is unpublished work of Ulrike Tillmann. But the version from Ronnie Brown linked to in David Roberts' answer is pretty similar (for my purposes, at least).
-
are you familiar with arxiv.org/abs/math.CT/0107188? maybe there is an answer in his "chatty" bibliography – Sean Tilson Nov 12 2010 at 13:19
2 Answers
Moore hyperrectangles on a space form a strict cubical omega-category
arXiv
discussed briefly here at the nLab.
If you are instead thinking of a globular $n$-category, the closest I know of is a Trimble n-category, but that doesn't use Moore paths, but paths of length 1 and the $A_\infty$-co-category structure on $[0,1]$.
-
Thanks, that's helpful. The paper by Brown matches what I remember pretty well, but I thought there was some other name for this. I'll wait and see if any other answers are forthcoming. – Kevin Walker Nov 12 2010 at 4:27
Simpson-semistrict $n$-categories could be what you're after: $n$-categories where everything except the unit laws holds strictly, generalising one of the crucial properties of Moore path spaces? It's not a specific definition of $n$-category, but a strictness property which can be applied within various definitions.
Carlos Simpson has conjectured that these are enough to model homotopy types; Moore path space show this in dimension 1. I know very little about the details of this myself, I'm afraid, but what I have read about it is mostly from these sources plus their links and discussions:
• Simpson, Homotopy types of strict 3-groupoids.
• nlab: semi-strict $\infty$-category
• nlab: Simpson’s conjecture (I can't figure out how to link this directly; the single-quote in the url seems to confuse markdown)
• n-Category Café: Urs Schreiber, Semistrict Infinity-Categories and ω-Semi-Categories
I believe several people have been making some progress on it recently; eg Makkai mentioned some results along these lines at the latest Octoberfest.
-
Not quite what I'm after, but interesting nevertheless. Thanks. – Kevin Walker Nov 12 2010 at 15:22
# Tagged Questions: estimation + special-functions
1answer
63 views
### Is the hypergeometric function $F(5/4,3/4; 2, z)$ bounded on $(0,1]$
Consider the classical hypergeometric function $F(5/4,3/4; 2, z)$ for $z\in (0,1]$. Is this bounded by some real number (independent of $z$)? I'm aware of Euler's formula: F(5/4,3/4; 2, z) = ...
2answers
141 views
### Bound for the Legendre function of the second kind of degree $1/2$
Let $Q_{1/2}(u)$ be the Legendre function of the second kind of degree $1/2$. One can show that $Q_{1/2}(u) = O(u^{-3/2})$ as $u\to \infty$; see Equation 21 in Section 3.9.2 of Higher transcendental ...
# Tag Info
## Hot answers tagged minkowski-space
15
### Einstein's postulates <==> Minkowski space. (In layman's terms)
I will first describe the naive correspondence that is assumed in usual literature and then I will say why it's wrong (addressing your last question about hidden assumptions) :) The postulate of relativity would be completely empty if the inertial frames weren't somehow specified. So here there is already hidden an implicit assumption that we are talking ...
9
### Einstein's postulates <==> Minkowski space. (In layman's terms)
First, set the units so that the speed of light is equal to one, so that the path of light rays in space-time are at 45 euclidean degrees. Note that a moving observer has a space-time path which is tilted relative to the t-axis, and the t-axis describes the path of a stationary observer (what I really mean is, get comfortable with space-time pictures). Then ...
8
### twistor-spacetime correspondence
The ordinary twistor space is parameterized by $(\lambda^\alpha,\mu_{\dot\alpha})$. Here, the $\alpha$ is a 2-valued $SL(2,C)$ spinor index of one chirality and the dotted index is its complex conjugate, the index of the opposite chirality. At the level of spinors, vectors are equivalent to "spintensors" with one undotted and one dotted index. V_\mu = ...
5
### Einstein's postulates <==> Minkowski space. (In layman's terms)
Let's build up to this. Suppose you know about x and y coordinates in the Euclidean plane, but to you they're just arbitrary labels for points, like zip codes or phone numbers. Then suppose someone tells you that observers can view the plane from different directions, but the laws of geometry stay the same. You now know that x and y aren't really separate. ...
4
### Minkowski diagram and length contraction
One picture is worth a 1000 words here... The important point is that we make a snapshot of the moving object in a time coordinate which is not its proper time. Indeed, if we drew just one image of the moving object in a cut given by $t' = \rm const.$, we would get a projection on the $x$ axis longer than $l_0$. But we must use $t = \rm const.$ instead if ...
4
### twistor-spacetime correspondence
I would like to add some further points to the answers above on the Twistor Space <--> Spacetime correspondence. The Twistor space T is a four complex dimensional space with elements described by $(Z^0,Z^1,Z^2,Z^3)$ or $Z^{\alpha}=(\omega^A,\pi_{A'})$ in spinor terms. The incidence relation between Minkowski points and Twistors is given (in spinor ...
4
### Minkowski diagram, hyperbola and invariant quantity of relativity
1) As rotations in Euclidean geometry moves points along circular arcs, boosts in relativity move points along hyperbolic arcs. $\Delta s$ is only invariant in the sense that any given point lies on only one hyperbola (only one level curve of $x^2 - c^2 t^2$) which has a unique $\Delta s$ with respect to the origin. In other words, comparing between two ...
3
### Tensor manipulation
Start with (iii) $T^\mu{}_\mu = g_{\mu\nu}T^{\mu\nu}$ I don't think this can be correct because both indices appear twice. What's wrong with $g_{\mu\nu}T^{\mu\nu}$? Both indices are contracted. Explicitly it means $$\sum_{\mu=0}^3\sum_{\nu=0}^3 g_{\mu\nu}T^{\mu\nu}$$ which is a perfectly good scalar. $g^\mu{}_\mu =2$ here I summed all the ...
3
### Terminology for opposite null lines
To expand on Qmechanic's point, we can demonstrate how null directions get moved around even in flat space by Lorentz transformations: Given a Lorentz vector $X^a$ you can construct a 2x2 Hermitian complex matrix $$X^{AA'} = \frac{1}{\sqrt{2}}\left(\begin{array}{cc}X^0+X^3 & X^1+iX^2 \\ X^1-iX^2 & X^0-X^3\end{array}\right)$$ The Lorentz norm of ...
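The excerpt is cut off, but the point it is heading toward is easy to check numerically; here is a small sketch (random test vectors, nothing specific to the original answer) showing that the determinant of that matrix is half the Minkowski norm, and that an $SL(2,\mathbb{C})$ map $X \mapsto AXA^\dagger$ preserves it:

```python
import numpy as np

rng = np.random.default_rng(0)

def herm(X):
    """2x2 Hermitian matrix built from a 4-vector X = (X0, X1, X2, X3) as in the answer."""
    X0, X1, X2, X3 = X
    return np.array([[X0 + X3, X1 + 1j * X2],
                     [X1 - 1j * X2, X0 - X3]]) / np.sqrt(2)

X = rng.normal(size=4)
norm = X[0]**2 - X[1]**2 - X[2]**2 - X[3]**2          # Lorentz (Minkowski) norm of X

# determinant of the Hermitian matrix equals half the Minkowski norm
print(np.isclose(np.linalg.det(herm(X)).real, norm / 2))

# an SL(2,C) matrix acts by X -> A X A^dagger; det, hence the norm, is unchanged
A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
A = A / np.sqrt(np.linalg.det(A))                     # rescale so that det(A) = 1
Y = A @ herm(X) @ A.conj().T
print(np.isclose(np.linalg.det(Y).real, norm / 2))
```

Both checks print `True`, which is the usual way of seeing $SL(2,\mathbb{C})$ as a double cover of the proper orthochronous Lorentz group.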
3
### Relativistic space-time geometry
As others mentioned, special relativity (by definition really) doesn't have anything to do with curved surfaces! Special relativity has a particular metric (minkowski metric) which has no curvature. If your interested in manifolds (particularly integration on them, since integration in minkowski space is pretty trivial) and things like that, you really ...
3
### twistor-spacetime correspondence
Given Luboš answer, a good place to learn about this stuff is straight from the horse's mouth:Spinors and Space-Time: Volume 1, Two-Spinor Calculus and Relativistic Fields and Spinors and space-time: Spinor and twistor methods in space-time geometry.
3
### Minkowski diagram and length contraction
What you've missed is that the distance along the $x'$ axis is not the same as the distance along the $x$ axis. The locus of events that are 1 unit of proper distance from the origin is a hyperbola. This can be used to calibrate the $x'$ axis. See calibration hyperbola.
2
### Lorentz transformation matrix and its meaning in Minkowski diagram
Your questions 1 and 2 can be answered if you consider one more geometrical aspect of the group of the one-dimensional Lorentz bosts and that is the orbit of a single point. In other words, the set obtained by transforming one fixed point, say, (0, 1), by Lorentz boosts of all possible $\beta$'s. If you would apply several transformations successively, this ...
2
### Limit on velocity in Minkowski Spacetime geometry
Since a worldline along the time axis on Minkowski diagram is at rest, it is more intuitive to measure angles from that axis instead, as then 'slope' is (space)/(time), i.e., a velocity. Then we have the trigonometric relationship: $$\frac{v}{c} = \tanh\alpha$$ where Minkowski spacetime follows hyperbolic trigonometry because of the sign difference in the ...
2
### Limit on velocity in Minkowski Spacetime geometry
Why do you stop your largest angle with ten 9s after the decimal point? If you added more of them, then you'd get a smaller bound for the velocity. And you keep adding 9s ad infinitum and you'll "eventually" reach $89.\bar{9}=90$. So eventually, you'll see that the velocity could be arbitrarily small. This just means that the worldline can be vertical... and ...
2
### Terminology for opposite null lines
There isn't an official standard name for opposite null lines. Note that opposite null lines are not a coordinate-independent geometric (invariant) notion, and hence it is not a very useful concept. If two null lines happen to lie on opposite sides of the light-cone in one reference frame, then they may not lie on opposite sides of the light-cone wrt. a ...
1
### Minkowski diagram and time dilation
We want to find the (coordinate) elapsed time between two events. Let those two events be the origin and $ct=1, x=0$. Clearly, these two events are co-located in the unprimed coordinate system and so, the unprimed coordinate elapsed time is equal to the proper elapsed time between these two events. To find the coordinate elapsed time in the primed ...
# How can mass affect spacetime?
In General Relativity Theory, mass can warp spacetime. However, in my view interaction only occurs between pieces of matter. Spacetime is not matter; how can it be affected by matter?
-
How do you understand the concept of spacetime? – elcojon Jan 20 at 16:23
@elcojon In my view, spacetime is a frame which has capability to contain realities, but itself is not a reality. – Popopo Jan 20 at 16:52
1
Your 'view' is an arbitrarily arrived at proposition. Spacetime can be affected by matter through gravity, why? --because that's what observations and theory suggest to be the case. – zhermes Jan 20 at 23:04
## 3 Answers
This is dangerously close to philosophy rather than physics, but your question has no answer because you appear to have a different concept of spacetime to the rest of us.
The concept of spacetime arose from special relativity and was proposed by Minkowski as a way to understand special relativity. There have been various questions on this site that expand on this. See for example What's the difference between space and time? for a discussion of why spacetime is a useful concept in SR.
The point of this is that we didn't start with an idea of spacetime and then try and work back to special relativity. Spacetime is a concept that emerged from SR. Exactly the same is true in general relativity, except that the metric is now variable rather than fixed and as you say in your question, the metric is determined by the presence of matter or more precisely the stress-energy tensor.
So in general relativity spacetime is defined as that which obeys the Einstein equation. That's why it doesn't make sense to ask how matter can warp spacetime, because that is the way that spacetime is defined.
You wouldn't be the first to find this slightly unsatisfactory and indeed Einstein himself was uneasy with an equation that had geometry on one side and matter on the other. Hence his famous comment about marble and wood. Nevertheless, as far as we know GR works perfectly. You are certainly at liberty to start with a different concept of spacetime and see where it leads, but you will find it hard to improve on GR!
-
In GR, is there no space-time independent from matters&fields? – Popopo Jan 21 at 3:56
The simple answer is that there is no spacetime independent of the stress-energy tensor. GR assumes that a manifold exists, and that it is four dimensional; however, without a metric this hardly corresponds to a spacetime. It's the metric that depends on the stress-energy tensor. – John Rennie Jan 21 at 8:26
The OP seems to be aligned with the non-absolute space(-time) championed by Leibniz. A little bit of history: Newton and Leibniz debated more than just calculus. The former had an idea of space and time as real substances, while the latter thought of them more as conveniences for mathematically modeling the things we actually care about. Leibniz (and after a fashion Mach and Einstein) felt all we had was relations between objects, anything else (e.g. spacetime) was an invention that got the right answer. See for instance the Stanford philosophy encyclopedia for more details.
If you want, you can think of GR in this way too. Start with a distribution of real mass, energy, momentum, etc., bundled in the stress-energy tensor $\mathbf{T}$. This tells you the Einstein tensor $\mathbf{G}$: $$\mathbf{G} = 8\pi \mathbf{T}.$$ With $\mathbf{G}$ you can solve a horrendous mess of coupled, nonlinear partial differential equations to find the metric $\mathbf{g}$. Whether you take this to be a physical thing or not is the crux of your question, but it doesn't really matter in solving the problem.
In the reference frame of your choosing, you can define the Christoffel symbols $\Gamma^\rho_{\mu\nu}$ from $\mathbf{g}$. With these you get the set of differential equations of motion known collectively as the geodesic equation: $$\frac{\mathrm{d}^2x^\rho}{\mathrm{d}\lambda^2} + \Gamma^\rho_{\mu\nu} \frac{\mathrm{d}x^\mu}{\mathrm{d}\lambda} \frac{\mathrm{d}x^\nu}{\mathrm{d}\lambda} = 0.$$ Plug in initial conditions and you have $\vec{x}$ as a function of some affine parameter $\lambda$. That is, you have predicted the motion (relative to some frame) of an object - it goes along the spacetime path $\vec{x}(\lambda)$ - starting with the "real stuff" in the universe.
We used the metric as a convenient waypoint, and its form was certainly affected by what we included in $\mathbf{T}$. You are free to say $\mathbf{g}$ is just a mathematical symbol that encapsulates how all the matter in $\mathbf{T}$ affected the thing described by $\vec{x}$. However, you must admit that $\mathbf{g}$ is determined by the matter and energy in the universe; if it were constant we wouldn't have GR.
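To make the $\mathbf{g} \to \Gamma$ step concrete, here is a small symbolic sketch (purely illustrative: it uses the standard Schwarzschild metric as an assumed example, not anything specific to the discussion above):

```python
import sympy as sp

t, r, theta, phi, rs = sp.symbols('t r theta phi r_s', positive=True)
coords = [t, r, theta, phi]

# Schwarzschild metric, signature (-, +, +, +), with G = c = 1
g = sp.diag(-(1 - rs / r), 1 / (1 - rs / r), r**2, r**2 * sp.sin(theta)**2)
g_inv = g.inv()

def christoffel(rho, mu, nu):
    """Gamma^rho_{mu nu} = (1/2) g^{rho sigma} (d_mu g_{sigma nu} + d_nu g_{sigma mu} - d_sigma g_{mu nu})."""
    return sp.simplify(sum(
        sp.Rational(1, 2) * g_inv[rho, sig] *
        (sp.diff(g[sig, nu], coords[mu]) +
         sp.diff(g[sig, mu], coords[nu]) -
         sp.diff(g[mu, nu], coords[sig]))
        for sig in range(4)))

# Gamma^r_{tt}, the piece that reproduces Newtonian gravity far from the source
print(christoffel(1, 0, 0))   # r_s*(r - r_s)/(2*r**3), possibly in an equivalent form
```

Feeding these $\Gamma^\rho_{\mu\nu}$ into the geodesic equation above then gives the predicted paths $\vec{x}(\lambda)$.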
-
Stephen Hawking explains that mass is a measure of the heterogeneity of space. Understanding this, and the general theory of relativity as a whole, takes scientists years of study, and presenting its predicted effects is quite difficult.
-
# How does one compute the minimal bounding sphere of a k-simplex?
Suppose I have a list of $k+1$ points in $\mathbb{R}^n$, and I let $\sigma^k$ be their convex hull. I want to know two things:
1. How can I determine, for any $\varepsilon$, whether open balls of radius $\varepsilon$ centered at each point intersect?
2. More generally, how can I compute the center of the minimal bounding sphere $S^{k+1}$ of $\sigma^k$, which would then tell me the infimal value of $\varepsilon$ in #1?
My main concern is whether these can be done efficiently (a linear time algorithm in $k$ to compute #2 would be wonderful). In general it is not the case that this polytope is cyclic (or else I could take any two faces and intersect perpendicular lines from their centers to compute the circumcenter).
Just in case this is relevant, I want to know this so I can build a Cech complex from the data for any parameter $\varepsilon$, and determine the values of $\varepsilon$ at which the complex changes.
-
## 2 Answers
1. You need to compute the pairwise distances ($\frac{n(n-1)}2$ calculations, using the Pythagorean theorem), and check that $X_iX_j < 2\varepsilon$ for all distinct pairs $X_i,X_j$ of vertices (two open $\varepsilon$-balls intersect exactly when their centres are less than $2\varepsilon$ apart).
2. Calculate the equations of the perpendicular bisector plane of $X_0,X_1$, that of $X_1,X_2$, and so on ($n$ calculations), and find their intersection point.
-
I think you're describing a Rips complex, and that is not the same as a Cech complex. Specifically, the vertices of an equilateral triangle would have balls with pairwise intersections, but no three-way intersection. – Bean Sep 30 '12 at 17:16
miniball (i.e., a smallest-enclosing-ball algorithm such as Welzl's, which runs in expected linear time in the number of points)
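Spelling out how that connects to the question (a naive sketch, not the actual miniball implementation): the open $\varepsilon$-balls around the points have a common point exactly when the smallest enclosing ball has radius less than $\varepsilon$, so one routine answers both #1 and #2. The radius function $c \mapsto \max_i \lVert c - p_i\rVert$ is convex, so for small $k$ even a generic derivative-free minimiser works:

```python
import numpy as np
from scipy.optimize import minimize

def min_enclosing_ball(points):
    """Centre and radius of the smallest ball containing all the points (naive version)."""
    pts = np.asarray(points, dtype=float)
    radius = lambda c: np.max(np.linalg.norm(pts - c, axis=1))
    res = minimize(radius, pts.mean(axis=0), method="Nelder-Mead",
                   options={"xatol": 1e-12, "fatol": 1e-12, "maxiter": 10000})
    return res.x, radius(res.x)

def cech_simplex(points, eps):
    """True iff the open eps-balls around the points share a common point."""
    _, r = min_enclosing_ball(points)
    return r < eps

# Equilateral triangle with side 1: pairwise balls meet for eps > 1/2, but the
# triple intersection needs eps > 1/sqrt(3) (the circumradius), cf. the comment above.
tri = [(0, 0), (1, 0), (0.5, np.sqrt(3) / 2)]
print(cech_simplex(tri, 0.55), cech_simplex(tri, 0.60))   # False True
```

For large point sets or high dimensions one would of course switch to a proper Welzl-type or QP-based solver rather than this sketch.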
-
## When do adjunctions preserve equivalence?
Let $\mathcal{C}'$ and $\mathcal{D}'$ be categories so that $\mathcal{C} \subseteq \mathcal{C}'$ and $\mathcal{D} \subseteq \mathcal{D}'$ are full subcategories. Suppose the forgetful functors $F_{\mathcal{C}}:\mathcal{C} \to \mathcal{C}'$ and $F_{\mathcal{D}}: \mathcal{D} \to \mathcal{D}'$ have left adjoints $G_{\mathcal{C}}$ and $G_{\mathcal{D}}$. Then in general does an equivalence of categories between $\mathcal{C}'$ and $\mathcal{D}'$ induce an equivalence of categories between $\mathcal{C}$ and $\mathcal{D}$ by composing with the adjunctions? If this is not true in general are there any criteria to guarantee this?
The specific situation this came up in was trying to prove an equivalence of categories between sheaves on two different sites. I didn't want to deal with showing things were sheaves so I wanted to just prove an equivalence between the presheaf categories and say we can sheafify to get an equivalence on the sheaf level. I ended up proving that my functor sent sheaves to sheaves directly but I was still wondering if there was an answer to this question in general.
-
## 1 Answer
I think that in this generality the answer is "no". For example, take $\mathcal{C}'=\mathcal{D}'$ equal to some additive category, take the identity as the equivalence, take $\mathcal{C}=\mathcal{C}'$ and $\mathcal{D}=0$. This seems to satisfy your hypotheses.
-
# The Measurements of Kaufmann
From Wikisource
The Measurements of Kaufmann on the Deflectability of β-Rays in their Importance for the Dynamics of the Electrons (1906) by Max Planck, translated by Wikisource
In German: Die Kaufmannschen Messungen der Ablenkbarkeit der β-Strahlen in ihrer Bedeutung für die Dynamik der Elektronen , Physikalische Zeitschrift 7 (21): 753–761 (Read in September 19, 1906.) In this paper, Planck introduced the expression "relative theory" [German: Relativtheorie] for the so called "Lorentz-Einstein theory", because it obeys the principle of relativity. In the discussion section, Bucherer changed this to the now common expression "theory of relativity" [German: Relativitätstheorie].
The Measurements of Kaufmann on the Deflectability of β-Rays in their Importance for the Dynamics of the Electrons
by Max Planck.
Gentlemen! Probably all physicists especially interested in the development of the newest branch of electrodynamics, the mechanics of electrons, have awaited with great interest the outcome of the subtle measurements of the electro-magnetic deflectability of β-rays of radium performed last year by W. Kaufmann, and the expectations attached to such experiments have been met in large measure; for Kaufmann has gained from them a large amount [ 754 ] of valuable data, and, something for which we must be particularly grateful, he has also made his rich and reliable numerical data available to the public,[1] so that everyone is in a position to independently verify and complete the conclusions reached by Kaufmann.
I was all the more glad to make use of this suggestion, since the question at which Kaufmann's experiments were aimed is almost vital for various electrodynamic theories. It is known that there already exist a number of excellent mathematical investigations of several of these theories, whose physical meaning would of course be nullified at once if the corresponding theory were defeated in the resulting competition.
## § 1. Equations of motion.
We can presuppose that the method by which Kaufmann has tested the content of the various theories by his measurements is known. At first I was interested to see how far each of the measured deflections departs from those which can be calculated in advance from the various theories on the basis of the measured "constants of apparatus". Since I preferred not to reduce the measured deflections $(\overline{y},\overline{z})$ from the start "to infinitely small deflections" $(y', z')$, the equations of motion of the electrons had to be fully integrated. This gives, for all the theories compared:
$\begin{matrix} \frac{d}{dt}\left(\frac{\partial H}{\partial\dot{x}}\right) & = & -\frac{e}{c}\dot{z}\mathfrak{H}\\ \\\frac{d}{dt}\left(\frac{\partial H}{\partial\dot{y}}\right) & = & e\mathfrak{E}\\ \\\frac{d}{dt}\left(\frac{\partial H}{\partial\dot{z}}\right) & = & \frac{e}{c}\dot{x}\mathfrak{H}.\end{matrix}$
Here, H is the kinetic potential (the Lagrangian function) of a moving electron as a function of the velocity
$q=\sqrt{\dot{x}^{2}+\dot{y}^{2}+\dot{z}^{2}}$,
$\mathfrak{E}$ and $\mathfrak{H}$ are the electric and magnetic field strengths, both acting in the y-direction as known functions of x, e is the electrical elementary quantum, c is the speed of light. The electrical quantities are measured in electrostatic units.
The electron moves from the radiation source:
$x = x_{0} = 0\qquad y = 0\qquad z = 0$
through the diaphragm opening:
$x = x_{1} = 1,994\qquad y = 0\qquad z = 0$
to the point of the photographic plate:
$x = x_{2} = 3,963\qquad y = \overline{y}\qquad z = \overline{z}$.
To hit the diaphragm opening squarely, an electron has to be emitted from the radiation source with a certain initial velocity in a certain direction. A certain curve ($\overline{y}, \overline{z}$) thereby emerges on the photographic plate (x = x2), whose points depend on a single parameter, for example on the initial velocity.
## § 2. Determination of the external field components.
The integration of the equations of motion still requires knowledge of $\mathfrak{H}$ and $\mathfrak{E}$ as functions of x. The magnetic field strength $\mathfrak{H}$ I assumed to be constant, and so great that the value of the "magnetic field integral" is the same as Kaufmann's. Its value is:[2]
$\int_{x_{0}}^{x_{2}}dx\int_{x_{0}}^{x}\mathfrak{H}dx-\frac{x_{2}-x_{0}}{x_{1}-x_{0}}\int_{x_{0}}^{x_{1}}dx\int_{x_{0}}^{x}\mathfrak{H}dx=557,1$.
Herein we set $\mathfrak{H}$ constant, so it follows:
$\frac{1}{2}(x_{2}-x_{1})(x_{2}-x_{0})\mathfrak{H}=557,1$
and from the values given by x0, x1 and x2:
$\mathfrak{H}=142,8$.
The electric field strength $\mathfrak{E}$ is zero between the diaphragm and the photographic plate, and constant between the capacitor plates at a proper distance from the edges. Like Kaufmann, we refer the field strength to its value in the homogeneous part of the field, taking that value as unity, and call $\mathfrak{E}_{1}$ the field strength measured in this way. Then we have:
for $x_{1}<x<x_{2}:\mathfrak{E}_{1}=0$.
Between the radiation source and the diaphragm I have assumed $\mathfrak{E}_{1}$ to be symmetric with respect to the midpoint of that distance, $x=\frac{x_{1}}{2}$, so that, if one puts
$x=\xi+\frac{x_{1}}{2},\qquad -\frac{x_{1}}{2}<\xi<\frac{x_{1}}{2}$, then $\mathfrak{E}_{1}(-\xi)=\mathfrak{E}_{1}(+\xi)$. 1)
The increase of the electric field strength $\mathfrak{E}_{1}$ from the radiation source to its constant value 1 is assumed to be linear, as well as the decrease to the value 0 at the diaphragm. That is: [ 755 ]
for $0<\xi<\xi'$: $\mathfrak{E}_{1}=1$; for $\xi'<\xi<\frac{x_{1}}{2}$: $\mathfrak{E}_{1}=\varkappa-\lambda\xi$. 2)
Then, because of the continuity of $\mathfrak{E}_{1}$:
$\varkappa-\lambda\xi'=1$ and $\varkappa-\lambda\frac{x_{1}}{2}=0$.
The value of the constant ξ' I chose such that the value of the "electric field integral" is the same as Kaufmann's. Its value is:[3]
$(x_{2}-x_{1})\cdot\left\{ \int_{x_{0}}^{x_{1}}\mathfrak{E}_{1}dx-\frac{1}{x_{1}-x_{0}}\int_{x_{0}}^{x_{1}}dx\int_{x_{0}}^{x}\mathfrak{E}_{1}dx\right\} =1,565$.
Substituting the above values, it follows:
$\frac{1}{2}(x_{2}-x_{1})\cdot\left(\frac{x_{1}}{2}+\xi'\right)=1,565$
and from this:
$\xi' = 0,593,\qquad \varkappa = 2,468,\qquad \lambda = 2,475.$
To reduce the electric field strength to the absolute electrostatic unit: $\mathfrak{E}$, or to the absolute electromagnetic unit: $\mathfrak{E}_{m}$, one has:[4]
$\mathfrak{E}_{m}=\mathfrak{E}_{1}\cdot\frac{25\cdot10^{10}}{0,1242}=\mathfrak{E}\cdot3\cdot10^{10}$ 3)
Whether the simplistic assumptions made in relation to the electric and magnetic field are really sufficient for the relevant calculations, will be shown below.
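Since these constants of apparatus are used in everything that follows, a quick numerical re-check may be convenient (a modern sketch using only the numbers quoted in this section; the rounding of ξ' to three decimals follows the text):

```python
x0, x1, x2 = 0.0, 1.994, 3.963          # positions of source, diaphragm and plate

# magnetic field integral: (1/2)(x2 - x1)(x2 - x0) * H = 557.1
H = 557.1 / (0.5 * (x2 - x1) * (x2 - x0))
print(round(H, 1))                      # 142.8

# electric field integral: (1/2)(x2 - x1)(x1/2 + xi') = 1.565
xi = round(1.565 / (0.5 * (x2 - x1)) - x1 / 2, 3)
print(xi)                               # 0.593

# continuity: kappa - lambda*xi' = 1 and kappa - lambda*x1/2 = 0
lam = 1 / (x1 / 2 - xi)
kappa = lam * x1 / 2
print(round(kappa, 3), round(lam, 3))   # 2.468 2.475
```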
## § 3. Magnetic deflection.
If we introduce into the equations of motion (§ 1) the momentum vector (quantity of motion)
$\frac{\partial H}{\partial q}=p$ 4)
and also express the electric field strength in the absolute electromagnetic unit ($\mathfrak{E}_{m}$) and write ε for the electrical elementary quantum in that unit, then they become:
$\frac{d}{dt}\left(p\frac{\dot{x}}{q}\right)=-\epsilon\dot{z}\mathfrak{H}$ 5)
$\frac{d}{dt}\left(p\frac{\dot{y}}{q}\right)=\epsilon\mathfrak{E}_{m}$ 6)
$\frac{d}{dt}\left(p\frac{\dot{z}}{q}\right)=\epsilon\dot{x}\mathfrak{H}$. 7)
Because $\mathfrak{H}$ is constant, (5) and (7) can be integrated with respect to the time t. Dividing the two resulting equations, t, p and q are entirely eliminated, and a second integration yields the equation of the projection of the trajectory onto the xz-plane: a circle, which goes through the points x = 0, z = 0; x = x1, z = 0; and x = x2, $z = \bar{z}$, and is determined by them. The current coordinates x, z of the points of this circle can be represented as functions of a single variable parameter: the angle φ which the tangent to the circle, taken in the direction of motion, makes with the x-axis, and which is positive when the motion is toward the side of the positive z-axis:
$x=\varrho\sin\varphi+\frac{x_{1}}{2},\qquad z=-\varrho\cos\varphi+\frac{x_{1}}{2}ctg\ \varphi_{1}$. 8)
Where
$\varrho=\frac{x_{1}}{2\sin\varphi_{1}}$ 9)
is the radius of the circle and φ1 the value of φ for x = x1. In these equations it is already expressed that for x = 0 and x = x1, z = 0. If it is considered that x = x2, $z=\bar{z}$, then we have the values:
$tg\ \varphi_{1}=\frac{x_{1}\bar{z}}{(x_{2}-x_{1})x_{2}+\bar{z}^{2}}$ 10)
and
$\sin\varphi_{2}=\frac{2x_{2}-x_{1}}{x_{1}}\sin\varphi_{1}$ 11)
and also $\varrho$ from (9).
By inserting (8) into (5) or (7) we have:
$\frac{p}{q}\cdot\dot{\varphi}=\epsilon\mathfrak{H}$. 12)
Now it is:
$q^{2}=\varrho^{2}\dot{\varphi}^{2}+\dot{y}^{2}$,
for which we can put with sufficient approximation:[5]
$q=\varrho\dot{\varphi}$. 13)
Therefore:
$p=\epsilon\mathfrak{H}\varrho=\epsilon\mathfrak{H}\frac{x_{1}}{2\sin\varphi_{1}}$. 14)
The momentum p of each electron is independent of time and, without entering into a special theory, can be calculated from the magnetic deflection $\bar{z}$.
Since p is independent of time t, the same follows for the velocity q and by (12) for the angular velocity $\dot{\varphi}$. So angle φ is linearly dependent on time t.
## § 4. Electrical deflection.
From (6) it follows:
$\frac{p}{q}\frac{d^{2}y}{d\varphi^{2}}\cdot\dot{\varphi}^{2}=\epsilon\mathfrak{E}_{m}$
and from this according to (12) and (13)
$\frac{d^{2}y}{d\varphi^{2}}=\frac{\varrho}{q}\frac{\mathfrak{E}_{m}}{\mathfrak{H}}$ 15)
[ 756 ] By this differential equation and by the condition that for φ = φ0 (x=0) and for φ = φ1 (x=x1) y = 0, y is defined as a function of φ.
Between source and diaphragm, according to (1) and
$\xi=\varrho\sin\varphi$,
we have
$\mathfrak{E}_{m}(-\varphi)=\mathfrak{E}_{m}(+\varphi)$.
Here, the course of the curve is symmetrical, so that:
$y_{-\varphi}=y_{+\varphi}$ and $\left(\frac{dy}{d\varphi}\right)_{-\varphi}=-\left(\frac{dy}{d\varphi}\right)_{+\varphi}$.
The integration of the differential equation (15) yields:
$\left(\frac{dy}{d\varphi}\right)_{\varphi_{1}}-\left(\frac{dy}{d\varphi}\right)_{\varphi_{0}}=\frac{\varrho}{q}\cdot\frac{1}{\mathfrak{H}}\int_{\varphi_{0}}^{\varphi_{1}}\mathfrak{E}_{m}d\varphi$
or, since φ0=-φ1:
$\left(\frac{dy}{d\varphi}\right)_{\varphi_{1}}=\frac{\varrho}{q\mathfrak{H}}\int_{0}^{\varphi_{1}}\mathfrak{E}_{m}d\varphi$.
Between the diaphragm and the plate it is $\mathfrak{E}_{m}=0$, so:
$\frac{dy}{d\varphi}=const=\left(\frac{dy}{d\varphi}\right)_{1}$,
and by integrating the last equation again:
$\bar{y}=\left(\frac{dy}{d\varphi}\right)_{1}\cdot(\varphi_{2}-\varphi_{1})=\frac{\varrho(\varphi_{2}-\varphi_{1})}{q\mathfrak{H}}\int_{0}^{\varphi_{1}}\mathfrak{E}_{m}d\varphi$. 16)
The integral occurring here is given, when one considers, that according to (2):
$0<\varphi<\varphi'\qquad\mathfrak{E}_{1}=1$
$\varphi'<\varphi<\varphi_{1}\qquad\mathfrak{E}_{1}=\varkappa-\lambda\varrho\sin\varphi$,
where
$\varrho\sin\varphi'=\xi'=0,593$. 17)
Then it follows:
$\int_{0}^{\varphi_{1}}\mathfrak{E}_{1}d\varphi=\varphi'+\varkappa(\varphi_{1}-\varphi')-\lambda\varrho(\cos\varphi'-\cos\varphi_{1})$
and by introduction of $\mathfrak{E}_{m}$ from (3):
$\bar{y}=\frac{25\cdot10^{10}}{0,1242}\cdot\frac{\varrho(\varphi_{2}-\varphi_{1})}{q\mathfrak{H}}\cdot\left\{\varphi'+\varkappa(\varphi_{1}-\varphi')-\lambda\varrho(\cos\varphi'-\cos\varphi_{1})\right\}$ 18)
## § 5. Various theories.
The relation between the electric deflection $\bar{y}$ and the magnetic deflection $\bar{z}$ rests on the dependence of the momentum p on the velocity q, and this is given by the expression for the kinetic potential H as a function of q, which differs from theory to theory. I have performed the calculations only for the two theories which are the most developed today: that of Abraham,[6] in which the electron has the form of a rigid sphere, and that of Lorentz-Einstein,[7] in which the "principle of relativity" possesses exact validity. For brevity I shall denote in the following the first theory as the "sphere theory", the second as the "relative theory". Then, according to the sphere theory, no matter whether volume charge or surface charge is adopted, since we are concerned only with quasi-stationary motions, the kinetic potential is:
$H=-\frac{3}{4}\mu_{0}c^{2}\left(\frac{c^{2}-q^{2}}{2qc}\log\frac{c+q}{c-q}-1\right)$ 19)
(μ0 is the mass of the electron for q = 0). Therefore:
$p=\frac{\partial H}{\partial q}=\frac{3}{4}\frac{\mu_{0}c^{2}}{q}\left(\frac{c^{2}+q^{2}}{2qc}\log\frac{c+q}{c-q}-1\right)$. 20)
However, according to the relative theory:[8]
$H=-\mu_{0}c^{2}\left(\sqrt{1-\frac{q^{2}}{c^{2}}}-1\right)$. 21)
Therefore:
$p=\frac{\partial H}{\partial q}=\frac{\mu_{0}qc}{\sqrt{c^{2}-q^{2}}}.$ 22)
Like Kaufmann, we introduce the two quantities β and u:
$\beta=\frac{q}{c}$ and $u=\frac{\mu_{0}c}{p}$ 23)
for the sphere theory it follows:
$\frac{1}{u}=\frac{3}{4\beta}\left(\frac{1+\beta^{2}}{2\beta}\log\frac{1+\beta}{1-\beta}-1\right)$ 24)
and in the relative theory:
$\frac{1}{u}=\frac{\beta}{\sqrt{1-\beta^{2}}}$ 25)
By introduction of u instead of p, equation (14) for the magnetic deflection becomes:
$u=\frac{\mu_{0}}{\epsilon}\cdot\frac{2c\ \sin\varphi_{1}}{x_{1}\mathfrak{H}}$. 26)
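(As a quick check, assuming only $\beta\ll 1$: expanding (24) gives $\tfrac{1}{u}=\beta+\tfrac{2}{5}\beta^{3}+\cdots$, while (25) gives $\tfrac{1}{u}=\beta+\tfrac{1}{2}\beta^{3}+\cdots$; in both theories $p$ therefore goes over into the ordinary momentum $\mu_{0}q$ for $q\ll c$, and the two expressions begin to differ only in the terms of order $\beta^{3}$.)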
## § 6. Numerical values.
The comparison of the observed and the theoretical values was carried out by me in such a way that, for each measured magnetic deflection $\bar{z}$ and for each of the two theories, the corresponding value of the electrical deflection $\bar{y}$ was calculated and compared with the observed value. Accordingly, the following table contains in the first column [ 757 ] the magnetic deflection $\bar{z}$ according to Kaufmann's table VI (l. c. p. 524), in the second column the corresponding values of the angle φ1 in degrees as calculated from (10), and in the third column the value of u following from (26), where for the ratio of the charge ε to the mass μ0 the following value [extrapolated by Kaufmann (l.c. p. 551) on the basis of Simon's number $1,865\cdot10^{7}$ as valid for all theories] was used:
$\frac{\epsilon}{\mu_{0}}=1,878\cdot10^{7}$ 27)
The fourth and sixth columns contain the values of $\beta=\frac{q}{c}$ calculated from u according to (24) and (25), the fifth and seventh columns contain the values of $\bar{y}$ following from (18), where the required φ' and φ2 are taken from (17) and (11); and finally the eighth column contains the "observed" values of $\bar{y}$ according to Kaufmann's table VI.
| $\bar{z}$ | $\varphi_{1}$ | u | β (sphere theory) | $\bar{y}$ (sphere theory) | β (relative theory) | $\bar{y}$ (relative theory) | $\bar{y}$ (observed) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 0,1354 | 1,977° | 0,3871 (0,3870) | 0,9747 | 0,0262 (0,0262) | 0,9326 | 0,0273 (0,0274) | 0,0247 |
| 0,1930 | 2,810 | 0,5502 (0,5502) | 0,9238 | 0,0394 (0,0394) | 0,8762 | 0,0415 (0,0415) | 0,0378 |
| 0,2423 | 3,517 | 0,6883 (0,6881) | 0,8689 | 0,0526 (0,0526) | 0,8237 | 0,0555 (0,0554) | 0,0506 |
| 0,2930 | 4,231 | 0,8290 (0,8286) | 0,8096 | 0,0682 (0,0682) | 0,7699 | 0,0717 (0,0717) | 0,0653 |
| 0,3423 | 4,925 | 0,9634 (0,9630) | 0,7542 | 0,0853 (0,0855) | 0,7202 | 0,0893 (0,0895) | 0,0825 |
| 0,3930 | 5,623 | 1,100 (1,099) | 0,7013 | 0,1054 (0,1055) | 0,6728 | 0,1099 (0,1099) | 0,1025 |
| 0,4446 | 6,325 | 1,23 (1,234) | 0,6526 | 0,1280 (0,1281) | 0,6289 | 0,1328 (0,1328) | 0,1242 |
| 0,4926 | 6,692 | 1,360 (1,358) | 0,6124 | 0,1511 (0,1512) | 0,5924 | 0,1562 (0,1561) | 0,1457 |
| 0,5522 | 7,735 | 1,510 (1,506) | 0,5685 | 0,1823 (0,1822) | 0,5521 | 0,1878 (0,1874) | 0,1746 |

(The bracketed values are those obtained from Kaufmann's reduced deflections z', as explained below.)
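The β columns of this table can be reproduced directly from the u column by inverting (24) and (25); the $\bar{y}$ columns would additionally require equations (10), (11), (17), (18) and the apparatus constants, which are not reproduced here. A small Python check (again an addition to the translation, not Planck's; note that the seventh u entry, printed as 1,23, appears to have lost its last digit in transcription, so that row agrees only approximately):

```python
from math import log, sqrt

u_values = [0.3871, 0.5502, 0.6883, 0.8290, 0.9634, 1.100, 1.23, 1.360, 1.510]

def beta_sphere(u):
    # invert eq. (24) by bisection; 1/u grows monotonically with beta
    f = lambda b: 3 / (4 * b) * ((1 + b * b) / (2 * b) * log((1 + b) / (1 - b)) - 1)
    lo, hi = 1e-6, 1 - 1e-12
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 1 / u else (lo, mid)
    return 0.5 * (lo + hi)

def beta_relative(u):
    # closed-form inverse of eq. (25)
    return 1 / sqrt(1 + u * u)

for u in u_values:
    print(f"u = {u:6.4f}   beta (sphere) = {beta_sphere(u):.4f}"
          f"   beta (relative) = {beta_relative(u):.4f}")
# compare with the beta columns above (agreement to about one unit in the last digit)
```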
First, in order to enable a comparison of my method of calculation with that of Kaufmann, I have placed in brackets, under the values of u as well as under the theoretical values of $\bar{y}$, the numbers that result for the same quantities when one relies (like Kaufmann) not on the observed values $\bar{z}$ but on the values z' "reduced to infinitely small deflection" (l. c. Table VII, p. 529): from these, u is calculated by means of Kaufmann's equations (14) and (17), the corresponding $v=\frac{u}{\beta}$ is determined according to each of the two theories, and one then passes to y' by means of Kaufmann's equation (18); $\bar{y}$ is finally given by Kaufmann's equation (12). For this calculation Kaufmann's constants A and B are of course not the "constants of curve" but the "constants of apparatus", which were measured independently of the deflection experiments. The comparison of the bracketed numbers with those above them shows that the results of Kaufmann's method of calculation differ only very slightly from mine, so that each of the two methods supports the other in a certain way.
As regards the comparison of the theoretical values of $\bar{y}$ with the observed ones, it can be seen that the latter lie closer to the sphere theory than to the relative theory. In my opinion, however, this cannot be interpreted as a definitive confirmation of the first and a refutation of the second theory. For that it would be necessary that the deviations of the theoretical numbers from the observed ones be small for the sphere theory compared with those of the relative theory. But this is not at all the case: on the contrary, the deviations of the theoretical numbers from each other are throughout smaller than the deviations of either theoretical number from the observed one.
One might now think that the lack of agreement is caused by the value (27) employed for the ratio ε:μ0, and that by a suitable amendment of this value a sufficient correspondence could be obtained for one of the two theories. This can easily be tested in the following way. Equation (18) gives, if one substitutes for $\bar{y}$ any observed value, the corresponding value of the velocity q = βc independently of any special theory; the corresponding value of u is then derived separately for each theory from (24) or (25), and from (26) the ratio ε:μ0 can be calculated. This procedure yields constant values of ε:μ0 for neither of the two theories; indeed, for β it even gives numbers that are unacceptable from the outset for either theory. The same is found, of course, with Kaufmann's method of calculation. Kaufmann[9] gives two equations for the deflections y' and z', which combined have the form:
$\beta=\frac{E}{cM}\cdot\frac{z'}{y'}$.
Where
$\frac{E}{cM}=0,1884$
is a constant of the apparatus, independent of the value of ε:μ0 and independent of any specific [ 758 ] theory. If one now takes from table VII (p. 529), for example, z' = 0,1350 and y' = 0,0246, the result is:
$\beta=0,1884\cdot\frac{0,1350}{0,0246}=1,034$,
which is a priori not compatible with any of the theoretical formulas.
Thus nothing seems to remain but the assumption that the theoretical interpretation of the measured quantities still contains a major gap, which must first be filled before the measurements can be used for a final decision between the sphere theory and the relative theory. One might think of various possibilities, none of which I would like to discuss more closely here, because the physical foundations for any of them seem to me too uncertain.
## § 7. Difference between the theories for rays of a certain magnetic deflectability.
However, I would like to treat one further point in more detail: the question in which regions of the "radiation spectrum" a decision between the conflicting theories will first become possible. It seems to be a fairly widespread view that the greatest differences between the theories are to be found with the fastest rays. This view apparently arises from the circumstance that the momentum values p derived from equations (20) and (22) for the two theories differ from each other the more, the closer β comes to 1; but this is incorrect, and in some circumstances the very opposite is true. For in the measurements we do not compare the observed values of p with the theoretically expected values of p at a given β; rather we compare, as for example in Kaufmann's measurements, the observed values of the electrical deflectability with the theoretically expected values of the electrical deflectability at a given magnetic deflectability, and that is something completely different.
When an electron ray is characterized by its magnetic deflectability, this means that we assign to it a specific value of the momentum p, since by (14) p is directly determined by the radius of curvature $\varrho$. To a certain value of p, to which by (23) there also corresponds a certain value of u, there belong, according to the two theories, different values of β. We denote them by β and β'; β shall apply to the sphere theory, β' to the relative theory, so that by (24) and (25) we have:
$\frac{1}{u}=\frac{3}{4\beta}\left(\frac{1+\beta^{2}}{2\beta}\log\frac{1+\beta}{1-\beta}-1\right)=\frac{\beta'}{\sqrt{1-\beta'^{2}}}$.
It follows that always:
β' < β.
So, a ray of a certain magnetic deflectability possesses a smaller velocity in the relative theory than in the sphere theory.
Now consider the electrical deflectability in the two theories. The electric deflection of the beam at a certain (not too great) distance x is, as follows directly from (6), proportional to the ratio $\frac{u}{\beta}$. The expected electrical deflectabilities in the two theories therefore have the difference:
$\frac{u}{\beta'}-\frac{u}{\beta}>0$.
A beam of a certain magnetic deflectability is thus deflected more strongly in the relative theory than in the sphere theory, and the difference is the greater, the greater the magnetic deflectability. Of course this statement, like the analogous ones below, refers to the absolute difference, not to the percentage difference. To illustrate it one may use the calculated values of $\bar{y}$ for the two theories in the table above; their difference increases with increasing $\bar{z}$.
For u = 0 (magnetic deflection is zero) we have:
$\frac{u}{\beta'}-\frac{u}{\beta}=0$.
For u = ∞ (magnetic deflection equal to infinity) we have:
$\frac{u}{\beta'}-\frac{u}{\beta}=\frac{1}{10}$.
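These limiting values are easy to confirm numerically (again an illustration added to the translation): for a common u one computes β from (24), β' from (25), and forms u/β' − u/β.

```python
from math import log, sqrt

def beta_sphere(u):
    # invert eq. (24) by bisection (1/u increases monotonically with beta)
    f = lambda b: 3 / (4 * b) * ((1 + b * b) / (2 * b) * log((1 + b) / (1 - b)) - 1)
    lo, hi = 1e-6, 1 - 1e-12
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 1 / u else (lo, mid)
    return 0.5 * (lo + hi)

def beta_relative(u):
    return 1 / sqrt(1 + u * u)     # closed-form inverse of eq. (25)

for u in (0.01, 1.0, 100.0):
    diff = u / beta_relative(u) - u / beta_sphere(u)
    print(f"u = {u:8}   u/beta' - u/beta = {diff:.4f}")
# tends to 0 for u -> 0 and to 1/10 for u -> infinity, as stated above
```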
Since an experimental decision between the two theories is the more likely, the more their results differ, we may conclude that measurements of the electrical deflectability which are to lead to a decision between the theories are more appropriately carried out with cathode rays than with Becquerel rays.
## § 8. Difference between the theories for cathode rays of a certain discharge potential.
If homogeneous cathode rays are used for the deflection experiments, then, [ 759 ] besides the magnetic and electric deflectability, a third characteristic of the rays can be measured: the discharge potential; and it appears appropriate to characterize the ray directly by the value of its discharge potential. In this case the question arises: in what way, as regards the magnetic and electric deflectability of a ray of a certain discharge potential, can the theories be distinguished? The discharge potential of P volt determines the energy E of the ray, since:
$E=\epsilon P\cdot10^{8}$.
Now for any theory:
$E=q\frac{\partial H}{\partial q}-H=qp-H$,
so for the sphere theory by (19):
$E=\frac{3}{2}\mu_{0}c^{2}\left(\frac{c}{2q}\log\frac{c+q}{c-q}-1\right)$
and for the relative theory by (21):
$E=\mu_{0}c^{2}\left(\frac{c}{\sqrt{c^{2}-q^{2}}}-1\right)$.
We again denote the quantities calculated according to the latter theory by primed letters and re-introduce β and u by (23); for equal energy the relation between β and β' is then expressed by the equation:
$\frac{3}{2}\left(\frac{1}{2\beta}\log\frac{1+\beta}{1-\beta}-1\right)=\frac{1}{\sqrt{1-\beta'^{2}}}-1$.
Furthermore, as previously:
$\frac{1}{u}=\frac{3}{4\beta}\left(\frac{1+\beta^{2}}{2\beta}\log\frac{1+\beta}{1-\beta}-1\right)$
$\frac{1}{u'}=\frac{\beta'}{\sqrt{1-\beta'^{2}}}$.
From these equations the results follow:
1. For the velocity:
β' < β,
i.e. a ray of a certain discharge potential possesses in the relative theory a smaller velocity than in the sphere theory.
2. For the magnetic deflectability:
u' < u,
i.e. a beam of a certain discharge potential possesses in the relative theory a smaller magnetic deflectability than in the sphere theory. The difference vanishes for infinitely great and infinitely small discharge potentials and has a maximum for the discharge potential P = $3,2\cdot10^{5}$ volt (β = 0,834). As regards the practical size of this number one may say that, within the range of currently feasible measurements, the difference between the theories is the greater, the greater the discharge potentials to which we advance.
3. For the electrical deflectability:
$\frac{u'}{\beta'}\begin{matrix} >\\ =\\ <\end{matrix}\frac{u}{\beta}$ for $P\begin{matrix} <\\ =\\ >\end{matrix}1,1\cdot10^{6}$ volt $\left(\beta\begin{matrix} <\\ =\\ >\end{matrix}0,987\right)$,
i.e. a beam of a certain discharge potential possesses in the relative theory a greater, equal, or smaller electrical deflectability than in the sphere theory, depending on whether the discharge potential is smaller than, equal to, or greater than $1,1\cdot10^{6}$ volt. One may therefore say that, within the range of currently feasible measurements, the electrical deflectability of such a ray is always greater in the relative theory than in the sphere theory, and the difference is the greater, the smaller the discharge potential.
For P = 0 (β = β' = 0) there is especially:
$\frac{u'}{\beta'}-\frac{u}{\beta}=\frac{1}{20}$.
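The three statements of this section can also be checked numerically (an added sketch, with assumed constants: ε/μ0 = 1,878·10^7 in electromagnetic units as in (27), and c = 3·10^10 cm/s, so that E/(μ0 c²) = (ε/μ0)·P·10^8/c² for a discharge potential of P volt). The scan below finds a maximum of u − u' at a few times 10^5 volt and a sign change of u'/β' − u/β near 10^6 volt, in agreement with the values quoted above within the roughness of the assumed constants.

```python
from math import log, sqrt

E_OVER_M = 1.878e7       # epsilon/mu_0 in electromagnetic units, eq. (27)
C = 3.0e10               # speed of light in cm/s (assumed rounding)

def W(P):
    """E/(mu_0 c^2) for a discharge potential of P volt, using E = eps*P*10^8."""
    return E_OVER_M * P * 1e8 / C**2

def beta_sphere(w):
    # solve (3/2)((1/(2b)) log((1+b)/(1-b)) - 1) = w by bisection
    f = lambda b: 1.5 * (log((1 + b) / (1 - b)) / (2 * b) - 1)
    lo, hi = 1e-9, 1 - 1e-15
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < w else (lo, mid)
    return 0.5 * (lo + hi)

def beta_relative(w):
    return sqrt(1 - 1 / (1 + w)**2)      # from 1/sqrt(1-b^2) - 1 = w

def u_sphere(b):
    return 1 / (3 / (4 * b) * ((1 + b * b) / (2 * b) * log((1 + b) / (1 - b)) - 1))

def u_relative(b):
    return sqrt(1 - b * b) / b

best_P, best_diff, crossover, prev_sign = None, -1.0, None, None
for P in range(10_000, 2_000_001, 1_000):
    b, bp = beta_sphere(W(P)), beta_relative(W(P))
    u, up = u_sphere(b), u_relative(bp)
    if u - up > best_diff:
        best_P, best_diff = P, u - up
    sign = (up / bp - u / b) > 0
    if prev_sign is not None and sign != prev_sign and crossover is None:
        crossover = P
    prev_sign = sign

print("u - u' is largest near P =", best_P, "volt")            # a few times 1e5 (text: 3,2e5)
print("u'/b' - u/b changes sign near P =", crossover, "volt")  # near 1e6 (text: 1,1e6)
b, bp = beta_sphere(W(100)), beta_relative(W(100))
print("small-P limit of u'/b' - u/b :", u_relative(bp) / bp - u_sphere(b) / b)  # ~1/20
```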
Simultaneous measurements of the discharge potential and of the magnetic and electrical deflectability of cathode rays have, as is known, been carried out by H. Starke.[10] Perhaps they can already be used for a test of the two theories; so far, however, I have found no opportunity to go into this question.
## Discussion.
Kaufmann: As the one most immediately concerned I should like to add a few words. I ask you to circulate the drawn curve and the five original plates, on which you see two symmetrical curve branches of the same form as in the drawing. As for the results, there is complete agreement between Planck and me, and it is gratifying to me that Planck's very different treatment has led to identical final numbers; this suggests that no computational errors are contained in my calculations. As regards the conclusion, it follows from the observational facts that neither Lorentz's nor Abraham's theory agrees with them. This conclusion is certain. Lorentz's theory fares even worse than Abraham's. The deviations of Lorentz's theory (10-12 percent) are so great that they cannot be explained at any point by [ 760 ] observational errors. So if no fundamental error exists in the observations, then Lorentz's theory is ruled out. In Abraham's theory the deviations amount to 3-5 percent; these too lie outside the margin of observational error. But the possibility of errors adding up in such a way that a difference of this size results cannot, nevertheless, be excluded.
Planck: If the correction of the theoretical numbers required for a complete explanation of the observations lies outside the observational error, then it is conceivable to me that, once it is taken into account together with the errors of observation, one could come closer to Lorentz's theory than to Abraham's. From the mere fact that the deviations of one theory are smaller, a preference for it does not follow.
Bucherer: I would like to return to the speaker's remark that my theory is not sufficiently developed to be discussed here further. I have occupied myself intensively with it and its consequences, and found that it is not substantially better than either the earlier or the later theory of Lorentz. One objection is that in dispersion theory the electromagnetic masses create difficulties; in all other respects the theory of electrons deformed at constant volume, and of the correspondingly deformed system, performs the same as the newer theory of Lorentz. As for the extension of Planck's considerations to my electron, I could not say what the consequences of this reasoning would be for it. But I want to call attention to some theoretical points. The dynamics of the electron is not only testable by the deflection of Becquerel rays. Abraham has alluded to the need of attributing to the Lorentz electron a special internal energy. That difficulty seems greater to me than the deviation from Kaufmann's measurements. Against Einstein's theory of relativity certain objections can be raised. He uses Maxwell's equations, but ignores that certain conditions are not met, namely the validity of the divergence theorem of the electrical force.
After I had recognized that the existing theories, including mine, do not meet the requirements, I asked myself the question whether it is possible to remain consistent with experience while keeping Maxwell's equations and the principle of the equality of action and reaction. This is possible if we rely on the following principle: the ponderomotive force between two systems in uniform relative motion to each other is, with due regard to sign, the force calculated from the Maxwell-Lorentz equations as acting upon the system which is arbitrarily regarded as at rest. Of course, in this picture the concept of the aether is not present, for as soon as I introduce relative motions and fix the coordinate system in an arbitrary body of the dynamical system, I give up the aether theory. I have followed out the consequences and have come to some conclusions in relation to Kaufmann's measurements, which I want to mention. First, the rigid electron would come into question. On the basis of the relative theory we come to the conclusion that other forces act when the Becquerel rays are directed not parallel but obliquely to the capacitor plate. Here an easy possibility would be given to test the principle of the relative theory on the basis of Maxwell's equations; it suffices to let Becquerel rays fly obliquely to the electric or magnetic field. For vertical motion one obtains, oddly enough, the same forces as Lorentz. I have considered whether the deviation of Kaufmann's measurements is based on the fact that such an angle is present.
Runge: I would like to ask Mr. Planck the following: in the contradiction which he finds when he calculates β from the first value, one must take care whether a small change in the observation already produces a large change in the value. One would have to calculate the interval of β corresponding to the still permissible errors of observation.
Planck: β is proportional to $\frac{z'}{y'}$. A small change of y' would already do much, because y' is small compared with z'. The errors, however, are already so great that one cannot use the values; we would in any case have to discard the outermost values, for we cannot use them for the theory. To Bucherer I should like to put a question: can your equations be brought into a Lagrangian form? And if so, what value does the Lagrangian function H take in your newer theory; have you investigated this?
Bucherer: No, I have not studied it and could not decide it at this moment.
Planck: That would be very important, since by means of the Lagrangian form the [ 761 ] equations of motion of the electron could be reduced to those of general mechanics.
Bucherer: I suspect that Maxwell's equations can be brought into the Lagrangian form. I have not yet investigated this particular question, but I suppose it is possible, because I use Maxwell's equations without modification for the quasi-stationary motion; but I do not want to say anything definitive.
Abraham: If you look at the numbers, it is clear from them that the deviations of Lorentz's theory are at least twice as large as those of my own, so it may be said that the sphere theory represents the deflectability of the β-rays twice as well as the relative theory. (Great laughter.) When I consider what the state of the question was five years ago, when I began my involvement with it, I must be satisfied with the results; for at first I did not believe that the formula would agree with the experiments, and I was very surprised when Kaufmann told me one day that the formula agreed well with the more refined measurements. However, I see the advantage of the sphere theory over the relative theory not only in the better agreement with the measurements, but also in the fact that it is a purely electromagnetic theory. We started from the question of whether the mass of the electron is a purely electromagnetic quantity. The sphere theory answers this question; it regards the energy of cathode rays as purely electromagnetic. The ansatz for the electromagnetic energy density was also taken as a basis by Lorentz. However, the Lorentz electron has, in addition to the electromagnetic energy, a kind of internal potential energy, as I have shown and as has still not been refuted. In the relative theory one would therefore not regard the cathode rays as purely electromagnetic processes, but as processes which cannot be explained by electrodynamics.
Gans: I would like to point out that any assumption about changes of form of the electron in motion naturally brings more parameters into the theory, so that one can fit the phenomena better.

The Michelson-Morley and the Trouton-Noble experiments require a certain difference between the longitudinal and transverse dilatations, yet the ratio remains undetermined.

One could therefore set up further theories in which this ratio of the longitudinal to the transverse dilatation takes various other values; one of them would explain the Becquerel phenomena best, but one could not say that it is the correct one; it would only be a retroactive adjustment to the phenomena.
Planck: Abraham is right when he says that the main advantage of the sphere theory would be that it is a purely electrical theory. If that were feasible it would be very fine; for the present it is only a postulate. The Lorentz-Einstein theory rests on the postulate that no absolute translation can be demonstrated. It seems that the two postulates cannot be united, and it now comes down to which postulate one prefers. Lorentz's is the more sympathetic to me. It is probably best if work continues in both directions and the experiments eventually bring the decision.
Sommerfeld (Munich): I should not like to join in Planck's pessimistic view for the time being. In view of the extraordinary difficulty of the measurements, the deviations could perhaps have their origin in unknown sources of error. As to the question of principle formulated by Planck, I would suspect that the gentlemen under forty will prefer the electrodynamic postulate, while those over forty will prefer the mechanical-relativistic postulate. I prefer the electrodynamic one. (Laughter.)
Kaufmann: On the question of postulates I would like to say that the epistemological value of the postulate of relative motion is, however, not very great, since it is useful only for uniform translation. As soon as we take rotations and non-uniform motions into account, we no longer get along with it. One may try to banish the aether (which is often felt to be uncomfortable) from the world, but one then has to reintroduce it for rotational motions, for example in the flattening of celestial bodies.
Planck: Of course this concerns only uniform translation. Non-uniform motion can already be demonstrated by mechanical means, but uniform motion cannot. The requirement is that what cannot be demonstrated in mechanics should not be demonstrable in electrodynamics either.
1. l. c. p. 525 and p. 544.
2. l. c. p. 526 and p. 547.
3. l. c. p. 547
4. l. c. p. 527.
5. M. Abraham, Ann. d. Phys. (4) 10, 105, 1903.
6. H. A. Lorentz, Versl. Kon. Akad. v. Wet Amsterdam 1904, S. 809. A. Einstein, Ann. d. Phys. (4) 17, 891, 1905. See also H. Poincaré, C. R. 140, 1504, 1905.
7. for example M. Planck, Verh. D. Phys. Ges. 8, 140, 1906.
8. l. c. p. 529, equations (14) and (15).
9. H. Starke, Verh. D. Phys. Ges. 5, 241, 1903.
http://mathoverflow.net/questions/5786?sort=newest | ## How do I check if a functor has a (left/right) adjoint?
Because adjoint functors are just cool, and knowing that a pair of functors is an adjoint pair gives you a bunch of information from generalized abstract nonsense, I often find myself saying, "Hey, cool functor. Wonder if it has an adjoint?" The problem is, I don't know enough category theory to be able to check this for myself, which means I can either run and ask MO or someone else who might know, or give up.
I know a couple of necessary conditions for a functor to have a left/right adjoint. If it doesn't preserve either limits or colimits, for example, I know I can give up. Is there an easy-to-check sufficient condition to know when a functor's half of an adjoint pair? Are there at least heuristics, other than "this looks like it might work?"
-
## 9 Answers
The adjoint functor theorem as stated here and the special adjoint functor theorem (which can also both be found in Mac Lane) are both very handy for showing the existence of adjoint functors.
First here is the statement of the special adjoint functor theorem:
Theorem Let $G\colon D\to C$ be a functor and suppose that the following conditions are satisfied:
(i) $D$ and $C$ have small hom-sets
(ii) $D$ has small limits
(iii) $D$ is well-powered i.e., every object has a set of subobjects (where by a subobject we mean an equivalence class of monics)
(iv) $D$ has a small cogenerating set $S$
(v) $G$ preserves limits
Then $G$ has a left adjoint.
Example I think this is a pretty standard example. Consider the inclusion CHaus$\to$Top of the category of compact Hausdorff spaces into the category of all topological spaces. Both of these categories have small hom-sets; it follows from Tychonoff's theorem that CHaus has all small products, and it is not so hard to check that it has equalizers, so it has all small limits, and the inclusion preserves them. CHaus is well-powered since monics are just injective continuous maps and there is only a small collection of topologies making any subspace compact and Hausdorff. Finally, one can check that $[0,1]$ is a cogenerator for CHaus. So $G$ has a left adjoint $F$ and we have just proved that the Stone-Čech compactification exists.
If you have a candidate for an adjoint (say the pair $(F,G)$) and you want to check directly it is often easiest to try and cook up a unit and/or a counit and verify that there is an adjunction that way - either by using them to give an explicit bijection of hom-sets or by checking that the composites $$G \stackrel{\eta G}{\to} GFG \stackrel{G \epsilon}{\to} G$$ and $$F \stackrel{F \eta}{\to} FGF \stackrel{\epsilon F}{\to} F$$ are identities of $G$ and $F$ respectively.
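To make the unit/counit recipe concrete, here is a tiny Python sketch (my own illustration, not part of the answer) of the adjunction $(-\times A) \dashv (A \to -)$ between sets: the unit sends $x$ to $a \mapsto (x,a)$, the counit is evaluation, and the two triangle identities can be spot-checked pointwise.

```python
# F(X) = X x A for a fixed finite set A;  G(Y) = "functions A -> Y".
A = ["a1", "a2", "a3"]

def eta(x):              # unit at X:  X -> G(F(X)),  x |-> (a |-> (x, a))
    return lambda a: (x, a)

def eps(pair):           # counit at Y:  F(G(Y)) -> Y,  (f, a) |-> f(a)
    f, a = pair
    return f(a)

# Triangle identity 1:  G(eps) o eta_{G(Y)} = id_{G(Y)}, checked pointwise on a sample g.
g = lambda a: len(a)                     # an element of G(Y) with Y = integers
composite = lambda a: eps(eta(g)(a))     # (G(eps) o eta)(g), evaluated at a
assert all(composite(a) == g(a) for a in A)

# Triangle identity 2:  eps_{F(X)} o F(eta_X) = id_{F(X)}, checked on sample pairs.
for x in ["x1", "x2"]:
    for a in A:
        assert eps((eta(x), a)) == (x, a)

print("triangle identities hold on all sample values")
```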
I thought (although I am at the risk of this getting excessively long) that I would add another approach. One can often use existing formalism to produce adjoints (although this is secretly using one of the adjoint functor theorems in most cases so in some sense is only psychologically different). For instance as in Reid Barton's nice answer if one can interpret the situation in terms of categories of presheaves or sheaves it is immediate that certain pairs of adjoints exist. Andrew's great answer gives another large class of examples where the content of the special adjoint functor theorem is working behind the scenes to make verifying the existence of adjoints very easy. Another class of examples is given by torsion theories where one can produce adjoints to the inclusions of certain subcategories of abelian (more generally pre-triangulated) categories by checking that certain orthogonality/decomposition properties hold.
I can't help remarking that one instance where it is very easy to produce adjoints is in the setting of compactly generated (and well generated) triangulated categories. In the land of compactly generated triangulated categories one can wave the magic wand of Brown representability and (provided the target has small hom-sets) the only obstruction for a triangulated functor to have a right/left adjoint is preserving coproducts/products (and the adjoint is automatically triangulated).
-
I wish I could double-vote! I think this is a very nice expansion of your original answer. – Andrew Stacey Nov 17 2009 at 8:27
Cool! Stone-Čech from abstract nonsense! – Harrison Brown Nov 17 2009 at 16:32
What a lovely answer. – Alex Collins Nov 17 2009 at 19:39
Well, in my limited experience Brown representability is very useful in practice when you work with homotopy categories of model categories (which are complete and cocomplete). The relevant functors on those big categories are sometimes hard to describe explicitly, although it may be possible to compute them on (some subclass of) compact objects. – Simon Pepin Lehalleur Aug 9 2010 at 18:49
Certainly the fundamental theorem of adjunctions and Freyd's existence theorem. This theorem is, moreover, the basis of a whole literature (e.g. on reflective subcategories) investigating whether, and under what circumstances, the conditions of Freyd's theorem are satisfied.
(See, for example, "Abstract and Concrete Categories - The Joy of Cats" in http://www.tac.mta.ca/tac/reprints/index.html, or the old H. Herrlich, Topologische Reflexionen und Coreflexionen. Lecture Notes in Mathematics 78).
But there is another clever and strategic route to the existence or the construction of an adjoint: by the theory of monads and tripleability we can, under suitable assumptions, lift an adjoint along a functor in a given diagram (there are several theorems in this respect).
See "Triples, algebras and cohomology Jonathan Mock Beck "in http://www.tac.mta.ca/tac/reprints/index.html
-
Here is one situation where left and right adjoints always exist. Let $\mathcal{C}$ and $\mathcal{D}$ be semisimple $k$-linear categories such that
• $\mathcal{C}$ has only finitely many simple objects, and
• for any simple object $X$ of $\mathcal{C}$ or $\mathcal{D}$, $\operatorname{End}(X) \cong k$.
The second condition is automatic if $k$ is algebraically closed. Under these circumstances, any $k$-linear functor $\mathcal{C} \to \mathcal{D}$ has both a left and a right adjoint, and their constructions are quite explicit.
This fact is useful for working with fusion categories over an algebraically closed field (which automatically satisfy the conditions on $\mathcal{C}$).
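A hedged illustration of the bookkeeping behind this (my own sketch, not part of the answer): with $\operatorname{End}(X)\cong k$ for every simple object, a $k$-linear functor is encoded by its matrix of multiplicities on simples, $\dim\operatorname{Hom}$ of semisimple objects is a dot product of multiplicity vectors, and the candidate adjoint is given on objects by the transposed matrix.

```python
import numpy as np

# C has simples X1..X3, D has simples Y1..Y4 (all with End = k).
# A k-linear functor F: C -> D is encoded by M[j, i] = multiplicity of Yj in F(Xi).
M = np.array([[1, 0, 2],
              [0, 3, 1],
              [2, 1, 0],
              [1, 1, 1]])

# Objects are multiplicity vectors; dim Hom(A, B) = sum_i a_i * b_i when End(simple) = k.
def dim_hom(a, b):
    return int(np.dot(a, b))

A = np.array([2, 1, 3])        # an object of C
B = np.array([1, 0, 4, 2])     # an object of D

FA = M @ A                     # F(A) as an object of D
GB = M.T @ B                   # candidate adjoint on objects: the transposed matrix

# dim Hom_D(F A, B) = (M A) . B = A . (M^T B) = dim Hom_C(A, G B)
assert dim_hom(FA, B) == dim_hom(A, GB)
print(dim_hom(FA, B), dim_hom(A, GB))
```

Of course this only checks dimensions; making the adjoint functorial on morphisms is where the hypotheses above are actually used.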
-
Lots of people-who-are-fond-of-adjoint-functor-theorems have responded to this post saying "adjoint functor theorems". Let me give a more mundane and rather different answer which fits much better into my world view.
In my experience (which may differ from others), the true answer is that category theorists have these adjoint functor theorems which work well in some cases, but whose problem, as I see it, is simply that they are quite general. I am certainly not a category theorist but I am a "working mathematician" and my experience with these general theorems has been quite negative. For example take the notion of free groups. I was talking to another staff member here once and they said they'd just set an UG project on constructing free groups and I said "can't you just say 'done by adjoint functor theorem'?" and we both laughed because we knew it was true. And then I actually went and looked up SAFT and checked that it could construct free groups---and it can't, because the category of groups (hardly an exotic or esoteric category!) does not have a small cogenerating set. As I write, the top answer here has 15 votes and a lovely statement of SAFT but if you can't get free groups with it then you surely have to question its usefulness. In fact, although this sort of goes against the grain of what most people are saying here, in my experience you would be crazy trying to invoke adjoint functor theorems to construct free groups: you're much better off making them yourself, not least because making them yourself will teach you much more about how the objects work.
My experience is that things like SAFT are almost always justified with the statement "Stone-Cech compactification!". I have heard this justification, and only this, so often now that the excuse is wearing thin.
So here is my answer: Yes, there are some general theorems. But if you're not a category theorist then in my experience they have limited applicability. You're better off thinking about things on your own, saying "hmm, here's an object with structure X, how might one naturally build an object with structure Y from it?". If you can go both ways you might well have constructed a pair of adjoint functors, and you could then try to check this by doing mathematics rather than waving general category-theory theorems which have specifically been designed with a one-size-fits-all purpose in mind, and which don't apply to such exotic categories as the category of groups.
-
I agree with much of this. SAFT, in particular, seems to have few uses - though there are significant uses other than Stone-Cech. Regarding free groups, SAFT doesn't guarantee existence, but GAFT (General AFT = "the" AFT) does, as per my answer. – Tom Leinster Nov 18 2009 at 8:49
I agree that SAFT has somewhat limited applicability and that it is often a good idea to try to actually build adjoints (as one often needs to understand them not just know they exist) and that this can be very enlightening. But I think that knowing the general categorical machinery exists is worthwhile in case one wants to use it. I also think that the feel of the proof of AFT is not unlike how one often goes about building these things by hand at least in some sense. In my opinion there are lots of general machines to which this comment and your answer applies. – Greg Stevenson Nov 18 2009 at 10:06
Greg: I disagree with your statement "the proof of AFT is not unlike how one often goes about building these things by hand". But that might well be because we're sampling from very different sample spaces. Here's an example of adjoints that has been fundamental in my mathematical career: pushforward and pullback of sheaves of abelian groups on schemes. Here pushforward and pullback are adjoints but neither construction, it seems to me, looks (to me) at all like what goes into AFT. [continued in a sec] – Kevin Buzzard Nov 18 2009 at 18:52
Here's what this example looks like (to me). Given f:X-->Y a map of schemes, then for a sheaf F on X one does ones best to define a sheaf f_*F on Y. Conversely given a sheaf G on Y one does one's best to define a sheaf f^*G on X. Now, after making the constructions, one does some mathematics and proves that the constructions are adjoint. The point I'm trying to make is that AFT ideas do not, it seems to me, go into the constructions. The work is in checking adjointness and so in some sense the mathematics seems to be elsewhere and not AFTish. But perhaps you are thinking of other examples! – Kevin Buzzard Nov 18 2009 at 18:55
@buzzard: It is probably true that our sample spaces are somewhat different. I mostly wanted to make it clear that I didn't mean the exact proof; it is probably the first place a lot of people run across the idea that "if you need to build something, take everything close enough and then beat it into submission with limits/colimits" which I think is useful. Also your example looks different to me. The first step is to build the inverse image which I think is best done in general via left Kan extension. Then one can play around and find out what one wants for sheaves of modules. – Greg Stevenson Nov 18 2009 at 20:41
Other people have mentioned the Adjoint Functor Theorems. Here's a different perspective.
There's a famous Cambridge exam question set by Peter Johnstone:
```Write an essay on
(a) the usefulness, or
(b) the uselessness,
of the Adjoint Functor Theorems.
```
I agree with the undertone of the question: the Adjoint Functor Theorems (AFTs) aren't as useful as you might think when you first meet them. They're not useless: but my own experience is that the range of situations in which I've had no easy way of constructing the adjoint, yet have been able to verify the hypotheses of an AFT, has been very limited.
Perhaps more useful than knowing the AFTs is knowing some large classes of situation where an adjoint is guaranteed to exist. Here are two such classes.
${}$1. Forgetful functors between categories of algebras. Any time you have a category $\mathcal{A}$ of algebras, such as Group, Ring, Vect, ..., the forgetful functor $\mathcal{A} \to \mathbf{Set}$ has a left adjoint. What's not quite so well-known is that you don't have to forget all the structure; that is, the codomain doesn't have to be Set.
For example, the functor $\mathbf{AbGp} \to \mathbf{Group}$ forgetting that a group is abelian automatically has a left adjoint. The functor $\mathbf{Ring} \to \mathbf{Monoid}$ forgetting the additive structure of a ring automatically has a left adjoint. The forgetful functor $\mathbf{Assoc} \to \mathbf{Lie}$, sending an associative algebra to its underlying Lie algebra (with bracket $[a, b] = a\cdot b - b \cdot a$) automatically has a left adjoint. (That might not look so much like a forgetful functor, but that's only because the bracket on an associative algebra isn't given as a primitive operation in the usual definition of associative algebra: it has to be derived from the other operations.)
The same can be said if you talk about topological groups, rings, etc, basically because Top has all small limits and colimits.
All that is a consequence of the General AFT (= 'the' AFT in some people's usage). To my mind it's the principal reason why it's worth learning or teaching the General AFT.
${}$2. Kan extensions. Let $F: \mathbf{A} \to \mathbf{B}$ be any functor between small categories. Then there's an induced functor $$F^{*}: {[\mathbf{B}, \mathbf{Set}]} \to {[\mathbf{A}, \mathbf{Set}]}$$ defined by composition with $F$. (Here ${[\mathbf{B}, \mathbf{Set}]}$ means the category of functors from $\mathbf{B}$ to $\mathbf{Set}$, sometimes denoted ${\mathbf{Set}}^{\mathbf{B}}$.)
The fact is that $F^{*}$ always has both a left and a right adjoint. These are called left and right Kan extension along $F$. The same is true if you replace $\mathbf{Set}$ by any category with small limits and colimits.
This is really useful, though that might not be obvious. For example, suppose we're interested in representations of groups. A group can be regarded as a one-object category, and the category of representations of a group $G$ is just the functor category $[G, \mathbf{Vect}]$. Now take a group homomorphism $f: G \to H$. The induced functor $$f^{*}: [H, \mathbf{Vect}] \to [G, \mathbf{Vect}]$$ sends a representation of $H$ to a representation of $G$ in the obvious way. And it's guaranteed to have both left and right adjoints. These adjoints turn a representation of $G$ into a representation of $H$, in a canonical way. I believe representation theorists call these the 'induced' and 'coinduced' representations, at least in the case that $G$ is a subgroup of $H$ and $f$ is the inclusion.
Exercise: let $G$ be a group. There are unique homomorphisms $G \to 1$ and $1 \to G$, where $1$ is the trivial group. Each of these two homomorphisms induces a functor "$f^{*}$" between the category $[G, \mathbf{Set}]$ of $G$-sets and the category $[1, \mathbf{Set}] = \mathbf{Set}$ of sets. These two functors each have adjoints on both sides. So we end up with six functors and four adjunctions. What are they?
The existence of Kan extensions is best derived from the theory of ends. In fact, ends allow you to describe them explicitly.
-
Is there a good place to learn about ends and coends with good (and not too sophisticated) examples? The page on the n-lab didn't help me that much because it goes very quickly into the enriched context and I think that your wonderful notes on category theory don't cover it. – Gonçalo Marques Aug 20 2010 at 13:52
There are two parts to this answer.
1. First, a functor must be continuous (cocontinuous) to have a left (right) adjoint. Most of the times, it is easy to check that a functor does not preserve (co)limits and thus it cannot have a a left (right) adjoint.
2. (co)continuity is not enough to actually prove that a functor has the required adjoint, but it is almost good enough. Let me elaborate on this. If you have a functor $F:P\to Q$ between complete partial orders (and thus cocomplete) then it is an easy exercise to construct a left adjoint by taking a $\sup$ of an appropriate subset. This can be generalized in a straightforward way to any functor by taking an appropriate (co)limit. The bad news is that this (co)limit is in general over a large category so it may not exist. This is where the so-called solution-set conditions come in; they are a way to trim down this large category to a small one.
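As a toy instance of the poset case (my own sketch, not part of the answer): for a function f: A → B, the preimage map $f^{-1}: P(B) \to P(A)$ preserves all joins and meets; its left adjoint is the direct image and its right adjoint is the "universal image", and the two Galois-connection conditions can be verified by brute force on small sets.

```python
from itertools import combinations

A = {1, 2, 3, 4}
B = {"p", "q", "r"}
f = {1: "p", 2: "p", 3: "q", 4: "q"}          # a sample function A -> B

def powerset(S):
    S = list(S)
    return [frozenset(c) for r in range(len(S) + 1) for c in combinations(S, r)]

def preimage(T):                    # f^{-1}: P(B) -> P(A), preserves meets and joins
    return frozenset(a for a in A if f[a] in T)

def direct_image(S):                # left adjoint of preimage
    return frozenset(f[a] for a in S)

def universal_image(S):             # right adjoint of preimage
    return frozenset(b for b in B if preimage({b}) <= S)

# Adjunctions, stated as Galois connections:
#   direct_image(S) <= T   iff   S <= preimage(T)
#   preimage(T) <= S       iff   T <= universal_image(S)
for S in powerset(A):
    for T in powerset(B):
        assert (direct_image(S) <= T) == (S <= preimage(T))
        assert (preimage(T) <= S) == (T <= universal_image(S))
print("both adjunctions verified on all subsets")
```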
As many people already said there are various variations of this type of conditions, from the more general but also very cumbersome to check solution-set condition, to easier conditions which combine some form of well-poweredness (each object has only a set of subobjects -- or quotients, whatever the case may be) with the existence of a small separating (or generating) set. One that guarantees the existence of a right adjoint and that sticks out particularly in my memory is the existence of a small dense subcategory -- check chapter V of Kelly's book on enriched category theory for the precise details. It is particularly memorable, because many categories come with god-given small dense subcategories, like presheaf categories (courtesy of Yoneda) and sheaf categories (because dense composed with left adjoint is dense).
Later edit: many people have complained about the limited usefulness of the adjoint functor theorems in that in many cases there is a direct, and thus much more enlightening, construction. But there are situations where such a direct construction is not available. One that I came across recently is when studying P. Johnstone's book Stone spaces, more precisely chapter III and the section on Manes' theorem about the monadicity of the category of compact Hausdorff spaces. In the sequel, P. Johnstone proves another result due to Manes, the fact that the category of algebras (in the sense of universal algebra) in the category of compact Hausdorff spaces is also monadic. He remarks that one has to use the GAFT (and Beck's monadicity theorem) in this case, because there is no easy direct description of the left adjoint. Later in the book (somewhere, I am quoting from memory and do not have the book by me), he argues why there is no simple recipe for the left adjoint.
-
Obligatory n-lab reference: adjoint functor theorems
Figuring out when functors had adjoints or not was something I did a lot of in Comparative Smootheology (section 8).
Edit: Thought I'd expand on my comment to Andrew Critch's answer. A simple application of the Special Adjoint Functor Theorem is to universal algebra where it becomes:
Theorem Let $D$ be a category that has finite products, is co-complete, is an $(E, M)$ category where $E$ is closed under finite products, is $E$-co-well-powered, and its finite products commute with filtered co-limits. Let $V$ be a variety of algebras. Let $F$ be a category with co-equalisers. Let $G : F \to DV$ (here, $DV$ is the category of $V$-algebra objects in $D$) be a covariant functor. Then the following statements are equivalent.
1. $G$ has a left adjoint.
2. The composition $|G| : F \to D$ of $G$ with the forgetful functor $DV \to D$ has a left adjoint.
In particular, if we take $D$ to be $Set$, the category of sets, then we obtain the following (which can be found in any text book on universal algebra), in which the variety of algebras $V$ is identified with its category of models in $Set$:
Corollary Let $F$ be a co-complete category, $V$ a variety of algebras. For a covariant functor $G : F \to V$, the following statements are equivalent.
1. $G$ has a left adjoint.
2. $G$ is representable by a co-$V$-algebra object in $F$.
3. $|G|$ is representable by an object in $F$.
(And, of course, all of this can be turned round for adjoint pairs of contravariant functors)
In further particular, if $G : F \to V$ preserves underlying sets then $|G|$ is representable (by the initial $F$-object) and so $G$ has a left adjoint.
-
Thanks Andrew. I would like to be able to double-vote your expansion; it is the other example I had in mind and I'm glad you added this since you've given a much cleaner and more complete statement than I would have. – Greg Stevenson Nov 17 2009 at 9:14
Thanks for this; I wish I could accept both your answer and Greg's, since they're both clear and useful! You have my +1, anyway. – Harrison Brown Nov 17 2009 at 16:35
I have to admit that I'm not overly happy with the "can only accept one answer" imposition by the software - I'd like to accept more than one on some of my questions. However, I think you made the right choice here. Stone-Cech outweighs anything else in my view. – Andrew Stacey Nov 17 2009 at 18:38
I think frequently the easiest to check sufficient conditions are the following, which also make precise why "this looks like it might work" is so often successful:
Theorem: A functor $G: C\to D$ is a right adjoint functor (i.e. has a left adjoint) if and only if for each object $Y$ in $D$, there exists an initial morphism $\phi_Y:Y\to G(I_Y)$ from $Y$ to $G$. Moreover, once you find such an initial morphism from each $Y$ to $G$, the association $Y\mapsto I_Y$ extends in a unique way to act on morphisms defining a functor $F: D\to C$, which moreover is left adjoint to the original functor $G$.
This is well-known and easy to prove (well, depending on who you ask), but is non-trivial and involves many steps, which are explained relatively well here. (Essentially one is recovering the entire adjoint situation from just one functor and a unit transformation.)
Once you know it, you can really take confidence in "follow your nose"-style adjoint construction. It doesn't involve having an "initial guess" for the left adjoint (as a functor), but actually constructs it for you in a way that is uniquely determined by the limited data of the initial morphisms --- really unique, not just up to natural isomorphism.
As an example of how this can be useful, think of the inclusion functor $U$ from $AbGrp$ to $Grp$. It's easy to see that any group $H$ has an abelianization $Ab(H) = H/[H,H]$ in $AbGrp$ with a map $H\to Ab(H)$ satisfying an initial (universal) property. But then by the above theorem, we can automatically extend this association in a unique way to act on morphisms as well, defining an abelianization functor $Ab$ which is left adjoint to the inclusion $U$.
This same trick expedites the construction of adjoints in pretty much any situation you can think of.
Edit: Sometimes this theorem is used as an alternative definition for adjoint functors in terms of universal morphisms. However you look at it, the real utility is knowing that this "weak", and in fact asymetric, condition actually implies the "stronger", symmetric definitions of adjoints via hom-sets or units/counits. I think it's really worthwhile to sift through the three different characterizations of adjoints given on Wikipedia.
-
To me, this result says "A functor has an adjoint if you can construct the adjoint." Freyd's adjoint functor theorem is more like "A functor has an adjoint if it looks like it has an adjoint" which seems more useful. – Andrew Stacey Nov 17 2009 at 7:39
No! It says "if you can build just a tiny bit of an adjoint, than the rest of it falls into place." I edited my answer to elaborate on this, because I think this fact doesn't get enough attention in general :) – Andrew Critch Nov 17 2009 at 8:04
Isn't this "just" the characterization in terms of initial objects in comma categories? The work lies in showing uniquness of various morphisms, I'd have thought. In the course I took (waves at TL) it wasn't clear that this was a faster way to construct the free group functor than, say, applying GAFT. It all depends how (over)confident one is that there really is an adjoint pair – Yemon Choi Nov 17 2009 at 8:09
Well secretly you are building the unit and showing it gives bijections on hom-sets. It is a nice fact, and I am all for using the various definitions of adjunction as the situation warrants, but I feel like this still boils down to a sufficient condition for G to have an adjoint is that you can build one. Am I missing something? (By the way - I don't want this to sound snide - it is a good answer) – Greg Stevenson Nov 17 2009 at 8:12
@Yemen, it is "just" that :) @Greg, not quite: you don't build the unit, you start with it. Then you build a functor to make the unit an actual natural transformation. – Andrew Critch Nov 17 2009 at 8:40
There's the Freyd Adjoint Functor Theorem.
A right adjoint functor is continuous (commutes with limits) and a left adjoint functor is cocontinuous (commutes with colimits). So, if a functor has a left adjoint then it is continuous because it is a right adjoint. The adjoint functor theorem is a partial converse to this fact in the case that the domain category is complete (has all small limits) and the functor satisfies a "smallness condition".
-
I did say "easy-to-check..." :) – Harrison Brown Nov 17 2009 at 6:30
It isn't always "easy" although this depends on your definition ;) but they are surprisingly checkable in some situations. I'll try and think of some good examples where this is used and edit my answer to include them if that would help. – Greg Stevenson Nov 17 2009 at 6:35
I think by "easy" I mean "quickly verifiable?" Analogously to checking that some limit really doesn't commute, to check that it's not. But if you could illustrate an example, yeah, that'd be fantastic. – Harrison Brown Nov 17 2009 at 6:54
This is usually very easy to check. It's at least as easy as finding the initial morphism in Andrew Critch's answer because it's often easier to prove a general condition than find a specific instance. For example, for the adjoint of the inclusion functor you don't even need to know that abelianisation exists! It's enough that the inclusion AbGrp to Grp preserves the underlying sets and the adjunction follows for free. – Andrew Stacey Nov 17 2009 at 7:35
It's not all that hard to verify the definition directly in a lot of cases. Do you mean "not tedious"-to-check? – marc Nov 17 2009 at 18:16
http://physics.stackexchange.com/questions/3452/can-gravitational-potential-energy-be-released-in-a-fire | # Can gravitational potential energy be released in a fire?
If one takes a bundle of wood high up into the mountains, so that its potential energy increases, would more heat be obtained by burning it?
-
I just can't make any sense of this question... – David Zaslavsky♦ Jan 20 '11 at 23:27
I guess its just a bundle of logs for a fireplace. Answer: No, but You can "earn" back that potential energy when You carry the oxidation products down from the mountains. – Georg Jan 20 '11 at 23:45
Oh my. I stared at it for several minutes until realized that "vjazanka draw" means "вязанка дров" in russian. It means "bundle of logs" indeed. – Kostya Jan 21 '11 at 11:19
Apart from the mangled English which could have been corrected, it is an interesting question. – Philip Gibbs Jan 21 '11 at 15:32
I see, Kostya, haha. Even with my Western Slavic attitude, I can now understand that it is "vázanka dřev". The "draw" is a really cute spelling, especially because it is an English word that has nothing to do with the wooden logs. :-) – Luboš Motl Jan 22 '11 at 8:50
## 6 Answers
Yes (at least in certain circumstances).
For simplicity, let's assume that there's no ash left, and the combustion products are gasses that have the same density as the atmosphere. Now, the state of the system when you burn wood at the top of the mountain looks very much like the state of the system when you burn wood at the bottom of the mountain, so what happened to the energy you expended when you hauled the wood up the mountain? The answer is that the air pressure at the top of the mountain is lower than the air pressure at the bottom, so the fire at the top expends less work displacing air to account for the gasses released in combustion, and so produces more heat.
How about in real life? You would also have to take into account the density of the gasses produced by combustion, and compare the potential energy of these gasses (taking into account the oxygen consumed) with the work done displacing the air. I'm not going to do the research to figure this out, but I think it very likely the answer is also "Yes".
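For a sense of scale (a rough, hedged estimate of my own, with assumed stoichiometry): treating dry wood as cellulose burning as C6H10O5 + 6 O2 → 6 CO2 + 5 H2O with the water leaving as vapour, the whole "pushing the atmosphere away" term is of order Δn·R·T per mole, which is well under one percent of the heat of combustion; any altitude dependence is a correction to this already small term.

```python
R = 8.314      # J/(mol K)
T = 298.0      # K, rough ambient temperature
M = 0.162      # kg/mol for the C6H10O5 repeat unit of cellulose (dry wood, assumed)
dn = 11 - 6    # net moles of gas created per mole burned (6 CO2 + 5 H2O(g) - 6 O2)
heat = 16e6    # J/kg, rough heating value of dry wood (assumed)

pv_term = dn * R * T / M     # atmospheric-displacement term per kg of wood
print(f"p-V term : {pv_term / 1e3:.0f} kJ per kg of wood")                    # ~76 kJ/kg
print(f"fraction : {100 * pv_term / heat:.2f} % of the heat of combustion")   # ~0.5 %
```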
ADDED MATERIAL:
There's been an objection to this answer, based on the fact that if you burn magnesium (say), which consumes oxygen without producing any gaseous combustion products, the fire at the top of the mountain produces less heat than the fire at the bottom. But consider what happens when you burn magnesium. You turn magnesium, Mg, into magnesium oxide, MgO. MgO is heavier than Mg. This means that if you haul magnesium up a mountain, burn it, and then bring the ashes down, you've actually gained energy in this process, since the ashes going down are heavier than the magnesium you took up. Where did this extra energy come from? It comes from the fact that the fire produces more heat on the bottom of the mountain.
-
-1 To learn what the nonsense with Your answer is, just do some circular process: enclose the wood in a chamber, reduce the pressure to the same pressure as on the mountaintop, burn it and then carry everything up the mountain. – Georg Jan 22 '11 at 15:21
And you're not using any energy pumping the gas out of the chamber? The energy difference caused by pressure isn't very large compared to the energy released by burning the wood, but it exists. – Peter Shor Jan 22 '11 at 15:39
@Peter Shor: You make a good point regarding air pressure. The reasoning is slightly fuzzy to me, but I suspect it's correct nonetheless. – Noldorin Jan 22 '11 at 16:15
You're mixing up so many things in this answer I don't know where to start. How about this: If the thing you're burning produces 10 moles of gas for every 1 mole of O2 consumed, then the volume of gas increases from reactants to products, and the enthalpy change is decreased by the need to push away the atmosphere. It's decreased less at higher altitude, so your conclusion is correct. However, if the fuel produces 1 mole of gas for every 10 moles of O2 consumed, then the volume of gas decreases, and the enthalpy change is actually greater than the energy change due to the atmosphere's pushing. – Keenan Pepper Jan 22 '11 at 18:14
@Peter: I think this is a good example of exactly the problem you highlighted on meta months ago. – Joe Fitzsimons Jan 22 '11 at 18:20
Simply, no. Gravitational potential energy cannot be released by burning (combustion).
Gravitational potential energy is quite independent of chemical potential energy (which is ultimately electromagnetic in form). At the level of fundamental forces, they do not interact. Burning only converts chemical potential energy (related to the binding energy of chemical compounds) into heat (thermal energy and EM radiation). Hence, gravity is basically irrelevant.
Saying this, there may be a small indirect effect of gravity. For a consideration of the possible effect of air pressure (which varies with altitude), see Peter Shor's answer.
Philip Gibbs also makes an interesting point relating to General Relativity. However, this relies on space-time curvature for a fixed observer, and is not as simple as considering gravitational potential.
-
But the chemical energy has a mass equivalent which is lost on burning. This affects the amount of potential energy. So this conclusion is wrong. – Philip Gibbs Jan 22 '11 at 12:34
@Philip: What? The two forms of energy are independent. I'm pretty sure it is correct. – Noldorin Jan 22 '11 at 13:52
What is " chemical potential energy"? I know chemical potential, but that chemical potential energy is unknown in chemistry and thermodynamics. Somebody else used this expression some days ago here. – Georg Jan 22 '11 at 14:05
In general relativity no form of energy is independent of gravitational energy, because all forms of energy gravitate. Chemical energy certainly adds to gravitational pull otherwise the equivalence principle would not hold. I'm puzzled by what you mean by saying that they are independent. – Philip Gibbs Jan 22 '11 at 14:05
It seems to me that there is an ambiguity here that is causing some trouble in giving a definitive answer. The question does not specify how the material for the fire is burned, or what it consists of, and this can potentially give rise to different answers. Clearly if we elevate the material to 100km, some material won't burn, because of the lack of an oxygen rich atmosphere. On the other hand, some materials which are already oxygen rich can still be burned.
To lift this ambiguity, and hopefully crystallize the question, lets consider a specific case: You have a sealed insulated contained in which a certain amount of fuel is burned with a fixed amount of oxygen. Further, let's assume there are no ashes left over. (Although it is relatively straight forward to expand this treatment to deal with that case, I think this gives a cleaner question). The container is then opened either at sea level, or at some elevation $H$. The question is now whether the latter process heats the atmosphere more than the first.
The answer to this question is clearly yes. In both cases exactly the same number and type of molecules are produced, and in each case these have the same kinetic energy. However the gas in the container that has been elevated also has some additional energy: the potential energy gained by each molecule due to it's relative elevation in the gravitational field. Once the containers are opened, in each case the gas will eventually come in to equilibrium with the atmosphere. Clearly in the latter case, the total energy of the atmosphere and gas as a whole is now higher than that of the first case by a factor of $m g H$, where $m$ is the mass of the gas released. As the average energy is increased, the temperature must also increase. This is because the possible energy levels for each molecule of the system are the same in both cases, the only way to achieve a Gibbs state with an increased energy expectation value is to increase the temperature.
I hope this can finally resolve this discussion.
Your answer seems fine, but it depends on a very specific setup. – Noldorin Jan 23 '11 at 16:35
@Noldorin: Yes, but there is an ambiguity in the question. As Peter mentioned, if there is no (or little) gas emitted then the situation is different. – Joe Fitzsimons Jan 23 '11 at 17:25
The question is about the effects of gravity on energy and can only be answered properly in the context of general relativity. The answer is that more energy is released when the wood is burnt on the top of the mountain, provided everything is measured from the point of view of a fixed observer and we make some assumptions so that we can ignore uninteresting effects.
I'll assume that the strength of the gravitational pull at rest on the mountain top is the same as at the bottom, because otherwise we would have to consider other effects such as the tiny amount of energy in the wood due to its very slight compression under the gravitational field. This would change as the gravitational pull changed. I don't think that is what the question is meant to be about so I'll ignore it. I will just look at the direct effect of the potential energy as the question seems to suggest.
Energy in relativity is a relative concept. It depends on the relative velocity of the various frames of reference. More significantly for this question, it depends on the position of an observer in a gravitational field. This effect can be measured by observing the frequency of radiation and how the measurement changes with altitude. In practice this has been done experimentally for even quite small vertical displacements.
Consider first observers who are fixed relative to the bundle of wood and ask about the difference in released energy they can measure, if either the wood is burnt at the foot of the hill or at the top. By the equivalence principle there can be no difference unless they include the extra energy released when ashes or gasses released return to ground level. To avoid this effect let's assume that the wood is actually burnt within a sealed container so that only the radiated heat can escape. The conclusion then is that the observers must see exactly the same amount of heat released if they are fixed relative to the container.
The more interesting and correct way to look at the situation is from the point of view of observers fixed relative to the mountain. They could be high up in space collecting all the emitted radiation to measure its total energy, or they could be doing the same thing from a lower altitude. The height will certainly make a difference to the total energy gathered, because radiation is redshifted as it escapes from a gravitational mass. Someone higher up will measure less energy than someone lower down, but that was not the question.
The question is about how a fixed observer will measure the amount of radiated energy released from the combustion. (We can take this to be a fixed observer at large distance so that even people who do not agree that energy conservation makes sense in general relativity will agree that the ADM energy is well defined for this situation)
The answer in this case has to be that when the wood is burnt on the mountain top, more energy will be released. This is because the radiation released from lower down will be redshifted more when it escapes to infinity so less energy will be measured.
Another way to look at this is to consider what happens when someone uses energy to carry the wood to the top of the mountain where it is burnt, and then he returns the ashes and released gases to ground level, harvesting the energy released in doing so. Because the wood was burnt and it released energy, the ashes and gases will actually weigh slightly less afterwards according to mass-energy equivalence, even when all the released ashes and gases are collected. This means that less energy is released when the remains are returned to the ground than was expended in lifting the original wood to the top. This would not have happened if the wood was burnt at ground level. By conservation of energy we have to conclude that more energy was released when the wood was burnt on the mountain top. Notice that I am not saying that more energy is released because the ashes return to ground level. The argument tells us that more energy must have been released during the combustion at altitude even before the ashes are returned.
An exercise for the reader is to calculate the extra energy released using both the explanations I have given and show that the answer is the same.
Edit: To show this is not just hot air I'll give a formula.
If $M$ is the mass of the wood before burning and energy $E$ is released as heat when it is burnt on the ground, then the mass after burning is $M' = M - \frac{E}{c^2}$.
The energy required to lift the wood up the hill a height $h$ is $V = Mgh$, and the energy released on taking the remains back down is $V' = M'gh = V - \frac{Egh}{c^2}$.
The extra heat generated from burning at altitude is therefore $V-V' = \frac{Egh}{c^2}$.
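To get a sense of the size of this effect, here is a rough numerical sketch (added for illustration; the heat of combustion and the mountain height are assumed example values, not taken from the answer):

```python
# Rough size of the general-relativistic correction V - V' = E*g*h / c^2.
E = 1.6e7      # J, assumed heat released by burning ~1 kg of dry wood (~16 MJ/kg)
g = 9.81       # m/s^2, surface gravity
h = 1000.0     # m, assumed height of the mountain
c = 3.0e8      # m/s, speed of light

extra = E * g * h / c**2
print(f"Extra energy from burning at altitude: {extra:.3e} J")
# ~1.7e-6 J: roughly thirteen orders of magnitude smaller than the heat itself,
# so the effect is real in principle but utterly negligible in practice.
```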
This is just wrong. There is no need to talk about relativity here. – Noldorin Jan 22 '11 at 15:00
@Noldorin, General relativity is exactly what is required to fully understand this problem. Your thinking is too Newtonian which is why you think there is no energy difference. In GR all forms of energy interact with gravity. Taking this into account you find the small difference of energy that the altitude makes from the perspective of a fixed observer. – Philip Gibbs Jan 22 '11 at 16:02
How do you know that? General relativity has not been quantised/described under QFT. This is a highly contentious area, about which no-one can speak properly. Perhaps you could explain in more detail how gravity supposedly affects chemical potential energy? I see no way this is possible. – Noldorin Jan 22 '11 at 16:11
How does your answer accord with the equivalence principle? If the atmospheric pressure and gravitational force have the same values at the top of the mountain as at the bottom, shouldn't the physical processes be exactly the same, and the exact same amount of heat released? – Peter Shor Jan 22 '11 at 16:43
@Peter, as I explained in the answer, the heat released as observed by different observers in a fixed position relative to the fire would be the same. This is a clear consequence of the equivalence principle. However, the question only really makes sense if answered from the perspective of a single observer who sees the fires in different relative positions. In that case he sees more energy coming from fires at higher altitude. – Philip Gibbs Jan 22 '11 at 16:55
The energy released in the fire is the difference between the energy of the fuel and the energy of the ash.
If you carry wood up a mountain, it gains energy.
But if you burn wood on a mountain, the ashes also have more energy than the same ashes would have at sea level.
The wood has more energy on the mountain, but the ashes also have more energy. If you burn the wood and end up with the ashes, it doesn't matter whether you burn it up high or burn it down low, because the gravitational energy cancels out.
The energy you put into the wood by carrying it up the mountain didn't go away, but the only way you can get it back is by letting the ashes fall down the mountain. (For example you could make the ashes pull a rope attached to a generator.)
(Note that in real life, the "ash" also includes the CO2 and other gases given off by the wood, so to do the experiment properly you'd have to burn it in some kind of large airtight container.)
APPENDIX: It seems like there are two other issues I need to address to satisfy some people.
The first issue could actually result in a measurable difference in the heat released from burning, and that's the difference in atmospheric pressure. What's going on is that when you completely burn the wood in O$_2$, the same amount of internal energy is always released, but what you actually measure is the enthalpy change $\Delta H = \Delta E + p \Delta V$. Let's look at the cases $\Delta V > 0$ and $\Delta V < 0$ separately.
If the burning of the fuel releases more moles of gas than the moles of O$_2$ it consumes (which happens to be the case for burning real wood), then $\Delta V > 0$ so $\Delta H$ is a less negative number (smaller in magnitude) than $\Delta E$. What this means is that some of the energy from the fire went into pushing away the atmosphere rather than producing heat. The pressure $p$ is less on top of the mountain, so from this you would conclude that you get MORE heat after you carry it up the mountain.
You might GUESS that this extra heat energy is somehow related to the gravitational potential energy, but that would be WRONG as I'm about to show.
Let's consider a different fuel that consumes more moles of gas than it releases when burned. (For example, consider burning a strip of magnesium, which undergoes the simple reaction 2 Mg + O$_2$ $\rightarrow$ 2MgO (s).) In this case $\Delta V < 0$, so $\Delta H$ is more negative (greater in magnitude) than $\Delta E$, which means that you actually GAIN energy from the pushing of the atmosphere. Since the pressure is less on top of the mountain, this means that burning a strip of magnesium would provide LESS heat up there than it would at sea level.
What's going on here? The wood and the strip of magnesium both gain potential energy from being carried up the mountain, but one provides more heat up there, and the other provides less! This is because in both cases, the difference between the amounts of heat released at different altitudes is due to the difference in the energy of the atmosphere, NOT the gravitational energy of the solid fuel.
APPENDIX 2:
This is wrong because the mass of the ashes plus gasses is slightly less after burning than the mass of the wood before. This is due to the release of energy which has a small mass equivalence. See my answer for the correct analysis. – Philip Gibbs Jan 22 '11 at 12:29
-1 This answer is hugely misinformed and misleading in my view. – Noldorin Jan 22 '11 at 13:50
Keenan, your additional analysis totally neglects the effects of gravity. Your model simply doesn't include it at all. If you really want to do this rigorously, write down the partition function for the gas in a gravitational field and do the full calculation. – Joe Fitzsimons Jan 22 '11 at 19:04
@Noldorin, what's your complaint? – Keenan Pepper Jan 22 '11 at 19:04
Well you didn't really differentiate between forms of energy. Your appendices help however, so I'm tempted to remove the down-vote now. Still, if you could clarify this, it would help I think. – Noldorin Jan 22 '11 at 19:32
I think the answer is very simple.
Suppose the combustion happens without smoke or gas rising, for simplicity. All the mass stays at the same height, but it changes chemical form. There will be some slight variation due to binding energy, etc., but that behaviour is consistent at all heights and independent of the gravitational potential.
So, it is clear that whether you burn your fire at sea level or at 1000m, nothing changes from a gravitational potential standpoint.
This said, there are differences, due to enthalpy, due to the concentration of oxygen in the atmosphere, and so on. Some of these have been considered in other answers and are correct - I think they are unrelated to the basic question though.
Will the increased potential energy of the wood make a difference? No, it can't because the same mass ends up in the same relative place after the combustion, no matter at what height you perform the experiment.
The mass does not remain in the same place if there is gas produced. In that case some of the mass becomes gas, and the gas then falls into equilibrium. – Joe Fitzsimons Jan 22 '11 at 21:02
@Sklivvz: No, it doesn't. The molecules literally fall (on average). – Joe Fitzsimons Jan 22 '11 at 21:11
@Sklivvz: No, its not. It's due to the fact that the Gibbs states for a gas in a gravitational potential are different to those for a gas free from gravity. – Joe Fitzsimons Jan 22 '11 at 21:22
@Sklivvz: You do have them at a lower height, that's the whole point. The charred remains may stay at the top, but anything that has literally gone up in smoke will reach equilibrium with the atmosphere, which will alter its average elevation. – Joe Fitzsimons Jan 22 '11 at 21:38
@Sklivvz: of course you don't see an effect if you model the whole atmosphere without gravity (which is what you are doing here), but that model does not reflect reality. – Joe Fitzsimons Jan 23 '11 at 5:42
show 11 more comments | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 24, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9550690054893494, "perplexity_flag": "middle"} |
http://physics.stackexchange.com/questions/53608/confused-over-complex-representation-of-the-wave | # Confused over complex representation of the wave
My quantum mechanics textbook says that the following is a representation of a wave traveling in the +$x$ direction:$$\Psi(x,t)=Ae^{i\left(kx-\omega t\right)}\tag1$$
I'm having trouble visualizing this because of the imaginary part. I can see that (1) can be written as:$$\Psi(x,t)=A \left[\cos(kx-\omega t)+i\sin(kx-\omega t)\right]\tag2$$
Therefore, it looks like the real part is indeed a wave traveling in the +$x$ direction. But what about the imaginary part? The way I think of it, a wave is a physical "thing" but equation (2) doesn't map neatly into my conception of the wave, due to the imaginary part. If anyone could shed some light on this kind of representation, I would appreciate it.
If you have to visualize it, it would be as a cylindrical spiral going through space, where you have a 2D plane being the complex plane, and the third axis being x. – NeuroFuzzy Feb 11 at 10:05
The wavefunction itself isn't a "thing" that has a real only value everywhere in space. The physical thing is the probability, which is obtained by multiplying the wavefunction with its complex conjugate and integrating over the space under consideration. – Chris Feb 11 at 10:20
In quantum mechanics you can always multiply every wavefunction by a phase $\exp(i\phi)$ and all physical quantities are unchanged, so the real part is no more or less physical than the imaginary part. In fact, splitting the complex plane into real and imaginary parts is rather unphysical, and not very useful most of the time. You can see immediately that equation (1) is a wave travelling in the $+x$ direction because it has the form $\Psi=f(kx-\omega t)$ for the function $f(\cdot)=A\exp(\cdot)$. Any such expression is a travelling wave. A wave does not have to be a $\sin$ or $\cos$. – Michael Brown Feb 11 at 10:25
More generally in quantum mechanical scattering theory, concerning the fact that a plane wave $Ae^{i\left(kx-\omega t\right)}$, with a stationary time-independent probability density $|A|^2$, is interpreted as a right mover, see also this Phys.SE post. – Qmechanic♦ Feb 11 at 13:31
## 3 Answers
What if I told you the wave function was given by:
$$\Psi(x,t)=A \cos(kx-\omega t)\,\hat{i}+A\sin(kx-\omega t)\,\hat{j}$$
where $\hat{i}$ and $\hat{j}$ represent the unit vectors in the x and y directions.
If so, you could think about the wave oscillating in two separate spatial dimensions.
Now the wave function is actually instead:
$$\Psi(x,t)=A\left[ \cos(kx-\omega t)+i\sin(kx-\omega t)\right]\tag2$$
But what's the difference? With vectors you must keep the $\hat{i}$ and $\hat{j}$ components separate when doing equations; similarly, with complex numbers you solve equations keeping the real parts equal and the imaginary parts equal. You can thus think of the wave function as having two dimensions, a real dimension and a complex dimension.
With vectors, you obtain the squared magnitude by adding the squares of the x-component and the y-component:
$\text{magnitude}^2 = a^2 + b^2$, if $a$ is the x-component and $b$ is the y-component of the vector.
Similarly, to obtain the physically meaningful result of probability in quantum mechanics you multiply the wave function and its complex conjugate:
$$\text{Probability density} = \Psi(x,t)\times\Psi^{\dagger}(x,t) = (a+bi)\times(a-bi) = a^2 + b^2$$ where $b$ is the imaginary part and $a$ is the real part of the wavefunction. So the probability density is effectively the squared magnitude of the "wave-vector", which has components in the real dimension and the complex dimension.
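As a quick numerical illustration (a sketch added here, not part of the original answer), one can check that for a plane wave the real and imaginary parts oscillate while $\Psi\Psi^{\dagger}$ stays constant:

```python
import numpy as np

# Plane wave Psi(x, t) = A * exp(i(kx - wt)) sampled on a grid.
A, k, w, t = 2.0, 1.5, 3.0, 0.7            # arbitrary illustrative values
x = np.linspace(0, 10, 1000)
psi = A * np.exp(1j * (k * x - w * t))

# Real and imaginary parts oscillate like cos and sin ...
a, b = psi.real, psi.imag
# ... but the probability density a^2 + b^2 = |Psi|^2 is flat:
density = psi * psi.conjugate()            # equals a**2 + b**2
print(np.allclose(density.real, A**2))     # True: uniform density |A|^2
```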
Chris, you can use latex \text{ } tags to nicely format words in equations. Cheers! – Michael Brown Feb 11 at 12:30
Thanks broseph. – Chris Feb 11 at 12:34
The wave function itself is not a "real" thing. I.e. it is not an observable quantity. What's "real" is the probability distribution which is associated with the wave function. The probability of finding the particle between points $x=a$ and $x=b$ (restricting to one dimension for simplicity) is given by:
$$P(a\leq x\leq b)=\int_a^b |\Psi|^2 \mathrm{d}x$$
where $|\Psi|^2=\Psi^* \Psi$ and $\Psi^*$ is the wave-function's complex conjugate. $|\Psi|^2$ is a real-valued function (i.e. its imaginary part is zero). It isn't particularly useful to think of the wave function itself as being a physical wave. What matters is the magnitude of the wave function.
Also, the mathematical wave nature gives rise to interference patterns. The wavefunctions (both real and imaginary components) interfere, and this interference is seen physically after performing the probability calculation above. – Chris Feb 11 at 11:17
I think the question is more about the physical interpretation of the complex expression
$\psi (x,t)=Ae^{i(kx-\omega t)}$
than about its mathematical meaning. For the physical meaning, we think of the probability amplitude as a rotating arrow, which rotates as the particle travels through space. The rotation frequency of the arrow is determined by the energy (frequency) of the particle (photon). This arrow has been given the name 'phasor' because the argument $\phi =kx-\omega t$ is an angle (in wave mechanics it is called the 'phase' of the wave). This phase tells us how many degrees the arrow has rotated from the moment the particle was created until it reaches the point $x$ at time $t$ of its journey.
This complex number representation is very convenient, not only because it shows the phase of the wave but also because it shows the direction (if the wave travels in 3-D). However, its importance in QM comes from the need to combine (add) waves coming from different sources at some point in space. This is not a simple algebraic addition, because the angles involved make the problem geometrical, and the complex number representation handles this very neatly. In a way the phasors add like vectors do (the real with the real, and the imaginary with the imaginary, and it's done!)
The calculation of the probabilities follows rules that are also geometrical. For example, let us think of the two waves coming from the two slits in the DS (double-slit) experiment as:
from slit 1 $S_1: \psi_1(x_1,t)$ and from slit 2 $S_2: \psi_2(x_2,t)$.
The $x_1$ and $x_2$ show the distances the two phasors (waves) traveled by the time they reach some point P on the screen. When these two waves arrive at the screen, they will be added to get the total amplitude first
$A=\psi_1(x_1,t)+\psi_2(x_2,t)$
and then the probability will be the 'square of the modulus' of the total amplitude as
$P=|A|^2= |\psi_1 (x_1,t)|^2+ |\psi_2 (x_2,t)|^2 + 2|\psi_1 (x_1,t)|\times|\psi_2 (x_2,t)|\cos(\theta)$
The third term in the equation above shows the real need for the complex representation of the wave functions in QM, as well as the need for finding first the total probability amplitude, and then finding the probability as the square of the total modulus. This term is the root of all the beautiful interference phenomena we observe in the quantum mechanical world. I hope this helps a little.
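For readers who want to see the third term at work, here is a small numerical sketch (added for illustration; the wavelength, slit separation and screen geometry are made-up values): it adds the two phasors directly and compares $|A|^2$ with the formula above.

```python
import numpy as np

k = 2 * np.pi / 500e-9             # wavenumber for an assumed 500 nm wavelength
d, L = 50e-6, 1.0                  # assumed slit separation and screen distance
y = np.linspace(-0.02, 0.02, 5)    # a few points P on the screen

# Path lengths x1, x2 from the two slits to each screen point
x1 = np.hypot(L, y - d / 2)
x2 = np.hypot(L, y + d / 2)

psi1 = np.exp(1j * k * x1)         # unit-amplitude phasors (the common time factor
psi2 = np.exp(1j * k * x2)         # cancels when taking |A|^2, so it is omitted)

P_direct  = np.abs(psi1 + psi2) ** 2
P_formula = (np.abs(psi1) ** 2 + np.abs(psi2) ** 2
             + 2 * np.abs(psi1) * np.abs(psi2) * np.cos(k * (x1 - x2)))

print(np.allclose(P_direct, P_formula))   # True: the cos(theta) term is the interference
```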
- | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 32, "mathjax_display_tex": 6, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9425327777862549, "perplexity_flag": "head"} |
http://www.reference.com/browse/blood%20flow
# Blood flow
Blood flow is the flow of blood in the cardiovascular system. Mathematically, blood flow is described by Darcy's law (which can be viewed as the fluid equivalent of Ohm's law) and approximately by the Hagen-Poiseuille equation.
Blood is a heterogeneous medium consisting mainly of plasma and a suspension of red blood cells. Red cells tend to aggregate when the flow shear rates are low, while increasing shear rates break these formations apart, thus reducing blood viscosity. This results in two non-Newtonian blood properties, shear thinning and yield stress. In healthy large arteries blood can be successfully approximated as a homogeneous, Newtonian fluid, since the vessel size is much greater than the size of the particles and shear rates are sufficiently high that particle interactions may have a negligible effect on the flow. In smaller vessels, however, non-Newtonian blood behavior should be taken into account.
The flow in healthy vessels is generally laminar, however in diseased (e.g. atherosclerotic) arteries the flow may be transitional or turbulent. The first equation below is Darcy's law, the second is the Hagen-Poiseuille equation:
$$F = \frac{\Delta P}{R}$$
$$R = \frac{8 \nu L}{\pi r^4}$$
where:
F = blood flow
ΔP = pressure difference
R = resistance
ν = fluid viscosity
L = length of tube
r = radius of tube
In the last equation it is important to note that the resistance to flow changes dramatically with respect to the radius of the tube. This is important in angioplasty: it means that a small increase in the radius of a vessel, achieved with a balloon catheter, significantly increases blood flow to the deprived organ.
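To illustrate the $r^4$ dependence, a short numerical sketch (the dilation percentages are just example values):

```python
# Relative flow change when a vessel is dilated, all else (pressure difference,
# length, viscosity) held fixed: F is proportional to r^4 by Hagen-Poiseuille.
for dilation in (0.05, 0.10, 0.20):          # 5 %, 10 %, 20 % wider radius
    gain = (1 + dilation) ** 4 - 1
    print(f"radius +{dilation:4.0%}  ->  flow +{gain:6.1%}")
# radius +  5%  ->  flow + 21.6%
# radius + 10%  ->  flow + 46.4%
# radius + 20%  ->  flow +107.4%
```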
Wikipedia, the free encyclopedia © 2001-2006 Wikipedia contributors (Disclaimer) | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 2, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9445075392723083, "perplexity_flag": "middle"} |
http://quant.stackexchange.com/questions/4886/fastest-algorithm-for-calculating-retrospective-maximum-drawdown/4888 | Fastest algorithm for calculating retrospective maximum drawdown
Simple question - what would be the fastest algorithm for calculating retrospective maximum drawdown ?
I've found some interesting talks but I was wondering what people thought of this question here.
Your question is not clear to me. Do you mean the easiest technique to code, or code that is CPU efficient? – montyhall Dec 30 '12 at 6:28
A vectorized approach (since inception) is very fast. – pat Dec 30 '12 at 9:18
You should probably clarify your question. Most readers are assuming you are asking about retrospective maximum drawdown, whereas I infer from the PDF you want to compute an expectation of it. – Brian B Dec 30 '12 at 15:13
Sorry for the confusion, I indeed meant algorithm/code that is CPU efficient for calculating retrospective maximum drawdown. – speciman Dec 30 '12 at 15:24
3 Answers
I won't give you the answer delivered on a silver platter but hopefully the following will get your started:
a) you need to define exactly over which look-back period you aim to derive the maximum drawdown.
b) you need to keep track of the max price while you iterate the look-back window.
c) you need to keep track of the min price SUBSEQUENT to any NEW max; thus each time you make a new max, you need to reset the max low to zero (relatively speaking, as a divergence from the max value)
this should get you pretty easily to where you want to get without having to iterate the time series more than once. I disagree that a vectorized approach will solve this problem (@Pat, please provide an answer if you disagree; I would be curious how you would approach this in a vectorized manner, because the algorithm here is path-dependent).
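For concreteness, here is one possible single-pass implementation of steps (a)-(c) (my own sketch, not code from the answer or the linked talks; for simplicity it measures drawdown since inception as a fraction of the running peak and assumes strictly positive prices; restrict it to a look-back window or use absolute differences as needed):

```python
def max_drawdown(prices):
    """Largest peak-to-trough decline, as a fraction of the running peak,
    computed in a single pass over a series of strictly positive prices."""
    peak = float("-inf")
    mdd = 0.0
    for p in prices:
        if p > peak:                      # new running maximum
            peak = p
        drawdown = (peak - p) / peak      # decline relative to that maximum
        if drawdown > mdd:
            mdd = drawdown
    return mdd

print(max_drawdown([100, 120, 90, 110, 60, 100]))   # 0.5  (the 120 -> 60 decline)
```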
There is only 1 path from inception (and only 1 iteration required with vectorized dd result of that path). What you are describing above is a rolling dd (*Note I specified since inception). If you have a time series and can show an example of your algorithm and time, I will reproduce using a vectorized approach and compare times. – pat Dec 30 '12 at 23:15
@pat, "rolling dd", well, that is pretty much industry practice. Nobody cares about the maximum draw down over a ten year window, at least I do not know of too many investors with such long term memory. Please go ahead and show your vectorized version (even the one where you define maximum dd from inception), it would add a lot of value to other users. What holds you back? – Freddy Dec 31 '12 at 8:25
Zipline, the open-source Python backtester, has a batch and an iterative implementation for max drawdown.
Here is the batch: https://github.com/quantopian/zipline/blob/master/zipline/finance/risk.py#L284
Here is the iterative: https://github.com/quantopian/zipline/blob/master/zipline/finance/risk.py#L578
disclosure: I'm one of the zipline maintainers
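For comparison with the iterative loop above, here is a vectorized NumPy sketch in the same spirit as a batch (since-inception) calculation; this is my own illustration, not the zipline code itself:

```python
import numpy as np

def max_drawdown_vec(prices):
    """Vectorized maximum drawdown since inception, as a fraction of the running peak."""
    prices = np.asarray(prices, dtype=float)
    running_peak = np.maximum.accumulate(prices)    # highest price seen so far
    drawdowns = (running_peak - prices) / running_peak
    return drawdowns.max()

print(max_drawdown_vec([100, 120, 90, 110, 60, 100]))   # 0.5
```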
(After the clarification, this answer is no longer relevant)
Expected maximum drawdown is going to be highly sensitive to your choice of SDE, and to your calibration of it. Therefore you should play with a variety of parameterizations to estimate your model error.
So far as efficient computation goes, we can regard this as a payoff very similar to a lookback option (much as in the PDF you linked). As with lookback options, the first instinct is to price them using Monte Carlo techniques, but one can actually do so much more quickly using a multi-level PDE solver, at least for sufficiently simple SDEs.
The way a 2-level PDE solver works for a payoff like this is that, rather than having a grid of $(S,t)$ values on which you run your difference equations and boundary conditions, you have a grid of $( \{M,S\}, t )$ values, where $M$ represents the maximum achieved so far. Obviously there are some new boundary conditions that go with it, for example that $\frac{\partial M}{\partial S}=1$ at and above the line $S=M$.
Differencing and updating on this grid, you ultimately end up with a value $V_{0,0}$ corresponding to today's maximum $M_0$ and stock price $S_0$.
See section 5.3.2 of this pdf for how it works with lookbacks. Max drawdowns will be very similar.
- | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 8, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9258064031600952, "perplexity_flag": "middle"} |
http://math.stackexchange.com/questions/210187/relation-between-positive-definite-matrix-and-strictly-convex-function | # Relation between Positive definite matrix and strictly convex function
I have a problem. From Wikipedia (http://en.wikipedia.org/wiki/Positive-definite_matrix), any quadratic form can be written as $$z^TMz$$ where $z$ is a column vector and $M$ is a symmetric real matrix. However, this quadratic function is strictly convex only when $M$ is symmetric positive definite. Why? I thought any quadratic function should be convex. Doesn't $z^TMz>0$ show only that the range of this function is greater than zero?
1. Why isn't the quadratic function of just any symmetric matrix $M$ convex?
2. Why is it strictly convex only in the case when $M$ is positive definite?
Thanks
Suppose $M=\pmatrix{1&0\cr0&-1\cr}$. Then $M$ is symmetric and real, and $z^tMz=x^2-y^2$. Is $x^2-y^2$ convex? What's your definition of convex? – Gerry Myerson Oct 10 '12 at 1:17
OK, so, does $f=x^2-y^2$ fit that definition? What if you pick points with $f(x_1)=f(x_2)=0$, say? – Gerry Myerson Oct 10 '12 at 1:30
hmm... I am confused, can you give me some sketch of mathematical proof? f(x) is convex iff M(f(x)) is SGD ? thankx – Jing Oct 10 '12 at 1:42
First, have you found an example to show that for the $M$ I gave, the function is not convex? Second, what is SGD? – Gerry Myerson Oct 10 '12 at 1:59
## 1 Answer
For any twice differentiable function (on all of $\mathbb{R}^n$), it is convex if and only if its Hessian matrix is positive semidefinite everywhere, and it is strictly convex if its Hessian is positive definite everywhere. You can find this in any standard textbook on convex optimization. Now the function at hand is $z^TMz$, which is clearly twice differentiable (by virtue of being quadratic). Its Hessian is $M+M^T$, which equals $2M$ when $M$ is symmetric (please verify yourself, it helped me a lot to memorize it), and $2M$ is positive definite exactly when $M$ is. So $M$ should be positive definite for that quadratic function to be strictly convex; if $M$ is only positive semidefinite, the function is convex but not strictly so.
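A quick numerical way to see this (an added sketch, not part of the original answer) is to compare the eigenvalues of $M$, which is exactly the positive-definiteness test, with a direct random check of strict midpoint convexity of $z^TMz$; the second matrix below is the $x^2-y^2$ example from the comments.

```python
import numpy as np
rng = np.random.default_rng(0)

def strictly_convex_quadratic(M, trials=1000):
    """Numerically test f(z) = z^T M z for strict midpoint convexity on random pairs."""
    M = np.asarray(M, dtype=float)
    f = lambda z: z @ M @ z
    for _ in range(trials):
        x, y = rng.normal(size=M.shape[0]), rng.normal(size=M.shape[0])
        if not f((x + y) / 2) < 0.5 * f(x) + 0.5 * f(y):   # violated => not strictly convex
            return False
    return True

M_pd  = np.array([[2.0, 0.0], [0.0, 3.0]])    # eigenvalues 2, 3 > 0: positive definite
M_ind = np.array([[1.0, 0.0], [0.0, -1.0]])   # eigenvalues -1, 1: indefinite (x^2 - y^2)

for M in (M_pd, M_ind):
    print(np.linalg.eigvalsh(M), strictly_convex_quadratic(M))
# [2. 3.] True    -- all eigenvalues positive, strictly convex
# [-1. 1.] False  -- a negative eigenvalue ruins convexity
```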
Thank you ! that's very clear. – Jing Oct 10 '12 at 7:04 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 10, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9209948182106018, "perplexity_flag": "middle"} |
http://math.stackexchange.com/questions/55795/tough-probability-question-fair-and-unfair-die-rolling | # Tough probability question: Fair and Unfair die rolling
Have been trying for the last three hours, and going nuts. Please provide HINTS only, not the solution (or answer).
I'm doing this by a dumb approach, way too much of calculations and excel madness. I'm looking for an intelligent shortcut (or many of them). Looking to cut down on things algebraically. Here's the question.
(Dice = plural of die, the thing with the numbers 1 to 6 on its surface.)
Q. 1000 fair and 1000 unfair dice are available in a bag. On a certain day, from 8 a.m. to 6 p.m., 200 dice are randomly selected from the bag every hour and rolled. The faces are recorded and summed, and the rolled dice are then thrown away, ensuring no replacement. The fair dice behave normally ALL the time, but the unfair dice follow a certain pattern:
(a) Between odd and even hours (e.g. between 9 am and 10 am), the unfair dice change their [1, 2, 3] faces to [4, 5, 6] faces for the whole hour. So they become a dice with [4, 5, 6, 4, 5, 6] as the faces.
(b) Between even and odd hours (e.g. between 8 am and 9 am), the unfair dice change their [4, 5, 6] faces to [1, 2, 3] faces for the whole hour. So they become a dice with [1, 2, 3, 1, 2, 3] as the faces.
(c) However, there is an overriding rule: If any of the two bounding hours is prime, then all other unfair dice patterns are discarded, and the unfair dice change ALL their faces to that prime number only. (e.g. [5, 5, 5, 5, 5, 5] from 4 pm to 5 pm and 5 pm to 6 pm.) If both bounding numbers are prime, then the higher one is shown.
Find the probability (correct to 5 decimal places) that the total sum of all throws from 8 am to 6 pm is (a) Even (b) Is a prime
(12 hour format to be followed after 12 noon. 1 o'clock is 1 o'clock, not 13:00 hours).
You've got them switched -- "dice" is the plural of "die". – joriki Aug 5 '11 at 13:45
Oh yes, how shameful. Corrected. Thank you for pointing out. – AndrewL.C. Aug 5 '11 at 13:50
You write "Between odd and even hours (e.g. between 9 am and 10 am)" -- but it seems that this is in fact the only case in which that rule applies, not just an example? – joriki Aug 5 '11 at 13:51
Do you have any reason (mathematical or contextual) to believe that there's a closed form for this? If not, it should best be done by computer. – joriki Aug 5 '11 at 13:52
That's how the problem is. If that's the only instance, then it is the only instance. Yes, you are right. And unfortunately no, the source of the problem does not mention is this is better solved by a computer. Which is why I have been trying with Excel as well. – AndrewL.C. Aug 5 '11 at 13:53
## 2 Answers
Sorry, I temporarily forgot about the capitalized "hints only" part -- hope you didn't see the answer? Here's a hint for part (a): whatever all the other dice show, what happens if you add a single fair die?
I don't immediately see a reason for part (b) to have a closed form, but you could try the following and check to see whether it's accurate enough either using known bounds on the approximations or by calculating the exact result by computer:
The distribution of the sum should be quite well approximated by a Gaussian. You can calculate its mean and variance relatively easily making use of the linearity of expectation. The mean is straightforward; the variance is a bit more messy but can be calculated from the expectation values of the pairwise products. You might even get by with ignoring the correlations and just adding up the variances for the individual dice.
The Gaussian will be rather broad and slowly varying, so it won't be sensitive to the details of the distribution of the primes. So you could integrate it with the prime density $1/\log n$. I wouldn't be surprised if the result was correct to $5$ digits.
[Update:]
It turns out the idea with the Gaussian was good but the ideas with the prime density and ignoring correlations were bad. The correlations reduce the variance by about a third. If you integrate the product of $1/\log n$ with a normal distribution with mean and variance corresponding to the mean and variance of the distribution of the dice sum, the answer is already off in the second digit. However, if you sum the normal distribution over the actual primes, the result is correct to six digits. So you don't need to calculate the whole distribution or to simulate it; all you need is the mean and variance and the primes. The mean is trivial due to linearity of expectation; the variance is a bit more complicated because for the cross product for two unfair dice you need to take into account that they're slightly less likely to be selected in the same hour than in different hours.
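For concreteness, here is a small sketch of that final computation (added for illustration; the mean and variance below are placeholders that must be computed from the dice rules as described above):

```python
from math import exp, pi, sqrt

def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def prob_prime_sum(mu, var):
    """P(total is prime) under a normal approximation N(mu, var):
    sum the Gaussian weight over every prime within +/- 6 sigma of the mean."""
    sigma = sqrt(var)
    lo, hi = int(mu - 6 * sigma), int(mu + 6 * sigma)
    weight = lambda n: exp(-(n - mu) ** 2 / (2 * var)) / (sigma * sqrt(2 * pi))
    return sum(weight(n) for n in range(lo, hi + 1) if is_prime(n))

# Placeholder moments: the real mu and var must come from the dice rules above.
print(round(prob_prime_sum(mu=7000.0, var=5800.0), 5))
```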
Hint: each die has 1/10 chance to be thrown each hour. For a fair die, it has 1/6 chance to come up each of 1,2,3,4,5,6. What is the probability distribution of an unfair die? For the even/odd case, you can then reduce to the probability that each type of die comes up even or odd. But combining the dice to find out if the total is prime sure looks like a mess.
The question is about probabilities, not averages. – joriki Aug 5 '11 at 13:59
@joriki: you are right. That makes it much messier. Fixed (for what it's worth) – Ross Millikan Aug 5 '11 at 14:04 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.953667938709259, "perplexity_flag": "middle"} |
http://physics.stackexchange.com/questions/8602/what-is-the-difference-between-0-rangle-and-0/8604 | # What is the difference between $|0\rangle$ and $0$
in the context of $$a_- |0\rangle =0$$
Are you asking what the bra and ket formalism represent? – anna v Apr 13 '11 at 20:07
## 3 Answers
$|0\rangle$ is just a quantum state that happens to be labeled by the number 0. It's conventional to use that label to denote the ground state (or vacuum state), the one with the lowest energy. But the label you put on a quantum state is actually kind of arbitrary. You could choose a different convention in which you label the ground state with, say, 5, and although it would confuse a lot of people, you could still do physics perfectly well with it. The point is, $|0\rangle$ is just a particular quantum state. The fact that it's labeled with a 0 doesn't have to mean that anything about it is actually zero.
In contrast, $0$ (not written as a ket) is actually zero. You could perhaps think of it as the quantum state of an object that doesn't exist (although I suspect that analogy will come back to bite me... just don't take it too literally). If you calculate any matrix element of some operator $A$ in the "state" $0$, you will get 0 as a result because you're basically multiplying by zero:
$$\langle\psi| A (a_-|0\rangle) = 0$$
for any state $\langle\psi|$. In contrast, you can do this for the ground state without necessarily getting zero:
$$\langle\psi| A |0\rangle = \text{can be anything}$$
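To make the distinction concrete, here is a small numerical sketch (an illustration added here, not from the original answer): write the lowering operator $a_-$ from the question in a truncated number basis; then $|0\rangle$ is a unit vector, while $a_-|0\rangle$ is literally the zero vector.

```python
import numpy as np

N = 5                                    # truncate the number (Fock) basis to 5 levels
# Lowering operator: a|n> = sqrt(n) |n-1>, i.e. sqrt(1..N-1) on the superdiagonal.
a = np.diag(np.sqrt(np.arange(1, N)), k=1)

ket0 = np.zeros(N); ket0[0] = 1.0        # the ground state |0>: a unit vector
result = a @ ket0                        # a_-|0>: the zero vector

print(np.linalg.norm(ket0))              # 1.0 -- |0> is a genuine, normalized state
print(np.linalg.norm(result))            # 0.0 -- a_-|0> has zero length; it is not a state
```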
$|0\rangle$ is a particular nonzero vector in the Hilbert space associated with this system. That vector is nonzero -- in fact, it's usually normalized to have magnitude 1. The 0 on the right refers to the zero vector in the Hilbert space. So they're quite different. For one thing, $|0\rangle$ is a possible state for a particle to be in. 0 isn't (since only unit-magnitude vectors are possible states).
@Tedd Bunn one question: can't we have a state $|0\rangle$ where the ket represents a column vector in a particular basis where all components are zero? for an analogy in 3-space.. take a point with finite coordinates and shift the origin to that point, and in this new basis the point is represented as a 0-component vector. – yayu Apr 14 '11 at 6:06
I think you're misunderstanding what a change of basis is. Shifting the origin of a vector space is not the same thing as a change of basis. A change of basis is an invertible linear transformation (i.e., multiplication by a nonsingular matrix for finite-dimensional spaces). One consequence of this: In any vector space, any nonzero vector is nonzero in all bases. – Ted Bunn Apr 14 '11 at 13:16
wow! I see. thanks! – yayu Apr 14 '11 at 14:11
You may consider 0 as an eigenvalue and write $a|0\rangle = 0|0\rangle$.
Any eigenvector $a|\alpha \rangle = \alpha |\alpha \rangle$ is of a different "length" than the corresponding normalized vector $|\alpha \rangle$. In your particular case the vector $0|0\rangle$ has zero length.
- | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 15, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.952467143535614, "perplexity_flag": "head"} |
http://mathhelpforum.com/advanced-algebra/212285-prove-total-number-cosets-particular-subgroup-particular-group-m.html
1. ## Prove the total number of cosets of a particular subgroup in a particular group is m?
I'm having difficulty in understanding a proof given on wikipedia under "Examples" section. The proof can be found at:
Coset - Wikipedia, the free encyclopedia
Math Problem:
If $Z = \{..., -2, -1, 0, 1, 2, ...\}$ is the additive group of integers and $H$ is the subgroup $mZ = \{...,-2m, -m, 0, m, 2m, ...\}$,
where $m$ is a positive integer, then prove that the cosets of $H$ in $Z$ are the $m$ sets $mZ, mZ+1, \ldots, mZ+(m-1)$.
Proof:
Let $G$ be the additive group of integers $Z = \{..., -2, -1, 0, 1, 2, ...\}$ and $H$ the subgroup $mZ = \{..., -2m, -m, 0, m, 2m, ...\}$
where $m$ is a positive integer.
Then the cosets of $H$ in $G$ are the $m$ sets $mZ, mZ+1, ... , mZ + (m - 1)$, where $mZ+a=\{..., -2m+a, -m+a, a, m+a, 2m+a, ...\}$.
There are no more than $m$ cosets, because $mZ+m=m(Z+1)=mZ$. <--------------My problem?
The coset $mZ+a$ is the congruence class of a modulo $m$
My question:
Why is that the $mZ + m = m(Z+1) = mZ$? Specifically why's that $Z+1 = Z$?
I know it's true visually but how do I prove that $Z+1 = Z$?
Is it possible to kindly help me find the reason why $Z+1 = Z$, and what it has to do with the congruence class modulo $m$ which is stated in the next line of the proof above?
2. ## Re: Prove the total number of cosets of a particular subgroup in a particular group i
suppose i have the entire set of integers:
Z = {.........-4,-3,-2,-1,0,1,2,3,4,5........}
what set do i get if i add 1 to every integer in this set?
one of the peculiarities of the integers is that there is no "smallest integer" nor "largest integer", so there is no "bottom" that gets bigger when we add 1 to everything, nor any "top" that keeps everything under a certain amount.
EDIT: here is a "semi-formal" proof that Z = Z+1.
suppose k is in Z. then k-1 is in Z as well, so k = (k-1) + 1 is in Z+1.
on the other hand, suppose m is in Z+1. then m = n+1, for some integer n. hence m = n+1 is also an integer, whence m is in Z.
thus Z+1 = Z, since the two sets contain each other.
EDIT #2:
let's look at a particular m, say m = 5.
now 5Z = {........-25,-20,-15,-10,-5,0,5,10,15,20,25,30.......}.
1+5Z = {.......-24,-19,-14,-9,-4,1,6,11,16,21,26,31......}
2+5Z = {......-23,-18,-13,-8,-3,2,7,12,17,22,27,32.......}
3+5Z = {......-22,-17,-12,-7,-2,3,8,13,18,23,28,33......}
4+5Z = {.....-21,-16,-11,-6,-1,4,9,14,19,24,29,34......}
so far, we haven't had any "overlap" at all, every integer only occurs in ONE of these sets.
what happens when we have:
5+5Z = {.......-20,-15,-10,-5,0,5,10,15,20,25,30,35.....}
all we have done is "shifted 5Z over 5". but 5Z has no "start" and no "end" (it's infinite both ways), so we don't get any "extras" at the "end" or any "new ones" at the "start".
***************************
personally, i don't like this way of looking at Z/5Z. i prefer to think of it as a PARTITION of Z into 5 subsets:
5Z = all integers of the form 5k, for some integer k.
1+5Z = all integers of the form 5k+1
2+5Z = all integers of the form 5k+2
3+5Z = all integers of the form 5k+3
4+5Z = all integers of the form 5k+4
if you like you can think of this partition as being the equivalence classes of the following equivalence relation:
m~n if m-n is divisible by 5.
so now your question becomes: why are the integers of the form 5k+5 of the form 5k?
because 5k+5 = 5(k+1) (we just change k's).
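here is a short computational check of this partition (added for illustration, not part of the original post): every integer in a sample range lands in exactly one of the five cosets.

```python
# group the integers -25..25 by the coset of 5Z they belong to
cosets = {a: [] for a in range(5)}
for n in range(-25, 26):
    cosets[n % 5].append(n)      # n is in a+5Z exactly when n = 5k+a, i.e. n % 5 == a

for a, members in cosets.items():
    print(f"{a}+5Z:", members)

# every integer appears in exactly one list: the five cosets partition Z
assert sorted(sum(cosets.values(), [])) == list(range(-25, 26))
```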
3. ## Re: Prove the total number of cosets of a particular subgroup in a particular group i
Deveno,...Just brilliant! Thanks for help.
4. ## Re: Prove the total number of cosets of a particular subgroup in a particular group i
Sorry for prematurely closing this thread.
Deveno, I've a question about the notation you used.
You said(just after the asterisks symbol) that:
Deveno said:
"...personally, i don't like this way of looking at Z/5Z. i prefer to think of it as a PARTITION of Z into 5 subsets:"
I was just wondering: how did you interpret this notation "Z/5Z"?
I understand that in the previous sentences you mentioned "shifted 5Z over 5".
Can you tell me how that relates to "Z/5Z"?
5. ## Re: Prove the total number of cosets of a particular subgroup in a particular group i
Z/5Z is the set of cosets. the idea is that the "divisor" notation similarity is to remind you of ab/b = a ("factoring out b"), and so Z/5Z is called a "factor group" or more usually, a "quotient group" of Z.
imagine that the integers are like a deck of cards, and you start dealing the cards 1 by 1, into 5 "bowls". every 5th card goes into the same bowl. the bowls are like the cosets: we wind up with 5 subsets of Z, each of which is completely disjoint with all the other 4, and every integer winds up in exactly one coset (no integers get "left out").
for this to work, in general, the subgroup (in this case 5Z) has to have a certain kind of structure, called "normality". for abelian groups (such as Z), every subgroup is normal, so it's a non-issue.
6. ## Re: Prove the total number of cosets of a particular subgroup in a particular group i
Again thanks for help. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 26, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9223759174346924, "perplexity_flag": "middle"} |
http://www.conservapedia.com/Fibonacci_sequence | # Fibonacci sequence
The Fibonacci sequence is a series of numbers in which each number is the sum of the two preceding numbers, with the first two numbers being 0 and 1. The numbers are known as Fibonacci numbers.
Example: 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144 ...
Written mathematically, the Fibonacci sequence $F_n$ satisfies:
$F_0 = 0$, $F_1 = 1$ and $F_{n} = F_{n-1} + F_{n-2}, \forall n \geq 2$.
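The recurrence translates directly into a short program; for example, the following Python sketch (added for illustration) generates the sequence above:

```python
def fibonacci(count):
    """Return the first `count` Fibonacci numbers, starting F0 = 0, F1 = 1."""
    sequence = [0, 1]
    while len(sequence) < count:
        sequence.append(sequence[-1] + sequence[-2])   # F(n) = F(n-1) + F(n-2)
    return sequence[:count]

print(fibonacci(13))   # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144]
```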
The Fibonacci numbers were first described by the Sanskrit linguist Pingala around the year 450 BC. They arose from the study of meters with long and short syllables.
The first Western mathematician to study this sequence was Leonardo of Pisa, commonly known as Fibonacci, in the consideration of the growth of an idealized rabbit population.
Johannes Kepler discovered that the ratio between succeeding Fibonacci numbers converges to the golden ratio. Fibonacci numbers also occur in Pascal's triangle and in the run-time analysis of Euclid's Algorithm.
## Example of Fibonacci Sequences
• The family trees of honey bees can be depicted using Fibonacci numbers because unmated females will always hatch male young, while mated females will always hatch female young and the Fibonacci sequence has many practical applications in biology, e.g. in the study of reproductive patterns of animals, in the study of spiraling of flowers, etc. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9304795265197754, "perplexity_flag": "middle"} |
http://unapologetic.wordpress.com/2007/09/23/ordered-linear-spaces-i/?like=1&source=post_flair&_wpnonce=eb8d9d939c | # The Unapologetic Mathematician
## Ordered Linear Spaces I
I saw a really great talk today by Howard Barnum of the Los Alamos National Laboratory. It dovetails wonderfully with what I’ve been talking about here, and I think I can help him by bringing my categorical inclination to bear on his subjects. I’ll omit the motivation he was using because I can’t really explain the background, but it makes for a great example of a category.
We define the category $\mathcal{O}rd\mathcal{L}in$ of “ordered linear spaces” by starting with a totally ordered field $\mathbb{F}$. If you know what the real numbers are (I still haven’t defined them here) use them, but otherwise you can get away for now with rational numbers. We consider the category $\mathbf{FinVect}_\mathbb{F}$ of finite-dimensional vector spaces over $\mathbb{F}$ and $\mathbb{F}$-linear maps between them.
Now an ordered linear space is a finite-dimensional vector space equipped with a certain partial order, compatibly with the linear structure. We can do this by specifying a “cone” of vectors to consider as being bigger than ${0}$. Then $v\geq w$ exactly when $v-w\geq0$. We require that if $v\geq0$ then so is each $\lambda v$ with $\lambda\geq0$ in the field $\mathbb{F}$. From this we can tell that if $u=\lambda v+(1-\lambda)w$ with $v\geq0$, $w\geq0$, and $0\leq\lambda\leq1$, then $u\geq0$ by the transitive property of partial orders. That is, the cone contains the line segment between any two of its points. In this situation we say it is a “convex set”. Finally, we require that we can find a positive basis of our vector space — one consisting of positive vectors. This is an ordered linear space, which is an object of $\mathcal{O}rd\mathcal{L}in$. Because the order is specified by its cone, we often call such a space a “cone”.
A morphism in our category is just a linear function from one ordered linear space to another that preserves the partial order. That is, we call a linear function $f:V\rightarrow W$ "positive" if whenever $v\geq0$ in $V$ then $f(v)\geq0$ in $W$. In other words, it sends the one cone into the other. An isomorphism is an isomorphism of vector spaces which identifies the two cones — they must be the "same shape", up to a linear transformation. A subcone — the image of a monomorphism — works out to be exactly what it seems like it should be: a convex cone that fits inside another cone.
There’s a functor from the category $\mathbf{FinSet}$ of finite sets to $\mathcal{O}rd\mathcal{L}in$. We start with a finite set and construct the free vector space on it. We define the cone to be all those vectors with all components positive. For reasons related to our motivation, we call these cones — and any cone equivalent to one of them — “classical”. The linear transformation induced by a function between finite sets is clearly positive, and so this is indeed a functor. It’s not hard to see that the image of any morphism from a classical cone is again classical, and thus the classical cones form a full subcategory of $\mathcal{O}rd\mathcal{L}in$.
There’s a lot more to be said about these things, but I’ll leave it here for now.
Posted by John Armstrong | Category theory
## 4 Comments »
1. [...] posting things at the end of last week due to the conference, I’ll continue my discussion of ordered linear spaces with this observation: because each ordered linear space is a vector space with extra structure, [...]
Pingback by | September 24, 2007 | Reply
2. [...] on a roll with our discussion of ordered linear spaces. So I want to continue past just describing these things and prove a very interesting theorem that [...]
Pingback by | September 24, 2007 | Reply
3. [...] Linear Spaces — Solved! Okay, there’s some sort of problem with these things. I defined the category , showed some properties, and tried to prove a theorem. Here I want to collect what [...]
Pingback by | September 25, 2007 | Reply
4. Many basic properties of cones and partial ordering in the book:
Convex Optimization & Euclidean Distance Geometry
http://meboo.convexoptimization.com
http://convexoptimization.com
Comment by | September 26, 2007 | Reply
http://math.stackexchange.com/questions/4480/sangaku-a-geometrical-puzzle/4492 | # sangaku - a geometrical puzzle
Find the radius of the circles if the size of the larger square is 1x1.
Enjoy!
(read about the origin of sangaku)
-
## 3 Answers
Here's a nicer approach ...
Given two segments emanating from a point and ending at points of tangency with a circle, we know that those segments are congruent. In the figure, the three vertices of the right triangle give rise to three fundamental lengths; note that, since the "green" angle is a right angle, the lengths of the tangent segments are equal to the radius of the circle.
Pythagoras tells us that
$$(b+r)^2 + ( r + a )^2 = ( a + b )^2$$
But we also have that $a+b=1$ (the side of the square), and that $b=a+2r$ (via tangent circles on the "outside" of the right triangle). From these relations, we find that $a=1/2-r$ and $b=1/2+r$.
Therefore,
$$(1/2+2r)^2 + ( 1/2 )^2 = ( 1 )^2$$ $$1/4+2r+4r^2+1/4=1$$ $$8r^2+4r-1=0$$
The roots are $(-1\pm\sqrt{3})/4$, and we select the positive value: $r = (-1+\sqrt{3})/4$.
As Américo noted: The sides of the triangle have lengths $r+a=1/2$, $3r+a=\sqrt{3}/2$, and $1$, so that we have a 30-60-90 triangle.
(I like that there's but the one extraneous value this time, rather than the three in my first attempt. Is there yet a better approach that yields the answer directly, with no extraneous values?
Edit. There is. Immediately after we have deduced that $a=1/2-r$, we know that the short leg of the triangle has length $1/2$, so that its longer leg is $\sqrt{3}/2$. Since that longer leg is also $a+3r=1/2+2r$, we have that $r=(-1+\sqrt{3})/4$.)
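As a quick numerical sanity check (mine, not part of the solution above), one can confirm the positive root of $8r^2+4r-1=0$ and the resulting 30-60-90 side lengths:

```python
from math import sqrt, isclose

r = (-1 + sqrt(3)) / 4
print(8 * r**2 + 4 * r - 1)                # ~0, so r satisfies the quadratic
a, b = 0.5 - r, 0.5 + r                    # the two tangent lengths
legs = (r + a, 3 * r + a)                  # 1/2 and sqrt(3)/2
print(legs, isclose(legs[0]**2 + legs[1]**2, 1.0))   # Pythagoras with hypotenuse 1
```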
-
This one's the most elegant of the lot, IMHO. :) – J. M. Sep 13 '10 at 1:49
accepted as answer for indeed looking nicer than your other solution. – stevenvh Sep 13 '10 at 13:52
Another curiosity: The lines that bisect the square vertically and horizontally are tangent to the outer circles (since $a+r=b-r=1/2$). – Blue Sep 13 '10 at 20:47
Let $r$ be the radius of the circles, and let $\theta$ be the measure of the (smaller) angle made at the corner of the big square.
The width of the square is equal to two radii and the projection of a double diameter (a quadruple-radius), so that
$(1)\hspace{1.0in}4r\cos\theta=1-2r$
Looking at the four right triangles, we see that the center circle's diameter is equal to the difference in the lengths of the legs; since the hypotenuse has length $1$, we have
$(2)\hspace{1.0in}2r = \cos\theta - \sin\theta$
From here, we simply need to eliminate $\theta$.
Multiplying (2) through by $4r$ and substituting in from (1) ...
$$8 r^2 = 4r\cos\theta - 4r \sin\theta = 1 - 2r - 4r \sin\theta$$ $$4r \sin\theta = 1 - 2r - 8 r^2$$
Therefore,
$$\begin{eqnarray}16r^2 &=& (4r \cos\theta)^2 + (4 r \sin\theta)^2 \\ &=& ( 1 - 2r )^2 + ( 1 - 2r - 8 r^2 )^2 \\ &=& 2 - 8 r - 8 r^2 + 32r^3 + 64 r^4 \end{eqnarray}$$
so that
$$0 = 32 r^4 + 16 r^3 - 12 r^2 - 4 r + 1 = (2r+1)(2r-1)(8 r^2 + 4 r - 1)$$
The roots of the polynomial are $\pm1/2$ and $(-1\pm\sqrt{3})/4$. We can eliminate three of them from consideration to conclude that $r = (-1+\sqrt{3})/4$.
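For anyone who wants to double-check the algebra, here is a small symbolic verification (mine, not in the original answer), assuming sympy is available:

```python
import sympy as sp

r = sp.symbols('r')
quartic = 32*r**4 + 16*r**3 - 12*r**2 - 4*r + 1
print(sp.factor(quartic))           # (2*r - 1)*(2*r + 1)*(8*r**2 + 4*r - 1)
print(sp.solve(quartic, r))         # ±1/2 and (-1 ± sqrt(3))/4
```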
-
+1 for the solution and the adequate picture. Just a curiosity: The angles of the triangles are $\pi /6,\pi /3,\pi /2$. – Américo Tavares Sep 12 '10 at 20:56
i.e. a 30-60-90 triangle, but it isn't that obvious until you actually go through the derivation. :) – J. M. Sep 12 '10 at 21:54
This is the approach I followed too, but I had some trouble finding a second equation to get rid of the theta :-( – stevenvh Sep 13 '10 at 13:54
Somewhat related to Don's solution: From the figure, we see that the four triangles are 1: congruent, and 2: right triangles. The hypotenuse of one triangle has length 1, and if we let $\theta$ be the smaller of the two angles of the right triangle, and use $r$ to denote the radius of one circle, then the Pythagorean relation is
$$\cos^2\;\theta+(\cos\;\theta-2r)^2=1$$
This can now be solved as a simultaneous equation with any of the other two equations Don obtained, or we can use another equation, the expression for the inradius $r$:
$$r^2=\frac{(s-1)(s-\cos\;\theta)(s+2r-\cos\;\theta)}{s}$$
where $s=\frac{1+\cos\;\theta+(\cos\;\theta-2r)}{2}$ is the semiperimeter.
If we eliminate $\cos\;\theta$ and solve the two equations here for $r$, we find that the roots of the resulting quartic equation are
$$r=\frac{\pm 1\pm\sqrt{3}}{4}$$
If we carry out Don's approach as well, we find that only one positive value of $r$ is consistent with both systems, and thus has to be the correct answer:
$$r=\frac{-1+\sqrt{3}}{4}$$
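A quick numerical check of this route (mine, not in the original answer): take $r=(\sqrt{3}-1)/4$, recover $\cos\theta$ from the Pythagorean relation, and confirm that the Heron/inradius relation is also satisfied.

```python
from math import sqrt

r = (sqrt(3) - 1) / 4
# Pythagorean relation: cos^2(t) + (cos(t) - 2r)^2 = 1, i.e. 2c^2 - 4rc + (4r^2 - 1) = 0
c = (4 * r + sqrt(16 * r**2 - 8 * (4 * r**2 - 1))) / 4      # positive root, ~ sqrt(3)/2
s = (1 + c + (c - 2 * r)) / 2                               # semiperimeter of the triangle
print(c, r**2, (s - 1) * (s - c) * (s + 2 * r - c) / s)     # the last two values agree
```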
-
+1 for the elegant solution. – Américo Tavares Sep 12 '10 at 20:58
http://physics.stackexchange.com/questions/27224/why-isnt-the-gear-predictor-corrector-algorithm-for-integration-of-the-equation?answertab=active | # Why isn't the Gear predictor-corrector algorithm for integration of the equations of motion symplectic?
Okumura et al., J. Chem. Phys. 2007 states that the Gear predictor-corrector integration scheme, used in particular in some molecular dynamics packages for the dynamics of rigid bodies using quaternions to represent molecular orientations, is not a symplectic scheme. My question is: how can one prove that? Does it follow from the fact that the Gear integrator is not time-reversible (and if so, how can one show that)? If not, how do you prove that an integration scheme is not symplectic?
-
## 1 Answer
Take a look at the notes on lectures 1 and 2 of Geometric Numerical Integration found here. Quoting from Lecture 2
A numerical one-step method $y_{n+1} = \Phi_h(y_n)$ is called symplectic if, when applied to a Hamiltonian system, the discrete flow $y \mapsto \Phi_h(y)$ is a symplectic transformation for all sufficiently small step sizes.
From your link you have $$x(t+h) = x(t) + h \dot{x}(t) + h^2 \left\{\frac{3}{24}f(t+h) +\frac{10}{24}f(t) -\frac{1}{24}f(t-h) \right\}$$ and $$\dot{x}(t+h) = \frac{x(t+h) - x(t)}{h} + h \dot{x}(t) + h \left\{\frac{7}{24}f(t+h) +\frac{6}{24}f(t) -\frac{1}{24}f(t-h) \right\}$$
Now take $\omega(\xi,\eta) = \xi^T J \eta$ where $J = \left(\begin{array}{cc} 0 & \mathbb{I} \\ \mathbb{I} & 0 \end{array}\right)$. Then the integrator is symplectic if and only if $\omega(x(t),\dot{x}(t))=\omega(x(t+h),\dot{x}(t+h))$ for sufficiently small $h$.
All that you need to do is to fill in the values of $x(t+h)$ and $\dot{x}(t+h)$ from the integrator, and show that this condition does not hold.
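To make the test concrete, here is a small numerical sketch (mine, not from the paper or the answer above): it checks $D\Phi_h^T J\, D\Phi_h = J$ by finite differences for two simple one-step maps on the harmonic oscillator $H=(q^2+p^2)/2$, using the standard antisymmetric $J$ (with the minus sign pointed out in the comment below). The same finite-difference test could in principle be applied to a Gear predictor-corrector step, viewed as a map on its full state vector.

```python
import numpy as np

J = np.array([[0.0, 1.0], [-1.0, 0.0]])
h = 0.1

def explicit_euler(y):          # known to be non-symplectic
    q, p = y
    return np.array([q + h * p, p - h * q])

def symplectic_euler(y):        # known to be symplectic
    q, p = y
    p_new = p - h * q
    return np.array([q + h * p_new, p_new])

def jacobian(phi, y, eps=1e-6): # finite-difference Jacobian of the one-step map
    n = len(y)
    M = np.zeros((n, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = eps
        M[:, j] = (phi(y + e) - phi(y - e)) / (2 * eps)
    return M

y0 = np.array([0.3, -0.7])
for name, phi in [("explicit Euler", explicit_euler), ("symplectic Euler", symplectic_euler)]:
    M = jacobian(phi, y0)
    print(name, np.max(np.abs(M.T @ J @ M - J)))   # ~0 only for the symplectic map
```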
-
Dear Joe Fitzsimons, you should insert a minus sign into the matrix $J$ representing the symplectic form $\omega$. – Giuseppe Jan 21 '12 at 15:09
http://math.stackexchange.com/questions/235578/finding-the-limit-of-x-1-0-x-n1-frac11x-n | # Finding the limit of $x_1=0 , x_{n+1}=\frac{1}{1+x_n}$
I have had big problems finding the limit of the sequence $x_1=0 , x_{n+1}=\frac{1}{1+x_n}$. So far I've only succeeded in proving that for $n\geq2$: $x_n>0\Rightarrow x_{n+1}>0$
(Hopefully that much is correct: it is true for $n=2$, and $x_{n+1}>0$ exactly when $\frac{1}{1+x_n}>0$, which leads to the inequality $x_n>-1$, which is true by the induction assumption that $x_n>0$.)
On everything else I failed to come up with answers that make sense (such as proving that $x_{n+1}>x_n \forall n\geq1$). I'm new to recursive sequences, so it all seems like a world behind the mirror right now. I'd appreciate any help, thanks!
-
Have you tried writing down the first 5 or 10 terms of the sequence? You know $x_1=0$, so $x_2=1$, $x_3=1/2$, and so on. This usually helps get me going. – icurays1 Nov 12 '12 at 9:25
I did, it's basically fibonacci/fibonacci(n+1), or that much I could tell. But I'm not sure how to use that knowledge. – Dahn Jahn Nov 12 '12 at 9:27
Try solving $x=\dfrac{1}{1+x}$. It does not prove convergence, but one of the solutions is the limit if such a limit exists. – Henry Nov 12 '12 at 9:32
@DahnJahn: or you can use the closed form formula for the Fibonacci numbers. – Ittay Weiss Nov 12 '12 at 9:39
Couldn't I maybe prove that it is bounded and increasing without proving it's similarity to Fibonnaci? – Dahn Jahn Nov 12 '12 at 9:39
## 5 Answers
It is obvious that $f:x\mapsto\frac1{1+x}$ is a monotonically decreasing continuous function $\mathbf R_{\geq0}\to\mathbf R_{\geq0}$, and it is easily computed that $\alpha=\frac{-1+\sqrt5}2\approx0.618$ is its only fixed point (solution of $f(x)=x$). So $f^2:x\mapsto f(f(x))$ is a monotonically increasing function that maps the interval $[0,\alpha)$ into itself. Since $x_3=f^2(x_1)=\frac12>0=x_1$ one now sees by induction that $(x_1,x_3,x_5,...)$ is an increasing sequence bounded by $\alpha$. It then has a limit, which must be a fixed point of $f^2$ (the function mapping each term of the sequence to the next term). One checks that on $\mathbf R_{\geq0}$ the function $f^2$ has no other fixed point than the one of $f$, which is $\alpha$, so that must be value of the limit. The sequence $(x_2,x_4,x_6,...)$ is obtained by applying $f$ to $(x_1,x_3,x_5,...)$, so by continuity of $f$ it is also convergent, with limit $f(\alpha)=\alpha$. Then $\lim_{n\to\infty}x_n=\alpha$.
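A quick numerical illustration (mine) of the two subsequences: starting from $x_1=0$, the odd-indexed terms increase and the even-indexed terms decrease, both toward $\alpha=(\sqrt5-1)/2$.

```python
from math import sqrt

alpha = (sqrt(5) - 1) / 2
x = [0.0]                               # x_1 = 0
for _ in range(20):
    x.append(1.0 / (1.0 + x[-1]))
print([round(t, 6) for t in x[0::2]])   # x_1, x_3, x_5, ... increasing toward alpha
print([round(t, 6) for t in x[1::2]])   # x_2, x_4, x_6, ... decreasing toward alpha
print(round(alpha, 6))
```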
-
I like this solution. You just prove $(x_1,x_3,\ldots)$ is increasing (and hence converges), then use continuity of $f$ to prove $(x_2,x_4,\ldots)$ converges to the same limit. – littleO Nov 12 '12 at 11:22
Thanks! Your answer in particular got me to the final idea I used to prove this, only I used the sort of math I know how to use :) – Dahn Jahn Nov 13 '12 at 14:26
Here is a way to show that the sequence converges to the unique positive solution to $a=1/(1+a)$: Define $f(x)=1/(1+x)$. Then $f'(x)=-1/(1+x)^2$. For every $n$, the mean value theorem gives $$x_{n+1}-a=f(x_n)-f(a)=(x_n-a)f'(c_n)$$ for some $c_n$ between $x_n$ and $a$. Since $-1<f'(x)<0$ for all $x>0$, this shows that $\lvert x_n-a\rvert$ decreases with $n$. Moreover, we already have $f'(1/2)=-4/9$, and $f'$ is increasing, so $-4/9<f'(c_n)<0$ for all $n\ge2$. Thus $\lvert x_n-a\rvert$ decreases at least as fast as $(4/9)^n$, i.e., geometrically. In particular, $x_n-a\to0$.
Addendum: To do this without differentiation, just rely on the computation $$f(x)-f(y)=\frac1{1+x}-\frac1{1+y}=\frac{y-x}{(1+x)(1+y)}$$ instead.
-
Isn't this sequence a Cauchy one? – Babak S. Nov 12 '12 at 9:43
It converges, therefore it is Cauchy. But yes, you can use this sort of argument to show the Cauchy property directly, but I wouldn't know if the OP is expected to know about Cauchy sequences. – Harald Hanche-Olsen Nov 12 '12 at 9:45
Exactly!. In fact, I wish the OP consider why did(and how did) you take $f(x)$ as defined above. Yours here is perfect Harald +1. – Babak S. Nov 12 '12 at 9:47
The problem would be that I'm not even supposed to "know" of differentiation and the mean value theorem yet (first uni semester..). I feel bad for putting such restraints on the method of solution, but although I shall definitely take a look at this method, I can't use it for my proof. But thanks for broadening my horizons :) – Dahn Jahn Nov 12 '12 at 9:47
Thanks. I do understand your approach, although at this point I'd never come up with it myself. Still, isn't there a more 'elementary' way of proving this? Or rather - there has to be one! – Dahn Jahn Nov 12 '12 at 9:58
Let $\omega\gt0$ such that $\omega^2+\omega=1$, then $x_{n+1}-\omega=-\omega\cdot\frac1{1+x_n}\cdot(x_n-\omega)$ hence $$|x_n-\omega|\leqslant\omega^n\cdot|x_0-\omega|,$$ and the rest is easy since $\omega\lt1$.
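A short check (mine) of the stated bound, with $x_0=0$ and $\omega=(\sqrt5-1)/2$:

```python
from math import sqrt

w = (sqrt(5) - 1) / 2           # the positive root of w^2 + w = 1
x = 0.0
for n in range(15):
    assert abs(x - w) <= w**n * abs(0.0 - w) + 1e-12
    x = 1.0 / (1.0 + x)
print("bound |x_n - w| <= w^n |x_0 - w| holds for n = 0..14")
```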
-
Firstly, I'd like to thank everybody who contributed! Your insight helped me greatly in understanding the problem. Here is my solution (one that I feel is more down to earth in terms of mathematical skills used!)
It is easy enough to prove that $x_n\in[0,1] \forall n$ through induction. Now we definitely have a bounded sequence.
However, it is not monotonic and after some investigation, one can see that it seems to oscillate around its limit $\frac{\sqrt{5}-1}{2}$ - that is, if it actually converges!
Now the crucial step. I used the expression $x_{n+2}=\cfrac{1}{1+\cfrac{1}{1+x_n}}$ to check for which values it is monotonic, which lead to this:
$x_{n+2}>x_n$ for $x_n\in(0,\frac{\sqrt{5}-1}{2})$
$x_{n+2}<x_n$ for $x_n>\frac{\sqrt{5}-1}{2}$
Thus we have two subsequences, both of which are bounded and monotonic and converging to the same limit, only from two different sides.
This answer isn't probably full, but the additional proofs needed are trivial enough, this was the most important step I needed to take to get to the answer.
-
I wonder which mathematical skills my answer is using, that are not down to earth. – Did Nov 13 '12 at 14:38
Your answer seems to be mathematically beautiful, but even after some working with pen and paper, I wasn't able to understand what is going on there. I wish I did! – Dahn Jahn Nov 13 '12 at 14:48
Did you try to compute $x_{n+1}-\omega$ in terms of $x_n$? Once again, I wonder what could prevent you to reach the first formula stated in my answer (especially since you have some pen and paper at your disposal...)--but surely I am missing something. – Did Nov 13 '12 at 18:25
Like that approach. +1 – Babak S. Dec 30 '12 at 9:50
If you're willing to assume the sequence converges to a number $x$, then you can see that $x = \frac{1}{1 + x}$, and then you can solve for $x$.
-
It is a very dangerous practice to be willing to assume a limit exists and then compute with it as if it really exists to find out what it would be if it were to exists. What if it does not exist??? The question was about finding the limit, and not about finding the value of the limit, should that exist. – Ittay Weiss Nov 12 '12 at 9:31
Then I'd get $x=\frac{-1\pm\sqrt5}{2}$, of which the positive one will be the right candidate. How to prove that it really does converge to that, though? edit: oh and sorry, I actually do want to find the value of the limit! – Dahn Jahn Nov 12 '12 at 9:32
@IttayWeiss: Yes and no. You shouldn't stop there, but it makes very good sense to find out first what the limit must be if it exists, and then try to show that the sequence does (or does not) converge to that limit. – Harald Hanche-Olsen Nov 12 '12 at 9:33
I agree that it's important to be careful about issues of convergence, but a different viewpoint is "too much rigor teaches rigor mortis". There's some value in doing "Eulerian math" as well. – littleO Nov 12 '12 at 9:33
@Harald: Yes and no :) It is a good technique since it would either show the limit does not exist or at least give you (several) candidates. But the question posted was about finding the limit and thus the solution offered, I find, reinforces dangerous tendencies of not worrying about existence proofs. – Ittay Weiss Nov 12 '12 at 9:35
http://crypto.stackexchange.com/questions/653/basic-explanation-of-elliptic-curve-cryptography/656 | # Basic explanation of Elliptic Curve Cryptography?
I have been studying Elliptic Curve Cryptography as part of a course based on the book Cryptography and Network Security. The text provides an excellent theoretical definition of the algorithm, but I'm having a hard time understanding all of the theory involved in ECC.
I'm looking for an explanation suitable for someone who has studied at undergraduate level in computer science. Can anyone explain how elliptic curve cryptography works in a simple, straightforward manner?
-
Good question. Can you tell us more? What do you want to know? Do you want to know how the mathematics works? Do you want to know what it gives you and why you should care about ECC (ignoring the mathematical innards and the details of how it works)? What background do you already have? Are you already familiar with public-key cryptography, digital signatures, modular arithmetic, RSA? – D.W. Sep 8 '11 at 5:03
Just out of curiosity, what book are you using? – mikeazo♦ Sep 8 '11 at 11:50
– mikeazo♦ Sep 8 '11 at 14:32
@mikeazo I have both William Stallings and Bruce Schneier book. – user5507 Sep 8 '11 at 16:08
@D.W. I have a basic understanding of Cryptography being a compsci undergrad. Don't know much about the underlying theoretical buildup. I want to know more from a practical view. A simplified theoretical view is highly desirable, since most text books look this from a high level view. – user5507 Sep 8 '11 at 16:11
## 4 Answers
There are some widely used cryptographic algorithms which need a finite, cyclic group (a finite set of element with a composition law which fulfils a few characteristics), e.g. DSA or Diffie-Hellman. The group must have the following characteristics:
• Group elements must be representable with relatively little memory.
• The group size must be known and be a prime number (or a multiple of a known prime number) of appropriate size (at least 160 bits for the traditional security level of "80-bit security").
• The group law must be easy to compute.
• It shall be hard (i.e. computationally infeasible, up to at least the targeted security level) to solve discrete logarithm in the group.
DSA, DH, ElGamal... were primarily defined in the group of non-zero integers modulo a big prime p, with modular multiplication as group law. The characteristics we look for are reached as long as p is large enough, e.g. at least 1024 bits (that's the minimal size for discrete logarithm to be hard in such a group).
Elliptic curve are another kind of group, appropriate for group-based cryptographic algorithm. An elliptic curve is defined with:
• A finite field, usually consisting in integers modulo some prime p (there are also other fields which can be used).
• A curve equation, usually $y^2 = x^3 + ax +b$, where $a$ and $b$ are constant values from the finite field.
The curve is the set of pairs of values $(x, y)$ which match the equation, along with a conventional extra element called "the point at infinity". Since elliptic curves initially come from a graphical representation (when the field consists of the real numbers $\mathbb{R}$), the curve elements are called "points" and the two values $x$ and $y$ are their "coordinates".
Then we define a group law, called point addition and denoted with a "$+$" sign. The definition looks quite artificial, with all the business about tracing a line and computing the intersection of that line with the curve; but the bottom line is that it has the characteristics required for a group law, and it is easily computable (there are several methods; as a rough approximation, it costs about 10 multiplications in the base field). The curve order (the number of points on the curve) is close to $p$ (the size of the finite field): the curve order is equal to $p+1-t$ for some integer $t$ such that $|t| \leq 2\sqrt{p}$.
Compared to the traditional multiplicative group modulo a big prime, elliptic curve variants of cryptographic algorithms have the following practical features:
• They are small and fast. There is no known efficient discrete-logarithm solving algorithm for elliptic curves, beyond the generic algorithms which work on every group. So we get appropriate security as soon as $p$ is close to 160 bits. Computing the group law costs ten field operations, but on a field which is 6 times smaller; since multiplications in a finite field have quadratic cost, we end up with an appreciable speedup.
• Creating a new curve is not easy. Generating a new big prime is a matter of a fraction of a second with a basic PC, but making a new curve is much more expensive (the hard part is figuring out the curve order). Since there is no security issue in using the same group for several distinct key pairs, it is customary, with elliptic curves, to rely on a handful of standard curves which have been created such that their order is appropriate (a big prime value or a multiple of a big enough prime value); see FIPS 186-3. The implementations are thus specialized and optimized for these particular curves, which again considerably speeds things up.
• Elliptic curves can be used to factor integers. Lenstra's elliptic curve factorization method can find some factors in big integers with a devious use of elliptic curves. This is not the best known factorization algorithm, except when it comes to finding medium-sized factors in a big non-prime integer.
• Some elliptic curves allow for pairings. A pairing is a bilinear operation which can link elements from two groups into elements of a third group. A pairing for cryptography requires all three groups to be "appropriate" (in particular with a hard-to-solve discrete logarithm). Pairings are an active research subject because they can be used to implement protocols with three participants (e.g. in electronic cash systems, with the buyer, the vendor and the bank, all mathematically involved in the system). The only known practical pairings for cryptography use some special elliptic curves.
Elliptic curves are usually said to be the next generation of cryptographic algorithms, in order to replace RSA. Performance of EC computations is the main interest of these algorithms, especially on small embedded systems such as smartcards (in particular Koblitz curves over binary fields); the biggest remaining issue is that public-key operations with group-based algorithms are a bit slow (RSA signature verification or asymmetric encryption, as opposed to signature generation and asymmetric decryption, respectively, is extremely fast, whereas analogous operations in the group-based algorithms are just fast). Also, involved mathematics are a bit harder than with RSA, and there have been patents, so implementers are a bit wary. Yet elliptic curves become more and more common.
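For readers who want to see the group law in action, here is a toy sketch (mine, not from the answer above): point addition and double-and-add scalar multiplication on the small curve $y^2=x^3+2x+3$ over $\mathbb{F}_{97}$. Real systems use standardized curves of roughly 256 bits; the tiny parameters here are purely illustrative.

```python
p, a, b = 97, 2, 3          # toy curve y^2 = x^3 + 2x + 3 over GF(97); 4a^3 + 27b^2 != 0
O = None                    # the point at infinity (the group identity)

def add(P, Q):
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return O                                            # P + (-P) = O
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p    # tangent slope
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p           # chord slope
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def mul(k, P):              # double-and-add scalar multiplication
    R = O
    while k:
        if k & 1:
            R = add(R, P)
        P = add(P, P)
        k >>= 1
    return R

G = (0, 10)                 # on the curve, since 10^2 = 3 = 0^3 + 2*0 + 3 (mod 97)
print(mul(5, G), add(mul(2, G), mul(3, G)))     # the two results agree
```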
-
Once upon a time, in a land far, far away, there lived two men by the names of Neal Koblitz and Victor S. Miller. They didn't know each other; however, in 1985 they both suggested using elliptic curves over finite fields for encrypting/decrypting data.
Seriously, though, the following explanation requires that you have a basic understanding of finite fields. Most of it is taken from the Wiki links suggested by D.W.
Elliptic curve cryptography (ECC) is an approach to public-key cryptography based on the algebraic structure of elliptic curves over finite fields.
Public-key cryptography is based on the intractability of certain mathematical problems. Early public-key systems, such as the RSA algorithm, are secure assuming that it is difficult to factor a large integer composed of two or more large prime factors.
For elliptic-curve-based protocols, it is assumed that finding the discrete logarithm of a random elliptic curve element with respect to a publicly-known base point is infeasible. The size of the elliptic curve determines the difficulty of the problem. It is believed that the same level of security afforded by an RSA-based system with a large modulus can be achieved with a much smaller elliptic curve group. Using a small group reduces storage and transmission requirements.
For current cryptographic purposes, an elliptic curve is a plane curve which consists of the points satisfying the equation
$y^2 = x^3 + ax + b$,
along with a distinguished point at infinity. (The coordinates here are to be chosen from a fixed finite field of characteristic not equal to 2 or 3, or the curve equation will be somewhat more complicated.) This set together with the group operation of the elliptic group theory form an Abelian group, with the point at infinity as identity element. The structure of the group is inherited from the divisor group of the underlying algebraic variety.
How it works depends on the cryptographic scheme you apply it to. As an example, it can be applied it to the Diffie-Hellman key exchange, which is commonly known as the Elliptic Curve Diffie-Hellman (ECDH) key agreement protocol.
Suppose Alice wants to establish a shared key with Bob, but the only channel available for them may be eavesdropped by a third party. Initially, the domain parameters (that is, $(p,a,b,G,n,h)$ in the prime case or $(m,f(x),a,b,G,n,h)$ in the binary case) must be agreed upon. Also, each party must have a key pair suitable for elliptic curve cryptography, consisting of a private key d (a randomly selected integer in the interval $[1,n − 1]$) and a public key $Q$ (where $Q = dG$). Let Alice's key pair be $(d_A,Q_A)$ and Bob's key pair be $(d_B,Q_B)$. Each party must have the other party's public key (an exchange must occur).
Alice computes $(x_k,y_k) = d_AQ_B$. Bob computes $(x_k,y_k) = d_BQ_A$. The shared key is $x_k$ (the $x$ coordinate of the point).
The number calculated by both parties is equal, because $d_AQ_B = d_Ad_BG = d_Bd_AG = d_BQ_A$.
The protocol is secure because nothing is disclosed (except for the public keys, which are not secret), and no party can derive the private key of the other unless it can solve the Elliptic Curve Discrete Logarithm Problem.
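As a concrete illustration of this exchange (mine, not part of the answer): assuming the pyca/cryptography package is installed (and noting that older versions also required a backend argument), the whole ECDH handshake on the standard P-256 curve takes only a few lines.

```python
from cryptography.hazmat.primitives.asymmetric import ec

alice_priv = ec.generate_private_key(ec.SECP256R1())
bob_priv = ec.generate_private_key(ec.SECP256R1())

# Each side combines its own private key with the other's public key.
alice_shared = alice_priv.exchange(ec.ECDH(), bob_priv.public_key())
bob_shared = bob_priv.exchange(ec.ECDH(), alice_priv.public_key())

assert alice_shared == bob_shared   # same secret on both sides, since d_A(d_B G) = d_B(d_A G)
```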
-
I wanted to add a couple of very handy references.
Firstly, there is the self-acclaimed elliptic curve crypto blog (not mine, no self plugging today). But the exact page that I linked you to happens to have a large list of references to learn about crypto and, in particular, elliptic curve cryptography (including the book written by my current graduate advisor, which I haven't actually read).
But one of them, which has a few good quick overview parts is Smart's Cryptography, available free here (and legal, by the way - distributed by the author himself).
-
I recommend that you start by reading the description of elliptic curve cryptography in Wikipedia, and then let us know what you'd like to know: What didn't you understand? What didn't it cover that you'd like to know about?
The one-sentence version is that elliptic curve cryptography is a form of public-key cryptography that is more efficient than most of its competitors (e.g., RSA).
For every public-key cryptosystem you already know of, there are alternatives based upon elliptic curve cryptography (ECC). The ECC schemes are probably faster. Consequently, ECC is particularly appropriate for embedded devices and other systems where performance is at a premium. On the other hand, ECC is newer than some other well-known alternatives, and there is a bit of a patent minefield surrounding some kinds of elliptic-curve cryptography, so ECC hasn't seen as much deployment as classic RSA/DSA/El Gamal -- but ECC is used in the wild in some systems.
-
http://mathoverflow.net/questions/86864?sort=votes | ## When can a family of polynomials get a weight function to be made orthogonal?
Let $\lbrace P_n(z)\rbrace_{n\in\mathbb N_0}$ be a family of polynomials defined by a generating function $g(t,z)=\sum\limits_{n=0}^\infty P_n(z)t^n$ or by a contour integral $P_n(z)=\frac1{2\pi i}\oint\frac{g(t,z)}{t^{n+1}}dt$. Are there known sufficient conditions on $g$ or on the $P_n$ themselves that guarantee the existence of a weight function $w:I\to \mathbb R^+_0$ (where $I\subset\mathbb R$ is an appropriate interval) such that the $P_n$ are orthogonal w.r.t. $w$?
-
Orthogonal polynomials must satisfy a three-term recursion $x P_{n+1}(x) = A_n P_n(x) - B_n P_{n-1}(x)$ and have real interlaced roots. Not clear how to test these necessary conditions from a generating function or contour integral, let alone give sufficient conditions. – Noam D. Elkies Jan 27 2012 at 23:06
Oops, I started writing my answer before this comment appeared. The three-term recurrence should be $xP_{n}(x) = P_{n+1}(x) + A_n P_n(x) - B_n P_{n-1}(x)$ with $B_n>0$, and then it actually implies that the roots are interlaced. – Henry Cohn Jan 27 2012 at 23:18
(But you're right that this isn't a particularly useful condition for proving that polynomials are orthogonal. I view Favard's theorem as having mainly psychological value: if you observe a suitable recurrence experimentally, then you really ought to look for an orthogonality proof to explain it. It's sometimes possible to prove the recurrence directly and work from there, but this is generally not the most flexible or illuminating approach.) – Henry Cohn Jan 27 2012 at 23:21
Henry is right, and I apologize for the typo. – Noam D. Elkies Jan 27 2012 at 23:56
## 2 Answers
Favard's theorem characterizes this in terms of the three-term recurrence. Suppose the polynomials $P_n$ are normalized so that they are monic. Then they are orthogonal polynomials with respect to some Borel measure if and only if there are constants $\alpha_n$ and $\beta_n$ such that $P_n(x) = (x+\alpha_n) P_{n-1}(x) + \beta_n P_{n-2}(x)$ and $\beta_n < 0$. (The sign condition on $\beta_n$ is needed to get a positive measure. I think you still get a signed measure if you have a three-term recurrence with $\beta_n \ge 0$, but I'm not certain offhand.)
This is pretty easy to test for in practice if you are given a sequence of polynomials numerically. Strictly speaking, it doesn't guarantee a weight function as specified in your question, since the measure may not be absolutely continuous with respect to Lebesgue measure, but I assume that's not what you really care about. If it is, then I'm not sure offhand how to characterize that case.
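As a sanity check of the recurrence-based test (my sketch, assuming sympy is available), one can take the monic Legendre polynomials — which are orthogonal on $[-1,1]$ — recover $\alpha_n$ and $\beta_n$ from $P_n=(x+\alpha_n)P_{n-1}+\beta_n P_{n-2}$ by polynomial division, and observe that the $\beta_n$ indeed come out negative:

```python
import sympy as sp

x = sp.symbols('x')

def monic_legendre(n):
    return sp.Poly(sp.legendre(n, x), x).monic()

for n in range(2, 7):
    Pn, Pm1, Pm2 = monic_legendre(n), monic_legendre(n - 1), monic_legendre(n - 2)
    q = Pn - sp.Poly(x, x) * Pm1        # should equal alpha_n*P_{n-1} + beta_n*P_{n-2}
    alpha, rem = q.div(Pm1)
    beta, rem2 = rem.div(Pm2)
    # expected: alpha_n = 0, beta_n = -(n-1)^2 / (4(n-1)^2 - 1) < 0, and rem2 = 0
    print(n, alpha.as_expr(), beta.as_expr(), rem2.as_expr())
```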
-
Thank you. That's about what I expected: that one of the necessary conditions is essentially sufficient. Like Noam D. Elkies, I didn't really expect there to be any hope of deriving anything directly from a generating function. Let alone the even more hopeless question: If only $g(t,z)$ is given, is there even a way to know, without doing a Taylor expansion, that the coefficients will actually be polynomials in $z$? – spanferkel Jan 28 2012 at 10:24
Hello everybody, can anybody help me find the weight function needed to orthogonalize a polynomial? Actually, in a paper they claim their polynomials are orthogonal, but I'm unable to verify it. They have a three-term recurrence relation.
-
Who is "they?" Formulating this as a question instead of an answer might give you better chances of a response, though it would be helpful if you flesh out your problem more. What, specifically, are you trying to do? What paper are you following? What specific problems are you having? – Emilio Pisanty Jul 20 at 15:57
http://physics.stackexchange.com/questions/24801/could-a-people-do-all-sort-of-gymnastics-movement-in-vacuum-space?answertab=oldest | # Could a people do all sort of gymnastics movement in vacuum space? [closed]
Could a person do all sorts of gymnastics movements in the vacuum of space? I ask because I am worried that an astronaut who leaves the space shuttle during an emergency could not get back to Earth by himself if he has no fuel; could he swim through space to get back, even though there is no water?
-
I don't understand your question. Could you try clarifying? – Jonathan Gleason May 3 '12 at 22:10
More on swimming in zero gravity: physics.stackexchange.com/q/886/2451 and physics.stackexchange.com/q/9134/2451 – Qmechanic♦ May 3 '12 at 22:16
## closed as not a real question by Qmechanic♦, Sklivvz♦, Manishearth♦ Dec 28 '12 at 12:42
## 1 Answer
No, you can't move in space (aside from following your orbit), unless you have something to push against. See Newton's Laws of Motion. (Also, if said astronaut did manage to re-enter the atmosphere, he or she would burn up on re-entry. The impact wouldn't be survivable either.)
Astronauts are well aware that many sorts of emergencies will not be survivable.
-
Er...while you can not influence either you linear or your angular momentum you can (in principle) influence your orientation (or the phase associated with a non-zero rotational motion). – dmckee♦ May 3 '12 at 22:45
@dmckee: Really? Is that true in practice, or is it only some GR result that would be undetectable on human scales? – Colin K May 4 '12 at 1:03
@ColinK: It's true in practice and is how springboard divers power half-twists (by contortion of their arms). Longer twists must take advantage of setting up unstable rotations around $I_2$. Think of it this way. You can't change your $L$, but you can rotate one part of your body relative to another part until you run out of range of motion, so the final result is that you continue with the same $\omega$ but a different phase. Not sure how practical it is going to be in a vacuum suit, however. – dmckee♦ May 4 '12 at 1:40
@dmckee - Sure, but if he's worried about astronauts in orbit around the Earth, changing orientation isn't going to do the trick. – Rex Kerr May 4 '12 at 15:53
http://gilkalai.wordpress.com/2010/08/10/faces-of-simple-4-polytopes/?like=1&source=post_flair&_wpnonce=420e8d64d7 | Gil Kalai’s blog
## Faces of Simple 4 Polytopes
Posted on August 10, 2010 by
In the conference celebrating Klee and Grünbaum’s mathematics at Seattle Günter Ziegler proposed the following bold conjecture about 4 polytopes.
Conjecture: A simple 4-polytope with $n$ facets has at most a linear number (in $n$) of two-dimensional faces which are not 4-gons!
If the polytope is dual-to-neighborly then the number of 2-faces is quadratic in $n$. For the dual-to-cyclic polytope the assertion of the conjecture is true.
This entry was posted in Convex polytopes.
### 3 Responses to Faces of Simple 4 Polytopes
1. Kristal Cantwell says:
If you started with the standard packing of spheres of unit radius in three dimensions, wrapped it around a hypersphere of very large radius (there is going to be an error term from this, since it doesn't wrap exactly), and then somehow corrected it through adjustments so it was simple, it seems to me that this idea ought to produce a class of simple polytopes that can have as large a number of facets as desired, and whose ratio of triangular two-dimensional faces to facets tends to a positive number as the number of facets increases. I am looking at this as a possible counterexample.
2. Kristal Cantwell says:
I am not sure the ideas in my first post work. I have thought of a simpler way to show the conjecture is false. Assume the conjecture is true and start with a simple 4-polytope. Then simply use hyperplanes to cut off small simplices at all the vertices. Then all the original faces will now have twice as many sides, so they will have at least 6 sides. The newly formed faces will all be triangles. And so we now have a simple four-dimensional polytope none of whose faces has four sides.
3. Gil Kalai says:
Dear Kristal, the problem in your construction is that you add a facet for every vertex of the original polytope. Then in the new polytope the number of 2-faces will become just linear in the number of facets and the conjecture holds trivially.
http://www.physicsforums.com/showthread.php?p=3733034 | Physics Forums
## Formula for the Electric Field Due to Continuous Charge Distribution
1. The problem statement, all variables and given/known data
I am having trouble understanding how
$\Delta\vec{E} = k_e \frac{\Delta q}{r^2}$
(where ΔE is the electric field of the small piece of charge Δq)
turns into
$\vec{E} = k_e \sum_{i}\frac{\Delta q_i}{r_i^2}$
then into
$\vec{E} = k_e \lim_{\Delta q_i \to 0}\sum_{i}\frac{\Delta q_i}{r_i^2}$
which finally takes the form
$\vec{E} = k_e \int\frac{dq}{r^2}$
2. Relevant equations
- Listed Above -
3. The attempt at a solution
I understand the summation part, which just takes the sum of all the electric fields of each individual part of the continuous charge distribution.
What I don't understand is the latter part, where the limit is taken of the summation and then turned into an integral. What's going on there and why is that done?
Mentor
Quote by prosteve037: What I don't understand is the latter part, where the limit is taken of the summation and then turned into an integral. What's going on there and why is that done?
Have you had a course in integral calculus?
Quote by prosteve037: What I don't understand is the latter part, where the limit is taken of the summation and then turned into an integral. What's going on there and why is that done?
Think of the first formula as giving the i'th electric field due to the i'th (discrete) charge at the i'th position. This equation assumes that the i'th charge has some spatial extent. The i'th position is approximately at the centre of this spatial extent. (This charge has to tend to 0 as the spatial extent goes to 0).
You need to sum all the i'th terms to obtain the total electric field. But alas! The charges in the equation are not infinitesimally small, so that the position of each charge in the equation is at best an approximation of the spatial extent of the charge. Therefore, to obtain the total electric field, we need to sum all the i'th electric fields in the limit of the i'th charge tending to 0. This condition constrains the spatial extent to tend to 0, so that the position becomes exact. In other words, you are converting the discrete second equation into the continuous third equation. The limit taken for the charge tending to zero converts the discrete charges into infinitesimally small charges at unique points that have no spatial extent.
The fourth equation is simply the commonly used shorthand notation for the more easily understood third equation.
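Here is a small numerical illustration of that limiting process (mine, not from the thread): chop a uniformly charged rod into $N$ pieces, add up $k_e\,\Delta q/r^2$ for each piece, and watch the sum approach the exact integral as $N$ grows, i.e. as each $\Delta q \to 0$.

```python
k = 8.9875e9          # Coulomb constant, N·m²/C²
lam = 1e-9            # linear charge density, C/m
L, d = 1.0, 0.5       # rod from x = 0 to L; field point on the axis at x = L + d

exact = k * lam * (1.0 / d - 1.0 / (L + d))       # k * integral of dq / r^2 in closed form

for N in (1, 10, 100, 1000, 10000):
    dx = L / N
    E = sum(k * (lam * dx) / (L + d - (i + 0.5) * dx) ** 2 for i in range(N))
    print(N, E, exact)                            # the discrete sum converges to the integral
```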
## Formula for the Electric Field Due to Continuous Charge Distribution
Quote by SammyS Have you had a course in integral calculus?
I took some Calculus back in high school, which was 2 years ago. I can't seem to remember much/enough to understand the mathematical progression here :/
Quote by failexam: Think of the first formula as giving the i'th electric field due to the i'th (discrete) charge at the i'th position. This equation assumes that the i'th charge has some spatial extent. The i'th position is approximately at the centre of this spatial extent. [...]
I'm a little confused about what you mean by "spatial extent". Are you saying that the discrete pieces of charge aren't taken to be at their exact locations in the first equation?
http://crypto.stackexchange.com/tags/classical-cipher/hot?filter=all | # Tag Info
## Hot answers tagged classical-cipher
9
### Is a book cipher provably secure?
Using the book as a key is relatively similar to one-time pad, insofar as the book can be considered as a random stream of characters. But that's true only to some extent: a book consists of words, with meaning, which implies that characters which may appear at position 321:42:35 are not uncorrelated with characters which appear at positions 321:42:34 and ...
7
### Encryption/ciphers/codes in Chinese
I take your question to mean, how both historically and in the modern age one could construct a pen-and-paper cipher using the Chinese language. As pointed out in the question, Chinese is a logographic langauge and therefore has a far greater number of characters than Phonetic systems. Historically this has cause chinese codes not to be based around the ...
7
### How to attack a classical cipher using known partial plaintext?
As the other poster rightly pointed out, it's a Playfair cipher. Even without the known plaintext, the program "playn" here will give the right text in less than a second. (you can compile it yourself, and it uses the bigram statistics of English) I ran it, and the result was the following: IT XT UR NS OU TX TH AT OR IG AM IX IS AB RI LX LI AN TW AY TO ...
7
### Why does ROT13 provide no cryptographic security?
I think I understand what you're asking for. You're trying to learn how we know which algorithm was used, so we know how to attack it. That's a part of what is known as cryptanalysis, the task of breaking ciphers. If you are using a standard computer protocol, the encryption algorithm is defined as a part of the protocol. The computers can't talk unless ...
7
### security of Felix cipher
Cipher details Cipher type The Felix cipher can be broken down into two algorithms: a substitution cipher and a permutation of the character pairs. We obtain the substitution if we read the number pairs in figure 3.3 vertically rather than horizontally. Since the permutation is fixed, it has no cryptographic value. Therefore, we'll only analyze the ...
6
### Why does ROT13 provide no cryptographic security?
Kerckhoffs's principle states, that a cryptographic system shall be secure even if everything about the system, except the key, is known to the attacker. Typically an encryption algorithm has two inputs: a key and the data. In the case of Rot13, there is no key. So if you know the algorithm, there is nothing left to guess. Let's assume the algorithm ...
5
### How to attack a classical cipher using known partial plaintext?
I do not have a solution, but I pursued the cipher long enough to establish it wasn't one of the easy classical ciphers. This approach should get you started. The first thing you want to do is convert the text into numbers as many classic ciphers are mathematically-based (or at least easy represented mathematically). Using $A=0$, $B=1$, $\ldots$, the ...
5
### Is a book cipher provably secure?
An obstacle to proving that a book cipher is secure is that the letters in (most) books are not chosen independently at random. Thus, in principle, if two indices are chosen too close to each other, an adversary could deduce some statistical information about how the corresponding plaintext letters may be correlated. As a toy example, suppose that an ...
5
### How secure is the Vigenère cipher in file encryption if you encrypt the password first?
This is essentially a Vigenère cipher; it's been known for centuries. As for how secure it is, well, it is actually fairly easy to break (unless the key is both as long as the ciphertext, and randomly chosen; however, at that point, if you could remember the key, you could have well just remembered the plaintext). As for your colleague, he's right, and ...
5
### Toy cipher — does it have a name?
This is a simple substitution cipher, specifically a mixed/deranged alphabet cipher. See wikipedia's description: Substitution of single letters separately—simple substitution—can be demonstrated by writing out the alphabet in some order to represent the substitution. This is termed a substitution alphabet. The cipher alphabet may be shifted or reversed ...
5
### Is frequency analysis a useful tool against encryption by multiplication?
Frequency analysis would work on average quite poorly on ciphertext of the proposed cipher. How exactly depends a lot on the value of the key: for some weak keys likes 1, 3, 30, 30000.. it works essentially as well as for any mono-alphabetic cipher. For 103, it still works well. For any key, given enough (lots of) ciphertext, it could still distinguish the ...
5
### Benefit of combining classical substitution ciphers with modern cryptography
The fact that a given cipher has a key length of 296 bits doesn't mean at all that it provides 296 bits of security or even that a brute force attack would take $2^{296}$ steps. The problem of mono-alphabetic substitution cipher is the ridiculously small block size (in this case, barely $\log 64 = 6$ bits). If absolutely nothing about the plaintext is ...
4
### How many keys does the Playfair Cipher have?
When we consider that a Playfair key consists of the alphabet (reduced to 25 letters) spread on a 5x5 square, that's $25!$ keys (another formulation considers any string to be a key; then strings leading to the same square are equivalent keys). The rules of Playfair are such that any rotation of the lines in the square, and any rotation of its columns, lead ...
4
### How can I break a Vigenère cipher with partial plain text?
First guess the key length(Just try every plausible length, there aren't many). Then for each position where you know both plain- and ciphertext, calculate the key char. If you get a contradiction, the guessed key length was wrong. If the key length is short enough compared to the number of known pairs this will probably give you a large part of the key.
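A tiny demonstration of that idea (mine; the plaintext/key pair is the classic ATTACKATDAWN/LEMON example): each overlapping position gives one key character as (cipher − plain) mod 26, and a wrong key-length guess quickly produces a contradiction.

```python
def recover_key(plain, cipher, keylen):
    key = [None] * keylen
    for i, (p, c) in enumerate(zip(plain, cipher)):
        k = (ord(c) - ord(p)) % 26
        if key[i % keylen] is None:
            key[i % keylen] = k
        elif key[i % keylen] != k:
            return None                                  # contradiction: keylen was wrong
    return ''.join(chr(ord('A') + k) for k in key if k is not None)

plain, cipher = "ATTACKATDAWN", "LXFOPVEFRNHR"           # Vigenere with key LEMON
for guess in range(2, 8):
    print(guess, recover_key(plain, cipher, guess))      # only the right length survives
```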
4
### How weak/strong is this hand cipher? (updated) [closed]
Your system is essentially: $c_0 = m_0 + k$ and $c_i = m_i + c_{i-1}$ for $i>0$. It's absolutely trivial to break this, since the attacker knows the ciphertext, and thus knows both $c_i$ and $c_{i-1}$. Decrypting the first group is hard. But to decrypt any group but the first, simply subtract the previous group from the current group: $m_i = c_i -$ ...
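A short demo of the attack (mine), taking the "groups" to be single letters mod 26 so the arithmetic is easy to see: everything except the first letter falls out of the ciphertext alone, with no key.

```python
def encrypt(msg, k):
    out, prev = [], k
    for m in (ord(ch) - 65 for ch in msg):
        prev = (m + prev) % 26            # c_0 = m_0 + k, then c_i = m_i + c_{i-1}
        out.append(prev)
    return ''.join(chr(c + 65) for c in out)

def attack(cipher):
    c = [ord(ch) - 65 for ch in cipher]
    return '?' + ''.join(chr((c[i] - c[i - 1]) % 26 + 65) for i in range(1, len(c)))

ct = encrypt("HELLOWORLD", 17)
print(ct, attack(ct))                     # recovers ELLOWORLD without knowing k
```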
4
### security of Felix cipher
As the page explains, the cipher it describes is a simple variant of the bifid cipher, with the alphabet extended from the traditional 25 to 36 letters. As such, most techniques for breaking the bifid cipher ought to be more or less directly applicable to it. The bifid cipher is nowadays mainly used for crypto puzzles. Like most classical ciphers, it is ...
3
### Encryption/ciphers/codes in Chinese
It seems I can't comment on answers because the question is no longer on the Chinese Q&A, but I wanted to support fgrieu's suggestion. Certain web services don't have much care for security, but they want to avoid containing keywords that are blocked by the Chinese Firewall. One I'm familiar with is PIMCloud, a cloud-supported IME, which does exactly ...
3
### Cracking the beaufort cipher
Yes (guessing you're doing the cipher challenge...?). This link has a really good decoding tool (saves you time); then trial-and-error keywords. Also, this tool can be used to find the length of the keyword: paste the text you're decoding, then the number of the column(s) with the most x's is the length of the keyword.
3
### What is the most secure hand cipher?
So the ones I'd bet on are either Solitaire. Solitaire by Bruce Schneier is probably your best bet. It has a few issues but it will work well for most things. It ends up having a small bias, but it takes about 15 seconds per character after the initial keystream has been generated. It is not nearly as widely studied a field since most people are assumed ...
3
### Obtaining the key length from the ciphertext of an auto-encipher
Both the Vigenère and autokey ciphers are classified as polyalphabetic substitution ciphers, so the cipher in your exam is not likely to be either of those. Rather, the phrasing of the question suggests that it belongs to the other branch of classical ciphers, transposition ciphers. Indeed, looking at the letter frequencies of the ciphertext strongly ...
3
### Example of CHI Square test on Caesar Cipher?
I'll assume that the objective is to assert whether the distribution of the $f'_i/n'$ is sufficiently similar to the distribution of the $f_i/n$ to support that a substitution cipher (including Caesar cipher) with the same permutation table and same frequency of plaintext characters could be used in both cases. If $n \gg n'$, $f_i \gg 5$ and ...
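For concreteness, one way to compute the statistic and use it against a Caesar cipher (a sketch, not taken from the quoted text; `expected_freqs` plays the role of the reference distribution $f_i/n$ and must be supplied, e.g. standard English letter frequencies):

```python
# Score a candidate decryption: chi-square distance between its letter counts
# (the f'_i) and reference letter frequencies (the f_i / n).
def chi_square(observed_counts, expected_freqs):
    n_prime = sum(observed_counts)
    return sum((obs - n_prime * p) ** 2 / (n_prime * p)
               for obs, p in zip(observed_counts, expected_freqs) if p > 0)

def letter_counts(text):
    counts = [0] * 26
    for ch in text.upper():
        if "A" <= ch <= "Z":
            counts[ord(ch) - ord("A")] += 1
    return counts

# For a Caesar cipher, try all 26 shifts and keep the one with the lowest score.
def best_shift(ciphertext, expected_freqs):
    def shift_by(k):
        return "".join(chr((ord(c) - ord("A") - k) % 26 + ord("A"))
                       for c in ciphertext.upper() if "A" <= c <= "Z")
    return min(range(26),
               key=lambda k: chi_square(letter_counts(shift_by(k)), expected_freqs))
```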
2
### Is a book cipher provably secure?
Hummm... some thoughts about it: I think that it could be secure depending on what you want to hide. The bigger and the more "real world words" you want to protect, the easier it gets to crack. Why? Because in books, in general, you'll only have letters, few numbers, and that's all. OK, so I know you're transmitting one or more words. By the size of the ...
2
### Encryption/ciphers/codes in Chinese
If you save the current page and examine the file with a hex editor, you will likely find that your example Chinese string is represented by the bytes E6 88 91 E5 9C A8 E5 AD B8 E4 B8 AD E6 96 87. One option, very suitable for implementation by a machine, is to encipher the bytes representing the message in this (or other) format used by the machine. ...
2
### Assistance Cracking Classical Cipher
I don't know the solution, but since you say you're only asking for hints, here's a few that occurred to me: If this is a Vigenère cipher, the missing character at the beginning should not matter (much): if you encrypt a message with the key FOOBAR and drop the first letter of the output, you can decrypt the resulting ciphertext with the key OOBARF. As ...
2
### Why does ROT13 provide no cryptographic security?
I'm not exactly certain why "the obvious reasons" ROT13 isn't secure wouldn't be considered the appropriate answer; it's not secure (that is, doesn't provide privacy) because anyone can decrypt it trivially (whether they're the intended recipient or not). If you want to get into the details about why it is not secure, well, we need to talk about "security ...
2
### Toy cipher — does it have a name?
It's sometimes called a keyword cipher. As dr jimbob notes, it's a particular type of monoalphabetic substitution cipher. Ps. See also this recent question about breaking such ciphers.
2
### Can a shift cipher attain perfect secrecy?
The Caesar cipher (aka Shift cipher) has, as you said, a key space of size 26. To achieve perfect secrecy, it thus can have at most 26 plaintexts and ciphertexts. With a message space of one character (and every key only used once), it would fit the definition of perfect secrecy. For the usual use with messages longer than one character, or multiple ...
2
### Benefit of combining classical substitution ciphers with modern cryptography
Encrypting the AES key does not actually make a brute force search any harder: an attacker doesn't need to know the encrypted key to decode messages, they only need to know the actual AES key. Thus, the attacker only(!) needs to search the 256 bit AES keyspace, not the roughly 296+256 = 552 bit encrypted keyspace. Besides, even if the attacker did try an ...
2
### Is this hand cipher any more secure than the Vigenère cipher?
Given that the permutation is fixed and the key step is independent of the permutation, you can reduce this to an ordinary text-substitution cipher. If the key is as long as the input you have a weak one-time pad (because the per-letter change is limited to 10 instead of 26); however, if the key is short then you have a Vigenère cipher (if you "decode" with ...
2
### Attacking historical ciphers methodology
If you don't know the system, you just check one after the other: frequency analysis of bigrams detects Caesar and Playfair. Try Caesar first, then Playfair. Autocorrelation method for Vigenère (for each x: count the number of occurrences where the letters at positions i and i+x are equal. For the correct codeword length, it will spike). If you have a Hill ...
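The autocorrelation count is only a few lines (a sketch; uppercase, letters-only input assumed):

```python
# For each candidate shift x, count positions i where text[i] == text[i + x].
# For a Vigenere cipher the counts spike when x is a multiple of the key length.
def autocorrelation(text, max_shift=20):
    letters = [ch for ch in text.upper() if ch.isalpha()]
    scores = {}
    for x in range(1, max_shift + 1):
        scores[x] = sum(1 for i in range(len(letters) - x) if letters[i] == letters[i + x])
    return scores  # look for the shifts with conspicuously high counts
```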
Only top voted, non community-wiki answers of a minimum length are eligible | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 15, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9391605257987976, "perplexity_flag": "middle"} |
http://mathhelpforum.com/discrete-math/191884-0-empty-sets.html | Thread:
1. 0 and empty sets
I'm having trouble understanding this question:
Does $0 \in \emptyset$? Does $\{ \emptyset \} \in \emptyset$? Explain why.
My answer:
No for both.
Let me know if this is correct and also let me know how this works. I'm having some trouble understanding it.
2. Re: 0 and empty sets
Originally Posted by maclunian
I'm having trouble understanding this question:
Does $0 \in \emptyset$? Does $\{ \emptyset \} \in \emptyset$? Explain why. My answer:
No for both.
Let me know if this is correct and also let me know how this works. I'm having some trouble understanding it.
The $\emptyset$ contains nothing, so $0\notin\emptyset~\&~\{\emptyset\}\notin\emptyset$ | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 6, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8967108726501465, "perplexity_flag": "middle"} |
http://mathoverflow.net/questions/97061?sort=newest | ## Adjoint of a Connection Using the Hodge Map?
For a Riemannian manifold $(M,g)$ with exterior derivative d, the codifferential d$^\ast$ is defined to be the unique map for which $$g(\omega,d\omega') = g(d^* \omega,\omega'), ~~~ \omega,\omega' \in \Omega^{\bullet}.$$ Now if $\ast$ is the Hodge map for $g$, then it is not too difficult to show that d$^\ast = (-1)^k\ast$ d $\ast$, when acting on $\Omega^k(M)$.
When $M$ is a complex manifold with holomorphic and anti-holomorphic partial derivatives $\partial$, and $\overline{\partial}$, we have a similarly defined $\partial^\ast$, and $\overline{\partial}^\ast$, and a similar relation between these objects and the original derivatives involving the Hodge map (well actually there's a reversal but no matter). For the Lefschetz map something similar also happens.
What I would like to know is whether the adjoint $\nabla^*$ of the Levi--Civita connection $\nabla$ has some similar re-expression? Or is this too naive?
-
1
You need to replace $d\omega'$ by $d^*\omega'$ (or $d\omega$ by $d^*\omega$). – Michael Albanese May 16 2012 at 13:24
Done - thanks for spotting that. – Mihail Matrix May 16 2012 at 15:32
Um, your defining equation for $\mathrm{d}^*$ is not quite right: for example, the left hand side is tensorial in $\omega$ and the right hand side is not. The point is that the two sides differ by a divergence and so coincide after integrating against a volume form (so long as your manifold is compact and without boundary). – Fran Burstall Jun 11 at 21:53
## 2 Answers
Yes, even in a more general situation: Let $E\to M$ be a vector bundle with metric and metric connection $\nabla.$ Then there exists $d^\nabla\colon\Omega^k(M,E)\to\Omega^{k+1}(M,E)$ satisfying $d^\nabla(s\otimes\omega)=\nabla s\wedge\omega+s\otimes d\omega,$ and also the Hodge star extends to $\Lambda T^*M\otimes E.$ With this it is a nice exercise to compute that the adjoint of $d^\nabla$ is $\delta^\nabla=(-1)^? * d^\nabla *,$ where the sign is the same as for the usual codifferential.
-
Unfortunately this does not work, because the Hodge $*$ operator commutes with the Levi-Civita connection. Indeed, we have $$\nabla (\langle u,v \rangle dV) = \langle \nabla u, v \rangle dV + (-1)^m \langle u, \nabla v \rangle dV = \nabla u \wedge * v + (-1)^m u \wedge *\nabla v$$ because $\nabla dV = 0$ since the metric $g$ is parallel with respect to $\nabla$. The left hand side of this formula is again $$\nabla (u \wedge * v) = \nabla u \wedge *v + (-1)^m u \wedge \nabla(*v),$$ from which we get $\nabla * = * \nabla$. It follows that $* \nabla * = ** \nabla = (-1)^l \nabla$ for some $l$.
There is an expression for $\nabla^*$ in terms of a local orthonormal frame in Werner Ballmann's Lectures on Kähler manifolds (Proposition 1.27, Chapter 1, p. 11) that says that if $(X_1, \ldots, X_n)$ is such a frame, and if $\hat\nabla$ is the dual connection, then $$\nabla^*u = - \sum_j X_j \llcorner \hat\nabla_{X_j} u.$$ I don't know if this is what you're looking for; if not, you might have more luck with Bochner-Weitzenböck type identities.
-
Hmm... I just saw Sebastian's answer and am confused now. I think he's right, since if the metric is flat my answer would imply that the Laplacian was zero, which is absurd. On the other hand I can't quite figure out where I went wrong. – Gunnar Magnusson May 16 2012 at 8:19
5
Dear Gunnar, of course, the Hodge star is parallel, but the wedge product does not commute with the Hodge star, but becomes the insertion operator. – Sebastian May 16 2012 at 8:22 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 37, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9468446373939514, "perplexity_flag": "head"} |
http://mathoverflow.net/questions/32597/compact-surfaces-of-negative-curvature/82589 | ## Compact surfaces of negative curvature
John Hubbard recently told me that he has been asking people if there are compact surfaces of negative curvature in $\mathbb{R}^4$ without getting any definite answers. I had assumed it was possible, but couldn't come up with an easy example off the top of my head.
In $\mathbb{R}^3$ it is easy to show that surfaces of negative curvature can't be compact: throw planes at your surface from very far away. At the point of first contact, your plane and the surface are tangent. But the surface is everywhere saddle-shaped, so it cannot be tangent to your plane without actually piercing it, contradicting first contact.
This easy argument fails in $\mathbb{R}^4$. Can the failure of the easy argument be used to construct an example? Is there a simple source of compact negative curvature surfaces in $\mathbb{R}^4$?
-
2
What is the smallest dimension in which a uniformised genus $g>1$ Riemann surface embeds isometrically? – José Figueroa-O'Farrill Jul 20 2010 at 9:55
Great question. It shows how we still know so little about isometric embeddings of Riemannian manifolds in Euclidean space. – Deane Yang Jul 20 2010 at 14:40
## 2 Answers
You will find examples (topologically, spheres with seven handles) in section 5.5 of Surfaces of Negative Curvature by E. R. Rozendorn, in Geometry III: Theory of surfaces, Yu. D. Burago VI A. Zalgaller (Eds.) EMS 48.
Rozendorn tells us that «from the visual point of view, their construction seems fairly simple.» Well...
-
Mariano, is the curvature constant? Is the embedding at least $C^2$? – Victor Protsak Jul 20 2010 at 6:17
1
Victor, Rozendorn writes that «Efimov stated the supposition, at present not confirmed but also not disproved» that the surface can be deformed, preserving the sign of the curvature, to obtain in the limit a closed surface of constant negative curvature – Mariano Suárez-Alvarez Jul 20 2010 at 6:21
1
Also, the construction can be made $C^\infty$. – Mariano Suárez-Alvarez Jul 20 2010 at 6:25
Interesting, thank you! I even had that book in my possession within the last 3 months, but I completely missed that construction. – Matt Noonan Jul 20 2010 at 12:21
1
If anyone is not convinced by the elegant "throw tangent planes at the surface" argument, it can be translated to "for any compact surface in $\mathbb R^3$, find a point of maximum distance from the origin. At this point, the surface has curvature at least as positive as the sphere defined by that distance from the origin." – Elizabeth S. Q. Goodman Jan 27 2012 at 7:22
In "Y. Martinez-Maure, A counter-example to a conjectured characterization of the sphere. (Contre-exemple à une caractérisation conjecturée de la sphère.) (French), C. R. Acad. Sci., Paris, Sér. I, Math. 332, 41-44 (2001), the author disproves an old characterization of the 2-sphere by giving an exemple of a "hyperbolic hedgehog" of R^3 (a sphere-homeomorphic envelope parametrized by its Gauss map whose Gaussian curvature K is everywhere negative excepted at four singular points where K is infinite).
By projective duality, this implies the existence of a 2-sphere C^2 embedded in the 3-sphere with a nonpositive extrinsic curvature but not totally geodesic.
- | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 8, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8907373547554016, "perplexity_flag": "middle"} |
http://math.stackexchange.com/questions/181880/probability-of-drawing-a-blue-ball-if-i-draw-a-ball-10-times?answertab=oldest | # Probability of drawing a blue ball if I draw a ball 10 times?
Suppose there are $3$ red balls and $1$ blue ball. What's the probability that if I draw a ball $10$ times, I don't draw a blue ball?
Is it $1-\Pr(\text{no red ball}) = 1-\big(\frac{3}{4}\big)^3$?
Also, what is the chance that I draw at least $1$ blue ball within the $10$ draws?
-
1
Do you mean 1 - (3/4)^$\color{red}{10}$? – user2468 Aug 13 '12 at 0:15
## 1 Answer
Assuming that you are drawing independently and with replacement, the probability would be $$(\hbox{probability of not drawing a blue ball})^{10}=(3/4)^{10},$$ i.e. the probability of drawing a red ball all 10 times.
To draw at least one blue ball in 10 draws all that is required is that you do not draw a red ball all 10 times.
So it is $${1-(3/4)^{10}}$$.
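A quick numerical check (plain Python, just evaluating the expressions above):

```python
p_no_blue = (3 / 4) ** 10        # never see the blue ball in 10 draws
print(p_no_blue, 1 - p_no_blue)  # about 0.0563 and 0.9437
```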
- | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 7, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9325339198112488, "perplexity_flag": "head"} |
http://unapologetic.wordpress.com/2010/06/28/the-banach-space-of-totally-finite-signed-measures/?like=1&source=post_flair&_wpnonce=c1c9c39881 | # The Unapologetic Mathematician
## The Banach Space of Totally Finite Signed Measures
Today we consider what happens when we’re working over a $\sigma$-algebra — so the whole space $X$ is measurable — and we restrict our attention to totally finite signed measures. These form a vector space, since the sum of two finite signed measures is again a finite signed measure, as is any scalar multiple (positive or negative) of a finite signed measure.
Now, it so happens that we can define a norm on this space. Indeed, taking the Jordan decomposition, we must have both $\mu^+(X)<\infty$ and $\mu^-(X)<\infty$, and thus $\lvert\mu\rvert(X)<\infty$. We define $\lVert\mu\rVert=\lvert\mu\rvert(X)$, and use this as our norm. It’s straightforward to verify that $\lVert c\mu\rVert=\lvert c\rvert\lVert\mu\rVert$, and that $\lVert\mu\rVert=0$ implies that $\mu$ is the zero measure. The triangle inequality takes a bit more work. We take a Hahn decomposition $X=A\uplus B$ for $\mu+\nu$ and write
$\displaystyle\begin{aligned}\lVert\mu+\nu\rVert&=\lvert\mu+\nu\rvert(X)\\&=(\mu+\nu)^+(X)+(\mu+\nu)^-(X)\\&=(\mu+\nu)(X\cap A)-(\mu+\nu)(X\cap B)\\&=\mu(A)+\nu(A)-\mu(B)-\nu(B)\\&\leq\lvert\mu\rvert(A)+\lvert\mu\rvert(B)+\lvert\nu\rvert(A)+\lvert\nu\rvert(B)\\&=\lvert\mu\rvert(X)+\lvert\nu\rvert(X)\\&=\lVert\mu\rVert+\lVert\nu\rVert\end{aligned}$
So we know that this defines a norm on our space.
But is this space, as asserted, a Banach space? Well, let’s say that $\{\mu_n\}$ is a Cauchy sequence of finite signed measures so that given any $\epsilon>0$ we have $\lvert\mu_n-\mu_m\rvert(X)<\epsilon$ for all sufficiently large $m$ and $n$. But this is larger than any $\lvert\mu_n-\mu_m\rvert(E)$, which itself is greater than $\lvert\lvert\mu_n\rvert(E)-\lvert\mu_m\rvert(E)\rvert$. If $E\subseteq A$ is a positive measurable set then this shows that $\lvert\mu_n(E)-\mu_m(E)\rvert$ is kept small, and we find similar control over the measures of negative measurable sets. And so the sequence $\{\mu_n(E)\}$ is always Cauchy, and hence convergent. It’s straightforward to show that the limiting set function $\mu$ will be a signed measure, and that we will have control over $\lVert\mu_n-\mu\rVert$. And so the space of totally finite signed measures is indeed a Banach space.
Posted by John Armstrong | Analysis, Measure Theory
## 2 Comments »
1. I believe there’s a mistake in line 3 of the proof of the Triangle Inequality. Second term should actually be -(\mu + \nu) (X \cap B).
Comment by SW | October 27, 2010 | Reply
• I think you’re right, thanks. Does it look better now?
Comment by | October 27, 2010 | Reply
## About this weblog
This is mainly an expository blath, with occasional high-level excursions, humorous observations, rants, and musings. The main-line exposition should be accessible to the “Generally Interested Lay Audience”, as long as you trace the links back towards the basics. Check the sidebar for specific topics (under “Categories”).
I’m in the process of tweaking some aspects of the site to make it easier to refer back to older topics, so try to make the best of it for now. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 24, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.939085066318512, "perplexity_flag": "head"} |
http://mathhelpforum.com/geometry/30382-cone-height.html | Thread:
1. Cone Height
Ok, so here's the problem:
"The volume of a cone is found by using the formula V = 1/3*Pi(Squared)*h
Sarah is experimenting to find how high she needs to build a conical storage tower. She knows the volume of the material to be stored is 112 m(cubed). She decides to rewrite the formula so that h is the subject and then experiment with different values of r.
Write the formula with h as the subject."
Only looking for help to find the solution. Formula would be fine.
Thank you.
2. Originally Posted by tmrwcomestoday
Ok, so here's the problem:
"The volume of a cone is found by using the formula V = 1/3*Pi*r(Squared)*h (I suppose)
Sarah is experimenting to find how high she needs to build a conical storage tower. She knows the volume of the material to be stored is 112 m(cubed). She decides to rewrite the formula so that h is the subject and then experiment with different values of r.
Write the formula with h as the subject."
$V = \left[\frac13 \cdot \pi \cdot r^2\right] \cdot h$ . To get h you have to get rid of the square bracket. Divide by the bracket and you should come out with:
$h = \frac{3V}{\pi \cdot r^2}$ | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9481452703475952, "perplexity_flag": "middle"} |
http://physics.stackexchange.com/questions/52604/calculating-electromagnetic-invariant-in-matrix-form | # Calculating electromagnetic invariant in matrix form
I'm kind of confused. I want to calculate the electromagnetic invariant $I := F^{\mu\nu}F_{\mu\nu}$, but I'm not sure what is the easiest way to do so. So, I was trying to do it in matrix form, i.e. defining $$\mathbf{F}:=\begin{pmatrix}0 & -E_{1} & -E_{2} & -E_{3}\\ E_{1} & 0 & -B_{3} & B_{2}\\ E_{2} & B_{3} & 0 & -B_{1}\\ E_{3} & -B_{2} & B_{1} & 0 \end{pmatrix}$$ and then calculate the quantity $I$, but I'm not sure how to obtain the matrix form for $I$ (By "matrix form" I mean expressed in terms of the matrix $\mathbf{F}$).
So, I have two questions, 1) what is the easiest way to calculate $I$?, and 2) how to obtain the matrix form for $I$ starting with $I := F^{\mu\nu}F_{\mu\nu}$?. I'm using the metric $\eta:=diag(1,-1,-1,-1)$.
-
2
– Qmechanic♦ Jan 31 at 1:17
$I$ isn't a matrix, it's a scalar. – DJBunk Jan 31 at 1:21
@DJBunk I know that, I'm looking for the scalar expression but in terms of the matrix F. – Anuar Jan 31 at 1:38
Actually I understand that $I$ can be written as the trace of a matrix product, but I don't know how to obtain that product starting with the definition of $I$ that I've just given. – Anuar Jan 31 at 1:40
@Qmechanic Yes, my 1st question is related with your link, but it's not my 2nd question. – Anuar Jan 31 at 1:46
## 1 Answer
Notice that if we define matrices $F = (F^{\mu\nu})$ and $\eta = (\eta_{\mu\nu})$, then notice that $$I = F_{\mu\nu}F^{\mu\nu} = \eta_{\mu\alpha}\eta_{\nu\beta}F^{\alpha\beta}F^{\mu\nu} = \eta_{\mu\alpha}F^{\alpha\beta}\eta_{\nu\beta}F^{\mu\nu} = -\eta_{\mu\alpha}F^{\alpha\beta}\eta_{\beta\nu}F^{\nu\mu} =-\mathrm {tr}(\eta F\eta F)$$ where $\mathrm{tr}$ denotes the trace and I have used antisymmetry and symmetry of $F$ and $\eta$ respectively. I'm guessing this is the type of matrix expression you're after?
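As a quick numerical sanity check (a sketch with NumPy, using the matrix convention for $F^{\mu\nu}$ and the metric stated in the question; the field components are arbitrary):

```python
import numpy as np

E = np.array([0.3, -1.2, 0.7])                 # arbitrary test components
B = np.array([1.1, 0.4, -0.9])
eta = np.diag([1.0, -1.0, -1.0, -1.0])

F = np.array([[0.0,  -E[0], -E[1], -E[2]],
              [E[0],  0.0,  -B[2],  B[1]],
              [E[1],  B[2],  0.0,  -B[0]],
              [E[2], -B[1],  B[0],  0.0 ]])

I_trace  = -np.trace(eta @ F @ eta @ F)        # -tr(eta F eta F)
I_direct = 2 * (B @ B - E @ E)                 # 2(|B|^2 - |E|^2) in this convention
print(np.isclose(I_trace, I_direct))           # True
```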
Cheers!
-
Yes, you are right. By the way, there are other equivalent expressions $I=F:(\eta F\eta)^{T}=Tr(F\eta F^{T}\eta)=-Tr(F\eta F\eta)$ where $A:B=\sum_{\mu,\nu,\sigma,\rho}e_{\mu}e_{\nu}:e_{\sigma}e_{\rho}A_{\mu\nu}B_{\sigma \rho}=\sum_{\mu,\nu}A_{\mu\nu}B_{\nu\mu}$ is the dyadic product. – Anuar Jan 31 at 6:11
Hmm interesting hadn't seen that notation before. – joshphysics Jan 31 at 18:40 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 17, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9383124709129333, "perplexity_flag": "head"} |
http://mathoverflow.net/questions/114711/baum-connes-like-conjecture-for-lp-spaces/115182 | ## Baum-Connes-like “conjecture” for $l^p$-spaces
Let $G$ be a (discrete) group. For the Baum-Connes conjecture, one looks at the reduced group $C^{\star}$-algebra: Look at the Hilbert space $l^2(G)$ and the representation of $G$ on this Hilbert space given by left multiplication. The norm-closure of the resulting $\mathbb{C}G$-representation in $B(l^2(G))$ is the reduced group $C^*$-algebra.
For any $p \geq 1$, we can pretty much do the same: Look at $l^p(G)$. We still have a representation of $G$ on $B(l^p(G))$ by left multiplication and hence obtain a kind of reduced Banach group algebra for $l^p$; let's call it $B^p(G)$.
There also should be an assembly map
$K_*(E_{Fin}G) \rightarrow K_*(B^p(G))$
as in the Baum-Connes conjecture. For $p = 1$, we have $B^1(G) = l^1(G)$ and we obtain the Bost assembly map. I have no reason to believe that for arbitrary p such an assembly map might be an isomorphism, but was wondering whether such group Banach algebras, and maybe even the assembly maps, have been considered anywhere in the literature.
-
I have no answer to your question but would just like to point out that what you are calling $B^p(G)$ is known in the harmonic-analysis literature as $PF_p(G)$, the algebra of $p$-pseudofunctions. These are not very well understood, for instance if $F_2$ denotes the free group on two generators and $p\notin\{1,2,\infty\}$, then I don't think it is even known if $PF_p(F_2)$ can contain non-trivial idempotents - so we don't have an `$\ell^p$-Kadison-Kaplansky', which makes me guess that getting Baum-Connes results in this setting would be substantially harder than in the $p=2$ case. – Yemon Choi Nov 27 at 23:36
1
BTW. if you do find, or know of, a proof of the $\ell^p$-Kadison-Kaplansky for the free group, please let me know :) – Yemon Choi Nov 27 at 23:48
## 2 Answers
It is likely that in cases where I proved Baum-Connes without coefficients (i.e. reductive groups over local fields and some discrete groups with RD), some variant of the Schwartz or Jolissaint algebra will be dense and stable under functional calculus in the algebra you call $B^p(G)$. This would imply the BC conjecture for it. In the case of Schwartz algebras, some arguments like this (with almost the right L^p estimates) are given in the last section of my paper in Inventiones. The big limitation of this $L^p$ variant of the Baum-Connes conjecture is that when you consider coefficients in a $G$-$C^*$-algebra A, $L^p(G,A)$ can be defined only in a naive way and cannot be a $A$-Hilbert module as in the case where p=2.
-
Many thanks for the suggestions. One timid remark: functional calculus in $PF_p(F_2)$ may be tricky, we know there is a self-adjoint element of the complex group ring of $F_2$ whose spectrum in the algebra $\ell^1(F_2)$ has non-empty interior, although I admit this does not rule out some Jolissaint-type algebra working – Yemon Choi Dec 2 at 19:05
1
Thanks for your answer. I do not really understand the last remark, though: Why should I expect/want $L^p(G,A)$ to be a Hilbert module? Even for $A = \mathbb{C}$ this does not work - or am I misunderstanding something? – Fabian Lenhardt Dec 5 at 9:28
There is a version of $KK$-theory for Banach algebras, which was developed by Lafforgue. There also is a paper titled Banach KK-theory and the Baum-Connes conjecture, which is probably relevant for this question. I think this is a survey of K-théorie bivariante pour les algèbres de Banach et conjecture de Baum-Connes.
-
Lafforgue's work is only directly useful if one deals with unconditional completions of $\mathbb{C}G$ to a Banach algebra: the norm on $\mathbb{C}G$ of, say, $\sum a_g g$ should only depend on $\lvert a_g \rvert$, not on the precise value. I don't think that for $p \neq 1$, any of the algebras that (as I have just learned) are named $PF_p(G)$ are unconditional. – Fabian Lenhardt Nov 28 at 13:48
@Fabian: I also suspect that they are not unconditional in the sense you describe, even for G the group of integers – Yemon Choi Nov 28 at 16:11
@Yemon, Fabian: You are right, I should have read the question more carefully. – Ulrich Pennig Nov 28 at 17:37 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 44, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9325335621833801, "perplexity_flag": "head"} |
http://mathhelpforum.com/pre-calculus/107726-chords-integral-length.html | # Thread:
1. ## # of Chords with Integral Length
If the point Z is 6 units from the center of a circle whose radius is 10,
then what is the number of chords with integral length that run through Z?
2. Originally Posted by warriors837
Point P is 6 units from the center of a circle of radius 10.
Compute the number of chords with integral length that pass through P.
You can, without loss of generality, assume that the center of the circle is at (0, 0) on a coordinate system and that P is at (0, 6). Now the circle can be written as $x^2+ y^2= 100$ and any line through P as y= mx+ 6. Put that into the quadratic and solve for x to determine the points where the line crosses the circle (depending on m). Find the length of the line segment between the points (still depending on m). What values of m give an integer length?
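If you want to check your work numerically afterwards, here is a small sketch of that recipe (the variable names are mine; it intersects $y = mx + 6$ with $x^2 + y^2 = 100$ and measures the resulting chord):

```python
from math import sqrt

def chord_length(m, r=10, yP=6):
    # (1 + m^2) x^2 + 2*m*yP*x + (yP^2 - r^2) = 0 gives the two crossing points
    a, b, c = 1 + m**2, 2 * m * yP, yP**2 - r**2
    root = sqrt(b * b - 4 * a * c)
    x1, x2 = (-b + root) / (2 * a), (-b - root) / (2 * a)
    return sqrt(1 + m**2) * abs(x1 - x2)   # distance between the two points

for m in (0, 0.5, 1, 2, 10):
    print(m, round(chord_length(m), 3))
# lengths rise from 16 at m = 0 toward 20 as the line becomes steep
```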
3. Positive ones?
4. So you will arrive at:
x^2+y^2=100
x^2+(mx+6)=100
x^2+mx=94
But what does m equal?
The longest possible chord is the one that passes through both P and the center and has
length = diameter
= 20 units.
apothem d is the distance from center to a chord
The shortest possible chord is perpendicular to the longest chord and has d = 6.
central angle θ of the shortest chord = 2arccos(d/r)
= 2arccos(6/10)
= 106.26°
chord length c = 2·r·sin(θ/2)
= 2·10·0.8
= 16
You can construct a chord of any length from 16 through 20 by changing its slope.
There are five integral chord lengths from 16 through 20: 16, 17, 18, 19, 20
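A quick numerical check of that range (a sketch: a chord at distance $d$ from the center has length $2\sqrt{r^2-d^2}$, and $d$ ranges over $[0,6]$ for chords through the given point; note also that each length strictly between 16 and 20 is attained by two different chords, which matters if the question counts chords rather than lengths):

```python
from math import sqrt

r = 10
# chord length as a function of its distance d from the center
print([round(2 * sqrt(r**2 - d**2), 2) for d in (6, 5, 4, 3, 2, 1, 0)])
# -> lengths increase continuously from 16.0 (d = 6) to 20.0 (d = 0),
#    so the attainable integer lengths are 16, 17, 18, 19, 20.
```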
Correct? | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.913568913936615, "perplexity_flag": "middle"} |
http://math.stackexchange.com/questions/52811/why-is-it-easier-to-work-with-function-fields-than-with-algebraic-number-field | # Why is it “easier” to work with function fields than with algebraic number fields?
I just bought a copy of Jürgen Neukirch's book Algebraic Number Theory. While browsing through it I found a section titled § 14. Function Fields in chapter I. In it the author describes some aspects of an analogy between function fields and algebraic number fields.
This led me to google for a while and I ended up reading the Wikipedia entry for Global Field. And this is where my question comes from. In the last sentence of that entry there's the following passage, which I find really interesting:
It is usually easier to work in the function field case and then try to develop parallel techniques on the number field side. The development of Arakelov theory and its exploitation by Gerd Faltings in his proof of the Mordell conjecture is a dramatic example.
Unfortunately, being as dramatic as it is, the example mentioned does not tell me anything because not even the Wikipedia entry on Arakelov Theory is somehow close to give even a small hint as to what it is about.
So I would like to ask for some insight and/or examples that illustrate why it is said to be easier to work with function fields than with algebraic number fields and then try to develop parallel techniques for the number field case.
Thank you very much for any help.
-
4
I'd recommend taking a look at Rosen's "Number Theory in Function Fields". It really is interesting to see a lot of theorems proved much more easily in the function field setting. – John M Jul 21 '11 at 5:32
@John M Thanks, that books seems really nice. – Adrián Barquero Jul 21 '11 at 17:41
– Dylan Moreland Jul 22 '11 at 3:51
## 3 Answers
One answer is that we can take formal derivatives. For example, Fermat's last theorem is rather difficult but the function field version is a straightforward consequence of the Mason-Stothers theorem, whose elementary proof crucially relies on the ability to take formal derivatives of polynomials.
There is no obvious way to extend this construction to integers in a way that preserves its good properties. If there were, then the abc conjecture (of which Mason-Stothers is the function field version) would be trivial, which it's not. There is a thing called the arithmetic derivative, but it is of course not linear, and it doesn't seem to me to be very easy to prove anything with it.
The problem is that if we want to think of $\mathbb{Z}$ as being analogous to a function field, then the "field" that it's a function field over is the field with one element, so if a reasonable notion of formal derivative exists here it needs not to be $\mathbb{Z}$-linear, but to be $\mathbb{F}_1$-linear, whatever that means... if we understood what that meant, perhaps we could construct the "correct" version of the arithmetic derivative and presumably prove the abc conjecture.
Arakelov theory addresses another difference between function fields and number fields, which is the existence of Archimedean places. Over a function field all places are non-Archimedean and I understand this makes various things easier, but I don't know much about this so someone else should chime in here.
-
Incidentally, there's nothing formal about the derivative of a polynomial. Given a polynomial map, the derivative is the canonical map from the tangent bundle of the source to the pullback of the tangent bundle of the target. – Scott Carnahan Jul 21 '11 at 5:26
The primary reason that function field arithmetic is simpler than number field arithmetic is due to the existence of nontrivial derivations. With the availability of derivatives many things simplify.
E.g. for polynomials derivatives yield easy algorithms for squarefree testing, squarefree part, etc. Contrast this to the integer case. No feasible (polynomial time) algorithm is currently known for recognizing squarefree integers or for computing the squarefree part of an integer. In fact it may be the case that this problem is no easier than the general problem of integer factorization. This problem is important because one of the main tasks of computational algebraic number theory reduces to it (in deterministic polynomial time). Namely the problem of computing the ring of integers of an algebraic number field depends upon the square-free decomposition of the polynomial discriminant when computing an integral basis.
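To make the first point concrete, here is a tiny illustration of the polynomial case (a sketch using SymPy over $\mathbb{Q}$, where $f$ is squarefree exactly when $\gcd(f,f')=1$; the example polynomials are arbitrary):

```python
from sympy import symbols, gcd, diff, cancel

x = symbols("x")

def is_squarefree(f):
    # over Q, f is squarefree exactly when gcd(f, f') = 1
    return gcd(f, diff(f, x)) == 1

def squarefree_part(f):
    return cancel(f / gcd(f, diff(f, x)))

print(is_squarefree(x**3 - x))                # True
print(is_squarefree((x - 1)**2 * (x + 2)))    # False
print(squarefree_part((x - 1)**2 * (x + 2)))  # (x - 1)*(x + 2), up to expansion
```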
From derivatives also come Wronskians and associated measures of independence. For example, this is what is at the heart of Mason's trivial high-school level proof of the ABC theorem for polynomials - which is a difficult important open problem for numbers. From Mason's theorem follows immediately a trivial two-line proof of FLT for polynomials. If there existed some sort of analogous "derivative for integers" that yielded the corresponding ABC theorem, then it would yield an analogous trivial proof of FLT for integers (more precisely it would yield asymptotic FLT, i.e. FLT for all sufficiently large exponents).
Such observations have motivated searches for "arithmetic analogues of derivations". For example, see Buium's paper by that name in Jnl. Algebra, 198, 1997, 290-99, and see his book Arithmetic differential equations.
-
Let's consider an example to see why function fields are easier:
Let $q$ be a prime, and consider the global function field, $\mathbb F_q(T)$. An ideal $\mathfrak a$ of $\mathbb F_q[T]$ is just the principal ideal $\mathfrak a=(f)=(T^d+a_{d-1}T^{d-1}+\dots+a_0)$. The norm $N\mathfrak a=q^d$, and you can see that there are exactly $q^d$ ideals of norm $q^d$.
Then the zeta function over this field is $$\zeta_{\mathbb F_q[T]}(s)=\sum_{\mathfrak a \neq 0}N\mathfrak a^{-s}=\sum_{d=0}^\infty q^d(q^d)^{-s}=1/(1-q^{1-s})$$
That's a very simple expression for the zeta function. Note that it has no zeros, so it trivially satisfies the Riemann Hypothesis.
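A small numerical illustration (a sketch: it brute-force counts the monic polynomials of each degree over $\mathbb F_q$ and compares the truncated Dirichlet series with the closed form; the choices $q=3$, $s=2$ are arbitrary):

```python
from itertools import product

def monic_count(q, d):
    # brute-force count of monic polynomials T^d + a_{d-1} T^{d-1} + ... + a_0
    return sum(1 for _ in product(range(q), repeat=d))   # equals q**d

q, s, D = 3, 2.0, 8
partial = sum(monic_count(q, d) * (q ** d) ** (-s) for d in range(D + 1))
print(partial, 1 / (1 - q ** (1 - s)))   # partial sums approach 1.5 for q = 3, s = 2
```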
-
– John M Jul 23 '11 at 6:16 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 11, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9357313513755798, "perplexity_flag": "head"} |
http://mathhelpforum.com/differential-geometry/83604-two-lebesgue-integration-problems.html | # Thread:
1. ## Two Lebesgue Integration Problems
1. Let f be a nonnegative measurable function on $\Re$. Show that if $F : \Re \rightarrow \Re$ is defined by $F(x)=\int_{(0,x]} f(t)dm(t)$ then F is continuous on $\Re$.
I know that to prove that a function is continuous, you have to show that for any open set $A \subseteq \Re$, its preimage under the function is open, but I'm not sure what to do from there.
2. Let f be a nonnegative measurable function on a measurable set E with $\int_E f(x)dm(x)<\infty$. Prove that for each $\epsilon>0$ there exists $\delta>0$ such that for every measurable set $A \subseteq E$ with $m(A) < \delta$ we have $\int_A f(x)dm(x) < \epsilon$
I am clueless how to even start this problem.
2. For 1. use the sequential characterization of continuity. Take a sequence {x_n} converging to x. It suffices to show F(x_n) --> F(x).
A hint to do this is to bring (0,x_n] in as a characteristic function and use an appropriate convergence theorem.
3. Originally Posted by grad444
2. Let f be a nonnegative measurable function on a measurable set E with $\int_E f(x)dm(x)<\infty$. Prove that for each $\epsilon>0$ there exists $\delta>0$ such that for every measurable set $A \subseteq E$ with $m(A) < \delta$ we have $\int_A f(x)dm(x) < \epsilon$
I am clueless how to even start this problem.
For 2.: Let $\varepsilon>0$. Using the bounded convergence theorem, prove that there exists $M$ such that $\int_{\{f>M\}} f(x)dm(x)<\varepsilon/2$ (integration on the set where $f(x)>M$). Then choose $\delta<\varepsilon/(2M)$, and conclude using:
$\int_A f(x)dm(x)= \int_{A\cap\{f>M\}} f(x)dm(x)+\int_{A\cap\{f\leq M\}} f(x)dm(x)\leq \varepsilon/2 + M m(A)$. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 24, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.937963604927063, "perplexity_flag": "head"} |
http://physics.stackexchange.com/tags/models/hot?filter=year | # Tag Info
## Hot answers tagged models
20
### Why do people categorically dismiss some simple quantum models?
I can tell you why I don't believe in it. I think my reasons are different from most physicists' reasons, however. Regular quantum mechanics implies the existence of quantum computation. If you believe in the difficulty of factoring (and a number of other classical problems), then a deterministic underpinning for quantum mechanics would seem to imply one of ...
9
### Modeling non-quantum objects (in finance, sociology etc) using fermionic fields?
Actually a paper recently came out, and highlighted in Popular Science, discussing using fermionic field concepts to model crowd avoidance at Netflix. You can imagine that the same concept could be used to consider in any situation where there are large numbers of people competing for limited preferred items. Update Now that we have a few minutes, ...
7
### Why do people categorically dismiss some simple quantum models?
This could have been a comment, but as it actually anwers the question asked in the title, I'll post it as such: As far as I can tell there's no rational reason to dismiss these models out of hand - it's just that quantum mechanics (QM) has set the bar awfully high: So far, there's no experimental evidence that QM is wrong, and no one has come up with a ...
5
### Bohr's model of an atom doesn't seem to have overcome the drawback of Rutherford's model
Classically emission is continuous and the electron would need to occupy an "in between" energy level for a while, and that is forbidden in Bohr's scheme, so the emission can't be allowed to happen. This doesn't really explain why it can't happen, but that's phenomenology for you: you keep lining up facts until your kludge (1) gets the right answer and ...
5
### Macroscopic laws which haven't been derived from microscopic laws
This is an example from hydrodynamics. When the effects of viscosity can be ignored (inviscid flow), a uniform incident flow can exert on immersed bodies only lift forces perpendicular to the asymptotic flow velocity. However, there exist an infinite number of solutions of the flow equations of motion satisfying the asymptotic conditions at infinity and the ...
3
### Bohr's model of an atom doesn't seem to have overcome the drawback of Rutherford's model
Unfortunately, nobody reads Bohr nowadays so Bohr's arguments are not understood and transmitted. Modern quantum mechanics is more complete and superior as a physical theory to the old quantum mechanics, so the omission is perhaps understandable, but it is not forgivable. Bohr's ideas explain the stability of H-atoms in a reasonable way, which is also ...
3
### Why do people categorically dismiss some simple quantum models?
There are two questions here: Why criticize your models? And are there better ideas? I will try to answer the second question in a separate answer. Here I only give some comments of a general nature to adress the first question. I personally agree with you, and I think most people who care about this stuff do too, that it is disconcerting to have a theory ...
2
### Macroscopic laws which haven't been derived from microscopic laws
As Ron noted, there are many, many examples within condensed matter; they often share a very similar story where the microscopic laws are known well (exactly, for the case of simulations), but the macroscopic laws are derived by symmetry concerns. Take for example, liquid crystals. We could simulate a collection of hard rods or ellipsoids - this is our ...
2
### A problem of approximation [duplicate]
Even a physical quantity which changes by discrete amounts can often be well approximated by a continuous function of time. The derivative is a property of a mathematical function. Any differentiable function must necessarily be continuous, and a continuous function will change by arbitrarily small values for an arbitrarily small change in inputs. The ...
2
### Why do people categorically dismiss some simple quantum models?
Foundational discussions are indeed somewhat like discussions about religious convictions, as one cannot prove or disprove assumptions and approaches at the foundational level. Moreover, it is in the nature of discussions on the internet that one is likely to get responses mainly from those who either disagree strongly (the case here) or who can add ...
2
### Modeling incoming solar radiation
If you are interested just in the direct irradiance you can neglect the emission and scattering terms in the Radiative Transfer Equation (RTE), which can in this case be simplified to allow only for absorption and is known under the name of the Beer-Bouguer-Lambert law of absorption $$\cos\theta_{0}\,\frac{\partial}{\partial p}S^{i} = -\frac{\kappa^i}{g}\,S^{i}$$ ...
2
### Physics of the electric hot plate
I think there is some interesting physics to be had here. The rate of change of temperature depends on the rate of heat flow in from the electric heating element and the rate of heat flow out as heat is lost to the air. If we write the heat capacity of the hotplate as $C$ ($C$ is the traditional symbol for heat capacity) then: \frac{dT}{dt} = C \left( ...
1
### Eddy current losses in electric steel by harmonics of a magnetic field
I'm no expert, but ... MIT's "Magnetic Circuits and Transformers" discusses eddy current losses in chapter V.2. An approximate formula for a magnetic sine wave at frequency $f$ and peak amplitude $B_{max}$ (in Tesla, mks units throughout) is: $$P_e = k_e f^2 t^2 B_{max}^2 V$$ where $k_e = \pi^2/(6 \rho)$ theoretically (but in practice is often ...
1
### Modeling incoming solar radiation
Ok, I'm still not sure on what level you want to do this, but I will start you off with some basics. The most important factor is probably the solar elevation angle, $\theta$. As described on the wiki-page it can be calculated using this formula: $$\sin\theta=\sin\delta\sin\Phi+\cos\delta\cos\Phi\cos h$$ where $h$ is the hour angle, $\delta$ is the solar ...
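A minimal implementation of that elevation formula (a sketch; angles in degrees, $\Phi$ = latitude, and no refraction or equation-of-time corrections are included):

```python
from math import radians, degrees, sin, cos, asin

def solar_elevation(lat_deg, decl_deg, hour_angle_deg):
    lat, dec, h = map(radians, (lat_deg, decl_deg, hour_angle_deg))
    return degrees(asin(sin(lat) * sin(dec) + cos(lat) * cos(dec) * cos(h)))

# 50 N at the June solstice (declination ~23.44 deg), at solar noon (h = 0):
print(solar_elevation(50.0, 23.44, 0.0))   # about 63.4 degrees
```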
1
### Is energy always proportional to frequency?
Yes. For photons in vacuum, the energy per photon is proportional to the photon's classical, electromagnetic frequency, as $E = \hbar \omega = h f$. Here, we see a connection between two classical properties of light: the energy and frequency. What is surprising is that the relation holds for matter, where there is no classical equivalent of the frequency. ...
1
### If a cart hits a wall, does the weight of it affect how it moves, when the center of gravity is constant?
No. When you hit the wall, the bicycle rotates around the front axis. The angular momentum L that you create for an arbitrary number of mass particles is $$L=\Sigma_i(r_i \times m_iv_i) .$$ If you split location r=R+r_i and v=V+v_i with R and V being center of mass location and velocity, respectively, and r_i and v_i deviation from it, then it can be shown ...
1
### Does a scale model with 1/2 the linear length have 1/8 the mass?
OK, now we have a better idea what you're trying to do. If we can assume elastic collisions, then answer should be independent of the mass of the bike (though it will depend on the relative distribution of the mass, i.e. center of gravity and moments of inertia). However you will need to think how to scale the velocity of your model. One way to think of ...
1
### What is the simplest system that has both, discontinous and continous phase transitions?
First, make sure to read up on definitions to clarify what you are looking for - classification of phase transitions isn't 100% science, and has a little bit of fussiness to it. Wikipedia's page isn't terrible. Second, I can't tell you whether it is the simplest or not, but as I understand your question, the Ising model itself satisfies your conditions, as ...
1
### What is the simplest system that has both, discontinous and continous phase transitions?
I think you are wrong with water. Above the critical point there is no transition in water at all. This is also true for any other isostructural transition: as soon as there is no symmetry difference between the phases, in the continuous case you cannot say, if the transition has already taken place. A correct example should be probably, KHP (potassium ...
1
### Why do people categorically dismiss some simple quantum models?
This question tries to reproduce quantum mechanics from classical automata with a probabilistically unknown state. Probability distributions on Automata states Start with a classical CA and a probability distribution on the CA. To keep things general, I allow the CA to have some non-determinstic evolution, but only stochastic probability, no quantum ...
1
### A problem of approximation [duplicate]
Physics is all about making the right approximations, in the hope that we can gain some actual physical insight into our problem and make verifiable predictions. For example, say you wanted to calculate the trajectory of a cannonball that has been fired from a cannon. It would be a Sisyphean task to account for all the possible variables that could affect ...
1
### Bohr's model of an atom doesn't seem to have overcome the drawback of Rutherford's model
Ron Maimon's answer is very good and I agree with him, but I would like to point out that Bohr's model is not completely consistent with classical electromagnetism. On the one hand, the electron does not radiate as dmckee and Ron have explained. But, on the other hand, in Bohr's model the electron is a pure particle in a closed orbit and it is therefore ...
1
### Formulation of general relativity
General relativity is a classical theory. I will restate your dilemma as follows, since this is how Einstein stated it: We have an abstract manifold consisting of points, vectors that link nearby points, and a metric tensor that tells you the distance between nearby points. What makes these points physical? How can we tell point A apart from point B? Since ...
1
### Can anyone estimate what proportion of water remains after I flush a toilet?
Use a commercial blue dye in the tank, and flush until the blue color disappears. You can find the e-folding rate by eye. The amount of water removed per flush depends on the toilet, from memory, it's about .9 in a normal flush, so that 90% of the blue color is gone after one flush, so that you lose all noticible blue after about 4 flushes.
1
### Macroscopic laws which haven't been derived from microscopic laws
Any problem that requires solving non-trivial Schrödinger equations, for example the protein folding problem. It is known what equations the system should satisfy and those equations can be written down. Yet they cannot be solved with modern computers, which would take millions of years for that.
Only top voted, non community-wiki answers of a minimum length are eligible | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 10, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9484596252441406, "perplexity_flag": "middle"} |
http://physics.stackexchange.com/questions/51329/does-a-scale-model-with-1-2-the-linear-length-have-1-8-the-mass?answertab=votes | # Does a scale model with 1/2 the linear length have 1/8 the mass?
If a scale model has 0.5x the original length in all directions, should its mass be 1/8th of the original mass?
-
probably yes, because the volume is proportional to the cube. – elcojon Jan 15 at 21:28
Does the model use material of the same density? – Qmechanic♦ Jan 15 at 21:32
1
What is the purpose of your model? If it is just to look at, then the mass is irrelevant. If it is for some sort of test or simulation purposes, then it is a more complicated question. There's no a priori reason a model should have the same volumetric mass density. – user1631 Jan 15 at 21:34
1
By the way, if this is intended to mimic the behavior of a motorcycle (or a bicycle with articulated suspension) it will fail unless it also has a suspension with similar behavior. Weight transfer is heavily influenced by suspension. – Colin K Jan 15 at 21:58
If you just want to know, for example, the max deceleration before the back wheels come up, you could scale the mass however you wanted, since inertial and gravitational forces both scale the same way with m. You need to write down all the pertinent equations and see how things scale. – user1631 Jan 15 at 22:09
## 2 Answers
OK, now we have a better idea what you're trying to do. If we can assume elastic collisions, then the answer should be independent of the mass of the bike (though it will depend on the relative distribution of the mass, i.e. center of gravity and moments of inertia). However, you will need to think about how to scale the velocity of your model.
One way to think of it: you want to make a coordinate transformation from your model system that replicates the physics of your target system. The length transformation from model to target system is $l_t = 2\,l_m$. However, your model system has a gravitational acceleration of $g = 9.8\ \mathrm{m/s^2}$, so your simulated target system would have an effective gravitational acceleration of $g_t = 2 \times 9.8\ \mathrm{m/s^2}$, which is not right. How do you fix this? You have to rescale the time between model and target systems: $t_t = \sqrt{2}\,t_m$. This in turn means your model's velocity will be related to the target system velocity by $v_m = l_m/t_m = (\sqrt{2}/2)\,(l_t/t_t) = v_t/\sqrt{2}$, so you will want to reduce the velocity of your model accordingly.
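A small Python sketch of that bookkeeping (the half-scale factor comes from the question; the target speed here is an assumed example value):

```python
import math

# Model <-> target scaling for a half-scale model under the same gravity.
# Lengths: l_target = s * l_model with s = 2; keeping g fixed forces
# t_target = sqrt(s) * t_model, hence v_target = sqrt(s) * v_model.
s = 2.0
time_scale = math.sqrt(s)
velocity_scale = math.sqrt(s)

v_target = 10.0  # example full-scale speed in m/s (assumption, not from the question)
v_model = v_target / velocity_scale
print(f"run the model at {v_model:.2f} m/s to mimic {v_target:.2f} m/s at full scale")
```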
I think this is the answer to a different question. What bike, what velocity?? – Marcus Jan 17 at 14:27
Read the comment thread below the original question. – user1631 Jan 17 at 17:42
Yes, assuming it has the same density. Volume scales as length to the power of three and mass is proportional to volume.
$$\left(\frac{1}{2}\right)^3=\frac{1}{8}$$
http://math.stackexchange.com/questions/120647/surjective-functions/120662 | # Surjective Functions [duplicate]
Possible Duplicate:
Surjectivity of Function Compositions
I am trying to prove this statement:
If $f: A \rightarrow B$ and $g: B \rightarrow C$ are both surjective functions, show that $g \circ f : A \rightarrow C$ is also a surjective function.
I know that some elements B have corresponding elements in A and likewise, some elements in C have corresponding elements in B. Is it enough to say that some elements in C must therefore correspond in A, which is what surjective function, $g \circ f : A \rightarrow C$, would show?
Start by looking up the definition of "surjective". – Chris Eagle Mar 15 '12 at 20:50
yes, it is an exact duplicate. Sorry I did not see the previous question. feel free to close this or whatever needs to be done – Dominick Gerard Mar 15 '12 at 21:06
Is there a reason why you have a 0% accept rate? – dtldarek Mar 15 '12 at 21:24
@dtldarek sorry I am still new to this website, how can I change that? – Dominick Gerard Mar 15 '12 at 21:26
@DominickGerard I guess you already know ;-) Great! – dtldarek Mar 15 '12 at 21:34
## 2 Answers
$g$ being surjective means that EVERY element of $C$ is mapped to by (at least) one element of $B$ by $g$, and similarly for $f$. It should now be fairly obvious that if both $g$ and $f$ are surjective then their composition must also be surjective: given $c \in C$, pick $b \in B$ with $g(b) = c$, then pick $a \in A$ with $f(a) = b$, so that $(g \circ f)(a) = g(f(a)) = c$.
Hint:
Consider three sets of people $A$, $B$, $C$. Assume that for every person $c_1 \in C$ there exists a person $b_1 \in B$ that knows $c_1$, and that for every person $b_2 \in B$ ($b_1 = b_2$ might or might not be true) there exists $a_1 \in A$ that knows $b_2$. Can you conclude that for every $c_2 \in C$ there exists $a_2 \in A$ that (indirectly) knows $c_2$?
http://math.stackexchange.com/questions/23317/fourier-transforms | # Fourier Transforms
I'm having a terrible time trying to understand Fourier transforms. I'm very visual, so leaving the $X,Y,Z,t$ domain is not working for me :)
I'm trying to figure out the basics at the moment. Like, taking a Sine wave (they're odd right?) and converting it into its real and imaginary numbers. I'm pretty sure I got that working, but to make sure, what should the plotted data look like?
Also, how do I find the power spectrum of a transform? How do I use FT to identify the $n$ most significant frequencies in a signal? That last question shows how lost I am!
What I have:
I know how to get the real and imaginary numbers from a signal. I know how to get the phase and the magnitude. What I need to get is the power spectrum and the most significant frequencies. Also any dumbed down explanation of what's going on would be very helpful!
Thanks!
## 1 Answer
I'll try and give some intuition from an Electrical engineering perspective. It is useful to think of a Fourier transform as giving you the frequency domain picture of a given signal, i.e., it gives you a picture of both the amplitude and the phase of the different frequency components that make up the signal.
If you are familiar with Fourier series representation of periodic signals, the Fourier transform can be viewed as an extension to non periodic signals. Of course, not all continuous functions have a well defined Fourier transform, but if you look at typical engineering applications, that is not an issue.
When dealing with deterministic signals, the power spectrum is given by the squared magnitude of the Fourier transform. If $f(t)$ is the signal in the (continuous) time domain and $F(\omega)$ is the frequency domain representation, then the Power spectrum is $|F(\omega)|^2$.
When dealing with random signals, the power spectral density is the Fourier transform of the autocorrelation function of the process (Provided the process is wide sense stationary).
The above explanations carry over to the discrete domain as well. So, if you are working with DFTs of signal samples, things are not significantly different.
As for the N most significant frequencies, I think you need to take the DFT of your signal and then look at the N frequencies with the largest amplitudes (i.e. sort the spectrum in decreasing order of amplitude).
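A minimal NumPy sketch of that recipe (the test signal, sampling rate, and component frequencies below are made-up values for illustration):

```python
import numpy as np

# Toy signal: two sinusoids plus a little noise, sampled at fs Hz.
fs = 1000.0
t = np.arange(0, 1.0, 1.0 / fs)
x = 1.0 * np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)
x += 0.2 * np.random.randn(t.size)

X = np.fft.rfft(x)                          # DFT of the real-valued samples
freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)
power = np.abs(X) ** 2                      # power spectrum, |F(w)|^2

n = 2                                       # the n most significant frequencies
top = np.argsort(power)[::-1][:n]           # indices of the largest spectral peaks
print(freqs[top])                           # should come out near 50 and 120 Hz
```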
http://www.sagemath.org/doc/reference/number_fields/sage/rings/number_field/order.html | # Orders in Number Fields¶
AUTHORS:
• William Stein and Robert Bradshaw (2007-09): initial version
EXAMPLES:
We define an absolute order:
```sage: K.<a> = NumberField(x^2 + 1); O = K.order(2*a)
sage: O.basis()
[1, 2*a]
```
We compute a basis for an order in a relative extension that is generated by 2 elements:
```sage: K.<a,b> = NumberField([x^2 + 1, x^2 - 3]); O = K.order([3*a,2*b])
sage: O.basis()
[1, 3*a - 2*b, -6*b*a + 6, 3*a]
```
We compute a maximal order of a degree 10 field:
```sage: K.<a> = NumberField((x+1)^10 + 17)
sage: K.maximal_order()
Maximal Order in Number Field in a with defining polynomial x^10 + 10*x^9 + 45*x^8 + 120*x^7 + 210*x^6 + 252*x^5 + 210*x^4 + 120*x^3 + 45*x^2 + 10*x + 18
```
We compute a suborder, which has index a power of 17 in the maximal order:
```sage: O = K.order(17*a); O
Order in Number Field in a with defining polynomial x^10 + 10*x^9 + 45*x^8 + 120*x^7 + 210*x^6 + 252*x^5 + 210*x^4 + 120*x^3 + 45*x^2 + 10*x + 18
sage: m = O.index_in(K.maximal_order()); m
23453165165327788911665591944416226304630809183732482257
sage: factor(m)
17^45
```
class sage.rings.number_field.order.AbsoluteOrder(K, module_rep, is_maximal=None, check=True)¶
Bases: sage.rings.number_field.order.Order
EXAMPLES:
```sage: from sage.rings.number_field.order import *
sage: x = polygen(QQ)
sage: K.<a> = NumberField(x^3+2)
sage: V, from_v, to_v = K.vector_space()
sage: M = span([to_v(a^2), to_v(a), to_v(1)],ZZ)
sage: O = AbsoluteOrder(K, M); O
Order in Number Field in a with defining polynomial x^3 + 2
sage: M = span([to_v(a^2), to_v(a), to_v(2)],ZZ)
sage: O = AbsoluteOrder(K, M); O
Traceback (most recent call last):
...
ValueError: 1 is not in the span of the module, hence not an order.
sage: loads(dumps(O)) == O
True
```
Quadratic elements have a special optimized type:
absolute_discriminant()¶
Return the discriminant of this order.
EXAMPLES:
```sage: K.<a> = NumberField(x^8 + x^3 - 13*x + 26)
sage: O = K.maximal_order()
sage: factor(O.discriminant())
3 * 11 * 13^2 * 613 * 1575917857
sage: L = K.order(13*a^2)
sage: factor(L.discriminant())
3^3 * 5^2 * 11 * 13^60 * 613 * 733^2 * 1575917857
sage: factor(L.index_in(O))
3 * 5 * 13^29 * 733
sage: L.discriminant() / O.discriminant() == L.index_in(O)^2
True
```
absolute_order()¶
Return the absolute order associated to this order, which is just this order again since this is an absolute order.
EXAMPLES:
```sage: K.<a> = NumberField(x^3 + 2)
sage: O1 = K.order(a); O1
Order in Number Field in a with defining polynomial x^3 + 2
sage: O1.absolute_order() is O1
True
```
basis()¶
Return the basis over $$\ZZ$$ for this order.
EXAMPLES:
```sage: k.<c> = NumberField(x^3 + x^2 + 1)
sage: O = k.maximal_order(); O
Maximal Order in Number Field in c with defining polynomial x^3 + x^2 + 1
sage: O.basis()
[1, c, c^2]
```
The basis is an immutable sequence:
```sage: type(O.basis())
<class 'sage.structure.sequence.Sequence_generic'>
```
The generator functionality uses the basis method:
```sage: O.0
1
sage: O.1
c
sage: O.gens()
[1, c, c^2]
sage: O.ngens()
3
```
change_names(names)¶
Return a new order isomorphic to this one in the number field with given variable names.
EXAMPLES:
```sage: R = EquationOrder(x^3 + x + 1, 'alpha'); R
Order in Number Field in alpha with defining polynomial x^3 + x + 1
sage: R.basis()
[1, alpha, alpha^2]
sage: S = R.change_names('gamma'); S
Order in Number Field in gamma with defining polynomial x^3 + x + 1
sage: S.basis()
[1, gamma, gamma^2]
```
discriminant()¶
Return the discriminant of this order.
EXAMPLES:
```sage: K.<a> = NumberField(x^8 + x^3 - 13*x + 26)
sage: O = K.maximal_order()
sage: factor(O.discriminant())
3 * 11 * 13^2 * 613 * 1575917857
sage: L = K.order(13*a^2)
sage: factor(L.discriminant())
3^3 * 5^2 * 11 * 13^60 * 613 * 733^2 * 1575917857
sage: factor(L.index_in(O))
3 * 5 * 13^29 * 733
sage: L.discriminant() / O.discriminant() == L.index_in(O)^2
True
```
index_in(other)¶
Return the index of self in other. This is a lattice index, so it is a rational number if self isn’t contained in other.
INPUT:
• other – another absolute order with the same ambient number field.
OUTPUT:
a rational number
EXAMPLES:
```sage: k.<i> = NumberField(x^2 + 1)
sage: O1 = k.order(i)
sage: O5 = k.order(5*i)
sage: O5.index_in(O1)
5
sage: k.<a> = NumberField(x^3 + x^2 - 2*x+8)
sage: o = k.maximal_order()
sage: o
Maximal Order in Number Field in a with defining polynomial x^3 + x^2 - 2*x + 8
sage: O1 = k.order(a); O1
Order in Number Field in a with defining polynomial x^3 + x^2 - 2*x + 8
sage: O1.index_in(o)
2
sage: O2 = k.order(1+2*a); O2
Order in Number Field in a with defining polynomial x^3 + x^2 - 2*x + 8
sage: O1.basis()
[1, a, a^2]
sage: O2.basis()
[1, 2*a, 4*a^2]
sage: o.index_in(O2)
1/16
```
intersection(other)¶
Return the intersection of this order with another order.
EXAMPLES:
```sage: k.<i> = NumberField(x^2 + 1)
sage: O6 = k.order(6*i)
sage: O9 = k.order(9*i)
sage: O6.basis()
[1, 6*i]
sage: O9.basis()
[1, 9*i]
sage: O6.intersection(O9).basis()
[1, 18*i]
sage: (O6 & O9).basis()
[1, 18*i]
sage: (O6 + O9).basis()
[1, 3*i]
```
module()¶
Returns the underlying free module corresponding to this order, embedded in the vector space corresponding to the ambient number field.
EXAMPLES:
```sage: k.<a> = NumberField(x^3 + x + 3)
sage: m = k.order(3*a); m
Order in Number Field in a with defining polynomial x^3 + x + 3
sage: m.module()
Free module of degree 3 and rank 3 over Integer Ring
Echelon basis matrix:
[1 0 0]
[0 3 0]
[0 0 9]
```
sage.rings.number_field.order.EquationOrder(f, names)¶
Return the equation order generated by a root of the irreducible polynomial f or list of polynomials $$f$$ (to construct a relative equation order).
IMPORTANT: Note that the generators of the returned order need not be roots of $$f$$, since the generators of an order are – in Sage – module generators.
EXAMPLES:
```sage: O.<a,b> = EquationOrder([x^2+1, x^2+2])
sage: O
Relative Order in Number Field in a with defining polynomial x^2 + 1 over its base field
sage: O.0
-b*a - 1
sage: O.1
-3*a + 2*b
```
Of course the input polynomial must be integral:
```sage: R = EquationOrder(x^3 + x + 1/3, 'alpha'); R
Traceback (most recent call last):
...
ValueError: each generator must be integral
sage: R = EquationOrder( [x^3 + x + 1, x^2 + 1/2], 'alpha'); R
Traceback (most recent call last):
...
ValueError: each generator must be integral
```
class sage.rings.number_field.order.Order(K, is_maximal)¶
Bases: sage.rings.ring.IntegralDomain
An order in a number field.
An order is a subring of the number field that has $$\ZZ$$-rank equal to the degree of the number field over $$\QQ$$.
EXAMPLES:
```sage: K.<theta> = NumberField(x^4 + x + 17)
sage: K.maximal_order()
Maximal Order in Number Field in theta with defining polynomial x^4 + x + 17
sage: R = K.order(17*theta); R
Order in Number Field in theta with defining polynomial x^4 + x + 17
sage: R.basis()
[1, 17*theta, 289*theta^2, 4913*theta^3]
sage: R = K.order(17*theta, 13*theta); R
Order in Number Field in theta with defining polynomial x^4 + x + 17
sage: R.basis()
[1, theta, theta^2, theta^3]
sage: R = K.order([34*theta, 17*theta + 17]); R
Order in Number Field in theta with defining polynomial x^4 + x + 17
sage: K.<b> = NumberField(x^4 + x^2 + 2)
sage: (b^2).charpoly().factor()
(x^2 + x + 2)^2
sage: K.order(b^2)
Traceback (most recent call last):
...
ValueError: the rank of the span of gens is wrong
```
absolute_degree()¶
Returns the absolute degree of this order, i.e. the degree of this order over $$\ZZ$$.
EXAMPLES:
```sage: K.<a> = NumberField(x^3 + 2)
sage: O = K.maximal_order()
sage: O.absolute_degree()
3
```
ambient()¶
Return the ambient number field that contains self.
This is the same as self.number_field() and self.fraction_field()
EXAMPLES:
```sage: k.<z> = NumberField(x^2 - 389)
sage: o = k.order(389*z + 1)
sage: o
Order in Number Field in z with defining polynomial x^2 - 389
sage: o.basis()
[1, 389*z]
sage: o.ambient()
Number Field in z with defining polynomial x^2 - 389
```
basis()¶
Return a basis over $$\ZZ$$ of this order.
EXAMPLES:
```sage: K.<a> = NumberField(x^3 + x^2 - 16*x + 16)
sage: O = K.maximal_order(); O
Maximal Order in Number Field in a with defining polynomial x^3 + x^2 - 16*x + 16
sage: O.basis()
[1, 1/4*a^2 + 1/4*a, a^2]
```
class_group(proof=None, names='c')¶
Return the class group of this order.
(Currently only implemented for the maximal order.)
EXAMPLES:
```sage: k.<a> = NumberField(x^2 + 5077)
sage: O = k.maximal_order(); O
Maximal Order in Number Field in a with defining polynomial x^2 + 5077
sage: O.class_group()
Class group of order 22 with structure C22 of Number Field in a with defining polynomial x^2 + 5077
```
class_number(proof=None)¶
Return the class number of this order.
EXAMPLES:
```sage: ZZ[2^(1/3)].class_number()
1
sage: QQ[sqrt(-23)].maximal_order().class_number()
3
```
Note that non-maximal orders aren’t supported yet:
```sage: ZZ[3*sqrt(-3)].class_number()
Traceback (most recent call last):
...
NotImplementedError: computation of class numbers of non-maximal orders is not implemented
```
coordinates(x)¶
Returns the coordinate vector of $$x$$ with respect to this order.
INPUT:
• x – an element of the number field of this order.
OUTPUT:
A vector of length $$n$$ (the degree of the field) giving the coordinates of $$x$$ with respect to the integral basis of the order. In general this will be a vector of rationals; it will consist of integers if and only if $$x$$ is in the order.
AUTHOR: John Cremona 2008-11-15
ALGORITHM:
Uses linear algebra. The change-of-basis matrix is cached. Provides simpler implementations for _contains_(), is_integral() and smallest_integer().
EXAMPLES:
```sage: K.<i> = QuadraticField(-1)
sage: OK = K.ring_of_integers()
sage: OK_basis = OK.basis(); OK_basis
[1, i]
sage: a = 23-14*i
sage: acoords = OK.coordinates(a); acoords
(23, -14)
sage: sum([OK_basis[j]*acoords[j] for j in range(2)]) == a
True
sage: OK.coordinates((120+340*i)/8)
(15, 85/2)
sage: O = K.order(3*i)
sage: O.is_maximal()
False
sage: O.index_in(OK)
3
sage: acoords = O.coordinates(a); acoords
(23, -14/3)
sage: sum([O.basis()[j]*acoords[j] for j in range(2)]) == a
True
```
degree()¶
Return the degree of this order, which is the rank of this order as a $$\ZZ$$-module.
EXAMPLES:
```sage: k.<c> = NumberField(x^3 + x^2 - 2*x+8)
sage: o = k.maximal_order()
sage: o.degree()
3
sage: o.rank()
3
```
fraction_field()¶
Return the fraction field of this order, which is the ambient number field.
EXAMPLES:
```sage: K.<b> = NumberField(x^4 + 17*x^2 + 17)
sage: O = K.order(17*b); O
Order in Number Field in b with defining polynomial x^4 + 17*x^2 + 17
sage: O.fraction_field()
Number Field in b with defining polynomial x^4 + 17*x^2 + 17
```
fractional_ideal(*args, **kwds)¶
Return the fractional ideal of the maximal order with given generators.
EXAMPLES:
```sage: K.<a> = NumberField(x^2 + 2)
sage: R = K.maximal_order()
sage: R.fractional_ideal(2/3 + 7*a, a)
Fractional ideal (1/3*a)
```
free_module()¶
Return the free $$\ZZ$$-module contained in the vector space associated to the ambient number field, that corresponds to this ideal.
EXAMPLES:
```sage: K.<a> = NumberField(x^3 + x^2 - 2*x + 8)
sage: O = K.maximal_order(); O.basis()
[1, 1/2*a^2 + 1/2*a, a^2]
sage: O.free_module()
Free module of degree 3 and rank 3 over Integer Ring
User basis matrix:
[ 1 0 0]
[ 0 1/2 1/2]
[ 0 0 1]
```
An example in a relative extension. Notice that the module is a $$\ZZ$$-module in the absolute_field associated to the relative field:
```sage: K.<a,b> = NumberField([x^2 + 1, x^2 + 2])
sage: O = K.maximal_order(); O.basis()
[(-3/2*b - 5)*a + 7/2*b - 2, -3*a + 2*b, -2*b*a - 3, -7*a + 5*b]
sage: O.free_module()
Free module of degree 4 and rank 4 over Integer Ring
User basis matrix:
[1/4 1/4 3/4 3/4]
[ 0 1/2 0 1/2]
[ 0 0 1 0]
[ 0 0 0 1]
```
gen(i)¶
Return the $$i$$-th module generator of this order.
EXAMPLES:
```sage: K.<c> = NumberField(x^3 + 2*x + 17)
sage: O = K.maximal_order(); O
Maximal Order in Number Field in c with defining polynomial x^3 + 2*x + 17
sage: O.basis()
[1, c, c^2]
sage: O.gen(1)
c
sage: O.gen(2)
c^2
sage: O.gen(5)
Traceback (most recent call last):
...
IndexError: no 5th generator
sage: O.gen(-1)
Traceback (most recent call last):
...
IndexError: no -1th generator
```
gens()¶
Return a list of the module generators of this order.
Note
For a (much smaller) list of ring generators use ring_generators().
EXAMPLES:
```sage: K.<a> = NumberField(x^3 + x^2 - 2*x + 8)
sage: O = K.maximal_order()
sage: O.gens()
[1, 1/2*a^2 + 1/2*a, a^2]
```
ideal(*args, **kwds)¶
Return the integral ideal with given generators.
EXAMPLES:
```sage: K.<a> = NumberField(x^2 + 7)
sage: R = K.maximal_order()
sage: R.ideal(2/3 + 7*a, a)
Traceback (most recent call last):
...
ValueError: ideal must be integral; use fractional_ideal to create a non-integral ideal.
sage: R.ideal(7*a, 77 + 28*a)
Fractional ideal (7)
sage: R = K.order(4*a)
sage: R.ideal(8)
Traceback (most recent call last):
...
NotImplementedError: ideals of non-maximal orders not yet supported.
```
This function is called implicitly below:
```sage: R = EquationOrder(x^2 + 2, 'a'); R
Order in Number Field in a with defining polynomial x^2 + 2
sage: (3,15)*R
Fractional ideal (3)
```
The zero ideal is handled properly:
```sage: R.ideal(0)
Ideal (0) of Number Field in a with defining polynomial x^2 + 2
```
integral_closure()¶
Return the integral closure of this order.
EXAMPLES:
```sage: K.<a> = QuadraticField(5)
sage: O2 = K.order(2*a); O2
Order in Number Field in a with defining polynomial x^2 - 5
sage: O2.integral_closure()
Maximal Order in Number Field in a with defining polynomial x^2 - 5
sage: OK = K.maximal_order()
sage: OK is OK.integral_closure()
True
```
is_field(proof=True)¶
Return False (because an order is never a field).
EXAMPLES:
```sage: L.<alpha> = NumberField(x**4 - x**2 + 7)
sage: O = L.maximal_order() ; O.is_field()
False
sage: CyclotomicField(12).ring_of_integers().is_field()
False
```
is_integrally_closed()¶
Return True if this ring is integrally closed, i.e., is equal to the maximal order.
EXAMPLES:
```sage: K.<a> = NumberField(x^2 + 189*x + 394)
sage: R = K.order(2*a)
sage: R.is_integrally_closed()
False
sage: R
Order in Number Field in a with defining polynomial x^2 + 189*x + 394
sage: S = K.maximal_order(); S
Maximal Order in Number Field in a with defining polynomial x^2 + 189*x + 394
sage: S.is_integrally_closed()
True
```
is_maximal()¶
Returns True if this is the maximal order.
EXAMPLE:
```sage: k.<i> = NumberField(x^2 + 1)
sage: O3 = k.order(3*i); O5 = k.order(5*i); Ok = k.maximal_order(); Osum = O3 + O5
sage: Osum.is_maximal()
True
sage: O3.is_maximal()
False
sage: O5.is_maximal()
False
sage: Ok.is_maximal()
True
```
An example involving a relative order:
```sage: K.<a,b> = NumberField([x^2 + 1, x^2 - 3]); O = K.order([3*a,2*b]); O
Relative Order in Number Field in a with defining polynomial x^2 + 1 over its base field
sage: O.is_maximal()
False
```
is_noetherian()¶
Return True (because orders are always Noetherian)
EXAMPLES:
```sage: L.<alpha> = NumberField(x**4 - x**2 + 7)
sage: O = L.maximal_order() ; O.is_noetherian()
True
sage: E.<w> = NumberField(x^2 - x + 2)
sage: OE = E.ring_of_integers(); OE.is_noetherian()
True
```
is_suborder(other)¶
Return True if self and other are both orders in the same ambient number field and self is a subset of other.
EXAMPLES:
```sage: W.<i> = NumberField(x^2 + 1)
sage: O5 = W.order(5*i)
sage: O10 = W.order(10*i)
sage: O15 = W.order(15*i)
sage: O15.is_suborder(O5)
True
sage: O5.is_suborder(O15)
False
sage: O10.is_suborder(O15)
False
```
We create another isomorphic but different field:
```sage: W2.<j> = NumberField(x^2 + 1)
sage: P5 = W2.order(5*j)
```
This is False because the ambient number fields are not equal:
```sage: O5.is_suborder(P5)
False
```
We create a field that contains (in no natural way!) W, and of course again is_suborder returns False:
```sage: K.<z> = NumberField(x^4 + 1)
sage: M = K.order(5*z)
sage: O5.is_suborder(M)
False
```
krull_dimension()¶
Return the Krull dimension of this order, which is 1.
EXAMPLES:
```sage: K.<a> = QuadraticField(5)
sage: OK = K.maximal_order()
sage: OK.krull_dimension()
1
sage: O2 = K.order(2*a)
sage: O2.krull_dimension()
1
```
ngens()¶
Return the number of module generators of this order.
EXAMPLES:
```sage: K.<a> = NumberField(x^3 + x^2 - 2*x + 8)
sage: O = K.maximal_order()
sage: O.ngens()
3
```
number_field()¶
Return the number field of this order, which is the ambient number field that this order is embedded in.
EXAMPLES:
```sage: K.<b> = NumberField(x^4 + x^2 + 2)
sage: O = K.order(2*b); O
Order in Number Field in b with defining polynomial x^4 + x^2 + 2
sage: O.basis()
[1, 2*b, 4*b^2, 8*b^3]
sage: O.number_field()
Number Field in b with defining polynomial x^4 + x^2 + 2
sage: O.number_field() is K
True
```
random_element(*args, **kwds)¶
Return a random element of this order.
INPUT:
• args, kwds – parameters passed to the random integer function. See the documentation for ZZ.random_element() for details.
OUTPUT:
A random element of this order, computed as a random $$\ZZ$$-linear combination of the basis.
EXAMPLES:
```sage: K.<a> = NumberField(x^3 + 2)
sage: OK = K.ring_of_integers()
sage: OK.random_element() # random output
-2*a^2 - a - 2
sage: OK.random_element(distribution="uniform") # random output
-a^2 - 1
sage: OK.random_element(-10,10) # random output
-10*a^2 - 9*a - 2
sage: K.order(a).random_element() # random output
a^2 - a - 3
```
```sage: K.<z> = CyclotomicField(17)
sage: OK = K.ring_of_integers()
sage: OK.random_element() # random output
z^15 - z^11 - z^10 - 4*z^9 + z^8 + 2*z^7 + z^6 - 2*z^5 - z^4 - 445*z^3 - 2*z^2 - 15*z - 2
sage: OK.random_element().is_integral()
True
sage: OK.random_element().parent() is OK
True
```
A relative example:
```sage: K.<a, b> = NumberField([x^2 + 2, x^2 + 1000*x + 1])
sage: OK = K.ring_of_integers()
sage: OK.random_element() # random output
(42221/2*b + 61/2)*a + 7037384*b + 7041
sage: OK.random_element().is_integral() # random output
True
sage: OK.random_element().parent() is OK # random output
True
```
An example in a non-maximal order:
```sage: K.<a> = QuadraticField(-3)
sage: R = K.ring_of_integers()
sage: A = K.order(a)
sage: A.index_in(R)
2
sage: R.random_element() # random output
-39/2*a - 1/2
sage: A.random_element() # random output
2*a - 1
sage: A.random_element().is_integral()
True
sage: A.random_element().parent() is A
True
```
rank()¶
Return the rank of this order, which is the rank of the underlying $$\ZZ$$-module, or the degree of the ambient number field that contains this order.
This is a synonym for degree().
EXAMPLES:
```sage: k.<c> = NumberField(x^5 + x^2 + 1)
sage: o = k.maximal_order(); o
Maximal Order in Number Field in c with defining polynomial x^5 + x^2 + 1
sage: o.rank()
5
```
residue_field(prime, name=None, check=False)¶
Return the residue field of this order at a given prime, i.e. $O/pO$.
INPUT:
• prime – a prime ideal of the maximal order in this number field.
• name – the name of the variable in the residue field
• check – whether or not to check the primality of prime.
OUTPUT:
The residue field at this prime.
EXAMPLES:
```sage: R.<x> = QQ[]
sage: K.<a> = NumberField(x^4+3*x^2-17)
sage: P = K.ideal(61).factor()[0][0]
sage: OK = K.maximal_order()
sage: OK.residue_field(P)
Residue field in abar of Fractional ideal (61, a^2 + 30)
```
ring_generators()¶
Return generators for self as a ring.
EXAMPLES:
```sage: K.<i> = NumberField(x^2 + 1)
sage: O = K.maximal_order(); O
Maximal Order in Number Field in i with defining polynomial x^2 + 1
sage: O.ring_generators()
[i]
```
This is an example where 2 generators are required (because 2 is an essential discriminant divisor):
```sage: K.<a> = NumberField(x^3 + x^2 - 2*x + 8)
sage: O = K.maximal_order(); O.basis()
[1, 1/2*a^2 + 1/2*a, a^2]
sage: O.ring_generators()
[1/2*a^2 + 1/2*a, a^2]
```
An example in a relative number field:
```sage: K.<a, b> = NumberField([x^2 + x + 1, x^3 - 3])
sage: O = K.maximal_order()
sage: O.ring_generators()
[(-5/3*b^2 + 3*b - 2)*a - 7/3*b^2 + b + 3, (-5*b^2 - 9)*a - 5*b^2 - b, (-6*b^2 - 11)*a - 6*b^2 - b]
```
zeta(n=2, all=False)¶
Return a primitive n-th root of unity in this order, if it contains one. If all is True, return all of them.
EXAMPLES:
```sage: F.<alpha> = NumberField(x**2+3)
sage: F.ring_of_integers().zeta(6)
1/2*alpha + 1/2
sage: O = F.order([3*alpha])
sage: O.zeta(3)
Traceback (most recent call last):
...
ArithmeticError: There are no 3rd roots of unity in self.
```
class sage.rings.number_field.order.RelativeOrder(K, absolute_order, is_maximal=None, check=True)¶
Bases: sage.rings.number_field.order.Order
A relative order in a number field.
A relative order is an order in some relative number field
Invariants of this order may be computed with respect to the contained order.
absolute_discriminant()¶
Return the absolute discriminant of self, which is the discriminant of the absolute order associated to self.
OUTPUT:
an integer
EXAMPLES:
```sage: R = EquationOrder([x^2 + 1, x^3 + 2], 'a,b')
sage: d = R.absolute_discriminant(); d
-746496
sage: d is R.absolute_discriminant()
True
sage: factor(d)
-1 * 2^10 * 3^6
```
absolute_order(names='z')¶
Return underlying absolute order associated to this relative order.
INPUT:
• names – string (default: ‘z’); name of generator of absolute extension.
Note
There is a default variable name, since this absolute order is frequently used for internal algorithms.
EXAMPLES:
```sage: R = EquationOrder([x^2 + 1, x^2 - 5], 'i,g'); R
Relative Order in Number Field in i with defining polynomial x^2 + 1 over its base field
sage: R.basis()
[1, 6*i - g, -g*i + 2, 7*i - g]
sage: S = R.absolute_order(); S
Order in Number Field in z with defining polynomial x^4 - 8*x^2 + 36
sage: S.basis()
[1, 5/12*z^3 + 1/6*z, 1/2*z^2, 1/2*z^3]
```
We compute a relative order in alpha0, alpha1, then make the number field that contains the absolute order be called gamma:
```sage: R = EquationOrder( [x^2 + 2, x^2 - 3], 'alpha'); R
Relative Order in Number Field in alpha0 with defining polynomial x^2 + 2 over its base field
sage: R.absolute_order('gamma')
Order in Number Field in gamma with defining polynomial x^4 - 2*x^2 + 25
sage: R.absolute_order('gamma').basis()
[1/2*gamma^2 + 1/2, 7/10*gamma^3 + 1/10*gamma, gamma^2, gamma^3]
```
basis()¶
Return module basis for this relative order. This is a list of elements that generate this order over the base order.
Warning
For now this basis is actually just a basis over $$\ZZ$$.
EXAMPLES:
```sage: K.<a,b> = NumberField([x^2+1, x^2+3])
sage: O = K.order([a,b])
sage: O.basis()
[1, -2*a + b, -b*a - 2, -5*a + 3*b]
sage: z = O.1; z
-2*a + b
sage: z.absolute_minpoly()
x^4 + 14*x^2 + 1
```
index_in(other)¶
Return the index of self in other. This is a lattice index, so it is a rational number if self isn’t contained in other.
INPUT:
• other – another order with the same ambient absolute number field.
OUTPUT:
a rational number
EXAMPLES:
```sage: K.<a,b> = NumberField([x^3 + x + 3, x^2 + 1])
sage: R1 = K.order([3*a, 2*b])
sage: R2 = K.order([a, 4*b])
sage: R1.index_in(R2)
729/8
sage: R2.index_in(R1)
8/729
```
is_suborder(other)¶
Returns true if self is a subset of the order other.
EXAMPLES:
```sage: K.<a,b> = NumberField([x^2 + 1, x^3 + 2])
sage: R1 = K.order([a,b])
sage: R2 = K.order([2*a,b])
sage: R3 = K.order([a + b, b + 2*a])
sage: R1.is_suborder(R2)
False
sage: R2.is_suborder(R1)
True
sage: R3.is_suborder(R1)
True
sage: R1.is_suborder(R3)
True
sage: R1 == R3
True
```
sage.rings.number_field.order.absolute_order_from_module_generators(gens, check_integral=True, check_rank=True, check_is_ring=True, is_maximal=None, allow_subfield=False)¶
INPUT:
• gens – list of elements of an absolute number field that generates an order in that number field as a ZZ module.
• check_integral – check that each gen is integral
• check_rank – check that the gens span a module of the correct rank
• check_is_ring – check that the module is closed under multiplication (this is very expensive)
• is_maximal – bool (or None); set if maximality of the generated order is known
OUTPUT:
an absolute order
EXAMPLES: We have to explicitly import the function, since it isn’t meant for regular usage:
```sage: from sage.rings.number_field.order import absolute_order_from_module_generators
sage: K.<a> = NumberField(x^4 - 5)
sage: O = K.maximal_order(); O
Maximal Order in Number Field in a with defining polynomial x^4 - 5
sage: O.basis()
[1/2*a^2 + 1/2, 1/2*a^3 + 1/2*a, a^2, a^3]
sage: O.module()
Free module of degree 4 and rank 4 over Integer Ring
Echelon basis matrix:
[1/2 0 1/2 0]
[ 0 1/2 0 1/2]
[ 0 0 1 0]
[ 0 0 0 1]
sage: g = O.gens(); g
[1/2*a^2 + 1/2, 1/2*a^3 + 1/2*a, a^2, a^3]
sage: absolute_order_from_module_generators(g)
Order in Number Field in a with defining polynomial x^4 - 5
```
We illustrate each check flag – the output is the same, but in each case skipping the check may make the function run ever so slightly faster:
```sage: absolute_order_from_module_generators(g, check_is_ring=False)
Order in Number Field in a with defining polynomial x^4 - 5
sage: absolute_order_from_module_generators(g, check_rank=False)
Order in Number Field in a with defining polynomial x^4 - 5
sage: absolute_order_from_module_generators(g, check_integral=False)
Order in Number Field in a with defining polynomial x^4 - 5
```
Next we construct "fake" orders to illustrate turning off various check flags:
```sage: k.<i> = NumberField(x^2 + 1)
sage: R = absolute_order_from_module_generators([2, 2*i], check_is_ring=False); R
Order in Number Field in i with defining polynomial x^2 + 1
sage: R.basis()
[2, 2*i]
sage: R = absolute_order_from_module_generators([k(1)], check_rank=False); R
Order in Number Field in i with defining polynomial x^2 + 1
sage: R.basis()
[1]
```
If the order contains a non-integral element, even if we don’t check that, we’ll find that the rank is wrong or that the order isn’t closed under multiplication:
```sage: absolute_order_from_module_generators([1/2, i], check_integral=False)
Traceback (most recent call last):
...
ValueError: the module span of the gens is not closed under multiplication.
sage: R = absolute_order_from_module_generators([1/2, i], check_is_ring=False, check_integral=False); R
Order in Number Field in i with defining polynomial x^2 + 1
sage: R.basis()
[1/2, i]
```
We turn off all check flags and make a really messed up order:
```sage: R = absolute_order_from_module_generators([1/2, i], check_is_ring=False, check_integral=False, check_rank=False); R
Order in Number Field in i with defining polynomial x^2 + 1
sage: R.basis()
[1/2, i]
```
An order that lives in a subfield:
```sage: F.<alpha> = NumberField(x**4+3)
sage: F.order([alpha**2], allow_subfield=True)
Order in Number Field in alpha with defining polynomial x^4 + 3
```
sage.rings.number_field.order.absolute_order_from_ring_generators(gens, check_is_integral=True, check_rank=True, is_maximal=None, allow_subfield=False)¶
INPUT:
• gens – list of integral elements of an absolute order.
• check_is_integral – bool (default: True), whether to check that each generator is integral.
• check_rank – bool (default: True), whether to check that the ring generated by gens is of full rank.
• is_maximal – bool (or None); set if maximality of the generated order is known
• allow_subfield – bool (default: False), if True and the generators do not generate an order, i.e., they generate a subring of smaller rank, instead of raising an error, return an order in a smaller number field.
EXAMPLES:
```sage: K.<a> = NumberField(x^4 - 5)
sage: K.order(a)
Order in Number Field in a with defining polynomial x^4 - 5
```
We have to explicitly import this function, since typically it is called with K.order as above:
```sage: from sage.rings.number_field.order import absolute_order_from_ring_generators
sage: absolute_order_from_ring_generators([a])
Order in Number Field in a with defining polynomial x^4 - 5
sage: absolute_order_from_ring_generators([3*a, 2, 6*a+1])
Order in Number Field in a with defining polynomial x^4 - 5
```
If one of the inputs is non-integral, it is an error:
```sage: absolute_order_from_ring_generators([a/2])
Traceback (most recent call last):
...
ValueError: each generator must be integral
```
If the gens do not generate an order, i.e., do not generate a ring of full rank, then it is an error:
```sage: absolute_order_from_ring_generators([a^2])
Traceback (most recent call last):
...
ValueError: the rank of the span of gens is wrong
```
Both checking for integrality and checking for full rank can be turned off in order to save time, though one can get nonsense as illustrated below:
```sage: absolute_order_from_ring_generators([a/2], check_is_integral=False)
Order in Number Field in a with defining polynomial x^4 - 5
sage: absolute_order_from_ring_generators([a^2], check_rank=False)
Order in Number Field in a with defining polynomial x^4 - 5
```
sage.rings.number_field.order.each_is_integral(v)¶
Return True if each element of the list v of elements of a number field is integral.
EXAMPLES:
```sage: W.<sqrt5> = NumberField(x^2 - 5)
sage: from sage.rings.number_field.order import each_is_integral
sage: each_is_integral([sqrt5, 2, (1+sqrt5)/2])
True
sage: each_is_integral([sqrt5, (1+sqrt5)/3])
False
```
sage.rings.number_field.order.is_NumberFieldOrder(R)¶
Return True if R is either an order in a number field or is the ring $$\ZZ$$ of integers.
EXAMPLES:
```sage: from sage.rings.number_field.order import is_NumberFieldOrder
sage: is_NumberFieldOrder(NumberField(x^2+1,'a').maximal_order())
True
sage: is_NumberFieldOrder(ZZ)
True
sage: is_NumberFieldOrder(QQ)
False
sage: is_NumberFieldOrder(45)
False
```
sage.rings.number_field.order.relative_order_from_ring_generators(gens, check_is_integral=True, check_rank=True, is_maximal=None, allow_subfield=False)¶
INPUT:
• gens – list of integral elements of an absolute order.
• check_is_integral – bool (default: True), whether to check that each generator is integral.
• check_rank – bool (default: True), whether to check that the ring generated by gens is of full rank.
• is_maximal – bool (or None); set if maximality of the generated order is known
EXAMPLES: We have to explicitly import this function, since it isn’t meant for regular usage:
```sage: from sage.rings.number_field.order import relative_order_from_ring_generators
sage: K.<i, a> = NumberField([x^2 + 1, x^2 - 17])
sage: R = K.base_field().maximal_order()
sage: S = relative_order_from_ring_generators([i,a]); S
Relative Order in Number Field in i with defining polynomial x^2 + 1 over its base field
```
Basis for the relative order, which is obtained by computing the algebra generated by i and a:
```sage: S.basis()
[1, 7*i - 2*a, -a*i + 8, 25*i - 7*a]
```
http://mathoverflow.net/questions/30495?sort=oldest | ## pure dimensional and embedded components
Hi.
Let $X$ be a pure $n$-dimensional complex subspace of a manifold $Z$. Is it true that $X$ has no embedded components? (Perhaps that is obvious from the Weierstrass preparation theorem or the Noether normalisation theorem...)
Thank you.
Plenty of algebraic examples give analytic examples upon analytification; e.g., $z^2 = zw = 0$ in the $(z,w)$-plane $Z$ is topologically the $w$-axis (so pure dimension 1) and has the origin as an embedded point (in the sense that the maximal ideal of the local ring at the origin is the annihilator of $z$ in that local ring; this even holds in the completion of the local ring). – Boyarsky Jul 4 2010 at 11:42
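For readers who want to poke at this example computationally, a hedged Sage sketch along these lines should exhibit the embedded prime (these methods exist for polynomial ideals in Sage; the exact output formatting may vary by version):

```sage: R.<z,w> = QQ[]
sage: I = R.ideal(z^2, z*w)
sage: I.associated_primes()      # expect the minimal prime (z) and the embedded prime (z, w)
sage: I.primary_decomposition()  # e.g. (z) together with a (z, w)-primary component
```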
I think it often depends also on what the writer means when he says "pure dimensional". Boyarsky's example is certainly right though. On the other hand, in Eisenbud's book "Commutative algebra with a view towards algebraic geometry", Eisenbud defines "pure codimension 1" in terms of associated primes, not just minimal primes. In other words, Boyarsky's example is not pure *codimension* 1 in Eisenbud's terminology. – Karl Schwede Jul 4 2010 at 13:26
The question involves "pure dimension", which I have never seen used in the literature with a meaning that is sensitive to associated primes (and likewise for "pure codimension"). Does the algebraic geometry (as opposed to commutative algebra) literature use such alternative terminology? – Boyarsky Jul 4 2010 at 17:05
Usually when I see pure dimensional, it is in reference to reduced schemes/reducible varieties and so the issue of embedded primes would not show up. This is how it's done in Harris for example. I don't have strong feelings on this though, so you could be right about pure dimensional for schemes (but if it's non-standard...?) I've done a quick search of several canonical algebraic references and didn't find anything more illuminating unfortunately (I looked in the red book, Hartshorne, Bruns and Herzog, both Matsumuras, Nagata, Eisenbud and Harris, Eisenbud, Harris, Atiyah-Macdonald, etc). – Karl Schwede Jul 4 2010 at 18:45
## 1 Answer
The claim is false.
1) In analytic geometry, a complex space $X$ is said to be pure dimensional if all irreducible components of the reduction of $X$ have the same dimension (as Krull dimension of the local ring).
In fact, we can produce many examples of pure dimensional spaces with embedded components obtained by base change. We can consider the simple and classical example of a union of two planes which projects onto ${\Bbb C}^{2}$:
Let $X:=\lbrace(u,v,t,w)\in {\Bbb C}^{4}: uv=uw=ut=tw=0\rbrace$ be the union of two planes, which can be rewritten (by a change of coordinates) as the union of $X_{1}:=\lbrace t=w=0\rbrace$ and $X_{2}:=\lbrace t-u= w-v=0\rbrace$. Consider the projection $f:X\rightarrow {\Bbb C}^{2}$, $(u,v,t,w)\rightarrow (u,v)$.
Then $X$ is pure $2$-dimensional and $f$ is finite, open (universally open in algebraic geometry) and surjective (but not flat, because $X$ is not Cohen-Macaulay!). Denote by $Y$ the fiber product $X\times_{{\Bbb C}^{2}} X$. Then the canonical morphism (deduced by base change) $g:Y\rightarrow {\Bbb C}^{2}$ is finite, open and surjective too, with $Y$ pure dimensional but not reduced and with an embedded component: the origin in ${\Bbb C}^{4}$!
http://mathoverflow.net/questions/50684?sort=oldest | ## Why only three classical matrix ensembles in RMT? (Newbie question)
I am just starting out on understanding random matrix theory from a background in applied mathematics. I have a very basic question about the Gaussian ensembles: why are there only three classical Gaussian ensembles? This seems very mysterious to me. Is it historically motivated from applications, or is there a deeper reason? I thought it might arise from exhausting all possible classes of diagonalizable matrices of a certain symmetry, but I have no idea if this is true or not.
I haven't been able to find a good expository reference for this, so any thoughts along those lines are also welcome.
## 4 Answers
The short answer is that there are three kinds of positive-definite elementary inner products:
1. symmetric on $\mathbb{R}^n$, giving rise to the orthogonal ensemble;
2. hermitian on $\mathbb{C}^n$, giving rise to the unitary ensemble; and
3. hermitian on $\mathbb{H}^n$, giving rise to the symplectic ensemble.
Each one gives rise to a compact classical Lie group: $\mathrm{O}(n)$, $\mathrm{U}(n)$ and $\mathrm{Sp}(n)$, respectively. Compactness makes the integrals defining the matrix model convergent.
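A small NumPy sketch (my own illustration, not part of the answer) that samples the first two of these ensembles; the quaternionic case is usually handled by embedding $\mathbb{H}^n$ into $2n\times 2n$ complex matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# GOE: real symmetric matrix with Gaussian entries.
a = rng.standard_normal((n, n))
goe = (a + a.T) / 2

# GUE: complex Hermitian matrix with Gaussian entries.
b = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
gue = (b + b.conj().T) / 2

# Both have real spectra; suitably rescaled, their eigenvalue histograms
# approach Wigner's semicircle law as n grows.
ev_goe = np.linalg.eigvalsh(goe) / np.sqrt(n)
ev_gue = np.linalg.eigvalsh(gue) / np.sqrt(n)
print(ev_goe.min(), ev_goe.max(), ev_gue.min(), ev_gue.max())
```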
Incidentally, there was a recent question concerning classical Lie groups: mathoverflow.net/questions/50610/… – José Figueroa-O'Farrill Dec 29 2010 at 22:38
That helped a lot, thanks! But I am still unclear as to whether there could be more than three different cases. Could there be, for example, a fourth analogous ensemble arising from an Hermitian inner product over the octonions? – Jiahao Chen Jan 2 2011 at 1:56
@Jiahao, there is work out there on what are known as $\beta$ ensembles. I've seen a few talks but not participated in it myself. Here $\beta = 1$ is the orthogonal case, $\beta = 2$ is the unitary case, and $\beta = 4$ the symplectic case. However $\beta$ is usually allowed to run through all positive reals. As I recall there is no special behavior at $\beta=8$ which would indicate something special for octonions. – BSteinhurst May 1 2011 at 15:53
@Jiahao, @BSteinhurst: I seem to recall a recent discussion on how the five exceptional Lie groups can be thought of as belonging to an "infinite" series like ${\rm O}(n)$, ${\rm U}(n)$ and ${\rm Sp}(n)$ that fails to continue, and this is related to the failure of associativity for the octonions. Also, a reference that starts with the definition of inner product and builds all the groups that can arise from the definition is chapter 3 of Rossmann, "Lie Groups: An Introduction through Linear Groups". – WetSavannaAnimal aka Rod Vance Jul 26 2011 at 0:50
Here is the answer: http://arxiv.org/abs/cond-mat/9602137
Would you care to explain how this paper answers the question? – S. Carnahan♦ Jan 6 2011 at 18:13
Every Riemannian symmetric space can be associated with a random matrix ensemble, producing a total of 10 ensembles, as is nicely explained by Martin Zirnbauer: http://arxiv.org/abs/1001.0722
Thanks for the reference! It really brought to the fore the relationship between the relevant vector spaces and the resulting matrix ensembles. – Jiahao Chen May 8 2011 at 0:15
These three ensembles are Hermitian matrices over a finite-dimensional real division algebra, and by Frobenius' theorem the only finite-dimensional associative real division algebras are the real numbers, the complex numbers ($2$-dimensional) and the quaternions ($4$-dimensional). The octonions do not qualify since you do not have associativity. The motivation in physics comes from the fact that a Hermitian matrix represents a finite-dimensional Hamiltonian (a Hermitian operator) in quantum mechanics (then you add randomness, in order to take into account the lack of information about your system, and you let the size of the matrix, that is the dimension of the state space on which your Hamiltonian acts, go to infinity). In this setting, $N\times N$ quaternionic matrices have to be seen as a subclass of complex Hermitian matrices (but of size $2N\times 2N$), and both real symmetric and quaternionic Hermitian matrices are subclasses of complex Hermitian matrices with extra symmetries. Anyway, you may imagine many different matrix models relevant to study (look for Wigner matrices, the answer of Beenakker about other symmetries in physics, the generalized $\beta$-ensemble of Edelman, etc.).
http://mathoverflow.net/questions/68942/weil-pairing-and-millers-algorithm/68955 | ## Weil pairing and Miller’s algorithm
I'm studying Weil pairing and its applications in cryptography. I already know that it can be defined like this:
$$w(P, Q) = (-1)^n\frac{f_P(Q)}{f_Q(P)}\frac{f_Q}{f_P}(\mathcal{O})$$
where
$\textrm{div}(f_P) = n(P) - n(\mathcal{O})$ and $\textrm{div}(f_Q) = n(Q) - n(\mathcal{O})$.
This is not suitable for computation, so we shift the numerator by $R$ and the denominator by $S$ and we obtain:
$$w(P, Q) = (-1)^n\frac{g_P(Q+S)}{g_Q(P+R)}\frac{g_Q(R)}{g_P(S)}$$
where
$\textrm{div}(g_P) = n(P+R) - n(R)$ and $\textrm{div}(g_Q) = n(Q+S) - n(S)$.
Enter Miller's algorithm. In order to calculate $g_P(A)$ we define $h_k$ for $k = 0,\ldots,n$ as:
$$\textrm{div}(h_k) = k(P+R) - k(R) - (kP) + (\mathcal{O})$$
Now, $h_n = g_P$ and we can calculate $h_{k+l}(A)$ from $h_k(A)$ and $h_l(A)$. So we construct a "double-and-add" algorithm similar to fast exponentiation.
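Explicitly, the update used in each step is the standard Miller relation, which follows from the divisors above (up to a nonzero multiplicative constant that cancels in the final ratio):
$$h_{k+l} = h_k \, h_l \, \frac{\ell_{kP,\,lP}}{v_{(k+l)P}},$$
where $\ell_{kP,lP}$ is the line through $kP$ and $lP$ (the tangent line when $kP = lP$) and $v_{(k+l)P}$ is the vertical line through $(k+l)P$.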
Although $g_P(A)$ is never zero or infinity (for $A = Q+S$ or $A = S$), it can happen that during the execution of the "double-and-add" algorithm we encounter some $h_k(A)$ equal to zero or infinity. The solution is to randomly select new $R$ and $S$ and start over.
How do I prove that there exist such $R$ and $S$ that I will be able to calculate the Weil pairing of $P$ and $Q$?
If possible, I'd like to see some simple argument that does not refer to algebraic geometry, as I don't know anything about it - I come from cryptography.
I fear that any legitimate proof that the algorithm succeeds after relatively few re-starts, or even that there exists a choice that would succeed, will require a little serious consideration of how elliptic curves work. Not "algebraic geometry", really. Ed Silverman has several books about elliptic curves addressing a range of audiences, starting with one with minimal prerequisites. – paul garrett Jun 27 2011 at 18:30
Argh. Of course it's Joe Silverman. I'm an idiot. – paul garrett Jun 27 2011 at 19:31
I think I recall that the additive group is the direct product of two cyclic groups or something similar. So you choose points in the "other half" (for the algorithm probabilistically they have high enough order, for the proof one exists with high enough order) so that no collisions/interactions happen. – Aeryk Jun 27 2011 at 19:40
## 2 Answers
I had thought that if suitably formulated, Miller's algorithm works without having to go back and recompute if some intermediate point turns out to be the point at infinity. Miller's algorithm is described in many books, but I'll point for example to the 2nd edition of my Arithmetic of Elliptic Curves (Springer GTM 106, 2009). The algorithm is described in Figure 11.6 on page 394, where note that the formula for $h_{P,Q}$ on that page depends on whether the relevant slope is finite or infinite. I'm not really an algorithms person, so there are likely to be ways to make the computation more efficient than what I described. And as a final note, there's an alternative way to compute the Weil and Tate pairings using elliptic divisibility sequences (nets) due to Kate Stange, The Tate pairing via elliptic nets, Pairing-Based Cryptography -- PAIRING 2007, Springer LNCS 4575 (2007), 329-348 [http://eprint.iacr.org/2006/392].
@Paul Garrett: Who is "Ed" Silverman?
@Joe... Argh... Sorry! In fact, there really was Ed Silverman, a professor at Purdue. – paul garrett Jun 27 2011 at 19:29
(Actually, Ed Silverman was a math prof at Purdue some decades ago, is how the name got into my head... But, still, sorry!) – paul garrett Jun 27 2011 at 19:32
Thank's, I'll definitely look into that. Meanwhile, I think I found a simple argument, see my answer below. – Jasiu Jun 27 2011 at 20:20
1
@Paul: Not a problem, I figured it was a slip of the keyboard. So Edward Silverman (Ph.D. University of California, Berkeley 1948) had two PhD students at Purdue in the 1970s. (The math genealogy project is such fun!) – Joe Silverman Jun 27 2011 at 21:04
In order to calculate $h_{k+l}(A)$ from $h_k(A)$ and $h_l(A)$ you have to evaluate lines $l_1$ and $l_2$ at the point $A$, where $l_1$ passes through $kP$ and $lP$ and $l_2$ is a vertical line through $(k+l)P$. The "double-and-add" algorithm takes $\log_2(n)$ steps to compute $h_n(A)$, so only $O(\log_2(n))$ points of the form $mP$ are involved when determining the lines $l_1$ and $l_2$ at the various steps of the algorithm. There are certainly more points on an elliptic curve than that.
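A schematic Python sketch of that counting argument (it only tracks which multiples $mP$ the double-and-add ladder touches; the curve arithmetic and line evaluations are not implemented here):

```python
# Which multiples m*P appear while computing h_n by double-and-add?
def multiples_touched(n):
    touched = set()
    k = 0
    for bit in bin(n)[2:]:                 # most significant bit first
        touched.update({k, 2 * k})         # doubling step: tangent at kP, vertical at 2kP
        k *= 2
        if bit == "1":
            touched.update({k, k + 1})     # addition step: line through kP and P, vertical at (k+1)P
            k += 1
    return touched

n = 2**160 + 7                             # made-up value of n, for illustration only
print(len(multiples_touched(n)))           # O(log2(n)) multiples, tiny compared to n
```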
http://mathhelpforum.com/calculus/77944-help-limits.html | # Thread:
1. ## help with limits
Hi
I need help with the following limit:
$\displaystyle\lim_{a\to0} \dfrac{\ln(r/a)}{\ln(b/a)}$
where a < r < b and that a,b are constants.
thanks in advance
2. Apply L'Hopital's rule.
Note the numerator can be written: $\ln(r) - \ln(a)$, and similarly the denominator can be written: $\ln(b) - \ln(a)$.
The derivative with respect to $a$ of the numerator and the denominator are both: $-\frac{1}{a}$.
So the limit becomes:
$\lim_{a\to 0^+} \frac{-\frac{1}{a}}{-\frac{1}{a}} = 1$
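If you want a symbolic double-check, something like this in SymPy should confirm it (assuming SymPy's limit routine handles the logarithmic ratio, which it normally does):

```python
from sympy import symbols, log, limit

a = symbols('a', positive=True)
r, b = symbols('r b', positive=True)

print(limit(log(r / a) / log(b / a), a, 0, '+'))  # expected output: 1
```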
3. Originally Posted by august
Hi
I need help with the following limit:
$\displaystyle\lim_{a\to0} \dfrac{\ln(r/a)}{\ln(b/a)}$
where a < r < b and that a,b are constants.
thanks in advance
Can you check what the question should actually say? If a→0 then a is not a constant.
4. a is a constant but what happens in the limit as it goes to zero?
5. Apply L'Hopital's rule. You can check your answer by plugging in random positive numbers for r and b and then putting in a small number for a and checking it in your calculator.
http://stats.stackexchange.com/questions/22387/why-it-is-often-assumed-gaussian-distribution/27599 | # Why it is often assumed Gaussian distribution?
Quoting from a Wikipedia article on parameter estimation for a naive Bayes classifier: "a typical assumption is that the continuous values associated with each class are distributed according to a Gaussian distribution."
I understand that a Gaussian distribution is convenient for analytical reasons. However, is there any other real-world reason to make this supposition? What if the population consists of two sub-populations (smart/dumb people, large/small apples)?
-
5
Perhaps because of the central limit theorem, Gaussian distributions do fit many, though by no means all, measurements of physical phenomena? With sub-populations, one may get mixture Gaussian distributions. – Dilip Sarwate Feb 7 '12 at 14:34
1
The same section (I'm assuming you are looking at the Naive Bayes article) points out that binning is probably a better idea if you don't know the distribution. Someone should probably edit the wikipedia article to make it more clear that one should only assume gaussian if he can argue why it is gaussian (e.g. plot the data, or it follows the additive pattern of the CLT). – rm999 Feb 7 '12 at 15:42
1
– Elvis Feb 7 '12 at 23:33
## 3 Answers
My answer agrees with the first responder. The central limit theorem tells you that if your statistic is a sum or average it will be approximately normal under certain technical conditions, regardless of the distribution of the individual samples. But you are right that sometimes people carry this too far just because it seems convenient. If your statistic is a ratio and the denominator can be zero or close to it, the ratio will be too heavy-tailed for the normal.

Gosset found that even when you sample from a normal distribution, a normalized average where the sample standard deviation is used for the normalization constant has the t distribution with n-1 degrees of freedom, where n is the sample size. In his field experiments at the Guinness Brewery he had sample sizes in the range of 5-10. In those cases the t distribution is similar to the standard normal distribution in that it is symmetric about 0, but it has much heavier tails. Note that the t distribution does converge to the standard normal as n gets large.

In many cases the distribution you have might be bimodal because it is a mixture of two populations. Sometimes these distributions can be fit as a mixture of normal distributions, but they certainly do not look like a normal distribution.

If you look at a basic statistics textbook you will find many parametric continuous and discrete distributions that often come up in inference problems. For discrete data we have the binomial, Poisson, geometric, hypergeometric and negative binomial, to name a few. Continuous examples include the chi square, lognormal, Cauchy, negative exponential, Weibull and Gumbel.
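On the mixture point, here is a small simulation sketch (my own, assuming numpy is available) showing that data drawn from two sub-populations are poorly described by a single Gaussian:

```python
import numpy as np

rng = np.random.default_rng(0)

# 50/50 mixture of two normal sub-populations (e.g. "small" and "large" apples)
n = 100_000
which = rng.integers(0, 2, size=n)
sample = np.where(which == 0,
                  rng.normal(-2.0, 1.0, size=n),
                  rng.normal(+2.0, 1.0, size=n))

# Compare the empirical density with the single Gaussian that has the same
# mean and standard deviation: the fit is poor because the data are bimodal.
hist, edges = np.histogram(sample, bins=80, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
mu, sigma = sample.mean(), sample.std()
gauss = np.exp(-0.5 * ((centers - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
print("max |empirical - fitted Gaussian| density gap:", np.abs(hist - gauss).max())
```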
-
At least for me, the assumption of normality arises from two (very powerful) reasons:
1. The Central Limit Theorem.
2. The Gaussian distribution is a maximum entropy (with respect to the continuous version of Shannon's entropy) distribution.
I think you are aware of the first point: if your sample is the sum of many processes, then as long as some mild conditions are satisfied, the distribution is pretty much gaussian (there are generalizations of the CLT where you in fact don't have to assume that the r.v.s of the sum are identically distributed, see, e.g., the Lyapunov CLT).
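As a concrete illustration of this first point, here is a minimal simulation sketch (my own, assuming numpy): observations that are each the sum of many skewed terms already look nearly Gaussian:

```python
import numpy as np

rng = np.random.default_rng(1)

# Each observation is the sum of 200 independent, highly skewed terms
# (exponential draws); by the CLT the observations are close to Gaussian.
n_obs, n_terms = 50_000, 200
obs = rng.exponential(scale=1.0, size=(n_obs, n_terms)).sum(axis=1)

standardized = (obs - obs.mean()) / obs.std()
print("skewness ~ %.3f (0 for a Gaussian)" % np.mean(standardized ** 3))
print("excess kurtosis ~ %.3f (0 for a Gaussian)" % (np.mean(standardized ** 4) - 3))
```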
The second point is one that for some people (especially physicists) makes more sense: given the first and second moments of a distribution, the distribution which assumes the least information (i.e. the most conservative) with respect to the continuous Shannon entropy measure (which is somewhat arbitrary in the continuous case, but, at least for me, totally objective in the discrete case, but that's another story), is the gaussian distribution. This is a form of the so-called "maximum entropy principle", which is not so widespread because the actual usage of the form of the entropy is somewhat arbitrary (see this Wikipedia article for more information about this measure).

Of course, this last statement is also true for the multivariate case, i.e., the maximum entropy distribution (again, with respect to the continuous version of Shannon's entropy) given first-order ($\vec{\mu}$) and second-order information (i.e., the covariance matrix $\mathbf{\Sigma}$), can be shown to be a multivariate gaussian.

P.S.: I must add to the maximum entropy principle that, according to this paper, if you happen to know the range of variation of your variable, you have to make adjustments to the distribution you get by the maximum entropy principle.
-
The use of the CLT to justify the use of the Gaussian distribution is a common fallacy, because the CLT applies to the sample mean, not to individual observations. Therefore, increasing your sample size does not mean that the sample is closer to normality.
The Gaussian distribution is commonly used because:
1. Maximum likelihood estimation is straightforward.
2. Bayesian inference is simple (using conjugate priors or Jeffreys-type priors).
3. It is implemented in most of the numerical packages.
5. Lack of knowledge about other options (more flexible). ...
Of course, the best option is to use a distribution that takes into account the characteristics of your context, but this can be challenging. However, it is something that people should do.
"Everything should be made as simple as possible, but not simpler." (Albert Einstein)
I hope this helps.
Best wishes.
-
Why the downvote? what counterargument is for this explanation? – lmsasu Feb 7 '12 at 15:32
3
The belief that "The use of the CLT for justifying the use of the Gaussian distribution is a common fallacy because the CLT is applied to the sample mean" is itself a fallacy. For example, the electrons in a conductor are moving about at random. The small charge on each electron contributes to a net noise voltage (called thermal noise) that can be measured across the terminals of the conductor. Each contribution is small, there are many electrons, and so via the CLT, the noise is modeled as a Gaussian random process. This model has been cross-validated in numerous experimental studies. – Dilip Sarwate Feb 7 '12 at 15:40
1
This first paragraph is confusing and seems off-topic. When applying the CLT we are often saying that a distribution is gaussian because each individual observation is the sum/mean of many processes. If the first paragraph were removed I think this would be good answer. – rm999 Feb 7 '12 at 15:51
1
@rm999 "If the first paragraph were removed I think this would be a good answer". Actually, the first paragraph is the crux of the answer since the rest merely points out how the Gaussian model is helpful analytically -- which the OP already understands -- and is not responsive to the question asked. – Dilip Sarwate Feb 7 '12 at 16:07
@Dilip: (+1) The kernel of a very good answer is present in your first comment. Please consider expanding on it in a separate post. – cardinal Feb 7 '12 at 17:31
http://mathhelpforum.com/advanced-algebra/147076-solved-vector-spaces-over-division-rings.html | # Thread:
1. ## [SOLVED] Vector spaces over division rings.
Hi:
I have the following definition: let V be a vector space over a division ring D. A mapping a of V into V is called a linear transformation of V if it has the following two properties:

(x+y)a = xa+ya for x,y $\in$ V,

(x $\alpha$)a = (xa) $\alpha$ for x $\in$ V, $\alpha \in$ D.

And here I find an odd thing. If a is the mapping "multiplication by a scalar" (that is, by an element of D), then a is not in general a linear transformation of V according to the definition, because D need not be commutative. Any hint will be welcome.
2. First, in English, at least, a set with operations of addition and "scalar multiplication" with the scalars from a division ring rather than a field is usually called a "module", not a "vector space".
Second, if you define scalar multiplication by "$x\alpha$" for $\alpha$ in D, then you must either define $\alpha x= x\alpha$ or leave $\alpha x$ undefined. The fact that multiplication of scalars is not commutative has nothing to do with the relationship between $x\alpha$ and $\alpha x$.
3. Originally Posted by HallsofIvy
First, in English, at least, a set with operations of addition and "scalar multiplication" with the scalars from a division ring rather than a field is usually called a "module", not a "vector space".
Actually, Thomas Hungerford defines a vector space using division rings.
4. Originally Posted by ENRIQUESTEFANINI
Hi:
I have the following definition: let V be a vector space over a division ring D. A mapping a of V into V is called a linear transformation of V if it has the following two properties:

(x+y)a = xa+ya for x,y $\in$ V,

(x $\alpha$)a = (xa) $\alpha$ for x $\in$ V, $\alpha \in$ D.

And here I find an odd thing. If a is the mapping "multiplication by a scalar" (that is, by an element of D), then a is not in general a linear transformation of V according to the definition, because D need not be commutative. Any hint will be welcome.
multiplication by an element of D is a linear transformation iff that element is in the center of D.
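A concrete check of this with D the quaternions, whose center is the real scalars; the following is just an illustrative sketch, not part of the thread:

```python
def qmul(p, q):
    """Hamilton product of quaternions given as (a, b, c, d) = a + bi + cj + dk."""
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

i = (0, 1, 0, 0)   # candidate scalar beta for the map a: x -> x*beta
j = (0, 0, 1, 0)   # a scalar alpha used to test (x alpha)a = (x a) alpha
x = (1, 0, 0, 0)

lhs = qmul(qmul(x, j), i)   # (x*alpha)*beta
rhs = qmul(qmul(x, i), j)   # (x*beta)*alpha
print(lhs, rhs)             # (0,0,0,-1) vs (0,0,0,1): multiplication by i is not
                            # a linear transformation, since i is not central in H
```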
5. Originally Posted by roninpro
Actually, Thomas Hungerford defines a vector space using division rings.
And Neal H. McCoy too.
6. Originally Posted by NonCommAlg
multiplication by an element of D is a linear transformation iff that element is in the center of D.
Quite understandable. And if D is a field the center of D is D. So in this case that mapping is always a linear transformation. Thanks a lot, NonCommAlg.
http://mathoverflow.net/questions/74252/things-that-should-be-positive-integers-really/74266 | ## Things that should be positive integers…really?
Kronecker. Nuff said. Even the numbers themselves historically started as positive integers and were subsequently generalized to hell and back. Here are some other well known concepts that "should" involve $\mathbb{N}$ but were generalized to $\mathbb{Q}$, $\mathbb{R}$ or even $\mathbb{C}$:
1. Dimension $\rightarrow$ Hausdorff dimension.
2. Factorial $\rightarrow$ gamma function.
3. Differentation $\rightarrow$ half-differentation (etc.)
So, can you extend this small list to a big one?
(Motivation: Some hypothetical knot polynomial I calculated with demanded a dimension of its associated group representation - thus the "rt" tag - of 60/11. That is noooooot boding well for its existence. :-)
-
6
$S_n$ $\to$ $S_t$ (Deligne). – darij grinberg Sep 1 2011 at 12:54
3
Also, the classical: $\binom{n}{k}$ $\to$ $\binom{x}{k}=\frac{x\left(x-1\right)...\left(x-k+1\right)}{k!}$. – darij grinberg Sep 1 2011 at 12:56
5
I would rather be interested in the complementary question: what entities, defined on the positive integers, cannot be extended in any sensible way to a larger set? – Federico Poloni Sep 1 2011 at 13:38
3
Oh, and this should definitely be Community Wiki – Federico Poloni Sep 1 2011 at 13:39
2
The order of a numerical method in solving $f(x)=0$. At the beginning, it appears to be an integer: $1$ for many methods based upon the Picard contraction principle, $2$ for the Newton--Raphson method. But the order of the secant method is $\phi$, the Golden ratio! – Denis Serre Sep 1 2011 at 14:38
## 7 Answers
The writhe is the fundamental differential geometric invariant of a closed space curve. I think it is the most useful topological invariant outside mathematics- biologists use it to study circular DNA molecules, and chemists use it in the study of long polymers. For space curve $C(t)$ it's defined as the double integral
$$\frac{1}{4\pi}\int_{C\times C}\frac{\bigl(C^\prime(s)\times C^\prime(t)\bigr)\cdot (C(s)-C(t))}{|C(s)-C(t)|^3}\,ds\, dt.$$
but most people think of it as the number of positive crossings minus the number of negative crossings. This quantity is naturally an integer. The integral formula is based on the Gauss integral for the linking number, but has a complicated history, with a lot of contribution from non-mathematicians.
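For the record, here is a rough numerical sketch of that double integral (my own illustration; a standard trefoil parametrization is assumed, and the printed value is only a discretized estimate):

```python
import numpy as np

# A common trefoil parametrization, assumed here purely for illustration
N = 400
t = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
C = np.stack([np.sin(t) + 2.0 * np.sin(2.0 * t),
              np.cos(t) - 2.0 * np.cos(2.0 * t),
              -np.sin(3.0 * t)], axis=1)
dC = np.stack([np.cos(t) + 4.0 * np.cos(2.0 * t),
               -np.sin(t) + 4.0 * np.sin(2.0 * t),
               -3.0 * np.cos(3.0 * t)], axis=1)

dt = 2.0 * np.pi / N
wr = 0.0
for a in range(N):
    diff = C[a] - C                      # C(s) - C(t) for all t at once
    cross = np.cross(dC[a], dC)          # C'(s) x C'(t)
    dist3 = np.linalg.norm(diff, axis=1) ** 3
    integrand = np.einsum("ij,ij->i", cross, diff)
    mask = np.arange(N) != a             # skip the singular diagonal
    wr += np.sum(integrand[mask] / dist3[mask]) * dt * dt
print("writhe estimate:", wr / (4.0 * np.pi))
```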
But, what to do, most real-life long molecules aren't closed space curves. And so biologists, chemists, and physicists, followed by mathematicians, generalized the writhe to open space curves. The idea is that writhe makes sense for a tangle diagram, so they integrated over all projection angles of the open space curve. The result is a definition for the writhe of an open space curve, which is a real number (and which can be efficiently estimated). I think it is one of differential geometry's most useful real numbers for studying open space curves where they occur in biology, chemistry, and physics.
A nice survey of writhe in various contexts is Berger and Prior's The writhe of open and closed space curves.
-
The natural extension of Euler characteristic to orbifolds is valued in Q.
-
There are several natural extensions of Euler characteristic to orbifolds. Some with values in $\mathbb{Q}$, others with vaues in $\mathbb{Z}$. – euklid345 Sep 1 2011 at 15:34
Motivic integration, where the underlying measures are valued in rings of motives.
-
1
Which is a simultaneous generalization of the integer-valued functions "Euler characteristic" and "count points over finite fields"! – JSE Sep 1 2011 at 16:14
Amen! Also, I guess I bent the rules a little, since measures aren't usually $\mathbf{Z}$-valued, but I think the discrepancy between real numbers and rings of motives is big enough that this still counts. :) – David Hansen Sep 1 2011 at 20:25
A natural generalization of cardinality of sets is groupoid cardinality, which is a real number.
-
nice to know this term (totally new to me)! – S. Sra Sep 1 2011 at 22:10
3
Be warned that number theorists call it "mass" instead of groupoid cardinality. – Omar Antolín-Camarena Sep 3 2011 at 15:02
Let $f:\mathbb{Z}_p\to\mathbb{Z}_p$ be a "nice" map on the $p$-adic integers (or a map on some more general space with a $p$-adic topology). People who study $p$-adic dynamics investigate what the iterates of $f$ do to points of the space. So if we fix a point $\alpha\in\mathbb{Z}_p$, we can define an iteration map $$I : \mathbb{N} \longrightarrow \mathbb{Z}_p,\qquad I(n) = f^n(\alpha).$$ The map $I$ is naturally defined on $\mathbb{N}$, and if $f$ is invertible, then it clearly extends to $\mathbb{Z}$. But for various applications, one would like to evaluate $I(n)$ for $n\in\mathbb{Z}_p$. So the example is
• iteration an integral number of times $\to$ iteration a $p$-adic number of times.
A very pretty application of this idea is in the paper:
Bell, J. P. ; Ghioca, D. ; Tucker, T. J. The dynamical Mordell-Lang problem for étale maps. Amer. J. Math. 132 (2010), no. 6, 1655--1675.
-
It's interesting to note that this has happened with several notions of "dimension". The Krull dimension of rings has been extended to notions such as GK-dimension, for example.
As a complementary answer... what would be a ring of characteristic $-\pi$?
-
11
A ring of characteristic $\pi$ should at least be perfectly round! – Roland Bacher Sep 2 2011 at 9:16
(Probably should count towards your Dimension example)
Sobolev Spaces of Integer Dimension $\rightarrow$ Sobolev(–Slobodeckij) Spaces of fractional Dimension
These have important applications for numerically solving boundary integral equations.
-
http://mathoverflow.net/revisions/56718/list | ## Return to Question
6 added 10 characters in body
Let $X$ be a metrizable topological space and $G$ be a locally compact group. Given a continuous (left) action of $G$ on $X$, is there a metric on $X$, compatible with the topology, for which the action of $G$ becomes an isometric action? Conversely, given a metric on $X$, is there a nontrivial action of $G$ on $X$ that preserves the metric?
I am looking for the most general necessary and sufficient conditions and any possible obstructions. For the first question, the answer is obviously positive when $G$ is compact: one chooses a metric $d$ on $X$ and simply does an "averaging process along the orbits" by defining $$\rho(x,y)= \int_{G} d(g^{-1}.x,g^{-1}.y) dg.$$ I suspect that a similar idea would work more generally using a "cut-off" function on $X$ when the action of $G$ on $X$ is proper. Any connections to amenability (of the group or the group action) would also be interesting.
5 added 43 characters in body
Let $X$ be a metrizable topological space and $G$ be a locally compact group. Given a continuous (left) action of $G$ on $X$, is there a metric on $X$, compatible with the topology, for which the action of $G$ becomes an isometric action? Conversely, given a metric on $X$, is there a nontrivial action of $G$ on $X$ that preserves the metric?
I am looking for the most general necessary and sufficient conditions and any possible obstructions. For the first question, the answer is obviously positive when $G$ is compact: one chooses a metric $d$ on $X$ and simply does an "averaging process along the orbits" by defining $$\rho(x,y)= \int_{G} d(g.x,g.y) dg.$$ I suspect that a similar idea would work more generally using a "cut-off" function on $X$ when the action of $G$ on $X$ is proper. Any connections to amenability (of the group or the group action) would also be interesting.
4 typo in title
# When do isometric actions exist?
3 added 10 characters in body; added 32 characters in body; deleted 32 characters in body
Let $X$ be a metrizable space and $G$ be a locally compact group. Given a continuous (left) action of $G$ on $X$, is there a metric on $X$ for which the action of $G$ becomes an isometric action? Conversely, given a metric on $X$, is there a nontrivial action of $G$ on $X$ that preserves the metric?
I am looking for the most general necessary and sufficient conditions and any possible obstructions. For the first question, the answer is obviously positive when $G$ is compact: one chooses a metric $d$ on $X$ and simply does an "averaging process along the orbits" by defining $$\rho(x,y)= \int_{G} d(g.x,g.y) dg.$$ I suspect that a similar idea would work more generally using a "cut-off" function on $X$ when the action of $G$ on $X$ is proper. Any connections to amenability (of the group or the group action) would also be interesting.
2 added 26 characters in body
Let $X$ be a metrizable space and $G$ be a locally compact group. Given a continuous (left) action of $G$ on $X$, is there a metric on $X$ for which the action of $G$ becomes an isometric action? Conversely, given a metric on $X$, is there an action of $G$ on $X$ that preserves the metric?
I am looking for the most general necessary and sufficient conditions and any possible obstructions. For the first question, the answer is obviously positive when $G$ is compact: one chooses a metric $d$ on $X$ and simply does an "averaging process along the orbits" by defining $$\rho(x,y)= \int_{G} d(g.x,g.y) dg.$$ I suspect that a similar idea would work more generally using a "cut-off" function on $X$ when the action of $G$ on $X$ is proper. Any connections to amenability (of the group or the group action) would also be interesting.
1
# When do isemtric actions exist?
Let $X$ be a metrizable space and $G$ be a locally compact group. Given a continuous action of $G$ on $X$, is there a metric on $X$ for which the action of $G$ becomes an isometric action? Conversely, given a metric on $X$, is there an action of $G$ on $X$ that preserves the metric?
I am looking for the most general necessary and sufficient conditions and any possible obstructions. For the first question, the answer is obviously positive when $G$ is compact: one chooses a metric $d$ on $X$ and simply do an "averaging process" by defining $$\rho(x,y)= \int_{G} d(g.x,g.y) dg.$$ I suspect that a similar idea would work more generally using a "cut-off" function on $X$ when the action of $G$ on $X$ is proper. Any connections to amenability (of the group or the group action) would also be interesting.
http://mathoverflow.net/questions/82485/dedekind-spectra | ## Dedekind Spectra
Is there a class of ring spectra that corresponds to and/or extends the class of Dedekind rings from traditional algebra? Is there a notion of "ring of integers" of a ring spectrum? Additionally, is there a notion of an ideal class group of a Dedekind ring spectrum (Picard group)?
Thanks
PS This question was already asked on math.stackexchange, but I have heard there are some people on this site working on this sort of thing.
-
It seems like you could just pretty much directly try to port the definitions. The definition of module seems clear. For an ideal you just need a concept of what counts as an inclusion, which might just be a set-theoretic inclusion (up to homotopy) but might be stranger. The definition of a principal ideal is obvious, and this gets you the Picard group. Prime ideal seems a bit subtle. I would say that, for any two submanifolds not in the ideal, their product is homotopic to a manifold not in the ideal. I doubt a ring of integers would work because defining the field of fractions seems hard. – Will Sawin Dec 2 2011 at 20:27
I'm not sure I follow your use of manifolds? – Jon Beardsley Dec 2 2011 at 20:48
@Will I mean, okay yes there is the notion of modules over ring spectra in homotopy theory ($E$ is a module over $R$ if there is some map $E\wedge R\to E$ satisfying some diagrams, etc.). So for ideal I'm guessing you just use subspectra because we can always localize at that subspectrum? How is the definition of a principal ideal obvious? – Jon Beardsley Dec 2 2011 at 20:51
3
Actually, I think it's much more complicated than Will's comment suggests. Every map is homotopic to an inclusion, so "subobjects" are really not the way to go if you want to define ideals. Jeff Smith had a clever idea to do this by looking in the arrow category Arr$(C)$ where objects are morphisms in $C$ and morphisms are commutative squares. My advisor, Mark Hovey, has written an extensive unpublished paper on this subject and has been giving talks on the subject for the past year. He found the requisite model structure and discussed a homotopy theory of ideals. I guess you could email him – David White Dec 2 2011 at 22:11
1
Let me comment that I thought a bit about how to do things like principal ideals and even that was not at all obvious. Ideals act very nicely in some regards, but they are objects in a different category now and a lot of things become very hard to define or compute. Rings of integers and ideal class groups seem wildly out of reach right now. – David White Dec 2 2011 at 22:13
## 1 Answer
In trying to generalize concepts from algebra to spectra, there are several issues that come into play.
In order for a concept in stable homotopy theory to be intrinsically meaningful it generally needs to be invariant under weak equivalence - whatever the appropriate notion of "weak equivalence" is (of spectra, of commutative ring spectra, etc). There are multiple reasons for this. On one hand, it's the homotopy category rather than the category of spectra that is "algebraic" enough to support generalizations like this. On the other hand, there is the practical consideration that there are many different models for spectra (e.g. symmetric spectra, orthogonal spectra, EKMM spectra, various diagram categories); if a concept isn't meaningful from the point of view of homotopy theory, it may have entirely different meanings in different models.
In addition, a concept may have several different directions of generalization. You could generalize an algebraic concept to one that's defined in terms of homotopy groups; this is easy to define and check, but tends to be less interesting and not satisfied in some principal cases of interest. You could try to phrase things in terms of categorical properties, and express a generalization that way; in order for this to be sensible you generally have to replace all concepts by their appropriate "derived" notions (derived pullback, derived invariants under a group action, etc), which makes it difficult to work with concepts that have almost no exactness properties. You could do something ad-hoc.
For these reasons, it's not a straightforward procedure. It's often a good idea to have some examples in mind or be looking for an application, rather than just generalizing for its own sake. A handy test for how difficult it will be is to try and determine a generalization for differential graded modules and algebras first.
Here are some of the pieces that show up in the definition of a Dedekind domain.
• Integral domains. I don't really know a useful generalization that doesn't involve being an integral domain on homotopy groups. This leaves out a lot of interesting examples - there is a large zoo of regularity conditions in algebraic geometry that are not satisfied by a large class of ring objects in homotopy theory.
• Fields of fractions. Inverting elements - and localization in general - is something that works well in homotopy theory, and tends to give the expected results.
• Integral closure. This one is much more difficult, because it involves solutions of an equation, and trying to "adjoin" elements in the fraction field. As David White mentioned, the concept of being a "subobject" is one that doesn't translate well, and so there's not a straightforward way to take an element in the homotopy of the fraction field and adjoin it to the base ring. In general it is very difficult to construct commutative ring objects with prescribed properties.
• Rings of integers. See above. If you figure out a useful notion for this, I'd love to hear from you.
• Ideals. Again, there isn't an intrinsic meaning to "subobject" or "quotient". There are generalizations of the concept of an "ideal", but all the ones that I'm aware of boil down to an ideal being, by definition, something that gives you a map out to another ring. More problematically, because taking the "quotient by an ideal" usually involves a mapping cone/cofiber, being an ideal isn't a property of a map $I \to R$ of modules - it is all the extra data that allows you to construct a ring structure on $R/I$. In addition, for $R$ commutative, ideals as an associative ring and ideals as a commutative ring become separate concepts.
• Principal ideals. A principal ideal, ideally, would be generated by an element in homotopy that you want to take the quotient by. Simply put, given an element in homotopy you may not be able to construct such an ideal even in cases that look amenable. If you can construct an ideal so that there is a quotient associative algebra, there are likely to be many different choices of quotient algebra structures. Being able to construct a quotient commutative algebra is an entirely different, much harder problem that often doesn't have a solution, and when we hope or expect it to have a solution we often can't prove it. There are decades-old conjectures about some of these.
• Dimension. To define dimension you usually need prime ideals. There are useful definitions of dimension that use thick subcategories of the homotopy category of perfect complexes - see the work of Paul Balmer in particular. However, it is much harder to translate "deep" results about dimension into homotopy theory. More seriously, a heavy ratio of the interesting examples we know don't satisfy anything like a Noetherian property.
Having said all of this, the subject is in flux and we understand more as time goes on.
The Picard group exists very generally, for some notion of "exists". A general definition was given in a paper by Hopkins, Mahowald, and Sadofsky. For a strictly commutative ring spectrum $R$, an invertible module $M$ is one such that there exists an object $N$ such that $M \wedge_R N \simeq R$. The category of such $M$ is the Picard groupoid; if the homotopy category is essentially small then there is an associated Picard group. This has applications in a number of areas, including computational ones. You can also interpret $RO(G)$-graded homotopy groups in equivariant homotopy theory in terms of some part of the Picard group - but $RO(G)$-graded homotopy predates this work by a very significant margin.
-
@Tyler Thanks very much for this exposition. I will probably spend some time thinking about it. I'm extremely interested in such issues, at least, in the questions, though I'm not sure my education has progressed far enough yet for me to really talk about answers. My advisor recommended asking you specifically to find out what sort of things made such generalizations difficult, so I was delighted to see that you had responded. – Jon Beardsley Dec 5 2011 at 17:35
I have heard that Rognes has a concept of field extensions in the homotopy category? If we wanted to be somewhat closed minded, I suppose couldn't we look at "extensions" of HQ as "number fields" Hk? And then somehow look at finite wedges of HZ (from the maximal order point of view)? This idea is probably rather silly and useless. I imagine we'd lose a lot of specificity in some sense, but I guess we might have some general framework within which HO_k might be an example. I guess what I'd really like to see is some use of homotopy theory to answer number theory questions in greater generality. – Jon Beardsley Dec 5 2011 at 18:24
1
@JBeardz: I'm happy to discuss anything about this kind of material; feel free to contact me. – Tyler Lawson Dec 6 2011 at 2:41
3
Rognes has the notion of a Galois extension of commutative ring spectra (see his memoir, "Galois extensions of structured ring spectra"), and you can indeed use rings of integers in number fields to produce examples. One thing to note is that Rognes' definition will not apply to examples that have ramified (finite) primes, and for example there are no Galois extensions of the sphere spectrum or HZ without inverting some primes first. There are some specific reasons for this (for example, it's not possible to adjoin a p'th root of unity to the sphere without inverting p first). – Tyler Lawson Dec 6 2011 at 2:44
1
This is a really great answer, thanks! That said, I wanted to mention that some notions of dimension do seem to work well, namely homological dimensions like right global dimension or weak dimension. This won't help with the OP's question, where he really needs Krull dimension (for sufficiently nice rings the global dimension agrees with Krull dimension, but his rings won't be that nice), but it might help someone else who stumbles upon this thread. Here's a paper on the subject: arxiv.org/abs/1001.0902 – David White Dec 6 2011 at 14:39
http://mathoverflow.net/questions/57276/convolution-on-symmetric-group-sn/71717 | ## Convolution on symmetric group Sn
I have question regarding convolution of functions (say g and h) defined on Sn. In Fourier space this is equivalent to IFT(G.H), where G = FT(g) and H = FT(h).
Fast Fourier transforms (Clausen's FFT) proceed by recursively breaking down the Fourier transform over Sn into smaller transforms over S_(n-1), S_(n-2)..., and computing each S_(k)-transform from the k independent S_(k-1) transforms.

Now the question I have is - how does the convolution of two functions (g & h, each defined on Sn) translate to S_(n-1)? In other words, is there any defining expression involving G' and H' that provides the n-1 independent S_(n-1) transforms needed to get the final convolution?
G': descendant Fourier transform of G on S_(n-1) H': descendant Fourier transform of H on S_(n-1) FT: Fourier transform IFT: Inverse Fourier transform
I would appreciate if anyone can direct me to some papers/books which talk about these concepts.
DP
-
## 2 Answers
It is not an answer to your question, but I hope it will help:
In general, the arithmetic complexity of convolution in non-abelian groups is "equivalent" to the complexity of matrix multiplication. Here is the reason why:

The way of doing the Fourier transform in an abelian group $A$ can be described in the following way: let $f,g \in F[A]$. We know that $F[A]$ is isomorphic to the space $F^A$ with pointwise multiplication. Let $T$ (which is actually the Fourier transform) be this isomorphism. If we want to calculate $f*g$, then we calculate $T^{-1}(T(f)\cdot T(g))$. In the case of a non-abelian group like $S_n$, it holds that $F[G]$ is isomorphic to a direct sum of matrix algebras, that is, $F[G]\simeq\oplus M_{n_i}$. Thus using the same formula you can calculate convolution in $S_n$, but now you will need to multiply matrices.
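To make the group convolution itself concrete (independently of the FFT question), here is a small direct computation on S_3; this is just an illustrative sketch with permutations represented as tuples:

```python
from itertools import permutations
import random

n = 3
group = list(permutations(range(n)))        # the 6 elements of S_3

def compose(s, t):                          # (s t)(x) = s(t(x))
    return tuple(s[t[x]] for x in range(n))

def inverse(s):
    inv = [0] * n
    for x, y in enumerate(s):
        inv[y] = x
    return tuple(inv)

# Two random functions on S_3 and their convolution (f*g)(s) = sum_t f(t) g(t^{-1} s)
random.seed(0)
f = {s: random.random() for s in group}
g = {s: random.random() for s in group}
conv = {s: sum(f[t] * g[compose(inverse(t), s)] for t in group) for s in group}
print(conv)
```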
-
I guess you described convolution on Sn using matrix multiplication. My question is slightly different. Thanks for the response though. Deepti Pachauri – Deepti Pachauri Mar 7 2011 at 19:05
http://www.cbse.ucsc.edu/sites/default/files/convolutions.pdf http://www.mpi-hd.mpg.de/astrophysik/HEA/internal/Numerical_Recipes/f13-1.pdf
http://math.stackexchange.com/questions/103399/how-can-i-prove-that-square-root-of-n-is-space-constructible?answertab=votes | # how can i prove that square root of n is space constructible
I know that the square root of n is space-constructible. I can't prove it from the definition of space-constructibility. How can I show that only $\sqrt{n}$ space is used?
-
1
I don't know what space-constructible means. I know $\sqrt n$ is "Euclid-constructible," that is, I can show you how to construct it with ruler and compass, if that's what you want. – Gerry Myerson Jan 28 '12 at 22:02
2
@Gerry: The tag says "complexity," so... – anon Jan 28 '12 at 22:10
1
Thanks. Space-constructible means that it is possible, using o(n) space, to compute the binary representation of f(n) from the input 1^n. In our case f(n) is sqrt. Maybe the Euclid-constructible idea can help. – Eyal Golan Jan 28 '12 at 22:17
3
@anon, my comment was intended to extract some useful information from OP. I think it succeeded. – Gerry Myerson Jan 28 '12 at 23:11
4
You don't need anything nearly so complicated as Newton - keep in mind that there are no time bounds on the construction. Just test all the numbers $t$ from 1 forward until you find one with $t^2\gt n$. This is easy to do in $O(\log n)$ space just by using binary. – Steven Stadnicki Jan 28 '12 at 23:36
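A sketch of that idea in Python (only for illustration; the point is that the counters t and t*t each fit in O(log n) bits, and no time bound is required):

```python
def floor_sqrt_in_small_space(n):
    """Find the largest t with t*t <= n by counting upward.
    Only t (and t*t) are stored, each of which fits in O(log n) bits;
    the running time is about sqrt(n) steps, but no time bound is needed."""
    t = 0
    while (t + 1) * (t + 1) <= n:
        t += 1
    return bin(t)[2:]            # binary representation of floor(sqrt(n))

print(floor_sqrt_in_small_space(1000))   # 11111 = 31, since 31^2 = 961 <= 1000 < 32^2
```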
http://mathoverflow.net/questions/87486/reference-request-simple-facts-about-vector-valued-sobolev-space | ## Reference request: Simple facts about vector-valued Sobolev space
Let $V,H$ be separable Hilbert spaces such that there are dense injections $V \hookrightarrow H \hookrightarrow V^*$. (For example, $H = L^2(\mathbb{R}^n)$, $V = H^1(\mathbb{R}^n)$, $V^* = H^{-1}(\mathbb{R}^n)$.) We can then define the vector-valued Sobolev space $W^{1,2}([0,1]; V, V^*)$ of functions $u \in L^2([0,1]; V)$ which have one weak derivative $u' \in L^2([0,1], V^*)$. Such spaces arise often in the study of PDE involving time.
I would like a reference for some simple facts about $W^{1,2}$. For example:
• Basic calculus: integration by parts, etc.
• The "Sobolev embedding" result $W^{1,2} \subset C([0,1]; H)$;
• The "product rule" $\frac{d}{dt} ||u(t)||_H^2 = (u'(t), u(t))_{V^*, V}$
• $C^\infty([0,1]; V)$ is dense in $W^{1,2}$.
These are pretty easy to prove, but they should be standard and I don't want to waste space in a paper with proofs.
Some of these results, in the special case where $V$ is Sobolev space, are in L. C. Evans, Partial Differential Equations, section 5.9, but I'd rather not cite special cases. Also, in current editions of this book, there's a small but significant gap in one of the proofs (it is addressed briefly in the latest errata). So I'd prefer something else.
Thanks!
-
I think this is a rather common situation in PDE's and analysis, because there are so many slight variants possible but it is often difficult to identify the right general formulation that covers them all. Whenever I encountered this, my solution was to write out careful statements and proofs of what I needed and put them all into an appendix of my paper. – Deane Yang Feb 4 2012 at 11:37
@Deane: That's what I have in my current draft, and it was a good exercise, but the appendix is half as long as the paper itself. – Nate Eldredge Feb 4 2012 at 15:43
Keep it unless a referee or editor makes you take it out. You won't regret it. – Deane Yang Feb 5 2012 at 19:08
## 3 Answers
J. Wloka "Partial differential equations", § 25 (p. 390 on, in my 1992 CUP edition) has an account of the space $W(0,T)=W_2^1(0,T)$ which is essentially the space $W^{1,2}([0,T];V,V^*)$.
-
This looks like just what I want. Thanks, and thanks to all for the other suggestions, which also look good. – Nate Eldredge Feb 6 2012 at 17:43
Herbert Amann's book on parabolic problems contains an excellent introduction.
-
If you read French, then this book is the place to look:
Brézis, H. Opérateurs maximaux monotones et semi-groupes de contractions dans les espaces de Hilbert. (French) North-Holland Mathematics Studies, No. 5. Notas de Matemática (50). North-Holland Publishing Co., Amsterdam-London; American Elsevier Publishing Co., Inc., New York, 1973. vi+183 pp.
Another source
Barbu, Viorel. Nonlinear differential equations of monotone types in Banach spaces. Springer Monographs in Mathematics. Springer, New York, 2010. x+272 pp. ISBN: 978-1-4419-5541-8
-
http://math.stackexchange.com/questions/1431/prove-that-this-function-is-bounded | # Prove that this function is bounded
This is an exercise from Problems from the Book by Andreescu and Dospinescu. When it was posted on AoPS a year ago I spent several hours trying to solve it, but to no avail, so I am hoping someone here can enlighten me.
Problem: Prove that the function $f : [0, 1) \to \mathbb{R}$ defined by
$\displaystyle f(x) = \log_2 (1 - x) + x + x^2 + x^4 + x^8 + ...$
is bounded.
A preliminary observation is that $f$ satisfies $f(x^2) = f(x) + \log_2 (1 + x) - x$. I played around with using this functional equation for awhile, but couldn't quite make it work.
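For what it's worth, a quick numerical sanity check of the boundedness (my own sketch; the series is truncated once the remaining terms are negligible):

```python
import math

def f(x, terms=60):
    # log_2(1 - x) + sum over k of x**(2**k); for x < 1 the terms x**(2**k)
    # die off doubly exponentially, so 60 of them is far more than enough here.
    s = math.log2(1.0 - x)
    for k in range(terms):
        e = 2 ** k
        if e * math.log(x) < -700:       # x**e has underflowed to 0 anyway
            break
        s += x ** e
    return s

for x in [0.9, 0.99, 0.999999, 1 - 1e-12]:
    print(x, f(x))   # the values stay in a fixed, modest range as x -> 1-
```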
-
1
if you differentiate your functional equation you get that $\lim_{x\to 1} f'(x)$ exists and is finite. doesn't that do it for you? – Eric O. Korman Aug 3 '10 at 0:46
1
Hmm. Maybe. If you wrote that up together with a proof that f' actually exists, I'll accept it. That seems too easy, somehow. – Qiaochu Yuan Aug 3 '10 at 1:11
A lot of solutions people claim are "from the book" are short and sweet :) – BlueRaja - Danny Pflughoeft Aug 3 '10 at 1:23
1
Plotting the function in Mathematica seems to indicate otherwise... – Mariano Suárez-Alvarez♦ Aug 3 '10 at 1:56
1
@Danny: it is only being claimed that the problems, not their solutions, are from the Book! The authors, by their own admission, don't know how to solve some of the exercises... – Qiaochu Yuan Aug 3 '10 at 2:02
## 3 Answers
OK, a second trick is needed (but it actually finishes the problem). It is nice and simple enough that it's probably what the authors intended by a "Book" solution.
Let $f(x) = x \log(2) - \log(1+x)$. We want to show that $S(x) = f(x) + f(x^2) + f(x^4) + \dots$ is bounded. Because $f(0)=f(1)=0$ and $f$ is differentiable, we can find a constant $A$ such that $|f(x)| \leq Ax(1-x) = Ax - Ax^2$. The sum of this bound over the powers $x^{2^k}$ is telescopic.
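To spell out the telescoping step (a detail added here, not in the original answer): $$\sum_{k=0}^{N} A\bigl(x^{2^k}-x^{2^{k+1}}\bigr)=A\bigl(x-x^{2^{N+1}}\bigr)\le Ax,$$ so $|S(x)|\le\sum_{k\ge 0}|f(x^{2^k})|\le A$ for every $x\in[0,1)$.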
Notice that the role of $\log(2)$ was to ensure that $f(1)=0$.
-
2
Very nice! (and extra characters to satisfy the silly 15 character limit) – Mariano Suárez-Alvarez♦ Aug 3 '10 at 19:21
– Jason S Aug 3 '10 at 22:25
2
g(x) = f(x)/(x(1-x)) is defined on (0,1) and extends to a continous function on [0,1]. The extension exists because the limits of g(x) needed at the endpoints are f'(0) and f'(1). So differentiability of f(x) at 0 and 1 implies continuity of g(x), and for A just take any upper bound on the values of |g(x)| in [0,1]. – T.. Aug 3 '10 at 22:39
Starting from (the natural logarithm of) $(1-x)^{-1} = (1+x)(1+x^2)(1+x^4) \dots$, it becomes clearer where the $\log(2)$ factor comes from.
One has to show that $\Sigma (x^{2^k} - C\log(1 + x^{2^k}))$ is a bounded sum of terms of one sign (with $C = 1/\log 2$ each term is nonpositive, since $\log_2(1+u)\ge u$ on $[0,1]$). The sum of the first $n$ terms approaches $n - Cn\log(2)$ as $x \to 1-$, so we need $C = 1/\log(2)$ if there is to be boundedness.
-
1
Nice observation. Can you finish the argument from here? – Qiaochu Yuan Aug 3 '10 at 3:07
Yes and no. I posted a completion of the proof (see other answer) but it requires an additional, independent observation. – T.. Aug 3 '10 at 19:04
If anyone's interested, here's another approach.
First, observe that $$\log{2} = \int_{2^r}^{2^{r+1}} \frac{1}{x}\;dx \le \sum_{k=2^r}^{2^{r+1}-1}\frac{1}{k} \le \int_{2^r-1}^{2^{r+1}-1} \frac{1}{x}\;dx = \log{2} + \log\left(1+\frac{1}{2^{r+1}-2}\right)$$ for $r\ge1$, so using $\log(1+x) \le x$ (for $x>-1$) and $2^{r+1} - 2 \ge 2^r$, we get $$\log{2} \le \sum_{k=2^r}^{2^{r+1}-1} \frac{1}{k} \le \log{2} + \frac{1}{2^r}$$ for all $r\ge0$ (check directly for $r=0$).
Thus for $x\in[0,1)$, we have $$\begin{align*} \lvert f(x)\log{2}\rvert = \lvert\sum_{r\ge0} x^{2^r}\log{2} - \sum_{n\ge1} \frac{x^n}{n}\rvert &= \lvert\sum_{r\ge0} x^{2^r}(\log{2} - \sum_{k=2^r}^{2^{r+1}-1}\frac{1}{k}) + \sum_{r\ge0}\sum_{k=2^r}^{2^{r+1}-1}\frac{x^{2^r} - x^k}{k} \rvert \\ &\le \sum_{r\ge0}\frac{x^{2^r}}{2^r} + \sum_{r\ge0}\sum_{k=2^r}^{2^{r+1}-1}\frac{x^{2^r} - x^k}{k} \\ &< 2 + \sum_{r\ge0}x^{2^r}(1-x)\sum_{k=2^r}^{2^{r+1}-1}\frac{1+x+\cdots+x^{k-2^r-1}}{k} \\ &\le 2 + (1-x)\sum_{r\ge0}x^{2^r}\sum_{k=2^r}^{2^{r+1}-1}\frac{k-2^r}{k} \\ &\le 2 + (1-x)\sum_{r\ge0}x^{2^r}(2^r\cdot 1) \\ &\le 2 + (1-x)(x + 2\sum_{r\ge1}\sum_{k=2^{r-1}+1}^{2^r}x^k) \\ &= 2 + x(1-x) + 2(1-x)\sum_{k\ge2} x^k \\ &= 2 + x + x^2, \end{align*}$$ so we're done.
-
http://mathoverflow.net/questions/31842/tensor-product-and-category-theory/31895 | ## Tensor product and category theory
Hello. I'm trying to understand the definition of the tensor product of two vector spaces. So far, I've read the one using free vector spaces and a quotient space (this one http://en.wikipedia.org/wiki/Tensor_product#Tensor_product_of_vector_spaces), and I think I understand it well. However, I want to understand the other definitions I can find, and it seems a very common way to define it is through the universal property (some category theory included, I suspect). Does anyone here know of a good treatment of this? I have no knowledge of category theory though, but would love to read some about it. I'm a second-year undergrad, so not too much of a high level would be nice.
-
1
The definition of $A \otimes B$ as a universal object in a category is treated in most graduate-level algebra books, e.g. Section IV.5 of Hungerford's Algebra. Here's a link: books.google.co.uk/… – Chris Phan Jul 14 2010 at 12:30
## 6 Answers
Just some definitions, in case you're unfamiliar with them: Let $\hat{V}$ denote the vector space of linear functions from a vector space $V$ to the scalar field. Remember, a multilinear map is one of the form $V \times V \times \cdots \times V \to W$ (with $n$ copies of $V$), where $W$ is another vector space, such that if we fix $n-1$ of the arguments, the function becomes a linear function from $V$ to $W$ in the argument not fixed. A multilinear form is one in which $W=K$, the scalar field (you can replace $K$ with $\mathbb{R}$ or $\mathbb{C}$ if you like). For example, the inner product on $\mathbb{R}^n$ is a bilinear form on $\mathbb{R}^n$, since if we fix one argument, it becomes linear in the other. If we view an $n \times n$ matrix as a conglomeration of $n$ columns, then the determinant is an $n$-form.
Then $\hat{V} \otimes \hat{V}$ corresponds to the set of bilinear forms, and in general, a tensor product of multiple copies of $\hat{V}$ corresponds to the set of $n$-linear forms (i.e. multilinear forms with $n$ arguments). That is, there is a concrete description of tensor products of the dual space with itself, and many books which do not wish to develop the notion of tensor product will use this in place of tensor products. That is, all they must do is define a certain kind of map, and then the tensor product is just the set of maps of that kind. Then how do we explain the tensor product $V \otimes V$ (or more generally $V \otimes U$, where $U$ is another vector space)? We could note that $V$ is canonically isomorphic to its double dual, i.e. the dual space of $\hat{V}$, and then view $V \otimes V$ as the set of bilinear forms on $\hat{V}$. But there is a nicer way, and this uses the universal property.
A bilinear map $V \times V \to W$ corresponds to a linear map $V \otimes V \to W$. If $f(-,-)$ denotes the bilinear map, and $x,y \in V$, then our linear map sends $x \otimes y$ to $f(x,y)$. You could try to think of the tensor product as pairs of vectors, but the tensor product contains elements which are not $x \otimes y$ for some $x,y \in V$. We do have that $x_1 \otimes y_1 + x_2 \otimes y_2$ maps to $f(x_1,y_1)+f(x_2,y_2)$. In more generality, if $W$ and $U$ are two other vector spaces, linear maps $U \otimes V \to W$ correspond to bilinear maps $U \times V \to W$. Then what is an element of $U \otimes V$? It is a thing you stick into a bilinear map. This is the key idea which helped me understand tensor products. I repeat, an element of a tensor product is simply a thing you stick into a bilinear map. In general, elements of some universal construction defined by maps going out of a certain object have some description as "things you stick into some kind of map (or a collection of multiple maps)."
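A finite-dimensional illustration in coordinates (a sketch assuming numpy; the bilinear map is given by a matrix $A$, and $x \otimes y$ is realized as the Kronecker product of coordinate vectors):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 3, 4
A = rng.normal(size=(m, n))        # a bilinear map f(x, y) = x^T A y on R^m x R^n

x = rng.normal(size=m)
y = rng.normal(size=n)

bilinear_value = x @ A @ y                        # f(x, y)
linear_on_tensor = A.reshape(-1) @ np.kron(x, y)  # the induced linear map on x (x) y
print(np.isclose(bilinear_value, linear_on_tensor))   # True: f corresponds to a
                                                       # linear functional on the
                                                       # tensor product
```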
-
Does (dual of V) $\otimes$ (dual of V) give all the bilinear forms on $V$ when $V$ is infinite-dimensional? Certainly if one starts wanting to put norms on such spaces (when using continuous duals), care is needed. – Yemon Choi Jul 16 2010 at 8:39
Right, I'm thinking about finite-dimensional spaces. In the infinite dimensional case, it still gives some of them. – David Corwin Jul 17 2010 at 21:30
You might like Brian Conrad's handouts for a sophomore differential geometry course. Especially relevant are Construction of tensor products and the two handouts after that one. They have some nice examples and a heavy emphasis on the universal property.
(I don't think this warranted more than a comment, but I can't post those yet.)
-
1
You should check Keith Conrad's handouts on the tensor product too. They are really good. math.uconn.edu/~kconrad/blurbs – Gonçalo Marques Jul 14 2010 at 13:17
The only reason I would say that Keith Conrad's exposition of tensor products might not be appropriate is because the reader seems to be interested in vector spaces and, as a second year undergraduate, may not know anything about modules yet. – Keenan Kidwell Jul 14 2010 at 15:50
1
Gonçalo, there are quite a few notes posted on that page, so it may be better to give links directly to the files: math.uconn.edu/~kconrad/blurbs/linmultialg/… and math.uconn.edu/~kconrad/blurbs/linmultialg/…. Concerning Keenan's comment, I suppose if the OP replaces "ring" with "field", "module" with "vector space", and ignores all examples that don't make sense with vector spaces (e.g., Q/Z), then my notes may make sense. But that could be asking too much. If Dedalus tries that experiment I'd be curious to know if it works. – KConrad Jul 14 2010 at 23:32
I'll try your notes Keith. I do however know about modules, so I think it'll be alright. – Dedalus Jul 16 2010 at 10:42
My view of the pedagogy, based on teaching this to second year undergraduates at Cambridge.
The tensor product of vector spaces is defined by generators and relations. Also generators and relations, as a way of defining anything, is a method depending on a universal property (to make much sense).
If you take these two parts one at a time, you have a chance of understanding what is happening. The generators and relations are just bilinearity spelled out. The remark about generators and relations as a mode of defining anything can be learned anywhere you like (e.g. group theory): the reason that there is a universal property is just "stuff", "abstract nonsense", "mathematical maturity" even.
I believe, quite strongly, that the eliding of the punctuation between the two sentences is a negative in teaching this material. (I really do not care if this spoils Mac Lane's or anyone else's view of category theory and its role: "universal property" is only a stepping stone there, not the ultimate goal.)
-
Pedagogically, do you find your two sentences should go in the order you wrote them? For myself, I understood it the other way around. My mental story went "Wouldn't it be nice if we could somehow turn bilinear maps into linear maps, so we could reuse all our theorems? (ie, the desired universal property) Now let's go build a vector space to make this happen (ie, the generators and relations)." But I haven't ever taught this, so I'm curious which way around you found better. – Neel Krishnaswami Jul 14 2010 at 16:57
When I learned it all, the universal property came first: i.e. I learned the orthodox post-Bourbaki version of tensor products. But I only really felt I had understood it properly after I had taught it. I was particularly struck by a student's remark that generators and relations was "more sensible" than what they were taught in lectures (I wasn't lecturing, but picking up the pieces.) Now, you do need both sides. The real test came when I tried using tensor products of fields in Galois theory lectures ... another story. – Charles Matthews Jul 14 2010 at 17:56
I think this is definitely a “different strokes for different folks” issue. I was first taught it just in terms of the generators and relations (as a second-year undergraduate at Cambridge!) and while I was able to use it then, I didn't begin to grok it until someone pointed out the universal property to me, transforming it from something fiddly and ad hoc into something natural and tractable. I appreciate that some people may find it clearer to learn with generators and relations emphasised, but not all of us did! – Peter LeFanu Lumsdaine Jul 14 2010 at 18:26
I didn't understand what the universal property was for, and so didn't appreciate the construction, until the monoidal closure of vector spaces was pointed out to me. That's when it all fell into place -- I could now see why we wanted this particular universal property (so we could curry and uncurry linear maps a la functional programming). Then each piece of the construction with generators and relations made sense as the minimal construction to meet this requirement. – Neel Krishnaswami Jul 14 2010 at 18:42
@Peter: Yes, different types of students are going to be worried by the different questions (a) how do you manipulate this gadget, and (b) how do you know it isn't 0, anyway? Because you need to able to answer both to do any serious work, it is no good telling just half the story. But my point is that the "economy" of saying you can teach both at once is a false one. – Charles Matthews Jul 14 2010 at 19:48
I'm pretty much in your spot. I think part of the way there is learning to think with universal properties. I recently found a really good book (Algebra: Chapter 0, link below) on 'basic' algebra using category theory to unify things. All the basic stuff like products, disjoint unions, surjections and injections is treated rigorously and in great generality through universal properties. If you already know your group and set theory, reading through the first few chapters can be done quickly, and it should get you in the right mode of thought. I'm doing this myself right now, and so far I recommend you do the same.
http://www.amazon.com/Algebra-Chapter-Graduate-Studies-Mathematics/dp/0821847813/ref=sr_1_1?ie=UTF8&s=books&qid=1279112196&sr=8-1
EDIT: A nice application of the tensor product can be found in the first few pages of Bott and Tu's 'Differential Forms in Algebraic Topology': if $\Omega^*$ is the algebra generated by the formal symbols $dx_j$, $j=1,\dots,n$, under the relations $dx_j^2=0$ and $dx_i dx_j=-dx_j dx_i$, then $\Omega^*(U)=C^\infty(U)\otimes\Omega^*$ is the algebra of differential forms on the open set $U$ (under the wedge product). I'm not sure if that's how it's primarily used.
-
Fixed the LaTeX for you – Yemon Choi Jul 15 2010 at 7:55
Thank you Choi! – Eivind Dahl Jul 15 2010 at 7:58
A fully categorical approach that emphasizes the universal properties of the tensor product, as well as a great deal of multilinear algebra, can be found in T. S. Blyth's Module Theory: An Approach to Linear Algebra. There's also a discussion in Steven Roman's Advanced Linear Algebra, but the presentation in Blyth's book isn't as dry and formal.
By the way, if anyone has a serious interest in algebra, Blyth's books are some of the great unsung textbooks in the subject. They really should be better known and used in the U.S. than they are.
-
Thank you all. Your documents have been most helpful. I also saw some papers on the tensor product of modules; in particular, http://www.math.ucsb.edu/~mckernan/Teaching/05-06/Winter/220B/l_7.pdf was helpful, and http://www.dpmms.cam.ac.uk/~wtg10/tensors3.html gave some good info too.
Now I'm considering TeXing a file where I try to motivate why one defines the tensor product in the first place. I think that might help me learn the definition even more. I really like the definition in some strange way, even though I find it kind of hard. I want to learn.
So once again, Thank you.
-
Everyone finds the definition hard the first time. A reason why the tensor product is defined is to base extend a module over one ring to become a module over a second ring, subsuming at the same time two classical operations on polynomials: polynomials in Z[x] can be viewed in Q[x] and can be reduced mod p to be in (Z/p)[x]. Each of these is useful for different irreducibility tests, for instance, and the passages Z[x] --> Q[x] and Z[x] --> (Z/p)[x] are both examples of tensor products. See Section 6 of the first link I made in a comment to Dylan's answer. – KConrad Jul 16 2010 at 14:49
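To make the base-change examples in the last comment explicit (standard identifications, added for reference):
$$\mathbf{Q}[x]\;\cong\;\mathbf{Q}\otimes_{\mathbf{Z}}\mathbf{Z}[x],\qquad(\mathbf{Z}/p)[x]\;\cong\;(\mathbf{Z}/p)\otimes_{\mathbf{Z}}\mathbf{Z}[x],$$
so viewing an integer polynomial rationally, or reducing it mod $p$, are both instances of tensoring with a new coefficient ring.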
http://mathoverflow.net/questions/33145/recovering-joint-distribution-from-marginals | ## Recovering joint distribution from marginals
Suppose we have a Markov random field $P(X_1,\ldots,X_n)$ on a graph $G$. Suppose we know $P(X_i,X_j)$ for every edge $(i,j)$. Can we recover $P(X_1,\ldots,X_n)$?
If $G$ is a tree, then there's a formula for the joint (the product of edge marginals divided by a product of node marginals). Is there a nice formula that works for some non-tree graphs?
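For reference, the tree formula alluded to above is the standard one (not spelled out in the original post): writing $\deg(i)$ for the degree of node $i$ in the tree,
$$P(x_1,\ldots,x_n)\;=\;\frac{\prod_{(i,j)\in E}P(x_i,x_j)}{\prod_{i\in V}P(x_i)^{\deg(i)-1}}.$$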
Edit: this is essentially equivalent to the following problem: given an exponential family, how do you write the joint in terms of mean parameters? There's a closed-form solution when the sufficient statistics are two-variable functions defined on $(X_i,X_j)$ pairs where $(i,j)$ are edges of some tree; is there a closed-form solution for other graphs?
Motivation: given an approximate marginalization method, can you fit parameters of a distribution by maximizing joint likelihood of the data under the model "implied" by this marginalization method?
-
Could you clarify what you mean by 'Markov with respect to graph G'? – Suresh Venkat Jul 23 2010 at 22:48
## 3 Answers
No, a counter-example can be constructed as follows:
Let G be the complete graph on 3 vertices, where each vertex is a binary random variable. Let the joint distribution for each pair of vertices be independent Bernoulli with probability 1/2.
There are multiple joint distributions which satisfy such edge marginals. For example:
• Independent Bernoulli for each vertex, with probability 1/2 (complete mutual independence)
• Each variable could be the XOR (sum modulo 2) of the other two variables (i.e. functionally dependent)
• The probability of each outcome could be 3/16 if 3 or 1 vertices are 1's, and 1/16 if 2 or 0 are 1's.
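A quick computational check of these three constructions (added for illustration; a minimal sketch using only the Python standard library, not part of the original answer):

```python
from itertools import product

def pairwise_marginal(p, i, j):
    """Marginal P(X_i = a, X_j = b) of a joint distribution p over {0,1}^3."""
    m = {}
    for x in product((0, 1), repeat=3):
        key = (x[i], x[j])
        m[key] = m.get(key, 0.0) + p[x]
    return m

# Three joint distributions over {0,1}^3 from the answer above.
independent = {x: 1 / 8 for x in product((0, 1), repeat=3)}   # fully independent
xor = {x: (1 / 4 if sum(x) % 2 == 0 else 0.0)                 # x3 = x1 XOR x2 (even parity support)
       for x in product((0, 1), repeat=3)}
skewed = {x: (3 / 16 if sum(x) % 2 == 1 else 1 / 16)          # 3/16 on odd parity, 1/16 on even
          for x in product((0, 1), repeat=3)}

for name, p in [("independent", independent), ("xor", xor), ("skewed", skewed)]:
    for i, j in [(0, 1), (1, 2), (0, 2)]:
        m = pairwise_marginal(p, i, j)
        # every pairwise marginal is uniform on {0,1}^2: each of the 4 entries equals 1/4
        assert all(abs(v - 0.25) < 1e-12 for v in m.values()), (name, i, j)
print("all three joints have the same (uniform) pairwise marginals")
```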
So what do you require to be able to uniquely determine the joint distribution? If the graph is decomposable (aka chordal or triangulated), you require the joint distribution for each clique (maximal complete subset) of the graph. The joint density is then:
$p(X) = \frac{\prod_{\text{cliques }C} p(X_C)}{\prod_{\text{separators }S} p(X_S)}$
(see Dawid and Lauritzen (1993), Lemma 2.5)
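For instance (an illustrative special case, added here, not in the original answer): for the chordal graph on $\{1,2,3,4\}$ with cliques $\{1,2,3\}$ and $\{2,3,4\}$ and separator $\{2,3\}$, the formula reads
$$p(x_1,x_2,x_3,x_4)\;=\;\frac{p(x_1,x_2,x_3)\,p(x_2,x_3,x_4)}{p(x_2,x_3)}.$$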
If the graph is not decomposable, then the problem is a bit trickier: the only result I know of is Lauritzen (1996), Lemma 3.14. Basically, given the clique marginal distributions, uniqueness is determined when the sample space is finite, and each clique marginal density is the limit of a sequence of positive densities. I suspect this result could be made stronger in some way, but I am not aware of any efforts to do so.
-
Right, a distribution over 3 binary variables has 7 degrees of freedom, while fixing edge/node marginals only constrains 6. So I'm looking for a formula for a joint, any joint, that's consistent with given marginals, in terms of those marginals. One choice might be to take the highest-entropy distribution consistent with the marginals, but even for a 3-variable distribution the formula for such a joint in terms of the marginal parameters is quite large (mathurl.com/324o4a3). Testing whether a given set of marginals is consistent can take exponential time, so a short formula must not provide an easy way to check that. – Yaroslav Bulatov Jul 29 2010 at 18:15
Yeah, this stuff gets hard very quickly, even for seemingly trivial problems, hence the development of variational methods. If you're interested in that stuff, the standard reference is the paper by Wainwright and Jordan: nowpublishers.com/… – simon Jul 29 2010 at 19:13
The method for trees should generalize to graphs of bounded treewidth, where the formula might get exponentially long in the treewidth itself. Also, do you care how (algorithmically) complicated the formula is? It's probably possible to write the general formula as some kind of sum over spanning trees of the graph.
-
I can see how it generalizes if you are given distributions over maximal cliques and separators, but how do you go from edge marginals to distributions over maximal cliques? – Yaroslav Bulatov Jul 24 2010 at 3:41
I was able to derive some closed form expressions using Mathematica for a binary loop Ising model, but they are unwieldy even for 3-loop and get complicated very fast with increasing loop size.
The basic idea is to use the fact that the gradient of $\log Z$ gives the marginal probabilities, so you invert that mapping algebraically, plug the result into the original expression for the joint, and simplify.
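The fact being used here, stated for a general exponential family (standard, added for reference): if $p_\theta(x)=\exp\bigl(\sum_a\theta_a\phi_a(x)-\log Z(\theta)\bigr)$, then
$$\frac{\partial\log Z(\theta)}{\partial\theta_a}=\mathbb{E}_\theta[\phi_a(X)],$$
so the mean parameters (here, the pairwise marginals) are derivatives of $\log Z$, and it is the algebraic inversion of this map that the computations below carry out.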
For example, here's the probability of the 3-loop Ising model taking configuration $\{x_1,x_2,x_3\}$, where $m_1,m_2,m_3$ are the probabilities $P(X_1=X_2), P(X_2=X_3), P(X_3=X_1)$ respectively, after simplification (Mathematica's FullSimplify): http://mathurl.com/324o4a3
For uniform potentials, it doesn't get a lot nicer. For instance, take the Ising model with uniform potentials on a loop of size $n$; then the probability of all spins being $+1$ can be written in terms of the marginal $m = P(x_1=x_2)$ as
$\exp(nj)/Z$, where $Z=\lambda_1^n+\lambda_2^n$, $\lambda_1=e^j+e^{-j}$, $\lambda_2=e^j-e^{-j}$, and $j$ is the solution of
$$\frac{1}{Z}\cdot\frac{\lambda_1^n(e^{2j}-1)+\lambda_2^n(e^{2j}+1)}{2\lambda_1\lambda_2}=m.$$
Mathematica can solve the above equation for various values of n, but the expression becomes large very fast.
-
http://physics.stackexchange.com/questions/tagged/harmonic-oscillator?sort=unanswered&pagesize=30 | # Tagged Questions
The term "harmonic oscillator" is used to describe any system with a "linear" restoring force that tends to return the system to an equilibrium state. There is both a classical harmonic oscillator and a quantum harmonic oscillator. Both are used as toy problems that describe many physical systems.
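For reference (a standard statement, not part of the original tag excerpt), the classical equation of motion and its general solution are
$$m\ddot{x}=-kx,\qquad x(t)=A\cos(\omega t+\varphi),\quad\omega=\sqrt{k/m},$$
and the quantum version replaces this with the Hamiltonian $H=\frac{p^2}{2m}+\frac{1}{2}m\omega^2x^2$.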
2 answers
48 views
### Does spatial coupling prohibit resonances due to an external source field?
The harmonic oscillator coupled to a sinusoidal external source $$\tfrac{\partial^2 x(t)}{\partial t^2}+\omega_0^2 x(t)=F_0\sin(\omega_\text{ext}\ t),$$ has the solution $x(t)=x(0)\cos(\omega_0 t)+C$ ...
1 answer
51 views
### Pendulum Wave Period
Recently I've seen various videos showing the pendulum wave effect. All of the videos which I have found have a pattern which repeats every $60\mathrm{s}$. I am trying to work out the relationship ...
1 answer
38 views
### Metronome synchronisation applied to swings
The movement of several metronomes can be synchronised when a movable floor is utilised which couples the movement of the different metronomes. Is it possible to apply this sort of synchronisation to ...
1 answer
179 views
### Simulating quantum network of harmonic oscillators
Let's say that I have a system of $n$ particles $p_1,\ldots,p_n\in\mathbb{R}^3$ (where $n$ here is on the order of 10,000). Furthermore, suppose we have a graph $G=(V,E)$ describing some network, ...
1 answer
69 views
### Standing Waves: finding the number of antinodes
A string with a fixed frequency vibrator at one end forms a standing wave with 4 antinodes when under tension T1. When the tension is slowly increased, the standing wave disappears until tension T2 is ...
1 answer
74 views
### Simple harmonic oscillator system and changes in its total energy
Suppose I have a body of mass $M$ connected to a spring (which is connected to a vertical wall) with a stiffness coefficient of $k$ on some frictionless surface. The body oscillates from point $C$ to ...
0 answers
45 views
### Effective mass in Spring-with-mass/mass system
Suppose you have a particle of mass $m$ fixed to a spring of mass $m_0$ that, in turn, is fixed to some wall. I'm trying to calculate the effective mass $m'$ that appears in the law of motion of the ...
0 answers
1k views
### Energy Levels of 3D Isotropic Harmonic Oscillator (Nuclear Shell Model)
One simple way of detailing the very basic structure of the nuclear shell model involves placing the nucleons in a 3D isotropic oscillator. It's easy to show that the energy eigenvalues are $E = $ ...
0 answers
60 views
### Relativistic generalization of Quantum Harmonic Oscillator
I am trying to find a relativistic description of a quantum harmonic oscillator. For a classical relativistic oscillator, mass is a function of coordinates (http://arxiv.org/abs/1209.2876). ...
0 answers
95 views
### Zero point fluctuation of an harmonic oscillator
In a paper, I ran into the following definition of the zero point fluctuation of our favorite toy, the harmonic oscillator: $$x_{ZPF} = \sqrt{\frac{\hbar}{2m\Omega}}$$ where m is its mass and ...
0 answers
310 views
### Amplitude of a Forced Harmonic Oscillator
For an assignment in one of my maths units at uni, I've been asked to derive and solve the differential equation of motion for a forced harmonic oscillator, with the forcing function having the form ...
0 answers
71 views
### How can I model a polyatomic molecule as a system of coupled oscillators?
(Classical Mechanics) Let's say I have a polyatomic molecule; what is the best way of finding the equations of oscillation if the atoms are bound by a torsion spring?
0 answers
502 views
### Damping and stiffness constants of water
I'm working on a simulation of water drops falling into a pool. I'm specifically interested in the waves generated by the impact of the drops. In order to calculate the vertical motion of the waves, I ...
0 answers
36 views
### Compound pendulum clarification?
I read in a book the following about the compound pendulum and small displacements: there are only two points for which the time period is minimum; there are at most 4 points for which the time ...
0 answers
57 views
### Quantum harmonic oscillator. Finding operators
Problem: I'm trying to verify that $p_H(t)$ and $x_H(t)$ satisfy the following equations (by solving the Heisenberg equation): $x_H(t)=x_H(0)\cos(\omega t)+(1/m\omega)\,p_H(0)\sin(\omega t)$ ...
0 answers
72 views
### How does an oscillating particle in a non-inertial reference frame appear?
The general question is: given an oscillating particle in a non-inertial reference frame, how would it appear from outside the non-inertial reference frame? How would an observer inside that ...
0 answers
46 views
### Finding an efficient strategy for walking
Let's say you are already walking at a maximally efficient combination of pace and stride (or $\omega$ and $X_0$ I guess) but you need to reach your destination faster. Should you increase/decrease ...
http://mathoverflow.net/questions/74552/a-formal-definition-of-scaling-limits/74590 | ## A formal definition of Scaling Limits?
I'm looking for a formal definition of scaling limit in a rigorous mathematical sense; also, I would appreciate a good translation to Spanish, if somebody knows one. A good bibliography would be helpful.
-
Can you give us some context? Are you interested in say, Conformal Invariance? – Alex R. Sep 5 2011 at 4:52
Yes, I'm interested in Conformal Invariance; I'm studying the papers of Werner, Schramm, Lawler, ... Intuitively I understand what a scaling limit is, but I would really like a more formal definition. My best shot is this: for a process which takes values in a lattice $L\subseteq \mathbb{Z}^d$, the limit of the process on $\delta L$ as $\delta$ tends to zero. Does it need to be a graph in the reals? – Murphy Sep 5 2011 at 14:03
You might also want a definition for "processes" which have a lattice as a domain instead of the set where it takes its values. This is the situation when approximating a quantum field theory using lattice theories. The best reference for this is probably section 3 of the Seminaire Bourbaki by Frohlich and Spencer: numdam.org/numdam-bin/… – Abdelmalek Abdesselam Sep 5 2011 at 19:38
## 4 Answers
This is the idea. Suppose that you have a family or sequence of structures of growing complexity (long random walks, realizations of a random field on a large piece of a lattice or in a large continuous domain, large random trees, etc.). You want to understand the behavior of the large structures in your family. Often you want to say that your large random object is similar to a simpler object that you can describe precisely. The random walk consisting of 1000 steps is quite different from the "same" random walk consisting of 100000 steps, but you still want to find similarities between them, so it makes sense to normalize or rescale your objects appropriately. If you manage to find the right rescaling (it is given by shrinking the time by $n$ and space by $n^{1/2}$ for a standard simple symmetric random walk), then you might discover that the thus rescaled (and appropriately embedded into the space of continuous functions or the Skorokhod space) random walk converges in distribution to the Wiener process.
So, scaling limits provide approximative descriptions of what your objects look like when "you look at them from a large distance" or "zoom out".
At a more formal level, suppose $\xi_n$ is a sequence of random objects in some space $X$. Suppose $\phi_n$ is a (carefully chosen) sequence of scaling transformations in $X$. It is hard to say precisely what a scaling transformation is; often it is a linear map depending on $n$ with coefficients decaying in $n$. Often, a (time-)reparametrization of the random objects involved is a part of $\phi_n$. A scaling limit is the distributional limit of $\phi_n(\xi_n)$.
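In this notation, the random-walk example from the first paragraph reads as follows (a standard instance, added for illustration): take $\xi_n=(S_k)_{k\le n}$, a simple symmetric random walk, and let $\phi_n$ compress time by $n$ and space by $n^{1/2}$, i.e.
$$\phi_n(\xi_n)(t)=\frac{S_{\lfloor nt\rfloor}}{\sqrt{n}},\quad t\in[0,1];\qquad\phi_n(\xi_n)\xrightarrow{\ d\ }(B_t)_{t\in[0,1]},$$
the distributional limit being Brownian motion (Donsker's theorem).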
Some more comments:
1. One point of view is understanding scaling limits as limiting points of the renormalization group.
2. Papers by P. Major from around 1980 on self-similarity and renormalization are useful in understanding the concept.
3. I will take this chance to advertise my own paper on scaling limits for random trees, where I describe what large random trees look like if drawn on the plane and looked at from a large distance. It appeared this year in Markov Processes and Related Fields and is also available at http://arxiv.org/abs/0909.2283 (the construction and scaling used are different from Aldous's continuum trees, and there are strong connections to superprocesses).
-
I think the main issue you'll encounter with scaling limits and conformal invariance will be that it is a very new subject. Many of the papers will be very high level or still in preprint with details missing.
Here's a list of references that helped me learn the subject. I'll try to comment on some of them later on when time permits. I would start with "Scaling Limits and SLE" by Greg Lawler. This set of notes is in the context of self-avoiding walks and how they connect to SLE, and you'll find it quickly dives into scale invariance and conformal invariance.
If you are further interested in proofs, I would consult Lawler's "Conformally Invariant Processes in the Plane." This book gives very rigorous formulations of SLE and tackles a myriad of technical difficulties.
Next I would look at "Toward conformal invariance of 2D lattice models" by Smirnov. Here you'll be introduced to the notion of the duality between holomorphic martingales and scale invariance. In particular, you need to understand how discrete complex analysis connects with conformal invariance. I would hold out for proofs of many of the results for now.
Now you'll probably want to look at some simpler examples. I would first try to understand the proof of Cardy's Formula. Geoffrey Grimmett has an excellent set of lecture notes, "Probability on Graphs", which should be available off Grimmett's website (I can't seem to access it right now). I would also consult the original paper by Smirnov, "Critical percolation in the plane". I'll add here that the proof of Cardy's formula doesn't need the full machinery of holomorphic martingales because of the really nice symmetries Smirnov observed. What IS important though is how the Riemann–Hilbert boundary conditions are set up.
One of the big issues with discrete complex analysis is how to rigorously define discrete holomorphic functions on graphs. If you want the gory details, "Discrete complex analysis on isoradial graphs" by Chelkak and Smirnov is a good place to look.
-
I think the notion of scaling limit is really more a group of ideas than a single definition, since there are different types of objects being studied and hence different notions of convergence. Typically, though, one considers convergence, as a spatial parameter $\delta\rightarrow0$, of some (rescaled) sequence of probability measures on "discrete" objects to a probability measure on some "continuum" object. Part of the trouble is that it can be tricky to figure out what the right "continuum" object is and further how to put a probability measure on it. A nice discussion of scaling limits and conformal invariance can be found in these lecture notes by Lawler.
The fundamental example of such a scaling limit is the convergence of random walks to Brownian motion; one source is chapter 5 of the book of Mörters and Peres.
An interesting scaling limit with a different flavor than the SLE ones described in other answers is Aldous's Continuum Random Tree. The following is from the linked page:
Take a critical Galton-Watson branching process where the offspring law has finite non-zero variance, and condition on total population until extinction being $n$. This gives a random tree. Rescale edge-lengths to have length $n^{-1/2}$. Put mass $1/n$ on each vertex. In a certain sense that can be formalized, the $n \to \infty$ weak limit of these random trees is the Brownian CRT (up to a scaling factor).
This is a beautiful object on its own but also a key tool in several other scaling limits.
The CRT is used in the study of random planar maps, see these lecture notes of Le Gall and Miermont. Recent developments not covered in these notes are their proofs of the convergence of certain families of random planar maps to the so-called Brownian map (see here and here).
The CRT is also part of the construction of the scaling limit of connected components of Erdős–Rényi random graphs in the scaling window.
-
I was listening to Lawler give a series of talks this summer, and his sense of scaling limit is the same as the one often used in defining Brownian motion as a scaling limit of simple random walks, in which both time and space have to be scaled so that the limit process doesn't become trapped at the origin. The difference between Brownian motion and SLE is that the random walks are no longer simple. The conformal invariance in SLE comes from the properties of the non-simple random walks, if memory serves.
So if one wanted a rigorous definition of a scaling limit, I would look for a text on Brownian motion (almost any measure-theoretic probability book will do this construction in the continuous-time chapter) or a more specialized book on continuous-time processes. There should be plenty of these in almost any language you'd want.
-